Following the discovery of an intermittent but serious network issue, Gandi's teams have determined that rolling maintenance will be necessary.

While most of the affected systems will simply be migrated with no interruption in service, we regret that some will almost certainly require a restart. We will endeavor to make these interruptions as short as possible and to perform them only when absolutely necessary.

We are starting with dc0 (Paris) this week. We will proceed on to dc2 (Luxembourg) on Monday, June 9. 

The issue has not been detected on dc1 (Baltimore) at the moment, but we will apply the fix there if it becomes necessary. We apologize for any inconvenience this may cause.


Here's the incident history:

 

* 08:25 UTC 12 hosting nodes become inaccessible due to a switch failure; ~200 virtual machines (VMs) are unreachable.

* 08:40 UTC Switches are recovered and VMs are once again accessible. Investigation does not reveal the cause of the incident.

* 12:01 UTC A second incident occurs, affecting 8 nodes and ~180 VMs. 

* 12:09 UTC Switches are recovered and VMs are made available again. Additional data collection measures are put in place to help determine the cause.

* 14:56 UTC A third incident occurs, affecting 10 nodes and 321 VMs. 

* 15:10 UTC Nodes and VMs are available again. This time, extensive forensic data is collected; we expect it to reveal the root cause so that a permanent fix can be implemented as soon as possible.

We do apologise for the inconvenience this issue may have caused. 



Maintenance will be performed on the webmail databases.

It will take place on 27 May 2014 between 2 AM and 3 AM CEST (Paris, France).

We apologize for any inconvenience this may cause.

 

UPDATE: The maintenance lasted 10 minutes, from 2:00 AM to 2:10 AM CEST.


Since 11:00 AM we have been experiencing storage-related issues on the GandiMail service.
To fix this problem, we will perform emergency maintenance.
Email access may be disrupted until the maintenance is complete.
We apologise for the inconvenience.

UPDATE:
- 10:26 UTC: First filer maintenance completed successfully.
- 10:40 UTC: Problem detected with the second filer.
- 11:00 UTC: Second filer came back online after a reboot and service checks.
- 11:37 UTC: End of maintenance.
 

As previously reported, an incident on our database infrastructure caused misbehavior on our GandiMail and DNS provisioning services.

Impact:

  • Difficulties accessing GandiMail mailboxes caused by random authentication failures
  • Delayed creation of new GandiMail mailboxes
  • DNS provisioning not effective until 10 AM PDT

Timeline:

  • Wednesday, 8 May, 10 PM PDT: A bug in our infrastructure caused a database slowdown, which made some operations stall.
  • Thursday, 12 PM PDT: The problem was acknowledged; acknowledgment had been delayed by human error.
  • 1am PDT: GandiMail performance issues due to random login failures
  • 10am PDT: DNS provisioning reestablished

Full functionality of GandiMail is expected to be restored by 12:30am PDT.


We experienced an outage on the Gandi mail platform this morning. The outage was due to human error, rather than any failure of equipment or software. 

Here is the timeline (in UTC):

14:57 - A human error introduced a problem for customers connecting to Gandi's email infrastructure.

15:00 - Notification of the outage posted to Gandi's web site

15:06 - Physical connectivity restored. One machine unresponsive. 

15:23 - Full connectivity and machine function restored. 

In summary, some 15,975 mailboxes were not accessible for checking received messages or sending emails from 14:57 to 15:23 UTC. No data was lost. 

We apologize for any inconvenience this incident may have caused. 


Hi,

 

We are experiencing problems reaching the .eu registry servers. As a result, the domain availability search service for .eu is unavailable. Our teams are working on the issue.

 

Thanks for your patience.


Some physical machines hosting IaaS VMs are unreachable due to an issue we are currently analyzing.

We are working to fix the issue and will restart the affected VMs on other physical machines as soon as possible.

Please do not perform any operations on your VM while the emergency maintenance is in progress.

Once the maintenance is complete, if your server is not responding correctly, please contact our hosting support team by email using the 'blocked server' option.

 

We apologize for any inconvenience this issue may cause.

 

UPDATE 12:20: The situation is back to normal; we are still analyzing the metrics and logs.


Maintenance will be performed on our hosting infrastructure in Baltimore (USA) and Bissen (Luxembourg).

The following services will be interrupted for a few minutes starting at 2014-04-29 05:00 UTC (2014-04-28 10 PM PDT):

  • New SFTP/GIT sessions
  • Reverse resolvers for customer IPs
  • Operations on servers and Simple Hosting

Please accept our apologies for the inconvenience.
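
As a quick sanity check on the announced start time, here is a minimal sketch (not part of the original notice) that converts 2014-04-29 05:00 UTC to US Pacific time with Python's standard zoneinfo module; the zone names are standard IANA identifiers, and the comments show the expected output.

# Minimal sketch: convert the announced maintenance start time from UTC
# to US Pacific time to confirm the local-time equivalent.
from datetime import datetime
from zoneinfo import ZoneInfo

start_utc = datetime(2014, 4, 29, 5, 0, tzinfo=ZoneInfo("UTC"))
start_pacific = start_utc.astimezone(ZoneInfo("America/Los_Angeles"))

print(start_utc.isoformat())      # 2014-04-29T05:00:00+00:00
print(start_pacific.isoformat())  # 2014-04-28T22:00:00-07:00 (10 PM PDT, the previous evening)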

