Two of our links between FR and US are down. As a result, performance is degraded, with higher latency than usual.

Our team is working on it.

Update: The issue has been resolved.

Gandi has scheduled network maintenance on Tuesday, 10 June 2014, between 22:00 and 23:00 UTC.
The web redirection service may experience a few minutes of downtime.
We apologize for any inconvenience this may cause.
If you require further information or assistance, please do not hesitate to contact the Support Team.

Gandi Team
Update: maintenance completed at 22:24 UTC. No impact.
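For short windows like this one, clients of the redirection service can simply retry failed requests until the maintenance finishes. Below is a minimal sketch of retry-with-backoff logic; `fetch` is a hypothetical callable standing in for whatever HTTP request the client makes, not part of any Gandi API:

```python
import time

def retry_with_backoff(fetch, attempts=5, base_delay=1.0):
    """Call fetch(), retrying on failure with exponential backoff.

    fetch is any callable that raises on a transient failure
    (e.g. a connection error during a maintenance window).
    """
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries, surface the error
            time.sleep(base_delay * (2 ** attempt))
```

The exponential delay (1 s, 2 s, 4 s, ...) keeps retrying clients from hammering the service the moment it comes back.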

We are currently experiencing a routing issue causing a part of our network to be unreachable from certain locations.

Our technical team is analyzing the problem in order to correct it as quickly as possible.

We will provide updates as the situation evolves.


UPDATE 05:15 : The deployment of a new network configuration disrupted our network. We are sorry for any inconvenience this issue may have caused.

Gandi has scheduled network maintenance tomorrow, Wednesday, 26 March 2014, between 22:00 and 23:59 UTC.

You may see some packet loss during the maintenance window.

We apologize for any inconvenience this may cause.

If you require further information or assistance, please do not hesitate to contact the Support Team.


Gandi Team

Our platform is the target of an ongoing DDoS. Our team is currently mitigating the attack.


13:12 (CET) : a fairly large DDoS is impacting our network.

13:16 (CET) : an initial mitigation has been activated by our network team.

A subset of PaaS instances is still unreachable.

13:34 (CET) : we are shutting down some peers.

13:38 (CET) : the situation is now more stable. Some PaaS instances are nevertheless still unreachable.

14:07 (CET) : PaaS instances are now reachable.

14:45 (CET) : we have re-enabled our peering. The situation is now back to normal.

Our infrastructure has experienced several noticeable slowdowns today, 7 February 2014, due to an ongoing DDoS. Our teams are working to mitigate the attacks.

Updates will be provided here as the situation evolves.

We experienced a hardware fault on routing equipment on the Simple Hosting platform.
Below is a chronology of the various events:
- 20:06 UTC : CPU load on the equipment shows a significant increase.
- 20:06 UTC : Equipment is running at 100% CPU for no apparent reason and fails to respond to commands.
- 20:08 UTC : We made the decision to migrate to secondary equipment.
- 20:08 UTC : The secondary equipment exhibited the same symptoms as the primary, so traffic was not transferred.
- 20:09 UTC : Debugging underway to ascertain the cause of the problem.
- 20:26 UTC : Migration to the now-stabilised secondary equipment.
- 20:27 UTC : Service returned to nominal operation.
- 22:42 UTC : Following this incident, there was a secondary effect on DNS resolution: Simple Hosting instances had been failing to resolve DNS since 20:06 UTC. The problem is now resolved.
- The network equipment used for this service's gateways is visibly showing signs of weakness. An in-depth analysis of the anomaly and the behaviour of the primary unit is underway (a memory fault is suspected). We are currently running on the secondary gateway.
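Instance owners who want to confirm that resolution has recovered can test it directly from their instance. A minimal sketch using only the Python standard library; the hostnames passed in are whatever the instance actually needs to reach, not anything specific to this incident:

```python
import socket

def can_resolve(hostname):
    """Return True if the local resolver can resolve hostname to an address."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False
```

A quick check such as `can_resolve("localhost")` exercises the local resolver path; testing a real external hostname additionally exercises upstream DNS.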
