Clever Cloud Status

Incidents

Full history of incidents.


December 2025

Fixed · Infrastructure · Global

A hypervisor has crashed and remains offline. Our team is actively investigating and working on recovery. Addons hosted on this hypervisor are currently unavailable; apps have been redeployed.

November 2025

Fixed · Services Logs · Global

The logs stack is currently unavailable; we are looking into it. The log viewer in the console, clever-tools (CLI), and the API are impacted.

Fixed · API · Global

The central Clever Cloud API has stopped responding. The issue appears to be with database access.

We are investigating.

Fixed · Infrastructure · Global

A hypervisor in the Paris region is unreachable; we are investigating.

Fixed · Reverse Proxies · Global

Public load balancers in the Paris region are experiencing increased TLS handshake times since 08:23 UTC, resulting in:

  • TLS handshake timeouts for some HTTPS queries
  • Extended HTTPS request processing times due to longer TLS handshakes

We are investigating the root cause and mitigation options.
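If you want to check whether your own HTTPS traffic is affected, curl's `-w` timing variables expose the handshake duration directly. A minimal sketch (assuming `curl` is installed; the URL below is a placeholder for your own domain):

```shell
# curl's -w format prints per-phase timings; time_appconnect marks the end of
# the TLS handshake, so a slow handshake shows up as a large "tls" value.
FMT='dns=%{time_namelookup} tcp=%{time_connect} tls=%{time_appconnect} total=%{time_total}\n'

# Against your own domain (placeholder URL):
#   curl -o /dev/null -s -w "$FMT" https://yourapp.example/
# Offline demo of the format string (file:// does no TLS, so tls stays 0):
curl -o /dev/null -s -w "$FMT" "file:///dev/null"
```

Comparing the `tls` value against `total` shows how much of the request time the handshake accounts for.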

Fixed · Reverse Proxies · Global

Incident Summary

Duration: 08:30 UTC - 10:55 UTC (2h 25m)

Impact:

Two load balancers in the Paris region experienced increased TLS handshake times, resulting in:

  • TLS handshake timeouts for some HTTPS queries
  • Extended HTTPS request processing times due to longer TLS handshakes

Timeline:

  • 08:30 UTC: First load balancer began experiencing increased TLS handshake latency
  • 09:05 UTC: Second load balancer affected, amplifying customer impact
  • 10:55 UTC: Incident resolved - TLS handshake times returned to normal on both load balancers

The root cause of the increased latency is under investigation. We will also review and improve monitoring thresholds and alert configurations for load balancer performance metrics so that such issues are detected more quickly.

Fixed · Infrastructure · Global

A hypervisor in the SGP region has been down since 15:14 UTC+1.

We are working to reboot it.

October 2025

Fixed · Pulsar · Global

The Pulsar cluster in Paris is currently having trouble with producer creation and with reading and consuming messages. We are investigating the underlying issue. Impacted services are:

  • Pulsar addons
  • Network Groups
  • Access logs
  • Drains
  • Logs

Fixed · Access Logs · Global

We are currently investigating an issue affecting the access logs ingestion pipeline.

Impact:

  • Access logs may be delayed or temporarily unavailable in the console
  • Applications and addons continue to run normally without any impact

Fixed · Infrastructure · Global

A hypervisor in the PAR region was down from 06:20 to 06:35 UTC+1.

Services are now back up and we are monitoring the hypervisor.

Fixed · Infrastructure · Global

A hypervisor in the SGP region has been unreachable since 14:58 UTC+2. Services on it were automatically restarted elsewhere.

Fixed · Jenkins · Global

Newly started Jenkins instances are unavailable after a deployment; we are investigating. You can contact our support team to restore the service.

Fixed · Infrastructure · Global

A hypervisor in the Paris region is currently unreachable; we are investigating the issue.

Fixed · Infrastructure · Global

Status: We are currently investigating infrastructure issues impacting multiple hypervisors in the Paris availability zone.

Update 06:22 UTC – The investigation remains in progress.

Update 06:23 UTC – The orchestration system has been temporarily stopped.

Update 06:24 UTC – A potential cooling issue has been identified in one of our Paris datacenters.

Update 06:44 UTC – Root cause investigation is ongoing, and preparations to relaunch orchestration are underway.

Update 07:08 UTC – Orchestration has been relaunched and is now catching up.

Update 07:25 UTC – We have observed that additional hypervisors in the same datacenter are also experiencing issues, and our teams are actively investigating.

Update 07:45 UTC – The datacenter team is actively addressing the cooling issue, with resolution expected by the end of the morning. In the meantime, most infrastructure has been migrated to other datacenters within the same availability zone.

September 2025

Fixed · Infrastructure · Global

A hypervisor in the PAR region has been unreachable since 08:26 UTC+2. The affected services were automatically restarted on other hosts. Our engineering team is actively investigating the root cause.

Fixed · Infrastructure · Global

A hypervisor in the RBX region has been unreachable since 11:28 UTC+2. Services on it were automatically restarted elsewhere. One of the IPs behind domain.rbx.clever-cloud.com (87.98.177.176) is also unreachable; it has been dropped from DNS, but you may still be using it if you configured A records for your domains.
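If you are unsure whether your domain still resolves to the dropped IP, you can compare its A records against it. A small sketch (the `check_dropped_ip` helper and `yourdomain.example` are hypothetical names; `dig` is assumed to be installed):

```shell
# Reads resolved IPs on stdin and warns if any of them is the dropped address.
check_dropped_ip() {
  dropped="87.98.177.176"
  while read -r ip; do
    if [ "$ip" = "$dropped" ]; then
      echo "WARNING: $ip was dropped from DNS; prefer a CNAME to domain.rbx.clever-cloud.com"
    fi
  done
}

# Example usage (replace with your own domain):
#   dig +short A yourdomain.example | check_dropped_ip
```

A CNAME pointing at domain.rbx.clever-cloud.com picks up DNS changes like this one automatically, which is why it is preferable to hard-coded A records.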

Fixed · Access Logs · Global

We are currently investigating an issue with access logs: the system is having trouble processing messages.

08:36 UTC: We are currently deploying a fix.

Fixed · Reverse Proxies · Global

In some rare edge cases, you may have trouble reaching database addons (MySQL, PostgreSQL, ...). We are investigating.

EDIT:

A network configuration on the MTL, MEA, and GRAHDS regions was responsible for this issue. Under very specific conditions, some application instances were unable to reach their databases.

Fixed · Infrastructure · Global

A hypervisor in the RBX HDS AZ is not responding. We are investigating the issue.

August 2025

Fixed · Infrastructure · Global

Duration: 04:50 - [Current Time] UTC (Ongoing)

Affected Services: All virtual machines running on the affected hypervisor, including production workloads and their dependent services.

Impact: One hypervisor in the RBX region is experiencing erratic behavior requiring an emergency reboot. This has resulted in service interruptions for all VMs hosted on this hypervisor.

Current Status: In Progress - Emergency reboot initiated. Hypervisor is currently restarting and VMs are being brought back online.