Clever Cloud Status

Incidents

Full history of incidents.

February 2026

Fixed · Infrastructure · Global

One of our hypervisors has gone offline. We are working toward service recovery.

Fixed · Infrastructure · Global

We are currently experiencing infrastructure instability resulting in timeouts and service interruptions across multiple applications.

Technical teams are fully engaged in identifying the root cause and implementing corrective actions to restore full service availability.

Fixed · Infrastructure · Global

One of our hypervisors has gone offline. We are working toward service recovery.

Fixed · Infrastructure · Global

One of our hypervisors has gone offline. We are working with our infrastructure provider in this region to identify the cause and restore availability. All applications located on this machine have been redeployed; addons are currently down.

Fixed · Pulsar · Global

During planned maintenance on internal components of the Pulsar cluster, some customers may have experienced disconnections and occasional errors. We quickly identified and fixed the issue at 10:15 UTC.

Fixed · Infrastructure · Global

One of our hypervisors has gone offline. We are working with our infrastructure provider in this region to identify the cause and restore availability. We will update this page as we learn more.

Fixed · Reverse Proxies · Global

We are currently experiencing an issue affecting our cleverapps.io load balancers, resulting in degraded or non-functional traffic routing.

Applications configured with custom domain names are not affected and continue to operate normally.

Fixed · Drains · Global

We are currently experiencing issues with the drain process and are actively investigating the root cause. Drains continue to accumulate logs, so no data loss is expected. Further updates will follow.

January 2026

Fixed · Deployments · Global

Deployments cannot be performed at this time. The deployment process is currently blocked, preventing any new releases. The issue is under investigation.

Fixed · Infrastructure · Global

We have identified a DNS issue affecting one of our hypervisors in the PAR zone. As a result, some applications and databases are currently unreachable.

The root cause has been identified, and the issue is actively being resolved. Further updates will be provided as progress is made.

Fixed · Deployments · Global

A hypervisor in the Paris region was causing intermittent deployment failures. Some deployments could fail randomly or appear stuck during startup. The faulty hypervisor was identified and removed from the production pool at 14:58 CET. Deployments should now proceed normally. We are keeping this incident open and monitoring to confirm this was indeed the root cause of the issues.

We apologize for the inconvenience.

Fixed · Infrastructure · Global

Monitoring reports that a hypervisor is unreachable in Paris; we are currently investigating.

Fixed · Pulsar · Global

The Pulsar product management and its dashboard are currently unavailable. The issue has been identified, and our teams are actively working on a fix.

Fixed · Infrastructure · Global

On Saturday, January 3rd, our Paris region experienced degraded performance for approximately 6 minutes. During this period, some users encountered slower response times when accessing the platform, while others experienced intermittent network timeouts when attempting to connect to their services.


Timeline

7:37 PM UTC+1 - Our monitoring systems detected degraded performance on the Paris region. Users began experiencing increased latency and occasional connection failures.

7:43 PM UTC+1 - Service returned to normal operating conditions. All platform features became fully operational again.


Impact

During this incident, users with resources hosted in our Paris region may have noticed slower than usual response times when interacting with the platform. Some connection attempts resulted in network timeouts, requiring users to retry their requests. We can confirm that no data was lost during this incident and all services continued to run normally once connectivity was restored.

Next Steps

We are currently conducting a thorough post-mortem analysis to understand the root cause of this degradation. We will update this report with our findings.

December 2025

Fixed · Infrastructure · Global

Monitoring reports two hypervisors unreachable; we are investigating.

Fixed · Metrics · Global

Due to the failure of an internal automated process, users may experience sampling or lag in their metrics. No data loss is expected, and the read path is not impacted.

Fixed · Infrastructure · Global

Due to elevated load on one hypervisor, we are observing issues affecting applications and databases.

Fixed · API · Global

The API is currently returning incomplete flavor information for certain runtimes. This affects the Console, the pricing page, and the CLI, which may not display all available flavors for specific endpoints.

Fixed · Access Logs · Global

We identified an issue affecting the access logs ingestion pipeline.

Impact:

  • Access logs may be delayed or temporarily unavailable in the console
  • Applications and addons continue to run normally without any impact

Fixed · Metrics · Global

Due to the failure of an internal automated process, users may experience sampling or lag in their metrics.