Intermittent System Slowness
Updates
Earlier today, we experienced intermittent but severe slowness, with response times spiking across all customers. Our investigation determined that new instances were receiving traffic before they were fully ready to handle requests, causing thread blocking and significant bottlenecks.
A fix has been deployed to ensure instances are fully warmed up before being marked healthy by the load balancer. System performance has stabilized, and we are continuing to monitor closely.
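The fix described above can be sketched as a readiness gate: the instance answers the load balancer's health probe with an unhealthy status until its warmup work has finished. This is a minimal illustration only; the actual warmup tasks and health-check wiring are assumptions, not details from the incident.

```python
import threading
import time

class ReadinessGate:
    """Reports the instance as healthy only after warmup completes."""

    def __init__(self):
        self._ready = threading.Event()

    def warm_up(self):
        # Placeholder for warmup work (e.g. priming caches or connection
        # pools); the specific tasks are assumptions for illustration.
        time.sleep(0.01)
        self._ready.set()

    def health_check(self):
        # Load balancer probe: return 200 only once warmup has finished,
        # so no traffic is routed to a cold instance.
        return (200, "ok") if self._ready.is_set() else (503, "warming up")

gate = ReadinessGate()
print(gate.health_check()[0])  # 503: load balancer keeps traffic away
gate.warm_up()
print(gate.health_check()[0])  # 200: instance may now receive requests
```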
Our team has identified a potential cause related to fetching an authentication key from blob storage, which may be leading to thread blocking and increased response times. We have engaged Azure Support and are continuing to investigate to confirm the root cause and implement a fix.
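To illustrate why a per-request key fetch can block threads, the sketch below caches the key after a single fetch so later requests skip the slow remote call. The `fetch_key_from_blob_storage` function is a hypothetical stand-in, not a real Azure API; this is one common mitigation, not necessarily the fix the team deployed.

```python
import threading

_cached_key = None
_lock = threading.Lock()

def fetch_key_from_blob_storage():
    # Stand-in for the slow remote fetch suspected in the incident.
    return "example-key"

def get_auth_key():
    """Return the auth key, fetching it from storage at most once."""
    global _cached_key
    if _cached_key is None:          # fast path: no lock once populated
        with _lock:
            if _cached_key is None:  # double-checked so only one thread fetches
                _cached_key = fetch_key_from_blob_storage()
    return _cached_key
```

With this pattern, only the first request pays the blob-storage latency; subsequent requests read the cached value without blocking on I/O.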
We are continuing to investigate intermittent slowness. A few unhealthy web application instances were restarted, which improved performance, but elevated latency persists. Early indicators point to delays when handling database requests, though the root cause has not yet been determined. Our engineering team continues to investigate.
We are currently investigating reports of intermittent performance degradation affecting portions of the application. Our engineering team is actively working to identify the root cause. Updates will be provided as more information becomes available.