No components marked as affected
Resolved
A regression in datastore configuration, introduced while adding additional checksum/error-detection capability, resulted in a temporary loss of resiliency. The regression has been identified and forward-fixed, retaining the additional error detection.
Monitoring
The API is responding normally, and we are continuing to investigate the root causes.
Monitoring
Data is fully replicated. The API is responding normally, and we will continue to monitor through the weekend.
Monitoring
Restoration of the full replication level is proceeding and is expected to complete within the next 24 hours. The API is responding normally, and we continue to monitor.
Monitoring
We have identified a faulty storage node and are migrating data away from it. Until migration is complete, data is below normal replication levels. The API is functioning normally again.
Investigating
Cluster error rates have returned to normal levels, but we continue to investigate the source of the problem.
Investigating
Error rates are declining, but we are continuing to investigate.
Investigating
We are continuing to investigate the cause of the issue and to evaluate recovery alternatives. We are working to make the most recent backup available as a fallback.
Investigating
The timeseries and sequences datastore replication process began experiencing errors at 09:07 UTC, and API error rates for timeseries and sequences increased significantly.
This will be visible in services and applications reading and writing to timeseries and sequences, including Fusion.
Investigating
We are continuing to investigate this issue.
Investigating
We are currently investigating the issue.