{
  "environment": "unified_incident_env",
  "default_scenario_id": "worker_deploy_cascade",
  "available_difficulties": ["easy", "medium", "hard"],
  "filtered_difficulty": null,
  "scenarios": [
    {
      "id": "worker_deploy_cascade",
      "difficulty": "easy",
      "name": "Worker Deploy Cascade",
      "description": "A bad worker deploy causes sustained database overload and login 502s at the gateway. The agent must diagnose from evidence, choose a safe remediation, verify recovery, and declare resolved only after checks pass.",
      "root_cause": "A bad worker deploy is driving repeated database overload.",
      "optimal_ticks": 10
    },
    {
      "id": "db_config_rollout",
      "difficulty": "medium",
      "name": "Database Config Rollout Regression",
      "description": "A database config push cut the connection pool size, and write requests now time out. A separate worker deploy landed around the same time and looks suspicious but is not the cause. The agent must avoid the decoy, roll back the database config, restart it, and verify recovery.",
      "root_cause": "A bad database config rollout shrank the connection pool and is dropping writes.",
      "optimal_ticks": 10
    },
    {
      "id": "gateway_auth_rollout",
      "difficulty": "hard",
      "name": "Gateway Auth Rollout Regression",
      "description": "A new api-gateway auth-middleware rollout is rejecting ~40% of valid logins. A recent worker deploy and elevated worker queue depth make the worker look like a plausible suspect. The agent must localize the fault to the gateway, roll back its deploy, and verify recovery without unnecessary restarts.",
      "root_cause": "A bad api-gateway auth-middleware rollout is rejecting valid logins.",
      "optimal_ticks": 8
    }
  ]
}
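The `filtered_difficulty` and `default_scenario_id` fields suggest a simple selection rule: when a difficulty filter is set, pick a matching scenario; otherwise fall back to the default scenario id. A minimal sketch of that rule follows. `select_scenario` is a hypothetical helper (not part of the environment's actual code), and `CONFIG_JSON` is a trimmed copy of the config above with descriptions omitted for brevity.

```python
import json

# Trimmed copy of the config above (descriptions omitted); field names
# match the real file.
CONFIG_JSON = """
{
  "default_scenario_id": "worker_deploy_cascade",
  "filtered_difficulty": null,
  "scenarios": [
    {"id": "worker_deploy_cascade", "difficulty": "easy"},
    {"id": "db_config_rollout", "difficulty": "medium"},
    {"id": "gateway_auth_rollout", "difficulty": "hard"}
  ]
}
"""

def select_scenario(config: dict) -> dict:
    """Hypothetical helper: honor filtered_difficulty when set,
    otherwise fall back to default_scenario_id."""
    difficulty = config.get("filtered_difficulty")
    if difficulty is not None:
        matches = [s for s in config["scenarios"]
                   if s["difficulty"] == difficulty]
        if not matches:
            raise ValueError(f"no scenario with difficulty {difficulty!r}")
        return matches[0]
    return next(s for s in config["scenarios"]
                if s["id"] == config["default_scenario_id"])

config = json.loads(CONFIG_JSON)
print(select_scenario(config)["id"])  # → worker_deploy_cascade
```

With `filtered_difficulty` null, the default `worker_deploy_cascade` is chosen; setting it to `"hard"` would instead select `gateway_auth_rollout`.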