Every scenario is drawn from a real production incident. Candidates debug actual problems with real tools, the same way they would on the job.
Ready to use today.
An ML feature service returns in ~100ms for most clients but 10-30s for one. Candidates must use observability tooling to work out why, then fix the root cause.
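A minimal sketch of the first triage step for this kind of tail-latency problem: slice request durations by client and compare. All client names and numbers below are illustrative; in the scenario itself the data would come from traces or access logs.

```python
from statistics import median

# Hypothetical request log as (client_id, duration_ms) pairs; in the real
# scenario these would come from tracing data, not a literal list.
requests = [
    ("mobile-app", 95), ("mobile-app", 110), ("mobile-app", 102),
    ("web", 98), ("web", 105), ("web", 90),
    ("batch-client", 12_000), ("batch-client", 28_000), ("batch-client", 17_500),
]

def latency_by_client(log):
    """Group durations by client and take the median: a quick way to
    confirm that the slowness is isolated to one caller."""
    by_client = {}
    for client, ms in log:
        by_client.setdefault(client, []).append(ms)
    return {client: median(durations) for client, durations in by_client.items()}

stats = latency_by_client(requests)
slowest = max(stats, key=stats.get)
```

Once the outlier client is confirmed, the investigation moves to what is different about its requests (payload size, auth path, cache behaviour).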
Intermittent double-charges in a checkout flow. Candidates must identify the race condition and implement a fix that handles concurrent requests safely.
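The shape of the fix candidates are expected to reach, sketched in Python with an in-memory store for brevity: the duplicate check and the charge must be a single atomic step. A production fix would lean on a database constraint or an idempotency key, but the principle is the same.

```python
import threading

def charge_safe(order_id, paid_orders, ledger, lock):
    """Idempotent charge: the 'already paid?' check and the 'mark as paid'
    write happen under one lock, so concurrent retries of the same order
    produce exactly one charge instead of two."""
    with lock:
        if order_id in paid_orders:
            return False          # duplicate request, no second charge
        paid_orders.add(order_id)
        ledger.append(order_id)   # stand-in for the real payment call
        return True
```

The unsafe version is the same code without the lock: two concurrent requests can both pass the check before either records the charge.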
A Node.js service's memory usage grows until it OOMs. Candidates use profiling tools to identify the leak and fix it without breaking functionality.
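The scenario's service is Node.js, but the leak pattern is language-agnostic; a common shape of it, sketched in Python: a module-level cache that only ever grows, versus a bounded cache that lets memory plateau. The function names are illustrative.

```python
from functools import lru_cache

leaky_cache = {}

def compute(user_id):
    return hash(user_id)  # stand-in for the real, expensive lookup

def features_leaky(user_id):
    # BUG shape: entries are added but never evicted, so memory grows
    # with every distinct key until the process OOMs.
    if user_id not in leaky_cache:
        leaky_cache[user_id] = compute(user_id)
    return leaky_cache[user_id]

# Fix shape: bound the cache so hot keys stay fast but memory is capped.
@lru_cache(maxsize=10_000)
def features_bounded(user_id):
    return compute(user_id)
```

Heap profiling points candidates at the growing structure; the fix is eviction, not deleting the cache.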
A deployed PyTorch model is returning predictions but accuracy has dropped significantly since last week. Candidates investigate the pipeline to find what changed.
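When a model's weights haven't changed but accuracy has, a cheap first check is whether the input features have drifted. A minimal sketch of that check, with illustrative thresholds and feature names:

```python
from statistics import mean

def drifted_features(baseline, current, tol=0.25):
    """Flag features whose mean moved more than `tol` (relative) since
    training: a quick signal that the data pipeline, not the model,
    is what changed. The 25% threshold here is illustrative."""
    flagged = []
    for name, base_vals in baseline.items():
        b, c = mean(base_vals), mean(current[name])
        if b and abs(c - b) / abs(b) > tol:
            flagged.append(name)
    return flagged
```

A flagged feature narrows the search to the upstream job that produces it.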
Finance flagged that the monthly sales report is wrong—numbers don't match the source data. The query has cross joins and window functions. Something's silently inflating totals.
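A stripped-down reproduction of the fan-out class of bug behind this scenario, using a plain join and a hypothetical two-table schema for brevity (the scenario's actual query is more involved): joining orders to a child table with multiple rows per order before summing silently multiplies the totals.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders(id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 100.0), (2, 50.0);
    CREATE TABLE shipments(order_id INTEGER);
    INSERT INTO shipments VALUES (1), (1), (2);  -- order 1 shipped twice
""")

# BUG shape: one row per shipment means order 1's amount is counted twice.
inflated = con.execute("""
    SELECT SUM(o.amount) FROM orders o
    JOIN shipments s ON s.order_id = o.id
""").fetchone()[0]

# Fix shape: filter instead of joining, so each order is counted once.
correct = con.execute("""
    SELECT SUM(amount) FROM orders
    WHERE id IN (SELECT order_id FROM shipments)
""").fetchone()[0]
```

The same inflation hides easily inside a larger query with cross joins and window functions, which is what makes the scenario a good filter.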
An executive dashboard is showing different numbers than the underlying data. Candidates debug the notebook, fix the aggregations, and correct the visualisations.
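One classic way dashboard figures drift from the source data (illustrative of the bug class, not necessarily this scenario's exact bug): averaging per-group averages instead of weighting by group size.

```python
def naive_overall_avg(per_region):
    """BUG shape: a plain average of per-region averages treats a region
    with 1 row the same as a region with 10,000 rows."""
    avgs = [total / count for total, count in per_region.values()]
    return sum(avgs) / len(avgs)

def correct_overall_avg(per_region):
    """Fix shape: weight by row counts so the dashboard's overall figure
    matches a direct aggregation over the raw data."""
    total = sum(t for t, _ in per_region.values())
    count = sum(c for _, c in per_region.values())
    return total / count
```

With regions of very different sizes the two numbers diverge sharply, which is exactly the mismatch Finance would flag.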
Have an incident that would make a great hiring filter? We'll work with your team to turn it into a reproducible scenario with isolated infrastructure.
Book a demo and we'll help you pick, or design something custom for your team.