Sadly it's Windows-only for now, but they have plans to port it to other platforms.
Of course the CPU side usually lacks the semantics to automatically create data visualizations (while modern 3D APIs have enough context to figure out data dependencies and data formats), and that would be the "interesting" part to solve - e.g. how to tunnel richer debug information from the programming language to the debugger.
Also there's a middle ground of directly adding a realtime debugging UI to applications via something like Dear ImGui (https://github.com/ocornut/imgui/), which is extremely popular in game development. In this case it's trivial to provide the additional context, since you basically develop the debugging system alongside the application.
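For illustration, a minimal sketch of what such an embedded debug UI can look like - this assumes Dear ImGui is already integrated with a rendering backend elsewhere, and the window name and state fields shown are made up:

```cpp
// Assumes Dear ImGui is set up with a backend (GLFW/OpenGL/etc.) elsewhere.
#include "imgui.h"

// Hypothetical application state; the fields are illustrative.
struct GameState {
    float player_x = 0.0f;
    float player_y = 0.0f;
    int   enemies_alive = 0;
    bool  god_mode = false;
};

// Called once per frame, between ImGui::NewFrame() and ImGui::Render().
void DrawDebugOverlay(GameState& state) {
    ImGui::Begin("Debug");  // one window for live application state
    ImGui::Text("enemies alive: %d", state.enemies_alive);
    ImGui::SliderFloat("player x", &state.player_x, -100.0f, 100.0f);
    ImGui::SliderFloat("player y", &state.player_y, -100.0f, 100.0f);
    ImGui::Checkbox("god mode", &state.god_mode);  // live-editable, not read-only
    ImGui::End();
}
```

The point being: the same struct the game logic mutates is handed straight to the UI, so each field is both displayed and live-editable with one line of code - that's the "additional context" coming for free.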
PS: I'd also like a time slider that "just works", e.g. travelling back to a previous state, taking snapshots, and exploring different "state forks". And of course, while at it, live editing / hot code reloading, so that there's no difference between a development and a debugging session - both merge into the same workflow.
https://github.com/epasveer/seer
Interactive debugging is definitely useful when teaching, but obviously teaching is a different context. Seer is not an educational tool, though, and I believe it holds up in other cases as well.
Also, rr is impressive in theory, although it has never worked on the codebases I've worked on.
https://www.youtube.com/watch?v=O-3gEsfEm0g
Casey also makes a good point here on why printf-debugging is still extremely popular.
I've worked at a company that, for all intents and purposes, had the same thing - single-threaded & multi-process everything (i.e. one process per core), asserts in prod (like why tf would you not), absurdly detailed in-memory ring-buffer binary logs with good tooling to access them, plus normal logs (journalctl), telemetry, graphing, etc.
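For illustration, here's a minimal sketch of what such an in-memory ring-buffer binary log can look like - the record layout, names, and capacity are assumptions, not that company's actual format:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical fixed-size binary record; the field layout is illustrative.
struct LogRecord {
    uint64_t timestamp;
    uint32_t event_id;
    uint32_t payload;
};

// Capacity is a power of two so the write index wraps with a cheap mask.
constexpr size_t kLogCapacity = 1024;

struct RingLog {
    std::array<LogRecord, kLogCapacity> records{};
    uint64_t head = 0;  // total records ever written; head % capacity = next slot

    // Single-threaded per process (one process per core), so no locking needed.
    void log(uint64_t ts, uint32_t id, uint32_t payload) {
        LogRecord& slot = records[head & (kLogCapacity - 1)];
        slot = {ts, id, payload};
        head++;
    }

    // Oldest record still in the buffer (valid once head >= kLogCapacity).
    const LogRecord& oldest() const {
        return records[head & (kLogCapacity - 1)];
    }
};
```

Writes are just a struct copy and an increment, which is why this style of logging can be kept on in prod at absurd detail - the tooling then snapshots the buffer (e.g. from a core dump) and decodes it offline.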
So basically - it's about making your software debuggable and resilient in the first place. These two kind of go hand-in-hand, and absolutely don't have to cost you performance. They might even add performance, actually :P
If you're lucky enough to be able to code significant amounts with a modern agent (someone's paying, your task is amenable to it, etc) then you may experience development shifting (further) from "type in the code" to "express the concepts". Maybe you still write some code - but not as much.
What does this look like for debugging / understanding? There's a potential outcome of "AI just solves all the bugs" but I think it's reasonable to imagine that AI will be a (preferably helpful!) partner to a human developer who needs to debug.
My best guess is:
* The entities you manage are "investigations" (mapping onto agents)
* You interact primarily through some kind of rich chat (includes sensibly formatted code, data, etc)
* The primary artefact(s) of this workflow are not code but something more like "clues" / "evidence".
Managing all the theories and snippets of evidence is already core to debugging the old-fashioned way. I think having agents in the loop gives us an opportunity to make that an explicit part of the process (and then be able to assign agents to follow up on gaps in the evidence, or investigate them yourself, or get someone else to...).
Doesn't seem to meet all your desired features though.
Blows everything else out of the water.
https://pernos.co/ (I'm not affiliated with them in any way, just a happy customer)
Takes some effort to configure, but beats "printf" (i.e. logging) in the end.
Most RE tools today will integrate a debugger (or talk to gdb).
To add something constructive, this demo represents an amazing ideal of what debugging could be: https://www.youtube.com/watch?v=72y2EC5fkcE