Our approach is to have our CLI handle port assignments (passing any connection details/ports along as env vars), which lets us spin up “isolated” copies of the local dev environment. An added benefit is that we can deploy the same config straight to production and swap in production database connection strings and anything else needed.
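For what it's worth, the core of that pattern is small enough to sketch. This is a toy illustration, not our actual CLI; the env var names (APP_PORT, DATABASE_URL) and the entrypoint are made up:

```python
import os
import socket
import subprocess

def free_port() -> int:
    """Ask the OS for an unused TCP port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

def launch_isolated_copy() -> subprocess.CompletedProcess:
    """Launch one 'isolated' dev-environment copy with its own ports.

    APP_PORT / DATABASE_URL are hypothetical names; in production the
    same variables would carry the real connection strings instead.
    """
    env = dict(os.environ)
    env["APP_PORT"] = str(free_port())
    env["DATABASE_URL"] = f"postgres://localhost:{free_port()}/dev"
    # Stand-in child process; a real CLI would exec docker-compose etc.
    return subprocess.run(
        ["python", "-c", "import os; print(os.environ['APP_PORT'])"],
        env=env, capture_output=True, text=True,
    )
```

Because each copy only ever reads its config from the environment, nothing in the app itself has to know whether it's running as copy #3 locally or in production.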
The adjacent problem I’ve been focused on is what happens after the agent finishes in its isolated environment: how do you review what it actually changed before accepting the result?
I’m interested in diff/commit/rollback at the filesystem level, so you can selectively keep some changes and discard others.
Different problem, but they compose naturally.
Curious about the hot-swap strategy: when you do umount -l /workspace + mount --bind + mount --make-rshared inside the DinD container, lazy unmount means a running file watcher can still hold open fds to the old worktree while the new bind is already live. Have you hit cases where it keeps writing to stale paths after the switch? Or does it just naturally recover once the watcher picks up the inotify events from the new mount?
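For anyone following along, the stale-fd behavior in question is ordinary POSIX semantics, not mount-specific magic. You can't demo a lazy unmount without root, but a file-level analogy (an atomic rename standing in for the bind-mount swap) shows the same rule: a long-lived handle keeps pointing at the old object until it reopens the path.

```python
import os
import tempfile

def stale_fd_demo() -> tuple:
    """Show that an already-open fd keeps seeing the old file after the
    path is atomically replaced -- the same rule that lets a lazily
    unmounted tree stay alive for processes holding fds into it."""
    d = tempfile.mkdtemp()
    path = os.path.join(d, "workspace")
    with open(path, "w") as f:
        f.write("old worktree")
    held = open(path)               # watcher-style long-lived handle
    # Atomically swap in "new" content at the same path.
    newpath = os.path.join(d, "workspace.new")
    with open(newpath, "w") as f:
        f.write("new worktree")
    os.rename(newpath, path)
    stale = held.read()             # still reads the old inode
    with open(path) as f:
        fresh = f.read()            # a fresh open sees the new one
    held.close()
    return stale, fresh
```

So a watcher that only holds fds will keep reading (and writing) the stale tree indefinitely; it recovers only if it reopens paths or re-establishes inotify watches against the new mount.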
Basically I've been relying on spinning up Cursor / niteshift / Devin workflows, since they have their own containers, but this could be interesting for keeping it all on your main machine.
1) Could you run an agent in Coast?
You could... sort of. We started out with this in mind. We wanted Claude Max plans to work, so we built a way to inject OAuth secrets from the host into the containerized host. Unfortunately, because the Coast runtime doesn't match the host machine the OAuth token was created on, Anthropic rapidly invalidates the tokens. This would really only work for TUIs/CLIs, and you'd almost certainly have to bring a usage key (at least for Anthropic). You'd also need to figure out how to get a browser runtime into the containerized host if you wanted things like Playwright to work for your agent.
There are so many good host-side sandboxing solutions. Coasts is not a sandboxing tool and we don't try to be one. We should play well with all host-side sandboxing solutions, though.
2) Why DinD and why not mount namespaces with unshare / nsenter?
Yes, DinD is heavy. A core principle of our design was to run the user's docker-compose unmodified: we wanted the full Docker API inside the running containerized host. Raw mount namespaces can't provide image caches, network namespaces, and build layers without either talking to the host daemon or reimplementing Docker itself.
In practice, I've seen about 200 MB of overhead for each containerized host running DinD. We have a Podman runtime in the works, which may cut that down some. But the bulk of utilization comes from the services you're running and how you decide to optimize your containerized hosts and Docker stack. We also have a concept of "shared services": if you don't need isolated Postgres or Redis, for example, you can declare those services as shared in your Coastfile, and they'll run once on the host Docker daemon instead of being duplicated inside each containerized host; Coasts will route to them.
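For intuition, the placement decision shared services imply looks roughly like this. To be clear, this is a toy model I wrote for illustration; the real Coastfile schema and routing logic are Coast internals, and every name here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Service:
    """Hypothetical model of one service entry in a Coastfile."""
    name: str
    shared: bool = False

def plan(services, hosts):
    """Shared services run once on the host daemon; everything else is
    duplicated into each containerized host."""
    placement = {"host-daemon": []}
    for h in hosts:
        placement[h] = []
    for svc in services:
        if svc.shared:
            placement["host-daemon"].append(svc.name)
        else:
            for h in hosts:
                placement[h].append(svc.name)
    return placement
```

With two containerized hosts and `postgres` marked shared, you'd pay for one Postgres instead of two, and each host's app services would be routed to it.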
Would love to see this support stdio-to-HTTP bridging so local MCP servers can be exposed as remote ones without rewriting them.