Now your app doesn't have direct access to your Stripe/GitHub/AWS/whatever keys (which is good!), but you still need _some_ form of authentication against your proxy.
If you have per-app authentication, then whoever gets hold of a leaked app key can reach every external service that app can, i.e. with one key you lose everything. On the other hand, if you have per-endpoint authentication, you didn't really solve anything: you still have to manage X secrets.
Even worse, from the perspective of the team who owns and runs the proxy, chances are you'll end up using per-app AND per-endpoint authentication, because that lets you revoke bad keys without breaking everyone else, etc.
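To make the combined scheme concrete, here's a minimal sketch of what the proxy's check might look like. All names (keys, app ids, upstream names) are made up for illustration; a real proxy would back this with a datastore and scoped tokens, not dicts.

```python
# Hypothetical sketch: the proxy checks BOTH a per-app key and a
# per-app/per-endpoint grant before forwarding a request upstream.

APP_KEYS = {            # app key -> app id (illustrative values)
    "key-frontend-7f3a": "frontend",
    "key-frontend-old": "frontend",
    "key-billing-91bc": "billing",
}

GRANTS = {              # which upstream services each app may reach
    "frontend": {"github"},
    "billing": {"stripe", "github"},
}

REVOKED = {"key-frontend-old"}  # revoking one key doesn't break other apps

def authorize(app_key: str, upstream: str) -> bool:
    if app_key in REVOKED or app_key not in APP_KEYS:
        return False
    app = APP_KEYS[app_key]
    return upstream in GRANTS.get(app, set())

assert authorize("key-billing-91bc", "stripe")       # granted
assert not authorize("key-frontend-7f3a", "stripe")  # app key valid, no grant
assert not authorize("key-frontend-old", "github")   # key revoked
```

The point of the two layers: a leaked app key only exposes that app's grants, and a bad key can be revoked without rotating anything for the other apps.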
What this really solves is subscription management for (big?) organisations. Now that you have a proxy, you only need a single key to talk to <external-service>; no need to manage subscriptions, user onboarding and offboarding, etc. You just need to negotiate rate limits.
For the integrations that aren't GitHub-style OAuth Apps, where upstream just ships a long-lived API key and someone still has to rotate it, how are you planning to handle the refresh lifecycle on the exe.dev side? Is that declared per-integration, or is the proxy expected to notice 401s and pull a fresh credential from somewhere upstream?
An 'MITM' TLS proxy also gives you much better firewalling capabilities [1], not that firewalls aren't inherently leaky. Codex's is a 'wildcard'-based one [2], hence "easy" to bypass [3]; GitHub's list is slightly better [4], but YMMV.
[1] Better, that is, than the rudimentary "allow based on nslookup $host" approach we're seeing on new sandboxes popping up, especially when the backing server may serve other hosts.
[2] https://developers.openai.com/codex/cloud/internet-access#co...
[3] https://embracethered.com/blog/posts/2025/chatgpt-codex-remo...
[4] https://docs.github.com/en/copilot/reference/copilot-allowli...
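The leak in footnote [1] can be shown offline in a few lines. `RESOLVE` stands in for a DNS lookup and the addresses and hostnames are made up; the point is that an IP-level allow rule cannot tell the allowlisted host apart from any other host served from the same address (CDNs, shared frontends), whereas a TLS-terminating proxy sees the SNI/hostname and can filter on it.

```python
# Hypothetical, offline sketch of why "allow based on nslookup $host" leaks.

RESOLVE = {  # pretend DNS answers (illustrative addresses)
    "api.github.com": {"140.82.1.1"},
    "evil.example": {"140.82.1.1"},  # unrelated site behind the same frontend IP
}

ALLOWED_HOSTS = {"api.github.com"}

# The naive firewall resolves the allowlist once and permits by IP.
ALLOWED_IPS = set().union(*(RESOLVE[h] for h in ALLOWED_HOSTS))

def naive_firewall_permits(dest_ip: str) -> bool:
    # An L3/L4 rule only sees the destination IP, not the SNI/Host,
    # so anything co-hosted on an allowed IP sails through.
    return dest_ip in ALLOWED_IPS

# A connection to evil.example is permitted, because it resolves to
# an IP that api.github.com also uses:
assert naive_firewall_permits(next(iter(RESOLVE["evil.example"])))
```

An MITM TLS proxy closes this gap because it enforces the allowlist on the hostname the client actually asked for, not on whatever address DNS happened to return.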
You may not want to be doing this at the edge.