by shubhamintech
1 subcomment
- The full-session evaluation framing is the right call - most teams don't realize the failure happened in turn 2 until they've spent 3 hours blaming the model. One thing worth thinking about as you grow: connecting caught regressions to production conversation data. When your simulation flags a new failure mode, being able to say "this pattern has already surfaced X times in prod this week" cuts the prioritization debate in half. Does Cekura currently let you correlate simulation failures back to real user sessions, or is that still a manual step?
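To make the correlation concrete, here's the kind of join I mean (the session schema and tag names are invented, nothing Cekura-specific):

```python
from collections import Counter

def prod_frequency(failure_signature: str, prod_sessions: list[dict]) -> int:
    """Count prod sessions whose failure tags include the signature a
    simulation just flagged (e.g. 'skipped_verification')."""
    tags = Counter(tag for s in prod_sessions for tag in s.get("failure_tags", []))
    return tags[failure_signature]

sessions = [
    {"id": "prod-1", "failure_tags": ["skipped_verification"]},
    {"id": "prod-2", "failure_tags": []},
    {"id": "prod-3", "failure_tags": ["skipped_verification", "wrong_tool"]},
]
# "this pattern already surfaced 2x in prod this week" -> prioritize it
assert prod_frequency("skipped_verification", sessions) == 2
```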
by MickeyShmueli
1 subcomment
- the mock tool platform thing is smart. testing agents against real APIs is a nightmare: you get flakiness, you burn through rate limits, and you can't reproduce failures
one thing i'm curious about: how do you handle testing the tool selection logic itself? like the agent choosing WHICH tool to call is often where things break, not the tool execution
we had a support agent that would sometimes call the "refund order" tool when the user just wanted to check order status. the tool worked perfectly, the LLM just kept picking the wrong one. your mock platform lets you verify the tool returns the right data, but does it catch when the agent calls the wrong tool entirely?
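rough sketch of the kind of check i mean - the harness is made up, nothing cekura-specific:

```python
# assert on WHICH tool fired, not what it returned. a real test would run
# the agent loop here; i just simulate the pick to show the mechanism.
def make_recording_tools(names):
    calls: list[str] = []
    def make(name):
        def tool(**kwargs):
            calls.append(name)
            return {"ok": True}  # canned payload, the data isn't the point
        return tool
    return {n: make(n) for n in names}, calls

tools, calls = make_recording_tools(["refund_order", "check_order_status"])
tools["check_order_status"](order_id=123)  # stand-in for the agent's tool choice

assert "refund_order" not in calls, "agent reached for the destructive tool"
assert "check_order_status" in calls
```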
also the full-session evaluation vs turn-by-turn is spot on. had a similar issue with a verification flow where each individual turn looked fine in langsmith but the overall flow was completely broken. you'd see "assistant asked for name" (good), "assistant asked for phone" (good), "assistant processed request" (good), but it never actually verified the phone number matched the account
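toy version of the session-level check that would have caught it (session shape invented):

```python
# the invariant spans turns, so no single-turn judge can see it
def phone_actually_verified(session: dict) -> bool:
    asked = any(t["intent"] == "ask_phone" for t in session["turns"])
    verified = any(t.get("verified_against_account") for t in session["turns"])
    return asked and verified  # "asked" alone is what graded green turn-by-turn

session = {"turns": [
    {"intent": "ask_name"},
    {"intent": "ask_phone"},
    {"intent": "process_request"},  # never verified against the account
]}
assert not phone_actually_verified(session)  # only visible at session level
```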
tbh this feels like one of those problems that's obvious in hindsight but nobody builds the tooling for until they get burned in production
- This is a solid framing. In my experience the nasty regressions are rarely a single bad response; they are state drift over 6-12 turns (verification skipped, tool called in the wrong order, recovery path never triggered).
One thing that's helped us is tagging each test with an explicit risk class (safety/compliance, business logic, UX) and tracking those buckets over time instead of relying on one pass/fail number. Release decisions get much less hand-wavy when one category starts creeping.
Session-level eval plus risk-bucket trends feels like the right combo for teams shipping agents weekly.
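Concretely, the shape of the bucketed tracking (a toy sketch; the real version trends these per release):

```python
from collections import defaultdict

def pass_rate_by_risk_class(results):
    """results: iterable of (risk_class, passed) pairs."""
    buckets = defaultdict(lambda: [0, 0])  # risk_class -> [passed, total]
    for risk_class, passed in results:
        buckets[risk_class][0] += int(passed)
        buckets[risk_class][1] += 1
    return {k: p / t for k, (p, t) in buckets.items()}

rates = pass_rate_by_risk_class([
    ("safety", True), ("safety", False),   # 50% - this should block a release
    ("business_logic", True), ("ux", True),
])
# Gate on the worst bucket, not the blended number (75% overall looks "fine")
assert min(rates.values()) < 0.9
```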
- the full-session evaluation framing resonates a lot. we've been running browser automation agents and the exact same problem shows up: individual actions pass, session fails. a click works, a form fill works, but the session gets flagged or blocked 8 turns in because something earlier created a signal that compounded.
one failure mode that's specific to browser agents and doesn't get much attention: the test environment is too clean. when you run simulations against a controlled setup, the agent never encounters the friction that real sessions do - bot detection challenges, CAPTCHAs, dynamic content that loads differently, fingerprinting checks mid-session. so you end up with agents that pass your test suite but fail in the wild, and the gap is in the environmental assumptions not the agent logic.
the mock tool platform approach is interesting precisely because it sidesteps this - you're testing the agent's decision-making in isolation from the messy runtime. that's valid for catching logic regressions. but i'd be curious how you handle cases where the tool call itself triggers secondary effects in the environment (e.g. the API call changes session state in ways that affect what the agent sees next).
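for reference, the kind of stateful mock i'd want for that (all names invented):

```python
# a tool call mutates session state, and later observations reflect it -
# so the mock can reproduce the compounding-signal failure mode
class StatefulMockEnv:
    def __init__(self):
        self.state = {"cart": [], "flagged": False}

    def add_to_cart(self, item):
        self.state["cart"].append(item)
        if len(self.state["cart"]) > 3:   # secondary effect: rapid adds
            self.state["flagged"] = True  # trip a bot-detection flag
        return {"ok": True}               # the call itself still "succeeds"

    def get_page(self):
        # what the agent sees next depends on everything it did before
        return "captcha_challenge" if self.state["flagged"] else "normal_page"

env = StatefulMockEnv()
for i in range(4):
    env.add_to_cart(f"item-{i}")
assert env.get_page() == "captcha_challenge"  # the compounding signal, mocked
```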
also, does your session-level judge handle cases where the correct behavior is adaptive - where the agent should change strategy mid-session based on what it encountered? that feels like a harder eval problem than a fixed expected outcome.
- Any ideas on how to solve the problem that agents lack real common sense?
I have found, when using agents to verify agents, that the verifying agent can observe something a human would immediately find off-putting and obviously wrong, yet it raises no flags - the verifier is smart but dumb.
by niko-thomas
0 subcomments
- We've tried a few platforms for voice agent testing and Cekura has been the best by a long shot. Keep up the great work!
by sidhantkabra
0 subcomments
- Was really fun building this - would love feedback from the HN community and get insights on your current process.
by chrismychen
1 subcomment
- How do you handle sessions where the correct outcome is an incomplete flow — e.g. the agent correctly refuses to move forwards because the caller failed verification, or correctly escalates to a human?
- we treat each scenario as an explicit state machine. every conversation has checkpoints (ask for name, verify dob, gather phone) and the case only passes if each checkpoint flips true before the flow moves on. that means if the agent hallucinates, skips the verification step, or escalates to a human too early you get a session-level failure, not just a happily-green last turn. logging which checkpoint stayed false makes regressions obvious when you swap prompts/models.
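roughly the shape of it (a simplified illustration, not our production code):

```python
# pass only if every checkpoint flips true, in order, before the flow
# moves on; log the first checkpoint that stayed false
CHECKPOINTS = ["ask_name", "verify_dob", "gather_phone"]

def evaluate_session(turn_intents: list[str]) -> tuple[bool, str | None]:
    reached = 0
    for intent in turn_intents:
        if reached < len(CHECKPOINTS) and intent == CHECKPOINTS[reached]:
            reached += 1
    if reached == len(CHECKPOINTS):
        return True, None
    return False, CHECKPOINTS[reached]  # the checkpoint that stayed false

# agent skipped verification -> session-level failure at 'verify_dob',
# even though the final turn looked happily green
assert evaluate_session(["ask_name", "gather_phone", "process"]) == (False, "verify_dob")
```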
- congrats on the launch! do you guys have anything planned to test chat agents directly in the ui? I have an agent but no exposed api, so I can't really use your product even though I have a genuine need.
- Testing voice agents would require some kind of knowledge integration. Do you have any plans to support custom knowledge bases for test voice agents?
by michaellee8
1 subcomment
- Interesting - I built https://github.com/michaellee8/voice-agent-devkit-mcp exactly for this: launch a chromium instance with virtual devices powered by Pulsewire, then hook it up with tts and stt so that playwright can finally have a mouth and ears. Any chance we can talk?
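For anyone curious, the simplest built-in variant of the idea looks like this (my repo does the fuller virtual-device route; this assumes `pip install playwright` plus its chromium download):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(args=[
        "--use-fake-ui-for-media-stream",      # auto-grant mic permission
        "--use-fake-device-for-media-stream",  # virtual mic/camera devices
        "--use-file-for-fake-audio-capture=tts_output.wav",  # the "mouth"
    ])
    page = browser.new_page()
    # page.goto(...) the voice agent's ui, then capture its audio reply
    # and run it through stt for the "ears" half
    browser.close()
```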