Resonance Labs
AI agents are about to enter daily human life at an unprecedented scale. Every organization with a phone number will deploy an AI agent to answer it: every call center, every national helpline, every customer service line, and every outreach campaign. The character of those conversations will define the level of trust entire generations have in AI. Yet there are no dedicated labs training models for what those conversations actually demand.
We are Resonance Labs, an applied research lab teaching AI systems how to understand, talk to, and advocate for the people they're meant to help.
In 2023, our founding team built the conversational and clinical intelligence behind the first AI Care Manager in the country, then wrote the playbook for safely scaling our empathetic voice agent in production with enterprise clients across every healthcare domain: leading agent go-lives in 3 days and scaling from 0 to 30k patients in 2 weeks. We're researchers who have managed 1000+ AI agent call centers.
Since 2024, we have watched:
- Every frontier model upgrade be optimized for software engineering and regress on conversational ability.
- 500 voice AI startups each reinvent the same agent skills from scratch. Identity verification, form info collection, noise handling, intent recognition for transfer: 500 startups solving the same problems 500 different ways.
- Successful pilots stall at scale for the same reason: the underlying model doesn't know how to have a conversation.
- The same use cases and features lose deal desks across leading startups. Small clinics, low-volume outreach, women's health, pediatrics, non-English speakers: the populations the industry is walking away from are the ones that most need AI help, and the technical challenges these use cases surface produce the most defensible capabilities.
We are coming together to build Resonance Labs, a dedicated applied research lab for the missing frontier intelligence every voice AI builder is betting on: conversational intelligence — the 80% every voice agent needs, so builders can focus on the 20% that wins their market.
Conversational intelligence is a third profile of AI capability — alongside deliberation (reasoning, code, math) and domain depth (clinical, legal, financial). It is the ability to do the job through human-paced dialogue, under a hard latency budget. It has its own optimization target, and no foundation lab is pointed at it.
Today's voice AI stacks treat experience as something you can prompt. You cannot. Experience is trained.
We are training that intelligence, so the next generation of AI systems:
1. Understand how to talk to people.
2. Advocate for the person on the other end of the line.
3. Improve based on resolution outcomes from production.
4. Are built and verified for specific roles, not general use.
We are building the foundational intelligence to enable the next generation of conversational AI Agents in every domain that requires human connection. An advocate makes you feel heard AND gets the job done. These goals are not in tension. They are the same.
Conversational intelligence drives the best outcomes for organizations too: higher resolution, fewer escalations, deeper trust. User trust is the winning strategy. We are building for it.
Every use case that requires human connection is our market. We've seen the standard from deploying in healthcare. We will carry the same rigor into every domain where humans and AI meet.
We believe the same reinforcement learning paradigm that transformed coding AI will transform conversational AI. This requires a lab to build (1) a model training method that converts conversations into a verifiable learning signal and (2) a production-grade harness that makes the resulting models usable and scalable for AI Agent jobs in the real world. We are that lab.
Coding intelligence today:
- Market: millions of developers spending the majority of their time on repetitive, structured, high-value tasks.
- Verifier: unit tests check whether the code passes. Once DeepSeek's RLVR paper demonstrated that verifiable reward signals could drive dramatic model improvement through reinforcement learning, the path became clear: labs could generate training data at scale by building feedback loops where correctness can be checked automatically.
- Result: coding agents went from novelty to production tools generating $1B in revenue in under two years.

Conversational intelligence today:
- Market: every use case that requires human connection.
- Verifier: conversation has no equivalent of "the test passes". The rubric-as-verifier paradigm, proven on safety, math, science, and the humanities, does not yet work on conversations. Without a method that converts conversation into a reliable learning signal, reinforcement learning is not tractable.
- Result: best-in-class voice agents are running on base models from 2024. Those that have upgraded to newer models hit a tradeoff: either resolution rates drop, or agent scope shrinks to preserve quality.
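To make the "test passes" analogy concrete, here is a minimal sketch of a code-side verifiable reward of the kind RLVR-style training optimizes. All names are illustrative, not any lab's actual pipeline: the candidate program and its tests are combined and executed, and the binary pass/fail result becomes the reward.

```python
import os
import subprocess
import sys
import tempfile

def code_reward(candidate_source: str, test_source: str) -> float:
    """Binary verifiable reward for a code sample:
    1.0 if the candidate passes its tests, 0.0 otherwise."""
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "check.py")
        with open(path, "w") as f:
            # Run the candidate followed by its assertions in one script.
            f.write(candidate_source + "\n" + test_source + "\n")
        result = subprocess.run([sys.executable, path], capture_output=True)
        # A non-zero exit code (e.g. a failed assertion) means reward 0.
        return 1.0 if result.returncode == 0 else 0.0

tests = "assert add(2, 3) == 5"
code_reward("def add(a, b):\n    return a + b", tests)  # 1.0
code_reward("def add(a, b):\n    return a - b", tests)  # 0.0
```

The point of the sketch is how cheap and unambiguous this signal is; conversation has no drop-in replacement for that exit code.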
Our thesis: Conversational AI has the market. What it lacks is the verifier — an equivalent of "the test passes" for conversation. We are building it.
Our unlock, transforming conversations into verifiable dimensions, requires three intentional choices:
1. Bound the role to a measurable job definition so every agent has a knowable shape.
2. Score conversations across rubric dimensions co-designed with the domain experts who perform and evaluate the work today.
3. Train against production outcomes like resolution, escalation, and satisfaction rates to turn every deployed call into a learning signal for model training.
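The three choices above can be sketched as a toy reward function. Every name, rubric dimension, and weight below is an illustrative assumption, not our production design: a bounded job definition fixes which rubric dimensions apply, expert-designed rubric scores are averaged by weight, and production outcomes are blended in to yield a single scalar that reinforcement learning can optimize.

```python
from dataclasses import dataclass

@dataclass
class JobDefinition:
    """Choice 1: a bounded, measurable role fixes the rubric's shape."""
    role: str
    rubric_weights: dict[str, float]  # dimension -> weight, summing to 1.0

@dataclass
class Conversation:
    """One scored call, combining rubric judgments and production outcomes."""
    rubric_scores: dict[str, float]  # Choice 2: expert-co-designed, each in [0, 1]
    resolved: bool                   # Choice 3: production outcomes
    escalated: bool
    satisfaction: float              # e.g. post-call survey, in [0, 1]

def reward(job: JobDefinition, convo: Conversation,
           outcome_weight: float = 0.5) -> float:
    """Blend the weighted rubric score with production outcomes."""
    rubric = sum(w * convo.rubric_scores.get(dim, 0.0)
                 for dim, w in job.rubric_weights.items())
    outcome = (convo.satisfaction
               + (1.0 if convo.resolved else 0.0)
               + (0.0 if convo.escalated else 1.0)) / 3.0
    return (1 - outcome_weight) * rubric + outcome_weight * outcome

job = JobDefinition("patient intake", {"empathy": 0.4, "accuracy": 0.6})
call = Conversation({"empathy": 1.0, "accuracy": 0.5},
                    resolved=True, escalated=False, satisfaction=0.8)
reward(job, call)  # ≈ 0.817
```

A real verifier is of course far harder than this arithmetic; the sketch only shows where each of the three choices enters the signal.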
If this resonates, we'd love to hear from you.
To request our full research memo or connect, reach out at contact@theresonancecompany.com