the resonance company
An applied research lab for conversational intelligence
Our goal is to make conversational intelligence a trainable, verifiable, systematically improvable model capability.
AI is getting smarter every month. It is getting smarter at the wrong things.
Since 2024, frontier labs have shifted optimization toward reasoning, coding, and factual recall — domains where correctness is easy to verify and reinforcement learning scales cleanly. This has produced remarkable gains on benchmarks. It has also left a gap.
The ability to navigate a real conversation — to read hesitation, handle interruption, recover from misunderstanding, drive toward resolution — remains largely untrained. The models getting smarter are not getting smarter at this.
The quality of any outcome is bounded by the quality of the conversation that produced it. We are building the research infrastructure to close this gap.
Why Voice
Every business with a phone number is becoming an AI-native call center. Voice AI has crossed the chasm — scheduling appointments, resolving claims, triaging support across healthcare, insurance, and financial services. The demand is structural and accelerating.
The technology is not ready.
Every major model in production today was built for chat — optimized for turn-by-turn text exchanges where latency is measured in seconds and users tolerate verbose, exploratory responses. Voice is a fundamentally different domain. Latency budgets are ten times tighter. Turn structure is fluid. Resolution — not satisfaction — is the metric.
The industry is scaling the AI equivalent of Norman doors: systems that look capable but systematically fail their users because the design ignores how humans actually interact.
Belugas navigate the deep ocean through sophisticated vocal coordination — calls, clicks, and whistles that carry meaning across vast, dark water. Their survival depends not on any single individual's capacity, but on the quality of communication between them. AI systems should work the same way.
Distributed Cognition
The frontier model race is built on a premise: build one general intelligence powerful enough, and it will handle everything. Decades of cognitive science tell a different story.
Edwin Hutchins studied how Navy navigation teams solved complex problems under pressure. His finding was counterintuitive: the intelligence wasn't located in any single person. It lived in the system — the interactions between people, tools, and environment. He called this distributed cognition.
The parallel to aviation, a domain Hutchins also studied, is instructive. A pilot's expertise alone does not fly the plane. The cockpit as a system does: the instruments, the crew, the checklists, the procedures.
Voice AI is no different. Call resolution emerges from three layers working in concert: the model's conversational ability, the engineering infrastructure that gives it hands, and the implementation layer that connects it to an organization's workflows. A smarter model on a brittle system will still transfer the call too early. A well-engineered system on a conversationally weak model will still lose trust.
The intelligence is distributed. The gap must be closed at every layer simultaneously.
Empathy as a Science
Conversational quality has been treated as subjective, emergent, unmeasurable. It is none of these.
Decades of research in motivational interviewing, behavioral economics, and human-computer interaction have produced structured, measurable techniques that change outcomes. Higher adherence. Lower escalation. Stronger retention. These are not soft skills. They are patterns that have been used to train humans for thirty years. They can be taught to agents.
The same paradigm that made models better at math and code — define a verifiable reward signal, then train against it — can make them better at conversation. The reason it hasn't been done is not that it can't be. It's that no one has built the verification infrastructure to make it work.
We are building it.
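A miniature sketch of what "verifiable" means here. The rubric below is purely illustrative — hypothetical checks loosely inspired by motivational-interviewing patterns, not our actual training recipe — but it shows the key property: each criterion is a deterministic function of the transcript, so the score can serve as a reward signal to train against.

```python
# Illustrative only: a toy verifiable reward for a single agent turn.
# The check names, phrase lists, and equal weighting are all hypothetical.

def reflects_before_advising(turn: str) -> bool:
    """Crude check: does the agent reflect the caller before giving advice?"""
    lowered = turn.lower()
    reflected = any(p in lowered for p in ("it sounds like", "i hear", "you're saying"))
    advises = any(p in lowered for p in ("you should", "i recommend", "try "))
    return reflected or not advises

def within_voice_budget(turn: str, max_words: int = 40) -> bool:
    """Voice turns must stay short; verbose chat-style answers score poorly."""
    return len(turn.split()) <= max_words

def asks_open_question(turn: str) -> bool:
    """Rough proxy for an open question, a core motivational-interviewing move."""
    lowered = turn.lower()
    return turn.rstrip().endswith("?") and any(
        lowered.startswith(w) or f" {w} " in lowered
        for w in ("what", "how", "tell me")
    )

def conversational_reward(turn: str) -> float:
    """Scalar reward in [0, 1]: every component is independently checkable."""
    checks = [
        reflects_before_advising(turn),
        within_voice_budget(turn),
        asks_open_question(turn),
    ]
    return sum(checks) / len(checks)
```

A real system would replace these string heuristics with trained verifiers scored against human-coded transcripts, but the shape is the same: a conversation comes in, a number comes out, and reinforcement learning can optimize against that number.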
The Team
We are voice AI operators who identified the model layer as the missing piece. Coming from operations rather than pure research shapes everything: how we build, what we prioritize, and what we already know.
Our founding team has built, deployed, and operated voice AI systems at scale in healthcare — a domain where a wrong tone or a missed emotional cue has direct human consequences. We developed training recipes that treat empathy as a measurable, improvable capability. We are now generalizing that work into a training paradigm for conversation itself.