Chris Aeberli on Building Sonia, the Voice-Based AI Therapist, and the “Arbitrage of Happiness” Driving His Mission
“There are so many opportunities to ‘arbitrage happiness.’ It’s kind of a funny phrase, but I think it’s real. There are these little things you can do that take five minutes and can have a five-day or even five-week positive impact… We as humans could collectively arbitrage happiness and really increase net happiness in the world.”
I had the pleasure of talking with Chris Aeberli. Chris is a Swiss computer scientist and entrepreneur, and the co-founder of Sonia, an artificial intelligence company focused on expanding access to mental healthcare. A graduate of the Massachusetts Institute of Technology and ETH Zurich, Aeberli has combined a deep technical foundation with personal convictions about emotional well-being, leading to his current work in building AI systems designed to support mental health at scale.
Born and raised in Switzerland, Aeberli moved to a different region of the country as a child, where he first learned English and began to develop an early interest in mathematics and logic. He considered a future in professional sports, first in ice hockey, then tennis, before deciding to pursue intellectual rather than athletic goals. By adolescence, he was drawn to probability theory and game mechanics, interests that would eventually shape both his academic and professional life.
Aeberli completed his undergraduate studies at ETH Zurich, one of Europe’s leading science and technology universities, where he earned a bachelor’s degree in Computer Science with top honors. He then enrolled at MIT for a master’s degree in Data Science, graduating with a perfect 5.0 GPA. While at ETH and MIT, Aeberli worked on a variety of research topics, including reinforcement learning algorithms for adaptive decision-making and optimization modeling with Liberty Mutual.
Outside the classroom, Aeberli pursued professional poker, motivated by both curiosity and his interest in game theory. He played cash games across Europe and in Las Vegas, participating in private tables that occasionally included celebrity figures. A significant portion of his winnings went to charitable causes. He later cited the 2008 film 21, about MIT students who count cards to win at blackjack, as a formative influence on his early goals, inspiring both his poker ambitions and his pursuit of a degree at MIT.
In the final days before graduating from MIT, Aeberli reached out to alumni, seeking mentorship and support as he prepared to launch a startup. One such outreach led to a pivotal opportunity: a Bay Area founder offered Aeberli temporary housing and office access, allowing him to immerse himself in startup life immediately after graduation. For several months, Aeberli lived and worked in the company’s headquarters, using that time to experiment with early product ideas and engage directly with potential users.
Eventually, Aeberli reconnected in Boston with two close university peers, Lukas and Dustin, both fellow ETH Zurich graduates and MIT-trained AI researchers. The trio began working on what would become Sonia, a voice-based AI platform aimed at delivering structured, evidence-based therapy support. The team applied to and was accepted into Y Combinator, a prominent Silicon Valley startup accelerator, and subsequently raised $3.5 million in seed funding from investors including the founders of Reddit, Instacart, Verkada, and Paradigm.
Sonia’s technology centers on replicating core principles of structured therapy, such as cognitive behavioral therapy (CBT), dialectical behavior therapy (DBT), and acceptance and commitment therapy (ACT), through natural voice-based interactions. According to Aeberli, the system does not aim to replace human therapists but rather to address gaps in accessibility and scalability. The AI is designed to support individuals who may not otherwise have access to care and, in some cases, to provide complementary support between sessions or before human therapy begins.
One of the challenges Sonia has taken on, Aeberli explains, is not just simulating empathetic conversation, a capability many large language models can achieve, but replicating the deeper reasoning processes that therapists apply over long-term care relationships. The company has been working with clinicians and researchers, including a Stanford-based assistant professor, to design AI systems that analyze therapeutic progress, identify client needs over time, and tailor interventions accordingly.
While still in its early stages, Sonia’s approach is heavily focused on safety, evidence-based outcomes, and long-term research. The company is conducting controlled trials in partnership with academic institutions and has stated that its primary performance metrics are not engagement alone, but measurable reductions in symptoms of anxiety and depression.
Aeberli describes the company’s target users as those who fall between casual wellness-seekers and individuals in acute crisis. This middle segment, often underserved by both traditional therapy and consumer wellness tools, is where Sonia aims to make its greatest impact. He has also suggested that AI systems like Sonia may eventually work in tandem with human therapists, offering support during waitlists or after hours, and even providing summaries that can inform live sessions.
In describing his long-term vision, Aeberli has emphasized what he refers to as the “arbitrage of happiness”: the notion that small, intentional actions can yield outsized emotional returns for others. It is a personal philosophy that he traces back to childhood experiences with bullying, and one that he hopes can be embedded into the technology his team is building. While careful not to position AI therapy as a complete substitute for human care, Aeberli has stated his belief that AI can significantly expand the emotional and psychological support available to people globally.
Now based in the United States, Aeberli continues to lead Sonia’s product development while contributing to broader conversations about AI ethics and mental health. He remains active in mentoring early-stage founders and is vocal about the importance of combining technical rigor with moral responsibility, particularly in fields where technology touches human emotion.
Yitzi: It’s a delight to meet you. Before we dive in, our readers would love to learn about your personal origin story. Can you share with us a story of your childhood and how you grew up?
Chris: Sure, happy to. I grew up in Switzerland. I was born and raised there, originally in Zurich. Then, when I was around seven years old, we moved to a different part of the country. That’s where I first learned English, because I didn’t speak a word of it. That was a tough phase, but also a cool experience.
I was really into sports as a kid. My plan was to become a professional athlete. First, it was ice hockey, then tennis. Eventually, I realized I probably wouldn’t go pro, so I decided to try and use my mind instead to do something meaningful. That shift happened around the age of 12 or 13.
Even back then, I was always super passionate about math, probability, and game theory. I was the kind of kid who, when lying in bed with my parents at six years old, wanted them to ask me math riddles. Very stereotypical in some ways. But it was also funny, because growing up in Zurich, which is very finance- and consulting-heavy, no one was really into tech or nerdy stuff. I didn’t even know a world like Silicon Valley existed, where people thought math was cool. That’s kind of sad in hindsight, because I felt like, to be cool, I had to pretend I didn’t like math. So I went through a few years of trying to ignore that side of myself.
As I got older, I went to high school and started studying computer science at ETH. It was incredibly theoretical, but it gave me a really strong foundation. Around the age of 18, I realized there were two things I was really passionate about. One was building a tech company, because I’m super competitive and scaling technology felt exciting. The second was making as many people smile as I possibly could.
It’s kind of funny because those two goals don’t always seem aligned when you’re building companies. For a while, I thought my life would happen in two phases: first I’d build my tech company until I was 35, and then I’d focus on philanthropy. Back then, I was just exploring and taking a lot of time thinking through different ideas, not knowing what company I would build. Of course, now with Sonia, those two goals are perfectly aligned and I can chase them simultaneously. We build a company by making people happy, and we make people happy by building a company.
Back in undergrad, there was one movie that had a big impact on me, 21. It’s about MIT students who learn to make money playing Blackjack to pay for tuition. For some reason, I really strongly identified with the main character, honestly more than with any real person I met up to that point in my life. So I set myself two goals: get into MIT for grad school and learn how to make money with a card game. I thought it was the perfect blend of game theory, probability, math, applied in a really hands-on way.
For the next four or five years, I focused on those two things. I managed to get into MIT and also learned the foundations of poker. Both goals worked out. I did my Master’s at MIT, played poker in various places around the world, made good money with it, and donated a lot of it to charity. I had some really fun and crazy experiences, especially in private games.
At MIT, my focus was figuring out what I wanted to build and who I wanted to work with. I spent a lot of time hacking around Boston, trying to meet co-founders. My Master’s was in AI, but I was doing that more part-time, maybe one or two days a week.
Unfortunately, my plan to have everything figured out by graduation didn’t work out. About a week before graduating, I still didn’t know what I was going to work on. That was a stressful time. My parents were already in town and asking why I wasn’t going to Google, which was kind of the expectation.
So I did this thing where I made a list of successful MIT alums and cold-emailed them asking if they could teach me how to build a company. The emails were super blunt. I wrote something like, “Here’s who I am. I want to build a company like you. Can you please give me a bed and a desk while I figure it out?”
It was pretty desperate, just a few days before graduation, but, miraculously, a lot of people replied. Some offered advice, some introduced me to investors, and some even offered money. One guy in particular stood out. He replied, “Call me tomorrow morning at 5 AM.” So, of course, I stayed up all night doing research on him, trying to prepare. I called him at 5 AM, which already says a lot about him, and we had a seven-minute conversation. He asked about Switzerland, where I was from, why I wanted to build a company, and then he had to go.
Two days later, just before graduation, he emailed me saying something like, “Okay, you can come here. We’ll give you a bedroom, and you can work from here.” It was incredible. A week after graduation, I flew to San Francisco for the first time. He was running a Series D company with a huge headquarters and apartments on the top floor. He gave me an apartment upstairs and desks in their new products team.
I wasn’t officially working for the company, but I was sitting close to the CEO and engineering leaders, learning from a world-class team. I ended up spending the next nine months in that building without really leaving. They had amazing catering, I had my bed upstairs, and my desk downstairs. And that was basically my entire life.
I worked on all sorts of ideas during that time. Just the early scrappy days, coming up with ideas, pitching them to people on the street, getting rejected, feeling down, then waking up the next day to try again. It was also pretty lonely, but I guess that’s part of it.
Then two of my close friends from undergrad, Dustin and Lukas, also came to MIT. They’re now my co-founders. We have a pretty similar path: we all studied computer science at ETH and later moved to the US to pursue AI at MIT. I told Dustin, “We should work together,” because at that point, I realized what I needed most was a co-founder I really trusted and enjoyed being around. I knew the next 10 years of my life would be mostly work, so I wanted to do it with someone smart, hard-working, and kind. I flew back to Boston, sat Dustin down in a cafe, and said, “Let’s start a company together.” He was committed to joining Google at the time, but thankfully I was able to convince him.
So I moved back to Boston and started hacking on ideas with him and later also Lukas in the labs of MIT. That’s when we began working on Sonia, an AI therapist for mental health.
The idea came from several directions. First, seeing how massive the problem of mental health is. I already saw a lot of it growing up in Switzerland. And then, coming to the US, I realized even more people are struggling, yet it’s even harder to access care. That struck me as a huge problem.
At the same time, all of us have strong backgrounds in AI. And we felt that AI had finally reached a point where it could enable really engaging, empathetic, and natural conversations. So it felt like a no-brainer to bring those two areas together.
Personally, it also made me realize that maybe I could combine my two life goals, building a tech company and making people smile, into one. Since then, we haven’t looked back. We’ve now been working on Sonia for about a year and a half.
Yitzi: Unbelievable. It’s an amazing story and you’re an amazing storyteller. Tell us the next chapter. Tell us about the company, what your big idea was, and what you’ve built so far.
Chris: Our idea is really to make mental healthcare accessible to anyone, anywhere, anytime. To zoom out a bit, I’m a huge believer in therapy. I think great human therapists are almost like magicians. At one point, I thought there was no way anyone could ever replicate what they do because they’re clearly such special people.
But as I started doing more research, I realized there’s actually a lot of structure and theory behind therapy, especially certain types like cognitive behavioral therapy, DBT, ACT, and so on. They follow a logical framework. It’s not as magical as I thought, it’s just really smart and sophisticated. But there is a structure. That led us to think, if there’s structure, and we come from this algorithmic world, how can we get a computer to implement and follow that structure in an effective and safe way?
That became our starting point. And it’s where the mission comes from: how can we take the knowledge of a top human therapist and make it far more accessible? Because top therapists are rare, and way more people need their help than there are therapists available. If we can take that knowledge, that competence, and the incredible impact they have, and offer it to people who can’t access them, that felt deeply meaningful.
The first thing we did was quickly build a prototype and test it. Obviously in a safe way, not your typical “move fast, break things” approach. We shared it only with friends and people we trusted. And since day one, we’ve had strong emergency detection protocols in place, because risk and safety are huge issues in this space.
Pretty quickly after that, we got accepted to Y Combinator, an accelerator here in the Bay Area. We got our first half a million in funding. At the time, we were based in Williamsburg, New York, but we flew to San Francisco and went through the three-month program. The whole goal there is traction and growth. So during that time, we were focused on getting the first few thousand users, gathering feedback, and iterating as fast as we could. We basically locked ourselves in an apartment and worked nonstop.
After that, we raised our seed round, three and a half million dollars from funds and top angels around the world, many of whom are founders themselves, including the CEO of the startup I lived at after MIT. He’s been an incredible mentor.
Then we went back to work. One thing that’s really important to understand is we don’t just want to be another consumer app that supports people with mild stress or someone having a bad day. That’s meaningful, of course, but even something like ChatGPT can be great for that. Our mission goes deeper. We want to understand real mental health conditions and figure out how to truly address them. That includes understanding how to conceptualize these conditions, what kinds of memory and architecture are required, and building a fully custom AI engineering stack.
We also started expanding our clinical team and are working closely with an amazing professor from Stanford to really get a deep understanding of what good therapy looks like, while our engineering team works out how to use AI models to apply that understanding.
In some ways, we’ve become almost like a research lab. We’re doing everything in a really controlled way. The main thing we optimize for is effectiveness. We look at measurable mental health outcomes for anxiety and depression, not just engagement. Of course, engagement matters too, because if people don’t use the app, they won’t get better. But our core focus is: can we create a significant impact?
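The interview doesn’t name Sonia’s instruments, but anxiety and depression outcomes are commonly tracked with standardized questionnaires such as the GAD-7 and PHQ-9. A minimal sketch, under that assumption, of what “measurable outcomes, not just engagement” can look like:

```python
# Minimal sketch of outcome tracking, assuming standardized questionnaires
# like the GAD-7 (anxiety, scored 0-21) and PHQ-9 (depression, scored 0-27).
# The interview doesn't specify Sonia's actual instruments or thresholds.

def percent_improvement(baseline: int, followup: int) -> float:
    """Relative symptom reduction from intake to follow-up."""
    if baseline == 0:
        return 0.0
    return 100.0 * (baseline - followup) / baseline

# Example: a GAD-7 score dropping from 14 (moderate) to 7 (mild anxiety).
print(f"{percent_improvement(14, 7):.0f}% reduction")  # prints: 50% reduction
```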
Yitzi: That’s amazing. Just to clarify, what exactly is the UI? Is it text-based, video-based, or avatar-based?
Chris: It’s really voice-based AI therapy conversations. You can open the app and start a session immediately. It’s a natural voice conversation, like the one we’re having right now. There’s no tap-to-hold or press-to-speak. It knows when you’ve stopped talking, and it even tries to predict if a pause is just you thinking, based on the context of what you’ve said. So we’ve built a fully custom voice-to-voice interaction.
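Sonia hasn’t published its voice stack, so the following is only a sketch of what context-aware end-of-turn detection can look like: a silence timer whose threshold stretches when the transcript so far suggests the speaker is mid-thought. The heuristic and all thresholds here are hypothetical stand-ins for a trained classifier.

```python
import time

# Hypothetical sketch of context-aware end-of-turn detection; the real system
# presumably uses a trained model rather than this keyword heuristic.

BASE_SILENCE_S = 0.7      # end the turn quickly after a complete utterance
THINKING_SILENCE_S = 2.5  # wait longer when the user seems mid-thought

def looks_unfinished(transcript: str) -> bool:
    """Cheap stand-in for a pause classifier: trailing fillers or dangling
    conjunctions suggest the speaker is still thinking."""
    tail = transcript.rstrip().lower()
    return tail.endswith(("um", "uh", "and", "but", "because", "so", ","))

def should_end_turn(transcript: str, silence_started: float) -> bool:
    """Decide whether the current pause means 'done' or 'still thinking'."""
    silence = time.monotonic() - silence_started
    threshold = THINKING_SILENCE_S if looks_unfinished(transcript) else BASE_SILENCE_S
    return silence >= threshold
```

A session loop would call something like `should_end_turn` on each audio frame once silence begins, handing the turn to the response model only when it returns true.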
We’re experimenting with video, but it’s not quite at the level yet where it feels natural enough. And there are ethical considerations too. Do you even want to create the illusion that someone is talking to a human? On one hand, we’re very clear, this is not a human, it’s AI. It has limitations. It’s not meant for emergencies. But at the same time, you still need some sense of trust and openness, or people won’t feel comfortable enough to share.
Yitzi: That’s fascinating. You mentioned that ChatGPT can do similar things. How is your AI more advanced than ChatGPT when it comes to voice conversations?
Chris: I think the way tools like ChatGPT and other foundational models are built is that they serve as helpful assistants. Someone asks a question, they get an answer. Someone has a problem, they get a quick fix. And that works really well for a lot of use cases, like helping someone write a message or book a trip. It also works for very light emotional support.
If someone says, “I’m super nervous about my test tomorrow, what can I do?” ChatGPT can be amazing at responding with, “Try meditating for a few minutes,” or, “Visualize your exam.” And that’s genuinely helpful.
But therapy is different. In therapy, you need to conceptualize a client over a much longer time horizon. It’s not just about giving quick solutions. It’s about understanding where these emotions and thoughts are coming from.
So in the example of test anxiety, sure, the first step is helping you through the test tomorrow. But the deeper work is exploring why you’re anxious in the first place. What’s your academic history? Where might those fears come from? How has that shaped your mindset? How does it continue to influence you?
After the test, most people wouldn’t return to ChatGPT, because the immediate issue is resolved. But for us, that’s where the real therapeutic work begins.
We also separate those modes. People are usually impatient. They come to the app because they’re in pain or stressed, like the night before a test. When you’re doing well, you don’t feel the need to talk to an AI therapist. So we try to be helpful in that moment, offering something emotionally tuned and useful. But then we also encourage the user to go deeper. Sonia might say, “Hey, these are a few things you can do right now, but it would be incredibly helpful to explore this further. Let’s understand where this is coming from.”
That’s where our proprietary system comes in. A rich library of evidence-based emotional-regulation techniques, proprietary datasets, and structured knowledge graphs feed AI models that don’t just generate responses; they also run in the background, continuously analyzing the client’s patterns and helping determine which direction to take the conversation. What kind of client is this? What type of path should we take? How have certain interventions resonated in the past? It’s a specialized architecture built around therapy-specific interventions, memory, and the deeper conceptual understanding that long-term mental healthcare requires.
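As a rough illustration of that “respond in the foreground, analyze in the background” pattern, here is a minimal asyncio sketch. Every name in it is a hypothetical placeholder; Sonia’s actual architecture is proprietary.

```python
import asyncio
from dataclasses import dataclass, field

# Illustrative only: one task generates the reply while a parallel task
# updates a longer-term model of the client. All functions are placeholders.

@dataclass
class ClientProfile:
    themes: list = field(default_factory=list)                 # recurring topics
    helpful_interventions: list = field(default_factory=list)  # what resonated

async def generate_reply(utterance: str, profile: ClientProfile) -> str:
    # Placeholder for the response model, conditioned on the profile so far.
    return f"(empathetic reply informed by themes: {profile.themes})"

async def analyze_in_background(utterance: str, profile: ClientProfile) -> None:
    # Placeholder for slower analysis: spot patterns, update the profile.
    if "test" in utterance.lower():
        profile.themes.append("performance anxiety")

async def handle_turn(utterance: str, profile: ClientProfile) -> str:
    # Run both concurrently; the reply never waits on the deeper analysis.
    analysis = asyncio.create_task(analyze_in_background(utterance, profile))
    reply = await generate_reply(utterance, profile)
    await analysis  # ensure the profile is updated before the next turn
    return reply

if __name__ == "__main__":
    profile = ClientProfile()
    print(asyncio.run(handle_turn("I'm so nervous about my test", profile)))
```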
Yitzi: A devil’s advocate would ask, why can’t a custom GPT, or something like Gemini with custom instructions, do the same thing? How come yours is better?
Chris: Of course. Even with custom instructions, that’s really just a single prompt where you give it one set of guidelines and then it acts accordingly. But think about what a real therapist does in a conversation. What makes therapists so remarkable is that they carry an ongoing thread throughout the conversation, but at the same time, they’re running multiple parallel processes. They’re constantly evaluating: Where are we in the conversation? Is it heading in the right direction? How does this connect to what I know about this person from earlier? Where can we take the conversation next?
For us, the main model that generates the response isn’t where the core innovation lies. Speaking like a therapist is actually pretty easy. The hard part is thinking like a therapist. If you just start a natural conversation with a language model, none of those deeper processes even run.
We obviously work on making the model speak naturally too, and that’s important, but the real challenge is building the architecture, almost like the brain of a human therapist. How do you capture all those hundreds of subtle, implicit things they do?
Yitzi: That’s fascinating. So, did you create your own foundational model, or are you using OpenAI, Gemini, or another company’s model as the base?
Chris: We’re not building our own foundational models. We’re building on top of existing ones. We do a lot of fine-tuning and use a variety of models. But I think the biggest advantage doesn’t come from the foundational model itself.
Like I mentioned, speaking like a therapist is something foundational models are already quite good at. A lot of them have been trained on therapy transcripts or adjacent material. But what doesn’t exist in the world is a dataset on how therapists think. Even if you had thousands or millions of transcripts, you could build a model that’s very empathetic, but empathy alone isn’t the hard part. ChatGPT, for example, is already quite empathetic.
What’s really hard, and what’s missing, is the ability to think like a therapist. That dataset just doesn’t exist. So we’re building it ourselves, by modeling these thought processes explicitly, labeling them with top clinicians and then fine-tuning on that dataset.
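There’s no public specification for that dataset, but a clinician-labeled record might look something like the sketch below: the visible dialogue plus the hidden reasoning a therapist runs before replying. Every field name here is invented for illustration.

```python
import json

# Hypothetical shape of one clinician-labeled training example: the dialogue
# a user sees, plus the reasoning a therapist performs before responding.
# The schema and content are illustrative, not Sonia's actual data.

record = {
    "dialogue": [
        {"role": "client", "text": "I'm super nervous about my test tomorrow."},
    ],
    "therapist_reasoning": {
        "case_conceptualization": "Recurring performance anxiety; history of "
                                  "harsh self-evaluation around academics.",
        "session_goal": "Stabilize for tomorrow, then surface the deeper pattern.",
        "candidate_interventions": ["grounding exercise", "cognitive reframe"],
        "chosen_intervention": "cognitive reframe",
        "rationale": "Reframes have resonated in past sessions; grounding has not.",
    },
    "therapist_reply": "Before we talk strategies, can I ask what a bad result "
                       "on this test would mean about you?",
}

# One JSONL line per labeled turn, ready for supervised fine-tuning.
print(json.dumps(record))
```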
Yitzi: Unbelievable. So, you said you’re in this sweet spot, the Goldilocks zone, where people with acute needs should definitely be seeing a real-life therapist, but you’re focused on people who have a general need, something important but not necessarily life-or-death. So is the goal to serve as a kind of band-aid until they can see a therapist, or is the idea that your system could fully help them and not need a therapist at all? Are you trying to replace human therapists, or act more like a bridge?
Chris: Yeah. It’s funny, I get that question a lot, and honestly, I barely even think about it in those terms. What I focus on is, how can “we”, which includes companies like ours, therapists, and everyone in the mental health space, have the most positive impact on as many people around the world as possible? That’s the real question we should be optimizing for.
As for the idea of “Do we want to replace therapists?”, of course not. But more importantly, it’s not even the right framework. From a business perspective, it just doesn’t matter. There’s so much more demand than supply. We’re not competing with therapists at all.
I think the space will naturally evolve. There will always be people who really benefit from speaking only to a human therapist. But there are also people who genuinely prefer talking to an AI. One big thing we’ve seen often is that people feel way more comfortable opening up to an AI. Many of them say, “I’ve never felt this safe talking to a human.”
I do think that, over time, even people with moderate or more serious mental health conditions could be effectively supported by an AI. We’re not fully there yet, but it’s getting closer.
There’s also the possibility of combining both. Some people love their weekly session with a human, but what if they’re overwhelmed with anxiety at 2 a.m.? An AI could be there to support them in that moment. Or think about people on long waitlists. So many therapists in the U.S. have waitlists that stretch for months. Right now the answer is, “Sorry, we’ll talk in six months,” and that person just goes on with life, unsupported, during that time.
There’s a huge opportunity for alignment between therapists and AI services. While someone is waiting, they could use an AI tool that provides meaningful support. Then, when they do see the therapist, the therapist might even get a short summary or context that helps make the session more productive.
I think there are lots of possible models. But honestly, I do strongly believe that AI can become a very clinically effective tool in its own right. I’m very confident in the potential of AI to make a real, meaningful difference in people’s lives.
Yitzi: Amazing. I could imagine that when robots become mainstream, every robot could be programmed with a therapist mode, with your AI ready to go, just switch into therapist mode.
Chris: That’s an interesting topic. It’s not so clear to me where the lines will be between different emotional applications in people’s lives. If you look at how things are today, it’s very specialized because we, as humans, are very specialized. A therapist has trained their whole life to be a great therapist. A headhunter has trained their whole life to be a great headhunter, and so on, recruiters, dating coaches, whatever. But I think so many of those roles are interconnected. There’s this overarching emotional well-being and emotional life of a client that spans across all those areas.
A lot of people are focused on productivity and workplace tools, connecting your email with AI, integrating outbound sales, syncing your CRM. But there’s real value in building something that addresses emotional well-being on a broader level. What’s really meaningful is that the foundation of all of that, the deepest insights about a person, how they want to live, who they want to be, often starts in a therapy conversation. But there are so many more applications where this kind of support could be useful in a meaningful way.
Take a simple example. Say someone is depressed because they lost their job and went through a divorce. They go to therapy, and a therapist helps them through their depression. But we want to go further than just helping them feel better. Through that process, you start to understand their values, their challenges, why their work didn’t pan out, why their relationship ended, you work through all of that.
But why stop there? Why not take the next step and help them move forward, not just out of struggle, but toward a flourishing life? That’s usually where support ends, often because of cost. Therapy is expensive. People will pay until they’re okay, but something more abundant could help them go much further, all the way to the life they really want to live.
Yitzi: To support what you’re saying, there’s this whole movement called positive psychology. The idea is that, up to now, psychology has mostly been about addressing pathology. There’s a certain baseline of mental health, and if you’re below that, psychology steps in to help. But what happens when the starting point is, “This is the ideal psychological state”? Then the question becomes, how do we use psychology to reach that optimal state? So it’s not just for people who are struggling or dealing with serious issues. Everyone could benefit from better psychological well-being. Therapy could evolve into something like “positive therapy,” where it’s for everyone, not just for people who are “mentally unwell”, but also for those looking to become more well, to reach their best possible mental state.
Chris: 100%, yeah. It should be way more proactive. I think the comparison to physical health makes a lot of sense. We don’t only exercise or go to the gym when we’re severely ill. We’re much more proactive now about physical health, and that puts people in a much better place overall. Mental health should be approached the same way.
The challenge is that people usually take action when there’s a clear pain point. So I think it’s important to build trust when someone is going through something difficult, like anxiety, relationship problems, depression, and so on. But once you’ve built that trust, you can take it further. You’ve already laid the foundation, and from there, you can help someone go beyond just healing to actually flourishing.
Yitzi: That’s great, it’s amazing. So is the end user an individual client, or is it more like a center, a hospital, or something like that? Who are the people actually paying you?
Chris: Yeah, so the end user will always be the person whose emotional well-being we’re trying to improve. But that doesn’t necessarily mean they’re the one paying. We’re experimenting with a bunch of different business models, but I think the most meaningful path forward is having it reimbursed by health plans, working with large employers, and generally finding ways to align incentives so it can be distributed at a larger scale.
Historically, the consumer space for mental health has been challenging: acquisition is hard, and you have to manually recruit each person. But if you can clearly prove that it works, that it has real effectiveness, and you can show metrics like symptom scores improving significantly, then it becomes a strong value proposition for health plans and large organizations.
Yitzi: Is that called B2B2C? Where you’re selling to businesses, and then they offer it to their end users?
Chris: Yeah, pretty much. Exactly.
Yitzi: This is our signature question, the centerpiece of the interview. Chris, you’ve been blessed with a lot of success. Looking back to when you first started, based on your experience, can you share five things you think are necessary to create a highly successful AI company?
Chris: I definitely wouldn’t call us a highly successful AI company yet, I think we’re still very much at the beginning. But there are a few factors that I think are key.
Three of them are actually our core values and how I try to build our team: hard work, high intelligence, and kindness.
The first two are very common. Everyone talks about the importance of hard work and intelligence when building a company. But I really believe in kindness. It’s something that genuinely motivates me in life. It’s the kind of energy I want to be around, and I think it’s a powerful driving force. It gives you a reason behind everything you’re doing.
One of the biggest lessons I’ve learned, especially through the MIT and apartment story, is the importance of resourcefulness. It’s about creativity, questioning how things work, not accepting the status quo, and being willing to write that cold email that could change your life.
So if I had to narrow it down, I’d say: kindness, hard work, intelligence, resourcefulness, and creativity.
Yitzi: Here’s our final aspirational question. Chris, because of your great work and the platform you’ve built, you’re a person of enormous influence. If you could put out an idea, spread a message, or inspire a movement that would bring the most good to the most people, what would that be?
Chris: I think it goes back to kindness. There’s this problem where a lot of people believe that being kind is uncool or weak, which is really sad.
There are so many opportunities to “arbitrage happiness.” It’s kind of a funny phrase, but I think it’s real. There are these little things you can do that take five minutes and can have a five-day or even five-week positive impact.
Like, for example, sending flowers to your grandma. It might cost $50 and five minutes of your time, but for someone who might not have many social interactions, who’s at home most of the time, those flowers can make her happy every single morning for five weeks when she sees them. So in that sense, we as humans could collectively arbitrage happiness and really increase net happiness in the world. I think more people should focus on doing that.
Yitzi: That’s fascinating. It’s a beautiful perspective, focusing on the arbitrage of happiness. Chris, it’s been a delight to meet you. How can our readers learn more or get involved in your work? How can they support what you’re doing?
Chris: I’m happy for anyone to reach out anytime. My email is chris@soniahealth.com. Happy to chat with anyone who’s looking to do something similar, who’s interested in what we’re building, wants to join one of our research projects, or is interested in working with us, we’re actively hiring. You can also find us on LinkedIn, our company page is Sonia, and my personal page is just Chris Aeberli.
Yitzi: Chris, it’s been an honor to meet you. I wish you continued success, good health, and many blessings. I hope we can do this again next year.
Chris: Thank you so much. That was wonderful. I appreciate your time.
Yitzi: Truly my pleasure.