Guest: Dr. Jon R. Cohen, CEO, Talkspace
Host: Charles Rhyee, Managing Director, Health Care - Health Care Technology Research Analyst, TD Cowen
AI is disrupting many industries, and one of its most surprising use cases could be for mental health and therapy. A key question, however, is: are commercial AI models safe to use for this purpose? In this episode, we are joined by Dr. Jon R. Cohen, CEO of Talkspace, to discuss why people are turning to AI for therapy, the potential dangers of using generative AI in this space, and how Talkspace is uniquely positioned to solve for these issues.
This podcast was originally recorded on March 2, 2026.
Speaker 1:
Welcome to TD Cowen Insights, a space that brings leading thinkers together to share insights and ideas shaping the world around us. Join us as we converse with the top minds who are influencing our global sectors.
Charles Rhyee:
Hi, my name is Charles Rhyee, TD Cowen's healthcare technology and distribution analyst, and welcome to the TD Cowen Future Health Podcast. Today we're live at TD Cowen's 46th Annual Health Care Conference in Boston. And today's podcast is part of our ongoing series that continues TD Cowen's efforts to bring together thought leaders, innovators, and investors to discuss how the convergence of healthcare technology, consumerism, and policy is changing the way we look at health, healthcare, and the healthcare system.
And in this episode, we're discussing the role of AI in mental health. And to explore the topic, I'm here with Dr. Jon Cohen, CEO of Talkspace, a leading virtual behavioral healthcare provider committed to helping people lead healthier, happier lives through access to high quality mental healthcare. Jon, thanks for being here.
Dr. Jon R. Cohen:
Thanks, Charles. It's an honor and privilege to be here.
Charles Rhyee:
I appreciate that. The topic here is obviously AI, and I think the reason to bring it up and discuss it today is that the use of AI chatbots has become a popular replacement for therapists as demand for mental health services continues to increase. I guess the first question really is: why do you think more people are turning to AI for therapy?
Dr. Jon R. Cohen:
I want to make sure we make a distinction between people turning to AI chat agents, chatbots, as opposed to people seeking therapy. I view it as two very different populations with overlap. What I mean by that is there are 800 million people on these agents, whether ChatGPT or Gemini or Claude. The number now is that 40% are using them for healthcare access, and then a subpopulation of that is using them for mental health.
But what I mean by mental health is that people are going on and having a conversation. They're saying, "Listen, I'm upset today. My kid did something bad. I'm having trouble with my spouse." Whatever the conversation is, it's starting as a conversation. For the most part, they're not going to the chat agents and saying, "I want therapy."
On the good side, I believe the agents have democratized mental health, meaning millions of people now have access to something, or some agent, to talk to. Unfortunately, there have been some very bad results.
The answer to your question of why people are turning to it is that it's readily available. It's easy to have a conversation, and whether or not it ends up as therapy is something down the road.
Charles Rhyee:
Obviously, just looking online at some stats, it does seem like ... I think the stats are something like twenty-some percent of people have reached out, and that's actually using AI to find that stat. But I guess, and we had this conversation a while back, what are some of the dangers, though, if people, maybe instead of having just a conversation, are actually using AI in lieu of a therapist? What are the dangers of using AI in that fashion?
Dr. Jon R. Cohen:
I don't think anybody, including the leaders of any of the large LLMs, predicted that people would use them for mental health support. It was a big surprise, I would say. And I think most of them would agree, or have said publicly, that the models were never built for that.
So one of the dangers is that the large language models were not trained on mental health data. Talkspace has one of the largest mental health databases in the country, as you know, and the LLM that we are training is trained on mental health data; it's called fine-tuning. So first off, with the other agents, you're dealing with models that have not been trained to do this. Now, they will eventually learn, and some of them will get some clinical information so that they can be trained, but right now they haven't been. So the first thing is they haven't been trained.
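To make the idea concrete, here is a minimal sketch of what supervised fine-tuning on domain dialogue data looks like in practice. It assumes a generic open-weights base model and a HuggingFace-style training setup; the model name, dataset file, and field names are hypothetical placeholders for illustration, not Talkspace's actual stack.

```python
# Hypothetical sketch: supervised fine-tuning of a base causal LM on
# de-identified domain dialogue data. All names are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "base-llm-7b"  # placeholder; any causal LM checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Each record holds one de-identified client/therapist exchange.
dataset = load_dataset("json", data_files="deidentified_dialogues.jsonl")

def tokenize(batch):
    # Concatenate message and reply so the model learns to produce a
    # clinician-style response conditioned on the client's message.
    texts = [c + tokenizer.eos_token + t
             for c, t in zip(batch["client_message"],
                             batch["therapist_reply"])]
    return tokenizer(texts, truncation=True, max_length=1024)

tokenized = dataset["train"].map(
    tokenize, batched=True,
    remove_columns=dataset["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mh-finetuned",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False yields standard next-token (causal LM) labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```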
So what are the dangers? The first is what's called instant validation. The agents are very empathetic, they're very cheerful, and they keep you engaged by telling you that you're right about basically everything. I like to refer to it as "mirror, mirror on the wall, who's the fairest of them all?", and the answer is always how great you are. That's the problem with a lot of the chat agents: they take you down this road of flattery without challenging you, and eventually you come to believe a delusion of who you are, without any pushback.
The second thing that happens is what's called social deskilling, which means you lose the ability to engage in the real world the way you should engage with people, friends, et cetera. So that's been part of the danger.
The third part of the danger is that it gets to the point of significant harm from some of the chat agents. You may have heard there are a bunch of wrongful death suits out there for teens who committed suicide based on encouragement from the AI agent. There have been some incredibly bad delusional outcomes for people who have had to be hospitalized as a result. And there are essentially thousands of these cases.
Some of the data we've seen recently suggests that 560,000 to 600,000 people on the chat agents have acute psychosis that's not being dealt with. The chat agents have been reported to be having conversations or interactions with people about suicide; it's about a million people a week, to give you an idea of the scope and scale. So the danger has been enormous. The chat agents don't respond appropriately to crises, which is exactly what therapists are trained to do.
A couple of other dangers: they have no clinical oversight. And one of the big issues is they're not HIPAA protected, so your information on an agent is out there for essentially anybody to find and discover.
Charles Rhyee:
You mentioned a little bit before how the commercially available LLMs aren't trained on mental health data. Maybe talk a little bit about Talkspace's dataset and why you're uniquely positioned to solve for these issues.
Dr. Jon R. Cohen:
So what we made a decision to do about a year ago at Talkspace is build the first safe model. And what I mean by that is several things. One, it's trained on our very, very large database. It's fine-tuned and trained, so it is a mental health LLM on top of other LLMs.
Second, we have proprietary algorithms, which we first reported back in 2018, 2019, that monitor the conversations and detect suicide risk, homicide or violence risk, the possibility of being abused at home, substance use, and six other clinical entities: OCD, psychosis, et cetera. These algorithms run in the background of all of our conversations. If a risk is identified, it's monitored by trained clinicians, therapists. And then, if necessary, we will take people off the platform and have a therapist intervene with live therapy.
The last piece is we've built it so it's HIPAA protected. So: trained as an LLM for mental health, HIPAA protected, clinical oversight, risk identification, and off-ramping to real therapy. That makes our model, which is now in beta testing, the only safe model out there, we believe, as far as we can tell.
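As an illustration of that architecture, here is a minimal sketch of how background risk detection with clinician escalation and off-ramping might be wired. The risk categories mirror the ones just described, but the scoring function, threshold, and alerting logic are hypothetical stand-ins, not Talkspace's proprietary algorithms.

```python
# Hypothetical sketch: every message is scored against clinical risk
# categories; anything above a threshold is routed to a human clinician,
# who can off-ramp the user from the AI agent to live therapy.
from dataclasses import dataclass
from typing import Dict, List

RISK_CATEGORIES = ["suicide", "homicide_violence", "abuse_at_home",
                   "substance_use", "ocd", "psychosis"]
ESCALATION_THRESHOLD = 0.8  # illustrative cutoff, not a clinical standard

@dataclass
class RiskAlert:
    user_id: str
    category: str
    score: float
    excerpt: str

def score_message(text: str) -> Dict[str, float]:
    # Stand-in scorer: a production system would use trained classifiers;
    # a crude keyword heuristic keeps this sketch runnable.
    cues = {"suicide": ["end my life", "kill myself"],
            "abuse_at_home": ["afraid to go home"]}
    lowered = text.lower()
    scores = {c: 0.0 for c in RISK_CATEGORIES}
    for category, phrases in cues.items():
        if any(p in lowered for p in phrases):
            scores[category] = 0.95
    return scores

def notify_clinician(alert: RiskAlert) -> None:
    # Placeholder for paging an on-call clinician, who decides whether to
    # intervene: the system flags risk, humans make the clinical call.
    print(f"ALERT {alert.user_id}: {alert.category} "
          f"(score {alert.score:.2f}) -> human review")

def monitor(user_id: str, message: str) -> List[RiskAlert]:
    # Runs in the background on every message in the conversation.
    alerts = []
    for category, score in score_message(message).items():
        if score >= ESCALATION_THRESHOLD:
            alert = RiskAlert(user_id, category, score, message[:80])
            notify_clinician(alert)
            alerts.append(alert)
    return alerts

# Example: this message triggers a suicide-risk alert for clinician review.
monitor("user-123", "Some days I just want to end my life.")
```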
Charles Rhyee:
And are those algorithms available to the current platform where you have therapists engaging with patients, and so they're able to take advantage of those tools as well?
Dr. Jon R. Cohen:
Yeah, they were developed for that reason. We happen to be utilizing them now for our AI model, our AI LLM, but all the proprietary risk algorithms were developed for therapy, and they're actually being used, and have been used since 2019, at least the suicide one, on the existing therapy platform. What it does is send the therapist an alert. What I mean by that is, while the therapist is having a conversation, it will alert the therapist that, "Hey, this patient is at risk for self-harm," and then the therapist will decide what to do with it. We don't tell them what to do. We give them an alert that tells them the patient is at risk.
Charles Rhyee:
What is the right use of AI in the delivery of care? I can imagine maybe support for therapists, obviously a lot of back office kind of functions, but how do you see this eventually being integrated into your care delivery?
Dr. Jon R. Cohen:
So first off, as I said, I do think it will bring forward a lot more people who are interested in having serious conversations and who eventually might need therapy. I think that's a big advantage in terms of being able to provide more care to more people.
I think we will at some point be able to use the agent between therapy sessions. So you have a session with a therapist, and in between, you may use the LLM to help you with your exercises, to have an ongoing conversation before your next session. We will use it between sessions at some point, not right now. It turns out it's pretty complicated, because you have to work out the details of the relationship between the LLM and the live therapist. It's not as simple as just turning it on between sessions. So that will be what I consider probably V2 of the LLM, where we can use it between sessions.
Charles Rhyee:
And when you talk about the risks you're able to identify, you would imagine those are magnified, particularly when we think of adolescents, kids, and young adults. I know that you're doing a lot of work with school systems, in particular New York and Baltimore, and more recently Seattle and North Carolina. Maybe talk a little bit about the work you're doing with them and any results and interventions you've been able to achieve so far.
Dr. Jon R. Cohen:
So the LLM will not be available to anybody under 18. We will not make our large language model AI agent available to teens. We don't think they're ready for this. They're much more susceptible to conversations with an AI agent, and we're concerned about that. So I would say that's further down the road; for right now, we will not offer it to anyone under 18.
The program you're talking about: we do have 500,000 teenagers across the US with free access to Talkspace because of our contractual relationships with governments, departments of health, departments of education, cities, states, and counties. As a result, many, many teenagers across the country now have access to mental health support 24/7.
The program is texting and messaging plus live video, and it's a little bit different from city to city. New York City is any teenager 13 to 17. Seattle is anybody between 13 and 24. Baltimore is all high school students. And North Carolina is really kids involved with the juvenile justice system, which means either they've been involved in juvenile justice, they have a parent who's incarcerated, or they've had some other interaction with the system, particularly the foster care system, where they're at risk of needing mental health support. So they're very different.
The results that we've seen have been quite extraordinary. Over 90% of kids or teens are using texting and messaging as a way to get therapy. Forty-ish percent, in addition to texting and messaging, will use live video. The results have been pretty significant, with 65 to 70% showing clinical improvement when they're on the program for about six weeks to three months.
I think, to me, the most interesting thing is that we're reaching teens where they are, which is really on their phones. And why that's important is that we're getting to kids in the inner city, communities of color, communities [inaudible 00:11:37] disparate, that really have difficulty accessing healthcare, because we're reaching kids on their phones. It doesn't matter where they live.
We look at the zip code analysis, and it's been an extraordinarily positive impact to be able to reach teens wherever they are. Most of them are doing texting, messaging, or live video after school or on the weekends.
Charles Rhyee:
Okay, that's amazing. For the last few minutes that we have here, I wanted to talk more broadly, I guess, before going back to specifics about AI. Last November, the FDA held a Digital Health Advisory Committee meeting to address generative AI-enabled digital mental health medical devices. What were some of the key takeaways from that meeting?
Dr. Jon R. Cohen:
So we provided some fairly extensive written commentary to the FDA before the meeting. We thought it was a very positive session and conversation. We have not heard of any recommendations coming out of it; certainly they haven't put out anything based on it.
The issues they recognized, and everybody has recognized, are of course the regulatory ones: what is the real danger we talked about, and how do you, in some sense, control this or make it safer? We believe the oversight being contemplated by the states or the feds is very close to what we're building in terms of a safe model. But we do believe there have to be some controls and some restrictions on who has access, and mostly on how you make it safe and who you offer it to. So we're seeing a lot of that.
Charles Rhyee:
So then do you see more regulation over the use of AI in mental health coming? Do you think we need to get there? Because I know right now it's kind of state by state and it's not very consistent.
Dr. Jon R. Cohen:
The answer is yes. I can't say exactly what that should be, but I do believe there has to be something. As we've seen with cell phone usage, protecting kids from cell phone usage and all of that, I think you'll see the same thing, and it's already happened in some states in terms of regulatory oversight. The most important thing to me is some sort of clinical oversight. There has to be some clinician, a human in the loop, at some point to protect people from what's happening to them.
Charles Rhyee:
And then lastly, as we think about how things are going to evolve over the next several years, how do you see Talkspace positioned in that environment?
Dr. Jon R. Cohen:
So I do believe we are, and continue to be, uniquely positioned. As I said, we're building the first real safe, trained LLM for mental health. There are a bunch of others out there, and I think all of the big LLMs will continue to evolve in figuring out risk and safety. But we continue to believe that Talkspace is in a very unusual, positive position relative to what we've done. We are a company born on innovation; we did most of the original work proving that texting and messaging actually work for therapy. And along those lines, we'll continue to be at the innovative front of using AI, as long as we do it in a safe and responsible manner.
Charles Rhyee:
Great. Well, looking forward to seeing that occur. So Jon, thanks so much.
Dr. Jon R. Cohen:
Thank you.
Speaker 1:
Thanks for joining us. Stay tuned for the next episode of TD Cowen Insights.
This podcast should not be copied, distributed, published or reproduced, in whole or in part. The information contained in this recording was obtained from publicly available sources, has not been independently verified by TD Securities, may not be current, and TD Securities has no obligation to provide any updates or changes. All price references and market forecasts are as of the date of recording. The views and opinions expressed in this podcast are not necessarily those of TD Securities and may differ from the views and opinions of other departments or divisions of TD Securities and its affiliates. TD Securities is not providing any financial, economic, legal, accounting, or tax advice or recommendations in this podcast. The information contained in this podcast does not constitute investment advice or an offer to buy or sell securities or any other product and should not be relied upon to evaluate any potential transaction. Neither TD Securities nor any of its affiliates makes any representation or warranty, express or implied, as to the accuracy or completeness of the statements or any information contained in this podcast and any liability therefore (including in respect of direct, indirect or consequential loss or damage) is expressly disclaimed.
Charles Rhyee
Managing Director, Health Care - Health Care Technology Research Analyst, TD Cowen
Charles Rhyee is a managing director and senior research analyst covering the Health Care Technology and Distribution space. Mr. Rhyee has been recognized in polls conducted by The Wall Street Journal and The Financial Times. He ranked #3 in Institutional Investor's 2023 All-America Survey in Health Care Technology and Distribution and was named "Best Up & Coming Analyst" in 2008 and 2009.
Prior to joining TD Cowen in February 2011, he was an executive director covering the Health Care Technology and Distribution sector for Oppenheimer & Co. Mr. Rhyee began his equity research career at Salomon Smith Barney in 1999.
He holds a BA in economics from Columbia University.