     

    What AI Can Do for Mental Health with Michiel Rauws

    Healthcare Rethink - Episode 22

    While there is no shortage of mental health services in the United States, there is not a well-coordinated system in place to connect people in need with these essential services. This is what Michiel Rauws, CEO and Founder of Cass, is trying to solve with AI. Join him and host Brian Urban on this episode of Healthcare Rethink.


     

    Healthcare Rethink: Hear From Leading Changemakers


    Brian Urban (00:22):
    Yes, this is the Healthcare Rethink podcast. I'm your host, Brian Urban, and today we are talking AI and mental health. That's right. We have CEO Michiel Rauws with us today from Cass, which is a leading AI mental health assistant focused on increasing access, lowering costs, and activating resources to help an individual. So without further ado, Michiel, welcome to our little show.

    Michiel Rauws (00:50):
    Thank you so much, Brian. Thanks for having me. 

    Brian Urban (00:52):
    This is going to be exciting. I think this is a milestone for our podcast, in that we haven't had the blend of AI addressing mental health in the ecosystem. So this is a real treat for me. With every episode, we like to get to know our guests a little bit more, and I wanted to just get to know who you are, Michiel, and how you came up with this, I'd say, very impactful AI tool, really a very impactful service. So tell us a little bit about who you are and what Cass's mission is in the mental health space.

    Michiel Rauws (01:27):
    Yes, of course. Happy to. So originally I'm from the Netherlands in Europe, which is also commonly known, or referred to by me, as the land of value-based care. So I myself had a very good time, an easy time, getting access to care, and that care was free and high quality. So that was great for me and that really helped on my own mental health journey. And afterwards I became a peer support counselor. So I was so excited about the things that had helped me that I was trying to figure out how I could help others. And I found myself just helping friends and family, then slowly coworkers and more and more people. And really a big part of what I was doing was repeating the things that I had learned in therapy.
    (02:15):
    So I thought that was a very powerful concept. And later on that's really what led me to thinking about what I could do within the field of health. And one thing that struck me between Europe and the US is the availability of care, the access to care and the cost. And thinking about how, in a small way, I was increasing capacity by being a peer support counselor, I thought, what if I could find a way to really massively add capacity, to really empower the current healthcare workers to help so many more people? And that's where my thought came in of, well, I've been sort of repeating the counselor that helped me. What if I could build this little AI assistant that could repeat the things that I say to people and then help others? And that was really the seed of how this got started.

    Brian Urban (03:05):
    I love that, because your story goes from a user, to a counselor, and now to a developer, able to scale help through artificial intelligence that is not only a dialogue but also access to meaningful resources near someone, not just a generic list. So I want to talk about your perspective, because it comes from a few angles, Michiel, and I think about transformation in the space. So in the US, you found out we do not have a mental healthcare system. We have, really, a sick care system. If you get hurt, you get an ambulance, or a helicopter if it's a really big, far rural emergency, you get to a hospital, you get treated, and that's an acute scenario. But for chronic things, you're in and out, and there's really no prevention mechanism, but there's a structure.
    (04:00):
    For mental health, there's no structure. You can't get this type of help. And obviously your technology is helping fill in those gaps. So I want to talk about transformation, knowing what is actually in place in the US. So I think you're servicing around 30 million people right now globally, which is unbelievable. And you've been involved in research, so you're going at it through so many angles. And then I think you leveled up, in that you're working with organizations as well. So it's so many different levels. So with all that I just threw out at you, what is transforming in this space, and do you think we'll have a mental health structure in the US at one point? Do you think your technology is going to push in that direction?

    Michiel Rauws (04:47):
    Yes, that's very much our mission. So our organization is called Cass. The name, actually, I'm very proud of; it came out of the recent rebrand. It stands for calm and safe space. So really what we are trying to do is to create that calm and safe space for people to have that on-ramp into behavioral health. And in the end, there is of course a very solid behavioral health system in the US. It's just that the on-ramp, that access to care, is a little rough and difficult. And the mission of our company is to provide access to mental health coaching regardless of income or location. So what we really do is we reach out to people. And that's fairly expensive still, even if you have a care team. So some of our partners have extensive care teams as part of their health plan to reach out to those members who are most in need of support and who also drive a lot of cost for the health plan.
    (05:48):
    So because they drive that cost, they have budgets available to then have a care team reach out to them. But to have an individual call multiple times, try multiple different phone numbers, if they even have the phone number, or send multiple emails to try and get in touch with this member, who might live in a rural area, that costs a lot of money. And that therefore limits the number of members they can reach, even though there are so many other members who would also be great to reach, but it's too expensive on a per-person basis. That's really a big part of the difference we're making: we're trying to amplify the work that these care teams already do. And then the AI assistant, it's an assistant for a reason. The assistant can help amplify that work and reach out to the community, proactively reach out to them just where they are. So if they're on social media, we'll make sure to do a social media outreach campaign and just start chatting with them there. And one cool thing from our research, because I remember all the things you rattled off.
    (06:54):
    One cool thing we found in the research is that there's a difference between you and me talking right now, which is a very rich medium. There's video, there's voice, I can see your expression, I can see your facial expressions. So I'll choose my words relatively carefully. I might also think about the whole audience who's listening and choose my words even more carefully. And that's one end of the spectrum. The opposite end of the spectrum is chatting with Cass, because it's one-on-one, and it's not a person you're actually talking to, it's an AI assistant. So suddenly there's no person who's judging you. It's just a thing. We've seen in our own research, but also in general, and this has been proven time and time again in research, that a different medium has a big impact. So if you are texting, you feel safer than if you're on video. You see that with the youth population a lot. And also the fact that you're chatting with a chatbot actually helps. It makes people feel safer.

    Brian Urban (08:01):
    I love that you connected that right to your brand mission as well, through the actual capabilities of your technology, Michiel. And it's amazing that you found that in your study too, because it is a calm and safe space when you're using these interesting new mediums that help with that on-ramp into a behavioral health system. A lot of us, we all struggle with different things in life, and a lot of us need more access and an easier way to reach elevated services. So I love what you're doing with taking that care management model and creating an AI assistant that is a safe medium to have a discussion with. And one of your research partners, I believe one of your partners overall, is Stanford University. So obviously they're renowned clinical, behavioral health and community health researchers, not only in the US but globally. So what a great partnership that you've developed there, and I want to come back to that to see if you have some more research coming out.
    (09:01):
    But there is some great research, and case studies that you have on your site, that I would encourage our listeners to check out. I think we covered pretty well what Cass is doing to enable access to mental health, so I want to talk about tech now becoming more of an advocate, not just the connector but an advocate. So your organization has a very niche solution, and there are others in the space: Wheel is out there, Happify, now Twill, with digital therapeutics and chat rooms. Are you seeing a big movement into the space? Is it creating a lot more advocacy, pushing change in the market? Or is it creating a lot more options for people?

    Michiel Rauws (09:42):
    Yeah, I've always welcomed more people into the space. So it's always been good to me to see more players enter the market. So that's certainly a big shift. And to a degree, that's the same shift we're seeing in the market, where other players might have had an online course earlier, but they've realized it's hard to give access to people. It's of course so much easier than getting an appointment. However, the next hurdle is really getting them access. And that's what we're seeing now more and more, especially in the current economic climate, where ROI is not enough. It needs to almost be ROI within a year, please, in order to-

    Brian Urban (10:57):
    That's crazy.

    Michiel Rauws (10:58):
    Yeah. Before the next budget cycle. And luckily, the way we've always looked at things is to focus on access to care, 'cause we saw very early on that our system was able to reduce symptoms; those were our first randomized controlled trials, for example with Northwestern University. That's the first one we published. And in total now we've published 13 research studies, all peer reviewed, together with academic institutions. They're all available in PubMed. And it's both the efficacy, through randomized controlled trials, and also the safety, which is as important to this, of course.
    (11:35):
    And in that way, what we've learned is it's not enough to make it available. You need to make sure people use it. And that's where the work that I like to compare to amplifying the care team and the social worker comes from, which is doing that outreach and making sure that after going live, within a month or two or three, we start signing up people onto our product. And sometimes the health plan might have difficulty reaching out to their members because they might not have the phone numbers. So in that case, they see huge success with us doing our own marketing campaigns on social media to reach out to members in their area and then verifying that they're under that health plan. And this way we've seen examples of signing up 25,000 people per month.

    Brian Urban (12:22):
    Wow. First of all, that speaks to a couple of different, unique things happening in the US healthcare climate, or healthcare service climate. One, a lot of people do trust different tech mediums more than they trust health plans, more than they trust maybe healthcare institutions in particular. So you all are actually a trust mechanism, and a catalyst, I'll say, between the two, users and healthcare or health plans. So that's pretty interesting. I've heard that in some other regards as well with different high-need populations. So in thinking about your relationships with these healthcare and health plan institutions, one thing comes to mind: value-based care integration.
    (13:09):
    So you come from the land of value-based care in the Netherlands, and when you look at the US, we're behind the curve in many ways, from a policy front, from even a productizing-services front, policies in terms of health plans for employer groups and individuals. But now that you have some HEDIS measures coming down the pike, new value-based care contracting that will include measures of addressing mental health and social determinants of health, you all seem to fit so well within that. Do you have any early examples of your impact in helping advance value-based care in the US?

    Michiel Rauws (13:48):
    Absolutely. Yeah. And that's a big focus of what we do: we literally look at the list of the star rating system and all the different HEDIS quality measures, and we look at, oh, which ones can we help with? So when we talk with a new health plan or a provider system, we don't just barge in and say, this is all we do, look at all the things we do. We say we're here as consultants. So for background, once upon a time I worked at IBM; that's where my interest in AI started. And in the early days we used IBM Watson to see if that was a good tool to use. And then later we benchmarked it against our own systems, and we saw that our systems were as good or better. And yeah, that's kind of the origin of the story. So we've always been very consultative.
    (14:36):
    And then also, we started as a research firm. In the beginning it was just, let's see if we can do this. And the last time this was tried, really at scale, was at MIT with ELIZA, so a very renowned research institution. And it got all over the news because it was so scary. Suddenly there's this... They called it a psychiatrist. They called it an AI robot psychiatrist. Well, that made the headlines everywhere, very much similar to now, how history repeats itself: oh my god, what's happening with all these AI models? Some people are using them for therapy, other people do this and that. And yeah, people are kind of going rogue with the regular ChatGPT, telling it, you are now my therapist, please be my therapist and help me now. It's a good sign of people trying to build resilience and taking initiative. However, it's very risky. And that's why so many of our research papers are about safety.
    (15:35):
    It's making sure that it's safe. It now seems like, oh, these new models are so good, we can easily use them for coaching. However, that's not a product, that's a language model, and a few quick tweaks you made to the language model. What we bring to the table, especially for these big enterprise partners of ours, is making sure it's safe. And how do you make it safe? Partially it's doing the research and building an entire product, not just a language model. For us, if someone is chatting with Cass, within nine seconds a crisis counselor takes over. So it's not just a chatbot; a crisis counselor can always take over. And there are people reviewing transcripts, and we have a medical ethics board who meets regularly to discuss different topics that come up.
    (16:27):
    And different new models that come up are also things we have to discuss. So we also have an AI advisory board, with executives from the industry, both from health insurance companies and also from deep tech, the AI world, to really figure out, okay, when are these new models ready for primetime and how do we make them safe? And yeah, I can go on and on about how we do that, but let me know if that's of interest at all.
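
    The safety setup Michiel describes here, where a human crisis counselor can take over a chat within nine seconds and transcripts are reviewed, maps to a common human-in-the-loop escalation pattern. The Python sketch below illustrates only that general pattern; the keyword matching, function names and callbacks are hypothetical assumptions, not Cass's actual implementation.

        import time

        # Terms standing in for whatever crisis classifier a real system would use.
        CRISIS_TERMS = ("suicide", "kill myself", "hurt myself", "overdose")
        ESCALATION_BUDGET_SECONDS = 9  # the handoff window mentioned in the episode


        def looks_like_crisis(message):
            text = message.lower()
            return any(term in text for term in CRISIS_TERMS)


        def handle_message(message, send_reply, page_counselor):
            """Reply as the assistant, but hand off to a human on crisis signals."""
            received_at = time.monotonic()
            if looks_like_crisis(message):
                # Page a human crisis counselor with a deadline for taking over the chat.
                page_counselor(message, deadline=received_at + ESCALATION_BUDGET_SECONDS)
                send_reply("I'm connecting you with a counselor right now.")
                return "escalated"
            send_reply("Thanks for sharing. Tell me more about how you're feeling.")
            return "handled_by_assistant"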

    Brian Urban (16:54):
    Well, that's amazing on a couple of fronts. First of all, and I forgot we did talk about this as we've been getting to know each other a little bit more, Michiel, you have this ethics board, and with your background as a research firm, really thinking about IRBs, institutional review boards, you've taken that construct and overlaid it onto your business, which shows that you care about safety. And so you actually have a healthy direction that you're going in as you develop these very important tools. And I didn't realize the triage capability that's in there as well, for a counselor to jump in if there is a critical incident that gets discussed with the AI assistant. But you made me think about something. AI has come under a lot of heat recently from some big-name industry leaders out there, mainly all billionaires, calling out the dangers, whether that was kind of a press stunt or real concern.
    (17:50):
    But to your point, there have to be some guardrails put around developing AI that is specific to healthcare. So I just want to go off this for a second. Do you think AI is going to have one of the biggest contributions toward healthcare in the next 10 years? Or the next... I'd say maybe shrink that down to two or three years. Is this going to be the biggest thing that changes how healthcare is perceived and used?

    Michiel Rauws (18:16):
    I believe so, yes. Yeah. And there are definitely two different ways, of course. The one way everyone's talking about now is what's happening with these language models. Can they chat like a primary care doctor, or can they chat like a nurse, et cetera? These are good questions to ask and to explore, but it has to be done in a safe way, and you have to do the research to prove that it works, that it's effective, that it's safe. And then the other way, from back in my days at IBM, that was the same thing, is about big data and data analysis. And now you see that with a recent study coming out for Alzheimer's, being able to use big data and the latest AI models to now very accurately start predicting who might get affected by Alzheimer's. So there are definitely these two different ways, and yeah, for sure there will be big impacts.
    (19:05):
    The important thing is, let's make sure this is done in a safe way. And to summarize, our ethos is basically go slow to go fast, and go fast by going slow. So it's really focused on: there's no need to hurry on these things. Our existing systems work; we've done 13 published research studies on this. They're already reducing symptoms, they're already increasing access to care. The ROI is very strong. So in that way, everything existing works and is incredibly safe. We even published an ethics paper ourselves in which we coined the concept AI on a leash, which means that everything the AI is able to say has been approved by a professional. So in that way, it cannot go rogue at all. It's just technically not feasible. And then there are of course interesting questions. Well, what happens with these new models that can make up things? Which is good, because you need to be creative sometimes to problem-solve.
    (20:04):
    And our answer is: it's context dependent. It depends on our partner, what their appetite for innovation is, and also their appetite for risk. So that might be a lot higher for a consumer-focused brand or a nonprofit that just tries to help people get access to resources. If it's just resource navigation, that's very different from: this is a mental health conversation, we are trying to help you with mental health, and we're a health plan, for example. So across our different partners, we see different levels of comfort. And that's why we set our system up in a modular way, where we have our in-house algorithms, we have fine-tuned algorithms from open source libraries that were released in the past couple of years, and then the very newest technology, these APIs with ChatGPT, et cetera.
    (20:57):
    Those we also have, but it's all modular. So we turn those on or off depending on the customer, and have that consulting approach again: have those conversations, figure out where the pain points are, what would be appropriate here, and not just try to push the newest thing, but push the safest thing. And to bring it back to what you mentioned, trust, because that's really kind of my summary of what I've seen in this ecosystem: trust is so important, but it's been lost. We allow our customers to white label our product, but often they actually ask, no, no, no need, just use your own branding. It's time for a fresh voice.
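
    Two of the ideas above, "AI on a leash" (the assistant can only say things a professional has already approved) and the modular, per-partner choice between in-house algorithms, fine-tuned open-source models and external APIs, can be sketched roughly as follows. Every name, toggle and response below is a hypothetical assumption for illustration, not Cass's actual architecture.

        from dataclasses import dataclass

        # Every reply the leashed assistant can send was approved ahead of time
        # by a clinical professional, so it cannot go off-script.
        APPROVED_RESPONSES = {
            "greeting": "Hi, I'm here to listen. What's on your mind today?",
            "anxiety": "That sounds stressful. Would you like to try a short breathing exercise?",
            "fallback": "I hear you. Can you tell me a bit more about that?",
        }


        def leashed_reply(intent):
            """'AI on a leash': only pre-approved text is ever returned."""
            return APPROVED_RESPONSES.get(intent, APPROVED_RESPONSES["fallback"])


        @dataclass
        class PartnerConfig:
            """Which backends a given partner is comfortable switching on."""
            name: str
            allow_fine_tuned_model: bool = False  # fine-tuned open-source model
            allow_external_api: bool = False      # e.g. a hosted LLM API


        def generate_reply(partner, intent, message):
            # Most conservative path first: approved responses only.
            if not (partner.allow_fine_tuned_model or partner.allow_external_api):
                return leashed_reply(intent)
            if partner.allow_external_api:
                return f"[external API would draft a reply to: {message!r}]"
            return f"[fine-tuned model would draft a reply to: {message!r}]"


        # A cautious health plan keeps the leash on; a consumer brand opts in to more.
        health_plan = PartnerConfig(name="ExampleHealthPlan")
        consumer_app = PartnerConfig(name="ExampleWellnessApp", allow_external_api=True)
        print(generate_reply(health_plan, "anxiety", "Money worries keep me up at night"))
        print(generate_reply(consumer_app, "anxiety", "Money worries keep me up at night"))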

    Brian Urban (21:37):
    Yes.

    Michiel Rauws (21:38):
    If you're just a nice identity, a person, an individual, I don't know, a thing, reaching out to someone and then just starting to chat, starting to help them, that's a way different way of approaching them than, oh, here's an email from your health plan and you have some open bills. And the important thing is that we're really strict with regard to data sharing, to keep that trust. So even with our customers, it's on an as-needed basis; that's one of the important rules from HIPAA, access to data on an as-needed basis. In the same way, if we don't need to share data with the health plan, then we don't, because we need to make sure we keep that trust from the members, that they trust us that we are there to help them.
    (22:27):
    Because in the end, that's the painful thing. These health plans want to do the same thing for this group of people who are really suffering and in need, with comorbidities, high age, et cetera. They want to give them things. And we are then the mechanism to do that part, but we're not the mechanism to talk about billing and bills and other things. That's like a Chinese wall. We don't do that.

    Brian Urban (22:51):
    Yeah. And no one wants that conversation anyway. So I love what you hit on, Michiel, because your tech... We talked about being an advocate in the space, a connector as well, the on-ramp to the behavioral health system. But really, the social currency, the trust currency that's developed, I don't even know how you can put a measure or a value to it. You probably could; I'm sure you already did. But to the organizations you work with, that is priceless. And to the people that find the trust within that, that's priceless as well, because that's how you maintain that service and that help that a person seeks when using an AI assistant like Cass, a calm and safe space. So that's just awesome.
    (23:39):
    I love where you're going with this, and I want to focus on one other part of our conversation, because we've hit so many different perspectives on the maturity of Cass, even into your new name now, but addressing social determinants of health as barriers, really it's a universe of determinants of health in our current age. Medical debt, food, transportation, and everything in between that is affecting someone's use or non-use of healthcare, and how that's affecting their health status and outcomes as well. So your AI does that now, I think, in a lot of ways. Do you see your AI expanding more? You said AI on a leash, so is that leash going to have, I guess, a bigger circumference, touching more of the yard that the AI is in, to touch other needs that a person might have? Or does it do that today? Because actually we didn't talk about that before, and I didn't do the research on that part of your AI. Is that what it's doing today? Or do you see yourself expanding into that space to touch other needs that members have?

    Michiel Rauws (24:49):
    Yeah, yeah. So definitely we started out super, super focused. And we started out a long time ago, so we're not very new on the block. We started out 10 years ago as a research firm, trying to see if these things were ready. And slowly we became more ready, because we did more research and also the algorithms became better, so they became more useful. And now what we've seen as well is, we started with just coaching, so just mental health coaching. And initially it was even just the intake, just to give some background to the therapist before they have the first chat, so that they would already know, oh, we already know what's going on with this person. And we know that this person doesn't just have, maybe, issues with anxiety, but they also have financial issues. They also have difficulties with transportation, might have issues with medical debt, which exacerbates the anxiety again, because if you have financial stress, that piles on.
    (25:42):
    So we saw that early on in that intake work. However, we forced ourselves to stay focused on just the coaching. And then, luckily, in recent years we were able to expand more into that resource navigation. And that's the exciting thing with our customers: we have very wide-ranging customers, all the way from health plans to nonprofits. And the nonprofits often have all these different free resources available to further help with the social determinants of health. So in that way, for example, I saw one of the other podcast guests, RIP Medical Debt. That's a very good institution that takes care, kind of behind the scenes, of people's medical debt.
    (26:26):
    And what we are trying to figure out, in that same way, is that there are some organizations who specifically work with consumers, where consumers can apply to them. So we make that application process easier, where maybe before they weren't able to do it because they might have had to print something. Another good example is SNAP food benefits. SNAP food benefits are available to so many people, but so many people for whom they're available don't have a computer, a desktop tied to a printer, and aren't able to print those forms, fill them out and sign them. So what we figured out is to make it fully a chat. So they just keep chatting with Cass and they answer questions. We help them answer the questions, then we digitally fill out the form, we digitally request a signature, and then we send that fax or printed paper out to the right entity.
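
    The SNAP example above, turning a printed application into a guided chat that fills the form, collects a digital signature, and sends it on, follows a simple flow. The sketch below is a hypothetical illustration under assumed field names and callbacks; the real questions, signature step and submission channel would depend on the agency and on Cass's actual integration.

        # Hypothetical chat-driven application flow; field names and the
        # submission step are placeholders, not an actual SNAP integration.
        FORM_FIELDS = [
            ("full_name", "What's your full legal name?"),
            ("household_size", "How many people live in your household?"),
            ("monthly_income", "Roughly what is your household's monthly income?"),
        ]


        def collect_answers(ask):
            """Ask each form question in the chat and collect the answers."""
            return {field: ask(question) for field, question in FORM_FIELDS}


        def submit_application(ask, request_signature, send_to_agency):
            answers = collect_answers(ask)
            # Request a digital signature instead of asking the person to print anything.
            signature = request_signature(answers)
            # Fax, mail, or upload the completed form to the right entity.
            return send_to_agency({"fields": answers, "signature": signature})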

    Brian Urban (27:23):
    I think that speaks to where your AI assistant, really the expanse of your tech, is going in a longitudinal way, with all the different things you can see and help a person address. So I did not know that piece; I've learned far more from our little conversation than I was expecting coming into it, Michiel. Such a well-rounded technology, so mission and purpose based. I'm looking out a couple of years now. We have a lot of policy changes coming through in regards to health equity initiatives, how Medicaid dollars can be used for in-lieu-of-services provisions, and how people can be contacted too. So all these different changes are finally taking place, slowly but surely. And I'm looking at your technology and I'm wondering, Michiel, what's going to be your biggest contribution to the healthcare ecosystem going two or three years down the road?

    Michiel Rauws (28:24):
    Yeah, it's really focusing on trust, and bringing back trust to an industry that has been, yeah, really hurt by a lack of trust. And in the end, there are so many good intentions; everyone who works at these health institutions, they have good intentions. They want to help their members, they want to help people. And then the sticky pieces are, well, there are these bills outstanding and these other things outstanding, and the system needs to work, and this and that. Yeah, that's true. But in the end you might make things worse for that person, and then the bills might increase even further. So it's really about trust, and making sure that the member knows that the health plan is really thinking about their best-case scenario as well, and about the alignment of those incentives. And that's what we've seen from a policy perspective with Medicare Advantage: moving more towards that value-based care model, more of the integrated care models.
    (29:24):
    What we've seen work very well with Kaiser Permanente to further drive down costs and bend the cost curve. So in that way, really what we are looking to do is bring that trust back into the relationship, into the on-ramp into access to care. And then from there on, we just see whatever works best. If this person is really jaded and really just wants to have self-care as an option, they don't want to talk to a person, they don't trust anyone else, then they chat with Cass. They find that calm and safe space and they just start venting. They start talking. And we've seen that just the act of venting and journaling, like you might have done in the past, writing a diary, that helps people. So even that is kind of the main part of how we're helping people. And then of course we do all this education about different ways to relax and different ways to deal with issues.
    (30:15):
    So that's an exciting part. And then, yeah, the future will tell, based on wherever we see the biggest need for our partners and customers, whether we will focus more on the resource navigation. And the way I see it is, it's like a social worker, amplifying the work of a social worker or a case manager. It's a little bit of coaching, because you find someone in a very rough spot, and first you need to focus on the person. And then you look at the whole picture of, oh, they have these social determinants of health, they have other things, and how can we help with these other parts? And yeah, the future will tell where the focus will lie. But I won't act as if I can just quickly predict that. At a high level, I think it's trust and incentive alignment, and yeah, I think there's a bright future ahead.

    Brian Urban (31:01):
    I love it. Cass is absolutely amplifying the skills that are needed in our economy, from social workers and care managers, through an AI assistant. So it's a calm and safe space for individuals to use your services. And I'm excited about having more AI leaders that have a research background like yourself, Michiel. So I hope our listeners are encouraged not only to reach out and work with you, but also with others that are developing in this space; you said more are welcome, and more are definitely needed. I love this conversation. Thank you so much. I feel a part two coming on. Maybe in another month here we can get a part two episode, Michiel. But thank you for joining our little show here. And for more exciting insights and excerpts, please visit us at finthrive.com.
