Saturday, March 7, 2026

Digital Health Solutions Company Solera Tackles AI Governance Issues

Solera Health has created a digital platform that matches health plan members to more than 20 curated digital health solutions. Two of the company's executives recently sat down with Healthcare Innovation to discuss the company's business model and growth, as well as its approach to AI governance across its digital health partner network.



Glenn Alphen, Solera's chief commercial officer, spoke about the company's founding and growth, and Mike Levin, the company's general counsel and chief information security officer, described the complexity of creating an AI governance framework across its ecosystem of digital health solution partners.



As an example of the type of partnership it develops with payers, Blue Cross and Blue Shield of Texas just announced its Unity Health Hub, powered by Solera Health, which will link to customer service and condition management resources to provide members with a coordinated experience.

Healthcare Innovation: Could you give us an overview of the company's business model and talk about some of the digital health partners it works with?

Alphen: The company was founded under the Affordable Care Act to serve Medicare Advantage members and to drive them to diabetes prevention programs locally and potentially digitally, and then turn their progress into claims. We started to build a front end, using interviewing techniques to understand the individuals using it. Over time, our commercial customers who also had Medicare Advantage said that there were some digital programs that would be great for their commercial population in weight management and diabetes prevention. Could we do that as well? So we began to build out a model that steered people to those kinds of programs and figured out ways to build those as claims.

We began gathering information on engagement and outcomes. Are you actually losing weight? Are you actually doing the program? We built what is essentially our own EMR, where we keep track of all that data coming in through these partners over time. Now we're at eight conditions.

We have a number of large health plan customers that use all of our condition categories, primarily in commercial markets, whether it's fully insured or ASO (Administrative Services Only) sell-through.

When I'm at a conference and people ask what we do, I say, 'See everything in this room? We're trying to make it easy for an individual to navigate and to take the point solution fatigue away from the health plan or the employer by being the place where a network for digital and virtual care exists, so we're really creating a network approach.'

HCI: Does Solera vet the digital health solutions in terms of their efficacy or trustworthiness? Or do the health plans say to you that they work with a particular company and would like you to make it part of your network?

Alphen: We do have plans say, 'Hey, we love these guys. We want to make them part of the network.' But because of our vetting process, it doesn't always happen. We start with clinical vetting. Then there's business alignment. Do they serve a care path that we already serve, or do they serve a new care path? Because that's how we think about it: what is the appropriate care path? There's a very clinical lens. The trick is that they need to agree to more of a pay-for-performance model, which is that matching up of engagement with clinical outcomes. Can they share the data so that we can build a value-based framework around billing? There are different billing methodologies. They're often per member/per month, and that's where a lot of that point solution fatigue comes from. The employers or the health plans are always having to adapt to somebody's new methodology. We clean that up for them, generally speaking.

HCI: Solera just announced a new behavioral health network with companies Calm and Lyra Health. Could you talk about that?

Alphen: Yes. We've been very successful in the mental health space with some prior partners. We thought we needed a little bit more of an expansive category, to really meet our customers' needs. Calm grabs a lot of attention because of their deep consumer background, but they've launched Calm Health for Employers, which also asks questions about other conditions that we serve. We'll be able to map some of that data into other offerings that we have. Behavioral health gives us some flexibility to do some more specific offerings. I don't really want to get into what those are yet, but there are other areas that we can go into in behavioral health.

HCI: Let me turn to Mike. I saw some information about Solera unveiling a framework for responsible and transparent use of AI in digital health to be used across your partner ecosystem. Could you first talk about where governance most often collapses once AI goes operational, and what effective, enforceable AI oversight needs to look like now in this space?

Levin: You're asking: how does AI governance break down? Often, it's the same things that you see in security. First and foremost, it's inventory drift. A lot of organizations don't even realize that they're using AI, especially in production, or that their network partners are employing it, so they don't even have a proper inventory of where the AI is actually embedded.

Monitoring atrophy happens quite a bit, particularly when you're building out a governance program. The monitoring cadence starts to drift and the people who are monitoring may not be monitoring continuously, and that becomes a huge risk. The third thing is incident response gaps. When we engage with our payers, that is the one that they're continually asking us about. A pilot doesn't actually surface real incidents because it's very limited in scope. But once you're actually out in the real world, production is very different. When an AI makes a problematic recommendation, how do you respond to it? In a live clinical context, you need an escalation path. You need to be pulling in the subject matter expertise. These have very limited 24- to 72-hour reporting windows as well. More than anything else, the incident response is not really thought through. It has to mirror what you do from a cyber perspective. If there are pre-existing models that exist in security, you can basically copy them over to the AI side.

HCI: Solera is sitting in kind of a unique position at the center of a digital health ecosystem of separate companies. Is this governance framework one you're building to help all these companies as a baseline you expect them to reach in terms of things like transparency?

Levin: We have a pretty expansive AI governance program for our digital health providers. That's something that we keep being asked about by our payers. There's a lot of nervousness around this, because it's an unknown and there's a lot of overlapping and sometimes contradictory guidance around it. We see dual risks. There's the clinical and there's the compliance, and they don't always align. Clinical risk is about patient safety and care quality. Does the AI surface accurate recommendations? Does it hallucinate? Does it perform equitably? If the data that's coming in has bias, the results that come out also have bias. Could it lead to harm if the output is wrong?

Then there's the compliance risk, which is the one that you hear more about from the legal side, and that's regulatory exposure. Everybody's familiar with HIPAA, but there are all these new laws, particularly in California and Colorado. Washington state has one, too. The FTC is looking like they'll start enforcing this as well. So there's a lot of fear from the legal risk perspective as well.

We have a cross-functional oversight committee for our AI governance, which includes engineering, legal, security, and compliance. Each of them has a unique perspective on the AI problem, if you will. These perspectives have to work together, because the risks that I identify are not the same risks that the engineering team or the clinical team will see. That's how you have to manage it. The practical reality is that good clinical governance often satisfies the compliance requirements. So if you do one right, it will often lead to the other. You have to document everything. You need a big paper trail.

HCI: Are these digital health companies in your network appreciative that you guys are doing this? Is it like you're helping them, or is it like you guys are the taskmasters who are making them do this stuff?

Levin: Well, some of them are less happy than others. We have a range of digital health partners because we have a pretty large portfolio, and some of them are much more mature, and they're able to provide model cards. They're able to explain risk, to explain bias and other issues. We have to walk them through this, but by doing that, they actually build out better practices internally.

The part that surprised me more than anything was that you might think AI is everywhere, but it's really not always being applied directly in the delivery of care. It's in the back end. It's basically being used for coding or as a copilot in the office, but it's not actually built into a lot of these healthcare apps, because there's so much nervousness around it from a compliance perspective.

HCI: I read that full implementation across the partner network was expected by the end of the third quarter of 2025. Did that stay on schedule?

Levin: There have been some changes in our network, so since that statement we've had some folks join and others leave. But we do have visibility into the AI status across all of our partners. We know the posture of all of them, and we're helping the ones that need the help.

HCI: And Solera is developing an AI maturity scoring capability with interactive dashboards for security and compliance, expected to roll out this year?

Levin: We're working on that as part of our larger Halo platform. It's one of the product features. Think of it as a scoring mechanism for the digital health providers, from a security perspective as well as from an AI risk perspective. Think of it almost like a credit score.

HCI: That already sounds like a lot, but are there any other big tasks on your to-do list for 2026?

Levin: That is a lot. I would say that AI probably consumes about 50% of my team's time from a governance and oversight perspective, because there's so much unknown about it right now, and it's so dynamic. But we're not alone. I've seen this across the payer ecosystem as well. A lot of the payers have invested fairly heavily in building AI governance teams, and no two of them are the same. They all respond differently. They're all interpreting the regulations differently. If you've seen one AI governance program, you've seen one AI governance program.
