Healthcare advocacy groups have weighed in with responses to the Department of Health & Human Services’ Request for Information on “Accelerating the Adoption and Use of Artificial Intelligence as Part of Clinical Care.” Topics raised include data fragmentation, regulatory and liability ambiguity, governance challenges and workflow integration.
In its letter, the Medical Group Management Association (MGMA) said it supports HHS’ goal of establishing federal policy for AI that is predictable, proportionate to risk, and supportive of innovation, noting that a fragmented or inconsistent federal approach to AI oversight could create confusion, duplicative requirements, and additional administrative burden for practices.
The group expressed concern about recent deregulatory proposals in the HTI-5 Proposed Rule, saying they could affect transparency requirements that enable medical practices to access consistent information about how AI-enabled decision support tools are developed, validated, and intended to be used.
MGMA said that HHS should ensure comparable transparency mechanisms remain in place if removal or modification of these requirements occurs as part of future policy. “The lack of a clearly articulated transparency framework could reduce visibility into model attributes, data sources, and validation methods and potentially expose practices to greater direct evaluation and liability burden,” the group wrote. “At a time when governance and trust in AI are critical, certification-based transparency as part of the HHS Office of the National Coordinator (ONC) Health IT Certification Program remains important for informed adoption and safe implementation.”
MGMA stressed that transparency is also critical as it relates to payers. It contends that HHS should require payers to be transparent about their use of AI for utilization management, claims processing, and coverage limitations, and ensure AI systems used by payers are evidence-based, don’t exacerbate administrative burden for medical groups, and don’t interfere with physician clinical decision-making.
In terms of challenges facing medical groups, MGMA highlighted organizational readiness and governance hurdles. In medical groups, AI governance readiness remains uneven. A Jan. 20, 2026, MGMA Stat poll (n=328) found that 42% of medical group leaders report having AI governance structures or formal policies in place (20%) or in development (22%), while 56% report having none and 2% are unsure.
Organizational readiness (including governance capacity, workforce training, infrastructure, and financial resources) and practical limitations can shape organizational decisions involving AI tools and their ability to deliver meaningful value in practice settings, MGMA wrote.
In its response, the American Hospital Association wrote that certain statutes and regulations in the healthcare ecosystem, such as the patchwork of state privacy laws and 42 CFR Part 2, have indirectly impacted hospitals and health systems’ ability to develop and deploy certain AI tools.
AHA is encouraging the administration to work with Congress to enact a full HIPAA preemption provision, noting that “varying state laws only add costs and create complications for hospitals and health systems.” AHA also urges the administration to work with Congress to remove remaining requirements under 42 CFR Part 2 regarding the sharing of substance use disorder data that it says hinder care team access to critical health information.
AHA also raises the issue of the lack of clarity surrounding liability as a significant barrier to provider adoption of AI tools. “AI systems are often developed and deployed with inputs from a variety of stakeholders, where providers are just one of many sources,” its letter says. “Also, certain algorithm elements may be treated by developers as proprietary, which makes it challenging for hospitals and other AI users to identify model flaws, discrepancies between training data and real-world applications, or any model drift over time.”
While many of these issues may intersect with case law and state-level malpractice statutes, AHA notes that HHS can “play a meaningful role in supporting reasonable standards for developer transparency and post-deployment monitoring. Some of these issues underscore the importance of policies like post-deployment standards to ensure the ongoing integrity of tools and transparency standards for health IT certification.” As the agency continues to explore novel liability challenges, AHA urges HHS to provide formal mechanisms for provider input.
The Alliance of Community Health Plans (ACHP) recommends that HHS establish clear guardrails for AI-enabled utilization management decisions, including requirements for:
• Human review of adverse determinations;
• Explainability of decision logic to providers and patients;
• Ongoing monitoring for bias, error rates and disparate impact.
ACHP also recommends that HHS align prior authorization modernization efforts with existing interoperability initiatives, including FHIR-based Prior Authorization APIs, to ensure AI tools can operate within standardized, transparent workflows.
Outcomes-based reimbursement models
ACHP also supports advancing outcomes-based reimbursement models that align payment for AI-enabled tools with measurable improvements in care quality, access and total cost of care. To that end, ACHP recommends HHS:
• Encourage the use of performance-based payment arrangements such as shared savings, performance guarantees or risk corridors.
• Promote standardized performance metrics for AI tools (e.g., impact on avoidable utilization, clinician efficiency, patient adherence or health equity outcomes) to enable consistent evaluation across payers and providers.
• Modify existing medical code sets and ensure EHR capabilities can capture when AI is used in medical care and adjust payments accordingly.
ACHP recommends HHS establish targeted safe harbors for early adoption of AI tools with demonstrated potential value. Examples include:
• Temporary regulatory or payment flexibilities for pilot programs operating under defined guardrails, transparency requirements and monitoring protocols.
• Protection from retrospective payment recoupment when AI tools are deployed in good faith and consistent with published federal guidance.
• Explicit encouragement of provider-plan collaboration to test AI tools within alternative payment models without triggering fraud and abuse concerns.
Several of the stakeholders joined ACHP in pointing out that HHS could support health AI innovation by addressing interoperability challenges that hinder provider connectivity and widespread health IT adoption. ACHP said it supports efforts to improve data standards and interoperability, recognizing that access to accurate, robust data is essential for scaling AI tools safely and effectively. Ensuring data quality, volume and hygiene is a critical element of successful health AI implementation, given the potential for AI models to ingest incomplete, inaccurate or poor-quality healthcare data. Additionally, ACHP member companies recognize the need for sufficient health IT infrastructure to support the data exchange required to enable value-based, technology-enabled care.
Among ACHP’s interoperability recommendations to HHS are to improve provider connectivity to FHIR-based APIs, including Prior Authorization and Provider Access APIs, by:
• Creating a national digital endpoint directory that enables reliable discovery of payer and provider endpoints.
• Establishing clearer EHR workflow standards so AI-enabled data exchange is embedded into clinical operations rather than treated as an add-on.
• Advancing provider-focused adoption requirements that prioritize usability and reduce implementation friction.
Interconnected barriers
The Society for Cardiovascular Angiography and Interventions identified several interconnected barriers that impede both the development and responsible deployment of AI in clinical care, including data fragmentation, regulatory and liability ambiguity, and workflow integration.
“The current landscape does not clearly delineate responsibility when AI influences a clinical decision that results in patient harm. The professional accountability framework in medicine is well established: the licensed clinician is accountable. But when an AI tool contributes to a decision, and the clinician had no role in designing, validating, or selecting that tool, the assignment of responsibility becomes unclear,” SCAI wrote. “Technology developers operate under liability protections that were not designed for clinical consequences. This ambiguity discourages adoption by the very professionals who would need to use these tools.”
SCAI also pointed to algorithmic opacity: “Most clinical AI operates as an associative model whose internal logic is not transparent to clinicians. Licensed professionals are trained to reason from evidence, document their rationale, and defend their decisions. Acting on opaque algorithmic recommendations is fundamentally at odds with how professional clinical judgment is exercised and evaluated.”
Another concern is workflow integration. AI systems are often designed without sufficient understanding of clinical workflows, SCAI wrote. “Tools that generate excessive alerts, require parallel documentation, or interrupt established care pathways create friction rather than efficiency. In high-acuity settings like the catheterization laboratory, poorly integrated AI could compromise rather than enhance patient safety.”
