Artificial intelligence was certainly a hot topic of 2025 and will continue to be so in 2026 and beyond. Many health care organizations are eagerly adopting AI-enabled tools that promise to accelerate clinical workflows, streamline documentation, strengthen operational efficiency, and reshape patient engagement. Yet successful adoption requires more than enthusiasm: It demands thoughtful governance, responsible implementation, and a realistic understanding of the opportunities and risks.
Regardless of whether an organization has already implemented AI-enabled technology at scale or is just beginning to explore it, the following principles are important to consider.
- Know Your Why. AI should never be a solution in search of a problem. Although the market is exploding with tools and promises, an AI strategy anchored around specific clinical or operational challenges can focus efforts around meaningful improvements and minimize distractions.
- Implement a Strong Governance Framework. A strong governance framework is the foundation of responsible AI use. We recommend organizations establish a multidisciplinary governance and oversight structure that includes key stakeholders such as clinicians, operational leaders, compliance, legal counsel, and IT. Key elements of the framework include:
  - A clear pathway for review and approval of AI tools
  - Transparent documentation of decision-making and legal and ethical considerations
  - Defined roles and processes for monitoring performance and potential risks, including, for example, regulatory compliance, safety concerns, and bias
- Prioritize Ethical and Legal Considerations. AI systems rely on data, and health care data is among the most sensitive. Organizations must ensure compliance with HIPAA, state privacy laws and other applicable laws and regulations. To effectively manage these risks, ethical and legal considerations should be part of the evaluation from the outset, not an afterthought. The following practices are key:
  - Rigorous vendor due diligence
  - Strategic contract negotiation
  - Effective data governance and access control practices
  - Clear communication with employees, patients, and other stakeholders
- Validate Before You Deploy. After you have completed your due diligence and contract negotiation, the risk management work is not over. Before implementation, it is important to validate that the tools perform as intended. This can include, for example, testing clinical accuracy and reliability, performance across diverse populations, and alignment with clinical or operational workflows.
Pilot programs can be an effective approach to test functionality, gather feedback, refine workflows, and assess legal risk before more widespread implementation.
- Think Supplement, Not Supplant. Although AI capabilities are increasingly impressive, AI is generally best suited to augment, not replace, human decision-making and judgment in health care organizations. At the end of the day, humans still hold the responsibility for ensuring accuracy, compliance, and safety. To help ensure successful adoption and manage risk, implement training that builds competence in interacting with the tools along with awareness of legal responsibilities, and adopt and communicate clear policies on human oversight, decision-making, and accountability.
- Measure, Monitor, Adjust and Evolve. AI is not a “set it and forget it” technology. By design, AI changes, as does the operational and regulatory world in which AI tools are used. To ensure ongoing risk mitigation, it is important to:
  - Determine at the outset which metrics and measurements are important to continued accuracy, effectiveness, safety, and compliance, and how unintended consequences and disruptions will be identified
  - Adopt proactive processes to monitor AI output, outcomes, and performance
  - Ensure effective feedback loops
  - Monitor regulatory updates, evolving standards, and legal obligations
- Engage in Transparent Communication. Trust is foundational to health care organizations. Transparency strengthens trust and reduces fear and misinformation. Effective communication also can reinforce the organization’s commitment to responsible, compliant use and ultimately mitigate legal, financial and reputational risk.
AI has extraordinary power and potential to transform health care. But in the words of Spider-Man’s Uncle Ben, “with great power comes great responsibility.” AI must be adopted thoughtfully, ethically, and in compliance with legal requirements and principles of risk management. Although health care organizations often want to move quickly to adopt and implement new AI, taking the time to invest in governance, legal oversight, validation, effective compliance processes, and human-centered design and communications will not only reduce risk but also better position the organization to unlock meaningful improvements.

