AI 101: A Governance Action Plan

Matt Fowls

Artificial Intelligence (AI) is already changing business and healthcare in profound ways: Candidates are being screened by AI during the interview process; behavioral healthcare specialists are using AI to annotate patient encounters in real time; dentists are relying on AI to customize treatment plans; data and chart audits are being completed by AI, with discrepancies identified in seconds; claims for payment are being denied algorithmically. The list of once resource-intensive tasks that AI is streamlining goes on.

We can expect AI's impact to continue over the coming years; the question is how. So what should you do if you're a business leader considering AI? Should you adopt it this early in the game? How can you do so responsibly? New laws regulating AI are undoubtedly coming; how can you position your use of AI for long-term success? A few tips come to mind for effective AI use that mitigates risk.

A Checklist for Responsible AI Deployment

  1. Create an AI Assessment & Oversight Committee. First and foremost, responsible deployment of AI requires deliberate oversight and guardrails. And like any good guardrails, those governing AI should come from experts with diverse perspectives drawn from both inside and outside the organization. AI information technologists, seasoned privacy and corporate counsel, and operational leadership, to name a few, can enrich strategic planning around AI in a way that preempts unintended consequences of use. Convening these experts regularly in a standing AI Assessment and Oversight Committee sustains a continuous conversation about AI's unique impact on your operations.
  2. Conduct AI & Privacy Impact Assessment(s). Once needs for AI are identified, it is mission-critical to illuminate potential impacts on information privacy, data security, and other sensitive areas. Practically, this can occur through one or more AI & Privacy Impact Assessments. Considerations include potential impacts on patient and consumer privacy; whether the AI will comply with current obligations, such as those in existing contracts and partnerships; whether the AI will record inputs, such as patient conversations or location data; and whether the AI will rely on any additional third parties, along with how those third parties will process information.
  3. Update required documentation. If AI is to be used, a valuable step before deployment is evaluating whether any organizational privacy or other documents need updating. Informed consent forms, privacy notices, and other consumer-facing disclosures may contain wording that, without revision, would not accurately reflect operational practices once AI is in use. This step is vital: failing to appropriately disclose how the AI will collect, store, use, and destroy information or data can expose the organization to liability under consumer protection, privacy, and other laws and regulations. If the organization uses third-party contractors, it may also want to update its agreements with those contractors to reflect any changes warranted by the AI.
  4. Obtain any needed consents. Before implementation, an organization hoping to use AI should also consider approaching any independent partners the AI will affect. For example, a contracted provider seeking to use AI inside a third-party hospital it does not own should consider talking to that hospital first. Doing so helps ensure that existing obligations under operative bylaws, medical staff rules, and other governing documents are met.
  5. Draft AI Policies and train all personnel appropriately. Training personnel who will use the AI before they obtain access to it is imperative: doing so minimizes the potential for mistakes, documents that untrained personnel were not given access, and affords additional insight into how the AI will perform in real-time use. Strong AI policies can delineate incident response and escalation procedures, algorithmic transparency, bias mitigation, user consent protocols, development standards, and much more.
  6. Test the AI. As with most information technology initiatives in the healthcare space, testing the AI prior to use can be the difference between success and failure. 
  7. Deploy the AI and monitor it closely. In addition to consistent real-time oversight of the AI, an effective monitoring program should include both regular and random audits. The team should take care to address any obstacles with the appropriate personnel and adjust deployment of the AI where indicated, including discontinuing it entirely where necessary.

How We Can Help

The steps enumerated above are not rocket science. They do, however, comprise a focused action plan requiring meaningful coordination across departments, personnel, budgets, and other organizational resources, which can be a heavy lift.

The team at Milgrom & Daskam can bring expert legal perspective to your strategic conversations about AI. We can also coach your team through the privacy and other implications of proposed AI, negotiate favorable legal terms for the arrangement, and more. Together, these measures support a tailored, robust, and sophisticated AI strategy for your organization that mitigates risk, maintains compliance, and enriches operations. Get in touch with us here.

ABOUT THE AUTHOR

PARTNER

Matt joined Milgrom & Daskam in May 2024 and provides growth-centered corporate counsel to businesses of all sizes, with particular attention to business organization, healthcare, information privacy, technology, and software as a service (SaaS). Matt empowers clients to achieve their strategic goals by coaching them through legal and operational challenges in a range of contexts.
