March 3, 2026

AI Governance for Business Leaders: Use It. Use It Safely. And Verify Everything.

“The horse is here to stay, but the automobile is only a novelty — a fad.” – Attributed to Horace Rackham’s investment banker, when advising Rackham not to invest in Henry Ford’s newly incorporated car company.

This quote is a reminder that organizations routinely underestimate technologies that end up reshaping how work gets done. Artificial intelligence (“AI”) is no longer “coming soon” to business operations — it’s here, it’s in tools your teams already use, and it’s showing up in how customers, vendors, and regulators expect work to be done. Used well, AI can reduce drudge work, surface risks faster, and free people up for the parts only humans can do: judgment, strategy, and accountability.

Used poorly, it can also create very human problems — confidentiality slips, hallucinated “facts,” IP headaches, and awkward conversations with customers and regulators. The answer is not “ban it and hope it goes away.” The answer is govern it: clear internal rules, training, and a verification culture that treats AI output like a smart intern’s first draft — helpful, but not authoritative.

This article focuses on two practical areas: (i) internal AI use policies that enable safe, responsible adoption; and (ii) board and executive oversight questions leaders should be asking before (and after) AI tools roll out.

How to Know When (and Where) to Use AI Safely

To decide where AI belongs, start with the decision, not the tool. Ask: what are we trying to decide or produce, who will rely on it, and how wrong can it be? That “decision-first” lens helps organizations adopt AI confidently without sleepwalking into high-risk uses.

Here’s a simple way to apply it:

  1. Name the output. Internal summary? Customer communication? Contract language? Compliance statement? Something that could be audited—or litigated?
  2. Assess impact if it’s wrong. Low (internal brainstorming), medium (internal analysis that drives actions), high (customer-facing, regulated, money/safety/employment-impacting).
  3. Check the data. Will the model see personal data, trade secrets, source code, or other confidential information? If yes, pause and confirm tooling, permissions, and contractual/security protections.
  4. Match guardrails to the risk. The closer the output gets to customers, regulators, or real-world consequences, the more you want: approved tools, restricted inputs, clear human ownership, and verification that points back to sources.

Rule of thumb: the closer AI gets to customers, compliance, cash, or safety, the more it should act like an assistant—not the decider.

Thompson Coburn frequently helps clients operationalize this decision-first approach—triaging use cases, designing right-sized controls, and aligning governance with privacy/security obligations and vendor contracts—so teams can use AI responsibly without freezing innovation.

Trust, But Verify (Yes, Every Time): Practical Rules for Internal AI Use

“Trust but verify” is the right mindset for everyday AI. Modern tools can be excellent at summarizing, organizing, drafting, and translating — and also remarkably confident when they’re wrong. Large language models may “hallucinate” facts, invent citations, or flatten nuance that matters. The fix is straightforward: use AI for acceleration, not substitution — and bake verification into the workflow.

Practical test: If the output will be sent to a customer, used to make a material business decision, embedded into a product, or relied on for compliance, assume it needs the same level of scrutiny as any other draft — because it does.

Good news: most of the value in AI comes from low-risk uses (summaries, issue lists, first-pass drafting) where human review is already the norm.

What a “Good” AI Use Policy Looks Like

A usable policy isn’t a novel; it’s a set of clear permissions and bright lines people can follow under deadline.

Most effective policies include:

  • Approved tools only. Don’t make every employee a vendor risk assessor.
  • No sensitive inputs without authorization. Define what counts as confidential information, personal data, privileged material, trade secrets, source code, and regulated data — and where that data may not go.
  • Verification requirements. Identify when human review is mandatory and what “review” actually means (more than a quick skim).
  • Prohibited uses. For example: generating legal advice to customers, automating employment decisions without review, or using AI to create “official” compliance statements without validation.
  • Documentation expectations (commensurate with risk). Keep a short record of whether AI was used, for what purpose, and what verification occurred—with enhanced documentation for regulated or other high-risk uses.
  • Client/contract/regulatory constraints. If you’ve promised customers you won’t use AI on their data (or you need consent), your policy should reflect that reality.

We often help leadership teams translate these principles into a practical operating model—who can use what tools, for which workflows, with what guardrails—so the policy becomes a business enabler rather than a speed bump.

A Few “Monday Morning” Use Cases

A. Vendor procurement: Consistent requirements, faster reviews

  • Use AI to generate/maintain a single checklist (data flows, subprocessors, retention, hosting, security controls, incident notice, audit rights).
  • Compare proposed terms to your playbook (privacy/security, indemnity, limits, confidentiality, IP) and flag non-standard provisions.
  • Turn vendor security and privacy documentation into a short “red flags + questions” list for Legal/Information Security review.

Guardrail: AI organizes and highlights—humans decide. Verify summaries against the source documents before relying on them.

B. Compliance operations: Draft faster, validate harder

  • Produce first drafts using approved language and internal standards (then revise in human voice).
  • Convert new guidance into “what changed / who’s impacted / what decisions are needed.”
  • Generate evidence request lists and testing checklists to keep audits consistent.

Guardrail: anything that becomes an “official” compliance statement needs source citations, review, and sign-off.

C. Financial analytics: Explain variance, don’t outsource judgment

  • Draft plain-English variance explanations and KPI movement summaries from structured inputs.
  • Turn financial tables into a first-pass executive summary with clear “drivers” and questions for follow-up.
  • Flag anomalies/outliers to investigate (not “answers” to accept).

Guardrail: tie every claim back to the ledger/BI source-of-truth; treat AI as a drafting assistant, not an auditor.

Verification Protocols: Specific Beats Aspirational

“Human review required” is a start, but it’s not a process. Every AI-assisted output that will be relied on should have:

  • A named human owner (someone is accountable);
  • A defined review standard (e.g., confirm every customer-facing claim; validate every compliance representation; cite-check anything legal);
  • A source-first posture (links back to underlying documents, system logs, clause text, or primary authority); and
  • A no-surprises rule (if you can’t explain it without the model, don’t ship it).

For higher-risk outputs, consider a simple sign-off requirement (e.g., Legal/Information Security approval for external-facing claims or regulated decisions).

One practical move: define a handful of risk tiers (internal drafting vs. external communications vs. regulated decisions) and set escalating review requirements.

Training to Instill Confidence

Training should focus on tool literacy, not just “AI is risky.” People need to know what the tool is good at, what it’s bad at, where data goes, and what “confidential” means in practice.

The goal is confidence: teams that understand the limits are more likely to use AI appropriately — and less likely to avoid it altogether.

A quick win: publish an internal “AI cheat sheet” with:

  • approved tools,
  • do/don’t examples,
  • and a one-page escalation path (“if you’re unsure, here’s who to ask”).

Accountability: Clear Ownership and Cadence

You don’t need a new empire to govern AI. You do need clear ownership and a cadence.

A lean governance model often includes:

  • a cross-functional group (Legal + Security + Privacy + Compliance + IT + key business stakeholders),
  • a short approved-tool list,
  • periodic review of incidents and near-misses,
  • and a simple process for onboarding new tools and use cases.

Documentation can be lightweight: a short record of what the tool did, what data was used, and what review occurred — especially where regulatory scrutiny is plausible.

Board and Executive Oversight: The Questions That Matter

AI adoption is a leadership issue, not just an IT issue. Like selecting a core system, it implicates vendor risk, confidentiality, product quality, employment practices, and regulatory compliance. Leadership’s job is to ensure there’s a reasonable process — and that “reasonable” keeps pace with the technology.

A Practical Duty-of-Care Checklist for AI Tools

  • What data is sent to the vendor, and is it used to train models?
  • What security controls apply (access, logging, retention, incident response)?
  • What does the contract say about confidentiality, restrictions on vendor use, audit/assurance, and subcontractors?
  • How do we supervise use — and what’s the escalation path when something looks off?
  • Are we using AI in a way that changes our customer representations, regulatory posture, or risk profile?

Duty of Loyalty: Align AI Use With Commitments

Leaders should ensure AI decisions are made in the organization’s best interest — balancing innovation with customer trust, data protection, and the company’s public commitments. This includes managing conflicts (e.g., “free” tools that monetize data) and aligning AI use with stated values and contractual promises.

Why This (Increasingly) Matters Externally

Customers, business partners, regulators, and the public increasingly expect organizations to use AI responsibly and transparently — not perfectly, but thoughtfully. Organizations that can say “here’s how we control risk” (and demonstrate it) are better positioned in procurement cycles, audits, and inevitable incident response conversations.

Conversely, when AI goes wrong, the damage is often less about the technology and more about the basics: uncontrolled data sharing, poor oversight, and over-reliance on outputs that were never meant to be authoritative.

Conclusion

AI doesn’t need to enter your business through high-stakes decisions. Many organizations begin with repeatable back-end workflows—vendor intake, compliance drafting, and financial narratives—where the value is immediate and guardrails are straightforward. From there, teams can expand responsibly using the same decision-first discipline: define the output, assess impact, control the data, and match verification to risk. That keeps adoption practical and defensible—without becoming paralyzing.

AI will change the shape of risk in your organization—but it doesn’t have to increase it. Avoiding AI entirely can be a risk of its own: slower cycles, higher costs, and competitive disadvantage. The winning approach is responsible adoption—approved tools, clear rules, practical training, and verification that treats AI as a starting point, not an authority.

Thompson Coburn’s multidisciplinary Artificial Intelligence team helps companies implement AI with practical guardrails—from governance program design and vendor contracting to privacy/cyber alignment, employment considerations, and dispute readiness.
