This article is part of Thompson Coburn’s ongoing series on California’s 2026 CCPA regulatory updates. In our initial overview, we highlighted three headline additions in the updated regulations—automated decision-making technology (ADMT), risk assessments, and cybersecurity audits. This piece focuses on ADMT, which is often the most operationally immediate of the three. Follow-up articles will address the risk assessment and cybersecurity requirements in more detail.
California’s updated CCPA regulations add specific requirements for businesses that use ADMT in certain high-impact contexts. For many organizations, this is the first time automated tools embedded in decision workflows may trigger consumer-facing rights with concrete operational requirements—not simply an internal governance discussion.
This article is written for executive leadership at CCPA-covered entities that are already using, or actively considering, automated decisioning for “significant decisions” without meaningful human review. The goal is to clarify what the regulations require, what they signal about regulator expectations, and what steps businesses can take now to build a program that works in practice.
What ADMT is under the regulations
ADMT is defined broadly in the regulations as “any technology that processes personal information and uses computation to replace human decisionmaking or substantially replace human decisionmaking.” This definition encompasses many artificial intelligence technologies, including machine-learning models and AI-enabled scoring, screening, and ranking tools. The key question is not whether a tool is marketed as “AI,” but whether it uses computation to meaningfully drive decisions about individuals.
“Substantially replace human decisionmaking” means using a technology’s output “to make a decision without human involvement.”
The regulations then define “human involvement” with specificity: it requires that the human reviewer:
- knows how to interpret and use the technology’s output to make the decision;
- reviews and analyzes the output and “any other information that is relevant” to make or change the decision; and
- has authority to make or change the decision based on that analysis.
In practice, ADMT as defined can describe automated screening, scoring, ranking, routing, flagging, and eligibility determinations—especially where outputs are treated as decisions rather than inputs.
What this means in practice: The regulations are not describing check-the-box “approval.” They are describing accountable decisionmaking: informed review, independent judgment, and authority to change outcomes. There is no meaningful human involvement where no one can override the model or change the outcome.
When the ADMT rules apply
The ADMT requirements apply when a business uses ADMT to make a “significant decision” about an individual.
A “significant decision” is defined by category and focuses on decisions that result in the provision or denial of:
- employment opportunities or compensation,
- financial or lending services,
- housing,
- education enrollment or opportunities, or
- healthcare services.
This is one reason ADMT compliance is not a privacy-policy project. The operational work sits in HR, lending/credit, admissions, housing, and healthcare workflows—where decisions are made and enforced.
On timing, the framework is straightforward: businesses using ADMT for significant decisions before January 1, 2027 generally must be compliant by January 1, 2027; businesses that begin using ADMT for significant decisions on or after that date should plan to be compliant when the use begins.
Achieving compliance in practice
ADMT compliance requires four capabilities tied to specific decision workflows.
Pre-use notice (at or before collection). Before using ADMT for a significant decision, a business must provide a notice stating the specific purpose for using ADMT and how individuals can exercise applicable rights. Practically, the notice should appear where the workflow begins—at or before data collection for the ADMT purpose (or before previously collected data is repurposed for it).
Opt-out (where required). In covered contexts, individuals may have the right to opt out of ADMT used for a significant decision. Opt-out cannot be implemented as a link alone. It requires an alternative decision path that does not rely on the ADMT output—particularly where automated screening or eligibility logic drives outcomes without meaningful human involvement.
Access and explanation. Businesses must offer a mechanism to submit requests related to ADMT use and be prepared to respond in plain language. The response should cover: the purpose of the ADMT use; meaningful information about the logic sufficient for understanding how the output was generated; and how the output was used in the decision (including the outcome and whether any human review occurred).
Appeals (where applicable). If a business relies on a human-review approach in lieu of offering an opt-out in certain scenarios, it may need an appeals process with reviewers who are trained, have access to relevant information, and have authority to change outcomes.
Read together, these requirements function as decision governance: the organization must be able to describe what the system is doing, why it is being used, and how decisions can be challenged.
What the ADMT rules signal—and how to implement without overbuilding
The ADMT requirements are written as a governance standard, not a technical standard. The definition of “human involvement” is instructive: regulators are signaling that accountability for significant decisions should remain with the business, not the tool.
A decision-first implementation approach keeps the work focused:
- Define the in-scope use cases. Develop an organization-wide method for identifying a short list of ADMT use cases that (i) operate without meaningful human involvement as defined in the regulations and (ii) determine access to employment, lending/financial services, housing, education, or healthcare. That list becomes your practical compliance perimeter and keeps efforts focused.
- Decide the alternative path before offering opt-out. For each in-scope use case, define what happens on opt-out: who reviews, what inputs they consider, decision timing, and escalation. If this is undefined, opt-out will not operate consistently.
- Set an organization-wide explanation standard. Develop a consistent practice within the organization to describe, in plain language, what ADMT is used for and how it functions in a significant decision workflow: the purpose, the key input categories, what the output represents, and how the output is used in decisionmaking. Where vendors are involved, the complexity increases—but accountability does not shift. The business should be able to explain and support its decision process without relying on vendor talking points at the moment a request arrives.
- Pressure-test vendor capability early. Many ADMT use cases are vendor-enabled. Before you scale or expand a use case, confirm whether the vendor can support the operational requirements: delivering information needed for pre-use notice content, producing decision records and meaningful explanations tied to an individual outcome, supporting opt-out handling (including suppressing automated decisioning where required), and supporting appeals workflows with auditable logs. If the vendor cannot support these functions, the compliance burden shifts to internal teams—or the use case needs redesign.
- Put change control around ADMT use cases. Treat retraining, new data sources, new outputs, or expanded use as triggers to reassess notices, opt-out mechanics, explanation content, and vendor obligations.
Most ADMT programs fail the same way. Someone assumes someone else owns the decision. Opt-out gets a button, but no one has defined what actually happens when a consumer clicks it. And when a regulator asks how the tool works, the answer comes from a vendor’s marketing deck rather than the organization’s own documentation.
These are not technical failures. They are governance failures—and they are entirely avoidable when organizations build the right infrastructure from the start: clear decision ownership, a defined alternative workflow, and documentation the organization itself can stand behind. The issue of decision ownership runs deeper than a single implementation step, and will be addressed in a future piece in this series.
Risk assessments: why ADMT can accelerate the timeline
Many organizations will focus on the December 31, 2027 deadline that applies to certain processing begun before January 1, 2026 and continuing after that date. That date matters, but it should not be read as a safe harbor for new ADMT initiatives.
The regulations also require a risk assessment “before initiating any processing activity” covered by the risk-assessment triggers, including using ADMT for a significant decision and processing personal information intended to train ADMT for significant decisions. They also require updates after material changes “as soon as feasibly possible, but no later than 45 calendar days.”
Practical implication: for organizations deploying or expanding ADMT in significant-decision workflows during 2026, risk assessment work may be required before launch or change—not at the end of 2027. ADMT governance and risk-assessment timing should be planned together.
Conclusion
ADMT compliance is not a checklist exercise. It requires aligning legal requirements with how significant decisions are actually made—across people, process, and systems. The organizations that will be positioned for January 2027 are the ones making these governance decisions now.
Thompson Coburn’s Cybersecurity, Privacy and Data Governance team helps clients (1) scope ADMT use cases and decision workflows, (2) design compliant notice, opt-out, access/explanation, and appeals mechanisms that operate in practice, and (3) integrate ADMT readiness into vendor selection, contracting, and change management.
Our next pieces in this series will address the companion requirements—risk assessments and cybersecurity audits—and how to build an integrated program rather than three disconnected workstreams.