Artificial intelligence is becoming a routine part of hiring and workforce management, but in California, new rules mean employers may need to slow down and take a closer look.
On June 27, 2025, the California Civil Rights Council approved new regulations, effective October 1, 2025, that directly address how employers may and may not use artificial intelligence, algorithms, and automated decision‑making systems in employment. These regulations are designed to prevent discrimination against applicants and employees based on characteristics protected by the Fair Employment and Housing Act (FEHA), including situations where AI tools create an unlawful adverse impact.
In addition to expanding potential liability, the regulations significantly increase recordkeeping obligations, requiring employers to preserve certain records and data for at least four years. Because this area is evolving quickly, both legally and technologically, employers should work closely with experienced employment counsel before rolling out AI tools and should continue consulting counsel after implementation to ensure ongoing compliance.
Key Definitions Employers Should Understand
The California Civil Rights Council adopted intentionally broad definitions that increase potential liability for employers, as well as for third parties acting on their behalf, and make compliance more challenging.
1. Who Counts as an “Employer”?
An “employer” includes any person or entity with five or more employees, regardless of whether all employees work in California. Importantly, the definition also includes agents acting on an employer’s behalf when performing functions traditionally handled by employers or regulated under FEHA—such as recruiting, screening, hiring, promotions, or decisions about pay, benefits, or leave.
As a result, outside vendors involved in resume screening, applicant assessments, or other hiring‑related services may also face potential liability.
2. Who is an “Applicant”?
“Applicants” include not only individuals who submit applications, but also those who claim they were deterred from applying due to an allegedly discriminatory practice. This means employers could face claims from individuals they never directly interacted with but who contend that aspects of the hiring process discouraged them from applying.
3. Who is an “Employee”?
“Employees” are broadly defined as individuals working under an employer’s direction and control, whether by appointment, contract, apprenticeship, or other arrangement—written or unwritten. While independent contractors are excluded, informal or poorly documented relationships may still give rise to disputes.
4. What Are “Employment Benefits”?
“Employment benefits” are defined broadly and include hiring, promotion, compensation, selection for training programs, having a discrimination‑free workplace, and any other favorable term, condition, or privilege of employment.
Understanding Automated‑Decision Systems
The regulations focus on “automated‑decision systems”: computational processes that make, or assist in making, employment‑related decisions, often powered by artificial intelligence, machine learning, algorithms, or statistical models.
Examples include tools that:
- Screen resumes or applications
- Administer skills, aptitude, or personality assessments
- Analyze video interviews for facial expressions, tone, or word choice
- Target job advertisements or recruiting materials to certain groups
- Analyze applicant or employee data obtained from third parties
The regulations also define related concepts such as algorithms, artificial intelligence, machine learning, and automated‑decision system data, all of which may be subject to scrutiny depending on how they are used.
What the Regulations Prohibit
At their core, the regulations make it unlawful for employers to use automated‑decision systems or selection criteria that discriminate against applicants or employees based on any characteristic protected by FEHA, including race, religion, sex or gender, disability, age, medical condition, national origin, and other protected categories.
Notably, the regulations expressly allow courts and enforcement agencies to consider whether an employer conducted meaningful anti‑bias testing or similar proactive efforts. The quality, scope, timing, and results of those efforts, as well as the employer’s response to them, may all be relevant in evaluating discrimination claims or defenses.
Adverse Impact and Reasonable Accommodations
Even facially neutral technology may be unlawful if it has an adverse impact on protected groups, unless the employer can show the practice is job‑related, consistent with business necessity, and that no less discriminatory alternative exists.
For example:
- Tools that rank or screen applicants based on schedule availability may disproportionately impact individuals with religious obligations, disabilities, or medical conditions.
- Systems that measure reaction time or physical abilities may disadvantage individuals with certain disabilities unless reasonable accommodations are provided.
- AI tools that analyze facial expressions, voice, or behavior may unintentionally discriminate based on race, national origin, gender, or disability.
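The regulations do not prescribe a particular statistical test for adverse impact, but in practice it is commonly screened by comparing selection rates across groups. One widely used benchmark is the four‑fifths rule from the federal Uniform Guidelines on Employee Selection Procedures; the sketch below is an illustrative screen of that kind, not the Council’s standard, and the function names are our own:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def adverse_impact_ratio(grp_sel: int, grp_apps: int,
                         ref_sel: int, ref_apps: int) -> float:
    """Ratio of a group's selection rate to the reference
    (highest-rate) group's selection rate."""
    return selection_rate(grp_sel, grp_apps) / selection_rate(ref_sel, ref_apps)

def four_fifths_flag(grp_sel: int, grp_apps: int,
                     ref_sel: int, ref_apps: int) -> bool:
    """Flag potential adverse impact when the ratio falls below 0.8,
    the threshold used in the federal Uniform Guidelines."""
    return adverse_impact_ratio(grp_sel, grp_apps, ref_sel, ref_apps) < 0.8
```

For example, a screening tool that advances 30 of 100 applicants from one group but 60 of 100 from another yields a ratio of 0.5, well below the 0.8 screen. A low ratio does not itself establish liability, but it is the kind of result that meaningful anti‑bias testing is designed to surface and document.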
Additional Prohibited Practices
The regulations also prohibit discrimination tied to factors such as accents (unless they materially interfere with job performance), English proficiency (absent business necessity), immigration status (unless required by federal law), possession of a driver’s license, citizenship, or certain height and weight requirements.
Employers are also prohibited from using automated tools to inquire about criminal history before a conditional offer of employment, or to ask applicants about age, marital status, or disability. In addition, employers may not use automated systems to publicize job opportunities in a way that discourages individuals with disabilities from applying.
Expanded Recordkeeping Requirements
Employers must retain personnel and employment records for at least four years from the later of the record’s creation or the relevant personnel action. Covered records include applications, personnel files, selection criteria, automated‑decision system data, and other records related to employment practices or benefits.
If a complaint is filed, records must be preserved until the matter is fully resolved, including through all appeals.
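For employers building retention logic into HR or records systems, the rule described above reduces to a small computation. The sketch below reflects the rule as summarized here, not statutory text; the function names and structure are hypothetical:

```python
from datetime import date
from typing import Optional

RETENTION_YEARS = 4  # minimum retention period described in the regulations

def retention_deadline(created: date,
                       personnel_action: Optional[date] = None) -> date:
    """Earliest permissible destruction date: four years from the later
    of the record's creation or the related personnel action."""
    start = max(created, personnel_action or created)
    # replace() raises for Feb 29 in a non-leap target year; production
    # logic would need to handle that edge case.
    return start.replace(year=start.year + RETENTION_YEARS)

def may_destroy(created: date, personnel_action: Optional[date],
                today: date, complaint_pending: bool) -> bool:
    """Records tied to an unresolved complaint must be preserved until
    the matter, including appeals, is fully resolved."""
    if complaint_pending:
        return False
    return today >= retention_deadline(created, personnel_action)
```

For instance, a record created October 1, 2025 with a related personnel action on March 15, 2026 could not be destroyed before March 15, 2030, and any pending complaint would pause destruction entirely.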
These requirements can be costly, particularly when accounting for data storage, hosting, and cybersecurity protections. For some employers, the compliance burden may factor into decisions about whether and how to deploy AI‑based tools.
Key Takeaways for Employers
California’s AI bias regulations reflect a broader trend toward increased oversight of workplace technology. Employers should not treat AI adoption as a one‑time decision. Instead, they should:
- Involve experienced employment counsel when implementing AI‑based tools
- Regularly review systems for potential bias or adverse impact
- Conduct and document anti‑bias testing and corrective actions
- Weigh the operational benefits of AI against compliance, recordkeeping, and data‑protection costs
Thoughtful planning, regular review, and careful documentation in coordination with experienced employment counsel will be critical to managing risk in this evolving area.