Publication

March 24, 2026
The AI of the Storm: Employers Must Weather AI Regulations Nationwide, with El Niño Headed to California

Artificial intelligence is rapidly becoming a part of everyday life, including in hiring and workforce management. Legislatures have enacted a variety of laws that impact employers in many industries, especially technology, software, gaming, media, health care, finance, retail, and manufacturing. Employers using or planning to use AI should work with experienced labor and employment counsel to batten down the hatches and ensure they are compliant with these evolving regulations.

Federal Guidance Drought Creates Compliance Wasteland

There is no comprehensive legislation addressing AI in the workplace at the federal level. While the Equal Employment Opportunity Commission (EEOC) previously provided guidance regarding employers’ use of AI tools, including those that could potentially violate Title VII of the Civil Rights Act (Title VII) and the Americans with Disabilities Act (ADA), this guidance was removed after Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence.” Far from promoting business, the lack of agency guidance creates uncertainty and risk, as the application of laws like Title VII and the ADA to AI tools may instead be defined through litigation.

Similarly, the Department of Labor (DOL) Wage and Hour Division withdrew Field Assistance Bulletin “Artificial Intelligence and Automated Systems in the Workplace under the Fair Labor Standards Act and Other Federal Labor Standards” after Executive Order 14148 was issued. Without key guidance about the impact of AI on wage and hour issues under the Fair Labor Standards Act (FLSA), employers face significant uncertainty about whether and to what extent AI might lead to exposure. This includes whether exempt employees’ use of AI, with or without the employers’ knowledge, could sufficiently undermine their exercise of independent judgment and discretion to such a degree that employees are misclassified.

While deregulation is often presented as cost-effective for businesses, employers seeking to comply with existing federal laws like Title VII, the ADA, and the FLSA face additional costs from the withdrawal of this guidance, both from increased compliance efforts and from potential litigation in an evolving area. Also, without federal legislation preempting the current patchwork of state AI regulations, companies face additional hurdles complying with the wide-ranging state laws in jurisdictions where they do business. At this stage, the value of advice from experienced labor and employment counsel cannot be overstated.

Inconsistent State Laws Form Multi-Cell Compliance Storm

Many states have enacted or are considering legislation regarding the use of AI tools in employment. However, such legislation ranges from robust to non-existent, can be inconsistent between jurisdictions, and may address entirely different issues, further underscoring the need for timely labor and employment advice.

CALIFORNIA

The use of AI in employment is likely to lead to a new wave of litigation in the country’s major tech and economic hubs, particularly in California. The sheer scale of AI deployment, when paired with California’s AI, employment, and privacy regulations, is likely to make the state a major center for individual and class action AI litigation in the employment context and otherwise. Because other states often model their regulations after California’s, employers across the nation should take note.

Regulations Against AI Discrimination in Hiring

On June 27, 2025, the California Civil Rights Council approved new regulations, effective October 1, 2025, that directly address how employers may and may not use artificial intelligence, algorithms, and automated decision‑making systems in employment.

Regulatory Restrictions and Defenses

At their core, the regulations make it unlawful for employers to use automated‑decision systems or selection criteria that discriminate against applicants or employees based on any characteristic protected by the Fair Employment and Housing Act (FEHA), including race, religion, sex or gender, disability, age, medical condition, national origin, and other protected categories. The regulations also prohibit discrimination tied to factors such as accents (unless they materially interfere with job performance), English proficiency (absent business necessity), immigration status (unless required by federal law), possession of a driver’s license, citizenship, or certain height and weight requirements. Further, employers are prohibited from using automated tools to inquire about criminal history before a conditional offer of employment, or to ask applicants about age, marital status, or disability. In addition, employers may not use automated systems to publicize job opportunities in a way that discourages individuals with disabilities from applying.

Notably, the regulations expressly allow courts and enforcement agencies to consider whether an employer conducted meaningful anti‑bias testing or similar proactive efforts. The quality, scope, timing, results of those efforts, and the employer’s response to them may all be relevant in evaluating discrimination claims or defenses.

Adverse Impact and Reasonable Accommodations

Even facially neutral technology may be unlawful if it has an adverse impact on protected groups, unless the employer can show the practice is job‑related, consistent with business necessity, and that no less discriminatory alternative exists. For example:

  • Tools that rank or screen applicants based on schedule availability may disproportionately impact individuals with religious obligations, disabilities, or medical conditions.
  • Systems that measure reaction time or physical abilities may disadvantage individuals with certain disabilities unless reasonable accommodations are provided.
  • AI tools that analyze facial expressions, voice, or behavior may unintentionally discriminate based on race, national origin, gender, or disability.
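As an illustration of what proactive anti‑bias testing might look like in practice, the sketch below applies the EEOC’s longstanding “four‑fifths rule” of thumb, under which a group’s selection rate below 80% of the highest group’s rate can signal adverse impact. The rule of thumb, group names, and counts here are illustrative assumptions, not a method the California regulations themselves mandate.

```python
# Illustrative sketch only: the EEOC "four-fifths rule" flags a group
# whose selection rate falls below 80% of the highest group's rate as
# potentially showing adverse impact. Group labels and counts are
# hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (selected, applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Return {group: True} where the group's rate is below the
    threshold fraction of the highest group's selection rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
# group_b: 0.30 / 0.48 = 0.625, below 0.8, so it is flagged
print(four_fifths_flags(outcomes))  # {'group_a': False, 'group_b': True}
```

A screening tool that produces flagged groups under a check like this would warrant the kind of documented investigation and corrective action the regulations treat as relevant to claims and defenses.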

Expanded Recordkeeping Requirements

Employers must retain personnel and employment records for at least four years from the later of the record’s creation or the relevant personnel action. Covered records include applications, personnel files, selection criteria, automated‑decision system data, and other records related to employment practices or benefits.

If a complaint is filed, records must be preserved until the matter is fully resolved, including through all appeals.

These requirements can be costly, particularly when accounting for data storage, hosting, and cybersecurity protections. For some employers, the compliance burden may factor into decisions about whether and how to deploy AI‑based tools.

Key Takeaways for Employers

California’s AI bias regulations reflect a broader trend toward increased oversight of workplace technology. Employers should not treat AI adoption as a one‑time decision.

Instead, employers should:

  • Involve experienced employment counsel when implementing AI‑based tools,
  • Regularly review systems for potential bias or adverse impact,
  • Conduct and document anti‑bias testing and corrective actions, and
  • Weigh the operational benefits of AI against compliance, recordkeeping, and data‑protection costs.

Because this area is evolving quickly, employers should work closely with experienced employment counsel for practical guidance before rolling out AI tools and after implementation to ensure ongoing compliance.

Data Privacy Protections Extend to Employees

The California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), broadly defines “consumer” to grant employees, applicants, and independent contractors data privacy rights, including:

  • the right to notice of what information is collected and how it is used,
  • the right to correct inaccurate personal data,
  • the right to deletion,
  • the right to opt out of the sharing of personal information,
  • limitations on the use of sensitive information like Social Security numbers, health data, or geolocation, and
  • the right against retaliation.

Businesses should work closely with privacy and labor and employment counsel to ensure compliance with the CCPA and CPRA, including providing appropriate notices.

New Whistleblower Protections for Employees of Developers Reporting AI Safety Issues

Home to the majority of top AI companies, California enacted the Transparency in Frontier Artificial Intelligence Act (TFAIA), effective January 1, 2026. It applies to “frontier developers,” meaning persons who have trained or initiated training of a frontier model, i.e., an advanced AI model trained using large amounts of computational power.

Although the language of the TFAIA may be alarmist in some ways, it seeks to prevent and address critical safety incidents involving:

  • unauthorized access, modification, or exfiltration of the model that could result in death, bodily injury, or damage to or loss of property, and,
  • harm resulting from catastrophic risks that could contribute to the deaths or serious injuries to more than 50 people or $1 billion in damage to property arising from a foundational model’s involvement in chemical, biological, radiological, or nuclear weapons, cyberattacks, murder, assault, extortion, theft, evading the control of the frontier developer or user, or use of deceptive techniques that subvert the model’s controls or monitoring.

Specifically, the TFAIA requires that employers:

  • refrain from retaliating against employees responsible for assessing or managing these risks for disclosing information to the authorities indicating that the frontier developer’s activities pose a danger to public health or safety,
  • provide notice to such employees of their whistleblower rights by posting them in the workplace or providing yearly notice with acknowledgement of receipt, and,
  • have processes for such employees to anonymously report such safety risks internally and provide monthly updates to the person regarding the status of the investigation and actions taken.

Compliance with these regulations is particularly important because, in a whistleblower suit, once an employee establishes that their whistleblowing activity was a contributing factor in any adverse action against them, the developer faces a heightened burden of proof, “clear and convincing evidence,” to show that the adverse action would have occurred for legitimate, independent reasons.

AI Replica Protections for Performers

Effective January 1, 2025, California enacted AI protections for performers. Specifically, agreements for personal or professional services covering new performances on or after January 1, 2025, are unenforceable to the extent they allow the creation and use of a digital replica of the individual’s voice or likeness in place of work the individual would otherwise have performed.

The state also enacted protections for a deceased personality’s name, voice, signature, photograph, likeness, and other materials.

Pending Restrictions on AI and Workplace Monitoring Tools

The California legislature is considering legislation, Assembly Bill 1331, that would limit the use of workplace surveillance tools by employers in private, off-duty areas and require that they be disabled during off-duty hours. The proposed legislation would address tools that collect “worker data, activities, communications, actions, biometrics, or behaviors, or those of the public, by means other than direct observation by a person, including, but not limited to, video or audio surveillance, electronic work pace tracking, geolocation, electromagnetic tracking, photoelectronic tracking, or utilization of a photo-optical system or other means.” Specifically, the bill would prohibit employers from surveilling workers in private, off-duty areas like:

  • bathrooms,
  • locker rooms,
  • breakrooms,
  • smoking areas,
  • lactation spaces,
  • employee cafeterias or lounges, and
  • homes and personal vehicles.

Further, employers would be prohibited from requiring employees to “physically implant,” including subdermally, a device that collects or transmits data.

Additionally, Assembly Bill 1898 would require that employers provide written notice to employees and union representatives that a workplace AI tool was used to assist the employer in making employment-related decisions or surveil the workplace. Specifically, the notice would be required to:

  • be provided at least 90 days before it is deployed and no later than February 1, 2027, as well as to workers upon hire,
  • be provided in plain language and be signed by the worker confirming that they received and understood the notice,
  • state the purpose and justification for the AI tool,
  • state the employment decisions the AI tool affects,
  • provide a description of worker data collected and the frequency and duration of its collection and storage,
  • describe the AI tool in plain language and identify the entity that created it,
  • disclose who can access the data,
  • disclose the locations, activities, communications, and job roles that will be surveilled,
  • provide a description of the quota set or measured by the tool and adverse actions that could result from the employee’s failure to meet it,
  • state whether jobs or tasks will be replaced by AI and the timeline for that replacement,
  • disclose the training given to managers on use of the AI tool, and
  • disclose the results of any risk assessments conducted on the AI tool.

A similar piece of proposed legislation, Assembly Bill 1883, would prohibit the use of surveillance tools to the extent that they violate other laws; identify, profile, or infer information about workers engaging in protected activity; or infer a worker’s protected status. It would apply to technology that recognizes emotions, faces, gait, or neural data.

Although it is currently a placeholder, Senate Bill 928 is intended to protect California State University employees from “encroachment” by AI.

Given the volume and complexity of pending legislation, employers should make efforts to stay up to date on the latest AI regulations and work with their counsel to ensure compliance.

COLORADO

Effective June 30, 2026, Colorado law will require that employers utilizing high-risk AI systems use reasonable care to protect residents, a broad category that includes employees, from algorithmic discrimination. Employers are entitled to a rebuttable presumption that they exercised reasonable care if they can show that they:

  • implement a risk management policy or program for the high-risk system,
  • complete an impact assessment,
  • annually review the deployment of the system to ensure it is not causing algorithmic discrimination,
  • notify employees if the system will be a substantial factor in making a consequential decision about them,
  • provide the employee with an opportunity to correct personal data and appeal the decision with human review,
  • make a publicly available statement summarizing the types of high-risk systems the employer uses as well as the known or foreseeable risks of discrimination and the information collected by it, and
  • disclose discrimination to the attorney general within 90 days of discovery.

ILLINOIS

Effective January 1, 2026, the Illinois Human Rights Act (IHRA) was amended to address the impact of AI. Specifically, under the IHRA, it is a civil rights violation for an employer to use AI in employment decisions that would subject applicants and employees to discrimination based on protected classes. This includes using zip codes as a proxy for discrimination. The employer is also required to give notice to employees if it uses AI in employment decisions.

NEW YORK

Regulations Regarding Bias Audits for AI in Employment and Notification Requirements

Effective January 1, 2023, New York City imposed regulations on employers’ automated employment decision tools. The law requires a “bias audit,” an impartial evaluation by an independent auditor, that includes assessments of the tool’s disparate impact on protected categories of employees. Specifically, employers may not use these AI tools unless they have been subject to a bias audit within the year prior to the tool’s use and a summary of the results of the audit and the tool’s distribution date are made publicly available on the employer’s website.
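For tools that make binary select/reject decisions, the rules implementing the bias-audit requirement describe the audit in terms of “impact ratios” comparing each category’s selection rate to the highest category’s rate. The sketch below is a simplified illustration with hypothetical categories and counts; an actual bias audit must be performed by an independent auditor and cover the categories the rules specify.

```python
# Simplified illustration of the "impact ratio" computation for a tool
# that makes binary select/reject decisions. Category names and the
# sample data are hypothetical.

def impact_ratios(outcomes):
    """outcomes: {category: (selected, applicants)}
    Impact ratio = category selection rate / highest selection rate."""
    rates = {c: sel / total for c, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {c: rate / top for c, rate in rates.items()}

sample = {"category_1": (60, 120), "category_2": (25, 100)}
# category_1: 0.50 (highest); category_2: 0.25, so its ratio is 0.5
print(impact_ratios(sample))  # {'category_1': 1.0, 'category_2': 0.5}
```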

Further, employers using AI tools to screen employees or candidates must:

  • notify employees and applicants that the tool will be used at least 10 business days before it is used,
  • specify the job qualifications and characteristics the tool will be used to assess, and,
  • if not disclosed on the employer’s website, provide information about the type of data collected within 30 days of a written request.

Pending Legislation Prohibiting AI Discrimination in Employment

Senate Bill S-9028 would prohibit employers from engaging in discrimination against protected classes by using AI in recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.

MARYLAND

Effective October 1, 2020, employers are prohibited from using facial recognition to create facial templates during applicant interviews for employment without applicant consent. Under the statute, applicants can consent through signed waivers that state in plain language:

  • their name,
  • the date of the interview,
  • their consent to the use of facial recognition during the interview, and
  • that the applicant read the waiver.

TEXAS

Effective January 1, 2026, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) prohibits the use of AI to:

  • intentionally manipulate human behavior to incite physical harm or engage in criminal activity, or,
  • intentionally infringe on constitutional rights or discriminate against a protected class.

However, unlike in other states, disparate impact is not sufficient to demonstrate an intent to discriminate.

Conclusion

These new and evolving regulations are complex and often conflict, making timely consultation with counsel paramount for compliance. Employers using or planning to use AI in their employment practices should also consider the business and operational impact of using this technology, as legal compliance, records storage, and data protection can be challenging and expensive.
