Biden-era guidance on responsible AI use has recently been removed from the public-facing websites of certain federal agencies, including the EEOC, OFCCP, and DOL. However, that does not mean companies are absolved of legal responsibility when utilizing AI in support of business operations.
Considerations for Artificial Intelligence Policies in the Workplace
Because the use of AI in the workplace can present serious risks to an organization, particularly involving security, intellectual property, confidentiality, and labor and employment legal risks, employers should consider adopting an AI policy to ensure that their use of AI is responsible, ethical, and legally compliant. AI policies can help
The Importance of Being Erroneous: Are AI Mistakes a Feature, Not a Bug?
Takeaways
- Recent advances in autonomous AI agents could signal a breakthrough in how AI can eventually recognize and learn from mistakes.
- Some of our greatest innovators have recognized that making mistakes (and learning from them) is the key to innovation and invention.
- If AI agents can learn from mistakes and gain experience, they may be able to formulate and answer new questions.
Are Employees Receiving Regular Data Protection Training? Are They AI Literate?
Employee security awareness training is a best practice and a “reasonable safeguard” for protecting the privacy and security of an organization’s sensitive data. The list of data privacy and cybersecurity laws mandating employee data protection training continues to grow and now includes the EU AI Act. The following list is
New Proposed Regulations Will Impact How Businesses Utilize AI to Make Personnel Decisions
It is no surprise that businesses are seeking ways to utilize AI to increase efficiency, including developing automated decision-making systems to assist in hiring and promotion processes. The California Civil Rights Council is actively working on new regulations to address potential employment discrimination based on protected characteristics when AI is used in personnel decisions. This includes consideration of whether facially neutral factors (e.g., criminal history) may still constitute discrimination.
On February 7, 2025, the Civil Rights Council published its second round of modifications to proposed employment regulations regarding automated decision systems. For employers in the process of implementing AI to make personnel decisions, here are some notable changes to keep in mind:
- The new definition of an “agent” has been expanded to include any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity. This may include recruitment, applicant screening, hiring, promotion, or other decisions regarding pay, benefits, or leave, including when such activities and decisions are conducted in whole or in part through the use of an automated decision system. This broad definition aims to cover both employers and any third parties assisting employers with AI systems.
- Employers may bear a higher burden of proving they have performed anti-bias testing or similar proactive efforts to avoid unlawful discrimination (see the illustrative sketch after this list). “Lack of evidence” could be used against employers who cannot demonstrate concrete efforts to avoid discrimination, including the quality, efficacy, recency, and scope of such efforts, the results of such testing or other actions, and the response to those results.
- Employers must retain AI-related records for a longer period—four years instead of two. These records include all applications, personnel records, employment referral records, selection criteria, automated-decision system data, and other records created or received by the employer or any other covered entity dealing with employment practices that affect any employment benefit, applicant, or employee.
- Employers need to be cautious when using AI to filter out applicants based on protected characteristics (e.g., disabilities for physically demanding jobs). They must demonstrate that the criteria used to exclude applicants are job-related and consistent with business necessity, and that there is no less discriminatory standard, test, or other selection criteria that serves the employer’s goals as effectively.
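For a concrete sense of what baseline anti-bias testing can involve, one widely cited starting point is the EEOC's "four-fifths rule," which compares selection rates across demographic groups. The Python sketch below uses hypothetical group labels and outcomes to show the basic arithmetic only; a defensible audit of an automated decision system is far more involved and should be designed with counsel and qualified experts.

```python
# Illustrative adverse-impact ("four-fifths rule") check, a common starting point
# for anti-bias testing. Group labels and outcomes below are hypothetical; this
# is not a substitute for a validated audit of any particular system.

from collections import Counter

# Hypothetical hiring outcomes: (group, selected?) pairs from an automated screen.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in outcomes)
selected = Counter(group for group, picked in outcomes if picked)

# Selection rate per group, compared against the highest-rate group.
rates = {group: selected[group] / applicants[group] for group in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "within four-fifths threshold"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

A flagged ratio under this rule of thumb is not itself proof of discrimination, but documenting checks like this, along with the results and the employer's response to them, appears to be the kind of evidence the proposed regulations contemplate.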
The deadline to submit public comments to this round of modifications is February 24, 2025.
As AI tools continue to transform workplaces and employers strive to implement AI systems to maximize efficiency, they inevitably encounter various legal pitfalls that are tricky to navigate. It is prudent to work with legal counsel to understand the implications of potential legal liabilities and stay informed about the ever-evolving laws in this area. Feel free to contact Linda Wang or your preferred CDF attorney for a consultation.
Trump Administration Unveils New AI Policy, Reverses Biden’s Regulatory Framework
Early signals from the Trump administration suggest it may move away from the Biden administration’s regulatory focus on the impact of artificial intelligence (AI) and automated decision-making technology on consumers and workers. This federal policy shift could result in an uptick in state-based AI regulation.
Happy Privacy Day: Emerging Issues in Privacy, Cybersecurity, and AI in the Workplace
As the integration of technology in the workplace accelerates, so do the challenges related to privacy, cybersecurity, and the ethical use of artificial intelligence (AI). Human resource professionals and in-house counsel must navigate a rapidly evolving landscape of legal and regulatory requirements. This National Privacy Day, it’s crucial to spotlight
New Executive Order Issued on AI; Prior AI Order Revoked
Among the blizzard of executive orders issued following his inauguration, President Trump revoked former President Biden’s executive order addressing artificial intelligence (AI). A few days later, on January 23, 2025, President Trump issued his own AI executive order, entitled, “Removing Barriers to American Leadership in Artificial Intelligence” (“AI Executive Order”).
AI’s New Laws + Traditional Issues
“It’s like an AI chicken or the egg conundrum. Who should own the liability there? Should it be the developers of these technologies or should it be the users? If you’re trying to make that determination, where does that line fall? This uncertainty has worked its way into different legislation across the country. It really reflects how these lawmakers are grappling with some of these issues that, frankly, don’t have an easy answer.”
The Year Ahead 2025: Tech Talk — AI Regulations + Data Privacy
Careful consideration and close collaboration among your organization’s business departments are watchwords for 2025.
What Does the 2025 Artificial Intelligence Legislative and Regulatory Landscape Look Like for Employers?
In the absence of federal regulation, several states have either passed or are considering legislation aimed at mitigating the risk of an employer’s use of an AI system resulting in algorithmic discrimination. This Insight provides a roundup of state and local AI laws impacting employers, and notable pending measures.
We Get AI for Work: Establishing AI Policies and Governance Part 2
Organizations are harnessing the benefits of using generative and traditional AI technologies to enhance productivity, streamline operations, and foster innovation. However, before employing these tools in the workplace, organizations must minimize potential risks and ensure the ethical and responsible use of AI.
We Get AI for Work: Establishing AI Policies and Governance Part 1
Establishing a governance structure for artificial intelligence is essential today. Before committing to any specific technology, organizations should evaluate a potential policy’s risks and benefits to maximize the opportunity for successful outcomes.