How to Place AI Liability Insurance for Clients Using AI in Their Business

Nearly every business client you serve is now using artificial intelligence in some form — generative AI tools for content and communications, AI-assisted decision systems for hiring and lending, automated customer service platforms, or AI-powered software embedded in the products they sell. Each of these use cases creates liability exposure that existing policies were not designed to cover. Carriers have responded by adding AI exclusions to standard E&O policies or narrowing their definitions of "professional services." CGL policies do not cover financial losses from AI errors. Cyber policies respond to data events, not professional judgment failures caused by bad AI output. The result is a growing gap between what clients believe their coverage provides and what it actually pays. Closing that gap is now core broker work — and the market, while still developing, has workable solutions for most risk profiles.

Prerequisites

  • A completed intake or annual review conversation that captures the client's current AI tools, vendors, and use cases (see Step 1 for the specific questions to ask)
  • The client's current policy schedule: E&O, CGL, cyber, BOP, and any professional liability forms — to map existing coverage before identifying gaps
  • Knowledge of which carriers in your market are actively writing AI liability endorsements or standalone coverage for the client's revenue band, industry, and AI risk profile
  • Familiarity with the client's industry regulatory environment: healthcare, financial services, employment, and consumer lending each face distinct AI regulatory regimes that drive claim frequency

Step 1: Map the Client's AI Use Cases

AI liability is not a single exposure — it is a cluster of distinct risks that vary based on what the AI system does, who it affects, and how much human review sits between the system's output and a decision that harms someone. Before selecting any coverage, document the client's specific use cases across five categories:

Generative AI for internal productivity. This is the lowest-risk category: employees using ChatGPT, Microsoft Copilot, Google Gemini, or similar tools to draft documents, summarize information, or write code. The primary risks here are data privacy (employees pasting sensitive client data into cloud AI systems) and intellectual property (AI-generated content that reproduces copyrighted material from the training data). Regulatory exposure is lower unless the outputs are used in regulated professional advice.

AI-assisted decision support. Tools that inform — but do not automate — consequential decisions: AI-generated financial projections a CPA reviews before sharing with a client, AI-drafted medical chart summaries a clinician reviews before acting, AI-scored credit applications a loan officer reviews before approving. Liability turns on how much the professional relies on the AI output versus exercising independent judgment. When the human review step is thin or perfunctory, courts may treat the decision as effectively AI-generated.

Automated AI decisions affecting individuals. Fully or substantially automated decisions with real consequences: AI hiring and screening tools that filter job applicants without human review, AI pricing systems, AI content moderation, AI fraud scoring that freezes accounts. These carry the highest regulatory exposure. The EEOC's guidance on AI and the Americans with Disabilities Act (EEOC Technical Assistance Document, May 2022) and the CFPB's guidance on adverse action notices for AI-based credit decisions (CFPB Circular 2022-03) both establish that existing law applies fully to AI-generated decisions — and that the algorithm's opacity does not eliminate the obligation to explain and defend the decision.

Agentic AI systems taking autonomous actions. A distinct and more complex category: AI agents that can execute multi-step actions autonomously — booking appointments, executing trades, submitting filings, modifying databases, sending communications — without a human approving each step. The liability analysis for agentic AI differs from AI-assisted decision support because the agent is not generating a recommendation; it is completing an action with real-world consequences, often at machine speed, before any human can review it. This category requires coverage analysis beyond standard AI endorsements. See Agentic AI and Autonomous Systems Liability Insurance for how to evaluate and place coverage for clients running autonomous AI agents.

AI embedded in products or services sold to clients. Software vendors, SaaS companies, and technology consultants who incorporate AI into what they deliver to customers face product liability and professional liability simultaneously. When the AI component causes harm to the end customer, coverage questions involve both the product liability form (bodily injury, property damage) and the professional liability form (financial loss from a defective professional service).

Document which categories apply, which vendors and platforms are in use, and which decisions the AI system is influencing. This becomes the risk narrative for underwriters.
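
For brokers tracking AI use across a book of business, this inventory translates naturally into one structured record per use case. A minimal sketch in Python, with the five categories above as an enum (all field names and example values are illustrative, not drawn from any carrier application):

```python
from dataclasses import dataclass
from enum import Enum

class AIUseCategory(Enum):
    """The five use-case categories from Step 1."""
    INTERNAL_PRODUCTIVITY = "generative AI for internal productivity"
    DECISION_SUPPORT = "AI-assisted decision support"
    AUTOMATED_DECISIONS = "automated AI decisions affecting individuals"
    AGENTIC = "agentic AI taking autonomous actions"
    EMBEDDED_IN_PRODUCT = "AI embedded in products or services sold"

@dataclass
class AIUseCase:
    """One documented AI use case; the collection becomes the risk narrative."""
    category: AIUseCategory
    vendor: str                  # platform or model provider in use
    purpose: str                 # what the system does for the business
    decisions_influenced: str    # which decisions the output feeds into
    human_review: bool           # does a human review output before action?
    affects_individuals: bool    # hiring, credit, pricing, account decisions

# Example: a client pre-screening job applicants with an AI tool
screening = AIUseCase(
    category=AIUseCategory.AUTOMATED_DECISIONS,
    vendor="(screening platform)",
    purpose="resume screening and applicant ranking",
    decisions_influenced="which applicants advance to interview",
    human_review=False,
    affects_individuals=True,
)
```

The human_review and affects_individuals flags map directly onto the underwriting risk factors in Step 3.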

Step 2: Identify What Existing Policies Cover — and Where They Fall Short

Most clients assume their existing coverage handles AI-related claims. Walk through each policy form explicitly:

E&O / Professional Liability. Standard professional liability policies cover claims arising from professional wrongful acts — errors, omissions, and negligent acts in the rendering of professional services. The coverage gap for AI operates in two directions. First, many carriers have added AI exclusions or restrictive endorsements that carve out claims "arising from or related to" the use of artificial intelligence, automated decision systems, or machine learning. Second, even without an explicit AI exclusion, coverage depends on whether the AI system is deemed part of the "professional services" as defined in the policy. For a professional services firm whose work product is directly influenced by AI output, this is a genuine coverage ambiguity — one that, absent an affirmative AI endorsement, will be resolved in litigation. For a full breakdown of how E&O policy triggers interact with technology-related claims, see E&O vs Cyber Liability Coverage: Does Your Client's E&O Policy Cover a Data Breach?.

CGL. Commercial general liability responds to bodily injury and property damage — categories that rarely describe AI-related losses. Financial harm from an AI hallucination, a biased AI hiring decision, or an incorrect AI-generated tax estimate is economic loss, not bodily injury or property damage. CGL also contains professional services exclusions that remove coverage for professional wrongful acts — exactly the territory where AI errors in professional practice live. See CGL vs Professional Liability (E&O): What Each Policy Covers and Why Most Professional Service Businesses Need Both for how these exclusions operate.

Cyber Liability. Cyber policies respond to data events: breaches, ransomware, network security failures, and privacy liability arising from the exposure of personal information. AI creates cyber-adjacent risk — an AI system trained on sensitive client data without proper authorization, or a chatbot that inadvertently exposes personal information — but the core AI liability risks (professional judgment errors, biased decisions, intellectual property infringement) are not cyber events. Most cyber policies do not respond to professional liability claims arising from bad AI output; the involvement of technology alone does not make a claim a cyber event. For the mechanics of what cyber coverage does and does not respond to, see Cyber Liability Coverage for Small Business: How to Evaluate and Recommend the Right Policy.

The coverage map for most clients looks like this:

  • Data breach risks: addressed by cyber.
  • Professional errors in AI-assisted advice: in a gap between E&O exclusions and cyber scope.
  • Automated AI decisions affecting individuals: no standard policy response.
  • AI product liability: partially addressed by product liability forms if the client manufactures a product, but not if the AI system is the service itself.
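
The same map can travel with the client file as data, so each documented use case points to the policy expected to respond. A sketch under the same caveat (the entries are shorthand summaries of the analysis above, not policy language):

```python
# Shorthand coverage map from Step 2; values summarize the gap analysis
# above and are not policy language.
COVERAGE_MAP = {
    "data breach / privacy event": "cyber responds",
    "professional error in AI-assisted advice":
        "gap: between E&O AI exclusions and cyber scope",
    "automated AI decision affecting an individual":
        "gap: no standard policy response",
    "AI component in a product sold":
        "product liability responds in part if a product is manufactured; "
        "gap if the AI system is the service itself",
}

for exposure, response in COVERAGE_MAP.items():
    print(f"{exposure}: {response}")
```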

Step 3: Assess the Client's AI Liability Risk Profile

Underwriters evaluating AI liability are assessing four primary risk factors. Building this risk profile before going to market produces better quotes and fewer surprises at binding:

Consequence of error. The severity of the harm if the AI system produces a wrong or biased output. A generative AI system that helps draft marketing emails carries fundamentally different risk than an AI clinical decision support system that informs treatment recommendations. Underwriters are comfortable with the former; the latter requires specialist capacity and significantly higher premiums.

Human review layer. How much human oversight sits between the AI output and the consequential decision. Fully automated pipelines — where AI output triggers action without human review — are the hardest to place and the most likely to face coverage disputes about whether the insured exercised reasonable care in deploying the system.

Regulatory environment. Industries with specific AI regulatory obligations (healthcare under HIPAA and FDA AI guidance, financial services under CFPB and OCC guidance, employment under EEOC, credit under ECOA) face elevated underwriting scrutiny because regulatory enforcement creates mandatory defense costs before any civil claim is filed. The NAIC AI Model Bulletin, now adopted in 24 states and actively enforced through regulatory examinations in 2026, also creates a compliance layer specifically for insurers that use AI in underwriting and claims — with implications for how carriers document and explain AI-assisted decisions. For a broker-focused overview of the NAIC framework and its practical implications for placement and client advocacy, see NAIC AI Model Bulletin: What Insurance Brokers Need to Know in 2026.

Contractual obligations. Whether the client has made representations to their own customers about AI system accuracy, fairness, or auditability — in vendor agreements, terms of service, or marketing materials. Contractual warranties about AI system performance create direct contractual liability exposure separate from negligence claims.
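
To make the four factors concrete before going to market, some brokers reduce them to a simple qualitative scorecard. A sketch, assuming a 1-to-5 scale per factor (the scale, weighting, and placement thresholds are illustrative, not an underwriting standard):

```python
from dataclasses import dataclass

@dataclass
class AIRiskProfile:
    """Pre-market qualitative profile; each factor scored 1 (low) to 5 (high)."""
    consequence_of_error: int    # severity if output is wrong or biased
    automation_level: int        # 5 = fully automated, no human review layer
    regulatory_exposure: int     # HIPAA/FDA, CFPB/OCC, EEOC, ECOA, etc.
    contractual_warranties: int  # representations made about AI performance

    def market_indication(self) -> str:
        """Rough placement indication; thresholds are illustrative only."""
        total = (self.consequence_of_error + self.automation_level
                 + self.regulatory_exposure + self.contractual_warranties)
        if total <= 8:
            return "admitted endorsement likely available"
        if total <= 14:
            return "endorsement or surplus lines, carrier-dependent"
        return "surplus lines / specialist capacity"

# Example: AI-scored credit applications with a thin human review layer
profile = AIRiskProfile(consequence_of_error=4, automation_level=4,
                        regulatory_exposure=5, contractual_warranties=2)
print(profile.market_indication())  # surplus lines / specialist capacity
```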

Step 4: Understand the Available Coverage Options

The AI liability insurance market is not yet standardized. Coverage is available through three primary structures:

AI endorsements to existing E&O policies. Several carriers have developed endorsements that explicitly extend professional liability coverage to claims arising from AI-assisted professional services. These endorsements typically require disclosure of which AI tools are in use, may add sublimits for AI-specific claims, and may exclude fully automated decision-making without human review. Munich Re, Chubb, and several specialist markets have active AI endorsement programs as of 2025-2026. For clients whose AI use is embedded in professional services delivery (accounting firms, law firms, financial advisors, consultants), this is often the most practical coverage path — expanding existing E&O rather than adding a separate policy.

Standalone AI liability policies. A small number of specialist carriers and MGAs are writing standalone AI liability policies that cover a broader range of AI-specific risks: intellectual property claims arising from training data, bias and discrimination claims from automated decisions, product liability for AI systems embedded in software, and reputational harm from AI system failures. These are primarily available in the surplus lines market. Capacity is limited for high-risk AI applications (healthcare, autonomous systems, financial decision-making), and underwriting requirements are detailed — expect questionnaires covering AI vendor relationships, model documentation, governance policies, and incident history.

Technology E&O. For technology companies, software vendors, and SaaS businesses that incorporate AI into products they sell, technology E&O policies typically provide the broadest coverage — combining professional liability for technology services with product liability for software defects and some degree of IP infringement coverage. AI-specific exclusions are appearing in tech E&O forms as well, making AI disclosure and endorsement negotiation important at renewal.

Step 5: Navigate the Market and Gather Quotes

Admitted market. AI liability coverage from admitted carriers is primarily available as endorsements to professional liability policies for lower-risk use cases — professional services firms using AI as a productivity tool with strong human oversight. Expect carriers to ask for AI tool disclosure as part of the renewal application going forward; failing to disclose material AI use creates rescission exposure if an AI-related claim is later filed.

Surplus lines. The full range of AI liability coverage — including standalone policies, higher limits for technology firms, and coverage for higher-risk AI applications — is in the surplus lines market. When placing surplus lines coverage, confirm the state's diligent search and surplus lines filing requirements are satisfied. For clients whose AI applications carry significant regulatory exposure (healthcare, financial services), specialty markets including Lloyd's syndicates and domestic surplus lines carriers with technology liability programs are the primary options.

Application requirements. Most carriers writing AI liability coverage require, at minimum: a description of all AI systems in use (vendor, purpose, and decision-making role), the client's AI governance policy (if one exists), any prior AI-related incidents or claims, the percentage of decisions influenced by AI systems vs. independent human judgment, and confirmation of the industries and individuals affected by AI-assisted decisions.
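
A pre-submission completeness check against that minimum list catches missing items before the submission goes to market. A minimal sketch (the item names mirror the requirements above; the function and packet fields are hypothetical):

```python
# Items most AI liability applications require at minimum (from the list above).
REQUIRED_ITEMS = [
    "ai_systems_inventory",    # vendor, purpose, decision-making role
    "governance_policy",       # the client's AI governance policy, if any
    "incident_history",        # prior AI-related incidents or claims
    "ai_decision_percentage",  # share of decisions influenced by AI vs. human
    "affected_populations",    # industries and individuals affected
]

def missing_application_items(submission: dict) -> list[str]:
    """Return required items that are absent or empty in the submission packet."""
    return [item for item in REQUIRED_ITEMS if not submission.get(item)]

# Example usage
packet = {"ai_systems_inventory": ["ChatGPT - drafting", "screening tool - hiring"],
          "incident_history": "none reported"}
print(missing_application_items(packet))
# -> ['governance_policy', 'ai_decision_percentage', 'affected_populations']
```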

Step 6: Document the Coverage Recommendation and the Client's Decision

AI liability is new enough that clients frequently underestimate the exposure — and brokers who do not document the conversation create their own professional liability risk if an uncovered AI claim surfaces later.

After completing the coverage review, deliver a written summary covering: the AI use cases identified, the coverage gap in existing policies for each use case, the coverage options presented (admitted endorsement, surplus lines standalone, or tech E&O), the limits and premium ranges for each option, and the client's decision — including a clear record if the client declines recommended coverage. For clients who decline coverage, have them acknowledge the gap in writing.
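
A fixed template keeps these summaries consistent across the book. One possible skeleton, with section headings mirroring the list above (the format is illustrative, not a compliance standard):

```python
SUMMARY_TEMPLATE = """\
AI LIABILITY COVERAGE REVIEW -- {client_name}, {date}

1. AI use cases identified: {use_cases}
2. Coverage gaps in existing policies: {gaps}
3. Options presented (market / limits / premium range): {options}
4. Client decision: {decision}
5. If declined: written acknowledgment of gap attached: {acknowledgment}
"""

print(SUMMARY_TEMPLATE.format(
    client_name="(client)", date="(date)",
    use_cases="(from the Step 1 inventory)",
    gaps="(from the Step 2 coverage map)",
    options="(admitted endorsement / surplus lines standalone / tech E&O)",
    decision="(bound / declined)",
    acknowledgment="(yes / no)",
))
```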

This documentation provides the same protection as in any coverage declination scenario — but AI liability is particularly important to document because the gap is non-obvious, clients genuinely believe their E&O or cyber policies respond, and the claim severity for AI-related professional liability can be substantial.

Common Mistakes to Avoid

Assuming cyber coverage handles AI liability. Cyber policies are data event policies. AI errors, biased AI decisions, and IP infringement from AI-generated content are not data events. The overlap is narrow: AI systems that cause a data breach are a cyber event; AI systems that cause a professional or financial harm are not.

Not asking about AI at intake or renewal. Most clients using AI tools have not told their broker. The question is not on standard applications. Build AI use into your intake and annual review workflow — "What AI tools are your employees using? Is any AI involved in client-facing decisions or work product?" — before the renewal, not after a claim.

Treating all AI use as equal risk. A generative AI writing assistant and an autonomous AI underwriting or hiring decision system are categorically different exposures. The appropriate coverage, market, and premium vary accordingly. Don't quote a blanket AI endorsement for a client whose actual risk profile requires surplus lines capacity.

Missing the IP component. Intellectual property claims from AI-generated content — particularly copyright infringement claims related to AI training data — are an underappreciated AI liability for clients producing creative, editorial, or technical content at scale with generative AI tools. This exposure sits in the gap between media liability (if the client has it) and standard professional liability forms.

Frequently Asked Questions

Does my client's E&O policy automatically cover AI-related claims?

Not necessarily, and increasingly no. Many carriers have added AI exclusions to professional liability forms in the past 18-24 months. Even without an explicit exclusion, coverage depends on whether the AI-assisted work falls within the policy's definition of "professional services" and whether the error is characterized as a professional wrongful act rather than a product or technology failure. Review the current policy form and any recent endorsements before assuming AI-related claims are covered.

What is the difference between AI liability insurance and cyber liability insurance?

Cyber liability insurance responds to data events: breaches, ransomware, network security failures, and privacy liability from the exposure of personal information. AI liability insurance responds to claims arising from AI system failures, errors, biased decisions, and intellectual property violations — categories that are generally outside the scope of a cyber policy. The two are complementary, not interchangeable.

Which industries face the highest AI liability risk?

Healthcare (AI in clinical decision support or medical coding), financial services (AI in credit decisions, financial advice, or fraud scoring), employment (AI in hiring, screening, or performance management), and legal services (AI in document review, contract analysis, or client advice) face the highest regulatory exposure and claim frequency. Retail, manufacturing, and professional services firms using AI primarily for internal productivity carry materially lower risk profiles. Cryptocurrency and blockchain businesses using AI for automated trading algorithms, on-chain fraud scoring, or DeFi risk assessment sit close to financial services on the risk spectrum — combining AI regulatory exposure (CFTC and SEC oversight of algorithmic trading) with the distinct digital asset coverage gaps that standard policies do not address. For the broader coverage framework for these clients, see Cryptocurrency and Blockchain Insurance: What Coverage Your Clients Actually Need.

Is AI liability coverage available in the admitted market?

Limited coverage is available as endorsements to existing professional liability policies in the admitted market, primarily for lower-risk AI use cases with strong human oversight. Broader coverage — including standalone AI liability policies, higher limits, and coverage for higher-risk applications — is primarily available in the surplus lines market through specialist carriers and MGAs.

What happens if a client doesn't disclose AI use to their carrier?

Non-disclosure of material information at application or renewal can result in claim denial and policy rescission. If AI use is material to the underwriting decision — and for professional liability policies, it increasingly is — undisclosed AI use discovered at the time of an AI-related claim gives the carrier grounds to void coverage from inception. Document AI use in the application and, if the carrier has not asked, proactively disclose it and request confirmation that coverage extends to AI-assisted professional services.

Do AI liability policies cover hallucinations — false information generated by AI that harms someone?

Some AI endorsements and standalone policies explicitly cover claims arising from AI-generated errors or inaccurate output. The coverage analysis depends on whether the error occurred in the context of a professional service (E&O territory), in a product the client sold (product liability territory), or in an automated decision that affected an individual (regulatory and civil liability territory). In all cases, coverage is more likely when the client can demonstrate human review before the erroneous output was acted upon.

How do I explain AI liability risk to a client who thinks their existing coverage is fine?

Walk them through a specific scenario relevant to their business: "Your accountant uses an AI tool to summarize client financials, and it miscalculates a key figure that leads to an incorrect tax position. The client is audited and owes back taxes and penalties. They sue your firm. Does your E&O policy cover that claim?" The answer depends on the policy form — and the honest answer for many clients right now is "we don't know, and there's a real chance it doesn't."

Arvori helps insurance brokers manage complex client risk profiles — including emerging technology and AI liability exposures — without the manual intake and documentation burden. If your firm is developing an AI liability practice or needs a better system for tracking AI use across your book of business, Arvori was built for exactly this kind of workflow.