NAIC AI Model Bulletin: What Insurance Brokers Need to Know in 2026

The NAIC adopted its AI Model Bulletin in December 2023, and by early 2026, more than 24 states and Washington, D.C. had adopted the bulletin — with the NAIC's new AI Systems Evaluation Tool now being used during regulatory examinations. The bulletin primarily governs insurers, not brokers, but its implications for brokers are real: it shapes how carriers document and disclose their AI use in underwriting and claims, what questions regulators are asking during market conduct exams, and what clients may ask brokers when AI-driven underwriting decisions produce unexpected results. Brokers who understand the regulatory framework can better advocate for clients and anticipate market conduct trends that will affect placement and renewal conversations.

What the NAIC AI Model Bulletin Requires of Insurers

The NAIC AI Model Bulletin is not a model law — it does not create binding rules on its own. It is guidance that state insurance departments adopt, directing insurers to implement written governance programs for AI systems used in insurance operations. Carriers operating in adopting states are expected to maintain documentation, available for review during regulatory examinations, showing they have addressed the bulletin's requirements.

The bulletin focuses on five governance areas:

1. AI system inventory and documentation. Insurers must maintain a current inventory of AI systems used in insurance operations — underwriting, rating, claims, fraud detection, marketing, customer service — with documentation of each system's intended function, data inputs, decision outputs, and responsible personnel. "AI system" is broadly defined and includes third-party vendor models, not just internally developed tools.

2. Transparency and explainability. AI systems used in adverse action decisions — denying coverage, rating up, restricting terms — must produce explainable outputs. Insurers should be able to articulate in plain language why a specific AI output contributed to an adverse decision and document the basis for that explanation. The bulletin does not require full algorithmic disclosure, but regulators expect carriers to be able to explain decisions to consumers and examiners.

3. Fairness and non-discrimination. AI systems must be tested for discriminatory outcomes based on protected characteristics — race, sex, religion, national origin — even when the model does not use those variables directly. Proxy variables (geographic data, purchasing patterns, social media signals) can produce disparate impact against protected classes even in "neutral" models. The bulletin requires ongoing fairness testing and remediation protocols.

4. Third-party AI vendor oversight. When an insurer uses a third-party vendor's AI model (a common practice in claims AI, fraud detection, and telematics underwriting), the insurer remains accountable for that model's compliance with the bulletin. Insurers must conduct due diligence on vendor governance practices and maintain contractual rights to audit or terminate vendor relationships when compliance concerns arise.

5. Accountability and governance structure. The bulletin expects a named senior accountable person (often a Chief Data Officer, Chief Risk Officer, or equivalent) to be responsible for AI governance, with documented board or executive oversight. It also encourages carriers to designate a cross-functional AI governance committee with representation from actuarial, legal, compliance, IT, and business units.
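To make the fairness testing in area 3 concrete: one common way disparate impact is quantified is by comparing favorable-outcome rates across demographic groups. The bulletin does not prescribe a specific metric; the four-fifths (0.8) ratio used in this sketch is a benchmark borrowed from employment-discrimination analysis, and the data below is hypothetical, shown only to illustrate the kind of check a carrier's fairness-testing protocol might include.

```python
# Illustrative disparate impact check on underwriting approval outcomes.
# Hypothetical data and a four-fifths benchmark -- not a prescribed NAIC method.

def approval_rate(decisions):
    """Share of applicants approved; decisions is a list of booleans."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher group's."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical approval outcomes for two demographic groups.
group_a = [True] * 80 + [False] * 20   # 80% approved
group_b = [True] * 55 + [False] * 45   # 55% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths benchmark
    print("Below 0.8 benchmark: flag for review and remediation")
```

A ratio well below the benchmark does not by itself prove discrimination, but under the bulletin's ongoing-testing expectation it is the kind of result a carrier would be expected to investigate and document.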

2026: Active Examinations and the NAIC Evaluation Tool

In 2026, the NAIC moved from guidance to supervision. The NAIC AI Systems Evaluation Tool — a structured questionnaire and examination instrument — is now being used in state market conduct examinations in participating states. The tool asks carriers to provide:

  • The complete list of AI systems deployed in insurance operations
  • Sample case files demonstrating how AI outputs contributed to underwriting or claims decisions
  • Testing documentation for fairness metrics and bias audits
  • Vendor governance artifacts (contracts, SLAs, due diligence documentation)
  • Escalation procedures for anomalous AI outputs

Several states have launched pilot programs using the tool, with additional states expected to adopt it by year-end 2026. For brokers, this matters because market conduct examinations that find AI governance deficiencies can result in carrier remediation orders — including suspension of certain AI-assisted underwriting practices — that affect the availability and consistency of coverage in those markets.

The NAIC is also advancing a proposal to create a third-party AI vendor registry that would require vendors providing AI models to insurers to maintain governance documentation accessible to regulators. If adopted, this would provide regulators (and eventually brokers and their clients) with more visibility into the specific models carriers use in underwriting decisions.

What This Means for Brokers: Five Practical Implications

1. AI-Driven Adverse Underwriting Actions Are Increasingly Documented

When a carrier declines, rates up, or restricts coverage for a client, and an AI model contributed to that decision, the bulletin's transparency requirements create a documentation trail that wasn't present before. Brokers can request more specific explanations of adverse underwriting decisions — and regulators increasingly expect carriers to be able to provide them. If a client is declined by a carrier whose AI model flagged them for reasons the carrier cannot adequately explain, that is a legitimate compliance concern worth raising with the carrier's underwriter or the state insurance department.

2. Standard E&O Coverage Does Not Address AI Governance Failures

Broker E&O policies protect against errors and omissions in the broker's own conduct — not against a carrier's AI governance failure that affects the client. If a carrier's AI system produces a discriminatory rating or incorrectly processes a claim due to model error, the carrier bears the regulatory and liability exposure, not the broker. However, if a broker recommends a carrier whose AI-driven underwriting practices the broker knew or should have known were inconsistent with regulatory requirements, that could create a professional liability exposure. Staying informed about AI governance issues at carriers is part of broker due diligence. For limits and scope of broker E&O coverage, see How Much E&O Coverage Should an Insurance Broker Carry?.

3. Clients in High-AI-Use Lines Need Proactive Guidance

Lines where AI is most extensively used in underwriting and claims — personal lines, cyber, workers' comp, commercial auto, and increasingly commercial property — are the areas where clients may encounter AI-driven outcomes they don't understand. A client whose workers' comp experience mod is being calculated with AI-assisted audit tools, whose cyber premium is priced by a behavioral AI model, or whose property claim is being reviewed by an AI claims triage system may have questions about how those decisions were made. Brokers who can explain the general framework — "the carrier uses AI models subject to NAIC governance requirements in states where they operate, and those models must be explainable and fairness-tested" — provide value that clients notice. For the AI liability insurance question from the client's own coverage perspective, see AI Liability Insurance: How to Underwrite and Place Coverage for Clients Using AI.

4. Market Conduct Examination Risk Varies by Carrier

Not all carriers have implemented the NAIC AI governance requirements at the same level of rigor. Larger carriers with sophisticated compliance teams are generally ahead of the curve; smaller regional carriers and managing general agents using third-party models have variable governance maturity. As state examinations intensify in 2026, carriers with weaker AI governance documentation face examination findings, remediation orders, or consent agreements that could disrupt their underwriting practices in specific states. Monitoring market conduct exam activity in your primary placement states is a reasonable early-warning indicator for potential disruption.

5. Disclosure Requirements Are Evolving

Several states are developing mandatory disclosure rules requiring carriers to inform applicants or policyholders when AI was used in a coverage decision. The NAIC's Market Regulation and Consumer Affairs Committee is actively reviewing disclosure model language. If adopted broadly, these rules would require carriers to include AI-use disclosures in adverse action notices, renewal communications, and potentially in-force policy documents. Brokers should track these developments in the states where they do the most business — continuing education requirements in those states may eventually include AI literacy components.

FAQ

Does the NAIC AI Model Bulletin apply to brokers?

Not directly. The bulletin is addressed to insurers — it governs how carriers must manage AI systems they deploy in their own operations. Brokers are not subject to the bulletin's governance requirements in their own business practices. However, brokers are affected by the carrier obligations the bulletin creates, and regulators in some states are beginning to examine whether broker recommendation practices involving AI-driven products raise their own disclosure or suitability issues.

What states have adopted the NAIC AI Model Bulletin?

As of early 2026, more than 24 states and Washington, D.C. have adopted the bulletin in some form. Adoption varies in scope — some states adopted it verbatim, others modified it. State insurance departments in California, Colorado, Connecticut, New York, and Texas have been among the more active in AI regulatory development, often through their own guidance documents in addition to the NAIC bulletin. Confirm the current adoption status in your primary operating states through the respective state insurance department website.

Can a client challenge an AI-driven underwriting decision?

In states that have adopted the bulletin and where the carrier cannot provide an adequate explanation for an AI-driven adverse action, the client has grounds to file a complaint with the state insurance department. The bulletin's transparency requirements create an implicit expectation that adverse decisions can be explained. This is a developing area — regulatory enforcement of AI explainability requirements is early-stage — but it is a legitimate avenue for clients who receive unexplained adverse underwriting decisions.

How is the NAIC AI Bulletin different from a model law?

A model bulletin is guidance that state departments adopt by reference, directing carriers to implement the governance practices described. A model law goes through a state's legislative process and creates binding statutory requirements with specific penalties. The NAIC AI Model Bulletin creates regulatory expectations enforceable through examination and market conduct oversight, but it does not create new private rights of action for consumers and is not codified in state insurance statutes. A model law on AI is a likely next step in the regulatory evolution — the NAIC has indicated it may pursue model law development once examination activity establishes baseline compliance standards.

What should I do if a carrier cannot explain an AI-driven rate or declination?

Request the decision in writing, including the factors that contributed to the outcome. If the carrier cannot provide a coherent explanation, escalate to the underwriting supervisor or a dedicated excess/specialty market. Document the request and response in your file. If the client believes the decision reflects discriminatory treatment, a state insurance department complaint is the appropriate channel. From a placement standpoint, an unexplainable adverse AI decision is also a signal to reassess the carrier relationship for that line of business.

Arvori helps insurance brokers manage compliance workflows, carrier monitoring, and client communication at scale. Visit arvori.app to see how brokers use Arvori to stay ahead of regulatory and market changes.