AI Is Producing Regulated Content. Who’s Supervising It?

Date: April 24, 2026

Category: Artificial intelligence


Artificial intelligence is no longer a future consideration for financial services firms. It is already embedded in day-to-day operations.

Advisors are using AI to draft client communications. Marketing teams are generating content at scale. Firms are leveraging AI tools to analyze data and support decision-making.

As adoption accelerates, a practical compliance question is emerging:

If AI is producing content that falls within regulatory scope, how is it being supervised?

AI Is Already Producing Regulated Output

Whether formally approved or informally adopted, AI tools are now generating outputs that may fall under existing regulatory frameworks.

This includes:

  • Client-facing communications
  • Marketing materials and social media content
  • Market commentary and insights
  • Internal summaries that may influence advice

In many cases, these outputs resemble work traditionally performed by registered representatives or supervised personnel.

The difference is that AI is often operating without clearly defined supervision structures.

Regulatory Expectations Are Already Taking Shape

Regulators and industry groups are not treating AI as a future issue. They are actively evaluating how its use aligns with existing rules.

FINRA has identified generative AI as an area of focus in its examination priorities, including how firms supervise AI-generated communications and manage associated risks.

The U.S. Securities and Exchange Commission has also signaled increased scrutiny around the use of AI in advisory processes and marketing. This includes how AI-generated content may fall within the scope of the Marketing Rule and whether outputs could influence investor decisions.

Industry guidance is evolving alongside regulatory focus. The National Institute of Standards and Technology has introduced an AI Risk Management Framework emphasizing governance, accountability, and ongoing monitoring. Similarly, the Securities Industry and Financial Markets Association has highlighted the importance of transparency, vendor oversight, and control frameworks when deploying AI tools.

Taken together, the direction is clear: AI does not sit outside existing compliance expectations. It is being incorporated into them.

The Emerging Risk: Unsupervised Output

As AI becomes more embedded in workflows, firms face increasing risk that content is being created without appropriate oversight.

Key considerations include:

  • Accuracy and reliability
    AI-generated content may introduce errors, outdated information, or unsupported claims
  • Inadvertent advice
    Content intended as general information may cross into personalized or actionable guidance
  • Disclosure gaps
    Required disclaimers or context may be omitted or inconsistently applied
  • Lack of auditability
    Firms may not have records of how or when AI-generated content was produced or used

These risks are not new in concept. What is new is the speed and scale at which content can now be created.

Existing Rules Still Apply

AI does not create a separate regulatory category.

If a communication would require supervision when created by a person, it requires supervision when generated by a technology tool.

Existing expectations around:

  • supervision
  • recordkeeping
  • communications review
  • marketing oversight

continue to apply regardless of how content is produced.

The challenge for firms is not identifying the rules. It is ensuring they are consistently applied in an environment where content creation is faster, more decentralized, and often less visible.

Where Firms May Encounter Gaps

In practice, AI adoption often outpaces the evolution of compliance oversight.

Common gaps may include:

  • Informal or undocumented use of AI tools by employees
  • Limited visibility into where AI is being used across the organization
  • Review processes not designed for increased content volume
  • Policies that reference technology broadly, but do not address AI specifically

This can create a disconnect between operational reality and documented supervisory frameworks.

A Practical Starting Point

Firms do not need to eliminate AI use to manage risk. They do need to ensure it is incorporated into existing compliance structures.

A practical starting point may include:

1. Identifying AI Use Cases

Understanding where and how AI tools are currently being used across the organization

2. Assigning Responsibility

Designating individuals or teams accountable for oversight of AI-generated outputs

3. Applying Existing Review Standards

Subjecting AI-generated content to the same review and approval processes as human-created materials

4. Documenting Controls

Incorporating AI usage into written supervisory procedures and internal policies

5. Maintaining Required Records

Capturing and retaining AI-generated communications in accordance with applicable recordkeeping obligations
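To make the capture-and-retain step concrete, the sketch below shows one way a firm might build an audit record for each piece of AI-generated content before it is distributed. This is an illustrative example only, not a compliance system: the field names (`captured_at`, `model`, `requested_by`, `approved`) and the function itself are assumptions for this sketch, and a real implementation would need to map such records to the firm's actual books-and-records and retention requirements.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_ai_output(content, model_name, requested_by,
                     reviewer=None, approved=False):
    """Build an audit record for one piece of AI-generated content.

    All field names are illustrative. A production system would align
    these with the firm's recordkeeping obligations and store records
    in compliant, tamper-evident archival storage.
    """
    return {
        # Timestamp the capture in UTC so records are unambiguous.
        "captured_at": datetime.now(timezone.utc).isoformat(),
        # A content hash supports later integrity checks of the record.
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "content": content,
        "model": model_name,
        "requested_by": requested_by,
        "reviewed_by": reviewer,
        # Output stays unapproved until a supervisor signs off.
        "approved": approved,
    }

# Example: an unreviewed draft is captured the moment it is generated.
record = record_ai_output(
    "Markets were mixed this week...",
    model_name="example-llm-v1",   # hypothetical model identifier
    requested_by="advisor_123",    # hypothetical employee identifier
)
archived = json.dumps(record)  # e.g., appended to a retention log
```

The key design choice mirrors the article's point: AI-generated content enters the supervisory record at creation time, in an unapproved state, rather than only after a human decides to keep it.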

Final Thought

AI is increasing the speed and scale at which financial services firms operate. It is changing how content is created and how information is delivered.

What it is not changing is the expectation that firms must supervise communications and maintain appropriate controls.

Firms that recognize this early will be better positioned to adopt new technologies while maintaining alignment with regulatory expectations.

If your firm is evaluating how AI fits into your compliance framework, it may be helpful to take a closer look at how these tools are currently being used across your organization and how they align with existing supervisory practices. Gryphon Compliance works with financial firms to navigate evolving regulatory expectations and strengthen practical, scalable compliance programs.

Jonathan Wowak is Founder & Principal of Gryphon Compliance Services. He can be reached at
jwowak@gryphon-compliance.com

This material is provided for informational purposes only and is not intended to constitute legal, regulatory, or compliance advice. The information presented is based on publicly available guidance and industry developments as of the date of publication and may not reflect the most current regulatory expectations. Firms should consult with qualified legal or compliance professionals to assess how these considerations apply to their specific circumstances.