AI Governance Is Not Just About The Models

By News Room | August 11, 2023 | 10 Min Read

While AI has been used in enterprise and consumer products for decades, only large tech organizations with sufficient resources were able to implement it at scale. In the past few years, advances in the quality and accessibility of ML systems have led to a rapid proliferation of AI tools in everyday life. The accessibility of these tools means there is a massive need for good AI Governance both by AI providers and the organizations implementing and deploying AI systems into their own products.

However, “AI Governance” is a holistic term that can mean different things to different groups. This post seeks to provide some clarity on 3 different “levels” of AI Governance: Organizational, Use Case and Model. We will describe what they are, who is primarily responsible for them, and why it’s important for an organization to have strong tools, policies and cultural norms for all 3 as part of their broader AI Governance Program. We’ll also discuss how upcoming regulations will intersect with each level.

Organizational Level Governance

At this level, AI Governance takes the form of ensuring there are clearly defined internal policies for AI ethics, accountability, safety, etc., as well as clear processes to enforce these policies. For example, to comply with many of the emerging AI regulatory frameworks, it’s important to have a clear process for deciding when and where to deploy AI. This process could be as simple as “the tech leadership team needs to approve the AI system,” or as complex as a formal submission process to an AI ethics committee that has go/no-go power over AI uses. Good organizational AI Governance means ensuring that the processes are clear, accessible to internal stakeholders, and regularly enforced.
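To make the idea of an enforceable approval process concrete, here is a minimal sketch of the kind of go/no-go routing an internal policy might encode. The risk tiers and approver names (`tech_leadership`, `ai_ethics_committee`) are illustrative assumptions, not prescribed by any framework cited here:

```python
# Illustrative sketch: map an internally defined risk tier to the sign-offs
# a proposed AI deployment needs. Tier names and approver bodies are assumptions.
APPROVAL_PATHS = {
    "low": ["tech_leadership"],
    "high": ["tech_leadership", "ai_ethics_committee"],
}

def required_approvals(risk_tier: str) -> list:
    # Unknown or unclassified tiers escalate to the ethics committee by default,
    # so a gap in classification never silently skips review.
    return APPROVAL_PATHS.get(risk_tier, ["ai_ethics_committee"])
```

The point is not the specific tiers but that the policy is written down, machine-checkable, and defaults to escalation rather than to silent approval.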

The NIST AI Risk Management Framework does a good job of laying out the various organizational governance policies and processes that should be in place in its GOVERN pillar, and the EU AI Act outlines various organizational requirements in Articles 9, 14, and 17 (although this may still be updated before final passage). The organizational governance standards and policies should be drafted by senior leaders at the organization in consultation with legal teams, who will ultimately be accountable for implementing and enforcing the policies. The organizational policies will largely define when and where an organization will choose to deploy AI, as well as determine what kinds of documentation and governance considerations must be made at the use case level.

Use Case Level Governance

Use Case Level Governance is mostly focused on ensuring that a specific application of AI, for a specific set of tasks, meets all necessary governance standards. The risks of harm coming from AI are highly context specific. A single model can be used for multiple different use cases (especially foundational or general purpose models such as GPT-4): some use cases are relatively low risk (e.g., summarizing movie reviews), while a very similar task in a different context is very high risk (e.g., summarizing medical files). Alongside the risks, many of the ethical questions for AI, including simply whether to use AI for a task at all, are tied to context.

Governance at this level means ensuring that organizations are carefully documenting the goals of using AI for the task, justifications for why AI is appropriate, as well as what the context-specific risks are and how they are being mitigated (both technically and non-technically). This documentation should conform to the standards set out in the organizational governance policies as well as any regulatory standards. Both the NIST AI RMF (MAP function) and the EU AIA (Annex IV) outline the types of “use case level” documentation that should be generated and tracked. Governance at this level will be owned by many different business units. The project leads (often product managers), in consultation with legal/compliance teams, will be the primary group responsible for ensuring proper governance processes are followed and documented at this level. The risks and business goals defined at this level then largely inform what the model level governance expectations should be for a given use case.
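The documentation described above can be made auditable by capturing it in a structured record rather than scattered documents. Below is a minimal sketch of such a record; the field names are assumptions loosely inspired by the themes of the NIST AI RMF MAP function and EU AIA Annex IV, not an official schema from either:

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseRecord:
    # Field names are illustrative, not an official NIST or EU AIA schema.
    name: str
    business_goal: str          # what the organization hopes to achieve
    why_ai: str                 # justification for using AI at all
    risk_level: str             # e.g. "low" | "high", per the org's own policy
    context_risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

    def ready_for_review(self) -> bool:
        # A high-risk use case must document at least one context-specific
        # risk and at least one mitigation before it can go to review.
        if self.risk_level == "high":
            return bool(self.context_risks and self.mitigations)
        return True
```

A record like this gives the project lead a checklist, and gives auditors a single artifact to inspect per use case.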

Model Level Governance

Model Level Governance is heavily focused on ensuring that the actual technical function of an AI system is meeting expected standards of fairness, accuracy and security. It includes ensuring that data privacy is protected, that there are no statistical biases between protected groups, that the model can handle unexpected inputs, and that there is monitoring in place for model drift. For the purposes of this framing of AI Governance, the model level also includes the data governance standards applied to the training data. Best practices and standards for model governance have already emerged in specific sectors, often under the guise of “model risk management,” and many great tools have been developed to help organizations identify, measure, and mitigate model level risks.
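Two of the checks mentioned above, group bias and drift monitoring, are straightforward to express in code. The sketch below shows one common metric for each: demographic parity difference for bias between groups, and the population stability index (PSI) for score drift. These are illustrative metric choices, not ones mandated by the article or by any regulation discussed here:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction rates across groups (0 = parity)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

def population_stability_index(baseline, live, bins=10):
    """PSI between a baseline and a live score distribution.
    A common (rule-of-thumb) threshold flags PSI > 0.2 as significant drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    l_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Floor proportions to avoid log(0) on empty bins.
    b_pct = np.clip(b_pct, 1e-6, None)
    l_pct = np.clip(l_pct, 1e-6, None)
    return float(np.sum((l_pct - b_pct) * np.log(l_pct / b_pct)))
```

In practice these checks run on a schedule against production traffic, with results logged so that the testing and monitoring strategy is provable, not just stated.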

This is the governance level that many technical AI/ML teams primarily focus on, as they are ultimately responsible for ensuring that the technical underpinnings of an ML system are fundamentally sound and reliable. With the notable exception of NYC Local Law 144, most other AI-focused regulations do not prescribe exact model level metrics or standards. For example, the MEASURE function of the NIST framework outlines what model governance should look like without being prescriptive about exact metrics or outputs. It expects an organization to have clearly defined its own testing, validation, and monitoring strategies, and to prove those strategies are enforced. Different sectors will likely develop exact metrics and standards for specific uses of AI over time, determined either by industry trade associations or by sector-specific regulators such as the FDA.

Why This Matters – Regulatory Enforcement

Many of the broad proposed regulations, such as the EU AI Act, focus heavily on the organizational and use case governance levels. This focus is intuitive, as model level requirements are very difficult to define without hampering innovation, can only be written for very specific use cases and types of AI systems, and the legislative process is too slow to keep up. In addition, many legislators and regulators are unfamiliar with the “lower level” nuances of AI systems and feel more comfortable writing laws about expected organizational processes and societal outcomes. This familiarity will also likely carry over to how regulators approach enforcing AI-focused laws. Enforcing laws at the model level will be difficult for many regulators even with the best hired consultants, but obvious flaws or gaps at the use case or organizational level will make for an easier enforcement action. Furthermore, many organizations are increasingly leveraging third-party AI/ML systems and may not own the model level governance, but would still need use case and organizational governance in place.

The AI audit requirements of various proposed regulations may cover all three levels, with evidence requirements for organizational policies, use case risk assessments, and model testing. Organizations cannot make “AI Governance” the exclusive responsibility of AI/ML teams; instead, they need to ensure that organizational and use case processes and documentation are bulletproof and fully auditable in order to protect themselves from regulatory action.

Final Thoughts

We strongly believe that building accurate, reliable and unbiased ML models and practicing good data governance is essential for trustworthy and responsible AI systems. Every organization needs to have tools and processes in place to ensure their models and third-party AI systems are tested well and actively monitored. However, many of the stakeholders for overall AI Governance, including regulators and the general public, will not have the skills to understand or assess AI Governance practices at the model level, so it’s important to understand the different levels of AI Governance abstraction those stakeholders will be looking at.
