Your #1 guide to start a business and grow it the right way…

BuckheadFunds

Keep Humans At The Center Of AI Decision Making

By News Room · October 24, 2023 · 7 Min Read

Beena Ammanath – Global Deloitte AI Institute Leader, Founder of Humans For AI and Author of “Trustworthy AI” and “Zero Latency Leadership”

In this era of humans working with machines, being an effective leader with AI takes a range of skills and activities. Throughout this series, I’m providing an incisive roadmap for leadership in the age of AI, and an important part of leading effectively today is making sure your people are at the center of decision-making.

In popular discussions of artificial intelligence, there can be a sense that the machine stands alone, distinct from human intelligence and capable of functioning independently, indefinitely. This perception has led to consternation about the mass elimination of jobs and the unfounded fear that the future of business lies in replacing humans with machines. This view is wrongheaded, and in fact, holding this assumption may actually limit the potential value of AI applications and the trust placed in them.

The reality is that behind every AI model and use case is a human workforce. Humans do the hard, often-unsung work of creating and assembling the data and enabling technologies, using the model to drive business outcomes, and establishing governance and risk mitigation to support compliance. Put another way, without humans, there can be no AI.

Yet, while the human element is a key to unlocking valuable, trustworthy AI, it is not always given the attention and investment it is due. The imperative today is to orient AI programs around humans working with AI, not simply alongside it, because that orientation has a direct impact on both AI ethics and business value.

Two areas of AI development and use illustrate this: the way training data is curated, and the importance of validating AI outputs.

The Risks In Data Annotation

AI models are largely trained on annotated data. Annotating text, images, sentiments and other data at scale is a time-consuming, highly manual effort in which human workers follow instructions from engineers to label data in a particular way, according to whatever a given model needs. Matters of trust and ethics grow out of this. Are the human annotators injecting bias into the training set by virtue of their personal biases? For example, if an annotator is color blind and asked to annotate red apples in a set of images, they might fail to label the images correctly, leading to a model that is less capable of spotting red apples in the real world.
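
One common safeguard against annotator-introduced bias is to have multiple workers label the same items and measure how well they agree before the labels enter a training set. The sketch below is illustrative, not from the article: it computes Cohen's kappa (chance-corrected agreement) over a hypothetical set of apple-image labels.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two annotators, corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: product of each annotator's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical labels from two annotators on the same six images.
annotator_1 = ["red", "red", "green", "red", "green", "red"]
annotator_2 = ["red", "green", "green", "red", "green", "green"]
kappa = cohens_kappa(annotator_1, annotator_2)  # ≈ 0.4, well below full agreement
```

A low kappa does not say which annotator is wrong, but it flags items (or annotators) for review before a systematic bias, such as the color-blindness example above, silently shapes the model.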

Separately, what are the ethical implications for the humans engaged in this work? While red apples are innocuous, some data might contain disturbing content. If a model is intended to assess vehicle damage based on accident photos, human annotators might be asked to scrutinize and label images that contain things better left unseen. In this, organizations have an obligation to weigh the benefits of the model against the repercussions for the human workforce. Whether it is red apples or crashed cars, the insight is to keep humans at the center of decision-making and account for risks to the employee, the enterprise, the model and the end user.

The Importance Of Output Validation

With machine learning and other more traditional types of AI, model management requires ongoing attention to outputs to account for and correct issues like model drift and brittleness. With the emergence of generative AI, validating outputs becomes even more critical for risk mitigation and governance.

Generative AI, such as large language models (LLMs), has rightly created excitement and urgency around how this new type of AI can be used across myriad use cases, both complementing the existing AI ecosystem with upstream deployments and enabling downstream use cases, such as natural language chatbots and assistive summaries of documents and datasets. Generative AI creates data that is (usually) as coherent and accurate as real-world data. If a prompt for an LLM asks for a review of supply chain constraints over the past month, a model with access to that data could output a tight summary of constraints, suspected causes and remediation steps. That summary provides insight that the user relies on to make decisions, such as changing a supplier that regularly encountered fulfillment issues.

But what if the summary is incorrect and the LLM has (without any malicious intent) cited a constraint that does not exist and, even worse, invents a rationalization for why that “hallucination” is valid? The user is left to make decisions based on false information, which has cascading business implications. This exemplifies why output validation is necessary for generative AI deployments.

To be sure, not all inaccuracies bring the same level of risk and consequence. If using generative AI to write a marketing e-mail, the organization might have a higher tolerance for error, as faults or inaccuracies are likely to be fairly easy to identify and the outcomes are lower stakes for the enterprise. When it comes to other applications that concern mission-critical business decisions, however, the tolerance for error is low. This makes a “human in the loop” who validates model outputs more important than ever before. Generative AI hallucination is a technical problem, but it requires a human solution.
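
A risk-tiered "human in the loop" can be expressed as a simple routing rule: high-stakes outputs always go to a reviewer, while low-stakes content ships automatically. The sketch below is a hypothetical illustration of that idea; the tier names, thresholds and function signature are assumptions, not an implementation the author describes.

```python
def route_output(output_text, confidence, risk_tier, threshold_by_tier=None):
    """Decide whether a generated output ships directly or goes to a reviewer.

    `confidence` and `risk_tier` are assumed to come from the calling
    pipeline; the tier thresholds here are illustrative, not prescriptive.
    """
    thresholds = threshold_by_tier or {"low": 0.5, "medium": 0.8, "critical": 1.1}
    # A threshold above 1.0 means the tier *always* requires human review.
    if confidence >= thresholds[risk_tier]:
        return "auto_publish"
    return "human_review"

route_output("Supply chain constraint summary", 0.92, "critical")  # always reviewed
route_output("Marketing email draft", 0.92, "low")                 # ships directly
```

The design choice worth noting is that for mission-critical tiers no confidence score is high enough to skip review, which mirrors the low error tolerance described above.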

Deloitte, where I’m the Global Head of the AI Institute, calls this the “Age of With,” an era characterized by humans working with machines to accomplish things neither could do independently. The opportunity is limited only by the imagination and the degree to which risks can be mitigated. Recognizing and prioritizing the human element throughout the AI lifecycle can help organizations build AI programs they can trust.

Forbes Business Council is the foremost growth and networking organization for business owners and leaders.

© 2024 BuckheadFunds. All Rights Reserved.