AI Giants Pledge to Allow External Probes of Their Algorithms, Under a New White House Pact

By News Room | July 22, 2023 | 4 Min Read

The White House has struck a deal with major AI developers—including Amazon, Google, Meta, Microsoft, and OpenAI—that commits them to take action to prevent harmful AI models from being released into the world.

Under the agreement, which the White House calls a “voluntary commitment,” the companies pledge to carry out internal tests and permit external testing of new AI models before they are publicly released. The tests will look for problems including biased or discriminatory output, cybersecurity flaws, and risks of broader societal harm. Startups Anthropic and Inflection, both developers of notable rivals to OpenAI’s ChatGPT, also participated in the agreement.

“Companies have a duty to ensure that their products are safe before introducing them to the public by testing the safety and capability of their AI systems,” White House special adviser for AI Ben Buchanan told reporters in a briefing yesterday. The risks that companies were asked to look out for include privacy violations and even potential contributions to biological threats. The companies also committed to publicly reporting the limitations of their systems and the security and societal risks they could pose.

The agreement also says the companies will develop watermarking systems that make it easy for people to identify audio and imagery generated by AI. OpenAI already adds watermarks to images produced by its DALL-E image generator, and Google has said it is developing similar technology for AI-generated imagery. Helping people discern what’s real and what’s fake is a growing issue as political campaigns appear to be turning to generative AI ahead of US elections in 2024.

Recent advances in generative AI systems that can create text or imagery have triggered a renewed AI arms race among companies adapting the technology for tasks like web search and writing recommendation letters. But the new algorithms have also triggered renewed concern about AI reinforcing oppressive social systems like sexism or racism, boosting election disinformation, or becoming tools for cybercrime. As a result, regulators and lawmakers in many parts of the world—including Washington, DC—have increased calls for new regulation, including requirements to assess AI before deployment.

It’s unclear how much the agreement will change how major AI companies operate. Growing awareness of the technology’s potential downsides has already made it common for tech companies to hire people to work on AI policy and testing. Google has teams that test its systems, and it publicizes some information, like the intended use cases and ethical considerations for certain AI models. Meta and OpenAI sometimes invite external experts to try to break their models, an approach dubbed red-teaming.

“Guided by the enduring principles of safety, security, and trust, the voluntary commitments address the risks presented by advanced AI models and promote the adoption of specific practices—such as red-team testing and the publication of transparency reports—that will propel the whole ecosystem forward,” Microsoft president Brad Smith said in a blog post.

The potential societal risks the agreement pledges companies to watch for do not include the carbon footprint of training AI models, a concern now commonly cited in research on the impact of AI systems. Creating a system like ChatGPT can require thousands of high-powered computer processors running for extended periods of time.
