US National Security Experts Warn AI Giants Aren’t Doing Enough to Protect Their Secrets

By News Room | June 10, 2024 | 4 Min Read

Google, in public comments to the NTIA ahead of its report, said it expects “to see increased attempts to disrupt, degrade, deceive, and steal” models. But it added that its secrets are guarded by a “security, safety, and reliability organization consisting of engineers and researchers with world-class expertise” and that it was working on “a framework” that would involve an expert committee to help govern access to models and their weights.

Like Google, OpenAI said in comments to the NTIA that there was a need for both open and closed models, depending on the circumstances. OpenAI, which develops models such as GPT-4 and the services and apps that build on them, like ChatGPT, last week formed its own security committee on its board and this week published details on its blog about the security of the technology it uses to train models. The blog post expressed hope that the transparency would inspire other labs to adopt protective measures. It didn’t specify from whom the secrets needed protecting.

Speaking alongside Rice at Stanford, RAND CEO Jason Matheny echoed her concerns about security gaps. By using export controls to limit China’s access to powerful computer chips, the US has hampered Chinese developers’ ability to build their own models, Matheny said. That, he claimed, has increased their need to steal AI software outright.

By Matheny’s estimate, spending a few million dollars on a cyberattack that steals AI model weights, which might cost an American company hundreds of millions of dollars to create, is well worth it for China. “It’s really hard, and it’s really important, and we’re not investing enough nationally to get that right,” Matheny said.

China’s embassy in Washington, DC, did not immediately respond to WIRED’s request for comment on theft accusations, but in the past has described such claims as baseless smears by Western officials.

Google has said that it tipped off law enforcement about the incident that became the US case alleging theft of AI chip secrets for China. While the company has described maintaining strict safeguards to prevent the theft of its proprietary data, court papers show it took considerable time for Google to catch the defendant, Linwei Ding, a Chinese national who has pleaded not guilty to the federal charges.

The engineer, who also goes by Leon, was hired in 2019 to work on software for Google’s supercomputing data centers, according to prosecutors. Over about a year starting in 2022, he allegedly copied more than 500 files containing confidential information to his personal Google account. The scheme worked in part, court papers say, by the employee pasting information into Apple’s Notes app on his company laptop, converting the files to PDFs, and uploading them elsewhere, all the while evading Google’s technology meant to catch that sort of exfiltration.

The US claims that while he was engaged in the alleged theft, the employee was in touch with the CEO of an AI startup in China and had moved to start his own Chinese AI company. If convicted, he faces up to 10 years in prison.
