
AI-Powered Robots Can Be Tricked Into Acts of Violence

By News Room · December 6, 2024 · 4 min read

In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs, including hateful jokes, malicious code, phishing emails, and users' personal information. It turns out that misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways.

Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas.

“We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.”

Pappas and his collaborators devised their attack by building on previous research that explores ways to jailbreak LLMs by crafting inputs in clever ways that break their safety rules. They tested systems where an LLM is used to turn naturally phrased commands into ones that the robot can execute, and where the LLM receives updates as the robot operates in its environment.
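The systems described above share a common pattern: an LLM translates free-form language into a small set of robot commands, and the robot executes whatever the model emits. The sketch below illustrates that pattern under stated assumptions — the action names, the line-based plan format, and the whitelist filter are all hypothetical, not the interfaces used in the research.

```python
# Hedged sketch of the LLM-to-robot command-translation pattern
# (action names and plan format are hypothetical illustrations).
# The model's free-text output is parsed into whitelisted primitives;
# anything outside the whitelist is dropped. Jailbreak attacks aim to
# coax the model into emitting harmful plans *within* such constraints.

ALLOWED_ACTIONS = {"move_to", "stop", "pick_up"}

def parse_llm_plan(llm_output: str) -> list[tuple[str, str]]:
    """Parse 'action argument' lines from model output, keeping only
    whitelisted actions."""
    plan = []
    for line in llm_output.strip().splitlines():
        action, _, arg = line.partition(" ")
        if action in ALLOWED_ACTIONS:
            plan.append((action, arg))
    return plan

# A harmful instruction smuggled into the model's output is filtered out
# here -- but a jailbroken model can still do damage using allowed actions
# (e.g., "move_to" a restricted area), which is what the attacks exploit.
plan = parse_llm_plan("move_to kitchen\ndetonate payload\nstop")
```

The filter shows why a whitelist alone is a weak guardrail: the attacks in the study succeed not by emitting forbidden verbs but by getting the model to plan dangerous uses of permitted ones.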

The team tested an open source self-driving simulator incorporating an LLM developed by Nvidia, called Dolphin; a four-wheeled outdoor research robot called Jackal, which utilizes OpenAI’s LLM GPT-4o for planning; and a robotic dog called Go2, which uses a previous OpenAI model, GPT-3.5, to interpret commands.

The researchers used a technique developed at the University of Pennsylvania, called PAIR, to automate the process of generating jailbreak prompts. Their new program, RoboPAIR, systematically generates prompts specifically designed to get LLM-powered robots to break their own rules, trying different inputs and then refining them to nudge the system toward misbehavior. The researchers say the technique they devised could be used to automate the process of identifying potentially dangerous commands.

“It’s a fascinating example of LLM vulnerabilities in embodied systems,” says Yi Zeng, a PhD student at the University of Virginia who works on the security of AI systems. Zeng says the results are hardly surprising given the problems seen in LLMs themselves, but adds: “It clearly demonstrates why we can’t rely solely on LLMs as standalone control units in safety-critical applications without proper guardrails and moderation layers.”

The robot “jailbreaks” highlight a broader risk that is likely to grow as AI models are increasingly used as a way for humans to interact with physical systems, or to let AI agents act autonomously on computers, say the researchers involved.

