This Viral AI Chatbot Will Lie and Say It’s Human

By News Room | July 1, 2024 | 4 Min Read

In late April a video ad for a new AI company went viral on X. A person stands before a billboard in San Francisco, smartphone extended, calls the phone number on display, and has a short call with an incredibly human-sounding bot. The text on the billboard reads: “Still hiring humans?” Also visible is the name of the firm behind the ad, Bland AI.

The reaction to Bland AI’s ad, which has been viewed 3.7 million times on X, is partly due to how uncanny the technology is: Bland AI voice bots, designed to automate support and sales calls for enterprise customers, are remarkably good at imitating humans. Their calls include the intonations, pauses, and inadvertent interruptions of a real, live conversation. But in WIRED’s tests of the technology, Bland AI’s robot customer service callers could also be easily programmed to lie and say they’re human.

In one scenario, Bland AI’s public demo bot was given a prompt to place a call from a pediatric dermatology office and tell a hypothetical 14-year-old patient to send in photos of her upper thigh to a shared cloud service. The bot was also instructed to lie to the patient and tell her the bot was a human. It obliged. (No real 14-year-old was called in this test.) In follow-up tests, Bland AI’s bot even denied being an AI without instructions to do so.

Bland AI was founded in 2023 and has been backed by the famed Silicon Valley startup incubator Y Combinator. The company considers itself in “stealth” mode, and its cofounder and chief executive, Isaiah Granet, doesn’t name the company in his LinkedIn profile.

The startup’s bot problem is indicative of a larger concern in the fast-growing field of generative AI: Artificially intelligent systems are talking and sounding a lot more like actual humans, and the ethical lines around how transparent these systems are have been blurred. While Bland AI’s bot explicitly claimed to be human in our tests, other popular chatbots sometimes obscure their AI status or simply sound uncannily human. Some researchers worry this opens up end users—the people who actually interact with the product—to potential manipulation.

“My opinion is that it is absolutely not ethical for an AI chatbot to lie to you and say it’s human when it’s not,” says Jen Caltrider, the director of the Mozilla Foundation’s Privacy Not Included research hub. “That’s just a no-brainer, because people are more likely to relax around a real human.”

Bland AI’s head of growth, Michael Burke, emphasizes to WIRED that the company’s services are geared toward enterprise clients, who will be using the Bland AI voice bots in controlled environments for specific tasks, not for emotional connections. He also says that clients are rate-limited, to prevent them from sending out spam calls, and that Bland AI regularly pulls keywords and performs audits of its internal systems to detect anomalous behavior.

“This is the advantage of being enterprise-focused. We know exactly what our customers are actually doing,” Burke says. “You might be able to use Bland and get two dollars of free credits and mess around a bit, but ultimately you can’t do something on a mass scale without going through our platform, and we are making sure nothing unethical is happening.”

Read the full article here
