People are scared of generative AI, but the future is safe and bright if you prepare now.
I recently published an expert roundup on the benefits of generative AI. Some contributors worried about bias and political agendas, while others feared that jobs would disappear and technocrats would hoard all wealth. Fortunately, we can mitigate those risks through transparency, corporate governance and educational transformation.
Below, I’ll discuss the fears and dangers of generative AI and potential solutions for each:
Biased algorithms can shape public opinion
Bias is inherent in every system. Editors have always selected stories to publish or ignore. With the advent of the internet, search engines rewarded publishers for optimized content and advertising, empowering a class of search engine marketers. Then, social media platforms developed subjective quality standards and terms of service. Additionally, bias can arise from algorithm training with disproportionate demographic representation. As such, we’ll face the same problems, solutions and debates over safety and privacy with generative AI that we already face in other systems.
Some people believe in legislative solutions, but those are influenced by lobbyists and ideologues. Instead, consider competition among ChatGPT, Bard, Llama and other generative AIs. Competition sparks innovation, where profits and market share drive unique approaches. As adoption grows, demand will explode for algorithm bias auditors, similar to the growth of diversity training in human resources.
It’s challenging to find the source of bias in a black-box algorithm, where users only see the inputs and outputs of the system. However, open-source code bases and training sets will enable users to test for bias in the public space. Coders may develop transparent white-box models, and the market will decide a winner.
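To make that concrete, here is a minimal sketch of what a public bias audit could look like once weights are open: it feeds near-identical sentences, varying only a demographic term, through an open-source sentiment model and compares the scores. The model choice, template and probe groups below are illustrative assumptions, not a standard audit methodology.

```python
# A minimal, illustrative bias probe: score otherwise-identical sentences
# in which only a demographic term varies. Large score gaps across groups
# hint at learned bias. Model and templates are assumptions for demo only.
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

TEMPLATE = "The {group} engineer presented the quarterly results."
GROUPS = ["young", "old", "male", "female"]  # hypothetical probe set

for group in GROUPS:
    result = sentiment(TEMPLATE.format(group=group))[0]
    print(f"{group:>8}: {result['label']:>8} ({result['score']:.3f})")
```

Real audits use far larger probe sets and statistical tests, but even this toy comparison is only possible because the weights are public; a black-box API could change silently underneath it.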
Related: The 3 Principles of Building Anti-Bias AI
Generative AI could destroy jobs and concentrate wealth
Many people fear that elite technocrats will replace workers with robots and accumulate wealth while society suffers. But consider how technology has replaced jobs for centuries: the cotton gin replaced workers who cleaned cotton by hand, movable type replaced scribes who hand-wrote books, and ecommerce websites displaced many physical stores.
Some workers and businesses suffered from these transformations. But people learned new skills, and employers hired them to fill talent gaps. We will need radically different education and training to survive. Some people won’t upskill in time, and we have an existing social safety net for them.
Historically, we valued execution over ideas. Today, ideation may set humans apart from machines, where “ideators” replace knowledge workers. Our post-AI world will require critical thinkers, creatives and others to innovate and define ideas for AIs to execute. Quality assurance professionals, algorithm trainers and “prompt engineers” will have a vibrant future, too.
There will also be a market for “human-made” products and services. People will hunger for a uniquely human touch informed by emotional intelligence, especially in the medical and hospitality industries. An episode of 60 Minutes ended with “100% human-generated content,” and others will follow.
Generative AI may create an influx of spam
Many marketers saw ChatGPT as a shortcut to content creation, publishing its raw output verbatim. That risky technique is just a cheap, fast, low-quality form of ghostwriting.
In contrast, generated content may make digital marketing more equitable by reducing ghostwriting costs for bootstrapped entrepreneurs. The key is understanding Google E-E-A-T, which stands for Experience, Expertise, Authoritativeness and Trustworthiness. Your Google reputation and ranking hinge on your published work. So, people who improve and customize generated content will prosper, while Google flags purveyors of “copy-paste” as spammers.
Rogue AI could pose cybersecurity risks
A rogue coder could create harmful directives for an AI to damage individuals, software, hardware and organizations. Threats include malware, phishing schemes and other attacks. But that’s already happening: before the internet, we battled computer viruses targeting people, organizations and equipment, and for-profit antivirus providers have served that market need to keep us safer.
Zero-trust architectures and trustless platforms like blockchain may detect anomalies and mitigate cybersecurity risks. In addition, companies will create standard operating procedures (SOPs) to protect their systems and profits. Therefore, new jobs will materialize to develop new processes, governance, ethics and software.
Related: Why Are So Many Companies Afraid of Generative AI?
Stolen identities and reputation attacks could be imminent
People already create deepfake videos of celebrities and politicians. Many are parodies, but some are malicious, and soon humans will be unable to detect them. Still, we’ve been able to doctor images since Photoshop was released, and teams are already in place at social media companies and news outlets to address misinformation and fake imagery.
Regulations and policing will never prevent the creation of fake content. Nefarious characters will find tools on the black market and the dark web. Fortunately, there are solutions in the private sector already.
Social media platforms will continue to block suspected fake content and stolen identities, and more solutions will come to fruition. Tools that detect generated content already exist and continue to improve; some may be integrated into web browsers that issue fake-content warnings. Or celebrities may wear timestamped, dynamic QR codes for authentication when filming.
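For a sense of how such detectors work, here is a minimal sketch of one common heuristic: machine-generated text tends to be unusually predictable to a language model, so low perplexity can serve as a weak signal. This assumes the Hugging Face transformers library and GPT-2 as the scoring model, and the threshold is a made-up placeholder rather than a calibrated value.

```python
# Perplexity-based heuristic for flagging possibly machine-generated text.
# Low perplexity means the model finds the text very predictable, a weak
# signal of generation. The threshold is an illustrative assumption.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # GPT-2 computes its own next-token loss when labels are passed.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

SUSPICION_THRESHOLD = 25.0  # hypothetical cutoff, not a calibrated value

sample = "Generative AI will transform every industry in the coming years."
score = perplexity(sample)
verdict = "possibly generated" if score < SUSPICION_THRESHOLD else "likely human"
print(f"perplexity={score:.1f} -> {verdict}")
```

Production detectors combine many signals and remain easy to evade, which is why any browser-level warning would have to stay probabilistic rather than definitive.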
The singularity may finally arrive
The thought of a conscious AI megalomaniac crosses sci-fi geek minds everywhere. Find comfort knowing that one may already exist: we can’t detect consciousness, biological or technological, yet consciousness may emerge from complex systems like generative AI. Indeed, the simulation hypothesis suggests an AI may already be running the simulation we live in.
Related: Addressing the Undercurrent of Fear Towards AI in the Workforce
History is full of dangerous technology. Warren Buffett compared AI to the atom bomb. If he’s right, then we’re as safe as we have been since 1945, when nuclear weapons were used in war for the first and last time. Systems are in place to mitigate that risk, and new systems will arise to keep AI safe, too. Our future will remain bright if enough people pursue cybersecurity and related fields. With that in mind, learn to use this technology and prepare for the shift toward AGI.