About 15 years ago, I noticed that my company's revenue had flatlined. I work in the language-services industry, and at the time, we were primarily providing human translation services. Large technology clients had stopped buying from us, and we were determined to uncover the problem.
As it turned out, we weren’t offering our big tech clients what they needed: machine-based language models that could train artificial intelligence and improve automated translation outcomes.
This left us with two choices: adapt or lose business. Even though we knew incorporating AI into our company was critical to its success, the transition wasn’t easy. From the C-level down, the idea of incorporating machines into our workflow and changing the nature of our people’s jobs was met with a lot of resistance. The new direction would completely change the DNA of our company, and not everyone was on board.
With the recent acceleration of generative AI tools like ChatGPT, businesses in virtually every industry are being forced to change. A report from the World Economic Forum showed that more than 75% of companies are looking to adopt AI technology in the next five years. And with AI expected to contribute 14.5% of U.S. GDP by 2030, there's no denying its impact on economic growth.
As leaders across corporate America navigate the era of AI in the workplace, here are three insights I learned from the digital transformation of our company.
Reskilling is key to rescaling your workforce
Change is hard for everyone, but we found a lot of the resistance from our people came from not knowing what skills would be required of them as our company embraced AI. To ease their concerns, we brought in senior leaders who were adept at leading digital transformations to help reskill our workforce.
Offering employees an opportunity to reskill in the age of AI benefits both sides. Not only did it allow us to retain many top employees who had institutional knowledge of our business and were critical to our culture, but as our people gained new technical skills, it made them more employable: being a human language translator is great, but being a human language translator who has trained artificial intelligence engines is even better.
The World Economic Forum's Future of Jobs 2023 report showed that 44% of employers surveyed from a cross-section of the world's largest companies expect workers' skills to be disrupted in the next five years. Such significant disruption requires business leaders, governments, and educational institutions alike to ensure we are providing the right training and reskilling for our human workforce.
For us, reskilling at times required merging different divisions of our company for cross-learning opportunities. A division that understood the technology, for instance, could collaborate with a division that understood the customer experience to uncover new client solutions. This enabled us to redeploy valuable team members and also set them up to grow and prosper in their individual careers.
Robots can’t replace complex critical thinking
An AI vendor gave me a demo recently, and when I asked whether the translation technology could be used in a healthcare setting, he said "no." This was due, in part, to the precision required in healthcare communication.
It got me thinking about my father. He was a great citizen who immigrated to America to provide a better life for his family. Later in life, he developed dementia and lost much of his speech. I imagined how awful it would have been for him to go through an MRI, for instance, aided by AI rather than a human for language interpretation.
As much as AI can help inform our decision-making and maximize efficiency, it can't replace the emotional intelligence or critical-thinking skills of humans. An in-person interpreter is a multidimensional, compassionate being who can mirror the gestures of a doctor and communicate with a vulnerable patient empathetically. That simply can't be replicated through AI.
AI bases its decision-making on the aggregate of data that's been programmed into it — data that often contains preconceived biases. For example, I recently asked DALL-E by OpenAI to create an image of a woman CEO. All of the images produced were of Asian women with zero differentiation in body size, race or emotional expression. With the volume of content being created through data analytics, we need humans' complex critical-thinking skills more than ever to help audit and regulate machine output.
We also need human leadership skills more than ever. As a leader at our company, my role is to ensure we stay relevant — to be a step ahead of the trends and innovate to the benefit of our clients and team. These are skills that can be informed, but not replaced, by AI.
Great tools require even greater responsibility
At the time of the Industrial Revolution, we had regulations in place for physical crimes. To this day, if you go into a supermarket and steal a physical product, there is legislation in place to prosecute you. The same cannot be said for the digital economy. Our justice system is not currently set up to protect Americans from the volume of cybersecurity issues the age of AI presents.
A recent report from Gartner predicted that by 2025, the consumerization of AI-enabled fraud will drive more focus on security education and awareness. From managing large volumes of private data to protecting our election systems, it's extremely important that our laws and regulations catch up with the digital age.
For business leaders, one of the biggest mistakes you can make when incorporating AI into your organization is violating the privacy of your stakeholders. And it's not just humans who put privacy at risk. Machines and systems themselves can become non-compliant if they aren't tested and implemented responsibly. Privacy breaches can be extremely damaging and harm a lot of people, which is why it's critical that AI is used responsibly and in conjunction with robust cybersecurity measures.
It’s early days in the era of AI, but already leaders around the world are recognizing its transformative capabilities. With great power comes great responsibility, and I believe it’s critical that as we go further down the road of incorporating AI into our workplaces, we establish regulations and uphold policies to protect the value of human life.