Depending on how you look at it, AI is either the greatest opportunity in a generation or the biggest threat to the way we live since the Industrial Revolution. To be fair, it is possible to hold both views at once. Much is being said and written at the moment, for instance, about the technology’s potential to tackle the thorny issue of productivity. It promises to do so through software and products that anticipate needs in ways we are only just starting to grasp. Because it is so fast and can handle vast amounts of data almost instantly, it is more efficient than even the most punctilious worker and is therefore bound to eliminate some jobs. The question is whether the productivity and opportunities it unleashes will create more jobs, and more interesting and rewarding ones, than are lost.
Proponents of AI talk a lot about creativity. Yet one of the current stories in which AI features strongly is the Hollywood writers’ strike, motivated in large part by fears that the studios are bent on replacing the people who supply their content, generally at quite modest rates for their industry, with bots. So it is understandable if people with jobs deemed less creative are somewhat fearful.
At the same time, though, there is a risk that organizations will be carried away by all the hype and attention this technology is attracting. As Stephen Newton, founder and CEO of the consulting firm Elixirr, said in an email this week: “Like all technology, AI is only as good as the source of its information and the quality of the learning feedback loop. The danger is that businesses, the public and government rely on AI output when it has been fed with poor data and was taught by humans/other AI with a particular bias or ideology.” He added that it was important for executives to ask detailed questions about what was going into AI and what was coming out.
A recent report from the IBM Institute for Business Value devotes much attention to the issue of jobs and work. Pointing out that AI is “fueling workforce disruption,” the study, released at the end of June, shows that 43% of CEOs say they have reduced or redeployed their workforce due to generative AI, with an additional 28% saying they plan to do so in the next 12 months. However, it also finds that a similar proportion of CEOs report having hired additional people due to AI, with plans for more hiring ahead. As the report says: “The picture is muddled.” So far, executives do not appear to have resolved the question of what kind of workforce they will need in the future.
Moreover, fewer than one in three CEOs have assessed the potential impact of generative AI on their workforce. The authors regard this as “among the most disquieting findings from our analysis.” It means, they say, that two out of three CEOs are acting without a clear view of how to help their workforce through the disruption and inevitable transitions AI will bring. It is not clear whether this is an oversight or simply a lag in activity. But it could be important, particularly since there appear to be significant differences of opinion between CEOs and other senior executives concerning organizational skills and readiness in this area. While 69% of CEOs see broad benefits of generative AI across their organizations, just 29% of their executive teams agree they have the in-house expertise to adopt generative AI. In addition, only 30% of senior executives who are not CEOs say that their organizations are ready to adopt generative AI responsibly.
So, where does this leave us? IBM’s view, which may surprise some, is that the people side matters more than the technological one. In recent articles it has emphasized the human role, stressing how generative AI allows repetitive tasks to be automated, freeing people for more interesting work. In a recent interview about the report, Jill Goldstein, global managing partner for talent transformation at IBM Consulting, said that some organizations were already using the technology in this way. But she also stressed that a lot more action was required. In particular, organizations need broader technical ability, supplemented with the specific skills AI requires. “Even business professionals need to become somewhat conversant with it,” she said. They should also teach technical skills to functional experts and tap into existing talent pools to make use of those skills.
Perhaps the important thing to remember at this stage is that the technology is developing so fast that any pronouncements about its scope or likely impact will probably be proved wrong pretty quickly. As Goldstein pointed out, it was only late last year that the furore over ChatGPT first broke, and look how views have developed since then.