Generative AI is transforming our professional and personal lives. We discussed the innovative, and sometimes difficult, applications of this technology with Aamer Baig, who leads McKinsey's technology practice globally. Aamer shares insights into how generative AI is reshaping industries, drawing on his extensive experience advising clients across sectors.
Can you share a little bit about how you're seeing clients implement AI in their businesses in this moment?
Let's look at this technology from a business and society standpoint. We call it a generational technology, and it's also been called the next platform revolution. Why is that important from a leadership perspective? One, it makes possible new business models and new businesses that didn't exist before. Many of our clients in financial services and insurance are still taking advantage of the computing revolution. They do large-scale transaction processing, whether it's claims, underwriting, or what have you. A platform revolution makes possible new business models, whether it's digital banking or the gig economy. It remains to be seen what AI enables writ large, but it is one of those technologies that'll have that sort of effect.
Second, it's going to enable the creation of new technology champions. Leaders will have to think about how they partner with, co-opt, or compete against them. Think of Amazon and AWS in cloud computing as an example.
The third thing is that technologies of this type actually change how organizations work. AI has the potential for democratizing not only information but also intelligence and decision-making.
In terms of how organizations are applying AI, it is a combination of traditional AI and generative AI. Many organizations have AI initiatives in flight to improve productivity, efficiency, new service models, sales and distribution, etc.
In an article, you suggested that we are getting past the honeymoon phase with AI and facing the hard reality of its limitations. What are some of the challenges you’re seeing, and are they technical, employee morale, or skill-centric?
It's all of the above. We think generative AI has tremendous promise and can reshape how businesses and functions work. We were very clear that we believe this is a multi-trillion-dollar opportunity globally: $2.6 trillion to $4.4 trillion in productivity gains.
There was a lot of buzz in 2023, a lot of investment going into generative AI, and a lot of initiatives at companies across all sectors, including insurance. At the end of 2023, we talked to leaders across all sectors and functions and found that between 65 and 70% of organizations had AI initiatives and were planning to implement AI at scale, but fewer than 15% were seeing real benefits from it. It will take time for these investments to gain traction at scale and deliver meaningful benefits.
Are there common themes among that 15% in how they're attacking the issue and creating the clarity needed to build momentum?
One common theme is that the companies that are really doing well are doing fewer things exceptionally well, rather than many things just well enough to get by. So, picking a few spots where you think AI is going to be difference-making for the business, and putting all the ergs of energy behind them, makes sense. This is difficult to do because you have to say no to different parts of the organization, and you have to slow things down a bit to go faster later.
How do the leaders of successful companies create and communicate a clear focus for their AI initiatives?
CEO and C-suite sponsorship is exceptionally important: having a view from the CEO's perch on where AI is going to make a difference to the organization and what our AI strategy is, in terms of where we will apply AI for productivity, growth, or new capabilities. How do we partner with other organizations? And then how do we move at pace?
Most of the value that great AI initiatives deliver will transcend traditional organizational boundaries. That requires a lot of change management. It needs alignment across the C-suite on what's important and on the benefit we're driving with AI. What are the risks we have to manage? How is this going to affect people's day-to-day jobs, and what's our posture from a talent and risk perspective?
We like to say that if you're spending a dollar on development, you have to spend three on change management. You have to invest in the organizational aspects: training people, understanding risk, understanding IP protection, and learning new skills like prompt engineering, which didn't exist two years ago. Having the right skill set, and a multidisciplinary way of working to develop the platform and the products, requires assembling minds, not just models.
Talent: The Key to Success in AI
What skill sets and mindsets are critical for a CIO at the forefront of this shift?
There are many important roles for a CIO, but one of the most important is to be the chief learning officer. This person has to help the C-suite learn and then help set the learning agenda on AI for the rest of the company. This has a huge bearing on how AI gets adopted responsibly and thoughtfully, but also at pace. Within the technology function itself, the nature of how technology gets conceived, developed, and deployed is going to change. We're seeing remarkable progress in the idea of an assistant to a software engineer that generates, tests, and deploys code, and the range of performance improvement is between 15 and 40%, depending on what you're talking about. So, half of what the technology function does is going to change.
Technology operations, which is the other half of it, is going to go through a lot of automation. Cloud already helped with that, but this is going to give an even bigger lift.
With automation and AI, we're going to see a lot more of the service operations being tech-enabled and automated. As a result, you're going to see a blending of ops and engineering skills together. It's happened in tech; it's happening in some parts of banking and FinTech. Over time, we think it'll happen in insurance as well.
Can you speak a bit about how the individual senior leaders, not just the CIO, but their staff need to be different on those dimensions?
I think the responsibility for AI goes well beyond the CIO's office. Some of the tooling and some of the initial incubation may happen there, but if there is a technology that's going to democratize intelligence and decision-making at a large enterprise, it's AI. Think of every professional function having an assistant or a copilot available to help you do your job much more effectively, whether you're a financial analyst, an underwriter, an adjuster, or what have you.
It comes back to mindset and skills. One way of framing it is that you're going to need more "bilingual" people. You will need to understand tech, but you also need to understand the business domain. You will need to understand tech, but you may also need to understand the customer or the client. You may need to understand tech and risk. As a result, the need for translation may diminish over time as you develop talent that is multidisciplinary.
The second is that large organizations have learned to manage risk through a lot of checklists and controls. By nature, AI is a lot more iterative; it grows by learning from new data and it evolves. That's going to require a lot of adjusting because most people are more comfortable operating in black and white.
How should leaders think about the non-technical talent they acquire and the development they need to provide to become smarter users and proponents of AI within their respective companies?
It has to be co-creation. The tech leadership has a responsibility to educate, but also to help co-create and reimagine, which is a different role from being just a functional leader; you're a full business partner as well. The rest of the organization is going to be just as responsible for the reimagination and the responsible adoption: protecting your IP, making sure there's minimal bias, and making sure that decisions are supported by AI, not made by AI. Those are really important judgment-based things, but they also require a deeper understanding of technology and how it impacts a function.
The talent model is going to shift as a result. If a lot of decision-making research is supported by AI, you'll have to think through how you apprentice new people coming in and what the best way will be to grow talent in the organization. I would also really focus on adaptability as a skill set. It is a time when the most adaptable people, who know how to renew themselves, grow their skills, and understand the power, the risks, and the relevance of this technology, are going to do phenomenally well.