From AI excitement to alignment: Making digital adoption work
Terryel Hu, PhD
Process Excellence Network
28 January 2025, Featured on Thinkers360 Weekly Digest #2
The AI enthusiasts keep sending businesses the same warning: if you want to stay competitive, invest in the latest AI capabilities. Nearly every company has added some wording about AI to its roadmap, even if it doesn’t yet have the technology. All this messaging around “adapt or die” may leave you wondering: how can they be so sure?
The reality with capabilities is that their value is only realized after acquisition. We can’t predict their worth in advance. We have to roll them out to the rest of the company before we can see whether they work. So let’s keep some perspective amid the hype. I’m sure your AI assistant would advise you not to abandon common sense.
AI enthusiasts often fail to distinguish between existing solutions and those still emerging. The tech industry embraces uncertainty, but where did all that uncertainty go when it comes to AI projects? The hype around AI has shifted from “might happen” to “will happen”. Are these people experts or futurists? Are they offering expertise, or speculation about what might happen? Until we actually implement AI capabilities, we don’t truly know which ones are valuable.
The buzzwords surrounding strategy — agile, learning, performance — created a shopping cart of capabilities in the eyes of companies. However, any investment in capabilities requires monitoring and reflection. Consider learning capabilities: If a group invests in learning, how do they know they’ve truly learned something? More importantly, how do they ensure it’s the right thing to learn?
Managers often focus on learning “about” or “how to do” something but rarely inquire beyond that. Take Apple as an example. Since launching the iPhone in 2007, the company has released new models annually, driven by intense competition from Samsung, Google, Huawei and many more. The conclusion many managers draw is: if Apple can release a market-leading product every year, then so can we.
Now, imagine a car manufacturer wanting to transform its production capabilities to match those of a technology company like Apple. It aims to release a new model every year instead of every 3–4 years. Is it possible? Certainly. During World War II, Ford, General Motors, Chrysler, and Willys all retooled and ramped up production for the war effort. It can be done, but should it be done? A typical car can lose over 80% of its value within the first year of ownership. Even electric vehicles lose 50% of their value after a year. Acquiring the capability to produce a new car model every year doesn’t overcome the depreciation issue. It risks developing a capability with limited long-term benefit.
So why do entrepreneurs love talking about the future while overlooking the present? Don’t product roadmaps start from today? If you want a brighter future, don’t you need to start working on your goals now?
Visit your IT department. Every Fortune 500 company has one. Banks, for instance, rely on legacy systems that are known to be outdated, costly and inefficient. The fact is, they still keep the lights on. We could replace them and accept the security, operational and reputational risks during the transition. After all, the world did survive the 2024 CrowdStrike outage, which impacted 25% of Fortune 500 companies and caused an estimated $5.4 billion in losses.
Most of us don’t want that. The good news is that acquiring a new capability doesn’t mean throwing out the baby with the bathwater. Even the most innovative products combine old and new capabilities. We just don’t notice, because we use them so much. Microsoft Office, email and conference calls have been with us for decades. Their core functions remain the same, even as they have been modified and improved over time.
Adopt a “buy now, ask later — and keep asking” mentality. The value of a capability often becomes clear only after acquisition. Clinging to older capabilities could lead to irrelevance, but abandoning existing capabilities for new ones might leave you worse off.
Distinguish between what can be done and what should be done. Before investing in AI capabilities, ask, “What does this look like in practice?” Don’t let marketing or your IT teams bypass this critical question.
Balance speculation and planning. AI capabilities are often sold as a “must-have” without being grounded in present reality. Implementation should be anchored in where you are today. You may be excited by your dream of joining the NBA. At some point, though, you need to get excited about practice too. Kobe Bryant would have wanted you to practice.
Embrace both the old and the new. Digital transformation is not about replacing everything — it’s about finding a balance. For example, the automation of assembly lines in the 1940s and Japanese robotics in the 1980s didn’t replace existing manufacturing practices overnight. Capabilities evolved gradually over time. Ask yourself whether new capabilities change the core solution.
The explosion of AI tools has introduced businesses to new use cases and exciting growth opportunities. It has opened up a new way of thinking: “This hasn’t happened yet, but it’s a possibility.” We should celebrate this optimism, but we should also consider the present. If you’re excited about the future, get excited about what you can do today.
Terryel Hu, PhD
5 March 2025, Republished from LinkedIn
While today’s AI systems have capabilities in language processing, problem-solving and even creativity, they still operate within a constrained context. One of the most famous contests between humans and machines took place in 1997, when IBM’s Deep Blue defeated Garry Kasparov in their chess rematch. It happened in a context where options were restricted and probabilities could be calculated. Leaving aside games and tests, can AI defeat an expert in the real world?
Superintelligence is supposedly a higher form of intelligence. Yet researchers haven’t agreed on what constitutes intelligence in the first place. Some subscribe to theories of multiple intelligences, while others prefer the fluid versus crystallized distinction. The leap to superintelligence goes above and beyond these conventional definitions. It implies AI systems that are capable of independent reasoning, decision-making and self-improvement beyond the typical human.
AI is thrown around like a fancy word. However, at its core, it’s a knowledge base that can store, retrieve and process facts faster than any human can. It can outperform an expert when focused on “fact-finding”, or propositional knowledge. Here are some tasks where AI has proved to be useful:
Language Translation
Fraud Detection
Image Recognition
Medical Diagnosis
In a recent study, researchers found that radiologists using AI-assisted tools improved breast cancer detection by 17.6%. GPT-4 scored 76% on the bar exam, beating the average human test-taker by 7%. A more consistent AI tool is Microsoft’s speech recognition software, which can transcribe speech in real time with an error rate of less than 1%.
When Stanford University released its 2024 AI Index Report, it found that AI went from being mentioned on 266 earnings calls in 2022 to 394 earnings calls in 2023. In other words, it was mentioned by nearly 80% of all Fortune 500 companies.
Although AI can perform tasks ‘better and quicker’ across many activities, it can hardly be described as superintelligence. In the real world, humans don’t move like chess pieces. We don’t calculate decisions based on probabilities or game theory. We use analytical thinking and intuition together.
If AI were an employee, how should we assess its performance? Let’s see how well it begins the day. On a typical day, you log onto your laptop, sign in to the VPN and verify your identity. More complex tasks might include making business calls, producing forecasts and securing deals. AI is intelligent enough to recognize these tasks, but not competent enough to execute them without our guidance. That’s where the issue lies. It cannot explain its reasoning or be held responsible for what it does, which leads to trust and performance issues. We’d rather not be too harsh and call it incompetent.
We might say that AI is impressive but still misguided. AI may surpass human intelligence in the traditional sense, yet it comes across as functionally illiterate. Others would argue that AI’s current limitations, such as reasoning errors, hallucinations and data issues, suggest that true superintelligence is further away than we think.
Rather than dwelling on intelligence, it’s time to figure out how to make AI more competent. It’s time to move forward in the real world.