
From Riyadh to Abu Dhabi, Tech Monitor and AMD continued to take their executive roundtable tour beyond Europe. The mid-April event – hosted at the Talea restaurant inside the imposing Emirates Palace Mandarin Oriental – was another opportunity to understand the needs and hopes of leading IT professionals in an era of AI innovation. Here are a handful of key takeaways from an evening of insight and ideas.
This is what AI can do…
As has become customary at these Tech Monitor / AMD roundtable discussions, attendees were asked to share how they are deploying AI – predominantly generative AI (GenAI) and machine learning (ML) – for real-world impact. And, as usual, use cases were many and varied.
In the education sector, for example, AI is being applied to create learning dashboards so teachers and students can track progress, with the AI analysing performance to generate feedback and to create bespoke study plans. Elsewhere, another learning provider is demonstrating how an AI-based overhaul of the user experience has had a positive effect on student outcomes and grades.
Among the other use cases cited during our evening in Abu Dhabi, one engineer offered two aerospace examples – one looking to assess a pilot’s eye movements in order to track potential tiredness; another seeking to automate the taxiing journey from runway to gate. The former requires analysing imagery captured at 200 frames per second, just one example of where compute-intensive graphics processing units (GPUs) will be put to work.
…and this is what it can’t (or shouldn’t)
Given that the early conversation was dominated by what AI does well, one attendee decided to turn the topic on its head by asking whether there are “any use cases where AI shouldn’t be applied?” Two other guests offered their thoughts.
One pointed out that AI can’t deliver creativity – or, at least, not creativity beyond what is contained in the data sets underpinning the large language models (LLMs). This recycled output is unlikely to lead to new ideas, he argued.
Another guest said AI cannot (or rather, should not be allowed to) make decisions on its own. There are too many examples of inherent bias in datasets, a lack of quality data, or the tendency for GenAI chatbots to hallucinate meaningless or misleading results. This means, for now at least, AI requires a human in the loop at all times.
AI adoption is a journey, not necessarily a destination
There are distinct stages of AI adoption, noted one attendee, a progression he described (half-jokingly) as “the three stages of grief”. The first is co-pilot, i.e. making best use of the GenAI add-ons that come with off-the-shelf software packages. Next comes AI-native architecture and, finally, full integration. Not surprisingly, the third and final stage is the most complex.

Too many AI initiatives fail the ‘why’ test
Before embarking on any technology project, first you need to address business needs and expectations, said one senior technologist in attendance. This means defining a “proper” problem statement that directly serves a business barrier or opportunity demanding immediate attention. Only then can you apply a “proper solution.” Too many organisations, this attendee said, fail to fully articulate their problem statement when it comes to AI. This may be project management 101, but many organisations are so keen to adopt AI at pace that they forget the basics. And the basics start with “why.”
Barriers persist, but solutions exist, too
Asked to describe some of the barriers preventing full-scale adoption, one attendee echoed an earlier point about the limits of AI as a technology. AI feedback, this attendee said, should not be trusted in all instances. Scepticism was an essential starting point. Many in leadership positions need to be “enlightened” on this point, otherwise they are likely to take AI decision making on trust.
Another limitation of AI, building on the previous point, is that data – “always trained by people, and people have specific ideas” – needs to be approached with caution. In short, there is no such thing as purely objective data. Culture informs the things we say and do. Inevitably, biases creep in. This means data quality varies and decision-making is likely to vary too.
A third challenge many face is how to manage data when the directive is increasingly to keep it within national borders. This runs counter to the aspiration of portability across regions, and through the cloud, that reflected the prevailing mood until recently. Nevertheless, recognising the principle of data sovereignty that now dominates, many of the big hyperscalers are building – or are planning to build – in-country data centres. For those considering hosting AI workloads off-premise, cloud once again becomes tenable.
‘AI innovation in an age of environmental and regulatory volatility’ – a Tech Monitor / AMD executive roundtable discussion – took place on Thursday, 17 April 2025, at the Talea by Emirates Palace, Abu Dhabi.