
It always seems to come back to AI, doesn’t it?

65% of UK workers are happy to use AI in their daily work, but that only tells half the story. When the public is asked to describe their feelings about AI in a single word, the most common responses are “scary”, “worried”, and “unsure”. The age of AI is here, but are we quite ready for it?

We are, but it all comes down to how that technology is used, and the pace of adoption. Companies like Cisco are working hard to maximise AI’s value and relate that to core business values around collaboration and trust. The potential behind AI is vast, but it always needs to come back to that overarching idea of trust.

It’s exactly what Alex Ayers, Sales Director at Gamma, spoke about with Reed’s Henry Lee and Cisco’s Hendrik Blokhuis at one of our off-stage sessions at GX Summit 2025. The trio explored the pace of adoption itself, alongside aligning strategic intent to AI investment and being aware of ‘shadow AI’. Their conversation shapes a compelling narrative on how AI is reshaping trust, collaboration, and digital workplaces themselves.

“Agility is the most important part”

Hendrik, as Director of Solutions Engineering & CTO EMEA Partners at Cisco, knows the ins and outs of technological adoption. For him, it’s important for businesses to “take a step back” when choosing which race to run and thinking about strategic intent.

It’s always important to remember the people and how they’ll be educated. The processes, governance and ethics, security and trust – all need to be considered when decisions like this are being made. Hendrik reminds us, when it comes to that pace, “don’t take it easy, but take it [seriously].”

For Henry, that pace is being dictated by the industry. His role as Head of Digital Workplace at Reed gives that oversight on AI adoption and implementing SaaS platforms. Various business areas are “running off with AI in those platforms”, and while it does add value, does it align with strategic intent?

Vendors want to deliver AI while providing value and “[seeing] what works for customers.” It can be difficult to manage that process.

It’s why Cisco took their own step back to design an “ethical, responsible AI framework.” AI, Hendrik argues, should be treated like an “operating model across the silos” rather than a tool, as that’s where the value is. There’s no joy in collecting all the AI tools and hoping magic will happen; remember the strategic intent to then see the benefits.

“We can do fabulous AI stuff”

Don’t underestimate the importance of this comment. What Hendrik means is that, by putting a GPU in their Webex boards, there’s a foundation to build AI capabilities on. Cisco have been laying that groundwork for five, maybe ten years, and now the benefits are being seen.

Facial recognition and in-room people counting are just one strand. AI is being placed in security, as “cyber security can’t be done at human scale”, and in the network to enable predictive networking capabilities. When managing a wide area network, predictions can be made around whether a link will soon be overloaded, allowing fast decision-making.

Again, the ethical framework, alongside security, trust and governance, has been built.

Cisco have also made a big effort to train employees around responsible AI usage. Simultaneously, “it’s about collapsing silos” and addressing a lack of alignment between various business processes. AI needs a strong network to function, which presents a very interesting business opportunity for those that manage their own network.

“Makers, shapers, takers”

That’s how McKinsey grouped those who operate in the GenAI space. Whether they take models off the shelf, customise them, or develop their own, these are the ones making a difference.

For Henry, while some can’t compete with the billions being spent by big players, it’s still worth finding “your value in this race.” That comes down to industry experience, human connections, and that unique dataset. It all loops back round to that strategic approach, while also remembering how the aforementioned factors can be connected to the AI tools being developed.

Hendrik mentions how businesses can be a shaper, taker and maker at the same time. Some can build an agent specifically for their business, which can be taken to suppliers and customers. Suddenly, there’s a “unique competitive advantage.”

But are there blurred lines between making and shaping? Alex proposes that, if businesses “reconstruct something from some basic ingredients”, then that’s where the definitions mix. Takers can be easy to identify, but makers and shapers can be hard to pin down.

“This bring-your-own AI problem”

AI is “coming in the front door… and the back door,” and users may be tempted to just “come and get something themselves.” It relates to the shadow IT problem, and when AI is thrown in, that issue is multiplied. The stakes, in this case, are too high.

Henry admits that “[oversharing]”, while beneficial for GenAI’s own development, is both easy and costly. Providing more information generates better results, which can lead to greater business outcomes. However, it’s unclear whether “we’ll ever control this to a degree,” representing a fundamental shift in shadow IT and data loss risks.

For Hendrik, “we shouldn’t fight it… we should give them a better option.” Users shouldn’t be blamed for an unmet demand, which is why Cisco developed BridgeIT. Proper training around responsibility is all part of that “better alternative” to shadow IT.

However, what might be amazing for one business area with respect to GenAI may not translate to another. Henry reminds us that SMEs can’t spend vast amounts of money on various AI models for different areas, leading back to that “strategic intent.” There needs to be a “viable, acceptable alternative”, one that’s also cognisant of the fact that not all user needs will be met.

“The embedded AI challenge”

The domain-specific nature of AI is beginning to emerge, so will the big SaaS companies give us everything we need? Will those larger GenAI models be pushed across the employee base, or will it be determined by verticals and domains? If businesses aren’t consciously thinking about what the strong use case is, then it’s “like… taking a sledgehammer to a nut.”

Henry proposes that there are domain-specific models and services, but there’s also “an element of marketing.” SaaS platforms could be tuning their services, making one consider that it’s just prompt engineering that can be done “in a more general model.” As businesses move to the cloud, SaaS platforms are everywhere, and it can be difficult to work out which models add real value.

Embedded AI, Hendrik points out, is very similar to the early stages of Internet adoption. Its usage perplexed people, and it was hard for companies to define what it could specifically be used for. For everyone, it’s always key to remember the strategic intent and start with that “ethical responsible AI framework.”

It feeds into that trust model, solidifies governance and trust, and safeguards against any nasty surprises.

“Two sides of the same coin”

Let’s converge two challenges – the “augmented learning trap… [and] the experience problem.” We want that win-win situation of bringing the technology and the human together, yet there are “hard yards” to cover. GenAI can’t be prompted to give that experience immediately; rather, “it’s a learned experience.”

Hendrik advises that, while humans “don’t want to be pushed away by technology,” sometimes all that’s left is for humans to “ask the right questions.” We can use our own humanity to shape how AI responds, especially as agentic AI grows exponentially. The combination of quantum networking, quantum computing and the brain-computer interface can leave humans thinking “okay, what’s our position?”

“Scary and great at the same time,” as Alex describes it.

Henry relates it back to one-click apply in recruitment, and how recruiters sought more high-quality candidates. The ability to easily apply for a job meant there were more candidates, but not necessarily desirable ones. Suddenly, they’d need to filter down and verify which ones could do the job.

But once you throw AI on top of that, every candidate has a finely tuned CV and application. How can businesses differentiate those candidates and understand who’s good and who isn’t? AI is pitting itself against AI, and “who knows where it’s going to end up.”

“Trust is massive in modern business”

When building the right strategy around AI, trust is critical. It’s the “new business currency”, according to Hendrik, and integral to any trustworthy relationship. Customers and partners alike want to work alongside someone who instils confidence.

Building that trust model involves not just general organisation protection but also safeguarding customer data and the supply chain. “Trust can be a very vague term,” so businesses need to “make it real” through that trust model. Security, transparency, privacy policies – these layers are “super important in the world of today.”

Henry, viewing recruitment as a “trust-based process”, is in full agreement. Validating identity may lead to an increase in face-to-face activity, especially in interviews. It testifies to the importance of trust for us humans as we move away from video interviews.

“A weave and an intertwine”

Alex states that, as technologists, stacks and hierarchies aren’t a new concept. The models they’ve learned over the years are now moving towards a different approach for success. It’s “founded and intertwined” with conscious technological adoption, driving powerful outcomes and true business value.

All of that is woven between people and processes, customers and AI – even vendors and organisations like Gamma.

If there are any key takeaways Henry and Hendrik want to share, those would be:

  • Maintain a healthy tension between those who are “conservative” and those who want to “try things”.
  • Remember the strategic intent before designing that ethics framework; create it as an operating model, rather than a tool.

Insights from these kinds of experts make an event like GX Summit special. A technological powerhouse like AI can’t just be bolted on – it needs to be woven in after considering the “why”. If not, people will start asking questions around where this AI journey is going.

AI adoption is a marathon, not a sprint. Take a breath and keep those ultimate goals in mind.