
In this article, we’ll discuss:

  • The finer details behind AI governance.
  • What businesses need to know about secure AI adoption.
  • Cisco’s own approach to responsible AI usage and deployment.

In the space of a year, the share of organisations using AI in at least one part of their business has risen from 78% to 88%. More solutions are now available on the market, and more business leaders understand where AI fits into their processes. From meeting transcriptions and real-time call insights to receptionists and fraud detection, AI has found its place.

Introducing new features and use cases means new regulations must be created, or existing ones updated. Any AI-focused organisation needs to stay compliant to fully enjoy the benefits. There's now an expectation for companies to have greater control over where data goes and who can access it.

If businesses want to collaborate more closely and utilise AI, they must do so in a safe, controlled manner. Opting for a solution that is both AI-driven and compliant is a necessity, and it can make all the difference in keeping communications secure.

Understanding AI governance

As outlined in ‘A pro-innovation approach to AI regulation’, the UK government expects regulators to adhere to 5 cross-sectoral principles:

  • Safety, security and robustness: AI systems need to be secure and resilient throughout their life cycle, while being continually assessed.
  • Appropriate transparency and explainability: Regulators need to help users understand how AI makes decisions and uses data, balanced with protecting proprietary information.
  • Fairness: AI systems shouldn’t undermine the legal rights of individuals or organisations, and should instead work towards fair, equitable market outcomes.
  • Accountability and governance: Measures must be put in place to guarantee effective oversight for the supply and use of AI systems, with clear accountability across the life cycle.
  • Contestability and redress: Users, third parties or actors within the AI life cycle should be able to contest an AI decision or outcome deemed harmful, and access suitable redress.

Regulators, such as the Financial Conduct Authority (FCA) and the Information Commissioner’s Office (ICO), are tasked with applying these principles. As of writing, the Artificial Intelligence (Regulation) Bill remains in the House of Lords, and there is speculation about further bills being introduced.

Alongside existing laws around data, intellectual property, and human rights, regulators have plenty to consider. At the same time, organisations need to develop their own AI frameworks and policies to stay compliant. Data management and long-term, AI-fuelled growth are just two of the factors to weigh when looking to adopt AI solutions safely.

The approach to AI adoption

94% of IT decision-makers view AI adoption as a key part of their strategy. For these business leaders, it’s crucial that security keeps pace with adoption. If it doesn’t, security gets left behind and data becomes vulnerable.

The adoption of AI agents, which work alongside their human counterparts, is expected to increase by 327%. Productivity is predicted to grow by 30%, giving further impetus to the adoption of these agents. But again, security must be top of mind rather than an afterthought.

As with its white paper on AI regulation, the UK government has set out 13 core principles, this time for securing AI systems across their life cycle. This Code of Practice for the Cyber Security of AI, while voluntary, aims to establish a baseline for AI systems and their security considerations. With AI technology advancing rapidly and its usage more widespread, these are welcome developments.

As AI becomes more deeply integrated, it learns more, and what these systems learn becomes an additional risk: data leakage and model manipulation are both causes for concern. In complex AI supply chains built on third-party models and open-source components, much of that risk sits outside an organisation’s direct control.

There are steps organisations can take, such as training staff on these risks. What really makes a difference, however, is when providers and their solutions come with an established framework for responsible AI use.

Cisco’s approach to AI

In unified communications, AI-powered features are now commonplace. Because these systems handle sensitive customer data, they must be resilient and robust in the face of external threats.

Providers like Cisco are leading the way on this front. Cisco’s Responsible AI Framework sets out how the company develops and deploys AI, guided by 6 core principles. Its Integrated AI Security and Safety Framework provides a clear structure for understanding how AI systems fail, and what organisations can do to mitigate the risks.

Gamma’s partnership with Cisco has expanded its existing communications portfolio. This wider portfolio, which includes Webex for Gamma and Cloud Connect for Webex Calling, opens the door for businesses looking to adopt advanced AI-driven products. With enterprise-grade security and cloud-based collaboration, these solutions are built for modern businesses.

The Cisco Control Hub allows admins to manage the features of the Cisco AI Assistant. The assistant can quickly surface insights and automate routine tasks, helping to create streamlined, optimised workflows that boost productivity. A tool this powerful is readily available within these platforms, supporting future-ready collaboration.

Providers who understand AI governance

AI is a tool that needs to be used ethically, especially as it becomes more widely adopted. Any organisation looking to unify its communications and navigate AI compliance needs to work with a provider who understands both.

Solutions like Webex for Gamma are designed with compliant AI usage in mind. Businesses not only unlock the true power of unified, seamless collaboration, but also benefit from AI-powered insights and automation. Whether the goal is scaling up or improving workforce efficiency, these solutions are well placed to adapt to evolving business needs.

Cisco’s approach to AI is one that encourages innovation and growth. Having achieved Cisco Provider Gold status, Gamma is well positioned to leverage Cisco’s AI-driven solutions.