By Carlos Torales, Cloudflare VP, Head of Latin America

While the world is analyzing and experimenting with the undeniable potential of AI, it is equally important to keep cybersecurity at the forefront of these conversations and implementations.

In today's organizations, business leaders and decision makers can't ignore the impact of generative AI.

Some people have embraced the technology as the start of a new era in the workplace, one in which they will never have to write an email or report again.

For others, it's a glimpse of a new wave of technology set to bring as-yet-unseen benefits across business, from operations to development and regulation.

The initial use of AI, and the potential it has now demonstrated worldwide, has revealed a significant step forward in personal productivity, while also raising concerns about data privacy and security.

Generative AI presents opportunities and security threats 

Using tools such as ChatGPT and other large language models (LLMs) essentially opens the door to shadow IT: devices, software, and services outside the ownership or control of IT and security departments.

While the ways to use AI are only growing, the underlying problem is simple and solvable.

Whether it's an employee experimenting with AI or a company initiative, once proprietary data is exposed to an AI model, there is no way to take it back.

AI holds incredible promise, but without proper guardrails, it poses significant risks for businesses. 

According to a 2023 KPMG survey, executives expect generative AI to have an enormous impact on business, but most say they are unprepared for immediate adoption.

At the top of the list of concerns are cybersecurity (81%) and data privacy (78%).

That's why CISOs and CIOs need to strike a balance between enabling transformative innovation through AI and maintaining compliance with data privacy regulations.

A Zero Trust cybersecurity model tackles these risks

Zero Trust security controls can actually enable businesses to safely and securely use the latest generative AI tools without putting intellectual property and customer data at risk.

The Zero Trust approach to cybersecurity requires strict identity verification for every person and device trying to access resources across the network.

A Zero Trust architecture trusts no user and no device by default, an essential posture for any organization using AI in any capacity.
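
To make the model concrete, here is a minimal sketch, in Python, of the kind of per-request check a Zero Trust gateway performs. All names here (`Request`, `authorize`, the resource labels) are illustrative assumptions for the sketch, not any vendor's actual API.

```python
from dataclasses import dataclass

# Minimal sketch of a Zero Trust access decision: every request is evaluated
# on identity and device posture, and nothing is trusted by default.
# All names are hypothetical and do not reflect any specific vendor's API.

@dataclass
class Request:
    user: str
    mfa_passed: bool      # the user completed strong identity verification
    device_managed: bool  # the device is enrolled and policy-compliant
    resource: str         # the application or data set being accessed

# Explicit grants: a resource is reachable only by users listed for it.
ALLOWED = {
    "ai-sandbox": {"alice"},
    "crm": {"alice", "bob"},
}

def authorize(req: Request) -> bool:
    """Deny by default; allow only if identity, device, and policy all pass."""
    if not req.mfa_passed:
        return False  # identity not strongly verified
    if not req.device_managed:
        return False  # unmanaged device gets no implicit trust
    return req.user in ALLOWED.get(req.resource, set())

print(authorize(Request("alice", True, True, "ai-sandbox")))  # True
print(authorize(Request("bob", True, True, "ai-sandbox")))    # False: no grant
print(authorize(Request("alice", True, False, "crm")))        # False: device fails
```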

How does your organization use AI? How many employees are experimenting with AI services on their own? What are they using AI for? 

Some organizations may find that adopting a Data Loss Prevention (DLP) service provides a safeguard that closes the human gap in how employees share data.

More granular rules can even allow select users to experiment with projects containing sensitive data, while applying stronger limits to the majority of teams and employees, as sketched below.
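
As a rough illustration, and not a reference to any specific DLP product, tiered rules of that kind might look as follows; the detection patterns, the allowlist, and the `approved-ai-sandbox` destination are all assumptions made for the sketch.

```python
import re

# Hypothetical sketch of tiered DLP rules for AI tools: a small allowlist of
# users may send data matching sensitive patterns to an approved AI sandbox,
# while everyone else is blocked. Patterns and tiers are illustrative only.

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

# Select users cleared to experiment with sensitive data in a sandboxed AI tool.
SANDBOX_ALLOWLIST = {"alice"}

def dlp_decision(user: str, text: str, destination: str) -> str:
    """Return 'allow' or 'block' for an outbound upload to an AI service."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
    if not hits:
        return "allow"  # no sensitive data detected
    if user in SANDBOX_ALLOWLIST and destination == "approved-ai-sandbox":
        return "allow"  # granular exception for vetted experimentation
    return "block"      # default: sensitive data never leaves the organization

print(dlp_decision("bob", "card: 4111 1111 1111 1111", "chatgpt"))              # block
print(dlp_decision("alice", "card: 4111 1111 1111 1111", "approved-ai-sandbox"))  # allow
```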

Put simply, if AI is in use, there is an inherent need to tighten security across protocols and processes.

Regardless of an organization's current cybersecurity posture, there are steps it can take, or return to, to secure its systems and data.

With Zero Trust in place, the framework will secure and verify every door while allowing for the innovation that AI can bring.