
Data protection paradox: When the guards cheat

February 26, 2024

If even they do it, it can't be that bad, can it? Privacy and security leaders, the very people who should be acting as data guardians, are exposing information in AI tools on a massive scale. What can we learn from this? First of all, it tells us something about AI.

Surprising figures on AI usage

According to the Cisco Data Privacy Benchmark Study, which surveyed 2,600 privacy and security professionals, 62% of respondents said they have entered information about internal processes into an AI tool, 48% have shared company information that was marked as not for public use, 45% have shared employee names and details, and 38% have shared customer names and details.

An upside-down world

Of all people! Those responsible for privacy and security are ignoring what they usually preach. They are sacrificing the principles that everyone else in the company is supposed to follow, and for what? A quick result, a better answer, getting out of the office sooner? How can we trust them?

The trade-off between risk and benefit seems to come down clearly in favor of benefit, however consciously that calculation is made. Maybe the risk is not so high after all? Were Samsung's developers simply unlucky that their code could be traced back to them, or were they just careless?

Neither outrage nor resignation will help. None of us is free from cognitive bias: "it won't affect me," "no one will notice." If AI tools exert such a pull even on these professionals, the first conclusion is that the tools are genuinely attractive and a real help at work. Many studies now confirm this.

Using AI safely in the enterprise

What does this mean for businesses? The most important step is to enable the safe use of AI tools. ChatGPT Team and Enterprise are safe but expensive. A cheaper alternative is a company chatbot that talks to the language model through its API. The results are as good as with ChatGPT, but companies can be sure that confidential information does not end up in the language model.
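As an illustration, here is a minimal sketch of such a chatbot backend in Python. It assumes the official openai package and an API key in the OPENAI_API_KEY environment variable; the model name and the redaction step are illustrative choices, not something described in this article. Per OpenAI's stated policy, data sent via the API is not used to train its models by default.

    import os
    from openai import OpenAI

    # The client reads OPENAI_API_KEY from the environment.
    client = OpenAI()

    SYSTEM_PROMPT = (
        "You are an internal company assistant. "
        "Do not repeat or store personal data."
    )

    def redact(text: str) -> str:
        """Placeholder for a company-specific filter that strips names,
        customer identifiers, or other confidential details before the
        prompt leaves the company network (illustrative assumption)."""
        return text

    def ask(question: str) -> str:
        # Send the filtered question to the language model via the API.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": redact(question)},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(ask("Summarize our onboarding process in three bullet points."))

The point of this intermediate layer is that the company, not the individual employee, decides what leaves the network: prompts can be filtered, logged, or blocked before they are sent to the provider.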
