Why HR should assume that every AI tool at work has been compromised

AI is likely used by your employees for research, to summarize transcripts or to develop competitive analyses. They may have the best of intentions to stay up-to-date with technology and improve efficiency, but they are putting your company at risk. Every popular gen AI tool is likely to have been compromised, putting the data and reputation of your organisation at risk.

These risks are further exacerbated due to the increasing globalization of firms. “Unintended data transfers across borders often occur as a result of insufficient oversight. This is especially true when GenAI technology is integrated into existing products, without any clear description or announcement,” explained Joerg Fritsch VP analyst, Gartner.

Organisations notice changes in the content created by GenAI-using employees. These tools are suitable for business applications but can pose a security risk if sensitive information is sent to AI tools or APIs located in unidentified locations.

Avoiding compliance landmines

Employees entering proprietary data from companies or customers into AI tools for consumers is the core problem. This is done outside corporate supervision. AI tools often retain user inputs for as long as possible, and use the data to improve their learning models. This dynamic may make the tools “better” in terms of their outputs but it also creates the opportunity for bad actors and regulatory compliance issues.

Imagine a hospital where the staff might include:

  • Enter protected health data (PHI), such as a patient’s name, medical records, or other information, into AI public tools.
  • AI can be used to create patient communication or instructions for care tailored to the individual’s needs using personal data.
  • Uploading lab results or images into an AI tool to analyze or provide a second view could expose patient data.

Employees will violate protocol if they use ChatGPT, or any other AI tool. It is not necessary for a breach to have occurred to cause them to be out of compliance.

Understanding common AI security problems

There are several security issues that occur with AI tools used in the workplace. The most common issue is when API credentials have been embedded directly in the front-end code of a website. It’s like taping “123password”, on a sticky-note, to your monitor so that anyone can see it. Someone who finds this code could easily hack into the system.

Microsoft 365 Copilot is another provider that has experienced this failure. Glitches caused cross-content context leaks. A software bug led to confidential information being accidentally accessed by another user.

Public AI tools expose intellectual property. Samsung, for example, banned ChatGPT when some engineers uploaded the source code in order to fix bugs. This shows the potential of AI to solve complex problems, but employees tend to see these tools as safe vaults. They need to be better informed about the risks. When firms don’t implement AI integration policies or use secure tools this can lead breaches, compliance issues, and PR nightmares.

Overcoming misunderstandings

The majority of executives believe that a well-known AI, like OpenAI, that receives lots of media attention is safe, and that it adheres to data storage and security guidelines that are in the best interests for users. Reality is much more alarming. HR executives and leaders often underestimate how many ways there are to compromise company data or have it breached.

AI tools, for example, often adhere to data compliance standards such as SOC 2 and ISO. Leadership teams may believe that certifications are bulletproof, but there are other threats. Some of these risks are people fooling AI tools with access tokens or prompt injections. Others include AI platforms accidentally exposing data from other users. Security compliance checklists do not address these risks because they often go unnoticed.

CHROs and enterprise leaders must start to treat shadow AI use, or openly acknowledged usage, as just as dangerous as phishing scams or non-compliance of password management policies. Not only should they select workplace-focused AI tools but also track breaches and educate themselves about the risks associated with public AI tools. They remain vulnerable if they don’t.

Take steps forward

HR teams can immediately take proactive steps to protect their employees and companies from AI tools that are being used in the public domain. They include:

  • Creating a “AI Acceptable Use Policy” that clearly outlines the expectations and guidelines, without any ambiguity. HR can reinforce the policies during quarterly training and onboarding sessions.
  • Approve secure AI tools with audit logging and single sign-on controls, while blocking unapproved services.
  • External auditors can be brought in to look for AI tool issues, such as the use of prompt injections (when someone tricks an AI to do something it shouldn’t) or token misuses that allow someone to access private information.
  • Data labeling is a must in any AI tool interaction. Each prompt should have a note stating if the output was confidential and meant for internal use only or if public viewing was allowed.
  • Create and enforce fair and strict procedures to deal with employees who do not follow the guidelines.

A zero-trust approach

All these efforts must also be accompanied by a zero-trust attitude. In IT, this means that you assume any tool is compromised or a compromise will occur soon.

Don’t give AI tools broad access to a system. Instead, only grant them temporary and minimal access. It is also important to instruct your IT staff on how to monitor employee logins and ensure that sensitive data processing using AI tools remains within the device or controlled environment.

HR leaders play an important role in steering AI adoption to secure and responsible platforms. It is important to keep AI data under the control of the organization, whether on-premises or in a cloud. This will protect the brand reputation and customer trust.

It is obvious that HR and C-suite executives cannot ban AI platforms. They should instead choose AI tools that are easier to use and more powerful than alternatives available for consumers. This approach encourages adoption by focusing on usefulness rather than stifling innovations through policy.

Don’t Stop Here

More To Explore

💬 Contatta un nostro operatore
1
Scan the code