According to a new study, shadow adoption of AI (where employees use AI tools such as ChatGPT but do not disclose it) can benefit employees' careers while creating trust and accountability problems for their firms.
The study, conducted by Professor David Restrepo Amariles of HEC Paris Business School, examined the challenges of adopting AI tools in consulting firms. The research found that managers rated content created with AI assistance more favorably, but when employees revealed that they had used AI, their efforts were often undervalued.
Analysts who hid their AI use tended to receive more positive evaluations, raising concerns about fairness, oversight and transparency.
In addition, managers found it hard to tell when AI tools had been used unless they were explicitly informed: even when AI use was not disclosed, 44 percent suspected AI involvement. This trust gap creates a misalignment of accountability. Employees benefit from the shadow adoption of AI, while managers misjudge how much effort their work actually involved.
AI policy and oversight are needed
To address these issues, the research suggests that companies create clear AI policies. The report recommends mandatory AI disclosure, a framework for sharing risks between managers and staff, and mechanisms for monitoring AI use. The findings show that structured policies are needed to ensure fair evaluations and maintain trust between management and employees.
Professor Restrepo said, “Our research shows that AI adoption depends not only on technology capabilities but also managerial experience and well-structured policy frameworks.” Successful integration of tools like ChatGPT requires not only transparency but also fair recognition of human effort and balanced incentives.
Risks of AI data exposure at work
In the absence of AI policies, using AI tools without disclosing their use is also a risk. Jared Siddle, VP of Risk &amp; Compliance at risk management company Protecht, advises employees not to enter confidential data into AI tools unless the tool has been approved by their organization’s risk management department.
His rule of thumb: if you wouldn’t publish it, don’t put it in an AI tool. AI tools do not have perfect memories, but they can process and store data for use in training or moderation. “If an AI platform was compromised or misused by cybercriminals, this data could be an easy target,” he said.
A study by TELUS Digital found that 57 percent of respondents admitted to entering high-risk information into publicly available AI assistants.
“AI security training is not optional. It’s vital. AI is becoming an everyday tool for many workers, but without the right guidance, a simple query can become a costly data leak,” Siddle said.
AI governance: HR and risk management implications
As AI becomes more deeply integrated into workplace operations, HR and risk management must play a proactive role in ensuring responsible AI use. A lack of training and policies can lead to unfair performance evaluations and security breaches.
Siddle warns that human error is responsible for 74 percent of all cybersecurity breaches, often because people are not aware of the risks. He urges workers to be careful when using AI tools.
“Confidential information doesn’t belong in chatbots.” Rather than blindly trusting AI, check the terms of service and stick with approved AI tools. AI is not a toy, but a workplace tool. “Treat it as you would any other software which interacts with sensitive information,” he concluded.