Shadow AI: The Hidden Danger of Unsanctioned AI Use in the Workplace

IT departments and business leaders alike are growing concerned as employees turn to AI tools without official approval, a phenomenon known as shadow AI.

October 11, 2024

In today's fast-paced digital world, artificial intelligence (AI) has become a powerful tool that workers are keen to adopt. However, this enthusiasm has led to a concerning phenomenon known as "shadow AI". IT departments and business leaders alike are biting their nails as employees increasingly turn to AI tools without official approval or oversight.

What is shadow AI, and why is it a problem?

Shadow AI refers to the use of AI tools and applications by employees within an organisation without the knowledge, approval, or supervision of the IT department or senior management. 

The term "shadow" doesn't necessarily imply malicious intent. In many cases, employees may not think it necessary to mention their use of AI because it has become such a commonplace convenience. For instance, research by Microsoft and LinkedIn found that 75% of global knowledge workers use generative AI (GenAI).1

The issue isn't that workers use AI generally, but that in roughly a quarter of cases (26%) where employees adopt AI themselves, they do so without their direct line manager being aware of it.2 This lack of clear communication about AI use can create significant risks for both the company and the individual.

While the intentions behind shadow AI are often well-meaning, it can make organisations vulnerable to risks related to data privacy and security, compliance issues, intellectual property concerns, biased decision-making, and more. Meanwhile, the individual employee responsible for introducing shadow AI could face consequences impacting their career. 

In 2023, Samsung employees accidentally leaked sensitive information to ChatGPT. The semiconductor division had allowed engineers to check source code with the AI tool, but some employees took their use of ChatGPT further. For instance, one employee shared a meeting recording in an innocent attempt to convert it into notes. This data is now retained by the tool and can be used to continue the model's training. Samsung responded by limiting the company's use of ChatGPT and investigating the people involved in the leak.

What can be done to mitigate the risks of shadow AI?

The key to success lies in creating an environment where AI use is both innovative and responsible. This requires ongoing collaboration between IT departments, leadership, and employees to ensure that AI becomes a powerful asset rather than a hidden liability. 

For the individual employee, the simple answer is to communicate with the organisation about your use of AI and ensure you are using it in line with company policy, if there is one (more on this in a minute).

Meanwhile, organisations should try to foster an environment where employees feel safe to discuss matters such as AI use without fear of judgement or of being seen as dispensable. Currently, 52% of people who use AI at work are reluctant to admit they use it for their most important tasks.1 This is likely because 53% of them worry that using it on important work tasks makes them look replaceable.1 Encouraging open conversation about AI is only worthwhile if there is enough trust to make it happen.

Most crucially, the IT department has a significant role to play in protecting the company against the dangers of shadow AI. Their responsibilities can include:

  • assessing different AI tools for security, compliance, and effectiveness,
  • setting up access controls and implementing measures such as firewalls,
  • providing training resources to educate employees on responsible AI use,
  • and collaborating with leadership to develop and enforce AI policies.
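To make the access-control point concrete, here is a purely illustrative sketch of how an IT team might classify outbound web requests against an approved-tools list. The domain names and the three-way policy labels are hypothetical, not a description of any specific product or of Narus:

```python
from urllib.parse import urlparse

# Hypothetical policy: AI services the organisation has vetted and approved.
APPROVED_AI_DOMAINS = {"internal-llm.example.com", "copilot.example-approved.com"}

# Hypothetical watchlist of known AI service domains, approved or not.
KNOWN_AI_DOMAINS = {"chat.openai.com"} | APPROVED_AI_DOMAINS

def classify_request(url: str) -> str:
    """Label an outbound request as 'approved', 'shadow-ai', or 'unrelated'."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_DOMAINS:
        return "approved"
    if host in KNOWN_AI_DOMAINS:
        # An AI service, but not on the approved list: potential shadow AI.
        return "shadow-ai"
    return "unrelated"

print(classify_request("https://chat.openai.com/c/123"))  # shadow-ai
```

In practice this kind of check would live in a proxy or firewall rather than application code, and a flagged request would trigger a conversation rather than an automatic block, in keeping with the trust-building approach discussed above.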

The importance of AI policies

Too many organisations underestimate the importance of AI policies. 61% of those surveyed by Deloitte in 2023 said that their company did not have any internal guidelines on the use of AI.2 This leaves organisations vulnerable to the dangers of shadow AI by creating a lack of education and clear communication about the risks. 

Without policies to refer to, employees may not know where the company stands on AI use and might not feel confident in starting that conversation themselves. However, they could still turn to AI without informing management or the IT department, thus introducing shadow AI.

Shadow AI represents a significant challenge for modern organisations, but it also signals employees' eagerness to embrace new technologies. By taking a proactive approach to AI governance, providing approved tools, and investing in education, companies can harness the power of AI while mitigating its risks.

Discover how Narus helps businesses secure data, implement policies, and track AI usage.

1 2024 Work Trend Index Annual Report from Microsoft and LinkedIn.

2 Deloitte survey, ‘The rapid emergence of Generative AI in Switzerland,’ conducted in June and July 2023.
