A new, free whitepaper with expert advice on how to adopt GenAI safely and securely.
Generative AI (GenAI) is rapidly reinventing the entire operating model for how we get work done. But execution isn’t living up to ambition — at least, not yet.
2023 data from Gartner found that 49% of organisations are struggling to demonstrate the value of AI.1 90% of GenAI pilots fail to make it into production.2 And while 83% of business leaders claim they know how to use GenAI safely to protect data, only 29% of their workforce feel the same.3
The hype machine has officially run out of juice. Organisations are now faced with some very real challenges standing in the way of getting long-term value from their GenAI investment:
One thing is clear: we’ve got a lot of work to do. Unlocking GenAI’s full value hinges on a holistic approach that aligns processes, infrastructure, and internal skills.
In this playbook, we’ll guide you through the key components of GenAI success for your organisation. We’ll help you:
Let’s get started.
A few years on from ChatGPT’s explosive entrance into the workplace, organisations are starting to learn that with great power comes great responsibility.
Incidents of AI misuse rose 32.3% between 2022 and 2023.4 Cyberattacks, data security, data privacy, and intellectual property theft are pressing concerns for organisations globally. 63% of organisations rank GenAI inaccuracy as one of their most urgent risks, while almost a quarter report that hallucinations have led to negative consequences.5
Unauthorised employee usage adds further weight to these risks, leading to breached customer and proprietary data, compliance and regulatory challenges, and reputational damage.
And for organisations still on the fence about AI, this might just be enough to make them stay perched there, forever. Because without the right strategy, the risks are big.
But the rewards are bigger.
Organisations’ concerns about GenAI are very real. But here’s the thing: most of the fears shine a light on our practices, not the technology itself. Cybersecurity and data privacy worries can be solved with good technical infrastructure. Employee misuse and inaccuracy can be solved with ongoing education and training.
And the reality is that these are not the biggest risks. The risk of missing out is far, far greater:
“One of the reasons organisations are taking a long time implementing AI is due to fear of making a mistake,” says Josie West, Chief Innovation Officer at AI consultancy FOIL. “They’re waiting for things to become ‘safe’, so they end up doing nothing. But the problem is that new organisations that are moving quickly will undercut them. They’ll be able to offer the same services for half the price, and in half the time. And they’re going to win. If you do nothing, your organisation will be left behind. Not in five or ten years — within the next year.”
Implementing GenAI successfully isn’t just important — it’s how your organisation will maintain its competitive advantage. So how big’s your FOMO right now?
We thought so. In the next section, we’ll share the five foundations of successful GenAI adoption, and some practical tips from our experts on how to pull this off.
As the market becomes flooded with shiny new GenAI tools, organisations are feeling the push to drive implementation at scale. But this is leading to a tech-first approach — and it’s not working.
“Organisations are feeling this sense of huge urgency with AI,” Josie says. “They have a sense that they need a chatbot or a tool, without a sense of why. But to make the right investment, you need to support your priorities with technology, not the other way around.”
Organisations still aren’t operating in a way that is compatible with AI’s pace of change.
“You need to assess your whole organisation to see how ready it is for change,” Josie says. “What size is your organisation? Do your people have the technical capability to maintain the AI? What level of change will each proposed use case bring? What’s your readiness when you look at people, processes, and technology?”
And on that topic, the biggest thing holding organisations up is the maturity of their data and technical infrastructure. A 2023 report jointly authored by Google and Boston Consulting Group found that data maturity is a predictor of business success, with data-mature organisations growing their revenue twice as fast as underperformers.9
GenAI needs data to be most effective. But if your data house isn’t in order, you’re not only going to struggle to reap its full potential, but you might also open your organisation up to some huge security and privacy risks.
2024 data from Harvard Business Review found that while 93% of organisations recognise the importance of a good data strategy in successful AI adoption, only 37% said that their organisation has the right data foundation for GenAI.10
Before you roll out any (more) tools, start with a comprehensive audit of your current data landscape:
Next, focus first on readying the data in the business areas where you feel GenAI will make the biggest impact. Consider what matters most to your organisation — both right now, and for future success.
“Organisations need to understand that implementing AI isn’t a knee-jerk reaction,” Josie says. “They can focus on one area at a time, and scale more slowly. Take a top-down approach and look at how your organisation makes decisions, what your business priorities are, and where you’re failing. Maybe it’s your customer data — so put your investment and focus there.”
In the race to derive business value from AI as soon as possible, the prospect of doing nothing for a year while you tinker with data might sound like a death sentence for the business. This is where it’s worth evaluating whether any single, standalone use cases could deliver value in the meantime.
“You need to balance the risk of inaction versus needing to react,” Josie says. “What can you do now, and will it pay off? You could get a chatbot tomorrow — but that chatbot model could change every month, introducing more risk.”
In short, weigh whether implementing a new tool will ease short-term pains. A tool that helps employees do their work more efficiently with minimal training or data input required? Go for it. A platform that needs your customer data to work effectively? Not so much.
🎯 Key action: Map your current priorities to data infrastructure readiness
As organisations have become savvier about their data security and risk landscape, they’re now faced with a new challenge: making sure employee usage stays compliant and safe.
A 2024 Cisco report found that 61% of organisations control which GenAI tools their workforce can use, while 27% have banned GenAI entirely.11 But wielding an iron fist isn’t the answer — bans only backfire, and there are always loopholes. Maintaining control while maximising your employees’ ability to innovate relies on building policy and governance.
AI governance is a living framework that supports decision-making and best practices across your organisation. Call it your organisation’s moral compass.
“Organisations often try to bolt on a responsible approach to AI later,” explains Sue Turner OBE, Director of AI Governance Limited. “They do the pilot and proof of concept — and then once they’re rolling it out, that’s when they evaluate whether or not it’s the right thing to do.
“Responsible AI must be embedded right from the beginning. Get people excited, but show them real-world examples of risks, security implications, and mistakes using GenAI and other tools. You have to put yourself in a position to make good decisions from the start.”
Making good decisions relies on employees knowing how, why, and where they can implement GenAI, along with best practices for using company data. But governance and policy alone don’t lead to responsible AI usage. Coupling governance with a shift in mindset will normalise AI as something that’s actively encouraged.
“There’s a lot of shame attached to AI right now,” Josie says. “People are able to do their work more quickly and efficiently — but they don’t want to own up to using AI because that feels like cheating. Organisations need to use governance to frame it as a good thing. You have an open book policy on how it’s used. Then you’re bringing people with you, not leaving them behind.”
GenAI governance can sound like a list of “cans” and “can’ts”. And to be clear, our experts say that you don’t need a specific GenAI policy for each tool — but you do need to guide behaviour with principles and examples.
As AI evolves, education alone can’t entirely stamp out the potential for misuse. Neither can a blanket ban. But advancements in AI management tooling can help organisations mitigate risk and protect data while offering employees maximum freedom to use tooling as needed.
“The tension for organisations is that leaders want to equip their people with the tools they need to be most productive, and at the same time stay safe,” says Jon Mort, CTO at Adaptavist. “The easy option might be to ban tools — but the smart option is to provide them safely. Using an AI management tool or LLM wrapper enables transparency, control, and a level of assurance while enabling employees.”
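To make that idea concrete, here’s a minimal sketch (in Python) of what an LLM wrapper typically does: redact obviously sensitive patterns, write an audit log entry, and only then forward the prompt to the model. This is an illustration under our own assumptions, not how Narus or any specific product works; call_model is a placeholder for a real provider SDK, and the redaction patterns are deliberately simplistic.

```python
import datetime
import json
import re

# Illustrative-only patterns treated as sensitive; real tools use far richer detection.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders and report what was found."""
    findings = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, findings

def call_model(prompt: str) -> str:
    # Placeholder: swap in your provider's SDK call here.
    return f"(model response to: {prompt!r})"

def governed_completion(user: str, prompt: str) -> str:
    """Redact, log, then forward, so every request leaves an audit trail."""
    safe_prompt, findings = redact(prompt)
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "redactions": findings,
        "prompt": safe_prompt,
    }
    with open("genai_audit.log", "a") as log:  # one JSON line per request
        log.write(json.dumps(entry) + "\n")
    return call_model(safe_prompt)

print(governed_completion("j.smith", "Summarise this complaint from jane.doe@example.com"))
```

Real AI management tools go much further (model allow-lists, policy-based routing, approval workflows), but the shape is the same: a single governed gateway between employees and the model, which is what gives leaders transparency without banning anything.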
AI governance is often seen as something that falls under the IT department’s remit. But the problem is that AI isn’t just a tool — it’s the entire operating system for the future of work. As such, it’s everyone’s responsibility.
“Organisations want to work fast, and be innovative,” says Sue. “Your governance processes need to match that. Many organisations put the responsibility on experts to [manage] how AI is being used. They discuss use cases at a committee level. Neither of these work with the speed of AI. You’ve got to design your systems to work with your organisation if you want to avoid bottlenecks. That starts with a cross-functional effort.”
Maximising this cross-functional effort relies on organisations decentralising some of their decision-making, and allowing departments and teams greater ownership over how they use AI. For example, your marketing team can make governance its own by aligning company policy with its values on transparency around sharing AI-generated images. Giving teams ownership over these more intricate, nuanced decisions encourages more responsible decision-making.
AI and failure go hand-in-hand. But in organisations, failure is often a synonym for risk — and tight controls to mitigate this risk often mean organisations miss a golden opportunity for experimentation and innovation.
“Learning from failure is a core part of innovative cultures,” Jon says. “Organisations need to promote a mindset that encourages learning through small failures and experimentation — if they don’t, they’ll get left behind. You have to remove the barriers so that employees can use the tool easily and safely. To do that, they need a way of interacting with generative AI that encourages playfulness, allowing them to explore in a way that is safe and as risk-free as possible.”
Organisations can encourage play by:
🎯 Key action: Build your AI governance framework
Research is still emerging on how organisations can best build governance and policy on a technology that’s still a moving target. But a 2022 study on AI governance in organisations found that it hinges on creating guidance around three key topics:12
At a basic level, your AI governance should touch on:
Once you’ve set the guardrails for how and where employees can apply GenAI tooling in their day-to-day workflows, it’s time for the fun bit: identifying the use cases where it’ll make the biggest splash.
But often, this can feel a bit like looking for a needle in a haystack. GenAI’s relative flexibility as a technology means it can be anything to anyone, depending on the nature of their role or tasks — a creative idea generation machine, a code error sleuth, or a chatbot customer service agent.
Getting it wrong here could lead to an added spend on time, budget, and internal resources. But when organisations have well-defined use cases, it pays off — big time.
According to a 2023 study, effective use case scoping aligned with critical business pain points resulted in one airline reducing non-performance-related costs by 25%, and delays by 30%.13 2024 McKinsey data estimates that well-structured AI use cases could impact organisations’ annual revenue by up to 2%, contributing $660 billion in value.14
Anchoring your AI use cases to your business strategy nets you the biggest return on your investment. So how do you figure out where your priorities lie?
“Organisations need to look at how they make decisions, what their company is, and what they value,” says Josie. “What is your current business strategy? What are your future goals? Where are you failing? What are your non-negotiables — the things that won’t be [done by] AI no matter what?
“Ultimately, you have to be able to prove that what you’re doing is a worthwhile investment,” Josie says. “Increasing revenue, reducing costs, improving customer service — that’s the language you need to frame what you’re trying to do.”
Organisations can use a gap analysis here to help identify the business value:
See the example below to help guide you.
When it comes to team and individual adoption, systems thinking can help employees pinpoint where GenAI fits into their daily routines.
“One thing I’ve observed in organisations is that they expect AI to be everything to everyone,” explains Laura Gemmell, Technical Founder at AI and data education startup Taught By Humans. “But when you break tasks down into systems thinking, you start to understand where your biggest opportunities are — and where a human needs to be in the loop. You start to identify where you trust AI to output enough information, and where you don’t.”
🎯 Key action: Identify individual use cases with systems thinking
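As a sketch of what that exercise can produce, here’s a hypothetical decomposition of a single workflow (handling customer complaints) into steps, with a judgement per step on whether GenAI can draft the output and where a human must stay in the loop. The steps and judgements below are invented for illustration, not recommendations.

```python
# Hypothetical decomposition of one workflow: handling customer complaints.
# Each step records whether we'd trust GenAI to draft it, and whether a
# human must review the result. All judgements here are illustrative.
workflow = [
    {"step": "Categorise the complaint",   "ai_draft": True,  "human_review": False},
    {"step": "Summarise customer history", "ai_draft": True,  "human_review": True},
    {"step": "Decide on compensation",     "ai_draft": False, "human_review": True},
    {"step": "Draft the reply",            "ai_draft": True,  "human_review": True},
    {"step": "Send and log the outcome",   "ai_draft": False, "human_review": True},
]

# Candidate GenAI use cases are the steps AI can draft; the guardrails are
# the steps where a human stays in the loop.
use_cases = [s["step"] for s in workflow if s["ai_draft"]]
checkpoints = [s["step"] for s in workflow if s["human_review"]]

print("Candidate GenAI use cases:", use_cases)
print("Human-in-the-loop checkpoints:", checkpoints)
```

Even a simple breakdown like this makes the trust boundaries explicit, which is exactly what systems thinking is meant to surface.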
Successful AI adoption is as much a people problem as it is a process one. The rise of AI in the workplace has already exposed the soft underbelly of our most pressing skills gaps — skills that are critical to future business survival. And demand for these skills is growing, with mentions of AI skills in job adverts rising 3.5x between 2012 and 2023.15
But hiring new employees with AI-ready skills isn’t necessarily the answer for long-term success. Because for generative AI to truly embed across your organisation, you need to make sure that the knowledge of how to use it isn’t just available to a select few.
“Organisations need to see employee education as something for the whole company, rather than recruiting in a few specialists and letting that knowledge [live in a silo],” says Sue. “Your organisation is full of domain experts already. Combining their expertise with training on how to use AI is how you bring them along with you.
“When we talk about generative AI and any type of AI tool, we need to remember that these tools are evolving all the time,” Sue adds. “You need to continuously refresh knowledge and skills, give people examples of how to use the tool, and offer opportunities for people to learn together.”
For training to take hold, your employees need ample opportunities to apply, adopt, and adapt their knowledge to their own context.16 In simple terms, they need to know what’s in it for them, and be able to see its relevance in their day-to-day tasks.
Your employees are at different stages in their relationship with AI, ranging from excitement and curiosity to distrust, and even fear that robots are coming for their jobs. But 88% of GenAI’s primary users come from non-technical backgrounds.17 Starting with the basics will level the playing field, communicate the use case for the tech, and outline the risks.
“In businesses where there are a lot of tech tools, AI can often be seen as just another thing they need to learn how to use,” Laura says. “But education starts by framing understanding around the technology itself: what it’s good at, what its limitations are, how to structure a prompt, what detail to include, and how it reaches its conclusions. If we understand something and know its limitations, we know it’s not scary — we come in with a different mindset that sets the stage for learning new skills.”
If there’s one problem organisations struggle with most in AI adoption, it’s optimising for the end result, rather than for how their people work with AI and one another in an AI-enabled environment.18 That’s a big oversight when it comes to bridging the gap between theory and practice.
“There’s a huge gap between learning and its immediate real-world application,” Laura says. “So we need to reframe the ‘skills’ conversation as a confidence one. Anyone can learn the skills — it’s having the confidence to apply that to your work in a workplace setting.”
Progressive, contextual upskilling is key to this approach.
“You’ve got people in your organisation with great domain knowledge, but [who] spend a lot of time in spreadsheets,” explains Sue. “Step them up from spreadsheets to a business intelligence tool so they can learn how to apply their expertise in a new way. Then, once they’ve got that confidence linking different datasets together in different ways, you need to give them more training on machine learning and data science techniques, so they can get more confidence.”
🎯 Key action: Upskill employees with group training
GenAI high-performers share one thing in common: they religiously measure and track the success of their efforts, adapting as they go.
And as GenAI evolves, continuous measurement is both your safeguard to keeping it aligned to your business strategy, and your opportunity to refine your approach.
But in a brave new AI-enabled world, how do you know what success looks like? Much like it always has: you quantify the business value of AI the same way you’d quantify the business value of anything else. The only difference now is that organisations need to avoid creating a false equivalence between the output of humans and that of machines.
“When we talk about AI delivering a return on investment, we’re going in with a number of assumptions,” Josie says. “One might be that a job that takes a human two days takes AI less time. That already gives you a value to calculate effectiveness: the cost of someone’s salary for one day. What does that look like over the course of a year? How does that multiply across your organisation?
“The metric organisations need to avoid is quantifying an AI tool in terms of the amount of employees they can replace — that’s not what we’re trying to do.”
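To show how that calculation runs end to end, here’s a worked version of Josie’s example with illustrative numbers of our own; the salary cost, task frequency, and headcount below are assumptions, not benchmarks.

```python
# Worked version of the time-saved calculation; all figures are invented.
daily_salary_cost = 300        # assumed fully loaded cost of one employee-day (£)
days_saved_per_task = 1        # the two-day job now takes one day, per Josie's example
tasks_per_employee_year = 20   # assumed frequency of this job per employee
employees_affected = 50        # assumed headcount doing this kind of work

value_per_task = daily_salary_cost * days_saved_per_task              # £300
annual_value_per_employee = value_per_task * tasks_per_employee_year  # £6,000
annual_value_org = annual_value_per_employee * employees_affected     # £300,000

print(f"Across the organisation: £{annual_value_org:,} per year")
```

Set that figure against licence and implementation costs and you have an ROI baseline framed around time freed up, not headcount replaced.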
Don’t get us wrong: productivity, efficiency, revenue, and time saved are all great KPIs to have. But they’re unlikely to define how you measure success on their own. They’re a great north star for giving you a sense of direction, but they don’t give you much specificity on people- or process-related outcomes.
“What do you care about the most? That’s what you need to measure,” Jon says. “A team of engineers we worked with took a more holistic view of their KPIs by measuring how quickly features get to production, their level of quality, and the amount of rework that needed to be done. That doesn’t look at any individual’s or team’s productivity — it measures effectiveness relative to their performance.”
While most organisations are vying for faster, smarter, and better with their GenAI strategy, remember that long-term success isn’t just a numbers game. Success could also be found in qualitative, unquantifiable wins, such as enhanced collaboration, improved creative processes, or, more simply, happier employees.
“Organisations get stuck measuring outputs, when they should be thinking about outcomes,” says Jon. “They forget about these smaller wins. For example, one of the biggest productivity gains with GenAI is that teams have more time to think through problems. Instead of responding to customer complaints, they have more time to build a strategy to drive customer engagement. That’s where the biggest impact is felt.”
🎯 Key action: Create KPIs that quantify AI success
Building a picture of your success with GenAI tooling relies on having KPIs that don’t just measure the broad strokes of success, but give you insight into how it’s impacting your people and processes.
To do this, try splitting your outcomes into three key components:
GenAI is already fundamentally reshaping how we work. But without the proper foundations, organisations will struggle to fully unlock its value and bring their workforce along with them. Investing in essential governance and data infrastructure will prove to be the bedrock of organisations’ GenAI strategy, but building employee confidence and understanding in the tooling will make sure it embeds — and is successful — long-term.
Getting this right depends on organisations homing in on how employees use tooling, and making sure they use it safely — and this is where the right tech can be of help.
Implementing AI management tooling like Narus means organisations can set the guardrails on GenAI usage at scale, giving them peace of mind and their employees everything they need to work at their best.
1 Gartner, ‘Gartner survey finds generative AI is now the most frequently deployed AI solution in organizations’, May 2024.
2 McKinsey, ‘McKinsey’s ecosystem of strategic alliances brings the power of generative AI to clients’, April 2024.
3 Salesforce, ‘61% of workers embrace generative AI, but lack trusted data and security skills’, June 2023.
4 Ray Perrault and others, 2024 Stanford AI Index, Stanford University, April 2024.
5 Alex Singla and others, The state of AI in early 2024: Gen AI adoption spikes and starts to generate value, McKinsey, May 2024.
6 Erik Brynjolfsson, Danielle Li, and Lindsey R Raymond, ‘Generative AI at work’, National Bureau of Economic Research Working Papers, April 2023.
7 Alexia Cambon and others, ‘Early LLM-based tools for enterprise information workers likely provide meaningful boosts to productivity’, Microsoft, December 2023.
8 McKinsey, The economic potential of AI: The next productivity frontier, June 2023.
9 Marc Roman Franke and others, Any company can become a resilient data champion, Google and Boston Consulting Group, April 2023.
10 Thomas H Davenport and Priyanka Tiwari, ‘Is your company’s data ready for generative AI?’, Harvard Business Review, March 2024.
11 Cisco, Data privacy benchmark study, January 2024.
12 Johannes Schneider and others, ‘Artificial intelligence governance for businesses’, Information Systems Management, June 2022.
13 Michael Grebe, Marc Roman Franke, and Armin Heinzl, ‘Artificial intelligence: How leading companies define use cases, scale-up utilization, and realize value’, Informatik Spektrum, September 2023.
14 Alex Singla and others, The state of AI in early 2024: Gen AI adoption spikes and starts to generate value, McKinsey, May 2024.
15 Ray Perrault and others, 2024 Stanford AI Index, Stanford University, April 2024.
16 Jason L Huang and others, ‘A tale of two transfers: Disentangling maximum and typical transfer and their respective predictors’, Journal of Business Psychology, January 2015.
17 Aaron De Smet and others, ‘The human side of generative AI: Creating a path to productivity’, McKinsey, March 2024.
18 Keng-Boon Ooi and others, ‘The potential of generative artificial intelligence across disciplines: Perspectives and future directions’, Journal of Computer Information Systems, October 2023.