Caroline Robinson, Commercial Real Estate Development Manager says:
We live in the age of artificial intelligence (AI), and avoiding it at work is now nearly impossible. According to the latest research from PwC, almost 90% of the top 100 UK law firms implemented or trialled generative AI tools in 2024, up from 55% in 2023.
The direction of travel is clear, especially for those of us in the legal sector where adoption has been rapid: AI is now a fact of modern working life.
I have worked with legal professionals for over 15 years and have seen first-hand how AI benefits lawyers by improving their accuracy, cost-efficiency and work-life balance. But I have also seen how others have missed out on these advantages because of AI myths that have fostered scepticism and fear.
In today’s business climate, labouring under limiting beliefs about AI is both a personal and an organisational risk. The flood of AI content on social media platforms like LinkedIn and X does not make things easy. But understanding AI’s true capabilities, keeping informed of its real risks, and staying abreast of how peers and competitors are (or aren’t) using it is now an essential skill for any legal professional working in a forward-thinking firm.
Here are the five misconceptions I encounter most often when speaking to legal professionals about AI, the ones that paralyse or disengage them.
Consider whether you have been labouring under any of these yourself, and whether reconsidering your point of view could open up one of the most significant upskilling opportunities of a generation.
“If I use AI, it will eventually replace me”
People will always be the heroes of the legal profession. Where trust and client care are the foundations of the service being provided, the personal capabilities of a lawyer will be essential to the business.
Empathy, relationship-building and the ability to navigate complex human dynamics are skills that AI simply cannot replicate in this profession.
Using AI as a co-pilot in your work to reduce the rate of human error, speed up timelines for delivering projects, and increase your overall efficiency simply serves to increase your own capital as a professional.
Furthermore, lawyers who embrace AI tools effectively will likely find themselves more valuable in the marketplace, as they can deliver higher-quality work more consistently while maintaining the crucial human elements that clients truly value.
“AI just generates data dumps”
The first thing to unpick here is that not all AI models are generative. ChatGPT, one of the most popular AI tools right now, does generate content from text prompts, but that does not mean every AI that produces text-based output is creating new content.
AI can be trained to extract and organise data, analysing it without changing any of the facts fed into it. AI is a fantastic tactical tool for data analysis, crunching the numbers of large volumes of information with great levels of speed and accuracy.
In fact, humans are more likely to make mistakes during extensive due diligence work, which can be repetitive in ways that increase the likelihood of error; AI is designed to avoid this pitfall. That means there are sophisticated insights to be gained from conducting analysis via AI, unlocking new levels of understanding and advisory capability.
“AI breaches confidentiality”
Understandably, though, AI’s knack for data analysis raises questions about confidentiality.
It’s understandable to have concerns about data privacy when using AI tools, but it’s important to distinguish between different types of AI models. ChatGPT, for example, operates as an open-loop AI model: user inputs may be fed back into the system to train and improve future versions of the model. This rightly raises questions about data privacy and confidentiality.
However, not all AI models work this way. There are also closed-loop AI models, which operate within a fixed dataset and do not feed user data back into the system. This ensures a higher level of data security and privacy.
For instance, a model like REI, which we developed at Search Acumen, uses a closed-loop system and focuses solely on local authority data, not client information.
So, while it’s true that some AI models might raise privacy concerns, others are specifically designed to protect data confidentiality. It’s all about understanding the type of AI you’re dealing with and how it handles data.
“AI will prevent junior colleagues from becoming brilliant lawyers”
This concern often stems from the fear that AI might remove the need for junior colleagues to engage in complex thinking and learn the nuances of high-value work.
In reality, AI can actually enhance the training process for junior colleagues by creating sophisticated training exercises.
For example, an AI tool can generate the final result of a piece of work, and junior colleagues can then work backwards to understand how the AI arrived at that conclusion. This method helps them develop critical thinking skills and a deeper understanding of their profession.
Moreover, AI doesn’t need access to the most confidential data to be valuable. It can be used in a controlled environment as a training tool without compromising sensitive information. The goal is to produce AI-literate professionals who can perform legal work at the highest standards, combining their expertise with advanced technological skills.
“There’s no way to know that the AI is accurate”
This is a valid point to consider, but there are effective ways to ensure AI tools are reliable. The key is to approach AI adoption methodically and with appropriate safeguards, rather than viewing it as an all-or-nothing proposition.
One key method is through training exercises. These are invaluable, not just for upskilling and developing professional capital in a controlled environment, but also for testing the AI tools themselves. This hands-on approach helps verify the accuracy of the AI, while simultaneously building confidence and competence among professionals who might be hesitant about embracing new technologies.
It’s important to remember that AI shouldn’t be adopted with blind faith. Human oversight is crucial. People should act as co-pilots, testing and validating AI outputs, which gradually builds trust between them and their digital tools.
This relationship is essential for fully harnessing AI’s potential. Just as a pilot wouldn’t rely solely on autopilot without monitoring the instruments, lawyers should view AI as a sophisticated tool that enhances their expertise rather than replaces their judgment.
The most successful implementations of AI in the legal sector are those where technology and human expertise work in harmony, each complementing the other’s strengths while compensating for their respective limitations.
Bridging the gap
Many of these misconceptions stem from a lack of clear communication and education about what AI can and cannot do.
While healthy scepticism and careful consideration of AI’s limitations are vital, allowing misconceptions to prevent engagement with these tools risks being left behind in an increasingly competitive legal landscape.
The most successful lawyers will be those who strike the right balance: leveraging AI to enhance their capabilities while maintaining the human judgment, creativity, and relationship-building skills that remain irreplaceable.
By addressing these common myths head-on and approaching AI with an open yet discerning mind, legal professionals can harness its potential to not just transform their working practices, but to elevate the quality and impact of their contributions.
The future belongs not to those who resist change, but to those who thoughtfully embrace it.