Understanding the Legal Risks of Relying on AI for Decision-Making in SMEs
- Stan Hebborn
- Nov 18, 2025
- 5 min read
Updated: Jan 8
Small and medium-sized enterprises (SMEs) are under constant pressure. Costs keep rising, staff are stretched, and there are never enough hours in the day. Against that backdrop, shortcuts can look sensible, even necessary.
One shortcut that has become increasingly popular is the use of artificial intelligence (AI) to speed things up and keep costs down. Used carefully, that can work. Used blindly, it can cause real damage. We've seen that firsthand. Even those at the very top of the AI development world are warning against over-reliance. When even the people building the technology urge caution, business owners should listen!
AI is a tool, not a decision-maker
This is not an argument against AI. It is an argument for realism. AI should just be treated as a tool, not a decision-maker, and certainly not as a substitute for skilled, professional judgement. The legal and financial risks of getting this wrong are far greater than most SMEs appreciate.
AI can look like a silver bullet. It produces answers quickly, with confidence, and often at little or no upfront cost. What it doesn't do is understand your business, your sector, or the legal context you operate in. It predicts text based on patterns. That is all.
At Hebborn Consultancy, we regularly deal with the consequences of AI being used where it shouldn't have been. Businesses often turn to AI for policies, contracts, DPIAs, retention schedules, or other legal guidance because it seems quicker and cheaper. However, the pattern of results is depressingly familiar:
policies that do not comply with UK GDPR
contracts drafted against the wrong jurisdiction or outdated law, often missing crucial detail
retention schedules, DPIAs, and privacy notices that directly contradict UK requirements
advice that sounds authoritative but collapses the moment it is reviewed properly
By the time these issues are identified, the supposed saving has gone. That's when we usually get the call. The business then pays again, but this time for remediation, re-drafting, explanations to regulators, and sometimes external legal advice. Add management time, stress, and disruption, and the true cost becomes painfully clear.
AI will always give you an answer. That doesn't mean it's the right answer.
AI is only as good as what you put into it
One of the most overlooked risks with AI sits at the keyboard, not the output. Every prompt typed into an AI tool is an act of disclosure. That may include personal data, but it just as often includes commercially sensitive information, internal strategy, pricing models, contract positions, intellectual property, or draft material that reveals how a business thinks and operates. If an AI system is “helping” you write something, it is, by default, getting something from you in return. The question businesses rarely stop to ask is: “what does the AI get out of this?” That’s often where the real risk begins.
Without a clear AI use policy, a supporting DPIA, and firm rules on what staff can and cannot input, organisations risk leaking personal data in breach of the integrity and confidentiality principle under Article 5(1)(f) UK GDPR, as well as disclosing corporate intelligence and IP.
These disclosures are quiet, routine, and easy to miss, but they can amount to unauthorised processing, loss of confidentiality, or a personal data breach long before anyone looks at the finished document.
AI does not challenge assumptions. It doesn't ask awkward questions. It doesn't flag uncertainty. If you input staff data, client information, case details, or internal correspondence into an AI tool, that personal data is being processed immediately. (Where have you covered this in your GDPR policies?)
In many cases, AI also transfers and stores data outside the UK. If there is no lawful basis to do this, no transparency, and no safeguard in place, the compliance failure has already occurred before a single word of output is produced.
That risk is often overlooked entirely.
The legal risks are bigger than most SMEs expect
Using AI without proper oversight is not just a quality issue. It is a liability issue.
Data protection and privacy
AI tools frequently process personal data, directly or indirectly. If an SME relies on AI to draft privacy information or design data handling processes without proper review, it risks non-compliance with UK GDPR and the Data Protection Act 2018. We routinely see AI-generated privacy notices that miss mandatory information, misstate rights, or gloss over data sharing. That is exactly the sort of thing the ICO takes an interest in.
Contracts
AI-generated contracts often contain subtle but serious flaws: the wrong governing law, missing liability provisions, or reliance on legislation that no longer applies. We have seen businesses pulled into disputes because a contract “looked fine” but was legally defective.
Advice and accountability
Some SMEs are now relying on AI for business or legal advice. That is a serious mistake. AI can produce confident-sounding guidance with no proper legal foundation. If you act on it and things go wrong, the liability sits firmly with you.
AI won't attend an ICO investigation or give evidence to one. It won't give evidence at a disciplinary hearing. It won't stand alongside you when you are explaining your decisions to a regulator, a judge, or an employment tribunal.
Under the Data (Use and Access) Act 2025, accountability sits with a named individual (the Accountable Person). There must be a defensible audit trail showing who made the decision, why it was made, and on what basis.
“The AI wrote it” isn't a lawful defence. You will be the one answering the questions.
There is a blunt reality many businesses overlook. If a lawyer or GDPR consultant gives negligent advice, there is at least a route to financial redress. They owe a professional duty, they can be challenged, and they usually carry professional indemnity insurance.
AI offers none of this. You cannot sue an algorithm.
AI has no duty of care, no insurance backing its output, and provides no financial recompense when it gets things wrong. If an AI-generated document leads to regulatory action, litigation, or loss, the organisation that relied on it carries the financial exposure.
Under the Data (Use and Access) Act 2025, the Accountable Person responsible for the decision may also be identified in enforcement action and held personally accountable for governance failures.
Using AI without creating problems
AI does have a place, provided it stays in that place. Sensible use looks like this:
use AI for research or first drafts only
never rely on it for final decisions, compliance positions, or legal documents
ensure anything that matters is reviewed and signed off by a competent and skilled professional
keep a clear human approval record that can be evidenced if challenged
train staff to recognise when AI output needs challenging, not trusting
Efficiency is fine. Abdicating responsibility is not.
What we see in practice
Some of the many issues we have dealt with:
an AI-generated data retention schedule that failed during a regulatory audit, leading to corrective action and cost
contract templates produced by AI that lacked basic protections, resulting in expensive disputes
AI advice on employee data handling that ignored recent legal changes, triggering grievances and legal exposure
None of these saved money in the end. All of them cost far more to fix than doing it properly from the outset.
Final thoughts
SMEs are under genuine pressure to move quickly and control costs. AI offers tempting shortcuts, but blind trust in it is a false economy. AI can help you work faster. It cannot think for you, and it cannot carry your risk.
Use AI as a tool, not a shield. Combine speed with proper oversight. When it matters, have a human who knows what they are doing review the output. Because when things go wrong (and they will, eventually), it won't be AI answering the questions.
It could be you.
If you’re exploring how to use AI safely in your organisation, we’re happy to help you find the right balance between efficiency and compliance.