The dangers and environmental impact of using AI in your business


by Elaine Burke
06th May 2025

Are you or your employees using generative AI in your business? Are you aware of the pitfalls, precautions and prompt etiquette that should be considered before you share your business’s business or rely on the unreliable? Tech journalist, broadcaster, host of For Tech's Sake and former editor of Silicon Republic, Elaine Burke, takes us through what we should really know about using gen-AI in the workplace.

It’s boom time for generative AI, with billions being poured into the sector and new tools being released at a rapid pace. Tech moves quickly, and gen-AI has exploded in the past two-and-a-half years, but it remains very much in an experimental phase, with early adopters and curious testers serving as its guinea pigs.

There’s no shortage of tools to test, either. There are chatbots, agents and image generators galore, readily available, and offering to do all sorts of odd jobs: from customer service to content creation to coding, and plenty in between. But is this purported jack of all trades a master of none?

One of the biggest problems with gen-AI is ‘hallucinations’. This is when a system generates a response that is completely fabricated but presented as fact. It can be as easy to spot as a seven-fingered hand, or it can be a plausible claim presented as definitive, sometimes even with sources. The latter can be harder to detect, and if gen-AI users are not informed enough to spot the errors, they can slip through.

Reviews have to be thorough, as gen-AI can make up just about anything. Ask Google’s AI Overviews to explain the meaning behind completely made-up idioms such as “you can’t lick a badger twice” and “never throw your poodle at a pig” and the system will, willingly and with confidence, offer a detailed history and rationale behind each invented phrase.

What takes minutes, or even seconds, to generate can take far longer to quality-check than something made by a trusted and trained human. This was demonstrated in an experiment by MIT researchers, which found that ChatGPT could save time on brainstorming and drafting, but that more time was then spent editing its work.

Developers of gen-AI tools are working out ways to mitigate hallucinations. One such technique is to take a model trained for coherent natural language generation and have it check all responses against a trusted dataset – your company guidelines and policies, for example, if it’s a customer service chatbot. There is already precedent for this: customer service representatives typically work off a script, so an AI agent should too. Nonetheless, hallucinations persist, partly because these systems are driven to provide a response even when an adequate one can’t confidently be constructed. If AI had a productivity philosophy, it would be ‘done is better than perfect’.

Unchecked, a hallucinating AI agent could misinform your customers about company policies that don’t exist, as happened to Anysphere, itself the creator of the gen-AI tool Cursor. Cursor is a popular tool for generating code, and while the rigours of computer programming languages should leave less room for error in this use case, hallucinations can still occur.


In fact, bad actors have noticed that AI-generated code will sometimes point to software packages that don’t exist. Utilising ready-made packages is common in coding, and hackers have created real packages under the names they’ve seen gen-AI invent, so the hallucinated code runs successfully, but it pulls in malware. This is an old trick for a new era: hackers used to capitalise on common typos in a URL (like Goggle.com), creating a shadow site to dupe unsuspecting users.

Users must also be prepared to check the terms of service and privacy policies of their gen-AI tools thoroughly. These can vary widely and change frequently, so you have to stay on your toes, lest you accidentally leak proprietary company information. This is why Samsung Electronics had to temporarily ban employees’ use of tools like ChatGPT in 2023. A responsible AI usage policy is essential for all businesses, and can also help shine a light on ‘shadow AI’ use that takes place without company leaders’ knowledge or oversight.

A rule of thumb is that you shouldn’t share anything proprietary or private with a general-use gen-AI tool: these systems are a fairly new frontier for data processing, and how that data is managed and manipulated after it enters the black box is not always clear-cut.

As well as risks to quality and cybersecurity, AI hallucinations can be damaging to your reputation. A highly publicised demo of Google’s original ChatGPT rival, Bard, claimed that the James Webb Space Telescope captured the first ever image of an exoplanet – a feat that was achieved many years prior to the telescope’s launch. The embarrassment forced Google to rebrand its chatbot as Gemini.

Maybe Google was able to rebrand quickly with the help of gen-AI tools, but there are yet more reasons to generate with caution. Not least because AI companies are facing multiple legal battles regarding the likelihood and legality of using copyrighted material in their training data. Downstream users are not likely to be implicated in the outcome of these lawsuits, but the rancour from creators can dampen consumer sentiment for gen-AI. In Ireland, a recent Qualtrics survey revealed that just 15% of people trust organisations to use AI responsibly, and displacement of workers was a concern for more than half of those surveyed.

Gen-AI also presents a problem for keeping conscious consumers on board and staying on track for sustainability goals. Research from AI community platform Hugging Face and Carnegie Mellon University claims that generating just one image can use as much energy as fully charging your phone. The cost of the energy to run these systems is borne by the companies operating them from their data centres, but the environmental cost will hit us all in the long run.

The pricey operation of these models, which are largely made available on a freemium or surprisingly cheap basis, will come back around eventually. There’s only so long these businesses can go on spending more than they make from gen-AI, and an arbitrary decision from one underlying provider to impose usage restrictions or inflated prices can have a dramatic impact if your business is reliant on its AI costs staying stable.

All of this is to say that gen-AI tools are currently being shaped atop some shaky foundations. There is a lot of risk, but what are the rewards? AI, in its broadest sense, has widespread, useful applications. I myself wouldn’t want to live in a world without AI-assisted transcription. Some people have other gen-AI tools for which they would say the same, I’m sure. I just hope they’re also aware that these tools are imperfect. Like my AI-generated transcriptions, their outputs will need oversight and intervention. A promise of 99% accuracy seems acceptable until you realise how jarring it would be if one in every hundred words of subtitles was wrong. Human supervision is the only way to make the most of these shortcutting tools without cutting any corners on quality. Gen-AIs hallucinate because they don’t know what they don’t know. As long as we know that, we can help them be useful, not the other way around.