Patrick Williams considers the forthcoming changes to the Online Safety Act 2023 (“the Act”) concerning the creation of non-consensual images and child sexual abuse material using AI tools, and their impact on claims in the developing area of image-based abuse.
On 12 January 2026, the UK’s independent online safety regulator, Ofcom, announced that it had opened a formal investigation into the social media platform X, to determine whether it has complied with its duties under the Act to protect people.
The investigation was brought about by concerning reports that the “Grok AI chatbot account” on X was being used to create and share images of people, including children, of an intimate or pornographic nature.
Ofcom therefore opened its investigation to establish whether X has failed to comply with its legal obligations under the Act.
On 14 January 2026, X confirmed that it had implemented safety measures to prevent the Grok account from being used to create intimate images of people.
The Act came into force on 25 July 2025 and is intended to protect children and adults online; it created a range of new duties for social media platforms and technology companies by making them legally responsible for the content they host as well as for user safety on their platforms.
However, despite introducing a number of new duties and standards, the Act does not currently make specific provision for AI products. While it is illegal to share intimate, non-consensual images, including “deepfakes”, it is not currently an offence under the Act to ask an AI tool to create such images. The UK Government has therefore announced its intention to swiftly bring into force a new law making it an offence to create such images using an AI tool, and to amend existing law so that it is an offence for companies to supply tools designed to make them.
Image-based abuse claims are now being made more often. The first judgment concerning such a claim was handed down on 27 February 2023 in FGX v Gaunt [2023] EWHC 419 (KB). The alleged circumstances were that the claimant discovered that her partner had concealed a camera in their bathroom and had uploaded intimate images of her online, alongside a photograph of her face, for which he had obtained payment from pornographic websites. The claimant suffered PTSD as a result. Thornton J, presiding over the case, considered the claim to be one of image-based abuse and held that its impact on the claimant was akin to that of sexual abuse, notwithstanding that the abuse was image-based rather than physical. Accordingly, general damages of £60,000 were awarded under Section C of Chapter 4 of the Judicial College Guidelines 2022, within the “moderate” bracket, for cases where the claimant’s prospects of recovery with professional help are better but the claimant is still likely to suffer from significant disability for the foreseeable future.
Since, and likely at least in part as a result of, FGX v Gaunt, the 17th edition of the Judicial College Guidelines, released on 5 April 2024, for the first time included image-based abuse within its definition of “abuse”. The Guidelines refer to a “small cluster of decisions concerning damages for sexual abuse, including image-based abuse … which has led to … adjustments to the brackets”. As image-based abuse remains an emerging type of claim, reported damages awards in this area are rare; the only cases to which the Judicial College could be referring are therefore FGX v Gaunt and possibly the claim brought by Georgia Harrison, who successfully sued her ex-partner, Stephen Bear, for general damages totalling £120,000 after he shared sexually explicit images of her online without her consent.
The development of AI products capable of creating non-consensual images and child sexual abuse material is a further extension of image-based abuse: a user can manipulate an AI tool to create such images, presenting the same risks as other forms of image-based abuse. Accordingly, it is entirely possible that we will begin to see image-based abuse claims arising in circumstances where individuals have created such material using AI.