Update on latest developments in online safety

While we wait for the final version of the illegal harms codes, due to be published next month, the new Data (Use and Access) Bill has been introduced to Parliament and makes some changes to the online safety regime. Ofcom is also consulting on researchers' access to information about online safety, and it has published an open letter to online service providers operating in the UK about how the Online Safety Act 2023 (OSA) will apply to Generative AI and chatbots.

The Data (Use and Access) Bill was introduced to Parliament on 23 October 2024. From an online safety perspective, it contains provisions about the retention of information by providers of internet services in connection with investigations into child deaths.

The Bill also includes a power to create a researcher data access regime. This aims to support researchers to access data held by online platforms so they can conduct robust and independent research into online safety trends. It also aims to boost transparency and evidence about the scale of online harms and the measures which are effective in tackling them.

With this in mind, Ofcom has published a call for evidence on independent researchers' access to online safety information from providers of services regulated under the OSA. Ofcom is obliged under the OSA to report on how, and to what extent, independent researchers access such information from providers of regulated services. It has asked how independent researchers currently access this information, what challenges currently constrain information sharing for these purposes, and how greater access might be achieved. The call for evidence closes on 17 January 2025.

Finally, Ofcom has published a very useful open letter about how the OSA applies to generative AI. It mentions recent concerning incidents, including the tragic death of an American teenager who had developed a relationship with a chatbot based on a Game of Thrones character, and another incident in which a Generative AI chatbot platform had created chatbots to act as 'virtual clones' of real people and deceased children, including Molly Russell and Brianna Ghey.

The OSA covers websites and apps that allow their users to interact with each other by sharing images, videos, messages, comments or data with other users of the platform. Any AI-generated text, audio, images or videos that are shared by users on a user-to-user service are user-generated content and are regulated in the same way as human-generated content. For example, deepfake fraud material is regulated no differently to human-generated fraud material. It does not matter whether that content was created on the platform where it is shared or has been uploaded by a user from elsewhere.

The OSA also regulates Generative AI tools and content in a number of other ways.

Ofcom strongly encourages services to start preparing now to comply with the relevant duties. For providers of user-to-user services and search services, this means, among other requirements, undertaking risk assessments to understand the risk of users encountering harmful content; implementing proportionate measures to mitigate and manage those risks; and enabling users to easily report illegal posts and material that is harmful to children. Ofcom also sets out a number of practical steps providers can take now, among other measures.

The OSA's duties are mandatory. If companies fail to meet them, Ofcom is prepared to take enforcement action, which may include issuing fines. For background on the timelines, see here.
