Peter Howitt
Update: This article has been revised to take account of the latest Ofcom guidance, codes and statement published on 16 December 2024.
For too long, sites and apps have been unregulated, unaccountable and unwilling to prioritise people’s safety over profits. That changes from today.
(Dame Melanie Dawes, Ofcom’s Chief Executive)
The UK Online Safety Act 2023 (OSA or Act) is a landmark law reshaping how platforms like X (formerly Twitter), Instagram, Threads and Bluesky operate in the UK. It will also impact search applications, gaming platforms and AI generated content platforms.
With a strong focus on protecting democratic content, increasing transparency, and curbing hate speech, the Act imposes strict obligations on platforms to balance user safety with free expression. One of the primary focuses of the OSA is to protect children from harmful content; however, it also aims to protect against hate speech and threats to democracy.
This article explores how the changes required by the OSA will affect the policies, algorithms, and content management systems of social media platforms and AI content-generation applications, with a focus on the impact for adults. A separate article exploring the impact for children will follow.
This article also explores (see Democratic Threats below) why implementation of the OSA needs to be fast-tracked given the increasing attacks on representative democratic systems and the misuse of social media platforms by people like Elon Musk.
Major Compliance Deadline: Providers now have a duty to assess the risk of illegal harms on their services, with a deadline of 16 March 2025. Providers will need to take the safety measures set out in the Codes or use other effective measures to protect users from illegal content and activity.
The OSA imposes significant obligations on social media platforms, interactive websites and applications (including group video) and AI apps affecting how these user-to-user and search service providers manage:
Their new ‘Duty of Care’
Illegal content (including hate crimes)
Protecting children from ‘harmful content’ including grooming, bullying and harassment
Ensuring transparency
Protecting journalistic content and democratic discourse; and
Algorithmic tools.
Other than the duty of care and the requirement to protect all UK persons from illegal content and children from online harm, the full scope of the duties depends on the categorisation of the service provider.
The OSA requires providers to take extra measures to protect children even if the content is not illegal.
The OSA requires services to proactively prevent children from encountering primary priority content, which includes pornography, content promoting self-harm, eating disorders, or suicide. This is a central focus of the legislation for protecting children from the most harmful content.
The OSA mandates services to protect children from priority content, such as bullying, hate speech, and violent content. Services are expected to tailor these protections to specific age groups based on the risks identified.
The OSA also obligates services to assess risks associated with non-designated harmful content through children’s risk assessments and implement measures to mitigate these risks.
Regulated Services: Encompasses internet services, user-to-user services, and search services that meet specific thresholds defined in the Act.
User-to-User Services: These are internet services where users can generate, upload, or share content accessible to others on the same platform.
Examples include social media platforms, forums, and collaborative applications.
This includes AI image and AI content generation platforms (such as Grok, ChatGPT and Gemini).
The definition is comprehensive, capturing services even if user interaction is not a primary feature. Exemptions apply to limited functions like private business communications and one-to-one messaging services such as email and SMS/MMS.
Search Services: These include any internet service functioning as a search engine, allowing users to search across websites or databases.
This category extends beyond traditional search engines (like Google) to any platform offering a search or filtering capability, such as websites with tag-based filtering.
Services operating as both user-to-user and search services are classified as "combined services" and must comply with the obligations of both categories.
Internet Services: An internet service, other than a regulated user-to-user service or a regulated search service, that falls within section 80(2) or Schedule 2 (primarily related to pornographic content).
All online regulated services within scope of the OSA must protect UK users from illegal content and, where applicable, protect children from online harm. However, additional more detailed obligations apply to specified categories of service provider.
The OSA, and additional regulations to be published pursuant to it, are expected to categorise service providers as follows:
Category 1: Services with a significant number of UK users and functionalities that pose higher risks of harm. Ofcom has advised that this category should capture services that meet one of the following criteria:
Use content recommender systems and have more than 34 million UK users (approximately 50% of the UK population).
Allow users to forward or reshare user-generated content, use content recommender systems, and have more than 7 million UK users (approximately 10% of the UK population).
Category 2A: Services with a moderate reach and risk profile, likely to be the highest reach search services. Ofcom recommends that this category include search services (excluding vertical search engines) with over 7 million UK users.
Category 2B: Services with a moderate reach and risk profile, likely to be other user-to-user services with potentially risky functionalities or characteristics. Ofcom recommends that this category target services allowing direct messaging, with over 3 million UK users.
Once the thresholds are set, Ofcom will publish a register of categorised services in the summer of 2025. Ofcom anticipates that the final thresholds will result in 35 to 60 services being categorised. Most in-scope service providers will not be categorised (as they will not be sufficiently large) and so will not be subject to the additional category duties (summarised below). An illustrative sketch of the advised thresholds follows.
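For illustration only, Ofcom's advised thresholds can be expressed as a simple decision rule. This is a minimal sketch, assuming the advised figures above (34 million, 7 million and 3 million UK users plus the functionality tests); the final thresholds will be set in secondary legislation, and the field and function names below are hypothetical rather than anything prescribed by the Act or Ofcom.

```python
# Illustrative sketch only: maps Ofcom's advised thresholds to categories.
# Final thresholds will be set by secondary legislation; names are hypothetical.
from dataclasses import dataclass

@dataclass
class Service:
    uk_users: int                    # average monthly active UK users
    is_search_service: bool
    is_vertical_search: bool         # vertical search engines are excluded from 2A
    uses_content_recommender: bool
    allows_resharing: bool           # users can forward/reshare user-generated content
    allows_direct_messaging: bool

def advised_category(s: Service) -> str | None:
    """Return 'Category 1', 'Category 2A', 'Category 2B' or None (uncategorised)."""
    if not s.is_search_service:
        if s.uses_content_recommender and s.uk_users > 34_000_000:
            return "Category 1"
        if (s.allows_resharing and s.uses_content_recommender
                and s.uk_users > 7_000_000):
            return "Category 1"
        if s.allows_direct_messaging and s.uk_users > 3_000_000:
            return "Category 2B"
    else:
        if not s.is_vertical_search and s.uk_users > 7_000_000:
            return "Category 2A"
    return None  # still in scope of the OSA, but no additional category duties

# Example: a large social network with recommender feeds and resharing
print(advised_category(Service(40_000_000, False, False, True, True, True)))  # Category 1
```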
Ofcom Guide to Categories and Requirements
Ofcom has since published a summary of its decisions in its Illegal Harms statement (the “Statement”), which outlines which services the measures apply to. It sets out:
The detailed measures they are recommending for user-to-user (U2U) services;
The detailed measures they are recommending for search services;
Their guidance for risk assessment duties, applicable to all U2U and search services; and
Their guidance for record-keeping and review duties, applicable to all U2U and search services.
The guidance sets out more than 40 safety measures that must be introduced by March 2025, broken down by service size and risk.
Size:
Large services. Defined as services with an average user base greater than 7 million monthly active UK users, approximately equivalent to 10% of the UK population.
Smaller services. These are all services that are not large and will include services provided by small and micro businesses.
Risk Profile:
Services are then sub-divided into three sub-groups, depending on the outcome of the provider's risk assessment for the 17 kinds of priority illegal harm set out in the Risk Assessment Guidance:
‘Low-risk’ refers to a service which the provider has assessed as being low risk for all kinds of illegal harm. To be classified as low risk, there should be no evidence of harm taking place on the service and no or few specific risk factors associated with the kind of harm identified in the Risk Profiles.
‘Single-risk’ refers to a service which the provider has assessed as being medium or high risk for just one kind of illegal harm.
‘Multi-risk’ refers to a service which the provider has assessed as being medium or high risk for at least two different kinds of illegal harms.
The General Risk Level Table provided by Ofcom outlines the conditions for classifying a service as medium or high risk:
Medium risk: A service may be classified as medium risk if there is a moderate likelihood that a user could encounter illegal content and several risk factors have been identified. For example, a service may be deemed medium risk for grooming if there is evidence it has occurred, but not to a significant extent.
High risk: A service may be classified as high risk if there is evidence of a substantial amount of illegal content or if many individuals have suffered actual harm. For example, services with functionalities such as private messaging, livestreaming, or encrypted messaging may be assessed as high risk for grooming.
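By way of illustration, the grouping that follows from a provider's own per-harm assessments can be pictured as a simple rule. This is a minimal sketch under stated assumptions: the data structure, threshold constant and function names are hypothetical, and the authoritative test is the one in Ofcom's Risk Assessment Guidance.

```python
# Illustrative sketch: deriving a service's size band and risk grouping from
# the per-harm outcomes of its illegal content risk assessment. Structure and
# names are hypothetical, not an Ofcom-prescribed format.

LARGE_SERVICE_THRESHOLD = 7_000_000  # approx. 10% of the UK population

def risk_grouping(per_harm_levels: dict[str, str]) -> str:
    """per_harm_levels maps each of the 17 kinds of priority illegal harm to
    'low', 'medium' or 'high' as assessed by the provider."""
    elevated = [h for h, level in per_harm_levels.items() if level in ("medium", "high")]
    if not elevated:
        return "low-risk"
    return "single-risk" if len(elevated) == 1 else "multi-risk"

def size_band(avg_monthly_uk_users: int) -> str:
    return "large" if avg_monthly_uk_users > LARGE_SERVICE_THRESHOLD else "smaller"

# Example: a smaller service assessed as medium risk for grooming only
assessment = {"grooming": "medium", "terrorism": "low", "fraud": "low"}
print(size_band(2_000_000), risk_grouping(assessment))  # smaller single-risk
```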
Factors Determining Risk Level:
Risk Factors: The Risk Profiles published by Ofcom identify specific functionalities and characteristics of services associated with various illegal harms. Service providers must consider these risk factors when assessing their own risk levels.
Evidence of Harm: Service providers must consider evidence of illegal content and harm occurring on their services, including data from user reports, content moderation systems, and expert analysis.
Existing Controls: The effectiveness of existing safety measures and content moderation processes implemented by service providers will influence their risk level.
Note: The final categorisation of services will be determined by Ofcom based on their assessment of each service's specific circumstances and risk profile (summer 2025). While service providers conduct their own risk assessments, Ofcom retains the authority to classify services as Category 1, 2A, or 2B under the OSA.
Summary of the Detailed Measures
Governance and Accountability
Regular reviews of risk management activities and internal monitoring.
Clear designation of an individual responsible for illegal content safety and reporting.
Documented responsibilities and codes of conduct for staff.
Tracking of emerging illegal harms.
Content Moderation
Implementation of content moderation systems (both human and automated).
Establishment of internal content policies and performance targets.
Prioritisation and resourcing of content review efforts.
Training for content moderation staff and provision of materials for volunteers.
Reporting and Complaints
Mechanisms for user complaints and reporting of illegal content.
Clear communication and timelines for handling complaints.
Processes for handling appeals and specific types of complaints.
User Controls and Support
Safety features for child users (e.g., default settings).
Terms of service that are clear and accessible.
Support services for child users.
Tools for user blocking and muting.
Additional Measures
Specific measures for recommender systems (e.g., safety metrics collection).
Removal of accounts associated with proscribed organizations.
Labeling schemes for notable users and monetized content.
Dedicated reporting channels for trusted flaggers.
Search-Specific Measures
Moderation of search results and predictive search suggestions.
Provision of content warnings and crisis prevention information.
Publicly available statements about content safety measures.
Free Speech: Regulated category 1 service providers must safeguard diverse political opinions, journalistic content, and democratic discourse while complying with moderation obligations.
Algorithm Transparency: All categorised service providers must provide detailed disclosures about how their algorithms identify harmful content, moderate misinformation, and serve recommendations.
Protect Children from Harm: All providers must take extra measures to protect children even if the content is not illegal.
Harmful and Criminal Content Management: All providers must implement robust systems to detect and remove illegal criminal content and provide clear reporting tools for users. Category 1 providers must also take extra measures to enable adult users to reduce their exposure to legal but potentially harmful content.
User Control & Identity Verification: Category 1 providers must empower users with tools to manage their online experience such as use of personalised filters and ID verification.
Codes of Practice: Practical compliance obligations for providers and users will be managed and understood by reference to Ofcom guidance and codes of practice, which help to interpret the law. Under the OSA, Ofcom is required to prepare and issue the following separate Codes of Practice:
Codes of Practice for terrorism content
Codes of Practice for child sexual exploitation and abuse (CSEA) content
Codes of Practice for the purpose of compliance with the relevant duties relating to illegal content and harms.
Illegal content is defined broadly to encompass a wide range of what are known as priority offences. These include:
Terrorism: Content that promotes, glorifies, or incites terrorism
Child Sexual Exploitation and Abuse (CSEA): Material depicting or promoting child abuse
Sexual Exploitation of Adults
Threats, Abuse & Harassment including Hate Crimes: Content that incites violence or hatred based on protected characteristics.
Unlawful Pornographic Content: image-based sexual offences.
Fraud: Deceptive or misleading content intended to defraud users.
Suicide: Assisting or encouraging suicide.
Buying/Selling unlawful items: e.g. buying or selling drugs or weapons.
See the Ofcom Background Guidance ('Protecting people from illegal harms online') for more information.
Illegal Content Judgments
Providers must conduct "suitable and sufficient" Illegal Content Risk Assessments (ICRAs) that consider the risks of users encountering illegal content, including "priority illegal content".
Providers must make illegal content judgments based on "reasonable grounds to infer," a lower threshold than the criminal standard of "beyond reasonable doubt." This means that there must be reasonable grounds to infer that:
The conduct element of a relevant offense is present or satisfied.
The state of mind element of that same offense is present or satisfied.
There are no reasonable grounds to infer that a relevant defense is present or satisfied.
Freedom of expression and privacy must be considered when making these judgments.
When service providers are alerted to the presence of illegal content or are aware of its presence in any other way, they have a duty to operate using proportionate systems and processes designed to "swiftly take down" any such content. This is referred to as the "takedown duty".
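The conjunctive structure of an illegal content judgement, and the takedown decision that follows, can be pictured in a short sketch. This is illustrative only and not legal advice: the field and function names are hypothetical, and real judgements must follow the ICJG, including the freedom of expression and privacy considerations that a boolean check cannot capture.

```python
# Illustrative only: the three-limb "reasonable grounds to infer" structure
# behind an illegal content judgement, and the resulting takedown decision.
# Function and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class OffenceAssessment:
    conduct_element_inferred: bool      # reasonable grounds to infer the conduct element
    mental_element_inferred: bool       # reasonable grounds to infer the state of mind element
    defence_reasonably_inferred: bool   # reasonable grounds to infer a relevant defence

def is_illegal_content(a: OffenceAssessment) -> bool:
    """Content is treated as illegal only if all three limbs are satisfied."""
    return (a.conduct_element_inferred
            and a.mental_element_inferred
            and not a.defence_reasonably_inferred)

def on_alert(assessment: OffenceAssessment) -> str:
    # Freedom of expression and privacy must still inform how the judgement is
    # reached; this sketch only captures the final conjunctive test.
    if is_illegal_content(assessment):
        return "swiftly take down"   # the 'takedown duty'
    return "no takedown required on this ground"

print(on_alert(OffenceAssessment(True, True, False)))  # swiftly take down
```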
Ofcom has issued the Illegal Content Judgements Guidance (ICJG) to support providers in understanding their regulatory obligations when making judgments about whether content is illegal under the OSA. It provides guidance on how to identify and handle illegal content, while considering freedom of expression and privacy. The ICJG outlines the legal framework for various offences, the importance of context, jurisdictional issues, and the handling of reports and flags. It also offers specific guidance on various offence categories, including the conduct and mental elements, as well as relevant defences.
User-to-user services: For brevity, given the scope of the OSA, I will focus on the user-to-user services Codes and Guidance (as the most relevant category for social network platforms like X). The Illegal Content Codes of Practice for search services are available here.
The draft Illegal Content Codes of Practice for user-to-user services have been published, with measures recommended for providers to comply with the following duties:
The illegal content safety duties set out in section 10(2) to (9) of the Act;
The duty for content reporting set out in section 20 of the Act, relating to illegal content; and
The duties about complaints procedures set out in section 21 of the Act, relating to the complaints requirements in section 21(4).
Section 3 of the document provides an index of recommended measures, including the application, relevant codes, and relevant duties for each measure. The recommended measures cover a range of areas, including governance and accountability, content moderation, reporting and complaints, recommender systems, settings, functionalities and user support, terms of service, user access, and user controls.
Recommended Measures
The recommended measures are set out in Section 4 of the document and are divided by thematic area:
Governance and Accountability
Large services should conduct an annual review of risk management activities related to illegal harm in the UK.
All services should designate an individual accountable for compliance with illegal content safety and reporting/complaints duties.
Large or multi-risk services should:
have written statements of responsibilities for senior managers involved in risk management.
have an internal monitoring and assurance function to assess the effectiveness of harm mitigation measures.
track and report evidence of new or increasing illegal content.
have a code of conduct setting standards for protecting users from illegal harm.
provide compliance training to individuals involved in service design and operation.
Content Moderation
All services should have a content moderation function to review and assess suspected illegal content and take it down swiftly.
Large or multi-risk services should:
set and record internal content policies, performance targets, and prioritize content for review.
provide training and materials for content moderators (including volunteers) and use hash-matching to detect CSAM (a minimal illustration of hash-matching follows this list).
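The sketch below shows only the matching pattern behind hash-matching. In practice, CSAM detection relies on curated hash lists (typically perceptual hashes) supplied by bodies such as the IWF and on secure reporting channels; the blocklist, hash choice and function names here are hypothetical simplifications.

```python
# Minimal illustration of hash-matching. Real CSAM detection uses perceptual
# hash databases supplied by trusted bodies such as the IWF, not plain SHA-256
# of the file bytes; this sketch only shows the matching pattern.
import hashlib

# Hypothetical blocklist: in practice supplied as a curated hash list.
known_hashes = {hashlib.sha256(b"known-prohibited-example").hexdigest()}

def matches_known_hash(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in known_hashes

def handle_upload(file_bytes: bytes) -> str:
    if matches_known_hash(file_bytes):
        return "block upload, escalate for review and reporting"
    return "allow, subject to other moderation checks"

print(handle_upload(b"known-prohibited-example"))  # blocked
print(handle_upload(b"ordinary upload"))           # allowed
```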
Reporting and Complaints
All services should have accessible and user-friendly systems for reporting and complaints, and take appropriate action on complaints.
Larger services and those at risk of illegal harm should provide information about complaint outcomes and allow users to opt out of communications.
Specific requirements apply to handling complaints that are appeals or relate to proactive technology.
Recommender Systems
Services conducting on-platform testing of recommender systems and at risk of multiple harms should collect and analyse safety metrics.
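As an illustration of the kind of safety metric such on-platform testing might collect, the sketch below compares how often a control and a treatment arm of a recommender test surface content later actioned as violating. The event structure, metric and names are hypothetical examples, not a prescribed measure.

```python
# Illustrative sketch: collecting a simple safety metric while testing a
# recommender change. Event structure and metric are hypothetical examples
# of the kind of data a provider might analyse.
from collections import defaultdict

def violating_impression_rate(events: list[dict]) -> dict[str, float]:
    """events: one record per recommendation impression, with the test arm
    ('control' or 'treatment') and whether the item was later actioned as
    violating. Returns violating impressions per 1,000 impressions, per arm."""
    impressions = defaultdict(int)
    violations = defaultdict(int)
    for e in events:
        impressions[e["arm"]] += 1
        violations[e["arm"]] += int(e["later_actioned_as_violating"])
    return {arm: 1000 * violations[arm] / impressions[arm] for arm in impressions}

sample = [
    {"arm": "control", "later_actioned_as_violating": False},
    {"arm": "control", "later_actioned_as_violating": True},
    {"arm": "treatment", "later_actioned_as_violating": False},
    {"arm": "treatment", "later_actioned_as_violating": False},
]
print(violating_impression_rate(sample))  # {'control': 500.0, 'treatment': 0.0}
```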
Settings, Functionalities and User Support
Services with age-determination capabilities and at risk of grooming should implement safety defaults for child users and provide support.
All services should have terms of service that address illegal content and complaints, and these terms should be clear and accessible.
User Access
All services should remove accounts of proscribed organizations.
User Controls
Large services at risk of specific harms should offer blocking, muting, and comment-disabling features.
Notable User and Monetised Labelling Schemes
Large services with labelling schemes for notable or monetized users should have policies to reduce the risk of harm associated with these schemes.
Implementing the recommended measures will involve the processing of personal data, and service providers are expected to comply fully with data protection law when taking measures for the purpose of complying with their online safety duties.
The purpose of ICRAs is to help service providers understand how different kinds of illegal harm could arise on their service and what safety measures need to be put in place to protect users. ICRAs must be 'suitable and sufficient' for a provider to meet their OSA obligations.
Core inputs: This type of evidence should be considered by all service providers and includes risk factors identified through the relevant Risk Profile, user complaints and reports, user data (such as age, language, and groups at risk), retrospective analysis of incidents of harm, relevant sections of Ofcom's Register of Risks, evidence drawn from existing controls, and other relevant information (including other characteristics of the service that may increase or decrease the risk of harm).
Enhanced inputs: This type of evidence should be considered by large service providers and those who have identified multiple specific risk factors for a kind of illegal content. Examples of enhanced inputs include results of product testing, results of content moderation systems, consultation with internal experts on risks and technical mitigations, views of independent experts, internal and external commissioned research, outcomes of external audit or other risk assurance processes, consultation with users, and results of engagement with relevant representative groups.
The different types of illegal content that must be assessed are:
The 17 kinds of priority illegal content: terrorism; child sexual exploitation and abuse (CSEA), including grooming and child sexual abuse material (CSAM); hate; harassment, stalking, threats and abuse; controlling or coercive behaviour; intimate image abuse; extreme pornography; sexual exploitation of adults; human trafficking; unlawful immigration; fraud and financial offences; proceeds of crime; drugs and psychoactive substances; firearms, knives and other weapons; encouraging or assisting suicide; foreign interference; and animal cruelty.
Other illegal content: This includes non-priority illegal content as described in the Register of Risks and potentially other offences depending on the specific service and evidence available.
Additional factors that service providers should consider when carrying out an illegal content risk assessment:
Service characteristics: The characteristics of the service, such as its user base (e.g., age, language, vulnerable groups), functionalities (e.g., live streaming, anonymous posting), and business model, can affect the level of risk.
Risk factors: The Risk Profiles published by Ofcom identify specific risk factors associated with each type of illegal content. Service providers should consider these risk factors and any additional factors specific to their service.
Likelihood and impact of harm: The assessment should consider the likelihood of each type of illegal content occurring on the service and the potential impact of that content on users and others.
Existing controls: The effectiveness of any existing measures to mitigate or control illegal content should be considered.
Evidence: Service providers should use a variety of evidence to inform their risk assessment, including user complaints, data analysis, and external research.
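Bringing the likelihood and impact judgements together, the sketch below shows one hypothetical way a provider might record a per-harm conclusion in its ICRA. The combination rule, field names and example values are illustrative assumptions only; the authoritative reference is Ofcom's General Risk Level Table and Risk Assessment Guidance.

```python
# Illustrative only: one way a provider might record a per-harm conclusion in
# its illegal content risk assessment. The combination rule and field names
# are hypothetical.
from dataclasses import dataclass, field

LEVELS = ("low", "medium", "high")

@dataclass
class HarmAssessment:
    harm: str                      # one of the 17 kinds of priority illegal harm
    likelihood: str                # 'low' / 'medium' / 'high', drawn from the evidence inputs
    impact: str                    # severity of harm to users if it occurs
    evidence: list[str] = field(default_factory=list)

    def risk_level(self) -> str:
        # Crude illustration: take the higher of the two judgements.
        return max(self.likelihood, self.impact, key=LEVELS.index)

grooming = HarmAssessment(
    harm="grooming",
    likelihood="medium",           # some evidence it has occurred, not extensively
    impact="medium",
    evidence=["user reports", "Risk Profile factors: direct messaging"],
)
print(grooming.harm, grooming.risk_level())  # grooming medium
```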
Categorised service providers also have the following additional duties regarding their illegal content risk assessments:
Publication of Summary: They must publish a summary of their most recent illegal content risk assessment. Category 1 services must include this summary in their terms of service, while Category 2A services must include it in a publicly available statement. The summary should include the findings of the assessment, including the levels of risk and the nature and severity of potential harm to individuals.
Provision of Assessment to Ofcom: They must provide Ofcom with a copy of their risk assessment record as soon as reasonably practicable after completing or revising it.
The Online Safety Act (s.61) defines content that is harmful to children as:
'Primary priority content' being:
Pornographic content
Content which encourages, promotes or provides instructions for suicide.
Content which encourages, promotes or provides instructions for an act of deliberate self-injury.
Content which encourages, promotes or provides instructions for an eating disorder or behaviours associated with an eating disorder.
Section 62 defines other priority content that can be harmful to children and must be managed appropriately. It includes:
Bullying and cyberbullying
Abusive or hateful content
Content depicting or encouraging serious violence
Content promoting dangerous stunts or challenges
Content encouraging the ingestion or exposure to harmful substances
Platforms must ensure that access to this type of content is age-appropriate and that protections are in place for children.
The OSA prioritises protecting UK users from online harms.
(1) This Act provides for a new regulatory framework which has the general purpose of making the use of internet services regulated by this Act safer for individuals in the United Kingdom.
(2) To achieve that purpose, this Act (among other things)—
(a) imposes duties which, in broad terms, require providers of services regulated by this Act to identify, mitigate and manage the risks of harm (including risks which particularly affect individuals with a certain characteristic) from—
(i) illegal content and activity, and
(ii) content and activity that is harmful to children, and
(b) confers new functions and powers on the regulator, OFCOM.
The Act outlines specific age and identity verification requirements, particularly for platforms categorized as Category 1 services, which are likely to have a significant number of users and offer a wide range of functionalities. In addition, platforms that are clearly aimed at pornography consumption must carry out age assurance checks.
“Highly Effective” Age Verification or Estimation Required: The Act mandates that services likely to be accessed by children use age verification or age estimation methods that are “highly effective” at correctly determining whether a user is a child. This applies across all areas of the service, including design, operation, and content.
Self-Declaration Not Sufficient: Simple self-declaration of age is not considered a valid form of age verification or age estimation.
Ofcom Guidance on Effectiveness: Ofcom, the designated regulator, is responsible for providing guidance on what constitutes “highly effective” age assurance. This guidance will include examples of effective and ineffective methods, and principles to be considered.
Factors for Effective Age Assurance: Ofcom’s guidance suggests that effective age assurance methods should be technically accurate, robust, reliable, and fair. They should be easy to use and work effectively for all users, regardless of their characteristics.
Recommended Methods: Ofcom has recommended methods like credit card checks, open banking, and photo ID matching as potentially highly effective.
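The sketch below illustrates the practical consequence of these rules: access gating should rely on a method Ofcom regards as potentially highly effective, and self-declared age alone should never pass. The method names, the accepted set and the check itself are hypothetical placeholders rather than anything prescribed.

```python
# Illustrative sketch: gating access to adult-only content behind methods
# Ofcom has indicated may be "highly effective" (e.g. photo ID matching,
# credit card checks, open banking). Self-declaration alone is not accepted.
# Method names and the check are hypothetical placeholders.

HIGHLY_EFFECTIVE_METHODS = {"photo_id_matching", "credit_card_check", "open_banking"}

def may_access_adult_content(method: str, method_confirms_adult: bool) -> bool:
    if method == "self_declaration":
        return False  # self-declared age is not valid age assurance under the OSA
    if method not in HIGHLY_EFFECTIVE_METHODS:
        return False
    return method_confirms_adult

print(may_access_adult_content("self_declaration", True))   # False
print(may_access_adult_content("photo_id_matching", True))  # True
```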
Transparency and Reporting Requirements:
Platforms using age assurance must clearly explain their methods in their terms of service and provide detailed information in a publicly available statement. They must also keep written records of their age assurance practices and how they considered user privacy.
Ofcom Reports on Age Assurance Use:
Ofcom will assess how providers use age assurance and its effectiveness, reporting on any factors hindering its implementation.
Category 1 Services Must Offer Identity Verification: The Act requires Category 1 services (like major social media platforms) to offer all adult users in the UK the option to verify their identity, unless identity verification is already necessary to access the service.
No Specific Method Mandated: The Act does not specify a particular method of identity verification. Platforms can choose a method that works for their service, but it must be clearly explained in their terms of service.
Documentation Not Required: The identity verification process does not necessarily need to involve providing documentation.
User Empowerment Features:
Identity verification is linked to user empowerment features, as platforms must offer adult users the ability to:
Control their exposure to harmful content.
Choose whether to interact with content from verified or non-verified users.
Filter out non-verified users.
Ofcom Guidance for Category 1 Services:
Ofcom is expected to provide guidance for Category 1 services on implementing identity verification, with a focus on ensuring availability for vulnerable adult users.
General Considerations
The Act aims to strike a balance between online safety and freedom of expression, and this balance influences the implementation of age and identity verification requirements.
Specific details regarding the application of these requirements are still under development, and Ofcom is working on codes of practice and guidance to provide further clarification.
The age and identity verification requirements under the Act aim to enhance online safety, particularly for children and vulnerable adults. The Act focuses on the effectiveness of these measures, transparency from platforms, and user empowerment to control their online experiences.
The OSA imposes specific obligations on Category 1 services due to their reach and influence. These rules aim to safeguard the diversity of opinions and the integrity of democratic debate whilst minimising harmful speech. See also 'Democratic Threats' below.
Key Requirements
Content of Democratic Importance: Providers must ensure moderation processes do not disproportionately suppress political opinions or stifle democratic discussion. This includes protecting content from verified news publishers, journalistic pieces, and user-generated contributions to political debates.
Equal Treatment of Opinions: Decisions about content moderation must respect free expression and avoid bias against particular political viewpoints. This includes avoiding overzealous removals under policies aimed at combating misinformation or hate speech.
Protection of Journalistic Content: Articles and posts deemed to have journalistic value must not be unjustly removed or suppressed, ensuring the platform remains a space for investigative reporting and public interest stories. Platforms must protect:
Verified news publishers' content.
Journalistic content, even if shared by individual users.
User-generated contributions to political debates.
While regulated internet services are required to remove illegal or harmful content, the OSA emphasises the need to uphold free speech. Providers must develop policies and systems that balance protecting users from harm and allowing diverse viewpoints to flourish.
The requirement for transparency reports that include moderation policies and actions will be crucial here.
The UK Online Safety Act’s requirements regarding pornography vary between specialised pornography platforms and other internet services like search engines.
Specialised pornography platforms:
For specialised pornography platforms, which are classified as “services that feature provider pornographic content”, the Act imposes a duty to ensure children are not normally able to encounter regulated provider pornographic content. This means these platforms will have to implement robust age verification or age estimation systems.
The definition of “regulated provider pornographic content” is specific and excludes content that consists solely of text, or text accompanied by emojis or non-pornographic GIFs. However, content in image, video, or audio form that is considered pornographic would fall under this definition and trigger the age assurance obligations.
The Act also mandates that these platforms, along with other user-to-user services and search services, conduct risk assessments. These assessments should identify and mitigate potential harms related to illegal content, including child sexual abuse material (CSAM) and extreme pornography.
Research indicates that user-to-user pornography services are particularly vulnerable to these types of illegal content. For example, a study found that a user-to-user pornography website hosted nearly 60,000 videos under phrases associated with intimate image abuse.
Additionally, evidence suggests that some services that host pornographic content prioritize user growth over content moderation, leading to less effective detection and removal of extreme content.
For other regulated internet services:
For other internet services, like search engines, the Act’s impact is more indirect. While search engines are not directly obligated to implement age verification, they are still subject to the requirement to mitigate and manage the risks of harm from illegal content and content harmful to children. This includes content that may be accessed through search results, even if the search engine itself does not host the content. For example, evidence suggests that search engines can be used to access websites offering illegal items like drugs and firearms.
The Act acknowledges that search engines are often the starting point for many users’ online journeys and that they play a crucial role in making content accessible.
Search engines are also subject to risk assessments. Given the potential for users to find illegal content through search, they are expected to consider how their functionalities, like image/video search and reverse image search, might increase risks. Furthermore, even if pornography is not their core function or purpose, platforms like X (formerly Twitter) and Reddit, which allow users to share user-generated content, including pornographic material, would be classified as user-to-user services and be subject to the relevant duties under the Act. This means they would also need to conduct risk assessments, consider the risks associated with user-generated pornographic content, and implement measures to mitigate those risks.
In conclusion, the Online Safety Act has significant implications for both specialised pornography platforms and other internet services that may have links to pornography. The Act aims to protect children from accessing pornographic content through robust age verification measures and seeks to reduce the prevalence of illegal content on these platforms through risk assessments and content moderation practices. The Act’s wide scope means that even platforms where pornography is not the main focus are still obligated to address the risks associated with such content.
Detailed Reporting: all categorised regulated internet services must publish annual transparency reports explaining their algorithms’ role in content moderation and misinformation detection. These reports should detail the volume of flagged and removed content, alongside the impact of moderation algorithms on users.
Proactive Technology Disclosure: providers must disclose any automated systems, such as machine learning tools, used to detect harmful or illegal content.
Terms of Service Clarity: providers must clearly explain their policies on algorithmic decision-making, especially regarding content of democratic importance and misinformation.
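For illustration, the kind of data an annual transparency report might aggregate could be represented as below. The fields, figures and structure are hypothetical assumptions; the actual required contents will be dictated by the Act and by Ofcom's transparency notices.

```python
# Hypothetical structure for the data an annual transparency report might
# aggregate; field names and example values are illustrative only.
from dataclasses import dataclass, asdict
import json

@dataclass
class TransparencyReport:
    period: str
    content_flagged: int               # items reported by users or trusted flaggers
    content_removed: int               # items taken down after review
    removals_by_automated_tools: int   # detected by proactive technology
    appeals_received: int
    appeals_upheld: int
    proactive_technologies: list[str]  # e.g. hash-matching, keyword detection

report = TransparencyReport(
    period="2025",
    content_flagged=120_000,
    content_removed=45_000,
    removals_by_automated_tools=30_000,
    appeals_received=2_000,
    appeals_upheld=150,
    proactive_technologies=["hash-matching", "URL blocklists"],
)
print(json.dumps(asdict(report), indent=2))
```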
Algorithms are central to how regulated internet services moderate content, serve recommendations, and filter harmful material. The Act introduces transparency and accountability measures to ensure these algorithms and systems are safe and fair:
Providers must be transparent about how their algorithms function and the potential impact on users' exposure to illegal content.
They must include provisions in their terms of service or publicly available statements that specify how individuals are protected from illegal content, including details about the design and operation of algorithms used.
Additionally, they must provide information about any proactive technology used for compliance, including how it works, and ensure this information is clear and accessible.
Category 1 providers have an additional duty to summarise the findings of their most recent ICRAs in their terms of service, including the level of risk associated with illegal content. Factors like the speed and reach of content distribution facilitated by algorithms must be considered. These assessments must be updated regularly to reflect changes in Ofcom's Codes of Practice (COPs), risk profiles, and the provider's business practices.
Safer Algorithms for Children: If regulated internet services are accessed by children, their algorithms must minimise exposure to harmful content. This includes age-appropriate design measures and risk assessments targeting features that could harm younger users.
The Act promotes user choice and control by requiring platforms to provide tools that help users manage their online experience. For example:
Users can thereby gain more insight into how recommendation systems work.
Platforms could be required to offer non-personalised feeds that reduce reliance on algorithm-driven content.
Category 1 services must provide adult users with control features that effectively:
Reduce the likelihood of encountering specific types of legal but potentially harmful content, such as content promoting suicide, self-harm, or eating disorders.
Offer features to filter out interactions with non-verified users.
Clearly explain the available control features and their usage in the terms of service.
See also the 'Clean up the internet' recommendations to Ofcom.
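A minimal sketch of how such user empowerment features might be applied before content is shown follows. The preference flags, content labels and function names are hypothetical; the point is simply that an adult user's choices (hide non-verified authors, reduce sensitive content) filter what they encounter.

```python
# Minimal sketch of user empowerment filtering on a Category 1 service:
# applying an adult user's choices before content is shown. Field names and
# categories are hypothetical.
from dataclasses import dataclass

@dataclass
class UserPreferences:
    hide_non_verified_authors: bool = False
    reduce_sensitive_content: bool = False   # e.g. suicide, self-harm, eating disorders

@dataclass
class Post:
    author_verified: bool
    sensitive_category: str | None = None    # None means no sensitive label

def visible_to(user: UserPreferences, post: Post) -> bool:
    if user.hide_non_verified_authors and not post.author_verified:
        return False
    if user.reduce_sensitive_content and post.sensitive_category is not None:
        return False
    return True

prefs = UserPreferences(hide_non_verified_authors=True, reduce_sensitive_content=True)
print(visible_to(prefs, Post(author_verified=False)))                                  # False
print(visible_to(prefs, Post(author_verified=True, sensitive_category="self-harm")))   # False
print(visible_to(prefs, Post(author_verified=True)))                                   # True
```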
AI Chatbots: It is very likely that services such as ChatGPT, Gemini and Perplexity will be categorised as user-to-user services, as they allow users to interact with a generative AI chatbot and share chatbot-generated text, images and other user-generated content with other users.
Art: Services such as Midjourney are also in scope.
Generative AI Tools and Pornographic Content: Services featuring AI tools capable of generating pornographic material are additionally regulated and must implement highly effective age assurance measures to prevent children from accessing such content.
Generative AI and Search Services: AI tools enabling searches across multiple websites or databases are considered search services under the OSA. This includes tools that modify or augment search results on existing search engines or offer live internet results on standalone platforms. Consequently, these AI-powered search services will need to comply with the relevant duties outlined in the Act.
The Act will have a significant impact on services like ChatGPT, Gemini, Perplexity, Claude and others especially given the recent concerns about Generative AI and the potential for misuse.
These will usually fall within user-to-user services, potentially impacting their functionalities, transparency requirements, and approach to user safety.
Ofcom published an open letter on November 8, 2024, specifically addressing Generative AI and chatbots in the context of the Act. This letter emphasised the Act’s application to:
Services that allow users to interact with and share content generated by AI chatbots. For example, if ChatGPT allows users to share AI-generated text, images, or videos with other users, it would be considered a regulated user-to-user service.
Services where users can create and share AI chatbots, known as ‘user chatbots’. This means that any AI-generated content created and shared by these ‘user chatbots’ would also be regulated by the Act.
Ofcom has expressed concerns about the potential for Generative AI chatbots to be used to create harmful content, such as chatbots that mimic real people, including deceased children. These concerns highlight the Act’s focus on protecting users from harmful content generated by AI, even if it is technically ‘user-generated’ through the chatbot interface. The impact on services like ChatGPT is set out below.
Content Moderation:
The Act will require services like ChatGPT to implement robust content moderation mechanisms to prevent the creation and dissemination of illegal content through its platform. This could include:
Monitoring user prompts and chatbot responses to identify and prevent the generation of harmful content, such as hate speech, child sexual abuse material (CSAM), or content promoting terrorism.
Developing safeguards to prevent the creation of ‘user chatbots’ that mimic real people or deceased individuals, particularly children.
Implementing reporting mechanisms and processes for users to flag potentially harmful chatbot interactions or content.
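One way to picture these duties is a moderation check wrapped around both the user's prompt and the model's output before anything is shown or shared. The sketch below is hypothetical: the classifier is a stand-in for whatever detection systems (trained models, hash or keyword signals, human review) a provider actually uses, and the function names are not any real chatbot API.

```python
# Hypothetical sketch of moderation checks around a generative AI chatbot:
# both the user's prompt and the model's output pass through a policy check
# before anything is shown or shared. The classifier is a placeholder stub.

BLOCKED_CATEGORIES = {"csea", "terrorism", "hate", "fraud"}

def classify(text: str) -> set[str]:
    """Placeholder classifier: a real service would use trained models,
    keyword/hash signals and human review, not this stub."""
    return {c for c in BLOCKED_CATEGORIES if c in text.lower()}

def moderated_chat(prompt: str, generate) -> str:
    if classify(prompt):
        return "Prompt refused: it appears to request prohibited content."
    response = generate(prompt)
    if classify(response):
        return "Response withheld and logged for review."
    return response

# Example with a trivial stand-in generator
print(moderated_chat("tell me a joke", lambda p: "Here is a joke..."))
```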
Transparency:
The Act’s emphasis on transparency will require services like ChatGPT to provide more information about their content moderation practices and the use of AI in their services. This could include:
Publishing transparency reports detailing the volume and nature of harmful content identified and removed, including AI-generated content.
Disclosing the use of algorithms and proactive technology to detect and mitigate harmful content.
Providing clear information in their terms of service about their approach to user safety and AI-generated content.
Risk Assessments: Services like ChatGPT will need to conduct thorough risk assessments, evaluating the specific risks associated with Generative AI and chatbots, considering factors like:
The likelihood of its functionalities facilitating the presence or dissemination of harmful content, identifying functionalities more likely to do so.
How its design and operation, including its business model and use of proactive technology, may reduce or increase the likelihood of users encountering harmful content.
The risk of its proactive technology breaching statutory provisions or rules concerning privacy, particularly those relating to personal data processing.
Challenges and Considerations:
Defining Harmful Content: Applying the Act’s broad definitions of harmful content to the context of Generative AI will be complex. Determining what constitutes “harmful” chatbot interactions, considering factors like context, intent, and potential for harm, will require careful consideration.
Balancing Safety and Innovation:
Finding a balance between protecting users from harm and fostering innovation in Generative AI will be crucial. Overly restrictive measures could stifle the development and beneficial uses of AI chatbots.
Technical Feasibility:
Implementing effective content moderation and safety measures for a service like ChatGPT, which relies on complex AI models, poses technical challenges. Developing robust and adaptable solutions to mitigate risks associated with Generative AI will require ongoing research and innovation.
Conclusion:
The Online Safety Act 2023 represents a significant shift in the regulation of online services, including AI-powered platforms like ChatGPT. The Act’s focus on user safety and transparency will require ChatGPT to adapt its approach, implement robust content moderation, and provide greater transparency about its operations. While the Act presents challenges, it also offers an opportunity for ChatGPT to demonstrate its commitment to responsible AI development and user safety. The evolving nature of Generative AI and the Act’s implementation will require ongoing dialogue between Ofcom, service providers like ChatGPT, and stakeholders to ensure a balanced and effective approach to online safety.
Gaming environments like Roblox and Fortnite that allow chat fall under the definition of “user-to-user services” under the Act. This categorization means these platforms will have legal responsibilities for keeping users safe online, particularly children, and will need to take steps to mitigate and manage risks associated with illegal and harmful content.
Illegal Content:
The Act requires platforms to proactively address illegal content, including Child Sexual Abuse Material (CSAM) and terrorism-related content.
Platforms like Roblox and Fortnite will need to implement:
Content moderation: Robust content moderation systems to detect and remove illegal content. This might involve using a combination of automated tools (like hash-matching for known CSAM) and human moderators.
Collaboration channels: working with law enforcement agencies and organisations like the IWF to proactively identify and report illegal content.
Child Safety and Grooming: The Act is particularly focused on protecting children from online harms, including grooming and sexual exploitation. Roblox and Fortnite, given their large child user bases, will need to implement robust safeguards to prevent and detect grooming behaviours. This could include:
Enhanced age verification mechanisms to accurately identify child users.
Restricting direct messaging or chat functionalities for children or requiring parental consent.
Providing educational resources and safety tips to children and parents.
Proactively monitoring chat and interactions for suspicious behaviour and patterns that indicate grooming.
Harmful Content:
While not strictly illegal, certain types of content can be harmful, particularly to children. This includes content promoting self-harm, suicide, eating disorders, violence, and hate speech.
Gaming environments and platforms like Roblox and Fortnite will need to develop and implement clear policies on harmful content and establish effective moderation processes to address it.
They will also need to consider the impact of their algorithms and recommender systems on user exposure to harmful content. This could involve:
Analyzing user-generated content and in-game interactions to identify and address potential risks.
Adjusting algorithms to limit the visibility of or recommendations for harmful content.
Providing users with tools to manage their online experience and filter out unwanted content.
Transparency and Reporting:
The Act emphasizes transparency and accountability, requiring platforms to be open about their content moderation practices and the risks they face.
Roblox and Fortnite will need to publish regular transparency reports detailing their efforts to combat illegal and harmful content. These reports could include:
Data on the volume of content removed or action taken.
Information about their content moderation processes and the use of automated tools.
Insights into emerging risks and challenges.
They might also need to be more transparent about their algorithms and how they impact content visibility and user experience.
Risk Assessments:
Providers of services like Roblox and Fortnite will need to conduct thorough risk assessments to identify and evaluate the specific risks of illegal and harmful content on their platforms.
These assessments should consider factors like their user base demographics, functionalities (like chat, user groups, and in-game interactions), and business models.
The risk assessments should inform their safety strategies and content moderation policies.
Overall, the UK Online Safety Act 2023 signifies a significant shift in how online platforms, including popular gaming environments, are expected to approach user safety. Roblox and Fortnite will need to invest in robust safety measures, enhance their content moderation capabilities, and be more transparent about their practices to ensure compliance and protect their users, particularly children, from online harms.
Categorization: The Act’s applicability and specific requirements hinge on how video conferencing tools are categorized. This depends on factors like user base, functionality, and whether they primarily facilitate private or public communication. If categorized as user-to-user services due to features like group chats or content sharing capabilities, they might be subject to more stringent requirements, similar to social media platforms.
Private vs. Public Communication: A core principle of the Act is the distinction between private and public communication. The Act generally avoids imposing obligations related to private communications, recognizing the importance of privacy. Video conferencing tools primarily used for private one-to-one or small group conversations might fall under this exemption. However, features enabling broader content sharing, recording, or public broadcasting could trigger additional scrutiny.
Illegal Content and CSAM: A significant concern is the potential misuse of video conferencing tools for illegal activities, including the creation and distribution of Child Sexual Abuse Material (CSAM). The Act requires platforms to take steps to mitigate and manage risks associated with illegal content. This could potentially impact video conferencing tools in the following ways:
Proactive Measures: Platforms are required to implement proactive measures to detect and prevent CSAM, potentially through partnerships with organizations like the Internet Watch Foundation (IWF) or using hash-matching technology to identify known CSAM content.
Content Moderation: Depending on categorization and functionalities, video conferencing tools could face obligations related to content moderation, requiring them to remove or disable access to illegal content. This could involve human moderation or automated tools.
Transparency and Reporting:
The Act mandates transparency for regulated services, potentially requiring video conferencing tools to disclose information about their content moderation practices, including the volume of illegal content removed and the use of proactive technologies. This could involve:
Publishing transparency reports outlining their approach to content moderation.
Updating terms of service to clearly explain user safety measures and reporting mechanisms.
Potential Impacts on Specific Features:
Recording and Sharing: Features enabling recording and sharing of video conferencing sessions could be subject to additional scrutiny due to the potential for misuse. Platforms might be required to implement safeguards, such as requiring user consent for recording or limiting sharing capabilities to prevent the spread of illegal content.
Livestreaming: If video conferencing tools offer livestreaming functionality, they might face similar obligations as video-sharing platforms, potentially requiring content moderation during livestreams and measures to prevent the broadcast of illegal or harmful content.
Messaging and Chat: Integrated messaging and chat functionalities could be treated similarly to other messaging services. Depending on the level of encryption and the platform’s categorization, there could be requirements related to illegal content detection and removal or transparency regarding data access for law enforcement purposes.
Additional Considerations:
Risk Assessments: Video conferencing tool providers need to conduct comprehensive risk assessments to identify and evaluate risks specific to their platform and features.
Age Assurance: If platforms have a significant number of child users or offer features attractive to children, they might need to implement age assurance mechanisms to protect children from harmful content or interactions.
Emerging Technologies: The Act is designed to be technology-neutral and adapt to evolving technologies. This could impact video conferencing tools as new features or functionalities emerge, requiring ongoing assessment and adaptation to comply with the Act’s principles.
Conclusion:
The specific requirements will depend on how video conferencing tools are categorized and the functionalities they offer.
Combating hate speech is a cornerstone of the OSA. Regulated providers must take decisive measures to reduce the prevalence of illegal hate speech and implement systems for detection, reporting, and removal of hate speech.
Key Duties for Platforms
Illegal Content Detection: Hate speech is classified as priority illegal content, requiring regulated internet services to identify and remove such material promptly.
Risk Assessments: Regulated providers must evaluate the risks of hate speech on their platform and develop proportionate systems to manage and mitigate these risks.
Clear Reporting Mechanisms: The platform must provide users with accessible tools to flag hate speech. Reports must be acted upon swiftly, with outcomes communicated transparently.
Transparency in Moderation
To meet the Act’s transparency standards, regulated services providers must:
Publish data on the volume and nature of hate speech flagged, reviewed, and removed.
Explain their systems for detecting and moderating hate speech in their transparency reports.
By addressing hate speech robustly, regulated services providers can align legal requirements with fostering a safer environment for users.
The process of bringing the Online Safety Act into law has been winding and subject to lengthy delays. Many of the provisions of the OSA came into force on 10 January 2024, including the new duty of care for all regulated online services and many of the powers needed by Ofcom as the regulator responsible for enforcing the OSA. However, the Act has been subject to an implementation process requiring Ofcom consultation and the issuance of Codes and guidance.
Major Deadline: All providers have a duty to assess the risk of illegal harms on their services, with a deadline of 16 March 2025. Providers will need to take the safety measures set out in the Codes or use other effective measures to protect users from illegal content and activity.
Additional key protections in respect of Category 1 providers (like X) are unlikely to be in force until 2026 or 2027. Further delay on major platforms now looks very dangerous (see below 'Democratic Threats').
The Secretary of State (Schedule 10) will determine regulations specifying Category 1, 2A, and 2B threshold conditions for different types of services. Commencement dates for remaining provisions of the Act will be set by future regulations under Section 240.
Phase 1: Illegal Harms (December 2024–March 2025)
December 2024: Ofcom will release the Illegal Harms Statement, including:
Illegal Harms Codes of Practice.
Guidance on illegal content risk assessments.
March 2025: Service providers must complete risk assessments and comply with the Codes or equivalent measures. Enforcement begins once Codes pass through Parliament.
Phase 2: Child Safety, Pornography, and Protection of Women and Girls (January–July 2025)
January 2025:
Final guidance on age assurance for publishers of pornography and children's access assessments.
Services likely accessed by children must begin children’s risk assessments.
April 2025: Protection of Children Codes and risk assessment guidance published.
July 2025: Child protection duties become enforceable.
February 2025: Draft guidance on protecting women and girls will address specific harms affecting them.
Phase 3: Categorisation and Additional Duties (2024–2026)
End of 2024: Government to confirm thresholds for service categorisation (Category 1, 2A, or 2B).
Summer 2025: Categorised services register published; draft transparency notices follow shortly.
Early 2026: Proposals for additional duties on categorised services are expected to be released.
2027: Implementation of all remaining obligations for categorised providers.
Ofcom OSA Roadmap:
Ofcom state, in the 16 December 2024 Overview, that in early 2025, they will seek to enforce compliance with the rules by a combination of means, including:
Supervisory engagement with the largest and riskiest providers to ensure they understand Ofcom's expectations and come into compliance quickly, pushing for improvements where needed;
Gathering and analysing the risk assessments of the largest and riskiest providers so they can consider whether they are identifying and mitigating illegal harms risks effectively;
Monitoring compliance and taking enforcement action across the sector if providers fail to complete their illegal harms risk assessment by 16 March 2025;
Focused engagement with certain high-risk providers to ensure they are complying with the CSAM hash-matching measure, followed by enforcement action where needed; and
Further targeted enforcement action for breaches of the safety duties where they identify serious ongoing issues that represent significant risks to users, to push for improved user outcomes and deter poor compliance.
"We will also use our transparency powers to shine a light on safety matters, share good practice, and highlight where improvements can be made."
Compliance Monitoring
Ofcom, the UK's communications regulator, will closely monitor regulated internet services' adherence to the Act. Breaches could result in substantial penalties, including fines of up to the greater of £18 million or 10% of global annual turnover (Sch. 13).
Balancing Act
Regulated providers face significant operational challenges:
Maintaining Free Speech: Striking the right balance between protecting free expression and removing harmful content is critical. Over-moderation risks alienating users, while under-moderation could attract regulatory action.
Transparency Burden: Producing detailed reports and disclosing algorithmic processes requires resources and technical clarity.
Algorithm Design: Algorithms must meet the dual demands of protecting children and fostering open debate. Regulated internet services may need to invest in redesigning their systems to comply with these requirements.
Despite concerns about notable interference in UK politics and stirring up anti-Islamic and anti-immigrant sentiment in the UK, Elon Musk is maintaining his aggression against the UK government (and the EU which has the Digital Services Act which in many respects is similar to the OSA).
In the summer of 2024, Musk personally and via his X platform helped to spread anti-immigrant, anti-Government and anti-Islamic misinformation by right-wing extremists about the tragic stabbings of a number of adults and children in Southport. This culminated in a number of riots across the UK fed by far-right extremists. The young man responsible for the tragic events in Southport was neither a Muslim nor an immigrant.
In respect of the EU's DSA, the Commission has already found X to be in breach in relation to its misuse of verification checkmarks, its blocking of data access for researchers and its lack of advertising transparency. X also remains under investigation for failing to curb (i) the spread of illegal content, such as hate speech and incitement to terrorism, and (ii) information manipulation.
Musk argued, inter alia, with Thierry Breton (formerly European Commissioner for the Internal Market) and accused the Commission of censoring free speech, of illegality and of deceit. Musk went on to tell Breton (on X) that he looks forward to exposing the truth in court (presumably Musk is hoping for a Californian jury trial). Breton must count himself lucky not to have been called a ‘pedo guy’.
Despite the continuing and accelerating attacks by Musk against the EU and the UK as they try to rein in hate crimes, unlawful content and misinformation on social media platforms, Peter Kyle (the UK’s technology secretary) recently suggested that governments need to show a “sense of humility” with big tech companies and treat them more like nation states.
We can only agree with Marietje Schaake, a former Dutch member of the European Parliament and now international policy director at the Stanford University Cyber Policy Center and international policy fellow at Stanford’s Institute for Human-Centred Artificial Intelligence (HAI), who commented on this statement as follows:
“I think it’s a baffling misunderstanding of the role of a democratically elected and accountable leader. Yes, these companies have become incredibly powerful, and as such I understand the comparison to the role of states, because increasingly these companies take decisions that used to be the exclusive domain of the state. But the answer, particularly from a government that is progressively leaning, should be to strengthen the primacy of democratic governance and oversight, and not to show humility. What is needed is self-confidence on the part of democratic government to make sure that these companies, these services, are taking their proper role within a rule of law-based system, and are not overtaking it.”
We hope that the UK Government will be more aggressive in seeking to bring powerful unelected billionaires (like Elon Musk) and organisations to account.
We can hope for the best, but we must prepare for the worst.
It is essential that Ofcom helps platforms get the balance right, as in many cases the right to be offended by someone else's views is a cornerstone of a democratic society.
“If we don't believe in freedom of expression for people we despise, we don't believe in it at all.”
(Noam Chomsky)
Protecting us from Government Monopolies on Permitted Opinions
In addition to the risk of misinformation, bias and skewed freedom of speech and opinion by operators of social media platforms we must also bear in mind the significant risk of Governments seeking to have a monopoly on which opinions are permitted. This risk is always extremely high, as witnessed, for example, by the anti-scientific approach to any debate in the UK (and US) during COVID. When science meets politics, science invariably suffers.
Note that I am not an anti-vaxxer, just astonished that civil liberties were so easily swept away simply by asserting public health grounds (are public health grounds the new 'national security' grounds?). Other countries, such as Sweden, managed to take a much more democratic and mature approach to the balancing acts required by COVID-19.
Transparency and protection of democracy and free speech must also extend to the impact of indirect political and governmental influence over regulated service providers (i.e. outside of the normal permitted legal channels) and what views about 'reality' are permitted. Transparency reports by in-scope providers must include the impact of direct and indirect political pressure and influence.
"How can I help it? How can I help but see what is in front of my eyes? Two and two are four."
"Sometimes, Winston. Sometimes they are five. Sometimes they are three. Sometimes they are all of them at once. You must try harder. It is not easy to become sane.”(George Orwell - 1984)
The difference between freedom of speech and abuse of freedom of speech
Clearly there is a big difference between the right of a man or woman on the street to take to a social media platform (or the streets) to express their concerns about policies and politics (or the genocide in Gaza) and the misuse of platforms (or platform data) or political processes by powerful vested interests to skew public opinion and spread misinformation or even racial and religious prejudice.
With great wealth and power should come great transparency and responsibility (though in our current political landscape it appears the opposite is true).
The biggest risks we face to freedom of expression and transparency about use of algorithms and filtering are when big business and governments are on the same side in wishing to suppress those freedoms.
The UK Online Safety Act 2023 represents a significant regulatory shift, especially for platforms that play a pivotal role in public discourse. By enhancing transparency, refining algorithms, and protecting democratic content, regulated internet services have the opportunity to demonstrate leadership in compliance and user safety. However, navigating these new requirements will require substantial effort, resources, and innovation.
As Ofcom enforces the Act, the response of regulated internet service providers will set a precedent for how social media platforms can adapt to an evolving regulatory landscape. However, there are significant concerns that Ofcom (and by extension the UK Government) will be slow and timid in rolling out key aspects of the OSA and in its guidance and enforcement action.
Humility is clearly not going to work with someone like Musk, and further delays until 2027 for Category 1 obligations will be deeply damaging to UK democracy.
“If UK users are continuing to experience the same problems with fake and anonymous accounts several years after the Online Safety Act came into force, it may bring into question whether the Act, or its enforcer, are fit for purpose.” (Stephen Kinsella)
AI generated artwork of the glorious self-elected defender of truth and free speech against the infidel elected representatives:
PS: a concerned reader writes in to ask about the wisdom of my use of an AI generated image of Elon Musk. Worry not, my dear sensitive reader: Musk (a true master of defamation) would have to clear two insurmountable hurdles to attack me successfully:
1. That he is not morphing into a Sith Lord (the untruth limb);
2. That suggesting he is a Sith Lord is damaging to his reputation (the damage limb).
I rest my case for the defence.