Online Harms Regulatory Principles

  • February 28, 2019

IA is the only trade association that exclusively represents leading global internet companies on matters of public policy. IA has over 40 members, in areas ranging from search (e.g. Google) to hosting platforms, social media (e.g. Facebook and Twitter), digital services, and beyond. In November 2018, IA established a London office to help ensure that internet companies operating in the UK can continue to make a positive contribution to the economy, innovate for social good, and enrich people’s daily lives.

We welcome the government’s review and consideration of the important policy area of internet safety, and we want to engage constructively in the policy dialogue. The internet delivers substantial benefits to UK consumers, the economy, and society, and we believe the sector needs a balanced policy environment to continue and grow this contribution in future.

IA member companies are committed to addressing online harms and undertake significant activity in this area, from investment in hiring teams and improving systems for detecting and removing inappropriate or abusive content; to participation in and leadership of global bodies like the Internet Watch Foundation and Global Internet Forum to Counter Terrorism. We also recognise that industry needs to keep improving and do more to keep people safe online.

Online Harms Regulatory Principles

IA supports the development of new online safety regulation that strikes the right balance between addressing concerns about harms and enabling the internet economy to grow. IA member companies are on the record supporting “smart regulation and policy innovation” and the “right regulations [as] an important part of a full system of content governance and enforcement”, and IA is helping to convene industry thinking in the UK on future online safety regulation.

IA and its member companies encourage the government to take a principles-based, proportionate approach to further online safety regulation. We have therefore developed a series of principles which we believe will help achieve the joint aims of tackling online harms while maintaining a pro-innovation environment. We believe that any further online harms regulation must:

  1. Be targeted at specific harms, using a risk-based approach that recognises that different harms need different solutions. Not all harms are the same – they vary in their levels of seriousness and impact – and regulation should also recognise the differing levels of risk in order to provide the most effective protection for consumers and society. Regulation should, for example, recognise the distinction between clearly illegal content and content which is harmful, but not illegal, and enable different approaches to tackling these categories of content. Such an approach would drive change at a systemic level, and encourage better stewardship of online spaces.
  2. Provide flexibility to adapt to changing technologies, different services and evolving societal expectations. Change happens fast in the internet sector, with new services and technologies constantly emerging. The internet sector is not a homogeneous group of companies; rather, it is made up of many different products, services and business models delivered over the internet. Public expectations of what is and is not appropriate also change over time and vary between countries. A new regulatory model must be sufficiently flexible to account for this diversity if it is to be effective in both addressing online harms and allowing consumers to enjoy a wide range of differentiated services.
  3. Maintain the intermediary liability protections that enable the internet to deliver significant benefits for consumers, society and the economy. The internet has flourished in part because platforms permit users to post and share information without fear that those platforms will be held liable for third-party content. Dilution of intermediary liability protections would encourage internet companies to engage in over-censorship for fear of being held liable for content, with a consequential impact on freedom of speech. Intermediary liability protections also play a critical role in driving economic growth, by enabling new companies to invest and launch new services in the UK and enabling existing companies to innovate, scale and grow their businesses.
  4. Be technically possible to implement in practice, with particular attention given to the ability of smaller companies to comply. Internet companies have made significant technological advances in the identification of potentially harmful content online, for example through artificial intelligence and machine learning. While this innovation continues apace, regulation should recognise the limits of current technology to monitor and remove content. It should also take into account that resources available for this type of activity vary between companies, with smaller companies likely having less capability than larger companies to implement significant technological solutions.
  5. Provide clarity and certainty for consumers, citizens and internet companies in a manner that builds and maintains public confidence in the process. Consumers should be clear on the legal boundaries for what is and is not acceptable in terms of content and behaviour online, on where they can expect to be protected, on appeal and redress mechanisms, and on when it is appropriate to notify companies to take down harmful content. Internet companies should have clear guidance on what types of content and behaviour are and are not acceptable on their services, rather than making decisions on free speech and public safety on their own. An education/citizenship campaign – promoted jointly by government, industry and NGOs – could be developed to raise awareness of what constitutes acceptable behaviour online, ideally with reference back to what is considered by society to be acceptable behaviour offline.
  6. Recognise the distinction between public and private communications, and the implications of government involvement in the latter. The reach and impact of communications online differ according to the service used. A service that enables one-to-one private communication between individuals is different from a service that broadcasts communication to the public, which is different again from a service that provides information in response to a consumer request. While regulation quite rightly has a role to play in relation to public communications, care should be taken to avoid regulation encroaching into the surveillance of private communications, which should be a matter for individuals.

Next steps

IA and our members will continue engaging closely with the government on the Online Harms White Paper. IA is clear that the UK needs a policy environment which strikes a balance between the opportunity for growth and innovation on the one hand, and the need to address concerns about online harms on the other.