‘Britain will have the toughest internet laws in the world’, Sajid Javid proudly announced on Monday 8th April. Should someone remind him of the state of North Korea, or of the women in the Arabian Peninsula, whose political freedoms the foreign secretary should be working so hard to secure? Whilst Javid may have inadvertently glossed over the unfreedom of millions of people, we understand the sentiment as he marked the publication of the government’s white paper, entitled ‘Online Harms’.
The call for government action against the monsters of the internet has been going on for a long time. Sadly, it took the death of Molly Russell, who took her own life after being influenced by online content, to make the world listen. For whilst there are laws in place against Daesh propaganda and fake news, for example the 2006 Terrorism Act and the 2003 Communications Act, in cases like Molly Russell’s there were no preventative measures, so ‘Online Harms’ comes as a vague and imprecise step in the right direction.
With ‘harm’ not defined explicitly, it is difficult to establish what ought to be controlled, or whether the punishment for failing to protect the vulnerable from ‘harm’ is correct or appropriately severe. The financial cost to tech companies could be up to 4% of their turnover under such laws, and having failed to self-regulate their own services prior to this proposed legislation, they have made it clear that they are uninterested in the ethical side of their business, profit being their only motivation. The new legislation encourages them to cover their own platforms, and could lead to bans on unpopular opinions that are still within someone’s right to free speech.
Britain has an ever-expanding list of protected characteristics, such as homosexuality or being female. Engagement and discussion on these topics, or with people who belong to these groups, could become impossible online if companies are forced to abandon objectivity so as not to face criticism. We must allow for different ideas that aren’t offensive or dangerous, and avoid being politically correct for its own sake. Vagueness over ‘harm’ will motivate companies to safeguard their users for all the wrong reasons, in all the wrong ways: without room for debate, and restricted to societal norms, we may face stagnation as a society.
It is a good step forward, but there could be detrimental consequences for smaller companies. News outlets may decide to ban user-generated content, such as letters to the editor, if they judge the risk too great, encouraging monopolies by companies such as Facebook, whose profits will barely notice a 4% loss and who have the money to absorb mistakes.
I think this is a good bandage for the problem, an instant fix to stop immediate harm, but it cannot remain in this form for long without encroaching on our free speech. There are gaping wounds beneath the bandage that must be stitched up before the internet can become a safe place.