What More Can Be Done To Stop Racist Abuse Online?
By Paul Seddon
TECH DIGEST – Racist abuse directed at black England footballers has intensified calls for stronger action to stamp out hate online.
Marcus Rashford, Bukayo Saka and Jadon Sancho were targeted after they missed penalties in the Euro 2020 final.
The government has promised its Online Safety Bill will do more to force social media companies to act.
The abuse has also prompted questions about what more can be done elsewhere.
What about criminal prosecutions?
There is already legislation in place in the UK that can be used to prosecute abuse online.
In England and Wales, online abuse is covered by two main communication laws dating from 1988 and 2003.
The Law Commission, which is reviewing those laws on behalf of the government, says they are ill-suited to the modern nature of internet abuse.
It adds that another avenue for prosecution, under public order laws, presents “real challenges”, as those laws were not designed to apply online.
England manager Gareth Southgate also says a lot of the abuse received by his players comes from abroad, which makes things even more difficult.
Prosecuting abusers based outside the UK poses legal problems, and can make it much more difficult and time-consuming for police to gather evidence.
And the Commission cautions that the limited resources of police and prosecutors make it impossible to deal with all abuse through the justice system.
The government says it will look at the Commission’s final recommendations when it reports back later this year.
Banning fans from games
The government says it wants those found guilty of online racist abuse to be banned from attending football matches.
Prime Minister Boris Johnson announced the move three days after the Euro 2020 final, after Labour called for the measure.
Currently, courts can only ban fans from attending games in the UK for racist chanting at or near a football ground.
The three-to-10 year bans also allow authorities to force UK-based fans to hand over their passport to police during matches or tournaments overseas.
According to the Home Office, there were 1,621 banning orders in force at the end of last season. Around half were for public or violent disorder.
Since Sunday’s match, more than one million people have signed an online petition calling for tougher rules to ban abusers from attending games.
Some clubs have also enforced bans of their own after fans have been found guilty of sending racist abuse online.
Another idea occasionally suggested for tackling online abuse is the use of court-ordered behaviour orders.
A range of Criminal Behaviour Orders – which replaced former Anti-social Behaviour Orders or ‘Asbos’ – can be used to restrict the behaviour of individuals.
Some have argued that prosecutors could widen their guidance to promote greater use of restrictions on internet use for abusers.
In a 2015 report on anti-Semitism, a group of MPs also suggested prevention orders restricting sexual offenders from using the internet could be extended for hate crime offences.
What are social media companies doing?
Facebook, which also owns Instagram, said it had “quickly removed comments” directed at players on its platforms.
Twitter said that it had taken down over 1,000 tweets directed towards England players in 24 hours, and permanently suspended a “number of accounts”.
The firm said this happened after reviews from human moderators, as well as automated technology.
Social media companies are responsible for taking content down, and are under pressure to do more in this area.
The government argues its Online Safety Bill will address this, by giving ministers and regulators greater powers over firms, which up until now have been largely self-regulating.
Ofcom, the body that regulates traditional broadcasters, will be able to issue fines of up to £18m or 10% of global turnover, whichever is higher, if firms fail to comply with the new rules.
It will also gain the power to block access to sites in the UK.
Does the bill do enough?
The government has faced criticism over the time taken for the Online Safety Bill – first conceived by Theresa May’s government in April 2019 – to come to fruition.
Although a draft version has been published, it is still undergoing initial scrutiny from MPs and others and is yet to start its journey into law.
The bill’s passage through Parliament is also likely to be complicated by concerns that some of its provisions could harm freedom of speech.
Labour says it will push to introduce criminal sanctions for senior executives at tech companies who fail to enforce rules on abuse.
And there has been criticism the bill doesn’t do enough to prevent abuse from anonymous users, who are more difficult to identify and punish.
Some have suggested people should have to provide ID before they are able to open an account.
But the government has rejected this, arguing it would “disproportionately impact” users such as young people exploring their sexual identity and victims of abuse, who rely on anonymity to protect themselves.
It said firms would have to stop “repeat offenders” from opening new accounts in order to meet their new duty of care.