Posted December 02, 2018 08:30:23

After years of using algorithms to target content at specific individuals, social media giants are turning to data analytics to tackle hate in the digital space.
Facebook and Google are now working on algorithms to detect and prevent specific types of hate speech and discrimination, according to a report from The Australian Financial Review.
The report reveals the companies are using the new technologies to find “the right message” for a targeted audience.
The report says the companies have been working with Australia’s National Hate Crime Council and its data analysis arm, the Anti-Fascist Information Resource Centre, to use “deep learning” techniques to identify potential hate speech offenders.
The Anti-Hate Crimes Commission is working with the tech companies to identify potential anti-Semitism and other forms of bias that can be exploited for online harassment.
A spokesperson for Facebook told the news outlets that the company was looking at ways to help identify individuals using “features like language, location and social context”.
“We’re also looking at using machine learning to identify people who are known to share content that’s discriminatory and hateful,” the spokesperson said.
“This can be a problem for people who don’t want to be identified as sharing such content.
We’re looking at how we can address that.”
The company also announced that it is working on “a set of tools and services to help users identify and prevent online hate”.
It’s not the first time the tech giants have used data to combat online hate.
Google has been working to combat hate speech on its search engine, but in recent months the company has also started using its algorithms to help target hate speech, including in tweets and posts shared by users.
This is a big step forward in social media and in the fight against hate, says Johnathan Rennie, director of research at the Centre for Cyber and Homeland Security at the University of Sydney.
He says the new tools could have a positive impact on the fight for free speech and the ability of individuals to speak freely online.
“If you look at the companies that have been using data to fight hate, they have had a number of successes in the past year, and they’ve been working really hard to identify hate speech that’s being shared on social media,” he said.
He said that “deep neural networks” are already being used to identify individuals on Facebook who are engaging in violent and antisocial behaviour.
“So, the next step is to be able to identify the hate speech of that person on Facebook and to try to reach them and the other people who share that hate speech,” he explained.
“That’s really what this is about.”
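The approach Rennie describes, training a model on labelled examples and then scoring new content, can be illustrated with a toy bag-of-words naive Bayes classifier. This is a deliberately simplified sketch, not any company’s actual system; the training phrases, labels and threshold below are invented for the example, and real moderation models are trained on millions of human-reviewed posts:

```python
from collections import Counter
import math

# Toy labelled data (invented for illustration only).
# Label 1 = hateful, label 0 = benign.
train = [
    ("we should hurt those people", 1),
    ("that group does not belong here", 1),
    ("go back where you came from", 1),
    ("what a lovely day at the beach", 0),
    ("great game last night everyone", 0),
    ("happy birthday to my friend", 0),
]

def train_nb(examples):
    """Count word frequencies per class for a naive Bayes scorer."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def score(text, counts, totals):
    """Return the log-odds that `text` belongs to the hateful class."""
    vocab = set(counts[0]) | set(counts[1])
    log_odds = 0.0
    for word in text.split():
        # Laplace smoothing so unseen words don't zero out the score.
        p1 = (counts[1][word] + 1) / (totals[1] + len(vocab))
        p0 = (counts[0][word] + 1) / (totals[0] + len(vocab))
        log_odds += math.log(p1 / p0)
    return log_odds

counts, totals = train_nb(train)
print(score("those people should go back", counts, totals) > 0)  # True: flagged
print(score("lovely game with my friend", counts, totals) > 0)   # False: not flagged
```

Production systems replace the word counts with deep neural networks that capture context rather than isolated keywords, but the pipeline is the same in outline: train on labelled content, then score new posts against a threshold.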
Google said that the technology was being used for targeted analysis, but the company could not reveal specific details about how it uses the technology to identify users.
“The data is not publicly available, and we don’t share any of it with our users,” the company said.
“But the technology is being used on Google’s search engine to identify content that may be dangerous to the public.”
The Australian Privacy Commissioner is investigating Google and Facebook for allegedly failing to protect privacy.
The Australian Crime Commission has also launched a public inquiry into how Facebook uses its technology.