Google Will Hire 10,000 Employees To Moderate YouTube

For instance, Wojcicki says that YouTube will have 10,000 people working to address questionable content by 2018, noting that human reviewers are essential to training the company's machine learning systems. YouTube has not previously revealed the size of its content review team, though the figure is thought to currently be in the high thousands.

YouTube already relies on machine learning technology to remove violent extremist videos and will expand its use to flag videos and comments that contain hate speech or content harmful to children.

Several advertisers, including Mars Inc., Adidas and Diageo, said they would pull their campaigns from YouTube in the aftermath, fearing the videos would attract pedophiles, according to The Wall Street Journal.

Though it's unclear whether machine learning can adequately catch and limit disturbing children's content, much of which is creepy in ways that may be hard for a moderation algorithm to discern, Wojcicki touted the company's machine learning capabilities, when paired with human moderators, in its fight against violent extremism.

"We need an approach that does a better job determining which channels and videos should be eligible for advertising", she said.

As media coverage of inappropriate YouTube content continues to grow, especially alongside rising mobile usage among United States children, the Google-owned platform has been vocal about taking initial steps to ensure its youngest viewers aren't exposed to nefarious material.

The video-sharing website overhauled its policies to restrict what type of content can appear, while investing heavily in machine learning technology that takes down videos and comments violating its policies.

"We are planning to apply stricter criteria, conduct more manual curation, while also significantly ramping up our team of ad reviewers to ensure ads are only running where they should", she wrote.

It's unclear when the advertising changes will go into effect. "We've heard loud and clear from creators that we have to be more accurate when it comes to reviewing content, so we don't demonetize videos by mistake".

In a pair of blog posts today, the company elaborated on its strategy for weeding out the rising tide of video content that has turned services such as YouTube, Facebook, and Twitter into bottomless cesspools of fake news, terrorist propaganda, and Nazi-fueled rage.

"A Lyft ad should not have been served on this video", a Lyft spokesperson told BuzzFeed News. The company's newest measures aim to take things one step further.

Vanessa Coleman
