By nScreenMedia’s Lloyd Dixon
YouTube stumbled into yet another ad scandal this week in what some are calling Adpocalypse 2.0. This time children are being exposed to extreme, violent, sexual, and disturbing content. This is an all too familiar story for YouTube. Can the problem finally be fixed with technology or is a more traditional approach called for?
Kids exposed to dangerous videos
In March 2017, YouTube was hit with what has come to be known as the Adpocalypse. Advertisers found their ads placed against extremist content. YouTube’s reaction was to make aggressive changes to its curation algorithms to enforce stricter monetization rules. These stricter rules angered many creators who saw their revenues drop or disappear entirely. Moreover, some of the impacted content should never have been “demonetized” in the first place. However, the move did restore advertiser confidence and many have resumed advertising on the video site.
Unfortunately, YouTube once again finds itself in hot water with advertisers, users, and creators. “Bad actors” have been gaming YouTube’s recommendation algorithm to reach children. For example, colorful thumbnails, an important piece of YouTube video promotion, have attracted children to content many would find disturbing.
Toyfreaks, a popular YouTube channel with 8.5 million subscribers, is one of the creator channels that has been terminated. The channel featured videos from one father showing:
- Him dumping a bucket of frogs into a bathtub to scare his children
- Him pulling his daughter’s baby tooth and filming her crying while her mouth bled
The creators used shock humor to entertain the audience, and many found the videos unacceptable for children.
Other channels went even further, featuring bloody gore and popular kids’ characters in sexually explicit situations. Worse, the comment sections of these videos attract not just kids but, at times, pedophiles. The outrage has pushed YouTube to make drastic changes.
YouTube falls back on real people
YouTube relies on its artificial intelligence (AI) algorithms to find and eliminate inappropriate content. However, in a tacit admission that the AI approach alone is not enough, the company announced that it would hire 10,000 people to review content and comments. This represents a 25% increase in the number of staff at the AVOD giant.
The curation system will continue to use its algorithms, which have removed more than 150,000 videos since June. However, it can no longer rely solely upon this approach.
This is a big blow to the company. It has enjoyed high profit margins partly because it spent so little on human curation. The Adpocalypse could be shifting the balance back toward human review. In a blog post, Susan Wojcicki, YouTube’s CEO, wrote:
“Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content.”
She said that moderators have manually reviewed nearly 2 million videos for violent extremist content since June. They are also helping to train machine-learning systems to identify similar footage in the future. Ms. Wojcicki claimed that “advances in technology have allowed the site to take down 70% of extremist content within 8 hours of uploading.” Whether or not this is true is up for debate. However, the Adpocalypse has shown that systems sophisticated enough to monitor all the comments do not yet exist. Unfortunately, pedophiles and others seeking to harm children can lurk in these sections.
Many, including advertisers, view the deletion of thousands of accounts due to stricter guidelines as a welcome move. Creators, on the other hand, remain worried. There still seems little transparency between creators and YouTube on what content is acceptable and what is not.
Creator accounts continue to be deleted, sometimes regardless of whether their content is appropriate. Many more creators are seeing videos demonetized for no apparent reason and with no explanation. YouTube has an appeals process, which may move faster now that more people are involved. However, by the time an appeal succeeds, creators have already lost most of the revenue a video would have earned.
It remains to be seen whether the 25% increase in staff can keep up with the 400 hours of video uploaded to YouTube every minute.
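To put that figure in perspective, here is a rough back-of-the-envelope calculation. The 400-hours-per-minute upload rate and the 10,000 hires come from the article; the eight-hour reviewer shift is an illustrative assumption, not a published YouTube figure.

```python
# Back-of-the-envelope estimate of the human review workload.
# Upload rate and hire count are from the article; the shift
# length is an assumption for illustration only.

UPLOAD_HOURS_PER_MINUTE = 400      # stated upload rate
MINUTES_PER_DAY = 60 * 24

daily_upload_hours = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY
# 400 * 1,440 = 576,000 hours of new video per day

REVIEWERS = 10_000                 # announced hires
REVIEW_HOURS_PER_DAY = 8           # assumed shift length

daily_review_capacity = REVIEWERS * REVIEW_HOURS_PER_DAY
# 10,000 * 8 = 80,000 reviewer-hours per day

coverage = daily_review_capacity / daily_upload_hours
print(f"New video per day:  {daily_upload_hours:,} hours")
print(f"Review capacity:    {daily_review_capacity:,} hours")
print(f"Real-time coverage: {coverage:.0%}")
```

Under these assumptions, the new hires could watch only about 14% of each day’s uploads in real time, which underlines why the algorithms must still do most of the filtering.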
The YouTube Adpocalypse debacles should serve as a warning to any company relying on automation and artificial intelligence to curate large amounts of user-generated content. Facebook, Google, and Twitter should particularly take note.
Why it matters
The limitations of algorithm-based content curation are becoming painfully apparent.
A second scandal has rocked YouTube, with automated curation tools failing to eliminate highly inappropriate content targeting children.
YouTube is hiring 10,000 employees to cope with the crisis. Is this a tacit admission that humans are still needed in the automated curation of user-generated content?