The vast majority of videos removed from YouTube towards the end of last year for violating the site's content guidelines had first been detected by machines rather than by humans, the Google-owned company said on Monday.
YouTube said that it took down 8.28 million videos during the fourth quarter of 2017, and that about 80 per cent of those videos had initially been flagged by artificially intelligent computer systems.
The new data highlighted the significant role that machines – not just users, government agencies and other organisations – are taking in policing the service as it faces increased scrutiny over the spread of conspiracy videos, fake news and violent content from extremist organisations.
Betting on improvements in artificial intelligence (AI) is a common Silicon Valley approach to dealing with problematic content; Facebook has also said that it is counting on AI tools to detect fake accounts and fake news on its platform.
But critics have warned against depending too heavily on computers to replace human judgment. Read the full article titled “Majority of Videos Removed from YouTube Detected by Machines” to find out why.
“Empowering Enterprise” is an ongoing Ingram Micro series published in every Wednesday’s edition of The Business Times. It aims to provide news and thought leadership on the latest developments in cloud and security.
The series is produced in partnership with the following vendors: Dropbox, Microsoft, VMware, Cisco, IBM, Progress, Symantec, Barracuda, Dell EMC, FireEye, Hewlett Packard Enterprise, Juniper Networks, Lenovo, Menlo Security, Adobe, BitTitan, DocuSign, NSFOCUS and Veritas.
This post may contain excerpts from an article entitled “Majority of Videos Removed from YouTube Detected by Machines” published in The Business Times on 25 April, 2018.