

“We quickly removed both the shooter’s Facebook and Instagram accounts and the video,” a Facebook spokesperson said. “We’re also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware.”

Experts say the Christchurch video highlights a fatal flaw in social media companies’ approach to content moderation.

“It’s very hard to prevent a newly recorded violent video from being uploaded for the very first time,” Peng Dong, the co-founder of content-recognition company ACRCloud, tells TIME. The way most content-recognition technology works, he explains, is based on a “fingerprinting” model. Social media companies looking to block a video from being uploaded at all must first add a copy of that video to a reference database, allowing new uploads to be compared against that footage.

Even when platforms have a reference point - the original offending video - users can manipulate their version of the footage to circumvent upload filters, for example by altering the image or audio quality. The better “fingerprinting” technology gets, the more variants of an offending piece of footage can be detected, but the imperfection of current systems in part explains why copies of the video are still appearing on sites like YouTube several hours after the initial assault.
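To make the model Dong describes concrete, here is a minimal sketch of threshold-based fingerprint matching, assuming a simple perceptual “average hash” computed per video frame; the function names and the threshold are illustrative, and production systems such as ACRCloud’s rely on far more robust audio and video features than this.

```python
# Illustrative sketch of fingerprint matching, NOT any vendor's actual
# algorithm: each frame is reduced to a 64-bit "average hash", and a new
# upload matches a reference video when the bit distance is small.

def average_hash(pixels: list[list[int]]) -> int:
    """Hash an 8x8 grayscale frame: one bit per pixel, set when the
    pixel is brighter than the frame's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_reference(frame_hash: int,
                      reference_hashes: list[int],
                      threshold: int = 10) -> bool:
    """A re-encoded or quality-degraded copy flips some bits, so we
    match on distance below a threshold rather than exact equality.
    The threshold of 10 bits (out of 64) is a hypothetical value."""
    return any(hamming(frame_hash, ref) <= threshold
               for ref in reference_hashes)
```

Because matching works by distance rather than exact equality, a fingerprint tolerates small perturbations such as re-encoding. But a sufficiently aggressive edit to the image or audio can push the distance past any fixed threshold, which is exactly the kind of circumvention the uploaders exploited.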
