Content Brief: false metrics, Russian hackers, and Facebook’s latest PR move
We’re one month into 2017 and already a lot has happened. Here’s the lowdown on the rise of false metrics, what Russian hackers are up to, and Facebook’s latest PR stunt—and other things you missed in content marketing this past month.
Platforms reported false metrics and no one can stop them
Facebook has come under fire again for reporting false metrics to publishers, brands, and advertisers. In December, Marketing Land reported that Facebook had misreported engagement numbers for links and live videos, such as user reactions and like/share counts. In November, Facebook disclosed it had miscalculated the organic reach of posts, video completions, and time spent on Instant Articles. And in September, The Wall Street Journal reported that the social network had overestimated average video viewing time for nearly two years.
Yet the problem isn’t exclusive to Facebook. In late December, Business Insider reported that a bug in Twitter’s Android app inflated its video advertising metrics by as much as 35 percent. And in early January 2017, it was widely reported that a former Snapchat employee had filed a lawsuit alleging the company inflated its growth metrics.
So why can platforms continue to report false metrics? And why can’t anyone stop them?
For one, they have bigger priorities (fake news, anyone?), and these platforms hold an immense amount of power over advertisers. For a few years now, social media platforms have steadily restricted third-party access to their data (see the fall and resurrection of SharedCount). Marketers have no power to stop these platforms from reporting false metrics, beyond amplifying the industry’s call for third-party measurement verification.
Even then, Facebook, Twitter, and other platforms can continue to misrepresent data precisely because they reach such large audiences, and advertisers need to keep advertising. Despite the kinks in these platforms’ reporting, there is no reason to believe Facebook, Twitter, or the other major players will stop reporting false metrics.
Russian hackers pulled off the largest ad fraud scheme in history
Russian hackers are everywhere in the news lately, but this alarming news story may not have caught your attention. According to a report by the cybersecurity firm White Ops, a group of Russian hackers appears to have pulled off the largest ad fraud scam of all time by showing real video ads to fake people on fake websites for “real” clicks.
The ad fraud scheme, known as “Methbot,” required the hackers to reverse engineer the ad-quality verification process so that fake views would pass as real ones. The hackers created over 6,000 domains with 250,000 unique URLs to impersonate publishers like ESPN and Vogue, siphoning the most profitable video ads to hacker-controlled sites. Then they supplied a fake audience to “watch” the ads, using hundreds of thousands of legitimate IP addresses, manipulated geolocation software, social media logins, and thousands of Methbot instances programmed to move the mouse across the screen, click on the ads, and “watch” them. All the while, the scheme turned the advertising industry’s best practices against it to make between $3 million and $5 million each day.
The Methbot operation gamed the advertising system to earn a fortune from fake sites, not unlike how the Macedonians profited from their fake news scheme. It is a stark reminder of how easy it is to manipulate adtech, and commit ad fraud, for one’s own gain.
Facebook took steps to embrace journalism but not without criticism
Facebook has finally begun to accept that it’s not only a tech company but also a media company. As the largest social networking platform in the world, Facebook felt pressure from critics, partners, media organizations, and users to take a stand against fake news, embrace its media role, and commit to journalism. So it made two big PR plays to appease its critics.
The first move came on January 6 with the hiring of Campbell Brown, a former NBC and CNN anchor, as a news partnerships manager. As New York Magazine noted, Facebook’s latest hire is an effort to placate media critics who called for Facebook to hire an editor-in-chief. Brown, however, is working on the business side, not editorial, of Facebook. “She’s not responsible for eliminating fake news so much as making Facebook a welcome place for news organizations—and the ad dollars that Facebook hopes might accompany them. And her background makes it clear which news organizations in particular Facebook is interested in.”
The second move, on January 11, was the unveiling of the Facebook Journalism Project. The project’s overarching goal is to build stronger relationships with journalists and newsrooms, and it has three major objectives:
- Collaborative development of news products, such as new storytelling formats, initiatives for local news and business models, and hackathons
- Training and tools for journalists, such as e-learning courses and CrowdTangle
- Training and tools for everyone, which includes an unspecified set of measures against fake news
Now, some of these objectives cover things Facebook already does and is looking to expand upon, such as the hackathons and new storytelling formats. But the project sounds eerily similar to Google’s Digital News Initiative. That’s not a bad thing in itself, but it also doesn’t feel specific enough to address the biggest concern facing the platform (i.e., fake news).
Though we have to commend Facebook for finally taking some steps, it’s easy to read between the lines of what they are doing to see what they really want to accomplish.
Image credit: Lorenzo Cafaro via Pexels