
Kelby is a real person, one I have known for years, but online she barely seems to exist. She is among the vast majority of people who have never had a personal website, tried to be YouTube famous, published a press release, or even written a blog post. Her Facebook account is set to mostly friends-only, and otherwise she has basically no online footprint.

She is on Twitter, using just her first name on an account with about 10 followers. The account tweets very little that is personally identifiable; honestly, it seemingly exists to retweet Jeffree Star and Shane Dawson. Outside of a couple of photos of her child, Kelby looks indistinguishable from the common idea of a bot. While tools like Botometer do label Kelby a human, many people wouldn't instantly assume her humanity.

Fake Accounts Are Hard To Spot

By most of the metrics people look for, she looks fake: only a partial photo of her face for a profile picture, a disproportionately high ratio of retweets to original tweets, few followers, and interactions with just a small handful of accounts. She's not. She's part of, if not the majority on Twitter, at least a healthy, normal subset of users. I've been shocked by the number of times I've had real conversations with accounts that set off the same "this is a bot" alarm bells as Kelby.
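To make the point concrete, here is a minimal sketch of the kind of naive scoring those surface signals suggest. The thresholds, weights, and field names are hypothetical, made up for illustration; no platform publishes its actual bot-detection model, and this is not any real tool's logic.

```python
# Toy heuristic illustrating the surface signals described above.
# All thresholds and weights are invented for illustration only.

def naive_bot_score(account: dict) -> float:
    """Return a 0-1 'looks fake' score from a few crude surface signals."""
    score = 0.0
    if not account.get("full_face_profile_photo", False):
        score += 0.25  # partial or missing profile photo
    total = account.get("tweets", 0) + account.get("retweets", 0)
    if total and account.get("retweets", 0) / total > 0.8:
        score += 0.25  # mostly retweets, few original tweets
    if account.get("followers", 0) < 50:
        score += 0.25  # very small audience
    if account.get("distinct_accounts_interacted", 0) < 5:
        score += 0.25  # interacts with only a handful of accounts
    return score

# An account like Kelby's trips every signal yet belongs to a real person.
kelby = {"full_face_profile_photo": False, "tweets": 5, "retweets": 200,
         "followers": 10, "distinct_accounts_interacted": 3}
print(naive_bot_score(kelby))  # 1.0 -- "looks fake" by these crude metrics
```

Every one of those signals also describes a perfectly ordinary lurker, which is exactly the problem.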

Another friend of mine, Gary Leland, has been on the internet since there was an internet. Leland started an eCommerce site back in '96; if it wasn't the first online store, it was one of the first. Since then he has published thousands of hours of video and audio, at least 15 e-books, and thousands of blog posts on hundreds of websites.


Leland has largely avoided the kind of mainstream media circles that social networks try to use as benchmarks for influence and impersonation risk. Even so, he has no shortage of mentions, including in some of the earliest books about podcasting. He has the kind of online footprint that really cannot be faked.

Fake Accounts Hurt The Platforms

I have huge sympathy for the teams at social networks who must try to accommodate both the Kelbys (last name withheld) and the Lelands. Earlier today I wrote about Twitter removing 10,112 state-backed accounts spreading propaganda. It feels important that Twitter remove these accounts, but they may be harder to spot than you think.

While I haven't sifted through the terabyte and a half of data Twitter shared, one Chinese-controlled account had over 300,000 followers. It looked real, at least enough to pass the sniff test. What that account was sharing, how it was accessed, and how other accounts were coordinating with it are likely the connective threads that tipped off the team at Twitter to its being used maliciously.

So Expand Verification

Yesterday, before the Twitter announcement about the removal of over 10k accounts, YouTube changed its rules for verification. The new YouTube rules will seemingly unverify a lot of accounts. I argued that this was a mistake, but not for the reason many YouTubers were angry about on Twitter.

I don't care that verification is viewed as a sort of creator's reward. I think YouTube, Twitter, Facebook, Tinder, and Yo moma (I couldn't help myself) should all let as many people be verified as want to submit to some kind of process. Last year I wrote that, for Twitter, expanding verification could slow the spread of fake news.


The connective thread between Kelby and Gary Leland is that they are both real. I've met them both, but what's more, you don't need to take my word for it. They can both prove they are real, and they can do so without showing a list of mainstream media mentions. When I got verified on Twitter, I had to show my ID and a photo of myself; Twitter used those, along with a few other things, for identity verification.

Stop Making “Verification” An Endorsement

Unfortunately, most of these social sites insist on turning verification into a form of endorsement by subjectively determining notoriety. I get it: before closing verification requests, Twitter was flooded with people wanting the badge. But couldn't Twitter or any other social network handle verification in a way similar to EV SSL certificates? EV SSL certificates (unlike DV certs) confirm the identity of the website owner, and SSL providers like Comodo process a huge number of these requests each year.
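The analogy is visible to any client today: an identity-verified certificate carries the verified organization's name in its subject, while a domain-validated one typically asserts only a hostname. Here is a minimal sketch using Python's standard ssl module; "example.com" is just a placeholder host, and this only distinguishes certs that name an organization from those that don't, not full EV policy checking.

```python
# Sketch: inspect a site's certificate subject to see whether the issuer
# verified an organization's identity (EV/OV) or only domain control (DV).
# "example.com" is a placeholder host; swap in any HTTPS site.

import socket
import ssl

def cert_subject(host: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # getpeercert() returns the subject as nested tuples of (key, value) pairs.
    return {k: v for rdn in cert["subject"] for (k, v) in rdn}

subject = cert_subject("example.com")
if "organizationName" in subject:
    print("Identity-verified certificate for:", subject["organizationName"])
else:
    print("Domain-validated certificate only; no organization identity asserted.")
```

A social network could expose verified identity the same way: a badge backed by a documented vetting process, visible to anyone who cares to look, with no editorial judgment about who is notable enough to deserve it.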

It's true that government propaganda groups would still be able to create accounts the platforms would go on to verify; if you can manufacture government-issued IDs, it is easy to game the system. But the badge would mean less, because more people would have it. And while government groups could still fabricate "real" people, trolls, hackers, and non-state propagandists would have a much harder time exploiting the platform. Even without IDs, Tinder figured out some level of verification.

 



Mason Pelt is the Founder and a Managing Director of Push ROI. Follow him on his blog. Header Image: "Twitter" by chriscorneschi