To begin with, it will likely become increasingly difficult for social media platforms to justify banning the purchaser of fake engagement; if such a policy became common, competitors or antagonists could buy followers / likes for genuine accounts and get them into trouble. Thus, for social media services like Facebook and Twitter, the battle will likely be with the actual holders / operators of fake accounts (and proxy re-sellers of their services), essentially de-risking purchasers¹.
The extent to which these large platforms can accurately identify, label, and take action against fake accounts today is also questionable. Using a few of the popular “fake checking” sites, it quickly becomes apparent that even some of the most legitimate brands (@nytimes, for example) owe large percentages of their following to supposedly fake accounts. Granted, this labeling of fake accounts by each site operator is a fallible process; however, when results persist across “fake checking” sites, suspicion seems warranted. In general, “bots” as a genre continue to grow in both number and quality on the internet. See estimates of how many social media accounts are fake. Hence, this is also an area of concern for platform operators.
To further stir the pot, it’s possible to imagine an evolution of these black hat social services as enabled by smart contracts. When a user discovers a service via Google that, say, delivers Twitter followers, there’s no way for the user to gauge whether they will actually receive the services described upon purchase. Most of these sites present marketing messages like “reviews” and “ratings”; however, these could easily be (and typically are) fake. It’s very much a “Wild West” industry.
With a smart-contract-based system, users would be able to lock up funds in the contract that would only be delivered to the seller upon meeting certain criteria. In the case of Twitter followers, for example, an oracle would record the user’s initial number of Twitter followers to the chain upon purchase. Then, when the seller attempts to collect payment after delivery, the oracle would once again record the number of Twitter followers on the user’s profile to the chain. The payment to the seller would then only be released if the number of Twitter followers increased by the promised amount, enabling a mostly trustless exchange². In addition, note that some oracle services (like Oraclize) offer encrypted queries, enabling purchases to be private (in that only the purchaser, seller, and oracle are aware of the account receiving the service).
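The escrow flow above can be sketched in plain Python (rather than an actual contract language); the class, method names, and oracle interface here are illustrative assumptions, not a real API:

```python
# Minimal sketch of the escrow logic: buyer locks funds, an oracle records
# a baseline follower count, and settlement releases funds to the seller
# only if the promised increase was delivered. All names are hypothetical.

class FollowerEscrow:
    def __init__(self, buyer, seller, promised_delta, payment):
        self.buyer = buyer
        self.seller = seller
        self.promised_delta = promised_delta  # followers the seller promised
        self.payment = payment                # funds locked by the buyer
        self.baseline = None

    def record_baseline(self, oracle_count):
        """Oracle records the buyer's follower count at purchase time."""
        self.baseline = oracle_count

    def settle(self, oracle_count):
        """Oracle re-checks the count when the seller claims payment."""
        delivered = oracle_count - self.baseline
        if delivered >= self.promised_delta:
            return ("seller", self.payment)   # delivery met: release funds
        return ("buyer", self.payment)        # otherwise refund the buyer

escrow = FollowerEscrow("buyer", "seller", promised_delta=1000, payment=50)
escrow.record_baseline(12_000)
print(escrow.settle(13_050))   # → ('seller', 50)
```

A real deployment would replace the `oracle_count` arguments with signed on-chain reports from the oracle service, but the release condition is the same comparison.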
This system could be further augmented to distinguish sellers by longevity and quality of delivery. Measuring the longevity of a delivery is easy to imagine - we can set up a contract in which the oracle continues to check follower / like / etc. counts over an extended period, with the seller paid out over that same period, perhaps governed by a payment curve that makes sense for the type of delivery.
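The longevity idea might look like the following sketch, where the oracle re-checks the count at each of several checkpoints and the seller earns one installment per checkpoint the delivery survives (the function and its parameters are assumptions for illustration):

```python
# Hypothetical longevity payout: one installment per periodic oracle check
# at which the promised follower increase is still intact.

def longevity_payout(baseline, promised_delta, total_payment, oracle_counts):
    """Return the total amount paid to the seller across all checkpoints."""
    installment = total_payment / len(oracle_counts)
    paid = 0.0
    for count in oracle_counts:
        if count - baseline >= promised_delta:
            paid += installment
    return paid

# Four weekly checks; the delivery holds for three, then followers drop off.
print(longevity_payout(12_000, 1_000, 100.0, [13_100, 13_050, 13_020, 12_400]))  # → 75.0
```

Flat installments are the simplest curve; a front-loaded or back-loaded schedule could be substituted depending on the type of delivery.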
“Quality” is another metric often advertised by sellers. Once again, these claims are rarely backed by anything substantial and are often simply invented by the seller, since it serves them to have buyers believe them. Here, we’re using “quality” to refer to how “real” the followers / likes / views / etc. are, or how “real” they appear to be. Essentially, as a buyer, you’d rather have social signals that appear genuine vs. those that appear fake. Often, “quality” comes at a premium.
To actually distinguish quality between sellers, we can employ a number of techniques. For instance, what if instead of simply using a “count the number of followers” oracle, you employed one of the “fake checking” services above? Users could demand that the delivery of a service not only increases their overall follower count but also keeps their estimated “fake follower percentage” under a certain threshold, thus forcing sellers to deliver quality followers which do not trigger flags of “fakery” as measured by these “fake checking” services. If you employed a composite score built from multiple “fake checking” services, it would be even more robust. This technique would apply equally well to likes, views, and other social signals so long as sites exist that attempt to judge fakeness of those signals on the platform in question.
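A composite quality check of this kind can be sketched as follows; the function name, the averaging scheme, and the per-service percentages are assumptions (real “fake checking” services would each report through an oracle):

```python
# Hypothetical release condition combining growth with a composite
# fake-follower score averaged across multiple checking services.

def quality_release(baseline, promised_delta, final_count,
                    fake_percentages, max_fake_pct):
    """Release payment only if the count grew as promised AND the
    averaged fake-follower percentage stays under the buyer's threshold."""
    grew_enough = (final_count - baseline) >= promised_delta
    composite_fake = sum(fake_percentages) / len(fake_percentages)
    return grew_enough and composite_fake <= max_fake_pct

# Three services report 18%, 22%, 20% fake; the buyer demanded <= 25%.
print(quality_release(12_000, 1_000, 13_100, [18.0, 22.0, 20.0], 25.0))  # → True
```

A weighted average, or a rule requiring every individual service to stay under the threshold, would be stricter variants of the same idea.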
Another potential gauge of the “quality” of a given fake follower could be the amount of time it is allowed to remain on the platform (in the sense that the platform operator has yet to determine that it is fake and take action against it, implying some level of quality). Instead of an oracle simply checking the count of followers over an extended period of time, it could check that some subset of the original delivery of fake followers remain followers of the user / unbanned on the platform. This technique can only apply to situations in which the underlying accounts that supply the likes / follows / etc. are public (and can thus be viewed by the oracle) - for example, it wouldn’t work in the case of purchased YouTube views where we’re unable to see which individual accounts gave views.
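The retention check described above can be sketched like so; the account IDs and the set-intersection approach are illustrative assumptions about what the oracle would snapshot:

```python
# Hypothetical retention check: the oracle snapshots the set of delivered
# accounts at purchase time, then later verifies what fraction of that
# original set still follows the buyer (i.e. survived platform moderation).

def retention_ok(delivered_ids, current_follower_ids, min_retained_fraction):
    """True if enough of the originally delivered accounts remain followers."""
    retained = len(set(delivered_ids) & set(current_follower_ids))
    return retained / len(delivered_ids) >= min_retained_fraction

delivered = ["acct_%d" % i for i in range(100)]        # 100 delivered followers
still_following = set(delivered[:90]) | {"organic_1"}  # 90 remain, plus others
print(retention_ok(delivered, still_following, 0.85))  # → True
```

As noted, this only works where the underlying accounts are publicly visible to the oracle; opaque signals like YouTube view counts can’t be checked this way.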
Finally, none of the schemes here need to be all or nothing with respect to the compensation sellers receive - one can imagine a number of performance-based payment curves depending on the scenario.
From this overall system, some key side effects emerge:
- A sort of reputation / performance record would build up over time on-chain for each vendor, further enabling transparency for buyers.
- Being that the service would be entirely on-chain (besides the oracle component), it could be difficult to shut down.
- Given that the service is “crypto-native”, it would allow payment with cryptocurrency, which is typically harder to control / may allow for entirely anonymous payments depending on which chain the system lives on.
- It may serve to “commoditize” / make efficient these markets for black hat social media services, as it serves as a canonical marketplace with objective quality measures.
All of the major social media platforms of the day are “vulnerable” to this system in one way or another. The major chokepoint of a system like this remains in detecting the fake accounts and removing them from the platform - a perfect version of this can of course nerf any botting system.
The essence of this situation then is the face-off between these bot detection algorithms created by the platforms and those algorithms which attempt to create human-like bot behavior. Social signals are currently valuable - just look at the percentage of Americans who get news from social media, and for a more anecdotal form of proof, examine the primary information sources of your peers. Assuming this trend continues, either side of the “fight” is incentivized to devote more resources to improving the sophistication of their algorithms - especially if there is an entirely commoditized (smart-contract-based) commercial marketplace driving demand on the botting side (vs. the interests of a public company / perhaps the public at large on the other).
¹ One very notable exception is where the legal system may enter the fray - in the case of, say, a business purchasing fake Yelp reviews and having it classified as fraud, or a paid influencer misrepresenting their following to advertisers, or a Twitch streamer essentially stealing money from the platform with fake Prime subscriptions. However, even in these cases, it may prove difficult to establish that the person receiving the fake social signals was the original purchaser, especially given the rise of digital currencies that enable anonymous payments. That said, this is not a trivial matter, and the legal system may end up playing a large role in the evolution of social media platforms going forward.
² This system does rely on trusting the oracle; however, we can imagine a decentralized oracle system like ChainLink helping to reduce this dependence, or even argue that a centralized service like Oraclize would not jeopardize the trust of its entire business by tampering with the results of a black hat social media contract.