by Daniel Peck, Research Scientist
The question sounds crazy, especially coming from someone who has spent a fair amount of the last year improving spam and malicious-message detection on social networks. But we do a disservice to protective tools when we don't think long term about their consequences. Does better spam detection on, say, Twitter reduce the total amount of spam that users see, or does it just change the signal-to-noise ratio?
Websites whose only content was spam didn't get many hits. This drove spammers toward search engine optimization techniques, which have had a good run and are still fairly effective, but more often than not spam sites are now full of legitimate content harvested from other sites.
I suspect, and have seen several examples suggesting, that the same trend is taking place in social media. We build systems that force spammers to put more "real" content into the stream, so that they don't immediately out themselves. These fake accounts contain plenty of retweets of popular stories and shared links on Facebook, with a bit of "hey, what a great deal on shoes" or "click here to see me naked" thrown in here and there.
Times are changing here too: sharing too many popular things also indicates that an account is a spammer, or at the very least a much less valuable node in the network. So the next step is wholesale copying of real people's profiles, complete with pictures of their cat: a bizarro you, with everything from your Facebook account duplicated on another network such as Tumblr or Google+, with an occasional spam or malicious link thrown in. It's the kind of place where friends will eagerly add you, because everyone needs to be connected to every one of their friends through every medium possible, of course, and won't think twice about clicking on the malicious link that bizarro you just shared.
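The "shares too much popular content" signal can be sketched as a toy heuristic. To be clear, everything below is invented for illustration: the field names, thresholds, and scoring are assumptions, not any platform's actual detection logic.

```python
# Toy sketch of a "too much viral content, too little original content" heuristic.
# All field names and thresholds here are hypothetical, chosen only to
# illustrate the idea; no real platform's logic is being described.

def popular_share_ratio(posts):
    """Fraction of an account's posts that are reshares of already-viral content."""
    if not posts:
        return 0.0
    viral = sum(
        1
        for p in posts
        if p.get("is_reshare") and p.get("source_engagement", 0) > 10_000
    )
    return viral / len(posts)

def looks_like_spammer(posts, original_floor=0.2, viral_ceiling=0.8):
    """Flag accounts whose stream is almost entirely reshared viral content
    with very little original material -- the pattern described above."""
    if not posts:
        return False
    originals = sum(1 for p in posts if not p.get("is_reshare"))
    original_ratio = originals / len(posts)
    return popular_share_ratio(posts) > viral_ceiling and original_ratio < original_floor

# A stream that is almost all viral retweets trips the heuristic;
# a mixed stream with plenty of original posts does not.
spammy = [{"is_reshare": True, "source_engagement": 50_000}] * 9 + [{"is_reshare": False}]
normal = [{"is_reshare": False}] * 6 + [{"is_reshare": True, "source_engagement": 50_000}] * 4
```

The point of the sketch is the arms-race dynamic: once a filter like this exists, the cheapest way around it is exactly the profile-cloning described above, which supplies "original-looking" content for free.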
Besides being quite a blow to the privacy of the accounts being copied, this also reduces the trust that anyone can put in a user, which may not necessarily be a bad thing from a security point of view. But are we making a problem that's comically easy for end users to spot, such as the endless Nigerian prince scams, morph into something much more difficult for them to distinguish from real content? Are we moving toward an advertorial world where the signal and the noise are nearly impossible to separate?
When it comes to advanced vulnerability discovery and exploitation techniques, I am all for raising the level of discourse and seeing talented researchers raise the bar for attack and defense alike, but with something like this I'm not so sure. Maybe it's best to keep the bar low for detection and blocking on social media and focus instead on securing APIs and the data they access, understanding that it's better for those with less benevolent intent to pull a few weak individuals out of the herd than to give them incentive to find methods to take the whole herd.