It’s for our own good…

And I’m sure that Twitter will not be doing anything else – at least not yet – with their code when they make the Twittersphere safe for us all to Tweet in by screening links. The logic of the Twitter people is sound: by vetting links, they can reduce, or even eliminate, the number of phishing and malware links that reach Twitter users. They’re effectively building a Twitter ‘Killbot’. One thing that has become clear over recent years, with the explosion of social network sites like Twitter and Facebook, is that no matter what you tell people, and how often you tell them, folks will still click links from total strangers and get themselves into trouble. Despite warnings, they’ll hand over user names and passwords simply because they’re asked for them. And even savvy Net users are occasionally caught out by well-crafted ‘targeted’ phishing scams.

So checking and validating links – including those in DMs – is, at first glance, a good idea. It only takes a few people replying to spam or filling in their details on phishing sites to keep the problem going, and education seems woefully inadequate at changing people’s behaviour on these issues. Let’s face it: after nearly 20 years of widespread Internet use by the general public, the message about not replying to spam and not buying from spammers has still not penetrated a good many thick skulls.

However – and it’s a big however – the technology that stops dodgy links can also be used to stop any Tweets at all, simply by tweaking the code. A line is crossed when you start using automated filtering techniques on any online platform. Obviously, on a fast-growing, fast-moving system like Twitter, it’s impossible to have human beings realistically monitoring traffic for malware of any sort, so some form of automation is inevitable. But once that line is crossed, it’s important that we don’t forget that the technology that stops these links can also be used to stop anything else that ‘the Creators’ don’t wish to see on the system.
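To see how small that tweak really is, here’s a minimal sketch in Python – purely illustrative, with an invented blocklist, invented tweets, and function names of my own; it bears no relation to Twitter’s actual code. The point is that the filtering machinery is generic: only the predicate you plug in decides whether it blocks malware links or anything else.

```python
# Illustrative sketch only: the blocklist, keywords and tweets below are
# invented for this example, not taken from any real system.
import re

MALWARE_BLOCKLIST = {"evil.example.com", "phish.example.net"}

# Capture the host part of any http(s) link in a tweet.
URL_PATTERN = re.compile(r"https?://([^/\s]+)")

def blocks_malware(tweet):
    """Reject tweets whose links point at known-bad hosts."""
    return any(host in MALWARE_BLOCKLIST for host in URL_PATTERN.findall(tweet))

def blocks_keyword(tweet, banned):
    """Exactly the same machinery, pointed at words instead of links."""
    return any(word.lower() in banned for word in re.findall(r"\w+", tweet))

def filter_tweets(tweets, predicate):
    """Generic filter: swap the predicate and you've changed the policy."""
    return [t for t in tweets if not predicate(t)]

tweets = [
    "Check this out http://evil.example.com/free-stuff",
    "Lovely weather in Glasgow today",
]
print(filter_tweets(tweets, blocks_malware))
# Only the second tweet survives the malware filter.
```

Swapping `blocks_malware` for a call to `blocks_keyword` with an arbitrary banned-word set is a one-line change – which is precisely why the line-crossing matters.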

A wee while ago I wrote this item, in which I suggested that much of the responsibility for the ongoing phishing attacks on Twitter lies with the folks who keep clicking those links; as long as spammers and phishers get bites, they’ll carry on trying. So, if you ARE still falling for these phishing scams – get wise and learn how to spot them!

One final observation: the code that can spot a malware link can also spot keywords. And when you can spot keywords, you can start targeting adverts. Combined with Twitter’s newly activated geolocation service, we might soon see how Twitter expects to make money from location- and content-targeted advertising.
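The leap from keyword spotting to ad targeting really is that short. Here’s a rough Python illustration – the ad inventory, tweet text and geotag handling are entirely made up for this sketch, not a description of anything Twitter has announced:

```python
# Illustrative sketch: an invented keyword-to-advert inventory, not any
# real advertising system.
AD_INVENTORY = {
    "coffee": "Ad: half-price espresso near you",
    "laptop": "Ad: laptop deals this week",
}

def target_ads(tweet, geotag=None):
    """Match tweet keywords against the ad inventory.

    A geotag (as a hypothetical location string) narrows the ad further,
    standing in for the geolocation data mentioned above.
    """
    words = {w.lower().strip(".,!?") for w in tweet.split()}
    ads = [ad for keyword, ad in AD_INVENTORY.items() if keyword in words]
    if geotag:
        ads = [f"{ad} ({geotag})" for ad in ads]
    return ads

print(target_ads("Need a new laptop before my coffee goes cold", geotag="Glasgow"))
```

Keyword matching plus a location tag is all it takes to turn a content filter into an ad-targeting engine – which is the business model hiding inside the ‘Killbot’.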
