Earth calling Tim Cook…

There’s a scene in Monty Python’s ‘The Life of Brian’ in which a character asks ‘What have the Romans ever done for us?’  This is then followed by a host of other characters listing the many useful things that the Romans HAVE provided for the people of Palestine.

I was reminded of this sketch when I encountered this article about Apple’s Chief Operating Officer Tim Cook in which he comments that there isn’t a single thing that a Netbook does well.  Tim, I have some bad news for you, sunshine; there are lots of things that Netbooks do well – however, they’re probably things that Tim Cook doesn’t do.  In the last week or so:

  • I used the Netbook to test an ADSL connection at the point of entry of the phone line to the house.
  • When out and about I used it to write a blog article whilst waiting for an appointment.
  • I hooked it up to my amateur radio gear to decode some weather fax images.
  • I downloaded some code from an SVN repository, made a quick fix and uploaded it again.

In other words, stuff I couldn’t use my Blackberry for, and stuff that I needed a real keyboard for – whilst the Crackberry is great, I don’t fancy writing 500 words of blog post or trying to debug code on it.

But it’s real, genuine work being done, and not stuff I could do on a keyboard-less, USB-less iPad.  Sorry Tim – here on Planet Reality we’re not all managers and critics and reviewers and surfers.  Some of us actually do real work on the move, which at the moment (and probably for some time to come) requires a real keyboard and a piece of kit that I can actually install software on – not a closed garden that looks good but is at the same time too big to put in my pocket and too small to act as a sensible paperweight.

I love the concept of the iPad – but this sort of arrogance from Apple – following on from their recent attacks on development toolkits and the serious limitations in connectivity of the iPad – really makes me wonder whether the bods at Cupertino ever spend time in the real world watching how people use technology.

Crystal Reports…where did you go wrong?

Many moons ago, when you could write useful software on a computer with less processing power than my last cellphone, there was a reporting tool called Crystal Reports that was incredibly useful for those of us who spent our working lives using tools such as Visual BASIC 3 to write Windows applications.  It had a few gremlins, but they tended to be the sort of thing that you wrote in your notebook and turned to when you deployed an application that used CR…sort of:

“All report files verified against database…check.  All report files in distribution package…check.  All CR runtimes in distribution package…check.”

And that was it – the whole thing fitted on a couple of floppy discs (remember those?  If not, the contents of 400 of them will fit on a CDROM) and after I got my checklist sorted I was good to go and was happy to use Crystal Reports whenever I needed a quick and straightforward reporting solution.

The years passed and I found myself working on various projects which either used different reporting technologies or that didn’t involve me with reporting systems, and I gradually lost track of Crystal Reports until a couple of years ago when I found myself having to use the package again.  And most of the time it’s fine – but when used with Visual Studio to develop and deploy Internet Web sites and applications….oh dear.

As always you tend to blame yourself for being stupid with these sorts of things.  You are, after all, dealing with a couple of packages that could easily have knocked you back over £600 if you’d bought the full versions.  So you kind of think that by following the instructions, you’ll get a working system without any real problems.  And, if it all goes pear-shaped, you assume that somewhere along the way you’ve dropped a clanger, so you repeat stuff, reinstall stuff, restart machines, uninstall stuff, sacrifice chickens…the usual persistent efforts to solve problems adopted by software developers.

Of course, we’re now aided by Google (how did we manage to resolve these issues before the Web?  I really can’t remember, but software seemed to go wrong less frequently back in the early 1990s) and so I did a quick Google of:

  1. Why did the Crystal Reports viewer fail to run properly when added to a web page, even when I used the exact code from Microsoft’s and Business Objects’ web sites?  Which led to….
  2. Why wasn’t a particular folder called aspnet_client being created when I created a new website?  Which led to….
  3. Why, when I manually added the folder (again, as per the instructions), did things still fail?

Six hours of my life disappeared down the maw of this problem – six hours that I could happily have spent doing other things.  Eventually, rather than spend my life going round and round in ever-decreasing circles (or re-installing EVERYTHING – not something I wanted to do on someone else’s server) I came up with what I ended up describing on Twitter as a ‘wanky bodge’ to work around the problem.

What was incredibly scary was the number of times the issue turned up on Google with comments like ‘Don’t know how to fix it – it sorted itself out after re-installing’ or ‘Couldn’t fix it, so didn’t use Crystal Reports’.  It’s not just me – it looks like the combination of Crystal Reports XI and some instances of Visual Studio (but not all) and some web sites on the same server (but not all) can give rise to a situation where it’s impossible to view a report without bodging things.

Guys…it shouldn’t be like this.  There’s an old joke that says that if we built bridges the way we build software we’d never dare to drive across them.  I think there’s a little too much truth in that joke.

Apple – why 2014 could be like 1984

Back in 1984, Apple had Ridley Scott direct a very imaginative advert to launch the Macintosh computer.  It ran twice – once on a small TV station late at night to get it in the running for some awards, and the second time at half time in the Super Bowl American football game on 22nd January 1984.  And it never ran again.  The message from Apple was that their new machine would shatter the conformity that people like IBM (and by extension Microsoft) were imposing on the computer market, by making computing available to the masses.

The advertisement ends with the line:

“On January 24th, Apple Computer will introduce Macintosh. And you’ll see why 1984 won’t be like ‘1984’.”

The problem was that the Macintosh was so expensive that few people could afford it.  It was a pain in the rear to write software for – so relatively few folks wrote software for it, especially as the market was small compared to that offered by the PC.  As it turned out, 1984 wasn’t at all like 1984, but no thanks to the Macintosh, which even today, in all its forms, occupies only 10% of the computer operating system market space, even if you include iPhones.

From day one, there was always something ‘control freak’ about the Macintosh, all of its successors, the iPhone and now the iPad.  As I mentioned above, the original Macintoshes were not easy to write software for, and Apple didn’t make life easy for developers.  The situation persists today; to write software for an iPhone, iPod or iPad, you have to run the emulator kit on a Macintosh of some sort.  Let’s do a quick comparison – if I want to develop an application for my Blackberry, I download the tools from the Blackberry website and get them running on my PC running Windows.  For free.  If I want to write an application for an iPod or iPhone…I first of all have to join the Developer Program at $100 a year.  Then I can download the SDK.  To run the SDK I need a machine running Mac OS X.  Oh look…only Macs can legally run Mac OS X…very much a closed garden.

Early Macintoshes came with no network connection; obviously this is no longer the case, but it should have given us the hint that Macs were not really designed to talk to the rest of the world.  Fortunately for Apple, some of the people involved saw sense and gradually the more open Macintosh that people use today, in its numerous forms, came into being.  And gadgets like the iPhone, iPod and iPad emerged into the market, able to interact with the Internet and other media.

But let’s look at what this actually means.  First of all, access to applications and media for these latter machines is very much controlled by Apple in terms of:

  • Control of the means of production – make sure non-Macintosh / Apple users cannot easily develop applications.
  • Control of the means of distribution – the iTunes store, and various recent high-profile cases of applications being banned from it, make it difficult to get applications out into the world.
  • Control of the means of communication – these devices lack the ability to easily handle ‘standard’ add-ons such as USB or cheap memory cards like SD.  iPhones have also frequently been tethered to particular telephone companies.
  • The fact that the iPad comes without Flash, for example, suggests that Apple are adopting a policy of attempting to control the content that is usable on their kit.

Let’s ignore the stupidities around making devices reliant on rechargeable batteries that can only be changed by returning the device to the manufacturer.

The natural progression for Apple would be to continue growing as a media and services company, rather than as a hardware house.  Buy an iPad, and you rely on Apple for much of your available content and software.  And Apple can also ensure that you don’t leave the ‘walled garden’ of Apple-acceptable content by making sure that the inbuilt iPad browser doesn’t handle some common media formats like Flash.  How will they fund all this?  Easy – you’ll pay.  Apple have already stated that they are rolling out an advertising model for iPad / iPod / iPhone applications in which the application provider would get 60% of the advertising revenue generated via their application – the other 40% going…well…you know where.

Control of content, hardware and communication.  2014 could very much be like 1984 if Apple gets its way.

The ‘father’ of home computers dies…

A few days ago one of the pioneers of the home computer revolution of the 1970s died.  Ed Roberts, an MD in Georgia, died after a long battle with pneumonia.  Back in the 1970s his company, MITS, moved from model rocket telemetry, to calculators, then to building the first ‘computer kit’ – the Altair 8800 – for which Bill Gates and Paul Allen provided a BASIC interpreter.  The Linux and Apple Fanbois amongst you may now know who to blame for Microsoft… 🙂

It’s debatable whether, without the Altair 8800, another home computer – in kit or ready-built form – would have come along when it did.  The Apple 2 followed behind the Altair, as did many other similar machines, but the Altair was first.

The Altair 8800 was basically a microprocessor chip with enough associated ‘gubbins’ to make it work – it could be chipped up to have 8k of memory (my laptop here has 4,000,000k) – and could even handle a keyboard and eventually a video display, although when you got it out of the box (and after you’d soldered the thing together) its user interface was a bank of toggle switches and some LEDs.

Yup – you programmed it, entered data and read the output in binary.  It’s safe to say that in the mid 1970s, as far as computers were concerned, men were real men, women were real women, and real programmers did it in binary with a soldering iron tucked behind their ear.  The fact that within 10 years of the Altair being launched teenagers were typing their own programs into Spectrums, ZX81s, BBC Micros, Apples and the rest is a monument to the excitement and speed of those early days of computing.

And, by golly, it was FUN!  Even the act of getting your computer working in the first place was part of the game – you learnt to code in machine code from day one because either nothing else was available or you realised that in order to make anything useful happen with only a few HUNDRED bytes of memory you needed to write VERY ‘tight’ code.

I built my first computer in the mid-1970s – well, not so much a computer as a programmable calculator.  I took an electronic calculator and wired up the keyboard to some circuitry of my own invention that mimicked keypresses.  Programming this beast involved changing the wiring in my circuit – running the program involved pressing a button, and after a few seconds the answer would appear.  I then got even smarter, and managed to work out how to introduce some decision making into my gadget.  Fortunately, I blew the output of the calculator up soon afterwards – I say fortunately because I then found out about microprocessors and ended up building some simple computer circuits around 6800 and Z80 microprocessors, rather than carrying on with my rather ‘steampunk’ programmable calculator!

Ed Roberts’s machine wasn’t an option for me; my pocket money wouldn’t cover the postage from the US.  But the fact that people were doing this sort of thing was very exciting, and by the time I left university in 1982 I’d already spent time with ZX81s and Apple 2s, and had written my first article for the home computer press – a machine code monitor and loader program for the ZX81 in ‘Electronics and Computing Monthly’.  I was reading in the magazines about the development of software from up-and-coming companies like Microsoft – even in those pre-PC days – and for a few years in the early 1980s the computing field in the UK was a mish-mash of different machines, kits and ready-made stuff – and most people buying these machines bought them to program them.  How different to today.

I have to say that I’ve always thought that the fun went out of home computing when the PC came along, and when Microsoft and Apple stopped being ‘blokes in garages’ and started being real companies.

Ed Roberts – thank you for those fun packed years!

Why are some Open Source support people so damn rude?

Don’t get me wrong – I love Open Source software and have used some of it fairly widely in various development projects that I’ve done.   I’m also aware of the fact that people involved in the development and support of such software are typically volunteers, and on the odd occasion I have called upon people for support, I’ve always had good experiences.

I’ve also seen some absolute stinkers of ‘support’ given to other developers, in which the people who’re associated quite strongly with the software have treated people in a rude, patronising and often offensive and abusive manner.  Now, in 20+ years of dealing with IT support people – including folks like Oracle, Microsoft, Borland (showing my age) and even Zortech and Nantucket (back in the deep past!!) – I can count on the fingers of one hand the number of times I’ve had this sort of treatment from big bad commercial software houses.  It’s unfortunate that I’ve seen dozens of examples of this poor customer service from Open Source suppliers in the last couple of years.

Because even if we don’t pay, we are customers – and some of the worst behaviour I’ve seen has come from companies where users are required to pay for a licence when the software is used in commercial situations.  It’s hardly encouraging, is it?  I know it can be frustrating to answer the same question several times a day, especially when the solution is well documented, but rudeness isn’t the way forward.  After all, it doesn’t exactly encourage people to use the product, or pay for a licence – rather than persevere or even volunteer a fix, folks are more likely to just go to the next similar product on the list.

Ultimately, it boils down to this; piss off enough potential customers and people like me will write articles like this but will name names and products.

So, here are a few hopefully helpful hints to people involved in regularly supporting products and libraries.

  1. If it’s your job, you’re getting paid to do it.  If you’re a volunteer, you’ve chosen to do it.  In either case, if you don’t feel trained up enough in the interpersonal skills side of things, just be nice, and read around some material on customer support.  If you don’t like doing support, then rather than taking it out on customers, quit.  The fact that you’re unhappy is no reason to take it out on other people.
  2. Remember that the person asking the daft question may hold your job (or the future of your product) in their hands.  You have no idea whether they’re working on a project for a small company or a large blue chip / Government department.  Your goal is surely to get widespread adoption – the best way to do this is to make folks happy.
  3. Even if the fix IS documented in any number of places, be polite about it.  If it’s that common, then have it in your FAQs or as a ‘stock answer’.  The worst sort of response is ‘It should be obvious’.  Of course it’s obvious to you – you wrote it.  It isn’t obvious to other people.  This seems to be a particular problem with ‘bleeding edge’ developers who swallow the line that ‘the source code is the documentation’ – it may well be, but if you want your product or service to be adopted you need to get as many people as possible using it.
  4. Don’t forget that if someone perseveres with your software, through the buggy bits, they may be willing to help you fix it.  The chances of getting a helper if you are rude to them are minimal.
  5. If you get a lot of questions or confusion about the same issue, perhaps it’s time to update the FAQs or Wiki?  And don’t forget sample code – if you’re generating code libraries PLEASE provide lots of real-world examples.

And to all the nice support folks – thanks for all the help – it is appreciated!

Chrome – the prissy Maiden Aunt of browsers….

I’m currently involved in developing a web application of moderate complexity, using Ext to provide a ‘Web 2.0’ front end on a PHP/MySQL application.  We’ve endeavoured to make it work across a range of browsers – Firefox, IE, Opera and Chrome.  And this is the blog article in which I vent my spleen about Chrome.

Because, you see, there are some occasions when Chrome is an absolute bag of spanners that behaves in a manner that just beggars belief, and it worries me immensely.  If IE behaved in the same way that Chrome does under certain conditions then the Chrome / Google Fanbois would be lighting their torches and waving their pitchforks as they headed out towards Castle Microsoft.

Giving Chrome its due, it renders CSS well against the standards, and is frequently faster than Firefox and IE in terms of delivering pages; where it does seem to be lacking is in its sensible handling of JavaScript.  The general impression I’ve had over recent days with Chrome and JavaScript is that it’s incredibly picky about JavaScript that is less than perfectly formed – hence the ‘Maiden Aunt’ jibe.  It requires everything to be very right and proper.  I understand that any browser should be expected to deal with properly structured script, but in recent years I’ve found that the major browsers tend to behave in a pretty similar manner when processing JavaScript and tend to vary in behaviour when rendering CSS – hence the fact that some sites look different in IE than they do in Firefox or Chrome.

But I’ve encountered some horrendous differences in the way in which Chrome on one side and Firefox/IE on the other handle JavaScript.  Chrome seems to be very ‘tight’ in its handling of two aspects in particular: white space and commented-out code.  I hope that the following comments might prove useful to anyone doing JavaScript development – particularly with libraries such as Ext.  Note that these issues don’t occur all the time with Chrome, but they have occurred often enough to give me problems.

Watch the White Space

Chrome seems particularly sensitive to white space in places where you wouldn’t expect it to be.  For example:

  • Avoid spaces following closing braces ( } ) at the end of a .js source file.
  • Avoid spaces around ‘=’ signs in assignments.
  • Avoid blank lines within array definitions – don’t put any blank lines after an opening ‘[’ before the data (see the sketch below).
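Purely to illustrate that last point, here’s a contrived Ext-style column definition in the sort of shape that gave me grief, alongside the tightened-up version.  The field names are invented for the example, and whether the sloppy form actually upsets your copy of Chrome will depend on the library and browser version – treat it as a checklist rather than gospel.

```javascript
// Sloppy form - blank lines left inside the array literal:
var columnsSloppy = [

    { header: 'Name',  dataIndex: 'name' },

    { header: 'Email', dataIndex: 'email' }

];

// Tightened-up form - nothing between the '[' and the data:
var columnsTidy = [
    { header: 'Name',  dataIndex: 'name' },
    { header: 'Email', dataIndex: 'email' }
];
```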

Watch the comment lines

The // construct used to turn a line into a comment needs to be handled with care with Chrome.  Don’t include it in any object or array definitions – whilst it works OK in IE, it can cause major problems in Chrome.
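To make that concrete, this is the sort of thing I mean – a commented-out entry left inside an object or array literal (again, the names are invented for the example).  In my experience the safer course is simply to delete the line rather than comment it out:

```javascript
var gridConfig = {
    columns: [
        { header: 'Name',  dataIndex: 'name' },
        // { header: 'Age', dataIndex: 'age' },  // commented-out entry left in the literal
        { header: 'Email', dataIndex: 'email' }
    ]
};
```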

Indications of problems

If you’re lucky you may get a straightforward JavaScript error – in this case you will at least have an idea of what’s what.  If you’re unlucky you may end up with either an apparent ‘locking up’ of Chrome or a 500 Internal Server Error message from your web server.  The ‘lock up’ will frequently clear after a few minutes – the browser seems to be waiting for a timeout to take place.  When the errors do take place, I’ve found that the loading of pages featuring JavaScript errors is terminated – which can give the impression that a back-end PHP or ASP.NET script has failed rather than client-side script.

In summary, just be aware that Chrome may not be as well behaved as one would expect.

And that’s my whine for the day over!

I want to use Ubuntu on a laptop…but….

Many, many moons ago – and I mean in the last century – I had a version of Linux running on one of my PCs, and lo, it was good… if you liked command lines and stuff like that, as the PC concerned wasn’t really up to running X Windows and the like.  But hey, not a problem; I only wanted to run it as a development web server, and after a little faffing around I had it neatly wired to my router and that was that.

Somewhere in the early 21st Century – OK, 2004 – we bit the bullet and went WiFi here at The Towers.  Once we had the Windows machines running on that network, I started toying with WiFi for the Linux boxes, but couldn’t find a distro that did it ‘out of the box’.  As an experiment I did try to get whatever version of Ubuntu was kicking around in about 2006 working with a USB WiFi dongle, but once I got on to the part of the instructions that involved downloading all sorts of arcane bits of software from around the Internet, compiling wrappers, installing Windows drivers on Linux machines and all sorts of other tomfoolery, I decided to quit.  Oh, and after I’d spent a couple of evenings at it with no results.

The general impression I got from people ‘in the know’ was that:

  1. It is easy if you follow the instructions.
  2. Linux is OK – it’s my WiFi dongle that was wrong.
  3. Future versions of Ubuntu will deal with these dongles.

Well, so be it.  But at the time I didn’t find (1) easy, (2) seemed a bit dumb when the dongle worked happily on 3 versions of Windows and (3) – well, I can be patient.

In 2008 my then laptop had a couple of accidents involving a cup of tea and a crashed hard disc, and it was replaced with a nice new machine.  Today I decided that the old laptop should be dug out and Linuxed – and because I’ve had previous experience with Ubuntu, I decided to grab a CD image of Ubuntu 9.10 and re-partition and re-format the laptop’s hard disc.  The installation went fine – I soon had a laptop running Ubuntu, and was pleased to see that it automatically detected my USB WiFi dongle AND spotted a couple of local WiFi networks…shame neither of them was mine.

I attempted to attach to my home network directly by specifying its name; I selected the correct security protocol and entered the key; nothing.  Nada.  Not a peep.  It just kept asking me for my security key settings again.  In other words, it wasn’t connecting to my home network.  Which is a shame, because we have XP machines, Vista machines, Windows NT, Windows ME, a Blackberry and a Wii that all happily connect to the WiFi network at Pritchard Towers.

I guess I’ll take another look at it soon; using a wired connection isn’t really on due to where the router is in the house, so WiFi is necessary.  Fortunately I’m not reliant on this machine as my main PC around the house, but it’s a sad repetition of the last time I decided to install Linux on one of my machines to use as a ‘client’ operating system rather than as a server.  It doesn’t bloody work!  I appreciate that the Linux Fanbois will tell me all sorts of things I can do to make it work, but to be honest that misses the point.  Take a look at the graph below (from Wikipedia).

Linux provides a little over 1% of the total number of Client Web Browsers detected by web sites.  The fun and games I’m having probably explains why.  For all the hype and fuss about Linux finally coming of age as a desktop replacement for Windows, it is just not going to happen as long as you can’t get the damn thing to connect to a bog standard WiFi network out of the box.

Come on fellows, I want to play; meet me half way.

Your email address CAN be harvested from Facebook…a heads up!

Or…yet another reason to watch who you befriend….

Facebook attempts to be what’s known in the online world as a ‘Closed Garden’ – interactions with the rest of the Internet are restricted somewhat to make the user experience better…or to keep you in the loving arms of Facebook, depending on how cynical you are.  One of the tools in this process is the Facebook API – a set of programming tools that Facebook produce to make it possible for programmers to write software that works within the Facebook framework.  Indeed, Facebook get very peeved if you try automating any aspect of the site’s behaviour without using the API.

One thing that the API enforces is the privacy controls; and one thing that you cannot get through the API is an email address.  Which is cool – it prevents less scrupulous people who’ve written games and such from harvesting email addresses from their users to use for other purposes.  It also ensures that all mass communications are done through Facebook.

Of course, if you’re determined enough you could go to every friend’s profile page and copy the email address from there…or there are scripts that people have written to do the task by simply automating a browser.  The former is tedious; the latter is likely to get you thrown off Facebook.

However, a method documented here shows how this can be done through the auspices of a Yahoo mail account.  It is apparently a legitimate application available within Yahoo Mail for the benefit of subscribers.  How long Facebook will allow this loophole to be exploited is anyone’s guess, but given that I have a number of Facebook friends I felt it worthwhile warning folks.

The problem is not you, my trusted and good and wonderful reader, who would only use the tool for what it’s intended for – added convenience in contact management.  The problem lies with people who are a bit free and easy about who they make friends with.  If you do end up befriending a less than trustworthy individual, they could quite happily get your email address through this method, and soon enough you’ll be receiving all those wonderful offers for life enhancing medication and get rich quick schemes.

So…watch who you befriend.  Today might be a good day to prune out those folks that you’re not one hundred percent sure about!

Google predict the end of desktop PCs….

When I started in IT, I encountered a program called ‘The Last One’.  It was a menu-driven application generator that allowed a non-programmer to specify the sort of system they wanted (within a limited range) and generate a BASIC program that would do the job.  When it was first announced – and before any of us got to take a look at it – there was a little nervousness amongst the ranks of programmers, based on the advertising strapline for the program, which suggested that the software was called ‘The Last One’ because it was the last program you would ever need to buy…

Which was, of course, utter rot.

I was reminded of it today after coming across this piece in which the bods at Google are predicting the end of the desktop computer.  And the reason I was reminded was that the ‘The Last One’ story just goes to show how bad IT pundits – and those in the industry – are at predicting the future.  You see, the problem with predicting the future is that you have to make certain assumptions and extrapolations from today into the future, and then work out consequences based on those assumptions.  And if you get your assumptions about the future wrong – or your assumptions about how the world works now – then it can all go horribly wrong.  And that’s what’s happened to Google.

The demise of the desktop computer – to be replaced by iPads, smartphones and similar mobile devices.  Note that Google aren’t even suggesting that laptops and netbooks and their ilk will be delivering the goods – it’s all going to be a mobile wonderland.  Now, short of some sort of high-tech ‘Rapture’ occurring in December 2012 that whisks away all the computers we use in our homes and offices whilst leaving only mobile computing devices behind, I very much doubt that this is going to happen.

Google have mixed up predicting the future with what they (with their interest in mobile operating systems and desire to compete with Apple) want the future to be.  A dangerous thing for a technology company to do.  Whilst in Google’s ideal world of media and search consumers everyone would be able to do what they need to do on some sort of mobile gizmo, those of us who work with computers for serious amounts of time each day will NOT be able to function with poxy little touchscreen keyboards or Blackberry QWERTY pads.  Sorry guys, we need real-sized keyboards, realistically paired with a decent-sized screen – so at the very least a reasonably sized laptop, which we’ll sit on a desk and run from the mains.

Quite a few of us also like the idea of storing data locally – not in ‘The Cloud’ or on Google’s application servers – something that isn’t easy on many mobile devices right now.

Google – you’re wrong.  Stop looking at the dreams of your own and other researchers, and start looking at how real people use computers – especially in their work.  And make that the basis of any more crystal ball gazing.

Twitter Phishing…YOUR responsibility!

The recent spate of Twitter ‘phishing’ attacks has been interesting for me in a number of ways.  First of all, my wife received one of the phishing DMs from a contact of hers whose account had been compromised.  Fortunately, she knew enough not to enter any details into the page she was directed to, and there was no harm done.  A quick change of password just to be on the safe side, and that was that.  This particular DM was a ‘social engineering’ attack – an invitation to check out a website to see if the recipient of the DM was featured on that site.  A nice try – after all, most people are interested in finding themselves on the Net!

The second point of interest is why there has been a sudden flurry of attempts to compromise Twitter accounts.  It’s been suggested that one reason is that the compromised accounts will be used to promote sites in search engines, following the recent development of the search relationship between Yahoo and Microsoft’s ‘Bing’.  Getting hold of the Twitter accounts would have been the first stage of the operation; the idea would be to automate those accounts to ‘spam’ other users with links over the next few weeks, to attempt to increase the search engine standing of those links.

But the thing that’s surprised me most is how often people have actually gone along with the phishing request – to enter your Twitter user name and password into an anonymous web page, with no indication as to what the page is!  To be honest, it stuns me.  And it isn’t just Internet neophytes – according to this BBC story an invitation to improve one’s sex life was followed through on by banks, cabinet ministers and media types.  Quite funny, in a way, but also quite disturbing – after all, these are people who’re likely to have fairly hefty lists of contacts on their PCs, and whilst an attack like the one detailed in this article is quite amusing, a stealthier attack launched by a foreign intelligence service against a cabinet minister’s account would be of much greater potential concern.

There are no doubt technical solutions that Twitter can apply to their system to reduce the risk of the propagation of these phishing attacks – for example, looking at the content of DMs sent from an account and flagging up a warning if a large number of DMs are sent containing the same text.  Twitter have also been forcing password changes on compromised accounts – again, this has to be a good move.  It might also be worth their while pruning accounts that have been unused for a length of time – or at least forcing a password change on them.
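Just to illustrate the first of those ideas, the kind of check I have in mind might look something like the sketch below.  It’s purely hypothetical – it has nothing to do with how Twitter actually works internally – and the threshold of identical messages is plucked out of the air.

```javascript
// Hypothetical sketch: flag an account if it has recently sent many DMs
// containing exactly the same text (a common sign of a compromised account).
function looksCompromised(recentDmTexts, threshold) {
  var counts = {};
  for (var i = 0; i < recentDmTexts.length; i++) {
    var text = recentDmTexts[i].trim().toLowerCase();
    counts[text] = (counts[text] || 0) + 1;
    if (counts[text] >= threshold) {
      return true; // same message sent over and over - worth a warning
    }
  }
  return false;
}

// Example usage with a made-up batch of messages:
var dms = ['is this you? http://dodgy.example/xyz',
           'is this you? http://dodgy.example/xyz',
           'lunch at one?'];
console.log(looksCompromised(dms, 2)); // => true
```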

A further part of the problem is the use of link-shortening services like Bit.ly to reduce the length of URLs in Tweets.  This means that you can’t even take a guess at the safety or otherwise of a shortened link; a link that is gobbledegook could lead to the BBC website to read the story I mentioned above, or to a site that loads a worm onto a Windows PC – or prompts you for your Twitter credentials.  Perhaps a further move for Twitter would be to exclude the characters in URLs from the 140-character limit.  That way, full URLs could be entered without shortening.
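As a back-of-an-envelope sketch of that suggestion, counting a tweet’s length with URLs excluded might look like this – the URL pattern is deliberately naive and the example addresses are made up for illustration:

```javascript
// Hypothetical sketch: work out a tweet's 'effective' length if characters
// belonging to URLs were not counted towards the 140-character limit.
function effectiveLength(tweet) {
  var urlPattern = /https?:\/\/\S+/g;   // very rough URL matcher, for illustration only
  return tweet.replace(urlPattern, '').length;
}

var tweet = 'Worth a read: http://www.example.org/some-long-story-url';
console.log(effectiveLength(tweet)); // counts only 'Worth a read: '
```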

But ultimately a lot of the responsibility for Twitter phishing attacks lies with us users.  We need to bear the following in mind:

  1. If you get a DM or reply that says ‘Is this you?’ or ‘Read this’ from a friend, then to be honest, check with the person concerned to see whether they actually sent it.  If you get such a message from anyone who’s not well known to you, just ignore the message.
  2. DO NOT enter your Twitter username and password into any website that a link takes you to.  If you have done this, change your password as soon as possible, and don’t use your Twitter password on ANY other system.
  3. Keep an eye on your Followers – if there is someone you don’t like the look of, just block them.  It may seem extreme but it stops possible miscreants ‘hiding in plain sight’.
  4. Ensure your anti-virus and anti-malware software is up to date – this is your last line of defence designed to stop malware that YOU have allowed on to your machine by falling for phishing scams. 🙂

So…play your part in reducing the impact of Twitter Phishing attacks by not clicking those links!