It’s for our own good….

And I’m sure that Twitter will not be doing anything else – at least not yet – with their code when they’re making the Twittersphere safe for us all to Tweet in by screening links.  The logic of the Twitter people is sound; by vetting links they can reduce, or even eliminate, the number of phishing and malware links that reach Twitter users.  They’re effectively developing a Twitter ‘Killbot’.  One thing that has become clearer over recent years with the explosion of Social Network sites like Twitter and Facebook is that no matter what you say to people, and how often you say it, folks will still click links from total strangers and get themselves into trouble.  Despite warnings, they’ll hand over user names and passwords because they’re asked for them.  And even savvy Net users are occasionally caught out by well-crafted ‘targeted’ phishing scams.

So checking and validating links – including those in DMs – is at first glance a good idea.  It only takes a few people replying to spam or filling in details on phishing sites to keep the problem going, and education seems to be woefully inadequate at changing people’s behaviour on these issues.  Let’s face it: after nearly 20 years of widespread Internet use by the general public, the message about not replying to spam and not buying from spammers has still not penetrated a good many thick skulls.
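Twitter hasn’t published how its screening actually works, but the general shape of automated link vetting is easy to sketch.  The blocklist, domains and helper functions below are purely my own illustration, not Twitter’s real implementation:

```python
# Illustrative sketch of automated link vetting (NOT Twitter's actual code).
# URLs are extracted from a tweet and checked against a blocklist of known
# phishing/malware domains before the tweet is allowed through.
import re
from urllib.parse import urlparse

# Hypothetical blocklist; a real service would use constantly updated feeds.
BLOCKLIST = {"evil-phish.example", "malware-host.example"}

URL_PATTERN = re.compile(r"https?://\S+")

def extract_urls(tweet_text):
    """Pull every http(s) URL out of a tweet's text."""
    return URL_PATTERN.findall(tweet_text)

def is_tweet_safe(tweet_text):
    """Reject the tweet if any of its links point at a blocked domain."""
    for url in extract_urls(tweet_text):
        domain = urlparse(url).netloc.lower()
        if domain in BLOCKLIST:
            return False
    return True

print(is_tweet_safe("Free iPads! http://evil-phish.example/login"))  # False
print(is_tweet_safe("New blog post: http://example.com/post"))       # True
```

The point of the sketch is simply that the decision – block or allow – is a one-line test at the end of a generic scanning pass, which is exactly why the same machinery is so easy to repurpose.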

However – and it’s a big however – the technology that stops dodgy links can also be used to stop any Tweets, simply by tweaking the code.  There is a line that is crossed when you start using automated filtering techniques on any online platform.  It’s obvious that on fast-growing, fast-moving systems like Twitter it’s going to be impossible to have human beings realistically monitoring traffic for malware of any sort, so some form of automated technique is inevitable.  But once that line’s crossed, it’s important that we don’t forget that the technology that stops these links can also be used to stop anything else that ‘the Creators’ don’t wish to be on the system.

A wee while ago I wrote this item, in which I suggested that much of the responsibility for ongoing phishing attacks on Twitter falls on the folks who keep clicking those links; whilst spammers and phishers get bites, they’ll carry on trying.  So, if you ARE still falling for these phishing scams – get wise and learn how to spot them!

One final observation – the code that can spot a malware link can also spot keywords.  And when you can spot keywords you can start targeting adverts.  Combined with Twitter’s newly activated Geolocation service, we might soon see how Twitter expects to make money from location- and content-targeted advertising.
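To make the point concrete: the same scanning pass that spots bad links generalises trivially to matching keywords and picking an advert.  This is entirely hypothetical – the keyword table and matching logic are my own invention, not anything Twitter has announced:

```python
# Hypothetical sketch: a tweet-scanning pass that matches keywords and
# picks a targeted advert, optionally localised via geolocation data.
# The keyword-to-advert table is invented for illustration.
AD_KEYWORDS = {
    "coffee": "Ad: local coffee shop",
    "laptop": "Ad: laptop deals",
    "holiday": "Ad: travel offers",
}

def pick_advert(tweet_text, location=None):
    """Return the first matching advert, tagged with a location if known."""
    for word in tweet_text.lower().split():
        ad = AD_KEYWORDS.get(word.strip(".,!?"))
        if ad:
            return f"{ad} near {location}" if location else ad
    return None  # no keyword matched; show no targeted advert

print(pick_advert("Anyone fancy a coffee later?", location="Leeds"))
# → Ad: local coffee shop near Leeds
```

Swap the advert table for a blocklist and the return value for a block/allow flag, and you are back at link screening – which is precisely the dual-use worry.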

Your email address CAN be harvested from Facebook…a heads up!

Or…yet another reason to watch who you befriend….

Facebook attempts to be what’s known in the online world as a ‘Closed Garden’ – interactions with the rest of the Internet are restricted somewhat to make the user experience better…or to keep you in the loving arms of Facebook, depending on how cynical you are.  One of the tools in this process is the Facebook API – a set of programming tools that Facebook produce to make it possible for programmers to write software that works within the Facebook framework.  Indeed, Facebook get very peeved if you try automating any aspect of the site’s behaviour without using the API.

One thing that the API enforces is the privacy controls; and one thing that you cannot get through the API is an email address.  Which is cool – it prevents less scrupulous people who’ve written games and such from harvesting email addresses from their users to use for other purposes.  It also ensures that all mass communications are done through Facebook.

Of course, if you’re determined enough you could go to every Friend’s profile page and copy the email address from there… or there are scripts that people have written to do the task by simply automating a browser.  The former is tedious; the latter is likely to get you thrown off Facebook.

However, a method documented here shows how this can be done through the auspices of a Yahoo mail account.  It is apparently a legitimate application available within Yahoo Mail for the benefit of subscribers.  How long Facebook will allow this loophole to be exploited is anyone’s guess, but given that I have a number of Facebook friends I felt it worthwhile warning folks.

The problem is not you, my trusted and good and wonderful reader, who would only use the tool for what it’s intended for – added convenience in contact management.  The problem lies with people who are a bit free and easy about who they make friends with.  If you do end up befriending a less than trustworthy individual, they could quite happily get your email address through this method, and soon enough you’ll be receiving all those wonderful offers for life enhancing medication and get rich quick schemes.

So…watch who you befriend.  Today might be a good day to prune out those folks that you’re not one hundred percent sure about!

Internet access a ‘fundamental right’?

I would say that I’m something of an ‘online person’: I ran a Bulletin Board ‘the hard way’ in the late 1980s / early 1990s using a phone line, a modem and a PC at home, have been on the Internet in one way or another for over 20 years, and was involved with Prestel back in 1982/83.  However, this article from the BBC made me do a serious reality check.  Nearly four out of five people in a survey of 27,000 folks around the world considered that Internet access should be regarded as a ‘fundamental right’.

Now, this sort of thing crops up every now and again, and it always elicits the same response from me.  At this point in the history of our planet, nonsense.  Yes, information is increasingly important – even, or perhaps especially – in developing countries and economies.  But a ‘fundamental right’? No.  Let’s not forget that the Internet is a communications technology first and foremost – similar to the phone system, road and railway network, etc.  And let’s face it, there are many people in the world without access to a reasonable road and railway system, let alone  a phone system and the Internet.

Let me give you the run-down on precisely why I think that there are many rivers to cross before we get to the luxurious position of the Internet being a fundamental right.

The Internet can’t carry food…

Or people, or goods, or equipment.  An information superhighway is great in an information economy, but of limited use when you have a subsistence, agricultural or manufacturing based economy.  And let’s face it, whilst information is essential in developing new skills and supporting economies, it can be delivered in lots of old fashioned ways – like books, pamphlets, radio, TV.

The Internet needs power…

To deliver a reliable Internet service into a country requires that the country have a viable and effective power supply.  Even now, many developing countries do not have reliable power.  Is it realistic to prioritise the right to the Internet over the right to a reliable and cheap energy source that can provide power for light, heating, entertainment, and energy for industry?

What’s the point of an Internet without machines…

Even with projects like OLPC and other ideas to get computers into developing nations, there will still be the problem of providing equipment and software in a sustainable and long-term manner.  A laptop computer – or a mobile phone, for that matter – is a complex piece of kit and is unlikely to be easily manufactured or maintained locally.

The Internet doesn’t educate or heal

Whilst the information on the Internet may be helpful in education, just how much of it is relevant without literacy?  And which is a more effective means of delivering basic and even advanced education in a developing nation: $1000 spent on a computer that might help one person, or the same amount spent on books and similar resources for a whole class?  The Internet does not provide basic health care – it may provide useful information, but it cannot vaccinate.

The bottom line is that we live in a world of limited resources in which we have to prioritise those resources.  To claim the Internet is a fundamental right is to forget that the real fundamental rights – a home, food, safe water and no local Gestapo kicking the door in because you disagree with your Government – are yet to be achieved over much of the planet.  In a technologically advanced society there might be an excuse for this sort of comment, but in parts of the world where the next drink of water could kill you, it’s a luxury that cannot realistically be afforded.

Twitter – the medium is NOT the message!

Regular readers of my ‘jottings’ might recall a recent post of mine in which I debated the value of Tweeted Wisdom.  Always one to consider returning to the scene of past musings, I was today motivated back into Twitter criticism territory after I read a Tweet that suggested that:

 “100 is the new 140 for massive retweetlove”.

Now, I have enough problems with 140 characters, but then again I’m using Twitter to communicate ideas and concepts as well as gossip, funnies and bons mots to the good folks following me.  Whether I get re-tweeted or not is not the first thing in my mind when I put a Tweet together – what matters to me is whether I can marshal the idea effectively into the 140 character limit.

Starting to apply lower character limits to Tweets based purely on the possibility of re-tweeting does seem rather ‘arse about face’ to me – it IS putting the process of communication ahead of the content – i.e. putting the medium before the message.

Some years ago, the Ford Motor Company were in pretty dire straits – losing money and market share.  There was serious concern amongst the higher echelons at Dearborn that Ford might actually go under.  Various policies were implemented throughout the organisation, including cuts to the design and manufacturing base of the company.  The story goes that at one Board Meeting, some of the directors commented that they had managed to get the books looking better by reducing costs, and that most of the cost reductions had come from savings made by closing down manufacturing facilities.  A grizzled old veteran who DID know the difference between a carburettor and a Carbonara pithily pointed out that, on that thesis, the best way to save the company was to close ALL the company’s manufacturing facilities and stop making cars altogether….

And this is how this sort of emphasis on the mechanism of Twitter strikes me; people get way too wound up with the phenomena and culture and technology of Twitter rather than the function – and the function of Twitter is to allow rapid, succinct communication and conversation between people.  Or even between people and other computer programs!  But the emphasis is on communication and conversation – and when we start emphasising the possibility of a re-tweet over the quality of content, we are in danger of making Twitter more ‘gimmicky’ – something that is not good.

So, for what it’s worth – use that character allowance for the purpose it was originally given to us: to communicate.  Giving up 30% of available space for possible re-tweets seems pointless.  What matters is what you say, not necessarily how many times it gets re-tweeted.  The ultimate re-tweetable message, according to some folks, would be a single word – don’t let the usefulness of Twitter be compromised by ego.

Sturgeon’s Law

Following on from a recent post when I commented on the quality of Web 2.0 ‘user generated content’ I started thinking about the continued validity of Sturgeon’s Law – usually stated as ‘90% of everything is crap’.  When it was first formulated, the vast majority of the consumers of science fiction – the genre to which it originally applied – were protected from most of the crap by the editors.  (Having said that, the magazines of those far off days still contained a reasonable amount of stuff that could be described as ‘less than brilliant’…but that’s another story!!)

In my own view, I think Sturgeon’s Law is slightly out of date now – I’d probably suggest that the figure is closer to 95%, and what is worse is that:

  1. Web 2.0 allows much more of it to come through to the web-using public.
  2. The demand of satellite TV, Cable TV, etc. for new content has again reduced the quality threshold, allowing more stuff through that, to be honest, just isn’t up to the mark.
  3. The situation is almost certainly going to get worse; it’s increasingly difficult to apply any critique of quality to produced media without being accused of being elitist.

Is there an answer?  I certainly hope so; I have a good many years of life ahead of me and I hope that some of the time will be filled with entertainment that makes me laugh, cry and think.  I want to be provoked; I don’t want media to slide down to a lowest common denominator value or simply be inferior re-hashes of past glories.

The bottom line is that, whether we like it or not, we have to reintroduce the old concepts of judging value; of estimating and rewarding quality, even if this means we have to produce material that is regarded as too intelligent or challenging by some.   It may mean that sacred cows are killed – I have frequently commented, for example, that some otherwise excellent scripts of the most recent incarnation of the TV science fiction series ‘Dr Who’ were ruined in parts by the writers bringing in politically correct characters and dialogue that absolutely jarred.  It might also mean that we have LESS content; I for one would prefer to have less entertainment and media of a higher quality and production standard.

The answer is almost certainly not technical; there is too much content produced, and we don’t have a technical means of grading material based on such subjective and culturally loaded terms as ‘quality’, ‘taste’ and ‘entertaining’.  Maybe we all need to ‘up our game’ and be less forgiving of stuff that seems slipshod and hastily put together to meet a marketing demographic.  Perhaps we need more editorial input on our web forums – there will always be cries of ‘censorship’ and freedom of speech when you do this, but perhaps it’s the first step on the path to breaking Sturgeon’s Law.

The problem with Tweeted Wisdom….

Like many of us on Twitter, I follow a number of Twitter users who post aphorisms, quotes, sayings, etc.  A sort of electronic review of the ‘Wisdom Literature’ of the last 2000 years.  This can be pretty cool; I do wish that some folks would post their tweets across the day rather than in large floods, but, hey, it’s tolerable.

However, I recently started wondering about aphorisms in general – just how much wisdom can you cram into 140 characters?  There is a lot of really smart stuff that gets posted, but just how much of it ‘sticks’ with us – indeed, how much of it is actually thought about by the people who post the wit and wisdom?

Don’t get me wrong – there is quite a bit of good stuff that comes up.  My main issue is just how much we think about what we see – indeed, how much time do we have to think about what’s presented to us in the Twitter-stream?  After all, Twitter is fast and ephemeral – hardly a suitable medium for something designed to stimulate thought and insight.  There is a serious risk when we start delivering and consuming ‘bite sized’ wisdom literature: the interpretation and assimilation of what we read gets forgotten about.

The whole idea of ‘wisdom literature’ is that it delivers something for us to chew on; it’s not a finishing point, it’s actually a starting point from which each of us may trace our own journey.  There is a Christian practice called Lectio Divina – literally ‘divine reading’ – which is based around reading a piece of spiritual writing – maybe scripture, maybe something generally spiritual – and then studying it, pondering on it, interpreting it and using it as a basis for prayer or other worship.  And this is a process that takes time, and isn’t rushed.  While a piece used in Lectio Divina might easily be short enough to encompass in a Tweet, the time taken to interpret it certainly isn’t ‘Twitter-Time’.

Twitter is a great medium for certain types of message, but I am starting to wonder whether it’s a valid medium for wisdom literature; I toyed with the idea of launching a ‘blog’ type site last year based around publishing a suitable quotation each day and writing a short piece based around my own thoughts on that topic – but then ditched the idea after a week or two because I realised I was subjecting others to my own interpretation.

At least Twitter removes the ego from the posting of such literature quotes; there’s no space to post an interpretation, after all!!  But Twitter reduces everything submitted to it to something that exists in the reader’s ‘window of opportunity’ for just a few minutes before it’s forgotten.  Is that really how to treat this type of post?

The end of 6 Music

So, the BBC are going to close down 6 Music – which will be a great shame, as it’s one of the few stations around that plays a good mix of contemporary and past music AND has presenters who are knowledgeable about music and have a genuine love and passion for it.  Which is rare in this day and age of pre-packaged poppets of either sex whose main claim to fame is that they’re currently ‘in the public eye’ because of who they’re seen with or where they’re seen.

The cuts announced by Mark Thompson to the Corporation’s £3.5 billion budget may be politically motivated or commercially motivated, depending upon who you listen to.  They may be a ‘stalking horse’ to try and coax the Government into giving the BBC more money, and won’t be pushed through.  They may be designed to soften up the public and make them willing to pay higher licence fees to keep services.  There are any number of possible reasons floating around the blogosphere right now, as well as the stated reason of focusing the BBC’s resources on what are called ‘core functions’.

I’m not going to get into the other aspects of the restructuring; I’m just going to focus on 6 Music and try to bring its cost into perspective.  It costs about £9 million a year to keep the station running, and there are some useful comparisons of ‘cost per listener’ of the BBC’s digital stations here.  In terms of pure cost per listener, Radio 1Xtra and the Asian Network cost considerably more.

£9 million is a little over half the cost of the original deal (ending in July 2010) with Jonathan Ross for his services to the BBC – £17 million over three years.  Graham Norton has just signed a two-year deal with the BBC for a total of £4 million.  Thompson’s salary is £800,000 a year.  Take the opportunity to read around about the expenses culture at the BBC – again, you’ll find that an awful lot of the licence fee seems to be spent on things a long way away from the provision of programmes.

The cost of 6 Music is small fry for the BBC – at £9 million out of a £3.5 billion budget, it’s roughly 0.26% of the total, almost a rounding error in the BBC’s scheme of things.  To cut the service will do the BBC no good at all.  It’s such a fundamental misjudgement that I am starting to wonder whether the ‘conspiracy theorists’ are right and we may soon be told by Thompson that it was all a mistake and that 6 Music will not be scrapped after all.  A lot of the listenership of 6 Music is vociferous and media-savvy, and there are many alternative media sources available for people today.  The BBC’s repeated treatment of licence payers as a cash cow that need not be listened to can only go on for so long before a backlash starts, and this round of changes might just be the thing to trigger it.

Web 2.0 – User Generated Content or garbage?

Some months ago, an Internet Forum that I belonged to was taken offline after an internal dispute… and it never came back.  The upshot was that the content of the forum was no longer available – gone for good.  Of course, it wasn’t all pearls of ever-lasting wisdom, but there was some interesting stuff there that’s now gone forever.  A week or so ago, another friend commented on my Facebook profile about the ephemeral nature of a lot of what we put online as ‘User Generated Content’, and its quality, and that got me thinking about just how much user generated content is worthy of any form of retention.

‘Web 2.0’ is very much about user generated content; a Web 2.0 site is essentially defined by the interaction that it offers users – be it the ability to configure the user experience, participate in discussions, chat in real time, or post articles or images.  For those of us from the 70s and 80s, it’s all very reminiscent of the paper-based fanzines and newsletters we created, or the BBS systems of the 1980s and 1990s – though of course the sheer volume and speed of communication offered by Web 2.0 far exceeds those earlier versions of ‘user generated content’.

One might even include things like ‘Letters to the Editor’ in newspapers and magazines – how many of us knew someone who’d had a letter published in the local, or even national, press?  And then you get in to the rarer scenario of having an article, poem or story accepted for publication – and getting paid for it.  I still remember all the details of the first article that I had published in 1982 in the now defunct magazine ‘Electronics and Computing Monthly’.

The further you go back, the more important one thing becomes – and that’s editorial filtering.  Basically, space was limited in magazines, and so you wanted to fill it with what would sell.  And that’s where the quality control of the editor came in.  Even with fanzines, there was a similar need – you had a limited amount of space dictated by the cost of copying, postage and the time taken to type and duplicate it all.

Today, many of these limitations are gone – the cost of publication is minimal, distribution is done by the reader picking their copy up from your site, and so on.  Anyone can set up a publication in the form of a site and expect to get a lot of content from users of the site.  In theory, a perfect world of conversation between similarly minded people across the globe, with no editor getting in the way and dictating policy.  It’s a wonderful dream.  And it doesn’t work.

To be honest, most people are just not up to the job of writing for an audience; the editor didn’t introduce censorship – he or she brought along quality control, focus and direction for the publication.  I’m far from perfect myself, but I learnt quite a bit about writing for an audience by having a couple of hundred articles and a dozen or so books published in the days of the ‘paper press’.  If we set aside the obvious nonsense that turns up as comment on blogs – the spam, the ‘me too’ and ‘I agree’ posts – then much of what does end up online is often poorly phrased rant or loosely disguised ‘advertorial’.  A lot of content on sites such as Facebook, Twitter and the online discussion forums is by its nature ephemeral – water cooler discussions enshrined in hard disc space – and the good stuff that you do find is typically drowned in the noise.

Like I said, I’m far from perfect and am conscious enough of my own abilities to know that my blog is simply the 2010 equivalent of a fanzine written by me and with a small audience.  But it’s important that we don’t get fixated on the idea that the removal of editorial policy on the web and the resultant ‘free for all’ for people to provide content is necessarily good.

It isn’t.  It’s removed quality control, and generated a Web that is increasingly full of rubbish.  If you want quality – look for sites with editorial policy or moderation.

The PAYG Laptop?

You write one article about Appliance Computing and the following morning this BBC story pops up – ‘Laptop launched to aid computer novices’.  The ‘Alex’, a Linux-based laptop, is aimed at people who’re occasional computer users and comes with an Office suite, mail, browser, broadband connection and a monthly fee.  In other words, a PAYG laptop.  There’s nothing new about this; a number of mobile phone companies offer mobile broadband access packages that include a Windows laptop, and in the recent past a few companies have attempted to launch similar schemes, sometimes backed with advertising.

I say attempted, because they’ve tended not to work, and I’m not at all convinced that this one will be any more successful.  The company’s website describes the package available here, and to be honest it does seem rather over-priced for what is a modified and stripped down Ubuntu distro – and one that seems to work only when your broadband connection is running.  It’s a good business model, provided that you can get people to buy in to it.  There’s a review of the package to be read here.

Now, first question – who is the market?  The broadband company who’ve developed this package claim that almost 25% of people in the UK with computers don’t know how to use them.  Really?  That I find difficult to believe.  Most folks I know – across the board: non-techies, techies, old, young, whatever – are quite au fait with using their computer to do what they want to do.  There may be aspects of computing that they don’t get, in the same way that I don’t ‘get’ iTunes, for example, or the intricacies of computer or video gaming, but I know no-one who’s bought a computer who doesn’t make some use of it.  Perhaps that 25% didn’t really want a computer, or have ended up with one totally unsuitable for them?

If the market sector is this 25%, then what proportion are willing to pay for a £400 computer and a £10 monthly access fee?  Apparently a ‘software only’ option, which can be installed on older computers and will simply cost you the monthly fee, is out in the next couple of months, which might allow people with older machines to make use of them.  The package comes with 10Gb of online storage; does this mean that local storage is not available?  If so, what happens to your data if you don’t pay your monthly fee or cancel your subscription?  To be honest, that sounds like something of a lock-in akin to Google Docs.  According to this review, on stopping the subscription the PC effectively ‘expires’ – along with access to your data.

I’m afraid that from what I can see I’m not impressed with either the environment or the limitations imposed.  One of the things that you learn after a while in putting together user interfaces is that people who come in knowing nothing soon gather skills, and in some cases start finding the ‘simple interface’ that originally attracted them to be a limitation.  With a standard PC, you just start using more advanced programs and facilities; with something like the Alex you’re stuck with what you’re given.  And whilst you could just buy a PC and ask someone to set it up ‘simple’ for you (to be honest, it isn’t THAT difficult with a Windows PC, Mac or Linux machine if you ask around) and use a more ‘mainstream’ machine, you’re still stuck with your data being locked in to the Alex environment.

The solution to this problem is perhaps to look at front ends that sit on existing platforms, rather than work to further facilitate the move towards a computer appliance future split between a large number of manufacturers who lock us in to proprietary data stores.

PCC, Stephen Gately and liberal backlash

The Press Complaints Commission has decided not to uphold complaints about an article by Jan Moir on the circumstances surrounding Stephen Gately’s death.  I’m not going to rehash the details of the case – a quick Google will allow you to find the original article – but my main interest is in some of the comments that have floated up on Twitter and other web sites about the findings of the PCC.  The PCC did indeed receive a record number of complaints – 25,000 – about the column, and there was a fairly hefty campaign mounted over social networks such as Twitter to encourage people who felt strongly to complain.  The newspaper concerned, The Mail on Sunday, dodged censure:

PCC chairwoman Baroness Buscombe said the commission found the article “in many areas extremely distasteful” but that the Mail had escaped censure because it “just failed to cross the line”.

The PCC had considered context and “the extent to which newspaper columnists should be free to publish what many will see as unpalatable and unpleasant stories”.

Two complaints to the Metropolitan Police that were passed to the Crown Prosecution Service were also rejected as grounds for prosecution, because of insufficient evidence that the piece breached the law.

Jan Moir’s piece was ill-timed, and some of her comments were hurtful to some people.  I guess that there were those who found the piece upsetting who didn’t complain, and that there were probably quite a few people who wholeheartedly agreed with what she had to say; after all, complaints procedures rarely get support.  But, as they say, process has been carried out and judgement brought in by the PCC and the CPS, and in many ways that should be the end of it – whether you agree with the outcome or not.

Having said that, I wasn’t surprised today when I saw a fair amount of blather on Twitter from the ‘chattering classes’ referring to the PCC judgement, starting off by saying that as the editor of the Mail on Sunday is on the PCC, the verdict is immediately biased.  I guess that’s to be expected.  We then went into slightly disturbing territory, with a Tweet I came across along the lines that the Tweeter didn’t want to censor comment, but felt that something was needed to rein in columnists from claiming authority they didn’t have.  There’s also this debate on the BBC’s own web site.  Now, why do I find that tweet rather disturbing?

It’s all in the wording.  Where does ‘claiming authority’ start and end?  Do we apply it across the board?  Do you have to be a political scientist to talk about politics?  A GP to write medical articles?  A physicist to comment on the LHC?  And what about us bloggers?  Do we have to be ‘in with the in crowd’ before we can comment on the activities of celebrities?  Do I have to have a degree in economics before I can comment on the parlous state of the UK economy?  Should we need a licence to comment?

I’m sorry – but a good columnist SHOULD occasionally say something that pisses people off; one shouldn’t be personally offensive or abusive, but the sacred cows of modern society should be up for comment.  Once you start down the road of ‘reining in’ columnists, it’s the thin end of the wedge towards full-blown censorship.  Would there have been so much fuss from the media and liberal intelligentsia were the column about the death of a young ‘smack rat’ in similar circumstances?  I very much doubt it; I fear that a lot of the reaction here has been about the death of ‘one of their own’ in what must be described as unusual circumstances – unusual in my experience, anyway.

 There’s an old saying that someone stays liberal on law and order until they get mugged or burgled; perhaps we might expand that to suggest that some people stay liberal on freedom of speech until someone dares to use it to say something they disagree with.