Tethering a Blackberry to a PC

This is one of those ‘good to try, might be useful’ sort of things that I’ve been intending to try for some time.  First of all, a couple of caveats – some service providers don’t like you doing this, and almost all of them charge you extra for the privilege.  So, regard this as an emergency measure for use when all other connection methods aren’t available.

Or, like me, you decided to do it because ‘it might one day be useful’!

So, what’s tethering? It’s using the phone’s built-in modem – the one the phone itself uses to talk to the Internet – to give your computer an Internet connection via the phone.  In this post I’ll go through the steps I took to connect my Vista laptop to the Internet via my Blackberry, using BT’s network.  As always – it worked for me, but don’t blame me if it all goes horribly wrong – proceed at your own risk!  You will need:

  • A Blackberry with up to date software.
  • A laptop running up to date Blackberry Desktop software. 
  • A PC to Blackberry USB cable.

First of all, disconnect whatever network connection you currently have running on your PC.  This is most easily done by disconnecting the network cable or turning off (or disconnecting) your WiFi connection.

Now, connect your Blackberry to the PC using the USB cable.  On your PC, run the ‘Blackberry Desktop’ program.  This bit is essential; you can’t make use of the Blackberry’s modem unless the Desktop Manager program is running.

On the computer, open up Control Panel->Phone & Modem Options.  On the Modem tab you should see a new ‘Standard Modem’ added – on my PC it was listed as attached to COM6, although it occasionally appears on COM11 instead.  Now go to Properties->Diagnostics and press the Query Modem button – you should see a list of responses from the Blackberry.  The contents are not too important – the main thing is that you get something back, rather than a ‘No Response’ error or a blank dialogue.

Now click Properties->Advanced and enter the following in to the initialisation command box:

+cgdcont=1,"IP","btmobile.bt.com"

The Blackberry Modem is now configured.  The next stage is to set up a Connection to the Internet.
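Incidentally, that initialisation string is the standard GSM ‘+CGDCONT’ AT command, which tells the modem which APN (Access Point Name) to open a packet data session against.  A quick sketch of its shape – the non-BT APN below is just a placeholder, not a real one:

```python
def pdp_init_string(apn, context_id=1, pdp_type="IP"):
    """Build the +CGDCONT initialisation command that defines a
    packet-data (PDP) context for the given APN -- this is the
    string that goes in the modem's initialisation command box."""
    return f'+cgdcont={context_id},"{pdp_type}","{apn}"'

# BT's APN, as used in this post:
print(pdp_init_string("btmobile.bt.com"))
# -> +cgdcont=1,"IP","btmobile.bt.com"

# For another network, substitute that provider's APN
# ("example.apn" is a made-up placeholder -- check your provider):
print(pdp_init_string("example.apn"))
```

The quotes matter, by the way – if you paste the command in from a web page, make sure they’re plain straight quotes rather than the curly ones that word processors like to substitute.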

Create a new Internet connection via Start->Connect To->Show all connections->Create a new connection.  Select ‘Connect to the Internet’ and then the ‘Set up my connection manually’ option, then Next.  Then, select ‘Connect using a dial-up modem’ and Next, then give the connection a name such as “Blackberry Modem”, then Next.  Now, enter the following:

  • Number :  *99#
  • User name : bt
  • Password : bt

And that’s that!  Save the connection and, to test it, just connect to the Internet using your newly created connection.  There are two things to note – in most circumstances it won’t be as fast as your normal WiFi / Broadband connection, and you will almost certainly be charged by the volume of data that you transfer.  For example, BT’s rates are here.

If you want to try this on another network, this page may be useful.

Apple – why 2014 could be like 1984

Back in 1984, Apple had Ridley Scott direct a very imaginative advert to launch the Macintosh computer.  It ran twice – once on a small TV station late at night to get it in the running for some awards, and the second time at half time in the Super Bowl American Football game on 22nd January 1984.  And it never ran again.  The message from Apple was that their new machine would shatter the conformity that people like IBM (and by extension Microsoft) were putting on the computer market, by making computing available to the masses.

The advertisement ends with the line:

 “On January 24th, Apple Computer will introduce Macintosh. And you’ll see why 1984 won’t be like ‘1984’.”

The problem was that the Macintosh was so expensive that few people could afford it.  It was a pain in the rear to write software for – so relatively few folks wrote software for it, especially as the market was small compared to that offered by the PC.  As it turned out, 1984 wasn’t at all like ‘1984’, but no thanks to the Macintosh, which even today, in all its forms, occupies only around 10% of the computer operating system market, even if you include iPhones.

From day one, there was always something ‘control freak’ about the Macintosh, all of its successors, the iPhone and now the iPad.  As I mentioned above, the original Macintoshes were not easy to write software for, and Apple didn’t make life easy for developers.  The situation persists today; to write software for an iPhone, iPod or iPad, you have to run the emulator kit on a Macintosh of some sort.  Let’s do a quick comparison – if I want to develop an application for my Blackberry, I download the tools from the Blackberry website and get them running on my PC running Windows.  For free.  If I want to write an application for an iPod or iPhone….I first of all have to join the Developer Program at $100 a year.  Then I can download the SDK.  To run the SDK I need a machine running Mac OSX.  Oh look…only Macs can legally run Mac OSX…very much a closed garden.

Early Macintoshes came with no network connection; obviously this is no longer the case, but it should have given us the hint that Macs were not really designed to talk to the rest of the world.  Fortunately for Apple, some of the people involved saw sense and gradually the more open Macintosh that people use today in its numerous forms came into being.  And gadgets like the iPhone, iPod and iPad emerged into the market, able to interact with the Internet and other media.

But let’s look at what this actually means.  First of all, access to applications and media for these latter machines is very much controlled by Apple in terms of:

  • Control of the means of production – make sure non-Macintosh / Apple users cannot easily develop applications.
  • Control of the means of distribution – the iTunes store; various recent high-profile cases of applications being banned from it show how difficult it can be to get applications out into the world.
  • Control of the means of communication – these devices lack the ability to easily handle ‘standard’ add-ons such as USB or cheap memory cards like SD.  iPhones have also frequently been tethered to particular telephone companies. 
  • Control of content – the fact that the iPad comes without Flash, for example, suggests that Apple are adopting a policy of attempting to control what content is usable on their kit.

Let’s ignore the stupidities around making devices reliant on rechargeable batteries that can only be changed by returning the device to the manufacturer. 

The natural progression for Apple would be to continue growing as a media and services company, rather than as a hardware house.  Buy an iPad, and rely on Apple for much of your available content and software.  And Apple can also ensure that you don’t leave the ‘walled garden’ of Apple-acceptable content by making sure that the inbuilt iPad browser doesn’t handle some common media formats like Flash.  How will they fund all this?  Easy – you’ll pay.  Apple have already stated that they are rolling out an advertising model for iPad / iPod / iPhone applications in which the application provider would get 60% of advertising revenue generated via their application – the other 40% going…well….you know where.

Control of content, hardware and communication.  2014 could very much be like 1984 if Apple gets its way.

A hint of mortality

Today Guy Kewney died of cancer.  He’d been ill with liver and bowel cancer for a year.  For those of us who got involved in personal and home computing ‘at the start’, Guy was effectively ‘Mr Personal Computer World’.  He didn’t own it, but his column was often the one we all read first.  One of Kewney’s claims to fame was that he invented the ‘Uncle Clive’ persona for Clive Sinclair – true or not I guess we’ll never know, but it did wonders for Sinclair and his machines. Kewney also had a massive amount of influence in terms of how he got a lot of folks interested in writing for the magazines – even those of us who never wrote an article for Kewney felt motivated by him.  There’s a nice piece here by Jon Honeyball, which sums up Kewney pretty well.  The sad thing is that for many people he’ll be remembered not for his journalism, but because the BBC ended up interviewing a taxi driver called Guy Goma instead of Guy Kewney a couple of years ago.  Typical BBC….

I had a certain sympathy with Kewney because he wrote the ‘NewsPrint’ section of PCW, which gave industry news – I did a similar job for a few months for a small technology newsletter, and the job almost killed me.  Guy, thanks for the articles and the inspiration.

Over the years I’ve been saddened on a number of occasions by the deaths of writers that I first encountered in my childhood or teens.  Back in 2008 I commented on the passing of Sir Arthur C Clarke, and a few weeks ago I learnt that a radio amateur called Norman Fitch, who for 21 years had written a column about VHF radio communications for the UK Amateur Radio movement’s house magazine ‘Radio Communications’, had died.  Way back in 1989, I remember reading about the death of a chap called MG Scroggie, who’d written one of the books that got me interested in amateur radio in the first place. 

When Johnny Cash died I was saddened – another part of my childhood passed away.  I guess that when people that we grew up knowing, or those that are our contemporaries, die, it’s a constant reminder of our own mortality.

And oddly enough, whilst I’ve been writing this piece, I heard that Malcolm McLaren, one time manager of the Sex Pistols and arguably the creator of much of the UK Punk scene – and very much a figure from my own teens – has also died today at the age of 64.

Too few experiments in school science

I’m a science geek – always have been, always will be.  When I was a kid I had microscopes, telescopes, chemistry sets – anything that allowed me to do experiments.  By the time I went to secondary school I was already pretty practically inclined in the laboratory, having done quite a few of the experiments that I was expected to do at school in the garden shed at home.  Fortunately I managed to avoid explosions, poisoning, fire and accidentally opening portals to other universes a la Fringe.  I appreciate that I was lucky in having parents and an aunt and uncle who actively supported my interest in matters scientific.

Articles like this from the BBC, noting that there is inadequate experimental science done in schools, sadden me greatly.  In the early 1980s I was involved with writing computer software for schools.  It was suggested back then that ‘virtual labs’ could replace some of the practical work carried out, saving money, reducing the need for equipment and also offering health and safety advantages.  I was quite a supporter of this idea for a while – thankfully some of my colleagues talked me out of it.  They were wise enough to realise that so much of science education is the tactile, the experiential – the smells, sounds and sights of experimentation. 

It’s easy to think that there is little value in repeating ‘classic’ experiments – after all, the answer is already known!  However, the importance is in understanding what theories the experimental results support and in learning how to actually do an experiment – the theory and practice of the scientific method.   And there’s enormous value to be obtained in experiments when, despite care and attention, the results aren’t what’s expected – that is when true scientific investigation can begin at any age.

Unless we do something to re-discover the rich practical experiences offered to science pupils 20 or 30 years ago, it’s inevitable that the standing of this country in terms of research and industry will falter.  We cannot build a modern scientific and technological economy based purely on the ‘soft science’ that seems to be offered in today’s classrooms.  Whilst it’s useful to be able to debate the pros and cons of social policies on scientific issues, it’s equally important to be able to identify fallacies in scientific arguments, and perhaps even put together simple experiments to demonstrate complex issues – after all, ‘hands on’ experiences tend to cement learning.

A breathtaking example of how simple, practical science brings home concepts was given by the late Richard Feynman during the enquiry into the explosion that destroyed the Challenger space shuttle.  In a simple experiment involving ice water and a piece of rubber, he showed that at low temperatures the rubber (which was the material used as O-ring seals on the booster rockets of the Challenger) became hard and distinctly un-rubbery, and was no longer fit for purpose.  He cut through months of bullshit in 5 minutes, in an experiment of elegant simplicity and with a little showmanship.  The perfect demonstration of scientific principles applied to solving a major engineering disaster.

My own contribution to trying to make science a more practical business for both school and home is a new web site I’m starting up called Hands On Science.  It’s hopefully going to be full of experiments and demonstrations that can be done with the minimum of equipment but that demonstrate in an interesting way many scientific principles.  It’s only just started up, but I’d welcome comments over the weeks to come – and ideas!

The ‘father’ of home computers dies…

A few days ago one of the pioneers of the home computer revolution of the 1970s died.  Ed Roberts, an MD in Georgia, died after a long battle with pneumonia.  Back in the 1970s his company, MITS, moved from model rocket telemetry, to calculators, then to building the first ‘computer kit’ – the Altair 8800 – for which Bill Gates and Paul Allen provided a BASIC interpreter.  The Linux and Apple fanbois amongst you now know who to blame for Microsoft… 🙂

It’s debatable whether, without the Altair 8800, another home computer – in kit or ready-built form – would have come along as soon as it did.  The Apple 2 followed behind the Altair, as did many other similar machines, but the Altair was first.

The Altair 8800 was basically a microprocessor chip with enough associated ‘gubbins’ to make it work – it could be chipped up to have 8k of memory (my laptop here has 4,000,000k) and could even handle a keyboard and eventually a video display – although when you got it out of the box (and after you’d soldered the thing together) its user interface was a bank of toggle switches and some LEDs.

Yup – you programmed it, entered data and read the output in binary.  It was safe to say that in the mid 1970s, as far as computers were concerned, men were real men, women were real women, and real programmers did it in binary with a soldering iron tucked behind their ear. The fact that within 10 years of the Altair being launched teenagers were typing their own programs in to Spectrums, ZX-81s, BBC Micros, Apples and the rest is a monument to the excitement and speed of those early days of computing.

And, by golly, it was FUN! Even the act of getting your computer working in the first place was part of the game – you learnt to code in machine code from day one because either nothing else was available or you realised that in order to make anything useful happen with only a few HUNDRED bytes of memory you needed to write VERY ‘tight’ code.

I built my first computer in the mid-1970s – well, not so much a computer as a programmable calculator.  I took an electronic calculator and wired up the keyboard to some circuitry of my own invention that mimicked keypresses.  Programming this beast involved changing the wiring in my circuit – running the program involved pressing a button, and after a few seconds the answer would appear.  I then got even smarter, and managed to work out how to introduce some decision making into my gadget.  Fortunately, I blew up the output of the calculator soon afterwards – I say fortunately because I then found out about microprocessors and ended up building some simple computer circuits around 6800 and Z80 microprocessors, rather than carrying on with my rather ‘steampunk’ programmable calculator!

Ed Roberts’s machine wasn’t an option for me; my pocket money wouldn’t cover the postage from the US.  But the fact that people were doing this sort of thing was very exciting, and by the time I left university in 1982 I’d already spent time with ZX81s and Apple 2s, and had written my first article for the home computer press – a machine code monitor and loader program for the ZX81 in ‘Electronics and Computing Monthly’.  I was reading in the magazines about the developments of software from up and coming companies like Microsoft – even in those pre-PC days – and for a few years in the early 1980s the computing field in the UK was a mish-mash of different machines, kits, ready made stuff – and most people buying these machines bought them to program them.  How different to today.

I have to say that I’ve always thought that the fun went out of home computing when the PC came along, and when Microsoft and Apple stopped being ‘blokes in garages’ and started being real companies.

Ed Roberts – thank you for those fun packed years!

Social Search…waste of time?

I’m a big user of search engines.  Despite my grumblings and pontifications on here about Google, I still use them the most because they’re still the best out there.  I hope that Bing – despite the daft name – will one day come to challenge Google, but until then, I just Google.  It’s been interesting recently to see Tweets start appearing in search results, and I’ve commented in this blog on the topic.  The most recent work being done by Google that they feel will improve the search experience for us all is explored in this piece from the BBC, and I’m particularly interested in the comments made about ‘Social Search’.

First of all, what is Social Search? 

My definition of a true Social Search tool is one that would give weight to a number of different aspects when searching.  These would include:

  • The normal search criteria as entered in to any search engine that you care to use.
  • Your location, intelligently applied to any searches that might be expected to have a geographical aspect to them.
  • A weighting in favour of results that match your search criteria and were placed on the Internet by people or organisations within your personal or professional network.

To give an example – you do a search for restaurants.  The search engine makes a guess about your location based on previous searches, geocoding based on your IP address or, coming real soon, tagging provided with the search request specifying your location based on a GPS in the device that you’re using for the search.  The search engine then determines whether your ‘friends’ have done similar searches, whether they’ve done any reviews or blog posts about restaurants in the area, posted photos to Flickr, or are actually Tweeting FROM a restaurant as you search, whatever.  The results are then returned for you – and ideally would be tailored to your particular situation as understood by the search engine.
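Google haven’t published how (or whether) they’d weight any of this, but the idea can be caricatured in a few lines of Python – every field name and weight below is invented purely for illustration:

```python
def social_score(result, query_score, user_location, friends,
                 w_relevance=1.0, w_location=0.5, w_social=0.8):
    """Toy ranking: blend ordinary search relevance with a bonus for
    results near the user and a boost for results that the user's
    friends have reviewed.  All weights are arbitrary."""
    location_bonus = 1.0 if result.get("location") == user_location else 0.0
    social_bonus = sum(1.0 for f in friends
                       if f in result.get("reviewed_by", []))
    return (w_relevance * query_score
            + w_location * location_bonus
            + w_social * social_bonus)

# Two hypothetical restaurant results with the same raw relevance:
a = {"location": "Leeds", "reviewed_by": ["alice"]}
b = {"location": "Leeds", "reviewed_by": []}
print(social_score(a, 0.9, "Leeds", ["alice", "bob"]))  # friend-reviewed
print(social_score(b, 0.9, "Leeds", ["alice", "bob"]))  # scores lower
```

The interesting (and worrying) design choice is in those weights: crank up the social weighting and your friends’ habits start to dominate what you see.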

And this is roughly what the Google Social Search folks are looking at.

“….returns information posted by friends such as photos, blog posts and status updates on social networking sites.

It is currently only available in the US and will be coming to the rest of the world soon.

Maureen Heymans, technical lead at Google, said this kind of search means the information offered is personal to the user.

“When I’m looking for a restaurant, I’ll probably find a bunch of reviews from experts and it’s really useful information.

“But getting a review from a friend can be even better because I trust them and I know their tastes. Also I can contact them and ask for more information,” she said.

In future users’ social circles could provide them with the answers they seek, as long as individuals are prepared to make those connections public.”

Of course, the million (or multi-billion) dollar question is how far people are prepared to go in making their networks available to search engine companies in such a way that results can be cross-referenced like this.  Once upon a time I’d have said that folks wouldn’t, as they value their privacy, but today I’m not so sure.  Given that we have seen sites where people share details about credit card purchases, I’m not convinced that people value their privacy enough to stop this sort of application taking off, at least amongst the ‘digital elites’.

Of course, hopefully it will be up to us whether we participate in using Social Search – I guess all of us who blog or Tweet will find our musings being used as ‘search fodder’ unless we opt out of making our contributions searchable.  Will I use Social Search?  If it’s at all possible to opt out, No.  And here’s why.

Because I doubt the results will be as relevant to me as Google and all the other potential providers of Social Search think they will be.  Let’s face it – these companies will not be doing it for nothing – somewhere along the way the ‘database of intentions’ will be supplemented and modified based upon the searches carried out, and such information is a goldmine to marketers and advertisers.

But the relevance to me?  I’m yet to be convinced – and here’s why.

If I really want the opinions of my friends, family and occasional business contacts on what I eat, wear, watch or listen to then I’ll ask them directly.  Just because I know someone doesn’t mean that I share any similarity in viewpoint or preferences at all.  I have friends with very different interests – Christians, Muslims, Jews, Buddhists, Agnostics  and Atheists, people from the political left and right, party animals and stay at homes…the differentiation goes on.  This is because I pick my friends based on what they’re like as people – not necessarily because they share interests or beliefs.  As it happens, I’m occasionally quietly offended by what some of my online friends say – but that’s life.  We don’t always have to agree or share the same beliefs.  

Therefore, the idea of biasing my search results based on what people I know search for, prefer or comment on is potentially useless.  If I wish to know what my friends think or say – I’ll talk to them, email them or read their tweets / blogs / whatever directly. 

I feel there’s also a serious risk of ‘crystallisation’ of beliefs – a sort of friendship groupthink emerging.  Think of what it was like when you were 13 years old and spotty.  For many teenagers it matters to be ‘in with the in-crowd’; Social Search could contribute to the return of that sort of belief structure amongst peer groups.  By its nature, the people who will be ‘opinion leaders’ in your Social Search universe will be those friends who are most online and who share the most.  Their activities will hence bias the results returned in Social Search.  It might not be such a problem for them, though – people who have a high Social Search presence will undoubtedly come to the attention of advertisers and opinion formers who might wish to make use of that ‘reputation’.

One of the great advantages of good, old-fashioned, non-social search is that you will occasionally be bowled a googly (pitched a curve ball, for my transatlantic friends!) that might lead you off into whole new areas of knowledge.  You may be prompted to try something new that NONE of your friends or colleagues have heard of.  Whilst those serendipitous results will still be there, if they’re on the second page, how many of us will bother going there?  We’ll become fat and lazy and contented searchers.

So….I think I want to stay as an individual.  For now, I’ll happily turn my back on Social Search!

Why are some Open Source support people so damn rude?

Don’t get me wrong – I love Open Source software and have used some of it fairly widely in various development projects that I’ve done.   I’m also aware of the fact that people involved in the development and support of such software are typically volunteers, and on the odd occasion I have called upon people for support, I’ve always had good experiences.

I’ve also seen some absolute stinkers of ‘support’ given to other developers, in which the people who’re associated quite strongly with the software have treated people in a rude, patronising and often offensive and abusive manner.  Now, in 20+ years of dealing with IT support people – including folks like Oracle, Microsoft, Borland (showing my age) and even Zortech and Nantucket (back in the deep past!!) – I can count on the fingers of one hand the number of times I’ve had this sort of treatment from big bad commercial software houses.  It’s unfortunate that I’ve seen dozens of examples of this poor customer service from Open Source suppliers in the last couple of years.

Because even if we don’t pay, we are customers – and some of the worst behaviour I’ve seen has come from companies where users are required to pay for a licence when the software is used in commercial situations.  It’s hardly encouraging, is it?  I know it can be frustrating to answer the same question several times a day, especially when the solution is well documented, but rudeness isn’t the way forward.  After all – it doesn’t exactly encourage people to use the product, or pay for a licence – rather than persevere or even volunteer a fix, folks are more likely to just go to the next similar product on the list.

Ultimately, it boils down to this; piss off enough potential customers and people like me will write articles like this but will name names and products.

So, here are a few hopefully helpful hints to people involved in regularly supporting products and libraries.

  1. If it’s your job, you’re getting paid to do it.  If you’re a volunteer, you’ve chosen to do it.  In either case, if you don’t feel trained up enough in the interpersonal skills side of things, just be nice, and read around material on customer support.  If you don’t like doing support, then rather than taking it out on customers, quit.  Being unhappy is no reason to take it out on other people.
  2. Remember that the person asking the daft question may hold your job (or the future of your product) in their hands.  You have no idea whether they’re working on a project for a small company or a large blue chip / Government department.  Your goal is surely to get widespread adoption – the best way to do this is to make folks happy.
  3. Even if the fix IS documented in any number of places, be polite about it.  If it’s that common, then have it in your FAQs or as a ‘stock answer’.  The worst sort of response is ‘It should be obvious’.  Of course it’s obvious to you – you wrote it.  It isn’t obvious to other people.  This seems to be a particular problem with ‘bleeding edge’ developers who swallow the line that ‘the source code is the documentation’ – it may well be, but if you want your product or service to be adopted you need to get as many people as possible using it.
  4. Don’t forget that if someone perseveres with your software, through buggy bits, they may be willing to help you fix it.  The chances of you getting a helper if you are rude to them are minimal.
  5. If you get a lot of questions or confusion about the same issue, perhaps it’s time to update the FAQs or Wiki?  And don’t forget sample code – if you’re generating code libraries PLEASE provide lots of real-world examples.

And to all the nice support folks – thanks for all the help – it is appreciated!

Facebook would like you to share even more….

There’s an episode of ‘The Simpsons’ in which Lisa sets out to determine whether a hamster or Bart is the more intelligent for a school science project.  She does this by applying electric shocks to the ‘subjects’ when they attempt to feed.  The hamster soon stops trying to eat the nuts that are attached to the electrical wiring, while Bart just keeps on getting electric shocks whenever he tries to eat a slice of booby-trapped cake.

And so it seems with Facebook and privacy issues; no sooner do they navigate their way through one privacy crisis than they end up with another problem of their own construction – this time involving a new plan to allow ‘trusted third party partners’ access to information about your Facebook account.  At the moment, when you go off to a site – like a game – that connects to Facebook via the ‘Facebook Connect’ application, you’re asked if you wish to give the site permission to access data from your Facebook account that the site needs to work.  This is usually the point at which I say ‘No’ and close the browser window, I should add.  The new arrangement will be that certain sites will be given special dispensation to bypass this process and use your Facebook ‘cookie’ on your PC to identify your Facebook account, then go off to Facebook and grab details about friends, etc. without you ever agreeing to it.

Of course, there will be the option available for us to opt out of this rather high-handed approach, and by reducing the amount of information that you make available in your profile with a privacy setting of ‘Everyone’ you’ll be able to restrict what data is presented anyway.  But it does appear that this, combined with the recent changes to default privacy settings that made ‘Everyone’ the standard (unless you change it), points to an increasing interest from Facebook in working out ways of:

  1. Using your facebook login and data as a ‘passport’ on to other affiliated sites.
  2. Increasing the ‘stickiness’ of Facebook – not necessarily by keeping you on the Facebook site but by keeping information about your social activities with other Facebook users going back to the Facebook site.
  3. Increasing the ‘reach’ of Facebook accounts to make them more valuable for monetising.

It’s inevitable that Facebook will want to start making some real money from the vast amounts of personal data acquired on their users; if they increase the number of ‘selected partners’ significantly then the amount of data that can be collected about behaviours of Facebook users will be vastly increased – perhaps it’s time to start remembering that you are soon going to be paying for Farmville and other such activities one way or another; it may not be a subscription, but your personal data might start showing up in all sorts of places.

You may have missed this…the day China pulled the plug.

You might have missed this.  I certainly did – but then again for the last week or two I’ve been running around like the proverbial ‘blue arsed fly’ trying to juggle a variety of personal, professional and voluntary responsibilities whilst avoiding cat-induced sleep deprivation.  Anyway…where were you when China appeared to ‘turn off’ access to Twitter, Facebook and YouTube all over the world?

Because yes, it actually happened – from sometime on Wednesday, traffic destined for the servers of these three social media giants was noticed to be going to servers based in the People’s Republic of China.   Technicians overseeing the world’s DNS systems (the ‘phone books’ of the Internet that tell servers and routers around the Internet where to send traffic) noticed this, and eventually traced it back to a node on the DNS system in Sweden that may have been either accidentally misconfigured or deliberately reconfigured by hackers.  Whatever the reason, it’s been an eye-opener: in principle, it means that any reasonably equipped government or terrorist organisation can subvert the whole routing system of the Internet – at least until the holes that allowed this to happen are secured.
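There’s no real client-side fix for subverted DNS (that’s what DNSSEC is supposed to provide), but the principle of detecting it is straightforward: compare the addresses your resolver hands back against ranges you have independent reason to trust.  A toy sketch – the ‘trusted’ ranges below are reserved documentation addresses, not any real service’s:

```python
import ipaddress

# Hypothetical allowlist of address ranges trusted for a service.
# (These are reserved documentation ranges, purely for illustration.)
TRUSTED_RANGES = [ipaddress.ip_network("192.0.2.0/24"),
                  ipaddress.ip_network("198.51.100.0/24")]

def looks_hijacked(resolved_ips, trusted=TRUSTED_RANGES):
    """Return the resolved addresses that fall outside every trusted
    range -- an empty list means nothing looks out of place."""
    return [ip for ip in resolved_ips
            if not any(ipaddress.ip_address(ip) in net for net in trusted)]

# A resolver answer that includes one unexpected address:
print(looks_hijacked(["192.0.2.10", "203.0.113.99"]))
# -> ['203.0.113.99']
```

Which is essentially what the DNS technicians did by hand: they spotted answers pointing at address space that plainly didn’t belong to the services in question.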

The nature of the Internet is such that it has always been possible to do this sort of subversion; it’s just that the Net has never been important enough to be worth worrying about until recently.    The recent kerfuffle between Google, the Government of the PRC and the US Government has put the Internet firmly on the political stage – much more prominently than took place during the Iranian disturbances last summer.  (I’ll be commenting again on Google / PRC in the next few days, but here are my previous comments on that particular story)

It’s almost certain that this was an act either ordered or condoned by the government of the People’s Republic.  Their much vaunted ‘Green Dam’ is clearly capable of acting way beyond the borders of the PRC, especially if the remote control ‘exploits’ are used to take control of PCs running the program.  This would effectively give the PRC a massive cyberwarfare potential, with every PC legally installed in the PRC being capable of taking part in a botnet.

This action very much appears to be a shot across the international community’s bows; the PRC demonstrated their ability to break the Internet.  There are ways around this intrusion, of course, and steps will be taken to deal with it, but it does show that the gloves are off in what is increasingly a battle of wills between governments wishing to restrict what their citizens can read online and those that aren’t interested.  And I’m afraid that I have to include some democratic governments – like Australia – in that list.

The Internet is a political weapon; last December I commented on how the rules of online civil unrest might be changing, as people on the receiving end of protest decided to do something about it – in that item it was Iran and Twitter.  It may well be that that was simply the beginning of ongoing efforts from repressive regimes to control the streets of cyberspace as well as the streets of their own cities.  What is important to realise is that the nature of the Internet – its flexibility, expandability, its ability to be used for things that the original creators had never even thought of – is at the root of the relative ease with which people can break it.

Unfortunately I expect the ‘powers that be’ to react to this sort of threat by using it as an excuse to tighten up various aspects of security and surveillance on the Net.  Expect legislation such as ACTA and The Digital Economy Bill to be tightened up in a ‘9/11’ style response to this act of online retaliation.

Chrome – the prissy Maiden Aunt of browsers….

I’m currently involved in developing a web application of moderate complexity using Ext to provide a ‘Web 2.0’ front end on a PHP/mySQL application.  We’ve endeavoured to make it work across a range of browsers – Firefox, IE, Opera and Chrome.  And this is the blog article in which I vent my spleen about Chrome.

Because, you see, there are some occasions when Chrome is an absolute bag of spanners that behaves in a manner that just beggars belief, and it worries me immensely.  If IE behaved in the same way that Chrome does under certain conditions then the Chrome / Google Fanbois would be lighting their torches and waving their pitchforks as they headed out towards Castle Microsoft.

Giving Chrome its due, it renders CSS well against standards, and is frequently faster than Firefox and IE in terms of delivering pages; where it does seem to be lacking is in its handling of JavaScript.  The general impression I’ve had over recent days is that Chrome is incredibly picky about how it handles JavaScript that is less than perfectly formed – hence the ‘Maiden Aunt’ jibe.  It requires everything to be very right and proper.  I understand that any browser should be expected to deal with properly structured script, but in recent years I’ve found that the major browsers tend to behave in a pretty similar manner when processing JavaScript and tend to vary in behaviour when rendering CSS – hence the fact that some sites look different in IE than they do in Firefox or Chrome.

But I’ve encountered some horrendous differences in the way in which Chrome on one side and Firefox/IE on the other handle JavaScript.  Chrome seems to be very ‘tight’ in its handling of two aspects in particular: white space and commented-out code.  I hope that the following comments might prove useful to anyone doing JavaScript development – particularly with libraries such as Ext.  Note that these issues don’t occur all the time with Chrome, but have occurred often enough to give me problems.

Watch the White Space

Chrome seems particularly sensitive to white space in places where you wouldn’t expect it to be.  For example:

  • Avoid spaces following closing braces ( } ) at the end of a .js source file.
  • Avoid spaces around ‘=’ signs in assignments. 
  • Avoid blank lines within array definitions – don’t put any blank lines after an opening ‘[‘ before data.
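To make that concrete, here’s a small sketch of an Ext-style config written in the ‘tidy’ form described above – no stray spaces after the final brace of the file, no padding around assignments beyond normal style, and no blank lines inside the array literal.  The grid config itself is a made-up example, not code from the application in question.

```javascript
// A hypothetical grid configuration, laid out in the 'safe' style:
// the array literal is kept compact, with no blank lines between
// the opening '[' and the first element.
var gridConfig = {
    title: 'Orders',
    columns: [
        { header: 'ID', dataIndex: 'id' },
        { header: 'Total', dataIndex: 'total' }
    ]
};

console.log(gridConfig.columns.length); // 2
```

Both the tidy and the untidy forms are legal JavaScript, so treat this as a defensive habit rather than a language rule.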

Watch the comment lines

The // construct used to make a line in to a comment line needs to be handled with care with Chrome.  Don’t include it in any object or array definitions – whilst it works OK in IE, it can cause major problems in Chrome.
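By way of illustration, this is the style I’ve settled on: comments kept outside the literal, or written as /* */ block comments, rather than // lines sandwiched between array elements.  Again, both forms are legal JavaScript, so this is a precaution rather than a rule; the object below is a made-up example.

```javascript
/* Column definitions for a hypothetical grid. Notes about the
   columns live in this block comment, outside the literal, rather
   than as '//' lines embedded between the elements of the array. */
var columns = [
    { header: 'Name', dataIndex: 'name' },
    { header: 'Email', dataIndex: 'email' }
];

console.log(columns.length); // 2
```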

Indications of problems

If you’re lucky you may get a straightforward JavaScript error – in this case you will at least have an idea of what’s what.  If you’re unlucky you may end up with either an apparent ‘locking up’ of Chrome or a 500 Internal Error message from your Web server.  The ‘lock up’ will frequently clear after a few minutes – the browser seems to be waiting for a timeout to take place.  When the errors do take place, I’ve found that the loading of pages featuring JavaScript errors is terminated – it can give the impression that a back end PHP or ASP.NET script has failed rather than client side script.
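One thing that can help tell a client-side failure apart from a genuine server fault is trapping script errors as early as possible.  window.onerror is a standard browser hook; the handler below is only a sketch (where you send the log, and whether you do anything cleverer than console.log, is up to you), and the typeof guard is just so the snippet can run outside a browser too.

```javascript
// A minimal error trap: log script errors with their source file and
// line, so a failing page can be traced to client-side JavaScript
// rather than blamed on the back end.
function reportScriptError(message, source, line) {
    console.log('JS error: ' + message + ' at ' + source + ':' + line);
    return false; // allow the browser's default error handling to continue
}

// Attach the handler when running in a browser.
if (typeof window !== 'undefined') {
    window.onerror = reportScriptError;
}
```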

In summary, just be aware that Chrome may not be as well behaved as one would expect.

And that’s my whine for the day over!