Left to Apple, 2014 could be like 1984…revisited…

Those of us of a certain age in IT will remember Apple’s famous TV advert for the groundbreaking ‘Macintosh’ computer back in 1984.  The advert, here on YouTube, portrayed how the Mac would free computer users from the grasp of the evil corporate computing giants (such as IBM and Microsoft) and did a lot to help Apple’s image as the ‘good (albeit expensive) guys’ of the computer world, providing computers that were fun to use, cool and trendy.

Macs were always harder to write software for than the PC.  But the ease of use and availability of high-quality software for media work, combined with a large number of users who might be regarded as ‘opinion formers’ – writers, authors, musicians and other media people – ensured that the Mac would survive.  In recent years the iPod, iPhone and iPad have created new markets for Apple products – indeed, I have an iPad on loan at the moment and I really enjoy it, despite my original qualms about the iPad.  But Apple kit has become increasingly ‘walled garden’.  I first explored this in this blog post: http://www.joepritchard.me.uk/2010/04/apple-why-2014-could-be-like-1984/, expressing concern about the way in which Apple were controlling what you viewed and accessed with the iPad.

So, what’s new? Why am I back here?

Take a look at this patent.  Its stated purpose is to allow the owners of concert or conference venues to turn off the cameras of any devices in the venue that use the technology described in the patent.  You might wonder why someone in the digital camera / video business would want to put circuitry in their cameras that allows them to be remotely disabled.  Well, if you’re a media publisher, you might be very interested indeed in being able to prevent people filming concerts and other events that you hold the rights to.  At this level – that of Digital Rights Management – it’s a useful technology, especially if, like Apple, you make money by selling media, or if you think that governments, encouraged by media companies, may consider beefing up DRM laws to protect more forms of media.

The patent relies on infra-red light to disable (or change the function of) the cameras.  Wireless signals would have range issues, or might even be defeated by the simple expedient of the camera’s user disabling WiFi.  As far as I can see, the patent works by using infra-red light coming in through the camera lens – there might be a way to filter this out, but I’m not entirely sure; suitable IR filters would probably dim and distort the colour of the image beyond usability.

Whilst the DRM issue of recording performances has been the overt driving force behind this patent, I’m more worried about how it might be used to disable cameras at demonstrations, civil unrest and the like.  Capturing footage such as that seen at the UK student demonstrations, the UK G20 evidence about the death of a passer-by, and all the footage from Egypt and Greece might no longer be possible for users of cameras fitted with such technology.  All the authorities would have to do is ‘paint’ the areas of the scene they don’t want filmed with a suitable IR signal, and that’s that – apart from any ‘old tech’ that doesn’t have this patent incorporated.  This would be a simple step – the technology to paint with IR could be as simple as a battery of high-intensity infra-red LEDs emitting the required coded signals.  One can imagine the situation: the authorities wish to violently break up a demonstration, they turn the infra-red emitters on, the phone cameras go dark, the kickings start.

Apple seem to have come a long way since their ‘freedom from authoritarian power’ beginnings in the 1970s and 1980s.  The revolution will not be televised; certainly not with Apple kit, anyway.

If it ain’t on your machine, it ain’t yours.

Yesterday I found out that Yahoo had pulled the plug on the Delicious application, amongst a few other APIs and services.  There will no doubt now be a spate of articles about how to move your content from these applications to somewhere else, and it may be that new services spring up out of the Internet eco-system to fill the gap.  But hopefully the users of these systems will have learnt a valuable lesson:

If it ain’t on your machine, you cannot rely on it being there.

This isn’t rocket science for those of us who cut our computing teeth in a pre-Cloud, pre-WWW world, but it was pointed out to me the other day that there are now large numbers of children and teenagers who have never lived in a world without the WWW.  Scary.

A couple of years ago a forum that I was an occasional contributor to shut up shop in a sudden and pretty final manner – the owner simply closed the shutters with little warning.  For me this was vaguely annoying but no biggie; for other users of the forum, though, who’d contributed some pretty substantial articles and intellectually robust commentary over a period of time, it was almost the equivalent of Edmund Blackadder having his novel burnt by Baldrick.  Of course, the site owner was perfectly within his rights to do this – free forum and all that.  But the general feeling was that a form of social contract had been broken.  However, one could easily say that the authors had not taken backups of their content…

I mothballed a forum myself a year or so back – it’s still online, all content there, but posting has been disabled.  I have to say that in these times of almost limitless server space and cheap hosting it seemed churlish to pull the work of others.

But there may well be a point at which I let the domain go or re-use it for something else.  It’s perfectly within my rights to do so, and that content will then exist only as a zipped up backup on a DVD somewhere, and anyone who posted anything there, who wants it back and didn’t take a copy will have to whistle.

And there is the issue; the ‘universal availability’ offered by browser-based applications, the Web and the Cloud means that many people no longer own their own data in anything but an intellectual property sense.  They don’t know where it is stored, they don’t know who gets to look at it, search it or mine it.  They don’t know how often it’s backed up, and simply assume that ‘someone’ will be taking care of it.  The increasing focus of operating systems on hiving off document and data storage to servers ‘out there’ in the Cloud or on the Internet (like Google’s new Chrome OS) is regarded as a great positive by those involved in Internet service related businesses – after all, it could well be the next big thing in what you can be charged for – always something folks like. 🙂

There is something rather neat, in my opinion, about having your data on your hardware, under your control.  Yes, it’s your responsibility, but we need to start regarding personal or household data in the same way previous generations have looked after old letters and photographs.  If you need to work on stuff whilst away, then why not just put the files in question on USB sticks?

And finally, data ‘out there’ is subject to the legislation and jurisdiction of whatever country the servers lie in.  You might want to look at things like the US PATRIOT Act before saving your data anywhere that crosses US jurisdiction.  Whilst you might not think you’re a terrorist or a troublemaker, the definitions these days are flexible.

Ultimately, there is something rather reassuring about having your data at home, under your roof, where the only way it can be seized or searched is when the stormtroopers kick the door in.

The Death of Google Wave

Not for Google Wave the sudden death; more a slow, drawn out lingering farewell on the life support machine of ‘development has been stopped’. I guess it gives the boys at Mountain View the opportunity to change their minds if the pressure gets too much. The demise of Wave doesn’t actually surprise me; I’m surprised that it’s lived as long as it has done.

Here’s the story of my experiences with Google Wave.

When it was first announced, I wasn’t quite sure what to make of it – a sort of mash-up of email, instant messaging, social networking, blogging and online discussion forums. I received my invitation and got signed up. I have to say that I wasn’t an early adopter – to be honest I wasn’t sure what I was going to use it for, and I’m past the stage in my life where I have to try out all new technology the day it comes out – life is way too short to be someone else’s beta-tester…

And there we hit problem number 1. I knew that Wave would not work with IE, so I signed in with Firefox, and had a few problems there as well. OK, Google, you want me to use Chrome, so I did – and I was sorely disappointed when I still couldn’t get the equivalent of a profile set up on my Wave account – the special form of wave that stores such information just wasn’t playing with me. I contacted Google technical support, scoured discussion groups and found that others experienced the same problem. I was told by Google that it was something to do with my account, but not how to deal with it. Various other folks suggested that it was ‘just one of those things’ that might get fixed at some point, but for now it was a problem that bothered some users.

OK…I could live with it.

The second thing is that getting a Wave account is rather like buying the first telephone in your circle of friends – because of the social nature of Wave, you need a few friends on it to make it worthwhile. You can use it without other folks in your network using it – but that rather misses the point. So, next, find your friends. And that was the next sticking point for most IE-using, Firefox-using non-techies that I knew – why should they bother trying to get on to a new social networking / communications / chat / mail / what-have-you system where most of their friends AREN’T?

However, I have a number of techie pals and people who’re interested in emerging technologies, so I got a few folks on board.

OK…I could live with it.

We then hit the issue of exactly what to do with Wave. For one project we did try using it to discuss design ideas and such, but we found that it was more convenient to use an existing issue / bug handling system already in place for the organisation. Another couple of people I knew attempted to kick off various waves but it just felt like we were using Wave for the sake of using Wave. I was reminded to some degree of a great piece of software (IMO) from the 1980s called Lotus Agenda – it did all sorts of clever stuff but conceptually was a mare to get your head around – but at least Lotus provided a few samples of what could be done.

And I think that this was, in the end, the thing that did for Wave as far as I was concerned – I couldn’t honestly think of an application within my circle of friends and professional contacts that couldn’t be handled better with a different tool. There’s an approach to software utility development that I often adopt, one I was taught very early in my career: build tools to do specific jobs very well – and if possible, make those tools so that they’ll talk to each other. Wave attempted to combine e-mail, social networking, instant messaging, file sharing and online discussion forums in a way that doesn’t really give the advantages of the individual technologies, but requires a change in working practice, in many cases a change of browsing software, and a cultural / behavioural change amongst participants to get them ‘on board’.

And that’s why I’m not terribly surprised that Wave hasn’t taken off; I am hopeful that if Google release the code into the wild as an Open Source project we might see some new projects spring from it. But I’m still to be convinced that the ‘Wave’ concept of multi-mode online communication all in one place is going to be popular – especially if it requires you to sign up to yet another site and maybe even change browsers.

I write software…to solve problems

Well, it’s a while since I wrote a blog post, so why not kick off with a slight bit of professional heresy.  I write software for a living; have done for over 30 years, starting with SC/MP microprocessors in my teens (yes, I was THAT sort of teenager…) and working through everything in between until now, when I spend my time split between .NET, JavaScript and PHP.

Now, why do I write code?  Well, occasionally I do it for fun, but mostly I do it for profit – my clients pay me to do it.  Actually, that’s not right.  My clients pay me to solve their problems for them using software. 

I’ve never been one of the great ‘geeks / hackers’ in life; I’m a radio amateur and electronics whizz, and the closest I ever came was in my teens and early twenties when I was fiddling with low level stuff like analogue to digital converters and the like; but pure software geekery has never been me.  I used to say to people that I was a reasonable programmer but an excellent developer; now I’m more likely to say I’m an excellent problem solver.

Don’t get me wrong; I have an active interest in my profession, from the perspective of how I can deliver better service to my clients in delivering what they want from me.  And I like to think that I write sound, efficient and effective code.  I create data structures, create objects to model those structures and business processes, create code to implement these abstracts and put something on my client’s desktop or web server that allows them, bottom line, to make more money or save more money.  I also write code that is easy to follow and maintain, that has sensible variable names, that I document and leave a pile of useful information with my client.  And I’m there for them when needed.  I love it when I get a call from a client who tells me ‘We needed to add a new feature, so we took a look at the code and documentation and we think we’ve done it right, but next time you’re in, could you give it a quick look?’ – the ultimate accolade for me – I’ve delivered code that others can pick up and run with.

I’m methodical, but don’t have what you could call a methodology; I was recently asked whether I was agile; I almost replied that I used to be but since I tore my knee cartilage a few years back I’m not as nimble as I once was.  Do I practice Extreme programming; not really, I’m more Church of England, middle of the road, myself….

I’ve started to notice that there are two broad categories of software developers: those who work for software houses or in large development teams, where words like Agile, Extreme, kaizen, dojos, user stories and sensei are the common parlance, and those who work very closely with business and organisational problems, where the usual words that define a day at the coalface are fix, solution, feature, document, debug, budget and timescale.

I like to talk to my clients in their language; I’m afraid I still work in a world where businesses have processes, not user stories; where they don’t particularly care what technique I use behind the scenes as long as I deliver working, maintainable and efficient code, to budget and on time.  I’m sure that the software house methodologies work effectively but do they provide yet another layer of obfuscation, bureaucracy and abstraction between what we do and what our clients and customers want us to do – solve their problems?

No matter how much we dress things up with Japanese words (and I speak with some knowledge and experience of Japanese culture and management) we must not lose track of what we do and why we do it; we solve problems by developing effective software systems delivered on time and to budget.  That is all our clients care about; we’re not ninjas or ronin; we’re professional programmers and problem solvers.

I guess what I’m saying to developers is don’t fetishise what you do to the point where the process becomes more important than the product.  It’s rare I have much good to say about Steve Jobs and the slavering behemoth that is Apple, but he did once say ‘Real artists ship’.  And that’s what it’s all about.

I was right to blame it on sunspots!

Early on in my consulting career – late 1980s, early 1990s – I did a lot of work for a public sector organisation.  I worked on a number of projects – this was in the days when IT consultants could still be generalists, applying their skills to whatever was needed – and tended to specialise on development of a few database applications that were centrally based and accessed over a (pre-Internet) wide area network, held together by leased lines, private cabling, etc.

All in all, a fantastic environment in which to hone your skills.  Actually, in many respects I was rather spoilt by this client – and by my first job out of university – they both gave me a rather distorted view of working life!  For a while we experienced some rather ‘odd’ problems on some of the applications running over the wide area network.  Despite our best efforts, we couldn’t actually pin down the problems – we checked software, hardware, cabling, the works.  Eventually, and half-jokingly, a colleague and I (both of us radio amateurs) decided that the problems were somehow being caused by sunspots…

Unsurprisingly, this caused gales of laughter in the office, but as far as we were concerned there was an element of logic in our proposal.  We knew that sunspots and solar activity in general have an effect on the earth’s ionosphere, and that in the past bad solar storms had knocked out telephone and communication systems.  Indeed, in the pre-Internet, pre-computer days of 1859 a major solar storm had caused incredible effects, even causing telegraph wires to carry electrical currents when all the batteries were disconnected!

This information did little to convince people around the office, so we simply did what any other self-respecting techie would do: turn things off and on, replace a few network cards and bridges, tighten connections and tweak software.  And the odd errors stopped, and we stopped worrying about it.

But over the years I’ve thought about those gremlins on numerous occasions, and it now appears that we may have been right after all.  According to this article, solar storms can cause mystery glitches in communication and computer systems.

It may be that the next time we get a big solar storm or Coronal Mass Ejection – when a massive plume of plasma and charged particles is thrown from the sun out into space – the impact will be much more than a few gremlins in the works.  Some have suggested that a storm similar to that of 1859 might cause massive damage to the electrical and communications systems of the world; indeed, some real pessimists have suggested that a BIG solar event might put us back into the pre-electronics age for decades.

Let’s hope we don’t get it…

The Movie Star and the Secret Weapon…..

This blog post was originally an article I had published in an amateur radio magazine some years ago…enjoy!  Another example of how it’s often the ‘amateurs’ who deliver the goods.

How about this for a movie script; an actress flees her homeland after it is taken over by a murderous dictatorship, and settles in the United States.  Within a few years she is well known for her films, but has also invented a secret communications method for her adopted homeland.
Far-fetched?  Well, I thought so too, until I learnt about Hedy Lamarr and her invention of spread spectrum technology.  In this article I’ll tell the story of how the team of this glamorous icon of the 1940s and her musical director came up with a technology that is widely used today in cellular phones and many other communication systems.

Hedy Lamarr was born Hedwig Eva Maria Kiesler on November 9, 1914 in the city of Vienna, Austria, at the time part of the Austro-Hungarian empire.  She married an industrialist called Fritz Mandl, and from him this highly intelligent young woman picked up a lot of information and gossip about the armaments industry with which he was involved.  Unlike her husband, who became enamoured of the Nazi party, Hedwig, who’d already started doing some acting, left for London and then went on to Hollywood to take up acting full time.  A swift name change soon followed, and Hedy Lamarr was born.  She had starred in some rather ‘risqué’ movies, particularly ‘Ecstasy’, by the time that she and her musical arranger, George Antheil, found themselves at a dinner party one evening in 1940 thinking about the unfolding European war.

Guided Weapons

The United States, then neutral, was developing a number of weapons that depended upon radio signals for guidance.  Amongst these was a guided torpedo, which could be steered towards its target by a radio signal.  However, there was a problem; any radio-guided missile had a weak link in that, given adequate warning that such missiles were in use, Nazi scientists could easily produce a radio receiver that prospective targets could use to detect the signals controlling the missile or torpedo, and a transmitter could then be used to jam the guidance system.  Indeed, the jamming signal could be very simple; it might be enough to tune a transmitter to the signal frequency and just turn it on.  As the missile approached the target, the controlling signal would weaken with distance from the guiding plane or ship, while the jamming signal from the target would get stronger.  Eventually it would overwhelm the guidance signal, with the effect that the missile would effectively become a ‘dumb’ weapon and simply carry on in a straight line past the target.
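The arithmetic behind that weak link is worth a moment.  Under a simple free-space model, received power falls off with the square of distance, so a modest jammer close to the target can out-shout a far more powerful guidance transmitter receding behind the torpedo.  The figures below are entirely invented for illustration:

```python
# Illustrative only: a simple inverse-square model of received power.
# As the torpedo closes on its target, the guidance transmitter (behind
# it, receding) weakens while the jammer (ahead, on the target) strengthens.

def received_power(tx_power_w: float, distance_m: float) -> float:
    """Relative received power under an inverse-square falloff."""
    return tx_power_w / (distance_m ** 2)

# Hypothetical figures: a 100 W guidance transmitter 5 km behind the
# torpedo versus a 10 W jammer only 500 m ahead on the target.
guidance = received_power(100.0, 5000.0)
jammer = received_power(10.0, 500.0)

# The far weaker jammer dominates purely by virtue of proximity.
print(jammer > guidance)          # True
print(round(jammer / guidance))   # 10 - the jammer is ~10x stronger here
```

Real-world propagation is messier than this, but the trend is what matters: the closer the torpedo gets, the worse the odds for the guidance signal.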

Frequency Hopping

So, what could you do?  Hedy was a smart cookie, as they say; she quickly realised that if it were possible for the guidance signal to randomly change frequency, it would be difficult for the enemy to detect the signal in the first place, and virtually impossible for them to then transmit a jamming signal that would follow it.  This ‘frequency hopping’ would need to be random and fairly frequent to prevent the enemy predicting which frequency would be used next.  Changing the frequency of the transmitted signal on such a basis would be reasonably straightforward to achieve; what was more difficult, Lamarr realised, was making sure that the receiver on the missile or torpedo was able to synchronise itself with the transmitter, so that as the transmitter changed frequency the receiver would change its receive frequency at the same time.  Don’t forget, by the way, that this was before the invention of the transistor; all radio communications depended upon valves, and the computer, even in its most rudimentary form, would not appear until three years later, and would then occupy a whole room…not the stuff you could fit in the head of a torpedo no more than two feet in diameter.

Player piano

The composer George Antheil was a friend and colleague of Lamarr’s, and due in part to his background as a composer he imagined that one possible solution to the problem of synchronising transmitter and receiver would be to incorporate into both some sort of switching mechanism that could read a ‘tape’ of instructions, a little like the punched paper rolls read by automatic ‘player pianos’.  These machines read cards or paper tape similar to those later used to program computers, and as the tape was ‘read’ through the machine, the holes in it caused musical notes to be played.  Analogously, thought Antheil, it should be possible for a tape in the transmitter to switch the transmitted frequency as it was slowly unwound through some sort of electrical switch capable of detecting holes in the tape, and for an identical tape in the receiver to switch the receiver circuits to the matching frequencies for reception.  If you had two identical tapes, unwound at the same rate, one in the transmitter and one in the receiver, you could keep the transmitter and receiver in step with each other.  Of course, any mechanical system is prone to slippage and slight losses of synchronisation, but the principle was there.

In December 1940, the concept of a communication system based upon ‘frequency hopping’ was submitted by Hedy Lamarr and George Antheil to the National Inventors Council, a US Government organisation that was co-ordinating technical developments for the war effort.  The patent, number 2,292,387, was eventually filed on June 10th 1941 and granted over a year later in August 1942, when Britain, the US and the USSR were up to their necks in the series of defeats that would only be halted at El Alamein and Stalingrad.  Now would be a very good time for a secret weapon to be developed…
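The ‘two identical tapes’ idea is easy to sketch in modern terms: both ends hold the same list of frequency slots and step through it at the same rate, so at any clock tick they agree on the channel.  The 88 slots below are a nod to the 88 keys of a player piano; the channel numbers themselves are invented for illustration:

```python
# A sketch of the Lamarr-Antheil synchronised-tape idea: transmitter and
# receiver step through identical 'tapes' of frequency slots in lockstep.

TAPE = [(step * 37) % 88 for step in range(16)]  # 16 hops over 88 slots

def channel_at(tape: list[int], tick: int) -> int:
    """Channel in use at a given clock tick, wrapping round the tape."""
    return tape[tick % len(tape)]

# Two physically separate copies of the tape agree at every tick...
tx_tape, rx_tape = list(TAPE), list(TAPE)
in_step = all(channel_at(tx_tape, t) == channel_at(rx_tape, t)
              for t in range(100))
print(in_step)  # True

# ...while a jammer parked on any single channel only ever catches a
# fraction of the hops.
jammed = sum(1 for t in range(16) if channel_at(TAPE, t) == TAPE[0])
print(jammed, "of 16 hops jammed")  # 1 of 16 hops jammed
```

The hard part in 1940 was not the schedule but keeping two mechanical ‘clocks’ ticking together, which is exactly where slippage crept in.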

The Practicalities

Unfortunately, the practicalities of setting this up would prove too difficult; the synchronising tapes would have had to be paper, and the whole technical issue of putting fairly complex electronics and mechanics into the small and rough environment of a bomb or torpedo was too much.  Lamarr and Antheil gave their patent to the US Government as part of the war effort, but their creation would have to wait almost 20 years, until the invention of the transistor and other semiconductor devices allowed the construction of practical, if crude, frequency hopping equipment – built around digital circuits that created a reproducible, but apparently random, string of electronic impulses that could switch circuitry with no moving parts.

Practical Uses

The patent lapsed in the early 1960s, at the height of the Cold War, and the US Navy immediately put the system to use, applying semiconductor technology to create a frequency hopping secure communications system.  This was the start of the military use of ‘spread spectrum’ technology, the direct descendant of Lamarr’s invention.  The technology would soon find itself used in a wide range of military communication systems, with frequency switching taking place many times a second, making it difficult for an enemy to even detect a signal; a spread spectrum signal heard on a ‘normal’ radio receiver just sounds like a slightly higher than usual level of noise on the channel.  The technology was eventually de-classified in the 1980s, just in time to be used in cellular telephone systems.  To see why it is useful, one has to consider that a lot of cellular phones are in use in the same geographical area.  It’s not really feasible for each phone to be given its own frequency, as there just aren’t enough frequencies.  Instead, cellular phones can transmit on a number of frequencies, and the frequency in use will ‘switch’ as the phone call is made and the user moves from one ‘cell’ on the cellular network to another.  The switching from frequency to frequency also reduces the effect of interference on the signal; an interfering signal that is strong on one frequency may be quite weak on another, and so although some of the signal may be lost there is a greater chance for the signal as a whole to ‘get through’.

In addition to the cellular phone, low-power spread spectrum transmitters are used in ‘wireless’ computer networks, where data is sent from portable computers to other computers by UHF or microwave radio signals.  Again, single frequencies would not be feasible in a busy office environment or city centre, so the network adapters that allow the computers to talk to one another use spread spectrum techniques to improve reliability and data security; unless you know a lot about the network, it’s quite hard to listen in and detect computer traffic on wireless networks due to the frequency hopping.

The algorithms used to control the frequency hopping in different spread spectrum systems are quite varied, depending upon the job in hand.  For example, cellular phones and wireless network cards use chips that generate a pseudo-random string of pulses.  Two devices in communication will initiate the session by exchanging enough information to set the ‘start’ position for the random pulse chain.  Provided the two systems start from the same place, they’ll stay in synchrony.  Alternatively, the message to ‘change frequency’ might actually be transmitted to the receiver as part of the transmitted signal; this approach is also used in cellular phones and wireless network cards.  Data about when to switch and what frequency to switch to is sent as a data packet.  This isn’t terribly secure, as anyone with patience and the correct equipment can log the data packets and simulate the receiver.  The ultimate in secure spread spectrum probably involves the modern equivalent of the ‘one time pad’; a CD-ROM or memory chip is used at each end, containing a string of totally random noise pulses from a natural source, like solar radio noise or noise from noise diodes.  A CD-ROM might contain enough ‘bits’ for a few dozen messages; a copy would be made and sent to the receiver site, usually under diplomatic protection.  The CD-ROM would be used for communications, and after each block of bits is used for a single message it’s never used again.  Combined with a suitable cipher system, this sort of communication is undetectable (don’t forget that the signal sounds like an increase in local noise), and even if it is detected the cipher system ensures that no one else can read the message.
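The first of those approaches – deriving the whole hop schedule from a shared starting state, so nothing about the next frequency ever goes over the air – can be sketched like this.  The channel count and the seed values are invented for illustration:

```python
import random

# A sketch of seed-synchronised frequency hopping: both ends generate
# the same pseudo-random hop schedule from a shared start state.

CHANNELS = 79  # e.g. 79 hop channels, each notionally 1 MHz wide

def hop_sequence(shared_seed: int, hops: int) -> list[int]:
    """Pseudo-random hop schedule derived only from the shared seed."""
    prng = random.Random(shared_seed)
    return [prng.randrange(CHANNELS) for _ in range(hops)]

# Exchanging only the start state at session set-up keeps both in step.
transmitter = hop_sequence(0xC0FFEE, 1000)
receiver = hop_sequence(0xC0FFEE, 1000)
print(transmitter == receiver)  # True

# An eavesdropper starting from a different state sees an unrelated walk.
eavesdropper = hop_sequence(0xBAD5EED, 1000)
print(eavesdropper == transmitter)  # False
```

Note that this only hides the schedule, not the session set-up; a real system still has to exchange or pre-share that start state securely, which is exactly the gap the ‘one time pad’ variant closes.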

And Finally……

And finally, what did Hedy and George get for all their cleverness?  Well, until the late 1990s, not much.  Apparently they never even received a formal thank-you letter from the US Government.  But before she died in 2000, Hedy Lamarr received an award from the Electronic Frontier Foundation recognising her contributions to modern computer technology, even though her invention had come more than 50 years before.  George Antheil died before he could receive such an award, but at least now the contribution of the composer and the actress to modern communications has finally been recognised.

Configuring MOWES on a USB Stick

There’s an old saying that you can never be too thin or too rich.  I’d like to add to that list – you can’t have too many web servers available on your PC.  For the non-geeks amongst you, a web server is a program that runs on a computer to ‘serve up’ web pages.  Because I write web software for part of my living, I run my own web server on my PC.  Actually, that’s not quite true…because there are two main web servers in use today – Microsoft’s IIS and Apache – I have two.  And today I decided that it would be really useful to have a web server and associated software on a USB stick that I could plug in to computers to demonstrate my web applications out on client sites.

I decided to use the MOWES installation – after all, it’s designed to run on USB sticks – and as well as the standard Apache, PHP and mySQL I decided to also install Mediawiki and WordPress.  As well as being used for demonstrations, I decided that I’d also like to have a portable Wiki to use for note taking / book research when I’m on my travels, and run a demonstration instance of WordPress.


The simplest installation involves putting a package together on the MOWES website, downloading it to your PC and installing it.  To get started with this, Google for MOWES and select what you want to install.

NOTE – when this post was written I pointed to a particular site.  That site – chsoftware.net – now reports back as a source of malware, so I’ve removed the link.

For my purposes I chose the full versions of Apache, mySQL 5, PHP 5, ImageMagick, Mediawiki, WordPress and phpMyAdmin.  This selection process is done by ticking the displayed checkboxes – if you DON’T get a list of checkboxes for the ‘New Package’ option, try the site again later – I have had this happen occasionally and it will eventually give you the ‘ticklist’ screen.

Tick the desired components and download the generated package.

Plug in your USB stick, and unzip and install the MOWES package as per their instructions.  The first thing to note here is that you may need to keep an eye out for any firewall prompts from the computer asking whether to allow the components network access.  The default settings are port 80 for the Apache web server and 3306 for mySQL.  If these aren’t open / available – especially the mySQL one – then the automatic install of the packages by the MOWES program will fail miserably.

Once you have the files installed on your memory stick, then you can configure them.


If you never intend to run the installation on any PC that has a local web server or instance of mySQL, then you don’t need to do anything else in terms of configuration. You might still like to take a look at the ‘Tidying Up’ section below.

If you ARE going to use the USB Stick on PCs that may have other web servers or mySQL instances running, then it’s time to come up with a couple of ports to use for your USB stick that other folks won’t normally use on their machines.  The precise values don’t matter too much – after all, the rest of the world won’t be trying to connect to your memory stick – but be sensible, and avoid ports used by other applications.

I eventually chose 87 for the Apache Web Server, and 4407 for mySQL – 87 fitted with my own laptop where I already have a web server at Port 80 and another one at Port 85, and I run mySQL at the standard port of 3306.  NOTE that if you run the installation using an account with restricted privileges, you may not be able to open the new ports you use.
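As an aside, if you want to check whether a candidate port is already taken on a given machine before committing to it, a few lines of Python will do the job. This is purely an illustrative helper – it’s nothing to do with MOWES itself, and the ports shown are just the examples used in this article:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on a successful connection, an errno otherwise
        return s.connect_ex((host, port)) == 0

# Check the candidate ports before configuring MOWES
for p in (87, 4407):
    print(p, "in use" if port_in_use(p) else "free")
```

Run it on the target machine; any port it reports as ‘in use’ is one to avoid for the stick.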

In order to configure the MOWES installation you’ll need a text editor of some sort – Windows Notepad will do at a push. You’ll be editing a few files on the USB stick, as follows:


httpd.conf (the Apache configuration file)

Open this file up and look for a line starting with Listen. Change the number following it to the number you’ve chosen for your Apache port – e.g. 87.

Now look for ‘ServerName’ – change the line to include the Port number – e.g. localhost:87
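Assuming a stock Apache configuration file, the two edited lines would end up looking something like this (using the example port of 87):

```apache
# Listen on the non-standard port chosen for the USB stick installation
Listen 87

# ServerName carries the same port so Apache generates correct self-referencing URLs
ServerName localhost:87
```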


php.ini (the PHP configuration file)

Open this file and find the line starting mysql.default_port. Change the port referenced in this to the port you have chosen for your mySQL installation – e.g. mysql.default_port=4407
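For example, with the port chosen above, the finished line should read:

```ini
; Tell PHP's mySQL client functions which port to connect on by default
mysql.default_port=4407
```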


my.ini (the mySQL configuration file)

Open the file and look for two lines like port=3306 – there will be one in the [client] section and one in the [server] section. Change the port number in both to the one you have chosen – e.g. port=4407.
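After the edit, the two relevant sections of the file should look something like this (other settings in each section omitted):

```ini
[client]
; Port the mySQL client tools connect on
port=4407

[server]
; Port the mySQL server listens on
port=4407
```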


config.inc.php (the phpMyAdmin configuration file)

This is the configuration file for the phpMyAdmin program that provides a graphical user interface on to the mySQL database. Look for a line that starts with $cfg['Servers'][$i]['port'] and replace the port number in the line with (in this example) 4407.
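The finished line would look something like this – the surrounding lines of the file are left untouched:

```php
// phpMyAdmin needs to connect to mySQL on the relocated port
$cfg['Servers'][$i]['port'] = '4407';
```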

And that, as they say, is that for the configuration files.  You can now start up the MOWES server system by running the mowes.exe program.  If all is working, after a few seconds your web browser will be started and will load the ‘home page’ of the MOWES installation.  With the configuration carried out in this article, the browser will show the url http://localhost:87/start/ and the page displayed will show links to WordPress, Mediawiki and phpmyadmin.

WordPress Configuration

The final stage of configuration is to make a change to WordPress that allows WordPress to run on a non-standard Apache port.  This needs to be done via phpmyadmin, as it involves directly changing database entries.  Open phpmyadmin, and then open the wordpress database from the left hand menu.

Now browse the wp_options table. Find the record where option_name is ‘siteurl’ and change the option_value field to (for our example Apache port of 87) http://localhost:87/wordpress. Now find the record with option_name of ‘home’ and again change the option_value to http://localhost:87/wordpress.
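If you’re comfortable with SQL, the same two changes can be made in one go from phpMyAdmin’s SQL tab rather than by browsing the table – this assumes the default wp_ table prefix and the example Apache port of 87 chosen earlier:

```sql
-- Point both WordPress base URLs at the non-standard Apache port
UPDATE wp_options
SET option_value = 'http://localhost:87/wordpress'
WHERE option_name IN ('siteurl', 'home');
```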

Tidying Up

You may like to put an autorun.inf file on the root of your memory stick, so that when it is plugged in to a machine it will automatically start the MOWES system (if the machine is so configured).  The file can be created with a text editor and should contain the following:

[autorun]
open=mowes.exe
label=Your Name for the Installation

And that’s that!


Tweeting in meetings….

I came across this rather interesting article from the personal blog of a Pastor in the US recently in which he suggests that Tweeting in Church might be a good idea.  Now, I have to admit that I was something of a late adopter with Twitter (and Facebook…and for that matter with SMS texting….yeah, OK, I’m a bit of a Luddite in some respects!) but I have to say that this suggestion surprised me.  I’m afraid that when I’m in Church I’m focusing on my own engagement with God, via my participation in the collective experience of the congregation in the church.  Which sounds more like an academic treatise than a celebration of faith, but that’s me!

The idea was that by tweeting ‘commentary’ on the sermon and other aspects of the service it could be regarded as a means of evangelising to the outside world and so bringing the Word to others – perhaps, but I think it’s one tweet too far for me. Which then led me on to thinking about business meeting tweets, conference tweets, etc.

Perhaps it’s a generational thing but despite having a Blackberry, a Netbook and enough technology at home to sink a small boat, I still go to meetings armed with a pen and paper for note taking. As far as I’m concerned, it’s reliable, has no batteries to run out, makes no weird noises, doesn’t force me to think ‘How do I do that?’, will take text, drawings and doodles and isn’t ostentatious. Pen and paper is what I like to call ‘humble technology’ – it does what it says on the tin, no muss, no fuss. I’ve been in meetings recently where iPads have been deployed and tweets have been made (as I found out after leaving the meeting and looking at Twitter) with no apparent damage to the business of the meeting. But looking at my own notes taken in the meetings concerned, I’m wondering whether the meetings were actually needed / useful, as my notes are pretty skimpy – and I take good notes.

We then have the recent debacle in the UK where some aspects of an industrial relations negotiation between British Airways and Trades Union representatives were tweeted to the outside world, resulting in a ‘pitch invasion’ of the building where the negotiations were taking place. I’m sorry…negotiations are supposed to be delicate affairs between the parties involved and any mediators. If someone feels they can’t negotiate without doing the equivalent of bellowing from the window, perhaps they need to be in a different job.

As you can probably tell by now, I’m not a fan.  My own rules of Twitter are pretty straightforward:

  • If I’m in a meeting, focus on the meeting. 
  • If I’m at Church, focus on that.
  • If I’m at an event and want to tweet, I’ll wait until a ‘natural break’ and do it then.

I recently read a good tip about the etiquette of texting and tweeting. Basically, imagine pulling out a crossword puzzle and doing it. If you wouldn’t do that in the situation, then you really should think hard about whether you should tweet / text (emergencies excepted, naturally!). I was at a social event the other evening and I found that tweeting is sort of like smoking used to be (I’ve never smoked, so I may be on tenuous ground here…) – it gives you something to do with your hands when you’re nervous!

In most meetings, unless you’re there as an observer or reporter tasked with providing a running commentary, I can’t imagine a need to Tweet that can’t wait an hour or so.  So just focus on making the meeting effective.

Google’s ‘mistake’ maps all UK WiFi networks…

Some weeks ago, a story broke about Google recording data about WiFi networks while they were wandering around taking family snapshots with their now infamous fleet of ‘Streetview’ cars. At the time, Google claimed that the information gathered was ‘accidental’ – which rang a few bells with quite a few techies. It’s like me wandering the streets of Sheffield taking photographs and at the same time ‘accidentally’ running wardriving software so that I can log any WiFi activity in the area. There’s no ‘accidental’ link between digital imaging and WiFi networks, so what the heck were Google up to?

I intended to blog at the time, but life decided to intervene and so I didn’t do the post…which is a shame because of what’s reported here.  Google have mapped every WiFi network that was detectable on the routes taken by their StreetView cars.  In other words, if your house or office was photographed by Google, they also grabbed bits of data about your WiFi network, if you have one – MAC address, SSID, Channel in use.  OK, it may seem that this is pretty much ‘small fry’ in terms of data and privacy, but let’s just take a wider look.

  • First of all, Google have breached Data Protection Legislation in virtually every country in which they’ve done this; you’re not supposed to gather information up willy-nilly in this manner.
  • Secondly, Google have shown the same sort of respect (or lack of it) for privacy that Facebook have been accused of.  In fact, I’d argue that Google’s crimes against privacy are probably worse than Facebook’s.  With Facebook I had a choice about using their site to share my data.  Google just whizz along, photograph my property and grab my data whether I like it or not.
  • Gathering and storing this data isn’t a by-product of any photographic process; the equipment and process to record and store it must have been installed deliberately in the Google Streetview vehicles.  Now, no-one does this sort of thing for laughs – so we have to assume that Google carried out an action that cost money, was against Data Protection legislation and that they might have suspected would upset people, for a particular reason.
  • And they actually patented the techniques / technology used.  That last one’s a bit of a giveaway….

What could that reason be?

That, my friends, is the 64 dollar question.  Google have ended up with the most comprehensive map of WiFi coverage in the UK that’s ever been compiled.  Now, much of that capacity isn’t publicly accessible – i.e. it belongs to folks like me and thee – but it did start me thinking about what a gung-ho, conquer the universe by next Thursday company like Google might do.

What about….

  1. Gathering data on the different types of router / network in use in domestic and business environments to sell to marketing companies working for hardware manufacturers?
  2. Spotting ‘dark areas’ in towns where there is no public WiFi – where Google could fill a need, perhaps?
  3. Gathering information as to WiFi networks in towns that Google might approach to sell advertising to?
  4. Testing their technology – a dry run to see what they could get, the attitude of the relevant authorities, etc.?
  5. Testing the possibilities for WiFi network usage by vehicles?
  6. Checking WiFi security settings on behalf of ‘other organisations’, to see how much effort someone would need to carry out a comprehensive mobile WiFi monitoring exercise?  A little like the TV Detector vans?

Anyone else got any bright ideas?

Earth calling Tim Cook…

There’s a scene in Monty Python’s ‘The Life of Brian’ in which a character asks ‘What have the Romans ever done for us?’  This is then followed by a host of other characters listing the many useful things that the Romans HAVE provided for the people of Palestine.

I was reminded of this sketch when I encountered this article about Apple’s Chief Operating Officer Tim Cook, in which he comments that there isn’t a single thing that a Netbook does well.  Tim, I have some bad news for you, sunshine; there are lots of things that Netbooks do well – however, they’re probably things that Tim Cook doesn’t do.  In the last week or so:

  • I used the Netbook to test an ADSL connection at the point of entry of the phone-line to the house.
  • When out and about I used it to write a blog article whilst waiting for an appointment.
  • Hooked it up to my amateur radio gear to decode some weather fax images.
  • Downloaded some code from an SVN repository, made a quick fix and uploaded it again.

In other words, stuff I couldn’t use my Blackberry for, and stuff that I needed a real keyboard for – whilst the Crackberry is great, I don’t fancy writing 500 words of blog post or trying to debug code on it.

But it’s real, genuine work being done, and not stuff I could do on a keyboard-less, USB-less iPad.  Sorry Tim – here on Planet Reality we’re not all managers and critics and reviewers and surfers.  Some of us actually do real work on the move, which at the moment (and probably will for some time to come) requires a real keyboard and a piece of kit that I can actually install software on – not a walled garden that looks good but is at the same time too big to put in my pocket and too small to act as a sensible paperweight.

I love the concept of the iPad – but this sort of arrogance from Apple – following on from their recent attacks on development toolkits and the serious limitations in connectivity of the iPad – really makes me wonder whether the bods at Cupertino ever spend time in the real world watching how people use technology.