Imaging That

Sodium-lit spray rose between cars and buses, an orange mist enveloping the ageing Hillman Husky. Windscreen wipers screeched and thudded and the engine strained as the car crawled up the hill between soot-black buildings. Factory workers shuffled along the pavement, heads bowed, leaning into the pin-stinging rain, silhouetted against rectangles of pale yellow light pouring out of half-empty shops. But, according to Moi, Manchester had changed a lot in the past thirty-five years. I shouldn’t let memories of a car journey one winter’s evening back in the 1970s shape my perception of the city; so she said.

Moi and I had met several years earlier in the lounge of the Langham Hotel, midway through a seminar organised by the Department of Trade. The London stop for a roadshow encouraging universities and high-tech companies to help claw back the UK’s contribution to the EU research fund. Few present were interested in spending a year building consortia and travelling back and forth to Brussels with CDs full of electronic forms. I certainly wasn’t looking for another ACLARA. Most of us were merely interested in making contacts. Moi was looking for an industrial partner; me, a piece of academic research. Away from the consultants selling access to the EU Commission, we both found what we were looking for.

‘Cambridge is not the only hi-tech centre of excellence in the country; give Manchester a second chance,’ said Moi, who suggested I pay the city another visit. It took a while to get around to doing this; but, sure enough, much had changed – except the rain.

We were sitting in the Patisserie Valerie in Oxford Street, just a bit further from Manchester Metropolitan University than Moi’s students usually ventured during their lunch break. Most of the postgraduates working on Moi’s team were from the Far East. Manchester, like Cambridge, had imported its Silicon Valley dynamism from China, Malaysia and Singapore. So, I could almost be back amongst the dreaming spires, except that the macaroons in the Patisserie Valerie were 30mm larger. They were supplied from a bakery in Birmingham, whereas the Cambridge branch got theirs from London. I’m not sure Moi was particularly interested in the business model of Madame Valerie’s soon-to-be-AIM-listed chain of cafés. It was not why I was invited to Manchester.

Moi Hoon Yap was a senior lecturer in the Metropolitan University’s Faculty of Science and Engineering, and her research team were developing computer vision software to analyse facial expressions; technology in search of an application. Were we finished here? Anything else I would like? A croissant and a set of scales, perhaps? I had wandered off topic, so made a passing mention of that image analysis system Digithurst had sold Manchester University; the one used to develop software to detect brain tumours. But that was a long time ago, over twenty-five years; the other side of several hundred video hairdressing systems, somewhere in the labyrinth.

At the time, the tumour-detecting imaging system was just another box collected at the end of the day by Securicor: an anodised case, a BT framestore, a BICC-Vero power supply, fan, connectors and cables. Four rubber feet and a sticky label printed with a serial number. Why am I, on the walk back to the university, mentally running through a bill of materials for something I built in 1985? A memory test, because I’m worried about dementia and have been since starting this project.

Passed with flying colours, apparently, according to Moi, who guides me through the diagnostic software provided by her other industrial partner, Cambridge Cognition. So, with the hypochondria out of the way, I demonstrate the Android app, the first software I had written in over three years. Not much of a challenge now that the world is full of programmers posting code on the Web. The forward-facing camera on the Samsung tablet was a challenge, though; it took me a while to get to grips with this narcissistic piece of technology. The app itself was a cross between a jukebox and a photograph album: family snaps and fifty-year-old pop songs in a database. The camera would monitor the user’s facial expression as images appeared on the screen and tunes were played through the speaker. Over time, the content presented to the user would be restricted to media which elicited a positive response. The target market was carers; respite from thumbing through photograph albums with friends and family members whose memories were being destroyed by Alzheimer’s. It was a cut-down, emotion-driven version of PictureBook, designed to prevent the patient’s impaired cognition from trapping their mind in some traumatic series of events from their past.

Moi and I had already demonstrated the software to the School of Dementia Studies at Bradford University, hoping they would share some of their expertise. We were surprised by how possessive organisations are with their diseases; how, like diabetes, dementia is bought and sold like a commodity by charities and researchers. How the yet-to-be-diagnosed are traded on the NHS futures market; used to attract investment from the government and corporations.

As a prescribed therapy, then, we were on our own with the software; although, speaking to companies like Docobo, there was a market if we could get the technology to work. But what about the morality of using a device to lock the dementia sufferer in an ever-looping virtual world constructed from fragments of their past? Then again, isn’t this merely a metaphor for the Internet itself? Perhaps our always-connected world of continually looping 24-hour news is a form of dementia; our collective consciousness trapped in the compressed narrative of ‘now’. As it turned out, we could put these ethical issues to one side, for a while at least, as the technical limitations of the current generation of mobile devices saw the project shelved.

Back in the lab at Manchester, one of Moi’s researchers demonstrated the software that would detect those small changes in facial expression. Appearing for only 1/25 to 1/15 of a second, micro-gestures were almost undetectable to the human eye; hence the use of slow-motion footage of people’s faces in TV advertisements. They form part of that subliminal communication channel predating spoken and written language. Moi sounded apologetic now, as the camera used to capture the facial expressions was bulky and cost over £5,000. Only a top-of-the-range PC was capable of processing the data; even then, not in real time, but frame by frame from a video recorder. Having spent two days battling with the forward-facing camera on a Samsung tablet and Google’s Computer Vision API, I now realised the gap between the processing power the device could provide and what the application needed was unbridgeable. Tempting to say ‘not in my lifetime’, but then I remember someone saying that about the Berlin Wall – and they were wrong by twenty years.

- 01010010 -

Thirty years elapsed between that first grainy image appearing, line by line, on the screen of Digithurst’s Commodore PET and those high-resolution moving images on our mobile phones. Except, innovation is seldom linear; it occurs in short disruptive bursts. We are currently experiencing another technological hiatus, much like the one which followed the introduction of the IBM PC. The high-tech world is killing time, kicking its heels with eye candy like Snapchat and thousands of other throwaway, one-trick apps. The next leap in semiconductor technology will bring real-time analysis of micro-gestures within our grasp. When it does, our relationship with electronic devices and how we use them within networks will change irrevocably. Time, then, to ask whether we actually want devices to interact with us on a subliminal level.

Obviously, there are already issues with facial recognition on social networks, predominantly relating to privacy. No one is comfortable with the idea of Googling faces, or of Facebook automatically naming people in photographs posted to timelines. There was a backlash against Google Glass that saw the product dropped. But, no doubt, some harmless-looking nerd, dressed in a T-shirt and with a microphone strapped to their face, will try to make the repellent sound compelling and the downright spooky sound cool. Will this be the person who finally takes us beyond the singularity? Past the point where consciousness is revealed as little more than a transient evolutionary anomaly? The moment we all surrender?

Google and Facebook did not grow out of 21st-century versions of Digithurst. Their owners spend little time ruminating on how their technology impacts on the world. Even if they did, and began to question where they were taking us, these are massive corporations, the modern equivalents of IBM and Microsoft, so it’s unlikely they would just turn off the servers. While Page, Brin and Zuckerberg project themselves as freewheeling entrepreneurial nerds, they are merely Garfields attached to the fenders of BlackRock- and Accel-powered juggernauts. The promise ‘not to do evil’ wears thin if, as human experience is reduced to nanosecond bursts of information devoid of a coherent narrative, good and evil become abstract concepts. The Internet and the Web have delivered the virtual future we predicted; the challenge, now, is making this technology work for, rather than against, us.

... (An extract from The Ghost in the Labyrinth by Peter Kruger)

Next chapter ...