
I, cyborg

The life and work of digital humans

KEVIN AND BARBARA are cyborgs. They live in Coventry, England. They look just like you and me. They are just like you and me. With one difference. When Barbara moves her hand, no matter where Kevin is, he feels it. They are linked by electrodes implanted in their nervous systems.

Distance is no barrier. One day, Kevin was wired up at Columbia University in New York. He moved his hand. In England, a robot hand moved with it. The robot hand gripped an object in England and Kevin, in New York, could feel how much pressure was being applied.

Kevin Warwick (his full name) wants to be part of the first brain-to-brain communication experiment. He is one of the world’s leading experts in cyborg research.

He told the BBC: ‘We’ll be able to communicate…not in this antiquated way that you know now [but] start moving on as humans and enhance ourselves and start communicating electronically, directly, brain to brain.’

You thought that you were close to your loved one? Welcome to the new world. Because while Kevin and Barbara are a long way from the sorts of cyborgs you might imagine, the question is, for how long?


CYBORG IS A scary word. It is associated with science fiction, but despite the condescension with which some mainstream authors view the genre, it provides a useful way to explore the unintended consequences of how our paths into the future interact with human character and frailties. George Orwell’s Nineteen Eighty-Four (Secker & Warburg, 1949) and Margaret Atwood’s The Handmaid’s Tale (McClelland & Stewart, 1985) both warn us of futures we don’t want to have. Ursula K Le Guin’s The Dispossessed (Harper & Row, 1974) imagines a co-operative anarchist utopia. Our imaginations plot potential scenarios before they become reality.

In the Blade Runner movies, humans struggle to assert superiority over ‘replicants’ or androids – biological machines that look like humans but are not quite. Possessing a form of artificial intelligence and self-awareness, they are sentient beings who are not keen on remaining slaves.

Scientists and futurologists, not just novelists and filmmakers, worry about what will happen when machines achieve consciousness and can out-compete humans.

But they have all missed the point.

The thing to think about is not whether or when artificial intelligence (AI) achieves consciousness, or what will happen if it does. Chances are, we will never know. Because long before that happens, machine intelligence will integrate with humans, creating what can variously be called digital humans or cyborgs.

What happens when that occurs is what we need to think about, now.


SCIENCE FICTION IS replete with conscious, intelligent machines – from Isaac Asimov’s I, Robot to Skynet in the Terminator movies and the character named Data in Star Trek: The Next Generation. But it is another character in Star Trek that is much closer to our future: the Borg. The Borg instil fear into the heart of the Enterprise’s captain because of their ability to subsume all aspects of their captives’ individuality.

But they fascinate the two of us – one a researcher on work; one a sociologist; both interested in how power and the economy shape our lives – for different reasons. The Borg are combinations of human biology and computer technology. Although fundamentally human, they are connected to each other – to the Borg’s collective consciousness – through a neural connection, depicted in the TV series as a series of tubes, face masks and other odd make-up. And that has all sorts of implications for work, economy, society and power.

Researchers in a number of countries and corporations are working on ways of connecting the human brain to the external world through some sort of computer interface. There are still technical barriers, but one day they will be overcome. And that day is closer than you think.

In 2013, Nature reported on an experiment linking the brains of rats on two continents, so that one responded to the stimuli the other was exposed to. One rat, in Brazil, would use its whiskers to choose between two stimuli. An implant recorded its brain activity and relayed the signals to a similar implant in the brain of a rat in the US. The American rat then usually (but not always) made the same choice on the same task as the Brazilian rat. In only a few short years, Kevin Warwick’s work rendered such experiments old hat.

Meanwhile, in 2014, a US defence agency announced it was working on an implantable brain chip that could restore memory for wounded soldiers recovering from traumatic brain injuries. In 2015 a paraplegic man was able to walk again after being fitted with an electrode cap that relayed brain signals telling his legs to walk to devices in his knees. That year, other scientists reported the invention of an ‘ultra-fine mesh that can merge into the brain [of mice] to create what appears to be a seamless interface between machine and biological circuitry’.[i] The mesh was ‘syringe injectable’.

In 2016 the Los Angeles Times reported the temporary restoration of everyday movement in a twenty-four-year-old quadriplegic man’s hand in Ohio after surgeons circumvented a spinal cord injury. They sent electrical signals issued by the patient’s own brain through a suite of sensors. And in 2017 Neil Harbisson – an artist born with achromatopsia (complete colour-blindness) – was able to ‘see’ colour in UV thanks to an antenna-like implant leading to his brain.


NEURAL TECHNOLOGY HAS been around since the first cochlear implants were devised in 1957. These electronic devices are implanted under the skin, with electrodes threaded into the inner ear, and provide signals to the brain that are interpreted as sound.

Now technology doesn’t just replace missing senses. It adds to them. Kevin Warwick says that with it he can sense ‘like a bat senses the world, and our brains are brilliant. They adapt. They can take on new signals in that way.’ The potential benefits of developments in this field for individuals with disabilities or severe injuries are enormous, and ongoing developments are inevitable.

The most intriguing current research is that into ‘neural lace’. Tesla and SpaceX founder Elon Musk announced the establishment of Neuralink in 2017, a new company that aims to develop a neural-implant interface between human brains and computers.

In his now-infamous 2018 podcast appearance with Joe Rogan, Musk declared that ‘I think we’ll have something interesting to announce...that’s better than anyone thinks is possible’ and added: ‘Best case scenario: we [humans] effectively merge with AI.’[ii]

The feasibility of merging AI and human biology is controversial. Some suggest it won’t happen, others that it ‘is not as close at hand’ as Musk suggests. Musk’s stated timeframe – eight to ten years – might seem rather optimistic. Yet that’s hardly the point.

Musk can polarise views with his egocentric and at times outlandish behaviour, but regardless of whether or not it is he who brings neural-implant technology to fruition, he has a remarkable capacity to point to the issues facing us and the ways technology can respond.

And Musk is not alone. In 2016, millionaire venture capitalist Bryan Johnson, owner of a small LA tech firm, announced a project to identify ‘neural code’. He argued, ‘The market for implantable neural prosthetics, including cognitive enhancement and treatment of neurological dysfunction, will likely be one of – if not the largest – industrial sectors in history.’[iii] Many are working on turning this fantasy into reality, as there is huge money to be made.

It is not just ‘legitimate’ scientists, either. In the US, groups of do-it-yourself cyborg makers – known as ‘biohackers’ or, curiously, ‘grinders’ – eschew hospitals and universities for backyard workshops, and link with body piercers to attempt to implant forms of computer technology into their own bodies. As of 2017, their main achievement amounted to a star-shaped implant under the skin that lit up when activated by a magnet, but their ambitions are much greater – to challenge the social order, improve humanity and avoid regulatory interference.

Their ambitions are not only limited to physical or sensory enhancement. As Kevin Warwick sees it, human memory is ‘not that wonderful, particularly as you get older’. So why not outsource ‘the whole lot to a much more accurate but instantly accessible source’? For that matter, why stop at memory? If Isaac Asimov’s I, Robot made us imagine how AI might think, perhaps all of next century’s biographies will instead be called I, Cyborg.


WHATEVER THE ROUTE, it is inevitable that in time, machines, perhaps in the form or size of computer chips, will be able to be implanted into humans, enabling individuals to combine the lateral thought capacity of the human brain with processing speeds currently beyond our experience. Initially this might be limited by the processing speed of the chip or device. But such a limitation would quickly be overcome by connectivity, and individuals would be able to draw on the processing speeds of the world’s fastest accessible computers. In this way, they will be able to carry within their body a massive amount of stored ‘memory’, comprising much of what is already known to science. As it is, rotating hard disk drives are becoming obsolete. Many computers (including the one on which most of this was written) now store information on solid-state devices, much like USB flash drives, that require no additional energy or cooling. No one would ever have imagined putting floppy disks or spinning hard disks inside the body. But solid-state storage is very different – no moving parts, less heat, more stability. And then there is the cloud.

While the technical limits of these developments will be set by atomic structures, the social limits will be set by cost and power structures. A new technology will not be implemented if it is too expensive compared to the benefits it creates. As it is, many jobs that could theoretically be automated are not. Automated car washes exist, but much paid car washing is still done by hand. Many experimental transport systems never get adopted. Concorde, launched in the 1960s, no longer flies. Many systems simply cost too much relative to the benefits and alternatives.

Whether a new technology is implemented also depends on whether its introduction is supported or opposed by those in positions of power. Surveillance systems that many Western police or ‘security’ forces would love to use are beyond their reach, not because of their cost, but because laws in most countries stop them from adopting them. Australia is a bit of an exception, with the latest offering to security agencies being the mandated requirement for tech firms to provide those agencies with a ‘back door’ to encrypted communications, in effect making all such communications hackable. Sometimes legal constraints are ignored anyway, as revelations about the US National Security Agency showed. In the end the limits are social, not technical.


DOES ALL THIS sound a bit too esoteric? Think for a moment about what these developments would mean for work and for society – maybe for you (if you’re young), or for your children and grandchildren.

Let’s start by considering the moment when fully functional internet-connected neural-implant technology first becomes commercially available. It will, of course, be horrendously expensive. Like a trip to the moon now, only a small elite of the very wealthy will be able to afford it. Gradually, the price will come down (by much more than the price of a trip to the moon, because the hardware will not be very big). The size of the elite that can afford the new technology will therefore grow.

If you were in that wealthy elite, though, why would you bother spending your money on it?

For one thing, it would give you something otherwise impossible to experience: almost instantaneous access to whatever the internet could by then offer. Without touching a screen, or moving your hands, or even your eyeballs. Your thoughts tell it where to go.

It may not be that ride to the moon, but the virtual reality version of a ride to the moon – or any other experience you could ever want – could be just as good as the real thing, and so much safer. There would be no chance of your rocket exploding or drifting helplessly into the sun. Unless you wanted to be part of a game in which that happened, and you heroically and realistically rescued all from certain death.

Already, virtual reality games are big business, with major tech firms spending heavily on them. Six hundred thousand people still regularly take part in ‘Second Life’ on their computers, an escape from Earthly existence – that is, if you call a simulation with real estate salesmen, religions, pyramid schemes and embassies an escape from Earthly existence. Through their avatars, people can suddenly possess good looks and material possessions, and live out their sexual fantasies. Over a decade ago, the first real-life divorce was triggered by someone’s avatar having an affair in Second Life.

These days, such things do not make the news. Imagine, though, if you really could lead a second life through a realistic world that played out not on your computer screen, but in your head. For that matter, why stop at two lives? It would genuinely be the ultimate escape. Like the characters in the movie Inception, you would find it hard to distinguish what was real and what was not.

But just as Second Life’s audience has faltered in the face of the ‘real-world’ allure of Facebook, Instagram and Twitter (its usage appears to have peaked a decade ago), so too the greatest allure of neural-link technology will be its real-world effects. Back in the real world, neural-implant technology would offer the opportunity for vastly greater wealth.

Instantaneous access to the internet also gives you instantaneous access to information. So, you’re in a meeting; you have the answer to the question – any question – that was asked a few seconds ago. You’re in the boardroom of a finance company, on a trading floor, in the back seat of your chauffeured car, wherever you are, you can instantly make decisions that make your organisation, and you, very wealthy. The casinos close down the blackjack games because your type keep on taking them to the cleaners.

It’s a bit like the massively expanded capabilities we’ve seen imagined in Limitless or Lucy, but without the bizarre premise of drugs enabling the brain to fulfil some alleged untapped potential. There is no legitimate science behind the idea that we only use a tenth of our brain, but there is a lot of legitimate science being devoted to developing neural-link technology.

Who benefits from such gains? In the past, the gains of productivity growth were shared to varying degrees between capital and labour. Workers and corporations slugged it out over just how much each would gain after some new device was incorporated into the workplace and used to increase productivity and profits. In the postwar decades, those gains were, roughly, evenly shared. Since the 1980s, it has been mostly corporations that have won. By 2018 wages growth was so slow even central bankers were worried. Some productivity gains accrued to individual workers, through various performance pay schemes, but often these were just a guise for managerial prerogative – especially in schemes for top executives.

But neural-implant technology changes the link between the person, productivity and pay. The most productive new technology will be internal, not external, to the person. The greatest pay gains from productivity boosts will be privatised – accruing not to ordinary workers, but to those with the greatest economic privilege.

And what would be the best imaginable present from future parents to their children? The operation, of course; the operation for an implant. Those children will kill it in exams. Those without the implant will be destined for a second-class university and a second-class job. The lucky implanted offspring will be guaranteed the top placings, the top universities, the top jobs and, by far, the top pay.

More than any other technological advance, the development of neural-implant technology will enable a widening of the great divide between the haves and the have-nots. More importantly for the haves, it will ensure that the children of the haves are guaranteed a future among the elite group of haves. The reproduction of the ruling class through its offspring will be more strongly institutionalised than at any time since the heyday of medieval royalty.

It won’t be without cost. With growing inequality comes growing poverty. With that, comes growing violence through instability. And so means must be found to suppress that violence, or at least to protect the haves from being victims of it.

Those gangs who used to bash up kids to steal their designer shoes or phones? They won’t be after shoes or phones. They’ll want those things implanted in you. You won’t ostentatiously display that you carry one, of course, but your class superiority may be pretty hard to disguise. The biohackers and grinders of the future will service a niche market in second-hand implants – like the providers of second-hand eyes in Minority Report. It will be risky, potentially deadly, taking on a second-hand implant, not for the faint-hearted. But for the desperate, perhaps not so difficult.

So one of the growth industries of our future, if we take this path, is likely to be personalised, private security services. No ruling-class child shall be without one. Just pity those who look well off, but who don’t have private security, and who don’t even carry an implant. It could be very messy for them.

The alternative, perhaps more sensible, solution is to physically segregate the haves and the have-nots. As it is, cities are becoming increasingly segregated, and the postcode of where you are brought up has a bigger impact on your life chances than just about anything else. Walled suburbs, like the walled cities of centuries past, may seem a bizarre idea, though in reality they would be an extension of the concept of gated communities that are becoming increasingly prevalent.


THE FUTURE WILL not be just about new technologies. It will also be about new climates. It is very hard to predict how humanity will respond to the climate crisis. But suppose the current madness continues for a while, and we fail to adequately reduce our carbon emissions. Millions of people will become climate refugees. To imagine that this will have no impact on Western countries is to imagine that bringing a lump of coal into a national parliament constitutes an energy policy.

While market forces will eventually dictate that energy systems abandon coal and embrace renewables, political forces are harder to predict. Countries may respond to the climate refugee crisis by accepting them, or shunning them, or going to war with them – perhaps a combination of all three. Whichever approach is taken, the haves in the West will seek ways to insulate themselves geographically from the environmental and social vandalism the policies of their predecessors will have caused. Walled coastlines, uncontaminated by beaches, walled towns and walled suburbs would all make sense in such a society. But would that be enough?

One scenario would be for ‘gated communities’ to be superseded by ‘floating communities’ for the wealthy. The Seasteading Institute is presently building a prototype in a French Polynesian lagoon. These floating communities are conceived of as a possible response to climate change, and poor island nations facilitate them for that reason. Ironically, though, the inhabitants of those island states are unlikely ever to be able to afford the high cost of joining a full-blown floating community.

Floating communities would offer the potential to provide sustainable, large-scale, secure living space for wealthy individuals. They would have the scope to establish themselves outside most legal boundaries and hence obtain special status for taxation purposes. For defence, though, they would require the patronage of a major state. They have not impressed the Thai authorities, who recently occupied and dismantled a tiny seastead off their coast.[iv]

If these communities become viable, occupants would be able to use digital technology, particularly embedded neural technology, to manage ‘mainland’ production and wealth-generation from a distance. They would thus be able to avoid many of the externalities created by the production processes they control, as depicted in a more extreme form by the two worlds of Elysium. So these communities offer the potential to widen the gap even further between the rich and the rest.


LOOKING FURTHER AHEAD, eventually, the price of neural implants will come down to a point where they are accessible to all, just as computer power that once cost a million dollars can now be bought by, and carried in the pocket of, almost anyone.

Would that ‘democratisation’ of the technology suddenly reverse the great inequalities that neural technology had brought about?

Not really, in part because it is not the technology creating this great inequality in the first place. The technology greatly enhances the ability of the wealthy to further enrich themselves and entrench their control. But it is the social structures that drive the inequality, not the technology.

When the price of neural technology collapses, social power structures will not collapse with it. They will have become stronger than they have been for centuries. And those who benefit from those structures will not give up their privileges willingly.

As neural technology becomes available to all, what would the owners of corporations do? If they want to maximise the productivity of their workforce, they would expect, or rather demand, that all their employees have it.

The education level of Australian workers has increased greatly in the past few decades. In 1980, only one in three students finished high school. By the early 1990s it was three-quarters. University education likewise increased. While a better-educated workforce is theoretically a more productive one, in practice the biggest effect has probably been ‘credentialism’. Jobs that previously required only a school-based certificate now require a degree, because employers can insist that applicants have one. In a similar way, in a world of universal access to neural implants, employers will demand that all employees have one.

It may not be a difficult demand. Already hundreds of workers in an American technology firm have had chips the size of a grain of rice implanted into their skin to make it easier for them to get into their building, pay for food at the cafeteria and other tasks that involve swipe technology.

That’s nothing new for Kevin Warwick. With his first implant, he walked around his building and ‘the computer could recognise where I was, and it opened doors for me, switched on lights, said “hello”’.[v] Thousands of Swedes have similar chips implanted to make it easier to do similarly simple things, such as shop or catch public transport.

So don’t expect that people will strenuously resist such encroachments. Apparent convenience is a key motivator that enables people in positions of power to monitor others and erode what were once traditional privacies or freedoms.

In some manual jobs (though there won’t be so many of these anyway as they are the most easily automated of jobs), neural-implant technology would do little to boost worker productivity. But the implants would have a more important advantage. They would facilitate employer control of the factors of production.

What better way could there be to ensure quality and production targets are achieved than to be able to directly relay instructions to workers? What better way could there be to monitor the location of the workforce than to be able to track each neural device?

You have a grievance? You want to organise with fellow workers to resist a new managerial edict? To unionise? The corporation would know your every communication, your every move.

The phrase most associated with Star Trek’s Borg is ‘resistance is futile’. In the TV show, this aphorism is directed at the bewildered alien races that encounter the species. But in the future corporate world, ‘resistance is futile’ will be the message internalised by the cyborg workers themselves.

All that is before the state itself starts to take advantage of the surveillance opportunities that neural-implant technology provides.

Of course, not everyone will have neural-implant technology. There will be many, mostly living well away from the cities, who will reject the technology, the benefits it offers and the restrictions it imposes. There will be others, the biohackers of the future, who will subvert the technology in whatever way they can. These groups will be the new marginalised class – or at least, part of it.

So, when we think about new technology, it is not enough to simply imagine what the future might bring, as so many sci-fi films have done. We need to consider how all of this relates to social structures and forces, to how we live and how we work. It is not enough to apply the imaginations of Isaac Asimov and Philip K Dick to future technology. We need to think about what happens when that technology meets Charles Dickens’s world of poverty, F Scott Fitzgerald’s world of ruling-class excess and Margaret Atwood’s world of subjugation and control in The Handmaid’s Tale.


OUR POINT IN writing about digital humans like this is not to make predictions. Rather, our intent is to sound some warnings – and to have you think about what policies need to be put in place to lead things to take a different, democratic turn. Too often, policy responses are devised too late, long after the problem has emerged and many policy options have been eliminated.

We have options about dealing with the way neural-implant technology affects the reproduction of classes. They include: copyright control of new cyborg technologies; wealth taxation to reduce intergenerational wealth transfers; more actively redistributive policies (such as financial transaction taxes or more active welfare states) for the same reasons; the promotion of more democratic, actively organised and hence powerful labour; and the direct regulation of cyborg technology, something that would not be easy for any single nation acting alone in a world of multinational technology firms and biohackers. There are also major financial questions as to how this future might pan out for low-income developing countries.

Separately, there are policy options about how neural-implant technology will likely affect control. Privacy legislation, already in place in many countries, is likely a good starting point but would need much rethinking and reformulation to ensure it was relevant to the age of neural networking.

Who should be making these policy decisions? New institutions will need to be created, but in the end it will rely on pressure from the people who will be affected. Without that, politicians and corporate leaders won’t constrain forces that potentially entrench their power. The rise of neural-implant technology poses some major policy challenges – probably the greatest challenges society will have to face since the emergence of capitalism, with the possible exception of climate change. In the end, we choose what outcomes will arise: whether it will be a force for overcoming injury, illness and disability and for bringing about unprecedented improvements in living standards, or for ushering in a period of inequality and suppression as great as that envisaged in any dystopian science fiction novels or films.



[i] Newitz A, ‘Scientists just invented the neural lace’, Gizmodo, 17 June 2015:

[ii] Haselton T, ‘Elon Musk: I’m about to announce a “Neuralink” product that connects your brain to computers’, CNBC, 7 Sept 2018:

[iii] Johnson B, ‘Kernel’s quest to enhance human intelligence’, Medium, 20 Oct 2016:

[v] Mills G & Smith C, ‘How to become a cyborg’, The Naked Scientists, 20 Nov 2018:

Griffith Review