When we hear the phrase German philosophy, our minds go to the past: a topic explored by intellectual-heavyweight dead guys in the nineteenth century. When we mention German philosophers, the immediate image we recall is of old, white-haired, bearded men with grizzled features poring over obscure texts – Schopenhauer, Marx, Nietzsche (ok, Nietzsche died at 55, and sported a heavy moustache minus the beard).
Let’s park this stereotype for the moment, and return to our own times.
A contemporary German philosopher we need to learn from is Byung-Chul Han. Wait a minute – what was that name again? South Korean-born, Swiss-German philosopher Byung-Chul Han (b. 1959) examines, among other things, the impact of digital technologies on human society. He has lived and worked in Germany since the age of 22.
In Germany, he found his spiritual home, dedicating himself to philosophy. He has elaborated on how our digitally dominated culture has come to influence how we work and how we view our lives.
Neoliberalism drives us to work ever harder for mass consumption, converting us into consumers. The social media age has elevated narcissism into a product – we live on social media not to connect with others, but to circulate an image of our successful selves in order to influence them.
Narration has become indistinguishable from marketing; social media has converted every story into an SEO advertisement. Politicians sell themselves; slick advertising has replaced substantive policy discussion. The sound bite is all-important.
The reward of instantaneous publicity offered by social media is reinforced by celebrity culture. Your storytelling becomes an SEO-driven marketing package. Collective reflection is replaced by repetitive social exposure.
Texting versus conversation
Texting has become our go-to method of communicating with each other. Facebook messaging, WhatsApp, mobile phone texting, Skype – digital texting has replaced face-to-face communication in business, education and social life. Texting is, in fact, reshaping the way we converse.
Texting enables us to communicate over vast distances, sharing our ideas with geographically scattered people. We can stay in touch with friends and relatives who have moved away, share documents and photos across the distance, and ask digital assistants such as Alexa or Cortana for answers. Anything from how to copy and paste to what the weather forecast is sits at our fingertips, reducing the need for human interaction.
Texting has come a long way since the first SMS in 1992. In December of that year, an engineer sent a Vodafone director a simple message: ‘Merry Christmas.’ Since then, we have gained emojis, emoticons, GIFs, Facebook reactions – a kind of modernised hieroglyphics. It is almost its own language – digilect, in the words of Ágnes Veszelszki, a professor of communications and linguistics in Budapest, Hungary.
The medium certainly influences the type of message being conveyed. Digilect is a product of computer-mediated talking – talking to each other through machines, and talking to machines.
Think of all the internet acronyms and digitally inspired words that have made it into our conversational lexicon – hashtag, troll, meme, facepalm.
But is this digital communication strictly speaking a language? It is an approximation of a language – digilect – but not a distinct language.
Nonverbal communication
Have we all forgotten that an indispensable component and stage of language is nonverbal communication? Our bodily cues convey information just as important as our words. Hand gestures, tone of voice, the impact of sound – all these elements of nonverbal communication contribute to making connections and memories that digilect never could.
Indeed, the emergence of language was not a singular, explosive event, but rather the product of numerous steps and stages, one of which was nonverbal communication. To this day, human communication consists of the interplay between the verbal and the nonverbal. No, the nonverbal stages of language are not primitive or regressive, just different.
Let’s address an implicit, underlying yet important assumption here, one which will change how we think about computers and digital technologies. The brain is not a computer. That’s right: the brain does not have hardware, software, RAM, a central processing unit, an operating system, DOS, encoders or decoders – the brain is not a computer.
The analogy of the brain as a computer is very powerful. It has enabled neuroscientists to make deep, insightful discoveries about the workings of the brain and central nervous system. But analogies are just that – metaphors. They do not encapsulate the real thing. And analogies between the brain and technology are nothing new.
A newborn infant’s brain has inbuilt reflexes. The baby can suck, swallow, blink, vocalise infant sounds and grasp objects in their tiny hands. No, the brain is not a computer. The baby is not born with algorithms, data, subroutines or programmable software. The baby’s brain does not process information.
Every technological age brings with it multiple analogies to dig into the questions we have about the human brain and psyche. René Descartes, impressed by the burgeoning field of hydraulics, envisioned the brain as a system of hydraulic pumps and valves. Isaac Newton surmised that the brain was an interlocking system of mechanical clocks.
The advent of electricity and switches brought with it an array of metaphors portraying the brain as an interconnected electrical system. Helmholtz proposed that the human brain was analogous to a telegraph system.
The rise of computers gave birth to a whole new series of brain analogies – the computer network. It is a very seductive analogy – what could be more impressive than a network of computers, each with its own processing power, sending and receiving information at the speed of light?
The seemingly awesome power of AI today is built upon decades of accumulated data, human software development, and increasingly powerful computer chips that require ever greater energy to process AI chatbot requests. Why do I say this?
Deep Blue
May 11, 1997 – yes, I was alive that year. That date was momentous. Garry Kasparov, world chess champion, victor in thousands of chess matches and tournaments, was beaten by Deep Blue, an IBM supercomputer specifically designed to tackle chess. Surely this is proof – a machine outsmarted a human at chess, and a chess grandmaster at that.
Not so fast. Behind the machine was an entire team of human software developers, analysing Kasparov’s matches and chess tactics and programming Deep Blue to calculate countermoves. Deep Blue’s predecessors, which were no slouches in the computer world, were pitted against Kasparov. He defeated those computer opponents as easily as a person swats a fly.
Over the years, as IBM programmers learned more about chess and the strategies used by grandmasters like Kasparov, they programmed in calculated lines of play to outmanoeuvre him.
Even Deep Blue was defeated by Kasparov in their first match, in 1996. IBM’s software development team returned to the drawing board and reprogrammed their supercomputer to counter the grandmaster’s tactics. It was an ever-evolving system.
They added ever-greater processing power to Deep Blue, which could evaluate some 200 million chess positions per second. Kasparov was basically worn down. Interestingly, after the 1997 win – once Deep Blue had shown it could defeat Kasparov and had gained IBM the publicity to strengthen its corporate position – it was rapidly dismantled; sorry, I meant retired.
Behind the apparent triumph of AI, there was vast and collaborative human input.
Every once in a while, look up from your mobile device.