The growth of artificial intelligence (AI) has prompted a deluge of commentary regarding its impact on almost every field of human endeavour. Journalism, music, art, technical writing, education – all of these fields and more have felt the impact of AI. While most articles have examined the potential automation of these jobs by AI, I want to examine the underlying assumptions, and some myths, regarding AI and its impact on human creativity.
The brain is not a computer. This long-standing analogy, which has undoubtedly assisted research in the field of computer science, has outlived its utility. It has become a hindrance in psychology and in understanding the origin and expansion of language. You will never find Beethoven’s symphonies, or impressions of the paintings of Da Vinci, stored in specific areas of the brain the way we might examine a block of computer code. No, the brain is not analogous to ‘hardware’, like a computer’s motherboard.
Matthew Cobb, writing about this very subject in the Guardian, elaborates on the long history of the brain-machine analogy. Rene Descartes, the famous philosopher, surmised that the brain was a series of hydraulic pumps and valves. Throughout the ages, various technologies have inspired a wide variety of analogies – electrical, mechanical, telephonic, and currently, digital and computerised.
What has all this got to do with AI? Consciousness is not a neural network; language is not the product of coding by developers, but an outcome of our biological and sociocultural evolution. We have all seen the headlines, such as ‘can a robot write for a newspaper?’ It is an interesting question, and one that needs to be answered. AI can certainly take the drudgery and monotony out of writing first drafts and reviewing and editing journalistic articles. But it cannot substitute for human judgement.
Artwork, music – all these endeavours are the products of human labour power. AI can certainly increase productivity, but it cannot replace that labour. What we have been doing since the dawn of the digital age is outsourcing our moral and value judgements to the algorithm. Search engines have become the first port of call for the questions and issues we have. Medical diagnoses, prospective romantic interests, gaming hobbies, chess, shopping for music – we have enabled the algorithms to do our thinking for us.
Steven Poole, the British journalist, wrote about this precise trend in 2013. The growth of Massive Open Online Courses (MOOCs) was heralded in university education as a boost for the empowerment of students, making thousands of courses available to anyone willing to learn. That all sounds great, but then computerised algorithms were marking papers – graduation by algorithm. Do we replace the university structure with MOOCs?
Let’s take self-driving cars. In theory they sound great – the algorithm simplifies the driving experience. That is all well and good; however, consider the road toll. Gary Marcus, professor of psychology at New York University, offered a scenario. You are in a self-driving car, about to cross a narrow bridge. A school bus full of children careens out of control, and there is not enough room on the bridge for both of you. Should the algorithm controlling your vehicle decide to drive your car off the bridge, sacrificing your life to save all the schoolchildren?
Even in seemingly routine matters, such as driving, moral and value judgements are required. If you think that scenario is far-fetched, think again. The former director of the National Security Agency (NSA) and the CIA, General Michael Hayden, stated that when it comes to the collection and retention of surveillance data, “we kill people based on metadata.” That comment was made in the context of a debate on how our metadata – phone call logs, internet searches – is being used by surveillance and intelligence agencies.
AI, being a product of human engineering, inevitably reflects the biases and values of the corporations who own and operate it. Even the large language models (LLMs) of generative AI are not value-free. Language, originating in the human sociocultural experience of cognition and the formation of words, can be mimicked by LLMs, not replaced.
Surely something as straightforward as the retention of facial recognition data would not be subject to biases? Take the case of Randal Quran Reid, an African American man wrongly arrested and jailed for six days purely on the ‘strength’ of facial recognition. Reid was arrested by Louisiana police on the basis of theft reports from New Orleans – a city he had never actually visited. His protestations came to nothing.
His family raised thousands of dollars to get Reid out of jail. Reid’s case is not unusual. As Silicon Valley tech magnates warn of the dangers of abusing machine intelligence, they are still spending millions of dollars on developing such tools – stochastic parrots, as one commentator put it.
The danger is not AI itself, but how we are allowing generative AI technology to shape the world in which we live. ChatGPT can simplify our writing tasks, but it cannot fully replace the nuances and subtleties of human perception, cognition and language. Indeed, one subject has been missing from all the talk about AI – the vital importance of nonverbal communication. The book that started this subject was published in 1872: The Expression of the Emotions in Man and Animals, by Charles Darwin.
That book was the earliest foray – at least in the English-speaking world – into the psychophysical processes underlying emotions and our nonverbal communication. I do not think it is an exaggeration to surmise that human language had a crucial nonverbal precursor before evolving into a fully verbal and social experience.
We certainly require a discussion of AI. Let’s expand that discussion into how we can shape and use it, and not let ourselves be guided by the market imperatives of the tech giants who control it.