Ten questions about AI

Bobby Elliott
10 min read · Feb 29, 2024


Image generated by Microsoft Copilot

This short article is a primer on artificial intelligence (AI) for people who want to understand the basics of the subject. It poses 10 questions and the answers, I hope, will illuminate the subject.

What’s the big deal?

The big deal is that AI may be the most important human invention in history. Some people believe it might also be the last. If that sounds dramatic, you might prefer it described as the biggest technological change of the 21st century. Either way, it’s a big deal.

What is AI?

Artificial Intelligence is a branch of computer science that aims to create systems capable of performing tasks that would normally require human intelligence. I could tell you that AI is a mix of computation, statistics, neuroscience and psychology — and that’s true if not particularly helpful. So let me put it simply. AI is Machine Learning.

What is Machine Learning?

I learned to program in the 1970s using a procedural, rule-based language (Algol). These languages provide the computer with a list of instructions.

10 Do this.
20 Now do that.
30 Now do this.
40 If this happens do this otherwise do that.
50 Go back to line 10.

Machine Learning is different. It doesn’t define every step. Instead, it tells the computer to look for patterns in data. Here’s an example to illustrate the difference.

Traffic lights use a program to control the sequence of lights. The program (“algorithm”) is simple.

  1. Red for 60 seconds.
  2. Red/amber for 10 seconds.
  3. Green for 60 seconds.
  4. Amber for 10 seconds.
  5. Go back to (1).

This is traditional programming. It blindly repeats the sequence regardless of traffic conditions. These traffic lights will stop you at 3am when the roads are empty.

A better system would use cameras to take account of traffic (this is how most traffic lights actually work). The cameras detect traffic and feed this data to the program, which changes the lights if necessary. Using this system, traffic lights on a main road would remain green until a camera detects cars approaching from a side road. This is better but will still stop five cars on the main road to give way to one car approaching on the side road. The code for these lights is still procedural but the algorithm accepts data from the cameras to change the lights.
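To make that concrete, here’s a minimal sketch in Python of camera-assisted but still procedural lights. Everything in it (the function names, the timings, the simulated camera) is my own illustration rather than real traffic-control code. A human still writes every rule; the camera only supplies data.

  import random
  import time

  def side_road_has_traffic() -> bool:
      """Stand-in for the camera feed; here it is just a random simulation."""
      return random.random() < 0.1   # a car appears roughly 10% of the time we look

  def set_lights(main: str, side: str) -> None:
      print(f"main road: {main:5} | side road: {side}")

  def run_junction(cycles: int = 3) -> None:
      for _ in range(cycles):
          set_lights(main="green", side="red")
          # Hold the main road on green until the camera detects a waiting car.
          while not side_road_has_traffic():
              time.sleep(1)
          set_lights(main="red", side="green")
          time.sleep(5)   # fixed window for the side road, chosen by the programmer

  run_junction()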

“Smart” traffic lights use Machine Learning. The traffic lights would learn as they go. The initial state might be the usual sequence but the traffic lights (more accurately, the program running in the traffic lights) take data from the surrounding environment to determine the best timing for each colour of light. What timings minimise the number of cars that have to stop? What timings maximise traffic flow? What timings minimise wait times for drivers? ML systems use feedback loops to change the behaviour of the lights, measuring wait times, car stops and traffic flow to continuously alter the lights to optimise these “parameters”. If a change to the sequence makes anything worse then the change is undone; changes that improve traffic flow are kept.
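A crude way to picture that feedback loop is the hill-climbing sketch below. The wait-time function is invented purely for illustration (a real junction would measure waits with sensors), but the keep-or-undo logic is the point: changes that improve the measurement are kept, changes that make it worse are discarded.

  import random

  def measured_average_wait(green_seconds: float) -> float:
      """Stand-in for sensor data: the average wait observed over one cycle with
      this green duration. An invented curve with a sweet spot around 45 seconds."""
      return (green_seconds - 45) ** 2 / 100 + random.uniform(0, 1)

  def tune_green_time(start: float = 60.0, steps: int = 200) -> float:
      green = start
      best = measured_average_wait(green)
      for _ in range(steps):
          candidate = green + random.choice([-5, 5])   # try a small change
          score = measured_average_wait(candidate)
          if score < best:                             # improvement: keep the change
              green, best = candidate, score
          # otherwise the change is undone (we stay with the previous timing)
      return green

  print(f"Learned green time: {tune_green_time():.0f} seconds")

Real smart junctions use far more sophisticated methods and optimise several metrics at once, but the principle of measure, adjust, keep or discard is the same.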

You might have noticed that my explanation of AI was short and my explanation of Machine Learning was long — because AI is Machine Learning.

Why is AI important now?

What happened to make AI a big deal? AI was a big deal in the 1970s, before entering the “AI Winter”, which lasted for around 30 years, during which time it made little progress. Then, around 2010, a tipping point was reached. The combination of more “compute” and more “data” produced results. During the 2010s, a lot of progress was made in areas such as language translation and facial recognition, culminating in a breakthrough in the early 2020s with Generative AI such as ChatGPT. In the space of just over a decade, AI went from “dead-end” to “the future”.

Traditional AI (also known as symbolic AI) focused on “compute”. It tried to program intelligence. Expert systems tried to mimic human expertise by building decision trees. But these systems rapidly became massively complex and, no matter how big they grew, they still weren’t very good.

Decision tree for car repair

The real world proved too complex for this approach. Then we threw data at the problem. Instead of building a massive model of how an experienced car mechanic repairs a car, just give the computer one million examples of car repairs and look for patterns. The “unreasonable effectiveness of data” surprised even computer scientists.
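As a toy illustration of data-instead-of-rules (and assuming scikit-learn is installed), you can hand a learning algorithm a few labelled repair records and let it grow the decision tree itself. The features, faults and six records below are made up; the real thing would learn from those million examples.

  from sklearn.tree import DecisionTreeClassifier

  # Each row is one repair record: [engine_cranks, battery_ok, fuel_present].
  # Six made-up records standing in for a million real repair reports.
  X = [
      [0, 0, 1],
      [0, 1, 1],
      [1, 1, 0],
      [1, 1, 1],
      [0, 0, 0],
      [1, 0, 1],
  ]
  y = ["flat battery", "starter motor", "empty tank",
       "spark plugs", "flat battery", "alternator"]

  # No hand-written rules: the tree structure is inferred from the examples.
  model = DecisionTreeClassifier().fit(X, y)
  print(model.predict([[0, 0, 1]]))   # -> ['flat battery']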

How long will it take?

The AI revolution has been compared to the industrial revolution. One (the industrial) revolutionised labour; the other (AI) will revolutionise intelligence. But the AI revolution isn’t going to take 100 years. How long? Who knows? Some people claim that 40% of current jobs could be replaced by AI by 2035. Unlike the industrial revolution, it’s not low-wage manual jobs under threat; the professions are in the firing line. Bill Gates predicts that in the not too distant future we’ll use AI agents as our personal assistants — arranging your car service, acting as your fitness coach, organising meetings, booking a holiday, providing legal advice, managing your finances, teaching you new skills, getting the best mortgage for your new home: “I’ll have my agent talk to your agent and we’ll agree a price for your home” maybe isn’t too far away.

How many jobs would that replace? A lot. How long will it take for this to happen? Probably longer than some say but faster than you’d imagine. 2040 will probably be very different from 2020.

What is narrow and general AI?

There are two types of AI: narrow AI and general AI or, to give it its correct name, “AGI” — Artificial General Intelligence. Narrow AI is here. It’s the AI in your Tesla; it’s the AI that recognises faces; it’s the AI that plays chess. It’s “narrow” because it does one thing. The AI that controls your Tesla can only control your Tesla.

AGI scares us because it’s capable of learning to do just about anything. This is the AI in Terminator but, unlike Terminator, it’s not coming in 2029. Most experts agree that AGI is some way off. Some think it will never arrive. The problem is that AGI needs a lot of capabilities. Current AI is good at natural language and, up to a point, reasoning but not so good at planning, spatial awareness, manual dexterity and actually understanding what it sees and hears. Take ChatGPT. ChatGPT was an incredible breakthrough for natural language processing when it was launched in November 2022. More recent versions are even better. But it doesn’t understand a word of its output. The neural network behind it simply generates coherent text in response to a prompt. It has no clue what it means.

If I reassured you that Terminator isn’t happening any time soon, let me alarm you with Artificial Super Intelligence (ASI). ASI is what happens after AGI. Once machines achieve general intelligence, they won’t stop learning and, in a relatively short period of time, will gain super intelligence.

Should I be worried?

Sam Harris is worried. He worries about Artificial Super Intelligence. He gives an analogy with humans and dogs. Dogs like humans; we give them food and shelter in return for, well, not very much. Humans like dogs; we like their affection and obedience. But what would happen, Sam wonders, if dogs carried a deadly virus? We’d wipe them out. Sam’s point is: ASI will make human intelligence comparable to dog intelligence. An intelligence gap is bad for the less intelligent.

The history of human progress doesn’t provide reassurance. Roman roads were used to transport slaves; agricultural innovations (such as enclosures) made people’s lives worse; factories were hellish for the unfortunate workers. While the broad trajectory of human development is upwards (at least during the last 200 years), that’s not much consolation for the peasant who lost his little bit of land to his Lord’s new enclosure, resulting in him and his family starving to death. My point is: “Don’t worry about it, things will be fine” is historically naïve.

There’s also something different about AI. While the short-term consequences of technological change were often negative, the longer-term consequences were generally positive — because people were still needed. Displaced peasants drifted to towns where they found work; weavers found employment in factories; redundant typists found more interesting digital work. But what happens when people aren’t needed? For anything. Because AI can do it all. That’s new.

When should I worry?

I don’t think there’s much to worry about right now. The general consensus in the computer science community is that narrow AI is, well, narrow and general AI is a long way off. As one computer scientist put it: “Machines won’t do what we don’t want them to do”. In the meantime, AI will replace boring repetitive work, improve road safety, tutor your children and possibly cure cancer. Of course, if AGI/ASI arrives, all bets are off. I don’t want to be the dog in this story.

It’s true that “machines won’t do what we don’t want them to do”. The problem is what we want them to do. If the current sociopathic mantra of “the purpose of corporations is to maximise profits” prevails, AI is going to be good for business and bad for everyone else. We’re told that AI will improve productivity and profits but when you dig deeper this is achieved by replacing people. Fine if you’re a shareholder; not so fine if you lose your job. The displaced workers aren’t going to spend their universal basic incomes learning to write poetry; learning to live on less food is more likely.

Then there’s the military. Who knows what they’re developing? If autonomous weapon systems prove to be advantageous on the battlefield, as they surely would, then why wouldn’t the military develop them? If giving these machines more autonomy improves their effectiveness, as it surely would, what’s to stop the military doing so? The T-800 (the machine in Terminator) almost certainly started life as a “better weapon”.

Is AI conscious?

No. Absolutely not. And it never will be.
Yes. Absolutely. It’s inevitable.

I think both answers are correct. AI will never be conscious in the way humans are conscious. Our consciousness is biological. It’s created from a complex mix of chemicals and neurons. Consciousness is defined as “knowing what it’s like to be something”. You know what it’s like to be you. I know what it’s like to be me. Your dog knows what it’s like to be a dog. A hammer does not know what it’s like to be a hammer.

Machine consciousness is not, and never will be, like that. Machines will never feel happy or sad. Could we program feelings into a machine? Yes, if you mean making a machine pretend to have feelings. HAL (from 2001) showed emotions — but these were fake. Machines don’t feel sad or happy or any other emotion, and coding PRINT “I AM SAD” doesn’t change that.

So, if that’s what we mean by consciousness, machines will never be conscious. But what if there’s another type of consciousness? A digital consciousness. Many biologists believe consciousness is inevitable once an organism gets sufficiently complex. If that’s true, and AGI/ASI emerges, then machines will be conscious. Not conscious like us but conscious nonetheless. Sufficiently complex neural networks might “know what it’s like to be a neural net”. Does that sound weird? In a cosmological context, it might be biological consciousness that’s weird.

What is the alignment problem?

Computer scientists worry about two things: (1) the intelligence explosion; and (2) the alignment problem.

The intelligence explosion is AGI/ASI. It’s Sam Harris’s worry that as soon as AGI arrives, ASI will be right behind. The alignment problem is new.

Nick Bostrom worried about paperclips. Suppose we set an intelligent machine the task of producing paperclips. “Machine! Maximise the production of paperclips!” Harmless, right? Well, no. Not if the machine was unconstrained and set about making everything a paperclip. Unconstrained, the machine would crush and recycle every car for the production of paperclips. This is an example of the alignment problem. How do we ensure that machine goals and values are aligned with human goals and values?

It sounds like a simple problem to solve. Constrain machines to stop them doing what we don’t want them to do. It’s not that simple. Firstly, every constraint reduces the autonomy (and, therefore, the effectiveness) of the machine; secondly, it can be hard to predict the unintended consequences of the goals we set machines.
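Here is a deliberately silly sketch of that trade-off, with the factory, the numbers and the constraint entirely invented. An optimiser that is scored only on paperclips will consume every tonne of steel it can reach, and the constraint that stops it also makes it less “effective” by its own measure.

  def paperclips_made(steel_tonnes: float) -> float:
      return steel_tonnes * 10_000      # the only thing the machine is scored on

  def plan_production(available_steel: float, reserved_for_everything_else: float = 0.0) -> float:
      """Use as much steel as the constraint allows; the goal is paperclips, nothing else."""
      usable = available_steel - reserved_for_everything_else
      return paperclips_made(usable)

  # Unconstrained: all 1,000 tonnes become paperclips, recycled cars included.
  print(plan_production(1000))                                      # -> 10000000.0
  # Constrained: safer, but fewer paperclips, i.e. a "less effective" machine.
  print(plan_production(1000, reserved_for_everything_else=400))    # -> 6000000.0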

Isaac Asimov tried to solve the alignment problem with his Three Laws of Robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Nice try, Isaac. But not the best strategy for maximising profits or winning wars.

This short article offers no solutions. But now you know what AI really is. You understand some of the jargon. You know the potential dangers posed by rapid intelligence acquisition and the alignment problem. And you know the danger posed by paperclips.

The real threat from AI isn’t the machines. It’s the humans behind the machines. Futuristic talk about “AGI” and “emergent capabilities” is a smokescreen from the here-and-now development of intelligent systems that benefit the few at the cost of the many. The future is being written by a few thousand ML engineers without much (any?) oversight. That needs to change. The potential good of AI is enormous. The potential threat is existential.

