Computers can know things

July 2023

I wrote this back in April after a friendly discussion with my fellow philosophy students as to whether a computer can be said to really **know** things. It has languished in my drafts folder since then. I just tidied it up a bit and posted it.

WARNING! It’s philosophy. If you hate philosophy, you’ll hate this. On the other hand, you might read it and say “Hmmm…. Philosophy is not so bad after all!” Who knows?

In this essay, I will argue that there is no meaningful difference between the way that humans know things and the way that computers know things. Furthermore, there is no principle that says computers could not perform all of the activities that we would describe as intelligent if a human were to perform them. I will also show that consciousness has no bearing on whether or not an entity can be said to know something or to be intelligent. I will begin with a thought experiment to illustrate what I mean by intelligence and knowledge before defining the terms more precisely.

Imagine an alien, Zorbak, who comes down from Zargon and reads a bunch of books on how to build a business. Zorbak produces a business plan:

1. Create a website for an eco-friendly business that gives tips on saving energy.

2. Choose a name for the business and design a logo.

3. Start writing articles and charge subscribers $1 to read each one.

Zorbak reads a bunch of books on website design and creates the website in WordPress. He designs a logo with a green wheel (representing progress) intertwined with the shoots of a plant (representing growth) because he has learned that humans associate green with eco-friendly ideas.

I would say that Zorbak displayed intelligence. Zorbak has learned how to build a website and he knows that humans associate green with eco-friendly ideas. I can’t say whether Zorbak is conscious in the Nagel sense (there is something it is like to be Zorbak) (Nagel, 1974) but that has no bearing on whether or not Zorbak is intelligent and knows things.

Now, instead of an alien, imagine a computer, Super-AI-3000, that reads a bunch of books on business, builds a website, writes articles and sells them at $1 a pop (Fall, 2023). As with Zorbak, I would describe Super-AI-3000 as intelligent. I would say the same about a 13-year-old girl, Ada, who built a successful business after reading some books. Super-AI-3000 is not conscious in the Nagel sense (there is nothing it is like to be Super-AI-3000). Nevertheless, Super-AI-3000 is intelligent and knows things. I claim that there is no meaningful difference between Ada, Zorbak and Super-AI-3000. They all displayed intelligence. They all learned how to build a business, they know how to build a website and they know that green is associated with eco-friendly ideas. There is no test that could distinguish between their abilities. I will consider each of these claims in turn starting with the claim that a computer can know things.

Many people claim that a computer cannot know things in the way that humans know things. A hard drive can store a bunch of facts such as the capital of Iceland or the directions to Cornwall but a hard drive cannot be said to know these facts. According to this view, artificial intelligence (AI) can read the data on the hard drive but neither the hard drive nor the AI knows what’s on the hard drive (Karen, 2023). Is there a meaningful difference between knowing and what I will call fake-knowing? A definition will be helpful.

The tripartite theory of knowledge says that knowledge is justified, true belief (Chimisso, 2011). A fact is true or false regardless of where it is recorded, so the distinction between knowing and fake-knowing must turn on justification or belief. The most common methods of justification are deduction and inference (Chimisso, 2011), and modern AIs can do both as well as humans can (OpenAI, 2022a). This leaves us with belief as a possible difference. Another definition will be helpful.

The Stanford Encyclopedia of Philosophy defines belief as the attitude that a proposition is true. However, different philosophers emphasise either the representation of the belief (how the belief is recorded) or the consequences of the belief (entities with a certain belief are likely to behave in a certain way) (Schwitzgebel, 2021). For example, a self-driving car represents the geography of England as a map that locates Cornwall to the south of Birmingham, and thus a self-driving car headed for Cornwall will drive south rather than north. The car believes that Cornwall is to the south of Birmingham and behaves as if it believes it. On either account of belief, there is no meaningful difference between the car's belief and a human's belief. The car's belief is justified and true, and thus the car knows that Cornwall is to the south of Birmingham.
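To make the two readings of belief concrete, here is a minimal sketch in Python. It is my own illustration, not the software of any real self-driving car: the TOWNS dictionary, the rough coordinates and the choose_heading function are all invented. The point is only that a stored representation (the map) and the behaviour that follows from it (driving south) are two sides of the same belief.

```python
# A toy illustration of belief-as-representation and belief-as-behaviour.
# TOWNS is the car's representation; choose_heading is the behaviour that
# follows from it. Coordinates are rough, for illustration only.

TOWNS = {
    "Birmingham": {"lat": 52.5, "lon": -1.9},
    "Cornwall":   {"lat": 50.3, "lon": -5.1},
}

def choose_heading(current: str, destination: str) -> str:
    """Return 'south' or 'north' based on the stored representation."""
    here, there = TOWNS[current], TOWNS[destination]
    return "south" if there["lat"] < here["lat"] else "north"

# Representation: the map locates Cornwall to the south of Birmingham.
# Behaviour: a car that consults the map heads south, i.e. it behaves
# exactly as an entity that believes Cornwall is to the south would.
print(choose_heading("Birmingham", "Cornwall"))  # -> south
```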

We might object that reading something does not count as knowing something. We can dismiss this claim out of hand. A child who learns facts or techniques from a book is said to know those facts or techniques. But perhaps the experience of reading is significant. Does the feel of the page make a difference, or the child's awareness of her surroundings? No. The child would learn just as well if the book were on a Kindle or if the child were reading up a tree. Some might claim that computers do not know things because they do not have the senses to verify the information for themselves. This is patently false. A self-driving car can see a child on the road and believe that there is a child on the road, just as a human can. A stronger form of this claim says that neither a human nor an AI truly knows a technique until they have demonstrated that they can apply the knowledge, and this leads us to the topic of intelligence, which I will consider shortly.

Some people claim that consciousness is necessary for knowledge and, since an AI is not conscious, it cannot be said to have knowledge. The definition of consciousness is notoriously slippery and I will consider three common definitions, starting with Nagel's definition that consciousness is what it is like to be something (Nagel, 1974). It is not clear what role consciousness might play in knowledge acquisition under this definition. Our self-driving car may or may not know what it is like to be a self-driving car, but it can read a map just as well as a human can. Another account says that consciousness involves the experience of qualia (singular: quale), for example, the redness of a rose or the sweetness of a peach (Matravers, 2011). Is Southness a quale? If Southness is a quale, we might question why the experience of Southness is relevant to whether an AI can know that Cornwall is to the south of Birmingham. If Southness is not a quale, then consciousness is not required to know something. Finally, my preferred account describes consciousness as an awareness of one's own mental state: I can know subconsciously that a rose is red. I can see a rose out of the corner of my eye, but I am not conscious that the rose is red until I give attention to the rose and become aware of its qualities. Under this account, both a human and a self-driving car can know things without being conscious that they know them. Likewise, both can see things without being conscious of them until they pay selective attention and become aware of them. One is not conscious of knowing something until one becomes aware of it and, again, there is no meaningful difference between the human's knowledge and the self-driving car's knowledge. Having thus shown that an AI can know things just as well as a human can, I will next consider whether computers can be intelligent.

According to me, intelligence is the ability to learn new information and make use of it to solve a problem that you have never encountered before. Merely following instructions does not count. So, can an AI be said to learn new information? Some have said that an AI merely follows the instructions given by its programmers, but this has not been true for decades. One could build an AI by giving explicit instructions, but this is not how modern AIs are built (Wolfram, 2023). An AI such as ChatGPT is given access to information in books and websites, which it learns using inductive, deductive and statistical reasoning (OpenAI, 2022a). As demonstrated above, an AI can learn from a book just as a child can, and in this regard there is no meaningful difference between a book, a website or any other source of data on the internet: an AI can learn from any of them. But can the AI use the information to solve problems? As we have seen above, ChatGPT can build a business after reading instructions from a website. It is not merely following instructions; it is building something new that has never been built before. A child that built a new business after reading instructions would be said to be intelligent, and we are compelled to say that the AI must be too, since there is no meaningful difference between the two.
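To make "statistical reasoning over text" a little more concrete, here is a deliberately tiny Python sketch of learning from data rather than from explicit instructions. It is nothing like the scale or architecture of ChatGPT; the corpus string, the bigram counts and the predict_next function are invented purely for illustration. What matters is that no rule about the domain is written by a programmer: everything the model "knows" comes from the text it was given.

```python
# A drastically simplified sketch of statistical learning from text:
# count word bigrams in a tiny "corpus", then predict a likely next word.
from collections import Counter, defaultdict

corpus = "eco friendly brands like green logos . green logos suit eco friendly tips ."

bigrams = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in the training text."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("eco"))    # -> 'friendly' (learned from the data, not programmed)
print(predict_next("green"))  # -> 'logos'
```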

Another possible objection is that reading information from books is not true intelligence; true intelligence requires experience. Consider a robot with artificial intelligence that wants to learn what is meant by 'green'. The robot can visit a farm and ask the farmer 'Are these lettuces green?' and 'Are these carrots green?'. After several such questions, the robot will eventually learn what is meant by 'green', and there is no meaningful difference between what the robot can learn and what a child would learn in the same circumstances. We could postulate that the child has some ineffable 'experience' of green that the robot does not have, but this has no bearing on whether or not the robot has learnt what green means. The robot knows it just as well as the child does.
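A toy version of the farm visit might look like the sketch below (Python, invented for illustration: the RGB values, the labelled list and the looks_green function are all hypothetical). The point is only that the robot's concept of 'green' is induced from the farmer's answers and then generalises to things it was never told about, rather than being programmed in.

```python
# The robot collects labelled colour samples ("Are these lettuces green?"
# -> yes) and learns a simple rule by averaging them. RGB values are made
# up for illustration; a real robot would read them from a camera.

labelled = [
    ((0.2, 0.7, 0.2), True),   # lettuce leaf -> "yes, that's green"
    ((0.3, 0.8, 0.3), True),   # cabbage      -> "yes"
    ((0.9, 0.5, 0.1), False),  # carrot       -> "no"
    ((0.8, 0.1, 0.1), False),  # tomato       -> "no"
]

def centroid(samples):
    """Average colour of a list of RGB triples."""
    n = len(samples)
    return tuple(sum(c[i] for c in samples) / n for i in range(3))

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

green_centre = centroid([rgb for rgb, is_green in labelled if is_green])
other_centre = centroid([rgb for rgb, is_green in labelled if not is_green])

def looks_green(rgb) -> bool:
    """Classify a new colour by which learned centre it is closer to."""
    return sq_dist(rgb, green_centre) < sq_dist(rgb, other_centre)

print(looks_green((0.25, 0.75, 0.25)))  # unripe apple -> True
print(looks_green((0.85, 0.3, 0.1)))    # pumpkin      -> False
```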

I will consider one final objection before concluding that an AI should be considered intelligent. It has been claimed that, in the build-a-business example, ChatGPT only built the website because it was told to. To be truly intelligent, according to the objection, it would have to decide to build a website of its own accord. Leaving aside questions of free will, we have all had experience of directing a junior employee to accomplish a task. For a very new employee, we might need to specify exactly what shade of green to use on the website. For a more experienced employee, we might give her free rein to choose every aspect of the design. The VP of Product might decide that we do not need a website after all and that a magazine would be a better choice. Each employee receives instructions whose specificity depends on our confidence in their abilities and on how precisely we have defined the goal we wish to achieve. The same is true of an AI that we ask to build a business. We must either conclude that every employee below the level of CEO lacks intelligence or that both the AI and the employees are intelligent, since there is no meaningful difference between them.

In the history of AI, there have been repeated claims that X will never be achieved by an AI because an AI is not truly intelligent: a computer will never win at chess; a computer can never teach itself to play Go; a computer will never win at Jeopardy; a computer will never learn to drive a car. As each of these claims is proven false, the goalposts are moved to either say that the AI does not truly know how to do X or to claim that yes, a computer can do X but it will never do Y. On the first point, I have already shown that there is no meaningful difference between a computer that achieves a goal requiring intelligence and a human that does the same. On the second point, I confidently predict that for every X that requires intelligence, a computer will eventually be able to do X. There are, of course, human activities that require other qualities in addition to intelligence. I make no claim, for example, as to whether a computer could ever fall in love or make a human baby. Computers can already create art (OpenAI, 2022b) and can write essays as well as any second-level philosophy student (Clown, 2022). They can drive cars (Waymo, 2022) and build websites (Fall, 2023) and this will only improve with time.

I have shown that there is no meaningful difference between the way that humans and computers know things, nor between the intelligence displayed by a computer and by a human performing the same task with the same information. If humans can know things and be intelligent, then a computer can too.

References

Chimisso, C. (2011) Knowledge. Milton Keynes: The Open University.

Clown, R. (2022) AI is coming for your job. Available at https://www.raggedclown.com/2022/12/07/ai-is-coming-for-your-job/ (Accessed 22 April 2023).

Fall, J. (2023) [Twitter] 12 March. Available at https://twitter.com/jacksonfall/status/1636107218859745286 (Accessed 22 April 2023).

Karen. (2023) ‘No Turing Test for Consciousness’, Book 5 discussion, in A222: Exploring Philosophy. Available at https://learn2.open.ac.uk/mod/forumng/discuss.php?d=4362350&p=p30082607#p30082607 (Accessed 22 April 2023).

Matravers, C. (2011) Mind. Milton Keynes: The Open University.

Nagel, T. (1974) What is it like to be a bat? Available at https://warwick.ac.uk/fac/cross_fac/iatl/study/ugmodules/humananimalstudies/lectures/32/nagel_bat.pdf (Accessed 23 April 2023).

OpenAI. (2022a) Introducing ChatGPT. Available at https://openai.com/blog/chatgpt (Accessed 22 April 2023).

OpenAI. (2022b) DALL·E. Available at https://openai.com/research/dall-e (Accessed 23 April 2023).

Schwitzgebel, E. (2021) 'Belief', The Stanford Encyclopedia of Philosophy (Winter 2021 Edition). Available at https://plato.stanford.edu/archives/win2021/entries/belief/ (Accessed 20 April 2023).

Waymo. (2022) Waymo. Available at https://waymo.com/ (Accessed 23 April 2023).

Wolfram, S. (2023) What Is ChatGPT Doing … and Why Does It Work? Available at https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/ (Accessed 27 July 2023).

Notes

The business plan thought experiment is inspired by a Twitter thread in which Jackson Fall asked ChatGPT to create a business. I simplified the idea for brevity and to avoid being distracted by the specifics of ChatGPT (Fall, 2023).

I was planning to say more about computers displaying creativity but I ran into my word count. I will say more about this in a separate essay.

In subsequent arguments, I noticed that a lot of folks who took the position that computers can’t possibly know things also believed that consciousness requires a kind of je ne sais quoi that can’t be explained in material terms or arise from material structures like brains and neural nets. I plan to write about this separately too.