
Gödel, Escher, Bach: Chapter 1

This chapter begins with the MU puzzle, which is meant to represent deriving theorems from a set of rules. Basically, the goal is to produce the string MU starting from MI.

Rule #1: If a string ends in “I”, you can add a “U” to the end. For example, “MI” can become “MIU”. 

Rule #2: If a string starts with “M” (i.e., is of the form “Mx”), you can double the portion of the string after the “M”. For example, “MI” can become “MII”. 

Rule #3: If a string contains “III”, you can replace it with “U”. For example, “MIIII” can become “MUI” or “MIU”. 

Rule #4: If a string contains “UU”, you can remove it. For example, “MUU” can become “M”. 

Here is an example of me attempting to derive MU and failing miserably.

MI (Apply Rule #2)

MII (Apply Rule #2)

MIIII (Apply Rule #3)

MUI (Apply Rule #1)

MUIU (Apply Rule #2)

MUIUUIU (Apply Rule #4)

MUIIU (Apply Rule #2)

MUIIUUIIU (Apply Rule #4)

MUIIIIU (Apply Rule #3)

MUIUU (Apply Rule #4)

MUI 
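The four rules and my derivation above can be checked mechanically. Here is a small Python sketch that implements the rules (the function `successors` is my own naming, not from the book) and replays my chain of strings, confirming each step follows from the previous one:

```python
def successors(s):
    """All strings derivable from s by one application of Rules 1-4."""
    out = set()
    if s.endswith("I"):                        # Rule 1: add a U after a final I
        out.add(s + "U")
    if s.startswith("M"):                      # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):                # Rule 3: replace any III with U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):                # Rule 4: remove any UU
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

# Replaying the derivation: every string should be reachable from the one before.
derivation = ["MI", "MII", "MIIII", "MUI", "MUIU", "MUIUUIU",
              "MUIIU", "MUIIUUIIU", "MUIIIIU", "MUIUU", "MUI"]
for prev, nxt in zip(derivation, derivation[1:]):
    assert nxt in successors(prev), (prev, nxt)
print("every step checks out, and the chain loops back to MUI")
```

Running this confirms the loop: the last string, MUI, is the same as the fourth one in the chain.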

This brought me right back to the 4th string in the process. But the loop I went through reminds me of the cyclical nature of the ideas in this book so far, like Escher's strange architectural drawings and Bach's canons. Maybe finding a loop in the MU puzzle is not a coincidence.

Another idea the author talks about in this chapter is the learning capability of machines. Hofstadter writes,

“It is an inherent property of intelligence that it can jump out of the task which it is performing, and survey what it has done; it is always looking for, and often finding, patterns.”

Applying this to ChatGPT: generative AI works by finding patterns in huge amounts of input data. But when it does that, is it jumping out of the task it is performing? The task of ChatGPT itself could be to learn, in which case it is stuck in that task forever. Humans, by contrast, often take breaks from our work to reflect on what we do and how to do it better. But what if our task is the same as ChatGPT's: to learn and function as part of a society? Or, even more basic, to survive? It is impossible for us to escape the system of how our neurons fire and connect, of how we are programmed by nature, so I don't think we are any more special than AI. But that's a side tangent.

Each chapter ends with a dialogue between Achilles and the Tortoise, the pair from Zeno's famous thought experiment. The central paradox is about infinitely many decreasing distances being added to one another: the end can never be reached, and therefore motion is impossible (at least according to Zeno). It's funny that I had similar ideas as a kid; I remember lying awake at night thinking about a stick that I kept adding smaller and smaller chunks to, yet it never reached a foot in length. I now know such paradoxes can be modeled with calculus and limits, but the philosophical aspect still remains a mystery to me. I'd better keep reading GEB and see where it takes me!
