AGI: The Last Human Invention - Part I

Prakhar
7 min read · Oct 16, 2021

“There are only two subjects worth studying: Physics and Neuroscience”

- Demis Hassabis (CEO of DeepMind)

If we consider our mind as a black box, the world is divided into two parts by a blurred boundary that keeps fluctuating: the outside world, the Universe, and the inside, the Mind. Of the outside, we know plenty (not all); the development of physics in the 20th century has been miraculous, an exponential increase in our understanding of the external world never before achieved in any century or millennium since the dawn of mankind. Of the inside, well, not much.

In nature we find many intelligent beings; from the ant to the blue whale, from a worm to a human, the range of biological intelligence is enormous. Of late we have started venturing into the prospects of non-biological forms of intelligence, i.e. AI. Now let's make it clear: consciousness is not equal to intelligence. Defining consciousness is difficult, but the opening lines of the Wikipedia article describe it as:

Consciousness, at its simplest, is sentience or awareness of internal and external existence.[1] Despite millennia of analyses, definitions, explanations and debates by philosophers and scientists, consciousness remains puzzling and controversial,[2] being “at once the most familiar and [also the] most mysterious aspect of our lives”.

Strange loops

In Douglas Hofstadter's book Gödel, Escher, Bach: An Eternal Golden Braid, strange loops are defined as:

A strange loop is a cyclic structure that goes through several levels in a hierarchical system. It arises when, by moving only upwards or downwards through the system, one finds oneself back where one started.

If that looks like too much to understand, don't worry, it is. I will make it crystal clear in a bit.

This statement is false.

The above statement is the simplest example of a strange loop. Notice how, as we parse the statement, it affects its own truth value, resulting in a paradox-like situation. Theoretical computer science has its own version of a strange, loopy paradox in the Halting Problem, and mathematics has the famous Russell's Paradox. But first we have to look at what set theory is to really understand the implications of this concept.
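To make the self-reference concrete, here is a minimal Python sketch of the Halting Problem argument. The names `halts` and `troublemaker` are mine, and `halts` is only assumed to exist; the whole point of the construction is that it cannot.

```python
def halts(program, argument):
    # Hypothetical oracle: True if program(argument) eventually stops,
    # False if it runs forever. The argument below shows it cannot exist.
    raise NotImplementedError

def troublemaker(program):
    # Feed a program to itself, then do the opposite of the prediction.
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    else:
        return           # predicted to loop forever, so halt at once

# The strange loop: does troublemaker(troublemaker) halt?
# If the oracle says yes, it loops; if it says no, it halts.
# Either answer contradicts the oracle, so no such halts() can exist.
```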

Set Theory

Set theory grew out of an impulse to put mathematics on an entirely rigorous footing — a logical basis even more secure than numbers themselves. Set theory begins with the set containing nothing — the null set — which is used to define the number zero. The number 1 can then be built by defining a new set with one element — the null set. The number 2 is the set that contains two elements — the null set (0) and the set that contains the null set (1). In this way, each whole number can be defined as the set of sets that came before it. Once the whole numbers are in place, fractions can be defined as pairs of whole numbers, decimals can be defined as sequences of digits, functions in the plane can be defined as sets of ordered pairs, and so on.

“You end up with complicated structures, which are a set of things, which are a set of things, which are a set of things, all the way down to the metal, to the empty set at the bottom,” said Michael Shulman, a mathematician at the University of San Diego.
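As a toy illustration of this bootstrapping (my own sketch, not from the article), Python's frozensets can play the role of sets: 0 is the empty set, and each number is the set of everything that came before it.

```python
def successor(n: frozenset) -> frozenset:
    # n + 1 is the set containing everything in n, plus n itself.
    return n | frozenset([n])

zero = frozenset()         # {}            -> 0
one = successor(zero)      # {0}           -> 1
two = successor(one)       # {0, 1}        -> 2
three = successor(two)     # {0, 1, 2}     -> 3

print(len(zero), len(one), len(two), len(three))  # 0 1 2 3
```

Every object here is built out of nothing but the empty set, all the way down to the metal.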

Russell’s Paradox

Bertrand Russell, a philosopher, logician and mathematician, identified a critical flaw in this early version of set theory: he noted that some sets contain themselves as members. For example, consider the set of all things that are not spaceships. This set — the set of non-spaceships — is itself not a spaceship, so it is a member of itself.

Russell defined a new set: the set of all sets that do not contain themselves. He asked whether that set contains itself, and he showed that answering the question produces a paradox: if the set does contain itself, then it doesn't contain itself (because the only objects in the set are sets that don't contain themselves). But if it doesn't contain itself, then it does contain itself (because the set contains all the sets that don't contain themselves). A kind of strange loop occurs here.
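Here is a minimal sketch of the same loop in Python, modelling a “set” as a predicate, a function that says whether something belongs to it (the names `non_spaceships` and `russell` are mine, for illustration only):

```python
def is_spaceship(x):
    return getattr(x, "is_spaceship", False)

def non_spaceships(x):
    # The set of everything that is not a spaceship.
    return not is_spaceship(x)

# The set of non-spaceships contains itself: a predicate is not a spaceship.
print(non_spaceships(non_spaceships))  # True

def russell(s):
    # The set of all sets that do NOT contain themselves.
    return not s(s)

# Does Russell's set contain itself? Answering requires computing
# russell(russell) == not russell(russell) -- the question never settles.
# print(russell(russell))  # RecursionError: maximum recursion depth exceeded
```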

Type Theory

Bertrand Russell created type theory to resolve the paradox in set theory. In this system, instead of sets, carefully defined objects called “types” are used.

Russell’s type theory begins with a universe of objects, just like set theory, and those objects can be collected in a “type” called a SET. Within type theory, the type SET is defined so that it is only allowed to collect objects that aren’t collections of other things. If a collection does contain other collections, it is no longer allowed to be a SET, but is instead something that can be thought of as a MEGASET — a new kind of type defined specifically as a collection of objects which themselves are collections of objects.

From here, the whole system arises in an orderly fashion. One can imagine, say, a type called a SUPERMEGASET that collects only objects that are MEGASETS. Within this rigid framework, it becomes illegal, so to speak, to even ask the paradox-inducing question, “Does the set of all sets that do not contain themselves contain itself?” In type theory, SETS only contain objects that are not collections of other objects.

Strange loops occur when one ventures out of one's “type” and starts to make statements about oneself, which type theory does not allow: we can only make statements about objects lower in the hierarchy than ourselves.
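A loose way to see this in code (my own analogy, not Russell's formal system) is with static type annotations checked by a tool such as mypy: a collection's type fixes which level of the hierarchy it lives on, so asking it to contain itself is rejected before the program ever runs.

```python
from typing import FrozenSet

# Level 0: a SET -- a collection of plain objects.
numbers: FrozenSet[int] = frozenset({1, 2, 3})

# Level 1: a MEGASET -- a collection of collections of plain objects.
families: FrozenSet[FrozenSet[int]] = frozenset({numbers, frozenset({4, 5})})

# The paradox-inducing move is simply ill-typed and never gets off the ground:
# families = frozenset({families})   # error: one level too deep for its own type
```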

General Intelligence

Why did we venture into those abstract math topics when we were discussing intelligence, though? The reason is that this venturing out of the system is a characteristic of general intelligence.

Contrary to an artificial narrow intelligence like Deep Blue, the computer program that beat Garry Kasparov at chess, an artificial general intelligence does not specialize in one specific task. Rather, it is a general problem solver: flexible, dangerous and extremely useful. It can use different narrow-intelligence programs to its advantage.

Thinking about the self while working in a system, and reasoning about that system, are the things that constitute sentient general intelligence. Imagine a computer program: you run it, it executes the code it was given, and it will never venture out of the program to modify its own code. But give a human some tedious, repetitive task and they will spend some time in the “system” doing it, then start to get bored, start to contemplate the “boring system”, and will probably either get fed up and leave or automate the process themselves. This thinking outside of the prescribed “type”, or in general venturing outside the system, has not yet been achieved artificially. Biological systems have achieved general intelligence in a range of organisms, some primitive and some advanced, like humans.

An AGI, once produced, will be able to perform every performable task a human could ever ask for. It will be an all-purpose machine that will have the capability to use all the resources available to humans, and the ability to produce even more resources using the previous ones. Why am I saying this? Because of one single fact: self-improvement.
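To make that single fact concrete, here is a deliberately crude toy model (entirely my own illustration, not a real algorithm): a system that improves its own ability to improve grows geometrically, while a fixed tool only ever adds the same increment.

```python
def self_improver(capability: float, rounds: int, gain: float = 0.1) -> float:
    # Each round's improvement is proportional to current capability,
    # because the system is also improving the thing doing the improving.
    for _ in range(rounds):
        capability += gain * capability
    return capability

def fixed_tool(capability: float, rounds: int, gain: float = 0.1) -> float:
    # A narrow tool adds the same fixed increment every round.
    return capability + gain * rounds

print(fixed_tool(1.0, 50))      # 6.0
print(self_improver(1.0, 50))   # ~117.4 -- the compounding is the whole story
```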

Do Not Anthropomorphize AGI

One mistake anyone who thinks about AGI makes at first is the humanization of AGI. The thing with general intelligence is that it can come in any shape and size; it need not be anything remotely similar to a human mind. By analogy, a supersonic fighter jet is a threat to you in a way no bird is, yet both have wings and both fly.

The space of all possible minds is huge; somewhere within it is the space of all possible minds that biological evolution can produce, somewhere within that is the space of minds that actually exist, and somewhere within that is the human mind. So human minds are a minuscule dot on a minuscule dot on a minuscule dot! The AGI we may end up building will possibly be nothing like anything we could ever imagine. It is very tempting to anthropomorphize AGI, more so than in other contexts like the bird one, because it is a thing that makes plans and takes actions in the real world just like humans, but it need not think anything like us, and to think of it as a person is a mistake.

Role Of Philosophy in AGI

Philosophy has been one of the most important of human endeavors, but its real-world applications so far have been limited compared to, say, physics. One may argue that subjects like physics and mathematics are themselves a kind of philosophy, but I am talking about the branch of philosophy that deals with questions that have no right answers: the human philosophy of ethical and moral reasoning. With the development of AGI we need to formulate all of human philosophy rigorously, without even minor flaws. The need for this is more pressing than ever before, because we may need to hard-code all of human philosophy into the AGI that we will eventually build.

Dangers of AGI

Extinction of the human race. But not the fictional Terminator scenario, no... something much more brutal and apathetic.

In the next part of this blog I will cover the dangerous aspects of AGI, along with explaining how an AGI may behave like an optimizing function.

REFERENCES

Limits of the human model in understanding artificial Intelligence

The Universe of Minds

https://www.quantamagazine.org/univalent-foundations-redefines-mathematics-20150519/

Gödel, Escher, Bach: An Eternal Golden Braid

Deadly Truth of General AI? — Computerphile
