In my previous post, I alluded to an argument made by John Lucas and popularised by Roger Penrose against the possibility of AI: that Gödel's Incompleteness Theorems imply even weak AI is impossible, i.e. that we could never build a computer system that behaves intelligently.
Loosely put, Gödel's Incompleteness Theorems put fundamental limits on what can be achieved by working with formal systems.
In particular, some argue that they show that formal systems can never be as intelligent as human beings. In this post, I argue against this view and propose that humans are subject to the same limitations.