On the Requirements of Artificial Intelligence

19 July 2011

Before AI can exist, a number of things must first occur. Those same things will also make programming easier and more efficient. The two are strongly related, and they make AI very likely for reasons other than exponentially increasing transistor density; in fact, I am skeptical that the obsession with density alone will bear any fruit at all. But I do think the following line of thinking will bear fruit.

- Computers will continue to increase in complexity and power.
- The demands and complexity of the tasks we put to computers will continue to grow.
- Our ability to program them must improve; scaling with manpower alone will not prove sufficient.
- Part of this improvement will mean smarter compilers, smarter programs and smarter ways of tackling problems.

It then follows that anything which makes programming easier, more efficient and less dependent on hand-holding must make the computer do more on its own, and do it in a way that is increasingly less stupid. The more general these systems become, the closer to AI we get. Hence those working on programming languages, computer design software and tools are in fact working on a very simple, nascent version of AI.

So we can define AI as the end result of the evolutionary process of making a computing device that continually does more with fewer hardcoded instructions - i.e. the process of making a computer more and more useful, less and less annoying. We are in our own way following the path of nature, going from hard-coded, inflexible instincts to ever more flexible behavioural patterns. I will also clarify that when I say AI, I mean basic mammal- or corvid-like flexibility and not human-level general intelligence. Human level and perhaps beyond I feel are likely, but the dimensions of that space are so immense that I can offer no predictions - it could happen in a decade or a century. To my 3D perception it may be many miles away, but there might be a shortcut in 6D. Something else worth noting is that although such an AI might not be so intelligent in an absolute sense, if its niche is, for example, complex reasoning on genetic sequences, its basic mammal-like flexibility may far surpass a human in certain aspects of reasoning, beaten only by the creativity and holistic reasoning of the human. The AIs may be considered autistic in a sense.

Before AI can exist, computer programming must become easier and not take so much effort; otherwise, any device that could contain the specification of an AI would collapse into a black hole. We need higher-level languages, we need to make it easier for programs to modify themselves, we need to remove the separation between machine learning and machine instructing (programming) and, more importantly, we must also better understand the theory of programming, as it is one of specification and intimately tied to the concept of what is knowable.
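As a toy sketch of what blurring that separation might look like (my own illustration, with invented helper names, not a design from this essay): a program that fits a simple rule to data and then emits ordinary source code for that rule, so the learned behaviour becomes an instruction a person could read, audit and modify.

    # Toy sketch (illustrative only): "learning" that ends in plain source code,
    # blurring machine learning and machine instructing.

    def fit_line(points):
        # Least-squares fit of y = a*x + b to a list of (x, y) pairs.
        n = len(points)
        sx = sum(x for x, _ in points)
        sy = sum(y for _, y in points)
        sxx = sum(x * x for x, _ in points)
        sxy = sum(x * y for x, y in points)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        b = (sy - a * sx) / n
        return a, b

    def synthesize_predictor(points):
        # Turn the learned rule back into ordinary, hand-editable source code.
        a, b = fit_line(points)
        source = f"def predict(x):\n    return {a!r} * x + {b!r}\n"
        namespace = {}
        exec(source, namespace)  # the program instructs itself with generated code
        return namespace["predict"], source

    predict, source = synthesize_predictor([(0, 1), (1, 3), (2, 5)])
    print(source)       # readable code a person could keep, review or edit
    print(predict(10))  # 21.0 for this exactly linear data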

Specifically, Kolmogorov complexity is in essence about what is finitely describable - about what can be compressed into a generating process specified in a computer language. If we do not have a better understanding of how to specify such things, and of what it means to write them efficiently with certain guarantees, then we have no hope of creating an AI. With that out of the way, what follows is a list of things that will make programming easier and hence a list of preconditions for AI.
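Before getting to that list, here is a minimal sketch of the compression view just described (my own illustration; Kolmogorov complexity itself is uncomputable, and a general-purpose compressor such as zlib only gives a crude upper bound on it):

    # Crude upper bounds on description length via zlib (illustration only).
    import random
    import zlib

    regular = ("01" * 5000).encode()  # produced by a tiny generating rule
    random.seed(0)
    noisy = bytes(random.getrandbits(8) for _ in range(10000))  # no short rule known

    print(len(zlib.compress(regular)))  # small: the generating process is short
    print(len(zlib.compress(noisy)))    # close to 10000: little structure to exploit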

List of Requirements

The last is starred because it is the only thing on the list that is not a requirement but instead will develop hand in hand with the field. This highlights the fact that nowhere did I mention neurobiology or psychology. I do not believe that we need to understand how the brain works to get to AI. Indeed, I think such an approach would be counterproductive, because it would not leverage the fact that computers are a fundamentally different platform (high-frequency computation, low latency, perfect recall) and hence must operate on a rather different set of basic principles, even if the outputs are measurably within the same plane of behaviours. Nonetheless, it would be foolish not to cross-pollinate and take guidance, from studying humans and other animals, on what an intelligence cannot be composed of.

Better Theory of Programming

Mathematics is the most effective way to lossfully compress the knowledge contained in a set of concepts. For simpler systems we can get a full description of the process with no loss, with the models growing in size with the complexity involved and the accuracy required. This is all mathematics is; it is not something limited to numbers or calculus. Numbers and calculus are examples of this compression, reduction or abstraction process. Mathematics need not be about numbers, and while proofs are necessary as a measure of consistency, they are not the first thing that should come to mind when I suggest that computer programming needs a mathematical description. Instead I mean a tool that makes it easier to reason about complex things by compressing them without losing fidelity, so that we can more readily come to meaningful conclusions, limits and constraints. There have been several related attempts using Set Theory, Type Theory, Model Theory, Domain Theory and Category Theory - with the best, in my opinion, being Category Theory, as it can encompass all of the others.
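As a trivial illustration of the lossless case (my own example, not from the original text): a closed-form expression is a compressed but exact description of a generating process.

    # Toy illustration (mine): a closed form losslessly compresses a process.

    def sum_by_process(n):
        # The process itself: add the first n integers one at a time.
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_by_formula(n):
        # The compressed description: Gauss's closed form n(n + 1) / 2.
        return n * (n + 1) // 2

    # The short description reproduces the process exactly for every n tested.
    assert all(sum_by_process(n) == sum_by_formula(n) for n in range(500))
    print(sum_by_formula(10**9))  # the answer without running a billion-step loop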

However, they all remain too difficult and too locked up in academia, with no practical use or even explanation offered to the engineer. In particular, they only really help with the smaller details of algorithms and data structures, not with the bigger picture and the actual difficulty of implementation and architecture. Category theory arguably does address this, but in a way that ties your hands to its axioms. Attempts using diagrams and specifications are more common in industry, but they are too complex and have gotten lost in their own details - resulting in specifications that take even more work than what is being specified and quadrupling the workload. Hence we need a simple mathematics that can be employed to reach and communicate meaningful ideas and to test limits before work starts. This will make the work less blind. By being more aware of programming we reap the benefits that geometry gave to construction and bootstrap ever more complex processes. What makes this even more interesting is that, as stated, a theory or mathematics of programming is also a mathematics of what is knowable. It is not unlikely that invariants from this theory will be useful in AI, and that a branch specifically dealing with that will form.

AI as the dual of Human Intelligence

AI may not necessarily be superior to human intelligence in all aspects. This is because the concepts of insight and creativity, and the requirement of abstraction, all arise from human limits of speed and recall. A computer has no such limits relative to us, so the question becomes: does this mean computers cannot be creative?