*27 Sep 2012*

Some people like to talk about the unreasonable effectiveness of mathematics in describing nature. But I've never been impressed with how well math describes the universe. I'd be more surprised if it **didn't**. I find it very hard to express this concept and have only recently gained the sophistication to even attempt it, but here goes:

*Math that predicts reality is inevitable in a universe where learning occurs.*

The basic idea is that it is tautological that mathematics works. A universe you can learn about and understand is one whose description is compressible. It turns out that math is one of the ways to describe it. Programs are another (constructive only). If the universe were not compressible we could not, by definition, learn anything about it and **math would not work**.

You can't make predictions about an incompressible system. If you can learn about the universe then there exists a system that captures a compressed description of it. A compressed description is by definition one that is able to predict beyond its size (bastardizing Kolmogorov complexity a bit here). Math is one language with which you can create a compressed description of nature. Math that predicts reality is inevitable in a universe where learning occurs.
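The compression-implies-prediction point can be made concrete with a toy sketch (the sequence and the mod-97 rule here are hypothetical, chosen just for illustration): a rule that reproduces observed data while being shorter than the data necessarily says something about data never recorded.

```python
import zlib

# Hypothetical "observations": 1000 terms produced by a simple hidden rule.
data = bytes((n * n) % 97 for n in range(1000))

# A generic compressor already finds the structure: the compressed
# description is much smaller than the raw 1000 bytes.
print(len(data), len(zlib.compress(data, 9)))

# The rule itself is an even shorter description, and because it is
# shorter than the data it necessarily predicts beyond it:
def rule(n):
    return (n * n) % 97

assert all(rule(n) == data[n] for n in range(1000))  # reproduces the past
print(rule(5000))  # predicts a term we never observed
```

An incompressible stream, by contrast, has no description shorter than itself, so nothing about its unseen terms can be inferred from its recorded ones.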

The other issue is partially anthropic; an inconsistent universe would be less likely to have us here being awed that a *methodology based on the consistent application of rules* works at describing it. If one posits that a certain notion of symmetry captures some properties of particles well, then I don't think it is surprising when a consistent theory of symmetry, explored and unfolded, has the ability to predict the properties of any system consistent with its basic operations and axioms. Whether the system lives on paper or out there in the universe.

Any consistent system whose basic behaviour is explainable with a theory or algebra of symmetry should be predictable when the theory is further expanded in a deductive manner. This is just a more involved example of the not-at-all-surprising phenomenon that an 8-slice pizza divided amongst 4 people yields 2 pieces each.

What I am trying to say is that if a mathematical model has successfully captured/compressed a phenomenon then its internal rules must be consistent with the phenomenon's rules. So if you explore the mathematical system, so long as you are consistent in your expansion, it follows that your discoveries may also be observed one day in the universe. Viewed this way, the universe is a subset of mathematics: one of many possible expressions and choices of space, variables and algebras. Hence the only difference between the space of all possible mathematics and specific realities is the placing of constraints and the removal of 'simplifying' assumptions.

Science, then, is searching this mathematical space for a description that sufficiently compresses an observed phenomenon.

Taking the analogy of a graph/tree, I think the difference between Lie algebras and dividing pizza is one of depth, not structure. The same procedure for constructing the tree is used for both; it's just that one is of greater depth.

Note that incompressible, or rather of high Kolmogorov complexity, need not mean inconsistent. So we can talk about universes whose structures are of varying K-complexity, related to how difficult they would be to learn about. Life would be possible but intelligent life less likely, as evolution would have a more complex search space. Intelligent life, if it existed, would be less likely to develop Occam's razor as a guiding principle.

Most things of interest to us are actually of fairly low KC, and so it comes as no surprise that "inferring" the rules of the automaton is possible at all. I know my terminology is very biased towards computer science but I hope I've managed to make my key thoughts on this clearer.

Here are the assumptions I hold as true: the strong Church-Turing thesis. Everything in the universe is computable, and by a [quantum] Turing machine. I do not think the universe is a computer or a simulation, but I think all its phenomena are computable.

Kolmogorov complexity is very important because it is actually about predictability (it's also closely related to entropy). For example, consider an entity running an experiment. They gather a lot of data. Now they want to find a set of equations that properly captures the data. The higher the KC of the generating function of the data, the harder it is to learn. That the universe has exploitable structure means one can get decent results from combinatorial search. That the functions are of low KC means they can be learned in a decent amount of time. If the laws of nature were of high KC then evolution could not have developed so much in this time.
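A hedged illustration of that combinatorial-search point (the data and the hypothesis space below are invented for the example): when the generating function has low KC, the space of candidate descriptions is small enough that brute force recovers it.

```python
from itertools import product

# Hypothetical experimental data, secretly generated by y = 3x^2 + 2.
observations = [(x, 3 * x * x + 2) for x in range(10)]

# Combinatorial search over a low-complexity hypothesis space:
# quadratics a*x^2 + b*x + c with small integer coefficients.
# Low KC here means few coefficients with small values, so the
# search space is tiny and the rule is quickly learnable.
def search(obs, bound=5):
    for a, b, c in product(range(-bound, bound + 1), repeat=3):
        if all(a * x * x + b * x + c == y for x, y in obs):
            return a, b, c
    return None  # no low-complexity hypothesis fits the data

print(search(observations))  # (3, 0, 2): the generating rule is recovered
```

Had the generating function been a high-KC lookup table instead, no search over short hypotheses would have found it; the only fitting description would be as large as the data itself.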

What I am saying is that it is not that impressive that math works, because all it is is a language for encoding compressed descriptions; any place where learning occurs must have such things. What is more interesting is how much structure there is and how "easy" most things are. But then, thinking further, if one imagines a probability distribution described by a cellular automaton generating all possible physics and all possible universes optimally (not saying that's how it is), it is sensible to assume simpler universes would be more likely to be computed. As such, it should come as no surprise that we are in one such universe.
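One standard way to make "simpler universes are more likely" precise is a Solomonoff-style simplicity prior (my framing, not something the optimal-generator picture requires): a binary description that is L bits long gets prior weight 2^-L, so short descriptions dominate exponentially. The bit lengths below are made up for illustration.

```python
from fractions import Fraction

# Simplicity prior: a description of length L bits gets weight 2**-L.
def weight(description_bits):
    return Fraction(1, 2 ** description_bits)

# A universe describable in 10 bits is favoured over a 100-bit one
# by a factor of 2**90:
print(weight(10) / weight(100))
assert weight(10) / weight(100) == 2 ** 90
```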

It is true that in principle Laplace's demon could have deduced much of quantum mechanics, but in practice such a construction (not so different from constructing a tree) is too difficult. I don't agree that physics is so different either. I think for other subjects, biology say, the issue is that our tools were not sufficiently developed to allow us to treat them in the same manner as physics. And they are more complex, so harder to do without querying the environment and less amenable to pure deduction. Consider that it takes more bits to specify a particular human than, say, the entire universe as just its rules unfolded. Just as, while the equation for the Mandelbrot set is of low complexity, specifying a small part of a specific location takes more bits and more complexity.
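The Mandelbrot point can be shown directly: the whole set is generated by a few lines of code, while merely naming one specific location deep in it costs more bits than the generating rule itself (the coordinates in the last line are an arbitrary example point).

```python
# The entire Mandelbrot set is generated by one short rule: iterate
# z -> z*z + c and ask whether the orbit stays bounded.
def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # escaped: c is outside the set
    return True  # stayed bounded (up to max_iter): treat c as inside

print(in_mandelbrot(complex(0, 0)))  # True: the origin never escapes
print(in_mandelbrot(complex(2, 2)))  # False: escapes on the first step

# Naming one particular location already takes ~50 bits per coordinate,
# more information than the rule above contains:
print(in_mandelbrot(complex(-0.743643887037151, 0.131825904205330)))
```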

...nature is predictable, which means that the idea that it has rules which it follows is valid. Capturing these rules is learning a model: a compressed description of the phenomenon and the pattern of its rules. If we imagine these rules as a large tree to be inferred, or a program or cellular automaton or whatever, then early math based on reality could be one of the top nodes or a simple rule from the early states. Mathematics might be argued to be a way of learning how to construct this tree, learned by sampling lots of experiences with reality. That is, math is learning a compressed description of the rules of the rules of nature and executing this "program".

As such, following mathematics builds up possible programs that are consistent with the "Nature Tree" even if not based on any phenomenon. So later on, when someone finds a phenomenon that matches a theory, the theory will be a valid way to describe nature, because math follows the same structure of applying rules that is built into nature. And the difference between Reality and Math is one of energy constraints on what counts as a valid physical tree.

In this view Mathematics is meta-learning of the patterns in nature. It is also my argument that it is fairly simple (we can learn it, after all, and our brains are only decently powerful). I mean, it doesn't even use all of first-order logic, IIRC. One wonders about a world with multiple possible construction methods; one with magic, perhaps? Hah.

...

It follows that if you can construct rules, then any system that can execute those rules would be able to simulate how nature would behave. It turns out that we have a universe which, for some reason, has these universal executors.

I believe the Church-Turing thesis is correct, and as such believe the brain is one such executor and that only intuitionistic concepts of constructible mathematics are valid in the universe. As well, in a world not complex enough to have UTMs, consciousness could not exist, by my thinking. One key difference between our brains and a computer is that our brains are much better at manipulating symbolic infinity, and so we are better at doing mathematics (amongst many other reasons to do with exploiting structure).