The Future of Computers

Thursday, August 16, 2012 - 10:50 PM

So right now we have a lot of computing power, we can generate huge amounts of data, and we are starting to get better at learning from that data. But there is an inefficiency in the way current learning algorithms are implemented: each class of learning algorithm is not being matched to the most appropriate hardware. Beyond just more parallelism, we also need more heterogeneous processor architectures. When each class of algorithm runs on the platform best suited to it, it will be far quicker and hence far more energy efficient. Here is my breakdown, with a sketch of each class after the list:

Bayesian and Other Probabilistic or Sampling-Based Methods: Probabilistic Chips (should also go well with mobile)

Methods Based on Linear Algebra and Optimization: GPU/Stream-Processing-Style Architectures

Neural Network and Pattern-Based Classifiers: Memristors
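
To make the first category concrete, here is a minimal sketch of a sampling-based method: a random-walk Metropolis sampler (the function names and the toy target are mine, purely for illustration). The point is that the inner loop is almost nothing but random draws and accept/reject coin flips, exactly the workload a probabilistic chip could supply natively instead of simulating randomness on deterministic logic:

```python
import math
import random

def metropolis_hastings(log_p, x0, n_samples, step=0.5):
    """Random-walk Metropolis: sample from an unnormalized density.

    The inner loop is dominated by random draws and accept/reject
    coin flips, work a probabilistic chip would do natively.
    """
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)  # random proposal
        accept_prob = math.exp(min(0.0, log_p(proposal) - log_p(x)))
        if random.random() < accept_prob:       # stochastic accept/reject
            x = proposal
        samples.append(x)
    return samples

# Toy target: a standard normal, via its log-density up to a constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=10000)
print(sum(samples) / len(samples))  # should land near 0
```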
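
The second category is the clearest fit. Here is a toy least-squares problem solved by batch gradient descent in NumPy (the problem and all names are made up for illustration). Every step is dense matrix arithmetic, the data-parallel workload GPUs and stream processors are built for, and nothing about the algorithm changes if the arrays live on a GPU instead:

```python
import numpy as np

# Synthetic least-squares problem: find w minimizing ||X w - y||^2.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))
w_true = rng.standard_normal(50)
y = X @ w_true

# Plain batch gradient descent. Every step is a matrix-vector product
# plus a scaled vector update, all of it trivially data-parallel.
w = np.zeros(50)
lr = 5e-4                        # small enough to be stable here
for _ in range(500):
    grad = X.T @ (X @ w - y)     # gradient of the squared error
    w -= lr * grad

print(np.linalg.norm(w - w_true))  # should be close to 0
```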
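
And since memristor hardware is still hypothetical at this point, the third category can only get a software cartoon of how a memristor crossbar is usually imagined to work: weights stored as device conductances, so that Ohm's law plus Kirchhoff's current law compute a whole matrix-vector product, the core of a neural-net layer, in one analog step:

```python
import numpy as np

# Weights stored as conductances G[i, j] at each row/column crossing.
# Driving the rows with voltages v makes column j collect the current
#     i_j = sum_i v_i * G[i, j]
# (Ohm's law for each device, Kirchhoff's current law at each column),
# so the crossbar computes a matrix-vector product in one analog step.
rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances, i.e. the weights
v = rng.uniform(-1.0, 1.0, size=4)       # input voltages, i.e. the activations

currents = v @ G                         # what the physics would give "for free"

# One neural-net layer is exactly this product plus a nonlinearity:
print(np.tanh(currents))
```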

And current computers are best for exactly the techniques they are least used for: symbolic/logic and program-tree-based methods, along with heuristic search and non-gradient-based optimization (sketched below).
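
For contrast, here is a throwaway hill-climbing sketch (all names mine): sequential, branchy, data-dependent control flow with hardly any arithmetic to parallelize, which is the profile conventional CPUs already handle well:

```python
import random

def hill_climb(score, neighbors, start, max_steps=1000):
    """Greedy hill climbing: sequential, branchy, data-dependent
    control flow, the kind of work CPUs already do well."""
    current = start
    for _ in range(max_steps):
        best = max(neighbors(current), key=score)
        if score(best) <= score(current):   # local optimum, stop
            return current
        current = best
    return current

# Toy problem: maximize the number of 1-bits in a bit string.
def score(bits):
    return sum(bits)

def neighbors(bits):
    # All bit strings at Hamming distance 1 from the current one.
    return [bits[:i] + (1 - bits[i],) + bits[i + 1:] for i in range(len(bits))]

start = tuple(random.randint(0, 1) for _ in range(16))
print(hill_climb(score, neighbors, start))  # climbs to all ones
```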

Of course, these categories are not hard and fast. You can run aspects of program-tree-based methods on GPUs or probabilistic chips, aspects of variational Bayesian methods do well on GPUs, and neural nets as a whole do well on GPUs. There's also the fact that memristor hardware doesn't really exist yet, so we don't know much about its limits or potential. But the bottom line is that there is a lot to be gained simply by matching algorithms to the best possible substrate.