18 January 2016

Philosophizing about AGI

Brad DeLong recently linked to an annoying post on AGI.

David Deutsch, professor and physicist, discusses the progress, or lack thereof, on building an Artificial General Intelligence.  Although Dr. Deutsch takes a while to get there, the core of his argument is that we don't know how to program creativity and, until we figure that out, we can make no progress on creating an AGI.

First, allow me to pull some quotes out of context and rant on them.


"Charles Babbage and his assistant Ada, Countess of Lovelace". 
Lady Lovelace was a collaborator and peer of Charles Babbage, not an assistant. I find this phrase offensive and sexist.


"But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially — the field of ‘artificial general intelligence’ or AGI — has made no progress whatever during the entire six decades of its existence."
The idea that artificial intelligence has made no progress in the past 60 years is completely absurd.  Sixty years ago we didn't have alpha-beta tree pruning.  We didn't have distributed programming.  We couldn't amass hardware with computing power comparable to that of a human brain.  Twenty years ago neural nets didn't work very well and were research curiosities showing a bit of promise.  Today they are used routinely across a wide variety of fields.
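To make the alpha-beta point concrete, here is a minimal sketch of minimax search with alpha-beta pruning, one of those 60-year-old advances.  The tiny game tree is invented for illustration; a real engine would search actual board positions.

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping branches that
    cannot change the final decision (the "pruning")."""
    if depth == 0 or not isinstance(node, list):
        return node  # leaf: a numeric score for the position
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent will never allow this line
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value

# A tiny two-ply game tree; the search prunes part of the last branch.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # → 6
```

The pruning is what lets a program search trillions of positions: whole subtrees are discarded as soon as they provably can't matter.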

Meanwhile, super-human artificial general intelligences already exist. A computer chip is not designed by a single human; it is designed by a corporation. The corporation is a man-made (hence artificial) general intelligence.


"As far as I am aware, no one has [programmed a computer to be self-aware], presumably because it is a fairly useless ability as well as a trivial one."
In actuality, self-awareness is so useful (and so trivial) that it is a standard programming technique.  It turns out that having a computer program tell you when it is not working is quite useful.  We regularly program our software to monitor itself and report on errors and high-latency operations.
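As a sketch of this kind of everyday self-monitoring, here is a small Python decorator (the names and threshold are hypothetical) that times each call and reports failures and high-latency operations:

```python
import logging
import time

logging.basicConfig(level=logging.WARNING)

def monitored(threshold_seconds):
    """Wrap a function so it reports its own errors and slowness."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return func(*args, **kwargs)
            except Exception:
                # the program notices and reports its own failure
                logging.exception("%s failed", func.__name__)
                raise
            finally:
                elapsed = time.monotonic() - start
                if elapsed > threshold_seconds:
                    logging.warning("%s took %.3fs (threshold %.3fs)",
                                    func.__name__, elapsed, threshold_seconds)
        return wrapper
    return decorator

@monitored(threshold_seconds=0.1)
def slow_operation():
    time.sleep(0.2)
    return "done"

slow_operation()  # logs a warning about its own latency
```

Nothing deep is going on, which is exactly the point: a program observing and reporting on its own behavior is routine engineering, not a milestone.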


"And in any case, AGI cannot possibly be defined purely behaviourally. In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs."
How do we tell that this disconnected brain is thinking, feeling, and creating explanations if we cannot examine its inputs and outputs?  What alternative to evaluating a system behaviorally does Dr. Deutsch propose we use to decide whether that system is intelligent?

In fact, earlier in the article Dr. Deutsch implies a behavioral test: the AGI should write a paper on dark matter that is accepted for publication by an academic journal.


"Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough."
And yet, the one General Intelligence we know of was built without anyone understanding in detail how it works.

The analogy also fails because General Intelligences exist.  Flying skyscrapers don't exist.

Also, you can learn a lot about how something needs to work by trying to build it before you completely understand all the details.  The act of building something wonderfully focuses your mind as to which details are critical to understand and which can be glossed over.


"Likewise, when a computer program beats a grandmaster at chess, the two are not using even remotely similar algorithms. The grandmaster can explain why it seemed worth sacrificing the knight for strategic advantage and can write an exciting book on the subject. The program can only prove that the sacrifice does not force a checkmate, and cannot write a book because it has no clue even what the objective of a chess game is."
In fact, computers and grandmasters do use vaguely similar algorithms.  It is true that the computer is far more precise and examines far more data in far more detail, but basic concepts are shared between the grandmaster and the computer.  Both learn to look at board positions and evaluate how favorable a position is to each side.  Both learn to balance mobility against material.  And while we might not program a chess-playing program to write a book, we could write a book describing the features of the board the program was looking at, the weights it assigned to those features, and why it decided to sacrifice the knight for some other strategic advantage.

Both the program and the grandmaster use some sort of minimax algorithm to evaluate a tree of possible moves.  Both use some sort of quiescence evaluation to decide when a board position is static enough to be worth scoring.  Chess programs are quite consciously modeled on the thought processes of human players.
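The shared idea can be sketched in a few lines: score a position as a weighted sum of features such as material and mobility.  The features and weights below are invented for illustration and are not taken from any real engine.

```python
def evaluate(features, weights):
    """Score a position: positive favors White, negative favors Black."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical feature weights, in rough "pawn" units.
weights = {"material": 1.0, "mobility": 0.1, "king_safety": 0.5}

# A hypothetical position where White is down a knight (-3 material)
# but has much better mobility and king safety.
position = {"material": -3, "mobility": 12, "king_safety": 4}

# -3 + 1.2 + 2.0: the knight sacrifice still scores slightly positive.
print(evaluate(position, weights))
```

Walk a human through this table of features and weights and you are most of the way toward the book explaining why the sacrifice looked worthwhile.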



Overall, Dr. Deutsch is too quick to dismiss emergent behavior.  We already see glimpses of emergent creative behavior in our current systems.  We have various techniques, and will refine others, for generating nano-conjectures and nano-criticisms, the key components of nano-creativity.  Just as with chess, where we took a couple of simple techniques and scaled them up by a factor of trillions, we will execute scads of these nano-conjectures and nano-criticisms across massively parallel hardware, possibly far more hardware than is available to a human brain, and when we tie them together, we will get results that are truly creative.


While writing this I also found another response to Deutsch.

15 January 2016

Taming Bacteria

In the past, we've fought bacteria head-on.  Human health improved greatly when we learned to kill bacteria efficiently with antibiotics.  The next breakthrough in human health will come when we tame bacteria.

Domesticated bacteria can be used in multiple ways.  One approach is to evolve strains that are less harmful, more helpful, and good at crowding out less desirable wild strains. Infecting our mouths with strains of domesticated bacteria may lead to fewer cavities, less gum disease, and otherwise better oral health.  We will also eventually infect our guts with preferred strains of bacteria.

Another form of tame bacteria will be used for monitoring, diagnosing, and repairing common problems.  We will be able to design bacteria that detect the presence of selected molecules in our bodies.  Detection will act like a switch that causes the bacteria to produce a useful enzyme or molecule.  Our bodies will contain trillions of very simple computers acting locally to make individually minor improvements that will combine to provide major benefits.

Poorly Argued Economics

Over at the American Enterprise Institute, we find some poorly argued economics.  Steve Conover suggests there are four schools of economic thought, but in his Far Left school he not only creates a straw man, he also commits a logical error.

Paul Krugman does not say we should spend money on Bridges to Nowhere.  He (and Keynes) say that spending money on a Bridge to Nowhere is better than doing nothing: at least you put money into the pockets of the people building the bridge, and they will go out and spend that income on something useful.  Still, Krugman and Keynes would far rather build a Bridge to Tomorrow and get both the benefit of paying workers and the benefit of creating wealth that produces a public return on investment.

Also, wasteful defense spending is a perfectly good Bridge to Nowhere.  If you believe that Paul Krugman would be happy to spend money on a Bridge to Nowhere, then you have to believe that he would be happy to keep wasteful defense spending.

And while we're here...  The real statesman will prefer to cut wasteful defense spending and build the Bridge to Tomorrow, because doing so will increase GDP and allow more money to be spent on defense.  As Eric Schmidt, Google's Executive Chairman, likes to say: more revenue solves all problems.