Friday, March 28, 2008

Big Dog!

Here is the kind of intelligent design I can really get behind!

Busy Beavers

The most embarrassing memory I have about my intellectual development as a computer scientist is the day I decided that maybe Turing got that halting problem thing wrong. I know, I know, what was I thinking! Blame it on youth. But I also blame it on a gap in my education on the theory of computation. See, I was taught about Turing Machines (TMs) but was never exposed to Busy Beavers (BBs) until much later.

My thinking at the time was that Turing pulled out a very contrived type of TM in his halting problem proof. He made the TM self-referential to achieve the proof. So my thinking was that perhaps the only class of TM that could evade a halting detection algorithm was the class offered by Turing in his proof. I thought it was possible for the vast majority of other TMs to be shown to halt algorithmically. My thinking went as follows:

  1. A TM is a deterministic device.
  2. If a TM goes through a sequence of states and at some point returns to a state already visited without halting, then that TM will never halt. Here it is important to realize I am including the tape in the state specification.

My "algorithm" for detecting a TM that did not halt was simply to run a TM in a simulator that at each step would save a snapshot of the entire state and compare to previous snapshots to find a duplicate. If a duplicate was found then it would declare the TM would not halt. For this to actually work the algorithm would have to set some bounds on how long it would run the simulation. I did not think this was a problem because I thought there was obviously a linear or at worst quadratic function based on the number of possible states in the input TM. And, of course, this is where I was wrong. Had I known about Busy Beavers I would not have gone down this road.

It turns out that the Busy Beaver function looks something like this for a two-symbol TM:

Σ(6) ≥ 1.9 × 10^704

Linear indeed!

The silver lining for me is that I learned a very important lesson (besides the one about not doubting a mathematical proof by someone of Turing's stature that has withstood the scrutiny of thousands of individuals of similar stature). I learned that human intuition about the emergence of complexity from simple mechanisms is woefully poor. This lesson is repeated when we look at Cellular Automata in NKS. It's also repeated when we compute the number of possible configurations of a chessboard and compare that to estimates of the number of atoms in the known universe.

Luckily I learned my lessons while I was still quite young. Many individuals who believe themselves to be intellectuals have never learned not to always trust what they believe is intuitively reasonable. That's not so bad, except they use the web to infect the gullible with their intuitions. So, be careful out there! There are a lot of busy beavers with bad intuition.

Micro vs. Macro Evolution

I suspect some of my readership may be getting bored with all this evolution vs. ID stuff, so I promise after this post we will return to our regularly scheduled programming!

A favorite ploy of creationism is to accept microevolution (e.g., Darwin's finches) while rejecting macroevolution (new species, birds descending from some dinosaurs, etc.). There is plenty of credible discussion about micro- and macroevolution on the web so I am not going to repeat it here. See Douglas Theobald, John Wilkins and Wikipedia. I'd like to instead address the kind of drivel exemplified by the thousands of posts like this one. Here we see an author asking "When Did the Fish Sprout Legs?" and then denying such a leap is physically or biologically possible. Here is an excerpt:

When one examines the historical record of life, we find the absence of transitional forms between the major life groups such as fish and amphibians or reptiles and birds. The fossil record has failed to yield the host of transitional forms demanded by the theory of macro-evolution. Rather, the fossils show an abrupt appearance of very distinct groups of animals. Take, for example, the supposed "fish-to-amphibian" transition. The general assumption has been that the earliest amphibians evolved from the order of fish, the Rhipidistia. However, there are major differences between the earliest assumed amphibians, the Ichtyostega, and its presumed fish ancestor. The differences are not simply a few small bone changes but are enormous structural differences as can be seen in Figure 1. The first amphibian had well-developed fore- and hind limbs which were fully capable of supporting terrestrial motion. The transitions between the two are strictly hypothetical, and no transitional fossils have ever been found ... only imagined and artistically drawn. The mechanism for the supposed macro-evolution of the fish to the amphibian is purely hypothetical.

When I was a boy my family used to picnic at Westbury Gardens in Long Island. There is a large pond there where I used to love to catch frogs to take home. There was also a shallow area where there were steps leading into part of the pond. Around these steps swam hundreds of tadpoles. One day I decided it would be really cool to capture some tadpoles and take them home to watch the transition of a tadpole into a frog. So I caught about a dozen tadpoles and took them home and placed them in a fish tank. I waited and waited but they never turned into frogs. Clearly I did not provide them with the right environment and nutrients to allow this transition to occur.

Can we learn anything at all from my boyhood escapade? Well clearly I am not going to claim that the transition from tadpole to frog is an example of macroevolution at work. Clearly the transition is preprogrammed and does not involve any mutation or selection. But here is what is interesting and very instructive:
  1. A tadpole looks far more like a fish than it does a frog.
  2. Everyone knows that tadpoles do sprout legs and become frogs given the correct conditions.
  3. We also know that the transition from tadpole to frog is not instantaneous and each intermediate form is viable.
  4. We learn from my experiment that given the wrong environment a tadpole will remain a tadpole and eventually die.

So in a time frame far, far shorter than any timescale on which macroevolution occurs we see a fish-like-thing turn into a frog. Fascinating really. What is fascinating is not that this is a proof of macroevolution. It is not. What is fascinating is that there is a stable trajectory through genotype space that leads to a stable trajectory through phenotype space that manifests itself as a fish transforming into a frog. The mechanisms by which genes switch on and off in the case of tadpoles are based in regulatory genes, enzymes, etc., and not mutation and selection. But so what?

If you accept micro-evolution, whereby selection and mutation lead to small changes in form and you witness for yourself a purely biological process whereby a rather large morphological change can occur in a span of weeks, how can you not at least admit to the possibility of macroevolution? Oh right, it’s not in the bible. Sorry, I forgot.

p.s. I just found similar ideas by someone much more qualified than myself. Definitely worth a read.

Wednesday, March 26, 2008

The Fundamental Principle of Science

What is the fundamental principle that distinguishes science from non-science? I have been thinking about this a lot lately. I am somewhat familiar with the vast literature from the philosophy of science which speaks to this question (e.g. Popper, Kuhn, Feyerabend). Popper is best known for the "falsifiability criterion". Kuhn is famous for his "paradigm shifts". And Feyerabend insisted that science not follow any method whatsoever, lest it somehow restrict itself.

I buy into pieces of each of these philosophies but yet I feel compelled to think about a principle that would resonate with almost every practicing scientist. To me, the principle can't be as simple as "the scientific method" or the use of mathematics. Much scientific progress happens outside the confines of rigorous method and rigorous math. Scientists can't escape from the fact that they are ultimately human and as humans they succumb to emotion, prejudice, and turf wars. They use rhetoric as much as they employ differential equations and statistics.

Ultimately, despite temporary deviations from method and rigor, all true scientists buy into the principle of Occam's razor. No matter what mode a scientist is presently working in, he or she is guided by a quest for simplicity. This does not mean the path to simplicity is always a straight line.

Most computer programmers, like myself, are also on a quest for simplicity. We call code "elegant" when it achieves great feats while remaining simple. However, most programmers don't regularly write elegant code; we just know that when we do it is the most satisfying experience imaginable. Likewise, most science does not start out elegant but it is constantly seeking this state. Science is looking for the simplest rules, laws and equations that explain the most observations, dispel the most mysteries and lead to the most new discoveries.

Occam's razor is the essence of Science.

This, more than anything else, is why the vast majority of practicing scientists reject pseudoscience (like Intelligent Design, Astrology, Numerology and the like). For example:

Intelligent Design: how could an explanation that requires the preexistence of a designer before the designed be the simplest explanation? Isn't it simpler to assume the non-circular premise that intelligent life does not depend on the preexistence of someone more intelligent than the life whose origin requires explanation?

Astrology: how can the position of planets, whose gravity is too weak to even move a feather on Earth, be the simplest explanation for any given human's life story?

Numerology: how could the letters of one's name, which are arbitrary artifacts of the evolution of language, have any bearing on a person's fate? Isn't it simpler to imagine one's fate is tied to a combination of heredity, environment and chance?

Of course, a believer in god would counter that his system is the simplest. You presume god and everything else follows. How can you get any simpler!?! It is of course at this point where any hope for intelligent discourse ends and the scientist and the faithful must part ways.

Tuesday, March 25, 2008

A Lesson in the Process of Science.

I realize that I have been posting quite a bit about education and the evils of teaching creationism. These topics are a bit off topic for this blog but they are important in light of the fact that (a) this is an election year and (b) a new creationism propaganda film starring Ben Stein is about to be released.

This film and creationists in general claim that "big science" is stifling other viewpoints and that doing so is anti-scientific. However, this position has as much legs as the theory of creationism itself (that would be none).

Allow me to illustrate how science actually works by considering another area that is not as emotionally charged as the origins of life. Let's consider physics and in particular Quantum Mechanics (QM). I am inspired to write this by a recent article in New Scientist titled Quantum Randomness may not be Random.

As most readers are probably aware, the meaning and interpretation of Quantum Mechanics was hotly debated during the birth of modern physics (~1880-1930). The two most famous individuals at the heart of this debate were Albert Einstein, whose position is best immortalized in the "God does not play dice" quote, and Niels Bohr, who argued for the abandonment of all notions of causality at the quantum level. Bohr's viewpoint became known as the Copenhagen interpretation and it ultimately became the dominant viewpoint of physics and the one that the vast majority of physicists accept today. In fact, this interpretation of QM has the same status in physics as the Theory of Evolution has in biology.

The first point to be made is that during the evolution of modern physics there was certainly room for multiple viewpoints and these viewpoints were hotly debated. But these debates always followed a process of science which begins with the presentation of facts and uses logic and mathematics to reach conclusions. Of course, scientists are humans and a certain degree of emotion and bullying comes into play as well, but nothing is settled using these devices. They are only a backdrop of the human saga that is science. However, this is not what is truly instructive.

Fast forward to 2008. Quantum Mechanics is the most successful theory in the history of physics and its equations are responsible for so much innovation in the modern world. Truly, QM has earned the right in physics to be untouchable dogma. Certainly any respectable physicist who would dare question the Copenhagen interpretation would be the laughing stock of his profession and his career would be ruined. Certainly the proponents of Creationism would have you believe that this is how science works. But they are wrong.

In the New Scientist article we learn that a respected physicist from Rutgers, Sheldon Goldstein, is trying to revive an older interpretation of QM called the Bohmian Model, after David Bohm. The details are not as important as the moral. Goldstein is not being mocked by physicists (even though his views are squarely in the minority) because he and his peers question the dogma of QM on scientific grounds. He presents mathematical and logical arguments. When his peers raise objections he does not scream foul or prejudice but rather talks about possible experiments. He does not dismiss his peers' arguments by arguing in circles, nor does he draw on sources of mysticism that lie squarely outside of science. Goldstein and others can question Big Science while remaining well grounded in the processes that define the way science has always operated.

Creationists don't play by the rules of science but want the respect of scientists. They propose arguments which draw on misrepresentations of thermodynamics, but when they are called out on this they jump to other arguments equally fallacious. It is not so much the argument of design that disturbs most scientists; it's the lack of logical and consistent reasoning that pervades all of ID.

I doubt many proponents of ID read my blog, but if there are any out there allow me to suggest the following analogy. Imagine a scientist walking into your church this Sunday and saying, "Listen, all you Christians, your whole process of worshiping Christ and interpreting the Bible is wrong. You should interpret Matthew like such and such and Paul like this and that." Wouldn't you be furious? What right does a heathen have to tell your preacher what the Bible means? How dare he! Well I say to you, "How dare you! How dare you come into the house of science and tell it how it should be. By what right?!? Please leave immediately! ... But if you'd like to drop a small monetary donation on the way out we'd gladly accept!"

Sunday, March 23, 2008

Intelligent Design Indeed.

Here it is in a nutshell why teaching ID in schools will create a country full of boobs (I mean more than the number we already have sitting in pews).

Saturday, March 22, 2008

Interval Math

While doing research for the Numerics chapter of my forthcoming Mathematica Cookbook I came across a site devoted to research on Interval Math. Interval math is an approach from the domain of numerical analysis that deals with the fact that all measurements are imprecise by abandoning the representation of measured values by single numbers. Instead, it defines all mathematical operations on intervals.

Mathematica (as of version 5) supports real (but not complex) interval math, where intervals take the form Interval[{min1,max1}...]. All of the typical mathematical operations and functions are defined for intervals.

Interval math is important for computer systems that must act intelligently in the real world. All sensors are approximate. This is true for man-made devices as well as for our own eyes and ears. If a sensor on a robot returns a particular value there is always an inherent error. Rather than deal with errors by sampling and averaging, interval math allows the error to be directly represented in the values that enter downstream computations. This means all intermediate results track the propagation of errors from multiple sources to yield better information. There also seems to be a relationship between interval computation and fuzzy sets, but I have not located any resources except on paid content sites.
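To make the error-propagation idea concrete, here is a toy interval class in Python. This is only a sketch of the basic arithmetic (not Mathematica's Interval, and ignoring subtleties like floating-point rounding direction that real interval libraries handle); the sensor readings are made up:

```python
class Interval:
    """A toy real interval [lo, hi] for illustrating interval arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        # Sum of two intervals: endpoints add.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: the extremes must be among the four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Two sensor readings, each known only to within +/-0.1:
a = Interval(9.9, 10.1)
b = Interval(4.9, 5.1)
print(a + b)   # about [14.8, 15.2] -- the uncertainty widens automatically
print(a * b)   # about [48.51, 51.51]
```

Notice that no separate error estimate has to be carried along: the width of each result interval is the propagated error.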

It seems that although the study of interval math began in the US it is largely forgotten here, while in Germany there are conferences devoted to it and it is part of the qualifying exams for studies in numerical methods.

Some of the less technical resources on the earlier mentioned site are this introduction, an article from American Scientist and even a movie.

Thursday, March 20, 2008

I was going to vote for John McCain...

John McCain pretty much had a lock on my vote for the 2008 election. The purpose of this blog is other than politics so I am not going to go into why I thought he was the best candidate. Instead I would like to discuss why I may have to change my vote.

The issue is "Intelligent Design" AKA "Creationism". Apparently McCain's views on the teaching of evolution and the teaching of creationism is that each is a point of view and each point of view should be taught.

Well, Senator, Astrology and Numerology are points of view. Should we teach them beside Astronomy and Mathematics? Phrenology is a point of view. Should we teach it beside neuroscience? I sincerely hope the senator would have the common sense, even though he says he is not a scientist, to see that "points of view" and "science" are not the same thing. Points of view don't cure disease, solve problems in physics, help design the next generation of computers, launch a spaceship, etc. A point of view is not a scientific criterion. Scientists follow a process, and within the boundaries of that process there can be different "points of view". "Intelligent Design" does not follow the process of science. This has been well established, so it would be silly to repeat the points here.

Can McCain be convinced to abandon this position? Well, just to get my vote he probably can't, but I think it's time for a little grass roots action in the states where McCain has to have victory to become president. There must be enough rational folks out there to help convince the senator to abandon his foolhardy stance.

Tuesday, March 18, 2008

In Dedication To Arthur C. Clarke 1917-2008

You made me fall in love with AI.

You made me become a fan of Science Fiction.

You (with Kubrick's help) sent shivers down my spine at the sight of the black monolith.

You had a vision for what 2001 could have been had mankind not squandered its resources trying to kill each other.

I majored in Computer Science partly because of you.

You will always be alive because you live in the minds of your fans and will one day live in the mind of HAL.

Sunday, March 16, 2008

The Problem with Mathematics Education

There are numerous essays and newspaper blurbs lamenting the poor state of mathematical education in the US. Here is a typical example: Presidential panel bemoans state of math education.

What I see as the problem is that advanced mathematics is introduced in language that is unfit to inspire any but the few that were genetically destined to be mathematicians (or physicists).

Ask a recent college grad what an eigenvalue or eigenvector is. I give you 100:1 odds you'll get a blank stare. Okay, now ask them to read this explanation from a popular math web site. I bet their face will be even blanker. Now ask them to read this wonderful little explanation. Chances are the lights came on.

This is not to say that the latter explanation will allow a person to do the math. But this is certainly where math education, even at the highest levels, should begin. Illustrate why the problem is important; give a sensory picture to go along with the abstractions. Some might believe that this is how most mathematicians teach, but that is simply not the case. Mathematics is a very macho profession and many mathematicians believe it's beneath them to offer intuition prior to rigor. The sad truth is many of them could not come up with compelling intuitive explanations even if they wanted to. It was not the way they were taught either.
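Here is the kind of sensory picture I mean, as a few lines of plain Python (the matrix and vectors are just a made-up example): an eigenvector is a direction the matrix only stretches, and the eigenvalue is the stretch factor.

```python
def matvec(A, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

A = [[2, 1],
     [1, 2]]

# [1, 1] is an eigenvector: the matrix just stretches it by 3 (the eigenvalue).
print(matvec(A, [1, 1]))    # [3, 3] = 3 * [1, 1]
# [1, -1] is the other eigenvector, stretched by eigenvalue 1 (left unchanged).
print(matvec(A, [1, -1]))   # [1, -1]
# A non-eigenvector gets turned as well as stretched:
print(matvec(A, [1, 0]))    # [2, 1] -- no longer points along [1, 0]
```

Show a student that picture first (special directions that survive the transformation unrotated) and the characteristic polynomial has something to hang on to.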

Saturday, March 15, 2008

Mathematica on LinkedIn and on a Wiki

This is my third post today (penance for not posting for so long!)

I recently started a LinkedIn Group called Mathematica Users Group. If you are a member of LinkedIn you can join the group by clicking here. After creating the group I thought it would be cool to have a Mathematica Wiki and soon discovered that Luc Barthelet thought this was a good idea too but thought of it a few years earlier than I did!

Semantic Wikis

I attended a Semantic Web Meetup this past Thursday (Mar 13) where the topic was Semantic Wikis. Although the presentations were not as focused as I would have liked, the topic is an interesting one. The two talks focused on Semantic MediaWiki and the presentations can be found here and here.

Semantic MediaWiki is an extension to MediaWiki, the wiki engine that powers Wikipedia. The basic idea is that the wiki supports an underlying triplestore (product example). Triples model subject, predicate, and object relationships (for more Semantic Web background see this, this and this).

The problem with a regular wiki is that the information is largely unstructured. Some may argue this is a feature, and there is something to the argument that the popularity of the wiki stems from not forcing authors to use cumbersome syntax to structure the data for the benefit of computers. However, this lack of structure makes the information in a wiki hard to re-purpose and also makes wikis harder to maintain (consider the fact that there is no automation in Wikipedia to keep lists like this one in sync with new pages).

Semantic wikis solve this problem by tagging data with known relationships that the computer can automatically leverage to cross-reference, collate and re-purpose data.
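The payoff of structuring facts as triples is easiest to see in miniature. Here is a toy triplestore in Python (the subjects and predicate names are invented for illustration; real systems use RDF and query languages like SPARQL):

```python
# A tiny triplestore: facts as (subject, predicate, object) tuples.
triples = {
    ("Paris", "is_capital_of", "France"),
    ("Berlin", "is_capital_of", "Germany"),
    ("France", "is_a", "Country"),
    ("Germany", "is_a", "Country"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching a pattern; None is a wildcard."""
    return [(a, b, c) for (a, b, c) in sorted(triples)
            if (s is None or a == s)
            and (p is None or b == p)
            and (o is None or c == o)]

# Unlike a page of free text, this can be queried and kept in sync mechanically:
print(query(p="is_capital_of"))   # every capital/country pair
print(query(o="France"))          # everything said about France
```

A "list of capitals" page in this world is just the first query, regenerated on demand, rather than a hand-maintained list that drifts out of date.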

I think this idea is a natural progression of the Wiki concept but it remains to be seen if Semantic Wikis ever reach a critical mass comparable to Wikipedia. My personal view is that the work of organizing mounds of textual information needs advances in computer processing (AI) and that only a select few fanatics will engage in "tripling up the web" manually. Although, when it comes to web trends my crystal ball has been rather clouded.

Readers of my older posts know that I have proposed similar ideas under the moniker WISDI. I am still interested in the WISDI idea but circumstances have forced me to turn my attention elsewhere for the near term (I'll update readers in future posts) .

Ultimately, triples are just a syntax for the logic of relations (which is not even first order logic) so, to me and many others, the Semantic Web initiative is using really low fidelity tools to attack a high fidelity problem. However, in the agile spirit of "the simplest thing that can possibly work" they may achieve a more usable and reusable web in the near term.

Nested Dreams

I have always found dreaming a fascinating subject. I think dream research is key to unlocking data about consciousness. However, it is one of those areas of research where there is a very low signal to noise ratio.

In the past year I have had a very vivid class of dream that I don't recall ever experiencing earlier in my life. I don't know if there is a technical term for it, but I call it a "nested dream". This is a dream that I apparently wake up from and reflect on, but am, in fact, waking up into another dream. I am not talking about simply transitioning from one dream to another, but actually dreaming, dreaming about waking up from that dream, things occurring in the new "dream stack frame", and then ultimately really waking up and remembering details from both frames.

I use the notion of a stack loosely since there is no remembrance of pushing down from dream 1 to dream 2 but rather there is the remembrance of popping out of 2 and into 1.

Has anyone experienced a similar kind of dream?

Saturday, March 8, 2008

Prof. Ray C. Dougherty's Research

A few weeks ago I attended a Wolfram Research event called Mathematica Publishers Day. The goal of the event was to highlight the capabilities of Mathematica 6 as a platform for technical publishing. I really enjoyed this event but was also pleasantly surprised by a talk that did not quite fit into the overall theme of the event but nevertheless was quite fascinating to me.

The presenter was Prof. Ray C. Dougherty, NYU Linguistics researcher. He used Mathematica to model all possible sine wave based communications systems. The presentation is available via Wolfram. Unfortunately, as with many interesting presentations, you needed to hear the talk to get the most out of it. Here are some interesting excerpts that I remember:

  • The Cochlea is computing the second derivative of the auditory input.
  • The most mathematically complex communication system is one where the transmitter and receiver have the same anatomy (e.g., wings of insects).
  • Bats can hear phase changes because they can rotate their ears. A human can not hear a change in the rotation of a tuning fork but a bat can.
  • Prof Dougherty believes he has a Chomsky generative grammar that enumerates all possible animal communication systems.
  • He also believes he can map each possible system onto the integers in a natural way.
  • From this he concludes that evolution must proceed in jumps.
  • He relates this idea to the evolution of all possible Tic Tac Toe Games to illustrate the notion that all such games are not unique and similarly the space of all possible communication systems contains many redundant systems as well.
  • He goes on to visualize distributions of the primes to illustrate that there are systems that are not random but whose patterns are too complex for us to model in a simple fashion, and explains how this is related to the ideas in Stephen Wolfram's NKS.