Thursday, October 30, 2008

Geeks just wanna have fun

A computer is truly the greatest geek toy ever invented. It is like an erector set to the googolplex power. Being "forced" to earn a living making virtual toys for others to use, rather than spending one's days exploring this toy's possibilities for my own edification, can be truly depressing sometimes. Maybe not as depressing as having to sell used cars or life insurance to feed one's family, but close. I should probably be grateful I have a job that involves these toys at all, especially in this economy.

I guess I just feel like being a whiner today in lieu of a useful blog post. Go ahead and slam me in the comments.

Tuesday, October 7, 2008

More Programmers = More Code

What do you get when you throw lots of programmers at a problem? You get lots of code! That's what programmers do, duh. It does not matter how simple or complex the problem is. For any given size problem, more programmers will mean more code: more code to debug, more code to test, more code to maintain. More, more, more. Eventually this requires even more programmers, and the cycle continues until the project collapses.

And it is even worse if you only hire the best and brightest programmers (don't we all!!). They write code faster! And their code will be more complex than a dullard's, because good programmers like to write general solutions.

Rich companies like Microsoft can keep this going for some time. However, you see the result in Vista and Office. Bloat, bloat, bloat.

What do you do if you have a problem that would take 2 programmers 12 months, but you want it done in 6? Hire 2 more programmers? Hire 4 more? Big mistake! First off, it ain't gonna happen in 6 months no matter what you do if it is really a 12-month problem. If, as a manager, you must cover your ass and claim you used all your resources to make it happen in 6 months, I suggest the following.

Put 2 of the best programmers you have on the project. Make sure they like each other and think alike about software design. They are the leads. Only the leads will write code that will go into production.

Use all your remaining dollars to hire programmers who will support the leads but NOT write production code, unless it is a piece of boilerplate delegated to them by one of the leads. What these junior developers do is write and execute unit tests for the leads. They write test tools when appropriate. They participate in code reviews. They are not slaves but future leads in training. Or maybe they are not future leads in training. Maybe they are slaves. That's okay; mediocre programmers need to eat too.

Some bigger projects may need 4 leads or even 6. If you have more than that, your project is DOA and you had better convince someone to split it up. If they won't, I'd find another place to work. That's better than being canned or losing all your hair/sleep/health over a doomed project. And trust me, it is doomed.

Wednesday, October 1, 2008

Let's Make a Deal - Let Monty Rest!

Some "solved" problems just never go away. Perpetual Motion machines is one example. Another example is the Monty Hall Problem. There is presently a mini-debate in the discussion area of Scientific American's How Randomness Rules Our World and Why We Cannot See It with a significant number of adamant doubters of the standard result that states it is better to switch if Monty shows you a goat.

This problem is amazingly easy to understand once you analyze it correctly and remove certain ambiguities from the problem statement. Here's my analysis and some Mathematica simulations to add some weight (as if any were needed).

Okay, we all can agree that the probability of NOT picking the DREAM VACATION is 2/3, right? There are two GOATS and one DREAM VACATION.

Now, when Monty shows you a remaining door with a GOAT, he has just beamed you some very significant information. He has told you that if you picked a GOAT, then the probability of getting the DREAM VACATION is 1 if you switch! We already know the probability that you picked a GOAT is 2/3, so after he gives you this new information, your probability of winning by switching is 2/3. So switch, for GOAT's sake!! If you don't switch, your probability stays at 1/3.

Here is a Mathematica program for the non-believers.

GOAT = 0; (* Goat worth zero *)
VACATION = 1; (* Vacation worth one*)

makePrizes[] := Module[{}, Switch[RandomInteger[{1, 3}],
  1, {VACATION, GOAT, GOAT},
  2, {GOAT, VACATION, GOAT},
  3, {GOAT, GOAT, VACATION}]]

randomPick[doors_List] := Module[{},RandomInteger[{1,Length[doors]}]]

strategy1VS2[trials_Integer] :=
 Module[{winnings1 = 0, winnings2 = 0, firstPick, secondPick, doors, doors2},
  Do[doors = makePrizes[];
   firstPick = randomPick[doors];
   (*winnings of person who keeps first pick*)
   winnings1 += doors[[firstPick]];
   (*delete first pick from choices*)
   doors2 = Drop[doors, {firstPick}];
   (*delete a goat from the remaining doors*)
   doors2 = Drop[doors2, Position[doors2, GOAT][[1]]];
   (*the switcher always takes the remaining door*)
   secondPick = doors2[[1]];
   (*winnings of person who switches*)
   winnings2 += secondPick, {trials}];
  {winnings1, winnings2}]

strategy1VS2[10000]  (* run the simulation 10000 times *)

The result {3356,6644} means keeping the first choice paid only 3356 over 10000 runs, while switching paid 6644!

Now, there are ASSUMPTIONS here (there always are). One assumption is that the position of the prize changes on each run. It turns out that keeping the prize behind any particular door for the entire simulation does not matter (as long as the contestant does not have that information, obviously!).

strategy1VS2A[trials_Integer, init_List] :=
 Module[{winnings1 = 0, winnings2 = 0, firstPick, secondPick, doors, doors2},
  Do[doors = init;
   firstPick = randomPick[doors];
   (*winnings of person who keeps first pick*)
   winnings1 += doors[[firstPick]];
   (*delete first pick from choices*)
   doors2 = Drop[doors, {firstPick}];
   (*delete a goat from the remaining doors*)
   doors2 = Drop[doors2, Position[doors2, GOAT][[1]]];
   (*the switcher always takes the remaining door*)
   secondPick = doors2[[1]];
   (*winnings of person who switches*)
   winnings2 += secondPick, {trials}];
  {winnings1, winnings2}]




The other assumption is that you're not forced to commit to a switch before seeing the goat. This IS important!! If you must choose a second door at random before Monty opens one, your picked-a-goat information is useless and your chance of winning falls back to 1/3.

strategy1VS2B[trials_Integer] :=
 Module[{winnings1 = 0, winnings2 = 0, firstPick, secondPick, doors, doors2},
  Do[doors = makePrizes[];
   firstPick = randomPick[doors];
   (*winnings of person who keeps first pick*)
   winnings1 += doors[[firstPick]];
   (*delete first pick from choices*)
   doors2 = Drop[doors, {firstPick}];
   (*randomly choose from the remaining doors*)
   secondPick = randomPick[doors2];
   (*winnings of person who switches*)
   winnings2 += doors2[[secondPick]], {trials}];
  {winnings1, winnings2}]


So information has value. Duh!

So now that you have this information, become a believer, make the switch!
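For readers without Mathematica, here is a rough Python equivalent of the simulation (my own sketch, not the original code; the door layout and names are mine):

```python
import random

def play(switch):
    doors = [0, 0, 1]                # two GOATs (0), one VACATION (1)
    random.shuffle(doors)
    pick = random.randrange(3)
    # Monty opens some other door that hides a goat
    monty = next(i for i in range(3) if i != pick and doors[i] == 0)
    if switch:
        # the switcher takes the one door that is neither picked nor opened
        pick = next(i for i in range(3) if i != pick and i != monty)
    return doors[pick]

trials = 10000
stay = sum(play(False) for _ in range(trials))
swap = sum(play(True) for _ in range(trials))
print({"stay": stay, "swap": swap})  # swap should win about twice as often
```

Run it a few times; the switcher's total hovers around 2/3 of the trials, the stayer's around 1/3, in line with the Mathematica result.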

Friday, September 5, 2008

Scratched Chrome

Call me impatient and unreasonable, but don't you think the features of Google Toolbar should be available in Google Chrome???

Friday, August 29, 2008

Tan of the Kitchen Sink

In a post on MathForum I made the mistake of doubting Mathematica could compute the Tan of the Kitchen Sink. It is always a mistake to doubt Mathematica's prowess as Daniel Lichtblau of Wolfram pointed out to me:

In[1]:= Tan[Khinchin//Sinc] // N
Out[1]= 0.165514

I stand corrected!


It sort of spoils the joke/pun to explain, but for non-Mathematica users...

Khinchin's constant is approximately 2.68545
Sinc[x] = Sin[x]/x
N means "give a numeric value"
and // means "apply in postfix form", so
this computes N[Tan[Sin[Khinchin]/Khinchin]]
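You can check the arithmetic without Mathematica; here is the same computation in plain Python (my sketch, using a truncated value of Khinchin's constant):

```python
import math

# Khinchin's constant, truncated to double precision
KHINCHIN = 2.685452001065306

# Tan[Sinc[Khinchin]] = tan(sin(K)/K)
result = math.tan(math.sin(KHINCHIN) / KHINCHIN)
print(result)  # approximately 0.165514
```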

Thursday, August 28, 2008

How to mess with Comp Sci Students

Assignment: Write a compiler that compiles all programs that can't compile themselves.
Extra Credit: Use the compiler to compile the assignment.

Tuesday, August 26, 2008

F# For Scientists Misses the Boat On Mathematica Performance

I recently purchased F# For Scientists by Dr. Jon Harrop after the author mentioned it on the Mathematica Mailing List. According to Dr. Harrop,

Mathematica's .NET-Link technology allows Mathematica and .NET programs to interoperate seamlessly. Moreover, Microsoft's new functional programming language F# provides many familiar benefits to Mathematica programmers. ... The marriage of Mathematica with F# can greatly improve productivity for a wide variety of tasks.

I am a big fan of Mathematica and functional programming and have been wanting to check out F# for some time so I decided to give the book a shot. It just arrived today so I can't post a full review but I did jump directly to the small section (5 pages) on using F# with Mathematica.

What did I learn? Well, this section rightly claims that Mathematica has awesome symbolic math capabilities (it does). But then it goes on to claim that F# can beat the pants off Mathematica at raw calculation. Thus it suggests that F# programmers should call out to Mathematica for symbolic integration but then evaluate the result in F# for speed (to the tune of 3.4 times Mathematica's speed). I was naturally dubious. The explanation of this speedup is given as

The single most important reason for this speed boost is the specialization of the F# code compared to Mathematica's own general purpose term rewriter. ... Moreover, the F# programming language also excels at compiler writing and the JIT-compilation capabilities of the .NET platform make it ideally suited to the construction of custom evaluators that are compiled down to native code before being executed. This approach is typically orders of magnitude faster than evaluation in a standalone generic term rewriting system like Mathematica.

Okay, hold the phone! First off, I did not know the F# language could write compilers. I'll forgive this as poetic use of language; I guess I sort of know what he meant to say. More interesting is that we have gone from 3.4 times to "orders of magnitude". Now, I don't take anything away from the brilliant folks at Microsoft, but the equally brilliant folks at Wolfram have been focusing exclusively on mathematics software for 20 years, and you might think they have learned a thing or two about computational speed!

Here is the example from the book...

First, he uses Mathematica to integrate Sqrt[Tan[x]], which yields


(-2*ArcTan[1 - Sqrt[2]*Sqrt[Tan[x]]] +
2*ArcTan[1 + Sqrt[2]*Sqrt[Tan[x]]] +
Log[-1 + Sqrt[2]*Sqrt[Tan[x]] - Tan[x]] -
Log[1 + Sqrt[2]*Sqrt[Tan[x]] + Tan[x]])/(2*Sqrt[2])

He then goes on to show that Mathematica takes 26 seconds to evaluate this function in a loop for 360,000 iterations.

He then shows a translator that converts the Mathematica expression to F#, and the F# code does the same work in 7.595 seconds.

So far Dr. Harrop is correct, but like so many others in a rush to show their new favorite language superior to another, he forgets to read the manual! Particularly, the section on optimization! If he had, he would have found a handy little Mathematica function called Compile. Hmm, sounds promising. And in fact...

cf = Compile[{{x, _Complex}}, Evaluate[Integrate[Sqrt[Tan[x]],x]]]
Timing[Do[cf[x + y I],{x,-3.0,3.0,0.01},{y,-3.0,3.0,0.01}]]


That's 5.281 seconds on my relatively underpowered laptop (Thinkpad X60)!
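As a language-neutral sanity check of the closed form itself, here is my own Python sketch (nothing to do with the book or either system) that evaluates the antiderivative with cmath and numerically differentiates it, recovering Sqrt[Tan[x]]:

```python
import cmath

SQRT2 = cmath.sqrt(2)

def antiderivative(z):
    # The closed form returned by Integrate[Sqrt[Tan[x]], x], transcribed above
    t = cmath.tan(z)
    s = SQRT2 * cmath.sqrt(t)
    return (-2 * cmath.atan(1 - s) + 2 * cmath.atan(1 + s)
            + cmath.log(-1 + s - t) - cmath.log(1 + s + t)) / (2 * SQRT2)

# Central difference at x = 1 should give sqrt(tan(1)) ~ 1.2480
h = 1e-6
deriv = (antiderivative(1 + h) - antiderivative(1 - h)) / (2 * h)
print(deriv.real, cmath.sqrt(cmath.tan(1)).real)
```

The complex log is needed because, for real x, one of the log arguments is negative; as long as both evaluation points sit on the same branch, the derivative comes out right.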

Some might feel I'm being a bit harsh on Dr. Harrop, but after all, he made me lay out bucks for a book that promised "many familiar benefits" only to deliver 5 measly pages of half-truth. F# programmers may benefit from Mathematica, but the jury is still out as to whether the reverse is true.

Monday, August 25, 2008

Drilling Square Holes

One of the fringe benefits of working on a book is all the tidbits of knowledge you come across while doing research. While working on the graphics chapters I came across a shape known as a Reuleaux triangle.

It turns out that this shape is the key to doing what may on the surface seem impossible: drilling a nearly square hole.

Saturday, July 26, 2008

A Biological Programming Language

I've often said to friends that if I could start my career over again I would go into biology instead of computer science. Now, perhaps, there is a way to have a foot in both worlds.

Little b is a programming language for modeling biological systems. Quoting from the language's site...
The little b project is an effort to provide an open source language which allows scientists to build mathematical models of complex systems. The initial focus is systems biology. The goal is to stimulate widespread sharing and reuse of models. The little b language is designed to allow biologists to build models quickly and easily from shared parts, and to allow theorists to program new ways of describing complex systems.

Currently, libraries have been developed for building ODE models of molecular networks in multi-compartment systems such as cellular epithelia. Aneil Mallavarapu is the author and inventor of little b, and runs the project. Little b is based in Common Lisp and contains mechanisms for rule-based reasoning, symbolic mathematics and object-oriented definitions. The syntax is designed to be terse and human-readable to facilitate communication. The environment is both interactive and compilable.

Makes me wonder if Mathematica would be a good environment for similar exploration, but with more sophisticated tools already built in.

Friday, July 18, 2008

Find the next row. Win $25

Each row below is produced by a definite rule. What is the next row and what is the rule? $25 Prize to the first person who posts the answer in the comments or emails me at [s m a n g a n o [at] i n t o - t e c h n o l o g y [dot] c o m].


Saturday, May 31, 2008

Semantic Vectors Revisited (for 31 bucks!)

It's been a while since I posted (I've been busy writing my new book) and even longer since I've posted anything related to my ideas on using concepts from linear algebra to model intelligence. But I thought I'd share an experience that makes me wonder how anything got done before the WWW!

While doing research on tensors for my book I came across a book called MathTensor: A System for Doing Tensor Analysis by Computer. This book describes software for tensor math developed using Mathematica, so it instantly caught my interest. This led me to one of the authors' web sites, which led me to an article, Tensor Analysis of Matrix Cognition during Medical Decision-Making. Now, you can't put the words matrix and cognition next to each other without getting my immediate attention, so I jumped to that essay, which ultimately led me to this gem: A Scaling Method for Priorities in Hierarchical Structures by Thomas L. Saaty, Journal of Mathematical Psychology, 15:234-281, 1977 (there is no free online version, but you can buy a PDF copy at ScienceDirect if you are willing to part with $31).

I found this research fascinating, and it gave me much food for thought that I'll try to share when I have more time. For now I'd only like to make the following rather obvious observation. If it were not for the web, there would be close to zero chance that I would have found this article, and an even smaller chance that I would be reading it within 15 minutes of finding it. The only sad part is that it is locked up in an obscure journal that I could not access without parting with the cost of a nice dinner. I think journal publishers need to catch up with the rest of the world and begin opening up their older content to free access. They could use advertising to subsidize this, but perhaps advertisement-driven business models have reached a point of saturation. Perhaps it's time for a library-based approach to become virtualized.

I am sure I could find a library within a reasonable vicinity of my home that has access to this journal, but who has the time! Why not offer a version that, rather than costing $31 to keep forever, costs me $1 to read for a day and $0.50 for each additional day? DRM technology is certainly good enough to make this work. And I am guessing the publishers would make more money than by waiting for someone like me who was motivated enough to part with $31. There is a vast amount of lost knowledge hiding in these journals. History has shown that the world benefits greatly when such knowledge is serendipitously rediscovered (think Gregor Mendel and his pea plants). It's time to unlock the vaults of knowledge so creativity and discovery can reach new, unimagined heights!

Friday, May 16, 2008

Real Time Blog Monitoring!!

Part of human nature is to become obsessed with what others think about something you produce. This seems to be especially true of writing. I know many authors (including myself) who have become obsessed with monitoring their Amazon ranking or star rating (I weaned myself off that one with great difficulty).

Feynman wrote a book titled "What Do You Care What Other People Think?" I never met the man in person, but I imagine it would be easier for someone of that stature not to care. I try not to care but often do, much to my displeasure. It is hard for us ordinary people not to measure our own worth through the eyes of others (even if many of these eyes gaze out of even more ordinary heads).

So given this obvious flaw in human nature, what would you say if I told you I found the equivalent of crack cocaine for insecure bloggers? As with other drugs, I found this quite by accident. Some of you may have noticed the little AIM chat widget on my blog. I installed it on a whim, more out of curiosity than any thought that I would spend much time chatting with my few readers. There have been only two attempts by anyone to chat with me, and both occurred when I stepped away from my computer.

However, what is much more interesting is what happens on the other side of this widget. As you can see from the screen shot above, AIM synthesizes a guest id for every visitor to my blog while my chat client is up. This means every time a new eyeball looks at my blog, my AIM client gets updated in close to real time! Further, if you have sounds enabled, you hear the squeaky door open and close as they come and go! Talk about instant gratification!!
The really funny thing is that I did not notice any of this until someone posted one of my posts on Reddit! I don't know if the AIM developers thought this through, but I can tell you that if folks with far more popular blogs than mine embedded this control in their blog pages, the AIM chat servers would get a run for their money!
I'll probably remove the AIM widget from my blog because it serves no useful purpose. I'd rather folks left comments that everyone can benefit from than have a private conversation. Still, it is interesting what the law of unintended consequences can dredge up. Any insecure bloggers care to "take a hit"?

Thursday, May 8, 2008

The Joys of a Technically Inferior Phone

Recently I lost my cell phone. It was a pretty plain-jane Nokia, my contract was up, and I was planning to buy a new phone anyway. My initial inclination was to buy an iPhone. I actually went to my local Apple Store with that very intent. Lucky for me, they were out of stock.

Several of my friends own iPhones, while others own Blackberries. I was beginning to think I was missing out. However, the out-of-stock condition at Apple gave me just enough time to pause and rethink before caving in to my impulse to buy the hot gadget.

First off, I hate AT&T. I did not always hate AT&T, but several recent experiences made me swear never to give them my business again. Still, I almost caved and got an iPhone anyway. Such is the power of techno-lust.

The real reason I'm glad I settled for my new phone (an LG Voyager with Verizon service) is that it is somewhat cool but nowhere near cool enough. Why is this good, you ask?

Watch an owner of an iPhone or a Blackberry. I often do. Watch them stroke their phone, caress their phone, slide the thumb wheel or finger the screen. It really makes you wonder what they did with their hands before these touchy-feely phones were invented.

Now, my Voyager has a touch screen and a keyboard. But neither is such a turn on that I feel the need to constantly fiddle with it. So what do I do with my phone? Well, it pretty much stays in my pocket or briefcase until I need to make a call or check my email. I also use the MP3 player.

This is great. My hands are free to do more productive things, like doodling in the margins of presentations, scratching my head over some obscure code, or even picking my nose when no one is looking (yeah, sure, you do it too, liar).

So think twice before plunking down 400 bucks on a device whose interface is so amazingly fluid you just want to stroke it all day. I am sure you can find more productive ways to use your hands.

Wednesday, May 7, 2008

Nonsense on Stilts

I attended a talk last night by Massimo Pigliucci related to the debate over when science should and should not be trusted. It was in support of his forthcoming book, Nonsense on Stilts: How to Tell the Difference Between Science and Bunk.

This was an excellent and well-balanced talk that showed how scientists and post-modernists can both get things wrong, and what we can do to maintain our BS detectors. I highly recommend listening to it. His web site is called Rationally Speaking.

I probably should not be telling you this...

... but I can't post today because I've come down with a bad case of paralipsis aggravated by symptoms of praeteritio, preterition, cataphasis, antiphrasis, not to mention parasiopesis. So I won't be posting today. I will not stop to mention that my recent posts should have enough meat to keep you busy while I rest.

Monday, May 5, 2008

Red-Black Tree in 2 hours

Many computer science students have heard of red-black trees. If you use a container class library that has a map (ordered associative container), like the STL, you probably know that red-black trees are a popular implementation of a map. It may be non-obvious why the constraints associated with red-black trees cause them to remain roughly balanced, but what is even more daunting is the implementation of such trees in an imperative language like C!

While working on an implementation for a recipe in my forthcoming "Mathematica Cookbook", I found a functional implementation in Haskell (postscript) by Chris Okasaki. I know this has been said a million times before, but it never ceases to amaze me how succinct and beautiful the functional approach can be. Using the Haskell solution as a guide, I was able to develop a complete red-black implementation in Mathematica in under 2 hours. This may not sound that impressive, but consider that the referenced paper does not show how to implement a remove operation. Even setting aside remove, how many programmers could take a red-black tree written in, say, C and translate it into a completely working implementation in Java in under 2 hours? I suppose a few can, but this is really not a post about bragging rights. The functional approach to software development is just god-damn beautiful at so many levels, and it is this, not my hacker abilities, that made this exercise possible.

Now, there are caveats. There always are. Any C implementation of a red-black tree is bound to have numerous optimizations, and a purely functional solution will not fare well for every application of a map (but surprisingly, it is competitive for many; I'll post some C++ comparisons when I have a chance).

If you'd like to learn more about red-black trees but don't know Haskell, I would highly recommend learning the minimum of Haskell you need to understand Okasaki's paper, instead of trying to learn about them by digging into a C implementation first.
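For a taste of how compact the functional version is, here is my own rough Python transliteration of Okasaki's insertion (a sketch, not the Mathematica or Haskell code discussed above; nodes are (color, left, key, right) tuples and the empty tree is None):

```python
def balance(col, l, k, r):
    # Okasaki's four red-red violation cases, all rebalanced the same way
    if col == 'B':
        if l and l[0] == 'R' and l[1] and l[1][0] == 'R':
            _, (_, a, x, b), y, c = l
            return ('R', ('B', a, x, b), y, ('B', c, k, r))
        if l and l[0] == 'R' and l[3] and l[3][0] == 'R':
            _, a, x, (_, b, y, c) = l
            return ('R', ('B', a, x, b), y, ('B', c, k, r))
        if r and r[0] == 'R' and r[1] and r[1][0] == 'R':
            _, (_, b, y, c), z, d = r
            return ('R', ('B', l, k, b), y, ('B', c, z, d))
        if r and r[0] == 'R' and r[3] and r[3][0] == 'R':
            _, b, y, (_, c, z, d) = r
            return ('R', ('B', l, k, b), y, ('B', c, z, d))
    return (col, l, k, r)

def insert(t, k):
    def ins(t):
        if t is None:
            return ('R', None, k, None)       # new nodes start out red
        col, l, key, r = t
        if k < key: return balance(col, ins(l), key, r)
        if k > key: return balance(col, l, key, ins(r))
        return t                              # duplicate: no change
    col, l, key, r = ins(t)
    return ('B', l, key, r)                   # the root is always black

def to_list(t):
    # in-order traversal yields keys in sorted order
    return [] if t is None else to_list(t[1]) + [t[2]] + to_list(t[3])

t = None
for k in [3, 1, 4, 1, 5, 9, 2, 6]:
    t = insert(t, k)
print(to_list(t))  # [1, 2, 3, 4, 5, 6, 9]
```

The whole trick is that one balance function, applied on the way back up from the insertion, handles every rotation case; compare that to the page of case analysis a typical imperative version needs.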

Saturday, May 3, 2008

Which programming language is the most X?

Mailing lists are great places to find fruitless arguments. One of the most chronic arguments takes the form of "My programming language is the most X." Here X may be "object-oriented", "functional", or even something much vaguer like "elegant".

When using terms like "object-oriented" and "functional", it is good to have somewhat agreed-upon definitions. A characteristic that applies to everything applies to nothing. This is why I get a little frustrated when I read arguments like "C is object-oriented because you can create tables of function pointers to simulate polymorphism" or "C is functional because you can pass a pointer to a function." Well then, so is assembler, I guess.

However, almost as annoying are arguments within the group of languages that are widely agreed to have the specified trait. Consider functional languages. I think present-day functional enthusiasts would agree that Haskell is an outstanding example of a functional language. It has first-class functions, lambda abstraction, and higher-order primitives like map, foldr, foldl, etc. However, things begin to go awry when Haskellers start conflating features of the language that, virtuous as they may be, evolved after the functional paradigm became widely recognised. For example, take single assignment. Clearly single assignment has many benefits. But it is hard for me to accept that the mother of all functional languages, Lisp, is not functional because it has setq. The same is true for other traits like currying, closures, type inference, lazy evaluation, etc.

What gets lost in the quest to define a language as the most X is the much harder question of why the traits one deems essential to an X language are important in the first place. In which circumstances will such a trait lead to better software, and in which will it simply provide rope to hang oneself?

The distinction between strict and non-strict functional languages is a good case in point. In a strict language the arguments passed to a function must be fully evaluated before the invocation (ML, Scheme), while non-strict languages can support lazy evaluation (Miranda, Haskell). Is a non-strict language better than a strict one? Should languages support both? Are such questions even answerable?
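To make the distinction concrete, even a strict-by-default language can simulate non-strictness on demand; here is a small Python sketch of my own (not tied to any of the languages named above) where an "infinite" structure is only evaluated as far as a consumer demands:

```python
from itertools import count, islice

naturals = count(0)                   # conceptually infinite: 0, 1, 2, ...
squares = (n * n for n in naturals)   # a generator: nothing computed yet
first_five = list(islice(squares, 5)) # demand forces exactly 5 evaluations
print(first_five)                     # [0, 1, 4, 9, 16]
```

In a strict language the definition of squares over an infinite list would never terminate; with laziness (here, generators), evaluation is driven by demand.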

I think these questions are important and are answerable, but the answers involve very deep considerations of problem domain, proofs of correctness, type theory, compiler design, and even the limitations of human ability to reason about software. Human knowledge in these areas is not extended by arguments about what's more X.

Monday, April 21, 2008

The Most Creative Stage in the Software Lifecycle?

Not too long ago I was working for a consulting company that prides itself on only accepting new development projects (as opposed to maintenance of existing systems). New development has an immediate appeal, no doubt. One would naturally expect that the blank slate that comes with building a brand new system would afford its builders the most freedom of expression, and hence the most opportunity to be creative. I take it as a given that all talented individuals want a chance to be creative.

When I look back on my own career, I see that I have spent approximately 50% of it working on brand new development and 50% on development within a system that was in place before I arrived. When I map these periods onto the periods where I felt most fulfilled as a developer, I notice that working on a completely new system was a poor predictor of happiness. It seems I was often more creative doing so-called maintenance. How can this be? Am I a ditch digger deep down inside?

Before I move forward, let me distinguish between two types of maintenance work. The first kind is on a system that is at the end of its life or is no longer considered strategic. Such a system must be maintained simply because the company still needs it in some capacity (the new system is not ready, the business is changing but not ready to abandon the income supported by the old, or that part of the business is still important but has no growth potential). The second kind is on a system that has existed for some time but is still considered strategic and is still evolving at a rapid pace. For the remainder of this essay, assume I am talking about the latter (if there are any readers out there who find joy in working on the first type of system, well, more power to you).

I maintain that there is often (and I mean very often) more opportunity to be creative evolving a strategic legacy system than there is on a clean slate system.

The first characteristic of clean-slate development is great uncertainty. The customers are often not quite sure what they want. The development is as much an experiment as it is a development process, except no one is going to call it a prototype, because that is what was done six months ago by someone using Visual Basic + Excel, before the team of 12 expensive new consultants was hired! It is extremely frustrating to be told you are not building a throwaway when 60% of the system requirements are ill defined!

The second characteristic of clean-slate development is great time pressure. You have a new development team that is often unproven and has perhaps never worked together before. Everyone is anxious to prove their worth, while the management team (possibly new as well) is anxious to show their bosses or clients results. High-pressure work is the least conducive to creativity!

The third characteristic of new development is new technology. After all, who wants to develop something new with something old? Often the catalyst for a new development project is a new technology that promises higher productivity at lower cost (C++ drove new development of systems originally written in C, Java did the same to both C and C++, .NET did the same to C++ and possibly Java, and the cycle is sure to continue). However, new technology brings its own uncertainty and its own creative impediments. It is tough to be creative from a problem-solving point of view while simultaneously discovering best practices for the new technology. Inevitably you get one, or often both, sides of the equation wrong!

Now contrast the so-called legacy system development.

The working system is an operational specification. If you need to enhance something, you know what parts are supposed to change and what parts are supposed to remain the same. There may be uncertainty about the new, but you are at least grounded in the experience of those who developed the old. Lessons have already been learned, and those lessons are present in unambiguous code. Sure, it may be spaghetti, but even buggy spaghetti code is more deterministic than a customer who is only 30% sure of what he wants!

The system (at least the part that exists) is presumably working to some degree. Perhaps it is too slow or perhaps it is too complex and sometimes it may crash but it basically works and supports the business. There are several highly creative acts that can occur in this context:
  1. Figuring out how to make the system faster without breaking it.
  2. Figuring out how to make the system more maintainable or flexible.
  3. Reducing complexity while increasing robustness.

Very often these tasks have time pressure associated with them, but the saving grace is that there is a working system to fall back on, and it is unlikely that the really creative problem solving that occurs during 1-3 happens under the glare of upper management.

Finally, you are released from the burden of new technology. This may sound very unappealing to you youngins out there, but believe me, working with older technology can be very rewarding as long as it is not too old! I think the paradigmatic examples here are C++ for programming and SQL for databases. Both are rather ancient by computer standards, but both are still evolving and have deep knowledge bases of best practices associated with them. Java has probably reached equivalent maturity at this time. The beautiful thing about being an expert in C++, Java or SQL is that you have already paid your dues and can focus on the problem at hand and not the tool. Readers who have a hobby like carpentry or photography know what I am talking about. If you buy a router (I mean the kind for making fancy wood carvings) or a newfangled lens for your camera, you know you are going to spend significant time learning to use it before you actually enjoy using it.

Now, there are exceptions to everything, of course. I have had tremendous joy doing clean-slate work, but that was because the work was either on a very small team of very talented people or on a project that was understood by management to be more R than D. The real point of this article is to remind everyone that rewarding creative development often comes wrapped in quite unexpected packages.

Sunday, April 20, 2008

Kindle Back in Stock

I'm cautiously optimistic again now that the Kindle is back in stock and Bezos's letter to shareholders is all about the Kindle and future innovation at Amazon. Fingers crossed.

Wednesday, April 16, 2008

Erlang is WTF?

As you know from a recent post, I am not into language bashing, but I recently came across a witty one-liner that sort of resonated with me: "C is fast, Ruby is beautiful and Erlang is WTF?" I might substitute a functional language (like Lisp or Haskell) for Ruby, but the basic message is a valid one. Erlang has some cool ideas but they are packaged into a slow and rather ugly (syntactically) implementation.

Tuesday, April 15, 2008

Cargo Cults vs. Western Religion

Recently I came across an amusing phenomenon: Cargo Cults. A cargo cult is a religious movement that occurs when a primitive tribal culture comes in contact with a technological culture for the first time. These tribes come to believe that the technological artifacts of the encroaching culture really belong to them and they engage in rituals to coax their gods into giving them these material goods.

It is easy to laugh at these primitive fools but are modern western religions any more logical? Modern western religions (Judaism, Christianity, Islam) are grounded in the beliefs of humans who lived roughly 4000 years ago. The people of that time had significantly more technology than some of the tribes that form Cargo Cults but they were still very primitive by modern standards. One similarity is that western religions emerged either out of oppression or other hardship or as a result of an individual or individuals with delusions of grandeur. A primitive tribe must likewise feel disenfranchised when modern people show up on the scene with their wealth and gadgetry. The shamans (individuals with delusions of grandeur) of these cultures probably feel rather inadequate when their juju beans don't measure up to rifles and ocean freighters. But, why laugh at the very human reaction of the Cargo Cults and not at the founders of our "great" Western religions? It is really difficult for me to see the fundamental difference.

Intelligence = Prediction

The field of AI has always been a big draw for me. Understanding the processes that yield intelligent behavior has to be up near the top of the list of the BIG problems (up there with the origins of life and the origins of the universe).

However, the underlying premises of mainstream AI research during the 80's (and even today) have always struck me as wrongheaded. First Order Logic is useful if you want to create software that behaves as if it has intelligence in a limited domain, but is behavior really intelligence? Turing thought it was (the Turing Test) but many philosophers disagreed, and this led to much heated but ultimately worthless debate about whether Chinese rooms can be intelligent, and the like.

Back in my graduate school days I wrote a thesis arguing that intelligence needs to be built on a foundation more akin to computer simulation than to logic based inference. My thinking was that simulation drives the ability to make predictions about what will happen in the world and predicting what will happen is a prerequisite to applying rules to act intelligently. Prediction precedes inference.

It was thus very refreshing to listen to Jeff Hawkins' talk at TED. He too argues for a definition of intelligence based on prediction. His research is focused more on new types of memory architectures than software but I think his work will be the foundation for the kinds of software solutions that will one day give us a HAL.

Thursday, April 10, 2008

How do you know when you have mastered a new programming language

This is a minor continuation of my thoughts from The Law of the Excluded Middle does not apply to Programming Languages.

One sure sign that you have mastered a language is when you can create a fairly comprehensive list of what sucks about the language (while simultaneously appreciating why some of this suckiness is a necessary evil).

When you first meet a new language that floats your boat, there is a tendency to fall in love. This happened to me not too long ago with Erlang. You think, "This language is great. Such and such is so hard to do in Language X and look how easy it is in Language Y."

Well, much like in the real world of human relationships, you don't really know what love means until you get married! When you get married to a language (commit to developing a multi-year, non-trivial system in it) then your love is surely tested. You learn about the language's warts and its tendency to leave its socks outside the hamper, squeeze the toothpaste from the top and leave the toilet seat in an inconvenient position!

If you still love the language, warts and all, then you have some of the necessary (but not necessarily sufficient) hallmarks of a master. Either that, or you have a really good therapist.

p.s. Erlang and I broke up but we are still friends! On a happier note, my long term mistress (Mathematica) and I are really making sparks fly! Hope C++ doesn't catch us.

The Second Law

Everything the educated layperson and puzzled student needs to know about one of the most important principles in physical science can be found at Prof. Frank L. Lambert's web site. Ten pages of low entropy enlightenment!

Wednesday, April 9, 2008

Staying Young

There is a large laundry list of characteristics that separate youth from old age but there is one characteristic in particular which has been poking around in my mind today. I think this characteristic is the most important because it applies to other entities, like companies, as much as it applies to people. And it is a characteristic that both companies and people can change about themselves.

I think one of the quintessential traits of youth is a tendency to focus on what can go right while old age focuses on what can go wrong. Sure, there are differences between individuals, some being optimists and some being pessimists. But, there is definitely a trend to become increasingly pessimistic with age. This is, of course, not unexpected. As we get older we have potentially much more to lose.

This same tendency applies to young versus old companies. Google versus Microsoft is an obvious comparison (as was Microsoft vs. IBM back in the day). Two young Turks with a few servers and an above-average search engine have nothing to lose. Naturally they will eat, drink and sleep what they can do to turn average into insanely great. In contrast, a ~20-year-old company that dominates the market for PC operating systems and application software has plenty to lose. A great deal of company resources must go into thoughts of protecting that turf.

Completely ignoring what can go wrong is called irresponsibility. No middle-aged person or successful company can afford to focus 100% on the rosy scenarios. The question for both individuals and company leaders is "what do you want the dominant attitude to be?" The answer defines your age, your culture, your prospects for growth and ultimately the number of years you have left in this world (barring unforeseen tragedy).

I am presently working on a project where we are about to embark on some uncharted territory and it is pretty exciting. I am contractually prohibited from elaborating but I will say the following. The most frustrating aspect of working on this project is the overall dominating focus on what can go wrong. It is truly unbearable sometimes and even borders on the absurd. It should not surprise anyone that the work I am doing is for an older company.

So here is my 2 cents. You only have so many hours in a day and so many brain cells to occupy with thoughts. Be vigilant and make sure that at least 51% of your limited resources are focused on success and improvement. You'll feel a lot younger.

Tuesday, April 8, 2008

The Law of the Excluded Middle does not apply to Programming Languages

While reading Herb Sutter's blog I found a link in the comments to yet another rant about C++ and why this particular programmer does not use it any more. When are programmers going to wake up and stop this meaningless commentary about the merits of Programming Language A versus Programming Language B? When it comes to programming languages, the Law of the Excluded Middle does not apply. The statements "C++ sucks" and "C++ is insanely great" are equally true. In fact you can say the same about Java, Perl, C#, etc.

It is not just that these statements can be true for one programmer and false for another. They can both be true for the same programmer on the same exact day. In fact, at this very moment I am compiling some C++ code and am of the opinion that C++ sucks and C++ is great. I think most programmers know what I am talking about. When a language does what you want or allows you to squeeze an extra uSec out of some code that needs said uSec squeezed, it feels insanely great to have the privilege of working in that language. When a language gives you a completely cryptic page full of compiler errors before you squeeze out that uSec, it really sucks.

This is really all that needs to be said to end all future language wars but, like normal war, they will never stop. The average person just can't bear contradiction and this inevitably leads to conflict.

Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes.

-- Walt Whitman.

Saturday, April 5, 2008

Encyclopedia of Integer Sequences

I found this site via one of my connections on LinkedIn. It is not the most intuitive web site in the world if you merely want to browse, but it is rather unique and its search feature is probably useful to anyone doing mathematics research.

Thursday, April 3, 2008

An Open Letter to Jeff Bezos

Re: The Amazon Kindle

Dear Jeff,

I am writing to express my utter disappointment in Amazon's apparent lack of commitment to the Kindle. I am hoping my concerns are unfounded and that Amazon will do something in the near term to demonstrate that.

Let me explain. The day the Kindle was announced, I was ecstatic. I ordered it without any hesitation whatsoever. When asked by friends what I thought about the device, I sang its praises. You can find my review on your web site here. And here is an excerpt from an email I sent to my associates.
I read and buy a lot of books. I want the latest books and I don't have time to go to the library, and when I do I always pay fines for returning them late. Anyway, I like to write in my books so the library is out of the question. When Amazon offered their Amazon Prime service I jumped all over that because it would save me a ton in shipping costs and I could have the book I wanted in two days.

But the Kindle for me is 2 orders of magnitude better than Amazon Prime because:

1) I can have the book instantly.
2) The book is cheaper.
3) I am seriously running out of shelf space.
4) I can't carry more than 3 physical books with me on my 1.25 hour commute from beautiful Oyster Bay.

So for me Kindle is not a device it is a SERVICE (yes I know I sound like Jeff B. but it is true).

Here is what this SERVICE gives me:

1) Instant access to all the books I buy henceforth.
2) The Wall St. Journal first thing in the morning (where I live you're lucky if it gets there at 9:30 AM).
3) Instant access to a book I want to buy and a discount to boot.
4) Ability to search my books.
5) Ability to add notes.
6) All of this (including my notes) backed up on Amazon.
7) I can hold the Kindle in one hand and turn the pages while simultaneously drinking coffee with the other hand while standing in the subway (THIS IS HUGE)!
8) I can read the WSJ without feeling like a complete idiot because I can't figure out how to turn the giant pages without whacking the girl sitting next to me on the train (THIS IS REALLY HUGE).

However, it's been 5 months now and the Kindle platform seems to be stalled. Here is what I perceive as wrong:

1) The Kindle still shows itself as sold out!
I realize this does not mean you have stopped producing and shipping Kindles. But to be in a perpetually sold-out state for 5 months is not good. It may seem good to some but I am suspicious. The iPod is the most successful device in terms of sales in a long time and I don't believe Apple ever listed it as sold out on their web site. No matter how many iPods were purchased, Apple was committed to manufacturing more. There is no way you will have me believe that you are selling more Kindles than iPods, so why can't you keep any in stock?

2) There are a huge number of books I still have to buy as paper.
Why are you not being more aggressive in making deals with publishers like O'Reilly? I am told by my contacts there that your terms are too difficult. Open up your wallet and allow the publishers to make a buck, and the Kindle will be one of the most successful devices of all time (despite Jobs's pronouncement that no one reads anymore).

3) Why no software updates?
This device is far from complete! There are so many features you left out. Here is my list. I realize not all these features would be money makers for Amazon but some new features would show commitment to the platform.

a) Define my own short cut keys
b) Simple PDA functions like a calendar and todo list
c) A feature to turn on and off the wireless at specific times of the day to save battery. For example, I would turn mine on in the morning to receive the WSJ and then off.
d) Email client (besides gmail).
e) Better way to organize books such as user folders, short cut links, etc.
f) A customizable home screen (RSS feeds, weather, etc.)
g) A music player worth using!

I can go on and on.

Please tell me some of these things or better ones are in the works. Please tell me you did not get me all excited about the e-book revolution for nothing! Please stop being so secretive about sales figures. Please re-kindle my enthusiasm!


Sal Mangano

Tuesday, April 1, 2008

Top 10 Foolish Beliefs

In honor of April 1st, traditionally April Fool's Day in the US and some other countries, here are my thoughts on the top 10 foolish beliefs of all time.

10. Perpetual motion is possible.

9. The Earth is Flat

8. My religion is correct, all others are wrong.

7. Dianetics (Scientology) is not a cult.

6. There is a Devil who lives in hell.

5. Astrology is a valid methodology for understanding your life.

4. Numerology has something relevant to say about our personal destiny.

3. Noah's Ark existed and had two of every animal on board.

2. Intelligent Design is a rational model of the origin of life.

1. God (as in an intelligent animate being) exists.

You may wonder why the Devil is only 6 while God is 1, besides the obvious numerological reason :-) . I justify this ranking based on the observation that there is much more anecdotal evidence for Satan than God!

Friday, March 28, 2008

Big Dog!

Here is the kind of intelligent design I can really get behind!

Busy Beavers

The most embarrassing memory I have about my intellectual development as a computer scientist was the day I decided that maybe Turing got that halting problem thing wrong. I know, I know, what was I thinking! Blame it on youth. But I also blame it on a gap in my education on the theory of computation. See, I was taught about Turing Machines (TM) but was never exposed to Busy Beavers (BB) until much later.

My thinking at the time was that Turing pulled out a very contrived type of TM in his halting problem proof. He made the TM self-referential to achieve the proof. So my thinking was that perhaps the only class of TM that could evade a halting-detection algorithm was the class offered by Turing in his proof. I thought it was possible for the vast majority of other TMs to be shown to halt algorithmically. My thinking went as follows:

  1. A TM is a deterministic device.
  2. If a TM goes through a sequence of states and at some point returns to a state already visited without halting, then that TM will never halt. Here it is important to realize I am including the tape in the state specification.

My "algorithm" for detecting a TM that did not halt was simply to run the TM in a simulator that at each step would save a snapshot of the entire state and compare it to previous snapshots to find a duplicate. If a duplicate was found, it would declare that the TM would not halt. For this to actually work, the algorithm would have to set some bound on how long it would run the simulation. I did not think this was a problem because I thought the bound was obviously a linear or at worst quadratic function of the number of possible states in the input TM. And, of course, this is where I was wrong. Had I known about Busy Beavers I would not have gone down this road.
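For the curious, my naive detector can be sketched in a few lines of Python. Everything here (the transition-table encoding, the machine names, the step bound) is my own illustrative choice, not anything from my old thesis; the point is just that the cycle check is only as good as `max_steps`, and Busy Beavers blow past any polynomial bound:

```python
def run_with_cycle_check(delta, start="A", halt="H", max_steps=10_000):
    """Simulate a Turing machine given transitions
    delta[(state, symbol)] = (write, move, next_state).
    Declare 'loops' if a full configuration (state, head, tape)
    ever repeats -- the naive non-halting detector described above."""
    tape, head, state = {}, 0, start
    seen = set()
    for step in range(max_steps):
        if state == halt:
            return ("halts", step)
        snap = (state, head, tuple(sorted(tape.items())))
        if snap in seen:
            return ("loops", step)          # revisited configuration
        seen.add(snap)
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == "R" else -1
    return ("unknown", max_steps)           # bound exhausted: no verdict

# A trivial machine that just bounces between two configurations:
looper = {("A", 0): (0, "R", "B"), ("B", 0): (0, "L", "A")}

# The 2-state, 2-symbol Busy Beaver champion, which halts after 6 steps:
bb2 = {("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
       ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H")}
```

The detector correctly classifies both toy machines, but the "unknown" branch is where my scheme died: there is no computable function of machine size that bounds how long a halting machine may run.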

It turns out that the Busy Beaver function looks something like this for a two-symbol TM:

S(5) ≥ 1.9 × 10^704

Linear indeed!

The silver lining for me is that I learned a very important lesson (besides the one about not doubting a mathematical proof by someone of Turing's stature that has withstood the scrutiny of thousands of individuals of similar stature). I learned that human intuition about the emergence of complexity from simple mechanisms is woefully poor. This lesson is repeated when we look at Cellular Automata in NKS. It's also repeated when we compute the number of possible configurations of a chessboard and compare that to estimates of the number of atoms in the known universe.

Luckily I learned my lessons while I was still quite young. Many individuals who believe themselves to be intellectuals have never learned to not always trust what they believe is intuitively reasonable. That's not so bad, except they use the web to infect the gullible with their intuitions. So, be careful out there! There are a lot of busy beavers with bad intuition.

Micro vs. Macro Evolution

I suspect some of my readership may be getting bored with all this evolution vs. ID stuff so I promise after this post we will return to our regular scheduled programming!

A favorite ploy of creationists is to accept microevolution (e.g. Darwin's finches) while rejecting macroevolution (new species, birds descending from some dinosaurs, etc.). There is plenty of credible discussion about micro and macro evolution on the web so I am not going to repeat it here. See Douglas Theobald, John Wilkins and Wikipedia. I'd like to instead address the kind of drivel exemplified by the thousands of posts like this one. Here we see an author asking "When Did the Fish Sprout Legs?" and then denying such a leap is physically or biologically possible. Here is an excerpt:

When one examines the historical record of life, we find the absence of transitional forms between the major life groups such as fish and amphibians or reptiles and birds. The fossil record has failed to yield the host of transitional forms demanded by the theory of macro-evolution. Rather, the fossils show an abrupt appearance of very distinct groups of animals. Take, for example, the supposed "fish-to-amphibian" transition. The general assumption has been that the earliest amphibians evolved from the order of fish, the Rhipidistia. However, there are major differences between the earliest assumed amphibians, the Ichtyostega, and its presumed fish ancestor. The differences are not simply a few small bone changes but are enormous structural differences as can be seen in Figure 1. The first amphibian had well-developed fore- and hind limbs which were fully capable of supporting terrestrial motion. The transitions between the two are strictly hypothetical, and no transitional fossils have ever been found ... only imagined and artistically drawn. The mechanism for the supposed macro-evolution of the fish to the amphibian is purely hypothetical.

When I was a boy my family used to picnic at Westbury Gardens in Long Island. There is a large pond there where I used to love to catch frogs to take home. There was also a shallow area where there were steps leading into part of the pond. Around these steps swam hundreds of tadpoles. One day I decided it would be really cool to capture some tadpoles and take them home to watch the transition of a tadpole into a frog. So I caught about a dozen tadpoles and took them home and placed them in a fish tank. I waited and waited but they never turned into frogs. Clearly I did not provide them with the right environment and nutrients to allow this transition to occur.

Can we learn anything at all from my boyhood escapade? Well clearly I am not going to claim that the transition from tadpole to frog is an example of macroevolution at work. Clearly the transition is preprogrammed and does not involve any mutation or selection. But here is what is interesting and very instructive:
  1. A tadpole looks far more like a fish than it does a frog.
  2. Everyone knows that tadpoles do sprout legs and become frogs given the correct conditions.
  3. We also know that the transition from tadpole to frog is not instantaneous and each intermediate form is viable.
  4. We learn from my experiment that given the wrong environment a tadpole will remain a tadpole and eventually die.

So in a time frame far, far shorter than any timescale on which macroevolution occurs, we see a fish-like thing turn into a frog. Fascinating really. What is fascinating is not that this is a proof of macroevolution. It is not. What is fascinating is that there is a stable trajectory through genotype space that leads to a stable trajectory through phenotype space that manifests itself as a fish transforming into a frog. The mechanisms by which genes switch on and off in the case of tadpoles are based in regulator genes, enzymes, etc. and not mutation and selection. But so what?

If you accept micro-evolution, whereby selection and mutation lead to small changes in form and you witness for yourself a purely biological process whereby a rather large morphological change can occur in a span of weeks, how can you not at least admit to the possibility of macroevolution? Oh right, it’s not in the bible. Sorry, I forgot.

p.s. I just found similar ideas by someone much more qualified than myself. Definitely worth a read.

Wednesday, March 26, 2008

The Fundamental Principle of Science

What is the fundamental principle that distinguishes science from non-science? I have been thinking about this a lot lately. I am somewhat familiar with the vast literature from the philosophy of science which speaks to this question (e.g. Popper, Kuhn, Feyerabend). Popper is best known for the "falsifiability criteria". Kuhn is famous for his "paradigm shifts". And Feyerabend insisted that science not follow any method whatsoever, lest it somehow restrict itself.

I buy into pieces of each of these philosophies, yet I feel compelled to think about a principle that would resonate with almost every practicing scientist. To me, the principle can't be as simple as "the scientific method" or the use of mathematics. Much scientific progress happens outside the confines of rigorous method and rigorous math. Scientists can't escape the fact that they are ultimately human, and as humans they succumb to emotion, prejudice, and turf wars. They use rhetoric as much as they employ differential equations and statistics.

Ultimately, despite temporary deviations from method and rigor, all true scientists buy into the principle of Occam's razor. No matter what mode a scientist is presently working in, he or she is guided by a quest for simplicity. This does not mean the path to simplicity is always a straight line.

Most computer programmers, like myself, are also on a quest for simplicity. We call code "elegant" when it achieves great feats while remaining simple. However, most programmers don't regularly write elegant code; we just know that when we do it is the most satisfying experience imaginable. Likewise, most science does not start out elegant but it is constantly seeking this state. Science is looking for the simplest rules, laws and equations that explain the most observations, dispel the most mysteries and lead to the most new discoveries.

Occam's razor is the essence of Science.

This, more than anything else, is why the vast majority of practicing scientists reject pseudoscience (like Intelligent Design, Astrology, Numerology and the like). For example:

Intelligent Design: how could an explanation that requires the preexistence of a designer before the designed be the simplest explanation? Isn't it simpler to assume the non-circular premise that intelligent life does not depend on the preexistence of someone more intelligent than the life whose origin requires explanation?

Astrology: how can the position of planets, whose gravity is too weak to even move a feather on Earth, be the simplest explanation for any given human's life story?

Numerology: how could the letters of one's name, which are arbitrary artifacts of the evolution of language, have any bearing on a person's fate? Isn't it simpler to imagine one's fate is tied to a combination of heredity, environment and chance?

Of course, a believer in god would counter that his system is the simplest. You presume god and everything else follows. How can you get any simpler!?! It is of course at this point where any hope for intelligent discourse ends and the scientist and the faithful must part ways.

Tuesday, March 25, 2008

A Lesson in the Process of Science.

I realize that I have been posting quite a bit about education and the evils of teaching creationism. These topics are a bit off topic for this blog but they are important in light of the fact that (a) this is an election year and (b) a new creationism propaganda film starring Ben Stein is about to be released.

This film, and creationists in general, claim that "big science" is stifling other viewpoints and that doing so is anti-scientific. However, this position has as much legs as the theory of creationism itself (that would be none).

Allow me to illustrate how science actually works by considering another area that is not as emotionally charged as the origins of life. Let's consider physics and in particular Quantum Mechanics (QM). I am inspired to write this by a recent article in New Scientist titled Quantum Randomness may not be Random.

As most readers are probably aware, the meaning and interpretation of Quantum Mechanics was hotly debated during the birth of modern physics (~1880-1930). The two most famous individuals at the heart of this debate were Albert Einstein, with his position best immortalized in the "God does not play dice" quote, and Niels Bohr, who argued for the abandonment of all notions of causality at the quantum level. Bohr's viewpoint became known as the Copenhagen interpretation and it ultimately became the dominant viewpoint of physics and the one that the vast majority of physicists accept today. In fact, this interpretation of QM has the same status in physics as the Theory of Evolution has in biology.

The first point to be made is that during the evolution of modern physics there was certainly room for multiple viewpoints and these viewpoints were hotly debated. But these debates always followed a process of science, which begins with the presentation of facts and uses logic and mathematics to reach conclusions. Of course, scientists are humans, and a certain degree of emotion and bullying comes into play as well, but nothing is settled using these devices. They are only a backdrop to the human saga that is science. However, this is not what is truly instructive.

Fast forward to 2008. Quantum Mechanics is the most successful theory in the history of physics and its equations are responsible for so much innovation in the modern world. Truly, QM has earned the right in physics to be untouchable dogma. Certainly any respectable physicist who would dare question the Copenhagen interpretation would be the laughing stock of his profession and his career would be ruined. Certainly the proponents of Creationism would have you believe that this is how science works. But they are wrong.

In the New Scientist article we learn that a respected physicist from Rutgers, Sheldon Goldstein, is trying to revive an older interpretation of QM called the Bohmian Model, after David Bohm. The details are not as important as the moral. Goldstein is not being mocked by physics (even though his views are squarely in the minority) because he and his peers question the dogma of QM on scientific grounds. He presents mathematical and logical arguments. When his peers raise objections he does not scream foul or prejudice but rather talks about possible experiments. He does not dismiss his peers' arguments by arguing in circles, nor does he draw on sources of mysticism that lie squarely outside of science. Goldstein and others can question Big Science while remaining well grounded in the processes that have always defined the way science operates.

Creationists don't play by the rules of science but want the respect of scientists. They propose arguments which draw on misrepresentations of thermodynamics, but when they are called out on this they jump to other arguments equally fallacious. It is not so much the argument of design that disturbs most scientists; it's the lack of logical and consistent reasoning that pervades all of ID.

I doubt many proponents of ID read my blog but if there are any out there, allow me to suggest the following analogy. Imagine a scientist walking into your church this Sunday and saying, "Listen, all you Christians, your whole process of worshiping Christ and interpreting the bible is wrong. You should interpret Matthew like such and such and Paul like this and that." Wouldn't you be furious? What right does a heathen have to tell your preacher what the bible means? How dare he! Well I say to you, "How dare you! How dare you come into the house of science and tell it how it should be. By what right?!? Please leave immediately! ... But if you'd like to drop a small monetary donation on the way out, we'd gladly accept!"

Sunday, March 23, 2008

Intelligent Design Indeed.

Here it is in a nutshell why teaching ID in schools will create a country full of boobs (I mean more than the number we already have sitting in pews).

Saturday, March 22, 2008

Interval Math

While doing research for the Numerics chapter of my forthcoming Mathematica Cookbook I came across a site devoted to research on Interval Math. Interval Math is an approach from the domain of Numerical Analysis that deals with the fact that all measurements are imprecise. Instead of representing measured values as single numbers, it defines all mathematical operations on intervals.

Mathematica (as of version 5) supports real (but not complex) interval math, where intervals take the form Interval[{min1,max1}...]. All of the typical mathematical operations and functions are defined for intervals.

Interval math is important for computer systems that must act intelligently in the real world. All sensors are approximate. This is true for man-made devices as well as for our own eyes and ears. If a sensor on a robot returns a particular value there is always an inherent error. Rather than dealing with errors by sampling and averaging, interval math allows the error to be represented directly in the values that enter downstream computations. This means all intermediate results track the propagation of errors from multiple sources to yield better information. There also seems to be a relationship between interval computation and fuzzy sets, but I have not located any resources except on paid content sites.
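To make the idea concrete, here is a toy sketch of interval arithmetic in Python. This is my own minimal illustration, not Mathematica's Interval (which also handles unions of intervals and rounds endpoints outward to stay safe under machine arithmetic), but it shows how error bounds flow through a computation:

```python
class Interval:
    """A closed real interval [lo, hi]; a degenerate Interval(x) is exact."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (hi if hi is not None else lo)

    def __add__(self, other):
        # Sum of intervals: add lower bounds and upper bounds.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtracting an interval flips which of its endpoints matters.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Products of signed intervals: take the extremes over all
        # endpoint combinations.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# A sensor reading of 10.0 with +/-0.1 error, squared: the error
# widens from a 0.2-wide input to a 4.0-wide result.
r = Interval(9.9, 10.1)
area = r * r   # roughly [98.01, 102.01]
```

Notice that the downstream result carries its own error bounds automatically; no separate error-propagation bookkeeping is needed, which is exactly the appeal for robot sensing.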

It seems that although the study of interval math began in the US, it is largely forgotten here, while in Germany there are conferences devoted to it and it is part of the qualifying exams for studies in numerical methods.

Some of the less technical resources on the earlier mentioned site are this introduction, an article from American Scientist and even a movie.

Thursday, March 20, 2008

I was going to vote for John McCain...

John McCain pretty much had a lock on my vote for the 2008 election. The purpose of this blog is other than politics so I am not going to go into why I thought he was the best candidate. Instead I would like to discuss why I may have to change my vote.

The issue is "Intelligent Design", AKA "Creationism". Apparently McCain's view on the teaching of evolution and the teaching of creationism is that each is a point of view and each point of view should be taught.

Well, Senator, Astrology and Numerology are points of view. Should we teach them beside Astronomy and Mathematics? Phrenology is a point of view. Should we teach it beside neuroscience? I sincerely hope the senator would have the common sense, even though he says he is not a scientist, to see that "points of view" and "science" are not the same thing. Points of view don't cure disease, solve problems in physics, help design the next generation of computers, launch a spaceship, etc. A point of view is not a scientific criterion. Scientists follow a process, and within the boundaries of that process there can be different "points of view". "Intelligent Design" does not follow the process of science. This has been well established, so it would be silly to repeat the points here.

Can McCain be convinced to abandon this position? Just to get my vote, probably not, but I think it's time for a little grass-roots action in the states McCain must win to become president. There must be enough rational folks out there to help convince the senator to abandon his foolhardy stance.

Tuesday, March 18, 2008

In Dedication To Arthur C. Clarke 1917-2008

You made me fall in love with AI.

You made me become a fan of Science Fiction.

You (with Kubrick's help) sent shivers down my spine at the sight of the black monolith.

You had a vision for what 2001 could have been had mankind not squandered its resources trying to kill each other.

I majored in Computer Science partly because of you.

You will always be alive because you live in the minds of your fans and will one day live in the mind of HAL.

Sunday, March 16, 2008

The Problem with Mathematics Education

There are numerous essays and newspaper blurbs lamenting the poor state of mathematical education in the US. Here is a typical example: Presidential panel bemoans state of math education.

What I see as the problem is that advanced mathematics is introduced in language that is unfit to inspire any but the few that were genetically destined to be mathematicians (or physicists).

Ask a recent college grad what an eigenvalue or eigenvector is. I give you 100:1 odds you'll get a blank stare. Okay, now ask them to read this explanation from a popular math web site. I bet their face will be even blanker. Now ask them to read this wonderful little explanation. Chances are the lights will come on.

This is not to say that the latter explanation will allow a person to do the math. But this is certainly where math education, even at the highest levels, should begin: illustrate why the problem is important and give a sensory picture to go along with the abstractions. Some might believe that this is how most mathematicians teach, but that is simply not the case. Mathematics is a very macho profession, and many mathematicians believe it's beneath them to offer intuition prior to rigor. The sad truth is that many of them could not come up with compelling intuitive explanations even if they wanted to. It was not the way they were taught either.

Saturday, March 15, 2008

Mathematica on LinkedIn and on a Wiki

This is my third post today (penance for not posting for so long!)

I recently started a LinkedIn Group called Mathematica Users Group. If you are a member of LinkedIn you can join the group by clicking here. After creating the group I thought it would be cool to have a Mathematica Wiki and soon discovered that Luc Barthelet thought this was a good idea too but thought of it a few years earlier than I did!

Semantic Wikis

I attended a Semantic Web Meetup this past Thursday (Mar 13) where the topic was Semantic Wikis. Although the presentations were not as focused as I would have liked, the topic is an interesting one. The two talks focused on Semantic MediaWiki, and the presentations can be found here and here.

Semantic MediaWiki is an extension to MediaWiki, the wiki engine that powers Wikipedia. The basic idea is that the wiki is backed by an underlying triplestore (product example). Triples model subject, predicate, and object relationships (for more Semantic Web background see this, this and this).

The problem with a regular wiki is that the information is largely unstructured. Some may argue this is a feature, and there is something to the argument that the popularity of the wiki stems from not forcing authors to use cumbersome syntax to structure the data for the benefit of computers. However, this lack of structure makes the information in a wiki hard to re-purpose and also makes wikis harder to maintain (consider the fact that there is no automation in Wikipedia to keep lists like this one in sync with new pages).

Semantic Wikis solve this problem by tagging data with known relationships that the computer can automatically leverage to cross-reference, collate and re-purpose data.
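A toy triplestore makes the payoff obvious. The following Python sketch (my own illustrative data and function names, not Semantic MediaWiki's actual query language, which is closer to SPARQL) shows the kind of structured query a plain wiki cannot answer automatically:

```python
# A toy triplestore: a flat list of (subject, predicate, object) facts.
triples = [
    ("Mathematica", "developedBy", "Wolfram Research"),
    ("Mathematica", "category",    "Computer algebra system"),
    ("Maxima",      "category",    "Computer algebra system"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "List every page categorized as a computer algebra system" -- the sort of
# cross-referencing that keeps category lists in sync without human editors.
cas = [s for s, _, _ in query(p="category", o="Computer algebra system")]
print(cas)  # ['Mathematica', 'Maxima']
```

Add a new page with the right triple and every such derived list updates itself, which is precisely the maintenance burden a plain wiki leaves to volunteers.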

I think this idea is a natural progression of the Wiki concept but it remains to be seen if Semantic Wikis ever reach a critical mass comparable to Wikipedia. My personal view is that the work of organizing mounds of textual information needs advances in computer processing (AI) and that only a select few fanatics will engage in "tripling up the web" manually. Although, when it comes to web trends my crystal ball has been rather clouded.

Readers of my older posts know that I have proposed similar ideas under the moniker WISDI. I am still interested in the WISDI idea but circumstances have forced me to turn my attention elsewhere for the near term (I'll update readers in future posts).

Ultimately, triples are just a syntax for the logic of relations (which is not even first order logic) so, to me and many others, the Semantic Web initiative is using really low fidelity tools to attack a high fidelity problem. However, in the agile spirit of "the simplest thing that can possibly work" they may achieve a more usable and reusable web in the near term.

Nested Dreams

I have always found dreaming a fascinating subject. I think dream research is key to unlocking data about consciousness. However, it is one of those areas of research where there is a very low signal to noise ratio.

In the past year I have had a very vivid class of dream that I don't recall ever experiencing earlier in my life. I don't know if there is a technical term for it, but I call it a "nested dream". This is a dream that I apparently wake up from and reflect on, when I am, in fact, waking up into another dream. I am not talking about simply transitioning from one dream to another, but dreaming, then dreaming about waking up from that dream, with things occurring in the new "dream stack frame", and then ultimately really waking up and remembering details from both frames.

I use the notion of a stack loosely since there is no remembrance of pushing down from dream 1 to dream 2 but rather there is the remembrance of popping out of 2 and into 1.

Has anyone experienced a similar kind of dream?

Saturday, March 8, 2008

Prof. Ray C. Dougherty's Research

A few weeks ago I attended a Wolfram Research event called Mathematica Publishers Day. The goal of the event was to highlight the capabilities of Mathematica 6 as a platform for technical publishing. I really enjoyed this event but was also pleasantly surprised by a talk that did not quite fit into the overall theme of the event but nevertheless was quite fascinating to me.

The presenter was Prof. Ray C. Dougherty, NYU Linguistics researcher. He used Mathematica to model all possible sine wave based communications systems. The presentation is available via Wolfram. Unfortunately, as with many interesting presentations, you needed to hear the talk to get the most out of it. Here are some interesting excerpts that I remember:

  • The cochlea computes the second derivative of the auditory input.
  • The most mathematically complex communication system is one where the transmitter and receiver have the same anatomy (e.g., wings of insects).
  • Bats can hear phase changes because they can rotate their ears. A human cannot hear a change in the rotation of a tuning fork, but a bat can.
  • Prof Dougherty believes he has a Chomsky generative grammar that enumerates all possible animal communication systems.
  • He also believes he can map each possible system onto the integers in a natural way.
  • From this he concludes that evolution must proceed in jumps.
  • He relates this idea to the evolution of all possible Tic Tac Toe Games to illustrate the notion that all such games are not unique and similarly the space of all possible communication systems contains many redundant systems as well.
  • He goes on to visualizing distributions of the primes to illustrate that there are systems that are not random but whose patterns are too complex for us to model in a simple fashion, and explains how this is related to the ideas in Stephen Wolfram's NKS.

Tuesday, February 19, 2008

Stephen Wolfram on Software Design and Naming

In this post Stephen Wolfram talks about design reviews and the importance of naming things correctly. I sort of envy him for working in a company where one can afford to spend a total of 10,000 hours in design reviews and sweat over the details of naming every last function. As much as I can see the merits of Agile Development, I have never worked on an Agile project that created anything nearly as elegant as Mathematica. Sure, Agile focuses on working software and "good enough" software, and the economics of most software projects makes this mode of development necessary. But it would be nice to one day work on a project where time to market was less important than the end result.

Saturday, February 9, 2008

Ubiquitous Eigenvectors and Quantum Computing

Recently I have been studying Quantum Computation (QC). If you want to get anywhere with QC you need to master Linear Algebra and vector spaces, since QC is basically an exercise in applied vector space theory.

Being a bit rusty in the topic myself, I decided to pick up the book Finite-Dimensional Vector Spaces by Paul Richard Halmos. Although Halmos is one of my favorite math authors, I picked this book primarily because it has a Kindle edition. It turns out that another one of Halmos's books, Linear Algebra Problem Book, is a far better choice for the non-mathematician. The latter book walks you step by step through bite-sized problems and provides hints (and also all the answers if you get stuck). A free resource with answers can also be found here.

One of the central mathematical techniques at the heart of Linear Algebra is the concept of eigenvectors and eigenvalues. The term "eigen" is German for "characteristic". An eigenvector of a square matrix is a nonzero vector that the matrix merely scales rather than rotates; the scale factor is the corresponding eigenvalue. An eigen decomposition expresses a square matrix in terms of its eigenvalues and eigenvectors. This decomposition is central to many problems in physics.

It turns out that the study of eigen decomposition can yield deep insight into problems that are in the realm of computer science. Consider, for instance, this paper about Google's PageRank algorithm and the Eigenface technique for facial recognition.
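The eigenvector computation underlying PageRank can be sketched in a few lines of Python using power iteration: repeatedly applying a matrix drives almost any starting vector toward the dominant eigenvector. (The 2x2 matrix below is a toy example of my own choosing, not Google's link matrix.)

```python
def mat_vec(A, v):
    """Multiply matrix A (list of rows) by vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, v, steps=50):
    """Approximate the dominant eigenvector and eigenvalue of A."""
    for _ in range(steps):
        w = mat_vec(A, v)
        norm = max(abs(x) for x in w)   # rescale each step to avoid overflow
        v = [x / norm for x in w]
    return v, norm                      # norm approximates the top eigenvalue

A = [[2.0, 1.0],
     [1.0, 2.0]]   # eigenvalues are 3 and 1; top eigenvector points along (1, 1)
v, lam = power_iteration(A, [1.0, 0.0])
print(round(lam, 6))              # converges to 3.0
print([round(x, 6) for x in v])   # converges to [1.0, 1.0]
```

The component along the smaller eigenvalue shrinks by a factor of 1/3 per step here, which is why a few dozen iterations suffice; PageRank does essentially this on the (enormous, sparse) web link matrix.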

Coming to grips with the mathematics behind vector spaces is one of the single most rewarding experiences for anyone interested in advanced problems in computer science. It is a must if you ever want to graduate from the comprehension of classical algorithms to the comprehension of quantum algorithms. However, if you are curious about QC but the thought of learning advanced linear algebra sounds like too big of a commitment, then you might want to check out Quantum Computation explained to my Mother. This is the most approachable paper I have ever read on the topic that is also mathematically accurate.