Larry Kilham Blog
As we get older almost all of us yearn for the carefree times of childhood. Despite the many problems that hem in almost every child, a child still has the almost naïve capability of unfettered imagination. Some people, very few, keep this imaginative ability through adulthood. Their imaginings lead to inventions, art, designs, and explorations of frontiers never seen before. Emotion is part of this creative formula, and perhaps the emotional element is what is hardest to reconcile in equating the human mind (or even possibly a dog's mind) to an advanced computer. Did you ever see a computer cry?
As I was leaving childhood it was the 50's and 60's. That was a carefree time bridging the national self-confidence after World War II with the hope coming out of the labs - big cars with tail fins, "Atoms for Peace," miracle drugs, electronics. The world was not ready to think about the big ecological picture except for some strange voices like Rachel Carson, who wrote so stirringly about impending ecological crises. We forgot that a major reason the Japanese entered World War II was that they thought their oil supply was threatened.
I recall the excitement when computers became useful in the early 60's. As an engineering student, I learned to program those huge machines full of glowing tubes. A coffee pot was kept warm on an equipment rack. Replacing burned-out tubes was as routine as sweeping the floor. Transistors were just coming into use and circuit chips were a figment of someone's imagination. Data was fed into and out of the machines by punched paper tape and later by the familiar IBM cards with rows of rectangular holes. The most advanced machines, accessible only to pedigreed top researchers, had less computing power than today's personal computers.
At MIT I was an assistant to Professor Franco Modigliani, who later received a Nobel Prize for his Life Cycle Theory of Savings. He would call me anytime, day or night: "Larry, it's time to run some more data!" This always seemed to happen on Saturday night, and my date would help sort punch cards into decks to feed the hungry computer. The next day, Sunday, there would be printouts to review with the professor. The data for a typical run of multiple regression modeling came from old published studies, such as United Nations data on savings propensities by age, country, and so on. Squeezed tightly, all the data for one run could be written on the back of an envelope. Professor Modigliani would review the correlations, modify his model, and I would head for the patient computer to do another run. The world at that time was ready for econometrics-based economic theory but not for major complexity.
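For the curious, here is a minimal sketch in modern Python of the kind of multiple regression one of those overnight runs performed. The variables and the handful of numbers are invented stand-ins for illustration, not the professor's actual UN figures.

```python
import numpy as np

# Toy stand-ins for the kind of cross-country figures used in those runs:
# savings rate regressed on income growth and the working-age share of
# the population. All numbers are illustrative, not real UN data.
growth = np.array([2.1, 3.5, 1.2, 4.0, 2.8, 3.1])             # % income growth
working_age = np.array([58.0, 62.0, 55.0, 64.0, 60.0, 61.0])  # % of population
savings = np.array([8.0, 12.5, 5.5, 14.0, 10.0, 11.0])        # % savings rate

# Design matrix with an intercept column, then ordinary least squares --
# the same arithmetic the 60's machine ground through overnight.
X = np.column_stack([np.ones_like(growth), growth, working_age])
beta, *_ = np.linalg.lstsq(X, savings, rcond=None)

# R-squared tells you how much of the variation the model explains --
# the sort of correlation the professor reviewed before the next run.
predicted = X @ beta
r_squared = 1 - np.sum((savings - predicted) ** 2) / np.sum((savings - savings.mean()) ** 2)

print(f"intercept={beta[0]:.2f}, growth coef={beta[1]:.2f}, "
      f"working-age coef={beta[2]:.2f}, R^2={r_squared:.3f}")
```

What took the envelope's worth of data a night on a tube machine now runs in a blink, which says something about both eras.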
A few years later, I graduated and took my first real job with a Cambridge, Mass. think tank called Arthur D. Little, Inc. One of my projects was working with an antisubmarine warfare team. We designed long-range sonar for tracking Russian submarines, and we also devised mathematical scenarios for predicting how the Russian submariners would try to traverse the oceans to the US. "He thinks that we don't know that he knows a special route free of sonar…" The thought-gaming simulations were entered via a clackety-clack teletype to a time-shared, sort-of-super computer far away. I think the net computing power was still less than today's better PCs. On at least one occasion, when a scenario became too huge or complex for computer analysis, it was simply assumed that the captain wouldn't choose to take that route!
Meanwhile, back at MIT in the 60's and 70's, Professor Jay Forrester and his colleagues produced computer simulations of many years of interacting cycles of socio-economic-resource systems of various entities, including the world. Their results were described as counterintuitive, and many of their scenarios forecast the stagnation and fall of the world's economy somewhere in the 2020-2030 era. Oil would be running out and pollution would still be rising. Subsequent simulations were published as the famous The Limits to Growth. A group of European industrialists and others calling themselves the Club of Rome tried to rally world opinion behind this study. In the 80's, much of the attention these arresting findings had drawn drifted to other causes.
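To give a flavor of what those simulations did, here is a toy stock-and-flow loop in the spirit of Forrester's system dynamics. The structure, stocks updated year by year by interacting flows, is the real idea; the equations and constants are my own inventions for illustration and nothing like the actual World3 model.

```python
# A toy stock-and-flow loop in the spirit of system dynamics. Three stocks
# feed back on one another each simulated year. Constants are illustrative.
resources, economy, pollution = 1000.0, 10.0, 0.0

for year in range(1960, 2041):
    extraction = 0.05 * economy * (resources / 1000.0)  # scarcer resources slow extraction
    growth = 0.3 * extraction - 0.01 * pollution        # pollution drags on output
    resources -= extraction
    pollution += 0.2 * extraction - 0.05 * pollution    # emission minus natural absorption
    economy = max(economy + growth, 0.0)
    if year % 20 == 0:
        print(f"{year}: economy={economy:6.1f} resources={resources:6.1f} "
              f"pollution={pollution:5.1f}")
```

Even in a toy like this the counterintuitive flavor comes through: growth feeds extraction, extraction erodes the resource base and breeds pollution, and those in turn choke off growth.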
Early in the 90's I moved back to my native Santa Fe, New Mexico, to sort of quiet down after all that heady eastern experience. There I found the Santa Fe Institute, which was working on complexity problems such as modeling evolution and stock market trading behavior. It was an eclectic group of Nobel Prize winners and broad-scale thinkers from a variety of disciplines. The Santa Fe Institute's thinking tended to be strongly mechanistic: if you had enough microbes, ants, people, or whatever, progress would happen in a statistically predictable way. By now computers had mega-capacity, so meaningful number-crunching analysis could be done of many, many micros acting as a macro.
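A minimal sketch can show what "many micros acting as a macro" means in practice: each agent below behaves erratically, yet the aggregate grows statistically predictable as the crowd grows. This is my own toy illustration, not any actual Santa Fe Institute model.

```python
import random

# Many simple "micro" agents, each drifting between buying and selling at
# random. No single agent is predictable, but the "macro" average is.
random.seed(42)

def average_holding(n_agents, steps=100):
    holdings = [0] * n_agents
    for _ in range(steps):
        for i in range(n_agents):
            holdings[i] += random.choice([-1, 1])  # each agent's erratic step
    return sum(holdings) / n_agents                # the aggregate, macro view

for n in (10, 1000, 10000):
    print(f"{n:>5} agents: average holding = {average_holding(n):+.3f}")
```

With ten agents the average swings widely; with ten thousand it tends to hug zero. That shrinking spread is the law of large numbers doing the macro-level predicting.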
But nagging questions were arising: what about free will and consciousness? Could these cause a powerful mind to jump out of the box of many micros and produce a major invention or bold new insight? Think of quantum mechanics or relativity.
Certainly, human intelligence adds a further challenge to the analysis. What is consciousness or free will? Can they be replicated by a Turing machine, or do they go beyond that mechanism? How do we deal analytically with originality, artistic expression, invention, and unrestricted wanderlust? In the short run, say over a week in the life of an ant hill or a stock market, these factors may not be significant. But in the greater sweep of history, they become key in defining civilizations and leveraging their success in forbidding environments.
Unfortunately, the great issues seem to be swept aside as almost everyone seeks answers in search engines. There's nothing wrong with search engines up to a point, but they look backwards, not forwards. The Internet mesmerizes as the Muse of Dataland. We mine heaps of data and risk overlooking the glacial shifts and exogenous shocks.
Larry Kilham is a graduate of MIT's Sloan School of Management, has received three patents, and has founded two high-tech companies. Many of his product designs required innovative use of computers, and as early as the 1960s he was researching artificial intelligence (AI).