Huzzah 2,3 Universal Computation!

This is very cool, in a way I am far from qualified to articulate: a 20-year-old undergraduate has proven that a 2,3 Turing machine is capable of universal computation. Every word I write after that will be one more inaccuracy, but nonetheless here goes: a 2,3 Turing machine is a mechanically very simple device, which can stay theoretical or could actually be constructed, as Turing and associates did, and which follows a few simple rules. What it does at each time step depends on its "state" at the last time step: in the case of the 2,3 machine, it has a "head" which can be in one of 2 states, and it reads and prints on some "paper" in 3 colours. Which state the head goes to next, which way it moves, and which colour it prints all depend on its current state and the colour already under it on the roll of printer paper. Or alternatively, any other 2-state, 3-colour machine you can imagine. If this sounds to you something like the game of life, that's because you know more about this stuff than I do. By comparison, the computer you're reading this on has many, many "states" in the logic gates of its CPU. Like millions.
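For the curious, here's roughly what simulating such a thing looks like, as a minimal Python sketch of my own. The rule table below is invented purely for illustration; it is not the machine from Wolfram's prize problem, whose universality depends on one particular, very carefully analysed table.

```python
# A minimal sketch of a 2-state, 3-colour Turing machine simulator.
# The rule table is made up for illustration -- NOT the actual prize machine.

from collections import defaultdict

# rules[(state, colour)] = (colour to write, move (+1 right / -1 left), next state)
rules = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (2, -1, "A"),
    ("A", 2): (1, -1, "A"),
    ("B", 0): (2, +1, "A"),
    ("B", 1): (2, +1, "B"),
    ("B", 2): (0, -1, "A"),
}

def run(steps=20):
    tape = defaultdict(int)      # unbounded tape, blank cells are colour 0
    head, state = 0, "A"
    lines = []
    for _ in range(steps):
        colour = tape[head]
        new_colour, move, state = rules[(state, colour)]
        tape[head] = new_colour
        head += move
        # record a visible stretch of tape, like one line on the printer roll
        lines.append("".join(str(tape[i]) for i in range(-10, 11)))
    return lines

if __name__ == "__main__":
    for line in run():
        print(line)
</code>
```

That's the whole mechanism: one head, one lookup table, one strip of paper. Universality means that for the right table, some encoding of any computation you care about can be laid out on the tape and the machine will grind through it.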

Universal computation means it can accomplish any possible computational task, provided you get the rules for the states and the white/orange/red colours right. Obviously. Duh.

People have proven that relatively complicated machines (7 states, if I recall) are capable of universal computation. Which is pretty wild. People have also shown that 2,2 and simpler machines aren't capable of it. So the question was: can 2,3 machines do it? Because if they can, that probably makes them the simplest possible machines capable of universal computation.

Stephen Wolfram, boy genius and subsequent author of the scale-crushing A New Kind of Science, was so intrigued by the topic that he offered up $25,000 to the first person to settle the question either way (in the mathematical Proof sense).

Why is this important? Hell I don’t know. Folks claim that all life and possibly the entire universe is really just a form of information computation, and this impinges on that sort of thinking somehow. I don’t really understand what that means, although it’s somehow provocative to the old imagination.

It also gives us all a fresh excuse to read Cosma Shalizi’s review of A New Kind of Science, one of the most refreshingly acerbic exercises in sharp-blade big-brain academic critique I know.

A New Kind of Science
A Rare Blend of Monster Raving Egomania and Utter Batshit Insanity
….
What, then, is the revelation Wolfram has been vouchsafed? What is this new kind of science? Briefly stated, it is the idea that we should give up trying on complicated, continuous models, using normal calculus or probability theory or the like, which try to represent the mechanisms by which interesting phenomena are produced, or at least to accurately reproduce the details of such phenomena. Instead we should look for simple, discrete models, like CAs (“simple programs”, as he calls them) which qualitatively reproduce certain striking features of those phenomena. In addition to this methodological advice, there is the belief that the universe must in some sense be such a simple program — as he has notoriously said, “four lines of Mathematica”. Most of the bulk of this monstrously bloated book is dedicated to examples of this approach, i.e., to CA rules which produce patterns looking like the growths of corals or trees, or explanations of how simple CAs can be used to produce reasonably high-quality pseudo-random numbers, or the like.
….
As the saying goes, there is much here that is new and true, but what is true is not new, and what is new is not true; and some of it is even old and false, or at least utterly unsupported. Let’s start with the true things that aren’t new.

A disclaimer of sorts: I am a giant fanboy of the idea that the bizarre swirly patterns of the world may be the encrusted aggregate product of a handful of granular basic mechanisms, not least because that would mean that it’s okay if I let down the people trying to teach me the “complicated, continuous models, using normal calculus or probability theory or the like, which try to represent the mechanisms by which interesting phenomena are produced, or at least to accurately reproduce the details of such phenomena”.
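For what it's worth, the kind of "simple program" being argued over really is tiny. Here's a minimal sketch (mine, not anything from the book) of the rule 30 elementary cellular automaton, the usual poster child for a trivial update rule that produces convincingly messy, pseudo-random-looking output of the sort the review mentions:

```python
# A minimal sketch of an elementary cellular automaton: rule 30,
# the standard example of a one-line rule with messy, pseudo-random output.

RULE = 30  # the rule number's bits encode the new value for each 3-cell neighbourhood

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def show(cells):
    print("".join("#" if c else " " for c in cells))

cells = [0] * 31 + [1] + [0] * 31   # start with a single black cell in the middle
for _ in range(30):
    show(cells)
    cells = step(cells)
```

Thirty lines of tape and a one-line update rule, and the triangle it prints is already too scruffy to predict by eye, which is more or less the whole sales pitch.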

An Interesting Day in the Academic Trenches

As Utah Phillips might say, the University of Michigan isn’t the middle of nowhere, but you can see it from here. So it’s nice that lots of interesting people keep coming here to keep us entertained.

At lunch I went to a talk on the historical development of the neutral theory of evolution, from STRI staff scientist Egbert Leigh. It doesn’t sound like such a hot topic, but I’m fascinated by just how un-obvious tropical biological richness is when you really start to look at it, and I’ve been told I should consequently know about neutral theory, and thought the talk might be just the thing. So did lots of other people, apparently: the largish room at the Museum of Natural History was at capacity. It turned out to be just this side of incomprehensible for my genetics-theory-underequipped brain (and frankly some people should just not be allowed around PowerPoint). But there was something soothing and pleasant about sitting on a radiator in a room packed with young and old smart folks, listening to this bearded old dude droning on about really smart stuff he clearly really knew a lot about, and idly contemplating the firing of neural networks throughout the crowd. There were, necessarily, no academic high points for me, but the non-academic high point was when he suggested in his even, dispassionate way that Steve Hubbell built out the powerful and influential neutral theory, which every sensible person knows is fundamentally bonked, “because it was a sweet job, the same way building the atomic bomb was a sweet job for Oppenheimer”, and the crowd accepted that in their even, dispassionate way. I’m sure the lecturer didn’t mean it that way, but still, c’mon, Hiroshima?

After lunch I went and hung out in the Center for the Study of Complex Systems, where I feel legitimately entitled to check my email in a complex systems way since I probably passed my Evolutionary Dynamics test yesterday and thus still have a shot at getting my minor in complex systems. Then I went back to my home department, where my advisor had arranged an informal afternoon seminar with Michael Batty and some other Brits who were in town for a social sciences conference. Last year I spent a long weekend in Chicago, exploring the neighbourhoods there. I took a copy of one of Batty’s many books about city simulation with me. I didn’t end up doing a lot of reading, but to the extent that I did, it was fun to contrast the rich and surprising reality of the very visceral and assertive city of Chicago with the abstractions and essences of the book. So it was particularly pleasant to spend a non-directed afternoon around a table with Prof. Batty and other smart people batting around big ideas in agent-based modeling.

Being a grad student has its ups and downs, and there are plenty of times when I’ve wished for the mindless tedium of manual labour as a preferable substitute for the adult-student lifestyle, but when it comes through, the life of an academic can really come through.

Casting Aspersions on Modern Matrix Math?

The S.O.S. Mathematics primer on matrix algebra leads off thusly:

Matrices and Determinants were discovered and developed in the eighteenth and nineteenth centuries. Initially, their development dealt with transformation of geometric objects and solution of systems of linear equations. Historically, the early emphasis was on the determinant, not the matrix. In modern treatments of linear algebra, matrices are considered first. We will not speculate much on this issue.

Correct Math Will Be Important

I’m taking something called Evolutionary Dynamics from a guy who really knows it. He’s super nice but not taking it particularly easy on us, and probably wouldn’t appreciate any implication that he should. I’ve been picking up rumours and hints of the mysterious aesthetic crunch of real math from all kinds of sources for a long time; actually engaging in it elbow-deep is a hell of an experience. But I don’t have the background to follow along except painfully. This Onion article, Scientists Ask Congress To Fund $50 Billion Science Thing, rings true to my experience so far.

Another diagram presented to lawmakers contained several important squiggly lines, numbers, and letters. Despite not being numbers, the letters were reportedly meant to represent mathematics too. The scientists seemed to believe that correct math was what would help make the science thing go.

I’m getting better.

Insightful Analysis: Naeem et al’s Box-Ecosystems and Diversity

This week’s insightful analysis is for Naeem et al.’s sweet 1995 paper on their ecosystems-in-boxes experiment at the Ecotron, in which they manipulated species diversity in producer-consumer-predator systems and measured ecosystem functions for 200 days.

The original paper: Empirical Evidence that Declining Species Diversity May Alter the Performance of Terrestrial Ecosystems, Philosophical Transactions of the Royal Society B: Biological Sciences, 347(1321), 1995.


Seed’s Science Writing Winners Really Get It

Seed Magazine has published the first- and second-place entries in their 2nd annual science writing contest.

Both entries are explicitly not about science as fact or even science as method, but rather insist that science is about uncertainty and rigorous discourse in the context of physical evidence. So I’m down with either one or both, and if challenged for a manifesto might provide a photocopy of either.

1st: Scientific Literacy and the Habit of Discourse, Thomas W. Martin.

2nd: Camelot is Only a Model: Scientific Literacy in the 21st Century, Steven Saus, which gets bonus points both for pumping up the primacy of models in thought and for referencing Python.

Insightful analysis: Bengtsson, Which species? What kind of diversity?

An “insightful analysis” for the lengthily titled Which species? What kind of diversity? Which ecosystem function? Some problems in studies of relations between biodiversity and ecosystem function, Bengtsson, J. 1998. Applied Soil Ecology 10: 191-199.

Bell ringers: the sentences that most excited me were:

1. “Diversity of functional groups, diversity within functional groups vs. total diversity”

(p. 196) Despite the author’s claim that diversity is not a mechanistic driver of ecosystem function, it seems clear that we will identify any real mechanisms linking ecosystem function to gritty biology through the persistence of statistical correlations between units of stuff in ecosystems and the outcomes of those ecosystems. Divvying up diversity into inter- and intra-functional-group measures seems like a powerful step in finding the most suggestive statistical correlations.

2. “It is difficult to predict which species will be of importance in the future.”

(p. 197) I gather there is a good literature on this “natural insurance capital” theory, which is an exciting idea. Clearly there is a Gleasonian-versus-Clementsian evolutionary component to the question of whether any given diversity of groups/species is best suited to the likely perturbations of their locale. It seems to contradict the author’s claim that “there is no mechanistic relationship between diversity and ecosystem function”. Perhaps not in the immediate term, but given the convincing argument for a consideration of time-dynamic processes, all ecosystem functions may be dependent on a future-proof “smart diversity”.

Mechanism and correlation: The author is clearly not a disciple of R.H. Peters and his “Critique of Ecology”, which advocates abandoning mechanistic “narrative descriptions” (which Peters claims can’t predict outcomes or definitively answer questions). Rather, the author suggests correlation is a lesser kind of knowledge and that mechanism is the goal of real beef-eating scientists. I agree with him, but wonder if he’s forgotten that we get there through data, and if we pre-judge our data based on the existing canon of identified mechanisms, we may miss out on new candidates. This is especially important in an emerging field, where there may not yet be consensus around the relevant mechanisms. A bunch of possible ecosystem functions are listed, with the implication that those functions, plus some other stuff we also know about, are a good approximation of what ecosystems do. My intuitive response is that ecosystems are awfully complicated and our understanding of how they work is still basic. I fully agree with the author that we’ve been way over-focused on divvying them up into units of species, but I’m skeptical that we now know how best to aggregate them.

Experiments and data aggregation: The kinds of experiments the author advocates for testing mechanism are awfully compelling (and perhaps I should now read the Ecotron paper more carefully). They would be tough, though. Time-dynamic analysis, controlling for biomass, in real ecosystems when possible, is a high bar. Perhaps rather than insisting on defining “functional groups in consistent ways” a priori, we should be working on measuring our data at the least-aggregated level and providing it, in standardized formats, to open repositories which would allow us to take on such cool-but-daunting studies in the “big science” format increasingly popular with the bioinformatics/molecular genetics crowd.

Blogging My Seminar: Biodiversity & Ecosystem Function

I’m really excited about one of my new classes this semester: NRE 639-039, Don Zak’s ecology seminar, entitled “Biodiversity & Ecosystem Function: Are There Any Links?”. After taking a rather more technical course last term on measuring and storing data on biodiversity and ecosystem informatics, I was left asking myself over and over whether biodiversity is really a monolithic good and whether, for instance, there are any links between biodiversity and ecosystem function. I’ve become an evangelist of function and raw physical mechanism as a relevant focus for ecosystem study, especially as opposed to our Victorian-artifact fetish with whatever species are.

So oh boy, this oughta be a good course, the kind of course you, say, carve out a chunk of your life to go to graduate school for. Dr. Zak is consistently rated by his students as a great instructor, just as an added enticement. He’s asking for a weekly “insightful analysis” response, a one-pager on one of the papers we read. Putting the adjective “insightful” in the title of the assignment seems optimistic, but optimism is good and I’m sure we’ll all do what we can. I’m inclined to treat them as a sort of blog thing. So I’m going to post them to my blog, natch. The first one is the next entry. Yay!

My Boss Shall Ride the New Train to Tibet

My supervisor leaves today for one of his periodic trips to China. This time he’s going through the northwest, stopping in Urumqi and other points in Xinjiang. He’s also going up to Lhasa, and to get there he’s taking the crazy new Qinghai-Tibet Railway. Politically, culturally, it is, as he put it, “what it is”. As a train ride, it’s gotta be the coolest, most train-fetishistic thing you can do these days.

Debating Formats for Open Access Articles

Andy Powell asks if the ubiquitous Portable Document Format is the right choice for academic publishing online, and especially for open access journals. He suggests that HTML might be the right way to go instead.

I don’t know much about the history of PDF, but I understand it was Adobe’s proprietary descendant of PostScript. Obviously PDF has been wildly successful. Interestingly, as Peter Sefton points out in the comments, PDF is no longer a proprietary format. Adobe opened up the standard, giving away both the instructions for making PDF documents and the rights to do so to anybody who wants to build software for it. Why? I dunno. It seems like PDF was a big success as an Adobe-only format, and you have to wonder what convinced them to give up the legal rights to being the only people who can sell software to make and read it. Some governments very sensibly require that their documents be published in non-proprietary formats (although enforcement of that requirement seems pretty thin in places). That’s a sensible requirement both because open-standard formats are more accessible to people who can’t or don’t pay for a monopoly company’s software, and because they are more likely to remain accessible in years to come, when the responsible company may have long ceased to sell readers compatible with current operating systems and such. Perhaps Adobe was afraid that somebody else’s open standard would rise to supremacy on the back of that government requirement and the similar requirements of accessibility-minded private citizens. In any case, they made the bold move of opening PDF up to the world. Nowadays Foxit, among others, makes software for creating and reading PDFs (and in Foxit’s case, arguably does a better job than Adobe; certainly its software is less of a bully to your computer, regardless of the quality of the documents it produces).

Peter Sefton also points out that HTML, which was designed not to do layout, is inevitably bad at document layout. That can matter for the readability of table- and diagram-centric research documents. I would add that all the little readability details of font and kerning and such are also a bit of a wreck in HTML. PDF gives the publisher potential control over those things, and sometimes control is a good thing.

He also argues for XML as a secondary format in which open access articles should be published. That would allow full semantic machine-fu goodness. I think he’s implying that the XML version would be the canonical one, which is an interesting and compelling idea. No one is going to read a document in native XML of course (XML not really being designed to be read in its raw form), so this would have the odd fallout that the authoritative version of the document wouldn’t necessarily ever be seen by humans. Of course, properly implemented, XML is easily machine-translatable into any readable format. I hope and trust that there are existing software engines to do just that automatically.

So maybe researchers should publish their articles in (at least) three versions: an authoritative, future-proof, easily catalogable, semantically marked-up XML version; a PDF or other hard-pixeled version for printed human consumption; and an HTML version for cursory online use. That would be kind of a workflow version of what LaTeX does for document creation, on a humbler scale. The .pdf and .html could be auto-generated to the journal’s norms from the .xml, and further customized for layout by authors/editors who have the time and the inclination. All seems like a good idea to me.
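As a tiny proof of concept of that auto-generation step, here's a sketch in Python using lxml's XSLT support. The <article> schema and the stylesheet are invented for illustration; a real journal would presumably use an established article schema (something like the NLM/JATS DTD) and a much more serious stylesheet.

```python
# A minimal sketch of auto-generating an HTML rendering from a canonical XML
# article. The <article>/<title>/<abstract> schema here is invented purely
# for illustration.

from lxml import etree

ARTICLE_XML = b"""
<article>
  <title>Camelot is Only a Model</title>
  <abstract>Scientific literacy is about discourse, not facts.</abstract>
</article>
"""

STYLESHEET = b"""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/article">
    <html>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <p><xsl:value-of select="abstract"/></p>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
"""

transform = etree.XSLT(etree.fromstring(STYLESHEET))   # compile the stylesheet
html = transform(etree.fromstring(ARTICLE_XML))        # apply it to the article
print(etree.tostring(html, pretty_print=True).decode())
```

The same canonical XML could be pushed through a different stylesheet, or a LaTeX/PDF toolchain, to produce the print version, which is the whole appeal: one authoritative source, many renderings.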
