The Upside of Environmental Complexity

Just bought a sockeye salmon off the dock at Granville Island. Hook-and-line caught in the ocean the evening before, gutted when it was brought onto the boat, and because we didn’t mind a little seal bite we got the big one for the regular price.

[image: receiving the sockeye]

According to the bathroom scale we paid about the same per pound as we would for factory-farm hamburger, which is a somewhat sad economic consequence of the magnitude of this year’s incredible sockeye run. My complex systems teacher once made the point that at a certain scale, complex systems can be effectively equivalent to chaotic systems. Viewed from the commercial dock at Granville Island, the four-year oceanic salmon cycle sure seems chaotic.

[image: seal bite]

And tasty. We’ve never cooked salmon before, but we’re figuring on grilling up a chunk of it on the bbq tonight with a little lemon.

Update: and that was exactly what we did.

Artificial Intelligence in Flash

This guy appears to be doing some work on network-based artificial intelligence… in Flash. I wouldn’t have thought Flash would be a first choice of programming language if you’re into experimental computation. But you gotta admit, it sure does look pretty.

Maybe NetLogo should hire a designer to gussy up their applets. Right after they get around to going open source.

Computer Dollhouses and the Vagaries of Internet Fame

When I visited Chiron Bramberger a few weeks back he showed me some photos he had taken of dollhouse furniture staged in a computer case. I thought they were really neat. A couple of days ago I saw similar photos posted by Cory Doctorow on his blog BoingBoing, which has the reputation of being the most-visited blog in the universe. I shouted “hey!”, then read the post which attributed the photos to a Russian casemodder. Such is the power of contextualization that I instantly accepted that the photos on BoingBoing were only coincidentally similar to Chiron’s, and forgot about it.

It turns out that those photos are indeed the ones Chiron took. Cory thought they were Russian because they had appeared on a Russian case-modding site. Now Chiron has a post of his own up indicating that those photos actually appeared on a whole host of A-list blogs (in fact, I don’t think he even knew they had also appeared on BoingBoing when he wrote that list).

So Chiron’s little photo project seems to have attained the maximum of internet fame that such art projects can. Held up for admiring inspection by thousands, perhaps millions of internet readers who frequent the very crème-de-la-crème of blogging sites. Which is a great thing in its own right: to have your idea enjoyed by many. That’s cool. But I think I’ve always assumed that when I see somebody’s project do the rounds on the cool-things blogs, it must represent some moment of personal success for the originator. Recognition and credibility and an increase in the world’s demands and attentions on that person. Perhaps sometimes it does. In this case Chiron apparently didn’t even know it was happening until after the fact. Chiron does have projects that merit attention; some are non-commercial, like the Broken Happiness Machines and Petsynth projects which have formed the bulk of the content on hughstimson.org lately, and also Flytrap Gear, which is commercial at least in the sense that it would be most successful conceptually if lots of people bought stuff. Did any of the dollhouse-mod traffic accrue to Petsynth or Flytrap? I’m guessing no, since there were no links even to Chiron’s sites.

This isn’t a complaint: I’m not implying that the blogs which featured the dollhouse photos did anything wrong in doing so, or that they were obliged to do the kind of deep research which might or might not have tracked the photos back from the Russian site to Chiron, and I’m not disrespecting the basic fun of having Chi’s photos traded around by admiring folks worldwide. But the disconnect is interesting.

It’s also mirrored somewhat by Morris Rosenthal’s experiences. He wrote a book about computer diagnosis and repair, and also prepared a series of excellent diagnostic flowcharts, which he posted on his website in 2003 hoping they might “go viral” and show up on CS students’ dorm walls and blogger blogs. He reports that over the years those flowcharts have indeed had some success on the internets, but didn’t blow up as a “discovery” of great novelty and nowness until suddenly this year (landing them for instance on BoingBoing). Apparently this sudden virality isn’t related to any change in the way he posts or publicizes the flowcharts, it’s just a random fluctuation in interest which was amplified through the law of increasing returns governing the link-blog universe into a random frenzy. A butterfly tweets in Tokyo Harbour and there is a storm over the Metafilter Coast. Morris also suggests that that wave of nonlinear virality helped book sales, but only a little. Perhaps that is due in part to bloggers’ tendency to link an item to the blog they discovered it on with the ubiquitous [via] tag, rather than to the item’s original source.

Incidentally, it’s been my policy on hughstimson.org to link to the original source of an item if possible, although I’ve often had a twinge of guilt in doing so. I worry that it might appear that I’m claiming original research, when I’m actually riding the same wave of aggregate fascination that powers everyone else’s blogging.

Another disconnect: even though tens of thousands of people did in the end find the primary website for the book from which the flowcharts were derived, the difference in actual book sales was on the order of 1%. Mr. Rosenthal says

“Most marketing consultants and promotional experts aren’t focused on bottom line sales because they can’t deliver them. They’ll expect you to celebrate the number of websites quoting your story, the number of visitors to your site, the number of links that show up on unrelated sites around the world.”

That kind of uncommercializable commerciality makes me think of the current Twitter fascination. Every business entity in the world wants Twitter cred. And somehow this social networking addendum, best suited in form to trivial narcissism, has become a business obligation. Employees are obliged to twitter whether they want to or not. Why? Retweets and followers are not minted on the gold standard. You can’t turn them in to the Bank of Twitter for an equivalent weight in gold shavings. The occasional stories about commercial returns (think “Dell Sells Computers on Twitter”) only seem to point up the ridiculousness of the possibility of making real money through one-liners. But businesses want profile, seemingly for its own sake. The connection between online success and integrated meat-world success has always struck me as a non-simple one, and the experiences of folks like Chiron and Morris Rosenthal seem to suggest it may be very non-linear indeed.

If Only The Economy Were Allowed to Stop Failing

Greenspan Concedes to ‘Flaw’ in His Market Ideology — Bloomberg (2nd Term)

‘“If we are right 60 percent of the time in forecasting, we are doing exceptionally well; that means we are wrong 40 percent of the time,” Greenspan said. “Forecasting never gets to the point where it is 100 percent accurate.”’

Yes, that follows. And when the consequences of bad outcomes are catastrophic and the prediction of good outcomes can’t be certain, you have to have policies which are robust to failure. What Greenspan seems to have been suggesting, and what he still seems to be defending, is that when prediction cannot be 100 percent accurate, it is acceptable or even inevitable to forge ahead as if the outcome were sure to be uniformly positive.

‘Today, the former Fed chairman asked: “What went wrong with global economic policies that had worked so effectively for nearly four decades?”

Greenspan reiterated his “shocked disbelief” that financial companies failed to execute sufficient “surveillance” on their trading counterparties to prevent surging losses.’

So how many catastrophic market failures do we have to have before we get past shocked disbelief when there’s another? Sure, each one is different in specific character from the last, but the insistence that this time we’ve got it all figured out is practically childish when repeated ad infinitum. Marketeers seem capable of convincing themselves that, because they are personally familiar with the mechanisms at play at the level of individuals, they can therefore know what behaviour will emerge at the level of the system. It’s not that neoliberal market theorists don’t believe in emergence; on the contrary, they are devoted to the elegant efficiencies that they see when markets aggregate information and action. They just don’t seem to want to believe that complex systems (including the ultra-complex systems Wall St. financiers are capable of cooking up) are capable of negative outcomes too.

It comes back to John Kenneth Galbraith’s position that market collapses don’t happen because of unpredictable shocks from somewhere outside of the lines that economists draw around “the economy”, they happen because of the most fundamental rules of capitalist economies. And they will again, particularly if we don’t exercise cautious oversight.

Update: see also this interesting and convincing chunk of quotes from the same testimony:

Greenspan: Bad data hurt Wall Street computer models — NYT

‘Business decisions by financial services firms were based on “the best insights of mathematicians and finance experts, supported by major advances in computer and communications technology,” Greenspan told the committee. “The whole intellectual edifice, however, collapsed in the summer of last year because the data inputted into the risk management models generally covered only the past two decades, a period of euphoria.”

He added that if the risk models also had been built to include “historic periods of stress, capital requirements would have been much higher and the financial world would be in far better shape today, in my judgment.”’

We live and learn. Especially about using models to make serious decisions.
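Just to put toy numbers on that point (entirely made up, and nothing to do with any actual institution’s model), here is a small Python sketch of how the lookback window you feed a crude value-at-risk-style calculation changes the capital cushion it recommends: feed it only the calm years and the cushion is thin; mix in even a short simulated stress period and it fattens considerably.

```python
# Toy illustration only: made-up returns, not any real bank's model.
# The point is just that the lookback window you feed a risk model
# determines the capital cushion it recommends.
import random

random.seed(42)

# ~20 years of daily returns drawn from a calm, "euphoric" regime
calm = [random.gauss(0.0005, 0.01) for _ in range(5000)]

# the same data plus a short simulated stress period: bigger, mostly
# negative swings standing in for a historic crisis
stress = [random.gauss(-0.002, 0.04) for _ in range(500)]
full_history = calm + stress

def var_99(returns):
    """Crude 99% value-at-risk: the daily loss exceeded only 1% of the time."""
    return -sorted(returns)[int(0.01 * len(returns))]

print(f"cushion from euphoria-only data:   {var_99(calm):.1%} of the position")
print(f"cushion including stress periods:  {var_99(full_history):.1%} of the position")
```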

Absolute Joy in Domains of True Uncertainty

After getting irritated at humankind’s inability to accept that some things are genuinely uncertain, I open my podcasting device and hey presto:

Resilience: Adaptation and Transformation in Turbulent Times — A World Of Possibilities, May 6th

It opens with Buzz Holling (whom NRE 580 alumni will remember for panarchy theory) on adaptation, uncertainty, indeterminism, non-equilibrium, and suchlike in the general world. Then it moves on to Brian Walker talking about much of the same in ecosystem management, plus control fetishism. Then it moves on from there. Recorded at a Stockholm conference on applying biology-based resilience theory to social systems. The idea of which is now creeping me out. Except that maybe, just maybe, this is a group of people that can be trusted to think rationally across disciplines. Maybe. Anyhow, it’s good listening.

Brian Walker’s talk reminded me of a lecture on conservation management from my undergrad, wherein Thom Nudds announced that if you manage to get an ecosystem to not cycle you’ve flatlined it, so congratulations on that.

Niall Ferguson and Peter Schwartz Debate: Are We Fux0red?

Somehow I missed this swashbuckling undebate between Niall Ferguson and Peter Schwartz on the general topic of: is the future screwed? Schwartz (see video here, in which he claims we will mostly all die from sudden global climate change-induced violence long before peak oil is an issue) is the optimist; Niall Ferguson is concerned things might be uncomfortable for us.

Alternative histories, finance, evolutionary biology, cosmology, corn syrup, Hashemites, diesel-shitting microbes, China, complexity, dead friends and imaginary ones. And laughs! And if you really need hard futurism, Schwartz (back in April) predicts substantial rebalancing of the finance system within weeks. It’s a corker. Summary here, mp3 here, podcast here for the whole seminar series, if you wish to descend the full rabbit hole (highly, highly recommended). Don’t miss the Q&A after.

“Not Life of Brian, Meaning of Life. Sorry. Jet lag.” — Niall Ferguson

An Interesting Day in the Academic Trenches

As Utah Phillips might say, the University of Michigan isn’t the middle of nowhere, but you can see it from here. So it’s nice that lots of interesting people keep coming here to keep us entertained.

At lunch I went to a talk on the historical development of the neutral theory of evolution, from STRI staff scientist Egbert Leigh. It doesn’t sound like such a hot topic, but I’m fascinated by just how un-obvious tropical biological richness is when you really start to look at it, and I’ve been told I should consequently know about neutral theory, and thought the talk might be just the thing. So did lots of other people apparently; the large-ish room at the Museum of Natural History was at capacity. It turned out to be just this side of incomprehensible for my genetics-theory underequipped brain (and frankly some people should just not be allowed around powerpoint). But there was something soothing and pleasant about sitting on a radiator in a room packed with young and old smart folks, listening to this bearded old dude droning on about really smart stuff he clearly really knew a lot about, and idly contemplating the firing of neural networks throughout the crowd. There were, necessarily, no academic high points for me, but the non-academic high point was when he suggested in his even, dispassionate way that Steve Hubbell built out the powerful and influential neutral theory, which every sensible person knows is fundamentally bonked, “because it was a sweet job, the same way building the atomic bomb was a sweet job for Oppenheimer”, and the crowd accepted that in their even, dispassionate way. I’m sure the lecturer didn’t mean it that way, but still, c’mon, Hiroshima?

After lunch I went and hung out in the Center for the Study of Complex Systems, where I feel legitimately entitled to check my email in a complex systems way since I probably passed my Evolutionary Dynamics test yesterday and thus still have a shot at getting my minor in complex systems. Then I went back to my home department, where my advisor had arranged an informal afternoon seminar with Michael Batty and some other Brits who were in town for a social sciences conference. Last year I spent a long weekend in Chicago, exploring the neighbourhoods there. I took a copy of one of Batty’s many books about city simulation with me. I didn’t end up doing a lot of reading, but to the extent that I did it was fun to contrast the rich and surprising reality of the very visceral and assertive city of Chicago with the abstractions and essences of the book. So it was particularly pleasant to spend a non-directed afternoon around a table with Prof. Batty and other smart people batting around big ideas in agent-based modeling.

Being a grad student has its ups and downs, and there are plenty of times when I’ve wished for the mindless tedium of manual labour as a preferable substitute for the adult-student lifestyle, but when it comes through, the life of an academic can really come through.

An Agent-Based Modeling Textbook, Free in Alpha

José M. Vidal is writing a textbook called “Fundamentals of MultiAgent Systems”, and he’s posted an alpha version on his site, with a call for comments. It’s here:

Fundamentals of Multiagent Systems Textbook

The link to the .pdf seems a bit flakey, but if you try a couple of times it should come through.

Apparently the book is based on his experiences running a grad course in agent-based systems. Cool.

He also runs this user-blog on multi-agent systems:

www.multiagent.com

which works on the mechanism that if you save a weblink in del.icio.us with a certain tag (for:jmvidal), that link and your accompanying text will show up on the blog. Neat.

Gonna Try to Make a Spatial Model of Regional Dialect Formation

And in that spirit…

I heard an interview on NPR with William Labov. He was talking about how regional dialects in the US are entrenching and differentiating themselves. Which seems counter to what you might think would be the case in a highly connected and media-centralized society. He talked a lot specifically about the ‘Northern Cities Shift’, which if you happen to know some native Michiganders you may be anecdotally aware of. I thought it was fascinating that dialect doesn’t settle down into some kind of homogenous equilibrium, or at least isn’t doing so now. It struck me that that kind of perpetual novelty and lava-lamp partial pattern persistence is the sort of thing you see in complex systems — places where there are many agents interacting with local rules which crank out the big-system behaviour.

I have to make a model for my agent-based modeling course, so I figured: this is the one. No, it’s got nuthin to do with ecology or landscapes or remote sensing or whatever, but the more I think about it the more I think it’s kind of cool anyway.
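For the record, here’s about the simplest sketch I can come up with of the kind of local-rule mechanism I have in mind, in plain Python and emphatically not the model I’ll hand in: speakers on a grid who mostly copy a neighbour’s variant of a word, with the occasional innovation thrown in.

```python
# The gist of the dialect idea in plain Python (not the actual course model):
# speakers on a grid each use one discrete variant of a word, every step a
# random speaker copies a random neighbour, and once in a while somebody
# innovates a brand-new variant. The hope is that patches of shared usage
# form, drift, and dissolve rather than settling into one equilibrium.
import random
from collections import Counter

SIZE = 40                 # speakers live on a SIZE x SIZE grid (wrapping edges)
STEPS = 200_000           # individual copy events
INNOVATION_RATE = 0.0005  # chance a speaker invents a new variant instead
random.seed(1)

grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
next_variant = 2

for _ in range(STEPS):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    if random.random() < INNOVATION_RATE:
        grid[r][c] = next_variant           # a new pronunciation is born
        next_variant += 1
    else:
        dr, dc = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        grid[r][c] = grid[(r + dr) % SIZE][(c + dc) % SIZE]   # copy a neighbour

counts = Counter(v for row in grid for v in row)
print("variants still in use:", len(counts))
print("five biggest dialects:", counts.most_common(5))
```

The real thing will need geography that isn’t a featureless wrap-around grid, and probably some stand-in for mass media, but that’s roughly the skeleton.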

With our ant-trail presentations out of the way, these days we’re having the presentations on our proposed models. I presented last week, and all the proposals were really cool: a model of pollution-coalition formation and stability among nations from Johannes Urpelainen; a completely off-the-hook model of interest and agenda formation and influence in distributed human communities from Andrew Bell; and from Kensuke Mori, a meta-population model of predation and birth patterns in African mammals, which is the sort of thing I wish I had thought of because it’s such a clear ecological application. That’s the first set of presentations. Damn.

The slides from my own presentation are here. They get weak at the end; I was still whacking away at them at home 8 minutes before the start of class. Like any good presentation they probably won’t mean much without the audio component anyway — highlights from my draft proposal follow below, and the whole thing is here.


Ants, Ant Books, Programming, and Raccoons

I have a group project writing an agent-based program to simulate the foraging behaviour of ants. The NetLogo implementation of this idea makes it look easy. Turns out it’s not. Which has led to lots of interesting questions about ants.

Incidentally, the project is being written using the RePast agent-based modeling libraries for Java. Now, I haven’t looked at the code of the NetLogo sample implementation since I started writing this thing, because we’re not supposed to. But I did look at it last semester, and I seem to remember you could fit the code on a t-shirt, using a fairly hefty font, if you were so inclined. You could not fit the equivalent Java code on a t-shirt. You could not fit it on a muumuu. If nothing else, this project is convincing me that as soon as we’re let loose, I’ll be switching to NetLogo. RePast may not be as clumsy or random as a blaster, but NetLogo is just like way faster. Bring on the clumsy and random.
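For anyone wondering what the fuss is about, here’s the gist of the foraging loop, sketched in plain Python on a one-dimensional strip rather than in our actual Java/RePast code, and simplified well past anything we’d hand in: ants wander out from the nest, and any ant that finds the food walks straight home laying pheromone, which biases the wandering of everyone else.

```python
# Not our class project (that one's Java + RePast); just the gist of the
# pheromone-trail idea in plain Python, on a 1-D strip to keep it short:
# nest at cell 0, food at the far end, ants wander, and ants carrying food
# walk home leaving pheromone that biases the wandering of the others.
import random

LENGTH, N_ANTS, STEPS, EVAPORATION = 30, 20, 2000, 0.02
random.seed(7)

pheromone = [0.0] * LENGTH
ants = [{"pos": 0, "carrying": False} for _ in range(N_ANTS)]
food_delivered = 0

for _ in range(STEPS):
    for ant in ants:
        if ant["carrying"]:
            ant["pos"] -= 1                      # head straight home
            pheromone[ant["pos"]] += 1.0         # lay trail on the way
            if ant["pos"] == 0:
                ant["carrying"] = False
                food_delivered += 1
        else:
            # biased random walk: the more pheromone ahead, the more likely
            # the ant keeps moving away from the nest
            ahead = pheromone[min(ant["pos"] + 1, LENGTH - 1)]
            p_forward = 0.5 + min(0.4, 0.1 * ahead)
            step = 1 if random.random() < p_forward else -1
            ant["pos"] = max(0, min(LENGTH - 1, ant["pos"] + step))
            if ant["pos"] == LENGTH - 1:
                ant["carrying"] = True           # found the food pile
    pheromone = [p * (1 - EVAPORATION) for p in pheromone]

print("food items delivered to the nest:", food_delivered)
```

Even in this toy version you can see the touchy parameter is going to be the evaporation rate: too high and trails never build up, too low and everyone keeps following stale ones.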

In an effort to answer some of my questions about how real ants have solved their RePast programming issues, I got a copy of Ants at Work by Deborah Gordon out of the library. I was shocked and mildly irritated to see that no one has checked out this copy — the only one in the UMich system — before me. WTF? I first read AaW when I was contemplating a project for my final year field course in undergrad, and it sticks in my memory as one of the most interesting books I have read. Dr. Gordon studies how it is that individual ants, obeying no rules outside of their own tiny heads, somehow come together to form the persistent yet adaptable superorganism that is an ant colony. She uses methods ranging from painting individual ants to digging up colonies with backhoes. It was my first introduction to the idea of emergence, before I (or apparently Dr. Gordon) had ever heard the word.

I can’t believe nobody else has read it around here. What’s wrong with these people? It’s so much more portable than The Ants, and costs 1/20th as much, even if you don’t include the cost of the hand cart.

Also, there is a raccoon sleeping in the garbage bin to the east of the Shapiro library doors.

[image: wtf.gif]
