R^2

Roughly a year ago, I made some noises on this blog about wanting to learn R. Not surprisingly, I didn’t do it.

A year later I’m a government scientist with some statistics to do, and I’m once again thinking of learning me some R. In the interim, I’ve received an email assuring me “you could get up and running with it within a day, I think. Master it in a week or two”. So I download the package — it’s free! — install it, and boot it up. I’m looking at a command line console labelled the “GUI” (ha ha), with the following help text:

“Type ‘demo()’ for some demos”

Demos! Perfect! Let’s see some concrete examples of how to do statistics in R-land! So I type demo() into the “GUI” prompt, and receive the following output:

Demos in package ‘base’:

is.things: Explore some properties of R objects and is.FOO() functions. Not for newbies!
recursion: Using recursion for adaptive integration
scoping: An illustration of lexical scoping.

Demos in package ‘graphics’:

Hershey: Tables of the characters in the Hershey vector fonts
Japanese: Tables of the Japanese characters in the Hershey vector fonts
graphics: A show of some of R’s graphics capabilities
image: The image-like graphics builtins of R
persp: Extended persp() examples
plotmath: Examples of the use of mathematics annotation

Demos in package ‘stats’:

glm.vr: Some glm() examples from V&R with several predictors
lm.glm: Some linear and generalized linear modelling examples from `An Introduction to Statistical Modelling’ by Annette Dobson
nlm: Nonlinear least-squares using nlm()
smooth: `Visualize’ steps in Tukey’s smoothers

Use ‘demo(package = .packages(all.available = TRUE))’ to list the demos in all *available* packages.

Tables of the characters in the Hershey vector fonts? demo(package = .packages(all.available = TRUE))? Some of the demos in the ‘stats’ package sounded like they might make sense, so I tried to run them, but I couldn’t figure out how. I love the idea of open source bare-knuckle computing. I wish I loved it in practice.
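
For what it’s worth, the incantation appears to be the demo’s name passed to demo() itself, optionally along with the package it lives in:

demo(lm.glm, package = "stats")   # Dobson's linear and generalized linear model examples
demo(graphics)                    # the general tour of R's plotting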

Enhance

Forgive me for saying so, but I know a thing or two about enhancing photographs. I’ve put some time in as a satellite and aerial imagery analyst, and as a hobby photographer I make no apologies about Photoshop. I grok histogram response curves, level shifting, global and local contrast, interpolation, headroom, falloff, edge detection, hue isolation and saturation expansion. I know you almost always zoom out (!) to see a pattern, but if you want to get into pixel-peeping, I know a little about decomposing a pixel into constituent spectral signatures, k-means clustering and machine-learning classification, and all the lovely supervised and unsupervised pixel binning techniques. If I give myself an hour to study up, I can even keep the Minimum Noise Fraction transformation straight in my head for 15 minutes. And the N-Dimensional Visualizer speaks for itself.

There is an enormous amount you can do to make a shape or pattern or shade of interest stand out in an image, by tweaking the colour or contrast response, or exploiting extra parts of the light spectrum to help the computer find hidden colours. You can fuzz together noisy patterns to see the shapes behind them, or bin together multiple pixels to lighten up the darkness. Just about the only thing you can’t do is create detail where there wasn’t any to begin with.
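
To make that concrete, here is a toy sketch in R, with a random matrix standing in for a noisy grayscale image: binning 2x2 blocks of pixels averages away noise, but the result has a quarter as many pixels as the original, and nothing in the operation can put the missing detail back.

set.seed(1)
img <- matrix(runif(64 * 64), nrow = 64)        # stand-in for a noisy 64 x 64 image
bin2x2 <- function(m) {
  rows <- seq(1, nrow(m), by = 2)
  cols <- seq(1, ncol(m), by = 2)
  out  <- matrix(0, length(rows), length(cols))
  for (i in seq_along(rows)) for (j in seq_along(cols)) {
    out[i, j] <- mean(m[rows[i]:(rows[i] + 1), cols[j]:(cols[j] + 1)])
  }
  out                                           # 32 x 32: less noise, less resolution
}
image(bin2x2(img), col = grey.colors(256))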

So I get grumpy every time I watch a movie with an image analysis scene, and the one and only thing they always always do is the one damn thing you can’t.

dunk3d made a montage:

Two they left out:

Blade Runner (the original?)

and of course Super Troopers

…(although it’s true that imagery analysts wear state trooper uniforms to operate their computer terminals.)

Google Massively Automates Tropical Deforestation Detection

Landcover change analysis has been an active area of research in the remote sensing community for many years. The idea is to make computational protocols and algorithms that take a couple of digital images collected by satellites or airplanes, turn them into landcover maps, layer them on top of each other, and pick out the places where the landcover type has changed. The best protocols are the most precise, the fastest, and the ones that can chew on multiple images recorded under different conditions. One of the favourite applications of landcover change analysis has been deforestation detection. A particularly popular target for deforestation analysis is the tropical rainforests, which are being chainsawed down at rates which are almost as difficult to comprehend as it is to judge exactly how bad the effects of their removal will be on biological diversity, planetary ecosystem functioning and climate stability.
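
In cartoon form, and skipping every hard part (georegistration, atmospheric correction, a real classifier), the overlay-and-diff step might look something like this R sketch, where made-up "greenness" matrices stand in for the two co-registered images:

classify <- function(greenness, threshold = 0.5) greenness > threshold   # forest / not forest

t1 <- matrix(runif(100 * 100), 100, 100)            # older image
t2 <- t1
t2[30:60, 30:60] <- t2[30:60, 30:60] * 0.3          # simulate a clearcut in the newer image

deforested <- classify(t1) & !classify(t2)          # was forest then, is not forest now
sum(deforested)                                     # changed pixel count; multiply by pixel area for area lost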

Google has now gotten itself into the environmental remote sensing game, but in a Google-esque way: massively, ubiquitously, computationally intensively, plausibly benignly, and with probable long-term financial benefits. They are now running a program to vacuum up satellite imagery and apply landcover change detection optimized for spotting deforestation, and for the time being targeted at the Amazon basin. The public doesn’t currently get access to the results, but presumably that access will be rolled out once Google et al are confident in the system. I have to hand it to Google: they are technically careful, but politically aggressive. Amazon deforestation is (or should still be) a very political topic.

The particular landcover change algorithms they are using are apparently the direct product of Greg Asner’s group at Carnegie Institution for Science and Carlos Souza at Imazon. To signal my belief in the importance of this project I’m not going to make a joke about Dr. Asner, as would normally be required by my background in the Ustin Mafia. (AsnerLAB!)

From the Google Blog:

“We decided to find out, by working with Greg and Carlos to re-implement their software online, on top of a prototype platform we’ve built that gives them easy access to terabytes of satellite imagery and thousands of computers in our data centers.”

That’s an interesting comment in its own right. Landcover/landuse change analysis algorithms presumably require a reasonably general-purpose computing environment for implementation. The fact that they could be run “on top of a prototype platform … that gives them easy access to … computers in our data centers” suggests that Google has created some kind of more-or-less general-purpose abstraction layer that can invoke their unprecedented computing and data resources.

They back that comment up in the bullet points:

“Ease of use and lower costs: An online platform that offers easy access to data, scientific algorithms and computation horsepower from any web browser can dramatically lower the cost and complexity for tropical nations to monitor their forests.”

Is Google signaling their development of a commercial supercomputing cloud, a la Amazon S3? Based on the further marketing-speak in the bullets that follow that claim, I would say absolutely yes. This is a test project and a demo for that business. You heard it here first, folks.

Mongabay points out that it’s not just tropical forests that are quietly disappearing, and Canada and some other developed countries don’t do any kind of good job of aggregating or publicly mapping their own enormous deforestation. I wonder: when will Google point its detection program at British Columbia’s endlessly expanding network of just-out-of-sight-of-the-highway clearcuts? And what facts and figures will become readily accessible when it does?



Mongabay also infers that LIDAR might be involved in this particular process of detecting landcover change, but that wouldn’t be the case. Light Detection and Ranging is commonly used in characterizing forest canopy, but it’s still a plane-based imaging technique, and as such not appropriate for Google’s world-scale ambitions. We still don’t have a credible hyperspectral satellite, and we’re nowhere close to having a LIDAR satellite that can shoot reflecting lasers at all places on the surface of the earth. Although if we did have a satellite that shot reflecting lasers at all places on the surface of the earth, I somehow wouldn’t be surprised if Google was responsible.

Which leads me to the point in the Google-related post where I confess my nervousness around GOOG taking on yet another service — environmental change mapping — that should probably be handled by a democratically directed, publicly accountable organization rather than a publicly traded for-profit corporation. And this is the point in the post where I admit that they are taking on that function first and/or well.

Retro Computing Jam Session

Have you ever had that feeling that somewhere out there, people are jamming on a VIC-20, a PET and a Commodore 64, possibly in some kind of classroom setting?

The middle computer would be Petsynth’s first (I assume) public performance.

Artificial Intelligence in Flash

This guy appears to be doing some work on network-based artificial intelligence… in Flash. I wouldn’t have thought Flash would be a first choice of programming language if you’re into experimental computation. But you gotta admit, it sure does look pretty.

Maybe NetLogo should hire a designer to gussy up their applets. Right after they get around to going open source.

“The finest in 1-bit sound on the Commodore PET”

And now the Petsynth project has a website: petsynth.org

You have to record the program to an audio cassette tape to load it onto your Pet. But this home copying is fully legal: Petsynth has gone open source.

Previously:

Broken Happiness Machines Are Go
PetSynth: A Superior Synthesizer for the Commodore Pet

Broken Happiness Machines Are Go

A couple of weeks ago I mentioned Petsynth, Chiron Bramberger’s novel synthesizer software for the Commodore Pet. But Chiron doesn’t just write music on the Pet, he also blasts 8-bit rhythmic weirdness from a pipe-organ arrangement of Amigas and Ataris and god knows what else.

And last week, Chiron pressed play on the Broken Happiness Machines website, from which he will distribute some of those grooves. (And hopefully that software!)


photo by Chiron Bramberger

An Adware Programmer on the Resilient Goodness of Humans

From the end of this extraordinary interview with a retired writer of programs to screw up your computer irretrievably:

S: Do you think that in our society we delude ourselves into thinking we have more privacy than we really do?

M: Oh, absolutely. If you think about it, when I use a credit card, the security model is the same as that of handing you my wallet and saying, “Take out whatever money you think you want, and then give it back.”

S: …and yet it seems to be working.

M: Most things don’t have to be perfect. In particular, things involving human interactions don’t have to be perfect, because groups of humans have all these self-regulations built in. If you and I have an agreement and you screwed me over badly, you’ve always got in the back of your mind the nagging worry that I’m going to show up on your doorstep with a club and kill you. Because of that, people don’t tend to screw each other too much, right? At least, they try not to. One danger, perhaps, of moving towards an algorithmically driven society is that the algorithms aren’t scared of us showing up and beating them up. The algorithms will do whatever it is that they are designed to do. But mostly I’m not too worried about that.

Your Computer Screen May Need To Be Colour Calibrated

Your computer screen may need to be colour calibrated. Mine sure did. I bought a new laptop which seems to have a nice enough screen, but I could tell by looking that it suffered from a blue cast. It’s specifically a Dell XPS 13 (yes, a Dell, forgive me), but blue-ishness seems to be a common characteristic of laptop screens. I’ve noticed it on several, and I recall photography spaz Ken Rockwell had the same problem. It’s not an issue if you aren’t doing photography or graphic design, but if you are, it is. Not knowing the actual colour of the image you’re making is a real pain if you’re planning on printing it or showing it on somebody else’s screen.

Calibrating a screen has a hard part and an easy part and a hard part. The hard part is swallowing the idea of paying non-trivial sums of money for an obscure hardware thing that will sit briefly on your monitor before being forgotten in a desk for months or years. The easy part is actually running the procedure once you’ve installed the associated software and have plugged the device into your computer. The hard part is knowing what to do with the calibration profile file that procedure will produce. The calibrator I use installs a little program that loads up with Windows and automatically applies the profile file when the system boots, which seems easy. But I notice that program sometimes fails, so I have to override it and use the built-in Windows color management controls anyway. It’s less of a pain in Vista than it was in XP, but it’s still a pain. Mac and Linux may have better systems, I’m not sure. In any case it means digging through control panels and pondering what will happen when you plug multiple different monitors into your computer, if you’re into that sort of thing.

A little Q&A:

What is a colour calibrator? What is a colour profile? A calibrator is a little USB puck that has a basic spectrometer built into it. The associated software produces an image on your screen that cycles through some colours. The spectrometer puck precisely records what colour your screen is actually generating. The software then compares those output values to what it was feeding into the screen and makes a little file with numbers the operating system can use to correct the difference. With that correction applied the result should be a neutral colour response.
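
A toy sketch of that comparison step in R, with a made-up screen response standing in for the measurements (a real profile stores per-channel curves plus a colour conversion matrix, but the principle is the same):

fed      <- seq(0, 1, length.out = 11)           # values the software sends to the screen
measured <- fed^1.1 * 0.95                       # pretend response: a dim, slightly-off screen
correct  <- approxfun(measured, fed, rule = 2)   # invert the measured response
correct(0.5)                                     # what to send when you actually want 0.5 back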

Why don’t screen manufacturers test their screens and give out profiles? I don’t know. My old external monitor actually did come with a factory-made profile, but I only discovered it by accident while digging around the utilities CD. Maybe because every screen coming off the production line will have slightly different colour response and they can’t be bothered testing each screen individually. Maybe because they figure it’s too much to expect owners to know how to apply the profile file within their particular operating system. Maybe because they figure nobody cares. Maybe nobody does.

Is there any such thing as truly neutral colour response? Yes. But if you start to think too much about that it gets complicated.

If everybody’s screen is blue-ish, shouldn’t you keep yours blue-ish so you’ll know what your photos will look like on theirs? That’s depressing, next question.

Shouldn’t you do this for your printer too? Yes. But that gets complicated. And expensive.

Will the printer at the photo store you send your prints to have a neutral colour response? Maybe, but they will hopefully have it calibrated if it doesn’t. But that gets complicated. For now, just worry about your screen.

PetSynth: A Superior Synthesizer for the Commodore Pet

Today I got to try PetSynth v0.006, by Chiron Bramberger. Chiron owns several Commodore Pet personal computers, and was disappointed by the quality of the music-making software available for them, so he wrote his own. He has plans to release it to the Commodore community but currently it’s stored on a 5.25″ disk lodged in his dual disk drive.

In addition to producing lovely beeps, Chiron figured out how to drive distortion in the Commodore’s built-in music hardware, to produce vibrato, and to generate something very much like a drum tone. Hooked up to a pair of pot knobs for bending the signal, the system can produce some mean 8-bit grooves.

Chiron also builds guitar effects pedals from the recycled innards of modems, grafted into lovely hand-painted acrylic boxes with the shells of hard drives for backs, but that’s a separate project.
