GIS Art from Landscape Biodiversity Project

I’m working on a fun project for the BC Forest Practices Board. We’re taking great whackloads of province-wide spatial data and transmogrifying it into handy reports on the physical status of forests in different administrative and ecological zones. And I’m pleased to report that the project is beginning to produce some GIS art.

biogeoclimatic zones around Port Alberni

GIS art, for those who haven’t had the pleasure, consists of the serendipitous bits of aesthetic map flotsam that tend to pop up as intermediary products in geographic analysis chains. They’re the recombinant product of the natural attractiveness of landforms, the semi-random automated assignment of colours to landcover classes, and the quasi-organic distortions introduced by algorithms. The above is a relatively unprocessed version; for comparison, see one of my old favourites:


localized explanatory power of soil water for shapeness
of juniper, Strawberry Crater, Wupatki AZ

OK, so maybe it’s not great art. But when GIS art does show up, it’s often a nicely timed distraction from the more abstract “pleasures” of analytical troubleshooting.

East Van is for Local Photographers (Maybe)

Eric Fischer used the locations of geotagged photos on Flickr to make a series of city maps he calls The Geotaggers’ World Atlas. Then he got even cleverer and figured out which of the photos came from locals and which came from tourists, based on the time lag between photographs. The result is a new set of maps called Locals and Tourists.
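I haven’t seen Eric’s actual code, but the heart of the local-versus-tourist call is presumably something like this sketch: a user whose geotagged photos in a city stretch over a long span of time gets counted as a local, while a short burst looks like a visit. (The 30-day threshold here is my assumption, not his.)

```python
from datetime import timedelta

def classify_photographer(photo_times, min_local_span=timedelta(days=30)):
    """Guess local vs. tourist from the time span of one user's geotagged
    photos within a single city. A long span suggests a resident; a short
    burst of photos looks like a visit. (Hypothetical threshold.)"""
    if len(photo_times) < 2:
        return "undetermined"
    span = max(photo_times) - min(photo_times)
    return "local" if span >= min_local_span else "tourist"
```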

Here’s Vancouver:

Red dots are photos from tourists, blue dots are from locals, and yellow are cases where Eric’s algorithm wasn’t able to conclusively differentiate. I notice two things.

  1. Vancouver is the 9th city on the list of 96. And according to Eric, he ordered them “by the number of pictures taken by locals”. So Vancouverites like to take photos of their city. (Although I suppose it depends on how big the other cities in the project were). Compare for instance with Las Vegas.
  2. Everything east of downtown belongs to the locals. Clark, Commercial, East Hastings, 2nd and for some reason Heatley are thick bands of solid blue.

crop

Except that I don’t entirely trust point #2. It just doesn’t make sense that Heatley would outshine Broadway as a go-to destination for photographers. Here’s what I think is happening: there aren’t actually that many people who go on blanket photo missions, then do the geeky work of linking their imagery output to GPS tracks and uploading them in bulk to Flickr. Those few photomatic enthusiasts are driving the apparent patterns. That theory is anecdotally supported by this comment from Roland.

It’s a striking differential nonetheless. Next time I find myself visiting a new city, an interesting project would be to track down the places that the locals think are worthy of camera action, but don’t usually get much interest from foreign photogs.

Google Maps With ‘Earth View’ Still Has ‘Terrain View’

Google has just integrated the 3-D fly-through technology of Google Earth into their standard Google Maps website. How do they pack the tech of a 70 MB program into a utility that runs in a browser? I do not know, although it appears they may have just (“just”) made the Google Earth plugin for web browsers into an automatic download and install.

Vancouver in its 3-dimensional glory

I was concerned that the arrival of Earth view had replaced the ‘terrain’ view option. Among other things, the hillshaded terrain view is handy for grabbing lat/long locations of natural features for quick input into GIS, particularly when used in conjunction with the LatLng marker option.

But all is well. ‘Terrain’ view is still there, it’s just been moved into the ‘More’ dropdown menu.

decent terrain, too

Hawth’s Tools Becomes the “Geospatial Modelling Environment”

Whenever I run into a more-than-usually knotty GIS analysis step, one that the tools bundled with ArcGIS just can’t seem to unravel, I look first to Hawth’s Tools. Hawth’s Tools is a free package of add-ons for ArcGIS, capable of all manner of tricks, like “for each polygon, create a new attribute which records the range of values of the points which fall into it”. Handy stuff like that. When asked about a GIS problem, I have a bad habit of saying “oh sure, I can do that” and then discovering it’s not so easy, and as such I’ve often thanked Hawthorne L. Beyer under my breath for his freely given antidote to my hubris.
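For a sense of what that operation involves outside of Hawth’s Tools, here’s a rough equivalent sketched with geopandas (not Hawth’s code, just the same idea: spatially join the points to the polygons, then take the range of the point values inside each one; the column names and the newer `predicate` keyword are my assumptions):

```python
import geopandas as gpd

def point_value_range_per_polygon(polygons, points, value_col="value"):
    """For each polygon, compute max - min of a point attribute over the
    points falling inside it, and attach the result as a new column."""
    joined = gpd.sjoin(points, polygons, how="inner", predicate="within")
    ranges = joined.groupby("index_right")[value_col].agg(lambda v: v.max() - v.min())
    out = polygons.copy()
    out["value_range"] = ranges  # aligned on the polygon index
    return out
```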

Having just such a task on my hands today, I look to spatialecology.com and discover that the Hawth has made good on his long-standing threat of re-writing the whole H-Tools package in a new and ambitious form, currently in beta distribution and very handsomely titled “The Geospatial Modelling Environment”.

“GME provides you with a suite of analysis and modelling tools, ranging from small ‘building blocks’ that you can use to construct a sophisticated work-flow, to completely self-contained analysis programs. It also uses the extraordinarily powerful open source software R as the statistical engine to drive some of the analysis tools. One of the many strengths of R is that it is open source, completely transparent and well documented: important characteristics for any scientific analytical software.”

Excellent.

update: I’ve now used GME for some basic processing. The enhanced command-line interface it employs might be a little unfriendly for some users, but on the whole it looks like an excellent system with real promise. And given that it’s built on open libraries for geo-statistics and visualization, the tools can presumably be ported into other GIS packages, including open source ones, relatively easily.

Graffiti Map of Vancouver

A map of all the graffiti known to the City of Vancouver, courtesy of the fantastic City of Vancouver Open Data Catalogue.


View Larger Map

I’m not sure what the difference between the grey-on-yellow and the yellow-on-grey checkmarks is. I guess they need to work on their metadata still.

update: it looks like each dataset in the catalogue is actually associated with a handy metadata page. For example, here’s useful info on the graffiti data, including the fact that the data is updated weekly. Although I still can’t figure out what the difference between the grey and yellow boxes is in the Google Maps version above.

Distributed Emergency Mapping In Haiti

Ushahidi is “a platform that allows anyone to gather distributed data via SMS, email or web and visualize it on a map or timeline.” They are producing a real-time map of crisis-relevant locations in Haiti. People currently in Port-au-Prince can submit reports by text message to a single phone number. The raw feed of incoming reports is interpreted by volunteers, who then add the reports to the map under a number of categories like “road blockage”, “food available” or “missing person”. Volunteer teams in the US have been swapping off with teams in Africa to maintain the site and keep up with reports throughout the day and night. Those volunteers are also monitoring a long list of news articles, blogs, agency updates and the like to generate reports directly.

In addition to the map, people on the ground can sign up to be notified whenever there is a new report within a customizable distance of their location.
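The proximity part of that is presumably just a great-circle distance check against each subscriber’s chosen radius; something along these lines (my sketch of the obvious approach, not Ushahidi’s actual code):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    earth_radius_km = 6371.0
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def should_notify(sub_lat, sub_lon, report_lat, report_lon, radius_km):
    """True if a new report falls inside the subscriber's alert radius."""
    return haversine_km(sub_lat, sub_lon, report_lat, report_lon) <= radius_km
```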

There seem to be multiple aggregations of incoming reports, possibly broken into SMS messages submitted directly from Port-au-Prince, and items collected from news and social networks by volunteers. Those aggregations contain detailed and in many cases alarming information.

Reading through the blog posts and news reports, it looks like the map is being used primarily by international agencies to distribute resources. I’m curious to what degree internet access is available within Port-au-Prince itself. There must be some, because before cell service was established, reports were coming in through “Web reports, email and Twitter”. According to their situation room, the SMS method of reporting is now working.

Additionally, Google is coordinating a person-finding application.

Enhance

Forgive me for saying so, but I know a thing or two about enhancing photographs. I’ve put some time in as a satellite and aerial imagery analyst, and as a hobby photographer I make no apologies about Photoshop. I grok histogram response curves, level shifting, global and local contrast, interpolation, headroom, falloff, edge detection, hue isolation and saturation expansion. I know you almost always zoom out (!) to see a pattern, but if you want to get into pixel-peeping, I know a little about decomposing a pixel into constituent spectral signatures, k-means clustering and machine-learning classification, and all the lovely supervised and unsupervised pixel binning techniques. If I give myself an hour to study up, I can even keep the Minimum Noise Fraction transformation straight in my head for 15 minutes. And the N-Dimensional Visualizer speaks for itself.

There is an enormous amount you can do to make a shape or pattern or shade of interest stand out in an image, by tweaking the colour or contrast response, or exploiting extra parts of the light spectrum to help the computer find hidden colours. You can fuzz together noisy patterns to see the shapes behind them, or bin together multiple pixels to lighten up the darkness. Just about the only thing you can’t do is create detail where there wasn’t any to begin with.
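To make the distinction concrete, here is roughly what a couple of those legitimate tricks look like in code: a percentile contrast stretch and a 2x2 pixel binning, sketched with numpy. Both redistribute or average information that is already in the image; neither manufactures new detail.

```python
import numpy as np

def contrast_stretch(img, low_pct=2, high_pct=98):
    """Linear stretch: remap the 2nd-98th percentile range onto 0-255.
    Dim features become easier to see, but no new information appears."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = (img.astype(float) - lo) / (hi - lo)
    return np.clip(stretched, 0.0, 1.0) * 255.0

def bin_2x2(img):
    """Average 2x2 blocks of pixels: lightens up noisy shadows at the
    cost of resolution, which is the opposite of movie-style 'enhance'."""
    h = img.shape[0] - img.shape[0] % 2
    w = img.shape[1] - img.shape[1] % 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```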

So I get grumpy every time I watch a movie with an image analysis scene, and the one and only thing they always always do is the one damn thing you can’t.

dunk3d made a montage:

Two they left out:

Blade Runner (the original?)

and of course Super Troopers

…(although it’s true that imagery analysts wear state trooper uniforms to operate their computer terminals.)

The Natural Earth Dataset Is Online

I was working on a mapping project, and I was frustrated that I couldn’t find basic shoreline data for the Great Lakes. You wouldn’t think it would be hard to get something as simple as an outline of the most famous fresh water in the world, but it is: Geobase.ca, for instance, only covers half of the lakes because Geobase stops where Canada stops. Even if Geobase were international in scope, the National Hydro dataset on offer is insanely detailed and would have to be mosaicked and filtered and smoothed and generalized before being used to make a regional-scale map. Alternatively, there are a few clunky world baselayers floating around, but zoomed in to a regional scale they look like they were digitized by an intern in a hurry. finder.geocommons.com (bless ’em) has all kinds of interesting specialized products — Fishing Special Regulation Lakes in Pennsylvania for instance — but not just, you know, a decent map of all the lakes.

Nor is this an unusual problem. General basemap data is rare. High quality, consistent, freely available, freely publishable basemap data is even rarer. Quality, consistent, usable basemap data that is predictably findable is gold. ESRI gives away some low-res, somewhat inconsistent free data, and will sell you a pretty comprehensive set of higher quality stuff. But in my experience the quality of even the paid data varies like crazy from jurisdiction to jurisdiction, and often doesn’t match up at the borders. Each mapper I know has a little stash of their favourite basemaps, and sometimes they get traded around, and sometimes they don’t. And that still doesn’t solve your problem if you’re trying to make a map that looks consistent across multiple provinces or states or countries.

(I should point out that normally the Great Lakes would be an exception, since gis.glin.net is a good GIS data source. But it’s been broken these days.)

If you want a nice tinted relief map of the world, that at least has been available thanks to the work and generosity of Tom Patterson, USGS cartographer and acknowledged master of shaded relief and naturalistic cartography. And for a while there have been hints that Tom was participating in a new project that would release a broader set of basemap data, including high-quality vector data as well as raster layers. Wouldn’t that be a thing!

So I decided to check in on the progress of that project. And lo, it has been released! Since last week! Natural Earth is online and distributing data.

What’s in there? An extraordinary cartographical toolkit — physical and cultural, hand-generalized to 3 different useful scales. Checked for accuracy and consistency. Comprehensive across the world. With an active infrastructure for gathering reported errors and plans to revise and rerelease improved iterations. Free as in beer, free as in speech. You don’t even have to sign in. And as if all that wasn’t enough, Tom Patterson seems to have included a new portfolio of shaded relief layers, including some gorgeous hypsometric tinted landcover representations.

In addition to Tom, the other driving collaborator seems to be one Nathaniel Kelso, someone I didn’t know of but who works for the Washington Post. Apparently the kernel of the vector side of Natural Earth is a dataset the Washington Post had assembled for quick-turnaround diagrammatic cartography. I’m not quite clear on the funding of the project, but as far as I can tell the motivation is sheer good will. It’s an unexpected and extremely promising cartography resource.

Google Massively Automates Tropical Deforestation Detection

Landcover change analysis has been an active area of research in the remote sensing community for many years. The idea is to make computational protocols and algorithms that take a couple of digital images collected by satellites or airplanes, turn them into landcover maps, layer them on top of each other, and pick out the places where the landcover type has changed. The best protocols are the most precise, the fastest, and the ones that can chew on multiple images recorded under different conditions. One of the favourite applications of landcover change analysis has been deforestation detection. A particularly popular target for deforestation analysis is the tropical rainforests, which are being chainsawed down at rates that are almost as difficult to comprehend as it is to judge exactly how bad the effects of their removal will be on biological diversity, planetary ecosystem functioning and climate stability.
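Stripped to its skeleton, the “layer them on top of each other and pick out the changes” step is something like this toy post-classification comparison in numpy (not any group’s actual protocol, and the class codes are made up):

```python
import numpy as np

FOREST = 1  # hypothetical class code produced by the classification step

def deforestation_mask(classes_t1, classes_t2):
    """Given two co-registered landcover classifications (same grid, two
    dates), flag pixels that were forest at t1 and are not forest at t2."""
    return (classes_t1 == FOREST) & (classes_t2 != FOREST)

# toy scene: a 3x3 patch of forest with the centre pixel cleared by date 2
t1 = np.full((3, 3), FOREST)
t2 = t1.copy()
t2[1, 1] = 2  # e.g. bare ground
print(deforestation_mask(t1, t2).sum())  # -> 1 changed pixel
```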

Google has now gotten itself into the environmental remote sensing game, but in a Google-esque way: massively, ubiquitously, computationally intensively, plausibly benignly, and with probable long-term financial benefits. They are now running a program to vacuum up satellite imagery and apply landcover change detection optimized for spotting deforestation, for the time being targeted at the Amazon basin. The public doesn’t currently get access to the results, but presumably that access will be rolled out once Google et al are confident in the system. I have to hand it to Google: they are technically careful, but politically aggressive. Amazon deforestation is (or should still be) a very political topic.

The particular landcover change algorithms they are using are apparently the direct product of Greg Asner’s group at Carnegie Institution for Science and Carlos Souza at Imazon. To signal my belief in the importance of this project I’m not going to make a joke about Dr. Asner, as would normally be required by my background in the Ustin Mafia. (AsnerLAB!)

From the Google Blog:

“We decided to find out, by working with Greg and Carlos to re-implement their software online, on top of a prototype platform we’ve built that gives them easy access to terabytes of satellite imagery and thousands of computers in our data centers.”

That’s an interesting comment in its own right. Landcover/landuse change analysis algorithms presumably require a reasonably general-purpose computing environment for implementation. The fact that they could be run “on top of a prototype platform … that gives them easy access to … computers in our data centers” suggests that Google has created some kind of more-or-less general-purpose abstraction layer that can invoke their unprecedented computing and data resources.
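Purely as an illustration of what such an abstraction layer implies (everything below is hypothetical and is not Google’s API): the scientist supplies the per-tile algorithm, and the platform worries about farming it out over the imagery archive and the machines.

```python
from concurrent.futures import ProcessPoolExecutor

def detect_change_in_tile(tile):
    """Stand-in for a landcover-change algorithm applied to one chunk of
    imagery. The real scientific work would go here."""
    return tile  # placeholder result

def run_over_archive(tiles, workers=8):
    """The kind of thing the quoted 'platform' hints at: map a user-supplied
    algorithm over an imagery archive, with a local process pool standing in
    for thousands of data-centre machines."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(detect_change_in_tile, tiles))
```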

They back that comment up in the bullet points:

“Ease of use and lower costs: An online platform that offers easy access to data, scientific algorithms and computation horsepower from any web browser can dramatically lower the cost and complexity for tropical nations to monitor their forests.”

Is Google signaling their development of a commercial supercomputing cloud, a la Amazon S3? Based on the further marketing-speak in the bullets that follow that claim, I would say absolutely yes. This is a test project and a demo for that business. You heard it here first, folks.

Mongabay points out that it’s not just tropical forests that are quietly disappearing, and that Canada and some other developed countries don’t do any kind of good job of aggregating or publicly mapping their own enormous deforestation. I wonder: when will Google point its detection program at British Columbia’s endlessly expanding network of just-out-of-sight-of-the-highway clearcuts? And what facts and figures will become readily accessible when it does?


View Larger Map

Mongabay also infers that LIDAR might be involved in this particular process of detecting landcover change, but that wouldn’t be the case. Light Detection and Ranging is commonly used in characterizing forest canopy, but it’s still a plane-based imaging technique, and as such not appropriate for Google’s world-scale ambitions. We still don’t have a credible hyperspectral satellite, and we’re nowhere close to having a LIDAR satellite that can shoot reflecting lasers at all places on the surface of the earth. Although if we did have a satellite that shot reflecting lasers at all places on the surface of the earth, I somehow wouldn’t be surprised if Google was responsible.

Which leads me to the point in the Google-related post where I confess my nervousness around GOOG taking on yet another service — environmental change mapping — that should probably be handled by a democratically directed, publicly accountable organization rather than a publicly traded for-profit corporation. And this is the point in the post where I admit that they are taking on that function first and/or well.

Landscape Maps Colourized by GPS’ed Flickr Photos

I won’t attempt to summarize how this map was made:

harvard_colors

Just go read the post about its making:

Flickr As A Paintbrush — cartogrammar blog

And incidentally, the guy who made that map is also helping make these fascinating things. My inner cartographer is freaking out.
