Embedding a Fusion Table map in a WordPress post

Just testing: a Google Fusion Table map embedded in a WordPress blog post.

Here’s the original Fusion Table table. That data is described in this Ottawa Citizen post by Glen McGregor.

Here’s the exact embed code used for the above map:

<iframe src="http://www.google.com/fusiontables/embedviz?viz=MAP&q=select+col6+from+2621110+&h=false&lat=43.50124035195688&lng=-91.65871048624996&z=5&t=1&l=col6" scrolling="no" width="100%" height="400px"></iframe>

That’s just the default output from the Fusion Table map visualization, with the minor exception of tweaking the height and width a bit to fit my post. That code is available from within the Fusion Table visualization page if you click the “get embeddable link” link. Here’s a screenshot:

I’m posting this because someone was having trouble with this process and I wanted to try it out myself. I didn’t have to make any modifications to my existing WordPress 3.3.1 installation or theme to get it working.

Please note however that there is still a Conservative majority.

Update (later the same day): I’m pleased to see that Glen McGregor was able to embed the map on the Edmonton Journal site, and wrote a telling article around it.

Google Maps With ‘Earth View’ Still Has ‘Terrain View’

Google has just integrated the 3-D fly-through technology of Google Earth into their standard Google Maps website. How do they pack the tech of a 70 MB program into a utility that runs in a browser? I do not know, although it appears they may have just (“just”) made the Google Earth plugin for web browsers into an automatic download and install.

Vancouver in its 3-dimensional glory

I was concerned that the arrival of Earth view had replaced the ‘terrain’ view option. Among other things, the hillshaded terrain view is handy for grabbing lat/long locations of natural features for quick input into GIS, particularly when used in conjunction with the LatLng marker option.

But all is well. ‘Terrain’ view is still there, it’s just been moved into the ‘More’ dropdown menu.

decent terrain, too
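As an aside, here’s the sort of two-minute script those grabbed coordinates end up in (my own toy example, nothing to do with Google): paste the lat/long from the LatLng marker into a few lines of Python and you get a GeoJSON point that any GIS will happily load. The coordinates and names below are made up.

import json

# Hypothetical coordinates copied from the LatLng marker in terrain view.
lat, lng = 49.3905, -123.0810

feature_collection = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lng, lat]},  # GeoJSON wants lng, lat order
        "properties": {"name": "unnamed peak, grabbed from Google Maps terrain view"},
    }],
}

with open("grabbed_point.geojson", "w") as f:
    json.dump(feature_collection, f, indent=2)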

That Explains It

— from My Way, a New York Times collection of faux maps by illustrator Christoph Niemann

That became slightly less funny to me when I noticed that the road to Rick’s was explicitly labelled (referential jokes are a dish best served cold), but it’s still funny.

Roll Your Own Rosling-esque Statistical Visualizations

It’s a statistical certainty that you watched Hans Rosling’s extraordinary information visualization presentation, from back when TED talks were cool. If not, you should certainly watch it below, as well as all the triumphant sequels.

And now, courtesy of Google, an experimental interface for rolling your own Rosling-esque statistical displays. Below is one of the examples they offer, slightly customized by me, but you can start from scratch and cook up anything you want from the datasets they have on hand.

The interface for assigning variables to axes and symbolism is fantastic. It reminds me of the Hectares BC approach. (Which reminds me in turn of the wonderful and neglected JMP exploratory stats package.) Complicated interfaces are great when you know what you want and want to be able to get it no matter how complicated it is, but a simpler interface allows for faster experimentation.
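If you want the flavour of that variable-to-symbolism mapping without any of Google’s machinery, here’s a toy sketch in Python with matplotlib: one variable each for the x axis, the y axis, bubble size and colour, Rosling-style. All the numbers are invented, not real WDI data.

import matplotlib.pyplot as plt

countries = ["A", "B", "C", "D"]
gdp_per_capita = [1_200, 8_500, 24_000, 41_000]   # x axis (hypothetical)
life_expectancy = [54, 68, 76, 81]                # y axis (hypothetical)
population_m = [12, 90, 35, 5]                    # bubble size (hypothetical)
region_colour = ["tab:red", "tab:blue", "tab:green", "tab:orange"]

plt.scatter(gdp_per_capita, life_expectancy,
            s=[p * 20 for p in population_m], c=region_colour, alpha=0.6)
for name, x, y in zip(countries, gdp_per_capita, life_expectancy):
    plt.annotate(name, (x, y))
plt.xscale("log")
plt.xlabel("GDP per capita (log scale)")
plt.ylabel("Life expectancy (years)")
plt.title("Rosling-esque bubble chart, one time slice")
plt.show()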

I hope they expand the amount of data, and I’m sure they will. I also hope they allow for cross-tabbing data from disparate data sets: for now you can only correlate numbers from the World Development Indicators with other WDI numbers, for instance.

We’re increasingly seeing numerical and geographical information displays which explicitly incorporate time, and Google is a big part of that. I’m a big fan of that trend towards explicit temporality — it helps take the focus off stocks and onto flows, and casually makes it clear that baselines really do shift.

With Regards to Google and the NSA

I love those “send an email of protest” web pages that so many activist groups have now. They do all the minutes and minutes of research that I would never get around to, to figure out what the pertinent email addresses and salutations are, and you just enter your name and (optionally) update their suggested email and presto, your email goes off and changes the world, to some degree. My unrigorous research as a former Amnesty letter writer and current Government Scientist suggests that letters and even, yes, emails, do actually make an impact at institutions caught in the uncomfortable light of controversy. And those web pages make it so damn easy. You just get to rant into a text box, and off it goes to make a difference.

Here, for instance, is today’s rant at Google via the ACLU:

The road to being evil is a moderately long and asymptotically creepy one, and by entering yourself into an alliance with the NSA you have placed yourselves squarely on it. The NSA overcollects. This is known. Google defends itself against accusations of overcollection by suggesting that the data is only ever automated and aggregated. This is known.

NSA-style overcollection + Google-style overcollection =/= happy Valentines Day. It = me getting very nervous around my Google account. *Especially* if you’re planning to go all “social network provider” on us.

Stay away from the NSA.

Another Ambiguously Worrying Google Development

Google Massively Automates Tropical Deforestation Detection

Landcover change analysis has been an active area of research in the remote sensing community for many years. The idea is to make computational protocols and algorithms that take a couple of digital images collected by satellites or airplanes, turn them into landcover maps, layer them on top of each other, and pick out the places where the landcover type has changed. The best protocols are the most precise, the fastest, and the best able to chew on multiple images recorded under different conditions. One of the favourite applications of landcover change analysis has been deforestation detection. A particularly popular target for deforestation analysis is the tropical rainforests, which are being chainsawed down at rates which are almost as difficult to comprehend as it is to judge exactly how bad the effects of their removal will be on biological diversity, planetary ecosystem functioning and climate stability.
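For what it’s worth, the very core of that layer-and-compare step is almost embarrassingly simple once the hard part (producing two co-registered classified maps) is done. Here’s a bare-bones sketch of my own, not anybody’s actual protocol, that flags pixels which were forest at the first date and not at the second:

import numpy as np

FOREST = 1  # hypothetical landcover class code

# Two toy classified "images" standing in for co-registered rasters from two dates.
landcover_2005 = np.array([[1, 1, 2],
                           [1, 1, 3],
                           [2, 1, 1]])
landcover_2010 = np.array([[1, 2, 2],
                           [2, 2, 3],
                           [2, 1, 1]])

# Deforestation = was forest at time one, isn't forest at time two.
deforested = (landcover_2005 == FOREST) & (landcover_2010 != FOREST)
print(deforested)
print(f"{deforested.sum()} of {deforested.size} pixels flagged as deforested")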

Google has now gotten itself into the environmental remote sensing game, but in a Google-esque way: massively, ubiquitously, computationally intensively, plausibly benignly, and with probable long-term financial benefits. They are now running a program to vacuum up satellite imagery and apply landcover change detection optimized for spotting deforestation, and for the time being targeted at the Amazon basin. The public doesn’t currently get access to the results, but presumably that access will be rolled out once Google et al are confident in the system. I have to hand it to Google: they are technically careful, but politically aggressive. Amazon deforestation is (or should still be) a very political topic.

The particular landcover change algorithms they are using are apparently the direct product of Greg Asner’s group at Carnegie Institution for Science and Carlos Souza at Imazon. To signal my belief in the importance of this project I’m not going to make a joke about Dr. Asner, as would normally be required by my background in the Ustin Mafia. (AsnerLAB!)

From the Google Blog:

“We decided to find out, by working with Greg and Carlos to re-implement their software online, on top of a prototype platform we’ve built that gives them easy access to terabytes of satellite imagery and thousands of computers in our data centers.”

That’s an interesting comment in its own right. Landcover/landuse change analysis algorithms presumably require a reasonably general-purpose computing environment for implementation. The fact that they could be run “on top of a prototype platform … that gives them easy access to … computers in our data centers” suggests that Google has created some kind of more-or-less general purpose abstraction layer that can invoke their unprecedented computing and data resources.
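I have no idea what that prototype platform actually looks like, but the general shape of the abstraction (hand it a per-scene analysis function and a pile of scenes, let it worry about the computers) is easy to caricature. Here’s a toy stand-in of my own, with fake scene IDs and local processes playing the part of Google’s data centers:

from concurrent.futures import ProcessPoolExecutor

def detect_change(scene_id):
    # Placeholder for fetching imagery and running Asner/Souza-style change
    # detection on one scene; here it just fakes a deforested fraction.
    return scene_id, sum(ord(c) for c in scene_id) % 100 / 100.0

scene_ids = [f"amazon_tile_{i:03d}" for i in range(8)]  # hypothetical tiles

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        for scene, fraction in pool.map(detect_change, scene_ids):
            print(f"{scene}: {fraction:.0%} of pixels flagged as change")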

They back that comment up in the bullet points:

“Ease of use and lower costs: An online platform that offers easy access to data, scientific algorithms and computation horsepower from any web browser can dramatically lower the cost and complexity for tropical nations to monitor their forests.”

Is Google signaling their development of a commercial supercomputing cloud, a la Amazon S3? Based on the further marketing-speak in the bullets that follow that claim, I would say absolutely yes. This is a test project and a demo for that business. You heard it here first, folks.

Mongabay points out that it’s not just tropical forests that are quietly disappearing, and Canada and some other developed countries don’t do any kind of good job in aggregating or publicly mapping their own enormous deforestation. I wonder: when will Google point its detection program at British Columbia’s endlessly expanding network of just-out-of-sight-of-the-highway clearcuts? And what facts and figures will become readily accessible when it does?


[Embedded Google Map: View Larger Map]

Mongabay also infers that LIDAR might be involved in this particular process of detecting landcover change, but that wouldn’t be the case. Light Detection and Ranging is commonly used in characterizing forest canopy, but it’s still a plane-based imaging technique, and as such not appropriate for Google’s world-scale ambitions. We still don’t have a credible hyperspectral satellite, and we’re nowhere close to having a LIDAR satellite that can shoot reflecting lasers at all places on the surface of the earth. Although if we did have a satellite that shot reflecting lasers at all places on the surface of the earth, I somehow wouldn’t be surprised if Google was responsible.

Which leads me to the point in the Google-related post where I confess my nervousness around GOOG taking on yet another service — environmental change mapping — that should probably be handled by a democratically directed, publicly accountable organization rather than a publicly traded for-profit corporation. And this is the point in the post where I admit that they are taking on that function first and/or well.

My Google Wave Address And Fears

I’m interested in exploring the Google Wave communication system, if anyone wants to try it I’m [email protected].

Should I not publish that address on a website? Is there Wave spam yet? If not, I’m predicting it. You heard it here first, folks.

I’m also nervous about Google owning yet another slice of our collective information infrastructure. In the case of Wave, the code is (or will be) open-sourced, and in theory anyone could make independent server software to host waves. But unlike conventional email you can’t use an off-line email application as a principal place to host and store the things you’ve written to each other, so our communications are pushed yet further onto the cloud. If my wave service could be hosted on my own cloud server that inter-operated with other people’s self-hosted wave servers that wouldn’t bother me much. But I still haven’t seen movement towards personal cloud computing. And even if someone did make that happen, most people would go with a Wave service hosted and operated by a big company anyway, so that they wouldn’t have to think about it too much. And most of those people will end up with Google, because it will be the first Wave provider and, knowing Google, the best implemented.

Thus, if Waves do significantly supplant emails, the single most important messaging tool on the internet will largely be centralized with the same publicly traded for-profit corporation that handles our mapping and our driving and our book searching and our public transiting and our finding of each other and our finding of everything. And it is indeed Google’s stated hope that Wave will be the next email.

That said, who wants to try it out with me?

Update Dec 8th: I now have a whole bunch of invites to give out. The rate they’re arriving suggests that Wave is close to going open to normal sign-ups, but if you’re still looking for early access, and I know you somehow, I can probably hook you up.

I Want a Personal Cloud

I seem to like computing in clouds. I don’t want to: I don’t like the idea of putting my business or academic data into someone else’s for-profit servers, and I think it’s nutty in a special way to put your private photographs and social relationships in there too. But that’s just ideology; in practice I keep on opening up new documents sporting the Google logo, day-dreaming about the science computing I could do with a few hundred dollars’ worth of clock cycles on an Amazon-hosted Hadoop cluster, and contemplating moving my email address over to Google Apps for Your Domain. It’s all just so useful. It works across computers, it works across people, and nowadays it even sometimes works when you don’t have internet. The benefits are immediate and tangible (if cloud computing can be called tangible), and the drawbacks are longer-term and probabilistic.

Thus I was excited when the words “private cloud” started cropping up. A private cloud is a set of web-based applications that run on your own server instead of on theirs. Advantages without drawbacks. For now private clouds are for corporations to run on their internal intranets. So the words I especially want to see are “personal cloud”. I already rent space on a web server; now I want to be able to install a calendaring service on hughstimson.org, in the same way I’ve already got blogging and photo gallery apps. And I especially want to install Mozilla Docs there. Mozilla, are you making Mozilla Docs?

Big question: if everybody has their own personal cloud running, can they work together? One of the major advantages of current cloud computing is collaboration. If I open a new Google Docs document here in Vancouver, my collaborators over the strait in Victoria can see it and edit it right away, using an interface they’re familiar with. If I were running a document application on hughstimson.org I could create that file, but other people probably don’t want to open an account on hughstimson.org to edit it, nor do they want to learn to use the interface for whatever editing application I’m running there.

I’m guessing there are technical solutions to this technical problem. People already care very much about standard formats in existing cloud computing, and if all of our clouds are able to speak to each other in a common language, then maybe collaboration across them isn’t such a big deal. I open a new spreadsheet, stored in .ods format on my own server, and start editing it on my web interface in my browser. Then I send out an invitation to an email address at Pink Sheep Media, and they open that document up in their own browsers using their own editing application running on the Pink Sheep Media cloud. Or maybe they’re still using Google Docs, and they access the file from hughstimson.org/docs, but edit it in the Google Docs interface. Maybe login access is handled using OpenID. Why not? It would mean having not just open standards for file formats, but also some common commands for editing functions. The editing could be done on their servers, and then the document would be saved back to mine, staying in the open standard file format the whole time. Is that hard? Does someone know?
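Here’s roughly the round trip I’m imagining, as a back-of-the-envelope Python sketch. Everything in it is hypothetical: the URLs, the endpoints, and the notion that a plain HTTP GET and PUT of an .ods file plus some OpenID-flavoured token would be the shared protocol between personal clouds. Nothing like this actually runs on hughstimson.org.

import requests

DOC_URL = "https://hughstimson.org/docs/budget.ods"        # hypothetical endpoint
AUTH = {"Authorization": "Bearer openid-token-goes-here"}   # hypothetical auth

# A collaborator's cloud fetches the document in its open standard format...
resp = requests.get(DOC_URL, headers=AUTH, timeout=10)
resp.raise_for_status()
with open("budget.ods", "wb") as f:
    f.write(resp.content)

# ...their own editing app changes it locally, then saves it straight back to
# my server, so the file stays in the open format and stays on my hosting.
with open("budget.ods", "rb") as f:
    requests.put(DOC_URL, data=f, headers=AUTH, timeout=10).raise_for_status()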

As far as I know, Mozilla is not working on Mozilla Docs. But they are doing some cool stuff in cloud computing. This one looks like a big opportunity to me. At least, I know I want it very much. So somebody, please, build me a personal cloud.

Busting Google’s Book Monopoly

Not so long ago Google signed a deal to end a lawsuit launched against them by the Authors Guild and the Association of American Publishers. The Google Book program has been scanning books from a few major libraries since 2005–University of Michigan was one of the first–and making the text searchable online, and displaying snippets of them in the search dialogue. There was an assumption that Google would make money from this process, either by posting their ubiquitous text ads on the interface, or just by the inexorable process of making the internet more useful and thus bringing more folks into Google’s path, or something. The Authors organizations were convinced, reasonably, that Google must have seen a way to make money from it, or they wouldn’t be doing it. And they figured that since it’s their job to represent authors, and the product of authors was making money somehow, they wanted a taste. When Google pointed out that making book discovery easier might just be the single biggest thing that anyone could do to drive up declining book sales and make back-catalogs profitable, they didn’t care or weren’t convinced. They wanted money up front, directly, from Google.

So they opened up a public relations front, and opened up a lawsuit alleging infringement.

In the meantime, some other folks got concerned that Google was the only entity scanning books. They figured book discovery was indeed an important public good, and one that probably shouldn’t be the domain of a single for-profit. Google wasn’t talking about giving away their databases, and in fact seemed to be re-negotiating the terms of their agreements with the contributing libraries such that access to the data was becoming increasingly centralized. So the non-profit Open Content Alliance (with cash and tech from Microsoft and Yahoo, among others) fired up their scanners, with the intent of creating a commonly available pool of data on what was in all those books that are sitting on all those shelves.

I give huge props to Google for starting the book scanning movement. Before them, nobody thought it could be done technically, and nobody much seemed to realize that it should be done. In the time since, librarians at participating universities say they’ve seen an enormous uptick in book check-outs. It’s a great program, broadly speaking.

But the data shouldn’t only belong to Google. If the libraries had been collectively smart, once the Open Content Alliance came along offering to scan the books into a shared database they should have switched exclusively over to them, and suggested that Google join the alliance too. If the authors’ associations were smart, they should have supported the initiative whole-heartedly, made what-can-you-do gestures when the databases were leaked and started showing up on Kindles (or alternatively, struck a deal with Amazon), and watched the royalties on sale of physical copies of their back-catalogs skyrocket.

Some libraries did indeed join the OCA, for example University of California and U of Toronto. But the authors’ associations–as content trade groups tend to be–were stupid with greed. How stupid? In order to settle the suit, Google made them an offer: give us a license to scan the works of all the authors you represent, and we’ll give you some money. But only us! And the authors’ associations said, hey, money! That doesn’t seem like a good deal for the authors to me: book readership has been declining, and getting a few cheques cut from Google HQ isn’t going to change that, but making books relevant and discoverable certainly can. Centralizing that capacity in a single search-provider won’t facilitate relevancy and discoverability. And regardless of the financial benefit or loss to authors, it certainly seems like a bad thing for human knowledge.

And that looked to be that. Yet another centralization of a significant public good into that one single monolithic information infrastructure corporation, Google. Aided once again by Google’s vision, their engineering prowess and their strategic astuteness (I like the term “deep cleverness”). You have to hand it to Google, they are brilliant at what they do. The thing is, you might want it back some day. Google should flourish on their ability to compete in technology and business, not on their ability to end competition. So that deal made me very sad.

Which is why today is a happy day:

Justice Dept. Opens Antitrust Inquiry Into Google Books Deal — MIGUEL HELFT, New York Times (Registration required.)

“The Justice Department has begun an inquiry into the antitrust implications of Google’s settlement with authors and publishers over its Google Book Search service, two people briefed on the matter said Tuesday.

Lawyers for the Justice Department have been in conversations in recent weeks with various groups opposed to the settlement, including the Internet Archive and Consumer Watchdog. More recently, Justice Department lawyers notified the parties to the settlement, including Google, and representatives for the Association of American Publishers and the Authors Guild, that they were looking into various antitrust issues related to the far-reaching agreement.”

Also some reporting from the Wall Street Journal here, but it’ll cost ya.

My guess is that the search term “google antitrust” is going to get popular over the coming years. Google is like a government: they’re only as good as we make them. As far as books go, personally, I’d rather have the Open Book Alliance, and if this investigation is a move towards breaking the weird little collusion between Google and the authors’ associations, maybe open scanning and searchability of books still has a chance.
