Embedding a Fusion Table map in a WordPress post

Just testing: a Google Fusion Table map embedded in a WordPress blog post.

Here’s the original Fusion Table table. That data is described in this Ottawa Citizen post by Glen McGregor.

Here’s the exact embed code used for the above map:

<iframe src="http://www.google.com/fusiontables/embedviz?viz=MAP&q=select+col6+from+2621110+&h=false&lat=43.50124035195688&lng=-91.65871048624996&z=5&t=1&l=col6" scrolling="no" width="100%" height="400px"></iframe>

That’s just the default output from the Fusion Table map visualization, with the minor exception of tweaking the height and width a bit to fit my post. That code is available from within the Fusion Table visualization page if you click the “get embeddable link” link. Here’s a screenshot:

I’m posting this because someone was having trouble with this process and I wanted to try it out myself. I didn’t have to make any modifications to my existing WordPress 3.3.1 installation or theme to get it working.

Please note however that there is still a Conservative majority.

Update (later the same day): I’m pleased to see that Glen McGregor was able to embed the map on the Edmonton Journal site, and wrote a telling article around it.

Google Maps With ‘Earth View’ Still Has ‘Terrain View’

Google has just integrated the 3-D fly-through technology of Google Earth into their standard Google Maps website. How do they pack the tech of a 70 MB program into a utility that runs in a browser? I do not know, although it appears they may have just (“just”) made the Google Earth plugin for web browsers into an automatic download and install.

Vancouver in its 3-dimensional glory

I was concerned that the arrival of Earth view had replaced the ‘terrain’ view option. Among other things, the hillshaded terrain view is handy for grabbing lat/long locations of natural features for quick input into GIS, particularly when used in conjunction with the LatLng marker option.

But all is well. ‘Terrain’ view is still there; it’s just been moved into the ‘More’ dropdown menu.

decent terrain, too
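
As an aside on that GIS workflow: here is roughly what I mean by quick input, sketched in Python. A coordinate grabbed off the map gets dropped into a tiny GeoJSON point file that any GIS will open. The coordinate and file name below are made up for illustration.

# Write a single lat/long grabbed from Google Maps into a GeoJSON point file.
# The coordinate below is invented for illustration.
import json

lat, lng = 49.2827, -123.1207  # pasted from the LatLng marker

feature_collection = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lng, lat]},  # GeoJSON order is lon, lat
        "properties": {"name": "grabbed from terrain view"},
    }],
}

with open("point.geojson", "w") as f:
    json.dump(feature_collection, f, indent=2)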

That Explains It

– from My Way, a New York Times collection of faux maps by illustrator Christoph Niemann

That became slightly less funny to me when I noticed that the road to Rick’s was explicitly labelled (referential jokes are a dish best served cold), but it’s still funny.

Roll Your Own Rosling-esque Statistical Visualizations

It’s a statistical certainty that you watched Hans Rosling’s extraordinary information visualization presentation, from back when TED talks were cool. If not, you should certainly watch it below, as well as all the triumphant sequels.

And now, courtesy of Google, an experimental interface for rolling your own Rosling-esque statistical displays. Below is one of the examples they offer, slightly customized by me, but you can start from scratch and cook up anything you want from the datasets they have on hand.

The interface for assigning variables to axes and symbolism is fantastic. It reminds me of the Hectares BC approach. (Which reminds me in turn of the wonderful and neglected JMP exploratory stats package.) Complicated interfaces are great when you know what you want and want to be able to get it no matter how complicated it is, but a simpler interface allows for faster experimentation.
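If you would rather roll your own from scratch outside the browser, the same trick of assigning variables to axes, size and colour is easy to sketch in a few lines of Python with matplotlib. This is only a sketch with invented data, not the Google tool:

# A Rosling-style bubble chart: one variable per axis, a third mapped to
# bubble size, a fourth to colour. All data here is invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
gdp_per_capita = rng.lognormal(mean=9, sigma=1, size=50)    # x axis
life_expectancy = 50 + 8 * np.log10(gdp_per_capita)         # y axis
population = rng.lognormal(mean=16, sigma=1.5, size=50)     # bubble size
region = rng.integers(0, 5, size=50)                        # colour

plt.scatter(gdp_per_capita, life_expectancy,
            s=population / 2e5, c=region, alpha=0.6, cmap="tab10")
plt.xscale("log")
plt.xlabel("GDP per capita (invented, log scale)")
plt.ylabel("Life expectancy (invented)")
plt.title("Rolling your own Rosling-esque display")
plt.show()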

I hope they expand the amount of data, and I’m sure they will. I also hope they allow for cross-tabbing data from disparate data sets: for now you can only correlate numbers from the World Development Indicators with other WDI numbers, for instance.

We’re increasingly seeing numerical and geographical information displays which explicitly incorporate time, and Google is a big part of that. I’m a big fan of that trend towards explicit temporality: it helps take the focus off stocks and onto flows, and makes it casually clear that baselines really do shift.

With Regards to Google and the NSA

I love those “send an email of protest” web pages that so many activist groups have now. They do all the minutes and minutes of research that I would never get around to, to figure out what the pertinent email addresses and salutations are, and you just enter your name and (optionally) update their suggested email and presto, your email goes off and changes the world, to some degree. My unrigorous research as a former Amnesty letter writer and current Government Scientist suggests that letters and even, yes, emails, do actually make an impact at institutions caught in the uncomfortable light of controversy. And those web pages make it so damn easy. You just get to rant into a text box, and off it goes to make a difference.

Here, for instance, is today’s rant at Google via the ACLU:

The road to being evil is a moderately long and asymptotically creepy one, and by entering yourself into an alliance with the NSA you have placed yourselves squarely on it. The NSA overcollects. This is known. Google defends itself against accusations of overcollection by suggesting that the data is only ever automated and aggregated. This is known.

NSA-style overcollection + Google-style overcollection =/= happy Valentine’s Day. It = me getting very nervous around my Google account. *Especially* if you’re planning to go all “social network provider” on us.

Stay away from the NSA.

Another Ambiguously Worrying Google Development

Google Massively Automates Tropical Deforestation Detection

Landcover change analysis has been an active area of research in the remote sensing community for many years. The idea is to make computational protocols and algorithms that take a couple of digital images collected by satellites or airplanes, turn them into landcover maps, layer them on top of each other, and pick out the places where the landcover type has changed. The best protocols are the most precise, the fastest, and the ones that can chew on multiple images recorded under different conditions. One of the favourite applications of landcover change analysis has been deforestation detection. A particularly popular target for deforestation analysis is the tropical rainforests, which are being chainsawed down at rates which are almost as difficult to comprehend as it is to judge exactly how bad the effects of their removal will be on biological diversity, planetary ecosystem functioning and climate stability.
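To make the layer-and-compare idea concrete, here is a toy post-classification comparison in Python. The class codes and tiny grids are invented for illustration, and this is emphatically not the algorithm Google is using:

# Toy post-classification change detection: two already-classified landcover
# grids on the same pixel layout, compared cell by cell. Class codes invented.
import numpy as np

FOREST, CLEARED, WATER = 1, 2, 3

landcover_2000 = np.array([[1, 1, 3],
                           [1, 1, 3],
                           [1, 2, 3]])     # classified image, time 1
landcover_2010 = np.array([[1, 2, 3],
                           [2, 2, 3],
                           [1, 2, 3]])     # classified image, time 2

changed = landcover_2000 != landcover_2010                      # any change at all
deforested = (landcover_2000 == FOREST) & (landcover_2010 == CLEARED)
print("pixels changed:", int(changed.sum()))
print("pixels deforested:", int(deforested.sum()))

# A from/to cross-tabulation summarizes every class transition at once.
transitions = np.zeros((4, 4), dtype=int)
np.add.at(transitions, (landcover_2000, landcover_2010), 1)
print(transitions[1:, 1:])   # rows: class at time 1, columns: class at time 2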

Google has now gotten itself into the environmental remote sensing game, but in a Google-esque way: massively, ubiquitously, computationally intensively, plausibly benignly, and with probable long-term financial benefits. They are now running a program to vacuum up satellite imagery and apply landcover change detection optimized for spotting deforestation, and for the time being targeted at the Amazon basin. The public doesn’t currently get access to the results, but presumably that access will be rolled out once Google et al are confident in the system. I have to hand it to Google: they are technically careful, but politically aggressive. Amazon deforestation is (or should still be) a very political topic.

The particular landcover change algorithms they are using are apparently the direct product of Greg Asner’s group at Carnegie Institution for Science and Carlos Souza at Imazon. To signal my belief in the importance of this project I’m not going to make a joke about Dr. Asner, as would normally be required by my background in the Ustin Mafia. (AsnerLAB!)

From the Google Blog:

“We decided to find out, by working with Greg and Carlos to re-implement their software online, on top of a prototype platform we’ve built that gives them easy access to terabytes of satellite imagery and thousands of computers in our data centers.”

That’s an interesting comment in its own right. Landcover/landuse change analysis algorithms presumably require a reasonably general-purpose computing environment for implementation. The fact that they could be run “on top of a prototype platform … that gives them easy access to … computers in our data centers” suggests that Google has created some kind of more-or-less general purpose abstraction layer that can invoke their unprecedented computing and data resource.
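In miniature, the kind of abstraction I’m imagining looks something like this: the analyst supplies a per-tile function, and the platform maps it over however many machines it has. In this toy Python sketch the “data center” is just a local process pool, and all the data is invented random noise:

# A toy stand-in for that abstraction layer: the analyst supplies a per-tile
# change-counting function, and "the platform" maps it over many workers.
# Here the data center is just a local process pool; the data is random noise.
from multiprocessing import Pool
import numpy as np

def count_changed(tile_pair):
    before, after = tile_pair
    return int((before != after).sum())

def tiles(before, after, size=256):
    for i in range(0, before.shape[0], size):
        for j in range(0, before.shape[1], size):
            yield before[i:i+size, j:j+size], after[i:i+size, j:j+size]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    before = rng.integers(1, 4, size=(1024, 1024))
    after = rng.integers(1, 4, size=(1024, 1024))
    with Pool() as pool:
        per_tile = pool.map(count_changed, tiles(before, after))
    print("changed pixels:", sum(per_tile))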

They back that comment up in the bullet points:

“Ease of use and lower costs: An online platform that offers easy access to data, scientific algorithms and computation horsepower from any web browser can dramatically lower the cost and complexity for tropical nations to monitor their forests.”

Is Google signaling their development of a commercial supercomputing cloud, à la Amazon S3? Based on the further marketing-speak in the bullets that follow that claim, I would say absolutely yes. This is a test project and a demo for that business. You heard it here first, folks.

Mongabay points out that it’s not just tropical forests that are quietly disappearing, and Canada and some other developed countries don’t do any kind of good job in aggregating or publicly mapping their own enormous deforestation. I wonder: when will Google point its detection program at British Columbia’s endlessly expanding network of just-out-of-sight-of-the-highway clearcuts? And what facts and figures will become readily accessible when it does?



Mongabay also suggests that LIDAR might be involved in this particular process of detecting landcover change, but that wouldn’t be the case. Light Detection and Ranging is commonly used in characterizing forest canopy, but it’s still a plane-based imaging technique, and as such not appropriate for Google’s world-scale ambitions. We still don’t have a credible hyperspectral satellite, and we’re nowhere close to having a LIDAR satellite that can shoot reflecting lasers at all places on the surface of the earth. Although if we did have a satellite that shot reflecting lasers at all places on the surface of the earth, I somehow wouldn’t be surprised if Google was responsible.

Which leads me to the point in the Google-related post where I confess my nervousness around GOOG taking on yet another service (environmental change mapping) that should probably be handled by a democratically directed, publicly accountable organization rather than a publicly traded for-profit corporation. And this is the point in the post where I admit that they are taking on that function first and/or well.

My Google Wave Address And Fears

I’m interested in exploring the Google Wave communication system; if anyone wants to try it, I’m hughstimson@googlewave.com.

Should I not publish that address on a website? Is there Wave spam yet? If not, I’m predicting it. You heard it here first, folks.

I’m also nervous about Google owning yet another slice of our collective information infrastructure. In the case of Wave, the code is (or will be) open-sourced, and in theory anyone could make independent server software to host waves. But unlike conventional email you can’t use an off-line email application as a principal place to host and store the things you’ve written to each other, so our communications are pushed yet further onto the cloud. If my wave service could be hosted on my own cloud server that inter-operated with other people’s self-hosted wave servers that wouldn’t bother me much. But I still haven’t seen movement towards personal cloud computing. And even if someone did make that happen, most people would go with a Wave service hosted and operated by a big company anyway, so that they wouldn’t have to think about it too much. And most of those people will end up with Google, because it will be the first Wave provider and, knowing Google, the best implemented.

Thus, if Waves do significantly supplant emails, the single most important messaging tool on the internet will largely be centralized with the same publicly traded for-profit corporation that handles our mapping and our driving and our book searching and our public transiting and our finding of each other and our finding of everything. And it is indeed Google’s stated hope that Wave will be the next email.

That said, who wants to try it out with me?

Update Dec 8th: I now have a whole bunch of invites to give out. The rate they’re arriving suggests that Wave is close to going open to normal sign-​​ups, but if you’re still looking for early access, and I know you somehow, I can probably hook you up.

I Want a Personal Cloud

I seem to like computing in clouds. I don’t want to: I don’t like the idea of putting my business or academic data into someone else’s for-profit servers, and I think it’s nutty in a special way to put your private photographs and social relationships in there too. But that’s just ideology; in practice I keep on opening up new documents sporting the Google logo, day-dreaming about the science computing I could do with a few hundred dollars’ worth of clock cycles on an Amazon-hosted Hadoop cluster, and contemplating moving my email address over to Google Apps for Your Domain. It’s all just so useful. It works across computers, it works across people, and nowadays it even sometimes works when you don’t have internet. The benefits are immediate and tangible (if cloud computing can be called tangible), and the drawbacks are longer-term and probabilistic.

Thus I was excited when the words “private cloud” started cropping up. A private cloud is a set of web-based applications that run on your own server, instead of running on theirs. Advantages without drawbacks. For now private clouds are for corporations to run on their internal intranets. So the words I especially want to see are “personal cloud”. I already rent space on a web server; now I want to be able to install a calendaring service on hughstimson.org, in the same way I’ve already got blogging and photo gallery apps. And I especially want to install Mozilla Docs there. Mozilla, are you making Mozilla Docs?

Big question: if everybody has their own personal cloud running, can they work together? One of the major advantages of current cloud computing is collaboration. If I open a new Google Docs document here in Vancouver, my collaborators over the strait in Victoria can see it and edit it right away, using an interface they’re familiar with. If I were running a document application on hughstimson.org I could create that file, but other people probably don’t want to open an account on hughstimson.org to edit it, nor do they want to learn to use the interface for whatever editing application I’m running there.

I’m guessing there are technical solutions to this technical problem. People already care very much about standard formats in existing cloud computing, and if all of our clouds are able to speak to each other in a common language, then maybe collaboration across them isn’t such a big deal. I open a new spreadsheet, stored in .ods format on my own server, and start editing it in my web interface in my browser. Then I send out an invitation to an email address at Pink Sheep Media, and they open that document up in their own browsers using their own editing application running on the Pink Sheep Media cloud. Or maybe they’re still using Google Docs, and they access the file from hughstimson.org/docs, but edit it in the Google Docs interface. Maybe login access is handled using OpenID. Why not? It would mean having not just open standards for file formats, but also some common commands for editing functions. The editing could be done on their servers, and then the document would be saved back to mine, staying in the open standard file format the whole time. Is that hard? Does someone know?
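Just to make the “common commands” idea less abstract, here is a purely hypothetical sketch in Python: a handful of standard edit operations that any collaborating server could send, applied by the server that owns the document and then saved back. None of this is a real protocol; the operation names and file format are invented.

# A purely hypothetical sketch of "common commands for editing functions":
# small, standard edit operations that any collaborating server could send,
# applied by the server that owns the document. Not a real protocol.
import json

document = {"cells": {}}   # stand-in for a spreadsheet hosted on my server

def apply_edit(doc, edit):
    """Apply one standard edit command, whichever server it came from."""
    if edit["op"] == "set_cell":
        doc["cells"][edit["cell"]] = edit["value"]
    elif edit["op"] == "clear_cell":
        doc["cells"].pop(edit["cell"], None)
    return doc

# In a real system these would arrive over HTTP from a collaborator's own cloud.
incoming = [
    {"op": "set_cell", "cell": "A1", "value": "hours"},
    {"op": "set_cell", "cell": "A2", "value": 37.5},
]
for edit in incoming:
    apply_edit(document, edit)

# Saved back on my server; a real system would write .ods rather than JSON.
with open("shared-sheet.json", "w") as f:
    json.dump(document, f, indent=2)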

As far as I know, Mozilla is not working on Mozilla Docs. But they are doing some cool stuff in cloud computing. This one looks like a big opportunity to me. At least, I know I want it very much. So somebody, please, build me a personal cloud.

Busting Google’s Book Monopoly

Not so long ago Google signed a deal to end a lawsuit launched against them by the Authors Guild and the Association of American Publishers. The Google Book program has been scanning books from a few major libraries since 2005 (the University of Michigan was one of the first), making the text searchable online and displaying snippets of them in the search dialogue. There was an assumption that Google would make money from this process, either by posting their ubiquitous text ads on the interface, or just by the inexorable process of making the internet more useful and thus bringing more folks into Google’s path, or something. The authors’ organizations were convinced, reasonably, that Google must have seen a way to make money from it, or they wouldn’t be doing it. And they figured that since it’s their job to represent authors, and the product of authors was making money somehow, they wanted a taste. When Google pointed out that making book discovery easier might just be the single biggest thing that anyone could do to drive up declining book sales and make back-catalogs profitable, they didn’t care or weren’t convinced. They wanted money up front, directly, from Google.

So they opened up a public relations front, and opened up a lawsuit alleging infringement.

In the meantime, some other folks got concerned that Google was the only entity scanning books. They figured book discovery was indeed an important public good, and one that probably shouldn’t be the domain of a single for-profit. Google wasn’t talking about giving away their databases, and in fact seemed to be re-negotiating the terms of their agreements with the contributing libraries such that access to the data was becoming increasingly centralized. So the non-profit Open Content Alliance (with cash and tech from Microsoft and Yahoo, among others) fired up their scanners, with the intent of creating a commonly available pool of data on what was in all those books that are sitting on all those shelves.

I give huge props to Google for starting the book scanning movement. Before them, nobody thought it could be done technically, and nobody much seemed to realize that it should be done. In the time since, librarians at participating universities say they’ve seen an enormous uptick in book check-outs. It’s a great program, broadly speaking.

But the data shouldn’t only belong to Google. If the libraries had been collectively smart, once the Open Content Alliance came along offering to scan the books into a shared database they should have switched exclusively over to them, and suggested that Google join the alliance too. If the authors’ associations were smart, they should have supported the initiative whole-heartedly, made what-can-you-do gestures when the databases were leaked and started showing up on Kindles (or alternatively, struck a deal with Amazon), and watched the royalties on sales of physical copies of their back-catalogs skyrocket.

Some libraries did indeed join the OCA, for example the University of California and U of Toronto. But the authors’ associations, as content trade groups tend to be, were stupid with greed. How stupid? In order to settle the deal, Google made them an offer: give us a license to scan the works of all the authors you represent, and we’ll give you some money. But only us! And the authors’ associations said, hey, money! That doesn’t seem like a good deal for the authors to me: book readership has been declining, and getting a few cheques cut from Google HQ isn’t going to change that, but making books relevant and discoverable certainly can. Centralizing that capacity in a single search provider won’t facilitate relevancy and discoverability. And regardless of the financial benefit or loss to authors, it certainly seems like a bad thing for human knowledge.

And that looked to be that. Yet another centralization of a significant public good into that one single monolithic information infrastructure corporation, Google. Aided once again by Google’s vision, their engineering prowess and their strategic astuteness (I like the term “deep cleverness”). You have to hand it to Google, they are brilliant at what they do. The thing is, you might want it back some day. Google should flourish on their ability to compete in technology and business, not on their ability to end competition. So that deal made me very sad.

Which is why today is a happy day:

Justice Dept. Opens Antitrust Inquiry Into Google Books Deal — MIGUEL HELFT, New York Times (Registration required.)

“The Justice Department has begun an inquiry into the antitrust implications of Google’s settlement with authors and publishers over its Google Book Search service, two people briefed on the matter said Tuesday.

“Lawyers for the Justice Department have been in conversations in recent weeks with various groups opposed to the settlement, including the Internet Archive and Consumer Watchdog. More recently, Justice Department lawyers notified the parties to the settlement, including Google, and representatives for the Association of American Publishers and the Authors Guild, that they were looking into various antitrust issues related to the far-reaching agreement.”

Also some reporting from the Wall Street Journal here, but it’ll cost ya.

My guess is that the search term “google antitrust” is going to get popular over the coming years. Google is like a government: they’re only as good as we make them. As far as books go, personally, I’d rather have the Open Book Alliance, and if this investigation is a move towards breaking the weird little collusion between Google and the authors’ associations, maybe open scanning and searchability of books still has a chance.
