Embedding a Fusion Table map in a WordPress post

Just testing: a Google Fusion Table map embedded in a WordPress blog post.

Here’s the original Fusion Table. The data is described in this Ottawa Citizen post by Glen McGregor.

Here’s the exact embed code used for the above map:

<iframe src="http://www.google.com/fusiontables/embedviz?viz=MAP&q=select+col6+from+2621110+&h=false&lat=43.50124035195688&lng=-91.65871048624996&z=5&t=1&l=col6" scrolling="no" width="100%" height="400px"></iframe>

That’s just the default output from the Fusion Table map visualization, except that I tweaked the height and width a bit to fit my post. That code is available from within the Fusion Table visualization page if you click the “get embeddable link” link. Here’s a screenshot:

I’m posting this because someone was having trouble with this process and I wanted to try it out myself. I didn’t have to make any modifications to my existing WordPress 3.3.1 installation or theme to get it working.
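One more note in case it’s useful: you can recenter or resize the map by editing the embed code by hand instead of regenerating it. My reading of the URL parameters (not official documentation) is that lat and lng set the map center, z sets the zoom level, and q is the Fusion Tables query that selects the location column. Something like this, reusing the table ID from above with made-up Ottawa-ish coordinates:

<!-- lat/lng recenter the map, z zooms it; width and height are ordinary iframe attributes -->
<iframe src="http://www.google.com/fusiontables/embedviz?viz=MAP&q=select+col6+from+2621110+&h=false&lat=45.42&lng=-75.69&z=10&t=1&l=col6" scrolling="no" width="100%" height="300px"></iframe>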

Please note, however, that there is still a Conservative majority.

Update (later the same day): I’m pleased to see that Glen McGregor was able to embed the map on the Edmonton Journal site, and wrote a telling article around it.

Things I Haven’t Blogged About Lately

It’s been a long time since I blogged. And I’m not going to right now either. Instead, here’s a list of some of the things that I’ve almost blogged about recently:

  • Context Signaling in US Military Communication
  • How the Scholarly Character of Wikipedia Responds to Perturbation
  • Ecologists Exchange Thoughts on Ecosystem-based Fisheries Management
  • Late/Early Thoughts on Tablet Computing
  • Analytical Implications of the Census Discontinuity
  • The Yasuni-ITT Deal

I tell new bloggers that they should maintain a consistent rhythm to their posts. The idea is that if you keep a steady posting frequency, your readers will know roughly how often to check your blog. That way they won’t go away disappointed, even if you don’t blog all that often. I’ve been breaking that rule lately. Mostly I blame some of my friends who have recently gotten into blogging, and have raised the quality bar for me. Thanks a lot, guys.

Incidentally, I appreciate that my peers are now arriving at blogging, just as we’re being told that blogs are dead. We’re also being told that the web is dead. Both are lies. The growth of blogging is slowing, and claiming that a decreasing acceleration is the same as death is a dependable fallacy of the western neo-capitalist mindset. And the growth of the web is actually accelerating. For heaven’s sake.

It’s just been a little quiet around this part of the web, that’s all.

Newspapers Aren’t Archiving Their Web Content?

Here’s a remarkable claim:

“Digital subscriptions were supposed to replace microfilm, but American libraries, which knew we were racing toward recession years before the actual global crisis came, stopped being able to pay for digital newspaper and magazine subscriptions nearly a decade ago. Many also (even fancy, famous ones) can no longer collect—or can only collect in a limited fashion. Historians and scholars have access to every issue of every newspaper and journal written during the civil rights struggle of the 1960s, but can access only a comparative handful of papers covering the election of Barack Obama.”

Posthumous Hosting and Digital Culture, zeldman.com

Can that be true?

Typekit and the Problem with Web Fonts

Being a text-based medium, the world wide web might be expected to sport a wide range of fonts. Perhaps you’ve noticed that in actuality you’re almost always looking at Arial and Times. This of course drives web designers crazy, and they’ve been trying to get more fonts onto the web for years. The technical side of web-embedded fonts is more or less solved, but there’s still the basic problem that font foundries aren’t willing to put their intellectual property into the most copyable place in the universe — the world wide web. Until we find ways around that, web designers are stuck with the fonts they can count on already being on your computer: basically Arial, Times, Courier, Georgia, and Verdana.

There are certainly ways of putting fonts onto the web as images instead of text, but image-based text has problems around searchability, copy-and-pastability, archivability, translatability, download speed, and access for the visually impaired. So to heck with that (except maybe for titles). Another option has been to use fonts which are in the public domain, open source, or otherwise licensed for web use. I’ve seen one or two sites that take that approach, but the current limit is the availability of freely usable high-quality fonts. I expect to see that limit loosen as more indie and even commercial foundries release web-licensed giveaways, either to be helpful or to get known. I’m also predicting that foundries will start to freely license one or two variants of a font family — say the medium and bold variants — to promote the sale of the professional full-family bundles. For now however, quality free fonts are scarce.
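If you do go the freely licensed route, the mechanics are just CSS’s @font-face rule. Here’s a minimal sketch, assuming a hypothetical freely licensed font file called FreeMedium.ttf uploaded to your own server (older browsers want additional formats, so treat this as the idea rather than production code):

<style type="text/css">
/* tell the browser where to fetch the font file */
@font-face {
  font-family: "FreeMedium";
  src: url("/fonts/FreeMedium.ttf");
}
/* use it, falling back to stock fonts if the download fails */
body { font-family: "FreeMedium", Georgia, serif; }
</style>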

Enter Typekit, which has gone live and large as of this morning. Typekit is a font service for web sites. It doesn’t create images, and it’s not just free fonts. It embeds a copy of a real, licensed typeface definition into a webpage, allowing browsers to translate the honest-to-god plain text on that site into honest-to-god rendered screen type on users’ screens. So that’s good. In return the website owner pays Typekit a subscription fee, and Typekit pays some of that back to the font foundries they license from. That sounds fair. But I hope it’s not the only long-term solution, because I don’t like the idea of the typography of my websites collapsing back to Arial and Times if the Typekit server ever goes flaky, or if my wallet ever goes flaky. I’m not totally comfortable with the Typekit approach, but as of today it may fairly be said that there are more than five fonts on the internet.
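For the curious, the hookup on the website owner’s side is just a couple of script tags in the page head. The kit ID below is a made-up placeholder; Typekit issues a real one when you subscribe:

<script type="text/javascript" src="http://use.typekit.com/abc1def.js"></script>
<script type="text/javascript">try{Typekit.load();}catch(e){}</script>

That try/catch is telling, by the way: if the Typekit server doesn’t answer, the page quietly falls back to whatever stock fonts you specified, which is exactly the collapse-to-Arial scenario I’m worried about.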

There’s another problem with web typography I didn’t mention above: the low resolution of modern computer screens (roughly 100 dots per inch) makes type look jaggy and blurry compared with good-quality paper printing (roughly 300 dpi), but that problem is getting solved in drips and drops every year as screen resolution gradually floats upwards faster than screen sizes do. It’s a slow process, but it will solve itself in the next ten years or so. So that’s something for us all to look forward to: crisp clean screen text, possibly before the Mars landing.

The First Web Page, Almost Sorta

I did not know that Tim Berners-Lee has maintained a one-copy-removed mirror of the first HTML page that was ever published on the web.

It is here.

It presumably was edited after first being posted, but in its current and final state it references the content of the world wide web. All of it.

I found it referenced here.

Note: This post is the third part in a two-part “comprehensive history of computing” series, begun here and then here.

I Want a Personal Cloud

I seem to like computing in clouds. I don’t want to: I don’t like the idea of putting my business or academic data into someone else’s for-profit servers, and I think it’s nutty in a special way to put your private photographs and social relationships in there too. But that’s just ideology; in practice I keep on opening up new documents sporting the Google logo, day-dreaming about the science computing I could do with a few hundred dollars’ worth of clock cycles on an Amazon-hosted Hadoop cluster, and contemplating moving my email address over to Google Apps for Your Domain. It’s all just so useful. It works across computers, it works across people, and nowadays it even sometimes works when you don’t have internet. The benefits are immediate and tangible (if cloud computing can be called tangible), and the drawbacks are longer-term and probabilistic.

Thus I was excited when the words “private cloud” started cropping up. A private cloud is a set of web-based applications that run on your own server instead of on somebody else’s. Advantages without drawbacks. For now private clouds are for corporations to run on their internal intranets. So the words I especially want to see are “personal cloud”. I already rent space on a web server; now I want to be able to install a calendaring service on hughstimson.org, in the same way I’ve already got blogging and photo gallery apps. And I especially want to install Mozilla Docs there. Mozilla, are you making Mozilla Docs?

Big question: if everybody has their own personal cloud running, can they work together? One of the major advantages of current cloud computing is collaboration. If I open a new Google Docs document here in Vancouver, my collaborators over the strait in Victoria can see it and edit it right away, using an interface they’re familiar with. If I were running a document application on hughstimson.org I could create that file, but other people probably don’t want to open an account on hughstimson.org to edit it, nor do they want to learn the interface for whatever editing application I’m running there.

I’m guessing there are technical solutions to this technical problem. People already care very much about standard formats in existing cloud computing, and if all of our clouds are able to speak to each other in a common language, then maybe collaboration across them isn’t such a big deal. I open a new spreadsheet, stored in .ods format on my own server, and start editing it through my web interface in my browser. Then I send out an invitation to an email address at Pink Sheep Media, and they open that document up in their own browsers using their own editing application running on the Pink Sheep Media cloud. Or maybe they’re still using Google Docs, and they access the file from hughstimson.org/docs, but edit it in the Google Docs interface. Maybe login access is handled using OpenID. Why not? It would mean having not just open standards for file formats, but also some common commands for editing functions. The editing could be done on their servers, and then the document would be saved back to mine, staying in the open standard file format the whole time. Is that hard? Does someone know?

As far as I know, Mozilla is not working on Mozilla Docs. But they are doing some cool stuff in cloud computing. This one looks like a big opportunity to me. At least, I know I want it very much. So somebody, please, build me a personal cloud.

Twitter in 140 Characters or Less: Trivial

I wrote this as a draft and didn’t post it. Now I’m doing so as a way to edit history to make me look prescient. Except that at the time of posting, history still disagrees with me. Ah well.

As far as I’m concerned, there are two possibilities regarding twitter. One is that I get it: it’s a cross-platform version of what the Facebook status update feed does. The second possibility is that I don’t get it at all, and there is in fact some crazy emergent magic about the existence of tweets in people’s lives that is obscure to me but exists and justifies an enormous degree of outcry and hullabaloo amongst the technologically literate and celebrity technology journalists alike.

But even if the second case is true, and I doubt it, there is no conceivable way that twitter could justify the actual current degree of outcry and hullabaloo it is causing. It’s a meme bubble. The bubble will pop. Holy crap people, it’s a single-line communication tool. Big freaking deal.

I’m reading reports that dismissing twitter with an “I don’t get it” is so 2009. And maybe I don’t get it. I mean, I understand that it represents a slightly different communication tool: a short messaging system that is easy to publish to (including from a phone), that collates all your friends’ tweets into a single, easily digestible stream, and that lets people subscribe and unsubscribe as they wish. Similar to RSS-reading blogs, but simpler and swifter to do, and simpler and swifter can put a technology over the hump into widespread use.

But who cares if the widespread use is a trivial one? Not that there’s anything wrong with triviality. Some argue that only newbies use twitter for trivial communication, but they don’t suggest what a non-trivial use of twitter might be, and all the tweets I’ve seen can be fairly summarized as trivial. I know, I know, the Iranian revolution. But from what I’ve read, twitter isn’t actually getting much use by protestors, who are mainly using phone messaging. The western media is fixating on the twitter traffic because we can’t see the phone messages.

Someone somewhere is replying, yes, but that’s only because twitter penetration isn’t as high as it will be. Once you stop complaining and start using it, it will become a true revolution! To answer that we should turn to acknowledged twitter-lover Clay Shirky, who points out that communication revolutions are only ever identifiable in retrospect. So if the world is fascinated by twitter for the revolution it might turn out to be, I don’t find that very compelling.

Not that it matters. Either twitter is a revolution, will be a revolution, or is neither. Unless someone finds a non-trivial use for a one-line message delivery service, whatever revolution it offers is a trivial one. Twitter in 140 characters or less: trivial.

Stephen Wolfram Building A Search Engine That Models the World?

Stephen Wolfram, post-boy-genius egomaniac and author of the book that received the best bad review ever, is making a search engine. Except it’s not a “search” engine exactly: it doesn’t look for websites that have the answer to your question on them, it figures out the answer to your question itself.

How? If the descriptions of the engine I’ve read are accurate, and they can’t be, it essentially runs a model of the world, constructed of all the theoretical constructs which Wolfram and co. have been able to represent in some kind of Mathematica-based ur-language and a massive pile of curated raw data on everything. So I suppose if you ask it “how long would it take for me to fall from 30,000 feet”, it determines semantically that you are asking a question about gravitic physics, calculates distances, speeds, resistances, and masses, and tells you a number. And if you ask it, “will it hurt?”, it fires up its neuron-emergent model of the brain and says, “yes”.

Or something. I don’t know. Look, it’s possible Wolfram really is a genius and not just a strong technician, and consequently could be building a revolutionary search engine which commensurates knowledge from diverse conceptual domains into a meaningfully live intelligence grower. I’m certainly curious to find out.

Yesterday there was a two-hour webcast demoing the engine. I’m not curious enough to watch it. But they’re promising it will go public in May. What questions do you have?

Jason Scott Is In Your Geocities, Rescuing Your Sh*t

Some time back, Jason Scott — the computing documentarian who hughstimson.org readers may remember from the King of Kong controversy — “got angry like a fire gets burning” because AOL Hometown was shutting down and leaving its users without many options to save off their home pages. This is part of Jason’s abhorrence of “the cloud”, a general point of view I share. My way of doing something about that distrust is to soldier on operating a personally administered website and email account while even my own aging generation is consumed by Facebook. Jason’s way of doing something about it has been to get ever angrier and found the Archive Team, a loose affiliation of data wonks who are pledged to archiving all the nominally doomed data of the world. They take as their motto “We Are Going to Rescue Your Sh*t”.

So when the call went up that Geocities, perhaps the oldest and creakiest of the early-era personal website providers, was being shut down by now-owner Yahoo, the eyes of the world swiveled suddenly to Jason. Could he and the Archive Team rescue two decades’ worth of websites on Yahoo Geocities? Literally millions of websites? Even though Yahoo presumably had no interest in him doing so?

Well, Jason?

“And the answer, which I hope you would expect, is OF COURSE WE ARE.”

Good man. Go team. And yes I did. If you’ve spent much time around Geocities, you might now be asking, is it really worth saving? To which he offers this answer:

“Not because we love it. We hate it. But if you only save the things you love, your archive is a very poor reflection indeed.”

I suppose so. All of two days later, the Archive Team is now deep into the process, and offers an update, which I warn you is even more profane than some other Jason Scott discourses on computing and computing history. He reports that large swaths of the Cities appear to have simply been purged over the years, and those may be forever gone, but many more chunks are there and are being consumed into posterity as we speak. In fact, he estimates that they now have on their hard drives every pre-1999 site that hadn’t already been deleted.

Which made me wonder, was the first website I ever made still there? After all, I stopped updating it back in 1997, which was well before archive.org was doing really comprehensive internet mining. And indeed, it looks like it must have disappeared in the subsequent purges.

But don’t worry world, and don’t worry Archive Team, I performed a search of my own system and discovered I do indeed have a full backup of Where Even Richard Nixon Still Has Soul, manifestos, poems, and correspondence with Richard Nixon buffs intact. He’s still got it. Soul.


NCSA Mosaic and the Truly Vintage Web

Note: This post is the second part in a two-part “comprehensive history of computing” series, begun here.

These folks offer us “the Vintage Web”, websites that look like they haven’t noticed the last ten years. That’s great.

But 1998 is late in World Wide Web history. What if you wanted to re-experience the truly vintage www? Even if we’ve forgotten it, a central tenet of HTML is that the display of content should be decided principally by the browser, not by the author. And there was a time, the truly vintage time, when that was still regarded as a feature of the web and not a bug. To really see the vintage web, you would need a truly vintage web browser.

Like, say, NCSA Mosaic. You’ll recall that Mosaic was the first browser for the World Wide Web to feature a graphical user interface (well, the first one not for lawyers). It opened the web to tens of thousands of newcomers in 1993, back before the browser wars had even begun. Netscape? Yet to be built on the bones of Mosaic. Internet Explorer? Isn’t “Explorer” the name Microsoft uses for both their file browser and their interface shell? Surely they aren’t using that name for a third application? Using Mosaic was a learning process: you weren’t just learning the interface of another browser, you were learning that a program could fetch hypertext markup language-encoded text pages from other computers on the Internet network, and display them on your own computer screen.

Well hey, here it is to download and install.

A caveat: only a version updated somewhat in 1997 is available for download; you can’t get v2.0 from 1993. But there’s plenty of 1993 to be felt here. For instance, the download is available as a 3MB file, or as smaller “DISKS”. It will install by default to C:\mosaic\, an entirely sensible place. When you try to connect to a site, it will first advise you that it is “looking up Domain Name Server” (true enough, so it is). The top item in the drop-down selection box of recommended Web sites is “Gopher Servers”, followed by “Home Pages”. “World Wide Web Info” is fifth down. And then there’s this line in the user’s manual I downloaded:

“For full functionality, you need access to the Internet. If you do not have access, see Appendix D, “Access Providers” for information.”

Sadly, the one thing it won’t do is load a remote web page for me. I’m not sure why not, and I couldn’t find an active Mosaic users’ forum to help me with my technical issue.

It will load local web pages. Here’s what one of those looks like, albeit with a modernized “bgcolor” attribute set:
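The markup behind a page like that is period-appropriately simple, by the way. A minimal sketch of the kind of local test page I mean (the content is invented; the bgcolor attribute is the modernization just mentioned, since in 1993 the background was the browser’s business, not the page’s):

<html>
<head><title>A Local Test Page</title></head>
<!-- bgcolor came along with later browsers; originally the browser chose the background -->
<body bgcolor="#ffffff">
<h1>Welcome</h1>
<p>This page is best viewed with <b>any browser you like</b>.</p>
</body>
</html>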

But I remember the way the web first looked through that grey window. Square 16-bit beveled icons, black serif text on a grey background, and the promise of universal access to all the geographically dispersed information in the world.

(Here’s a genuine Mosaic screen cap that has survived from almost that far back. Novell’s service and support web site, circa early 1995, held on to by Nathan Zeldes.)
