I’ve been meaning to write about URLs, text and non-web online publishing for a while, but now I don’t have to, because Craig Mod has, and he did it better than I could have done. (He’s also going to get more attention, which is great, because it’s more likely things will change.)
Some choice quotes (although you should read the whole thing):
Am I reading text? If the text in your ereader isn’t text but is instead an image (.jpeg, .png, etc) then, by golly, your ereader’s incompetent.
Can you copy text? If you can’t, your ereader’s incompetent.
Is there a publicly facing pointer (URL, etc) by which you can reference the content in your ereader?
As Mod notes, it’s amazing that things like the iPad Wired app, which fail all three of these tests, have been so highly praised. However, I’m more inclined to blame malice (or its close relation, “business reasons”) for some of these decisions in some apps. Even though Twitter, Facebook and email can drive readers to a site, it seems some companies would rather their magazines and newspapers lived in hermetic isolation.
At least the Guardian’s iPhone app, which is far from flawless, has the ability to email a link and post to various services, although (oddly) it fails to have a simple “Open in Browser” option. From what I’ve seen, neither the Wired app, nor any of the Mag+ publications, have such obviously useful features.
At least, as Mod notes, we’re only six months into the life of the iPad (and barely a couple of years into widely-used mobile devices). Perhaps with time will come a realisation that locking things down isn’t the best idea.
¹ Hat tip to dan w for the links.
² In one of his footnotes, Mod approvingly notes Instapaper, which I agree gets almost everything right. Hopefully at some point I’ll write about the (somewhat weak) social aspects of the app, though.
I have a worrying feeling that Instapaper isn’t the future of magazines; it’s a short, brief possible now of magazines, for those of us who understand it.
Yesterday evening I started reading this Wired article, which I found via the Instapaper front page. I got home to find it was also in the print edition of Wired UK, but of course I’ll finish it on the phone, on the way to or from work. I also read far more of the Guardian in Instapaper than in its own app. Generally, I seem to find more to read, via my delicious network and other recommendations, than I can manage during weekdays.
Meanwhile, every publisher seems to want to get their icon onto my phone (and, if and when I get one, an iPad). The Times are pushing their app on video screens in the Tube; Wired and Popular Science are just two of many magazines which hope to bring not just interactivity and a nice experience, but that promised land of a sustainable business model.
But, but; does that mean that each of them ends up in a silo, or a glass box, with the web sites turning into vestigial stubs, paginated into unusability? If that happens, where does that leave my Read Later bookmarklet? And are those of us who do graze on articles and reviews and, yes, blog posts, no matter where they come from (and with less concern for who published them than whether they’re interesting), just too small a tribe to be on publishers’ radar?
I hope I’m being overly alarmist here. I hope the app fad dies down, and that the focus returns to good simple texts on generally available web sites. Still, I’m a little worried.
A couple of hours after I gave my talk about Flickr machine tags and their possible lessons for Twitter’s new annotations, Raffi Krikorian gave a talk at Warblecamp on that very subject. He’s now posted slides of the talk, which are well worth a look.
In them, he expands on the format for annotations (they consist of types, attributes and values; types can be repeated, but attributes can’t), and mentions an annotations “explorer”, which will contain both “statistics of most used, adopted and trending attributions” and a “wiki page so developers can document their attributes”.
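To make that concrete, here’s a quick sketch in Python of the shape the slides describe, plus the kind of usage counting an annotations explorer might do. The types and values are entirely made up, and the real wire format may well differ:

```python
# Guessed shape: each tweet carries a list of annotations, each one a
# type mapping attributes to values. Types can repeat ("review" does,
# below); within a single annotation each attribute appears only once.
annotations = [
    {"review": {"item": "Decode at the V&A", "rating": "4"}},
    {"review": {"item": "Wired iPad app", "rating": "1"}},
    {"geo": {"place": "King's Cross"}},
]

def attribute_counts(annotations):
    """Tally (type, attribute) usage - the sort of "most used"
    statistic an annotations explorer might aggregate."""
    counts = {}
    for annotation in annotations:
        for type_name, attributes in annotation.items():
            for attribute in attributes:
                key = (type_name, attribute)
                counts[key] = counts.get(key, 0) + 1
    return counts

print(attribute_counts(annotations)[("review", "rating")])  # 2
```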
This dual approach pretty much fixes the main points I was worried about, combining a “pave the cowpath” method (looking at actual usage data) with a more editorial take on the wiki.
Anyway, the talk touched on even more (including the beta rollout plan, which will be based on OAuth-enabled apps, rather than feature flags or user lists), and mentioned release dates (which are reassuringly close). All in all, it’s pretty exciting, and I’m looking forward to seeing how they get used in the wild.
I’m at Warblecamp (unsurprisingly, they also have a Twitter account), where I gave a short talk about Flickr’s machine tags and possible lessons for Twitter’s upcoming annotations feature. You can download the slides (6MB PDF), but they’re very much from the “big word / big picture” school, so feel free not to bother.
The idea was to breeze through Flickr’s implementation of tags, machine tags, machine tag extras, and exploring hierarchies via both URLs and the API, and point out the features I liked and how, perhaps, Twitter might learn from them.
The discussion afterwards was interesting. One point, which was well worth making, was that Twitter’s stream of text is very different from Flickr’s archive of photographs. (One more difference is that tags (and machine tags) are editable later; annotations are set in stone at post create time.) Aral Balkan suggested a registry of Twitter annotation namespaces, along the lines of his Twitter Formats proposal. Personally, I prefer the “pave the cowpaths” approach of discovering what’s actually in use in the wild (which is also why I built the machine tag browser). I didn’t mention this at the time, but there was an attempt at a Flickr machine tags wiki, which failed, perhaps colouring my view.
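For anyone who hasn’t met them, machine tags take the namespace:predicate=value form Flickr documents, upcoming:event=123456 being the canonical example. A minimal parser (my sketch, not Flickr’s code) shows how little structure there is to them:

```python
def parse_machine_tag(tag):
    """Split a machine tag into its (namespace, predicate, value)
    triple, or return None if it isn't machine-tag shaped."""
    head, sep, value = tag.partition("=")
    if not sep:
        return None  # no "=": an ordinary tag, not a machine tag
    namespace, sep, predicate = head.partition(":")
    if not sep or not namespace or not predicate:
        return None  # malformed: missing namespace or predicate
    return (namespace, predicate, value)

print(parse_machine_tag("upcoming:event=123456"))
# ('upcoming', 'event', '123456')
print(parse_machine_tag("holiday"))  # None
```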
There was also a question about size limits for annotations (turns out it’s 512 bytes) and a discussion on the more RDF-ish aspects of triple tags (and how you say what a thing is, which also touched on establishing concordances). Generally I don’t get hung up on the semantics of machine tags, but I’m sure there are people who do, and they might be reassured by the points (mentioned in the Twitter preview post) about the use of schemas:
People could add some agreed upon “meta-annotation” that points to something which *describes* the annotation or annotations that person is using. Think something sort of like XML DTD, though not necessarily machine readable.
For a few slides knocked up the evening before, I’m vaguely happy with the talk itself, and very happy with the response and the way it’s made me think more about the idea.
I’d forgotten - until yesterday - that the epic post on calendars and blue moons, on the Panic blog, had made me think about writing a post on how New Year’s Day has moved around. So, before 2010 properly gets going (with most people going back to work tomorrow), I thought I’d try and get this out while it’s still topical.
You’d think the concept of a new year was straightforward. After all, it’s right there: the date is 1/1 (whether you’re European or American), and given we don’t use 0 for dates¹, that’s the first day of the year, right? Well, yes, it is now, for a good chunk of the world’s population. It wasn’t always.
Readers of Pepys’ Diary will know that; indeed, the entry for 1st January 1666/1667 bears two dates. Until the UK changed to the Gregorian calendar in 1752, the first day of the year was the Feast of the Annunciation, Lady Day (25 March), marking the occasion of Mary’s meeting with the Angel Gabriel. Before then, dates for the first third of the year carry both the old-style and new-style year numbers. The British tax year still starts on this date, with (complicated) adjustments for the days lost when the calendar changed.
That’s not the only “new year”, though. Parliamentary years start with the State Opening, in November (or, occasionally, December); the Catholic and Anglican liturgical year also starts with Advent, in late November or early December. Meanwhile the academic year starts after harvest, in September. (Australia’s also starts in late summer.) Admittedly, none of those has as much legal force as the calendar or tax year, but still, I thought them worth mentioning.
That’s just in the UK, of course. There are two other obvious major world calendars, both lunar. The Chinese new year (also celebrated in Korea and Vietnam, but not Japan, which swapped to the Western calendar in 1873) is based on a lunar-solar calendar, so it moves around, but not much: it’s defined as the second new moon after the winter solstice, fixing it to a date between 21 January and 21 February (with thanks to this PDF, which did all the sums for me).
Meanwhile, the Hijri calendar, used by Muslims, is a pure lunar calendar, with nothing fixing it to the solar year. As a result, the Islamic new year shifts by either 11 or 12 days a year, moving through the Western calendar every 30 years or so. Even more alarmingly for those used to the rigid certainty of solar reckoning, the first day doesn’t happen until the new moon is officially sighted: this can shift the start of the year back a day, in theory at least. In 2009, the first day of Muharram, the first month, was on 18 December.
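The drift is simple arithmetic, if you’re happy with averages; the real calendar depends on sightings, so treat this as back-of-envelope only. It comes out at roughly eleven days a year, cycling through the solar calendar in a little over thirty years:

```python
# Average lengths in days: twelve mean synodic months for the lunar
# year, and the mean Gregorian year for the solar one.
LUNAR_YEAR = 12 * 29.530589   # ~354.37 days
SOLAR_YEAR = 365.2425

drift_per_year = SOLAR_YEAR - LUNAR_YEAR   # ~10.9 days earlier each year
cycle_years = SOLAR_YEAR / drift_per_year  # ~33.6 years round the calendar

print(round(drift_per_year, 1), round(cycle_years, 1))  # 10.9 33.6
```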
I’m not even going to try and explain the various Indian new year’s days, except to note that most of them seem to be around the northern hemisphere spring equinox.
So, happy new year, unless you’re Islamic, in which case, belated happy new year, or Asian, in which case, it’ll soon be new year, unless you’re Japanese, in which case: happy new year.
There is no denying the technological craft behind the work in Decode. However, unlike physical craft of the kind that fills the rest of the V&A, you cannot actually see the skill behind digital art. You cannot see the intricate computer codes and algorithms. … All you can judge it on is the “object” itself. And that, while undeniably pretty, is too often underwhelming.
Maybe that’s true for Tom Dyckhoff, but it’s easier for me to conceive of how to make something like Chris O’Shea’s Audience (part of Decode, but outside the paywall, to repurpose an internet term) than it is to understand the process behind, say, sculpture or ironwork. Obviously I’m a bit of an outlier, but I wouldn’t be surprised if, in a decade or three, the idea that “analogue” art is understandable and “digital” isn’t falls by the wayside.
Oh, and the exhibition? It’s not the best in the world but if you’re at all interested in digital art or interactivity, it’s worth the £5 to see it. You’ve got until April 2010.
I use a lightly modified version of FlickrTouchr.py to back up my photos from Flickr, and also to push them to the iPhone (in case I get terminally bored on the Tube and fancy paging through my favourites). Unfortunately, there are two issues with it.
Firstly, until I made a patch this evening, videos were downloaded with a .jpg extension. I fixed that (note the ‘&extras=media’ argument I add to the URL when fetching photos) and then deleted the old “pictures” manually. (I suppose I could have checked for a matching ID, but… exercise for the reader, sorry.) Sadly, iTunes won’t sync videos mixed with photos: if I want them on my phone, I’d have to add them explicitly as video, and I really can’t be bothered. Still, at least they now don’t cause errors, and I can look at them on my hard drive.
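Roughly, the fix works like this. It’s a sketch rather than FlickrTouchr’s actual code: the dict shape is my assumption about what the API’s media extra gives you, and .mov is just a placeholder extension:

```python
# With "&extras=media" added to the API call, each photo record says
# whether it's a "photo" or a "video", so the extension can follow
# suit instead of everything being saved as .jpg.
def filename_for(photo):
    """Pick a file extension from an API photo record's media type."""
    extension = ".mov" if photo.get("media") == "video" else ".jpg"
    return photo["id"] + extension

print(filename_for({"id": "4242", "media": "video"}))  # 4242.mov
print(filename_for({"id": "4243", "media": "photo"}))  # 4243.jpg
```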
Secondly, and more mysteriously, some of my favourites won’t sync. The ones I’ve checked have a ColorSync profile embedded in them, for a “LaCie 321”, which I assume is a specific (fancy) monitor. Quite why iTunes won’t just ship them over and sod the colour matching, I have no idea, but I wondered if anyone else had come across this or knew why it was enforced?
Yesterday, the new ticket hall at King’s Cross Underground station opened. The official unveiling had been on Friday, with the Mayor and Minister for London, but it was on Sunday that regular commuters got their own chance to have a look around.
So, first things first: the station works. It’s big - surprisingly big, in fact, given how much of it is in deep tunnels. It’s shiny enough (although I’m not sure how long that will last). Generally, it’s well signed. The new ticket hall is well located for St Pancras and the new high-speed domestic services due to start properly in December, and the whole thing has to pretty much double the capacity of the (incredibly busy) station.
The station is so big, in fact, that the lifts have their own map. There are nine of them (although one isn’t open yet and, alarmingly, when I popped in today one was closed), six of which belong to the extended station. Oddly, the lifts don’t go up or down automatically; once called, they wait for a button press to ascend or descend. Generally they’re well signed, but the Piccadilly’s Lift J is hidden between the platforms - I figured out where it was by descending from the interchange subway (pictured above).
The opposite end of the subway to the Victoria line sees the Northern line’s new concourse hosting an artwork by Knut Henrik Henriksen, as discussed at Building Design, and photographed by Londonist (cheekily reused above). I suspect the art is so subtle as to go unnoticed by many, but I quite like it (although perhaps more so in photographs and diagrams than in reality). Londonist’s pictures really do a good job of capturing the way a 2009-era Tube station looks when freshly uncovered, too; the shot up at the ticket hall ceiling from the bank of four escalators down to the subway level is lovely.
Speaking of the interchange subways, they’re very, very long. The map above shows how the three deep lines (the Northern, Piccadilly and Victoria) more or less meet at a point at the bottom of the current Tube hall’s escalators. The new hall feeds down to a much longer set of passages (in peach), especially for the Victoria line (where they connect with the existing, now barely used, subway to what was once the Thameslink station, now maintained as an exit to Pentonville Road).
There’s nothing really wrong with that. What is somewhat offputting is that the signage at the new entrance to the Underground from the main-line station’s concourse suggests any deep-level passenger should head via the Northern hall. This turns a one-and-a-half-minute journey into a four-minute one.
Of course, the signs are there for the confused, and there’s probably merit in sending people down the wider, shinier new corridors. If I get the choice, though, the older entrance is far more likely to be the one I use.
Still, it’s good to see a project that nearly didn’t happen take a massive step forwards.
When AppJet was still around, we had an edit button, a single page of a few dozen lines and a cross-domain AJAX API. This, surprisingly enough, was all you needed to apply some programmatic patching to a bunch of different use-cases.
In it, he compares Joyent (née Reasonably) Smart’s model, where you develop locally then deploy with git, to that of AppJet, where you edited online via a web form.
For a long time, I thought that the online-editing model was the right one, and I thought Heroku and App Engine suffered for not supporting it. However, when I was seriously working on my AppJet sites, I found that I ended up developing locally (with their .jar download), then wishing for an easier deployment method than copying and pasting between text files.
For any serious project (and most developers), I’m sure that pattern would be repeated. So why bother with the complex work required to put an editor into the browser (even if Bespin promises to be a drop-in component)? Better to concentrate on the deployment side, either building on a (D)VCS like git or using custom hooks, like GAE’s “appcfg.py update”.
He goes on to say:
I’ve said it before and I’ll say it again – I will pay for what AppJet were providing. People will pay for what AppJet were providing.
Unfortunately, this has a classic problem of having to build a market. Heroku can build on Rails programmers; GAE on Python and now Java developers. Smart has the harder job of getting people to write server-side JS, but at least there are people who think that way, and they can usually pick up git.
AppJet, on the other hand, was trying to persuade people who don’t think they can program that it’s really not that hard. This is a huge problem, and I can see why it’s far easier for their developers to sell real-time web based collaboration, since everyone knows how to write words.
I suppose this is a long-winded way of saying that web-based server-side programming is still a way off.