What I realised is that neither exception-based approach is appropriate when one wishes to make software as robust as possible. What one needs is to know exactly which errors / exceptions a function can return / raise, and then deal with each on a case-by-case basis. While modern IDEs could (indeed, they may well do, for all I know) automatically show you some of the exceptions that a given function can raise, this can only go so far. Theoretically speaking, sub-classing and polymorphism in OO languages mean that pre-compiled libraries cannot be sure what exceptions a given function call may raise (since subclasses may override methods, which can then raise different exceptions). From a practical point of view, I suspect that many functions would claim to raise so many different exceptions that the user would be overwhelmed: in contrast, the UNIX functions are very careful to minimise the number of errors that they return to the user, either by recovering from internal failure or by grouping errors. I further suspect that many libraries that rely on exception handling would need to be substantially rewritten to reduce the number of exceptions they raise to something reasonable. Furthermore, it is the caller of a function who needs to determine which errors are minor and can be recovered from, and which signal more fundamental problems, possibly resulting in the program exiting; checked exceptions, by forcing the caller to deal with certain exceptions, miss the point here.
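To make the "case-by-case" idea concrete, here is a minimal Python sketch (the function and path are my own illustration, not from any particular library). The caller decides which errors are minor and recoverable, which are fatal and should propagate, and keeps the set of handled cases small and explicit, UNIX-style:

```python
import errno

def read_config(path):
    """Read a config file, handling each expected error case
    explicitly rather than with one blanket handler."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        # Minor: the caller can fall back to defaults.
        return None
    except PermissionError:
        # More fundamental: let it propagate so the caller
        # can decide whether to abort or prompt the user.
        raise
    except OSError as e:
        # A small, known set of errno values remains.
        if e.errno == errno.EISDIR:
            raise ValueError(f"{path} is a directory, not a file") from e
        raise  # anything else is unexpected; propagate
```

The point is not the specific cases but the shape: each error the function can produce is either recovered from, translated, or deliberately passed up, rather than swallowed by a catch-all.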

The only way to read these days …

(By way of “what have you been up to?”, or “have a blog, say something!”)

I just finished two fairly large reading projects, and I’m quite happy I stuck with them and finished them. A decade ago, I used to read in short, intense bursts (days on end of doing nothing else), and when this became no longer possible (less time!), I stopped reading, more or less, until about two or three years ago, when I slowly began reading bits and pieces again. What I figured out was this: there’s only one long-term sustainable way to keep reading, and that’s the “slow and steady” way.

So now, I read on average two or three pages a day. Sometimes five or six. But never beyond ten. Always at least one. And I think this works for most books.

The first of the two isn’t a book but a series of online articles — more accurately, a series of usenet posts, all written by Erik Naggum over the ten-year period 1994–2004. The entire list is here. Now, you either don’t know about this guy at all, or, if you do, you probably have a negative preconception, just as I did, based on (say) his Wikipedia page, or some highly opinionated (and IMHO, ignorant) posts like this one; but I’m not the only one to advocate a more open-minded reading of his posts — cf. Stanislav here — so you may be interested too.

The second is a series of books. I first came across Eric Hobsbawm in a very negative context (a youtube clip of him quixotically refusing to reconsider his prior stance, decades ago, on communism) — and I expected his writing to be similarly polemical. Imagine my surprise, then, when it was not (what really confuses me, then, is this contrast between the historian self and the public interview self). Instead, the series “The Age of Revolution (1789-1848)”, “The Age of Capital (1848-1875)”, “The Age of Empire (1875-1914)” and “The Age of Extremes (1914-1991)” is the best grand overview of everything that I’ve come across. The last one, if you’re curious, is logically three books (“Catastrophe”, “The Golden Years”, “Landslide”), which explains its semi-frustrating length. I wonder what he would make of the post-9/11 era, though he wasn’t very optimistic about the end of the Landslide.

Ironically, rent control is exactly why there is so little housing available in SF. IIRC, only 30% of the voting population owns property. So how is it that the rest of us (70%) don’t vote to build more? Well, because ¾ of renters are under rent control, and essentially don’t care. So only 70% × ¼ = 17.5% of people — fewer than the 30% who own — are motivated to push the market towards more housing. The rest profit handsomely from increased home prices due to high demand. This is the real tragedy of rent control in San Francisco, IMO.
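The back-of-the-envelope arithmetic above can be written out explicitly (all figures are the post’s own rough estimates, not real census data):

```python
# Rough voter-bloc arithmetic from the paragraph above.
renters = 0.70           # share of voters who rent
rent_controlled = 0.75   # share of renters under rent control

owners = 1 - renters                          # 0.30
# Only renters exposed to market rents have an incentive
# to vote for more housing supply:
motivated = renters * (1 - rent_controlled)   # 0.70 * 0.25 = 0.175

print(f"pro-building bloc: {motivated:.1%} vs owners: {owners:.1%}")
# 17.5% < 30%, which is the post's point
```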

The press, television, and movies make heroes of vandals by calling them whiz kids. … There is obviously a cultural gap. The act of breaking into a computer system has to have the same social stigma as breaking into a neighbor’s house. It should not matter that the neighbor’s door is unlocked.

– Ken Thompson, 1983 Turing Award Lecture

The Asocial network

I’ve been online for a while now. I remember having a Hotmail1 account around 1998 or so, discovering Geocities2 perhaps a year before that. I first browsed on the old Netscape Navigator3. You get the idea.

First, an aesthetic rant: the early internet was nothing like what you see now. It was fugly, with marquees4 being terribly abused all over the place, etc — but that was the point! Everyone rolled their own pages.

Now, everything looks better — but everything looks _the same_5. It’s all rounded corners and whitespace. That’s nice to look at, but, you know, boring. Anyway …

There have been various things that can be called “social networks”. I remember reading an old “Internet for Dummies” and puzzling over Compuserve and AOL (Usenet was, even then, already very old); there were separate instructions and clients and so on for each of them. Later (after the (first?) dot-com bubble) there were Friendster and Myspace. And today, of course, you have Facebook, Google+ and Twitter. But these were not all the same — and I don’t just mean they looked different.

Now Facebook is hugely successful and it has succeeded in reliably and (mostly happily) connecting lots of people all over the world (though WhatsApp etc seem to have since picked up a lot of the “quick messages” that people use to stay in touch) …

… but I resent being forced into a “standardized template”. Photo to share? Add it to the designated spot. Thinking of something? Enter it in this box. Wait, you don’t have anything interesting to share? Ah, how about this little news or notification! Worried about not having anything to say? Here’s a 1-bit channel ready-made for you: click right over there where it says “Like”.

At the end, you have a profile that looks like everyone else’s profile, except for the profile image swapped out, a slightly different selection of music, a slightly different stream of +1s or Likes. This would be fine if it wasn’t for people becoming their profiles.

The worst part of all this isn’t just the uniformity of presentation or the standardization of forms of content. Nope, it’s the slow, subliminal conformity of what you share about yourself. Over time, the vast majority of posts are mostly all the same — photos of food, smiling photos at place X, and so on, with an extremely predictable stream of responses and “likes”.

Now I don’t want to be too extreme here; there are other collections of “why not to use Facebook” (e.g. Stallman’s page here), which I find hyperbolic (of course it’s ok for Facebook to withhold anonymity from you, to show you ads, etc — don’t like that, don’t use it, and so on). Maybe every generation has its own version of “Eternal September”6, and this is mine; I don’t know.

So it’s not all bad, obviously. But hey, I’m free to rant, and I want the freewheeling, chaotic internet I remember back, dammit. Except that it probably never was really as I remember it to be, and a decade or two from now, it won’t be for you either 🙂

  1. This lasted until about 2009, though I had gradually stopped using it after 2003, when I switched to Gmail as my primary account. 
  2. I had a “homestead” in one of the Area 51 “suburbs”, but I forget the details. 
  3. For a touch of nostalgia, see this gif of the “loading” indicator 
  4. i.e. in the days before CSS, HTML tags were used for both markup and presentation, in particular this one 
  5. Thank you, Twitter Bootstrap 
  6. When AOL users overwhelmed the tiny, carefully curated Usenet community in 1993. Ironically, this post (which probably coined the term) looks back a decade earlier, to when the first CompuServe users came online (!) 

If you first design your software to near-perfection and then find coding it to be tedious because no problems remain to be solved, I recommend that you obtain a thing called a “compiler” to do the coding part for you. You see, a “compiler” takes a formal specification of your design as input, and automatically produces a program as output! I know this sounds like science fiction, but we’ve been using these things for a while now, even though they took a while to get accepted.

To be precise, it is not even about alternatives to SQL but about alternatives to the relational model and ACID. In the meantime, the CAP theorem has revealed that insisting on the strong consistency of the ACID criteria inevitably reduces the availability of distributed databases when the network partitions. That means that traditional, ACID-compliant databases cannot benefit from the virtually unlimited resources available in cloud environments. This is what many NoSQL systems provide a solution for: instead of sticking to the very rigid ACID criteria to keep data 100% consistent all the time, they accept temporary inconsistencies to increase availability in a distributed environment. Simply put: when in doubt, they prefer to deliver wrong (old) data rather than no data. A more correct but less catchy term would therefore be NoACID.
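A toy sketch of that trade-off, with two in-memory “replicas” standing in for nodes of a distributed store (the names and classes are illustrative, not any real system’s API). An availability-first system answers from whichever replica is reachable, even if replication has not caught up yet:

```python
# Toy model of "deliver old data rather than no data":
# a write lands on one replica; the other lags behind.

class Replica:
    def __init__(self):
        self.data = {}

    def write(self, key, value):
        self.data[key] = value

    def read(self, key):
        # Always answers, possibly with stale data (or None).
        return self.data.get(key)

primary = Replica()
secondary = Replica()

primary.write("balance", 100)   # replication to secondary is delayed

stale = secondary.read("balance")   # stale answer: None (no data yet)
fresh = primary.read("balance")     # up-to-date answer: 100

# Eventually replication catches up and the replicas converge.
secondary.write("balance", primary.read("balance"))
```

An ACID-style system would instead refuse (or block) the stale read until the replicas agree, trading availability for consistency; the sketch shows the opposite, eventually consistent choice.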

Those of you who still find it enjoyable to learn the details of, say, a programming language – being able to happily recite whether NaN equals or does not equal null – you just don’t yet understand how utterly fucked the whole thing is. If you think it would be cute to align all of the equals signs in your code, if you spend time configuring your window manager or editor, if you put unicode check marks in your test runner, if you add unnecessary hierarchies in your code directories, if you are doing anything beyond just solving the problem – you don’t understand how fucked the whole thing is. No one gives a fuck about the glib object model.