Poetic Engineering

I stumbled upon this through an extremely oblique reference¹: Edgar Allan Poe’s essay “The Philosophy of Composition”. Now, if you were laboring under the impression that poetry is composed exclusively out of some sort of “divine madness” and springs forth spontaneously from the poet’s soul — well, Mr. Poe will gladly correct you.

Here he presents a sort of “behind the scenes”, using not some third-party academic textbook example but his own famous poem, “The Raven” — so instead of hearing some critic tell you what he thinks the author thought at such-and-such a time, you hear it from the author himself.

The deconstruction of the poem is quite total, and knowing how a magician performs a trick usually leads to a superbly anti-climactic ‘meh’ moment, so you’ve been warned.

Here’s an excerpt that conveys the general idea (emphasis mine):

For my own part, I have neither sympathy with the repugnance alluded to, nor, at any time, the least difficulty in recalling to mind the progressive steps of any of my compositions, and, since the interest of an analysis or reconstruction, such as I have considered a desideratum, is quite independent of any real or fancied interest in the thing analysed, it will not be regarded as a breach of decorum on my part to show the modus operandi by which some one of my own works was put together. I select ‘The Raven’ as most generally known. It is my design to render it manifest that no one point in its composition is referable either to accident or intuition- that the work proceeded step by step, to its completion, with the precision and rigid consequence of a mathematical problem.


  1. If you must know, a very brief mention (a couple of seconds) in this lecture by Gerald Sussman on the 60th birthday of Dan Friedman.

Thoughts on teaching Python stand out as especially trenchant even many months later. The intro course is so important, because it creates habits and mindsets in students that often long outlive the course. Teaching a large, powerful, popular programming language to beginners in the era of Google, Bing, and DuckDuckGo is a Sisyphean task. No matter how we try to guide the students’ introduction to language features, the Almighty Search Engine sits ever at the ready, delivering size and complexity when they really need simple answers. Maybe we need language levels à la the HtDP folks.

He argues that Linux is designed for a use case that most people don’t have. Linux, he says, aims to be a 1970s mainframe, with 100 users connected at once. If a crash in one user’s programs could take down all the others, then obviously that would be bad. But for a personal computer, with just one user, this makes no sense. Instead, the OS should empower the single user and not get in their way.

APL is a mistake, carried through to perfection. It is the language of the future for the programming techniques of the past: it creates a new generation of coding bums.

Edsger W. Dijkstra, “How do we tell truths that might hurt?” (EWD498, 1975)

To put it bluntly, the discipline of programming languages has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation, at the expense of historical research and collaboration with the other social sciences. PL researchers are all too often preoccupied with petty mathematical problems of interest only to themselves. This obsession with mathematics is an easy way of acquiring the appearance of scientificity without having to answer the far more complex questions posed by the world we live in.

**The goals**

  1. To develop a universal programming language.
  2. To define a theory of equivalence of computation processes. This would be the basis for a theory of equivalence preserving transformations.
  3. To represent algorithms by symbolic expressions in such a way that significant changes in the behavior represented by the algorithms are represented by simple changes in the symbolic expressions.
  4. To represent computers as well as computations in a formalism that permits a treatment of the relation between a computation and the computer that carries out the computation.
  5. To give a quantitative theory of computation. For example, to find a quantitative measure of the size of a computation analogous to Shannon’s measure of information.

(Referencing a paper by McCarthy, quoted from a summary here)
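Goal 3 reads like the seed of Lisp’s S-expressions. Here is a toy illustration in OCaml (my own sketch, not an example from McCarthy’s paper): an algorithm represented as a symbolic expression, where swapping a single constructor is exactly the kind of “simple change in the symbolic expression” that yields a well-defined change in behavior.

```ocaml
(* Toy AST: an algorithm as a symbolic expression. *)
type expr =
  | Num of int
  | Var of string
  | Add of expr * expr
  | Mul of expr * expr

(* Evaluate an expression in an environment mapping variable names to ints. *)
let rec eval env = function
  | Num n -> n
  | Var x -> List.assoc x env
  | Add (a, b) -> eval env a + eval env b
  | Mul (a, b) -> eval env a * eval env b

(* Swapping one constructor turns "double x" into "square x". *)
let double = Add (Var "x", Var "x")
let square = Mul (Var "x", Var "x")

let () =
  Printf.printf "%d %d\n" (eval [ ("x", 5) ] double) (eval [ ("x", 5) ] square)
```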

My point is that the ML module system can be deployed by you to impose the sorts of effect segregation imposed on you by default in Haskell.  There is nothing special about Haskell that makes this possible, and nothing special about ML that inhibits it.  It’s all a mode of use of modules.
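To make that concrete, here is a minimal sketch in OCaml rather than Standard ML (the IO signature and the names return, bind, print, and run are my own, not Harper’s): an effectful computation type is sealed behind an abstract 'a io, so effects can only be constructed and sequenced through the signature and discharged at one sanctioned point.

```ocaml
(* Effect segregation via a sealed module: outside Io, the only way to build
   or combine effectful computations is through this signature. *)
module type IO = sig
  type 'a io                        (* abstract, so effects cannot leak out ad hoc *)
  val return : 'a -> 'a io
  val bind   : 'a io -> ('a -> 'b io) -> 'b io
  val print  : string -> unit io
  val run    : 'a io -> 'a          (* the one sanctioned place to discharge effects *)
end

module Io : IO = struct
  type 'a io = unit -> 'a           (* a suspended computation *)
  let return x () = x
  let bind m f () = f (m ()) ()
  let print s () = print_string s
  let run m = m ()
end
```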

So why don’t we do this by default?  Because it’s not such a great idea.  Yes, I know it sounds wonderful at first, but then you realize that it’s pretty horrible.  Once you’re in the IO monad, you’re stuck there forever, and are reduced to Algol-style imperative programming.  You cannot easily convert between functional and monadic style without a radical restructuring of code.  And you are deprived of the useful concept of a benign effect.
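For one concrete reading of a benign effect (my example, not Harper’s): a memoizing function whose internal mutation never escapes, so callers may treat it as an ordinary pure function; under the IO-monad discipline this needs ST or an unsafe escape hatch.

```ocaml
(* A benign effect: memoized Fibonacci. The hash table is mutated internally,
   but no caller can observe anything other than a pure int -> int function. *)
let fib : int -> int =
  let table : (int, int) Hashtbl.t = Hashtbl.create 16 in
  let rec go n =
    if n < 2 then n
    else
      match Hashtbl.find_opt table n with
      | Some v -> v
      | None ->
          let v = go (n - 1) + go (n - 2) in
          Hashtbl.add table n v;
          v
  in
  go
```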

The moral of the story is that of course ML “has monads”, just like Haskell.  Whether you want to use them is up to you; they are just as useful, and just as annoying, in ML as they are in Haskell.  But they are not forced on you by the language designers!
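As a closing usage note, building on the Io module sketched above (again my toy, not Harper’s code): monadic style is something you opt into locally, and everything around it stays in ordinary direct style.

```ocaml
(* Opting into monadic style locally, using the Io module sketched earlier. *)
let greet name =
  Io.(bind (print ("hello, " ^ name ^ "\n")) (fun () -> return (String.length name)))

let () =
  let n = Io.run (greet "world") in
  Printf.printf "the name had %d characters\n" n
```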