The Stack Lifestyle

Andrew Plotkin tells us that “if you already know what recursion is, just remember the answer. Otherwise, find someone who is standing closer to Douglas Hofstadter than you are; then ask him or her what recursion is.”

Recursion is a delight for any computer science student first learning it. There's something magical about sorting a list by asking your program to sort the left half, sort the right half, and then merge. There's something magical about taking that ugly code riddled with loops and switches and replacing it with two lines.
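To make this concrete, here's roughly what that sort looks like in Python (a sketch, not the canonical implementation): the recursive heart really is just a couple of lines, while a merge helper does the unglamorous work.

    def mergeSort(items):
        if len(items) <= 1:   # base case: one element is already sorted
            return items
        mid = len(items) // 2
        # The magical two lines: sort each half, then merge them.
        return merge(mergeSort(items[:mid]), mergeSort(items[mid:]))

    def merge(left, right):
        result = []
        while left and right:  # repeatedly take the smaller head
            result.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
        return result + left + right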

My high school computer science book (it sucked) kept telling me that recursion is inefficient but could never tell me why. Indeed, in terms of time complexity, a recursive solution is sometimes the best there is (quicksort, for example). But as anyone with a slightly more nuanced understanding of memory allocation knows, the computer's dashboard (if you will) looks a little like this:

[Diagram of a program's memory layout, with the call stack at the top. Credit to David Malan.]

Every time a method (function, subroutine, procedure, whatever you fancy) is called, it's placed on top of this so-called "stack". The canonical explanation goes something along the lines of having a stack (no, literally) of tasks to complete. You look at whatever's on top and begin with that, but when your boss drops by and adds another task, you drop what you're doing and work on that instead. As nerds say: LIFO, or last-in-first-out.
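In code, a stack is almost insultingly simple. Python's lists, for instance, give you one for free (a quick sketch):

    tasks = []                        # your (empty) stack of work
    tasks.append("write the report")  # push: a task arrives
    tasks.append("fix the printer")   # push: boss drops by with another
    print(tasks.pop())                # pop: "fix the printer" (last in, first out)
    print(tasks.pop())                # pop: "write the report"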

The problem with recursion is that each time a method recurses, another frame is pushed onto the stack. Calling a method costs both memory and time, inconveniences that don't apply to one of those boring old loops. Indeed, failing recursion must be the inspiration for the famous website Stack Overflow, named after what happens when you max out your pre-allocated stack space.
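You can watch this happen in any language with a bounded stack. Python, for one, caps recursion depth (at roughly 1,000 frames by default) precisely to avoid smashing the real stack; a sketch:

    import sys

    def countdown(n):
        if n == 0:
            return
        countdown(n - 1)  # every call pushes another frame

    print(sys.getrecursionlimit())  # typically 1000
    countdown(10)       # fine
    countdown(100000)   # RecursionError: maximum recursion depth exceeded

A loop doing the same countdown would use one frame, total.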

So now let's make the leap to real life. For the most part, in computers, the time required to push a procedure onto the call stack is tiny. So recursion is pretty cool here, because you get a rich variety of elegant, effective data structures and algorithms.

Now, about ten years ago, before Jimmy Wales co-founded Wikipedia (or hell, before Tim Berners-Lee conceived of the hyperlink), imagine reading one of those terrible World Book Encyclopedias. I remember my dad buying a whole grand set for god knows how many dollars when I was seven. I think I might have read two entries. It was more decoration than anything else, a way to seem polished and cultured (much like an expensive globe).

When you came across a term you didn't know while reading an article in one of these books, chances are you just kept reading. The cost of finding out what that term meant was simply too high. You'd have to locate the volume (or the dictionary) in which that entry was stored, alphabetically navigate to the page, read it, and dammit, there's another term you don't know… Before you know it, the whole bloody encyclopedia is strewn across the floor and you haven't learned a thing. Time for a beer.

So, knowing that it's just not worth your time to chase down a minor term, your algorithm might have looked something like this (in Python-flavored pseudocode; the helpers are stand-ins):

    def readArticle(words):
        for word in words:
            if notUnderstood(word) and isImportant(word):
                # Drop everything and go read the entry on that term.
                readArticle(findEntryOn(word))
        haveDrink()

And if you're the average person, the bar for "importance" will be quite high, lest you end up reading the whole encyclopedia.

Wikipedia has fundamentally changed how we look at this algorithm. We've all but abandoned the importance qualifier. The "cost of calling" a link, of recursing if you will, is barely anything anymore (except at my house in Chennai, which might still be on dial-up). The old encyclopedias offered us one thing in return for their cumbersome, pretentious existence: we were compelled to read only what we needed, only what was important.
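In terms of the earlier pseudocode, the Wikipedia era amounts to deleting the isImportant guard (same hypothetical helpers as before; a sketch):

    def readArticleOnline(words):
        for word in words:
            if notUnderstood(word):  # no importance check anymore:
                readArticleOnline(findEntryOn(word))  # every blue link gets followed
        haveDrink()  # good luck ever reaching this line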

Today, I rarely read a whole Wikipedia page anymore, and it's a shame. I'm too busy reading some intermediate link, because it's so computationally cheap to do so.

Look, I'm not a cognitive psychologist or a behavioral economist or whatever, but I wonder if this fundamentally changes the way our brains are wired, just as fundamentally as stacks differ from queues. The iPhone and Mountain Lion notification centers yank your attention away from whatever you were doing at a moment's buzz.

Here's the problem. We're probably hitting something like the cognitive equivalent of a stack overflow. Our brains don't have that much working memory (I read that on the incredibly reliable source that is Yahoo Answers). Do you ever start researching a paper, get lost in Wikipedia, and eventually just quit? Yeah, that's probably a stack overflow, or something…

Would we be more efficient if the cost of calling a function were greater, and if so, to what extent?
