gen_server With Rotating List

// December 15th, 2009 // Uncategorized

For those of you who didn’t go to SF Erlounge, it was a blast.  In addition to getting to hear Tom Preston-Werner’s exciting presentation on BERT, BERT-RPC, and the new GitHub architecture, I also got to meet quite a few people who are interested in Erlang, but don’t have a lot of experience.

The great thing about these folks is that they bring problems that are simple (for me).  One of these problems was so simple that it occurred to me to write it up and post the source.

This particular problem is dead simple.  One of them wanted to be able to have (possibly many) workers retrieve values from a list, rotating the list on each step.  Using gen_server to serialize the requests was obvious, but I also think that this is a great example that illustrates how gen_server can be used to wrap useful, functional (i.e. tail-recursive) code.

Have a look at the source.

It’s a simple gen_server that maintains two lists as its only state: a list of "fresh" entries and a list of "stale" entries.  It retrieves entries from the fresh list one at a time, returns each to the caller, and accumulates the used entries in the stale list.

When the fresh list is about to become empty, the server replies to the caller with the last value first, then reverses the stale list (with that last value included) to become the new fresh list.  This is a minor optimization so that clients don’t have to wait on lists:reverse/1 unless absolutely necessary.
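The core of it looks roughly like this (a minimal sketch, not the linked source itself — the module name, exported functions, and lack of error handling are my own assumptions):

```erlang
%% rotator: a sketch of the rotating-list gen_server described above.
-module(rotator).
-behaviour(gen_server).

-export([start_link/1, next/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

start_link(Items) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, Items, []).

next() ->
    gen_server:call(?MODULE, next).

%% State is {Fresh, Stale}: entries not yet handed out, and used entries
%% accumulated in reverse order.
init(Items) ->
    {ok, {Items, []}}.

%% Last fresh entry: reply first with gen_server:reply/2, *then* reverse
%% the stale list (with this last value), so the caller never waits on
%% lists:reverse/1.
handle_call(next, From, {[H], Stale}) ->
    gen_server:reply(From, H),
    {noreply, {lists:reverse([H | Stale]), []}};
%% Common case: pop a fresh entry, push it onto the stale list.
handle_call(next, _From, {[H | T], Stale}) ->
    {reply, H, {T, [H | Stale]}}.

handle_cast(_Msg, State) -> {noreply, State}.
handle_info(_Info, State) -> {noreply, State}.
terminate(_Reason, _State) -> ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.
```

The interesting clause is the first handle_call/3: because it replies explicitly and returns {noreply, ...}, the O(N) reversal overlaps with the caller's work instead of delaying it.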

The demos each contain a start function, and the data they use is just the module list as managed by the code server.

To compile this, just use erlc on each file, or run erl -make with them in a directory.  To run them, try something like "erl -s demo_slow start".  You’ll need to kill the VM to stop it (use ctrl+c to break out and q to quit).


3 Responses to “gen_server With Rotating List”

  1. witek says:

    The “queue” module is also quite helpful for similar and more general operations. It is also quite fast (amortized O(1)).

    The trick of quickly using gen_server:reply, and then reversing the list in the background, is very nice :)
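    For comparison, the same rotation with the stdlib queue module might look like this in the shell (values here are just an example):

    ```erlang
    %% Rotate one step: take from the front, put back at the rear.
    1> Q0 = queue:from_list([a, b, c]).
    2> {{value, V}, Q1} = queue:out(Q0).   %% V = a
    3> Q2 = queue:in(V, Q1).               %% a goes to the rear
    4> queue:to_list(Q2).
    [b,c,a]
    ```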

  2. Witek,

    The “amortized O(1)” is why I bothered with the trick at all. The problem with really long lists (over a few hundred entries) is that “amortized O(1)” really means N-1 operations at O(1) and one operation at O(N) for a list of length N. That one O(N) operation is horrible.

    This still amortizes the same, but the delay runs in parallel with the reply, so it’s only noticeable if you happen to have a few queries stacked up at the point that the list must be reversed.

    Thanks for the interest.

  3. witek says:


    Yes, in both cases we have amortized O(1), but in your solution the differences between the fastest and slowest calls are big (N-1 calls at O(1), and 1 at O(N)). In the case of “queue” it is slightly softer (most O(1), some O(log N)).

    I could be wrong, but a good benchmark would be to measure a histogram of delays for the fast clients.