It was this exact part of the problem that motivated my mails. I have been unable to think of a good way to pull things out of a pipe, socket, etc. in 60-second batches such that I could keep pulling into the new batch while processing the last one, without losing all of the statistics I had collected in between.
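For what it's worth, the pattern I keep circling back to for this is double-buffering: the reader always appends to a "current" batch, and every interval you atomically swap in a fresh list and hand the old one off for processing, while the running totals live outside any single batch. This is just a sketch of that idea in Python; the class and field names are mine, not from any particular library:

```python
import threading

class BatchCollector:
    """Collect items continuously; swap() atomically takes the current
    batch and starts a fresh one, so the reader never blocks on the
    processor for long."""

    def __init__(self):
        self._lock = threading.Lock()
        self._current = []

    def add(self, item):
        # called from the reader thread as data arrives off the pipe/socket
        with self._lock:
            self._current.append(item)

    def swap(self):
        # called every 60s (e.g. from a timer): take everything collected
        # so far and leave an empty list for the reader to keep filling
        with self._lock:
            batch, self._current = self._current, []
        return batch

def process(batch, totals):
    # the cumulative stats live in `totals`, outside any one batch,
    # so nothing is forgotten when a batch is retired
    totals["count"] += len(batch)
    totals["bytes"] += sum(len(item) for item in batch)
```

The key point is that the swap is O(1) under the lock (just a pointer exchange), so the reader stalls for microseconds, not for the duration of the processing, and the `totals` dict carries state across batch boundaries.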
Ah, ok, I think I see what you mean. Almost everything I do is forked/threaded, and I have just grown accustomed to doing all state maintenance through a database. So, what do you actually need to retain from batch to batch? I'm assuming it's not the entirety of the raw data, right? Otherwise you'd certainly have to write it to a file or DB. So, if it's some basic numbers, can you just write the summaries to a DB? Lately I've taken to creating simple tables with just an ID and a BLOB column and writing JSON-serialized blobs to synchronize my workers, a kind of pseudo-NoSQL. I use a DB because it takes care of all of the transactional locking for me.
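Concretely, the ID-plus-BLOB arrangement I mean looks something like this, here sketched with SQLite standing in for whatever database you actually use; the table and function names are just illustrative:

```python
import json
import sqlite3

def open_store(path=":memory:"):
    # one table, one key column, one blob column: pseudo-NoSQL
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS blobs (id TEXT PRIMARY KEY, data BLOB)"
    )
    return conn

def put(conn, key, obj):
    # the DB's own transaction provides the locking between workers;
    # INSERT OR REPLACE makes the write an idempotent upsert
    with conn:
        conn.execute(
            "INSERT OR REPLACE INTO blobs (id, data) VALUES (?, ?)",
            (key, json.dumps(obj)),
        )

def get(conn, key):
    row = conn.execute(
        "SELECT data FROM blobs WHERE id = ?", (key,)
    ).fetchone()
    return json.loads(row[0]) if row else None
```

Each worker serializes whatever summary it needs to persist as JSON and writes it under its own key; since the database serializes the transactions, there's no hand-rolled locking anywhere in the application code.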