On Thu, Feb 16, 2006 at 09:56:30AM +0100, Balazs Scheidler wrote:
FWIW, I modified syslog-ng 1.6.9 to usleep(10000) at the end of the io_iter() loop in src/main.c. That cut CPU consumption roughly fourfold - instead of spinning in the io_iter() loop and reading one or two messages per poll(), it now reads several (3-15) messages per poll().
Increasing the time to 20000 usec cuts CPU in half again - to about 1/8 of the original consumption. It's now using ~10-15% of a CPU under the same heavy log load from ~750 connections. Granted, this increases latency, but I don't think 10 or 20 msec delays will kill anything.
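
In case a concrete picture helps, here's a tiny standalone program showing the pattern I mean. It is not the syslog-ng source - just a minimal poll() loop with the same usleep() throttle bolted on, using a single stdin descriptor instead of hundreds of connections, and the 10000 usec value is simply the one I tested:

    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    #define THROTTLE_USEC 10000   /* 10 msec pause between poll() passes */

    /* One pass: wait for readable descriptors and drain whatever is ready. */
    static void io_pass(struct pollfd *fds, int nfds)
    {
        char buf[4096];
        int i;

        if (poll(fds, nfds, -1) <= 0)
            return;
        for (i = 0; i < nfds; i++) {
            if (fds[i].revents & POLLIN) {
                ssize_t n = read(fds[i].fd, buf, sizeof(buf));
                if (n > 0)
                    fwrite(buf, 1, (size_t) n, stdout);  /* stand-in for log processing */
            }
        }
    }

    int main(void)
    {
        struct pollfd fds[1] = { { .fd = STDIN_FILENO, .events = POLLIN } };

        for (;;) {
            io_pass(fds, 1);
            /* Let more messages queue up, so the next poll() returns
             * many ready descriptors instead of one or two. */
            usleep(THROTTLE_USEC);
        }
    }

The only real change to syslog-ng is the usleep() after each io_iter() pass; everything else above is scaffolding so the example compiles on its own.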
The idea is cool, although I have to admit it is really a hack :)
:-)
This might be useful to others as well. I will probably add it as a global option. Thanks for tracking this down.
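
If it does become a global option, I'd imagine the config ending up something like this - the option name and units here are only my guess, not anything that exists yet:

    options {
        # msec to sleep after each io_iter() pass; 0 keeps the current behaviour
        time_sleep(20);
    };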
I've been tinkering with different delay times. 40 msec greatly increases the number of descriptors per poll(), to 30-50, but decreases CPU utilization under very heavy log load (950 descriptors) to ~12%. Do you have a guesstimate at what point I should stop, where the latency of the delay plus processing might be too much?

thanks,
john
--
John Morrissey
jwm@horde.net
www.horde.net/