Hi, Rob Munsch <rmunsch@solutionsforprogress.com> [20060411 15:55:20 -0400]:
> 'allo all,
> I've got 1.6.9 in a centralized environment logging to MySQL. I'm
> currently working up a rotate system whereby:
>   - the db is locked
>   - mysqldump, gzip, scp off the box
>   - rows with a timestamp older than 7 days are deleted
>   - unlock
>   - carry on.
I would suggest a slight order alteration; the key is to always lock the
database for as short a time as possible:

1. dump - locks for you and then unlocks
[2. lock?]
3. purge 'junk'
[4. unlock?]
5. gzip, scp, whatever

The 'carry on' bit would happen whilst the gzip is taking place in the
background; just make sure you nice the process up to 19 or so, so that
the computer will only use *spare* CPU cycles to gzip/scp your mysqldump
data about. There's a sketch of the whole thing below.

Just as a side note: if you're simply purging rows with a simple SQL
statement, I'm pretty sure you can drop the locking/unlocking step
altogether; it's only needed where you have to maintain 'state', i.e.
where objects are related to other objects. In your case you are simply
chopping off the bottom of the table. This probably does not hold too
well in a MySQL cluster, I'm guessing... but that's starting to get way
above my head there :)

An optimisation would be to index the 'date' column, which only contains
'year-month-day', so that the SQL database can scrub those entries
instantly; your 'lock' would then really not be in place for any length
of time.
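A minimal sketch of that ordering as a shell script. The database name
'syslog', table name 'logs', the credentials, paths and the 'backuphost'
destination are all assumptions for illustration, not anything from
Rob's actual setup:

    #!/bin/sh
    # 1. Dump first: mysqldump takes and releases its own table locks.
    mysqldump -u syslog -psecret syslog logs > /var/backups/logs.sql

    # 2. Purge rows older than seven days in a single statement.
    mysql -u syslog -psecret -e \
        "DELETE FROM logs WHERE date < DATE_SUB(CURDATE(), INTERVAL 7 DAY);" syslog

    # 3. Compress and ship in the background at nice 19, so only spare
    #    CPU cycles are used and logging carries on meanwhile.
    nice -n 19 sh -c "gzip -f /var/backups/logs.sql && \
        scp /var/backups/logs.sql.gz backuphost:/backups/" &

    # One-off optimisation (run once, not per rotation): index the date
    # column so the DELETE can find old rows without a full table scan.
    #   mysql -u syslog -psecret -e "CREATE INDEX logs_date ON logs (date);" syslog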
> and was wondering a couple of things. Firstly, 1) is this a horribly
> bad idea that should be replaced with a completely different plan?
> Failing that,
Well, it looks like you always want at least seven days of data in the
database, so it's hard to think of an alternative method.
> 2) How long does the default log_fifo_size of 2048 (lines, yes?) hold
> up, volume-wise? That is, while the tables are locked, I am assuming
> that this is where messages start piling up until they're unlocked.
> At the moment, I'm not dealing with high volume, but once everything
> seems in place I'm going to be adding many more hosts.
If you do lock/unlock, you might want to break things up into smaller
chunks to give the buffers a chance to flush (see the sketch below), so:

1. lock
2. purge day 14
3. unlock
4. lock
5. purge day 13
6. unlock
7. lock
8. purge day 12
...etc., etc., down to:
30. lock
31. purge day 7
32. unlock

In between the locks/unlocks any buffered messages could be
processed... however, to be honest, if you index your 'date' column the
above will probably be a terrible and pointless approach.
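The chunked purge as another illustrative sketch, with the same assumed
'syslog'/'logs' names; each pass drops the lock, so syslog-ng's queued
INSERTs can flush before the next one is taken:

    #!/bin/sh
    # Purge one day per chunk, oldest first. (For the oldest pass you
    # might use <= rather than = to catch any even older strays.)
    for AGE in 14 13 12 11 10 9 8 7; do
        mysql -u syslog -psecret -e "LOCK TABLES logs WRITE; \
            DELETE FROM logs WHERE date = DATE_SUB(CURDATE(), INTERVAL $AGE DAY); \
            UNLOCK TABLES;" syslog
    done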
> is there any kind of rule of thumb for this value vis-a-vis the logs
> generated? At what sort of daily volume should I look towards upping it?
That's for the others here to deal with... though for what it's worth,
the option itself lives in the global options block of syslog-ng.conf;
see the sketch just below.

Cheers,
Alex
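A minimal syslog-ng.conf sketch of that knob; the 8192 value is purely
illustrative, not a recommendation:

    options {
        # messages queue in this fifo while a destination (such as a
        # locked MySQL table) can't accept them; the default is 2048
        log_fifo_size(8192);
    };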
> Thanks!
> --
> Rob Munsch
> Solutions For Progress IT
_______________________________________________
syslog-ng maillist  -  syslog-ng@lists.balabit.hu
https://lists.balabit.hu/mailman/listinfo/syslog-ng
Frequently asked questions at http://www.campin.net/syslog-ng/faq.html