Just like Arya, we too have a performance issue with syslog-ng. In particular, we have only 20 servers sending their logs to one server, and queries are taking a long time to return results. Is there any other way of optimising the databases, or any special tweaking that can be done on the MySQL server itself? Thanks in advance for any kind suggestions.

Deva

On 5/14/06, Arya, Manish Kumar <m.arya@yahoo.com> wrote:
Hi Guys,
Thanks for your valuable suggestions for syslog-ng UI.
I have seen that most of the available UIs use databases. I have a syslog-ng+Oracle setup too, but I am not happy with its performance. We have a central log server with a 3000 GB SAN and 15 GB RAM, and 20,000 devices are supposed to pump logs 24x7 :) With Oracle we faced two serious issues; that's why I also started pumping logs into files along with the db:
- inserts: I am using a named pipe to insert logs into the db, but Oracle somehow drops inserts, because the rate of arrival of events is much larger than the rate of insert operations. I have noticed about 80-90% of events get dropped from the db.
- selects: when we searched logs, performance was really, really bad; it took far too long to return results. We then built an index on hostname and partitioned the table on time (a new range partition is created every 6 hours). This improved system performance to some extent.
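As an aside (not part of Manish's actual schema), the 6-hour range partitioning he describes boils down to mapping each event timestamp to a time bucket, so a time-bounded search only has to scan a few partitions. A toy sketch of that bucketing, with a function name and epoch handling of my own invention:

```python
from datetime import datetime, timezone

SIX_HOURS = 6 * 3600  # partition width in seconds

def partition_key(ts: datetime) -> int:
    """Map a timestamp to its 6-hour bucket number since the epoch.
    All events with the same key land in the same range partition,
    so a search bounded by time touches only the matching buckets."""
    return int(ts.timestamp()) // SIX_HOURS

a = datetime(2006, 5, 14, 1, 0, tzinfo=timezone.utc)
b = datetime(2006, 5, 14, 5, 59, tzinfo=timezone.utc)  # same 6-hour window as a
c = datetime(2006, 5, 14, 7, 0, tzinfo=timezone.utc)   # next window
print(partition_key(a) == partition_key(b))  # → True  (same partition)
print(partition_key(a) == partition_key(c))  # → False (different partition)
```

In a real database you would express the same idea as range partitions on a timestamp column rather than computing keys by hand, but the pruning effect on time-bounded queries is the same.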
Can you guys suggest whether MySQL or PostgreSQL would be better for overcoming the above two problems? (But remember our db is huge :), so I am not sure whether MySQL or PostgreSQL can handle such a big db.)
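Coming back to the insert-drop problem above: a common mitigation (not something from Manish's setup) is to put a small batching reader between the named pipe and the database, so the db sees one bulk insert per batch instead of one round-trip per event. A minimal sketch, with the batch size and flush step purely illustrative:

```python
from io import StringIO

def batch_lines(stream, batch_size=500):
    """Accumulate lines from a stream and yield them in batches.
    Each yielded batch would become a single bulk INSERT / LOAD,
    amortising per-statement overhead across many events."""
    batch = []
    for line in stream:
        batch.append(line.rstrip("\n"))
        if len(batch) >= batch_size:
            yield batch
            batch = []
    if batch:  # flush the partial final batch
        yield batch

# Demo with an in-memory stream; in a real deployment the stream
# would be the named pipe that syslog-ng writes to.
demo = StringIO("evt1\nevt2\nevt3\nevt4\nevt5\n")
for batch in batch_lines(demo, batch_size=2):
    print(len(batch), "events")  # → 2, 2, then 1
```

Batching does not make the database faster, but it narrows the gap between the rate of arrival of events and the rate of insert operations, which is exactly the mismatch causing the drops.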
Regards, -Manish
--- Jon Stearley <jrstear@sandia.gov> wrote:
On May 11, 2006, at 12:09 PM, Ken Garland wrote:
file("/logs/log01/indexlog/$YEAR/$MONTH/$DAY/$HOST"
... - should be able to do parallel searches to improve search response time.
If you decide to go with SQL and have $$, netezza.com will almost certainly overcome your speed issues (parallel hardware SQL!). Having gotten utterly bogged down with MySQL on Linux (stripes, chunks, huge indexes), I just went back to files, because they are simple and sufficient for my purposes.
If you are splitting all logs up into subdirectories like that, you will have quite a fun time doing any parsing.
If dirs/logs are arranged according to the factors used for subset selection (year/month/day/host), and the dirs/logs are listed in a (periodically updated) file (e.g. "corpus.docs" in sisyphus), then subset selection can be done by simply grepping that file and concatenating the resulting dirs/logs. This is one implementation option underlying the clog man page I sent earlier. Further subset selection by facility and priority could then be done by grepping the resulting log content (further splitting of dirs/logs by facility/priority presents multiple bad side effects). $0.02
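The corpus-file subset selection described above can be sketched in a few lines. Here is a toy Python version (the corpus contents and the helper name are invented for illustration; the real sisyphus "corpus.docs" layout may differ):

```python
import re

# A (periodically updated) listing of dirs/logs, one path per line,
# arranged by the factors used for subset selection: year/month/day/host.
corpus_docs = """\
/logs/2006/05/13/web01
/logs/2006/05/13/db01
/logs/2006/05/14/web01
/logs/2006/05/14/web02
"""

def select_subset(corpus: str, pattern: str) -> list:
    """Equivalent of 'grep pattern corpus.docs': return the log paths
    matching the pattern. The caller then concatenates those files and
    can grep their content further, e.g. by facility/priority."""
    rx = re.compile(pattern)
    return [p for p in corpus.splitlines() if rx.search(p)]

print(select_subset(corpus_docs, r"2006/05/14"))  # one day, all hosts
print(select_subset(corpus_docs, r"web01$"))      # one host, all days
```

The appeal of the scheme is that the expensive step (scanning log content) only runs over the already-selected subset, while the selection itself is a cheap grep over a small index file.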
-jon
_______________________________________________ syslog-ng maillist - syslog-ng@lists.balabit.hu https://lists.balabit.hu/mailman/listinfo/syslog-ng Frequently asked questions at http://www.campin.net/syslog-ng/faq.html