Hello,
I have seen that most of the UIs are available with databases. I have a syslog-ng + Oracle setup too, but I am not happy with the performance.
Any numbers, performance figures?
We have a central log server with a 3000 GB SAN and 15 GB RAM, and 20,000 devices are supposed to pump logs to it 24x7 :)
What's your expected/measured:
o rate of arrival, average/peak, in lines/s and socket connections/s
o data volume/s
on this central log server? What kind of hardware/OS/Oracle version and configuration are you running this central log server with?
With Oracle we faced two serious issues; that's why I also started writing logs to flat files alongside the DB.
It's almost always faster to write logs to flat files.
- Inserts: I am using a named pipe to insert logs into the DB, but Oracle somehow drops inserts, because the "rate of arrival of events" is much higher than the "rate of insert operations". I have noticed that about 80-90% of events are dropped before reaching the DB.
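One common way to keep a single pipe from dropping events is to drain it with a dedicated reader that batches inserts, so you pay one commit per few hundred rows instead of one per row. A minimal sketch of that batching idea, using sqlite3 as a stand-in for Oracle (the `logs` table and batch size are made up for illustration, not from the original setup):

```python
import sqlite3

def batch_insert(conn, lines, batch_size=500):
    """Insert log lines in batches, committing once per batch
    rather than once per line, to raise the insert rate."""
    cur = conn.cursor()
    batch = []
    inserted = 0
    for line in lines:
        batch.append((line,))
        if len(batch) >= batch_size:
            cur.executemany("INSERT INTO logs (msg) VALUES (?)", batch)
            conn.commit()
            inserted += len(batch)
            batch = []
    if batch:  # flush the final partial batch
        cur.executemany("INSERT INTO logs (msg) VALUES (?)", batch)
        conn.commit()
        inserted += len(batch)
    return inserted

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (msg TEXT)")
# In the real setup the iterable would be the named pipe's lines.
n = batch_insert(conn, (f"line {i}" for i in range(1200)), batch_size=500)
```

With Oracle the same idea would be array inserts via the client library; the point is that per-commit overhead, not raw I/O, is usually what caps "rate of insert operations".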
Parallel Inserts or one single pipe? Do you purge old data from your DB and if so, in which interval?
- Selects: when we search logs, performance was really, really bad; queries took far too long to return results. We then added an index on hostname and partitioned the table on time (a new range partition is created every 6 hours). This improved system performance to some extent.
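The win from 6-hour range partitions comes from partition pruning: a query with a time predicate only touches the few partitions covering that window. A minimal sketch of the bucketing arithmetic (the partition width matches the 6-hour scheme above; the sample timestamp is illustrative):

```python
from datetime import datetime, timezone

PART_HOURS = 6  # one range partition per 6 hours, as in the setup above

def partition_start(ts: datetime) -> datetime:
    """Return the UTC start of the 6-hour partition containing ts,
    i.e. the partition a time-bounded query would be pruned to."""
    secs = int(ts.timestamp())
    width = PART_HOURS * 3600
    return datetime.fromtimestamp(secs - secs % width, tz=timezone.utc)

t = datetime(2005, 3, 14, 15, 9, 26, tzinfo=timezone.utc)
start = partition_start(t)  # partition boundaries fall at 00/06/12/18 UTC
```

The hostname index alone still scans the whole table; combining a time predicate (for pruning) with the hostname index (within the surviving partitions) is what makes the searches fast.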
What's your time frame expectation regarding your select statements?
Can you guys suggest whether MySQL or PostgreSQL would be better for overcoming the above two problems? (But remember our DB is huge :), so I am not sure whether MySQL or PostgreSQL can handle such a big DB.)
Cheers, Roberto Nibali, ratz -- echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq' | dc