Cannot send Syslog-ng to Elasticsearch
Hello,

Recently I've tried following along with the Syslog-NG to Elasticsearch and Kibana blog posts and the Admin Documentation for integrating Syslog-NG into Elasticsearch, but I'm unable to integrate the two. I see that the destination in the .conf files calls for creating an Index Pattern for Syslog-NG, but when I curl the existing indices I do not see syslog-ng.

I'm also now receiving two errors. The first I'm fairly certain we need to resolve, but I've not been able to find adequate documentation on how to identify the issue, let alone resolve it; the second I'm not sure we actually need to fix. The two issues:

Issue#1: I/O error occurred

syslog-ng[26432]: Syslog connection established; fd='12', server='AF_INET(127.0.0.1:9200)', local='AF_INET(0.0.0.0:0)'
syslog-ng[26432]: I/O error occurred while writing; fd='12', error='Broken pipe (32)'
syslog-ng[26432]: Syslog connection broken; fd='12', server='AF_INET(127.0.0.1:9200)', time_reopen='60'

Issue#2: Error opening plugin module; module='mod-java', error='libjvm.so: cannot open shared object file: No such file or directory'

For issue 1 I'm not sure what to do or how to resolve it. For issue 2, I know for certain that libjvm.so does exist, and I've pointed LD_LIBRARY_PATH at the directory libjvm.so resides in.

Ultimately, are these two issues preventing Syslog-NG from sending to Elasticsearch, or are they separate issues to tackle after I get things cleared up? Most importantly, if they're not related, how do I integrate Syslog-NG with Elasticsearch and Kibana? The documentation is not helpful and not concise.

Thanks!
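(For reference, listing the existing indices as described above can be done with the _cat API; a minimal sketch, assuming Elasticsearch listens on 127.0.0.1:9200 without authentication:)

```
# List all indices and check whether a syslog-ng index exists yet
curl -s 'http://127.0.0.1:9200/_cat/indices?v'

# Or show only indices matching the expected name pattern
curl -s 'http://127.0.0.1:9200/_cat/indices/syslog-*?v'
```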
Hello,

Regarding your elasticsearch issue: depending on your version, I would suggest you try out the new C based elasticsearch destination (there is no need for a Java setup). The commit that introduced it gives an example of how to configure it:

commit 381ceb14e578553faaef3ea005146cb988a9f444
Refs: {origin/pr/2509}, syslog-ng-3.18.1-374-g381ceb14e
Author: Zoltan Pallagi <pzolee@balabit.com>
AuthorDate: Mon Feb 4 16:14:21 2019 +0100
Commit: Zoltan Pallagi <pzolee@balabit.com>
CommitDate: Mon Feb 4 16:14:21 2019 +0100

    Added elasticsearch-http() destination

    This destination is based on the native http destination of syslog-ng and uses the elasticsearch bulk api
    (https://www.elastic.co/guide/en/elasticsearch/reference/6.5/docs-bulk.html)

    Example:

    destination d_elasticsearch_http {
        elasticsearch-http(index("my_index") type("my_type") url("http://my_elastic_server:9200/_bulk"));
    };

Issue#1: I/O error occurred: This issue should not be coming from the elasticsearch destination itself; it looks like a plain network destination that tries to send data to port 9200, but the server cuts the connection (probably because of malformed data). Curiously enough, 9200 is the standard elasticsearch port.

Would you please share your configuration, and/or explain what this destination is supposed to do?

--
kokan
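(To see why a plain tcp() destination pointed at port 9200 gets its connection dropped: Elasticsearch speaks HTTP on that port, and the _bulk endpoint used by elasticsearch-http() expects newline-delimited JSON with an action line per document. A minimal hand-rolled request, assuming Elasticsearch 6.x listening on 127.0.0.1:9200 without authentication, looks roughly like this:)

```
# Index one test document via the bulk API -- the same endpoint that elasticsearch-http() posts to
curl -s -H 'Content-Type: application/x-ndjson' -XPOST 'http://127.0.0.1:9200/_bulk' --data-binary \
'{ "index": { "_index": "syslog-ng-test", "_type": "_doc" } }
{ "MESSAGE": "test message", "HOST": "myhost", "ISODATE": "2019-07-09T12:00:00+00:00" }
'
```

Raw syslog frames written to the same socket are not valid HTTP, so the server closes the connection and syslog-ng logs the "Broken pipe" / "Syslog connection broken" messages quoted above.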
It's pretty much a drop-in replacement for the java destination, so you don't really need to do anything special. Here's what I use (template omitted; a sketch of one possible template follows the quoted message below), hopefully that helps:

destination d_weblog_elastic {
    elasticsearch-http(
        index("logs-${YEAR}${MONTH}")
        type("test")
        persist-name("logs")
        url("http://127.0.0.1:9200/_bulk")
        time-zone("UTC")
        template("$(format-json ...)")
    );
};

This article contains some good reference material; you'd adjust it slightly to accommodate your environment:
https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-and-elasticsearch...

On Tue, Jul 9, 2019 at 11:31 AM Allen Olivas <allen.olivas@infodefense.com> wrote:
Hey Peter,
I'll look into using the elasticsearch-http() destination. Does the elasticsearch-http() destination go directly into syslog-ng.conf, or do I need to make a new .conf file (like elastic-http.conf) and add it to the conf.d/ directory? Or does it go in syslog-ng.conf and also the /usr/share/syslog-ng/include/scl/elasticsearch/plugin.conf file?
Per your request (and I do hope this helps illuminate things) I'm uploading our config files for syslog-ng, syslog-ng/.conf.d/elasticsearch.conf, and the plugin.conf
*Syslog-ng.conf: *
@version: 3.20
@module mod-java
@include "scl.conf"
@define allow-config-dups 1
# Syslog-ng configuration file, compatible with default Debian syslogd # installation.
# First, set some global options. options { chain_hostnames(off); flush_lines(0); use_dns(no); use_fqdn(no); owner("root"); group("adm"); perm(0640); stats_freq(0); bad_hostname("^gconfd$"); };
######################## # Sources ######################## # This is the default behavior of sysklogd package # Logs may come from unix stream, but not from another machine. # source s_src { system(); internal(); };
# If you wish to get logs from remote machine you should uncomment # this and comment the above source line. # source s_net { tcp(port(514)); udp(port(514)); syslog(); };
###### # patterndb parser parser pattern_db { db-parser( file("/opt/syslog-ng/etc/patterndb.xml") ); };
######################## # Destinations ######################## # First some standard logfile # destination d_auth { file("/var/log/auth.log"); }; destination d_cron { file("/var/log/cron.log"); }; destination d_daemon { file("/var/log/daemon.log"); }; destination d_kern { file("/var/log/kern.log"); }; destination d_lpr { file("/var/log/lpr.log"); }; destination d_mail { file("/var/log/mail.log"); }; destination d_syslog { file("/var/log/syslog"); }; destination d_user { file("/var/log/user.log"); }; destination d_uucp { file("/var/log/uucp.log"); };
# This files are the log come from the mail subsystem. # destination d_mailinfo { file("/var/log/mail.info"); }; destination d_mailwarn { file("/var/log/mail.warn"); }; destination d_mailerr { file("/var/log/mail.err"); };
# Logging for INN news system # destination d_newscrit { file("/var/log/news/news.crit"); }; destination d_newserr { file("/var/log/news/news.err"); }; destination d_newsnotice { file("/var/log/news/news.notice"); };
# Some 'catch-all' logfiles. # destination d_debug { file("/var/log/debug"); }; destination d_error { file("/var/log/error"); }; destination d_messages { file("/var/log/messages"); };
# The root's console. # destination d_console { usertty("root"); };
# Virtual console. # destination d_console_all { file(`tty10`); };
# The named pipe /dev/xconsole is for the `xconsole' utility. To use it, # you must invoke `xconsole' with the `-file' option: # # $ xconsole -file /dev/xconsole [...] # destination d_xconsole { pipe("/dev/xconsole"); };
# Send the messages to an other host # #destination d_net { tcp("127.0.0.1" port(1000) log_fifo_size(1000)); };
##### ### Elasticsearch Destination # destination d_elastic { tcp("127.0.0.1" port(9200) template("$(format-json --scope selected_macros --scope nv_pairs --exclude DATE --key ISODATE)\n")); };
# Debian only destination d_ppp { file("/var/log/ppp.log"); };
######################## # Filters ######################## # Here's come the filter options. With this rules, we can set which # message go where.
filter f_dbg { level(debug); }; filter f_info { level(info); }; filter f_notice { level(notice); }; filter f_warn { level(warn); }; filter f_err { level(err); }; filter f_crit { level(crit .. emerg); };
filter f_debug { level(debug) and not facility(auth, authpriv, news, mail); }; filter f_error { level(err .. emerg) ; }; filter f_messages { level(info,notice,warn) and not facility(auth,authpriv,cron,daemon,mail,news); };
filter f_auth { facility(auth, authpriv) and not filter(f_debug); }; filter f_cron { facility(cron) and not filter(f_debug); }; filter f_daemon { facility(daemon) and not filter(f_debug); }; filter f_kern { facility(kern) and not filter(f_debug); }; filter f_lpr { facility(lpr) and not filter(f_debug); }; filter f_local { facility(local0, local1, local3, local4, local5, local6, local7) and not filter(f_debug); }; filter f_mail { facility(mail) and not filter(f_debug); }; filter f_news { facility(news) and not filter(f_debug); }; filter f_syslog3 { not facility(auth, authpriv, mail) and not filter(f_debug); }; filter f_user { facility(user) and not filter(f_debug); }; filter f_uucp { facility(uucp) and not filter(f_debug); };
filter f_cnews { level(notice, err, crit) and facility(news); }; filter f_cother { level(debug, info, notice, warn) or facility(daemon, mail); };
filter f_ppp { facility(local2) and not filter(f_debug); }; filter f_console { level(warn .. emerg); };
######################## # Log paths ######################## log { source(s_src); filter(f_auth); destination(d_auth); }; log { source(s_src); filter(f_cron); destination(d_cron); }; log { source(s_src); filter(f_daemon); destination(d_daemon); }; log { source(s_src); filter(f_kern); destination(d_kern); }; log { source(s_src); filter(f_lpr); destination(d_lpr); }; log { source(s_src); filter(f_syslog3); destination(d_syslog); destination(d_elastic); }; log { source(s_src); filter(f_user); destination(d_user); }; log { source(s_src); filter(f_uucp); destination(d_uucp); };
log { source(s_src); filter(f_mail); destination(d_mail); }; #log { source(s_src); filter(f_mail); filter(f_info); destination(d_mailinfo); }; #log { source(s_src); filter(f_mail); filter(f_warn); destination(d_mailwarn); }; #log { source(s_src); filter(f_mail); filter(f_err); destination(d_mailerr); };
log { source(s_src); filter(f_news); filter(f_crit); destination(d_newscrit); }; log { source(s_src); filter(f_news); filter(f_err); destination(d_newserr); }; log { source(s_src); filter(f_news); filter(f_notice); destination(d_newsnotice); }; #log { source(s_src); filter(f_cnews); destination(d_console_all); }; #log { source(s_src); filter(f_cother); destination(d_console_all); };
#log { source(s_src); filter(f_ppp); destination(d_ppp); };
log { source(s_src); filter(f_debug); destination(d_debug); }; log { source(s_src); filter(f_error); destination(d_error); }; log { source(s_src); filter(f_messages); destination(d_messages); };
log { source(s_src); filter(f_console); destination(d_console_all); destination(d_xconsole); }; log { source(s_src); filter(f_crit); destination(d_console); };
# All messages send to a remote site # #log { source(s_src); destination(d_net); };
### # Include all config files in /etc/syslog-ng/conf.d/ ### @include "/etc/syslog-ng/conf.d/*.conf"
*Elasticsearch.conf*
@include "scl/elasticsearch/plugin.conf"
source s_net { udp(); }; # All interfaces source s_src { system(); internal(); };
block destination d_elastic() { elasticsearch2( client-lib-dir("/usr/share/elasticsearch/lib/") cluster("searchguard-demo") index("syslog-${YEAR}.${MONTH}.${DAY}") type("syslog") client-mode("https") # cluster-url("https://127.0.0.1:9200/") ); };
log { source(s_net); destination(d_elastic); flags(flow-control); };
*Plugin.conf*
## scl/elasticsearch/plugin.conf -- Elasticsearch destination for syslog-ng ## ## Copyright (c) 2014 BalaBit IT Ltd, Budapest, Hungary ## Copyright (c) 2014 Gergely Nagy <algernon@balabit.hu> ## ## This program is free software; you can redistribute it and/or modify it ## under the terms of the GNU General Public License version 2 as published ## by the Free Software Foundation, or (at your option) any later version. ## ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with this program; if not, write to the Free Software ## Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA ## ## As an additional exemption you are allowed to compile & link against the ## OpenSSL libraries as published by the OpenSSL project. See the file ## COPYING for details.
block destination d_elastic() { elasticsearch2( client-lib-dir("/usr/share/elasticsearch/lib/") index("syslog-${YEAR}.${MONTH}.${DAY}") type("syslog") client-mode("https") cluster-name("searchguard-demo") # cluster-url("https://127.0.0.1:9200/") ); };
Please let me know if there's anything else I can provide to help better understand and resolve this issue.
Thanks,
Allen Olivas
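(The template() omitted from the elasticsearch-http() example earlier in this reply is site-specific. Purely as an illustrative sketch, one option is to reuse the format-json expression already present in the posted config; the trailing \n used with the tcp() destination is probably not wanted here, since the bulk request framing supplies its own newlines:)

```
template("$(format-json --scope selected_macros --scope nv_pairs --exclude DATE --key ISODATE)")
```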
hello,

I've checked: syslog-ng 3.21.1 is the first release that has the elasticsearch-http destination (but it can be backported to 3.20.1, see later).

The first issue you got because of this:

destination d_elastic { tcp("127.0.0.1" port(9200) template("$(format-json --scope selected_macros --scope nv_pairs --exclude DATE --key ISODATE)\n")); };

This aims to send messages to elasticsearch via plain tcp, which won't work. You should use the elasticsearch2 / elasticsearch-http destination instead (the http stack is missing in this case).

The second issue comes from your configuration, as it does not actually use the elasticsearch2 destination (java based). In your configuration you have the *d_elastic* name used for two different things; in the configuration it is allowed to use the same name in different contexts.

You can name a destination group (so you can reference it in multiple log paths), as:

```
destination dst_name { ... };
```

and the usage:

```
log {
  ...
  destination(dst_name); # in this context you reference a destination group
};
```

You can also name a *block* (which is kind of a template or macro), as:

```
block destination dst_name() { ... };
```

and the usage:

```
destination other_destination_name {
  dst_name(); # here you use a block that has a *destination* context
};
```

Oh, and you could do something like this (which could be confusing and strange):

```
block destination myblock() { ... };
destination myblock { myblock(); };
```

In your configuration you have exactly this situation: you create a *d_elastic* destination group with a tcp destination, and also a *d_elastic* block with destination context that contains a destination using elasticsearch2 (the java based stuff). But you only reference the *d_elastic* destination group, not the correct elasticsearch based one. (issue 2)

Those *d_elastic* blocks you defined are not required; use *elasticsearch2* / *elasticsearch-http* directly in the *d_elastic* destination group:

```
destination d_elastic { elasticsearch... };
```

The question arises why you received missing libjvm.so errors if the java based elasticsearch destination is not in use; the answer is that the "@module mod-java" module preloading logic forces syslog-ng to load the java based module. You do not have to do this anymore (older syslog-ng versions required it), so it would be better to remove this line.

Finally, coming back to your real issue, not being able to utilize elasticsearch: because *elasticsearch-http* is not present in syslog-ng-3.20.1 (based on your configuration I assume this is your current version), you have two options:

* ignore *elasticsearch-http* for now, and use the java based *elasticsearch2* (@faxm0dem had a suggestion on how to debug this case)
* "port" it back to 3.20.1 (it is kinda possible)

Here is the starting point: https://github.com/balabit/syslog-ng/blob/1742b11e5cfa6544ece38aaecde96e9b42... (current implementation)

You must store this into your ${prefix}/share/syslog-ng/include/scl/elasticsearch/elastic-http.conf (where prefix is your install path).

In the following line:

body("$(format-json --scope none --omit-empty-values index._index=`index` index._type=`type` index._id=`custom_id`)\n`template`")

you must remove *--omit-empty-values*, as it is not supported in your current version.

Depending on your elasticsearch version, you may have to remove additional values, because the *type* has been removed in newer versions of elasticsearch.
See which fields it affects in the following blogpost: https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-with-elastic-stac...

If your elasticsearch does not require the *type*, you can remove the *index._type=`type`* pair, and after that it should be fine.

--
kokan
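(For reference, with --omit-empty-values removed the body() line in question would read roughly as follows; note that `index`, `type`, `custom_id` and `template` are the SCL block's own backtick-referenced parameters, not values to fill in by hand:)

```
body("$(format-json --scope none index._index=`index` index._type=`type` index._id=`custom_id`)\n`template`")
```

If the target Elasticsearch no longer accepts document types, the index._type=`type` pair would be dropped from this same line, as noted above.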
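(Putting the advice above together, a minimal corrected conf.d/elasticsearch.conf might look roughly like the sketch below. The cluster name, library path and index pattern are simply taken from the posted plugin.conf and are not verified here; the point is only that the destination group references elasticsearch2() directly, so no d_elastic block is needed and the name is no longer defined twice:)

```
source s_net { udp(); }; # All interfaces

destination d_elastic {
    elasticsearch2(
        client-lib-dir("/usr/share/elasticsearch/lib/")
        index("syslog-${YEAR}.${MONTH}.${DAY}")
        type("syslog")
        client-mode("https")
        cluster("searchguard-demo")
        # cluster-url("https://127.0.0.1:9200/")   # left commented, as in the original post
    );
};

log { source(s_net); destination(d_elastic); flags(flow-control); };
```

With this in place, the @module mod-java line in syslog-ng.conf can also be dropped, as suggested above.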
Hi Peter,

I updated syslog-ng (and Ubuntu, actually). Currently running Ubuntu 18.04 and syslog-ng 3.22.1.

I added the elasticsearch-http destination to syslog-ng.conf but am getting an error parsing the destination statement:

destination d_elasticsearch_http {
    elasticsearch-http(
        index("my-index")
        type("my-type")
        url("http://es-node1:9200/_bulk" "http://es-node2:9200/_bulk" "http://es-node3:9200/_bulk")
        persist-name("my-persist"));
};

The error I'm receiving is what I see after "sudo syslog-ng -c syslog-ng.service -e -v":

Error parsing destination statement, destination plugin elasticsearch-http not found in /etc/syslog-ng/syslog-ng.conf:84:3-84:21:
79      #destination d_net { tcp("127.0.0.1" port(1000) log_fifo_size(1000)); };
80
81      #Elasticsearch_HTTP
82
83      destination d_elasticsearch_http {
84---->   elasticsearch-http(
84---->   ^^^^^^^^^^^^^^^^^^
85          index("syslog-${YEAR}.${MONTH}.${DAY}"))
86          type("syslog-ng")
87          url("http://127.0.0.1:9200/_bulk")
88          persist-name("my-persist"));
89      };

I did an "apt-cache search syslog-ng-mod" but don't see a specific plugin for elasticsearch-http. I'm not sure A) how to verify it's even installed, or B) how to install it if it's missing. I'm also not sure how to configure the plugin and the .conf plugin statement.

Thanks for your help and advice with this.
(issue 2) Those defined *d_elastic* blocks you do not require; use the *elasticsearch2* / *elasticsearch-http* directly in the *d_elastic* destination group: ``` destination d_elastic { elasticsearch... }; ``` The question raises why did you recieved missing jvm.so errors iff the java based elasticsearch destination is not in use; the answer is because of "@module mod-java" module preloading logic forces syslog-ng to load the java based module. This actually you do not have to do anymore (older syslog-ng version required it), so it would be better to remove this line. finally coming back to your real issue, not being able to utilize elasticsearch. as because the *elasticsearch-http* is not present in syslog-ng-3.20.1 (based on your configuration I assume this is your current version), you have two option. * ignore the *elasticsearch-http* for now, and use *elasticsearch2* java based (@faxm0dem had suggestion on how to debug this case) * "port" back to the 3.20.1 (it is kinda possible) Here is the starting point: https://github.com/balabit/syslog-ng/blob/1742b11e5cfa6544ece38aaecde96e9b42... (current implementation) You must store this into your ${prefix}/share/syslog-ng/include/scl/elasticsearch/elastic-http.conf (where prefix is your install path) In the following line: body("$(format-json --scope none --omit-empty-values index._index=`index` index._type=`type` index._id=`custom_id`)\n`template`") you must remove the *--omit-empty-values* as it is not supported in current version. Depending on your elasticsearch version, you have to remove additional values, because the *type* has been removed in the newer version of elasticsearch. See about it, and which fields it effects in the following blogpost: https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-with-elastic-stac... If your elasticsearch does not require the *type*, you could remove from the *index._type=`type`*, and after that it should be fine. -- kokan ________________________________________ From: syslog-ng <syslog-ng-bounces@lists.balabit.hu> on behalf of Allen Olivas <allen.olivas@infodefense.com> Sent: 09 July 2019 17:31 To: Syslog-ng users' and developers' mailing list Subject: Re: [syslog-ng] Cannot send Syslog-ng to Elasticsearch CAUTION: This email originated from outside of the organization. Do not follow guidance, click links, or open attachments unless you recognize the sender and know the content is safe. Hey Peter, I'll look into using the elasticsearch-http() destination. Does the elasticsearch-http() destination go directly into syslog-ng.conf or do I need to make a new .conf file (like elastic-http.conf) and add it to the conf.d/ directory? OR does it go in syslog-ng.conf and also the usr/share/syslog-ng/include/scl/elasticsearch/plugin.conf file? Per your request (and I do hope this helps illuminate things) I'm uploading our config files for syslog-ng, syslog-ng/.conf.d/elasticsearch.conf, and the plugin.conf Syslog-ng.conf: @version: 3.20 @module mod-java @include "scl.conf" @define allow-config-dups 1 # Syslog-ng configuration file, compatible with default Debian syslogd # installation. # First, set some global options. options { chain_hostnames(off); flush_lines(0); use_dns(no); use_fqdn(no); owner("root"); group("adm"); perm(0640); stats_freq(0); bad_hostname("^gconfd$"); }; ######################## # Sources ######################## # This is the default behavior of sysklogd package # Logs may come from unix stream, but not from another machine. 
# source s_src { system(); internal(); };

# If you wish to get logs from remote machine you should uncomment
# this and comment the above source line.
# source s_net { tcp(port(514)); udp(port(514)); syslog(); };

######
# patterndb parser
parser pattern_db { db-parser( file("/opt/syslog-ng/etc/patterndb.xml") ); };

########################
# Destinations
########################
# First some standard logfile
#
destination d_auth { file("/var/log/auth.log"); };
destination d_cron { file("/var/log/cron.log"); };
destination d_daemon { file("/var/log/daemon.log"); };
destination d_kern { file("/var/log/kern.log"); };
destination d_lpr { file("/var/log/lpr.log"); };
destination d_mail { file("/var/log/mail.log"); };
destination d_syslog { file("/var/log/syslog"); };
destination d_user { file("/var/log/user.log"); };
destination d_uucp { file("/var/log/uucp.log"); };

# This files are the log come from the mail subsystem.
#
destination d_mailinfo { file("/var/log/mail.info"); };
destination d_mailwarn { file("/var/log/mail.warn"); };
destination d_mailerr { file("/var/log/mail.err"); };

# Logging for INN news system
#
destination d_newscrit { file("/var/log/news/news.crit"); };
destination d_newserr { file("/var/log/news/news.err"); };
destination d_newsnotice { file("/var/log/news/news.notice"); };

# Some 'catch-all' logfiles.
#
destination d_debug { file("/var/log/debug"); };
destination d_error { file("/var/log/error"); };
destination d_messages { file("/var/log/messages"); };

# The root's console.
#
destination d_console { usertty("root"); };

# Virtual console.
#
destination d_console_all { file(`tty10`); };

# The named pipe /dev/xconsole is for the `xconsole' utility. To use it,
# you must invoke `xconsole' with the `-file' option:
#
# $ xconsole -file /dev/xconsole [...]
#
destination d_xconsole { pipe("/dev/xconsole"); };

# Send the messages to an other host
#
#destination d_net { tcp("127.0.0.1" port(1000) log_fifo_size(1000)); };

#####
### Elasticsearch Destination
#
destination d_elastic { tcp("127.0.0.1" port(9200) template("$(format-json --scope selected_macros --scope nv_pairs --exclude DATE --key ISODATE)\n")); };

# Debian only
destination d_ppp { file("/var/log/ppp.log"); };

########################
# Filters
########################
# Here's come the filter options. With this rules, we can set which
# message go where.
filter f_dbg { level(debug); };
filter f_info { level(info); };
filter f_notice { level(notice); };
filter f_warn { level(warn); };
filter f_err { level(err); };
filter f_crit { level(crit .. emerg); };
filter f_debug { level(debug) and not facility(auth, authpriv, news, mail); };
filter f_error { level(err .. emerg); };
filter f_messages { level(info,notice,warn) and not facility(auth,authpriv,cron,daemon,mail,news); };
filter f_auth { facility(auth, authpriv) and not filter(f_debug); };
filter f_cron { facility(cron) and not filter(f_debug); };
filter f_daemon { facility(daemon) and not filter(f_debug); };
filter f_kern { facility(kern) and not filter(f_debug); };
filter f_lpr { facility(lpr) and not filter(f_debug); };
filter f_local { facility(local0, local1, local3, local4, local5, local6, local7) and not filter(f_debug); };
filter f_mail { facility(mail) and not filter(f_debug); };
filter f_news { facility(news) and not filter(f_debug); };
filter f_syslog3 { not facility(auth, authpriv, mail) and not filter(f_debug); };
filter f_user { facility(user) and not filter(f_debug); };
filter f_uucp { facility(uucp) and not filter(f_debug); };
filter f_cnews { level(notice, err, crit) and facility(news); };
filter f_cother { level(debug, info, notice, warn) or facility(daemon, mail); };
filter f_ppp { facility(local2) and not filter(f_debug); };
filter f_console { level(warn .. emerg); };

########################
# Log paths
########################
log { source(s_src); filter(f_auth); destination(d_auth); };
log { source(s_src); filter(f_cron); destination(d_cron); };
log { source(s_src); filter(f_daemon); destination(d_daemon); };
log { source(s_src); filter(f_kern); destination(d_kern); };
log { source(s_src); filter(f_lpr); destination(d_lpr); };
log { source(s_src); filter(f_syslog3); destination(d_syslog); destination(d_elastic); };
log { source(s_src); filter(f_user); destination(d_user); };
log { source(s_src); filter(f_uucp); destination(d_uucp); };
log { source(s_src); filter(f_mail); destination(d_mail); };
#log { source(s_src); filter(f_mail); filter(f_info); destination(d_mailinfo); };
#log { source(s_src); filter(f_mail); filter(f_warn); destination(d_mailwarn); };
#log { source(s_src); filter(f_mail); filter(f_err); destination(d_mailerr); };
log { source(s_src); filter(f_news); filter(f_crit); destination(d_newscrit); };
log { source(s_src); filter(f_news); filter(f_err); destination(d_newserr); };
log { source(s_src); filter(f_news); filter(f_notice); destination(d_newsnotice); };
#log { source(s_src); filter(f_cnews); destination(d_console_all); };
#log { source(s_src); filter(f_cother); destination(d_console_all); };
#log { source(s_src); filter(f_ppp); destination(d_ppp); };
log { source(s_src); filter(f_debug); destination(d_debug); };
log { source(s_src); filter(f_error); destination(d_error); };
log { source(s_src); filter(f_messages); destination(d_messages); };
log { source(s_src); filter(f_console); destination(d_console_all); destination(d_xconsole); };
log { source(s_src); filter(f_crit); destination(d_console); };

# All messages send to a remote site
#
#log { source(s_src); destination(d_net); };

###
# Include all config files in /etc/syslog-ng/conf.d/
###
@include "/etc/syslog-ng/conf.d/*.conf"

Elasticsearch.conf:

@include "scl/elasticsearch/plugin.conf"

source s_net { udp(); }; # All interfaces
source s_src { system(); internal(); };

block destination d_elastic() {
    elasticsearch2(
        client-lib-dir("/usr/share/elasticsearch/lib/")
        cluster("searchguard-demo")
        index("syslog-${YEAR}.${MONTH}.${DAY}")
        type("syslog")
        client-mode("https")
        # cluster-url("https://127.0.0.1:9200/")
    );
};

log { source(s_net); destination(d_elastic); flags(flow-control); };

Plugin.conf:

## scl/elasticsearch/plugin.conf -- Elasticsearch destination for syslog-ng
##
## Copyright (c) 2014 BalaBit IT Ltd, Budapest, Hungary
## Copyright (c) 2014 Gergely Nagy <algernon@balabit.hu>
##
## This program is free software; you can redistribute it and/or modify it
## under the terms of the GNU General Public License version 2 as published
## by the Free Software Foundation, or (at your option) any later version.
##
## This program is distributed in the hope that it will be useful,
## but WITHOUT ANY WARRANTY; without even the implied warranty of
## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
## GNU General Public License for more details.
##
## You should have received a copy of the GNU General Public License
## along with this program; if not, write to the Free Software
## Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
##
## As an additional exemption you are allowed to compile & link against the
## OpenSSL libraries as published by the OpenSSL project. See the file
## COPYING for details.

block destination d_elastic() {
    elasticsearch2(
        client-lib-dir("/usr/share/elasticsearch/lib/")
        index("syslog-${YEAR}.${MONTH}.${DAY}")
        type("syslog")
        client-mode("https")
        cluster-name("searchguard-demo")
        # cluster-url("https://127.0.0.1:9200/")
    );
};

Please let me know if there's anything else I can provide to help better understand and resolve this issue.

Thanks,
Allen Olivas
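(To make the back-port note above concrete: after the edits kokan describes, the body() line in the copied elastic-http.conf would look roughly like the sketch below. This is illustrative only, and it assumes an Elasticsearch version that no longer uses mapping types, which is why index._type is dropped along with --omit-empty-values.)

```
body("$(format-json --scope none index._index=`index` index._id=`custom_id`)\n`template`")
```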
Hello,

OK, I've got it configured, but now I think it's not building the index or updating Elasticsearch because of HTTPS and authentication. I have Search Guard set up for Elasticsearch and Kibana. I'm assuming I need syslog-ng to use the SSL certs Search Guard has in place for Elasticsearch.

My new conf contains:

#Elasticsearch_HTTP
destination d_elastic {
    elasticsearch-http(
        index("logs-${YEAR}${MONTH}")
        type("test")
        url("https://127.0.0.1:9200/_bulk")
    );
};

And:

log { source(s_src); destination(d_elastic); };

I see now that elasticsearch-http() is referenced in /usr/share/syslog-ng/include/scl/elasticsearch/elastic-http.conf. I think I'm almost done getting this thing working! 😊 Any tips on the authentication part?

Thank you all for your support!
Hi, On Tue, Jul 09, 2019 at 09:56:50PM +0000, Allen Olivas wrote:
OK, I've got it configured, but now I think it's not building the index or updating Elasticsearch because of HTTPS and authentication. I have Search Guard set up for Elasticsearch and Kibana. I'm assuming I need syslog-ng to use the SSL certs Search Guard has in place for Elasticsearch.
You can use almost any authentication method supported by Search Guard. We use client certificates for syslog-ng, and here's what the config looks like:

destination d_coloss {
    elasticsearch-http(
        url("https://node01:9200/_bulk" "https://node02:9200/_bulk")
        index("syslog-${YEAR}-${MONTH}-${DAY}")
        time-zone("UTC")
        type("")
        workers(4)
        batch_lines(128)
        batch_timeout(10000)
        timeout(100)
        tls(
            ca-file("/path/to/ca.pem")
            cert-file("/path/to/syslog_ng.crt.pem")
            key-file("/path/to/syslog_ng.key.pem")
            peer-verify(yes)
        )
    );
};

And here are the Search Guard permissions for the syslog-ng user's role:

sg_role_syslog_ng:
  indices:
    "syslog":
      "*":
        - WRITE
        - CREATE_INDEX
        - indices:admin/mapping/put
  cluster:
    - CLUSTER_COMPOSITE_OPS
    - cluster:monitor/nodes/info
    - cluster:monitor/nodes/liveness
    - cluster:monitor/state
Hello,

I'm trying to authenticate with Search Guard. The destination I have specified includes the tls() configuration you suggested:

destination d_elastic {
    elasticsearch-http(
        url("https://127.0.0.1:9200/_bulk")
        index("logs-${YEAR}.${MONTH}.${DAY}")
        type("syslog")
        client-mode("transport")
        tls(
            ca-file("/path/to/ca.pem")
            cert-file("/path/to/syslog_ng.crt.pem")
            key-file("/path/to/syslog_ng.key.pem")
            peer-verify(yes)
        )
    );
};

My problem now is it still doesn't seem to authenticate or work with elasticsearch. Should I have an entry in the elasticsearch.yml? Searchguard has already been configured for elasticsearch and kibana. Also is your elastic-http-plugin.conf referencing the yml file or the client-mode ("searchguard")? I'm not entirely sure what all needs to be configured.

The specific errors I'm seeing are below:

[2019-07-10T01:44:39.077952] Server disconnected while preparing messages for sending, trying again; driver='d_elastic#0', location='#buffer:4:3', worker_index='0', time_reopen='60', batch_size='3'
[2019-07-10T01:44:39.100211] curl: error sending HTTP request; url='https://127.0.0.1:9200/_bulk', error='Problem with the local SSL certificate', worker_index='3', driver='d_elastic#0', location='#buffer:4:3'
[2019-07-10T01:44:39.100230] Target server down, but no alternative server available. Falling back to retrying after time-reopen(); url='https://127.0.0.1:9200/_bulk', worker_index='3', driver='d_elastic#0', location='#buffer:4:3'

When I check the indices available I do not see anything created for syslog-ng. I feel like it's almost configured, so I'm pretty excited to get this completed and documented on my end. Thanks again for all the support.
On Wed, Jul 10, 2019 at 06:47:48AM +0000, Allen Olivas wrote:
My problem now is it still doesn't seem to authenticate or work with elasticsearch.
How did you create the user certificate? You can test it using curl: curl --key /path/to/key --cert /path/to/cert https://localhost:9200/
Should I have an entry in the elasticsearch.yml? Searchguard has already been configured for elasticsearch and kibana. Also is your elastic-http-plugin.conf referencing the yml file or the client-mode ("searchguard")? I'm not entirely sure what all needs to be configured.
Client-mode is not a valid option for the elasticsearch-http() driver, so don't use it (it was an option to the java elastic dest).
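With that option dropped, the destination from the previous message would look roughly like this (a sketch only; the URL, index name and placeholder certificate paths are the ones already quoted in this thread):

```
destination d_elastic {
    elasticsearch-http(
        url("https://127.0.0.1:9200/_bulk")
        index("logs-${YEAR}.${MONTH}.${DAY}")
        type("syslog")
        tls(
            ca-file("/path/to/ca.pem")
            cert-file("/path/to/syslog_ng.crt.pem")
            key-file("/path/to/syslog_ng.key.pem")
            peer-verify(yes)
        )
    );
};
```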
[2019-07-10T01:44:39.100211] curl: error sending HTTP request; url='https://127.0.0.1:9200/_bulk', error='Problem with the local SSL certificate', worker_index='3', driver='d_elastic#0', location='#buffer:4:3'
Again, test the client certificate with curl. My guess is that you generated a node certificate instead of a client certificate.
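An illustrative way to inspect the certificate (assuming OpenSSL is available on the host) is to print its subject and allowed purposes; a client certificate should report "SSL client : Yes":

```
# Show the subject and what the certificate may be used for.
openssl x509 -in /path/to/syslog_ng.crt.pem -noout -subject -purpose
```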
Hi,

I've just used the certificate generator that Search Guard ships and set it up for the PoC. When I run that curl I get connection refused:

sudo curl --key /etc/elasticsearch/CN=demouser.key.pem --cert /etc/elasticsearch/CN=demouser.crt.pem https://localhost:9200/
curl: (7) Failed to connect to localhost port 9200: Connection refused

I can share configs and anything else you might need. Any thoughts? Currently my integration is broken. ☹
On Wed, Jul 10, 2019 at 05:16:01PM +0000, Allen Olivas wrote:
curl: (7) Failed to connect to localhost port 9200: Connection refused
This probably means that your elasticsearch instance doesn't listen on the right interface. Can you share the output of the following command please? netstat -tpln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address  State   PID/Program name
tcp        0      0 127.0.0.53:53      0.0.0.0:*        LISTEN  1032/systemd-resolv
tcp        0      0 0.0.0.0:22         0.0.0.0:*        LISTEN  1874/sshd
tcp        0      0 0.0.0.0:25         0.0.0.0:*        LISTEN  2145/master
tcp        0      0 127.0.0.1:6010     0.0.0.0:*        LISTEN  13557/sshd: aolivas
tcp        0      0 127.0.0.1:6011     0.0.0.0:*        LISTEN  15586/sshd: aolivas
tcp        0      0 0.0.0.0:5601       0.0.0.0:*        LISTEN  1314/node
tcp        0      0 0.0.0.0:1515       0.0.0.0:*        LISTEN  2329/ossec-authd
tcp6       0      0 :::9200            :::*             LISTEN  1738/java
tcp6       0      0 :::9300            :::*             LISTEN  1738/java
tcp6       0      0 :::22              :::*             LISTEN  1874/sshd
tcp6       0      0 :::55000           :::*             LISTEN  1734/nodejs
tcp6       0      0 :::25              :::*             LISTEN  2145/master
tcp6       0      0 ::1:6010           :::*             LISTEN  13557/sshd: aolivas
tcp6       0      0 ::1:6011           :::*             LISTEN  15586/sshd: aolivas
Hi, On Wed, Jul 10, 2019 at 08:22:38PM +0000, Allen Olivas wrote:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address  State   PID/Program name
tcp        0      0 127.0.0.53:53      0.0.0.0:*        LISTEN  1032/systemd-resolv
tcp        0      0 0.0.0.0:22         0.0.0.0:*        LISTEN  1874/sshd
tcp        0      0 0.0.0.0:25         0.0.0.0:*        LISTEN  2145/master
tcp        0      0 127.0.0.1:6010     0.0.0.0:*        LISTEN  13557/sshd: aolivas
tcp        0      0 127.0.0.1:6011     0.0.0.0:*        LISTEN  15586/sshd: aolivas
tcp        0      0 0.0.0.0:5601       0.0.0.0:*        LISTEN  1314/node
tcp        0      0 0.0.0.0:1515       0.0.0.0:*        LISTEN  2329/ossec-authd
tcp6       0      0 :::9200            :::*             LISTEN  1738/java
tcp6       0      0 :::9300            :::*             LISTEN  1738/java
tcp6       0      0 :::22              :::*             LISTEN  1874/sshd
tcp6       0      0 :::55000           :::*             LISTEN  1734/nodejs
tcp6       0      0 :::25              :::*             LISTEN  2145/master
tcp6       0      0 ::1:6010           :::*             LISTEN  13557/sshd: aolivas
tcp6       0      0 ::1:6011           :::*             LISTEN  15586/sshd: aolivas
It seems to me your ES is listening on IPv6 only. Please retry after setting the following in your elasticsearch.yml:

network.host:
  - 127.0.0.1

And then curl to 127.0.0.1 explicitly (localhost may resolve to ::1).
I made the changes. Here are the results of the netstat and the curl to 127.0.0.1:9200:

aolivas@wazuhserver:~$ sudo netstat -tpln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address  State   PID/Program name
tcp        0      0 127.0.0.53:53      0.0.0.0:*        LISTEN  1032/systemd-resolv
tcp        0      0 0.0.0.0:22         0.0.0.0:*        LISTEN  1874/sshd
tcp        0      0 0.0.0.0:25         0.0.0.0:*        LISTEN  2145/master
tcp        0      0 127.0.0.1:6010     0.0.0.0:*        LISTEN  27958/sshd: kwheele
tcp        0      0 127.0.0.1:6011     0.0.0.0:*        LISTEN  7040/sshd: aolivas@
tcp        0      0 0.0.0.0:5601       0.0.0.0:*        LISTEN  1314/node
tcp        0      0 0.0.0.0:1515       0.0.0.0:*        LISTEN  2329/ossec-authd
tcp6       0      0 127.0.0.1:9200     :::*             LISTEN  7771/java
tcp6       0      0 127.0.0.1:9300     :::*             LISTEN  7771/java
tcp6       0      0 :::22              :::*             LISTEN  1874/sshd
tcp6       0      0 :::55000           :::*             LISTEN  1734/nodejs
tcp6       0      0 :::25              :::*             LISTEN  2145/master
tcp6       0      0 ::1:6010           :::*             LISTEN  27958/sshd: kwheele
tcp6       0      0 ::1:6011           :::*             LISTEN  7040/sshd: aolivas@

curl 'https://127.0.0.1:9200/'
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

I've been fighting with integrating syslog-ng with Elasticsearch and Search Guard. The problem I'm having is the SSL certs and the options for the elasticsearch.yml file. For test purposes I'm about to create my own root CA, client certificates and keys, etc. and add them to the elasticsearch.yml file. I also have to update the filebeat.yml file so that Filebeat and Elasticsearch can authenticate and communicate. Once those are in place I think the tls() statement should work, right? Any advice? I'm not too experienced with SSL/TLS certs, so I'm going into this a little cautiously.

Thanks,
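(On the "create my own root CA and client certificates" plan: a minimal OpenSSL sketch for a throw-away test CA and client certificate is below. All file names and the CN are made up for illustration, and Search Guard additionally needs to trust the CA and map the client DN to a role before the certificate is of any use.)

```
# Hypothetical file names; adjust to your layout.
# 1. Test root CA
openssl genrsa -out test-ca.key 4096
openssl req -x509 -new -key test-ca.key -days 365 -subj "/CN=test-root-ca" -out test-ca.pem

# 2. Key and CSR for the syslog-ng client
openssl genrsa -out syslog-ng-client.key 2048
openssl req -new -key syslog-ng-client.key -subj "/CN=syslog-ng-client" -out syslog-ng-client.csr

# 3. Sign the client certificate with the test CA
openssl x509 -req -in syslog-ng-client.csr -CA test-ca.pem -CAkey test-ca.key \
    -CAcreateserial -days 365 -out syslog-ng-client.pem
```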
OK, so my attempt to build and add the certificates and CA still did not work. On a whim I pointed the tls() statement at the existing demo certs from Search Guard. After restarting syslog-ng I found the service was still running (I don't know why it worked this time and not the million other times I tried it) but data is still not traversing to Elasticsearch due to (I believe) two new errors. These two errors are most likely related and not separate errors altogether.

Here are the two errors I'm seeing:

1: From /var/log/message - Server returned with a 4XX (client errors) status code, which means we are not authorized or the URL is not found.;
2: From /var/log/error - syslog-ng[18498]: Message(s) dropped while sending message to destination; driver='d_elastic#0', worker_index='1', time_reopen='60', batch_size='3'

I also see the following from 'syslog-ng -c syslog-ng.service -e -v':

Unknown argument, adding it to __VARARGS__; argument='tls', value='\x0a ca-file("/etc/elasticsearch/root-ca.pem")\x0a cert-file("/etc/elasticsearch/esnode.pem")\x0a key-file("/etc/elasticsearch/esnode-key.pem")\x0a peer-verify(yes)\x0a ', reference='/etc/syslog-ng/syslog-ng.conf:83:3'

I don't know why that would be an unknown argument, but maybe that's the problem right there? Thoughts?

Thanks for all your support, everyone!
On Thu, Jul 11, 2019 at 09:48:47PM +0000, Allen Olivas wrote:
OK, so my attempt to build and add the certificates and CA still did not work. On a whim I pointed the tls() statement at the existing demo certs from Search Guard.
After restarting syslog-ng I found the service was still running (I don't know why it worked this time and not the million other times I tried it) but data is still not traversing to elasticsearch due to (I believe) two new errors. These two errors are most likely related and not separate errors altogether.
Here are the two errors I'm seeing: 1: From /var/log/message - Server returned with a 4XX (client errors) status code, which means we are not authorized or the URL is not found.; 2: From /var/log/error - syslog-ng[18498]: Message(s) dropped while sending message to destination; driver='d_elastic#0', worker_index='1', time_reopen='60', batch_size='3'
That looks like progress to me! What does curl say? (use -k or --capath)

Also, don't test with syslog-ng until you have sorted out that:

1. The connectivity with curl is established, e.g. `curl --cert ... --key ... https://127.0.0.1:9200` gives you a 40x HTTP status code
2. The permissions with Search Guard are correct, e.g. `curl ... https://127.0.0.1:9200/_bulk -Hcontent-type:application/json -d '{...}'` gives you a 20x

Once that's established, you can start hooking up syslog-ng.
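A concrete shape for that second test (a sketch; the index name is made up, the cert paths are the placeholders used above, and the _bulk body must be newline-delimited JSON terminated by a newline):

```
curl --cacert /path/to/ca.pem --cert /path/to/syslog_ng.crt.pem --key /path/to/syslog_ng.key.pem \
     -H 'Content-Type: application/json' -XPOST 'https://127.0.0.1:9200/_bulk' \
     --data-binary $'{"index":{"_index":"syslog-test"}}\n{"message":"hello from curl"}\n'
```

A 2xx response with "errors":false would suggest the role has the needed write and create-index permissions.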
OK, I curl'd with the cert and key to 127.0.0.1:9200 and got:

"curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above."

The cert and key are the demo Search Guard ones, esnode.pem and esnode-key.pem. Once I can wrap my head around how this is all working together, I'll swap those out for legitimate certs and keys. So that's where I stand. I think once I can resolve this part I should be good to go.
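(For what it's worth, error 60 just means curl does not trust the CA that signed the node certificate; pointing curl at the cluster's root CA usually clears it. A sketch using the demo file names that appear earlier in the thread:)

```
curl --cacert /etc/elasticsearch/root-ca.pem \
     --cert /etc/elasticsearch/CN=demouser.crt.pem \
     --key  /etc/elasticsearch/CN=demouser.key.pem \
     https://127.0.0.1:9200/
```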
Hi, On Mon, Jul 08, 2019 at 09:22:51PM +0000, Allen Olivas wrote:
For issue 1 I'm not sure what to do or how to resolve it. For issue 2, I know for certain libjvm does exist, and I've mapped the LD_LIBRARY_PATH to the directory libjvm.so resides in.
Please be aware that passing environment variables to daemons can be tricky (no, I won't rant on systemd). Can you check whether it's set for syslog-ng (assuming Linux here) by looking at /proc/<pid of syslog-ng>/environ? You could also run syslog-ng in the foreground to see more output: `syslog-ng -Fdv`

Cheers
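A quick way to do that check from a shell (a sketch; it assumes a single syslog-ng process and that pidof is available):

```
# The environment in /proc is NUL-separated; turn it into one variable per line.
sudo tr '\0' '\n' < /proc/$(pidof syslog-ng)/environ | grep LD_LIBRARY_PATH
```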
participants (4)
- Allen Olivas
- Fabien Wernli
- Nik Ambrosch
- Peter Kokai (pkokai)