[syslog-ng] disk-buffer in elasticsearch2 destination loses messages if docker container is killed

Jose Angel Santiago jasantiago at stratio.com
Thu Sep 13 13:50:00 UTC 2018


Hi,

I'm running syslog-ng (with an elasticsearch2 destination configured)
inside a Docker container, and I'm trying to avoid losing messages when I
kill the container and start it again.

This is my scenario:

- A service that produces 20 log lines per second.
- A syslog-ng instance reading from a wildcard-file source (in practice it
only reads the logs of the service above; let's call it syslog-agent),
which sends all logs through a network destination to another syslog-ng
instance (the one running in a Docker container; let's call it
syslog-relay). A simplified sketch of this agent configuration follows the
relay configuration below.
- The syslog-relay sends messages to an Elasticsearch instance, with the
following configuration:

options {
    chain-hostnames(no);
    use-dns(no);
    keep-hostname(yes);
    owner("syslog-ng");
    group("stratio");
    perm(0640);
    time-reap(30);
    mark-freq(10);
    stats-freq(0);
    bad-hostname("^gconfd$");
    flush-lines(100);
    log-fifo-size(1000);
    };

destination d_elastic_default_0 {
    elasticsearch2(
        cluster("myelastic")
        cluster-url("https://myelastic.logs:9200")
        client_mode("https")
        index("default")
        type("log")
        flush-limit(1)
        disk-buffer(
            mem-buf-size(16M)
            disk-buf-size(16M)
            reliable(yes)
            dir("/syslog-ng/log")
        )
        http-auth-type("clientcert")
        java-keystore-filepath("/etc/syslog-ng/certificates/syslog-relay.jks")
        java-keystore-password("XXXXXX")
        java-truststore-filepath("/etc/syslog-ng/certificates/ca-bundle.jks")
        java-truststore-password("XXXXXXXXXX")
    );
};
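
For completeness, the syslog-agent side looks roughly like the following
(a simplified sketch; the base-dir and the relay address are placeholders):

source s_service_logs {
    wildcard-file(
        base-dir("/var/log/myservice")      # placeholder path
        filename-pattern("*.log")
    );
};

destination d_relay {
    # placeholder relay address; the relay listens on a TCP network source
    network("syslog-relay" port(514) transport("tcp"));
};

log { source(s_service_logs); destination(d_relay); };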

- The directory "/syslog-ng/log" is mapped to the path "/tmp/buffer" on the
host where the Docker container runs, so the buffer file is not lost when I
kill the container (see the docker command sketched below).
- I've set flush-limit to 1 because I thought that way I would lose at most
1 message.
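
For reference, I start the relay container with something like this (the
image name is a placeholder):

docker run -d --name syslog-relay \
    -v /tmp/buffer:/syslog-ng/log \
    my-syslog-ng-image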

This architecture works fine (flush-limit=1 makes it very slow, but that is
acceptable for this test), but if I kill the syslog-relay Docker container,
wait 5 to 10 seconds and start it again from scratch, several hundred log
messages are missing in Elasticsearch. I verify this by stopping the logger
service and letting the syslog-ng agent and relay finish processing the
enqueued messages.
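
The kill-and-restart test is essentially the following (again, the image
name is a placeholder):

docker kill syslog-relay
sleep 10
docker rm syslog-relay
docker run -d --name syslog-relay \
    -v /tmp/buffer:/syslog-ng/log \
    my-syslog-ng-image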

I can see in the syslog-agent stats that all log messages have been
processed, so the problem seems to be on the syslog-relay side.
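
For reference, I read the agent counters with something like:

syslog-ng-ctl stats | grep processed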

Is this behaviour expected? If so, how can I protect against message loss
when the syslog-relay Docker container is killed unexpectedly?

Thanks in advance.


-- 

Jose Angel Santiago


Vía de las dos Castillas, 33, Ática 4, 3ª Planta

28224 Pozuelo de Alarcón, Madrid, Spain

+34 918 286 473 | www.stratio.com