Hello,

My initial guess would be that you are not storing the persist file outside of the container.
syslog-ng keeps a so-called persist file, which records state such as file source positions and disk-buffer assignments, so it is important to keep it across restarts as well.
You can find that file under $PREFIX/var/syslog-ng.persist
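For example, you could map both the disk-buffer directory and the directory holding the persist file to host volumes. This is only a sketch: the image name "my-syslog-relay-image" is a placeholder, and it assumes $PREFIX is /opt/syslog-ng; adjust the paths to your installation.

```shell
# Host paths (left) are examples; container paths (right) match the
# original post's config, except the persist dir, which is assumed
# to be $PREFIX/var (here /opt/syslog-ng/var).
#   /tmp/buffer          -> /syslog-ng/log       (disk-buffer files)
#   /tmp/syslog-persist  -> /opt/syslog-ng/var   (syslog-ng.persist)
docker run -d --name syslog-relay \
  -v /tmp/buffer:/syslog-ng/log \
  -v /tmp/syslog-persist:/opt/syslog-ng/var \
  my-syslog-relay-image
```

With both volumes mapped, a restarted container should find its reliable disk-buffer files and the persist entries that associate them with the destination, instead of starting with empty state.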


--
Kokan

On Thu, Sep 13, 2018 at 3:50 PM Jose Angel Santiago <jasantiago@stratio.com> wrote:
Hi,

I'm running syslog-ng (with an elasticsearch2 destination configured) within a docker container, and I'm trying to avoid losing messages if I kill the docker container and start it again.

This is my scenario:

- A service which produces 20 lines of log per second
- A syslog-ng instance reading from a wildcard-file source (though in practice it only reads logs from the above service; let's call it syslog-agent), which sends all logs through a network destination to another syslog-ng instance (the one running in a docker container; let's call it syslog-relay).
- The syslog-relay sends messages to an elasticsearch instance, with following configuration:

options {
    chain-hostnames(no);
    use-dns(no);
    keep-hostname(yes);
    owner("syslog-ng");
    group("stratio");
    perm(0640);
    time-reap(30);
    mark-freq(10);
    stats-freq(0);
    bad-hostname("^gconfd$");
    flush-lines(100);
    log-fifo-size(1000);
    };

destination d_elastic_default_0 {
    elasticsearch2(
        cluster("myelastic")
        cluster-url("https://myelastic.logs:9200")
        client_mode("https")
        index("default")
        type("log")
        flush-limit(1)
        disk-buffer(
            mem-buf-size(16M)
            disk-buf-size(16M)
            reliable(yes)
            dir("/syslog-ng/log")
        )
        http-auth-type("clientcert")
        java-keystore-filepath("/etc/syslog-ng/certificates/syslog-relay.jks")
        java-keystore-password("XXXXXX")
        java-truststore-filepath("/etc/syslog-ng/certificates/ca-bundle.jks")
        java-truststore-password("XXXXXXXXXX")
    );
};


- The dir "/syslog-ng/log" is mapped to the path "/tmp/buffer" on the host where the docker container is running, so the buffer file is not lost when I kill the docker container.
- I've set flush-limit to 1 because I thought that way I would lose at most 1 message.

This architecture is working fine (flush-limit=1 makes it very slow, but that's OK for this test), but if I kill the syslog-relay docker container, wait 5 to 10 seconds, and start it again from scratch, I can see that several hundred log messages are missing from elasticsearch. I verify this by stopping the logger service and letting the syslog-ng agent & relay finish processing the enqueued messages.

I can see in the syslog-agent stats that all log messages have been processed, so it seems the problem is on the syslog-relay side.

Is this behaviour expected? If so, how can I protect against message loss when the syslog-relay docker container is killed unexpectedly?

Thanks in advance.


--

| Jose Angel Santiago


Vía de las dos Castillas, 33, Ática 4, 3ª Planta

28224 Pozuelo de Alarcón, Madrid, Spain

+34 918 286 473 | www.stratio.com


______________________________________________________________________________
Member info: https://lists.balabit.hu/mailman/listinfo/syslog-ng
Documentation: http://www.balabit.com/support/documentation/?product=syslog-ng
FAQ: http://www.balabit.com/wiki/syslog-ng-faq