[syslog-ng] Internal error, duplicate configuration elements...
David Hauck
davidh at netacquire.com
Wed Jun 11 17:17:45 CEST 2014
Hi Evan,
On Wednesday, June 11, 2014 8:16 AM, syslog-ng-bounces at lists.balabit.hu wrote:
> Reorganizing the log configuration with junctions helps in some
> circumstances. In my case I need to use a different template depending
> on the source of the message (a specific file source), so I need to
> send to the same destination IP/port but define it in a different
> syslog-ng "destination" specification so that I can use a different template.
>
> This results in the same problem, which I can't work around by using a
> different junction/log structure.
Yes, interesting. This restriction is turning out to be somewhat unfortunate. I'm still trying to figure out how to wedge this constraint into my configuration...
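Just to make the shape of the conflict concrete (the names, address and
templates below are illustrative, not my real config), it's roughly this:

    template t_app  { template("${ISODATE} ${HOST} ${MSG}\n"); };
    template t_file { template("${MSG}\n"); };
    destination d_app  { udp("192.168.1.10" port(514) template(t_app)); };
    destination d_file { udp("192.168.1.10" port(514) template(t_file)); };
    log { source(s_app);  destination(d_app); };
    log { source(s_file); destination(d_file); };

The two udp() drivers use the same IP/port and differ only in template(),
which is what triggers the duplicate-element complaint.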
Thanks,
-David
> Evan.
>
> On 06/10/2014 11:51 PM, Balazs Scheidler wrote:
>> Hi,
>>
>> The issue is that syslog-ng uses the destination IP address/port
>> combination as an identifier to recover the queue when reloading
>> syslog-ng, so duplicates do cause some issues (e.g. one destination
>> borrowing the queue of the other, or both using the same queue after
>> reload). This is not good.
>>
>> This could be solved with a long-discussed, explicit id() parameter
>> that would let the administrator assign a custom ID, which can be made
>> different for the conflicting drivers, but that hasn't happened yet.
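>> (Purely to sketch the idea, with hypothetical syntax since the option
>> does not exist yet, it could look something like this:
>>
>>     # id() here is hypothetical; no such option is implemented yet
>>     destination d_net_a { udp("192.168.1.10" port(514) id("net-a")); };
>>     destination d_net_b { udp("192.168.1.10" port(514) id("net-b")); };
>>
>> so the two drivers would persist their queues under different names even
>> though the IP/port pair is identical.)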
>>
>> Using junctions and log expressions could help you reorganize your
>> configuration in a way that prevents this kind of conflict:
>>
>> https://bazsi.blogs.balabit.com/2012/01/syslog-ng-flexibility-improvements/
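>>
>> As a rough sketch of the kind of reorganization meant there (the source,
>> filter and rewrite names below are made up), a single shared destination
>> can be fed through channels inside one log statement:
>>
>>     destination d_net { udp("192.168.1.10" port(514)); };
>>     log {
>>         source(s_app); source(s_file);
>>         junction {
>>             # f_app and f_file are assumed to be mutually exclusive filters
>>             channel { filter(f_app);  rewrite(r_app_tag); };
>>             channel { filter(f_file); };
>>         };
>>         destination(d_net);
>>     };
>>
>> That way only one udp() driver instance (and one queue) exists, while the
>> per-stream processing can still differ.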
>>
>> Hope this helps.
>>
>>
>> On Fri, Jun 6, 2014 at 4:15 PM, David Hauck <davidh at netacquire.com> wrote:
>>
>> This morning Gergely Nagy wrote:
>>> David Hauck <davidh at netacquire.com> writes:
>>>
>>>> Couple things here:
>>>> 1. If this is an error, why doesn't 'syslog-ng -s' indicate it as such?
>>>
>>> Because -s checks for syntax only, and this is not a syntax error;
>>> it's one level higher.
>>
>> OK.
>>>> 2. Other than the error, things appear to function correctly. Why is this?
>>>
>>> That'd be a bit hard to explain without going into low-level details,
>>> but I'll try: it works because of luck, mostly. UDP destinations will
>>> not trip over each other when used like this. If you'd try that with
>>> files, all hell would break loose.
>>
>> OK, thanks. Having some understanding of how the low-level
>> implementation works does help, since I don't remember this constraint
>> being discussed in the manual/docs (though I may have missed something
>> there?). Initially it also wasn't clear that these destinations weren't
>> combined at a higher level in the implementation so that they, for
>> example, operated within a "pipeline" and thereby enforced a
>> "single-writer" paradigm. Either way, and as you say, this is required
>> for file destinations, but certainly not for network destinations (even
>> in the disconnect case).
>>
>> In fact, my log() statements with the referenced destinations are
>> complex enough that I may need to rely on the ability to specify
>> multiple network destinations anyway, since combining them as you
>> suggest would make maintenance of the configuration even more difficult
>> (there's a tangle of rules/conditions associated with one of the
>> destination log() statements).
>>
>>>> 3. Other blocks do not require that their contents contain unique
>>>> statements. For example, I can create filter blocks that have
>>>> statements that intersect. Why not destination blocks?
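>>>>
>>>> For example, both of these are accepted even though their conditions
>>>> overlap (made-up names):
>>>>
>>>>     filter f_errors { level(err..emerg); };
>>>>     filter f_daemon { facility(daemon) or level(err..emerg); };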
>>>
>>> Because filters generally don't have state, so they can't conflict
>>> with each other; they can work independently. For destinations, that's
>>> not the case. For example, if you had something like this:
>>>
>>> destination d_1 { file("/var/log/some.log"); udp(...); };
>>> destination d_2 { file("/var/log/some.log"); tcp(...); };
>>>
>>> That would break horribly, as two threads would try to write to the
>>> same file, and would write over each other, resulting in garbage. That
>>> doesn't happen with network destinations, but those have different
>>> issues. For example, if a network target goes down, and you had
>>> duplicates in your config, then they'd notice the target being down
>>> separately, increasing the chance you'll lose logs. If you had one
>>> target, and bound it to various sources in a log{} statement instead,
>>> then you'd only have one instance of the driver.
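>>> (Roughly, with made-up names:
>>>
>>>     destination d_net { udp("192.168.1.10" port(514)); };
>>>     log { source(s_one); source(s_two); destination(d_net); };
>>>
>>> a single udp() driver and a single queue, fed from both sources.)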
>>
>> I'm wondering if there might be other issues associated with multiple
>> (common) network destinations (you only mention one as an example)?
>> The example you mention isn't relevant in my case. Whether the network
>> destination is a combined statement/driver or not, the payload consists
>> of single packets, which are written separately, no? If that's the
>> case, it wouldn't matter which packet send detected the network error;
>> one or the other would get lost in both cases.
>>
>> Thanks,
>> -David
>>
>> --
>> Bazsi