Hi,

I think this has to do with my testing using log messages that have already been written to a file (i.e. I don't control the log server that receives the messages on the net; I only have a sample file to parse). I have been doing some testing with the various "no-parse" and "syslog-protocol" flags on the file source, as well as looking at which macros get which values on my end.

So, long term, I would really like to be able to "replay" logs stored in flat files to (re-)ingest them into Elasticsearch. That said, I think my current parsing issue may be due to the source being a file I append to with "cat >> file_source". Let me noodle on it over the weekend; there may be a simple answer here.

Thanks again!
Jim
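For what it's worth, the variants I have been toggling between look roughly like this (just a sketch; the path is made up):

  # take each line verbatim, no syslog parsing at all
  source s_file_raw {
      file("/var/log/sample.log" flags(no-parse));
  };

  # expect RFC5424 (IETF-syslog) format, which populates the .SDATA.* macros
  source s_file_5424 {
      file("/var/log/sample.log" flags(syslog-protocol));
  };

---- Fabien Wernli <wernli@in2p3.fr> wrote: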
Hi Jim,
On Thu, Sep 10, 2015 at 05:00:07PM -0400, jrhendri@roadrunner.com wrote:
> I would like to be able to have each field recognized separately so that Kibana could search for specific things like "source-address", etc. I thought these would be available under the .SDATA. set, but apparently I missed something.
> Is this possible (and I am just lacking understanding), or am I expecting too much?
It's pretty much how it should work. As you can see from the 3.7 online guide, it's the `message_template` option that controls which fields get indexed in Elasticsearch, and the one in your example looks fine.
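For instance, a destination along these lines (a minimal sketch: connection options are omitted, and the index name and scopes are only examples) turns each name-value pair, SDATA included, into its own indexed field:

  destination d_es {
      elasticsearch(
          index("syslog-${YEAR}.${MONTH}.${DAY}")
          type("test")
          message_template("$(format-json --scope rfc5424 --scope sdata)")
      );
  };

Each .SDATA.* macro the parser extracts then arrives in Elasticsearch as a separate field instead of being flattened into the message text, which is what lets Kibana search on them individually.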
> These are the last three tests I have used within the elasticsearch destination, and they all essentially result in the same thing.
* Could you show us the full configuration?
* Before looking into Kibana, you should use the Elasticsearch API to list the fields, e.g. by checking the mapping, dumping a document by id, or searching (respectively):
  curl 0:9200/<index>
  curl 0:9200/<index>/<type>/<id>
  curl 0:9200/<index>/_search
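If the SDATA values made it in, they should show up as separate properties in the mapping output. Appending ?pretty to any of these URLs makes the JSON easier to read, e.g. curl '0:9200/<index>/_search?pretty&size=1' to dump a single formatted document (pretty and size are standard Elasticsearch query-string parameters).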
You're welcome to join #syslog-ng on freenode or #balabit/syslog_ng on gitter so we can move forward more quickly.
Cheers