Using patterndb to associate metadata with each
syslog message works well, but you have to ensure
that those additional fields are included in the
JSON object you send to Elasticsearch.
We do exactly what you are trying to do, so that
our Elasticsearch document contains all of the
fields parsed by our patterndb.
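The key piece is the template on the Elasticsearch destination. A minimal sketch (the destination name, index pattern, and cluster name here are placeholders, not our exact config) uses $(format-json) with --scope nv-pairs so the patterndb-parsed name-value pairs ride along with the standard syslog fields:

```
# Sketch only: names and values below are placeholders.
destination d_elastic {
  elasticsearch2(
    index("flare-${YEAR}.${MONTH}.${DAY}.${HOUR}")
    type("flare")
    cluster("es-cluster")
    # --scope nv-pairs pulls in every name-value pair set by
    # parsers (including patterndb), not just the rfc5424 fields
    template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
  );
};
```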
As an example, we have a log line of:

September 12th 2016, 11:17:25.836 chiru.comp.uvic.ca mail.info in.imapproxyd: LOGOUT: '"vgmodi"' from server sd [200]

and the Elasticsearch document shown below. The
cfgmgr* fields come from our asset management
system, the PATTERNID comes from our pattern
database entry, and the user and sd fields come
from the patterndb data parsers.
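For illustration, a rule along these lines is the kind of thing that extracts user and sd (the rule id matches the PATTERNID above, but the ruleset name, id, and provider here are invented; this is a sketch, not our actual rule 864):

```xml
<!-- Illustrative only: ruleset name/id and provider are invented. -->
<patterndb version="4" pub_date="2016-09-12">
  <ruleset name="imapproxy" id="imapproxy-ruleset">
    <patterns>
      <!-- matched against the PROGRAM field -->
      <pattern>in.imapproxyd</pattern>
    </patterns>
    <rules>
      <rule provider="example" id="864" class="system">
        <patterns>
          <!-- @QSTRING@ stores the value without the quote characters;
               @NUMBER@ captures the digits -->
          <pattern>LOGOUT: '@QSTRING:user:"@' from server sd [@NUMBER:sd@]</pattern>
        </patterns>
      </rule>
    </rules>
  </ruleset>
</patterndb>
```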
You don't need to have a pattern for every log
line from the start. The second example below is a
syslog line that does NOT match any pattern in our
database.
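Unmatched messages simply pass through db-parser() with no extra name-value pairs (apart from .classifier.class being set to "unknown"), so one log path covers both cases. Roughly (the source/destination names and patterndb file path are placeholders):

```
# Sketch: source/destination names and the patterndb path are placeholders.
parser p_patterndb {
  db-parser(file("/etc/syslog-ng/patterndb.xml"));
};

log {
  source(s_network);
  parser(p_patterndb);   # adds user, sd, .classifier.* when a rule matches
  destination(d_elastic);
};
```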
{
"_index": "flare-2016.09.12.11",
"_type": "flare",
"_id": "AVcfnhBMpdjtwzWgS7rU",
"_score": 1,
"_source": {
"user": "vgmodi",
"sd": "200",
"flare": {
"profile": "DCS"
},
"cfgmgrrole": "ADMIN",
"cfgmgrosFull": "Redhat 5_64",
"cfgmgros": "unix",
"cfgmgrmodel": "ESX 5",
"cfgmgrlocation": "ESX-BCP",
"cfgmgrenvironment": "BCP PROD",
"cfgmgrassetType": "Virtual Server",
"SOURCEHOST": "chiru.comp.uvic.ca",
"SHORTHOST": "chiru",
"PROGRAM": "in.imapproxyd",
"PRIORITY": "info",
"PID": "5129",
"PATTERNID": "864",
"MESSAGE": "LOGOUT: '\"vgmodi\"' from server sd [200]",
"ISODATE": "2016-09-12T11:17:25.836-07:00",
"HOST": "chiru.comp.uvic.ca",
"FACILITY": "mail"
},
"fields": {
"ISODATE": [
1473704234822
]
}
}
Non-matching log line:
{
"_index": "flare-2016.09.12.11",
"_type": "flare",
"_id": "AVcfpLpIpdjtwzWgXfVD",
"_score": 1,
"_source": {
"flare": {
"profile": "DCS"
},
"cfgmgrrole": "ADMIN",
"cfgmgrosFull": "Redhat 6_64",
"cfgmgros": "unix",
"cfgmgrmodel": "ESX 5",
"cfgmgrlocation": "ESX-PROD",
"cfgmgrenvironment": "Prod",
"cfgmgrassetType": "Virtual Server",
"SOURCEHOST": "tyrant.comp.uvic.ca",
"SHORTHOST": "tyrant",
"PROGRAM": "cas",
"PRIORITY": "info",
"MESSAGE": "prod: [ajp-apr-8009-exec-37]: Mon Sep 12 11:24:31 PDT 2016,CAS,SERVICE_TICKET_NOT_CREATED,https://www.uvic.ca/netlink/j_spring_cas_security_check,audit:unknown,206.87.181.44,www.uvic.ca",
"ISODATE": "2016-09-12T11:24:31.000-07:00",
"HOST": "tyrant.comp.uvic.ca",
"FACILITY": "daemon"
},
"fields": {
"ISODATE": [
1473704671000
]
}
}
On 09/12/2016 11:08 AM, Scot Needy wrote:
Hello List,
I’m trying to understand the use case of pattern_db when the destination will be ES. My initial understanding was that I could use patterndb as an engine to tag my log message data with attributes, but it doesn’t seem to work that way. I have a json output like this in Kibana.
In a loghost deployment, it looks like I would need to manually align a patterndb filter with each host_message type even before patterndb comes into play.
Q) What is the right solution for enriching message data into ES ?
Example JSON from Kibana MESSAGE is not parsed.
=======================
{
"_index": "syslog-ng_2016.09.12",
"_type": "syslog-ng",
"_id": "AVcdnzJla9VjMdxDYo8Z",
"_score": null,
"_source": {
"PROGRAM": "###-asa11",
"PRIORITY": "warning",
"MESSAGE": "%ASA-4-106023: Deny tcp src outside:###.###.31.2/33553 dst public:###.###.7.191/443 by access-group \"outside_access_in\" [0x2c1c6a65, 0x0]",
"ISODATE": "2016-09-12T13:57:03-04:00",
"HOST": "###.###.###.###",
"FACILITY": "local5",
"@timestamp": "2016-09-12T13:57:03-04:00"
},
"fields": {
"ISODATE": [
1473703023000
],
"@timestamp": [
1473703023000
]
},
"sort": [
1473703023000
]
}