- When I searched data in my ElasticSearch cluster from Kibana, I found that there was a `_jsonparsefailure` tag in each log entry.
- I checked the stdout and stderr of the Logstash process and found nothing.
- I checked the configuration of Logstash; no json plugin was used.
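  For context, the json filter is one of the plugins that tags events with `_jsonparsefailure` when parsing fails; a minimal sketch of such a filter (which was not present in my configuration) would look like this:

  ```
  filter {
    json {
      # Parse the "message" field as JSON; if parsing fails,
      # Logstash tags the event with _jsonparsefailure.
      source => "message"
    }
  }
  ```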
- To determine whether the `_jsonparsefailure` tag was generated by Logstash or by ElasticSearch, I added the following code to the output section:

  ```
  stdout {
    codec => rubydebug
  }
  ```

  The `_jsonparsefailure` tag then showed up in stdout, so it is added by Logstash.
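  A minimal sketch of what the whole output section looked like at this point; the elasticsearch output and its hosts value are assumptions, since my real output settings are not shown here:

  ```
  output {
    # Assumed destination; the real elasticsearch settings are omitted.
    elasticsearch {
      hosts => ["localhost:9200"]
    }
    # Print every event to stdout so its tags can be inspected directly.
    stdout {
      codec => rubydebug
    }
  }
  ```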
- I added the `--debug` option, restarted the Logstash process, and got the following log:

  ```
  Feb 15 03:33:22 ... :message=>"JSON parse failure. Falling back to plain-text", :error=>#<LogStash::Json::ParserError: Unrecognized token 'Feb': was expecting ('true', 'false' or 'null')
  ```
- I tested my match pattern against the log entry at Test grok patterns, and it matched correctly. And “Feb 15 03:33:22” is just a normal timestamp.
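  My actual grok pattern is not reproduced in this post, but a hypothetical filter for lines that start with such a timestamp could look like this, where `SYSLOGTIMESTAMP` matches strings like “Feb 15 03:33:22”:

  ```
  filter {
    grok {
      # Hypothetical pattern, used here only for illustration.
      match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{GREEDYDATA:rest}" }
    }
  }
  ```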
- So I guessed the problem was raised by the input:

  ```
  input {
    kafka {
      topic_id => ""
      zk_connect => ""
      group_id => ""
      consumer_threads => 20
    }
  }
  ```
- I searched for `logstash kafka` and found the default configuration of the kafka input:

  | Logstash version | Setting | Input type | Required | Default value |
  |------------------|---------|------------|----------|---------------|
  | 5.2              | codec   | codec      | No       | "plain"       |
  | 2.3              | codec   | codec      | No       | "json"        |
- My Logstash version is 2.3, so I added `codec => "plain"` to the kafka input, and finally the `_jsonparsefailure` tag disappeared.
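  For reference, a sketch of the corrected input, with the connection settings still redacted as in the original:

  ```
  input {
    kafka {
      topic_id => ""
      zk_connect => ""
      group_id => ""
      consumer_threads => 20
      # Override the Logstash 2.3 default codec ("json") so plain-text
      # log lines are no longer run through the json parser.
      codec => "plain"
    }
  }
  ```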