Fix the '_jsonparsefailure' in Logstash

  1. When I searched data in my Elasticsearch cluster from Kibana, I found a _jsonparsefailure tag on every log entry.

  2. I checked the stdout and stderr of the Logstash process and found nothing.

  3. I checked the Logstash configuration; no json plugin was used anywhere.

  4. To determine whether the _jsonparsefailure tag was generated by Logstash or by Elasticsearch, I added the following code to the output section:

     stdout {
         codec => rubydebug
     }
    

    The _jsonparsefailure tag then appeared in stdout as well, so it was added by Logstash.
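
    For context, the full output section at this point looked roughly like the sketch below. The elasticsearch block and its hosts value are placeholders I am assuming here, not taken from the original config:

     output {
         elasticsearch {
             hosts => ["localhost:9200"]   # placeholder address, not the real cluster
         }
         stdout {
             codec => rubydebug            # print each event to stdout for inspection
         }
     }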

  5. I restarted the Logstash process with the --debug option and got the following log:

     ... Feb 15 03:33:22 ...
     :message=>"JSON parse failure. Falling back to plain-text", :error=>#<LogStash::Json::ParserError: Unrecognized token 'Feb': was expecting ('true', 'false' or 'null')
    
  6. I tested my match pattern against the log entry at Test grok patterns; the pattern was correct, and “Feb 15 03:33:22” is just a normal syslog timestamp.
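
     The exact pattern is not in my notes, so the filter below is a sketch: it uses the standard SYSLOGTIMESTAMP grok pattern, which does match a timestamp like “Feb 15 03:33:22”:

     filter {
         grok {
             # SYSLOGTIMESTAMP matches e.g. "Feb 15 03:33:22";
             # GREEDYDATA captures the rest of the line
             match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{GREEDYDATA:msg}" }
         }
     }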

  7. So I guessed the problem was raised by the input:

     input {    
         kafka {
             topic_id => ""
             zk_connect => ""
             group_id => ""        
             consumer_threads => 20
         }
     }
    

    I searched for the logstash kafka input plugin and found the default configuration of the kafka input:

    | Logstash version | Setting | Input type | Required | Default value |
    | ---------------- | ------- | ---------- | -------- | ------------- |
    | 5.2              | codec   | codec      | No       | “plain”       |
    | 2.3              | codec   | codec      | No       | “json”        |
  8. My Logstash version is 2.3, so the kafka input defaulted to the json codec and tried to parse each plain-text syslog line as JSON, failing at the first token “Feb”. I added codec => "plain" to the kafka input, and the _jsonparsefailure finally disappeared.
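
     With the fix applied, the kafka input looks like this (the empty topic/zookeeper/group values are elided here just as in the original config):

     input {
         kafka {
             topic_id => ""
             zk_connect => ""
             group_id => ""
             consumer_threads => 20
             codec => "plain"       # override the Logstash 2.3 default of "json"
         }
     }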