A customer of mine requires XML data as separate fields for further analysis. The data itself is part of a log message that is processed by Logstash. Logstash provides the powerful XML filter plugin for this kind of parsing.
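A minimal sketch of how that can look, assuming the XML payload already sits in a field named xmldata (the field names here are my assumption, not from the original setup):

filter {
  xml {
    # field that holds the raw XML string (hypothetical name)
    source => "xmldata"
    # parsed elements end up as subfields under this target
    target => "parsed"
  }
}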
By default, Docker captures the standard output (and standard error) of all your containers and writes them to files in JSON format. It is advisable to set a maximum size, otherwise you will run out of disk space. Unified logging with Elasticsearch lets you investigate all logs from a single point of view. Sending the logs from the Docker containers to Elasticsearch is quite easy. Fluentd is a data collector, which a Docker container can use via the option --log-driver=fluentd.
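A sketch of starting a container with that log driver, assuming a Fluentd instance is listening on its default port 24224 on the same host:

docker run \
  --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag="docker.{{.Name}}" \
  nginx

From there, a Fluentd configuration with the Elasticsearch output plugin forwards the log events to the cluster.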
Storing data in Elasticsearch with city names offers the capability to display the geographical distribution of the data on a map in Kibana. To use that feature, you have to declare a geo_point type in your index mapping; I named the field location. To translate the city names to their respective geo points, I use the Logstash translate filter. With a small dictionary, Logstash simply looks up the value for your input city. You could also use zip codes, but that would require a more detailed data source. For a demonstration of the translate plugin, city names are sufficient.
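A sketch of the two pieces, using a hypothetical index name. The mapping declares the location field (geo_point accepts a "lat,lon" string):

PUT citydata
{
  "mappings": {
    "doc": {
      "properties": {
        "location": { "type": "geo_point" }
      }
    }
  }
}

And the translate filter with a tiny example dictionary (the entries are the coordinates of Munich and Berlin; the city field name is my own choice):

filter {
  translate {
    field => "city"
    destination => "location"
    dictionary => {
      "Munich" => "48.137,11.575"
      "Berlin" => "52.520,13.405"
    }
  }
}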
At work I still run the Elasticsearch cluster in version 5.6.4. While I'm eager to upgrade and keep up the pace, I don't always have the chance to do so immediately. A customer of mine needed a small set of data in Excel. Elasticsearch 6, or more precisely Kibana 6, offers a CSV export in the X-Pack extensions. To use that functionality, I needed to export a fragment of the desired data from my production cluster. Since the Reindex API allows us to read data from a remote cluster and write it locally, I simply spun up my private cluster in v6.1.1 with Docker and started the reindexing.
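A sketch of the remote reindex call, executed against the new v6.1.1 cluster (host and index names are placeholders; the old cluster must also be listed in reindex.remote.whitelist in elasticsearch.yml):

POST _reindex
{
  "source": {
    "remote": {
      "host": "http://old-cluster:9200"
    },
    "index": "customer-data"
  },
  "dest": {
    "index": "customer-data"
  }
}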
Providing an HTTP health check service with Nginx is straightforward. If you do, ensure that Nginx closes the HTTP connection instead of keeping it alive. The basic option for that is:
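keepalive_timeout 0;

Embedded in a minimal endpoint it could look like this (the /health path and port are my own choices for illustration):

server {
    listen 8080;

    location /health {
        # close the connection after each response
        keepalive_timeout 0;
        return 200 "OK\n";
    }
}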
Performing a reindex job in Elasticsearch gives you the time the job took.
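For illustration, a synchronous _reindex response reports the duration in milliseconds in the took field (the numbers below are made up):

{
  "took" : 8452,
  "timed_out" : false,
  "total" : 120000,
  "created" : 120000,
  "failures" : [ ]
}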
Find the ObjectId of the chat room:
> db.rooms.find({slug:"elk"}).pretty()
{
"_id" : ObjectId("59a666cfa9886c002c30b404"),
"owner" : ObjectId("59a547c2aed276003facf84f"),
"name" : "Elasticschrott",
"slug" : "elk",
"description" : "Everything about the Elasticsearch Universe, including Logstash, Beats",
"private" : false,
"lastActive" : ISODate("2017-08-31T08:46:33.690Z"),
"created" : ISODate("2017-08-30T07:18:39.885Z"),
"messages" : [ ],
"participants" : [ ],
"archived" : true,
"__v" : 0
}
A chat room in letschat was archived. To revive it, we can alter the document in the MongoDB instance.
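A sketch of that change, flipping the archived flag on the room found above:

> db.rooms.update({slug: "elk"}, {$set: {archived: false}})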
I ran into a situation where, in a log file, the JSON information comes after the file name that grep prepends to each matching line.
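Such a line might look like this (path and payload invented for illustration):

/var/log/app.log:{"user":"alice","action":"login"}

A Logstash dissect filter can split the file name prefix from the JSON payload before the json filter parses it; a sketch with field names of my own choosing:

filter {
  dissect {
    # everything up to the first colon is the file name,
    # the remainder of the line is the JSON payload
    mapping => { "message" => "%{source_file}:%{json_payload}" }
  }
  json {
    source => "json_payload"
  }
}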