1. 2018-05-04 - Monitor Kibana queries with Packetbeat; Tags: Monitor Kibana queries with Packetbeat

    Monitor Kibana queries with Packetbeat

    If you are using X-Pack Monitoring, you have a good overview of your Kibana performance. Sometimes it is necessary to know more. Packetbeat can monitor the HTTP traffic between Kibana and the Elasticsearch node.

    Customize Docker image

    I run Packetbeat in Docker. We can extend the official image and add our customizations to the Docker image.

    FROM docker.elastic.co/beats/packetbeat:6.2.4
    COPY packetbeat.yml /usr/share/packetbeat/packetbeat.yml
    COPY packetbeat.keystore /usr/share/packetbeat/packetbeat.keystore
    COPY ca.crt /usr/share/packetbeat/ca.crt
    USER root
    RUN chown packetbeat /usr/share/packetbeat/packetbeat.yml
    RUN chown packetbeat /usr/share/packetbeat/packetbeat.keystore
    RUN chown packetbeat /usr/share/packetbeat/ca.crt
    RUN chmod go-wrx /usr/share/packetbeat/packetbeat.keystore
    USER packetbeat
    
    • packetbeat.yml contains the packetbeat configuration
    • packetbeat.keystore contains the password used in above configuration
    • ca.crt contains the root certificate authority of the secured Elasticsearch cluster
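
    Before wiring it into the deployment below, the image can be built and run by hand. A minimal sketch; the tag matches the Ansible playbook further down, and host networking plus NET_ADMIN are needed so Packetbeat can sniff the traffic:

    docker build -t cinhtau/packetbeat:6.2.4 .
    docker run -d --name packetbeat --net=host --cap-add=NET_ADMIN cinhtau/packetbeat:6.2.4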

    Packetbeat Configuration

    Packetbeat will capture far more than you might be interested in, so make sure you filter for the Kibana searches only. Kibana uses the Multi Search API (path = /_msearch) as a POST request to the Elasticsearch node. Filtering only for GET requests will get you nowhere.

    The http.request.body contains the interesting part (the query), whereas the http.response.body gives you the answer from Elasticsearch. Since the response can contain a lot of text, I remove the http.response.body before ingestion.

    processors:    
      - drop_event.when.not.contains:
          query: "search"
      - drop_event.when.equals:
          query: "POST /.kibana/_search"
      - drop_event.when.equals:
          query: "POST /.reporting-*/_search"
      - drop_event.when.equals:
          query: "POST /.reporting-*/esqueue/_search"
      # not interested in the response (or how long it took), duration is given by packetbeat
      - drop_fields:
          when.equals:
            query: "POST /_msearch"
          fields: ["http.response.body"]
      # filter out load balancer
      - drop_event.when:
          and:
          - contains:
              http.response.headers.server: "nginx"
          - equals:
              ip: "10.22.22.221"
    

    The snippet above contains only the filtering. The whole Packetbeat configuration can be found in this gist.

    The overall configuration contains an output to Apache Kafka, from which Logstash reads the data and ships it to my monitoring cluster. The important part is: ingest the Packetbeat capture into another cluster than the monitored one! Otherwise you will create an infinite loop as soon as you review the results in a Kibana instance. A dedicated monitoring cluster is recommended.
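
    For reference, the output section could look roughly like this. A minimal sketch, assuming a Kafka broker reachable at kafka:9092 and a topic named packetbeat (both values are illustrative, not from the gist):

    output.kafka:
      hosts: ["kafka:9092"]
      topic: "packetbeat"
      compression: gzip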

    Ansible Deployment

    In the Ansible playbook.yml the docker module contains all the run arguments.

    ---
    
    - hosts: kibana-host
      any_errors_fatal: True
      serial: 1
      vars:
        container_name: packetbeat
        image_name: cinhtau/packetbeat:6.2.4
    
      tasks:
        - assert:
            that:
              - image_name is not none
    
        - name: remove old container
          command: docker rm -f "{{ container_name }}" warn=no
          ignore_errors: true
    
        - name: install new container
          docker:
            name: "{{ container_name }}"
            image: "{{ image_name }}"
            state: started
            pull: always
            log_driver: json-file
            log_opt:
              max-size: 10m
            env:
              TZ: Europe/Zurich
            net: host
            cap_add: NET_ADMIN
            restart_policy: on-failure
            restart_policy_retry: 10
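
    With the image pushed to a registry, the deployment is a standard playbook run (the inventory name is an assumption):

    ansible-playbook -i production playbook.yml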
    

    Examine Results

    In the Kibana instance of the monitoring cluster you can, for instance, examine all requests that took longer than 2 seconds.
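
    A minimal sketch of such a check in the Dev Tools console, assuming the default packetbeat-* index pattern, the query field as used in the processors above, and the responsetime field in milliseconds:

    GET packetbeat-*/_search
    {
      "query": {
        "bool": {
          "filter": [
            { "term": { "query": "POST /_msearch" } },
            { "range": { "responsetime": { "gt": 2000 } } }
          ]
        }
      }
    }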


  2. 2018-04-17 - Dashboard with id x not found; Tags: Dashboard with id x not found

    Dashboard with id x not found

    X-Pack Reporting allows you to automate and generate daily reports on pre-existing dashboards or visualizations in Kibana. To keep security tight I have created a dedicated reporting user. The first run with the reporting user gave me a mystery: Reporting complained Dashboard with id 'AWLOnWVZLaWygeBEGxLJ' not found. I did some digging and found the reason, which I am going to elaborate on in this post.

    Use-Case Scenario

    My watch watches for failed watches. Watcher itself is powerful, but IMHO not very suitable for business users: you need at least average knowledge of Elasticsearch to get it right. We have a couple of watches which fail from time to time, and as Elasticsearch admin I have to keep an eye on that in my automation.

    The watch:

    GET /_xpack/watcher/watch/failed_watches
    
    {
    "watch": {
        "trigger": {
          "schedule": {
            "daily": {
              "at": [
                "01:30"
              ]
            }
          }
        },
        "input": {
          "none": {}
        },
        "condition": {
          "always": {}
        },
        "actions": {
          "email_developers": {
            "email": {
              "profile": "standard",
              "attachments": {
                "count_report.pdf": {
                  "reporting": {
                    "url": "https://cinhtau.net/api/reporting/generate/dashboard/AWLOnWVZLaWygeBEGxLJ?_g=(time:(from:now-1d%2Fd,mode:quick,to:now))",
                    "auth": {
                      "basic": {
                        "username": "reporting_wotscher",
                        "password": "guess-what"
                      }
                    }
                  }
                }
              },
              "to": [
                "le-mapper@cinhtau.net"
              ],
              "subject": "Failed Watches"
            }
          }
        }
      }
    }
    

    Kibana Object

    My first thought: ok, let's check whether a dashboard with that id exists at all. Search your Kibana index; the default is .kibana.

    POST /.kibana/_search
    {
      "query": { "ids": { "values": ["AWLOnWVZLaWygeBEGxLJ" ] } }
    }
    

    If you get similar output, the dashboard is there.

    {
      "took": 2,
      "timed_out": false,
      "_shards": {
        "total": 1,
        "successful": 1,
        "skipped": 0,
        "failed": 0
      },
      "hits": {
        "total": 1,
        "max_score": 1,
        "hits": [
          {
            "_index": ".kibana",
            "_type": "dashboard",
            "_id": "AWLOnWVZLaWygeBEGxLJ",
            "_score": 1,
            "_source": {
              "title": "Report Failed Watches",
              "hits": 0,
              "description": "",
              "panelsJSON": """[{"size_x":3,"size_y":3,"panelIndex":1,"type":"visualization","id":"Watcher-Duration","col":4,"row":1},{"size_x":6,"size_y":3,"panelIndex":2,"type":"visualization","id":"fa7e4420-4080-11e7-ab57-7554a52ae433","col":7,"row":1},{"size_x":3,"size_y":3,"panelIndex":3,"type":"visualization","id":"Watches-Done","col":1,"row":1}]""",
              "optionsJSON": """{"darkTheme":false}""",
              "uiStateJSON": """{"P-1":{"vis":{"defaultColors":{"0 - 100":"rgb(0,104,55)"}}},"P-2":{"vis":{"params":{"sort":{"columnIndex":null,"direction":null}}}},"P-3":{"vis":{"defaultColors":{"0 - 100":"rgb(0,104,55)"}}}}""",
              "version": 1,
              "timeRestore": false,
              "kibanaSavedObjectMeta": {
                "searchSourceJSON": """{"id":"AWLOeppmLaWygeBEGxLI","filter":[{"meta":{"index":".watcher-*","type":"phrase","key":"state","value":"failed","disabled":false,"negate":false,"alias":null},"query":{"match":{"state":{"query":"failed","type":"phrase"}}},"$state":{"store":"appState"}},{"query":{"match_all":{}}}],"highlightAll":true,"version":true}"""
              }
            }
          }
        ]
      }
    }
    

    X-Pack Security

    The next checkpoint is to look at security, i.e. the granted permissions. The official docs use the superuser elastic, which is not recommended.

    The reporting user must have the following roles:

    • reporting_user in order to execute report generation
    • watcher_user in order to read the watch data
    • kibana_user in order to access the Kibana objects

    A quick check with Kibana Console:

    GET /_xpack/security/user/reporting_wotscher
    
    {
      "reporting_wotscher": {
        "username": "reporting_wotscher",
        "roles": [
          "monitoring_user",
          "reporting_user",
          "watcher_user"
        ],
        "full_name": "Elastic Wotscher",
        "email": "le-mapper@cinhtau.net",
        "metadata": {},
        "enabled": true
      }
    }
    

    kibana_user is missing. Without that role the user cannot read the .kibana index and thus cannot find the dashboard object with the given id.

    Add the missing role with the Kibana Console:

    PUT /_xpack/security/user/reporting_wotscher
    {
        "username": "reporting_wotscher",
        "password: "guess-it",
        "roles": [
          "monitoring_user",
          "reporting_user",
          "watcher_user",
          "kibana_user"
        ],
        "full_name": "Elastic Wotscher",
        "email": "le-mapper@cinhtau.net",
        "metadata": {},
        "enabled": true  
    }
    



  3. 2017-05-10 - Customize Kibana Look and Feel for different environments; Tags: Customize Kibana Look and Feel for different environments

    Customize Kibana Look and Feel for different environments

    My company uses Elasticsearch and Kibana for various reasons. One of my responsibilities is to ensure the stability of our Elasticsearch test and production clusters. My users and I have trouble distinguishing the various environments (test and production). It happened more than once that I executed an action with Sense/Console in the wrong environment :anguished:. To create a more distinctive appearance, I dug around a little in the Kibana source and found it not so hard to alter the appearance.

    For fun I chose to create an Iron Man theme, since one of my co-workers is an avid fan of the Marvel comic superhero. You may keep the original look and feel for production or test; it depends on your taste or company compliance. I use Kibana 5.4.0 and Linux for the modifications. Iron Man is a trademark of Marvel Comics and I do not intend to violate or distribute any of their materials to the public. This Kibana theme is solely for demonstration purposes.

    Kibana Iron Man Theme

    Logo and Navigation

    You may simply change the generated CSS, but I prefer to alter the Less stylesheets. All references are relative to $KIBANA_HOME, that is the path where you have extracted Kibana.

    For above theme I adjusted these files:

    • $KIBANA_HOME/src/ui/public/chrome/directives/global_nav/global_nav.less
    • $KIBANA_HOME/src/ui/public/chrome/directives/global_nav/global_nav_link/global_nav_link.less

    The global_nav.less file contains the background image and the background color. Copy your logo into the folder $KIBANA_HOME/src/ui/images and replace the kibana.svg file with it. To match the logo I added the background color property.

    To change the colors of the navigation panel, go to the global-nav section. Just define or replace the @ironman variables.

    To fit the background color, I chose to change the active color in global_nav_link.less.

    Adjust Login Window

    Since I have already changed the navigation and logo, for fun I also want to change the login window. The login window is provided by the x-pack plugin for commercial license holders, so you can change it, but you won't see it unless you have a license for it. The default installation of x-pack grants you a 30-day trial period. The following files are altered in the process:

    • $KIBANA_HOME/src/ui/views/chrome.jade
    • $KIBANA_HOME/plugins/x-pack/plugins/security/public/views/login/login.less

    I have drafted two different login windows, depending on which one fits better with the Iron Man icons of Everaldo Coelho (Yellow Icons). At the time of writing (2017-05-22), I can't reach the website of Everaldo or Yellow Icons to provide the original license information. On Iconfinder it is stated as personal use only.

    Ironman Login One

    Ironman Login Two

    To alter the background image you have to touch the Jade file (meanwhile renamed to Pug).

    Pug is a high performance template engine heavily influenced by Haml and implemented with JavaScript for Node.js and browsers. Pug

    Changes for the background images are done in chrome.jade. I copied the Iron Man logo into the generated optimize/bundles folder.

    The login.less of the x-pack security plugin contains the background colors, which you may alter to your needs.

    Summary

    Altering the look and feel is not so complicated. I hope I could lay out the process and modifications for your own branding. In the end, if you have a corporate style guide, you only need to adapt to it. The following picture is an example of what you can accomplish.

    Branded Kibana

  4. 2017-04-28 - Shard Allocation Filtering; Tags: Shard Allocation Filtering

    Shard Allocation Filtering

    If you run Elasticsearch and use Kibana for various reasons, you had better ensure that automatic backups are performed. The time spent on searches, visualizations and dashboards should be worth it, and if an Elasticsearch upgrade goes south, you are happy to have a backup. One of the main advantages of an Elasticsearch cluster is that you can add and remove nodes, which may differ in their resources and capacity. That's the situation I constantly deal with at work. Shard allocation filtering helps to set up smart rules, for example for a hot-warm architecture.

    Elasticsearch can back up the Kibana index to various repository types. The only possibility for me at work is to store the backup on a shared NAS, i.e. the Elasticsearch shared file system repository. My clusters consist of multiple nodes, and some nodes are not attached to the NAS. If the Kibana index has shards on nodes without the NAS mountpoint, the backup fails. Shard allocation filtering enables me to mark indices to reside only on nodes with specific attributes.
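
    Registering such a shared file system repository is a one-liner in the Kibana Console. A sketch, assuming the NAS is mounted under /mnt/nas/es-backup (the path is illustrative) and listed in path.repo; the repository name kibana matches the backup scripts further down:

    PUT /_snapshot/kibana
    {
      "type": "fs",
      "settings": {
        "location": "/mnt/nas/es-backup"
      }
    }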

    In the Elasticsearch configuration - elasticsearch.yml - I set the node attribute named rack to the value with-nas.

    cluster.name: test
    node.name: omega
    node.master: true
    node.data: true
    node.ingest: true
    node.attr.rack: "with-nas"
    

    In the Kibana Console (Sense) I can change the settings of the Kibana index.

    PUT .kibana/_settings
    {
      "index.routing.allocation.include.rack": "with-nas"
    }
    

    After that the master node will ensure that the .kibana index is allocated only to nodes with the respective attribute value.
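
    To verify the allocation, the cat shards API shows on which nodes the index now resides:

    GET _cat/shards/.kibana?v&h=index,shard,prirep,state,node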

  5. 2017-01-16 - Provide health check port for Kibana; Tags: Provide health check port for Kibana

    Provide health check port for Kibana

    The new Kibana 5.1.2 x-pack monitoring plugin is a job well done! With the new major version you can also monitor your Kibana instances - yes, Kibana can be run clustered :sunglasses:. Running Kibana in Docker allows you to deploy it even faster. Back to the monitoring part: it was a real lifesaver and gave me some insights into the Kibana application life-cycle.

    I noticed that the amount of HTTP connections is astonishing. :dizzy_face:

    Connection Problems

    Even though I know that our Kibana is popular, it is very unlikely that this many users and sessions are alive. I made a control check by restarting one instance, and my suspicion was right: after the Docker restart the connection count started at zero.

    Connections after restart

    If I compare the instances, I can easily figure out which instance was rebooted and which one has been running for days. :wink:

    Memory Consumption

    My personal suspicion: the DNS network load-balancer is probing the application port 5601 for the health check. A probe every 30 seconds to 2 minutes would explain the numbers. If the load-balancer is responsible for that, I should give it another port for the health-check probe. For this task I chose netcat.

    Netcat is a computer networking utility for reading from and writing to network connections using TCP or UDP.

    Netcat Homepage

    In the Dockerfile we install netcat and expose port 5602 for the health check.

    FROM kibana:5.1.2
    # .. do some setup
    COPY docker-entrypoint.sh /
    RUN chmod +x /docker-entrypoint.sh
    # install netcat
    RUN apt-get update -y && apt-get install netcat -y
    EXPOSE 5601 5602
    ENTRYPOINT ["/docker-entrypoint.sh"]
    CMD ["kibana"]

    In docker-entrypoint.sh just start netcat to listen on port 5602 and reply with hello. The rest of the line stops the loop if CTRL + C is pressed, which in a Docker terminal is not the usual case. netcat will terminate the connection for sure after echoing hello! (Adele greets ya :wink:)

    while [ 1 ]; do nc -l -p 5602 -c "echo hello"; test $? -gt 128 && break; done &
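
    For completeness, a minimal sketch of how such a docker-entrypoint.sh could look; the original script is not shown in full, so everything around the netcat loop is an assumption:

    #!/bin/bash
    # health-check responder on port 5602, running in the background
    while [ 1 ]; do nc -l -p 5602 -c "echo hello"; test $? -gt 128 && break; done &
    # hand over to the image command (CMD ["kibana"])
    exec "$@"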
    

    I built and deployed the new Docker container for Kibana with the integrated health-check port. I talked to the network guys and they changed the health-check probe to 5602. Of course it took some time for them to switch it. As you can see, the HTTP connections stopped increasing exponentially from the point they performed the switch.

    Increase Stop

    After a rolling restart I have the regular numbers what I would have expected from the beginning.

    Normal Connections

    From my personal view, I was afraid that Kibana was only a UI improvement, but looking at the commercial plug-ins, it has improved a lot. Sadly they don't come for free, but that doesn't mean they aren't worth their license.

    This is only a temporary solution. Use Nginx, for instance, in front of Kibana to provide a health check, but ensure the connection opened by the load-balancer probe gets closed.

  6. 2016-09-28 - Monitor process and used ports of Kibana; Tags: Monitor process and used ports of Kibana

    Monitor process and used ports of Kibana

    Monit has the capability to check for a process name. The process itself may also provide a service on a dedicated port, in this case Kibana in production, which uses SSL and exposes its service on port 5601.

    CHECK PROCESS kibana WITH MATCHING "/opt/kibana/bin/../node/bin/node\s"
      group elkstack
      if failed
          host monitoring
          port 5601
          type tcp
          protocol https
      then alert
        alert admin@cinhtau.net
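
    After placing the check in the Monit configuration (e.g. a file under /etc/monit.d/, the path is an assumption), a syntax check and reload activate it:

    monit -t        # check the control file syntax
    monit reload    # pick up the new check
    monit summary   # verify the kibana service shows up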
    
  7. 2016-09-06 - Visualize Elasticsearch Watcher Statistics with Kibana; Tags: Visualize Elasticsearch Watcher Statistics with Kibana

    Visualize Elasticsearch Watcher Statistics with Kibana

    My previous post demonstrated how to use Elasticsearch Watcher for log file alerting. Elasticsearch Watcher itself keeps data about its watches and actions.

    To get an overview, you can create powerful dashboards in Kibana for Watcher. Below, for comparison, the watch statistics for 1 hour and 24 hours: watcher-dashboard-24h
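
    The underlying data lives in the .watcher-history-* indices. A minimal aggregation sketch for counting failed executions per watch, assuming the state and watch_id fields of the watch history:

    GET /.watcher-history-*/_search
    {
      "size": 0,
      "query": { "term": { "state": "failed" } },
      "aggs": {
        "per_watch": { "terms": { "field": "watch_id" } }
      }
    }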

  8. 2016-08-10 - Remove password from private ssl key; Tags: Remove password from private ssl key

    Remove password from private ssl key

    In the kibana.yml configuration, I set up the mandatory configuration for SSL.

    server.ssl.key: "/opt/kibana/latest/ssl/key.pem"
    server.ssl.cert: "/opt/kibana/latest/ssl/cert.pem"
    

    Kibana can't handle private SSL keys protected with a passphrase (key.pem).

    tail -f /var/log/kibana/error.log
    FATAL [Error: error:0907B068:PEM routines:PEM_READ_BIO_PRIVATEKEY:bad password read]
    

    Therefore I had to remove the password in order to use the existing private key. We just export the key into a new key file.

    openssl rsa -in key.pem -out newkey.pem
    

    The new file should contain the following beginning and end:

    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
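
    Afterwards, point Kibana at the new key and keep the file permissions tight. A hedged example, reusing the paths from the configuration above:

    chmod 600 /opt/kibana/latest/ssl/newkey.pem
    # then update kibana.yml: server.ssl.key: "/opt/kibana/latest/ssl/newkey.pem"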
    
  9. 2016-05-23 - Backup Kibana; Tags: Backup Kibana

    Backup Kibana

    Kibana is the visual web interface for Elasticsearch. You can create searches, visualizations and dashboards. Sometimes you put a lot of valuable work into them, so it is essential to have some kind of backup for Kibana. The Kibana data itself is stored in Elasticsearch in the .kibana index. One way is to use the snapshot and restore capability of Elasticsearch.

    I created two scripts that run in Jenkins. The first script takes a daily snapshot and the second a monthly snapshot. From there we are able to restore a snapshot on a daily or monthly basis. These scripts only back up Kibana, which is not very large; other indices aren't in scope.

    The daily snapshot:

    SNAPSHOT=snapshot_$(date +%u)
    # Test
    echo -n "Delete previous snapshot from test-cluster"
    curl -XDELETE "http://localhost:9200/_snapshot/kibana/$SNAPSHOT" -s
    echo -n "Create new snapshot for test-cluster"
    curl -XPUT "http://localhost:9200/_snapshot/kibana/$SNAPSHOT" -s -d '{ "indices": ".kibana",  "ignore_unavailable": "true",  "include_global_state": false }'
    # Prod
    echo -n "Delete previous snapshot from prod-cluster"
    curl -XDELETE "http://elasticsearch:9200/_snapshot/kibana_prod/$SNAPSHOT" -s
    echo -n "Create new snapshot for prod-cluster"
    curl -XPUT "http://elasticsearch:9200/_snapshot/kibana_prod/$SNAPSHOT"  -s -d '{ "indices": ".kibana",  "ignore_unavailable": "true",  "include_global_state": false }'
    

    The Jenkins schedule; for example, it would last have run at Sunday, May 22, 2016 11:12:54 PM CEST and would next run at Monday, May 23, 2016 11:12:54 PM CEST:

    H 23 * * *
    

    The monthly snapshot:

    SNAPSHOT=backup_$(date +%m)
    # Test
    echo -n "Create monthly snapshot for test-cluster"
    curl -XDELETE "http://localhost:9200/_snapshot/kibana/$SNAPSHOT" -s
    curl -XPUT "http://localhost:9200/_snapshot/kibana/$SNAPSHOT" -s -d '{ "indices": ".kibana",  "ignore_unavailable": "true",  "include_global_state": false }'
    # Prod
    echo -n "Create monthly snapshot for prod-cluster"
    curl -XDELETE "http://elasticsearch:9200/_snapshot/kibana_prod/$SNAPSHOT" -s
    curl -XPUT "http://elasticsearch:9200/_snapshot/kibana_prod/$SNAPSHOT" -s -d '{ "indices": ".kibana",  "ignore_unavailable": "true",  "include_global_state": false }'
    

    The Jenkins schedule; for example, it would last have run at Sunday, May 1, 2016 12:16:07 AM CEST and would next run at Wednesday, June 1, 2016 12:16:07 AM CEST:

    H 0 1 * *
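
    Restoring works the other way around. A minimal sketch, assuming the daily repository, the snapshot taken on Monday (snapshot_1), and that the live .kibana index is closed first:

    curl -XPOST "http://localhost:9200/.kibana/_close" -s
    curl -XPOST "http://localhost:9200/_snapshot/kibana/snapshot_1/_restore" -s -d '{ "indices": ".kibana" }'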
    
  10. 2015-10-05 - Backup and restore Kibana 4 objects; Tags: Backup and restore Kibana 4 objects

    Backup and restore Kibana 4 objects

    This post explains the backup of Kibana 4.1 objects and how to restore them.

    Kibana 4 has a basic concept:

    • you define a search
    • based on the search you create a visualization
    • visualizations can be combined into a dashboard

    In this order you need to export and import the objects for a backup. Kibana 4 offers the export functionality for each object type under Settings → Objects. Assume you have multiple environments like:

    • development
    • integration/staging
    • pre-production
    • production

    If the dashboards, visualizations and searches only differ in the environment name, you can export the Kibana objects for one environment. The output is in JSON format. You can simply search and replace the environment name and import a cloned dashboard for each environment, as in the sketch below.
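
    A hedged example of such a clone step on the exported JSON; the file names and the environment token are illustrative:

    sed 's/myapp-int/myapp-prod/g' export-integration.json > export-production.json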

  11. 2015-09-09 - Convert to correct base in Kibana 4; Tags: Convert to correct base in Kibana 4