1. 2018-05-05 - Run metricbeat as docker container

    Run metricbeat as docker container

    Metricbeat as a docker container is a decent solution to monitor other docker containers in conjunction with Elasticsearch and Kibana. Additionally, metricbeat can monitor the docker system itself. You might run into some problems, which I would like to share.

    If you have the docker module enabled and get this error message in the elasticsearch document:

    Get http://unix.sock/containers/json?: dial unix /var/run/docker.sock: connect: no such file or directory
    

    You need to mount /var/run/docker.sock as a volume (read-only is sufficient) in the docker run options.

    /var/run/docker.sock:/var/run/docker.sock:ro
    
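    A minimal sketch of such a docker run command, assuming the official metricbeat 6.x image and a metricbeat.yml on the host (image tag and paths are examples):

    # mount the docker socket read-only so the docker module can query the daemon
    docker run -d --name metricbeat \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      -v /etc/metricbeat/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml:ro \
      docker.elastic.co/beats/metricbeat:6.2.4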

    Metricbeat has the user id 1000 in the container. To allow the container user to read from the docker socket, run this as superuser on the docker host system:

    setfacl -m u:1000:rw /var/run/docker.sock
    



  2. 2018-03-15 - Using Sidecar Container for Elasticsearch Configuration

    Using Sidecar Container for Elasticsearch Configuration

    Applications shipped in Docker containers are a major game changer, especially for running an Elasticsearch cluster. My production cluster consists of 11 nodes. At its core, Elasticsearch is the same everywhere; each node though has its specific configuration, settings and purpose. On top of that, Elasticsearch X-Pack Security in version 6 requires that the communication within the cluster runs encrypted. This is accomplished with SSL certificates: each node has its own private key and certificate. So I was facing the problem of how to ship the node-specific parts along with the core elasticsearch container. Use the core container as baseline and copy the configuration and certificates into the container? That would result in 11 specific images. Not in the spirit of reusability though. :thinking: The better answer came by remembering the tech talk Docker Patterns by Roland Huss, given at the Javaland 2016 conference. Use a configuration container as a sidecar!

    Concept

    The basic question behind the pattern is: how to configure containerized applications for different environments?

    Solution Patterns:

    • Env-Var Configuration
    • Configuration Container
    • Configuration Service

    ENV-VAR Configuration

    The bullet points from Docker Patterns:

    • Standard configuration method for Docker container
    • Specified during build or run time
    • Universal

    We can define environment variables and configure the dockerized application to use them. Environment variables can be overridden by passing them at run time.

    Elasticsearch already does that, for example:
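    A minimal sketch of such a run, assuming the official elasticsearch image (image tag and values are examples):

    # TZ is a plain docker environment variable,
    # -E node.name=alpha is passed through to elasticsearch
    docker run -d --name elasticsearch \
      -e TZ=Europe/Zurich \
      docker.elastic.co/elasticsearch/elasticsearch:6.2.2 \
      -E node.name=alpha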

    We pass multiple environment settings in the above example.

    • For instance we set docker environment variables like timezone (TZ).
    • The -E node.name=alpha is an Elasticsearch argument.

    The elasticsearch node configuration was identical; only the node-specific information was provided.

    I used this approach in the past. It worked until node certificates became a requirement: environment variables don't work for certificate files. Let's take a look at the next approach.

    Configuration container

    The bullet points from Docker Patterns:

    • Configuration in extra Docker container
    • Volume linked during runtime

    Pros

    • Flexible
    • Explicit

    Cons

    • Static
    • Maintenance overhead
    • Custom file layout

    The basic idea is to run 11 elasticsearch containers and just attach or link each of them to the configuration container (symbolically, a sidecar).

    Configuration Service

    The third approach uses some kind of central configuration registry or service. Consul, etcd or Apache ZooKeeper are viable solutions, but not applicable in my scenario with Elasticsearch.

    So Sidecar it is!

    Sidecar

    This pattern is named Sidecar because it resembles a sidecar attached to a motorcycle. In the pattern, the sidecar is attached to a parent application and provides supporting features for the application. The sidecar also shares the same lifecycle as the parent application, being created and retired alongside the parent. The sidecar pattern is sometimes referred to as the sidekick pattern and is a decomposition pattern.

    Motorcycle with Sidecar

    Creating the Sidecar Container

    I will demonstrate how I applied the sidecar pattern for my elasticsearch test-cluster of 3 nodes.

    First my project layout:
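    A hypothetical layout for the 3-node cluster (all names are examples):

    es-config/
    ├── Dockerfile
    ├── .dockerignore
    ├── alpha/
    │   ├── elasticsearch.yml
    │   └── certs/
    ├── beta/
    │   ├── elasticsearch.yml
    │   └── certs/
    └── gamma/
        ├── elasticsearch.yml
        └── certs/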

    Each node has the following node-specific configuration. Mandatory for Elasticsearch are elasticsearch.yml and the certs folder for X-Pack security.

    Dockerfile

    The Dockerfile to build the sidecar container.
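    A minimal sketch of such a Dockerfile, assuming a tiny base image:

    FROM busybox
    # copy all node configurations into the container
    COPY . /config/
    # expose the configuration directory as a volume
    VOLUME /config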

    The COPY command copies the configuration into the container's /config/ directory. Pay attention to exclude unwanted files via the .dockerignore file.

    Check contents

    You can inspect the sidecar container by executing the list directory command ls. After the execution, the docker container is automatically removed.
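    A sketch of such a check, assuming the image was built as es-config (a hypothetical name); --rm removes the container after the command finishes:

    docker run --rm es-config ls -lR /config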

    Deployment

    On the docker host, we don't need to run the sidecar container. Just creating the container makes the docker volume available.
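    A sketch, using the container name es_config that is referenced below:

    docker create --name es_config es-config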

    Usage

    Using Elasticsearch with the configuration: to use the configuration volume from our sidecar container, we add the option --volumes-from es_config, where es_config is the name of the sidecar container.
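    A sketch of such a run for node alpha; the image tag and the /config/<node> path convention are assumptions:

    docker run -d --name es-alpha \
      --volumes-from es_config \
      -e ES_PATH_CONF=/config/alpha \
      docker.elastic.co/elasticsearch/elasticsearch:6.2.2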

    Container Lifecycle

    The sidecar container will be removed if you run some cleanup, but the volume still exists, since it is used by the elasticsearch application container. A docker inspect of the application container will give you the name of the volume in use.

    The volume name f89912de22e2b34170e0b331c8a5e25b00f921f4e2417c6b140382389fadee7e is used.

    To check:
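    One way to check, assuming the application container is named es-alpha:

    # print the volume names mounted into the application container
    docker inspect -f '{{ range .Mounts }}{{ .Name }}{{ end }}' es-alpha
    docker volume ls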

    As you can see, the volume name is hard for humans to remember. As the cherry on top, Docker lets you create named volumes.

    Summary

    • The sidecar pattern is very useful.
    • Sidecar containers have multiple purposes:
      • Configuration
      • Proxy (using Nginx as Load Balancer or Proxy)
      • Logging (running log shipper for centralized logging)
      • any other abstraction
    • A sidecar container's lifecycle is tied to its application container(s).



  3. 2018-03-13 - Apache Kafka Management and Monitoring

    Apache Kafka Management and Monitoring

    Monitoring Apache Kafka is crucial to know the moment when to act or scale out your Kafka clusters. Besides the CLI commands, metrics are also accessible over JMX and jconsole. A more convenient way is to have a GUI that displays them. This post focuses on Kafka Manager, an administration GUI for Kafka by Yahoo.

    Other solutions exist, like the metricbeat kafka module by elastic together with elasticsearch, but the module itself is beta and subject to change. Therefore I discarded the idea of using Kibana and Elasticsearch.

    Enable JMX

    JMX must be enabled in order to get the metrics displayed in kafka-manager.

    In my ansible playbook.yml for the Apache Kafka Docker container, I add these environment variables:

    KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote \
      -Dcom.sun.management.jmxremote.authenticate=false \
      -Dcom.sun.management.jmxremote.ssl=false \
      -Djava.rmi.server.hostname={{ansible_hostname}} \
      -Dcom.sun.management.jmxremote.rmi.port=9099"
    JMX_PORT: 9099
    

    Pay attention to the exposed JMX port. You can choose any port. I think port 9099 is fitting, as Kafka's default exposed port is 9092.

    Kafka Manager

    Once Apache ZooKeeper and Apache Kafka are operational, I start a separate docker container for kafka-manager. The most recent docker image for kafka-manager is by sheepkiller.

    Run it interactively and adjust ZK_HOSTS to your needs:

    docker run -it --rm -p 9000:9000 \
    -e ZK_HOSTS="alpha:2181,beta:2181,gamma:2181" \
    sheepkiller/kafka-manager -Djava.net.preferIPv4Stack=true
    

    Or run it detached for production mode:

    docker run -d --name kafka-manager -p 9000:9000 \
    -e ZK_HOSTS="alpha:2181,beta:2181,gamma:2181" \
    --restart always \
    --log-driver json-file --log-opt max-size=10m \
    sheepkiller/kafka-manager -Djava.net.preferIPv4Stack=true
    

    You can access the GUI on the docker host over port 9000. See the exposed metrics below.

    Kafka Manager Admin Interface



  4. 2018-03-08 - Apache ZooKeeper in Production: Replicated ZooKeeper

    Apache ZooKeeper in Production: Replicated ZooKeeper

    Apache Kafka uses Apache ZooKeeper. Apache Kafka needs coordination, and Apache ZooKeeper is the piece of software that provides it. Coordinating distributed applications is ZooKeeper's job. As part of my Kafka evaluation I investigated how to run Apache ZooKeeper in a production scenario for Apache Kafka. This is a detailed documentation and summary of my observations. I won't go into detail on how coordination is done for Apache Kafka with ZooKeeper; I might explain it in another article. This article focuses on ZooKeeper in a production environment concerning High Availability scenarios.

    General

    Some details about my scenario:

    • Using 9 nodes (yes, that's my production; btw. a Raspberry Pi is pretty cheap in case you want to try it out yourself)
    • Node List: 1. alpha, 2. beta, 3. gamma, 4. delta, 5. epsilon, 6. zeta, 7. eta, 8. theta, 9. omega
    • Deployment in Docker Containers with Ansible to above nodes, using this ZooKeeper Docker image
    • Ports:
      • 2181 - the client port for Apache Kafka or other clients
      • 2888 - port for ZooKeeper to connect to other ZooKeeper peers to coordinate
      • 3888 - the port for leader election, if the cluster needs to determine who is in charge
      • ports 2888 and 3888 need not be explicitly exposed with Docker networking in host mode (see the playbook below)

    Getting Started

    • From the Apache Zookeeper Guide: Using Apache ZooKeeper in production, you should run it in replicated mode.

    What does replicated mode mean?

    • A replicated group of servers in the same application is called a quorum,
    • and in replicated mode, all servers in the quorum have copies of the same configuration file.

    Some notes on Kafka’s ZooKeeper usage:

    • You have more than one ZooKeeper instance for Apache Kafka.
    • Kafka uses a ZooKeeper cluster; some call it an ensemble (Kafka in Action).

    In the Kafka server.properties you can provide a connection string with all the ZooKeeper instances. What about a Load Balancer? Let’s leave that out of the equation :wink:.

    Kafka’s server.properties

    zookeeper.connect="alpha:2181,beta:2181,gamma:2181,delta:2181,epsilon:2181,zeta:2181,eta:2181,theta:2181,omega:2181"

    Architecture Design

    Some information from Apache ZooKeeper

    Minimum Requirement

    Apache ZooKeeper Getting Started:

    For replicated mode, a minimum of three servers are required, and it is strongly recommended that you have an odd number of servers. If you only have two servers, then you are in a situation where if one of them fails, there are not enough machines to form a majority quorum. Two servers is inherently less stable than a single server, because there are two single points of failure.

    Apache ZooKeeper Administration Guide:

    Usually three servers is more than enough for a production install, but for maximum reliability during maintenance, you may wish to install five servers. With three servers, if you perform maintenance on one of them, you are vulnerable to a failure on one of the other two servers during that maintenance. If you have five of them running, you can take one down for maintenance, and know that you’re still OK if one of the other four suddenly fails.

    Summary:

    • minimum 3 servers
    • an odd number is better for majority elections
    • recommendation is 5 servers

    To my specific scenario:

    • Having 9 servers the majority election takes place with 5 servers!
    • 3 ZooKeeper servers are not the majority in a cluster of 9!

    Deployment

    As I have stated Ansible is used to ship ZooKeeper in Docker containers.

    Ansible Playbook

    Here is the playbook.yml; it is explained in detail below.

    # playbook for Apache Zookeeper deployment with docker containers
    # some problems with exposed ports, switch to network_mode host as workaround
    ---
    
    - hosts: all
      vars:
        image: zookeeper
        ports:
          - "2181:2181"
          - "2888:2888"
          - "3888:3888"
        ids:
          alpha:    1
          beta:     2
          gamma:    3
          delta:    4
          epsilon:  5
          zeta:     6
          eta:      7
          theta:    8
          omega:    9
        mappings:
          ZOO_SERVERS: >
            server.1=alpha:2888:3888
            server.2=beta:2888:3888
            server.3=gamma:2888:3888
            server.4=delta:2888:3888
            server.5=epsilon:2888:3888
            server.6=zeta:2888:3888
            server.7=eta:2888:3888
            server.8=theta:2888:3888
            server.9=omega:2888:3888
          ZOO_MY_ID: "{{ids[ansible_hostname]}}"
      tasks:
        - name: deploy
          docker_container:
            env:
              "{{mappings}}"
            name: zookeeper
            image: zookeeper
            network_mode: host
            pull: true
            log_opt:
              max-file: "3"
              max-size: 25m
            state: started
            restart: yes
            restart_policy: always
            restart_retries: 10
            volumes:
              - "/var/opt/zookeeper:/data"
              - "/var/log/zookeeper:/datalog"
    

    The environment variables are important.

    • ZOO_MY_ID = Each ZooKeeper server needs a unique id. We pass it by using a dictionary, i.e. alpha has the id 1.
    • ZOO_SERVERS = The server list is mandatory for Apache ZooKeeper. The lines are concatenated into a single line and passed as a docker environment variable.

    To deploy, run ansible-playbook playbook.yml.

    Inspect Deployment

    Each ZooKeeper server is shipped in a docker container with the name zookeeper. Inspecting the alpha container:

    tan@alpha:~> docker exec -it zookeeper /bin/bash
    bash-4.4# ls /conf/
    configuration.xsl  log4j.properties   zoo.cfg            zoo_sample.cfg
    bash-4.4# cat /conf/zoo.cfg
    clientPort=2181
    dataDir=/data
    dataLogDir=/datalog
    tickTime=2000
    initLimit=5
    syncLimit=2
    maxClientCnxns=60
    server.1=alpha:2888:3888
    server.2=beta:2888:3888
    server.3=gamma:2888:3888
    server.4=delta:2888:3888
    server.5=epsilon:2888:3888
    server.6=zeta:2888:3888
    server.7=eta:2888:3888
    server.8=theta:2888:3888
    server.9=omega:2888:3888
    
    • The passed arguments are used to write the ZooKeeper configuration in /conf.
    • Pay attention that the docker image does not use the conf directory within the ZooKeeper distribution:
    bash-4.4# cd conf/
    bash-4.4# ls
    bash-4.4# pwd
    /zookeeper-3.4.11/conf
    

    We have seen that the server list is written to the ZooKeeper configuration, but where is ZOO_MY_ID stored?

    ZooKeeper stores it in myid in the data directory. This is the mounted volume /var/opt/zookeeper.

    On the docker host system:

    tan@alpha:/var/opt/zookeeper> cat myid
    1
    

    Leader Election

    How does ZooKeeper elect its leader? As we know, this is a majority election. For demonstration purposes I will start node by node to illustrate the behavior of ZooKeeper. Since I have found some bogus information on various blog pages, I want to protect you from misinformation. As we know, with 9 servers at least 5 are required for leader election.

    Initial

    Start Server 1: alpha

    tan@alpha:/opt/ansible/zookeeper> ansible-playbook -l alpha playbook.yml
    
    PLAY [all] *********************************************************************
    
    TASK [setup] *******************************************************************
    ok: [alpha]
    
    TASK [deploy] ******************************************************************
    changed: [alpha]
    
    PLAY RECAP *********************************************************************
    alpha: ok=2    changed=1    unreachable=0    failed=0
    

    ZooKeeper allows us to issue four letter commands via telnet or nc (netcat), e.g. to check its status with the stat command.

    tan@alpha:/opt/ansible/zookeeper> telnet alpha 2181
    Trying 10.22.62.124...
    Connected to alpha.
    Escape character is '^]'.
    stat
    This ZooKeeper instance is not currently serving requests
    Connection closed by foreign host.
    

    The message This ZooKeeper instance is not currently serving requests is important for us: this node is up, but not operational. Another command is ruok (are you ok?).

    tan@alpha:/opt/ansible/zookeeper> telnet alpha 2181
    Trying 10.22.62.124...
    Connected to alpha.
    Escape character is '^]'.
    ruok
    imok
    Connection closed by foreign host.
    

    ZooKeeper responds imok (I am ok :smile:). The alpha node is up.

    Start other nodes

    Start ZooKeeper server 2 (beta) and check with netcat:

    ansible-playbook -l beta playbook.yml
    echo stats | nc beta 2181
    This ZooKeeper instance is not currently serving requests
    

    Repeat this until server 5 (epsilon).

    tan@alpha:/opt/ansible/zookeeper> echo stats | nc epsilon 2181
    Zookeeper version: 3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0, built on 11/01/2017 18:06 GMT
    Clients:
     /10.22.62.128:56598[0](queued=0,recved=1,sent=0)
    
    Latency min/avg/max: 0/0/0
    Received: 2
    Sent: 1
    Connections: 1
    Outstanding: 0
    Zxid: 0x600000000
    Mode: leader
    Node count: 4
    

    We see that ZooKeeper was able to elect the leader as the majority vote could take place. Check the logs:

    2018-03-08 10:42:59,758 [myid:5] - INFO  [LearnerHandler-/10.22.62.124:55684:LearnerHandler@535] - Received NEWLEADER-ACK message from 1
    2018-03-08 10:42:59,758 [myid:5] - INFO  [LearnerHandler-/10.22.190.126:39461:LearnerHandler@535] - Received NEWLEADER-ACK message from 4
    2018-03-08 10:42:59,758 [myid:5] - INFO  [LearnerHandler-/10.22.190.121:49735:LearnerHandler@535] - Received NEWLEADER-ACK message from 3
    2018-03-08 10:42:59,775 [myid:5] - INFO  [LearnerHandler-/10.22.62.126:39509:LearnerHandler@535] - Received NEWLEADER-ACK message from 2
    2018-03-08 10:42:59,776 [myid:5] - INFO  [QuorumPeer[myid=5]/0:0:0:0:0:0:0:0:2181:Leader@962] - Have quorum of supporters, sids: [ 1,2,3,4,5 ]; starting up and setting last processed zxid: 0x600000000
    

    Check the previous nodes:

    tan@alpha:~> for i in alpha beta gamma delta; do echo $i: $(echo stats | nc $i 2181 | grep "Mode"); done
    alpha: Mode: follower
    beta: Mode: follower
    gamma: Mode: follower
    delta: Mode: follower
    

    I have written a small gist on how to check across all nodes who the leader is.

    Important: the startup order of the ZooKeeper servers is not relevant, i.e. alpha does not have to be started first.

    High Availability Scenarios

    • Having 9 nodes means that 5 nodes must be up in order to keep the cluster operational.
    • 4 nodes can be altered or upgraded in the meantime.

    • The recommended scenario is 5 nodes, which means 3 nodes must be alive.
    • If only 2 nodes are alive, ZooKeeper will stop serving requests until a third node is up again.

    Summary

    • Running ZooKeeper in replicated mode is simple.
    • Ansible and Docker are great and essential for maintaining a production cluster.
    • Some details were intentionally left out, e.g. how much disk space a ZooKeeper server must or should have. This really depends on your use case with Apache Kafka or Apache Cassandra using Apache ZooKeeper.



  5. 2017-08-29 - Setup letschat with docker containers

    Setup letschat with docker containers

    Having chat rooms for specific topics or projects has become quite popular. IMHO it is a nice addon to the development and deployment lifecycle. HipChat or Slack are popular providers. If you want an internal chat system, letschat is a quick way to accomplish that. It has some flaws (debugging LDAP was a horror), but it is basically good enough in its vanilla state. Take it as it is; it's free (MIT License). We currently use it for receiving notifications from our continuous integration system.

    Persistent data

    letschat uses MongoDB as data storage. Start MongoDB as a docker container with a data volume for persistent storage.

    docker run --name chat-mongo -v /var/opt/mongodb/data:/data/db -d mongo:latest
    

    MongoDB is neither clustered nor highly available here. This is a simple setup.

    Docker Customizations

    Upload directory

    mkdir -p /var/opt/letschat/uploads
    

    Config directory

    mkdir -p /opt/letschat/config
    

    Place the content of this settings.yml in the created directory.

    env: production
    
    http:
      enable: true
      host: 'localhost'
      port: 5000
    
    files:
      enable: true
      provider: local
      local:
        dir: uploads
    
    xmpp:
      enable: false
      port: 5222
      domain: example.com
    
    database:
      uri: mongodb://localhost/letschat
    
    secrets:
      cookie: secretsauce
    
    auth:
      providers: [local]
    

    Start Letschat

    We map the volumes and link the MongoDB container, so letschat can use it for data persistence.

    docker run  \
      --name letschat \
      --link chat-mongo:mongo \
      -p 8091:8080 -p 5222:5222 \
      -v /opt/letschat/config:/usr/src/app/config \
      -v /var/opt/letschat/uploads:/usr/src/app/uploads \
      --restart=always \
      -d sdelements/lets-chat 
    
  6. 2017-06-19 - Install gosu for Docker

    Install gosu for Docker

    gosu is an essential helper for dockerized applications. The following recipe is from daily work. I always have some connectivity issues due to security precautions. This recipe works especially behind corporate firewalls with an http proxy.

    The ENV instruction in the Dockerfile gives us some flexibility.

    ENV GOSU_VERSION=1.10 \
        http_proxy=${http_proxy:-http://10.0.2.2:3128} \
        https_proxy=${https_proxy:-https://10.0.2.2:3128}
    

    The first line defines the release version of gosu, which can easily be updated by bumping the version.

    Working behind a corporate firewall requires defining the proxy environment variables http_proxy and https_proxy. You can pass the variables in the docker build command. If omitted, the defined default proxy 10.0.2.2 on port 3128 is used. I use an overall proxy - cntlm - to ease the setup. Using Jenkins to build docker containers often comes with different settings, therefore you can always override the defaults by passing the arguments into the docker build command.

    docker build --build-arg http_proxy=http://the-mapper-proxy:8080 --build-arg https_proxy=https://secure-proxy:8181 .
    

    For Alpine distributions, this RUN command fetches the respective release from github, checks its validity and installs it in the docker container for usage.

    # add gosu
    RUN apk update && \
        apk add vim && apk add wget && \
        set -x \
        && apk add --no-cache --virtual .gosu-deps \
            dpkg \
            gnupg \
            openssl \
        && dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" \
        && wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" \
        && wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc" \
        && export GNUPGHOME="$(mktemp -d)" \
        && gpg --keyserver ha.pool.sks-keyservers.net --keyserver-options http-proxy=$http_proxy --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
        && gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu \
        && rm -r "$GNUPGHOME" /usr/local/bin/gosu.asc \
        && chmod +x /usr/local/bin/gosu \
        && gosu nobody true \
        && apk del .gosu-deps
    
  7. 2017-04-19 - Use usermod and groupmod in Alpine Linux Docker Images

    Use usermod and groupmod in Alpine Linux Docker Images

    Elastic uses Alpine Linux as the base image for their Elasticsearch docker images. Since v5.3.1 there are no more supported Debian-based docker images on dockerhub, which made it necessary to rewrite my docker files for my company. One obstacle is to assign the elasticsearch user a specific uid and gid on the docker host system.

    Deprecated images

    Since I'm not familiar with Alpine Linux I had to investigate a little. To get usermod and groupmod, I have to install the shadow package.

    # change uid and gid for elasticsearch user
    RUN apk --no-cache add shadow && \
        usermod -u 2500 elasticsearch && \
        groupmod -g 2500 elasticsearch
    
  8. 2017-01-03 - Instant monitoring for Docker with cAdvisor

    Instant monitoring for Docker with cAdvisor

    Google has an Open Source (Apache License 2.0) solution for instant monitoring of docker containers, the Container Advisor (cAdvisor). It is a powerful, simple solution for instant monitoring. The drawback: it keeps no history. You may export the data to Elasticsearch, but keep in mind that this is an extra effort and not the subject of this post. I'm going to introduce the basics in this post.

    Intro

    I built a little Docker container - Elasticsearch Curator (es-curator) - and wanted to know how much memory it consumes on the big machine. For that you might use docker stats. It might give you a very long list like this (output shortened):

    CONTAINER      CPU %    MEM USAGE / LIMIT     MEM %    NET I/O               BLOCK I/O
    0220fea60e6a   0.08%    720.9 MB / 134.3 GB   0.54%    54.06 MB / 45.96 MB   128.8 MB / 438.3 kB
    ..
    dbfc1c6bf14a   0.22%    900.4 MB / 134.3 GB   0.67%    166.3 MB / 70.31 MB   152.8 MB / 25.99 MB
    e21a72e127ed   0.95%    1.664 GB / 134.3 GB   1.24%    150.9 MB / 236.6 MB   158.3 MB / 39.27 MB
    e62122b6c9f6   22.74%   8.224 GB / 8.59 GB    95.74%   0 B / 0 B             88.09 GB / 183.2 GB

    Holy shit, I don't know my container id, so I need to look it up with docker ps | grep es-curator.

    cAdvisor

    Then I asked myself: there must be someone out there who already did the monitoring job. That's cAdvisor. Just run the docker run command and it works.
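    A sketch based on the run command from the cAdvisor README of that time (image tag is an example):

    # mount the host filesystems read-only so cAdvisor can gather its stats
    sudo docker run -d \
      --name=cadvisor \
      --volume=/:/rootfs:ro \
      --volume=/var/run:/var/run:rw \
      --volume=/sys:/sys:ro \
      --volume=/var/lib/docker/:/var/lib/docker:ro \
      --publish=8080:8080 \
      google/cadvisor:latest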

    Overview

    The overview page, gives you nice gauges: cAdvisor all overview

    As you can see, Google Charts is behind the curtain, and a good indicator for problem areas.

    Process Overview

    If I run docker top on my shell (output prettified), I can look up my process ids.

    # docker top es-curator
    UID      PID     PPID    C  STIME  TTY  TIME                CMD
    root     27446   17307   0  07:15  ?    00:00:00            /bin/sh /docker-entrypoint.sh
    root     27662   27446   0  07:15  ?    00:00:00            crond -f d8 -l 8
    

    This is displayed in the process overview: cAdvisor Processes

    CPU Usage

    Having 32 CPUs can be a little bit messy in the diagram :wink:. cAdvisor CPU usage

    Network Usage

    You can filter with cAdvisor the respective Interface. cAdvisor Network

    Container Overview

    But it is much simpler if you open the overview of the desired container :sunglasses:. Container Overview

  9. 2016-12-28 - Using proxy in Dockerfile

    Using proxy in Dockerfile

    Using docker in companies with tight security involves using proxies for pulling image data from dockerhub. In my previous post I illustrated several ways to make docker use the proxy. I assume you are using a central local proxy auth server like CNTLM, so no auth data has to be passed around. In the Dockerfile itself, it can be used like this:

    ENV http_proxy http://10.0.2.2:3128
    ENV https_proxy https://10.0.2.2:3128
    

    Most programs like curl, wget, apt (Debian) or apk (Alpine Linux) honor that. So something like this should work:

    RUN apk update && apk add wget
    

    Some programs however have dedicated options like gpg --keyserver <keyserver-name> --keyserver-options http-proxy=<proxy-data>.

    ENV GOSU_VERSION 1.9
    RUN set -x \
         && apk add --no-cache --virtual .gosu-deps \
             dpkg \
             gnupg \
             openssl \
         && dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" \
         && wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" \
         && wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc" \
         && export GNUPGHOME="$(mktemp -d)" \
         && gpg --keyserver ha.pool.sks-keyservers.net --keyserver-options http-proxy=$http_proxy --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
         && gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu \
         && rm -r "$GNUPGHOME" /usr/local/bin/gosu.asc \
         && chmod +x /usr/local/bin/gosu \
         && gosu nobody true \
         && apk del .gosu-deps
    
  10. 2016-12-28 - Use custom docker registry

    Use custom docker registry

    Docker has supported custom registries with the docker pull and push commands since 2013. Just add the registry location to the repository name.

    It will look like my.registry.address:port/repository_name. The following example illustrates my recent usage. Before we can push to a repository, the image must be tagged. Find the images with:

    [root@localhost fo-elasticsearch-curator]# docker images | grep curator | grep 4.2.5
    frontoffice/es-curator   4.2.5               be06e3ba93ab        40 minutes ago      103.9 MB
    

    Tag it like this:

    [root@localhost fo-elasticsearch-curator]# docker tag be06e3ba93ab artifactory.cinhtau.net/frontoffice/es-curator:4.2.5
    

    If you don't use a version (4.2.5) in the tag, it becomes latest. You can also apply multiple tags:

    [root@localhost fo-elasticsearch-curator]# docker tag be06e3ba93ab artifactory.cinhtau.net/frontoffice/es-curator:4.2.5
    [root@localhost fo-elasticsearch-curator]# docker tag be06e3ba93ab artifactory.cinhtau.net/frontoffice/es-curator:latest
    

    Now you are ready to push:

    [root@localhost fo-elasticsearch-curator]# docker push artifactory.cinhtau.net/frontoffice/es-curator:4.2.5
    The push refers to a repository [artifactory.cinhtau.net/frontoffice/es-curator]
    db35aa2d9103: Pushed
    c53b2bda49da: Pushed
    d0a52e0c52ff: Pushed
    cf16c8a484e6: Pushed
    9e8d827bd66e: Pushed
    ae9ff5adae09: Pushed
    cde1526b53e0: Pushed
    513678dc07f5: Pushed
    874b7daf5cb3: Pushed
    2613530126e7: Pushed
    cc318934a479: Pushed
    c9fc143a069a: Pushed
    011b303988d2: Pushed
    latest: digest: sha256:2202206bb6e2b038849c382fffcb0ae769736bc7429d47c751f35b4a0991d0fd size: 3038
    

    On the docker daemon host you can pull or run it directly.

    # docker pull artifactory.cinhtau.net/frontoffice/es-curator:4.2.5
    # docker run -d -v /var/log/docker/curator:/var/log/curator --name=es-curator artifactory.cinhtau.net/frontoffice/es-curator:4.2.5
    
  11. 2016-11-08 - Monitor Elasticsearch in Docker with Monit

    Monitor Elasticsearch in Docker with Monit

    Running Elasticsearch as a docker container is straightforward. If you don't have a cluster manager like Kubernetes, monit can help you keep track of the container lifecycle.

    An exemplary monit configuration:

    CHECK PROCESS elasticsearch WITH MATCHING "org.elasticsearch.bootstrap.Elasticsearch"
    CHECK PROGRAM elasticsearch_container WITH PATH "/usr/bin/docker top elasticsearch"
      if status != 0 then alert
        alert warning@cinhtau.net
      group elkstack
    CHECK HOST elasticsearch_healthcheck WITH ADDRESS cinhtau.net
      if failed url http://cinhtau.net:9200 for 5 cycles
        then alert
          alert warning@cinhtau.net BUT not on { action, instance }
      depends on elasticsearch_container
      group elkstack
    CHECK FILE elasticsearch_logfile with path /var/log/elasticsearch/test-cluster.log
      if match "ERROR" for 2 times within 5 cycles then alert
        alert elasticsearch@cinhtau.net BUT not on { action, instance, nonexist }
      depends on elasticsearch_container
      group elkstack
    

    Pay attention to the nonexist option. Monit does an implicit check whether the logfile exists. Elasticsearch writes a log file. Our housekeeping (logrotate or some kind of janitor script) renames, compresses or deletes this file. Without the option, monit would complain when the file is missing. If the file doesn't exist, which is basically good for prod, you don't want to be notified or warned. No logs, no errors, no worries.

  12. 2016-09-27 - Run Sonarqube with Docker and PostgreSQL

    Run Sonarqube with Docker and PostgreSQL

    A long time ago (seems like ages to me) I was programming in Java and had my projects analyzed with Sonarqube. I always remembered that a Sonarqube upgrade was never quick to make. With Docker I now have the possibility to run the latest stable Sonarqube version. No manual upgrades anymore. Sounds wonderful. The following installation was made on my Linux box running Ubuntu 16.04.01 LTS with Docker 1.11.2 and PostgreSQL 9.5.

    PostgreSQL Installation

    First I need PostgreSQL for Sonar to store its data.

    sudo apt-get install postgresql

    Initial setup

    Log in as the postgres user and alter the password of the database user:

    sudo -u postgres psql template1 
    
    ALTER USER postgres WITH PASSWORD 'fancypassword';
    

    Move data directory

    By default the data directory is /var/lib/postgresql/9.5/main. This is on my SSD; the data can live on my regular HDD instead. Therefore I move it to /home/postgresql.

    This step is not necessary :smirk:.
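    For completeness, a sketch of the move, assuming the default Ubuntu paths (adjust data_directory in postgresql.conf accordingly):

    sudo systemctl stop postgresql
    sudo rsync -av /var/lib/postgresql /home/
    # point data_directory in /etc/postgresql/9.5/main/postgresql.conf
    # to /home/postgresql/9.5/main, then start the service again
    sudo systemctl start postgresql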

    Create database

    We need a user for Sonarqube. I just named it sonar.

    sudo -u postgres createuser -D -P sonar
    

    The options explained:

    • -D → The new user will not be allowed to create databases
    • -P → Password prompt
    

    Now we create the database for sonar and assign the encoding and owner. I just chose the innovative name sonar :smile:.

    sudo -u postgres createdb sonar --encoding=UTF-8 --owner=sonar
    

    Sonarqube with Docker

    Now we need to pull the Docker image from https://hub.docker.com/_/sonarqube/. I chose the image variant lts-alpine. This image is based on the popular Alpine Linux project, available in the alpine official image. Alpine Linux is much smaller than most distribution base images (~5MB), and thus leads to much slimmer images in general.

    sudo docker pull sonarqube:lts-alpine
    

    Testing and troubleshooting

    If we run it interactively with the settings for PostgreSQL:

    $ docker run -it --name sonarqube \
        -p 9000:9000 -p 9092:9092 \
        -e SONARQUBE_JDBC_USERNAME=sonar \
        -e SONARQUBE_JDBC_PASSWORD=sonar \
        -e SONARQUBE_JDBC_URL=jdbc:postgresql://localhost/sonar \
        sonarqube:lts-alpine
    

    We get an error :-o

    2016.09.26 20:19:45 INFO  web[o.sonar.db.Database] Create JDBC data source for jdbc:postgresql://localhost:5432/sonar
    Caused by: org.postgresql.util.PSQLException: Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
    

    Fixing connection problem

    The connection between Docker and PostgreSQL doesn't work. Docker has its own network interface with an assigned IP.

    tan@omega:~$ ifconfig 
    docker0   Link encap:Ethernet  HWaddr 02:42:3d:4c:f3:00  
              inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
              inet6 addr: fe80::42:3dff:fe4c:f300/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:18707 errors:0 dropped:0 overruns:0 frame:0
              TX packets:18753 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:8669821 (8.6 MB)  TX bytes:9622716 (9.6 MB)
    

    Therefore PostgreSQL has to allow this IP range. Add the line for 172.17.x.x in /etc/postgresql/9.5/main/pg_hba.conf:

    # IPv4 local connections:
    host    all             all             127.0.0.1/32            md5
    host    all             all             172.17.0.0/16           md5
    

    Now we need to apply the changes, either with a simple reload

    sudo -u postgres /usr/lib/postgresql/9.5/bin/pg_ctl -D /home/postgresql/9.5/main reload
    

    or just restart the service. A restart basically isn't necessary, but since it is a local non-shared installation:

    /etc/init.d/postgresql restart
    

    Tuning docker run

    The official run command has some flaws. For portability you shouldn’t use localhost. You can add the option --add-host=database to expose your database host IP to docker.

    Furthermore we need to map the declared volumes for data and extensions. Otherwise all changes and installed plug-ins are gone when the docker container is removed. I decided to put everything in my home directory.

    mkdir -p sonar/data sonar/extensions
    
    sudo docker rm sonarqube && sudo docker run -d --name sonarqube \
     -p 9000:9000 -p 9092:9092 \
     -v /home/tan/sonar/data:/opt/sonarqube/data \
     -v /home/tan/sonar/extensions:/opt/sonarqube/extensions \
     -e SONARQUBE_JDBC_USERNAME=sonar \
     -e SONARQUBE_JDBC_PASSWORD=sonar \
     -e SONARQUBE_JDBC_URL=jdbc:postgresql://192.168.1.123:5432/sonar \
     --add-host=database:192.168.1.123 \
     sonarqube:lts-alpine
    

    If we run it interactively again, we see that the connection is ok and sonar creates some tables.

    2016.09.26 20:22:49 INFO  web[o.sonar.db.Database] Create JDBC data source for jdbc:postgresql://192.168.1.123:5432/sonar
    ..
    2016.09.26 20:22:57 INFO  web[DbMigration] ==  InitialSchema: migrating ==================================================
    2016.09.26 20:22:57 INFO  web[DbMigration] -- create_table(:projects, {})
    

    Some tables shown in DataGrip. postgresql-sonarqube

    Check Docker container

    If the container is running, docker ps shows you the information you need.

    tan@omega:~$ sudo docker ps
    CONTAINER ID        IMAGE                  COMMAND             CREATED             STATUS              PORTS                                            NAMES
    44a2292b59a6        sonarqube:lts-alpine   "./bin/run.sh"      17 minutes ago      Up 17 minutes       0.0.0.0:9000->9000/tcp, 0.0.0.0:9092->9092/tcp   sonarqube
    

    The wide output isn't very readable, therefore we can format it in a more human-readable way.

    
    tan@omega:~$ sudo docker ps --format 'CONTAINER ID: {{.ID}}\nIMAGE: {{.Image}}\nCOMMAND: {{.Command}}\nCREATED: {{.CreatedAt}}\nSTATUS: {{.Status}}\nPORTS: {{.Ports}}\nNAMES: {{.Names}}'
    CONTAINER ID: 44a2292b59a6
    IMAGE: sonarqube:lts-alpine
    COMMAND: "./bin/run.sh"
    CREATED: 2016-09-26 22:30:14 +0200 CEST
    STATUS: Up 23 minutes
    PORTS: 0.0.0.0:9000->9000/tcp, 0.0.0.0:9092->9092/tcp
    NAMES: sonarqube
    
    

    Check if the docker container listens on the Sonar ports.

    tan@omega:~$ netstat -na | grep  ':9000\|:9092'
    tcp6       0      0 :::9092                 :::*                    LISTEN     
    tcp6       0      0 :::9000                 :::*                    LISTEN
    

    Setup Sonar

    Now that we have a running sonar instance, we need some plugins for code inspection. Go to http://localhost:9000 with your browser and login with the defaults admin/admin.

    Go to Administration → Update Center and install all the plugins you need, in my case:

    • Java
    • Checkstyle
    • Findbugs
    • PMD
    

    You need to restart the Sonar server, which can be done within the Administration web GUI.

    Analyze Maven Project

    If you use maven for a java project, you add the maven sonar plugin to your build section:

    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>3.5.1</version>
          <configuration>
            <source>1.8</source>
            <target>1.8</target>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.sonarsource.scanner.maven</groupId>
          <artifactId>sonar-maven-plugin</artifactId>
          <version>3.1.1</version>
        </plugin>
      </plugins>
    </build>
    

    Run code inspection

    mvn sonar:sonar
    

    It defaults to localhost:9000. At the bottom of the Maven output you will see some INFO messages.

    [INFO] Analysis report generated in 69ms, dir size=46 KB
    [INFO] Analysis reports compressed in 31ms, zip size=23 KB
    [INFO] Analysis report uploaded in 385ms
    [INFO] ANALYSIS SUCCESSFUL, you can browse http://localhost:9000/dashboard/index/net.cinhtau:ssh-demo
    
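    If the sonar server doesn't run on localhost, the target can be set with the standard sonar.host.url property (host name is an example):

    mvn sonar:sonar -Dsonar.host.url=http://sonar.example.com:9000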

    Shutdown Sonar

    If you don’t need sonar simply stop your sonar container:

    sudo docker stop sonarqube
    
  13. 2016-09-27 - Make docker ps readable

    Make docker ps readable

    Does it bother you that the docker ps output is too wide?

    $ sudo docker ps
    CONTAINER ID        IMAGE                  COMMAND             CREATED             STATUS              PORTS                                            NAMES
    44a2292b59a6        sonarqube:lts-alpine   "./bin/run.sh"      17 minutes ago      Up 17 minutes       0.0.0.0:9000->9000/tcp, 0.0.0.0:9092->9092/tcp   sonarqube
    

    Well, the command itself looks ugly (defining a custom format with a Go template)

    
    sudo docker ps --format 'CONTAINER ID: {{.ID}}\nIMAGE: {{.Image}}\nCOMMAND: {{.Command}}\nCREATED: {{.CreatedAt}}\nSTATUS: {{.Status}}\nPORTS: {{.Ports}}\nNAMES: {{.Names}}'
    
    

    but the output is at least more readable :-)

    CONTAINER ID: 44a2292b59a6
    IMAGE: sonarqube:lts-alpine
    COMMAND: "./bin/run.sh"
    CREATED: 2016-09-26 22:30:14 +0200 CEST
    STATUS: Up 23 minutes
    PORTS: 0.0.0.0:9000->9000/tcp, 0.0.0.0:9092->9092/tcp
    NAMES: sonarqube
    
  14. 2016-09-26 - Show docker container size

    Show docker container size

    If you build docker containers, ensure that you don't write any data within the containers; use mapped volumes or data containers instead. Basically the docker containers that host the application or service should be immutable. I won't go into details why, but show how to check that a docker container does not grow.

    Docker provides the ps command with the option -s or --size.

    The command

    sudo docker ps -s
    

    will give some output. Depending on how many containers you have, it can be a little bit unreadable. See the example output:

    tan@epsilon:~> sudo docker ps -s
    CONTAINER ID        IMAGE                                   COMMAND                  CREATED             STATUS              PORTS                                                                                                                                                                                                                        NAMES                         SIZE
    f68a2ba2b243        6fd8f0170256                            "/bin/sh -c '$JAVA_HO"   3 days ago          Up 3 days           0.0.0.0:33369->2251/tcp, 0.0.0.0:33368->8080/tcp                                                                                                                                                                             trusting_meitner              32.77 kB (virtual 410.3 MB)
    390b7f2e559f        mongo:3.2.9                             "/entrypoint.sh mongo"   3 days ago          Up 3 days           27017/tcp                                                                                                                                                                                                                    kickass_yalow                 8 B (virtual 366.3 MB)
    8e64707ad052        elasticsearch:2.4.0         "/docker-entrypoint.s"   7 days ago          Up 7 days                                                                                                                                                                                                                                        elasticsearch                 32.77 kB (virtual 413.4 MB)
    a428d4a08dc6        14b6487a15bf                            "/bin/sh -c '$JBOSS_H"   9 days ago          Up 9 days           0.0.0.0:32908->8080/tcp    
    

    Therefore docker provides the formatting option: Pretty-print containers using a Go template

    
    sudo docker ps --format '{{.Names}}\n{{.Image}}:{{.Size}}\n' -s
    
    

    The placeholders are described at Docker Reference ps formatting.

    This will give us some more readable output:

    elasticsearch
    elasticsearch:2.4.0:32.77 kB (virtual 413.4 MB)
    
    kibana
    kibana-test:4.6.1:121.5 kB (virtual 661.7 MB)
    
    suspicious_austin
    716ee716f7bf:864.7 kB (virtual 664.3 MB)
    
    services
    8e28731dcc86:71.51 MB (virtual 1.425 GB)
    
    mnorc
    c18ea635a8ed:864.6 kB (virtual 664.3 MB)
    
  15. 2016-09-02 - Use Travis CI in Github to build and deploy to dockerhub

    Use Travis CI in Github to build and deploy to dockerhub

    I love reveal.js - The HTML Presentation Framework. Attending the Javaland 2016 conference I saw an awesome usage of reveal.js within a docker container in the Docker Patterns talk by Roland Huß. Curious and eager to know more, I explored his github account. Mr. Huß offers the basics in the docker-reveal repository. Using github for docker builds is a great idea. Then I started to play around with docker myself, mostly to maintain and ease administering multiple elasticsearch nodes in a cluster. I felt github offered me the opportunity to use Travis CI to build the docker image and deploy it to dockerhub - the docker image storage. It was easier than I thought and is much better than building it manually every time. This post covers the progress and results.

    Basic Steps

    The general roadmap:

    • Create a public docker repository at dockerhub, to push the docker images to it :-)
    • Create a github repository, to maintain the Dockerfile and source
    • Set up a continuous build: use Travis CI in github to build and push the docker image to dockerhub
    • Run (pull the docker image from dockerhub) and have fun.

    Dockerhub

    Dockerhub offers unlimited public repositories free of charge for storing the docker images of your software projects. If you don't want to expose your software to the public, choose a private repository.

    Github

    GitHub is a Git repository hosting service that provides free repositories for the public, mostly Open Source development. I simply take my existing docker project, improved-docker-elasticsearch, a custom tailored elasticsearch instance.

    Travis CI

    The Travis CI integration is very simple. You only need to create a .travis.yml file that contains the build definition; it is explained in detail below.

    Prior conditions

    We want to use the docker service in Travis CI. Docker runs as root, so you need sudo permissions.

    sudo: required
    services:
      - docker
    

    Build the docker image

    I put the build instruction into before_install and check in the script section whether the image was built correctly.

    before_install:
      - docker build -t cinhtau/elasticsearch .
    script:
      - docker images cinhtau/elasticsearch
    

    Deploy to Dockerhub

    The last section contains the push instruction to dockerhub, executed only if the image was built correctly.

    after_success:
      - if [ "$TRAVIS_BRANCH" == "master" ]; then
        docker login -e="$DOCKER_EMAIL" -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD";
        docker push cinhtau/elasticsearch;
        fi
    

    The environment variables are set up in the repository settings within Travis CI (see screenshot). travis-ci-docker-variables

    Run it anywhere

    Having the docker image in dockerhub, I can use my elasticsearch image on any computer that can pull the image from the dockerhub repository and meets the requirements to run it. No need to build it locally anymore :-).

    tan@omega:~$ sudo docker pull cinhtau/elasticsearch
    Using default tag: latest
    latest: Pulling from cinhtau/elasticsearch
    8ad8b3f87b37: Already exists
    751fe39c4d34: Already exists
    b165e84cccc1: Already exists
    acfcc7cbc59b: Already exists
    04b7a9efc4af: Already exists
    b16e55fe5285: Already exists
    8c5cbb866b55: Already exists
    e4412b99da57: Pull complete
    60fa44913e1f: Pull complete
    593bcc8c9106: Pull complete
    b065e784dc32: Pull complete
    10cc1e0e4dd9: Pull complete
    093a531dbb6f: Pull complete
    Digest: sha256:c90986a7f3799cdabc7c62ef7f576ed97a3d6648fb5c80a984312b26ec0375ea
    Status: Downloaded newer image for cinhtau/elasticsearch:latest
    
  16. 2016-08-31 - Remove all exited docker images

    Remove all exited docker images

    Playing around with docker may leave a lot of exited containers. A one-line command cleans up your working environment. :-) Just use sudo docker ps -a | grep Exit | cut -d ' ' -f 1 | xargs sudo docker rm. Drop the sudo if you are root.

    A small demonstration:

    tan@omega:~/Sources/improved-docker-elasticsearch$ sudo docker ps -a
    CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS                        PORTS               NAMES
    23e8aac03da1        cinhtau/elasticsearch   "/docker-entrypoint.s"   9 minutes ago       Exited (130) 5 minutes ago                        prickly_stonebraker
    a5ff8a4c917b        cinhtau/elasticsearch   "/docker-entrypoint.s"   9 minutes ago       Exited (130) 9 minutes ago                        vinh
    0a4e13dcfe52        cinhtau/elasticsearch   "/docker-entrypoint.s"   11 minutes ago      Exited (130) 10 minutes ago                       suspicious_shaw
    632aa3ff3e8f        cinhtau/elasticsearch   "/docker-entrypoint.s"   13 minutes ago      Exited (130) 12 minutes ago                       berserk_dijkstra
    2f17d64ff7db        cinhtau/elasticsearch   "/docker-entrypoint.s"   17 minutes ago      Exited (130) 16 minutes ago                       tender_bohr
    19345357c2af        cinhtau/elasticsearch   "/docker-entrypoint.s"   18 minutes ago      Exited (0) 17 minutes ago                         elated_brattain
    00e077ac1d20        cinhtau/elasticsearch   "/docker-entrypoint.s"   22 minutes ago      Exited (130) 18 minutes ago                       desperate_goldstine
    ddf3a8e382a9        cinhtau/elasticsearch   "/docker-entrypoint.s"   28 minutes ago      Exited (130) 27 minutes ago                       modest_davinci
    a5ecec7eeeeb        cinhtau/elasticsearch   "/docker-entrypoint.s"   28 minutes ago      Exited (64) 28 minutes ago                        gloomy_lovelace
    9dbdc789a006        hello-world             "/hello"                 5 weeks ago         Exited (0) 5 weeks ago                            tiny_pasteur
    

    Remove it.

    tan@omega:~/Sources/improved-docker-elasticsearch$ sudo docker ps -a | grep Exit | cut -d ' ' -f 1 | xargs sudo docker rm
    23e8aac03da1
    a5ff8a4c917b
    0a4e13dcfe52
    632aa3ff3e8f
    2f17d64ff7db
    19345357c2af
    00e077ac1d20
    ddf3a8e382a9
    a5ecec7eeeeb
    9dbdc789a006
    

    Check if everything is gone.

    tan@omega:~/Sources/improved-docker-elasticsearch$ sudo docker ps -a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    
  17. 2016-08-29 - Remove docker containers from a dedicated image

    Remove docker containers from a dedicated image

    Show all containers

    vinh@omega:~/fo-elasticsearch> sudo docker ps -a | more
    CONTAINER ID        IMAGE                    COMMAND                   CREATED      STATUS            PORTS NAMES
    cdf0e35d5ff7        fo-elasticsearch:2.3.5   "/docker-entrypoint.s"    3 hours ago  Exited (1) 3 hours ago	sharp_archimedes
    33f5cdae22c4        fo-elasticsearch:2.3.5   "/docker-entrypoint.s"    3 hours ago  Exited (1) 3 hours ago	stupefied_bose
    55653d309704        fo-elasticsearch:2.3.5   "/docker-entrypoint.s"    3 hours ago  Exited (1) 3 hours ago	hopeful_lichterman
    2a2e6fa1f301        fo-elasticsearch:2.3.5   "/docker-entrypoint.s"    3 hours ago  Exited (1) 3 hours ago	mad_bose
    de360f421c59        fo-elasticsearch:2.3.5   "/docker-entrypoint.s"    3 hours ago  Exited (1) 3 hours ago	lonely_hopper
    

    Remove the image fo-elasticsearch:2.3.5 with the force option:

    vinh@omega:~/fo-elasticsearch> sudo docker rmi -f fo-elasticsearch:2.3.5
    Untagged: fo-elasticsearch:2.3.5
    Deleted: 769a7e694463af988bb4642372c224205494ea89da10194ecbd46fff44b05b80
    Deleted: 6399ea8e5d8811fd506a144c25d4d59165f5abda6471a2f3843ef625c0ffa70e
    Deleted: f13b31b07c7b84b1061481777de2e961fec1e63aa71b4a7f17ade8da0ed71d1f
    Deleted: e83558c706eb8ac7ebda0746637f173d244699897a6780381089a8319b93d13f
    Deleted: 220e1245516feab963d72a868e68a510b00098b82cdd11f838882f5ccc937f5e
    Deleted: 35508b4a2d87dadfb888c4738763057ea58c5f586b03223432503a0453b7e8bc
    Deleted: 610154ead36fd555e5be3ddab3e16e95e577ec1a76db1a4017245fc77b02e420
    Deleted: 27123f13de8b49d0843e86aada4a19853d920642d743fbf760c79c76e13d05f6
    Deleted: 2f5c85d694978c3090d5845fe0754098a0d3fba3752cd6d962bec67a03d8aad3
    Deleted: 3870261826c31850f44a54d0f49a960eaec95a3bf16da133f923a0468dc3fceb
    Deleted: e7fa446d863b19d8c543f8bf1e049a6fd9b6f04dfa28a56297a665ce41f29d61
    Deleted: 90b881a329356ba572805c8ef20be462f95d1596e4db6795a8a33735f03a4c5a
    Deleted: 23209cfa01ab5cf518a557a34c00e28e824b182e62312a71cde260dd5b69a6b4
    Deleted: 77f39095a9a6f70b9d04008a8f45d016175a622584199128b3b344222b626bb9
    Deleted: 05d62645704a41a1462ac5465744ad02b08c7e30f25530f2ceb94f7c20487ec7
    Deleted: 47d37d621283bf7c0bc5f06dadb19ef7c53509018e6112a74511620e61a79e14
    
  18. 2016-07-22 - Docker behind proxy with CNTLM

    Docker behind proxy with CNTLM

    Docker on windows in a corporate environment sits behind a proxy, therefore I use CNTLM for proxy authentication. This post demonstrates how to remove the default vm (virtualbox image) and create a new one with CNTLM as proxy. All commands were executed on the windows command prompt.

    Invoke docker-machine help

    C:\>docker-machine --help
    Usage: docker-machine [OPTIONS] COMMAND [arg...]
    Create and manage machines running Docker.
    Version: 0.5.6, build 61388e9
    Author:
      Docker Machine Contributors - <https://github.com/docker/machine>
    Options:
      --debug, -D                                           Enable debug mode
      -s, --storage-path "C:\Users\tknga\.docker\machine"   Configures storage path [$MACHINE_STORAGE_PATH]
      --tls-ca-cert                                         CA to verify remotes against [$MACHINE_TLS_CA_CERT]
      --tls-ca-key                                          Private key to generate certificates [$MACHINE_TLS_CA_KEY]
      --tls-client-cert                                     Client cert to use for TLS [$MACHINE_TLS_CLIENT_CERT]
      --tls-client-key                                      Private key used in client TLS auth [$MACHINE_TLS_CLIENT_KEY]
      --github-api-token                                    Token to use for requests to the Github API [$MACHINE_GITHUB_API_TOKEN]
      --native-ssh                                          Use the native (Go-based) SSH implementation. [$MACHINE_NATIVE_SSH]
      --bugsnag-api-token                                   BugSnag API token for crash reporting [$MACHINE_BUGSNAG_API_TOKEN]
      --help, -h                                            show help
      --version, -v                                         print the version
    Commands:
      active                Print which machine is active
      config                Print the connection config for machine
      create                Create a machine
      env                   Display the commands to set up the environment for the Docker client
      inspect               Inspect information about a machine
      ip                    Get the IP address of a machine
      kill                  Kill a machine
      ls                    List machines
      regenerate-certs      Regenerate TLS Certificates for a machine
      restart               Restart a machine
      rm                    Remove a machine
      ssh                   Log into or run a command on a machine with SSH.
      scp                   Copy files between machines
      start                 Start a machine
      status                Get the status of a machine
      stop                  Stop a machine
      upgrade               Upgrade a machine to the latest version of Docker
      url                   Get the URL of a machine
      version               Show the Docker Machine version or a machine docker version
      help                  Shows a list of commands or help for one command
    Run 'docker-machine COMMAND --help' for more information on a command.
    

    Remove the default machine

    C:\>docker-machine rm default
    About to remove default
    Are you sure? (y/n): y
    Successfully removed default
    

    Create new default

    C:\>docker-machine create -d virtualbox --engine-env HTTP_PROXY=http://10.0.2.2:3128 --engine-env HTTPS_PROXY=http://10.0.2.2:3128 default
    Running pre-create checks...
    (default) Copying C:\Users\tanfun\.docker\machine\cache\boot2docker.iso to C:\Users\tanfun\.docker\machine\machines\default\boot2docker.iso...
    (default) Creating VirtualBox VM...
    (default) Creating SSH key...
    (default) Starting the VM...
    (default) Waiting for an IP...
    Waiting for machine to be running, this may take a few minutes...
    Machine is running, waiting for SSH to be available...
    Detecting operating system of created instance...
    Detecting the provisioner...
    Provisioning with boot2docker...
    Copying certs to the local machine directory...
    Copying certs to the remote machine...
    Setting Docker configuration on the remote daemon...
    Checking connection to Docker...
    Docker is up and running!
    To see how to connect Docker to this machine, run: docker-machine env default
    

    After that you can run the hello-world image. docker-hello-world
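    A sketch of the follow-up (apply the output of docker-machine env to your shell first, as the tool suggests):

    docker-machine env default
    docker run hello-world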

  19. 2016-07-21 - View stdout of docker container

    View stdout of docker container

    https://docs.docker.com/compose/reference/logs/#logs

    Show the last 100 lines from container id dd81eb497f43:

    sudo docker logs --tail 100 dd81eb497f43
    

    Follow the logs (short and long version)

    # sudo docker logs -f dd81eb497f43
    sudo docker logs --follow dd81eb497f43
    
  20. 2016-07-21 - Start bash in docker container

    Start bash in docker container

    Replace the container id and there you go:

    sudo docker exec -i -t dd81eb497f43 /bin/bash
    

    Or use the image name:

    docker run -ti ubuntu:latest /bin/bash