1. 2017-04-24 - Rename multiple files; Tags: Rename multiple files

    Rename multiple files

    For my current migration from WordPress to Jekyll I used the WordPress exporter for Jekyll. It generated HTML files. I want to migrate them to Markdown, so I needed a quick solution to rename all HTML files to files with the Markdown file extension.

    Let’s first have a look at the directory structure:

    tan@omega:~/sources/my-awesome-site/_posts$ ll
    total 28
    drwxrwx--- 1 tan tan     0 Apr 23 21:10 ./
    drwxrwx--- 1 tan tan 24576 Apr 24 09:29 ../
    drwxrwx--- 1 tan tan     0 Apr 23 21:10 2015/
    drwxrwx--- 1 tan tan  4096 Apr 23 21:10 2016/
    drwxrwx--- 1 tan tan     0 Apr 23 21:10 2017/
    

    Let’s check the last 10 entries

    tan@omega:~/sources/my-awesome-site/_posts$ find . -name "*.html" -exec echo {} \; | tail
    ./2016/09/2016-09-14-localization-problem-while-deinstalling-oracle-11g-on-windows.html
    ./2016/09/2016-09-15-handling-logstash-input-multiline-codec.html
    ./2016/09/2016-09-19-reindex-data-in-elasticsearch.html
    ./2016/09/2016-09-20-amazing-video-for-an-amazing-song.html
    ./2016/09/2016-09-21-groups-of-groups-in-ansible.html
    ./2016/09/2016-09-21-secret-nations-tonight.html
    ./2016/09/2016-09-24-use-ansible-for-cluster-management.html
    ./2016/09/2016-09-25-correct-type-mapping-in-index-template-for-elasticsearch.html
    ./2016/09/2016-09-28-checking-for-running-port-on-windows-cmd.html
    ./2016/09/2016-09-28-monitor-process-and-used-ports-of-kibana.html
    

    Total number of affected files:

    tan@omega:~/sources/my-awesome-site/_posts$ find . -name "*.html" -exec echo {} \; | wc -l
    280
    

    Use Perl’s rename command with a regular expression:

    find . -name "*.html" -exec rename -v 's/\.html$/\.md/' {} \;
    

    The verbose option of rename shows the detailed rename action.

    ./2016/09/2016-09-15-handling-logstash-input-multiline-codec.html renamed as ./2016/09/2016-09-15-handling-logstash-input-multiline-codec.md
    ./2016/09/2016-09-19-reindex-data-in-elasticsearch.html renamed as ./2016/09/2016-09-19-reindex-data-in-elasticsearch.md
    ./2016/09/2016-09-20-amazing-video-for-an-amazing-song.html renamed as ./2016/09/2016-09-20-amazing-video-for-an-amazing-song.md
    ./2016/09/2016-09-21-groups-of-groups-in-ansible.html renamed as ./2016/09/2016-09-21-groups-of-groups-in-ansible.md
    ./2016/09/2016-09-21-secret-nations-tonight.html renamed as ./2016/09/2016-09-21-secret-nations-tonight.md
    ./2016/09/2016-09-24-use-ansible-for-cluster-management.html renamed as ./2016/09/2016-09-24-use-ansible-for-cluster-management.md
    ./2016/09/2016-09-25-correct-type-mapping-in-index-template-for-elasticsearch.html renamed as ./2016/09/2016-09-25-correct-type-mapping-in-index-template-for-elasticsearch.md
    ./2016/09/2016-09-28-checking-for-running-port-on-windows-cmd.html renamed as ./2016/09/2016-09-28-checking-for-running-port-on-windows-cmd.md
    ./2016/09/2016-09-28-monitor-process-and-used-ports-of-kibana.html renamed as ./2016/09/2016-09-28-monitor-process-and-used-ports-of-kibana.md
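
    If the Perl rename utility is not available, a plain bash loop with parameter expansion achieves the same result (a minimal sketch, same directory assumed):

    # strip the .html suffix and append .md for every match
    find . -name '*.html' -print0 | while IFS= read -r -d '' f; do
        mv -v "$f" "${f%.html}.md"
    done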
    
  2. 2016-11-17 - Delete zero byte files; Tags: Delete zero byte files

    Delete zero byte files

    If you are in the situation that a lot of zero-byte files exist in your current directory, a simple find command helps you to get rid of them.

    find . -name '*' -size 0 -print0 | xargs -0 rm
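
    If your find is GNU find, it can also match and delete empty files directly, without xargs (a minimal alternative sketch):

    # delete only regular files that are empty
    find . -type f -empty -delete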
    
  3. 2016-11-07 - Comment and uncomment files with sed; Tags: Comment and uncomment files with sed

    Comment and uncomment files with sed

    If you need to comment out a whole file, sed is very handy for commenting and uncommenting files.

    sed -i 's/^\([^#]\)/#\1/g' /etc/monit/elasticsearch
    

    To uncomment, remove the # at the beginning of each line:

    sed -i 's/^#//g' /etc/monit/elasticsearch
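
    Before editing in place, you may want to preview the result; dropping the -i flag prints the changed content to stdout instead of modifying the file (same example path as above):

    # dry run: show what the commented file would look like
    sed 's/^\([^#]\)/#\1/g' /etc/monit/elasticsearch | less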
    
  4. 2016-09-28 - Checking for running port on Windows cmd; Tags: Checking for running port on Windows cmd

    Checking for running port on Windows cmd

    Using the Windows command line is sometimes challenging, especially if you want to check whether a specific port is up and listening. This post demonstrates how.

    Usually in Linux, you would go for

    netstat -na | grep ":3128"
    

    But Windows has nothing like grep :-x . Since Windows XP, findstr has been available. It offers functionality similar to grep.

    netstat -ano | findstr ":3128"
    

    An excerpt from the help:

    C:\Users>netstat /?
     -a            Displays all connections and listening ports.
     -n            Displays addresses and port numbers in numerical form.
     -o            Displays the owning process ID associated with each connection.
    

    Some example output:

    C:\Users>netstat -ano | findstr 3128
      TCP    127.0.0.1:3128         0.0.0.0:0              LISTENING       6348
    

    The last column contains the process id (PID). The pid can be used with a filter in tasklist to retrieve the process name.

    C:\Users>tasklist /?
    TASKLIST [/S system [/U username [/P [password]]]]
             [/M [module] | /SVC | /V] [/FI filter] [/FO format] [/NH]
    Description:
        This tool displays a list of currently running processes on
        either a local or remote machine.
    Parameter List:
    ..
       /FI    filter           Displays a set of tasks that match a
                               given criteria specified by the filter.
    ..
    Filters:
        Filter Name     Valid Operators           Valid Value(s)
        -----------     ---------------           --------------------------
        PID             eq, ne, gt, lt, ge, le    PID value
    
    C:\Users>tasklist /FI "PID eq 6348"
    Image Name                     PID Session Name        Session#    Mem Usage
    ========================= ======== ================ =========== ============
    cntlm.exe                     6348 Console                    1      5'732 K
    

    If you have cygwin running on the Windows machine, you can stay with grep.

    $ netstat -na | grep :3128
      TCP    127.0.0.1:3128         0.0.0.0:0              LISTENING
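
    With Cygwin you can also extract the owning PID directly, mirroring the Linux pipeline (a small sketch; -o adds the PID column and awk picks the last field):

    $ netstat -ano | grep ':3128' | awk '{ print $NF }'
    6348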
    
  5. 2016-09-08 - Housekeeping of log files; Tags: Housekeeping of log files

    Housekeeping of log files

    Writing software also means writing application logs. Therefore log rotation or housekeeping is essential to free up the space used by old and unused log files. While Linux provides logrotate, you may run into situations where you aren’t root or a user with root permissions and therefore can’t use logrotate. A simple shell script also provides the essential cleanup.

    Cleanup Script

    Several important things to mention before using it:

    • You use this at your own risk :-)
    • This is a demo script; the real commands are commented out. Uncomment them if you want to use them.
    • Script was written for Bash v4.x
    • Algorithm efficiency: delete before compressing; there is no point in zipping something that will be deleted anyway.
    #!/usr/bin/env bash
    # ===
    LOG_DIR=$1
    ARCHIVE_DAYS=$2
    DELETE_DAYS=$3
    # ===
    do_help() {
        echo "Usage: $0 {log directory} {zip after days} {delete after days}"
        exit $1
    }
    do_check() {
        if [[ "$ARCHIVE_DAYS" = "" ]] ; then
          echo "number of days for archiving log files is missing"
          do_help 1
        fi
        if [[ "$DELETE_DAYS" = "" ]] ; then
          echo "number of days for deleting log files is missing"
          do_help 1
        fi
        if [[ $DELETE_DAYS -lt $ARCHIVE_DAYS ]] ; then
            echo "DELETE DAYS must be greater than ARCHIVE DAYS"
            exit 1
        fi
    }
    start() {
        echo -e "Delete files older than $DELETE_DAYS days: \n"
        find "$LOG_DIR" -type f -mtime +"$DELETE_DAYS" -print
        #find "$LOG_DIR" -type f -mtime +"$DELETE_DAYS" -print -delete
        echo
        echo -e "Archive files older than $ARCHIVE_DAYS days: \n"
        find "$LOG_DIR" -type f -mtime +"$ARCHIVE_DAYS" -print
        #find "$LOG_DIR" -type f ! -name \*.bz2 -mtime +"$ARCHIVE_DAYS" -print -exec bzip2 -q -f -9 {} \;
        echo
        df -h $LOG_DIR
        exit $?
    }
    if [[ -n "$LOG_DIR" ]] ; then
        do_check
        start
    else
        do_help 1
    fi
    

    Demonstration

    Display help if the mandatory argument (log directory) is missing

    www-data@alpha:~# ./log-cleanup.sh
    Usage: ./log-cleanup.sh {log directory} {zip after days} {delete after days}
    

    Exit if the archive-after-x-days argument is missing

    www-data@alpha:~# ./log-cleanup.sh /var/log/apache2/
    number of days for archiving log files is missing
    Usage: ./log-cleanup.sh {log directory} {zip after days} {delete after days}
    

    Exit if the delete-after-x-days argument is missing

    www-data@alpha:~# ./log-cleanup.sh /var/log/apache2/ 2
    number of days for deleting log files is missing
    Usage: ./log-cleanup.sh {log directory} {zip after days} {delete after days}
    

    Perform the cleanup: first delete, then compress

    www-data@alpha:~# ./log-cleanup.sh /var/log/apache2/ 2 4
    Delete files older than 4 days:
    /var/log/apache2/error.log.14.gz
    /var/log/apache2/access.log.12.gz
    /var/log/apache2/error.log.13.gz
    /var/log/apache2/error.log.10.gz
    ..
    Archive files older than 2 days:
    /var/log/apache2/error.log.7.gz
    /var/log/apache2/error.log.5.gz
    /var/log/apache2/error.log.9.gz
    /var/log/apache2/error.log.8.gz
    ..
    Filesystem         Size  Used Avail Use% Mounted on
    /dev/ploop12345p6   50G  7.2G   40G  16% /var/log
    

    Check that a reversed argument order is rejected

    www-data@alpha:~# ./log-cleanup.sh /var/log/apache2/ 14 7
    DELETE DAYS must be greater than ARCHIVE DAYS
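
    Once the commented find commands are activated, the script could be scheduled via cron. The schedule and paths below are just assumptions for illustration:

    # crontab entry: run every night at 01:30, zip after 2 days, delete after 14 days
    30 1 * * * $HOME/log-cleanup.sh /var/log/apache2/ 2 14 >> $HOME/log-cleanup.log 2>&1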
    
  6. 2016-09-06 - Using dictionaries in bash 4; Tags: Using dictionaries in bash 4

    Using dictionaries in bash 4

    Bash 4 supports dictionaries, hash tables or associative arrays. I needed that feature while writing a logstash script, working with environment variables in logstash itself. A simple demonstration follows.

    Check that the bash version is 4:

    tan@delta:~> bash --version
    GNU bash, version 4.2.46(1)-release (x86_64-redhat-linux-gnu)
    Copyright (C) 2011 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
    This is free software; you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
    

    An example excerpt from my script:

    # determine log environment depending on hostname
    declare -A servers
    servers=( ["omega"]="dev" ["delta"]="test" ["beta"]="ppd" ["alpha"]="prd" )
    export LOG_ENV="${servers[$(hostname)]}"
    # =====
    

    Declare a dictionary named servers:

    tan@delta:~> declare -A servers
    

    Put values into the dictionary:

    tan@delta:~> servers=( ["omega"]="dev" ["delta"]="test" ["beta"]="ppd" ["alpha"]="prd" )
    

    Retrieve a value from the dictionary by its key:

    tan@delta:~> export LOG_ENV="${servers[$(hostname)]}"
    tan@delta:~> echo $LOG_ENV
    test
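
    To inspect the whole dictionary, "${!servers[@]}" expands to all keys (a small illustration):

    # iterate over all keys and print key -> value
    for host in "${!servers[@]}"; do
        echo "$host -> ${servers[$host]}"
    done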
    
  7. 2016-06-01 - Retrieve process id from ps; Tags: Retrieve process id from ps

    Retrieve process id from ps

    List the example processes; the [f] bracket pattern excludes the grep command itself from the output:

    tan@omega:~/bin> ps -Af | grep '[f]ile-watchdog.sh start'
    tan 40502 1 0 12:23 pts/1 00:00:00 bash ./file-watchdog.sh start
    tan 40504 40502 0 12:23 pts/1 00:00:00 bash ./file-watchdog.sh start
    

    Show only the process ids

    tan@omega:~/bin> ps -Af | grep '[f]ile-watchdog.sh start' | awk '{ print $2}'
    40502
    40504
    

    Show first process

    tan@omega:~/bin> ps -Af | grep '[f]ile-watchdog.sh start' | awk '{ print $2}' | head -n 1
    40502
    

    Show last process

    tan@omega:~/bin> ps -Af | grep '[f]ile-watchdog.sh start' | awk '{ print $2}' | tail -n 1
    40504
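
    pgrep can shorten the pipeline; with -f it matches against the full command line (a hedged alternative, same script assumed):

    tan@omega:~/bin> pgrep -f 'file-watchdog.sh start'
    40502
    40504

    pgrep -of and pgrep -nf restrict the result to the oldest or newest matching process, similar to the head and tail variants above.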
    
  8. 2016-03-21 - Migrate /tmp to RAM storage; Tags: Migrate /tmp to RAM storage

    Migrate /tmp to RAM storage

    To migrate your /tmp directory to RAM storage, you can alter /etc/fstab:

    tmpfs	/tmp   tmpfs   defaults,noatime,nosuid,nodev,noexec,mode=1777,size=512M 0 0
    

    The usable size is limited to 512 MB. The change takes effect after a reboot.
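
    After the reboot, a quick check confirms that /tmp is backed by tmpfs:

    df -h /tmp
    mount | grep ' /tmp '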

  9. 2016-02-10 - Parsing output with multiple whitespace; Tags: Parsing output with multiple whitespace

    Parsing output with multiple whitespace

    This post demonstrates how to parse output whose columns are separated by multiple whitespace characters in the bash/shell.

    I have to implement some elasticsearch curator functions myself, since Python is not an option on my machine :-( . I query elasticsearch for the catalog of indices.

    vinh@cinhtau:~> curl -s http://localhost:9200/_cat/indices?v
    health status index                  pri rep docs.count docs.deleted store.size pri.store.size
    green  open   logstash-2016.02.06      5   1    1899524      1077536      4.4gb          2.2gb
    green  open   logstash-2016.02.05      5   1    3051521      1078468      6.1gb            3gb
           close  logstash-2016.02.04
           close  logstash-2016.02.03
    green  open   logstash-2016.02.09      5   1    3571320      1077284      6.1gb            3gb
    green  open   logstash-2016.02.08      5   1    3854980      1076828      8.3gb          4.1gb
    green  open   logstash-2016.02.07      5   1    1384753      1077256      3.5gb          1.7gb
    green  open   .marvel-es-2016.02.10    1   1     415332         2970    393.9mb        196.9mb
    green  open   .kibana                  1   1         53            4    245.3kb        122.1kb
    green  open   .marvel-es-2016.02.08    1   1     113514          850     97.4mb         48.7mb
    green  open   .marvel-es-2016.02.09    1   1     348231         2682    332.2mb          166mb
    green  open   logstash-2016.02.12      5   1    1623111            0      5.9gb          2.8gb
    green  open   logstash-2016.02.11      5   1    2748311        42212      5.9gb          2.9gb
    green  open   logstash-2016.02.10      5   1    4494718      1021304      8.3gb          4.1gb
    ..
    

    If you try cut with the delimiter ‘ ‘, it won’t work because of the multiple spaces between the status and the index name. In this case you can use awk with a regular expression field separator for one or more spaces, ' +':

    vinh@cinhtau:~> curl -s http://localhost:9200/_cat/indices | awk -F ' +' '{print $3}'
    logstash-2016.02.06
    logstash-2016.01.15
    logstash-2016.01.16
    logstash-2016.02.05
    logstash-2016.02.04
    logstash-2016.01.13
    logstash-2016.02.03
    logstash-2016.02.09
    logstash-2016.02.08
    logstash-2016.01.17
    logstash-2016.01.18
    logstash-2016.02.07
    .marvel-es-2016.02.10
    .kibana
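
    An alternative is to squeeze the repeated spaces with tr first, so that cut works again (a sketch, same endpoint as above):

    curl -s http://localhost:9200/_cat/indices | tr -s ' ' | cut -d ' ' -f3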
    
  10. 2015-12-23 - Append log entry to log file; Tags: Append log entry to log file

    Append log entry to log file

    Today a co-worker asked me how to get the value of a configuration entry from a java properties file. For this purpose cut does a splendid job.

    We cut the line with delimiter = and retrieve the second field.

    echo "logfile=/var/log/java/msl.log" | cut -f2 -d=
    /var/log/java/msl.log
    

    He needed the log filename to append a log message to it. Simply killing a Java process is not the desired way, but if you aren’t the vendor of that particular software, you go for it. The application itself does not or can’t write any details into the log file. That’s why he appends a log message to the log file himself. To stay compliant with my logstash configuration, the log format should be maintained. We use date to create a compliant timestamp format.

    echo "$(date +%Y-%m-%d\ %H:%M:%S),000 [Control-M]  INFO (control-m-agent) Shutdown connector"
    2015-12-22 15:47:42,000 [Control-M]  INFO (control-m-agent) Shutdown connector
    

    The whole code as used in a bash script:

    LOGFILE=$(grep logfile app.properties | cut -f2 -d=)
    echo "$(date +%Y-%m-%d\ %H:%M:%S),000 [Control-M]  INFO (control-m-agent) Shutdown connector" >> "$LOGFILE"
    
  11. 2015-12-20 - Using date in the shell; Tags: Using date in the shell

    Using date in the shell

    A small example of how to use the date command in the Linux shell. For the ISO format yyyy-MM-dd, use the format string %Y-%m-%d.

    pi@dojo:~$ echo $(date +%Y-%m-%d)
    2013-01-23
    

    With time

    pi@dojo:~$ echo $(date +%Y-%m-%d\ %H:%M:%S)
    2013-01-23 21:37:09
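
    GNU date also provides shorthands for these formats (assuming GNU coreutils):

    date +%F          # equivalent to %Y-%m-%d
    date -Iseconds    # full ISO 8601 timestamp, e.g. 2013-01-23T21:37:09+01:00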
    
  12. 2015-12-19 - Using a pidfile for control flow; Tags: Using a pidfile for control flow

    Using a pidfile for control flow

    I recently had to write some shell scripts that may be used as LSB-like init.d services. It was quite a good experience and a nice refresher of my bash scripting skills.

    One of the major requirements:

    • The application can only be started once. A second instance is not allowed.
    • To shutdown, we have to kill the process.

    Here are my basic steps.

    Define colors

    I define some colours to illustrate the importance or status of some text.

    COLOR_SUCCESS="\\033[1;32m"
    COLOR_FAILURE="\\033[1;31m"
    COLOR_WARNING="\\033[1;33m"
    COLOR_NORMAL="\\033[0;39m"
    

    Remember process id

    When starting the application, we need to remember the process id (pid) that has been assigned. This pid will be stored in a file, the pidfile, with the extension .pid. I recommend using a variable for the location of the pidfile.

    # place pid according to FHS in /var/run is not possible,
    # since it is not started as root
    pidfile="$SCRIPT_HOME/java_app.pid"
    

    If the application is started as root, put the pid under /var/run. If the application runs as a different user and is not started by root, you have to use a custom location.

    Write pidfile

    In the start function, execute the program.

    start() {
        echo -e "Start $name using classpath $CLASSPATH \n"
        "${JAVA_HOME}/bin/java" -Djava.library.path=bin  \
        -classpath $CLASSPATH  \
        net.cinhtau.Starter logback.xml > /dev/null 2> "${SCRIPT_HOME}/$name.err" &
        # Generate the pidfile from here. If we instead made the forked process
        # generate it there will be a race condition between the pidfile writing
        # and a process possibly asking for status.
        echo $! > $pidfile
    }
    

    The special parameter $! expands to the process ID of the most recently executed background (asynchronous) command. This pid is stored in the pidfile. See the Bash Beginners Guide for more information about special parameters.

    Check if process is still alive

    The status function is essential for checking whether the application, or more precisely the process behind the pid, is still alive.

    status() {
        if [ -f "$pidfile" ]; then
            pid=`cat "$pidfile"`
            if [ -e "/proc/$pid" ] ; then
                # process by this pid is running.
                # It may not be our pid, but that's what you get with just pidfiles.
                return 0
            else
                return 2 # program is dead but pid file exists
            fi
        else
            return 3 # program is not running
        fi
    }
    

    If the pidfile exists, we look for the pid under /proc, the process pseudo filesystem.

    Stop application based on pidfile

    To stop the application, we have to kill the process. If the application is successfully killed, we have to remove the pidfile.

    stop() {
        # Try a few times to kill TERM the program
        if status; then
            pid=`cat "$pidfile"`
            echo "Killing $name (pid $pid)"
            kill -HUP $pid
            # Wait for it to exit.
            for i in `seq 1 5`; do
                echo "Waiting $name (pid $pid) to die..."
                status || break
                sleep 1
            done
            if status ; then
                echo "$name stop failed; still running."
            else
                echo "$name stopped."
                rm $pidfile
            fi
        fi
    }
    

    Use pidfile for control flow

    This pidfile can now be used for the control flow, e.g. don’t start the application a second time, if the application is already running.

    echo_success() {
      echo -n -e $"[$COLOR_SUCCESS  OK  $COLOR_NORMAL]"
      echo -ne "\r"
      return 0
    }
    case "$1" in
        start)
            if [ -e $pidfile ] ; then
                pid=$(cat $pidfile)
                echo -n -e "Application Component is already running under $COLOR_WARNING $pid $COLOR_NORMAL \n"
            else
                start
            fi
        ;;
        stop)
            stop ;;
        force-stop) force_stop ;;
        status)
            status
            code=$?
            if [ $code -eq 0 ]; then
                echo_success && echo -e "\t $name is running, process `cat $pidfile`"
            else
                echo -e " \t $name is $COLOR_WARNING not running $COLOR_NORMAL"
            fi
            exit $code
        ;;
        restart)
            stop && start
        ;;
        *)
            echo "Usage: cinhtau.sh {start|stop|force-stop|status|restart}"
            exit 3
        ;;
    esac
    exit $?
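
    The force_stop action referenced in the case statement is not shown above; a minimal sketch (an assumption, sending SIGKILL) could look like this:

    force_stop() {
        # last resort: kill -9 the process and clean up the pidfile
        if status; then
            pid=`cat "$pidfile"`
            echo "Force-killing $name (pid $pid)"
            kill -KILL "$pid" && rm -f "$pidfile"
        fi
    }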
    
  13. 2015-12-19 - $var or ${var}?; Tags: $var or ${var}?
  14. 2015-12-19 - Clear screen; Tags: Clear screen

    Clear screen

    Working in terminals or CLI interfaces can be a messy thing. From time to time it is nice to have a clear screen. This little post shows how to do that.

    The command clear is a wonderful way to clear the previous output. On Windows in cmd it is cls. One of the main advantages of working with different people is the exchange of knowledge: a consultant hinted that CTRL + L is the common shortcut for all terminals. That way is much faster :smile:.

  15. 2015-11-12 - Create help output with echo in shell scripts; Tags: Create help output with echo in shell scripts

    Create help output with echo in shell scripts

    Writing bash (shell) scripts is sometimes necessary to automate little tasks and ease maintenance. Providing a good help text is also a must for the other admins who have to use the script. This post illustrates a small example.

    The following program checks for three essential commands. If they aren’t available (installed), it displays a message and shows all options in an example function. With the -e option, echo interprets backslash escapes, so the example below can print newlines with \n.

    #!/bin/bash
    # Usage information
    displayhelp () {
        echo -e "\nUsage: `basename $0` [options] input_files_or_directories ...\n"
        echo -e "OPTIONS:\n"
        echo " -h help ;)"
        echo " -d target directory (directory structure of input_directory is preserved)"
        echo -e " -r target bitrate"
        exit 0
    }
    # Required programs
    OGGINFO=`which ogginfo`
    OGGDEC=`which oggdec`
    LAME=`which lame`
    if [ "$OGGINFO"="" -o "$OGGDEC"="" -o "$LAME"="" ]; then
        echo -e "\nERROR: ogginfo, oggdec and lame are required!\n"
        displayhelp
        exit 0
    fi
    

    The output

    root@pelion:~# ./help-demo.sh
    ERROR: ogginfo, oggdec and lame are required!
    Usage: help-demo.sh [options] input_files_or_directories ...
    OPTIONS:
     -h help ;)
     -d target directory (directory structure of input_directory is preserved)
     -r target bitrate
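
    To actually parse the options advertised in the help text, getopts combines nicely with such a help function (a minimal sketch; the variable names are assumptions):

    while getopts "hd:r:" opt; do
        case $opt in
            h) displayhelp ;;
            d) TARGET_DIR=$OPTARG ;;
            r) BITRATE=$OPTARG ;;
            *) displayhelp ;;
        esac
    done
    shift $((OPTIND-1))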
    
  16. 2015-11-03 - Find and delete files older than x days; Tags: Find and delete files older than x days

    Find and delete files older than x days

    On Linux you have powerful options with the find command. This post demonstrates how to find files older than 14 days (2 weeks) and remove them. Of course you can choose the number of days as you like.

    We search for files by their modification time. Display the files:

    find . -mtime +14 -exec echo {} \;
    

    Remove the files (with the force option, in case rm is aliased to interactive mode and you would have to reply with y):

    find . -mtime +14 -exec rm -f {} \;
    

    Combined command with multiple -exec portions:

    find . -mtime +14 -exec echo {} \; -exec rm -f {} \;
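
    GNU find can also delete directly, which avoids spawning rm for every single file (restrict the match to regular files with -type f if directories should be kept):

    find . -type f -mtime +14 -print -delete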
    
  17. 2015-10-26 - Permission Calculator; Tags: Permission Calculator

    Permission Calculator

    http://permissions-calculator.org/ is a nice web application to calculate octal permissions for chmod.

    • The basic chmod command allows changing the access rights for user + group + others.
    • The numeric computation consists of 2^2=4 (read), 2^1=2 (write), 2^0=1 (execute) = rwx

    The current permission scheme in numeric and symbolic values:

    #   Permission                               Symbolic
    7   read, write and execute (4 + 2 + 1 = 7)  `rwx`
    6   read and write (4 + 2 = 6)               `rw-`
    5   read and execute (4 + 1 = 5)             `r-x`
    4   read only (4)                            `r--`
    3   write and execute (2 + 1 = 3)            `-wx`
    2   write only (2)                           `-w-`
    1   execute only (1)                         `--x`
    0   none (0)                                 `---`
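
    For example, 750 grants the owner full access, the group read and execute, and others nothing (the script name is just a placeholder):

    chmod 750 deploy.sh    # rwxr-x--- : 7=rwx (owner), 5=r-x (group), 0=--- (others)
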
  18. 2015-09-29 - Using watch to monitor processes; Tags: Using watch to monitor processes

    Using watch to monitor processes

    Using Linux you can monitor processes with watch. It executes a program periodically, showing the output in full screen.

    This example checks every 10 seconds whether the logstash process is alive. We exclude the grep command itself.

    watch -n 10 "ps auxww | grep \[l\]ogstash"
    

    To monitor the PostgreSQL database, i.e. every process that starts with postgres:

    watch "ps auxww | grep ^postgres"
    

    Example output

    $ ps auxww | grep ^postgres
    postgres   960  0.0  1.1  6104 1480 pts/1    SN   13:17   0:00 postgres -i
    postgres   963  0.0  1.1  7084 1472 pts/1    SN   13:17   0:00 postgres: writer process
    postgres   965  0.0  1.1  6152 1512 pts/1    SN   13:17   0:00 postgres: stats collector process
    postgres   998  0.0  2.3  6532 2992 pts/1    SN   13:18   0:00 postgres: tgl runbug 127.0.0.1 idle
    postgres  1003  0.0  2.4  6532 3128 pts/1    SN   13:19   0:00 postgres: tgl regression [local] SELECT waiting
    postgres  1016  0.1  2.4  6532 3080 pts/1    SN   13:19   0:00 postgres: tgl regression [local] idle in transaction
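
    The -d option additionally highlights differences between successive updates, which makes changing values easy to spot:

    watch -d -n 10 "ps auxww | grep \[l\]ogstash"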
    
  19. 2015-08-26 - Find process id (pid) of dedicated server instance; Tags: Find process id (pid) of dedicated server instance

    Find process id (pid) of dedicated server instance

    This post demonstrates how to retrieve the pid of a dedicated JBoss application server that runs in parallel with other application servers on a Linux host. The combination of ps, grep and awk allows an operator-friendly output.

    Searching for the JBoss server named dev:

    ps -ef | grep 'dev'
    

    Options explained

    • -e = select all processes, including those of other users and those without a controlling terminal.
    • -f = Display the uid, pid, parent pid, recent CPU usage, process start time, controlling tty, elapsed CPU usage, and the associated command.

    The output will also contain the grep command itself. To avoid that, we can exclude it with the bracket pattern:

    grep '[d]ev'
    

    Using awk to print a friendly output. The field numbers may vary depending on your output.

    tan@cinhtau:~> ps -ef | grep '[d]ev' | awk '{ print $2 " " $8 " " $9 " " $NF}'
    13606 /usr/lib/jvm/jdk1.8.0_51/bin/java -D[Server:dev] org.jboss.as.server
    

    awk explained:

    • $2 contains pid (the information I want)
    • $8 contains command
    • $9 contains argument (the server argument)
    • $NF print last field (the awk builtin variable NF gives you the total number of fields in a record)

    The given pid 13606 is used to access the JVM with a CLI (command line interface) for JMX. The purpose is Monitoring and Management Using JMX Technology.

    The Java virtual machine (Java VM ) has built-in instrumentation that enables you to monitor and manage it using the Java Management Extensions (JMX) technology. These built-in management utilities are often referred to as out-of-the-box management tools for the Java VM. You can also monitor any appropriately instrumented applications using the JMX API.

    (Java SE Documentation)

    cjmx is a command line JMX client intended to be used when graphical tools (e.g., JConsole, VisualVM) are unavailable.

    java -jar path/to/cjmx.jar [PID]
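
    Both steps can be combined by capturing the pid in a variable first (a sketch, reusing the pipeline from above):

    PID=$(ps -ef | grep '[d]ev' | awk '{ print $2 }')
    java -jar path/to/cjmx.jar "$PID"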