1. 2016-09-11 - NonStop SQL MX with IntelliJ or DataGrip

    NonStop SQL MX with IntelliJ or DataGrip

    NonStop SQL is a commercial relational database management system for the HP NonStop, designed for fault tolerance and scalability. The latest version of the product is SQL MX 3.2.1, released in February 2013. This post describes how to set up IntelliJ or DataGrip to work with SQL MX.

    General

    Basic steps:

    • Create NonStop SQL MX driver setup
    • Configure Connection

    Create NonStop SQL MX driver

    First we need to create a driver within IntelliJ, since the HP NonStop is rather exotic in common Java development. To do this, we need the JDBC driver for SQL MX from HP itself. The screenshot below shows the default settings. [Screenshot: Data Sources and Drivers dialog, adding the driver]
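
    For reference, the HP Type 4 JDBC driver usually ships as t4sqlmx.jar; to my knowledge the driver class and URL prefix are the following, but verify them against the documentation of your driver version:

    Driver class: com.tandem.t4jdbc.SQLMXDriver
    URL format:   jdbc:t4sqlmx://<host>:<port>/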

    Configure connection

    After you have successfully created the driver definition for the HP NonStop SQL MX, you can set up the data source. Keep in mind that data on the HP NonStop is highly critical; to avoid accidental changes, use a read-only connection. The scope of the data source is also important: you can assign it to the project or to the IDE, i.e. the data source is available in every project. Below you can see the default settings; replace the user and password with your own credentials. [Screenshot: data source settings in the Data Sources and Drivers dialog]
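
    As a sketch, a complete connection URL might look like the following; host, port, catalog and schema are placeholders for your environment (18650 is a commonly used MXCS port, but check yours), and the exact property syntax depends on the driver version:

    jdbc:t4sqlmx://10.0.0.1:18650/catalog=MYCAT;schema=MYSCHEMA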

    Query resultset

    If you query tables and column aliases are set, the column names are displayed in the output. [Screenshot: result set with column headers] If the alias for a column isn't set, DataGrip/IntelliJ displays anonymous. [Screenshot: result set without column headers] As a workaround you can execute the INVOKE statement, which shows the table definition. [Screenshot: INVOKE output]
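
    A minimal illustration, assuming a hypothetical table sales.customers with a column customer_id:

    -- alias set: the header "id" is displayed
    SELECT customer_id AS id FROM sales.customers;
    -- no alias set: DataGrip/IntelliJ may display "anonymous"
    SELECT customer_id FROM sales.customers;
    -- workaround: INVOKE shows the table definition
    INVOKE sales.customers;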

2. 2016-03-30 - Logging from HP NonStop to Elasticsearch cluster

    Logging from HP NonStop to Elasticsearch cluster

    This article demonstrates the fundamental milestones to get decent log reporting from the HP NonStop to an Elasticsearch cluster. With OSS, the HP NonStop offers a minimal Unix-like environment on top of the Guardian layer. The following sections cover the configuration on the HP NonStop (the sending party) and on the Linux server that runs Logstash and Elasticsearch (the receiving party). For clarity, we will also refer to the HP NonStop as Tandem.

    The scenario

    This article assumes a basic understanding of Logstash and HP NonStop OSS. The circumstances: my company has an HP NonStop (Itanium architecture). On the Tandem machine, several Tomcat web applications are running and logging. Viewing the log files with tail under OSS is a pain in the .. you know where :wink: . So the basic idea is to ship the log files to Elasticsearch and view them with Kibana. The HP NonStop isn't capable of running Logstash (problems with JRuby), logstash-forwarder or Filebeat (written in Go). There is, however, an unofficial logstash forwarder implementation on GitHub. This program was originally written for IBM AIX and, being plain Java, fits the purpose of running on the Itanium architecture.

    Getting started

    Before we begin, we need to create a self-signed SSL certificate, which is essential for lumberjack, the protocol of the logstash forwarder, and for the Logstash input configuration. Logstash accepts any certificate, including self-signed ones. To generate a certificate, we run the following command on the Linux server (the receiving party):

    openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout logstash-forwarder.key -out logstash-forwarder.crt -days 365

    This generates a key at logstash-forwarder.key and a certificate, valid for one year, at logstash-forwarder.crt. Both the machine running the logstash forwarder and the Logstash instance receiving the logs need these files on disk to verify the authenticity of messages, so we also have to distribute them to the Tandem (the sending party). The logstash forwarder additionally needs a Java keystore, which we create from the self-signed certificate:

    keytool -importcert -trustcacerts -file logstash-forwarder.crt -alias ca -keystore keystore.jks
    

    The command will ask for a password; just use the default changeit for simplicity. You may choose another password, but keep in mind to remember it.
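
    To verify that the certificate was imported under the alias ca, you can list the keystore contents; this assumes the keystore.jks created above:

    keytool -list -v -keystore keystore.jks -alias ca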

    Configure logstash

    Logstash, which runs on the Linux server, needs a lumberjack input configuration:

    input {
      lumberjack {
        port => 5400
        ssl_certificate => "/opt/logstash-2.2.1/logstash-forwarder.crt"
        ssl_key => "/opt/logstash-2.2.1/logstash-forwarder.key"
      }
    }
    

    We just chose the free port 5400 for simplicity. The output may be Elasticsearch or, for testing, just stdout:

    output {
        elasticsearch {
            # Logstash 2.x: hosts replaces the old host/port/protocol options
            hosts => ["10.24.62.120:9200"]
            index => "tandem-%{+YYYY.MM.dd}"
        }
        stdout {
            codec => rubydebug
        }
    }
    

    Of course, you can also apply custom filters, but for simplicity I leave them out of the equation.
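
    For illustration, a minimal filter could parse the Tomcat log lines before indexing; the grok pattern below is only a sketch that assumes ISO-8601 timestamps, so adjust it to your actual log format:

    filter {
      grok {
        # hypothetical pattern: timestamp, log level, rest of the line
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
      }
    }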

    The HP NonStop side

    The first obstacle under OSS is to set up the correct Java environment:

    export JAVA_HOME=/usr/tandem/java7.0
    export PATH=$PATH:$JAVA_HOME/bin
    
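    A quick sanity check should now print the version of the installed NonStop Java:

    java -version
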

    Allowing programs to use the TCP/IP stack is a special case on the NonStop and has to be configured:

    add_define =tcpip^process^name class=map file=\$ZKIP
    

    This assigns the TCP/IP process $ZKIP to the current OSS session, which allows us to talk to the Linux server on the outgoing side. You may have to replace the process name with the respective process name on your Tandem/HP NonStop. Download the latest release from the GitHub repository mentioned above and upload it to the HP NonStop.
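
    Before moving on, you can check that the define is in place; info_define is the OSS shell counterpart to add_define:

    info_define =tcpip^process^name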

    Configure the forwarder

    I put the SSL certificates in the same folder as the logstash forwarder. The forwarder needs a configuration that defines which files it should tail and where to forward them. An example:

    {
       "network": {
         "servers": [ "10.24.62.120:5400" ],
         "ssl certificate": "/opt/logstash-forwarder/logstash-forwarder.crt",
         "ssl key": "/opt/logstash-forwarder/logstash-forwarder.key",
         "ssl ca": "/opt/logstash-forwarder/keystore.jks",
         "timeout": 15
       },
       "files": [
         {
           "paths": [
             "/var/dev/log/tomcat-server/-*.log"
           ],
           "fields": { "type": "logs" }
         }, {
           "paths": [
             "/var/dev/log/java/*.log"
           ],
           "fields": { "type": "logs" }
         }
       ]
     }
    

    Start the forwarder

    After that we can start the Java logstash forwarder with the defined configuration:

    nohup java -jar logstash-forwarder-java-0.2.3.jar -config config > forwarder.log 2> error.log &
    

    On the receiving side, or in Kibana, you should see the incoming messages flying in.
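
    To cross-check on the Elasticsearch side, you can count the documents in the daily indices; host and index pattern follow from the output configuration above:

    curl "http://10.24.62.120:9200/tandem-*/_count?pretty"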

    Final steps

    After successfully testing the log forwarding, you may configure a new Pathway server to run the application automatically.

3. 2015-10-16 - Processes within HP NonStop

    Processes within HP NonStop

    This post explains the basic concept of a process on the Tandem. It also contains a custom process presentation (an "Aiellogram"), as told by Dennis Aiello, a tech trainer for Tandem computers, from my training on the HP premises. The main conceptual focus is how to keep a process fault tolerant.

    In computing, a process is an instance of a computer program that is being executed. It contains the program code and its current activity. Critical operating system processes, e.g. the disk process, run within the HP NonStop (previously Tandem) as a process pair spanning two CPUs:

    • The primary process (1 CPU)
    • The backup process (1 CPU)

    Any process that encounters a fault, e.g. a CPU crash, is automatically switched to the backup instance. When the backup has taken over, it becomes the primary and creates a new backup instance, in case the new primary should also crash. This can go on until every CPU is exhausted. In theory this might happen, but in reality, having e.g. 8 physical CPUs crash simultaneously or one after another is unlikely. A process on a Tandem has the following characteristics:

    • A process name (if none is given, one is automatically assigned by the OS) like $data1; the backup process has the same process name!
    • A process identifier in cpu,pin notation like 1,714 (CPU 1, PIN 714)
    • A priority (range 1-255; user range 1-199, system range 200-255), where a higher number means a higher priority
    • A code space and a data space (see diagram)
    • A process owner in group.user notation; the process user super.super is like root, the manager of the system, and is written numerically as (255,255)
    • A terminal from which the process was started
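
    If you want to inspect these attributes on a live system, the STATUS command at a TACL prompt accepts a process name or a cpu,pin pair; $data1 and 1,714 are the hypothetical examples from the list above:

    STATUS $data1
    STATUS 1,714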