Monday, October 17, 2016

VNC Viewer

I recently built a desktop for development in my company’s CORP network. By using VNC, I can access the same GNOME session even from home: I just connect to the VPN, start a VNC viewer, and resume what I was doing at the office.
My development desktop runs CentOS 6, and I installed the vnc-server RPM package. (TODO: Details about setting up VNC server.)
I tried TigerVNC Viewer, but it was a poor fit, mainly because of the F8 menu key: I use Eclipse, where F8 is used heavily while debugging. It also has no pop-up menu bar in full-screen mode, unlike VirtualBox.
RealVNC satisfies all my requirements for a VNC viewer:
  • A menu bar
  • Better configuration UI on Windows 7
  • I can write a script that starts the VNC viewer through an SSH tunnel and automatically puts the viewer into full-screen mode on my secondary display.
On the CORP network, I just click the VNC button on the Windows taskbar, type the password, and the viewer opens in full-screen mode on my secondary display.
#!/bin/bash
set -e

shutdown() {
  if [ -n "$tunnel_pid" ]; then
    kill $tunnel_pid
  fi
}

trap shutdown EXIT

ssh -N -L 5902:localhost:5902 mydev.desktop &
tunnel_pid=$!

~/apps/bin/VNC-Viewer-5.2.1-Windows-64bit.exe \
  --Monitor='\\.\Display2' \
  --FullScreen=1 \
  localhost:2
I also installed Cygwin on Windows so that I could use OpenSSH. The script cleans up the SSH tunnel when it exits.
At home, the network connection is slow, and that script fails because RealVNC times out. I could not find a way to make RealVNC wait longer, so I simply run the SSH tunnel and the viewer separately, in different GNU Screen windows inside a Cygwin terminal.
I chose JPEG encoding for better performance when I am at home.
Here is the script that uses an SSH control socket to stop the tunnel. %h is an SSH token that gets substituted with the host name SSH connects to; see the ControlPath option in ssh_config(5).
#!/bin/bash
set -e

SSH_HOST=bwang.desktop.mycompany.com
CTL_SOCK=/tmp/ssh_tunnel_%h.sock

shutdown() {
    ssh -S $CTL_SOCK -O stop $SSH_HOST
}

trap shutdown EXIT

if ssh -f -N -L 5902:localhost:5902 -M -S $CTL_SOCK -o ExitOnForwardFailure=yes $SSH_HOST; then
    /Applications/RealVNC/VNC\ Viewer.app/Contents/MacOS/vncviewer \
        -useaddressbook $SSH_HOST
else
    echo "Failed to start SSH tunnel!"
fi

Sunday, October 9, 2016

Don't set limit nofile to unlimited

After I set nofile to unlimited in /etc/security/limits.d/90-nproc.conf like this,
*          hard    nproc     unlimited
*          hard    nofile    unlimited

*          soft    nproc     unlimited
*          soft    nofile    unlimited
I could not boot into GNOME on my VM running CentOS 6.8 anymore. Even though I could switch to a virtual terminal with Ctrl-Alt-F2, I could not log in there either. I rebooted the VM from a live-CD ISO, mounted the file system, set nofile to 32K as shown below, and rebooted again; that fixed the problem.
*          -    nproc     unlimited
*          -    nofile    32768
Notes:
# find out the logical volume of the hard drive
fdisk -l

# mount the logical volume
mkdir /mnt/hd
mount /dev/mapper/Volume-lv_root /mnt/hd
vi /mnt/hd/etc/security/limits.d/90-nproc.conf

Monday, September 12, 2016

Gradle: show dependencies of a specified configuration

I am recording the method here in case I forget again how to show the dependencies of only one configuration, such as runtime.
  • Show dependencies for one configuration ONLY: ./gradlew dependencies --configuration runtime
  • Show a task’s arguments: ./gradlew help --task dependencies

Record Requests and Responses of HTTP Samplers in JMeter

  • Add “Simple Data Writer”
  • Give it a file name such as “result.xml”
  • Click the Configure button.
  • Check the options for:
    • Save As XML
    • Save URL
    • Save Response Data (XML)
  • Start the JMeter test
  • Check the XML file
  • If you stop JMeter and then start another run, the requests and responses of the new run are appended to the same XML file.
<testResults>
<httpSample t="17" lt="17" ts="1473705874608" s="true" lb="Profile Request" rc="200" rm="OK" tn="Profile (year) 1-1" dt="text" by="253" ng="1" na="2">
  <responseData class="java.lang.String">...</responseData>
  <java.net.URL>...</java.net.URL>
</httpSample>
</testResults>

Friday, September 9, 2016

Make Spark read Teradata directly.

Spark SQL supports reading from JDBC sources, but this paragraph in the latest 2.0 documentation is not really helpful:
The JDBC driver class must be visible to the primordial class loader on the client session and on all executors. This is because Java’s DriverManager class does a security check that results in it ignoring all drivers not visible to the primordial class loader when one goes to open a connection. One convenient way to do this is to modify compute_classpath.sh on all worker nodes to include your driver JARs.
I wrote this post to explain how to run Spark against a Teradata database:
  • in spark-shell and YARN mode
  • with spark-submit in YARN-cluster mode

spark-shell and YARN mode

...
--conf spark.executor.extraClassPath=./tdgssconfig-15.10.00.22.jar:./terajdbc4-15.10.00.22.jar \
--driver-class-path $LIB_DIR/tdgssconfig-15.10.00.22.jar:$LIB_DIR/terajdbc4-15.10.00.22.jar \
...
  • For spark.executor.extraClassPath, the path is the current directory ./, which is where YARN starts the executor.
  • For --driver-class-path, $LIB_DIR is the directory where the JDBC driver jars live; it is on the host where you run spark-shell.

spark-submit and YARN-cluster mode

...
--conf spark.executor.extraClassPath=./tdgssconfig-15.10.00.22.jar:./terajdbc4-15.10.00.22.jar \
--driver-class-path ./tdgssconfig-15.10.00.22.jar:./terajdbc4-15.10.00.22.jar \
--jars $LIB_DIR/tdgssconfig-15.10.00.22.jar,$LIB_DIR/terajdbc4-15.10.00.22.jar,<other jars>
...

Explanation

Two differences:
  • In spark-shell, you don’t have to put the Teradata JDBC driver jars in --jars, because --driver-class-path already points to them. Putting them in --jars doesn’t hurt. But in spark-submit, you do have to add them to --jars.
  • In spark-submit, --driver-class-path uses the current directory ./, not $LIB_DIR as in spark-shell.
Lost? Here is why:
  • When you run spark-submit in YARN-cluster mode, the Spark app driver actually runs in a YARN container, not on the host where you type “spark-submit”.
    • --jars causes the Teradata JDBC driver jars to be copied to the container directory where the Spark app driver (the YARN application master) runs.
    • --driver-class-path just declares that the driver needs the Teradata JDBC driver jars. Because the Spark app driver runs in a YARN container, this argument tells the driver JVM where to find them. Since the jars are copied into the YARN container’s directory, that container directory is the current directory from the driver JVM’s point of view.
    • spark-submit itself cannot resolve the JDBC jars from the ./ in --driver-class-path, so it will not copy them to the YARN container. That is why you have to use --jars to tell Spark where to find those jars and ship them to the container.
  • In spark-shell, the Spark app driver is started on the host where you type “spark-shell”, so you need to give the full path in --driver-class-path.
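Once the driver and the executors can see the Teradata JDBC jars, the read itself is ordinary Spark SQL JDBC code. Here is a minimal sketch for spark-shell (Spark 2.x, where spark is the SparkSession); the host, database, table name and credentials are placeholders, and com.teradata.jdbc.TeraDriver is the driver class shipped in the Teradata JDBC jar:
// Placeholders: td.example.com, mydb, mytable, myuser, mypassword
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:teradata://td.example.com/DATABASE=mydb")
  .option("driver", "com.teradata.jdbc.TeraDriver")
  .option("dbtable", "mytable")
  .option("user", "myuser")
  .option("password", "mypassword")
  .load()

// Quick check that the connection and classpath setup actually work.
df.printSchema()
df.show(10)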

Friday, May 27, 2016

Bring back google-chrome after upgrading to CentOS 6.8 and Chrome 51.

I don’t know which one is the root cause: upgrading to CentOS 6.8 or to Chrome 51. I used install_chrome.sh from http://chrome.richardlloyd.org.uk/ to install Google Chrome on my CentOS 6 VirtualBox VM. It worked very well until this upgrade; now when I run google-chrome, the window pops up, but it is almost completely black.
I used the following command to investigate the problem:
google-chrome --disable-plugins --disable-extensions --user-data-dir=/tmp/chrome-user-dir --enable-logging --log-level=0
  • disable all plugins and extensions
  • use a new user dir
  • enable logs
I also checked the Chrome process that has --type=gpu-process as a parameter:
$ ps -ef | grep chrome
bwang  1358  1284  1 10:58 pts/6    00:00:00 /opt/google/chrome/chrome --enable-features=... --disable-features=... --type=gpu-process --channel=1284.0.1688276239 --enable-logging --log-level=0 --window-depth=24 --user-data-dir=/tmp/chrome-user-dir --supports-dual-gpus=false --gpu-driver-bug-workarounds=4,54 --gpu-vendor-id=0x80ee --gpu-device-id=0xbeef --gpu-driver-vendor=Chromium --gpu-driver-version=1.9 --user-data-dir=/tmp/chrome-user-dir --enable-logging --log-level=0 --v8-natives-passed-by-fd --v8-snapshot-passed-by-fd
The log file /tmp/chrome-user-dir/chrome_debug.log shows
...
[2945:2945:0527/111600:ERROR:texture_manager.cc(2746)] [.CommandBufferContext.DisplayCompositor-0x3d53365d63c0]GL ERROR :GL_INVALID_ENUM : glTexImage2D: <- error from previous GL command
[23:23:0527/111600:WARNING:ipc_message_attachment_set.cc(57)] MessageAttachmentSet destroyed with unconsumed descriptors: 0/1
[2945:2945:0527/111600:ERROR:gles2_cmd_decoder.cc(2167)] [.CommandBufferContext.CompositorWorker-0x3d53365d6280]GL ERROR :GL_INVALID_ENUM : GLES2DecoderImpl::DoBindTexImage2DCHROMIUM: <- error from previous GL command
[2945:2945:0527/111600:ERROR:gles2_cmd_decoder.cc(2167)] [.CommandBufferContext.CompositorWorker-0x3d53365d6280]GL ERROR :GL_INVALID_VALUE : ScopedTextureBinder::dtor: <- error from previous GL command
...
It looks like google-chrome uses the GPU for acceleration, which fails in the VM. The solution is simple: google-chrome --disable-gpu brings Chrome back on my CentOS 6.8 VM.

Wednesday, May 18, 2016

How to create .epub and .mobi versions of the Gradle User Guide?

The Gradle User Guide is written in DocBook, and the Gradle build already produces single-page HTML and PDF versions. But I really want to load it onto my Kindle. Because the DocBook stylesheets support converting to epub and epub3, I decided to build it myself.
You need to install docbook-xsl. On Cygwin, I installed 1.77.1-1:
$ cygcheck -c | grep docbook
build-docbook-catalog        1.5-2              OK
docbook-xsl                  1.77.1-1           OK

$ cygcheck -l docbook-xsl | grep epub
/usr/share/sgml/docbook/xsl-stylesheets/epub/bin/dbtoepub
/usr/share/sgml/docbook/xsl-stylesheets/epub/bin/lib/docbook.rb
/usr/share/sgml/docbook/xsl-stylesheets/epub/bin/xslt/obfuscate.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub/docbook.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub/README
/usr/share/sgml/docbook/xsl-stylesheets/epub3/chunk.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/chunkfast.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/docbook-epub.css.xml
/usr/share/sgml/docbook/xsl-stylesheets/epub3/docbook.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/epub3-chunk-mods.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/epub3-element-mods.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/profile-chunk.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/profile-docbook.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/README
/usr/share/sgml/docbook/xsl-stylesheets/epub3/titlepage.templates.xml
/usr/share/sgml/docbook/xsl-stylesheets/epub3/titlepage.templates.xsl
You should read epub3/README, which describes the steps for building an epub eBook from DocBook. The command looks like this:
 xsltproc --stringparam base.dir ebook/OEBPS/ --xinclude /usr/share/sgml/docbook/xsl-stylesheets/epub3/chunk.xsl ../gradle/subprojects/docs/build/src/userguide.xml
One thing to pay attention to: you must keep the trailing slash in ebook/OEBPS/. The above command generates mimetype and META-INF in the directory ebook.
$ ls ebook
META-INF/  mimetype  OEBPS/
If you don’t append the trailing “/”, the command will not produce this layout under ebook.
To build the Gradle User Guide as an epub with DocBook, you need to do the following:
  • Add cols="N" to the <tgroup> elements in the XML files under ~/gradle/subprojects/docs/src/docs/userguide. Otherwise, you will encounter the error Error: CALS tables must specify the number of columns. Find the affected files with the grep below and set cols to the actual number of columns, e.g. cols="3" or cols="4".
    grep -R '<tgroup' ~/gradle/subprojects/docs/src/docs/userguide
    
  • You need to run the docs:userguide build first. The document includes a lot of sample code that is only pulled in during the build; if you use the source userguide.xml under gradle/subprojects/docs/src/docs/userguide/, you won’t see the sample code in the ebook.
  • After xsltproc, run zip -r -X ../gradle-user-guide.epub mimetype META-INF OEBPS inside the ebook directory.
  • If you want .mobi for Kindle, convert the epub file in Calibre.

Friday, May 13, 2016

How to make @timestamp using GMT when using Fluentd, Elasticsearch and Kibana?

My log is JSON, one line per record, written by a Node.js application; it has a field called “time” which is in GMT.
{ "req": {}, "time":"2016-05-12T19:18:38.123Z" }
I want to keep the timestamp in GMT in Kibana, but it was not as straightforward as I thought; it took me a couple of hours to make the timestamp work correctly across Fluentd, Elasticsearch and Kibana.
I use in_tail and fluent-plugin-elasticsearch to parse the log and load it into Elasticsearch, and I search the logs using Kibana.
Here is my fluentd config file.
<source>
  @type tail
  format json

  read_from_head true
  path <path>/debug.log
  pos_file /var/run/td-agent/pos/debug.log.pos

  keep_time_key true
  time_key time
  time_format "%FT%T.%L%z"

  refresh_interval 10s

  tag debug
</source>
<match debug>
  @type elasticsearch
  hosts                my-es-server-1,my-es-server-2

  logstash_format      true
  logstash_prefix        debug
  utc_index  true

  time_key  time
  time_key_format      %FT%T.%L%z
</match>
  • keep_time_key, time_key and time_format are all necessary in in_tail. Because the default time_key is time, Fluentd will always parse the timestamp from your JSON message; setting keep_time_key to true keeps the original time field as well.
    • If you don’t set keep_time_key, the time field will be removed, and the timestamp will be in the timezone of the host where td-agent is running.
    • If you don’t give time_format, the default time parser cannot handle this format because the time has milliseconds, and your @timestamp will be wrong.
  • In the elasticsearch output:
    • You need to set time_key. Fluentd copies time to @timestamp, so @timestamp has exactly the same UTC string as time.
    • time_key_format is used to parse the time and to generate the Logstash index name when logstash_format is true and utc_index is true, so an index name like debug-2016.05.12 matches the times in your log.
  • In Kibana, you might see the timestamp displayed in your local timezone, e.g. PDT. Go to “Settings -> Advanced -> dateFormat:tz” and change the default value “Browser” to “GMT”, so that all timestamps are shown as GMT.

Monday, May 9, 2016

Spark Cassandra Connector and DataFrame

When you write a DataFrame to a Cassandra table, be careful with SaveMode.Overwrite: in spark-cassandra-connector-1.6.0-M2, TRUNCATE $keyspace.$table will be executed. See the code in CassandraSourceRelation.scala.
I observed something weird when I used the following code to write a DataFrame to a Cassandra 2.1.8 cluster:
df.write
  .format(""org.apache.spark.sql.cassandra")
  .mode(SaveMode.Overwrite)
  .options(Map("table" -> table, "keyspace" -> keyspace))
  .save
After the scheduled Spark job finishes, the table appears empty in cqlsh when running select * from keyspace.table limit 10. I get the same result if I change the consistency level to QUORUM, or even ALL. After a while, though, the query starts returning the rows.
If I start the job manually from the command line, however, the query returns the results most of the time.
The CQL documentation for TRUNCATE says the consistency level must be set to ALL:
Note: The consistency level must be set to ALL prior to performing a TRUNCATE operation. All replicas must remove the data.
I don’t think spark-cassandra-connector changes the consistency level before calling TRUNCATE $keyspace.$table, and the default consistency level is LOCAL_QUORUM. That might be the root cause.
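If the consistency level really is the culprit, one knob to experiment with is the connector’s consistency settings. This is only a sketch for Spark 1.6 with spark-cassandra-connector: "cassandra-host" is a placeholder, spark.cassandra.output.consistency.level is a standard connector setting, and I have not verified whether the connector applies it to the TRUNCATE statement itself:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Sketch: raise the connector's write consistency for this job.
// Whether this also covers the TRUNCATE triggered by SaveMode.Overwrite
// needs to be verified against your connector version.
val conf = new SparkConf()
  .setAppName("write-to-cassandra")
  .set("spark.cassandra.connection.host", "cassandra-host")
  .set("spark.cassandra.output.consistency.level", "ALL")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
The same keys can also be passed on the command line with spark-submit --conf.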

Tuesday, May 3, 2016

How to resolve spark-cassandra-connector's Guava version conflict in spark-shell

In my post How to resolve spark-cassandra-connector Guava version conflicts in Yarn cluster mode, I explained how to resolve the Guava version issue in YARN cluster mode. This post covers how to do it in spark-shell.
First, when you start spark-shell with --master yarn, you actually run in yarn-client mode, and unfortunately my method for YARN cluster mode does not work there. You may still get an exception like this:
Caused by: java.lang.NoSuchMethodError: com.google.common.util.concurrent.Futures.withFallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureFallback;Ljava/util/concurrent/Executor;)Lcom/google/common/util/concurrent/ListenableFuture;
What’s wrong? If you log on to the data node and check launch_container.sh for your YARN application, you will find that guava-16.0.1.jar is the first entry in the classpath:
export CLASSPATH="$PWD/guava-16.0.1.jar:$PWD:$PWD/__spark__.jar:$HADOOP_CLIENT_CONF_DIR:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/*:$HADOOP_COMMON_HOME/lib/*:$HADOOP_HDFS_HOME/*:$HADOOP_HDFS_HOME/lib/*:$HADOOP_YARN_HOME/*:$HADOOP_YARN_HOME/lib/*:$HADOOP_MAPRED_HOME/*:$HADOOP_MAPRED_HOME/lib/*:$MR2_CLASSPATH"
Here is the trick: you need to add the Guava jar to --files in your command:
spark-shell \
  --master yarn \
  --driver-class-path <local path of guava-16.0.1.jar> \
  --conf spark.executor.extraClassPath=./guava-16.0.1.jar \
  --jars <local path of guava-16.0.1.jar> \
  --files <local path of guava-16.0.1.jar> \
  ...
Sounds weird? Try this test and you will understand why. Run the spark-shell command, and when you see the prompt, don’t do anything; instead, log on to the data nodes where your application’s Spark executors are running and look inside the application’s cache directory. What do you find?
# ls /grid/0/yarn/nm/usercache/bwang/appcache/application_1459869234031_5503/container_e45_1459869234031_5503_01_000004/
container_tokens                       launch_container.sh
default_container_executor_session.sh  __spark__.jar
default_container_executor.sh          tmp
Where are the jars listed in --jars? The answer: those jars are not copied until you run some RDD or DataFrame action in the spark-shell. Unfortunately, by then the executor JVM has already started, and it may have loaded the older version of Guava already.
If you add the Guava jar to --files, the jar is copied to the executor’s container up front, and guava-16.0.1.jar is chosen over the older version of Guava.
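To confirm which Guava actually wins, you can ask the JVM where it loaded the class from, on the driver and on the executors. This is a small sketch meant to be pasted into spark-shell; it only uses standard JDK reflection plus the sc that spark-shell provides:
// Which jar did Guava's Futures class come from, on the driver?
println(classOf[com.google.common.util.concurrent.Futures]
  .getProtectionDomain.getCodeSource.getLocation)

// And on the executors?
sc.parallelize(1 to 4).map { _ =>
  classOf[com.google.common.util.concurrent.Futures]
    .getProtectionDomain.getCodeSource.getLocation.toString
}.distinct.collect().foreach(println)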
Updates
Adding --files is not necessary for Spark 2.0.1, and in Spark 2.1 you cannot even start the Spark shell if you keep it. In Spark 2, all of the jars are already distributed when each executor’s JVM starts.