Friday, May 27, 2016

Bring back google-chrome after upgrading to CentOS 6.8 and Chrome 51.

I don’t know which one is the root cause: upgrading to CentOS 6.8 or Chrome 51. I used install_chrome.sh from http://chrome.richardlloyd.org.uk/ to install Google Chrome on my CentOS 6 VirtualBox VM. It worked very well until this upgrade. Now when I run google-chrome, the window pops up, but it is almost completely black.
I used the following command to investigate the problem:
google-chrome --disable-plugins --disable-extensions --user-data-dir=/tmp/chrome-user-dir --enable-logging --log-level=0
  • disable all plugins and extensions
  • use a new user dir
  • enable logs
I also checked the Chrome process launched with --type=gpu-process:
$ ps -ef | grep chrome
bwang  1358  1284  1 10:58 pts/6    00:00:00 /opt/google/chrome/chrome --enable-features=... --disable-features=... --type=gpu-process --channel=1284.0.1688276239 --enable-logging --log-level=0 --window-depth=24 --user-data-dir=/tmp/chrome-user-dir --supports-dual-gpus=false --gpu-driver-bug-workarounds=4,54 --gpu-vendor-id=0x80ee --gpu-device-id=0xbeef --gpu-driver-vendor=Chromium --gpu-driver-version=1.9 --user-data-dir=/tmp/chrome-user-dir --enable-logging --log-level=0 --v8-natives-passed-by-fd --v8-snapshot-passed-by-fd
The log file /tmp/chrome-user-dir/chrome_debug.log shows
...
[2945:2945:0527/111600:ERROR:texture_manager.cc(2746)] [.CommandBufferContext.DisplayCompositor-0x3d53365d63c0]GL ERROR :GL_INVALID_ENUM : glTexImage2D: <- error from previous GL command
[23:23:0527/111600:WARNING:ipc_message_attachment_set.cc(57)] MessageAttachmentSet destroyed with unconsumed descriptors: 0/1
[2945:2945:0527/111600:ERROR:gles2_cmd_decoder.cc(2167)] [.CommandBufferContext.CompositorWorker-0x3d53365d6280]GL ERROR :GL_INVALID_ENUM : GLES2DecoderImpl::DoBindTexImage2DCHROMIUM: <- error from previous GL command
[2945:2945:0527/111600:ERROR:gles2_cmd_decoder.cc(2167)] [.CommandBufferContext.CompositorWorker-0x3d53365d6280]GL ERROR :GL_INVALID_VALUE : ScopedTextureBinder::dtor: <- error from previous GL command
...
It looks like google-chrome uses the GPU for acceleration, and the GL calls fail on this VM (the --gpu-vendor-id=0x80ee in the process arguments is VirtualBox’s virtual GPU). So the solution is simple: google-chrome --disable-gpu brings Chrome back on my CentOS 6.8 VM.
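To make the workaround stick, you can either alias the command or add the flag to the desktop launcher. This is only a sketch; the Exec path in the .desktop file is an assumption based on a standard Chrome install, so check yours first.

# Option 1: a per-user shell alias
echo "alias google-chrome='google-chrome --disable-gpu'" >> ~/.bashrc

# Option 2: append --disable-gpu to the Exec lines of the launcher
# (the binary path is assumed; verify it in the .desktop file before editing)
sudo sed -i 's|^Exec=/usr/bin/google-chrome-stable|& --disable-gpu|' /usr/share/applications/google-chrome.desktop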

Wednesday, May 18, 2016

How to create .epub and .mobi version of Gradle User Guide?

The Gradle User Guide is written in DocBook, and the Gradle build already produces single-page HTML and PDF versions. But I really want to load it onto my Kindle. Because DocBook supports conversion to epub and epub3, I wanted to build an ebook myself.
You need to install docbook-xsl. On Cygwin, I installed 1.77.1-1:
$ cygcheck -c | grep docbook
build-docbook-catalog        1.5-2              OK
docbook-xsl                  1.77.1-1           OK

$ cygcheck -l docbook-xsl | grep epub
/usr/share/sgml/docbook/xsl-stylesheets/epub/bin/dbtoepub
/usr/share/sgml/docbook/xsl-stylesheets/epub/bin/lib/docbook.rb
/usr/share/sgml/docbook/xsl-stylesheets/epub/bin/xslt/obfuscate.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub/docbook.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub/README
/usr/share/sgml/docbook/xsl-stylesheets/epub3/chunk.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/chunkfast.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/docbook-epub.css.xml
/usr/share/sgml/docbook/xsl-stylesheets/epub3/docbook.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/epub3-chunk-mods.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/epub3-element-mods.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/profile-chunk.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/profile-docbook.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/README
/usr/share/sgml/docbook/xsl-stylesheets/epub3/titlepage.templates.xml
/usr/share/sgml/docbook/xsl-stylesheets/epub3/titlepage.templates.xsl
You should read epub3/README, which describes the steps to build an epub eBook from DocBook. The command looks like this:
 xsltproc --stringparam base.dir ebook/OEBPS/ --xinclude /usr/share/sgml/docbook/xsl-stylesheets/epub3/chunk.xsl ../gradle/subprojects/docs/build/src/userguide.xml
One thing to pay attention to: you must keep the trailing slash in ebook/OEBPS/. The above command generates mimetype and META-INF in the ebook directory.
$ ls ebook
META-INF/  mimetype  OEBPS/
If you don’t append the "/", the command will create a directory named ebook/OEBPS instead.
To build the Gradle User Guide epub from DocBook, do the following (a combined sketch of the commands appears after the list):
  • You need to add cols="?" to the <tgroup> elements in the XML files under ~/gradle/subprojects/docs/src/docs/userguide. Otherwise, you will encounter the error Error: CALS tables must specify the number of columns. Search the XML files for <tgroup and set cols="3" or cols="4" to match each table.
    grep -R '<tgroup' ~/gradle/subprojects/docs/src/docs/userguide
    
  • You need to build docs:userguide first. Because the document contains a lot of sample code, the samples are only merged in during the build. If you use the source file gradle/subprojects/docs/src/docs/userguide/userguide.xml directly, you won’t see the sample code in the ebook.
  • After xsltproc finishes, just run zip -r -X ../gradle-user-guide.epub mimetype META-INF OEBPS inside the ebook directory.
  • If you want .mobi for Kindle, convert the epub file in Calibre.
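Putting the steps together, here is a rough end-to-end sketch. The Gradle task name, the working directory ~/ebook-build, and Calibre’s ebook-convert step are assumptions from my setup, so adjust the paths to yours.

# 1. Build docs:userguide first so the sample code is merged into the DocBook source
cd ~/gradle
./gradlew docs:userguide

# 2. Convert the generated DocBook to epub3 chunks (note the trailing slash on base.dir)
mkdir -p ~/ebook-build && cd ~/ebook-build
xsltproc --stringparam base.dir ebook/OEBPS/ --xinclude \
    /usr/share/sgml/docbook/xsl-stylesheets/epub3/chunk.xsl \
    ~/gradle/subprojects/docs/build/src/userguide.xml

# 3. Package the epub; -X drops extra file attributes from the archive
cd ebook
zip -r -X ../gradle-user-guide.epub mimetype META-INF OEBPS

# 4. Optional: convert to .mobi for Kindle using Calibre's command-line tool
ebook-convert ../gradle-user-guide.epub ../gradle-user-guide.mobi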

Friday, May 13, 2016

How to make @timestamp using GMT when using Fluentd, Elasticsearch and Kibana?

My log is a JSON one-liner output by a Node.js application; there is a field called “time” which is in GMT.
{ "req": {}, "time":"2016-05-12T19:18:38.123Z" }
I want to keep the timestamp in GMT in Kibana, but it is not as straightforward as I thought. It took me a couple of hours to make the timestamp work correctly with Fluentd, Elasticsearch and Kibana.
I use in_tail and fluent-plugin-elasticsearch to parse the log and load it into Elasticsearch, and I search the logs using Kibana.
Here is my fluentd config file.
<source>
  @type tail
  format json

  read_from_head true
  path <path>/debug.log
  pos_file /var/run/td-agent/pos/debug.log.pos

  keep_time_key true
  time_key time
  time_format "%FT%T.%L%z"

  refresh_interval 10s

  tag debug
</source>
<match debug>
  @type elasticsearch
  hosts                my-es-server-1,my-es-server-2

  logstash_format      true
  logstash_prefix        debug
  utc_index  true

  time_key  time
  time_key_format      %FT%T.%L%z
</match>
  • keep_time_key, time_key and time_format are all necessary in in_tail. The default value of time_key is already time, and with keep_time_key set to true, fluentd parses the timestamp from the JSON message and keeps the field.
    • If you don’t set keep_time_key, the time field will be removed, and the timestamp will be in the timezone of the host where td-agent is running.
    • If you don’t give time_format, the default time parser cannot handle this format because the time includes milliseconds, and your @timestamp will be wrong.
  • In the elasticsearch output:
    • You need to set time_key. Fluentd will copy time to @timestamp, so @timestamp will contain exactly the same UTC string as time (see the quick check after this list).
    • time_key_format is used to parse the time and generate the logstash index name when logstash_format is true and utc_index is true, so an index name like debug-2016.05.12 will match the times in your log.
  • In Kibana, you might still see the timestamp displayed in your local timezone, such as PDT. Go to “Settings -> Advanced -> dateFormat:tz” and change the default value “Browser” to “GMT” so that all timestamps are shown in GMT.
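Before touching Kibana, it is worth checking what actually landed in Elasticsearch. A quick sketch, assuming the daily index debug-2016.05.12 from the example above and Elasticsearch on its default port 9200:

# pull one document from the daily index and compare time with @timestamp
curl -s 'http://my-es-server-1:9200/debug-2016.05.12/_search?size=1&pretty'
# both fields should contain the same UTC string, e.g. 2016-05-12T19:18:38.123Z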

Monday, May 9, 2016

Spark Cassandra Connector and DataFrame

When you write a DataFrame to a Cassandra table, be careful with SaveMode.Overwrite. In spark-cassandra-connector-1.6.0-M2, TRUNCATE $keyspace.$table will be called first. See the code in CassandraSourceRelation.scala.
I observed something weird when I used the following code to write a data frame to a cluster of Cassandra 2.1.8:
df.write
  .format(""org.apache.spark.sql.cassandra")
  .mode(SaveMode.Overwrite)
  .options(Map("table" -> table, "keyspace" -> keyspace))
  .save
After the scheduled Spark job finishes, the table appears empty in cqlsh when running select * from keyspace.table limit 10. The results are the same if I change the consistency level to QUORUM, or even ALL. Only after some time does the query start returning results.
If I start the job manually from the command line, however, most of the time the query does return results.
If you check the CQL documentation for TRUNCATE, setting the consistency level to ALL is required:
Note: The consistency level must be set to ALL prior to performing a TRUNCATE operation. All replicas must remove the data.
I don’t think spark-cassandra-connector changes the consistency level before calling TRUNCATE $keyspace.$table; the default consistency level is LOCAL_QUORUM. That might be the root cause. A workaround I would try is sketched below.
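One workaround, purely a sketch and not something I have verified, is to truncate the table at CONSISTENCY ALL yourself before the job runs and let the Spark job append instead of overwrite (the host, keyspace and table names below are placeholders):

# truncate manually at consistency ALL before the scheduled Spark job starts
cqlsh my-cassandra-host -e "CONSISTENCY ALL; TRUNCATE mykeyspace.mytable;"
# then write the DataFrame with SaveMode.Append instead of SaveMode.Overwrite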

Tuesday, May 3, 2016

How to resolve spark-cassandra-connector's Guava version conflict in spark-shell

In my blog post How to resolve spark-cassandra-connector Guava version conflicts in Yarn cluster mode, I explained how to resolve the Guava version issue in Yarn cluster mode. This post covers how to do it in spark-shell.
The first thing to know is that when you start spark-shell with --master yarn, you actually run in yarn-client mode. Unfortunately, my method for Yarn cluster mode won’t work there; you may still get an exception like this:
Caused by: java.lang.NoSuchMethodError: com.google.common.util.concurrent.Futures.withFallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureFallback;Ljava/util/concurrent/Executor;)Lcom/google/common/util/concurrent/ListenableFuture;
What’s wrong? If you log on to a data node and check launch_container.sh for your Yarn application, you will find that guava-16.0.1.jar is already the first entry in the classpath:
export CLASSPATH="$PWD/guava-16.0.1.jar:$PWD:$PWD/__spark__.jar:$HADOOP_CLIENT_CONF_DIR:$HADOOP_CONF_DIR:$HADOOP_COMMON_HOME/*:$HADOOP_COMMON_HOME/lib/*:$HADOOP_HDFS_HOME/*:$HADOOP_HDFS_HOME/lib/*:$HADOOP_YARN_HOME/*:$HADOOP_YARN_HOME/lib/*:$HADOOP_MAPRED_HOME/*:$HADOOP_MAPRED_HOME/lib/*:$MR2_CLASSPATH"
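The older Guava that ends up being loaded comes from the Hadoop directories later in that classpath. To see what is actually sitting there, something like this helps (the paths vary by distribution, so treat these as examples):

# on a data node: look for the Guava jars shipped with Hadoop
ls $HADOOP_COMMON_HOME/lib/guava-*.jar 2>/dev/null
find /usr/lib /opt -name 'guava-*.jar' 2>/dev/null | head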
Here is the trick: you need to add the Guava jar to --files in your command:
spark-shell \
  --master yarn \
  --driver-class-path <local path of guava-16.0.1.jar> \
  --conf spark.executor.extraClassPath=./guava-16.0.1.jar \
  --jars <local path of guava-16.0.1.jar> \
  --files <local path of guava-16.0.1.jar> \
  ...
Sounds weird? Try this test and you will understand why. Run the spark-shell command, and when you see the prompt, don’t do anything; log on to the data nodes where your application’s Spark executors are running and look in the application’s cache. What will you find?
# ls /grid/0/yarn/nm/usercache/bwang/appcache/application_1459869234031_5503/container_e45_1459869234031_5503_01_000004/
container_tokens                       launch_container.sh
default_container_executor_session.sh  __spark__.jar
default_container_executor.sh          tmp
Where are the jars listed in --jars? The answer is that they are not copied until you trigger some RDD or DataFrame action in spark-shell. Unfortunately, the executor’s JVM has already started by then, and it may already have loaded the older version of Guava.
If you add the Guava jar to --files, the jar is copied into the executor’s container when it launches, and guava-16.0.1.jar will be chosen over the older version of Guava.
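You can verify this with the same test as before: start spark-shell with --files, and before running any action, list the container’s cache on a data node. A sketch reusing the cache layout shown above (the application and container ids will differ):

# the Guava jar should already be localized next to __spark__.jar before any action runs
ls /grid/0/yarn/nm/usercache/bwang/appcache/application_*/container_*/ | grep -i guava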
Updates
Adding --files is not necessary for Spark 2.0.1. For Spark 2.1, you cannot even start spark-shell if you keep it. In Spark 2, all jars are already distributed before each executor’s JVM starts.