Monday, September 12, 2016

Gradle: show the dependencies of a specified configuration

I record the method here in case I forget again how to show the dependencies for ONLY one configuration, such as runtime.
  • Show dependencies for one configuration ONLY: ./gradlew dependencies --configuration runtime
  • Show a task’s arguments: ./gradlew help --task dependencies

Record Requests and Responses of HTTP Samplers in JMeter

  • Add a “Simple Data Writer”.
  • Give it a file name like “result.xml”.
  • Click the Configure button.
  • Check the boxes for
    • Save As XML
    • Save URL
    • Save Response Data (XML)
  • Start the JMeter test.
  • Check the XML file.
  • If you stop JMeter and then start another run, the requests and responses from the new run will be appended to the XML file.
<testResults>
<httpSample t="17" lt="17" ts="1473705874608" s="true" lb="Profile Request" rc="200" rm="OK" tn="Profile (year) 1-1" dt="text" by="253" ng="1" na="2">
  <responseData class="java.lang.String">...</responseData>
  <java.net.URL>...</java.net.URL>
</httpSample>
</testResults>
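
If you want to pull the requests and responses out of result.xml programmatically instead of eyeballing it, a small sketch like the following works. This is just a sketch: it assumes the scala-xml library is on the classpath and that the file keeps the layout shown above.

import scala.xml.XML

// Load the Simple Data Writer output and print the label, response code,
// URL and response body of every httpSample.
val results = XML.loadFile("result.xml")
for (sample <- results \ "httpSample") {
  val label = (sample \ "@lb").text
  val code  = (sample \ "@rc").text
  val url   = (sample \ "java.net.URL").text
  val body  = (sample \ "responseData").text
  println(s"$label [$code] $url")
  println(body)
}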

Friday, September 9, 2016

Make Spark read Teradata directly.

Spark SQL supports reading from JDBC data sources, but this paragraph in the latest 2.0 documentation is not really helpful:
The JDBC driver class must be visible to the primordial class loader on the client session and on all executors. This is because Java’s DriverManager class does a security check that results in it ignoring all drivers not visible to the primordial class loader when one goes to open a connection. One convenient way to do this is to modify compute_classpath.sh on all worker nodes to include your driver JARs.
I wrote this post to explain how to run against a Teradata database:
  • in spark-shell on YARN (client mode)
  • with spark-submit in YARN-cluster mode

spark-shell and YARN mode

...
--conf spark.executor.extraClassPath=./tdgssconfig-15.10.00.22.jar:./terajdbc4-15.10.00.22.jar \
--driver-class-path $LIB_DIR/tdgssconfig-15.10.00.22.jar:$LIB_DIR/terajdbc4-15.10.00.22.jar \
...
  • For spark.executor.extraClassPath, the path is the current directory ./, which is where YARN starts the executor.
  • For --driver-class-path, $LIB_DIR is the directory where the JDBC driver jars live on the host where you run spark-shell.

spark-submit and YARN-cluster mode

...
--conf spark.executor.extraClassPath=./tdgssconfig-15.10.00.22.jar:./terajdbc4-15.10.00.22.jar \
--driver-class-path ./tdgssconfig-15.10.00.22.jar:./terajdbc4-15.10.00.22.jar \
--jars $LIB_DIR/tdgssconfig-15.10.00.22.jar,$LIB_DIR/terajdbc4-15.10.00.22.jar,<other jars>
...

Explanation

Two differences:
  • In spark-shell, you don’t have to put the Teradata JDBC driver jars in --jars, because --driver-class-path already covers them. Putting them in --jars anyway doesn’t hurt. But in spark-submit, you have to add them to --jars.
  • In spark-submit, --driver-class-path uses the current directory ./, not $LIB_DIR as in spark-shell.
Lost? Here is why:
  • When you run spark-submit in YARN-cluster mode, the Spark app driver actually runs in a YARN container, not on the host where you type “spark-submit”.
    • --jars makes Spark copy the Teradata JDBC driver jars to the container directory where the Spark app driver (the YARN ApplicationMaster) runs.
    • --driver-class-path just declares that you need the Teradata JDBC driver jars. Because the Spark app driver runs in a YARN container, this argument tells the driver JVM where to find the JDBC jars; since the jars are copied into the YARN container’s directory, the container directory is the current directory from the driver JVM’s point of view.
    • spark-submit itself cannot find the JDBC jars at the ./ given in --driver-class-path, so they won’t be shipped to the YARN container on their own. That is why you have to use --jars to tell Spark where those jars are so it can ship them to the container.
  • In spark-shell, the Spark app driver is started on the host where you type “spark-shell”, so you need to give the full paths in --driver-class-path.
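
With the classpaths set up as above, the read code itself is the same in both modes. Here is a minimal sketch; the host, database, table and credentials are placeholders, and on Spark 2.0 you would use spark.read instead of sqlContext.read.

val df = sqlContext.read
  .format("jdbc")
  .option("url", "jdbc:teradata://your-td-host/DATABASE=your_db")  // placeholder host and database
  .option("driver", "com.teradata.jdbc.TeraDriver")
  .option("dbtable", "your_table")                                 // placeholder table
  .option("user", "your_user")                                     // placeholder credentials
  .option("password", "your_password")
  .load()

df.printSchema()
df.show(10)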

Friday, May 27, 2016

Bring back google-chrome after upgrading to CentOS 6.8 and Chrome 51.

I don’t know which one is the root cause: upgrading to CentOS 6.8 or to Chrome 51. I used install_chrome.sh from http://chrome.richardlloyd.org.uk/ to install Google Chrome on my CentOS 6 VirtualBox VM. It worked very well until this upgrade. Now when I run google-chrome, the window pops up, but it is almost completely black.
I used the following command to investigate the problem:
google-chrome --disable-plugins --disable-extensions --user-data-dir=/tmp/chrome-user-dir --enable-logging --log-level=0
  • disable all plugins and extensions
  • use a new user dir
  • enable logs
I also checked the chrome process that has --type=gpu-process as a parameter:
$ ps -ef | grep chrome
bwang  1358  1284  1 10:58 pts/6    00:00:00 /opt/google/chrome/chrome --enable-features=... --disable-features=... --type=gpu-process --channel=1284.0.1688276239 --enable-logging --log-level=0 --window-depth=24 --user-data-dir=/tmp/chrome-user-dir --supports-dual-gpus=false --gpu-driver-bug-workarounds=4,54 --gpu-vendor-id=0x80ee --gpu-device-id=0xbeef --gpu-driver-vendor=Chromium --gpu-driver-version=1.9 --user-data-dir=/tmp/chrome-user-dir --enable-logging --log-level=0 --v8-natives-passed-by-fd --v8-snapshot-passed-by-fd
The log file /tmp/chrome-user-dir/chrome_debug.log shows
...
[2945:2945:0527/111600:ERROR:texture_manager.cc(2746)] [.CommandBufferContext.DisplayCompositor-0x3d53365d63c0]GL ERROR :GL_INVALID_ENUM : glTexImage2D: <- error from previous GL command
[23:23:0527/111600:WARNING:ipc_message_attachment_set.cc(57)] MessageAttachmentSet destroyed with unconsumed descriptors: 0/1
[2945:2945:0527/111600:ERROR:gles2_cmd_decoder.cc(2167)] [.CommandBufferContext.CompositorWorker-0x3d53365d6280]GL ERROR :GL_INVALID_ENUM : GLES2DecoderImpl::DoBindTexImage2DCHROMIUM: <- error from previous GL command
[2945:2945:0527/111600:ERROR:gles2_cmd_decoder.cc(2167)] [.CommandBufferContext.CompositorWorker-0x3d53365d6280]GL ERROR :GL_INVALID_VALUE : ScopedTextureBinder::dtor: <- error from previous GL command
...
It looks like google-chrome uses the GPU for acceleration, so the solution is simple: google-chrome --disable-gpu brings Chrome back on my CentOS 6.8 VM.

Wednesday, May 18, 2016

How to create .epub and .mobi versions of the Gradle User Guide?

The Gradle User Guide is written in DocBook, and the Gradle build already produces single-page HTML and PDF versions. But I really want to load it onto my Kindle. Because DocBook supports conversion to epub and epub3, I decided to build it myself.
You need to install docbook-xsl. On Cygwin, I installed 1.77.1-1:
$ cygcheck -c | grep docbook
build-docbook-catalog        1.5-2              OK
docbook-xsl                  1.77.1-1           OK

$ cygcheck -l docbook-xsl | grep epub
/usr/share/sgml/docbook/xsl-stylesheets/epub/bin/dbtoepub
/usr/share/sgml/docbook/xsl-stylesheets/epub/bin/lib/docbook.rb
/usr/share/sgml/docbook/xsl-stylesheets/epub/bin/xslt/obfuscate.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub/docbook.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub/README
/usr/share/sgml/docbook/xsl-stylesheets/epub3/chunk.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/chunkfast.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/docbook-epub.css.xml
/usr/share/sgml/docbook/xsl-stylesheets/epub3/docbook.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/epub3-chunk-mods.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/epub3-element-mods.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/profile-chunk.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/profile-docbook.xsl
/usr/share/sgml/docbook/xsl-stylesheets/epub3/README
/usr/share/sgml/docbook/xsl-stylesheets/epub3/titlepage.templates.xml
/usr/share/sgml/docbook/xsl-stylesheets/epub3/titlepage.templates.xsl
You’d better read epub3/README, which describes the steps to build an epub eBook from DocBook. The command looks like this:
 xsltproc --stringparam base.dir ebook/OEBPS/ --xinclude /usr/share/sgml/docbook/xsl-stylesheets/epub3/chunk.xsl ../gradle/subprojects/docs/build/src/userguide.xml
One thing needs extra attention: you must keep the trailing slash in ebook/OEBPS/. The above command then generates mimetype and META-INF in the directory ebook:
$ ls ebook
META-INF/  mimetype  OEBPS/
If you don’t append the “/”, base.dir is treated as a plain filename prefix rather than a directory, and the output won’t be laid out under ebook/OEBPS as shown above.
To build Gradle User Guide using docbook to epub, you need to do as follows:
  • You need to add cols="?" to the <tgroup> elements in the xml files under ~/gradle/subprojects/docs/src/docs/userguide; otherwise you will hit the error “Error: CALS tables must specify the number of columns”. You can find the affected files with the grep below and add cols="3" or cols="4" to match each table.
    grep -R '<tgroup' ~/gradle/subprojects/docs/src/docs/userguide
    
  • You need to run the docs:userguide build first. The document contains a lot of sample code that is only injected during the build; if you use the source userguide.xml from gradle/subprojects/docs/src/docs/userguide/, you won’t see the sample code in the ebook.
  • After xsltproc finishes, just run zip -r -X ../gradle-user-guide.epub mimetype META-INF OEBPS inside ebook.
  • If you want .mobi for Kindle, convert the epub file in Calibre.

Friday, May 13, 2016

How to make @timestamp using GMT when using Fluentd, Elasticsearch and Kibana?

My log is a JSON one-liner output by a Node.js application; there is a field called “time”, which is a GMT timestamp.
{ "req": {}, "time":"2016-05-12T19:18:38.123Z" }
I want to keep the timestamp in GMT in Kibana, but it is not as straightforward as I thought. It took me a couple of hours to make the timestamp work correctly across Fluentd, Elasticsearch and Kibana.
I use in_tail and fluent-plugin-elasticsearch to parse the log and load it into Elasticsearch, and I search the logs using Kibana.
Here is my fluentd config file:
<source>
  @type tail
  format json

  read_from_head true
  path <path>/debug.log
  pos_file /var/run/td-agent/pos/debug.log.pos

  keep_time_key true
  time_key time
  time_format "%FT%T.%L%z"

  refresh_interval 10s

  tag debug
</source>
<match debug>
  @type elasticsearch
  hosts                my-es-server-1,my-es-server-2

  logstash_format      true
  logstash_prefix        debug
  utc_index  true

  time_key  time
  time_key_format      %FT%T.%L%z
</match>
  • keep_time_key, time_key and time_format are all needed in in_tail. The default value of time_key is time, so fluentd parses the event timestamp from the time field of your JSON message; keep_time_key true keeps that field in the record.
    • If you don’t set keep_time_key, the time field will be removed, and the timestamp will end up in the timezone of the host where td-agent is running.
    • If you don’t give time_format, the default time parser cannot handle this format because the time has milliseconds, so your @timestamp will be wrong.
  • In the elasticsearch output:
    • You need to set time_key. Fluentd copies time to @timestamp, so @timestamp has the exact same UTC string as time.
    • time_key_format is used to parse the time and to generate the logstash index name when logstash_format is true and utc_index is true, so an index name like debug-2016.05.12 matches the times in your log.
  • In Kibana, you might see the timestamp shown in your local timezone, e.g. PDT. Go to “Settings -> Advanced -> dateFormat:tz” and change the default value “Browser” to “GMT”, so that all timestamps are displayed as GMT times.

Monday, May 9, 2016

Spark Cassandra Connector and DataFrame

When you write a DataFrame to a Cassandra table, be careful with SaveMode.Overwrite: in spark-cassandra-connector-1.6.0-M2, TRUNCATE $keyspace.$table will be called. See the code in CassandraSourceRelation.scala.
I did observe something weird when I used the following code to write a DataFrame to a Cassandra 2.1.8 cluster:
df.write
  .format(""org.apache.spark.sql.cassandra")
  .mode(SaveMode.Overwrite)
  .options(Map("table" -> table, "keyspace" -> keyspace))
  .save
After the scheduled Spark job finishes, the table looks empty in cqlsh when running select * from keyspace.table limit 10. I get the same result if I change the consistency level to QUORUM, or even ALL. Only after some time does the query start returning results.
If I start the job manually from the command line, however, the query returns results most of the time.
If you check the CQL documentation for TRUNCATE, setting the consistency level to ALL is required:
Note: The consistency level must be set to ALL prior to performing a TRUNCATE operation. All replicas must remove the data.
I don’t think spark-cassandra-connector changes the consistency level before calling TRUNCATE $keyspace.$table; the default consistency level is LOCAL_QUORUM. That might be the root cause.
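
I have not verified that this is the fix, but one workaround I would try is to truncate the table myself at consistency ALL and then write with SaveMode.Append instead of Overwrite. A sketch, assuming spark-cassandra-connector 1.6 and the DataStax Java driver it bundles (sc, df, keyspace and table are the same values as in the code above):

import com.datastax.driver.core.{ConsistencyLevel, SimpleStatement}
import com.datastax.spark.connector.cql.CassandraConnector
import org.apache.spark.sql.SaveMode

// Truncate explicitly at consistency ALL, so all replicas remove the data
// before the new rows are written.
CassandraConnector(sc.getConf).withSessionDo { session =>
  val truncate = new SimpleStatement(s"TRUNCATE $keyspace.$table")
  truncate.setConsistencyLevel(ConsistencyLevel.ALL)
  session.execute(truncate)
}

// Append instead of Overwrite, so the connector does not issue its own TRUNCATE.
df.write
  .format("org.apache.spark.sql.cassandra")
  .mode(SaveMode.Append)
  .options(Map("table" -> table, "keyspace" -> keyspace))
  .save()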