Monday, January 8, 2018

Automic Application Manager V9 Client: Run with Java 9

My team uses Automic AM v9.1.0 (v9.1.0_28363_28431), which is configured to use Java 6 and Java 7 by default. It also works with Java 8. However, if you run it with Java 9, you will encounter a network error with the following detail:
DHPublicKey does not comply to algorithm constraints
 at java.base/
 at java.base/
 at java.base/
The reason you see this error is that Java 9's default security policy disables DH keys shorter than 1024 bits, while AM v9.1.0 uses a DHPublicKey shorter than that. Check /usr/java/default/conf/security/
jdk.tls.disabledAlgorithms=SSLv3, RC4, MD5withRSA, DH keySize < 1024, \
    EC keySize < 224
If you update the file as below, relaxing the DH limit to 768 bits, you can run the client with Java 9.
jdk.tls.disabledAlgorithms=SSLv3, RC4, MD5withRSA, DH keySize < 768, \
    EC keySize < 224
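If editing the shared security file is not desirable, the same relaxation can also be applied per JVM through the standard java.security.Security API. A minimal sketch (the property value simply mirrors the file edit above):

```java
import java.security.Security;

public class RelaxDhKeySize {
    public static void main(String[] args) {
        // Override jdk.tls.disabledAlgorithms for this JVM only,
        // allowing DH keys of 768 bits and longer instead of 1024.
        // This must run before the first TLS handshake is attempted.
        Security.setProperty("jdk.tls.disabledAlgorithms",
                "SSLv3, RC4, MD5withRSA, DH keySize < 768, EC keySize < 224");

        System.out.println(Security.getProperty("jdk.tls.disabledAlgorithms"));
    }
}
```

Note that this only helps when you control the JVM startup; for a Web Start client launched by javaws, editing the security file is still the practical route.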
The reason I want to run AM with Java 9 is that Java 9 has better support for GDK, so I can get sharp fonts on a HiDPI screen such as a MacBook Pro with Retina Display.
  • You can run $JAVA_HOME/bin/javaws -J-Dawt.useSystemAAFontSettings=lcd http://am_host/am_engine/Client.jnlp to get better fonts with smooth edges.
  • Set the environment variable GDK_SCALE=2 to make the client window larger on a HiDPI display.
Because this change applies to every application that uses this Java 9 installation, it may be a security issue. I run AM in a Docker container, so there is no such problem: the Java with this change is used only by AM.

Read HAR to Spark DataFrame

When I use Spark Streaming to pull JSON events from a Kafka topic and persist the data into HDFS, I have to handle a lot of tiny files because the volume of the Kafka topic is pretty small. Too many small files hurt the performance of the Hadoop NameNode. Usually you build a Hadoop Archive (.har) to pack the small files into one big archive file.
The problem is how to read the archive file (.har) into a Spark DataFrame. The text and json methods of Spark DataFrameReader won't work on the path of an archive file. You have to use SparkContext#textFile, and the file path needs to be ${har_path}/*.
Here is an example showing how to read the files in a HAR. DataFrameReader reads nothing for all three path patterns, while SparkContext.textFile successfully reads the data for the dir and file patterns.
val har = "har:///tmp/test-data/bwang/starring/tag-v1-1511170200-1511175600.har"

val paths = Map(
    "har" -> har,
    "dir" -> s"$har/tag-*",
    "file" -> s"$har/tag-*/part-*"
)

println("DataFrameReader different HAR paths")

paths.foreach {
    case (kind, path) =>
        println(s"--- Reading $kind using path $path.")
        // spark.read.text is assumed here; the original line was truncated
        val data = spark.read.text(path)
        data.show(3, false)
}

println("SparkContext#textFile different HAR paths")

paths.foreach {
    case (kind, path) =>
        try {
            println(s"--- Reading $kind using path $path.")
            val data = sc.textFile(path).toDF
            data.show(3, false)
        } catch {
            case e: Exception =>
                println(s" --   Failed. ${e.getMessage}")
        }
}