Thursday, December 29, 2011

DB21018E DB2 CLP cannot start

DB2 CLP refused to start after I killed some db2 processes, and I finally found the error below in db2diag.log. db2 and db2bp communicate through a System V message queue, and somehow all 16 queues were in use. Use "ipcs -qp" to show the current message queues:
2011-12-29-05.01.05.656543-480 E46743E405          LEVEL: Severe (OS)
PID     : 29100                TID  : 47033606104272PROC : db2
INSTANCE: db2c97               NODE : 000
FUNCTION: DB2 UDB, oper system services, sqloexec, probe:20
MESSAGE : ZRC=0x870F00F2=-2029059854=SQLO_NORES
          "no resources to create process or thread"
CALLED  : OS, -, msgget                           OSERR: ENOSPC (28)
You can use "ipcrm -q msgid" to remove them. After removing all those message queues, the DB2 CLP starts again.
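The cleanup can be scripted. Here is a minimal sketch that parses the "ipcs -qp" output and prints the matching ipcrm commands; the column layout assumed (queue ID in the first column) is the usual Linux format, and the script only prints candidates rather than removing them:

```python
import subprocess

def message_queue_ids(ipcs_output):
    """Parse `ipcs -qp` output and return the queue IDs (first column)."""
    ids = []
    for line in ipcs_output.splitlines():
        tokens = line.split()
        # Data rows start with a numeric msqid; headers and blanks do not.
        if tokens and tokens[0].isdigit():
            ids.append(tokens[0])
    return ids

if __name__ == '__main__':
    try:
        out = subprocess.check_output(['ipcs', '-qp']).decode()
    except (OSError, subprocess.CalledProcessError):
        out = ''
    for qid in message_queue_ids(out):
        # To actually remove a queue: subprocess.call(['ipcrm', '-q', qid])
        print('stale queue candidate: ipcrm -q %s' % qid)
```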

Wednesday, December 7, 2011

Input DB2 password using Python openpty()

Here is how to use Python's openpty() to feed DB2 a password.
  • You need to use openpty() because db2 insists on reading the password from a terminal; you cannot pass it through a PIPE on stdin.
  • You have to read from stdout first. If you write the password through the pty before reading, db2 may miss it because it needs time to start.
  • In Python, read() reads until EOF, and readline() won't work either, because db2 prints "Enter " and waits for input without printing a newline.
import os
import subprocess

# Create a pseudo-terminal; db2 reads the password from the slave end
m, s = os.openpty()
print m, s
p = subprocess.Popen("db2.sh", stdin=s,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
f = os.fdopen(m, "w")

# Read the start of the password prompt before writing anything
out = p.stdout.read(5)
print "OUT: %s" % out
if out == "Enter":
    f.write("mypassword\n")
    f.flush()  # make sure the password actually reaches db2
else:
    exit(-1)

print "\nSTDOUT:\n"
for line in p.stdout:
    print line,
print "\nSTDERR:\n"
for line in p.stderr:
    print line,
p.wait()  # returncode is only set after the child exits
print p.returncode
And here is db2.sh:
#!/bin/bash
set -e
source /home/db2c97/sqllib/db2profile
db2 connect to DEVEDW user myname
echo $?
echo "loading ..."
db2 "select count(*) from players"
echo $?

Friday, December 2, 2011

Fix "The property zDeviceTemplates does not exist"

I wanted to bind my monitoring templates with my Hadoop devices programmatically. So I wrote a zendmd script like this:

...
templates = {
    "clientnode": [],
    "secondarynamenode": [ 'HadoopJVM', 'HadoopNameNode', 'HadoopDFS' ],
    "namenode":   [ 'HadoopJVM', 'HadoopNameNode', 'HadoopDFS' ],
    "jobtracker": [ 'HadoopJVM', 'HadoopJobTracker', 'HadoopFairScheduler' ],
    "datanode":   [ 'HadoopJVM', 'HadoopDataNode', 'HadoopTaskTracker' ],
    "utility":    []
    }
...

for item in dmd.Devices.Server.SSH.Linux.Ganglia.devices.objectItems():
  (name, device) = item
  bindings = set([ 'Device' ])
  rule = findRule(name, rules)
  if rule:
    device.zGangliaHost = gmond[rule["cluster"]]
    for t in rule["kinds"]:
      bindings = bindings.union(templates[t])
  device.zDeviceTemplates = list(bindings)
  print name, device.zDeviceTemplates
commit()

The basic idea is to define a list of templates for each node and set the templates list to zDeviceTemplates.

It worked: after running this script in zendmd, you can see all the monitoring templates bound to each device. But you cannot bind templates in the WebUI any more, and if you try to load objects including templates using ImportRM.loadObjectFromXML(xmlfile=f), it throws the error "The property zDeviceTemplates does not exist".

Another problem is: zGangliaHost won't show up in "Configuration properties" after running the script, but Ganglia ZenPack works well.

I found exactly the same problem http://community.zenoss.org/thread/5812, which suggested "delete the device and create it again".

Actually you should never assign zGangliaHost and zDeviceTemplates directly. Use device.setZenProperty('zGangliaHost', gmond[rule["cluster"]]) and device.setZenProperty('zDeviceTemplates', list(bindings)) instead. setZenProperty maintains an internal property dict; if you assign zGangliaHost or zDeviceTemplates directly (as attributes), the property dict will not contain those properties, and you will get the error.

But once the error has been thrown, you cannot call setZenProperty to set the property any more: you will keep getting the "the property doesn't exist" error. How can I fix it without deleting the device?

It is actually pretty simple: delete the zGangliaHost and zDeviceTemplates attributes from the device. Zenoss checks whether the property name is valid before setting the property, and the name is considered invalid if the object already has an attribute of that name, because Zenoss adds an attribute to the object for each property it manages. Unfortunately, the error message is misleading. This is my fix script:

for (id, dev) in dmd.Devices.Server.SSH.Linux.Ganglia.devices.objectItems():
  print '----- %s' % id
  
  try:
    gangliaHost = dev.zGangliaHost
    delattr(dev, 'zGangliaHost')
    dev.setZenProperty('zGangliaHost', gangliaHost)
  except AttributeError:
    print 'Missing zGangliaHost'

  devTemplates = dev.zDeviceTemplates
  delattr(dev, 'zDeviceTemplates')
  dev.setZenProperty('zDeviceTemplates', devTemplates)

  print 'zGangliaHost = %s' % dev.zGangliaHost
  print 'zDeviceTemplates = %s' % dev.zDeviceTemplates
commit()

Thursday, November 17, 2011

Invalid artifact issue of Eclipselink Nexus Proxy Repository

I use Nexus as my internal Maven repository and set up a proxy repository "Eclipselink Maven Mirror" for EclipseLink. EclipseLink uses the URL http://www.eclipse.org/downloads/download.php?r=1&nf=1&file=/rt/eclipselink/maven.repo which actually redirects you to a mirror site. Just put this link into "Remote Storage Location" and create a proxy repository. I made the EclipseLink repository the last one in "Ordered Group Repository" in the "Public Repository" Configuration tab. My Maven settings.xml defines a mirror that uses the public repository group like this:
    <mirrors>
        <mirror>
            <id>nexus-public</id>
            <mirrorOf>*</mirrorOf>
            <url>http://nexus:8080/nexus/content/groups/public</url>
        </mirror>
    </mirrors>
Unfortunately it doesn't work well. When Maven fetches some artifacts, especially my own, Nexus returns invalid pom or jar files from this "Eclipselink Maven Mirror"; they are actually HTML pages. These artifacts get saved into the local Maven repository and make my builds fail again and again. I don't know exactly how Nexus searches the public group to locate an artifact, but it looks like Nexus tries "Eclipselink Maven Mirror" and accesses the EclipseLink repo using the URL listed above. Unfortunately that URL returns an HTML page when the artifact is not found, and Nexus didn't check whether the content was valid because I hadn't configured it to. Here is my solution:
  • Set up two mirrors: one for eclipselink and one for everything else:
    <mirrors>
        <mirror>
            <id>nexus-public</id>
            <mirrorOf>*,!eclipselink-repo</mirrorOf>
            <url>http://nexus:8080/nexus/content/groups/public</url>
        </mirror>
        <mirror>
            <id>nexus-eclipselink</id>
            <mirrorOf>eclipselink-repo</mirrorOf>
            <url>http://nexus:8080/nexus/content/repositories/eclipselink-maven-mirror</url>
        </mirror>
    </mirrors>
  • Set "File Content Validation" to true for "Eclipselink Maven Mirror"

Tuesday, November 8, 2011

Hive Metastore Trick: "get_privilege_set failed"

Three kinds of Hive metastores are supported in Hive 0.7.1:
  • Embedded
  • Local
  • Remote

It seems straightforward, right? If the MySQL metastore database is installed on serverA and the Hive client runs on serverB, which one should you use? Definitely not Embedded. Remote, then? Does this hive-site.xml configuration work?


<property>
  <name>hive.metastore.local</name>
  <value>false</value>
  <description>controls whether to connect to a remote metastore server or open a new metastore server in the Hive Client JVM</description>
</property>

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://serverA:8003</value>
  <description>host and port for the thrift metastore server</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://serverA/metastore_dev</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>username</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>password</value>
</property>

Unfortunately this doesn't work correctly. We could run "show tables", but Hive threw the following exception when we ran a select statement:
FAILED: Hive Internal Error: org.apache.hadoop.hive.ql.metadata.HiveException(org.apache.thrift.TApplicationException: get_privilege_set failed: unknown result)
org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.TApplicationException: get_privilege_set failed: unknown result
        at org.apache.hadoop.hive.ql.metadata.Hive.get_privilege_set(Hive.java:1617)
        at org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider.authorizeUserPriv(DefaultHiveAuthorizationProvider.java:201)
        at org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider.authorizeUserAndDBPriv(DefaultHiveAuthorizationProvider.java:226)
        at org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider.authorizeUserDBAndTable(DefaultHiveAuthorizationProvider.java:259)
        at org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider.authorize(DefaultHiveAuthorizationProvider.java:159)
        at org.apache.hadoop.hive.ql.Driver.doAuthorization(Driver.java:531)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:393)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:736)
        at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:209)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:286)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:513)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
Caused by: org.apache.thrift.TApplicationException: get_privilege_set failed: unknown result
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_privilege_set(ThriftHiveMetastore.java:2414)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_privilege_set(ThriftHiveMetastore.java:2379)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.get_privilege_set(HiveMetaStoreClient.java:1042)
        at org.apache.hadoop.hive.ql.metadata.Hive.get_privilege_set(Hive.java:1615)
        ... 15 more

The problem with our configuration is that we should use the LOCAL metastore even though the MySQL metastore database lives on a different server; in other words, hive.metastore.local=true. If you want to use the Remote metastore, you need to start the metastore service with hive --service metastore. Our configuration seemed to work with "serverA:8003" only because Hue is installed on serverA, and beeswax starts a metastore thrift service on port 8003. That is why "show tables" succeeded while select produced the error above.
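For reference, the fix amounts to flipping one property while keeping the JDBC properties as configured; a minimal sketch:

```xml
<!-- Use a local metastore: the Hive client talks JDBC directly to MySQL.
     Do not point hive.metastore.uris at beeswax's thrift port. -->
<property>
  <name>hive.metastore.local</name>
  <value>true</value>
</property>
```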

Another issue: even when you use a real remote metastore, you can still hit this error; see https://issues.apache.org/jira/browse/HIVE-2554 and https://issues.apache.org/jira/browse/HIVE-2405.

Tuesday, October 18, 2011

Puppet logs

It took me hours to figure out how to make puppet write log files. I'm using EPEL puppet-2.6.6 on CentOS 5.6 x86_64. The Puppet documentation is misleading: you may find puppetdlog and masterlog in the Puppet Configuration Reference. I tried to set them to syslog or to a file in /var/log/puppet; neither worked. It turns out that you have to pass the --logdest option on the command line. If you run puppet as a service, you can set the option in /etc/sysconfig/puppet and /etc/sysconfig/puppetmaster. Here is /etc/sysconfig/puppet:
# Where to log to. Specify syslog to send log messages to the system log.
PUPPET_LOG=/var/log/puppet/agent.log

# Autoflush logs
PUPPET_EXTRA_OPTS=--autoflush
and /etc/sysconfig/puppetmaster
PUPPETMASTER_LOG=/var/log/puppet/master.log

PUPPETMASTER_EXTRA_OPTS=--autoflush
It is better to add --autoflush. I like to use puppet kick and monitor the log; without --autoflush, puppet seems not to be working because the log is not yet flushed to disk.

Thursday, October 13, 2011

Hadoop Cluster Monitoring

Set zGangliaHost for a lot of servers

I realized that it was a big problem to set zGangliaHost for my Hadoop clusters, 55 servers in total. Fortunately, Zenoss is powerful if you can write a little Python. Here is my solution: load the devices using zenbatchload like this:
$ zenbatchload dev-cluster.txt
Here is the dev-cluster.txt:
/Devices/Server/SSH/Linux/Ganglia
    devnode001 comments="My Hadoop DEV cluster, client node", zGangliaHost="devnode001", setGroups='/Hadoop/DEV/ClientNode'
    devnode002 comments="My Hadoop DEV cluster, master node", zGangliaHost="devnode001", setGroups='/Hadoop/DEV/MasterNode'
    devnode003 comments="My Hadoop DEV cluster, data node", zGangliaHost="devnode001", setGroups='/Hadoop/DEV/DataNode'
You can set zGangliaHost and groups in a file, which is perfect: you can generate the file easily with a script. The ZENBATCHLOAD HOW TO may be obsolete; my Zenoss version (3.1.0) doesn't use -i. My Zenoss administrator had already created the devices for my clusters without zGangliaHost, and I didn't want him to delete those devices and use zenbatchload. Here is my solution:
$ zendmd --script=set_gangliahost.py
Here is set_gangliahost.py:
import re

dev = re.compile('(hdcl001|had002|had01[0-2]).*', re.IGNORECASE)
test = re.compile('(hdcledw002|had001|had01[0-2]).*', re.IGNORECASE)
prod = re.compile('prod.*', re.IGNORECASE)
for item in dmd.Devices.Server.SSH.Linux.Ganglia.devices.objectItems():
    (name, device) = item
    if dev.match(name):
        device.zGangliaHost = 'dev-gmond'
    elif test.match(name):
        device.zGangliaHost = 'test-gmond'
    elif prod.match(name):
        device.zGangliaHost = 'prod-gmond'
commit()
dev-gmond is the name of the server where you run gmond.
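Going back to the zenbatchload route: the input file shown above is easy to generate from a host list. A minimal sketch, where the host names, roles, and groups are hypothetical examples:

```python
# Generate a zenbatchload input file like dev-cluster.txt above.
hosts = [
    ("devnode001", "client node", "/Hadoop/DEV/ClientNode"),
    ("devnode002", "master node", "/Hadoop/DEV/MasterNode"),
    ("devnode003", "data node", "/Hadoop/DEV/DataNode"),
]

with open("dev-cluster.txt", "w") as f:
    # Device class header, then one indented line per device
    f.write("/Devices/Server/SSH/Linux/Ganglia\n")
    for name, role, group in hosts:
        f.write('    %s comments="My Hadoop DEV cluster, %s", '
                'zGangliaHost="devnode001", setGroups=\'%s\'\n'
                % (name, role, group))
```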

Zenoss Ganglia ZenPack Fix

My company uses Zenoss to monitor all Linux hosts, and we want to use the Ganglia ZenPack to monitor our Hadoop clusters. The ZenPack from this link doesn't work in my Zenoss, community version 3.1.0. A lot of weird things happened when I loaded the egg using "Advanced -> Settings -> ZenPacks -> Install ZenPack ...". I also tried the ZenPack's source code from github, with no luck. Finally I fixed the Ganglia ZenPack and it works perfectly in Zenoss 3.1.0. I created a ZenPack in Zenoss using "Create ZenPack ...", used the folders under $ZENHOME/ZenPacks as a skeleton, and copied in the files from the github ZenPack except the skins. I also changed the line below, but I don't remember whether it matters.
diff ZenPacks/jschroeder/GangliaMonitor/datasources/GangliaMonitorDataSource.py /workplace/ws-zenpacks/ZenPacks.jschroeder.GangliaMonitor/ZenPacks/jschroeder/GangliaMonitor/datasources/GangliaMonitorDataSource.py 
72c72
<             return self.hostname
---
>             return self.host
Because I am a newbie to Zenoss, the following may be useful for you:
  • Switch to zenoss sudo su - zenoss
  • run zenpack --link --install=/tmp/ZenPacks.jschroeder.GangliaMonitor
  • If everything is correct, you should see ZenPack in the web page.
You'd better use the command line tool zenpack; it will save you a lot of time if you are new to Zenoss and Python. You will probably need to run zopectl restart from time to time. In my experience, if your ZenPack works, everything looks perfect; if something is wrong in your ZenPack, you are doomed, because Zenoss doesn't give you much useful information, especially in the web UI. For example, I deleted MANIFEST.in in the folder, and the egg was built without the libexec folder and objects/objects.xml. After I installed this egg, the ZenPack appeared in the web page after "zopectl restart", but it never worked as I expected.

Thursday, October 6, 2011

Puppet kick

I encountered several problems when I tried puppet kick. I had set up /etc/hosts to resolve pslave1 and could ping the host, but it turned out that I also had to open tcp/8139 in pslave1's firewall.
$ sudo puppet kick -f --debug --host pslave1.puppet-test.com
Triggering pslave1.puppet-test.com
Host pslave1.puppet-test.com failed: No route to host - connect(2)
pslave1.puppet-test.com finished with exit code 2
Failed: pslave1.puppet-test.com
Then I ran into another problem. I had added the following to /etc/puppet/auth.conf (THIS IS WRONG):
# this one is not strictly necessary, but it has the merit
# of showing the default policy, which is to deny everything else
path /
auth any

path /run
method save
allow pmaster.puppet-test.com
I had also run this command to create namespaceauth.conf:
sudo touch /etc/puppet/namespaceauth.conf
But it still didn't allow me to kick the agent:
warning: Denying access: Forbidden request: pmaster.puppet-test.com(192.168.56.101) access to /run/pslave1.puppet-test.com [save] authenticated  at line 93
err: Forbidden request: pmaster.puppet-test.com(192.168.56.101) access to /run/pslave1.puppet-test.com [save] authenticated  at line 93
Finally I found out why: I had put "path /run" after "path /". Here is the correct auth.conf:
path /run
auth any
method save
allow pmaster.puppet-test.com

# this one is not strictly necessary, but it has the merit
# of showing the default policy, which is to deny everything else
path /
auth any
You can run puppet agent like this to get the debug information:
sudo puppet agent --listen --debug --no-daemonize --verbose

Puppet master, symlink and SELinux

I created a puppet module p4 under my home folder and symlinked the module folder into /etc/puppet/modules. I can run sudo puppet apply test.pp successfully on the master, but when I ran
sudo puppet agent --no-daemonize --verbose --onetime
on an agent machine, I got the following error:
err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class p4 at /etc/puppet/manifests/nodes.pp:2 on node pslave1.puppet-test.com

This page is helpful: http://groups.google.com/group/puppet-users/browse_thread/thread/66361418d801a97c. But my situation was different: the permission of the module folders is rwxrwxr-x. I ran this command:
sudo strace -e trace=file -f puppet master --no-daemonize --debug 2>&1 | tee log
It turned out that there WAS a "permission denied" issue:
[pid 15508] stat("/etc/puppet/modules/p4", 0x7fff44cfb630) = -1 EACCES (Permission denied)
After I copied the p4 folder to /usr/share/puppet/modules, everything worked. SELinux is enabled on my CentOS; it must be SELinux that blocks puppet from accessing the file.

Thursday, September 29, 2011

Install 64-bit CentOS on 64-bit Windows 7 using VirtualBox

My Laptop is ThinkPad T410 with Intel Core i5 M520 @ 2.4GHz and Windows 7 Enterprise 64-bit. I want to install a CentOS 5.6 64bit as a guest OS using VirtualBox.

But I got the error "Your CPU does not support long mode. Use a 32bit distribution."

It turns out that I had to turn off a BIOS setting for VT-d, which was enabled by default. After that, when you create a new machine, enable Settings -> System -> Acceleration -> Enable VT-x/AMD-V. Then everything works smoothly.


Wednesday, September 7, 2011

How to find DB2 servers in Windows?

I have a couple of DB2 databases configured on a machine, and the original ODBC profile file cannot be found. How can I find out which server hosts a database? Control Panel -> Administrative Tools -> Data Sources (ODBC) doesn't show the host.

Actually you can use DB2->Set-up Tools->Configuration Assistant. When it starts, click menu Configure -> Export Profile -> All ...

Search for the node of your DB in the profile file, and you will find the host name or IP address.

[NODE>MYDB]
ServerType=DB2LINUX
Nodetype=U
Protocol=TCPIP
Hostname=10.116.152.112
Portnumber=50000
Security=0


Chrome and NTLM authentication

There is an IIS web server in my company's network. I used to access that web site from Chrome on my Windows machine without providing my credentials. However, after I changed something somewhere, it has been asking me for the user name and password.

Finally I figured out why. Chrome uses the Windows "Internet options" zones for NTLM authentication. If you place the web site in "Trusted sites", Chrome won't try NTLM authentication; only when the web site is in "Local intranet" will Chrome use NTLM authentication and let you access the site without credentials.

Transfer database connections between two Eclipse workspaces

I have two Eclipse workspaces, A and B, and created several database connections in workspace A. How can I get all those connections set up in workspace B without recreating them one by one?

  1. Go to workspaceA/.metadata/.plugins/org.eclipse.datatools.connectivity
  2. Copy ServerProfiles.bak and ServerProfiles.dat to  workspaceB/.metadata/.plugins/org.eclipse.datatools.connectivity
  3. Restart Eclipse
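The three steps above amount to copying two files; a small sketch, where the workspace locations in the demo are hypothetical:

```python
import os
import shutil

# Relative path of the Eclipse DTP connection-profile store inside a workspace
PROFILE_DIR = os.path.join(".metadata", ".plugins",
                           "org.eclipse.datatools.connectivity")

def copy_connection_profiles(src_workspace, dst_workspace):
    """Copy the connection profile files from one workspace to another."""
    dst_dir = os.path.join(dst_workspace, PROFILE_DIR)
    if not os.path.isdir(dst_dir):
        os.makedirs(dst_dir)
    for name in ("ServerProfiles.bak", "ServerProfiles.dat"):
        src = os.path.join(src_workspace, PROFILE_DIR, name)
        if os.path.exists(src):
            shutil.copy2(src, dst_dir)

if __name__ == "__main__":
    # Example invocation; these workspace paths are hypothetical
    ws_a = os.path.expanduser("~/workspaceA")
    ws_b = os.path.expanduser("~/workspaceB")
    if os.path.isdir(ws_a):
        copy_connection_profiles(ws_a, ws_b)
```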

JRuby Rails 3 render a file on a windows server

I want to render public/401.html if the user is not authorized to access the web site.

My environment is jruby 1.6, rails 3.0.5, and tomcat 7.0.11 on windows.

I found this code in the "JRuby Cookbook":

PUBLIC_DIR = if defined?($servlet_context)
  $servlet_context.getRealPath('/')
else
  "#{RAILS_ROOT}/public"
end

Unfortunately this code still didn't work; I still got "Missing template".

The reason is the Windows path. PUBLIC_DIR will be "c:/tomcat/webapps/myapp" and the file is "c:/tomcat/webapps/myapp/401.html". When Rails searches the view paths, it prepends each view path to your file path. One of the view paths is "c:/", so the path Rails tries looks like "c:/c:/tomcat/webapps/myapp/401.html". Of course, Rails cannot find the file.

The fix is to strip the drive letter and normalize the backslashes:

$servlet_context.getRealPath('/').gsub(/^\w:/, "").gsub(/\\/, "/")

Friday, August 19, 2011

How to debug Pig UDFs in Eclipse?

It is actually not as hard as you might think.
  1. Create a maven project using m2eclipse.
  2. Add org.apache.pig:pig as dependency.
  3. Click "Debug configurations ...".
  4. Create a "Java Application".
  5. Main class "org.apache.pig.Main".
  6. In the Arguments tab, put "-x local" and other arguments in "Program arguments".
  7. In Environment tab, create a variable "PATH" as "${env_var:path};c:\cygwin\bin" if your OS is Windows.
  8. Debug your script, and Eclipse will stop at the breakpoint you set in UDF.

How to debug Hadoop MapReduce jobs in Eclipse.

It is actually very easy to debug Hadoop MapReduce jobs in Eclipse, especially when you use maven.
  1. Create a maven project using m2eclipse.
  2. Add org.apache.hadoop:hadoop-core as dependency.
  3. You can set breakpoint at any line in your code.
  4. Right-click your driver class, Debug As -> Java Application
  5. In arguments tab of launch configuration, put "-fs file:/// -jt local -Dmapred.local.dir=c:/temp/hadoop your_input_file c:/temp/hadoop/output" in "Program arguments"
  6. If you run on Windows, you have to use Cygwin because hadoop uses external shell command "chmod". In Environment tab, add environment variable PATH, value is ${env_var:path};c:\cygwin\bin. Then hadoop can find chmod.
  7. Click debug, you can debug your MapReduce code in eclipse. Hadoop is running in local mode.




Tuesday, August 16, 2011

If Java is installed in c:\Program Files\Java\, it is a headache to make Hadoop/Hive work. Hive reports an error like this:

/workplace/apps/hadoop-0.20.2-cdh3u1/bin/hadoop: line 300: /cygdrive/c/Program: No such file or directory

Apparently the space in JAVA_HOME doesn't work.

I tried several solutions from the Internet, but none of them worked. For example,

export JAVA_HOME=/cygdrive/c/Program\ Files/Java/jdk1.6.0_25
export JAVA_HOME=/cygdrive/c/"Program Files"/Java/jdk1.6.0_25

Here is how I solved the problem: create a soft link like this

ln -s /cygdrive/c/Program\ Files/Java/jdk1.6.0_25 /usr/java/default

and set JAVA_HOME in ${HADOOP_HOME}/conf/hadoop-env.sh like this

export JAVA_HOME=/usr/java/default

Friday, May 6, 2011

SyntaxHighlighter Vertical Scrollbar

SyntaxHighlighter 3.0.83 seems to have an annoying issue: there is always a vertical scrollbar that you can only scroll a little. I found a solution here http://xbfish.com/2011/04/26/remove-vertical-scrollbar-in-syntaxhighlighter/, but it doesn't work.

After several tries, I found that commenting out line-height in shCore.css works:
.syntaxhighlighter a,
.syntaxhighlighter div,
.syntaxhighlighter code,
.syntaxhighlighter table,
.syntaxhighlighter table td,
.syntaxhighlighter table tr,
.syntaxhighlighter table tbody,
.syntaxhighlighter table thead,
.syntaxhighlighter table caption,
.syntaxhighlighter textarea {
-moz-border-radius: 0 0 0 0 !important;
-webkit-border-radius: 0 0 0 0 !important;
background: none !important;
border: 0 !important;
bottom: auto !important;
float: none !important;
height: auto !important;
left: auto !important;
/* line-height: 1.5em !important; */

Tuesday, April 19, 2011

ActiveRecord JDBC adapter multiple database bug

My Rails application needs to use two databases simultaneously: one is DB2 and the other is SQL Server. Unfortunately I got the error "undefined method `identity=' for ...". After googling, I found a bug reported here https://github.com/nicksieger/activerecord-jdbc-adapter/issues/25.

After tracing the JDBC adapter source, I found the cause in my case: the class method column_types in arjdbc/jdbc/column.rb

@column_types ||= ::ArJdbc.constants.map { |c|
  ::ArJdbc.const_get c
}.select { |c|
  c.respond_to? :column_selector
}.map { |c|
  c.column_selector
}.inject({}) { |h, val| h[val[0]] = val[1]; h }

Both the MsSQL and DB2 modules are lazily loaded. When you have multiple databases, once @column_types has been instantiated for DB2, it is never recomputed. But the MsSQL module may not be loaded yet at that point, so its column_selector is never called.

def self.column_selector
  [/sqlserver|tds|Microsoft SQL/i, lambda { |cfg, col| col.extend(::ArJdbc::MsSQL::Column) }]
end

column_selector extends JdbcColumn with the MsSQL version of Column, which defines :identity. If it is never called, "undefined method 'identity='" will be thrown.

The simple fix I found is to preload both drivers by putting the following in config/application.rb:

# overcome activerecord-jdbc-adapter bug "undefined method 'identity='"
# for multiple databases by preloading the driver
if defined?(ArJdbc::Version::VERSION)
  require 'arjdbc/db2'
  require 'arjdbc/mssql'
end

Rails filters for Ajax pages

1. layout/application generates the application frame with jQuery tabs
2. For each tab, the content is loaded by an ajax call
3. The response of an ajax call is just a fragment of HTML or JSON data
4. I need an easy way to send a request to the server just like the ajax call does, so that I can debug an issue without loading the whole page.

SOLUTION 1:

1. Routing .js or .html
2. ajax always asks for .js

SOLUTION 2: Filter

Rails 3 SSO using JRuby, Tomcat, and Waffle

Friday, April 15, 2011

Get JRE 6 zip

You can only find the JRE installation exe file on Oracle.com, but you can extract a zip file from it.

* Download JRE installation exe file
* Launch the installation, but stop when the first dialog is shown
* Go to C:\Users\yourname\AppData\LocalLow\Sun\Java\jre1.6.0_24
* Open Data1.cab in WinRAR
* You will find core.zip.

Friday, April 8, 2011

Warbler configuration

Warbler is a simple and powerful way to package everything a Rails application needs into a war. However, you may want to customize how Warbler packages the war file, because you may not want to deploy a war of tens of megabytes every time when your Rails application itself is very small. I just thinned my 36MB war file down to 1MB by removing all the slowly-changing dependent jars and gems. Of course, doing so means some additional steps to maintain the dependencies yourself.

1. The three jars in WEB-INF/lib (jruby-core, jruby-rack and jruby-stdlib) use 12MB
2. All the gems the Rails app depends on may use another 6MB

And all of the above files change infrequently.

1. How to exclude all dependent gems in war

config.gem_dependencies = false

2. How to exclude jruby jars

config.java_libs = []

After

Monday, April 4, 2011

SyntaxHighlighter hints

SyntaxHighlighter provides a simple method to highlight the code:

SyntaxHighlighter.all();

But this method is hooked to the body's onload event. When I make an ajax call and want to format the new content dynamically, it won't work. It is actually easy to fix: use the highlight method directly.

SyntaxHighlighter.highlight();

And the default font size of SyntaxHighlighter is too small. I found this page by googling:

http://www.kerrywong.com/2009/04/05/changing-syntaxhighlighter-font-size/

It works, but there is an issue: the line numbers in the gutter won't match the lines. You can fix this by using the same font-size, as below:

.syntaxhighlighter a,
.syntaxhighlighter div,
.syntaxhighlighter code,
.syntaxhighlighter table,
.syntaxhighlighter table td,
.syntaxhighlighter table tr,
.syntaxhighlighter table tbody,
.syntaxhighlighter table thead,
.syntaxhighlighter table caption,
.syntaxhighlighter textarea {
...
font-size: 1.01em;
...
}

.syntaxhighlighter table td.gutter .line {
text-align: right !important;
padding: 0 0.5em 0 1em !important;
font-size: 1.01em;
}

Thursday, March 24, 2011

IBM_DB ActiveRecord Limit Issue

I use ibm-db-2.5.6 and activerecord-3.0.5. I encountered the same problem described in this Lighthouse ticket: https://rails.lighthouseapp.com/projects/8994/tickets/6012-incorrect-sql-generation-for-db2-in-rails-303

It is very simple to fix. I put the following code snippet into my config/application.rb after the application module, and everything works correctly.

Arel 2.0.9 doesn't have a ToSql visitor for DB2. The interesting thing is that some versions of the DB2 server actually support the LIMIT clause. For example:

this windows version supports LIMIT

Database server = DB2/NT64 9.7.2

but this linux version doesn't

Database server = DB2/LINUXX8664 9.7.2


# this is a patch for ibm_db
if defined?(IBM_DB) && !Arel::Visitors::VISITORS.has_key?('ibm_db')
  module Arel
    module Visitors
      class DB2 < Arel::Visitors::ToSql
        private
        def visit_Arel_Nodes_Limit o
          "FETCH FIRST #{visit o.expr} ROWS ONLY"
        end
      end

      VISITORS['ibm_db'] = Arel::Visitors::DB2
    end
  end
end

Thursday, February 17, 2011

Start Ruby 1.8.7 without setting system path on Windows

I installed cygwin and cygwin-ruby on my Windows 7 machine. If I put the Ruby-1.8.7-win32 path in my system path, the path also appears in the cygwin PATH. Although cygwin-ruby (/usr/bin/ruby) appears before the win32 Ruby in PATH, I still want to avoid any confusion, so I don't add the win32 Ruby to my system path.

When I want to run Ruby-1.8.7-win32, I can do this:

C:\Windows\System32\cmd.exe /E:ON /K C:\workplace\apps\Ruby187\bin\setrbvars.bat

You can create a shortcut and pin this on the task bar.

Wednesday, February 9, 2011

Identify the Physical Host of a Virtual Server using PowerShell

Found this page by googling:

http://portal.sivarajan.com/2010/01/identify-physical-host-of-virtual.html

$regPath= "HKLM:\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters"
$regValue = get-itemproperty -path $regPath
$regValue | fl "VirtualMachineName","PhysicalHostNameFullyQualified"