Some useful Java libraries

Here is a video from JavaZone where some useful Java libraries are mentioned:

02:30 xbean-finder: Annotation finder in jars.
03:50 shrinkwrap: Can download artifacts from Maven repo.
05:50 zt-zip: zip file utils from Zeroturnaround.
07:25 Airline: command line parser helper.
10:30 really-executable-jars-maven-plugin: makes self-extracting jars.
12:00 jansi: colorized command line.
13:30 jmxutils: JMX beans by annotations.
15:00 Feign: HTTP helper / annotation lib. OkHttp is also mentioned.
17:20 Jerry / Lagarto: JQuery-style selectors in Java.
20:00 jchronic: converting English free-text to time.
21:40 assertj: fluent interface for asserts in unit tests.
23:10 vtte: very trivial template engine.
24:45 tape: a collection of queue-related classes for Android.
26:10 connector/mxj (GPL): MySQL helper.
28:10 ness-pg-embedded: PostgreSQL helper.
29:10 slice: effective off-heap memory helper tool instead of ByteBuffers.
32:20 paranamer: named params in java from bytecode.
34:00 sshj: ssh client.
35:00 sshd-core: ssh daemon.
36:50 jline2: emacs style CLI helper.
38:00 zt-exec: starting processes conveniently.
39:30 jBcrypt: password hashing easily.
40:50 joda-money: helper for working with amounts of money.
42:10 jnr-ffi: JNI helper.
43:30 sqlite-jdbc: jdbc driver for sqlite.
44:50 java-classmate: generics parameter type discoverer.
46:20 jackson-module-afterburner: speeds up Jackson.
47:30 jackson-dataformat-yaml: YAML support for Jackson.
48:20 unix4j: command line tools from Java, like grep.
49:40 parboiled: parser generator library.
51:45 MVEL: simple prebuilt expression language.
52:40 typesafe-config: tool for reading configs.


SAX Parser truncation problems

Have you ever run into strange problems with a SAX parser where textual content seems to be truncated? Where the parser appears to deliver only a fragment of the content inside an XML element? Maybe not. Or maybe you have, but haven't noticed.

The parser reads the stream in blocks and may call the characters method more than once for a single element. It's written in the Javadoc, and it's quite logical: very long text content can't be processed in one go, so it has to be split into parts.

So, rather than assigning the content to a simple string, concatenate it instead and evaluate the accumulated content in endElement.
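A minimal sketch of this pattern (the class and field names here are my own, not from any particular codebase): append in characters, read the result only in endElement.

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class TextCollector extends DefaultHandler {
    // The buffer survives multiple characters() callbacks for one element.
    private final StringBuilder buffer = new StringBuilder();
    private String lastText;

    @Override
    public void startElement(String uri, String localName, String qName,
                             Attributes attributes) {
        buffer.setLength(0); // reset for the new element
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        // May be called several times per element -- never assume one call.
        buffer.append(ch, start, length);
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        lastText = buffer.toString(); // only now is the content complete
    }

    public String getLastText() {
        return lastText;
    }

    public static void main(String[] args) throws Exception {
        TextCollector handler = new TextCollector();
        SAXParserFactory.newInstance().newSAXParser().parse(
                new InputSource(new StringReader("<root>hello world</root>")),
                handler);
        System.out.println(handler.getLastText()); // prints "hello world"
    }
}
```

Assigning `lastText = new String(ch, start, length)` inside characters is exactly the bug that silently works for short content and truncates long content.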

By the way, the magic number is 2048: parser implementations typically read blocks of this size. Unfortunately, it's the kind of bug that easily creeps under the radar of tests, because nobody writes tests for long data.

See also this question on Stack Overflow.



NoClassDefFoundError and leaking file descriptors

Recently I faced NoClassDefFoundErrors during unit testing, even though all the class files were in place. It was quite strange. Furthermore, the error happened only on our CI (Linux) server; I couldn't reproduce it on my desktop, neither on Windows nor on Linux.

I started investigating the stack traces, and it seemed that the native open method of FileInputStream had failed. After further investigation with JProfiler I found that our code makes many classLoader.getResourceAsStream invocations, but those streams are never closed.
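The fix is to close every stream obtained from getResourceAsStream, for instance with try-with-resources. A sketch of the pattern (the helper class and method names are hypothetical, not from our code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ResourceReader {

    // Reads a classpath resource fully and, crucially, closes the
    // underlying stream -- releasing its file descriptor -- when done.
    public static byte[] readResource(String name) throws IOException {
        ClassLoader cl = ResourceReader.class.getClassLoader();
        // try-with-resources guarantees close() even on exceptions,
        // so file descriptors don't leak over many calls.
        try (InputStream in = cl.getResourceAsStream(name)) {
            if (in == null) {
                throw new IOException("Resource not found: " + name);
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        // .class resources are always readable through the class loader.
        byte[] data = readResource("java/lang/Object.class");
        System.out.println(data.length > 0); // prints "true"
    }
}
```

Without the try-with-resources block, each call keeps a file descriptor open until the stream is garbage-collected, which is exactly how a long test run exhausts the per-process limit.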

Meanwhile I found this nice write-up about Tuning Linux applications, which says: "Some Linux applications; for example, a JVM, might require a higher file descriptor limit. If an application can't open files because the file descriptor limit has been exceeded, you might get a NoClassDefFoundError error message."

Then it explains how to increase the number of file descriptors (ulimit -n 2048), but let's rather close the resources, guys. It would have been quite unlucky if the application had started throwing NoClassDefFoundErrors in the wild after a few days.
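For diagnosing this kind of leak on Linux, a few shell commands help (the /proc paths are Linux-specific; $$ here stands for the shell's own PID, substitute the JVM's PID when inspecting a running application):

```shell
# Show the current soft limit on open file descriptors (often 1024 by default)
ulimit -n

# Count how many file descriptors a process currently holds open
ls /proc/$$/fd | wc -l

# Raise the soft limit for this shell session (only up to the hard limit,
# and this is a workaround -- the real fix is closing the streams)
ulimit -n 2048
```

Watching the fd count grow steadily while tests run is a quick way to confirm a descriptor leak before reaching for a profiler.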