How to encrypt passwords in the Tomcat server.xml file

By default, Tomcat stores passwords in server.xml in clear text, which can lead to obvious security lapses.

The easiest way to mitigate the risk of user account compromise is to use a password digest (SHA, MD2 or MD5 are supported).

With $CATALINA_HOME/lib/catalina.jar and $CATALINA_HOME/bin/tomcat-juli.jar on your class path, just use the following to generate the digested passwords:

java org.apache.catalina.realm.RealmBase \
   -a {algorithm} {cleartext-password}

The digest technique works by digesting the incoming clear text password (as entered by the user) and comparing the result to the stored digested password. If the two digests match, the password entered by the user must be correct, and the authenticate() method of the Realm succeeds.
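The comparison step can be sketched in plain Java using java.security.MessageDigest (the class and method names here are illustrative, not Tomcat's internals):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestCheck {

    // Digest a clear-text password with the given algorithm (e.g. "MD5",
    // "SHA") and return the lowercase hex encoding, as stored in server.xml.
    static String digest(String password, String algorithm)
            throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(password.getBytes())) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        String stored  = digest("s3cret", "MD5");  // value held in server.xml
        String entered = digest("s3cret", "MD5");  // value from the login form
        System.out.println(stored.equals(entered) ? "match" : "no match");
    }
}
```

Note that the clear text password itself is never stored, so a leaked server.xml exposes only the digests.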

ACCU Conference 2006

I went to the ACCU Spring Conference 2006 last week. There were some interesting sessions, as usual.

XSLT2 and XPath2

Version 1 of XSLT and XPath was fairly limited in its XML processing abilities in some respects: having no way to operate on the contents of local variables was the worst. Version 2.0 of these languages fixes this and other shortcomings with a raft of new features and generalisations. In fact, XSLT2 and XPath2 are very different from their predecessors.

XSLT2 allows operations on temporary/local variables and returned node sets. This can lead to greatly simplified XSLT documents. In addition, there are some nice new operators and keywords:

  • xsl:for-each – generalised operations over the universal new Sequence type (see below)
  • xsl:for-each-group – allows GROUP BY (pivot) of data
  • xsl:analyze-string – use RegEx to match text in nodes
  • xsl:function – define a custom function in XSL, and call it using XPath2 expressions
  • xsl:unparsed-text – handle non-XML text e.g. CSV
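As an illustration of the grouping support, here is a minimal XSLT2 sketch of xsl:for-each-group (the input schema and element names are my own invention, not from the session):

```xml
<?xml version="1.0"?>
<!-- Sketch only: groups invented <item> elements by their @category
     attribute, emitting one <group> per distinct key. -->
<xsl:stylesheet version="2.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/items">
    <groups>
      <xsl:for-each-group select="item" group-by="@category">
        <group key="{current-grouping-key()}"
               count="{count(current-group())}">
          <xsl:copy-of select="current-group()"/>
        </group>
      </xsl:for-each-group>
    </groups>
  </xsl:template>
</xsl:stylesheet>
```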

The most fundamental change in XPath2 is that all XPath2 expressions now operate upon the (typed) Sequence datatype instead of node sets. XPath2 also allows conditional expressions, whereas in XPath1 all expressions had to resolve at ‘compile’ time.

Comments are now allowed in XPath2 expressions, and nested loops are allowed (akin to JOIN in SQL). There is a new doc() function for pulling in nodes from a separate XML document, and RegEx support has been beefed up.
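A couple of these features combined in one small sketch (the document and element names are invented):

```
(: XPath2 comments look like this :)
if (doc('orders.xml')//order[@status = 'open'])
then 'open orders remain'
else 'all orders closed'
```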

JavaServer Faces 1.2 (JSF)

JavaServer Faces is Sun’s answer to ASP.NET, and shares the same basic approach of separating logic from presentation.

Sun seems to have taken the ‘any browser’ abstraction from ASP.NET and extended this to ‘any device’: we were shown a demonstration of the same JSF application serving pages to a web browser, a Telnet client, and a Jabber client, of all things!

It’s possible to define much of the application and component configuration via config files, and this process seemed simpler than the technique for ASP.NET, even version 2.0 with its improved config file handlers.

The other nice thing about JSF 1.2 was the Page Flow model: a sequence of navigation actions by the user can be captured in the config file, allowing JSF to craft up appropriate links (e.g. for Edit, Save, Delete actions) automatically.
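Such a flow is declared as navigation rules in faces-config.xml; a minimal sketch of the shape (the view ids and outcomes are my own, not from the demonstration):

```xml
<!-- Sketch only: when an action method returns "save" or "delete" from
     the edit view, JSF navigates back to the list view. -->
<navigation-rule>
  <from-view-id>/item/edit.jsp</from-view-id>
  <navigation-case>
    <from-outcome>save</from-outcome>
    <to-view-id>/item/list.jsp</to-view-id>
  </navigation-case>
  <navigation-case>
    <from-outcome>delete</from-outcome>
    <to-view-id>/item/list.jsp</to-view-id>
  </navigation-case>
</navigation-rule>
```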

Due Diligence

I spent some valuable time talking to one of the keynote speakers about Due Diligence reviews for software.

An approximation: the source code doesn’t matter; it’s the environment and processes which determine how maintainable the software is.

Sun Tech Days 2004

I attended the Sun Tech Days in London last week. There was a lot of marketing schmooze, and some pretty dull and dry Java implementation stuff, but it was certainly worth going. Heh, it was bizarre to witness the hand-wringing over Microsoft technologies (particularly .NET) by many of the speakers; they spoke as if apologising for an ill-behaved child! Three sessions were interesting to varying degrees: Emerging Web Services Standards, Web Services CodeCamp, and J2EE Transformation Patterns.

Emerging Web Services Standards

Amidst all the hyperbole surrounding Web Services, it is important to define what a Web Service actually is. In its simplest form, a Web Service is an Interprocess Communication mechanism: even the name is a little misleading, as the ‘Web’ as most people know it need not enter into the arena at all. Even the underlying transport need not be HTTP; Web Services can leverage* a variety of protocols, including SMTP, FTP and even WAP or LDAP.

Web Services thus can offer similar services to existing IPC platforms, such as DCOM and CORBA, but — crucially — with many, many benefits over those technologies. The pairing of Web Services with XML allows the provision of services to clients with widely varying requirements; clients can extract from the XML only those operations in which they are interested, without being burdened with the requirement to implement a raft of irrelevant features. In other words, it is Clients which determine the level of service, not the Servers. DCOM/CORBA etc. require the service Endpoints to be fixed and known beforehand; Web Services + XML allow dynamic discovery of Endpoints and services.

There was a brief discussion of Web Services for Remote Portals (WSRP), and how they can be used to abstract the data and functionality provided by a Web Service from the necessary data presentation: [WSRP] aims to allow for interoperability between different kinds of intermediary applications and visual, user-facing web services. Web Services Distributed Management (WSDM) is a little like a meta-WSDL, and allows monitoring and management of Web Services, in a somewhat Aspect-Oriented manner.

The final section of the seminar was devoted to Security. For Web Services, it is not enough to ensure Transport-level security (e.g. HTTPS); rather, within an interconnected group of Web Services, some form of “federated” identification-based security is necessary, probably with “single sign-on” authentication, and trust Relationships. There is currently no standard available for this type of security, although the OASIS group is working on an implementation using SAML.

Web Services CodeCamp

Web Services on the J2EE platform were introduced as: a set of endpoints or ports operating on messages, running within a container, and described entirely by a WSDL document. This implies that a typical Java Web Service will be implemented as a Component, and executed by an Application Server within a Container, in much the same way that EJBs are handled. Two different Endpoint implementations are available: stateful (using Servlets and JAX/RPC) and stateless (using ‘Session Beans’** and EJBs). JAX/RPC essentially specifies how the XML-to-Java mapping is to be achieved, including the WSDL definition. This mapping includes data types, message handlers, and interface stubs; in fact, the mapping of many Java constructs to WSDL is quite natural:

WSDL to Java mappings

Defining a Web Service interface for JAX/RPC is very similar to defining the remote interface of EJBs:

package hello;

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface HelloIF extends Remote {
    public String sayHello(String s) throws RemoteException;
}
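For completeness, a minimal servant class for that interface might look like the following (the class name HelloImpl and the greeting are my invention; the session showed only the interface side):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// The endpoint interface from above, repeated so this sketch is
// self-contained (package declaration omitted for brevity).
interface HelloIF extends Remote {
    String sayHello(String s) throws RemoteException;
}

// Hypothetical implementation class; at deployment time JAX/RPC binds
// a servant like this to the port described in the WSDL.
public class HelloImpl implements HelloIF {
    public String sayHello(String s) throws RemoteException {
        return "Hello, " + s + "!";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(new HelloImpl().sayHello("world"));
    }
}
```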

At this point, it became apparent that deploying a Java-based web service is actually non-trivial. Unlike the xcopy style deployment model of ASP.NET, there is a variety of TLAs to contend with when using J2EE. Tools like Sun Studio ONE (or whatever it is now called) can package Java Web Services automatically, as one would expect.

There are three different models for programming under JAX/RPC:

  1. Stub-based – static, with both endpoints (WSDL, stub) defined at compile time
  2. Dynamic proxy – WSDL exists at compile time, but the stub (implementation) is created at runtime
  3. Dynamic invocation interface – both endpoints created at runtime

Naturally, this order also corresponds to both the order of programming complexity and the degree of flexibility offered to applications.

JAX/RPC represents a tightly-coupled programming model, where, irrespective of endpoint resolution, messages are exchanged ‘immediately’. The Document-Driven (stateless) programming model, however, provides for more loosely-coupled application communication, typically between peers rather than the implicit client-server arrangement of RPC. Data is exchanged as XML documents on the wire, rather than as marshalled RPC calls.

J2EE Transformation Patterns

This session was essentially an advertisement for OptimalJ, although it also provided a useful overview of Model Driven Architecture. The case for code generation tools like OptimalJ was made by the speaker thus: given a single requirements document and ten competent programmers, how many distinct implementations can you expect? The answer, of course, is ten distinct implementations – one for each coder. Now throw some less experienced coders into the mix, or someone whose first language is not the same as that used in the requirements document, and the validity of the implementations begins to disintegrate. If the requirements of the project change part-way through — as experience has shown time after time that they will — then how is it possible to prove, or even have any idea, that the code is a valid implementation of the (new) requirements?

Many modelling tools attempt to ‘round-trip’ between Requirements, Model, and Code. Transformation Patterns, on the other hand, are uni-directional: from Requirements to Model, and then from Model to Code. Code is thus always the output or goal of the process, never one of the inputs to it. This leads to a fill-in-the-blanks coding strategy; however, code is never deleted by the transformation procedure, leaving the programmer in control once the stubs have been generated.

* isn’t this word truly awful?
** another terrible name