14 December 2010

ActiveMQ and Tomcat Articles + 40% Discount on ActiveMQ In Action



As I mentioned here in late October, ActiveMQ In Action entered Manning's pre-production process, the final round of review and editing before the book is actually printed. While working through this process, I was asked to write an article for the TomcatExpert.com website, so I decided to pull a bit of content from one of the chapters that discusses integrating ActiveMQ and Tomcat.

The first article in the three-part series, ActiveMQ and Tomcat: Perfect Partners, has been published. It briefly introduces ActiveMQ, while the second and third articles provide the details on integrating ActiveMQ with Tomcat using both local JNDI and global JNDI. These two approaches are different enough, and the integration process tricky enough, that the material was deemed good content for a series of articles.

The second article in the series, Integrating ActiveMQ With Tomcat Using Local JNDI, covers the integration of ActiveMQ using a local JNDI context. Many people have come to me after encountering issues with this style of integration, so I know it can be confusing.

The third article in this series, Integrating ActiveMQ With Tomcat Using Global JNDI, focuses on configuring JMS resources using a global JNDI context. The global JNDI configuration can be rather difficult because it is a container-wide deployment instead of being contained inside a single web application.

The example application used in the articles comes straight from chapter 8 of ActiveMQ In Action and uses a simple message flow: an HTML form is used to post a message to a queue in ActiveMQ, which is then consumed by the Spring message listener container. While this example is deliberately simple, the real power lies in using the same model in a more distributed manner and/or with a standalone ActiveMQ instance. The topic is popular enough that I've been asked about it numerous times, so I guess it's a good thing it was included in the book.

Just in time for the holidays, you can get 40% off your purchase of ActiveMQ In Action! Just use the coupon code *activemq40* at the time of checkout. Enjoy!

26 October 2010

New Features in ActiveMQ 5.4: Automatic Cluster Update and Rebalance



Apache ActiveMQ 5.4.0 was released in August, followed quickly in September by a 5.4.1 release. Not only were there tons of fixes in these releases, but there were also some really great new features including message scheduling, support for WebSockets, new Unix control scripts, full message priority, producer message caching, cluster client updates and cluster client rebalancing, just to name a few. In this blog post, I'm going to discuss the new cluster client update and cluster client rebalancing features so that you get a taste of how they are used.

Problem Introduction


When using a network of brokers with ActiveMQ, the configuration of the brokers that form the network has always been rather static. The first step toward a more dynamic network of brokers was a feature I presented in a previous blog post titled How to Use Automatic Failover In an ActiveMQ Network of Brokers. In the event of a broker connection failure, the use of the failover transport for the network connections between brokers allows those connections to be automatically reestablished. This is a wonderful feature for sure, but it only got us part of the way toward a truly dynamic cluster of brokers.

Two new features in ActiveMQ 5.4 introduce the concept of making the cluster of brokers even more dynamic. These two items are the ability to:
  • Update the clients in the cluster
  • Rebalance the brokers in the cluster
Both of these features are quite interesting so I will explain how each one works.

Update Cluster Clients


In the past, when clients connected to brokers in the cluster, it was recommended to keep a comma-separated list of broker URIs in the failover transport configuration. Below is an example of this style of configuration:

failover:(tcp://machineA:61616,tcp://machineB:61616,tcp://machineC:61616)?randomize=false

The failover configuration example above lives on the client side and contains a static list of the URIs for each broker in the cluster. In the event that a broker in the cluster fails, the failover transport is what allows a client to automatically reconnect to another broker in that list of broker URIs. Unfortunately, this style of configuration can be difficult to maintain because it is static. If you want to add another broker to the cluster, every client's failover transport configuration must be updated manually. Depending on the number of clients in your cluster, this could really be a maintenance headache. This is where the first new feature comes to the rescue.
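
For context, here is a minimal client-side sketch of this static approach in plain Java (the machine names are the placeholders from the URI above):

import javax.jms.Connection;

import org.apache.activemq.ActiveMQConnectionFactory;

public class StaticFailoverClient {
    public static void main(String[] args) throws Exception {
        // The entire broker list is hard-coded on the client, which is what makes
        // this style of configuration painful to maintain as the cluster changes.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
            "failover:(tcp://machineA:61616,tcp://machineB:61616,tcp://machineC:61616)?randomize=false");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers and consumers as usual
    }
}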

ActiveMQ 5.4 provides the ability to automatically update the clients in the cluster. That is, if a new broker joins or leaves the existing network of brokers, the clients' failover transport configurations no longer need to be manipulated manually. Using configuration options on the broker, you can tell the broker to update each client's failover transport configuration automatically. Below is an example of this new feature:


<broker brokerName="brokerA" ...>
...
 <transportConnectors>
   <transportConnector name="tcp-connector" uri="tcp://192.168.0.23:61616" updateClusterClients="true" />
 </transportConnectors>
...
</broker>


The configuration above is on the broker side. Notice the new updateClusterClients="true" attribute in the <transportConnector> element. This attribute is used in conjunction with the failover transport on the client side, and it tells the broker to automatically update each connected client's failover transport configuration when the network topology changes. In addition to the updateClusterClients property, there are also a few others including:
  • updateClusterClientsOnRemove - Updates a client when brokers are removed from the cluster.
  • updateClusterFilter - A comma-separated list of regexes to match broker names that are part of the cluster. This allows flexibility for the inclusion/exclusion of brokers.
  • updateURIsURL - Used to provide the path to a file containing a comma-separated list of broker URIs.
These new features are extremely powerful because they allow for a much more dynamic network of brokers configuration. Anyone who has had to deal with the static nature of the failover transport configuration should understand the power in these new features and do some experimentation to see how they operate.
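
If you start an embedded broker from Java rather than from XML, the same options should be settable on the TransportConnector; the setters below are assumed to mirror the XML attributes, so treat this as a rough sketch rather than a definitive configuration:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.TransportConnector;

public class BrokerWithClientUpdates {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("brokerA");

        // Roughly equivalent to the <transportConnector> attributes shown above
        TransportConnector connector = broker.addConnector("tcp://192.168.0.23:61616");
        connector.setName("tcp-connector");
        connector.setUpdateClusterClients(true);
        connector.setUpdateClusterClientsOnRemove(true); // see the update at the end of this post
        connector.setUpdateClusterFilter("broker[A-C]");  // regex on broker names (example value)

        broker.start();
        broker.waitUntilStopped();
    }
}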

Rebalance Cluster Clients


The second new feature also builds upon the failover transport configuration, but for a slightly different purpose. Consider that when a new broker is added to or removed from the cluster, clients cannot automatically take advantage of the change. Even with the new ability to update the clients so that they have knowledge of the broker being added or removed, there was previously no way for them to actually use that broker unless a failure occurred. Well, that's exactly what this feature addresses.

ActiveMQ 5.4 allows clients to be automatically disconnected from their current broker and reconnected to a different broker. Here's an example to illustrate this feature. Let's say you have a cluster of three brokers: brokerA, brokerB and brokerC, each of which has some clients connected. When a new broker is added to the cluster, if the updateClusterClients property is set to true, then the clients will be notified about the new broker, but no action will be taken unless the rebalanceClusterClients property is also set to true. When the rebalanceClusterClients property is set to true, the clients will automatically be disconnected from their current broker in order to reconnect to another broker in the cluster. Below is an example configuration for the new rebalance property:


<broker brokerName="brokerA" ...>
...
 <transportConnectors>
   <transportConnector name="tcp-connector" uri="tcp://192.168.0.23:61616" updateClusterClients="true" rebalanceClusterClients="true" />
 </transportConnectors>
...
</broker>


Notice the new rebalanceClusterClients attribute in the <transportConnector> element. This property enables the clients to immediately take advantage of the new broker in the cluster. Instead of waiting for the next connection failure and a reconnect from the failover transport, the clients are told to reconnect immediately to another broker in their list.

Testing The New Features


Testing these two new features is pretty easy actually. Below are the steps I have used on a few occasions:

  1. Make sure that your clients are logging the broker URI to which they are connected for sending or receiving messages
  2. Configure each client to only have one broker URI in its failover transport configuration
  3. Configure the transport connector on the broker-side to set the updateClusterClients property to true and the rebalanceClusterClients property to true
  4. Start up the brokers in your cluster
  5. Start up the clients that connect to a broker in the cluster
  6. Add a new broker to the cluster and observe the following behavior:
Due to the two new properties that have been set on the broker-side, each client will be notified of the new broker that was added to the cluster AND each client will automatically reconnect. That is, the functionality of the failover transport will be engaged so that each client is disconnected from the current broker and reconnected to another broker in the list (i.e., the list of broker URIs in the failover transport configuration).
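
For step 1, a minimal client sketch (the class and queue names are just placeholders) that starts with a single broker URI and logs which transport it is using might look like this:

import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class RebalanceTestClient {
    public static void main(String[] args) throws Exception {
        // Only one broker URI is configured; the broker pushes cluster updates to the client
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("failover:(tcp://machineA:61616)");
        ActiveMQConnection connection = (ActiveMQConnection) factory.createConnection();
        connection.start();

        // The failover transport logs its reconnects at INFO level; the transport's
        // toString() also indicates the URI currently in use
        System.out.println("Connected via: " + connection.getTransport());

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.FOO"));
        // ... receive messages and watch the logging change as brokers join the cluster
    }
}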

The fact that each client reconnects to a new broker tells you that:
  1. The updateClusterClients property is working correctly because you should see the logging change from one broker URI to another. Remember that each client was started with only one broker URI in its failover transport config, so the fact that the clients are reconnecting tells you that they are receiving notifications of changes to the cluster.
  2. The rebalanceClusterClients property is working properly because the clients reconnected.
Verify this using the logging from each client. You will see that each client was sending or receiving messages to/from one broker URI and suddenly the logging changes to another broker URI. This tells you that the clients are being updated and rebalanced.

Conclusion


These new features are quite powerful additions to the ActiveMQ network of brokers. They really advance ActiveMQ beyond the static configurations upon which we have all relied for many years now. Most likely the sys admins and dev ops folks will enjoy these features the most because they will no longer need to manually manage a static list of broker URIs for clients.

As I said earlier, many other great features were also introduced in ActiveMQ 5.4 and 5.4.1. So try them out yourself to see if they help to improve your application development.

Update: If you only set the updateClusterClients="true" and the rebalanceClusterClients="true" options, you will notice that when a broker in the network fails and is brought back up, the client connections to other brokers in the network are not automatically rebalanced. This is due to the lack of the updateClusterClientsOnRemove="true" option. After adding this option to the config, network broker clients are notified of broker failures which basically completes the circle and allows the automatic rebalancing to work as it should.

40% Off ActiveMQ In Action To Celebrate Going Into Production!



Good news -- ActiveMQ In Action has now entered the Manning production process! This means that we are done writing the content for the book and it is currently being copy edited and undergoing technical review. In fact, we are currently seeking reviewers for the book. If you are interested in reviewing the book, please send an email to mkt@manning.com.

Also, we are currently offering 40% off your purchase of ActiveMQ In Action! Just use the coupon code *activemq40* at the time of checkout. Hurry and make your purchase today as this discount won't last long!

25 October 2010

SpringOne 2GX 2010 in Chicago



Last week I attended our annual SpringOne 2GX developer conference and saw many excellent presentations by many different folks. As is common at a developer conference, there were numerous times when I was torn between attending one talk or another because of scheduling conflicts. Probably the most intriguing talk I attended was a technical deep-dive of hypervisors and virtualization for developers by Richard McDougall. I learned a lot about virtualization that I didn't know and, as always, Richard's depth of knowledge on this topic is completely impressive! But my favorite talk was one delivered by my friends Dave Syer and Mark Fisher titled Concurrent and Distributed Applications with Spring.

I always enjoy discussions, books, articles, etc. on the topic of Java concurrency, but Dave and Mark's session was different. In the same vein as the Java Concurrency in Practice book, Dave and Mark brought a practicality to the topic that really resonated with the attendees. When I go to conferences I tend to judge sessions not solely based on my own opinion, but more so on the opinions of other attendees. Not only was this session completely packed, but it stayed packed the entire time, i.e., people were not walking out early. This was due to the way that Dave and Mark delivered a difficult topic in a way that all Java developers can understand. What many conference sessions, books, articles, etc. on Java concurrency have in common is that they don't seek to simplify the content and present it in a way that makes it easier to understand. Dave and Mark really tore down most of the complexity of Java concurrency and presented it in a way that makes it seem approachable. In fact, I saw more people taking notes in this session than in any other that I attended. They also provided hands-on examples that were simplified and to the point. I think that they could have gone on discussing this topic for another 90 minutes and people would have stayed. Now, aren't you jealous that you weren't there?!

Other excellent sessions included:
  • Gaining visibility into enterprise Spring applications with tc server Spring Edition by Steve Mayzak
  • Clustering and load-balancing with tc Server and httpd by Mark Thomas
  • Mastering MVC 3 by Keith Donald
  • Developer Tools to push your Productivity by Andy Clement and Christian Dupuis
  • What's new in Spring Integration 2.0? by Mark Fisher and Oleg Zhurakousky
  • Diagnosing Performance Issues, with Spring Insight, Before it's a Problem by Scott Andrews and Jon Travis
  • Spring and Java EE 6 by Juergen Hoeller
  • Payments in one API by John Davies and Rossen Stoyanchev
  • How to build business applications using Google Web Toolkit and Spring Roo by Amit Manjhi
  • Harnessing the power of HTML5 by Scott Andrews and Jeremy Grelle
  • Case Study: EMC Next-Generation ERP Integration Architecture by Brian Dussault and Tom McCuch
  • Case Study: Migrating Hyperic HQ from EJB to Spring by Jennifer Hickey
  • Monitoring Spring Batch and Spring Integration with SpringSource Hyperic by Dave Syer


There were also tons of really great Groovy and Grails talks such as:
  • Transforming to Groovy by Venkat Subramaniam
  • GORM Inside And Out by Jeff Brown
  • Improving your Groovy Code Quality by Venkat Subramaniam
  • Tuning Grails applications by Peter Ledbrook
  • Unit and Functional Testing using Groovy by Venkat Subramaniam
  • Groovy and Concurrency by Paul King
  • Tomorrow's Tech Today: HTML 5 + Grails by Scott Davis


So if you didn't make it to SpringOne this year, now you have just a taste of what you missed. And hopefully you have good reason to consider attending next year. Any guesses on where the conference might take place next year? Of course, I'm rooting for Boulder, CO. Adam? Are you listening? ;-)

21 May 2010

50% Off ActiveMQ In Action!



A new early access release of ActiveMQ In Action has been made available before the book goes into final review and copy editing. All 14 chapters are included in this MEAP release.

For a limited time you can get the definitive book on ActiveMQ at a 50% discount! Just use the coupon code *activemq50* at the time of checkout. Hurry and get this discounted price while it lasts because this offer expires on Monday, May 31, 2010.

I want to purchase ActiveMQ in Action today!

Tuning JMS Message Consumption In Spring



In a previous blog post titled Using Spring to Receive JMS Messages, I introduced the use of the Spring default message listener container for asynchronous consumption of JMS messages. One very common discovery that folks make when first using JMS is that producers can send messages much faster than consumers can receive and process them. When using JMS queues, I always recommend the use of more consumers than you have producers. (When using JMS topics, you should only use a single consumer to guard against receiving the same message multiple times.) This is a normal situation with message-oriented middleware (MOM) and it is easy to handle if you are using the Spring message listener container.

The Spring DefaultMessageListenerContainer (DMLC) is a highly flexible container for consuming JMS messages that can handle many different use cases via the numerous properties that it provides. For the situation mentioned above, the DMLC offers the ability to dynamically scale the number of consumers. That is, as the number of messages available for consumption increases, the DMLC can automatically increase and decrease the number of consumers. To configure the DMLC to automatically scale the number of message consumers, the concurrentConsumers property and the maxConcurrentConsumers property are used. Below is an example JMS namespace style of XML configuration that employs these properties:


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jms="http://www.springframework.org/schema/jms"
xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms-3.0.xsd">

<!-- A JMS connection factory for ActiveMQ -->
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory"
p:brokerURL="tcp://foo.example.com:61616" />

<!-- A POJO that implements the JMS message listener -->
<bean id="simpleMessageListener" class="com.mycompany.SimpleMessageListener">

<!-- A JMS namespace aware Spring configuration for the message listener container -->
<jms:listener-container
container-type="default"
connection-factory="connectionFactory"
acknowledge="auto"
concurrency="10-50">
<jms:listener destination="TEST.FOO" ref="simpleMessageListener" method="onMessage" />
</jms:listener-container>
</beans>


Notice the concurrency="10-50" property above. This is a simplified configuration for setting the concurrentConsumers=10 and the maxConcurrentConsumers=50 properties of the DMLC. This tells the DMLC to always start up a minimum of 10 consumers. When a new message has been received, if the maxConcurrentConsumers has not been reached and the value of the idleConsumerLimit property has not been reached, then a new consumer is created to process the message. This behavior from the DMLC continues up to the limit set by the maxConcurrentConsumers property. When no messages are being received and the consumers become idle, the number of consumers is automatically decreased.

(NOTE: The idleConsumerLimit property is used to specify the maximum number of consumers that are allowed to be idle at a given time. The use of this property was recently clarified a bit in the Spring 3.x trunk. Increasing this limit causes invokers to be created more aggressively, which can be useful to ramp up the number of consumers faster.)

It is important to be aware of a couple of things related to this dynamic scaling:
  1. You should not increase the number of concurrent consumers for a JMS topic. This leads to concurrent consumption of the same message, which is hardly ever desirable.
  2. The concurrentConsumers property and the maxConcurrentConsumers property can be modified at runtime, e.g., via JMX or directly on the container (see the sketch below)
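
For example, assuming your code has a reference to the container bean, adjusting the consumer range on a running application is just a couple of setter calls (a minimal sketch; the class name is illustrative):

import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ConsumerTuner {

    // Adjust the consumer range of a running container, e.g., in response to load
    public void widenConsumerRange(DefaultMessageListenerContainer container) {
        container.setConcurrentConsumers(20);     // raise the floor
        container.setMaxConcurrentConsumers(100); // raise the ceiling
    }
}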

The dynamic scaling can be tuned even further through the use of the idleTaskExecutionLimit property. The use of this property is best explained by a portion of the Javadoc:


Within each task execution, a number of message reception attempts (according to the "maxMessagesPerTask" setting) will each wait for an incoming message (according to the "receiveTimeout" setting). If all of those receive attempts in a given task return without a message, the task is considered idle with respect to received messages. Such a task may still be rescheduled; however, once it reached the specified "idleTaskExecutionLimit", it will shut down (in case of dynamic scaling).

Raise this limit if you encounter too frequent scaling up and down. With this limit being higher, an idle consumer will be kept around longer, avoiding the restart of a consumer once a new load of messages comes in. Alternatively, specify a higher "maxMessagesPerTask" and/or "receiveTimeout" value, which will also lead to idle consumers being kept around for a longer time (while also increasing the average execution time of each scheduled task).


Note the recommendations if you experience dynamic scaling taking place too often. To deal with this situation, you should experiment with increases to one or more of the following properties:
  • idleTaskExecutionLimit - The limit for the number of allowed idle executions of a receive task. The default is 1, causing idle resources to be closed early once a task does not receive a message.
  • maxMessagesPerTask - The maximum number of messages to process in a single task. This determines how long a task lives before being reaped. The default is unlimited (-1) so you may not need to change this property.
  • receiveTimeout - The timeout to be used for JMS receive operations. The default is 1000 ms.
  • idleConsumerLimit - As I noted above, this property was recently clarified a bit in the Spring 3.x trunk and exposed as a writable property. It is yet another property to tune for situations where you need to ramp up the number of concurrent consumers faster.

One important thing to note about these tuning properties: they are not usable in the JMS namespace style of XML configuration. To use them, you must use either a pure Spring XML configuration or straight Java. Below is an example of how to use the receiveTimeout property and the idleTaskExecutionLimit property:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

<!-- A JMS connection factory for ActiveMQ -->
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory"
p:brokerURL="tcp://foo.example.com:61616" />

<!-- The JMS destination -->
<bean id="destination" class="org.apache.activemq.command.ActiveMQQueue"
p:physicalName="TEST.FOO" />

<!-- A POJO that implements the JMS message listener -->
<bean id="simpleMessageListener" class="com.mycompany.SimpleMessageListener">

<!-- A pure Spring configuration for the message listener container -->
<bean id="msgListenerContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer"
p:connectionFactory-ref="connectionFactory"
p:destination-ref="destination"
p:messageListener-ref="simpleMessageListener"
p:concurrentConsumers="10"
p:maxConcurrentConsumers="50"
p:receiveTimeout="5000"
p:idleTaskExecutionLimit="10"
p:idleConsumerLimit="5" />

</beans>

In the example configuration above, the receiveTimeout property is set to five seconds to tell the DMLC's receive operation to poll for a message for five seconds instead of the default one second. Also, the idleTaskExecutionLimit property is set to 10 to allow idle tasks to execute 10 times instead of the default of 1. Lastly, the idleConsumerLimit property specifies the limit on the number of idle consumers. This property can be used to more aggressively ramp up the number of concurrent consumers.
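
The same tuning can also be expressed in straight Java, the other option mentioned above. Here is a rough sketch using the listener class, destination and tuning values from the examples in this post (the factory class name is just illustrative):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

import com.mycompany.SimpleMessageListener;

public class TunedListenerContainerFactory {

    public DefaultMessageListenerContainer createContainer() {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(new ActiveMQConnectionFactory("tcp://foo.example.com:61616"));
        container.setDestinationName("TEST.FOO");
        container.setMessageListener(new SimpleMessageListener());

        // The same tuning properties shown in the XML configuration above
        container.setConcurrentConsumers(10);
        container.setMaxConcurrentConsumers(50);
        container.setReceiveTimeout(5000);
        container.setIdleTaskExecutionLimit(10);
        container.setIdleConsumerLimit(5);

        container.afterPropertiesSet(); // required when not created by a Spring context
        container.start();
        return container;
    }
}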

In addition to tuning these various properties for dynamic consumer scaling, it is also important to understand that the DMLC can provide various levels of caching for JMS resources (i.e., JMS connections, sessions and consumers). By default, the DMLC will cache all JMS resources unless an external transaction manager is configured (because some containers require fresh JMS resources for external transactions). When an external transaction manager is configured, none of the JMS resources are cached by default. The level of caching can be configured using the cacheLevel property. This property allows for tiered caching from connection, to session, to consumer. This allows caching of:
  • The connection
  • The connection and the session
  • The connection, the session and the consumer

Below is an example configuration that uses the cacheLevel property to specify consumer level caching:


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:jms="http://www.springframework.org/schema/jms"
xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms-3.0.xsd">

<!-- A JMS connection factory for ActiveMQ -->
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory"
p:brokerURL="tcp://foo.example.com:61616" />

<!-- A POJO that implements the JMS message listener -->
<bean id="simpleMessageListener" class="com.mycompany.SimpleMessageListener">

<!-- A JMS namespace aware Spring configuration for the message listener container -->
<jms:listener-container
container-type="default"
connection-factory="connectionFactory"
acknowledge="auto"
concurrency="10-50"
cache="consumer">
<jms:listener destination="TEST.FOO" ref="simpleMessageListener" method="onMessage" />
</jms:listener-container>
</beans>

Caching at the consumer level means that the connection, the session and the consumer are all cached. Notice that, unlike the tuning properties discussed earlier, the cache level can be set in the Spring JMS namespace style of XML configuration via the cache attribute.
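
For completeness, when configuring the container in straight Java the same setting is exposed via the cacheLevel property; a small sketch (the helper class is illustrative):

import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class CacheLevelExample {

    public void configureCaching(DefaultMessageListenerContainer container) {
        // Equivalent to cache="consumer" in the JMS namespace configuration:
        // caches the connection, the session and the consumer
        container.setCacheLevel(DefaultMessageListenerContainer.CACHE_CONSUMER);

        // Alternatively, by name:
        // container.setCacheLevelName("CACHE_CONSUMER");
    }
}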

The session is cached based on ack-mode and the consumer is cached based on the session, the selector and the destination. It's necessary to know this info to better understand where/when the JMS resources can and cannot be reused. For example, caching consumers that use different selectors and consume from different destinations is only going to be relevant if you partition these items appropriately for reuse. That is, you may need to use a separate connection and listener-container configuration if the cache keys are different and if you want to cache sessions or consumers for reuse.

The overall point of the caching is that it helps reduce the recurring churn involved in the creation and destruction of JMS resources. Reducing that churn by using caching, and partitioning these resources appropriately so they can be reused, can definitely improve the overall performance of the application.

Hopefully this post helps you understand how to tune JMS message consumption in Spring. As you apply Spring JMS in your applications and experiment further and further, you will discover how much the DMLC is actually doing for you and how many more features it has beyond what you could easily build yourself.

08 February 2010

Using Spring to Receive JMS Messages



Have you ever had a need to create your own JMS consumer? Or will you have this need in the future? If you answered yes to either one of these questions, this post will simplify your life.

In the previous post, I discussed Using the Spring JmsTemplate to Send JMS Messages. As a follow-on, in this post I will demonstrate how to receive messages using Spring JMS. Although the previously mentioned JmsTemplate can receive messages synchronously, here I will focus on asynchronous message reception using the Spring message listener container architecture, specifically the DefaultMessageListenerContainer.

The DefaultMessageListenerContainer (DMLC) is another wonderful convenience class that is part of the Spring Framework's JMS package. As you can see in the Javadoc, the DMLC is not a single class, but a well-abstracted hierarchy for the purpose of receiving messages. The reason for this is that the DMLC takes its inspiration from Message Driven Beans (MDB).

MDBs were originally defined in the EJB 2.0 spec as stateless, transaction-aware message listeners that use JMS resources provided by the Java EE container. MDBs can also be pooled by the Java EE container in order to scale up. In short, MDBs were designed for asynchronous message reception in a way that the Java EE container could manage. Although the intention was good, unfortunately the disadvantages of MDBs are numerous, including:
  • MDBs are static in their configuration and creation (they cannot be created dynamically)
  • MDBs can only listen to a single destination
  • MDBs can only send messages after first receiving a message
  • MDBs require an EJB container (and therefore the Java EE container)
Although the Spring DMLC took its inspiration from MDBs, it did not replicate these disadvantages; quite the opposite, in fact. The Spring DMLC is commonly used to create what have become known as Message-Driven POJOs (MDP). MDPs offer all of the same functionality as MDBs but without the disadvantages listed above. The Spring DMLC provides many features including:
  • Various levels of caching of the JMS resources (connections and sessions) and JMS consumers for increased performance
  • The ability to dynamically grow and shrink the number of consumers to concurrently process messages based on load (see setConcurrentConsumers and setMaxConcurrentConsumers) for additional performance
  • Automatic re-establishment of connections if the message broker becomes unavailable
  • Asynchronous execution of a message listener using the Spring TaskExecutor
  • Support for local JMS transactions as well as an external transaction manager around message reception and listener execution
  • Support for various message acknowledgement modes, each providing different semantics
For some situations, it is important to understand the additional error handling and the redelivery semantics that are provided by the DMLC. For more information, see the AbstractMessageListenerContainer JavaDoc.

The reason I recommend the DMLC (or even the SimpleMessageListenerContainer) is that writing JMS consumers by hand can be a lot of work. In doing so, you must manually handle and manage the JMS resources and the JMS consumers, any concurrency that is necessary and any use of transactions. If you've ever done such work you know how arduous and error prone it can be. Certainly MDBs provide some of these features, but with all their disadvantages. By creating MDPs using the Spring DMLC, I have seen users save a tremendous amount of time and increase their productivity significantly. This is because the DMLC offers much flexibility, robustness and a high degree of configurability, and it has widespread deployment in businesses all over the world (so it has been widely tested).

Compared to MDBs, use of the Spring DMLC is actually surprisingly simple. The easiest way to get started is to use an XML configuration, as the Spring DMLC provides JMS namespace support. Below is a Spring application context that demonstrates the configuration to use the Spring DMLC with Apache ActiveMQ:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:jms="http://www.springframework.org/schema/jms"
       xmlns:p="http://www.springframework.org/schema/p"
       xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms-3.0.xsd">

  <!-- A JMS connection factory for ActiveMQ -->
  <bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory"
  p:brokerURL="tcp://foo.example.com:61616" />

  <!-- A POJO that implements the JMS message listener -->
  <bean id="simpleMessageListener" class="com.mycompany.SimpleMessageListener">

  <!-- The Spring message listener container configuration -->
  <jms:listener-container
      container-type="default"
      connection-factory="connectionFactory"
      acknowledge="auto">
    <jms:listener destination="TEST.FOO" ref="simpleMessageListener" method="onMessage" />
  </jms:listener-container>
</beans>


For folks who are already familiar with the Spring Framework, the XML above is quite straightforward. It defines a connection factory bean for ActiveMQ, a message listener bean and the Spring listener-container. Notice that the jms:listener contains the destination name, not the listener-container. This level of separation is important because it means that the listener-container is not tied to any destination; only the jms:listener is. You can define as many jms:listener elements as necessary for your application and the container will handle them all.

Below is the message listener implementation:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

import org.apache.log4j.Logger;

public class SimpleMessageListener implements MessageListener {

  private static final Logger LOG = Logger.getLogger(SimpleMessageListener.class);

  public void onMessage(Message message) {
      try {
       TextMessage msg = (TextMessage) message;
       LOG.info("Consumed message: " + msg.getText());
      } catch (JMSException e) {
          LOG.error("Error consuming message", e);
      }
  }

}

The message listener implementation is deliberately simple as its only purpose is to demonstrate receiving the message and logging the payload of the message. Although this listener implements the javax.jms.MessageListener interface, there are a total of three options available for implementing a message listener to be used with the Spring DMLC:
  • The javax.jms.MessageListener - This is what was used in the example above. It is a standardized interface from the JMS spec but handling threading is up to you.
  • The Spring SessionAwareMessageListener - This is a Spring-specific interface that provides access to the JMS session object, which is very useful for request-response messaging. Just be aware that you must do your own exception handling (i.e., override the handleListenerException method so exceptions are not lost).
  • The Spring MessageListenerAdapter - This is a Spring-specific class that allows for type-specific message handling on a plain POJO, so your code avoids any JMS-specific dependencies. (Both of these options are sketched below.)
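
Below is a hedged sketch of the second and third options; the class names, the reply logic and the handleText method are purely illustrative, not from the book:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.springframework.jms.listener.SessionAwareMessageListener;
import org.springframework.jms.listener.adapter.MessageListenerAdapter;

// Option 2: a SessionAwareMessageListener that uses the provided session to reply
public class ReplyingMessageListener implements SessionAwareMessageListener {

    public void onMessage(Message message, Session session) throws JMSException {
        TextMessage request = (TextMessage) message;
        TextMessage reply = session.createTextMessage("Re: " + request.getText());
        // Reply on the same session to the destination named in JMSReplyTo
        // (assumes the producer set a JMSReplyTo destination)
        session.createProducer(request.getJMSReplyTo()).send(reply);
    }

    // Option 3: a plain POJO with no JMS dependencies, wrapped by a MessageListenerAdapter
    public static class PlainPojoHandler {
        public void handleText(String text) {
            System.out.println("Consumed payload: " + text);
        }
    }

    public static Object adaptedListener() {
        MessageListenerAdapter adapter = new MessageListenerAdapter(new PlainPojoHandler());
        adapter.setDefaultListenerMethod("handleText");
        return adapter; // use as the message listener of the container
    }
}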


So not only is the Spring message listener container easy to use, it is also full of options to adapt to many environments. And I've only focused on the DefaultMessageListenerContainer here; I have not talked about the SimpleMessageListenerContainer (SMLC) beyond a simple mention. At a high level, the difference is that the SMLC is static and provides no support for transactions.

One very big advantage of the Spring message listener container is that this type of XML config can be used in a Java EE container, in a servlet container or standalone. This same Spring application context will run in WebLogic, JBoss, Tomcat or in a standalone Spring container. Furthermore, the Spring DMLC also works with just about any JMS-compliant messaging middleware available. Just define a bean for the JMS connection factory for your MOM, possibly tweak a few properties on the listener-container, and you can begin consuming messages from different MOMs.

I should also note that the XML configuration is certainly not a requirement either. You can go straight for the underlying Java classes in your own code if you wish. I've used each style in various situations, but to begin using the Spring DMLC in the shortest amount of time, I find the Spring XML application context the fastest.
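
For example, a minimal sketch of wiring the DMLC directly in Java, using the same connection factory, destination and listener as the XML above, might look like this:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

import com.mycompany.SimpleMessageListener;

public class PlainJavaConsumer {
    public static void main(String[] args) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(new ActiveMQConnectionFactory("tcp://foo.example.com:61616"));
        container.setDestinationName("TEST.FOO");
        container.setMessageListener(new SimpleMessageListener());

        // Lifecycle calls that the Spring container would otherwise make for you
        container.afterPropertiesSet();
        container.start();
    }
}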

Update: I have made all of the code for these examples available via a GitHub repo.

04 February 2010

Using Spring to Send JMS Messages



Recently I stumbled upon a number of places in some docs and mailing lists where claims are made that the Spring JmsTemplate is full of anti-patterns, is horribly inefficient and shouldn't be used. Well, I'm here to debunk these erroneous claims by pointing out a class in the Spring Framework that was overlooked entirely.

The Spring JmsTemplate is a convenience class for sending and receiving JMS messages in a synchronous manner. The JmsTemplate was originally designed to be used with a J2EE container where the container provides the necessary pooling of the JMS resources (i.e., connections, consumers and producers). Such requirements came from the EJB spec. But when developers began using the JmsTemplate outside of J2EE containers, and because some JMS providers do not offer caching/pooling of JMS resources, a different solution was necessary. Enter the Spring CachingConnectionFactory.

The CachingConnectionFactory is meant to wrap a JMS provider's connection factory to provide caching of sessions, connections and producers, as well as automatic connection recovery. By default, it reuses a single JMS connection and caches a single session, and this model works very well with most MOMs. But if you need to scale further, you can also specify the number of sessions to cache using the sessionCacheSize property.

Below is a snippet from a Spring app context that demonstrates the configuration for the CachingConnectionFactory:


...
<!-- A connection to ActiveMQ -->
<bean id="amqConnectionFactory"
class="org.apache.activemq.ActiveMQConnectionFactory"
p:brokerURL="tcp://localhost:61616" />

<!-- A cached connection to wrap the ActiveMQ connection -->
<bean id="cachedConnectionFactory"
class="org.springframework.jms.connection.CachingConnectionFactory"
p:targetConnectionFactory-ref="amqConnectionFactory"
p:sessionCacheSize="10" />

<!-- A destination in ActiveMQ -->
<bean id="destination"
class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="FOO.TEST" />
</bean>

<!-- A JmsTemplate instance that uses the cached connection and destination -->
<bean id="producerTemplate"
class="org.springframework.jms.core.JmsTemplate"
p:connectionFactory-ref="cachedConnectionFactory"
p:defaultDestination-ref="destination" />
...


As you can see, the configuration for the CachingConnectionFactory along with the JmsTemplate is quite simple. Furthermore, both classes live under the org.springframework.jms package, so they're both included in the spring-jms jar file, making their use even easier.
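
If you prefer to skip the XML here, the same wiring is only a few lines of Java. Below is a sketch using the same broker URL and queue as above (the factory class name is just illustrative):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.command.ActiveMQQueue;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class ProducerTemplateFactory {

    public JmsTemplate createProducerTemplate() {
        // Wrap the ActiveMQ connection factory so sessions and producers are cached
        CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(
                new ActiveMQConnectionFactory("tcp://localhost:61616"));
        cachingConnectionFactory.setSessionCacheSize(10);

        JmsTemplate jmsTemplate = new JmsTemplate(cachingConnectionFactory);
        jmsTemplate.setDefaultDestination(new ActiveMQQueue("FOO.TEST"));
        return jmsTemplate;
    }
}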

The only thing left to do is use the producerTemplate bean in your Java code to actually send a message. This is shown below:


import java.util.Date;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.MessageCreator;

public class SimpleMessageProducer {

private static final Logger LOG = Logger.getLogger(SimpleMessageProducer.class);

@Autowired
protected JmsTemplate jmsTemplate;

protected int numberOfMessages = 100;

public void sendMessages() throws JMSException {
for (int i = 0; i < numberOfMessages; ++i) {

// Local variables referenced inside the anonymous MessageCreator must be final
final int count = i;
final StringBuilder payload = new StringBuilder();
payload.append("Message [").append(count).append("] sent at: ").append(new Date());

jmsTemplate.send(new MessageCreator() {
public Message createMessage(Session session) throws JMSException {
TextMessage message = session.createTextMessage(payload.toString());
message.setIntProperty("messageCount", count);
LOG.info("Sending message number [" + count + "]");
return message;
}
});
}
}
}


The SimpleMessageProducer class above demonstrates the use of Spring autowiring to wire the jmsTemplate property to the producerTemplate bean defined in the app context further above. An anonymous MessageCreator instance is then used to actually create each message for the jmsTemplate to send.
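
As a side note, if you don't need to set headers or properties on each message, JmsTemplate's convertAndSend method is an even shorter path; by default, a String payload is converted to a TextMessage by the SimpleMessageConverter. A quick sketch using the same template bean:

// Sends a TextMessage to the template's default destination using the
// default SimpleMessageConverter ('jmsTemplate' is the autowired bean from above)
jmsTemplate.convertAndSend("Message sent at: " + new java.util.Date());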

The JmsTemplate and the CachingConnectionFactory are both very widely used in businesses of all sizes throughout the world. Coupled with one of the Spring message listener containers, they provide an ideal solution.

I'll elaborate on message consumption using the Spring DefaultMessageListenerContainer and the SimpleMessageListenerContainer in a future blog post.

21 January 2010

How to Use Automatic Failover In an ActiveMQ Network of Brokers

Update: Here is the tarball of ActiveMQ configurations used for this blog post.

Last week I tested a new feature in ActiveMQ 5.3.0 that supports automatic failover/reconnect in a network of brokers. Besides adding this information to the ActiveMQ book, someone suggested that I also post it on my blog for easier access, so here you go!

Folks familiar with ActiveMQ already know that a network of brokers allows many broker instances to be networked for massive scalability. Prior to the addition of this feature in ActiveMQ 5.3, if one of the brokers in the network went down, reestablishing a connection with that broker when it came back up was a manual process fraught with difficulty. By adding support for failover to the network of brokers, any broker in the network can come and go at will without any manual intervention. A very powerful feature, indeed. Although this post is long, the outcome of the testing is well worth it.



The first thing to note is the topology for the network of brokers. I used a network of three brokers named amq1, amq2 and amq3. The attached diagram explains the topology, including the consumers and producers. amq1 and amq2 are standalone with no network connector. amq3 defines a network connector with failover to amq1 and amq2. Consumers exist on amq1 and amq2, and the producer will connect to amq3. To start with, I have only configured a uni-directional network connector in amq3. Later I will change the configuration for a bi-directional network connector.

Thanks to the ability to upload any file to Google Docs this week, you can download the configuration files for the three brokers.

The next thing to do is outline the steps I used to test out this feature. These steps were performed on Mac OS X (Unix) but could easily be adapted for Windoze. Below are those steps:

1) Open six terminal windows as defined below:
1a) Terminal 1 = cd into the amq1 dir
1b) Terminal 2 = cd into the amq2 dir
1c) Terminal 3 = cd into the amq3 dir
1d) Terminal 4 = cd into the amq1/example dir
1e) Terminal 5 = cd into the amq1/example dir
1f) Terminal 6 = cd into the amq1/example dir

2) Terminal 1: start up amq1 (./bin/activemq)
3) Terminal 2: start up amq2 (./bin/activemq)
4) Terminal 3: start up amq3 (./bin/activemq)

Thanks to the configuration of the ActiveMQ logging interceptor, you should see that amq3 makes a network connection to either amq1 or amq2. For the rest of these steps, let's assume that amq3 connected to amq1.

5) Terminal 4: start up a consumer on amq1 (ant consumer -Durl=tcp://0.0.0.0:61616)
6) Terminal 5: start up a consumer on amq2 (ant consumer -Durl=tcp://0.0.0.0:61617)
7) Terminal 6: start up a producer on amq3 (ant producer -Durl=tcp://0.0.0.0:61618)

You should see 2000 messages sent to amq3. The messages should be forwarded to amq1. The consumer connected to amq1 should have received the 2000 messages and shut down.

8) Terminal 1: shut down amq1 (ctrl-c)

Note the logging that shows the failover taking place successfully. Let's test it to see if the demand forwarding bridge actually got started.

9) Terminal 6: start up a producer on amq3 (ant producer -Durl=tcp://0.0.0.0:61618)

You should see 2000 messages sent to amq3. The consumer connected to amq2 receives the 2000 messages and shuts down.

10) Terminal 1: start up amq1 (./bin/activemq)

11) Terminal 2: shut down amq2 (ctrl-c)

Again, the failover took place successfully. Let's continue just a bit further to see if it will continue to failover if I bring up amq1 again.

12) Terminal 4: start up a consumer on amq1 (ant consumer -Durl=tcp://0.0.0.0:61616)

13) Terminal 6: start up a producer on amq3 (ant producer -Durl=tcp://0.0.0.0:61618)

You should see 2000 messages sent to amq3. The consumer connected to amq1 receives the 2000 messages and shuts down.

This proves that the failover transport is supported in a network connector and that it works correctly with a uni-directional network connector. In addition to a uni-directional network connector, I also tested a bi-directional network connector. This only requires a slight change to the configuration of the network connector in amq3: in the amq3 XML configuration file, add a duplex="true" attribute to the network connector element. Below is the network connector element for amq3 with the change:

<networkConnector name="amq3-nc" 
  uri="static:(failover:(tcp://0.0.0.0:61616,tcp://0.0.0.0:61617))" 
  dynamicOnly="true" 
  networkTTL="3" 
  duplex="true" />


With this minor change in configuration, the network connector is now bi-directional. I.e., communication between amq3 and whichever broker it connects to is two-way instead of just one-way. This means that messages can be sent in either direction, not just in one direction originating from amq3.

Below are the steps I used to test this specific change:

1) Open five terminal windows as defined below:
1a) Terminal 1 = cd into the amq1 dir
1b) Terminal 2 = cd into the amq2 dir
1c) Terminal 3 = cd into the amq3 dir
1d) Terminal 4 = cd into the amq1/example dir
1e) Terminal 5 = cd into the amq1/example dir

2) Terminal 1: start up amq1 (./bin/activemq)
3) Terminal 2: start up amq2 (./bin/activemq)
4) Terminal 3: start up amq3 (./bin/activemq)

You should see that amq3 makes a network connection to either amq1 or amq2. For the rest of these steps, let's assume that amq3 connected to amq1.

5) Terminal 4: start up a consumer on amq1 (ant consumer -Durl=tcp://0.0.0.0:61616)
6) Terminal 5: start up a producer on amq3 (ant producer -Durl=tcp://0.0.0.0:61618)

You should see 2000 messages sent to amq3. The messages should be forwarded to amq1. The consumer connected to amq1 should receive the 2000 messages and shut down.

Let's test the duplex capability of the network connector in amq3 now. To do this we'll send messages to amq1 and consume those messages from amq3.

7) Terminal 4: start up a consumer on amq3 (ant consumer -Durl=tcp://0.0.0.0:61618)
8) Terminal 5: start up a producer on amq1 (ant producer -Durl=tcp://0.0.0.0:61616)

You should see 2000 messages sent to amq1. The messages should be forwarded to amq3. The consumer connected to amq3 should receive the 2000 messages and shut down. This proves that the duplex feature is working. Now let's cause a failover/reconnect to take place and run through this same set of steps with amq3 and amq2.

9) Terminal 1: shut down amq1 (ctrl-c)

Notice the logging that shows the failover taking place successfully so that amq3 connects to amq2 now.

10) Terminal 4: start up a consumer on amq2 (ant consumer -Durl=tcp://0.0.0.0:61617)
11) Terminal 5: start up a producer on amq3 (ant producer -Durl=tcp://0.0.0.0:61618)

You should see 2000 messages sent to amq3. The messages should be forwarded to amq2. The consumer connected to amq2 should receive the 2000 messages and shut down.

Now let's test the duplex feature in the network connector.

12) Terminal 4: start up a producer on amq2 (ant producer -Durl=tcp://0.0.0.0:61617)
13) Terminal 5: start up a consumer on amq3 (ant consumer -Durl=tcp://0.0.0.0:61618)

You should see 2000 messages sent to amq2. The messages should be forwarded to amq3. The consumer connected to amq3 should receive the 2000 messages and shut down.

This proves that the duplex feature of the network connector works after a failover/reconnect to amq2.

This is a great addition to ActiveMQ that really improves the usability of a network of brokers. I already have some very large clients using this feature successfully, some of which are using a network of over 2000 brokers.

Hopefully these steps are clear enough to follow for your own use. If you need any clarifications, please contact me.
