Deploying ActiveMQ for large numbers of concurrent applications

March 23, 2011 // by Krishna Srinivasan

This article is based on ActiveMQ in Action, to be published 24-March-2011. It is being reproduced here by permission from Manning Publications. Manning publishes MEAP (Manning Early Access Program) ebooks and pbooks. MEAPs are sold exclusively through Manning.com. All print book purchases include an ebook free of charge. When mobile formats become available, all customers will be contacted and upgraded. Visit Manning.com for more information.

We are going to look at scaling your ActiveMQ applications and examine three techniques that allow you to do that. We will start with vertical scaling, where a single broker is used for thousands of connections and Queues. Then we will look at horizontal scaling, which uses networks of brokers to support tens of thousands of connections. Finally, we will examine traffic partitioning, which balances scaling and performance but adds more complexity to your ActiveMQ application.

Vertical scaling

Vertical scaling is a technique that increases the number of connections and the load that a single ActiveMQ broker can handle. By default, the ActiveMQ broker is designed to move messages as efficiently as possible to ensure low latency and good performance. However, there are some configuration decisions that you can make to ensure that the ActiveMQ broker can handle both a large number of concurrent connections and a large number of Queues. We will examine each in turn.

By default, ActiveMQ will use blocking I/O to handle transport connections, which results in using a thread per connection. You can use non-blocking I/O on the ActiveMQ broker (and still use the default transport on the client) to reduce the number of threads used. You can configure non-blocking I/O to be the transport connector for the broker in the ActiveMQ configuration file. An example of how to do this follows:

<broker>
    <transportConnectors>
        <transportConnector name="nio" uri="nio://localhost:61616"/>
    </transportConnectors>
</broker>
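
Note that no change is required on the client side: a connection made with the default TCP transport is accepted by the broker's NIO connector. The following is a minimal sketch (the broker URL and class name are illustrative only):

import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;

public class NioBrokerClient {
    public static void main(String[] args) throws JMSException {
        // The client keeps the default TCP transport; only the broker side
        // of the connection uses non-blocking I/O.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers, and consumers as usual ...
        connection.close();
    }
}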

In addition to using a thread per connection for blocking I/O, the ActiveMQ broker can also use a dedicated thread per client connection for dispatching messages. You can ensure it uses a thread pool instead by setting the system property org.apache.activemq.UseDedicatedTaskRunner to false or by setting the ACTIVEMQ_OPTS property in the start-up script in the bin directory as follows:

ACTIVEMQ_OPTS="-Dorg.apache.activemq.UseDedicatedTaskRunner=false"

Ensuring your ActiveMQ broker has enough memory to handle lots of concurrent connections is a two-step process. First, you need to ensure that the JVM on which the broker is running is configured with enough memory. Again, you can use the ACTIVEMQ_OPTS property in the start-up script:

ACTIVEMQ_OPTS="-Xmx1024M -Dorg.apache.activemq.UseDedicatedTaskRunner=false"

Second, ensure you configure enough memory for the ActiveMQ broker itself to use by setting the System Usage memory limit. For an ActiveMQ broker with more than a few hundred active connections, 512 MB should be the minimum. You can configure the memory limit in the ActiveMQ configuration file, as shown in listing 1.

Listing 1 Setting memory limit for the ActiveMQ broker

<systemUsage>
    <systemUsage>
        <memoryUsage>
            <memoryUsage limit="512 mb"/>
        </memoryUsage>
        <storeUsage>
            <storeUsage limit="10 gb" name="foo"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="1 gb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>

It’s also advisable to reduce the CPU load per connection. If you are using the OpenWire wire format, turn off tight encoding, which can be CPU intensive. You can turn off tight encoding on a client-by-client basis. The parameters for OpenWire can be set as part of the URI used to connect to the ActiveMQ Broker. For example:

String uri = "failover://(tcp://localhost:61616?wireFormat.tightEncodingEnabled=false)";
ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(uri);

So, we have looked at some tuning aspects for scaling an ActiveMQ broker to handle thousands of connections; now we can look at tuning the broker to handle thousands of Queues.

The default Queue configuration uses a separate thread for paging messages from the store into the Queue to be dispatched to interested Message Consumers. For a large number of Queues, it's advisable to disable this separate thread by setting the optimizedDispatch property to true for all Queues, as in listing 2.

Listing 2 Setting optimizedDispatch

<destinationPolicy>
    <policyMap>
        <policyEntries>
            <policyEntry queue=">" optimizedDispatch="true"/>
        </policyEntries>
    </policyMap>
</destinationPolicy>

Note that we use the wildcard ‘>’ character to denote all Queues. In addition, the default message store for ActiveMQ is built for speed but not designed for high scalability, because the index system it uses requires two file descriptors per destination. So, to ensure you can scale not only to thousands of connections but also to tens of thousands of Queues, use either a JDBC Message Store or the newer KahaDB Message Store.
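
For example, switching the broker to a JDBC Message Store is just a change to the persistenceAdapter element. A minimal sketch, assuming a DataSource bean named mysql-ds is declared elsewhere in the broker's XML configuration file:

<broker xmlns="http://activemq.org/config/1.0" brokerName="amq-broker">
    <persistenceAdapter>
        <!-- "#mysql-ds" refers to the DataSource bean declared elsewhere in this file -->
        <jdbcPersistenceAdapter dataDirectory="${activemq.base}/data"
            dataSource="#mysql-ds"/>
    </persistenceAdapter>
</broker>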

So far we have looked at scaling connections, reducing thread usage, and selecting the right Message Store to scale. A sample configuration for ActiveMQ tuned for scaling is shown in listing 3.

Listing 3 Configuration for scaling

<broker xmlns="http://activemq.org/config/1.0"
    brokerName="amq-broker" dataDirectory="${activemq.base}/data">
    <persistenceAdapter>
        <kahaDB directory="${activemq.base}/data" journalMaxFileLength="32mb"/>
    </persistenceAdapter>
    <destinationPolicy>
        <policyMap>
            <policyEntries>
                <policyEntry queue=">" optimizedDispatch="true"/>
            </policyEntries>
        </policyMap>
    </destinationPolicy>
    <systemUsage>
        <systemUsage>
            <memoryUsage>
                <memoryUsage limit="512 mb"/>
            </memoryUsage>
            <storeUsage>
                <storeUsage limit="10 gb" name="foo"/>
            </storeUsage>
            <tempUsage>
                <tempUsage limit="1 gb"/>
            </tempUsage>
        </systemUsage>
    </systemUsage>
    <!-- The transport connectors ActiveMQ will listen to -->
    <transportConnectors>
        <transportConnector name="openwire" uri="nio://localhost:61616"/>
    </transportConnectors>
</broker>

Having looked at how to scale an ActiveMQ broker, we should look at using networks to increase horizontal scaling.

Horizontal scaling

In addition to scaling a single broker, you can use networks to increase the number of ActiveMQ brokers available to your applications. Because networks automatically pass messages to connected brokers that have interested consumers, you can configure your clients to connect to a cluster of brokers, selecting one at random, for example:

failover://(tcp://broker1:61616,tcp://broker2:61616)?randomize=true
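
A client needs no special code beyond passing this URI to the connection factory; with randomize=true the failover transport picks one of the listed brokers at random and reconnects to another broker if that one fails. A minimal sketch with illustrative host names:

import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ClusterClient {
    public static void main(String[] args) throws JMSException {
        // randomize=true spreads client connections across the brokers in the cluster.
        String uri = "failover://(tcp://broker1:61616,tcp://broker2:61616)?randomize=true";
        Connection connection = new ActiveMQConnectionFactory(uri).createConnection();
        connection.start();
        // ... messaging as usual; failover reconnects transparently on broker failure ...
        connection.close();
    }
}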

In order to make sure that messages for Queues or durable Topic subscribers are not orphaned on a broker, configure the networks to use dynamicOnly and a low network prefetchSize, as follows:

<networkConnector uri="static://(tcp://remotehost:61617)"
    name="bridge"
    dynamicOnly="true"
    prefetchSize="1"/>

Using networks for horizontal scaling does introduce more latency because, potentially, messages have to pass through multiple brokers before being delivered to a consumer.

There is another alternative deployment, which provides great scalability and performance but requires more application planning. This hybrid solution, called traffic partitioning, combines vertical scaling of a broker with application-level splitting of destinations across different brokers. We will look at this next.

Traffic partitioning

Client-side traffic partitioning is a hybrid of the vertical and horizontal scaling approaches previously described. Networks are typically not used because the client application decides what traffic should go to which broker(s). The client application has to maintain multiple JMS Connections and decide which JMS Connection should be used for each destination.

An advantage of not directly using network connections is that you reduce the overhead of forwarding messages between brokers. You do need to balance that with the additional complexity that occurs in your application. A representation of using traffic partitioning can be seen in figure 1.

Figure 1 Using traffic partitioning to split destinations across brokers
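
To make the idea concrete, here is a hypothetical sketch of client-side partitioning in which the application keeps one JMS Connection per broker and routes each destination to a fixed broker; the broker URLs and queue names are illustrative only:

import java.util.HashMap;
import java.util.Map;
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PartitionedProducer {

    // Maps each queue name to a session on the broker that owns it.
    private final Map<String, Session> sessionsByQueue = new HashMap<>();

    public PartitionedProducer() throws JMSException {
        // Order traffic is pinned to broker1, audit traffic to broker2.
        addPartition("ORDERS.IN", "tcp://broker1:61616");
        addPartition("AUDIT.IN", "tcp://broker2:61616");
    }

    private void addPartition(String queueName, String brokerUrl) throws JMSException {
        // One connection per broker, kept open for the lifetime of the producer.
        Connection connection = new ActiveMQConnectionFactory(brokerUrl).createConnection();
        connection.start();
        sessionsByQueue.put(queueName,
                connection.createSession(false, Session.AUTO_ACKNOWLEDGE));
    }

    public void send(String queueName, String text) throws JMSException {
        // The application, not a broker network, decides where each message goes.
        Session session = sessionsByQueue.get(queueName);
        MessageProducer producer = session.createProducer(session.createQueue(queueName));
        producer.send(session.createTextMessage(text));
        producer.close();
    }
}

Because the destination-to-broker mapping lives entirely in the application, no network connectors are needed between the brokers, which avoids the forwarding overhead described above.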

Summary

We have covered vertical and horizontal scaling and traffic partitioning with ActiveMQ. This knowledge should help you to understand how to use ActiveMQ to provide connectivity for thousands of concurrent connections and tens of thousands of destinations.

About Krishna Srinivasan

He is the Founder and Chief Editor of JavaBeat. He has more than 8 years of experience in developing web applications. He writes about Spring, DOJO, JSF, Hibernate, and many other emerging technologies on this blog.
