Performance Improvements in ActiveMQ

(last updated 2013-Jul-31)

There are a number of ways to help scale ActiveMQ vertically and horizontally.  By vertical scaling, I mean putting as many queues and as much traffic as possible through one server; horizontal scaling means making it easy to throw another server into the mix to take the load.  Horizontal scaling (unless you are just putting a limited number of queues on each server) generally needs vertical scaling as well.

Vertical scaling (usually needed for horizontal scaling anyway):
NIO instead of TCP: use nio in the broker's transportConnector URI (http://activemq.apache.org/configuring-transports.html), as in
<transportConnectors>
     <transportConnector name="openwire" uri="nio://0.0.0.0:61616"
               updateClusterClients="true" 
               rebalanceClusterClients="true"
               updateClusterClientsOnRemove="true" enableStatusMonitor="true"/>
</transportConnectors>

see also: http://www.pepperdust.org/?p=150. If you want, you can also have a TCP transportConnector alongside the NIO one for the network of brokers configuration to use - make sure it has a different name and port number, and use that port number in the network of brokers configuration as well, as sketched below.
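A minimal sketch of running both connectors side by side (the second connector's name and port 61617 are illustrative choices, not required values):
<transportConnectors>
     <transportConnector name="openwire" uri="nio://0.0.0.0:61616"/>
     <transportConnector name="openwire-tcp" uri="tcp://0.0.0.0:61617"/> <!-- point the networkConnector URIs at this port -->
</transportConnectors>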
Why use NIO? With default settings and blocking I/O (i.e. not using nio), there will be a thread per destination and a thread per connection - and that's before a network of brokers is turned on - so it is important to turn NIO on.
View of ActiveMQ threads on producer/consumer links: http://fusesource.com/wiki/display/ProdInfo/Understanding+the+Threads+Allocated+in+ActiveMQ


OptimizedDispatch: set optimizedDispatch to true on the queue policy (http://activemq.apache.org/per-destination-policies.html). This only applies to queues and stops the system from using a separate thread for dispatching.  optimizedDispatch is set in the broker's destinationPolicy section, for example:
         <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry queue=">" optimizedDispatch="true"/>
              </policyEntries>
            </policyMap>
         </destinationPolicy>
" means all queues">
queue=">" means all queues (in ActiveMQ's destination wildcards, > matches everything to the right, and * matches exactly one name segment in the path, not one character).
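" means all queues">
For illustration (the queue names here are made up), a policy entry like the following would apply to APP.ORDERS.IN and APP.ORDERS.OUT, but not to APP.ORDERS.AUDIT.LOG, since * spans only one segment:
                <policyEntry queue="APP.ORDERS.*" optimizedDispatch="true"/>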


DedicatedTaskRunner: turn off the dedicated task runner: http://activemq.apache.org/how-do-i-configure-10s-of-1000s-of-queues-in-a-single-broker-.html
Turning off the dedicated task runner means that a pool of threads is used to handle the queues, as opposed to a new thread per queue. It can be turned off in the activemq file in the bin directory by adding -Dorg.apache.activemq.UseDedicatedTaskRunner=false to the ACTIVEMQ_OPTS line, as sketched below (also mentioned in passing here: http://bsnyderblog.blogspot.co.uk/2011/02/activemq-and-message-redelivery.html).
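A sketch of that edit, assuming the stock bin/activemq shell script (the exact variable wiring differs between versions, so check your copy of the script):
ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -Dorg.apache.activemq.UseDedicatedTaskRunner=false"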

It can be important to use both optimizedDispatch=true and UseDedicatedTaskRunner=false:
http://activemq.2283324.n4.nabble.com/Large-number-of-queues-HowTo-td2364929.html
one relies on the thread pool and the other makes the thread pool available to rely on.  This page http://activemq.apache.org/javalangoutofmemory.html also mentions turning off dedicatedTaskRunner, as well as JMS template gotchas.

Caching: this is related more to issues with stuck and/or missing messages - try turning off caching on queues (and producer flow control) as in:
">
   <policyEntry queue=">" producerFlowControl="false" memoryLimit="1mb" useCache="false"/>
(from: https://issues.apache.org/jira/browse/AMQ-3167 & http://activemq.2283324.n4.nabble.com/Message-stuck-on-queue-td3076607.html)

Horizontal scaling:

Horizontal scaling is achieved by throwing more ActiveMQ instances at the problem, but we need the instances to be aware of each other (see hybrid scaling below for when they aren't).
The key feature of the horizontal scaling configuration is the Network Connector.

Network Connector:
<networkConnectors>
       <!-- the uri can use nio instead of tcp;
            prefetchSize must be >0 as brokers don't poll for messages like consumers do -->
       <networkConnector uri="static:(tcp://localhost:61616)"
         prefetchSize="1"
         conduitSubscriptions="true"
         networkTTL="3"
         duplex="true"/>
</networkConnectors>
conduitSubscriptions="true" - multiple consumers subscribing to the same destination are treated as one consumer by the network 
networkTTL="3" -  controls how many times a message can move around brokers
http://www.javabeat.net/articles/print.php?article_id=267 - comes from the ActiveMQ in Action book and has more info on configuring for HA 

http://bsnyderblog.blogspot.co.uk/2010/01/how-to-use-automatic-failover-in.html
http://www.slideshare.net/dejanb/advanced-messaging-with-apache-activemq (note that this presentation mentions prefetch=1000 on the network connector, which doesn't agree with other sources).

A network of brokers will create higher thread counts, as it relies on advisory topics - one thread for each topic, and one topic for each queue in use (on top of the thread already handling that queue).  The threading changes listed under Vertical Scaling will help keep the number of threads down.  http://www.commonsensecode.com/2010/07/02/activemq-supporting-thousands-of-concurrent-connections/
One last thought - check this post for an observation of heavy load with the network of brokers solution.

Other Settings
Memory Usage Limits (relevant to producer flow control): http://activemq.2283324.n4.nabble.com/Questions-on-upgrading-to-5-6-0-td4644092.html - according to that thread, version 5.6.0 added messages to flag issues that previously went silent.

Increase memory:
- give ActiveMQ more memory on startup via the activemq file in the bin directory by adding -Xmx2048M to the ACTIVEMQ_OPTS line (alongside the UseDedicatedTaskRunner flag above).
- set system usage:
<systemUsage>
  <systemUsage>
    <memoryUsage>
       <memoryUsage limit="1 gb"/> <!-- how much memory ActiveMQ can use; also the point at which a producer will be blocked if sending too many messages; the default is 64MB -->
     </memoryUsage>
     <tempUsage>
       <tempUsage limit="2 gb"/> <!-- how much disk space non-persisted messages can use; will error if set higher than your available disk space; default is 50GB -->
     </tempUsage>
     <storeUsage> <!-- won't work with 5.5.0 and lower versions -->
       <storeUsage limit="100 gb"/> <!-- how much disk space persisted messages can use; will warn if set higher than your available disk space; default is 100GB -->
     </storeUsage>
  </systemUsage>
</systemUsage>

The ActiveMQ docs (including ActiveMQ in Action) and the forums are a little inconsistent on whether memoryUsage applies to the whole broker or only to non-persistent messages.
http://activemq.apache.org/producer-flow-control.html
http://activemq.2283324.n4.nabble.com/storeUsage-with-kahaDB-which-files-td3034710.html 
The latter link seems to make it clear :)  To add to that, once the memoryUsage limit has been reached, producer flow control will kick in; if producer flow control is off, the sending thread will instead be blocked until space is freed.

Issues moving messages:
Prefetch limits: set prefetch to a low value for network connectors (0 is only valid for consumer connections; network connectors need >=1).  Prefetch for network connectors needs to be greater than or equal to 1 since brokers don't poll for messages like consumers do.
http://blog.garytully.com/2010/01/activemq-prefetch-and-asyncdispatch.html
Queue values can be changed in the broker's queue policy section using the queuePrefetch attribute (http://activemq.apache.org/per-destination-policies.html) - see the sketch below.
Note that setting a low prefetch may have some negative impact on performance, since messages are no longer handled in large batches and the per-message overhead is no longer saved.
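A minimal sketch of lowering the prefetch per destination in the broker's policy section (the value 10 is just an illustrative number):
   <policyEntry queue=">" queuePrefetch="10"/>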

Hybrid scaling (traffic partitioning):
Vertical scaling is essential to get the most out of each ActiveMQ instance. Horizontal scaling can add more machines (although not everyone believes a network of brokers is a good choice: http://www.commonsensecode.com/2010/07/02/activemq-supporting-thousands-of-concurrent-connections/).
Combining vertical with horizontal, while leaving out the overhead of a network of brokers, gives a hybrid approach that relies on partitioning traffic between servers.  This could be done by putting all queues whose names begin with A on one instance and those beginning with B on another.  The applications then need to know which ActiveMQ instance to send/receive messages/events on - i.e. you set this up manually.  Hybrid scaling - pinning queue groups to specific brokers - scales well, but the extra application configuration is a known downside.  A sketch of the client-side wiring is below.
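A hedged sketch of one way to wire the client side of such a partition, Spring-style (the broker hostnames and bean ids are made up; the application code picks a factory based on the first letter of the queue name):
<!-- queues starting with A live on broker-a, queues starting with B on broker-b -->
<bean id="brokerAFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL" value="tcp://broker-a:61616"/>
</bean>
<bean id="brokerBFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL" value="tcp://broker-b:61616"/>
</bean>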

Producer Flow Control
Producer flow control sounds like a great idea, but one of the downsides is that it uses an extra thread per queue being controlled.  In a network of brokers setup, the advisory topics are also flow controlled, and while you can turn producer flow control off on the queues, you can't on the advisory topics.
Possibly remove queue limits and set async close to false:
https://issues.apache.org/jira/browse/AMQ-1739 

">
<policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb"/> - the memoryLimit should be less than the memoryUsage setting divided by the number of queues, to avoid hitting the overall memory limit.
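">
As a worked example (numbers illustrative): with memoryUsage set to 1 gb and roughly 1,000 active queues, 1 gb / 1,000 = ~1 mb per queue, so 1mb is the ceiling for memoryLimit and something below it leaves headroom.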

Some links related to producer flow control (this and the last message were looking for CLOSE_WAIT reasons)
The above configuration changes are for scaling; however, scaling doesn't necessarily handle high availability.

High Availability
ActiveMQ has 3 basic options for HA, with one of them deprecated and due to be replaced by a new approach soon:
Shared nothing master/slave (note that this has been removed in ActiveMQ 5.8) - relies on keeping a slave updated with all messages and changes so that if the master goes down, the slave can take over.  It doesn't have a good recovery method - recovery requires downtime and copying files between systems - so it is not ideal for many situations.

Shared DB storage - relies on a DB to provide storage and a lock for two or more ActiveMQ instances to contend for.  The instance holding the lock is the master; should the master go down, the lock is released and a slave can take over.  Simple configuration and relatively robust, but limited by the performance of the DB.
We ran this configuration extensively until one week when we were flooded with messages due to a new system going live (and no planning being done).  The message backlog grew, the data queries took longer and longer, and the performance of the system dropped to barely functioning levels.  The situation was severe, and our way out was to move to ActiveMQ's KahaDB disk level storage for much faster throughput - our systems recovered quickly.  We've also had issues with the DB essentially locking up for lengthy periods of time (10 minutes to hours) if someone pressed 'purge' on a large queue via the web admin UI while the system was DB backed.  Due to our concerns about purging DB backed queues, and some of our systems generating too many DLQ messages, we had to write a script in our DB to delete messages from the corresponding DLQs; this was fairly easy as the queue name is clear in the SQL table.

Shared storage master/slave - good if you have a SAN.  Be sure to use NFSv4 or higher and make sure that file locking works (and times out!).  This configuration is much like shared DB storage, but utilizes faster disk storage options - higher throughput is attainable.  A sketch is below.
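A minimal sketch of pointing KahaDB at the shared mount (the path is made up; every broker in the master/slave group must point at the same directory):
<persistenceAdapter>
  <kahaDB directory="/mnt/san/activemq/kahadb"/>
</persistenceAdapter>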

LevelDB replicated storage - coming soon in ActiveMQ 5.9: http://activemq.apache.org/replicated-leveldb-store.html. It seems to rely on the elected master communicating state changes to a number of slaves; when the master fails, the slave with the most recent updates is elected the new master.
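A hedged sketch of the configuration from that page (the ZooKeeper addresses and replica count are illustrative, and 5.9 was unreleased at the time of writing, so check the page for the current attribute set):
<persistenceAdapter>
  <replicatedLevelDB directory="activemq-data"
      replicas="3"
      zkAddress="zk1:2181,zk2:2181,zk3:2181"
      zkPath="/activemq/leveldb-stores"/>
</persistenceAdapter>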

Other options
Consider Apache Apollo 1.0, as it supports JMS (according to hiramchirino.com/blog), or HornetQ.  RabbitMQ doesn't support JMS directly, otherwise it would be higher on our list of alternatives.
It looks like Apollo code may make it into ActiveMQ, so the prospects are looking better.

Dealing with performance issues
Even with a good setup there can be some performance issues (most of ours stemmed from the network of brokers). See other pages on this blog for more info, especially: Network of Brokers Revisited and Performance Issues.
