Tag Archives: ActiveMQ

[repost ]managing ActiveMQ with JMX APIs

original:http://www.consulting-notes.com/2010/08/monitoring-and-managing-activemq-with.html

here is a quick example of how to programmatically access ActiveMQ MBeans to monitor and manipulate message queues…

first, get a connection to a JMX server (assumes localhost, port 1099, no auth)
note: always cache the connection and reuse it for subsequent requests (creating a new connection per request can cause memory utilization issues)

JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
JMXConnector jmxc = JMXConnectorFactory.connect(url);
MBeanServerConnection conn = jmxc.getMBeanServerConnection();
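
for reference, here is a minimal sketch of the caching suggested above; the JmxConnectionHolder class name and its layout are illustrative and not from the original post…

import java.io.IOException;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Illustrative holder that creates the JMX connection once and reuses it.
public class JmxConnectionHolder {

    private static final String JMX_URL =
            "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi";

    private static JMXConnector connector;

    // Lazily create and cache the connector; reuse it for all later requests.
    public static synchronized MBeanServerConnection getConnection() throws IOException {
        if (connector == null) {
            connector = JMXConnectorFactory.connect(new JMXServiceURL(JMX_URL));
        }
        return connector.getMBeanServerConnection();
    }

    // Close the cached connector, e.g. on application shutdown.
    public static synchronized void close() throws IOException {
        if (connector != null) {
            connector.close();
            connector = null;
        }
    }
}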

then, you can execute various operations such as addQueue, removeQueue, etc…

String operationName = "addQueue";
String parameter = "MyNewQueue";
ObjectName activeMQ = new ObjectName("org.apache.activemq:BrokerName=localhost,Type=Broker");
if (parameter != null) {
    Object[] params = {parameter};
    String[] sig = {"java.lang.String"};
    conn.invoke(activeMQ, operationName, params, sig);
} else {
    conn.invoke(activeMQ, operationName, null, null);
}

also, you can get an ActiveMQ QueueViewMBean instance for a specified queue name…

ObjectName activeMQ = new ObjectName("org.apache.activemq:BrokerName=localhost,Type=Broker");
BrokerViewMBean mbean = (BrokerViewMBean) MBeanServerInvocationHandler.newProxyInstance(conn, activeMQ, BrokerViewMBean.class, true);

// queueName, cacheKey and queueViewBeanCache come from the enclosing lookup method
for (ObjectName name : mbean.getQueues()) {
    QueueViewMBean queueMbean = (QueueViewMBean)
        MBeanServerInvocationHandler.newProxyInstance(conn, name, QueueViewMBean.class, true);

    if (queueMbean.getName().equals(queueName)) {
        queueViewBeanCache.put(cacheKey, queueMbean);
        return queueMbean;
    }
}

then, execute one of several APIs against the QueueViewMBean instance…

queue monitoring – getEnqueueCount(), getDequeueCount(), getConsumerCount(), etc…

queue manipulation – purge(), getMessage(String messageId), removeMessage(String messageId), moveMessageTo(String messageId, String destinationName), copyMessageTo(String messageId, String destinationName), etc…
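
for illustration, assuming the queueMbean proxy obtained above and calling code that handles the checked exceptions these operations declare, the calls look roughly like this ("ID:example-1" and "MyOtherQueue" are placeholder values)…

// Queue monitoring via the QueueViewMBean proxy obtained above.
long enqueued  = queueMbean.getEnqueueCount();   // messages sent to the queue
long dequeued  = queueMbean.getDequeueCount();   // messages acknowledged by consumers
long consumers = queueMbean.getConsumerCount();  // currently attached consumers
System.out.println(enqueued + " enqueued, " + dequeued + " dequeued, " + consumers + " consumers");

// Queue manipulation -- "ID:example-1" and "MyOtherQueue" are placeholder values.
queueMbean.copyMessageTo("ID:example-1", "MyOtherQueue");
queueMbean.moveMessageTo("ID:example-1", "MyOtherQueue");
queueMbean.removeMessage("ID:example-1");
queueMbean.purge();  // discards every message on the queue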

summary
The APIs can easily be used to build a web or command line based tool to support remote ActiveMQ management features. That being said, all of these features are available via the JMX console itself and ActiveMQ does provide a web console to support some management/monitoring tasks.

See these pages for more information…

http://activemq.apache.org/jmx-support.html
http://activemq.apache.org/web-console.html

[repost ]Using JMX to monitor Apache ActiveMQ

original:http://activemq.apache.org/jmx.html

Apache ActiveMQ has extensive support for JMX to allow you to monitor and control the behavior of the broker via the JMX MBeans.

Using JMX to monitor Apache ActiveMQ

You can enable/disable JMX support as follows…

1. Run a broker setting the broker property useJmx to true. (From 4.0 onwards this is enabled by default)
i.e.

For xbean configuration

<broker useJmx="true" brokerName="BROKER1">
...
</broker>

For url configuration

broker:(tcp://localhost:61616)?useJmx=true
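
For an embedded broker, the same useJmx property can be set in Java on BrokerService. This is a minimal sketch (the broker name and connector URL simply mirror the examples above), to be called from code that handles the checked exceptions:

import org.apache.activemq.broker.BrokerService;

// Start an embedded broker with JMX enabled (the default from 4.0 onwards).
BrokerService broker = new BrokerService();
broker.setBrokerName("BROKER1");
broker.setUseJmx(true);
broker.addConnector("tcp://localhost:61616");
broker.start();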

2. Run a JMX console (e.g. jconsole – JMX console included in the JDK <JAVA_HOME>/bin/jconsole.exe)

3. Connect to the given JMX URL:

The ActiveMQ broker should appear in the list of local connections, if you are running JConsole on the same host as ActiveMQ.

To connect to a remote ActiveMQ instance, or if the local process does not show up, use the Remote Process option and enter a URL. Here is an example localhost URL:

service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi

(The original page shows a jconsole screenshot of an Apache ActiveMQ broker on OS X at this point.)

ActiveMQ MBeans Reference

For additional reference, below is a brief hierarchy of the MBeans together with the ObjectName properties, attributes, and operations of each MBean.

 MBean Type: Broker
  ObjectName properties:
   • Type=Broker
   • BrokerName=<broker identifier>
  Attributes:
   • BrokerId
   • TotalEnqueueCount
   • TotalDequeueCount
   • TotalConsumerCount
   • TotalMessages
   • TotalMessagesCached
   • MemoryLimit
   • MemoryPercentageUsed
  Operations:
   • start
   • stop
   • terminateJVM
   • resetStatistics
   • gc

 MBean Type: Destination
  ObjectName properties:
   • Type=Queue|Topic
   • Destination=<destination identifier>
   • BrokerName=<name of broker>
  Attributes:
   • EnqueueCount
   • DequeueCount
   • ConsumerCount
   • Messages
   • MessagesCached
  Operations:
   • resetStatistics
   • gc

 MBean Type: NetworkConnector
  ObjectName properties:
   • Type=NetworkConnector
   • BrokerName=<name of broker>
  Operations:
   • start
   • stop

 MBean Type: Connector
  ObjectName properties:
   • Type=Connector
   • ConnectorName=<connector identifier>
   • BrokerName=<name of broker>
  Attributes:
   • EnqueueCount
   • DequeueCount
  Operations:
   • start
   • stop
   • resetStatistics

 MBean Type: Connection
  ObjectName properties:
   • Type=Connection
   • Connection=<connection identifier>
   • BrokerName=<name of broker>
  Attributes:
   • EnqueueCount
   • DequeueCount
   • DispatchQueueSize
   • Active
   • Blocked
   • Connected
   • Slow
  Operations:
   • start
   • stop
   • resetStatistics

Currently, we have just provided basic information to monitor, e.g. number of messages exchanged, consumer count, producer count, etc.
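
As a rough sketch of reading a few of these attributes from Java (the broker name BROKER1 and queue TEST.QUEUE are placeholders; run it from code that handles the checked JMX exceptions):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Connect to the broker's JMX connector (default URL shown above).
JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
JMXConnector jmxc = JMXConnectorFactory.connect(url);
MBeanServerConnection conn = jmxc.getMBeanServerConnection();

// ObjectNames follow the naming scheme in the table above; names are placeholders.
ObjectName broker = new ObjectName("org.apache.activemq:Type=Broker,BrokerName=BROKER1");
ObjectName queue  = new ObjectName("org.apache.activemq:Type=Queue,Destination=TEST.QUEUE,BrokerName=BROKER1");

// Read a few of the attributes listed in the reference.
Long totalEnqueued = (Long) conn.getAttribute(broker, "TotalEnqueueCount");
Long enqueued      = (Long) conn.getAttribute(queue, "EnqueueCount");
Long consumers     = (Long) conn.getAttribute(queue, "ConsumerCount");
System.out.println(totalEnqueued + " total, " + enqueued + " on queue, " + consumers + " consumers");

jmxc.close();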

Command line utilities are also available to let you monitor ActiveMQ. Refer to ActiveMQ Command Line Tools Reference for usage information.

Password Protecting the JMX Connector

(For Java 1.5+)

1. Make sure JMX is enabled, but tell ActiveMQ not to create its own connector, so that it will use the default JVM JMX connector.

<broker xmlns="http://activemq.org/config/1.0" brokerName="localhost" useJmx="true">

  ...

  <managementContext>
     <managementContext createConnector="false"/>
  </managementContext>

  ...

</broker>

2. Create access and password files

conf/jmx.access:

# The "monitorRole" role has readonly access.
# The "controlRole" role has readwrite access.
monitorRole readonly
controlRole readwrite

conf/jmx.password:

# The "monitorRole" role has password "abc123".
# The "controlRole" role has password "abcd1234".
monitorRole abc123
controlRole abcd1234

(Make sure both files are not world readable – more information on protecting these files can be found in the JDK remote monitoring documentation.)

For more details, see the Monitoring Tomcat documentation.

3. Modify the “activemq” startup script (in bin) to enable the Java 1.5+ JMX connector

Find the “SUNJMX=” line and change it to:

1. Windows

  SUNJMX=-Dcom.sun.management.jmxremote.port=1616 -Dcom.sun.management.jmxremote.ssl=false \
    -Dcom.sun.management.jmxremote.password.file=%ACTIVEMQ_BASE%/conf/jmx.password \
    -Dcom.sun.management.jmxremote.access.file=%ACTIVEMQ_BASE%/conf/jmx.access

2. Unix

  SUNJMX="-Dcom.sun.management.jmxremote.port=1616 -Dcom.sun.management.jmxremote.ssl=false \
    -Dcom.sun.management.jmxremote.password.file=${ACTIVEMQ_BASE}/conf/jmx.password \
    -Dcom.sun.management.jmxremote.access.file=${ACTIVEMQ_BASE}/conf/jmx.access"

This could be set in /etc/activemq.conf instead (if you have root access):

1. Windows

ACTIVEMQ_HOME=DRIVE_LETTER:/where/ActiveMQ/is/installed
ACTIVEMQ_BASE=%ACTIVEMQ_HOME%
SUNJMX=-Dcom.sun.management.jmxremote.port=1616 -Dcom.sun.management.jmxremote.ssl=false \
    -Dcom.sun.management.jmxremote.password.file=%ACTIVEMQ_BASE%/conf/jmx.password \
    -Dcom.sun.management.jmxremote.access.file=%ACTIVEMQ_BASE%/conf/jmx.access

2. Unix

ACTIVEMQ_HOME=/where/ActiveMQ/is/installed
ACTIVEMQ_BASE=${ACTIVEMQ_HOME}
SUNJMX="-Dcom.sun.management.jmxremote.port=1616 -Dcom.sun.management.jmxremote.ssl=false \
    -Dcom.sun.management.jmxremote.password.file=${ACTIVEMQ_BASE}/conf/jmx.password \
    -Dcom.sun.management.jmxremote.access.file=${ACTIVEMQ_BASE}/conf/jmx.access"

4. Start ActiveMQ

You should be able to connect to JMX on the JMX URL

service:jmx:rmi:///jndi/rmi://<your hostname>:1616/jmxrmi

And you will be forced to log in.
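
For a client connecting programmatically to the password-protected connector, credentials can be passed in the JMX environment map. This is a hedged sketch reusing the sample role, password, and port from the steps above (run it from code that handles the checked exceptions):

import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Pass the role name and password from conf/jmx.password via jmx.remote.credentials.
Map<String, Object> env = new HashMap<String, Object>();
env.put(JMXConnector.CREDENTIALS, new String[] {"controlRole", "abcd1234"});

JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1616/jmxrmi");
JMXConnector connector = JMXConnectorFactory.connect(url, env);
MBeanServerConnection conn = connector.getMBeanServerConnection();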

Advanced JMX Configuration

The activemq.xml configuration file allows you to configure how ActiveMQ is exposed to JMX for management. In some cases, you may need to tweak some of its settings, such as which port is used.

Example:

<broker useJmx="true">
	<managementContext>
	   <managementContext connectorPort="2011" jmxDomainName="test.domain"/>
	</managementContext>
</broker>

In 4.0.1 or later, on Java 1.5 or later, we try to use the default platform MBeanServer (so that things like the JVM threads & memory settings are visible).

If you wish to change the Java 5 JMX settings, you can use various JMX system properties.

For example, you can enable remote JMX connections to the Sun JMX connector by setting the following environment variable (using set or export, depending on your platform). These settings only configure the Sun JMX connector within Java 1.5+, not the JMX connector that ActiveMQ creates by default.

SUNJMX=-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=1616 \
-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false

(The SUNJMX environment variable is simply used by the “activemq” startup script as additional startup parameters for java. If you start ActiveMQ directly, you’ll have to pass these parameters yourself.)

ManagementContext Properties Reference

 • useMBeanServer (default: true) – If true, avoids creating a new MBeanServer if one has already been created in the JVM
 • jmxDomainName (default: org.apache.activemq) – The JMX domain that all object names will use
 • createMBeanServer (default: true) – Whether to create an MBeanServer if none is found
 • createConnector (default: true) – Whether to create a JMX connector (to allow remote management) for the MBeanServer
 • connectorPort (default: 1099) – The port that the JMX connector will use
 • connectorHost (default: localhost) – The host that the JMX connector and RMI server (if rmiServerPort > 0) will use
 • rmiServerPort (default: 0) – The RMI server port; handy if port usage needs to be restricted behind a firewall
 • connectorPath (default: /jmxrmi) – The path that the JMX connector will be registered under
 • findTigerMBeanServer (default: true) – Enables/disables searching for the Java 5 platform MBeanServer
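
When the broker is embedded in Java code, the same kind of settings can be applied on ManagementContext; below is a sketch under that assumption, mirroring the connectorPort and jmxDomainName values from the XML example above (called from code that handles the checked exceptions):

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.jmx.ManagementContext;

// Configure the JMX management context of an embedded broker in code.
BrokerService broker = new BrokerService();
broker.setUseJmx(true);

ManagementContext management = new ManagementContext();
management.setConnectorPort(2011);          // matches connectorPort in the XML example
management.setJmxDomainName("test.domain"); // matches jmxDomainName in the XML example
broker.setManagementContext(management);

broker.start();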

Release Notes – ActiveMQ – Version 5.5.0 – HTML format

Bug

  • [AMQ-1035] – Problem with STOMP C++ Client connecting with a AMQ Broker having Authorization and Authentication Plugins installed
  • [AMQ-1604] – Please make the following configuration changes so releases into production are simpler
  • [AMQ-1780] – ActiveMQ broker does not automatically reconnect if the connection to the database is lost
  • [AMQ-1997] – Memory leak in broker – Temporary Queue related (relating to bug AMQ-1790)
  • [AMQ-2138] – Memory Leak in ActiveMQConnection
  • [AMQ-2171] – Browse queue hangs with prefetch = 0
  • [AMQ-2213] – Equals method return wrong result for TopicSession / QueueSession
  • [AMQ-2218] – Message delivery to consumer eventually pauses if consumer publishes to the same queue it receives from
  • [AMQ-2223] – Documentation References Advisory Messages which are not valid in xsd
  • [AMQ-2256] – Unnecessary TcpTransportFactory NumberFormatException and warning
  • [AMQ-2336] – Redeliveried messages stops consumers from going on consuming the rest of messages in the queue
  • [AMQ-2402] – SystemPropertiesConfiguration swaps JMX user / password
  • [AMQ-2496] – journaledJDBC not creating ACTIVEMQ_MSGS with MS SQL 2008
  • [AMQ-2633] – Missing documentation: http://activemq.apache.org/producer-flow-control.html
  • [AMQ-2662] – ActiveMQEndpointWorker.stop() sometimes is needing many minutes to shutdown
  • [AMQ-2673] – Producer started before consumer leads to a “javax.jms.JMSException: Unmatched acknowledege” (repro available)
  • [AMQ-2683] – Producer Flow Control Does Not Seem to Work with Topics
  • [AMQ-2718] – Copyright banner on page footer of ActiveMQ Console is outdated
  • [AMQ-2736] – KahaDB doesn’t clean up old files
  • [AMQ-2758] – rollback does not work on topic
  • [AMQ-2798] – Occaional hangs on ensureConnectionInfoSent
  • [AMQ-2852] – Memory leak when undeploying webapp with ActiveMQ client
  • [AMQ-2929] – Compressed text message received by consumer uncompressed
  • [AMQ-2944] – Failover transport always re-connects to the first configured transport instead of the original transport speficied in the connection url
  • [AMQ-2954] – NPE in JobSchedulerStore after restarts
  • [AMQ-2955] – Message getting stuck on queue, leading to KahaDB log files not being deleted and disk running out of space
  • [AMQ-2963] – JMSBridgeConnectors does not work with IBM MQ and ActiveMQ 5.4.0
  • [AMQ-2978] – FailoverTransport sometimes reconnects on Connection.close()
  • [AMQ-2981] – Connecting to broker using discovery protocol fails
  • [AMQ-3000] – Multiple Cron Scheduled Messages don’t fire every minute as configured
  • [AMQ-3006] – STOMP connector assigns wrong default priority to incoming messages
  • [AMQ-3015] – Javascript client does not establish session properly.
  • [AMQ-3026] – Statistics plugin sample hanging when switching to http transport protocol
  • [AMQ-3033] – BrokerService leaks threads when scheduler or jmx are enabled
  • [AMQ-3036] – Scheduled message CRON strings not parsed correctly
  • [AMQ-3038] – Possible Memory-Leak as ActiveMQTempDestinations don’t get deleted when ActiveMQConnection.close() is called
  • [AMQ-3040] – ConnectionState.getTempDesinations() should be renamed to ConnectionState.getTempDestinations()
  • [AMQ-3041] – TemporyQueue will never get unregisterd from JMX which leads to a memory leak
  • [AMQ-3056] – Exception when Redelivery ack is processed by topic subscription
  • [AMQ-3062] – “Deflater has been closed” exception when jms.useCompression=true and using ActiveMQBytesMessage
  • [AMQ-3067] – ActiveMQBlobMessage.copy(..) does not copy the name attribute
  • [AMQ-3068] – Error creating tables on Oracle jdbc store
  • [AMQ-3071] – ConcurrentModificationException thrown in PriorityNetworkDispatchPolicy
  • [AMQ-3075] – Auto-create database fails with PostgreSQL (Error in SQL: ‘drop primary key’)
  • [AMQ-3076] – spurious KahaDB warnings
  • [AMQ-3077] – ArraysIndexOutOfBoundsException : -32768 in “BrokerService[xxx] Task” thread
  • [AMQ-3081] – Durable subscriptions are not removed from mbean
  • [AMQ-3084] – Typo “DispachedCounter” in response when running activemq-admin
  • [AMQ-3085] – IndexOutOfBoundsException on FailoverTransport.updateURIs after: already known: java.net.UnknownHostException
  • [AMQ-3088] – ActiveMQ Web Console “Scheduled” Tab Invocation Fails and returns an unclear message.
  • [AMQ-3092] – Deleting a Queue from the console results in lost messages
  • [AMQ-3093] – Client should provide handling of JMSPriority messages outside of range 0-9.
  • [AMQ-3094] – ajax client does not receive all messages
  • [AMQ-3095] – Broker policyEntry DurableTopicPrefetch is ignored by default because of connection.optimizedMessageDispatch
  • [AMQ-3115] – reportInterval property ignored by DiscardingDLQBrokerPlugin
  • [AMQ-3119] – Proxy connector stop sending messages after failover
  • [AMQ-3120] – KahaDB error: “Could not locate data file”
  • [AMQ-3122] – Recovery after out of disk space (when space freed up) needs manual intervention
  • [AMQ-3124] – Failover transport client gets corrupted connectedBrokers data
  • [AMQ-3125] – updateClusterFilter/ClientsOnRemove broken when running JMX broker
  • [AMQ-3129] – Can only have one duplex networkConnection per transportConnection
  • [AMQ-3130] – ActiveMQ’s Activator not discovering other bundles with extensions.
  • [AMQ-3140] – Lost messages when scheduling messages concurrently
  • [AMQ-3141] – Messages may be lost when schedule them with a short delay
  • [AMQ-3142] – Prepare the upgrade to Karaf 2.2
  • [AMQ-3143] – JMX attribute change doesn’t affect store usage
  • [AMQ-3149] – concurrentStoreAndDispatchQueues when cache disabled can lead to skipped message dispatch, leaving message pending for some time
  • [AMQ-3153] – An expired message that is consumed and resent with an updated expiration never expires again.
  • [AMQ-3160] – ConcurrentModificationException in ActiveMQ Journal Checkpoint Worker
  • [AMQ-3161] – Race condition in ActiveMQ Journal Checkpoint worker thread cleanup leads to multiple running instances
  • [AMQ-3162] – ActiveMQ checkpoint worker makes unnecessary repeated calls to Journal.getFileMap(), leading to excessive memory usage
  • [AMQ-3165] – ActiveMQ 5.4.2 Admin – Accessing Scheduled.jsp giving an Exception in log file
  • [AMQ-3167] – possible skipped Queue messages in memory limited configuration with fast consumers
  • [AMQ-3176] – Potential deadlock in duplex network connector recreation, resulting in dangling connections
  • [AMQ-3180] – JMX Browse of BytesMessage fails with javax.management.openmbean.OpenDataException: Argument’s element itemValues[8]=”[B@de15a0″ is not a valid value for this item
  • [AMQ-3181] – ActiveMQConnectionFactory fails in an Applet enviroment
  • [AMQ-3182] – JAAS PropertiesLoginModule does not maintain internal validity state, so will commit in error after an invalid login attempt
  • [AMQ-3185] – Closing a VMTransport can cause all other VMTransports to be prematurely closed
  • [AMQ-3187] – IllegalMonitorStateException in default topic consumer of maven-activemq-perf-plugin
  • [AMQ-3190] – Durable Subscription – missing messages when selector matching sub resumes after broker restart
  • [AMQ-3193] – Consumers won’t get msgs after JMX operation removeMatchingMessages() was called on a queue.
  • [AMQ-3199] – CRON next scheduled time incorrectly calculated
  • [AMQ-3200] – Scheduled CRON jobs execute twice
  • [AMQ-3202] – Sending an Empty MapMessage over HttpTransport fails with exception
  • [AMQ-3206] – Unsubscribed durable sub can leave dangling message reference in kahaDB, visible after a restart
  • [AMQ-3209] – URISupport.createURIWithQuery() fails on some composite uris.
  • [AMQ-3211] – JMSXUserId Can be spoofed by client
  • [AMQ-3220] – Wildcards do not work with included destinations for network connectors.
  • [AMQ-3222] – Failover and SimpleDiscovery – query parameters getting dropped
  • [AMQ-3238] – Topic-Messages not redelivered to durable subscription after rollback and reconnect

Improvement

  • [AMQ-2492] – Microsoft SQL Server JDBC Driver 2.0 not recognized
  • [AMQ-2968] – Add Apache commons daemon (jsvc/procrun) start/stop support.
  • [AMQ-3045] – Add property maximumRedeliveryDelay in org.apache.activemq.RedeliveryPolicy
  • [AMQ-3078] – Copyright message is out of date in admin console
  • [AMQ-3105] – Require JDK 6
  • [AMQ-3134] – Add support of MS SQL JDBC driver (version 3.0)
  • [AMQ-3138] – The Camel ActiveMQComponent should default create ActiveMQConnectionFactory with the provided broker url
  • [AMQ-3139] – Remove queue and topic endpoints in Camel when they are removed in CamelEndpointLoader
  • [AMQ-3145] – cacheEnabled attribute should be exposed on the queueview via jmx
  • [AMQ-3146] – Add original destination to Stomp messages received from DLQ
  • [AMQ-3148] – LoggingBrokerPlugin addConnection(..) log output is meaningless
  • [AMQ-3150] – Please Update log4j to latest version (1.2.16)
  • [AMQ-3159] – Log file offset in addition to file location in checkpointUpdate()
  • [AMQ-3174] – ConsumerTool (in examples) should show how to do batch acknowledgement using either transacted session or CLIENT_ACKNOWLEDGE
  • [AMQ-3175] – Allow setting soTimeout for Http/Https transports
  • [AMQ-3178] – 5.3.x clients to 5.4 brokers always get updated cluster information in the broker info, this should be configurable
  • [AMQ-3184] – Upgrade to Camel 2.6.0
  • [AMQ-3188] – Full table scan for durable subs in jdbc store when priority enabled; very slow with large message backlog
  • [AMQ-3191] – Add setTrustStore() and setKeyStore() methods to ActiveMQSslConnectionFactory class
  • [AMQ-3192] – Add setTrustStore() and setKeyStore() methods to ActiveMQSslConnectionFactory class
  • [AMQ-3195] – NetworkConnector initialization should be backed by an executor
  • [AMQ-3196] – Speed up initial message delivery for offline durable sub with keepDurableSubsActive=true and JDBC store
  • [AMQ-3197] – Virtual destinations and wildcards
  • [AMQ-3198] – Allow JAAS GuestLoginModule to fail if users specify a password
  • [AMQ-3205] – Update ActivationSpec
  • [AMQ-3207] – Various improvements to features.xml possible with karaf-2.2
  • [AMQ-3218] – Mutlitple Exclusive Consumers: It is currently not possible to always ensure that a new exclusive consumer replaces any existing one
  • [AMQ-3231] – Stomp Frame should mask passcode header in toString output, so it does not pollute the log
  • [AMQ-3237] – FileLock.tryLock() doesn’t work well in all environments
  • [AMQ-3241] – “Unkown” is an incorrect spelling in ActiveMQMessageProducerSupport.java
  • [AMQ-3244] – Enable PropertiesLoginModule JAAS module to optionally cache values in memory

New Feature

  • [AMQ-3003] – Allow the option of a DLQ per durable subscription DeadLetterStrategy
  • [AMQ-3010] – ActiveMQInputStream should allow to specify a timeout like MessageConsumer.receive() does
  • [AMQ-3107] – Fire advisory when network bridge is starter/stopped
  • [AMQ-3108] – Show network bridges in web console
  • [AMQ-3109] – Show bridges created by duplex connectors in JMX
  • [AMQ-3177] – Switch to use slf4j as logger (instad of commons logging)
  • [AMQ-3183] – Set JMSXUserID value based on authenticated principal
  • [AMQ-3186] – Allow producer and consumer throttling in maven-activemq-perf-plugin
  • [AMQ-3204] – Support non-standard destination path separators
  • [AMQ-3219] – Enable MDC logging
  • [AMQ-3236] – In the case of DLQ processing due to an exception from onMessage, provide the exception string as a message property

ActiveMQ 5.5.0 released

Apache ActiveMQ is the most popular and powerful open source messaging and Integration Patterns provider.

Apache ActiveMQ is fast, supports many Cross Language Clients and Protocols, comes with easy to use Enterprise Integration Patterns and many advanced features while fully supporting JMS 1.1 and J2EE 1.4. Apache ActiveMQ is released under the Apache 2.0 License

Grab yourself a Download, try our Getting Started Guide, surf our FAQ or start Contributing and join us on our Discussion Forums.

Features

  • Supports a variety of Cross Language Clients and Protocols from Java, C, C++, C#, Ruby, Perl, Python, PHP
    • OpenWire for high performance clients in Java, C, C++, C#
    • Stomp support so that clients can be written easily in C, Ruby, Perl, Python, PHP, ActionScript/Flash, Smalltalk to talk to ActiveMQ as well as any other popular Message Broker
  • full support for the Enterprise Integration Patterns both in the JMS client and the Message Broker
  • Supports many advanced features such as Message Groups, Virtual Destinations, Wildcards and Composite Destinations
  • Fully supports JMS 1.1 and J2EE 1.4 with support for transient, persistent, transactional and XA messaging
  • Spring Support so that ActiveMQ can be easily embedded into Spring applications and configured using Spring’s XML configuration mechanism
  • Tested inside popular J2EE servers such as Geronimo, JBoss 4, GlassFish and WebLogic
    • Includes JCA 1.5 resource adaptors for inbound & outbound messaging so that ActiveMQ should auto-deploy in any J2EE 1.4 compliant server
  • Supports pluggable transport protocols such as in-VM, TCP, SSL, NIO, UDP, multicast, JGroups and JXTA transports
  • Supports very fast persistence using JDBC along with a high performance journal
  • Designed for high performance clustering, client-server, peer based communication
  • REST API to provide technology agnostic and language neutral web based API to messaging
  • Ajax to support web streaming support to web browsers using pure DHTML, allowing web browsers to be part of the messaging fabric
  • CXF and Axis Support so that ActiveMQ can be easily dropped into either of these web service stacks to provide reliable messaging
  • Can be used as an in memory JMS provider, ideal for unit testing JMS

ActiveMQ – Version 5.4.2 released

Apache ActiveMQ 5.4.2 is primarily a maintenance release which resolves 61 issues, mostly bug fixes and improvements.

Getting the Binary Distributions

  • Windows Distribution: apache-activemq-5.4.2-bin.zip (PGP signature: apache-activemq-5.4.2-bin.zip.asc)
  • Unix/Linux/Cygwin Distribution: apache-activemq-5.4.2-bin.tar.gz (PGP signature: apache-activemq-5.4.2-bin.tar.gz.asc)

Verify the Integrity of Downloads

It is essential that you verify the integrity of the downloaded files using the PGP or MD5 signatures. The PGP signatures can be verified using PGP or GPG. Begin by following these steps:

  1. Download the KEYS
  2. Download the asc signature file for the relevant distribution
  3. Verify the signatures using the following commands, depending on your use of PGP or GPG:
    $ pgpk -a KEYS
    $ pgpv apache-activemq-<version>-bin.tar.gz.asc

    or

    $ pgp -ka KEYS
    $ pgp apache-activemq-<version>-bin.tar.gz.asc

    or

    $ gpg --import KEYS
    $ gpg --verify apache-activemq-<version>-bin.tar.gz.asc

(Where <version> is replaced with the actual version, e.g., 5.1.0, 5.2.0, etc.).

Alternatively, you can verify the MD5 signature on the files. A Unix program called md5 or md5sum is included in most Linux and Unix distributions. It is also available as part of GNU Textutils. Windows users can use any of several freely available md5 programs.

Getting the Binaries using Maven 2

To use this release in your maven project, the proper dependency configuration that you should use in your Maven POM is:

<dependency>
  <groupId>org.apache.activemq</groupId>
  <artifactId>activemq-core</artifactId>
  <version>5.4.2</version>
</dependency>

Getting the Source Code

Source Distributions

SVN Tag Checkout

svn co http://svn.apache.org/repos/asf/activemq/tags/activemq-5.4.2

Changelog

For a more detailed view of new features and bug fixes, see the release notes

Also see the previous ActiveMQ 5.4.1 Release

Release notes:

Sub-task

  • [AMQ-2791] – Add Message Priority support into PendingMessageCursor

Bug

  • [AMQ-2103] – Memory leak when marshaling ActiveMQTextMessage to persistent store
  • [AMQ-2451] – logging.properties are not found automatically by start-script
  • [AMQ-2452] – Option to “activemq-admin” do not work properly
  • [AMQ-2453] – start/control-script is not suitable for professional environments
  • [AMQ-2551] – Locking issue with MySQL InnoDB
  • [AMQ-2584] – Massege store is not cleaned when durable topic subscribers are refusing messages
  • [AMQ-2695] – Invalid messages in the pending queue of durable subscriptions.
  • [AMQ-2764] – For “duplex” network connection, after restart one ActiveMQ, message is missing.
  • [AMQ-2902] – ResourceAdapter logs confusing Exception upon pool connection disposal
  • [AMQ-2935] – java.io.EOFException: Chunk stream does not exist at page on broker start
  • [AMQ-2938] – ActiveMQ Console requires Jasypt bundle which is not part of the ActiveMQ features
  • [AMQ-2939] – Disable Spring 3 schema validation
  • [AMQ-2942] – Can’t configure an inactivity monitor for https transport
  • [AMQ-2945] – KahaDB corrupted when too many messages accumulate: EOFException
  • [AMQ-2948] – Support ajax clients in multiple windows/tabs in a single browser
  • [AMQ-2950] – XA transactions not rolled back when on connection close
  • [AMQ-2952] – Message groups with small prefetch
  • [AMQ-2959] – Scheduler not honoring activemq.store.dir property
  • [AMQ-2965] – ActiveMQ fails to start if no DNS resolution for hostname is available
  • [AMQ-2966] – Null messages occurring when using VM transport, topics and multiple consumers
  • [AMQ-2967] – Have Schedular support disabled by default
  • [AMQ-2970] – Fire advisory events when destinations are created/delete via JMX
  • [AMQ-2972] – STOMP over Websockets do not work in Chrome
  • [AMQ-2973] – Removing composite subscription clears all dispatched messages
  • [AMQ-2975] – New shell scripts doesn’t work well with multiple broker instances
  • [AMQ-2980] – Seeing inflight messages that are not consumed when jmsPriority is enabled and have intermittent durable consumer
  • [AMQ-2982] – Sticky KahaDB log files due to local transaction rollback
  • [AMQ-2983] – Sticky KahaDB log files due to concurrent consumer with local transaction
  • [AMQ-2985] – Missing messages in durable subscription with selector and KahaDB
  • [AMQ-2986] – StorePercentUsage is not refreshed when retrieved over JMX
  • [AMQ-2993] – Virtual topic interceptor process advisory messages
  • [AMQ-2999] – peer transport factory mapping localhost incorrectly to loopback
  • [AMQ-3002] – activemq-security.xml plugin usage
  • [AMQ-3005] – The spring.schemas file contains an invalid mapping
  • [AMQ-3007] – Kahadb LockFile.lock() leaks file descriptors if tryLock() returns an IOException
  • [AMQ-3013] – Problem with removing durable subscribers from the BrokerView
  • [AMQ-3020] – Message is lost while browsing composite queues over the network
  • [AMQ-3021] – HttpTunnelServlet leaks BlockingQueueTransport objects, causing eventual OOM on heap space
  • [AMQ-3025] – ActiveMQ child instances create their PID file in the parent’s data directory and refer to the parent’s configuration files when started
  • [AMQ-3028] – ActiveMQ broker processing slows with consumption from large store
  • [AMQ-3029] – Exception when try to browse ActiveMQBlobMessage via JMX
  • [AMQ-3035] – activemq script ignores ACTIVEMQ_SSL_OPTS from environment
  • [AMQ-3039] – Cannot import broker config using entities anymore
  • [AMQ-3049] – initialReconnectDelay on failover transport is not being honored
  • [AMQ-3050] – ActiveMQ standalone script doesn’t return with 0 when stop is called.
  • [AMQ-3052] – Memory leak in SimpleAuthenticationBroker
  • [AMQ-3054] – add property placeholder bean to activemq-jdbc.xml

Improvement

  • [AMQ-2789] – Add Support For Message Priority
  • [AMQ-2885] – Upgrade aries to 0.1-r964701
  • [AMQ-2925] – PooledConnection.getConnection() should be public
  • [AMQ-2930] – Does ActiveMQ run on Windows Vista and/or Windows 7?
  • [AMQ-2932] – A little optimization to IdGenerator and a potential issue with the counter
  • [AMQ-2988] – Allow to retrieve the JMSProperties when using JMS Stream (ActiveMQInputStream)
  • [AMQ-2989] – Upgrade xmlpull with xpp3
  • [AMQ-2990] – Allow to specify the chunk size when using JMS Stream (ActiveMQOutputStream)
  • [AMQ-2997] – Default log4j.properties has Camel set at ERROR level. Please lower this to WARN or maybe even better at INFO

New Feature

  • [AMQ-2395] – Allow JDBC persistence adapter to use custom prefixes
  • [AMQ-2927] – Implement custom brokerId assignment strategy
  • [AMQ-2940] – Add a way to select and delete scheduled/delayed messages with a message selector
  • [AMQ-2941] – Add a non-JMX way to browse and delete scheduled/delayed messages
  • [AMQ-3017] – Add support for stream data to filesystem when using BlobMessages
  • [AMQ-3044] – Enable securing created JMX connector

https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12311210&styleName=Html&version=12315625

[repost]Sify.com Architecture – A Portal at 3900 Requests Per Second

original: Sify.com Architecture – A Portal at 3900 Requests Per Second

Sify.com is one of the leading portals in India. Samachar.com, owned by the same company, is one of the top content aggregation sites in India, primarily targeting non-resident Indians around the world. Ramki Subramanian, an Architect at Sify, has been generous enough to describe the common back-end for both of these sites. One of the most notable aspects of their architecture is that Sify does not use a traditional database. They query Solr and then retrieve records from a distributed file system. Over the years many people have argued for file systems over databases. File systems work well for key-value lookups, but they don’t work for queries; using Solr is a good way around that problem. Another interesting aspect of their system is the use of Drools for intelligent cache invalidation. As more and more data is duplicated across multiple specialized services, keeping them synchronized becomes a difficult problem. A rules engine is a clever approach.

Platform / Tools

  • Linux
  • Lighty
  • PHP5
  • Memcached
  • Apache Solr
  • Apache ActiveMQ / Camel
  • GFS (clustered File System)
  • Gearman
  • Redis
  • Mule with ActiveMQ.
  • Varnish
  • Drools

Stats

  • ~150 million page views a month.
  • Serves 3900 requests per second.
  • The back-end runs on 4 blades hosting about 30 VMs.

Architecture

  • The system is completely virtualized. We also make use of most VM capabilities; for example, we move VMs across blades when one blade is down or when the load needs to be redistributed. We have templatized the VMs, so we can provision systems in less than 20 minutes. This is currently manual, but in the next version of the system we plan to automate the whole provisioning, commissioning, de-commissioning and moving around of VMs, and also auto-scaling.
  • No Databases
  • 100% Stateless
  • RESTful interface supporting: XML, JSON, JS, RSS / Atom
  • Writes and reads have different Paths.
    • Writes are queued, transformed and routed through ActiveMQ/Camel to other HTTP services. It is used as an ESB (enterprise service bus).
    • Reads, like search, are handled from PHP directly by the web-servers.
  • Solr is used as an indexing / searching engine. If somebody asks for a file by its key, it is served directly out of storage. If somebody says “give me all files where author=Todd,” it hits Solr and then storage. Queries are performed using Apache Solr as our search engine, and we run a distributed Solr setup.
  • All files are stored in the clustered file system (GFS). Queries hit Solr and it returns the data that we want. If we need the full data, we hit the storage after fetching the ids from the search. This approach makes the system completely horizontally scalable, and there is zero dependency on a database. It works very well for us after the upgrade to the latest version. We just run 2 nodes for the storage and can add a few more nodes if need be.
  • Lighty front ends GFS. Lighty is really very good for serving static files. It can casually take 8000+ requests per second for the kind of files we have (predominantly small XMLs and images).
  • All of the latest NoSQL databases like CouchDB, MongoDB, Cassandra, etc. would just be replacements for our storage layer. None of them are close to Solr/Lucene in search capability. MongoDB is the best of the lot in terms of querying, but “contains”-style searches need to be done with a regex, and that is a disaster with 5 million docs! We believe our distributed file-system based approach is more scalable than many of those NoSQL database systems for storage at this point.

Future

  • CouchDB or Hadoop or Cassandra for Event analytics (user clicks, real time graphs and trends).
  • Intelligent cache invalidation using Drools. Data will be pushed through a queue and a Drools engine will determine which URLs need to be invalidated. It will then go and clear them in our cache engine or Akamai. The approach is like this. Once a query (URL) hits our backend, we log that query. The logged query is then parsed and pushed into the Drools system, which takes that input and dynamically creates a rule if one does not already exist. That’s part A. Then our content ingestion system keeps pushing all the content it receives into a Drools queue. Once the data comes in, we fire all the rules against the content. For every matched rule, we generate the URLs and send a delete request to the cache servers (Akamai or Varnish) for those URLs. That’s part B. Part B is not as simple as described above; there will be many different cases. For example, we support “NOW”, greater than, less than, NOT, etc. in the query, and those will really give us a big headache.
    • There are mainly 2 reasons we are doing all this: very high cache-hit rates and almost immediate updates to end-users. And remember, those 2 goals have never got along well in the past!
    • I think it will perform well and scale. Drools is really good at this kind of problem. Also, on analysis we figured out that the queries are mostly constant across many days. For example, we have close to 40,000 different queries a day, and they repeat every day in almost the same pattern. Only the data for each query changes. So we could set up multiple instances and just replicate the rules across different systems; that way we can scale it horizontally too.
  • Synchronous reads, but fewer layers, less PHP intervention and socket connections.
  • Distributed (write to different shards) and asynchronous writes using a Queue/ESB (Mule).
  • Heavy caching using Varnish or Akamai.
  • Daemons to replace crons and stay closer to real-time.
  • Parallel and background processing using Gearman and automatic process additions for auto-scaling processing.
  • Realtime distribution of content using Kaazing or eJabberd to both end users and internal systems.
  • Redis for caching digests of content to determine duplicates.
  • We are looking at making the whole thing more easily administrable, and at turning VMs and processes on from within the app admin. We have looked at Apache ZooKeeper and are looking at the RESTful APIs provided by VMware and Xen to integrate them with our system. This will enable us to do auto-scaling.
  • The biggest advantage we have is that bandwidth in the data center is not a constraint, as we are an ISP ourselves. I’m looking at ways to use that advantage in the system and see how we can build clusters that can process huge amounts of content quickly, in parallel.

Lessons Learned

  • ActiveMQ proved disastrous many times! Very poor socket handling. We used to hit the TCP socket limits in less than 5 minutes after a restart. Though it is claimed to be fixed in 5.0 and 5.2, it wasn’t working for us. We tried in many ways to make it live longer, like a day at least. We hacked around it by deploying old libraries with new releases and made it stay up longer. After all that, we deployed two MQs (message queues) to make sure that at least the editorial updates of content go through OK.
    • Later we figured out that this was not the only problem; using topics was also a problem. Using a Topic with just four subscribers would make MQ hang in a few hours. We killed the whole Topic-based approach after huge hair loss and moved everything to a queue. Once the data comes in to the main queue, we push it into four different queues. Problem fixed. Of course, over a period of 15 days or so it will throw some exception or OOME (out of memory error) and force us to restart. We are just living with it. In the next version, we are using Mule to handle all of this, with clustering at the same time too. We are also trying to figure out a way to get rid of the dependency on message ordering, which will make it easier to distribute.
  • Solr
    • Restarts. We have to keep restarting it very frequently. We don’t really know the reason yet, but because it has redundancies we are better placed than with the MQ. We have gone to the extent of automating the restarts: we run a query, and if there is no response or it times out, we restart Solr.
    • Complex queries. For complex queries the response time is really poor. We have about 5 million docs, and a lot of queries return in less than a second, but when we have a query with a few “NOT”s and many fields and criteria, it takes 100+ seconds. We worked around this by splitting the query into simpler ones and merging the results in PHP space.
    • Realtime. Another serious issue is that Solr does not reflect committed changes in real time. It takes anywhere between 4 and 10 minutes! Given the industry we are in and the competition, news that is 10 minutes late makes us irrelevant. We looked at the Zoie Solr plugin, but our IDs are alphanumeric and Zoie doesn’t support that. We are looking at fixing that ourselves in Zoie.
  • GFS locking issue. This used to be a very serious issue for us. GFS would lock down the whole cluster and make our storage completely inaccessible. There was an issue with GFS 4.0; we upgraded to 5.0 and it seems to have been fine since then.
  • Lighty and PHP do not get along very well. Performance-wise both are good, but Apache/PHP is more stable. Lighty sometimes goes cranky, with PHP_FCGI processes hanging and CPU usage going to 100%.

I’d really like to thank Ramki for taking the time to write about how their system works. Hopefully you can learn something useful from their experience that will help you on your own adventures. If you would like to share the architecture of your fabulous system, both paying it forward and backward, please contact me and we’ll get started.
