If multiple Artemis instances are started standalone, the instances are unrelated to each other and cannot provide high availability.
Cluster
Multiple instances therefore need to be grouped together into a cluster. Within a cluster, instances can fail over, balance the message load, and redistribute messages.
This requires listing the IP and port of every instance in the cluster configuration section.
However, when a cluster instance goes down, the messages it holds go down with it. To avoid losing them, high availability can be implemented in two ways: shared store and message replication.
Shared store
A shared-store pair consists of a master and a slave instance that point at the same message data directory. At any given time only one of the two, the master, serves clients. When the master goes down, the slave takes over and becomes the master. Because the messages live in the shared directory, none are lost when the slave starts serving.
Message replication
A replication pair likewise consists of a master and a slave: the slave replicates the messages held on its master, so it always has a backup of them. When the master goes down, the slave becomes the master, and because the messages were already replicated, none are lost.
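The configurations generated below use only the shared-store variant of the ha-policy block. For a replicated pair, broker.xml would instead contain something along these lines (a minimal sketch; the element values are illustrative, not tuned recommendations):

```xml
<!-- master side of a replication pair -->
<ha-policy>
  <replication>
    <master>
      <check-for-live-server>true</check-for-live-server>
    </master>
  </replication>
</ha-policy>

<!-- slave side: replicates the master's journal over the network -->
<ha-policy>
  <replication>
    <slave>
      <allow-failback>true</allow-failback>
    </slave>
  </replication>
</ha-policy>
```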
Message redistribution
Messages are distributed across the instances of the cluster. With redistribution enabled, even if only a single consumer connects to the cluster, all messages will still be consumed; there is no need for different consumers to connect to different instances.
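Redistribution is driven by the redistribution-delay address setting, working together with ON_DEMAND load balancing on the cluster connection. The node configurations later in this post use a delay of 0, i.e. redistribute as soon as an address has no matching local consumer:

```xml
<address-settings>
  <address-setting match="#">
    <!-- 0 = redistribute immediately once no local consumer matches;
         the default of -1 disables redistribution entirely -->
    <redistribution-delay>0</redistribution-delay>
  </address-setting>
</address-settings>
```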
Commands for creating the cluster instances
create-artemis-cluster.sh
#! /bin/bash
MQ_PORT=$1
ARTEMIS_HOME=$2
HOST=$3
HTTP_PORT=$4
STATIC_CONNECTOR=$5
CLUSTER_NAME=$6
INSTANCE_FOLDER=$7
DATA_FOLDER=$8
HA_MODE=$9
IS_MASTER_OR_SLAVE=${10}
echo $MQ_PORT
echo $IS_MASTER_OR_SLAVE
rm -rf "$INSTANCE_FOLDER"
if [ "$HA_MODE" == "shared-store" ]
then
echo "shared-store mode"
DATA_FOLDER=$ARTEMIS_HOME/data/$CLUSTER_NAME/$DATA_FOLDER
else
echo "replicated mode"
DATA_FOLDER=data
fi
if [ "$STATIC_CONNECTOR" == "no" ]
then
echo "no staticCluster"
STATIC_CONNECTOR=""
else
echo "need staticCluster"
STATIC_CONNECTOR="--staticCluster $STATIC_CONNECTOR"
fi
./bin/artemis create \
--no-amqp-acceptor \
--no-mqtt-acceptor \
--no-stomp-acceptor \
--no-hornetq-acceptor \
--clustered \
$STATIC_CONNECTOR \
--max-hops 1 \
--$HA_MODE \
$IS_MASTER_OR_SLAVE \
--cluster-user admin-cluster \
--cluster-password admin-cluster \
--failover-on-shutdown true \
--data $DATA_FOLDER \
--default-port $MQ_PORT \
--encoding UTF-8 \
--home $ARTEMIS_HOME \
--name $HOST:$MQ_PORT \
--host $HOST \
--http-host $HOST \
--http-port $HTTP_PORT \
--user admin \
--password admin \
--require-login \
--role admin \
--use-client-auth \
$INSTANCE_FOLDER
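One shell subtlety in the script above: the tenth positional parameter must be written as ${10}. Without the braces, $10 expands as the first parameter followed by a literal 0:

```shell
#!/bin/bash
# Set ten positional parameters, a through j
set -- a b c d e f g h i j
echo "$10"    # ${1} plus a literal "0" -> prints "a0"
echo "${10}"  # the actual tenth parameter -> prints "j"
```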
create-artemis-cluster-node1.sh
#! /bin/bash
BIN_PATH=$(cd `dirname $0`; pwd)
cd $BIN_PATH/../
ARTEMIS_HOME=`pwd`
echo $ARTEMIS_HOME
MASTER_IP_1=10.10.27.69
SLAVE_IP_1=10.10.27.69
MASTER_IP_2=10.10.27.69
SLAVE_IP_2=10.10.27.69
MASTER_PORT_1=62616
SLAVE_PORT_1=62617
MASTER_PORT_2=62618
SLAVE_PORT_2=62619
MASTER_HTTP_PORT_1=8261
SLAVE_HTTP_PORT_1=8262
MASTER_HTTP_PORT_2=8263
SLAVE_HTTP_PORT_2=8264
MASTER_NODE_1=node1
SLAVE_NODE_1=node2
MASTER_NODE_2=node3
SLAVE_NODE_2=node4
STATIC_CONNECTOR=tcp://$SLAVE_IP_1:$SLAVE_PORT_1,tcp://$MASTER_IP_2:$MASTER_PORT_2,tcp://$SLAVE_IP_2:$SLAVE_PORT_2
CLUSTER=cluster-1
DATA_FOLDER=data-1
HA_MODE=shared-store
INSTANCE_FOLDER=brokers/$HA_MODE/$CLUSTER/$MASTER_NODE_1
./bin/create-artemis-cluster.sh $MASTER_PORT_1 $ARTEMIS_HOME $MASTER_IP_1 $MASTER_HTTP_PORT_1 $STATIC_CONNECTOR $CLUSTER $INSTANCE_FOLDER $DATA_FOLDER $HA_MODE
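create-artemis-cluster-node1.sh creates a shared-store master (the tenth argument, IS_MASTER_OR_SLAVE, is simply omitted). The slave of the pair would be created the same way with --slave appended as the tenth argument. A hypothetical create-artemis-cluster-node2.sh is sketched below as a dry run (it echoes the command rather than executing it); the paths mirror node1 and the --slave flag is an assumption based on the artemis create CLI:

```shell
#!/bin/bash
# Hypothetical slave of node1: same cluster and shared data folder,
# its own port, HTTP port and instance folder, plus --slave as argument 10.
ARTEMIS_HOME=/opt/artemis   # placeholder; the real script derives this from pwd
SLAVE_IP_1=10.10.27.69
SLAVE_PORT_1=62617
SLAVE_HTTP_PORT_1=8262
STATIC_CONNECTOR=tcp://10.10.27.69:62616,tcp://10.10.27.69:62618,tcp://10.10.27.69:62619
CLUSTER=cluster-1
DATA_FOLDER=data-1          # must match the master's folder for shared-store
HA_MODE=shared-store
INSTANCE_FOLDER=brokers/$HA_MODE/$CLUSTER/node2
CMD="./bin/create-artemis-cluster.sh $SLAVE_PORT_1 $ARTEMIS_HOME $SLAVE_IP_1 $SLAVE_HTTP_PORT_1 $STATIC_CONNECTOR $CLUSTER $INSTANCE_FOLDER $DATA_FOLDER $HA_MODE --slave"
echo "$CMD"                 # dry run: print the command instead of running it
```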
create-artemis-cluster-replication-node1.sh
#! /bin/bash
BIN_PATH=$(cd `dirname $0`; pwd)
cd $BIN_PATH/../
ARTEMIS_HOME=`pwd`
echo $ARTEMIS_HOME
MASTER_IP_1=10.10.27.69
SLAVE_IP_1=10.10.27.69
MASTER_IP_2=10.10.27.69
SLAVE_IP_2=10.10.27.69
MASTER_IP_3=10.10.27.69
SLAVE_IP_3=10.10.27.69
MASTER_PORT_1=63616
SLAVE_PORT_1=63617
MASTER_PORT_2=63618
SLAVE_PORT_2=63619
MASTER_PORT_3=63620
SLAVE_PORT_3=63621
MASTER_HTTP_PORT_1=8361
SLAVE_HTTP_PORT_1=8362
MASTER_HTTP_PORT_2=8363
SLAVE_HTTP_PORT_2=8364
MASTER_HTTP_PORT_3=8365
SLAVE_HTTP_PORT_3=8366
MASTER_NODE_1=node1
SLAVE_NODE_1=node2
MASTER_NODE_2=node3
SLAVE_NODE_2=node4
MASTER_NODE_3=node5
SLAVE_NODE_3=node6
#STATIC_CONNECTOR=tcp://$SLAVE_IP_1:$SLAVE_PORT_1,tcp://$MASTER_IP_2:$MASTER_PORT_2,tcp://$SLAVE_IP_2:$SLAVE_PORT_2,tcp://$MASTER_IP_3:$MASTER_PORT_3,tcp://$SLAVE_IP_3:$SLAVE_PORT_3
STATIC_CONNECTOR=no
CLUSTER_1=cluster-1
CLUSTER_2=cluster-2
DATA_FOLDER_1=data-1
DATA_FOLDER_2=data-2
DATA_FOLDER_3=data
HA_MODE_1=shared-store
HA_MODE_2=replicated
CURRENT_IP=$MASTER_IP_1
CURRENT_PORT=$MASTER_PORT_1
CURRENT_HTTP_PORT=$MASTER_HTTP_PORT_1
CURRENT_NODE=$MASTER_NODE_1
CURRENT_CLUSTER=$CLUSTER_2
CURRENT_DATA_FOLDER=$DATA_FOLDER_3
CURRENT_HA_MODE=$HA_MODE_2
INSTANCE_FOLDER=brokers/$CURRENT_HA_MODE/$CURRENT_CLUSTER/$CURRENT_NODE
./bin/create-artemis-cluster.sh $CURRENT_PORT $ARTEMIS_HOME $CURRENT_IP $CURRENT_HTTP_PORT $STATIC_CONNECTOR $CURRENT_CLUSTER $INSTANCE_FOLDER $CURRENT_DATA_FOLDER $CURRENT_HA_MODE
Artemis configuration files
node1:
<?xml version='1.0'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>10.10.27.69</name>
<persistence-enabled>true</persistence-enabled>
<!-- this could be ASYNCIO, MAPPED, NIO
ASYNCIO: Linux Libaio
MAPPED: mmap files
NIO: Plain Java Files
-->
<journal-type>ASYNCIO</journal-type>
<paging-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/paging</paging-directory>
<bindings-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/bindings</bindings-directory>
<journal-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/journal</journal-directory>
<large-messages-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<!--
This value was determined through a calculation.
Your system could perform 62.5 writes per millisecond
on the current journal configuration.
That translates as a sync write every 16000 nanoseconds.
Note: If you specify 0 the system will perform writes directly to the disk.
We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
-->
<journal-buffer-timeout>16000</journal-buffer-timeout>
<!--
When using ASYNCIO, this will determine the writing queue depth for libaio.
-->
<journal-max-io>4096</journal-max-io>
<!--
You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
<network-check-NIC>theNicName</network-check-NIC>
-->
<!--
Use this to use an HTTP server to validate the network
<network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
<!-- <network-check-period>10000</network-check-period> -->
<!-- <network-check-timeout>1000</network-check-timeout> -->
<!-- this is a comma separated list, no spaces, just DNS or IPs
it should accept IPV6
Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
Using IPs that could eventually disappear or be partially visible may defeat the purpose.
You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running -->
<!-- <network-check-list>10.0.0.1</network-check-list> -->
<!-- use this to customize the ping used for ipv4 addresses -->
<!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->
<!-- use this to customize the ping used for ipv6 addresses -->
<!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->
<connectors>
<!-- Connector used to be announced through cluster connections and notifications -->
<connector name="artemis">tcp://10.10.27.69:62616</connector>
<connector name="node2">tcp://10.10.27.69:62617</connector>
<connector name="node3">tcp://10.10.27.69:62618</connector>
<connector name="node4">tcp://10.10.27.69:62619</connector>
</connectors>
<!-- how often we are looking for how many bytes are being used on the disk in ms -->
<disk-scan-period>5000</disk-scan-period>
<!-- once the disk hits this limit the system will block, or close the connection in certain protocols
that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>212000</page-sync-timeout>
<!-- the system will enter into page mode once you hit this limit.
This is an estimate in bytes of how much the messages are using in memory
The system will use half of the available memory (-Xmx) by default for the global-max-size.
You may specify a different value here if you need to customize it to your needs.
<global-max-size>100Mb</global-max-size>
-->
<acceptors>
<!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
<!-- amqpCredits: The number of credits sent to AMQP producers -->
<!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
<!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
as duplicate detection requires applicationProperties to be parsed on the server. -->
<!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
default: 102400, -1 would mean to disable large message control -->
<!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
"anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://10.10.27.69:62616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
</acceptors>
<cluster-user>admin-cluster</cluster-user>
<cluster-password>admin-cluster</cluster-password>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>artemis</connector-ref>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>node2</connector-ref>
<connector-ref>node3</connector-ref>
<connector-ref>node4</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<ha-policy>
<shared-store>
<master>
<failover-on-shutdown>true</failover-on-shutdown>
</master>
</shared-store>
</ha-policy>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="admin"/>
<permission type="deleteNonDurableQueue" roles="admin"/>
<permission type="createDurableQueue" roles="admin"/>
<permission type="deleteDurableQueue" roles="admin"/>
<permission type="createAddress" roles="admin"/>
<permission type="deleteAddress" roles="admin"/>
<permission type="consume" roles="admin"/>
<permission type="browse" roles="admin"/>
<permission type="send" roles="admin"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="admin"/>
</security-setting>
</security-settings>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
<redistribution-delay>0</redistribution-delay>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ" />
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>
</addresses>
<!-- Uncomment the following if you want to use the Standard LoggingActiveMQServerPlugin pluging to log in events
<broker-plugins>
<broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
<property key="LOG_ALL_EVENTS" value="true"/>
<property key="LOG_CONNECTION_EVENTS" value="true"/>
<property key="LOG_SESSION_EVENTS" value="true"/>
<property key="LOG_CONSUMER_EVENTS" value="true"/>
<property key="LOG_DELIVERING_EVENTS" value="true"/>
<property key="LOG_SENDING_EVENTS" value="true"/>
<property key="LOG_INTERNAL_EVENTS" value="true"/>
</broker-plugin>
</broker-plugins>
-->
</core>
</configuration>
node2:
<?xml version='1.0'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>10.10.27.69</name>
<persistence-enabled>true</persistence-enabled>
<!-- this could be ASYNCIO, MAPPED, NIO
ASYNCIO: Linux Libaio
MAPPED: mmap files
NIO: Plain Java Files
-->
<journal-type>ASYNCIO</journal-type>
<paging-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/paging</paging-directory>
<bindings-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/bindings</bindings-directory>
<journal-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/journal</journal-directory>
<large-messages-directory>/opt/contech/artemis/apache-artemis-2.15.0/data/cluster-1/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<!--
This value was determined through a calculation.
Your system could perform 25 writes per millisecond
on the current journal configuration.
That translates as a sync write every 40000 nanoseconds.
Note: If you specify 0 the system will perform writes directly to the disk.
We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
-->
<journal-buffer-timeout>40000</journal-buffer-timeout>
<!--
When using ASYNCIO, this will determine the writing queue depth for libaio.
-->
<journal-max-io>4096</journal-max-io>
<!--
You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
<network-check-NIC>theNicName</network-check-NIC>
-->
<!--
Use this to use an HTTP server to validate the network
<network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
<!-- <network-check-period>10000</network-check-period> -->
<!-- <network-check-timeout>1000</network-check-timeout> -->
<!-- this is a comma separated list, no spaces, just DNS or IPs
it should accept IPV6
Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
Using IPs that could eventually disappear or be partially visible may defeat the purpose.
You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running -->
<!-- <network-check-list>10.0.0.1</network-check-list> -->
<!-- use this to customize the ping used for ipv4 addresses -->
<!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->
<!-- use this to customize the ping used for ipv6 addresses -->
<!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->
<connectors>
<!-- Connector used to be announced through cluster connections and notifications -->
<connector name="artemis">tcp://10.10.27.69:62617</connector>
<connector name="node1">tcp://10.10.27.69:62616</connector>
<connector name="node3">tcp://10.10.27.69:62618</connector>
<connector name="node4">tcp://10.10.27.69:62619</connector>
</connectors>
<!-- how often we are looking for how many bytes are being used on the disk in ms -->
<disk-scan-period>5000</disk-scan-period>
<!-- once the disk hits this limit the system will block, or close the connection in certain protocols
that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>180000</page-sync-timeout>
<!-- the system will enter into page mode once you hit this limit.
This is an estimate in bytes of how much the messages are using in memory
The system will use half of the available memory (-Xmx) by default for the global-max-size.
You may specify a different value here if you need to customize it to your needs.
<global-max-size>100Mb</global-max-size>
-->
<acceptors>
<!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
<!-- amqpCredits: The number of credits sent to AMQP producers -->
<!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
<!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
as duplicate detection requires applicationProperties to be parsed on the server. -->
<!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
default: 102400, -1 would mean to disable large message control -->
<!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
"anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://10.10.27.69:62617?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
</acceptors>
<cluster-user>admin-cluster</cluster-user>
<cluster-password>admin-cluster</cluster-password>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>artemis</connector-ref>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>node1</connector-ref>
<connector-ref>node3</connector-ref>
<connector-ref>node4</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<ha-policy>
<shared-store>
<slave>
<failover-on-shutdown>true</failover-on-shutdown>
</slave>
</shared-store>
</ha-policy>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="admin"/>
<permission type="deleteNonDurableQueue" roles="admin"/>
<permission type="createDurableQueue" roles="admin"/>
<permission type="deleteDurableQueue" roles="admin"/>
<permission type="createAddress" roles="admin"/>
<permission type="deleteAddress" roles="admin"/>
<permission type="consume" roles="admin"/>
<permission type="browse" roles="admin"/>
<permission type="send" roles="admin"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="admin"/>
</security-setting>
</security-settings>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
<redistribution-delay>0</redistribution-delay>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ" />
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>
</addresses>
<!-- Uncomment the following if you want to use the Standard LoggingActiveMQServerPlugin pluging to log in events
<broker-plugins>
<broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
<property key="LOG_ALL_EVENTS" value="true"/>
<property key="LOG_CONNECTION_EVENTS" value="true"/>
<property key="LOG_SESSION_EVENTS" value="true"/>
<property key="LOG_CONSUMER_EVENTS" value="true"/>
<property key="LOG_DELIVERING_EVENTS" value="true"/>
<property key="LOG_SENDING_EVENTS" value="true"/>
<property key="LOG_INTERNAL_EVENTS" value="true"/>
</broker-plugin>
</broker-plugins>
-->
</core>
</configuration>
node3 (excerpt: only the sections that differ from node1 are shown):
<acceptors>
<acceptor name="artemis">tcp://10.10.27.69:62618?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
</acceptors>
<connectors>
<!-- Connector used to be announced through cluster connections and notifications -->
<connector name="artemis">tcp://10.10.27.69:62618</connector>
<connector name="node1">tcp://10.10.27.69:62616</connector>
<connector name="node2">tcp://10.10.27.69:62617</connector>
<connector name="node4">tcp://10.10.27.69:62619</connector>
</connectors>
<cluster-user>admin-cluster</cluster-user>
<cluster-password>admin-cluster</cluster-password>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>artemis</connector-ref>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>node1</connector-ref>
<connector-ref>node2</connector-ref>
<connector-ref>node4</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<ha-policy>
<shared-store>
<master>
<failover-on-shutdown>true</failover-on-shutdown>
</master>
</shared-store>
</ha-policy>
<address-settings>
<address-setting match="#">
<redistribution-delay>0</redistribution-delay>
</address-setting>
</address-settings>
node4 (excerpt: only the sections that differ from node1 are shown):
<acceptors>
<acceptor name="artemis">tcp://10.10.27.69:62619?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
</acceptors>
<connectors>
<!-- Connector used to be announced through cluster connections and notifications -->
<connector name="artemis">tcp://10.10.27.69:62619</connector>
<connector name="node1">tcp://10.10.27.69:62616</connector>
<connector name="node2">tcp://10.10.27.69:62617</connector>
<connector name="node3">tcp://10.10.27.69:62618</connector>
</connectors>
<cluster-user>admin-cluster</cluster-user>
<cluster-password>admin-cluster</cluster-password>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>artemis</connector-ref>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>node1</connector-ref>
<connector-ref>node2</connector-ref>
<connector-ref>node3</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<ha-policy>
<shared-store>
<master>
<failover-on-shutdown>true</failover-on-shutdown>
</master>
</shared-store>
</ha-policy>
<address-settings>
<address-setting match="#">
<redistribution-delay>0</redistribution-delay>
</address-setting>
</address-settings>
Java Client
Because the cluster consists of multiple active/passive pairs, the Qpid JMS client is used so that the client can fail over across all of the active brokers. With the native Artemis client alone, a client can only fail over within a single active/passive pair, not across multiple actives.
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.paul</groupId>
<artifactId>test-artemis-cluster</artifactId>
<version>0.0.1-SNAPSHOT</version>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<java.version>1.8</java.version>
<maven.compiler.source>${java.version}</maven.compiler.source>
<maven.compiler.target>${java.version}</maven.compiler.target>
<qpid.jms.version>0.23.0</qpid.jms.version>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.qpid</groupId>
<artifactId>qpid-jms-client</artifactId>
<version>${qpid.jms.version}</version>
</dependency>
</dependencies>
</project>
jndi.properties
java.naming.factory.initial=org.apache.qpid.jms.jndi.JmsInitialContextFactory
java.naming.security.principal=admin
java.naming.security.credentials=admin
connectionFactory.ConnectionFactory=failover:(amqp://10.10.27.69:62616,amqp://10.10.27.69:62617,amqp://10.10.27.69:62618,amqp://10.10.27.69:62619)
queue.queue/exampleQueue=TEST
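The failover: URI also accepts client-side tuning options. As a hedged example using two options from the Qpid JMS failover documentation (the values here are illustrative, not recommendations):

```properties
# Cap reconnect attempts instead of retrying forever, and start retrying after 100 ms
connectionFactory.ConnectionFactory=failover:(amqp://10.10.27.69:62616,amqp://10.10.27.69:62617,amqp://10.10.27.69:62618,amqp://10.10.27.69:62619)?failover.maxReconnectAttempts=20&failover.initialReconnectDelay=100
```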
Receiver.java
package com.paul.testartemiscluster;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;
public class Receiver {
public static void main(String args[]) throws Exception{
try {
InitialContext context = new InitialContext();
Queue queue = (Queue) context.lookup("queue/exampleQueue");
ConnectionFactory cf = (ConnectionFactory) context.lookup("ConnectionFactory");
try (
Connection connection = cf.createConnection("admin", "admin");
) {
Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
MessageConsumer consumer = session.createConsumer(queue);
connection.start();
while (true) {
for (int j = 0 ; j < 10; j++) {
TextMessage receive = (TextMessage) consumer.receive(0);
System.out.println("received message " + receive.getText());
}
session.commit();
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
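The post shows only the consumer side. For completeness, here is a minimal sender sketch against the same jndi.properties (it needs the running cluster, so treat it as illustrative; the transacted commit mirrors the receiver):

```java
package com.paul.testartemiscluster;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class Sender {
    public static void main(String[] args) throws Exception {
        InitialContext context = new InitialContext();
        Queue queue = (Queue) context.lookup("queue/exampleQueue");
        ConnectionFactory cf = (ConnectionFactory) context.lookup("ConnectionFactory");
        try (Connection connection = cf.createConnection("admin", "admin")) {
            // Transacted session: messages become visible to consumers on commit()
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(queue);
            for (int i = 0; i < 10; i++) {
                TextMessage message = session.createTextMessage("message " + i);
                producer.send(message);
            }
            session.commit();
            System.out.println("sent 10 messages");
        }
    }
}
```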
Reference
Networks of Brokers in AMQ 7 Broker (Clustering)
Installing the AMQ7 Broker
Architecting messaging solutions with Apache ActiveMQ Artemis
Artemis Clustering (18)
Artemis Network Isolation (Split Brain) (9)
Artemis High Availability and Failover (19)
Configuring Multi-Node Messaging Systems
Creating a broker in ActiveMQ Artemis
ActiveMQ Artemis cluster does not redistribute messages after one instance crash