Putting the following in /etc/rabbitmq/rabbitmq-env.conf
will make RabbitMQ and epmd listen only on localhost:
export RABBITMQ_NODENAME=rabbit@localhost
export RABBITMQ_NODE_IP_ADDRESS=127.0.0.1
export ERL_EPMD_ADDRESS=127.0.0.1
It takes a bit more work to configure Erlang to use only localhost for the higher-numbered distribution port (which, as far as I can tell, is used for clustering nodes). If you don't care about clustering and just want Rabbit to run fully locally, you can pass Erlang a kernel option so that it only uses the loopback interface.
To do so, create a new file in /etc/rabbitmq/ (I'll call it rabbit.config). In this file we'll put the Erlang option that we need loaded at runtime:
[{kernel,[{inet_dist_use_interface,{127,0,0,1}}]}].
If you're using the management plugin and also want to limit that to localhost, you'll need to configure its ports separately, making the rabbit.config include this:
[
{rabbitmq_management, [
{listener, [{port, 15672}, {ip, "127.0.0.1"}]}
]},
{kernel, [
{inet_dist_use_interface,{127,0,0,1}}
]}
].
(Note RabbitMQ leaves epmd running when it shuts down, so if you want to block off Erlang's clustering port, you will need to restart epmd separately from Rabbit.)
Next we need to have RabbitMQ load this at startup. Open up /etc/rabbitmq/rabbitmq-env.conf
again and put the following at the top:
export RABBITMQ_CONFIG_FILE="/etc/rabbitmq/rabbit"
This loads that config file when the rabbit server is started and will pass the options to Erlang.
You should now have all Erlang/RabbitMQ processes listening only on localhost! This can be checked with netstat -ntlap
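Putting the pieces together, the finished environment file from the steps above would look like this (note that RABBITMQ_CONFIG_FILE deliberately omits the .config extension, since RabbitMQ appends it when loading /etc/rabbitmq/rabbit.config):

```shell
# Recap: /etc/rabbitmq/rabbitmq-env.conf from the steps above.
# RABBITMQ_CONFIG_FILE has no .config extension on purpose;
# RabbitMQ adds it and ends up loading /etc/rabbitmq/rabbit.config.
export RABBITMQ_NODENAME=rabbit@localhost
export RABBITMQ_NODE_IP_ADDRESS=127.0.0.1
export ERL_EPMD_ADDRESS=127.0.0.1
export RABBITMQ_CONFIG_FILE="/etc/rabbitmq/rabbit"
```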
EDIT: In older versions of RabbitMQ, the environment configuration file was /etc/rabbitmq/rabbitmq.conf. That file has since been replaced by rabbitmq-env.conf.
I had the same problem, but there is a little understanding to be done, as well as a possible pitfall.
Firstly, I was fooled by the fact that I hadn't passed my vhost to the command:
rabbitmqctl set_policy -p myvhost HA '.*' '{"ha-mode": "all"}'
Without the -p flag, the vhost defaults to "/".
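To confirm the policy really landed on the intended vhost rather than the default "/", you can list the policies there (this assumes a running broker and reuses the myvhost name from above):

```shell
# The HA policy should show up here, on the vhost you actually use,
# not on the default "/" vhost (requires a running broker).
rabbitmqctl list_policies -p myvhost
```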
After this, when I logged onto the web console, I saw that the node field was now reporting two nodes. Great :-)
However, if you bring one up and down, then the other up and down, the queue disappears!? This is because there is NO "synchronisation" in the mirroring, ONLY "stacking". Meaning if you bring a node down, the rest of the messages are served from the remaining node (or nodes). If you bring a new/existing node up, it will only mirror NEW messages that are added.
I'm fairly new to this, so I would assume that having 3 nodes would be far better than two. This means that if one node goes down, there is still resilience across the other two nodes (depending on what your business case is, right). Of course if two nodes go down, you have lost replication for anything left in the queues. I reckon this should be called the "3 strike setup"!
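For completeness on the "stacking" behaviour described above: with classic mirrored queues the default ha-sync-mode is manual, which is exactly why a newly joined mirror only picks up new messages. If your RabbitMQ version supports it, setting ha-sync-mode to automatic makes a joining mirror copy the existing queue contents as well. A sketch, reusing the myvhost and HA names from the earlier answer:

```shell
# Variant of the earlier policy: "ha-sync-mode": "automatic" makes a
# newly joined mirror synchronise existing messages too, instead of
# only mirroring messages added after it joined.
# (Classic mirrored queues; requires a running broker.)
rabbitmqctl set_policy -p myvhost HA '.*' '{"ha-mode": "all", "ha-sync-mode": "automatic"}'
```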
Best Answer
The RabbitMQ team monitors this mailing list and only sometimes answers questions on StackOverflow.
You don't provide details about what RabbitMQ version, Erlang version, operating system, or Node.js library you are using. In the future, please always provide people with details like this to make it easier for them to help out.
Since you are new to RabbitMQ clustering, I highly recommend that you read our documentation for it: https://www.rabbitmq.com/clustering.html
Both nodes must be running the same version of RabbitMQ and Erlang. You create the cluster according to the documentation by running the clustering commands on one node. If you were successful, the
rabbitmqctl cluster_status
command will indicate so.
I am assuming that you are using the squaremo/amqp.node library. That library does not support providing multiple hosts in the connection URI. You must implement that yourself, and you would also have to implement connection recovery yourself. Please see this GitHub issue.
However, one library that does support multiple nodes and reconnection is called
rascal
. You can either use rascal or borrow code from it (here, for example).