The RabbitMQ Trigger is responsible for consuming messages from a RabbitMQ broker.

This trigger supports two message acknowledgement strategies (receipt confirmation):

1. Automatic acknowledge

Each message received by the trigger is confirmed automatically and immediately upon reception, and the broker considers it delivered. On the one hand, automatic acknowledgement guarantees great performance; on the other hand, it doesn't prevent message loss if the associated pipeline fails to process the message or processes it incorrectly.

2. Manual acknowledge

Each message received by the trigger is kept as "unacked" (unacknowledged) while it's being processed by the pipeline. With manual acknowledgement, the RabbitMQ broker understands the message is pending and, if there's any problem in the trigger infrastructure or the pipeline responds with an error, the message can be redelivered. The number of consumers configured for the pipeline dictates how many messages can be processed at the same time. For that reason, the RabbitMQ prefetch count is configured with the same value as the pipeline's number of consumers.
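Outside Digibee, the two strategies can be sketched with a generic AMQP consumer callback. The snippet below is a minimal illustration only, not the trigger's actual implementation: the `channel` object stands in for an AMQP channel (the method names follow pika's style), and `process` is a hypothetical placeholder for the pipeline's work.

```python
# Hedged sketch of the two acknowledgement strategies. The channel
# methods (basic_ack, basic_nack, basic_qos) mirror a pika-style
# AMQP client; all names here are illustrative assumptions.

def on_message_auto_ack(channel, method, properties, body):
    # Auto acknowledge: the broker already considers the message
    # delivered, so a failure inside process() loses the message.
    process(body)

def on_message_manual_ack(channel, method, properties, body):
    # Manual acknowledge: the message stays "unacked" until confirmed.
    try:
        process(body)
        channel.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        # Negative ack with requeue: the broker may redeliver it.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

def configure_channel(channel, consumers):
    # Mirrors the trigger's behavior: the prefetch count equals the
    # number of pipeline consumers, bounding in-flight unacked messages.
    channel.basic_qos(prefetch_count=consumers)

def process(body):
    # Hypothetical stand-in for the pipeline's processing.
    if body == b"bad":
        raise ValueError("processing failed")
```

With manual acknowledgement, an unhandled error leads to a negative acknowledgement, which is what allows the broker to redeliver the message.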

Take a look at the configuration parameters of this trigger:

  • Account: name of the account to be used (it must be a basic-type account).

  • Hostname: address of the RabbitMQ host.

  • Port: port where the RabbitMQ is listening.

  • Virtual Host: configuration of virtual host that defines the RabbitMQ tenant to be accessed.

  • Auto Acknowledge: if "true", the message will be confirmed as soon as it reaches the trigger, without waiting for a response from the associated pipeline; if "false", the message will remain pending while the pipeline processes it.

  • Binary Message: if "true", the message to be received is treated as binary and its content is presented in Base64; if "false", the message is presented as text.

  • Maximum Timeout: maximum time (in milliseconds) the pipeline is allowed to run.

  • Expiration: maximum time a message waits in the queue before expiring.

  • Allow Redelivery of Messages: if activated, the message is redelivered and the pipeline executes again in case of error; otherwise, there's no further execution after an error.

IMPORTANT: the RabbitMQ client doesn't allow limiting the size of messages. If overly large messages are sent, Digibee's trigger infrastructure may refuse them. We advise against sending very large messages through message buses.

Consumers

The consumer configuration set during pipeline deployment directly impacts the consumption throughput and message output when the RabbitMQ Trigger is activated. If auto acknowledge is disabled, the number of consumers becomes even more important, because the number of messages processed simultaneously equals the number of consumers.

Queues, routing keys and exchanges declaration

RabbitMQ Trigger doesn't declare queues, routing keys, or exchanges in the RabbitMQ broker. For the trigger to consume messages from queues, all of this configuration is expected to be in place beforehand.
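Since the trigger doesn't create this topology, it has to be declared beforehand, for example by an administrator or the producing application. A hedged sketch of such a declaration with a generic AMQP channel follows; the method names mirror pika's BlockingChannel API, and the queue, exchange, and routing key are hypothetical.

```python
def declare_topology(channel):
    # Declarations the RabbitMQ Trigger expects to already exist.
    # Method names follow a pika-style channel API; the names
    # "orders.exchange", "orders.queue", and "orders.created" are
    # illustrative assumptions, not Digibee defaults.
    channel.exchange_declare(exchange="orders.exchange",
                             exchange_type="direct", durable=True)
    channel.queue_declare(queue="orders.queue", durable=True)
    channel.queue_bind(queue="orders.queue",
                       exchange="orders.exchange",
                       routing_key="orders.created")
```

Once the queue exists and is bound to an exchange, the trigger only needs the queue name to start consuming.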

Message format in the pipeline input

Pipelines associated with RabbitMQ Trigger receive the following message as input:

{
  "body": <STRING message content; if binary, then Base64>,
  "properties": {
    "appId": <STRING application id>,
    "classId": <STRING class id>,
    "clusterId": <STRING cluster id>,
    "contentEncoding": <STRING content encoding>,
    "contentType": <STRING message content type>,
    "correlationId": <STRING correlation id>,
    "deliveryMode": <INT delivery mode>,
    "expiration": <STRING message expiration in ms>,
    "messageId": <STRING message id>,
    "priority": <INT message priority>,
    "replyTo": <STRING reply-to queue>,
    "type": <STRING message type>,
    "userId": <STRING user id>,
    "timestamp": <LONG message timestamp>
  },
  "headers": {
    "header1": "value1", ...
  },
  "envelope": {
    "deliveryTag": <LONG message delivery tag>,
    "exchange": <STRING exchange that processed the message>,
    "routingKey": <STRING routing key used to route the message>
  }
}
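When Binary Message is enabled, the "body" field arrives Base64-encoded and must be decoded inside the pipeline. A minimal sketch of that decoding, where the input dictionary is a hypothetical instance of the structure above:

```python
import base64
import json

def extract_body(message, binary=False):
    # Returns the message content from a trigger input; decodes
    # Base64 when the trigger was deployed with Binary Message "true".
    body = message["body"]
    return base64.b64decode(body) if binary else body

# Hypothetical trigger input following the structure shown above.
trigger_input = json.loads("""{
  "body": "aGVsbG8=",
  "properties": {"contentType": "application/octet-stream"},
  "headers": {},
  "envelope": {"deliveryTag": 1, "exchange": "", "routingKey": "my.queue"}
}""")
```

Calling `extract_body(trigger_input, binary=True)` yields the raw bytes of the original message; with `binary=False` the text body is returned as-is.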
