When a pipeline is configured and published with any Scheduler Trigger variant, a function is created that executes the flow at pre-established intervals. These intervals follow a cron expression, defined in the configuration of this trigger type.

Click here to learn more about cron expressions.

Scheduler Trigger has 4 types. They are:

  • 5-Minute Scheduler: comes pre-configured with a 5-minute interval. When you deploy a pipeline with this option, executions are scheduled every 5 minutes.

  • 30-Minute Scheduler: comes pre-configured with a 30-minute interval. When you deploy a pipeline with this option, executions are scheduled every 30 minutes.

  • Midnight Scheduler: comes pre-configured to be triggered at midnight. When you deploy a pipeline with this option, executions are scheduled for midnight.

  • Custom Scheduler: has no pre-configuration, allowing you to define a custom cron expression. When you deploy a pipeline with this option, executions are scheduled according to the cron expression you specified.
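The pre-configured options above map to fixed schedules. As a hedged illustration in Python, assuming a Quartz-style, seconds-first cron syntax (the exact strings are assumptions for illustration, not values taken from the platform):

```python
# Assumed cron equivalents for the pre-configured schedulers, using
# Quartz-style fields: seconds minutes hours day-of-month month day-of-week.
# These strings are illustrative, not the platform's internal values.
PRESET_CRONS = {
    "5-Minute Scheduler": "0 0/5 * * * ?",    # at second 0, every 5 minutes
    "30-Minute Scheduler": "0 0/30 * * * ?",  # at second 0, every 30 minutes
    "Midnight Scheduler": "0 0 0 * * ?",      # every day at 00:00:00 (UTC)
}

for name, expr in PRESET_CRONS.items():
    # Sanity check: a seconds-first cron expression has six fields.
    assert len(expr.split()) == 6
    print(f"{name}: {expr}")
```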

IMPORTANT: the Midnight Scheduler doesn’t allow the Time Zone to be configured. That way, the execution happens at midnight UTC, which can differ from midnight in your Time Zone. If you need to configure the Time Zone, use the Custom Scheduler and define the midnight-recurrence information in its parameters.
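To see the difference in practice, this standard-library Python snippet converts UTC midnight to the Sao Paulo Time Zone (the example date is arbitrary):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# Midnight UTC on an arbitrary example date...
utc_midnight = datetime(2024, 6, 1, 0, 0, tzinfo=timezone.utc)

# ...is 21:00 of the previous day in Sao Paulo (UTC-3, no DST since 2019).
local = utc_midnight.astimezone(ZoneInfo("America/Sao_Paulo"))
print(local.strftime("%Y-%m-%d %H:%M"))  # 2024-05-31 21:00
```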

Take a look at the configuration parameters of Scheduler Trigger:

  • Cron Expression: expression that defines the seconds, minutes, hours, and day recurrence of a pipeline. You can find more information about the expression format by clicking here. To learn how to build the expressions, click here.

  • Time Zone: defines under which Time Zone the pipeline will be executed. If no Time Zone is defined, the UTC standard will be followed (12h UTC corresponds to 9h in the Sao Paulo Time Zone, for example).

  • Maximum Timeout: maximum time, in milliseconds, for the pipeline to process information before a response is returned (default = 30000, i.e. 30 seconds; limit = 900000, i.e. 15 minutes). If the processing takes longer than this value, the execution is terminated.

  • Retries: maximum number of tries if the execution fails.

  • Allow Redelivery Of Messages: if this option is enabled, the message can be redelivered when the Pipeline Engine fails. Read the article about the Pipeline Engine for more details.

  • Allow Concurrent Scheduling: indicates whether the pipeline should start a new execution even if previous executions are still running. Say a pipeline is configured to execute every 3 minutes, but one of the previous executions took 4 minutes to finish. In that case, there are two scenarios:

- If enabled: the following executions start on schedule and run alongside the current one.

- If disabled: the next execution won’t start until the previous one has finished.
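The two scenarios can be sketched with a small simulation, assuming the delayed execution is queued rather than skipped (an assumption for illustration; the article doesn’t specify which happens):

```python
# Minimal sketch of the example above: executions are scheduled every
# 3 minutes, but each run takes 4 minutes to finish.
SCHEDULE_EVERY = 3  # minutes between scheduled starts
RUN_DURATION = 4    # minutes each execution takes

def start_times(n, allow_concurrent):
    """Return the actual start time (in minutes) of the first n executions."""
    starts = []
    busy_until = 0
    scheduled = 0
    for _ in range(n):
        # With concurrency, start on schedule; otherwise wait for the previous run.
        actual = scheduled if allow_concurrent else max(scheduled, busy_until)
        starts.append(actual)
        busy_until = actual + RUN_DURATION
        scheduled += SCHEDULE_EVERY
    return starts

print(start_times(4, allow_concurrent=True))   # [0, 3, 6, 9]  - runs overlap
print(start_times(4, allow_concurrent=False))  # [0, 4, 8, 12] - each run waits
```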

Scheduler Trigger in Action

This trigger can be used when it’s necessary to fetch data from systems that can’t push data to Digibee via HTTP, REST, HTTP File, Kafka, RabbitMQ, or JMS. Some of these scenarios are:

  • to search for files in directories on SFTP, FTP, S3, Google Cloud Storage, etc.;

  • to query information directly from databases (in this case, we recommend using the Stream DB component with pagination);

  • to make status-verification calls to Platform endpoints that can’t notify pipelines through webhooks.
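For the database scenario, the pagination pattern can be sketched with an in-memory SQLite database (table and column names are hypothetical; a real pipeline would rely on the Stream DB component’s own configuration):

```python
# Hedged sketch of keyset pagination: read a large result set in fixed-size
# pages instead of loading everything at once. Names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 10.0,) for i in range(1, 8)])  # 7 sample rows

PAGE_SIZE = 3
last_id = 0
pages = []
while True:
    # Fetch the next page: rows after the last id we have already seen.
    rows = conn.execute(
        "SELECT id, total FROM orders WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, PAGE_SIZE),
    ).fetchall()
    if not rows:
        break
    pages.append(rows)
    last_id = rows[-1][0]

print([len(p) for p in pages])  # [3, 3, 1]
```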

See how the trigger behaves in a concrete scenario and what its respective configuration is.

  • Pipeline executed every 30 seconds, without overlap using a static data source

Observe how to configure a pipeline with Scheduler Trigger to be executed automatically every 30 seconds without execution overlap. A 2-minute Timeout following the Sao Paulo Time Zone (UTC-3) will be configured.

Firstly, create a new pipeline and configure the trigger. The configuration can be done in the following way:
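As a hedged textual sketch of these settings (the field names and the seconds-first cron syntax are illustrative assumptions, not the platform’s exact schema):

```python
# Hypothetical trigger configuration for this scenario. Field names and the
# Quartz-style cron syntax are assumptions for illustration only.
trigger_config = {
    "cron_expression": "0/30 * * * * ?",   # fire at seconds 0 and 30 of every minute
    "time_zone": "America/Sao_Paulo",      # UTC-3
    "maximum_timeout": 120_000,            # 2 minutes, in milliseconds
    "allow_concurrent_scheduling": False,  # prevent overlapping executions
}

# Sanity checks against the limits described in the article.
assert trigger_config["maximum_timeout"] <= 900_000  # platform limit
print(trigger_config["cron_expression"])
```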

Now observe how to configure a MOCK component in the pipeline so it acts as the data provider whose content the endpoint ultimately returns. Select the indicated component, connect it to the trigger, and configure it with this JSON:

{
    "data": {
        "products": [
            {
                "name": "Samsung 4k Q60T 55",
                "price": 3278.99
            },
            {
                "name": "Samsung galaxy S20 128GB",
                "price": 3698.99
            }
        ]
    }
}

After that, every time the pipeline is executed, the JSON defined as the response is automatically returned.

After the deployment, you can see the pipeline executions in Dashboard > Finished Executions:
