Efficient execution and minimized task consumption

                                                                                      When putting workflows into production, it is important to make sure that they are built as efficiently as possible.

                                                                                      There are several reasons for this:

• It can help reduce processing time and increase the efficiency of the organizational procedures that depend on Tray-managed automations

                                                                                      • It can drastically reduce your overall task usage and hence your monthly bill

                                                                                      • It can make your workflows much easier to maintain - generally speaking more efficient workflows are better laid out, easier to understand and easier to edit

                                                                                      For more examples and guided walkthroughs on this topic please check out the following links:

                                                                                      Efficient workflow step management

                                                                                      Consolidating / combining workflow steps

Once you have built a project, you may find that, in order to perform the necessary data mapping and transformations, the structure is dominated by large numbers of loop, boolean, list helper and object helper steps, as in the following workflow:

Even if each run of your workflow (kicked off by a third-party trigger or a scheduled trigger) completes reasonably quickly, frequent triggering will result in a large overall task count, thus increasing your monthly bill significantly.

In this case you can consolidate all of your steps into one or a handful of json transformer or script connector steps.

                                                                                      The following screenshot shows the above workflow condensed into 3 json transformer steps:

                                                                                      Note that, for transparency, you can choose to condense into a handful of clearly-named steps so that it is still clear what each step is doing in the workflow.

                                                                                      In the above case changing from 30+ steps to 4 is a huge improvement!

                                                                                      Avoid nested looping and complex structures

                                                                                      As discussed in our documentation on using callable workflows, it is important to build your projects in a clean and modular fashion.

                                                                                      Generally speaking, as a workflow gets more and more complex and difficult to read, this is an indicator that callable workflows should be used to create sub-processing functions.

The following screenshot shows an example of processing batches of messages from a Slack channel. In this case we:

                                                                                      1. Use two script steps which filter out irrelevant messages (done in two steps for workflow readability)

                                                                                      2. Check if this batch actually needs to be processed

                                                                                      3. Send it to a sub-processing callable workflow

                                                                                      The sub-processing callable then has its own loop and conditional complexity, which would have made the parent workflow extremely bloated and complex:

                                                                                      The benefit here is not necessarily in reducing task consumption, but it will certainly massively increase efficiency by:

                                                                                      • Filtering out irrelevant data

• Processing async batches in parallel, thus massively reducing execution time

                                                                                      • Making both your workflows and logs much more readable so you will be able to spot bottlenecks and identify where you can make efficiency improvements

                                                                                      Filtering workflow step payloads

                                                                                      When retrieving data from third-party services you should be aware that sometimes large payloads of data will be returned, and it may be necessary to apply some filtering to remove objects which do not need to be processed.

                                                                                      One of the key points here is that, for lists / arrays, you want to minimize the amount of data that is being looped through with the loop connector.

                                                                                      Correct filtering of step payloads can mean the difference between e.g. looping through 1000 objects and looping through 10 objects.

                                                                                      Pre-filtering payload results (with Service connector operations)

                                                                                      Certain connector operations allow you to use custom search / filter operations to get to the exact data that you need.

                                                                                      Making use of these filters can massively reduce the size of the payload that comes into your workflow, so you can immediately get started with any processing and transformation tasks, knowing that they will be done as quickly as possible.

                                                                                      The following example shows using the 'conditions' in the Salesforce 'find records' operation - in this case to only fetch records that have been modified after a certain date:
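As a rough sketch, the 'conditions' input for such an operation typically boils down to a list of field / operator / value objects. The exact field and operator names below are assumptions, so check the operation's own parameter list:

```javascript
// Hypothetical sketch of a 'conditions' input for the Salesforce
// 'find records' operation: fetch only records modified after a date.
// Field and operator names are assumptions, not taken from the connector.
function modifiedSinceCondition(isoDate) {
  return [
    { field: 'LastModifiedDate', operator: 'greater than', value: isoDate },
  ];
}

console.log(JSON.stringify(modifiedSinceCondition('2024-01-01T00:00:00Z')));
```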

                                                                                      Post-filtering payload results (with core and helper connectors)

                                                                                      If service connector operations offer no or limited filtering capabilities, and a large payload is still being returned which contains a significant amount of irrelevant objects, you can make use of Tray's core and helper connectors to carry out post-filtering.

                                                                                      The following example shows making use of the List helpers filter operation to exclude all objects where 'email' is null:
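The same post-filter, expressed as plain JavaScript for clarity (the sample data here is invented for illustration):

```javascript
// Post-filtering sketch: exclude objects where 'email' is null,
// the same logic as the List Helpers 'filter' operation above.
const contacts = [
  { name: 'Ada', email: 'ada@example.com' },
  { name: 'Bob', email: null },
];

const withEmail = contacts.filter(c => c.email !== null);

console.log(withEmail.length); // 1
```

Any loop connector downstream then only iterates over the objects that survive the filter.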

                                                                                      Efficient use of booleans and branches

                                                                                      Consolidating boolean steps

                                                                                      A fairly common mistake is to 'chain' boolean conditions by using multiple boolean steps:

This ignores the fact that you can set multiple conditions in one boolean step and use the 'strictness' (ANY/ALL) policy.

The above example could be simplified by adding each of the conditions to one step and setting the strictness to 'Satisfy ALL conditions':
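In code terms, the consolidated boolean step behaves like a single expression joined with AND, rather than a chain of nested ifs (the field names below are hypothetical):

```javascript
// One boolean step with strictness 'Satisfy ALL conditions'
// is equivalent to joining the conditions with &&.
const record = { status: 'active', amount: 150, region: 'EMEA' };

const passes =
  record.status === 'active' &&  // condition 1
  record.amount > 100 &&         // condition 2
  record.region === 'EMEA';      // condition 3

console.log(passes); // true
```

With strictness 'Satisfy ANY condition', the equivalent operator would be `||`.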

                                                                                      'Property exists' checks

                                                                                      The boolean connector 'property exists' operation can be used to check for the existence of a single property within a payload.

                                                                                      To check for the existence of multiple properties you can use the normal multi-condition boolean check in combination with Tray's fallback feature.

                                                                                      As per the following screenshot you can set the fallback value to 'false' for each condition and set the check to 'not equal to' false and set the strictness to 'Satisfy ALL conditions':
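Expressed as a sketch in JavaScript, the fallback-based check works like this (the payload and property names are invented for illustration):

```javascript
// Multi-property existence check: each lookup falls back to false
// when the property is missing, and the overall check passes only
// if every value is 'not equal to' false.
const payload = { id: 42, email: 'a@b.com' };
const props = ['id', 'email', 'name'];

const values = props.map(p => payload[p] ?? false); // fallback = false
const allExist = values.every(v => v !== false);    // 'Satisfy ALL conditions'

console.log(allExist); // false ('name' is missing)
```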

                                                                                      Using branches instead of booleans

                                                                                      Sometimes using multiple booleans can be an indicator that you should be using the branch connector instead.

Remember that the branch connector is essentially a 'switch statement' generator - it allows you to make a single check of a particular field's value and then take one of multiple paths based on the result.

                                                                                      As per programming principles, you should recognise when to use switch statements versus multiple if / else statements (in the form of booleans).
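For comparison, the branch connector's behaviour maps directly onto a plain switch statement (the event names below are hypothetical):

```javascript
// A branch step as a switch: one check of a single field's value,
// routing to one of several paths (plus a default path).
function route(eventType) {
  switch (eventType) {
    case 'user.created': return 'onboarding-path';
    case 'user.deleted': return 'cleanup-path';
    default:             return 'default-path';
  }
}

console.log(route('user.created')); // onboarding-path
```

A chain of boolean steps, by contrast, corresponds to nested if / else statements, each of which is a separate task.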

                                                                                      Optimal trigger usage

                                                                                      Choosing between webhook / scheduled trigger

                                                                                      When considering using a webhook it is worth looking at the third party webhook documentation and assessing whether or not it gives you the granular controls you need to use it effectively and efficiently.

The primary consideration here is whether it gives you the pre-filtering options you need (as described below) to make sure your workflows aren't being triggered too frequently or with too much data in the payload.

If it does not, you should consider using the scheduled trigger instead, along with e.g. the last runtime method, so that you can set exactly when your workflow runs and only return data that has been updated since the last run.

                                                                                      In this case you would then make use of a fetch / find type operation to get the data each time the workflow runs.
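As a sketch, the fetch step would build its query from the stored last-run time. The query language and field name below are assumptions for illustration, and where the last-run time is stored depends on your setup:

```javascript
// Scheduled-trigger sketch: fetch only records updated since the
// last run. 'lastRunTime' would come from stored workflow state or
// the trigger output (an assumption; the exact source varies).
function buildQuery(lastRunTime) {
  return `SELECT Id FROM Contact WHERE LastModifiedDate > ${lastRunTime}`;
}

console.log(buildQuery('2024-06-01T00:00:00Z'));
```

After a successful run, the workflow would store the current time as the new last-run time for the next execution.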

                                                                                      Pre-filtering webhooks

                                                                                      As mentioned above, when setting up a webhook in the third party service, you should make sure that you take full advantage of the settings to make sure your workflow is only being triggered when necessary.

For example, the webhook settings may offer a matrix whereby you can tick only the one or two event types (out of dozens) that you want to be notified about:

                                                                                      It may also have further custom filters that you can build in:

                                                                                      If you do not use these pre-filters it could mean that you will have to do post-filtering within your workflow, which will use more tasks.

                                                                                      In the worst cases your workflow could be triggering say 500 times a day, each time running 3 filtering tasks before terminating and so taking up 1500 tasks per day. With correct use of filtering it might only be running 3 times a day and taking up 20-30 tasks!

                                                                                      Using multiple webhooks

                                                                                      If you need to be notified about multiple event types coming from a particular service, it is worth considering setting up a workflow for each event type and / or custom pre-filter.

If multiple event types and differently-structured payloads are coming into the same workflow, you might find that you need to build complex branching, conditional and transformation logic, which means that each time your workflow is triggered, 10x the number of tasks are run.

                                                                                      In this case, you should set up multiple webhooks all going to different Tray workflows, if this is allowed in the third party service.

                                                                                      Optimal scheduled trigger frequency

It is always worth reviewing the frequency of your schedule-triggered workflows, particularly if they kick off a number of task-heavy processes.

                                                                                      It is worth looking at:

                                                                                      • How much data is being fetched each time and how much is being meaningfully processed? If there are many empty runs or only small amounts are being processed each time, perhaps the trigger can be less frequent.

                                                                                      • Does your automation really need to be close to real-time or can it be e.g. twice daily updates?

                                                                                      • Does the destination service or notified stakeholders need the data sent to them as frequently or do they only need it e.g. once a week?