Salesforce Record-Triggered Automation Guide
This guide recommends tools for various record-triggered automation use cases and explains the reasoning behind those recommendations. It also describes how Flow provides bulkification and recursion control automatically on the customer's behalf, and shares tips for designing record-triggered automation that performs well.
Here are the main things to remember:
Takeaway #1: Flow and Apex are the preferred no-code and pro-code options, respectively, for triggered automation on the platform.
Takeaway #2: Stop implementing same-record field updates in Workflow Rules and Process Builder. Instead, begin implementing them in before-save flow triggers.
Takeaway #3: Where possible, begin implementing new use cases in after-save flow triggers instead of Process Builder and Workflow Rules (except same-record field updates, for which see Takeaway #2).
Takeaway #4: Use Apex when you need high-performance batch processing or highly sophisticated logic. (For more details, see Well-Architected – Transaction Handling.)
Takeaway #5: You don’t have to consolidate all of your record-triggered automation into a single “mega flow” per object, but you should think intentionally about how to organise and manage your automation for the long term. (For more information, see Well-Architected – Composable.)
This document focuses on record-triggered automation. See the Architect’s Guide to Building Forms on Salesforce for a comparable review of Salesforce’s form-building tools.
| | Before-Save Flow Trigger (Low Code) | After-Save Flow Trigger | After-Save Flow Trigger + Apex | Apex Triggers (Pro Code) |
|---|---|---|---|---|
| Same-Record Field Updates | Available | Not Ideal | Not Ideal | Available |
| High-Performance Batch Processing | Not Ideal | Not Ideal | Not Ideal | Available |
| Cross-Object CRUD | Not Available | Available | Available | Available |
| Asynchronous Processing | Not Available | Available | Available | Available |
| Complex List Processing | Not Available | Not Ideal | Available | Available |
| Custom Validation Errors | Not Available | Not Available | Not Available | Available |
- Available = should work fine, with basic considerations.
- Not Ideal = possible, but with important and potentially limiting considerations.
- Not Available = not supported, with no plans to support in the next twelve months.
The table above outlines the most common trigger use cases and the tools we believe are best suited to each.
When more than one tool is available for a use case, choose the one that lets you implement and maintain the use case at the lowest cost. That will depend heavily on the makeup of your team.
For example, if your team already has a well-established CI/CD pipeline and a well-managed framework for Apex development, it will probably be cheaper to stay on that road; re-tooling your organisation around Flow development would carry a significant up-front cost. On the other hand, if your team doesn’t have consistent access to developer resources or a strong, institutionalised culture of code quality, you may be better served by triggered flows that many people can maintain than by code that only a few people can maintain.
For a team with mixed skills or deep admin expertise, flow triggers are a strong choice: they are faster and easier to debug, maintain, and extend than any no-code option before them. If developer resources are scarce, flow triggers let you delegate delivery of business processes to a wider group, freeing your developers for the projects and tasks that make the best use of their skills.
Bid Farewell to Process Builder and Workflow Rules
Even though Process Builder and Workflow Rules are still some way from the end of support, we recommend building all of your future low-code automation in Flow. Flow is better architected to meet Salesforce customers’ growing demands for functionality and extensibility.
- Workflow Rules are used mostly for same-record field updates. Workflow Rules have a reputation for being performant, but because they cause a recursive save, they will always be significantly slower and more resource-intensive than a single, equivalent before-save flow trigger.
- Workflow Rules is also an entirely different system from Flow, with its own metadata and its own runtime engine. Any non-performance improvements Salesforce makes to Flow (debugging, management, CI/CD, and so on) will never benefit Workflow Rules, and improvements to Workflow Rules will never benefit Flow.
- Process Builder will never match Flow’s performance and will always be harder to debug. It also has a notoriously hard-to-read list view.
- Process Builder runs on top of the Flow runtime, but there is a significant mismatch between Process Builder’s metadata model and that of the Flow runtime. Because of this abstraction, the underlying metadata for a process resolves to a Flow definition that is convoluted, less performant, and often hard to understand. And a convoluted Flow definition is much harder to debug than a clean one.
- On top of these issues, both Process Builder and Workflow Rules incur an expensive initialization step that adds processing time to every save operation. In practice, we’ve found that most Process Builder processes and Workflow Rules are “no-ops” at runtime (their criteria aren’t met, so no actions run), yet they still pay this initialization cost. Unfortunately, that logic sits deep in the application code, where changes carry significant risk.
- The new flow trigger architecture eliminates this expensive initialization step entirely.
For these reasons, Salesforce is directing its future investment toward Flow. We recommend building in Flow where you can, and using Process Builder or Workflow Rules only where you must.
At this point, Flow has closed all the major feature gaps we had identified between it and Workflow Rules and Process Builder. Salesforce continues to invest in closing the remaining minor gaps, such as improving formulas and entry conditions, and in simplifying the places where Flow remains more complicated to use.
A Word About Names
Flow brought a new concept to low-code automation by splitting its record triggers into before-save and after-save positions in the trigger order of execution. This works the same way as the equivalent functionality in Apex and makes much faster same-record field updates possible. But it also complicates the Flow user experience, and users unfamiliar with triggers found the terminology confusing. Throughout this guide we’ll continue to call these two options “before save” and “after save,” but note that in Flow Builder they’re now labelled “Fast Field Updates” and “Actions and Related Records.”
Use Case Considerations
Same-Record Field Updates
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Same-Record Field Updates | Available | Not Ideal | Not Ideal | Available |
The single most important recommendation in this guide is to move same-record field updates out of the after-save context. To put it plainly: stop implementing same-record field updates in Workflow Rules or Process Builder, and don’t start implementing them in after-save flow triggers! Instead, implement same-record field updates in before-save flow triggers or before-save Apex triggers. By design, a same-record field update performed before save is much faster than the same update performed after save, for two main reasons:
- The field values of the record are already in memory, so they don’t need to be loaded again.
- The update is performed by changing the record’s values in memory and relying on the original DML operation to save the change to the database. This avoids not only an expensive extra DML operation but also the entire recursive save that would follow it.
Well, that’s the idea, but what does it look like in real life?
Our benchmarks (see Performance Discussion: Same-Record Field Updates) give a taste of the real-world impact. In our experiments, bulk same-record updates performed in before-save triggers ran anywhere from 10 to 20 times faster than the same updates performed in Workflow Rules or Process Builder. So, even though before-save flow triggers don’t quite match Apex in raw speed, we don’t believe performance should be a blocker to adopting them, except perhaps in the most extreme scenarios.
The main limitation of before-save flow triggers is their restricted functionality. You can only update the triggering record, query records, loop, evaluate formulas, assign variables, and make decisions (with the Decision element, for example). You can’t include subflows or invocable Apex actions in a before-save flow trigger. In a before-save Apex trigger, by contrast, you can do almost anything (except perform direct DML on the triggering record). We deliberately limited before-save flow triggers to the operations that preserve the performance gains described above.
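To make the mechanics concrete, here is a minimal sketch of the Apex flavour of this pattern; the object, field, and condition are illustrative assumptions, not a recommended rule.

```apex
// A minimal sketch of a before-save, same-record field update in Apex.
trigger CaseBeforeSave on Case (before insert, before update) {
    for (Case c : Trigger.new) {
        // Assign the new value in memory; the in-flight save persists it.
        // No explicit DML statement (and no recursive save) is needed.
        if (c.Subject != null && c.Subject.containsIgnoreCase('urgent')) {
            c.Priority = 'High';
        }
    }
}
```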
We know that same-record field updates make up the vast majority of Workflow Rule actions executed across the platform, and that they are a major contributor to poor Process Builder performance. Moving these “recursive saves” out of the save order and into the before-save context promises substantial performance gains.
High-Performance Batch Processing
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| High-Performance Batch Processing | Not Ideal | Not Ideal | Not Ideal | Available |
If you need to evaluate complex logic in batches quickly and efficiently, Apex’s configurability and its rich debugging and tooling ecosystem are for you. Here are a few examples of what we mean by “complex logic,” and why we recommend Apex for them.
- Defining and evaluating complex logical expressions or formulas. Flow’s formula engine performs inconsistently when resolving highly complex formulas, and the problem is compounded by the fact that formulas are currently compiled and resolved serially at runtime. We’re exploring batch formula compilation, but formula resolution will always be serial, and we haven’t yet identified the root cause of the formula resolution performance issues.
- Complex list processing, such as transforming data extracted from a large set of records through repeated iteration. See Complex List Processing for more detail on Flow’s current list limitations.
- Anything that requires map or set semantics. Map is not a supported datatype in Flow. Also, if an Apex invocable action passes an Apex object to Flow and that object has a member variable of type Map, you can’t access that member variable in Flow. The member variable does persist at runtime, however, so if Flow passes the Apex object to another Apex invocable action, the receiving action can access it. The Map datatype is not on Flow’s one-year roadmap.
- Transaction savepoints. Savepoints are not supported in flow triggers and probably never will be. (Both map semantics and savepoints appear in the Apex sketch after this list.)
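To ground the last two items, here is a minimal Apex sketch that uses both map semantics and a transaction savepoint; the class name and the cleanup use case are illustrative assumptions.

```apex
public with sharing class OpportunityCleanup {
    public static void run(List<Opportunity> opps) {
        // Map semantics: group the incoming records by parent account,
        // something Flow's datatypes can't express directly.
        Map<Id, List<Opportunity>> byAccount = new Map<Id, List<Opportunity>>();
        for (Opportunity opp : opps) {
            if (!byAccount.containsKey(opp.AccountId)) {
                byAccount.put(opp.AccountId, new List<Opportunity>());
            }
            byAccount.get(opp.AccountId).add(opp);
        }

        // Transaction savepoint: undo partial work if the update fails.
        Savepoint sp = Database.setSavepoint();
        try {
            update opps;
        } catch (DmlException e) {
            Database.rollback(sp);
            throw e;
        }
    }
}
```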
Even though before-save flow triggers aren’t quite as fast as before-save Apex triggers in barebones speed tests, the overhead is far less noticeable once the whole transaction is taken into account. Before-save flow triggers should still be more than fast enough for the vast majority of same-record field update batch scenarios that don’t involve the complexity enumerated above. And since they consistently run more than 10 times faster than Workflow Rules, you can use them anywhere you use Workflow Rules today.
Flow does offer some capabilities for batch processing that doesn’t need to happen during the initial transaction, though they remain more limited and less featured than Apex. Scheduled Flows can currently perform a batch operation on up to 250,000 records per day, which makes them suitable for data sets unlikely to approach that limit. Scheduled Paths in record-triggered flows also support configurable batch sizes, so admins can override the default batch size of 200 where needed, for example for external callouts that can’t handle the default batch size. (For more details, see Well-Architected – Transaction Handling.)
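For contrast with these low-code options, the canonical pro-code batch tool is Batch Apex. Below is a minimal sketch; the query and the field being stamped are illustrative assumptions.

```apex
public class ContactRegionBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        // The query locator can stream through millions of records.
        return Database.getQueryLocator(
            'SELECT Id, MailingCountry FROM Contact');
    }
    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        // Complex per-record logic runs here, one chunk at a time.
        for (Contact c : (List<Contact>) scope) {
            c.Description = 'Region: ' + c.MailingCountry;
        }
        update scope;
    }
    public void finish(Database.BatchableContext bc) {}
}
// Usage: Database.executeBatch(new ContactRegionBatch(), 200);
```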
Cross-Object CRUD
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Cross-Object CRUD | Not Available | Available | Available | Available |
Whichever tool you use, creating, updating, or deleting a record other than the one that initiated the transaction requires a database operation. The before-save flow trigger is the only tool that doesn’t support cross-object “crupdeletes” (the create, update, and delete operations, collectively).
Today, Apex outperforms Flow in raw database operations; that is, the Apex runtime needs less time than the Flow runtime to prepare, execute, and process the result of a given database call (such as a call to create a case). In practice, though, if you’re after meaningful performance gains, you’ll usually get more from finding and fixing inefficient user implementations than from optimising lower-level operations. The time spent executing user logic on the app server generally dwarfs the time spent on database operations.
The most inefficient user implementations tend to issue many DML statements where fewer would do. For example, here is a flow trigger configured with two separate Update Records elements to update two fields on a case’s parent account record.
[Image: an after-save flow trigger in Flow Builder that invokes a separate DML element for each field update.]
This is suboptimal because at runtime it executes two DML operations and two save orders. If you combine the two field updates into a single Update Records element, only one DML operation executes at runtime.
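The same consolidation principle applies in Apex: stage every field change on one in-memory record per target, then issue a single DML statement. A minimal sketch follows; the fields being set on Account are illustrative assumptions.

```apex
trigger CaseAfterSave on Case (after insert) {
    Map<Id, Account> parentUpdates = new Map<Id, Account>();
    for (Case c : Trigger.new) {
        if (c.AccountId != null) {
            // Both field updates land on one in-memory record per account...
            parentUpdates.put(c.AccountId, new Account(
                Id = c.AccountId,
                Description = 'Has open cases', // field update #1
                Rating = 'Hot'                  // field update #2
            ));
        }
    }
    // ...so a single DML statement (and a single save order) commits them.
    if (!parentUpdates.isEmpty()) {
        update parentUpdates.values();
    }
}
```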
Workflow Rules has a reputation for performing well, and part of that comes from how strictly it limits the DML it performs during a save:
- At runtime, all immediate same-record field update actions across all the Workflow Rules on an object are combined into a single DML statement (provided their criteria are met).
- Immediate detail-to-master cross-object field update actions across all the Workflow Rules on an object are batched at runtime in the same way.
- Workflow Rules offers very little cross-object DML support in the first place.
So, when it comes to cross-object DML, the idea is to keep the amount of DML to a minimum from the start.
- Before you can optimise, you need to know where all the DML happens. That’s easier when logic is consolidated in fewer triggers and there are fewer places to look, which is one reason the one-or-two-triggers-per-object pattern is so popular. But you can also address it by instituting strong documentation practices, maintaining object-centric subflows, or adopting your own design standards that make DML easy to find at design time.
- Once you know where all the DML happens, consolidate any DML that targets the same record into as few Update Records elements as possible.
- For more complicated use cases that update multiple fields on a related record in a particular order or based on conditions, consider creating a record variable to stage the related record’s data temporarily in memory. Use Assignment elements to modify that staged data throughout the flow’s logic, then commit it to the database with a single explicit Update Records operation at the end of the flow.
This is sometimes easier said than done, and if you aren’t experiencing performance issues, the optimisation may not be worth the time and effort.
Complex List Processing
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Complex List Processing | Not Available | Not Ideal | Available | Available |
Flow has a few significant list-processing limitations today:
- Flow ships with only a small set of basic list-processing operations out of the box.
- You can’t reference an item in a Flow collection by index, nor through the Loop element (each iteration of a Loop element simply assigns the next value in the collection to the loop variable, and that assignment is by value, not by reference). So nothing you can do with MyList[myIndexVariable] in Apex is possible in Flow.
- Loops execute serially, even during batch processing. As a result, any SOQL or DML operation inside a loop is not bulkified, which increases the risk of hitting the transaction governor limits.
These limitations make some standard list-processing tasks, such as in-place data transformations, sorting, and filtering, prohibitively difficult to implement in Flow, while they’re much easier and more performant in Apex.
This is where extending flows with invocable Apex can really shine. Apex developers can, and do, write list-processing methods that are efficient, flexible, and object-agnostic; once those methods are declared invocable, Flow users can adopt them easily. It’s a great way to keep the implementation of business logic in a tool that business-facing users can own, rather than pushing developers to implement functional logic in a tool less suited to it.
When developing invocable Apex, keep the following in mind:
- The author is responsible for properly bulkifying the invocable Apex method. Invocable methods can be invoked from a trigger context, such as a process or an after-save flow, so they must handle batches of inputs. At runtime, Flow calls the method once, passing a list that contains the data from every applicable Flow interview in the batch.
- Invocable Apex methods can be configured to accept generic sObject inputs. Generic as it is, this capability makes it possible to build and maintain a single Apex method that can serve triggers across many different sObjects. Combine the generic sObject pattern with dynamic Apex and you can build some truly elegant, reusable solutions, as in the sketch below.
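As an illustration of both points, here is a minimal sketch of a bulkified invocable action that accepts generic sObjects and dedupes them by a caller-specified field. The class and variable names are illustrative assumptions, not a Salesforce-provided action.

```apex
public with sharing class DedupeRecordsAction {
    @InvocableMethod(label='Dedupe Records by Field')
    public static List<Results> dedupe(List<Requests> requests) {
        // Flow passes one request per interview in the batch, so the
        // method must process the whole list, not just the first element.
        List<Results> out = new List<Results>();
        for (Requests req : requests) {
            Set<String> seen = new Set<String>();
            Results res = new Results();
            res.records = new List<SObject>();
            for (SObject rec : req.records) {
                // Dynamic Apex: read the field without knowing the sObject type.
                String key = String.valueOf(rec.get(req.fieldName));
                if (!seen.contains(key)) {
                    seen.add(key);
                    res.records.add(rec);
                }
            }
            out.add(res);
        }
        return out;
    }
    public class Requests {
        @InvocableVariable(required=true)
        public List<SObject> records;
        @InvocableVariable(required=true)
        public String fieldName;
    }
    public class Results {
        @InvocableVariable
        public List<SObject> records;
    }
}
```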
Since this guide was first written, Flow has added more list-processing features, including collection filtering and sorting. It still doesn’t match Apex’s full list-processing capabilities, though, so the recommendation to use Apex, or to modularise individual operations as invocable actions, for more complicated use cases still stands.
Asynchronous Processing
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Fire & Forget Asynchronous Processing | Not Available | Available | Available | Available |
| Other Asynchronous Processing | Not Available | Available | Available | Available |
Asynchronous processing means many things in computing, but in the context of record triggers it usually comes up in a few specific forms. It’s typically requested as an alternative to the default behaviour, which is to execute changes synchronously during the trigger order of execution. Let’s review the reasons you might, or might not, want to run logic synchronously.
Synchronous processing has several advantages:
- Minimal Database Transactions: To make the most of each database transaction, record-triggered logic usually runs during the initial transaction. As we saw with same-record field updates, these cases can be optimised because you already know the triggering record is being saved, so before-save logic can fold additional updates into the single, original database transaction.
- Consistent Rollbacks: Similarly, folding changes to other records into the initial transaction means the overall database change is atomic from a data-integrity standpoint, and everything rolls back together. If a record-triggered flow on an account updates all the related contacts, but a later piece of automation in the same transaction throws an error that rolls the whole transaction back, those contacts won’t be left updated while the original account update failed.
Synchronous processing also has limitations:
- Time Window: A record trigger runs inside an open database transaction that can’t be committed until every step in the trigger order of execution has completed. That leaves a limited window for additional synchronous processing, since the transaction can’t stay open indefinitely; and when a user edits a record, we don’t want them waiting long after they save.
- Governor Limits: Because of those time constraints, Apex and Flow enforce tighter governor limits on synchronous processing than on asynchronous processing, to keep performance consistent.
- Support for External Objects and Callouts: In general, any access to an external system that must wait for a response (for example, to update the triggering record with a returned value) takes too long to perform inside the original open transaction. Some invocable actions work around this with custom code that defers their own processing until the original transaction commits; email alerts and outbound messages do this, which is why you can invoke an outbound message from an after-save flow but not an External Services action. That workaround doesn’t apply to the vast majority of callouts, though, so wherever possible we recommend splitting callouts into their own asynchronous processes.
- Mixed DML: Sometimes you need cross-object CRUD that spans setup and non-setup objects, for example updating both a user and the contact related to that user after a given change. For security reasons, these operations can’t be performed in a single transaction, so some use cases require a second, asynchronous process that opens a new transaction.
With these considerations in mind, both Flow and Apex provide ways to run logic asynchronously for use cases that need separate transactions, callouts to external services, or simply more time. In Apex, the preferred way to handle deferred processing is a Queueable class. In Flow, we recommend the Run Asynchronously path in after-save flows to achieve the same result with less code. (For more information on synchronous and asynchronous processing, see Well-Architected – Throughput Optimisation.)
One of the most important considerations when choosing between low code and pro code here is how much control you need over callouts. Flow offers a fixed number of retries and some basic error handling through its fault path, while Apex gives you direct control. For a mixed approach, you can call System.enqueueJob against a Queueable Apex class from within an invocable Apex method, then call that method from Flow through the invocable action framework.
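A minimal sketch of that mixed pattern follows; the class names and the credit-check scenario are illustrative assumptions.

```apex
public with sharing class CreditCheckLauncher {
    @InvocableMethod(label='Start Credit Check')
    public static void start(List<Id> accountIds) {
        // Hand the slow work to a separate asynchronous transaction.
        System.enqueueJob(new CreditCheckJob(accountIds));
    }

    public class CreditCheckJob implements Queueable, Database.AllowsCallouts {
        private final List<Id> accountIds;
        public CreditCheckJob(List<Id> accountIds) {
            this.accountIds = accountIds;
        }
        public void execute(QueueableContext ctx) {
            // The callout, plus any custom retry and error handling, runs
            // here, outside the original record-triggered transaction.
        }
    }
}
```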
When testing any solution, but especially one involving callouts, it’s important to think through what happens when a step fails, times out, or returns bad data. Asynchronous processing is generally more powerful, but it requires the designer to think harder about edge cases, particularly when the process is part of a larger solution that depends on a specific value. For example, if your automated pricing depends on a callout to a credit-check service, what happens to the price when that service is down for maintenance? What if it returns a wrong answer? What state will your opportunity or lead be in during that window, and what downstream system is waiting on the result? Apex allows far more customised error handling than Flow, including the ability to raise a failure case deliberately, and that may be the deciding factor between the two.
What About Other Solutions?
Historically, low-code admins used various workarounds (or “hacks”) to achieve asynchronous processing. One was to create a time-dependent action (in Workflow Rules), a scheduled action (in Process Builder), or a scheduled path (in Flow) set to run 0 minutes after the trigger fired. This achieved roughly what the Run Asynchronously path does today, but the dedicated path has some advantages, starting with how quickly it runs: a 0-minute scheduled action could take a minute or more to actually execute, while Run Asynchronously is optimised to be enqueued and run as soon as possible. Run Asynchronously may also gain more stateful features over time, such as access to the prior values of the record that triggered the flow, which it can’t see today; it already performs some special caching to improve performance.
The other “hack” was to add a Pause element in an autolaunched subflow that waited zero minutes, then call that subflow from Process Builder. This “zero-wait pause” does break the transaction and schedule the remaining logic to run in its own transaction, but the mechanisms it relies on weren’t built for this purpose and don’t scale well. Increased usage leads to performance problems and flow interview limit errors, and the flow becomes more fragile and harder to debug. Customers who leaned on this pattern often had to abandon it once they reached scale. We don’t recommend going down that path (no pun intended), and subflows called from record-triggered flows can’t use it.
Transferring Data or State Between Processes
Part of the appeal of the “zero-wait pause” is the perceived connection between the synchronous and asynchronous operations: in that hack, a flow variable persists across the pause, even if the pause lasts weeks or months. That can look attractive from a design standpoint, but it cuts against the principles that asynchronous processing is meant to embody. Decoupling processes so they can run at different times gives them more freedom and control over their execution, but the data they operate on usually needs to be self-contained. Even if only a few milliseconds pass between two decoupled processes, that data can change in the gap, and over longer gaps it almost certainly will.
Flow variables, like those created via New Resource, are only meant to live as long as the process they belong to. If a different process will need that data, even one scheduled to run asynchronously the moment the first one ends, it should be saved to persistent storage. Most often that means a custom field on the object of the triggering record, which every path of a record-triggered flow automatically loads as $Record. For example, if you use Get Records to fetch a related contact’s name during the trigger and want to use that name in an asynchronous path, you’ll need to either run Get Records again in that path or save the name back to $Record first. If you need sophisticated caching or persistence beyond Salesforce objects and records, we recommend Apex. (For more information about state management, see Well-Architected – Manage State.)
Summary
Record-triggered automation that involves asynchronous processing deserves extra care and design thought, especially when it calls out to other systems or must keep consistent state across processes. Flow’s Run Asynchronously path should cover most low-code needs, but some complex requirements, such as custom errors or configurable retries, still call for Apex.
Custom Validation Errors
| | Record-Changed Flow: Before Save | Record-Changed Flow: After Save | Record-Changed Flow: After Save + Apex | Apex Triggers |
|---|---|---|---|---|
| Custom Validation Errors | Not Available | Not Available | Not Available | Available |
Flow currently has no way to prevent a DML operation from committing or to throw custom errors. In particular, the Apex addError() method is not supported when called from Flow via an invocable Apex method.
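For comparison, here is a minimal sketch of the Apex pattern this table points you toward; the validation rule itself (blocking a decrease in Amount) is an illustrative assumption.

```apex
trigger OpportunityValidation on Opportunity (before update) {
    for (Opportunity opp : Trigger.new) {
        Opportunity oldOpp = Trigger.oldMap.get(opp.Id);
        // addError() surfaces a custom message and blocks the record
        // from committing, something Flow cannot currently do.
        if (opp.Amount != null && oldOpp.Amount != null
                && opp.Amount < oldOpp.Amount) {
            opp.addError('Opportunity amount cannot be decreased.');
        }
    }
}
```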
Designing Record-Triggered Automation
There’s plenty of debate about best practices for designing record-triggered automation. You may have heard some of the following advice:
- Use only one tool or framework per object.
- Put all of your logic in a single Process Builder process or parent flow, with the rules in subflows.
- Don’t centralise; break your flows into the smallest units possible.
- There’s an ideal number of flows per object, whether that’s 1, 2, 5, or even 1,000.
There’s some truth in each of these, but none of them fits every problem or every need. There will always be exceptions to the rules, and rules that apply only situationally. This section explains the specific problems each piece of advice is meant to solve, so you can make your own decisions.
What problem are you trying to solve?
Performance
When automation was built in Process Builder, performance was a major reason to build one process per object per trigger. Process Builder has a high startup cost, so every Process Builder that ran on a record edit incurred a performance hit, and because Process Builder offered no gating entry conditions, that hit occurred on every edit. Flow works differently and carries a much smaller startup cost, though not zero. Raw benchmarks of the same use case in Flow and Apex will generally show Apex ahead, at least in theory, since Flow’s low-code benefits add at least one layer of abstraction; for most use cases, though, that small difference is not performance-significant.
Flow also supports entry conditions, which can keep a flow from running on a record edit at all and dramatically reduce its performance impact. Most edits to a record probably don’t require automation that makes further changes; fixing a typo in a description, for example, shouldn’t re-run your logic. You can configure entry conditions so automation runs only when a certain conditional state is met, or, since changes to a record are tracked, only on the specific edit that transitions the record into that state. So you can run logic whenever an opportunity is closed, or only on the edit that changed it from open to closed. Either option beats running logic on every subsequent edit to a closed opportunity.
Summary
Making your record-triggered automation performant is a complex, multivariate problem, and no single design rule covers all the factors. When designing for Flow, keep two things in mind:
- Consolidating all of your logic into a single flow does not meaningfully improve performance over splitting it across multiple flows.
- Using entry conditions to filter out changes that are irrelevant to a given use case can improve the performance of your record-triggered automation dramatically.
This guide covers a number of performance factors and recommendations, such as performing field updates in before-save flows and eliminating unnecessary or redundant DML wherever possible. Address those areas first; they are where real-world customer performance problems most often live.
Troubleshooting
As builders, we’d love never to have to troubleshoot our automation, but sometimes we must. Spreading automation across multiple tools may work fine when you first build it, but as changes accumulate in different places, it tends to cause more trouble over time. This is where the advice to standardise on either Apex or Flow for a given object comes from. There isn’t yet a unified troubleshooting experience across all Salesforce tools, so depending on how complex your org is and how much testing and troubleshooting you anticipate, you may prefer to automate with a single tool. Some customers make this a hard-and-fast rule because of their environment or the skill sets of their admins and developers. Others find value in splitting automation between Flow and Apex, for example implementing the pieces that are too complex or too sensitive for low code as invocable actions, then calling them from Flow so that more users can work with them.
Summary
When maintenance, debugging, or conflicts (such as different automations updating the same field) are likely concerns, consolidating all of an object’s automation into a single tool may be the best approach. There are also middle paths, such as using invocable actions to make more complex functionality available to a broader set of builders.
Ordering
For a long time, the main reason to consolidate automation into a single process or flow was to guarantee ordering. The only way to keep two pieces of automation separate while still ensuring they ran in order was to chain them together, and that got messy fast. As organisations became more agile and needed to respond to business changes, these “mega flows” grew unwieldy and risky to modify, even for small changes.
Flow trigger ordering, delivered in Spring ’22, lets admins assign a priority value to a flow and guarantee its execution order. The priority value is not an absolute index, so values don’t have to run 1, 2, 3 consecutively; flows simply execute in ascending priority order, with ties broken alphabetically (for example, two flows with a priority of 1 run in alphabetical order). This keeps ordering stable across other automation, managed packages, and migrations between orgs. Flows with no trigger order value (all existing or legacy flows) run between priorities 1000 and 1001 for backward compatibility, so if you want to leave your active flows untouched, start the ordering for any new flows that should run after them at 1001. As a best practice, leave gaps between your values: instead of 1, 2, and 3, use 10, 20, and 30. Then, if you later need to insert a flow between the first two, you can give it a priority of 15 without deactivating and renumbering the flows that are already running.
Summary
In the past, ordering requirements drove the advice to consolidate all automation into a single flow. With flow trigger ordering, that’s no longer necessary. (For more guidance on handling transactions well, see Well-Architected – Transaction Handling.)
Source: Salesforce Documentation – Record-Triggered Automation