How to Manage Integration Events in an Event-Driven Architecture?

Event-driven architecture is an architectural model based on the management of events (registration, publication, consumption, and so on). It is classified as a distributed architecture: the resources are not on the same machine. This enables loose coupling between consumers and a producer, which does not know how the events it publishes will be processed.
By definition, an event is something that happened in the past. There are two event types:
- Domain events, which occurred within the same bounded context. Using Microsoft’s definition, a “domain event is something that happened in the domain that you want other parts of the same domain (in-process) to be aware of. The notified parts usually react somehow to the events.”
- Integration events, which carry information out of a bounded context to notify external systems. This is the event type at stake in a distributed architecture.
How and Based on What Criteria Should an Integration Event Be Published?
One criterion must be met for an integration event to be published in a distributed architecture: atomicity. This means that a set of changes (instructions) either executes to completion or, if an interruption or failure occurs, is rolled back to the previous state so that no partial transaction remains. More specifically, recording a system state (such as persisting an entity) and publishing the corresponding integration event must form a single atomic operation.
However, due to factors such as the unavailability of the publication medium (the message broker), there is a risk of data inconsistency between the various systems.
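To make the problem concrete, here is a minimal sketch of the naive approach, assuming an EF Core context; `ShopDbContext`, `Order`, and the `IEventBus` broker abstraction are illustrative names, not a prescribed API:

```csharp
using System;
using System.Threading.Tasks;

public record OrderPlaced(Guid OrderId);        // the integration event

public interface IEventBus                      // hypothetical broker abstraction
{
    Task PublishAsync<T>(T integrationEvent);
}

public class OrderService
{
    private readonly ShopDbContext _dbContext;  // hypothetical EF Core context
    private readonly IEventBus _eventBus;

    public OrderService(ShopDbContext dbContext, IEventBus eventBus)
    {
        _dbContext = dbContext;
        _eventBus = eventBus;
    }

    public async Task PlaceOrderAsync(Order order)
    {
        _dbContext.Orders.Add(order);
        await _dbContext.SaveChangesAsync();    // the database transaction commits here

        // If the broker is unavailable at this exact point, the order has been
        // persisted but the event is lost: the two systems silently diverge.
        await _eventBus.PublishAsync(new OrderPlaced(order.Id));
    }
}
```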
As you can see, there is no guarantee that the transport medium for the integration event (the `PublishAsync` call) will still be available once the database transaction has completed (the `SaveChangesAsync` call).
So, meeting this atomicity criterion is important to keep data consistent.
What Steps Should Be Taken to Improve Atomicity?
Several measures can improve this atomic aspect, including:
- Change data capture
- Outbox pattern
- Event sourcing
This list is not exhaustive. The goal here is to provide you with several options to consider and implement based on your business and project context.
Change Data Capture
This data integration pattern lets you track when and how data has changed. You can then retrieve this information and process it, for example to notify an external system.
Several database management systems support this pattern, such as Microsoft SQL Server, PostgreSQL, and MySQL. They provide an agent that logs database activities like inserts, deletes, and updates.
This feature is also available for NoSQL databases such as Azure Cosmos DB through the “change feed.” Microsoft defines the change feed as follows: “Change feed in Azure Cosmos DB is a persistent record of changes to a container in the order they occur.”
All of these changes can be captured by one or more Azure Functions that generate integration events and publish them via Azure Service Bus:
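As a minimal sketch, assuming the in-process Azure Functions model with the Cosmos DB trigger and the Service Bus output binding (binding parameter names vary across extension versions, and the database, container, and connection names below are illustrative):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Newtonsoft.Json;

public class OrderDocument
{
    [JsonProperty("id")]
    public string Id { get; set; } = default!;
}

public static class OrdersChangeFeedFunction
{
    // Each batch of changed documents from the Cosmos DB change feed is turned
    // into integration events and pushed to a Service Bus queue.
    [FunctionName("OrdersChangeFeed")]
    public static async Task Run(
        [CosmosDBTrigger(
            databaseName: "shop",
            containerName: "orders",
            Connection = "CosmosDbConnection",
            LeaseContainerName = "leases",
            CreateLeaseContainerIfNotExists = true)] IReadOnlyList<OrderDocument> changes,
        [ServiceBus("integration-events", Connection = "ServiceBusConnection")]
            IAsyncCollector<string> events)
    {
        foreach (var change in changes)
        {
            await events.AddAsync(
                JsonConvert.SerializeObject(new { Type = "OrderChanged", OrderId = change.Id }));
        }
    }
}
```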
However, the Change Data Capture pattern makes you heavily reliant on the infrastructure: if you decide to change your database management system one day, you will have to rethink your solution. This can sometimes cause problems…
Outbox Pattern
The outbox pattern is another architecture pattern that aims to make system state registration and the publication of an integration event atomic.
This pattern consists of two steps:
- First, the system state and integration events are saved in the database;
- Next, these events are published by a dedicated thread or process, the “publisher.”
After an event is published, its status changes to “read.”
The outbox pattern also places requirements on your implementation: the data and the events must be recorded in a single transaction.
The goal is to ensure that when you save the entity, you also save the associated event(s) to be published.
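As a minimal sketch of both steps, assuming EF Core; `ShopDbContext`, `Order`, and the `OutboxMessage` table are illustrative names, not a prescribed schema:

```csharp
using System;
using System.Linq;
using System.Text.Json;
using System.Threading.Tasks;

public interface IEventBus { Task PublishAsync<T>(T integrationEvent); }  // hypothetical broker abstraction

public class OutboxMessage
{
    public Guid Id { get; set; }
    public string Type { get; set; } = default!;
    public string Payload { get; set; } = default!;
    public DateTime OccurredOnUtc { get; set; }
    public string Status { get; set; } = "pending";     // becomes "read" once published
}

public class OrderService
{
    private readonly ShopDbContext _dbContext;          // hypothetical EF Core context

    public OrderService(ShopDbContext dbContext) => _dbContext = dbContext;

    public async Task PlaceOrderAsync(Order order)
    {
        // Step 1: the entity AND its integration event go into the same transaction.
        _dbContext.Orders.Add(order);
        _dbContext.OutboxMessages.Add(new OutboxMessage
        {
            Id = Guid.NewGuid(),
            Type = "OrderPlaced",
            Payload = JsonSerializer.Serialize(new { OrderId = order.Id }),
            OccurredOnUtc = DateTime.UtcNow
        });
        await _dbContext.SaveChangesAsync();            // one atomic commit for both rows
    }
}

public class OutboxPublisher
{
    private readonly ShopDbContext _dbContext;
    private readonly IEventBus _eventBus;

    public OutboxPublisher(ShopDbContext dbContext, IEventBus eventBus)
    {
        _dbContext = dbContext;
        _eventBus = eventBus;
    }

    // Step 2: a background thread/process publishes pending events,
    // then marks them as "read".
    public async Task PublishPendingAsync()
    {
        foreach (var message in _dbContext.OutboxMessages.Where(m => m.Status == "pending").ToList())
        {
            await _eventBus.PublishAsync(message.Payload);
            message.Status = "read";
        }
        await _dbContext.SaveChangesAsync();
    }
}
```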
Some tools already have these features built in, so they don’t need to be developed separately. This is the case with CAP, “a library based on .net standard, which is a solution to deal with distributed transactions, also has the function of EventBus, it is lightweight, easy to use, and efficient.” (Source: CAP documentation)
Note that the publisher can accidentally send the same event more than once if it fails to update the event’s status in the database.
The recipient must therefore handle integration events idempotently, so that processing the same event several times has no unintended effects, such as creating duplicates.
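As a minimal sketch of an idempotent consumer, assuming EF Core and a table of already-processed event IDs; all names are illustrative:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public record OrderPlaced(Guid OrderId);     // the integration event

public class ProcessedEvent
{
    public Guid EventId { get; set; }        // primary key: one row per handled event
}

public class OrderPlacedHandler
{
    private readonly ShopDbContext _dbContext;   // hypothetical EF Core context

    public OrderPlacedHandler(ShopDbContext dbContext) => _dbContext = dbContext;

    public async Task HandleAsync(Guid eventId, OrderPlaced evt)
    {
        // Deduplicate: if this event ID has already been processed, do nothing.
        if (await _dbContext.ProcessedEvents.AnyAsync(e => e.EventId == eventId))
            return;

        // ... actual processing (e.g. creating a shipment) goes here ...

        _dbContext.ProcessedEvents.Add(new ProcessedEvent { EventId = eventId });
        await _dbContext.SaveChangesAsync();     // committed together with the processing
    }
}
```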
Event Sourcing
This third method uses event sourcing to publish integration events. Event sourcing is an architecture pattern that records all of a system’s events in an event store. It differs significantly from a traditional database, which records only the most recent state.
Furthermore, the current system state is simply the result of an accumulation of these saved events.
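As a minimal sketch of the idea, independent of any particular event store (the bank-account stream is purely illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;

public abstract record AccountEvent;
public record AccountOpened(decimal InitialBalance) : AccountEvent;
public record MoneyDeposited(decimal Amount) : AccountEvent;
public record MoneyWithdrawn(decimal Amount) : AccountEvent;

public static class AccountProjection
{
    // The current state is nothing more than a left fold over the stored events.
    public static decimal CurrentBalance(IEnumerable<AccountEvent> stream) =>
        stream.Aggregate(0m, (balance, evt) => evt switch
        {
            AccountOpened opened     => opened.InitialBalance,
            MoneyDeposited deposited => balance + deposited.Amount,
            MoneyWithdrawn withdrawn => balance - withdrawn.Amount,
            _ => balance
        });
}
```

Replaying the stream AccountOpened(100), MoneyDeposited(50), MoneyWithdrawn(30) yields a balance of 120: the state is always derived from the events, never stored directly.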
EventStoreDB and Marten (martendb.io) are two of the most popular event stores on the market.
Each of these event stores supports observability via subscriptions. This concept is similar to change data capture (CDC): each action in the system, in this case the recording of a business event, emits information that can be used to:
- Update the read models
- Execute the next step in a workflow context
- Create integration events to notify external systems

In practice, you will need to publish these integration events over a transport medium, as in the sketch below.
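As a minimal sketch; `IEventStoreSubscription` and `IEventBus` are hypothetical abstractions standing in for your event store’s subscription API (EventStoreDB and Marten both provide one) and for your transport medium:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical abstractions: replace with your event store's subscription API
// and your broker client.
public interface IEventStoreSubscription
{
    Task SubscribeAsync(Func<object, CancellationToken, Task> onEventAppeared, CancellationToken ct);
}

public interface IEventBus
{
    Task PublishAsync<T>(T integrationEvent);
}

public record OrderPlaced(Guid OrderId);                  // domain event in the store
public record OrderPlacedIntegrationEvent(Guid OrderId);  // outward-facing contract

public class IntegrationEventForwarder
{
    private readonly IEventStoreSubscription _subscription;
    private readonly IEventBus _eventBus;

    public IntegrationEventForwarder(IEventStoreSubscription subscription, IEventBus eventBus)
    {
        _subscription = subscription;
        _eventBus = eventBus;
    }

    public Task StartAsync(CancellationToken ct) =>
        _subscription.SubscribeAsync(async (evt, token) =>
        {
            // Map each relevant domain event to an integration event and publish it.
            if (evt is OrderPlaced placed)
                await _eventBus.PublishAsync(new OrderPlacedIntegrationEvent(placed.OrderId));
        }, ct);
}
```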
Although event sourcing may appear to be the preferred approach, it only suits certain new or ongoing projects. Furthermore, in a legacy context, you probably won’t be using an event store, so you cannot benefit from it.
Adopt and Adapt…
This array of solutions to address the atomicity issue is merely a means to an end. There are factors to consider regardless of which solution you choose.
Today, an event-driven (distributed) architecture may be essential, or even imposed by existing enterprise solutions or by the nature of internal communication.
So, whichever solution you adopt, it is critical to consider the context when making your choice. Existing infrastructure, business needs, and audit requirements can influence this context.
Do you want experts to help you with your digital transformation projects? Contact us!