
Dual write and data inconsistency


A dual write occurs when a service needs to change data in two separate systems as part of a single operation.


Dual write case

Let's assume there is a User microservice that owns the User data; it is this microservice's responsibility to publish that User data to the other services in the system.

When new User data reaches the microservice, it should first be saved in the database and only then published to the other services using a queuing system.
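
A minimal sketch of this naive dual write, assuming hypothetical `UserRepository` and `EventPublisher` collaborators (none of these names come from the original):

```java
// Hypothetical collaborators, sketched as interfaces for brevity.
interface UserRepository { void save(User user); }
interface EventPublisher { void publish(String topic, User user); }
record User(String id, String name) {}

class UserService {
    private final UserRepository repository;
    private final EventPublisher publisher;

    UserService(UserRepository repository, EventPublisher publisher) {
        this.repository = repository;
        this.publisher = publisher;
    }

    void createUser(User user) {
        repository.save(user);                   // write #1: the database
        // If the process crashes or the broker is down right here,
        // the User row exists but the event is never published.
        publisher.publish("user-events", user);  // write #2: the queue
    }
}
```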


How this leads to data inconsistency

If inserting into the database succeeds but sending the User event to the queue fails, the system is left in an inconsistent state.


 

Potential solutions

  1. Distributed transactions ✗

  2. Logical transaction ~

  3. Outbox pattern ✓


Distributed transactions

This method should be avoided: distributed transactions don't scale well, they require all participating systems to be up at the same time, and they are not a good fit for microservices.


Logical transaction

The database update and the event send can be tied together in a single database transaction, forming one logical transaction (see the sketch after the steps below).

e.g.:

  1. Begin transaction

  2. Insert / Update the User entity

  3. Send the event to the queue and wait for acknowledgment

  4. Commit the transaction
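
Here's what these steps might look like with JDBC and a Kafka producer; the table name, topic, and connection details are assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LogicalTransaction {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/users", "app", "secret")) {

            conn.setAutoCommit(false); // 1. begin transaction

            // 2. insert / update the User entity
            try (PreparedStatement stmt = conn.prepareStatement(
                    "INSERT INTO users (id, name) VALUES (?, ?)")) {
                stmt.setString(1, "42");
                stmt.setString(2, "Jane");
                stmt.executeUpdate();
            }

            // 3. send the event and block until the broker acknowledges it
            producer.send(new ProducerRecord<>("user-events", "42",
                    "{\"id\":\"42\",\"name\":\"Jane\"}")).get();

            // 4. commit: if this one step fails, the event is already out
            //    and the two systems diverge again
            conn.commit();
        }
    }
}
```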

Although this approach might work, it isn't viable with every queuing system.

This technique may still lead to data inconsistency: the queuing system can acknowledge the User event, and then the database can fail to commit the transaction!


Outbox pattern

The main idea behind the outbox pattern is that the service adds an outbox table to its own database. The outbox table holds every event that should be sent, with a destination queue per message type. In the background, a CDC (change data capture) plugin reads the commit log of the outbox table and publishes each event to the relevant queue.
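
As a rough sketch, an outbox table could look like the following; the schema and column names are an assumption for illustration, not taken from the original:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class OutboxSchema {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust for your database.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/users", "app", "secret");
             Statement stmt = conn.createStatement()) {
            // One row per event; the CDC plugin picks them up from the commit log.
            stmt.execute("CREATE TABLE IF NOT EXISTS outbox ("
                    + " id             UUID PRIMARY KEY,"
                    + " aggregate_type VARCHAR(255) NOT NULL," // used to route to the right queue/topic
                    + " aggregate_id   VARCHAR(255) NOT NULL," // e.g. the User id, becomes the message key
                    + " event_type     VARCHAR(255) NOT NULL," // e.g. UserCreated, UserUpdated
                    + " payload        JSONB NOT NULL)");      // the event body itself
        }
    }
}
```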

e.g.:

  1. New User data is received

  2. Begin transaction

  3. Insert / Update the User entity

  4. Insert the message payload to the outbox table

  5. Delete the entry from the outbox table 🙄

  6. Commit transaction
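
A sketch of this write path in JDBC, using the illustrative schema above. Note that the application never talks to the queue; it only writes to (and deletes from) the outbox table inside the same transaction:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.UUID;

public class OutboxWrite {
    public static void main(String[] args) throws Exception {
        // 1. new User data received (hard-coded here for brevity)
        String userId = "42";
        String payload = "{\"id\":\"42\",\"name\":\"Jane\"}";

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/users", "app", "secret")) {
            conn.setAutoCommit(false); // 2. begin transaction

            // 3. insert / update the User entity
            try (PreparedStatement stmt = conn.prepareStatement(
                    "INSERT INTO users (id, name) VALUES (?, ?)")) {
                stmt.setString(1, userId);
                stmt.setString(2, "Jane");
                stmt.executeUpdate();
            }

            // 4. insert the message payload into the outbox table
            UUID eventId = UUID.randomUUID();
            try (PreparedStatement stmt = conn.prepareStatement(
                    "INSERT INTO outbox (id, aggregate_type, aggregate_id, event_type, payload) "
                    + "VALUES (?, 'User', ?, 'UserCreated', ?::jsonb)")) {
                stmt.setObject(1, eventId);
                stmt.setString(2, userId);
                stmt.setString(3, payload);
                stmt.executeUpdate();
            }

            // 5. delete it right away: the INSERT still reaches the commit log,
            //    which is all the CDC plugin reads, and the table stays empty
            try (PreparedStatement stmt = conn.prepareStatement(
                    "DELETE FROM outbox WHERE id = ?")) {
                stmt.setObject(1, eventId);
                stmt.executeUpdate();
            }

            conn.commit(); // 6. everything commits or rolls back together
        }
    }
}
```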


Using this approach prevents data inconsistency. The User data in all other services will be eventually consistent with the User microservice.


If you are using Apache Kafka as your queuing system, you can use a Kafka Connect plugin to read the outbox table, filter according to the record type, and route to the preferred topic.
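
For example, Debezium is a common CDC plugin for Kafka Connect, and its outbox event router does exactly this filtering and routing. A rough sketch of such a connector config follows; the connection values are placeholders and the property names are based on Debezium's documented outbox EventRouter options:

```json
{
  "name": "user-outbox-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "localhost",
    "database.port": "5432",
    "database.user": "app",
    "database.password": "secret",
    "database.dbname": "users",
    "topic.prefix": "userdb",
    "table.include.list": "public.outbox",
    "transforms": "outbox",
    "transforms.outbox.type": "io.debezium.transforms.outbox.EventRouter",
    "transforms.outbox.route.by.field": "aggregate_type",
    "transforms.outbox.route.topic.replacement": "${routedByValue}-events"
  }
}
```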


Is the delete from the outbox table intentional?

Indeed, the delete statement is intentional: the record is written to the commit log anyway, and the plugin that reads that log should be configured to pick up only the inserted records.

This will save you from writing an additional procedure that clears the table once in a while.


Conclusion

Dual writes should be avoided. However, the next time you face one, at least you know how it can be handled properly.
