Dual write and data inconsistency
A dual write occurs when a service needs to change data in two separate systems, for example a database and a message queue.
Dual write case
Let's assume there is a User microservice that owns the User data, and it is the microservice's responsibility to publish that data to the other services in the system.
When new User data reaches the microservice, the data should be saved in the database and only then published to the other services using a queuing system.
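A minimal sketch of that flow, assuming hypothetical UserRepository and EventQueue types standing in for a database client and a queue producer (none of these names come from the original setup):

```java
// Hypothetical types standing in for a database client and a queue producer.
interface UserRepository { void save(User user); }
interface EventQueue { void publish(String topic, User user); }
record User(long id, String name) {}

class UserService {
    private final UserRepository repository;
    private final EventQueue queue;

    UserService(UserRepository repository, EventQueue queue) {
        this.repository = repository;
        this.queue = queue;
    }

    // The dual write: two independent systems, no shared transaction.
    void createUser(User user) {
        repository.save(user);              // write #1: the database
        queue.publish("user-events", user); // write #2: the queue, which can fail independently
    }
}
```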
How this leads to data inconsistency
If inserting into the database succeeds but sending the User event to the queue fails, the system is left in an inconsistent state: the User microservice holds data that the other services will never receive.
Potential solutions
Distributed transactions ✗
Logical transaction ~
Outbox pattern ✓
Distributed transactions
This method should be avoided. Distributed transactions don't scale well, they require every participating system to be up at the same time, and they work against the loose coupling that microservices aim for.
Logical transaction
The database update and the event publishing can be tied together inside a single database transaction, forming one logical transaction.
e.g.:
Begin transaction
Insert / Update the User entity
Send the event to the queue and wait for acknowledgment
Commit the transaction
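A sketch of that sequence, assuming a JDBC-style connection and a hypothetical queue client whose publish call blocks until the broker acknowledges the message:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

// Hypothetical queue client: publish() returns only after the broker acks.
interface AckingQueue { void publish(String topic, String payload) throws Exception; }

class LogicalTransactionUserService {
    void createUser(Connection db, AckingQueue queue, long id, String name) throws Exception {
        db.setAutoCommit(false); // begin transaction
        try (PreparedStatement stmt = db.prepareStatement(
                "INSERT INTO users (id, name) VALUES (?, ?)")) {
            stmt.setLong(1, id);
            stmt.setString(2, name);
            stmt.executeUpdate();                                 // insert/update the User entity
            queue.publish("user-events", "{\"id\":" + id + "}"); // send the event, wait for ack
            db.commit();                                          // commit can still fail after the ack
        } catch (Exception e) {
            db.rollback(); // rolls back the row, but an already-acked message cannot be unsent
            throw e;
        }
    }
}
```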
Although this approach might work, it isn't viable with every queuing system.
Worse, it can still lead to data inconsistency: if the queuing system acknowledges the User event but the database then fails to commit the transaction, the other services have received data that was never persisted!
Outbox pattern
The main idea behind the outbox pattern is that the service adds an outbox table to its own database. The outbox table holds every event that should be published, and each event is routed to the relevant queue according to its message type. In the background, a CDC (change data capture) plugin reads the commit log of the outbox table and publishes the events to the relevant queues.
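A minimal outbox table might look like this (Postgres syntax; the column names are illustrative, following common outbox conventions rather than a schema given here):

```sql
-- Illustrative outbox table: one row per event to publish.
CREATE TABLE outbox (
    id             UUID PRIMARY KEY,
    aggregate_type VARCHAR(255) NOT NULL, -- the message type, e.g. 'User'; used for routing
    aggregate_id   VARCHAR(255) NOT NULL, -- id of the entity the event is about
    event_type     VARCHAR(255) NOT NULL, -- e.g. 'UserCreated'
    payload        JSONB        NOT NULL  -- the message body to publish
);
```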
e.g.:

New User data is received
Begin transaction
Insert / Update the User entity
Insert the message payload to the outbox table
Delete the entry from the outbox table 🙄
Commit transaction
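A sketch of those steps in JDBC-style code, reusing the illustrative schema above:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.UUID;

class OutboxUserService {
    // The user row, the outbox insert, and the outbox delete all commit
    // (or roll back) atomically in ONE local database transaction.
    void createUser(Connection db, long id, String name, String payloadJson) throws SQLException {
        db.setAutoCommit(false); // begin transaction
        try {
            try (PreparedStatement users = db.prepareStatement(
                    "INSERT INTO users (id, name) VALUES (?, ?)")) {
                users.setLong(1, id);
                users.setString(2, name);
                users.executeUpdate(); // insert/update the User entity
            }
            UUID eventId = UUID.randomUUID();
            try (PreparedStatement outbox = db.prepareStatement(
                    "INSERT INTO outbox (id, aggregate_type, aggregate_id, event_type, payload) "
                            + "VALUES (?, 'User', ?, 'UserCreated', ?::jsonb)")) {
                outbox.setObject(1, eventId);
                outbox.setString(2, Long.toString(id));
                outbox.setString(3, payloadJson);
                outbox.executeUpdate(); // insert the message payload into the outbox table
            }
            try (PreparedStatement cleanup = db.prepareStatement(
                    "DELETE FROM outbox WHERE id = ?")) {
                cleanup.setObject(1, eventId);
                cleanup.executeUpdate(); // delete right away: the INSERT already hit the commit log
            }
            db.commit();
        } catch (SQLException e) {
            db.rollback(); // nothing leaks: no user row, no outbox event
            throw e;
        }
    }
}
```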
Using this approach prevents the inconsistency: the User data at all other services will be eventually consistent with the User microservice.
If you are using Apache Kafka as your queuing system, you can use a Kafka Connect CDC connector to read the outbox table's change log, filter according to the record type, and route each event to the preferred topic.
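For example, with Debezium (one common CDC choice; the post doesn't name a specific connector), the outbox event router SMT can be configured roughly like this, with the table and column names taken from the illustrative schema above:

```properties
# Sketch of a Debezium Postgres source connector using the outbox event router.
connector.class=io.debezium.connector.postgresql.PostgresConnector
table.include.list=public.outbox
transforms=outbox
transforms.outbox.type=io.debezium.transforms.outbox.EventRouter
# Route by the message-type column: events land on outbox.event.<aggregate_type>.
transforms.outbox.route.by.field=aggregate_type
transforms.outbox.route.topic.replacement=outbox.event.${routedByValue}
# Further table.field.* options map the id/key/payload columns; the defaults
# assume Debezium's own column naming, so a real setup would align one or the other.
```

Debezium's documentation describes exactly this usage, including deleting outbox rows immediately after inserting them, since the connector captures the insert from the log.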
Is the delete from the outbox table intentional?
Indeed, the delete is intentional: the inserted record is written to the commit log anyway, and the plugin that reads that log should be configured to pick up only the insert records and ignore the deletes.
This keeps the outbox table effectively empty and saves you from writing an additional procedure that clears the table once in a while.
Conclusion
Dual writes should be avoided. However, the next time you face one, at least you know how to handle it properly.