Transactional outbox. Part II, Message Relay.

In the previous article I left this part pending, so it’s time to explain it. There’s no mystery about it: the Message Relay’s job is to read from the Outbox table that we are using as a queue, and to do these reads periodically. Two questions came to my head the first time I read about this:

  1. How frequently do you want to do these reads?
  2. How will the number of workers (clients consulting this outbox table) affect the performance of the DB, and its ability to respond to incoming requests?

In general: is this efficient? I mean, does the DB do a good job with all these reads?

These two questions are quite important, because yes, the pattern looks cool on paper, but the reality is that if you don’t have a good way to do this then it’s kind of useless. If there’s no tool that supports your pattern, and you have to develop those tools yourself, then for the everyday developer who just wants to apply the pattern it’s frustrating. Let’s try to answer these questions and bring some light into the subject.

Analysis

Polling publisher

Chris Richardson’s book describes this pattern for fetching the data. There’s nothing mystical about it, which is good: the simpler the better. The idea is just to run a SELECT statement on the outbox table, filtering the messages/events according to whether they were published or not. Something like this:

SELECT * FROM events WHERE published = false;

A query similar to this, but including an ORDER BY ... ASC on a specific field that keeps track of the order. Let’s assume that field is the id, so the whole query could be:

SELECT * FROM events WHERE published = false ORDER BY id ASC;

Now, this should return the messages/events we want from the table. After this we should send the data to the broker, and then either delete these messages from the table or mark them as published. That’s cool, right? You cannot get it more simple than that. Still, we will need to do some math to know whether this is actually a good fit for the problem.
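
For instance, assuming the events table from the queries above, the cleanup step could look like either of these (the id list being whatever the previous SELECT returned):

-- option 1: mark the batch as published
UPDATE events SET published = true WHERE id IN (1, 2, 3);

-- option 2: delete the batch outright
DELETE FROM events WHERE id IN (1, 2, 3);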

Hypothetical situation

Let’s suppose we have 4 workers, meaning 4 clients reading from this table, and each of these workers reads every second. That is 345,600 reads per day (4 * 24 * 60 * 60 = 345,600), and not all of these reads give you actual new data. This is the main problem: you are basically reading from the DB on faith that you will get new data at some point. That’s not good in my opinion. After those reads you also need to delete whatever new data you got. How much could all this cost? I really don’t know; you should try it yourself for a couple of months.
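
Just to make that arithmetic explicit, a tiny sketch that computes the daily read count for a given number of workers and polling interval:

package main

import "fmt"

func main() {
	const (
		workers       = 4
		secondsPerDay = 24 * 60 * 60 // 86400
		pollEverySecs = 1
	)

	readsPerDay := workers * secondsPerDay / pollEverySecs
	fmt.Println("reads per day:", readsPerDay) // 345600
}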

Now, a better approach would be if you had a way to know when to read. What I mean by that is that instead of reading every second, we could know when new data has been committed into the table. In this way you reduce the number of reads, because you only read when there’s actually new data in the table.
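
As a concrete illustration (this is not the pattern described in the book, just one well-known way to get such a signal in Postgres), a trigger can NOTIFY a channel on every insert, and the workers LISTEN on that channel instead of polling blindly. A minimal sketch, assuming the events table from before:

-- emit a notification with the new event id on every insert
CREATE OR REPLACE FUNCTION notify_new_event() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('outbox_events', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_notify
AFTER INSERT ON events
FOR EACH ROW EXECUTE FUNCTION notify_new_event();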

Basic Golang implementation

Here is a simple implementation of the pattern previously described:


package msgrelay

import (
	"context"
	"log"

	"github.com/Gealber/outbox/repositories/model"
)

type eventRepo interface {
	List(ctx context.Context) ([]*model.Event, error)
	Delete(ctx context.Context, ids []string) error
}

// Poll performs the POLLING PUBLISHER PATTERN, in a very simple and inefficient manner :).
func Poll(ctx context.Context, eventRepo eventRepo) error {
	log.Println("EXECUTING MSG RELAY...")
	// list events unpublished.
	events, err := eventRepo.List(ctx)
	if err != nil {
		return err
	}

	if len(events) == 0 {
		return nil
	}

	ids := make([]string, 0, len(events))

	// collect the ids of the events we are about to publish.
	for _, event := range events {
		ids = append(ids, event.ID)
	}

	// of course I'm not actually publishing into a broker.
	log.Printf("PUBLISHING EVENTS INTO BROKER: %+v\n", events)

	// delete them from outbox db.
	return eventRepo.Delete(ctx, ids)
}
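
And here is how a worker could drive it, polling every second like in the hypothetical situation above. Note that fakeRepo below is hypothetical, just enough to satisfy the eventRepo interface and make the sketch runnable; a real worker would wire in the Postgres-backed repository instead (the msgrelay import path is also an assumption):

package main

import (
	"context"
	"log"
	"time"

	"github.com/Gealber/outbox/msgrelay"
	"github.com/Gealber/outbox/repositories/model"
)

// fakeRepo is a hypothetical in-memory stand-in for the real repository.
type fakeRepo struct {
	events []*model.Event
}

func (r *fakeRepo) List(ctx context.Context) ([]*model.Event, error) {
	return r.events, nil
}

func (r *fakeRepo) Delete(ctx context.Context, ids []string) error {
	// a real repository would delete only the given ids.
	r.events = nil
	return nil
}

func main() {
	ctx := context.Background()
	repo := &fakeRepo{events: []*model.Event{{ID: "1"}, {ID: "2"}}}

	// poll the outbox once per second, like each worker in the
	// hypothetical situation above.
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	for range ticker.C {
		if err := msgrelay.Poll(ctx, repo); err != nil {
			log.Printf("poll failed: %v", err)
		}
	}
}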

Transaction log tailing

This is a quite obscure, but way more efficient, way to deal with this problem. It basically consists of tailing the database logs to know when a transaction has been committed into the table, and then publishing that as a message into the broker. How to do this? That’s the obscure part: each database has a different mechanism for tailing these logs.
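
To give a taste of what this looks like, in Postgres the WAL can be read through logical decoding. A minimal sketch using the built-in test_decoding plugin (this needs wal_level = logical, and it’s for poking around, not for production):

-- create a logical replication slot that decodes the WAL
SELECT * FROM pg_create_logical_replication_slot('outbox_slot', 'test_decoding');

-- read (and consume) the changes committed since the last read
SELECT * FROM pg_logical_slot_get_changes('outbox_slot', NULL, NULL);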

Honestly, the one that seems least obscure is AWS DynamoDB; they seem to have examples of how to use this, check it out here. Of course that ties you to AWS, so …

Another problem with this approach is that I couldn’t find a single library in Go that deals with this. I found one service that is meant to be used with the Postgres WAL, you can find it here. I need to try it; honestly I haven’t given it a chance yet. This pattern seems so, but so hacky (is that a word?): it literally consists of interacting, yourself or through a library, directly with the API of the specific DB you are using. That’s not easy, especially if you are just trying to use the pattern to solve a much simpler problem. I would say there’s a lack of libraries that deal with this; maybe in the future that will change. Take into consideration that I’m looking at this from the perspective of someone who works with Golang; I know that in Java the author of the book made a framework for this specific purpose.

Conclusion

These patterns are quite instructive, and of course if you read the article you noticed that I’m actually learning about them at the moment, so if you find any issue with my explanation feel free to reach out; I’ll be happy to learn from you. I’m still not convinced that these particular patterns are practical, from the perspective of a developer that doesn’t know how to program against the Postgres or MySQL APIs, etc., but if at some point we have libraries that support this (I’m referring to Transaction log tailing), it’s definitely a good alternative. Unfortunately that’s not the reality yet. Maybe I didn’t research the available libraries well enough, so feel free to reach out about that too. Bye, that’s all :).

EDIT

Huge edit: I wasn’t aware of CockroachDB CDC, which is quite amazing by the way. Also check out this video, a lecture on Event-Driven Architecture. This is awesome.
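
From what I’ve seen in the docs, a changefeed on the outbox table boils down to a single statement like this (the Kafka address is just a placeholder):

CREATE CHANGEFEED FOR TABLE events INTO 'kafka://localhost:9092';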

Bibliography

  1. Transaction log tailing.
  2. Polling publisher.