What is CQRS, and why and how do we use it?
This article is aimed towards anyone with a basic understanding of application architectures. It requires some knowledge of how applications and data stores fit together, as well as a superficial knowledge of AWS.
It looks to address the following points:
- What is Command Query Responsibility Segregation (CQRS)?
- When and why do we use it?
- What is Event Sourcing, and why is it commonly used with CQRS?
- How might these ideas be implemented in an AWS environment?
All of these points will be covered in the context of adding a comment to a picture on a social media site. This example will help us work through some of the core ideas of the pattern.
Conceptually, CQRS isn’t too complex. In a more traditional architecture we may read and write to the same data store, potentially via the same API.
However, sometimes this is not appropriate. What if we have different requirements for reading and writing? Writing may involve complex logic, or the read models may not fit well with the way the data is stored for writing. We may be doing very little writing compared to reading, or we may want to expose some data for some purposes but not for others. In short, the traditional approach tightly couples the notions of reading and writing data.
To avoid this we separate the two ideas.
The above diagram demonstrates the general concept, although the exact implementation can vary depending on use case. For example, the read data store may just be materialised views in the write database, or the two application layers may not be as separate as shown.
By achieving what is demonstrated in the diagram we can:
- Separate concerns. We can now develop differently for read and write, pushing any write complexities into the appropriate code.
- Offer separate data models for reading and writing, simplifying queries.
- Scale read and write infrastructure separately.
- Secure the two purposes independently.
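The separation described above can be sketched in a few lines of Python. The store and function names here are illustrative, not part of any framework, and the synchronisation is done inline purely to keep the example runnable — in practice it would be asynchronous, which is exactly where eventual consistency comes from:

```python
# A minimal CQRS sketch: commands mutate the write store, queries only
# touch the read store, and a sync step propagates changes between them.
write_store = {}   # optimised for writing, e.g. normalised records
read_store = {}    # optimised for reading, e.g. denormalised views

def handle_create_comment(comment_id, picture_id, text):
    # Command side: validation and business rules live here.
    if not text:
        raise ValueError("Comment text must not be empty")
    write_store[comment_id] = {"picture_id": picture_id, "text": text}
    sync(comment_id)

def sync(comment_id):
    # In a real system this step is asynchronous (hence eventual
    # consistency); here we copy the record across immediately.
    record = write_store[comment_id]
    read_store.setdefault(record["picture_id"], {})[comment_id] = record["text"]

def query_comments(picture_id):
    # Query side: read-only, shaped for display.
    return read_store.get(picture_id, {})

handle_create_comment("c1", "pic42", "Nice photo!")
```

Note that the query side never touches `write_store` — each side can now evolve, scale and be secured independently.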
However, none of the above comes for free. The downsides of using a more involved pattern include:
- Increased complexity, especially around the synchronisation of the data stores.
- Eventual consistency. As there is some latency between the read and write versions of the data, there is no guarantee of queries returning the most recent versions of data.
Let’s return to our example — adding a comment to a picture on a social media site. We will use this to drill further into the definitions of a command and a query, as well as to explore the notion of event sourcing.
We would like to allow our user to create, read, update and delete their comments. These are then separated into command-based (create, update and delete) and query-based (read) functionalities.
Commands should directly reflect the task they are responsible for (for example, create comment) and can be synchronous or asynchronous depending on our use case. Queries should, of course, be read-only.
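This classification can be made explicit in code. A sketch of the commands and the single query for our comment example, using plain dataclasses (the names are illustrative):

```python
from dataclasses import dataclass

# Commands name the task they perform and carry the data needed to do it.
@dataclass
class CreateComment:
    picture_id: str
    text: str

@dataclass
class UpdateCommentText:
    comment_id: str
    text: str

@dataclass
class DeleteComment:
    comment_id: str

# Queries are read-only requests: they describe what to fetch,
# never what to change.
@dataclass
class GetCommentsForPicture:
    picture_id: str
```

Keeping commands and queries as distinct types makes it easy to route them to different handlers — and, later, to different infrastructure.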
This is the perfect time to introduce Event Sourcing. Event Sourcing is where we store actions on data, rather than the data itself.
In a traditional model we may have a row in a table that represents a comment. For simplicity’s sake we will say the table has two columns: the identifier of the picture we are commenting on, and the comment text.
When we create a comment we add a row; when we update the comment we amend the comment text; when we delete the comment we delete the row. Reading the comment corresponds to reading the row.
In event sourcing we maintain an append-only list of actions. The following would be a valid, chronological action list:
- Create comment
- Update comment text
- Delete comment
The outcome of this would be the same as if no comment had been added.
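We can demonstrate this by replaying that exact action list. The event shapes below are illustrative; the point is that the current state is *derived* from the history rather than stored directly:

```python
# Replaying an append-only event list to derive current state.
events = [
    {"type": "CommentCreated", "comment_id": "c1", "text": "Nice photo!"},
    {"type": "CommentTextUpdated", "comment_id": "c1", "text": "Great photo!"},
    {"type": "CommentDeleted", "comment_id": "c1"},
]

def replay(events):
    state = {}  # comment_id -> comment text
    for event in events:
        if event["type"] in ("CommentCreated", "CommentTextUpdated"):
            state[event["comment_id"]] = event["text"]
        elif event["type"] == "CommentDeleted":
            state.pop(event["comment_id"], None)
    return state

print(replay(events))  # {} -- as if no comment had ever been added
```

Replaying only the first two events would instead yield the updated comment, showing how any historical state can be reconstructed from the log.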
Typically the event store would publish these events to be consumed by dependent systems. In the below diagram we can begin to see how this may be used for CQRS.
The outstanding question is how the two approaches compare.
- In the more traditional way of doing things we lose history unless it is stored separately as an audit log. Using event sourcing we retain history by default.
- In event sourcing we avoid contention between multiple update operations on the same piece of data, since events are only ever appended. This improves performance and scalability.
- In the event sourcing approach we need to be careful of the order of events. It is recommended to include a timestamp so that when we replay events we can recognise the correct sequencing.
- We can only retrieve the current state of a piece of data by replaying all of the events. In a traditional approach the current state is preserved more readily. We can address this by taking snapshots of the data state at certain points in the event list.
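The snapshot idea from the last point can be sketched as follows: persist the derived state at a known offset into the event list, then replay only the events that came after it. The event shapes are the same illustrative ones used for the comment example:

```python
# Snapshots avoid replaying the full event history: store the derived
# state at a known offset, then replay only the tail of the log.
def apply(state, event):
    if event["type"] in ("CommentCreated", "CommentTextUpdated"):
        state[event["comment_id"]] = event["text"]
    elif event["type"] == "CommentDeleted":
        state.pop(event["comment_id"], None)
    return state

def current_state(snapshot, snapshot_offset, events):
    state = dict(snapshot)
    for event in events[snapshot_offset:]:  # only events after the snapshot
        state = apply(state, event)
    return state

events = [
    {"type": "CommentCreated", "comment_id": "c1", "text": "Nice photo!"},
    {"type": "CommentTextUpdated", "comment_id": "c1", "text": "Great photo!"},
]
snapshot = {"c1": "Nice photo!"}  # state captured after the first event
assert current_state(snapshot, 1, events) == {"c1": "Great photo!"}
```

The result is identical to a full replay, but the cost of rebuilding state is bounded by the number of events since the last snapshot rather than the whole history.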
An Example AWS Architecture
So how might we implement a CQRS pattern with event sourcing? The below is a suggested architecture for our commenting system in AWS:
Despite containing some simplifications, the above is enough to demonstrate an approximate physical implementation.
To the user we would like to present our API as a single point of entry, without exposing the separation of commands and queries. An AWS API Gateway gives us the power to route different requests separately.
DynamoDB has a feature called Streams, which publishes item-level modifications to data in a table. A Lambda can then be configured to subscribe to these events, persisting them in an appropriate format to an RDS read data store.
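A sketch of that Lambda is shown below. DynamoDB Streams delivers records with an `eventName` (`INSERT`, `MODIFY` or `REMOVE`) and typed attribute images such as `{"S": "pic42"}`; the attribute names (`picture_id`, `comment_id`, `text`) are assumptions for our comment example, and a plain dict stands in for the RDS read store so the sketch is runnable:

```python
# Sketch of a Lambda consuming DynamoDB Streams events to update the
# read store. A dict stands in for RDS; attribute names are assumed.
read_store = {}

def lambda_handler(event, context):
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            # NewImage uses DynamoDB's typed format, e.g. {"S": "pic42"}.
            item = {k: v["S"] for k, v in record["dynamodb"]["NewImage"].items()}
            read_store.setdefault(item["picture_id"], {})[item["comment_id"]] = item["text"]
        elif record["eventName"] == "REMOVE":
            keys = {k: v["S"] for k, v in record["dynamodb"]["Keys"].items()}
            read_store.get(keys["picture_id"], {}).pop(keys["comment_id"], None)

# A sample stream payload, shaped as Lambda would receive it:
sample_event = {
    "Records": [{
        "eventName": "INSERT",
        "dynamodb": {"NewImage": {
            "picture_id": {"S": "pic42"},
            "comment_id": {"S": "c1"},
            "text": {"S": "Nice photo!"},
        }},
    }]
}
lambda_handler(sample_event, None)
```

In a real deployment the dict writes would become SQL statements against RDS, and the Lambda would be wired to the stream via an event source mapping.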
We now revisit the query route. In a similar manner, read queries are redirected from the API Gateway to an ECS Fargate service. This service then queries a Redis cache for the comments related to the query. If the cache contains a hit, it is returned; if not, we query the read store directly, updating the cache before returning.
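This is the classic cache-aside pattern, which can be sketched as follows. A dict stands in for both Redis and the read store so the example is self-contained:

```python
# Cache-aside reads: try the cache first, fall back to the read store
# on a miss, and populate the cache before returning.
cache = {}
read_store = {"pic42": ["Nice photo!", "Love it"]}

def get_comments(picture_id):
    if picture_id in cache:
        return cache[picture_id]                  # cache hit
    comments = read_store.get(picture_id, [])     # cache miss: query read store
    cache[picture_id] = comments                  # populate cache for next time
    return comments

get_comments("pic42")  # first call misses and fills the cache
get_comments("pic42")  # second call is served from the cache
```

In production the cache entries would also carry a TTL (or be invalidated by the stream-consuming Lambda) so stale comments expire.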
In conclusion, we have reviewed the definition, purpose and a potential implementation of the CQRS pattern.