Zulfiqar's weblog

Architecture, security & random .Net

Archive for the ‘Couchbase’ Category

Redis vs Couchbase

Posted by zamd on October 10, 2014

Recently, my project experienced latency issues which required us to introduce a caching layer into our architecture. I evaluated Couchbase & Redis as potential technology choices and decided to go with Redis, as it nicely fits our data & computation model. In this post, I’ll briefly share our requirements, our data model and the factors which led us to choose Redis over Couchbase.

Our latency issues were twofold:

  • Large data sets returned from various corporate services (stock levels, products & prices etc.). This data is mostly reference/semi-static and is cacheable (for hours at least)
  • Running computations (matching, copying & intersection) on large lists of objects and copying them back & forth from the cache

The data structures involved in our solution were sets of 2-tuples, each containing a TPNB (identifier) and a number, which aligns nicely with the Redis Sorted Set data structure, where each member of the set has a key and a score.
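To illustrate the shape of this data (a sketch only; the TPNBs are taken from the examples below, the levels are invented), a sorted set is essentially a mapping from member to numeric score, kept ordered by score:

```python
# A Redis sorted set maps each member (here a TPNB) to a numeric score
# (here a stock level). A plain dict is enough to show the shape.
ledger = {
    "018616111": 4,  # TPNB -> ledger stock level
    "018616112": 7,
}

# Redis keeps members ordered by score; sorting a dict view shows the idea.
by_score = sorted(ledger.items(), key=lambda kv: kv[1])
print(by_score)
```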

Canonical Data Model

Our canonical model covers four structures: Stock Count, Ledger Stock Level, Stock Count Variance and Product Price.
The power of Redis comes from the fact that you can perform operations directly on the sets, inside the server. Couchbase has no such facility: you have to retrieve the target documents into the application server, apply your logic/computation, and store the resultant document back in the cache. The Redis approach significantly reduces data copying in and out of the cache by pushing computation to the data.

We started by storing the ledger stock level as a sorted set. Each [TPNB, Level] pair was stored as a distinct element within the set, and we used the standard ZADD command inside a transaction to populate the set.


zadd sc.03211.ledger 4 018616111

zadd sc.03211.ledger 7 018616112


When we start the stock count, we simply union the ledger set with a non-existent set to create our count variance set. This is a single instruction in Redis and no data is copied in or out of cache.

zunionstore sc.03211.variance 2 sc.03211.ledger non-existent-set

When we start the count, we need the same product boundary as the ledger set, but all the product counts need to be reset to ‘0’. For this, we again used a union operation, this time with a weight of ‘0’, and cloned the ledger position as our ‘start count’ position.

zunionstore sc.03211.count 2 sc.03211.ledger non-existent-set WEIGHTS 0 0
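In sorted-set terms, a ZUNIONSTORE with a single populated source copies that set, and WEIGHTS 0 zeroes every score on the way in. A rough Python model of those two semantics (dicts standing in for the sorted sets, TPNBs from the ZADD examples above):

```python
def zunionstore(*sources, weights=None):
    """Rough model of ZUNIONSTORE: sum the weighted scores across sources."""
    weights = weights or [1] * len(sources)
    out = {}
    for src, w in zip(sources, weights):
        for member, score in src.items():
            out[member] = out.get(member, 0) + score * w
    return out

ledger = {"018616111": 4, "018616112": 7}
empty = {}  # the 'non-existent-set' in the commands above

variance = zunionstore(ledger, empty)               # a clone of the ledger
count = zunionstore(ledger, empty, weights=[0, 0])  # same members, scores reset to 0
print(variance, count)
```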

Couchbase’s memcached-type buckets have a max value-size limit of 1MB, which means we couldn’t store our data objects as single key-value items; even the 20MB limit of Couchbase-type buckets would be a stretch in some cases.

Redis has no comparable value-size limit: a single sorted set can hold up to 4 billion members.

Once the initial data structures were set up, we use ZINCRBY commands (with positive and negative increments) inside a transaction to increment the count & decrement the variance in real time, based on our inventory scan event stream.
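A sketch of how each scan event adjusts the two sets (dicts again modeling the sorted sets; the event-handling function and quantities are invented for illustration):

```python
# Start-of-count position: counts at zero, variance cloned from the ledger.
count = {"018616111": 0, "018616112": 0}
variance = {"018616111": 4, "018616112": 7}

def apply_scan(tpnb, qty=1):
    """Model of the transactional increment pair: count up, variance down."""
    count[tpnb] = count.get(tpnb, 0) + qty
    variance[tpnb] = variance.get(tpnb, 0) - qty

apply_scan("018616111")  # one item of 018616111 scanned
print(count["018616111"], variance["018616111"])
```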

We categorize our variance by simply intersecting the relevant ‘category set’ with the ‘variance set’ and storing the resulting set as a ‘categorized variance’. Again, this is a single Redis instruction requiring no data copying.

zinterstore sc.03211.variance.h71 2 sc.03211.variance product.cat.h71 WEIGHTS 1 1
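ZINTERSTORE keeps only the members present in every source set and combines their weighted scores. Modeled the same way (the category membership and scores here are invented; category sets carry a score of 0 so the variance scores survive the sum):

```python
def zinterstore(*sources, weights=None):
    """Rough model of ZINTERSTORE: weighted score sum over common members."""
    weights = weights or [1] * len(sources)
    common = set(sources[0])
    for src in sources[1:]:
        common &= set(src)
    return {m: sum(src[m] * w for src, w in zip(sources, weights)) for m in common}

variance = {"018616111": 3, "018616112": 7, "018616113": 2}
category_h71 = {"018616112": 0, "018616113": 0}  # invented category membership

categorized = zinterstore(variance, category_h71)
print(categorized)  # only the h71 products remain, variance scores kept
```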

Redis also offers the ability to run custom logic inside the cache as Lua scripts. In the example below, we used a custom script to price the variance by multiplying each product count with the unit price:

local leftSetId = KEYS[1]
local rightSetId = KEYS[2]
local destSetId = KEYS[3]

local data = redis.call('zrange', leftSetId, 0, -1, 'WITHSCORES')
for k, v in pairs(data) do
    if k % 2 == 1 then
        local val = v
        local leftScore = tonumber(data[k + 1])
        local rightScore = tonumber(redis.call('zscore', rightSetId, val))
        if rightScore then
            redis.call('zadd', destSetId, leftScore * rightScore, val)
        end
    end
end
return 'ok'
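The script’s effect (multiply each product’s count by its unit price wherever both exist) can be checked with a quick Python equivalent; the prices here are invented:

```python
def price_variance(left, right):
    """Python equivalent of the Lua script: dest[m] = left[m] * right[m]
    for every member m of left that also has a score in right."""
    return {m: s * right[m] for m, s in left.items() if m in right}

variance = {"018616111": 3, "018616112": 7}
unit_price = {"018616111": 2.5, "018616112": 4.0}

priced = price_variance(variance, unit_price)
print(priced)
```

On the Redis side, the Lua script is invoked with EVAL (or EVALSHA), passing the three key names as KEYS, e.g. the variance set, the price set and the destination set.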

Redis has built-in support for master/slave replication, with the Sentinel platform providing monitoring and auto-failover. A cluster of master/slave nodes can be deployed across data centres, and Sentinel fails over automatically when the master goes down. Sentinel publishes key failover events using Redis pub/sub, and clients can use these events to reconfigure themselves to the new master once a failover completes.

Redis Cluster is currently under development and will enable auto-sharding and transparent failover on top of the current replication model. The currently proposed model is very similar to Couchbase’s: dumb clients simply try a random instance & get redirected if the target instance doesn’t master the key, while smart clients cache the cluster map and always go to the correct host based on that cached map.

Currently, proxy solutions like twemproxy can be used to provide auto-sharding on top of Sentinel-managed replication.

In summary, Redis was more than a simple key/value store for us. The fact that we can run computation inside the cache to filter, merge and group data was the key differentiator for Redis over Couchbase.

The clustering and high-availability support in Redis is still a bit lacking, which makes it a risky choice as the master/primary data store of an application. In such scenarios, Couchbase would be the preferred technology choice.


Posted in Couchbase, Redis | Tagged: | 3 Comments »

RFID Stock Management–Architecture & Challenges

Posted by zamd on September 12, 2014

I’m currently working on an RFID-based stock management solution. The high-level goals of the solution are to ensure that the stock shown on the computer is the stock available in the store, that the correct products (in the correct quantities) are displayed on the shop floor, and to reduce stock loss through real-time visibility of what’s passing through the tills before it’s taken out of the store. As a start, we are only doing this for clothing products, where our items are source (factory) tagged with RFID tags and then tracked from delivery to sale & return using various types of RFID readers: handhelds, fixed portals, security gates, click & collect readers etc.

The hardware side of the project is interesting, but it’s mostly off-the-shelf readers & gates supplied by Motorola, Checkpoint & Nedap. These readers do the bulk of the work, and we run a simple integration agent on top of them to connect them to our software backend.

Our software backend is a set of SOA-style (REST) web services built with ASP.NET Web API & hosted on Windows. From a design & implementation perspective, we use CQS, DDD & Event Sourcing, and our domain entities are persisted (using NHibernate) in Oracle (Exadata), which is our master data store. We use SpecFlow/NCrunch to automate our acceptance testing and NUnit for unit testing.

We started this as a typical .NET project and had interesting challenges around performance & latency on the read path, which pushed us to do more & more work on the asynchronous write path. We started to separate our read & write stores and decided to build the read store completely in the cache, based on the event stream we capture on the write path. We started with Couchbase, with its memcached Data Bucket, as our first choice for the read store. Couchbase is a great technology: it converges the key-value & document store models into one great product, and I love the power & simplicity of its map/reduce framework.

For us, though, Couchbase didn’t work out, as our latency was still high because of the computation involved on huge lists of stock data. We needed to bring data streams from the cache into our service, compute variances, categorization etc., and then store the computed results back in the cache.

Our next choice was Redis; its Sorted Set data structure aligned nicely with the data model we needed to store & compute on. I’m extremely impressed with the power of Redis, and the ability to run computation in the cache is exactly what we needed. Most of our computation can be done with a single union or intersection command on sorted sets, which is a sub-millisecond operation. We are actively building on this model, and in future posts I’ll share more details on our architecture and the specifics of our Redis usage. Our high-level architecture looks like this…



Posted in Architecture, Couchbase, Redis | Leave a Comment »