Multiple Lock with Key Implementation in Go


A couple of months ago, I was developing services to calculate balances for transactions sent from one user's account to another and vice versa. Each transaction request is handled concurrently, so reads and writes to the database also happen concurrently. Concurrent operations can cause a data race if handled incorrectly, leading to indeterminate values in a user's balance calculation. To avoid this, we need the balance calculation on each user's account to run synchronously.

In this article, we will discuss how we can solve this problem using our Multiple Lock implementation and compare it with the regular Go Mutex. You can find the sample project here: Multiple Lock with Key.

Let's first take a look at the code for our Multiple Lock with Key implementation and some other examples before we jump into the solution for the problem mentioned above.

Here is the implementation of our Multiple Lock with Key.
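The original post embeds the implementation as a gist, which is not reproduced here, so below is a minimal sketch of what such an implementation could look like based on the description that follows. The names MultipleLock, keylock, and NewMultipleLock are assumptions for illustration, not necessarily the identifiers used in the sample project.

```go
package multilock

import "sync"

// keylock pairs an RWMutex with a counter of the operations currently
// holding or waiting on the lock for a given key.
type keylock struct {
	counter int64
	lock    *sync.RWMutex
}

// MultipleLock provides per-key locking: locks with different keys do not
// block each other, while operations sharing a key run synchronously.
type MultipleLock struct {
	inner sync.Mutex // internal lock guarding acquire/release bookkeeping
	locks sync.Map   // map[interface{}]*keylock
	pool  sync.Pool  // reusable *sync.RWMutex instances
}

// NewMultipleLock creates a MultipleLock whose pool produces a new RWMutex
// when no previously used one is available for reuse.
func NewMultipleLock() *MultipleLock {
	return &MultipleLock{
		pool: sync.Pool{
			New: func() interface{} { return &sync.RWMutex{} },
		},
	}
}

// Lock acquires the lock associated with key, creating a keylock entry from
// the pool if this is the first operation using that key.
func (m *MultipleLock) Lock(key interface{}) {
	m.inner.Lock()
	kl, ok := m.locks.Load(key)
	if !ok {
		kl = &keylock{lock: m.pool.Get().(*sync.RWMutex)}
		m.locks.Store(key, kl)
	}
	kl.(*keylock).counter++
	m.inner.Unlock()

	// Wait for the per-key lock outside the internal lock so other keys
	// can still be acquired and released in the meantime.
	kl.(*keylock).lock.Lock()
}

// Unlock releases the lock for key. When no operation holds it anymore, the
// keylock entry is removed and its RWMutex is returned to the pool.
func (m *MultipleLock) Unlock(key interface{}) {
	m.inner.Lock()
	defer m.inner.Unlock()

	kl, ok := m.locks.Load(key)
	if !ok {
		return
	}
	k := kl.(*keylock)
	k.lock.Unlock()
	k.counter--
	if k.counter <= 0 {
		m.locks.Delete(key)
		m.pool.Put(k.lock)
	}
}
```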

In the code above, we use sync.Map to store references to the keylock struct, which holds a counter and a lock field. The counter indicates the number of operations currently holding the lock, and the lock field stores a reference to a Go RWMutex. When the counter drops to zero or below, we remove the keylock reference and put the lock back into the pool. Note that we use sync.Pool to store references to sync.RWMutex so that we can reuse a lock from a previous operation, or create a new RWMutex if none is available. We also use an internal lock for acquiring and releasing locks, ensuring that only one acquire or release operation runs at a time.

Let's run some benchmarks to compare the Multiple Lock with Key and the regular Go Mutex.
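The benchmark code is also embedded in the original post; here is a rough sketch of what the two helpers could look like. The names runWithMultiLock, runWithMutexLock, and the fibonacci workload are taken from the article, but their exact bodies are assumptions.

```go
package multilock

import "sync"

// fibonacci is the CPU-bound workload guarded by the locks; larger count
// values make each locked section heavier.
func fibonacci(count int) int {
	if count < 2 {
		return count
	}
	return fibonacci(count-1) + fibonacci(count-2)
}

// runWithMultiLock guards the workload with a per-key lock, so calls whose
// keys differ can run in parallel.
func runWithMultiLock(l *MultipleLock, key interface{}, count int) {
	l.Lock(key)
	defer l.Unlock(key)
	fibonacci(count)
}

// runWithMutexLock guards the workload with a single shared mutex, so every
// call is serialized behind every other call.
func runWithMutexLock(mu *sync.Mutex, count int) {
	mu.Lock()
	defer mu.Unlock()
	fibonacci(count)
}
```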

In the code above, we have the runWithMultiLock and runWithMutexLock functions, which execute a block of code using our Multi Lock with Key and the regular Go Mutex respectively. Let's take a closer look at our Multi Lock benchmark function; here we have the following block of code.
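A sketch of what that block could look like, building on the helpers above (it would live in a _test.go file). The div-based key scheme follows the description below, but the exact loop is an assumption.

```go
package multilock

import (
	"sync"
	"testing"
)

func BenchmarkMultiLock(b *testing.B) {
	l := NewMultipleLock()
	const div = 2   // number of distinct keys the work is spread across
	const count = 10 // size of the fibonacci workload

	var wg sync.WaitGroup
	for n := 0; n < b.N; n++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			key := n % div // operations with different keys do not block each other
			runWithMultiLock(l, key, count)
		}(n)
	}
	wg.Wait()
}
```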

As you can see, the code above runs using different keys based on the value we assign to the div variable. We will benchmark both functions using a fibonacci function, starting with a small number and increasing it to compare their performance, beginning with count = 10. For our Multi Lock benchmark we will use a div value of 2. We get the following result.

From the result, we see that the operation with the Go Mutex is faster than our Multi Lock implementation. Let's run it again with count = 30.

Now we get a different result: where previously the Go Mutex version finished faster, our Multi Lock implementation now comes out ahead. The reason our Multi Lock with Key implementation runs faster on heavier operations is that we are using different keys for our locks. Since the locking mechanism is based on keys, locks with different keys do not block each other. However, our lock implementation introduces overhead when acquiring and releasing locks, which makes each lock and unlock more expensive than a regular Go Mutex; as a result, the Multi Lock implementation is slower and less effective for smaller operations. If we run each heavy operation under a different key, each iteration actually gets shorter.
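To make that point concrete, here is a small, illustrative demonstration, assuming the MultipleLock sketch from earlier is available in the same package: two heavy operations overlap when they use different keys and serialize when they share one.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	l := NewMultipleLock() // from the earlier sketch

	// run executes two locked "heavy" operations concurrently and returns
	// the total wall-clock time.
	run := func(keyA, keyB interface{}) time.Duration {
		start := time.Now()
		var wg sync.WaitGroup
		for _, key := range []interface{}{keyA, keyB} {
			wg.Add(1)
			go func(key interface{}) {
				defer wg.Done()
				l.Lock(key)
				defer l.Unlock(key)
				time.Sleep(100 * time.Millisecond) // stand-in for a heavy operation
			}(key)
		}
		wg.Wait()
		return time.Since(start)
	}

	fmt.Println("same key:     ", run("user-1", "user-1")) // ~200ms, serialized
	fmt.Println("different key:", run("user-1", "user-2")) // ~100ms, overlapped
}
```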

But if we run the same functions on a smaller operation, we actually get a longer execution time than in the previous benchmark because of the overhead.

Multiple Lock in Practice

When we work with operations running concurrently on shared resources, we may want to protect those resources so that only one process can access them at a time; in this case, the shared resource is our database. You might be tempted to apply locks directly on the shared data, but I would not recommend it; instead, we can apply locks on the operations that use the shared data. The sample project Multiple Lock with Key has a data layer, a repository layer, a service layer, and a handler for incoming requests. The data layer is our data model, the repository layer reads from and writes to our shared data, and the service layer contains all of our application's business logic. Putting a lock on the repository layer would force all read and write operations against the database to run synchronously. What we want instead is to apply locking to the balance calculation, so let's put the lock in our service layer, where the balance calculation happens.
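The service-layer code is embedded in the original post; the sketch below shows what that per-account locking could look like. The repository interface, field names, and fee handling here are assumptions for illustration; only TransferBalance and the idea of locking per user account come from the article.

```go
package service

import "errors"

// AccountRepository is a stand-in for the repository layer that reads and
// writes balances.
type AccountRepository interface {
	GetBalance(accountID string) (float64, error)
	UpdateBalance(accountID string, balance float64) error
}

type TransactionService struct {
	repo AccountRepository
	lock *MultipleLock // per-account locks, from the earlier sketch
}

// subtractBalance locks on the account ID so concurrent calculations for the
// same account run one at a time.
func (s *TransactionService) subtractBalance(accountID string, amount float64) error {
	s.lock.Lock(accountID)
	defer s.lock.Unlock(accountID)

	balance, err := s.repo.GetBalance(accountID)
	if err != nil {
		return err
	}
	if balance < amount {
		return errors.New("insufficient balance")
	}
	return s.repo.UpdateBalance(accountID, balance-amount)
}

// addBalance uses the same per-account lock for additions.
func (s *TransactionService) addBalance(accountID string, amount float64) error {
	s.lock.Lock(accountID)
	defer s.lock.Unlock(accountID)

	balance, err := s.repo.GetBalance(accountID)
	if err != nil {
		return err
	}
	return s.repo.UpdateBalance(accountID, balance+amount)
}

// TransferBalance subtracts the amount from the origin account, adds it to
// the destination account, and deducts the transaction fee from the origin.
func (s *TransactionService) TransferBalance(origin, destination string, amount, fee float64) error {
	if err := s.subtractBalance(origin, amount); err != nil {
		return err
	}
	if err := s.addBalance(destination, amount); err != nil {
		return err
	}
	return s.subtractBalance(origin, fee)
}
```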

Above is our application logic for the user's balance calculation. Each transaction request invokes the TransferBalance function, which subtracts the balance from the origin user's account, adds it to the destination user's account, and deducts the transaction fee from the origin user's account. On each subtract and add balance operation, we lock based on the user account. By using the user account as the key for our lock, operations on the same account run synchronously, while operations on other accounts can still run concurrently.

Conclusion

The Go Mutex ensures that all processes using the same lock run synchronously, allowing multiple processes to safely access shared resources. When the lock is released by the process holding it, the other processes race to acquire it, and block again once it is taken. While synchronization can be achieved with simple locking and unlocking, this is ineffective for heavy operations that need a more advanced locking mechanism, and it can cause unnecessary bottlenecks.

Using multiple locks with keys, we can give different locks to different operations. By doing so, operations holding different locks can execute simultaneously, while operations using the same lock are still synchronized. We also see from our benchmark results that running heavy operations under locks with different keys gives faster execution times, but the implementation is less effective and slower for smaller operations because of the overhead on each lock operation. It is up to you to decide when to use locks with different keys and when a plain Go Mutex is enough for smaller operations.

If you have any doubts or questions, feel free to leave a comment.

Thank you for reading…