API Concurrency – How to Prevent User from Requesting API Method Multiple Times in Parallel

api, concurrency, language-agnostic

Let's say we have an app where the users gain points and can exchange them for rewards. The exchange request, in pseudo-code, could look like this:

function exchangePointsForReward(userId, rewardId){
    user = getUser(userId)
    reward = getReward(rewardId)
    if (user.points >= reward.requiredPoints){
        giveRewardToUser(userId, rewardId)
        reduceUserPoints(userId, reward.requiredPoints)
    }
}

But if we have a malicious user, what stops them from crafting a request in their favorite programming language and sending it 20 times in parallel? Before the first request reaches reduceUserPoints(), ten others might already have got as far as giveRewardToUser(). Sure, the user's points will end up deep in the negative, but what stops the user from quickly grabbing the rewards and using them up? How can I ensure that only one such operation can run for a given user at a time?
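The check-then-act race is easy to reproduce. Here is a minimal Python sketch; the in-memory `user` dict and `exchange` function are illustrative stand-ins for the pseudo-code above, with a deliberate delay to widen the race window:

```python
import threading
import time

# In-memory stand-in for the account; names are illustrative.
user = {"points": 100}
rewards_granted = []

def exchange(reward_cost):
    # Check-then-act without any lock: both threads can pass the
    # check before either one deducts the points.
    if user["points"] >= reward_cost:
        time.sleep(0.1)  # widen the race window to make the bug reproducible
        rewards_granted.append(reward_cost)
        user["points"] -= reward_cost

threads = [threading.Thread(target=exchange, args=(100,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()

# The user had 100 points but received the 100-point reward twice,
# ending at -100 points.
print(len(rewards_granted), user["points"])
```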

One solution I can think of is to acquire a lock at the beginning of the operation, so that only a single lockable operation can run for a given user at any moment:

function acquireLock(userId){
    lockId = getRandomLockId()
    // The UPDATE only succeeds if no other operation currently holds the lock
    database.query("UPDATE user SET lock={lockId} WHERE user={userId} AND lock IS NULL");
    return database.query("SELECT lock FROM user WHERE user={userId}").first === lockId;
}

function exchangePointsForReward(userId, rewardId){
    if (!acquireLock(userId)){
        throw new Error("Failed to acquire lock");
    }
    user = getUser(userId)
    reward = getReward(rewardId)
    if (user.points >= reward.requiredPoints){
        giveRewardToUser(userId, rewardId)
        reduceUserPoints(userId, reward.requiredPoints)
    }
    releaseLock(userId);
}
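As a concrete version of the above, here is a runnable Python sketch using SQLite. Table and column names are illustrative, not from the original post. It makes two refinements: the affected-row count of the conditional UPDATE already tells us whether we won the lock (no follow-up SELECT needed), and the lock is released in a finally block so an exception mid-request does not leave the user locked forever:

```python
import sqlite3
import uuid

# Illustrative schema: one user with 100 points, one reward costing 60.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user (id INTEGER PRIMARY KEY, lock_id TEXT, points INTEGER);
    CREATE TABLE reward (id INTEGER PRIMARY KEY, required_points INTEGER);
    INSERT INTO user VALUES (1, NULL, 100);
    INSERT INTO reward VALUES (7, 60);
""")
granted = []

def acquire_lock(user_id):
    """Claim the per-user lock; return a token on success, None otherwise."""
    token = uuid.uuid4().hex
    cur = conn.execute(
        "UPDATE user SET lock_id = ? WHERE id = ? AND lock_id IS NULL",
        (token, user_id))
    # The conditional UPDATE is atomic; its affected-row count tells us
    # whether we won the lock.
    return token if cur.rowcount == 1 else None

def release_lock(user_id, token):
    # Only the current holder's token releases the lock.
    conn.execute(
        "UPDATE user SET lock_id = NULL WHERE id = ? AND lock_id = ?",
        (user_id, token))

def exchange_points_for_reward(user_id, reward_id):
    token = acquire_lock(user_id)
    if token is None:
        raise RuntimeError("Failed to acquire lock")
    try:
        points, = conn.execute(
            "SELECT points FROM user WHERE id = ?", (user_id,)).fetchone()
        cost, = conn.execute(
            "SELECT required_points FROM reward WHERE id = ?",
            (reward_id,)).fetchone()
        if points >= cost:
            granted.append(reward_id)  # stand-in for giveRewardToUser
            conn.execute("UPDATE user SET points = points - ? WHERE id = ?",
                         (cost, user_id))
    finally:
        # Runs even if any step above raised, so a failed request does
        # not leave the user locked out.
        release_lock(user_id, token)

exchange_points_for_reward(1, 7)
print(granted)  # [7]
print(conn.execute("SELECT points, lock_id FROM user WHERE id = 1").fetchone())
```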

But is there any better strategy here? The question is database-agnostic.

Best Answer

I think your locking strategy will work, although I'd suggest looking at a distributed lock manager (e.g., Apache's Helix project) instead of going to the database just to get a lock. This would ensure that users didn't get locked out if your service gets restarted mid-request.
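The restart problem comes from locks that never expire. One way to see the fix is as a lease: the lock carries a TTL, so a holder that crashes simply times out instead of blocking the user forever. Below is a single-process Python sketch of the expiry idea only; a real deployment would keep this state in a shared store (a distributed lock manager, or something like Redis's SET with the NX and EX options), and all names here are illustrative:

```python
import time
import uuid

# user_id -> (token, expires_at); in production this lives in shared storage.
locks = {}
TTL_SECONDS = 0.2

def acquire(user_id):
    now = time.monotonic()
    held = locks.get(user_id)
    if held is not None and held[1] > now:
        return None  # someone else holds a live lease
    # Either no lease exists or the previous one expired: claim it.
    token = uuid.uuid4().hex
    locks[user_id] = (token, now + TTL_SECONDS)
    return token

def release(user_id, token):
    # Only the holder's own token releases the lease early.
    held = locks.get(user_id)
    if held is not None and held[0] == token:
        del locks[user_id]

t = acquire(1)
print(t is not None)           # lease granted
print(acquire(1) is not None)  # rejected while the lease is live
time.sleep(0.25)
print(acquire(1) is not None)  # granted again: the stale lease expired
```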

Another possibility would be to bucket the user IDs, routing each bucket to a specific request handler; this serializes the parallel requests for a given user, but it is probably more complicated and could create processing bottlenecks (at least until you got your configuration tuned).
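The bucketing idea can be sketched as hashing each user ID onto one of N worker queues, so that all requests for a given user land on the same single-threaded worker and are processed in order, with no lock at all. A minimal in-process Python illustration (worker count, queue layout, and the no-op handler are all assumptions for the sketch):

```python
import queue
import threading

N_WORKERS = 4
queues = [queue.Queue() for _ in range(N_WORKERS)]
results = []
results_lock = threading.Lock()

def worker(q):
    # Each worker drains its own queue serially, so two requests for the
    # same user can never run concurrently.
    while True:
        item = q.get()
        if item is None:  # shutdown sentinel
            break
        user_id, action = item
        with results_lock:
            results.append((user_id, action))  # stand-in for the real handler

def submit(user_id, action):
    # The same user always hashes to the same queue, hence the same worker.
    queues[hash(user_id) % N_WORKERS].put((user_id, action))

threads = [threading.Thread(target=worker, args=(q,)) for q in queues]
for t in threads: t.start()

for i in range(10):
    submit(user_id=42, action=i)  # ten parallel-looking requests for one user
for q in queues: q.put(None)
for t in threads: t.join()

# All ten requests for user 42 were handled by one worker, in order.
print([a for (_, a) in results])
```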