Morgan Tocker (mtocker) wrote,

Atomic operations between keys

I've been hanging out with the cool kids lately, and learning new technologies. One common problem I notice people have is safely emulating atomic operations between keys without transactions, i.e. the classic example:

User Account 1 has $100 in it.
User Account 2 has $100 in it.
Transfer $20 From 1 -> 2.

Now, assume that our database crashes during the balance transfer indicated above. What happens?

The most common way I have seen this emulated is using a third-party 'table' (a.k.a. a journal, or log) and a scheme that could probably be described as a two-phase commit. I see subtle bugs in most implementations of this. I do not want to point any fingers, so I will show two naive examples using my own non-transactional storage engine, MyISAM:

Example 1: Using the third-party table to log "the atomic action"

i.e.


CREATE TABLE accounts (
 id INT NOT NULL primary key auto_increment,
 balance INT NOT NULL
) ENGINE=MyISAM;

CREATE TABLE modifications_log (
 id INT NOT NULL primary key auto_increment,
 account_1 INT NOT NULL,
 account_2 INT NOT NULL,
 operation VARCHAR(20) NOT NULL,
 amount INT NOT NULL,
 is_complete TINYINT
) ENGINE=MyISAM;

# Load initial data
INSERT INTO accounts VALUES (1, 100), (2, 100);

# Statement 1: Transfer $20 from A->B in the journal.
INSERT INTO modifications_log (account_1, account_2, operation, amount, is_complete) VALUES (1, 2, 'MOVE_LEFT_TO_RIGHT', 20, 0);

# Statements 2 & 3: Perform the actual modifications
UPDATE accounts SET balance = balance-20 WHERE id = 1;
UPDATE accounts SET balance = balance+20 WHERE id = 2;

# Statement 4: Mark action as complete
UPDATE modifications_log SET is_complete = 1 WHERE id = [[insert id from statement 1]];

The (worst) problem with this design is that it doesn't actually solve the problem that it was designed to solve. If we have a failure, we can't just look at the modifications_log and see which updates still need to be applied.

Let me explain: If we found an incomplete modification (modifications_log.is_complete=0), we know that statement 4 was not successful, and that statement 1 was successful, but we have *absolutely* no idea whether statements 2 or 3 were successful. It is also unsafe to just re-apply these statements, because they are not idempotent. In simple terms, idempotent means that we can replay a statement over and over and produce the same result.
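
To make that concrete (a sketch, using the accounts schema above): the relative UPDATE from statements 2 & 3 is not idempotent, while an UPDATE that sets an absolute value is:

# NOT idempotent: replaying this after a crash could debit account 1 twice.
UPDATE accounts SET balance = balance-20 WHERE id = 1;

# Idempotent: replaying this any number of times leaves the same balance.
UPDATE accounts SET balance = 80 WHERE id = 1;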

Example 2: Using the third-party table to log resulting values

i.e.
CREATE TABLE accounts (
 id INT NOT NULL primary key auto_increment,
 balance INT NOT NULL
) ENGINE=MyISAM;

CREATE TABLE modifications_log (
 id INT NOT NULL primary key auto_increment,
 account_1_new_value INT NOT NULL,
 account_2_new_value INT NOT NULL,
 is_complete TINYINT
) ENGINE=MyISAM;

# Load initial data
INSERT INTO accounts VALUES (1, 100), (2, 100);

# Statement 1: Fetch what will be the new balance (current less $20)
SELECT balance-20 FROM accounts WHERE id = 1;

# Statement 2:  Fetch what will be the new balance (current plus $20)
SELECT balance+20 FROM accounts WHERE id = 2;

# Statement 3: Transfer from A->B in the modification log.
INSERT INTO modifications_log (account_1_new_value, account_2_new_value, is_complete) VALUES ([[statement 1 result]], [[statement 2 result]], 0);

# Statements 4 & 5: Perform the actual update
UPDATE accounts SET balance = [[statement1 result]] WHERE id = 1;
UPDATE accounts SET balance = [[statement2 result]] WHERE id = 2;

# Statement 6: Mark the whole operation as complete
UPDATE modifications_log SET is_complete = 1 WHERE id = [[insert id from statement 3]];

This solution makes it easy to recover. If statement 3 was successful, but you had a crash before statement 6, you should be able to reapply statements 4 & 5 safely, regardless of whether or not they had previously been run.
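
For example, a crash-recovery pass might look something like this (a sketch; note that a real modifications_log would also need to record which account ids the new values belong to, something I have glossed over here by hard-coding accounts 1 and 2):

# Find any operations that were logged but never marked complete.
SELECT id, account_1_new_value, account_2_new_value
  FROM modifications_log WHERE is_complete = 0;

# For each row found, re-apply statements 4 & 5, which is safe because they set absolute values.
UPDATE accounts SET balance = [[account_1_new_value]] WHERE id = 1;
UPDATE accounts SET balance = [[account_2_new_value]] WHERE id = 2;

# Then mark the operation as complete.
UPDATE modifications_log SET is_complete = 1 WHERE id = [[log row id]];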

However, this design also has its own problems:

1) When you run statements 1 & 2, you need to make sure that nobody else, in any part of the application, can modify the data until you get all the way to statement 6. If you fail to do this, you have a potential race condition. For example, if we try two transfers at once:

# Statement 1 notices 100 is in the table, subtracts 20, returns a result of 80.
# Statement 2 notices 100 is in the table, adds 20, returns a result of 120.
--> [[race condition query]] A second request, to transfer $10 from account 1 to account 2, commences and decides that the new balances should be $90 and $110 respectively.
# Statement 4 runs as it would have, and sets account #1 to $80.

By the time the race condition query is finished, we have either $90 and $110, respectively, or $80 and $120, when really we should have $70 and $130. Eek!

We can prevent the race condition by introducing locking to the equation. However, this opens its own can of worms (noting that even if our database server doesn't support locking, we can normally emulate it as long as it has some sort of atomic compare-and-swap (CAS) operation).
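
To sketch what that CAS-style emulation might look like (the locks table, resource names, and session identifier here are all hypothetical, not part of the examples above):

# Hypothetical lock table: one row per currently-locked resource.
CREATE TABLE locks (
 resource VARCHAR(32) NOT NULL PRIMARY KEY,
 owner VARCHAR(32) NOT NULL,
 acquired_at DATETIME NOT NULL
) ENGINE=MyISAM;

# The INSERT acts as the compare-and-swap: it succeeds only if no row for the
# resource exists yet; a duplicate-key error means someone else holds the lock.
INSERT INTO locks (resource, owner, acquired_at) VALUES ('account-1', 'session-42', NOW());

# Releasing the lock.
DELETE FROM locks WHERE resource = 'account-1' AND owner = 'session-42';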

So onto the first example of locking:

LOCK account1;
# run statement 1
LOCK account2;
# run statement 2
# run statement 3
# run statement 4
# run statement 5
# run statement 6
UNLOCK account1;
UNLOCK account2;

The problem with this locking implementation is that it is also incomplete. What happens if SESSION1 issues a statement 1 that looks like this:

 LOCK account1;
 # run statement 1

And around the same time SESSION2 issues a statement 1 that looks like this:

 LOCK account2;
 # run statement 1

(.. and also assume for the example that for statement 2, both want to acquire a lock on the other record.)

In this case, neither SESSION1 nor SESSION2 will be able to complete, as each will be blocked like a dog chasing its tail. We have ourselves a deadlock: each SESSION holds a resource the other one wants, and neither will release its locks until it has finished all of its work.

What we have to do is work on (a) choosing a lock acquisition order that is less prone to deadlocks, and ideally (b) implementing deadlock detection.

I believe the Dining Philosophers problem tells us that if we number our forks and acquire locks in an agreed-upon order, we should be OK: i.e. order the account ids, always lock the lower-numbered row first, and always acquire all locks before starting any work. (Noting that I say "I believe" above because it makes me nervous talking in absolutes here; I usually leave this problem to smarter people.)
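
In the pseudo-locking syntax used above, both sessions would then do something like this, regardless of which account is the source and which is the destination (a sketch):

# Always lock the lower-numbered account first, whichever way the money is moving.
LOCK account1;
LOCK account2;
# run statements 1 through 6
UNLOCK account2;
UNLOCK account1;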

In terms of deadlock detection, we are lucky in the context above that data is only modified after all locks have been acquired, so we never have to worry about rolling back work. A poor man's method would be to say that any SESSION that has held its locks for more than N seconds is automatically invalidated, and its operation must be retried by the application.
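
Using the hypothetical locks table from the CAS sketch above, the poor man's detection could be as simple as periodically clearing out locks that have been held too long (again a sketch, with an arbitrary N of 30 seconds):

# Any lock held for more than N seconds (here, 30) is considered abandoned and is released;
# the session that held it must notice its lock has vanished and retry the whole operation.
DELETE FROM locks WHERE acquired_at < NOW() - INTERVAL 30 SECOND;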

In summary:

  • I am sure it is possible to get this done right.
  • It is not actually easy to get it done right. Every time you type those magical statements START TRANSACTION and COMMIT, you save yourself a lot of hidden effort.