Concurrency Problems

In a multiuser environment, several users may access the same data at the same time to perform different actions, and this can lead to various concurrency problems.

In the following sections, you will learn about each potential problem; later in this chapter, you will see how to use the right isolation level to solve each concurrency problem and how SQL Server uses locks to support these isolation levels.

Every concurrency problem is illustrated with a figure. Every figure shows two connections, called Connection A and Connection B. These two connections can be made from the same computer or from different computers, and by the same user or by different users. SQL Server has to solve concurrency problems between connections, regardless of which user or client computer is used to establish these connections.

Lost Updates

You can experience a lost update whenever two connections modify the same data in sequence, because SQL Server keeps only the last successful update. Consider the example in Figure 13.1:

Figure 13.1. Updating data from two connections can produce lost updates.


  1. Connection A retrieves the UnitPrice of Product 25 in the variable @UP for later use.

  2. Connection B retrieves the UnitPrice of Product 25 in the variable @UP for later use.

  3. Connection A updates the price of Product 25, increasing the price 20% over the price saved in the variable @UP. The change is permanent.

  4. Connection B updates the price of Product 25, increasing the price 20% over the price saved in the variable @UP. Connection B is unaware that Connection A changed the price in step 3. The new price is applied permanently, overwriting the price set by Connection A in step 3.

  5. Connection A checks the new price, compares it with the price it calculated, and finds that they differ. The update from Connection A is lost.

  6. Connection B checks the new price, and it matches the value it intended to compute.
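As a sketch (assuming the Northwind Products table used throughout this chapter), the read-then-write pattern each connection runs might look like this:

```sql
-- Run separately from Connection A and Connection B.
-- The gap between the SELECT and the UPDATE is where the
-- other connection's update can be lost.
USE Northwind
GO

DECLARE @UP money

-- Steps 1 and 2: each connection reads the current price
SELECT @UP = UnitPrice
FROM Products
WHERE ProductID = 25

-- Steps 3 and 4: each connection writes a price computed
-- from the possibly stale value saved in @UP
UPDATE Products
SET UnitPrice = @UP * 1.2
WHERE ProductID = 25
```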

This problem can be prevented by writing atomic UPDATE statements in both Connection A and Connection B. An atomic UPDATE contains the reading operation and the writing operation in a single statement, as in Listing 13.10.

Code Listing 13.10. Write Atomic UPDATE Statements to Prevent Lost Updates
						
USE Northwind
GO

UPDATE Products
SET UnitPrice = UnitPrice * 1.2
WHERE ProductID = 25
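When the read and the write cannot be combined into a single statement, another option (a sketch, not part of Listing 13.10) is to hold an update lock across both operations by using the UPDLOCK hint inside a transaction:

```sql
USE Northwind
GO

BEGIN TRAN

DECLARE @UP money

-- UPDLOCK keeps an update lock on the row until COMMIT,
-- so no other connection can change the price in between
SELECT @UP = UnitPrice
FROM Products (UPDLOCK)
WHERE ProductID = 25

UPDATE Products
SET UnitPrice = @UP * 1.2
WHERE ProductID = 25

COMMIT TRAN
```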

Uncommitted Dependency (Dirty Read)

Reading data without using locks can produce unexpected results. Other connections can modify data temporarily, and you might use that provisional data in your calculations, leading you to incorrect results.

In this case, you could say that you are reading dirty data, because the data has not been committed yet. In other words, your calculations depend on uncommitted data.

Figure 13.2 shows an example of this situation:

Figure 13.2. Reading data without using locks can produce dirty reads.


  1. Connection A starts a new transaction to update product prices. Inside the transaction, Connection A increases product prices by 20%. The transaction is not committed yet, so these changes are provisional.

  2. Connection B decides to work reading uncommitted data. Connection B retrieves the price for product 25. This price is dirty, because Connection A has changed this value temporarily to be 20% higher than before.

  3. Connection A changes its mind and decides to roll back the changes on product prices. In this case, product 25 recovers its original price.

  4. Connection B, unaware of the changes made by Connection A in step 3, creates a new Order and a new Order Detail for product 25, using the price incorrectly retrieved in step 2. These insertions in the database are permanent.
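The sequence above can be sketched as follows; the READ UNCOMMITTED setting in Connection B is what makes the dirty read possible:

```sql
-- Connection A: provisional change, later rolled back (steps 1 and 3)
BEGIN TRAN

UPDATE Products
SET UnitPrice = UnitPrice * 1.2

-- ... time passes ...
ROLLBACK TRAN

-- Connection B: opts in to reading uncommitted data (step 2)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

SELECT UnitPrice       -- returns the provisional, dirty price
FROM Products
WHERE ProductID = 25
```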

SQL Server 2000 prevents this problem automatically by using READ COMMITTED as the default isolation level. In this way, other connections cannot see changes before they are committed. Listing 13.11 shows how to declare the same operation in Connection B as READ COMMITTED, using an atomic INSERT instead of a SELECT followed by an INSERT. In this case, Connection B must wait for Connection A to release its locks before proceeding with the INSERT operation.

Code Listing 13.11. Do Not Use the READ UNCOMMITTED Isolation Level If You Want to Prevent Dirty Reads
						
USE Northwind
GO

SET TRANSACTION ISOLATION LEVEL READ COMMITTED

DECLARE @ID int

INSERT Orders
(CustomerID, OrderDate)
VALUES ('BSBEV', '2000-11-25')

SET @ID = @@IDENTITY

INSERT [Order Details]
(OrderID, ProductID, UnitPrice, Quantity, Discount)
SELECT @ID, ProductID, UnitPrice, 10, 0.1
FROM Products
WHERE ProductID = 25

Note

Later in this chapter, you will learn about the READ UNCOMMITTED and READ COMMITTED isolation levels.


Inconsistent Analysis (Nonrepeatable Read)

In a multiuser environment, other users dynamically modify the data stored in a database.

Trying to execute a long-running process, such as a monthly report, can produce inconsistencies because of changes made to the database between the beginning of the report and its end. This can be considered inconsistent analysis, because every time you read the data, it is different. This situation is called a nonrepeatable read.

Suppose that, to produce a long-running report, you first retrieve the number of orders from the Order Details table, because you are interested only in orders that have order details. Later in the report, you calculate the average total price per order, using the count retrieved previously.

Figure 13.3 shows an example of this situation:

Figure 13.3. This is an example of inconsistent analysis.


  1. Connection A retrieves the total number of orders with at least one row in the Order Details table, and stores this value in the @Count variable.

  2. Connection B updates the Order Details table changing the quantity of the Product 20 in the Order 10272.

  3. Connection A, assuming that the total number of orders has not changed, calculates the total price of the complete Order Details table and stores this value in the @Total variable. This total includes the change made by Connection B in step 2.

  4. Connection B updates the Order Details table again, changing the discount to Product 20.

  5. Connection A is unaware of the discount changes that Connection B applied to Product 20, so it does not know that the content of the variable @Total is invalid. Connection A calculates the average price per order in two ways, and each way produces a different result.
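As a sketch of the REPEATABLE READ approach (the variable names are illustrative, not taken from the figure's exact script), Connection A could protect both reads like this:

```sql
USE Northwind
GO

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ

BEGIN TRAN

DECLARE @Count int, @Total money

-- Step 1: shared locks taken here are held until COMMIT
SELECT @Count = COUNT(DISTINCT OrderID)
FROM [Order Details]

-- Step 3: updates from other connections are blocked, so this
-- total is consistent with the count retrieved above
SELECT @Total = SUM(UnitPrice * Quantity * (1 - Discount))
FROM [Order Details]

SELECT @Total / @Count AS 'AVG per order'

COMMIT TRAN
```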

This problem can be prevented by minimizing repeated reads and converting them into atomic operations, or by using the REPEATABLE READ isolation level. With the REPEATABLE READ isolation level, the retrieved data is locked until the transaction terminates, preventing changes to that data from other connections.

Listing 13.12 shows that the average price calculated directly or indirectly produces the same results in an atomic SELECT statement.

Code Listing 13.12. Use Atomic SELECT Statements to Prevent Nonrepeatable Reads
						
USE Northwind
GO

SELECT AVG(TotalPrice) AS 'Actual AVG',
       SUM(TotalPrice) / COUNT(OrderID) AS 'Calc AVG'
FROM (SELECT OrderID,
             SUM(UnitPrice * Quantity * (1 - Discount)) AS TotalPrice
      FROM [Order Details]
      GROUP BY OrderID) AS TotOrders

Note

Later in this chapter, you will learn about the REPEATABLE READ isolation level.


Phantom Reads

The preceding section covered the concurrency problem caused by updates to data you have previously read. If other connections insert data in the range of data you are analyzing, you can find that those new rows appear in your result sets for no apparent reason, from your connection's point of view. These new rows are called phantoms.

This problem produces inconsistent analysis as well, because your previous totals are no longer valid after the insertion of new rows.

Figure 13.4 shows a very simple example of this problem:

Figure 13.4. Insertions from other connections can appear in your results as phantom rows.


  1. Connection A retrieves the list of orders that include Product 25, producing a result set with 6 rows: orders 10259, 10337, 10408, 10523, 10847, and 10966.

  2. Connection B inserts a new order detail in Order 10615 for Product 25.

  3. Connection A resends the same statement as in step 1, but it retrieves a new row, corresponding to Order 10615. Connection A was unaware of this insertion, so it considers this row a phantom.
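The sequence above can be prevented if Connection A runs both queries inside a transaction under the SERIALIZABLE isolation level (a sketch):

```sql
USE Northwind
GO

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

BEGIN TRAN

-- First read: locks the range of rows with ProductID = 25,
-- including the "gaps" where new rows would be inserted
SELECT OrderID
FROM [Order Details]
WHERE ProductID = 25

-- Connection B's INSERT for Product 25 is now blocked, so
-- repeating the query returns the same rows: no phantoms
SELECT OrderID
FROM [Order Details]
WHERE ProductID = 25

COMMIT TRAN
```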

Caution

Phantoms are not necessarily something to avoid per se, unless they produce inconsistent analysis as well, which is the usual case.


Preventing this problem is more difficult than in the previous cases, because SQL Server must lock rows that do not exist yet.

You can use the SERIALIZABLE isolation level, or the SERIALIZABLE optimizer hint, to prevent phantom reads. Listing 13.13 shows how to use the SERIALIZABLE optimizer hint.

Code Listing 13.13. Using SERIALIZABLE Optimizer Hint Inside a Transaction to Prevent Phantom Reads
						
USE Northwind
GO

BEGIN TRAN

SELECT OrderID,
       ProductID, UnitPrice
FROM [Order Details] (SERIALIZABLE)
WHERE ProductID = 37

-- The range locks taken by the hint are held until the
-- transaction ends, so do any dependent work before COMMIT
COMMIT TRAN

Note

Later in this chapter, you will learn how SQL Server implements the SERIALIZABLE isolation level.

