Chapter 13. Transactions and Concurrency

Let's say a buyer logs in to the bookshop application and purchases a book. The following actions should take place in the event of purchase:

  • Charge the buyer the cost of the book

  • Reduce the stock of the book

If the charge on the credit card fails, the stock shouldn't be reduced. Also, when the book is out of stock, the buyer shouldn't be charged. That means either both actions should be successfully completed, or they should have no effect. These actions collectively are called a transaction or unit of work. In essence, transactions provide an all-or-nothing proposition.
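To make this concrete, here is a minimal sketch of the purchase as a single unit of work, using the Hibernate Transaction API that this chapter introduces. The Account entity and the balance/stock accessors are hypothetical and exist only for illustration; the point is the begin/commit/rollback structure.

Session session = sessionFactory.openSession();
Transaction tx = null;
try {
    tx = session.beginTransaction();

    // Both steps run inside one unit of work.
    Account buyer = (Account) session.get(Account.class, buyerId);
    Book book = (Book) session.get(Book.class, isbn);

    buyer.setBalance(buyer.getBalance() - book.getPrice()); // charge the buyer
    book.setStock(book.getStock() - 1);                     // reduce the stock

    tx.commit();   // both updates become permanent together
} catch (RuntimeException e) {
    if (tx != null) {
        tx.rollback(); // neither update takes effect
    }
    throw e;
} finally {
    session.close();
}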

The concept of transactions is inherited from database management systems. By definition, a transaction must be atomic, consistent, isolated, and durable (ACID):

  • Atomicity means that if one step fails, then the whole unit of work fails.

  • Consistency means that the transaction works on a set of data that is consistent before and after the transaction. The data must be clean after the transaction. From a database perspective, the clean and consistent state is maintained by integrity constraints. From an applications perspective, the consistent state is maintained by the business rules.

  • Isolation means that the changes made by one transaction aren't visible to other transactions until the transaction completes. Isolation ensures that the execution of one transaction doesn't affect the execution of other transactions.

  • Durability means that when data has been persisted, it isn't lost.

In an application, you need to identify when a transaction begins and when it ends. The starting point and ending point of a transaction are called transaction boundaries, and the technique of identifying them in an application is called transaction demarcation. You can set the transaction boundaries either programmatically or declaratively. This chapter shows you how.

Multiuser systems like databases need to implement concurrency control to maintain data integrity. There are two main approaches to concurrency control:

  • Optimistic: Involves some kind of versioning to achieve control

  • Pessimistic: Uses a locking mechanism to obtain control

    Some of the methods that are used in concurrency control are as follows:

  • Two-phase locking: This is a locking mechanism that uses two distinct phases to achieve concurrency control. In the first phase of transaction execution, called the expanding phase, locks are acquired and none are released. In the second phase, called the shrinking phase, locks are released and no new locks are acquired. This guarantees serializability: the transactions are serialized in the order in which the locks are acquired. Strict two-phase locking is a variant in which all write locks are released only at the end (after committing or aborting), while read locks are released regularly during phase 2. With both of these locking mechanisms, a deadlock is possible, but it can be avoided if you maintain a canonical order for obtaining locks: if two processes need locks on A and B, they both request A first and then B (see the Java sketch after this list).

  • Serialization: A transaction schedule is a sequential representation of two or more concurrent transactions. A schedule has the property of serializability if its overall effect is the same as executing the transactions one after the other: two transactions that update the same record are effectively executed sequentially and don't overlap in time.

  • Time-stamp-based control: This is a nonlocking mechanism for concurrency control. Every transaction is given a timestamp when it starts. Every object or record in the database also has a read timestamp and a write timestamp. These three timestamps are used to define the isolation rules that are used in this type of concurrency control.

  • Multiversion concurrency control: When a transaction reads from the database, a snapshot is created, and the data is read from that snapshot. This isolates the data from other concurrent transactions. When the transaction modifies a record, the database creates a new record version instead of overwriting the old record. This mechanism gives good performance because lock contention between concurrent transactions is minimized: reads never block writes, and writes never block reads. Most current databases, such as Oracle, MySQL, SQL Server, and PostgreSQL, implement multiversion concurrency control.
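The following short Java sketch makes the canonical-order rule from the two-phase-locking bullet concrete. It isn't Hibernate-specific, and the class and method names are invented for illustration: two resources A and B are always locked in the same order, so no cycle of waiting threads can form.

import java.util.concurrent.locks.ReentrantLock;

public class CanonicalLockOrder {
    private static final ReentrantLock lockA = new ReentrantLock();
    private static final ReentrantLock lockB = new ReentrantLock();

    // Every thread that needs both A and B acquires them in the same order:
    // first A, then B. Threads may still wait for each other briefly, but no
    // thread can hold one of the locks while waiting for the other in a cycle,
    // so the classic deadlock cannot occur.
    public static void updateBoth(Runnable work) {
        lockA.lock();
        try {
            lockB.lock();
            try {
                work.run();
            } finally {
                lockB.unlock();
            }
        } finally {
            lockA.unlock();
        }
    }
}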

This chapter shows how Hibernate implements the optimistic and pessimistic concurrency approaches.

Using Programmatic Transactions in a Standalone Java Application

Problem

If you're working on a standalone Java application, how do you achieve transaction demarcation?

Solution

In a multiuser application, you need more than one connection to support multiple concurrent users. It's also expensive to create a new connection every time you need to interact with the database. For this, you need a connection pool that creates and manages database connections. Every thread that needs to interact with the database requests a connection from the pool and executes its queries. After it's done, the connection is returned to the pool. Then, if another thread requests a connection, the connection pool may provide it with the same connection.

Usually, an application server provides a connection pool. When you're working in a standalone Java application, you need a third-party solution that provides connection pooling. Hibernate comes with an open source connection-pooling framework called C3P0. Apache also provides a connection-pooling framework called Commons DBCP. You need to configure Hibernate to use one of these frameworks. After the connection pool is configured, you can use Hibernate's Transaction API for transaction demarcation and use connections from the pool.

How It Works

You need to add the c3p0-0.9.1.jar file that ships with Hibernate to the build path. In your Eclipse IDE, select your Java project and right-click to edit the build path. (See the explanation in Chapter 1 if you have trouble adding the jar to the build path.) In the hibernate.cfg.xml file, add the following configuration:

<property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider
</property>
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">10</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">50</property>
<property name="hibernate.c3p0.acquire_increment">1</property>
<property name="hibernate.c3p0.idle_test_period">3000</property>

Here's an explanation for each of the parameters you set:

  • connection.provider_class specifies that C3P0ConnectionProvider is the class providing the connections.

  • min_size is the minimum number of connections that are ready at all times.

  • max_size is the maximum number of connections in the pool. This is the only property that is required for C3P0 to be enabled.

  • timeout is the maximum idle time for a connection, after which the connection is removed from the pool.

  • max_statements is the maximum number of prepared statements that can be cached.

  • idle_test_period is the interval, in seconds, at which idle connections in the pool are automatically validated.

  • acquire_increment is the number of connections acquired when the pool is exhausted.

These are all the settings required in the configuration file. Hibernate also provides a property called hibernate.transaction.factory_class that you can set in the configuration file. It provides the factory to use to instantiate transactions, and it defaults to JDBCTransactionFactory:

SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();
Session session = sessionFactory.openSession();
Transaction tx = null;
try {
    tx = session.beginTransaction();
    BookCh2 book = new BookCh2();
    book.setName("Hibernate Recipes Book");
    book.setPrice(200);
    book.setPublishDate(new Date());
    session.saveOrUpdate(book);
    tx.commit();
} catch (RuntimeException e) {
    try {
        if (tx != null) {
            tx.rollback();
        }
    } catch (RuntimeException ex) {
        log.error("Cannot rollback transaction");
    }
    throw e;
} finally {
    session.close();
}

The session is provided by the sessionFactory.openSession() call. A database connection isn't opened when the session is created; this keeps session creation inexpensive. The database connection is retrieved from the connection pool when session.beginTransaction() is called, and all queries are executed using this connection. The entities to be persisted are cached in the persistent context of the session. The commit() on the Transaction flushes the persistent context and completes the save of the entities. Another Transaction can be opened after the current Transaction is committed; then, a new connection is provided by the connection pool. All resources and connections are released when close() is called on the Session.

The exceptions thrown by Hibernate are RuntimeExceptions and subtypes of RuntimeExceptions. These exceptions are fatal, and hence you have to roll back your transactions. Because rolling back a transaction can also throw an exception, you have to call the rollback method within a try/catch block.

In JPA, you have to define the same configurations as for Hibernate, and the EntityTransaction API is used to manage transactions:

EntityManager manager = null;
EntityTransaction tx = null;
try {
    EntityManagerFactory managerFactory = Persistence.createEntityManagerFactory("book");
    manager = managerFactory.createEntityManager();
    tx = manager.getTransaction();
    tx.begin();
    BookCh2 newBook = new BookCh2();
    newBook.setBookName("Hibernate Recipes Phase1");
    newBook.setPublishDate(new Date());
    newBook.setPrice(new Long(50));
    manager.persist(newBook);
    tx.commit();
    log.debug("Transaction committed");
} catch (RuntimeException e) {
    try {
        if (tx != null) {
            tx.rollback();
        }
    } catch (RuntimeException ex) {
        log.error("Cannot rollback transaction");
    }
    throw e;
} finally {
    if (manager != null) {  // the factory or manager creation may have failed
        manager.close();
    }
}

getTransaction() is called on the EntityManager to get a transaction. The actual connection is requested from the pool when the transaction's begin() method is called. The commit() on the transaction flushes the persistent context and completes the save of the entities. The EntityManager's close() method is called in the finally block to make sure the EntityManager is closed even in the case of an exception.
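For the JPA variant, the pool settings go into the persistence unit named "book" that the code above refers to. The following persistence.xml is only a sketch: the Derby URL and credentials reuse the data-source values shown later in this chapter, and the provider class assumes the Hibernate 3 EntityManager; adjust these values to your environment.

<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
    <persistence-unit name="book" transaction-type="RESOURCE_LOCAL">
        <provider>org.hibernate.ejb.HibernatePersistence</provider>
        <class>com.hibernaterecipes.chapter2.BookCh2</class>
        <properties>
            <property name="hibernate.dialect" value="org.hibernate.dialect.DerbyDialect"/>
            <property name="hibernate.connection.driver_class" value="org.apache.derby.jdbc.ClientDriver"/>
            <property name="hibernate.connection.url" value="jdbc:derby://localhost:1527/BookShopDB"/>
            <property name="hibernate.connection.username" value="book"/>
            <property name="hibernate.connection.password" value="book"/>
            <property name="hibernate.connection.provider_class" value="org.hibernate.connection.C3P0ConnectionProvider"/>
            <property name="hibernate.c3p0.min_size" value="5"/>
            <property name="hibernate.c3p0.max_size" value="10"/>
            <property name="hibernate.c3p0.timeout" value="300"/>
            <property name="hibernate.c3p0.max_statements" value="50"/>
        </properties>
    </persistence-unit>
</persistence>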

Using Programmatic Transactions with JTA

Problem

Suppose you're using an application server that provides support to manage resources. Application servers like WebLogic, WebSphere, and JBoss can provide connections from a connection pool and manage transactions. How do you achieve programmatic transaction demarcation using the Java Transaction API (JTA)?

Solution

You have to configure a Java enterprise application with Hibernate as the persistence framework. You also need to configure Hibernate to use JTATransactionFactory so that it creates JTA-aware Transaction objects. You then use the UserTransaction interface to manage transactions programmatically.

How It Works

You use WebLogic (version 9.2) as the application server to demonstrate using JTA for programmatic transaction demarcation. The assumption is that you're well versed in building and deploying applications on it.

To begin, you need to configure the data source on the application server. To do so, you need to start the WebLogic server, log in to the console, and configure the data source. You must also add the database driver to the server library. For Derby, you add derbyClient.jar to the server library. Make sure you test connectivity from the console!

You need to provide the following information (with your values):

  • JNDI Name: local_derby. The application uses this to obtain database connections.

  • Database Name: BookShopDB

  • Host Name: localhost. If you're pointing to a remote database, then this value is something like the IP of that machine.

  • Port: 1527.

  • User: book.

  • Password: book.

After you successfully configure the data source on the application server, you need to configure Hibernate. The key properties you're required to provide are as follows:

  • hibernate.connection.datasource: The value of this property must be the same as the JNDI name of the data source you configured on the application server.

  • hibernate.transaction.factory_class: The default value is JDBCTransactionFactory. But because you want to use JTA transactions for transaction demarcation, you need to set this to JTATransactionFactory.

  • hibernate.transaction.manager_lookup_class: The value of this property depends on the application server. Each application server has a different JTA implementation, so you have to tell Hibernate which one to use. Hibernate supports most major application server implementations; for WebLogic, you use org.hibernate.transaction.WeblogicTransactionManagerLookup.

The following is the Hibernate configuration:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
        "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory name="book">
        <property name="hibernate.connection.datasource">local_derby</property>
        <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</property>
        <property name="hibernate.transaction.manager_lookup_class">org.hibernate.transaction.WeblogicTransactionManagerLookup</property>
        <property name="hibernate.dialect">org.hibernate.dialect.DerbyDialect</property>
        <property name="hibernate.show_sql">true</property>
        <property name="hibernate.cache.use_second_level_cache">false</property>
        <mapping resource="book.xml" />
    </session-factory>
</hibernate-configuration>

You now need to write a class that provides the SessionFactory for the complete application. To do so, you define a utility class with a static method that returns the SessionFactory. Here's the code implementation:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateUtil {

    // Build the SessionFactory once and reuse it for the whole application.
    private static SessionFactory factory;

    private HibernateUtil() {}

    public static synchronized SessionFactory getSessionFactory() {
        if (factory == null) {
            try {
                factory = new Configuration().configure().buildSessionFactory();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        return factory;
    }
}

Now you're ready to use JTA for transaction demarcation. You get the UserTransaction by looking up the JNDI registry:

public void saveBook(Book book) throws NotSupportedException, SystemException, NamingException, Exception {
    System.out.println("Enter DAO Impl");
    Session session = null;
    UserTransaction tx = (UserTransaction) new InitialContext()
            .lookup("java:comp/UserTransaction");
    try {
        SessionFactory factory = HibernateUtil.getSessionFactory();
        tx.begin();
        session = factory.openSession();
        session.saveOrUpdate(book);
        session.flush();
        tx.commit();
    } catch (RuntimeException e) {
        try {
            tx.rollback();
        } catch (RuntimeException ex) {
            System.out.println("**** RuntimeException in BookDaoImpl ");
        }
        throw e;
    } finally {
        if (session != null) {
            session.close();
        }
    }
}

Note that you explicitly call session.flush(). You need an explicit flush because, with this setup, Hibernate doesn't flush the session automatically before the JTA transaction completes. You can, however, override this default by setting the hibernate.transaction.flush_before_completion property. You can also set the hibernate.transaction.auto_close_session property to avoid calling session.close() explicitly in every method. The Hibernate configuration file is as follows:

<session-factory name="book">
    <property name="hibernate.connection.datasource">local_derby</property>
    <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</property>
    <property name="hibernate.transaction.manager_lookup_class">org.hibernate.transaction.WeblogicTransactionManagerLookup</property>
    <property name="hibernate.transaction.flush_before_completion">true</property>
    <property name="hibernate.transaction.auto_close_session">true</property>
    <property name="hibernate.dialect">org.hibernate.dialect.DerbyDialect</property>
    <property name="hibernate.show_sql">true</property>
    <property name="hibernate.cache.use_second_level_cache">false</property>

    <!--<property name="hbm2ddl.auto">create</property>-->

    <mapping resource="book.xml" />
</session-factory>

And the code looks a little simpler, as shown here:

Session session = null;
UserTransaction tx = (UserTransaction) new InitialContext()
        .lookup("java:comp/UserTransaction");
try {
    SessionFactory factory = HibernateUtil.getSessionFactory();
    tx.begin();
    session = factory.openSession();
    session.saveOrUpdate(book);
    tx.commit();
} catch (RuntimeException e) {
    try {
        tx.rollback();
    } catch (RuntimeException ex) {
        System.out.println("**** RuntimeException in BookDaoImpl ");
    }
    throw e;
}

In JPA, the implementation is very similar to Hibernate:

EntityManager manager = null;
UserTransaction tx = (UserTransaction) new InitialContext()
        .lookup("java:comp/UserTransaction");
try {
    EntityManagerFactory managerFactory = HibernateUtil.getFactory();
    manager = managerFactory.createEntityManager();
    tx.begin();
    manager.persist(book);
    tx.commit();
} catch (RuntimeException e) {
    try {
        if (tx != null) {
            tx.rollback();
        }
    } catch (RuntimeException ex) {
        System.out.println("Cannot rollback transaction");
    }
    throw e;
} finally {
    if (manager != null) {  // creation of the EntityManager may have failed
        manager.close();
    }
}

Enabling Optimistic Concurrency Control

Problem

Suppose two transactions are trying to update the same record in the database. The first transaction updates the record and commits successfully. The second transaction then fails and is rolled back, restoring the record to the state it had before either transaction ran, so the first transaction's update is lost. Or, what if the second transaction updates and commits successfully, as illustrated in Table 13-1? The changes made by the first transaction are overwritten.

Table 13-1. Lost Updates: Transaction 1 Updates Are Lost

Time    Transaction Action
T1      Transaction 1 begins.
T2      Transaction 2 begins.
T3      Transaction 1 updates record R1.
T4      Transaction 2 updates record R1.
T5      Transaction 1 commits.
T6      Transaction 2 commits.

How do you handle such cases of lost updates? And how do you enable versioning in Hibernate?

Solution

You need to understand isolation levels in order to choose a concurrency control mechanism. Access to database records is classified as reads and writes. The concurrency control mechanisms define the rules that dictate when to allow reads and writes.

A dirty read occurs when one transaction reads changes made by another transaction that haven't yet been committed (see Table 13-2). Basically, a dirty read means reading uncommitted data.

Table 13-2. Dirty Read: A Transaction Reading Uncommitted Data

Time    Transaction Action
T1      Transaction 1 begins.
T2      Transaction 2 begins.
T3      Transaction 1 updates record R1.
T4      Transaction 2 reads uncommitted record R1.
T5      Transaction 1 rolls back its update.
T6      Transaction 2 commits.

An unrepeatable read occurs when a transaction reads a record twice and the record state is different between the first and the second read. This happens when another transaction updates the state of the record between the two reads (see Table 13-3).

Table 13-3. Unrepeatable Read: A Transaction Reading a Record Twice

Time    Transaction Action
T1      Transaction 1 begins.
T2      Transaction 1 reads record R1.
T3      Transaction 2 begins.
T4      Transaction 2 updates record R1.
T5      Transaction 2 commits.
T6      Transaction 1 reads record R1 (R1 is now in a different state than it was at time T2).
T7      Transaction 1 commits.

A phantom read occurs when a transaction executes two identical queries, and the collection of rows returned by the second query differs from the first. This happens when another transaction inserts records into, or deletes records from, the table between the two reads (see Table 13-4).

Table 13-4. Phantom Read: Reading a Range of Data That Changes in Size During a Transaction

Time    Transaction Action
T1      Transaction 1 begins.
T2      Transaction 1 reads a range of records RG1.
T3      Transaction 2 begins.
T4      Transaction 2 inserts records.
T5      Transaction 2 commits.
T6      Transaction 1 reads the range of records RG1 (RG1's size has changed between times T2 and T6).
T7      Transaction 1 commits.

Isolation defines how and when changes made by one transaction are made visible to other transactions. Isolation is one of the ACID properties. For better performance and concurrency control, isolation is divided by the ANSI SQL standard into levels that define the degree of locking when you select data. The four isolation levels are as follows (see also Table 13-5):

  • Serializable: Transactions are executed serially, one after the other. This isolation level allows a transaction to acquire read locks or write locks for the entire range of data that it affects. The Serializable isolation level prevents dirty reads, unrepeatable reads, and phantom reads, but it can cause scalability issues for an application.

  • Repeatable Read: Read locks and write locks are acquired. This isolation level doesn't permit dirty reads or unrepeatable reads. It doesn't, however, acquire range locks, which means it permits phantom reads. A read lock prevents other concurrent transactions from acquiring write locks. This level can still have some scalability issues.

  • Read Committed: Read locks are acquired and released immediately, and write locks are acquired and held until the end of the transaction. Dirty reads aren't allowed at this isolation level, but unrepeatable reads and phantom reads are permitted. By combining the persistent context with versioning, you can achieve the equivalent of the Repeatable Read isolation level.

  • Read Uncommitted: Changes made by one transaction are made visible to other transactions before they're committed. All types of reads, including dirty reads, are permitted. This isolation level isn't recommended for use. If a transaction's uncommitted changes are rolled back, other concurrent transactions may be seriously affected.

Table 13-5. Summarizing the Reads That Are Permitted for Various Isolation Levels

Isolation Level     Dirty Read    Unrepeatable Read    Phantom Read
Serializable        -             -                    -
Repeatable Read     -             -                    Permitted
Read Committed      -             Permitted            Permitted
Read Uncommitted    Permitted     Permitted            Permitted

Every database management system has a default setting for the isolation level, which you can change in the DBMS configuration. For JDBC connections, you can set the isolation level through the Hibernate property hibernate.connection.isolation. Hibernate uses the following values to select a particular isolation level (see the configuration snippet after this list):

  • 8: Serializable isolation

  • 4: Repeatable Read isolation

  • 2: Read Committed isolation

  • 1: Read Uncommitted isolation
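For example, to request Read Committed isolation for the pooled JDBC connections configured earlier, you can add the corresponding value to hibernate.cfg.xml. This is a minimal sketch; the value 2 matches the java.sql.Connection.TRANSACTION_READ_COMMITTED constant:

<property name="hibernate.connection.isolation">2</property>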

This setting is applicable only when the connection isn't obtained from an application server. If the connections come from an application server, you need to change the isolation level in the application server's configuration instead.

Now, let's come back to the case of lost updates described at the beginning of this recipe (and in Table 13-1). You've seen a case where an update made by transaction 1 is lost when transaction 2 commits. Most applications use database connections with Read Committed isolation and rely on optimistic concurrency control. One way of implementing optimistic control is to use versioning.

How It Works

Hibernate provides automatic versioning. Each entity can have a version, which can be a number or a timestamp. Hibernate increments the number when the entity is modified. If you're saving with an older version (which is the case for transaction 2 in the lost-update problem), Hibernate compares the versions automatically and throws an exception.
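The following sketch shows what the lost-update scenario from Table 13-1 looks like once versioning is enabled: the second, stale update fails at commit time. The variable staleBook stands for a detached copy of the book that was loaded before the other transaction committed, and the exception type shown (org.hibernate.StaleObjectStateException) is the one Hibernate 3 typically throws on a version mismatch; if in doubt, catch the broader HibernateException.

// staleBook is a detached BookCh2 holding an old version value;
// another transaction has since updated and committed the same row.
Session session = sessionFactory.openSession();
Transaction tx = null;
try {
    tx = session.beginTransaction();
    session.update(staleBook);   // reattach the detached, stale copy
    tx.commit();                 // version check fails here
} catch (org.hibernate.StaleObjectStateException e) {
    if (tx != null) {
        tx.rollback();
    }
    // Typical recovery: reload the current state and ask the user to retry.
} finally {
    session.close();
}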

To add a version number to an entity, you add a property called version of type int or Integer:

public class BookCh2 {
        private long isbn;
        private String name;
        private Date publishDate;
        private int price;
        private int version;
}

In the book.xml configuration file, you add the version element. Note that the version element must be placed immediately after the id element:

<hibernate-mapping package="com.hibernaterecipes.chapter2" auto-import="false" >
        <import class="BookCh2" rename="bkch2"/>
        <class name="BookCh2" table="BOOK" dynamic-insert="true" dynamic-update="true" schema="BOOK">
                <id name="isbn"  column="isbn" type="long">
                        <generator class="hilo">
                        </generator>
                </id>
                <version name="version" access="field" column="version"></version>
                <property name="name" type="string" column="BOOK_NAME" />
                <property name="publishDate" type="date" column="PUBLISH_DATE" />
                <property name="price" type="int" column="PRICE" />
        </class>
</hibernate-mapping>

A new column called version is created in the BOOK table. Using JPA annotations, you add the version variable to the Book class and annotate it with @Version:

@Entity (name="bkch2")
@org.hibernate.annotations.Entity(dynamicInsert = true, dynamicUpdate = true)
@Table        (name="BOOK")
public class BookCh2 {

        @Id
        @GeneratedValue (strategy=GenerationType.TABLE)
        @Column (name="ISBN")
        private long isbn;

        @Version
        @Column (name="version")
        private Integer version;

        @Column (name="book_Name")
        private String bookName;

        /*@Column (name="publisher_code")
        String publisherCode;*/

        @Column (name="publish_date")
        private Date publishDate;

        @Column (name="price")
        private Long price;
        // getters and setters
}

You can also use timestamps to version by adding a variable of type Date:

public class BookCh2 implements Serializable{
        private long isbn;
        private String name;
        private Date publishDate;
        private int price;
        private Date timestamp;
        // getters and setters
}

The XML mapping file has a timestamp element as shown here:

<hibernate-mapping package="com.hibernaterecipes.chapter2" auto-import="false" >
        <import class="BookCh2" rename="bkch2"/>
        <class name="BookCh2" table="BOOK" dynamic-insert="true" dynamic-update="true" schema="BOOK">
                <id name="isbn"  column="isbn" type="long">
                        <generator class="hilo">
                        </generator>
                </id>
                <timestamp name="timestamp" access="field" column="timestamp"></timestamp>
                <property name="name" type="string" column="BOOK_NAME" />
                <property name="publishDate" type="date" column="PUBLISH_DATE" />
                <property name="price" type="int" column="PRICE" />
        </class>
</hibernate-mapping>

You can also implement versioning without a version or timestamp column by using the optimistic-lock attribute on the class mapping. With optimistic-lock="all", Hibernate includes the old value of every column in the WHERE clause of the UPDATE statement, so a concurrent modification causes the update to match zero rows and fail. This works only when the entity is retrieved and modified in the same session; it doesn't work with detached objects. If you need to use optimistic concurrency control with detached objects, you must use a version or timestamp:

<hibernate-mapping package="com.hibernaterecipes.chapter2" auto-import="false" >
        <import class="BookCh2" rename="bkch2"/>
        <class name="BookCh2" table="BOOK" dynamic-insert="true" dynamic-update="true" schema="BOOK" optimistic-lock="all">
                <id name="isbn"  column="isbn" type="long">
                        <generator class="hilo">
                        </generator>
                </id>
                <property name="name" type="string" column="BOOK_NAME" />
                <property name="publishDate" type="date" column="PUBLISH_DATE" />
                <property name="price" type="int" column="PRICE" />
        </class>
</hibernate-mapping>

This isn't a popular option because it's slower and more complex to implement. In addition, JPA doesn't standardize this technique, so if you need this form of optimistic locking in JPA, you must use Hibernate's own annotations, as shown here:

@Entity (name="bkch2")
@org.hibernate.annotations.Entity
(dynamicInsert = true, dynamicUpdate = true,
                 optimisticLock=org.hibernate.annotations.OptimisticLockType.ALL)
@Table        (name="BOOK")
public class BookCh2 {

        @Id
        @GeneratedValue (strategy=GenerationType.TABLE)
        @Column (name="ISBN")
        private long isbn;

        @Version
        @Column (name="version")
        private Integer version;

        @Column (name="book_Name")
        private String bookName;

        /*@Column (name="publisher_code")
        String publisherCode;*/

        @Column (name="publish_date")
        private Date publishDate;

        @Column (name="price")
        private Long price;
        // getters and setters
}

Using Pessimistic Concurrency Control

Problem

How do you implement pessimistic concurrency control in your application to save the book entity?

Solution

Most applications have Read Committed as the isolation level. This isolation level permits unrepeatable reads, which isn't desirable. One way to avoid unrepeatable reads is to upgrade the isolation level from Read Committed to Repeatable Read by implementing versioning, but an application-wide upgrade comes with scalability issues. So, as an application developer, you may not want to make that application-wide change; you may just want to upgrade the isolation level on a per-unit-of-work basis. To do so, Hibernate provides the lock() method on the Session object.

How It Works

The following code demonstrates how an unrepeatable read can happen. You first get the book using session.get(), and then you use a query to read the name of the book. If an update happens between these two calls to the book's record in the database by a concurrent transaction, you have a case of unrepeatable reads:

Session session = getSession();
Transaction tx = null;
tx = session.beginTransaction();
BookCh2 book = (BookCh2)session.get(BookCh2.class, new Long(32769));
String name = (String) session.createQuery("select b.name from bkch2 b where b.isbn = :isbn")
                .setParameter("isbn", book.getIsbn()).uniqueResult();
System.out.println("BOOk's Name- "+name);
tx.commit();
session.close();

You can use session.lock to upgrade the isolation level. The previous code is updated as follows:

Session session = getSession();
Transaction tx = null;
tx = session.beginTransaction();
BookCh2 book = (BookCh2)session.get(BookCh2.class, new Long(32769));
session.lock(book, LockMode.UPGRADE);
String name = (String) session.createQuery("select b.name from bkch2 b where b.isbn = :isbn")
                .setParameter("isbn", book.getIsbn()).uniqueResult();
System.out.println("BOOk's Name- "+name);
tx.commit();
session.close();

Or you can directly call

BookCh2 book = (BookCh2)session.get(BookCh2.class, new Long(32769),LockMode.UPGRADE);

LockMode.UPGRADE creates a lock on the specific book record in the database. Now, other concurrent transactions can't modify the book record.

Hibernate supports the following lock modes:

  • LockMode.NONE: This is the default lock mode. If an object is requested with this lock mode, a READ lock is obtained if it's necessary to read the state from the database, rather than pull it from a cache.

  • LockMode.READ: In this lock mode, the object is read from the database, and its version is checked against the version held in memory.

  • LockMode.UPGRADE: Objects loaded in this lock mode are materialized using an SQL select ... for update. It's equivalent to LockModeType.READ in Java Persistence (see the JPA sketch after this list).

  • LockMode.UPGRADE_NOWAIT: This lock mode attempts to obtain an upgrade lock using an Oracle-style select for update nowait. The semantics of this lock mode, once obtained, are the same as UPGRADE.

  • LockMode.FORCE: This lock mode results in a forced version increment. It's equivalent to LockModeType.WRITE in Java Persistence.

  • LockMode.WRITE: A WRITE lock is obtained when an object is updated or inserted. This lock mode is for internal use only and isn't a valid mode for load() or lock().
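Since LockMode.UPGRADE corresponds to LockModeType.READ in Java Persistence (as noted in the list above), the JPA variant of the earlier example uses EntityManager.lock(). This is only a sketch: it assumes the versioned BookCh2 entity mapped earlier, the same "book" persistence unit, and javax.persistence.LockModeType on the classpath.

EntityManager manager = managerFactory.createEntityManager();
EntityTransaction tx = manager.getTransaction();
try {
    tx.begin();
    BookCh2 book = manager.find(BookCh2.class, new Long(32769));
    manager.lock(book, LockModeType.READ);   // upgrade the lock on this record
    // ... read or modify the book ...
    tx.commit();
} catch (RuntimeException e) {
    if (tx.isActive()) {
        tx.rollback();
    }
    throw e;
} finally {
    manager.close();
}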

Summary

In this chapter, you've seen how to manage transactions programmatically. You've learned to use the Hibernate Transaction API and the Java Transaction API for transaction demarcation.

Optimistic concurrency control and pessimistic concurrency control are the two approaches used to achieve concurrency control. Optimistic concurrency control involves maintaining a version, either a number or a timestamp, as a database column. Pessimistic control is applied on a per-transaction basis and is achieved by using session.lock() with a lock mode of UPGRADE or UPGRADE_NOWAIT.
