Chapter 18. Designing client/server applications

In this chapter

  • Patterns for client/server architecture
  • Writing and testing a persistence layer
  • Integrating EJBs with JPA

Most JPA developers build client/server applications with a Java-based server accessing the database tier through Hibernate. Knowing how the EntityManager and system transactions work, you could probably come up with your own server architecture. You’d have to figure out where to create the EntityManager, when and how to close it, and how to set transaction boundaries.

You may be wondering what the relationship is between requests and responses from and to your client, and the persistence context and transactions on the server. Should a single system transaction handle each client request? Can several consecutive requests hold a persistence context open? How does detached entity state fit into this picture? Can you and should you serialize entity data between client and server? How will these decisions affect your client design?

Before we start answering these questions, we have to mention that we won’t talk about any specific frameworks besides JPA and EJB in this chapter. There are several reasons the code examples use EJBs in addition to JPA:

  • Our goal is to focus on client/server design patterns with JPA. Many cross-cutting concerns, such as data serialization between client and server, are standardized in EJB, so we don’t have to solve every problem immediately. We know you probably won’t write an EJB client application. With the example EJB client code in this chapter, though, you’ll have the foundation to make informed decisions when choosing and working with a different framework. We’ll discuss custom serialization procedures in the next chapter and explain how to exchange your JPA-managed data with any client.
  • We can’t possibly cover every combination of client/server frameworks in the Java space. Note that we haven’t even narrowed our scope to web server applications. Of course, web applications are important, so we’ll dedicate the next chapter to JPA with JSF and JAX-RS. In this chapter, we’re concerned with any client/server system relying on JPA for persistence, and abstractions like the DAO pattern, which are useful no matter what frameworks you use.
  • EJBs are effective even if you only use them on the server side. They offer transaction management, and you can bind the persistence context to stateful session beans. We’ll discuss these details as well, so if your application architecture calls for EJBs on the server side, you’ll know how to build them.

Throughout this chapter, you implement two simple use cases with straightforward workflows as an actual working application: editing an auction item, and placing bids for an item. First we look at the persistence layer and how you can encapsulate JPA operations into reusable components: in particular, using the DAO pattern. This will give you a solid foundation to build more application functionality.

Then you implement the use cases as conversations: units of work from the perspective of your application users. You see the code for stateless and stateful server-side components and the impact this has on client design and overall application architecture. This affects not only the behavior of your application but also its scalability and robustness. We repeat all the examples with both strategies and highlight the differences.

Let’s start with fleshing out a persistence layer and the DAO pattern.

18.1. Creating a persistence layer

In section 3.1.1, we introduced the encapsulation of persistence code in a separate layer. Although JPA already provides a certain level of abstraction, there are several reasons you should consider hiding JPA calls behind a facade:

  • A custom persistence layer can provide a higher level of abstraction for data-access operations. Instead of basic CRUD and query operations as exposed by the EntityManager, you can expose higher-level operations, such as getMaximumBid(Item i) and findItems(User soldBy) methods. This abstraction is the primary reason to create a persistence layer in larger applications: to support reuse of the same data-access operations.
  • The persistence layer can have a generic interface without exposing implementation details. In other words, you can hide the fact that you’re using Hibernate (or Java Persistence) to implement the data-access operations from any client of the persistence layer. We consider persistence-layer portability an unimportant concern because full object/relational mapping solutions like Hibernate already provide database portability. It’s highly unlikely that you’ll rewrite your persistence layer with different software in the future and still not want to change any client code. Furthermore, Java Persistence is a standardized and fully portable API; there is little harm in occasionally exposing it to clients of the persistence layer.

The persistence layer can unify data-access operations. This concern relates to portability, but from a slightly different angle. Imagine that you have to deal with mixed data-access code, such as JPA and JDBC operations. By unifying the facade that clients see and use, you can hide this implementation detail from the client. If you have to deal with different types of data stores, this is a valid reason to write a persistence layer.

If you consider portability and unification to be side effects of creating a persistence layer, your primary motivation is achieving a higher level of abstraction and improving the maintainability and reuse of data-access code. These are good reasons, and we encourage you to create a persistence layer with a generic facade in all but the simplest applications. But always first consider using JPA directly without any additional layering. Keep it as simple as possible, and create a lean persistence layer on top of JPA when you realize you’re duplicating the same query and persistence operations.

Many available tools claim to simplify creating a persistence layer for JPA or Hibernate. We recommend that you try to work without such tools first, and only buy into a product when you need a particular feature. Be especially wary of code and query generators: the frequently heard claims of a holistic solution to every problem, in the long term, can become a significant restriction and maintenance burden. There can also be a huge impact on productivity if the development process depends on running a code-generation tool. This is of course also true for Hibernate’s own tools: for example, if you have to generate the entity class source from an SQL schema every time you make a change. The persistence layer is an important part of your application, and you must be aware of the commitment you’re making by introducing additional dependencies. You see in this chapter and the next how to avoid the repetitive code often associated with persistence-layer components without using any additional tools.

There is more than one way to design a persistence layer facade—some small applications have a single DataAccess class; others mix data-access operations into domain classes (the Active Record pattern, not discussed in this book)—but we prefer the DAO pattern.

18.1.1. A generic data access object pattern

The DAO design pattern originated in Sun’s Java Blueprints more than 15 years ago; it’s had a long history. A DAO class defines an interface to persistence operations relating to a particular entity; it advises you to group together code that relates to the persistence of that entity. Given its age, there are many variations of the DAO pattern. The basic structure of our recommended design is shown in figure 18.1.

Figure 18.1. Generic DAO interfaces support arbitrary implementations.

We designed the persistence layer with two parallel hierarchies: interfaces on one side, implementations on the other. The basic instance-storage and -retrieval operations are grouped in a generic super-interface and a superclass that implements these operations with a particular persistence solution (using Hibernate, of course). The generic interface is extended by interfaces for particular entities that require additional business-related data-access operations. Again, you may have one or several implementations of an entity DAO interface.

Let’s quickly look at some of the interfaces and methods shown in this illustration. There are a bunch of finder methods. These typically return managed (in persistent state) entity instances, but they may also return arbitrary data-transfer objects such as ItemBidSummary. Finder methods are your biggest code duplication issue; you may end up with dozens if you don’t plan carefully. The first step is to try to make them as generic as possible and move them up in the hierarchy, ideally into the top-level interface. Consider the findByName() method in the ItemDAO: you’ll probably have to add more options for item searches soon, or you may want the result presorted by the database, or you may implement some kind of paging feature. We’ll elaborate on this again later and show you a generic solution for sorting and paging in section 19.2.

The methods offered by the DAO API indicate clearly that this is a state-managing persistence layer. Methods such as makePersistent() and makeTransient() change an entity instance’s state (or the state of many instances at once, with cascading enabled). A client can expect that updates are executed automatically (flushed) by the persistence engine when an entity instance is modified (there is no performUpdate() method). You’d write a completely different DAO interface if your persistence layer were statement-oriented: for example, if you weren’t using Hibernate to implement it, but rather only plain JDBC.

The persistence layer facade we introduce here doesn’t expose any Hibernate or Java Persistence interface to the client, so theoretically you can implement it with any software without making changes to the client code. You may not want or need persistence-layer portability, as explained earlier. In that case, you should consider exposing Hibernate or Java Persistence interfaces—for example, you could allow clients to access the JPA CriteriaBuilder and then have a generic findBy(CriteriaQuery) method. This decision is up to you; you may decide that exposing Java Persistence interfaces is a safer choice than exposing Hibernate interfaces. You should know, however, that although it’s possible to change the implementation of the persistence layer from one JPA provider to another, it’s almost impossible to rewrite a persistence layer that is state-oriented with plain JDBC statements.
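The parallel hierarchies just described can be sketched in code. The interface signatures below follow the operations discussed in this section; the Item stand-in and the map-backed InMemoryItemDAO are illustrative additions we use to demonstrate the contract, not code from the example application:

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Generic super-interface: basic instance storage and retrieval.
interface GenericDAO<T, ID extends Serializable> {
    T findById(ID id);
    List<T> findAll();
    Long getCount();
    T makePersistent(T instance);
    void makeTransient(T instance);
}

// Minimal Item stand-in for this sketch.
class Item {
    final Long id;
    final String name;
    Item(Long id, String name) { this.id = id; this.name = name; }
}

// Entity-specific interface: adds business-related finder operations.
interface ItemDAO extends GenericDAO<Item, Long> {
    List<Item> findByName(String name, boolean substring);
}

// Map-backed implementation, useful only to demonstrate the contract;
// a real implementation delegates to an EntityManager instead.
class InMemoryItemDAO implements ItemDAO {
    private final Map<Long, Item> store = new LinkedHashMap<>();

    public Item findById(Long id) { return store.get(id); }
    public List<Item> findAll() { return new ArrayList<>(store.values()); }
    public Long getCount() { return (long) store.size(); }
    public Item makePersistent(Item instance) {
        store.put(instance.id, instance);
        return instance;
    }
    public void makeTransient(Item instance) { store.remove(instance.id); }
    public List<Item> findByName(String name, boolean substring) {
        List<Item> result = new ArrayList<>();
        for (Item item : store.values())
            if (substring ? item.name.contains(name) : item.name.equals(name))
                result.add(item);
        return result;
    }
}
```

Note how a client of ItemDAO sees only the interface hierarchy; whether the implementation is backed by Hibernate, plain JDBC, or a map is invisible.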

Next, you implement the DAO interfaces.

18.1.2. Implementing the generic interface

Let’s continue with a possible implementation of the GenericDAO interface:

Path: /apps/app-model/src/main/java/org/jpwh/dao/GenericDAOImpl.java

public abstract class GenericDAOImpl<T, ID extends Serializable>
    implements GenericDAO<T, ID> {

    @PersistenceContext
    protected EntityManager em;

    protected final Class<T> entityClass;

    protected GenericDAOImpl(Class<T> entityClass) {
        this.entityClass = entityClass;
    }

    public void setEntityManager(EntityManager em) {
        this.em = em;
    }

    // ...
}

This generic implementation needs two things to work: an EntityManager and an entity class. A subclass must provide the entity class as a constructor argument. The EntityManager, however, can be provided either by a runtime container that understands the @PersistenceContext injection annotation (for example, any standard Java EE container) or through setEntityManager().

Next, let’s look at the finder methods:

Path: /apps/app-model/src/main/java/org/jpwh/dao/GenericDAOImpl.java

public abstract class GenericDAOImpl<T, ID extends Serializable>
    implements GenericDAO<T, ID> {

    // ...

    public T findById(ID id) {
        return findById(id, LockModeType.NONE);
    }

    public T findById(ID id, LockModeType lockModeType) {
        return em.find(entityClass, id, lockModeType);
    }

    public T findReferenceById(ID id) {
        return em.getReference(entityClass, id);
    }

    public List<T> findAll() {
        CriteriaQuery<T> c =
            em.getCriteriaBuilder().createQuery(entityClass);
        c.select(c.from(entityClass));
        return em.createQuery(c).getResultList();
    }

    public Long getCount() {
        CriteriaQuery<Long> c =
            em.getCriteriaBuilder().createQuery(Long.class);
        c.select(em.getCriteriaBuilder().count(c.from(entityClass)));
        return em.createQuery(c).getSingleResult();
    }

    // ...
}

You can see how the code uses the entity class to perform the query operations. We’ve written some simple criteria queries, but you could use JPQL or SQL.

Finally, here are the state-management operations:

Path: /apps/app-model/src/main/java/org/jpwh/dao/GenericDAOImpl.java

public abstract class GenericDAOImpl<T, ID extends Serializable>
    implements GenericDAO<T, ID> {

    // ...

    public T makePersistent(T instance) {
        // merge() handles transient AND detached instances
        return em.merge(instance);
    }

    public void makeTransient(T instance) {
        em.remove(instance);
    }

    public void checkVersion(T entity, boolean forceUpdate) {
        em.lock(
            entity,
            forceUpdate
                ? LockModeType.OPTIMISTIC_FORCE_INCREMENT
                : LockModeType.OPTIMISTIC
        );
    }
}

An important decision is how you implement the makePersistent() method. Here we’ve chosen EntityManager#merge() because it’s the most versatile. If the given argument is a transient entity instance, merging will return a persistent instance. If the argument is a detached entity instance, merging will also return a persistent instance. This provides clients with a consistent API without worrying about the state of an entity instance before calling makePersistent(). But the client needs to be aware that the returned value of makePersistent() is always the current instance and that the argument it has given must now be thrown away (see section 10.3.4).
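The consequence for callers can be shown with a small stand-in. This is plain Java, not JPA, but em.merge() behaves analogously for detached instances: state is copied onto a managed copy, and that copy is returned, so the caller must switch to the returned reference:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in types to illustrate the merge contract in plain Java.
class Entity {
    final Long id;
    String name;
    Entity(Long id, String name) { this.id = id; this.name = name; }
}

class MergingDao {
    private final Map<Long, Entity> persistenceContext = new HashMap<>();

    // Like em.merge(): copies the detached state onto a managed copy
    // and returns that copy; the argument itself stays detached.
    Entity makePersistent(Entity detached) {
        Entity managed = persistenceContext.computeIfAbsent(
            detached.id, id -> new Entity(id, null));
        managed.name = detached.name;
        return managed;
    }
}
```

After calling makePersistent(detached), a client works only with the returned instance and throws the detached argument away.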

You’ve now completed building the basic machinery of the persistence layer and the generic interface it exposes to the upper layer of the system. In the next step, you create entity-related DAO interfaces and implementations by extending the generic interface and implementation.

18.1.3. Implementing entity DAOs

Everything you’ve created so far is abstract and generic—you can’t even instantiate GenericDAOImpl. You now implement the ItemDAO interface by extending GenericDAOImpl with a concrete class.

First you must make choices about how callers will access the DAOs. You also need to think about the life cycle of a DAO instance. With the current design, the DAO classes are stateless except for the EntityManager member.

Caller threads can share a DAO instance. In a multithreaded Java EE environment, for example, the automatically injected EntityManager is effectively thread-safe, because internally it’s often implemented as a proxy that delegates to some thread- or transaction-bound persistence context. Of course, if you call setEntityManager() on a DAO, that instance can’t be shared and should only be used by one (for example, integration/unit test) thread.

An EJB stateless session bean pool would be a good choice, and thread-safe persistence context injection is available if you annotate the concrete ItemDAOImpl as a stateless EJB component:

Path: /apps/app-model/src/main/java/org/jpwh/dao/ItemDAOImpl.java

@Stateless
public class ItemDAOImpl extends GenericDAOImpl<Item, Long>
    implements ItemDAO {

    public ItemDAOImpl() {
        super(Item.class);
    }

    // ...
}

You see in a minute how the EJB container selects the “right” persistence context for injection.

Thread-safety of an injected EntityManager

The Java EE specifications don’t clearly document the thread-safety of injected EntityManagers with @PersistenceContext. The JPA specification states that an EntityManager may “only be accessed in a single-threaded manner.” This would imply that it can’t be injected into inherently multithreaded components such as EJBs, singleton beans, and servlets that don’t run with SingleThreadModel (the default). But the EJB specification requires that the EJB container serializes calls to each stateful and stateless session bean instance. The injected EntityManager in stateless or stateful EJBs is therefore thread-safe; containers implement this by injecting an EntityManager placeholder. Additionally, your application server might (it doesn’t have to) support thread-safe access to injected EntityManagers in a singleton bean or multithreaded servlet. If in doubt, inject the thread-safe EntityManagerFactory, and then create and close your own application-managed EntityManager in your component’s service methods.

Next are the finder methods defined in ItemDAO:

Path: /apps/app-model/src/main/java/org/jpwh/dao/ItemDAOImpl.java

@Stateless
public class ItemDAOImpl extends GenericDAOImpl<Item, Long>
    implements ItemDAO {

    // ...

    @Override
    public List<Item> findAll(boolean withBids) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<Item> criteria = cb.createQuery(Item.class);
        // ...
        return em.createQuery(criteria).getResultList();
    }

    @Override
    public List<Item> findByName(String name, boolean substring) {
        // ...
    }

    @Override
    public List<ItemBidSummary> findItemBidSummaries() {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<ItemBidSummary> criteria =
            cb.createQuery(ItemBidSummary.class);
        // ...
        return em.createQuery(criteria).getResultList();
    }
}

You shouldn’t have any problem writing these queries after reading the previous chapters; they’re straightforward: Either use the criteria query APIs or call externalized JPQL queries by name. You should consider the static metamodel for criteria queries, as explained in the section “Using a static metamodel” in chapter 3.

With the ItemDAO finished, you can move on to BidDAO:

Path: /apps/app-model/src/main/java/org/jpwh/dao/BidDAOImpl.java

@Stateless
public class BidDAOImpl extends GenericDAOImpl<Bid, Long>
    implements BidDAO {

    public BidDAOImpl() {
        super(Bid.class);
    }
}

As you can see, this is an empty DAO implementation that only inherits generic methods. In the next section, we discuss some operations you could potentially move into this DAO class. We also haven’t shown any UserDAO or CategoryDAO code and assume that you’ll write these DAO interfaces and implementations as needed.

Our next topic is testing this persistence layer: Should you, and if so, how?

18.1.4. Testing the persistence layer

We’ve pulled almost all the examples in this book so far directly from actual test code. We continue to do so in all future examples, but we have to ask: should you write tests for the persistence layer to validate its functionality?

In our experience, it doesn’t usually make sense to test the persistence layer separately. You could instantiate your domain DAO classes and provide a mock EntityManager. Such a unit test would be of limited value and quite a lot of work to write. Instead, we recommend that you create integration tests, which test a larger part of the application stack and involve the database system. All the rest of the examples in this chapter are from such integration tests; they simulate a client calling the server application, with an actual database back end. Hence, you’re testing what’s important: the correct behavior of your services, the business logic of the domain model they rely on, and database access through your DAOs, all together.

The problem then is preparing such integration tests. You want to test in a real Java EE environment, in the actual runtime container. For this, we use Arquillian (http://arquillian.org), a tool that integrates with TestNG. With Arquillian, you prepare a virtual archive in your test code and then execute it on a real application server. Look at the examples to see how this works.

A more interesting problem is preparing test data for integration tests. Most meaningful tests require that some data exists in the database. You want to load that test data into the database before your test runs, and each test should work with a clean and well-defined data set so you can write reliable assertions.

Based on our experience, here are three common techniques to import test data:

  • Your test fixture executes a method before every test to obtain an EntityManager. You manually instantiate your test data entities and persist them with the EntityManager API. The major advantage of this strategy is that you test quite a few of your mappings as a side effect. Another advantage is easy programmatic access to test data. For example, if you need the identifier value of a particular test Item in your test code, it’s already there in Java because you can pass it back from your data-import method. The disadvantage is that test data can be hard to maintain, because Java code isn’t a great data format. You can clear test data from the database by dropping and re-creating the schema after every test using Hibernate’s schema-export feature. All integration tests in this book so far have used this approach; you can find the test data-import procedure next to each test in the example code.
  • Arquillian can execute a DbUnit (http://dbunit.sourceforge.net) data-set import before every test run. DbUnit offers several formats for writing data sets, including the commonly used flat XML syntax. This isn’t the most compact format but is easy to read and maintain. The examples in this chapter use this approach. You can find Arquillian’s @UsingDataSet on the test classes with a path to the XML file to import. Hibernate generates and drops the SQL schema, and Arquillian, with the help of DbUnit, loads the test data into the database. If you like to keep your test data independent of tests, this may be the right approach for you. If you don’t use Arquillian, manually importing a data set is pretty easy with DbUnit—see the SampleDataImporter in this chapter’s examples. We deploy this importer when running the example applications during development, to have the same data available for interactive use as in automated tests.
  • In section 9.1.1, you saw how to execute custom SQL scripts when Hibernate starts. The load script executes after Hibernate generates the schema; this is a great utility for importing test data with plain INSERT SQL statements. The examples in the next chapter use this approach. The major advantage is that you can copy/paste the INSERT statements from an SQL console into your test fixture and vice versa. Furthermore, if your database supports the SQL row value constructor syntax, you can write compact multirow insertion statements like insert into MY_TABLE (MY_COLUMN) values (1), (2), (3), ....

We leave it up to you to pick a strategy. This is frequently a matter of taste and how much test data you have to maintain. Note that we’re talking about test data for integration tests, not performance or scalability tests. If you need large amounts of (mostly random) test data for load testing, consider data-generation tools such as Benerator (http://databene.org/databene-benerator.html).

This completes the first iteration of the persistence layer. You can now obtain ItemDAO instances and work with a higher level of abstraction when accessing the database. Let’s write a client that calls this persistence layer and implement the rest of the application.

18.2. Building a stateless server

The application will be a stateless server application, which means no application state will be managed on the server between client requests. The application will be simple, in that it supports only two use cases: editing an auction item and placing a bid for an item.

Consider these workflows to be conversations: units of work from the perspective of the application user. The application user’s point of view isn’t necessarily the same as the one we as developers have on the system; developers usually consider one system transaction to be a unit of work. We now focus on this mismatch and how a user’s perspective influences the design of server and client code. We start with the first conversation: editing an item.

18.2.1. Editing an auction item

The client is a trivial text-based EJB console application. Look at the “edit an auction item” user conversation with this client in figure 18.2.

Figure 18.2. This conversation is a unit of work from a user’s perspective.

The client presents the user with a list of auction items; the user picks one. Then the client asks which operation the user would like to perform. Finally, after entering a new name, the client shows a success confirmation message. The system is now ready for the next conversation. The example client starts again and presents a list of auction items.

The sequence of calls for this conversation is shown in figure 18.3. This is your road map for the rest of the section.

Figure 18.3. The calls in the “Edit an auction item” conversation

Let’s have a closer look at this in code; you can refer to the bullet items in the illustration to keep track of where we are. The code you see next is from a test case simulating the client, followed by code from the server-side components handling these client calls.

The client retrieves a list of Item instances from the server to start the conversation, passing true to request that the Item#bids collection be eagerly fetched. Because the server doesn’t hold the conversation state, the client must do this job:

Path: /apps/app-stateless-server/src/test/java/org/jpwh/test/stateless/AuctionServiceTest.java

The server-side code handles the call with the help of DAOs:

Path: /apps/app-stateless-server/src/main/java/org/jpwh/stateless/AuctionServiceImpl.java

(You can ignore the interfaces declared here; they’re trivial but necessary for remote calls and local testing of an EJB.) Because no transaction is active when the client calls getItems(), a new transaction is started. The transaction is committed automatically when the method returns. The @TransactionAttribute annotation is optional in this case; the default behavior requires a transaction on EJB method calls.
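A minimal sketch of the shape such a stateless service method takes, with the container annotations shown as comments and Item/ItemDAO as stand-in types rather than the chapter’s real classes:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-ins; in the application these are the chapter's Item and ItemDAO.
class Item { }
interface ItemDAO { List<Item> findAll(boolean withBids); }

// @Stateless  (plus @Remote/@Local interfaces for client access)
class AuctionServiceImpl {
    ItemDAO itemDAO; // injected by the container in the real EJB

    // @TransactionAttribute(TransactionAttributeType.REQUIRED) is the default:
    // a transaction, and with it a persistence context, spans this call.
    public List<Item> getItems(boolean withBids) {
        return itemDAO.findAll(withBids);
    }
}
```

A client starts the conversation with a call like getItems(true), asking for the Item#bids collections up front, because the stateless server keeps no conversation state between requests.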

The getItems() EJB method calls the ItemDAO to retrieve a List of Item instances. The Java EE container automatically looks up and injects the ItemDAO, and the EntityManager is set on the DAO. Because no EntityManager or persistence context is associated with the current transaction, a new persistence context is started and joined with the transaction. The persistence context is flushed and closed when the transaction commits. This is a convenient feature of stateless EJBs; you don’t have to do much to use JPA in a transaction.

A List of Item instances in detached state (after the persistence context is closed) is returned to the client. You don’t have to worry about serialization right now; as long as List and Item and all other reachable types are Serializable, the EJB framework takes care of it.

Next, the client sets the new name of a selected Item and asks the server to store that change by sending the detached and modified Item:

Path: /apps/app-stateless-server/src/test/java/org/jpwh/test/stateless/AuctionServiceTest.java

The server takes the detached Item instance and asks the ItemDAO to make the changes persistent, internally merging modifications:

Path: /apps/app-stateless-server/src/main/java/org/jpwh/stateless/AuctionServiceImpl.java

public class AuctionServiceImpl implements AuctionService {

    // ...

    @Override
    public Item storeItem(Item item) {
        return itemDAO.makePersistent(item);
    }

    // ...
}

The updated state—the result of the merge—is returned to the client.

The conversation is complete, and the client may ignore the returned updated Item. But the client knows that this return value is the latest state and that any previous state it was holding during the conversation, such as the List of Item instances, is outdated and should probably be discarded. A subsequent conversation should begin with fresh state: using the latest returned Item, or by obtaining a fresh list.

You’ve now seen how to implement a single conversation—the entire unit of work, from the user’s perspective—with two system transactions on the server. Because you only loaded data in the first system transaction and deferred writing changes to the last transaction, the conversation was atomic: changes aren’t permanent until the last step completes successfully. Let’s expand on this with the second use case: placing a bid for an item.

18.2.2. Placing a bid

In the console client, a user’s “placing a bid” conversation looks like figure 18.4. The client presents the user with a list of auction items again and asks the user to pick one. The user can place a bid and receives a success-confirmation message if the bid was stored successfully. The sequence of calls and the code road map are shown in figure 18.5.

Figure 18.4. A user placing a bid: a unit of work from the user’s perspective

Figure 18.5. The calls in the “Placing a bid” conversation

We again step through the test client and server-side code. First, the client gets a list of Item instances and eagerly fetches the Item#bids collection. You saw the code for this in the previous section.

Then, the client creates a new Bid instance after receiving user input for the amount, linking the new transient Bid with the detached selected Item. The client has to store the new Bid and send it to the server. If you don’t document your service API properly, a client may attempt to send the detached Item:

Path: /apps/app-stateless-server/src/test/java/org/jpwh/test/stateless/AuctionServiceTest.java

Here, the client assumes that the server knows it added the new Bid to the Item#bids collection and that it must be stored. Your server could implement this functionality, maybe with merge cascading enabled on the @OneToMany mapping of that collection. Then, the storeItem() method of the service would work as in the previous section, taking the Item and calling the ItemDAO to make it (and its transitive dependencies) persistent.

This isn’t the case in this application: the service offers a separate placeBid() method. You have to perform additional validation before a bid is stored in the database, such as checking whether it was higher than the last bid. You also want to force an increment of the Item version, to prevent concurrent bids. Hence, you document the cascading behavior of your domain model entity associations: Item#bids isn’t transitive, and new Bid instances must be stored through the service’s placeBid() method exclusively.

The implementation of the placeBid() method on the server takes care of validation and version checking:

Path: /apps/app-stateless-server/src/main/java/org/jpwh/stateless/AuctionServiceImpl.java
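A compilable sketch of this control flow with stand-in types: the version field imitates the @Version attribute and the forced increment, and the in-memory bids list stands in for the BidDAO. The real method runs inside one container-managed transaction and delegates to ItemDAO and BidDAO:

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

class InvalidBidException extends Exception {
    InvalidBidException(String message) { super(message); }
}

class Bid {
    final BigDecimal amount;
    Bid(BigDecimal amount) { this.amount = amount; }
}

class Item {
    final List<Bid> bids = new ArrayList<>();
    long version = 0; // stands in for the @Version attribute

    boolean isValidBid(Bid newBid) {
        if (newBid == null || newBid.amount.signum() <= 0)
            return false;
        // Bids are only ever added in increasing order in this sketch,
        // so the last element is the highest bid.
        Bid highest = bids.isEmpty() ? null : bids.get(bids.size() - 1);
        return highest == null || newBid.amount.compareTo(highest.amount) > 0;
    }
}

class AuctionService {
    // In the EJB, one transaction and one persistence context span this
    // method and the nested ItemDAO/BidDAO calls.
    Bid placeBid(Item item, Bid bid) throws InvalidBidException {
        item.version++; // stands in for itemDAO.checkVersion(item, true)
        if (!item.isValidBid(bid))
            throw new InvalidBidException("Bid amount too low!");
        item.bids.add(bid); // stands in for bidDAO.makePersistent(bid)
        return bid;
    }
}
```

The forced version increment guards against two users bidding on the same Item concurrently: the second transaction to commit fails with an optimistic-locking conflict.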

Two interesting things are happening here. First, a transaction is started and spans the placeBid() method call. The nested EJB method calls to ItemDAO and BidDAO are in that same transaction context and inherit the transaction. The same is true for the persistence context: it has the same scope as the transaction. Both DAO classes declare that they need the current @PersistenceContext injected; the runtime container provides the persistence context bound to the current transaction. Transaction and persistence-context creation and propagation with stateless EJBs is straightforward, always “along with the call.”

Second, the validation of the new Bid is business logic encapsulated in the domain model classes. The service calls Item#isValid(Bid) and delegates the responsibility for validation to the Item domain model class. Here’s how you implement this in the Item class:

Path: /apps/app-model/src/main/java/org/jpwh/model/Item.java

@Entity
public class Item implements Serializable {

    // ...

    public boolean isValidBid(Bid newBid) {
        if (newBid == null)
            return false;
        // A bid must be a positive amount
        if (newBid.getAmount().compareTo(BigDecimal.ZERO) <= 0)
            return false;
        // The first bid on an item is always valid
        Bid highestBid = getHighestBid();
        if (highestBid == null)
            return true;
        // Otherwise, the new bid must be higher than the current highest bid
        return newBid.getAmount().compareTo(highestBid.getAmount()) > 0;
    }

    public Bid getHighestBid() {
        return getBids().isEmpty()
            ? null : getBidsHighestFirst().get(0);
    }

    public List<Bid> getBidsHighestFirst() {
        List<Bid> list = new ArrayList<>(getBids());
        Collections.sort(list); // Bid is Comparable, ordered highest first
        return list;
    }

    // ...
}

The isValidBid() method performs several checks to find out whether the Bid is higher than the last bid. If your auction system has to support a “lowest bid wins” strategy at some point, all you have to do is change the Item domain-model implementation; the services and DAOs using that class won’t know the difference. (Obviously, you’d need a different message for the InvalidBidException.)

What is debatable is the efficiency of the getHighestBid() method. It loads the entire bids collection into memory, sorts it there, and then takes just one Bid. An optimized variation could look like this:

Path: /apps/app-model/src/main/java/org/jpwh/model/Item.java

@Entity
public class Item implements Serializable {

    // ...

    public boolean isValidBid(Bid newBid,
                              Bid currentHighestBid,
                              Bid currentLowestBid) {
        // ...
    }
}

The service (or controller, if you like) is still completely unaware of any business logic—it doesn’t need to know whether a new bid must be higher or lower than the last one. The service implementation must provide the currentHighestBid and currentLowestBid when calling the Item#isValidBid() method. This is what we hinted at earlier: that you may want to add operations to the BidDAO. You could write database queries to find those bids in the most efficient way possible, without loading all the item’s bids into memory and sorting them there.
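Such a BidDAO operation might look like the following sketch (the method name is hypothetical; it assumes an injected EntityManager field `em` and isn’t runnable outside a container):

```java
// Hypothetical BidDAO addition: let the database find the highest bid,
// instead of loading and sorting the whole collection in memory.
public Bid findHighestBid(Item item) {
    List<Bid> result = em.createQuery(
            "select b from Bid b" +
            " where b.item = :item" +
            " order by b.amount desc", Bid.class)
        .setParameter("item", item)
        .setMaxResults(1) // the database sorts and limits the result
        .getResultList();
    return result.isEmpty() ? null : result.get(0);
}
```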

The application is now complete. It supports the two use cases you set out to implement. Let’s take a step back and analyze the result.

18.2.3. Analyzing the stateless application

You’ve implemented code to support conversations: units of work from the perspective of the users. The users expect to perform a series of steps in a workflow and that each step will be only temporary until they finalize the conversation with the last step. That last step is usually a final request from the client to the server, ending the conversation. This sounds a lot like transactions, but you may have to perform several system transactions on the server to complete a particular conversation. The question is how to provide atomicity across several requests and system transactions.

Conversations by users can be of arbitrary complexity and duration. More than one client request in a conversation’s flow may load detached data. Because you’re in control of the detached instances on the client, you can easily make a conversation atomic if you don’t merge, persist, or remove any entity instances on the server until the final request in your conversation workflow. It’s up to you to somehow queue modifications and manage detached data where the list of items is held during user think-time. Just don’t call any service operation from the client that makes permanent changes on the server until you’re sure you want to “commit” the conversation.

One issue you have to keep an eye on is equality of detached references: for example, if you load several Item instances and put them in a Set or use them as keys in a Map. Because you’re then comparing instances outside the guaranteed scope of object identity—the persistence context—you must override the equals() and hashCode() methods on the Item entity class as explained in section 10.3.1. In the trivial conversations with only one detached list of Item instances, this wasn’t necessary. You never compared them in a Set, used them as keys in a HashMap, or tested them explicitly for equality.
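When you do need the override, it typically compares a stable business key rather than the surrogate identifier or object identity. Here’s a self-contained sketch (the `auctionNumber` business key is a hypothetical example, not from the book’s model):

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Sketch: equality based on an immutable business key, so two detached
// instances representing the same database row are "equal" in a Set
// or as HashMap keys, even outside a persistence context.
public class Item {

    private Long id;                    // surrogate key, may be null before saving
    private final String auctionNumber; // assumed stable, unique business key

    public Item(Long id, String auctionNumber) {
        this.id = id;
        this.auctionNumber = auctionNumber;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof Item)) return false;
        return Objects.equals(auctionNumber, ((Item) other).auctionNumber);
    }

    @Override
    public int hashCode() {
        return Objects.hash(auctionNumber);
    }

    public static void main(String[] args) {
        // Two detached instances loaded in different requests, same row:
        Item a = new Item(1L, "AUX-1234");
        Item b = new Item(1L, "AUX-1234");

        Set<Item> items = new HashSet<>();
        items.add(a);
        items.add(b);
        System.out.println(items.size()); // prints 1: treated as one item
    }
}
```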

You should enable versioning of the Item entity for multiuser applications, as explained in the section “Enabling versioning” in chapter 11. When entity modifications are merged in AuctionService#storeItem(), Hibernate increments the Item’s version (it doesn’t if the Item wasn’t modified, though). Hence, if another user has changed the name of an Item concurrently, Hibernate will throw an exception when the system transaction is committed and the persistence context is flushed. The first user to commit their conversation always wins with this optimistic strategy. The second user should be shown the usual error message: “Sorry, someone else modified the same data; please restart your conversation.”

What you’ve created is a system with a rich client or thick client; the client isn’t a dumb input/output terminal but an application with an internal state independent of the server (recall that the server doesn’t hold any application state). One of the advantages of such a stateless server is that any server can handle any client request. If a server fails, you can route the next request to a different server, and the conversation process continues. The servers in a cluster share nothing; you can easily scale up your system horizontally by adding more servers. Obviously, all application servers still share the database system, but at least you only have to worry about scaling up one tier of servers.

Keeping changes after race conditions

In acceptance testing, you’ll probably discover that users don’t like to restart conversations when a race condition is detected. They may demand pessimistic locking: while user A edits an item, user B shouldn’t even be allowed to load it into an editor dialog. The fundamental problem isn’t the optimistic version checks at the end of a conversation; it’s that you lose all your work when you start a conversation from scratch.

Instead of rendering a simple concurrency error message, you could offer a dialog that allows the user to keep their now invalid changes, merge them manually with the modifications made by the other user, and then save the combined result. Be warned, though: implementing this feature can be time-consuming, and Hibernate doesn’t help much.

The downside is that you need to write rich-client applications, and you have to deal with network communication and data-serialization issues. Complexity shifts from the server side to the client side, and you have to optimize communication between the client and server.

If your (JavaScript) client has to work in several web browsers, or even as a native application on different (mobile) operating systems, instead of as an EJB client, this can certainly be a challenge. We recommend this architecture if your rich client runs in popular web browsers, where users download the latest version of the client application every time they visit your website. Rolling out native clients on several platforms, and maintaining and upgrading the installations, can be a significant burden even in medium-sized intranets where you control the users’ environment.

Without an EJB environment, you have to customize serialization and transmission of detached entity state between the client and the server. Can you serialize and deserialize an Item instance? What happens if you didn’t write your client in Java? We’ll look at this issue in section 19.4.

Next, you implement the same use cases again, but with a very different strategy. The server will now hold the conversational state of the application, and the client will only be a dumb input/output device. This is an architecture with a thin client and a stateful server.

18.3. Building a stateful server

The application you’ll write is still simple. It supports the same two uses cases as before: editing an auction item and placing a bid for an item. No difference is visible to the users of the application; the EJB console client still looks like figures 18.2 and 18.4.

With a thin client, it’s the server’s job to transform data for output into a display format understood by the thin client—for example, into HTML pages rendered by a web browser. The client transmits user input operations directly to the server—for example, as simple HTML form submissions. The server is responsible for decoding and transforming the input into higher-level domain model operations. We keep this part simple for now, though, and use only remote method invocation with an EJB client.

The server must then also hold conversational data, usually stored in some kind of server-side session associated with a particular client. Note that a client’s session has a larger scope than a single conversation; a user may perform several conversations during a session. If the user walks away from the client and doesn’t complete a conversation, temporary conversation data must be removed on the server at some point. The server typically handles this situation with timeouts; for example, the server may discard a client’s session and all the data it contains after a certain period of inactivity. This sounds like a job for EJB stateful session beans, and, indeed, they’re ideal for this kind of architecture if you’re in need of a standardized solution.

Keeping those fundamental issues in mind, let’s implement the first use case: editing an auction item.

18.3.1. Editing an auction item

The client presents the user again with a list of auction items, and the user picks one. This part of the application is trivial, and you don’t have to hold any conversational state on the server. Have a look at the sequence of calls and contexts in figure 18.6.

Figure 18.6. The client retrieves data already transformed for immediate display.

Because the client isn’t very smart, it doesn’t and shouldn’t understand what an Item entity class is. It loads a List of ItemBidSummary data-transfer objects:

Path: /apps/app-stateful-server/src/test/java/org/jpwh/test/stateful/AuctionServiceTest.java

List<ItemBidSummary> itemBidSummaries = auctionService.getSummaries();

The server implements this with a stateless component because there is no need for it to hold any conversational state at this time:

Path: /apps/app-stateful-server/src/main/java/org/jpwh/stateful/AuctionServiceImpl.java

@javax.ejb.Stateless
@javax.ejb.Local(AuctionService.class)
@javax.ejb.Remote(RemoteAuctionService.class)
public class AuctionServiceImpl implements AuctionService {

    @Inject
    protected ItemDAO itemDAO;

    @Override
    public List<ItemBidSummary> getSummaries() {
        return itemDAO.findItemBidSummaries();
    }
}

Even if you have a stateful server architecture, there will be many short conversations in your application that don’t require any state to be held on the server. This is both normal and important: holding state on the server consumes resources. If you implemented the getSummaries() operation with a stateful session bean, you’d waste resources. You’d only use the stateful bean for a single operation, and then it would consume memory until the container expired it. Stateful server architecture doesn’t mean you can only use stateful server-side components.

Next, the client renders the ItemBidSummary list, which only contains the identifier of each auction item, its description, and the current highest bid. This is exactly what the user sees on the screen, as shown in figure 18.2. The user then enters an item identifier and starts a conversation that works with this item. You can see the road map for this conversation in figure 18.7.

Figure 18.7. The client controls the conversation boundaries on the server.

The client notifies the server that it should now start a conversation, passing an item identifier value:

Path: /apps/app-stateful-server/src/test/java/org/jpwh/test/stateful/AuctionServiceTest.java

itemService.startConversation(itemId);

The service called here isn’t the pooled stateless AuctionService from the last section. This new ItemService is a stateful component; the server will create an instance and assign it to this client exclusively. You implement this service with a stateful session bean:

Path: /apps/app-stateful-server/src/main/java/org/jpwh/stateful/ItemServiceImpl.java

There are many annotations on this class, defining how the container will handle this stateful component. With a 10-minute timeout, the server removes and destroys an instance of this component if the client hasn’t called it in the last 10 minutes. This handles dangling conversations: for example, when a user walks away from the client.

You also disable passivation for this EJB: an EJB container may serialize and store the stateful component to disk, to preserve memory or to transmit it to another node in a cluster when session-failover is needed. This passivation won’t work because of one member field: the EntityManager. You’re attaching a persistence context to this stateful component with the EXTENDED switch, and the EntityManager isn’t java.io.Serializable.

FAQ: Why can’t you serialize an EntityManager?

There is no technical reason why a persistence context and an EntityManager can’t be serialized. Of course, the EntityManager would have to reattach itself upon deserialization to the right EntityManagerFactory on the target machine, but that’s an implementation detail. Passivation of persistence contexts has so far been out of scope of the JPA and Java EE specifications.

Most vendors, however, have an implementation that is serializable and knows how to deserialize itself correctly. Hibernate’s EntityManager can be serialized and deserialized and tries to be smart when it has to find the right persistence unit upon deserialization.

With Hibernate and a WildFly server, you could enable passivation in the previous example, and you’d get session failover and high availability with a stateful server and extended persistence contexts. This isn’t standardized, though; and as we discuss later, this strategy is difficult to scale.

There was also a time when not even Hibernate’s EntityManager was serializable. You may encounter legacy framework code that tries to work around this entire issue, such as the Seam framework’s ManagedEntityInterceptor. You should avoid this and look for a simpler solution, which usually means sticky sessions in a cluster, stateless server design, or the CDI server-side conversation-management solution we’ll discuss in the next chapter with a request-scoped persistence context.

You use @PersistenceContext to declare that this stateful bean needs an EntityManager and that the container should extend the persistence context to span the same duration as the life cycle of the stateful session bean. This extended mode is an option exclusively for stateful EJBs. Without it, the container will create and close a persistence context whenever a transaction commits. Here, you want the persistence context to stay open beyond any transaction boundaries and remain attached to the stateful session bean instance.

Furthermore, you don’t want the persistence context to be flushed automatically when a transaction commits, so you configure it to be UNSYNCHRONIZED. Hibernate will only flush the persistence context after you manually join it with a transaction. Now Hibernate won’t automatically write the changes you make to loaded persistent entity instances to the database; instead, it queues them until you’re ready to write everything.
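Putting the annotations described above together, the class declaration might look like this sketch (the RemoteItemService interface name is an assumption, and the fragment isn’t runnable outside an EJB container):

```java
// Sketch reconstructed from the description above.
@javax.ejb.Stateful(passivationCapable = false)
@javax.ejb.StatefulTimeout(value = 10, unit = java.util.concurrent.TimeUnit.MINUTES)
@javax.ejb.Local(ItemService.class)
@javax.ejb.Remote(RemoteItemService.class)
public class ItemServiceImpl implements ItemService {

    // EXTENDED: the persistence context spans the life of this stateful bean.
    // UNSYNCHRONIZED: no automatic flush on transaction commit; you must
    // call joinTransaction() before any changes are written.
    @PersistenceContext(
        type = PersistenceContextType.EXTENDED,
        synchronization = SynchronizationType.UNSYNCHRONIZED)
    protected EntityManager em;

    @Inject
    protected ItemDAO itemDAO;

    protected Item item; // conversational state, held between client calls

    // ...
}
```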

At the start of the conversation, the server loads an Item instance and holds it as conversational state in a member field (figure 18.7). The ItemDAO also needs an EntityManager; remember that it has a @PersistenceContext annotation without any options. The rules for persistence context propagation in EJB calls you’ve seen before still apply: Hibernate propagates the persistence context along with the transaction context. The persistence context rides along into the ItemDAO with the transaction that was started for the startConversation() method call. When startConversation() returns, the transaction is committed, but the persistence context is neither flushed nor closed. The ItemServiceImpl instance waits on the server for the next call from the client.
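As a sketch (the DAO method name is assumed from context), startConversation() can be as simple as this:

```java
public void startConversation(Long itemId) {
    // Loaded with the extended persistence context: the instance stays
    // managed after this method (and its transaction) completes.
    item = itemDAO.findById(itemId);
}
```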

The next call from the client instructs the server to change the item’s name:

Path: /apps/app-stateful-server/src/test/java/org/jpwh/test/stateful/AuctionServiceTest.java

itemService.setItemName("Pretty Baseball Glove");

On the server, a transaction is started for the setItemName() method. But because no transactional resources are involved (no DAO calls, no EntityManager calls), nothing happens but a change to the Item you hold in conversational state:

Path: /apps/app-stateful-server/src/main/java/org/jpwh/stateful/ItemServiceImpl.java
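A minimal sketch of this operation (not the book’s exact listing):

```java
public void setItemName(String newName) {
    // Changes only the managed instance in memory; nothing is flushed,
    // because the persistence context is unsynchronized.
    item.setName(newName);
}
```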

Note that the Item is still in persistent state—the persistence context is still open! But because the persistence context isn’t synchronized, it won’t be flushed when the transaction commits, so the change you made to the Item isn’t written to the database yet.

Finally, the client ends the conversation, giving the OK to store all changes on the server (figure 18.7):

Path: /apps/app-stateful-server/src/test/java/org/jpwh/test/stateful/AuctionServiceTest.java

itemService.commitConversation();

On the server, you can flush changes to the database and discard the conversational state:

Path: /apps/app-stateful-server/src/main/java/org/jpwh/stateful/ItemServiceImpl.java
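A sketch of what this final step might look like (not runnable outside a container; the essential calls are joinTransaction() and the @Remove life-cycle annotation):

```java
@javax.ejb.Remove // the container destroys this stateful bean afterward,
                  // closing the extended persistence context
public void commitConversation() {
    // Joining the transaction makes Hibernate flush all queued changes
    // to the database when this method's transaction commits.
    em.joinTransaction();
}
```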

You’ve now completed the implementation of the first use case. We’ll skip the implementation of the second use case (placing a bid) and refer you to the example code for details. The code for the second case is almost the same as the first, and you shouldn’t have any problems understanding it. The important aspect you must understand is how persistence context and transaction handling work in EJBs.

Note

There are additional rules for EJBs about persistence-context propagation between different types of components. They’re extremely complex, and we’ve never seen any good use cases for them in practice. For example, you probably wouldn’t call a stateful EJB from a stateless EJB. Another complication is disabled or optional EJB method transactions, which also influence how the persistence context is propagated along with component calls. We explained these propagation rules in the previous edition of this book. We recommend that you try to work only with the strategies shown in this chapter, and keep things simple.

Let’s discuss some of the differences between a stateful and a stateless server architecture.

18.3.2. Analyzing the stateful application

As in our analysis of the stateless application, the first question is how the unit of work is implemented from the perspective of the user. In particular, you need to ask how atomicity of the conversation works and how you can make all steps in the workflow appear as a single unit.

At some point, usually when the last request in a conversation occurs, you commit the conversation and write the changes to the database. The conversation is atomic if you don’t join the extended EntityManager with a transaction until the last event in the conversation. No dirty checking and flushing will occur if you only read data in unsynchronized mode.

While you keep the persistence context open, you can keep loading data lazily by accessing proxies and unloaded collections; this is obviously convenient. The loaded Item and other data become stale, however, if the user needs a long time to trigger the next request. You may want to refresh() some managed entity instances during the conversation if you need updates from the database, as explained in section 10.2.6. Alternatively, you can refresh to undo an operation during a conversation. For example, if the user changes the Item#name in a dialog but then decides to undo this, you can refresh() the persistent Item instance to retrieve the “old” name from the database. This is a nice feature of an extended persistence context and allows the Item to be always available in managed persistent state.

Savepoints in conversations

You may be familiar with savepoints in JDBC transactions: after changing some data in a transaction, you set a savepoint; later you roll back the transaction to the savepoint, undoing some work but keeping changes made before the savepoint was set. Unfortunately, Hibernate doesn’t offer a concept similar to savepoints for the persistence context. You can only roll back an entity instance to its database state with the refresh() operation. Regular JDBC savepoints in a transaction can be used with Hibernate (you need a Connection; see section 17.1), but they won’t help you implement undo in a conversation.

The stateful server architecture may be more difficult to scale horizontally. If a server fails, the state of the current conversation and indeed the entire session is lost. Replicating sessions on several servers is a costly operation, because any modification of session data on one server involves network communication to (potentially all) other servers.

With stateful EJBs and an extended EntityManager held in a member field, serialization of the persistence context isn’t possible in a portable way. If you use stateful EJBs and an extended persistence context in a cluster, consider enabling sticky sessions, causing a particular client’s requests to always route to the same server. This allows you to handle more load with additional servers easily, but your users must accept losing session state when a server fails.

On the other hand, stateful servers can act as a first line of caches with their extended persistence contexts in user sessions. Once an Item has been loaded for a particular user conversation, that Item won’t be loaded again from the database in the same conversation. This can be a great tool to reduce the load on your database servers (the most expensive tier to scale).

An extended persistence-context strategy requires more memory on the server than holding only detached instances: the persistence context in Hibernate contains a snapshot copy of all managed instances. You may want to manually detach() managed instances to control what is held in the persistence context, or disable dirty checking and snapshots (while still being able to lazy load) as explained in section 10.2.8.

There are alternative implementations of thin clients and stateful servers, of course. You can use regular request-scoped persistence contexts and manage detached (not persistent) entity instances on the server manually. This is certainly possible with detaching and merging but can be much more work. One of the main advantages of the extended persistence context, transparent lazy loading even across requests, would no longer be available either. In the next chapter, we’ll show you such a stateful service implementation with request-scoped persistence contexts in CDI and JSF, and you can compare it with the extended persistence context feature of EJBs you’ve seen in this chapter.

Thin client systems typically produce more load on servers than rich clients do. Every time the user interacts with the application, a client event results in a network request. This can even happen for every mouse click in a web application. Only the server knows the state of the current conversation and has to prepare and render all information the user is viewing. A rich client, on the other hand, can load raw data needed for a conversation in one request, transform it, and bind it locally to the user interface as needed. A dialog in a rich client can queue modifications on the client side and fire a network request only when it has to make changes persistent at the end of a conversation.

An additional challenge with thin clients is parallel conversations by one user: what happens if a user is editing two items at the same time—for example, in two web browser tabs? This means the user has two parallel conversations with the server. The server must separate data in the user session by conversation. Client requests during a conversation must therefore contain some sort of conversation identifier so you can select the correct conversation state from the user’s session for each request. This happens automatically with EJB clients and servers but probably isn’t built into your favorite web application framework (unless it’s JSF and CDI, as you’ll see in the next chapter).
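The bookkeeping this requires can be sketched in plain Java (all names here are illustrative; frameworks like CDI provide this machinery for you):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: a user session holding several parallel conversations, keyed
// by a conversation identifier that the client must echo back with each
// request (e.g., one identifier per browser tab).
public class UserSession {

    private final AtomicLong nextId = new AtomicLong();
    private final Map<Long, Object> conversations = new ConcurrentHashMap<>();

    public long beginConversation(Object state) {
        long cid = nextId.incrementAndGet();
        conversations.put(cid, state);
        return cid; // client includes this cid in subsequent requests
    }

    public Object getConversationState(long cid) {
        return conversations.get(cid); // select the right tab's state
    }

    public void endConversation(long cid) {
        conversations.remove(cid); // commit or timeout discards the state
    }

    public static void main(String[] args) {
        UserSession session = new UserSession();
        long tab1 = session.beginConversation("editing item 1");
        long tab2 = session.beginConversation("editing item 2");
        System.out.println(session.getConversationState(tab1)); // editing item 1
        System.out.println(session.getConversationState(tab2)); // editing item 2
    }
}
```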

One significant benefit of a stateful server is less reliance on the client platform; if the client is a simple input/output terminal with few moving parts, there is less chance for things to go wrong. The only place you have to implement data validation and security checks is the server. There are no deployment issues to deal with; you can roll out application upgrades on servers without touching clients.

Today, there are few advantages to thin client systems, and stateful server installations are declining. This is especially true in the web application sector, where easy scalability is frequently a major concern.

18.4. Summary

  • You implemented simple conversations—units of work, from the perspective of your application user.
  • You saw two server and client designs, with stateless and stateful servers, and learned how Hibernate fits into both these architectures.
  • You can work with either detached entity state or an extended conversation-scoped persistence context.