Generating a chunked output using Jersey APIs

A chunked response means that, instead of the client having to wait for the entire result, the result is split into chunks (partial results) and sent one after the other. Sending a response in chunks is useful for a RESTful web API when the resource returned by the API is very large.

With Jersey, you can use the org.glassfish.jersey.server.ChunkedOutput class as the return type to send the response to a client in chunks. The chunked output content can be of any data type for which a MessageBodyWriter<T> (entity provider) is available.

When you specify ChunkedOutput as the return type of a REST resource method, you tell the runtime that the response will be chunked and that the chunks will be sent to the client one by one. When Jersey sees ChunkedOutput as the return type of a method, it switches to asynchronous processing mode while invoking this method at runtime, without you having to explicitly use AsyncResponse in the method signature. Furthermore, when the response content is generated for this method, Jersey sets the Transfer-Encoding: chunked response header. Chunked transfer encoding allows the server to maintain a persistent HTTP connection and send the result to the client in a series of chunks, as and when they become available.
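
Before walking through a complete example, here is a minimal sketch of the basic pattern. The resource class, path, and chunk contents below are illustrative assumptions rather than part of the example that follows; the sketch simply streams a few plain-text chunks so that the write/close contract of ChunkedOutput is easy to see:

//Other imports are omitted for brevity
import org.glassfish.jersey.server.ChunkedOutput;

@Path("messages")
public class MessageResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public ChunkedOutput<String> streamMessages() {
        final ChunkedOutput<String> output =
            new ChunkedOutput<String>(String.class);

        //Produce the chunks on a separate thread; the resource
        //method returns immediately and Jersey streams each chunk
        //as soon as it is written
        new Thread(new Runnable() {
            public void run() {
                try {
                    for (int i = 0; i < 5; i++) {
                        //The trailing \r\n can act as a chunk delimiter
                        //for clients that parse the stream chunk by chunk
                        output.write("chunk-" + i + "\r\n");
                    }
                } catch (IOException e) {
                    //Thrown if a chunk cannot be written to the response
                    e.printStackTrace();
                } finally {
                    try {
                        //Closing the output marks the end of the response
                        output.close();
                    } catch (IOException ioe) {
                        ioe.printStackTrace();
                    }
                }
            }
        }).start();

        //Returning ChunkedOutput tells Jersey to keep the connection
        //open and stream each chunk as it is written
        return output;
    }
}

The chunks are produced on a worker thread, and calling close() on the ChunkedOutput is what tells Jersey that the response is complete. In a container-managed application you would typically hand this work to an executor rather than start a raw thread, which is what the next example does.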

The following example shows how you can use ChunkedOutput to return a large amount of data:

//Other imports are omitted for brevity 
import org.eclipse.persistence.queries.CursoredStream; 
import org.glassfish.jersey.server.ChunkedOutput;  
 
@Stateless 
@Path("employees") 
public class EmployeeResource { 
   
  //A thread manager that manages a long-running job  
    private final ExecutorService executorService = 
       Executors.newCachedThreadPool(); 
   
    @GET
    @Path("chunk")
    @Produces({MediaType.APPLICATION_JSON})
    public ChunkedOutput<List<Employee>> findAllInChunk() {
        final ChunkedOutput<List<Employee>> output =
            new ChunkedOutput<List<Employee>>() {
            };
        //Execute the thread
        executorService.execute(new LargeCollectionResponseType(output));

        //Returns the chunked output set by the
        //LargeCollectionResponseType thread class
        return output;
    }
 
    //This thread class reads employee records from the database
    //in batches. It performs the asynchronous processing that
    //backs the chunked output.

    class LargeCollectionResponseType implements Runnable {
 
        ChunkedOutput output; 
        EntityManager entityManagerLocal = null; 
        //Stream class used to deal 
        //with large collections 
        CursoredStream cursoredStream = null; 
        //Page size used for reading records from DB 
        final int PAGE_SIZE = 50; 
 
        LargeCollectionResponseType(ChunkedOutput output) { 
            this.output = output; 
            //Get the entity manager instance 
            EntityManagerFactory emf =  
                Persistence.createEntityManagerFactory("EMP_PU"); 
            entityManagerLocal = emf.createEntityManager(); 
            //Get employee query 
            //Employee entity definition is not shown for brevity  
            Query empQuery =
                entityManagerLocal.createNamedQuery("Employee.findAll");
            //A scrollable cursor is enabled using a query hint
            //This hint allows the code to scroll through
            //the results page by page
            empQuery.setHint("eclipselink.cursor", true);
            cursoredStream = (CursoredStream) empQuery.getSingleResult();
        } 
 
        public void run() { 
            try { 
                boolean hasMore = true; 
                do { 
                    //Scroll through the results page by page 
                    List<Employee> chunk =
                        getNextBatch(cursoredStream, PAGE_SIZE);
                    hasMore = (chunk != null && chunk.size() > 0);
                    if (hasMore) { 
                        //Write current chunk to ChunkedOutput 
                        output.write(chunk); 
                    } 
                } while (hasMore); 
            } catch (IOException e) { 
                // IOException thrown when writing the 
                // chunks of response: Should be handled 
                e.printStackTrace(); 
            } finally {
                //Release the database cursor and the entity manager
                if (cursoredStream != null) {
                    cursoredStream.close();
                }
                entityManagerLocal.close();
                try {
                    output.close();
                } catch (IOException ioe) {
                    ioe.printStackTrace();
                }
            }
        } 
 
        //CursoredStream is used to deal more efficiently with
        //large collections returned from EclipseLink queries
        private List<Employee> getNextBatch(CursoredStream  
            cursoredStream, int pagesize) { 
            List emps = null; 
            if (!cursoredStream.atEnd()) { 
                emps = cursoredStream.next(pagesize); 
            }  
            return emps; 
        } 
    }

    //Rest of the code goes here
}

Here is a quick summary of this example:

  • This example uses a JPA entity to read employee records from the database. The Employee entity definition is not listed in the code snippet in order to save space. We use org.eclipse.persistence.queries.CursoredStream to read records from the database in batches. Under the covers, CursoredStream wraps a database result set cursor to provide a stream over the selected objects.
  • This example defines the ChunkedOutput<List<Employee>> findAllInChunk() method to return the employee collection in a series of chunks.
  • Jersey processes a resource method that returns ChunkedOutput asynchronously. This is why the example submits a worker thread to the ExecutorService; the thread reads records from the database via CursoredStream (EclipseLink is the JPA provider for this example) and writes each batch to the ChunkedOutput instance.
  • While returning the response to the client, Jersey adds the Transfer-Encoding: chunked response header for the ChunkedOutput return type. The client then knows that the response is chunked, so it reads each chunk separately, processes it, and waits for more chunks to arrive on the same connection. This allows the server to maintain a persistent HTTP connection for sending the series of chunks to the client. Once all the chunks have been sent, the server closes the connection. A client-side sketch that reads such a chunked response is shown after this list.
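
The following is a minimal client-side sketch that reads such a chunked response using the org.glassfish.jersey.client.ChunkedInput class from the Jersey client API. The target URI is a hypothetical deployment address, and the sketch reads each chunk as a raw string rather than deserializing it into List<Employee>. Note also that the default ChunkedInput parser expects chunks to be delimited by \r\n; a different delimiter would require installing a custom parser via setParser(ChunkedInput.createParser(...)).

//Imports used by this sketch
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.GenericType;
import javax.ws.rs.core.Response;
import org.glassfish.jersey.client.ChunkedInput;

public class ChunkedResponseClient {

    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        //Hypothetical URI; adjust it to where the resource is deployed
        WebTarget target = client.target(
            "http://localhost:8080/app/webresources/employees/chunk");

        Response response = target.request().get();
        //ChunkedInput is the client-side counterpart of ChunkedOutput
        ChunkedInput<String> chunkedInput =
            response.readEntity(new GenericType<ChunkedInput<String>>() {});

        String chunk;
        //read() blocks until the next chunk arrives and returns null
        //once the server closes the ChunkedOutput
        while ((chunk = chunkedInput.read()) != null) {
            System.out.println("Next chunk received: " + chunk);
        }

        response.close();
        client.close();
    }
}

Each read() call hands over one chunk as it arrives on the open connection, and the loop ends when the server closes the chunked output on its side.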