Salesforce is committed to ensuring that the interactive user experience (browser or mobile response times) of your applications, and of its own, is as optimal as possible. In a multitenant environment, it uses governors as one way to manage this. The other approach is to provide a means for you to move the processing of code from the foreground (interactive) into the background (async mode). This section of the chapter discusses the design considerations, implementation options, and benefits available in this context.
As code running in the background is, by definition, not holding up the response times of the pages users are viewing, Salesforce can and does throttle when and how often async processes execute, depending on the load on its servers at the time. For this reason, it currently does not provide an SLA on when an async piece of code will run, nor a guarantee that Apex Scheduler code will run at exactly the allotted time. This aspect of the platform can at times be a source of frustration for end users waiting for critical business processes to complete. However, there are some design, implementation, user interface, and messaging considerations that can make a big difference in managing their expectations.
The good news is that the platform rewards you for running your code in async by giving you increased governors in areas such as heap size, SOQL queries, and CPU time. You can find the full details in the documentation (attempting to document them in a book is a nonstarter, as they are constantly being improved, and in some cases removed).
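As a quick way to see the governor amounts in effect in the current execution context (interactive or async), the Limits class can be queried at runtime, a minimal sketch:

```apex
// Report some of the governor limits applicable to the current context;
// these values are higher when running in an async context
System.debug('SOQL queries allowed: ' + Limits.getLimitQueries());
System.debug('Heap size allowed: ' + Limits.getLimitHeapSize());
System.debug('CPU time allowed (ms): ' + Limits.getLimitCpuTime());
```

Running the same statements from a trigger versus a Batch Apex execute method illustrates the extended governors described above.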
Understanding Execution Governors and Limits: http://www.salesforce.com/us/developer/docs/apexcode/Content/apex_gov_limits.htm.
The platform provides some straightforward ways in which you can execute code in async. What is often overlooked, though, is error handling and general user experience considerations. This section presents some points to consider functionally when moving a task or process into async, or the background, to use a more end user-facing term. For example, consider how end users will monitor the progress of a job (the platform exposes job status via the AsyncApexJob object) and how they will be informed of its completion or failure, and ensure a consistent tone and style for such messages.

There are three ways to implement asynchronous code in Force.com; all run with the same default extended governors compared to the interactive execution context.
As stated previously, there is no guarantee when the code will be executed. Salesforce monitors the frequency and length of each execution, adjusting a queue that takes into consideration performance over the entire instance and not just the subscriber org. It also caps the number of async jobs over a rolling 24-hour period. See the Understanding Execution Governors and Limits section for more details. Salesforce provides a detailed white paper describing the queuing.
Asynchronous Processing in Force.com: https://developer.salesforce.com/page/Asynchronous_Processing_in_Force_com.
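Although you cannot control the queue, you can observe a job's progress through it via the AsyncApexJob object, a minimal sketch, assuming jobId was returned by a prior Database.executeBatch or System.enqueueJob call:

```apex
// Inspect the progress of a previously submitted async job
AsyncApexJob job = [
    SELECT Status, NumberOfErrors, JobItemsProcessed, TotalJobItems, ExtendedStatus
    FROM AsyncApexJob
    WHERE Id = :jobId];
System.debug('Status: ' + job.Status +
    ', processed ' + job.JobItemsProcessed + ' of ' + job.TotalJobItems +
    ' chunks, errors: ' + job.NumberOfErrors);
```

The same object backs the Apex Jobs page under Setup, which administrators can use without any code.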
The Apex Scheduler can be used to execute code in an async context via its execute method, which also provides enhanced governors. Depending on the processing involved, you might choose to start a Batch Apex job from the execute method. Otherwise, if you want a one-off, scheduled execution, you can use the Database.scheduleBatch method.
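A minimal sketch of these two options follows; the NightlyProcessJob class name and schedule are hypothetical, while ProcessRaceDataJob is the Batch Apex job developed later in this chapter:

```apex
// Hypothetical scheduled job that starts a Batch Apex job from its execute method
public class NightlyProcessJob implements Schedulable {
    public void execute(SchedulableContext ctx) {
        Database.executeBatch(new ProcessRaceDataJob());
    }
}

// Recurring schedule (2 a.m. daily) via System.schedule and a cron expression
System.schedule('Nightly Process', '0 0 2 * * ?', new NightlyProcessJob());

// Or a one-off execution, 60 minutes from now, via Database.scheduleBatch
Database.scheduleBatch(new ProcessRaceDataJob(), 'One-off Process', 60);
```

Database.scheduleBatch is convenient for deferred one-off runs as it removes the need to write a Schedulable class at all.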
Consider the following implementation guidelines when utilizing @future jobs:

- Avoid submitting large volumes of work via individual @future jobs (consider Batch Apex).
- @future methods should be bulkified; if there is a possibility of bulk data loading, there is a greater chance of hitting the daily per-user limit for @future jobs.
- An @future method cannot be invoked from within an @future, Batch Apex, or Queueables job. This is particularly an issue if your packaged code can be invoked by code (such as that written by developer X) in a subscriber org. If they have their own @future methods indirectly invoking your use of this feature, an exception will occur. As such, try to avoid using these features from a packaged Apex Trigger context.

Consider the following implementation guidelines when utilizing Queueables:
- Implement the Queueable interface and its execute method. Then, use the System.enqueueJob method to start your job, for example:

  public class ProcessWorkJob implements Queueable {
      public void execute(QueueableContext context) { /* perform the work here */ }
  }
  Id jobId = System.enqueueJob(new ProcessWorkJob());

- Queueables can contain serializable state that represents the work to be undertaken. This can be complex lists, maps, or your own Apex types, in contrast to the simple parameter types supported by @future jobs.

In our scenario, the Race Data records loaded into Salesforce do not by default associate themselves with the applicable Contestant records. As noted earlier, the Contestant record has a Race Data Id field that contains an Apex-calculated unique value made up of the Year, Race Name, and Driver Id fields concatenated together.
The following Batch Apex job is designed to process all the Race Data records, calculate the compound Contestant Race Data Id field, and update the records with the associated Contestant record relationship. The following example highlights a number of other things in terms of the Selector and Service layer patterns usage within this context:
- The RaceDataSelector class is used to provide the QueryLocator method needed, keeping the SOQL logic (albeit simple in this case) encapsulated in the Selector layer.
- The RaceService.processData method (added to support this job) is passed the IDs of the records rather than the records given to the execute method in the scope parameter. This general approach avoids any concern over the record data being old, as Batch Apex serializes all record data at the start of the job.
- The LogService class is used to capture exceptions thrown and log them to a Custom Object, utilizing the Apex job ID as a means to differentiate log entries from those of other jobs. Note that the use of LogService is for illustration only; it is not implemented in the sample code for this chapter.
- In the finish method, a new NotificationService encapsulates logic that is able to notify end users (perhaps via email, tasks, or Chatter, as considered earlier) of the result of the job, passing on the Apex job ID, such that it can leverage the information stored in the log Custom Object and/or AsyncApexJob to form an appropriate message to the user. Again, this is for illustration only.

The following code shows the implementation of the Batch Apex job to process the race data records:
public class ProcessRaceDataJob implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext ctx) {
        return RaceDataSelector.newInstance().selectAllQueryLocator();
    }
    public void execute(Database.BatchableContext ctx, List<RaceData__c> scope) {
        try {
            Set<Id> raceDataIds = new Map<Id, SObject>(scope).keySet();
            RaceService.processData(raceDataIds);
        } catch (Exception e) {
            LogService.log(ctx.getJobId(), e);
        }
    }
    public void finish(Database.BatchableContext ctx) {
        NotificationService.notify(ctx.getJobId(), 'Process Race Data');
    }
}
Note that even the code to start the Apex job is implemented in a Service method; this allows certain pre-checks for concurrency and configuration to be encapsulated. To run this job, a list view Custom Button has been placed on the Race Data object in order to start the process. The controller calls the Service method to start the job as follows:
Id jobId = RaceService.runProcessDataJob();
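A sketch of what such a Service method might look like follows; the concurrency pre-check and the RaceServiceException type shown here are illustrative assumptions, not the chapter's actual implementation:

```apex
public static Id runProcessDataJob() {
    // Illustrative pre-check: prevent two concurrent runs of the same job
    List<AsyncApexJob> running = [
        SELECT Id FROM AsyncApexJob
        WHERE ApexClass.Name = 'ProcessRaceDataJob'
          AND Status IN ('Queued', 'Preparing', 'Processing')];
    if (!running.isEmpty()) {
        throw new RaceServiceException('A Process Race Data job is already running.');
    }
    // Scope size of 2000, as discussed below; could be read from a protected Custom Setting
    return Database.executeBatch(new ProcessRaceDataJob(), 2000);
}
```

Encapsulating the job start this way also gives a single place to later add configuration checks without touching the controller.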
Consider the following implementation guidelines when utilizing Batch Apex:

- Records returned by the start method are cached by the platform and passed back via the execute method, as opposed to being the latest records from the database at that time. Depending on how long the job takes to execute, such information might have changed on the database in the meantime.
- Tune the scope size (the number of records passed per execute method call) to make the optimum use of governors and throughput.

Note that the ability to start a Batch Apex job from another can be a very powerful feature in terms of processing large amounts of data through various stages or implementing a clean-up process. This is often referred to as Batch Apex Chaining. However, keep in mind that there is no guarantee the second job will start successfully, so ensure you have considered error handling and recovery situations carefully.
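Chaining can be sketched as follows, by starting the next stage from the finish method of the first job; the CleanUpRaceDataJob class name is hypothetical:

```apex
public class ProcessThenCleanUpJob implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext ctx) {
        return RaceDataSelector.newInstance().selectAllQueryLocator();
    }
    public void execute(Database.BatchableContext ctx, List<RaceData__c> scope) {
        // First-stage processing here
    }
    public void finish(Database.BatchableContext ctx) {
        // Chain the next stage; this can fail (for example, if async limits
        // have been reached), so capture and log the error for recovery
        try {
            Database.executeBatch(new CleanUpRaceDataJob());
        } catch (Exception e) {
            LogService.log(ctx.getJobId(), e);
        }
    }
}
```

Recording the failure, as above, at least gives an administrator or a scheduled retry process something to act on if the second job never starts.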
As stated previously, the platform ultimately controls when and how long your job takes. However, there are a few tips that you can use to improve things:

- Ensure that the SOQL query within the start method returns as quickly as possible by leveraging indexes.
- Tune the number of records passed to each execute method. This will be a balance between the maximum number of records that can be passed to each execute (which is currently 2000) and the governors required to process the records. The preceding example uses 2000, as the operation itself is fairly inexpensive. This means that only 50 chunks are required to execute the job (given the test volume data created in this chapter), which, in general, allows the job to execute faster, as each burst of activity gets more done at a time. Utilize the Limits class with some debug statements when volume testing to determine how much of the governors your code is using, and if you can afford to do so, increase the scope size applied. Note that if you get this wrong (as in, the data complexity in the subscriber orgs is greater than what you calibrated for with your test data), it can be adjusted further in the subscriber org via a protected Custom Setting.

The RaceService.processData service method demonstrates a great use of external ID fields when performing DML to associate records. Note that in the following code, it has not been necessary to query the Contestant__c records to determine the applicable record ID to place in the RaceData__c.Contestants__c field. Instead, the Contestants__r relationship field is used to store an instance of the Contestant__c object with only the external ID field populated; as it is also unique, it can be used in this way instead of the ID.
public void processData(Set<Id> raceDataIds) {
    fflib_SObjectUnitOfWork uow = Application.UnitOfWork.newInstance();
    for (RaceData__c raceData :
            (List<RaceData__c>) Application.Selector.selectById(raceDataIds)) {
        raceData.Contestant__r = new Contestant__c(
            RaceDataId__c = Contestants.makeRaceDataId(
                raceData.Year__c, raceData.RaceName__c, raceData.DriverId__c));
        uow.registerDirty(raceData);
    }
    uow.commitWork();
}
If the lookup by external ID fails during the DML update (performed by the Unit Of Work in the preceding case), a DML exception is thrown, for example:
Foreign key external ID: 2014-suzuka-98 not found for field RaceDataId__c in entity Contestant__c
This approach of relating records without using or looking up the Salesforce ID explicitly is also available in the Salesforce SOAP and REST APIs, as well as through the various Data Loader tools. However, in a Data Loader scenario, the season, race, and driver ID values will have to be pre-concatenated within the CSV file.
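For example, such a CSV file might carry a pre-concatenated external ID column alongside the source fields; the column names here are hypothetical, and the sample value reuses the 2014-suzuka-98 format shown in the error message above:

```
Year,RaceName,DriverId,ContestantRaceDataId
2014,suzuka,98,2014-suzuka-98
2014,suzuka,44,2014-suzuka-44
```

The ContestantRaceDataId column would then be mapped to the external ID field on the Contestant lookup relationship during the load; the exact mapping syntax varies by tool.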