In Chapter 5, we already discussed most of the basic functionality of the org.eclipse.core.runtime plug-in. This chapter covers the remaining facilities of the Eclipse Platform runtime: APIs for logging, tracing, storing preferences, and other such core functionality. These various services, although not strictly needed by all plug-ins, are common enough that they merit being located directly alongside the Eclipse kernel. In Eclipse 3.0, this plug-in was expanded to add infrastructure for running and managing background operations. This chapter answers some of the questions that may arise when you start to use this new concurrency infrastructure.
A progress monitor is a callback interface that allows a long-running task to report progress and respond to cancellation. Typically, a UI component will create a monitor instance and pass it to a low-level component that does not know or care about the UI. Thus, an IProgressMonitor is an abstraction that allows for decoupling of UI and non-UI components.
Each monitor instance has a strictly defined lifecycle. The first method that must be called is beginTask, which specifies a description of the operation and the number of units of work that it will take. This work value doesn't need to be very precise; your goal here is to give the user a rough estimate of how long the operation will take. If you have no way of estimating the amount of work, you can pass a work value of IProgressMonitor.UNKNOWN, which will result in a continuously animated progress monitor that does not give any useful information to the user.
After beginTask, you should call subTask and worked periodically as the task progresses. The sum of the values passed to the worked method must equal the total work passed to beginTask. The subTask messages can be sent as often as you like, as they provide more details about what part of the task is currently executing. Again, you don't need to be precise here. Simply give the user a rough idea of what is going on.
Finally, you must call done on the monitor. One consequence of calling done is that any unused portion of the progress bar will be filled up. If your code is part of a larger operation, failing to call done will mean that the portion of the progress bar allotted to your part of the operation will not be filled. To ensure that done gets called, you should place it in a finally block at the very end of your operation.
Here is a complete example of a long-running operation reporting progress:
try {
    monitor.beginTask("Performing decathlon: ", 10);
    monitor.subTask("hammer throw");
    //perform the hammer throw
    monitor.worked(1);
    //... repeat for remaining nine events
} finally {
    monitor.done();
}
The monitor can also be used to respond to cancellation requests. When the user requests cancellation, the method isCanceled will return true. Your long-running operation should check this value occasionally and abort if a cancellation has occurred. A common method of quickly aborting a long-running operation is to throw OperationCanceledException.
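The cancellation check fits naturally into the monitor lifecycle described above. The following sketch (using a hypothetical processItems method; it assumes the Eclipse runtime classes are on the classpath) shows one way to combine beginTask, worked, the cancellation check, and the mandatory done call:

```java
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.OperationCanceledException;

// Hypothetical long-running operation: one unit of work per item,
// with a cancellation check on every iteration.
void processItems(Object[] items, IProgressMonitor monitor) {
    try {
        monitor.beginTask("Processing items", items.length);
        for (int i = 0; i < items.length; i++) {
            if (monitor.isCanceled())
                throw new OperationCanceledException();
            // ... process items[i] ...
            monitor.worked(1);
        }
    } finally {
        monitor.done(); // always called, even on cancellation
    }
}
```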
When using progress monitors in Eclipse, an important rule is that all API methods expect a fresh, unused progress monitor. You cannot pass them a monitor that has had beginTask called on it or a monitor that has already recorded some units of work. The reasons for this are clear. API methods can be called from a variety of places and cannot predict how many units of work they represent in the context of a long-running operation. An API method that deletes a file might represent all the work for an operation or might be called as a small part of a much larger operation. Only the code at the top of the call chain has any way of guessing how long the file deletion will take in proportion to the rest of the task.
But if every API method you call expects a fresh monitor, how do you implement an operation that calls several such API methods? The solution is to use a SubProgressMonitor, which acts as a bridge between caller and callee. This monitor knows how many work units the parent has allocated for a given work task and how many units of work the child task thinks it has. When the child task reports a unit of progress, the SubProgressMonitor scales that work in proportion to the number of parent work units available.
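The scaling itself is simple proportion arithmetic. The following standalone sketch is not Eclipse code; it merely mirrors the kind of calculation a sub-monitor performs when mapping child work units onto the parent's allocation:

```java
public class WorkScaling {
    // Scale 'childWorked' units out of 'childTotal' into the
    // 'parentAllocated' units reserved in the parent monitor.
    static double scaledParentWork(int childWorked, int childTotal,
                                   int parentAllocated) {
        return (double) childWorked * parentAllocated / childTotal;
    }

    public static void main(String[] args) {
        // A child that has reported 50 of its 100 units, with 8 parent
        // units allocated, has consumed 4 units of the parent's bar.
        System.out.println(scaledParentWork(50, 100, 8));
    }
}
```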
If you are lost at this point, it is probably best to look at a simple example. This fictional move method is implemented by calling a copy method, followed by a delete method. The move method estimates that the copying will take 80 percent of the total time and that the deletion will take 20 percent of the time:
public void move(File a, File b, IProgressMonitor pm) {
    try {
        pm.beginTask("Moving", 10);
        copy(a, b, new SubProgressMonitor(pm, 8));
        delete(a, new SubProgressMonitor(pm, 2));
    } finally {
        pm.done();
    }
}
The copy and delete methods, in turn, will each call beginTask on the SubProgressMonitor that was allocated to them. The copy method might decide to report one unit of work for each 8KB chunk of the file. Regardless of the size of the file, the SubProgressMonitor knows that it can report only eight units of work to the parent monitor, and so it will scale reported work units accordingly.
FAQ 119 How do I use progress monitors?
The Eclipse runtime plug-in provides a simple set of APIs for logging exceptions, warnings, or other information useful in debugging or servicing a deployed Eclipse product. The intent of the log is to record information that can be used later to diagnose problems in the field. Because this information is not directed at users, you do not need to worry about translating messages or simplifying explanations into a form that users will understand. The idea is that when things go wrong in the field, your users can send you the log file to help you figure out what happened.
Each plug-in has its own log associated with it, but all logged information eventually makes its way into the platform log file (see the getLogFileLocation method on Platform). The log for a plug-in is accessed from the plug-in's class, using getLog inherited from Plugin. You can attach a listener to an individual log or to the platform log if you are interested in receiving notification of logged events. Use addLogListener on either Platform or the result of Plugin.getLog().
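As a minimal sketch, a platform-wide log listener might look like this (it assumes the Eclipse runtime is available and simply echoes each logged status):

```java
import org.eclipse.core.runtime.ILogListener;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Platform;

// Echo every status written to any plug-in's log. The second
// parameter identifies the plug-in that produced the entry.
Platform.addLogListener(new ILogListener() {
    public void logging(IStatus status, String plugin) {
        System.out.println(plugin + ": " + status.getMessage());
    }
});
```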
You can write any kind of IStatus object to the log file, including a MultiStatus if you have hierarchies of information to display. If you create your own subclass of the utility class Status, you can override the getMessage method to return extra information to be displayed in the log file. Many plug-ins add convenience methods to their plug-in class for writing messages and errors to the log:
import org.eclipse.core.runtime.Status;
...
public void log(String msg) {
    log(msg, null);
}
public void log(String msg, Exception e) {
    getLog().log(new Status(Status.INFO, myPluginID, Status.OK, msg, e));
}
During development, you can browse and manipulate the platform log file using the PDE Error Log view (Window > Show View > Other > PDE Runtime > Error Log). You can also have the log file mirrored in the Java console by starting Eclipse with the -consoleLog command-line argument.
eclipse -vm c:\jre\bin\java.exe -consoleLog
We explicitly pass the VM because on Windows you have to use java.exe instead of javaw.exe if you want the Java console window to appear.
FAQ 27 Where can I find that elusive .log file?
During development, it is common practice to print debugging messages to standard output. One common idiom for doing this is
private static final boolean DEBUG = true;
...
if (DEBUG)
    System.out.println("So far so good");
The advantage of this approach is that you can flip the DEBUG field to false when it comes time to deploy your code. The Java compiler will then remove the entire if block from the class file as flow analysis reveals that it is unreachable. The downside of this approach is that all the hard work that went into writing useful debug statements is lost in the deployed product. If your user calls up with a problem, you will have to send a new version of your libraries with the debug switches turned on before you can get useful feedback. Eclipse provides a tracing facility that is turned off by default but can be turned on in the field with a few simple steps.
To instrument your code, use the methods Plugin.isDebugging and Platform.getDebugOption to add conditions to your trace statements:
private static final String DEBUG_ONE =
    "org.eclipse.faq.examples/debug/option1";
...
String debugOption = Platform.getDebugOption(DEBUG_ONE);
if (ExamplesPlugin.getDefault().isDebugging()
        && "true".equalsIgnoreCase(debugOption))
    System.out.println("Debug statement one.");
This approach will also allow you to turn on tracing dynamically using the method Plugin.setDebugging. This can be very useful while debugging if you want to limit the amount of trace output that is seen. You simply start with tracing off and then turn it on when your program reaches a state at which tracing is useful.
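A minimal sketch of this dynamic toggle, assuming ExamplesPlugin is your Plugin subclass as in the earlier snippets:

```java
import org.eclipse.core.runtime.Plugin;

// Turn tracing on only around the interesting phase, then off again.
Plugin plugin = ExamplesPlugin.getDefault();
plugin.setDebugging(true);
// ... run the code whose trace output you want to see ...
plugin.setDebugging(false);
```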
If you do not need dynamic trace enablement or if you are concerned about code clutter or performance, another tracing style yields cleaner and faster source:
private static final boolean DEBUG_TWO = ExamplesPlugin.getDefault().isDebugging() && "true".equalsIgnoreCase(Platform.getDebugOption( "org.eclipse.faq.examples/debug/option2")); ... if (DEBUG_TWO) System.out.println("Debug statement two.");
This tracing style is not quite as good as the standard approach outlined at the beginning of this FAQ. Because the debug flag cannot be computed statically, the compiler will not be able to completely optimize out the tracing code. You will still be left with the extra code bulk, but the performance will be good enough for all but the most extreme applications.
To turn tracing on, you need to create a trace-options file that contains a list of the debug options that you want to turn on. By default, the platform looks for a file called .options in the Eclipse install directory. This should be a text file in the Java properties file format, with one key=value pair per line. To turn on the trace options in the preceding two examples, you need an options file that looks like this:
org.eclipse.faq.examples/debug=true
org.eclipse.faq.examples/debug/option1=true
org.eclipse.faq.examples/debug/option2=true
The first line sets the value of the flag returned by Plugin.isDebugging, and the next two lines define the debug option strings returned by the getDebugOption method on Platform.
Hint: If you use tracing in your plug-in, you should keep in your plug-in install directory a .options file that contains a listing of all the possible trace options for your plug-in. This advertises your tracing facilities to prospective users and developers; in addition, the Run-time Workbench launch configuration will detect this file and use it to populate the Tracing Options page (Figure 6.1). If you browse through the plug-in directories of the Eclipse SDK, you will see that several plug-ins use this technique to document their trace options; for example, see org.eclipse.core.resources or org.eclipse.jdt.core.
Finally, you need to enable the tracing mechanism by starting Eclipse with the -debug command-line argument. You can, optionally, specify the location of the debug options file as either a URL or a file-system path after the -debug argument.
Each plug-in has a local workspace preference store for saving arbitrary primitive data types and strings. Preferences displayed in the Workbench Preferences dialog are typically stored in these local plug-in preference stores. The Import and Export buttons in the Workbench Preferences page also operate on these plug-in preference stores.
Retrieving values from plug-in preferences is fairly straightforward:
Plugin plugin = ExamplesPlugin.getDefault();
Preferences prefs = plugin.getPluginPreferences();
int value = prefs.getInt("some.key");
When preferences are changed, the preference store must be explicitly saved to disk:
prefs.setValue("some.key", 5);
plugin.savePluginPreferences();
The plug-in preference store can also store default values for any preference in that plug-in. If a value has not been explicitly set for a given key, the default value is returned. If the key has no default value set, a default default value is used: zero for numeric preferences, false for Boolean preferences, and the empty string for string preferences. If the default default is not appropriate for your preference keys, you should establish a default value for each key by overriding the initializeDefaultPluginPreferences method in your Plugin subclass:
protected void initializeDefaultPluginPreferences() {
    Preferences prefs = getPluginPreferences();
    prefs.setDefault("some.key", 1);
}
Although you can change default values at any time, it is generally advised to keep them consistent so that users know what to expect when they revert preferences to their default values. Programmatically reverting a preference to its default value is accomplished with the setToDefault method.
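For example, reverting and then persisting a single key might look like this sketch (again assuming ExamplesPlugin is your Plugin subclass and "some.key" is the key used above):

```java
import org.eclipse.core.runtime.Preferences;

// Revert one preference to its default and save the store to disk.
Preferences prefs = ExamplesPlugin.getDefault().getPluginPreferences();
prefs.setToDefault("some.key");
ExamplesPlugin.getDefault().savePluginPreferences();
```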
FAQ 124 How do I use the preference service?
In addition to the plug-in preferences for the local workspace obtained via Plugin.getPluginPreferences, a preference service new in Eclipse 3.0 can be used to store preferences in different places. This facility is similar to the Java 1.4 preferences API. Each preference object is a node in a global preference tree. The root of this preference tree is accessed by using the getRootNode method on IPreferencesService. The children of the root node represent different scopes. Each scope subtree chooses how and where it is persisted: whether on disk, in a database, or not at all. Below the root of each scope are typically one or more levels of contexts before preferences are found. The number and format of these contexts depend on the given scope. For example, the instance and default scopes use a plug-in ID as a qualifier. The project scope uses two qualifiers: the project name and the plug-in ID. Thus, the fully qualified path of a preference node for the FAQ Examples plug-in in the workspace project My Project would be
/project/My Project/org.eclipse.faq.examples
Below the level of scopes and scope contexts, the preference tree can have further children for storing hierarchies of information. This is analogous to the org.eclipse.ui.IMemento facility used for persisting user-interface states. With the new preference mechanism, it is very easy to create hierarchies of preference nodes for storing hierarchical preference data.
Lookups in this global preference tree can be done in a number of ways. If you know exactly which scope and context you are looking for, you can start at the root node and navigate downward using the node methods. Each scope typically also provides a public class for obtaining preferences within that scope. For example, you can create an instance of ProjectScope on an IProject to obtain preferences stored in that project. Finally, the preference service has methods for doing preference lookups that search through all scopes. This can be used when you allow users to specify what scope their preferences are stored at and you want more local scopes to override values in more global scopes. For example, JDT compiler preferences can be specified at the project scope if you want to override preferences for a given project. They can also be stored at the instance scope to specify preferences that apply to all projects in a workspace.
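A sketch of such a multi-scope lookup is shown below. It assumes the Eclipse 3.0 preference service API; the qualifier and key are the example values used elsewhere in this chapter:

```java
import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.ProjectScope;
import org.eclipse.core.runtime.Platform;
import org.eclipse.core.runtime.preferences.IPreferencesService;
import org.eclipse.core.runtime.preferences.IScopeContext;

IProject project = ...; // the project whose settings may override
IPreferencesService service = Platform.getPreferencesService();
// Search the project scope first, then fall back to the instance
// and default scopes; the last argument is returned if no scope
// defines a value.
IScopeContext[] contexts =
    new IScopeContext[] { new ProjectScope(project) };
String value = service.getString(
    "org.eclipse.faq.examples",  // qualifier (plug-in ID)
    "some.key",                  // preference key
    "default value",             // fallback value
    contexts);
```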
FAQ 125 What is a preference scope?
The preference service uses the notion of preference scopes to describe the various areas where preferences can be saved. The platform includes the following four preference scopes:
Configuration scope. Preferences stored in this scope are shared by all workspaces that are launched using a particular configuration of Eclipse plug-ins. On a single-user installation, this serves to capture preferences that are common to all workspaces launched by that user. On a multi-user installation, these preferences are shared by all users of the configuration.
Instance scope. Preferences in this scope are specific to a single Eclipse workspace. The old API method getPluginPreferences on Plugin stores its preferences at this scope.
Default scope. This scope is not stored on disk at all but can be used to store default values for all your keys. When values are not found in other scopes, the default scope is consulted last to provide reasonable default values.
Project scope. This scope stores values that are specific to a single project in your workspace, such as code formatter and compiler settings. Note that this scope is provided by the org.eclipse.core.resources plug-in, which is not included in the Eclipse Rich Client Platform. This scope will not exist in applications that don't explicitly include the resources plug-in.
Plug-ins can also define their own preference scopes, using the org.eclipse.core.runtime.preferences extension point. If you define your own scope, you can control how and where your preferences are loaded and stored. However, for most clients, the four built-in scopes will be sufficient.
Adapters in Eclipse are generic facilities for mapping objects of one type to objects of another type. This mechanism is used throughout the Eclipse Platform to associate behavior with objects across plug-in boundaries. Suppose that a plug-in defines an object of type X, another plug-in wants to create views for displaying X instances, and another wants to extend X to add some extra features. Extra state or behavior could be added to X through subclassing, but that allows for only one dimension of extensibility. In a single-inheritance language, clients would not be able to combine the characteristics of several customized subclasses of X into one object.
Adapters allow you to transform an object of type X into any one of a number of classes that can provide additional state or behavior. The key players are
IAdapterFactory, a facility for transforming objects of one type to objects of another type.
IAdaptable, an object that declares that it can be adapted. This object is typically the input of the adaptation process.
Adapters, the output of the adaptation process. No concrete type or interface is associated with this output; it can be any object.
IAdapterManager, the central place where adaptation requests are made and adapter factories are registered.
PlatformObject, a convenience superclass that provides the standard implementation of the IAdaptable interface.
Adapter factories can be registered either programmatically, via the registerAdapters method on IAdapterManager, or declaratively, via the adapters extension point. The only advantage of programmatic registration is that it allows you to withdraw or replace a factory at runtime, whereas the declaratively registered factory cannot be removed or changed at runtime.
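Programmatic registration might look like the following sketch. ModelObject and its getResource method are hypothetical names invented for illustration; only the IAdapterFactory and IAdapterManager APIs come from the platform:

```java
import org.eclipse.core.resources.IResource;
import org.eclipse.core.runtime.IAdapterFactory;
import org.eclipse.core.runtime.Platform;

// A factory that adapts a hypothetical ModelObject to IResource.
IAdapterFactory factory = new IAdapterFactory() {
    public Object getAdapter(Object adaptable, Class adapterType) {
        if (adapterType == IResource.class
                && adaptable instanceof ModelObject)
            return ((ModelObject) adaptable).getResource(); // hypothetical
        return null; // adaptation not supported
    }
    public Class[] getAdapterList() {
        return new Class[] { IResource.class };
    }
};
Platform.getAdapterManager().registerAdapters(factory, ModelObject.class);
```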
Adaptation typically takes place when someone requires an input of a certain type but may obtain an input of a different type, often owing to the presence of an unknown plug-in. For example, WorkbenchLabelProvider is a generic JFace label provider for presenting a model object in a tree or table. The label provider doesn't really care what kind of model object is being displayed; it simply needs to know what label and icon to display for that object. When provided with an input object, WorkbenchLabelProvider adapts the input to IWorkbenchAdapter, an interface that knows how to compute its label and icon. Thus, the adapter mechanism insulates the client—in this case, the label provider—from needing to know the type of the input object.
As another example, the DeleteResourceAction action deletes a selection of resources after asking the user for confirmation. Again, the action doesn't really care what concrete types are in the selection, as long as it can obtain IResource objects corresponding to that selection. The action achieves this by adapting the selected objects to IResource, using the adapter mechanism. This allows another plug-in to reuse these actions in a view that does not contain actual IResource objects but instead contains a different object that can be adapted to IResource.
In all these cases, the code to perform the adaptation is similar. Here is a simplified version of the DeleteResourceAction code that obtains the selected resource for a given input:
Object input = ...;
IResource resource = null;
if (input instanceof IResource) {
    resource = (IResource) input;
} else if (input instanceof IAdaptable) {
    IAdaptable a = (IAdaptable) input;
    resource = (IResource) a.getAdapter(IResource.class);
}
Note that it is not strictly necessary for the object being adapted to implement IAdaptable. For an object that does not implement this interface, you can request an adapter by directly calling the adapter manager:
IAdapterManager manager = Platform.getAdapterManager();
... = manager.getAdapter(object, IResource.class);
For an excellent design story on why Eclipse uses adapters, see Chapter 31 of Contributing to Eclipse by Erich Gamma and Kent Beck.
FAQ 177 How can I use IWorkbenchAdapter to display my model elements?
In Eclipse 3.0, infrastructure was added to support running application code concurrently. This support is useful when your plug-in has to perform some CPU-intensive work and you want to allow the user to continue working while it goes on. For example, a Web browser or mail client typically fetches content from a server in the background, allowing the user to browse existing content while waiting.
The basic unit of concurrent activity in Eclipse is provided by the Job class. Jobs are a cross between the interface java.lang.Runnable and the class java.lang.Thread. Jobs are similar to runnables because they encapsulate behavior inside a run method and are similar to threads because they are not executed in the calling thread.
This example illustrates how a job is used:
Job myJob = new Job("Sample Job") {
    protected IStatus run(IProgressMonitor monitor) {
        System.out.println("This is running in a job");
        return Status.OK_STATUS;
    }
};
myJob.schedule();
When schedule is called, the job is added to a queue of jobs waiting to be run. Worker threads then remove them from the queue and invoke their run method. This system has a number of advantages over creating and starting a Java thread:
Less overhead. Creating a new thread every time you want to run something can be expensive. The job infrastructure uses a thread pool that reuses the same threads for many jobs.
Support for progress and cancellation. Jobs are provided with a progress monitor object, allowing them to respond to cancellation requests and to report how much work they have done. The UI can listen to these progress messages and display feedback as the job executes.
Support for priorities and mutual exclusion. Jobs can be configured with varying priorities and with scheduling rules that describe when jobs can be run concurrently with other jobs.
Advanced scheduling features. You can schedule a job to run at any time in the future and to reschedule itself after completing.
Note that the same job instance can be rerun as many times as you like, but you cannot schedule a job that is already sleeping or waiting to run. Jobs are often written as singletons, both to avoid the possibility of the same job being scheduled multiple times and to avoid the overhead of creating a new object every time it is run.
The platform job mechanism uses a pool of threads, allowing it to run several jobs at the same time. If you have many jobs that you want to run in the background, you may want to prevent more than one from running at once. For example, if the jobs are accessing an exclusive resource, such as a file or a socket, you won’t want them to run simultaneously. This is accomplished by using job-scheduling rules. A scheduling rule contains logic for determining whether it conflicts with another rule. If two rules conflict, two jobs using those rules will not be run at the same time. The following scheduling rule will act as a mutex, not allowing two jobs with the same rule instance to run concurrently:
public class MutexRule implements ISchedulingRule {
    public boolean isConflicting(ISchedulingRule rule) {
        return rule == this;
    }
    public boolean contains(ISchedulingRule rule) {
        return rule == this;
    }
}
The rule is then used as follows:
Job job1 = new SampleJob();
Job job2 = new SampleJob();
MutexRule rule = new MutexRule();
job1.setRule(rule);
job2.setRule(rule);
job1.schedule();
job2.schedule();
When this example is executed, job1 will start running immediately, and job2 will be blocked until job1 finishes. Once job1 is finished, job2 will be run automatically.
You can create your own scheduling rules, which means that you have complete control of the logic for the isConflicting relation. For example, the resources plug-in has scheduling rules for files such that two files conflict if one is a parent of the other. This allows a job to have exclusive access to a complete file-system subtree.
The contains relation on scheduling rules is used for a more advanced feature of scheduling rules. Multiple rules can be “owned” by a thread at a given time only if the subsequent rules are all contained within the initial rule. For example, in a file system, a thread that owns the scheduling rule for c:\a can acquire a rule for the subdirectory c:\a\c. A thread acquires multiple rules by using the IJobManager methods beginRule and endRule. Use extreme caution when using these methods; if you begin a rule without ending it, you will lock that rule forever.
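Because of that danger, beginRule and endRule are best paired in a try/finally block. A hedged sketch, assuming rule and monitor are supplied by the surrounding code:

```java
import org.eclipse.core.runtime.Platform;
import org.eclipse.core.runtime.jobs.IJobManager;
import org.eclipse.core.runtime.jobs.ISchedulingRule;

ISchedulingRule rule = ...; // e.g., the MutexRule shown earlier
IJobManager manager = Platform.getJobManager();
try {
    // Blocks until the rule is available in this thread.
    manager.beginRule(rule, monitor);
    // ... work that requires exclusive access ...
} finally {
    // Always release the rule, even if the work fails.
    manager.endRule(rule);
}
```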
FAQ 127 Does the platform have support for concurrency?
Several methods on IJobManager (find, cancel, join, sleep, and wakeUp) require a job family object as a parameter. A job family can be any object and simply acts as an identifier to group and locate job instances. Job families have no effect on how jobs are scheduled or executed. If you define your job to belong to a family, you can use it to distinguish among various groups or classifications of jobs for your own purposes.
A concrete example will help to explain how families can be used. The Java search mechanism uses background jobs to build indexes of source files and JARs in each project of the workspace. If a project is deleted, the search facility wants to discard all indexing jobs on files in that project as they are no longer needed. The search facility accomplishes this by using project names as a family identifier. A simplified version of the index job implementation is as follows:
class IndexJob extends Job {
    String projectName;
    ...
    public boolean belongsTo(Object family) {
        return projectName.equals(family);
    }
}
When a project is deleted, all index jobs on that project can be cancelled using the following code:
IProject project = ...;
Platform.getJobManager().cancel(project.getName());
The belongsTo method is the place where a job specifies what families, if any, it is associated with. The advantage of placing this logic on the job instance itself—rather than having jobs publish a family identifier and delegate the matching logic to the job manager—is that it allows a job to specify that it belongs to several families. A job can even dynamically change what families it belongs to, based on internal state. If you have no use for job families, don't override the belongsTo method. By default, jobs will not belong to any families.
If you have a reference to a job instance, you can use the method Job.getState to find out whether it is running. Note that owing to the asynchronous nature of jobs, the result may be invalid by the time the method returns. For example, a job may be running at the time the method is called but may finish running between the time you invoke getState and the time you check its return value. For this reason, you should generally avoid relying too much on the result of this method.
The job infrastructure makes things easier for you by generally being very tolerant of methods called at the wrong time. For example, if you call wakeUp on a job that is not sleeping or cancel on a job that is already finished, the request is silently ignored. Thus, you can generally forgo the state check and simply try the method you want to call. For example, you do not need to do this:
if (job.getState() == Job.NONE)
    job.schedule();
Instead, you can invoke job.schedule() immediately. If the job is already scheduled or sleeping, the schedule request will be ignored. If the job is currently running, it will be rescheduled as soon as it completes.
If you need to be certain of when a job enters a particular state, register a job change listener on the job. When using a listener, you can be sure that you will never miss a state change. Although the job may have changed state again by the time your listener is called, you are guaranteed that a given job listener will not receive multiple events concurrently or out of order.
If you do not have a job reference, you can search for it by using the method IJobManager.find. This method will find only job instances that are running, waiting, or sleeping. To give a concrete example, the Eclipse IDE uses this method when the user launches an application. The method searches for the autobuild job and, if it is running, waits for autobuild to complete before launching the application. Here is a snippet that illustrates this behavior; the actual code is more complex because it first consults a preference setting and might decide to prompt the user:
IJobManager jobMan = Platform.getJobManager();
Job[] build = jobMan.find(ResourcesPlugin.FAMILY_AUTO_BUILD);
if (build.length == 1)
    build[0].join();
Again, it is safe to call join here without checking whether the job is still running. The join method will return immediately in this case.
It is quite simple to find out when jobs, including those owned by others, are scheduled, run, awoken, and finished. As with many other facilities in the Eclipse Platform, a simple listener suffices:
IJobManager manager = Platform.getJobManager();
manager.addJobChangeListener(new JobChangeAdapter() {
    public void scheduled(IJobChangeEvent event) {
        Job job = event.getJob();
        System.out.println("Job scheduled: " + job.getName());
    }
});
By subclassing JobChangeAdapter, rather than directly implementing IJobChangeListener, you can pick and choose which job change events you want to listen to. Note that the done event is sent regardless of whether the job was cancelled or failed to complete, and the result status in the job change event will tell you how it ended.
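A hedged sketch of inspecting that result status from a done event:

```java
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Platform;
import org.eclipse.core.runtime.jobs.IJobChangeEvent;
import org.eclipse.core.runtime.jobs.JobChangeAdapter;

Platform.getJobManager().addJobChangeListener(new JobChangeAdapter() {
    public void done(IJobChangeEvent event) {
        IStatus result = event.getResult();
        if (result.getSeverity() == IStatus.CANCEL)
            System.out.println("Job was cancelled");
        else if (!result.isOK())
            System.out.println("Job failed: " + result.getMessage());
    }
});
```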
It is common to have background work that repeats after a certain interval. For example, an optional background job refreshes the workspace with repository contents. The workspace itself saves a snapshot of its state on disk every few minutes, using a background job. Setting up a repeating job is not much more difficult than setting up a simple job. The following job reschedules itself to run once every minute:
public class RepeatingJob extends Job {
    private boolean running = true;
    public RepeatingJob() {
        super("Repeating Job");
    }
    protected IStatus run(IProgressMonitor monitor) {
        schedule(60000);
        return Status.OK_STATUS;
    }
    public boolean shouldSchedule() {
        return running;
    }
    public void stop() {
        running = false;
    }
}
The same schedule method that is used to get the job running in the first place is also used to reschedule the job while it is running. Calling schedule while a job is running will flag the job to be scheduled again as soon as the run method exits. It does not mean that the same job instance can be running in two threads at the same time, as the job is added back to the waiting queue only after it finishes running. Repeating jobs always need some rescheduling condition to prevent them from running forever. In this example, a simple flag is used to check if the job needs to be rescheduled. Before adding a job to the waiting queue, the framework calls the shouldSchedule method on the job. This allows a job to indicate whether it should be added to the waiting job queue. If the call to shouldSchedule returns false, the job is discarded. This makes it a convenient place for determining whether a repeating job should continue.