Optimizations and fine tuning

Now that we have implemented this DSL together with a test suite, we can concentrate on refactoring some of its parts in order to optimize performance.

In the Forward references section, we implemented the method variablesDefinedBefore and anticipated that its performance might not be optimal. Since that method is used in the validator, in the type system, and in the content assist, it would be good to cache its results to improve performance.

Caching usually introduces a few problems, since we must make sure that the cached contents never become stale. Xtext provides a cache that relieves us from worrying about this problem: org.eclipse.xtext.util.IResourceScopeCache. This cache is automatically cleared when a resource changes, so its contents are never stale. Moreover, its default implementation is annotated with com.google.inject.Singleton, so all our DSL components will share the same instance of the cache.

To use this cache, we call the method:

<T> T get(Object key, Resource res, Provider<T> provider)

We must provide the key of the cache, which can be any object, the Resource associated with the cache, and a Provider whose get() method is called automatically if no value is associated with the specified key.

Let's use this cache in the ExpressionsModelUtil for the implementation of variablesDefinedBefore:

@Inject IResourceScopeCache cache
…
def variablesDefinedBefore(AbstractElement containingElement) {
  cache.get(containingElement, containingElement.eResource) [
    val allElements =
      (containingElement.eContainer as ExpressionsModel).elements

    allElements.subList(0,
      allElements.indexOf(containingElement)).typeSelect(Variable)
  ]
}

We specify the AbstractElement as the key, its resource and a lambda for the Provider parameter. The lambda is simply the original implementation of the method body. Remember that the lambda will be called only in case of a cache miss. This is all we have to do to use the cache.

We now run the whole test suite, including the UI tests for the content assist, to make sure that the cache does not break anything.
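For example, a check along the following lines exercises the cached method twice on the same resource. This is only a sketch, not a test from the DSL sources: it assumes an existing JUnit test class with ParseHelper and ExpressionsModelUtil injected as extensions, and a concrete syntax for variable declarations (var name = expression) that may differ from your grammar:

@Test def void testVariablesDefinedBeforeWithCache() {
    // the concrete syntax of this input program is an assumption
    val model = '''
        var i = 0
        var j = i + 1
    '''.parse
    val second = model.elements.get(1)
    // the first call populates the cache, the second one must hit it
    // and return the same contents
    val expected = second.variablesDefinedBefore
    assertEquals(expected, second.variablesDefinedBefore)
}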

Another aspect that is worth caching is type computation. In fact, the type system is used by the validator, by the interpreter, and by the custom hover implementation. In particular, it is good to cache the type computation for cases that are not simple, such as variable references. To compute the type of a variable reference, we compute the type of the referred variable's initialization expression. This is performed over and over again for all the variable references that refer to the same variable.

Remember that the cache is shared by all the components of the DSL, so we cannot simply reuse the referred variable as the key in this case, since that would conflict with the way we use the cache in variablesDefinedBefore. Thus, in the type computer, we use a "pair" for the key, where the first element is the string "type" and the second element is the variable. A pair can be specified in Xtend with the following syntax: e1 -> e2.

This is the modified part in the type computer:

@Inject IResourceScopeCache cache
…
def dispatch ExpressionsType typeFor(VariableRef varRef) {
    if (!varRef.isVariableDefinedBefore)
        return null
    else {
        val variable = varRef.variable
        return cache.get("type" -> variable, variable.eResource) [
            variable.expression.typeFor
        ]
    }
}

Again, make sure you run the whole test suite to check that nothing is broken.

You can also try and experiment with a type computer where the type computation for all kinds of expressions is cached, along the lines of the following sketch.
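This is only a sketch, not code from the DSL sources: it assumes that Expression is the common supertype of all expressions, and it renames the existing dispatch cases to a hypothetical internalTypeFor, so that every type request goes through a single cached entry point:

// minimal sketch: Expression as the parameter type and the name
// internalTypeFor are assumptions about your type computer
def ExpressionsType typeFor(Expression e) {
    cache.get("type" -> e, e.eResource) [
        e.internalTypeFor
    ]
}

def dispatch ExpressionsType internalTypeFor(VariableRef varRef) {
    if (!varRef.isVariableDefinedBefore)
        null
    else
        // recursive requests still go through the cached typeFor
        varRef.variable.expression.typeFor
}

// the other dispatch cases are renamed to internalTypeFor in the same way

With this shape, the key no longer needs the referred variable as its second element, since every expression, including the variable's initialization expression, is cached under its own key.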

Another part that can be optimized is the interpretation of a variable reference in the ExpressionsInterpreter: instead of interpreting the same variable over and over again, we can cache the result of the interpretation of variables:

VariableRef: {
  // avoid interpreting the same variable over and over again
  val v = e.variable
  cache.get("interpret" -> v, e.eResource) [
      v.interpret
  ]
}

In the sources of this DSL, you will also find a few tests that compare the performance of the DSL with and without caching. It is important to have such tests so that you can work on fine-tuning your DSL implementation: you need to make sure that the use of the cache does not introduce overhead in some contexts, otherwise you will get the opposite effect.
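Those tests are not reproduced here, but a timing check can be as simple as the following sketch, where measureMillis is a hypothetical helper and not part of the DSL sources:

// minimal sketch of a timing helper; the name measureMillis is only illustrative
def long measureMillis(Runnable operation) {
    val start = System.nanoTime
    operation.run
    (System.nanoTime - start) / 1_000_000
}

You could then, for instance, time a loop that computes the type of every variable reference in a large parsed program, and compare the figures you get before and after introducing the cache.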
