List of Figures
Chapter 1. Learning to speak the language of the domain
Figure 1.1. Entities and collaborations from the problem domain must map to appropriate artifacts in a solution domain. The
entities shown on the left (security, trade, and so on) need corresponding representations on the right.
Figure 1.2. The problem domain and the solution domain need to share a common vocabulary for ease of communication. With this
vocabulary, you can trace an artifact of the problem domain to its appropriate representation in the solution domain.
Figure 1.3. A DSL script provides a representation of the domain language to the implementation model. It uses the common
vocabulary as the underlying dictionary that makes the language feel more natural to users.
Figure 1.4. Three execution models for a DSL script. You can directly execute the program that implements the solution domain
model. Alternatively, you can instrument bytecodes and then execute the script. Or you can do a source code translation (as
with Lisp macros) and then generate bytecodes for execution.
Figure 1.5. You implement an internal DSL using an existing host language and the infrastructure that it offers.
Figure 1.6. You need to develop your own language-processing infrastructure for an external DSL. The infrastructure includes
lexical analyzers, parsers, and code generators commonly found in high-level language implementations. Note that the complexities
of each of them depend on how detailed your language is.
Chapter 2. The DSL in the wild
Figure 2.1. Roadmap for chapter 2
Figure 2.2. An informal micro-classification of patterns used in implementing internal DSLs
Figure 2.3. Smart API using method chaining. Note how the method calls progress forward and return to the client only at the
end.
Figure 2.4. An embedded typed DSL comes with lots of implicit guarantees of consistency. Use a type to model your DSL abstraction.
The constraints that you define within your type are automatically checked by the compiler, even before the program runs.
Figure 2.5. Languages that support runtime metaprogramming let users generate code on the fly. This code can add behaviors
dynamically to existing classes and objects.
Figure 2.6. Enriching the domain syntax through runtime metaprogramming
Figure 2.7. You use macros to do compile-time metaprogramming. Your DSL script has some valid language forms and some custom
syntax that you’ve defined. The custom syntax is in the form of macros, which get expanded during the macroexpansion phase
into valid language forms. These forms are then forwarded to the compiler.
Figure 2.8. The processing stages of an external DSL. Note that unlike internal DSLs, the parser is now part of what you need
to build. In internal DSLs, you use the parser of the host language.
Figure 2.9. An informal micro-classification of common patterns and techniques of implementing external DSLs
Figure 2.10. XML used as an external DSL to abstract the Spring configuration specification. The container reads and
processes the XML during startup and produces the ApplicationContext that your application uses.
Chapter 3. DSL-driven application development
Figure 3.1. How you’ll progress through the chapter and learn the issues related to DSL-driven application development
Figure 3.2. A macroscopic view of DSL-based application architecture. Note the decoupling of the DSLs from the core application.
They have different evolution timelines.
Figure 3.3. Our application architect is having a nightmarish time thinking about how to integrate DSLs written in various
languages with the core application. It’s a time bomb that’s waiting to explode. Can you help him out?
Figure 3.4. All three DSLs integrate homogeneously with the core application. Each DSL can be deployed as a JAR file that interoperates
seamlessly in the JVM.
Figure 3.5. Integrating the Groovy DSL through the Java 6 scripting engine. The interaction diagram shows all the steps involved
in evaluating the Groovy DSL script within the sandbox of the ScriptEngine.
Figure 3.6. Role of trading and settlement accounts in the trade process
Figure 3.7. The flow as depicted in the preceding code snippet takes an account and describes the sequence until the end of
the transaction.
Figure 3.8. The three-pronged strategy for dealing with errors and exceptional states in a DSL
Figure 3.9. The compiler is the policeman!
Chapter 4. Internal DSL implementation patterns
Figure 4.1. Roadmap of the chapter
Figure 4.2. Internal DSL implementation patterns, along with example artifacts. I’ll discuss each of these artifacts in this
chapter, and provide sample implementations in the languages specified in the figure.
Figure 4.3. The steps of pattern application: The account is created through the implicit context that is set up using instance_eval.
The account is saved. The Mailer is set up using fluent interfaces and gets the account from a block.
Figure 4.4. How the super call wires up the value() method. The call starts with Commission.value(), Commission being the
last module in the chain, and propagates downward until it reaches the Trade class. Follow the solid arrow for the chain.
Evaluation follows the dotted arrows, which ultimately results in 220, the final value.
Figure 4.5. The subject (Trade class) gets all the decorators (TaxFee and Commission) and extends them dynamically using the
with() method.
Figure 4.6. Internal DSL patterns checklist up to this point. In this chapter, you’ve seen implementations of these patterns
in Ruby and Groovy.
Figure 4.7. A simplified view of a sample client activity report statement
Figure 4.8. Sample view of the account activity report sorted and grouped by the instruments traded during the day. Note how
the instruments are sorted and the quantities grouped together under each instrument.
Figure 4.9. Sample view of the account activity report sorted and grouped by the quantity of instruments traded during the
day.
Figure 4.10. Activity report computation grouped by instrument (groupBy(_.instrument)). Follow the steps in the figure and
correlate them with listing 4.10 and the snippet that follows it, which uses the DSL to compute the ActivityReport for “john
doe”.
Figure 4.11. Program structures for typed embedding of internal DSLs. These patterns teach you how to think with types in
a programming language.
Figure 4.12. Runtime metaprogramming generates code from meta-objects during runtime. The meta-objects generate more objects,
which reduces the amount of boilerplate code that you need to write.
Figure 4.13. Compile-time metaprogramming generates code through macro expansion. Note that we’re still in the compilation
phase when the code is generated. This technique doesn’t have any runtime overhead, unlike the earlier one in figure 4.12.
Chapter 5. Internal DSL design in Ruby, Groovy, and Clojure
Figure 5.1. Roadmap of the chapter
Figure 5.2. Polymorphism through duck typing. The abstractions Foo and Bar don’t have any common base class, but we can treat
them polymorphically in languages that support duck typing.
Figure 5.3. Ruby, Groovy, and Clojure present an interesting mix for DSL implementation
Figure 5.4. How we’ll enrich our Ruby DSL to implement trade processing. At every stage, we’ll make the DSL richer by using
the abstraction capability that Ruby offers and add more domain functionality.
Figure 5.5. A DSL facade offers an expressive API to the user. It also keeps the core implementation structures from being
exposed.
Figure 5.6. How a sample TradeDSL script is interpreted by the code in listing 5.6 to generate Ruby objects. An instance of
security_trade is generated through the DSL interpreter.
Figure 5.7. We’ve developed the DSL for trade generation. Now we’ll add business rules as DSLs to compute cash value of the
trade.
Figure 5.8. A look at the alternatives we implemented in our order-processing DSL in earlier chapters
Figure 5.9. How the Groovy DSL script gets transformed into the semantic model and finally into the execution model
Figure 5.10. DSL script to execution model for Clojure. Pay attention to the series of steps that the DSL script goes through
before it’s ready for execution. As we discussed in chapter 1, the semantic model bridges the DSL script and the execution
model.
Chapter 6. Internal DSL design in Scala
Figure 6.1. Our roadmap through this chapter
Figure 6.2. You don’t need to start doing Scala in production code from day one. These are some of the baby steps that you
can start with, in no specific order, during the lifetime of your project.
Figure 6.3. With Scala you can use the dual power of OO and functional programming to evolve your domain model. Using the
OO features of Scala, you can abstract over types and values, specialize a component through subtyping, and do composition
using mixins. You can also use the functional features of Scala through its higher-order functions, closures, and combinators.
Finally, you can compose all of this using modules and get your final abstraction.
Figure 6.4. Sequence of implicit conversions that leads to the construction of the FixedIncomeTrade instance. Read the figure
from left to right and follow the arrows for implicit conversions and the subsequent creation of helper objects.
Figure 6.5. Tax fee component model for the trading solution. The class diagram shows the static relationship between the
TaxFeeCalculationComponent and the collaborating abstractions.
Figure 6.6. TradeDSL has an abstract type member T <: Trade, but EquityTradeDSL has the concrete type T = EquityTrade and
FixedIncomeTradeDSL has the concrete type T = FixedIncomeTrade. TradeDSL has two specializations in EquityTradeDSL and FixedIncomeTradeDSL.
Chapter 7. External DSL implementation artifacts
Figure 7.1. Our roadmap through the chapter
Figure 7.2. The simplest form of an external DSL. The parsing infrastructure does everything necessary to produce the target
actions. The phases of processing the DSL script (lexicalization, parsing, generating the AST, and code generation) are all
bundled in one monolithic block.
Figure 7.3. Separation of concerns for the four responsibilities that the single box in figure 7.2 was doing. Each dotted
region can now encapsulate the identified functionalities.
Figure 7.4. We’ve split the parsing infrastructure box of figure 7.2 into two separate abstractions. The parser takes care
of the core parsing of the syntax. The semantic model is now a separate abstraction from the parsing engine. It encapsulates
all the domain concerns that are ready to be fed into the machinery that generates all the target actions.
Figure 7.5. The semantic model evolves bottom-up as a composition of smaller domain abstractions. You develop smaller abstractions
for domain entities, as indicated by the dotted rectangles. You then compose them together to form larger entities. In the
end, you have the entire domain abstracted in your semantic model.
Figure 7.6. The process of parsing. The language script is fed into the lexical analyzer, which tokenizes it and feeds the
tokens into the parser.
Figure 7.7. The parser generator takes the grammar rules and the custom actions as input. It then generates the lexical analyzer
and the parser, which accept the DSL script and generate the semantic model. Then you can integrate this model with the core
application.
Figure 7.8. How top-down and bottom-up parsers construct their parse trees.
Figure 7.9. Xtext processes the textual grammar rules and generates a number of artifacts. Chief among them is the Ecore
metamodel, which abstracts the syntax that the model of the grammar uses.
Figure 7.10. Hierarchical, or outline, view of the model. The outline view shows the structure associated with each rule. You
can sort the elements alphabetically and select an element to navigate to the corresponding one in the text editor.
Figure 7.11. The metamodel of the order-processing DSL. Every production rule from our grammar returns an Ecore model element
like EString and EInt.
Figure 7.12. The Xtext metamodel provides you with an excellent editor for writing DSLs. See how the code completion suggests
alternatives for you? You can also see the syntax highlighting feature, which is quite handy.
Figure 7.13. The semantic models need to be decoupled from the grammar rules. For one set of grammar rules, you can have multiple
semantic models.
Chapter 8. Designing external DSLs using Scala parser combinators
Figure 8.1. Our roadmap through the chapter
Figure 8.2. Chaining parsers. Parser #1 parses a part of the input stream. Parser #2 matches the part left over by parser
#1. The combination returns a parser that combines the two results. The combination parser succeeds only if both parser #1
and parser #2 match their inputs.
Figure 8.3. Implementation architecture of designing an external DSL using an external parser generator like ANTLR. The generator
produces a parser that parses the DSL script and generates the semantic model of the application.
Figure 8.4. Implementation architecture of an external DSL designed using parser combinators. You’re completely within the
confines of the host language infrastructure when you define grammar rules and custom actions.
Figure 8.5. Modeling a parser as a function in the Scala library
Figure 8.6. Generate an order using all the other attributes as inputs
Figure 8.7. Combinators compose smaller parsers and give rise to bigger ones.
Figure 8.8. The parse tree that’s generated from the parsing process of the grammar in listing 8.2. The dotted portion represents
repetition of line_item, which ultimately reduces to the order node of the tree.
Figure 8.9. A detailed run-down of how a sample grammar rule returns a Parser[Order]. items and account_spec each flow in
as a Parser to the rule. The sequence combinator runs Parser[Items] and Parser[AccountSpec] and passes the results as an
instance of ~ to the function application combinator. Pattern matching is done and an Order instance is created. Through
an implicit conversion, Order is lifted into a Parser and is returned.
Figure 8.10. The trade and settlement processes. A trade is a promise made between two counterparties for the exchange of
securities and cash. Settlement is the actual commitment that transfers securities and cash between the counterparty accounts
to change positions.
Figure 8.11. The SSIs are needed to complete the process of settlement. The brokers and the custodians need to know the bank
and account information where the securities and cash need to be transferred.
Chapter 9. DSL design: looking forward
Figure 9.1. Our roadmap through the chapter
Figure 9.2. Evolution of expressiveness in programming languages
Figure 9.3. Evolution of the features in programming languages we use to develop DSLs
Figure 9.4. DSL workbenches support the full lifecycle of a DSL implementation. Domain experts work with higher-level structures
like Microsoft Excel. The workbench stores metadata instead of program text. The metadata can be projected onto smart editors
called projectional editors where you can edit, version, and manage it. The workbenches also have the facility to generate
code to programming languages like Java.
Figure 9.5. In an IDE, besides the core part, you can implement your own plugins. For your DSL, you can design a syntax highlighter
as a plugin and introduce it alongside the rest of the IDE.
Appendix A. Role of abstractions in domain modeling
Figure A.1. Subtyping through interface inheritance. Subtypes FixedIncome and Equity inherit only the interface from the supertype
Instrument and provide their own implementations.
Figure A.2. Coupling in implementation inheritance. The implementations of issue() in FixedIncome and Equity reuse the implementation
from the superclass.
Figure A.3. Mixin-based inheritance. ExoticInstrument gets the implementation of issue() and close() from Instrument, then
gets composed from the mixins CouponPayment, Maturable, and Tradable.
Figure A.4. Command decouples invoker and receiver from the actions. This makes the command object reusable outside the current
context of execution.
Figure A.5. MacroCommand composes commands. Execution of the MacroCommand results in a cascaded execution of its composed
commands.
Appendix B. Metaprogramming and DSL design
Figure B.1. The role of the language metamodel in DSL execution
Figure B.2. Groovy metaprogramming inflection points in our order-processing DSL
Figure B.3. Lisp uses macros to provide compile-time metaprogramming support
Figure B.4. Lisp as the DSL. Lisp macros get transformed into valid Lisp forms and get submitted to the compiler.
Appendix G. Polyglot development
Figure G.1. Don’t let this happen to you!