In AbstractingOverContextPart1, we began our discussion of the powerful tools and idioms in Scala 2 and 3 for abstracting over context. In particular, we discussed type classes, extension methods, and implicit conversions as tools for extending the behaviors of existing types.
This chapter explores using clauses, which work with given instances to address particular design scenarios and to simplify user code.
The other major use of context abstractions is to provide method parameters implicitly rather than explicitly. When a method argument list begins with the keyword using (Scala 3) or implicit (Scala 2 and 3), the user does not have to provide values explicitly for the parameters, as long as given instances (explored in the previous chapter) are in scope that the compiler can use instead.
In Scala 2 terminology, those parameters were called implicit parameters and the whole list of parameters is an implicit parameter list or implicit parameter clause. In Scala 3, they are context parameters and the whole parameter list is a using clause.1 Here is an example:
class BankAccount(...):
  def debit(amount: Money)(using transaction: Transaction) = ...
Here, the using clause starts with the using keyword and contains the context parameter transaction.
The values in scope that can be used to fill in these parameters are called implicit values in Scala 2. In Scala 3, they are the given instances, or givens for short, that we studied last chapter. I'll mostly use the Scala 3 terminology in this book; when I use Scala 2 terminology, it will usually be when discussing a Scala 2 library that uses implicit definitions and parameters. Scala 3 more or less treats them interchangeably, although the Scala 2 implicits will be phased out eventually.
For each parameter in a using clause, a type-compatible given must exist in the enclosing scope. Using Scala 2-style implicits, an implicit value or an implicit function returning a compatible value must be in scope.
For each parameter in a using clause, a type-compatible given must exist in the enclosing scope. Using Scala 2-style implicits, an implicit value or an implicit function returning a compatible value must be in scope.
For comparison, recall you can also define default values for method parameters. While sufficient in many circumstances, they are statically scoped to the definition at compile time and they are defined by the implementer of the method. Using clauses, on the other hand, provide greater flexibility for users of a method.
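To make the contrast concrete, here is a minimal sketch; the LogLevel type and the two log methods are hypothetical, not from the example code that follows:

case class LogLevel(name: String)

// A default value is fixed by the method's implementer at definition time:
def logWithDefault(msg: String, level: LogLevel = LogLevel("INFO")): String =
  s"[${level.name}] $msg"

// A using clause lets each caller control the value via a given in scope:
def logWithUsing(msg: String)(using level: LogLevel): String =
  s"[${level.name}] $msg"

given LogLevel = LogLevel("DEBUG")
logWithUsing("it happened")  // "[DEBUG] it happened"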
As an example, suppose we implement a simple type that wraps sequences for convenient sorting (ignoring the fact that this capability is already provided by Seq). One way to do this is for the user to supply an implementation of math.Ordering, which knows how to sort elements of the particular type used in the sequence. That object could be passed as an argument to the sort method, but the user might also like the ability to specify the value once, as an implicit, and then have all sequences of the same element type use it automatically.
This first implementation uses syntax valid for both Scala 2 and 3:
// src/script/scala-2/progscala3/contexts/ImplicitClauses.scala

case class SortableSeq[A](seq: Seq[A]) {
  def sortBy1[B](transform: A => B)(implicit o: Ordering[B]): SortableSeq[A] =
    new SortableSeq(seq.sortBy(transform)(o))
  def sortBy2[B : Ordering](transform: A => B): SortableSeq[A] =
    new SortableSeq(seq.sortBy(transform)(implicitly[Ordering[B]]))
}

val seq = SortableSeq(Seq(1, 3, 5, 2, 4))

def defaultOrdering() = {
  assert(seq.sortBy1(i => -i) == SortableSeq(Seq(5, 4, 3, 2, 1)))
  assert(seq.sortBy2(i => -i) == SortableSeq(Seq(5, 4, 3, 2, 1)))
}
defaultOrdering()

def oddEvenOrdering() = {
  implicit val oddEven: Ordering[Int] = new Ordering[Int] {
    def compare(i: Int, j: Int): Int = i % 2 compare j % 2 match {
      case 0 => i compare j
      case c => c
    }
  }
  assert(seq.sortBy1(i => -i) == SortableSeq(Seq(5, 3, 1, 4, 2)))
  assert(seq.sortBy2(i => -i) == SortableSeq(Seq(5, 3, 1, 4, 2)))
}
oddEvenOrdering()
Use braces, because this is also valid Scala 2 code.
Wrap examples in methods to scope the use of implicits.
Uses the default ordering provided by math.Ordering for Ints.
Define a custom oddEven ordering, which will be the "closest" implicit value in scope for the following lines.
Implicitly use the custom oddEven ordering.
Let's focus on sortBy1 for now. All the implicit parameters must be declared in their own parameter list. Here we need two lists, because we have a regular parameter, the function transform. If we only had implicit parameters, we would need only one parameter list.
The implementation of sortBy1 just uses the Seq.sortBy method in the collections library. It takes a function that transforms the values to affect the sorting, and an Ordering instance to sort the values after transformation.
There is already a default implicit implementation in scope for math.Ordering[Int], so we don't need to supply one if we want the usual numeric ordering. The anonymous function i => -i transforms the integers to their negative values for the purposes of ordering, which effectively results in sorting from highest to lowest.
Next, let's discuss the other method, sortBy2, and also explore new Scala 3 syntax for this purpose.
If you think about it, while SortableSeq is declared to support any element type A, the two sortBy* methods "bound" the allowed types to those for which an Ordering exists. Hence, the term context bound is used for the implicit value in this situation.
In SortableSeq.sortBy1, the implicit parameter o is a context bound. A major clue is the fact that it has type Ordering[B], meaning it is parameterized by the output element type, B. So, while it doesn't bound A explicitly, the result of applying transform is to convert A to B, and then B is context bound by Ordering[B].
Context bounds are so common that Scala 2 defined a more concise way of declaring them in the types, as shown in sortBy2, where the syntax B : Ordering appears. (Note that it's not B : Ordering[B].)
In the generated byte code for Scala 2, this is just shorthand for the same code we wrote explicitly for sortBy1, with one difference. In sortBy1, we defined a name for the Ordering parameter, o, in the second argument list. We don't have a name for it in sortBy2, but we need it in the body of the method. The solution is to use the method Predef.implicitly, as shown in the method body. It "binds" the implicit Ordering that is in scope so it can be passed as an argument.
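For reference, implicitly does no work of its own; it is defined in Predef essentially as follows:

def implicitly[T](implicit e: T): T = e

It simply asks the compiler to resolve an implicit value of type T and returns it.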
Let’s rewrite this code in Scala 3:
// src/script/scala/progscala3/contexts/UsingClauses.scala

case class SortableSeq[A](seq: Seq[A]):
  def sortBy1a[B](transform: A => B)(using o: Ordering[B]): SortableSeq[A] =
    new SortableSeq(seq.sortBy(transform)(o))
  def sortBy1b[B](transform: A => B)(using Ordering[B]): SortableSeq[A] =
    new SortableSeq(seq.sortBy(transform)(summon[Ordering[B]]))
  def sortBy2[B : Ordering](transform: A => B): SortableSeq[A] =
    new SortableSeq(seq.sortBy(transform)(summon[Ordering[B]]))
The sortBy1a method is identical to the previous sortBy1 method, with a using clause instead of an implicit parameter list. In sortBy1b, we see that the name can be omitted and a new Predef method, summon, is used to bind the value instead. (It is identical to implicitly.) The sortBy2 here is written identically to the previous one in ImplicitClauses, but in Scala 3 it is implemented with a using clause.
The previously defined test methods, defaultOrdering and oddEvenOrdering, are almost the same in this source file, but are not shown here. There is an additional test method in this file that uses a given instance instead of an implicit value:
def evenOddGivenOrdering() =
  given evenOdd as Ordering[Int] = new Ordering[Int]:
    def compare(i: Int, j: Int): Int = i % 2 compare j % 2 match
      case 0 => i compare j
      case c => -c

  val expected = SortableSeq(Seq(4, 2, 5, 3, 1))
  assert(seq.sortBy1a(i => -i) == expected)
  assert(seq.sortBy1b(i => -i) == expected)
  assert(seq.sortBy2(i => -i) == expected)
  assert(seq.sortBy1a(i => -i)(using evenOdd) == expected)
  assert(seq.sortBy1b(i => -i)(using evenOdd) == expected)
  assert(seq.sortBy2(i => -i)(using evenOdd) == expected)
evenOddGivenOrdering()
The syntax given foo as Type[T] is used instead of implicit val foo: Type[T], essentially the same way we used givens when discussing type classes. Recall the use of as, too.
If the using clause is provided explicitly, as in the last three assertions above, the using keyword is required in Scala 3, whereas Scala 2 didn't require the implicit keyword here. The reason using is now required is twofold. First, it's better documentation for the reader. Second, it removes an ambiguity that is illustrated in the following contrived Scala 2 example:
case class FastSeq[T](implicit storage: Storage[T]):
  def apply(i: Int): Option[T] = storage.get(i)

implicit val customStorage: Storage[String] = ???
val opt = FastSeq[String](5)
A "fast" sequence implementation with user-pluggable storage.
Optionally return the item at index i.
Does this line return a None, because the sequence is empty?
In Scala 2, the last line would cause a compiler error for the argument 5, saying that a Storage instance was expected as the argument. The actual user intention was for the instance to be constructed with the implicit value customStorage, and then the apply method was to be called with 5. Instead, you would have to use the unintuitive expression FastSeq[String].apply(5).
Now this ambiguity is removed by requiring using when the implicit is provided explicitly. As written, the compiler knows that you want to use the implicit for the storage and then call apply(5).
The intent of the new given name as … syntax and the using … syntax is to make their purpose more explicit, but they function almost identically to Scala 2 implicit definitions and parameters.
Context parameters can be by-name parameters. Here is an example adapted from this Dotty documentation.
// src/script/scala/progscala3/contexts/ByNameContextParameters.scala

trait Codec[T]:
  def write(x: T): Unit

given intCodec as Codec[Int]:
  def write(i: Int): Unit = println(i)

given optionCodec[T](using ev: => Codec[T]) as Codec[Option[T]]:
  def write(xo: Option[T]) = xo match
    case Some(x) => ev.write(x)
    case None =>

val s = summon[Codec[Option[Int]]]
s.write(Some(33))
s.write(None)
Note that ev for optionCodec[T] is a by-name parameter, which means its evaluation is delayed until used. Using a by-name parameter here can avoid certain cases where a divergent expansion can happen, as the compiler chases its tail trying to resolve all using clause parameters.
In ATasteOfFutures, we saw that Future.apply has a second, implicit argument list that is used to pass an ExecutionContext:
object Future:
  def apply[T](body: => T)(implicit executor: ExecutionContext): Future[T]
  ...
It is not a context bound, because the ExecutionContext is independent of T.
We didn't specify an ExecutionContext when we called these methods, but we imported a global default that the compiler used:
import scala.concurrent.ExecutionContext.Implicits.global

Future(...)                                // Use the implicit value
Future(...)(using customExecutionContext)  // Explicit argument with "using"
Future supports many operations, like filter, map, etc. All take two argument lists, like Future.apply. Having a using clause for the ExecutionContext makes the code much cleaner:
given customExecutionContext: ExecutionContext = ???

val f1 = Future(...)(using customExecutionContext)
           .map(...)(using customExecutionContext)
           .filter(...)(using customExecutionContext)
// versus:
val f2 = Future(...).map(...).filter(...)
Other examples of using contexts (my term) include transaction identifiers, database connections, and web sessions.
The example shows that using contexts can make code more concise, but they can be overused in Scala code. When you see the same using FooContext all over a code base, it feels more like a global variable than pure functional programming.
Context functions are functions with context parameters only. Scala 3 introduces a new context function type for them, indicated by ?=> (e.g., ExecutionContext ?=> T), with special handling depending on how they are used. In essence, context functions abstract over context parameters.
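Here is a minimal sketch of the idea before the more realistic example below; the Appender alias and greet method are hypothetical:

type Appender = StringBuilder ?=> Unit

def greet(name: String): Appender =
  summon[StringBuilder].append(s"Hello, $name!")

given sb: StringBuilder = new StringBuilder
greet("World")  // The compiler applies the context function to the given sb.
assert(sb.toString == "Hello, World!")

The body of greet is expanded by the compiler into a context function that takes a StringBuilder context parameter, which summon then retrieves.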
Consider this alternative for handling the ExecutionContext passed to Future.apply(), using a wrapper FutureCF (for "context function"):
// src/script/scala/progscala3/contexts/ContextFunctions.scala

import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object FutureCF:
  type Executable[T] = ExecutionContext ?=> T

  def apply[T](body: => T): Executable[Future[T]] = Future(body)

def sleepN(dur: Duration): Duration =
  val start = System.currentTimeMillis()
  Thread.sleep(dur.toMillis)
  Duration(System.currentTimeMillis - start, MILLISECONDS)

val future1 = FutureCF(sleepN(1.second))
val future2 = FutureCF(sleepN(1.second))(using global)

val duration1 = Await.result(future1, 2.seconds)
val duration2 = Await.result(future2, 2.seconds)
Type alias for a context function with an ExecutionContext.
Compare this definition of FutureCF.apply() to Future.apply() above, which we are calling here. The implicit ExecutionContext is passed to Future.apply().
Define some work that will be passed to futures; sleep for some Duration and return the actual elapsed time as a Duration.
Create a future with the implicit value, like calling Future.apply().
Create another future, specifying the implicit argument explicitly.
Await the results of the futures. Wait no longer than two seconds.
The last two lines print the following (your actual numbers may vary slightly):
val duration1: concurrent.duration.Duration = 1004 milliseconds
val duration2: concurrent.duration.Duration = 1002 milliseconds
Let’s look at a more extensive example inspired by the context functions documentation page. We’ll create a domain-specific language (DSL) for constructing JSON. For simplicity, we won’t construct instances for some JSON library, just JSON-formatted strings. To motivate the example, let’s begin with an entry point that shows the DSL in action:
// src/main/scala/progscala3/contexts/json/JSONBuilder.scala
package progscala3.contexts.json

@main def TryJSONBuilder(): Unit =
  val js = obj {
    "config" -> obj {
      "master" -> obj {
        "host" -> "192.168.1.1"
        "port" -> 8000
        "security" -> "null"
        // "foo" -> (1, 2.2, "three")   // doesn't compile!
      }
      "nodes" -> array {
        aobj {                          // "array object"
          "name" -> "node1"
          "host" -> "192.168.1.10"
        }
        aobj {
          "name" -> "node2"
          "host" -> "192.168.1.20"
        }
        "otherThing" -> 2
      }
    }
  }
  println(js)
Let’s try it in SBT. I reformatted the output for better legibility:
> runMain progscala3.contexts.json.TryJSONBuilder
...
{"config": {"master": {"host": "192.168.1.1", "port": 8000, "security": null},
  "nodes": [{"name": "node1", "host": "192.168.1.10"},
            {"name": "node2", "host": "192.168.1.20"},
            "otherThing": 2]}}
Now let’s work through the implementation (same source file):
object JSONElement:
  def valueString[T](t: T): String = t match
    case "null" => "null"
    case s: String => "\"" + s + "\""
    case _ => t.toString

sealed trait JSONElement

case class JSONKeyedElement[T](key: String, element: T) extends JSONElement:
  override def toString =
    "\"" + key + "\": " + JSONElement.valueString(element)

case class JSONArrayElement[T](element: T) extends JSONElement:
  override def toString = JSONElement.valueString(element)
Return the correct string representation for a value. JSON allows nulls, for which we'll expect the user to use the string "null" (as shown in the example). Hence, valueString returns null without quotes, all other strings in double quotes, and for everything else, the output of toString.
We can model everything as either a "keyed" element of the form "key": value or just a value, but the latter only appear as elements in arrays.
Continuing, we have types for JSON objects and arrays:
import scala.collection.mutable.ArrayBuffer

trait JSONContainer(open: String, close: String) extends JSONElement:
  val elements = new ArrayBuffer[JSONElement]
  def add(e: JSONElement): Unit = elements += e
  override def toString = elements.mkString(open, ", ", close)

class JSONObject extends JSONContainer("{", "}")
class JSONArray extends JSONContainer("[", "]")
For both JSON objects and arrays, we add elements to a mutable array buffer. There are two places the add method is called, discussed below.
Note that traits can define constructor parameters, like classes; a minimal sketch appears below. For our purposes, only the opening and closing delimiters differ between objects and arrays. The concrete classes for them define the correct delimiters.
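For reference, here is a standalone sketch of trait parameters; the names are hypothetical:

trait Greeting(val name: String):
  def message = s"Hello, $name!"

class WorldGreeting extends Greeting("World")

WorldGreeting().message  // "Hello, World!"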
sealed trait ValidJSONValue[T]
given ValidJSONValue[Int]
given ValidJSONValue[Double]
given ValidJSONValue[String]
given ValidJSONValue[Boolean]
given ValidJSONValue[JSONObject]
given ValidJSONValue[JSONArray]

extension [T : ValidJSONValue](name: String)
  def ->(element: T)(using jc: JSONContainer) =
    jc.add(JSONKeyedElement(name, element))
These given instances of ValidJSONValue[T] are witnesses, constraining the allowed types of JSON values (see ConstrainingAllowedInstances).
This String extension method is constrained by ValidJSONValue[T]. It constructs JSONKeyedElements using "key" -> value, just like tuple pairs, but constrained by the context bound T : ValidJSONValue. We use a JSONContainer because these key-value pairs only occur inside containers (objects or arrays) in the DSL. It is here that we add the key-value pairs to the container jc. If you try a tuple value, it will fail to compile, as shown in a comment in TryJSONBuilder!
In Scala 2, you would need to declare derived classes of ValidJSONValue[T], like this:
implicit object VJSONInt extends ValidJSONValue[Int]
...
Finally, we see the actual context functions in action:
def obj(init: JSONObject ?=> Unit) =
  given jo as JSONObject
  init
  jo

def aobj(init: JSONObject ?=> Unit)(using jc: JSONContainer) =
  given jo as JSONObject
  init
  jc.add(jo)

def array(init: JSONArray ?=> Unit) =
  given ja as JSONArray
  init
  ja
A whole JSON object, as well as nested objects, starts with obj. Refer to the example in TryJSONBuilder. Where does the init context function of type JSONObject ?=> Unit come from? It is constructed by the compiler from the expressions inside the braces passed as the argument to obj. Or, as it appears in the DSL, the braces after the obj "keyword". Next, the given clause creates an instance of JSONObject named jo. Then, init is evaluated, where jo will be used to satisfy using clauses inside those nested expressions. Finally, we return jo.
Use aobj to define objects as array elements. Note that this function has a using clause, unlike obj; the container will always be a JSONArray. Unfortunately, the name obj can't be overloaded here, because the compiler would consider the two definitions ambiguous. The body of aobj is the second place where the add method is called. Recall that the other location is inside the String extension method ->.
Define an array. This body is very similar to obj.
So, if you find you need a small DSL for expressing structure, context functions are one tool at your disposal. We'll explore more tools for DSLs in DomainSpecificLanguages.
The "given" instances of ValidJSONValue[T] in the previous example were used as context bounds that constrained the allowed types that could be used for type parameter T in the String extension method ->(element: T).
What was new is that we did no actual work with these instances. Only their existence mattered. They "witnessed" the allowed types for JSON elements. So, because we didn't provide an instance for three-element tuples, for example, attempting to use a tuple value in the DSL, such as "stuff" -> (1, "two", 3.3), causes a compilation error.
Sometimes a context bound is used in both ways, as a witness and to do work. Consider the following sketch of an API for data "records" with ad hoc schemas, like in some NoSQL databases. Each row is encapsulated in a Map[String,Any], where the keys are the field names and the "column" values are unconstrained. However, the add and get methods, for adding column values to a row and retrieving them, do constrain the allowed instance types. Here is the example:
// src/main/scala/progscala3/contexts/NoSQLRecords.scala
package progscala3.contexts.scaladb

import scala.language.implicitConversions
import scala.util.Try

case class InvalidFieldName(name: String)
  extends RuntimeException(s"Invalid field name $name")

object Record:
  def make: Record = new Record(Map.empty)
  type Conv[T] = Conversion[Any, T]

case class Record private (contents: Map[String, Any]):
  import Record.Conv

  def add[T](nameValue: (String, T))(using Conv[T]): Record =
    Record(contents + nameValue)

  def get[T](colName: String)(using toT: Conv[T]): Try[T] =
    Try(toT(col(colName)))

  private def col(colName: String): Any =
    contents.getOrElse(colName, throw InvalidFieldName(colName))

@main def TryScalaDB =
  import Record.Conv
  given Conv[Int] = _.asInstanceOf[Int]
  given Conv[Double] = _.asInstanceOf[Double]
  given Conv[String] = _.asInstanceOf[String]
  given ab[A : Conv, B : Conv] as Conv[(A, B)] = _.asInstanceOf[(A, B)]

  val rec = Record.make
    .add("one" -> 1)
    .add("two" -> 2.2)
    .add("three" -> "THREE!")
    .add("four" -> (4.4, "four"))
    .add("five" -> (5, ("five", 5.5)))

  val one   = rec.get[Int]("one")
  val two   = rec.get[Double]("two")
  val three = rec.get[String]("three")
  val four  = rec.get[(Double, String)]("four")
  val five  = rec.get[(Int, (String, Double))]("five")

  val bad1 = rec.get[String]("two")
  val bad2 = rec.get[String]("five")
  val bad3 = rec.get[Double]("five")
  // val error = rec.get[Byte]("byte")

  println(s"one, two, three, four, five -> $one, $two, $three, $four, $five")
  println(s"bad1, bad2, bad3 -> $bad1, $bad2, $bad3")
The companion object defines make to start "safe" construction of a Record. It also defines a type alias for Conversion, where we always use Any as the first type parameter. This alias is necessary when we define given ab below.
Define Record with a single field Map[String,Any] to hold the user-defined fields and values. Use of private after the type name declares the constructor private, forcing users to create records using Record.make followed by add calls. This prevents users from using an unconstrained Map to construct a Record!
A method to add a field with a particular type and value. The anonymous context parameter is used only to constrain the allowed values for T. Its apply method won't be used. Since Records are immutable, a new instance is returned.
A method to retrieve a field value with the desired type T. Here the context parameter both constrains the allowed T types and handles conversion from Any to T. On failure, an exception is returned in the Try. Hence, this example can't catch all type errors at compile time, as shown below.
Only Int, Double, String, and pairs of them are supported. These definitions work as witnesses for the allowed types in both the add and get methods, and also function as implicit conversions from Any to specific types when used in get. Note that given ab declares a given for pairs, but the A and B types are constrained to be other allowed types, including other pairs!
Attempting to retrieve columns with the wrong types. Attempting to retrieve an unsupported Byte column would cause a compilation error.
Running this example with runMain progscala3.contexts.scaladb.TryScalaDB, you get the following output (abbreviated):
one, two, three, four, five -> Success(1), Success(2.2), Success(THREE!),
  Success((4.4,four)), Success((5,(five,5.5)))
bad1, bad2, bad3 ->
  Failure(... java.lang.Double cannot be cast to class java.lang.String ...),
  Failure(... scala.Tuple2 cannot be cast to class java.lang.String ...),
  Failure(... scala.Tuple2 cannot be cast to class java.lang.Double ...)
Hence, the only runtime failure we can't prevent at compile time is attempting to get a column with the wrong type.
The type alias Conv[T] not only made the code more concise than using Conversion[Any,T], it is necessary for the context bounds on A and B in ab. This is because context bounds always require one and only one type parameter, but Conversion[A,B] has two. Fortunately, the A is always Any in our case, so we were able to define the type alias Conv[T] and use it for the bounds in ab.
Using a type alias to fill in some of the type parameters is a useful trick when the number of type parameters in a type doesn't match your needs.
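For example, here is the same trick applied to a different two-parameter type; the FromString alias and parseAll method are hypothetical:

// Function1 takes two type parameters; fixing the input type with an
// alias yields a one-parameter type that can be used as a context bound.
type FromString[T] = String => T

given FromString[Int] = _.toInt

def parseAll[T : FromString](inputs: Seq[String]): Seq[T] =
  inputs.map(summon[FromString[T]])

parseAll[Int](Seq("1", "2", "3"))  // Seq(1, 2, 3)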
As a reminder, use of given provides a more concise syntax than the Scala 2 way of declaring an implicit value (which is still supported):
given Conv[Int] = _.asInstanceOf[Int]             // Scala 3
// vs.
implicit val toInt: Conv[Int] = new Conv[Int] {   // Scala 2
  def apply(any: Any): Int = any.asInstanceOf[Int]
}
To recap, we limited the allowed types that can be used for a parameterized method by passing an implicit parameter and only defining given values that match the types we want to allow.
This example was inspired by an API I once wrote to work with Cassandra.
In the previous example, the Record.add method showed one way to constrain the allowed types without doing anything else with the context bounds. Now we'll discuss another technique, called implicit evidence.
A nice example of this technique is the toMap method available for all iterable collections. Recall that the Map constructor wants key-value pairs, i.e., two-element tuples, as arguments. If we have a sequence of pairs, wouldn't it be nice to create a Map out of them in one step? That's what toMap does, but we have a dilemma. We can't allow the user to call toMap if the sequence is not a sequence of pairs.
The toMap method is defined in IterableOnceOps:
trait IterableOnceOps[+A]:
  def toMap[K, V](implicit ev: <:<[A, (K, V)]): immutable.Map[K, V]
  ...
The implicit parameter ev is the "evidence" we need to enforce our constraint. It uses a type defined in Predef called <:<, named to resemble the type parameter constraint <:, e.g., A <: (K,V). In _call_by_name_call_by_value, we learned that this notation means A is a subtype of (K,V).
Recall we said that types with two type parameters can be written in “infix” notation. So, the following two expressions are equivalent:
<:<[A, (T, U)]
A <:< (T, U)
Now, when we have a traversable collection that we want to convert to a Map, the implicit evidence ev value we need will be synthesized by the compiler, but only if A <: (T,U); that is, if A is actually a pair of types. If true, then toMap can be called and it simply passes the elements of the traversable to the Map constructor. However, if A is not a pair type, the code fails to compile.
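For example:

val pairs = Seq("a" -> 1, "b" -> 2)
pairs.toMap      // Map(a -> 1, b -> 2); A is (String, Int), so ev exists.

val ints = Seq(1, 2, 3)
// ints.toMap    // Error: Cannot prove that Int <:< (K, V).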
Hence, evidence only has to exist to enforce a type constraint, and the compiler generates it for us. We don't have to define a given or implicit value ourselves.
There is also a related type in Predef for providing evidence that two types are equivalent, called =:=.
Context bounds can also be used to work around limitations due to type erasure on the JVM.
For historical reasons, the JVM “forgets” the supplied type arguments for parameterized types. For example, consider the following definitions for an overloaded method with unique type signatures, at least to human readers:
scala> object O:
     |   def m(seq: Seq[Int]): Unit = println(s"Seq[Int]: $seq")
     |   def m(seq: Seq[String]): Unit = println(s"Seq[String]: $seq")
3 |  def m(seq: Seq[String]): Unit = println(s"Seq[String]: $seq")
  |      ^
  |      Double definition:
  |      def m(seq: Seq[Int]): Unit in object O at line 2 and
  |      def m(seq: Seq[String]): Unit in object O at line 3
  |      have the same type after erasure.
So, the compiler disallows the definitions because they are effectively the same in byte code.
Be careful defining overloaded methods in the REPL without an enclosing type. Because the REPL lets you redefine anything, for convenience, if you enter these two method definitions without the enclosing object, you'll see no complaints. You will have one method, for Seq[String], instead of two.
However, we can add an implicit parameter to disambiguate the methods:
// src/script/scala/progscala3/contexts/UsingTypeErasureWorkaround.scala

object M:
  implicit object IntMarker
  implicit object StringMarker

  def m(seq: Seq[Int])(using IntMarker.type): String = s"Seq[Int]: $seq"
  def m(seq: Seq[String])(using StringMarker.type): String = s"Seq[String]: $seq"

import M._
Trying the methods in the REPL produces the following results:
scala> m(Seq(1, 2, 3))
     | m(Seq("one", "two", "three"))
val res0: String = Seq[Int]: List(1, 2, 3)
val res1: String = Seq[String]: List(one, two, three)

scala> m(Seq("one" -> 1, "two" -> 2, "three" -> 3))   // ERROR
1 |m(Seq("one" -> 1, "two" -> 2, "three" -> 3))
  |^
  |None of the overloaded alternatives of method m in object M with types
  |...
Define two special-purpose implicit objects that will be used to disambiguate the methods affected by type erasure.
Redefinition of the method that takes Seq[Int]. It now has a second parameter list expecting an implicit IntMarker.type (because IntMarker is an object). Then define a similar method for Seq[String].
Now the compiler considers the two m methods to be distinct after type erasure.
You might wonder why I didn't use implicit Int and String values, rather than invent new types. Using implicit values for very common types is not recommended. It would be too easy for one or more implicit String values, for example, to show up in a particular scope. If you don't expect one to be there, you might be surprised when it gets used. If you do expect one to be in scope, but there are several of them, you'll get a compiler error, because all of them are valid choices and the compiler can't decide which to use.
At least the second scenario triggers an immediate error rather than allowing unintentional behavior to occur.
The safer bet is to limit your use of implicit parameters and values to very specific, purpose-built types.
Avoid using context parameters for very common types like Int and String, as they are more likely to cause confusing behavior or compilation errors.
We'll discuss type erasure in more detail in ScalasTypeSystemI.
The sidebar lists the general rules for using clauses. Hence, any one parameter list can't mix context parameters with other parameters. Here are a few more examples, including what happens when a regular parameter list follows a using clause, which is allowed in Scala 3, but not in Scala 2:
// src/script/scala/progscala3/contexts/UsingClausesLists.scala

case class U1[T](t: T)
case class U2[T](t: T)

def f1[T1, T2](name: String)(using u1: U1[T1], u2: U2[T2]): String =
  s"f1: $name: $u1, $u2"
def f2[T1, T2](name: String)(using u1: U1[T1])(using u2: U2[T2]): String =
  s"f2: $name: $u1, $u2"
def f3[T1, T2](name: String)(using u1: U1[T1])(u2: U2[T2]): String =
  s"f3: $name: $u1, $u2"

given u1i as U1[Int](0)
given u2s as U2[String]("one")
One using clause with two values.
Two using clauses, each with one value.
One using clause sandwiched between two regular parameter lists.
Now use them:
scala> f1("f1a")
     | f1("f1b")(using u1i, u2s)
     | f2("f2a")
     | f2("f2b")(using u1i)(using u2s)
     | f3("f3a")
     | f3("f3b")(using u1i)
     | f3("f3c")(using u1i)(u2s)
val res0: String = f1: f1a: U1(0), U2(one)
val res1: String = f1: f1b: U1(0), U2(one)
val res2: String = f2: f2a: U1(0), U2(one)
val res3: String = f2: f2b: U1(0), U2(one)
val res4: U2[Any] => String = Lambda$7814/0x000000080360d040@4aa25f5d
val res5: U2[Any] => String = Lambda$7815/0x000000080360c840@521dc499
val res6: String = f3: f3c: U1(0), U2(one)
The results for f1 and f2 should make sense; they are functionally equivalent. Recall that when passing values explicitly, the using keyword is required.
Now consider res4 through res6. First, res6 should be unsurprising, as we explicitly provided arguments for all three lists.
Partial application is the explanation for res4 and res5. For methods with more than one parameter list, if you invoke them with a subset of the leading parameter lists, a new function is returned expecting the rest of the parameter lists. For res4, the given is used for the second parameter list, the using clause, while a value is explicitly provided for the using clause in the res5 definition. Hence, both return the same thing.
For both res4 and res5, the third parameter list is not a using clause and it was not provided explicitly. Therefore, the expressions returned a function that expects the remaining parameter list, which takes an instance of U2, and returns a String:
scala> val u2a = U2[Any](1.1)  // Declare a U2[Any] we can use.
     | res4(u2a)               // Pass it to the res4 and res5 functions.
     | res5(u2a)
val u2a: U2[Any] = U2(1.1)
val res7: String = f3: f3a: U1(0), U2(1.1)
val res8: String = f3: f3b: U1(0), U2(1.1)
Because we didn't provide the third parameter list when we constructed res4 and res5, the type parameter T2 for f3 was inferred to be the widest possible type, Any. Try calling res4(u1i) or res4(u2s) and you'll get a type error, as u1i and u2s are not type compatible with U2[Any]. It doesn't matter that u1i and u2s were declared as givens; we can still use them as regular parameters, as long as the types are compatible.
Let's finish the discussion of using clauses by discussing how to improve the errors reported when a matching given or implicit value isn't found. The compiler's default messages are usually sufficiently descriptive, but you can customize them with the implicitNotFound annotation,2 as follows:
// src/script/scala/progscala3/contexts/ImplicitNotFound.scala

import scala.annotation.implicitNotFound

@implicitNotFound("Stringer: No implicit found ${T} : Tagify[${T}]")
trait Tagify[T]:
  def toTag(t: T): String

case class Stringer[T : Tagify](t: T):
  override def toString: String =
    s"Stringer: ${implicitly[Tagify[T]].toTag(t)}"

object O:
  def makeXML[T](t: T)(implicit
      @implicitNotFound("No Tagify[${T}] implicit found")
      tagger: Tagify[T]): String =
    s"<xml>${tagger.toTag(t)}</xml>"

given Tagify[Int]:
  def toTag(i: Int): String = s"<int>$i</int>"

given Tagify[String]:
  def toTag(s: String): String = s"<string>$s</string>"
Let’s try it:
scala> Stringer("Hello World!")
     | Stringer(100)
     | O.makeXML("Hello World!")
     | O.makeXML(100)
val res0: Stringer[String] = Stringer: <string>Hello World!</string>
val res1: Stringer[Int] = Stringer: <int>100</int>
val res2: String = <xml><string>Hello World!</string></xml>
val res3: String = <xml><int>100</int></xml>

scala> Stringer(3.14569)
     | O.makeXML(3.14569)
1 |Stringer(3.14569)
  |        ^
  |        Stringer: No implicit found Double : Tagify[Double]
2 |O.makeXML(3.14569)
  |          ^
  |          Stringer: No implicit found Double : Tagify[Double]
Only the annotation on Tagify is used. The annotation on the parameter to O.makeXML is supposed to take precedence for the last output. This appears to be a current limitation in Scala 3.
You can only annotate types intended for use as givens. This is another reason for creating custom types for this purpose, rather than using more common types, like Int, String, or our Person type. You can't use this annotation with those types.
We completed our exploration into the details of abstracting over context in Scala 2 and 3. I hope you can appreciate their power and utility, but also the need to use them wisely. Unfortunately, because the old implicit idioms are still supported for backwards compatibility, at least for a while, it will be necessary to understand how to use both the old and new constructs, even though they are redundant.
Now we’re ready to dive into the principles of functional programming. We’ll start with a discussion of the core concepts and why they are important. Then we’ll look at the powerful functions provided by most container types in the library. We’ll see how we can use those functions to construct concise, yet powerful programs.
1 A “regular” parameter list is also known as a normal parameter clause, but I have just used the more familiar parameter list in this book. Using clause is more of a formal term in Scala 3 documentation than implicit parameter clause was, which is why I emphasize it here.
2 At the time of this writing, there is no givenNotFound or similar replacement annotation in Scala 3.