Concurrent access to the service instance is governed by
the ConcurrencyMode
property of the ServiceBehavior
attribute:
public enum ConcurrencyMode
{
   Single,
   Reentrant,
   Multiple
}

[AttributeUsage(AttributeTargets.Class)]
public sealed class ServiceBehaviorAttribute : ...
{
   public ConcurrencyMode ConcurrencyMode
   {get;set;}
   //More members
}
The value of the ConcurrencyMode
enum controls if and when
concurrent calls are allowed. The name ConcurrencyMode
is actually incorrect; the
proper name for this property would have been ConcurrencyContextMode
, since it synchronizes
access not to the instance, but rather to the context containing the
instance (much the same way InstanceContextMode
controls the instantiation of the
context, not the instance). The significance of this distinction—i.e.,
that the synchronization is related to the context and not to the
instance—will become evident later.
When the service is configured with ConcurrencyMode.Single
, WCF will provide
automatic synchronization to the service context and disallow
concurrent calls by associating
the context containing the service instance with a synchronization
lock. Every call coming into the service must first try to acquire the
lock. If the lock is unowned, the caller will be allowed in. Once the
operation returns, WCF will unlock the lock, thus allowing in another
caller.
The important thing is that only one caller at a time is ever
allowed. If there are multiple concurrent callers while the lock is
locked, the callers are all placed in a queue and are served out of
the queue in order. If a call times out while blocked, WCF will remove
the caller from the queue and the client will get a TimeoutException.
ConcurrencyMode.Single is the WCF
default setting, so these definitions are equivalent:
class MyService : IMyContract
{...}
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single)]
class MyService : IMyContract
{...}
Because the default concurrency mode is synchronized access, the susceptible instancing modes of per-session and singleton are also synchronized by default. Note that even calls to a per-call service instance are synchronized by default.
As explained in Chapter 7, WCF will
verify at service load time whether at least one operation on the
service has TransactionScopeRequired
set to true
and that ReleaseServiceInstanceOnTransactionComplete
is true. In this case, the service
concurrency mode must be ConcurrencyMode.Single. This is done
deliberately to ensure that the service instance can be recycled at
the end of the transaction without any danger of there being another
thread accessing the disposed instance.
When the service is configured with ConcurrencyMode.Multiple
, WCF will stay out
of the way and will not synchronize access to the service instance in
any way. ConcurrencyMode.Multiple
simply means that the service instance is not associated with any
synchronization lock, so concurrent calls are allowed on the service
instance. Put differently, when a service is configured with
ConcurrencyMode.Multiple, WCF will not queue up the client messages;
it will dispatch them to the service instance as soon as they arrive.
A large number of concurrent client calls will not result in a matching number of concurrently executing calls on the service. The maximum number of concurrent calls dispatched to the service is determined by the configured maximum concurrent calls throttle value.
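The throttle can be adjusted programmatically on the host via the standard ServiceThrottlingBehavior. The following is only an illustrative host-side sketch (the value 12 is arbitrary, and the self-hosting setup is assumed, not taken from the text):

```csharp
using System.ServiceModel;
using System.ServiceModel.Description;

//Illustrative sketch: raising the concurrent-calls throttle for
//MyService before opening the host. The value 12 is arbitrary.
ServiceHost host = new ServiceHost(typeof(MyService));

ServiceThrottlingBehavior throttle =
   host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
if(throttle == null)
{
   throttle = new ServiceThrottlingBehavior();
   host.Description.Behaviors.Add(throttle);
}
throttle.MaxConcurrentCalls = 12;

host.Open();
```

If no throttling behavior is configured, WCF applies its default limit; the sketch adds the behavior only when one is not already present, so it does not clobber a value set in the config file.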
Obviously, this is of great concern to sessionful and singleton
services, which must manually synchronize access to their instance
state. The common way of doing that is to use .NET locks such as
Monitor
or a WaitHandle
-derived class. Manual
synchronization, which is covered in great depth in Chapter 8 of my book Programming
.NET Components, Second Edition (O’Reilly), is not for the
faint of heart, but it does enable the service developer to optimize
the throughput of client calls on the service instance: you can lock
the service instance just when and where synchronization is required,
thus allowing other client calls on the same service instance in
between the synchronized sections. Example 8-1 shows a manually
synchronized sessionful service whose client performs concurrent
calls.
Example 8-1. Manual synchronization using fragmented locking
[ServiceContract(SessionMode = SessionMode.Required)]
interface IMyContract
{
   [OperationContract]
   void MyMethod();
}
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
class MyService : IMyContract
{
   int[] m_Numbers;
   List<string> m_Names;

   public void MyMethod()
   {
      lock(m_Numbers)
      {
         ...
      }
      /* Don't access members here */
      lock(m_Names)
      {
         ...
      }
   }
}
The service in Example 8-1 is configured for
concurrent access. Since the critical sections of the operations that
require synchronization are any member variable accesses, the service
uses a Monitor
(encapsulated in the
lock
statement) to lock the member
variable before accessing it. I call this synchronization technique
fragmented locking, since it locks
only when needed and only what is being accessed. Local variables
require no synchronization, because they are visible only to the
thread that created them on its own call stack.
There are two problems with fragmented locking: it is both error- and deadlock-prone. Fragmented locking only provides for thread-safe access if every other operation on the service is as disciplined about always locking the members before accessing them. But even if all operations lock all members, you still risk deadlocks: if one operation on thread A locks member M1 while trying to access member M2, while another operation executing concurrently on thread B locks member M2 while trying to access member M1, you will end up with a deadlock.
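The standard mitigation for the deadlock problem is a fixed lock-acquisition order: if every operation acquires the member locks in the same agreed-upon order, the circular wait can never form. Here is a plain .NET sketch (no WCF; the class and member uses are illustrative only):

```csharp
using System.Collections.Generic;

//Illustrative sketch: both operations acquire the member locks in one
//fixed order (m_Numbers first, then m_Names), so the circular wait
//described above cannot occur no matter how the calls interleave.
class OrderedLocking
{
   readonly int[] m_Numbers = new int[2];
   readonly List<string> m_Names = new List<string>();

   public void OperationA()
   {
      lock(m_Numbers)   //Lock order: m_Numbers first...
      lock(m_Names)     //...then m_Names, in every operation
      {
         m_Numbers[0]++;
         m_Names.Add("A");
      }
   }

   public void OperationB()
   {
      lock(m_Numbers)   //Same order here, never the reverse
      lock(m_Names)
      {
         m_Numbers[1]++;
         m_Names.Add("B");
      }
   }
}
```

Lock ordering removes the deadlock, but note that it does nothing for the first problem: it still relies on every operation being disciplined about taking the locks at all.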
WCF resolves service call deadlocks by eventually timing out
the call and throwing a TimeoutException
. Avoid using a long send
timeout, as it decreases WCF’s ability to resolve deadlocks in a
timely manner.
It is better to reduce the fragmentation by locking the entire service instance instead:
public void MyMethod()
{
   lock(this)
   {
      ...
   }
   /* Don't access members here */
   lock(this)
   {
      ...
   }
}
This approach, however, is still fragmented and thus error-prone—if at some point in the future someone adds a method call in the unsynchronized code section that does access the members, it will not be a synchronized access. It is better still to lock the entire body of the method:
public void MyMethod()
{
   lock(this)
   {
      ...
   }
}
The problem with this approach is that in the future someone
maintaining this code may err and place some code before or after the
lock
statement. Your best option
therefore is to instruct the compiler to automate injecting the call
to lock the instance using the MethodImpl
attribute
with the MethodImplOptions.Synchronized
flag:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
class MyService : IMyContract
{
   int[] m_Numbers;
   List<string> m_Names;

   [MethodImpl(MethodImplOptions.Synchronized)]
   public void MyMethod()
   {
      ...
   }
}
You will need to repeat the assignment of the MethodImpl
attribute on all the service
operation implementations.
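To see outside WCF what the attribute buys you, consider this small plain-.NET illustration (the Counter class is hypothetical): MethodImplOptions.Synchronized makes the CLR hold the instance lock for the entire method body, exactly as if the body were wrapped in lock(this), so concurrent calls on the same instance are serialized:

```csharp
using System.Runtime.CompilerServices;
using System.Threading;

class Counter
{
   public int Value;

   [MethodImpl(MethodImplOptions.Synchronized)]
   public void Increment()
   {
      int copy = Value; //Without the attribute, two threads could read the
      Thread.Sleep(0);  //same value here and one increment would be lost
      Value = copy + 1;
   }
}
```

Two threads each calling Increment() a thousand times on the same instance will always leave Value at 2,000, because the attribute serializes access per instance; remove the attribute and some increments will routinely be lost.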
While this code is thread-safe, you actually gain little from
the use of ConcurrencyMode.Multiple
: the net effect in
terms of synchronization is similar to using ConcurrencyMode.Single
, yet you have
increased the overall code complexity and reliance on developers’
discipline. In general, you should avoid ConcurrencyMode.Multiple.
However, there are cases
where ConcurrencyMode.Multiple
is
useful, as you will see later in this chapter.
When the service is configured for ConcurrencyMode.Multiple
, if at least one
operation has TransactionScopeRequired
set to true
, then ReleaseServiceInstanceOnTransactionComplete
must be set to false
. For
example, this is a valid definition, even though ReleaseServiceInstanceOnTransactionComplete
defaults to true
, because no
method has TransactionScopeRequired
set to true
:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
class MyService : IMyContract
{
   public void MyMethod()
   {...}
   public void MyOtherMethod()
   {...}
}
The following, on the other hand, is an invalid definition
because at least one method has TransactionScopeRequired
set to true
:
//Invalid configuration:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
class MyService : IMyContract
{
   [OperationBehavior(TransactionScopeRequired = true)]
   public void MyMethod()
   {...}
   public void MyOtherMethod()
   {...}
}
A transactional unsynchronized service must explicitly set
ReleaseServiceInstanceOnTransactionComplete
to false
:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple,
                 ReleaseServiceInstanceOnTransactionComplete = false)]
class MyService : IMyContract
{
   [OperationBehavior(TransactionScopeRequired = true)]
   public void MyMethod()
   {...}
   public void MyOtherMethod()
   {...}
}
The rationale behind this constraint is that only a sessionful or a singleton service could possibly benefit from unsynchronized access, so in the case of transactional access, WCF wants to enforce the semantic of the configured instancing mode. In addition, this will avoid having one caller access the instance, complete the transaction, and release the instance, all while another caller is using the instance.
The ConcurrencyMode.Reentrant
value is a
refinement of ConcurrencyMode.Single
. Similar to ConcurrencyMode.Single
, ConcurrencyMode.Reentrant
associates the
service context with a synchronization lock, so concurrent calls on
the same instance are never allowed. However, if the reentrant service
calls out to another service or a callback, and that call chain (or
causality) somehow winds its way back to the
service instance, as shown in Figure 8-1, that
call is allowed to reenter the service instance.
The implementation of ConcurrencyMode.Reentrant
is very
simple—when the reentrant service calls out over WCF, WCF silently
releases the synchronization lock associated with the instance
context. ConcurrencyMode.Reentrant
is designed to avoid the potential deadlock of reentrancy, which is
why it releases the lock on a callout: if the service were to
maintain the lock while calling out and the causality tried to enter
the same context, a deadlock would occur.
Reentrancy support is instrumental in a number of cases:
A singleton service calling out risks a deadlock if any of the downstream services it calls tries to call back into the singleton.
In the same app domain, if the client stores a proxy reference in some globally available variable, some of the downstream objects called by the service may use that proxy reference to call back to the original service.
Callbacks on non-one-way operations must be allowed to reenter the calling service.
If the callout the service performs is of long duration, even without reentrancy, you may want to optimize throughput by allowing other clients to use the same service instance while the callout is in progress.
A service configured with ConcurrencyMode.Multiple
is by definition
also reentrant, because no lock is held during the callout. However,
unlike a reentrant service, which is inherently thread-safe, a
service configured with ConcurrencyMode.Multiple
must provide for
its own synchronization (for example, by locking the instance during
every call, as explained previously). It is up to the developer of
such a service to decide if it should release the lock before
calling out to avoid a reentrancy deadlock.
It is very important to recognize the liability associated with reentrancy. When a reentrant service calls out, it must leave the service in a workable, consistent state, because others could be allowed into the service instance while the service is calling out. A consistent state means that the reentrant service must have no more interactions with its own members or any other local object or static variable, and that when the callout returns, the reentrant service should simply be able to return control to its client. For example, suppose the reentrant service modifies the state of some linked list and leaves it in an inconsistent state—say, missing a head node—because it needs to get the value of the new head from another service. If the reentrant service then calls out to the other service, it leaves other clients vulnerable, because if they call into the reentrant service and access the linked list they will encounter an error.
Moreover, when the reentrant service returns from its callout, it must refresh all local method state. For example, if the service has a local variable that contains a copy of the state of a member variable, that local variable may now have the wrong value, because during the callout another party could have entered the reentrant service and modified the member variable.
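As a hedged fragment (the member, callback, and variable names are illustrative, not from any particular listing), the refresh looks like this inside a reentrant operation:

```csharp
public void MyMethod()
{
   int number = m_Numbers[0]; //Local copy of member state

   callback.OnCallback(); //Lock is released during the callout; another
                          //party may enter and modify m_Numbers

   number = m_Numbers[0]; //Refresh: the cached copy may now be stale
   ...
}
```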
A reentrant service faces exactly the same design constraints
regarding transactions as a service configured with ConcurrencyMode.Multiple
; namely, if at
least one operation has TransactionScopeRequired
set to true
, then ReleaseServiceInstanceOnTransactionComplete
must be set to false
. This is
done to maintain the instance context mode semantics.
Consider now the case of a service designed for
single-threaded access with ConcurrencyMode.Single
and with duplex callbacks.
When a call from the client enters the context, it acquires the
synchronization lock. If that service obtains the callback reference
and calls back to the calling client, that callout will block the
thread used to issue the call from the client while still
maintaining the lock on the context. The callback will reach the
client, execute there, and return with a reply message from the
client. Unfortunately, when the reply message is sent to the same
service instance context, it will first try to acquire the lock—the
same lock already owned by the original call from the client, which
is still blocked waiting for the callback to return—and a deadlock
will ensue. To avoid this deadlock, during the operation execution,
WCF disallows callbacks from the service to its calling client as
long as the service is configured for single-threaded access.
There are three ways of safely allowing the callback. The first is to configure the service for reentrancy. When the service invokes the proxy to the callback object, WCF will silently release the lock, thus allowing the reply message from the callback to acquire the lock when it returns, as shown in Example 8-2.
Example 8-2. Configure for reentrancy to allow callbacks
interface IMyContractCallback
{
   [OperationContract]
   void OnCallback();
}

[ServiceContract(CallbackContract = typeof(IMyContractCallback))]
interface IMyContract
{
   [OperationContract]
   void MyMethod();
}
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
class MyService : IMyContract
{
   public void MyMethod()
   {
      IMyContractCallback callback =
         OperationContext.Current.GetCallbackChannel<IMyContractCallback>();
      callback.OnCallback();
   }
}
Control will only return to the service once the callback
returns, and the service’s own thread will need to reacquire the
lock. Configuring for reentrancy is required even of a per-call service, which otherwise has no
need for anything but ConcurrencyMode.Single. Note that the service may
still invoke callbacks to other clients or call other services; it
is the callback to the calling client that is disallowed.
You can, of course, configure the service for concurrent
access with ConcurrencyMode.Multiple
to avoid having
any lock.
The third option (as mentioned in Chapter 5), and the only case where a service
configured with ConcurrencyMode.Single
can call back to
its clients, is when the callback contract operation is configured
as one-way because there will not be any reply message to contend
for the lock.
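For example, the following sketch reworks Example 8-2 to keep ConcurrencyMode.Single by marking the callback operation as one-way:

```csharp
interface IMyContractCallback
{
   [OperationContract(IsOneWay = true)]
   void OnCallback();
}

[ServiceContract(CallbackContract = typeof(IMyContractCallback))]
interface IMyContract
{
   [OperationContract]
   void MyMethod();
}

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single)]
class MyService : IMyContract
{
   public void MyMethod()
   {
      IMyContractCallback callback =
         OperationContext.Current.GetCallbackChannel<IMyContractCallback>();
      callback.OnCallback(); //One-way: no reply message contends for the lock
   }
}
```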