Overview of Resource Management in the .NET Framework

As you are probably aware, the CLR takes an active role in managing the lifetime of objects that your application allocates. Actively managing objects is based on the premise that every object requires memory. It is the CLR's job to manage the allocation and deallocation of memory in an efficient and timely manner. In addition, every object requires that the memory associated with it be initialized. The programmer must properly initialize the object; the CLR cannot and should not interfere with this process. The .NET Framework requires that all objects be allocated from a managed heap. Each language expresses this allocation and initialization differently, but the concept is the same. For C#, an object is allocated and initialized as follows:

class Foo
{
    public Foo()
    {
        Console.WriteLine("Constructing Foo");
    }
}
. . .
Foo f = new Foo();

This statement allocates memory for the object and initializes it using the default constructor for the class Foo. Managed C++ performs this function with the following:

__gc class Foo
{
public:
    Foo()
    {
        Console::WriteLine(S"Constructing Foo");
    }
} ;
. . .
Foo *f = new Foo();

In Visual Basic, it looks like this:

Public Class Foo
    Public Sub New()
        MyBase.New()
        Console.WriteLine("Constructing Foo")
    End Sub
End Class
. . .
Dim f As New Foo()

Each language has its own syntax for constructing an object, but in the end, each language generates remarkably similar IL. What follows is the IL for the Foo constructor from the C# code, but the code is much the same for each of the three languages listed earlier.

IL_0000:  ldarg.0
IL_0001:  call       instance void [mscorlib]System.Object::.ctor()
IL_0006:  ldstr      "Constructing Foo"
IL_000b:  call       void [mscorlib]System.Console::WriteLine(string)
IL_0010:  ret

Note

For all languages that support the Common Language Specification (CLS), the base Object constructor is called by default. In the .NET Framework, when you construct an object, you are also constructing the base class Object from which all objects are implicitly derived.


Before any of this code can execute, the object needs memory in which to live. C++ has syntax that allows a programmer to explicitly separate the allocation and the initialization (construction) of an object using placement new. Placement new looks something like this:

Foo *p = new (address) Foo();

You need to look at constructing objects in the .NET Framework as two steps: allocation and initialization. Allocation is the allotment of memory, whereas initialization is the filling in of initial values for the object. The .NET Framework does not support separating the two steps explicitly as C++ does. A clear division of labor exists: the CLR handles the memory, and your application handles the rest.

Just before the construction or initialization of your object, the .NET Framework allocates enough memory to hold your object. This allocation process is depicted in Figure 10.1.

Figure 10.1. .NET allocation.


All that is necessary to allocate an object is to examine the type information (pictured as a cloud) and bump up the Next pointer to make room for the new object. This process is extremely fast. In basic terms, it just involves calculating the size and incrementing a pointer appropriately. This is somewhat idealized, however. What happens when the space in the heap is limited, as shown in Figure 10.2?

Figure 10.2. .NET allocation with a limit.


An error might occur because of the lack of room. To make room for the new object, you could remove one or more of the old objects that are no longer needed. This is where a process known as garbage collection comes into play.

When the CLR determines that it has no more room, it initiates a garbage collection cycle. Initially, the assumption is that all objects are garbage. The garbage collector then proceeds to examine all roots to see which objects the program can still reach. A root is virtually any storage location to which the running program has access. For example, a processor's CPU registers can act as roots, global and static variables can act as roots, and local variables can be roots. A simple rule is that if an object is reachable by the program, then a chain of references starting at some root leads to it. You might have the following code:

Foo f = new Foo();
. . .
// Use object . . .
. . .
f = null;

The local variable f refers to a Foo object. After you set f to null, you have effectively disconnected the local variable from the object. By doing this, you have not destroyed the object; you have merely removed a root reference to it. Figure 10.3 shows roots referring to two of the objects. Because the other two objects have no references, they are considered garbage and can be collected.

Figure 10.3. Simple garbage collection.


When a collection cycle is complete, the garbage objects are removed, the pointers adjusted, and the heap coalesced. As you can imagine, a garbage collection cycle is expensive. It requires the CLR to walk all of the roots, build a graph of the objects that are still reachable, and remove everything that is not in that graph. If the heap needs to be coalesced, then all of the references to the objects that are moved need to be adjusted so they still refer to the same objects. Luckily, a garbage collection cycle occurs only when it is either explicitly started (using the GC.Collect() method) or when the heap is full (no room exists for a new object).
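
To see the effect of a collection from code, you can force one yourself and compare the heap size before and after. The following fragment is only a sketch; the buffer variable and its 100,000-byte size are hypothetical stand-ins for any sizable allocation, and the exact number reported will vary from run to run.

byte[] buffer = new byte[100000];   // hypothetical allocation, large enough to notice
// . . . use the buffer . . .
buffer = null;                      // remove the only root reference

long before = GC.GetTotalMemory(false);
GC.Collect();                       // explicitly start a collection cycle
long after = GC.GetTotalMemory(false);
Console.WriteLine("Heap shrank by roughly {0} bytes", before - after);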

As a further illustration of the principles behind garbage collection, consider the code in Listing 10.1 that builds a linked list.

Listing 10.1. C# Code to Build a Linked List of Objects
public class LinkedObject
{
    private LinkedObject link;
    public LinkedObject()
    {
    }
    public LinkedObject(LinkedObject node)
    {
        link = node;
    }
    public LinkedObject Link
    {
        get
        {
            return link;
        }
        set
        {
            link = value;
        }
    }
}
. . .
if(head == null)
{
    head = new LinkedObject();
}
LinkedObject node;
for(int i = 0; i < 100000; i++)
{
    node = new LinkedObject(head);
    head = node;
}

The first part of the listing shows the definition of the LinkedObject class. This class simply contains a link to another LinkedObject, and as such, it is only an illustrative example. In a real application, this class would contain pertinent data. The next section of the listing illustrates how to use this class to build a linked list of 100,000 objects. The list starts out with just a head node. As each new node is added to the list, it links to the object that head referred to, and the head reference is adjusted to refer to the new beginning of the list. If you want to remove every item in the linked list, all you have to do is set the head reference to null:

head = null;

The only root that references the list is the head variable. As a result, after this reference is removed, the program no longer has access to any portion of the list. The entire list is considered garbage and is collected during the next garbage collection cycle. Of course, this does not work if you try something like the following:

myhead = head;
. . .
head = null;

Now the beginning of the list has another reference in myhead. Setting head to null only removes one of the root references to the linked list; the other (myhead) keeps the list from being collected.

Comparison of .NET Framework Allocation and C-Runtime Allocation

Before going further in this discussion of .NET Framework garbage collection and memory allocation, you need to compare this allocation to what it is replacing: the traditional C-Runtime heap allocation. Two of the main problems with the C-Runtime heap are that it fragments memory and it is not efficient.

The C-Runtime heap (the Win32 heap works much the same) consists of a list of free blocks. Each block is a fixed size. If the size of memory that you are trying to allocate is not the same as the block size and is not some even multiple of the block size, then extra bytes, or holes, are left in memory. If the block size is 1024 bytes and you try to allocate 1000 bytes, then 24 bytes are left over. If you again try to allocate 1000 bytes, then you will be allocating from the next free block. If you have many of these holes, they can add up to a significant loss of memory. Fragmented memory is the cumulative effect of all of these leftover pieces of memory.

The other drawback of the C-Runtime heap is efficiency. For each allocation, the allocator needs to walk the list of free blocks in search of a block of memory that is large enough to hold the requested allocation. This can be time consuming, so many schemes have been developed to work around the inefficiency of the C-Runtime heap. A simple program has been put together that allocates a linked list of 100,000 items. The application displays the time for the allocation and the deallocation. This program is in the HeapAllocation subdirectory, but the essential parts of the program are shown in Listing 10.2.

Listing 10.2. Building a Linked List of Objects from the C-Runtime Heap
class LinkedList
{
public:
    LinkedList *link;
    LinkedList(void)
    {
        link = NULL;
    }
    LinkedList(LinkedList* node)
    {
        link = node;
    }
    ~LinkedList(void)
    {
    }
} ;
. . .
if(head == NULL)
{
    head = new LinkedList();
}
__int64 start, stop, freq;   // used by the timing code that is not shown here
LinkedList *node = NULL;
for(int i = 0;i < 100000;i++)
{
    node = new LinkedList(head);
    head = node;
}

This code looks similar to Listing 10.1. The functionality was mimicked as much as possible to allow comparison between the different allocation schemes. The complete source that is associated with Listing 10.1 is in the LinkedList subdirectory. Figure 10.4 shows a typical screenshot after an allocation and deallocation have occurred.

Figure 10.4. Timing C-Runtime allocation.


This figure shows the timing for one particular machine. See how the allocation and deallocation perform on your machine. Do not run this timing in debug mode because the hooks that have been placed in the debug memory allocation scheme cause the allocation and deallocation to be extremely slow. LinkedList is an application that can be used to compare the managed allocation and deallocation of memory. Figure 10.5 shows a typical screenshot after allocating 100,000 items and placing them in a linked list. The essential code was already shown in Listing 10.1. The complete application is in the LinkedList subdirectory. Reminder: The Collect button initiates a garbage collection cycle; therefore, it is the only way to deallocate memory in this sample.

Figure 10.5. Timing garbage collection.


The allocation times can be compared directly. For deallocation, it is best to compare the times for the C-Runtime heap deallocation with the times for a collection cycle. Even then, because a destructor doesn't exist in managed code, the comparison between the C-Runtime and managed deallocation is not entirely an apples-to-apples comparison, but it is close. You can see that, as predicted, the allocation from managed code is fast. These two figures show that the managed code allocation is about four times as fast. Suffice it to say, it is significantly faster to allocate from a managed heap than from the C-Runtime heap. For this case, you can see that even an expensive operation such as garbage collection is about five times faster than cleaning up a linked list using the C-Runtime heap.

Optimizing Garbage Collection with Generations

One significant observation that has been made is that objects tend to fall into two categories: short lived and long lived. One of the drawbacks of using the C-Runtime heap is that only one heap exists. The managed heap, in contrast, is divided into three parts that correspond to the age of the objects, known as generation 0, generation 1, and generation 2. Initially, allocations occur in generation 0. If an object survives a collection (it is not destroyed and its space is not reclaimed), then the object is moved to generation 1. If an object survives a collection of generation 1, then it is moved to generation 2. The CLR garbage collector currently does not support a generation higher than generation 2; therefore, subsequent collections will not cause the object to move to another generation.

The LinkedList application pictured in Figure 10.5 also illustrates the progression from one generation to the next. To see an object moving from one generation to the next, click the Allocate Objects button. You will see the left column labeled with Gen0, Gen1, and Gen2. After allocating the list of objects, you will see a count of the number of objects in each of the three generations. By forcing a collection without freeing up the list, you will see the objects moving from one generation to the next. When all of the objects are in generation 2, forcing a collection does not move the objects further. The number of allocated bytes does not change significantly until you deallocate the objects and then force a collection, at which time the memory that is occupied by each object is reclaimed.
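
The same promotion can be observed directly in code with the GC.GetGeneration method. The following fragment is a small sketch rather than part of the LinkedList sample; the exact generation numbers printed can vary with the timing of collections.

LinkedObject probe = new LinkedObject();
Console.WriteLine(GC.GetGeneration(probe));   // newly allocated: generation 0
GC.Collect();
Console.WriteLine(GC.GetGeneration(probe));   // survived one collection: generation 1
GC.Collect();
Console.WriteLine(GC.GetGeneration(probe));   // survived again: generation 2
GC.Collect();
Console.WriteLine(GC.GetGeneration(probe));   // no generation higher than 2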

The obvious performance benefit from having multiple generations is that the garbage collector need not examine the entire heap to reclaim the required space. Remember that generation 0 objects represent objects that are the newest. Based on studies, these generation 0 objects are likely to be the shortest lived; therefore, it is likely that enough space will be reclaimed from generation 0 to satisfy an allocation request. If enough space is available after collecting objects in generation 0, then it is not necessary to examine generation 1 or generation 2 objects. Not examining these objects saves a good deal of processing.

If an object in generation 0 holds a reference to an object in an older generation, the garbage collector can choose to ignore the older object when forming the graph of reachable objects for a generation 0 collection. This allows the garbage collector to form the reachable graph much faster than if the graph had to include every object reachable from generation 0 objects.

With the C-Runtime heap, memory is allocated from wherever room is available. It is quite possible for two consecutive allocations to be separated by many megabytes of address space. Because the garbage collector is constantly collecting and coalescing space on the managed heap, it is more likely that consecutively allocated objects occupy consecutive memory. Empirically, objects tend to interact with and access other objects that are related or were allocated at about the same time. On a practical level, this locality allows an object to quickly access objects that are close by, possibly near enough for the access to be satisfied from the cache.

Finalization

When C++ programmers look at the garbage collection scheme, their first reaction is typically that it's impossible to tell when an object is destroyed. In other words, managed code has no destructors. C# offers a syntax that is similar to C++ destructors. An example destructor is shown in Listing 10.3.

Listing 10.3. A Class with a Destructor
class Foo
{
    public Foo()
    {
        Console.WriteLine("Constructing Foo");
    }
    ~Foo()
    {
        Console.WriteLine("In destructor");
    }
}

When the compiler sees this syntax, it turns it into the pseudo code shown in Listing 10.4.

Listing 10.4. Pseudo Code for a C# Destructor
class Foo
{
    public Foo()
    {
        Console.WriteLine("Constructing Foo");
    }
    protected override void Finalize()
    {
        try
        {
            Console.WriteLine("In destructor");
        }
        finally
        {
            base.Finalize();
        }
    }
}

To further confuse the issue, if you try to implement the pseudo code in Listing 10.4, you get two errors from the C# compiler. The first error forces you to supply a destructor:

Class1.cs(11): Do not override object.Finalize. Instead, provide a destructor.

This is confusing because even though ~Foo() is syntactically identical to the C++ destructor, they are different in many ways—enough that you could say that a destructor does not exist in managed code. The second error is a result of calling the base class Finalize:

Class1.cs(14): Do not directly call your base class Finalize method. It is called
 automatically from your destructor.

However, the IL that the compiler actually generates matches the pseudo code that is shown. The IL code that is generated for the C# destructor in Listing 10.3 is shown in Listing 10.5.

Listing 10.5. IL Code for a C# Destructor
.method family hidebysig virtual instance void
        Finalize() cil managed
{
  // Code size       20 (0x14)
  .maxstack  1
  .try
  {
    IL_0000:  ldstr      "In destructor"
    IL_0005:  call       void [mscorlib]System.Console::WriteLine(string)
    IL_000a:  leave.s    IL_0013
  }   // end .try
  finally
  {
    IL_000c:  ldarg.0
    IL_000d:  call       instance void [mscorlib]System.Object::Finalize()
    IL_0012:  endfinally
  }   // end handler
  IL_0013:  ret
}  // end of method Foo::Finalize

Don't let the syntax fool you. Although Listing 10.3 looks like it declares a destructor, it is not a destructor in the same sense as a C++ destructor. Many of the properties of a destructor that C++ programmers have come to know and love are missing from finalization.

You can get some of the same effects from a Finalize method that you can with a destructor. For example, using the class that is illustrated in Listing 10.3, you are notified when an object is destroyed. Listing 10.6 shows an example.

Listing 10.6. Being Notified When an Object Is Destroyed
Foo f = new Foo();
. . .
f = null;
GC.Collect(0);

Forcing a collection as in Listing 10.6 ensures that the Finalize (destructor) method is called. The output from the code in Listing 10.6 looks like the following:

Constructing Foo
In destructor

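Strictly speaking, the Finalize method runs on a separate runtime thread, so the second line of output can appear slightly after GC.Collect returns. If you want the example to behave predictably, one option (a sketch, not part of Listing 10.6) is to block until the pending Finalize calls have completed:

f = null;
GC.Collect(0);
GC.WaitForPendingFinalizers();   // block until the queued Finalize calls have run
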
That is about where the similarities end. Finalization is syntactically similar to C++ destruction and functionally similar only in simplistic allocation scenarios. What are the differences?

First, you cannot specifically destroy a single object using finalization. Managed code has no equivalent to a C++ delete or a C free. Because the best you can do is force a collection, the order in which the Finalize methods are called is indeterminate. To demonstrate this, a destructor, along with some means of identifying each object, was added to the LinkedObject class illustrated in Listing 10.1. The destructor prints the ID of the object that is being destroyed. Listing 10.7 shows a portion of the result.
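
The modified class is not shown in a listing, but a minimal sketch of the idea looks like the following; the static counter and the id field are hypothetical additions to the LinkedObject class from Listing 10.1.

public class LinkedObject
{
    private static int count = 0;
    private int id;
    private LinkedObject link;

    public LinkedObject()
    {
        id = count++;
    }
    public LinkedObject(LinkedObject node)
    {
        id = count++;
        link = node;
    }
    ~LinkedObject()
    {
        Console.WriteLine("Destroying: {0}", id);
    }
    // Link property as in Listing 10.1 . . .
}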

Listing 10.7. Out of Order Finalization
Destroying: 5
Destroying: 4
Destroying: 3
Destroying: 2
Destroying: 1
Destroying: 0
Destroying: 4819
Destroying: 4818
Destroying: 4817
Destroying: 4816
Destroying: 4815
Destroying: 4814
Destroying: 4813
Destroying: 15058
Destroying: 4812
Destroying: 4811
Destroying: 4810
. . .

It is easy to prove that the Finalize method is not called in any particular order. Besides, even if this simple example had shown that the objects were destroyed in order, Microsoft explicitly states in the documentation not to make assumptions about the order of Finalize calls. This could cause a problem if your object contains other objects. It is possible that the inner object could be finalized before the outer object; therefore, when the outer object's Finalize method is called and it tries to access the inner object, the outcome might be unpredictable. If you implement a Finalize method, don't access contained managed objects from it.

If the call to your Finalize method is put off until near the end of the application, it might appear that the Finalize method is never called because the object was still sitting in the queue when the application quit.

Adding a Finalize method to a class can cause allocations of that class to take significantly longer. When the runtime detects that your object has a Finalize method, it takes extra steps at allocation time to make sure that your Finalize method is eventually called. The application shown in Figure 10.6 illustrates the allocation performance degradation of a class due to an added Finalize method.

Figure 10.6. Timing allocation for a class with a Finalize method.


This figure shows that allocating 100,000 objects with a Finalize method takes about twice as long as doing the same operation on objects without a Finalize method. Sometimes allocations take five to six times as long just because the class has a Finalize method. This application is the same as that pictured in Figure 10.5; the complete source is in the LinkedList subdirectory.
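
The LinkedList application does the timing for you, but you can get a rough feel for the difference with a few lines of code. The following sketch is not part of the sample; the class names are hypothetical, and Environment.TickCount has coarse resolution, so the loop counts may need to be raised on a fast machine before the difference becomes visible.

class PlainNode { }
class FinalizableNode { ~FinalizableNode() { } }
. . .
const int count = 100000;
object[] keep = new object[count];

int start = Environment.TickCount;
for(int i = 0; i < count; i++)
{
    keep[i] = new PlainNode();
}
Console.WriteLine("Without Finalize: {0} ms", Environment.TickCount - start);

start = Environment.TickCount;
for(int i = 0; i < count; i++)
{
    keep[i] = new FinalizableNode();
}
Console.WriteLine("With Finalize: {0} ms", Environment.TickCount - start);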

Another drawback of objects with a Finalize method is that those objects hang around longer, increasing the memory pressure on the system. During a collection, the garbage collector recognizes that an unreachable object has a Finalize method that must be called. The garbage collector cannot free an object (reclaim its memory) until the Finalize method has returned, but it also cannot wait for each Finalize method to return. Therefore, the garbage collector simply hands the object off to a separate thread that calls the Finalize method and moves on to the next object. This leaves the actual destruction of the object and reclamation of the memory to the next collection cycle. Thus, although the object is not reachable and essentially is destroyed, it is still taking up memory until it is finally reclaimed on the next collection cycle. Figure 10.7 illustrates this problem.

Figure 10.7. Deallocation of an object with a Finalize method.


In Figure 10.7, the objects have been allocated with a Finalize method by clicking on the Allocate F-Objects button. Then the objects were deallocated by clicking on the Deallocate F-Objects button. Finally, to arrive at the point illustrated in Figure 10.7, the Collect button was clicked to force a collection. Notice that collecting objects with a Finalize method takes longer (sometimes up to four times longer). The important point is that the number of bytes allocated is still virtually the same as before the collection. In other words, the collection resulted in no memory being reclaimed. You must force another collection to see something like Figure 10.8.

Figure 10.8. A second collection is required to reclaim memory with finalized objects.


The total memory that was allocated before the collection was about 1.3MB, and after the collection, it was only about 95KB. This is where the memory allocated to the linked list is finally released. By watching this same process with the Performance Monitor, you can gather further evidence that the Finalize method call delays the reclamation of memory. This is shown in Figure 10.9.

Figure 10.9. Watching the deallocation of a list of finalized objects.


This figure introduces Finalization Survivors, a term used by the Performance Monitor counters. When an object that has a Finalize method is collected, the actual object is put aside and the Finalize method is called. All of the objects that have a Finalize method are put on the Finalization Survivor list until the next collection, at which time they are destroyed and the memory they occupied is reclaimed. You can see from Figure 10.9 that the memory is actually being reclaimed. At the same time that the number of Finalization Survivors goes to 0, the working set for the application also falls by an amount equivalent to the size of the 100,000 objects that were allocated.

If certain objects still do not have their Finalize methods called when it comes time for the application to shut down, their Finalize methods are ignored. This allows an application to shut down quickly and not be held up by Finalize methods. This could present a problem if your object becomes unreachable and a collection does not occur before the application shuts down. A similar scenario can occur if objects are created during an application shutdown, when an AppDomain is unloading, or if background threads are using the objects. In essence, it is not guaranteed that the Finalize method will be called because of these types of scenarios.

Inner Workings of the Finalization Process

If an object has a Finalize method, then the CLR puts a reference to that object on a finalization queue when the object is created. This is partly why objects with a Finalize method take longer to create. When the object is not reachable and is considered garbage, the garbage collector scans the finalization queue for objects that require a call to a Finalize method. If the object has a reference to it on the finalization queue, then that reference is moved to another queue called the f-reachable queue. When items are on the f-reachable queue, a special runtime thread wakes up and calls the Finalize method that corresponds to that object. That is why you cannot assume anything about the thread that calls the Finalize method associated with your object.

When the Finalize method returns, the reference to the object is taken off the f-reachable queue and the f-reachable thread moves on to the next item in the queue. Therefore, it is important that your object's Finalize method is relatively short and does not block; otherwise, it could hold up the other pending calls to Finalize methods. The references to the object in the f-reachable queue are like root references. As long as a reference to the object exists, it is not considered garbage even though it is only reachable through an internal f-reachable queue. After the Finalize method returns and the reference to the object is removed from the f-reachable queue, then no references to the object exist and it is truly not reachable. That means it is free to be collected as any other object and have its memory reclaimed. This explains why it takes two collection cycles for an object with a Finalize method to have its memory reclaimed.
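
You can drive both cycles from code. The following sketch assumes the Foo class with a destructor from Listing 10.3; the first collection only queues the object for finalization, and the second one reclaims its memory.

Foo f = new Foo();
f = null;

GC.Collect();                    // first cycle: the reference moves to the f-reachable queue
GC.WaitForPendingFinalizers();   // wait for the runtime thread to call Finalize
GC.Collect();                    // second cycle: the object's memory is reclaimed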

Managing Resources with a Disposable Object

With all of the drawbacks of having an object implement a Finalize method, why consider it at all? When you have a file handle, socket connection, database connection, or other resource that is not memory related, the runtime garbage collector does not know how to tear it down. You must implement a method to deterministically clean up the resource. Building an object that implements a Finalize method is one means to this end, but it is not the best overall solution. A Finalize method is meant to be part of a scheme that supports deterministic finalization, not a solution on its own. The scheme involves implementing the IDisposable interface and uses Finalize only as a backup in case the object is not explicitly disposed of as expected. The complete template for code that implements IDisposable and Finalize is shown in Listing 10.8.

Listing 10.8. Building a Deterministically Finalizable Object
public class DisposeObject : IDisposable
{
    private bool disposed = false;
    public DisposeObject()
    {
        disposed = false;
    }
    public void Close()
    {
        Dispose();
    }
    public void Dispose()
    {
        // Dispose managed and unmanaged resources.
        Dispose(true);
        GC.SuppressFinalize(this);
    }
    protected virtual void Dispose(bool disposing)
    {
        // Check to see if Dispose has already been called.
        if(!this.disposed)
        {
            // If disposing equals true, dispose all managed
            // and unmanaged resources.
            if(disposing)
            {
                // Dispose managed resources.
            }
            // Release unmanaged resources. If disposing is false,
            // only the following code is executed.
        }
       // Make sure that resources are not disposed
       // more than once
        disposed = true;     
    }
    ~DisposeObject()
    {
        Dispose(false);
    }
}

Listing 10.9 shows the same pattern being implemented in Visual Basic.

Listing 10.9. A Design Pattern for IDisposable Objects in Visual Basic
Public Class DisposeObject
    Implements IDisposable
    ' Track whether Dispose has been called.
    Private disposed As Boolean = False

    ' Constructor for the DisposeObject Object.
    Public Sub New()
        ' Insert appropriate constructor code here.
    End Sub

    Overloads Public Sub Dispose() Implements IDisposable.Dispose
        Dispose(True)
        GC.SuppressFinalize(Me)
    End Sub

    Overloads Protected Overridable Sub Dispose(ByVal disposing As Boolean)
        If Not (Me.disposed) Then
            If (disposing) Then
                ' Dispose managed resources.
            End If
            ' Release unmanaged resources.
        End If
        Me.disposed = True
    End Sub

    Protected Overrides Sub Finalize()
        Dispose(False)
        ' Give the base class a chance to finalize.
        MyBase.Finalize()
    End Sub
End Class

Listings 10.8 and 10.9 provide templates for the same pattern. As shown, each implementation does nothing other than create and dispose of an object. In a real application, these template methods would be filled in with code that is pertinent to the application. For readability, comments were kept to a minimum. To clarify some of the more important concepts regarding this pattern, each part of the pattern is briefly discussed next.

To make a disposable object, you need to implement the IDisposable interface. This ensures that you implement the proper methods and provides a means of discovering whether the object supports the deterministic finalization model by querying it for the IDisposable interface.

With C#, it is possible to use an object that implements an IDisposable interface and be assured that the Dispose method is called on that object when execution passes beyond the scope that is defined by the using keyword. This syntax looks like this:

using(f)
{
    // Use the object
}

This is roughly equivalent to something like this:

IDisposable d = (IDisposable)f;
try
{
    // Use the object
}
finally
{
    d.Dispose();
}

If the object that is specified does not implement IDisposable, then the compiler generates an error like the following:

LinkedList.cs(525): Cannot implicitly convert type 'LinkedList.LinkedObject' to 'System
.IDisposable'

Because the compiler enforces this, the cast will always succeed at runtime. In addition, users of your object might not know that it implements IDisposable, so they might want to execute the following code to test for the interface:

if(node is IDisposable)
{
    // Now that it is known that the
    // object implements IDisposable,
    // the object can be disposed.
    . . .
}

Therefore, although it is possible to implement your own finalization scheme, using the IDisposable interface makes your object integrate better with the rest of the .NET Framework.

The disposed field caches the state of the object to ensure that it is cleaned up only once. It is up to the designer of the object to protect it from users who call Close or Dispose multiple times.

Only the default constructor is implemented in Listing 10.8. In a real application, the resource that is encapsulated by this object could be constructed as part of the default constructor. Another constructor might be dedicated to creating the resource, or an explicit “open” method could exist for constructing the resource. It would be up to the designer of the class to decide how the encapsulated resource or resources are allocated and initialized.

The method that disposes of the object has been called Close because that method name is familiar to most programmers. This method is primarily in place to make it easier for the user of this class to remember how to clean up an allocated object. A user could always call Dispose directly, but that method name might not be as familiar as Close. This would be the primary method that an object should expose to allow for deterministic finalization. The pattern would be that the object is “opened” via new and a constructor (or an explicit “open” call) and then explicitly closed with this method.
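
Used from client code, the pattern looks something like the following sketch; the try/finally block ensures that Close runs even if an exception is thrown while the object is in use. The C# using statement shown earlier accomplishes the same thing by calling Dispose automatically.

DisposeObject obj = new DisposeObject();
try
{
    // Use the encapsulated resource . . .
}
finally
{
    obj.Close();   // deterministic cleanup; calling Dispose directly works equally well
}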

Dispose is the only method to implement in the IDisposable interface. Do not make it virtual; a derived class should not be able to override Dispose. Instead, a derived class overrides the protected Dispose(bool) method, which the public Dispose ends up calling through polymorphism. As part of the pattern, make sure that Dispose(true) is called to indicate that the object is being disposed directly as a result of a user's call. In addition, ensure that the object is taken off the finalization queue so that its memory is reclaimed promptly rather than waiting for Finalize to complete. To guarantee that cleanup is done properly, keep the parameterless Dispose nonvirtual and let derived classes override Dispose(bool) instead.

The call to GC.SuppressFinalize takes this object off the finalization queue to prevent the finalization code for this object from executing a second time. Because the object is caching its state with the disposed variable, the call to the Finalize method would be a no operation (NOP) anyway. The key is that if this object is not taken off the finalization queue, it is retained in memory while the f-reachable queue is being emptied. By removing the object from the finalization queue, you ensure that the memory the object occupies can be reclaimed in a single collection cycle.

Dispose(bool disposing) executes in two distinct scenarios. If disposing equals true, the method has been called directly or indirectly by a user's code. Managed and unmanaged resources can be disposed. If disposing equals false, the method has been called by the runtime from inside the Finalize method and you should not reference other objects. Only unmanaged resources can be disposed in such a situation; this is because objects that have Finalize methods can be finalized in any order. It is likely that the managed objects that are encapsulated by this object could have already been finalized. Consequently, it is best to have a blanket rule that states that access to managed objects from inside the Finalize method is forbidden. Calling Dispose(false) means that you are being called from inside a Finalize method.

Thread synchronization code that would make disposing of this object thread safe was intentionally not added; this code is not thread safe. Another thread could start disposing of the object after the managed resources are disposed but before the disposed flag is set to true. If synchronization code is added to make this code thread safe, it needs to be written so that it is guaranteed not to block for long, because this code might be called from the destructor and, hence, from the f-reachable queue thread.

This destructor runs only if the Dispose method is not called. If Dispose is called, then GC.SuppressFinalize() is called, which ensures that the destructor for this object does not run. Providing a Finalize method acts as a safeguard in case the user of this class does not properly call Close or Dispose. The C# compiler automatically chains a call to the base class Finalize at the end of the destructor, which gives your base class the opportunity to finalize. Do not provide destructors in types that are derived from this class. Calling Dispose(false) signals that only unmanaged resources are to be cleaned up.
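
For a type that derives from DisposeObject, a minimal sketch of the recommended shape looks like the following; the name DerivedDisposeObject and the resources it would manage are hypothetical.

public class DerivedDisposeObject : DisposeObject
{
    private bool disposed = false;

    protected override void Dispose(bool disposing)
    {
        if(!this.disposed)
        {
            if(disposing)
            {
                // Dispose managed resources owned by the derived class.
            }
            // Release unmanaged resources owned by the derived class.
            disposed = true;
        }
        // Let the base class clean up its own resources.
        base.Dispose(disposing);
    }
    // No destructor here; the base class destructor already calls Dispose(false).
}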

Figure 10.10 shows the allocation of the same linked list. This time, each node implements the IDisposable interface.

Figure 10.10. Allocation of a disposable object list.


The allocation takes roughly the same time as for a class that just implements a Finalize method. This is probably because part of the disposable object pattern is the implementation of a Finalize method as a safeguard in case Dispose or Close is not called. If your design requires more performance than safety, you could take the safeguard out by removing the Finalize (destructor) method from the class. Doing so allows the object to be allocated more quickly, but Close or Dispose must be called or the encapsulated resource will not be properly cleaned up. Deallocation of the list involves explicitly calling Close on every object, so deallocation is rather slow. Of course, that is the price of implementing an algorithm that allows deterministic finalization. If you do not need deterministic finalization, then the disposable pattern is more overhead than you need. The bright spot comes after deallocation. Because each object is explicitly closed, and part of the call to Close is the removal of the object from the finalization queue, the object's memory can be reclaimed right away. It does not take two collection cycles to reclaim an object as it did when only a Finalize method was implemented.
