
Limiting Concurrent Threads with Semaphores

It’s been some time since I posted my early attempts at making parallel execution of threads easy when working with collections in versions of the .NET Framework prior to 4. It might seem out of date, but believe it or not, many developers are still unable to take advantage of the new runtime. For me this is due to our strong dependence on SharePoint, which unfortunately still requires the 2.0 runtime.

With this limitation strongly in place, I continue to refine my ability to work with threads.

Continue reading


Using SyncRoot for Multithreaded Applications

I previously wrote an update to a post about adding extension methods to the .NET Framework 3.5 to provide simple multithreading capability in an application. In the update I extended my solution, which started threads from the managed thread pool and blocked the main thread until they all completed, to also collect return data from those threads.

To accomplish this I instantiated a List&lt;T&gt; object before starting my threads. Since I knew the number of threads I was going to start, and since they would all return the same type of data, I gave the list an initial capacity so it wouldn’t need to grow. Because the object was instantiated before the threads were started, the child threads had access to it and could add their results.

To ensure there were no collisions during the write operations from the various threads, I added my own synchronous ThreadSafeAdd extension method. The method was very simple: grab the list object’s SyncRoot property and hold a lock on it while calling the Add method.

// Requires using System.Collections (for ICollection) and
// System.Collections.Generic, and must live in a static class
// to be usable as an extension method.
public static void ThreadSafeAdd<T>(this List<T> list, T value)
{
    // Lock on the collection's SyncRoot so every caller
    // synchronizes on the same object.
    object spinLock = ((ICollection)list).SyncRoot;

    lock (spinLock)
    {
        list.Add(value);
    }
}
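To show the extension method in context, here is a minimal sketch of how I use it from the thread pool. The ThreadSafeAdd body is repeated so the sample is self-contained; the Interlocked counter and ManualResetEvent are one common .NET 2.0/3.5-era way to block the main thread until all workers finish, not necessarily the exact mechanism from my earlier post.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Threading;

static class Program
{
    // Same extension method as shown above.
    public static void ThreadSafeAdd<T>(this List<T> list, T value)
    {
        object spinLock = ((ICollection)list).SyncRoot;
        lock (spinLock)
        {
            list.Add(value);
        }
    }

    static void Main()
    {
        const int threadCount = 10;
        // Pre-sized shared list, created before any worker starts.
        var results = new List<int>(threadCount);

        using (var done = new ManualResetEvent(false))
        {
            int pending = threadCount;
            for (int i = 0; i < threadCount; i++)
            {
                int id = i; // capture the loop variable per iteration
                ThreadPool.QueueUserWorkItem(delegate
                {
                    results.ThreadSafeAdd(id * id);   // synchronized write
                    if (Interlocked.Decrement(ref pending) == 0)
                        done.Set();                   // last worker signals
                });
            }
            done.WaitOne(); // block the main thread until all complete
        }

        Console.WriteLine(results.Count); // 10
    }
}
```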

Why was this necessary?

In multithreaded programming you can never be sure which thread will be writing to a shared object at a given time. More problematic is the fact that threads can be preempted at (almost) any point in their execution while another thread does some work. So say I didn’t give the list an initial capacity and let it grow as needed. Memory isn’t added to the existing allocation when the list needs to expand. Instead a new block of memory is allocated, the existing contents are copied to the new location, the object’s internal reference is updated to point at the new memory, and the old space is (eventually) collected for reuse.
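You can watch this reallocation happen through the list’s Capacity property. In Microsoft’s implementation a List&lt;T&gt; created without a capacity typically grows from 0 to 4 and then doubles; the exact numbers are an implementation detail, but each jump below represents a fresh allocation and a copy of the existing contents:

```csharp
using System;
using System.Collections.Generic;

class CapacityDemo
{
    static void Main()
    {
        var list = new List<int>(); // no initial capacity
        int lastCapacity = list.Capacity;

        for (int i = 0; i < 100; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                // A new, larger backing array was allocated and the
                // old contents were copied into it.
                Console.WriteLine("Grew from {0} to {1} at count {2}",
                    lastCapacity, list.Capacity, list.Count);
                lastCapacity = list.Capacity;
            }
        }
    }
}
```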

Imagine what happens when one thread attempts to add an item to a list that has reached capacity. The steps above begin to execute, but before they can complete, the working thread is interrupted and another thread is resumed. Assume that thread wants to add to the same object. The reference to that object still points to the full memory location, so the process outlined above starts again on this thread. When each of these threads completes, each will have a copy of the old data in its own new memory location, to which it adds its item before updating the reference and terminating. The list reference can only point to one location, so whoever updates it last wins. The other thread’s item, which was added only to its own new memory allocation, is lost.
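The interleaving above is easier to see in a deterministic sketch. Here both “threads” are simulated sequentially: each takes a snapshot of the shared array while it is full, grows its own copy, and then publishes its copy back to the shared reference. The names (GrowAndAdd, seenByA, and so on) are mine, invented for illustration; the point is only that the last writer wins and the other item vanishes:

```csharp
using System;

class LostUpdateDemo
{
    static int[] shared = { 1, 2, 3 }; // stand-in for a list at full capacity

    static int[] GrowAndAdd(int[] snapshot, int value)
    {
        // Allocate a bigger array, copy the old contents, append the item.
        var bigger = new int[snapshot.Length * 2];
        Array.Copy(snapshot, bigger, snapshot.Length);
        bigger[snapshot.Length] = value;
        return bigger;
    }

    static void Main()
    {
        // Both "threads" read the same full array before either
        // publishes its replacement.
        int[] seenByA = shared;
        int[] seenByB = shared;

        int[] resultA = GrowAndAdd(seenByA, 40); // only A's copy holds 40
        int[] resultB = GrowAndAdd(seenByB, 50); // only B's copy holds 50

        shared = resultA; // A publishes first...
        shared = resultB; // ...then B overwrites the reference: 40 is lost

        Console.WriteLine(Array.IndexOf(shared, 40)); // -1: A's item vanished
        Console.WriteLine(Array.IndexOf(shared, 50)); //  3: B's item survived
    }
}
```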

To prevent this from happening we have a number of options at our disposal. For my purposes I went with a lightweight lock. This works really well when you don’t expect the lock to be held for long, which is exactly our situation. The first thread that reaches the lock statement gets exclusive access to the statements in the code block; no other thread can execute a block guarded by the same lock object until the thread holding the lock completes. This approach can only succeed if everyone uses the same object for locking.

List&lt;T&gt; implements ICollection, which requires implementers to expose a read-only object property named SyncRoot. From MSDN: “An object that can be used to synchronize access to the ICollection.” If you reflect the List&lt;T&gt; object you can see the implementation. Needless to say, work is done to ensure an instance has a single, unchanging SyncRoot object on which we can lock.

Using an object exposed by the instance is better than using your own. Though you may control all the code that requires synchronization, it’s still conceptually cleaner to lock on a property of the object being protected than on one of your own. Since this property is available, using it should be standard practice. If everyone does it correctly, then objects that become shared at a higher level can continue to ensure they are locking on the same object and therefore not stepping on each other’s toes. If everyone used their own lock object, none of the disparate pieces of code would be locking on the same object, and threads could still collide.
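As a sketch of that point, here are two components (hypothetical names, written for this example) that never reference each other, yet still serialize their access to a shared list because both lock on its SyncRoot. If each declared its own private lock object instead, their critical sections would not exclude one another:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Threading;

// Two unrelated pieces of code that both lock on the list's SyncRoot.
class Producer
{
    public static void Add(List<string> list, string item)
    {
        lock (((ICollection)list).SyncRoot)
        {
            list.Add(item);
        }
    }
}

class Reporter
{
    public static int SafeCount(List<string> list)
    {
        lock (((ICollection)list).SyncRoot)
        {
            return list.Count; // no Add can run while we read
        }
    }
}

class Demo
{
    static void Main()
    {
        var shared = new List<string>();
        var threads = new Thread[4];

        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(delegate()
            {
                for (int j = 0; j < 1000; j++)
                    Producer.Add(shared, "item");
            });
            threads[i].Start();
        }

        foreach (var t in threads) t.Join();
        Console.WriteLine(Reporter.SafeCount(shared)); // 4000
    }
}
```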

After I wrote all this I decided to see whether I was on target with what the new synchronized objects in .NET 4 are using. Since my project is restricted to .NET 3.5 I can’t make use of these objects, but I can take a peek and see if I’m doing it right. Looking at .NET 4’s SynchronizedList.Add(…) I see:

// SynchronizedList is nested inside List<T>, so T is in scope here.
internal class SynchronizedList : IList<T>, ICollection<T>, IEnumerable<T>, IEnumerable
{
    // Fields
    private List<T> _list;
    private object _root;

    // Methods
    internal SynchronizedList(List<T> list)
    {
        this._list = list;
        this._root = ((ICollection) list).SyncRoot;
    }

    public void Add(T item)
    {
        lock (this._root)
        {
            this._list.Add(item);
        }
    }
    ...
}

SynchronizedList assigns the SyncRoot property to _root and locks on it in the same way as I’ve done in the code snippet above.

There are other places where SyncRoot may be exposed. If you find yourself wanting to synchronize access to an object, take a look at its properties and see if something is already available. It’ll be safer and faster than defining your own.
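For example, the non-generic System.Collections types such as Queue, ArrayList, and Hashtable expose SyncRoot as a public property, so no ICollection cast is needed:

```csharp
using System;
using System.Collections;

class QueueDemo
{
    static void Main()
    {
        var queue = new Queue();

        // Non-generic collections expose SyncRoot directly.
        lock (queue.SyncRoot)
        {
            queue.Enqueue("job");
        }

        Console.WriteLine(queue.Count); // 1
    }
}
```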

lock(Signature.SyncRoot) {
   Signature.Text = "-Erik";
}