Daniel Fortunov's Adventures in Software Development

DDD9: DeveloperDeveloperDeveloper! Day 9

0 Comments · Written on 05-Feb-2011 by asqui

Last Saturday marked the DeveloperDeveloperDeveloper! Day 9 event (DDD9), the latest in a series of community-organised one-day conference events named after Steve Ballmer's famous outburst at a developer conference. DeveloperDeveloperDeveloper! Days are an opportunity to hear your peers from the technical community share learnings and insights about various technologies and experiences they've had. For me, it's a great way to get a taste for tools and technologies that I've not had time to explore in my own time.

The event venue was generously sponsored by Microsoft at their offices in Reading, including four presentation rooms, snacks, two meals, and a sea of teacups waiting to be filled, ever replenished by the diligent hospitality team that gave up their Saturday to keep us fed and watered.

This time there were some tough decisions to be made -- with four presentation rooms and six timeslots throughout the day there were 24 sessions to choose from, but since none of them were recorded, one could see only six without being in two places at once. I think I chose wisely and enjoyed each of the sessions I attended, getting to hear about AJAX and jQuery, RavenDB, Behaviour Driven Development, Ruby on Rails, and PowerShell.

My favourite session of the day was not on any specific tool or technology, but on one developer's experience of a large-scale software re-write project. Phil Collins started his presentation by citing Joel Spolsky's quote that rewriting software is the single worst mistake you can make (in fact, his session was named after this) and then deftly sidestepped this assertion by pointing out that their industry regulators were forcing them to do this re-write. The session was very well presented and I was highly impressed by Phil's calm and measured approach to this big undertaking. One point he kept pushing is that re-writes fail most of the time, and if you rush in you are even more likely to fail. So instead they took it slow and steady, worked out a way for the existing code-base to coexist with the re-write so they could deliver customer value whilst phasing in the re-write gradually, and then started this multi-year undertaking. Other challenges included office politics (like convincing the people who knew about the existing software that a re-write was a good idea, and not alienating them because all the "documentation" of the previous system was in the head of just a few developers) and negotiating with regulators. An inspiring talk.


Concurrent Programming on Windows

0 Comments · Written on 08-Jun-2010 by asqui

'Concurrent Programming on Windows' book cover Concurrent Programming on Windows (by Joe Duffy) is a book so good I had to put it down, frequently, to stop and think. The information density is pretty high and I often found myself staring blankly into space for minutes at a time, book open on my lap, thinking through what I’d just read.

This is probably the definitive book on concurrency in Windows, covering general principles and the relevant APIs across both native (Win32) and managed (.NET). It has a good balance of theoretical discussion and practical advice, with no shortage of references at the end of each chapter for those who feel inclined for some additional background reading. (For instance, the “Further Reading” section at the end of Chapter 10 “Memory Models and Lock Freedom” points to some light reading: AMD x86-64 Architecture Programmer’s Manual, Volumes 1–5 (!))

What makes this book truly valuable is the amount of information and knowledge that it aggregates, from obscure technical sources, academic papers, and even first-hand spelunking in the Windows source code to find answers to some undocumented behavioural details. It also provides plenty of practical advice garnered from years of experience.

For me, having mainly managed programming experience, this book provided a nice opportunity to understand more about the underlying obscurities of Win32, and how these relate to and contrast with what is exposed in .NET. Having that underlying knowledge has let me see how passing the Invalid Wait Handle value to some asynchronous methods can make them execute synchronously instead, and to understand that asynchronous I/O needs to be decided on when a file handle is opened — details that had previously eluded me in my journeys through managed-land.

Other things that were interesting to learn about included lock-free algorithms (with clever tricks like structuring a lock-free linked list such that it has a sentinel node when empty, cunningly avoiding the problem of updating two pointers when the list transitions between empty and non-empty), and the details of kernel-mode synchronisation primitives, with their limitless caveats (the abandoned mutex scenario was my favourite… when waiting on a named mutex it is possible that it would have been abandoned if another process exited before releasing it. Despite returning an error, the operation has succeeded in acquiring the mutex and you must still remember to release it! As if you didn’t have enough to think about by that point, with all the other complexities around alertable waits and pumping the message queue if you’re in an STA).
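The abandoned-mutex behaviour surfaces in .NET as an exception that, counter-intuitively, indicates a *successful* acquisition. A minimal sketch (the mutex name "DemoMutex" is purely illustrative):

```csharp
using System;
using System.Threading;

class AbandonedMutexDemo
{
    static void Main()
    {
        using (var mutex = new Mutex(false, "DemoMutex"))
        {
            try
            {
                mutex.WaitOne();
            }
            catch (AbandonedMutexException)
            {
                // Despite the exception, the wait HAS succeeded and this
                // thread now owns the mutex -- protected state may be
                // corrupt, but the mutex must still be released.
                Console.WriteLine("Mutex was abandoned, but is now acquired.");
            }

            // ... do work under the mutex ...
            mutex.ReleaseMutex();
        }
    }
}
```

Forgetting the `ReleaseMutex()` in the catch path is exactly the trap described above: the next waiter would then see the mutex as abandoned in turn.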

I previously said that, at a length of 736 pages, CLR via C# (2nd Edition) was the largest book I have ever read. But with a length of 930 pages, Concurrent Programming on Windows has surpassed this. Next up on the reading list: CLR via C# (3rd Edition).


Asynchronous File I/O in .NET

1 Comment · Written on 10-May-2010 by asqui

Another useful snippet of knowledge gained from reading Concurrent Programming on Windows (by Joe Duffy):

Did you know that asynchronous file I/O in .NET is not just about calling FileStream.BeginRead() or BeginWrite() in place of Read() or Write()? You should also make sure that the FileStream is opened for asynchronous operations, otherwise you’ll quietly get less performant ‘mock’ async operations that just execute synchronous I/O on the thread pool, rather than using true overlapped I/O at the Win32 level.

Excuses, Excuses

The natural starting point for creating a FileStream is the static File.Open() method, the documentation for which mentions nothing about synchronicity of the FileStream that is created! Nor does it allow you to provide FileOptions (which are used to specify the magic FileOptions.Asynchronous flag).

Instead, the FileStream is created with FileOptions.None. Any asynchronous operations are quietly faked by the obliging implementation of the Stream base class, which merely wraps the corresponding synchronous method in a delegate and invokes it on the thread pool using the BeginInvoke() method.
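The faked asynchrony can be demonstrated with a plain delegate (a simplified illustration of the pattern, not the actual BCL source; note that delegate `BeginInvoke` is a .NET Framework feature):

```csharp
using System;
using System.IO;
using System.Text;

class FakeAsyncDemo
{
    static void Main()
    {
        var stream = new MemoryStream(Encoding.ASCII.GetBytes("hello"));
        var buffer = new byte[5];

        // Wrap the synchronous Read in a delegate and BeginInvoke it:
        // the read runs on a thread-pool thread, not via overlapped I/O.
        // This is the same trick the Stream base class uses.
        Func<byte[], int, int, int> read = stream.Read;
        IAsyncResult ar = read.BeginInvoke(buffer, 0, buffer.Length, null, null);

        int bytesRead = read.EndInvoke(ar);
        Console.WriteLine(bytesRead);
    }
}
```

From the caller's point of view this looks asynchronous, but a thread-pool thread is blocked on the synchronous I/O for the duration.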

This is a deviation from the usual ‘pit of success’ design philosophy, where everything in .NET seems to work as you think it would, without a need to closely read the documentation and/or gradually discover obscure catches and gotchas over time.

On Balance

Admittedly I’ve never actually used asynchronous file I/O (the applications I’ve worked on have used databases, queues, and other remote data persistence rather than local files), or else I might have read the FileStream.BeginRead() and BeginWrite() documentation a little more closely:

FileStream provides two different modes of operation: synchronous I/O and asynchronous I/O. While either can be used, the underlying operating system resources might allow access in only one of these modes. By default, FileStream opens the operating system handle synchronously. In Windows, this slows down asynchronous methods. If asynchronous methods are used, use the FileStream(String, FileMode, FileAccess, FileShare, Int32, Boolean) constructor.


That last Boolean parameter to the FileStream constructor is called useAsync and, if true, results in FileOptions.Asynchronous being used (or you can also use the other constructor overload which takes FileOptions in the last parameter, and specify FileOptions.Asynchronous yourself).

The underlying Stream.BeginRead() and BeginWrite() methods also talk about synchronicity:

The default implementation of BeginRead on a stream calls the Read method synchronously, which means that Read might block on some streams. However, instances of classes such as FileStream and NetworkStream fully support asynchronous operations if the instances have been opened asynchronously. Therefore, calls to BeginRead will not block on those streams. You can override BeginRead (by using async delegates, for example) to provide asynchronous behavior.

I think this documentation is out of date, or at least a little unclear. The default implementation of BeginRead does not call Read synchronously — Reflector shows that it calls Read by wrapping it in a delegate and calling BeginInvoke, which would result in it being called on a thread pool thread. This is an asynchronous call (with respect to the caller of BeginRead).

The suggestion to implement your own asynchronous behaviour "using async delegates" reinforces this suspicion: what advantage would that give you over the default implementation, which does just the same?

As ever, the truth lies in Reflector.


In summary, if you want to do asynchronous file I/O:

  • Don’t use File.Open() to create your FileStream — it will be opened for synchronous I/O.
  • Create the FileStream directly, specifying useAsync=true (or options=FileOptions.Asynchronous) — this will open the Win32 file handle for overlapped I/O.
  • Use BeginRead() and BeginWrite() as normal — the framework will hide the details of overlapped operations behind the Asynchronous Programming Model.
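Putting the three points above together (the file path is a placeholder; `FileStream.IsAsync` lets you confirm the handle really was opened for overlapped I/O):

```csharp
using System;
using System.IO;

class AsyncReadDemo
{
    static void Main()
    {
        // FileOptions.Asynchronous opens the Win32 handle for overlapped
        // I/O, so BeginRead/BeginWrite are truly asynchronous.
        using (var stream = new FileStream(
            "data.bin", FileMode.Open, FileAccess.Read, FileShare.Read,
            bufferSize: 4096, options: FileOptions.Asynchronous))
        {
            Console.WriteLine(stream.IsAsync); // True for overlapped I/O

            var buffer = new byte[4096];
            IAsyncResult ar = stream.BeginRead(buffer, 0, buffer.Length,
                                               null, null);

            // ... do other work while the overlapped read is in flight ...

            int bytesRead = stream.EndRead(ar);
            Console.WriteLine("Read {0} bytes", bytesRead);
        }
    }
}
```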

Finally, if you’re using asynchronous I/O you must care about performance, so don’t forget to measure, measure, measure! And heed the warning hidden in the documentation of that useAsync parameter:

Specifies whether to use asynchronous I/O or synchronous I/O. However, note that the underlying operating system might not support asynchronous I/O, so when specifying true, the handle might be opened synchronously depending on the platform. When opened asynchronously, the BeginRead and BeginWrite methods perform better on large reads or writes, but they might be much slower for small reads or writes. If the application is designed to take advantage of asynchronous I/O, set the useAsync parameter to true. Using asynchronous I/O correctly can speed up applications by as much as a factor of 10, but using it without redesigning the application for asynchronous I/O can decrease performance by as much as a factor of 10.

ILMerge in MSBuild

2 Comments · Written on 09-Feb-2010 by asqui

ILMerge is a utility from Microsoft Research that combines multiple .NET assemblies into a single assembly. This is convenient when you want to combine your application and its dependencies into a single DLL file, for example, to make deployment and versioning easier.

ILMerge is released as a console application but also exposes an API to allow you to use it in other applications. For example, I see there are some GUI applications to ease the burden of typing in all those command line switches. ILMerge is mysteriously missing from the community collections of MSBuild tasks, such as the SDC Tasks Library and MSBuild Extended Tasks, probably because it is perfectly feasible to invoke the ILMerge executable using the Exec task that is provided with MSBuild.

The Goal

The goal is to integrate ILMerge into MSBuild, such that it runs automagically every time the project is built (either within Visual Studio, or with MSBuild from the command line).

Unfortunately there are some interesting details to integrate smoothly into the build, such as making sure the task handles incremental builds properly (so that adding ILMerge to one project in a solution doesn’t force a re-build of that entire sub-tree every time you build!)

I’ve not been able to find an adequate pre-canned way to achieve this, but I’ve hacked something together starting from Jomo Fisher’s solution and addressing some of the shortcomings I found along the way.

The Solution

Hand-edit your MSBuild project (e.g. *.csproj) file to tag the referenced assemblies you’d like to merge with the ILMerge=True metadata, like this:

<Reference Include="DependencyLibrary, Version=, Culture=neutral, processorArchitecture=MSIL">
  <HintPath>Referenced Assemblies\DependencyLibrary.dll</HintPath>
  <ILMerge>True</ILMerge>
</Reference>

(Note that it is not necessary to set CopyLocal=True for the target assemblies.)

Then, define the following targets and properties at the bottom of your MSBuild project (just above the </Project> tag):

<Target Name="AfterBuild" DependsOnTargets="ILMerge" />
<Target Name="ILMerge" Inputs="@(IntermediateAssembly)"
        Outputs="@(MainAssembly -> '%(RelativeDir)%(Filename).ILMergeTrigger%(Extension)')">
  <CreateItem Include="@(ReferencePath)" Condition="'%(ReferencePath.ILMerge)'=='True'">
    <Output TaskParameter="Include" ItemName="ILMergeAssemblies" />
  </CreateItem>
  <Exec Command="$(ILMergeExecutable) /Closed /Internalize /Lib:$(OutputPath) /keyfile:$(KeyFile) /out:@(MainAssembly) &quot;@(IntermediateAssembly)&quot; @(ILMergeAssemblies->'&quot;%(FullPath)&quot;', ' ')" />
  <!-- Make a copy of the merged output DLL to use as a trigger for incremental builds -->
  <Copy SourceFiles="@(MainAssembly)"
        DestinationFiles="@(MainAssembly -> '%(RelativeDir)%(Filename).ILMergeTrigger%(Extension)')" />
</Target>

Here's the full working solution: ILMergeExperiments

There are a couple of hacks here to deal with the fact that we want our ILMerged assembly to have the same name as the original:

  1. We are referencing the intermediate assembly from the ‘obj’ directory as input (because you can’t have the same file as both input and output to ILMerge)
  2. A copy of the output file is saved to Foo.ILMergeTrigger.dll, and it is this file that is named as the output of the build target. This is in order to correctly participate in the dependency analysis that is used during incremental builds (because if your merged output assembly has the same name as the unmerged output assembly, then the standard build will overwrite your merged assembly and make it look ‘up to date’, and your ILMerge task will not be executed because its outputs are up to date!)

This is somewhat hacky, and I’m sure there must be a more cunning way to integrate into MSBuild; I’ll have to revisit this once I’ve read the book Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build.


Invalid Wait Handle

0 Comments · Written on 31-Jan-2010 by asqui

One of the obscure gems garnered from my current reading of the book Concurrent Programming on Windows (by Joe Duffy) is an insight into the INVALID_HANDLE_VALUE constant.


In Win32 programming, functions that return a HANDLE (such as CreateFile) may return INVALID_HANDLE_VALUE to indicate failure (sometimes). You can check for this return value and call GetLastError to find out why the operation failed.

In .NET functions typically indicate unexpected failure by throwing an exception. The endless dance of “Do something; Did it succeed? If not, why did it fail. Do something else; Did it succeed? …” is replaced by structured exception handling and constructs such as try-catch, which let you defer thinking about error scenarios until you want to, rather than thinking… about errors… at every… step… of… the… way.

So if .NET methods such as File.Open() throw exceptions rather than returning INVALID_HANDLE_VALUE, there is no need to expose INVALID_HANDLE_VALUE in the .NET BCL, right? Not quite.

INVALID_HANDLE_VALUE as an input parameter

In addition to being used as a magic return value indicating failure, INVALID_HANDLE_VALUE also has some magic powers with methods that accept a HANDLE as a parameter. Now, you won’t get any useful behaviour from passing INVALID_HANDLE_VALUE to CloseHandle; however, there is a group of functions that let you provide an event handle, do some asynchronous work, and then signal your event to let you know the work has been completed.

Functions such as UnregisterWaitEx and DeleteTimerQueueTimer will cancel any pending registered wait operation or timer-queue timer; however, if a callback has already been triggered, it will still run to completion. If you need to clean up any resources used by your callback, to avoid pulling the rug out from under its feet, you must first ensure that your callback is not still executing. To avoid having to manually introduce control synchronisation in your callback, UnregisterWaitEx and DeleteTimerQueueTimer let you provide an event handle which will be signalled when any executing callbacks have returned.

If you don’t want the overhead of allocating another event and then registering a wait on it (in order to perform the clean-up asynchronously, when the event is signalled) you can tell the function to block and wait for any executing callback functions to complete before returning, by providing INVALID_HANDLE_VALUE for the wait handle.

What about .NET?

Now the interesting part: Since we previously concluded that there is no need to expose INVALID_HANDLE_VALUE in the .NET BCL, how would we get this handy blocking behaviour from the .NET equivalents to the methods mentioned above: RegisteredWaitHandle.Unregister(WaitHandle) and System.Threading.Timer.Dispose(WaitHandle)?

The MSDN documentation makes no suggestion that this behaviour is even possible (not even in the preview documentation for .NET 4). I’m not sure if this is an oversight or an intentionally unsupported behaviour.

To work around this we can do a little poking around the BCL with Reflector:

  • WaitHandle, the base class for all events in .NET, has a static readonly IntPtr field called InvalidHandle which is populated with the value of INVALID_HANDLE_VALUE; but
  • Alas, WaitHandle.InvalidHandle is protected, rather than public! (Why, oh why?)
  • Conveniently, the internal handle value is initialised by the default constructor of WaitHandle to InvalidHandle; but
  • Alas, WaitHandle is marked abstract, so we can't instantiate it directly.


So the only way to get at INVALID_HANDLE_VALUE in .NET is to subclass WaitHandle. We don't actually need to do anything in our subclass, mind you:

public class InvalidWaitHandle : WaitHandle { }

So there you have it, pretty convoluted but works like a charm!
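A quick usage sketch of the one-liner above (the timer work and timings are purely illustrative):

```csharp
using System;
using System.Threading;

// As defined in the post: an empty subclass whose internal handle is
// initialised to INVALID_HANDLE_VALUE by the base constructor.
public class InvalidWaitHandle : WaitHandle { }

class Demo
{
    static void Main()
    {
        var timer = new Timer(_ => Thread.Sleep(100), null, 0, 50);
        Thread.Sleep(25); // give a callback a chance to start

        // Passing our InvalidWaitHandle maps to INVALID_HANDLE_VALUE at the
        // Win32 level, so Dispose blocks until any executing callbacks have
        // completed, instead of signalling an event asynchronously.
        timer.Dispose(new InvalidWaitHandle());
        Console.WriteLine("All callbacks have completed.");
    }
}
```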

Here’s the full version, with documentation and a cached instance:

using System.Threading;

/// <summary>
/// An inert wait handle that can be used to avoid allocating a real event in
/// some situations.
/// </summary>
/// <remarks>
/// <para>
/// An <see cref="InvalidWaitHandle"/> can be provided to methods such as
/// <see cref="RegisteredWaitHandle.Unregister(WaitHandle)"/> and 
/// <see cref="Timer.Dispose(WaitHandle)"/>. In this case, the function waits
/// for all callback functions to complete before returning, rather than
/// returning immediately and signalling the provided wait handle
/// asynchronously.
/// </para>
/// <para>
/// Internally, this results in the use of INVALID_HANDLE_VALUE when calling
/// the underlying Win32 functions.
/// </para>
/// <para>
/// For further information, see "Concurrent Programming on Windows" (First
/// Edition, 2009) by Joe Duffy, p. 374, 377.
/// </para>
/// </remarks>
public class InvalidWaitHandle : WaitHandle
{
    static InvalidWaitHandle()
    {
        Instance = new InvalidWaitHandle();
    }

    /// <summary>
    /// Gets a shared instance of <see cref="InvalidWaitHandle"/> which may
    /// be re-used.
    /// </summary>
    /// <remarks>
    /// Using this property allows a single <see cref="InvalidWaitHandle"/> to
    /// be re-used as opposed to creating a <c>new</c> instance at every call
    /// site.
    /// </remarks>
    /// <value>A shared instance of <see cref="InvalidWaitHandle"/> which may
    /// be re-used.</value>
    public static InvalidWaitHandle Instance { get; private set; }
}