Update January 24th, 2015

This article has been deprecated in favor of a more recent article.

Sorry for the clickbait title; the gist of the article is: don’t be surprised when switching to unsafe code doesn’t yield the predicted performance benefits. Always benchmark. Also, the article contains misinformation regarding character encoding (as pointed out in the comments), so please be aware of that.


If one idea should be taken from the following paragraphs, it is that the pragmatic programmer will not dive into unsafe C# code to gain performance. It would be a waste of time.

The C# specification describes several situations that call for unsafe code:

[T]here are situations where access to pointer types becomes a necessity. For example, interfacing with the underlying operating system, accessing a memory-mapped device, or implementing a time-critical algorithm may not be possible or practical without access to pointers. To address this need, C# provides the ability to write unsafe code.

The specification implies that unsafe code with pointers is inherently faster. Now, if you know me, you know that if anything promises more speed, I’m all over it; I even flaunt the performance of my code in several products. The question is, how could unsafe code be faster than its safer equivalent? When it comes to arrays (the specification calls pointer-backed arrays “fixed size buffers”), the specification states:

The subsequent elements of the fixed size buffer can be accessed using pointer operations from the first element. Unlike access to arrays, access to the elements of a fixed size buffer is an unsafe operation and is not range checked.
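To make that concrete, here is a minimal sketch of a fixed size buffer. The struct name and capacity are my own invention, and the project must be compiled with /unsafe:

using System;

unsafe struct CharBuffer
{
    public fixed char Data[128]; // hypothetical capacity
}

class FixedBufferDemo
{
    static unsafe void Main()
    {
        var buffer = new CharBuffer();
        // Indexing a fixed size buffer is plain pointer arithmetic.
        // Unlike a char[], there is no range check: buffer.Data[500]
        // would compile and silently stomp on adjacent memory.
        buffer.Data[0] = 'h';
        buffer.Data[1] = 'i';
        Console.WriteLine(buffer.Data[0]);
    }
}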

To give a little background to this statement, when you index an array in .NET there are really three operations going on: a check that the requested index is at least 0, a check that it is less than the number of elements in the array, and then the retrieval of the specified element. Programmers of lower-level languages are probably scoffing at this: they can access an array element in a third of the operations it takes C# programmers. One could hypothesize that, if every operation took the same amount of time, accessing an array element in C/C++ would be three times faster.
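To see the difference side by side, here is a minimal sketch of both access styles (the variable names are mine):

using System;

class ArrayAccess
{
    static unsafe void Main()
    {
        int[] values = { 10, 20, 30 };
        int index = 2;

        // Safe access: before the load, the runtime verifies the index,
        // roughly: if ((uint)index >= (uint)values.Length) throw ...;
        int safe = values[index];

        // Unsafe access: pin the array and use raw pointer arithmetic.
        // No range check is performed; a bad index reads arbitrary memory.
        fixed (int* p = values)
        {
            int raw = p[index];
            Console.WriteLine(safe == raw); // True
        }
    }
}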

With this thought in mind, I turned toward my file parser (Pdoxcl2Sharp) in an attempt to optimize the appending of characters from the underlying stream to form a coherent string. What I love about C#, and programming in general, is that there are many different ways to accomplish the same thing. Creating a string is a great example: I can call ToString() on a string builder, pass in a character array, pass in a null-terminated character pointer, or pass in a null-terminated signed byte pointer. I decided to profile these methods on a 30MB file; here are the partial implementations and results:

//Stringbuilder - appending data
stringBuilder.Append((char)currentByte);
//Stringbuilder - construction of string
stringBuilder.ToString()

//character array - appending data
charBuffer[index++] = (char)currentByte;
//character array - construction of string
new string(charBuffer, 0, index);

//sbyte* - appending data
*bytePtr++ = (sbyte)currentByte;
//sbyte* - construction of string (byteArray is the start of the
//buffer bytePtr writes into; the constructor stops at the first null)
new string((sbyte*)byteArray);

//char* - appending data
*charPtr++ = (char)currentByte;
//char* - construction of string
new string(charPtr);

//Results
StringBuilder: 0.85 seconds
char[]: 0.65 seconds
sbyte*: 1.10 seconds
char*: 0.71 seconds
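For reference, here is a stripped-down sketch of how two of these variants can be timed. The Stopwatch pattern is standard, but the input here is random bytes rather than the original 30MB file:

using System;
using System.Diagnostics;
using System.Text;

class AppendBenchmark
{
    static void Main()
    {
        var input = new byte[30 * 1024 * 1024]; // stand-in for the 30MB file
        new Random(42).NextBytes(input);

        // Variant 1: StringBuilder
        var sw = Stopwatch.StartNew();
        var sb = new StringBuilder(input.Length);
        foreach (byte b in input)
            sb.Append((char)b);
        string s1 = sb.ToString();
        sw.Stop();
        Console.WriteLine("StringBuilder: {0:F2} seconds", sw.Elapsed.TotalSeconds);

        // Variant 2: char[] filled by hand, then one string constructor call
        sw.Restart();
        var chars = new char[input.Length];
        int index = 0;
        foreach (byte b in input)
            chars[index++] = (char)b;
        string s2 = new string(chars, 0, index);
        sw.Stop();
        Console.WriteLine("char[]: {0:F2} seconds", sw.Elapsed.TotalSeconds);
    }
}

The char* and sbyte* variants follow the same shape, with the appending loop wrapped in fixed and unsafe blocks.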

The results are surprising: the pointers are, in general, slower than their managed counterparts. I found the sbyte* result particularly unexpected, as I would have thought that it would correlate with the string’s internal representation and that a simple copy or move of memory would be all that was needed. I must be wrong. (In hindsight, .NET strings are UTF-16 internally, so the sbyte* constructor has to decode and widen every byte rather than copy memory.)

This is not an isolated instance either. In previous releases of an image format library I created, I had made the assumption that using pointers would be faster, as there was plenty of array traversal and low-level constructs such as bitwise operators. I was wrong. Very wrong. I never thought that I would have to rewrite C# pointers back into managed C# (I didn’t use source control). Needless to say, I learned my lesson. I will stay away from unsafe code.

I would like to take a moment to hypothesize why using pointers doesn’t give the speedup I was hoping for. When a C/C++ program is compiled, it is compiled down into machine code. .NET, on the other hand, is compiled down into Microsoft Intermediate Language (MSIL), and when the program is executed, it is just-in-time (JIT) compiled into machine code. In my opinion, this is the reason why, no matter what low-level constructs are taken advantage of, there will not be a performance boost: the produced IL still has to go through another translation step. I will even go as far as suggesting that the JIT compiler won’t optimize unsafe code because it can’t make the same assumptions, and pointers are not C#’s strong suit. These are only conjectures.

Another point that needs to be made is that array traversal in C/C++ is not three times faster, as alluded to earlier. I wanted to trick you. It may take a third of the operations, but not all operations are equal. The time it takes to check whether an index is in range is minuscule compared to the latency of a RAM access (see “Latency Numbers Every Programmer Should Know”). Not to mention that with today’s CPUs’ branch prediction, the in-range check practically disappears.
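In fact, when the JIT can prove an index is always in range, it removes the check entirely. The classic pattern is looping directly against the array’s Length (the method name here is mine):

static long Sum(int[] data)
{
    long total = 0;
    // Because the loop bound is data.Length, the JIT can prove every
    // access is in range and omits the per-iteration bounds check.
    for (int i = 0; i < data.Length; i++)
        total += data[i];
    return total;
}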