Breaking .NET’s Random class

Security is hard. In a recent project I saw some code that created access tokens based on a random number generator – .NET’s Random class. The code used an instance of Random stored in a static field, and I got curious:

If you have such a long living Random instance, could you predict the random values after generating a few of them?

It turns out, it is possible. And you only need to read 55 “random” values to predict all future values.

Read more →

Docker, .NET Core and (NuGet) Dependencies

Recently, I wanted to try out the new .NET Core together with a Docker container. However, coming from programming .NET applications for the regular .NET Framework, I encountered some obstacles. This one is about NuGet packages.

The Goal

The goal is to have a .NET Core console application with some NuGet dependencies running in a Docker container.

I’ll be using Visual Studio 2015 (Community Edition) for this article, but you can also use any other IDE that supports .NET Core projects. As such, I’ll try to minimize the dependency on Visual Studio in this article.

To better understand how a .NET Core application integrates with Docker, I will not use the Docker Tools for Visual Studio. While they work, they add a lot of “magic” to the build process. And this magic makes it hard to understand what’s going on.

Download the Example Code

To keep the article brief, I’ll just explain the important parts.

You can find the complete source code on my GitHub:

Note that you can examine the commits to see how the example evolves along with this article.

The Program

The program I’m going to write is very simple:

using System;
using Newtonsoft.Json;

namespace DockerCoreConsoleTest
{
    public class Program
    {
        public static void Main()
        {
            Console.WriteLine($"Hello, Docker and {typeof(JsonConvert).FullName}!");
        }
    }
}

It just uses .NET Core and the Newtonsoft.Json NuGet package as a dependency.

Building with Visual Studio

Building the application in Visual Studio is pretty straightforward.

  1. Make sure you have the .NET Core Visual Studio Tooling installed.
  2. Create a new .NET Core Console Application project/solution called DockerCoreConsoleTest.
  3. Use NuGet to add Newtonsoft.Json to the project.
  4. Copy the code from above into your Program.cs
  5. Run the program

You should see the following output:

Hello, Docker and Newtonsoft.Json.JsonConvert!

If you run into any trouble, go and check out the example code.

Running in Docker

So far, so good. Now let’s execute this program in a Docker container.

Note: If you haven’t installed Docker yet, you can download it here.

For this, we’ll use the following Dockerfile:

FROM microsoft/dotnet:1.0.0-core
COPY bin/Debug/netcoreapp1.0/ /app/
ENTRYPOINT ["dotnet", "DockerCoreConsoleTest.dll"]

Then build the Docker image with:

docker build -t dockercoreconsoletest .

Then run the Docker image with:

docker run dockercoreconsoletest

This will give you this result:

Error: assembly specified in the dependencies manifest was not found 
-- package: 'Newtonsoft.Json', version: '9.0.1', path: 'lib/netstandard1.0/Newtonsoft.Json.dll'

Not what one would expect.

The Problem(s)

The problem here is that – unlike .NET projects for the regular .NET Framework – the build process for a .NET Core project (dotnet build) does not copy any dependencies into the output folder.

If you look into bin\Debug\netcoreapp1.0 you’ll find no Newtonsoft.Json.dll file there.

There’s a second problem (or more an inconvenience). The Dockerfile contains the following line:

COPY bin/Debug/netcoreapp1.0/ /app/

This line depends on the build configuration that’s being used. If you built a Release build, the Dockerfile wouldn’t work anymore.

The Solution

In .NET Core projects you use the dotnet publish command to gather all dependencies in one directory (default is bin/CONFIG/netcoreapp1.0/publish).
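For example (the exact output path depends on the build configuration; -c Debug is the default):

```
dotnet publish -c Debug
# all dependencies, including Newtonsoft.Json.dll, are now gathered in:
# bin/Debug/netcoreapp1.0/publish
```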

So, running this command fixes the first problem. But it can also fix the second problem.

First, we can add the following lines to the project’s project.json file:

"publishOptions": {
  "include": [
    "Dockerfile"
  ]
}

Now, when running dotnet publish, the Dockerfile will be copied to the publish directory as well.

This also means that we can change the COPY directive in the Dockerfile to:

COPY . /app/

This way, the Dockerfile is independent of the build configuration.
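The full Dockerfile then becomes (assuming docker build is invoked with the publish directory as the build context):

```
FROM microsoft/dotnet:1.0.0-core
COPY . /app/
ENTRYPOINT ["dotnet", "DockerCoreConsoleTest.dll"]
```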

We could go one step further and actually build the docker image as part of the publish process. To do this, add the following lines to the project’s project.json file:

"scripts": {
  "postpublish": [
    "docker build -t dockercoreconsoletest %publish:OutputPath%"
  ]
}

Two notes on this:

  1. I have not found a suitable variable (like %publish:OutputPath%) yet that could be used for the docker tag (-t). So, for the time being, the tag has to be hard-coded here.
  2. Building a docker image as part of publish process may not be for everyone. I like the idea mainly because I haven’t come across any (relevant) downsides of doing this.
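Putting both pieces together, the relevant part of the project.json file looks like this:

```
{
  "publishOptions": {
    "include": [
      "Dockerfile"
    ]
  },
  "scripts": {
    "postpublish": [
      "docker build -t dockercoreconsoletest %publish:OutputPath%"
    ]
  }
}
```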

Wrapping Things Up

You can now run:

dotnet publish
docker run dockercoreconsoletest

This will give you the expected output:

Hello, Docker and Newtonsoft.Json.JsonConvert!

This is my first shot at Docker and .NET Core. If you find any error or have suggestions for improvements, please leave them in the comments below.

Alternative Solution: Using dotnet:latest as base image

There’s another solution to the problem(s) described in this article. This solution is less “clean”, in my opinion, but I thought I’d mention it anyway.

In the Dockerfile, instead of using microsoft/dotnet:1.0.0-core as the base image, one could use microsoft/dotnet:latest. This gives the Docker container access to dotnet build (whereas the -core base image can only run pre-built applications via dotnet someapplication.dll).

You may then build the .NET Core application from within the container with a Dockerfile like this:

FROM microsoft/dotnet:latest
COPY . /app

RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]

ENTRYPOINT ["dotnet", "run"]

This approach has some disadvantages:

  1. The container will generally be bigger than with the solution proposed in the rest of the article.
  2. You need to copy all source code into the container (and it will stay there).

    • Depending on how you execute docker build, this container may even contain the build output of configurations that you don’t intend to run (e.g. bin/Debug when actually running a release build).
    • Removing the source code after building the application (or having lots of RUN directives in general) may be inefficient in regard to Docker’s container layering system and Build Cache.
  3. Running dotnet restore will re-download all NuGet dependencies every time the container image is built. This will increase the build time and cause unnecessary network traffic – especially if the application is built often as part of some continuous integration process.
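If you still want to build inside the container, one common mitigation for the restore problem (a sketch, not part of the solution above) is to copy just project.json first, so that Docker’s build cache can reuse the dotnet restore layer as long as the dependencies are unchanged:

```
FROM microsoft/dotnet:latest
WORKDIR /app

# Restore in its own layer: this layer is only rebuilt
# when project.json changes.
COPY project.json /app/
RUN ["dotnet", "restore"]

# Now copy the rest of the sources and build.
COPY . /app
RUN ["dotnet", "build"]

ENTRYPOINT ["dotnet", "run"]
```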

Switch Azure Development Storage to SQL Server – Step by Step

Update (2016-04-04): This article has been updated for Azure SDK 2.8 and SQL Server Developer Edition.

While developing for Windows Azure, I recently got lots of StorageExceptions reading “(500) Internal Server Error.” This usually means that the SQL database that holds the table storage data is full. (SQL Server Express 2008 and earlier have a 4 GB limit; SQL Server Express 2012 and later have a 10 GB limit.)

Some time ago, Microsoft released its SQL Server Developer Edition for free. This edition doesn’t have a database size limit.

Here is how to use this SQL Server (or any SQL server instance) as storage for the table storage emulator:

  1. Open Microsoft Azure Storage command line from the start menu.
  2. cd "Storage Emulator"
  3. (Re-)initialize the storage emulator (for the full command-line reference, see here):

    1. For the default SQL instance: AzureStorageEmulator.exe init -server .
    2. For a named SQL instance: AzureStorageEmulator.exe init -sqlinstance "MSSQLSERVER"

That’s it.

Note: If you use the “named SQL instance” option from above but the instance is actually the default instance, you will get this error:

Error: User-specified instance not found. Please correct this and re-run initialization.

Default instance or named instance

You can run multiple SQL Server instances on the same computer. Each instance has a name (e.g. MSSQLSERVER, SQLEXPRESS) and one of the instances may be the default instance. Depending on this, you need to use either the “default SQL instance” or the “named SQL instance” option above.

Unfortunately, I haven’t found a simple way to figure out which instance is the default instance.

The one solution I found is to use Microsoft SQL Server Management Studio and try to connect to .\INSTANCENAME (e.g. .\MSSQLSERVER). If you can’t connect, then this instance is (most likely) the default instance.
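Alternatively, if you can connect to the instance at all, you can ask SQL Server itself: SERVERPROPERTY('InstanceName') returns NULL for the default instance and the instance name otherwise.

```
SELECT SERVERPROPERTY('InstanceName');
-- NULL         -> default instance
-- 'SQLEXPRESS' -> named instance
```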

How to get the name of the SQL instance

The default SQL instance names are as follows:

  • SQL Server (non-Express editions): MSSQLSERVER
  • SQL Server Express: SQLEXPRESS

You can list all instance names with the SQL Server Configuration Manager that comes with your SQL Server installation.

It’ll give you something like this:

(Screenshot: SQL Server Configuration Manager listing the installed SQL Server instances.)

Projects in Visual C++ 2010 – Part 1: Creating a DLL project

When you write software, you often divide your project into several subprojects. This mini series describes how to do this with Visual C++ 2010 (but this first part also applies to earlier versions). We start by creating a library project in the form of a DLL.

Read more →

Mutexes in .NET

The Mutex class in .NET is a little bit tricky to use.

Here’s an example of how I got it to do what I want:

/// <summary>
/// A simple, cross application mutex. Use <see cref="Acquire"/> to acquire it
/// and release it via <see cref="Dispose"/> when you're finished.
/// </summary>
/// <remarks>
/// Only one thread (and thus process) can have the mutex acquired at the same
/// time.
/// </remarks>
public class SimpleMutex : IDisposable
{
    private readonly Mutex m_mutex;

    /// <summary>
    /// Acquires the mutex with the specified name.
    /// </summary>
    /// <param name="mutexName">the mutex's name</param>
    /// <param name="timeout">how long to try to acquire the mutex</param>
    /// <returns>Returns the mutex or <c>null</c>, if the mutex couldn't be
    /// acquired in time (i.e. the current mutex holder didn't release it in
    /// time).</returns>
    public static SimpleMutex Acquire(string mutexName, TimeSpan timeout)
    {
        var mutex = new SimpleMutex(mutexName);
        try
        {
            if (!mutex.m_mutex.WaitOne(timeout))
            {
                // We could not acquire the mutex in time.
                mutex.m_mutex.Dispose();
                return null;
            }
        }
        catch (AbandonedMutexException)
        {
            // We now own this mutex. The previous owner didn't
            // release it properly, though.
        }

        return mutex;
    }

    private SimpleMutex(string mutexName)
    {
        this.m_mutex = new Mutex(false, mutexName);
    }

    public void Dispose()
    {
        this.m_mutex.ReleaseMutex();
        this.m_mutex.Dispose();
    }
}

You can use it like this:

using (SimpleMutex.Acquire("MyTestMutex", Timeout.InfiniteTimeSpan))
{
    Console.WriteLine("Acquired mutex");
    Console.ReadKey();
}

Console.WriteLine("Released mutex");

If you run your program twice, one will acquire the mutex and the other one will wait – until you press a key in the first one.

Note: If you forget to call Dispose() on this mutex, the operating system will make sure that the mutex is released when the program terminates. However, the next process trying to acquire this mutex will then get an AbandonedMutexException (which is handled properly in Acquire() though).