PXE Server on Existing Network (DHCP Proxy) with Ubuntu

There are a lot of articles out there that explain how to run a PXE server. However, I couldn’t find a single one that contained all the information to set up a PXE server:

  • on Ubuntu
  • without replacing the network’s existing DHCP server (e.g. provided by a hardware router)

So, with this article I’m trying to fill that gap.

The Goal

At the end of this article you’ll have a working PXE server that lets you boot memtest86+ over a network.

The goal is to have a simple but working solution. This is why I’m using memtest. It consists of just one file and thus is easy to use in a PXE setup. More complex scenarios (e.g. loading real operating systems) can be built on top of this simple setup.

Everything described in the article can be done inside a virtual machine. The only requirement is that the VM is connected directly (i.e. no NAT) to the network where it’s supposed to serve PXE (usually the host’s network).

The Basics: PXE, DHCP, ProxyDHCP, TFTP, and dnsmasq

PXE is an abbreviation for “Preboot Execution Environment”. To put it simply: it’s a standardized way to boot an operating system over the network (rather than from a hard disk).

DHCP is usually used to assign IP addresses to computers/devices in a network. PXE is an extension to DHCP. To use PXE one needs a PXE-capable DHCP server.

When PXE was designed, the creators wanted to make it compatible with networks that already have an existing DHCP server. As a result, PXE and DHCP can be provided by separate servers without interfering with each other. In this scenario, the PXE server is called a proxyDHCP server and only provides the PXE functionality (but doesn’t assign IP addresses).

TFTP (Trivial File Transfer Protocol) is used by PXE clients to download the operating system (file) from the PXE server.

dnsmasq is a “simple” Linux tool that combines a DNS server, a DHCP server, a TFTP server, and a PXE server. This is the tool you’ll use in this article.

Prerequisites

The steps in this article are based on Ubuntu 16.04.

You need the following packages:

$ apt-get install dnsmasq pxelinux syslinux-common

You also need the precompiled memtest binary:

$ wget http://www.memtest.org/download/5.01/memtest86+-5.01.bin.gz
$ gzip -dk memtest86+-5.01.bin.gz
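
To sanity-check the download, you can list both files (gzip -dk keeps the compressed file and writes the unpacked binary next to it):

$ ls -l memtest86+-5.01.bin*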

Furthermore, you need a working DHCP server (e.g. one provided by a hardware router).

The last thing you need to know is the network you’re on. My network is 192.168.178.XXX – so that’s what I’ll use in this article. This information is only needed once, in a configuration file (see below).

Warning: During the course of this article your Ubuntu machine may temporarily lose the ability to do DNS lookups. This is caused by dnsmasq. If this happens to you and you need to download anything or access the web, just (temporarily) stop dnsmasq.

Step by Step: From Start to Finish

Let’s do it then. This section describes all the steps needed to get a working PXE server.

First, let’s stop dnsmasq for now.

$ service dnsmasq stop

Create the directory where all transferable operating system files will reside:

$ mkdir -p /var/lib/tftpboot

Inside of this directory, create a directory for the unzipped memtest binary file and copy it there:

$ mkdir -p /var/lib/tftpboot/memtest
$ cp ~/memtest86+-5.01.bin /var/lib/tftpboot/memtest/memtest86+-5.01

Important: Note that the copy command removed the .bin file extension. This is required because pxelinux guesses the file type from the extension – a file ending in .bin would be interpreted as a boot sector rather than a kernel image.

Now create the directory for the PXE configuration file:

$ mkdir -p /var/lib/tftpboot/pxelinux.cfg

Important: This directory must always be called pxelinux.cfg.

Inside of this directory, create a file called default and put in the following content:

# Boot the "memtest86" entry by default
default memtest86
# Display the boot: prompt
prompt 1
# Wait 1.5 seconds before booting the default entry (the unit is 1/10 second)
timeout 15

label memtest86
  menu label Memtest86+ 5.01
  kernel /memtest/memtest86+-5.01

Next, you need to put the files pxelinux.0 (Ubuntu package pxelinux) and ldlinux.c32 (Ubuntu package syslinux-common) in /var/lib/tftpboot. I’ll use symlinks for that:

$ ln -s /usr/lib/PXELINUX/pxelinux.0 /var/lib/tftpboot/
$ ln -s /usr/lib/syslinux/modules/bios/ldlinux.c32 /var/lib/tftpboot/
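
At this point, the TFTP root should look like this (the order of the find output may differ):

$ find /var/lib/tftpboot
/var/lib/tftpboot
/var/lib/tftpboot/pxelinux.0
/var/lib/tftpboot/ldlinux.c32
/var/lib/tftpboot/memtest
/var/lib/tftpboot/memtest/memtest86+-5.01
/var/lib/tftpboot/pxelinux.cfg
/var/lib/tftpboot/pxelinux.cfg/default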

Now, clear all contents of /etc/dnsmasq.conf and replace them with this:

# Disable DNS Server
port=0
 
# Enable DHCP logging
log-dhcp
 
# Respond to PXE requests for the specified network;
# run as DHCP proxy
dhcp-range=192.168.178.0,proxy 
dhcp-boot=pxelinux.0
 
# Provide network boot option called "Network Boot".
pxe-service=x86PC,"Network Boot",pxelinux
 
enable-tftp
tftp-root=/var/lib/tftpboot

Important: In the dhcp-range line you need to put in your network, if you’re not on 192.168.178.XXX.

Edit /etc/default/dnsmasq and add the following line to the end:

DNSMASQ_EXCEPT=lo

This line is necessary because you disabled dnsmasq’s DNS functionality above (with port=0). Without it, Ubuntu would still redirect all DNS queries to dnsmasq – which no longer answers them – and thus all DNS lookups would be broken. You can check /etc/resolv.conf to verify that it contains the correct IP address for your network’s DNS server.
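
For example, with a typical router setup it should contain something like this (192.168.178.1 is my router – your DNS server may differ):

$ cat /etc/resolv.conf
nameserver 192.168.178.1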

Last step – start dnsmasq again:

$ service dnsmasq start
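
Since log-dhcp is enabled, you can watch dnsmasq answer PXE requests while a client boots (on Ubuntu 16.04 dnsmasq logs to the syslog):

$ tail -f /var/log/syslog | grep dnsmasq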

Now, when starting a PXE-enabled machine, it should boot memtest.

[Animation: a client machine PXE-booting into Memtest86+]

Troubleshooting

While I was trying to get a PXE server working, I stumbled across some pitfalls that I’d like to document here.

Starting dnsmasq fails because of a resource limit

If starting dnsmasq with:

$ service dnsmasq start

fails with the error:

Job for dnsmasq.service failed because a configured resource limit was exceeded.

… then you (accidentally) deleted /etc/dnsmasq.d/README.

The dnsmasq init script checks for the existence of this file; if it’s missing, you get this obscure error message (filed as #819856).
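
If that’s the case, recreating the file (its content doesn’t matter) should get dnsmasq starting again:

$ touch /etc/dnsmasq.d/README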

PXE Boot with VMware Fusion

VMware Fusion’s GUI is designed more for regular users than for developers. If you want to use PXE boot in a VMware Fusion VM, make sure you select “Bridged Networking” rather than “Share with my Mac” (which is NAT).

[Screenshot: VMware Fusion’s network settings with “Bridged Networking” selected]

PXE Boot with Hyper-V

To be able to PXE boot a Hyper-V VM, you need to add a Legacy Network Adapter to the VM. By default, VMs only get a non-legacy network adapter, which doesn’t support PXE boot (for whatever reason).

[Screenshot: Hyper-V VM settings with a Legacy Network Adapter added]

This is especially confusing since the “BIOS” section always lists “Legacy Network adapter” – even if none has been added to the VM.

Breaking .NET’s Random class

Security is hard. In a current project I saw some code that created access tokens based on a random number generator – .NET’s Random class. The code used an instance of Random stored in a static field, and I got curious:

If you have such a long-lived Random instance, could you predict the random values after generating a few of them?

It turns out, it is possible. And you only need to read 55 “random” values to predict all future values.

Read more →

Docker, .NET Core and (NuGet) Dependencies

Recently, I wanted to try out the new .NET Core together with a Docker container. However, coming from programming .NET applications for the regular .NET Framework, I encountered some obstacles. This one is about NuGet packages.

The Goal

The goal is to have a .NET Core console application with some NuGet dependencies running in a Docker container.

I’ll be using Visual Studio 2015 (Community Edition) for this article but you can also use any other IDE that supports .NET Core projects. As such, I’ll try to minimize the dependency on Visual Studio in this article.

To better understand how a .NET Core application integrates with Docker, I will not use the Docker Tools for Visual Studio. While they work, they add a lot of “magic” to the build process. And this magic makes it hard to understand what’s going on.

Download the Example Code

To keep the article brief, I’ll just explain the important parts.

You can find the complete source code on my GitHub:

https://github.com/skrysmanski/dotnetcore-docker

Note that you can examine the commits to see how the example evolves along with this article.

The Program

The program I’m going to write is very simple:

using System;
using Newtonsoft.Json;

namespace DockerCoreConsoleTest
{
    public class Program
    {
        public static void Main()
        {
            Console.WriteLine(
                $"Hello, Docker and {typeof(JsonConvert).FullName}!"
            );
        }
    }
}

It just uses .NET Core and the Newtonsoft.Json NuGet package as a dependency.
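
For reference, the project.json for this program looks roughly like this (a sketch – your exact version numbers may differ; see the example code on GitHub for the authoritative version):

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0"
    },
    "Newtonsoft.Json": "9.0.1"
  },
  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  }
}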

Building with Visual Studio

Building the application in Visual Studio is pretty straightforward.

  1. Make sure you have the .NET Core Visual Studio Tooling installed.
  2. Create a new .NET Core Console Application project/solution called DockerCoreConsoleTest.
  3. Use NuGet to add Newtonsoft.Json to the project.
  4. Copy the code from above into your Program.cs.
  5. Run the program.

You should see the following output:

Hello, Docker and Newtonsoft.Json.JsonConvert!

If you run into any trouble, go and check out the example code.

Running in Docker

So far, so good. Now let’s execute this program in a Docker container.

Note: If you haven’t installed Docker yet, you can download it here.

For this, we’ll use the following Dockerfile:

FROM microsoft/dotnet:1.0.0-core
COPY bin/Debug/netcoreapp1.0/ /app/
WORKDIR /app
ENTRYPOINT ["dotnet", "DockerCoreConsoleTest.dll"]

Then build the Docker image with:

docker build -t dockercoreconsoletest .

Then run the Docker image with:

docker run dockercoreconsoletest

This will give you this result:

Error: assembly specified in the dependencies manifest was not found 
-- package: 'Newtonsoft.Json', version: '9.0.1', path: 'lib/netstandard1.0/Newtonsoft.Json.dll'

Not what one would expect.

The Problem(s)

The problem here is that – unlike .NET projects for the regular .NET Framework – the build process for a .NET Core project (dotnet build) does not copy any dependencies into the output folder.

If you look into bin\Debug\netcoreapp1.0 you’ll find no Newtonsoft.Json.dll file there.

There’s a second problem (or more an inconvenience). The Dockerfile contains the following line:

COPY bin/Debug/netcoreapp1.0/ /app/

This line depends on the build configuration that’s being used. If you’d build a Release build, the Dockerfile wouldn’t work anymore.

The Solution

In .NET Core projects you use the dotnet publish command to gather all dependencies in one directory (default is bin/CONFIG/netcoreapp1.0/publish).
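
For example, to publish the Debug build explicitly (the -c switch selects the build configuration and defaults to Debug):

dotnet publish -c Debug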

So, running this command fixes the first problem. But it can also fix the second problem.

First, we can add the following lines to the project’s project.json file:

"publishOptions": {
  "include": [
    "Dockerfile"
  ]
}

Now, when running dotnet publish, the Dockerfile will be copied to the publish directory as well.

This also means that we can change the COPY directive in the Dockerfile to:

COPY . /app/

This way, the Dockerfile is independent of the build configuration.

We could go one step further and actually build the docker image as part of the publish process. To do this, add the following lines to the project’s project.json file:

"scripts": {
  "postpublish": [
    "docker build -t dockercoreconsoletest %publish:OutputPath%"
  ]
}

Two notes on this:

  1. I have not found a suitable variable (like %publish:OutputPath%) yet that could be used for the Docker image tag (-t). So, for the time being, the tag has to be hard-coded here.
  2. Building a Docker image as part of the publish process may not be for everyone. I like the idea mainly because I haven’t come across any (relevant) downsides of doing this.

Wrapping Things Up

You can now run:

dotnet publish
docker run dockercoreconsoletest

This will give you the expected output:

Hello, Docker and Newtonsoft.Json.JsonConvert!

This is my first shot at Docker and .NET Core. If you find any error or have suggestions for improvements, please leave them in the comments below.

Alternative Solution: Using dotnet:latest as base image

There’s another solution to the problem(s) described in this article. This solution is less “clean”, in my opinion, but I thought I’d mention it anyway.

In the Dockerfile, instead of using microsoft/dotnet:1.0.0-core as the base image, one could use microsoft/dotnet:latest. This gives the Docker container access to dotnet build (whereas the -core base image can only run prebuilt applications via dotnet someapplication.dll).

You may then build the .NET Core application from within the container with a Dockerfile like this:

FROM microsoft/dotnet:latest
COPY . /app
WORKDIR /app

RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]

ENTRYPOINT ["dotnet", "run"]

This approach has some disadvantages:

  1. The image will generally be bigger than the one built with the solution proposed in the rest of the article.
  2. You need to copy all source code into the image (and it will stay there).

    • Depending on how you execute docker build, the image may even contain the build output of configurations that you don’t intend to run (e.g. bin/Debug when actually running a Release build).
    • Removing the source code after building the application (or having lots of RUN directives in general) may be inefficient with regard to Docker’s layer system and build cache.
  3. Running dotnet restore will re-download all NuGet dependencies every time the image is built. This will increase the build time and cause unnecessary network traffic – especially if the application is built often as part of some continuous integration process.

Switch Azure Development Storage to SQL Server – Step by Step

Update (2016-04-04): This article has been updated for Azure SDK 2.8 and SQL Server Developer Edition.

While developing for Windows Azure, I recently got lots of StorageExceptions reading “(500) Internal Server Error.”. This usually means that the SQL database that holds the table storage data is full. (SQL Server Express 2008 and earlier have a 4 GB limit; SQL Server Express 2008 R2 and later have a 10 GB limit.)

Some time ago, Microsoft released its SQL Server Developer Edition for free. This edition doesn’t have a database size limit.

Here is how to use this SQL Server (or any SQL Server instance) as storage for the table storage emulator:

  1. Open Microsoft Azure Storage command line from the start menu.
  2. cd "Storage Emulator"
  3. (Re-)initialize the storage emulator (for the full command-line reference see here):

    1. For the default SQL instance: AzureStorageEmulator.exe init -server .
    2. For a named SQL instance: AzureStorageEmulator.exe init -sqlinstance "MSSQLSERVER"

That’s it.
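
To verify, you can start the emulator again – it should now use the new SQL Server instance as its backend:

AzureStorageEmulator.exe start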

Note: If you use the “named SQL instance” option from above but the instance is actually the default instance, you will get this error:

Error: User-specified instance not found. Please correct this and re-run initialization.

Default instance or named instance

You can run multiple SQL Server instances on the same computer. Each instance has a name (e.g. MSSQLSERVER, SQLEXPRESS) and one of the instances will be the default instance. Depending on this, you need to use either the “default SQL instance” or the “named SQL instance” option above.

Unfortunately, I haven’t found a simple way to figure out which instance is the default instance.

The one solution I found is to use Microsoft SQL Server Management Studio and try to connect to .\INSTANCENAME (e.g. .\MSSQLSERVER). If you can’t connect, then this instance is (most likely) the default instance.
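
Alternatively, you can ask the default instance for its name directly (a sketch using sqlcmd, which ships with SQL Server; @@SERVICENAME returns MSSQLSERVER for the default instance):

sqlcmd -S . -Q "SELECT @@SERVICENAME"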

How to get the name of the SQL instance

The default SQL instance names are as follows:

  SQL Server:          MSSQLSERVER
  SQL Server Express:  SQLEXPRESS

You can list all instance names with the SQL Server Configuration Manager that comes with your SQL Server installation.

It’ll give you something like this:

[Screenshot: SQL Server Configuration Manager listing the installed SQL Server instances]
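
If you prefer the command line, the installed instances also show up as Windows services (a PowerShell sketch; the default instance runs as the service MSSQLSERVER, named instances as MSSQL$NAME):

Get-Service MSSQL*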

Projects in Visual C++ 2010 – Part 1: Creating a DLL project

When you write software, you often divide your project into several subprojects. This mini series describes how to do this with Visual C++ 2010 (but this first part also applies to earlier versions). We start by creating a library project in the form of a DLL.

Read more →