Color Toolkit

Background

I love color. In fact, one of the first posts I ever wrote for my developer website was about setting up color printing for console output (which is super useful for distinguishing errors from normal informational output).

In my current code base, I have a whole library dedicated to Color. In this article, I’d like to share some code and insights for a few constructs I find really useful in everyday development.

RGB Color

Let’s start with the basics. Color32 is a simple 32-bit color structure with four 8-bit unsigned integer components: red (R), green (G), blue (B), and alpha (A). Each component lies in the range [0,255].

The class has only a small set of member functions for clamping, linear interpolation between two colors, and the like. A Color32 can also be specified from four floats (range: [0,1]), though the component values are immediately converted to the byte representation.
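To make that concrete, here is a minimal sketch of what such a structure might look like. The names and details here are my illustration of the idea, not necessarily the real library's API:

```cpp
#include <algorithm>
#include <cstdint>

// Sketch of a Color32-style struct: four bytes, plus float-based helpers.
struct Color32
{
    uint8_t R, G, B, A;

    // Build from floats in [0,1]; values are clamped, then scaled to [0,255].
    static Color32 FromFloats(float r, float g, float b, float a)
    {
        auto toByte = [](float v) {
            v = std::min(1.0f, std::max(0.0f, v));
            return static_cast<uint8_t>(v * 255.0f + 0.5f);
        };
        return Color32{ toByte(r), toByte(g), toByte(b), toByte(a) };
    }

    // Component-wise linear interpolation, t in [0,1].
    static Color32 Lerp(const Color32& a, const Color32& b, float t)
    {
        auto lerpByte = [t](uint8_t x, uint8_t y) {
            return static_cast<uint8_t>(x + (y - x) * t + 0.5f);
        };
        return Color32{ lerpByte(a.R, b.R), lerpByte(a.G, b.G),
                        lerpByte(a.B, b.B), lerpByte(a.A, b.A) };
    }
};
```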

Color-Space Conversions

RGB is so ubiquitous because it’s the color-space that rendering API calls expect for their color inputs. However, converting from RGB to other color-spaces (and back) is very important because some color operations are more natural in other color-spaces.

For the purpose of this article, we will concentrate on Hue-Saturation-Value (HSV).

HSV Color

HsvColor contains four float components: hue (H), saturation (S), value (V), and alpha (A). Each component lies in the range [0,1].

It is important to note that the HSV color-space is actually a cylinder, and the hue component is actually an angular value. I like to keep hue in the same [0,1] range as the other components, but that’s just a personal preference. Just be sure to convert your hue appropriately when implementing your RgbToHsv() and HsvToRgb() routines.

Furthermore, don’t be troubled that this object is a bit more heavyweight than the Color32. The main purpose for the HsvColor is to use it as a vehicle for color computation as opposed to storage.
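For reference, here is one common way to implement the HSV-to-RGB direction, assuming hue is stored in [0,1] as described above (the struct and function names are illustrative stand-ins, not the library's actual API):

```cpp
#include <cmath>

// Hypothetical HsvColor sketch; all components in [0,1], including hue.
struct HsvColor { float H, S, V, A; };
struct Rgb { float R, G, B; };

// One possible HsvToRgb: scale hue into one of six sectors of the color
// cylinder, then blend between the sector's bounding values.
Rgb HsvToRgb(const HsvColor& hsv)
{
    const float h = hsv.H * 6.0f;              // sector position in [0,6)
    const int   i = static_cast<int>(h) % 6;   // sector index 0..5
    const float f = h - std::floor(h);         // fraction within the sector
    const float p = hsv.V * (1.0f - hsv.S);
    const float q = hsv.V * (1.0f - hsv.S * f);
    const float t = hsv.V * (1.0f - hsv.S * (1.0f - f));

    switch (i)
    {
        case 0:  return { hsv.V, t, p };
        case 1:  return { q, hsv.V, p };
        case 2:  return { p, hsv.V, t };
        case 3:  return { p, q, hsv.V };
        case 4:  return { t, p, hsv.V };
        default: return { hsv.V, p, q };
    }
}
```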

Color Generator

Generating a good sequence of unique colors is not as trivial as it sounds. Random number generators produce poor color sequences, and I didn’t want to limit myself to a hand-picked list of “good” colors. Instead, I wanted to be able to generate a list of colors of arbitrary size, depending upon the requirements of the code. After reading a couple of different articles on the subject (see References below), I implemented my own lightweight ColorGenerator class.

Internally, it uses the HSV color-space to generate a new unique color each time Generate() is called. The saturation and value components are kept constant during generation, while the hue is rotated around the HSV color cylinder using the golden ratio.

const float cInvGoldenRatio = 0.618033988749895f;

The Generate() routine below uses the cInvGoldenRatio constant as the hue increment.

Color32 Generate()
{
   const float nextHue = mHsvColor.H + cInvGoldenRatio;
   mHsvColor.H = MathCore::Wrap(nextHue, 0.0f, 1.0f);
   return ColorOps::HsvToRgb(mHsvColor);
}

It’s important to note that the colors are not pre-generated and stored. Instead, I simply have a single HsvColor that stores the state of the generator between calls to Generate().
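To make the state-keeping concrete, here is a stripped-down sketch of just the hue-stepping part; Wrap01 here stands in for MathCore::Wrap and is an assumption on my part:

```cpp
#include <cmath>

// Simple wrap into [0,1); a stand-in for the assumed MathCore::Wrap.
float Wrap01(float v) { return v - std::floor(v); }

// Minimal sketch of the generator's state: a single hue value advanced by
// the inverse golden ratio on each call, wrapping around the color cylinder.
struct HueStepper
{
    static constexpr float cInvGoldenRatio = 0.618033988749895f;
    float mHue = 0.0f;

    float Next()
    {
        mHue = Wrap01(mHue + cInvGoldenRatio);
        return mHue;
    }
};
```

Because consecutive hues land roughly 0.618 of a revolution apart, each new color sits far from its predecessors, which is what makes the sequence look well distributed.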

Color Palette

The ColorPalette class is a named container of colors. The array of colors is set on construction, and each individual color can be accessed via the bracket operator.

I call it a “palette” because it stores a fixed array of colors that are not assumed to be associated with each other in any particular way (order doesn’t matter, etc.). Although this class seems trivial, it serves as a basis for a couple of really cool constructs.

Prefab Color Palettes

As a convenience, I have a set of ColorPalette creation routines (listed and respectively displayed in the image below).

  • Black and White.
  • Rainbow: Red, Orange, Yellow, Green, Cyan, Blue, Violet.
  • Monochrome: single color spectrum (more precisely: a spectrum from black to a single color).
  • Spectrum: a set of bands between two colors.
  • Pastels: a generated set of pastel colors. (h:0.0f, s:0.5f, v:0.95f)
  • Bolds: a generated set of saturated colors. (h:0.0f, s:0.9f, v:0.95f)

Color Ring

A ColorRing is a simple wrapper class that operates on a given ColorPalette, treating it like a circular list. Each time client code calls GetColor(), the current color is returned and the current color index is increased (wrapping back to 0 if it passes the last color).

This class is useful when you have a group of items but want to limit their color choices to a certain set of colors (which may or may not be equal in number to the items): build a palette with your desired set of colors and then use a ColorRing when performing the color assignment.
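The wrap-around logic is simple enough to sketch. Here is a minimal, generic stand-in (the real ColorRing wraps a ColorPalette; this one just wraps a vector, so the names are assumptions):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Circular-list wrapper: each Next() returns the current item and advances
// the index, wrapping back to 0 past the end.
template <typename T>
class Ring
{
public:
    explicit Ring(std::vector<T> items) : mItems(std::move(items)) {}

    const T& Next()
    {
        const T& item = mItems[mIndex];
        mIndex = (mIndex + 1) % mItems.size();
        return item;
    }

private:
    std::vector<T> mItems;
    std::size_t mIndex = 0;
};
```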

Color Ramp

The ColorRamp class also operates on a given ColorPalette. It treats the ColorPalette as its own color subspace that can be accessed via a parametric value in the range of [0,1]. In other words, when client code wants a color from the ColorRamp, they must provide a parametric value which is then mapped into the ColorPalette.

Additionally, my implementation includes a RampType flag that indicates whether it will behave as a gradient or whether it will round (up or down) to the nearest neighboring color in the palette.

The Evaluate() routine uses the given parametric value to find the two nearest colors in the palette, and then computes a resulting color based on the aforementioned RampType flag. In the gradient case, the two colors are linearly interpolated using the fractional remainder of the parametric value; the other two behaviors simply round down or up to the neighboring palette color.

Color32 Evaluate(const float parametricValue) const
{
   if (parametricValue <= 0.0f)
      return mColorPalette[0];

   if (parametricValue >= 1.0f)
      return mColorPalette[mColorPalette.Count-1];

   const float cValue = parametricValue * (mColorPalette.Count - 1);

   const int idxA = (int)MathCore::Floorf(cValue);
   const int idxB = idxA + 1;

   switch (mRampType)
   {
      case RampTypeId::eGradient:
         {
            const float fracBetween = cValue - (float)idxA;
            const Color32& colorA = mColorPalette[idxA];
            const Color32& colorB = mColorPalette[idxB];
            return Color32::Lerp(colorA, colorB, fracBetween);
         }

      case RampTypeId::eRoundDown:
         return mColorPalette[idxA];

      case RampTypeId::eRoundUp:
         return mColorPalette[idxB];

      default:
         PULSAR_ASSERT(false);
         break;
   }

   // should never get here
   return Color32::eBlack;
}

Here’s what it looks like when we create a corresponding gradient ColorRamp for each of the palettes shown above:

Initially, I wrote the ColorRamp class to help with assigning color values to vertices in a height field, but since then I have found several other interesting applications for it.

Final Thoughts

While I concentrated on exhibiting a few of the core classes in my Color library at a high level, it should be pretty clear that there are a lot of possibilities when it comes to Color. You can find more in-depth information, gritty details, and sample code in the references provided below. I highly encourage you to read them even if you’re only the slightest bit interested — they are well worth your time.

References

Book Review: Team Leadership in the Game Industry


by Seth Spaulding

The Review

Simply put, I really liked this book. In fact, I thought the book was so good that I bought two additional copies as gifts for a pair of coworkers who were just getting started as leaders of their own teams.

It contains a wealth of knowledge on the subject of leadership in the context of the game industry. Understandably, the book leans towards the artist camp in a game company, but the advice it offers is most certainly applicable to other disciplines such as engineering or game design (I am an engineer myself and, as I mentioned above, I found it very useful!).

But beyond the organizational charts and high-level discussion of team dynamics, the book drills into some very important topics. For example, I was very impressed with the section on how best to evaluate whether someone is suited for a leadership position or would be better served by a senior position that doesn’t place them in charge of a team. Spaulding lays out several different scenarios and guides the reader through each one, explaining why people with different skill sets and personalities may or may not work out when placed in charge of a team.

Another topic that is addressed in detail is what to do when things go sour: personality conflicts, team meltdowns, over-zealous leaders, and both incompetent team members and team leaders.

The book also contains insights directly from the GDC Roundtable sessions, including a detailed look at the question “What traits would you want in your ideal team leader?” Spaulding outlines the traits that are commonly chosen and explains not only why they are common, but also why they are good traits and which ones may matter more than others.

On top of all of this, the book includes a collection of interviews (one at the end of each chapter) with industry veterans from an assortment of leadership positions and disciplines (art, production, engineering, etc.). I especially enjoyed these because some questions drew a wide variety of answers, while others were met with nearly the same useful advice from everyone.

I would recommend this book to anyone in the industry who is currently leading a team, or is thinking that they might want to lead a team in the near future.

Unit Testing Ambiguity

Background

This post is related to the previous post on testing a return value that consists of a set of items with no predefined order (see Unit Testing Unordered Lists[1]). However, it differs in that this time the problem is a little more mathematical in nature: how do we properly test a routine that performs an operation that may have multiple correct solutions?

For those of you who have written core math routines and solvers before, you know exactly what I’m referring to, but even if you haven’t I still encourage you to continue reading. I’ve chosen a relatively simple, common, yet important example to work through — there’s a good chance you’ll make use of this knowledge somewhere.

Unit Testing Ambiguity

There have been times when I’ve found myself trying to impose unrealistic expectations on my routines. Of course, when I test for them, I end up with failed tests and my first thought is more often than not: the code being tested must be wrong. However, the reality is that I have actually written a bad test.

A good example of this happened to me when I was working on my ComputeOrthoBasisVectors() routine[2]. The purpose of the function is: given a single Vector3, compute two additional Vector3s which form an orthonormal basis with the input Vector3. So, I wrote the following test:

TEST(CanComputeOrthoBasisVectors)
{
   Vector3 vecA, vecB;
   ComputeOrthoBasisVectors(vecA, vecB, Vector3::UNIT_X);
   CHECK_ARRAY_CLOSE(Vector3::UNIT_Y, vecA, 3, EPSILON);
   CHECK_ARRAY_CLOSE(Vector3::UNIT_Z, vecB, 3, EPSILON);
}

The primary issue here is that the operation itself is soaked in ambiguity. More specifically, the two Vector3s that make up the return value can be produced by one of many valid solutions. To see this, let’s look at the math required for the implementation.

Computing an Orthonormal Basis

First, let’s be clear on the terminology: a set of three Vector3s that are each perpendicular to the other two is called an orthogonal basis. If we normalize the vectors in the set, we can call it an orthonormal basis. In other words: A 3D orthonormal basis is a set of three vectors that are each perpendicular to the other two and each of unit length.

The need for creating an orthonormal basis from a single vector is a fairly common operation (probably the most common usage is in constructing a camera matrix given a “look-at” vector).

Given two non-coincident[3] vectors, we can use the cross product to find a third vector that is perpendicular to both the input vectors. This is often why we say that two non-coincident vectors span a plane, because it is from these two vectors that we can compute the normal to that plane.

The thing to note here is that there are an infinite number of valid non-coincident vectors that span a plane. You can imagine grabbing the normal vector as if it were a rod connected to two other rods (the input vectors) and spinning it; any orientation you spin it to is a valid configuration that would produce the same normal vector. I have created an animation demonstrating this below:

Animation of two vectors orthogonal to input vector. The two vectors remain in the plane defined by the input vector (the plane normal).

In essence, this is why the results of the operation are valid, yet, for the lack of a better word, ambiguous. In the case of our routine, we are supplying the normal and computing two other vectors that span the plane. The fact that the two returned vectors are orthogonal to the input vector does not change the fact that there are an infinite number of configurations.
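For completeness, here is one of the many valid ways such a routine could be implemented: cross the (unit) input with whichever world axis it is least aligned with, then complete the basis with a second cross product. Vec3 and the helpers below are stand-ins, not my actual library types:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 Normalize(const Vec3& v)
{
    const float mag = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / mag, v.y / mag, v.z / mag };
}

// Assumes unitIn is normalized. Picks a world axis that is not nearly
// parallel to the input, then crosses twice to complete the basis.
void ComputeOrthoBasisVectors(Vec3& outA, Vec3& outB, const Vec3& unitIn)
{
    const Vec3 axis = (std::fabs(unitIn.x) < 0.9f) ? Vec3{ 1.0f, 0.0f, 0.0f }
                                                   : Vec3{ 0.0f, 1.0f, 0.0f };
    outA = Normalize(Cross(unitIn, axis));
    outB = Cross(unitIn, outA); // unit length: inputs are unit and perpendicular
}
```

Note that the 0.9f threshold and the choice of fallback axis are arbitrary; any different (but still valid) choice yields a different, equally correct basis, which is exactly the ambiguity discussed above.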

Testing the Geometry, Not the Values

Returning to the original testing dilemma: since there are an infinite number of possible solutions for our returned pair of Vector3s, if we write tests that check the values of those vectors, then our tests will be bound to the implementation. The result is a ticking time-bomb that may explode in our face later on (perhaps during an optimization pass): we may change the pair of vectors returned for a given input, and even though the three vectors still form an orthonormal basis (the correct and desired result), we now have failing tests.

My solution was to check the following geometric properties:

  1. All three vectors are perpendicular to each other (in other words, they form an orthogonal basis).
  2. The two returned vectors are of unit length (input vector is assumed to be normalized).

The following are a few tests (straight from my codebase) that employ this approach:

TEST(CanComputeOrthoBasisVectors_UnitX)
{
   Vector3 vecA, vecB;
   ComputeOrthoBasisVectors(vecA, vecB, Vector3::UNIT_X);
   CHECK_CLOSE(0.0f, vecA.Dot(vecB), EPSILON);
   CHECK_CLOSE(0.0f, vecA.Dot(Vector3::UNIT_X), EPSILON);
   CHECK_CLOSE(0.0f, vecB.Dot(Vector3::UNIT_X), EPSILON);
   CHECK_CLOSE(1.0f, vecA.Mag(), EPSILON);
   CHECK_CLOSE(1.0f, vecB.Mag(), EPSILON);
}

TEST(CanComputeOrthoBasisVectors_UnitY)
{
   Vector3 vecA, vecB;
   ComputeOrthoBasisVectors(vecA, vecB, Vector3::UNIT_Y);
   CHECK_CLOSE(0.0f, vecA.Dot(vecB), EPSILON);
   CHECK_CLOSE(0.0f, vecA.Dot(Vector3::UNIT_Y), EPSILON);
   CHECK_CLOSE(0.0f, vecB.Dot(Vector3::UNIT_Y), EPSILON);
   CHECK_CLOSE(1.0f, vecA.Mag(), EPSILON);
   CHECK_CLOSE(1.0f, vecB.Mag(), EPSILON);
}

TEST(CanComputeOrthoBasisVectors_UnitZ)
{
   Vector3 vecA, vecB;
   ComputeOrthoBasisVectors(vecA, vecB, Vector3::UNIT_Z);
   CHECK_CLOSE(0.0f, vecA.Dot(vecB), EPSILON);
   CHECK_CLOSE(0.0f, vecA.Dot(Vector3::UNIT_Z), EPSILON);
   CHECK_CLOSE(0.0f, vecB.Dot(Vector3::UNIT_Z), EPSILON);
   CHECK_CLOSE(1.0f, vecA.Mag(), EPSILON);
   CHECK_CLOSE(1.0f, vecB.Mag(), EPSILON);
}

TEST(CanComputeOrthoBasisVectors_RefPaperExample)
{
   Vector3 vecA, vecB;
   const Vector3 unitAxis(-0.285714f, 0.857143f, 0.428571f);
   ComputeOrthoBasisVectors(vecA, vecB, unitAxis);
   CHECK_CLOSE(0.0f, vecA.Dot(vecB), EPSILON);
   CHECK_CLOSE(0.0f, vecA.Dot(unitAxis), EPSILON);
   CHECK_CLOSE(0.0f, vecB.Dot(unitAxis), EPSILON);
   CHECK_CLOSE(1.0f, vecA.Mag(), EPSILON);
   CHECK_CLOSE(1.0f, vecB.Mag(), EPSILON);
}

By testing these properties, as opposed to testing the resulting vector values directly (as in the original test shown above), it doesn’t matter how the internals of ComputeOrthoBasisVectors() produce the two returned Vector3s. As long as the input vector and the returned vectors all form an orthonormal basis, our tests will pass.

Final Thoughts

My hope is that the example presented in this article demonstrates one of the pitfalls of having tests that depend on the internal implementation of the routine being tested. As I have stated before, although it is important to write tests for the functionality, it can be difficult to recognize when a test is bound to the implementation.

A good place to start looking for this sort of scenario is in tests that explicitly check return values. Certainly, explicit checks are what you want in most cases, but for some operations they are not.

Footnotes

1. Originally, these two topics were going to be addressed in a single article, but I decided against it in hopes that keeping them separate would allow for more clarity in each.

2. In fact, the trouble I had in testing the ComputeOrthoBasisVectors() routine is what inspired me to post both this article and the last.

3. Two vectors are said to be coincident if they have the same direction when you discard their magnitudes. In other words, two vectors are coincident if you normalize both of them and the results are the same.

References

Möller, Tomas and John F. Hughes. Building an Orthonormal Basis from a Unit Vector. [ pdf ]

Unit Testing Unordered Lists

Background

If you’ve read just about any of my earlier posts, you know that I write tests for Pulsar. This includes unit tests, functional tests, and performance tests, all in addition to the demo / visual tester applications that I usually post my screenshots from. The details of the structure of my codebase are probably worth outlining in a future post, but for now I’m going to focus on the unit tests.

Implementation Note: I use UnitTest++ as my test harness. It is simple, lightweight, and quick to integrate into a codebase. If you don’t have a testing framework set up for your codebase yet, I highly recommend UnitTest++.

Unit Testing Unordered Lists

Every once in a while (primarily in my Geometry / Collision Detection library), I have run into the need for unordered lists in my test suite. In fact, it has come up often enough that I decided to share some thoughts and my solution here.

Although I’m going to explain this issue through a somewhat contrived example, I chose it because the premise is easy to understand and visualize; certainly this could be extended to other, more complex cases.

Let’s say you have the following Rectangle class that represents a 2D axis-aligned box:

class Rectangle
{
public:
   Vector2 mMin;
   Vector2 mMax;
};

Next, say we are writing a ComputeCorners() member function. As you probably guessed, this routine needs to return four Vector2s that are the coordinates of each of the corners on the Rectangle. This can be done in one of the following two flavors:

  • add a struct with four Vector2s, each with sensible member names
  • use an array: Vector2[4]

If your calling code requires explicit knowledge that maps each returned point to its corresponding corner on the Rectangle, you would probably choose to use the first option.

However, let’s assume that all we actually need are the points themselves and we don’t really care which point is which (maybe we’re just going to insert them into some larger list of points that will be processed, or whatever). Using the array approach, our test code might look something like this:

TEST(CanComputeCorners)
{
   Rectangle rect;
   rect.mMin.Set(1.2f, 1.4f);
   rect.mMax.Set(2.5f, 1.7f);
  
   Vector2 corners[4];
   rect.ComputeCorners(corners);
  
   CHECK_EQUAL(Vector2(1.2f, 1.4f), corners[0]);
   CHECK_EQUAL(Vector2(2.5f, 1.4f), corners[1]);
   CHECK_EQUAL(Vector2(1.2f, 1.7f), corners[2]);
   CHECK_EQUAL(Vector2(2.5f, 1.7f), corners[3]);
}

Without sweating the details of testing against an epsilon, this test looks pretty good at first glance, and it probably succeeds without trouble.

However, there is a subtle problem here: our test is assuming that there is a specific order to the points being returned. Nothing in our function dictates that we absolutely have to return the points in that order, but because of the way we have written our test any change to the order of the returned points in the implementation will result in a failing test.

So how exactly can we test this routine (and others like it) properly? In other words, how can we write our test to be order-independent?

My solution (as straightforward as it may be) was to implement a testing helper routine:

bool ArrayContainsItem(
   const Vector2* itemArray, const uint itemCount,
   const Vector2& itemToFind)
{
   for(uint idx = 0; idx < itemCount; ++idx)
   {
      const Vector2& curItem = itemArray[idx];
      if(curItem == itemToFind)
         return true;
   }
  
   return false;
}

TEST(CanComputeCorners)
{
   Rectangle rect;
   rect.mMin.Set(1.2f, 1.4f);
   rect.mMax.Set(2.5f, 1.7f);
  
   Vector2 corners[4];
   rect.ComputeCorners(corners);
  
   CHECK(ArrayContainsItem(corners, 4, Vector2(1.2f, 1.4f)));
   CHECK(ArrayContainsItem(corners, 4, Vector2(2.5f, 1.4f)));
   CHECK(ArrayContainsItem(corners, 4, Vector2(1.2f, 1.7f)));
   CHECK(ArrayContainsItem(corners, 4, Vector2(2.5f, 1.7f)));
}

Notice that this routine is intended for the purposes of simplifying the test and resides with the testing code, not packaged up with the library itself. It is perfectly acceptable to have to perform additional setup (mock objects, etc.) or implement a helper or two for testing.

Also, you could certainly generalize this unit test helper routine a little so it can be used in other unordered list instances -- I've found many uses for my own implementation while testing Pulsar libraries.
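For example, a generalized version of the helper might be templated on the item type so it works for any array of comparable items (this is a sketch, not the exact helper from my codebase):

```cpp
#include <cstddef>

// Generic order-independent membership check for raw arrays; requires only
// that T supports operator==.
template <typename T>
bool ArrayContainsItem(const T* itemArray, std::size_t itemCount,
                       const T& itemToFind)
{
    for (std::size_t idx = 0; idx < itemCount; ++idx)
    {
        if (itemArray[idx] == itemToFind)
            return true;
    }
    return false;
}
```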

Final Thoughts

The real takeaway from this actually isn't about how to test an unordered set of objects; in fact, I think the lesson is much bigger and more important: be careful to write tests for the functionality, not for the implementation.

The key point of this lesson is subtle and sometimes easy to miss.

To be clear, functionality is what the routine does that can be inspected from the outside view of the object, or, in the case of a non-member ("free") function, from the return value. On the other hand, implementation is much more tightly bound to the code itself (the internal process of a routine) and how it operates on an object or data.

In the example case above, we saw that the interface did not specify an ordering for the list of returned corner coordinates, yet our initial test definitively expected an ordering which happened to be based on our knowledge of the implementation.

I'll be the first to admit that it isn't always clear when implementation details have crept into the tests -- this has happened to me a lot more often than I care to reveal (especially while practicing Test-Driven Development).

In closing, I believe that the first line of defense against this problem is to simply be aware that it can show up when testing container return values. As a result, I encourage you to take some additional time to make sure that the tests really are testing the functionality and not the implementation.

Build Server Setup

Konoko from Bungie’s Oni.

This last weekend, my build server (Konoko) decided to finally go rampant. Unfortunately, over the past couple years she became increasingly unstable, ultimately resulting in the undeniable fact that she could no longer perform her duties as a part-time build server. Subsequently, my only course of action was to outfit another one of my machines (Kyla) to fill in.

The following article is a set of notes I put together during the setup process. I don’t expect everyone will have the same needs as my home setup, but I’ve done my best to explain why I configured my build machine the way I did, as well as to highlight and supply solutions for a few of the issues I ran into along the way.

Basic Setup

At this point we’ll assume that you have a WinXP box (I prefer a fresh install, but that’s not strictly necessary). Before proceeding, you should also install any fundamentals to your development pipeline. In my case this includes:

  • Powershell
    • Set the appropriate execution policy so you can run scripts:
      PS> Set-ExecutionPolicy RemoteSigned
  • Programmer’s Notepad
  • Visual Studio (and any service packs)
  • .NET Framework
  • DirectX SDK
    • Ensure that the DXSDK_DIR environment variable was added by the installer.
    • Add the following to the Project and Solutions | VC++ Directories:
      • Include Files: $(DXSDK_DIR)include
      • Library Files: $(DXSDK_DIR)lib\x86
  • Python
    • Add the path as an environment variable.

For your build server, it is a good idea to make a shortcut to Services on your desktop:
Target: %SystemRoot%/System32/services.msc /s

The image above summarizes the arrangement I have set up at home. To be honest though, I don’t even have a monitor hooked up to my build server; I just use Remote Desktop to handle anything needed on the build server itself.

It should be noted that I install the source control client and CruiseControl client (CCTray) on the server in addition to the server software. This is simply for convenience and a personal preference.

Source Control: Perforce

My build system at home uses Perforce. It’s easy to install, easy to use, and it’s free for up to 2 users and 5 client workspaces. If you have a different source control system you prefer, go install that and just skip this section.

Installing the Perforce Server

Go to: http://www.perforce.com/perforce/downloads/index.html.

The server is labeled “The Perforce Server (P4D)”. You should get the installer appropriate for your machine (x86 or x64). The downloaded executable is simply called perforce.exe.

Install the following features:

  • Server (P4D)
  • Command-Line Client (P4)

I’m not using the Perforce Proxy (P4P).

Server Configuration:

  • Choose your port.

Client Configuration:

  • Server: %SERVERNAME%:%PORT%

The version I installed (Feb 2010) automatically added Perforce as a service. You can verify this by opening up Services and searching for Perforce; the service runs Perforce/Server/p4s.exe. It should be set to a Startup Type of “Automatic” so that it is started each time you boot the machine.

Firewall Notes

If you are using a firewall, you should add the p4s.exe as an “Exception” (Windows Firewall) or to the list of “Approved Programs” (ZoneAlarm). Otherwise, you most likely won’t be able to connect to Perforce via any remote machines (even though you will be able to run a P4 client on the build server locally). Also, it may be a good idea to add the p4d.exe in case you need to run it locally as a non-service.

Installing the Perforce Client

Go to: http://www.perforce.com/perforce/downloads/index.html.

The client is labeled “The Perforce Visual Client (P4V)”. The executable is called p4vinst.exe or p4vinst64.exe.

NOTE: if you are a die hard P4Win-type, you can find the legacy app here:
http://www.perforce.com/perforce/downloads/legacy.html

The client installation is pretty self-explanatory. I will note that I like to install the client on both the build server as well as my remote machines.

Internet Server: IIS

In order to use P4Web or CruiseControl.NET’s dashboard, we need to set up a web server application on the build machine. After having used both Apache and IIS in previous build system configurations, I’ve come to the conclusion that my simple Windows-based setup gets along just fine with IIS.

The installer for IIS is found on the WinXP CD. (IIS 6 on WinXP)

From the autorun:

  • Choose “Install optional Windows components”.
  • Check “Internet Information Services (IIS)”.
  • Click the Next button to install.

NOTE: if you happen to install this after you install CruiseControl.NET, you will need to pop open a command console and run the following registration app[1]:
Run: %WINDOWS%/Microsoft.NET/Framework/v2.0.50727/aspnet_regiis.exe -i

Firewall Notes

If you are using a firewall, you should add the %WINDOWS%/System32/inetsrv/inetinfo.exe (which is the executable for IIS) as an “Exception” (Windows Firewall) or to the list of “Approved Programs” (ZoneAlarm). Otherwise, you most likely won’t be able to connect to CruiseControl’s dashboard from a remote machine.

Build Integration: CruiseControl.NET

Go to: ccnet.thoughtworks.com/.

Download the latest stable release of CruiseControl.NET. In my case this was the CC.NET 1.5 RC1 installer: CruiseControl.NET-1.5.6804.1-Setup.exe.

Go through the installer. One of the steps is called “Additional Conditions”; it includes:

  • Install CC.NET as a service.
    Make sure this is checked (unless you want to have to manually start and stop CruiseControl).
  • Create virtual directory in IIS for Web Dashboard.
    Make sure IIS is installed before installing CC.NET; otherwise this step won’t succeed.

Once CC.NET is installed, the service should have been automatically started. To check, open Services and find the CruiseControl.NET Server service. You can right click on it to Start or Stop it. While you’re here, right-click and go to Properties; check to make sure that it has a Startup Type of “Automatic” so that it is started each time you boot the machine.

Also, in the CruiseControl.NET Server service Properties, under the Log On tab, I had to set the service to use an account with the appropriate environment variables set[1]. Before I did this, I ran into a lot of headaches: my code would build on the build server from within the IDE, but it would fail to build when launched from the CC.NET Tray. The build would fail because it couldn’t find any of the DirectX includes, even though they had been set in the IDE (as described in the Basic Setup section above).

Firewall Notes

If you are using a firewall, you should add the CruiseControl.NET/Server/ccnet.exe and CruiseControl.NET/Server/ccservice.exe as an “Exception” (Windows Firewall) or to the list of “Approved Programs” (ZoneAlarm).

Resources:

[1] http://ccnetlive.thoughtworks.com/ccnet/doc/CCNET/FAQ.html