Small and Simple: The Keys to Enterprise Software Success

Someone once said, “By small and simple things are great things brought to pass.” The context was certainly not enterprise software development but I have come to believe that these words are profound in this context as well.

Every enterprise, medium and large, is challenged with complexity. Their systems and businesses are complex. Their politics and personnel and departments are complex. Their business models and markets and product lines are complex.

And their software is complex. But why?

Because we make it complex. We accept the notion that one application must accomplish all the wants and wishes of the management for whom we work, either because we are afraid to say, “This is a bad idea,” or because we are arrogant enough to think we’re up to the challenge. So we create large teams and build complex, confusing and unmanageable software that eventually fulfills those wants and wishes, sometimes years later—that is, if the project is not cancelled because the same management team loses patience and/or budget to see it through.

Some companies have made a very large business from selling such very large and complex software. They sell it for a very large “enterprise class” price to businesses that want all those nice shiny features but don’t want to take the risk of building them themselves. The buyer then discovers that it must spend the next six to twenty months configuring and customizing the software, or changing its own business processes and rules to fit the strictures of the software it has purchased.

The most successful enterprise projects I’ve worked on have all shared certain qualities, and all of them produced workable, usable and nearly trouble-free systems. They are:

  • A small team consisting of:
    • A senior lead developer or architect
    • Two mid-senior level developers
    • A project manager (part time)
    • And a strong product owner (part time)
  • A short but intense requirements gathering and design period with earnest customer involvement.
  • Regular interaction with product owner with a demo of the software every week.
  • Strong communication and coordination by the project manager, ensuring that the team and the customer understand all aspects of project status and progress.

The challenge in the enterprise is to find the will and means to take the more successful approach of “small and simple.”

You may say, “Well, that’s all fine and good in some perfect world, but we have real, hard and complex problems to solve with our software.” And that may be true. Software, no matter how good, cannot remove the complexity from the enterprise. But there is, in my opinion, no reason that software should add to it.

The real genius of any enterprise software architect, if she has any genius at all, is to find a way to break down complex business processes and systems into singular application parts that can be built by small, agile teams like I’ve described, and where necessary, to carefully define and control the service and API boundaries between them.

Must a single web application provide every user with every function and feature they might need? No. But we often try for that nirvana and we often fail or take so long to deliver that delivery does not feel like success.

Must a single database contain all the data that the enterprise will need to perform a certain business function or fulfill a specific book of business? No. But we often work toward that goal, ending up with a database that is so complex and so flawed that we cannot hope to maintain or understand it without continuous and never ending vigilance.

When your project’s complexity grows, your team grows, your communication challenges grow and your requirements become unwieldy and unknowable. When you take on the complex all-in-one with an army, you cannot sufficiently focus your energy and your strength, and collectively you will accomplish less and less with each addition to your team, until your project is awash in cost and time overruns and the entire team is collectively marching to delivery or cancellation. And in the end, either outcome feels the same.

By small and simple things, great enterprise software can be brought to pass. Sooner rather than later. Delivering solid, workable, maintainable, supportable solutions. Put one small thing together with another, win the war on complexity and provide real value to your management team. Do this again and again and you’ll have a team that will never want another job because they will have become all too familiar with real success.

What’s Wrong with Technical Recruiters

I’ve had a series of ongoing discussions with a friend who is also a senior software architect. In recent months he’s become quite dissatisfied with his manager specifically and employer generally. He has put his resume out on the typical job search sites, Dice, Monster, TheLadders, LinkedIn and others, and has been sorely disappointed with the result. Technical recruiters seem to be, with rare exceptions, a sorry lot of would-be telemarketers rejected for lack of integrity by the local call-center-for-hire.

On a daily basis this friend of mine receives emails and calls about hot job listings that do not generally match his skills or experience, like the one he forwarded to me today. These are sent not because he might really be a match, but in hopes of sparking a conversation that goes something like this:

  • Recruiter: “I know you may not be a good fit for this job, but do you know anyone who is?”
  • Candidate: “No. I do not know anyone with this skill set and experience level interested in a 6 month contract.”
  • Recruiter: “Okay. I’d like to keep in touch with you and I will definitely call you if something comes through that is a better match for what you’re looking for.”
  • Candidate: “Okay, sure. Thanks for the call.”

My friend reports that he has at least three of those conversations per week and two or three emails like the one below per day. I’ve changed the company names and the location to avoid embarrassing any of them, especially since one or two are household brand names.


I am Recruiter with XYZConsulting (formerly ABCConsulting), a dedicated QRS contractor sourcing service, managed by TenCharms. We recruit solely for QRS Contract Positions. Should you accept a contract position, it will be through our preferred partner TenCharms, an independent vendor chosen to manage this program for QRS.

I saw your resume on a job board. I am currently recruiting .net Consultant for a 12 months contract project with QRS – Chicago, IL.

Email me a copy of WORD formatted resume, along with these details:

  • Best time & number to speak to you:
  • Location:
  • Visa Status:
  • How soon you can start:
  • Work mode, W2, C2C (if self-incorporated):
  • Expectations on Compensation:

Position’s name: .net Consultant

Assignment period: 12 months
Location: Chicago, IL

Required skills
Primary Skills:,, Oracle / SQL Server, SQL. Secondary Skills:, MS Access / VBA

As this is an urgent requirement, kindly respond to it at earliest possible.

Best Regards,
Bad Excuse (name changed to protect the guilty)

Recruiter | XYZConsulting (formerly ABCConsulting)
A TenCharms Solution
Dedicated QRS Contractor Sourcing Services

My friend has shown me many other emails that are far worse than this one. Many are poorly written, lacking proper grammar and teeming with spelling errors, yet they claim to represent well known companies. I wonder whether the companies these recruiters work for ever bother to review the practices and general quality of their agents’ communications. Somehow I doubt it.

In some of the emails my friend has forwarded to me, it becomes clear that the recruiting company has employed some shoddy keyword matching algorithm to select from a list of candidates who have put their resumes out into the swamp of job sites (see above mentioned sites). In some of these I have been able to identify one or two specific keywords that match my friend’s resume. I should know. I helped him write it.

Sometimes the keyword match is even close, such as “architect” or “jQuery.” But my friend is a .NET architect, and the job is for a J2EE architect who also knows jQuery, PHP and Oracle, while my friend’s resume does not mention Oracle or PHP at all. It is clear that the recruiter’s matching algorithm, whether their own or provided by one of the job sites, lacks even the most basic intelligence.

Either that or the recruiter is far more interested in spamming the candidate with a job posting that is clearly not a fit in hopes that the candidate will do the job of the recruiter for him or her or it.
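A naive keyword matcher of the sort these recruiting tools appear to rely on is easy to sketch, and the sketch makes the problem obvious: two documents can share a word like “architect” while describing entirely different skill sets. This is an illustration only; the résumé text, job keywords and class names below are all invented.

```csharp
using System;
using System.Linq;

class KeywordMatchDemo
{
    // Score a candidate by counting how many job keywords appear in the resume.
    // This is the kind of shallow matching that flags a .NET architect for a
    // J2EE architect role just because both mention "architect" and "jQuery".
    public static int Score(string resume, string[] jobKeywords)
    {
        var words = resume.ToLowerInvariant()
            .Split(new[] { ' ', ',', '.' }, StringSplitOptions.RemoveEmptyEntries);
        return jobKeywords.Count(k => words.Contains(k.ToLowerInvariant()));
    }

    static void Main()
    {
        string resume = "Senior .NET architect, C#, jQuery, SQL Server";
        string[] jobKeywords = { "J2EE", "architect", "jQuery", "PHP", "Oracle" };

        // Only 2 of 5 keywords match, yet a tool with a low threshold
        // would still send the email.
        Console.WriteLine(Score(resume, jobKeywords));
    }
}
```

With no notion of technology stacks, seniority or contract terms, any threshold low enough to find candidates will also spam non-matches.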

To all such recruiters, please for the love of Pete, what’s wrong with you? Stop wasting my friend’s time!

C# Basics: The Conditional or Ternary Operator

For the second installment of C# Basics, let’s take a look at the conditional operator, sometimes known as the ternary operator. How many times have you written code like this:

if (a == b)
  x = y;
else
  x = z;

Probably never, at least not quite this simplistic or with such ancient style variable names, or one would hope anyway. But the example serves its purpose. Now use the conditional operator and convert that code to this:

x = (a == b) ? y : z;

You can read more about the conditional operator on MSDN here.
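For a slightly more realistic use than the abstract example above (the method and variable names here are my own, not from the post), the conditional operator lets you choose a plural suffix inline instead of with a four-line if/else:

```csharp
using System;

class TernaryDemo
{
    public static string ItemLabel(int count)
    {
        // one expression instead of an if/else statement
        return count + " item" + (count == 1 ? "" : "s");
    }

    static void Main()
    {
        Console.WriteLine(ItemLabel(1)); // 1 item
        Console.WriteLine(ItemLabel(3)); // 3 items
    }
}
```

Because the operator is an expression, it can be used anywhere a value is expected, such as in a return statement or an argument list.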

Ruck Software Development Process

I have worked in several development teams that have a Scrum-like development process where the team has adapted Scrum to their own team dynamics and organizational needs. I was intrigued by a recent MSDN article called Visual Studio ALM Rangers—Reflections on Virtual Teams that formalizes one of these Scrum-but-not-Scrum processes and calls it Ruck. Here is the authors’ loose definition:

“To ensure that we don’t confuse anyone in terms of the methodologies or annoy process zealots, we provisionally chose to call our customized process ‘Ruck,’ which, in the rugby world, means ‘loose scrum.’”

The article authors share their experience in Microsoft’s ALM Rangers team and “provide guidance for working with teams in less-than-ideal distributed and virtual environments.” I found this introduction compelling and I wanted to try to restate their approach here with a few of my own thoughts mixed in.

The ALM Rangers’ Ruck process is very specific to their needs and team: a highly distributed team and a high degree of variability in team member availability where team members have multiple professional obligations that take higher priority than their work on the team. This is a very challenging situation for a team trying to be Scrum-like and may be closer to many popular open source projects with multiple contributors from around the world, with one very important exception: a fixed ship date.

I’m not advocating that any specific team adopt the ALM Rangers’ Ruck, but the choices and approach taken by this team could be very valuable to other teams faced with creating their own Ruck because they cannot, for whatever reason, get close enough to Scrum to really call their process Scrum.

One of the most important points I found the authors making had to do with adhering to the agile principle of self-organizing teams. They found that their “belief in letting developers choose work from areas in which they have interest” required that they allow teams to self-organize around these areas of interest rather than specific geographies or time zones.

One of the most difficult challenges facing enterprise software development projects is team size and how to break up larger projects into smaller chunks that can be addressed by smaller, more agile teams. The authors observe:

“We’ve learned that smaller projects and smaller team sizes enable better execution, proper cadence and deeper team member commitment. Also, it seems it’s more difficult for someone to abandon a small team than a larger team.”

Another interesting area the authors cover is that of project artifacts used to manage and break down the project into actionable tasks. The particular names of these seem less important than the fact that they are well defined and communicated and used consistently within their Ruck. In creating your own Ruck process, do not skip this important element.

Roles are also well defined. Knowing what position you play on the team at any given moment can eliminate the need for over-the-shoulder management because each team member knows what their responsibility is. The task of the manager becomes one of coach rather than hall monitor, communicating these roles and responsibilities to team members rather than asking the question, “so what are you working on today?”

One more critical area to define is meetings. Don’t skimp on structuring meetings to achieve what you need to achieve without wasting too much time. As my brother tells me, “It takes a really good meeting to beat no meeting at all.”

And finally, I want to share one more brief excerpt from the authors. Their top recommendations:

“There are many recommendations, skills and practices that will help you improve the virtual team environment, the collaboration and effectiveness of each team member, and the quality of the deliverables. For us, the seven top recommendations include:

  1. Clearly define and foster a team identity and vision.
  2. Clearly define the process, such as the Ruck process.
  3. Clearly define the communication guidelines and protocols.
  4. Clearly define milestones, share status and celebrate the deliverables.
  5. Implement an ALM infrastructure that allows the team to collaborate effectively, such as TFS and associated technology.
  6. Promote visibility and complete transparency on a regular cadence.
  7. Choose your teams wisely, asking members to volunteer and promoting passionate and get-it-done individuals.”

C# Basics: The Proper Use of a Constructor

This is the first in what I hope will be a long series of quick posts covering the basics of programming in C#. For this first installment, I’ll review the proper and improper use of a class or struct constructor.

What is a Constructor
In C#, a constructor is a special code block for a class or struct, called when an instance of that class or struct is created. If no constructor is defined in code, the compiler will create a default parameterless constructor. Constructors can be overloaded and can call a base class constructor.

Proper Use of a Constructor
Use a constructor to set or initialize class or struct fields. If this initialization code is lengthy, such as in the construction of a Windows Form class, move the initialization code to a partial class file and call it something like InitializeComponents. Here are a few examples:

public partial class Apple : Fruit
{
  //same as the default constructor generated when no constructor is defined
  public Apple() { }

  //initialize a local class member
  public Apple(string name)
  {
    _name = name;
  }

  //initialize and call the base class constructor
  public Apple(string name, double weight) : base(weight)
  {
    _name = name;
  }

  //constructor with a conditional param to control construction behavior
  public Apple(bool initializeAll)
  {
    if (initializeAll) InitializeComponents();
  }

  //shown here but often hidden away in another file defining the partial class
  void InitializeComponents()
  {
    //do some lengthy but simple initialization of class members
  }

  private string _name;
}

Improper Use of a Constructor
Do not use a constructor for error prone or complex code. Do not use a constructor to execute tasks or business logic. I’m not going to include code examples of bad constructors, but here are a few anti-patterns I’ve seen in my travels:

  • The constructor makes a call to an external service or database – BAD
  • The constructor creates child objects and calls methods on those objects to perform some business function – BAD BAD
  • The constructor asks for user input or reads from a file – BAD BAD BAD

Alternatives to a Constructor
If you require lengthy, long running or potentially error prone code to produce an instance object of a class or struct, use either a factory class or a static create method. Here’s how:

public class BookService : IBookService
{
  //a private default constructor prevents use of it outside of the class itself
  private BookService() { }

  public static BookService Create()
  {
    BookService instance = new BookService(); //calls the private constructor
    //put lengthy or error prone code here that sets up the service
    return instance;
  }
}

//use a factory for even more complex object creation
public static class BookServiceFactory
{
  public static IBookService Create(BookOptions options)
  {
    //based on the BookOptions provided, create the proper
    //concrete instance of an IBookService implementation
    IBookService instance = BookService.Create();
    return instance;
  }
}
Of course this is not a comprehensive treatment. It’s a blog post. If you want more, buy a book. If you have topics you would like to see addressed in future installments, please let me know.

System.Data.SQLite Write Comparison of SQLite 3 vs Berkeley DB Engines

In my last post, I covered getting the Berkeley DB SQL API running under the covers of the System.Data.SQLite library for .NET. Another weekend has come and I’ve had some time to run some tests. The whitepaper Oracle Berkeley DB SQL API vs. SQLite API indicates that the Berkeley engine will have relatively the same read speed as SQLite but better write speed due to its page-level locking vs. SQLite’s file-level locking.

Berkeley Read Bug
Originally, my code would write a certain number of rows in a given transaction on one or more threads and then read all rows on one or more threads, but I had to give up on my read code because the Berkeley implementation of System.Data.SQLite threw a "malloc() failed out of memory" exception whenever I ran the read test with more than one thread. You can read the code below and decide for yourself whether I made a mistake or not, but the test ran fine with the SQLite assembly from the site.

The Tests
There were four tests, each inserting similar but different data into a single table with a column representing each of the main data types that might be used in a typical application. The first two tests were single threaded, the first inserting 6,000 rows in a single transaction and the second 30,000 rows. The third and fourth tests were multithreaded: 6 threads inserting 1,000 rows each, and then 10 threads inserting 500 rows each.

Here’s the SQL for the table:

CREATE TABLE [TT] (
  [src] TEXT,
  [td] DATETIME,
  [tn] NUMERIC,
  [ti] INTEGER,
  [tr] REAL,
  [tb] BLOB
);

The Results
The results do not show a vast difference in insert speed for Berkeley as I might have expected. The tests were run on an AMD 6 core machine with 8GB of RAM. I’m writing to an SSD drive which may or may not affect results as well.


You may choose to interpret the results differently, but I don’t see enough difference to make me abandon the more stable SQLite 3 engine at this time.

The Code
I invite you to review the code. If I’ve missed something, please let me know. It’s pretty straightforward. With each test, I wipe out the database environment entirely, starting with a fresh database file. I use the System.Threading.Tasks library.

class Program
{
  static void Main(string[] args)
  {
    Console.WriteLine("SQLite Tests");
    var tester = new Tester();
    tester.DoTest("1 thread with 6000 rows", 1, 6000);
    tester.DoTest("6 threads with 1000 rows each", 6, 1000);

    tester.DoTest("1 thread with 30000 rows", 1, 30000);
    tester.DoTest("10 threads with 500 rows each", 10, 500);

    Console.WriteLine("tests complete");
  }
}

class Tester
{
  public void DoTest(string testName, int threadCount, int rowCount)
  {
    Console.WriteLine(testName + " started:");
    DataMan.Create(); //wipe out the database and start with a fresh file

    List<Task> insertTasks = new List<Task>();
    DateTime beginInsert = DateTime.Now;
    for (int i = 0; i < threadCount; i++)
    {
      int threadId = i; //capture the loop variable for the closure
      insertTasks.Add(Task.Factory.StartNew(() => InsertTest(threadId, rowCount), TaskCreationOptions.None));
    }
    Task.WaitAll(insertTasks.ToArray()); //wait for all inserts before stopping the clock
    DateTime endInsert = DateTime.Now;
    TimeSpan tsInsert = endInsert - beginInsert;

    //List<Task> readTasks = new List<Task>();
    //DateTime beginRead = DateTime.Now;
    //for (int i = 0; i < threadCount; i++)
    //  readTasks.Add(Task.Factory.StartNew(() => ReadTest(i), TaskCreationOptions.None));
    //Task.WaitAll(readTasks.ToArray());
    //DateTime endRead = DateTime.Now;
    //TimeSpan tsRead = endRead - beginRead;

    double rowsPerSecondInsert = (threadCount * rowCount) / tsInsert.TotalSeconds;
    //double rowsPerSecondRead = (threadCount * rowCount) / tsRead.TotalSeconds;
    Console.WriteLine("{0} threads each insert {1} rows in {2} ms", threadCount, rowCount, tsInsert.TotalMilliseconds);
    //Console.WriteLine("{0} threads each read all rows in {1} ms", threadCount, tsRead.TotalMilliseconds);
    Console.WriteLine("{0} rows inserted per second", rowsPerSecondInsert);
    //Console.WriteLine("{0} rows read per second", rowsPerSecondRead);
  }

  void InsertTest(int threadId, int rowCount)
  {
    DateTime begin = DateTime.Now;
    DataMan.InsertRows(rowCount); //the actual insert work being timed
    DateTime end = DateTime.Now;
    TimeSpan ts = end - begin;
    //Console.WriteLine("{0} lines inserted in {1} ms on thread {2}", rowCount, ts.TotalMilliseconds, threadId);
  }

  void ReadTest(int threadId)
  {
    DateTime begin = DateTime.Now;
    int count = DataMan.ReadAllRows();
    DateTime end = DateTime.Now;
    TimeSpan ts = end - begin;
    //Console.WriteLine("{0} rows read in {1} ms on thread {2}", count, ts.TotalMilliseconds, threadId);
  }
}

internal static class DataMan
{
  private const string DbFileName = @"C:\temp\bdbvsqlite.db";
  private const string DbJournalDir = @"C:\temp\bdbvsqlite.db-journal";

  public static void Create()
  {
    if (File.Exists(DbFileName)) File.Delete(DbFileName);
    if (Directory.Exists(DbJournalDir)) Directory.Delete(DbJournalDir, true);

    using (SQLiteConnection conn = new SQLiteConnection())
    {
      conn.ConnectionString = "Data Source=" + DbFileName + ";";
      conn.Open();
      using (SQLiteCommand cmd = conn.CreateCommand())
      {
        cmd.CommandTimeout = 180;
        cmd.CommandText = "DROP TABLE IF EXISTS [TT]; CREATE TABLE [TT] ([src] TEXT, [td] DATETIME, [tn] NUMERIC, [ti] INTEGER, [tr] REAL, [tb] BLOB);";
        cmd.CommandType = System.Data.CommandType.Text;
        int result = cmd.ExecuteNonQuery();
      }
    }
  }

  public static void InsertRows(int rowCount)
  {
    SQLiteParameter srParam = new SQLiteParameter();
    SQLiteParameter tdParam = new SQLiteParameter();
    SQLiteParameter tnParam = new SQLiteParameter();
    SQLiteParameter tiParam = new SQLiteParameter();
    SQLiteParameter trParam = new SQLiteParameter();
    SQLiteParameter tbParam = new SQLiteParameter();
    Random rnd = new Random(DateTime.Now.Millisecond);

    using (SQLiteConnection conn = new SQLiteConnection())
    {
      conn.ConnectionString = "Data Source=" + DbFileName + ";";
      conn.Open();
      using (SQLiteTransaction trans = conn.BeginTransaction())
      using (SQLiteCommand cmd = conn.CreateCommand())
      {
        cmd.CommandTimeout = 180;
        cmd.CommandType = CommandType.Text;
        cmd.CommandText = "INSERT INTO [TT] ([src],[td],[tn],[ti],[tr],[tb]) VALUES (?,?,?,?,?,?);";
        cmd.Parameters.AddRange(new[] { srParam, tdParam, tnParam, tiParam, trParam, tbParam });

        int count = 0;
        while (count < rowCount)
        {
          var data = MakeRow(rnd);
          srParam.Value = data.src;
          tdParam.Value = data.td;
          tnParam.Value = data.tn;
          tiParam.Value = data.ti;
          trParam.Value = data.tr;
          tbParam.Value = data.tb;
          int result = cmd.ExecuteNonQuery();
          count++;
        }
        trans.Commit();
      }
    }
  }

  public static int ReadAllRows()
  {
    List<DRow> list = new List<DRow>();
    using (SQLiteConnection conn = new SQLiteConnection())
    {
      conn.ConnectionString = "Data Source=" + DbFileName + ";";
      conn.Open();
      using (SQLiteCommand cmd = conn.CreateCommand())
      {
        cmd.CommandTimeout = 180;
        cmd.CommandText = "SELECT [rowid],[src],[td],[tn],[ti],[tr],[tb] FROM [TT];";
        using (SQLiteDataReader reader = cmd.ExecuteReader())
        {
          while (reader.Read())
          {
            var row = new DRow
            {
              rowid = reader.GetInt64(0),
              src = reader.GetString(1),
              td = reader.GetDateTime(2),
              tn = reader.GetDecimal(3),
              ti = reader.GetInt64(4),
              tr = reader.GetDouble(5)
              //tb = (byte[])reader.GetValue(6)
            };
            list.Add(row);
          }
        }
      }
    }
    return list.Count;
  }

  private static DRow MakeRow(Random rnd)
  {
    int start = rnd.Next(0, 250);
    byte[] b = new byte[512];
    rnd.NextBytes(b); //fill the blob with random bytes
    return new DRow
    {
      src = spear.Substring(start, 250),
      td = DateTime.Now,
      tn = rnd.Next(2999499, 987654321),
      ti = rnd.Next(3939329, 982537553),
      tr = rnd.NextDouble(),
      tb = b
    };
  }

  private const string spear = @"FROM fairest creatures we desire increase,That thereby beauty's rose might never die,But as the riper should by time decease,His tender heir might bear his memory:But thou, contracted to thine own bright eyes,Feed'st thy light'st flame with self-substantial fuel,Making a famine where abundance lies,Thyself thy foe, to thy sweet self too cruel.Thou that art now the world's fresh ornamentAnd only herald to the gaudy spring,Within thine own bud buriest thy contentAnd, tender churl, makest waste in niggarding.Pity the world, or else this glutton be,To eat the world's due, by the grave and thee.";
}

internal class DRow
{
  public long rowid { get; set; }
  public string src { get; set; }
  public DateTime td { get; set; }
  public decimal tn { get; set; }
  public long ti { get; set; }
  public double tr { get; set; }
  public byte[] tb { get; set; }
}

How to Get Berkeley DB SQL API into the .NET System.Data.SQLite Provider

My last post covered getting Berkeley DB up and running with .NET. Now it’s time to take it one step further and build the open source System.Data.SQLite ADO.NET library, replacing the SQLite 3 engine with Oracle’s version of the SQLite.Interop C library that gets embedded into the .NET System.Data.SQLite assembly.

In other words, when you complete the steps below, you’ll have a System.Data.SQLite library that can, supposedly, drop into your .NET projects that currently use the ADO.NET library found on the site. The real difference is that instead of SQLite’s file-level locking for writes, you will be using the latest Berkeley DB storage engine, which supports page-level locking to allow multiple concurrent writers, assuming the writers are writing to different pages.

For more information on Oracle’s implementation of the SQLite API which they call the Berkeley DB SQL API and to learn where they differ, you should read the whitepaper: Oracle Berkeley DB SQL API vs. SQLite API.

  1. Download the Berkeley DB source.
  2. Unblock the zip file (right-click it, select Properties and click the Unblock button).
  3. Extract the contents to an empty directory of your choice.
  4. Open the SQLite.NET.2010.sln solution file.
  5. Choose Release and then build.
  6. Optionally, show all files in the System.Data.SQLite.2010 project, open the Assembly.cs file and change the AssemblyProduct string to something like “System.Data.SQLite-BDB” so that it will show up in the file Properties Details tab. This is the only way you will know that the assembly has the Berkeley DB interop engine built into it rather than the SQLite engine.

If you’re curious, do a diff between the SQLite.Interop project files in the Oracle version and the original version. Clearly very different animals. Now the only thing that is left is to write up a nice little test in C# to compare the two libraries. The subject of a future post.

UPDATE #1 (9/5/2011): Tests today show that this “new” System.Data.SQLite library DOES NOT create a Berkeley DB database. At least as far as I can tell. So while there is a libdb_sql50.dll in the main Berkeley DB build that implements the sqlite3.dll API, there is no way that I can find so far to use that library from C#. More experimentation to come.

UPDATE #2 (9/5/2011): Modify UnsafeNativeMethods.cs, changing line 34 to

private const string SQLITE_DLL = "libdb_sql52.dll";

and then change the conditional compilation symbols in the solution’s projects to “SQLITE_STANDARD” and build. This prevents the embedded SQLite interop DLL from being used. Now when you run your test app, you’ll need to copy the newly built DLLs (see previous post) to its bin directory.


I’ll post more on my tests later. So far, I’ve got the basics working but when I push the limits, I get some nasty crashes, so I’m skeptical of this scheme to use the SQLite API over the Berkeley DB engine.

How to Get Berkeley DB 5.2.28 and .NET Examples Working in Visual Studio 2010 SP1

Over the years, I’ve looked into using Berkeley DB for my own C# applications because it has a solid reputation as a very fast, reliable key/value database. Every time, I’ve walked away disappointed with the .NET API wrapper—until now. My interest was renewed a few days ago when I noticed an item on StackOverflow comparing SQLite performance to Berkeley DB. Since acquiring Berkeley DB, Oracle has been busy making it better, including adding better support for the .NET development community.

WARNING: read and understand the license terms and conditions for Berkeley DB before you choose to use it.

I began this most recent review by downloading the MSI Windows installer. DO NOT DO THIS! When I tried to compile the VS2010 solutions (see below), I got all kinds of errors. Next I tried downloading the 45MB zip file. This worked like a charm, except that the .NET example projects had a broken reference, which was easily fixed. Follow these steps and you’ll be up and running your .NET app using Berkeley DB in no time.

  1. Download file.
  2. Right-click the zip file, select Properties and click the "Unblock" button. This will unblock all the files in the zip so they are usable on your local machine. DO NOT SKIP this step.
  3. Extract the contents of the zip file to a directory, e.g. C:\Code so that you will have a C:\Code\db-5.2.28 folder.
  4. In that db-5.2.28 folder, you will find a build_windows folder. This will be your home for the next steps. Be sure to follow them in order.
  5. Open Berkeley_DB_vs2010.sln
    1. build debug win32 and x64
    2. build release win32 and x64
  6. Open Berkeley_DB_examples_vs2010.sln
    1. build debug win32 and x64
    2. build release win32 and x64
  7. Open BDB_dotNet_vs2010.sln (allow default conversion)
    1. build debug win32 and x64
    2. build release win32 and x64
  8. Open BDB_dotNet_examples_vs2010.sln
    1. In each project, delete the missing reference to "db_dotnet" and add reference to ..\AnyCPU\Release\libdb_dotnet52.dll
    2. build debug win32 and x64
    3. build release win32 and x64

That’s it. Now you can run the example projects and you have a .NET library and x86 (Win32) and x64 binary engines for that library to use. Enjoy!