A Software Development Allegory

Farmer Brown has a tractor. Farmer Jones has a tractor. Both tractors break down every Monday.

Farmer Brown spends every Monday afternoon fixing his tractor and then gets a good five days of work in before he rests on Sunday.

Farmer Jones spends Monday afternoon evaluating the tractor and Monday evening discussing it with his wife, writing up a plan for fixing it, and reviewing that plan.

Tuesday morning Farmer Jones goes to the diner for coffee and a donut and to discuss his tractor woes with his pals, showing off his plan for fixing the tractor. One of the pals suggests it might not be a problem with the doohickey as Farmer Jones suspects. He recommends that Farmer Jones take the tractor to the mechanic for further diagnosis and discussion. So Farmer Jones spends the rest of the day loading his tractor onto the trailer and hauling it into town, whereupon the mechanic tells him he can get to it first thing in the morning.

On Wednesday morning after coffee at the diner, Farmer Jones ambles on over to the mechanic's shop and learns that the problem was indeed what he had suspected all along and that he could have fixed it in an hour or two on Monday afternoon. So Farmer Jones loads up the tractor and takes it home, only to find that his wife has baked a nice apple pie, so he spends a lazy afternoon eating pie and talking with his wife and the neighbor who has come over to gossip. That evening he fixes the tractor.

Now first thing Thursday morning, Farmer Jones gets to work and works through Sunday, making his wife cross with him for not attending Services at the church. Farmer Jones is too tired to listen and flops down in bed in need of rest.

And on Monday morning both tractors break down again.

Farmer Brown gets 20% more work done and rests one day a week.

Farmer Jones later gives up on farming and gets a job managing the parts store at the mechanic shop.

What kind of farmer are you?

Hiberfile.sys Removal Note to Self

A very large portion of my system drive, a 250GB SSD, seemed to have been gobbled up by my fresh Windows 8.1 install, and after installing all my tools, I was fast running out of disk space on the C: drive. A quick search for culprits using Effective Search from SowSoft turned up a 64GB file called hiberfil.sys.

After a little hunting and poking, I found the GUI for power management options and tried to turn off hibernate. But that did not get rid of the file.

Not until I found and used the following in an “as Administrator” cmd window did I recover the 64GB of SSD space:

powercfg -h off

And yes, I have 64GB of RAM. Call me spoiled.

Microsoft Graph Engine

This tweet from Scott Hanselman caught my eye because I spent nearly all of 2014 working on a graph solution for my employer that had its genesis in my study of Neo4J, but primarily in my reading of this Microsoft Research paper (Sakr, Elnikety, and He) produced in 2012.

[Image: the tweet from Scott Hanselman about the Microsoft Graph Engine]

The essence of the Microsoft Research paper is storing edges (node relationships) in memory. So that's what I did, with unmanaged memory allocated in blocks using the Marshal.AllocHGlobal method. My own efforts were very specific to my employer's needs at the time and not really usable as a general purpose tool, so I was very pleased to see that Microsoft Research had an ongoing project called Trinity working to produce a more general purpose tool based on many of the same concepts originally explored by Sakr, Elnikety and He.
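
As a rough illustration of that approach (a hypothetical sketch, not my employer's actual code), fixed-size edge records can be packed into a single unmanaged allocation obtained from Marshal.AllocHGlobal; the Edge struct and EdgeBlock class below are illustrative names only:

using System;
using System.Runtime.InteropServices;

// Hypothetical sketch: fixed-size edge records packed into one
// unmanaged allocation. Names and layout are illustrative only.
[StructLayout(LayoutKind.Sequential)]
public struct Edge
{
   public long FromNodeId;
   public long ToNodeId;
}

public sealed class EdgeBlock : IDisposable
{
   private static readonly int EdgeSize = Marshal.SizeOf(typeof(Edge));
   private readonly IntPtr _block;
   private readonly int _capacity;
   private int _count;

   public EdgeBlock(int capacity)
   {
      _capacity = capacity;
      _block = Marshal.AllocHGlobal(EdgeSize * capacity); // raw unmanaged memory
   }

   public void Add(Edge edge)
   {
      if (_count >= _capacity) throw new InvalidOperationException("Block is full.");
      Marshal.StructureToPtr(edge, IntPtr.Add(_block, _count * EdgeSize), false);
      _count++;
   }

   public Edge Get(int index)
   {
      if (index < 0 || index >= _count) throw new ArgumentOutOfRangeException("index");
      return (Edge)Marshal.PtrToStructure(IntPtr.Add(_block, index * EdgeSize), typeof(Edge));
   }

   public void Dispose()
   {
      Marshal.FreeHGlobal(_block); // unmanaged memory is never garbage collected
   }
}

Nothing in that block is garbage collected, which is the point: the edges live outside the managed heap, but the memory must be explicitly freed, hence the IDisposable.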

That tool was recently quietly released as the Microsoft Graph Engine. I’ve only had a little time to explore and understand it and look forward to spending more time using it soon. The essence is the same. Store the data in sequential chunks in raw unmanaged memory. Graph Engine uses Visual Studio to generate code on the fly using a meta language called Trinity Specification Language (TSL). Have a look through the documentation. If you’re considering graph database work, put Microsoft’s Graph Engine on your list of items to evaluate.

Blog Vacation is Over

It's been seven months, two job changes, and a crazy busy stretch of family, work, and life.

Vacation is over.

The list of things to blog about keeps growing.

So much to say, so little time to say it.

ServiceMq Hits 10,000 Downloads

I am pleased to see this milestone of 10,000 downloads in the short history of ServiceMq and its underlying communication library ServiceWire, a faster and simpler alternative to WCF for .NET-to-.NET RPC. And the source code for all three can be found on GitHub here.

[Image: NuGet statistics showing ServiceMq passing 10,000 downloads]

Over the past few weeks both libraries have been improved.

ServiceMq improvements include:

  • Options to persist messages asynchronously, improving overall throughput when message traffic is high
  • ReceiveBulk and AcceptBulk methods were introduced
  • Message caching was refactored to improve performance and limit memory use in scenarios where large numbers of messages are sent and must wait for a destination to become available, or are received and must wait to be consumed
  • Faster asynchronous file deletion was added, eliminating the standard File.Delete's permission demand on every message file delete
  • Asynchronous append file logging was added to improve throughput
  • The FastFile class was refactored to support IDisposable and now dedicates a single thread each to asynchronous delete, append and write operations (a minimal sketch of that pattern follows this list)
  • Upgraded to ServiceWire 1.6.3
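
To illustrate the dedicated-thread idea noted above, here is a minimal, hypothetical sketch of the delete side of that pattern; it is not ServiceMq's actual FastFile code, just the general shape of queuing paths and deleting them on a single background thread:

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

// Hypothetical sketch of a dedicated-thread asynchronous file deleter.
// Not the actual ServiceMq FastFile implementation.
public class AsyncFileDeleter : IDisposable
{
   private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
   private readonly Thread _worker;

   public AsyncFileDeleter()
   {
      // one dedicated background thread services all delete requests
      _worker = new Thread(ProcessQueue) { IsBackground = true };
      _worker.Start();
   }

   // callers return immediately; the delete happens on the worker thread
   public void Delete(string fileName)
   {
      _queue.Add(fileName);
   }

   private void ProcessQueue()
   {
      foreach (var fileName in _queue.GetConsumingEnumerable())
      {
         try
         {
            File.Delete(fileName);
         }
         catch (Exception)
         {
            // a failed delete should not kill the worker; log if desired
         }
      }
   }

   public void Dispose()
   {
      _queue.CompleteAdding(); // allow the worker to drain and exit
      _worker.Join();
      _queue.Dispose();
   }
}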

ServiceWire has had two minor but important bugs fixed:

  • Code was refactored to properly dispose of resources when a connection failure occurs.
  • Previously if the host was not hosting the same assembly version of the interface being used, the connection would hang. This scenario now properly throws an identifiable exception on the client and disposes of the underlying socket or named pipe stream.

Real World Use

In the last month or so, I have had the opportunity to use both of these libraries extensively at work. All of the recent improvements are a direct or indirect result of that real world use. Without disclosing work related details, I believe it is safe to say that these libraries are moving hundreds of messages per second and in some cases 30GB of data between two machines in around three minutes across perhaps 300 RPC method invocations. Some careful usage has been required given our particular use cases in order to reduce connection contention from many thousands of message writer threads across a pool of servers all talking to a single target server. I’ve no doubt that a little fine tuning on the usage side may be required, but overall I’m very happy with the results.

I hope you enjoy these libraries and please contact me if you find any problems with them or need additional functionality. Better yet, jump onto GitHub and submit a pull request of your own. I am happy to evaluate and accept well thought out requests that are in line with my vision for keeping these libraries lightweight and easy to use.

One other note

I recently published ServiceMock, a tiny experimental mocking library that has surprisingly been downloaded over 500 times. If you’re one of the crazy ones, I’d love to hear from you and what you think of it.

Agile and Architecture

One of the most misunderstood and misrepresented documents in the history of software development is the Agile Manifesto. This may be due to many of its readers overlooking the phrase “there is value in the items on the right.” Most seem to focus on the items on the left only. Here’s the text that Cunningham, Fowler, Martin and other giants in the field created:

Manifesto for Agile Software Development

We are uncovering better ways of developing
software by doing it and helping others do it.
Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right,
we value the items on the left more.

Note that I have emphasized the items on the right. These do indeed have value, yet many advocates of Agile deliberately ignore or even exclude them from their software development process and organization. Some have advocated the elimination of architecture and design entirely, leaving these open to gradual discovery through an iterative process driven by use cases, user stories, and backlog tasks.

Recently I have read a number of discussions, blog posts, and articles on the question of Agile and architecture, and the comments and discussion around the topic have been interesting. The general theme of these sources is that architecture (and design) are at odds with Agile, perhaps due to the notion that developing software is only about writing the code. This is one of the great fallacies of our time.

Architecture and Design are Software Development Artifacts

Teams and organizations who skip architecture and design will sooner or later find themselves off track and repeating work unnecessarily. Such waste is not entirely preventable, but this does not mean we should not try. Teams that incorporate these activities into their iterations, regularly revisiting architectural questions such as non-functional requirements and component design, will find that they are better able to stay on course.

Organizations that have multiple teams will find greater stability in moving forward when guided by a centralized architecture team, composed of architects or leads who are dedicated to and work within the organization's development teams. The architecture team works in an Agile fashion, with its own backlog and its own products, including working prototypes, cross-cutting standardized components, documentation of the architecture and designs, and work items to be placed on the backlogs of development teams.

In this way, development teams have dependencies on the architecture team and can make requests for additional guidance or improvements or extensions to shared, standardized libraries for which the architecture team is responsible. These requests keep the backlog of the architecture team charged with work throughout the software development lifecycle.

In addition to formal activities to improve architecture and design across the organization, the architecture team should regularly interact with development teams through activities that include (but are not limited to) the following:

  • practice improvement activities—e.g. SOLID principles
  • technology deep dives—e.g. digging deeper into .NET
  • technical solutions brainstorming sessions—solving the hard problems
  • technical debt evaluation and pay-down planning
  • code reviews and walkthroughs—one on one and as a team
  • presenting and sharing solutions and ideas from other development teams
  • exploring and evaluating new technologies and tools

Like testing and coding, architecture and design are part of the whole of software development. These activities are perfectly suited to Agile development practices, including Scrum. And when all of these aspects of delivering quality software are taken into account and incorporated into your Agile process, your chances of success are greatly improved.

----

P.S. And if you add to all this a great DevOps team to support your efforts with automated build and deploy systems, your life will be that much easier and your chances of success will improve even further.

ServiceMock a New ServiceWire Based Project

I know. There are some really great mocking libraries. The one I've used the most is Moq 4. While I've not been a regular user of mocking libraries, I am fascinated by their usefulness, and I've recently been thinking about how I might use the ServiceWire dynamic proxy to create a simple and easily extended mocking library. After a few hours of work this morning, the first experimental version of ServiceMock has come to life.

This is not a serious attempt to replace Moq or any other mocking library. It is for the most part a way to demonstrate how to use the dynamic proxy of ServiceWire to do something more than interception or remote procedure call (RPC). It is entirely experimental, but you can get it via NuGet as well.

With ServiceMock, you can now do something like this:

// create your interface
public interface IDoSomething
{
   void DoNoReturn(int a, int b);
   string DoSomeReturn(string a, string b);
}

// now mock and use the mock
// note: you don't have an implementation of the interface
class Program
{
   static void Main(string[] args)
   {
      var mock = Mock.Make<IDoSomething>();

      mock.DoNoReturn(4, 5);
      var mockReturnValue = mock.DoSomeReturn("a", "b");
      Console.WriteLine(mockReturnValue);

      Console.WriteLine("Press Enter to quit.");
      Console.ReadLine();
   }
}

To create a library that takes advantage of the ServiceWire dynamic proxy, you need a factory (Mock), a channel (MockChannel) that the dynamic proxy will invoke, a channel constructor parameter class (MockDefinition), and finally a class of functions for invoke and for exception handling should the invoke throw (MockActions). And of course, you can supply your own customized function and assign it to the MockActions instance.

The heart of the extensibility is the ability to inject your own “invoke” function via the instance of the MockActions class in the MockDefinition constructor parameter.

var mock = Mock.Make<IDoSomething>(new MockDefinition
{
   Id = 1,
   Actions = new MockActions
   {
      Invoke = 
         (id, methodName, returnType, parameters) =>
         {
            // do your thing here
            var retval = new object[parameters.Length + 1];

            // assign your return value to the first object
            // in the return array
            retval[0] = returnType.Name == "String"
               ? returnType.ToString()
               : TypeHelper.GetDefault(returnType);

            //by default, return all parameters as supplied
            for (int i = 0; i < parameters.Length; i++)
            {
               retval[i + 1] = parameters[i];
            }
             return retval;
         },
      InvokeExceptionHandler = 
         (id, methodName, returnType, parameters, exception) =>
         {
            //do your custom exception handler if your invoke throws
            return true; //return true if you want exception thrown 
            //return false if you want the exception buried
         }
   }
});

Here’s the default “invoke” code should you not wish to provide one.

(id, methodName, returnType, parameters) =>
   {
      Console.WriteLine(id + methodName);
      var retval = new object[parameters.Length + 1];
      
      //return params must have returnType 
      //as first element in the return values
      retval[0] = returnType.Name == "String" 
         ? returnType.ToString() 
         : TypeHelper.GetDefault(returnType);

      //by default, return all parameters as supplied
      for (int i = 0; i < parameters.Length; i++)
      {
         retval[i + 1] = parameters[i];
      }
      return retval;
   };

Of course, you might want to log the calls, aggregate counts per methodName, or whatever else you wish. I hope you find this useful, but I hope even more that you will build your own dynamic proxy wrapper for your own cool purposes.
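
For instance, here is a hedged sketch of an Invoke function that aggregates a call count per methodName while returning the same default shape shown above; it builds on the IDoSomething example from earlier, and the callCounts dictionary is my own addition, not part of ServiceMock:

// requires using System.Collections.Generic;
var callCounts = new Dictionary<string, int>();

var countingMock = Mock.Make<IDoSomething>(new MockDefinition
{
   Id = 2,
   Actions = new MockActions
   {
      Invoke =
         (id, methodName, returnType, parameters) =>
         {
            // aggregate a count per method name
            int count;
            callCounts.TryGetValue(methodName, out count);
            callCounts[methodName] = count + 1;

            // same shape as the default: return value first, then the parameters
            var retval = new object[parameters.Length + 1];
            retval[0] = returnType.Name == "String"
               ? returnType.ToString()
               : TypeHelper.GetDefault(returnType);
            for (int i = 0; i < parameters.Length; i++)
            {
               retval[i + 1] = parameters[i];
            }
            return retval;
         }
   }
});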

How to Rescue Distressed Projects and Teams

If you have worked in the software development world long enough, it has likely been your privilege (tongue firmly in cheek) to work on a project and with a team that has been taken to or even driven over the brink of failure. A project like this usually involves an unhappy client, a frustrated management and a very discouraged delivery team. It generally involves an “interrupt-driven” task and workflow prioritization process with a fixed delivery schedule, a once fixed but changing requirements set, and estimates and assumptions that failed to consider the full lifecycle of a feature, story, or task.

Often such projects are cancelled and teams dismantled. Sometimes they push through to a bitter end with something that works but leaves everyone unhappy. It took too long. It cost too much. It works, but not well. Clients are lost. Teams suffer unnecessary attrition. Blame and resentment prevail. But there is a better way. Teams and projects can be rescued.

To rescue a distressed project and team is not as hard as one might think. Many have written about this. Some of us have even experienced it first hand. One excellent case study was published two years ago by Steve Andrews on InfoQ. There are many other stories like the one he shares and they all have several common aspects that can come to your rescue.

Analyze and Decide Using Facts
Working from facts and data, such as defect counts and other available metrics, can help to eliminate the emotional element and engage the team’s analytical talents.

Drive Quality with Acceptance Tests
Make quality and testing come first. Create acceptance tests for a given feature or story before you begin coding. Acceptance tests should clearly define “done” and support validation.
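
As a purely hypothetical illustration (the story, the Order and OrderService types, and their API are placeholders, not from any real project), an acceptance test written before the real code exists might pin down "done" like this; the stub classes are included only so the snippet compiles:

using NUnit.Framework;

// Hypothetical acceptance test for a "discount on large orders" story.
// It defines "done" for the story and drives the eventual implementation.
[TestFixture]
public class LargeOrderDiscountAcceptanceTests
{
   [Test]
   public void OrdersOverOneThousandDollarsReceiveTenPercentDiscount()
   {
      // Given an order totaling $1,200 (placeholder types and API)
      var order = new Order(total: 1200m);
      var service = new OrderService();

      // When the discount policy is applied
      var discounted = service.ApplyDiscounts(order);

      // Then the customer pays $1,080
      Assert.AreEqual(1080m, discounted.Total);
   }
}

// Placeholder domain types; in practice these would be written after the test.
public class Order
{
   public decimal Total;
   public Order(decimal total) { Total = total; }
}

public class OrderService
{
   public Order ApplyDiscounts(Order order)
   {
      // intentionally naive stub so the example compiles and passes
      return order.Total > 1000m ? new Order(order.Total * 0.9m) : order;
   }
}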

Eliminate Waste—Control Flow—Decrease Batch Size
Long established principles of quality manufacturing, these can be applied to software development. Creating very large and complex requirements documents that will be invalidated shortly after development begins is waste. Managers pushing large sets of tasks and assigning specific work items to specific team members creates waste. “Fix all the bugs” creates a batch that may overwhelm any team. But when team members pull work from a queue (aka backlog) and a team’s total work-in-progress (WIP) is limited, individual and team work flows efficiently.

Allow Teams to Self-Organize
Coach teams in Scrum and Kanban and let them choose which works best for them to control flow and achieve individual and team efficiency. Some teams may choose a combination. In any case, self-organizing teams pull work and make progress more efficiently than those that wait for management to assign tasks. Management is then free to focus on grooming the backlog.

Manage the Backlog
Change control and therefore control of the backlog is critical to the success of a project. A manager with one of any number of titles controls what gets added to the backlog and when. The manager gathers details from stakeholders and delivery team members for each item on the backlog to provide sufficient detail for an estimate to be made. Based on input from stakeholders, the manager prioritizes items. Delivery team members add estimates before items can be taken off the backlog and put into a ready state or work-in-progress state. Estimates can be in abstract “points” or “ideal days” or some other common unit of measure to allow tracing metrics as work proceeds. Once estimates are provided, the manager works with stakeholders to finalize backlog priorities.

Work as a Team
Even if you are not using Kanban, you still need to eliminate bottlenecks and prevent individuals from working too far ahead of the team. If analysts are unable to keep up with writing acceptance tests, re-task other team members to avoid starving or bunching up of the team’s work-in-progress. If the delivery team lacks a well groomed and ready backlog, you should alter your planning cadence, decoupling it from your delivery cadence.

The primary factor in rescuing a distressed project and team is the motivation of management. If you believe in your people and give them the tools, processes and coaching they need to achieve great things, you can turn around a troubled project and team.