Earlier this week I had a somewhat heated argument with some of my colleagues over the performance difference between MSTest and NUnit. I have used both professionally over the last five years, and in my own personal projects I always use MSTest because it comes with Visual Studio. I have never really thought about the performance of either; I had heard from others that NUnit was faster to execute, but I prefer the fully integrated nature of MSTest and generally stick with it. During the discussion one of my colleagues was adamant that MSTest was an order of magnitude slower than NUnit, so much so that on previous projects he had switched from MSTest to NUnit because the time it took to run 7000 unit tests severely impacted his TDD productivity.
I decided I would test out my colleague's claims and try to understand why this might be the case. I created a new VS 2010 solution and added a class with a single method that multiplied two numbers without using the multiplication operator (I frequently ask interviewees to write this method). I then added an MSTest and an NUnit test project to the solution, and to each I added 100 test classes, each with 10 test methods that called my Multiply method and asserted that the return value was correct.
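The post does not show the actual code, so the following is a hypothetical sketch of what the setup might have looked like. It assumes a shift-and-add implementation of Multiply (one common answer to the interview question) and uses the classic MSTest and NUnit attribute names from that era; the class and test names are my own invention, and the attributes are fully qualified only so that both frameworks can appear in one listing.

```csharp
// Hypothetical reconstruction of the class under test.
// Multiplies two integers without using the * operator, via shift-and-add.
// Assumes b is non-negative; a negative b would loop forever with this scheme.
public static class Calculator
{
    public static int Multiply(int a, int b)
    {
        int result = 0;
        while (b != 0)
        {
            if ((b & 1) == 1)   // if the lowest bit of b is set,
                result += a;    // add the current shifted value of a
            a <<= 1;            // double a
            b >>= 1;            // halve b
        }
        return result;
    }
}

// One of the 10 test methods per class, MSTest flavour.
[Microsoft.VisualStudio.TestTools.UnitTesting.TestClass]
public class CalculatorMsTests
{
    [Microsoft.VisualStudio.TestTools.UnitTesting.TestMethod]
    public void Multiply_ReturnsProduct()
    {
        Microsoft.VisualStudio.TestTools.UnitTesting.Assert.AreEqual(
            42, Calculator.Multiply(6, 7));
    }
}

// The same test, NUnit flavour.
[NUnit.Framework.TestFixture]
public class CalculatorNUnitTests
{
    [NUnit.Framework.Test]
    public void Multiply_ReturnsProduct()
    {
        NUnit.Framework.Assert.AreEqual(42, Calculator.Multiply(6, 7));
    }
}
```

The point of mirroring the test bodies exactly is that any timing difference between the two projects should come from the framework and runner, not from the tests themselves.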
I executed all the MSTest tests by pressing Ctrl + R, A, then I executed all the NUnit tests using the NUnit GUI, and lo and behold it seemed my colleague was right: VS 2010 took almost four times as long to run the tests.
I thought about this for a while, then asked myself whether this was a fair comparison. The NUnit GUI is a standalone process that does nothing but run the tests, while VS 2010 is not only doing its thing but also integrating the test results into several views. I also started thinking about how developers actually run their tests. It has been some time since I worked without either CodeRush or Resharper, and I have had TestDriven.NET available more often than not. On my current project we all have Resharper 5.* and everyone runs tests using it rather than the NUnit GUI, so to do the job properly I needed to find out whether it is the actual execution of the tests that accounts for the time difference, or the extra work the runners do.
On my personal projects I use CodeRush, so I installed TestDriven.NET and the trial of Resharper 5.* and did three runs of each set of tests with each runner, including VS 2010 and the NUnit GUI. The following table shows the run times in seconds as published by each runner. Following this I have included some observations, because I noticed some odd differences, particularly when it comes to rendering results.
| Runner         | MSTest run 1 | MSTest run 2 | MSTest run 3 | NUnit run 1 | NUnit run 2 | NUnit run 3 |
|----------------|--------------|--------------|--------------|-------------|-------------|-------------|
| VS IDE         | 13           | 14           | 14           | N/A         | N/A         | N/A         |
| NUnit GUI      | N/A          | N/A          | N/A          | 2.51        | 2.03        | 1.38        |
| TestDriven.NET | 5.49         | 5.45         | 5.4          | 5.11        | 5.23        | 5.54        |
| CodeRush       | 5.15         | 3.03         | 3.25         | 3.07        | 3.24        | 3.04        |
| Resharper      | 1.9          | 1.54         | 1.67         | 1.26        | 1.26        | 1.29        |
The figures clearly suggest that running all tests in a solution through the VS IDE is much slower than with any of the other runners, but if you drill down into individual tests the execution times are very low, and adding them up does not equal the total time reported. My conclusion is that the reported total includes the time to report and render the results as well as to execute the tests. There is a lot more going on with the VS IDE MSTest integration, so it is not surprising it takes more time; Microsoft could and should optimise this somewhat, though I am not sure they will. I once did a similar exercise with MSBuild and found it was significantly slower when run within VS than when run from the command line. It seems that deep integration with the IDE comes at a significant cost, but I personally don't see this as a problem.
The NUnit GUI is clearly very fast, but that is not totally surprising since it is a lightweight process that does little more than run the tests and render some results. Personally I prefer an integrated solution, so I would never use the NUnit GUI on a day-to-day basis.
TestDriven.NET is a pretty simple add-in that runs the tests and outputs the results to the Output window of the VS IDE. This is probably the fairest comparison of all, and it produced comparable results; I guess this is down to the fact that it does not attempt to render the results in any way. NUnit appears to be marginally faster to execute, but there are only fractions of a second in it.
With CodeRush, I couldn't figure out why the first MSTest run reported a slower total time. I ran the MSTest suite a few extra times beyond the three and it consistently reported times in the 3.* second range. My conclusion on CodeRush is that it is comparable for both MSTest and NUnit, and that it is fast enough for my needs. A good thing, since I have just renewed my subscription for another year.
I found the results for Resharper a bit odd. The reported execution times were comparable, with NUnit coming out slightly ahead, but visually there was a huge difference: the rendering of MSTest results was significantly slower, taking almost 20 seconds to complete, even though there is no difference in what is rendered. If Resharper is using the native XML results output by MSTest and NUnit then I could understand some of this difference, as the MSTest output is significantly more complex than that of NUnit, but not to this degree. VS does a lot more with the results than Resharper yet is significantly faster, and CodeRush does pretty much the same with both sets of results with no discernible visual difference between MSTest and NUnit rendering performance. My only conclusion with regard to Resharper is that I am glad I use CodeRush for my own stuff.
My overall conclusion from this exercise is that the unit testing framework you choose is not as important as the test runner you choose. Obviously NUnit has features that MSTest doesn't, which might lead you in the direction of NUnit, just as the appeal of a single fully integrated tool might lead you towards MSTest, but when it comes down to performance the test runner seems to me to be the deciding factor.