
Friday, 18 October 2013

Parametrized code testing and code exploration

A few days ago I had the pleasure of presenting parametrized unit testing, among other things, to a group of developers as part of the consulting side of my work. I'm sure everyone got something useful out of those few hours we spent together.

So I thought I'd write an introductory blog post on parametrized unit tests using the NUnit library. But to give it some edge, let me do this alongside code exploration, which makes our tests (and consequently our production code) better. I'll be using the Code Digger extension for Visual Studio 2012. Hopefully you'll learn something new as well.

Intro to parametrized tests

We've all written closed unit tests where each test method tests a particular part of our production code. The problem with closed unit tests is code duplication: more often than not we're duplicating a considerable amount of code, something we constantly strive to eliminate in our production code. Enter the parametrized tests universe. Instead of having one method per test, we write one method that takes parameters and reuse it for several tests, positive or negative or both. As simple as that.

What NUnit has to offer

If you're using NUnit, which has supported parametrized tests since May 2009, you basically have 2+1 ways to write them:

  • void return type parametrized tests
  • concrete return type parametrized tests and
  • test theories
I said 2+1 because theories are treated and executed a bit differently than the first two.

To make things clearer I'll use the example of a validator, since validators are rather common in any application. I'll be testing a validator of Slovenian VAT identifiers. Slovenian VAT numbers are best described by the regular expression SI([1-9]\d{7}): VAT identifiers always start with "SI" followed by 8 digits, of which the first cannot be zero and the last is calculated from the previous 7. This is the validator code I'll be testing:

    private static Regex vatPattern = new Regex(@"^SI(?<value>[1-9]\d{7})$", RegexOptions.Compiled | RegexOptions.IgnoreCase);
    private int[] multipliers = new int[] { 8, 7, 6, 5, 4, 3, 2 };

    /// <summary>Validates provided identifier. Does not assure that such identifier actually exists.</summary>
    /// <param name="identifier" type="String">VAT Identifier</param>
    /// <returns type="Boolean"><c>true</c> when identifier is valid; <c>false</c> otherwise.</returns>
    public bool Validate(string identifier)
    {
        if (string.IsNullOrWhiteSpace(identifier))
        {
            throw new ArgumentNullException("identifier");
        }

        if (!vatPattern.IsMatch(identifier))
        {
            return false;
        }

        Match m = vatPattern.Match(identifier);
        string value = m.Groups["value"].Value;

        int totalSum = 0;
        for (int i = 0; i < 7; i++)
        {
            totalSum += int.Parse(value[i].ToString()) * multipliers[i];
        }

        totalSum = 11 - totalSum % 11;

        if (totalSum > 9)
        {
            totalSum = 0;
        }

        return totalSum.ToString().Equals(value[7].ToString());
    }
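
To make the checksum step concrete, here's the calculation traced by hand for the valid identifier SI12345679, as a standalone sketch of the algorithm above (not the validator class itself):

```csharp
using System;

class ChecksumDemo
{
    static void Main()
    {
        // Check digit for SI12345679: the first seven digits are weighted by the
        // multipliers 8..2, summed, and the sum is reduced modulo 11.
        int[] multipliers = { 8, 7, 6, 5, 4, 3, 2 };
        string value = "12345679";

        int totalSum = 0;
        for (int i = 0; i < 7; i++)
        {
            // 1*8 + 2*7 + 3*6 + 4*5 + 5*4 + 6*3 + 7*2 = 112
            totalSum += (value[i] - '0') * multipliers[i];
        }

        totalSum = 11 - totalSum % 11; // 11 - (112 % 11) = 11 - 2 = 9

        if (totalSum > 9)
        {
            totalSum = 0; // remainders 0 and 1 both map to check digit 0
        }

        Console.WriteLine(totalSum == value[7] - '0'); // True: the check digit is 9
    }
}
```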

Void return type parametrized tests

These are very much like closed unit tests, the main differences being the added method parameters and the different attribute declarations we use on them. The parameters should include the inputs as well as the expected result so we can assert it. In order to test our code completely we need to provide some test case inputs:

  • null
  • empty string
  • spaces-only string
  • random invalid string
  • non "SI" starting string with 8 digits
  • too short string that starts with "SI"
  • too long string that starts with "SI"
  • correctly formatted string but with first digit as zero
  • correctly formatted string but with invalid checksum digit
  • valid VAT identifier with upper case "SI"
  • valid VAT identifier with lower case "SI" (should pass as well)

Closed tests always have the [Test] attribute on them. Parametrized tests may have it as well, but if you add at least one [TestCase] or [TestCaseSource] attribute, you can omit [Test] because the NUnit runner will already identify the method as a test. [TestCase] defines a single test case's inputs, while [TestCaseSource] provides a source of at least one test case; it's usually used to provide several of them, defined statically or externally depending on your implementation.
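
As a quick illustration of [TestCaseSource], the three exception-throwing inputs could be pulled out into a shared source (a hypothetical sketch; the source name NullOrWhitespaceInputs is mine, not part of the validator suite):

```csharp
// A static source of test cases; NUnit resolves it by name at run time
// and runs the test method once per entry.
public static object[] NullOrWhitespaceInputs =
{
    new object[] { null },
    new object[] { "" },
    new object[] { " " }
};

[TestCaseSource("NullOrWhitespaceInputs")]
public void SlovenianVATValidator_ThrowsOnMissingInput(string input)
{
    // arrange
    IValidator validator = new SlovenianVATValidator();

    // act + assert: every case from the source should trigger the guard clause
    Assert.Throws<ArgumentNullException>(() => validator.Validate(input));
}
```

The same source can be shared between several test methods, which is where [TestCaseSource] pays off over repeating [TestCase] attributes.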

If you're dealing with numeric parameters you can also declaratively provide values to such parameters individually, using the [Random], [Range], [Values] and [ValueSource] attributes. They're all rather self-explanatory, but you can always read the documentation. There are also three additional test-method-level attributes, [Combinatorial], [Pairwise] and [Sequential], that define how the NUnit runner will combine those individually declared parameter values.
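
As a quick taste of those per-parameter attributes, here's a hypothetical demo test (not part of the validator suite):

```csharp
// [Sequential] pairs the declared values positionally: (1, "a"), (2, "b"), (3, "c").
// With [Combinatorial] (the default) the same declaration would produce 3 x 3 = 9 cases.
[Test, Sequential]
public void PerParameterValues_Demo(
    [Values(1, 2, 3)] int number,
    [Values("a", "b", "c")] string letter)
{
    Assert.That(number, Is.InRange(1, 3));
    Assert.That(letter, Is.Not.Null);
}
```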

For simplicity, I'm providing all my test cases using the [TestCase] attribute:

    [TestCase(null, false, ExpectedException = typeof(ArgumentNullException))]
    [TestCase("", false, ExpectedException = typeof(ArgumentNullException))]
    [TestCase(" ", false, ExpectedException = typeof(ArgumentNullException))]
    [TestCase("AAAAA", false)]
    [TestCase("XY12345678", false)]
    [TestCase("SI123", false)]
    [TestCase("SI1234567890", false)]
    [TestCase("SI00000000", false)]
    [TestCase("SI99999999", false)]
    [TestCase("SI12345679", true, TestName = "Valid Uppercase Slovenian VAT")]
    [TestCase("si12345679", true, TestName = "Valid Lowercase Slovenian VAT")]
    public void SlovenianVATValidator_VoidTest(string input, bool expectedResult)
    {
        // arrange
        IValidator validator = new SlovenianVATValidator();

        // act
        bool result = validator.Validate(input);

        // assert (expected value comes first in Assert.AreEqual)
        Assert.AreEqual(expectedResult, result);
    }

Concrete return type parametrized tests

These are very similar, except that we don't pass the expected result as a method parameter; we declare it with ExpectedResult instead, and the NUnit runner asserts it automatically. So if all we're doing is comparing the tested code's result to an expected value, we don't have to write explicit asserts. Here's the very similar code of a concrete return type parametrized test:

    [TestCase(null, ExpectedException = typeof(ArgumentNullException))]
    [TestCase("", ExpectedException = typeof(ArgumentNullException))]
    [TestCase(" ", ExpectedException = typeof(ArgumentNullException))]
    [TestCase("AAAAA", ExpectedResult = false)]
    [TestCase("XY12345678", ExpectedResult = false)]
    [TestCase("SI123", ExpectedResult = false)]
    [TestCase("SI1234567890", ExpectedResult = false)]
    [TestCase("SI00000000", ExpectedResult = false)]
    [TestCase("SI99999999", ExpectedResult = false)]
    [TestCase("SI12345679", ExpectedResult = true, TestName = "Valid Uppercase Slovenian VAT")]
    [TestCase("si12345679", ExpectedResult = true, TestName = "Valid Lowercase Slovenian VAT")]
    public bool SlovenianVATValidator_ConcreteTest(string input)
    {
        // arrange
        IValidator validator = new SlovenianVATValidator();

        // act
        return validator.Validate(input);
    }

Comparing void and concrete return type parametrized tests

Both types are somewhat similar, but there are still a few differences that make concrete return type parametrized tests my preferred type:

  • with void return type test cases we have to provide some expected result parameter even when we're expecting an exception
  • concrete return type tests don't have to implement any asserts when all we're comparing is the result

Test theories

Test theories, on the other hand, are a bit different. They're used when we don't individually control the inputs, so the inputs can actually be anything. We tell the NUnit test runner that our test method is a theory by declaring the [Theory] attribute on it. But theories also work differently: instead of having just arrange, act and assert blocks in our test method, we have an additional block called assume. In this block we state assumptions about the input parameters under which our tested code should return successful results. Each time the NUnit runner executes our theory, it first checks the given parameters against the assumptions, and only if they pass do the other blocks (arrange, act and assert) execute. Assumptions therefore act as input parameter filters.

Unfortunately there's not much documentation on test theories, and not many developers use them. They seem like a very powerful mechanism within NUnit, but we're left to our own understanding and implementation. Theories don't provide test cases by the standard means we use in closed or parametrized tests; instead we merely provide data points using the [Datapoint] or [Datapoints] attributes. Data points can be provided statically or externally, however we choose to implement them, and are then combined combinatorially for each theory execution. The fact to remember here is that all data points of a matching type will be used for each test theory parameter. If any of our theory parameters is of bool or enum type, values will be injected automatically by the NUnit runner, so we don't have to explicitly provide data points for those types.

How detailed should assumptions be?

Test theories could also be called positive-only void return type parametrized tests, as assumptions should filter out all parameter combinations under which our tested code would return negative results. As the NUnit theory documentation puts it: "A theory makes a general statement that all of its assertions will pass for all arguments satisfying certain assumptions." This means that we either:

  • provide all the assumptions under which tests return positive results, or
  • provide enough assumptions that our tested code will return either a positive or a negative result, and assert for the correct one
In either case this means we have to encode the tested code's business logic inside the theory, either as assumptions or as code that lets us assert the correct result (positive or negative). And since tests are usually developed by the same developer who writes the tested code, theories will likely contain the same algorithmic bugs as the tested code itself. I see this as a huge paradox. I have yet to be proven wrong or to find a different usage scenario for test theories.

Anyway, here's the same test written as a theory (albeit with incomplete assumptions):

    [Datapoints]
    private IEnumerable<string> inputs = new[] {
        null,
        "",
        " ",
        "AAAAA",
        "SI123",
        "XY12345678",
        "SI00000000",
        "SI99999999",
        "SI12345679",
        "si12345679"
    };

    [Theory]
    public void Theory(string input)
    {
        // Assumptions:
        // input is not null/empty/whitespace,
        // starts with SI,
        // only numbers after SI,
        // is 10 characters long,
        // ...

        // assume
        int n;
        Assume.That(!string.IsNullOrWhiteSpace(input));
        Assume.That(input.ToLowerInvariant().StartsWith("si"));
        Assume.That(int.TryParse(input.Substring(2), out n));
        Assume.That(input.Length == 10);
        //Assume.That(...)

        // arrange
        IValidator validator = new SlovenianVATValidator();

        // act
        bool result = validator.Validate(input);

        // assert
        Assert.IsTrue(result);
    }
Because my assumptions don't cover the checksum calculation, some seemingly correct parameters still fail in NUnit, but you can see how theories are used.

NUnit test theories and Microsoft code contracts

Microsoft Code Contracts are a similar paradigm, although they're written inside the actual code, not in tests. They also state assumptions about input data, and in this case (since this is actual code being executed) they're guarding against genuinely uncontrolled input. Code Contracts provide preconditions, postconditions and object invariants, which gives them an even broader usage scenario than NUnit theories. But I won't go into detail about Code Contracts here; maybe in some other post.
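
For a flavour of the syntax, here's a minimal sketch of how the validator's guard clause could be expressed with Code Contracts (assuming the Code Contracts binary rewriter is enabled for the project; this is illustrative, not the actual validator code):

```csharp
using System;
using System.Diagnostics.Contracts;

public class SlovenianVATValidatorWithContracts
{
    private int[] multipliers = new int[] { 8, 7, 6, 5, 4, 3, 2 };

    public bool Validate(string identifier)
    {
        // Precondition: replaces the manual guard clause; the rewriter
        // throws ArgumentNullException when the condition is violated.
        Contract.Requires<ArgumentNullException>(!string.IsNullOrWhiteSpace(identifier));

        // ... the validation logic from earlier in the post ...
        return false;
    }

    // Object invariant: checked by the rewriter after every public member executes.
    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        Contract.Invariant(multipliers.Length == 7);
    }
}
```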

Intro to code exploration

If you've never heard of code exploration, it's a method of white-box code analysis that tries to find input edge cases of a particular method, covering all code branches and possibly breaking our code in ways we don't always foresee. Visual Studio 2010 was supported by Pex, which was more than just a code exploration tool: it also generated unit tests with high code coverage for us. Unfortunately Pex isn't compatible with later versions of Visual Studio, so the same team provided the Code Digger extension for Visual Studio 2012 instead. It uses the same engine to analyze our code and report interesting inputs to the tested method, so we can see which cases we haven't covered and what breaks our code.

Our case of Slovenian VAT validator

The initial code that validates Slovenian VAT identifiers seems robust and shouldn't really break in unexpected places, but you may be surprised to find that Code Digger knows more than we do. When I run Code Digger on my Validate method, it shows me two possible inputs that throw a FormatException. When I add one of those two test cases to my parametrized unit test harness and debug my code, I can see that the validator breaks on the int.Parse() line.

The two strange-looking inputs that break my validator both use obscure Unicode characters that nonetheless pass my regular expression matching. Recall the regular expression: new Regex(@"^SI(?<value>[1-9]\d{7})$", RegexOptions.Compiled | RegexOptions.IgnoreCase); A quick check on the internet shows there's more to digits than those of us using the Latin alphabet tend to know: these particular inputs are based on Arabic-Indic digit characters. The way .NET regular expressions work, such characters of course satisfy the \d match. But unfortunately they can't be parsed to an integer, so my code breaks.

The outcome of this digging is that I'll change my regular expression, replacing \d with [0-9], and add an additional test case to my unit testing harness using one of these two identifiers. Great! Code Digger FTW, and we all learned something new.
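
The fix is easy to verify: in .NET, \d matches any Unicode decimal digit by default, while [0-9] matches ASCII digits only. Here's a standalone sketch (the Arabic-Indic digits U+0662 through U+0668 stand in for Code Digger's generated input):

```csharp
using System;
using System.Text.RegularExpressions;

class UnicodeDigitDemo
{
    static void Main()
    {
        // "SI1" followed by seven Arabic-Indic digits: the leading ASCII '1'
        // satisfies [1-9], and \d happily matches the Unicode digits after it.
        string tricky = "SI1\u0662\u0663\u0664\u0665\u0666\u0667\u0668";

        // Original pattern: matches, even though int.Parse later chokes on these digits.
        Console.WriteLine(Regex.IsMatch(tricky, @"^SI(?<value>[1-9]\d{7})$"));    // True

        // Fixed pattern: [0-9] rejects the non-ASCII digits outright.
        Console.WriteLine(Regex.IsMatch(tricky, @"^SI(?<value>[1-9][0-9]{7})$")); // False
    }
}
```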

That's it

If you hadn't heard of code exploration tools before, I hope I've shown you how useful they are and how they actually help you write better code, both production code and test code. And if you'd never seen parametrized tests before, hopefully you now see their value as well.

If you have any questions or suggestions related to this topic, let me know in a comment. I'd especially be interested in discussing test theories, which still seem to be an uncharted area on my development map.
