Context: the blossoming Panther team didn’t have a unit testing framework in place and wanted to implement one. Well, in order to know how to implement it, we needed a set of requirements for what we…
…needed to have:
a. The tests need to be completely independent of the units they’re testing, working off an API (pause for potential contention on whether this would then still be a “unit” test).
b. To run a series of tests consistently in both local and remote environments. Ideally, though, any test-found error should be caught locally before a remote catch is even necessary.
c. A way to visualize our test runs and see their output.
d. Prevent builds from succeeding if the tests fail (aka behave as a gate).
e. A way to quickly add and remove tests without dealing with a lot of configuration changes.
…wanted to have:
1. Creating the tests, running them, and generating results needs to happen QUICKLY.
2. The tests should be able to be specified by a non-developer.
3. The tests should be run pre-commit and prevent commits if the tests fail.
There are certainly more requirements that could have been added, but working with the above, we ended up with the following.
DISCLAIMER: The following is meant to evangelize the idea of unit testing and how we were able to quickly put together a “framework” that allowed us to set up unit testing. It is NOT meant to evangelize the actual framework. The Forge team is closing in on a holistic approach to cover unit, integration, and endpoint testing. If you’re looking to get a couple of tests out to check your base code functionality, please feel free to do the following. If you’re looking for a long-term, system-compatible testing framework, please reach out to the Forge team.
xUnit as the testing framework
Our rationale for using xUnit (over, say, NUnit or other frameworks) was two-fold: (a) online documentation showed a preference for xUnit (see link ), and (b) we had already started with xUnit, and setting up another framework felt like undoing existing work.
xUnit console runner for local builds
The setup for this was straightforward. We simply paket-added a few packages to our solution, created a new project for the tests, added the tests to it, and marked each test with the [<Fact>] attribute. Adding a new test is then just another method or file in the test project (covering requirement e). Also, since we reference the project externally, this approximates how external systems might access it (covering requirement a).
Note for future testers: the test functions NEED to have parentheses (a unit parameter) after the name in order to be recognized as tests.
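To make that concrete, here’s a minimal sketch of what such a test file might look like (the module name and test below are hypothetical; this assumes the xUnit packages have already been paket-added as described):

```fsharp
module Panther.Config.Tests.Examples // hypothetical module name

open Xunit

// The [<Fact>] attribute marks the function for test discovery.
// The () parameter is what makes this a compiled method the runner
// can see; without it, this compiles as a value and is NOT
// recognized as a test (see the note above).
[<Fact>]
let ``addition behaves as expected`` () =
    Assert.Equal(4, 2 + 2)
```

Adding another test is then just another `[<Fact>]`-annotated function in this file (or a new one), with no configuration changes.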
After this, we built the test project and then ran the tests by running the following command:
<location to packages>\packages\xunit.runner.console\tools\xunit.console.exe "<location to test project build (release or debug accordingly)>\<test project name>.dll" -xml "<output file>.xml"
We’ll see the new file created in the root with the test results.
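For example (all paths below are hypothetical; substitute your own solution layout and build configuration), a filled-in invocation might look like:

```shell
rem Run the compiled test assembly and write the results to an XML report.
rem Paths are illustrative only.
C:\src\Panther\packages\xunit.runner.console\tools\xunit.console.exe ^
  "C:\src\Panther\Panther.Config.Tests\bin\Release\Panther.Config.Tests.dll" ^
  -xml "TestResults.xml"
```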
xUnit console runner for remote builds (please refer to the disclaimer above)
As we’re using a local package for the tests, all we needed to ensure was that the same command ran on the remote build box as on the local one. We did this by transferring the above command to Jenkins. In the “Build a Visual Studio project or solution using MSBuild” configuration, after the regular build command, add a “Windows PowerShell” step. In it, we simply put the same command we’d used earlier:
& $env:WORKSPACE\packages\xunit.runner.console\tools\xunit.console.exe "$env:WORKSPACE\<location to test project>Panther.Config.Tests.dll" -xml $<output_file>.xml
By doing this, we ensured that as long as the remote machine resembled our local machines in build specs (project versions, plugins), we would see the same test results (covering requirement b).
xUnit plugin for visualization and gatekeeping
We then introduced a way to actually see the results of our build without needing to dig through the console. Further, Jenkins still marked the build as passing even if the tests failed (as there are no actual code-compilation issues).
We addressed both of these by introducing the “Publish xUnit test result report” step from the Jenkins xUnit plugin. In it, all we needed to specify was the type of output file to read. The plugin parses the reports and judges success or failure based on the XML: if there are failures, the build fails (addressing requirement d), and we get an error report and history (addressing requirement c).
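For reference, the report the plugin parses is the XML the console runner emits, which looks roughly like this (abbreviated, with hypothetical test names; attribute names follow the xUnit v2 result schema, though your runner version may differ slightly):

```xml
<assemblies>
  <assembly name="Panther.Config.Tests.dll" total="2" passed="1" failed="1" skipped="0">
    <collection name="Test collection for Panther.Config.Tests" total="2" passed="1" failed="1" skipped="0">
      <test name="addition behaves as expected" result="Pass" time="0.002" />
      <test name="config parsing handles empty input" result="Fail" time="0.014" />
    </collection>
  </assembly>
</assemblies>
```

The pass/fail judgment (and thus the build gate) comes from these per-test `result` attributes and the assembly-level failure counts.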
This has now addressed all our need-to-have requirements. The want-to-haves are being worked on.
I’d like to wrap up with the following: while consistency in testing and frameworks is crucial in a large tech company, it’s far more important to have tests in the first place! Finally, if the setup rigor above is unexciting to you (people who don’t find working with Jenkins enthralling? Shocking!), please reach out to the Forge team for ideas and suggestions.