HaveComputerWillCode.Com

Welcome!
Life is a Non-Deterministic Finite State Automata
Automation ? (*pGeekiness)++ : Code Generation;

May 6, 2012

Pronto v1.5 Released (Productivity Tool for Automating MTM Test Cases)

Filed under: ALM, Programming, Testing — admin @ 6:12 am

Pronto lets you create test stubs (including data binding and documentation) for your manual MTM Test Cases in C# or VB.Net by dragging those test cases onto your source file. You can then use Pronto’s Bulk Associated Automation Assistant to associate many of your test methods with your MTM test cases in one go. Uses might include: automating acceptance tests or creating Keyword/Action Word Frameworks. The application can be downloaded directly from Visual Studio Gallery here.

Changes to v1.5:

• All fragment generators are now freely editable T4 text files
• Create new generators and customize the fragments easily (docs and samples included)
• Fixed a few bugs

After downloading, unblocking the file (right click -> Properties), installing the VSIX and restarting Visual Studio, ensure that the Pronto window is visible:

Assuming you already have a WorkItem query in Team Explorer that returns Test Cases, just Drag and Drop that query onto the Pronto window to get a list of Test Cases:

To create your method stubs and help with documentation, either Drag ‘n’ Drop or Right Click/Copy the Test Cases and Shared Steps and paste directly into your Unit Test:

Notice how the method stub, data binding parameters, test steps and title are all generated for you automatically (this is customizable).

After building your solution manually, open up the Bulk Associated Automation Assistant (the “Sheep” icon). It will discover your tests and correlate the MTM Test Case ID with the WorkItem Id on your test method:

Now you can optionally associate them all in one go without leaving VS2010.

This application has been tested on Visual Studio 2010 Professional (first release) and Visual Studio 2010 Ultimate (SP1, FP2, Rollups). Provided your process template integrates with MTM from a Test Automation perspective – ie: it supports ‘Associated Automation’, has an ‘Automated’ automation status and contains ‘Shared Steps’ and ‘Test Case’ template types – then in theory, depending on the alignment of the stars, the phase of the moon and the direction of the wind, Pronto should just work.

Enjoy!

Grey Ham

January 23, 2012

Adding ‘VERIFY’ to MsTest (‘ASSERT’-but-carry-on-if-it-fails)

Filed under: ALM, Programming, Testing — admin @ 7:23 am

Updated source code is here (2012-Jan-28: I forgot to include the Verify Exception in the original source :) )

If you’ve ever used the likes of GTEST for C++ unit testing, you will be familiar with the ASSERT and VERIFY semantics:

  • ASSERT – bail out of a test as soon as a condition is false (ie: NULL Pointer)
  • VERIFY – acknowledge that a condition is false but continue with the test anyway and get as far as you can. The test still technically failed but more information was gleaned.

Verify is ideal for functional / UI / UAT Automation because it lets the test get as far as it can and elicit as much information as possible before the test completes and a summary of failures is reported: it’s more useful for a developer to know that 5 numeric fields on a form are invalid instead of just one. A colleague pointed out recently that various UI Automation tools tend to implement similar semantics using ‘LogFail’ or similar statements – however, as a developer/tester I find the ‘Verify’ semantics more fitting, but they are not part of MsTest.

In this rather long post, I will put Verify into MsTest by wrapping Assert.AreEqual with Verify.AreEqual (for all samples here, this is just an ordinary VS2010 Unit Test). I will provide nothing but a bare-bones implementation here (and I’ve just noticed the code prettifier I use has messed up some of the snippets on this post… please see the source code above for the complete code).

When you do this:

	Assert.AreEqual(1,0)

The unit test fails immediately. We want to do this:

	Verify.AreEqual(1, 0)

Where the failure is ‘noted’ but the test continues. When the test completes, if there were verification failures in the test, we need to throw an exception so that the unit testing framework designates the verification failures as a test failure. How to do this in MsTest? There are a few hurdles to cross!

Syntax
All Assert methods are static. Like so:

	Assert.AreEqual(...)

Assertions have no state – a failure is propagated to the test host immediately so static methods are a good fit. Verification failures on the other hand will ‘accumulate’ so we need to preserve state. For this post I have chosen to go the ‘instance’ route so here is a simple Verify class:

public class Verify
{
	public Verify()
	{
		Exceptions = new List<UnitTestAssertException>();
	}

	public void AreEqual(int left, int right)
	{
		...
	}

	public readonly List<UnitTestAssertException> Exceptions;
}

However: you can implement Verify methods using thread local storage and static methods but I am trying to keep this long post shorter!
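To give a flavour of that alternative, here is a minimal sketch of a static Verify that accumulates its failures in thread local storage. It is an illustration only: plain Exception stands in for UnitTestAssertException so the snippet compiles without a reference to MsTest, and every name in it is made up.

```csharp
using System;
using System.Collections.Generic;

// Sketch of the static-with-thread-local-storage alternative.
// Plain Exception stands in for UnitTestAssertException so this
// compiles without MsTest; all names here are illustrative.
public static class StaticVerify
{
	[ThreadStatic]
	private static List<Exception> _failures;

	private static List<Exception> Failures
	{
		get { return _failures ?? (_failures = new List<Exception>()); }
	}

	public static void AreEqual(int left, int right)
	{
		if (left != right)
			Failures.Add(new Exception(
				string.Format("Verify.AreEqual failed: {0} != {1}", left, right)));
	}

	// Call from [TestCleanup]: hand back this thread's failures and reset.
	public static IList<Exception> Drain()
	{
		var result = Failures;
		_failures = null;
		return result;
	}
}
```

A [TestCleanup] method would then call StaticVerify.Drain() and throw if the returned list is non-empty.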

The key is the implementation of the Verify methods: all an Assertion does is throw an exception when a condition is false so all we have to do is sink & record that exception by wrapping it with our Verify calls:

public class Verify
{
	...
	public void AreEqual(int left, int right)
	{
		try
		{
			Assert.AreEqual(left, right);
		}
		catch (UnitTestAssertException ex)
		{
			Exceptions.Add(ex);
		}
	}
}

If the assertion fails; we essentially ‘note’ the failure but continue. Putting it all together, we might have a test like this:

protected Verify Verify;

[TestInitialize]
public void Init()
{
	this.Verify = new Verify();
}

[TestMethod]
public void Pointless()
{
	Verify.AreEqual(1,2);
	Verify.AreEqual(3,3);
	Verify.AreEqual(3,4);
}

NOTE: Even though Verification violations occurred, as far as the Unit Testing framework is concerned the test technically passed – no exceptions were thrown by the Unit Test! So we need to check for verification violations in the Cleanup method and then throw our own Exception if Verifications were logged during the test.

When executing that test, we have two Verification errors. But what to do with them? If we are running Pointless with Associated Automation within MTM, we want the test to fail; MTM has no concept of a Warning or a Partial Failure. The test either passes or fails so from MTM’s perspective unit tests should exhibit the same behavior. The Verifications are only useful for troubleshooting, logging and triage so they need to appear in the final log / TRX. If we are running the tests within Visual Studio as a Unit Test, we still need the test to fail for the same reason as above to integrate with the toolchain. How to do this? The easiest place to look for any logged verification failures is in the [TestCleanup] method. If you throw an exception in TestCleanup, the exception/failure is still associated with the Unit Test that has just run (ie: the method containing the Verify methods):

[TestCleanup]
public void Cleanup()
{
	if(Verify.Exceptions.Count > 0)
	{
		throw Verify.Exceptions.First();
	}
}

The unfortunate side effect of this is that the exception/stack trace in the Test Results / TRX file looks like it came from Cleanup method and not the test itself. Clicking through takes you to the common Cleanup method which is kind of annoying:

But we can fix that.

CHECKPOINT: We can accumulate verification failures during a unit test and throw an exception in TestCleanup if any verification failures occurred. The exception we manually throw could contain a description of every verification encountered so far (for this post: I am dealing with only the first exception and I am keeping message formatting as simple as possible).
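As an aside, folding every recorded verification into one summary exception (rather than rethrowing only the first) might look like the sketch below. Plain Exception is used so the snippet stands alone; in the real Cleanup you would throw something derived from UnitTestAssertException, and the formatting here is just one possible choice.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class VerifySummary
{
	// Fold every recorded verification failure into a single exception
	// whose message lists them all, one per line.
	public static Exception Summarize(IList<Exception> failures)
	{
		var lines = failures.Select((ex, i) =>
			string.Format("Verification {0}: {1}", i + 1, ex.Message));
		return new Exception(string.Format("{0} verification failure(s):{1}{2}",
			failures.Count, Environment.NewLine,
			string.Join(Environment.NewLine, lines)));
	}
}
```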

But what if the Unit Test contains ASSERTIONS *AND* Verifications? Like so:

[TestMethod]
public void Pointless()
{
	Verify.AreEqual(1,2);
	Assert.AreEqual(3,3);
	Verify.AreEqual(3,4);
}

I have decided that the Assertion gets ‘priority’ – it is that exception/assertion we want to propagate ‘out’ to the unit testing framework. We can determine if a ‘real’ Assertion or Exception was thrown in the Unit Test by looking at the CurrentTestOutcome property:

public TestContext TestContext { get; set; }

[TestCleanup]
public void Cleanup()
{
	if(TestContext.CurrentTestOutcome == UnitTestOutcome.Passed)
	{
		// If we only have Verify failures, as far as MsTest is concerned, the test will pass! So we need to spoof a failure...
		if(Verify.Exceptions.Count > 0)
		{
			throw Verify.Exceptions.First();
		}
	}
}

Easy! So we can comfortably mix assertions and verifications in a single functional test and it will “just work” as far as the tool chain is concerned; if a real assertion happens, that one gets propagated. In C++/GTEST, an ASSERT is used to validate a pointer (little need to go on if it’s NULL…!) and VERIFY is then used for individual properties. In a functional test, an ASSERT might look for a key component of a page, and the VERIFY calls for its fields, for example. It depends whether it fits what you are trying to do. Use your judgement. This will not be suitable in all circumstances.

Fixing the Stack
As stated, if we throw an exception from TestCleanup, the stack trace looks like this:

That’s not good enough! It shows the Cleanup method itself, not the actual line of code where the Verify call was made. Thanks to the .Net designers, this is easy to fix though :-) If you examine System.Exception, you can override two key properties: Message and StackTrace (and there’s a section for each in the TRX file). Yes – as you can override the stack trace text, you can ‘inject’ a stack trace into an exception and fool anything that interprets that exception about its source – such as the TRX viewer. And it’s easy to get the stack trace – just read the Environment.StackTrace property:

	string stack = Environment.StackTrace;

Trivial! But we will be getting the stack trace in our Verify method… we need to ‘unwind’ a bit. To ‘pop’ a few lines we just do this:

string[] delims = new string[] { "\r\n" };
List<string> t = new List<string>(Environment.StackTrace.Split(delims, StringSplitOptions.None));

// 'Pop' a few lines
t.RemoveRange(0, 2);

// Reconstruct
string stack = String.Join("\r\n", t);

With this, we can ‘inject’ a stack trace into our exception. The only way I could find to do this is to create a custom exception class (gives us more flexibility…) and override its virtual StackTrace property. So it makes sense for our new exception to wrap the original exception and delegate every other call to it (where possible):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace HaveCompTest
{
    // Might as well derive from the unit test exception base classes... and you should probably use InnerException :-)
    public class MyVerifyWrapperException : UnitTestAssertException
    {
        // Pass in the original assertion exception
        public MyVerifyWrapperException(UnitTestAssertException utex, string spoofedStackTrace)
        {
            OriginalException = utex;
            SpoofedStackTrace = spoofedStackTrace;
        }

        public override System.Collections.IDictionary Data { get { return this.OriginalException.Data; }}
        public override string Message { get { return OriginalException.Message; }}
        public override string Source { get { return OriginalException.Source; }}

        public override string StackTrace { get { return SpoofedStackTrace; } }

        public readonly System.Exception OriginalException;
        public readonly string SpoofedStackTrace;
    }
}

Getting there! So now when we wrap our original Assert.AreEqual call with our Verify.AreEqual call, we can create our new exception type with the StackTrace we want:

try
{
            Assert.AreEqual(left, right);
}
catch(UnitTestAssertException ex)
{
            string[] delims = new string[] { "\r\n" };
            List<string> t = new List<string>(Environment.StackTrace.Split(delims, StringSplitOptions.None));
            // Choose how many lines to strip... 
            t.RemoveRange(0, 3);
            string stack = String.Join("\r\n", t);

            // The stack trace now looks like it was thrown directly within the test method itself instead of here.
            MyVerifyWrapperException e = new MyVerifyWrapperException(ex, stack);
            _Exceptions.Add(e);
}

KOOL! So now when we throw our verify exception in [TestCleanup] like so, in the TRX viewer we see this:

Clicking through takes us straight to the location of the Verify failure.

DONE!

Putting it all together

The source code for a simple skeleton class can be found here (just add it to an ordinary MsTest Unit Test).

Tips

You’ll need to wrap all the Assert.XXX calls… a delegate is your friend when doing this… () => { Assert.AreEqual(…) }

The error message says ‘Assert.’ in TRX. Modify the Message in MyVerifyWrapperException to say Verify…

You might want a clickthrough to all Verifications in the Stack Trace view…

If mixed Assertion / Verification failures occur within  a test, you might still want the Verifications to show up in TRX…

Use Thread Local Storage to implement static Verify syntax so they are similar to Assert…
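As an illustration of the delegate tip above, here is a hedged sketch of a Verify class where a single Check() helper owns the try/catch so each wrapper is a one-liner. Plain Exceptions and hand-rolled comparisons stand in for MsTest’s Assert calls and UnitTestAssertException; the shape is the point, not the names.

```csharp
using System;
using System.Collections.Generic;

// Sketch: one Check() helper carries the try/catch, so every Verify.XXX
// wrapper becomes a one-liner. In the real class the delegate body would
// call Assert.AreEqual etc. and Check would catch UnitTestAssertException.
public class VerifyViaDelegates
{
	public readonly List<Exception> Exceptions = new List<Exception>();

	private void Check(Action assertion)
	{
		try { assertion(); }
		catch (Exception ex) { Exceptions.Add(ex); }
	}

	public void AreEqual(int left, int right)
	{
		Check(delegate
		{
			if (left != right)
				throw new Exception(string.Format("AreEqual failed: {0} != {1}", left, right));
		});
	}

	public void IsTrue(bool condition)
	{
		Check(delegate
		{
			if (!condition) throw new Exception("IsTrue failed");
		});
	}
}
```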


March 25, 2011

Automating the Integration of VmWare with Microsoft Test Manager and Lab Center: Part 6 – Changes for Visual Studio 2010 Service Pack 1

PLEASE NOTE: This is for Visual Studio 2010. For the VS2012 version, please click here.

Mid-way through the series, Visual Studio Service Pack 1 was released. How amusing! So this is an update to incorporate the Service Pack 1 changes.

See Part 5 for the Source Code and scripts.

DISCLAIMER!

Do not use this code under any circumstances (should just about cover the possibilities!).

I am using an undocumented API in order to construct the Physical Environment in Lab Center and set up the Test Controller Topology. I have tested the registered environments using MTM, use it often and have come to no grief. My Lab Center and TFS system appears to be stable. But you use this at your own risk! At the very least, it would be sensible to do a full back up of your TFS Installation and ideally test this prior to production deployment. Use at your own risk :)

Parts 1, 2, 4, 5 are the same: nothing changes. The only changes you will need to make are to the installation automation in Part 3.

I will not be providing an updated script to do this but if you have been following the series and want to stick to the same structure, you need to make Service Pack 1 available under the VisualStudioGumpf directory by unpacking your ISO there:

You will probably also need to create a new BAT file to launch “setup.exe /passive” from the Service Pack 1 location. Drop this into your Golden VM at the usual place:

And then write a new function to launch that from PowerShell.

InstallServicePack1 $VmWareConnectionParameters $VmWareClonedImageLocation "$DomainName\$DomainUsername" $DomainPassword $VisualStudioGumpfUnc;

Troubleshooting
If you get problems – try it manually first! The only part where I do anything undocumented is to create the Physical Environment. If you happen to get a situation where you can do this registration process manually, but not automatically, please let me know so that I can fix it :-)

Source Code Changes
I have no idea how many lines of white powder I had up my nose when I wrote this comment:

// I am not going to check this here but only one machine in an environment can be of a given role. 
Dictionary agentsToAdd = new Dictionary();
Dictionary machineRoleInfo = new Dictionary();

But it is clearly wrong!

Apart from having to install Service Pack 1, I haven’t had to make any changes: the environment still gets created and all appears normal. You should be able to target your created environment from MTM:

And run your Unit Tests, Integration Tests and CodedUI Tests on it:

Tchau!

March 17, 2011

Automating the Integration of VmWare with Microsoft Test Manager and Lab Center: Part 5 – Registering a new Physical Environment with TFS / Lab Center

PLEASE NOTE: This is for Visual Studio 2010. For the VS2012 version, please click here.

This series of posts will show you how to provision VmWare machines, install Test Controllers and Test Agents, register those with TFS and construct a Physical Environment within Lab Center so that it can be targeted by Microsoft Test Manager. All automatically using PowerShell (and VmRun.EXE):

  • Part 1 – Introduction.
  • Part 2 – Provisioning a new VmWare machine using Vix and joining a domain.
  • Part 3 – Automating the installation of the Test Controller and Test Agent.
  • Part 4 – Automating the configuration and registration of the Test Controller and Test Agent with TFS.
  • Part 5 – Automating the creation of a Physical Environment in Lab Center. (Source Code) [Updated: Removed comment]
  • Part 6 – Service Pack 1 Notes
  • Build-Deploy-Test with VmWare is a superset of this so I will tackle that after this series.

    See Part 2 for Requirements (PowerShell, Vix etc.).

    DISCLAIMER!
    Do not use this code under any circumstances (should just about cover the possibilities!).

    I am using an undocumented API in order to construct the Physical Environment in Lab Center and set up the Test Controller Topology. I have tested the registered environments using MTM, use it often and have come to no grief. My Lab Center and TFS system appears to be stable. But you use this at your own risk! At the very least, it would be sensible to do a full back up of your TFS Installation and ideally test this prior to production deployment. In particular: this has not yet been tested with Visual Studio 2010 Service Pack 1 (it has now, and it appears to work as well as pre-SP1) but use at your own risk :-)

    In future, if and when Microsoft provides a way to create Physical environments and construct Test Controller / Test Agent hierarchies programmatically, this is the only stage you will have to change in this series. Everything else you have done as part of this series still applies.

    Background
    That aside: this is the REALLY AWESOME PART!! Before going on, consider provisioning a brand new virtual machine using the scripts at the end of Part 4.

    There are no officially supported APIs to manage the Test Controllers and Test Agents in TFS. This is unfortunate because it stops you creating Physical Environments and constructing your Test Controller / Test Agent hierarchies on the fly. You proceed at your own risk!

    The Test Controllers and Test Agents are all written in .Net and Microsoft has a tendency to make all of its classes ‘internal’ so you cannot reuse them willy-nilly. I like this approach: rather than obfuscating their code they are polite enough to let us wander around their work if we need to in order to troubleshoot or find out WTF is going on. It’s a pragmatic solution between opening things up to all and sundry and having to support their firstborn for life.

    The code we need to invoke to construct our Test Controller and Test Agent hierarchies is stored in ‘internal’ classes so we need to find a way of instantiating those classes and invoking their methods.

    A major help is that IL has no concept of ‘internal’: it is the compiler that enforces that contract (well… it’s not THAT simple!) opening up all kinds of creative possibilities! But that aside, you can create internal classes from other libraries just by using reflection like so:

    System.Reflection.Assembly controllerAssembly = System.Reflection.Assembly.LoadFile(theAssemblyPath); 
    Type controllerTestEnvironmentType = controllerAssembly.GetType(theFullyQualifiedTypeName);
    
    object testEnvironment = controllerTestEnvironmentType.InvokeMember("", System.Reflection.BindingFlags.CreateInstance, null, controllerTestEnvironmentType, null);
    

    …and you can get hold of properties, invoke private and protected methods and everything else you want using nothing more than reflection. But in .Net 4.0 there is an easier way! The ‘dynamic’ keyword will leave property and method resolution until runtime. Think of it as IDispatchEx from the COM World or expandos in JavaScript. It is very cool because by wrapping an internal type you can treat it like any other object. I use the code from that link verbatim but there’s a lot of extra stuff you can do with that if you put in the time.

    The reason I am mentioning it? Because you will see it in my code when I invoke the internal or private Test Controller classes:

                dynamic controllerConnectionManagerWrapper = new AccessPrivateWrapper(controllerConnectionManager);
    
                controllerConnectionManagerWrapper.UpdateTestEnvironment(testEnvironment);
    

    Manually creating a Physical Environment using a UI
    The source code this time includes all of the scripts and some supporting Visual Studio 2010 Solutions that you will have to build yourself:

  • The AgentHelpers library creates physical environments in Lab Center and sets up the Test Controller / Test Agents ready for use in that environment.
  • A crude UI Application (PhysicalEnvironments) that uses the library to create physical environments in Lab Center
  • A command line application called AgentInteraction.EXE is used to set up the environments as part of the PowerShell script

    The first thing we need to do is to make sure you can manually construct Physical Environments using my User Interface because it drives the underlying TFS libraries. If not: not much point going on!

    Assuming you have heeded the disclaimer, you will need to add references to several standard TFS API’s including:

    The interesting three are:

  • Microsoft.TeamFoundation.Lab.Activities – in C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies\TeamLab\Microsoft.TeamFoundation.Lab.Activities.dll
  • Microsoft.VisualStudio.QualityTools.ControllerObject – in C:\Windows\assembly\GAC_MSIL\Microsoft.VisualStudio.QualityTools.ControllerObject\10.0.0.0__b03f5f7f11d50a3a\Microsoft.VisualStudio.QualityTools.ControllerObject.dll
  • Microsoft.VisualStudio.QualityTools.ExecutionCommon – in C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\ReferenceAssemblies\v2.0\Microsoft.VisualStudio.QualityTools.ExecutionCommon.dll

    It is important you set *ALL* of those DLLs to ‘Copy Local: True’ for the Reference. You will need to add them to AgentInteraction and PhysicalEnvironments as well.

    Now run the PhysicalEnvironments application and you will see a glorious ‘Graham UI’. I loathe arty-farty work and I loathe writing User Interfaces so much, and have consistently done such a bad job of it, that I never get asked to write them any more (mission accomplished!). Select the TFS Team Project Collection / Team Project and observe what happens:

    This UI is listing all of the Test Controllers on the left: that is the same list you see when you create a new environment in Lab Center and are asked to select the Test Controller you want to create the environment on:

    When you select one of the Test Controllers (such as Moo1250) you will see the list of Agents available. This list is identical to what you can see in Lab Center as well:

    So we have all of the information we need to start constructing Physical Environments. Enter a new name for the Physical Environment you want to create – such as “WooHOO” – tick the single Agent, enter a sensible Role name, and select “Create New Environment”:

    Your UI will now gloriously lock up for about a minute (I provide no UI Cues) whilst Lab Center does whatever it needs to do to create your environment. Assuming all went well (you’ll know when focus returns to your window) you should be able to see your environment in Lab Center:

    AWESOME! So we have all the bits and pieces.

    Before going on, delete the environment in Lab Center and then continue.

    Automating the creation of a Physical Environment using PowerShell
    The PowerShell script obviously doesn’t use the User Interface but I have created a dirty, hackey-wackey Console application called ‘AgentInteraction.EXE’ that is only moderately unforgiving of command line mistakes but can be used to drive the registration process.

    To set this up to work, create a new subdirectory under VisualStudioGumpf called AgentInteraction and drop these DLLs (AgentHelpers/bin/Debug), AgentInteraction.EXE and anything pulled in as part of the build into it:

    Assuming you have removed the Test Controller and everything from Lab Center, you should be able to execute code like this:

    # Obtain a list of all Agents known to a Test Controller
    \\10.0.0.1\VisualStudioGumpf\AgentInteraction\AgentInteraction.EXE Command=ListAgents TestControllerUri="MOO1251.WOOHOO.COM:6901"
    
    # To create an environment on the Test Controller MOO1251 and with a single Agent called MOO1251 with a Role of 'Desktop Client' you would execute:
    \\10.0.0.1\VisualStudioGumpf\AgentInteraction\AgentInteraction.EXE command=RegisterEnvironment testcontrolleruri="MOO1251.WOOHOO.COM:6901" TeamProjectName="TestSample" EnvironmentName="My New Environment" Agents="MOO1251,The Web Server"
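
    Incidentally, the key=value style those commands use is simple to parse. The sketch below is only a guess at the shape of such parsing, not AgentInteraction’s actual code; keys are treated case-insensitively to match the mixed casing (Command= / command=) in the examples above.

```csharp
using System;
using System.Collections.Generic;

// A guess at key=value argument parsing in the style AgentInteraction.EXE
// accepts; not its actual code. The first '=' splits key from value, so
// values containing ':' or further '=' characters survive intact.
public static class ArgParser
{
    public static Dictionary<string, string> Parse(string[] args)
    {
        var result = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        foreach (string arg in args)
        {
            int eq = arg.IndexOf('=');
            if (eq <= 0)
                throw new ArgumentException("Expected key=value, got: " + arg);
            result[arg.Substring(0, eq)] = arg.Substring(eq + 1);
        }
        return result;
    }
}
```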
    

    … and see your environment created in Lab Center from PowerShell:

    Kool!

    Automation Summary
    Assuming everything is in place, you should now be able to dynamically provision new virtual machines in VmWare, install the Test Controllers and Test Agents, register them with TFS and create a new Physical Environment on a whim. The key differences for this post are:

    
    # ... variables from previous stages
    
    # The Team Project Name we want to register the environment with and the environment name
    $TeamProjectName = "TestSample";
    $EnvironmentName = "My Brand New Environment";
    $MachineRole = "Database Server";
    
    # ...
    
    # STAGE 5: Create the Environment
    # It can take a while for the VM to spin up and the Test Controller + Test Agent to come online... so use the RETRYCOUNT here.
    Invoke-Expression "$VisualStudioGumpfUnc\AgentInteraction\AgentInteraction.EXE command=RegisterEnvironment testcontrolleruri=$($MachineName).$($DomainName):6901 TeamProjectName=`"$TeamProjectName`" EnvironmentName=`"$EnvironmentName`" Agents=`"$MachineName,$MachineRole`" RetryCount=10";
    
    

    As a final test, set it as a targetable environment in MTM and run your Tests:

    Mission Accomplished! However… there is an anomaly! When I went into the Test Controllers view and clicked a Test Controller, I noticed that duplicate Test Controller names appeared for those that were put on Virtual Machines that I had provisioned:

    At first I thought it was some SID issue with the cloned virtual machine but it appears to be down to a legacy DNS Issue:

    I was able to repeat this 100%. If you remove the legacy DNS Record (ie: for Moo1400 which is a virtual machine I purged) and restart MTM, the problem goes away (I wasn’t able to repeat it after this).

    So I guess you need to purge the DNS Record as well as the Computer from your Active Directory when you throw away an environment.

    L8r!
