HaveComputerWillCode.Com

Welcome!
Life is a Non-Deterministic Finite State Automaton
Automation ? (*pGeekiness)++ : Code Generation;

June 15, 2012

Automating the creation of standard environments using the VS2012 RC API (Update)

The full source code for this article can be downloaded here (V1.1). It is built against the 2012 RC version.

UPDATE 28-June-2012: The TF259637 error has been confirmed and it sounds like it will be fixed with better UI guidance in the next release. The sample code is still at V1.1; I will update it at RTM.

A very welcome feature in the new edition of VS2012 Lab Center is the concept of a ‘Standard Environment’. Within Lab Center, you can add any running machine to an environment and automatically push out the Test Agent, ready for distributing tests to it:

This means that VmWare, VirtualBox, Virtual PC and physical machines can easily be incorporated into your Lab Center environments. This post will show how to automate that feature using the (now public) API. No need to jump through hoops anymore as in VS2010!

However, before going on, you must at least be able to push these agents out manually using the Lab Center user interface so you know your infrastructure is set up correctly: ensure that your target machine has file sharing set up, that you have fixed this rather obscure registry setting if necessary, and that IPSec and the Firewall aren’t getting in the way. And lots more.

We will now write some code to create a new Standard Environment and push the Agent out ready to run tests:

It looks like all of the API calls to the Lab Service and supporting infrastructure are now public on MSDN. There is some very kool stuff in there!

Getting going
The first thing is to add a few references to your project: Microsoft.TeamFoundation.dll, Microsoft.TeamFoundation.Client.dll and Microsoft.TeamFoundation.Lab.Client.dll (search your machine for these):
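The snippets that follow assume the usual using directives – roughly these:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;                 // DialogResult
using Microsoft.TeamFoundation.Client;      // TeamProjectPicker, TfsTeamProjectCollection
using Microsoft.TeamFoundation.Lab.Client;  // LabService, LabEnvironmentDefinition, etc.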

Then it’s a simple case of connecting to a Team Project…

TeamProjectPicker picker = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);

DialogResult result = picker.ShowDialog();
if (result != System.Windows.Forms.DialogResult.OK) return;
if (picker.SelectedProjects.Length == 0) return;
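For the snippets that follow, ‘service’ is the Lab service pulled from whichever collection the picker returned – a minimal bit of glue (the variable names here are mine):

// Grab the Lab service and the selected project name from the picker.
TfsTeamProjectCollection collection = picker.SelectedTeamProjectCollection;
LabService service = collection.GetService<LabService>();
string ProjectName = picker.SelectedProjects[0].Name;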

… and making the following calls to create a new environment and register it with a test controller:

LabSystemDefinition single = new LabSystemDefinition("TheMachineNameYouWantToPushTheAgentsOutTo", "TheMachineNameYouWantToPushTheAgentsOutTo", "YourMachineRole");

LabEnvironmentDefinition definition = new LabEnvironmentDefinition("The Environment Name", "The Environment Description", new List<LabSystemDefinition>() { single });
definition.TestControllerName = "TheTestController:6901";

LabEnvironment newEnvironment = service.CreateLabEnvironment(ProjectName, definition, null, null);

There is then a nicely exposed ‘InstallTestAgent’ method that does exactly what it says on the tin:

// Download the source code to see how the credentials are set up for this call (process == null if you want to run the Test Agent as a service)
themachine.InstallTestAgent(admin, process);

THAT’S IT!:

The name ‘InstallTestAgent’ is a little misleading – it installs the Agent if it does not already exist and then reconfigures it.

Configuring the Agent to run tests interactively is similar: all we need to do is provide another set of credentials to run the Test Agent as on each end-point, and tell the Lab Environment which machine roles require an interactive agent so the deployed agents can be configured correctly. We do this prior to creating the environment; otherwise we would have to call LabService.UpdateLabEnvironment afterwards:

definition.CodedUIRole = p.MachineRoles;
definition.CodedUIUserName =  String.Format("{0}\\{1}", p.InteractiveCredentials.Domain, p.InteractiveCredentials.UserName);
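Putting those two properties together with the environment creation from earlier, you get something like this – the role and account below are placeholders of mine, and I am assuming (as the snippet above suggests) that CodedUIRole takes the role name:

// Placeholders: the role must match the one used in the LabSystemDefinition.
definition.CodedUIRole = "YourMachineRole";
definition.CodedUIUserName = "W732AGENTS\\Graham"; // the domain\user to run the interactive Agent as

LabEnvironment newEnvironment = service.CreateLabEnvironment(ProjectName, definition, null, null);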

IS THIS A BUG?
I had issues getting my Lab Center to push out a test agent across *Workgroups* to run interactively, even when driving the operation manually from the Lab Center UI (not the API): this happened from a completely fresh install or otherwise. After pushing out the Agent, rebooting and automatically logging in, the Lab Center UI would keep hitting me with error “TF259637: The test machine configuration does not match the configuration on the environment. The account…”.

The benefit of a public API is that it lets us investigate! The error appears to be in the way Lab Center stores and/or validates its LabEnvironment.CodedUIUserName and LabSystem.Configuration.ConfiguredUserName parameters when distributing a Test Agent across Workgroups. The LabEnvironment.CodedUIUserName was set to ‘W732AGENTS\Graham’ (the value I entered in the Lab Center UI, because that is what I want to run the Agent as on the end-point) whereas the LabSystem.Configuration.ConfiguredUserName property was set to .\Graham. Clearly a mismatch. To fix it, it seems all we need to do is sync the two.

I need to be clear [especially given how the above code snippet obviously creates this problem!]: the issue occurs when driving the Lab Center UI manually, so it is not specific to the API or this sample. For the sample, I have chosen to mimic (what I think is) the behaviour of the Lab Center UI.

I have posted an issue on Connect with more information to seek clarification – please see ID: 749436.

If deploying Agents to Workgroups, you might get the TF259637 error. I have left my source code passing parameters to the API the same way the Lab Center UI RC appears to (ie: without validation or guidance), so I attempt to detect the error and automatically fix it post-deployment. It actually makes things more robust anyhow. I will update my code to reflect Lab Center UI changes post-RC:

var theEnvironment = service.QueryLabEnvironments(new LabEnvironmentQuerySpec() { Project = ProjectName }).First(f => f.Name == p.EnvironmentName);
var theMachine = theEnvironment.LabSystems.First(f => f.Name == p.MachineName);

string testAgentRunningAs = theMachine.Configuration.ConfiguredUserName;
string environmentThinksTestAgentRunningAs = theEnvironment.CodedUIUserName;

if (String.Compare(testAgentRunningAs, environmentThinksTestAgentRunningAs, true) != 0)
{
    // Synchronize the user names... 
    service.UpdateLabEnvironment(theEnvironment.Uri, new LabEnvironmentUpdatePack() { CodedUIUserName = testAgentRunningAs });
}

You can also use that snippet to fix a manually-deployed environment that is broken with error TF259637.

Putting it all together, you can download the sample here:

It looks like there is a slightly more flexible way of doing the installation of the test agents using a combination of the TestAgentDeploy class and the AMLCommandBase-derived classes. But perhaps more on that some other time!

Enjoy!

May 6, 2012

Pronto v1.5 Released (Productivity Tool for Automating MTM Test Cases)

Filed under: ALM, Programming, Testing — admin @ 6:12 am

Pronto lets you create test stubs (including data binding and documentation) for your manual MTM Test Cases in C# or VB.Net by dragging those test cases onto your source file. You can then use Pronto’s Bulk Associated Automation Assistant to associate many of your test methods with your MTM test cases in one go. Uses might include: automating acceptance tests or creating Keyword/Action Word Frameworks. The application can be downloaded directly from Visual Studio Gallery here.

Changes to v1.5:

• All fragment generators are now freely editable T4 text files
• Create new generators and customize the fragments easily (docs and samples included)
• Fixed a few bugs

After downloading, unblocking the file (right click -> Properties), installing the VSIX and restarting Visual Studio, ensure that the Pronto window is visible:

Assuming you already have a WorkItem query in Team Explorer that returns Test Cases, just Drag and Drop that query onto the Pronto window to get a list of Test Cases:

To create your method stubs and help with documentation, either Drag ‘n’ Drop or Right Click/Copy the Test Cases and Shared Steps and paste directly into your Unit Test:

Notice how the method stub, data binding parameters, test steps and title are all generated for you automatically (this is customizable).
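The generated fragment is entirely template-driven, so yours will differ, but conceptually the pasted stub is along these lines (purely illustrative – the test case and names below are my invention, not Pronto’s literal output):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AcceptanceTests
{
	// Title, steps and data binding parameters are lifted from the MTM Test Case.
	[TestMethod]
	public void TestCase1234_UserCanLogIn()
	{
		// Step 1: Navigate to the login page
		// Step 2: Enter @username and @password
		// Expected: The dashboard is displayed
		Assert.Inconclusive("Stub for Test Case 1234 - automation pending.");
	}
}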

After building your solution manually, open up the Bulk Associated Automation Assistant (the “Sheep” icon). It will discover your tests and correlate the MTM Test Case ID with the WorkItem Id on your test method:

Now you can optionally associate them all in one go without leaving VS2010.

This application has been tested on Visual Studio 2010 Professional (first release) and Visual Studio 2010 Ultimate (SP1, FP2, Rollups). Providing your process template integrates with MTM from a Test Automation perspective – ie: supports ‘Associated Automation’, has an ‘Automated’ automation status and contains the ‘Shared Steps’ and ‘Test Case’ work item types – then in theory, depending on the alignment of the stars, the phase of the moon and the direction of the wind, Pronto should just work.

Enjoy!

Grey Ham

January 23, 2012

Adding ‘VERIFY’ to MsTest (‘ASSERT’-but-carry-on-if-it-fails)

Filed under: ALM, Programming, Testing — admin @ 7:23 am

Updated source code is here (2012-Jan-28: I forgot to include the Verify Exception in the original source :) )

If you’ve ever used the likes of GTEST for C++ unit testing, you will be familiar with its fatal and non-fatal assertion semantics (ASSERT_* and EXPECT_* – I’ll call the non-fatal flavour VERIFY here):

  • ASSERT – bail out of a test as soon as a condition is false (ie: NULL Pointer)
  • VERIFY – acknowledge that a condition is false but continue with the test anyway and get as far as you can. The test still technically failed but more information was gleaned.

Verify is ideal for functional / UI / UAT Automation because it lets the test get as far as it can and elicit as much information as possible before the test completes and a summary of failures is reported: it’s more useful for a developer to know that 5 numeric fields on a form are invalid instead of just one. A colleague pointed out recently that various UI Automation tools tend to implement similar semantics using ‘LogFail’ or similar statements – however, as a developer/tester I find the ‘Verify’ semantics more fitting, but they are not part of MsTest.

In this rather long post, I will put Verify into MsTest by wrapping Assert.AreEqual with Verify.AreEqual (for all samples here, this is just an ordinary VS2010 Unit Test). I will provide nothing but a bare-bones implementation here (and I’ve just noticed the code prettifier I use has messed up some of the snippets in this post… please see the source code above for the complete code).

When you do this:

	Assert.AreEqual(1, 0);

The unit test fails immediately. We want to do this:

	Verify.AreEqual(1, 0);

Where the failure is ‘noted’ but the test continues. When the test completes, if there were verification failures in the test, we need to throw an exception so that the unit testing framework designates the verification failures as a test failure. How to do this in MsTest? There are a few hurdles to cross!

Syntax
All Assert methods are static. Like so:

	Assert.AreEqual(...)

Assertions have no state – a failure is propagated to the test host immediately so static methods are a good fit. Verification failures on the other hand will ‘accumulate’ so we need to preserve state. For this post I have chosen to go the ‘instance’ route so here is a simple Verify class:

public class Verify
{
	public Verify()
	{
		Exceptions = new List<UnitTestAssertException>();
	}

	public void AreEqual(int left, int right)
	{
		...
	}

	public readonly List<UnitTestAssertException> Exceptions;
}

However: you can also implement the Verify methods using thread local storage and static methods (a sketch follows) but I am trying to keep this long post shorter!
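For the curious, a bare-bones sketch of that static flavour – the class and member names here are mine:

using System.Collections.Generic;
using System.Threading;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class StaticVerify
{
	// Each test thread gets its own failure list, so concurrent test runs don't collide.
	private static readonly ThreadLocal<List<UnitTestAssertException>> _failures =
		new ThreadLocal<List<UnitTestAssertException>>(() => new List<UnitTestAssertException>());

	public static List<UnitTestAssertException> Failures
	{
		get { return _failures.Value; }
	}

	public static void AreEqual(int left, int right)
	{
		try
		{
			Assert.AreEqual(left, right);
		}
		catch (UnitTestAssertException ex)
		{
			_failures.Value.Add(ex);
		}
	}
}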

The key is the implementation of the Verify methods: all an Assertion does is throw an exception when a condition is false so all we have to do is sink & record that exception by wrapping it with our Verify calls:

public class Verify
{
	....
	public void AreEqual(int left, int right)
	{
		try
		{
			Assert.AreEqual(left, right);
		}
		catch (UnitTestAssertException ex)
		{
			Exceptions.Add(ex);
		}
	}
}

If the assertion fails, we essentially ‘note’ the failure but continue. Putting it all together, we might have a test like this:

protected Verify Verify;

[TestInitialize]
public void Init()
{
	this.Verify = new Verify();
}

[TestMethod]
public void Pointless()
{
	Verify.AreEqual(1,2);
	Verify.AreEqual(3,3);
	Verify.AreEqual(3,4);
}

NOTE: Even though Verification violations occurred, as far as the Unit Testing framework is concerned the test technically passed – no exceptions were thrown by the Unit Test! So we need to check for verification violations in the Cleanup method and then throw our own Exception if Verifications were logged during the test.

When executing that test, we have two Verification errors. But what to do with them? If we are running Pointless with Associated Automation within MTM, we want the test to fail; MTM has no concept of a Warning or a Partial Failure. The test either passes or fails, so from MTM’s perspective unit tests should exhibit the same behavior. The Verifications are only useful for troubleshooting, logging and triage so they need to appear in the final log / TRX. If we are running the tests within Visual Studio as a Unit Test, we still need the test to fail for the same reason as above to integrate with the toolchain.

How to do this? The easiest place to look for any logged verification failures is in the [TestCleanup] method. If you throw an exception in TestCleanup, the exception/failure is still associated with the Unit Test that has just run (ie: the method containing the Verify methods):

[TestCleanup]
public void Cleanup()
{
	if(Verify.Exceptions.Count > 0)
	{
		throw Verify.Exceptions.First();
	}
}

The unfortunate side effect of this is that the exception/stack trace in the Test Results / TRX file looks like it came from Cleanup method and not the test itself. Clicking through takes you to the common Cleanup method which is kind of annoying:

But we can fix that.

CHECKPOINT: We can accumulate verification failures during a unit test and throw an exception in TestCleanup if any verification failures occurred. The exception we manually throw could contain a description of every verification failure encountered so far (for this post I am dealing with only the first exception and keeping message formatting as simple as possible).
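If you did want every failure in the message, something along these lines would do it – the formatting is my own, and AssertFailedException is the stock MsTest exception type:

[TestCleanup]
public void Cleanup()
{
	if (Verify.Exceptions.Count > 0)
	{
		// Roll all of the verification failures up into a single message (Select needs System.Linq).
		string all = String.Join("\r\n", Verify.Exceptions.Select(e => e.Message).ToArray());
		throw new AssertFailedException("Verification failures:\r\n" + all);
	}
}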

But what if the Unit Test contains ASSERTIONS *AND* Verifications? Like so:

[TestMethod]
public void Pointless()
{
	Verify.AreEqual(1,2);
	Assert.AreEqual(3,3);
	Verify.AreEqual(3,4);
}

I have decided that the Assertion gets ‘priority’ – it is that exception/assertion we want to propagate ‘out’ to the unit testing framework. We can determine if a ‘real’ Assertion or Exception was thrown in the Unit Test by looking at the CurrentTestOutcome property:

public TestContext TestContext { get; set; }

[TestCleanup]
public void Cleanup()
{
	if(TestContext.CurrentTestOutcome == UnitTestOutcome.Passed)
	{
		// If we only have Verify failures, as far as MsTest is concerned, the test will pass! So we need to spoof a failure...
		if(Verify.Exceptions.Count() > 0)
		{
			throw Verify.Exceptions.First();
		}
	}
}

Easy! So we can comfortably mix assertions and verifications in a single functional test and it will “just work” as far as the tool chain is concerned; if a real assertion fails, that is the exception that gets propagated. In C++/GTEST, an ASSERT is used to validate a pointer (little need to go on if it’s NULL…!) and VERIFY is then used for individual properties. In a functional test, an ASSERT might check for a key component of a page, and the VERIFY calls for individual fields, for example. It depends whether it fits what you are trying to do. Use your judgement. This will not be suitable in all circumstances.

Fixing the Stack
As stated, if we throw an exception from TestCleanup, the stack trace looks like this:

That’s not good enough! It shows the Cleanup method itself, not the actual line of code where the Verify call was made. Thanks to the .Net designers, this is easy to fix though :-) If you examine System.Exception, you can override two key properties: Message and StackTrace (and there’s a section for each in the TRX file). Yes – as you can override the stack trace text, you can ‘inject’ a stack trace into an exception and fool anything that interprets that exception about its source – such as the TRX viewer. And it’s easy to get the stack trace. Just do this:

	string stack = Environment.StackTrace;

Trivial! But we will be getting the stack trace in our Verify method… we need to ‘unwind’ a bit. To ‘pop’ a few lines we just do this:

string[] delims = new string[] { "\r\n" };
List<string> t = new List<string>(Environment.StackTrace.Split(delims, StringSplitOptions.None));

// 'Pop' a few lines
t.RemoveRange(0, 2);

// Reconstruct
string stack = String.Join("\r\n", t);

With this, we can ‘inject’ a stack trace into our exception. The only way I could find to do this is to create a custom exception class (gives us more flexibility…) and override its virtual StackTrace property. So it makes sense for our new exception to wrap the original exception and delegate every other call to it (where possible):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace HaveCompTest
{
    // Might as well derive from the unit test exception base classes... and you should probably use InnerException :-)
    public class MyVerifyWrapperException : UnitTestAssertException
    {
        // Pass in the original assertion exception
        public MyVerifyWrapperException(UnitTestAssertException utex, string spoofedStackTrace)
        {
            OriginalException = utex;
            SpoofedStackTrace = spoofedStackTrace;
        }

        public override System.Collections.IDictionary Data { get { return this.OriginalException.Data; }}
        public override string Message { get { return OriginalException.Message; }}
        public override string Source { get { return OriginalException.Source; }}

        public override string StackTrace { get { return SpoofedStackTrace; } }

        public readonly System.Exception OriginalException;
        public readonly string SpoofedStackTrace;
    }
}

Getting there! So now when we wrap our original Assert.AreEqual call with our Verify.AreEqual call, we can create our new exception type with the StackTrace we want:

try
{
	Assert.AreEqual(left, right);
}
catch(UnitTestAssertException ex)
{
	string[] delims = new string[] { "\r\n" };
	List<string> t = new List<string>(Environment.StackTrace.Split(delims, StringSplitOptions.None));

	// Choose how many lines to strip...
	t.RemoveRange(0, 3);
	string stack = String.Join("\r\n", t);

	// The stack trace now looks like it was thrown directly within the test method itself instead of here.
	MyVerifyWrapperException e = new MyVerifyWrapperException(ex, stack);
	_Exceptions.Add(e);
}

KOOL! So now when we throw our verify exception in [TestCleanup] like so, in the TRX viewer we see this:

Clicking through takes us straight to the location of the Verify failure.

DONE!

Putting it all together

The source code for a simple skeleton class can be found here (just add it to an ordinary MsTest Unit Test).

Tips

You’ll need to wrap all the Assert.XXX calls… a delegate is your friend when doing this… () => { Assert.AreEqual(…) } – see the sketch after these tips…

The error message says ‘Assert.’ in the TRX. Modify the Message in MyVerifyWrapperException to say Verify…

You might want a clickthrough to all Verifications in the Stack Trace view…

If mixed Assertion / Verification failures occur within a test, you might still want the Verifications to show up in the TRX…

Use Thread Local Storage to implement static Verify syntax so they are similar to Assert…
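For that first tip, here is a bare-bones sketch of the delegate trick – these would live inside the Verify class, and the helper name is mine:

// Funnel every wrapped assertion through a single sink instead of a try/catch per method.
private void Capture(Action assertion)
{
	try
	{
		assertion();
	}
	catch (UnitTestAssertException ex)
	{
		Exceptions.Add(ex);
	}
}

// Every wrapper then becomes a one-liner:
public void AreEqual(int left, int right) { Capture(() => Assert.AreEqual(left, right)); }
public void IsTrue(bool condition) { Capture(() => Assert.IsTrue(condition)); }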


March 25, 2011

Automating the Integration of VmWare with Microsoft Test Manager and Lab Center: Part 6 – Changes for Visual Studio 2010 Service Pack 1

PLEASE NOTE: This is for Visual Studio 2010. For the VS2012 version, please click here.

Mid-way through the series, Visual Studio 2010 Service Pack 1 was released. How amusing! So this is an update to incorporate the Service Pack 1 changes.

See Part 5 for the Source Code and scripts.

DISCLAIMER!

Do not use this code under any circumstances (should just about cover the possibilities!).

I am using an undocumented API in order to construct the Physical Environment in Lab Center and set up the Test Controller topology. I have tested the registered environments using MTM, use them often and have come to no grief. My Lab Center and TFS system appear to be stable. But you use this at your own risk! At the very least, it would be sensible to do a full backup of your TFS installation and ideally test this prior to production deployment. Use at your own risk :)

Parts 1, 2, 4 and 5 are the same: nothing changes there. The only changes you will need to make are to the installation automation in Part 3.

I will not be providing an updated script to do this but if you have been following the series and want to stick to the same structure, you need to make Service Pack 1 available under the VisualStudioGumpf directory by unpacking your ISO there:

You will probably also need to create a new BAT file to launch “setup.exe /passive” from the Service Pack 1 location. Drop this into your Golden VM at the usual place:

And then write a new function to launch that from PowerShell, called like so:

InstallServicePack1 $VmWareConnectionParameters $VmWareClonedImageLocation "$DomainName\$DomainUsername" $DomainPassword $VisualStudioGumpfUnc;

Troubleshooting
If you get problems – try it manually first! The only part where I do anything undocumented is to create the Physical Environment. If you happen to get a situation where you can do this registration process manually, but not automatically, please let me know so that I can fix it :-)

Source Code Changes
I have no idea how many lines of white powder I had up my nose when I wrote this comment:

// I am not going to check this here but only one machine in an environment can be of a given role. 
Dictionary agentsToAdd = new Dictionary();
Dictionary machineRoleInfo = new Dictionary();

But it is clearly wrong!

Apart from having to install Service Pack 1, I haven’t had to make any changes: the environment still gets created and all appears normal. You should be able to target your created environment from MTM:

And run your Unit Tests, Integration Tests and CodedUI Tests on it:

Tchau!
