HaveComputerWillCode.Com

Welcome!
Life is a Non-Deterministic Finite State Automaton
Automation ? (*pGeekiness)++ : Code Generation;

January 28, 2015

User Account Control Automation Assistant [UacAA] open sourced on Github

Filed under: Automation,BrekIT,Programming,Testing — admin @ 7:41 am

In November 2011, I released UacAA, which allowed any COM-friendly language (such as C# or VBScript) to automate the User Account Control dialog boxes on either the user or the secure desktop.

I thought I had lost the source code in an SSD crash until I found it a week or so ago. Given I’ve had over two dozen requests for the code, I have released it on Github.

The source code and links to the binaries are available here.

August 10, 2014

Moksy v1.0 released (Web Service Faking/Stubbing Framework with real HTTP Endpoint)

Filed under: Automation,Programming,Testing — admin @ 12:36 am

Moksy v1.0
Moksy is an open source .Net library for stubbing, mocking and simulating web services (Github, NuGet, License).

Intended to be driven from MsTest (or your favorite testing framework), Moksy will create a real HTTP Server end-point that your system under test or other services can hit.

For example:

	// Start a real HTTP server end-point on port 10011
	Moksy.Common.Proxy proxy = new Moksy.Common.Proxy(10011);
	proxy.Start();

	// Any GET on /TheEndpoint returns "Hello World!" with 200 OK
	var simulation = SimulationFactory.When.I.Get().From("/TheEndpoint").Then.Return.Body("Hello World!").And.StatusCode(System.Net.HttpStatusCode.OK);
	proxy.Add(simulation);

Navigating to http://localhost:10011/TheEndpoint in your browser, or hitting that URL from another service, will return “Hello World!”.

Key features of Moksy include:

  • Easy-to-use fluent API for specifying conditions and responses (known as “Simulations”)
  • Ideal for stubbing services that are incomplete, unreliable or not yet available for testing, and for removing service development from your critical path
  • Moksy can be deployed to your test environment and started from the command line
  • A convenient way to inject faults into your system (see the sketch after this list)
  • A dynamic "In Memory Database" is supported for JSON objects: just specify a key on the JSON structure and Moksy will support CRUD operations immediately
  • Conditions can include URLs, headers, property constraints (experimental) and the existence of objects in the database
  • Responses can include the response body, headers, objects and mutated objects
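
For example, injecting a fault reuses the same fluent chain as the snippet above with nothing more than a different status code; the endpoint and message below are illustrative:

    // Simulate an outage: any GET on /TheEndpoint now returns 503 Service Unavailable
    var fault = SimulationFactory.When.I.Get().From("/TheEndpoint").Then.Return.Body("Service down for maintenance").And.StatusCode(System.Net.HttpStatusCode.ServiceUnavailable);
    proxy.Add(fault);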

BrekIT recommends RestSharp and JSON.Net for writing your integration tests. Please see Github for more information and examples.

Integrating Moksy into MsTest (or any framework)
Moksy can be added to your test projects via NuGet:

    Install-Package "Moksy"

Create a Unit Test containing the above simulation and you are up and running!
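
As a minimal sketch, such a test might look like the following. It reuses only the Moksy calls shown at the start of this post; the class and method names are illustrative, and I am assuming SimulationFactory lives in Moksy.Common alongside Proxy:

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Moksy.Common;

    [TestClass]
    public class TheEndpointTests
    {
        [TestMethod]
        public void GetFromTheEndpointReturnsHelloWorld()
        {
            // Stand up the fake service and the simulation from the example above
            var proxy = new Proxy(10011);
            proxy.Start();
            proxy.Add(SimulationFactory.When.I.Get().From("/TheEndpoint").Then.Return.Body("Hello World!").And.StatusCode(System.Net.HttpStatusCode.OK));

            // Hit the real HTTP end-point exactly as any other client would
            using (var client = new System.Net.Http.HttpClient())
            {
                var body = client.GetStringAsync("http://localhost:10011/TheEndpoint").Result;
                Assert.AreEqual("Hello World!", body);
            }
        }
    }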

Github Repository
Moksy is 100% open source and can be downloaded from the Github repository here.

More Information
Please see the Github repository for more up-to-date information and examples on how to use Moksy in your testing.

June 1, 2014

Triksy v1.0 released on Github (PowerShell advanced functions for processing MsTest TRX results)

Filed under: Automation,BrekIT,Programming,Testing — admin @ 12:46 am

Triksy v1.0 has been released. It is open source and can be downloaded from the Github Repository.

What is it?
Triksy is intended to help you query, aggregate and reshape MsTest TRX files to simplify working with other test management tools or custom reporting solutions.

And it was a cool way to learn PowerShell Advanced Functions and Pester (used for testing) :-)

Examples
The following section shows some examples of using Triksy (the Github repository contains all of the test data to produce these results – just run these commands from the Functions\TestData folder).

To return the test result from every TRX file:

Get-ChildItem "*.trx" -Recurse | Get-Trx | Get-TrxResult | Format-Table -Property TestName,Outcome,Path -Autosize

Sample output is:

TestName                   Outcome      Path
--------                   -------      ----
WorkItemPassesAndFailsPass Passed       F:\Github\Triksy\Functions\TestData\Valid\MsTestAttributes.trx
AlwaysPass                 Passed       F:\Github\Triksy\Functions\TestData\Valid\MsTestSingleResult.trx
AlwaysPass                 Passed       F:\Github\Triksy\Functions\TestData\Valid\MsTestSummary1.trx
AlwaysPass                 Passed       F:\Github\Triksy\Functions\TestData\Valid\MsTestSummary2.trx
AlwaysPass                 Passed       F:\Github\Triksy\Functions\TestData\Valid\MsTestTwoResults.trx
... etc

To get a summary of all tests run (passed, failed, inconclusive etc) in each TRX file, execute:

Get-ChildItem *.trx -Recurse | Get-TrxSummary | Format-Table -Property Valid,Outcome,Total,Executed,Passed,Error,Failed,Timeout,Aborted,Inconclusive,Path -Autosize

The sample output might look like this:

Valid Outcome   Total Executed Passed Error Failed Timeout Aborted Inconclusive Path
----- -------   ----- -------- ------ ----- ------ ------- ------- ------------ ----
False                                                                           F:\Github\Triksy\Functions\TestData\...
 True Failed    16    16       12     0     3      0       0       1            F:\Github\Triksy\Functions\TestData\...
 True Completed 0     0        0      0     0      0       0       0            F:\Github\Triksy\Functions\TestData\...
 True Completed 20    19       18     17    16     15      14      13           F:\Github\Triksy\Functions\TestData\...
 True Completed 20    19       18     17    16     15      14      13           F:\Github\Triksy\Functions\TestData\...
 True Completed 120   119      118    117   116    115     114     113          F:\Github\Triksy\Functions\TestData\...
 True Failed    2     2        1      0     1      0       0       0            F:\Github\Triksy\Functions\TestData\...

It is often useful to aggregate results across many files – for example, adding the -Aggregate switch to the above command line will sum the totals across every TRX file:

Get-ChildItem *.trx -Recurse | Get-TrxSummary -Aggregate | Format-Table -Property Valid,Total,Executed,Passed,Error,Failed,Timeout,Aborted,Inconclusive -Autosize

This will produce a single line of output:

Valid Total Executed Passed Error Failed Timeout Aborted Inconclusive
----- ----- -------- ------ ----- ------ ------- ------- ------------
    6   178      175    167   151    152     145     142          140

The Workitem Attribute
A very useful feature of the MsTest framework is the [Workitem(id)] attribute you can add to your test methods – it is a way of correlating a particular test with a test case or user story in your ALM or test management tool of choice. The same Workitem id can be used on multiple test methods, so you typically want to know whether all of the tests associated with a particular workitem passed. The Workitem attribute is output in the TRX file by MsTest, so it is just a scripting effort to correlate the result of each unit test with the associated work item.
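
For example, two MsTest methods covering the same user story might be tagged like this (note the attribute is spelled WorkItem in the MsTest namespace; the id and test names below are illustrative):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class OrderTests
    {
        // Both tests cover work item 1000; 1000 only counts as Passed
        // if every test carrying that id passes.
        [TestMethod, WorkItem(1000)]
        public void OrderTotalIsCalculated() { /* ... */ }

        [TestMethod, WorkItem(1000)]
        public void OrderTotalIncludesTax() { /* ... */ }
    }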

Triksy provides a cmdlet, Get-TrxWorkitemSummary, to do this aggregation. If every test associated with the same Workitem id passes, the result is Passed. If any of the results is not Passed (Error, Failed, Aborted, Inconclusive etc), the overall status of that Workitem is Failed:

Get-ChildItem *.trx -Recurse | Get-Trx | Get-TrxResult | Get-TrxWorkitemSummary

This will produce a single Pass/Fail for each Workitem:

Workitem                                                    Outcome
--------                                                    -------
4000                                                        Failed
3000                                                        Failed
1000                                                        Passed
1001                                                        Passed
2000                                                        Passed
2001                                                        Failed
2002                                                        Failed

It is sometimes useful to associate all unit tests that do NOT have a [Workitem] attribute with the same Workitem as a catch-all. To do this, provide the -DefaultWorkitems “ID” parameter to Get-TrxWorkitemSummary.

Grey Ham

June 15, 2012

Automating the creation of standard environments using the VS2012 RC API (Update)

The full source code for this article can be downloaded here (V1.1). It is built against the VS2012 RC.

UPDATE 28-June-2012: The TF259637 error has been confirmed and sounds like it will be fixed with better UI guidance in the next release. The sample code is still at V1.1 and I will update my code at RTM.

A very welcome feature in the new edition of VS2012 Lab Center is the concept of a ‘Standard Environment’. Within Lab Center, you can add any running machine to an environment and automatically push the Test Agent out to it, ready for distributing tests.

This means that VMware, VirtualBox, Virtual PC and physical machines can easily be incorporated into your Lab Center environments. This post will show how to automate that feature using the (now public) API. No need to jump through hoops anymore like in VS2010!

However, before going on, you must at least be able to push these agents out manually using the Lab Center user interface so you know your infrastructure is set up for this: ensure that your target machine has file sharing enabled, that you have fixed this rather obscure registry setting if necessary, and that IPSec and the firewall aren’t getting in the way. And lots more.

We will now write some code to create a new Standard Environment and push the Agent out ready to run tests.

It looks like all of the API calls to the Lab Service and supporting infrastructure are now public in MSDN. There is some very kool stuff in there!

Getting going
The first thing is to add a few references to your project: Microsoft.TeamFoundation.dll, Microsoft.TeamFoundation.Client.dll and Microsoft.TeamFoundation.Lab.Client.dll (search your machine for these).

Then it’s a simple case of connecting to a Team Project…

// Let the user pick a single Team Project to host the new environment
TeamProjectPicker picker = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);

DialogResult result = picker.ShowDialog();
if (result != System.Windows.Forms.DialogResult.OK) return;
if (picker.SelectedProjects.Length == 0) return;

… and making the following calls to create a new environment and register it with a test controller:

// The machine to push the Agent to (the name is used twice) and the role it will play
LabSystemDefinition single = new LabSystemDefinition("TheMachineNameYouWantToPushTheAgentsOutTo", "TheMachineNameYouWantToPushTheAgentsOutTo", "YourMachineRole");

// Define the environment and the test controller to register it with
LabEnvironmentDefinition definition = new LabEnvironmentDefinition("The Environment Name", "The Environment Description", new List<LabSystemDefinition>() { single });
definition.TestControllerName = "TheTestController:6901";

LabEnvironment newEnvironment = service.CreateLabEnvironment(ProjectName, definition, null, null);
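
The service and ProjectName used in that last call are not defined in the snippet; a minimal sketch of obtaining them, assuming the LabService comes from the collection chosen in the picker, is:

    // Get the Lab Management service and the project name from the picker above
    LabService service = picker.SelectedTeamProjectCollection.GetService<LabService>();
    string ProjectName = picker.SelectedProjects[0].Name;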

There is then a nicely exposed ‘InstallTestAgent’ method that does exactly what it says on the tin:

// Download the source code to see how the credentials are set up for this call
// (process == null if you want to run the Test Agent as a service)
theMachine.InstallTestAgent(admin, process);

THAT’S IT!

The name ‘InstallTestAgent’ is a little misleading – it installs the Agent if it does not already exist and then reconfigures it.

Configuring the Agent to run tests interactively is similar: all we need to do is provide another set of credentials to run the Test Agent as on each end-point, and tell the Lab Environment which machine roles require an interactive agent so the deployed agents can be configured correctly. We do this before creating the environment; otherwise we would have to call LabService.UpdateLabEnvironment afterwards:

// Which machine roles need an interactive (Coded UI) agent, and the account to run it as
definition.CodedUIRole = p.MachineRoles;
definition.CodedUIUserName = String.Format("{0}\\{1}", p.InteractiveCredentials.Domain, p.InteractiveCredentials.UserName);

IS THIS A BUG?
I had issues getting my Lab Center to push out a test agent across *Workgroups* to run interactively, even when I drove the operation manually from the Lab Center UI (not the API); this happened from a completely fresh install or otherwise. After pushing out the Agent, rebooting and automatically logging in, the Lab Center UI would keep hitting me with error “TF259637: The test machine configuration does not match the configuration on the environment. The account…”.

The benefit of a public API is that it lets us investigate! The error appears to be in the way Lab Center stores and/or validates its LabEnvironment.CodedUIUserName and LabSystem.Configuration.ConfiguredUserName parameters when distributing a Test Agent across Workgroups. The LabEnvironment.CodedUIUserName was set to ‘W732AGENTS\Graham’ (the value I entered in the Lab Center UI, because that is what I want to run the Agent as on the end-point) whereas the LabSystem.Configuration.ConfiguredUserName property was set to .\Graham. Clearly a mismatch. To fix it, it seems all we need to do is sync the two.

I need to be clear [especially given how the above code snippet obviously creates this problem!]: the issue occurs when driving the Lab Center UI manually, so it is not specific to the API or this sample. For the sample, I have chosen to mimic (what I think is) the behaviour of the Lab Center UI.

I have posted an issue on Connect with more information to seek clarification – please see ID 749436.

If deploying Agents to Workgroups, you might get the TF259637 error. I have left my source code passing parameters to the API the same way that the Lab Center UI in the RC appears to (i.e. without validation or guidance), so I attempt to detect the error and automatically fix it post-deployment. It actually makes things more robust anyhow. I will update my code to reflect Lab Center UI changes post-RC:

// Look up the environment and the machine we just created
var theEnvironment = service.QueryLabEnvironments(new LabEnvironmentQuerySpec() { Project = ProjectName }).First(f => f.Name == p.EnvironmentName);
var theMachine = theEnvironment.LabSystems.First(f => f.Name == p.MachineName);

// Compare the account the agent is actually configured as with the one the environment expects
string testAgentRunningAs = theMachine.Configuration.ConfiguredUserName;
string environmentThinksTestAgentRunningAs = theEnvironment.CodedUIUserName;

if (String.Compare(testAgentRunningAs, environmentThinksTestAgentRunningAs, true) != 0)
{
    // Synchronize the user names... 
    service.UpdateLabEnvironment(theEnvironment.Uri, new LabEnvironmentUpdatePack() { CodedUIUserName = testAgentRunningAs });
}

You can also use that snippet to fix a manually-deployed environment that is broken with error TF259637.

Putting it all together, you can download the full sample here.

It looks like there is a slightly more flexible way of doing the installation of the test agents using a combination of the TestAgentDeploy class and the AMLCommandBase-derived classes. But perhaps more on that some other time!

Enjoy!
