HaveComputerWillCode.Com

Welcome!
Life is a Non-Deterministic Finite State Automata
Automation ? (*pGeekiness)++ : Code Generation;

December 5, 2010

T4 Performance: From TextTemplatingHost to Preprocessed Templates

Filed under: Performance, Programming — admin @ 7:44 am

I’ve been moving my code generators and infrastructure from Visual Studio 2008 to Visual Studio 2010 and making use of the new ‘Preprocessed T4 Templates’ feature. There were no metrics out there that I could find, so here are mine.

In Visual Studio 2008, I had my own host that loaded the TT files as they were required, used Microsoft’s Text Templating Engine to parse them on the fly, and then generated my code. Performance was pretty good, but it meant parsing the TT files every time: 7.5 seconds for 140+ files is admirable!

In 2010, I’ve moved over to Preprocessed Templates and the performance has improved threefold:

In all, there’s about 2 megabytes of generated C# code from the above generation job – in about 2.5 seconds. Preprocessed Templates are clearly the way to go!

My motivation for doing this isn’t speed, though. As much as I would like to ship raw T4 templates with my application, it would appear it is not legal to distribute the Text Templating Engine along with your product. This means that unless you are targeting the Visual Studio development community, you cannot rely on the TextTemplating DLLs being there. So you have to preprocess your templates and distribute a compiled version of those instead. The Engine comes with Visual Studio and various add-ons (in the Microsoft.VisualStudio.TextTemplating.* DLLs), so developers don’t notice it’s missing until they distribute their application :-) It would be awesome if Microsoft could push out the T4 Engine as part of a regular update because it’s a mighty useful piece of kit.

The only way around this at the moment (apart from breaking the law or arranging something with the doods in Redmond) is to distribute the C# or VB.Net project containing the T4 files so your customers can regenerate them onsite if they need to modify the output. Or use the Open Source / reverse engineered version mentioned on the Stackoverflow link above. I don’t think either is ideal, but it seems to be the best that can be done at the moment. I would love to hear otherwise!

L8r!

February 16, 2010

Writing a Macro Recorder: Part 5 – Generating VBScript / JScript code and running it as a macro in your app

Filed under: Programming — admin @ 8:28 am

Part 1 (C#)- Part 2 (Testing C#)- Part 3 (PowerShell)- Part 4 (C++)

The source code for Part 5 can be downloaded here (Visual Studio 2008). Download and use at your own risk.
NUnit v2.5.0.9122 can be found here.
The reference in the test harness is to PowerShell 2.0 CTP2. Modify the reference – System.Management.Automation – and point it to your PowerShell; remove any lines that won’t compile :)
I also added a reference to stdole: change the reference to the version on your machine

This will wrap up the last of the ‘styles’ of languages I need to test. So far, I’ve covered C# (Managed), PowerShell (Scriptable Managed), C++ (Native) and now we need to cover ‘native scripting’: VBScript. What I write here applies equally well to JScript, Perl and so forth. In fact, it was so easy to add JScript I did :)

A few words about the test harnesses
First up, a few things: it is really the tests that do the ‘clever stuff’ – hosting PowerShell or VBScript, compiling C# code, or launching Visual Studio. I haven’t talked about them much because the work resides in just two methods: Compile and Execute. Each language in the test harness implements those two methods, so the clever stuff is easy to find and well isolated.

When you launch NUnit, you will see this:

All of the tests reside in TestingBase and each language derives from TestingBase. TestingBase uses abstract methods – such as Compile, GetGenerator and Execute – to obtain the correct engine and generator to use from each specialized class. The tests are structured in such a way that the specialized language can do whatever it likes, in any way it likes, providing it returns an IBindingList at the end. This means it is easy to ‘plug in’ new languages and get full coverage of all previous tests with that new language.
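The TestingBase arrangement is a classic template-method hierarchy. Here is a sketch of its shape – in portable C++ rather than the harness’s C#, and with the result type reduced to a string stand-in for IBindingList – so the names (Compile, Execute, DoATest) mirror the post but the bodies are mine:

```cpp
#include <string>

// Sketch of the TestingBase pattern: the base class owns the tests and calls
// abstract hooks; each language derives and fills in Compile/Execute.
class TestingBase
{
public:
    virtual ~TestingBase() {}

    // The test: compile the generated code, execute it, inspect the result.
    std::string DoATest(const std::string& generatedCode)
    {
        Compile(generatedCode);
        return Execute(); // the real harness returns an IBindingList here
    }

protected:
    virtual void Compile(const std::string& code) = 0;
    virtual std::string Execute() = 0;
};

// 'Plugging in' a new language is just deriving and filling in the hooks.
class FakeLanguage : public TestingBase
{
protected:
    void Compile(const std::string& code) override { m_code = code; }
    std::string Execute() override { return "ran: " + m_code; }
private:
    std::string m_code;
};
```

Because the base class never knows how a language compiles or runs, every existing test automatically covers each new language you derive.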
Back to it…
I am going to test this by hosting VBScript inside my .Net Test Harness and running the generated code. Like in Part 3: PowerShell, we’ll be generating and running real macros here. The generated code will talk to the host. Of course, you could create a .VBS file from the generated code and run that from the command line to test it (without host context); that works too and it’s what was done in Part 4: C++. At this point, we have a rich toolkit of approaches we can reuse for testing generated code. As this is about macros, though: I’ll test the generated code as a macro by hosting the VBScript Engine and executing code inside my application.

I will stick with the same theme throughout. I have added a new tab to the User Interface and I’ve added a new generator that outputs VBScript Code in a way that will work within my Test Harness host:

Recall from the earlier parts: the GridView is bound to an IBindingList. The user drives the GridView which drives the IBindingList; an IBindingList.ListChanged listener forwards the event to the macro recorder which generates the code.
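That pipeline – list fires a change, a handler forwards it, the recorder appends generated code – can be sketched in portable C++ with the C# event replaced by a plain callback. All names here are illustrative stand-ins for BindingList.ListChanged and the Recorder:

```cpp
#include <functional>
#include <string>
#include <vector>

struct Person { std::string Name; int Age; };

// Stand-in for the IBindingList: raises a notification when an item is added.
class ObservableList
{
public:
    std::function<void(const Person&)> OnItemAdded; // ~ the ListChanged event

    void Add(const Person& p)
    {
        m_items.push_back(p);
        if (OnItemAdded) OnItemAdded(p); // the event the form forwards
    }
private:
    std::vector<Person> m_items;
};

// Stand-in for the macro recorder: turns each change into generated code.
class Recorder
{
public:
    std::string Text; // what has been generated so far

    void Record(const Person& p)
    {
        Text += "thePerson.Name = \"" + p.Name + "\"\n";
    }
};
```

Wiring the two together is exactly what the form’s ListChanged handler does: `list.OnItemAdded = [&rec](const Person& p) { rec.Record(p); };`.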

And the single test I have now in TestHarness/TestingBase.cs should look familiar :-)

            // We've set up the handler that will forward changes to the macro recorder...
            // by driving the list directly, we view Recorder.Instance.Generators[0].Text
            // to see what has been generated so far. 
people.ListChanged += people_ListChanged;

Person woo = new Person();
woo.Name = "WOO";
woo.Age = 33;
people.Add(woo);

Person hoo = new Person();
hoo.Name = "HOO";
hoo.Age = 50;
people.Add(hoo);

people.Remove(woo);

If you step through that code, by the time you get to the end the VBScriptGenerator.Text property will look like this:

    ' We do not create theBindingList1. It will be passed in from the host. 

    Dim thePerson1
    Set thePerson1 = Manufacture("MacroSample.Person")

    thePerson1.Name = "WOO"
    thePerson1.SetAge(33)
    theBindingList1.Add(thePerson1)

    Dim thePerson2
    Set thePerson2 = Manufacture("MacroSample.Person")

    thePerson2.Name = "HOO"
    thePerson2.SetAge(50)
    theBindingList1.Add(thePerson2)

    theBindingList1.RemoveAt(0)

Unlike C++, PowerShell and C#, there is no template we need to ‘wrap’ the generated code in to make it runnable. It will compile and run as straight text.

Fascinating. How do we test it?

First some background
VBScript – and JScript, Perl, Ruby and all the other ‘COM’ scripting languages on Windows – are just ordinary COM objects with their own ProgIDs. In fact, the language name usually *IS* the ProgID! Surprised? Look at HKEY_CLASSES_ROOT and see for yourself :-) If you want more convincing, do this from your VBScript code:

    Dim j
 
    Set j = CreateObject("JScript")

You’ve just created the JScript engine from VBScript! Of course, the interfaces on the created object – IActiveScript etc. – do not derive from IDispatch so we can’t run JScript code from here.

Which is a shame. That would have made me look smart. And that was a digression. So let’s get back to it.

When you see a scriptlet – in an HTML page, for example – wrapped with something that says language=”VBScript” or language=”JScript”, the host – perhaps Internet Explorer, or our Test Harness in this case – creates an instance of that language engine and makes a few calls on the main engine interface. The interface exposed on all language engines is ‘IActiveScript’. The host needs to make the engine ready to execute scripts on its behalf. In particular, the engine needs to know which window handle it should use when a modal dialog box is displayed (ie: when we run MsgBox “WOO” in VBScript); what methods and properties from the host might be exposed to the script; and how to tell the host the script contains errors.

To do this, it uses a simple callback mechanism. We (as the host) implement an interface called IActiveScriptSite which exposes a whole bunch of methods and properties – and which, fortunately, Dr Dobbs had provided the Interop signatures for. We tell the engine which callback interface (the ‘scripting site’) to use by calling IActiveScript.SetScriptSite on the engine and passing in ourselves as the site. The engine can then call back on that interface at various times to tell us things like its state, whether the script has finished, and to ask us for objects referenced in the script.
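The shape of that handshake is easy to see outside COM. Here is a hedged, portable sketch – the interface and class names below are hypothetical stand-ins for IActiveScript/IActiveScriptSite, not the real Windows interfaces – showing the engine calling back into the host’s site to resolve names it doesn’t know:

```cpp
#include <map>
#include <string>

// Stand-in for IActiveScriptSite: the host implements this.
struct IScriptSite
{
    virtual ~IScriptSite() {}
    // The engine calls back here to resolve names the language doesn't know.
    virtual void* GetNamedItem(const std::string& name) = 0;
    virtual void OnScriptError(const std::string& message) = 0;
};

// Stand-in for the engine side of IActiveScript.
class ScriptEngine
{
public:
    void SetScriptSite(IScriptSite* site) { m_site = site; }

    // While running a script, an unknown name triggers the site callback.
    void* Resolve(const std::string& name)
    {
        return m_site ? m_site->GetNamedItem(name) : nullptr;
    }
private:
    IScriptSite* m_site = nullptr;
};

// Stand-in for our Test Harness host.
class TestHost : public IScriptSite
{
public:
    std::map<std::string, void*> NamedItems;

    void* GetNamedItem(const std::string& name) override
    {
        std::map<std::string, void*>::iterator it = NamedItems.find(name);
        return it == NamedItems.end() ? nullptr : it->second;
    }
    void OnScriptError(const std::string&) override {}
};
```

The real interfaces push IDispatch pointers and HRESULTs around, but the control flow – engine asks site, site answers from its named-item table – is exactly this.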

If our VBScript code contains this:

    MsgBox Woo.Hoo

It is fair to say that ‘Woo’ is not part of the VBScript Specification. The engine will ask the host to return the IDispatch* of Woo. Or, in the case of C#, an object reference whose class is attributed with [ComVisible(true)].

It’s dead, dead easy.

Something that caught me out in the old days: when you run JavaScript in Internet Explorer and use ‘alert’, a dialog box is displayed to annoy the user. But when you run JavaScript outside of Internet Explorer and use ‘alert’, you get an error. Why? When Internet Explorer hosts JScript, it exposes a method called ‘alert’ that scripts can use, via its own implementation of IActiveScriptSite. If you want JScript code to work in your app, and you want an alert method exposed, you have to implement it yourself. MsgBox, in VBScript, is part of the language, so we don’t have this problem.

We will expose some host-specific methods to VBScript and generate code to use them.

Let’s do it
Now that that’s in place, I can wander through the code that gets generated in the UI. As I’m generating VBScript code, and I’ll be dealing with the Person object that is known about only within my C# host, I cannot create that object in VBScript: ‘Person’ has no ProgID. Now, I *COULD* use the various attributes in System.Runtime.InteropServices so that my Person class is exposed to COM with a ProgId, and is creatable, but I won’t do that. Instead, I will have VBScript ask the host (ie: the test harness) to manufacture objects on my behalf by type name. ie:

    Dim t

    Set t = Manufacture("Some.Class")

In my host – I have provided a shell IActiveScriptSite implementation in the Test Harness that you can use – I implement a method that simply instantiates an object of that type. Everything I expose to script is wrapped up in a single class called ScriptVisibleHostProperties:

// ScriptVisibleHostProperties.cs
//
// Comments removed. 
//
    [ComVisible(true)]
    public class ScriptVisibleHostProperties
    {
        // This is the property our macro code expects to be around when it is run. 
        public object theBindingList1
        {
            get
            {
                return m_theBindingList1;
            }
            set
            {
                m_theBindingList1 = value;
            }
        }

        protected object m_theBindingList1;

        public object Manufacture(string typeName)
        {
            // Just create a brand new object of the specified type. 
            // I will look in the Assembly that contains our stuff.
            object t = typeof(MacroSample.Recorder).Assembly.GetType(typeName).InvokeMember("", BindingFlags.CreateInstance, null, null, null);

            return t;
        }
    }
}

From my VBScript code I can now call ‘Manufacture’ and access a property called ‘theBindingList1’. ie: the same binding list the macro code assumed would be around when it was generated.

When it comes to testing this – see TestHarness/ComScriptingBase.cs – the Compile stage sets up the IActiveScript (for the language we want) ready to execute the code; the Execute method just runs the script by setting the scripting engine to the ‘Connected’ state (bizarre, but there you go). It then returns the object that was returned by the VBScript Engine:

// VBScript.cs::Execute
//
MyPeopleCollection macroPeople = new MyPeopleCollection();

ScriptVisibleHostProperties props = theHost.NamedItems["MyHostProperties"] as ScriptVisibleHostProperties;
Assert.AreNotEqual(null, props);

props.theBindingList1 = macroPeople;

// Setting it to State 2 (CONNECTED) actually sets the engine running.
theHost.Engine.SetScriptState(2);

return macroPeople.TheRealBindingList;

I’ll explain why I do not pass a BindingList collection directly into the script later and all the nuances in the generated code.

The important thing is: from this method, we return theBindingList1 (the object the macro code expected to be around) after it has been passed into the VBScript engine and the macro code executed. The state of the object returned should be exactly the same as the one we built up in our TestingBase.cs test.

Super. Duper. Cool :)

Gotchas
Quite a few actually. Let’s start at the beginning :-)

When I first started, I got this error:

TestHarness.Language.VBScript.DoATest:
System.Runtime.InteropServices.COMException : Exception from HRESULT: 0x800A000D

‘Type Mismatch’. I was exposing my ScriptVisibleHostProperties class to the VBScript engine, but I had not marked it with the ComVisible(true) attribute:

    [ComVisible(true)]
    public class ScriptVisibleHostProperties
    {

Once I got past this point, the ActiveScriptingHost.OnScriptError method was being called when I made a mistake and I could work out what was going wrong.

Like: Object required:

TestHarness.Language.VBScript.DoATest:
System.Runtime.InteropServices.COMException : Exception from HRESULT: 0x800A01A8

… I needed to mark my Person class with the ComVisible(true) attribute as well. If you get an ‘Object required’ error when you are running hosted VBScript, the first thing you should check is that your classes are marked ComVisible(true) – including the classes of any nested properties. I suppose the easiest way is to make everything in the assembly visible!

Then there was an issue with this:

TestHarness.Language.VBScript.DoATest:
System.Runtime.InteropServices.COMException : Exception from HRESULT: 0x800A01AE

‘Class does not support Automation: Age’. Hmmm. VBScript can see my class. It can see the Name property. But it can’t see the Age. What’s special about Age?

	public int? Age
	{ 
		...

It’s a nullable type. I would have expected the default marshaller to convert this into a Variant (VT_NULL) if it was a null value and the appropriate VT_xxx if it was set. But no: ‘Class does not support Automation. ‘

I tried various things, but all the interesting marshalling attributes weren’t allowed on properties, so I went with a hack: I wrapped the property with explicit Getter/Setter methods, and you can see them in the generated VBScript code. Doing this with all of your properties would be a pain; not really an issue with generated code, though :) I assume one of the magic attributes in the deep, dark System.Runtime.InteropServices namespace will do what I want…?

And the biggest of all. I passed in a BindingList to VBScript but when it executed the ‘Add’ method I kept getting an error. It could not see ‘Add’. Probably because BindingList does not have the ComVisible(true) attribute set. I tried deriving from this class, setting the attribute on my derived class and passing an instance of that in instead but that didn’t work either. As you would expect. The best I could come up with was a hack: to create a new collection – MyPeopleCollection in the test harness – that exposed two methods: Add, and RemoveAt. They delegated to a ‘real’ BindingList.

I pass an instance of MyPeopleCollection to the Execute method but return the .TheRealBindingList to the test harness.

And that just about sums up all of the language ‘styles’ I’ll need to be generating and regression testing: Managed (C#, VB.Net, Managed C++), Managed Scripting (PowerShell), Native (C++) and COM Scripting (VBScript).

February 12, 2010

Partial Methods in C++

Filed under: Programming — admin @ 5:54 am

I’m looking at generating C++ versions of my C# classes to facilitate data exchange between the two languages. The first code generators I wrote were in C++ and COM and the whole thing was quite traumatic so I’ve been putting it off :-)

Over the last few years, C# has moved on and added lots of compiler aids for code generation: Partial Classes and Partial Methods spring to mind. Can I steal any ideas for my C++ generators?

Is there a way of achieving partial methods in C++? Yes.

By using a Microsoft extension to the language. It looks something like this:

CMyDerivedClass* pInstance = new CMyDerivedClass();

__if_exists(CMyDerivedClass::SomeMethodName)
{
	pInstance->SomeMethodName();	
}

The code inside the block is only compiled if the condition is defined – ie: the method is there; otherwise it is skipped entirely. This works with variables, function names, classes and all kinds of identifiers.

There are a few implications: if SomeMethodName is defined in the base class of CMyDerivedClass, and not in CMyDerivedClass itself, then the condition still evaluates to true. But technically, I think that’s what you want.

And the biggest implication of all: it’s a Microsoft language extension to C++.

There is no __else, but there is an equivalent ‘not’:

CMyDerivedClass* pInstance = new CMyDerivedClass();

__if_not_exists(CMyDerivedClass::SomeMethodName)
{
	pInstance->SomeOtherMethodNameInstead();	
}
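For what it’s worth, standard C++ can approximate the same dispatch without the Microsoft extension, via the detection idiom. This is a hedged sketch reusing the post’s class and method names – and note it has the same base-class behaviour as __if_exists: an inherited SomeMethodName still counts as present.

```cpp
#include <string>
#include <type_traits>
#include <utility>

// Detection idiom: HasSomeMethodName<T>::value is true when T (or a base
// class of T) declares a callable SomeMethodName().
template <typename T>
class HasSomeMethodName
{
    template <typename U>
    static auto Test(int)
        -> decltype(std::declval<U>().SomeMethodName(), std::true_type());
    template <typename U>
    static std::false_type Test(...);
public:
    static const bool value = decltype(Test<T>(0))::value;
};

struct CBase           { std::string SomeMethodName() { return "SomeMethodName"; } };
struct CMyDerivedClass : CBase {};  // inherits the method: still detected
struct CPlainClass     { std::string SomeOtherMethodNameInstead() { return "fallback"; } };

// Tag dispatch stands in for __if_exists / __if_not_exists: only the chosen
// overload is ever instantiated, so the other call never has to compile.
template <typename T> std::string CallIt(T& obj, std::true_type)  { return obj.SomeMethodName(); }
template <typename T> std::string CallIt(T& obj, std::false_type) { return obj.SomeOtherMethodNameInstead(); }

template <typename T>
std::string CallIfExists(T& obj)
{
    return CallIt(obj, std::integral_constant<bool, HasSomeMethodName<T>::value>());
}
```

It is wordier than __if_exists, but it is portable.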

Personally, I’m not really sold on the idea of partial methods. I would prefer to have a well-parameterized code generation infrastructure in place so that I can explicitly tick a check box somewhere that tells me I need to implement a method. ie: for Grom, I use Extenders that are relevant to the language style being generated:

If I set ‘CustomPreSetCheck’ to true, then a call to that method is always generated in the code and the compiler takes me exactly to where I need to be to fix it, giving me instructions (including the signature!) to implement that method. If you have your method name or signature wrong and are using partial methods, your partial method simply will not be called and you’ll have to hunt around to work out why.

February 6, 2010

Writing a Macro Recorder: Part 4 – Generating C++ ‘macro’ code on the fly and testing it with NUnit.

Filed under: Programming — admin @ 10:59 pm

Part 1 (C#)- Part 2 (Testing C#)- Part 3 (PowerShell)

The source code for Part 4 can be downloaded here (Visual Studio 2008). Download and use at your own risk.
NUnit v2.5.0.9122 can be found here.
The reference in the test harness is to PowerShell 2.0 CTP2. Modify the reference – System.Management.Automation – and point it to your PowerShell; remove any lines that won’t compile :)

I am not sure if this post on C++ really belongs in this series, but given I can reuse a common theme – generating code within a user interface and regression testing it using NUnit – I am grouping it all together. It’s obviously different: after generating .Net code or PowerShell scripts, you can compile that code and run it within the context of your application. You would probably not do that with C++ code! So why the post?

These posts are about two things: generating code based on a user or developer driving a data structure (IBindingList in this case) and automating the testing of what gets generated in NUnit. The principle can be applied to any generated C++ code… but why would you use NUnit to test generated C++ code?! It sounds a bit bizarre, until you realize that NUnit can be used to easily launch Visual Studio and compile C++ code on your behalf… I tend to use this approach for establishing confidence in the generated code, and then use CPPUnit for the detailed testing. Alas. Let’s get on with it….

… well, almost :) Why would you generate C++ code on the fly? For my API Modeling framework which has C# generated classes, it’s obviously worth generating macro code in C# and PowerShell (and VB.Net and Managed C++) code to show how to interact with that API. It allows third parties to work in the language they know. But that same API might also be hosted in an external web service; driving it as a client using C++ might be a requirement. Or from Java even. Or some other language. Rather than meticulously documenting an API in numerous languages, which I know from experience is surprisingly difficult and expensive to do well, I would prefer to have people ‘discover it’. By driving the application in the ‘master’ language (which contains the macro generators and so forth) in a UI, they can learn how to drive it from any other. And, if you can find a way of regression testing the generated code like this series shows you, you always know that what they discover is 100% up to date and usable.

But back to the code generation. We will stick to the sample I’ve used throughout but I’ll add another tab for C++:

To recall from Parts 1 thru 3: the user drives the GridView which is bound to an IBindingList; the IBindingList fires the ListChanged event; a handler on our form dispatches that event to the macro recorder which coordinates the code generation.

The C++ generator requires a different approach. It breaks everything, but I’ll come back to that. Repeatedly. First up though, there is no BindingList in C++. Instead, I will map the BindingList calls into ATL::CAtlArray calls.

And that’s where things start to break down.

Look at the generated code that creates the collection: it’s called pBindingList and not pATLArray or something else like you would expect. Why? The code generator is written in .Net. I am using collection.GetType().Name from .Net to give it its name – its .Net name, more specifically. Is this a problem? Yes.

Rather than go into detail here, I have a ‘Deep Thinking’ section at the bottom that goes through these issues… but to all intents and purposes, you can ignore it :-)

So… to testing
By now, the single test I have in the Test Harness should look familiar. You run the following code in the Test Harness…

	System.ComponentModel.BindingList<Person> people = new System.ComponentModel.BindingList<Person>();

            // We've set up the handler that will forward changes to the macro recorder...
            // by driving the list directly, we can always view Recorder.Instance.Generators[0].Text
            // to see what has been generated so far. 
	people.ListChanged += people_ListChanged;

	Person woo = new Person();
	woo.Name = "WOO";
	woo.Age = 33;
	people.Add(woo);

	Person hoo = new Person();
	hoo.Name = "HOO";
	hoo.Age = 50;
	people.Add(hoo);

	people.Remove(woo);

… and by the time you get to the end, the CPPGenerator.Text field looks like this:

ATL::CAtlArray<CPerson*>* pBindingList1 = new ATL::CAtlArray<CPerson*>();

CPerson* pPerson1 = new CPerson();

pPerson1->SetName(L"WOO");
pPerson1->SetAge(33);
pBindingList1->Add(pPerson1);

CPerson* pPerson2 = new CPerson();

pPerson2->SetName(L"HOO");
pPerson2->SetAge(50);
pBindingList1->Add(pPerson2);

delete pBindingList1->GetAt(0);

pBindingList1->RemoveAt(0);

You substitute that text into the C++ template:

#include "stdafx.h"
#include "atlcoll.h"
#include "Person.h"
#include "XmlDumper.h"

class CGeneratedTestRunner
{
public:

	// Execute the generate code and dump the output to the console.
	static	void	Run()
	{
//%CONTENTS%//

		CXmlDumper::Dump(pBindingList1);
	}
};

And then you test the resulting code:

#include "stdafx.h"
#include "atlcoll.h"
#include "Person.h"
#include "XmlDumper.h"

class CGeneratedTestRunner
{
public:

	// Execute the generate code and dump the output to the console.
	static	void	Run()
	{
	    	ATL::CAtlArray<CPerson*>* pBindingList1 = new ATL::CAtlArray<CPerson*>();

		CPerson* pPerson1 = new CPerson();

    		pPerson1->SetName(L"WOO");
    		pPerson1->SetAge(33);
    		pBindingList1->Add(pPerson1);

    		CPerson* pPerson2 = new CPerson();

    		pPerson2->SetName(L"HOO");
    		pPerson2->SetAge(50);
    		pBindingList1->Add(pPerson2);

    		delete pBindingList1->GetAt(0);

		pBindingList1->RemoveAt(0);

		CXmlDumper::Dump(pBindingList1);
	}
};

Ahem. Well, you will eventually. Testing C++ requires a bit more infrastructure – well, perhaps not strictly, but for the purposes of this sample that’s what I’ll do. You need to create a Visual Studio solution (a Win32 console app in my case) that does as little as possible, and then structure that solution so that one of its files can be overwritten by the test harness:

That solution is also part of the source code you can download above.

What happens next is fairly obvious: as part of the test harness, you overwrite that file with the contents of the code you want to compile. Then you need to compile the code automatically… how?

Visual Studio is very automation friendly (I wouldn’t like to work on the Visual Studio Extensibility Team though: they’ve done a cracking job of opening up Visual Studio since 2005, but by looking at the forums it seems no one will ever be happy with what they’ve done. They always want more!).

So to test the C++ generated code, the Compile method does this:

// TestHarness/CPP.cs
//
// ... bits missing ... the source file is written out before this happens ... 
//
	Type vsType = Type.GetTypeFromProgID("VisualStudio.DTE.9.0");

	EnvDTE.DTE visualStudio = Activator.CreateInstance(vsType) as EnvDTE.DTE;          

	visualStudio.MainWindow.Visible = true;

	visualStudio.Solution.Open(BaseLocation + @"\CPPConsoleApp\CPPConsoleApp.sln");

	visualStudio.Solution.SolutionBuild.SolutionConfigurations.Item("Debug").Activate();

	visualStudio.Solution.SolutionBuild.Build(true);

	Assert.AreEqual(0, visualStudio.Solution.SolutionBuild.LastBuildInfo);

It’s straightforward: open Visual Studio, make the window visible, open a solution, build it, then check there are no errors. As an aside, I always make the launched Visual Studio instance visible. Why? If a test fails, I ensure that Visual Studio remains open (if I am working interactively), which gives me a chance to investigate the problem in the context of what is going on right now.

By driving Visual Studio in this way, I’ve had to add the EnvDTE80.DLL as a reference to my project.

But how to test it? How do we make sure we have built up an AtlArray the same as our BindingList? First, let’s articulate what I’m trying to test: although I’m not doing it here, imagine I developed a collection in C# and I want to ensure that the collection behaves the same in C++. I have generated classes in C# and C++ and I need to ensure they serialize the same in both languages to facilitate data exchange. So I need to compare the built-up C++ object with my C# one: they should match, or something is inconsistent. How to compare? Let’s see how the IBindingList implementation looks when it is serialized in C#:

    <ArrayOfPerson>
      <Person>
        <Name>HOO</Name>
        <Age>50</Age>
      </Person>
    </ArrayOfPerson>

Yup. That looks nice. I think we’ll use that! When the console application is run, the C++ code will dump out an Xml description of its built-up object. We won’t worry about making the output exactly the same, character by character: instead, we’ll concentrate on making the Xml such that it can be deserialized back into an IBindingList instance. We will then construct an object from that Xml in C# before comparing it as part of our test. Crude, but effective.

If you look at the TestHarness\Templates\CPP.TXT file, you will see the Dump call at the end:

	CXmlDumper::Dump(pBindingList1);
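The sample ships CXmlDumper with the console project; here is a portable sketch of what such a dumper boils down to. This is an illustration, not the shipped code: CPerson is reduced to narrow strings, and the element names are my assumption of what XmlSerializer expects for a Person list.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Minimal stand-in for the generated CPerson class.
struct CPerson
{
    std::string m_name;
    int m_age;
    std::string GetName() const { return m_name; }
    int GetAge() const { return m_age; }
};

// Sketch of CXmlDumper: walks the collection and emits Xml shaped so that
// XmlSerializer could rebuild a BindingList<Person> from it.
class CXmlDumper
{
public:
    static std::string Dump(const std::vector<CPerson*>& list)
    {
        std::ostringstream out;
        out << "<ArrayOfPerson>\n";
        for (size_t i = 0; i < list.size(); ++i)
        {
            out << "  <Person>\n"
                << "    <Name>" << list[i]->GetName() << "</Name>\n"
                << "    <Age>" << list[i]->GetAge() << "</Age>\n"
                << "  </Person>\n";
        }
        out << "</ArrayOfPerson>\n";
        return out.str();
    }
};
```

Note there is no character-escaping or encoding handling here – fine for a test, not for production.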

Where does the C++ console app dump its data? StdOut is probably the best place for a test. When we run the app – in TestHarness/CPP.cs/Execute – it looks like this:

// CPP.cs
//
// ... stuff missing that sets up the cmd.exe call 
//
            m_command.Start();

            string output = m_command.StandardOutput.ReadToEnd();

            m_command.Close();

            visualStudio.Solution.Close(false);
            visualStudio.Quit();

            // We can now try and instantiate our BindingList using the string we got back. 
            System.Xml.Serialization.XmlSerializer s = new System.Xml.Serialization.XmlSerializer(typeof(System.ComponentModel.BindingList<Person>));

            object result = s.Deserialize(new StringReader(output));

            return result;

By using StdOut we don’t need an intermediate file. You can see the Deserialize method: this reconstructs (in .Net) a BindingList based on the Xml that was output by running the C++ Console app. The value that is returned is given back to the test harness and compared with the one that was built up manually in C#.

Gotchas
There was one major problem I encountered in getting this going. Sometimes, when driving Visual Studio through Automation from a multithreaded application, you get these errors:

Application is busy (RPC_E_CALL_REJECTED 0x80010001)

Call was rejected by callee (RPC_E_SERVERCALL_RETRYLATER 0x8001010A)

At random. It’s very, very, very, funny.

The solution is to read this article: http://msdn.microsoft.com/en-us/library/ms228772.aspx

Or, if you can’t be bothered, look at the bottom of the TestHarness/CPP.cs file. On investigation, their solution was not working – CoRegisterMessageFilter was returning a bizarre HRESULT that not even Google knows about; one could conclude it does not exist. The reason was that my NUnit tests were being run in an MTA when I thought they were being run in an STA (NUnit uses a Multithreaded Apartment so it can update the tree view as the tests run… awww, how nice!).

Bottom line: I needed to run my test in an STA before I could call CoRegisterMessageFilter and the way to do that was to modify the TestHarness.dll.config file so it contained this entry:



  
    
      

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <configSections>
        <sectionGroup name="NUnit">
          <section name="TestRunner" type="System.Configuration.NameValueSectionHandler"/>
        </sectionGroup>
      </configSections>
      <NUnit>
        <TestRunner>
          <add key="ApartmentState" value="STA"/>
        </TestRunner>
      </NUnit>
    </configuration>

The syntax highlighter mangled the original entry beyond recognition; if in doubt, the copy in the attached zip file is authoritative.

The point of note is the ="STA". I have put an assertion in CPP.CS to ensure that System.Threading.Thread.CurrentThread.GetApartmentState() always returns STA. Without that, you will get issues driving Visual Studio through automation in your NUnit tests.

Tchau!

Deep thinking…
This section is an addendum and contains a lot of stuff you probably don’t need to worry about unless you are serious about generating C++ code on the fly. Bottom line: it’s harder than you think :-)

Originally, I asked: is it a problem if the variable is called pBindingList1 instead of pATLArray? Aesthetically, at least, yes. But solving that problem requires a surprisingly large amount of infrastructure around it. If I was generating code on the fly for C# and C++, I would (almost certainly) be doing so from a common model. I would then be calling collection.GetModel().Name or something similar instead of collection.GetType(). Assuming it’s a class model for now, that generated code would probably use an infrastructure I had developed in each language that tried to make fundamental data structures the same across all languages. I would perhaps create a collection class in each language that exposed exactly the same methods: Add, Remove, Swap and so forth. Or, I would ensure there was a suitable semantic and syntactic mapping between my C# collection type of choice and the one I used in C++. At that point, I would know when I generate the macro code for C++ what collection type to use and therefore what naming convention. At the moment, I don’t have that information around and I just have to guess.

Generating C++ code automatically as you go along brings up *ALL KINDS OF ISSUES!*. In the grand scheme of things they aren’t that important but they are worth thinking about. It’s probably the hardest language to generate code for and yet probably the one that needs it the most. When you can do this, from a model, pat yourself on the back :-)

For example: if I add an item to the binding list, I generate code for it in C++ like this:

    CPerson* pPerson1 = new CPerson();

    pPerson1->SetName(L"Graham");
    pPerson1->SetAge(33);
    pBindingList1->Add(pPerson1);

If I remove the item from the binding list… do I destroy the pointer in C++? Although removed from the list in C++, that object might still be part of some ‘greater’ model, so it should not be destroyed. After all, even after removing an item from the list in C#, I can still change one of its properties in the UI and get the macro code for that property change in any language. By definition, if I can still set the property in C#, I still have a reference to it… it will never be garbage collected.

No such beauty exists in C++.

In C++, if I delete the pointer when the object is removed from the list, that object would no longer be around when the property change came through. I’ll leave this one with you to work out. The best you can do when generating C++ code on the fly like this is probably just to get people up and running with your interface; that’s usually one of the biggest hurdles when trying to integrate with third-party software anyway. What I do in this sample is destroy the pointer and (in CPPGenerator) remove the object reference from the NamingManager. That way, when the property change comes through, the object reference is unknown in C++ and will be built up again. Or rather, I would remove the object reference from the NamingManager IF I KNEW WHAT OBJECT HAD JUST BEEN REMOVED! I only know its index, because ListChangedEventArgs.ItemDeleted does not give me the reference of the object it has just purged. Problems! So the sample will generate C++ code that leaks.

Or you might have something in your model that helps you. Perhaps you have the concept of a list that ‘owns’ its pointers, or a list that does not. You would then generate the code accordingly: if the item is removed from a list that owns its pointers, you destroy the pointer; if the list is just a ‘view’ or a ‘filtered set’, you leave the pointer alone. Once again, this comes back to how you model your software and whether you reuse that model throughout your life cycle. If you have this kind of information in your model (and there might be good reasons for NOT having it there – perhaps it’s implementation-specific, or too low level) then you can certainly use it here, providing the model is accessible at this point in your application.
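The owning-list versus view-list distinction can be made concrete. A sketch, with std::unique_ptr standing in for the ‘owns its pointers’ case – the class names are mine, not the sample’s:

```cpp
#include <memory>
#include <vector>

// A list that owns its pointers: removing an item destroys it, which is the
// behaviour the generated 'delete pBindingList1->GetAt(0)' line hard-codes.
template <typename T>
class COwningList
{
public:
    void Add(T* p) { m_items.emplace_back(p); }
    void RemoveAt(size_t i) { m_items.erase(m_items.begin() + i); } // deletes
    size_t GetCount() const { return m_items.size(); }
private:
    std::vector<std::unique_ptr<T>> m_items;
};

// A list that is just a 'view' or 'filtered set': removal leaves the object
// alive for a later property change to find.
template <typename T>
class CViewList
{
public:
    void Add(T* p) { m_items.push_back(p); }
    void RemoveAt(size_t i) { m_items.erase(m_items.begin() + i); } // no delete
    size_t GetCount() const { return m_items.size(); }
private:
    std::vector<T*> m_items;
};
```

If the model told the generator which kind of list it was dealing with, the macro recorder could emit the delete (or not) correctly instead of guessing.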

There’s lots of other issues too, some related to the above: previously, because I was generating macros for C# and PowerShell, I could just use .GetType() to find out the type I was referring to when I generate the code. I can’t do that in the C++ generators. If I was serious about generating C++ code, I would need some way of finding out – in C# – what the C++ name for an equivalent class was. For example: it’s MacroSample.Person in C#; but it might be SomeOtherNamespace::MacroSample::CPerson in C++. That kind of information comes from the model and – particularly – the job parameters used to generate the C++ classes you are now writing the macro generators for. We also need to consider things such as the ‘namespace separator’ used in different languages (. vs :: ).
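Once a model supplies the target-language class name, the separator and prefix mapping itself is mechanical. A hypothetical helper the generator could use – the ‘C’ class prefix and ‘.’-to-‘::’ translation are conventions assumed here, not something the sample implements:

```cpp
#include <string>

// Translate a C# type name into its C++ equivalent under two assumed
// conventions: the namespace separator '.' becomes '::', and the class
// segment gets the MFC/ATL-style 'C' prefix.
std::string ToCppTypeName(const std::string& csharpName)
{
    std::string result;
    std::string::size_type start = 0;
    for (;;)
    {
        std::string::size_type dot = csharpName.find('.', start);
        if (dot == std::string::npos)
        {
            // Final segment is the class name: apply the 'C' prefix.
            result += "C" + csharpName.substr(start);
            return result;
        }
        // Namespace segment: swap the separator.
        result += csharpName.substr(start, dot - start) + "::";
        start = dot + 1;
    }
}
```

So "MacroSample.Person" maps to "MacroSample::CPerson". The real job is not this string-mangling, of course, but getting the mapping information into the model in the first place.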

BOTTOM LINE: Generating C++ code from C# without having a model around is hard and is quite different from the Managed languages. To get it right, and on-the-fly generated C++ code at least semi-usable, requires quite an investment of time and effort. I can’t think of any other way of tackling these problems without modeling your software, generating the code and having the macro recorder interpret the model at runtime. Nor can I think of any way of solving the ‘dangling pointer’ issue that generating C++ code on the fly will eventually bring up. Probably the best we can do is put comments in the code to direct the user.

But anyway. All those problems are not an indication that we should not generate C++ code; they are a hint we need to think about the problem and solution a bit harder!
