Wednesday, October 28, 2009

More about Bizmonade

I got some interesting questions about Bizmonade on Michael Stephenson's blog, which I'll answer here.

Duplication between Bizmonade and BizUnit/when to use one instead of the other/when to use both

"1. Would I really want to extensively test all orchestration paths with Bizmonade and probably have an amount of duplication in my BizUnit testing":
My long-term ideal would be to specify tests that can run in two different contexts: simulated XLang (Bizmonade) or inside the real BizTalk. This would remove the need for duplication and also build trust in Bizmonade: it can be used for fast feedback during development, and the result of a test can then be validated by running it inside the real BizTalk. If a test has a different result in each context, then there is a bug in Bizmonade. However, I have not spent any time on this idea yet, because I wanted to focus first on the simulated orchestration engine (and also because I was not sure of the best way to implement it: a BizMock-like approach, something more like BizUnit, or another solution; I would be interested in hearing the community's ideas about this).

However, there would still be some duplication because there would be different levels of tests (unit tests vs integration tests), but that's unavoidable regardless of the test tools being used.
"2. Would I be looking to use Bizmonade for edge or obscure test cases which would be a pain to test with BizUnit " and "4. Could a key place for Bizmonade be where I usually have a chain of orchestrations which could be difficult to test with BizUnit, and I could test the orchestrations individually with Bizmonade "
I can think of three cases where Bizmonade can test things that would be complex (or impossible) to test with a tool that depends on the real BizTalk (such as BizUnit):
  • Error conditions that are hard to reproduce with BizUnit (for example, exceptions or delivery NACKs): you don't need to "physically" recreate the error (for example by shutting down a server or deleting a folder); you can simply tell the Orchestration Simulator to "inject" an exception or NACK.
  • Anything related to timing (Delay shapes and timeouts): Bizmonade allows "mocking" the system clock, so test cases involving long delays run almost instantly.
  • Testing started/called orchestrations individually, which is not possible at all with BizUnit (it can only test the "main" calling orchestration). I think this is a particularly useful case, because it allows smaller tests that each focus on verifying one specific thing; this makes it easier to identify the cause of a failure when tests fail.
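To illustrate the clock-mocking idea outside of Bizmonade: the general technique is to route all time lookups through an injectable clock, so a test can skip over long delays instantly. A minimal sketch follows (the `IClock`, `FakeClock` and `ApprovalDeadline` names are illustrative, not Bizmonade's actual API):

```csharp
using System;

// An injectable clock: production code asks IClock for the time instead of
// reading DateTime.UtcNow directly, so tests can substitute a fake.
public interface IClock { DateTime UtcNow { get; } }

public class SystemClock : IClock
{
    public DateTime UtcNow { get { return DateTime.UtcNow; } }
}

public class FakeClock : IClock
{
    private DateTime now = new DateTime(2009, 10, 28);
    public DateTime UtcNow { get { return now; } }
    // Tests "advance" time instantly instead of waiting for real delays.
    public void Advance(TimeSpan delta) { now += delta; }
}

// Example consumer: has a 48-hour approval deadline expired?
public class ApprovalDeadline
{
    private readonly IClock clock;
    private readonly DateTime start;
    public ApprovalDeadline(IClock clock) { this.clock = clock; start = clock.UtcNow; }
    public bool HasExpired { get { return clock.UtcNow - start > TimeSpan.FromHours(48); } }
}
```

A test constructs `ApprovalDeadline` with a `FakeClock`, calls `Advance(TimeSpan.FromHours(49))`, and asserts `HasExpired`, all in microseconds instead of two days.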
I would also be interested in hearing suggestions on other things that are hard to test and that could be made easier by enhancing Bizmonade.
"6. Would Bizmonade offer a better way to ensure good test coverage for orchestrations ":
A problem I've had with BizUnit tests is that they take too long to execute, as BizTalk is not (yet) optimized for low latency. Because of that, I don't have as much time to write extensive test cases (they would take too long to run). Instead, I usually "cheat" by testing more than one thing in the same test. This, however, is a unit testing antipattern: it makes it harder to identify the cause of a test failure, and it can also introduce defects into the tests themselves (when modifying a test because a requirement changed, a developer can remove an important part of the test by mistake).
Given the execution speed of tests in Bizmonade versus the real BizTalk, it's much easier to ensure good test coverage while still following the rule that each test should verify a single thing.

Trust in Bizmonade test results

"3. Would I be looking to trust Bizmonade enough to reduce the amount of BizUnit testing I do "
and
"5. What risk does running my orchestration outside of BizTalk pose. Might I end up dealing with issues which wouldn't happen in XLANG or missing conditions that would happen in XLANG ":

Probably not in the initial releases: there can still be obscure edge cases where BizTalk's behavior is different from Bizmonade's (either because I incorrectly interpreted what BizTalk was doing, or because of a bug or omitted feature in Bizmonade). In future releases, I hope to achieve something that follows BizTalk's behavior closely enough to be trusted (again, having a way to execute the same tests in the real BizTalk would help; in the meantime, I'll rely on verifying my assumptions with BizUnit/manual tests, and on user bug reports).

Orchestration debugging

"7. Would I lose the ability to debug orchestrations ":
No: BizTalk's own representation of orchestrations is still there (in addition to Bizmonade's), so you can still debug orchestrations with the BizTalk Orchestration Debugger.
In addition to the BizTalk debugger, an experimental feature I've been working on allows debugging orchestrations from Visual Studio by debugging the unit tests (this will allow setting breakpoints and stepping through the .odx file). Again, it won't disable the BizTalk debugger; it would only add another option.

Licensing

"8. Am I going to have to pay for Bizmonade (longer term) "

No. Although the site doesn't mention anything about licensing yet, our intention is to make this a free tool, and I completely agree with the commenter who said the tool needs to be open source as well. We have two obstacles to making it open source right now, but they should be resolved in a later release:
  • The code structure needs to stabilize: the code is still being heavily refactored, so it would be hard to make it available and start accepting patches for now (complex merges).
  • Licensing issues: we need to decide on an appropriate license (suitable for integration in closed/non-free projects, and compatible with the other components on which Bizmonade depends).

Saving time in the development cycle

Productivity was my main goal when developing Bizmonade: allowing developers to focus on a single change at a time (and ignore configuration issues), and decreasing the time between making a change and confirming whether it worked as expected. To take this idea to the extreme, I would even say that having automated tests is a very nice "side effect"; the main benefit is accelerating development and modification of existing code.

Michael also suggested generating test code by analyzing the common conditions and branches within an orchestration. I'm not convinced by test case generation (the only thing a generated test can check is that the component under test still behaves as it did when the test was generated, including any bugs present at that time). I prefer the reverse methodology, where tests are written first, and an implementation is then written to make them pass. That said, Bizmonade's XLang/.odx interpreter could be reused for generating other types of code, such as a test "skeleton" generator, but that would be a completely different project.

Sunday, August 23, 2009

New BizTalk Unit Testing Framework : “Bizmonade”

For the past few months, I have been developing a new unit testing framework for BizTalk. This framework specializes in unit testing orchestrations, and unlike existing tools, it allows testing orchestrations without first deploying them into BizTalk and without doing any configuration (bindings). The project is now complete enough to release a “preview” version. It still lacks important features, but it is a working implementation, and I would be very interested in hearing feedback from the BizTalk community.

Tests can be written in C# using NUnit, MSTest or any other .Net testing framework.

Example test written using this tool:

[Test]
public void TotalForOrder413ShouldBe14_99()
{
    OrchestrationSimulator.Test<SubmitOrder__Simulated>()
        .When(MessageReceived.FromFile<PurchaseOrder>(
            Path.Combine(exampleInstancesPath, "PurchaseOrder_413.xml")
        ))
        // Ensure the orchestration publishes an updated "PurchaseOrder" with the correct price quote
        .ExpectMessageSent<PurchaseOrder>
        (
            msg =>
            {
                // NUnit asserts; if using another framework, substitute that framework's syntax
                Assert.AreEqual(14.99m, msg.DistinguishedFields["Total"]);
                Assert.AreEqual("priceQuoteSubmitted", msg.PromotedProperties[typeof(orderStatus)]);
            }
        )
        // Simulate that the client sent an "Approved Purchase Order" in response
        // to the price quote.
        .When(MessageReceived.FromFile<PurchaseOrder>(
            Path.Combine(exampleInstancesPath, "PurchaseOrder_413_Approved.xml")
        ))
        // Ensure the orchestration publishes the right Invoice in reaction to the approval
        .ExpectMessageSent<Invoice>
        (
            // Not using NUnit asserts this time; this form works with any test framework,
            // but provides less detailed errors in case of failure (for now)
            msg =>
                msg.GetDistinguishedField<string>("Number") == "1" &&
                msg.GetDistinguishedField<decimal>("TotalPrice") == 14.99m
        )
        // Ensure the orchestration completes successfully
        .ExpectCompleted()
        .ExecuteTest();
}


The tool does not depend on BizTalk’s orchestration engine; instead, it provides its own implementation of the XLang/s language. It generates C# code from ODX files (using an interpreter generated with SableCC), and the generated C# code uses Bizmonade’s “fake orchestration engine” implementation. This allowed developing the tool without worrying about internal BizTalk implementation details.



More details and download on bizmonade.matricis.com.

Saturday, April 4, 2009

My BizTalk vNext (post-2009) wish list

Michael Stephenson's recent thoughts about BizTalk vNext inspired me to gather my own BizTalk wish list.

My wish list is categorized as follows:

  • Usability / Productivity
  • Explicitness / Clarity
  • Deployment
  • Debugability

I have given great importance to usability/productivity enhancements: BizTalk has reached a good level of architectural maturity, but it has neglected features that would make developers more productive, and I hope the next version will improve on this aspect. (Note that I'm not talking about adding more wizards or drag-and-drop tools, but about enhancements that would improve flow.)

Another important aspect is deployment. It should be as painless as possible to support several upgrade scenarios, to enable iterative development where upgrades can be done frequently and with minimal downtime.

Usability / Productivity


Make variable/message creation more efficient
  • Allow entering a variable/message type with the keyboard, using IntelliSense. The type picker dialog is annoyingly painful to use when you already know which type you want.
  • It should be possible to create a variable/message, name it and set its type without touching the mouse.

Example screenshot of IntelliSense for variable/message types and mouseless variable/message creation.

Wizards: remember previous values
  • After a wizard is run (for example the WCF Service Publishing Wizard), it should save everything that was entered in the wizard to a file (a DSL defined with Oslo?)
    WCFService
    {
        PublishOrchestration "MyProject.Orders.ReceiveOrders"
        MetadataEndpoint true
        TransportType WCF-WSHttp
        TargetNamespace http://tempuri.org/orders/receive
        Location http://${server.host}/services/ReceiveOrder/
        ...
    }
  • This file should be included in the project and checked into source control.
  • When the same wizard is rerun for a project, it should load the previously entered values (from the saved "wizard DSL" file). This would avoid re-entering the same thing 10 000 times when experimenting with a wizard (better learnability). It would also allow future updates to a project to be done without restarting from scratch every time a change is needed (for example, adding a new operation to an existing orchestration).
  • Ideally, the only thing a wizard should generate is the "wizard DSL" file containing the values entered by the user. It should NOT generate type-holder orchestrations, schemas or binding files. Instead, these artefacts should be generated at compile time from the "wizard DSLs" file and should NOT be part of the project.
    • This is probably not possible with the artefacts that are currently generated, because some of them need manual tweaking after they are generated. In an ideal solution, there would be no need to tweak the generated code (the necessary adjustments could be done in another way, for example by extending the "wizard DSLs"), so the generated artefacts would not need to be visible to developers anymore.
  • Similarly, the wizard should not deploy anything to IIS. Instead, a command line tool should be able to "import" the wizard DSL definition into an IIS web site / virtual directory.
  • Also, ideally, the wizards should be removed entirely in favor of corresponding DSLs that can be edited efficiently (and with good learnability) in a text editor with IntelliSense. (The goal of removing wizards is that the underlying DSL should be simple enough to manipulate directly, so wizards would not really be necessary anymore.)
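The "remember previous values" idea above is essentially a round-trip of the wizard's answers through a file. A minimal sketch of such persistence follows (the `WizardSettings` type and the flat "key = value" format are hypothetical, standing in for a richer wizard DSL):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Hypothetical persistence for wizard answers: save them as "key = value"
// lines, then reload them to pre-populate the wizard on the next run.
public static class WizardSettings
{
    public static void Save(string path, IDictionary<string, string> values)
    {
        File.WriteAllLines(path, values.Select(kv => kv.Key + " = " + kv.Value).ToArray());
    }

    public static Dictionary<string, string> Load(string path)
    {
        return File.ReadAllLines(path)
            .Select(line => line.Split(new[] { '=' }, 2))
            .ToDictionary(parts => parts[0].Trim(), parts => parts[1].Trim());
    }
}
```

The saved file would be checked into source control, so re-running the wizard (or a build-time generator) starts from the previous answers instead of a blank slate.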
Orchestrations: Get rid of the XML representation
  • Tools like Oslo's MGrammar would make it easier to directly manipulate XLang/s code, making it unnecessary to also have an XML representation.
  • The XLang/s representation is much more readable than the XML
  • The XLang/s representation by itself would be easier to work with merge tools. Currently, resolving conflicts/merging changes in orchestrations is very painful because the XML is not merge-tool-friendly.
  • XML representation of orchestrations is mostly a repetition of the XLang/s code. Some information is only in the XML, but it could be moved to the XLang/s by using code annotations.
  • Some tools will still need access to the XML, so there should still be a way to programmatically generate the XML representation in memory (as can currently be done with XSharpP.exe). However, the XML representation should be hidden from orchestration developers: it should not be stored in source control, and it should only be generated on the fly when a tool needs it (this means all tools that manipulate ODX files would need to be updated to do this conversion).
  • Another benefit is that it would help solve a frequent bug we encounter: while editing an orchestration, some parts of the code become invalid and are replaced by "#error "The expression you have entered is invalid"". When this happens, we have to make some modification in the orchestration designer that forces it to regenerate the XLang/s code from the XML representation. If the designer worked directly with XLang/s code instead of converting to/from XML, this source of potential conversion problems would disappear.
Add a "Refactor > Extract Orchestration" feature

Example screenshot of suggested "Extract Orchestration" tool

  • Would be similar to C# refactoring tools that have a "Refactor > Extract Method" feature. It would greatly help in breaking down complex orchestrations into easier to understand "sub-orchestrations". Example use case:
    • User selects one or many shapes in orchestration and activates the "Extract Orchestration" tool.
    • User enters the new orchestration name and selects the invocation strategy (Call Orchestration shape, Publish Message to Message Box, Publish Message to Partner Orchestration)
    • User assigns parameter names for the new orchestration
    • The Extract Orchestration tool generates the new orchestration with all appropriate parameters, and moves the selected shapes into it
    • The Extract Orchestration tool replaces the selected shapes in the original orchestration with a "Call Orchestration" or "Send Message" shape, depending on the invocation strategy selected by the user.
    • If one of the "Publish Message" invocation strategies was selected, Extract Orchestration Tool generates appropriate schemas to pass values to the new orchestration.
  • The "Call Orchestration Shape" strategy is probably simpler to implement and would be a great start... other strategies could follow in future versions.
Pipeline designer: add pipeline components from "project reference" instead of toolbox.
  • Allow adding pipeline components without copying them to "C:\Program Files\..." and adding them to the toolbox. Many components are written to be used once (for a specific pipeline), so having them in the toolbox brings no value. Also, having pipeline components in the toolbox leads to "file already in use" errors while recompiling projects (requiring a Visual Studio restart).
  • It should also be possible to reference pipelines by "project reference" instead of references to a compiled assembly. (Project references would be converted to assembly references when the pipeline is compiled)
Allow "commenting out" orchestration shapes or groups of shapes. This could be useful when making major changes to an orchestration, to temporarily disable parts of it and ensure individual changes still compile.
Enhance general usability

Writing code in a text editor (with features like IntelliSense and refactoring) is generally more efficient than using a graphical editor (such as BizTalk's orchestration designer, mapper, or pipeline designer). This doesn't mean an efficient graphical code editor is impossible, but it's still a huge unresolved challenge, and considerable user interface research must be done before reaching that point.
This research would have to consider usability principles such as the Keystroke-Level Model, flow, recognition rather than recall, locus of attention, and modeless interfaces, and would need major experimental usability/efficiency studies.


Explicitness / Clarity

Add an "Initialize Correlation Set" shape to orchestrations
  • In some cases, we need to initialize a correlation set with a value that is not in a promoted property of the received or sent messages. It could be in a non-promoted property, or it could be a value derived from the message.
  • The current solution is to send a dummy message with the values that will initialize the correlation set and have some way to redirect the dummy message to "/dev/null" (for example using a Null Adapter). This solution adds a lot of noise to orchestrations and it does not explicitly show that the goal is to initialize a correlation set.
  • A better solution would be an "Initialize Correlation Set" shape, which would allow assigning the result of an expression to each property of the correlation set.
    Example of proposed Initialize Correlation Set shape
    corrEvent.EventId = configuration.deadlineReachedEventId;
    corrEvent.EventDate = deadline.ToString("yyyy-MM-dd");
Allow filter expression on non-activate receive shapes
  • This would give the same result as the "Initialize Correlation Set" shape feature (create an instance subscription with the appropriate filters).
  • Having both options could still be useful: developers would be able to pick the option that gives the clearest representation for their context.
Never truncate any text in the orchestration editor.
  • Unclear names considered harmful:

Orchestration shape with truncated text

  • If text were not truncated in shape names, those names could be used to better document what the orchestration is doing.
  • It would be better to dynamically resize shapes based on their names (so the full name fits) than to truncate the name.
Allow adding "documentation comments" to orchestrations
  • Another way to better document what an orchestration is doing would be to allow entering annotations.

Example of inline annotation in orchestration

  • It should be possible to edit the annotation directly in the designer (not in the property editor or a popup window)
  • These annotations would fulfill a similar purpose to the "Documentation" property, but would be visible directly in the designer (and in tools such as BizTalk Documenter). This avoids having to click each shape one after the other in case one of them has a useful comment.
Binding files: use "configuration by exception" for exported bindings.
  • 99% of the content of binding files is noise: it simply repeats the default values of all properties, which makes the files hard to maintain. In our projects, we use another file to "configure the configuration file"; it contains only the values that can really change. We also have binding templates which reference values from this configuration-configuration file.
  • A better solution would be to have simplified binding files that are created with a "configuration by exception" philosophy. These files would only contain values that are different from the defaults. (For this to work, the binding file export tools would need to have a way to check the default value for each property, and only export those that are different from this default.)
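The "configuration by exception" export described above boils down to diffing effective values against known defaults. A minimal sketch follows (the `BindingExporter` type and the flat key/value model are simplifications; real binding files are nested XML):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class BindingExporter
{
    // Keep only the settings whose values differ from the product defaults;
    // everything else is noise that the import side can fill back in.
    public static Dictionary<string, string> ExportNonDefaults(
        IDictionary<string, string> effective,
        IDictionary<string, string> defaults)
    {
        return effective
            .Where(kv => !defaults.ContainsKey(kv.Key) || defaults[kv.Key] != kv.Value)
            .ToDictionary(kv => kv.Key, kv => kv.Value);
    }
}
```

An import tool would do the reverse: start from the defaults and overlay only the exported exceptions.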
Binding files: Don't escape any XML
  • Many sections in binding files contain escaped XML. Some of these sections even contain "double-escaped" XML. This makes them hard to read and maintain. It can be easy to forget to change a small environment-specific value because it's buried in a long line of double-escaped XML.
  • A solution that's already available is to use the ElementTunnel tool from Scott Colestock's deployment framework.
  • A better solution would be to integrate the ElementTunnel tool in all BizTalk tools that import/export binding files, to make it transparent to developers. For example, when a binding file is exported, all XML should be automatically unescaped in the generated XML file. Similarly, when a binding file is imported, all XML should be automatically re-escaped before being processed by BizTalk. (It should also support older binding files where XML is already escaped)
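The un-escaping itself is plain XML entity decoding; what ElementTunnel adds is knowing which elements to process. The effect can be sketched with standard .NET (here `WebUtility.HtmlDecode`, which decodes the standard XML entities; on older frameworks `HttpUtility.HtmlDecode` plays the same role):

```csharp
using System;
using System.Net;

class UnescapeDemo
{
    static void Main()
    {
        // XML as it typically appears inside a binding file: escaped once...
        string escapedOnce =
            "&lt;Config&gt;&lt;Address&gt;http://server/svc&lt;/Address&gt;&lt;/Config&gt;";
        Console.WriteLine(WebUtility.HtmlDecode(escapedOnce));
        // → <Config><Address>http://server/svc</Address></Config>

        // ...and sometimes escaped twice (escaped XML embedded in already-escaped
        // XML), so two decoding passes are needed to make it readable:
        string escapedTwice = "&amp;lt;Config /&amp;gt;";
        Console.WriteLine(WebUtility.HtmlDecode(WebUtility.HtmlDecode(escapedTwice)));
        // → <Config />
    }
}
```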
Allow readable XPaths in every place where an XPath expression can be entered
  • In a BizTalk solution, most XPaths have the following form:
    /*[local-name() = 'Test' and
    namespace-uri()='http://tempuri.org']/*[local-name()='Hello' and
    namespace-uri()='http://tempuri.org']
    /*[local-name()='World' and
    namespace-uri()='http://tempuri.org']
  • This is because most BizTalk tools don't support namespace aliases, and also because this is the way these tools generate XPaths.
  • Because of this, many developers who first learn XPath while working with BizTalk jump to the conclusion that XPath is an unreadable variant of black magic.
  • It would be better if all BizTalk tools allowed XPaths in the following format (equivalent to the previous example when aliases are correctly set):
    /t:Test/t:Hello/t:World
    For this example to work, there would need to be a way to configure namespace aliases. For example, orchestrations and maps could have properties to assign aliases to namespaces URIs. When assigned, any generated XPath should use the shorter form with the user-assigned alias.
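The equivalence of the two forms can be checked with standard .NET XML APIs, where `XmlNamespaceManager` plays the role of the alias configuration described above:

```csharp
using System;
using System.Xml;

class XPathAliasDemo
{
    static void Main()
    {
        var doc = new XmlDocument();
        doc.LoadXml(
            "<t:Test xmlns:t='http://tempuri.org'>" +
            "<t:Hello><t:World>42</t:World></t:Hello></t:Test>");

        // The verbose, alias-free form that BizTalk tools generate:
        var longForm = doc.SelectSingleNode(
            "/*[local-name()='Test' and namespace-uri()='http://tempuri.org']" +
            "/*[local-name()='Hello' and namespace-uri()='http://tempuri.org']" +
            "/*[local-name()='World' and namespace-uri()='http://tempuri.org']");

        // The readable equivalent, once an alias is registered:
        var ns = new XmlNamespaceManager(doc.NameTable);
        ns.AddNamespace("t", "http://tempuri.org");
        var shortForm = doc.SelectSingleNode("/t:Test/t:Hello/t:World", ns);

        Console.WriteLine(longForm == shortForm); // both select the same node
    }
}
```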

Deployment

Create an MSI file WITHOUT first deploying to BizTalk
  • The goal of an MSI file is to deploy, so needing to deploy first in order to generate the file that will be used to deploy is a "paradox".
  • This diagram summarizes what is conceptually wrong with the current "automated" deployment process:
    BizTalk "automated" deployment process
  • This may work for an individual developer who tests on his own machine and then generates an MSI for deployment to production.
  • But it doesn't work in a team using Continuous Integration, where we need to:
    • Obtain the latest source code from source control
    • Build the solution
    • Deploy the solution, using the same process that will be used to deploy to production
    • Run automated tests on the deployed solution
    • Deploy to production using an installer that was successfully tested in the Continuous Integration environment
    Using the same deployment process that is used for production is important, because it is critical to identify problems in the deployment process before going to production. (Deployment to production should not be the first opportunity we have to test the real deployment process, to minimize downtime caused by problems in this process).
  • Because of this flawed way of generating MSI files, they are useless to us. Instead, we have developed complex scripts that deploy artefacts to BizTalk applications. These scripts are used in the Continuous Integration environment, and they are also used to deploy to production.
  • We therefore lose access to useful MSI features (rollback, uninstall, dependency management, ...)
  • A good solution would be a way to directly generate a fully functional BizTalk MSI file, without needing to first deploy to BizTalk (using WiX?).
Have a way to replace a deployed schema, without undeploying assemblies that depend on it, and without changing its target namespace
  • If a schema is shared by multiple projects (as a "contract" to allow them to interact in a decoupled way), any updates to the schema mean that all dependent projects have to be undeployed.
  • For some schema updates, it is not desirable to undeploy all dependent applications. For example, if an optional element or promoted property is added, we currently need to:
    • Disable receive locations of dependent applications
    • Wait for orchestration instances of dependent applications to end (or terminate them, if acceptable)
    • Undeploy all dependent applications
    • Deploy the updated schema
    • Redeploy dependent applications and enable their receive locations
    • Manually reinject messages to resume orchestration instances that were terminated.
    This may be appropriate when a mandatory element is added, but not for an optional element: we may have 5 projects that depend on the schema, and only 2 of them need the new property.
  • A solution would be to define a new version of the schema (by incrementing a version number in its "target namespace"). However, if an orchestration sends a message with version 2, it can't be received by another orchestration that still expects version 1. In practice, this means that either all orchestrations must still be updated at the same time, or we need to set up a "version mapping" process to automatically map each message to the version its recipient expects.
  • Another solution is to keep the same target namespace but increment the assembly version number. In that case, BizTalk will take the schema from the assembly with the highest version number.
    • This works in pipelines and in filter-based message routing solutions.
    • But it fails in orchestrations: an orchestration expects the class associated with the message's .NET type, and this class must have the exact version the orchestration expects. If the orchestration was compiled to accept message v1.0.0.0 and it receives an instance of message v1.1.0.0, it will throw an InvalidCastException.
  • A solution would probably require major reworking in BizTalk's internal deployment functionality, but it would greatly improve the ability to update applications in production.
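The InvalidCastException scenario can be reproduced in plain C#. In this sketch, two namespaces stand in for the two assembly versions (an illustration of the CLR behavior, not the actual BizTalk-generated types):

```csharp
using System;

// Two builds of the "same" schema class: to the CLR these are unrelated
// types, just like PurchaseOrder from assembly v1.0.0.0 vs v1.1.0.0.
namespace SchemasV1 { public class PurchaseOrder { } }
namespace SchemasV2 { public class PurchaseOrder { } }

public static class VersionCastDemo
{
    public static bool OrchestrationCanReceive(object received)
    {
        try
        {
            // The orchestration was compiled against the v1 type:
            var typed = (SchemasV1.PurchaseOrder)received;
            return typed != null;
        }
        catch (InvalidCastException)
        {
            return false; // identical XML shape, but a different CLR type
        }
    }
}
```

A v1 instance casts fine; a v2 instance fails even though the messages may be byte-for-byte identical on the wire.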

Debugability

Allow Visual Studio debugger to step through XLang/s code (when attached to BTSNTSvc.exe)

Example of orchestration being debugged in Visual Studio debugger

  • The BizTalk orchestration debugger can be used to debug orchestrations at a high level (see which shape executed and in which shape an error occurred).
  • It can sometimes be useful to debug orchestrations at a lower level (individual expressions in the .odx source)
  • This can be done by attaching the Visual Studio debugger to BTSNTSvc.exe, but only after applying the hack described in SymbolicDebuggingForOrchestrations. This hack allows stepping through the C# code generated for the orchestration. However, the generated C# code is not designed for readability, so it can be hard to understand.
  • A better solution would be to include #line pragmas in the C# code generated from orchestrations, to correlate each C# line with the original line in the .odx file (XLang/s code).
  • The solution should automatically perform all the steps described in the SymbolicDebuggingForOrchestrations article, instead of requiring developers to do them manually (or write their own scripts). This would be done when the project is compiled in the Debug (Development) configuration, but not in the Release (Deployment) configuration.


Some of these may require ambitious/complex changes that could break legacy tools, so I doubt all of them will be implemented; but it would still be nice if a few were. Also, some of these enhancements/features/critiques are also relevant to WF and Dublin, where it's probably not too late to start applying them.

Saturday, January 17, 2009

M DSLs: Using DSL source line information at runtime

An interesting feature I included in my previous “M-to-C#” DSL example was the ability to integrate the DSL scripts with the Visual Studio debugger and with exception stack traces. When an exception is thrown, I wanted its stack trace to refer to the DSL code, not to the generated C# code or to the interpreter’s code. I also wanted to be able to set breakpoints in the DSL code and step through it in the Visual Studio debugger.

DebuggerIntegration

Unlike with an internal DSL, this did not happen “by default”: a mapping between the DSL script’s source lines and the generated C# must be specified explicitly, using the C# “#line” pragma. For example, the following shows a generated C# class from my previous example DSL, including these pragmas:

using System;
using MAuthorizationDSL.Core;

public class IncidentReport_Comments_Edit_AuthRules : AbstractAuthorizationRule
{
#line 19 "../../../MAuthorizationDSL\IncidentReport_Comments.auth"
    public void Evaluate(string user, Incident incident, Comment comment)
    {
#line 21 "../../../MAuthorizationDSL\IncidentReport_Comments.auth"
        if (UserIsInRole(user, "PlantSupervisor"))
        {
#line 23 "../../../MAuthorizationDSL\IncidentReport_Comments.auth"
            Allow("Plant supervisors can edit any comment at any time");
        }
        else
        {
#line 27 "../../../MAuthorizationDSL\IncidentReport_Comments.auth"
            if (UserIsAuthorOf(user, comment))
            {
#line 29 "../../../MAuthorizationDSL\IncidentReport_Comments.auth"
                if (DateTime.Now < incident.EndTime + TimeSpan.FromHours(12))
                {
#line 30 "../../../MAuthorizationDSL\IncidentReport_Comments.auth"
                    Allow("Comments can be edited up to 12 hours after the end of an incident.");
                }
                else
                {
#line 32 "../../../MAuthorizationDSL\IncidentReport_Comments.auth"
                    Deny("The incident has ended more than 12 hours ago, its comments can't be edited anymore.");
                }
            }
            else
            {
#line 36 "../../../MAuthorizationDSL\IncidentReport_Comments.auth"
                Deny("User can't edit another user's comment");
            }
        }
    }
}
This was generated from the following DSL script:
Action Edit (user, incident, comment) {
    if (User is "PlantSupervisor") {
        Allow("Plant supervisors can edit any comment at any time")
    }
    else {
        if (User IsAuthorOf comment) {
            if (DateTime.Now < incident.EndTime + 12 hours)
                Allow("Comments can be edited up to 12 hours after the end of an incident.")
            else
                Deny("The incident has ended more than 12 hours ago, its comments can't be edited anymore.")
        }
        else {
            Deny("User can't edit another user's comment")
        }
    }
}
The C# was generated following a series of steps, which I explained in my previous post. In order to have the "#line" pragmas in the generated code, I had to pass the source line information along through all of these steps.
  • In the first step (parsing the DSL script into an MGraph representation), all resulting nodes implement the System.Dataflow.ISourceLocation interface, which allows each node to reference a specific source line and column number. (This interface is part of the M tools for parsing a DSL script into an MGraph, so all I needed to do was make use of it.)

  • For conversion of MGraph to XAML, I had to modify some of the MGraphXamlReader code to include ISourceLocation info in the generated XAML.
    <n2:IfThenElseStatement.ThenBranch>
        <n2:MethodCallStatement
            FileName="../../../MAuthorizationDSL\IncidentReport_Comments.auth"
            Span="(893:30,17)-(969:30,93)"
            Name="Allow">
            <n2:MethodCallStatement.Parameters>
                <n3:StringLiteralExpression
                    FileName="../../../MAuthorizationDSL\IncidentReport_Comments.auth"
                    Span="(899:30,23)-(968:30,92)"
                    Value="&quot;Comments can be edited up to 12 hours after the end of an incident.&quot;" />
            </n2:MethodCallStatement.Parameters>
        </n2:MethodCallStatement>
    </n2:IfThenElseStatement.ThenBranch>
  • This adds some noise to the XAML code, but I don’t think it’s really a problem: the XAML is an intermediate format between the MGraph representation and the generated C#, so it’s not intended to be human-readable (even though reading it can help in investigating some conversion issues).

    The two modifications required for this are:

    • Add an inputFileName parameter to methods in MGraphXamlReader.DynamicParserExtensions
    • Add two methods to the MGraphXamlReader class:
      private IEnumerable GetSourceLocation(ISourceLocation location)
      {
          var typeReference = GetTypeReference(location);

          var fileNameMemberIdentifier = GetMemberIdentifier("FileName", typeReference);
          var spanMemberIdentifier = GetMemberIdentifier("Span", typeReference);

          yield return new XamlStartMemberNode { MemberIdentifier = fileNameMemberIdentifier };
          yield return new XamlAtomNode { Value = location.FileName };
          yield return new XamlEndMemberNode { MemberIdentifier = fileNameMemberIdentifier };

          yield return new XamlStartMemberNode { MemberIdentifier = spanMemberIdentifier };
          yield return new XamlAtomNode { Value = ConvertSourceSpan(location.Span) };
          yield return new XamlEndMemberNode { MemberIdentifier = spanMemberIdentifier };
      }

      private object ConvertSourceSpan(SourceSpan span)
      {
          var context = new Context(this);
          var converter = new SourceSpanConverter();

          return converter.ConvertToString(context, span);
      }
      The GetSourceLocation method creates XAML nodes for the FileName and Span properties of the ISourceLocation interface.

      The ConvertSourceSpan method uses the System.Dataflow.SourceSpanConverter class to convert the Span to a concise string representation, for example: (893:30,17).

    • Use GetSourceLocation to include these two properties in each AST node:
      if (node is ISourceLocation)
          foreach (var sourceLocNode in GetSourceLocation(node as ISourceLocation))
              yield return sourceLocNode;
  • For conversion of XAML to C# using my CodeGeneratingAstVisitor class, each AST node class now also needs to implement the ISourceLocation interface.



    For example, in the code for generating C# for an if/then/else:

    public override void CaseIfThenElseStatement(IfThenElseStatement node)
    {
        generator.SetCurrentSourceLine(node.FileName, node.Span.Start.Line);

        generator.WriteIndent();
        generator.Write("if (");
        node.Condition.Visit(this);
        generator.Write(")");
        generator.WriteLine();
        ...
    }

    The SetCurrentSourceLine method writes the "#line" pragma to the generated C#:

    public void SetCurrentSourceLine(string sourceFile, int sourceLine)
    {
        WriteLine();
        WriteLine(String.Format(@"#line {0} ""{1}""", sourceLine, sourceFile));
    }
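As an aside for readers unfamiliar with the pragma: `#line` tells the C# compiler to report subsequent lines as coming from a different file and line number. A minimal illustration (the file names are made up, and this snippet intentionally fails to compile):

```csharp
// Demo.cs - illustrative only. Because of the #line pragma, the compile
// error below is reported against Rules.auth line 23 rather than Demo.cs.
class Demo
{
    static void Main()
    {
#line 23 "Rules.auth"
        int count = "not a number"; // error CS0029, reported at Rules.auth(23)
#line default                       // restore normal line mapping
    }
}
```

The debugger uses the same mapping, which is what makes stepping through the DSL source possible.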

Conclusion

With the "#line" pragmas in the generated C#, the compiled assembly's debugging information reference the DSL source. I believe this was an important feature and researched it for two main reasons:

  • First, for the principle of developer productivity: problems are easier to investigate if the source code line responsible for the problem can be quickly identified.

  • But also to demonstrate that it could be done. An annoyance with BizTalk Server is that exceptions reference lines of generated C# code instead of lines in the code written by the developer. In BizTalk Server, there are two options for debugging orchestrations:
    • The orchestration debugger, which allows setting breakpoints and stepping through the visual representation of an orchestration
    • The Visual Studio debugger, which works with the generated C# code (requires additional setup described in Symbolic Debugging for Orchestrations).

    It would be nice if future versions of BizTalk made use of C# "#line" pragmas to allow debugging directly in the ODX files (XLANG/s code). It could be argued that XLANG/s code was not intended to be read by developers and therefore referencing the C# instead of XLANG/s doesn’t really matter, but I disagree with that. Even though it’s hidden underneath a visual designer, XLANG/s is still a very interesting language, and in many cases it’s easier to read and understand than its visual representation.

I wrote this post primarily to report my modifications to the MGraphXamlReader example. As in my previous post, I’m not sure if the example is beyond the scope of M’s intended use, but it can still be interesting to others who may want to use M as a tool for writing imperative-style external DSLs.

Saturday, January 10, 2009

C# code generation using MGrammar

The Microsoft Oslo CTP has some very interesting tools, especially the MGrammar part of the M language, which allows defining textual Domain Specific Languages (DSLs). This is a refreshing change from Microsoft’s past obsession with graphical tools. Most published examples of this tool focus on DSLs for defining data structures, which is probably great for declarative-style DSLs. However, I believe it could also be useful for defining application behavior, in imperative-style languages. I don’t know yet if imperative-style languages are really part of MGrammar’s intended use, but I did some experimentation to find out what work would be required to achieve this.

I also had previous experience with SableCC and theoretical knowledge of internal DSLs defined with Boo, and wanted to compare these tools. Note that I may be biased in doing things in a SableCCish way, and there may be simpler/better ways to do what I’m trying to achieve with MGrammar, but for now I’ll try to find what I can or can’t do with this new tool/toy and evaluate its level of complexity.

A great example that helped me learn M was Torkel Ödegaard’s WatiN DSL using MGrammar. Here I’ll try to go one step further than his example, and generate executable C# code from the generated AST (Abstract Syntax Tree).

Why C# instead of simply interpreting?

  • Objects used by the DSL can be created with plain old C# code, using the same techniques and design principles that would be used in a standard C# project
  • Increase developer productivity by providing a better debugging experience:
    • Set breakpoints in the DSL script’s source code and step into the DSL script (not the generated C# code, and not the interpreter code) using Visual Studio’s debugger

Sample DSL script running in Visual Studio's debugger

  • If the DSL code throws an exception, the stack trace references a line from the DSL script’s source code. In the case of an interpreter, the stack trace would instead reference the interpreter’s source code.

This C# code generation will be done in the following steps:

  1. Parse the DSL script to an MGraph representation
  2. Convert the MGraph representation to XAML
  3. Convert the XAML representation to strongly typed C# objects (giving an Abstract Syntax Tree).
  4. Visit each of the AST’s nodes
  5. Match each AST node against one of the Visitor’s methods. Each of these methods contains transformation code to generate C# code from a node’s properties.

Diagram: steps for generating C# from the DSL script

Steps 2 and 3 are done using the MGraphXamlReader code sample, while steps 4 and 5 are additional work, in which I reproduced SableCC’s patterns (a variation of the GoF Visitor pattern).
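Tied together, the five steps look roughly like the following driver code. Note that the helper names here are illustrative placeholders, not the actual MGraphXamlReader API:

```csharp
// Hypothetical driver for the five steps (names are illustrative only):
string script = File.ReadAllText("IncidentReport_Comments.auth");
object mgraph = ParseDslToMGraph(script);        // step 1: DSL source -> MGraph
Action ast = ConvertMGraphToAst(mgraph);         // steps 2-3: MGraph -> XAML -> typed AST
var visitor = new CodeGeneratingAstVisitor();
ast.Visit(visitor);                              // steps 4-5: traverse AST, emit C#
string generatedCSharp = visitor.GeneratedCode;  // (GeneratedCode is also illustrative)
```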

First, I need a sample DSL. I took inspiration from the Authorization Rules DSL in Ayende Rahien’s Building Domain Specific Languages in Boo book, and adapted it with rules specific to a project I’ve been working on. I used a C#-like syntax, but MGrammar also allows defining more esoteric syntaxes.

Action Edit (user, incident, comment) {
    if (User is "PlantSupervisor") {
        Allow("Plant supervisors can edit any comment at any time")
    }
    else {
        if (User IsAuthorOf comment) {
            if (DateTime.Now < incident.EndTime + 12 hours)
                Allow("Comments can be edited up to 12 hours after the end of an incident.")
            else
                Deny("The incident has ended more than 12 hours ago, its comments can't be edited anymore.")
        }
        else {
            Deny("User can't edit another user's comment")
        }
    }
}

This DSL allows customizing which UI elements are enabled or disabled depending on various rules. In the above example, the script defines rules to determine whether a user is allowed to Edit a Comment attached to an Incident. Some users may be allowed to Edit only some of the comments on one screen. Buttons for actions that the user can’t perform are disabled, with a tooltip explaining why the action can’t be performed.

Example user interface, with different rules applied to each action button
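The UI wiring for a single button could look something like this (a hypothetical WinForms-style sketch; `rules`, `currentUser`, `editButton` and `toolTip` are assumed to exist, and the `rules` methods follow the API shown in the test near the end of this post):

```csharp
// Hypothetical UI wiring: enable the Edit button only if the rules allow it,
// and surface the rule's explanation as a tooltip when the action is denied.
bool allowed = rules.IsAllowed("IncidentReport_Comments", "Edit", currentUser, incident, comment);
editButton.Enabled = allowed;
if (!allowed)
{
    toolTip.SetToolTip(editButton,
        rules.WhyAllowedOrDenied("IncidentReport_Comments", "Edit", currentUser, incident, comment));
}
```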

I then wrote an MGrammar for this DSL (using Intellipad, whose live MGrammar Preview Mode greatly helped my learning process):

module Nootaikok
{
    import Language;
    import Microsoft.Languages;

    export Authorization;

    language Authorization
    {
        syntax Main
            = action:Action*
            => action;

        syntax Action
            = TAction actionName:Identifier parameters:ActionParameters? rules:CodeBlock
            => Action { Name { actionName }, Parameters { parameters }, Rules { rules } } ;

        syntax ActionParameters
            = '(' parameterList:ParameterList? ')'
            => parameterList;

        syntax ParameterList
            = parameter:Identifier "," parameterList:ParameterList => [ parameter, valuesof(parameterList) ]
            | parameter:Identifier => [ parameter ] ; // last parameter

        syntax ParameterValues
            = '(' parameterValueList:ParameterValueList? ')'
            => parameterValueList;

        syntax ParameterValueList
            = v:ParameterValue "," l:ParameterValueList => [ v, valuesof(l) ]
            | v:ParameterValue => [ v ] ; // last parameter

        syntax ParameterValue
            = e:Expression => e;

        syntax CodeBlock
            = '{' statements:Statement* '}' => statements
            | statement:Statement => statement;

        syntax Statement
            = s:IfThenStatement => s
            | s:IfThenElseStatement => s
            | s:MethodCallStatement => s;

        syntax IfThenStatement
            = 'if' '(' condition:Expression ')' then:CodeBlock
            => IfThenElseStatement { Condition{condition}, ThenBranch{then} } ;

        syntax IfThenElseStatement
            = 'if' '(' condition:Expression ')' then:CodeBlock 'else' @else:CodeBlock
            => IfThenElseStatement { Condition{condition}, ThenBranch{then}, ElseBranch{@else} } ;

        syntax MethodCallStatement
            = name:Identifier parameters:ParameterValues
            => MethodCallStatement { Name{name}, Parameters{parameters} } ;

        syntax Expression
            = stringLiteral:StringLiteral => StringLiteralExpression { Value{stringLiteral} }
            | precedence 1: TUser TIs roleName:StringLiteral => UserIsInRoleExpression { Role{roleName} }
            | precedence 1: TUser TIsAuthorOf TComment => UserIsAuthorOfExpression { AuthorOf{"Comment"} }
            | precedence 1: TUser TWasWorkingIn range:Range => UserWasWorkingInExpression { DateTimeRange{range} }
            | precedence 1: @left:Expression '<' @right:Expression => LessThanExpression { Left{@left}, Right{@right} }
            | precedence 1: @left:Expression '>' @right:Expression => GreaterThanExpression { Left{@left}, Right{@right} }
            | precedence 1: @left:Expression '<=' @right:Expression => LessThanOrEqualExpression { Left{@left}, Right{@right} }
            | precedence 1: @left:Expression '>=' @right:Expression => GreaterThanOrEqualExpression { Left{@left}, Right{@right} }
            | precedence 2: @left:Expression '+' @right:Expression => AddExpression { Left{@left}, Right{@right} }
            | precedence 2: @left:Expression '-' @right:Expression => SubtractExpression { Left{@left}, Right{@right} }
            | precedence 3: timespan:TimeSpan => TimeSpanExpression { valuesof(timespan) }
            | precedence 4: name:Identifier => VariableReferenceExpression { Name{name} }
            | precedence 4: propertyName:QualifiedIdentifier => PropertyReadExpression { ObjectAndPropertyName{propertyName} } ;

        syntax Range
            = '[' rangeStart:Expression '..' rangeEnd:Expression ']'
            => Range { Start{rangeStart}, End{rangeEnd} } ;

        syntax TimeSpan
            = days:Integer TDays => TimeSpan { Days{days} }
            | hours:Integer THours => TimeSpan { Hours{hours} }
            | minutes:Integer TMinutes => TimeSpan { Minutes{minutes} }
            | seconds:Integer TSeconds => TimeSpan { Seconds{seconds} } ;

        token IdentifierBegin = '_' | Letter;
        token IdentifierCharacter = IdentifierBegin | '$' | DecimalDigit;
        identifier token Identifier = IdentifierBegin IdentifierCharacter*;
        token QualifiedIdentifier = Identifier ('.' Identifier)+;

        @{Classification["Keyword"]} token TAction = 'Action';
        @{Classification["Keyword"]} token TUser = 'User';
        @{Classification["Keyword"]} token TIs = 'is';
        @{Classification["Keyword"]} token TDays = 'days';
        @{Classification["Keyword"]} token THours = 'hours';
        @{Classification["Keyword"]} token TMinutes = 'minutes';
        @{Classification["Keyword"]} token TSeconds = 'seconds';
        @{Classification["Keyword"]} token TIsAuthorOf = 'IsAuthorOf';
        @{Classification["Keyword"]} token TWasWorkingIn = 'WasWorkingIn';
        @{Classification["Keyword"]} token TComment = 'comment';
        @{Classification["Keyword"]} token TIncident = 'incident';

        token Letter = 'a'..'z' | 'A'..'Z';
        token DecimalDigit = '0'..'9';
        token Integer = DecimalDigit+;

        interleave Skippable
            = Base.Whitespace+
            | Language.Grammar.Comment;

        syntax StringLiteral
            = val:Language.Grammar.TextLiteral => val;
    }
}

This grammar can be used to generate a parser, which converts the DSL script source to a set of MGraph nodes. These nodes are generic objects, which would be complex to manipulate in C# code. This is where MGraphXamlReader helps by generating a XAML representation of the MGraph, and by then converting that XAML representation to C# object instances.

To use this conversion from MGraph to XAML to C# objects, we first need to manually define a C# class for each node in the object graph. For example, an if/then/else is defined as:

using System.Collections.Generic;
using MAuthorizationDSL.CodeGenerator.Ast.AstVisitor;
using MAuthorizationDSL.CodeGenerator.Ast.Expressions;

namespace MAuthorizationDSL.CodeGenerator.Ast.Statements
{
    public class IfThenElseStatement : AbstractAstNode, IStatement
    {
        public IfThenElseStatement()
        {
            ThenBranch = new List<IStatement>();
            ElseBranch = new List<IStatement>();
        }

        public IExpression Condition { get; set; }
        public IList<IStatement> ThenBranch { get; protected set; }
        public IList<IStatement> ElseBranch { get; protected set; }
    }
}

Once we have defined classes for all AST nodes, the AST can be generated from the DSL script source.

Diagram: the AST generated for the example script

The AST can also be represented as XAML, again using MGraphXamlReader:

<n1:Action Name="Edit">
    <n1:Action.Parameters>
        <n0:String>user</n0:String>
        <n0:String>incident</n0:String>
        <n0:String>comment</n0:String>
    </n1:Action.Parameters>
    <n1:Action.Rules>
        <n2:IfThenElseStatement>
            <n2:IfThenElseStatement.Condition>
                <n3:UserIsInRoleExpression Role="&quot;PlantSupervisor&quot;" />
            </n2:IfThenElseStatement.Condition>
            <n2:IfThenElseStatement.ThenBranch>
                <n2:MethodCallStatement Name="Allow">
                    <n2:MethodCallStatement.Parameters>
                        <n3:StringLiteralExpression Value="&quot;Plant supervisors can edit any comment at any time&quot;" />
                    </n2:MethodCallStatement.Parameters>
                </n2:MethodCallStatement>
            </n2:IfThenElseStatement.ThenBranch>
            <n2:IfThenElseStatement.ElseBranch>
                <n2:IfThenElseStatement>
                    <n2:IfThenElseStatement.Condition>...</n2:IfThenElseStatement.Condition>
                    <n2:IfThenElseStatement.ThenBranch>...</n2:IfThenElseStatement.ThenBranch>
                    <n2:IfThenElseStatement.ElseBranch>...</n2:IfThenElseStatement.ElseBranch>
                </n2:IfThenElseStatement>
            </n2:IfThenElseStatement.ElseBranch>
        </n2:IfThenElseStatement>
    </n1:Action.Rules>
</n1:Action>

This XAML is an intermediate representation before the C# objects are created. It’s very verbose, but it can be helpful when debugging failures in generating the objects. For example, we see that MGraphXamlReader expects to assign the value “Edit” to the “Name” property of the “Action” instance. If that property is not defined (or has a different name), the instantiation of the AST classes will fail with a non-obvious error. Looking at the XAML can help in investigating the mismatch between the MGraph and the strongly typed AST classes, and in applying the necessary fixes to either the MGrammar or the AST classes.

Once we have the object representation (the AST), we need to traverse the tree by visiting each node. When a node is visited, we can then map from that node’s properties to C# code, and we then continue going deeper in the tree by visiting the node’s child nodes.

For example, the following XAML node:

<n3:UserIsInRoleExpression  Role="&quot;PlantSupervisor&quot;" />

will be mapped to:

this.UserIsInRole(user, "PlantSupervisor")

The SableCCish way to traverse the AST is a variation of the GoF Visitor pattern, and I’m going to use a similar pattern here. First, a base visitor class needs to be created. This base class has a method for each possible node in the AST, in which it calls the Visit method on each of its child nodes. This class is tightly coupled to all the AST nodes (it needs to know the structure of each one of them). Therefore, depending on the complexity of the DSL, this class can be painful to write and maintain. In the case of SableCC, developers are freed from this burden by having the Visitor class generated automatically, but it has to be written manually with M (although it could probably be generated with M as well).

public class AstVisitor : IAstVisitor
{
    public virtual void CaseIfThenElseStatement(IfThenElseStatement node)
    {
        if (node.Condition != null)
            node.Condition.Visit(this);

        if (node.ThenBranch != null)
        {
            foreach (var expression in node.ThenBranch)
                expression.Visit(this);
        }

        if (node.ElseBranch != null)
        {
            foreach (var expression in node.ElseBranch)
                expression.Visit(this);
        }
    }
    ...
}

Each AST node also needs to implement the IAstVisitable interface, so for the previously shown IfThenElseStatement example, we need to add:

public override void Visit(IAstVisitor visitor)
{
    visitor.CaseIfThenElseStatement(this);
}
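For completeness, the two interfaces implied by this pattern would look something like the following (a sketch based on the names used above; the actual definitions in the sample code may differ):

```csharp
// Sketch of the visitor-pattern interfaces used in this article.
public interface IAstVisitor
{
    void CaseIfThenElseStatement(IfThenElseStatement node);
    // ...one Case method per AST node type
}

public interface IAstVisitable
{
    void Visit(IAstVisitor visitor);
}
```

Double dispatch happens here: the AST node picks the Case method matching its concrete type, so the visitor never needs to inspect node types itself.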

The AstVisitor class is a base class which defines how to traverse the tree. The transformations to C# code are applied in a class inheriting from AstVisitor, which overrides the visitor methods for nodes where a transformation is needed. (A helpful analogy to better understand this is a set of XSLT templates which match and transform XML DOM nodes.) For example, CaseIfThenElseStatement is overridden in CodeGeneratingAstVisitor as:

public override void CaseIfThenElseStatement(IfThenElseStatement node)
{
    generator.SetCurrentSourceLine(node.FileName, node.Span.Start.Line);
    generator.WriteIndent();
    generator.Write("if (");
    node.Condition.Visit(this); // go further down in the AST by visiting the Condition node
    generator.Write(")");
    generator.WriteLine();
    generator.WriteIndentedLine("{");
    generator.IndentLevel++;
    if (node.ThenBranch != null)
    {
        // go further down in the AST by visiting the ThenBranch nodes
        foreach (var expression in node.ThenBranch)
            expression.Visit(this);
    }
    generator.IndentLevel--;
    generator.WriteIndentedLine("}");
    if (node.ElseBranch != null && node.ElseBranch.Count > 0)
    {
        generator.WriteIndentedLine("else");
        generator.WriteIndentedLine("{");
        generator.IndentLevel++;
        // go further down in the AST by visiting the ElseBranch nodes
        foreach (var expression in node.ElseBranch)
            expression.Visit(this);
        generator.IndentLevel--;
        generator.WriteIndentedLine("}");
    }
}

Finally, this produces the following C# code:

using System;
using MAuthorizationDSL.Core;

public class IncidentReport_Comments_Edit_AuthRules : AbstractAuthorizationRule
{
    public void Evaluate(string user, Incident incident, Comment comment)
    {
        if (UserIsInRole(user, "PlantSupervisor"))
        {
            Allow("Plant supervisors can edit any comment at any time");
        }
        else
        {
            if (UserIsAuthorOf(user, comment))
            {
                if (DateTime.Now < incident.EndTime + TimeSpan.FromHours(12))
                {
                    Allow("Comments can be edited up to 12 hours after the end of an incident.");
                }
                else
                {
                    Deny("The incident has ended more than 12 hours ago, its comments can't be edited anymore.");
                }
            }
            else
            {
                Deny("User can't edit another user's comment");
            }
        }
    }
}

The generated IncidentReport_Comments_Edit_AuthRules class inherits from the AbstractAuthorizationRule class. This AbstractAuthorizationRule is another class we’ll need to write, in which we define the methods invoked by the DSL scripts:

  • UserIsInRole
  • UserIsAuthorOf
  • Allow
  • Deny

This follows the “Anonymous Base Class” DSL pattern. I’m not showing the class’s code here, because it’s a simple proof-of-concept implementation that returns hard-coded results, but it could be modified to really perform the appropriate checks against a database or Active Directory.

All that’s left is to compile that C# code, load the generated assembly (ideally in a separate AppDomain), create an instance of the IncidentReport_Comments_Edit_AuthRules class using reflection (a separate C# class is generated for each rule), and execute its Evaluate method.

In my current code, I simply call the C# compiler and load the generated assembly in the current AppDomain. I also neglect several “infrastructure” considerations, since this is still proof-of-concept code:

  • caching
  • instance management
  • batch compilation
  • recompiling and reloading modified scripts at runtime

These considerations are better described in Chapter 7 of Ayende Rahien’s Building Domain Specific Languages in Boo book. This chapter explains these requirements and how the Rhino DSL library fulfills them. This library can be used for Boo DSLs, and a similar library would need to be written for M-to-C# DSLs before using such a DSL in a production application.
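The compile-load-invoke cycle itself is straightforward with the CodeDom compiler API. Here is a minimal self-contained sketch, with a trivial stand-in for the generated rule class and none of the infrastructure concerns listed above (no caching, no separate AppDomain):

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class CompileAndRunDemo
{
    static void Main()
    {
        // A trivial stand-in for the generated rule class (no base class here,
        // to keep the sketch self-contained).
        const string generatedCSharp = @"
            public class DemoRule
            {
                public string Evaluate(string user) { return ""denied: "" + user; }
            }";

        // Compile the generated source into an in-memory assembly.
        var provider = new CSharpCodeProvider();
        var parameters = new CompilerParameters { GenerateInMemory = true };
        CompilerResults results = provider.CompileAssemblyFromSource(parameters, generatedCSharp);
        if (results.Errors.HasErrors)
            throw new InvalidOperationException(results.Errors[0].ToString());

        // Instantiate the rule class and invoke Evaluate via reflection.
        Type ruleType = results.CompiledAssembly.GetType("DemoRule");
        object rule = Activator.CreateInstance(ruleType);
        object result = ruleType.GetMethod("Evaluate").Invoke(rule, new object[] { "Operator1" });
        Console.WriteLine(result); // denied: Operator1
    }
}
```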

The rules can then be consumed in the C# UI code. For example, the following test shows how the "Edit" action on the "IncidentReports_Comments" functionality was denied to "Operator1" for a Comment that was created by "Operator2". The action was denied by a rule defined in the DSL. The AuthorizationRules class is responsible for loading and executing the DSL scripts for the requested functionality.

[Test]
public void OperatorCannotEditOtherOperatorComments()
{
    var comment = new Comment() { Author = "Operator2", CommentText = "test" };
    var incident = new Incident()
    {
        StartTime = DateTime.Now.AddDays(-5),
        EndTime = DateTime.Now.AddDays(-4),
        Description = "test",
        Comments = new List<Comment>() { comment }
    };

    var rules = AuthorizationRules.GetInstance();
    Assert.That(
        rules.WhyAllowedOrDenied("IncidentReport_Comments", "Edit", "Operator1", incident, comment),
        Is.EqualTo("User can't edit another user's comment")
    );
    Assert.That(!rules.IsAllowed("IncidentReport_Comments", "Edit", "Operator1", incident, comment));
}

Conclusion

That was fairly complex work, involving lots of steps. As I’ve said before, I was biased by knowledge of SableCC, and there may be simpler ways to achieve the same results with M (or simpler ways may be introduced in future versions of M).

One boring (repetitive) step was manually creating strongly typed AST and Visitor classes for the DSL. A solution for that may be to automatically generate these classes from an MGrammar definition. Alternatively, it may be possible to work directly with the MGraph nodes in a dynamic language on the .NET DLR, or with C# 4.0’s dynamic typing. This would completely avoid the creation of the AST and Visitor classes (however, I don’t know if this is currently possible, or even if it is an intended feature for a future version).

I’ve also overlooked several considerations that would need to be solved in a production DSL. A good inspiration to solve these considerations can be found in the Rhino DSL library.

Even though M is a very interesting tool, it may not be the best choice for all cases. My example defines an “external DSL”, in which I have great flexibility over the syntax. Another approach would be an “internal DSL”, which is hosted inside another language. Internal DSLs usually give less flexibility on the syntax: the DSL scripts will usually have some similarity to the host language’s syntax. For example, an internal DSL defined in Boo will have a Python-like syntax, and it would be hard to give it a C#-like syntax instead. However, I believe this is a non-issue in most cases, as these languages still give lots of flexibility (for example by providing metaprogramming facilities or by allowing manipulation of the parsed AST). An external DSL’s extreme syntax flexibility can be useful when the syntax is already defined (if we want to integrate with a “legacy DSL”), but otherwise an internal DSL is probably a simpler solution. An internal DSL uses the host language’s compiler to generate MSIL code, so we don’t have to worry about creating an AST, visiting its nodes and generating C# code. An internal DSL also gives us the benefit of being integrated with that language’s tools (refactoring tools, IntelliSense, debugger…) “by default”.

M may also not be the best choice for imperative-style external DSLs. Other tools, such as SableCC, automatically generate the code that we have to write manually in M to achieve the same results. M DSLs can still be written very productively using Intellipad’s almost realtime feedback. Therefore, I would say M is appropriate for simple DSLs, where writing a few AST classes and Visitor methods manually is not an issue, but I believe SableCC would be more appropriate for more complex languages because it generates these classes automatically. (For example, the C# or Java languages could be defined using SableCC grammars.)

Also, we can see from M’s published examples that there is an intense focus on data. This may indicate that imperative-style DSLs are not really an intended use case for the tool, but it can still be very useful for declarative-style DSLs.


The full source code for this example can be downloaded here: MAuthorizationDSL.zip.