Monday, September 10, 2012

Packt MCTS BizTalk certification e-copy winners

This is a follow-up post to the Win an e-copy of the Packt MCTS BizTalk certification book post.  Thank you to all who entered.  I enjoyed reading why you were interested in pursuing certification. The following people have won an e-copy of the book:
  • Johan Älverdal
  • Kevin Molloy
  • Donie Treadaway
I have forwarded your email addresses to the publisher and they will be in touch.

Wednesday, August 22, 2012

Win A Free Copy of Packt's Microsoft BizTalk Server 2010 Certification Guidebook

The author team is pleased to announce that we have teamed up with the publisher, Packt Publishing, to organize a giveaway.  Three lucky winners stand a chance to win an e-copy of our book.


Overview of Microsoft BizTalk Server 2010 Certification Guide
• Includes a comprehensive set of test questions and answers that will prepare you for the actual exam.

• The layout and content of the book closely matches that of the skills measured by the exam, which makes it easy to focus your learning and maximize your study time in areas where you need improvement.

Read more about this book and download a free sample chapter: http://www.packtpub.com/mcts-microsoft-biztalk-server-2010-certification-guide/book

Also, feel free to check out some of the community reviews of the book.
How to Enter?
All you need to do is email MctsBTSBook@hotmail.com and let us know in a couple of sentences why you would like to get your BizTalk certification.

Deadline:
The contest will close on Friday, September 7, 2012. Winners will be announced on this blog and will be contacted by email.

Friday, July 13, 2012

Part 2: BizTalk + SignalR

In my previous post, we discussed some of the reasons why BizTalk and SignalR may complement each other in some situations.  I will now walk through the implementation of this OMS scenario.

I am going to create a Call Taker Web application that will communicate with a BizTalk-exposed WCF service. Once BizTalk receives the request message, we will send a response acknowledgement message back to the Call Taker Web application. BizTalk will then communicate with the OMS system.  “In real life” this will involve WebSphere MQ, but for the purpose of this blog post I am simply going to use the FILE Adapter and a folder that will act as my queue.  Once we have finished communicating with the OMS, we want to send a status update message to the Call Taker application using SignalR.  This update will include the Estimated Time of Restore (ETR) for the customer who has called in.

image

 

The Bits

Other than a base BizTalk install, we are going to need the SignalR bits.  As in most cases, NuGet is your friend.  However, as you probably know, BizTalk requires any “helper” assemblies to be in the GAC, which means we need to sign the SignalR.Client assembly with a Strong Name key.  Since the NuGet package is not signed, I suggest you download the source from here and build it yourself.  You only need to do this for the SignalR.Client assembly.

The Solution

There are really 3 projects that make up this solution:

image

Let’s start with the BizTalk application since we are going to need to expose a WCF Service that the Web Application is going to consume.

In total we are going to need 4 schemas:

  • CallTakerRequest – This schema will be exposed to our Web Application as a WCF Service.  In this message we want to capture customer details.

image

  • CallTakerResponse – This will be our acknowledgement message that we will send back to the WCF client.  The purpose is to provide the Web Application with assurance that we have received the request message successfully and that we “promise” to process it.

image

  • CreateCallRequest – This message will be sent to our OMS system.  Note the msgid field, which has a promoted property.  Since we are going to use correlation to tie the CreateCallRequest and CreateCallResponse messages together, we will use this field to bind the messages.

image

  • CreateCallResponse – When our OMS system responds back to BizTalk, it will include the same msgid value that was included in the request.  This field will also be promoted.  The other two elements (ETR and OrderNumber) are distinguished so that we can pass them off to the SignalR helper easily.

image

We will also need two maps:

  • CallTakerRequest_to_CallTakerResponse – The purpose of this map is to generate a response that we can send to the Web Client.  We will simply use a couple of functoids to set a status of “True” and provide a timestamp.

image

  • CallTakerRequest_to_CreateCallRequest – This map will take our request message from our Web App and transform it into an instance of our OMS Create Call message.  For the msgid, I am simply hardcoding a value here to make my testing easier.  In real life you need to ensure you have a unique value.

image
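As a side note on that unique value: a GUID is the usual choice.  Here is a minimal sketch of a helper you could call from, say, a Scripting functoid instead of hardcoding the value (the class and method names are my own invention, not part of the solution above):

```csharp
using System;

// Hypothetical helper - not part of the original solution. A Scripting functoid
// in the map could call something like this instead of hardcoding the msgid.
public static class MessageIdHelper
{
    public static string NewMsgId()
    {
        // "N" format: 32 hex digits, no dashes - fits a simple string element.
        return Guid.NewGuid().ToString("N");
    }
}
```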

  • We now need an Orchestration to tie all of these artifacts together.  The Orchestration is pretty straightforward.  However, as I mentioned when discussing the CreateCall schemas, we have promoted the msgid element.  The reason for this is that when we receive the message back from the OMS system, we want it to match up with the same Request instance that was sent to the OMS. To support this we need to create a CorrelationType and CorrelationSet.

image

The final Expression shape, labelled ‘Send SignalR Update’, is of particular interest to us since it calls a helper method that will send our update to our Web Application via the SignalR API.

image

This is a good segue into diving into the C# Class Library project called BizTalkRHelper.

BizTalkRHelper Project

Since we are going to start interfacing with SignalR within this project, we are going to need a few project references, which we can get from NuGet.  Recall, though, that we need a signed SignalR.Client assembly, so we will need to compile its source code using a Strong Name key.  This can be the same key as the one that was used in the BizTalk project.  As I mentioned before, we need to GAC this assembly, hence the requirement for the Strong Name key.  We will also need to GAC the Newtonsoft.Json assembly, but this one does not require any additional signing on our part.
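For reference, the signing and GAC steps use the standard .NET SDK tools.  A rough sketch (file names and paths are placeholders; SignalR.Client must be rebuilt from source with the key applied before the gacutil step):

```
rem Generate a Strong Name key (or reuse the one from the BizTalk project)
sn.exe -k BizTalkR.snk

rem After rebuilding SignalR.Client from source with the key applied:
gacutil.exe /i SignalR.Client.dll

rem Newtonsoft.Json ships already signed, so it can be installed as-is:
gacutil.exe /i Newtonsoft.Json.dll
```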

Otherwise we can use the assemblies that are provided as part of the NuGet packages.

image

This project includes two classes:

  • Message – This class is used as our strongly typed message that we will send to our web app.

image

  • CallTakerNotification – Within this class we will establish a connection to our Hub, construct an instance of the message that we want to send our client, provide the name of what you can think of as a subscription, and then send the message.  Obviously in a real world scenario hardcoding this URI is not a good idea.  You may also recognize that this is the method that we are going to be calling from BizTalk, providing the Estimated Time of Restore (ETR) and the OrderNumber that we received from our OMS system.  This is why we identified these elements in the CreateCallResponse message as being distinguished.  This also means that our BizTalk project will require a reference to this BizTalkRHelper project so that we can call this assembly from our Orchestration.

image
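Since the class itself only appears as a screenshot, here is a sketch of roughly what the helper looks like against the SignalR 0.5 .NET client.  Treat it as an approximation: the hub URL is a placeholder, I am assuming the Message class exposes ETR and OrderNumber properties, and I am assuming the 0.5-era CreateProxy/Invoke API, so check it against whichever SignalR version you compiled.

```csharp
using SignalR.Client.Hubs;

namespace BizTalkRHelper
{
    public static class CallTakerNotification
    {
        // Called from the Orchestration's Expression shape, passing in the
        // distinguished ETR and OrderNumber fields from CreateCallResponse.
        public static void SendUpdate(string etr, string orderNumber)
        {
            // Hardcoded URI - acceptable for a demo, not for production.
            var connection = new HubConnection("http://localhost:52000/");
            var messenger = connection.CreateProxy("messenger");
            connection.Start().Wait();

            var message = new Message { ETR = etr, OrderNumber = orderNumber };

            // "CallTaker" is the group (subscription) that the web clients join.
            messenger.Invoke("BroadCastMessage", message, "CallTaker").Wait();
        }
    }
}
```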

CallTakerWeb Project

This project will be used to store our Web Application artifacts. Once again with this project we need to get the SignalR dependencies.  I suggest using NuGet and searching for SignalR.

image

Next, we need to add a couple of classes to our project.  These classes are really where the “heavy lifting” is performed.  I use the term “heavy” lightly considering how few lines of code we are actually writing vs the functionality that is being provided.   Note: I can’t take credit for these two classes as I have leveraged the following post: http://65.39.148.52/Articles/404662/SignalR-Group-Notifications.

  • Messenger – Provides helper methods that will allow us to:
    • Get All Messages
    • Broadcast a message
    • Get Clients

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

using System.Collections.Concurrent;
using SignalR;

namespace CallTakerWeb
{
    public class Messenger
    {
        private readonly static Lazy<Messenger> _instance = new Lazy<Messenger>(() => new Messenger());
        private readonly ConcurrentDictionary<string, BizTalkRHelper.Message> _messages =
            new ConcurrentDictionary<string, BizTalkRHelper.Message>();

        private Messenger()
        {
        }

        /// <summary>
        /// Gets the instance.
        /// </summary>
        public static Messenger Instance
        {
            get
            {
                return _instance.Value;
            }
        }


        /// <summary>
        /// Gets all messages.
        /// </summary>
        /// <returns></returns>
        public IEnumerable<BizTalkRHelper.Message> GetAllMessages()
        {
            return _messages.Values;
        }

        /// <summary>
        /// Broadcasts a message to a group.
        /// </summary>
        /// <param name="message">The message.</param>
        /// <param name="group">The group to broadcast to.</param>
        public void BroadCastMessage(Object message, string group)
        {
            // Dynamically invokes the client-side 'add' callback
            // (defined in BizTalkRMessengerHub.js) on every connection in the group.
            GetClients(group).add(message);
        }

        /// <summary>
        /// Gets the clients.
        /// </summary>
        /// <returns></returns>
        private static dynamic GetClients(string group)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<MessengerHub>();
            return context.Clients[group];
        }


    }
}

 

  • MessengerHub – Is used to:
    • Initialize an instance of our Hub
    • Add to a new group
    • Get All Messages
    • Broadcast a message to a group

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using SignalR.Hubs;
using BizTalkRHelper;

namespace CallTakerWeb
{
    [HubName("messenger")]
    public class MessengerHub : Hub
    {
        private readonly Messenger _messenger;

        public MessengerHub() : this(Messenger.Instance) { }

        /// <summary>
        /// Initializes a new instance of the <see cref="MessengerHub"/> class.
        /// </summary>
        /// <param name="messenger">The messenger.</param>
        public MessengerHub(Messenger messenger)
        {
            _messenger = messenger;

        }

        public void AddToGroup(string group)
        {
            this.Groups.Add(Context.ConnectionId, group);
        }

        /// <summary>
        /// Gets all messages.
        /// </summary>
        /// <returns></returns>
        public IEnumerable<BizTalkRHelper.Message> GetAllMessages()
        {
            return _messenger.GetAllMessages();
        }

        /// <summary>
        /// Broadcasts a message to a group.
        /// </summary>
        /// <param name="message">The message.</param>
        /// <param name="group">The group to broadcast to.</param>
        public void BroadCastMessage(Object message, string group)
        {
            _messenger.BroadCastMessage(message, group);
        }
    }
}

With our SignalR plumbing out of the way, we need to make some changes to our Site.Master page.  Since I am using the default Web Application project, it uses a Site.Master template.  We need to include some script references to some libraries.  By placing them here we only need to include them once and can use them on any other page that utilizes the Site.Master template.

<script src="Scripts/jquery-1.6.4.min.js" type="text/javascript"></script>
<script src="Scripts/BizTalkRMessengerHub.js" type="text/javascript"></script>
<script src="Scripts/jquery.signalR-0.5.2.js" type="text/javascript"></script>
<script src="../signalr/hubs"></script>

You may not recognize the second reference (BizTalkRMessengerHub.js), nor should you, since it is custom.  I will explore this file further in a bit.

Next we want to modify the Default.aspx page.  We want to include some <div> tags so that we have placeholders for content that we will update via JQuery when we receive the message from BizTalk.

We also want to include a label called lblResults.  We will update this label based upon the acknowledgement that we receive back from BizTalk.

<div class="callTakerDefault" id="callTaker" ></div>
<asp:Label ID="lblResults" runat="server" Text=""></asp:Label>
<div id="orderUpdate"> </div>
<div id="etr"> </div>
<div id="orderNumber"></div>

<br />
<h2>Please provide Customer details</h2>
<table>
    <tr>
        <td>Customer Name: <asp:TextBox ID="txtCustomer" runat="server"></asp:TextBox></td>   
    </tr>
    <tr>
        <td>Phone Number: <asp:TextBox ID="txtPhoneNumber" runat="server"></asp:TextBox> </td>
    </tr>
    <tr>
         <td>Customer Site ID: <asp:TextBox ID="txtCustomerSiteID" runat="server"></asp:TextBox></td>
    </tr>
    <tr>
        <td>Comments: <asp:TextBox ID="txtComments" runat="server"></asp:TextBox></td>
    </tr>
   </table>
   
  <asp:Button ID="Button1" runat="server" Text="Submit" onclick="Button1_Click" /><br />

 

The last piece of the puzzle is the BizTalkRMessengerHub.js file that I briefly mentioned. Within this file we will establish a connection to our hub, add ourselves to the CallTaker subscription and then get all related messages.

When we receive a message, we will use JQuery to update our div tags that we have embedded within our Default.aspx page.  We want to provide information like the Estimated Time of Restore and the Order Number that the OMS system provided.

$(function () {
    var messenger = $.connection.messenger; // generate the client-side hub proxy { Initialized to Exposed Hub }


    function init() {
        messenger.addToGroup("CallTaker");
        return messenger.getAllMessages().done(function (message) {

        });
    }

    messenger.begin = function () {
        $("#callTaker").html('Call Taker Notification System is ready');

    };

    messenger.add = function (message) {
        //update divs
        $("#orderUpdate").html('Order has been updated');
        $("#etr").html('Estimated Time of restore is: ' + message.ETR);
        $("#orderNumber").html('Order Number: ' + message.OrderNumber);
      
        //Set custom backgrounds
        $("#orderUpdate").toggleClass("callTakerGreen");
        $("#etr").toggleClass("callTakerGreen");
        $("#orderNumber").toggleClass("callTakerGreen");

    };


    // Start the Connection
    $.connection.hub.start(function () {
        init().done(function () {
            messenger.begin();

        });
    });

 

});

 

Testing the Application

So once we have deployed our BizTalk application and configured our Send and Receive Ports we are ready to start testing. To do so we will:

  • Launch our Web Application.  The first thing that you may notice is that we have a <div> update indicating that our Notification System is ready.  What this means is that our browser has created a connection to our Hub and is now listening for messages.  This functionality was included in the JavaScript file that we just discussed.

image

  • Next we will populate the Customer form providing their details and then click the Submit button.

image

  • Once the button has been pressed we should receive an acknowledgement back from BizTalk and we will update the results label indicating that the Order has been received and that it is currently being processed.

image

  • You may recall that at this point we will start exchanging messages asynchronously with the OMS system.  For the purpose of this blog post I am just using the FILE Adapter to communicate with the File System.  When I navigate to the folder that is specified in my Send Port, I see a newly created file with the following contents:

image

  • Ordinarily, the OMS system would send back an Acknowledgement message automatically but for this post, I am just going to mock one up and place it in the folder that my Receive Location is expecting.  You will notice that I am also using the same msgid to satisfy my Correlation Set criteria.

image
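A mocked-up CreateCallResponse might look roughly like this (the namespace and element names are assumptions based on the schema descriptions above; what matters for the Correlation Set is that msgid matches the value sent in CreateCallRequest):

```xml
<ns0:CreateCallResponse xmlns:ns0="http://CallTaker.Schemas.CreateCallResponse">
  <!-- Must match the msgid from CreateCallRequest so the Correlation Set resolves -->
  <msgid>12345</msgid>
  <ETR>2012-07-13T15:30:00</ETR>
  <OrderNumber>ORD-1001</OrderNumber>
</ns0:CreateCallResponse>
```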

  • When BizTalk processes the CreateCallResponse, it will invoke our SignalR helper, and a message will be sent to our Web Browser, which will subsequently be updated without any post backs or browser refreshes.  Below you will see 3 div tags being updated with this information that was passed from BizTalk.

image

 

Conclusion

At this point I hope that you are impressed with SignalR.  I find it pretty amazing that we can have other systems like BizTalk sending messages to our Web Application asynchronously without the browser having to be posted back or refreshed. I also think that this technology is a great way to bridge different synchronous/asynchronous messaging patterns.

I hope that I have provided a practical scenario that demonstrates how these two technologies can complement each other to provide a great experience to end users.  We are seriously considering using this type of pattern in an upcoming project.  This was really my introduction to the technology and I still have some exploring to do, but so far I am very happy with the results.

Part 1: BizTalk + SignalR

For those unfamiliar with BizTalk, it is Microsoft’s premiere Enterprise Application Integration (EAI) platform.  It is used to integrate disparate systems over a variety of protocols using a durable pub-sub mechanism.

SignalR does have some similarities to BizTalk in that it is a messaging system that also supports the notion of pub-sub.  However, SignalR’s sweet spot is really lightweight messaging across Web clients.  SignalR itself is a scalable, asynchronous .Net library authored by David Fowler and Damian Edwards.  If you are new to SignalR, I recommend checking out this post by Scott Hanselman who describes many of the underlying technical details that I will not be going into in this post.

Why is SignalR important?

One of the true benefits of SignalR is that it is asynchronous by nature.  I don’t profess to be an expert web developer.  I did some web development in an earlier life prior to my BizTalk days, but I know enough to understand that locking up a user’s browser during a request-response interaction can be a really bad thing.  Yes, technologies like AJAX and JQuery have been introduced to provide a more asynchronous experience, and they both have their strengths and weaknesses; overall they go a long way toward solving this Request-Response locking problem.  But the question remains: what happens when you have events occurring in other systems that you want raised within the system you are currently interacting with?  This is where I feel the true “magic” of SignalR comes into play.

Scenario

I work in the Electricity/Power industry and we are implementing an Outage Management System (OMS).  OMS systems are used to calculate or anticipate the particular device(s) that are the underlying problem that is causing a Power Outage.  OMS systems may have many different types of input including Customer Calls, IVR messages or even SCADA events.  In this case we are only going to focus on Customer Calls.

This OMS system is a commercial off the shelf (COTS) product that we have purchased from a vendor.  This product has defined, XML-based, asynchronous interfaces that require the use of WebSphere MQ queues.  Using BizTalk to integrate with the OMS system makes a lot of sense and plays well to BizTalk’s strengths, which include:

  • Support for MQ Series
  • Durable Messaging
  • Tracking
  • Configuration
  • Correlation (Async messaging)
  • XML Schemas
  • etc..

But a question remains: we need to capture the information that is coming from our customers’ calls in our Call Centre.  One option that we are currently exploring is a lightweight Web Based application that will allow our Call Centre to quickly capture a customer’s outage information and then pass this information to BizTalk, letting BizTalk deal with calling the OMS’s semi-complex interfaces.

Much earlier in my career I may have been tempted to do the following:

  • Expose a WCF/Web Service that a Web Application can consume
  • Accept the request from the Web App and then proceed to call the asynchronous interfaces that exist in the OMS system.
  • In the meantime, the Web Application that called the BizTalk Service is being blocked as BizTalk is still processing messages asynchronously.
  • Once BizTalk is done interacting with the OMS system, BizTalk will provide a response back to the Calling Web Application.

The reality is that this is a bad pattern.  You don’t want to lock up your users in a Web Application if you don’t have to, especially when you have asynchronous messaging happening in the backend.

image

An alternative approach, that I like much better, is outlined below:

  • Expose a WCF/Web Service that a Web Application can consume.
  • Once BizTalk has received the Web Request from the Web Application, simply provide an acknowledgement that the message has been received and will be processed.
  • At this point the Web Browser has received its response.  If our Web Application is built around technologies like JQuery and/or AJAX, our users can continue to perform some work.
  • In the meantime, as BizTalk is calling the OMS-related interfaces, BizTalk can provide status updates back to the Web Application using SignalR.  There is actual information that the OMS system will pass back that our end users are interested in.  More specifically, it will include information as to when the Customer can expect their power to be restored (Estimated Time of Restore).  If you have ever experienced a power outage, I am sure you would like to know whether it is going to last 30 minutes or 10 hours.

The benefits to this approach include:

  • User’s browser is not locked up
  • Users are continuing to be updated as to the status of their request
  • No need to continue to refresh your page (Just say NO to F5) in order to get a status update.

image

Conclusion

I am sure at the beginning of this post you were wondering what BizTalk and SignalR could possibly have in common.  I hope that I have provided a good example of how these two technologies complement each other.

In Part 2 of this series, I will actually implement this pattern that I have shown above.  I have split this information into two parts due to the total length of the content.  Stay tuned!

Saturday, July 7, 2012

Exposing common service(s) to SAP and WCF clients

I have a scenario I am dealing with at work that involves exposing some common data to two different systems: SAP and a Custom ASP.Net Web App.  Both of these applications will request BizTalk to fetch some data from a variety of database views, aggregate it and package it up nicely for these calling systems.  Both systems will be requesting this information on demand – i.e. synchronously.  SAP will be calling an RFC hosted in BizTalk, via the SAP Adapter, using the method that I identified in a previous post.  The Custom Web Application will be consuming a WCF Service hosted in IIS.

Conceptually, my solution looks like this:

image

Whenever you have multiple systems trying to consume the same data, you generally try to utilize a Canonical schema approach.  Canonical schemas allow you to take the different external data formats and transform them into a common internal format before handing them off for processing, such as in an Orchestration.

image

You then perform all of your processing using this internal format to reduce the amount of code/configuration that you require to solve the problem.  Additionally, when you need to make a change, you do so in one place as opposed to two (or more) locations.
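To make the canonical idea concrete, here is a plain C# sketch (the type and member names are invented for illustration; in BizTalk this is done with schemas and maps rather than code): two external request shapes are each mapped into a single internal type, and the processing logic is written once against the canonical form.

```csharp
using System;

// External formats (simplified stand-ins for the SAP and Web schemas).
public class SapAddRequest { public string Zahl1; public string Zahl2; }
public class WebAddRequest { public int FirstNumber; public int SecondNumber; }

// Canonical (internal) format - the only shape the processing logic ever sees.
public class CanonicalAddRequest { public int A; public int B; }

public static class CanonicalDemo
{
    // One inbound "map" per external system...
    public static CanonicalAddRequest FromSap(SapAddRequest r) =>
        new CanonicalAddRequest { A = int.Parse(r.Zahl1), B = int.Parse(r.Zahl2) };

    public static CanonicalAddRequest FromWeb(WebAddRequest r) =>
        new CanonicalAddRequest { A = r.FirstNumber, B = r.SecondNumber };

    // ...but only one piece of processing logic, written against the canonical type.
    public static int Process(CanonicalAddRequest r) => r.A + r.B;
}
```

Adding a third consumer only means adding one more inbound map, not another copy of the processing logic.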

In order to keep things simple for this POC, I decided to reuse my RFC Add solution where you can have a client pass two numbers to BizTalk, BizTalk will then sum them and provide the answer back to the calling application.

image

For the Web Client, I will simply expose custom “Web” Schemas as WCF Services using the BizTalk wizard provided within Visual Studio.  Note that I did not want to expose my SAP schemas to my Web Application.  I could have done that, but it is not a good practice, as any changes to the SAP schemas would impose a change on my Web Application whether it was required or not.  Also, SAP schemas tend to be complex, and we don’t want to unnecessarily propagate that complexity onto other applications if we don’t have to.

Initially I thought my solution would be pretty straightforward:

  • Generate my SAP Schemas
  • Create my Schemas that will be used for the Web Application and expose them via Wizard
  • Create my Canonical Schemas
  • Create related maps

I then created my logical port and set the Request and Response message types to my Canonical schemas. I deployed my application and configured my Physical Port within the BizTalk Admin Console.  I decided that I was going to re-use the port that was created as part of the BizTalk WCF Publishing Wizard.  I would simply add a Receive Location for SAP and set the appropriate inbound and outbound port mappings. 

Using inbound port mapping is very simple: I can specify multiple maps and BizTalk will detect which Map to use based upon the message type that is being passed.  So if we receive a request from SAP, BizTalk will detect this and use the SAP_to_Canonical.btm map.

 

image

It then hit me…how will BizTalk determine which Map to use on the Outbound (Response) message? The message being passed to the port will always be the same, as it will be my canonical format.  I soon found out.  As you can see in the screenshot below, my SAP response was sent down to my Web Client (which in this case was the WCFTest tool).  Not the desired result that I was looking for.

image

While chatting with a colleague, he suggested “why don’t you try a Direct Bound port?”  I have used Direct Bound ports in the past, but only in asynchronous scenarios.

So to fix this, I changed:

  • My logical Request-Response port to be a Direct bound port and to be Self Correlating.

image

  • Created an additional Receive Port.  I now have a Receive Port for my Web App and for SAP.

image

  • Made the appropriate Inbound and Outbound Port Mappings.  Now each port has only one Inbound and one Outbound port mapping.

image

 

  • My orchestration will no longer have a Physical Port to bind to since it will be Direct Bound to the MessageBox.

image

  • Now when I execute my test from the WCF Test Client, I get the correct result in the WebAddResponse message type that I am expecting

image

  • I am also getting the correct response from SAP
image

Conclusion

The magic in this solution is really the Request-Response direct bound port.  The idea is that our Orchestration will place a subscription on our Canonical Request message.  It doesn’t really matter how that message gets into the MessageBox as long as it is there.  In this case we have exposed two end points, one for SAP and one for our Web App.  In both scenarios they will take their specific Request message and transform it into our Canonical message and therefore our Orchestration will pick it up.

Request-Response ports always use a form of Correlation so that they can provide the correct Response back to the calling client.  We can take advantage of this mechanism to ensure we get the correct Canonical Response message, which in turn can use Outbound Port mapping to send our response in the correct format to the calling application.

Saturday, June 16, 2012

Microsoft TechEd North America 2012-Day 4

So this post is a little delayed due to all of the excitement around the BizTalk sessions.  However, the sessions were good enough that I still wanted to publish the post.

Azure Service Bus Solution Patterns  -Clemens Vasters and Abhishek Lal

Another session by Clemens and Abhishek.  This time around it was a very practical session based upon some Customer Use Cases and how to implement some popular integration design patterns based upon the “Integration Bible” - Enterprise Integration Patterns.  To view the actual session on Channel9, click here.

Some of the Use Cases included:

  • Web Services For Retailers
    • Company from Italy
    • Provide SAAS solution for Retail Stores
    • Seed local retail outlets with Catalogue and Pricing information
    • Push out to retail stores
      • Use Topics to distribute information to each retail store
  • SaaS with Dynamic Compute Workload
    • High Performance Computing (HPC) scenario
    • Command and Control messages sent in from Service Bus
    • ISV specialized dynamic compute capacity provider
  • Consumer Web Site
    • Web site that searches for data about people – credit check, criminal check etc.
    • Their challenge was back end data co-ordination
    • Different profiles for users who have different access to back end services
    • Queues for decoupling the web layer from middle-tier services

 

Scaling things out

Next, Clemens walked us through a scenario that Microsoft has been working on with a particular customer.  The solution was related to remotely controlling air conditioners.  The idea is that a consumer would have the ability to manually control it, but power providers could also *potentially* control it to prevent rolling brown-outs from occurring.  Instead of instituting widespread rolling brown-outs, each customer could alter their consumption. Collectively these savings add up and prevent demand from exceeding supply.  I am a little skeptical about a power company (I work for one) controlling someone’s air conditioner, but in theory it makes a lot of sense.

The requirements for this solution include:

  • Pair devices, such as air conditioner, to local Wi-Fi connection
  • Users need the ability to control the device
    • Control requests could be made from back yard or across the world
    • Service Bus makes these control requests possible from anywhere that has an internet connection.
  • Devices will then send consumption data to Azure where the data can be viewed on a mobile device. This data will make its way to Azure via Service Bus.  The premise behind this is if customers are more aware of their consumption patterns, then they may try to alter them.  This is something that my organization has also been investigating.

So a question remains: these types of consumer devices will not have the .Net Service Bus bindings installed, so how will they actually communicate?  The answer is really HTTP.  You can send HTTP requests to the Service Bus, and in this case Clemens introduced a concept that he likes to call “N-HTTP”.  It is similar to the “NoSQL” movement, but in this case related to HTTP.  HTTP messages in many cases include not only HTTP Headers but also an entity body.  The entity body could include JSON content, XML content, etc.  The challenge with entity bodies is that you need a parser to package the information up in requests or un-package it when receiving responses.  This would further complicate things, as these parsers would need to be loaded onto these consumer devices.  What’s interesting about HTTP Headers is that they are well understood across devices, systems and technology stacks, and do not require parsers.  So if you can get away with sending key/value pairs when sending or receiving messages, then this solution should work for you.
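To make “N-HTTP” concrete, a send to a Service Bus queue can carry its entire payload as headers with an empty entity body.  A rough sketch of the wire format (the namespace, queue, token and property names are all placeholders; Service Bus promotes custom HTTP headers into message properties):

```
POST https://contoso-ns.servicebus.windows.net/acunit-commands/messages HTTP/1.1
Authorization: WRAP access_token="[token acquired from ACS]"
DeviceId: ac-unit-42
Command: SetTargetTemperature
TargetTemp: 24
Content-Length: 0
```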

Receiving messages from Service Bus generally involves ‘long polling’ while waiting for messages.  Using long polling sockets isn’t a great use of power for devices that do not have permanent power sources (devices that rely on batteries).  With this in mind, Microsoft has been working with other industry leaders on AMQP (Advanced Message Queuing Protocol).  AMQP is a popular queuing technology that is used in financial brokerage settings.  Another benefit of using AMQP is that it has a quieter socket profile, which results in lower battery consumption.  So this is an area that Microsoft is investing in that will have widespread benefits….Cool Stuff!!!

 

Message Channel Patterns

Abhishek was back on point and walked us through some popular messaging patterns including:

  • Pub-Sub
    • Accomplished via Topics
  • Content Based Router
    • Using Topics based upon a Subscription Rule
  • Recipient List
    • Sender wants to send the message to a list of recipients
    • Common use-cases
      • Order processing systems – route to specific vendors/departments
      • “Address LIKE ‘%First%’”
  • Message Routing
    • Session re-sequencer – receiving messages out of order and then using the defer method to postpone processing the next message until you receive the next message that is “in order”
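To illustrate the content-based router pattern, here is a toy in-memory sketch of my own.  The real Service Bus evaluates SQL-92 style subscription rules (like the Address LIKE ‘%First%’ example above) on the server; here each subscription just supplies a Python predicate over the message properties:

```python
# Toy in-memory topic/subscription routing. Each subscription's predicate
# stands in for a server-side subscription rule.

class Topic:
    def __init__(self):
        self.subscriptions = {}  # name -> (predicate, delivered messages)

    def subscribe(self, name, predicate):
        self.subscriptions[name] = (predicate, [])

    def publish(self, properties, body):
        # Content-based routing: every subscription whose rule matches
        # the message properties gets its own copy.
        for predicate, inbox in self.subscriptions.values():
            if predicate(properties):
                inbox.append(body)

    def messages(self, name):
        return self.subscriptions[name][1]

topic = Topic()
# Mimics the subscription rule  Address LIKE '%First%'
topic.subscribe("first-street", lambda p: "First" in p.get("Address", ""))
topic.subscribe("all", lambda p: True)

topic.publish({"Address": "12 First Ave"}, "outage-1")
topic.publish({"Address": "9 Second St"}, "outage-2")
print(topic.messages("first-street"))  # ['outage-1']
```

The recipient-list pattern falls out of the same mechanism: the sender just publishes once, and the set of matching subscriptions is the list of recipients.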

I must admit, when I learn more about the Service Bus I do get a little giddy.  I just see it as such an enabling technology.  It facilitates building applications that would have been impossible, or cost prohibitive, in years gone by.  Whether it is submitting SAP timesheets remotely or reporting customer power outages, it is an amazing technology, and the opportunities are endless when it comes to bridging data center boundaries.

 

Mobile + Cloud: Building Mobile Applications with Windows Azure – Wade Wegner

Wade Wegner, a former Microsoft Azure Evangelist, put together a pretty interesting session on Windows Phone and Azure.  To watch this session on Channel 9 click here.  In the past I have followed some of the work he did with the mobile toolkits for the different mobile platforms, but just haven’t had the time to take a closer look.

This session focused primarily on Windows Phone 7 and how it interacts with some of the Azure services (Storage, SQL Azure, Tables, ACS).  Personally, I think these technologies complement each other very well.  Especially in the area of bridging mobile devices with on-premise LOB solutions and leveraging the Access Control Service (ACS) for authentication.

Three reasons for Device + Cloud

  • Allows for new application scenarios
  • The cloud levels the playing field
  • The cloud provides a way to reach across device platforms and a larger pool of resources from which to pull

Why Azure?

  • PaaS: you build it, Windows Azure runs it
  • Automatic O/S patching
  • Elasticity and Scale
  • Utility Billing
  • Higher-level services
  • ACS, Caching, CDN (cache static content), Traffic Manager (route traffic across Azure datacenters based on locale)

Wade then demonstrated a scenario that really lends itself well to this technology: a mobile application that takes advantage of social identity providers (Windows Live, Google, Yahoo) for authentication via the Access Control Service.  Wade demonstrated that this isn’t as complicated as it sounds.  With the help of a NuGet package and adding an STS reference, we can get this working in a matter of minutes.  Wade then added some additional functionality to consume an ASP.Net Web API.  Most presenters would have left their demo there, giving people the information to build the services but leaving out some “real world” gaps around security.  Wade took his demo one step further and showed how we can use ACS to authorize user requests as well.  Before the ASP.Net Web API method is called, we can intercept the request and validate that the token included in the HTTP request is a valid ACS token.  Provided the token is valid, the appropriate data will be returned.
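Conceptually, that authorization interception looks like the sketch below.  This is my own toy handler, not Wade’s code: real code would cryptographically validate the ACS token, whereas the `is_valid_token` lookup here is purely a stand-in:

```python
# Intercept a request and check its token before the service method runs.
VALID_TOKENS = {"acs-token-123"}  # placeholder for real ACS validation

def is_valid_token(token):
    return token in VALID_TOKENS

def require_acs_token(handler):
    """Decorator standing in for the message handler that runs before the
    Web API method: reject the request unless the token checks out."""
    def wrapper(request):
        auth = request.get("headers", {}).get("Authorization", "")
        if not is_valid_token(auth.replace("Bearer ", "")):
            return {"status": 401, "body": "invalid token"}
        return handler(request)
    return wrapper

@require_acs_token
def get_customers(request):
    return {"status": 200, "body": ["Contoso", "Fabrikam"]}

print(get_customers({"headers": {"Authorization": "Bearer acs-token-123"}}))
print(get_customers({"headers": {}}))  # rejected before the handler runs
```

The point of the pattern is that the service method itself never sees an unauthorized request.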

Wade then wrapped up his session by demonstrating how we can use the push notification service to serve up “toast notifications” from Azure.  Another set of useful information that I hope to play with soon. 
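For reference, a Windows Phone toast push is, under the covers, just an HTTP POST to the phone’s notification channel URI with a small XML payload and a couple of special headers.  A hedged sketch of my own (the channel URI is a placeholder, and the request is only built here, never sent):

```python
import urllib.request

# Placeholder channel URI -- a real one is handed out by the phone.
CHANNEL_URI = "http://sn1.notify.live.net/throttledthirdparty/01.00/EXAMPLE"

def build_toast(title, message):
    """Build a toast push request: XML body plus the toast target headers."""
    body = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<wp:Notification xmlns:wp="WPNotification">'
        "<wp:Toast>"
        f"<wp:Text1>{title}</wp:Text1>"
        f"<wp:Text2>{message}</wp:Text2>"
        "</wp:Toast>"
        "</wp:Notification>"
    )
    req = urllib.request.Request(CHANNEL_URI, data=body.encode("utf-8"),
                                 method="POST")
    req.add_header("Content-Type", "text/xml")
    req.add_header("X-WindowsPhone-Target", "toast")
    req.add_header("X-NotificationClass", "2")  # deliver the toast immediately
    return req

req = build_toast("Outage update", "ETR: 45 minutes")
print(req.get_method())
```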

If you are into mobile apps, you definitely owe it to yourself to watch this session so you can learn about all of the Azure services that you and your customers can benefit from.

Thursday, June 14, 2012

Building Integration Solutions Using Microsoft BizTalk On-Premises and on Windows Azure - Javed Sikander and Rajesh Ramamirtham

Update:  This session has now been posted to Channel 9 and you can view the video here.  Feel free to post any comments at the bottom of this post.

 

This was a follow up session to the Application Integration Futures – The Road Map and what's next on Windows Azure  session that was discussed here.  The primary focus of this session was to demonstrate some of the new capabilities of BizTalk On-Premises, BizTalk IaaS and BizTalk PaaS. 

During the presentation there were many questions as to what differences would exist between the On-Premises version and the IaaS version.  After many questions about particular features (BAM, ESB Portal, etc.), Bala stepped in and declared that all features that exist in the On-Premises version will exist in the IaaS version.  From further discussion after the session, it looks like there is a little more work to do in the area of clustered host instances, but otherwise we can expect symmetry between these two versions.

Since BizTalk Next (aka “R2”) will be released as part of the latest Microsoft platform offering (Windows Server, SQL Server, Visual Studio), all BizTalk projects will target the .Net 4.5 platform.

Since the primary purpose of this session was to demonstrate some of these new features, let’s get into some of the scenarios/demos that were discussed.

BizTalk Consuming REST services

In the first example, the team demonstrated BizTalk consuming a REST feed from the Azure Data Market.  Specifically, the feed was related to flight delays.  BizTalk connected using the new WCF-WebHttpBinding and performed a GET operation against this particular feed.  Since the Access Control Service (ACS) is the foundation for authentication when communicating with Azure, Rajesh demonstrated the out-of-the-box ACS authentication configuration.

BizTalk consuming SalesForce.com over REST API

Once again BizTalk was configured to consume a REST service.  In this case it was a SalesForce customer feed.  Within the Send Port, the “SOAP Action Header” was populated and once again included the GET operation.  A custom transport behavior was used to provide the appropriate credentials. When executed, a list of customers was returned from SalesForce.

Next, the URI in the SOAP Action header was modified and a hard-coded id was provided for a particular customer.  In this case only that customer was returned.  Both Bill Chestnut and I were thinking “great, but how do we inject a dynamic customer id into this GET request?”  Once again the BizTalk team had an answer, and it came in the form of a new Variable Mapping button.  When clicked, it opens an interface that allows us to specify the name of a context/promoted property.  The bottom line is that we can drive this dynamic value from the message payload or context.
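Conceptually, variable mapping boils down to resolving a URI template from per-message context properties.  Here is a rough sketch of the idea in Python; the template, property names, and id value are all illustrative, not the actual BizTalk implementation:

```python
# A URI template plus a mapping of template variables to context/promoted
# property names, resolved per message. All names below are illustrative.

URI_TEMPLATE = "/services/data/v20.0/sobjects/Account/{customer_id}"
VARIABLE_MAPPING = {"customer_id": "CustomerId"}  # template var -> property

def resolve_uri(template, mapping, message_context):
    """Look up each mapped property in the message context and substitute
    it into the URI template -- one resolved URI per message."""
    values = {var: message_context[prop] for var, prop in mapping.items()}
    return template.format(**values)

ctx = {"CustomerId": "0015000000abcde"}  # promoted from payload or context
print(resolve_uri(URI_TEMPLATE, VARIABLE_MAPPING, ctx))
```

Because the values come from the message itself, no per-customer configuration is hard-coded into the port.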

Finally, the last SalesForce demo included a POST, where they demonstrated how to update a customer record in SalesForce.com. 

 

BizTalk PaaS: Azure EAI Services

The team then switched gears and started talking about BizTalk PaaS: Azure EAI Services.  I have no idea whether this will be the official name; it is what the title of their slide included, so I am using it here.  I do like it, and I like that BizTalk is still associated with this type of functionality.  I must caution that the product team did indicate not to read too much into naming/branding at this point.

Some of the functionality (some new, some old) that we can expect in the PaaS solution includes:

  • Sequences of activities to bridge impedance mismatches between systems
  • Flat file disassembly
  • Message validation
  • Transforms
  • Content based routing
    • XPath, FTP properties, Lookup (against SQL Azure), Http properties, SOAP
  • Hosting custom code
  • Scripting functoid to host .Net Code
  • XSLT support
  • New Service Bus Connect Wizard
  • BizTalk connectivity to Azure Artifacts (Service Bus Queues, Topics, XML bridges)

EDI Portal

  • Metro UI for managing trading partners
  • Manage and monitor AS2, X12 agreements
  • View resources like Transforms, Schemas, Certificates

EDI Bridge

  • Archiving
  • Batching
  • Tracking

Other

  • IaaS will be a public TAP
  • Other BizTalk releases(On-Premises/PaaS) will be “regular” TAP
  • On a lighter side, I did ask if we can expect a Metro version of the BizTalk Admin Console.  Don’t expect it any time soon. :)  Basically, any new UIs that need to be created will follow Metro styling, but other than that don’t expect many updates to the existing tools.

Conclusion

This was a great session that included many demos and really proved that what the product team spoke to in the previous session wasn’t just lip service.  Having been at the MVP Summit, I must say I was pleasantly surprised at the amount of functionality they have been working on.  Once again, I love the direction they are heading in.  It is an updated feature set that should please customers no matter what their ideal deployment model is (On-Premises, IaaS, or PaaS).  You can also tell that they are serious about symmetry; it may take a while for PaaS to be more closely aligned with On-Premises/IaaS, but I think they are headed in the right direction.