Monday, June 29, 2009

Microsoft Tech Days Canada is back for 2009

Microsoft Tech Days 2009 is coming back to Canada this fall to the following cities:
  • Vancouver (September 14-15)
  • Toronto (September 29-30)
  • Halifax (November 2-3)
  • Calgary (November 17-18)
  • Montreal (December 2-3)
  • Ottawa (December 9-10)
  • Winnipeg (December 15-16)

Looks like Riderville (Regina) is off the list this year. If you are from Saskatchewan, you will want to head to Calgary or Winnipeg.

There is currently an early bird special of $299, which is a savings of $300. I went last year and you can't go wrong with a Microsoft conference at this price point. The session agendas include a wide variety of topics, so whether you are a hardcore developer or an IT Pro, you are bound to learn something new at this conference.

Follow this link for more details.

Saturday, June 20, 2009

Clustering MQ Series and BizTalk Send/Receive

We have a 3rd party application that uses MQ Series as an integration bridge. BizTalk is used to manage a business process and perform message transformation between this system and our ERP.

We have been using this configuration for the past couple of years, and it came time to upgrade this 3rd party application. The application now supports WebSphere MQ 6.0, whereas we were previously running WebSphere MQ 5.3. Since it was time for an upgrade, and we have a fairly large BizTalk farm with a team responsible for supporting both BizTalk and MQ, we decided to install MQ 6.0 on the same cluster as BizTalk...for better or worse (it is supported).

In our previous configuration we had MQ 5.3 running on an Active/Passive cluster that did not include BizTalk. When running BizTalk 2006 and wanting to use the MQ Series adapter, there is a component that you must install on the MQ Series servers: MQSAgent. If you are running BizTalk 2004 the component is called MQSAgent, whereas if you are running BizTalk 2006 it is called MQSAgent2. According to this doc, the component is still called MQSAgent2 for BizTalk 2009. I must admit our previous configuration was rock solid; we never had any failover issues whatsoever.

We recently ran into some issues in the Test environment with the new configuration. We found that you need to ensure that the MS Distributed Transaction Coordinator (MSDTC) and MQ Series are running on the same node in order for it to function correctly. MSDTC is leveraged to support guaranteed, reliable message delivery.

The specific problem that we ran into was that when we failed the MQ resource group over to a new node, BizTalk would essentially just hang (with respect to MQ message processing). We couldn't send or receive messages with MQ. I then ran into this KB: The BizTalk Server Adapter for MQSeries version 2.0 no longer retrieves messages from a clustered MQSeries queue manager when the queue manager fails over to a different cluster node. This paragraph accurately describes our scenario:

You may configure the Microsoft BizTalk Server Adapter for MQSeries version 2.0 to receive messages from a clustered MQSeries queue manager. If the queue manager fails over to a different cluster node, the BizTalk Server Adapter for MQSeries no longer retrieves messages from the clustered MQSeries queue. When this behavior occurs, the following event is logged in the Application event log:

Event Type: Warning
Event Source: BizTalk Server 2006
Event Category: BizTalk Server 2006
Event ID: 5740
Date: 6/20/2009
Time: 10:16:40 AM
User: N/A
Computer:
Description:The adapter "MQSeries" raised an error message. Details "Error encountered on opening Queue Manager name = ISVC.MSG4.QM.T Reason code = 2059.".
For more information, see Help and Support Center at
http://go.microsoft.com/fwlink/events.asp.

Later on, the KB describes a "Workaround" *cough* HACK */cough* that is supposed to terminate the MQSAgent on the node that previously hosted the MQ/MSDTC resource group.


Option Explicit
On Error Resume Next

Dim sComputerName, oWMIService, colRunningServices, oService, colProcessList, objProcess

If Wscript.Arguments.Count = 0 Then
    sComputerName = "."
    Call ServStat
    Wscript.Quit
End If

Sub ServStat
    Set oWMIService = GetObject("winmgmts:" _
        & "{impersonationLevel=impersonate}!\\" & sComputerName & "\root\cimv2")
    Set colRunningServices = oWMIService.ExecQuery _
        ("Select * from Win32_Service where DisplayName='Distributed Transaction Coordinator'")

    For Each oService in colRunningServices
        'Wscript.Echo oService.DisplayName & VbTab & oService.State
        If (oService.State = "Stopped") Then
            'Wscript.Echo "Stopped"
            ' Find the dllhost instance hosting the MQSAgent COM+ application
            Set colProcessList = oWMIService.ExecQuery _
                ("SELECT * FROM Win32_Process WHERE Name = 'DLLHOST.EXE'")

            For Each objProcess in colProcessList
                If InStr(objProcess.CommandLine, "6D06157A-730B-4CB3-BD11-D48AC6B8A4BB") > 0 Then
                    'Wscript.Echo objProcess.ProcessId
                    Dim objShell
                    Set objShell = CreateObject("WScript.Shell")
                    objShell.Run "cmd /k kill -f " & objProcess.ProcessId & "& exit"
                    WScript.Quit
                End If
            Next
        End If
    Next
End Sub
There are two issues with this script:


  1. The ID 6D06157A-730B-4CB3-BD11-D48AC6B8A4BB that the script refers to is for the MQSAgent (BizTalk 2004), not BizTalk 2006. The MQSAgent2 ID (BizTalk 2006) is C691D827-19A0-42E2-B5E8-2892401481F5.



  2. The other issue is that the script issues the following command to kill the MQSAgent# process on the server that used to host the DTC/MQ resource group: objShell.Run "cmd /k kill -f " & objProcess.ProcessId & "& exit". The problem is that the kill command is not available on Windows 2003, and I suspect Windows 2008 as well; it has been replaced with Taskkill. Since the script they provided is based upon WMI and VBScript anyway, I opted to go a different route and replaced this line with objProcess.Terminate().

My testing to this point has been very successful. I have not had to manually intervene whatsoever, and I have not lost any messages when failing resources over or rebooting servers. The only delay you will see, if you have messages in flight while you bounce resources, is that if the queue is still down BizTalk will use the Send Port's configuration to issue retries. This is very standard BizTalk behaviour, and it is performing as expected.

So what does this script do? It checks whether the DTC service is running on the node where the script executes. If the DTC service is running, the script exits, since we still need the MQSAgent to run alongside DTC on this server. If DTC is not running, it looks for a 'DLLHOST.EXE' process whose command line contains the MQSAgent application ID. If it finds such a process, it terminates it so that the stale process does not block BizTalk from sending or receiving messages with MQ Series.
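To make the decision logic concrete, here is a minimal sketch in Python (hypothetical, not the KB script itself): terminate the DLLHOST.EXE instance hosting MQSAgent2 only when DTC is not running on the local node. The GUID is the MQSAgent2 application ID mentioned above; the function names are my own.

```python
# BizTalk 2006 MQSAgent2 COM+ application ID (see issue #1 above)
MQSAGENT2_APP_ID = "C691D827-19A0-42E2-B5E8-2892401481F5"


def should_terminate(dtc_state: str, command_line: str) -> bool:
    """Decide whether a DLLHOST.EXE instance should be killed on this node.

    Kill only when DTC has failed over elsewhere ("Stopped" locally) and the
    process command line identifies it as the MQSAgent2 surrogate host.
    """
    return dtc_state == "Stopped" and MQSAGENT2_APP_ID in command_line
```

The actual script would feed this decision from WMI queries against Win32_Service and Win32_Process, then call the process's Terminate() method.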

This sounds like an opportunity to cluster the MQSAgent application. However, the guidance from Microsoft is to not cluster it:

There is no requirement to cluster the MQSAgent (MQSAgent2) COM+ application that is used with the BizTalk Server MQSeries adapter. To provide high availability for this component, install the component on each cluster node. If the COM+ application stops, the next call from the client will start it.

I don't know...seems pretty clunky to me. I will have to follow up further with support.

Sunday, June 7, 2009

Adventures with the HTTP Adapter and Yahoo Finance API

I was asked to investigate what would be involved in connecting BizTalk to a Yahoo Finance "API" in order to retrieve stock quotes. This is not a mission critical application, but they wanted to be able to consume this information. The client "COTS" application can consume a WSDL, but not an HTTP response that includes comma-delimited data. So we figured that we could expose this data via a web service. There are certainly many ways to expose this data, and that is not the point of this post.

The point of the post is to discuss some of the pitfalls that I ran into when trying to connect to this Yahoo API using the BizTalk HTTP Adapter. At first, I thought the problem was rather trivial: I opened IE and pointed it at the Yahoo URL, including the stock ticker and the format that I was interested in. The browser returned a string of data that included the stock quote and the other relevant data.

http://download.finance.yahoo.com/d/quotes.csv?s=MSFT&d=t&f=sl1d1t1c1ohgvj1pp2wern


I then saved a copy of this data into a text file, ran it through the BizTalk Flat File Schema Wizard, created a receive pipeline based upon this schema, and now had "typed" data for use inside of BizTalk.
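For illustration, the comma-delimited response can be parsed with a few lines of Python (a sketch only; the sample line below is made up, and the actual fields depend on the f= format string in the request URL):

```python
import csv
import io

# Made-up sample of the quote data Yahoo returned: symbol, last trade
# price, date, time, change (field order depends on the f= parameter).
sample = '"MSFT",23.54,"6/7/2009","4:00pm",-0.28\n'

# csv handles the quoted fields that a naive split(",") would mangle
row = next(csv.reader(io.StringIO(sample)))
symbol = row[0]
last_price = float(row[1])
```

In BizTalk the same job is done by the flat file schema and receive pipeline, which turn the raw text into typed XML.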

At this point, I was a little unsure of what Yahoo was expecting when I made this HTTP request. I created a "dummy" schema with only a root node and figured I would submit the request to Yahoo to see what would happen. Initially, I had a static send port where I hard-coded the URL. The URL was very important since it contains the HTTP request query parameters. I figured that once I got this working, I would focus on making it dynamic so that the client application could drive which stock quote is returned.

However, this is when I started to run into issues. When I tried to run my application using this configuration, I was met with the following response from Yahoo: Missing Symbols List.

Based upon this error, I figured that something was up with the query parameters. Yahoo is expecting something along the lines of ?s=MSFT&d=t&f=sl1d1t1c1ohgvj1pp2wern to be passed as part of the HTTP request. After performing some Bing searches, someone suggested using a dynamic send port to pass these query parameters in. That didn't help either.

I then decided to open up Fiddler to see what was being passed as a successful request. Fiddler is a tool that can be used to inspect HTTP Requests and Responses.
When you use the Request Builder feature in Fiddler, it defaults the HTTP request method to GET. That makes sense, but then I thought: what if I switch this to POST? The result was the same error: "Missing Symbols List".

At this point, I am starting to understand the problem a little better. After another Bing search, I found the following document that indicates: "The HTTP send adapter gets messages from BizTalk Server and sends them to a destination URL on an HTTP POST request". Using Fiddler, I was able to determine that using a GET request worked without issue. Now knowing that the BizTalk HTTP Adapter is going to use a POST request, I figured that I needed to be able to get Fiddler to work with a POST request and then get BizTalk to use this same approach when posting data to Yahoo.

I am not going to get into the differences between POST and GET here as it has been done so many times before, but here is a good summary of the differences.

Since the query string is essentially being ignored anyway, I removed it from the URL Address text box. I then copied the query parameters into the "Request Body" text box, without the leading '?'.
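The transformation is mechanical: the POST body is exactly the GET query string minus the '?'. A quick Python sketch using the standard library shows the split:

```python
from urllib.parse import urlsplit

# The full GET URL used in the browser test
url = ("http://download.finance.yahoo.com/d/quotes.csv"
       "?s=MSFT&d=t&f=sl1d1t1c1ohgvj1pp2wern")

parts = urlsplit(url)

# URL for the POST request: scheme + host + path, no query string
post_url = parts.scheme + "://" + parts.netloc + parts.path

# POST body: the same parameters, minus the leading '?'
post_body = parts.query
```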

Success! The next challenge is to get BizTalk to pass this data through the HTTP adapter as a POST request.

I found an old forum post by CranCran77 that discussed sending a message of type RawString to a different website that also expected an HTTP GET. I have used the RawString class before when sending emails via BizTalk, so I was able to add this class to my project quickly.




In the image above, I have highlighted the "Construct Yahoo Request" Message Assignment shape. Below, I have the details of what is inside this message construct shape. Here I am assigning values to my message body that is of type "RawString". This RawString class has been added to a .Net Helper Assembly.


After the message is sent, I can look in the tracking database and can see that this data was transmitted as part of the message body.


Since the parameters that Yahoo requires are being sent as part of the message body, we may use a static Send Port and do not have to provide a query string.
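The equivalent pattern outside of BizTalk (a sketch for comparison, not the adapter's implementation): in Python's urllib, supplying a request body is what turns the request into a POST, much like the RawString message body does for the HTTP adapter.

```python
from urllib.request import Request

# Supplying data makes urllib default the method to POST, mirroring how
# the BizTalk HTTP adapter transmits the message body on an HTTP POST.
req = Request(
    "http://download.finance.yahoo.com/d/quotes.csv",
    data=b"s=MSFT&d=t&f=sl1d1t1c1ohgvj1pp2wern",
)
```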

With the emergence of SOAP and now WCF, the use of the HTTP adapter is limited. But as you can see there are still some "services" that exist on the web that rely upon HTTP. Unfortunately there is not a lot of good documentation on the HTTP adapter so hopefully this post fills in some of the gaps.

Thursday, May 21, 2009

Another BizTalk Online Resource - http://www.biztalk247.com/

I just wanted to plug another good BizTalk resource: http://www.biztalk247.com/. Saravana Kumar has given the site an extensive facelift and it looks great!

On the site, you will find links to blogs, installation guides, BizTalk posters, hot news, webcasts, tutorials, and links to the latest BizTalk books.

While I am in the plugging mood, a book that I have started to read is Richard Seroter's SOA Patterns with BizTalk Server 2009. So far I am a couple chapters in and I am impressed with it. I will post a more comprehensive review once I have completed it.

Friday, May 15, 2009

TechEd 2009 - Day 5

Friday was the final day for TechEd 2009 North America. There were fewer people around and it had that feeling of writing a final exam on the last day of the semester. I started out the day with BizTalk and ended with a BizTalk RFID session. In between those two sessions I spent some time learning about DUET and WCF. Read the rest of the blog for more details.

Stephen Kaufman had a good session on BizTalk 2009 Application Lifecycle Management (ALM) using the new features of BizTalk 2009 and its integration with Team Foundation Server (TFS). BizTalk is now a first class citizen in the Visual Studio 2008 family and therefore gains added benefits when you deploy TFS as well, including bug tracking, task management, and automated builds, to name a few.

Next, Stephen ran us through some of the new unit testing capabilities. He went through writing unit tests for both maps and schemas. Having the ability to write these tests using Microsoft tools and then publish the results to TFS is extremely powerful. Writing and executing unit tests is a good practice to follow. Like most developers, I don't mind writing and executing the tests, but I hate documenting the results. If the results can be published to a tracking tool automatically, then I am all for this type of functionality.

Automating your builds was next on the agenda. If you are using automated builds in BizTalk 2006, then chances are you have a build server with Visual Studio installed on it. This is no longer required, as there is an option in the BizTalk 2009 installation that allows you to install only the build components. This allows you to execute build scripts using MSBuild without the need for Visual Studio. I strongly suggest automating your builds and deployments; the benefits are consistency, repeatability, and time savings. Those of you who were around in the BizTalk 2004 days know how much of a pain deployment was back then. Having a scripted deployment is truly a breath of fresh air compared to those days.

The DUET session was good; however, I walked away a little discouraged by the complexity of the DUET architecture. Being a Microsoft developer at an organization that uses SAP as a system of record forces you to find different ways to integrate with SAP. From a middleware perspective, this path is very clear: BizTalk. However, there are situations when BizTalk may not be the clear cut choice. I think of these scenarios as involving user-to-system interactions and potentially human workflow. BizTalk is very good with asynchronous scenarios; while it certainly can function in synchronous scenarios, you need to be careful in order to prevent client side timeouts.

The demos were impressive: the speaker showed a few scenarios where a user working in Outlook had the ability to book time against project codes, submit a leave request, and generate reports. In scenarios where you need approval, those requests get forwarded to the appropriate person (a manager) as configured in SAP. The interactions were very smooth and responsive.

Now for the discouraging part…the amount of infrastructure and pre-requisites to get DUET functioning is significant. It would only be fair at this point to disclose that I am far from a DUET expert, but this is the way I interpreted the requirements:

Client Computers:
  • DUET Client
  • Local SQL Server Express (yes – must have)
  • Hidden Mail Folder

Exchange Server:
  • DUET Service Mail Box

SAP Server:
  • Enterprise Services need to be enabled
  • DUET Add-on
  • Additional ABAP modules

DUET Server:
  • IIS – Microsoft DUET Server Components
  • J2EE – SAP DUET Server Components
  • SQL Server

So as you can see, there is a tremendous amount of infrastructure required in order to get DUET up and running. What also concerns me is the skill set required to implement these functions. In my experience, I have not met too many people who understand both the SAP and Microsoft technology stacks. The other thing that tends to happen on these types of projects is the finger pointing that occurs between the two technology groups. Within DUET, at least from my perspective, the line between roles and responsibilities becomes very blurry. The only way that I can see these types of implementations succeeding is to have a "DUET team" made up of both SAP and Microsoft resources whose mission is to ensure the success of the technology. If you leave the two teams segregated, I think you are in for a very long ride on a bumpy road.

Perhaps I am jumping to conclusions here, but I would love to hear any real world experiences if anyone is willing to share.

The next session I attended was a WCF session put on by Jon Flanders. In case you haven’t heard of Jon, he is a Connected Systems Guru having worked a lot with BizTalk, WF, WCF and Restful Services. He is also a Microsoft MVP, trainer at Pluralsight and an accomplished author.

He took a bit of a gamble in his session, but I think it paid off. He essentially wiped the slate clean and prompted the audience on the topics that we would like more info on. He did ensure that the topics listed in the abstract were covered just to ensure that no one felt slighted.
If you follow Jon's work, you will quickly find out that he is a very big supporter of RESTful services. It was really interesting to gain more insight into why he is such a proponent of the technology.

When you think about the various bindings that are available in WCF, you tend to think that the basicHttpBinding is usually the "safe" bet, meaning that it allows for the most interoperability between your service and a variety of clients. These clients may be running on Java or even older versions of .Net (think asmx). Jon quickly changed my way of thinking with regards to interoperability: the webHttpBinding is truly the most interoperable binding. There was a bit of sidebar jabbering between Jon and Clemens Vasters regarding this statement, but I will leave that one alone for now. The rationale that Jon used was that HTTP and XML are extremely pervasive across nearly every modern platform. He gave an example from his consulting experience in which a customer had a version of Linux and was trying to connect to an ASMX web service via SOAP. In order to get this scenario working, they had to resort to some hacks so that the client and service could communicate. When they brought Jon in, he convinced them to change the service to a RESTful service, and once they had done that there were no more interoperability challenges. It was a very good scenario that he described, and it certainly opened my eyes to REST.

I do consider myself to be more of a contract-first type of guy, especially when communicating with external parties. Having recently communicated with a really big internet company over REST, I was frustrated at times because it wasn't explicitly clear what my responsibility as a client was when submitting messages to their service. Sure, there was a high level design document that described the various fields and a sample XML message, but what did I have to verify that I was constructing my message according to their specification? It also wasn't clear what the response message looked like; at first I wasn't even sure whether it was a two-way service. Some of this isn't the fault of REST itself, but it does highlight that there can be a lot of ambiguity. Something that SOAP provides via WSDL is that the responsibilities are very explicit. When working with external parties, and especially when tight timelines are involved, I do like the explicit nature of SOAP. Certainly, with explicit contracts you have challenges with iterations, as a change to the service payload may slow down the process: the service developer needs to update his WSDL and then provide it to the client developer.

But all in all it was a very good session, and I am happy to say that I did learn to appreciate REST more so than I previously had.

The final session of the day was a BizTalk RFID session put on by BizTalk MVP Winson Woo. I had a little exposure to RFID previously, but he was able to fill in the gaps for me, and I learned a lot from him in that hour and fifteen minutes.

BizTalk RFID is an application that comes with BizTalk, but it is not tightly coupled with BizTalk Server whatsoever. It does not hook into the message box or anything like that. As Winson put it, BizTalk Server is where the real value of RFID comes into play. I am not trying to downplay the role of BizTalk RFID, but it is essentially just reading tags. RFID readers have been out for years so there is nothing earth shattering about this. However, once you have read the data, having the ability to wrap this functionality around business process management and rules engine execution is where the value is really extracted.

Having the ability to read a tag, update your ERP system, and send a message to a downstream provider is where this technology is impressive. A sample scenario could be a product being sent from a value-add supplier to a retailer. The retailer wants to know when they can expect this product, because the quicker they can get their hands on it, the quicker they can sell it. So as a pallet of widgets leaves the warehouse, an RFID reader detects this and pushes the data to BizTalk RFID via TCP/IP; these readers are essentially network resolvable devices. On the BizTalk RFID server you are able to create what I will call a "mini-workflow" via event handlers. Within this "mini-workflow" you may write an event handler that will compose a message and send it to BizTalk using WCF. You may also write this data to the RFID SQL Server database. If you didn't want to use WCF when receiving these RFID reads, BizTalk could always poll the BizTalk RFID database as well. Once BizTalk has this data, it is business as usual. If you need to update your ERP, you would compose a message and send it to the ERP using the appropriate adapter. If you need to construct an EDI message, such as an ASN, and submit it to your retailer, you are able to do that as well.

In scenarios where you have a handheld reader, you have a couple of communication options. These readers will be network resolvable using TCP/IP as well, but in the event that you cannot communicate with the BizTalk RFID system, a store-and-forward persistence mechanism will maintain a record of all of your reads, so that when you place the handheld reader into its cradle these records will be synchronized in the order that they were read.

As these reader devices evolve, so does the monitoring and tooling capabilities. SCOM and SCCM are able to determine the health of readers and push out updates to them. This is a great story as you no longer need to be running around a warehouse trying to determine whether a reader is functioning or not.

So that was TechEd 2009 in a nutshell. I hope that you have enjoyed following this series. You could tell that the tough economic climate had an effect on the attendance and atmosphere this year. There was no night at Universal Studios, which was a little unfortunate; I had a good time when we did this at TechEd 2007 and PDC 2008. All in all, I was happy with my TechEd 2009 experience. It was also a good opportunity to catch up with some of my MVP buddies and the BizTalk product team.

Thursday, May 14, 2009

TechEd 2009 - Day 4

Thursday ended up being a great day for sessions. The two top sessions for me were "Enhancing the SAP User Experience: Building Rich Composite Applications in Microsoft Office SharePoint Server 2007 Using the BizTalk Adapter Pack" and "SOA319 Interconnect and Orchestrate Services and Applications with Microsoft .NET Services"

Enhancing the SAP User Experience: Building Rich Composite Applications in Microsoft Office SharePoint Server 2007 Using the BizTalk Adapter Pack
In this session Chris Kabat and Naresh Koka demonstrated the various ways of exchanging data between SAP and other Microsoft technologies.

Why would you want to extract data from SAP - can't you do everything in SAP?
SAP systems tend to be mission critical sources of truth, or systems of record; bottom line, they tend to be very important. However, it is not practical to expect that all information in the enterprise is contained in SAP. You may have acquired a company that used different software, you may have an industry specific application where an SAP module doesn't exist, or you may have decided that building an application on a different platform was more cost effective. Microsoft is widely deployed across many enterprises, making it an ideal candidate to interoperate with SAP. Microsoft's technologies tend to be easy to use, quick to build and deploy, and generally have a lower Total Cost of Ownership (TCO). Both Microsoft and SAP have recognized this and have formed a partnership to ensure interoperability.

How can Microsoft connect with SAP?
The 4 ways that they discussed included:
  • RFC/BAPI calls from .Net
  • RFC/BAPI calls hosted in IIS
  • RFC/BAPI calls from BizTalk Server
  • .Net Data Providers for SAP

Most of their discussion involved using the BizTalk Adapter Pack 2.0 when communicating with SAP. In case you were not aware, this Adapter Pack can be used in and outside of BizTalk. They demonstrated both of these scenarios.

Best Practice
A best practice that they described was using a canonical contract (or schema) when exposing SAP data through a service. I completely agree with this technique, as you are abstracting some of the complexity away from downstream clients. You are also limiting the coupling between SAP and a consumer of your service. SAP segment/node/field names are not very user friendly. If you wanted a SharePoint app or .Net app to consume your service, you shouldn't have to delegate the pain of figuring out what AUFNR (for example) means to them. Instead, you should use a business friendly term like OrderNumber.
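The idea behind a canonical contract can be sketched as a simple field-name map (illustrative Python only; AUFNR is the order number example from the session, and the other SAP names here are my own assumptions):

```python
# Hypothetical map from SAP technical field names to canonical,
# business-friendly names exposed by the service contract.
SAP_TO_CANONICAL = {
    "AUFNR": "OrderNumber",
    "MATNR": "MaterialNumber",
    "WERKS": "Plant",
}


def to_canonical(sap_record: dict) -> dict:
    """Rename SAP fields to canonical names, dropping unmapped fields
    so internal SAP details never leak to downstream consumers."""
    return {SAP_TO_CANONICAL[key]: value
            for key, value in sap_record.items()
            if key in SAP_TO_CANONICAL}
```

In BizTalk this translation lives in a map between the SAP schema and the canonical schema, so consumers only ever see the friendly names.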

.Net or BizTalk, how do I choose?
Since the BizTalk Adapter Pack can be used inside or outside of BizTalk, how do you decide where to use it? This decision can be very subjective. If you already own BizTalk and have the team to develop and support the interface, then it makes sense to leverage the skills and infrastructure that you have in house and build the application with BizTalk. Using this approach, you can also build out a service catalogue that allows other applications to leverage these services as well. The scale-out story in BizTalk is very good, so you do not have to be too concerned about a service that is used sparingly at first and then mutates into a mission critical service used by many other consuming applications; next thing you know, the service can't scale and your client apps have broken because they cannot connect. Another benefit of using BizTalk is the canonical example that I previously described. Mapping your canonical schema values to your SAP values is very easy: all you have to do is drag a line from your source schema to your destination schema.

If you do not have BizTalk, or the resources to support this scenario, then leveraging the Adapter Pack outside of BizTalk is definitely an acceptable practice. In many ways this type of decision comes down to your organization's appetite for build vs. buy.

From a development perspective, the metadata generation is very similar. Navigating the SAP catalogue is the same whether you are connecting with BizTalk or .Net. The end result is that you get schemas generated for the BizTalk solution vs. code for the .Net solution.


SOA319 Interconnect and Orchestrate Services and Applications with Microsoft .NET Services
Clemens Vasters' session on .Net Services was well done. I saw him speak at PDC and he didn't disappoint again. He gave an introduction demo and explanation about the relay service and direct connect service. Even though these demos are console applications, if you sit back and think about what he is demonstrating it blows your mind. Another demo he gave involved a blog site that he was hosting on his laptop. The blog was publicly accessible because he registered his application in the cloud. This allowed the audience to hit his cloud address over the internet, but it was really his laptop that serviced the web page request. As he put it, "he didn't talk to anyone in order to make any network configuration arrangements". This was all made possible by him having an application that established a connection with the cloud and listened for any requests being made.

As mentioned in my Day 1 post, the .Net Services team has been working hard on some enhancements to the bus. The changes mainly address issues that "sparsely connected receivers" may have. What this means is that if you have a receiver that has problems maintaining a reliable connection to the cloud, that you may need to add some durability.

So how do you add durability to the cloud? Two ways (currently):

  • Routers
  • Queues

Routers have a built-in buffer that will continue to retry and connect to the downstream endpoint, whereas a Queue will persist the message, but the message needs to be pulled by the endpoint. So Routers push, and Queues need to be pulled from.

Another interesting feature of the Router is dealing with bandwidth distribution scenarios. Let's say you are exchanging a lot of information between a client with a lot of bandwidth (say a T1) and someone with little bandwidth (say 128 kbps). The system with more bandwidth will overwhelm the system with less; another way to look at this is someone "drinking from a fire hose". By using buffers, the Router is able to deal effectively with this unfair distribution of bandwidth by only providing as much data to the downstream application as it can handle. At some point the downstream endpoint should be able to catch up with the upstream system once the message generation process starts to slow down.

Routers also have a multi-cast feature. You can think of this much like a UDP multicast scenario: best efforts are made to distribute the message to all subscribers, but there is no durability built in. However, I just mentioned that a Router configured with one subscriber has the ability to take advantage of a buffer. There is nothing stopping you from multi-casting to a set of routers, and therefore you are able to achieve durability.

A feature of Queues that I found interesting was the two modes they operate in. The first is a destructive receive, where the client pulls the message and it is deleted...no turning back. The second mode has the receiver connecting and locking the message so that it cannot be pulled by another source; once the message is retrieved, the client issues the delete command when it is ready.
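The two receive modes can be sketched with a toy in-memory queue (illustrative Python only; this is my own sketch of the semantics described, not the .Net Services API):

```python
class ToyQueue:
    """Toy queue illustrating destructive receive vs. peek-lock."""

    def __init__(self):
        self._messages = []       # pending messages, in arrival order
        self._locked = {}         # lock_id -> message awaiting completion
        self._next_lock_id = 0

    def send(self, msg):
        self._messages.append(msg)

    def receive_and_delete(self):
        # Mode 1: destructive receive -- the message is gone immediately.
        return self._messages.pop(0)

    def peek_lock(self):
        # Mode 2: lock the head message so no other receiver can take it.
        msg = self._messages.pop(0)
        self._next_lock_id += 1
        self._locked[self._next_lock_id] = msg
        return self._next_lock_id, msg

    def complete(self, lock_id):
        # Client confirms processing; only now is the message deleted.
        del self._locked[lock_id]

    def abandon(self, lock_id):
        # Processing failed; put the message back for redelivery.
        self._messages.insert(0, self._locked.pop(lock_id))
```

The peek-lock mode is what gives the "sparsely connected receiver" durability: a crash between peek_lock and complete leaves the message recoverable rather than lost.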

Pricing
I am sure it happens in every Azure presentation, and this one was no different: pricing comes up. We didn't get any hard facts, but we were told that Microsoft's pricing should be competitive with other competing offerings. Both bandwidth and cost per message will be part of the equation, so when you are streaming large messages you are best off looking at direct connections. Direct connections are established initially as a relay, but while the relay is taking place, the .Net Service Bus performs some NAT probing. Approximately 80% of the time, the .Net Service Bus is able to determine the correct settings that allow a direct connection to be established. This improves the speed of the data exchange, as Microsoft is no longer an intermediary, and it also reduces the cost of the data exchange since you are connecting directly with the other party.

Workflow
At this point, it looks like Workflow in the cloud will remain in CTP mode. The feedback the Product Team received strongly encouraged them to deliver .Net 4.0 Workflow in the cloud instead of releasing a cloud WF based upon the 3.5 version. The .Net Services team is trying to do the right thing once, so they are going to wait for .Net 4.0 Workflow to finish cooking before they make it a core offering in the .Net Services stack.

TechEd 2009 - Day 3

Day 3 was a bit of a lighter day for me. I spent most of my time "in the clouds" attending sessions and having some good conversations at the .Net Services booth...thanks Clemens.

In the Architecting Solutions for the Cloud session, Clemens provided an introduction to the .Net Service Bus. For those not familiar with it, the .Net Service Bus is essentially an Internet service bus that allows you to exchange information with other parties using the cloud as an intermediary. Another way of thinking about it is building a bridge between two islands, only the islands in this case are applications. The real power of the .Net Service Bus lies in the WCF-based relay bindings. These bindings allow endpoints to make outbound connections to the bus and then listen for messages, which makes punching holes in your firewall a task of the past. Very cool!!! As mentioned in my Day 1 blog, there are a couple of new features in the March CTP that add more capabilities to the bus, including Routers and Queues. Look for more information on these in my upcoming Day 4 blog.
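The relay idea is worth a tiny sketch. The key inversion is that the listener connects *outbound* to the relay and registers, so no inbound firewall rule is ever needed on its side; senders address the relay, which pushes messages down the already-open connection. This toy in-process model (names invented) only illustrates the concept, not the WCF relay bindings themselves.

```python
class Relay:
    """Toy relay: listeners register via an outbound call, senders
    address the relay, and the relay forwards to the listener."""

    def __init__(self):
        self._listeners = {}

    def register(self, address, callback):
        # Initiated outbound by the on-premises service, so no inbound
        # firewall hole is required where the service lives.
        self._listeners[address] = callback

    def send(self, address, message):
        # The relay pushes the message to the registered listener.
        return self._listeners[address](message)

relay = Relay()
received = []
relay.register("sb://example/orders", received.append)
relay.send("sb://example/orders", "hello")
```

A sender anywhere on the Internet can now reach a service sitting behind NAT and a locked-down firewall, because the data path rides the connection the service itself opened.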

When prompted for a comment about private clouds, the response was very clear: it is not going to happen anytime soon. The reason is that building up your own "private" cloud would be cost prohibitive. People shouldn't underestimate the complexity involved in building an elastic, dynamic platform like Azure.

The second half of the session dealt more with deploying your web applications to the cloud. People have been hosting their web applications with Application Service Providers for years, so what is the difference with Azure? The ability to scale efficiently would be my answer. If you had a viral marketing project underway and were not sure just how much bandwidth or processing power you were going to need, then having an environment like Azure that can scale your app in minutes is a great option. The other thing to consider is that you can scale your application tiers independently. By establishing Web and Worker roles you can allocate resources to serving pages separately from doing the background work. So if you were doing a lot of number crunching in the back end while the web requests were lightweight, you could configure your application to suit your needs. I doubt there are very many ASPs that can provide this type of granularity.
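The web/worker split boils down to a familiar pattern: the front end stays lightweight by dropping work onto a queue, and an independently scaled back end drains it. A minimal sketch, with all names hypothetical and a plain in-process queue standing in for the queue between tiers:

```python
import queue

work_queue = queue.Queue()  # stands in for the queue between the tiers

def web_role_handler(request):
    """Lightweight front end: accept the request, hand off the heavy work."""
    work_queue.put(request)
    return "202 Accepted"

def worker_role_step(results):
    """Back-end crunching, scaled independently of the web tier."""
    job = work_queue.get()
    results.append(job * job)  # placeholder for the expensive computation

results = []
status = web_role_handler(7)
worker_role_step(results)
```

Because the two halves only share the queue, you can add worker instances when the back end is the bottleneck, or web instances when page traffic spikes, without touching the other tier.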

In the demos they showed how easy it is to work with the local Azure Dev Fabric and then how easy it is to deploy to the cloud. I would expect to see more tooling around this experience that allows for scripted deployments, plus some delegated administration in the cloud. Currently, the developer working on the cloud application is the only one who can deploy the project to the cloud. Obviously this model would not fly in many corporate environments, but it is something they are aware of and have on their task list.

When working with ASP.Net web apps, be sure to use ASP.Net Web Application projects instead of ASP.Net Web Sites if you have plans of deploying them to the cloud. The cloud does not support the "Web Site" flavour.

Here are some upcoming dates to look for, although they are not "solid" at this point:
  • Pricing - August 2009
  • Reliability/SLA - shooting for August 2009, but may slide
  • Launch - targeting PDC time frame release (November 17th - 20th 2009)