This blog post provides a summary of both of these sessions, plus any additional 'tidbits' that stuck out for me.
.NET Services: Connectivity, Messaging, Events, and Discovery with the Service Bus
Clemens Vasters presented the .NET Services session, and did he ever do a good job. Not only is he extremely technically gifted, but he also has strong presentation skills (this reminds me that I need to fill out a review for this session).
What are .NET Services?
.NET Services is the successor to what was formerly called "BizTalk Services" and is one of the core components of the Azure platform. I would describe .NET Services as providing the ability to host .NET applications in the cloud (off-premises) and to traverse firewalls by using Relay Bindings in messaging scenarios. This is useful, for instance, when you have an on-premises service that you would like to expose to the world but don't want to deal with some of the challenges that firewalls and security bring.
Dealing with firewalls is becoming a bigger challenge as they are extremely pervasive. The use of NAT (Network Address Translation) devices also makes it difficult to connect to publicly exposed services; this is a result of the IPv4 address supply being pretty much exhausted.
Service Bus Capabilities
.NET Services provides Service Bus capabilities in the cloud. What this essentially means is that you are able to place a message on the wire and let the Service Bus look after directing that message to the appropriate subscriber. Subscriptions are handled by the Service Bus naming system and are URI based.
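To make the URI-based subscription idea concrete, here is a minimal Python sketch. This is not the real Service Bus API; the `sb://` address, the `RelaySketch` class, and the handler shape are all hypothetical, intended only to show "register at a URI, send to a URI, let the bus route":

```python
# Toy model of URI-based relay routing in a service bus.
# None of these names come from the actual .NET Services API.

class RelaySketch:
    """Routes each message to the handler registered at its destination URI."""

    def __init__(self):
        self.listeners = {}  # URI -> handler callable

    def register(self, uri, handler):
        # A service "opens an endpoint" by registering at a URI in the
        # naming system (hypothetical sb://... scheme for illustration).
        self.listeners[uri] = handler

    def send(self, uri, message):
        # The bus looks up the subscriber for the URI and delivers.
        handler = self.listeners.get(uri)
        if handler is None:
            raise LookupError(f"no listener registered at {uri}")
        return handler(message)


bus = RelaySketch()
bus.register("sb://contoso.servicebus.example/orders",
             lambda msg: f"handled: {msg}")
result = bus.send("sb://contoso.servicebus.example/orders", "order #1")
```

The point of the sketch is that the sender never needs the service's network address, only its URI; the bus (in the real system, Microsoft's relay in the cloud) owns the mapping from names to listeners.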
Transport security, in the form of SSL, is used for all connections. Microsoft has no need to look at the payload of your messages and claims that it does not. They welcome, and to an extent encourage, you to use message-level encryption if you have concerns about whether your data is safe. A question that I repeatedly heard was: what type of audience do you expect to use these services, as certain agencies would "never" trust Microsoft with their data? For instance, could the level of privacy that Microsoft is offering be sufficient to meet the criteria of governments or health care organizations? Since .NET Services is still in CTP mode, I never did hear a definitive answer to the question, but it is definitely something on Microsoft's radar.
A very cool demo and discussion of the NetTcpRelayBinding in Hybrid connection mode was included in this presentation. The goal of this mode is to try to establish a direct peer-to-peer connection between the service consumer and the service provider. You may be asking: how is this accomplished? At a (very) high level, a relay connection is established that includes some NAT probing. Microsoft uses the data obtained during this NAT probing to form an educated guess about what NAT settings are needed to establish a direct connection between the parties. If a direct connection can be established, the message payload is sent directly to the destination system. If a direct connection cannot be established, the relay connection is used to send data to the destination system via the .NET Service Bus. Since .NET Services will use a pay-as-you-go approach, data sent over the relay connection would be subject to the "cost" model.
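The upgrade decision described above can be sketched as follows. This is only an illustration of the logic, not the binding's actual implementation; the function name and the boolean standing in for the NAT probe result are made up:

```python
def deliver(payload: bytes, nat_probe_succeeded: bool) -> str:
    """Mimic the Hybrid connection-mode decision (illustration only).

    nat_probe_succeeded stands in for the outcome of the NAT probing
    performed over the initial relay connection; the real binding
    negotiates this automatically and upgrades the connection in place.
    """
    if nat_probe_succeeded:
        # Direct peer-to-peer path: traffic bypasses the relay, and
        # therefore the pay-as-you-go relay "cost" model as well.
        return "direct"
    # Otherwise traffic stays on the relayed path through the
    # .NET Service Bus in the cloud.
    return "relay"
```

The design point worth noticing is that the relay is always available as the fallback, so the application gets connectivity either way; the direct path is purely an optimization for latency and cost.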
"Dublin": Hosting and Managing Workflows and Services in Windows Application Server
This was my first real good look at the technology since the SDR sessions that were held at the MVP Global Summit in April. While I cannot discuss what I saw in April, I am able to say that the Dublin team has been doing some good work and has made progress.
When will Dublin be available?
No public date was given other than "shortly" after Visual Studio 2010 is released. This means we are probably looking at 1.5-2 years from now.
It just works
This is a slogan being used by the Connected Systems team. The idea behind it is that, in the past, developers have had to either implement some features themselves or tweak their WCF/WF applications to get them to work the way they want. Dublin provides more tooling and visibility into your WCF/WF applications. The goal is that you design/build/test your application and after that... "it just works".
Feature list (non-exhaustive)
- Management capabilities through an IIS Manager snap-in tool
- Management APIs in the form of PowerShell cmdlets
- Hosting (Durable Time Service/Discovery Service)
- Persistence (Scale-out and Reliability)
- Monitoring (WCF and WF Tracking)
- Messaging (Forwarding Service)
- System Center Integration
- Modeling Deployment via Quadrant
The management experience had some of the "look and feel" you would find in BizTalk. The difference is that there is not a new or separate tool; additional functionality is "plugged" into IIS Manager. The rationale behind this decision was that Microsoft did not want to introduce a new tool that would also introduce another learning curve. By using IIS Manager, they could leverage an existing tool, which should allow people to get up to speed more quickly since they may already be familiar with it.
They showed a cool demo in which they modeled a workflow in the new Quadrant tool and then deployed the model to the runtime. This demonstrated the deep integration between the technologies and perhaps gave us a real-world glimpse into how we will develop and deploy software in the future.
There were a few things that I believe require some additional investigation. I do realize that Dublin is currently a CTP so it may just be a matter of having more time to include some of these features.
- More details in IIS Manager surrounding instances. Having just a dialog box pop up indicating the number of successful or failed instances is not quite enough information.
- No Workflow Debug/Replay capabilities. What I am looking for here is a similar experience to the Orchestration Debugger that essentially allows you to replay a suspended (or successful) instance.
- Provide a GUI for the Forwarding Service configuration. While PowerShell is a great tool and I can see it being very useful, something like inputting an XPath statement into a GUI would be my preferred method. While I encourage scriptable deployments, a change like this could be made on the fly, and I may not always want to switch into a command-based session to make it.
- Unless I misunderstood, you have to deploy the installation 'package' on each node in your Dublin "group"; there was no way to "push" the application to all nodes in a "group". While this is probably achievable via PowerShell, it would be nice to have more visibility into other servers that may be running the same application.
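On the forwarding point in the list above: to illustrate what an XPath-driven routing rule means in practice, here is a small Python sketch. The message shape and the rule are invented for the example, and Python's `xml.etree` supports only a limited XPath subset; the real Forwarding Service configuration is, of course, its own mechanism:

```python
# Illustration of an XPath-style forwarding rule: forward a message
# only if the rule expression matches its XML body.
import xml.etree.ElementTree as ET


def matches(rule_xpath: str, message_xml: str) -> bool:
    """Return True if the (limited) XPath rule matches the message body."""
    root = ET.fromstring(message_xml)
    return root.find(rule_xpath) is not None


# Hypothetical message: forward EMEA orders to an EMEA endpoint.
msg = "<Order><Region>EMEA</Region></Order>"
emea_rule = ".//Region[.='EMEA']"
apac_rule = ".//Region[.='APAC']"
```

Whether entered through PowerShell or a GUI, the configuration boils down to pairs of (expression, destination) like this; the GUI wish is purely about the editing experience, not the model.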
So, all in all, I have listed some pretty minor enhancements. Overall, I think the Dublin team has done some great work with the technology, and remember that it is still early.