
Thursday, December 18, 2008

Azure Services Kit - Exercise 2 (IntroServiceBus)

I have recently been evaluating the Azure Services Kit and ran across an error that I figured I would post in case anyone is looking for a solution.

The error occurs in Exercise #2 when trying to initialize the Server project to listen for requests through the .Net Services Service Bus. I would imagine that this error has more to do with HTTP namespace reservations and WCF than with the actual Service Bus.

Error
System.ServiceModel.AddressAccessDeniedException was unhandled
  Message="HTTP could not register URL http://+:80/services/My-Solution-Name/EchoService/. Your process does not have access rights to this namespace (see http://go.microsoft.com/fwlink/?LinkId=70353 for details)."
  Source="System.ServiceModel"
  StackTrace:
    at System.ServiceModel.Channels.SharedHttpTransportManager.OnOpen()
    at System.ServiceModel.Channels.TransportManager.Open(TransportChannelListener channelListener)
    at System.ServiceModel.Channels.TransportManagerContainer.Open(SelectTransportManagersCallback selectTransportManagerCallback)
    at System.ServiceModel.Channels.TransportChannelListener.OnOpen(TimeSpan timeout)
    at System.ServiceModel.Channels.HttpChannelListener.OnOpen(TimeSpan timeout)
    at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
    at System.ServiceModel.Dispatcher.ChannelDispatcher.OnOpen(TimeSpan timeout)
    at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
    at System.ServiceModel.ServiceHostBase.OnOpen(TimeSpan timeout)
    at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
    at System.ServiceModel.Channels.CommunicationObject.Open()
    at Service.Program.Main() in D:\Development\AzureServicesKit\Labs\IntroServiceBus\Ex02-BindingsConnectionModesSample\begin\Service\Program.cs:line 34
    at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
    at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
    at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
    at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Threading.ThreadHelper.ThreadStart()
  InnerException: System.Net.HttpListenerException
    Message="Access is denied"
    Source="System"
    ErrorCode=5
    NativeErrorCode=5
    StackTrace:
      at System.Net.HttpListener.AddAll()
      at System.Net.HttpListener.Start()
      at System.ServiceModel.Channels.SharedHttpTransportManager.OnOpen()
    InnerException:

Solution
The solution itself can be found in the link provided in the message (http://go.microsoft.com/fwlink/?LinkId=70353). I know, what a bonus: an error message that actually points you in the right direction to a solution.

The description of the error is not all that useful, but does cover some scenarios:
Using Windows Communication Foundation (WCF) over HTTP either requires the use of a host, such as Internet Information Services (IIS), or manual configuration of the HTTP settings through the HTTP Server API. This document describes manually configuring WCF when using HTTP and HTTPS.

In order to actually solve the problem, I ran this command (from an elevated command prompt) on my Vista machine:

netsh http add urlacl url=http://+:80/services/My-Solution-Name/EchoService/ user=domain\user

Note:
  • "My-Solution-Name" is a placeholder where I inserted my Azure solution name. You would enter yours in its place.
  • If your machine is not part of a domain, just use the local machine name instead of the domain.
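
If you want to verify that the reservation took, or need to clean it up later, the same netsh tool can list and delete URL ACLs. A quick sketch (Windows Vista/Server 2008, elevated prompt; the URL and account are the same placeholders as above):

```
rem List all current HTTP URL reservations (look for your service URL)
netsh http show urlacl

rem Add the reservation (substitute your own solution name and account)
netsh http add urlacl url=http://+:80/services/My-Solution-Name/EchoService/ user=domain\user

rem Remove the reservation once you no longer need it
netsh http delete urlacl url=http://+:80/services/My-Solution-Name/EchoService/
```

On XP/Server 2003 the equivalent tool is httpcfg.exe rather than netsh.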

Now that I have some spare time, I plan on spending some more time with Azure and more specifically .Net Services. Hopefully, I will have some interesting posts regarding these technologies.

Sunday, November 2, 2008

PDC Day 4 - More time spent in the Cloud

On the 4th day of PDC I spent more time in the Cloud learning about some architectural scenarios that the Cloud is capable of supporting.

As part of the PDC Symposium series, I attended Gianpaolo Carraro's session called "Head in the Cloud, Feet on the Ground". The objective of this session was to further investigate the Architectural challenges and opportunities in the cloud.

I found this session to be really helpful, as so many of the other sessions had been about the technical bits that make the Cloud technology "cool". This session focused on decisions that an Architect or even a CIO may face when thinking about which applications belong in the cloud and which applications may be better hosted on premise (meaning locally, within your organization).

He also attacked this subject from two perspectives:
  1. Corporate IT perspective
  2. ISV Perspective

Since I live in Corporate IT, I will spend more time in this area.

On Premise
With applications that are hosted locally, you tend to have more control than with those in the cloud. Think of this in terms of a custom application versus an application that runs in the cloud, like Salesforce. If you have a custom-built application, then your organization controls the feature set and has more agility when making a change, as fewer people are involved. A challenge with this approach is that you have very little economy of scale, as only your organization can extract value out of this application.

Cloud Applications
Conversely, an application that runs in the cloud has an opportunity for great economy of scale. Since many customers may be paying for this application, the savings are returned to the customers as collectively they are paying for the software to be developed. The challenge with this approach is that a company may need to alter an internal business process to align with the way that the application has been designed.

Conclusion
In the end it came down to determining which applications are core to your business and provide a competitive advantage. These are the types of applications that are ideal candidates to be built and hosted internally. The commodity applications, like mail services or timesheet applications, that are required to run your business but do not necessarily make your business more competitive are candidates for cloud-based applications.

Best of both worlds
What I find extremely compelling is the idea of building an on-premise service, yet leveraging the capabilities of the cloud to expose this service to the world. This alleviates you from being overwhelmed with challenges related to firewalls, NATs and security. I find this of even greater value when you introduce several B2B partner scenarios: you don't need to be concerned with each of your partners' connectivity requirements. You let the .Net Service Bus deal with those complexities.

Co-location
Also worth mentioning is a middle ground called co-location, or managed servers. In this scenario you build a custom application but have it hosted by a 3rd party, so you are not responsible for the ongoing maintenance of the server that is hosting your application. For instance, you may be a small or up-and-coming company that just does not want to make a large up-front investment in infrastructure, but wants to build an application and have it hosted in an external environment. This allows you to reduce the amount of up-front capital expenditure.

Looking Ahead
While some of this discussion is nothing earth-shattering or ground-breaking, I am curious to see how monitoring will evolve with these cloud-based scenarios. For instance, with our current on-premise applications we run Microsoft Operations Manager (MOM) and System Center Operations Manager (SCOM) to provide us with some visibility into the health of our applications. If in the future we decide that we want to host a WF workflow in the cloud, what type of tooling will be available to inform us of any issues that may be occurring? This is a scenario that Microsoft is definitely aware of, so I will be keeping an eye open for what type of tooling becomes available.

Thursday, October 30, 2008

PDC Day 3

The highlights for me on Day 3 were the ".NET Services: Connectivity, Messaging, Events, and Discovery with the Service Bus" and ""Dublin": Hosting and Managing Workflows and Services in Windows Application Server" sessions.

This blog post will provide a summary of both of these sessions and any additional 'tid-bits' that stuck out for me.

.NET Services: Connectivity, Messaging, Events, and Discovery with the Service Bus
Clemens Vasters presented the .Net Services session, and did he ever do a good job. Not only is he extremely technically gifted, but he also has good presentation skills (this reminds me that I need to fill out a review for this session).

What are .Net Services?
.Net Services is the successor to what was formerly called "BizTalk Services" and is one of the core components of the Azure platform. I would describe .Net Services as providing the ability to host .Net applications in the cloud (off-premise) and the ability to traverse firewalls by using relay bindings in messaging scenarios. Think of a scenario where you have an on-premise service that you would like to expose to the world, but don't want to deal with some of the challenges that firewalls and security bring.

Dealing with firewalls is becoming a greater challenge as they are extremely pervasive. The use of NAT (Network Address Translation) devices also makes it difficult to connect with publicly exposed services; this is a result of the IPv4 address supply being pretty much exhausted.

Service Bus Capabilities
.Net Services provides Service Bus capabilities in the cloud. What this essentially means is that you are able to place a message on the wire and let the Service Bus look after directing that message to the appropriate subscriber. Subscriptions are handled by the Service Bus naming system and are URI based.
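
As a rough sketch of what the URI-based addressing looked like at the time (this is pseudocode-style illustration of the CTP-era SDK, not the definitive API; the sb:// scheme, host name and EchoService type are assumptions based on the lab):

```csharp
using System;
using System.ServiceModel;

// Illustrative CTP-era sketch. The service is addressed by a URI rooted in
// your solution's namespace within the Service Bus naming system:
Uri address = new Uri(
    "sb://servicebus.windows.net/services/My-Solution-Name/EchoService/");

// Opening a host against that URI registers the endpoint with the naming
// system; the relay then forwards messages from senders targeting the
// same URI to this listener.
ServiceHost host = new ServiceHost(typeof(EchoService), address);
host.Open();

Console.WriteLine("Listening at {0}", address);
Console.ReadLine();   // keep the listener alive until a key is pressed
host.Close();
```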

Message Confidentiality
Transport security, in the form of SSL, is used for all connections. Microsoft has no need to look at the payload of your messages and claims that they do not. They welcome, and to an extent encourage, you to use message-level encryption if you have concerns about whether or not your data is safe. A question that I repeatedly heard was: what type of audience do you expect to use these services, as certain agencies would "never" trust Microsoft with their data? For instance, would the level of privacy that Microsoft is offering be sufficient to meet the criteria of governments or health care organizations? Since .Net Services is still in CTP mode, I never did hear a real definitive answer to the question, but it is definitely something that is on Microsoft's radar.

Bindings
A very cool demo and discussion of the NetTcpRelayBinding hybrid connection mode was included in this presentation. The goal of this binding is to try to establish a direct peer-to-peer connection between the service consumer and service provider. You may be asking: how is this accomplished? At a (very) high level, a relay connection is established that includes some NAT probing. Microsoft uses the data obtained during this NAT probing to form an educated guess about which NAT settings need to be used in order to establish a direct connection between the parties. If a direct connection can be established, then the message payload is sent directly to the destination system. If a direct connection cannot be established, then the relay connection is used to send data to the destination system via the .Net Service Bus. Since .Net Services will use a "pay as you go" approach, the data sent over the relay connection would be subject to the "cost" model.
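
If memory serves from the demo, enabling this behavior was just a matter of selecting the hybrid connection mode on the relay binding. Something along these lines (again a sketch of the CTP-era API; the contract type and URI are illustrative, and property names may differ in later releases):

```csharp
using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

// Sketch only: a relay binding configured for hybrid connectivity.
// The connection starts relayed, both sides' NATs are probed, and the
// channel upgrades to a direct socket when one can be established.
NetTcpRelayBinding binding = new NetTcpRelayBinding();
binding.ConnectionMode = TcpRelayConnectionMode.Hybrid;

ServiceHost host = new ServiceHost(typeof(EchoService));
host.AddServiceEndpoint(
    typeof(IEchoContract),   // illustrative contract from the lab
    binding,
    "sb://servicebus.windows.net/services/My-Solution-Name/EchoService/");
host.Open();
```

The nice part of this design is that the upgrade is transparent to the application: your contract and code don't change, only the binding's connection mode does.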


"Dublin": Hosting and Managing Workflows and Services in Windows Application Server

This was my first real good look at the technology since the SDR sessions that were held at the MVP Global Summit in April. While I cannot discuss what I saw in April, I am able to say that the Dublin team has been doing some good work and has made progress.

When will Dublin be available
No public date was given other than "shortly" after Visual Studio 10 is released. This means we are probably looking at 1.5 - 2 years from now.

It just works
A slogan that is being used by the Connected Systems team. The idea behind this slogan is that in the past developers have had to either implement some features themselves or tweak their WCF/WF application in order to get it to work the way they want it to. In Dublin, more tooling and visibility into your WCF/WF applications is provided. The goal is that you design/build/test your application and after that ..."it just works".

Feature list (non-exhaustive)

  • Management capabilities through an IIS Manager snap-in tool
  • Management APIs in the form of PowerShell cmdlets
  • Hosting (Durable Timer Service/Discovery Service)
  • Persistence (scale-out and reliability)
  • Monitoring (WCF and WF tracking)
  • Messaging (Forwarding Service)
  • System Center integration
  • Model deployment via Quadrant

Management
The management experience has some of the "look and feel" that you would find in BizTalk. The difference is that there is no new or separate tool; additional functionality is "plugged" into IIS Manager. The rationale behind this decision was that Microsoft did not want to introduce a new tool that would also introduce another learning curve. By using IIS Manager, they could leverage an existing tool, which should allow people to get up to speed quicker since they may already be familiar with it.

Model Deployment
They showed a cool demo where they were able to Model a workflow in the new Quadrant tool. They were then able to deploy the Model to the runtime. This demonstrated the vast integration between the technologies and perhaps gave us a real world glimpse into how we will develop and deploy software in the future.

Looking Ahead
There were a few things that I believe require some additional investigation. I do realize that Dublin is currently a CTP so it may just be a matter of having more time to include some of these features.

  • More details in IIS Manager surrounding instances. Having just a dialog box pop up indicating the number of successful or failed instances is not quite enough information.
  • No workflow debug/replay capabilities. What I am looking for here is an experience similar to the Orchestration Debugger that essentially allows you to replay a suspended (or successful) instance.
  • A GUI for the Forwarding Service configuration. While PowerShell is a great tool and I can see it being very useful, something like inputting an XPath statement into a GUI would be my preferred method. While I encourage scriptable deployments, a change like this could be made on the fly, and I may not always want to switch into a command-based session to make it.
  • Unless I misunderstood, you have to deploy the installation 'package' on each node in your Dublin "group". There was no way to "push" the application to all nodes in a "group". While this is probably achievable via PowerShell, it would be nice to have more visibility into other servers that may be running the same application.

So, all in all, I have listed some pretty minor enhancements. Overall, I think the Dublin team has done some great work with the technology, and remember that it is still early.