Thursday, May 21, 2009

Another BizTalk Online Resource -

I just wanted to plug another good BizTalk resource: Saravana Kumar has given the site an extensive facelift and it looks great!

On the site, you will find links to blogs, installation guides, BizTalk posters, hot news, webcasts, tutorials and links to the latest BizTalk books.

While I am in the plugging mood, a book that I have started to read is Richard Seroter's SOA Patterns with BizTalk Server 2009. So far I am a couple of chapters in and I am impressed with it. I will post a more comprehensive review once I have completed it.

Friday, May 15, 2009

TechEd 2009 - Day 5

Friday was the final day for TechEd 2009 North America. There were fewer people around and it had that feeling of writing a final exam on the last day of the semester. I started out the day with BizTalk and ended with a BizTalk RFID session. In between those two sessions I spent some time learning about DUET and WCF. Read the rest of the blog for more details.

Stephen Kaufman had a good session on BizTalk 2009 Application Lifecycle Management (ALM) using the new features of BizTalk 2009 and its integration with Team Foundation Server (TFS). BizTalk is now a first-class citizen in the Visual Studio 2008 family and therefore gains added benefits when you deploy TFS as well, including bug tracking, task management and automated builds, to name a few.

Next, Stephen ran us through some of the new unit testing capabilities. He went through writing unit tests for both maps and schemas. Being able to write these tests using Microsoft tools and then publish the results to TFS is extremely powerful. Writing and executing unit tests is a good practice to follow. Like most developers I don't mind writing and executing the tests, but I hate documenting the results. If the results can be published to a tracking tool automatically, then I am all for this type of functionality.

Automating your builds was next on the agenda. If you are using automated builds in BizTalk 2006, then chances are you have a build server with Visual Studio installed on it. This is no longer required, as there is an option in the BizTalk 2009 installation that allows you to install only the build components. This allows you to execute build scripts using MSBuild without needing Visual Studio. I strongly suggest automating your builds and deployments; the benefits are consistency, repeatability and time savings. Those of you who were around in the BizTalk 2004 days know how much of a pain deployment was back then. Having a scripted deployment is truly a breath of fresh air compared to those days.

The DUET session was good; however, I walked away a little discouraged by the complexity of the DUET architecture. Being a Microsoft developer at an organization that uses SAP as a system of record forces you to find different ways to integrate with SAP. From a middleware perspective, this path is very clear: BizTalk. However, there are situations where BizTalk may not be the clear-cut choice. I tend to think of these scenarios as involving user-to-system interactions and potentially human workflow. BizTalk is very good with asynchronous scenarios. While it certainly can function in synchronous scenarios, you need to be careful to prevent client-side timeouts.

The demos were impressive: the speaker showed a few scenarios where a user working in Outlook had the ability to book time against project codes, submit a leave request and generate reports. In scenarios where approval is needed, those requests get forwarded to the appropriate person (a manager) as configured in SAP. The interactions were very smooth and responsive.

Now for the discouraging part…the amount of infrastructure and prerequisites to get DUET functioning is significant. It would only be fair at this point to disclose that I am far from a DUET expert, but this is the way I interpreted the requirements:

Client Computers:
  • DUET client
  • Local SQL Server Express (yes – must have)
  • Hidden mail folder

Exchange Server:
  • DUET service mailbox

SAP Server:
  • Enterprise Services must be enabled
  • DUET add-on
  • Additional ABAP modules

DUET Server:
  • IIS – Microsoft DUET server components
  • J2EE – SAP DUET server components
  • SQL Server

So as you can see, there is a tremendous amount of infrastructure required to get DUET up and running. What also concerns me is the skill set required to implement these functions. In my experience, I have not met too many people who understand both the SAP and Microsoft technology stacks. The other thing that tends to happen on these types of projects is the finger pointing that occurs between the two technology groups. Within DUET, at least from my perspective, the line between roles and responsibilities becomes very blurry. The only way that I can see these types of implementations succeeding is to have a “DUET team” made up of both SAP and Microsoft resources whose mission is to ensure the success of the technology. If you leave these two teams segregated, I think you are in for a very long ride on a bumpy road.

Perhaps I am jumping to conclusions here, but I would love to hear any real world experiences if anyone is willing to share.

The next session I attended was a WCF session put on by Jon Flanders. In case you haven’t heard of Jon, he is a Connected Systems guru, having worked a lot with BizTalk, WF, WCF and RESTful services. He is also a Microsoft MVP, a trainer at Pluralsight and an accomplished author.

He took a bit of a gamble in his session, but I think it paid off. He essentially wiped the slate clean and prompted the audience for the topics that we would like more info on. He did make sure that the topics listed in the abstract were covered so that no one felt slighted.
If you follow Jon’s work, you will quickly find out that he is a very big supporter of RESTful services. It was really interesting to gain more insight into why he is such a proponent of the technology.

When you think about the various bindings that are available in WCF, you tend to think that basicHttpBinding is the “safe” bet, meaning that it allows for the most interoperability between your service and a variety of clients. These clients may be running on Java or even older versions of .Net (think ASMX). Jon quickly changed my way of thinking with regards to interoperability: the webHttpBinding is truly the most interoperable binding. There was a bit of sidebar jabbering between Jon and Clemens Vasters regarding this statement, but I will leave that one alone for now. The rationale that Jon used was that HTTP and XML are extremely pervasive across nearly every modern platform. He gave an example from his consulting experience in which a customer running a version of Linux was trying to connect to an ASMX web service via SOAP. To get this scenario working, they had to resort to some hacks so that the client and service could communicate. When they brought Jon in, he convinced them to change the service to a RESTful service, and once they had done that there were no more interoperability challenges. It was a very good scenario that he described and it certainly opened my eyes to REST.
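
Jon's interoperability point can be sketched in a few lines: a client talking to a plain-HTTP/XML service needs nothing more than an HTTP library and an XML parser, both of which exist on virtually every platform. This is a minimal sketch in Python, with a hypothetical "order" resource standing in for a real service; it is not WCF code, just an illustration of why the webHttpBinding style travels so well.

```python
# Minimal sketch: a plain-HTTP service returning XML, and a client that
# consumes it with nothing but an HTTP GET and an XML parse. The resource
# name and payload are invented for illustration.
import threading
import xml.etree.ElementTree as ET
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class OrderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a simple XML representation of the requested resource.
        body = b"<Order><Id>42</Id><Status>Shipped</Status></Order>"
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), OrderHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client": one GET, one parse -- no SOAP envelope, no WSDL tooling.
with urlopen(f"http://127.0.0.1:{server.server_port}/orders/42") as resp:
    order = ET.fromstring(resp.read())

print(order.findtext("Status"))  # -> Shipped
server.shutdown()
```

Any platform that can issue that GET and parse that XML can consume the service, which is the heart of the argument.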

I do consider myself to be more of a contract-first type of guy, especially when communicating with external parties. Having recently communicated with a really big internet company over REST, I was frustrated at times because it wasn’t explicitly clear what my responsibility as a client was when submitting messages to their service. Sure, there was a high-level design document that described the various fields and a sample XML message, but what did I have to verify that I was constructing my message according to their specification? It also wasn’t clear what the response message looked like. At first I wasn’t even sure whether it was a two-way service. Some of this isn’t the fault of REST itself, but it does highlight that there can be a lot of ambiguity. With SOAP, the WSDL makes the responsibilities very explicit. When working with external parties, and especially when tight timelines are involved, I do like the explicit nature of SOAP. Certainly, making contracts explicit brings its own challenges when iterating, as a change to the service payload may slow down the process: the service developer needs to update the WSDL and then provide it to the client developer.

But all in all it was a very good session, and I am happy to say that I did learn to appreciate REST more so than I previously had.

The final session of the day was a BizTalk RFID session put on by BizTalk MVP Winson Woo. I had a little exposure to RFID previously, but he was able to fill in the gaps for me and I learned a lot from him in that hour and fifteen minutes.

BizTalk RFID is an application that ships with BizTalk, but it is not tightly coupled to BizTalk Server whatsoever. It does not hook into the MessageBox or anything like that. As Winson put it, BizTalk Server is where the real value of RFID comes into play. I am not trying to downplay the role of BizTalk RFID, but it is essentially just reading tags. RFID readers have been out for years, so there is nothing earth-shattering about this. However, once you have read the data, having the ability to wrap this functionality in business process management and rules engine execution is where the value is really extracted.

Having the ability to read a tag, update your ERP system and send a message to a downstream provider is where this technology is impressive. A sample scenario could be some product being sent from a value-add supplier to a retailer. The retailer wants to know when they can expect this product, because the quicker they can get their hands on it, the quicker they can sell it. So as a pallet of widgets leaves the warehouse, an RFID reader detects this and pushes the data to BizTalk RFID via TCP/IP; these readers are essentially network-resolvable devices. On the BizTalk RFID server you are able to create what I will call a “mini-workflow” via event handlers. Within this “mini-workflow” you may write an event handler that will compose a message and send it to BizTalk using WCF. You may also write this data to the RFID SQL Server database. If you didn’t want to use WCF to receive these RFID reads, BizTalk could always poll the BizTalk RFID database instead. Once BizTalk has this data, it is business as usual. If you need to update your ERP, you would compose a message and send it to the ERP using the appropriate adapter. If you need to construct an EDI ASN message and submit it to your retailer, you are able to do that as well.
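
The "mini-workflow" idea above can be sketched as a tiny event-handler pipeline: each tag read raises an event, and the handlers you have registered decide what happens next (log it to the database, forward it to BizTalk, or both). This is a toy Python illustration, not the BizTalk RFID API; the handler and field names are invented.

```python
# Toy sketch of the event-handler pipeline described above. The two "sinks"
# are just lists standing in for the RFID SQL Server database and for a WCF
# send to BizTalk Server.
from datetime import datetime, timezone

sql_log = []    # stands in for the BizTalk RFID database
forwarded = []  # stands in for messages sent on to BizTalk via WCF

def log_to_database(read):
    sql_log.append(read)

def forward_to_biztalk(read):
    forwarded.append({"TagId": read["tag_id"], "ReadAt": read["read_at"]})

HANDLERS = [log_to_database, forward_to_biztalk]

def on_tag_read(tag_id):
    """Fired when a reader pushes a read over TCP/IP; runs every handler."""
    read = {"tag_id": tag_id,
            "read_at": datetime.now(timezone.utc).isoformat()}
    for handler in HANDLERS:
        handler(read)

on_tag_read("PALLET-0001")
print(len(sql_log), len(forwarded))  # -> 1 1
```

The point of the design is that the read event is decoupled from what you do with it: adding an ERP update is just registering one more handler.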

In scenarios where you have a handheld reader, you have a couple of communication options. These readers will be network resolvable using TCP/IP as well, but in the event that you cannot communicate with the BizTalk RFID system, a store-and-forward persistence mechanism will maintain a record of all of your reads, so that when you place the handheld reader into its cradle these records will be synchronized in the order that they were read.

As these reader devices evolve, so do the monitoring and tooling capabilities. SCOM and SCCM are able to determine the health of readers and push out updates to them. This is a great story, as you no longer need to be running around a warehouse trying to determine whether a reader is functioning or not.

So that was TechEd 2009 in a nutshell. I hope that you have enjoyed following this series. You could tell that the tough economic climate had an effect on the attendance and atmosphere this year. There was no night at Universal Studios, which was a little unfortunate; I had a good time when we did this at TechEd 2007 and PDC 2008. All in all, I was happy with my TechEd 2009 experience. It was also a good opportunity to catch up with some of my MVP buddies and the BizTalk product team.

Thursday, May 14, 2009

TechEd 2009 - Day 4

Thursday ended up being a great day for sessions. The two top sessions for me were "Enhancing the SAP User Experience: Building Rich Composite Applications in Microsoft Office SharePoint Server 2007 Using the BizTalk Adapter Pack" and "SOA319 Interconnect and Orchestrate Services and Applications with Microsoft .NET Services"

Enhancing the SAP User Experience: Building Rich Composite Applications in Microsoft Office SharePoint Server 2007 Using the BizTalk Adapter Pack
In this session Chris Kabat and Naresh Koka demonstrated the various ways of exchanging data between SAP and other Microsoft technologies.

Why would you want to extract data from SAP - can't you do everything in SAP?
SAP systems tend to be mission critical, sources of truth or systems of record. Bottom line: they tend to be very important. However, it is not practical to expect that all information in the enterprise is contained in SAP. You may have acquired a company that used different software, you may have an industry-specific application where a SAP module doesn't exist, or you may have decided that building an application on a different platform was more cost effective. Microsoft is widely deployed across many enterprises, making it an ideal candidate to interoperate with SAP. Microsoft's technologies tend to be easy to use, quick to build and deploy, and generally have a lower TCO (total cost of ownership). Both Microsoft and SAP have recognized this and have formed a partnership to ensure interoperability.

How can Microsoft connect with SAP?
The four ways that they discussed were:
  • RFC/BAPI calls from .Net
  • RFC/BAPI calls hosted in IIS
  • RFC/BAPI calls from BizTalk Server
  • .Net Data Providers for SAP

Most of their discussion involved using the BizTalk Adapter Pack 2.0 when communicating with SAP. In case you were not aware, this adapter pack can be used both inside and outside of BizTalk. They demonstrated both of these scenarios.

Best Practice
A best practice that they described was using a canonical contract (or schema) when exposing SAP data through a service. I completely agree with this technique, as you are abstracting some of the complexity away from downstream clients. You are also limiting the coupling between SAP and a consumer of your service. SAP segment/node/field names are not very user friendly. If you want a SharePoint app or .Net app to consume your service, you shouldn't delegate the pain of figuring out what AUFNR (for example) means to them. Instead you should use a business-friendly term like OrderNumber.
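
The canonical-contract idea boils down to translating cryptic SAP field names into business-friendly ones at the service boundary, so consumers never see them. Here is a minimal Python sketch of that translation; AUFNR comes from the session example, while the other fields and the helper function are invented for illustration.

```python
# Sketch of a canonical-contract mapping at the service boundary. Only the
# AUFNR example comes from the session; MATNR/WERKS and the function are
# hypothetical, not a real SAP segment definition.
SAP_TO_CANONICAL = {
    "AUFNR": "OrderNumber",
    "MATNR": "MaterialNumber",
    "WERKS": "Plant",
}

def to_canonical(sap_record: dict) -> dict:
    """Map a raw SAP record onto the canonical contract, dropping any
    fields the contract does not expose."""
    return {
        canonical: sap_record[sap_field]
        for sap_field, canonical in SAP_TO_CANONICAL.items()
        if sap_field in sap_record
    }

raw = {"AUFNR": "4711", "WERKS": "1000", "INTERNAL_FLAG": "X"}
print(to_canonical(raw))  # -> {'OrderNumber': '4711', 'Plant': '1000'}
```

Note that the internal-only field is dropped entirely: the consumer is coupled to the canonical contract, not to SAP.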

.Net or BizTalk, how do I choose?
Since the BizTalk Adapter Pack can be used inside or outside of BizTalk, how do you decide where to use it? This decision can be very subjective. If you already own BizTalk and have the team to develop and support the interface, then it makes sense to leverage the skills and infrastructure that you have in house and build the application with BizTalk. Using this approach, you can also build out a service catalogue that allows other applications to leverage these services as well. The scale-out story in BizTalk is very good, so you do not have to be too concerned about a service that is used sparingly at first and then mutates into a mission-critical service used by many other consuming applications; otherwise, the next thing you know the service can't scale and your client apps break because they cannot connect. Another benefit of using BizTalk is the canonical example that I previously described. Mapping your canonical schema values to your SAP values is very easy: all you have to do is drag a line from your source schema to your destination schema.

If you do not have BizTalk, or the resources to support this scenario, then leveraging the Adapter Pack outside of BizTalk is definitely an acceptable practice. In many ways this type of decision comes down to your organization's appetite for build vs. buy.

From a development perspective the metadata generation is very similar. Navigating through the SAP catalogue is the same whether you are connecting with BizTalk or .Net. The end result is that you get schemas generated for the BizTalk solution versus code for the .Net solution.

SOA319 Interconnect and Orchestrate Services and Applications with Microsoft .NET Services
Clemens Vasters' session on .Net Services was well done. I saw him speak at PDC and he didn't disappoint again. He gave an introductory demo and explanation of the relay service and the direct connect service. Even though these demos were console applications, if you sit back and think about what he was demonstrating, it blows your mind. Another demo he gave involved a blog site that he was hosting on his laptop. The blog was publicly accessible because he had registered his application in the cloud. This allowed the audience to hit his cloud address over the internet, but it was really his laptop that serviced the web page request. As he put it, "he didn't talk to anyone in order to make any network configuration arrangements". This was all made possible by an application on his laptop that established a connection with the cloud and listened for any requests being made.

As mentioned in my Day 1 post, the .Net Services team has been working hard on some enhancements to the bus. The changes mainly address issues that "sparsely connected receivers" may have: if you have a receiver that has problems maintaining a reliable connection to the cloud, you may need to add some durability.

So how do you add durability to the cloud? Two ways (currently):

  • Routers
  • Queues

Routers have a built-in buffer that will continue to retry connecting to the downstream endpoint, whereas a Queue will persist the message, but the message needs to be pulled by the endpoint. So Routers push, and Queues need to be pulled from.

Another interesting feature of the Router is dealing with bandwidth distribution scenarios. Let's say you are exchanging a lot of information between a client with a lot of bandwidth (say a T1) and someone with little bandwidth (say 128 kbps). The system with a lot of bandwidth will overwhelm the system with little bandwidth; another way to look at this is someone "drinking from a fire hose". By using buffers, the Router is able to deal with this unfair distribution of bandwidth by only providing as much data to the downstream application as it can handle. At some point the downstream endpoint should be able to catch up with the upstream system once the message generation process slows down.
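
The buffering behaviour can be sketched with a bounded queue sitting between a fast sender and a slow receiver: the sender bursts messages in, and the receiver drains them at its own pace, with back-pressure kicking in only if the buffer fills. This is a toy Python illustration of the idea, not the .Net Services router API.

```python
# Toy sketch of a router's bounded buffer smoothing out a bandwidth mismatch.
# queue.Queue plays the buffer; the sleep simulates a slow (128 kbps) endpoint.
import queue
import threading
import time

buffer = queue.Queue(maxsize=100)  # the router's bounded buffer
received = []

def slow_receiver():
    while True:
        msg = buffer.get()
        if msg is None:          # sentinel: sender is done
            break
        time.sleep(0.001)        # the slow endpoint takes its time
        received.append(msg)

t = threading.Thread(target=slow_receiver)
t.start()

# The fast sender bursts 50 messages; put() would block (back-pressure)
# only if the buffer filled, which is exactly the fire-hose protection.
for i in range(50):
    buffer.put(f"msg-{i}")
buffer.put(None)
t.join()
print(len(received))  # -> 50
```

The receiver eventually catches up once the sender's burst ends, just as described above.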

Routers also have a multicast feature. You can think of this much like a UDP multicast scenario: best efforts are made to distribute the message to all subscribers, but there is no durability built in. However, as I just mentioned, a Router configured with one subscriber has the ability to take advantage of a buffer. There is nothing stopping you from multicasting to a set of routers, and therefore you are able to achieve durability.

A feature of Queues that I found interesting was the two modes that they operate in. The first is a destructive receive, where the client pulls the message and there is no turning back. In the second mode the receiver connects and locks the message so that it cannot be pulled by another receiver; once the message has been retrieved and processed, the client issues the delete command when it's ready.
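
The two receive modes can be illustrated with a toy in-memory queue: "destructive receive" removes the message as it is handed out, while the lock-then-delete mode (often called peek-lock) hides the message until the client explicitly deletes it, and returns it to the queue if the client abandons it. This is only a sketch of the semantics, not the .Net Services API.

```python
# Toy queue illustrating the two receive modes described above.
import uuid
from collections import deque

class Queue:
    def __init__(self):
        self._messages = deque()
        self._locked = {}  # lock token -> message

    def send(self, msg):
        self._messages.append(msg)

    def receive_destructive(self):
        # Mode 1: the message is gone the moment it is pulled.
        return self._messages.popleft()

    def receive_peek_lock(self):
        # Mode 2: the message is locked so no other receiver can pull it...
        msg = self._messages.popleft()
        token = str(uuid.uuid4())
        self._locked[token] = msg
        return token, msg

    def delete(self, token):
        # ...and only removed for good once the client confirms.
        del self._locked[token]

    def abandon(self, token):
        # If processing fails, the message goes back on the queue.
        self._messages.appendleft(self._locked.pop(token))

q = Queue()
q.send("order-1")
token, msg = q.receive_peek_lock()
q.abandon(token)                 # simulate a failed receiver
print(q.receive_destructive())   # -> order-1 (still available)
```

The peek-lock mode is what gives a sparsely connected receiver its safety net: a crash between receive and delete does not lose the message.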

I am sure it happens in every Azure presentation, and this one was no different: pricing came up. We didn't get any hard facts, but we were told that Microsoft's pricing should be competitive with comparable offerings. Both bandwidth and cost per message will be part of the equation, so when you are streaming large messages you are best off looking at direct connections. Direct connections are established initially as a relay, but while the relay is taking place, the .Net Service Bus performs some NAT probing. Approximately 80% of the time, the .Net Service Bus is able to determine the correct settings that allow a direct connection to be established. This improves the speed of the data exchange, as Microsoft is no longer an intermediary. It also reduces the cost of the data exchange, since you are connecting directly with the other party.

At this point, it looks like Workflow in the cloud will remain in CTP mode. The feedback that the product team received strongly encouraged them to deliver .Net 4.0 Workflow in the cloud instead of releasing cloud WF based upon the 3.5 version. The .Net Services team is trying to do the right thing once, so they are going to wait for .Net 4.0 Workflow to finish cooking before they make it a core offering in the .Net Services stack.

TechEd 2009 - Day 3

Day 3 was a bit of a lighter day for me. I spent most of my time "in the clouds" attending sessions and having some good conversations at the .Net Services booth...thanks Clemens.

In the Architecting Solutions for the Cloud session, Clemens provided an introduction to the .Net Service Bus. For those not familiar with it, the .Net Service Bus is essentially an internet service bus that allows you to exchange information with other parties using the cloud as an intermediary. Another way of thinking about it is building a bridge between two islands, only the islands in this case are applications. The real power of the .Net Service Bus is the WCF-based relay bindings. These bindings allow endpoints to make outbound connections to the bus and then listen for messages. This makes punching holes in your firewall a task of the past. Very cool! As mentioned in my Day 1 blog, there are a couple of new features as part of the March CTP that allow for more capabilities in the bus, including Routers and Queues. Look for more information on this in my upcoming Day 4 blog.

When prompted for a comment about private clouds, the response was very clear: it is not going to happen anytime soon. The reason is that building up your own "private" cloud would be cost prohibitive. People shouldn't underestimate the complexity involved in building an elastic, dynamic platform like Azure.

The second half of the session dealt more with deploying your web applications to the cloud. People have been hosting their web applications with application service providers (ASPs) for years, so what is the difference with Azure? Having the ability to scale efficiently would be my answer. If you had a viral marketing project underway and you were not sure just how much bandwidth or processing power you were going to need, then having an environment like Azure that can scale your app in minutes is a great option. The other thing to consider is that you can scale your application tiers independently. By establishing Web and Worker roles, you can allocate resources to serving pages versus doing the background work. So if you were doing a lot of number crunching in the back end and the web requests were lightweight, you could configure your application to suit your needs. I doubt that there are very many ASPs that can provide you this type of granularity.

In the demos they showed how easy it is to work with the local Azure Dev Fabric and then how easy it is to deploy to the cloud. I would expect to see some more tooling around this experience that allows for scripted deployments plus some delegated administration in the cloud. Currently, the developer working on the cloud application is the only one who can deploy the project to the cloud. Obviously this methodology would not fly in many corporate environments, but this is something that they are aware of and have on their task list.

When working with ASP.Net web apps, be sure to use ASP.Net Web Projects instead of ASP.Net Web Sites if you have plans to deploy them to the cloud. The cloud does not support the "Web Site" flavour.

Here are some upcoming dates to watch for, albeit they are not "solid" at this point:
  • Pricing - August 2009
  • Reliability/SLA - shooting for August 2009, but may slide
  • Launch - targeting PDC time frame release (November 17th - 20th 2009)

Wednesday, May 13, 2009

TechEd 2009 - Day 2

On Day 2 I spent some more time looking into BizTalk 2009 and System Center (SCOM).

The ESB Guidance 2.0 presentation from Brian Loesgen provided some excellent insight into the latest ESB Guidance 2.0 CTP. I am going to highlight some of the interesting bits of info that I picked up. For more details regarding ESB Guidance, check out the CodePlex site. You may not want to bookmark that site, though, because it won't be living there much longer; you will have to read the rest of the blog to get the punchline.

What is ESB Guidance 2.0?
It is an initiative of the Microsoft Patterns and Practices team that provides architectural guidance, patterns and practices. The building blocks in the package are reusable blocks of code that complement BizTalk Server. They do not replace BizTalk Server, but they allow you to use BizTalk in a "bus" mode instead of the more traditional hub-and-spoke model.

Why 2.0?
There was a previous version, 1.0, that left some people wanting more. The initial stab at the kit was known for a tedious installation, pushed some of the itinerary decisions onto the client, and lacked some of the tooling that developers were asking for. I am happy to say that these issues have been resolved. Brian indicated that his install took only around 15 minutes, instead of the hours or even days with the old version. They have done a great job on this version.

The itinerary designer was pretty slick. When you install ESB Guidance, an additional toolbox is included in Visual Studio which allows you to drag these ESB-related shapes onto your orchestration designer. You then configure these shapes within Visual Studio. I tend to look at this as configuring a workflow for a message: you have a particular series of events that you want the message to encounter, and the sum of all of these events is essentially your itinerary. So, for example, you may receive a message, and as part of this message's interaction you need it to be transformed to a new message format, then passed to an orchestration for additional processing, only to be transformed to an additional format on its way out the door. Instead of tightly coupling this solution within a series of maps and orchestration(s), you essentially configure the itinerary, which will instruct BizTalk what to do with the file. BizTalk will use .Net components, included in the Guidance kit, to perform all of the map and orchestration resolution at run time.
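
The key idea is that the message's route becomes data interpreted at run time rather than compiled-in wiring. A tiny Python sketch of that concept, with invented step and handler names (this is not the ESB Guidance object model):

```python
# Sketch of an itinerary as data: an ordered list of steps that a generic
# engine resolves and executes at run time. All names are hypothetical.
ITINERARY = [
    ("transform", "CanonicalOrder_to_SAPOrder"),
    ("orchestration", "EnrichOrder"),
    ("transform", "SAPOrder_to_PartnerOrder"),
]

# The engine looks up a handler per step kind; real resolution would find
# actual maps and orchestrations instead of these stand-in lambdas.
HANDLERS = {
    "transform": lambda name, msg: f"{msg} -> mapped by {name}",
    "orchestration": lambda name, msg: f"{msg} -> processed by {name}",
}

def run_itinerary(message, itinerary):
    """Apply each itinerary step to the message in order."""
    for kind, name in itinerary:
        message = HANDLERS[kind](name, message)
    return message

print(run_itinerary("order-001", ITINERARY))
```

Changing the route means editing the itinerary, not recompiling maps and orchestrations together, which is the decoupling the designer gives you.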

Additional info:

  • Only available for BizTalk Server 2009
  • Provides extensibility points so you can customize to meet your needs
  • More prescriptive guidance is available in this version
  • Samples of popular scenarios in SDK
  • Itineraries can now be published to an XML file or a SQL Server repository. This worked very well: from within Visual Studio you can push the itinerary to SQL Server with a click of a mouse. You can deploy multiple versions of the itinerary to the repository, and by default the newest version will be executed.

Big News
It was announced at TechEd 2009 that the ESB Guidance will be making its way into the product offering. Starting in June, it will be known as the BizTalk Server ESB Toolkit. It will be signed code from Microsoft and available via the MSDN download centre. Private fixes will be available via the Microsoft Connect site, and support will be available via Microsoft Premier services. There will be no additional charge for the toolkit when you have a BizTalk license.

SCOM (System Center Operations Manager)
I decided to attend a session that was a little outside my comfort zone. I have had exposure to both MOM (Microsoft Operations Manager) and its successor SCOM through my involvement with BizTalk. We have used both of these technologies to inform our team of any issues that are occurring on the BizTalk Servers. I can't imagine running BizTalk without them.

The session itself was dedicated to Cross Platform Management packs. More specifically, using SCOM to monitor your Unix/Linux operating systems and the applications that run on these platforms.

I was impressed with the experience inside of SCOM. Managing and monitoring these platforms is the same as it is for the Windows platform. Under the hood, SCOM issues commands over SSH to retrieve information from the platform or to execute a command on those systems.

In the Windows world, an agent is pushed to the server that SCOM is going to monitor. The process is very similar for Unix/Linux; however, the terminology is a little different. In Unix the equivalent of a Windows service is a daemon, so you will find that daemon bits are deployed to these servers much like on Windows.

Out of the box you will find that SCOM has addressed the core set of functionality that you would expect. This includes:

  • File Systems (both physical and logical)
  • Memory usage
  • Processor Usage
  • Network interfaces
  • Daemon availability

I wasn't able to catch the entire list of supported flavours of Unix/Linux so the following list is not comprehensive:

  • IBM AIX 5.3, 6.1
  • HP-UX 11.2, 11.3
  • Red Hat ?,?
  • Solaris 8, 9, 10
  • SUSE

If you need more visibility than this you can look to some 3rd party packages like Novell's SUSE Linux management pack. For more application specific management packs that run on Unix/Linux you can look to Bridgeways' Management Pack. With the Bridgeways' management pack you can find support for:

  • Apache Web Servers
  • PostgreSQL Database
  • Oracle Database
  • DB2 Database
  • MySQL
  • Apache Application Server
  • JBoss Application Server
  • WebSphere Application Server
  • Oracle Application Server
  • BlackBerry Enterprise Server
  • VMware ESX
  • and more

It was an interesting session. What I found was that Microsoft is very serious about this area. Customers have demanded a composite monitoring solution that allows them to watch their entire enterprise, not just their Windows servers. Microsoft has stepped up by providing the functionality themselves or by leveraging a 3rd-party management pack. Microsoft has also stepped up their game in the support area: they have increased their support capabilities so that when you do have an issue with their Unix/Linux management packs, they will have someone who can speak intelligently about the issue from a Unix/Linux perspective.

Expect SCOM 2007 R2 and these third party management packs to ship June 2009.

Tuesday, May 12, 2009

TechEd 2009 - Day 1

Key Note:
Besides re-iterating the theme of "do more with less" several times throughout the presentation, the big news was that Windows 7 and Windows 2008 R2 will be available for Christmas; at least, that is the current intention.

We saw some very interesting demos that displayed some of the synergies between these two products:
  • MEDV (aka Microsoft Enterprise Desktop Virtualization) provides you the ability to run applications on a new platform that otherwise would not be able to run there. For example, let's take an application that was built for Windows XP and cannot run on Windows 7 (or Windows Vista, for that matter). This type of scenario may kill, or delay, your desktop refresh project until you can either figure out how to run it on Win 7 or rebuild the application. Enter MEDV. MEDV allows you to run two operating systems on one device simultaneously. In the demonstration, they had an application that would not natively run on Win 7 but would run on Windows XP. With a double click of the mouse, the application launches before your eyes. I was expecting some significant lag in the application loading, due to the fact that it is really running on Windows XP; however, it was extremely snappy. Within a second the application was launched, and there was no real indication that XP was even running. The differentiator was that the application had a red outline around it. Pretty cool stuff...I was blown away. For more info on MEDV check out the following link.
  • App-V (Microsoft Application Virtualization) gives you the ability to stream portions of applications on demand to the end client. In the demo they simulated a user logging on to a brand new machine that they had never logged into before and opening an Excel spreadsheet. So what's the big deal? Excel was not installed. Once the presenter double-clicked the Excel spreadsheet, the Excel bits were streamed down to the laptop and the spreadsheet opened, all within a couple of minutes. Last time I checked, installing Excel took quite a bit longer than that, and here no reboot was required. For more info on App-V check out the following link.
  • BranchCache - Do you have local branch offices in remote areas where the bandwidth just isn't there? If so, this may be a feature of Windows 7 + Windows Server 2008 R2 for you. When a user downloads a file, or a Web page, from, say, your corporate intranet, the artifact is cached locally within the branch. When the next person comes looking for that same resource, it can be served from the branch cache instead of being downloaded from the intranet again. This seems like a good way to increase productivity, reduce user frustration and save on data communication costs. Check out the following video for more details.
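The caching idea in that last bullet boils down to "first requester pays the WAN cost, everyone after hits the local copy." Here is a minimal sketch of that behavior; all names are hypothetical and this is only a toy model, not the actual BranchCache implementation:

```python
# Toy model of branch-side caching: the first download crosses the WAN,
# later requests for the same resource are served within the branch.

class BranchCache:
    """Caches artifacts fetched over the WAN so branch peers can reuse them."""

    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin  # slow call back to the intranet
        self._cache = {}                 # url -> content held in the branch
        self.wan_fetches = 0             # how many times we crossed the WAN

    def get(self, url):
        # First requester pays the WAN cost; everyone after hits the cache.
        if url not in self._cache:
            self.wan_fetches += 1
            self._cache[url] = self._fetch(url)
        return self._cache[url]


def origin(url):
    # Stand-in for the corporate intranet server.
    return f"content of {url}"

cache = BranchCache(origin)
cache.get("http://intranet/policy.doc")   # crosses the WAN
cache.get("http://intranet/policy.doc")   # served from the branch cache
print(cache.wan_fetches)  # 1
```

The real feature is transparent to applications; the sketch just shows why the second user in the branch sees near-local latency.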

Introduction to BizTalk Server 2009
I only caught the tail end of this presentation but wish I had seen it all. There were two demos that involved connecting BizTalk to the cloud. In Ofer Ashkenazi and Danny Garber's session they demonstrated two cloud scenarios: the first connected to the Microsoft Live Mesh service and the second connected to the .NET Service Bus by participating in a relay. These adapters are not currently publicly available, but I have been told that at some point they should surface.

So why is this important? I can envision several scenarios where BizTalk can be used in conjunction with the cloud. Today, if you want to expose BizTalk-hosted services outside your organization, you need to get your hands dirty in the DMZ. For many organizations the risks are significant and can slow down or even stop a project. Some organizations rely upon ISA servers to forward the traffic on to the BizTalk servers; others install BizTalk application servers in the DMZ and then poke a hole in the firewall so that BizTalk can speak with SQL Server; still others implement their own custom proxy (like a Web Service) that just acts as a router. As you can see, none of these solutions is that great. By using the .NET Service Bus, BizTalk can establish an outbound connection to the bus and subscribe to messages moving through it. This way you do not need to open firewalls or introduce new infrastructure components into your DMZ. The other benefit is that you can continue to use all of the tooling that BizTalk provides out of the box. Yes, I could create a WCF endpoint to listen to the .NET Service Bus, but then I would lose out on many of the benefits that BizTalk already provides.
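The key trick in the relay pattern is that the internal service dials out, so external traffic never needs an inbound path through the firewall. A toy in-process model makes the shape of it clear; every name here is invented (the real .NET Service Bus uses WCF relay bindings, not this API):

```python
# Toy model of the relay pattern: the on-premises listener dials *out*
# to the bus and subscribes; external senders only ever talk to the bus,
# so no inbound firewall port to the internal network is required.

class ServiceBus:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        # The internal service registers via an outbound connection.
        self._subscribers.append(callback)

    def send(self, message):
        # An external partner pushes a message to the public bus endpoint;
        # the bus relays it over the already-open outbound connections.
        for deliver in self._subscribers:
            deliver(message)


received = []
bus = ServiceBus()
bus.subscribe(received.append)   # BizTalk side: outbound subscription
bus.send({"order_id": 42})       # partner side: send to the public bus
print(received)  # [{'order_id': 42}]
```

Notice that the "firewall" never comes up: the subscriber initiated the only connection it needs, which is exactly why no DMZ plumbing is required.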

Programming Microsoft .Net Services
The first part of this session was primarily a review, but the second half more than made up for it. I have read Aaron Skonnard's blog several times but never had the opportunity to hear him speak. He is an excellent presenter and I highly recommend seeing any of his sessions or taking training from him.

The two new features in the .NET Service Bus are Routers and Queues. Routers give you the ability to multicast messages to multiple subscribers, or to send a message to just one of many subscribers. Queues have been added to provide some durability for messages moving through the bus: if you have a consumer who is not always connected, parking a message in a queue until they can connect gives the sender some additional assurance that the message is safe until a connection can be made. Here is a link to the recently published white papers. I know what I will be doing on the plane ride home.
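Those two features can be sketched in a few lines. This is illustrative only; the class names and the round-robin choice for the one-of-many mode are my own assumptions, not the real SDK:

```python
# Sketch of the two ideas described above: a Router that can multicast
# or deliver to one-of-many subscribers, and a Queue that buffers
# messages for consumers that are not always connected.

from collections import deque
import itertools

class Router:
    def __init__(self, multicast=True):
        self.multicast = multicast
        self.subscribers = []
        self._next = None

    def send(self, message):
        if self.multicast:
            targets = self.subscribers        # every subscriber gets a copy
        else:
            if self._next is None:
                self._next = itertools.cycle(self.subscribers)
            targets = [next(self._next)]      # one-of-many (round robin here)
        for deliver in targets:
            deliver(message)

class Queue:
    """Holds messages until a consumer connects and drains them."""
    def __init__(self):
        self._buffer = deque()

    def enqueue(self, message):
        self._buffer.append(message)

    def drain(self):
        while self._buffer:
            yield self._buffer.popleft()


a, b = [], []
router = Router(multicast=True)
router.subscribers += [a.append, b.append]
router.send("hello")                 # both a and b receive a copy

q = Queue()
q.enqueue("offline msg")             # consumer not connected yet
print(list(q.drain()))  # ['offline msg']
```

The queue is what gives the sender its assurance: the message sits in the buffer until the consumer shows up to drain it, instead of being lost while the consumer is offline.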

Saturday, May 9, 2009

Ignite Your Career: IT Architecture Career Webcast Series

You can now download this discussion from here.

I will be participating in a panel discussion for the IT Architecture Career Webcast Series - Honing Your Experience And Skills For Uncertain Times (SESSION #2 OF 4) on Tuesday, May 12th.

IGNITE YOUR CAREER: IT Architecture Career Webcast Series Overview
With the economy experiencing challenges, it is more important than ever for IT architects to manage their career growth and skills – and one excellent way of doing this is to learn from and discuss with our peers and mentors. The purpose of the Architect Webcast series is to share experiences and insights from a panel of leading architects in the IT industry across Canada with architects and other technical decision makers.

If you are interested in listening in on the discussion, you can register here:

Saturday, May 2, 2009

WebSphere Best on Windows

I don't think Steve Martin is bored these days. In March he was caught up in the "Cloud Manifesto" fiasco, and now he is right in the middle of a new battle with IBM. No, I am not referring to Steve Martin the actor, but rather Microsoft's senior director of developer platform marketing.

His latest battle involves WebSphere running better on Windows Server 2008 than on a high-priced IBM AIX system. Martin's claim is that you can get more transactions per second (11,000 vs. 8,000) at about a third of the price ($87,161 vs. $260,128).

I have no idea whether the claims are true or false, but I do know that Microsoft generally puts a lot of effort into backing its claims. Obviously Steve is pretty comfortable with the claims that have been made; otherwise I am sure he wouldn't be willing to fund a third-party bake-off, as he has mentioned in his blog.

Check out his blog for more details on how you can effectively run WebSphere on Windows!

Should be interesting to see where this ends up.