Check out "Understanding .NET and WCF Transactions":
To take full advantage of this article, you need to download and set up the attached projects on your machine. The article's content relies on you having the projects up and running. The projects were built using VS 2012 and Windows Server AppFabric 1.1.
Basic knowledge of Windows Workflow Foundation and Windows Communication Foundation is assumed. Also, basic familiarity with IIS 7 and above and WAS is required.
You should also be comfortable dealing with Visual Studio and setting up projects and solutions.
If you have ever done composite web application development, then you are definitely familiar with the associated problems. Consider this scenario: you want to build an ASP.NET application that calls into a business process built with WF services (a WF business process published as a WCF service). Now, let's look at the challenges and requirements you will usually face with the WF service:
Windows Server AppFabric provides a set of Windows Server extensions that act as an infrastructure for building composite web applications. In more practical terms, these extensions help tackle all the above requirements.
The remainder of the article will show AppFabric in action and, through demos, will tackle and elaborate on each of these requirements.
If you are already familiar with BizTalk Server, you will notice that all the above requirements (with the exception of caching) are covered by it. So why is there another product (AppFabric) that seems to do the exact same thing?
Well, besides cost, there are some important differences, and the two products can really go hand in hand, as each targets a different class of projects. I will point you to this MSDN blog post, which presents Microsoft's view on the topic: http://blogs.msdn.com/b/skaufman/archive/2009/11/23/appfabric-and-biztalk.aspx
On a related note, here is a talk I gave about BizTalk Server at TechEd 2010. If you are new to BizTalk, it should be a great starting point for you:
AppFabric installation is fairly straightforward (at least on a workgroup machine).
After starting the wizard, select both the hosting and caching features; hosting contains the features I discussed earlier (monitoring and persistence):
In the next couple of screens you will configure the persistence and monitoring databases. Obviously these databases will contain persistence and monitoring information, and you’ll need SQL Server for that:
When configuring the caching feature, you have two options: SQL Server or XML files. If you are setting up a workgroup configuration you are limited to the XML option:
The next image shows configuring the caching feature while joining a new cache cluster (you will get to know more about clusters later):
The first demo (attached as HelloWFService.zip) includes three projects:
Let's examine the business process from the HelloWFService project:
The TestClient console application calls into the service operation SubmitOrder, passing a string parameter. SubmitOrder returns a message to the console application and its job ends there. The workflow service then continues execution and calls into the activity ProcessOrder, which calls the WCF service (HelloService), passing in the same parameter submitted by the console application.
The ProcessOrder activity is created automatically in the VS Toolbox when you add a service reference to the WCF service and build the project.
When you deploy the above solution to your machine, you will end up with two WCF services hosted in IIS: the HelloService WCF service and the HelloWFService WCF workflow service.
Open IIS manager. You will notice a new AppFabric tab as follows:
If you click on "Default Web Site" as shown in the figure, then the data you see in the AppFabric tab will correspond to all the services hosted under the default website. If, however, you click on a single service, then the data corresponds only to that particular service.
Now, let's start configuring the services. Right click HelloWFService and select "Manage WCF and WF Services -> Configure".
Select the Monitoring tab. Here you specify the level of monitoring you want for your service and where to store the monitoring events. In the image below, I have selected the database created by the AppFabric configuration wizard and the Health Monitoring level, which is enough for this example. Be careful in this step: the higher the monitoring level, the more overhead in your application. Select the Troubleshooting level during the development and testing phase; Health Monitoring or even Errors Only should be enough for production environments.
In the Workflow Persistence tab, select Custom or None. We will cover persistence later.
Do the same for the WCF service HelloService (recall that AppFabric monitors ALL WCF services, not only WF services; persistence, however, is a different story that I will cover in an upcoming section).
No more configuration is required for the sake of this discussion; some other configuration options will be covered as we go on.
Now let's run the example: build the solution, fire up the console application "TestClient", and wait until you get the message "Your order is under process". Let's quickly recap what just happened: the console application called into the workflow service and got the message back; behind the scenes, invisible to the console application client, the workflow service must have called the WCF service.
Open the AppFabric dashboard of the Default Web Site and examine the stats:
So in total, we have two WCF calls and one WF Service call. The two WCF calls are actually one for the HelloService WCF Service and one for the HelloWFService workflow service. The WF service call is the HelloWFService (the WF Instance History is a subset of the WCF Call History).
AppFabric has tracked the service call activities for us. You can also see whether there are any errors, as well as the completed vs. non-completed service calls. You can also drill into more details: right click the WF service in the dashboard and select "Tracked WF Instances" as shown below:
In the result screen, you will see more details about the WF instance as shown below:
What you have seen so far is great reporting, but recall that we have configured our service for monitoring also (using the Health Monitoring level). To see that in action, right click "Service1" and select "View Tracked Events" (you can also access monitoring from the first dashboard page). You will now get the screen below:
As you can see, here you get to monitor the flow of the business process by examining the name of the shapes and order of execution.
Persistence is a very important concept when building long running business processes. At certain points in your service, you want to persist (serialize and store into a medium - usually database) the instance state so that in case of failure, you can resume the instance from the last persistence point.
In this example, we will see persistence in action. Another variation of the same concept is "Unloading". While Persistence "alone" means persist but keep the instance in memory, Unload means persist and remove the instance from memory. This, of course, opens the possibility of scaling your service because an instance can be flushed out of memory on one machine only to pick up execution on another machine. Unloading will be covered in a later example.
So in order to see persistence in action, go back to Visual Studio and open "Service1.xamlx" and do this change: check the "PersistBeforeSend" property of the "SendResponse" shape. This simply means that just before sending the response (to the console application), the instance state will be persisted; however, as just explained, the instance itself will keep executing.
The other change is in IIS: select the HelloService application and click "Stop Application" in the "Manage WCF and WF Services" section.
Finally, we need to configure our WF Service for persistence. From IIS, right click "HelloWFService" and select "Manage WCF and WF Services -> Configure". From the Persistence tab, select the default persistence database which you have configured using the AppFabric Configuration Wizard, as shown below:
Hint: when you are in the development/testing phase, you might find that piling up stats in AppFabric makes it difficult to focus on a certain scenario. If you need to (as I always do in development), you can clean up the AppFabric databases in order to start fresh. This post (http://thedotnethub.blogspot.com/2010/05/clean-appfabric-databases.html) shows how to do so.
With everything set, build the solution and run the console application again.
What will happen in this case? The console client calls the WF service, which, just before sending a response, persists its state and continues execution. Next, the WF service tries to call the WCF service, which we have stopped, so an error is thrown. Now, let's examine the AppFabric dashboard and see what's going on:
The failed call to the WCF service is logged in the Failures section of the WF Instance History, and is set to the Non Recovered state. The other important thing to notice is the Persisted WF Instances section: the WF service instance has been persisted. Right click the suspended instance and select "Persisted WF Instances", as shown below:
Next, you will get to see the persisted WF instance in the "Suspended" state. The great thing about this is that you can right click on this instance and select "Resume", as shown below:
However, just before doing that, restart the WCF application. From IIS, select the HelloService application and click "Start Application".
Now resume the WF instance. Wait a couple of seconds (until the Windows service kicks in) and refresh to see that the instance has resumed execution and finished successfully. Since the last persisted point (the only one, actually) was just before sending the response to the console application, after which the failure happened when calling the WCF service, the instance picks up from there and tries to call the WCF service again. Since we have just restarted the WCF application, this time the call succeeds and the WF instance completes successfully. If you look at the WF Instance History section, you will see that the instance has moved from the "Not Recovered" to the "Recovered" state.
The second demo (attached as SaleService.zip) was originally provided as part of the AppFabric Beta 2 Samples, but I tweaked it a little bit for the sake of this article. It contains three projects:
The business process is shown below (it's just too large to expand it all and view in one shot, but you can expand each section by double clicking on it):
The process starts when a client (TestClient console application) asks to browse through a set of catalog information (no database here, the information is just hardcoded in the process itself). After that, the process waits for a minute; during this time, another client (TestClient2 console application) has to reply to confirm the purchase. If the one minute passes by without any invocation from TestClient2, then the business process terminates.
As I said at the start, workflow development is not covered here and basic knowledge is assumed, so I am not covering the details of the business process shapes. However, with the description I just gave, going over the process and viewing the shapes should be enough for you to understand what is going on in detail.
Build and deploy the solution; you will get an IIS application by the name of "SaleService" as configured via the Web tab in Visual Studio project properties.
Just like we did in the first demo, configure the project for Health Monitoring. As for persistence, we will do something new here. In the first example, we just configured the persistence store through AppFabric and used the WF designer (by setting the PersistBeforeSend property). Here, we will additionally use AppFabric to set an unloading time for our WF instance. First set the persistence store as shown below:
Then set the unloading time as shown below:
We have just instructed our workflow service to unload itself after 20 seconds of inactivity. Now the following will happen: the first console client will issue a request to browse the catalog. The WF process is configured to wait for one minute until it receives a second request from the second console to confirm the request. If no second request is issued, the process will terminate itself - which is what we will do in this example.
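Under the hood, AppFabric persists these settings into the application's web.config as standard WF service behaviors. A sketch of what that configuration looks like is shown below (element names follow the WF 4.0 configuration schema; the connection string name shown is the AppFabric default, so verify against your own web.config):

```xml
<behaviors>
  <serviceBehaviors>
    <behavior>
      <!-- Persistence store configured through the AppFabric Persistence tab -->
      <sqlWorkflowInstanceStore
          connectionStringName="ApplicationServerWorkflowInstanceStoreConnectionString" />
      <!-- Unload the idle instance from memory after 20 seconds -->
      <workflowIdle timeToUnload="00:00:20" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```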
Run the first console application, and you will see a list of products as shown below:
Now switch to the AppFabric dashboard; you should see your requests logged and the process in the "In Progress" state. Moreover, wait 20 seconds and you will see the persisted instance also logged in the dashboard. Why? Because we configured our service to unload itself after 20 idle seconds. This is shown below:
If you click the WF instance, you will see that it's in the idle state:
Now wait another 40 seconds (for the full minute to pass) and refresh the dashboard; you will see that the persisted instance has disappeared and the WF instance has completed execution and terminated itself:
We have so far seen one demo showing persistence and another showing unloading. I have already described the difference between the two from a technical perspective. But what are the business cases where you should use persistence vs. those where you should use unloading?
Consider a scenario where a WF business process accepts purchase requests from clients; the process must check the client bank credit via a WCF call and reply in a real-time fashion to the client. The scenario itself is not long running, and will finish in a matter of seconds. However, what if the WCF bank service is down? This is something that you cannot predict but must take precautions against nonetheless. In such a scenario, it makes sense to persist your business process just before calling the WCF bank service; this way, if you detect that the bank service is down and an exception is thrown, you always have a persisted point to go back to, and from there, you can try to resend the message to the WCF bank service... until it is up again. Well, this is analogous to our first demo.
Now consider a second scenario where a PO WF business process accepts requests from clients to browse the product catalog. However, clients can take their time deciding if they want to carry on with the order; they can, for example, take a day to decide. In this case, you do not want to keep the WF process in memory; rather, you want to unload it and wake it up again when clients send their decisions. Well, this was exactly the scenario shown in the second demo.
So in short, you use persistence when you want to be on the safe side and have a point to go back to in case of failure. You use unloading, on the other hand, when you want to free up resources and remove your process from memory, typically in long-running processes. Finally, design your persistence (and unloading) points carefully, because serializing and storing an instance state in the database comes with a performance hit.
Tracking is the ability to step inside a running workflow service instance and peek into variable values during the instance lifetime.
Before configuring tracking from AppFabric, there are some concepts that you need to know. In a nutshell, when you deal with WF 4.0 tracking, you have to understand three concepts:
From IIS, click the "SaleService" application and double click the "Services" icon from the AppFabric section. Right click the service name and select Configure, as shown below:
From the resulting screen, select "Monitoring" and click "Configure" as shown below:
From the drop down list Tracking profile, select "Sale Service Order Tracking", as shown above.
Now, let's switch to the solution and link all this stuff together. Open the web.config file of the SaleService project and locate the "Tracking" section. This section defines a tracking profile with the name of "Sale Service Order Tracking"; the same name we just configured inside AppFabric.
The tracking profile defines the tracking records we want to observe. For example, the tracking profile states that we want to track all the states of the workflow instance. The corresponding section is:
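As a reference, a tracking profile section of this shape looks roughly like the sketch below (drawn from the standard WF 4.0 `<tracking>` configuration schema; the exact attributes in the sample's web.config may differ slightly):

```xml
<tracking>
  <profiles>
    <trackingProfile name="Sale Service Order Tracking">
      <workflow activityDefinitionId="*">
        <workflowInstanceQueries>
          <!-- Track every state change of the workflow instance -->
          <workflowInstanceQuery>
            <states>
              <state name="*" />
            </states>
          </workflowInstanceQuery>
        </workflowInstanceQueries>
      </workflow>
    </trackingProfile>
  </profiles>
</tracking>
```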
It also states that we want to track the variable "StatusText" when the activity (i.e., workflow shape) called "Assign Catalog Expired Status" enters the "Closed" state. The corresponding section is:
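A sketch of that query, using the WF 4.0 `<activityStateQuery>` schema (verify the exact attributes against the sample's web.config):

```xml
<activityStateQueries>
  <activityStateQuery activityName="Assign Catalog Expired Status">
    <states>
      <!-- Fire the record when the activity completes -->
      <state name="Closed" />
    </states>
    <variables>
      <!-- Capture this variable's value at that moment -->
      <variable name="StatusText" />
    </variables>
  </activityStateQuery>
</activityStateQueries>
```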
Similarly, the variables "StatusText", "NewPurchaseOrder", "PurchaseTotal", and "OrderId" will be tracked when the activity called "Process New Order" enters the "Closed" state. The corresponding section is:
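Sketched in the same WF 4.0 schema (again, check the attached sample's web.config for the exact wording):

```xml
<activityStateQuery activityName="Process New Order">
  <states>
    <state name="Closed" />
  </states>
  <variables>
    <!-- All four variables are captured when the activity closes -->
    <variable name="StatusText" />
    <variable name="NewPurchaseOrder" />
    <variable name="PurchaseTotal" />
    <variable name="OrderId" />
  </variables>
</activityStateQuery>
```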
Time to run the program again; this time, however, we will bring the second console client application into play.
Run "TestClient" and copy the GUID that appears on the console; this is the ID assigned to the current order. Now, before one minute passes (the time configured before the process terminates itself), run "TestClient2" and paste the GUID. This will confirm the order as shown below:
Now switch back to the AppFabric dashboard; the view should be familiar to you by now. Two WCF calls and a WF instance call have been recorded.
Select to track the events of the WF instance as shown below:
You will again be taken to the events monitored through AppFabric; however, now you can also see the variables tracked as configured in the web.config file. To do so, scroll until you reach the "Process New Order" activity, and you will notice that the four variables were recorded at that particular time, as shown below:
You can even now search through your workflow instances based on the variable values. Use the Query Summary table to set your search query as shown below:
This is not related to AppFabric, but it is still worth mentioning for the sake of completeness, since it's a particularly important concept in workflow services development.
Let's revisit the demo 2 scenario: a client sends a request to the business process to view the catalog; later it sends another message confirming the initial catalog request. Well, what if we have client A and client B both issuing catalog requests? Now we have two process instances waiting (persisted and saved in the DB) for the confirmation messages. What happens when client B, for example, sends the confirmation request? How will the engine know to which waiting instance the message should be routed?
The concept used to solve such cases is called correlation: the act of relating both messages (the catalog request and the confirmation) via a single unique ID, called the correlation ID. In our example, the GUID you copied from the first console to the second is the correlation ID; what you did was assign a GUID to a particular client so that both requests (simulated by two different console applications) are correlated using this unique ID.
Correlation is configured within the WF designer in the "Correlations" section of the relevant WCF shapes (again, workflow development is not covered here).
Note: Caching is a fairly large topic by itself, so a full discussion here is not possible. What follows just scratches the surface of AppFabric caching. A dedicated post about caching will hopefully follow.
Caching speeds up applications by storing frequently accessed information in memory and thus reducing database access time. Scaling cached data, however, is a common problem. Having data stored in memory makes it machine specific, and having all cached data in a single machine quickly creates an application bottleneck.
Distributed caching allows spreading data across multiple machines, and it is now part of AppFabric. The initiative was released well before AppFabric under the project name "Velocity".
A full discussion of distributed caching needs a complete article by itself; in the "More Resources" section, I will point you to some resources that do just that. However, in summary, distributed caching works as follows: you have a cache client application (for example, your ASP.NET application, or in our case, the WF business process) that accesses a cache cluster configured on multiple machines. All machines joined by a cache cluster can have data spread and duplicated across them, which provides highly available cached data.
The API is straightforward, and in this section, we will see how to use it to store and retrieve data.
Recall that in the last example, you had to copy the GUID from the first to the second client; here, we will utilize AppFabric caching and its API to store and retrieve the GUID instead.
Open the "Program.cs" file of the TestClient console application. Uncomment the following lines:
The above methods set up the cache configuration and store the GUID in the cache.
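For reference, here is a sketch of what such helper methods typically look like against the AppFabric caching API. The method names (SetCache, GetCache), the cache name, and the host/port are illustrative assumptions; the real helpers ship in the attached sample:

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

class CacheHelper
{
    // Illustrative: assumes a cache host on localhost using the default port 22233.
    public static void SetCache(string catalogId)
    {
        var config = new DataCacheFactoryConfiguration
        {
            Servers = new List<DataCacheServerEndpoint>
            {
                new DataCacheServerEndpoint("localhost", 22233)
            }
        };

        DataCache cache = new DataCacheFactory(config).GetCache("default");

        // Store the order GUID under a well-known key for the second client.
        cache.Put("CatalogId", catalogId);
    }

    public static string GetCache()
    {
        // Parameterless factory reads the cache client settings from app.config.
        DataCache cache = new DataCacheFactory().GetCache("default");

        // Retrieve the GUID the first client stored.
        return (string)cache.Get("CatalogId");
    }
}
```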
Next, open the "Program.cs" file of the TestClient2 console application. Comment the following line:
string catalogId = Console.ReadLine();
And uncomment the following line:
//string catalogId = GetCache();
The above changes instruct the second client to get the GUID from the cache (which was set by the first client) instead of getting it from the console interface.
Part 1 of this article is posted here
Azure Access Control Service (ACS) plays two roles; it is used to authorize access to the Azure Service Bus and Caching, and it also serves as a Federation Provider (R-STS) for your applications on the cloud.
In the context of this article, I am concerned with the second role: how ACS serves as an R-STS.
The sketch below shows the conceptual model of ACS in action:
Just as ADFS was your R-STS (or IP-STS depending on the context) on premise, ACS is your R-STS on the cloud.
Note that ADFS is itself one kind of provider that can be trusted by ACS; this will be shown in a separate section.
In this example, I will show you how to outsource authentication of your ASP.NET application to ACS.

Step 1: Create a Service Bus namespace and get the Access Key
Navigate to your Azure portal and create a namespace:
Click on Access Key and copy the Default Key value:
The ACS portal address can be reached using the following format: https://<namespace>.accesscontrol.windows.net/v2/mgmt/web
So in my case it will be: https://mvpdemons.accesscontrol.windows.net/v2/mgmt/web
Click on Identity Providers and Click Add:
As you can see, Windows Live ID is a default IP for ACS. On the next screen, you get to add the IPs you want. You can add other IPs such as Google and Yahoo! (which rely on OpenID), Facebook (which provides authentication via its Graph API and authorization via OAuth 2.0), and any WS-Federation IP, one of which is ADFS. In this case, add Google (ADFS will wait for the next section):
The steps are the same regardless of whether you are using an MVC or Web Forms application. I will use web forms and will call my application:
Next, right click the project and fire up the good old Identity and Access wizard. Here are the familiar three options:
Note: the approach I am following now will automatically configure the RP in ACS. Another approach is to select “Use a business identity provider (e.g. ADFS2)” in which case you will have to get the federation metadata file URL of ACS from the Application Integration option in ACS portal.
Once the wizard is completed, the application will be configured as a trusted RP for ACS; as always, you can check the updates in web.config and the newly created file \FederationMetadata\2007-06\FederationMetadata.xml.

Step 4: Check ACS configuration
In the ACS portal, click Relying party applications and you will see that the RP was automatically configured by the wizard you ran in Step 3.
Next, click Rule groups. Rule groups are the sets of rules that ACS applies to claims coming from IPs; these rules can transform, copy, and insert additional claims. You can think of rule groups as the equivalent of the ADFS claims engine in the context of claim manipulation. The default rule group generated just passes through the claims offered by Live ID and Google to the RP:
Browse to http://localhost/TestACS. You will be presented with the home realm page to select which IP you want to authenticate with:
Assuming you selected Google, ACS will just pass through the Google claims and hand them to your application in a SAML 2.0 token that it trusts (review the "What is ACS?" section for a description of the full interaction). Here is the final page:
Note that each provider issues a default set of claims; among those offered by Google are the nameidentifier and name claims, which is how the RP (actually WIF at the RP) managed to identify my name in the welcome message.
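On the code side, WIF surfaces these claims through the .NET 4.5 claims-based identity model, so a welcome message can be built with something like the following sketch (the page location and message wording are hypothetical):

```csharp
using System.Security.Claims;
using System.Web;

// Inside a page of the RP (e.g., Default.aspx.cs). By this point WIF has
// already validated the ACS token and populated the current principal.
var identity = HttpContext.Current.User.Identity as ClaimsIdentity;
if (identity != null && identity.IsAuthenticated)
{
    // The "name" claim passed through from Google via ACS.
    Claim nameClaim = identity.FindFirst(ClaimTypes.Name);
    string welcome = "Welcome " +
        (nameClaim != null ? nameClaim.Value : "unknown user");
}
```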
In this section, we'll use ADFS as an IP for ACS.

Step 1: Grab the FederationMetadata.xml file of ADFS
Since ADFS is installed on my local network, ACS cannot extract the ADFS metadata via a URL, so we need to give ACS the metadata file in order to set up WS-Federation. Open https://win2012.lab.local/FederationMetadata/2007-06/FederationMetadata.xml in the browser, and save the resulting XML to a file.

Step 2: Add ADFS as an IP for ACS
We will continue using the same service bus namespace we used before. So again navigate to https://mvpdemons.accesscontrol.windows.net/v2/mgmt/web.
Add a new Identity Provider and select ADFS:
Next, supply the federation metadata XML file you saved in Step 1:
Note that we will use the same TestACS application we used in the section "ASP.NET and ADFS".

Step 3: Add ACS as a trusted relying party for ADFS
As you should know by now, ADFS is an IP for ACS, which makes ACS an RP for ADFS (if that did not make sense, then I have failed horribly in this never-ending publication :)).
So go back to the ADFS management console and establish a relying party trust with ACS:
Now we configure which claims we want ADFS to pass to ACS. For this sample, I will simply pass the Active Directory UPN and email address as claims:
Recall that ACS uses rule groups to determine how claims are passed to relying applications (our ASP.NET application), so we need to configure the rule group to handle the new claims sent from the new IP (ADFS), the same way we handled claims from Live ID and Google.
So go back to the ACS portal, and open the rule group we created earlier for the TestACS application:
As you can see, we already have the Live ID and Google rules from the previous exercise. Now click Generate and add the claims for the new ADFS IP:
Now, do not get scared! I know that when we configured claim rules in ADFS, we only configured Email Address and UPN, so what are all these claims? These are all the claims configured for ADFS, but not necessarily all that ACS will get from ADFS. We told ADFS to pass Email and UPN (the claims list above is a paged grid; UPN is on another page), and that is what ACS will get.
Can you see our good old "birthplace" claim? This is the claim we configured a long time ago using a SQL attribute store; see how it appears here. It's all adding up, isn't it?
Finally, we're ready to re-run TestACS on localhost and see how ADFS has been added as an authentication option via ACS.
First, re-run the Identity and Access wizard to get the updated metadata from ACS; this metadata now shows the new IP:
Now browse to http://localhost/TestACS. ACS will show the new IP as a login option:
Select "Login using ADFS" and you will find yourself back in the ADFS domain to log in (recall, one more time, that we customized this page in a previous demo and are using forms authentication instead of integrated Windows authentication):
After you login, you will be redirected back to localhost as a logged in user.