Zulfiqar's weblog

Architecture, security & random .Net

Custom STS for Sitefinity 5.x

Posted by zamd on February 6, 2013

Sitefinity 5.x introduced claims-based security & Single Sign-On features built on a simple HTTP-redirect-based token issuance protocol, which I’m going to call the ‘Sitefinity sign-in protocol’ in my posts. Version 5.x has also standardized on Simple Web Token (SWT) as the default token format for user authentication and SSO needs.

Sitefinity 5.x comes with a built-in local STS which authenticates users using the standard membership authentication and issues SWT tokens in accordance with the Sitefinity sign-in protocol. Sitefinity doesn’t have a hard dependency on this built-in STS; rather, it relies on its sign-in protocol and the SWT token format, which means we can introduce a custom STS into the mix and Sitefinity will happily work with our custom STS, provided it adheres to the Sitefinity sign-in protocol and token format.

This STS-based design in Sitefinity 5.x enables many SSO scenarios, some of which I’m going to explore in future posts. Here are a few possibilities:

  • I can create a custom STS and then have multiple applications (RPs :)) including Sitefinity 5.x trust this single STS, which enables users to single sign-on across all those applications.
  • I can create a multi-protocol STS which enables user SSO across workloads/products; for example, SSO between Sitefinity & Office 365 or other portals speaking the SAML protocol.

For now, I’ll show you how to use a custom STS with Sitefinity for user authentication. I have already developed and deployed a Sitefinity-compatible STS @ http://sts.pilesoft.com, while Sitefinity is running @ http://pilesoft.com.

Step 1: Register the custom STS with Sitefinity so that it trusts tokens issued by the custom STS.

Open the .\App_Data\Sitefinity\Configuration\SecurityConfig.config file, locate the <securityTokenIssuers> element and add the following line to it.

<add key="CD29559E6EDC312272976AC43F7E921C5766D7063DAF6D177F3EEDEB1802FABE" encoding="Hexadecimal" membershipProvider="Default" realm="http://sts.pilesoft.com"/>

Your config should now look like the following:

<securityTokenIssuers>
  <add key="CD29559E6EDC312272976AC43F7E921C5766D7063DAF6D177F3EEDEB1802FABE" encoding="Hexadecimal" membershipProvider="Default" realm="http://sts.pilesoft.com"/>
  <add key="6C4B865442D166796756C8DA1765584F7DD5EC0DE81B1CF29AC5FCE85AE5331D" encoding="Hexadecimal" membershipProvider="Default" realm="http://localhost" />
</securityTokenIssuers>
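The key attribute above is the hex-encoded symmetric key the STS uses to sign its SWT tokens, and Sitefinity uses it to validate them. Conceptually, validation boils down to recomputing an HMAC-SHA256 over the token body and comparing it with the signature the issuer appended. A minimal sketch of that check (my own illustration, not Sitefinity's actual implementation):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class SwtValidator
{
    // Sketch only: an SWT is a form-encoded string whose last parameter is
    // HMACSHA256=<base64 signature over everything before it>.
    public static bool IsSignatureValid(string swt, byte[] key)
    {
        const string sigPrefix = "&HMACSHA256=";
        int idx = swt.LastIndexOf(sigPrefix, StringComparison.Ordinal);
        if (idx < 0) return false;

        string body = swt.Substring(0, idx);
        string sig = Uri.UnescapeDataString(swt.Substring(idx + sigPrefix.Length));

        using (var hmac = new HMACSHA256(key))
        {
            string expected = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(body)));
            return expected == sig;
        }
    }
}
```

The hex key from the config would be decoded to the byte array passed in as `key`.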


In most cases you’ll need to configure a custom membership provider as well, which I’m going to talk about in a future post.

Step 2: Open the main web.config file and locate the <federatedAuthentication> element under the <microsoft.identityModel> section. This is WIF configuration, and we need to change the <wsFederation> element to point to our custom STS.

Locate the <wsFederation> element & change the issuer attribute to point to our Custom STS as shown below:

<audienceUris mode="Never"></audienceUris>
<federatedAuthentication>
  <wsFederation passiveRedirectEnabled="true"
                issuer="http://sts.pilesoft.com/issue/sitefinity"
                realm="http://localhost" requireHttps="false" />
  <cookieHandler requireSsl="false" />
</federatedAuthentication>
<issuerNameRegistry type="Telerik.Sitefinity.Security.Claims.CustomIssuerNameRegistry, Telerik.Sitefinity" />
<issuerTokenResolver type="Telerik.Sitefinity.Security.Claims.SWT.WrapIssuerTokenResolver, Telerik.Sitefinity" />
<claimsAuthenticationManager type="Telerik.Sitefinity.Security.Claims.SFClaimsAuthenticationManager, Telerik.Sitefinity" />
<securityTokenHandlers>
  <add type="Telerik.Sitefinity.Security.Claims.SWT.SWTSecurityTokenHandler, Telerik.Sitefinity" />
</securityTokenHandlers>


Now if I browse to Sitefinity – I get:


When I click the ‘Login to the backend’ link, I’m redirected to my custom STS. The address bar shows the Sitefinity sign-in protocol in action.


When I sign-in at the STS, it issues a SWT token & redirects me back to the Sitefinity app.

As this STS is trusted by Sitefinity, it happily accepts the incoming SWT token and logs me in.


I’ll publish the custom STS code after removing the IP-related bits. Ping me if you desperately need it 🙂


Posted in Federation/STS, Sitefinity, SSO | 4 Comments »

Enabling ‘Import Service Contract’ menu option

Posted by zamd on January 2, 2013

WF 4.5 introduced contract-first development, using which you can generate messaging activities from your existing WCF contracts. Out of the box, this feature is only enabled for the ‘WCF Workflow Service Application’ project type and is exposed via the ‘Import Service Contract’ context menu.


This is a quite useful feature and is certainly needed in other project types as well; for example, a workflow hosted using WorkflowServiceHost in a Windows service or a console application. You can easily enable the context-menu option for other project types by including an additional GUID in the <ProjectTypeGuids> element of the .csproj file.


  • Unload the project in VS and open the .csproj file using the XML editor.
  • Locate the <ProjectTypeGuids> element and insert {349c5851-65df-11da-9384-00065b846f21} into the element’s content, alongside the existing GUIDs.
  • Make sure to put a semicolon at the end of your newly inserted GUID.
  • Reload the project in VS and you should now see the ‘Import Service Contract’ menu option.
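For a C# project, the edited element would look something like this (the second GUID is the standard C# project-type GUID; your project file may list different ones):

```xml
<ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
```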

Posted in WF4.5 | Tagged: , , | Leave a Comment »

Service Bus Property Promotion Nuget Package

Posted by zamd on July 18, 2012

I have just published a NuGet package which adds property-promotion features to the Service Bus WCF programming model.


Once you’ve added the package to your project, you can use the [PromotedProperty] attribute to mark your properties as promoted. The package supports promotion from both complex & primitive arguments. In addition to PromotedPropertyAttribute, you also need to stick [PropertyPromotionBehavior] on each method of your service contract.

The following service contract captures the sample usage.

public class Order
{
    public double Amount { get; set; }

    [PromotedProperty]
    public string ShipCity { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract(Name = "SubmitFlat", IsOneWay = true)]
    [PropertyPromotionBehavior]
    void Submit(double amount, [PromotedProperty] string shipCity);

    [OperationContract(IsOneWay = true)]
    [PropertyPromotionBehavior]
    void Submit(Order order);
}


Posted in ServiceBusV2 | Leave a Comment »

Service Bus Server Install Experience

Posted by zamd on July 17, 2012

Today I installed the Service Bus Server Beta release, and the overall install experience was fairly smooth until I reached the New-SBFarm step of the ‘Getting started’ tutorial. The cmdlet just seemed to hang for a few minutes and ultimately failed; I tried on another machine & got the same results. After a lot of head-scratching I narrowed the issue down to SQL connectivity. It turns out New-SBFarm creates three different databases: the farm management DB, the gateway DB & the message container DB. The first two DBs are created by the cmdlet itself; it uses the connection string passed into the cmdlet and just replaces the DB name. The message container DB creation is handled by another cmdlet, New-SBMessageContainer, which uses the FQDN of the database server.

When the server is identified using an FQDN, the SQL client code treats the connection as ‘remote’, and because I was using a named instance it tried to resolve the instance name using the SQL Browser service, which was disabled by default 😦

Hence the cmdlet hung until the connection request timed out. Enabling remote connections on SQL Express & starting the SQL Browser service fixed the issue.
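The SQL Browser part of the fix can be scripted; a sketch, to be run from an elevated PowerShell prompt (enabling TCP/IP for the named instance itself still needs SQL Server Configuration Manager):

```powershell
# Start the SQL Browser service and make it start automatically,
# so named-instance resolution works for 'remote' (FQDN) connections.
Set-Service -Name SQLBrowser -StartupType Automatic
Start-Service -Name SQLBrowser
```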

Posted in Uncategorized | Leave a Comment »

Claim-based-security for ASP.NET Web APIs using DotNetOpenAuth

Posted by zamd on May 4, 2012

Source Code

Recently I worked with a customer, assisting them in implementing their Web APIs using the new ASP.NET Web API framework. Their API would be public, so obviously security came up as the key concern to address. Claims-based security is widely used in the SOAP/WS-* world, and we have rich APIs available in the .NET Framework in the form of WCF, WIF & ADFS 2.0. Even though we now have this cool library to develop Web APIs, the claims-based security story for REST/HTTP is still catching up. OAuth 2.0 is almost ready and OpenID Connect is catching up quickly, but it will still take some time before we have WIF-equivalent libraries for implementing claims-based security in the REST/HTTP world. DotNetOpenAuth seems to be the most prominent open-source library claiming to support OAuth 2.0, so I decided to give it a go and implement the ‘Resource Owner Password Credentials’ authorization grant. The following diagram shows the solution structure for my target scenario.


1. The OAuth 2.0 issuer is an ASP.NET MVC application responsible for issuing tokens based on the OAuth 2.0 ‘Password Credentials’ grant type.

2. The Web API host exposes secured Web APIs which can only be accessed by presenting a valid token issued by the trusted issuer.

3. A sample thick client consumes the Web API.

I have used the DotNetOpenAuth.Ultimate NuGet package, which is just a single assembly implementing quite a few security protocols. From an OAuth 2.0 perspective, AuthorizationServer is the main class responsible for processing the token issuance request, producing and returning a token for a valid & authenticated request. The token issuance action of my OAuthIssuerController looks like this:

OAuth 2.0 Issuer

public class OAuthIssuerController : Controller
{
    public ActionResult Index()
    {
        var configuration = new IssuerConfiguration
        {
            EncryptionCertificate = new X509Certificate2(Server.MapPath("~/Certs/localhost.cer")),
            SigningCertificate = new X509Certificate2(Server.MapPath("~/Certs/localhost.pfx"), "a")
        };

        var authorizationServer = new AuthorizationServer(new OAuth2Issuer(configuration));
        var response = authorizationServer.HandleTokenRequest(Request).AsActionResult();

        return response;
    }
}
AuthorizationServer handles all the protocol details and delegates the real token issuance logic to a custom token issuer (OAuth2Issuer in the following snippet).

Protocol independent issuer
public class OAuth2Issuer : IAuthorizationServer
{
    private readonly IssuerConfiguration _configuration;

    public OAuth2Issuer(IssuerConfiguration configuration)
    {
        if (configuration == null) throw new ArgumentNullException("configuration");
        _configuration = configuration;
    }

    public RSACryptoServiceProvider AccessTokenSigningKey
    {
        get { return (RSACryptoServiceProvider)_configuration.SigningCertificate.PrivateKey; }
    }

    public DotNetOpenAuth.Messaging.Bindings.ICryptoKeyStore CryptoKeyStore
    {
        get { throw new NotImplementedException(); }
    }

    public TimeSpan GetAccessTokenLifetime(DotNetOpenAuth.OAuth2.Messages.IAccessTokenRequest accessTokenRequestMessage)
    {
        return _configuration.TokenLifetime;
    }

    public IClientDescription GetClient(string clientIdentifier)
    {
        const string secretPassword = "test1243";
        return new ClientDescription(secretPassword, new Uri("http://localhost/"), ClientType.Confidential);
    }

    public RSACryptoServiceProvider GetResourceServerEncryptionKey(DotNetOpenAuth.OAuth2.Messages.IAccessTokenRequest accessTokenRequestMessage)
    {
        return (RSACryptoServiceProvider)_configuration.EncryptionCertificate.PublicKey.Key;
    }

    public bool IsAuthorizationValid(DotNetOpenAuth.OAuth2.ChannelElements.IAuthorizationDescription authorization)
    {
        // claims added to the token
        authorization.Scope.Add("adminstrator");
        authorization.Scope.Add("poweruser");
        return true;
    }

    public bool IsResourceOwnerCredentialValid(string userName, string password)
    {
        return true;
    }

    public DotNetOpenAuth.Messaging.Bindings.INonceStore VerificationCodeNonceStore
    {
        get { throw new NotImplementedException(); }
    }
}

Now, with my issuer set up, I can acquire access tokens by POSTing the following request to the token issuer endpoint.


POST /Issuer HTTP/1.1

Content-Type: application/x-www-form-urlencoded; charset=utf-8



In response, I get a 200 OK with the following payload.


HTTP/1.1 200 OK

Cache-Control: no-cache, no-store, max-age=0, must-revalidate

Pragma: no-cache

Content-Type: application/json; charset=utf-8

Server: Microsoft-IIS/7.5

Content-Length: 685

{"access_token":"gAAAAC5KksmbH0FyG5snks_xOcROnIcPldpgksi5b8Egk7DmrRhbswiEYCX7RLdb2l0siW8ZWyqTqxOFxBCjthjTfAHrE8owe3hPxur7Wmn2LZciTYfTlKQZW6ujlhEv6N4V1HL4Md5hdtwy51_7RMzGG6MvvNbEU8_3GauIgaF7JcbQJAEAAIAAAABR4tbwLFF57frAdPyZsIeA6ljo_Y01u-2p5KTfJ2xa6ZhtEpzmC46Omcvps9MbFWgyz6536_77jx9nE3sePTSeyB5zyLznkGDKhjfWwx3KjbYnxCVCV-n2pqKtry0l8nkMj4MrjqoTXpvd_P0c_VGfVXCsVt7BYOO68QbD-m7Yz9rHIZn-CQ4po0FqS2elDVe9qwu_uATbAmOXlkWsbnFwa6_ZDHcSr2M-WZxHTVFin7vEWO7FxIQStabu_r4_0Mo_xaFlBKp2hl9Podq8ltx7KvhqFS0Xu8oIJGp1t5lQKoaJSRTgU8N8iEyQfCeU5hvynZVeoVPaXfMA-gyYfMGspLybaw7XaBOuFJ20-BZW0sAFGm_0sqNq7CLm7LibWNw","token_type":"bearer","expires_in":"300","scope":"http:\/\/localhost\/ adminstrator poweruser"}


DotNetOpenAuth also has a WebServerClient class which can be used to acquire tokens; I used it in my test application instead of crafting raw HTTP requests. The following code snippet generates the same request/response as above.

Get Access Token
private static IAuthorizationState GetAccessToken()
{
    var authorizationServer = new AuthorizationServerDescription
    {
        TokenEndpoint = new Uri("http://localhost:1960/Issuer"),
        ProtocolVersion = ProtocolVersion.V20
    };

    var client = new WebServerClient(authorizationServer, "http://localhost/");
    client.ClientIdentifier = "zamd";
    client.ClientSecret = "test1243";

    var state = client.GetClientAccessToken(new[] { "http://localhost/" });
    return state;
}

OK, now the second part is to use this access token for authentication & authorization when consuming the ASP.NET Web APIs.

Web API Client
  1. static void Main(string[] args)
  2. {
  3.     var state = GetAccessToken();
  4.     Console.WriteLine("Expires = {0}", state.AccessTokenExpirationUtc);
  5.     Console.WriteLine("Token = {0}", state.AccessToken);
  6.     var httpClient = new OAuthHttpClient(state.AccessToken)
  7.     {
  8.         BaseAddress = new Uri("http://localhost:2150/api/values")
  9.     };
  10.     Console.WriteLine("Calling web api…");
  11.     Console.WriteLine();
  12.     var response = httpClient.GetAsync("").Result;
  13.     if (response.StatusCode == HttpStatusCode.OK)
  14.         Console.WriteLine(response.Content.ReadAsStringAsync().Result);
  15.     else
  16.         Console.WriteLine(response);
  17.     Console.ReadLine();
  18. }

On lines 6–9, I’m creating an instance of a customized HttpClient, passing in the access token. The httpClient uses this access token for all subsequent HTTP requests.

OAuth enabled HttpClient
public class OAuthHttpClient : HttpClient
{
    public OAuthHttpClient(string accessToken)
        : base(new OAuthTokenHandler(accessToken))
    {
    }

    class OAuthTokenHandler : MessageProcessingHandler
    {
        string _accessToken;

        public OAuthTokenHandler(string accessToken)
            : base(new HttpClientHandler())
        {
            _accessToken = accessToken;
        }

        protected override HttpRequestMessage ProcessRequest(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
        {
            request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", _accessToken);
            return request;
        }

        protected override HttpResponseMessage ProcessResponse(HttpResponseMessage response, System.Threading.CancellationToken cancellationToken)
        {
            return response;
        }
    }
}

Relying Party (ASP.NET Web APIs)

Finally, on the RP side, I used the standard message-handler extensibility to extract and validate the access token. The OAuth2 message handler also extracts the claims from the access token and creates a ClaimsPrincipal, which is passed on to the Web API implementation for authorization decisions.

OAuth2 Message Handler
public class OAuth2Handler : DelegatingHandler
{
    private readonly ResourceServerConfiguration _configuration;

    public OAuth2Handler(ResourceServerConfiguration configuration)
    {
        if (configuration == null) throw new ArgumentNullException("configuration");
        _configuration = configuration;
    }

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        HttpContextBase httpContext;
        string userName;
        HashSet<string> scope;

        if (!request.TryGetHttpContext(out httpContext))
            throw new InvalidOperationException("HttpContext must not be null.");

        var resourceServer = new ResourceServer(new StandardAccessTokenAnalyzer(
                                                    (RSACryptoServiceProvider)_configuration.IssuerSigningCertificate.PublicKey.Key,
                                                    (RSACryptoServiceProvider)_configuration.EncryptionVerificationCertificate.PrivateKey));

        var error = resourceServer.VerifyAccess(httpContext.Request, out userName, out scope);
        if (error != null)
            return Task<HttpResponseMessage>.Factory.StartNew(error.ToHttpResponseMessage);

        var identity = new ClaimsIdentity(scope.Select(s => new Claim(s, s)));
        if (!string.IsNullOrEmpty(userName))
            identity.Claims.Add(new Claim(ClaimTypes.Name, userName));

        httpContext.User = ClaimsPrincipal.CreateFromIdentity(identity);
        Thread.CurrentPrincipal = httpContext.User;

        return base.SendAsync(request, cancellationToken);
    }
}
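For the handler to see any traffic, it has to be registered with the Web API pipeline. The post doesn't show this part; a minimal sketch, assuming Global.asax placement and the same certificate files used by the issuer:

```csharp
protected void Application_Start()
{
    // Certificate file names and password here are assumptions for illustration.
    var config = new ResourceServerConfiguration
    {
        IssuerSigningCertificate =
            new X509Certificate2(Server.MapPath("~/Certs/localhost.cer")),
        EncryptionVerificationCertificate =
            new X509Certificate2(Server.MapPath("~/Certs/localhost.pfx"), "a")
    };

    // Every request now flows through OAuth2Handler before reaching a controller.
    GlobalConfiguration.Configuration.MessageHandlers.Add(new OAuth2Handler(config));
}
```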

Inside my Web API, I access the claims information using the standard IClaimsIdentity abstraction.

Accessing claims information
public IEnumerable<string> Get()
{
    if (User.Identity.IsAuthenticated && User.Identity is IClaimsIdentity)
        return ((IClaimsIdentity)User.Identity).Claims.Select(c => c.Value);
    return new string[] { "value1", "value2" };
}

Fiddler Testing

Once I had the access token, I could test a few scenarios in Fiddler by attaching and tweaking the token when calling my Web API.
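Each test below is just the same GET request with the Authorization header present, absent, or modified. The valid-token case looks roughly like this (token truncated):

```
GET /api/values HTTP/1.1
Host: localhost:2150
Authorization: Bearer gAAAAC5KksmbH0FyG5snks_...
```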

401 without an “access token”


200 OK with a Valid token


401 with Expired token


401 with Tampered token


Source code attached. Please feel free to download and use.

Posted in ASP.NET Web APIs | Tagged: | 15 Comments »

Deploying Umbraco to Windows Azure

Posted by zamd on January 27, 2012

Umbraco is a fairly mature CMS, and recently I was engaged in an assignment to deploy it to Windows Azure. In this post, I’ll share some of my learnings.

  • The Umbraco accelerator (I call it a bootstrapper) works well for small to medium web sites, but you definitely have to increase the default blob-sync interval, which is set to 1 second.
  • In the context of Windows Azure, the accelerator/bootstrapper becomes your main application, which gets deployed to Azure web roles using the Azure service model.
  • The Umbraco installation (I used the standard Umbraco download) is stored in blob storage, and the accelerator is configured to pick it up and install it on the web roles where the accelerator itself is running. The accelerator does both push & pull, so file-system changes made by Umbraco are pushed to blob storage and automatically synced to the other web roles.
  • The standard Azure staging/production deployment doesn’t really fit with the accelerator-style deployment used by Umbraco. So instead of using staging & production, we used the Azure production environment only (with a well-known DNS name) and decided to use two hosted services representing the UAT & live environments. Our plan is end-to-end separation, i.e. separate blob storage & separate SQL Azure databases. This setup gives us the required change-management & content-management control.
  • Using the accelerator, I was quickly able to deploy the Umbraco web app to two Azure web roles.
  • Soon after the deployment I was welcomed by the ASP.NET yellow screen of death.
  • I RDP’d into the server and debugged w3wp.exe to realize ASP.NET was failing to validate ViewState. Makes sense! 🙂 Azure is load-balanced & requests were sent to the two web-role instances in a round-robin fashion. The ASP.NET machine-key setting was left at the default AutoGenerate, which resulted in the two web roles using different keys to secure/validate ViewState. Once we knew the issue, the fix was simple:
  • Use pre-generated, fixed key values for <machineKey>:
<machineKey validationKey="21F090935F6E49C2C797F69BBAAD8402ABD2EE0B667A8B44EA7DD4374267A75D7"
  • The next step was to create the Umbraco database, for which we decided to use SQL Azure as the DB platform.
  • My first thought was to use the basic approach of manually creating a blank DB in SQL Azure and then running the Umbraco configuration wizard to populate the schema & data. Even though this approach wouldn’t give us the necessary control over the DB assets, it was simple to use.
  • Turns out this approach doesn’t work & Umbraco simply hangs. I debugged & found the Umbraco configuration wizard was unable to create a couple of columns (of the bit data type) & data access was failing because of the missing columns. I discarded this approach without going any further.
  • I then installed a local Umbraco instance and configured a local SQL Server database, which worked smoothly.
  • I used the Data-Tier Application framework to export the local SQL Server database as a package. As SQL Azure natively supports importing dacpac packages, the import was a fairly simple process using the SQL Azure management portal.
  • The dacpac approach also solved the lack-of-control issue of the simple in-place DB creation option, as it gave us a golden copy of the database for future use. We are also planning to use the dacpac feature as the basis for our backup & restore requirements.
  • Once the DB was ready in SQL Azure, Umbraco was configured to use it and we had a somewhat working Umbraco installation.
  • During testing, I quickly realized that by default Umbraco stores session state in memory, and this would not work in Windows Azure’s load-balanced environment.
  • SQL-based session state was another option, but it’s NOT supported on SQL Azure and would also add quite a bit of overhead in terms of session-state clean-up, as there is no SQL Agent in Azure.
  • I decided to use Windows Azure Cache (a.k.a. AppFabric Cache) as the session-state storage mechanism. It’s fast & doesn’t require an additional data-purge solution. I created a 128 MB cache and configured the Umbraco web app to use AppFabric Cache (instead of the local machine’s memory) as the session-state store.
  • And finally, Umbraco is up & running in Azure 🙂
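The <machineKey> fix from the list above, sketched in full. The decryptionKey value is a placeholder of my own, not from the deployment; generate your own keys rather than copying these:

```xml
<system.web>
  <!-- Same fixed keys on every web-role instance so ViewState validates
       regardless of which instance serves the request.
       decryptionKey below is a placeholder, not a real key. -->
  <machineKey
      validationKey="21F090935F6E49C2C797F69BBAAD8402ABD2EE0B667A8B44EA7DD4374267A75D7"
      decryptionKey="[48-hex-character-key-generated-for-your-app]"
      validation="SHA1" decryption="AES" />
</system.web>
```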


Posted in Windows Azure | 4 Comments »

WebSockets with WCF

Posted by zamd on November 23, 2011

Notification & duplex communication are important scenarios over the internet, but firewalls and browser limitations make them very hard to implement. In the browser world, tricks like long polling are commonly used to implement server-push requirements. For non-browser scenarios, relay technologies like Azure Service Bus overcome the lack of inbound connectivity by creating a relay in the cloud to which both client & server connect by making outbound connections. Both long polling & relaying work, but neither is an optimal solution due to the latency and complexity involved.

WebSockets is designed to address some of these limitations. With WebSockets, a client & server can upgrade an existing HTTP connection to a full-duplex TCP/IP connection, or set up a new WebSockets connection using an HTTP-based handshake. WebSockets uses the standard HTTP ports (80, 443), so it just works with firewalls & the existing security infrastructure. The WebSockets technology bucket has the following two parts:

  1. WebSockets Protocol  (Currently being standardized by IETF)
  2. WebSockets JavaScript API (Currently being standardized by W3C)

Windows “8” has native support for the WebSockets protocol & there are quite a few APIs (native & managed) available for programming WebSockets servers & clients on Windows. In addition, IE 10 supports both the WebSockets protocol & the JavaScript API.


  • WebSockets JavaScript API
    • IE 10
    • WinRT
  • WebSockets protocol
    • Native Windows implementation (>= Windows “8”)
      • IIS 8.0
    • System.Net.WebSockets (managed wrapper)
      • HttpListener
    • System.Web (ASP.NET)
      • HttpContext
    • System.ServiceModel (WCF)
      • NetHttpBinding

WCF has supported duplex services since V1, but these required either a duplex transport binding (netTcpBinding, netNamedPipeBinding) or wsDualHttpBinding, which forces a client to have a public URI accessible to the service (and to the world 🙂) when running on the public internet.

wsDualHttpBinding is not really suitable for internet scenarios due to inbound-connectivity issues. NetTcpBinding could also be problematic in tightly locked-down environments that allow only outbound connections to ports 80/443.

With the WebSockets support in Windows, WCF introduces a new binding, NetHttpBinding, which does binary SOAP messaging over the WebSockets protocol and overcomes the limitations of the existing bindings. Below I created a basic duplex WCF service.

Basic Service
[ServiceContract(CallbackContract = typeof(IPing))]
public interface IPing
{
    [OperationContract(IsOneWay = true)]
    void Ping(string msg);
}

class PingService : IPing
{
    public void Ping(string msg)
    {
        Console.WriteLine("Service: {0}", msg);
        var chnl = OperationContext.Current.GetCallbackChannel<IPing>();
        chnl.Ping(string.Format("You said \"{0}\"?", msg));
    }
}

Standard WCF hosting code, with one endpoint using the new binding (lines 5 & 14):

  1. static void Main(string[] args)
  2. {
  3.     var sh = new ServiceHost(typeof(PingService),
  4.         new Uri("http://localhost:9090/"));
  5.     var binding = new NetHttpBinding();
  6.     sh.AddServiceEndpoint(typeof(IPing), binding, "Ping");
  8.     sh.Open();
  10.     Console.WriteLine("Service ready…");
  12.     var cf = new DuplexChannelFactory<IPing>(
  13.         new InstanceContext(new PingBack()),
  14.         binding,
  15.         new EndpointAddress("http://localhost:9090/Ping"));
  17.     var chnl = cf.CreateChannel();
  18.     chnl.Ping("Hello!");
  20.     Console.WriteLine("Finishing…");
  21.     Console.ReadLine();
  22. }

and finally my callback handler class.

Callback handler
public class PingBack : IPing
{
    public void Ping(string msg)
    {
        Console.WriteLine("Client: {0}", msg);
    }
}

Running the project I get the expected output.

By default, the WebSockets protocol is allowed on NetHttpBinding, i.e. if your contract is a duplex contract, NetHttpBinding automatically upgrades to the WebSockets protocol. Out of the box, this binding does SOAP messaging and encodes SOAP using the WCF binary encoding. You can reuse the WebSockets transport binding element in a custom binding to support other encodings & protocols, and I’ll talk about this in a future post.
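As a taster, the same transport can be driven from a CustomBinding. A sketch of my own (not from the post) that approximates what NetHttpBinding sets up for duplex contracts:

```csharp
// Binary SOAP over WebSockets: the binary message encoder stacked on the
// HTTP transport, with the transport forced to always use WebSockets.
var transport = new HttpTransportBindingElement();
transport.WebSocketSettings.TransportUsage = WebSocketTransportUsage.Always;

var binding = new CustomBinding(
    new BinaryMessageEncodingBindingElement(),
    transport);
```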


Posted in Uncategorized | Leave a Comment »

Pub/Sub with WCF (Part 2)

Posted by zamd on May 25, 2011

Source Code Download

The Service Bus May CTP has a small glitch when it comes to pub/sub messaging using the WCF programming model. The May CTP API out of the box doesn’t pick up filter/promoted properties from the WCF data contracts, and requires you to explicitly specify these properties on the BrokeredMessage object outside of the core WCF programming model, as shown in part 1.

I didn’t like this repetition and decided to prototype a solution using the WCF extensibility model; after a few hours of coding I had a solution which looks quite cool 🙂

In my solution, a DataMember can be marked with the [PromotedProperty] attribute, and a custom operation behavior picks up these annotations and promotes them as filter properties by automatically attaching them to the outgoing message.

public class Order
{
    public double Amount { get; set; }

    [PromotedProperty]
    public string ShipCity { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract(Name = "SubmitFlat", IsOneWay = true)]
    [PropertyPromotionBehavior]
    void Submit(double amount, [PromotedProperty] string shipCity);

    [OperationContract(IsOneWay = true)]
    [PropertyPromotionBehavior]
    void Submit(Order order);
}

I decided to use a custom formatter to implement the property lifting and injection functionality, primarily because at the formatter level I still have a fairly typed view of the method call. At the message-inspector level most of the typed-ness is gone, and it would have required more work.

The [PropertyPromotionBehavior] creates a ‘promotion model’ (the list of properties to be promoted) by reflecting on the data contract. The ‘promotion model’ is then populated by the custom formatter with the actual parameter values extracted from the call context. [PropertyPromotionBehavior] also replaces the default formatter with a custom PromotionFormatter, which wraps the default formatter and does the additional work of property promotion. The relevant bits are in ApplyClientBehavior and SerializeRequest below.


public class PropertyPromotionBehaviorAttribute : Attribute, IOperationBehavior
{
    public void Validate(OperationDescription operationDescription) { }

    public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation) { }

    public void ApplyClientBehavior(OperationDescription operationDescription, ClientOperation clientOperation)
    {
        var promotedProperties = LoadPromotedProperties(operationDescription);
        if (promotedProperties.Count <= 0) return;

        // Use the default DataContractSerializer behavior to build the original formatter
        var dummy = new ClientOperation(clientOperation.Parent, "dummy", "urn:dummy");
        var behavior =
            operationDescription.Behaviors.Find<DataContractSerializerOperationBehavior>() as IOperationBehavior;
        behavior.ApplyClientBehavior(operationDescription, dummy);

        // Replace the default formatter with the wrapping PromotionFormatter
        clientOperation.Formatter = new PromotionFormatter(dummy.Formatter, promotedProperties);
        clientOperation.SerializeRequest = dummy.SerializeRequest;
    }

    public void AddBindingParameters(OperationDescription operationDescription, BindingParameterCollection bindingParameters) { }
}

class PromotionFormatter : IClientMessageFormatter
{
    private readonly IClientMessageFormatter _originalFormatter;
    private readonly IList<PromotedPropertyDescription> _promotions;

    public PromotionFormatter(IClientMessageFormatter originalFormatter, IList<PromotedPropertyDescription> promotions)
    {
        _originalFormatter = originalFormatter;
        _promotions = promotions;
    }

    public object DeserializeReply(Message message, object[] parameters)
    {
        return _originalFormatter.DeserializeReply(message, parameters);
    }

    public Message SerializeRequest(MessageVersion messageVersion, object[] parameters)
    {
        var message = _originalFormatter.SerializeRequest(messageVersion, parameters);

        if (_promotions.Count > 0)
        {
            // Attach the promoted properties to the outgoing BrokeredMessage
            var bmp = new BrokeredMessageProperty();
            foreach (var promotion in _promotions)
                bmp.Properties[promotion.Name] = promotion.Value;

            message.Properties[BrokeredMessageProperty.Name] = bmp;
        }

        return message;
    }
}

With these extensions in place, the code looks just like normal WCF code, and property promotion happens in the background as you would expect.

var cf = new ChannelFactory<IOrderService>(serviceBusBinding, topicAddress);
var proxy = cf.CreateChannel();

proxy.Submit(new Order { Amount = 200, ShipCity = "london" });
proxy.Submit(new Order { Amount = 322, ShipCity = "reading" });
proxy.Submit(101, "reading");
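The LoadPromotedProperties helper and the PromotedPropertyDescription type used by the behavior aren’t shown in the post. Here is a minimal sketch of how they might look (my reconstruction, not the post’s actual code), assuming a hypothetical [PromotedProperty] marker attribute on the data contract and a standalone helper class in place of the behavior’s private method:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.ServiceModel.Description;

// Hypothetical marker attribute - applied to data contract properties
// that should be promoted onto the BrokeredMessage.
[AttributeUsage(AttributeTargets.Property)]
public class PromotedPropertyAttribute : Attribute { }

public class PromotedPropertyDescription
{
    public string Name { get; set; }
    public object Value { get; set; }          // populated per call by the formatter
    public PropertyInfo Property { get; set; } // used to read the value off the parameter
}

public static class PromotionModel
{
    // Walk the operation's input message parts and collect any
    // properties carrying the marker attribute.
    public static IList<PromotedPropertyDescription> LoadPromotedProperties(OperationDescription operation)
    {
        var result = new List<PromotedPropertyDescription>();
        foreach (var message in operation.Messages)
        {
            if (message.Direction != MessageDirection.Input) continue;
            foreach (var part in message.Body.Parts)
            {
                foreach (var prop in part.Type.GetProperties()
                    .Where(p => p.IsDefined(typeof(PromotedPropertyAttribute), true)))
                {
                    result.Add(new PromotedPropertyDescription { Name = prop.Name, Property = prop });
                }
            }
        }
        return result;
    }
}
```

With something like this in place, decorating Order.ShipCity with [PromotedProperty] would be enough for the formatter to pick it up.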


Hopefully the Service Bus programming model will support this kind of behavior soon, but for the time being these extensions should fill the gap.

I have attached the complete source code with this post so please feel free to download and use it.

Posted in ServiceBusV2 | 2 Comments »

Pub/Sub with WCF (Part 1)

Posted by zamd on May 19, 2011

Code download

In yesterday’s post I explored how to use Service Bus queues as a transport to communicate between a WCF client and a service. In today’s post I will show you WCF pub/sub messaging using topics and subscriptions. A subscription behaves exactly like a queue for reads, while a topic behaves exactly like a queue for writes. This metaphor maps nicely to WCF, where our services listen on different subscriptions while the clients send messages to a single topic. Service Bus automatically forwards matching messages to the correct services.

For this example, I have created a simple order service as shown below. The Writer extension simply writes to the console using the color assigned to a particular host. I have used this to distinguish which service host received a message.

public class Order
{
    public double Amount { get; set; }
    public string ShipCity { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract(IsOneWay = true)]
    void Submit(Order order);
}

public class OrderService : IOrderService
{
    public void Submit(Order order)
    {
        var writer = OperationContext.Current.Host.Extensions.Find<Writer>();
        writer.WriteLine("Received order value = {0}, ShipCity = {1}", order.Amount, order.ShipCity);
    }
}
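The Writer extension itself isn’t included in the post. A minimal sketch (my reconstruction, not the original code) could look like this:

```csharp
using System;
using System.ServiceModel;

// Console writer attached to a ServiceHost so that each host
// prints its output in its own color.
class Writer : IExtension<ServiceHostBase>
{
    private readonly ConsoleColor _color;

    public Writer(ConsoleColor color)
    {
        _color = color;
    }

    public void WriteLine(string format, params object[] args)
    {
        var previous = Console.ForegroundColor;
        Console.ForegroundColor = _color;
        Console.WriteLine(format, args);
        Console.ForegroundColor = previous;
    }

    // IExtension<T> plumbing - nothing to do on attach/detach.
    public void Attach(ServiceHostBase owner) { }
    public void Detach(ServiceHostBase owner) { }
}
```

Because it implements IExtension&lt;ServiceHostBase&gt;, it can be added to host.Extensions and retrieved inside an operation via OperationContext.Current.Host.Extensions.Find&lt;Writer&gt;().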

Next I use the Management API to create topics & related subscriptions.

var baseAddress = "sb://soundbyte.servicebus.appfabriclabs.com";
var credential = TransportClientCredentialBase.CreateSharedSecretCredential("owner", "zYDbQ2wM4343dbBukWJTF6Y=");
var namespaceClient = new ServiceBusNamespaceClient(baseAddress, credential);

var topic = namespaceClient.CreateTopic("orders");
topic.AddSubscription("london", new SqlFilterExpression("ShipCity = 'london'"));
topic.AddSubscription("reading", new SqlFilterExpression("ShipCity = 'reading'"));

Both subscriptions have a filter applied to them, and only messages matching the filter are delivered to that subscription. I then create two separate service hosts simulating separate services running in different cities. I assigned ‘Cyan’ to the London host (simulating a service hosted in London) and ‘Yellow’ to the Reading host (a service hosted in Reading).

var serviceBusBinding = new ServiceBusMessagingBinding();
serviceBusBinding.MessagingFactorySettings.Credential = credential;

string topicAddress = baseAddress + "/orders";
string londonSubscription = baseAddress + "/orders/subscriptions/london";
string readingSubscription = baseAddress + "/orders/subscriptions/reading";

var hostLondon = new ServiceHost(typeof(OrderService));
hostLondon.Extensions.Add(new Writer(ConsoleColor.Cyan));
hostLondon.AddServiceEndpoint(typeof(IOrderService), serviceBusBinding, new Uri(topicAddress), new Uri(londonSubscription));

var hostReading = new ServiceHost(typeof(OrderService));
hostReading.Extensions.Add(new Writer(ConsoleColor.Yellow));
hostReading.AddServiceEndpoint(typeof(IOrderService), serviceBusBinding, new Uri(topicAddress), new Uri(readingSubscription));

Next I created a simple WCF client using the ChannelFactory API and used it to send messages to the topic. In the current CTP, you have to set the filter properties separately on the BrokeredMessage object, which is the native Service Bus message object and is different from the WCF Message object.

ServiceBusMessagingBinding automatically converts the WCF Message object (generated by the proxy) to a BrokeredMessage object, which is then sent to Service Bus. On the receive side a similar conversion happens from BrokeredMessage to a WCF Message, which is then handed to the WCF service. To set BrokeredMessage-specific properties, we create a BrokeredMessageProperty object, set the properties on it and add it to the outgoing WCF messages. ServiceBusMessagingBinding looks for this property and copies all its properties to the BrokeredMessage object it creates. I’m doing exactly that using an OperationContextScope below.

var cf = new ChannelFactory<IOrderService>(serviceBusBinding, topicAddress);
var proxy = cf.CreateChannel();

using (new OperationContextScope((IContextChannel)proxy))
{
    var bmp = new BrokeredMessageProperty();
    bmp.Properties["ShipCity"] = "london";
    OperationContext.Current.OutgoingMessageProperties.Add(BrokeredMessageProperty.Name, bmp);
    proxy.Submit(new Order { Amount = 200, ShipCity = "london" });

    bmp.Properties["ShipCity"] = "reading";
    proxy.Submit(new Order { Amount = 322, ShipCity = "reading" });
}


As you can see from the following output, the ‘london’ message was received by the London host (cyan) while the ‘reading’ message was received by the Reading host (yellow).


Finally, yes, it looks a bit ugly that I have to specify these promoted/filter properties separately from their WCF contract values. Ideally the Service Bus API would pick up the ShipCity property from my Order object. Unfortunately, in the current CTP you have to set it separately as I have shown in the above snippet. In the next post I’ll show how you can extend WCF so that it automatically picks these ‘filter properties’ from operation parameters or WCF DataContract(s).

Stay tuned…

Posted in ServiceBusV2 | 3 Comments »

WCF with Service Bus V2: Queues

Posted by zamd on May 18, 2011

The May CTP of Service Bus introduced tons of new messaging capabilities, including queuing and topic-based pub/sub messaging. Check out Clemens’ post for an overview.

The usage of Service Bus messaging entities (queues, topics) is divided across two namespaces – a ‘management namespace’ and a ‘runtime namespace’. The management namespace is used to create/define messaging entities, while the runtime namespace is used to perform the actual messaging operations against those already-defined entities. The management namespace is exposed as a REST service, and the Service Bus SDK also provides a client library which hides the HTTP interface behind a nice object-oriented API.

So once a messaging entity (for example a queue) has been created, you can use the runtime API to send/receive messages. The May CTP comes with two flavours of runtime APIs.

  • A low-level MessagingFactory-based API: the native, high-fidelity Service Bus API exposing all the Service Bus messaging features.
  • A higher-level WCF binding: which internally uses the MessagingFactory API and integrates Service Bus messaging with the WCF programming model.
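For comparison, a send/receive against a queue with the low-level flavour looked roughly like the sketch below. The names here reflect my reading of the May CTP SDK and may not match it exactly (this API was reshaped in later releases), and `<issuer-key>` is a placeholder for a real ACS shared secret:

```csharp
// Low-level MessagingFactory API (May CTP shapes, sketched from memory).
var credential = TransportClientCredentialBase.CreateSharedSecretCredential("owner", "<issuer-key>");
var factory = MessagingFactory.Create("sb://soundbyte.servicebus.appfabriclabs.com", credential);

var queueClient = factory.CreateQueueClient("q1");

// Send a message carrying a custom property.
var sender = queueClient.CreateSender();
var message = BrokeredMessage.CreateMessage("Hi there");
message.Properties["ShipCity"] = "london";
sender.Send(message);

// Receive the message and mark it complete.
var receiver = queueClient.CreateReceiver();
BrokeredMessage received;
if (receiver.TryReceive(TimeSpan.FromSeconds(5), out received))
{
    Console.WriteLine(received.GetBody<string>());
    received.Complete();
}
```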

There are already a few blog posts which talk about the MessagingFactory API so I’m not going to repeat that here. Let’s see how I can use Service Bus queues as a communication mechanism between my WCF client & service. I’ll start by creating a usual one-way service, as you would for MSMQ or any other queuing technology.

    [ServiceContract]
    public interface IHelloService
    {
        [OperationContract(IsOneWay = true)]
        void SayHello(string input);
    }

    public class HelloService : IHelloService
    {
        public void SayHello(string input)
        {
            Console.WriteLine("Says: " + input);
        }
    }

The following snippet creates a queue (line 6) using the management API. I’m using the wrapper class provided with the SDK for the REST-based management service.

I then create a binding instance and specify ACS credentials on it. All operations on Service Bus require a valid token from ACS, and these credentials are used to acquire one.

Finally, I add an endpoint, specifying the queue location as the endpoint address.

  1. static void Main(string[] args)
  2. {
  3.     const string baseAddress = "sb://soundbyte.servicebus.appfabriclabs.com";
  4.     var credential = TransportClientCredentialBase.CreateSharedSecretCredential("owner",
  5.                                                                                 "zYDbQ2wM1k7J32323232323VdbBukWJTF6Y=");
  6.     CreateQueue(baseAddress, credential,"q1");
  8.     var serviceBusBinding = new ServiceBusMessagingBinding();
  9.     serviceBusBinding.MessagingFactorySettings.Credential = credential;
  11.     var sh = new ServiceHost(typeof (HelloService));
  12.     sh.AddServiceEndpoint(typeof (IHelloService), serviceBusBinding, baseAddress + "/q1");
  13.     sh.Open();
  17.     Console.WriteLine("Host ready…");
  19.     var cf = new ChannelFactory<IHelloService>(serviceBusBinding, baseAddress + "/q1");
  20.     var proxy = cf.CreateChannel();
  21.     for (int i = 0; i < 10; i++)
  22.     {
  23.         proxy.SayHello("Hi " + i);
  24.     }
  26.     Console.ReadLine();
  27. }
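The CreateQueue helper called on line 6 isn’t shown in the post; it presumably wraps the management API along these lines (a sketch, swallowing the ‘entity already exists’ error so the sample survives re-runs):

```csharp
static void CreateQueue(string baseAddress, TransportClientCredentialBase credential, string path)
{
    // Management API: define the queue if it doesn't already exist.
    var namespaceClient = new ServiceBusNamespaceClient(baseAddress, credential);
    try
    {
        namespaceClient.CreateQueue(path);
    }
    catch
    {
        // queue already exists from a previous run - safe to ignore here
    }
}
```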


My client is a standard WCF proxy (lines 19-23) which sends messages to the same endpoint address (queue location) using the same Service Bus binding.

The above produces the following expected output. Easy, isn’t it?


By integrating Service Bus messaging with the WCF programming model, we can reuse all of the WCF goodness with Service Bus messaging. For example, I can secure messages while they travel through queues/topics just by changing the binding.

Next time I’ll show how to do pub/sub using the WCF programming model.

Posted in ServiceBusV2 | 2 Comments »