Channel: Kirk Evans Blog

Creating a Fiddler Extension for SharePoint 2013 App Tokens


This post will show how to create a Fiddler extension to inspect SharePoint 2013 context and access tokens.

Overview

In a previous post, I showed how to inspect a SharePoint 2013 context token.  While working with apps, I frequently demonstrate how the context, refresh, and access tokens work.  I realized that it might be helpful to see the actual tokens while working with an app.  Instead of writing more code in my web page to show the information, I decided to use Fiddler instead.  The result is a handy Fiddler extension that SharePoint 2013 app builders can use to inspect the context and access tokens for provider-hosted apps using either a client secret or a certificate.

image

Creating the Extension

The first step is to create an extension for Fiddler using the Inspector2 and IRequestInspector2 interfaces.  These will simply give you the HTTP headers and HTTP body.  I also created a Windows Forms control called “SPOAuthRequestControl” that contains a textbox and a DataGridView.

To start with, I created a new Windows Forms User Control project and added references to System.Web, System.Web.Extensions, and a reference to Fiddler.exe.

image

Next, we add a class for the Fiddler extension.

 

using System;
using System.Windows.Forms;
using Fiddler;

[assembly: Fiddler.RequiredVersion("4.4.5.1")]

namespace MSDN.Samples.SharePoint.OAuth
{
    public class SPOAuthExtension : Inspector2, IRequestInspector2
    {
        private bool _readOnly;
        HTTPRequestHeaders _headers;
        private byte[] _body;
        SPOAuthRequestControl _displayControl;

        #region Inspector2 implementation
        public override void AddToTab(TabPage o)
        {
            _displayControl = new SPOAuthRequestControl();
            o.Text = "SPOAuth";
            o.Controls.Add(_displayControl);
            o.Controls[0].Dock = DockStyle.Fill;
        }

        public override int GetOrder()
        {
            return 0;
        }
        #endregion

        #region IRequestInspector2 implementation
        public HTTPRequestHeaders headers
        {
            get
            {
                return _headers;
            }
            set
            {
                _headers = value;
                System.Collections.Generic.Dictionary<string, string> httpHeaders =
                    new System.Collections.Generic.Dictionary<string, string>();
                foreach (var item in headers)
                {
                    httpHeaders.Add(item.Name, item.Value);
                }
                _displayControl.Headers = httpHeaders;
            }
        }

        public void Clear()
        {
            _displayControl.Clear();
        }

        public bool bDirty
        {
            get { return false; }
        }

        public bool bReadOnly
        {
            get
            {
                return _readOnly;
            }
            set
            {
                _readOnly = value;
            }
        }

        public byte[] body
        {
            get
            {
                return _body;
            }
            set
            {
                _body = value;
                _displayControl.Body = body;
            }
        }
        #endregion
    }
}

Creating the User Control

The next step is to create the Windows Forms user control.  I used a textbox and a DataGridView.  The DataGridView has two columns to show the key and value.  Here is the design surface:

image

The code for the control is pretty basic.  There is a property, Headers, and another property, Body.  When the Headers property is set, we look for an HTTP header with the name Authorization.  If we see one, we strip out the “Bearer “ prefix, which gives us the UTF8-encoded access token.  We use a helper class, JsonWebToken, to decode and deserialize it.
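To make the header handling concrete, here is a minimal sketch in Python (purely illustrative — the extension itself is C#; the header values below are hypothetical, not real SharePoint tokens):

```python
# Sketch of the Authorization-header handling described above: strip the
# "Bearer " prefix to obtain the raw JWT access token.
def extract_access_token(headers):
    for name, value in headers.items():
        if name == "Authorization" and value.startswith("Bearer "):
            return value[len("Bearer "):]
    return None  # no bearer token in this request

headers = {"Host": "app.contoso.com",
           "Authorization": "Bearer aaa.bbb.ccc"}
print(extract_access_token(headers))  # -> aaa.bbb.ccc
```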

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Drawing;
using System.Data;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Web.Script.Serialization;

namespace MSDN.Samples.SharePoint.OAuth
{
    public partial class SPOAuthRequestControl : UserControl
    {
        public SPOAuthRequestControl()
        {
            InitializeComponent();
        }

        public Dictionary<string, string> Headers
        {
            set
            {
                txtContext.Text = string.Empty;
                foreach (string key in value.Keys)
                {
                    if (key == "Authorization")
                    {
                        //Access token
                        string accessToken = value[key].Replace("Bearer ", string.Empty);
                        string token = JsonWebToken.Decode(accessToken)[1];
                        txtContext.Text = token;
                        var dictionary = JsonWebToken.Deserialize(token);
                        PopulateGrid(dictionary);
                    }
                }
            }
        }

        public byte[] Body
        {
            set
            {
                bool found = false;
                if (null != value)
                {
                    string ret = System.Text.Encoding.UTF8.GetString(value);
                    //Context token may be sent as a querystring with these names, but this is not
                    //recommended practice and is not implemented here as a result. Only POST values
                    //are processed.
                    string[] formParameters = ret.Split('&');
                    string[] paramNames = { "AppContext", "AppContextToken", "AccessToken", "SPAppToken" };
                    foreach (string valuePair in formParameters)
                    {
                        string[] formParameter = valuePair.Split('=');
                        foreach (string paramName in paramNames)
                        {
                            if (formParameter[0] == paramName)
                            {
                                //Decode header of JWT token
                                string tokenHeader = JsonWebToken.Decode(formParameter[1])[0];
                                txtContext.Text = tokenHeader;
                                //Decode body of JWT token
                                string tokenBody = JsonWebToken.Decode(formParameter[1])[1];
                                txtContext.Text += tokenBody;
                                var dictionary = JsonWebToken.Deserialize(tokenBody);
                                PopulateGrid(dictionary);
                                found = true;
                                break;
                            }
                        }
                        if (found) break;
                    }
                }
            }
        }

        public void Clear()
        {
            txtContext.Text = string.Empty;
        }

        private void PopulateGrid(IReadOnlyDictionary<string, object> dictionary)
        {
            dataGridView1.Rows.Clear();
            foreach (string key in dictionary.Keys)
            {
                if (key == "nbf" || key == "exp")
                {
                    double d = double.Parse(dictionary[key].ToString());
                    DateTime dt = new DateTime(1970, 1, 1).AddSeconds(d);
                    dataGridView1.Rows.Add(key, dt);
                }
                else
                {
                    dataGridView1.Rows.Add(key, dictionary[key]);
                }
            }
        }
    }
}

You can see there’s not much to the code other than how we find the context or access tokens.  Once we have them, we decode the data and populate the grid.  The PopulateGrid method converts the nbf (not before) and exp (expires) values from the number of seconds since January 1st, 1970.  For more information, see Tips and FAQs: OAuth and remote apps for SharePoint 2013.
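The nbf/exp conversion is simple enough to sketch in a few lines of Python (illustrative only — the extension does the same thing in C# with DateTime.AddSeconds):

```python
from datetime import datetime, timedelta

# JWT nbf/exp values are seconds since the Unix epoch (January 1st, 1970).
def jwt_time_to_datetime(seconds):
    return datetime(1970, 1, 1) + timedelta(seconds=float(seconds))

print(jwt_time_to_datetime("0"))  # -> 1970-01-01 00:00:00
```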

The JsonWebToken Helper Class

The JsonWebToken helper class provides methods to decode and deserialize the JWT.  A JWT has multiple parts separated by “.”, which is why we use the Split function to separate the header and body.  Once we have the header and body, we need to pad each part with “=” characters to a valid Base64 length before decoding.  Finally, the Deserialize method creates an IReadOnlyDictionary<string,object> so that we can easily access the data in the token.
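The same split-pad-decode logic can be sketched in Python (an illustrative translation, not the extension's code; the token built below is a made-up example, not a real SharePoint token):

```python
import base64
import json

# Split the JWT on ".", pad each base64url segment with "=" to a multiple
# of 4 characters, then decode. A remainder of 1 is invalid, matching the
# exception the C# helper throws.
def decode_jwt(token):
    header_b64, payload_b64 = token.split(".")[:2]
    def b64_decode(value):
        pad = {0: "", 2: "==", 3: "="}[len(value) % 4]  # KeyError if invalid
        return base64.urlsafe_b64decode(value + pad)
    return [b64_decode(header_b64).decode("utf-8"),
            b64_decode(payload_b64).decode("utf-8")]

# Build a hypothetical two-part token to demonstrate the round trip.
token = ".".join(
    base64.urlsafe_b64encode(json.dumps(part).encode()).rstrip(b"=").decode()
    for part in ({"typ": "JWT"}, {"aud": "example"}))
print(decode_jwt(token)[1])  # -> {"aud": "example"}
```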

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;
using System.Web.Script.Serialization;

namespace MSDN.Samples.SharePoint.OAuth
{
    public class JsonWebToken
    {
        public static string[] Decode(string token)
        {
            var parts = token.Split('.');
            string[] ret = new string[2];

            var header = parts[0];
            var payload = parts[1];

            var headerJson = Encoding.UTF8.GetString(Base64UrlDecode(header));
            ret[0] = headerJson;

            string payloadJson = Encoding.UTF8.GetString(Base64UrlDecode(payload));
            ret[1] = payloadJson;
            return ret;
        }

        public static IReadOnlyDictionary<string, object> Deserialize(string token)
        {
            JavaScriptSerializer ser = new JavaScriptSerializer();
            var dict = ser.Deserialize<dynamic>(token);
            return dict;
        }

        private static byte[] Base64UrlDecode(string value)
        {
            string convert = value;
            switch (convert.Length % 4)
            {
                case 0:
                    break;
                case 2:
                    convert += "==";
                    break;
                case 3:
                    convert += "=";
                    break;
                default:
                    throw new System.Exception("oops");
            }
            var ret = Convert.FromBase64String(convert);
            return ret;
        }
    }
}


We now have 3 classes:  the extension, the user control class, and the helper class.  That’s all we need in order to inspect the traffic in Fiddler and to provide a different visualization of the data.

Debugging the Extension

To debug the extension, I created post-build events to copy the .DLL to the Scripts and Inspectors folders for Fiddler.

image

I then set the debug start action to start Fiddler.  Now, when I hit F5, the assemblies are automatically copied and Fiddler starts.

image

 

The Result

I tested this with both an S2S provider-hosted app and a provider-hosted app that uses a client secret.  First, let’s look at the provider-hosted app that uses S2S.  I am showing the access token here because in the S2S model there is no context token; you only have the access token.  Click on the request to _vti_bin/client.svc and you will now see the details of the access token, including easier-to-read nbf and exp values.

 

image

Next, here is an app that uses a client secret.  In this model, we are looking at the context token, which includes the refresh token.  Note again that we can easily read the values of nbf and exp to see when the context token expires. 

image

As you can see, this can be tremendously valuable in debugging your apps while trying to determine any possible causes for 401 unauthorized issues, such as possibly an expired access token or an incorrect audience value.

The code is attached to this post.  No warranties, provided as-is. 

 

For More Information

Inside SharePoint 2013 OAuth Context Tokens

Tips and FAQs: OAuth and remote apps for SharePoint 2013

Build a Custom Inspector for Fiddler


Moving Path Based to Host Named Site Collections


This post illustrates a problem with detaching content databases that contain site collections restored from path-based site collections to host named site collections.

Background

The recommendation for SharePoint 2013 is to use a single web application and leverage host named site collections.  In a previous post, I wrote about What Every SharePoint Admin Needs to Know About Host Named Site Collections.  In that post, I showed one approach for moving an existing path-based site collection to a host-header site collection.  This is invaluable if you have too many web applications in your farm and need to consolidate the site collections while preserving URLs.  It’s also invaluable to improving the health of your farm as I have seen multiple farms that suffered performance issues that were resolved by consolidating web applications to host-named site collections.

As a reminder, I provided the following sample script:

Backup-SPSite http://server_name/sites/site_name -Path C:\Backup\site_name.bak
Remove-SPSite –Identity http://server_name/sites/site_name –Confirm:$False
Restore-SPSite http://www.example.com -Path C:\Backup\site_name.bak -HostHeaderWebApplication http://server_name

This works, and the site collection is restored successfully with the new host header.  However, there are some additional considerations you’ll want to be aware of.

Existing Web Application With the Same Url

The first problem is that the site collection may be at the root of a web application with the same URL that you are trying to move to a host named site collection.  For example, I have a web application, Intranet.Contoso.lab, that contains a single root site collection that is path-based.  I want to move this to a host named site collection, but that URL is already in use.  The fix is to delete the web application first.  Don’t worry, you have the option of preserving the content database just in case something goes wrong, in which case you could create a new web application using the existing content database and you’ll be back to where you started.  Here is a function that you can use to move your path-based site collection to a host-named site collection and optionally delete the existing web application while preserving the original content database.

 

 
function Move-PathBasedToHNSC(
    [string]$siteUrl,
    [string]$backupFilePath,
    [string]$pathBasedWebApplication,
    [bool]$deletePathBasedWebApplication,
    [string]$hostHeaderUrl,
    [string]$hostHeaderWebApplication,
    [string]$contentDatabase)
{
    Backup-SPSite $siteUrl -Path $backupFilePath

    if($deletePathBasedWebApplication)
    {
        #If the HNSC uses the same URL as an existing web application,
        #the web application must be removed
        Remove-SPWebApplication $pathBasedWebApplication -RemoveContentDatabases:$false -DeleteIISSite:$true
    }
    else
    {
        #Not removing the web application, so just remove the site collection
        Remove-SPSite -Identity $siteUrl -Confirm:$false
    }

    Restore-SPSite $hostHeaderUrl -Path $backupFilePath `
        -HostHeaderWebApplication $hostHeaderWebApplication -ContentDatabase $contentDatabase
}

Move-PathBasedToHNSC -siteUrl http://HNSCMoveTest2.Contoso.lab `
    -backupFilePath "C:\Backup\HNSCMoveTest2.bak" `
    -pathBasedWebApplication http://HNSCMoveTest2.contoso.lab `
    -deletePathBasedWebApplication $true `
    -hostHeaderUrl http://HNSCMoveTest2.contoso.lab `
    -hostHeaderWebApplication http://spdev `
    -ContentDatabase WSS_Content_HNSC

Before I run the script, here’s what the list of web applications looks like:

image

After running the script, the web application is gone, and I now see the host named site collection in the new web application and in the content database that I specified.

image

As the administrator, I’m happy because there’s one less web application to maintain and, likely, the performance of my farm will increase a bit. 

Detaching (Dismounting) the Content Database

Here’s where the weird things start happening.  You can detach a content database so that it’s not serving any content, but the database is still in SQL Server.  You might do this for a number of reasons, such as upgrades.  Let’s try detaching the content database using PowerShell:

Dismount-SPContentDatabase WSS_Content_HNSC

Now we want to attach it again.

Mount-SPContentDatabase "WSS_Content_HNSC" -WebApplication http://spdev

Go back to the browser and hit refresh, and after some time the host-named site collection will render correctly.  However, we have a few problems.  First, go look at the site collections again in Central Administration.  You might see that the site collection is gone!  We run some PowerShell to see what’s up:

PS C:\> get-spwebapplication http://spdev | get-spsite -limit all

Url                                                   
---                                                   
http://spdev  

Huh?  Where’d my site collection go?  If we look in the content database, we can see the site is still there, yet it doesn’t show up anywhere — I tried Get-SPSite and even stsadm –o EnumSites.  Thanks to my colleague, Joe Rodgers, for showing me the fix.

$db = Get-SPContentDatabase WSS_Content_HNSC
$db.RefreshSitesInConfigurationDatabase()

This refreshes the sites in the site map in the configuration database, at which point the site collection appears again in PowerShell and in the UI.

image

PS C:\> get-spwebapplication http://spdev | get-spsite -limit all

Url                                                    
---                                                    
http://spdev                                           
http://hnscmovetest.contoso.lab                        
http://hnscmovetest2.contoso.lab                       
http://hnscmovetest3.contoso.lab 

If you are upgrading and have used this technique to move path-based to host-named site collections, I would definitely recommend keeping this in mind.  Note that this behavior does not seem to happen when you create a new host-named site collection or a new path-based site collection, it only seems to happen when you move an existing path-based site collection to become a host-named site collection. I also only tested this in SharePoint 2010. 

Summary

SharePoint scales by having many site collections instead of many web applications, and host named site collections are a fantastic way to get there without changing URLs.  Honestly, losing the site map information when detaching and attaching the content database seems like unintended behavior to me.  I haven’t tried this in SharePoint 2013 to see if the problem reproduces there; I’d be interested to hear if anyone reproduces it in an SP2013 environment.  If so, leave comments!

Introducing SharePointContext for Provider-Hosted SharePoint Apps!


One of the more frustrating parts of building provider-hosted apps for SharePoint 2013 was that you needed to choose ahead of time if you were targeting a low-trust app or a high-trust app and use the appropriate methods in TokenHelper.cs.  Not only that, but figuring out how to use the app-only policy was less than straightforward.  You were also left to your own devices for caching.  The code in Visual Studio 2013 addresses this with the new SharePointContext.

Choose Your Weapon

As an example, here is a snippet of code from my blog post SharePoint 2013 App Only Policy Made Easy that shows how to use a high-trust app with the app-only policy.

string appOnlyAccessToken = 
   TokenHelper.GetS2SAccessTokenWithWindowsIdentity(_hostWeb, null);

using (ClientContext clientContext = 
    TokenHelper.GetClientContextWithAccessToken(_hostWeb.ToString(), appOnlyAccessToken))
{
    List list = clientContext.Web.Lists.GetByTitle("Announcements");
    ListItemCreationInformation info = new ListItemCreationInformation();
    Microsoft.SharePoint.Client.ListItem item = list.AddItem(info);
    item["Title"] = "Created from CSOM";
    item["Body"] = "Created from CSOM " + DateTime.Now.ToLongTimeString();

    item.Update();
    clientContext.Load(item);
    clientContext.ExecuteQuery();
}

Unless you heavily comment your code (guilty, I don’t), it would be difficult to figure out that the null parameter in the call to GetS2SAccessTokenWithWindowsIdentity actually meant “don’t pass in user information as part of the token”, therefore using the app-only policy.  To compound the problem, that code would only work with high-trust apps.  If you are building a low-trust app, the method used to obtain the access token would look very different, as the use of an app-only token is explicitly declared. 

SharePointContextToken contextToken =
    TokenHelper.ReadAndValidateContextToken(contextTokenString, Request.Url.Authority);

//Get app only access token.
string appOnlyAccessToken =
    TokenHelper.GetAppOnlyAccessToken(contextToken.TargetPrincipalName,
            _hostWeb.Authority, contextToken.Realm).AccessToken;

using (ClientContext clientContext = 
    TokenHelper.GetClientContextWithAccessToken(_hostWeb.ToString(), appOnlyAccessToken))
{
    List list = clientContext.Web.Lists.GetByTitle("Announcements");
    ListItemCreationInformation info = new ListItemCreationInformation();
    Microsoft.SharePoint.Client.ListItem item = list.AddItem(info);
    item["Title"] = "Created from CSOM";
    item["Body"] = "Created from CSOM " + DateTime.Now.ToLongTimeString();

    item.Update();
    clientContext.Load(item);
    clientContext.ExecuteQuery();
}

Developers were left to their own devices to figure out how to create a factory pattern to abstract this low-level code.  So there are two problems highlighted: first, the way you obtain the app-only token is very different in the two models; second, it’s not very straightforward to begin with.  Another thing in that code that I don’t particularly like is that you have to pass in the host web or app web URL.

The Problem with the Cache

There’s another more insidious problem in there.  This code will cause the token to be obtained every time this code is run.  In the S2S case, this isn’t necessarily that huge of an impact, but in the case of the low-trust app, this is a performance hit because it requires a call to Azure ACS each time.  To avoid this, you had to implement your own cache strategy, and that often led to developers putting the access token in an HTTP cookie.  This is particularly bad because it opens you up to attacks where the access token could be obtained and reused.  Consider the access token your app’s username and password and keep the keys away from the clients who use your app.  There is a property in the context token called CacheKey (see my post Inside SharePoint 2013 OAuth Context Tokens for more information on the context token) that you should store in an HTTP cookie instead, and use that key to reference the refresh and access token stored in state on the server.  Without having that code, developers were left on their own to implement caching without understanding the ramifications.
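The recommended pattern — cookie holds only the CacheKey, tokens stay on the server — can be sketched as follows (an illustrative Python sketch; the names `save_tokens`, `lookup_tokens`, and the cookie name are hypothetical, not SharePoint APIs):

```python
# Server-side state keyed by the context token's CacheKey. Only the opaque
# key ever travels to the client, so a stolen cookie does not expose the
# refresh or access token itself.
server_token_store = {}

def save_tokens(cache_key, refresh_token, access_token):
    server_token_store[cache_key] = {
        "refresh_token": refresh_token,
        "access_token": access_token,
    }
    # This dict stands in for the HTTP cookie sent to the client.
    return {"SPCacheKey": cache_key}

def lookup_tokens(cookie):
    return server_token_store.get(cookie.get("SPCacheKey"))

cookie = save_tokens("user1@realm", "rt-abc", "at-xyz")
print(lookup_tokens(cookie)["access_token"])  # -> at-xyz
```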

One Code to Rule Them All

OK, you’ve trudged through the rather extensive setup of the problems being addressed.  Enough already, show me what’s changed to improve the situation!

The new SharePointContext abstracts the details of an app using ACS (a low-trust app) or S2S (a high-trust app).  The class structures supporting SharePointContext look like the following:

There are abstract classes for the provider and context, and concrete classes for high trust and low trust apps.  Very nicely done.

THE PAYOFF – RUNS EITHER AS S2S or ACS!

var spContext = SharePointContextProvider.Current.GetSharePointContext(Context);
using (var appOnlyClientContext = spContext.CreateAppOnlyClientContextForSPHost())
{
    List list = appOnlyClientContext.Web.Lists.GetByTitle("Announcements");
    ListItemCreationInformation info = new ListItemCreationInformation();
    Microsoft.SharePoint.Client.ListItem item = list.AddItem(info);

    item["Title"] = "Created from App Only CSOM " + DateTime.Now.ToLongTimeString();
    item["Body"] = "App Only created by CSOM";

    item.Update();
    appOnlyClientContext.Load(item);
    appOnlyClientContext.ExecuteQuery();
}

This same code runs whether my app is a high-trust app or a low-trust app!

The way this works is that SharePointContextProvider has logic to detect whether the app is a high-trust app.  The static constructor for SharePointContextProvider performs this check and creates a concrete provider class to handle the specific differences between each type of authorization.

static SharePointContextProvider()
{
    if (!TokenHelper.IsHighTrustApp())
    {
        SharePointContextProvider.current = new SharePointAcsContextProvider();
    }
    else
    {
        SharePointContextProvider.current = new SharePointHighTrustContextProvider();
    }
}

That call to IsHighTrustApp, how does it figure out if this is a high trust app or not?  It checks for SigningCredentials.

public static bool IsHighTrustApp()
{
    return SigningCredentials != null;
}

SigningCredentials is a property that’s populated by looking in the web.config for an appSetting value containing the certificate path.  If there is a certificate, SigningCredentials will be non-null, and it is deemed a high trust app.  In case it isn’t abundantly clear, you no longer have to create two versions of your app (hope you weren’t doing this), or write a bunch of code just to figure out how to write the app only once.

 

Fixing the Cache Issue

The other really nice part of this new model is that you don’t have to worry about the cache problem nearly as much.  When you make the call to GetSharePointContext, the implementation of that method makes a call to SaveSharePointContext.

public SharePointContext GetSharePointContext(HttpContextBase httpContext)
{
    if (httpContext == null)
    {
        throw new ArgumentNullException("httpContext");
    }

    Uri spHostUrl = SharePointContext.GetSPHostUrl(httpContext.Request);
    if (spHostUrl == null)
    {
        return null;
    }

    SharePointContext spContext = LoadSharePointContext(httpContext);

    if (spContext == null || !ValidateSharePointContext(spContext, httpContext))
    {
        spContext = CreateSharePointContext(httpContext.Request);

        if (spContext != null)
        {
            SaveSharePointContext(spContext, httpContext);
        }
    }

    return spContext;
}

Why is this a win?  For starters, it does the right thing by using a cookie to store the CacheKey, and stores the actual token in session state on the server referenced by the cache key.  Notice that call to LoadSharePointContext.  For the SharePointAcsContextProvider concrete class, this method looks like the following:

protected override SharePointContext LoadSharePointContext(HttpContextBase httpContext)
{
    return httpContext.Session[SPContextKey] as SharePointAcsContext;
}

How sweet is that?  It does the right thing and looks in session state for the context based on the key.  That significantly reduces the amount of traffic.  Note, though, that it is using session state, which doesn’t survive nearly as long as the refresh token and access token.  You may have different caching needs.  Thankfully, the code is generated for you as part of your SharePoint provider-hosted app project, and the classes are not sealed.  This means you can modify or extend the behavior to suit your needs.  You could also change and extend TokenHelper.cs as well, such as in Steve Peschka’s sample in his post Using SharePoint Apps with SAML and FBA Sites in SharePoint 2013.

Host Web or App Web?

There is also a nice change in Visual Studio 2013 for the SharePointContext that lets you explicitly use the app-only or app+user context, and explicitly target the app web or host web.  Notice the members:

image

Notice the CreateAppOnlyClientContextForSP* and CreateUserClientContextForSP* methods, each with *AppWeb and *Host suffixes.  This makes the code much more explicit and self-documenting, without having to dig up my old blog posts to figure it out.

Great job, Visual Studio team!  Well done!

You can find more information from Chris Johnson’s blog post, SharePoint app tools in Visual Studio 2013 preview, the new SharePointContext helper!  Chris highlights additional benefits, such as surviving postbacks in MVC apps and the [SharePointContextFilter] attribute that ensures that you have a context.  He provides additional links to talks from the Build Conference in that post. 

For More Information

Using SharePoint Apps with SAML and FBA Sites in SharePoint 2013

Inside SharePoint 2013 OAuth Context Tokens

SharePoint 2013 App Only Policy Made Easy

SharePoint app tools in Visual Studio 2013 preview, the new SharePointContext helper!

Enabling the Developer Site Collection Feature in SharePoint Online


I have been doing a bit of work updating some of our content for apps lately, and I found a couple areas where I needed to enable the developer site collection feature for a new site collection.  You might want to enable the developer site collection feature when you need to side-load an app and deploy directly from Visual Studio.  There isn’t a button in the site collection features to let you enable this. 

If you were on-premises, you’d just run Enable-SPFeature:

Enable-SPFeature e374875e-06b6-11e0-b0fa-57f5dfd72085 –url http://sp.contoso.com

However, we can’t do that with SharePoint Online as the PowerShell cmdlets don’t expose the ability to turn features on and off.  There are a few ways to do this by taking advantage of the Client Side Object Model.

Using PowerShell

One way would be to install the excellent (and free!) SharePoint Client Browser for SharePoint 2010 and 2013.  It includes an extremely useful feature to open up PowerShell with the CSOM already loaded.

image

Using that PowerShell window, you could then easily use the client side object model (CSOM) to enable the developer feature.

$ctx.Load($ctx.Site);
$ctx.ExecuteQuery();
$guid = [System.Guid]"e374875e-06b6-11e0-b0fa-57f5dfd72085"
$ctx.Site.Features.Add($guid,$true,[Microsoft.SharePoint.Client.FeatureDefinitionScope]::None)
$ctx.ExecuteQuery();

Too easy.

Creating an App

Of course, I am lazy and I don’t want to have to dig up that PowerShell code each time I do this.  I want to be able to just click a button like I do in the site collection features dialog.  I can do that by creating a simple SharePoint hosted app.  In Visual Studio, create a new app for SharePoint.  Replace the contents of app.js.

'use strict';

var context = SP.ClientContext.get_current();


// This code runs when the DOM is ready and creates a context object which is needed to use the SharePoint object model
$(document).ready(function () {
    var site = context.get_site();
    context.load(site);
    context.executeQueryAsync(function () {
        site.get_features().add("e374875e-06b6-11e0-b0fa-57f5dfd72085", true, 0);
        context.executeQueryAsync(function () {
            alert("added the developer site feature to the site collection!");
        },
            function(sender, args){
                alert("unable to add the developer site feature to the site collection: " + args );
            });
    },
    function (sender, args) {
        alert("oops! " + args);
    });
});

Go to the appmanifest for your app and request Manage permission for the site collection.

image

Package the app and then add the .app package to the app catalog for your tenant.  Now you can go to any site in your tenancy where you are a site collection administrator and add the app.  Click on the app to execute the code above, and then optionally uninstall the app from that site so that other users don’t feel compelled to click on it. 

Now it’s as simple as adding the app, executing it, then uninstalling it.  You might add additional capabilities, such as a toggle button to enable/disable the feature, but I’ll leave that as an exercise for the reader.

What Every Developer Needs to Know About SharePoint Apps, CSOM, and Anonymous Publishing Sites


This post will show what works and what doesn’t with CSOM and REST in a SharePoint 2013 publishing site that permits anonymous access.  More importantly, we show what you should and should not do… and why.

Overview

I frequently see questions about using SharePoint apps with “public-facing web sites” where the web content is available to anonymous users.  There are a lot of misconceptions about what is possible.  This post will dive into some of the gory details of CSOM with anonymous access.  This demonstration will use an on-premises lab environment instead of O365.

Setting Up the Demonstration

I created a new web application that allows anonymous access (http://anonymous.contoso.lab) and a site collection using the Publishing template.  I then enabled anonymous access for the entire site by going to Site Settings / Site Permissions and clicking the Anonymous Access button in the ribbon.  Notice the checkbox “Require Use Remote Interfaces permission,” which is checked by default… leave it checked for now.

image

Next, I created a SharePoint-hosted app.  I just slightly modified the out of box template.

'use strict';

var context = SP.ClientContext.get_current();
var user = context.get_web().get_currentUser();

$(document).ready(function () {
    getUserName();
});

function getUserName() {
    context.load(user);
    context.executeQueryAsync(onGetUserNameSuccess, onGetUserNameFail);
}


function onGetUserNameSuccess() {
    var userName = null;
    if (null != user) {
        //The user is not null, but get_title() throws if the property
        //was not initialized, which is the case for an anonymous user
        try {
            userName = user.get_title();
        } catch (e) {
            userName = "Anonymous user!";
        }
    }
    $('#message').text('Hello ' + userName);
}


function onGetUserNameFail(sender, args) {
    alert('Failed to get user name. Error:' + args.get_message());
}

Next, I add a client app part to the project.  The client app part isn’t going to do anything special; it just says Hello World.

<%@ Page language="C#" Inherits="Microsoft.SharePoint.WebPartPages.WebPartPage, Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Register Tagprefix="SharePoint" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Register Tagprefix="Utilities" Namespace="Microsoft.SharePoint.Utilities" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Register Tagprefix="WebPartPages" Namespace="Microsoft.SharePoint.WebPartPages" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<WebPartPages:AllowFraming ID="AllowFraming" runat="server" />
<html>
<head>
    <title></title>
    <script type="text/javascript" src="../Scripts/jquery-1.8.2.min.js"></script>
    <script type="text/javascript" src="/_layouts/15/MicrosoftAjax.js"></script>
    <script type="text/javascript" src="/_layouts/15/sp.runtime.js"></script>
    <script type="text/javascript" src="/_layouts/15/sp.js"></script>

    <script type="text/javascript">
        'use strict';

        // Set the style of the client web part page to be consistent with the host web.
        (function () {
            var hostUrl = '';
            if (document.URL.indexOf('?') != -1) {
                var params = document.URL.split('?')[1].split('&');
                for (var i = 0; i < params.length; i++) {
                    var p = decodeURIComponent(params[i]);
                    if (/^SPHostUrl=/i.test(p)) {
                        hostUrl = p.split('=')[1];
                        document.write('<link rel="stylesheet" href="' + hostUrl + '/_layouts/15/defaultcss.ashx" />');
                        break;
                    }
                }
            }
            if (hostUrl == '') {
                document.write('<link rel="stylesheet" href="/_layouts/15/1033/styles/themable/corev15.css" />');
            }
        })();
    </script>
</head>
<body>
    <h1>Hello, world!</h1>
</body>
</html>

Next, I went to Central Administration / Apps / Manage App Catalog and created an app catalog for the web application.  I published the SharePoint-hosted app in Visual Studio (which just generates the .app package) and then uploaded the .app package to the App Catalog.

image

Next, as the site collection administrator, I added the app to the publishing site.

image

Finally, I edit the main page of the publishing site and add the client app part and test that it works.  Check in the page and publish and you should see something like this:

image

What Do Anonymous Users See?

The question on everybody’s mind is what happens if there is not an authenticated user.  In our simple test, recall that the only thing we are showing is a simple IFRAME with some styling obtained from SharePoint.  The only thing our IFRAME is showing is a page that contains static HTML, “Hello, world!”.  I highlighted the “Sign In” link to show that I really am an anonymous user.

image

Now, click the link “SPHostedClientWebPart Title” (yeah, I know, I am lazy… I should have given it a better name) and you are taken to the full-page experience for the app.  What do we see?  We get an error.

image

That error is saying that the anonymous user does not have access to use the Client Side Object Model.  Just for grins, let’s try the REST API with the URL http://anonymous.contoso.lab/_api/Web.  First, you get a login prompt.  Next, you see error text that says you do not have access.

image

This makes sense because the CSOM and REST API are not available by default to anonymous users.

Enabling CSOM for Anonymous Users

Let me start this section by saying that what I am about to show you comes with risks you will need to evaluate for your environment.  Continue reading the entire article to understand those risks.

That said, go back to Site Settings / Site Permissions and then click the Anonymous Access button in the ribbon.  Remember that rather cryptic-sounding checkbox Require Use Remote Interfaces permission?  Uncheck it and click OK.

image

That checkbox decouples use of CSOM from the Use Remote Interfaces permission.  When checked, it means that the user must possess the Use Remote Interfaces permission, which allows access to SOAP, WebDAV, and the Client Object Model.  You can remove this permission from users to disable their ability to use SharePoint Designer.  There are cases where you want to remove this permission, such as for anonymous users, yet still allow use of the CSOM.  This is exactly what the checkbox lets you do: unchecking it enables use of CSOM without requiring users to have that permission.

To test the change, go back to the main page for your site.  Of course, we still see the IFRAME from before; that’s not doing anything with CSOM.  Click the title of the web part to see the full-page immersive experience.  This time, we do not see an error message; instead, we see that our code fell into an exception handler because the Title property of the User object was not initialized.  Our error handling code interprets this as an anonymous user. 

image

You just used the app model in a public-facing SharePoint site with anonymous users.

In case you are interested, you can set the same property with PowerShell using a comically long yet self-descriptive method UpdateClientObjectModelUseRemoteAPIsPermissionSetting.

PS C:\> $site = Get-SPSite http://anonymous.contoso.lab
PS C:\> $site.UpdateClientObjectModelUseRemoteAPIsPermissionSetting($false)

How about that REST API call?  What happens with anonymous users now?

image

Now that you know how to fire the shotgun, let’s help you move it away from your foot.

All Or Nothing

There is no way to selectively enable parts of the CSOM EXCEPT search.

UPDATE: Thanks to Sanjay for pointing out that it is possible to enable search for anonymous users without enabling the entire CSOM or REST API, and thanks to Waldek Mastykarz for a great article showing how to do it.

Enabling use of CSOM for anonymous users presents a possible information disclosure risk, in that it potentially divulges much more information than you would anticipate.  Let me make that clear: if you remove the Require Use Remote Interfaces permission for an anonymous site, the entire CSOM is now available to anonymous users.  Of course, that doesn’t mean they can do anything they want; SharePoint permissions still apply.  If a list is not made available to anonymous users, then you can’t use the CSOM to circumvent that security requirement.  Similarly, an anonymous user will only be able to see lists or items that have been explicitly made available to anonymous users.  It’s important to know that more than just what you see on the web page is now available via CSOM or REST.

ViewFormPagesLockDown?  Ha!

In a public-facing web site using SharePoint, you want to make sure that users cannot go to form pages such as Pages/Forms/AllItems.aspx, where we would see things like Created By and Modified By.  The ViewFormPagesLockDown feature is enabled by default for publishing sites to protect against this very scenario.  This feature reduces fine-grained permissions for the Limited Access permission level, removing the permission to View Application Pages or Use Remote Interfaces.  This means that an anonymous user cannot go to Pages/Forms/AllItems.aspx and see all of the pages in that library.  If we enable CSOM for anonymous users, you still won’t be able to access Created By and Modified By via the HTML UI in the browser, but you can now access that information using CSOM or REST.

To demonstrate, let’s use the REST API as an anonymous user to look through the items in the Pages library by appending _api/Web/Lists/Pages/Items to the site.

image

I’ll give you a moment to soak that one in.

Let me log in as Dan Jump, the CEO of my fictitious Contoso company in my lab environment.  Dan authors a page and publishes it.  An anonymous user now uses the REST API (or CSOM, but if you are reading this hopefully you get that they are the same endpoint) using the URL:


http://anonymous.contoso.lab/_api/Web/Lists/Pages/Items(4)/FieldValuesAsText

image
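The REST URL above follows a simple, predictable pattern.  As a sketch (the helper name is mine, and this mirrors the /Lists/Pages/ path style shown above; for lists whose URL differs from their title you would use getbytitle instead), it can be composed like this:

```javascript
// Compose the REST URL that returns an item's field values as text,
// matching the pattern _api/Web/Lists/<list>/Items(<id>)/FieldValuesAsText.
function itemFieldValuesUrl(siteUrl, listName, itemId) {
    return siteUrl.replace(/\/$/, '') +
        '/_api/Web/Lists/' + listName +
        '/Items(' + itemId + ')/FieldValuesAsText';
}

console.log(itemFieldValuesUrl('http://anonymous.contoso.lab/', 'Pages', 4));
```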

The resulting text shows his domain account (contoso\danj) and his name (Dan Jump).  This may not be a big deal in many organizations, but for some this would be a huge deal and an unintended disclosure of personally identifiable information.  Understand that if you enable the CSOM for your public-facing internet site, you run the risk of information disclosure. 

For those who might be confused about my using the term “CSOM” while showing examples using REST, here is some code to show you that it works.  I don’t need OAuth or anything here, and I run this from a non-domain-joined machine to prove that you can now get to the data.

using Microsoft.SharePoint.Client;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            using (ClientContext ctx = new ClientContext("http://anonymous.contoso.lab"))
            {
                ctx.Credentials = System.Net.CredentialCache.DefaultCredentials;
                Console.WriteLine("Enter the list name: ");
                string listName = Console.ReadLine();
                List list = ctx.Web.Lists.GetByTitle(listName);

                Console.WriteLine("Enter the field name: ");
                string fieldName = Console.ReadLine();


                CamlQuery camlQuery = new CamlQuery();
                
                ListItemCollection listItems = list.GetItems(camlQuery);
                ctx.Load(
                     listItems,
                     items => items
                         .Include(
                             item => item[fieldName],
                             item => item["Author"],
                             item => item["Editor"]));
                try
                {
                    ctx.ExecuteQuery();

                    foreach (var myListItem in listItems)
                    {
                        Console.WriteLine("{0} = {1}, Created By={2}, Modified By={3}", 
                            fieldName, myListItem[fieldName], ((FieldUserValue)myListItem["Author"]).LookupValue,  
                            ((FieldUserValue)myListItem["Editor"]).LookupValue );
                    }
                }
                catch (Exception oops)
                {
                    Console.WriteLine(oops.Message);
                }
            }
        }
    }
}

This code simply asks for a list name and a column name that you would like to query data for, such as “Title”. I also include the Created By and Modified By fields as well to demonstrate the potential disclosure risk. Since the CSOM is available to anonymous users, I can call it from a remote machine and gain information that was not intended to be disclosed.

clip_image002

You see that CSOM and REST are calling the same endpoint and getting the same data.

Security Trimming Still in Effect

At this point I have probably freaked a few readers out who didn’t understand the full consequences when they unchecked that checkbox (or, more likely, people who skimmed and did not read this section).  Does this mean that anonymous users can do ANYTHING they want to with the CSOM?  Of course not.  When you configured anonymous access for the web application, you specified the anonymous policy, likely with “Deny Write – Has no write access”. 

image

This means what it says: Anonymous users cannot write, even with the REST API or by using CSOM code.  Further, anonymous users can only see the information that you granted them to see when you configured anonymous access for the site. 

image

If there is content in the site that you don’t want anonymous users to access, you have to break permission inheritance and remove the right for viewers to read.

image

Additionally, there is some information that is already locked down.  Logged in as the site collection administrator, I can go to the User Information List and see information about the site users.

http://anonymous.contoso.lab/_api/web/lists/getbytitle('User%20Information%20List')/Items

If I try that same URL as an anonymous user, I simply get a 404 not found. 

To summarize this section and make it perfectly clear: security trimming is still in effect.  Unpublished pages are not visible by default to anonymous users.  They can only see the lists that enable anonymous access.  If, despite what I’ve written so far, you decide to enable CSOM for anonymous users, then you will want to make sure that you don’t accidentally grant access for anonymous users to things they shouldn’t have access to.

Potential for DoS

When you use SharePoint to create web pages, you carefully construct the information that is shown on the page and you control how frequently it is queried or seen.  Hopefully you perform load tests to confirm your environment can sustain the expected traffic load before putting it into production.

With CSOM enabled for anonymous users, all that testing was pointless.

There is no caching with the CSOM, so this opens up the possibility for me to do things like batch requests that query 2,000 items from multiple lists simultaneously, staying under the default list view threshold while taxing your database.  Now if I can get a few other people to run that code, say by using some JavaScript exploit and posting it to Twitter, then I have the makings of a DoS attack… or at least one hell of a stressful day for my databases.

You Really Need to Understand How OAuth Works

Hopefully by now I have convinced you that enabling anonymous users to directly access the CSOM is not advised (and hence why it is turned off by default).  At this point in the conversation, I usually hear someone chime in about using the app only policy.  Let me tell you why this is potentially a MONUMENTALLY bad idea.

With provider-hosted apps, I can use this thing called the App Only Policy.  This lets my app perform actions that the current user is not authorized to do.  As an example, an administrator installs the app and grants it permission to write to a list.  A user who has read-only permission can use the app, and the app can still write to the list even though the user does not have permission.  Pretty cool! 

I have presented on the SharePoint 2013 app model around the world, and I can assure you that it all comes down to security and permissions.  Simply put, you must invest the time to understand how OAuth works in SharePoint 2013.  Watch my Build 2013 talk Understanding Authentication and Permissions with Apps for SharePoint and Office as a good starter.

The part to keep in mind is how OAuth works and why we say you ABSOLUTELY MUST USE SSL WITH SHAREPOINT APPS.  It works by the remote web site sending an access token in the HTTP header “Authorization” with a value of “Bearer “ plus a base64 encoded string.  Notice I didn’t say encrypted, I said encoded.  That means the information can easily be decoded by anybody who can read the HTTP header value.

I showed a working example of this when I wrote a Fiddler extension to inspect SharePoint tokens.  It isn’t that hard to crack open an access token to see what’s in it.

image
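To make the point concrete, here is a sketch in Node.js that decodes the payload of a JWT-style bearer token using nothing but base64; the token below is fabricated for illustration, not a real SharePoint access token:

```javascript
// A JWT consists of three base64url-encoded segments: header.payload.signature.
// Decoding the payload requires no key at all, which is exactly why bearer
// tokens must only travel over SSL.
function decodeJwtPayload(token) {
    var payload = token.split('.')[1];
    // Convert base64url to standard base64 and pad to a multiple of 4.
    var b64 = payload.replace(/-/g, '+').replace(/_/g, '/');
    while (b64.length % 4 !== 0) b64 += '=';
    return JSON.parse(Buffer.from(b64, 'base64').toString('utf8'));
}

// Build a fake token to demonstrate; the header and signature segments are
// irrelevant to reading the payload, so placeholders are used.
var fakePayload = Buffer.from(JSON.stringify({
    aud: '00000003-0000-0ff1-ce00-000000000000',
    nameid: 'danj'
})).toString('base64');
var fakeToken = 'xxx.' + fakePayload + '.yyy';

console.log(decodeJwtPayload(fakeToken).nameid); // prints "danj"
```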

If you have a public-facing web site, you are likely letting everyone access it using HTTP and not requiring HTTPS (of course, you are doing this).  If you tried to use a provider-hosted app without using SSL, then anyone can get to the Authorization token and replay the action, or worse.  They could create a series of tests against lists, the web, the root web, the site collection, additional site collections, or the tenant to see just what level of permission the app had.  If the app has been granted Full Control permission to a list, it has the ability to do anything it wants to that list including delete it.  Even though your app may only be writing a document to the list, your app is authorized to do anything with that list.  Start playing around with CSOM and REST, and I can do some nasty stuff to your environment. 

One more thing… the access token is good for 12 hours.  That’s a pretty large window of time for someone to get creative and make HTTP calls to your server, doing things that you never intended.

Doing It The Right Way

Suffice it to say, you DO NOT want to make CSOM calls from a provider-hosted app to an anonymous site, and you DO NOT want to enable CSOM for anonymous users.  Does this completely rule out using the app model?  

You could set up a different zone with a different URL that uses SSL, and your app will communicate to SharePoint using only server-side calls to an SSL protected endpoint.  To achieve this, the app would have to use the app-only policy because no user information would be passed as part of the token (see my blog post, SharePoint 2013 App Only Policy Made Easy for more information).

image

The reason that I stressed server-side calls in the previous paragraph is simple: if they were client-side calls using the Silverlight CSOM or JavaScript CSOM implementation, we’d be back at the previous problem of exposing the CSOM directly to anonymous users.

This pattern means there are certain interactions with apps that are not going to work easily.  For instance, using this pattern with an ACS trust means that a context token will not be passed to your app because your app is using HTTP.  You can still communicate with SharePoint, but the coding is going to look a bit different. 

string realm = TokenHelper.GetRealmFromTargetUrl(siteUri);

//Get the access token for the URL.
//Requires this app to be registered with the tenant
string accessToken = TokenHelper.GetAppOnlyAccessToken(
    TokenHelper.SharePointPrincipal, 
    siteUri.Authority, 
    realm).AccessToken;

//Get client context with access token
using (var clientContext = TokenHelper.GetClientContextWithAccessToken(
     siteUri.ToString(), accessToken))
{
    //Do work here
}

Instead of reading information from the context token, we are simply making a request to Azure ACS to get the access token based on the realm and the URL of the SharePoint site.  This keeps the access token completely away from users, uses SSL when communicating with SharePoint, and allows you to control the frequency and shape of information that is queried rather than opening access to the whole API to any developer who wants to have a go at your servers.

Enabling Search for Anonymous Users

Sanjay Narang pointed out in the comments to this post that it is possible to enable anonymous search REST queries without removing the Require Use Remote Interfaces permission setting.  This is detailed in the article SharePoint Search REST API overview.  Administrators can restrict which query parameters are exposed to anonymous users by using a file called queryparametertemplate.xml.  To demonstrate, I first make sure that the site still requires the Use Remote Interfaces permission.

image

Next, I make sure that search results will return something.  I have a library, Members Only, that contains a single document.

image

That document contains the following text.

image

I break permissions for the library and remove the ability for anonymous users to access it.

image

A search for “dan” returns two results if I am an authenticated user.

image

A search for “dan” returns only one result as an anonymous user.

image

If I attempt to use the search REST API as an authenticated user, I get results.

image

If I attempt it as an anonymous user, I get an HTTP 500 error. 

image

Looking in the ULS logs, you will see that the following error occurs.

Microsoft.Office.Server.Search.REST.SearchServiceException: The SafeQueryPropertiesTemplateUrl "The SafeQueryPropertiesTemplateUrl "{0}" is not a valid URL." is not a valid URL. 

To address this, we will use the approach detailed by Waldek Mastykarz in his blog post Configuring SharePoint 2013 Search REST API for anonymous users.  I copy the XML from the SharePoint Search REST API overview article and paste into notepad.  As instructed in that article, you need to replace the farm ID, site ID, and web ID.  You can get those from PowerShell.

PS C:\> Get-SPFarm | select ID

Id                                                                             
--                                                                             
bfa6aff8-2dd4-4fcf-8f80-926f869f63e8                                           



PS C:\> Get-SPSite http://anonymous.contoso.lab | select ID

ID                                                                             
--                                                                             
6a851e78-5065-447a-9094-090555b6e855                                           



PS C:\> Get-SPWeb http://anonymous.contoso.lab | select ID

ID                                                                             
--                                                                             
8d0a1d1a-cdae-4210-a794-ba0206af1751   

Given those values, I can now replace them in the XML file.

image

Save the XML file to a document library named QueryPropertiesTemplate.

image

Finally, append &QueryTemplatePropertiesUrl='spfile://webroot/queryparametertemplate.xml' to the query.

image
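Composing that URL is just string concatenation.  This sketch (the helper name is mine; the query template path is the one shown above) appends the QueryTemplatePropertiesUrl parameter to a standard search REST query:

```javascript
// Compose a SharePoint search REST URL for anonymous use, appending the
// QueryTemplatePropertiesUrl parameter that points at the allow-list file.
function buildAnonymousSearchUrl(siteUrl, queryText) {
    return siteUrl.replace(/\/$/, '') +
        "/_api/search/query?querytext='" + encodeURIComponent(queryText) + "'" +
        "&QueryTemplatePropertiesUrl='spfile://webroot/queryparametertemplate.xml'";
}

console.log(buildAnonymousSearchUrl('http://anonymous.contoso.lab/', 'dan'));
```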

Now try the query as an anonymous user, and you get results even though we still require the Use Remote Interfaces permission.  The rest of the CSOM and REST API is not accessible; only search is, and only for the properties allowed through that file.

image

Summary

This was a long read (it took even longer to write), but hopefully it helps answer the question of whether you should use apps for your public-facing site.  As you saw, you can easily create client app parts and even custom actions.  You should not enable CSOM for anonymous access unless you are OK with the risks and can mitigate them.  You can still use the app model, but it will take a little more engineering and will require your SharePoint site to have an SSL-protected endpoint that is different from the endpoint users access.

For More Information

Understanding Authentication and Permissions with Apps for SharePoint and Office

SharePoint 2013 App Only Policy Made Easy

Fiddler extension to inspect SharePoint tokens

Configuring SharePoint 2013 Search REST API for anonymous users.

SharePoint Search REST API overview

Setting a SharePoint Person or Group Field Value with CSOM


I recently was asked a question about setting a Person or Group field value in a list using CSOM.  This post shows how to programmatically set a user or group field value using CSOM.

Background

A developer wants to programmatically add users as members to a Community site in SharePoint 2013.  Once the users have adequate permissions to the site, they still need to join the community, which adds a new list item in the Community Members list.  The Community Members list in a Community site has a few columns, but the only one that really matters is the Member column.

image

The Member column is of type Person or Group.  Most of the CSOM examples you will see use simple types, such as text or numbers; how can we programmatically set a Person or Group field?  It turns out to be pretty easy.

The Code

I created a provider-hosted app to demonstrate this using managed code, but I could have done this with JavaScript in a SharePoint-hosted app as well.  

I am using the new SharePointContext in Visual Studio 2013 to greatly simplify working with OAuth tokens in SharePoint apps.  We obtain a reference to the host web, then call OpenWeb to obtain a reference to another web within the site collection.  Load the site and execute the query, and we can now access properties from the web. 

I am using Office 365 for this demonstration, so we use the claims-encoded value for the account name.  EnsureUser checks whether the specified logon name belongs to a valid user of the website and, if the logon name does not already exist, adds it to the website.

Once we ensure the user account, we next use the FieldUserValue type.  We get the user’s ID and set it as the LookupId property, then set the Member field value to the FieldUserValue type.  Call update, and the new user value is set.

protected void Page_Load(object sender, EventArgs e)
{
    var spContext = SharePointContextProvider.Current.GetSharePointContext(Context);

    using (var clientContext = spContext.CreateUserClientContextForSPHost())
    {
        Web communitySite = clientContext.Site.OpenWeb("Community");
        clientContext.Load(communitySite);
        clientContext.ExecuteQuery();

        User newUser = communitySite.EnsureUser("i:0#.f|membership|allieb@kirke.onmicrosoft.com");
        clientContext.Load(newUser);
        clientContext.ExecuteQuery();

        FieldUserValue userValue = new FieldUserValue();
        userValue.LookupId = newUser.Id;

        List members = communitySite.Lists.GetByTitle("Community Members");
        Microsoft.SharePoint.Client.ListItem item = 
            members.AddItem(new ListItemCreationInformation());
        item["Member"] = userValue;
        item.Update();
        clientContext.ExecuteQuery();
    }
}
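For reference, the claims-encoded logon name passed to EnsureUser above follows a fixed prefix for federated (membership) identities.  This trivial helper (my own, not part of CSOM) builds it from a user principal name:

```javascript
// Build the claims-encoded logon name SharePoint Online expects for a
// federated (membership) identity: "i:0#.f|membership|" + UPN.
function toClaimsLogonName(upn) {
    return 'i:0#.f|membership|' + upn;
}

console.log(toClaimsLogonName('allieb@kirke.onmicrosoft.com'));
// prints "i:0#.f|membership|allieb@kirke.onmicrosoft.com"
```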

The Result

The result is pretty uninteresting; as you might expect, it simply adds a new list item to the Community Members list.  You can see the result by inspecting the members of the Community site.  Just by using the FieldUserValue type, the user’s information and picture are properly set without us having to write any code to support it.

image

For More Information

FieldUserValue Class

Web.EnsureUser Method

How to Allow Only Users Who Have a Community Badge to Your SharePoint 2013 Site

Call O365 using CSOM with a Console Application


This post shows how to use the SharePointOnlineCredentials class to authenticate to O365 from a console application.

Background

I write a ton of short samples for customers and co-workers.  I’ve written this one quite a few times but never seemed to add it to my personal source code control repository in the cloud (you are aware that you can get TFS in the cloud for free with Visual Studio Online, right?)  As I started adding this code to TFS today, I realized that I should also blog this one as it may help someone else.

In 2011, Wictor Wilen wrote a fantastic post that showed how to do active authentication to Office 365 and SharePoint Online.  While that post is still very accurate, that functionality has been brought into the client side object model so that you do not have to write this code yourself.  The CSOM for SharePoint 2013 introduces the new SharePointOnlineCredentials class that provides this functionality.

Show Me the Code!

To show you how easy this is, here is a Console application that uses the SharePointOnlineCredentials class to get a remote site’s Title property.

using System;
using System.Security;
using Microsoft.SharePoint.Client;

namespace MSDN.Samples
{
    class Program
    {
        static void Main(string[] args)
        {
            ConsoleColor defaultForeground = Console.ForegroundColor;

            Console.ForegroundColor = ConsoleColor.Green;
            Console.WriteLine("Enter the URL of the SharePoint Online site:");

            Console.ForegroundColor = defaultForeground;
            string webUrl = Console.ReadLine();

            Console.ForegroundColor = ConsoleColor.Green;
            Console.WriteLine("Enter your user name (ex: kirke@mytenant.microsoftonline.com):");
            Console.ForegroundColor = defaultForeground;
            string userName = Console.ReadLine();

            Console.ForegroundColor = ConsoleColor.Green;
            Console.WriteLine("Enter your password.");
            Console.ForegroundColor = defaultForeground;
            SecureString password = GetPasswordFromConsoleInput();

            using (var context = new ClientContext(webUrl))
            {
                context.Credentials = new SharePointOnlineCredentials(userName,password);
                context.Load(context.Web, w => w.Title);
                context.ExecuteQuery();

                Console.ForegroundColor = ConsoleColor.White;
                Console.WriteLine("Your site title is: " + context.Web.Title);
                Console.ForegroundColor = defaultForeground;
            }
        }

        private static SecureString GetPasswordFromConsoleInput()
        {
            ConsoleKeyInfo info;

            //Get the user's password as a SecureString
            SecureString securePassword = new SecureString();
            do
            {
                info = Console.ReadKey(true);
                if (info.Key != ConsoleKey.Enter)
                {
                    securePassword.AppendChar(info.KeyChar);
                }
            }
            while (info.Key != ConsoleKey.Enter);
            return securePassword;
        }
    }
}

Once you run the application, supply the URL, username, and password of a user that has permission to access the site using CSOM.  Here is what the output looks like:

image

For More Information

how to do active authentication to Office 365 and SharePoint Online

SharePointOnlineCredentials class

Connecting to Office 365 using Client Side Object Model and Web Services

Using PowerShell and the .NET CSOM to Query SharePoint 2013 Online

Creating a SharePoint 2013 App With Azure Web Sites


This post will show how to create a SharePoint 2013 app for Office 365 and publish it to an Azure web site.  If you don’t have either today, you can get an Office 365 trial for free and an Azure web site for free.  If you are an MSDN subscriber, you have access to free MSDN benefits including Windows Azure and Office 365.

Now we need a web site for our app.  Hmm… where to get one?  Oh yeah… Azure gives me TEN FREE WEB SITES.  Plus, I have an MSDN subscription that gives me $150 per month; I’ll just use that.  Free money.

You’ll also need to install the Windows Azure Tools for Microsoft Visual Studio.  Go ahead, go do that, then come back when you’re done.  I’ll wait.

Creating the Azure Web Site

Since you already installed the Windows Azure Tools for Microsoft Visual Studio, you have a new option in Server Explorer to connect to Windows Azure.

image

You are prompted to sign into Azure. 

image

Once signed in, expand the tree view.  Right-click on Web Sites and choose Add New Site.

image

Give the site a name (it must be unique) and choose a location nearest to you.

image

Click Create.  OMG, that is so cool.  Creating web sites without leaving Visual Studio.

Creating the App

With the Windows Azure Tools for Microsoft Visual Studio installed, open Visual Studio 2013 and create a new SharePoint provider-hosted app.

image

In the next screen, provide the URL for your SharePoint site and leave the hosting type as the default “provider-hosted”.

image

Next, choose what type of hosting model you want.  SharePoint developers have been asking for years for ASP.NET MVC… well, here it is!

image

Finally, choose how you want your app to authenticate.  Since we are using Office 365 for this example, leave the default “Use Windows Azure Access Control Service”.

image

That’s it, you now have an app.  If you look at the code that was generated by default, you’ll see that it shows how to use the managed client side object model to obtain the user’s Title property, displaying their name. 

Creating an O365 SharePoint Developer Site Collection

To enable side-loading of apps, you’ll need a Developer Site Collection in O365.  Go to your SharePoint site as a tenant administrator and choose Admin / SharePoint.

image

In the tenant administration screen, choose New Private Site Collection.

image

Give the site a name, choose a URL, and set the site template to “Developer Site”.

image

Once that’s created, you now have a SharePoint site capable of side-loading apps.

Debugging Locally

To do this, you’ll need a Developer Site Collection in SharePoint to deploy to. 

Hit F5 and notice that you first have to log into Office 365.  This is because the .app package for the app is being deployed to SharePoint in the cloud while the app runs locally in IIS Express.  SharePoint needs the credentials of a user who is authorized to deploy the app package.

image

You are then prompted to enter your credentials, this time as the user who is installing the app to the site.

image

image

Of course you trust MyAzureDemo.  Click Trust It.  You are then redirected to your web page running on IIS Express that uses the managed client side object model to query data from an O365 SharePoint site.

image

The point here is that you would typically just hit F5 to debug locally until you are ready to publish. 

Publishing to Azure Web Sites

We now have a web site and our app, so how do we get the app onto the web site?  We first download the publish profile for the web site.  Right-click the web site and choose “Download Publish Profile”. 

image

A file with the extension “.publishsettings” is now in your downloads folder.

image

We now need to get a client ID and client secret from SharePoint.  Go to your SharePoint site and append “_layouts/15/appregnew.aspx” to the site URL.  Provide the domain name of your new Azure web site (the one I created was called KirkEAzureDemo.AzureWebSites.net… capitalization doesn’t matter), then click Generate to create a client ID and a client secret.
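
When you publish the app, these values end up in the web application’s web.config.  A sketch of what that looks like — the GUID and secret shown here are placeholders, not real values:

```xml
<!-- Sketch: where the AppRegNew.aspx values land in web.config (placeholder values) -->
<appSettings>
  <add key="ClientId" value="00000000-0000-0000-0000-000000000000" />
  <add key="ClientSecret" value="your-client-secret-from-appregnew" />
</appSettings>
```

TokenHelper reads these settings to request access tokens on behalf of your app.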

image

Click Create and the app principal is created.  Copy the Client ID and Client Secret values; we’ll need those in just a second.

Now, in Visual Studio 2013, right-click the SharePoint app project (not the web project) and choose Publish.

image

Create a new publish profile.

image

Select the .publishsettings file that you downloaded previously.

image

Provide the client ID and client secret and choose Finish.

image

You now have a profile that contains your client ID and client secret as well as the location of the Azure web site to deploy to!  In the next screen, choose “Deploy your web project”.

image

You can preview the changes.

image

Click publish and then watch the Output window to see that the web site was successfully published.

image

Installing the App in SharePoint

Now, choose “Package the app”.

image

The window will now have a lovely big RED X on it telling you that a valid HTTPS connection is required. 

image

Just change it to HTTPS and it goes away (Azure web sites are nice like that… no futzing with certificates to get this to work).

image

Finally, click FINISH.  Windows Explorer comes up to show you the .app package for the app.

image

Once you have the .app package, you can upload it to an app catalog and then install the app to your site.  However, since we are using a Developer Site collection, we can take advantage of side-loading to quickly test our app.  Go to your SharePoint developer site collection and choose “Apps in Testing”.  Notice that our app package is still there from when we pressed F5 previously. 

image

Before continuing, we need to get rid of that existing app package.  Just click the ellipsis and choose Remove.

image

Choose “new app to deploy”.  On the next screen, click the link to upload an app package.

image

Browse to the location of your newly published .app package, and leave the checkbox checked to overwrite existing files.

image

Click OK, then click Deploy.

image

Trust the app.

image

Now click on the app name to launch it.  You are redirected, and finally the app shows in the browser.  Notice the URL is no longer “localhost”, it is now your Azure web site.

image

Congratulations, you just published an app to an Azure web site. 

Bonus Feature – Editing with Visual Studio “Monaco”

Wanna see something cool?  Go to the Azure Management Portal, select your web site, and go to the Configure tab.  There’s a new option to enable editing in Visual Studio.  Turn that on.

image

Now go to the dashboard for your web site and choose “Reset your deployment credentials”.

image

Provide a username and password.

image

On the Quick Glance menu in the dashboard, you will see a new link to “Edit in Visual Studio Online”. 

image

Click that, and you are prompted for credentials.

image

Once you log in, you can now EDIT PAGES FOR YOUR WEB SITE ONLINE.

image

Click the Run button on the left of the screen.  You will see output:

image

Ctrl + Click on the link to your web site and you’ll see an unfriendly error.  This is because there is no host URL to redirect to.

image

Change the URL to “/Home/Contact” to see the page you edited.

image

Summary

If you are attending SharePoint Conference 2014, make sure to come to my session Building SharePoint Apps with Windows Azure Platform as a Service. I will be showing this and a whole lot more about building apps to take advantage of Azure PaaS.


Attaching Remote Event Receivers to Lists in the Host Web


This post shows how to attach a remote event receiver to a list in the host web for a SharePoint provider-hosted app.

Background

When we were working on the SharePoint 2013 Ignite content, apps were still very much new and very little documentation existed.  We were fighting a problem of using an app that required a feature to first be activated in SharePoint.  How could we activate the feature?  That’s when Keenan Newton wrote his blog post, Defining content in Host Web from an App for SharePoint.  The idea was so simple: use the App Installed event. 

This post follows the same pattern.  I am going to take a pretty long path to do something that can be accomplished pretty quickly because there are a few confusing elements to this pattern:

  1. Handle the app installed event.
  2. When the app installed event occurs, an event is sent to our service.  We use the client side object model to attach an event receiver to a list in the host web.
  3. When an item is added to the list, an ItemAdded event is sent to our service.

Visually, it looks like this:

image

Once you understand this pattern, you’ll use it for all sorts of things such as activating features, creating subsites, applying themes, all kinds of stuff.

If you don’t care about how it all works, just skip to the end to the section “Show Me The Code!”

Remote Event Receivers

A remote event receiver is just like a traditional event receiver in SharePoint.  Your code registers itself with SharePoint to be called whenever an event occurs, such as a list is being deleted or a list item is being added.  With full trust code solutions, you would register your code by giving SharePoint an assembly name and type name.  Server side code for apps isn’t installed on SharePoint, but rather on your own web server, so how would you register a remote endpoint?  Provide a URL to a service. 

image

If you aren’t familiar with remote event receivers, go check out the Developer training for Office, SharePoint, Project, Visio, and Access Services which includes a module on remote event receivers. 

The point that I want to highlight here is that you tell SharePoint what WCF service endpoint to call when a specific event occurs.  That means that SharePoint needs to be able to resolve the address to that endpoint.

Handle App Installed and App Uninstalling

To perform this step, I assume you already have an Office 365 Developer Site Collection.  If you don’t, you can sign up for a free 30-day trial.  Even better, as an MSDN subscriber you get an Office 365 developer tenant as part of your MSDN benefits.

In Visual Studio 2013, create a new provider-hosted app. 

image

Provide the URL for your Office 365 developer site, used for debugging, and leave the host type as Provider-hosted.

image

The next screen asks if you want to use the traditional Web Forms model for your app, or if you prefer ASP.NET MVC.  I really love ASP.NET MVC, so I’ll use that option.

image

Finally, you are asked about how your app will authenticate.  We are using Office 365, so leave the default option, “Use Windows Azure Access Control Service”.  Click Finish.

image

Once your project is created, click on the app project (not the web project) and change its Handle App Installed and Handle App Uninstalling properties to True.

image

That will create a WCF service for you in the project where you can now handle an event for when the app is installed.

image

There are two methods, ProcessEvent and ProcessOneWayEvent, and sample code exists in the ProcessEvent method to show you how to get started with a remote event receiver. 

We are going to use the ProcessEvent method to register an event receiver on a list in the host web.  We will also use the ProcessEvent method to unregister the remote event receiver when the app is uninstalled.  Clean up after yourself!

Add a breakpoint in the ProcessEvent method, but don’t hit F5 just yet.

Debugging Remote Event Receivers

Let me restate that last part in case you didn’t catch it: SharePoint needs to be able to resolve the address to your WCF endpoint.  Let me change that picture just a bit:

image

See the difference?  Here we have Office 365 calling our web service.  If we told O365 that our WCF service was available at http://localhost:44307/AppEventReceiver.svc, that server would try to make an HTTP call to localhost… the HTTP call would never leave that server.  There’s no way that SharePoint can figure out that what you really meant was to traverse your corporate firewall and get past the Windows Firewall on your laptop to call an HTTP endpoint in IIS Express.

Thankfully, someone incredibly smart on the Visual Studio team (hats off, Chaks!) figured out how to use Windows Azure Service Bus to debug remote events.  That means that SharePoint now has an endpoint that it can deliver messages to, and your app can then connect to Service Bus to receive those messages.

image

Even better, you really don’t have to know much about this to make it all work.  If you don’t have an Azure subscription already, you can sign up for a free trial.  If you have MSDN, you get an Azure subscription as part of your MSDN benefits that includes monthly credits!  If you are worried about the cost here, don’t be: as of today, you are charged $0.10 for every 100 relay hours and $0.01 for every 10,000 messages.  I seriously doubt anyone is leaving their machine debugging for that long.
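
To put those rates in perspective, here is a back-of-the-envelope calculation.  The workload numbers (40 relay hours and 5,000 messages in a month) are an assumption for illustration, not figures from the pricing page:

```csharp
// Back-of-the-envelope Service Bus relay cost at the rates quoted above:
// $0.10 per 100 relay hours, $0.01 per 10,000 messages.
// The 40 hours / 5,000 messages workload is an assumed example.
double relayHours = 40;
double messages = 5000;
double cost = (relayHours / 100) * 0.10 + (messages / 10000) * 0.01;
System.Console.WriteLine(cost);  // 0.045 -- less than a nickel for a month of debugging
```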

Once you have an Azure subscription, log into the Windows Azure Management Portal.  Go to the Service Bus extension on the left of the screen.

image

On the bottom of the screen, click Create to add a new namespace. 

image

Give it a unique name and provide a location near you.

image

Once the namespace is created, click the Connection Information button and you will see your connection string.  Copy it into your clipboard buffer.

image

Go back to Visual Studio.  In the Solution Explorer pane, click the app project (not the web project), then go to Project / AttachEventsInHostWeb Properties…

image

Go to the SharePoint tab, check the checkbox to enable debugging via Windows Azure Service Bus, and paste your connection string. 

image

Now, let’s test our app so far.  Press F5 in Visual Studio to start debugging.

image

Our breakpoint is then hit.  Let’s inspect where the WCF message was sent to.  In the Watch window in Visual Studio, add the expression System.ServiceModel.OperationContext.Current.RequestContext.RequestMessage.Headers.To

image

You can see that SharePoint Online sent the message to:

https://kirkevans.servicebus.windows.net/2228577862/1147692368/obj/0958b186-260a-4fb4-a140-7437d6f2b686/Services/AppEventReceiver.svc

This is the service bus endpoint used during debugging.  This solves our earlier problem of SharePoint not being able to send messages to https://localhost:44307.  The messages are relayed from Service Bus to our local endpoint.

Ask for Permission

The last bit of setup that we need to do is to ask for permission.  We are going to add a remote event receiver to a list in the host web, which means we need to ask for permission to manage the list.  We don’t need Full Control for this operation, we just need Manage.  Further, we only need Manage permission for a list, not the whole web, site collection, or tenant.

The list we will work with is an Announcements list, which has a template ID of 104.  Adding the BaseTemplateId=104 property in a list permission request significantly reduces the number and type of lists that a user chooses from when granting permission.

image
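
The permission request shown in the designer serializes into AppManifest.xml roughly like this — a sketch, with the Scope, Right, and BaseTemplateId values following the post:

```xml
<!-- Sketch of the serialized permission request in AppManifest.xml.
     AllowAppOnlyPolicy supports the app-only request discussed below. -->
<AppPermissionRequests AllowAppOnlyPolicy="true">
  <AppPermissionRequest Scope="http://sharepoint/content/sitecollection/web/list"
                        Right="Manage">
    <Property Name="BaseTemplateId" Value="104" />
  </AppPermissionRequest>
</AppPermissionRequests>
```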

Notice the app-only permission request?  That’s added when we handle the App Installed and App Uninstalling events, because when those happen we want to execute operations that the current user may not have permission to perform. 

Show Me The Code!

Finally, we’re here.  First, let’s define the name of the event handler and implement the required ProcessEvent method.

private const string ReceiverName = "ItemAddedEvent";
private const string ListName = "Announcements";

public SPRemoteEventResult ProcessEvent(SPRemoteEventProperties properties)
{
            
    SPRemoteEventResult result = new SPRemoteEventResult();

    switch (properties.EventType)
    {
        case SPRemoteEventType.AppInstalled:
            HandleAppInstalled(properties);
            break;
        case SPRemoteEventType.AppUninstalling:
            HandleAppUninstalling(properties);
            break;
        case SPRemoteEventType.ItemAdded:
            HandleItemAdded(properties);
            break;
    }

            
    return result;
}

Those methods (HandleAppInstalled, HandleAppUninstalling, HandleItemAdded) are methods that we will define. 

   1:  private void HandleAppInstalled(SPRemoteEventProperties properties)
   2:  {
   3:      using (ClientContext clientContext =
   4:          TokenHelper.CreateAppEventClientContext(properties, false))
   5:      {
   6:          if (clientContext != null)
   7:          {
   8:              List myList = clientContext.Web.Lists.GetByTitle(ListName);
   9:              clientContext.Load(myList, p => p.EventReceivers);
  10:              clientContext.ExecuteQuery();
  11:
  12:              bool rerExists = false;
  13:
  14:              foreach (var rer in myList.EventReceivers)
  15:              {
  16:                  if (rer.ReceiverName == ReceiverName)
  17:                  {
  18:                      rerExists = true;
  19:                      System.Diagnostics.Trace.WriteLine("Found existing ItemAdded receiver at "
  20:                          + rer.ReceiverUrl);
  21:                  }
  22:              }
  23:
  24:              if (!rerExists)
  25:              {
  26:                  EventReceiverDefinitionCreationInformation receiver =
  27:                      new EventReceiverDefinitionCreationInformation();
  28:                  receiver.EventType = EventReceiverType.ItemAdded;
  29:
  30:                  //Get the WCF URL where this message was handled
  31:                  OperationContext op = OperationContext.Current;
  32:                  Message msg = op.RequestContext.RequestMessage;
  33:
  34:                  receiver.ReceiverUrl = msg.Headers.To.ToString();
  35:
  36:                  receiver.ReceiverName = ReceiverName;
  37:                  receiver.Synchronization = EventReceiverSynchronization.Synchronous;
  38:                  myList.EventReceivers.Add(receiver);
  39:
  40:                  clientContext.ExecuteQuery();
  41:
  42:                  System.Diagnostics.Trace.WriteLine("Added ItemAdded receiver at "
  43:                          + msg.Headers.To.ToString());
  44:              }
  45:          }
  46:      }
  47:  }

Lines 8-10 just get the list and the event receivers for the list using the client side object model.  The real work is in lines 24-38 where we obtain the WCF address of where the message was originally sent to and use that URL for our new event receiver.  This is how we add a remote event receiver to a list in the host web.

We need to clean up after ourselves, otherwise we may continue to receive messages after someone has uninstalled the app.

   1:  private void HandleAppUninstalling(SPRemoteEventProperties properties)
   2:  {
   3:      using (ClientContext clientContext =
   4:          TokenHelper.CreateAppEventClientContext(properties, false))
   5:      {
   6:          if (clientContext != null)
   7:          {
   8:              List myList = clientContext.Web.Lists.GetByTitle(ListName);
   9:              clientContext.Load(myList, p => p.EventReceivers);
  10:              clientContext.ExecuteQuery();
  11:
  12:              var rer = myList.EventReceivers.Where(
  13:                  e => e.ReceiverName == ReceiverName).FirstOrDefault();
  14:
  15:              try
  16:              {
  17:                  System.Diagnostics.Trace.WriteLine("Removing ItemAdded receiver at "
  18:                          + rer.ReceiverUrl);
  19:
  20:                  //This will fail when deploying via F5, but works
  21:                  //when deployed to production
  22:                  rer.DeleteObject();
  23:                  clientContext.ExecuteQuery();
  24:
  25:              }
  26:              catch (Exception oops)
  27:              {
  28:                  System.Diagnostics.Trace.WriteLine(oops.Message);
  29:              }
  30:
  31:          }
  32:      }
  33:  }

Now let’s handle the ItemAdded event.

   1:  private void HandleItemAdded(SPRemoteEventProperties properties)
   2:  {
   3:      using (ClientContext clientContext =
   4:          TokenHelper.CreateRemoteEventReceiverClientContext(properties))
   5:      {
   6:          if (clientContext != null)
   7:          {
   8:              try
   9:              {
  10:                  List photos = clientContext.Web.Lists.GetById(
  11:                      properties.ItemEventProperties.ListId);
  12:                  ListItem item = photos.GetItemById(
  13:                      properties.ItemEventProperties.ListItemId);
  14:                  clientContext.Load(item);
  15:                  clientContext.ExecuteQuery();
  16:
  17:                  item["Title"] += "\nUpdated by RER " +
  18:                      System.DateTime.Now.ToLongTimeString();
  19:                  item.Update();
  20:                  clientContext.ExecuteQuery();
  21:              }
  22:              catch (Exception oops)
  23:              {
  24:                  System.Diagnostics.Trace.WriteLine(oops.Message);
  25:              }
  26:          }
  27:
  28:      }
  29:
  30:  }

I need to point out line 4.  TokenHelper has two different methods for creating a client context for an event.  The first is CreateAppEventClientContext, which is used for app events such as AppInstalled or AppUninstalling.  The second is CreateRemoteEventReceiverClientContext, which is used for all other events.  This has tripped me up on more than one occasion; make sure to use the CreateRemoteEventReceiverClientContext method for handling item events.

That’s really all there is to it… we use the AppInstalled event to register an event on a list in the host web, and the same WCF service handles the event.  These operations require Manage permission on the object where the event is being added.

Testing it Out

We’ve gone through the steps of creating the app and adding the service bus connection string, let’s see the code work!  Add breakpoints to each of your private methods in the WCF service and press F5 to see it work.

We are prompted to trust the app.  Notice that only the Announcements lists in the host web show in the drop-down.

image

Click Trust It.  A short time later, the breakpoint in the HandleAppInstalled method fires.  We continue debugging, and then O365 prompts us to log in. 

Your app’s main entry point is then shown.

image

Without closing the browser (which would stop your debugging session), go back to your SharePoint site.  Go to the Announcements list and add a new announcement.

image

W00t!  Our breakpoint for the ItemAdded event is then hit!

image

If you want to inspect the properties of the remote event receiver that was attached, you can use Chris O’Brien’s scripts from his post, Add/delete and list Remote Event Receivers with PowerShell/CSOM:

image 

Debugging and the Handle App Uninstalling Event

Recall that we used the App Installed event to register a remote event receiver on a list in the host web.  We also want to remove the remote event receiver from the list when the app is uninstalled.  If we try to use the AppUninstalling event and unregister the event using DeleteObject(), it doesn’t work.  You will consistently receive an error saying you don’t have permissions.  This only happens when side-loading the app, which is what happens when you use F5 to deploy the solution with Visual Studio. 

Unfortunately, that means that the receivers that are registered for the list hang around.  The only way to get rid of them is to delete the list.  Again, this only occurs when side-loading the app; it doesn’t happen when the app is deployed normally.

To see the App Uninstalling event work, we are going to need to deploy our app.

Deploy to Azure and App Catalog

In my previous post, Creating a SharePoint 2013 App With Azure Web Sites, I showed how to create an Azure web site, go to AppRegNew.aspx to create a client ID, and a client secret.  I then showed how to publish the app to an Azure web site, and package the app to generate the .app package.  I did the same here, deploying the web application to an Azure web site called “rerdemo”. 

image

Instead of copying the .app package to a Developer Site Collection, we are instead going to copy the .app package to our App Catalog for our tenant.  Just go to the Apps for SharePoint library and upload the .app package.

image

Now go to a SharePoint site that you want to deploy the app to.  Make sure to create an Announcements list.  Our app could have done this in the App Installed event, but c’mon, this post is long enough as it is.  I’ll leave that as an exercise to the reader.
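
For the curious, creating the Announcements list from the App Installed event is only a few lines of CSOM.  A sketch — this is not part of the sample as published, and it assumes the clientContext created in HandleAppInstalled via TokenHelper.CreateAppEventClientContext:

```csharp
//Sketch: create the Announcements list during HandleAppInstalled instead of by hand.
//Assumes clientContext from TokenHelper.CreateAppEventClientContext.
ListCreationInformation creation = new ListCreationInformation
{
    Title = ListName,                                   //"Announcements"
    TemplateType = (int)ListTemplateType.Announcements  //template ID 104
};
List announcements = clientContext.Web.Lists.Add(creation);
clientContext.ExecuteQuery();
```

You would guard this with a check that the list doesn’t already exist, just as the event receiver registration checks for an existing receiver.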

image

Before we add the app to the site, let’s see something incredibly cool.  Go to the Azure web site in Visual Studio, right-click and choose Settings, and turn up logging for everything.

image

Click save.

Right-click the Azure web site and choose View Streaming Logs in Output Window.  You’ll be greeted with a friendly message.

image

Now go back to your SharePoint site and choose add an app.  You should now see your app as one of the apps that can be installed.

image

Click Trust It.

image

Your app will show that it is installing.

image

Go back to Visual Studio and look at the streaming output logs.

image

OMG.  I don’t know about you, but I nearly wet myself when I saw that.  That is so unbelievably cool.  Let’s keep playing to see what other messages show up.  Go to the Announcements list and add an item.

image

Shortly after clicking OK, you’ll see the Title has changed.

image

Finally, uninstall the app to test our HandleAppUninstalling method.

image

We see a new message that the remote event receiver is being removed.

image

And we can again use Chris O’Brien’s PowerShell script to check if there are any remote event receivers still attached to the Announcements list.

image

Now, go back to Visual Studio.  Right-click on the Azure web site and choose Settings.  Go to the Logs tab and choose Download Logs.

image

A file is now in my Downloads folder.

image

I can double-click on the zip file to navigate into it.  Go to LogFiles / http / RawLogs and see the log file that is sitting there.  Double-click it.  You can see the IIS logs for your site!

image

For More Information

Developer training for Office, SharePoint, Project, Visio, and Access Services

Defining content in Host Web from an App for SharePoint

Add/delete and list Remote Event Receivers with PowerShell/CSOM

Streaming Diagnostics Trace Logging from the Azure Command Line (plus Glimpse!)

Creating a SharePoint 2013 App With Azure Web Sites

Ten Years at Microsoft


Ten years ago today I swallowed the red pill.

Prior to joining Microsoft, I was an independent contractor.  I worked for a few companies before the dot-bomb hit in 2001, and I was forced to seek shelter with a few different companies full-time.  I decided in 2004 that I would go work with Microsoft for just a few years, build up my contacts, and then continue as an independent contractor.

The Leap of Faith

I joined Microsoft as a Developer Evangelist in Microsoft’s Communications Sector vertical.  I worked with companies such as Turner Broadcasting, Disney, Verizon, and AT&T to show them how to take advantage of developer technologies.  Back then, I was showing off the soon-to-be-released Visual Studio 2005 and the fledgling Visual Studio Team System.  My primary interests were XSLT, XML web services, and using things like Web Services Enhancements (WSE… yes, they actually pronounced it “woosy” and it made me and Scott Cate laugh pretty hard). 

I did some work with a partner to build a connector for Microsoft Content Management Server to connect it to Plumtree (remember way back then?).  It was kind of ugly if you looked at how it all worked (“web services” that were actually just strings sent over HTTP, usually malformed), but it worked.  It connected MCMS to Plumtree, and it was published as a Microsoft Download.

My boss asked me to help evangelize SharePoint, saying it had grown in popularity and we were getting a lot of requests to show it off.  Maybe I could do something with SharePoint 2003.  Well, I did some work with it, wrote some code, and then replied to him, “Hell no.  This sucks.”

I did a lot of BizTalk pre-sales work, writing solutions that took advantage of all my web services geekery.  I went into competitive situations against BEA, trying to show how BizTalk was a better solution than WebLogic.  One thing that WLS (this was around the 8.1 timeframe) did better than BizTalk was that it was lightning fast because it didn’t require durable messaging while BizTalk required every message be sent to the database.  Still, I had a lot of fun competing and building solutions that did way more than WLS 8.1 could possibly do, and I loved winning. 

My Love Affair with WCF and WF

Along came a new technology originally known as WinOE, and I was hooked.  Of course, WinOE later became known as Windows Workflow Foundation.  I started evangelizing Indigo (which became Windows Communication Foundation, WCF) and Windows Workflow Foundation (abbreviated as WF… it’s not WWF because we don’t want to fight pandas or wrestlers over the use of an acronym).  I loved how deep and geeky I could get with these, and I was able to present on these at TechEd Australia and TechEd New Zealand, showing some of the incredible things you could do with these amazing technologies.  Best yet, I had two options to compete against BEA and I was loving it. 

As much as I loved talking about WCF and WF (I was good at it, too… I could dazzle an audience with WCF and WF internals and build amazing stuff), developers just didn’t get it.  They kind of saw WCF (although it complicated the hell out of what you could do with a simple ASMX service or HttpWebRequest), but they really didn’t get WF.  They didn’t get how or why to build their applications using it.  Developers were still coming off the drunken Windows Forms days, looking at this new WPF thing and wondering if any of this stuff applied to them.

SharePoint Again?

My boss told me he needed someone to help with WF stuff with SharePoint 2007 because the sales people were having problems finding someone who knew anything about it.  Remembering my previous stint with SharePoint, I replied politely, “I’ll look again, but I’m pretty sure it still sucks.”

It did.  It still sucked.

This was 2007; there was no documentation, and the APIs were littered with traps and holes that were prone to causing memory leaks and severe performance problems.  The platform was exploding, and you just kind of had to know a guy who knew a guy who figured out not to do something in a particular way.  But it had one redeeming quality… it had workflow, and I was hooked. 

I started to evangelize Visual Studio Extensions for Windows SharePoint Services (VSeWSS) and I recorded a few videos on Channel9.  I figured out how to deploy workflows with it, even to a 64-bit machine, and generally got the platform to suck a little less.  I was doing a lot of deep dive stuff on workflows and putting my esoteric knowledge on it to use.  I was getting to write a lot of code.

Fast Forward

Since 2007, I have been primarily focused on SharePoint.  Kicking and screaming along the way, I’ve been focused on SharePoint for almost 7 years.

I left DPE in 2011 to join Premier Field Engineering.  It was a fantastic group filled with deep geeky people who knew SharePoint in very deep ways.  I learned about things that I hadn’t really been exposed to while in DPE, such as Active Directory, DNS, SQL, networking, and all kinds of deep geeky stuff.  I was in my element.

Problem was, my developer skills were eroding. 

I identify myself primarily as a developer, and in PFE I was focused on infrastructure stuff.  It was nice to round out my skills, but at the cost of keeping current.  ASP.NET MVC, Web API, and a whole slew of JavaScript frameworks were suddenly the norm and here I was still looking at 14 year old ASP.NET Web Forms code in SharePoint.  The new app model was allowing me to get back into development, but I was still focused on helping customers who were using stuff we had already shipped years ago instead of doing what I really love… helping developers learn to use new technologies to solve problems.

Modern Apps

I am now an Architect in the Windows Azure Modern Apps Center of Excellence.  This job was tailor-built for me, allowing me to show developers the way to new technologies to solve new problems.  I get to learn a ton about Azure (I’ll be presenting three sessions on Azure at the SharePoint Conference this week), and I get to work with a lot of technology that I had completely ignored such as Windows 8 and Windows Phone development, iOS and Android development with Xamarin and PhoneGap, and building solutions with ASP.NET MVC and Web API that are deployed to Azure.  I am having a blast learning so many new technologies.  As I work with all of these technologies, my focus is no longer on furthering the SharePoint platform… it is now focused on helping developers understand Azure Platform as a Service.  I can do that with SharePoint apps a little bit, but there is a much bigger world out there.  SharePoint becomes just a part of the overall cloud puzzle; the real opportunity is seeing how you can build solutions with Azure that might comprise some bit of Office 365 functionality with them.

Really?  Ten Years?

Ten years.  This was supposed to be a short stint, a 3-hour tour, just a pitstop until I built my contracting business up.  The job is too good, I have had way too many awesome opportunities, and I have made too many friends to leave.  I love this company, I love the technology, and I love that it has afforded me the opportunity to transform myself several times over during my career.  I have unlocked so many personal achievements (Microsoft Certified Master, speaker at TechEd, SharePoint Conference, Build, and 18 TechReady conferences, VSLive, and a slew of other conferences), I have done so many things thanks to this amazing company.  As I move on from SharePoint towards Azure, I am thankful that I have learned so much about the platform and specifically about architectural decisions when scaling.  I have learned, and truthfully, that is what I love about this job the most… the opportunity to continually learn and grow.

I fly out to Vegas today.  I have to go pack for what is likely my last SharePoint Conference. 

Building a SharePoint App as a Timer Job


This post will show how to create an app as a timer job.

Background

One of the complicated parts of the app model today is figuring out how to do the things I used to do in full trust code.  Honestly, things look a little different, and this pattern will be useful to understand. 

As usual, if you aren’t interested in the narrative, skip down to the section, “Show Me the Code!”

I worked with a few customers who were concerned about some of their end users who kept using the new Change the Look feature of SharePoint 2013 to change the branding of their site.  They turned their site into something hideous like this.

image

This doesn’t conform to their corporate branding, so they wanted a way to go back and change this in an automated fashion.  Further, they want to do this on a daily basis to make sure the site is always changed back.  They didn’t want to remove permissions for the user to do this.  I’ll admit, there are other ways to do this, but it helps me to illustrate using a timer job to achieve the same thing.

Create the Console Application

In Visual Studio 2013, create a new Console Application.  Here’s one of my favorite parts… go to the NuGet Package Manager and search for “sharepoint app”.  You will see the App for SharePoint Web Toolkit. 

image

Add this NuGet package to your Console app and you will get the TokenHelper and SharePointContext code to work with SharePoint apps.

Creating the AppPrincipal

The first step to understand is how to create the app principal.  The app principal is an actual principal in SharePoint 2013 for the app that can be granted permissions.  When you create an app in Visual Studio 2013 and press F5, Visual Studio is nice enough to take care of registering the app principal for you behind the scenes.  It does this when using the Visual Studio template, but we’re not using their template here… we are using a Console Application.  We need to register the app principal first for our Console Application to be able to call SharePoint.

To register the app principal, we can use a page in SharePoint 2013 called “_layouts/AppRegNew.aspx”.

image

This page is used to create the client ID and client secret for the app.  Give it a name and click Generate, then click Create.

image

Note that I removed the actual string for the client secret for security purposes (hey, it’s a secret!)

The result is an app principal.

image

No, that is not the real client secret… I changed it for security purposes.   Just use the string that the page generates and don’t change it.

Giving the App Principal Permissions

Now we need to grant permissions to the app principal.  The easiest way to do this is to create a new provider-hosted app in SharePoint, give it the permissions that your app needs, then go to the appmanifest.xml and copy the AppPermissionRequests element. 

<AppPermissionRequests AllowAppOnlyPolicy="true">
  <AppPermissionRequest Scope="http://sharepoint/content/tenant" Right="Manage" />
</AppPermissionRequests>

The permissions that we will grant will be Tenant/Manage permission because our Console Application will go to multiple webs that are located in multiple site collections and change the branding.  To have permission to access multiple site collections, I need to request Tenant permission.  To change the branding for a site, I need Manage… hence Tenant/Manage.

You then go to a second page in SharePoint called “_layouts/AppInv.aspx”. 

image

Look up the app based on the Client ID that you just generated and click Lookup; it will find the app principal.  Then paste the AppPermissionRequests XML into the Permissions text box and click Create.

image

Once you click Create, the result is the Trust It dialog.

image

Click Trust It (of course you trust it). 

App Only Permission

I previously wrote a blog post, SharePoint 2013 App Only Policy Made Easy, that talks about the app only policy. If you aren’t familiar with app only, you need to go read that post.  Our timer job will not have an interactive user, so we need to use the app only policy.  The relevant code for this is the TokenHelper.GetAppOnlyAccessToken.

//Get the realm for the URL
string realm = TokenHelper.GetRealmFromTargetUrl(siteUri);

//Get the access token for the URL.
//   Requires this app to be registered with the tenant
string accessToken = TokenHelper.GetAppOnlyAccessToken(
    TokenHelper.SharePointPrincipal, 
    siteUri.Authority, realm).AccessToken;

Once we have the access token, we can now create a ClientContext using that access token.

using (var clientContext = 
    TokenHelper.GetClientContextWithAccessToken(
        siteUri.ToString(), accessToken))

We now have a client context to use with the rest of our CSOM operation calls.

Update the App.config

The app.config will be used to store the URLs for various webs that need to have their branding updated via a timer job.  The app.config also stores the Client ID and Client Secret for our app.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="Sites" type="System.Configuration.NameValueSectionHandler"/>
  </configSections>
  <appSettings>
    <add key="ClientId" value="0c5579cd-c3c7-458c-91e4-8a557c33fc50"/>
    <add key="ClientSecret" value="925gRemovedForSecurityReasons="/>
  </appSettings>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5"/>
  </startup>
  <Sites>
    <add key="site2" value="https://kirke.sharepoint.com/sites/dev"/>
    <add key="site1" value="https://kirke.sharepoint.com/sites/developer"/>
    <add key="site3" value="https://kirke.sharepoint.com/sites/dev2"/>
  </Sites>
</configuration>

Our Console Application will read the Sites section, pull the URL for each site, and call CSOM on it to update the branding.

Show Me the Code!

using Microsoft.SharePoint.Client;
using System;
using System.Collections;
using System.Collections.Generic;
using System.Collections.Specialized;
using System.Configuration;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace TimerJobAsAnApp
{
    class Program
    {
        /// <summary>
        /// To register the app:
        /// 1) Go to appregnew.aspx to create the client ID and client secret
        /// 2) Copy the client ID and client secret to app.config
        /// 3) Go to appinv.aspx to lookup by client ID and add permission XML below
        /// </summary>
        /// <param name="args"></param>
        /*
         <AppPermissionRequests AllowAppOnlyPolicy="true">
           <AppPermissionRequest Scope="http://sharepoint/content/tenant" Right="Manage" />
         </AppPermissionRequests>
        */
        static void Main(string[] args)
        {
            var config = (NameValueCollection)ConfigurationManager.GetSection("Sites");

            //Poor man's timer: apply the theme to every configured site, then sleep and repeat
            do
            {
                foreach (var key in config.Keys)
                {
                    Uri siteUri = new Uri(config.GetValues(key as string)[0]);

                    //Get the realm for the URL
                    string realm = TokenHelper.GetRealmFromTargetUrl(siteUri);

                    //Get the access token for the URL.
                    //   Requires this app to be registered with the tenant
                    string accessToken = TokenHelper.GetAppOnlyAccessToken(
                        TokenHelper.SharePointPrincipal,
                        siteUri.Authority, realm).AccessToken;

                    //Get client context with access token
                    using (var clientContext =
                        TokenHelper.GetClientContextWithAccessToken(
                            siteUri.ToString(), accessToken))
                    {
                        ApplyTheme(clientContext);
                    }
                }
                System.Threading.Thread.Sleep(10000);
            }
            while (true);
        }

        /// <summary>
        /// Applies a red and black theme with a Georgia font to the Web
        /// </summary>
        /// <param name="clientContext"></param>
        private static void ApplyTheme(ClientContext clientContext)
        {
            Web currentWeb = clientContext.Web;
            clientContext.Load(currentWeb);
            clientContext.ExecuteQuery();

            //Apply RED theme with Georgia font
            currentWeb.ApplyTheme(
                URLCombine(
                    currentWeb.ServerRelativeUrl,
                    "/_catalogs/theme/15/palette022.spcolor"),
                URLCombine(
                    currentWeb.ServerRelativeUrl,
                    "/_catalogs/theme/15/fontscheme002.spfont"),
                null, false);
            clientContext.ExecuteQuery();
        }

        private static string URLCombine(string baseUrl, string relativeUrl)
        {
            if (baseUrl.Length == 0)
                return relativeUrl;
            if (relativeUrl.Length == 0)
                return baseUrl;
            return string.Format("{0}/{1}",
                baseUrl.TrimEnd(new char[] { '/', '\\' }),
                relativeUrl.TrimStart(new char[] { '/', '\\' }));
        }
    }
}

The Result

The sites in the app.config used to have that hideous theme.  Once the code runs, the sites in the app.config will have the colors of my favorite college football team (Go Georgia Bulldogs!), even using the “Georgia” font.

image

Of course, you can use whatever logic you want; the logic we use here sets branding based on pre-configured URLs for the sites.  You can also use whatever you want to schedule the timer job.  I used a poor man’s timer, a while loop with Thread.Sleep, but you could use the Windows Task Scheduler, a cron job, or even Azure Web Jobs.
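For instance, if you moved the scheduling out to the Windows Task Scheduler (and removed the internal sleep loop), the registration might look something like this sketch; the task name, schedule, and path below are hypothetical, not part of the sample:

```
REM Hypothetical: run the branding console app once a day at 2:00 AM
schtasks /Create /SC DAILY /ST 02:00 /TN "ApplyBrandingJob" /TR "C:\Jobs\TimerJobAsAnApp.exe"
```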

For More Information

SharePoint 2013 App Only Policy Made Easy

Building SharePoint 2013 Apps with Windows Azure PaaS


This post will show how to create a SharePoint 2013 app leveraging Windows Azure PaaS and the Windows Azure SDK 2.2. 

You can watch the video online of the presentation that I did about this solution at SharePoint Conference 2014, Building SharePoint Apps with Windows Azure Platform as a Service, at http://channel9.msdn.com/Events/SharePoint-Conference/2014/SPC385.

Background

I recently joined the Windows Azure Modern Apps Center of Excellence at Microsoft, and have been focused on Azure.  Since I have done so much work with SharePoint apps, what better way to show readers of my blog how to get started with Windows Azure Platform as a Service than to just create a provider-hosted app using the Windows Azure SDK 2.2. 

Credit to a colleague and friend of mine, Jason Vallery (an awesome Premier Field Engineer at Microsoft), who came up with the overall design of this app.  I took… umm… “inspiration” from his work, so I gotta give Jason credit and thanks.

This post will just show the highlights.  The code is attached to this post. 

The solution will use a remote event receiver to attach a remote event to a list named Photos in the host web.

image

I’ve already covered this in my post, Attaching Remote Event Receivers to Lists in the Host Web, so I’m not going into much detail here.

When the event receiver fires, we are going to capture the ListID, ListItemID, and WebURL from the ItemAdded event and store it in an Azure Table.

image

When the ItemAdded event receiver fires, we will use an Azure Storage Queue to send a message that will be picked up by a Console application.  The Console application will reach into the Photos list and download the item, make changes to the item, then store the item in Azure Blob Storage.

image

Get Office 365 and a Windows Azure Subscription

If you already have both an Office 365 subscription and a Windows Azure subscription, skip this section.

If you don’t have a Windows Azure subscription, there are multiple ways to obtain one to get started.  As an MSDN subscriber, you get up to $150 (US) credit for an Azure subscription every month.  Yes, you heard me… if you are an MSDN subscriber and you are not leveraging your Windows Azure benefit for MSDN subscribers, you are throwing $150 out the window.  Per month.

If you’re not an MSDN subscriber, you can also create a free trial subscription.  That will give you $200 credit for an Azure subscription to gain full access to Windows Azure for 30 days.

Some MSDN subscription levels also have a benefit for an Office 365 developer subscription for one year.  Check your MSDN benefits to see if you qualify.  If not, you can also create a free trial for Office 365.

Install the SDK

The first thing you need to do is to obtain the Windows Azure SDK.  At the time of this writing, the latest version is 2.2.  The SDK is available at http://www.windowsazure.com/en-us/develop/net, which also includes tutorials and documentation.  Look how easy they made it: just click the link to install the SDK (link is on the front page) and choose between Visual Studio 2012 or Visual Studio 2013.

image

Once installed, Visual Studio 2013 will have a new node in Server Explorer.

image

Right-click the Windows Azure node and choose Connect to Windows Azure.

image

You are prompted to sign into your subscription.

image

That’s it.

Create an Azure Web Site

I showed how to do this in a previous post, so I’m not going into much detail here.  Create a SharePoint 2013 app and publish it to an Azure web site.  Creating an Azure web site is crazy easy… just right-click the Web Sites node and choose Add New Site.

image

Give the new site a name and choose a location closest to you.  We’re not using a SQL Azure database for this post, we’ll look at that in a future post.

image

Next, right-click the new web site and choose Settings.  Turn up logging and click save. 

image

Oh yeah… that just happened.  Control web site logging from Visual Studio.  It still makes me tear up with joy each time I use it.

The rest of the details are covered in my post, Creating a SharePoint 2013 App With Azure Web Sites, which shows how to use Office 365 to register the app principal and obtain the client ID and client secret for your app. 

Create a Storage Account

Go to the Windows Azure Management Portal and go to the Storage add-in.

image

At the bottom-left of the screen, choose the big “New” button.  Using the Quick Create option (that’s the only option currently), give your new account a name and location then click Create Storage Account.

image

Go back to Visual Studio 2013 and refresh the Storage node, you’ll see three new nodes:  Blobs, Queues, and Tables.

image

They’re empty… let’s do something about that.

References and Connection Strings

We’ll use the ItemAdded event receiver to write to Windows Azure tables.  To set this up, add a new class library project to your solution that will serve as a data access layer.  Next, right-click the new class library project and choose Manage NuGet Packages, then add the package for Windows Azure Storage.

image

Add the NuGet package to the web application project as well.

We need to update the configuration files with the connection string for your new storage account.  Right-click the new storage account in the Server Explorer pane and choose Properties.  In the Properties pane, click the ellipsis.

image

image

Copy the connection string and replace the text “YOUR_AZURE_STORAGE_CONNECTION_STRING” with your connection string value.

image

There is another project, PaaSDemoJobs, that contains an app.config file.  You need to replace the values there as well for the keys paasdemo, AzureJobsData, and AzureJobsRuntime.

image

The Data Library

The data library is really just a class library that provides some methods to make it easy to work with Azure tables, queues, and blobs.  The first class to point out is PhotoLocation, which derives from the TableEntity class.  This class can be used to serialize data into and out of an Azure table.

public class PhotoLocation : TableEntity
{
    public PhotoLocation() { }

    public PhotoLocation(string realm)
    {
        this.Realm = realm;
        this.PartitionKey = realm;
        this.RowKey = Guid.NewGuid().ToString();
    }

    public string WebURL { get; set; }
    public int ListItemId { get; set; }
    public Guid ListId { get; set; }
    public string Realm { get; set; }
    public string BlobUrl { get; set; }
}

The second class is StorageRepository, which provides the methods to read and write to table storage.  Our private helper method obtains a reference to the table, creating the table if it doesn’t already exist.

CloudStorageAccount _storageAccount;

public StorageRepository(string connectionString)
{
    _storageAccount = CloudStorageAccount.Parse(connectionString);
}

/// <summary>
/// Gets or creates the table reference
/// </summary>
/// <returns></returns>
private CloudTable GetLocationsTable()
{
    CloudTableClient client = _storageAccount.CreateCloudTableClient();
    CloudTable table = client.GetTableReference("locations");
    table.CreateIfNotExists();
    return table;
}

Next, we’ll add methods to save a single PhotoLocation and to retrieve a single PhotoLocation.  Inserting is easy: use the InsertOrReplace method of the TableOperation class to create a TableOperation, then tell the table to execute the operation.  Querying is just as easy: create a new TableQuery and add a Where clause using a filter condition. 

public void SavePhotoLocation(PhotoLocation location)
{
    var table = GetLocationsTable();
    TableOperation operation = TableOperation.InsertOrReplace(location);
    TableResult result = table.Execute(operation);
}

public PhotoLocation GetLocationByID(string rowkey)
{
    var table = GetLocationsTable();
    TableQuery<PhotoLocation> query = new TableQuery<PhotoLocation>().Where(
        TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.Equal, rowkey));
    var results = table.ExecuteQuery(query);
    return results.FirstOrDefault();
}

I love it… expressive code that is clear and shows exactly what you want to do.  I think they did a great job designing the classes for the SDK.

If we want to return all records in the table based on a different value, such as any records where the Realm column matches the Realm ID of the current O365 tenancy, we can use the GenerateFilterCondition method again, just using the name of the column we want to use as a filter.

/// <summary>
/// Get all rows for an O365 tenant based on realm
/// </summary>
/// <param name="realm"></param>
/// <returns></returns>
public List<PhotoLocation> GetLocationsForTenant(string realm)
{
    List<PhotoLocation> locations = new List<PhotoLocation>();

    var table = GetLocationsTable();
    TableQuery<PhotoLocation> query = new TableQuery<PhotoLocation>().Where(
        TableQuery.GenerateFilterCondition("Realm", QueryComparisons.Equal, realm));

    var results = table.ExecuteQuery(query);

    if (results.Count() > 0)
    {
        locations = results.ToList();
    }
    return locations;
}

Our remote event receiver will then use our StorageRepository class to save a new record every time the ItemAdded remote event receiver fires.

private void HandleItemAdded(SPRemoteEventProperties properties)
{
    System.Diagnostics.Trace.WriteLine("Handle Item Added");
    string realm = TokenHelper.GetRealmFromTargetUrl(new Uri(properties.ItemEventProperties.WebUrl));
    var p = properties.ItemEventProperties;

    //Add code to save to storage here
    PhotoLocation location = new PhotoLocation(realm)
    {
        ListId = p.ListId,
        ListItemId = p.ListItemId,
        WebURL = p.WebUrl
    };

    string connectionString = CloudConfigurationManager.GetSetting("paasdemo");
    StorageRepository repo = new StorageRepository(connectionString);

    repo.SavePhotoLocation(location);
}

We can then use the same library from our HomeController class, using the controller to query the data to be displayed on the page.

public ActionResult Index()
{
    System.Diagnostics.Trace.WriteLine("In the Index action");
    var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
    string realm = TokenHelper.GetRealmFromTargetUrl(spContext.SPHostUrl);
    ViewBag.HostUrl = spContext.SPHostUrl;

    //Add code for viewing storage here
    string connectionString = CloudConfigurationManager.GetSetting("paasdemo");
    StorageRepository repo = new StorageRepository(connectionString);
    var model = repo.GetLocationsForTenant(realm);
    return View(model);
}

The controller is returning the model, so we need to update the View.  I’ve already done this in the sample; I’m just showing you how I did it.  I deleted the Index.cshtml view that Visual Studio generates, then right-clicked the Home folder and chose Add / View.  Give it the name Index, change the template to List, and change the Model class to the PhotoLocation class.

image

I then updated the link to see Details for the item.

<td>
    @Html.ActionLink("Details", "Details", new { id = item.RowKey })
</td>

When clicked, we are routed to the Details action.  That action’s code is:

public ActionResult Details(string id)
{
    var spContext = SharePointContextProvider.Current.GetSharePointContext(HttpContext);
    string realm = TokenHelper.GetRealmFromTargetUrl(spContext.SPHostUrl);
    ViewBag.HostUrl = spContext.SPHostUrl;

    //Add code for viewing storage here
    string connectionString = CloudConfigurationManager.GetSetting("paasdemo");
    StorageRepository repo = new StorageRepository(connectionString);

    var model = repo.GetLocationByID(id);

    return View(model);
}

Again, we right-click on the Home folder and generate a View to show the model data that is returned.

image

We update its view to show the picture from blob storage.

image

To test our work so far, we publish the web site to Azure.  This step includes going to AppRegNew.aspx to obtain a client ID and client secret for the app, downloading the publishing profile for the Azure web site, and publishing the web site to Azure.  More details can be found in my post, Creating a SharePoint 2013 App With Azure Web Sites, here are the highlights.

Download the publishing profile.

image

Provide the client ID and client secret obtained from SharePoint using the AppRegNew.aspx page.

image

Publish to Azure.

image

Package the app, changing the URL to https.

image

Copy the app package to your developer site and deploy it.

image

In the Trust It screen, give it permissions to a pre-existing Document Library named Photos.

image

Click the link to the deployed app, and you see the app in the Azure web site.

image

Click the Back to Site link to go back to SharePoint.  Upload a new picture to the library.

image

Click the link to the app, and see that the data was written to Azure table storage.

image

Go into Visual Studio and select the Storage node in Server Explorer.  There is a new node “locations”, which is the table that our code creates.

image

Double-click the table, and we can see the data in table storage.

image

Queues and Blobs

Our solution simply writes data to table storage so far.  When an item is added, we want to queue up a new message.  Add methods to the StorageRepository class to queue a message and to write to blob storage.

public void SendPhotoLocationQueueMessage(PhotoLocation p)
{
    CloudQueueClient client = _storageAccount.CreateCloudQueueClient();
    CloudQueue queue = client.GetQueueReference("jobs");
    queue.CreateIfNotExists();

    string message = "\"" + p.PartitionKey + ":" + p.RowKey + "\"";
    queue.AddMessage(new CloudQueueMessage(message));
}

public void SaveImageBlob(PhotoLocation p, Stream s)
{
    CloudBlobClient blobClient = _storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference("images");
    container.CreateIfNotExists();

    //Set public access permissions on the container
    var permissions = container.GetPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Container;
    container.SetPermissions(permissions);

    //Upload the blob
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(p.RowKey + ".png");
    blockBlob.UploadFromStream(s);

    //Set content-type header
    blockBlob.Properties.ContentType = "image/png";
    blockBlob.SetProperties();

    //Update the table storage object
    p.BlobUrl = blockBlob.Uri.ToString();
    SavePhotoLocation(p);
}

Update the HandleItemAdded method in the remote event receiver to queue a new message when an item is added.  The binaries have changed, so publish again to the Azure web site.

image

Go back to SharePoint, add another picture to the Photos library.  Now go to Visual Studio, open the Queues node, and we see a new queue named “jobs”. 

image

Double-click the jobs queue, and see a new message has been queued.

image

Web Jobs

Instead of creating a cloud service to process the queue message, we will use a web job.  Scott Hanselman does a fantastic job of introducing web jobs in his post Introducing Windows Azure WebJobs.  We create a Console application (yes, a CONSOLE application, like we all learned to code .NET with!). 

In order for this to work, you will have to use an app-only context because there will not be a current user context.  This means the app must request app-only permissions.  For more information, see my post Building a SharePoint App as a Timer Job.  

Update the app.config for the PaaSDemoJob project with your client ID and client secret, and make sure you updated the AzureJobsData, AzureJobsRuntime, and paasdemo settings with your storage account connection string as instructed previously.

image

To use web jobs, we need to add a NuGet package.  Web Jobs are still in preview, so you’ll need to include pre-release in the NuGet manager.

image

Pretend that we have an existing Console application that runs using the Windows Task Scheduler, and we’d like to move this to the cloud as well.  We can make a few modifications to our code.  Update the Main entry point for the console application.

static void Main(string[] args)
{
    //Add JobHost code here
    JobHost host = new JobHost();
    host.RunAndBlock();
}

Next, update the parameter for the ProcessQueueMessage method with an attribute from the Web Jobs SDK to process queue messages from a queue named “jobs”.

image

Our code is going to reach back into SharePoint using the Client Side Object Model (CSOM).  We need all the TokenHelper stuff to make that happen.  Go back to NuGet Package Manager and add the App for SharePoint Web Toolkit.

image

This will add the necessary references and will make sure the references are marked as CopyLocal=true, because we will push the entire console application to Azure in just a bit.

The remaining code will reach back into SharePoint using CSOM, get the image from SharePoint, and overlay some text “#SPC385” as well as the SharePoint Conference logo (obtained as a resource file).

Add a reference to our PaaSDemoData library, which will also require adding the NuGet package to Windows Azure Storage.

Run the Console application locally (right-click the console application project and choose Debug / Start New Instance).  You will see that the Web Jobs SDK picks up the queue message and processes it.

image

Go to Visual Studio 2013, and we see a new blob container named “images”.

image

Double-click the images container, and we see the new image!

image

Let’s go back to the app in SharePoint.  We see the image now appears in the Index view.

image

Click Details, and we see the image was processed, overlaying the hashtag “#SPC385” and the SharePoint Conference logo.

image

Go to the bin directory for the PaaSDemoJob console application and zip its contents.

image

Go to the Azure Management Portal, go to your web site, and go to the Web Jobs tab.

image

Upload the zip file.

image

The web job will use the web.config settings from our web application to connect to storage, so it’s important that you followed directions previously and updated the AzureJobsRuntime and AzureJobsData connection strings with the connection string for your storage account.

Go to the web site dashboard and reset your deployment credentials.

image

Now go back to the Web Jobs tab and click the link next to the job. 

image

Enter your credentials, and you can now see the status of any jobs.

image

Add a photo to the SharePoint document library, and you’ll then see the job picks it up and processes it!

image

image

As a parting gift to the attendees at the session at SharePoint Conference, here is the group photo I took at the beginning of the session as part of the demo.

You can watch the video online of the presentation from SharePoint Conference 2014, Building SharePoint Apps with Windows Azure Platform as a Service, at http://channel9.msdn.com/Events/SharePoint-Conference/2014/SPC385.

For More Information

MSDN subscription benefits

Office 365 Free Trial

Windows Azure Free Trial

Creating a SharePoint 2013 App With Azure Web Sites

Attaching Remote Event Receivers to Lists in the Host Web

Building a SharePoint App as a Timer Job

Introducing Windows Azure WebJobs

Git for Team Foundation Developers


This post will introduce you to using Git using Visual Studio Online.  This is the first post in a series.

This series is targeted towards easing the Git learning curve for developers familiar with Team Foundation Version Control.

Background

I joined the Azure Modern Apps Center of Excellence in Microsoft around 6 months ago and have been working on a project that uses Git for source control with Visual Studio Online.  It has been frustrating, to say the least.  I have been working with TFS source control since 2004, and SourceSafe prior to that; Git was like a foreign language to me. 

To work with the source code, my boss tells me to go clone the repo and create a published branch.  Finally, 6 months later, I can tell you what that means.  No, it did not take me 6 solid months to learn this, but it did take me 6 months to find the time to read, watch videos, and keep trying things for myself until the lightbulb went off. 

This post is to help reduce that time to a single blog post.

You can still use TFS source control.  When you create a project, you have a choice between TFS or Git.  I am going to challenge you to create a project using Git and become familiar with it.

TFS Source Control

I am used to using TFS for source control.  In that model, every developer works against a single central repository.

image

TFS source control is a centralized version control system.  When TFS first came out, I remember working with customers whose geo-dispersed development teams were concerned about performance because it required a connection to the central repository, so we added the TFS proxy to address those issues.  If I didn’t have access to TFS (I was at a conference with crappy wireless or on a plane), I was kind of out of luck for checking in changes. 

Using TFS, I would create a new team project, which creates a repository for my source code.  TFS manages the deltas between files as part of a change set.  Those deltas are managed on TFS itself.  Each time I modify the source code and check that into TFS, the changeset is stored within TFS, allowing me to roll back to a previous version.

If I were going to work on a new version release using TFS source control, I would create a branch.  The new branch would be mapped to a completely new directory on my hard drive, and all of the source code in that branch would be downloaded to that new directory.  I would have an entire new copy of the source code on my hard drive. 

At the same time that I was working on a new feature, bug fixes occur in the main trunk for the source code, so I have to eventually merge those bug fixes into my branch.  I check in code very frequently, often with comments as simple as “added UI for feature”, resulting in many individual changesets for the feature I am working on. 

Introducing Git

Git is a distributed version control system, which provides several layers of abstraction.  The first layer of abstraction is that every developer has both the central repository as well as a local repository. 

image

This allows you to make lots and lots of changes locally, then finally take those changes as a feature and add them to the central repository.  It took me a LONG time to come around to this conclusion: it’s a better model.  I can make changes to the local repository and, when the feature is complete, I can push the changes from the local repository to the central repository.  This allows me to work offline and with distributed teams. 

Taking that a step further, you only work with a single repository at a time, and you can push your changes from the local repository to a remote repository somewhere else.  I loved how Jessica Kerr (@jessitron) explains this in her video (at the end of this post): think of this as putting a bunch of stuff on a pallet (the wooden structure you’d pile stuff on to be lifted by a forklift).  Once you are done loading the pallet, you put it in a truck to be driven somewhere else.

Another abstraction layer is the separation of the working directory, staging area (also called the index), and the repository. 

image

Checking in code is a two-stage process: you first stage the changes that you want and then you commit the changes.  There is a tremendous amount of power in this abstraction. 

These two concepts (the local repository and staged commits) were the things that took me so long to grok.  The rest of this post will walk through a few simple examples.

The .git Folder

Let’s start with a very simple example.  I am going to open Git Bash, which opens a command window. 

image

The very first thing you should do is know where to find help.  Type “git help”.

image

The next thing you will want to know is how to clear the screen.  Type “clear”.

image

To interact with Git using Git Bash, you will use Unix syntax commands.  You can use “cd /c/” to change directory to the C: drive, and then use “ls” to list all of the files and folders.

image

Let’s make a directory, “GitDemo”.  Use the command “mkdir GitDemo”, then “cd GitDemo” to navigate to that directory.

image

Now that we have some basic directory navigation out of the way, let’s use Git.  Let’s see if our new folder is a Git repository using the command “git status”.

image

This is not a Git repository.  We can tell that just by looking at the directory structure: there is no .git folder.

image

Let’s turn this folder into a Git repository.  We use the command “git init”. 

image

That creates an empty Git repository, which is the .git folder. 

image

Open that guy up, and you can now see what just happened.  A few folders are created for us, namely objects and refs, as well as a config file.

image

Now we check to see if this is a repository again using “git status”.

image

We initialized the repository, which creates the .git folder and its contents.  Let’s delete the .git folder.

image

Now try “git status” again and see what happens.

image

We see that this is not a repository because we deleted the .git folder and its contents.  We can create a new repository using “git init” again.

image

Lesson 1: the .git folder is the “database” for your repository; everything is contained within that folder.
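The whole session above can be replayed as a script.  This is a sketch you can run in Git Bash (or any POSIX shell with Git installed); it uses a throwaway temp directory instead of C:\GitDemo so it won’t touch anything you care about.

```shell
# Create a folder, see that it is not a repository, make it one,
# destroy the .git folder, and recreate the repository.
demo="$(mktemp -d)/GitDemo"
mkdir -p "$demo" && cd "$demo"

git status || true   # fails: "fatal: not a git repository"
git init             # creates the empty .git folder
ls -a                # .git is now present

rm -rf .git          # deleting .git destroys the repository...
git status || true   # ...so this fails again
git init             # ...and this recreates an empty repository
```

The `|| true` is only there so the expected failures don’t stop the script; interactively you would just read the error message.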

Cloning a Repository

You would typically start by creating a new team project in Visual Studio Online, choosing Git as the source control provider. 

image

This creates that central “origin” repository, but you don’t yet have a local repository.  Once the project is created, click the link to open the project in Visual Studio 2013, and you’ll see a message in the Team Explorer node telling you to clone the repository.

image

Click that link, and you’ll see something like this:

image

That link tells you to create a local copy of the central repository.  I prefer to change the destination path to something other than the default so I can find it easier.  Click “clone”.  Go to that directory, and notice that you have a .git subdirectory.

image

Go back to Git Bash and navigate to the directory using “cd /c/gitlearning”.  Now try “git status” again.

image

Lesson 2: Cloning the repository does nothing more than download the files and the .git subdirectory.
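The same clone can be sketched from the command line.  Since I can’t embed credentials for Visual Studio Online here, this sketch uses a local bare repository as a stand-in for the remote; with a real project you would paste the clone URL from your team project page instead.

```shell
# A local bare repository stands in for the Visual Studio Online "origin".
work="$(mktemp -d)"; cd "$work"
git init --bare origin-repo.git

# Clone it, just as Visual Studio's "Clone" link does:
git clone origin-repo.git gitlearning
cd gitlearning
ls -a        # the working directory plus the .git subdirectory
git status   # a valid (if empty) repository
```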

Checking in Source Code

I’m going to do something a little bizarre here, so bear with me.  Let’s go to that folder that Visual Studio just created and add a file.  I am going to use “echo” to echo some text to output, and redirect the output to a file called “hello.txt”.

image

That added a file to the directory.  We talked before about the concept of staging changes first.  Let me demonstrate what I mean: we’ll use “git status” to see the status of the repository.

image

This is an “untracked file”, meaning it exists in the working directory but has not been added to the index yet.  To do this, let’s use “git add hello.txt” to add the file to the index, then use “git status” to check the status again.

image

We can see now that there is a change to be committed.  Let’s go inspect the folder structure in the .git subdirectory to see what happened there.

image

This is a SHA-1 hash of the change information.  Let’s commit the change using the command git commit hello.txt -m "My initial check-in".

image

When we used “git commit hello.txt”, we provided the check-in comment “My initial check-in”.  Now let’s inspect the folder structure again.

image

A few new folders and files.  This is how Git is tracking the changes, using the SHA-1 hash of the change.

Let’s go edit our hello.txt file and change the contents.

image

Save, then use “git status” again.

image

Git knows that the SHA-1 hash of the file is different than what it is tracking, so it reports the file has been changed.  If we want to commit that change, remember we first have to stage it using “git add”, and then commit using “git commit”.

image

Now let’s see the status so far.  Use “git log”.

image

We see our two check-ins that have been committed, and each one is identified by a SHA-1 hash.

Lesson 3: Checking in source code stages changes to the local repository, and those changes must be committed.
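The full add/commit cycle from this section, as one runnable sketch.  It creates its own scratch repository, and the identity it configures (“you@example.com”) is just a placeholder so the commits succeed.

```shell
repo="$(mktemp -d)"; cd "$repo"
git init -q

# Commits need an author identity; set a throwaway one for this repo only.
git config user.email "you@example.com"
git config user.name  "Your Name"

echo "Hello world" > hello.txt        # new, untracked file
git status --short                    # shows "?? hello.txt"

git add hello.txt                     # stage it (working directory -> index)
git status --short                    # shows "A  hello.txt"

git commit -q -m "My initial check-in"  # index -> repository

echo "Hello again" > hello.txt        # modify the tracked file
git add hello.txt                     # stage the change again...
git commit -q -m "Changed the contents" # ...and commit it

git log --oneline                     # two commits, each named by a SHA-1 hash
```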

Pushing to Visual Studio Online

So far, we cloned the repository and showed how to stage and commit changes to the local repository.  To understand this, go to your Visual Studio Online page and notice that there’s nothing in there.

image

There are instructions for enabling basic authentication for your profile.  After enabling alternate credentials, we can go back to the command line and push our changes from the local repository to Visual Studio Online using “git push -u origin --all”.

image

Now that we’ve pushed changes, let’s look at Visual Studio Online again.

image

We now see that our code has been pushed to the remote repository.  Let’s look at the history.

image

OK, that is seriously cool.  Not only did the source code get copied from my local repository, but the change history came with it.  This is the most powerful part of Git: I can push the entire change history, I can cherry-pick a few changes, or I can push all of the changes but squash them into the latest commit.  This allows you to tell a story about what’s in your feature without having a ton of chatty check-in comments in the central repository.

Lesson 4 – Changes are committed to the local repository and then pushed to the remote repository.
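Here is the commit-then-push flow end to end.  As before, a local bare repository stands in for Visual Studio Online (no credentials needed), and the identity is a placeholder; the “git push -u origin --all” line is exactly the command from the post.

```shell
work="$(mktemp -d)"; cd "$work"
git init --bare remote.git              # stand-in for the VSO repository

git init -q local && cd local
git config user.email "you@example.com"
git config user.name  "Your Name"

echo "Hello" > hello.txt
git add hello.txt
git commit -q -m "My initial check-in"  # committed locally only

git remote add origin ../remote.git     # VSO would already be "origin" after a clone
git push -q -u origin --all             # push every local branch, with history

git ls-remote origin                    # the remote now has our branch
```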

Using Visual Studio Git Integration

Now that we’ve seen how to do everything from command line, let’s use Visual Studio instead.  Create a new project and use the same folder.  Check the option to add to source control.

image

The result is a new project in Visual Studio. 

image

We can go to the Changes tab to see the changes.

image

Notice that it looks like the changes are staged.  However, that’s really not the case; Visual Studio is trying to make things easier for us by hiding the low-level details.  Let’s take a quick look at the repository to see the status.

image

Everything Visual Studio just created is untracked as far as Git is concerned.  Visual Studio makes things easy for us: instead of having to know anything about staging and committing, we just commit.  Go back to the Changes tab and enter a check-in comment, and the Commit button is enabled.

image

You can see there are three different options for committing changes: Commit, Commit and Push, and Commit and Sync. 

image

We will just use Commit to commit the changes to the local repository, just like we did with the command line before.

image

Click that button, “Unsynced Commits” to see the commits that exist in the local repository but have not yet been pushed to the remote repository.

image

We see the commit that we just created in our local repository.  We click Push to push the changes to Visual Studio Online.  Once that is complete, go back to the browser for your project, we can see the changes.

image

Now let’s go see if the source is actually there.

image

Our source code is there, just as we expect. 

Lesson 5 – Visual Studio tries to simplify the interactions with Git, but it will help you tremendously if you familiarize yourself with how Git works.


For More Information

Set up Git on your dev machine (configure, create, clone, add)

Develop your app in a Git repository (track, commit)

Work from the Git command prompt

Jessica Kerr did a great presentation that helped me tremendously to understand the benefits of Git.

One of the best videos I have seen for understanding Git concepts

Git for Team Foundation Developers–Branches


This post will illustrate branching with Git.  This is the second post in a series.

The series focuses on introducing Git for developers who are familiar with Team Foundation Version Control. 

Background

One of the concepts that really threw me when I was first learning Git was branching.  When using branching with Team Foundation Version Control (TFVC), you would typically use a pattern similar to the following:

image

In this model, you would build version 1 and release it.  Upon release, you would create a new branch for version 2, but continue to support version 1 through bug fixes.  At some point your team merges the fixes from v1 to the v2 branch, and v2 is released.  When version 2 is released, a new branch for version 3 is created, but version 2 continues in maintenance mode until you merge changes from v2 to the v3 branch.

This could be an absolute nightmare.  Not kidding, this is really easy to show on a diagram, but the reality was that the merge operation could often mean rewriting entire portions of a solution.  Git, on the other hand, encourages branching even for small bug fixes because it doesn’t track files, it tracks content, making it easier to merge content later.

The Master Branch

There’s really nothing special about the Master branch.  Git needs a single default branch to work with, and names the default “Master”.  In reality, you may decide to create your own branching structure and never use the name “master”.  For instance, you might use a branching structure like:

image

You may decide to instead create branches based on features.

image

One colleague on my team is working on a project where they create branches daily. 

Creating a Branch

In my previous post, Git for Team Foundation Developers, I showed how to create a project in Visual Studio Online, clone the repository, commit changes to the local repository, and push changes to the remote repository.  Everything we did used a default branch named “master”: the local repository uses “master”, and the remote repository uses “master”.  Let’s create a local branch.  I am going to use Visual Studio primarily to do work, but then use Git Bash and gitk to see what Git thinks is going on.

Click Branches.

image

Click New Branch.

image

Give it a name and click “Create Branch”.

image

We just created a branch named “v2”.  Once we are done, Visual Studio tells us that the branch is now v2 and that v2 is an unpublished branch.

image

Let’s see what Git thinks.  Open Git Bash and change directory to your working directory (“cd /c/gitlearning”).

image

Notice that the command prompt now has (v2) appended to it, indicating that our current branch is v2.  Want to see something cool?  In Git Bash, switch to the master branch using “git checkout master”.

image

Notice the prompt now has (master) appended to it, indicating the current branch is the master branch.  Now, go take a look at Visual Studio, and it knows the branch has changed to master.

image

Let’s switch branches back to our v2 branch using Visual Studio this time.  Go to the Branch drop down and select the v2 branch.

image

In Git, we can use “git branch” to show all the branches, and the asterisk next to a branch indicates the current branch.

image

Git Bash also knows that we switched the branch using Visual Studio.  There is a file in the .git directory named HEAD that contains this information, pointing to the current branch.

image
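Branch switching can be sketched entirely from the command line.  This uses a scratch repository with a placeholder identity; note that newer Git versions may name the default branch “main” rather than “master”, so the sketch captures whatever name your Git uses.

```shell
repo="$(mktemp -d)"; cd "$repo"; git init -q
git config user.email "you@example.com"
git config user.name  "Demo"
git commit -q --allow-empty -m "initial commit"  # need one commit before branching

default="$(git symbolic-ref --short HEAD)"  # "master" in the post; may be "main"

git checkout -q -b v2      # create and switch to the v2 branch
git branch                 # the asterisk marks the current branch
cat .git/HEAD              # "ref: refs/heads/v2" -- HEAD points at the branch

git checkout -q "$default" # switch back...
cat .git/HEAD              # ..."ref: refs/heads/master" (or main)
```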

Creating Some History

Now that we’re using the v2 branch, let’s make some changes.  I am going to add a class named UserInput with two properties.

image

The solution explorer pane has that little green plus sign next to the UserInput.cs file that I talked about in my previous post in the series. 

image

Right-click the file (not the project node) and choose Commit.

image

We can now see the included and excluded changes.  The new class, UserInput.cs, is an included change.  However, the .csproj is an excluded change, meaning it will not be committed. 

image

That would be weird, to have the new class but the csproj not know anything about that.  Simply drag the csproj to the Included Changes section, enter a check-in comment, and click Commit.

image

Now that we’ve checked in the code to our local repository, a colleague stops by your cube and points out that you should have used an interface for that class to improve testability.  You right-click on the class, choose Refactor / Extract Interface.

image

image

Click OK, and the class now implements the new IUserInput interface.  We go to Solution Explorer to see the changes.

image

Right-click on the project node (not the file this time) and choose Source Control / Commit.

image

The changes look good, we enter a check-in comment and choose Commit.  When we are done, notice the link up top.  We can click it to inspect details.

image

Before we do that, let’s go inspect what’s happened on the file system.

image

There are two files in the heads subdirectory, master and v2.  There are also a bunch of subfolders in the objects subdirectory.  Our latest commit shows an ID of “ffffb5c7” in Visual Studio.  I go to the “ff” subfolder under objects, and see a file with a long weird name.

image

Let’s go look at Git Bash for a second.  I use “git log” to show the log.  It reads as newest at the top to oldest at the bottom. 

image

Notice the full commit ID for the latest commit?  The first two characters are the subfolder “ff”; the rest is the file name.  Every commit is identified by a SHA-1 hash value of its author information and contents.
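You can verify the subfolder/file-name split yourself.  This sketch builds a scratch repository (placeholder identity again), takes the full commit ID, and looks it up in .git/objects; “git cat-file -p” then pretty-prints the commit object stored there.

```shell
repo="$(mktemp -d)"; cd "$repo"; git init -q
git config user.email "you@example.com"
git config user.name  "Demo"
echo "hi" > a.txt
git add a.txt
git commit -q -m "initial"

id="$(git rev-parse HEAD)"        # the full 40-character SHA-1
ls ".git/objects/${id:0:2}/"      # file named with the remaining 38 characters
git cat-file -p "$id"             # pretty-print the commit object itself
```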

Now go back over to Visual Studio.  We can now click the link for the commit to see the details, including changes as well as a pointer to its parent.

image

Click the link to its parent to see what that looks like.

image

Changes have parents.

Branches and the Working Directory

Here is the part that absolutely threw me for a loop.  In TFVC, remember that you would have a different working directory for every single branch.  In Git, there’s only one working directory and all changes are tracked within the repository.  Here’s what my file system looks like right now:

image

OK, that’s expected, the IUserInput.cs and UserInput.cs files are there, I just coded those in Visual Studio.  Now go to Visual Studio and switch branches back to master.

image

Now take a look at the Solution Explorer pane.  UserInput.cs and IUserInput.cs are gone!  Don’t worry, they’re not really gone, they’re just not in the master branch.

image

Let’s take a look at the file system.  The files are gone from there, too.

image

The files are not deleted, they’re still there in the repository.  Remember that what you’re looking at in File Explorer is the working directory.  As we switch between branches, the working directory will reflect the latest commit.  Switch back to the “v2” branch, and our files reappear in the working directory.

image
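The disappearing-file behavior is easy to reproduce from the command line.  A sketch with a scratch repository and placeholder file names; watch the “ls” output change as the working directory is rewritten at each checkout.

```shell
repo="$(mktemp -d)"; cd "$repo"; git init -q
git config user.email "you@example.com"
git config user.name  "Demo"
echo "class Program {}" > Program.cs
git add . && git commit -q -m "initial"
main="$(git symbolic-ref --short HEAD)"   # "master" in the post

git checkout -q -b v2
echo "class UserInput {}" > UserInput.cs
git add . && git commit -q -m "Added UserInput"
ls                        # Program.cs  UserInput.cs

git checkout -q "$main"
ls                        # Program.cs only -- UserInput.cs left the working dir

git checkout -q v2
ls                        # ...and it is back
```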

Get Those Changes to the Server

When I use TFVC, I would frequently check in source code to TFS to prevent against loss.  This is a self-serving pattern.  If I win the lottery, I am not going to care about those changes, and let’s face it, if I get hit by a bus I’m not going to care very much, either.  I am only doing this to make sure that I don’t lose any changes should my hard drive crash or my laptop is stolen. 

Let’s get the v2 branch to the server. I go back to the Branches section in the Team Explorer pane and choose “publish branch”.

image

Go to Visual Studio Online, under Code / Branches and we can see the new v2 branch is now in TFS.

image

I can also see a visual indication that the v2 branch is 2 commits ahead of the master branch.

image

I can compare branches.

image

When I compare, I can see the commit history that differs.

image

I can also see the difference in files.

image

This is the beauty of the Git model: I can create my own branch, such as “kirk_dev”, where I do all of my own work.  I publish that branch to the TFS server.  I perform commits against my local repository offline, and when I am able to connect I can push the commits to the remote repository.  Those changes can now be seen by the rest of my team and can be used for peer code reviews.

Merges

We now have two branches both on the server and in the local repository, master and v2.  Let’s merge the changes from v2 back into master.  Right-click the branch and choose Merge.

image

You are prompted to choose the source and target branches.  In our scenario, v2 is the source and master is the target.

image

Click Merge, and an interesting thing happens.  First, it looks like a new commit is created in the local repository.  However, if we look closely, the commit is “ffffb5c7”, which is the same commit we saw previously.  Git just takes the commits from the v2 branch and puts them in the master branch, effectively “replaying” all of our changes.  Second, the branch is switched to master.

image
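The same merge can be done from the command line, and you can see the fast-forward for yourself: after the merge, both branch names point at the same commit, exactly as described above.  Scratch repository and placeholder identity as before.

```shell
repo="$(mktemp -d)"; cd "$repo"; git init -q
git config user.email "you@example.com"
git config user.name  "Demo"
echo "v1" > file.txt
git add . && git commit -q -m "initial"
main="$(git symbolic-ref --short HEAD)"   # "master" in the post

git checkout -q -b v2
echo "v2 work" >> file.txt
git add . && git commit -q -m "v2 change"

git checkout -q "$main"
git merge v2               # reports "Fast-forward": no new commit is created

git rev-parse "$main" v2   # both branches now name the same commit
```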

Remember, we have only done this in the local repository.  No changes go to the remote repository until we decide they should.  This was a mental block for me, as I am used to all operations being performed against the server.  With Git, all changes are performed against the local repository and then pushed to the remote repository.

Note: there are other options than just pushing.  We can also fetch and pull.  For now, let’s just focus on push.

We can click that link “Unsynced commits” to commit the changes from the local repository to the remote repository.  We can see the commits that will be added, and we can use the Sync button to synchronize our changes to the remote repository.

image

After it is complete, we see there are no additional outgoing commits.

image

Finally, we see a visual indication in Visual Studio Online that the master branch has the same commits as the v2 branch because it is 0 ahead and 0 behind.

image

For More Information

Collaborate in a Git team project (pull, push)

Use Git branches to switch contexts, suspend work, and isolate risk

Git for Team Foundation Developers - Merging


This post will show how to merge using Git.  This is the third post in a series.

The series focuses on introducing Git for developers who are familiar with Team Foundation Version Control.

Background

One of the absolute coolest parts of Git is the flexibility it has with merging.  The types of scenarios it enables are exactly the scenarios that you face daily.  You make lots of check-ins, some of them you need to make (“Going on vacation… sorry if this breaks the build”), others you are happy to make (“Version 3 release… time for vacation”).  At the same time, other members of your team are also making check-ins.  Git makes it easy to merge your frequent check-ins (“going to lunch” type check-ins) with the release branch for your team, and it makes it easy for you to merge your changes with the rest of the team.

Creating Branches

My previous post talked about branches in Git.  This is the thing I like most about Git, and also one of the things that completely tripped me up originally because I didn’t understand all the names that Git used.  When I create a team project in Visual Studio Online, I clone the repository to a local repository.  I get the default branch, master.  I then create another branch, v2, and publish it to Visual Studio Online.  Here is what my branches look like right now:

image

What you don’t see in this picture is that there are actually 4 branches: 2 local and 2 remote.  Visual Studio Online is the “origin” of the code, and each user has their own local repository.

image

The reason I bring this up in a discussion of merging is because you will see the term “ORIGIN” occasionally.  You can choose to create a branch from an existing branch in your local repository or the remote repository.

Our team decided to use the v2 branch for builds.  I first switch to the v2 branch (which switches in my local repository), and then I sync changes by clicking the Sync button, pulling changes from Visual Studio Online and pushing any outgoing commits.

image

Now that I know the local v2 and origin v2 repositories are synchronized, I am going to create a branch, kirke_v2, to do my work in.  The rest of my team may have branches specific to them, such as Simon_v2, Donovan_v2, and Paul_v2.  They are free to make whatever changes they want to their local branch, even publish that branch to the server, knowing that those changes are not part of the branch that our team is using for builds, the v2 branch.

image

Think about how powerful this is.  Rather than do all of your work against the central repository and possibly break the build, you can fetch the latest source from a particular branch on the server and work locally.  When you are ready to push your changes to the server, you do that explicitly.  I like that model.

Now that I’ve created the kirke_v2 branch, notice that it is local only and is not on the server.  We can tell that because it is an “unpublished branch”.

image

Finally, I decide to publish that branch to Visual Studio Online. 

image

There are now 6 total branches (3 local and 3 remote), and any changes that I make continue to work only against the local repository until I decide to push the changes to the remote repository.

Change Happens

Let’s cause a few changes.  Here is my project as I start:

image

I am going to add a few classes.  Here’s my base class.

image

I then implement a ConsoleOutputWriter and a DebugOutputWriter.

image

Each time I write some code, I commit.  You probably wouldn’t do this in your daily work, but I am doing it here to highlight how commits are tracked.

Studying History

Each time I added code, I made sure to commit.  You can see the history for the kirke_v2 branch in Visual Studio.  Go to Changes / Actions / View History.

image

The history for the branch is now shown.  Notice those two markers on the right.  Those show the pointers to the last commit in master that the branch kirke_v2 knows about, and the last commit in kirke_v2.

image

Let me explain that last part a little.  The pointer to master points to ffffb5c7, “Modified to use an interface”.  We’ll see in a second that the “v2” branch actually has new commits in it made by a new intern in our group.  When we branched from “v2” to create “kirke_v2”, Git copied all of the commits to the kirke_v2 branch.  Think of it as a complete copy of all the commits from the source branch, because that’s what it is.

Let’s go over to Git Bash.  We will run a program called “gitk”.

image

gitk opens, and we can see all of our check-ins represented as a Directed Acyclic Graph (DAG).  I use this fancy term not to show off my incredible vocabulary and sound as geeky as possible, but to point out that it is used frequently in the Git documentation.  A directed acyclic graph is one where you go from node to node and can never revisit a node.  For instance:

image

  1. The DAG shows all of the commits, with the most recent at top
  2. Each commit is by a user at a specific time
  3. Each commit has history that can be used to show the difference between the current commit and the previous commit.

GitK is a fairly useful tool. 

Notice the changes that it is showing in section 3, highlighting new code in the Main method (using a green font).   

Managing Conflict

While I’ve been busy working on my fancy new class structure for writing output, a new intern in our group took the initiative to go make changes in Program.cs and commit them to the v2 branch, the same branch I’m about to merge my awesome new code into.

image

Houston, we have a conflict.  Of course, I won’t know this yet because that change commit is not in my kirke_v2 branch, but I’ll discover it when I attempt to merge.  That edit is smack-dab in the middle of the edits we made, and Git will let us know there’s a conflict that has to be resolved.  To see this in action, let’s try to merge our changes from the kirke_v2 branch to the v2 branch.

I want to first make sure that I have the latest source for v2.  I first switch to the v2 branch.

image

Next, I use the Sync button to pull commits from the remote repository. 

image

Now the v2 branch is synchronized with my local repository.

Go to Team Explorer / Branches to view the branches, and then choose Merge.  The dialog will change to ask you to pick a source and destination.

image

Click the Merge button, and we start yelling at the intern.

image

Visual Studio is doing something incredibly nice for us here.  If you’re not familiar with Unix tools like vim, then editing this stuff using Git Bash is going to be very difficult.  Thankfully, the Visual Studio team provided a UI for this.  Click the “Resolve the conflicts” link.  The dialog changes to:

image

OK, that’s not very helpful.  Click the program.cs file in the Conflicts section.

image

That’s a little better.  Click that big “Merge” button.

image

Ah, there we go!  A visual diff tool that allows us to pick the changes from the conflicting commits.  Next to each conflict (highlighted in red in the tool) is a checkbox so that you can visually select which change stays.  The bottom pane shows the results.

image

Take that, intern.

Here’s the confusing part… what next?  The file has an asterisk next to its name, and it’s not a preview file; it’s a file actually opened for editing.  Do I save it, or do I click that big “Merge” button on the right of the screen? 

Tucked away, somewhat hidden, is a button that says “Accept Merge”.

image

Click that, and now the Team Explorer pane changes to include a button that says “Commit Merge”.

image

Click the Commit Merge button.  Remember how we started this out trying to commit our changes?  Visual Studio says, “you’ve resolved all the problems, but we still haven’t done a commit.” 

image
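For comparison, here is the whole conflict scenario from the command line, with a scratch repository, a placeholder identity, and made-up one-line edits standing in for the real Program.cs changes.  Two branches edit the same line, the merge stops, you fix the file, then “git add” plays the role of Accept Merge and “git commit” plays the role of Commit Merge.

```shell
repo="$(mktemp -d)"; cd "$repo"; git init -q
git config user.email "you@example.com"
git config user.name  "Demo"
echo 'Console.WriteLine("Hello");' > Program.cs
git add . && git commit -q -m "initial"

git checkout -q -b kirke_v2                       # my feature branch
echo 'writer.WriteOutput("Hello");' > Program.cs
git add . && git commit -q -m "Use the new writer classes"

git checkout -q -b v2 HEAD~1                      # intern branches from the parent
echo 'Console.WriteLine("Intern");' > Program.cs
git add . && git commit -q -m "Intern edit"

git merge kirke_v2 || true                        # stops: CONFLICT in Program.cs
grep '<<<<<<<' Program.cs                         # conflict markers in the file

echo 'writer.WriteOutput("Hello");' > Program.cs  # keep the change we want
git add Program.cs                                # "Accept Merge"
git commit -q -m "Merge branch kirke_v2 into v2"  # "Commit Merge"
git log --oneline --graph                         # history shows the merge commit
```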

Finally, we want to push our changes to Visual Studio Online so that the rest of the team can use them.  We can then go to Changes and see the unsynced commits section that shows all of the commits in our local repository.  We go to the outgoing commits section and click Push to push them into the remote repository.

image

The results show that we were successful in pushing to “origin/v2”.

image

Now hopefully you see why I started this post with the explanation of “origin”. 

Let’s go to Visual Studio Online again.  A nice shortcut to get there is to use the link in Visual Studio.

image

We are taken to the portal, and we switch to the Code section.  Click the History tab, and we can see all of our commits, including the merge, are in the commit history.

image

Visualizing the Branches

Like I said in my previous posts, it helps to understand what Git is doing under the covers because the Visual Studio tools are nice enough to hide some of the gory details from us.  Let’s go back to the Git command line tools.  We used Git Bash to open a visual tool called gitk.  We should already have gitk open from our previous demo, go to the gitk File menu and choose “Start git gui”.

image

Once Git Gui starts, we can choose “Visualize All Branch History”.

image

That command will then launch gitk again, and now we can see the Directed Acyclic Graph (DAG) for our commits.  The picture below is straight out of the gitk tool (I didn’t draw this myself), which makes it a hugely valuable tool for understanding what’s going on in your commit history. 

Here is what it looked like after I pushed to the remote repository.

image

A little explanation of what you’re looking at is in order.  You’re not looking at a time line, but rather a visualization of the commits.  In a previous post I commented that changes have parents, and that is what this graph is showing.

The initial vertex in the graph is our initial check-in at the bottom where I was working in the master branch.  I made a few changes, and then at the 5th commit I pushed the changes to “remotes/origin/master”.  What’s interesting here is that the same line also shows a push to “remotes/origin/kirke_v2”.  This graph doesn’t show when the branches were created, it just shows when something happened on them.  When I created the kirke_v2 branch and published it at the beginning of this post, the changes it had were the same changes as in v2 and in master.  At some point, the intern added a commit to the v2 branch (the nodes do not reflect time relative to each branch, only precedence on the current branch).  We finally see our kirke_v2 branch is merged with the main v2 branch and pushed to “remotes/origin/v2”.

Note: I’m impressed that I made it this far without really explaining the DAG and the various types in it.  Most texts you read on Git dive into tag, branch, commit, tree, and blob by now.  My goal is to make you familiar and productive, not to mire you in theory. 

For More Information

Use Git branches to switch contexts, suspend work, and isolate risk

Resolve Git conflicts


Deploying an Azure Web Site Using Git and Continuous Integration


This post will show how to deploy an Azure Web Site using Git and Continuous Integration.  By the time I typed that long title, I’ve told you pretty much the whole story. 

Background

I attended the Global Windows Azure Bootcamp today and had a blast going through labs and watching demos.  Sure, most of it was stuff I already knew, but there’s always some new trick you pick up that makes it completely worth your while.  This post covers one of those tricks that I learned from Rick Rainey (co-founder of http://opsgility.com and @rickraineytx on Twitter), and it fits very nicely with the stuff I’ve been blogging about lately: deploying an Azure web site using Git.

A previous post of mine showed how to deploy an Azure web site for a SharePoint-hosted app using Visual Studio 2013.  In that post, I showed how to use the integrated tools to deploy a web site to Azure straight from your desktop.  It would be nice if we could target the hosted build controller in Visual Studio Online, using continuous integration to automatically deploy the web site to Azure anytime there is a new commit to the source code repository.

This post is long just because I took a bunch of screen shots.  Doing all of this takes maybe 5 minutes, especially once you understand what’s going on.

Create a Team Project in Visual Studio Online

Go to Visual Studio Online and create a new team project, choosing Git for version control.

image

When that’s complete, click Close.  You could navigate the project and check out how awesome the Visual Studio Online dashboard is, even define some sprints, create some product backlog items, and elaborate tasks.

image

We’re not covering those features today, but we’ll come back to Visual Studio Online in just a minute.

Create a Standard Web Site with Staged Publishing

In Visual Studio 2013, go to the Server Explorer pane, log into your Azure subscription, and then choose Add New Site.

image

The wizard asks you for a name and a location.

image

Once created, we go to the Azure Management Portal and click on our web site, then choose Scale.  On the Scale tab, change the web site mode from Free to Standard.

image

Click Save to save your settings.  Click Yes for the friendly confirmation message.

image

After a few minutes, the operation is complete.  We now go to the dashboard for the web site.  Under the Quick Glance section, click the link to “Enable staged publishing”.

image

This is going to enable our web site to have a staging and production site.  Once completed, there will be a new staging web site with a different URL nested below the production one.

image

Set Up Deployment from Source Control

This part makes me giddy with joy, it really does.  Open the staging web site (the one that has “(staging)” appended to it); you should be on a page that looks like the picture below.  Notice the “(staging)” appended to the end of the web site name; that’s going to be important in a minute.

image

Go to the dashboard tab for the staging web site, and under the Quick Glance section choose “Set up deployment from source control”.

image

Azure asks where your source code lives.  We tell it to look in Visual Studio Online.  Notice, though, that we can deploy from a local Git repository, from GitHub, heck even Dropbox!  Other options include Bitbucket, Codeplex, or a publicly accessible repository using Git or Mercurial.  Of course, we will choose Visual Studio Online.  Because awesome.

image

You now have to authorize the connection so that Visual Studio Online will have the right permissions to push to the Azure web site.

image

You will see a connection request that you must Accept. 

image

Finally, we see the connection was successful and we choose a repository to deploy from.

image

Once that completes, you’ll see a confirmation screen that the team project is linked, and you will see a Visual Studio logo at the bottom of the screen.

image

Click that.  You are then prompted if you really want to allow the web site to open Visual Studio (you do, you really do).

image

Create an ASP.NET Web Site

Once Visual Studio opens, the Team Explorer pane shows your Visual Studio Online team project information (oh, that’s so cool… we opened the project from Azure, but by virtue of linking Azure to VSO we now get our team project to show).  Click the “New” button under Solutions.

image

In the New Project dialog, I will create a new ASP.NET Web Application named DemoWebApplication.  Leave the checkbox to add to source control checked; we need that to create a Git repository.

image

I’ll create a new Web API project, which includes ASP.NET MVC and does not authenticate the user.

image

Once that’s complete, I have my web site project and my local repository.

Note: when trying this, I ran into an odd error where Visual Studio tried to put my project into a different repository, and I saw strange behavior trying to add the web site project.  If this happens to you, just close and re-open Visual Studio, go to the Team Explorer, and choose to clone the repository to a new working directory.  I show how to do this in my post Git for Team Foundation Developers.

I then right-click the solution and commit. 

image

Enter a check-in comment and click Commit.

image

We now want to push our changes to the remote repository, so click the Sync link.

image

Next, click the Sync button.

image

We now have committed our changes to Visual Studio Online. 
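Under the covers, the Commit and Sync buttons in Team Explorer map onto ordinary git commands.  Here is a minimal sketch of the same steps, using a local bare repository as a stand-in for the Visual Studio Online remote (Sync normally pulls incoming commits before pushing; the pull is skipped here because the stand-in remote has nothing incoming):

```shell
# Simulate Commit + Sync against a stand-in for the hosted repository.
cd "$(mktemp -d)"
git init -q --bare remote.git           # plays the role of Visual Studio Online
git clone -q remote.git work
cd work
echo "<h1>DemoWebApplication</h1>" > index.cshtml
git add -A                              # stage the new solution files
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Initial commit of DemoWebApplication"
git push -q origin HEAD                 # the Sync step: push commits to the remote
```

Pushing `HEAD` sends the current branch to a branch of the same name on the remote, which is what triggers the continuous integration build described next.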

Stand Back in Amazement

Now for the truly inspiring moment.  If you go to the Builds section in Visual Studio’s Team Explorer pane, you’ll see there is a build definition already created for you.  Even better… the build is probably already running.

image

Double-click the running build (under My Builds (1)) and you will see the status of the current build.

image

ZOMG.  I just peed myself a little.  That is so awesome.

Once it is done, you see that the build succeeded.  You could view the log, open the drop folder, yadda yadda yadda… let’s go look at the web site!  Click the link under Deployment Summary to go to the web site.

image

IT.  IS.  ALIVE!!!!

image

Remember that we are on the staging web site. 

image

We haven’t done anything to production yet.  Let’s go check the production URL.

image

We haven’t pushed code to it yet. 

Swap Staging and Production

When we enabled staged publishing for our web site, it enabled us to have a staging web site and a production web site.  Azure will simply swap the pointers between them when you tell it to.  This is incredibly useful for things like automated Build Verification Testing to test the staging site as part of your build process.  You can make sure everything works in staging before swapping to production. 

Go to the Azure Management Portal and go to the Web Sites node.  You will see a button that says “Swap”.

image

Click it, this will swap your staging and production sites. 

Another friendly confirmation message (I know you’re as antsy as I am to see this… it’s just so cool to me).

image

Click Yes.

image

Refresh the page for the production web site.  Make sure you aren’t holding liquids near your keyboard, because this is way too cool.

image

Do It All Over, Just Because You Can

Let’s make a change to the web site.  I go into the Views/Home/Index.cshtml file and make an edit.

image

We see the file shows as modified when we edit it, so we need to commit.

image

Enter a check-in comment and click Commit.

image

Sync your changes.

image

Go look at the Build tab again!

image

Once it is complete, we go look at the staging site.

image

Aw heck, just because it’s fun to do, let’s go swap staging and production.

image

And now production has the correct version of the web site.

image

For More Information

Git for Team Foundation Developers

Git for Team Foundation Developers – Branches

Git for Team Foundation Developers - Merging

Creating a SharePoint 2013 App With Azure Web Sites (my blog post showing how to use Azure web sites, if you were building an app for SharePoint you could totally do everything in this post as a provider-hosted app)

Building SharePoint Apps with Windows Azure Platform as a Service (my one-hour talk at SharePoint Conference where I covered a bunch of stuff with Azure web sites and platform as a service stuff)

Opsgility, Windows Azure Training Experts (They sponsored the Global Windows Azure Bootcamp in Irving, TX and did an awesome job presenting)

Deploying a SharePoint App to Azure As Part of a Build


This post will show how to use continuous integration with a SharePoint provider-hosted app deployed to an Azure web site.

Background

I’ve written on the topic of SharePoint apps and ALM before, but I couldn’t get the whole thing to work with Azure web sites, just my own IIS server.  I decided it was too much work to figure out how to make the available scripts work with Azure web sites.  This weekend, while writing the blog post Deploying an Azure Web Site Using Git and Continuous Integration, I noticed a suspicious little setting in the build definition.

image

I was thinking, “no way did they fix this.”  Well, they did!

Create a Team Project in Visual Studio Online

A few readers have asked: why Visual Studio Online, why not your own TFS?  Honestly, I love Visual Studio Online.  I cannot picture a day where I go through the pain of setting up TFS ever again; I’ll just use the cloud.   

image

Once you’re done creating the project, click Close.

Create an Azure Web Site

In the previous post, I changed the web site to Standard mode so that we could use staged publishing.  For simplicity, I’m not doing that here; let’s just create an Azure web site using the custom create option.  That will give us the little checkbox to publish from source control.

image

We checked the checkbox, now Azure wants to know where to get the source from.  Visual Studio Online, of course!

image

We then authorize the connection.

image

We say accept…

image

and finally choose a repository to deploy from. 

image

Once the web site is created, click the Visual Studio icon at the bottom of the screen.

image

We are then asked if we want to allow the website to open Visual Studio.  Yep, we do.

image

Create Some Code and Commit

Now that we have a web site and a remote repository, clicking the Visual Studio button above will open Visual Studio.  We are asked to clone the repository to get started.

image

I clone the repository, providing a path to a folder that does not have a .git subdirectory already.

image

Once the repository is cloned, I then create a solution using the “New…” link under solutions on the Team Explorer pane.

image

We will create an App for SharePoint 2013 called MySPAppDemo.  Make sure to leave that checkbox for “Add to source control” checked; we’re going to need that.

image

We then provide a URL for debugging, and click Finish. 

image

We make sure it is a provider-hosted app, then accept all defaults, which creates an ASP.NET MVC site using Office 365.

image

The Client ID and Client Secret

Remember that SharePoint apps require a client ID and client secret when being published to SharePoint Online.  If you didn’t provide those, here is the error that you can expect in your build.

Exception Message: The application identifier is invalid, it must be a valid GUID. (type ServerException)
Exception Stack Trace:    at System.Activities.Statements.Throw.Execute(CodeActivityContext context)
    at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)
    at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

image

Ask me how I know.  I lost at least an hour on this one.  I finally realized, “duh… of course, I never told it how to authorize.”

Fixing this is simple: just go to your SharePoint site, appending _layouts/15/appregnew.aspx to the URL, to generate the client ID and client secret.

image

Go to Visual Studio and download the publish profile for the web site.  The file should go to your Downloads folder.

image

Here is an IMPORTANT STEP; do not miss it.  Without this step, the app package redirects you to an HTTP URL instead of HTTPS, and the app won’t work.

Open the .publishsettings file in your downloads folder that you just saved and edit it.  We need to change the URL to use HTTPS instead of HTTP.

image

Save your changes.
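For reference, the edit looks something like this.  This is an abbreviated, hypothetical .publishsettings fragment (the site name and the exact attribute list are placeholders); the point is that the app URL must use the https:// scheme:

```xml
<!-- Abbreviated .publishsettings file; names and values are placeholders. -->
<publishData>
  <publishProfile profileName="MySPAppDemoWeb - Web Deploy"
                  publishMethod="MSDeploy"
                  destinationAppUrl="https://myspappdemoweb.azurewebsites.net" />
  <!-- was: destinationAppUrl="http://myspappdemoweb.azurewebsites.net" -->
</publishData>
```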

Right-click your app project and choose Publish…

image

You are asked for a current profile, but you don’t have one yet.  Create a new one.

image

Import the publishing profile that you just downloaded.

image

You can now see the Publish your app wizard. 

image

Click the “Deploy your web project” button, and your web site will be deployed to Azure after you provide the client ID and client secret for your app.

image

Next, click the button to “Package the app”.  Remember to change the URL to use SSL; otherwise SharePoint apps won’t work.  Since we edited the publish settings for the site, this should default to HTTPS now.

image

This step is still required.  Now let’s talk about a gap in the process template.

If you read my previous article, ALM for SharePoint Apps – Understanding Provider Hosted App Publishing, you’ll know that the previous method we used to deploy simply used the publish profile.  However, in this model the publish profile is only used to generate the app package; its settings are not applied to the web.config for our Azure web site.  How do we then handle this? 

One way would be to just enter the settings into web.config and commit to source control.  Another way is to simply go to the Azure web site and apply your ClientId and ClientSecret values in the appSettings, which will override whatever value is in your web.config file.  I greatly prefer this model because it lets me control it later via the Management Portal.
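For example, if your web.config carries placeholder values like the fragment below, setting the same keys in the portal’s app settings replaces them at runtime.  The ClientId/ClientSecret key names come from the Visual Studio provider-hosted app template; the values here are placeholders:

```xml
<appSettings>
  <!-- Placeholders; app settings configured in the Azure portal override
       these keys at runtime. -->
  <add key="ClientId" value="00000000-0000-0000-0000-000000000000" />
  <add key="ClientSecret" value="your-client-secret" />
</appSettings>
```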

image

Once you enter the values, make sure to save your work!

Make a Commitment

Commit your changes to the local repository by right-clicking the solution node in Solution Explorer and choosing Commit.  Provide a check-in comment.

image

You can see that I committed with the incredibly useful check-in comment “asdf”.  Don’t judge me.

image

Once you’ve committed to the local repo, sync your changes to Visual Studio Online.

image

Once you have synchronized your commits to Visual Studio Online, you can edit the build definition that was created for you.  Click the ellipses in the “SharePoint Deployment Environment” parameter to bring up the dialog to enter information about your environment. 

image

Once you’ve edited the build definition and saved it, you can test the process by manually queuing a build.  Just right-click the build definition and choose “Queue new build”.

image

On the next dialog, choose Queue, leaving the defaults, and then be patient.  You can monitor details by double-clicking the running build and then choosing View Details.

image

image

Continuous Integration

Here’s the part that got me pretty excited… you don’t have to do anything for continuous integration to work; it’s all there as part of the process template.  Make a change to your code, commit, and then sync to the remote repository.  Now, go look at your Builds tab.  A build is already running because of your last commit.

image

You can see that the app is being installed.

image

Once the build completes, you will see the Deployment Summary.

image

Click the Trust URL and you will be taken to a page to trust the app.

image

Click Trust It.  Now, click the link to open your app.

BOOYAH!  That’s an app running on an Azure web site, deployed using continuous integration.

image

The Big Payoff

Not seeing why this is so cool yet?  OK, let me explain.  No, it’s too much; lemme sum up.

Make a change to your source code.  We’ll edit the index.cshtml view.

image

Commit your changes, and then sync to Visual Studio Online.

image

Wait for the build to complete.  Next, trust the app again, then open it. 

image

Take it a step further by configuring your web site for staged publishing and using continuous integration against the staging site, then when you’re ready you simply swap the staging and production sites using the Azure Management Portal.

image

You can read more about that approach in my post Deploying an Azure Web Site Using Git and Continuous Integration.

For More Information

Publishing apps for Office and SharePoint to Windows Azure Websites

Deploying an Azure Web Site Using Git and Continuous Integration

Using Portable Class Libraries to Reuse Models and ViewModels


This post will show you how to create a portable class library (PCL) to reuse logic across client applications.  We will look at how the newly announced Universal Apps fit into this picture in a subsequent post.

Background

My team built a solution to help jumpstart custom development projects that involve mobile workers, showing how to leverage Microsoft Azure as the center of gravity for everything.  We built apps using ASP.NET MVC, Windows Phone 8, Windows 8 Store Apps, and we even used Xamarin to build iOS and Android apps.  All of these clients talk to Web API endpoints hosted in Azure, are authenticated with Windows Azure Active Directory, and leverage various services such as Service Bus and Push Notifications. 

One of the challenges with building all of these apps is trying to avoid writing the same thing over and over, reusing logic when possible.  That’s where using portable class libraries provides a huge advantage.

The code samples are based on the MSDN article Using Portable Class Library with Model-View-View Model.  This post simply adds a few things on top of that sample, including demonstrations of different views that reference a PCL.

Portable Class Libraries

Like many of you, I haven’t spent much time doing client development over the past few years; I have been strictly doing web development, primarily with SharePoint.  A key concept you may not already be familiar with is that of a Portable Class Library (PCL).  From MSDN:

The .NET Framework Portable Class Library project type in Visual Studio helps you build cross-platform apps and libraries for Microsoft platforms quickly and easily.

Portable class libraries can help you reduce the time and costs of developing and testing code. Use this project type to write and build portable .NET Framework assemblies, and then reference those assemblies from apps that target multiple platforms such as Windows and Windows Phone.

[via http://msdn.microsoft.com/en-us/library/vstudio/gg597391(v=vs.100).aspx]

Each platform has the concept of a class library that provides the implementation for that platform.

image

If I’m creating an app for Windows 8.1 and an app for Windows Phone 8, I don’t want to have to write that code twice.  I’d rather write it once and reference it from each project.  That’s exactly what a PCL provides.  This helps you to create a single library of code that is shared across multiple platforms.  Think about this for a second, because it’s pretty cool: I can write a library of code once and reference it from my Windows Phone 8 app, my Windows 8.1 app, or even an app written with Xamarin. 

image

When you create a PCL, you choose the targets for your library, which determines the baseline functionality that you want to apply across platforms.

image

This PCL can then be referenced from a project that targets .NET 4.5 for desktop apps, Windows 8 or Windows 8.1 store apps, Windows Phone apps for WP8 or WP8.1 that use Silverlight, or even Xamarin apps that target Android or iOS. 

Write the code once and reference it across multiple project types.  To demonstrate the power behind this model, I will show an example of a PCL that enables reuse of models, repositories, and view models.  The platform-specific implementation is implemented in the views for each platform, which results in nearly zero code in each platform implementation.  The project structure shows the library of reusable code and the three platform implementations that reuse this code.

image

Reusing Models

The easiest thing to reuse in a PCL library is the model.  That’s typically a class that provides property getters and setters that model the data that you want to use.  A simple example is a Customer class:

using System;
using System.ComponentModel;

namespace ReusableLibrary
{
    public class Customer : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;
        
        private string _fullName;
        private string _phone;
        
        public int CustomerID { get; set; }
        
        public string FullName 
        { 
            get
            {
                return _fullName;
            }
            set
            {
                _fullName = value;
                OnPropertyChanged("FullName");
            } 
        }

        public string Phone 
        { 
            get
            {
                return _phone;
            }
            set
            {
                _phone = value;
                OnPropertyChanged("Phone");
            }
        }

        protected virtual void OnPropertyChanged(string propName)
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propName));
            }
        }
    }
}

Notice that we can do more than just primitive types here; we can also use the INotifyPropertyChanged interface to notify when a property changes.  Also notice the PropertyChangedEventArgs type, which can be reused across all implementations. 

Reusing a Repository

This one gets a little harder.  Reusing a repository is completely possible so long as the types that you are using are common across all implementations.  As an example, we have a CustomerRepository class that initializes with a few items.

using System;
using System.Collections.Generic;
using System.Linq;

namespace ReusableLibrary
{
    public class CustomerRepository
    {
        private List<Customer> _customers;

        public CustomerRepository()
        {
            _customers = new List<Customer>
            {
                new Customer(){ CustomerID = 1, FullName="Dana Birkby", Phone="394-555-0181"},
                new Customer(){ CustomerID = 2, FullName="Adriana Giorgi", Phone="117-555-0119"},
                new Customer(){ CustomerID = 3, FullName="Wei Yu", Phone="798-555-0118"}
            };
        }

        public List<Customer> GetCustomers()
        {
            return _customers;
        }

        public void UpdateCustomer(Customer SelectedCustomer)
        {
            Customer customerToChange = _customers.Single(
              c => c.CustomerID == SelectedCustomer.CustomerID);
            // Copy the changed values onto the stored instance; reassigning
            // the local variable would not modify the item in the list.
            customerToChange.FullName = SelectedCustomer.FullName;
            customerToChange.Phone = SelectedCustomer.Phone;
        }
    }
}

Admittedly, this is a simplistic scenario that might not be applicable.  For instance, your repository class might need to grab data from a database, in which case the classes to do so might not be available on all platforms.  In this case, you might need to inject the platform-specific implementation into the repository, and each platform deals with the specifics of how to obtain the data.  For now, we’ll stick with our simple case of returning a list of customer objects.

Reusing ViewModels

Here’s where it gets really tricky.  Defining ViewModels can be difficult based on differences between platforms.  For instance, handling navigation can be different between platforms, even between Windows Phone 8 and Windows Store applications.  Handling those differences requires that you create those abstractions yourself.  For a simple example of reusing the ViewModels, we first define a base class.

using System;
using System.ComponentModel;

namespace ReusableLibrary
{
    public abstract class ViewModelBase : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        protected virtual void OnPropertyChanged(string propName)
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propName));
            }
        }
    }
}

This base class provides the base implementation for handling property changed events in the ViewModel.  The responsibility of the view model is to provide the glue between the view and the model, which is typically done through events and commands.  When a property is changed, we raise an event.  We also want to provide the ability for our view model to respond to commands, such as a button being clicked.  We provide this through a command class called RelayCommand that implements the ICommand interface, which is typical when using MVVM.

using System;
using System.Windows.Input;

namespace ReusableLibrary
{
    public class RelayCommand : ICommand
    {
        private readonly Action _handler;
        private bool _isEnabled;

        public RelayCommand(Action handler)
        {
            _handler = handler;
        }

        public bool IsEnabled
        {
            get { return _isEnabled; }
            set
            {
                if (value != _isEnabled)
                {
                    _isEnabled = value;
                    if (CanExecuteChanged != null)
                    {
                        CanExecuteChanged(this, EventArgs.Empty);
                    }
                }
            }
        }

        public bool CanExecute(object parameter)
        {
            return IsEnabled;
        }

        public event EventHandler CanExecuteChanged;

        public void Execute(object parameter)
        {
            _handler();
        }
    }
}

We can then define a view model that derives from the base class that wires up the ICommand interface implementation.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ReusableLibrary
{
    public class CustomerViewModel : ViewModelBase
    {
        private List<Customer> _customers;
        private Customer _currentCustomer;
        private CustomerRepository _repository;

        public CustomerViewModel()
        {
            _repository = new CustomerRepository();
            _customers = _repository.GetCustomers();

            WireCommands();
        }

        private void WireCommands()
        {
            UpdateCustomerCommand = new RelayCommand(UpdateCustomer);
        }

        public RelayCommand UpdateCustomerCommand
        {
            get;
            private set;
        }

        public List<Customer> Customers
        {
            get { return _customers; }
            set { _customers = value; }
        }

        
        public Customer CurrentCustomer
        {
            get
            {
                return _currentCustomer;
            }

            set
            {
                if (_currentCustomer != value)
                {
                    _currentCustomer = value;
                    OnPropertyChanged("CurrentCustomer");
                    UpdateCustomerCommand.IsEnabled = true;
                }
            }
        }

        public void UpdateCustomer()
        {
            _repository.UpdateCustomer(CurrentCustomer);
            OnPropertyChanged("CurrentCustomer");
            UpdateCustomerCommand.IsEnabled = false;
        }
    }

}

Our view model can now respond to commands and notify any listeners when a property is changed.  This is all typical MVVM plumbing so far, but the key point is that this is all implemented within a single PCL library that can be reused across platforms.

Defining a View for WPF

OK, let’s start to show the payoff.  We can now create a WPF application that references the PCL library.

image

I generated a blank WPF application and then changed the MainWindow.xaml to bind to the library.

<Window x:Class="WPFApplication.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:viewModels="clr-namespace:ReusableLibrary;assembly=ReusableLibrary"
        Title="MainWindow" Height="350" Width="525">
    <Window.Resources>
        <viewModels:CustomerViewModel x:Key="ViewModel" />
    </Window.Resources>
    <Grid DataContext="{Binding Source={StaticResource ViewModel}}">
        <Grid.ColumnDefinitions>
            <ColumnDefinition></ColumnDefinition>
            <ColumnDefinition></ColumnDefinition>
        </Grid.ColumnDefinitions>
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto"></RowDefinition>
            <RowDefinition Height="Auto"></RowDefinition>
            <RowDefinition Height="Auto"></RowDefinition>
            <RowDefinition Height="Auto"></RowDefinition>
            <RowDefinition Height="Auto"></RowDefinition>
        </Grid.RowDefinitions>
        <TextBlock Height="23" Margin="5" HorizontalAlignment="Right"
                   Grid.Column="0" Grid.Row="0" Name="textBlock2"
                   Text="Select a Customer:" VerticalAlignment="Top" />
        <ComboBox Height="23" HorizontalAlignment="Left"
                  Grid.Column="1" Grid.Row="0" Name="CustomersComboBox"
                  VerticalAlignment="Top" Width="173" DisplayMemberPath="FullName"
                  SelectedItem="{Binding Path=CurrentCustomer, Mode=TwoWay}"
                  ItemsSource="{Binding Path=Customers}" />
        <TextBlock Height="23" Margin="5" HorizontalAlignment="Right"
                   Grid.Column="0" Grid.Row="1" Name="textBlock4" Text="Customer ID" />
        <TextBlock Height="23" Margin="5" HorizontalAlignment="Right"
                   Grid.Column="0" Grid.Row="2" Name="textBlock5" Text="Name" />
        <TextBlock Height="23" Margin="5" HorizontalAlignment="Right"
                   Grid.Column="0" Grid.Row="3" Name="textBlock9" Text="Phone" />
        <TextBlock Height="23" HorizontalAlignment="Left"
                   Grid.Column="1" Grid.Row="1" Name="CustomerIDTextBlock"
                   Text="{Binding ElementName=CustomersComboBox, Path=SelectedItem.CustomerID}" />
        <TextBox Height="23" HorizontalAlignment="Left"
                 Grid.Column="1" Grid.Row="2" Width="219"
                 Text="{Binding Path=CurrentCustomer.FullName, Mode=TwoWay}" />
        <TextBox Height="23" HorizontalAlignment="Left"
                 Grid.Column="1" Grid.Row="3" Width="219"
                 Text="{Binding Path=CurrentCustomer.Phone, Mode=TwoWay}" />
        <Button Name="UpdateButton" Command="{Binding UpdateCustomerCommand}"
                Content="Update" Height="40" Grid.Column="0" Grid.Row="4"
                VerticalAlignment="Top" HorizontalAlignment="Center" Width="80"
                Grid.ColumnSpan="2" Margin="10,10,10,10" />
    </Grid>
</Window>

I know all that XAML looks intimidating, but it looks like this when it runs.

image

What’s even better is the code-behind for the MainWindow.xaml page.  I offer it here in its entirety.

image

There’s no databinding code, no code for a button click event; there’s just the single call to InitializeComponent, and that’s it.  The view is entirely in XAML, and the code provides two-way databinding to the view model, which in turn changes properties of the model.  When the model is updated, it notifies the view of the change, and the view responds accordingly.  To demonstrate, I change the customer’s name to “Kirk Evans” and click the update button, and the model is changed and reflected in the UI.

image

Still not seeing the payoff?  Let’s create a Windows Phone app.

Defining a View for Windows Phone

Now we’ll create a Windows Phone project that references our PCL. 

image

The Windows Phone 8 app contains a single page, MainPage.xaml.  First, let’s look at its code-behind.  I present the code-behind in its entirety.

image

I hope you get the point by now: there’s no button1_click and no databinding code; all of that implementation is contained completely within the view, which is the XAML in MainPage.xaml.

<phone:PhoneApplicationPage x:Class="WP8App.MainPage"
          xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
          xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
          xmlns:shell="clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone"
          xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
          xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
          xmlns:viewModels="clr-namespace:ReusableLibrary;assembly=ReusableLibrary"
          mc:Ignorable="d"
          FontFamily="{StaticResource PhoneFontFamilyNormal}"
          FontSize="{StaticResource PhoneFontSizeNormal}"
          Foreground="{StaticResource PhoneForegroundBrush}"
          SupportedOrientations="Portrait"
          Orientation="Portrait"
          shell:SystemTray.IsVisible="True">

    <phone:PhoneApplicationPage.Resources>
        <viewModels:CustomerViewModel x:Key="ViewModel" />
    </phone:PhoneApplicationPage.Resources>
    <!--LayoutRoot is the root grid where all page content is placed-->
    <Grid x:Name="LayoutRoot"
          Background="Transparent">
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>

                <!--TitlePanel contains the name of the application and page title-->
        <StackPanel x:Name="TitlePanel"
                    Grid.Row="0"
                    Margin="12,17,0,28">
            <TextBlock Text="MY PHONE APPLICATION"
                       Style="{StaticResource PhoneTextNormalStyle}"
                       Margin="12,0" />
            <TextBlock Text="Main Page"
                       Margin="9,-7,0,0"
                       Style="{StaticResource PhoneTextTitle1Style}" />
        </StackPanel>

        <!--ContentPanel - place additional content here-->
        <Grid x:Name="ContentPanel"
              DataContext="{Binding Source={StaticResource ViewModel}}"
              Grid.Row="1"
              Margin="12,0,12,0">
            <Grid.ColumnDefinitions>
                <ColumnDefinition></ColumnDefinition>
                <ColumnDefinition></ColumnDefinition>
            </Grid.ColumnDefinitions>
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto"></RowDefinition>
                <RowDefinition Height="Auto"></RowDefinition>
                <RowDefinition Height="Auto"></RowDefinition>
                <RowDefinition Height="Auto"></RowDefinition>
                <RowDefinition Height="Auto"></RowDefinition>
            </Grid.RowDefinitions>
            
            <TextBlock Name="textBlock2"
                       Height="23"
                       Margin="10,10,10,10"
                       HorizontalAlignment="Right"
                       Grid.Column="0"
                       Grid.Row="0"
                       Text="Select a Customer:"
                       VerticalAlignment="Top"
                       Width="164" />
            <ListBox Name="CustomersComboBox"
                     Margin="10,10,10,10"
                     HorizontalAlignment="Left"
                     Grid.Column="1"
                     Grid.Row="0"
                     VerticalAlignment="Top"
                     DisplayMemberPath="FullName"
                     SelectedItem="{Binding Path=CurrentCustomer, Mode=TwoWay}"
                     ItemsSource="{Binding Path=Customers}" />
            <TextBlock Name="textBlock4"
                       HorizontalAlignment="Right"
                       Grid.Column="0"
                       Grid.Row="1"
                       Text="Customer ID"
                       Margin="10,10,10,10" />
            <TextBlock Margin="10,10,10,10"
                       HorizontalAlignment="Right"
                       Grid.Column="0"
                       Grid.Row="2"
                       Name="textBlock5"
                       Text="Name" />
            <TextBlock Margin="10,10,10,10"
                       HorizontalAlignment="Right"
                       Grid.Column="0"
                       Grid.Row="3"
                       Name="textBlock9"
                       Text="Phone" />
            <TextBlock HorizontalAlignment="Left"
                       Grid.Column="1"
                       Grid.Row="1"
                       Name="CustomerIDTextBlock"
                       Text="{Binding ElementName=CustomersComboBox, Path=SelectedItem.CustomerID}"
                       Margin="10,10,10,10" />
            <TextBox HorizontalAlignment="Left"
                     Grid.Column="1"
                     Grid.Row="2"
                     Width="219"
                     Text="{Binding Path=CurrentCustomer.FullName, Mode=TwoWay}"
                      />
            <TextBox HorizontalAlignment="Left"
                     Grid.Column="1"
                     Grid.Row="3"
                     Width="219"
                     Text="{Binding Path=CurrentCustomer.Phone, Mode=TwoWay}"
                      />
            <Button Command="{Binding UpdateCustomerCommand}"
                    Content="Update"
                    Height="95"
                    HorizontalAlignment="Right"
                    Grid.Column="0"
                    Grid.Row="4"
                    Name="UpdateButton"
                    VerticalAlignment="Top"
                    Width="180"
                    Grid.ColumnSpan="2"
                    Margin="0,0,123,-72" />
        </Grid>
    </Grid>
</phone:PhoneApplicationPage>

Again, it’s kind of hard to picture what this looks like just by reading the XAML, so we run the application to see that we’ve created a completely different view of the data while still leveraging the exact same PCL code.

image

OK, hopefully by now it’s starting to make sense.  We simply define different views per platform, providing optimal code reuse.

Defining a View for a Windows 8 Store App

Now that we’ve seen it’s completely possible to just define a view for each platform (so far, WPF and Windows Phone), let’s create one more just to show off.  This time, we’ll create a Windows 8 app.  Just like before, we reference the PCL.

image

We now inspect the MainPage.xaml and see that we can, again, create a view that is platform-specific while referencing the view model, model, and repository classes from the PCL.

<Page x:Class="WindowsStoreApp.MainPage"
      xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
      xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
      xmlns:local="using:WindowsStoreApp"
      xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
      xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
      xmlns:mylib="using:ReusableLibrary"
      mc:Ignorable="d"
      FontSize="30">
    <StackPanel Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <TextBlock x:Name="pageTitle"
                   Text="My Store Application"
                   Style="{StaticResource HeaderTextBlockStyle}"
                   IsHitTestVisible="false"
                   TextWrapping="NoWrap"
                   VerticalAlignment="Bottom"
                   Margin="0,0,30,40" />
        <Grid>
            <Grid.DataContext>
                <mylib:CustomerViewModel />
            </Grid.DataContext>
            <Grid.ColumnDefinitions>
                <ColumnDefinition></ColumnDefinition>
                <ColumnDefinition></ColumnDefinition>
            </Grid.ColumnDefinitions>
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto"></RowDefinition>
                <RowDefinition Height="Auto"></RowDefinition>
                <RowDefinition Height="Auto"></RowDefinition>
                <RowDefinition Height="Auto"></RowDefinition>
                <RowDefinition Height="Auto"></RowDefinition>
            </Grid.RowDefinitions>

            <TextBlock Name="textBlock2"
                       HorizontalAlignment="Right"
                       Grid.Column="0"
                       Grid.Row="0"
                       Margin="10,10,10,10"
                       Text="Select a Customer:" />
            <ComboBox Name="CustomersComboBox"
                      HorizontalAlignment="Left"
                      Grid.Column="1"
                      Grid.Row="0"
                      VerticalAlignment="Top"
                      Width="219"
                      Margin="10,10,10,10"
                      DisplayMemberPath="FullName"
                      SelectedItem="{Binding Path=CurrentCustomer, Mode=TwoWay}"
                      ItemsSource="{Binding Path=Customers}" />
            <TextBlock Name="textBlock4"
                       HorizontalAlignment="Right"
                       Grid.Column="0"
                       Grid.Row="1"
                       Margin="10,10,10,10"
                       Text="Customer ID" />
            <TextBlock Name="textBlock5"
                       HorizontalAlignment="Right"
                       Grid.Column="0"
                       Grid.Row="2"
                       Margin="10,10,10,10"
                       Text="Name" />
            <TextBlock Name="textBlock9"
                       HorizontalAlignment="Right"
                       Grid.Column="0"
                       Grid.Row="3"
                       Margin="10,10,10,10"
                       Text="Phone" />
            <TextBlock Name="CustomerIDTextBlock"
                       HorizontalAlignment="Left"
                       Grid.Column="1"
                       Grid.Row="1"
                       Margin="10,10,10,10"
                       Text="{Binding ElementName=CustomersComboBox, Path=SelectedItem.CustomerID}" />
            <TextBox HorizontalAlignment="Left"
                     Grid.Column="1"
                     Grid.Row="2"
                     Width="219"
                     Margin="10,10,10,10"
                     Text="{Binding Path=CurrentCustomer.FullName, Mode=TwoWay}" />
            <TextBox HorizontalAlignment="Left"
                     Grid.Column="1"
                     Grid.Row="3"
                     Width="219"
                     Margin="10,10,10,10"
                     Text="{Binding Path=CurrentCustomer.Phone, Mode=TwoWay}" />
            <Button Name="UpdateButton"
                    Command="{Binding UpdateCustomerCommand}"
                    Content="Update"
                    Height="95"
                    Grid.Column="1"
                    Grid.Row="4"
                    VerticalAlignment="Top"
                    Width="180"
                    Grid.ColumnSpan="2"
                    Margin="10,10,10,10" />
        </Grid>
    </StackPanel>
</Page>

If we inspect the code-behind for this view, we’ll see that there is zero additional code needed.  Again, I present the code in all its bareness and glory.

image

Yet when we run the application, we provide a completely different view over the same exact view model, which then provides the interactions with the model.

image

By simply leveraging bindings to the view model, we can provide the platform-specific implementation (the view) that interacts with the platform-independent components in the PCL.

The next post in this series will take a closer look at the newly announced Universal Apps and see how that plays a part in the whole cross-platform story.  Stay tuned for more.

For More Information

Sharing Code between Windows Store and Windows Phone App (PCL + MVVM + OData)

Using Portable Class Library with Model-View-View Model

Calling O365 APIs from your Web API on behalf of a user


This post will show how to create a custom Web API that calls the Office 365 APIs on behalf of a user.

Background

How about that title?!?  That’s pretty geeky.

I’ve been working with a customer who is interested in the new Office 365 APIs that were announced at SharePoint Conference.  They are building multiple client applications that will consume their custom Web API endpoints.  They did not want to reference the O365 APIs and perform all of the work from the client applications, but rather centralize that logic into a service that they control. 

clip_image002

In this scenario, the user opens an application and logs in using their Azure Active Directory credentials.  Once the user is authenticated, we obtain an access token used to call the custom Web API endpoint.  The Web API endpoint, in turn, obtains an access token for O365 using the user’s token.  This means when the O365 API is called, it is called using an app+user context on behalf of the currently logged in user.  The current user must have permissions to write to the list, as does the app, and if one of them is missing permission then the operation fails.

Yeah, even after editing that paragraph several times over it’s hard to understand.  Let’s walk through an example.

The code I am going to show is based on a sample posted by the Azure Active Directory team, Web API OnBehalfOf DotNet, available on GitHub.  I am not going to provide a code download for this post; I strongly urge you to go download their sample and become familiar with it.  Instead of using the O365 API, that sample shows how to use the Graph API; this post details how to leverage some of the code in that example to call the O365 API.  There are many things I omitted that you should spend time understanding in that sample.

There’s also a great post by Johan Danforth, Secure ASP.NET Web API with Windows Azure AD, that illustrates some of the same concepts that I show here.  His example is well formatted and might help explain some of the settings that I am going to gloss over here.  If you need help understanding the various parts (client ID, client secret, callback URL, app ID URI for the web API) that is a great resource as well.

Manage O365 Tenant from Azure Management Portal

The first part of this exercise requires that you manage your Office 365 tenant from the Azure Management Portal.  A blog post with detailed instructions and an accompanying video are available at Using a existing Windows Azure AD Tenant with Windows Azure.  I have an existing Office 365 tenant (https://kirke.sharepoint.com) that I log into with the domain “kirke.onmicrosoft.com”, and I want to add that account into Azure AD.  To do this was as simple as logging into my Azure subscription using a Live ID and adding an existing directory in the Azure Management Portal.

image

Next, I log in using my Office 365 tenant administrator, and then the directory shows up in the Azure Management portal.

image

I can now open the directory and see all of my users from my O365 tenant.

image 

Make sure you start In Private browsing before doing this, and it should work smoothly.

Create the Web API Project

Create a new ASP.NET Application in Visual Studio 2013 and then choose Web API. 

image

Click the button to Change Authentication.  We’ll use organizational accounts in Azure AD to authenticate, using the same domain that our O365 tenancy was in.

image

Leave the default App ID URI value, which is your tenant + “/ListService”.  When you click OK, you are prompted to sign in as a Global Administrator for the tenant.  This is because Visual Studio is going to do several nice things for you, including registering the app and setting up the configuration settings for you.

image

Sign in, click OK, then click OK again, and the project is created.  If you inspect the web.config, you’ll see that several settings were added for you.

image

Now go look at your Azure Active Directory tenant and click on the Applications tab.  Notice that an application was added for you.

image

Click the ListService application to see its settings, then go to the Configure tab.  In the section “keys”, create a new key:

image

Click Save, and the key is then displayed for you, and you can now copy its value.

image

Go back to web.config and add a new key “ida:AppKey” with the value of the key you just created.  Add a key “ida:Resource” which points to your SharePoint Online tenant (note: this need not be the site collection you are accessing).  Finally, add a key “ida:SiteURL” that points to your site collection.

image
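Putting it together, the relevant appSettings entries look roughly like the following.  The first three keys are generated by Visual Studio; the last three are the ones added manually in this step.  The placeholder values in parentheses must be replaced with your own, and the tenant and site URLs shown are the ones from my environment:

```xml
<appSettings>
  <!-- Generated by Visual Studio when the project was created -->
  <add key="ida:ClientID" value="(your web API client ID)" />
  <add key="ida:AADInstance" value="https://login.windows.net/{0}" />
  <add key="ida:Tenant" value="kirke.onmicrosoft.com" />
  <!-- Added manually -->
  <add key="ida:AppKey" value="(the key copied from the Azure portal)" />
  <add key="ida:Resource" value="https://kirke.sharepoint.com" />
  <add key="ida:SiteURL" value="https://kirke.sharepoint.com/sites/dev" />
</appSettings>
```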

Give the Web API Permission

The Web API has been registered with Azure AD, we now need to give it some permissions.  In Azure AD, select your Web API application and choose Manage Manifest / Download Manifest.

image

We are prompted to download and save a .JSON file. 

image

Save it, and then open it with Visual Studio 2013.  If you have the Visual Studio 2013 Update 2 RC installed, you will be delighted to see color-coding while editing the JSON.  There’s a property called appPermissions that we need to edit.

image

We’ll replace this with the following:

"appPermissions": [
    {
        "claimValue": "user_impersonation",
        "description": "Allow full access to the To Do List service on behalf of the signed-in user",
        "directAccessGrantTypes": [],
        "displayName": "Have full access to the To Do List service",
        "impersonationAccessGrantTypes": [
            {
                "impersonated": "User",
                "impersonator": "Application"
            }
        ],
        "isDisabled": false,
        "origin": "Application",
        "permissionId": "b69ee3c9-c40d-4f2a-ac80-961cd1534e40",
        "resourceScopeType": "Personal",
        "userConsentDescription": "Allow full access to the To Do service on your behalf",
        "userConsentDisplayName": "Have full access to the To Do service"
    }
],

The result looks like this:

image

Save it, then upload it to Azure by going to Manage Manifest and Upload Manifest.

The last thing we need to do is to grant our Web API application permission to call the O365 APIs.  In Azure AD, go to the Configure tab for your application.  We will grant the Web API permission to “Read items in all site collections” and “Create or delete items and lists in all site collections”.

image 

Check for User Impersonation

In the previous step, we added the appPermission claim “user_impersonation”.  We will check for the user_impersonation scope claim, making sure it was registered for the Web API application in Azure AD.  We need to perform this check every time an action is executed for our controller, so we create a filter.  It doesn’t matter where you put the class, but I prefer to put this in a new folder named Filters in my Web API project.

using System.Net;
using System.Net.Http;
using System.Security.Claims;
using System.Web.Http;
using System.Web.Http.Filters;

namespace ListService.Filters
{
    public class ImpersonationScopeFilterAttribute : ActionFilterAttribute
    {
        // The Scope claim tells you what permissions the client application has in the service.
        // In this case we look for a scope value of user_impersonation, or full access to the service as the user.
        public override void OnActionExecuting(System.Web.Http.Controllers.HttpActionContext actionContext)
        {
            if (ClaimsPrincipal.Current.FindFirst("http://schemas.microsoft.com/identity/claims/scope").Value != "user_impersonation")
            {
                HttpResponseMessage message = actionContext.Request.CreateErrorResponse(
                    HttpStatusCode.Unauthorized,
                    "The Scope claim does not contain 'user_impersonation' or scope claim not found");

                // Short-circuit the action; without this the error response is never returned
                throw new HttpResponseException(message);
            }
        }
    }
}

To use the filter, we apply it to the top of our controller class.  Yes, I am going to be lazy and just use the ValuesController class that Visual Studio generates.

image

Using ADAL

We’re now ready to start doing the Azure Active Directory coding.  Right-click the Web API project and choose Manage NuGet Packages.  Search for “adal” to find the Active Directory Authentication Library, and use the drop-down that says “Include Prerelease” to find version 2.6.1-alpha (prerelease).

image

Install and accept the EULA.  Next, add a class “SharePointOnlineRepository” to the Web API project.  We borrow some code from the GitHub project mentioned previously to obtain the user token.  This class will use the SharePoint REST API, passing an access token in the header.  Here I give you two different ways to accomplish this: using XML (as in the GetAnnouncements method) and JSON (as in the AddAnnouncement method). 

using Microsoft.IdentityModel.Clients.ActiveDirectory;
using System;
using System.Globalization;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading;
using System.Threading.Tasks;
using System.Web;
using System.Xml.Linq;

namespace ListService.Models
{
    public class SharePointOnlineRepository
    {
        /// <summary>
        /// Get the access token
        /// </summary>
        /// <param name="clientId">Client ID of the Web API app</param>
        /// <param name="appKey">Client secret for the Web API app</param>
        /// <param name="aadInstance">The login URL for AAD</param>
        /// <param name="tenant">Your tenant (eg kirke.onmicrosoft.com)</param>
        /// <param name="resource">The resource being accessed
        ///   (eg., https://kirke.sharepoint.com)
        /// </param>
        /// <returns>string containing the access token</returns>
        public static async Task<string> GetAccessToken(
            string clientId,
            string appKey,
            string aadInstance,
            string tenant,
            string resource)
        {
            string accessToken = null;
            AuthenticationResult result = null;

            ClientCredential clientCred = new ClientCredential(clientId, appKey);
            string authHeader = HttpContext.Current.Request.Headers["Authorization"];
            string userAccessToken = authHeader.Substring(authHeader.LastIndexOf(' ')).Trim();
            UserAssertion userAssertion = new UserAssertion(userAccessToken);
            string authority = String.Format(CultureInfo.InvariantCulture, aadInstance, tenant);

            AuthenticationContext authContext = new AuthenticationContext(authority);

            result = await authContext.AcquireTokenAsync(resource, userAssertion, clientCred);
            accessToken = result.AccessToken;

            return accessToken;
        }

        /// <summary>
        /// Gets list items from a list named Announcements
        /// </summary>
        /// <param name="siteURL">The URL of the SharePoint site</param>
        /// <param name="accessToken">The access token</param>
        /// <returns>string containing response from SharePoint</returns>
        public static async Task<string> GetAnnouncements(
            string siteURL,
            string accessToken)
        {
            // Call the O365 API and retrieve the Announcements list items.
            string requestUrl = siteURL + "/_api/Web/Lists/GetByTitle('Announcements')/Items?$select=Title";

            HttpClient client = new HttpClient();
            HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Get, requestUrl);
            request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/atom+xml"));
            request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
            HttpResponseMessage response = await client.SendAsync(request);
            if (response.IsSuccessStatusCode)
            {
                string responseString = await response.Content.ReadAsStringAsync();
                return responseString;
            }

            // An unexpected error occurred calling the O365 API.  Return a null value.
            return (null);
        }

        /// <summary>
        /// Gets the form digest value, required for modifying
        /// data in SharePoint.  This is not needed for bearer authentication and
        /// can be safely removed in this scenario, but is left here for posterity.
        /// </summary>
        /// <param name="siteURL">The URL of the SharePoint site</param>
        /// <param name="accessToken">The access token</param>
        /// <returns>string containing the form digest</returns>
        private static async Task<string> GetFormDigest(
            string siteURL,
            string accessToken)
        {
            // Get the form digest value in order to write data
            HttpClient client = new HttpClient();
            HttpRequestMessage request = new HttpRequestMessage(
                HttpMethod.Post, siteURL + "/_api/contextinfo");
            request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/xml"));
            request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
            HttpResponseMessage response = await client.SendAsync(request);
            string responseString = await response.Content.ReadAsStringAsync();

            XNamespace d = "http://schemas.microsoft.com/ado/2007/08/dataservices";
            var root = XElement.Parse(responseString);
            var formDigestValue = root.Element(d + "FormDigestValue").Value;
            return formDigestValue;
        }

        /// <summary>
        /// Adds an announcement to a SharePoint list
        /// named Announcements
        /// </summary>
        /// <param name="title">The title of the announcement to add</param>
        /// <param name="siteURL">The URL of the SharePoint site</param>
        /// <param name="accessToken">The access token</param>
        /// <returns></returns>
        public static async Task<string> AddAnnouncement(
            string title,
            string siteURL,
            string accessToken)
        {
            // Call the O365 API to add an item to the Announcements list.
            string requestUrl = siteURL + "/_api/Web/Lists/GetByTitle('Announcements')/Items";

            title = title.Replace('\'', ' ');

            // Get the form digest, required for SharePoint list item modifications
            var formDigest = await GetFormDigest(siteURL, accessToken);

            HttpClient client = new HttpClient();
            client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");

            HttpRequestMessage request =
                new HttpRequestMessage(HttpMethod.Post, requestUrl);
            request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

            // Note that the form digest is not needed for bearer authentication.
            // This can safely be removed, but is left here for posterity.
            request.Headers.Add("X-RequestDigest", formDigest);

            var requestContent = new StringContent(
                "{ '__metadata': { 'type': 'SP.Data.AnnouncementsListItem' }, 'Title': '" + title + "'}");
            requestContent.Headers.ContentType =
                System.Net.Http.Headers.MediaTypeHeaderValue.Parse("application/json;odata=verbose");

            request.Content = requestContent;

            HttpResponseMessage response = await client.SendAsync(request);

            if (response.IsSuccessStatusCode)
            {
                string responseString = await response.Content.ReadAsStringAsync();
                return responseString;
            }

            // An unexpected error occurred calling the O365 API.  Return a null value.
            return (null);
        }
    }
}

We will accept an HTTP POST from the caller to insert new announcements into the list.  The easiest way to model this is to create a class with the values that will be posted.  I add a new class “Announcement” to the Models folder in my Web API project.

namespace ListService.Models
{
    public class Announcement
    {
        public string Title { get; set; }
    }
}

Now that we have the repository class that interacts with SharePoint, and we have our new model class, we update the ValuesController class.  The Get action will simply return the entire string of data back from SharePoint.  The Post action will read data from the form data, add an announcement to SharePoint, and then return the SPList data for the item.  I leave it as an exercise to the reader to do something more meaningful with the data such as deserializing it into a model object.

using ListService.Filters;
using System.Collections.Generic;
using System.Configuration;
using System.Threading.Tasks;
using System.Web.Http;

namespace ListService.Controllers
{
    [Authorize]
    [ImpersonationScopeFilter]
    public class ValuesController : ApiController
    {
        // GET api/values
        public async Task<string> Get()
        {
            string clientID = ConfigurationManager.AppSettings["ida:ClientID"];
            string appKey = ConfigurationManager.AppSettings["ida:AppKey"];
            string aadInstance = ConfigurationManager.AppSettings["ida:AADInstance"];
            string tenant = ConfigurationManager.AppSettings["ida:Tenant"];
            string resource = "https://kirke.sharepoint.com";

            string accessToken = await Models.SharePointOnlineRepository.GetAccessToken(
                clientID,
                appKey,
                aadInstance,
                tenant,
                resource);

            var ret = await Models.SharePointOnlineRepository.GetAnnouncements(
                "https://kirke.sharepoint.com/sites/dev",
                accessToken);
            return ret;
        }

        // POST api/values
        public async Task<string> Post(Models.Announcement announcement)
        {
            string clientID = ConfigurationManager.AppSettings["ida:ClientID"];
            string appKey = ConfigurationManager.AppSettings["ida:AppKey"];
            string aadInstance = ConfigurationManager.AppSettings["ida:AADInstance"];
            string tenant = ConfigurationManager.AppSettings["ida:Tenant"];
            string resource = "https://kirke.sharepoint.com";

            string accessToken = await Models.SharePointOnlineRepository.GetAccessToken(
                clientID,
                appKey,
                aadInstance,
                tenant,
                resource);

            var ret = await Models.SharePointOnlineRepository.AddAnnouncement(
                announcement.Title,
                "https://kirke.sharepoint.com/sites/dev",
                accessToken);
            return ret;
        }
    }
}
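As a starting point for the deserialization exercise mentioned above, here is a hedged sketch.  It assumes the odata=verbose JSON shape that SharePoint returns (items under a d.results array for collections, or fields directly on d for a single item) and uses Json.NET, which is not referenced by the project above; the AnnouncementParser name is my own invention:

```csharp
using System.Collections.Generic;
using Newtonsoft.Json.Linq;

// Mirrors the Announcement model class defined earlier in the post.
public class Announcement
{
    public string Title { get; set; }
}

public static class AnnouncementParser
{
    // Parses an odata=verbose response string into model objects.
    public static IList<Announcement> Parse(string responseString)
    {
        var announcements = new List<Announcement>();
        JObject root = JObject.Parse(responseString);
        JToken d = root["d"];

        // A collection response (GET) nests items under d.results;
        // a single-item response (POST) puts the fields directly on d.
        JToken results = d["results"] ?? new JArray(d);

        foreach (JToken item in results)
        {
            announcements.Add(new Announcement { Title = (string)item["Title"] });
        }
        return announcements;
    }
}
```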

 

Register the Client Application

In Azure AD, go to your directory and add a new application.  Choose “Add an application my organization is developing”. 

image

Next, choose “Native Client Application” and give it a name.

image

In the next screen, we provide a URL.  Any URL is fine here.  If we were building a Windows 8 application, then we’d provide the ms-app URL obtained from the store.  To save some digital ink, this is why we chose a WPF application: any valid URL will do here (it does not have to actually resolve to anything).  I used “https://OnBehalfOf/Client”.

image

In the Update Your Code section, you see that a client ID was generated for you.  Copy that, you’ll need it later.

image

In the Configure Access to Web APIs in Other Applications section, click Configure It Now.  Under Permission to Other Applications, find the ListService that we modified the manifest for earlier in this process.  When we modified the manifest, we modified the appPermissions section and provided a new permission.  We now use this to grant the client application permission to the Web API.  Choose ListService, and choose “Have full access to the To Do List service”.

image

Notice that we are configuring the permission for the client application here, and the only permission it needs is to access our Web API.  We do not have to grant the client application access to O365 because it never talks directly to O365… it only communicates with our custom Web API.

And a reminder to self… you can change the text of the permission by downloading the manifest, editing the appPermission section with the proper text, and uploading it again.  Now make sure to click Save!

image

Create the Client Application

Create a new WPF application.  This is going to be the easiest way to get started as it is the most forgiving in terms of callback URL.  If we used a Windows 8 app, we’d need to register it with the store and obtain the app ID (the one that starts with ms-app).  For now, a WPF application is fine.

image

Add the Active Directory Authentication Library pre-release version 2.6.1-alpha NuGet package.

image

The UI for the application is simple, two buttons and a text box.

<Window x:Class="Client.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow"
        Height="350"
        Width="525">
    <Grid>
        <Button Content="Add Announcement"
                HorizontalAlignment="Left"
                Margin="330,80,0,0"
                VerticalAlignment="Top"
                Width="142"
                Click="Button_Click" />
        <Button Content="Get Announcements"
                HorizontalAlignment="Left"
                Margin="190,147,0,0"
                VerticalAlignment="Top"
                Width="136"
                Click="Button_Click_1" />
        <TextBox HorizontalAlignment="Left"
                 Height="20"
                 Margin="93,81,0,0"
                 TextWrapping="Wrap"
                 Text="TextBox"
                 VerticalAlignment="Top"
                 Width="215"
                 x:Name="announcementTitle" />
    </Grid>
</Window>

The code behind the XAML is what’s interesting.  I am using the HttpClient classes to call our custom Web API.  Note how we add the form data to the request using a Dictionary.  I could have moved the settings to app.config instead of hard-coding everything in the GetAuthHeader method, I leave that as an exercise to the reader as well.  The GitHub sample (link at the For More Information section of this post) does a better job of putting things in app.config, I’d suggest using that as a baseline.

using Microsoft.IdentityModel.Clients.ActiveDirectory;
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Windows;

namespace Client
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
        }

        private async void Button_Click(object sender, RoutedEventArgs e)
        {
            // POST to the Web API to create a new announcement
            HttpClient client = new HttpClient();

            HttpRequestMessage request = new HttpRequestMessage(
                HttpMethod.Post, "https://localhost:44307/api/Values");

            // Create the form POST data
            var postData = new Dictionary<string, string>
            {
                { "Title", announcementTitle.Text }
            };
            request.Content = new FormUrlEncodedContent(postData);

            // Add the authorization Bearer header
            string authHeader = GetAuthHeaderValue();
            request.Headers.TryAddWithoutValidation("Authorization", authHeader);
            HttpResponseMessage response = await client.SendAsync(request);
            string responseString = await response.Content.ReadAsStringAsync();

            MessageBox.Show(responseString);
        }

        private async void Button_Click_1(object sender, RoutedEventArgs e)
        {
            // Get the list of announcements
            HttpClient client = new HttpClient();
            HttpRequestMessage request = new HttpRequestMessage(
                HttpMethod.Get, "https://localhost:44307/api/Values");
            string authHeader = GetAuthHeaderValue();
            request.Headers.TryAddWithoutValidation("Authorization", authHeader);
            HttpResponseMessage response = await client.SendAsync(request);
            string responseString = await response.Content.ReadAsStringAsync();
            MessageBox.Show(responseString);
        }

        private string GetAuthHeaderValue()
        {
            string aadInstance = "https://login.windows.net/{0}";
            string tenant = "kirke.onmicrosoft.com";

            AuthenticationContext ac = new AuthenticationContext(string.Format(aadInstance, tenant));
            string webAPIAppID = "https://kirke.onmicrosoft.com/ListService";
            string clientID = "67070e10-9540-444f-883b-d090d7b7be18";
            string callbackUrl = "https://OnBehalfOf/Client";

            AuthenticationResult ar =
                ac.AcquireToken(webAPIAppID,
                    clientID,
                    new Uri(callbackUrl));

            // Call Web API
            string authHeader = ar.CreateAuthorizationHeader();
            return authHeader;
        }
    }
}

Cross Your Fingers

This is a lot of configuration, and a lot of code.  There are a lot of opportunities for errors here, so we’re going to cross our fingers, set the start-up projects, and run the application.

image

We run the application and click the button.  A login screen shows itself (this is a good thing!)

image

I wait for a little bit and… darn it, I see this screen.

image

That’s not right, I should see the Announcements list data.  I fire up Fiddler and watch the traffic.  Looking through the requests, I notice that one of them does not have an access token value.

image

I double-check the code, yep… we’re providing it.  What did I miss?  I debug through the code and see that I do not get an access token.  Hmm… maybe permissions?  Let’s go double-check the Web API permissions in Azure AD. 

image

Yep… that’s the problem.  Earlier in the post I told you to add two permissions for Office 365 SharePoint Online, but apparently I forgot to hit Save.  Do this step again, and this time, hit Save!

image

Run the application again, this time we see a different response… we get the items from our Announcements list!

image

We then try our operation to write an announcement to the list, and it works.  The UI is horrible, but serves my purpose for now.

image

We then check the Announcements list in SharePoint and see the new announcement.  Notice that the item was added by the Web API application on behalf of the current user!

image

Think about how incredibly cool that is.  We just used constrained delegation to a downstream resource, all using OAuth.
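To see what "on behalf of" means at the wire level: the Web API redeems the caller's access token for a new token to the downstream resource using the OAuth2 JWT-bearer grant. Here is a hedged sketch of the form fields that token request carries — the values are placeholders I made up for illustration, not taken from this post, and ADAL builds this request for you:

```csharp
using System;
using System.Collections.Generic;

class OnBehalfOfSketch
{
    static void Main()
    {
        // Illustrative only: the form POST that the on-behalf-of flow sends
        // to the Azure AD token endpoint. All values below are placeholders.
        string incomingAccessToken = "<access token the client presented to the Web API>";
        var oboRequest = new Dictionary<string, string>
        {
            { "grant_type", "urn:ietf:params:oauth:grant-type:jwt-bearer" },
            { "assertion", incomingAccessToken },
            { "client_id", "<Web API client ID>" },
            { "client_secret", "<Web API key from the Azure portal>" },
            { "resource", "https://kirke.sharepoint.com/" },
            { "requested_token_use", "on_behalf_of" }
        };

        Console.WriteLine(oboRequest["grant_type"]);
    }
}
```

The response is a new access token scoped to SharePoint Online, issued for the original user's identity rather than for the Web API's own identity.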

image

For More Information

Using an existing Windows Azure AD Tenant with Windows Azure

Web API OnBehalfOf DotNet 

Secure ASP.NET Web API with Windows Azure AD

Build Android Apps With Xamarin Using Portable Class Libraries


This post will show how to use Xamarin.Android and Visual Studio 2013 to build an app for Android reusing an existing portable class library.

Background

In a previous post, Using Portable Class Libraries to Reuse Models and ViewModels, I showed an example of creating a portable class library (PCL) and how to use it with several types of clients (WPF, Windows 8, and Windows Phone).  This post is going to show how to reuse the same PCL library to create an app for Android with Visual Studio 2013 and Xamarin.Android. 

When we created the PCL library named “ReusableLibrary” in the previous post, we configured its targets to be .NET Framework 4.5, Windows 8, Windows Phone Silverlight 8, Xamarin.Android, and Xamarin.iOS.  Note that you only get Xamarin.Android and Xamarin.iOS if you have the Xamarin tools installed.

image

Since our PCL library targets Xamarin.Android, we can show how to reference the library and reuse it within the context of an Android app.

Create the Xamarin Project

Once you have Xamarin.Android installed and the emulators configured, you can use Visual Studio 2013 to create a new Android Application. 

image

I added this project to the existing solution that already contains my PCL library named “ReusableLibrary” as well as the implementations for Windows Phone, Windows 8, and WPF.  Once the project is created, the Android project contains a few files that we will modify (Main.axml, strings.xml, and Activity1.cs).

image

Create the UI

When you create a new project, your form will contain a button that says “Hello World, Click Me”.  Open strings.xml and change the resource name from “hello” to “buttonText”, and change the resource value to “Update Customer”.

image
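The edit above amounts to a one-line change in Resources/values/strings.xml. A sketch of the result (the app_name value is whatever your project template generated; only the buttonText entry matters here):

```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
  <string name="buttonText">Update Customer</string>
  <string name="app_name">AndroidApplication1</string>
</resources>
```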

Now go to the Main.axml file and modify the layout.  Our UI will be a simple list, 3 text boxes, and a button. 

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
  <ListView
      android:minWidth="25px"
      android:minHeight="25px"
      android:layout_width="match_parent"
      android:layout_height="wrap_content"
      android:id="@+id/listView1" />
  <EditText
      android:inputType="number"
      android:layout_width="match_parent"
      android:layout_height="wrap_content"
      android:id="@+id/customerID" />
  <EditText
      android:inputType="textPersonName"
      android:layout_width="match_parent"
      android:layout_height="wrap_content"
      android:id="@+id/fullName" />
  <EditText
      android:inputType="phone"
      android:layout_width="match_parent"
      android:layout_height="wrap_content"
      android:id="@+id/phone" />
  <Button
      android:id="@+id/MyButton"
      android:layout_width="fill_parent"
      android:layout_height="wrap_content"
      android:text="@string/buttonText" />
</LinearLayout>

Notice that our button uses the resource string ID “buttonText” that we modified in the previous step.

Implement the Code

The last step is to implement the app itself, referencing our PCL library named “ReusableLibrary” (detailed in the post Using Portable Class Libraries to Reuse Models and ViewModels).  We add a reference to the ReusableLibrary project.

image

We now edit the Activity1.cs file.  Our app screen contains a list view, and we want to receive an event when an item in it is clicked.  The easiest way to do this is to add the ListView.IOnItemClickListener interface to our Activity class.

image

Add a field to contain the CustomerViewModel. 

image

This is not like the nice XAML model that has data binding baked in, so we are going to have to write a little code to reuse the Model and ViewModel from our PCL library.  Our ViewModel has a property to obtain a list of customers.  We need to get a string array of customer names, because we are going to use the ArrayAdapter&lt;string&gt; to display the items (you’ll see this in a subsequent step).  In the OnCreate method, we add the following code.

//Copy the items into a string array so we can
//use ArrayAdapter<string>
string[] customers = new string[viewModel.Customers.Count];
int index = 0;
foreach (Customer c in viewModel.Customers)
{
    customers[index] = c.FullName;
    index++;
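As a side note, the copy loop above collapses to a single LINQ expression. A self-contained sketch (the stand-in Customer class here is only so the snippet compiles on its own; the real one lives in ReusableLibrary):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class LinqCopySketch
{
    // Stand-in for the Customer model from ReusableLibrary
    class Customer { public string FullName { get; set; } }

    static void Main()
    {
        var viewModelCustomers = new List<Customer>
        {
            new Customer { FullName = "Jane" },
            new Customer { FullName = "Kirk" }
        };

        // Equivalent to the foreach/index loop in the post
        string[] customers = viewModelCustomers.Select(c => c.FullName).ToArray();

        Console.WriteLine(string.Join(",", customers));
    }
}
```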

If we want to reference controls in the UI, we do that by using the FindViewById method, providing the ID of the control from the Main.axml file that we edited previously.  Using this method, we can set the Adapter property of the ListView control to set its data source.  We also tell Android to use the built-in SimpleListItem1 layout, relieving us of having to write our own view.  Finally, we wire up the event handlers for when an item is clicked and when the button is clicked.

//Set the data source for the listView
ListView listView = FindViewById<ListView>(Resource.Id.listView1);            
listView.Adapter = new ArrayAdapter<string>(
    this, 
    Android.Resource.Layout.SimpleListItem1, 
    customers);

//When an item is selected, call our method
listView.OnItemClickListener = this;
            
//Wire up the button click
Button button = FindViewById<Button>(Resource.Id.MyButton);
button.Click += OnButtonClick; 

Let’s implement the listView.OnItemClickListener functionality, which allows us to select an item from the list.  Just as before, we use the FindViewById method to find a control by its ID in the Main.axml file.  We then find the current item by its position.  Then I do something a little screwy… I look through all of the Customer objects and find the one whose name matches.  This is a hack due to the fact that I am using the ArrayAdapter<string>.  I could have written a custom adapter and just referenced the item by its ID, but I wanted to get something working quickly.  I leave it as an exercise to the reader to replace this hack with your own adapter.  Once we set the current customer, we can then populate the other controls.

public void OnItemClick(AdapterView parent, View view, int position, long id)
{
    ListView listView = FindViewById<ListView>(Resource.Id.listView1);
    object o = listView.Adapter.GetItem(position);

    Customer c = viewModel.Customers.Find(x => x.FullName.Equals(o.ToString()));
    viewModel.CurrentCustomer = c;

    EditText customerID = FindViewById<EditText>(Resource.Id.customerID);
    customerID.Text = c.CustomerID.ToString();

    EditText fullName = FindViewById<EditText>(Resource.Id.fullName);
    fullName.Text = c.FullName;

    EditText phone = FindViewById<EditText>(Resource.Id.phone);
    phone.Text = c.Phone;
}
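For readers who want to take up that exercise, a hypothetical CustomerAdapter could replace the name-matching hack so that GetItem returns the Customer directly.  This is an untested sketch against the Xamarin.Android BaseAdapter&lt;T&gt; API, not code from this post:

```csharp
// Hypothetical custom adapter; untested sketch.
public class CustomerAdapter : BaseAdapter<Customer>
{
    private readonly Activity context;
    private readonly List<Customer> customers;

    public CustomerAdapter(Activity context, List<Customer> customers)
    {
        this.context = context;
        this.customers = customers;
    }

    public override Customer this[int position]
    {
        get { return customers[position]; }
    }

    public override int Count
    {
        get { return customers.Count; }
    }

    public override long GetItemId(int position)
    {
        return customers[position].CustomerID;
    }

    public override View GetView(int position, View convertView, ViewGroup parent)
    {
        //Reuse the recycled row view if Android hands us one
        View view = convertView ?? context.LayoutInflater.Inflate(
            Android.Resource.Layout.SimpleListItem1, parent, false);
        view.FindViewById<TextView>(Android.Resource.Id.Text1).Text =
            customers[position].FullName;
        return view;
    }
}
```

With an adapter like this, OnItemClick could cast listView.Adapter.GetItem(position) straight to Customer instead of searching by name.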

Let’s implement the button.Click event handler, which we use to update the data.  Just as we did previously, we find the control using the FindViewById method using its name from the Main.axml file.  We then modify the current customer’s phone and execute our RelayCommand from our PCL library.

public void OnButtonClick(object sender, EventArgs e)
{
    EditText phone = FindViewById<EditText>(Resource.Id.phone);
    viewModel.CurrentCustomer.Phone = phone.Text;
    viewModel.UpdateCustomerCommand.Execute(null); 
}
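The RelayCommand executed above comes from the PCL described in the earlier post.  If you don’t have that post handy, a typical minimal RelayCommand looks like the sketch below — this is the common MVVM pattern, not necessarily the exact class in ReusableLibrary:

```csharp
using System;
using System.Windows.Input;

// Common minimal RelayCommand pattern (a sketch, not the post's exact code)
public class RelayCommand : ICommand
{
    private readonly Action<object> execute;
    private readonly Predicate<object> canExecute;

    public RelayCommand(Action<object> execute, Predicate<object> canExecute = null)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        this.execute = execute;
        this.canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return canExecute == null || canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        execute(parameter);
    }
}

class Demo
{
    static void Main()
    {
        int calls = 0;
        ICommand update = new RelayCommand(_ => calls++);
        update.Execute(null);
        Console.WriteLine(calls);
    }
}
```

Because ICommand lives in a portable profile, a class like this can sit in the PCL and be invoked unchanged from WPF, Windows Phone, Windows 8, and Xamarin clients.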

The End Result

The nice thing about using Xamarin.Android with Visual Studio 2013 is that you get all the debugging capabilities that you are used to.  Simply press F5 to start debugging, and the project is deployed to the emulator.  When the screen first appears, our text boxes do not have any data.  We see the list of customers, and we can select one.

image

When we select one of the items from the list, the textboxes are populated with data.

image

We can change the phone number and click the button.  That will execute our RelayCommand in our view model, updating the customer object.

Summary

This was not a summary of best practices for building an Android app using Xamarin.Android, and it certainly is not a lesson in how to build a nice-looking UI.  This post demonstrates how you could reuse models and view models in a PCL library from an Android application built using Xamarin tools.

For More Information

Using Portable Class Libraries to Reuse Models and ViewModels

Simple ListView | Android Developer Tutorial (Part 16)

Android Listview item selection in Xamarin using VS 2012 C#
