SharePoint 2013 – Search Index Component Stuck in Degraded (Yellow) State

I read many posts about this, but none had my solution so let’s get into it. Oh the infamous yellow triangle! One of my Index Components was in a degraded state. First off, I logged into the server to make sure disk space wasn’t the issue. There was loads of disk space. I then tried the following:

  • Restarted SharePoint Server Search 15 Service
  • Cleared Config Cache
  • Reset the Index and ran a full crawl of all content

Some sources say a server reboot will fix the issue, but that’s no fun. I did some digging and found that restarting the SharePoint Search Host Controller Service fixed the issue in this case.
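If you prefer to do that restart from PowerShell instead of the Services console, a sketch like this works (run elevated on the server hosting the degraded index component; the status check assumes the SharePoint snap-in is available):

```powershell
# Restart the SharePoint Search Host Controller Service on this server
Restart-Service -Name "SPSearchHostController"

# Give it a minute, then verify the index component comes back to green
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
$ssa = Get-SPEnterpriseSearchServiceApplication
Get-SPEnterpriseSearchStatus -SearchApplication $ssa -Text
```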


Duplicate List DisplayName causing ListData.svc 500 error

This is a fun one I haven’t seen in a while so I figured I’d blog it. A support issue came in today where a user was trying to access ListData.svc.


This worked at the web app URL root and the site collection root, but was giving a 500 error at a specific subsite. There was a subsite underneath the affected site and that one loaded up just fine as well..seemed to be isolated to one site.

This issue is caused by a couple of things (specifically if it’s isolated to a single site or list):

  1. Invalid characters in the list (either the display name of the list or column names)
  2. List view threshold/list lookup threshold
  3. Anonymous user tries to access ListData.svc
  4. ListData.svc does not work with Discussion Lists
  5. A few others

It was none of these..pretty small site with nothing too crazy for naming or anything like that. Turns out the user had created a list and given it the same display name as another list on the site (the URLs were different..they just shared the same display name). This caused the entire site to throw a 500 error when accessed through ListData.svc, even when explicitly going to a list on the site that wasn’t affected. Once the user renamed one of the lists to <listname> (old), everything started working.
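If you suspect the same problem, a quick way to hunt for duplicate display names is a sketch like this (the site URL is a placeholder):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Placeholder URL - point this at the site throwing the 500 error
$web = Get-SPWeb "http://sharepoint/sites/yoursite/yoursubsite"

# Group lists by display name; any group with more than one member
# is a duplicate that can break ListData.svc for the whole site
$web.Lists | Group-Object Title | Where-Object { $_.Count -gt 1 } |
    ForEach-Object { $_.Group | Select-Object Title, @{n="Url";e={$_.RootFolder.ServerRelativeUrl}} }

$web.Dispose()
```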


SharePoint 2013/SSRS 2014 Trace Logs (and SSRS 2012 too)

ReportServer trace logs can get quite huge during an upgrade. These might be a good candidate to move to a secondary drive along with IIS Logs, ULS Logs, etc.

These logs are stored in the following location (And according to a friend at Microsoft this is the only place you can find the build version of SSRS your SharePoint environment is running): C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\WebServices\LogFiles

Note: They are in this location for SharePoint 2010 (in the 14 Hive) and SharePoint 2013 w/ either SSRS 2012 or 2014 running in SharePoint Integrated mode. If this is a native SSRS instance and you somehow found my blog, everything still applies. Just look here for the trace logs instead:

C:\Program Files\Microsoft SQL Server\MSRS12.MSSQLSERVER\Reporting Services\LogFiles

The default max logfile size is 32MB and the default retention is 14 days. These can be tweaked at the following location:

C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\WebServices\Reporting\rsreportserver.config

You could potentially add a custom property called Directory and set it to your secondary drive. Here are those default values I talked about:
<add name="FileName" value="ReportServerService" />
<add name="FileSizeLimitMb" value="32" />
<add name="KeepFilesForDays" value="14" />
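For example, redirecting the logs to a secondary drive would mean adding a Directory entry alongside those defaults (the D:\ path below is just a placeholder; back up rsreportserver.config before editing):

```xml
<!-- Placeholder path: send ReportServer trace logs to a secondary drive -->
<add name="Directory" value="D:\Logs\SSRS\" />
```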

SharePoint 2013/SSRS 2014 – Error Activating Reporting Services Integration Feature

I was contacted about a BI site without SSRS content types the other day. I sent them this document and we went through everything on it:

When trying to deactivate and reactivate the Reporting Services Integration Feature we got the following error..Bwomp

The content type with Id 0x010100C3676CDFA2F24E1D949A8BF2B06F6B8B defined in feature {e8389ec7-70fd-4179-a1c4-6fcb4342d7a0} was found in the current site collection or in a subsite.

First, we tried using brute force with some PowerShell magic:
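The brute-force attempt was along these lines (the site collection URL is a placeholder; the feature GUID is the one from the error message):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Force-activate the Reporting Services Integration feature,
# ignoring the "content type already exists" check
Enable-SPFeature -Identity "e8389ec7-70fd-4179-a1c4-6fcb4342d7a0" `
    -Url "http://sharepoint/sites/bi" -Force
```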

This successfully activated the feature, but still no content types.

I then went on a PowerShell-ing magical journey to see if I could find if SharePoint was lying to me. It was…

*This script searches the entire web app to see if it can find a content type with ID 0x010100C3676CDFA2F24E1D949A8BF2B06F6B8B. The first line looks to see if the Reporting Services Integration feature is enabled anywhere else.
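I no longer have the original script handy, but it was something along these lines (the web application URL is a placeholder):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# First: is the Reporting Services Integration feature enabled on any
# other site collection in the web app?
Get-SPSite -WebApplication "http://sharepoint" -Limit All |
    Where-Object { $_.Features[[Guid]"e8389ec7-70fd-4179-a1c4-6fcb4342d7a0"] } |
    Select-Object Url

# Then walk every web looking for the content type from the error message
$ctId = "0x010100C3676CDFA2F24E1D949A8BF2B06F6B8B"
Get-SPSite -WebApplication "http://sharepoint" -Limit All | ForEach-Object {
    foreach ($web in $_.AllWebs) {
        $web.ContentTypes | Where-Object { $_.Id.ToString() -eq $ctId } |
            ForEach-Object { Write-Host "$($web.Url) : $($_.Name)" }
        $web.Dispose()
    }
    $_.Dispose()
}
```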

Since this returned nothing (And I did a few manual checks to make sure PowerShell wasn’t lying to me too. Trust issues..I know) I did some searching online and found some recommended fixes:

  • Most blogs state to use the -Force parameter like I stated above. Even though it does successfully activate the feature…still no content types
  • Tried clearing the SharePoint Config Cache
  • Tried repairing the Reporting Services Add-In on ALL servers
  • Did a SharePoint rain dance..just kidding..maybe

Then I found this awesome official Microsoft article on the Reporting Services Add-In Installation:

There is an area of this article that talks about using a two-step install to troubleshoot issues. This was the Golden Ticket…A normal repair didn’t work, but this two-step install/repair did the trick. I only needed to do the following steps on the SharePoint server running SSRS (this specific farm had 2 servers and these commands did not need to be run on the other server).

I fired up the command prompt (as Admin) and changed directories to the location of the rsSharePoint.msi file (the SSRS Add-In install file..you can get this right out of the SQL installation files or grab the appropriate version online):

Msiexec.exe /i rsSharePoint.msi SKIPCA=1

This popped up the Reporting Services Add-in installation wizard. I clicked Repair as I did before and it completed successfully. The SKIPCA=1 parameter skips installing the Reporting Services custom actions and puts another install file in the %TEMP% location, i.e. C:\Users\<your name>\AppData\Local\Temp

With the same command prompt window opened I changed directories to this location and ran the following command:

.\rsCustomAction.exe /i

This is what it should look like on your end..


After that I checked out the BI site’s site content types and look at those sexy beasts:

Content Types

SharePoint/Azure ACS Token Signing Certificate. Will you please just sign my tokens?!

Setting up Azure ACS was fun. It’s so easy to get it up/running/connected to SharePoint, and you have the instant satisfaction of using Microsoft/Google/Facebook accounts to log in to SharePoint. Great success! Note: Microsoft only gives you the UPN claim..which is a unique ID, so when users log in it looks gross. Google and Facebook are able to pull in a lot more claims..but Microsoft is more secure in that fashion, I suppose.

Anyways, there is great documentation out there already on how to get rocking and rolling. Here are a few I’ve used:

On the Token Signing Certificate itself, though, there isn’t really much documentation out there. Most of what exists states to use a self-signed certificate for DEV and get a certificate from a commercial Certificate Authority for PROD. Alrighty then. Here’s the screen in Azure:


Not knowing too much in the ADFS token signing cert space (in the past, most environments I have worked with used ADCS or PKI to generate these), I took to the interwebs.

The reason I was researching is that if I were to put in a CSR for a domain I don’t own, I wouldn’t get the cert or it would get revoked…Companies like Comodo have a DCV (Domain Control Validation) questionnaire built right into the certificate purchasing process. For the self-signed cert you can use whatever you want.

I researched to see if Azure ACS could have a friendly name or DNS CName that we could pull the cert for. NOOPE!

I found a great tool by Steve Peschka that allows you to actually export the token signing certificate right out of ACS. The ACS tenant is actually already an HTTPS site so there is a preexisting cert. SWEEET! It works like a charm too..

This specific client had their heart set on using the commercial certificate authority so I kept trucking.

The certificate for ACS is described in detail here:

Alright I’m still not sure what subject name to use..until I found this forum post:

Frank Lesniak had the answer I was looking for (This was for ADFS, but still applied to ACS):

**I’m just copying his answer in here in case the forum post ever gets deleted

  1. The certificate’s key length should be at least 2048 bits.
  2. Validity period should be as long as possible (given cost), up to 5 years
  3. The signing algorithm should be either SHA-1 or SHA-256. If you need to support ADFS 1.x legacy federation, Windows 2000, Windows XP SP2, or Windows Server 2003, use SHA-1. Otherwise, for best security, use SHA-256. You may need to call your publicly-trusted certificate issuer to validate the signing algorithm.
  4. Ensure that the private key is exportable
  5. Subject name does not matter… but something like would be a common implementation.
  6. Key usage does not matter.

The key points being #5 and #6 – ADFS does not care what you name the certificate or what kind of certificate is being used (i.e. code signing, server authentication, client authentication, etc.). My advice would be to generate a certificate however you’d normally feel comfortable doing so. For example, many of my clients use IIS to generate the certificate signing request (CSR), then submit the CSR to the commercial CA. Once you’ve loaded the certificate into the computer store, it should be available for AD FS to use.


In summary – it doesn’t matter! Use whatever you’d normally feel comfortable generating, or if you’re already rocking a wildcard cert for everything, use that. Any X.509 certificate will do…

SharePoint 2013/SSRS 2014 – HTTPException Request Timed Out

Here’s the scenario – SharePoint 2013 with SSRS 2014. This is a small 3-tier farm – 1 App (running SSRS), 1 Web, and 1 SQL server. The farm had been running smoothly for quite some time, then started sporadically receiving HTTPException Request Timed Out errors. This seemed to only be affecting 1 specific report (the largest/most used report in the farm), as I was able to run other reports while that report was acting up.

The end users would just see the typical SSRS loading screen until the 110 second timeout kicked in, and then the user was presented with a “Request Timed Out” error with a correlation ID. In the eventvwr Application log I could see this:

Process information:

   Process ID: XXX

   Process name: w3wp.exe

   Account name: Domain\App Pool Account

Exception information:

   Exception type: HttpException

   Exception message: Request timed out.

After some digging I noticed that the page file on the system had been modified to a static size of 4GB. After changing this to system managed, everything started working perfectly. (Note: you could also use the Microsoft recommendation of 150% of the RAM on the system.) Recently, I have also seen search crawls stop working (running continuously for 10+ days) due to the page file being switched to a very low value. Moral of the story – make sure your page file is large enough!
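If you’d rather inspect or reset the page file from PowerShell than click through System Properties, a sketch like this works on Server 2008 R2 and later (a reboot is required for the change to take effect):

```powershell
# Check whether the page file is system managed, and its current sizes
Get-WmiObject Win32_ComputerSystem | Select-Object AutomaticManagedPagefile
Get-WmiObject Win32_PageFileSetting | Select-Object Name, InitialSize, MaximumSize

# Flip back to a system-managed page file
$cs = Get-WmiObject Win32_ComputerSystem -EnableAllPrivileges
$cs.AutomaticManagedPagefile = $true
$cs.Put() | Out-Null
```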


SharePoint Upgrade – Incoming E-mail Issues

Here’s another fun scenario: a SharePoint 2007 to 2010 upgrade that heavily relies on incoming e-mail. When migrating/upgrading the content database, the incoming e-mail information is retained and you can see it by browsing to your favorite list or library of choice. Yay!..well kinda. It doesn’t work..I felt like Clark Griswold trying to light up his house in Christmas Vacation. The incoming e-mail alias is ALSO kept in the SharePoint configuration database. This means the content database will have everything you need, but the config database is out of sync. You can fix this using the manual method of turning off and turning back on the ability for each list/library to receive e-mail…NO THANK YOU. Just as Russ declined to check each bulb individually..I respectfully declined that offer here as well.

PowerShell to the rescue! There is a RefreshEmailEnabledObjects() method you can use on an SPSite object to bring your SharePoint farm back into perfect harmony..just like the old Coca-Cola commercial used to say (it’s just a pop culture drop kind of day today).

You can create your own script to loop through all SharePoint site collections, or you don’t have to reinvent the wheel because Salaudeen Rajack has already done this for you:
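If you do want to roll your own, a minimal version of that loop might look like this (run on a farm server; -Limit All catches every site collection):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Re-sync incoming e-mail settings from the content databases
# back into the farm configuration database
Get-SPSite -Limit All | ForEach-Object {
    Write-Host "Refreshing e-mail enabled objects in $($_.Url)"
    $_.RefreshEmailEnabledObjects()
    $_.Dispose()
}
```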

SharePoint Foundation 2013 SP1 Bits..Diagnostic Data Provider Timer Jobs Enabled

I don’t know if this was a one-off thing, but I figured I’d share just in case. It’s even possible someone turned these jobs on without notifying anyone..though nobody has fessed up yet! If I run through another SPF13 install soon I’ll be sure to update the post.

I have confirmed that all copies of SharePoint Server 2013 do NOT enable the Diagnostic Data Provider timer jobs by default. I have also confirmed that an RTM SharePoint Foundation 2013 install has the same behavior. Recently I ran through a SharePoint Foundation SP1 install (ISO pulled from the VLSC)..and after a few weeks noticed the Usage Logging database was growing out of control! Looking over the timer jobs I saw that all Diagnostic Data Provider timer jobs were turned on:
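If you’d rather check (and disable) these from PowerShell than click through Central Administration, a sketch like this should do it (job-diagnostics is the internal name prefix these jobs use, as far as I’ve seen):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# List the Diagnostic Data Provider timer jobs and whether they're enabled
Get-SPTimerJob | Where-Object { $_.Name -like "job-diagnostics-*" } |
    Select-Object Name, IsDisabled

# Turn them all off
Get-SPTimerJob | Where-Object { $_.Name -like "job-diagnostics-*" } |
    ForEach-Object { Disable-SPTimerJob $_ }
```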


That explains it..These jobs are normally disabled, as they aggregate a lot of different information/logs from SharePoint and put it into one central location/database. We usually turn these on either for “health checks” or when troubleshooting issues and wanting a complete snapshot of the farm. Turned them off..and trimmed up the usage data using this method:

Note: The link above cleared up only about 10GB of data..leaving me with a still-gigantic Usage Logging database. Apparently there isn’t any way I could find (without SQL queries) to clear the diagnostic data out of the DB. It did trim some items – page requests, feature usage, etc. You could either wait for the retention period to kick in..or, if the data isn’t important, create a new usage database and delete the old one using the Set-SPUsageApplication PowerShell cmdlet explained here:

SharePoint 2013 – “2010 Mode” Site Collection Search Scopes

One migration tidbit to note when going from 2010 to 2013: search scopes are contained in the Search Service database..NOT the content database. This means that if a site heavily relies on search scopes..and you are choosing to keep that site in “2010 Mode” (not generally recommended, but it sometimes makes sense)..then you will need to upgrade the search database as well. Sites running in “2010 Mode” can use existing scopes, but you cannot create new search scopes after the content database is upgraded to SharePoint 2013. Side note – if the site collection is upgraded to SharePoint 2013, then you can use the fancy shmancy new result sources.

The search database can be upgraded using the following PowerShell cmdlet:
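If memory serves, that cmdlet is Restore-SPEnterpriseSearchServiceApplication (the documented way to attach and upgrade a 2010 search admin database); a sketch, where the application name, app pool, database name, and server are all placeholders:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Placeholders - swap in your app pool, restored 2010 search admin DB, and SQL server
$pool = Get-SPServiceApplicationPool "SharePoint Service Applications"
Restore-SPEnterpriseSearchServiceApplication -Name "Search Service Application" `
    -ApplicationPool $pool -DatabaseName "Search_Service_Application_DB" `
    -DatabaseServer "SQLSERVER"
```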


More about this cmdlet here:

This process is rock solid…kind of. It doesn’t give you GUIDs, and it expects the search database names to be in the following format:

  • <Search Service Application Name>_AnalyticsReportingDB
  • <Search Service Application Name>_CrawlDB

I had a different DB naming format, and this did NOT work for me. The search Admin DB (the one I restored) was renamed as I went through the SQL backup/restore process, so that one had the naming down. I used the process described here to get everything nice and clean:

The search database names were cleaned up and the scopes were showing up. Life is good.