Administering SharePoint using best practices

Fixing repeated logon prompts in SharePoint

There’s nothing that annoys users as much as repeated logon prompts.

Recently, a SharePoint 2013 farm was prompting all users for logon except when logging on from the web server itself. It turned out someone had changed the Web Application User Authentication setting for the web app from Claims Based Authentication to Windows Authentication.

Other areas to check:

  • Add the SharePoint web application to the Trusted Sites zone in IE
  • Clear all cached credentials, in case stale credentials are stored: go to Control Panel, User Accounts, Credential Manager, Manage Windows Credentials, and remove all relevant cached credential entries
  • Make sure Integrated Windows Authentication is enabled in IE (Tools >> Internet Options >> Advanced >> under Security, enable Integrated Windows Authentication)
  • Ensure the IE User Authentication setting has “Automatic logon with current user name and password” selected
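You can confirm the web application’s authentication mode from PowerShell; `http://SharePoint` below is a placeholder URL for your web app:

```powershell
# Check whether each web application is using claims authentication
# (UseClaimsAuthentication should be True for a claims-based web app)
Get-SPWebApplication | Select-Object Url, UseClaimsAuthentication

# If a web app was switched back to classic Windows authentication, it can be
# converted to claims in SP2013 -- test in non-production first:
# Convert-SPWebApplication -Identity "http://SharePoint" -To Claims -RetainPermissions
```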

Solving Timer Service halting daily

Overview

The SharePoint Timer Service is key to keeping SharePoint humming.  It runs all the timer jobs, and is therefore responsible for a near-endless set of tasks.  I recently found that the Timer Service was halting shortly after 6am daily, with the service shown as stopped.  Some additional symptoms:

  • Trying to set the password in Central Administration, Security, Managed Accounts doesn’t fix the issue
  • Trying to set the managed account password via PowerShell doesn’t help
  • The following appears in the event log:

Cannot log on as a service

  • Trying to start the service manually fails with the same error:

Cannot log on as a service

Solution

First, check GPEdit.msc to make sure the local Computer security policy allows the farm account to log on as a service.  The real catch is that Domain policy overrides local policy, so unless the farm account is granted the Log on as a service right at the domain level, it will fail again the next morning as the Group Policy settings propagate.
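A quick way to verify the effective “Log on as a service” assignment on the server is to export the local security policy and inspect the SeServiceLogonRight entry (a sketch; the temp file path is arbitrary):

```powershell
# Export the effective local security policy and show which accounts
# hold the "Log on as a service" right (SeServiceLogonRight)
secedit /export /cfg "$env:TEMP\secpol.cfg" | Out-Null
Select-String -Path "$env:TEMP\secpol.cfg" -Pattern "SeServiceLogonRight"
Remove-Item "$env:TEMP\secpol.cfg"
```

If the farm account’s SID disappears from that line after a Group Policy refresh, the domain policy is overriding the local grant.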

Getting your arms around the database sizing of your SharePoint farm

SharePoint Database Size Planning

To manage your SharePoint farm, and especially to plan for backup and recovery, you need to understand the data sizing of your farm. Here are the steps to gather the information needed to understand the existing farm and estimate its growth. This will give you a clear understanding of the size of your backups so you can plan recovery timeframes, and it will also give insight into the rate of growth and into the quotas that can govern database growth.

Size of all SharePoint Databases

To plan for DR one needs to know the size of all databases to be backed up and restored. This small script will produce a CSV report of the bytes per database attached to the SharePoint farm:

Get-SPDatabase | Select-Object Name, DiskSizeRequired | ConvertTo-Csv | Set-Content "C:\DBsize.csv"

RBS Report

There is no direct mechanism in Central Admin to view RBS configuration. This script will give you a report of the RBS settings throughout your farm:

Get-SPContentDatabase | ForEach-Object {
$_.Name;
try {
$rbs = $_.RemoteBlobStorageSettings;
Write-Host "Provider Name=$($rbs.GetProviderNames())";
Write-Host "Enabled=$($rbs.Enabled)";
Write-Host "Min Blob Size=$($rbs.MinimumBlobStorageSize)"
}
catch
{Write-Host -ForegroundColor red "RBS not installed on this database!`n"}
finally {Write-Host "End`n"}
}

Site Collection size report

It is useful to know the sizes of your Site Collections, and their distribution among your Content Databases. You can report on the size of each Site Collection within each Content DB within a given Web Application with the script below. The output is a CSV (Comma Separated Value) file easily read into Excel. If you have a lot of Site Collections, just convert to a PivotTable, to see the distribution and sizes of Site Collections across Content Databases.

Get-SPWebApplication http://SharePoint | Get-SPSite -Limit All | Select-Object Url, ContentDatabase, @{Label="Size in GB";Expression={$_.Usage.Storage/1GB}} | ConvertTo-Csv | Set-Content "C:\TEMP\DBsize.csv"

Site Collection sizes help inform how to rebalance Content Databases for optimal sizing, allowing you to meet your RTO.
One common situation is MySites being distributed unevenly across Content Databases, leaving one Content Database much larger than the others. As discussed earlier, managing Content Database sizes is key to meeting your RTO.
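If rebalancing is needed, Move-SPSite relocates a site collection to another content database attached to the same web application (the URL and database name below are placeholders; the site collection is locked during the move, and an IISReset is recommended afterwards):

```powershell
# Move one oversized MySite to a less-loaded content database
Move-SPSite "http://MySites/personal/jsmith" -DestinationDatabase "WSS_Content_MySites2"
```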

Quota Report

Setting quotas puts in place limits on Site Collection growth. It also gives the Administrator weekly notification of Site Collections that have exceeded a preset warning size.
This report gives you a list of all the quotas in place across a Web Application:

$webapp = Get-SPWebApplication "http://SharePoint"
$webapp | Get-SPSite -Limit All | ForEach-Object {
$site = $_;
$site;
$site.Quota;
$site.Dispose();
}

What you want to look for first are Site Collections that have no quotas. These represent opportunities for unconstrained growth without notification that could result in Content Database growth that exceeds your RTO targets.
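To surface those unconstrained site collections directly, you can filter on a QuotaID of 0, which indicates no quota template is applied (a sketch; `http://SharePoint` is a placeholder web application URL):

```powershell
# List site collections with no quota template applied (QuotaID = 0),
# along with their current size in GB
Get-SPWebApplication "http://SharePoint" | Get-SPSite -Limit All |
    Where-Object { $_.Quota.QuotaID -eq 0 } |
    Select-Object Url, @{Label="Size in GB";Expression={$_.Usage.Storage/1GB}}
```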

Limiting Library Version Storage across your SharePoint farm

There are situations where documents are frequently edited. Each edit creates a new version in SharePoint. In SP2010, each version consumed the full disk space, with no optimization for managing deltas. In SP2013, one of the benefits of Shredded Storage is that it optimizes storage usage for multiple similar versions of not just Office XML (Office 2010/2013) documents but also other filetypes like PDFs and image files. It does this by working out and storing only the file differentials. Even with Shredded Storage, you can limit the number of versions retained on document edits. Here’s how to do this across your farm. Let’s limit major versions to three, and minor versions to five:

$spWebApp = Get-SPWebApplication http://SharePoint
for ($Si=0; $Si -lt $spWebApp.Sites.Count; $Si++)
{
    $site = $spWebApp.Sites[$Si];
    for ($Wi=0; $Wi -lt $site.AllWebs.Count; $Wi++)
    {
        $web = $site.AllWebs[$Wi];
        for ($Li=0; $Li -lt $web.Lists.Count; $Li++)
        {
            $list = $web.Lists[$Li];
            if ($list.EnableVersioning)
            {
                $list.MajorVersionLimit = 3
            }
            if ($list.EnableMinorVersions)
            {
                $list.MajorWithMinorVersionsLimit = 5
            }
            $list.Update()
        }
        $web.Dispose()
    }
    $site.Dispose()
}

Reporting on SharePoint MySite distribution by Content Database

Reporting on MySite Content Databases

Knowing how sites are distributed among Content Databases is key; for example, you need to know which Content Database to restore for a given user.

Wouldn’t it be nice to see the breakdown of MySites, which belong to a given Content Database, and the total size of each Content Database? Here’s a script that generates this useful report:

$DBs = Get-SPWebApplication http://MySites | Get-SPContentDatabase
foreach ($db in $DBs)
{
Write-Host -ForegroundColor DarkBlue "DB Name: $($db.Name)"
$siz="{0:N3}" -f ($db.disksizerequired/1GB)
write-host -ForegroundColor DarkBlue "DB Size: $($siz) GB"
Write-Host "========"
$db | Get-SPSite -Limit all
Write-Host " " 
}

SharePoint Admins – how to recover

Save your hide by restoring a SharePoint farm configuration

I had the scare of the week last night. I was doing some pre-approved SharePoint farm cleanup. Part of that was removing some Farm Solutions (WSPs). The Retract of the solution partially failed, and hosed Central Admin (ouch).

Last time I had this, the uninstall of UMT software left a dozen dangling references in all Web.Configs that I cleaned up by hand. Those caused workflows to all stop. This time, I was getting security and page rendering errors.

Good thing we had recently run Backup-SPFarm with the ConfigurationOnly option.

Given Central Admin was hosed, I used PowerShell to back up and restore the farm configuration (paths here are examples):
Backup-SPFarm -Directory \\backupserver\share -BackupMethod Full -ConfigurationOnly
Restore-SPFarm -Directory \\backupserver\share -RestoreMethod Overwrite -ConfigurationOnly

Once the restore was done, Central Admin appeared to behave erratically. This was simply the farm sync’ing up, with a series of IISResets associated with feature re-deployment.

Remote Blob Storage report

When configuring RBS (Remote Blob Storage), we get to select a minimum blob threshold. Setting this offers a tradeoff between performance and storage cost efficiency.

Wouldn’t it be nice to have a report of the RBS (Remote Blob Storage) settings for all content databases within a farm? Well, here’s a script that reports on each content database, whether it is configured for RBS, and what that minimum blob threshold size is.

$sep="|"
Write-Host "DB Name$($sep)RBS Enabled$($sep)MinBlobThreshold"
Get-SPContentDatabase | ForEach-Object {
try {
$rbs = $_.RemoteBlobStorageSettings;
Write-Host "$($_.Name)$($sep)$($rbs.Enabled)$($sep)$($rbs.MinimumBlobStorageSize)"
}
catch {
Write-Host -ForegroundColor red "RBS not installed on $($_.Name)!`n"
Write-Host "$($_.Name)$($sep)False$($sep)0"
}
}

Rationalizing SharePoint Storage

SharePoint Storage Management

SharePoint storage tends to grow organically, and in an uneven fashion. Periodically it makes sense to take stock of how Site Collections are distributed amongst Content DBs. The goal should ideally be to keep Content DBs at 50GB or below where possible. When a Site Collection grows to 100GB or more, steps should be taken to manage the growth, as large Content DB performance can degrade, and backup/restore can become lengthy.

Here’s a one-line script that outputs how site collections are distributed among Content DBs, and the size of each Content DB. The results can be pasted into Excel. If needed, Excel can separate Text to Columns, allowing you to pivot larger data sets:

Get-SPContentDatabase | get-spsite -Limit all | % {write-host "$($_.rootweb.url)|$($_.rootweb.title)|$($_.contentdatabase.name)|$($_.ContentDatabase.DiskSizeRequired)"}

Patching SharePoint: useful tips

Patching SharePoint

In patching SharePoint prior to the August 2012 CU (wow, that’s a long time ago), patches had to be applied sequentially as a matched pair: first Foundation, then Server. As of the August 2012 CU, there’s a single patch. If you have SP2010 Server, just apply that one CU. Note the 2012 CUs assume SP1 is already installed.

Note: I would not apply a CU unless the last SharePoint Config Wizard run completed successfully. That can be seen in Central Admin under Upgrade/Patch Status. The one exception is a farm with a known existing problem completing a Config Wizard session, in which case a CU is a rational approach to solving it, as long as you have a server snapshot and a full DB server backup (all SharePoint DBs) to roll back to in case the CU doesn’t fix the Config Wizard issue. The place to start for Config failures is the log: a log is produced after every Config Wizard run, with (too much) detail. Look for the first “ERROR” entry.
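Pulling that first ERROR out by hand is tedious; this sketch does it for the most recent Config Wizard log (the path and PSCDiagnostics naming assume a default SP2013 install under the 15 hive):

```powershell
# Find the most recent Config Wizard diagnostic log and show its first ERROR entry
$logDir = "$env:CommonProgramFiles\Microsoft Shared\Web Server Extensions\15\LOGS"
$log = Get-ChildItem "$logDir\PSCDiagnostics*.log" |
    Sort-Object LastWriteTime -Descending | Select-Object -First 1
Select-String -Path $log.FullName -Pattern "ERROR" | Select-Object -First 1
```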

Sometimes the Registry on a server gets out of sync with the products in the Farm Config DB. Here’s how to very quickly and safely sync them:

Set-SPFarmConfig -installedproductsrefresh

Sometimes the CU will complain that the expected version is not found. You can force the install to proceed by telling the CU to bypass its software version detection, but do so at your own risk:

<filename>.glb.exe PACKAGE.BYPASS.DETECTION.CHECK=1

After applying the CU, a reboot is required; the installer tells you as much at the end. However, after the reboot it doesn’t tell you that you still need to run the Config Wizard.

From command line, you can always trigger a Config Wizard session using:

psconfig -cmd upgrade -inplace b2b -wait

More thorough is this command:

psconfig -cmd upgrade -inplace b2b -force -cmd applicationcontent -install -cmd installfeatures

The August 2012 CU installs clean, but sometimes bypass detection is needed. Spot testing indicates faster performance in the browser and lower RAM utilization compared with SP1 + June 2011 CU.

System characteristics of a well performing SharePoint Farm

Characteristics of a well performing SharePoint Farm

Tempdb data files

  • Dedicated disks for tempdb
  • Tempdb should be placed on RAID 10
  • The number of tempdb data files should equal the number of CPU cores, and the files should be set to an equal size
  • The initial size of each tempdb data file should be the physical memory divided by the number of processor cores
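As a rough sketch of those tempdb rules (assuming Windows Server; the cap of eight data files follows common SQL Server guidance and is my addition), you can derive the file count and initial size per file from the server’s cores and RAM:

```powershell
# Sketch: derive tempdb data-file count and initial size from hardware
$cores = (Get-CimInstance Win32_Processor | Measure-Object NumberOfCores -Sum).Sum
$ramGB = [math]::Round((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB)
$files = [math]::Min($cores, 8)            # one data file per core, commonly capped at 8
$sizePerFileGB = [math]::Round($ramGB / $cores, 1)
Write-Host "tempdb: $files data files, ~$sizePerFileGB GB initial size each"
```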

DB management

  • Enough headroom must be available for the databases and log files, plus enough capacity to keep up with requests
  • Pre-grow all databases and logs if you can; monitor sizes so you do not run out of disk space
  • When using SQL Server mirroring, do not store more than 50 databases on a single physical instance of SQL Server
  • Database servers should not be overloaded with too many databases or too much data
  • Limit content databases to 200GB
  • Indices should be defragmented and rebuilt daily, if users can absorb the downtime required to rebuild
  • More than 25% disk space must be free
  • Create another instance of SQL Server for data exceeding 5 TB

Monitor the database server

Key performance counters to monitor are:

  • Network wait queue: at 0 or 1 for good performance
  • Disk latency (Avg. Disk sec/Read): less than 5 ms
  • Memory used: less than 70%
  • Free disk space: more than 25%
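Get-Counter can sample the disk and memory counters directly on the database server (counter paths below assume an English-language Windows install):

```powershell
# Sample the key disk and memory counters once
Get-Counter -Counter @(
    '\PhysicalDisk(_Total)\Avg. Disk sec/Read',     # disk read latency (target < 5 ms)
    '\PhysicalDisk(_Total)\Avg. Disk Queue Length', # pending disk requests
    '\Memory\Available MBytes'                      # free memory
) | Select-Object -ExpandProperty CounterSamples |
    Format-Table Path, CookedValue -AutoSize
```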