Workflow error in SP2013 related to App Step

I had a situation where my workflows went to “Suspended” after a few minutes. I noticed this occurred whenever I used the “App Step”. Looking at the “Suspended” state, there was an “i” icon that, when clicked, showed this error:
Details: An unhandled exception occurred during the execution of the workflow instance. Exception details: System.ApplicationException: HTTP 401 {“error_description”:”The server was unable to process the request due to an internal error

Here is how the error consistently appears:
[Screenshot: the workflow shown as “Suspended” on the status page, with the error details]

It turns out the App Step will not work without specific configuration granting it rights.

So of course a permission issue was the most probable cause, and, as I mentioned yesterday, the App Step was my prime suspect.

So I configured the permissions as described at the link below:
https://www.dmcinfo.com/latest-thinking/blog/id/8661/create-site-from-template-using-sharepoint-2013-workflow

The description in the above link should make things really clear: when an App Step runs, it acts under the workflow app’s identity, whose permissions need to be granted first.

One more thing: we need to specify the Scope URL, as below.

On the subweb, you will find the configuration setting for app permissions: http://[SPWeb URL]/_layouts/15/appinv.aspx
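
For reference, on appinv.aspx you look up the app by its App Id and paste in a permission request in XML. Here is a minimal sketch of that XML, following the pattern Microsoft documents for granting a workflow full control (note that the Scope value below is the literal documented token for site-collection scope, not your site’s actual URL):

<AppPermissionRequests AllowAppOnlyPolicy="true">
  <!-- Scope is a permission-scope token, not a real URL; Right can be narrower than FullControl -->
  <AppPermissionRequest Scope="http://sharepoint/content/sitecollection" Right="FullControl" />
</AppPermissionRequests>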

You can see it set up from here:
[site URL]/_layouts/15/appprincipals.aspx?Scope=Web

Here is the feature to enable; it’s the “Workflows can use app permissions” site feature:
[Screenshot: the “Workflows can use app permissions” site feature, activated]
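
If you prefer PowerShell over the Site Features page, something like the sketch below should work. The feature’s internal name here is my assumption; run the Get-SPFeature lookup first to confirm what it is called on your farm:

# find the feature definition (confirm it corresponds to “Workflows can use app permissions”)
Get-SPFeature -Limit All | Where-Object { $_.DisplayName -like "*Workflow*" } |
    Select-Object DisplayName, Id, Scope

# then enable it on the site; the identity below is an assumed internal name
Enable-SPFeature -Identity "WorkflowAppOnlyPolicyManager" -Url "http://yoursite/subweb"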

For App Step permissions, it is the third link under Users and Permissions in Site Settings (“Site app permissions”).

Here is how the specific App Step permission appears when viewed from Site Settings, Site app permissions:
[Screenshot: the workflow app principal listed with its granted permission]

With the permissions granted, the workflows should then work on retry. You can retry from the workflow summary page shown at the top.

Set an Email alert for a Document library or List in SharePoint 2013

In this post, I will explain how to add an alert on a document library or list to get notified when items are added, updated, or deleted. Below are the steps to follow for adding an alert:

1. First of all, open the list or library.
2. If it is a library, open the Library ribbon tab; if it is a list, open the List ribbon tab. In my example I am using a document library.

[Screenshot: the Library ribbon tab]
3. Now, under the Share & Track section, find the Alert Me option and click “Set alert on this library”.

[Screenshot: the “Set alert on this library” link]

4. A dialog box will appear with the available options: Alert Title, Send Alerts To, Delivery Method, Change Type, Send Alerts for These Changes, and When to Send Alerts.

[Screenshot: the alert dialog options]

5. Apply the relevant configuration in the dialog and then click OK. And you are done setting the alert.
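
If you ever need to create alerts in bulk (many users or many libraries), the server object model exposes the same thing through SPUser.Alerts. A minimal PowerShell sketch, assuming a placeholder site URL, a library named “Documents”, and a placeholder account:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$web  = Get-SPWeb "http://yoursite"          # placeholder URL
$list = $web.Lists["Documents"]              # placeholder library name
$user = $web.EnsureUser("DOMAIN\jsmith")     # placeholder account

# create an immediate e-mail alert covering added, updated, and deleted items
$alert = $user.Alerts.Add()
$alert.Title            = "Documents alert"
$alert.AlertType        = [Microsoft.SharePoint.SPAlertType]::List
$alert.List             = $list
$alert.DeliveryChannels = [Microsoft.SharePoint.SPAlertDeliveryChannels]::Email
$alert.EventType        = [Microsoft.SharePoint.SPEventType]::All
$alert.AlertFrequency   = [Microsoft.SharePoint.SPAlertFrequency]::Immediate
$alert.Update()

$web.Dispose()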

Solving Timer Service halting daily

Overview

The SharePoint Timer Service is key to keeping SharePoint humming.  It runs all the timer jobs, and is therefore responsible for a nearly endless set of tasks.  I recently found that the Timer Service was halting shortly after 6am daily; the service simply appears as stopped.  Some additional symptoms:

  • Trying to set the password in Central Administration, Security, Managed Accounts doesn’t fix the issue
  • Trying to set the managed account password via PowerShell doesn’t help
  • The following appears in the event log:

[Screenshot: “Cannot log on as a service” error in the event log]

  •  Trying to start the service fails:

[Screenshot: “Cannot log on as a service” error when starting the service]

Solution

First, check GPEdit.msc to make sure the computer security policy grants the farm account the “Log on as a service” right.  The real catch is that the Domain policy overrides the local policy, so unless the farm account has the right to log on as a service at the domain level, it will fail again the next morning as the Group Policy settings propagate.
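
To see what the effective policy actually grants, dump the local security policy and check the “Log on as a service” right (its internal name is SeServiceLogonRight); gpresult then shows which GPO is winning.  Paths below are placeholders:

# export the effective local security policy
secedit /export /cfg C:\temp\secpol.cfg

# list the accounts granted "Log on as a service"
Select-String -Path C:\temp\secpol.cfg -Pattern "SeServiceLogonRight"

# generate a report showing which GPO supplies each setting
gpresult /h C:\temp\gpreport.html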

SharePoint 2013 issues with IE11

Working with IE11, Web Part pages could not be edited in the browser.  Web parts could not be selected while creating the page, and the web part properties pane would not open for editing.  The problem seems to be specific to IE11.  It works great in IE10.

The solution is to set the site’s hostname to run in Compatibility View.

To get it to work, here’s what I did:
1. Press ALT + T when viewing the SharePoint page
2. Click Compatibility View Settings in the menu
3. Click Add to add the current SharePoint site to the list of compatibility view pages
4. Click Close

That’s it.  Not only does the page work, but other aspects of the page come back to life.
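
Of course, a per-user browser setting doesn’t scale to a whole user base.  An alternative worth considering is serving an X-UA-Compatible response header from IIS so every visitor gets the compatible rendering; a hedged sketch, assuming your IIS site is named “SharePoint - 80”:

Import-Module WebAdministration

# add an X-UA-Compatible header to every response from the SharePoint site
Add-WebConfigurationProperty -PSPath 'IIS:\Sites\SharePoint - 80' `
    -Filter 'system.webServer/httpProtocol/customHeaders' `
    -Name '.' -Value @{ name = 'X-UA-Compatible'; value = 'IE=10' }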

Happy browsing!

Running SharePoint 2013 search within a limited RAM footprint


SharePoint 2013 search is very powerful; however, if you have limited server resources, it can easily get the better of your environment.  I’ve seen a small SharePoint 2013 environment go unstable: w3wp processes crashing, ULS logs filling with low-memory errors, the search index going into “Degraded” mode during a crawl, end-user search attempts returning correlation errors, and even sites and Central Admin returning 500 errors; all for want of a few more GB of RAM.  An IIS reset gets the server responsive again, and an index reset will get SharePoint crawling again, but outside of tossing in precious RAM chips, what’s a caring administrator to do?  Let’s first see how to determine whether your search index is degraded:

Get-SPEnterpriseSearchServiceApplication | Get-SPEnterpriseSearchStatus
Name State Description
---- ----- -----------
IndexComponent1 Degraded
Cell:IndexComponent1-SPb5b3474c2cdcI.0.0 Degraded
Partition:0 Degraded
AdminComponent1 Active
QueryProcessingComponent1 Active
ContentProcessingComponent1 Active
AnalyticsProcessingComponent1 Active
CrawlComponent0 Active

In the example above, note the index component is degraded.  In Central Admin, simply do an Index Reset to get things back on their feet, and restart the World Wide Web Publishing Service to kick-start IIS and its app pools.  With the command below, we’ll lower the priority of search, so it doesn’t blow up our under-resourced farm:

Set-SPEnterpriseSearchService -PerformanceLevel Reduced

Next, let’s limit the RAM used by the noderunner processes; these host the search components (crawl, content processing, analytics, indexing and query processing).  You can find their configuration on C: or perhaps a different drive letter on your system:

C:\Program Files\Microsoft Office Servers\15.0\Search\Runtime\1.0

Open the file noderunner.exe.config in a text editor (Notepad is fine, especially if your farm is wheezing from being RAM-challenged).  Change the memoryLimitMegabytes value from 0 (unlimited) to 180.  Note I would not run with less than 180MB per noderunner, as I’ve seen search components fail to start as a result.

<nodeRunnerSettings memoryLimitMegabytes="180" />

Try another crawl, and enjoy a more RAM-stable experience.
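
If you’d rather script the change (handy when you have several search servers), here’s a minimal sketch; it assumes the default install path and the standard nodeRunnerSettings element, and restarts the Search Host Controller so new noderunner processes pick up the cap:

$cfg = 'C:\Program Files\Microsoft Office Servers\15.0\Search\Runtime\1.0\noderunner.exe.config'

# load the config, cap the noderunner memory, and save it back
[xml]$xml = Get-Content $cfg
$node = $xml.SelectSingleNode('//nodeRunnerSettings')
$node.SetAttribute('memoryLimitMegabytes', '180')
$xml.Save($cfg)

# restart the Search Host Controller so the new limit takes effect
Restart-Service SPSearchHostController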

Here’s how to tell where your index is located on disk:

$ssi = Get-SPEnterpriseSearchServiceInstance
$ssi.Components

Here’s how to get the topology and the Index component:

$ssa = Get-SPEnterpriseSearchServiceApplication
$active = Get-SPEnterpriseSearchTopology -SearchApplication $ssa -Active
$iComponent = $active | Get-SPEnterpriseSearchComponent -Identity IndexComponent1
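
From there, the index component’s RootDirectory property should reveal the index path; as I understand it, an empty value means the default location under the Office Servers data folder:

$iComponent.RootDirectory   # empty means the default index location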

Search Crawling in SP2013


In SP2010, we have two types of crawls: Full and Incremental.  In a nutshell, your search index can be made fresher on average, but it is never real-time.  Right now we do full crawls weekly and incremental crawls hourly.

One of the limitations of full and incremental crawls in SP2010 is that they cannot run in parallel, i.e. if a full or incremental crawl is in progress, the admin cannot kick off another crawl on that content source.  This forces a first-in-first-out approach to how items are indexed.

Moreover, some types of changes result in extended run times, such as script-based permission changes, moving a folder, or changing fields in a content type.  Incremental crawls also don’t process “deletes”, so ghost documents are still returned as hits after deletion, until the next full crawl.

SharePoint 2013 introduces the concept of “Continuous Crawl”.  It doesn’t need scheduling.  The underlying architecture is designed to ensure consistent freshness by running in parallel.  Today, if a full or incremental crawl is slow, everything else awaits its completion; it’s a sequential crawl.  Behind the scenes, selecting continuous crawl kicks off a crawl session every 15 minutes (this wait can be configured) regardless of whether the prior session has completed or not.  This means a change made immediately after a deep, wide-ranging change doesn’t need to wait behind it.  New changes will continue to be processed in parallel while a deep policy change is being worked on by another continuous crawl session.
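
Enabling it is a checkbox on the content source, or a couple of lines of PowerShell.  A hedged sketch, assuming the default “Local SharePoint sites” content source; ContinuousCrawlInterval is, as I understand it, the 15-minute wait mentioned above, expressed in minutes:

$ssa = Get-SPEnterpriseSearchServiceApplication
$cs  = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local SharePoint sites"

# turn on continuous crawls for this content source
$cs.EnableContinuousCrawls = $true
$cs.Update()

# optionally adjust the interval between crawl sessions (minutes)
$ssa.SetProperty("ContinuousCrawlInterval", 15)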

Note that continuous crawl will increase the load on the SharePoint servers, since it inherently runs multiple sessions in parallel.  If needed, we can tune this through Crawler Impact Rule settings (which exist today in SP2010), which control the maximum number of simultaneous requests that can be made to a host (the default is 12 threads, but it is configurable).
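
Crawler impact rules can be set in Central Admin under Search Administration, or via PowerShell.  A hedged sketch with New-SPEnterpriseSearchSiteHitRule; my assumption is that Behavior “0” means “limit simultaneous requests” and HitRate sets that ceiling, so verify against your farm before relying on it (the host name is a placeholder):

$svc = Get-SPEnterpriseSearchService

# cap the crawler at 4 simultaneous requests to this host (assumed semantics)
New-SPEnterpriseSearchSiteHitRule -SearchService $svc -Name "portal.contoso.com" -Behavior "0" -HitRate 4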