Jesper M. Christensen

SharePoint and Security


Lookup the SharePoint 2013 app-web url

The SharePoint 2013 apps are easy to install, update and uninstall, and our team has a lot of experience in developing and using apps at customer sites.

Sometimes, though, we need more information about the app itself. I was troubleshooting a customer environment where an app did not work: it could be installed, updated and uninstalled, but an error occurred when accessing it. I detected that no DNS entry was made for the app (usually with the prefix app-<guid>), but I could not see from the error URL or other information what the <guid> was. I could find it in the ULS logs, but wanted to check other apps without using the ULSViewer.

Here is the method I used for looking up the app-web URL with Windows PowerShell:

(the SharePoint snap-in is already loaded in Windows PowerShell)


Get the apps installed on a site (the -Web parameter needs the URL of your site, here an example URL):

     $instances = Get-SPAppInstance -Web https://sharepoint/sites/site

List the apps installed, for example by selecting a few properties:

     $instances | Select-Object Title, Id, Status
Get a single app, here “RePoint Infoscreen”

     $instance = $instances | where {$_.Title -eq 'RePoint Infoscreen'}

See the properties of the app instance. The AppWebFullUrl property holds the app-web URL we are looking for:

     $instance | Select-Object Title, Status, AppWebFullUrl
With PowerShell you can also do other things to apps, and here is the command for uninstalling the app above.

To uninstall the app:

     Uninstall-SPAppInstance -Identity $instance

I hope this can help you too, as I could not find these scripts easily on the internet. If you are interested in trying our apps, please visit the Microsoft Office Store.

Optimizing SQL Server performance for Microsoft SharePoint 2010/2013

This is a duplicate post from my SharePoint Community blog:


During my time as a consultant I see a lot of environments. More and more I have been involved in setting up the SQL Servers for SharePoint and I would like to share this information with you all.

Please consult a SQL Server database administrator and a SharePoint administrator and discuss the settings before making any adjustments in your environment(s). I can only give advice and cannot be held responsible for any wrong configurations – remember that every environment is different!

90% of all data in a Microsoft SharePoint 2010/2013 farm is placed in the SQL Server databases, and every transaction from the users, all SharePoint services and all SharePoint timer jobs work with these databases at the same time. SharePoint depends on the performance of the web frontend servers and the SQL servers. If the users should have the best experience of SharePoint, you need to optimize the resources used by SharePoint.

On the web frontend servers, Internet Information Services (IIS) delivers the content to the users. CPU and especially memory are important here, as SharePoint uses the server memory to cache the information requested by the users. Usually this is not an issue, as IT administrators can monitor these resources as part of the normal performance evaluation of Microsoft servers.

Fewer IT administrators or database administrators have knowledge of how SharePoint uses the SQL Server and which components to optimize to deliver the best SharePoint end user experience. This article gives a little insight into this subject.

Physical or virtual servers?
Running Windows Server and SQL Server directly on the hardware itself will give you the optimal performance, but you will not have the advantages of a virtual environment such as failover, backup/restore and flexibility for the IT administrators. In my experience a good virtual setup is as good as a physical one, and it is the option most of my customers choose.

The SQL Server instance
On the SQL Server we have some general best practices to follow, and this means that the database administrators must change some default values to make the SQL Server, and thereby also the applications, perform better. SQL Servers that host SharePoint databases are no exception, but certain settings should be changed to give even more power to the SharePoint servers. These settings can also influence other databases in the same SQL instance, so it is advised to have a separate SQL Server for SharePoint – or at least a separate instance on an existing SQL Server.

If you choose to run multiple instances of SQL Server on a single Windows Server, you need to control your CPU usage, minimum and maximum memory usage, and database placement on disks. If the load on the databases and the SharePoint servers is small to medium, you will achieve okay performance. But still, try to separate the SQL Servers onto multiple Windows Servers, as the network bandwidth used also has an effect on performance.

Memory is also used by the SQL Server for caching data requested by the applications. Therefore you should have plenty of memory allocated to the SQL Server for the best performance. It can be hard to evaluate how much memory is required, as the SQL Server adjusts the memory used depending on how much memory is available.

The CPU is used to perform calculations, run queries and sort the data. CPU usage can easily be monitored to evaluate whether more CPU power is needed. Multiple processors with multiple cores are highly recommended, as the SQL Server can take advantage of these.

The default file locations of the database and log files should be set according to your needs (see the section Disk and file locations).

SQL Server version
The SQL Server should be SQL Server 2008 (64-bit) with SP1 and CU5, or a later version. This edition of SQL Server is optimized for 64-bit, and Microsoft SharePoint 2010/2013 can take advantage of the new functionality, giving you a better overall performance. SharePoint will detect the SQL Server version and adjust its queries according to the features available.

Memory limits
Add as much memory as possible on the Windows Server – the SQL Server will automatically allocate and cache based on the amount of RAM for the SQL Server instance.

The SQL Server will control the memory usage according to predefined rules in the SQL Server dynamic memory management.

If you only have one SQL Server instance on the server, there is no need to do memory adjustments!

If other applications or instances of SQL Server run on the same Windows operating system, then you need to adjust the memory limits of every SQL Server instance. By default the SQL Server dynamic memory management will use all available RAM on the operating system and release memory if this is required, as Microsoft's documentation describes.

The minimum memory limit is a value that the SQL Server instance will not go below once its allocation has passed this limit. This value should be set to avoid the server releasing memory resources unnecessarily.

The maximum memory limit defines the maximum memory the SQL Server instance is allowed to use. If you have multiple instances on the same Windows Server then evaluate how much each SQL Server instance should be allowed to use and how much should be available to the operating system and other programs.
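As a sketch of how the limits could be set, here is an example using sp_configure through the Invoke-Sqlcmd cmdlet from the SQL Server PowerShell module. The instance name and the memory values are examples only and must be adjusted to your own environment:

     # Example values only: 4 GB minimum, 12 GB maximum for this instance.
     # 'show advanced options' must be enabled before the memory options are visible.
     $sql = "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; " +
            "EXEC sp_configure 'min server memory (MB)', 4096; " +
            "EXEC sp_configure 'max server memory (MB)', 12288; RECONFIGURE;"
     Invoke-Sqlcmd -ServerInstance "SQL01\SHAREPOINT" -Query $sql

Remember to leave enough memory for the operating system and any other instances on the same server.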

Disk and file locations
You have the possibility to set the default locations for the database files and the database log files for each SQL Server instance on the Windows Server. It is advised to use separate physical disks, as the requirements for database files and log files are different. Performance-wise you should also evaluate the load of the different databases, as some might require more resources than others. Log files require good write performance, while the database files mostly need good read and caching performance.

It is advised to place non-SharePoint databases on separate physical drives because SharePoint is very database transaction intensive.

The optimal disk cluster size (set when formatting the drive) is 64 KB. Using this recommended cluster size the server reads 64 KB at a time and can deliver larger chunks of data to the SQL Server.
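On Windows Server 2012 and later this could be done with the Format-Volume cmdlet. This is a sketch only: the drive letter and label are examples, and formatting of course erases everything on the volume:

     # Example: format drive S: with a 64 KB allocation unit size for SQL data files
     Format-Volume -DriveLetter S -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLData"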

For the best performance you can add additional transaction log files on separate disks (not disks you are using for database files). Create same-sized files on separate disks for the tempdb, content and search databases. Use the same number of files as physical processor sockets on your SQL Server to take advantage of the CPUs (e.g. 2 data files on a 2-socket machine, 4 data files on a 4-socket machine).
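As a sketch, an extra same-sized tempdb data file could be added like this. The logical name, path, sizes and instance name are examples only:

     # Example: add a second tempdb data file on a separate disk (for a 2-socket server)
     $sql = "ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, " +
            "FILENAME = 'T:\SQLData\tempdev2.ndf', SIZE = 4096MB, FILEGROWTH = 1024MB);"
     Invoke-Sqlcmd -ServerInstance "SQL01\SHAREPOINT" -Query $sql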

System databases and SharePoint
The Model database, "Model", is a system database that defines how a new database is created by default, both by administrators and by SharePoint. Set the default size of the Model database to the size the future databases should have. It is advised to set this to the size each of your content databases is believed to reach in 1½-2 years. You should set the size to one value before the SharePoint installation and change this to your future default content database size afterwards.
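A sketch of adjusting the Model database default sizes. The sizes and the instance name are examples; discuss the values with your database administrator:

     # Example: make new databases start at 10 GB data / 2 GB log
     # (modeldev and modellog are the standard logical file names of the Model database)
     $sql = "ALTER DATABASE model MODIFY FILE (NAME = modeldev, SIZE = 10240MB); " +
            "ALTER DATABASE model MODIFY FILE (NAME = modellog, SIZE = 2048MB);"
     Invoke-Sqlcmd -ServerInstance "SQL01\SHAREPOINT" -Query $sql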

The default created size could differ from the settings you want on non-SharePoint databases, and therefore it is advised to have a separate SQL Server or SQL Server instance for the SharePoint databases.

In my experience the growth rate is not adjusted on new databases when these are created by SharePoint, so please check the database file growth periodically (you can use my remote Windows PowerShell script, which can be found here: Checking SQL database db and log file sizes and growth with PowerSh…)

The Temp database, "Tempdb", is also a system database, and it is used in a special way by SharePoint. It is used only as a working area and does not contain data that is important. At every SQL Server reboot the Temp database is rebuilt. We should set the recovery mode to SIMPLE, as we do not need to recover any information in the tempdb database.
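Note that tempdb already uses the SIMPLE recovery model by default. A quick way to verify the recovery model of all databases on an instance (a sketch, again assuming the Invoke-Sqlcmd cmdlet and an example instance name):

     # List the recovery model of every database so you can spot deviations
     Invoke-Sqlcmd -ServerInstance "SQL01\SHAREPOINT" `
         -Query "SELECT name, recovery_model_desc FROM sys.databases ORDER BY name;"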

The Temp database should be placed on a very fast drive for best performance on sorting and filtering operations. Microsoft SharePoint views are never faster than the “tempdb” can deliver data.

Database settings
Microsoft SharePoint uses different databases for different purposes. Some contain configuration data and others the actual content. Most databases are created during the installation of the SharePoint farm, so you must have your SQL Server settings correct before installing the product if you want to avoid performance issues or reconfiguration.

To make sure that putting content into your SharePoint databases is fast, I recommend you pre-grow these. Set the database and log sizes to what you expect within a year. This will also help reduce fragmentation and its performance impact.
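A sketch of pre-growing a content database. The database name, logical file names, sizes and instance name are examples only; find the real logical names with sp_helpfile first:

     # Example: pre-grow a content database to 50 GB data / 10 GB log
     $sql = "ALTER DATABASE [WSS_Content_Intranet] MODIFY FILE (NAME = WSS_Content_Intranet, SIZE = 51200MB); " +
            "ALTER DATABASE [WSS_Content_Intranet] MODIFY FILE (NAME = WSS_Content_Intranet_log, SIZE = 10240MB);"
     Invoke-Sqlcmd -ServerInstance "SQL01\SHAREPOINT" -Query $sql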

You should evaluate the usage of every database and adjust the following:

  • Database and log files
    • Location (fast, separate drives)
    • Size (~ what you expect within 1 year)
    • Growth (~ 25% of the database size)
    • Recovery model (FULL recommended)
    • Auto-shrink -> leave at the default: OFF

You should set the recovery mode to FULL on every SharePoint database to make sure that your data can be restored properly if a disaster happens – remember to do database and log backups to avoid running out of disk space. Also consult a SQL Server Administrator to ensure this is set up correctly – and check the backups with test-restore so you know that everything is good (and the restore time is okay with you).
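With the FULL recovery model the transaction log keeps growing until a log backup is taken, so schedule these frequently. A minimal sketch (the database name, backup path and instance name are examples; real environments normally use a maintenance plan or a backup product):

     # Example: back up the transaction log of a content database to disk
     $sql = "BACKUP LOG [WSS_Content_Intranet] TO DISK = 'B:\Backup\WSS_Content_Intranet_log.trn';"
     Invoke-Sqlcmd -ServerInstance "SQL01\SHAREPOINT" -Query $sql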

It is advised that the Search databases are placed on yet other physical disks, or even a separate SQL Server, as the crawler and index services perform many transactions during a crawl and index propagation.

A small side-note: if your database administrators create new databases for you, they must create these with the collation Latin1_General_CI_AS_KS_WS. During the installation of the SQL instance they could set this as the default for new databases. If databases are created from the SharePoint user interface, this value is set automatically.
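A sketch of creating a database manually with the required collation (the database name and the instance name are examples):

     # Example: create a database with the collation SharePoint requires
     Invoke-Sqlcmd -ServerInstance "SQL01\SHAREPOINT" `
         -Query "CREATE DATABASE [WSS_Content_Manual] COLLATE Latin1_General_CI_AS_KS_WS;"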

Shrinking the logfiles – when?
The database transaction log files seem to keep on growing, especially if you do not back them up. This is because every transaction is put in them and not "flushed" until you do a log backup. When you perform a log backup the file gets "empty" and is ready for new transactions. If your transaction log files are too big, then you need to shrink them, but please leave some empty space and don't shrink to the minimum size. When a file needs to grow, it has a large impact on performance.
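A sketch of a one-off shrink that leaves headroom. The logical file name, target size and instance name are examples only, and shrinking should not be scheduled as a routine job:

     # Example: shrink the log file down to 4 GB (4096 MB), not to the minimum
     $sql = "DBCC SHRINKFILE (N'WSS_Content_Intranet_log', 4096);"
     Invoke-Sqlcmd -ServerInstance "SQL01\SHAREPOINT" -Query $sql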

Network
As all data is sent over the network, the bandwidth between all servers must be optimal. A 1 Gbit network connection on the servers is advised at all times. Also be sure that the devices in between can handle the load, such as network routers, switches and firewalls.

Key takeaways
Here are some key adjustments from this article:

  • Use a separate SQL Server, or at least a separate instance, for SharePoint
  • Set minimum and maximum memory limits when running multiple instances
  • Place database files, log files and tempdb on separate, fast disks formatted with a 64 KB cluster size
  • Pre-grow the Model, content and search databases and check the growth settings periodically
  • Use the FULL recovery model on SharePoint databases and back up the transaction logs frequently

I made a little graphical overview also:

(Image: Optimizing SQL Server Performance for Microsoft SharePoint 2010-2013)

What I did not cover
Adjusting the SQL Server for optimal performance is not easy, and I can only give you my experiences. There are a lot of possibilities and scenarios that I did not cover here, and some of these are:

  • SAN optimization
  • Database partitioning

Please contact me or reply to this article if you have comments, suggestions or any other thoughts.

SP Conference for Business users – Workshop day

If you are interested in my summary of the workshop day at the SharePoint Conference for Business Users arranged by T. A. Cook, then please read it at the SharePoint Community net:

Error publishing internal SSL site through Forefront UAG

If you have an internal site that you publish through Microsoft Forefront UAG, and the internal site is protected by an SSL certificate, you could run into this error:

UAG specified target common name of the certificate is invalid

In my test environment I started with:

  Internal Site: https://mysite.domain.local
  External URL:
  SSL Certificate DNS/CN (Common Name): intranet.domain.local

I publish this site with Forefront UAG through an HTTPS trunk, and the error occurs.

(Image: UAG setup example)

The reason for this error is that the UAG checks the internal SSL certificate for the external "Common Name". The internal users usually access the site directly, and their browser checks for a match on the internal "Common Name", which is intranet.domain.local. But the Forefront UAG needs to check whether the certificate includes the external name as well. So even though you configure SharePoint AAM (alternate access mapping) and configure your UAG correctly, this does not solve the problem.

I found some forums on the internet that suggested just switching the internal site to unencrypted HTTP on the standard port 80, but this did not match my security policy.

My solution was to create a new internal SSL certificate that includes the external "Common Name", and switch to this new certificate in IIS (Internet Information Services) on the internal web server. After the adjustments I got a working external site.
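In a test environment, a certificate covering both names could be sketched with the New-SelfSignedCertificate cmdlet (available on Windows Server 2012 and later). The host names here are examples only, and a production site should of course use a certificate from a trusted CA:

     # Example names only: self-signed certificate with both the internal
     # and the external name as subject alternative names
     New-SelfSignedCertificate -DnsName "intranet.domain.local", "intranet.company.com" `
         -CertStoreLocation "Cert:\LocalMachine\My"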

The new table now looks like this:

  Internal Site: https://mysite.domain.local
  External URL:
  SSL Certificate DNS/CN (Common Name): intranet.domain.local