scom web application monitoring – making it useful – part 1

I could go on for days about SCOM's URL monitoring and how it needs to be improved. Honestly, it kind of sucks. So here I will attempt to describe what I think is wrong with it and how I work around it. The items in bold below are what I feel are failures in the way this was designed.

Also, I am not writing this as strictly a "how to monitor a web app" post; there are already plenty of those. This is just about the changes required to make it useful. Here is a good article with the basics of setting up a web application monitor in SCOM.

  • Requirements

To begin with, you will need to figure out what you need to monitor. In many cases it is simple enough to pull up the main page of a website: as long as it comes up in a reasonable timeframe and returns an HTTP status code of 200, you're OK. This sort of monitoring is useful, but you can get a lot more out of it. What I like to do is get the devs to code up something special through some sort of bribery or blackmail. In our case, they defined five business processes, for example "make a payment," and created a page that does the back-end work of making that transaction and then the other end of the work, which is cleaning up after itself. What you get in the end isn't exactly user experience, but it's a good way to track the ongoing performance of a process relative to itself, and it's a very good up/down indicator.

Since we have dev environments as well, I have those on a development SCOM server, and I have the same web monitoring in place in the first production-like environment. This allows our QA folks to compare state and response time to see if the environment is working before they release code or start a test, and also to see the impact of new code by comparing response times from before and after the release.

  • Once you have your URLs, it’s time to get to work.

Create a web application monitor and give it your URL. The problem with the default settings is that you are only logging the transaction response time, not alerting on it. From an alert standpoint, there is no timeout for your web request; in fact, the only thing SCOM will tell you out of the box is whether it was eventually able to pull up the URL without an HTTP response code greater than 400. This default setting is not useful!

To fix this, add response time criteria like this.


Because of a problem with the service level dashboard that I will explain later, I only put one HTTP request in each web application monitor. This brings me to a little UI weirdness: you can also set response times in the "configure settings" for the specific URL pull, like this.


I always leave these performance criteria blank because the other spot is easier to see and gives me more. This one just seems redundant.

  • Seeing the data

Now, once you gather some data, you will want to, well, see what's going on. To do this, create a new performance view in the monitoring console and scope it to "collected by specific rules"; then you get to go pick your rules manually. This is where Microsoft fails again, because the list of rules is not searchable and they all have arbitrary names. For web requests, I figured out they are called "Performance Collection: Transaction response time total for <name of web app monitor>", like in this screenshot.


Now that you have done that, you will see a nice blank performance chart with some counters to check.


Now when we pick one, we get a pretty graph like this.


This brings me to my next issue with all of this: the performance chart settings are user-specific, meaning I cannot create a view of any sort that contains performance information with the counters already checked. No matter which ones I put in, and no matter whether you use a performance view or a dashboard view that contains one, the counters have to be selected every time. This is a pain!

This also means that if you wanted to, say, get fancy with a URL to a specific view, you cannot just create one and have folks click the link and end up at a pretty performance chart with the counters already checked. The fact that you cannot do this is a serious limitation of SCOM, IMO.

  • setting up alert parameters (what you cannot change)

You will likely have to play with the values a bit to keep them from false-alerting. This brings me to my next problem with SCOM web monitoring: you cannot change anything about how it samples other than where it samples from (which host) and how often. What I would love is to be able to say "only alert when two consecutive thresholds are exceeded," but that's not an option. We get a lot of failures at night during our backup window that cause a single transaction to go out of SLA, and we get alerts based on that. As a result, we have to set our response time thresholds to the highest level they could possibly reach so that we aren't false-alerted every night, but this makes them so high that the alerting becomes less useful during the daytime. As of now I do not have a workaround for this.

  • stopping duplicate alerts

When you get your first alert you will see that two are sent: one for the URL pull and one for the aggregate monitor on the web application monitor. It doesn't make sense to me why it would be set up this way at all, so let's fix it.

Start by right-clicking on one of the alerts and opening the health explorer for it. Expand it out and you will see something like this.


Each of the red lines has an alert set up for it, and the lower one, for the actual request, rolls up into the web application one. In my mind the web application alert is redundant, so I am going to disable it. Right-click, choose "monitor properties", go to Alerting, and uncheck it.


Now you will receive one alert instead of two.

  • useful alert details

Of course, the text of the alerts isn't useful at all out of the box (it doesn't tell you whether the URL failed on time, SSL, HTTP response, or anything else). I am using this article as a basis for fixing that, but I don't have it totally worked out yet. This will require some further tweaking.

This post ended up being longer than I intended (there’s a lot to fix) so I am going to break it up into two parts and get the service level dashboard stuff into a 2nd post.

scom bug with the service level dashboard

I have a web application URL monitor here, and I am attempting to remove a performance counter on which I have a service level objective set. Because of that SLO, when I try to delete the counter from the web application monitor, I cannot, and I receive this error.


Since I created these service level objectives recently, I was able to figure this one out quickly, but the product really should handle this condition more gracefully.

scom service level dashboard gotcha – gauge order

I created a SCOM service level dashboard and had some consistency issues where the gauges were out of order on a couple of the service levels. What I found is that the order you put them in your service level objectives matters. If you want them to be consistent, the order has to be the same (screenshot below) for all of the service level tracking objectives you want to show in the same service level dashboard.


Unfortunately, you cannot change the order directly, so if you get them out of order you have to delete and recreate them in the order you want. New ones go on the bottom.

AD: find DNS records that do not age

We are about to enable scavenging for DNS here at work and needed a list of DNS entries that were set not to scavenge. I did it like this:

  • dnscmd /zoneprint <zonename> >c:\joe.txt
  • findstr /v "[" c:\joe.txt >c:\noage.txt

The text file you get at the end has the entries that don’t age. That’s it!
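The reason the findstr /v "[" filter works: in dnscmd /zoneprint output, records subject to aging carry a bracketed [Aging:...] timestamp, while static records have no bracket. Here is a quick sketch of the same filtering logic against made-up zoneprint-style lines (the sample records are hypothetical, and grep -v is the *nix stand-in for findstr /v):

```shell
# Hypothetical sample resembling dnscmd /zoneprint output: aging records
# carry a bracketed [Aging:...] stamp, static records do not.
cat > /tmp/zone_sample.txt <<'EOF'
host1 [Aging:3605462] 1200 A 10.0.0.1
printer1 3600 A 10.0.0.50
host2 [Aging:3605470] 1200 A 10.0.0.2
EOF

# Keep only lines WITHOUT a "[" -- i.e. the records that do not age.
grep -v '\[' /tmp/zone_sample.txt > /tmp/noage_sample.txt
cat /tmp/noage_sample.txt
```

Only the static printer1 line survives the filter, which is exactly the "does not age" list you want.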

Note: I had a little help from Marcus on this. He has a similar post here.

scom and redirects to views

I am probably the only person on the planet doing things this way, but I want to document this anyway.

In the SCOM web console, I am publishing views for various bits of the business, using the procedure mentioned here. This is not perfect because you cannot save the way you want a performance view to look (which counters are checked), but it is at least a start. (Hello Microsoft! Please fix!) To make the whole thing easily memorable, I create a URL like http://weberrors, and this URL contains… the website errors.

How I am doing this is pretty easy.

  1. On any server with IIS (I use my SCOM server), create a new website.
  2. For the site name, use redirect.weberrors so you know what it is for (I have quite a few of these).
  3. Create the path inside IIS as c:\inetpub\redirect.weberrors.
  4. For the binding, use "http" and "All Unassigned", but you have to enter a host name, which is "weberrors" in my example.
  5. Once the site is created, click on it and look for "HTTP Redirect" on the right-hand side.
  6. Click "Redirect requests to this destination" and input the URL you made from the link at the top of this post.
  7. It is also important that you check "Redirect all requests to exact destination"; if you do not, see the note at the bottom.
  8. Now that the IIS part is done, open up DNS for your domain.
  9. Create a new CNAME entry pointing "weberrors" to your IIS server.
  10. Once everything replicates, folks in your company will be able to type "weberrors" in their browser and see the errors.

This is a pretty simple thing and it makes navigating to specific spots in the SCOM UI much easier.


Note: If you do not check the box for "Redirect all requests to exact destination", then when IIS redirects it will add an extra slash "/" to your URL. SCOM does not like this! You will get an error:

  • Unfortunately the "Name of your view" view cannot be displayed.

All you have to do is what's in step #7 above. That will make the redirects work properly.
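For reference, steps 5-7 simply write an httpRedirect element into the new site's web.config. A sketch of what the result looks like, with a placeholder destination standing in for the view URL you built earlier:

```xml
<configuration>
  <system.webServer>
    <!-- exactDestination="true" is the "redirect all requests to exact destination"
         checkbox; without it, IIS appends the request path (and a trailing slash)
         to the destination URL, which SCOM rejects. -->
    <httpRedirect enabled="true"
                  destination="http://scomserver/OperationsManager/YourViewUrlHere"
                  exactDestination="true"
                  httpResponseStatus="Found" />
  </system.webServer>
</configuration>
```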

Microsoft, you HAVE to do a better job than this

Here’s an error from SCOM.

Performance data collection process was unable load SQL Server Authentication configuration information. Account for RunAs profile in workflow "Microsoft.SystemCenter.DataWarehouse.CollectPerformanceData", running for instance "" with id:"{81890C12-35B3-7AEA-C0FF-3EFCA7486E97}" is not defined. Workflow will not be loaded. Please associate an account with the profile. Management group "Access"

OK guys, WHICH profile? Come on, how hard is this? I mean, I can guess, and I have, and guess what? It has one associated.

new toy – 91 Jeep YJ

I have had my Jeep for a few months now, but I just got it to the point where it's not stock and worth speaking about and showing pics of. My cousin Graham of Rough Customs in Acworth, GA (formerly in Cartersville) did all the fab work for me. The setup is all his design, using SOA up front so it would ride well and flex, while staying more conventional in the back because of axle wrap issues. This is my first Jeep and my first real off-road vehicle other than my mostly stock 98 F150. It's been out on what Graham called "medium" trails and it does real nice. The tires stay on the ground, and even open on both ends it's made it up everything I have tried with very little drama, so I'm a very happy camper right now.


Here are the specs

  • 1992 YJ, 4.0, 5-speed, 193K miles
  • 33×12.50 BFG MTs on some random 15×8 stock Jeep wheels


Front:

  • Used Dana 30 with 4.10 gears
  • Lock-Right locker (duh)
  • SOA on stock springs
  • Sway bar on old shock mounts (plates flipped)
  • Custom shock mounts
  • Pro Comp ES9000 shocks
  • No track bar
  • All-new urethane bushings
  • Longer brake lines


Rear:

  • Ford 8.8 axle with 4.10 gears
  • Rough Country 4” springs
  • Pro Comp ES3000 shocks
  • Big u-joint in back
  • No track bar
  • Longer brake lines


Still to do:

  • Right now it’s low steer, and this setup is a little, ugh, weird to drive on the road. I have another D30 sitting in the back of the truck with a high-steer setup on it, along with TJ axle shafts (one-piece passenger side), big outer u-joints, and the hubs (WJ), brakes (WJ + Explorer), etc. to make it all work.
  • Longer shackles for the rear to even it out a bit are in transit (1.25” lift Rough Country boomerang shackles).
  • I need to make some sway bar disconnects. Taking them off with a wrench and a hammer is a PITA.

Here’s a link to some pictures I took during the build and afterwards.

And just for fun, here’s some video of Graham messing around in his CJ.

.net health monitoring

This is a little blurb I use almost everywhere, for almost everything, that will log all sorts of useful info about a .NET app to the Application event log. It will grab unhandled exceptions as well as application lifetime events (app pool or app domain restarts, etc.). This is a really good one to use when your devs won't add this to the code themselves! It has worked for me straight up in any .NET app. All you do is place this in the web.config.

<healthMonitoring enabled="true">
    <eventMappings>
        <clear />
        <!-- Log ALL error events -->
        <add name="All Errors" type="System.Web.Management.WebBaseErrorEvent" startEventCode="0" endEventCode="2147483647" />
        <!-- Log application startup/shutdown events -->
        <add name="Application Lifetime Events" type="System.Web.Management.WebApplicationLifetimeEvent" startEventCode="0" endEventCode="2147483647" />
    </eventMappings>
    <rules>
        <clear />
        <add name="Application Events" eventName="Application Lifetime Events" provider="EventLogProvider" profile="Default" minInstances="1" maxLimit="Infinite" minInterval="00:01:00" custom="" />
        <add name="All Errors Default" eventName="All Errors" provider="EventLogProvider" profile="Default" minInstances="1" maxLimit="Infinite" minInterval="00:00:00" />
    </rules>
</healthMonitoring>


SCOM 2007 R2 – workgroup/DMZ server notes

This is harder than it should be. Here are my notes on doing this.

1. On the cert server, go here: http://blah/certsrv/

2. Request a cert. Choose type "other" and paste in the below OID.

3. OID =,

4. Make sure to check "key exportable". Use the FQDN of the server for both the name and the common name.

5. Open server management for Certificate Manager and approve the pending request.

6. Go back to the website and install the cert.

7. MMC, Certificates snap-in, Personal. Export the cert; mark the private key exportable.

8. Copy the cert to the client server.

9. On the client server, open the Certificates MMC, import the cert, and mark it as exportable.

10. Run MomCertImport on the client and choose the cert.

11. Restart the System Center Management service on the client.

12. Wait a minute, then go to the Ops console, Administration, Pending Management, and approve it.

13. Done!

Dear SCOM. You blew it

In case you weren't aware, for SCOM to work against a non-domain machine, all manner of certificates is required between the RMS and the agents. Not only that, but you have to use the fairly archaic tools provided, and you will need your own certificate authority too. This is such a complete and utter #FAIL that I don't really know where to start. Mainly, my issue is that it doesn't need to be this hard: if someone wants to see the CPU time on my webserver, then by all means, hack in, but damn if I care enough to go through this level of work for it. And that brings me to my second issue: the shit just doesn't work. Sure, you could say this is a "rush it out the door" kind of thing, but that was back in 2007, and there have been plenty of releases since, including an R2 version, yet this useless and archaic process is still in place.

So in short, the SCOM guys failed by demanding heavyweight security where it isn't needed, and then making it 10 times more difficult than necessary. FAIL.