Category: "SCOM 2012"

Savision Live Maps Service Health Index

SCOM, System Center, SCOM 2012, SCOM 2016

Starting with version 8.5, Savision Live Maps includes a new feature called the Service Health Index.
Let us investigate what it does.

Those who have been using Live Maps over the last few years know about Services monitoring, which is basically the definition of a single application/service/distributed app, split into 3 parts: Infrastructure, Application and User. We place items like operating systems and disks in the Infrastructure part; specific server roles like web server and domain controller, plus monitored items such as a website, database or Windows service, go in the Application layer; the user checks go on the User side. This way we can display the state of the Service as a whole, but also its main parts and the effect the users might see.

However, if one of the items in any of the 3 main maps goes red, this usually makes that map, and the whole Service, go red as well. Things can be overridden with custom health rollups, but we are still left with the usual Green-Yellow-Red colors and the rollup to the top. There have been several requests to be able to specify which parts of an application are more important than others. For instance, imagine a web farm. Let's say this farm has 3 web servers and 1 database. Now, if 1 web server goes down this will make the application go red, but the website is still up. The user-side website check would still show a green state as well, but the health rollup makes no distinction and rolls the Application map up to the Service state.

Now imagine the database going down, assuming for a second that any high availability solutions for this database have failed. Without the backend database the website will not work. This is again rolled up to a red state for the Application side and up to the total Service health. Depending on how the user-side web checks are set up, this could make that check go red as well, as a user impact. However, looking at both imaginary situations, the Service went into a red state and we potentially did not see much difference in how important that red state was to the service.

Bring in the new feature Service Health Index!

Quite simply, we take the list of items we are monitoring in the Infrastructure/Application/User maps and define how important each one is to the working of the Service, on a scale from 1 to 5, with 5 being the most severe impact.

What does this look like? Let's open up the Savision Live Maps Authoring Console and open one of the Services. In this case I am opening the SCOM service. There is now a tab called Health Index.

From this screen you can Enable the Health Index and set it to update its health index indication every x minutes. I set it to 15 minutes at first.
There is the option to set which states have an impact on the Health Index:

So I added Warning in this case as an example.

Next you will see a list of all objects currently added to the 3 maps (Infrastructure/Application/User), each assigned to one of the levels. You can now drag them around to match the effect each would have on your Service.

So over here I have been dragging some components of the SCOM Service up to the higher impact levels.
The SCOM operational database, the main Resource Pool and the Data Access Service were placed in the Catastrophic level (level 5) in this case. Next, move down and place the other components according to their expected impact on the working of the service.
Then save the result. Give it the number of minutes you specified for it to calculate the health index the first time.

If we now go to the All Services Dashboard we see the following:

Luckily the SCOM service is still green. On the other service (Exchange) you can see a Health Index of 4, which means this red state is serious, but not catastrophic yet.

So now we have a combination of the health state rollups of the 3 main components of every Service and an additional Health Index indicating the resulting effect and priority of handling the situation!

Enjoy your monitoring and pass on the value of monitoring to the whole organization by displaying the state of company services and its impact to all stakeholders!
Bob Cornelissen

NiCE DB2 Management Pack updated

SCOM, System Center, SCOM 2012, SCOM 2016

The new version 4.20 of NiCE DB2 Management Pack has been released!

New with this release
• Feature: Support of DB2 BLU Acceleration
• Feature: Monitoring of InDoubt Transactions
• Security: Support of DB2 restrictive databases
• Security: Support of non-root setup and operation
• Security: DB2 Instance attach extensions for user and password options
• Platform: New platform support for IBM AIX 7.2
• Platform: Support of non-standard paths for both installation path and instance user home directory

If you are interested in learning more, you can click on the NiCE logo to the right of this screen.

Happy monitoring!
Bob Cornelissen

SCOM Web Console Application Pool crashing every 15 minutes

SCOM, System Center, SCOM 2012, SCOM 2016

Recently I had a customer where the SCOM web console application pool would be crashing every 15 minutes (2 servers in this case). This was on a SCOM 2016 instance on a Windows 2012 R2 server.

The error message we got was (the process id is a different number each time):

A process serving application pool 'OperationsManagerMonitoringView' terminated unexpectedly. The process id was '1111'. The process exit code was '0xc0000005'.

The exit code 0xc0000005 is an access violation, which is a fairly generic failure code.
Looking at the application pool that kept crashing, we saw it was running under the security context of "ApplicationPoolIdentity".
Several policies are in effect in this environment, and one of them was probably preventing this built-in placeholder account from accessing some registry key or local path it needed.

We changed the application pool identity to LocalSystem: open IIS Manager -> find the application pool -> click Advanced Settings on the right -> find Identity and use the dropdown to select LocalSystem. We could also have used another account that was already running another application pool on this server, but went with LocalSystem first.
Recycle the application pool after this.

The crashes stopped from that point, and the SCOM web console was reachable again.

Hope it helps somebody sometime.
Bob Cornelissen

SCOM agent for Linux and root squash

SCOM, System Center, SCOM Tricks, SCOM 2012, SCOM 2016

At one of my customers there was a problem deploying SCOM agents to Linux servers through a script. They had a number of Red Hat 6 servers, and those all went well. On the Red Hat 7 servers, however, the agent refused to install; pushing the agent from the console failed as well. It seemed to stop around the file copy stage, where the rpm file gets copied to the server and then run for installation.

It turned out a feature called "root squash" was causing the issue. It restricts root's rights on NFS-mounted volumes, so root cannot simply access or run commands from any directory, the /home directories for instance. When they turned off this feature, the agent installed immediately.
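For reference, root squash is an option set per export in /etc/exports on the NFS server side. A sketch of what toggling it looks like (the share path and client name here are made up for illustration):

```
# /etc/exports on the NFS server (hypothetical share and client)
# root_squash (the default) maps root to an unprivileged user;
# no_root_squash disables that mapping so root keeps its rights
/home   agentserver(rw,sync,no_root_squash)
```

After editing the exports file, the NFS server needs to re-export the shares (for example with exportfs -ra) for the change to take effect. Keep in mind that disabling root squash loosens security on the share, so discuss it with the Linux/storage team first.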

Just writing this down because I am sure I will run into this again somewhere.

Happy agent deployment!
Bob Cornelissen

Test your knowledge on SCOM/OMS/Azure and more

SCOM, System Center, SCOM 2012, SCOM 2016, Windows 2016, OMS

Now test your knowledge on SCOM/OMS/Azure and more through this quiz, for fun and for a chance to win a Band as well :D

You can take the quiz by clicking on the picture or by following this link:
Test your knowledge on SCOM/OMS/Azure and more

Have fun!
Bob Cornelissen

Error 500.19 after installing Savision LiveMaps Unity Portal

SCOM, System Center, SCOM Tricks, SCOM 2012, SCOM 2016

Today I was doing a quick installation of the Savision Live Maps Unity Portal 8.2. I downloaded the self-extracting executable from the website and of course arranged a license key. While running the installer I selected the Express setup, which just pushes the web portal onto the machine and not the other components available in the Advanced installation option. The installation ran in 2 minutes on a slow machine, including extracting the files and running the checks.

After installation the web page automatically opens up, and I was greeted with the following error:

HTTP Error 500.19 - Internal Server Error
Module: WindowsAuthenticationModule

In the error description there is talk of a configuration section being locked at parent level.

Screenshot of the error:

What happened is that, at the server level, Windows Authentication is turned off and that setting is locked for the whole machine. The Live Maps Portal tries to read authentication settings from its own configuration file, and because this section is locked at a higher level, IIS throws an error.

How to fix it:

Open IIS Manager
In the left menu select your server name
In the middle of the screen select Configuration Editor

Near the top of the Configuration Editor is a selection box for which section you want to see and edit.
Go to system.webServer/security/authentication/windowsAuthentication

In the right-hand menu you will find a link to Unlock Section. Click it to unlock this configuration item.

Now any lower level (Sites or Applications within a site) can have their own configuration for Windows Authentication.

After refreshing the error page, the Live Maps Unity Portal came up fine!

Happy dashboarding!
Bob Cornelissen

SCOM: DMZ or workgroup machines refusing to connect to SCOM

SCOM, SCOM Tricks, SCOM 2012

Ran into a customer issue today with a nice clean SCOM 2012 R2 installation with URs applied. Certificates were arranged and momcertimport had been run. On the agent machines in the DMZ we had the agent installed, the UR on it, the root certificate imported, and the certificate meant for the computer imported; momcertimport was run to get the correct certificate in use. Yet there was no communication at all between agent and server. This is what I found:

So first checks are:

  1. Does the agent machine have the certificate for the name of the server (which in a workgroup can be the short name and in a DMZ domain a fully qualified name)? Yes
  2. Does the agent machine trust the CA which issued the certificate? (In this case a customer-owned CA, so the root chain certificate was imported.) Yes
  3. Can the agent resolve the SCOM server name used while configuring the agent? Yes
  4. Is the management group name we used in configuring the agent correct (case sensitive!)? Yes
  5. Is there a firewall blocking TCP 5723 from agent to SCOM server? Yes! OK, this was fixed quickly and verified with telnet. Still no communication! Moving on.
  6. On the SCOM server, did we import the CA root chain as trusted and did momcertimport run against the correct machine certificate with the correct FQDN for that server? Yes
  7. Restart the Health Service on both sides... Yes. No effect.
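The telnet check in step 5 can also be scripted, which is handy when you have many DMZ agents to verify. A small sketch (the server name below is a placeholder, not from the original environment):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Rough equivalent of the telnet test: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 'scomserver.domain.com' is a placeholder for your management server name
# print(port_open("scomserver.domain.com", 5723))
```

Run it from the agent machine towards the management server; a False result points at firewalls or routing rather than certificates.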

Man, usually it's name resolution, firewall and routing, a certificate with the wrong name, no certificate, or an untrusted certificate. Pffff.

Something must be wrong with the SCOM server, I'm sure of it.

Next step: let's check whether all our SPNs are correct.

setspn -L scomservername

Hey, wait a second, I see an entry like this:

MSOMSdkSvc/scomservername

Now, this SCOM server was installed with the SDK service running under a domain account. So this SPN should not be registered on the computer account itself but on the service account in the domain.

setspn -L domain\sdkserviceaccount

Sure enough, the MSOMSdkSvc entry for the mentioned server is not listed on this service account.

Alright, we cannot register the correct SPN until we remove the wrong one, so we first delete the wrong entries:

setspn -d MSOMSdkSvc/scomservername scomservername
setspn -d MSOMSdkSvc/scomservername.domain.com scomservername

Next we enter the SPNs on the service account:

setspn -s MSOMSdkSvc/scomservername domain\serviceaccount
setspn -s MSOMSdkSvc/scomservername.domain.com domain\serviceaccount

And we check our results again with the setspn -L command.
Looks fine now.
Try again.
Grrrrrrr.

It must be the certificate somehow.
Open the Certificates MMC and check the computer certificate. Is it valid, is it trusted, is it for the right purposes, does it have the correct name... Yes.
Run momcertimport again... there is only 1 certificate to choose from, and it is the same one. Restart the Microsoft Monitoring Agent service afterwards.

Same.

Wait a second. Let me check the registry for this certificate. What momcertimport does is not that difficult: it grabs two properties of the certificate and creates two registry values from them for SCOM to use.

Aha! NO registry values!

In this key there must be two values relating to the certificate:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Machine Settings

Alright, so I will create them manually!
What you do is open the properties of the certificate. You need the Thumbprint and the SerialNumber.

Create a New -> String Value
Name it: ChannelCertificateHash
Copy and paste the Thumbprint contents into it and remove the spaces in between

Create a New -> Binary Value
Name it: ChannelCertificateSerialNumber
Now go to the properties of the certificate and click the Serial Number. It is again a string of numbers and letters in pairs of two. What you need to do is fill the pairs into the registry Binary value IN REVERSE order.
Example:
Original serial number in certificate = 68 00 AB CD 69 00 23
What you enter in Binary field = 23 00 69 CD AB 00 68
So the pair of 2 characters stays the same, but the order of the pairs in the total string is reversed.
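Since one misplaced pair means SCOM silently ignores the certificate, it can help to double-check your reversed string before typing it in. A tiny sketch of the reversal (any scripting language works; Python shown here):

```python
def reverse_serial(serial: str) -> str:
    """Reverse the byte (pair) order of a certificate serial number,
    as needed for the ChannelCertificateSerialNumber binary value."""
    pairs = serial.split()          # "68 00 AB CD ..." -> ["68", "00", ...]
    return " ".join(reversed(pairs))  # pairs stay intact, order flips

print(reverse_serial("68 00 AB CD 69 00 23"))  # 23 00 69 CD AB 00 68
```

This matches the example above: the pairs themselves are unchanged, only their order in the string is reversed.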

Next I restarted the SCOM services.

Within a minute it started logging: "A device which is not part of this management group has attempted to access this Health Service."
Those were the DMZ machines, which just keep trying again and again!

Success!

In the end it will have been the certificate rather than the SPN record that messed things up, but at least I could show what things I checked. When the SPN issue came up, I just fixed it as well. In the end it WAS the certificate, even though I felt it was alright. Well, when in doubt, and ALL untrusted agents refuse to talk to this machine while all trusted ones have no issue... triple-check the certificate and whether SCOM is actually using it!

Have fun monitoring!
Bob Cornelissen

How to make a SCOM implementation project successful

SCOM, System Center, SCOM Tricks, SCOM 2012, SCOM 2016

I thought I would take a different approach to thinking about how to make a SCOM monitoring project a success. It is not about technical details or designs this time, but about a way to bring business and IT together in monitoring business-related services and being in control of those processes. In the short blog post below I touch upon some of those items.

https://www.savision.com/resources/how-to-make-a-scom-implementation-project-successful

Enjoy B)
Bob Cornelissen

Activating a de-activated Evaluation SCOM 2012 instance

SCOM, System Center, SCOM Tricks, SCOM 2012

Came across a SCOM 2012 R2 instance which had expired. The license key was not entered on time, so SCOM did not work anymore and the SDK refused connections. Look in the event log and you will see that your evaluation version has expired and you need to enter your key. The catch is that you connect to SCOM through the Shell to activate it, and the connection is refused at that point.

The trick is to restart the SDK service and quickly enter the production key.

Just open a normal PowerShell in administrator mode on the SCOM server and throw these three commands in there:

Restart-Service -Name omsdk
Import-Module OperationsManager
Set-SCOMLicense -ProductId XYZXX-XYZXX-XYZXX-XYZXX-XYZXX -Confirm:$false

Of course use the real product key in there where the X's are!

Have fun and good luck!
Bob Cornelissen

How to monitor e-mail data sources with SCOM and Orchestrator

SCOM, System Center, SCOM Tricks, SCOM 2012, SCORCH 2012

While chatting with some MVP friends of mine about a specific scenario where data from e-mails needed to be read and monitored, we discussed the multiple possibilities for doing it. I proposed one possibility which I implemented at a customer a while ago and got asked to blog about the solution, so here it is. Because SCOM is not built to natively read from a mailbox, one has to come up with a workaround, and in my case I used System Center Orchestrator to do part of the job.

Challenge:

The situation is as follows. A number of servers are monitored by another company using another monitoring product. That product monitors servers from several of their customers, so we could not access it directly, nor could we query it through scripts, commands or database queries. The end result was that the other company would send e-mails from their several monitoring systems to one of our mailboxes, resulting in 3 e-mails every 15 minutes. The e-mails contained an XML-formatted body with a list of servers and their state.

  1. So, we have to read 3 e-mails from a mailbox every 15 minutes, pull out the body of each e-mail, then merge the content into 1 XML file placed on a server with a SCOM agent on it. These steps are not native to SCOM, but use a combination of Orchestrator and PowerShell.
  2. After that we can use one of several methods to monitor a text based file on a server to create the monitoring part. For this we can use SCOM.

So let us start with the first part.

Using Orchestrator to get our e-mails into an XML file

I bet there are also other methods of doing this, but this was the method I selected and due to Orchestrator having some flexibility and some built-in actions in the intelligence packs this is very versatile.

Let us check out the email for a second:

We see the XML body there. In this case there are two servers mentioned in the e-mail, though with longer names than the ones we know them by, so we need to play around with that too. Also, with XML there is a header (the first line) and a wrapper (the start of the second line and the end of the last line), with the two actual content lines in the middle. Notice there are carriage returns, spaces and potentially tabs in there, which makes it “nice” to filter those out while pulling the XML apart and creating a new XML file from it!

Ingredients needed:

  • A destination File share where the final XML file will be placed for being monitored.
  • A mailbox where those messages arrive and we can read them from
  • We created an automatic rule to place those e-mails in a specific named folder in the mailbox.
  • We created a second folder where we can move the already read messages to.
  • An account able to read in that mailbox.
  • Orchestrator to create a runbook and bring it all together.
  • An intelligence pack for Orchestrator which can read from a mailbox. I used the “SCORCH Dev - Exchange Email” IP for this which can be found at https://scorch.codeplex.com/

First, import the Orchestrator IP needed to read the e-mail and distribute it to the runbook servers as usual. Next, start a fresh runbook, name it appropriately and place it in a folder where you can actually find it within Orchestrator. My advice is to use a clear folder structure within Orchestrator for your runbooks. This is not for Orchestrator's benefit, but for yours!

Now we create the runbook. I will put the picture of the finished runbook here first before going through the activities:

Let’s now cut up the pieces:

Monitor Date/Time

Well this one simply says to check every 15 minutes

Format Date/Time

This one takes the current time from the first activity and, at the bottom, subtracts 15 minutes from it. The story behind this is that we want to read all e-mails which came in between 15 minutes ago and now, so this gives us that point in time.
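In script form, the two date/time activities together just compute a "received after" cutoff of 15 minutes ago. A purely illustrative sketch:

```python
from datetime import datetime, timedelta

# The runbook's polling interval: every run only looks back this far.
POLL_INTERVAL = timedelta(minutes=15)

now = datetime.now()
cutoff = now - POLL_INTERVAL  # only mails received after this moment are read
print(cutoff.strftime("%Y-%m-%d %H:%M"))
```

Any e-mail with a received time later than the cutoff belongs to the current run; everything older was handled by a previous run.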

Rename File

We wanted our monitored xml file to always have a fixed name. So when we are about to create a new version of that file we first go out to that file share and take the current XML file and rename it by adding a date-time format in the name to make it unique. We wanted to be able to look back in history here, else we would have chosen to just delete it. This makes the folder look like this:

Read mail from folder

Now this is a custom activity coming from the Exchange Email IP we imported earlier.
From the top we see we have to define a configuration. We will get back to that in a second. Next, you can see that we are looking for unread e-mails in a certain folder (keep in mind the folder name must be unique in that mailbox, or it may pick another folder with the same name, which you did not want). On the left-hand side we see Filters:

We also want those emails to have a certain subject line. And we want those emails to be received after the time from the Format Date/Time activity above. Meaning the email was received after 15 minutes ago. So in the last 15 minutes.

Now to get back to the Configuration part. Many IP’s in Orchestrator have a place where you can centrally set some parameters. For instance a login account, a server connection, and so on. This can be found on the top menu bar of the Orchestrator Runbook Designer under the Options menu. Find the item with the same name as the IP you are trying to configure. In this case it needs us to setup a connection to an email server. Type is Exchange Server, type a username, password, domain, and a ServiceURL. For an exchange server this could be https://webmail.domain.com/EWS/Exchange.asmx for example, but check this for your own environment.

Retry Read mail from folder

This one will only run if the first Read mail from folder activity fails. You can set properties on the connecting arrows between the activities to make the flow go here if the first one fails. I made the line color red and set a delay of 20 seconds on the line; otherwise it follows the other line and goes to the script. This activity does exactly the same as the previous one. We had some time-outs at certain times, so this extra loop slipped in there.

So those Read mail from folder activities should contain 3 e-mails received in the last 15 minutes from that folder, unread, with a subject line, and Orchestrator now knows what the body of those emails contains. This also means that the next activity (the script) will run three times.

Run .net script

At the top we define this to be a PowerShell script. First we pull in the variable, which is the body of the e-mail from the previous step. Next, the script removes all the excess content we do not need: empty spaces before and after several lines and entries, plus the header and surrounding wrapper entries. We can add those ourselves to a clean file, right? So this gives us a new string which only contains the XML entries for those servers and their state.
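The cleanup part boils down to trimming whitespace and dropping the header and wrapper lines. Sketched here in Python rather than the original PowerShell, and the wrapper tag name is made up for illustration:

```python
def clean_body(body: str) -> str:
    """Keep only the per-server content lines from the e-mail body:
    trim spaces/tabs/carriage returns and drop the XML header and wrapper."""
    kept = []
    for line in body.splitlines():
        line = line.strip()                      # spaces, tabs, \r
        if not line or line.startswith("<?xml"):
            continue                             # blank lines and XML header
        if line in ("<servers>", "</servers>"):  # hypothetical wrapper tag
            continue
        kept.append(line)
    return "\n".join(kept)
```

The result is just the content lines, which the next part of the script can append to the merged file.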

Next thing we needed to do is build in some tricks into this script. We know it is going to run three times and we need to stitch the contents together into one file.

Line of thought:

If there is no xml file there to write to this means this is the first time we run the script after the old file got renamed. So we need to create the xml file right now and add the headers to it. Next we add the body to it (server names with state).

If there is a file there with the correct name it means we are either in the second or third run. So what we do is simply write down the body (servers and state) and add the trailing end tag to it. This can be done on the second and third run. However, if this happens to be the third run, we will first check if that trailing tag is there and remove it. And next dump the body again and add the end tag.

So that part takes care of dumping the contents into the file following the above thought process (with the first thought coming at the end as the Else statement). Sorry for the Dutch comments, but you get the idea.
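The file-stitching logic above can be sketched as follows. This is Python rather than the original PowerShell, and the header and wrapper tags are invented placeholders; the one liberty taken is always re-adding the closing tag after every run, which keeps the file well-formed at all times and makes the second and third runs identical:

```python
import os

def append_body(path: str, body: str,
                header: str = '<?xml version="1.0"?>',
                open_tag: str = "<servers>",
                close_tag: str = "</servers>") -> None:
    """Merge one e-mail body into the shared XML file.

    First run (file absent): create the file with header and opening tag.
    Later runs: strip the trailing close tag if present, append the body,
    then re-add the close tag so the file stays well-formed.
    """
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write("\n".join([header, open_tag, body, close_tag]) + "\n")
        return
    with open(path) as f:
        lines = [l for l in f.read().splitlines() if l.strip()]
    if lines and lines[-1].strip() == close_tag:
        lines.pop()          # remove trailing tag before appending
    lines.append(body)
    lines.append(close_tag)  # always close the wrapper again
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

Calling this once per e-mail (so three times per cycle) yields one merged XML file ready for the watcher to read.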

Move mail

Next we take the e-mails found by the Read mail from folder activity and move them to the other folder in the mailbox.

So, that is the whole runbook to get a few emails and merge them together so we can monitor the thing!
There is also a separate runbook which cleans old files from the file share and old e-mails from that folder in the mailbox, by the way. That way we can look back a few days at what happened.

The monitoring part in SCOM

Now, I am not going into all the details of this part. I had a reason not to link these entries directly to the monitored servers, or to write the xml file to those servers. I opted to create a watcher node (with its discovery based on a registry entry on that machine). That watcher node is the server holding the file share and the xml file.

Next I created watchers in a class, and discovered them through registry as well. Containing the names of the servers we wanted to check for in the XML.

For each watcher it runs a PowerShell monitor which goes into the XML file and finds its corresponding entry (server name). Next it picks up the State (which is a number) and we translate the 12 possible numbers into green/yellow/red type entries and place them into the property bag. That gets evaluated into the three states we know so well.
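The state translation inside that monitor is essentially a lookup table. A sketch of the idea in Python; the numeric codes and their groupings below are invented examples, since the real mapping depends entirely on the source monitoring system:

```python
# Hypothetical mapping from the source system's numeric states
# to the three SCOM-style health values placed in the property bag.
STATE_MAP = {
    0: "Success", 1: "Success",   # up / ok
    2: "Warning", 3: "Warning",   # degraded
    4: "Error",   5: "Error",     # down
}

def to_health_state(code: int) -> str:
    """Translate a numeric state into the Success/Warning/Error value
    that the monitor's condition detection evaluates."""
    return STATE_MAP.get(code, "Warning")  # unknown codes surface as Warning

print(to_health_state(4))  # Error
```

Mapping unknown codes to Warning (rather than Success) is a deliberate choice: a state you cannot interpret should at least draw attention.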

Next we could throw those watcher entries for each server and also some other entries onto a dashboard. We could see the state the other party saw from their monitoring system and the state we see from SCOM side on one dashboard for those servers and monitored entries. We have the hardware/OS layer with a few extras, and they have an OS layer and application layers which we could not pick up.

Conclusion

As you can see, sometimes we run into situations where there is no other way to get monitoring data than through workarounds and the long way around. This is not ideal, and as you can understand there are dependencies left and right for this whole chain to work. But if there is no other way, then that is the way it has to be. Direct monitoring or direct connecting is preferred.
But this shows how you can get monitoring data from e-mails into SCOM, in this case through the use of Orchestrator and watchers because that was what we needed.

Shout-out to amongst others Cameron Fuller for making me write this post!
Happy monitoring!
Bob Cornelissen

Contact / Help. ©2017 by Bob Cornelissen. multiple blogs.