In my previous post, which introduced the SCOM 2016 Features - Network Monitoring MP Generator, I showed you the command syntax of the tool and why it was created. Now it is time for an example.
Have fun monitoring a network device and see how the principles of the input XML file work.
Also, because I have been doing a few presentations with a SCOMosaur theme, we will combine a little SCOM with a little dinosaur madness. You will see a few references to that here and there.
Mind that I am using a simulated device, which may not be a perfect fit for this purpose. The reason is that the default simulated devices in the Jalasoft SNMP Device Simulator are all CERTIFIED, and we are of course creating monitoring for non-certified devices. The OIDs in the example below are from an APC UPS device, but they illustrate the example clearly enough.
- First of all I am using SCOM 2016 TP5 here, which is the first version to include this feature.
- I am using Jalasoft SNMP Device Simulator on another machine to simulate a few network devices of different types.
- Of course make sure both sides can reach each other with ping (ICMP) and SNMP.
- I am using iReasoning MIB Browser to browse the SNMP tree on the device selected to determine we actually have data there and the right OID's.
Next on the list is to discover the devices in SCOM by creating a Device Discovery, adding the device IP addresses and SNMP community string to it, and letting SCOM discover the devices.
The XML input file
The idea here is much the same as a simple management pack setup:
- A manifest with management pack name and version
- A Device definition
- A Device Discovery
- Device Components
- Device Component Discovery
- Rules (these are collection rules)
Starting the Manifest
First we are going to define the start to the input file by the Root tag.
Next we define the Display Name and Version for the management pack.
Name and Version are mandatory and an optional tag is KeyToken.
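As an illustration, a minimal manifest could look something like this. The management pack name below is hypothetical, and the exact tag layout may differ between preview builds, so check the product team's example file:

```xml
<!-- Sketch only: Name and Version are mandatory, KeyToken is optional -->
<Root>
  <Manifest>
    <Name>SCOMosaursNetworkPack</Name>
    <Version>1.0.0.0</Version>
    <!-- <KeyToken>optional key token</KeyToken> -->
  </Manifest>
  <!-- device definitions, components and monitoring follow here -->
</Root>
```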
Device Definition and Discovery
The next thing to do is create an entry for each type of device and to make a device discovery for it.
First we define a name for the device.
Next we jump into a discovery for it.
The discovery covers the SysObjId tag which points to the unique device identifier for the device type.
Next we have to specify a device type. The following types are supported for now: Switch, Router, Firewall, LoadBalancer.
Next fill out the Vendor and Model.
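Put together, a device definition and its discovery might look roughly like this. This is a sketch: the tag names are my approximation and the sysObjectID value is illustrative (my simulated device reports an APC identifier):

```xml
<!-- Sketch only: one Device entry per device type -->
<Device>
  <Name>TriceratopsUPS</Name> <!-- hypothetical device name -->
  <DeviceDiscovery>
    <SysObjId>1.3.6.1.4.1.318.1.3.27</SysObjId> <!-- unique identifier per device type -->
    <Type>Switch</Type> <!-- Switch, Router, Firewall or LoadBalancer -->
    <Vendor>APC</Vendor>
    <Model>Smart-UPS</Model>
  </DeviceDiscovery>
</Device>
```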
Components and Discovery
Now it is time to look into the components of the device, for example Processors or Fans. After we discover those, we can target monitors and rules at those components in order to monitor them.
We are opening the Components tag here, and it will be closed all the way at the end of the story.
Next we define our first component.
There are a few component types supported at this moment: Processor, Memory, Fan, Voltage Sensor, Power Supply, Temperature Sensor.
And we give it a name of course.
Now we define the OIDs we are interested in. These OIDs will have to be present for each instance of the component we define. One of them will be used in the discovery of the component, and the same one and/or others can be used for rules and monitors. In any case, we define all of them here and give each a unique name.
We do not have to enter the index number of each component instance. For example...
fan1 = 1.3.6.1
fan2 = 1.3.6.2
fan3 = 1.3.6.3
In the very short OID example above you can see the last number is the index number for each fan. So we only need to specify 1.3.6 in this case and the discoveries will find each instance for you.
In this case I named the component the Tricera Environment and gave it a Processor type, just because it needs to conform to the default types at this moment.
The three OIDs used are a Temperature OID, a Usage OID (which happens to be the battery percentage left for the UPS), and an overall state indicator OID for this component.
For the step coming after this, it means we have two performance counters we can collect (though I will collect all three in the example), and we can also create state monitors based on the values.
Lastly, the ComponentDiscovery is a pointer to which of the already defined OIDs is a component indicator. In this case I use the state indicator OID. If that OID is present (with an index number behind it), an instance of the component will be created, or as many instances as needed.
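The component part of my file looked roughly like this. Again a sketch: the OID values below are APC UPS OIDs from my simulator, and the tag names may differ in the final release:

```xml
<Components>
  <Component>
    <Name>Tricera Environment</Name>
    <Type>Processor</Type> <!-- has to conform to one of the supported types -->
    <OIDs>
      <!-- note: no instance index at the end, the discovery finds each instance -->
      <OID Name="TriEnvTemperature">1.3.6.1.4.1.318.1.1.1.2.2.2</OID>
      <OID Name="TriEnvUsage">1.3.6.1.4.1.318.1.1.1.2.2.1</OID>
      <OID Name="TriEnvState">1.3.6.1.4.1.318.1.1.1.4.1.1</OID>
    </OIDs>
    <!-- the state OID doubles as the component indicator -->
    <ComponentDiscovery OID="TriEnvState" />
    <!-- Monitoring (rules and monitors) goes here, inside the Component -->
  </Component>
</Components>
```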
Monitoring and Rules
Alright, now we need to set up the monitoring for the component we are still working on.
For starters we set the Monitoring tag. We will close that tag later after we have defined all rules and monitors.
Next we start with the rules:
We open the Rules tag and next define the performance collection rules as you see here. I used short names for it and pointed each rule to the name of the OID we defined already. See how easy that part is?
Lets go to the monitors now...
First again we start it off with the Monitors tag which we will close off after the last monitor we add.
Alright, first UnitMonitor. We give it a name. In this case Triceratops Environment Status.
It is a two state monitor so we define two expressions.
Both of them point to the name of the OID containing the state indication (shown in black letters in the middle here).
The first expression is for success (green state) and uses 2 or less; the second expression uses anything higher than 2 to set it to an error state.
I repeated that two more times. The second monitor covers the Temperature and uses 30 degrees as the maximum acceptable value, otherwise our dino gets sunburn.
And the third monitor uses the TriEnvUsage OID to determine whether it is at 100 or below.
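In sketch form the monitoring section could look like this, with each rule and expression pointing at an OID name defined earlier. The tag and attribute names and the expression syntax are my approximation, so check the product team's example for the real format:

```xml
<Monitoring>
  <Rules>
    <!-- performance collection rules, each pointing to a defined OID name -->
    <Rule Name="TriEnvTempCollect" OID="TriEnvTemperature" />
    <Rule Name="TriEnvUsageCollect" OID="TriEnvUsage" />
    <Rule Name="TriEnvStateCollect" OID="TriEnvState" />
  </Rules>
  <Monitors>
    <!-- two-state monitor: healthy at 2 or less, error above 2 -->
    <UnitMonitor Name="Triceratops Environment Status" OID="TriEnvState">
      <SuccessExpression>LessEqual 2</SuccessExpression>
      <ErrorExpression>Greater 2</ErrorExpression>
    </UnitMonitor>
    <!-- 30 degrees maximum before our dino gets sunburn -->
    <UnitMonitor Name="Triceratops Environment Temperature" OID="TriEnvTemperature">
      <SuccessExpression>LessEqual 30</SuccessExpression>
      <ErrorExpression>Greater 30</ErrorExpression>
    </UnitMonitor>
    <UnitMonitor Name="Triceratops Environment Usage" OID="TriEnvUsage">
      <SuccessExpression>LessEqual 100</SuccessExpression>
      <ErrorExpression>Greater 100</ErrorExpression>
    </UnitMonitor>
  </Monitors>
</Monitoring>
```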
And now, as promised, we close the whole load of tags off:
The conversion process
Alright we now have an XML input file with all the stuff we need. Now we need to use the Network Monitoring MP Generator tool to convert the input file to a management pack XML file.
Open a command prompt and go to
%Program Files%\Microsoft System Center 2016\Operations Manager\
I placed my input file in the folder C:\SCOMosaur with file name dinos.xml and I will let the output file be written to that folder as well.
I run the command:
NetMonMPGenerator.exe -InputFile "C:\SCOMosaur\dinos.xml" -OutputDir "C:\SCOMosaur"
The program will let you know if there are any errors and it will confirm if it finished creating the management pack file.
From here you simply import the management pack and as usual wait a little bit.
It is a lot easier to create this input file with the basics we need for monitoring the custom device. The total input XML file was about 60 lines if we take away the empty lines. The resulting management pack was 690 lines long.
There will be a complete example coming from the product team very soon now, including comments in the file and such. This is just a quick starter to help you play with this feature.
This is meant to bring NOT certified devices to a more complete monitoring state, as if they were CERTIFIED. As you have seen, the device types and component types are a limited set for the moment.
My idea around this feature is that the possibilities might still expand in due time to become more and more flexible. It would also be nice to see a graphical interface to build up the input XML, which of course would immediately build the management pack. However, those kinds of things take a lot of time to build. I consider the current solution a nice go-between.
Back to the SCOM 2016 Features - Overview post!
Hope you all have fun!
Obviously the product team has received some feedback in the past on the performance of the SCOM console. It is no secret that this is not the fastest tool out there when opening it, changing views, or even refreshing, and this is most apparent in larger environments of course. There are several good reasons for this which we will not dive into now, but there was room for improvement even when taking those good reasons into account. Now they have started work to increase the speed of certain views within the SCOM console and will expand from there.
In SCOM 2016 TP5 first the Alert views were looked at and worked on.
- Alert view is optimized to load efficiently
- Alert tasks and alert details in the alert view are optimized to load efficiently
- Context menus of an alert in the alert view are optimized to load efficiently
Alert views are among the most used in SCOM, so this is where they started. Meanwhile, work is being done on other types of views as well, such as State and Performance views. These improvements will arrive later than TP5.
Of course these changes are likely most apparent in larger views and busy environments.
I do not have numbers or percentages of improvement for you yet. We might really start to notice a change in RTM production environments of a certain size later. Still I am very happy this bit of feedback was picked up and worked on.
Back to the SCOM 2016 Features - Overview post!
Wishing you speedy monitoring!
In SCOM 2012 there was a difference between certified devices and generic devices. When you added a network device to SCOM it would show up as one of the two. The certified devices had additional monitoring applied to them, such as Processor and Memory monitoring, while the generic devices were much more basic in their monitoring possibilities. To get around that, and/or to create additional monitoring for a device's components with monitors and rules, was quite difficult to achieve. I know I spent a week creating a custom management pack for a customer with a few classes, discoveries, monitors and rules, partly because the amount of information available was very limited, but also because it is such a hard process to get through. Plus I am not really much of a developer, to be honest. Let's say in that week a lot of words were used, and thankfully I got great tips from my MVP friend Daniele Grandini.
Now however we are getting some help from SCOM 2016!
What is the process?
What you do is create a custom-formatted XML file. This contains some basic information you are used to from creating management packs, such as a name and version number. Next you define discoveries for devices and components, and you define the SNMP OIDs to look for. Then you create rules which look at the defined OIDs and collect their data, and you create monitors which also look at predefined OIDs and have expressions connected to them to determine the state of the components. These expressions look easier than the ones you used to create in custom packs.
The tool we are talking about converts this structured XML file into a management pack XML file which can be used by SCOM. It is a simple command line executable with very few options and it will check for mistakes in the input XML and notify you.
The first thing that needs to happen is that you discover the targeted device as an SNMP network device in SCOM through the usual method. The management pack created with this tool will only work on discovered and monitored network devices. We are just expanding the default monitoring set to include more specific monitoring.
Where it is found:
%Program Files%\Microsoft System Center 2016\Operations Manager\Server\NetMonMpGenerator.exe
The command line options:
-InputFile or -I is used to pass the filename of the XML file you created (can add a path to that within quotes).
-OutputDir or -O is the directory where the output of this tool will be written (you can use a full path between quotes). The tool will write the management pack file to this directory.
-Overwrite or -W will overwrite an existing MP with the same name if found in the output directory.
-Help or -H can be used to display short usage help for the executable.
Example of command line tool usage:
I opened up a command prompt and went to the following directory
C:\Program Files\Microsoft System Center 2016\Operations Manager\Server
Next I ran this command (and the directories already existed)
NetMonMPGenerator.exe -InputFile "C:\SCOMosaur\dinos.xml" -OutputDir "C:\SCOMosaur"
And a few seconds later I got this message:
Management pack created: C:\SCOMosaur\System.NetworkManagement.SCOMosaursNetworkPack.xml
This file can be imported into your SCOM environment to start monitoring.
Now I know you are going to ask me for a full example where I create the input XML as well.
Example of the SCOM 2016 Network Monitoring MP Generator where I will be attempting to monitor a Triceratops somehow.
This of course relates to me being one of the SCOMosaurs and staying on the Theme.
Back to the SCOM 2016 Features - Overview post!
With this post I am giving you an overview of the new features which have been added to SCOM 2016 so far. I bet you thought not much was happening with SCOM for the 2016 version, right? Well, I can tell you there is still a lot going on. Below you will find some of the things which have been worked on.
A number of features were added in early Technical Preview Releases (TP3 and TP4), such as Scheduled Maintenance Mode and Nano Server Agent. I will cover those in the series below as well, but first I will focus on the items added in TP5.
The following features and items were added in Technical Preview 5 of SCOM 2016 (start-of-May 2016 timeframe) and we want YOU to know about them. You can use the links for each feature to dive more deeply into these features and improvements:
Now there are also other SCOM 2016 improvements on the list:
Give feedback on SCOM features
By the way, feel free to interact with the product team by giving them feedback:
The SCOM User Voice site
For example, to get the Scheduled Maintenance Mode feature moved from the Admin pane to the Monitoring pane somehow, so Operator-level SCOM users can use the feature as well and not only SCOM admins. This assumes, of course, that most Operators and Service Desk staff are not heavy PowerShell users (yet).
This and more is going on in SCOM 2016. I will be writing more about these subjects soon on my blog and in a future book and elsewhere probably.
Also be sure to watch for my presentations on SCOM 2016 at conferences (MMS 2016 Minneapolis on 17 May) and user group meetings (WMUG NL in May). I will be recording one and posting it up soon.
Enjoy being in control of your network infrastructure!
In SCOM 2012 R2 we were able to monitor up to 500 Unix/Linux agents per management server, or about 100 through a gateway. To be honest I think that was already stretching it, unless the number of workflows was kept to a minimum.
In SCOM 2016 work has been done to scale up to higher numbers. Up to twice as many, actually, IF you use another monitoring method for cross-platform monitoring. I will show you what I mean below.
In SCOM 2012 we were using WSMAN Sync API's to connect to the Linux agents and pull data from them. This is also the default setting for SCOM 2016.
However, if you have a large Linux/Unix deployment that you wish to monitor using SCOM 2016, there is a registry key you can set on the management server which will change the behavior of monitoring to use async MI APIs. MI in this case stands for Windows Management Infrastructure, which is based on CIM standards (as is the SCOM OMI agent).
In order to get the SCOM management servers to use the new method (and thus scale up more!) you add a registry key to the management server which is monitoring the cross-platform agents.
Create this entry:
HKLM:\Software\Microsoft\Microsoft Operations Manager\3.0\Setup\UseMIAPI
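One way to create the key is with PowerShell on the management server. This is a sketch; run it elevated, and note that the entry is simply an empty key named UseMIAPI, with no values underneath:

```powershell
# Create the empty UseMIAPI key so the management server switches to the
# asynchronous MI APIs for Unix/Linux monitoring, then restart the agent service.
New-Item -Path "HKLM:\Software\Microsoft\Microsoft Operations Manager\3.0\Setup\UseMIAPI"
Restart-Service -Name HealthService
```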
After you do this I suggest you restart the Microsoft Monitoring Agent Service (also called the Healthservice) to be sure this goes into effect. Make sure all your management servers used for this purpose use the same method.
I think that if you are monitoring a significant number of Linux/Unix agents in your environment (hundreds), you should change this setting on your SCOM 2016 management servers.
Back to the SCOM 2016 Features - Overview post!
Happy crossplat monitoring!
This blog post introduces the new SCOM 2016 feature of Management Pack Tuning. It is meant to use alert data from SCOM to determine where tuning may be beneficial. The screenshots are based on the TP5 release of SCOM 2016 and may change in the coming months as work continues on several features of SCOM.
We used to tune out alerts and management packs by a few methods. The first method is to import the management packs, sit back, watch the alerts flow in, and take them on one at a time.
The second method was by using reporting:
The two Data Volume reports are actually very useful for going through which management packs cause the most data volume (number of performance counter entries collected, number of alerts, number of events, and so on). You can also drill down into them to see which workflows are the busy ones. After this you could go into SCOM, find the rules and monitors, and tune them to your liking.
There are also reports in the SCC Health Check Reports library created by Oskar Landman and Pete Zerger which we can use for this. It is called SCOM Health Check Reports V3 now and can be found in the Technet Gallery.
A new solution
Now, in order to facilitate alert tuning, the product team has worked on a custom solution to help you analyze the alerts, see which machines cause most of them, and tune the workflows directly from there.
Starting with SCOM 2016 TP5, you can go into the SCOM Administration pane, and in the Management Packs folder you will now find “Tune Management Packs”.
On the right-hand side in the tasks pane you will find "Identify management packs to tune", where you can set a time range for analysis. Otherwise just wait 2 days and things will surface.
Now in the middle we see I currently have one management pack which may need tuning, and it has given us 32 alerts in a limited amount of time. So we press the "Tune Alerts" task now!
From here we can see which alert(s) came up during this period. To the right of what is in this screenshot there is also the name of the Rule or Monitor which caused this alert.
Now which possibilities do we have from here? If we right-click we get the following options:
The Copy function gives you a plain-text copy of the selected fields so you can put them in Notepad or an Excel sheet.
The Overrides option gives you the usual overrides options where you can override the monitor for all objects of this class or a group or single objects.
Of course we can directly open the properties for the monitor right from here.
And lastly there is the option "View override sources", which will open a popup where you can see which instances of the targeted class (here Logical Disk) have caused the alerts.
From here we can tune the selected monitor for the specific objects which caused the alerts.
As I said at the start of the article, these are screenshots on TP5 preview and there may be changes to come to the interface and possibilities presented here.
The idea is however very clear and I like that this will help a lot of SCOM admins move into the tuning of alerts easier and quicker. Some people know how to do this using available reports both from the default reports or third party reports packs, but this new feature opens this up for more regular use by more SCOM admins.
One more remark here: I tried to fool around with another monitor to force it to generate lots of alerts, and what happened? A different monitor caused alerts, and the one I set to very low thresholds never even fired one. Ha ha.
Back to the SCOM 2016 Features - Overview post!
We have waited for this for a while now, but Windows Server 2016 TP5 and System Center 2016 TP5 are now available for download. This is a screenshot from the MSDN downloads site:
Good luck playing with the new releases!
I have started with SCOM 2016 TP5 myself, of course.
This blog post discusses one of the new features in SCOM 2016: Management Pack Updates and Recommendations. This feature was introduced, I think, in the SCOM 2016 TP4 preview already, but I will discuss it now anyway.
All SCOM admins know that we can get management packs from either the Microsoft websites (and of course community and third party pages for their management packs), or we could use the Import Management Packs option and point it to the Catalog.
In there we have the options of looking for specific management packs, or to look for recently released management packs, or look for updates to already installed management packs.
The thing is, it was easy to forget to look for new management pack updates, and it also often happened that SCOM admins forgot to download management packs for new products they installed on servers in their environment (or for new versions, like a new SQL version).
A new solution
In SCOM 2016 we can see in the Administration pane an entry under Management Packs called Updates and Recommendations:
From here we can select one management pack and download and install that management pack. There is also the possibility to do that with all of them. This will take you to the management pack download interface we were used to already.
As you can see from the screenshot above, there are a few management packs for which we get an update recommendation, and two management packs this solution found to be missing, in case you thought you were already monitoring all roles.
What really happens is that this is a mini management pack which runs on all your agents and has very basic discoveries in it. It runs a discovery to see if you have, for instance, IIS or SQL installed, or a number of other roles. These discoveries look for Microsoft management packs and not custom ones. When it finds certain software/roles installed, this feature will check if you have the applicable management pack installed. There will be more discoveries added over time for additional software/features/roles.
Also of course there is a pack version comparison done with the catalog to check if you have the latest version of already installed management packs.
Another interesting addition to the tasks pane in that view above is the possibility to go to the management pack guide. This option will take you right to the download of the management pack guide in a web browser.
The second option there is to go to the DLC page. This is the Microsoft download center page where you can find the description of the management pack, its downloads and guides, and installation instructions. Not all management packs have this link enabled, but a lot of them will have.
The last task is called More Information. Now this is also a nice one. It will open a popup and show you which agents are running a workload relating to this management pack recommendation.
In this case it is my freshly installed SCOM TP5 machine needing the SQL 2014 management pack.
This is going to help us manage our management packs and check for updates to currently loaded management packs and also to check for forgotten management packs to get as much monitoring coverage as we can.
Back to the SCOM 2016 Features - Overview post!
Good luck monitoring!
I came across a SCOM 2012 R2 instance which had expired. The license key was not entered in time, so SCOM did not work anymore and the SDK refused connections. Look in the event log and you will see that your evaluation version has expired and you need to enter your key. The problem is that you connect to SCOM through the Shell to activate it, and it refuses connections at that point.
The trick is to restart the SDK service and quickly enter the production key.
Just open a normal PowerShell in administrator mode on the SCOM server and run these two commands:
Restart-Service -Name omsdk
Set-SCOMLicense -ProductId XYZXX-XYZXX-XYZXX-XYZXX-XYZXX -Confirm:$false
Of course use the real product key in there where the X's are!
Have fun and good luck!
While chatting with some MVP friends of mine about a specific scenario where data from e-mails needed to be read and monitored, we discussed multiple possibilities for doing it. I proposed one approach, which I implemented at a customer a while ago, and was asked to blog about the solution, so here it is. Because SCOM is not built to natively read from a mailbox, one has to come up with a workaround, and in my case I used System Center Orchestrator to do part of the job.
The situation is as follows. A number of servers are monitored by another company using another monitoring product. That product monitors servers from several of their customers, so we could not access it directly, nor could we query it through scripts, commands, or database queries. So in the end the result was that the other company would send e-mails from their several monitoring systems to one of our mailboxes, resulting in 3 e-mails every 15 minutes. The e-mails contained an XML-formatted body with a list of servers and their state.
- So, we have to read 3 e-mails from a mailbox every 15 minutes, pull out the body of each e-mail, and merge the content into 1 XML file placed on a server with a SCOM agent on it. These steps are not native to SCOM, but a combination of Orchestrator and PowerShell.
- After that we can use one of several methods to monitor a text based file on a server to create the monitoring part. For this we can use SCOM.
So let us start with the first part.
Using Orchestrator to get our e-mails into an XML file
I bet there are also other methods of doing this, but this was the method I selected and due to Orchestrator having some flexibility and some built-in actions in the intelligence packs this is very versatile.
Let us check out the email for a second:
We see the XML body there. In this case there are two servers mentioned in the email, but with longer names than how we know them, so we need to play around with that too. Also, the XML has a header (first line) and a wrapper (start of the second line and end of the last line), with the two actual content lines in the middle. Notice there are carriage returns, spaces, and potentially tabs in there, which make it “nice” to filter those out while pulling the XML apart and creating a new XML file from it!
- A destination File share where the final XML file will be placed for being monitored.
- A mailbox where those messages arrive and we can read them from
- We created an automatic rule to place those e-mails in a specific named folder in the mailbox.
- We created a second folder where we can move the already read messages to.
- An account able to read in that mailbox.
- Orchestrator to create a runbook and bring it all together.
- An intelligence pack for Orchestrator which can read from a mailbox. I used the “SCORCH Dev - Exchange Email” IP for this which can be found at https://scorch.codeplex.com/
First import the Orchestrator IP needed to read the email and distribute it to the runbook servers as usual. Next start a fresh runbook and name it appropriately and place it in a folder where you can actually find it within Orchestrator. Advice is to use a clear folder structure within Orchestrator to place your runbooks in. This is not for the benefit of Orchestrator, but for yours!
Now we create the runbook. I will put the picture of the finished runbook here first before going through the activities:
Let’s now cut up the pieces:
Well, this one simply says to check every 15 minutes.
This one takes the current time from the first activity and, at the bottom, subtracts 15 minutes from it. The story behind this is that we want to read all emails which came in between now and 15 minutes ago, so this gives us that point in time.
We wanted our monitored xml file to always have a fixed name. So when we are about to create a new version of that file we first go out to that file share and take the current XML file and rename it by adding a date-time format in the name to make it unique. We wanted to be able to look back in history here, else we would have chosen to just delete it. This makes the folder look like this:
Read mail from folder
Now this is a custom activity coming from the Exchange Email IP we imported earlier.
From the top we see we have to define a configuration; we will get back to that in a second. Next you can see that we are looking for unread emails in a certain folder (keep in mind the folder name must be unique in that mailbox, or it will simply pick the other folder with that name, which you do not want). Now on the left-hand side we see Filters:
We also want those emails to have a certain subject line. And we want those emails to be received after the time from the Format Date/Time activity above. Meaning the email was received after 15 minutes ago. So in the last 15 minutes.
Now to get back to the Configuration part. Many IP’s in Orchestrator have a place where you can centrally set some parameters. For instance a login account, a server connection, and so on. This can be found on the top menu bar of the Orchestrator Runbook Designer under the Options menu. Find the item with the same name as the IP you are trying to configure. In this case it needs us to setup a connection to an email server. Type is Exchange Server, type a username, password, domain, and a ServiceURL. For an exchange server this could be https://webmail.domain.com/EWS/Exchange.asmx for example, but check this for your own environment.
Retry Read mail from folder
This one will only run if the first Read mail from folder activity fails. You can set properties on the connecting arrows between activities to make it go here if the first one fails. I made the line color red and set a delay of 20 seconds on the line; otherwise it will follow the other line and go to the script. This activity does exactly the same as the previous one. We had some time-outs during certain periods, so this extra loop slipped in there.
So those Read mail from folder activities should contain 3 e-mails received in the last 15 minutes from that folder, unread, with a subject line, and Orchestrator now knows what the body of those emails contains. This also means that the next activity (the script) will run three times.
Run .net script
At the top we define this to be a PowerShell script. First we pull in the variable, which is the body of the email from the previous step. The next thing we do in the script is remove all the excess content that we do not need: empty spaces before and after several lines and entries. We also take out the header and surrounding entries; we can add those ourselves to a clean file, right? So this should give us a new string which only contains the XML entries for those servers with their state.
Next thing we needed to do is build in some tricks into this script. We know it is going to run three times and we need to stitch the contents together into one file.
Line of thought:
If there is no xml file there to write to this means this is the first time we run the script after the old file got renamed. So we need to create the xml file right now and add the headers to it. Next we add the body to it (server names with state).
If there is a file there with the correct name it means we are either in the second or third run. So what we do is simply write down the body (servers and state) and add the trailing end tag to it. This can be done on the second and third run. However, if this happens to be the third run, we will first check if that trailing tag is there and remove it. And next dump the body again and add the end tag.
So that part takes care of dumping the contents into the file following the above thought process (with the first thought coming at the end as the Else statement). Sorry for the Dutch comments, but you get the idea.
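My actual script was PowerShell with Dutch comments, but the line of thought can be sketched language-agnostically. Here it is in Python, with hypothetical header and wrapper lines standing in for the real ones from the e-mail:

```python
import os

HEADER = '<?xml version="1.0"?>\n'   # hypothetical header line
OPEN_TAG = "<Servers>\n"             # hypothetical wrapper tags
CLOSE_TAG = "</Servers>\n"

def append_body(path: str, body: str) -> None:
    """Merge one cleaned e-mail body into the combined XML file.

    First run (file absent): create the file with header, wrapper and body.
    Later runs: strip the trailing end tag, append the new body, re-add the tag.
    """
    if not os.path.exists(path):
        content = HEADER + OPEN_TAG + body + CLOSE_TAG
    else:
        with open(path) as f:
            content = f.read()
        if content.endswith(CLOSE_TAG):
            content = content[: -len(CLOSE_TAG)]
        content = content + body + CLOSE_TAG
    with open(path, "w") as f:
        f.write(content)
```

Calling this once per e-mail (so three times per 15-minute cycle) leaves one well-formed file behind, regardless of the order in which the runs happen.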
Next we take the e-mails found by the Read mail from folder activity and move them to the other folder in the mailbox.
So, that is the whole runbook to get a few emails and merge them together so we can monitor the thing!
There is a separate runbook which cleans old files from that file share and which cleans old emails from that folder in the mailbox by the way. At least we can look a few days back what happened.
The monitoring part in SCOM
Now I am not going into all the details of this part. I had a reason to not link these entries directly to the monitored servers, or to write the xml file to those servers. I opted to create a watcher node (and its discovery from a registry entry on that machine). That watcher node is the server with that file share and the xml file on it.
Next I created watchers in a class, and discovered them through registry as well. Containing the names of the servers we wanted to check for in the XML.
For each watcher it runs a PowerShell monitor which goes into the XML file and finds its corresponding entry (server name). Next it picks up the State (which is a number) and we translate the 12 possible numbers into green/yellow/red type entries and place them into the property bag. That gets evaluated into the three states we know so well.
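The translation step in that monitor can be sketched as a simple lookup. The mapping below is hypothetical, since the real meaning of the 12 numbers came from the other vendor's product:

```python
# Hypothetical mapping of the third party's 12 state numbers onto the three
# SCOM health states we put into the property bag.
GREEN = {1, 2}            # assumed "all fine" codes
YELLOW = {3, 4, 5, 6}     # assumed warning-level codes

def translate_state(number: int) -> str:
    """Return the property-bag state for one server entry from the XML file."""
    if number in GREEN:
        return "green"
    if number in YELLOW:
        return "yellow"
    return "red"          # everything else is treated as an error
```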
Next we could throw those watcher entries for each server and also some other entries onto a dashboard. We could see the state the other party saw from their monitoring system and the state we see from SCOM side on one dashboard for those servers and monitored entries. We have the hardware/OS layer with a few extras, and they have an OS layer and application layers which we could not pick up.
As you can see, sometimes we run into situations where there is no other way to get monitoring data than through workarounds and the long way. This is not ideal; as you can understand, there are dependencies left and right for this whole chain to work. But if there is no other way, then that is the way it has to be. Direct monitoring or a direct connection is preferred.
But this shows how you can get monitoring data from e-mails into SCOM, in this case through the use of Orchestrator and watchers because that was what we needed.
Shout-out to amongst others Cameron Fuller for making me write this post!