SCOM – Quick Tip: Exchange Queue Length Monitor

July 11, 2018

Hi,

I’m currently busy setting up SCOM infrastructures at several customers. At one of them, users complained that email messages were being queued, and when we looked at the queues on the Exchange servers they were indeed huge.

Because Exchange was already monitored in SCOM, I expected we would have been alerted, but that was not the case. The Exchange management pack contains no monitor that checks the actual queue length, although there are rules that collect the queue lengths. In my opinion that is a real gap, because it is important to be notified when messages start queuing in Exchange.

Therefore I created a custom PowerShell monitor to check this. I built the monitor using the SquaredUp PowerShell management pack, which adds PowerShell support throughout SCOM (where you would expect it by default :)). The management pack can be downloaded here: https://squaredup.com/content/management-packs/free-powershell-management-pack/

This is the PowerShell script itself:

# Any Arguments specified will be sent to the script as a single string.
# If you need to send multiple values, delimit them with a space, semicolon or other separator and then use split.
param([string]$Arguments)

$ScomAPI = New-Object -ComObject "MOM.ScriptAPI"
$PropertyBag = $ScomAPI.CreatePropertyBag()

# Since the health state comparison is string based in this template, we create a state value and return it.
# Ensure you return a unique value per health state (e.g. a service status), or a unique combination of values.

Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn

$queue = ((Get-Queue | Select-Object @{ n = "MessageCount"; e = { [int]($_.MessageCount) } }).MessageCount | Measure-Object -Sum).Sum

$PropertyBag.AddValue("Length", $queue)

if ($queue -gt 500) {
    $PropertyBag.AddValue("State", "OverThreshold")
}
else {
    $PropertyBag.AddValue("State", "UnderThreshold")
}

# Send output to SCOM
$PropertyBag

The monitor becomes critical when the message count exceeds 500 messages; this threshold can be raised or lowered by changing "$queue -gt 500" accordingly.
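If you want to keep queues that legitimately hold messages out of the count (shadow redundancy queues, for example), you could tighten the calculation before comparing it to the threshold. A minimal sketch, assuming an Exchange version where Get-Queue exposes a DeliveryType property; this variation is not part of the original monitor:

# Variation: ignore shadow redundancy queues when summing the message count
$queue = ((Get-Queue | Where-Object { $_.DeliveryType -ne "ShadowRedundancy" } |
    Select-Object @{ n = "MessageCount"; e = { [int]($_.MessageCount) } }).MessageCount |
    Measure-Object -Sum).Sum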

 

This monitor has helped my customer to prevent this issue from happening again.

Hope this helps!

 

Best regards,

Bert

 


SCOM – File Count Management Pack

December 21, 2017

Hi,

I visit a lot of customers to implement or support SCOM, and the same questions or problems tend to come up.

One of those questions is: "Is it possible to monitor the number of files (with a specific extension) in a share?"

The answer is yes and no. Files on Windows Servers that have an agent installed can be counted using this management pack: http://www.systemcentercentral.com/pack-catalog/file-system-management-pack-2/, but for shares located on non-Windows systems, say on a SAN for example, I haven't found an available solution.

Therefore I created my own management pack to monitor the file count, independent of the location of the file share (Windows Server or not).

In this post I describe how the management pack works. With it you can count files with a specific extension (or all files, if no extension is given) in a share, optionally including subfolders.

There is also the ability to add a file age, so the following scenario becomes possible: alert if there are more than 20 files in a share (subfolders included) that are older than 10 minutes.

First of all, we need a seed discovery that is targeted at a registry key on a Windows Server monitored by a SCOM agent.

The registry value is located under SOFTWARE\Filecount. The value is named "CSV" and should contain the path to a CSV file. The server will then be discovered as a "File Count Watcher Node".
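For reference, the seed value could be created like this. This is a minimal sketch: I'm assuming the key lives under HKLM, and the CSV path is just a placeholder.

# Create the seed key and point the "CSV" value to the configuration file
New-Item -Path "HKLM:\SOFTWARE\Filecount" -Force | Out-Null
New-ItemProperty -Path "HKLM:\SOFTWARE\Filecount" -Name "CSV" -Value "C:\Monitoring\filecount.csv" -PropertyType String -Force | Out-Null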

Next up is the CSV file itself: for every share to be monitored it should contain a line with a specific syntax, shown in the screenshot below.

Different parameters are added:

  • ID
    • Must be unique per share
  • Share
    • UNC path of the share
  • Extension
    • The extension of the files that need to be counted; leave this empty to count all files in the share
  • Count
    • How many files must be present for a critical state
  • Time
    • The file age threshold in minutes: only files older than this are counted
  • Recurse
    • 0 = do not count files in subfolders
    • 1 = also count files in subfolders
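To make this concrete, a hypothetical CSV entry for the surgery-share scenario could look like the lines below. The delimiter, column order and header are defined by the management pack itself, so treat this purely as an illustration:

ID;Share;Extension;Count;Time;Recurse
SurgeryShare;\\fileserver\surgery;*.jpg;20;10;1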

When the information is filled in, SCOM will discover every line as a "File Count Share", and the properties are used to configure the monitoring.

A monitor is also defined based on the properties filled in the CSV file; under the hood it is basically a PowerShell script with the necessary parameters.

The core of the script is this command:

$count = Get-ChildItem -Recurse "$strShare\$strExtension" | Where-Object { $_.LastWriteTime -le (Get-Date).AddMinutes($strAge) } | Measure-Object | ForEach-Object { $_.Count }
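To show how this command fits into the monitor, here is a minimal property-bag sketch built around it. The parameter names are assumptions based on the command above, $strAge is assumed to be passed as a negative number of minutes so that AddMinutes() moves the cut-off into the past, and the real management pack toggles -Recurse based on the Recurse property:

param([string]$strShare, [string]$strExtension, [int]$strAge, [int]$strCount)

$ScomAPI = New-Object -ComObject "MOM.ScriptAPI"
$PropertyBag = $ScomAPI.CreatePropertyBag()

# Count the files in the share (and subfolders) older than the configured age
$count = Get-ChildItem -Recurse "$strShare\$strExtension" |
    Where-Object { $_.LastWriteTime -le (Get-Date).AddMinutes($strAge) } |
    Measure-Object | ForEach-Object { $_.Count }

$PropertyBag.AddValue("Count", $count)
if ($count -gt $strCount) {
    $PropertyBag.AddValue("State", "OverThreshold")
}
else {
    $PropertyBag.AddValue("State", "UnderThreshold")
}

# Send output to SCOM
$PropertyBag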

The file count is also gathered as a performance counter so it can be included in reporting or in a Squared Up dashboard for example.

The management pack is also configured to use a specific Run As account. This account needs rights on the shares: at least Read-only Share rights and Read-Only NTFS rights.

I’ve been able to help some customers already by using this management pack.

The first customer where I set this up is a large hospital in Belgium, where the management pack monitors shares used to store (and process) images and videos made during surgery.

The content should be processed from the network share and transferred elsewhere, but sometimes the processing hangs and the share fills up without anyone knowing. Since the management pack has been in place, this hasn't happened again.

If you have interest in the management pack, I’ve made it available via GitHub: https://github.com/bpinoy/ManagementPacks/tree/master/File%20Count%20MP

Best regards,

Bert

 

 

 


AD FS Management Pack undocumented required configuration

October 5, 2012

When you implement the AD FS management pack in System Center Operations Manager and look at the guide, the only required configuration it lists is:

  • Install SCOM 2007 R2 or 2007 SP1 agents on AD FS servers
  • Enable Agent Proxy on all AD FS servers.
  • Using the Add Role Services Wizard in Windows Server 2008, verify that the IIS 6 Management Compatibility and IIS 6 Metabase Compatibility role services are installed. (Some AD FS 2.0 scripts depend on Internet Information Services (IIS) Windows Management Instrumentation (WMI) objects being installed.)

We did just that: we installed SCOM 2012 (hey, MS told us the SCOM 2007 R2 management packs work in SCOM 2012!) and enabled agent proxy.

Next, on the AD FS servers, we added the IIS 6 Management Compatibility and IIS 6 Metabase Compatibility role services.

The discovery scripts kicked in, and our AD FS servers were discovered. Next, we noticed some alerts on AD FS:

AD FS 2.0 application pool Is Not Running On The Federation Server


But the alert is false. The application pool is running!



When you look at the monitor, you see that a PowerShell script is executed which tries to connect to the root/MicrosoftIISv2 WMI namespace. Using wbemtest, you will notice that root/MicrosoftIISv2 is not available.

To ensure this script works, add the following IIS role services:

  • IIS Management Scripts and Tools
    • Enables managing IIS using WMI
  • IIS 6 WMI Compatibility
    • Enables the root/MicrosoftIISv2 provider

No restart is required. Just wait a few moments and the alerts will magically disappear!
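If you prefer to add and verify these role services from PowerShell, something like the following should work. This is a sketch that assumes the Windows Server 2008 R2 feature names Web-Scripting-Tools and Web-WMI:

# Add the IIS management scripting and IIS 6 WMI compatibility role services
Import-Module ServerManager
Add-WindowsFeature Web-Scripting-Tools, Web-WMI

# Verify that the root/MicrosoftIISv2 namespace now answers;
# if the provider is missing, this call fails with "Invalid namespace"
Get-WmiObject -Namespace "root/MicrosoftIISv2" -Class IIsApplicationPoolSetting | Select-Object Name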


Data Warehouse Object Health State Data Dedicated Maintenance Recovery State

August 1, 2012

A customer of mine had a SCOM 2007 R2 CU6 Root Management Server in a critical health state. When we looked at the Health Explorer, we saw this:


 

SCOM performs its own database maintenance. Kevin Holman wrote a nice article about which maintenance is done automatically by SCOM and which maintenance you could configure additionally: http://blogs.technet.com/b/kevinholman/archive/2008/04/12/what-sql-maintenance-should-i-perform-on-my-opsmgr-databases.aspx

When looking at the state change events, we saw the following description:

Failed to store data in the Data Warehouse. Exception ‘SqlException’: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. One or more workflows were affected by this. Workflow name: Microsoft.SystemCenter.DataWarehouse.StandardDataSetMaintenance Instance name: State data set Instance ID: {GUID} Management group: MG

What does this mean? SCOM is trying to perform its maintenance on state data, but there is too much maintenance work to do, so the rule times out. The best next step is to check why it is timing out. Maybe you have corrupt tables, or a sudden flood of data was inserted into the data warehouse? For this customer we had just fixed a config churn issue, so the time-out was easily explained.

How to fix this? First, disable the rule performing the maintenance so it does not interfere with our manual procedure. Open the SCOM console:

  1. Go to the Authoring pane
  2. Select Authoring => Management Pack Objects => Rules
  3. Change the scope to 'Standard Data Set'
  4. Right-click the only rule there: Standard Data Warehouse Data Set maintenance rule
  5. Select Override the rule for all objects of class: Standard Data Set
  6. Disable the rule by ticking the row with the parameter name Enabled and changing the Override Value to false
  7. Select an appropriate override management pack (not the default management pack!) and click Apply.

Now we will trigger the stored procedure used by this rule manually, this time without a timeout!

  1. Open SQL Server Management Studio
  2. Connect to the instance where your OperationsManagerDW database resides
  3. Once it is open, click New Query and select your OperationsManagerDW database.
  4. Remember that our state change event description mentioned issues with state data, so we first need the ID of the State data set. In the query window, enter the following command:

    SELECT DatasetID FROM vDataset WHERE DatasetDefaultName = 'State data set'

    Click Execute and copy the resulting ID.

  5. Open another query window against the data warehouse database and enter the following command, where GUID is the ID returned by the previous query:

    EXEC StandardDataSetMaintenance 'GUID'

  6. Wait until the command finishes successfully.
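If you would rather run both steps from PowerShell instead of Management Studio, a minimal sketch could look like this. It assumes the Invoke-Sqlcmd cmdlet is available and that the server name below is replaced with the instance hosting your OperationsManagerDW:

$server = "SQLSERVER\INSTANCE"      # placeholder: your data warehouse SQL instance
$dw     = "OperationsManagerDW"

# Look up the ID of the State data set
$dataset = Invoke-Sqlcmd -ServerInstance $server -Database $dw -Query `
    "SELECT DatasetID FROM vDataset WHERE DatasetDefaultName = 'State data set'"

# Run the maintenance manually with a generous query timeout
Invoke-Sqlcmd -ServerInstance $server -Database $dw -QueryTimeout 65535 -Query `
    "EXEC StandardDataSetMaintenance '$($dataset.DatasetID)'"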

Go back to the authoring pane of the SCOM console to remove our previously defined override.

  1. Go to the authoring pane
  2. Select Authoring => Management Pack Objects => Rules
  3. Change the scope to 'Standard Data Set'
  4. Right-click the only rule there: Standard Data Warehouse Data Set maintenance rule
  5. Select Overrides summary
  6. Delete the previously defined override:

And the error disappeared! If this error comes back frequently, you should check for corrupt tables, config churn, database performance or other reasons why the maintenance is timing out.


SCOM 2007 R2 dbcreatewizard fails to install: System.IO.DriveNotFoundException and System.ArgumentOutOfRangeException

June 25, 2012

Recently, I needed to use dbcreatewizard.exe to install a data warehouse on a SQL 2008 R2 cluster on Windows 2008 R2 SP1. The SQL Server uses mount points for its data file locations. I ran into two issues:

  1. System.IO.DriveNotFoundException

When you start dbcreatewizard.exe and click Next, the tool tries to enumerate all local drives. While doing so, I received an error saying it couldn't find the X: drive:

This was quite logical, as the X: drive is located on another instance on another node. To solve this issue, I had to fail over all SQL instances to the same node and restart dbcreatewizard.exe. But be careful: a lot of applications don't like it when you fail over an instance, so do an impact analysis before you do it and ensure everyone involved knows what you are doing! Also, once the installation succeeds, make sure the instances are moved back to their original nodes.

  2. System.ArgumentOutOfRangeException

Once I had fixed the issue with the X: drive, I selected the data warehouse installation as well as the right instance. I clicked Next and Finish and encountered another error:

This message is a lot more cryptic than the one before:

Note: The following information was gathered when the operation was attempted. The information may appear cryptic but provides context for the error. The application will continue to run.

System.InvalidOperationException: An error occurred while trying to create the database on your SQL Server. Check your logs for more information.

at Microsoft.EnterpriseManagement.Setup.DBCreateWizard.Program.LaunchDBCreation()

at Microsoft.EnterpriseManagement.Setup.DBCreateWizard.SummaryPage.BackgroundThread()

I skipped back to the configuration of the database, and what do I see?

Apparently, when you select another instance in the dropdown, the data file path and log file path are not updated. For me this default setting was incorrect (though correct for another existing instance). I changed the settings to reflect the real file locations and the database was created just fine. What's the lesson here? Always check carefully what you do ;).


Veeam Introduces Free System Center 2012 Management Pack 10-Pack for VMware Monitoring

April 16, 2012

Veeam makes great products! I really love their nWorks Management Pack, which brings VMware metrics into System Center Operations Manager 2007 and 2012. It offers various advantages over the classical approach of using only vCenter:

  • Use the advantages of the SCOM data warehouse to enable data aggregation of VMware metrics. It is finally possible to report on VMware metrics over a period of up to 400 days (or more if you change the grooming settings in your data warehouse)!
  • Get VM IO metrics even if they reside on NFS NetApp storage. (I'm told by my VMware engineers that this isn't possible in vCenter.)
  • Get advanced alerting possibilities using the standard used by your Operations team: System Center Operations Manager
  • Enable deep integration of the operator's view, from the VMware layer all the way up to the applications running on it.

It really is software you can’t live without if you want to monitor all levels of your datacenter. Example screenshot:


My fellow VMware engineers also love their Backup & Replication solution! I'm not an expert in that field, so I won't bother you with the details :).

Veeam just made a big announcement at the Microsoft Management Summit:

FREE 10 sockets of Veeam Management Pack

The Veeam Management Pack 10-Pack is a free VMware monitoring solution, exclusively for new Veeam MP customers worldwide who are using Microsoft System Center 2012.

The Veeam Management Pack 10-Pack includes:

  • A free 10-socket license of the Veeam Management Pack for deep VMware monitoring in System Center 2012
  • One full year of maintenance and support

This is great news for a lot of my smaller customers who found the prices of Veeam rather steep. Keep up the great work Veeam!

All info can be found on the following link:
http://www.veeam.com/news/veeam-introduces-free-system-center-2012-management-pack-10-pack-for-vmware-monitoring170.html

Get your free management pack at:
http://www.veeam.com/SC2012


Upgrade Veeam nWorks to version 5.7

April 16, 2012

What do you need?

  • SCOM 2007 R2 or higher, of course 🙂
  • The nWorks 5.7 installer.
  • An already working nWorks 5.6 environment.
  • Your SQL collation settings for SCOM should be SQL_Latin1_General_CP1_CI_AS. If not, the reports won't be pushed to the report server.
  • Also, SQL Server Reporting Services 2008 or higher is recommended.

How to install?

First, start the nWorks EMS installer. As always, I start the installer from a command prompt with administrative privileges.



Setup is detecting that a previous version of nworks was installed. Click on yes:



Click on next:



Click on I accept and next:



Here you need to enter the path to the converted license file. Contact Veeam to convert your original license file to a license file for the new version. Click on next:



Enter the path where you want to install Veeam:



Enter the TCP port that will be used to access the nWorks Enterprise Manager. The standard port is 8084:



Enter the credentials for the service account and click on next:



Click on Install:



Click on finish:



Log off and log on again to upgrade the last components. This is needed to update the group membership of the logged-on user account:



Now start the nworks UI update, of course from a command prompt with administrative privileges:



Click on yes:



Click on next:



Click on I accept and on yes:



Click on next:



Enter the details of the enterprise manager we just installed and click on next:



Enter the TCP port for the Web UI (left at the default here) and click on next:



Click on Install:



Click on finish:



Now start the nworksVIC installer from a command prompt with administrative privileges:



Click on Yes:



Click on next:



Click on I accept and next:



Enter the path where you want to install the software and click on next:



Enter the details of the Enterprise Manager server we installed earlier:


Enter the details of the service accounts:



Click on Install:



Click on finish:



Import the new management pack. Open the System Center Operations Manager 2007 console with OpsMgr administrator credentials. Navigate to Administration => Management Packs:



Right-click management packs and select Import Management Packs…:



Click on add from disk:



You can select No because all dependencies should already be met:



Select the management pack:



Click on Install:



Now click on close:
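As an alternative to the console wizard, the management pack file can also be imported from the command line. A minimal sketch, assuming the SCOM 2012 OperationsManager module (in the 2007 R2 Operations Manager Command Shell the equivalent cmdlet is Install-ManagementPack); the file path below is just a placeholder:

# Import the nworks management pack file directly (placeholder path)
Import-Module OperationsManager
Import-SCOMManagementPack -Fullname "C:\Temp\nworks.mp"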



To check that the installation went fine, open the nWorks Enterprise Manager (on the server you just installed it on):



Log in with credentials with enough rights:



Ensure everything is ok:



Now return to the SCOM console and go to Monitoring => nworks VMware_Enterprise Manager Dashboard.
Select the collectors in the Collector State view.



Click on the "Configure OpsMgr agent" task:



Click on run:



This should be it!