Reboot notifications

May 9, 2016

Hello,

Reboot notifications: we all hate to reboot. Normally the fewer the better, but as an admin you want to persuade your users into rebooting their devices from time to time. It keeps them healthy and running smoothly.

Now in SCCM we have several options for rebooting. In this particular case we suppress the reboot for the update deployment, so the user gets notified but is not forced to reboot.

Unfortunately the result was this :

clip_image002

That’s odd, the Windows Update reboot notification was not what we wanted. If we check the notification area we see two notifications: one for the SCCM client and one for Windows Update.

clip_image004

The settings required to modify this behavior were the following:

· System -> Windows Components -> Windows Update -> Configure Automatic Updates: Disabled

· Re-prompt for restart: Disabled.
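For reference, these policies map to registry values under the Windows Update policy key. The value names below are my assumed mapping, not taken from the original post, so verify them in your environment:

```
; Under HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU
"NoAutoUpdate"=dword:00000001                  ; Configure Automatic Updates -> Disabled
"RebootRelaunchTimeoutEnabled"=dword:00000000  ; Re-prompt for restart -> Disabled
```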

clip_image006

After modifying these policies the result was better! Just one notification.

clip_image008

And if the user presses the Open restart button :

clip_image010

Or selects the restart now option:

clip_image012

In the software center applet you can see detailed info about which update requires a reboot.

clip_image014

Now the behavior is different for software installations requiring a reboot. For example, this IE11 installation returns a 3010 exit code.

clip_image016

The user will be notified about a required reboot on the device; these settings are configured by the SCCM client settings for “Computer Restart”.

clip_image018

The user will receive a popup:

clip_image020

If ignored, the restart icon will stay in the notification area.

clip_image022

Now, according to the settings, a permanent message is shown as soon as there are only 15 minutes left on the clock. The color of the progress bar changes and the hide button becomes unavailable.

clip_image024

Enjoy

Gino D


Quick Tip ! Automate it !

April 28, 2016

Hello,

I am a big fan of automation: you improve efficiency by generating a consistent result fast.

But it needs to be worth it, you need a certain quantity of requests before the investment pays off.

Luckily we have these kind of environments in our partner portfolio.

Here we use the service manager portal not for the end-users but we present the portal to the first line helpdesk so they don’t have to escalate certain tasks to second line. All requests are automated by orchestrator runbooks.

clip_image002

And we are at 4874 completed requests.

clip_image004

Simple math: at about 10 minutes per action if performed manually, this makes 48740 minutes -> 812 hours -> 101 working days saved. This time can be spent on tasks that create real added value for the partner.
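The back-of-the-envelope calculation checks out (assuming 8-hour working days):

```python
completed_requests = 4874
minutes_per_request = 10                     # estimated manual handling time

total_minutes = completed_requests * minutes_per_request
total_hours = total_minutes / 60
working_days = total_hours / 8               # 8-hour working day assumed

print(total_minutes, round(total_hours), int(working_days))  # 48740 812 101
```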

Enjoy.

Gino D


TCP Port monitoring in SCOM without using a template

April 6, 2016

Hello everyone,

As you may or may not know, creating TCP monitors in SCOM through the template is a fairly time-intensive task, especially if you have to create a ton of TCP monitors. Furthermore, templates are fine for smaller-scale Operations Manager environments but tend to create a lot of unnecessary groups, overrides, views, etc.

So naturally I was looking for a more elegant solution, as I did not want to go through creating hundreds of TCP monitors. My first thought was to google whether anything existed already, and to my surprise, I did not find any immediate solutions.
What I did find, however, was the following post. (Credits to Gowdhaman Karthikeyan)

This post explains how you can use a PowerShell discovery with a comma-separated file, or ‘CSV’, to add the proper TCP port instances in SCOM.
This has some significant advantages over using the template (as outlined in the blog post):

  • You can let other teams add TCP monitors themselves, with minimal SCOM knowledge or access.
  • It is more scalable, as it does not create any unnecessary groups, overrides, or views, unlike the template.
  • It is a lot faster, as you don’t have to go through the template for each TCP monitor you want to create.
  • The information is centrally stored in the CSV.

The blog post covers the class/discovery creation of these TCP port instances, but does not cover the monitoring part. As I did not have time to wait for part 2, I decided to use his management pack and add monitoring to it as well.

To enable monitoring I went through the following steps:

  • Created a Visual Studio solution and migrated the classes/discovery into my new management pack.
  • Created a ‘dummy’ TCP port monitor from the template wizard and saved it in a new management pack.
  • Exported this management pack and manipulated the data sources to change the hardcoded values to the properties of our custom class.

This is what the initial data source for the monitor looks like as generated by the template:

<ModuleTypes>
<DataSourceModuleType ID="TCPPortCheck_078ada71de03493d927d74746d848bd6.TCPPortCheckDataSource" Accessibility="Public" Batching="false">
<Configuration />
<ModuleImplementation Isolation="Any">
<Composite>
<MemberModules>
<DataSource ID="Scheduler" TypeID="System!System.Scheduler">
<Scheduler>
<SimpleReccuringSchedule>
<Interval Unit="Seconds">120</Interval>
</SimpleReccuringSchedule>
<ExcludeDates />
</Scheduler>
</DataSource>
<ProbeAction ID="Probe" TypeID="MicrosoftSystemCenterSyntheticTransactionsLibrary!Microsoft.SystemCenter.SyntheticTransactions.TCPPortCheckProbe">
<ServerName>server1.customer.org</ServerName>
<Port>80</Port>
</ProbeAction>
</MemberModules>
<Composition>
<Node ID="Probe">
<Node ID="Scheduler" />
</Node>
</Composition>
</Composite>
</ModuleImplementation>
<OutputType>MicrosoftSystemCenterSyntheticTransactionsLibrary!Microsoft.SystemCenter.SyntheticTransactions.TCPPortCheckData</OutputType>
</DataSourceModuleType>

The hardcoded part (the ServerName and Port values) is what we have to replace. However, we have not yet added any data from our target class to the data source, which we will have to do as well. The data source eventually looks like this:

<DataSourceModuleType ID="TCPPortMonitor.TCPPortCheck.DataSource" Accessibility="Public" Batching="false">
<Configuration>
<xsd:element name="ServerName" type="xsd:string" />
<xsd:element name="Port" type="xsd:int" />
<xsd:element name="NoOfRetries" type="xsd:int" />
<xsd:element name="TimeWindowInSeconds" type="xsd:int" />
</Configuration>
<ModuleImplementation Isolation="Any">
<Composite>
<MemberModules>
<DataSource ID="Scheduler" TypeID="System!System.Scheduler">
<Scheduler>
<SimpleReccuringSchedule>
<Interval Unit="Seconds">$Config/TimeWindowInSeconds$</Interval>
</SimpleReccuringSchedule>
<ExcludeDates />
</Scheduler>
</DataSource>
<ProbeAction ID="Probe" TypeID="Synth!Microsoft.SystemCenter.SyntheticTransactions.TCPPortCheckProbe">
<ServerName>$Config/ServerName$</ServerName>
<Port>$Config/Port$</Port>
</ProbeAction>
</MemberModules>
<Composition>
<Node ID="Probe">
<Node ID="Scheduler" />
</Node>
</Composition>
</Composite>
</ModuleImplementation>
<OutputType>Synth!Microsoft.SystemCenter.SyntheticTransactions.TCPPortCheckData</OutputType>
</DataSourceModuleType>
</ModuleTypes>

The monitor types will have to be changed as well, as the properties of the class are not passed through in the template version of the monitor.
So it went from this:

<UnitMonitorType ID="TCPPortCheck_078ada71de03493d927d74746d848bd6.TimeOut" Accessibility="Public">
<MonitorTypeStates>
<MonitorTypeState ID="TimeOutFailure" NoDetection="false" />
<MonitorTypeState ID="NoTimeOutFailure" NoDetection="false" />
</MonitorTypeStates>
<Configuration />
<MonitorImplementation>
<MemberModules>
<DataSource ID="DS1" TypeID="TCPPortCheck_078ada71de03493d927d74746d848bd6.TCPPortCheckDataSource" />
<ConditionDetection ID="CDTimeOutFailure" TypeID="System!System.ExpressionFilter">
<Expression>
<SimpleExpression>
<ValueExpression>
<XPathQuery Type="UnsignedInteger">StatusCode</XPathQuery>
</ValueExpression>
<Operator>Equal</Operator>
<ValueExpression>
<Value Type="UnsignedInteger">2147952460</Value>
</ValueExpression>
</SimpleExpression>
</Expression>
</ConditionDetection>
<ConditionDetection ID="CDNoTimeOutFailure" TypeID="System!System.ExpressionFilter">
<Expression>
<SimpleExpression>
<ValueExpression>
<XPathQuery Type="UnsignedInteger">StatusCode</XPathQuery>
</ValueExpression>
<Operator>NotEqual</Operator>
<ValueExpression>
<Value Type="UnsignedInteger">2147952460</Value>
</ValueExpression>
</SimpleExpression>
</Expression>
</ConditionDetection>
</MemberModules>
<RegularDetections>
<RegularDetection MonitorTypeStateID="TimeOutFailure">
<Node ID="CDTimeOutFailure">
<Node ID="DS1" />
</Node>
</RegularDetection>
<RegularDetection MonitorTypeStateID="NoTimeOutFailure">
<Node ID="CDNoTimeOutFailure">
<Node ID="DS1" />
</Node>
</RegularDetection>
</RegularDetections>
</MonitorImplementation>
</UnitMonitorType>

To this:

<UnitMonitorType ID="TCPPortMonitor.TimeOut.MonitorType" Accessibility="Public">
<MonitorTypeStates>
<MonitorTypeState ID="TimeOutFailure" NoDetection="false" />
<MonitorTypeState ID="NoTimeOutFailure" NoDetection="false" />
</MonitorTypeStates>
<Configuration>
<xsd:element name="ServerName" type="xsd:string" />
<xsd:element name="Port" type="xsd:int" />
<xsd:element name="NoOfRetries" type="xsd:int" />
<xsd:element name="TimeWindowInSeconds" type="xsd:int" />
</Configuration>
<MonitorImplementation>
<MemberModules>
<DataSource ID="DS1" TypeID="TCPPortMonitor.TCPPortCheck.DataSource">
<ServerName>$Config/ServerName$</ServerName>
<Port>$Config/Port$</Port>
<NoOfRetries>$Config/NoOfRetries$</NoOfRetries>
<TimeWindowInSeconds>$Config/TimeWindowInSeconds$</TimeWindowInSeconds>
</DataSource>
<ConditionDetection ID="CDTimeOutFailure" TypeID="System!System.ExpressionFilter">
<Expression>
<SimpleExpression>
<ValueExpression>
<XPathQuery Type="UnsignedInteger">StatusCode</XPathQuery>
</ValueExpression>
<Operator>Equal</Operator>
<ValueExpression>
<Value Type="UnsignedInteger">2147952460</Value>
</ValueExpression>
</SimpleExpression>
</Expression>
</ConditionDetection>
<ConditionDetection ID="CDNoTimeOutFailure" TypeID="System!System.ExpressionFilter">
<Expression>
<SimpleExpression>
<ValueExpression>
<XPathQuery Type="UnsignedInteger">StatusCode</XPathQuery>
</ValueExpression>
<Operator>NotEqual</Operator>
<ValueExpression>
<Value Type="UnsignedInteger">2147952460</Value>
</ValueExpression>
</SimpleExpression>
</Expression>
</ConditionDetection>
</MemberModules>
<RegularDetections>
<RegularDetection MonitorTypeStateID="TimeOutFailure">
<Node ID="CDTimeOutFailure">
<Node ID="DS1" />
</Node>
</RegularDetection>
<RegularDetection MonitorTypeStateID="NoTimeOutFailure">
<Node ID="CDNoTimeOutFailure">
<Node ID="DS1" />
</Node>
</RegularDetection>
</RegularDetections>
</MonitorImplementation>
</UnitMonitorType>

By changing the monitor types and the data source part of the code, the hardest part is basically done. All we have to do now is create four monitors that use the proper monitor types. These are the monitors included in the management pack:

  • TCP Unreachable Monitor
  • TCP Timeout Monitor
  • DNS Resolution Monitor
  • Connection Refused Monitor.
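To illustrate, a unit monitor wired to the TimeOut monitor type could look roughly like the following. This is a sketch, not the exact monitor from the downloadable MP: the monitor ID, target class name, the Health alias, and the hardcoded retry/interval values are assumptions.

```xml
<UnitMonitor ID="TCPPortMonitor.TimeOut.Monitor" Accessibility="Public" Enabled="true"
    Target="TCPPortMonitor.TCPPortCheck.Class" ParentMonitorID="Health!System.Health.AvailabilityState"
    TypeID="TCPPortMonitor.TimeOut.MonitorType" Remotable="true" Priority="Normal">
  <Category>AvailabilityHealth</Category>
  <OperationalStates>
    <OperationalState ID="TimeOut" MonitorTypeStateID="TimeOutFailure" HealthState="Error" />
    <OperationalState ID="NoTimeOut" MonitorTypeStateID="NoTimeOutFailure" HealthState="Success" />
  </OperationalStates>
  <Configuration>
    <!-- Pass the properties discovered from the CSV into the monitor type -->
    <ServerName>$Target/Property[Type="TCPPortMonitor.TCPPortCheck.Class"]/ServerName$</ServerName>
    <Port>$Target/Property[Type="TCPPortMonitor.TCPPortCheck.Class"]/Port$</Port>
    <NoOfRetries>3</NoOfRetries>
    <TimeWindowInSeconds>120</TimeWindowInSeconds>
  </Configuration>
</UnitMonitor>
```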

I have not added any performance collection yet, but will probably add this in a later stage.

Before or after importing the management pack, make sure you also follow these steps:

  • Create a share on which you will place the CSV file. It should be reachable from the management servers, and the default management server action account should have access to the share. The discovery runs on a default interval of 4 hours. The CSV file should look like this (make sure to use a comma as your delimiter!):
  • Change the share name used by the discovery by overriding the file path property in the discovery (TCP Monitoring Class Discovery).
  • Create views based on the TCP Monitoring Class. As I always use Squared Up instead of the standard SCOM console, I decided not to include any views in the MP. Here are some screenshots of what it looks like:
    8
    10
    9
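Since the example image of the CSV did not survive, here is a hypothetical layout; the column set matches the class properties used in the data source, but the actual header names in the original MP may differ:

```
ServerName,Port,NoOfRetries,TimeWindowInSeconds
server1.customer.org,80,3,120
server2.customer.org,443,3,120
```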

Note: this MP only works with SCOM 2012 R2 out of the box, but you can change the references to an older version and it should work as well.

As always, I would recommend testing the management pack before using it. Feel free to comment should you run into any issues. The management pack can be downloaded here.

Regards,

Jasper


Squared Up v2.3 – SCOM Dashboards – New features

December 8, 2015

Hello Everyone,

Today I am going to cover the new release of my favourite SCOM dashboarding product Squared Up.
Here are some of the new features, ordered by usefulness:

  • Open Access Dashboards
    This feature replaces read-only dashboards, and it’s a pretty big deal. Previously you still required SCOM permissions to log in to a read-only dashboard.
    Open access allows you to share non-interactive dashboards without having to log in, ideal for sharing dashboards on TVs. These dashboards also do not consume a named-user license.
    It is also a great replacement for performance (or even availability) reports. As you may or may not know, creating performance reports in SCOM can be a serious pain in the behind, especially for non-SCOM gurus.
    Now you can create a performance dashboard quickly and easily with Squared Up (it literally takes me 5 minutes to create one) and share it with your colleagues. What’s even more convenient is that they can access it any time, anywhere, on any platform, and the data is near real time. Compared to scheduled reports, where you still need to wait for them to be generated, this really makes them a thing of the past.
    Furthermore, open access dashboards generate a bitmap, which is then updated every 60 seconds automatically. This means that, regardless of how many people are watching the dashboard, the underlying data is only queried once in the background, which makes it very scalable!
    So here’s how it works:

    • You create a dashboard in Squared Up (that one is pretty obvious :))
    • As you can see, on the right you have a ‘share’ button as we all recognize from using our smartphones. Click this button to make the dashboard open access.
      ScreenHunter_231 Dec. 08 16.05
    • This gives us the choice to make it open access with or without authentication, plus some view options for fullscreen and embedded functionality. Click Generate.
      ScreenHunter_232 Dec. 08 16.07
    • Clicking Generate will show you the randomly generated URL. Previously, dashboards had a certain suffix which was easy to predict; open access eliminates this issue.
      ScreenHunter_233 Dec. 08 16.11
    • The dashboard is generated the first time it’s opened; this takes about 5-10 seconds depending on the amount of data. Afterwards the dashboard refreshes by itself, as the timestamp on the dashboard will show.
      ScreenHunter_234 Dec. 08 16.14.jpg
      ScreenHunter_235 Dec. 08 16.15.jpg
    • That’s it! You can now share this URL with all your colleagues.
  • Colored lines on performance graphs.
    Maybe you’d say: why is this such a big deal? Well, it really complements the open access dashboards. In the previous release, when using a graph in read-only mode, you had no idea which line was what because you had no means to hover over the performance counters. The colors are also consistent across the whole dashboard; this means that server X, for example, will have the same color in every graph section.
    By introducing colors, they also introduced a ‘key’, here’s a screenshot to clarify this:
    ScreenHunter_237 Dec. 08 16.19.jpg
    So now open access dashboards have more contextual information of what the graphs represent. Really neat!
    Plus it adds some ‘flair’ to your dashboards :).

    • Other improvements in the performance section include being able to choose your resolution (or data aggregation): for smaller scopes you might want near real-time info, whereas for longer periods you would probably want the hourly or daily data. Remember, real-time data is a lot more costly on the data warehouse. Previously you had no control over this (as far as I knew).
      ScreenHunter_238 Dec. 08 16.21.jpg
    • Here’s a screenshot of the new ‘coloring’ options of the performance sections:
      ScreenHunter_240 Dec. 08 16.26.jpg
  • Next up, the alert section has been revamped. You have a lot more control over how you want to show your alerts.
    • The first new option is being able to choose between error, warning and info, or any combination of these; previously this was only possible through the use of criteria. This makes the whole process more user friendly.
      ScreenHunter_241 Dec. 08 16.28.jpg
    • The columns section is also new; here you can choose which columns you want to show for your alerts and shuffle them around using drag and drop.
      ScreenHunter_242 Dec. 08 16.31.jpg
  • Improved installation user experience.
    The installation has been further simplified. To be honest, it was already very easy to install, but they made further improvements here. Setting permissions on the data warehouse is a thing of the past.
    login-setup1.png

    • After installation you are also introduced to a new page, which navigates you to some very useful squaredup resources.
      ScreenHunter_243 Dec. 08 16.37.jpg

Version 2.3 is another step in the right direction toward what I consider to be a console replacement. If you haven’t upgraded yet, you definitely should; it was a very smooth and painless process for me.

Regards,

Jasper


Orchestrator Quick Tip ! Junction

August 26, 2015

Hello,

When you have multiple actions that you want to run in parallel, you can link them and use a junction to wait for all actions to finish before continuing.

Here’s the Technet explanation : https://technet.microsoft.com/en-us/library/hh206089.aspx

Now consider the following example :

We use the logging ID to grab some information in Service Manager and save it in a custom field. This is accomplished by calling several sub-runbooks.

It looks like this :

clip_image002

Now if we return no data from the junction, our Get Log Data activity is not successful, as the logging ID is empty.

clip_image004

If you run the tester you receive no error, but the logging ID is empty.

clip_image006

clip_image008

Even though the activity is clearly configured to use the Logging_id from the start activity.

clip_image010

Now if we add the return activity from our previous branch, we receive exactly the same issue.

clip_image012

I had to add a link to our first subrunbook in order to be able to retrieve the Logging_id from our first start action. Then it works.

clip_image014

And set the returned data from the junction to this activity.

clip_image016

At last success.

Enjoy.

Gino D


Orchestrator run .NET version

June 22, 2015

 

All,

We’re using a simple script in Orchestrator to enumerate all AD groups containing info in the notes field.

The script is this:

Import-Module ActiveDirectory -Force

# Collect the distinguished names of all groups that have data in the info (Notes) field
$ArrayProcessList = @()
$Searchbase = "OU=Security Groups,OU=Groups,DC=localdomain,DC=com"
$results = Get-ADGroup -Filter { info -like "*" } -SearchBase $Searchbase
foreach ($result in $results)
{
    $ArrayProcessList += $result.DistinguishedName
}
$ArrayProcessList

When running it in the Runbook Tester with an admin user, all works fine. However, when testing with a calling runbook, so the runbook is executed on the runbook server using service accounts, I receive an error:

clip_image002

Hmm strange.

Digging into this issue, I noticed that the PowerShell version used by the Run .Net Script activity is the v2.0 x86 PowerShell edition (thanks for that, Thomas 🙂 )

As you can see, the import-module works in the default V3 version.

clip_image004

And this doesn’t work in the V2 version :

clip_image006

Okay, so we have identified the issue; how do we resolve it?

We like this approach: http://karlprosser.com/coder/2012/04/16/calling-powershell-v3-from-orchestrator-2012/

Modify the script so it starts a new PowerShell session and passes back the output.

clip_image008

So start with a variable and run powershell { command } after it; make sure you output the desired result inside the block, and then pass the initial variable as published data.
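As a sketch of that pattern, using the script from above (variable names are illustrative; the powershell { } call starts the machine’s default PowerShell, where the ActiveDirectory module loads fine):

```powershell
# The outer code runs in Orchestrator's v2.0 x86 host; the scriptblock below
# is executed by a fresh powershell.exe (the default v3+ edition on the server).
$ArrayProcessList = powershell {
    Import-Module ActiveDirectory -Force
    $Searchbase = "OU=Security Groups,OU=Groups,DC=localdomain,DC=com"
    Get-ADGroup -Filter { info -like "*" } -SearchBase $Searchbase |
        ForEach-Object { $_.DistinguishedName }   # whatever is output here is captured
}
# Publish $ArrayProcessList as the activity's returned data.
```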

clip_image010

And check result !

clip_image012

clip_image014

Yes ! Success.

Enjoy.

Gino D


Quick Tip ! Redistribute failed packages in ConfigMgr #RDProud

May 26, 2015

Hello,

We were having issues with several packages not being distributed correctly in a large SCCM 2012 environment. If we redistribute the failed packages on a specific DP, the issue is resolved.

We’ll perform a root cause analysis next time, let’s focus on getting our content to the DP’s for now.

So I have created a PowerShell script (based on http://www.david-obrien.net/2013/11/redistribute-failed-packages-configmgr-dps/) that will take all the failed distributions on a specific DP and refresh them.

Here it is (replace XXX with your site code):

$fileserver = "%Name_of_your_DP%"

# Find all packages in a failed state (State = 3) on this distribution point
$failures = Get-WmiObject -Namespace root\sms\site_XXX -Query "SELECT packageid FROM SMS_PackageStatusDistPointsSummarizer WHERE (State = 3 AND SourceNALPath like '$fileserver')"

foreach ($failure in $failures)
{
    $id = $failure.PackageID
    Write-Host $id

    # Get the distribution point entry for this package and trigger a redistribution
    $DP = Get-WmiObject -Namespace root\sms\site_XXX -Query "SELECT * FROM SMS_DistributionPoint WHERE (ServerNALPath like '$fileserver' and PackageID='$id')"
    Write-Host $DP
    $DP.RefreshNow = $true
    $DP.Put()
}

If you combine this with content validation on a schedule and the following report:

Software Distribution -> Content -> All active content distribution -> failed

clip_image002

You can export to CSV and have a nicely filtered Excel sheet that can be used to select the correct DP.

clip_image004

Enjoy.

Gino D