SCOM – Exchange Client Proxy Error

I was implementing System Center Operations Manager on a project at a customer site to monitor Exchange.
After loading the Exchange Management Pack, a bunch of errors like these were logged:

The Ehlo options for the client proxy target 10.2.38.164 did not match while setting up proxy for user on inbound session 08D2F00F25C773B2. The critical non-matching options were <maxSize>. The non-critical non-matching options were <NONE>. Client proxying will continue.

The Ehlo options for the client proxy target did not match while setting up proxy for user on inbound session 08D2F00F25C773B2. The critical non-matching options were . The non-critical non-matching options were . Client proxying will continue.

This error is logged for servers holding the Mailbox role, even though the event source says Hub Transport (which is no longer a standalone role in Exchange 2013).


So, looking at the alert description we can see that it somehow relates to the maximum message size, and that it is generated in response to the EHLO options returned when a test connection is made to the server in question.
To dig into this, we first need to determine what the maximum message size defined for the Exchange organization is. This can be done from the Exchange Management Shell with this command:

Get-TransportConfig | ft MaxSendSize, MaxReceiveSize

This lists the sizes configured for the Exchange organization.

Next, we need to determine which receive connector is configured with a different limit. This can be done either by looking at the receive connectors on the server listed in the error, or by running a command that lists all receive connectors in the organization and correcting those that deviate:

Get-ReceiveConnector | ft Name, MaxMessageSize

Examine the output, find the receive connector in question and correct its message size restriction. Alternatively, use this command to find all receive connectors with a deviating limit and correct them in one go. Change the 35MB limit to fit your needs:

Get-ReceiveConnector | Where-Object {$_.MaxMessageSize -ne 35MB} | Set-ReceiveConnector -MaxMessageSize 35MB
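If you want to preview which connectors would be changed before committing, the same pipeline should accept the standard -WhatIf switch (a minimal sketch; as above, swap 35MB for your own organization's limit):

Get-ReceiveConnector | Where-Object {$_.MaxMessageSize -ne 35MB} | Set-ReceiveConnector -MaxMessageSize 35MB -WhatIf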

Restart the Exchange Transport Service and voila, the error will go away.
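If you prefer to do the restart from PowerShell, here is a minimal sketch (MSExchangeTransport is the service name of the Microsoft Exchange Transport service):

Restart-Service MSExchangeTransport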

Max Concurrent API reached alert in SCOM 2012…

Over the last couple of months I have implemented a couple of Operations Manager solutions at customers that all use the Windows Server Management Pack version 6.0.7026.0.
This management pack includes a new monitor that tracks how many concurrent secure channel calls are made to a domain controller when users are authenticated with NTLM pass-through authentication.

On Windows Server 2008 and 2008 R2, however, the monitor can create false positives, which makes it quite noisy. This is a confirmed bug in this version of the management pack, as described on Kevin Holman's TechNet blog.

First off, you need to ascertain whether this is an actual issue on the server in question or just a false positive. To do this, you need to watch the NETLOGON performance counters (see the sketch after the list below).
The default values to expect are as follows:

  • Windows Server, pre-Windows 2012: 2 concurrent threads
  • Windows Server 2012: 10
  • Windows client: 1
  • Domain controllers, pre-Windows 2012: 1
  • Domain controllers, Windows 2012: 10
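A quick way to watch these is with Get-Counter against the Netlogon semaphore counters. The sketch below assumes the counters are present on the server (they ship with Windows Server 2012 and, as far as I recall, require a hotfix on 2008/2008 R2); the sample interval and count are arbitrary examples:

# Sample interval and count are arbitrary examples; adjust to taste
$counters = '\Netlogon(*)\Semaphore Waiters', '\Netlogon(*)\Semaphore Holders', '\Netlogon(*)\Semaphore Timeouts'
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12

Broadly speaking, sustained non-zero Semaphore Waiters or Semaphore Timeouts point to a real MaxConcurrentApi bottleneck, while flat zeroes suggest the alert is a false positive.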

If you do not bump against these values, then you are most likely hit by the above-mentioned bug and can turn off the monitor if you don't want it to be noisy. If you decide to do this, remember to check whether the problem is resolved in an upcoming update of the management pack, and then delete the overrides.

If you do, however, bump against these values, you can increase the limit by editing this registry value:
HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\MaxConcurrentApi
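As an example, the value can be set from PowerShell like this (a minimal sketch; 10 is just an illustrative value, and the Netlogon service must be restarted for the change to take effect):

# Example value only; pick a limit that fits your environment (maximum 150)
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters' -Name 'MaxConcurrentApi' -PropertyType DWord -Value 10 -Force
Restart-Service Netlogon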

The maximum value is however 150, which Windows Server 2012 and later support natively. If you are already at the maximum, you should consider scaling out, unless you are willing to accept the user experience degradation of slower validation and possibly additional validation prompts.

DNS Best Practice Analyzer error…

At a customer site we enabled the Best Practice Analyzer monitors in Operations Manager after careful consideration. And when I say careful consideration, it is because I always tell my customers that this monitor will give them a lot of work, and sure enough that happened here as well.

The customer was busy cleaning up the errors, but kept getting one that he couldn't figure out:

Dns servers on <network adapter name> should include the loopback address but not as the first entry

Problem:
The network adapter <network adapter name> does not list the local server as a DNS server; or it is configured as the first DNS server on this adapter.

Impact:
If the loopback IP address is the first entry in the list of DNS servers, Active Directory might be unable to find its replication partners.

Resolution:
Configure adapter settings to add the loopback IP address to the list of DNS servers on all active interfaces, but not as the first server in the list.

The customer insisted that he had made sure that the DNS server's local IP and the loopback IP were listed last in the order, as shown below:

[Image: DNS BPA error]

So, I took a look at the server, and sure enough the server order was correct… in the IPv4 settings, that is. Looking at the IPv6 settings (which the customer had not deployed), the address ::1 was for some reason listed among the DNS servers.
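A quick way to spot stray IPv6 DNS entries like this across all adapters is the sketch below, assuming Windows Server 2012 or later where the DnsClient module is available:

# Lists any interfaces that have explicit IPv6 DNS servers configured
Get-DnsClientServerAddress -AddressFamily IPv6 | Where-Object {$_.ServerAddresses} | Format-Table InterfaceAlias, ServerAddresses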

Removing this and setting it to automatically retrieve DNS servers from DHCP fixed the BPA error.