DNS Best Practice Analyzer error…

At a customer site, we had, after some careful consideration, enabled the Best Practice Analyzer monitor in Operations Manager. When I say careful consideration, it's because I always tell my customers that this monitor will generate a lot of work for them, and sure enough, it happened here as well.

The customer was busy cleaning up the errors, but kept getting one that he couldn't figure out:

Dns servers on <network adapter name> should include the loopback address but not as the first entry

The network adapter <network adapter name> does not list the local server as a DNS server; or it is configured as the first DNS server on this adapter.

If the loopback IP address is the first entry in the list of DNS servers, Active Directory might be unable to find its replication partners.

Configure adapter settings to add the loopback IP address to the list of DNS servers on all active interfaces, but not as the first server in the list.

The customer insisted that he had made sure the local server IP and loopback IP were listed last in the order, as shown below:

DNS BPA error

So, I took a look at the server and sure enough the server order was correct… in the IPv4 settings, that is. Looking at the IPv6 settings (which the customer hadn't deployed), the address ::1 was for some reason listed among the DNS servers.

Removing this entry and setting the adapter to retrieve DNS servers automatically from DHCP fixed the BPA error.
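On Windows 2008 R2 you can check and reset the IPv6 DNS entries from an elevated prompt with netsh; a sketch, assuming the adapter is named "Local Area Connection" (substitute your own adapter name):

```powershell
# List the DNS servers configured on the IPv6 stack of each interface;
# a stray ::1 entry will show up here even if the IPv4 settings are clean
netsh interface ipv6 show dnsservers

# Set the adapter back to obtaining its IPv6 DNS servers automatically,
# which removes the statically configured ::1 entry
netsh interface ipv6 set dnsservers name="Local Area Connection" source=dhcp
```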


Create virtual machine fails with Error (2911) Not enough storage is available to complete this operation (0x8007000E)

Came across this error on a Hyper-V cluster running Windows 2008 R2 SP1. As part of a previous troubleshooting effort (link), the Windows Management Framework had been upgraded to 3.0.

This caused the following error:

Error (2911) Insufficient resources are available to complete this operation on the xxxxxx server. Not enough storage is available to complete this operation (0x8007000E)

Recommended Action
Ensure that the virtual machine host has sufficient memory and disk space to perform this action. Try the operation again.

Some power-googling led me to this KB from Microsoft: Managing a host in Virtual Machine Manager fails with error 2911 – Not enough storage is available to complete this operation (0x8007000E)

So, the solution is to install the hotfix from KB2781512.
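If you want to confirm that a host is actually running WMF 3.0 before applying the hotfix, a quick check from PowerShell on the host:

```powershell
# WMF 3.0 ships with PowerShell 3.0, so a Major version of 3
# on a Windows 2008 R2 host means WMF 3.0 has been installed
$PSVersionTable.PSVersion
```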

The case of the missing performance counters in VMM…

I upgraded a Virtual Machine Manager (VMM) 2012 installation to SP1 a little while ago. Since the base OS of the server was Windows 2008 R2, this involved uninstalling VMM, upgrading the OS to Windows 2012 and then installing VMM 2012 SP1 on the server.

Since then, the counters for CPU, Assigned Memory and Memory Demand were not working, as you can see in the screenshot below:

VMM no counters

The only ones showing anything were the machines with fixed memory, which of course displayed assigned memory.

I tried the usual troubleshooting options of updating the integration tools in the VMs, re-installing the VMM agent on the hosts, rebuilding the performance counters and so on, but to no avail. Since then I'd been beating my head over this, and was pondering opening a support case with Microsoft PSS.

But tonight, as I was doing some routine maintenance and troubleshooting another problem, I installed the hotfix from KB2580360, and after rebooting the hosts and moving the virtual machines back, the counters were working again.
So it seems to be related to some WMI queries failing on the servers, although Operations Manager didn't log any errors about this.
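One way to test whether the performance-counter WMI classes respond on a host is to query one of them directly; a sketch, assuming the Hyper-V Dynamic Memory counter class exposed on 2008 R2 SP1 hosts (class and property names are from my own notes, so verify them on your system):

```powershell
# Query the Hyper-V Dynamic Memory performance counter class via WMI.
# If this returns no instances (or errors out) for VMs that use dynamic
# memory, the counters VMM relies on are broken on that host.
Get-WmiObject -Class Win32_PerfFormattedData_BalancerStats_HyperVDynamicMemoryVM |
    Select-Object Name, GuestVisiblePhysicalMemory, CurrentPressure
```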

For reference, the hotfix is only intended for systems running Windows 2008 R2 SP1 or Windows 2008 R2 RTM with hotfix KB974930 installed.

So for now, I can at least close that case… The big question is whether the error I was troubleshooting originally will disappear as well. Hopefully more on that later…

Unable to RDP to TMG 2010 server (0x80074e21 FWX_E_ABORTIVE_SHUTDOWN)

I was visiting a customer today to do some configuration changes on a TMG server. Once I arrived at the customer site, I was unable to RDP to any of the nodes in the TMG array.
Logging on through vCenter and running a trace on one of the nodes revealed this when the connection was attempted:

Status: A connection was abortively closed after one of the peers sent an RST packet.(0x80074e21 FWX_E_ABORTIVE_SHUTDOWN)
Rule: [System] Allow remote management from selected computers using Terminal Server

Remote management was enabled when looking in System Properties, but when running netstat on the server, I noticed that it wasn't listening on port 3389.
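For reference, a quick way to verify whether a server is actually listening on the RDP port:

```powershell
# Show all TCP endpoints and filter for the RDP port (3389);
# no LISTENING line in the output means nothing is bound to the port
netstat -ano | findstr ":3389"
```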

Disabling and enabling remote management in System Properties as shown below fixed the error:
Enable RDP

Quickly setting up an Exchange 2010 or 2013 DAG cluster…

So, you've installed a couple of Exchange 2010 or 2013 servers and want to set up a DAG quick and easy…

Well, just copy/paste these PowerShell commands (and edit the few parameters that vary with your installation) and you're good to go.

Create DAG cluster

New-DatabaseAvailabilityGroup -Name <network name of DAG cluster> -WitnessServer <Netbios name of witness server> -WitnessDirectory <local path on witness server to where you want the witness directory stored> -DatabaseAvailabilityGroupIPAddresses <IP address of DAG cluster>

This will form the DAG cluster.

Add nodes to DAG cluster

Add-DatabaseAvailabilityGroupServer -Id <netbios name of DAG cluster> -MailboxServer <Netbios name of server to be added>

This will add the server in question to the DAG cluster and the command must be run once per server to be added (unless you use an answer file, but I won’t cover that here).

Create databases

New-MailboxDatabase -Name <Name of mailbox database> -EDBFilePath <Local path to where you want the file stored, remember to include filename and extension of the EDB file> -LogFolderPath <Local path to where you want the log files to be created>

This will create the databases you want. Remember that the paths chosen above must be available on all servers.

Move arbitration, system or discovery mailboxes to the newly created databases

Get-MailboxDatabase -Arbitration | New-MoveRequest -TargetDatabase <name of target mailbox database>

This will start moving the arbitration mailboxes to the selected database. Use this only if you want to delete the default database. Otherwise, continue to "Add database copies to databases".

Check status of the move requests created above

Get-MoveRequest | Get-MoveRequestStatistics

This will show the status of the move requests created above. Once they all list as Completed, you can go ahead and delete them as I will show below.

Delete the move requests created above

Get-MoveRequest | Remove-MoveRequest

This will remove the move requests listed above and you will be able to delete the default database created during installation.

Cleanup disconnected mailboxes

Get-MailboxStatistics -Database <name of database to be removed> | Where-Object {$_.DisconnectReason -eq 'SoftDeleted'} | ForEach {Remove-StoreMailbox -Database $_.Database -Identity $_.MailboxGuid -MailboxState SoftDeleted}

Remove default database

Remove-MailboxDatabase -Identity <Name of the database you wish to remove>

This will remove the database specified. This must be done once per DAG node you installed, as each mailbox server that is installed will create a default database.

Add database copies to databases

Get-MailboxDatabase | Where-Object {$_.ReplicationType -ne 'Remote'} | Add-MailboxDatabaseCopy -MailboxServer <Netbios name of server to be added> -ActivationPreference <activation preference in number form>

This command adds a mailbox database copy to the server in question. Of course, the server must have been added as a DAG node as done above. I've purposely built the command to scan for non-replicated databases, so it can be used on a later occasion if you add more databases.
The activation preference determines in what order you want the servers to activate the database; for example, 1 is primary, 2 is secondary, 3 is tertiary and so on. If you are adding more than one additional server, you can extend the above command by appending | Add-MailboxDatabaseCopy -MailboxServer <Netbios name of server to be added> -ActivationPreference <activation preference in number form>, simply changing the server name and incrementing the activation preference number.

With these steps, you have a running Exchange 2010 or 2013 DAG cluster and are ready to deploy mailboxes on it…
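For reference, here is the core sequence with the placeholders filled in. All names, the IP address and the paths (DAG01, EX01/EX02, FS01, DB01 and so on) are hypothetical examples and must be replaced with values from your own environment:

```powershell
# Form the DAG (names, IP and paths below are hypothetical examples)
New-DatabaseAvailabilityGroup -Name DAG01 -WitnessServer FS01 `
    -WitnessDirectory C:\DAG01-Witness -DatabaseAvailabilityGroupIPAddresses 192.168.1.50

# Add both mailbox servers as DAG nodes (one command per server)
Add-DatabaseAvailabilityGroupServer -Id DAG01 -MailboxServer EX01
Add-DatabaseAvailabilityGroupServer -Id DAG01 -MailboxServer EX02

# Create a database on a path that is available on all nodes
New-MailboxDatabase -Name DB01 -EDBFilePath D:\Databases\DB01\DB01.edb `
    -LogFolderPath D:\Databases\DB01\Logs

# Add a copy of every non-replicated database to the second node
Get-MailboxDatabase | Where-Object {$_.ReplicationType -ne 'Remote'} |
    Add-MailboxDatabaseCopy -MailboxServer EX02 -ActivationPreference 2
```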

Exchange 2010 setup fails with “There are no more endpoints available from the endpoint mapper”…

So, first blog post here at my new blog 🙂

I'm currently working on a project for a customer who wants to upgrade their aging Exchange 2003 platform (which, one must say, is about time) to Exchange 2013. As this co-existence scenario is not supported, we need to take an intermediate step and implement Exchange 2010 first.

As we were installing the first Exchange 2010 server, it threw the following error when trying to install the Hub Transport role:

There are no more endpoints available from the endpoint mapper. (Exception from HRESULT: 0x800706D9)

As a result the installer ended and left us stranded there.

So first off, we needed to figure out why the RPC endpoint mapper error occurred, as we would most likely encounter it once more. Going through the requirements listed on the official TechNet pages, I found that the customer had for some reason disabled the Windows Firewall (as in the service itself, not just through the Windows Firewall MMC snap-in).
This was the likely reason for the error, so we re-enabled the service and restarted the installer, but it then gave this message:

A setup failure previously occurred while installing the HubTransport role. Either run Setup Again for just this role, or remove the role using control panel

So, the fix should be pretty straightforward as stated in the error message. But restarting the setup notified us that we needed to re-run it from Control Panel, and going there didn't allow us to either install or uninstall anything (as nothing was actually installed on the server as a result of the first error).
As the Exchange installer runs, it writes various registry keys to keep track of the installation process. Removing these put us back on track to a successful installation.
We simply deleted the values Watermark and Action located under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ExchangeServer\v14\HubTransportRole (and as always, make a backup of anything before you edit the registry).
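The backup and the deletion can both be done from an elevated command prompt; a sketch (the backup path is just an example):

```powershell
# Back up the key before touching it, so it can be restored if needed
reg export "HKLM\SOFTWARE\Microsoft\ExchangeServer\v14\HubTransportRole" C:\HubTransportRole-backup.reg

# Remove the values the failed setup left behind
reg delete "HKLM\SOFTWARE\Microsoft\ExchangeServer\v14\HubTransportRole" /v Watermark /f
reg delete "HKLM\SOFTWARE\Microsoft\ExchangeServer\v14\HubTransportRole" /v Action /f
```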

So, after doing this we were able to run a successful installation of Exchange 2010 🙂