
NSLookup still showing IP of demoted Domain Controller

So I had an interesting issue today where a Domain Controller (DC) had been demoted, yet the IP of the demoted DC was still showing up when running nslookup internaldomain.local

Demoted DC: MWDC04 / IP: 10.14.111.111
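
To illustrate the symptom (the DNS server name and the other address below are made up for this example; only MWDC04's IP is from the original), the lookup kept coming back like this:

    C:\> nslookup internaldomain.local
    Server:   mwdc01.internaldomain.local
    Address:  10.14.111.10

    Name:      internaldomain.local
    Addresses: 10.14.111.10
               10.14.111.111   <- stale entry for the demoted MWDC04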

I had already done the metadata cleanup and tried many of the suggestions that came up when googling the subject. To my surprise, none of them worked.
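
For reference, the metadata cleanup I had already done can be run as an ntdsutil one-liner; this is a sketch, and the site name Default-First-Site-Name is an assumption (substitute your own):

    ntdsutil "metadata cleanup" "remove selected server CN=MWDC04,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=internaldomain,DC=local" quit quit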

On the primary DNS server I had removed the records pointing to that IP address, where I saw entries for:

(same as parent folder) Host(A)  10.14.111.111
(same as parent folder) NameServer (NS)  10.14.111.111

I also looked under internaldomain.local > _msdcs and deleted entries from there.

After clearing the cache and waiting for replication, I ran nslookup again and the IP was still there.

Well, there are some good and bad things about Microsoft DNS.

The BAD:

You cannot search record values (the data) in the DNS Manager console. You are limited to searching just the names.

THE GOOD:

For standard (file-backed) zones, all DNS entries are stored in a flat file on the DNS server, “C:\WINDOWS\system32\dns\internaldomain.local.dns” (the default location). JACKPOT!

I opened it up in Notepad++, did a search for the IP and DNS name of the demoted server (MWDC04 / 10.14.111.111), and started deleting the matching entries. I was surprised to find entries buried deep under “domaindnszones”, “forestdnszones”, and a few other subzones.
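
If you want to locate the matching lines before opening an editor, findstr does the trick from an elevated prompt. One caveat: the DNS Server service keeps the zone in memory and can overwrite your edits, so it is safer to stop the service first:

    net stop dns
    findstr /i /n /c:"mwdc04" /c:"10.14.111.111" "C:\WINDOWS\system32\dns\internaldomain.local.dns"
    REM edit the file, delete the matching lines, save, then:
    net start dns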

I cleared the cache again and waited for replication. Once replication completed, I tried nslookup internaldomain.local and this time it no longer listed the demoted DC.
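
For completeness, the cache clearing and the replication nudge can all be done from the command line with standard tools:

    dnscmd /clearcache
    repadmin /syncall /AdeP
    ipconfig /flushdns
    nslookup internaldomain.local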

I hope this saves others time, because finding a record in DNS might be like searching for a needle in a haystack!

ConfigMgr 2012 R2 – WSUS sync fails with HTTP 503 errors

Ran into this issue with ConfigMgr 2012 R2 where it was unable to synchronize the Software Update Point with the WSUS server. A review of the component status messages for the SMS_WSUS_SYNC_MANAGER component on the primary site server reveals errors related to WSUS synchronization similar to the following:
Message ID: 6703
WSUS Synchronization failed.
Message: The request failed with HTTP status 503: Service Unavailable.
Source: Microsoft.UpdateServices.Administration.AdminProxy.CreateUpdateServer.
I got the following error when trying to open Update Services on the WSUS server:

Error: Connection Error An error occurred trying to connect to the WSUS server. This error can happen for a number of reasons. Please contact your network administrator if the problem persists. Click the Reset Server Node to connect to the server again.

In addition to the above, attempts to access the URL for the WSUS Administration website (i.e., http://CMCASSERVER:8530) fail with the error:

HTTP Error 503. The service is unavailable

In this situation, the most likely cause is that the WsusPool Application Pool in IIS is in a stopped state.

Also, the Private Memory Limit (KB) for the Application Pool is probably set to the default value of 1843200 KB.

If you encounter this problem, increase the Private Memory Limit to 4 GB (4000000 KB) and restart the Application Pool: select the WsusPool Application Pool, click Advanced Settings under Edit Application Pool, and set Private Memory Limit (KB) to 4000000.
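
If you prefer the command line over IIS Manager, the same change can be made with appcmd; a minimal sketch, assuming the pool has the default name WsusPool:

    %windir%\system32\inetsrv\appcmd.exe set apppool "WsusPool" /recycling.periodicRestart.privateMemory:4000000
    %windir%\system32\inetsrv\appcmd.exe start apppool "WsusPool"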

After the Application Pool has been restarted, monitor the SMS_WSUS_SYNC_MANAGER component status, wcm.log and wsyncmgr.log for failures. Please note that it may be necessary to increase the Private Memory Limit to 8GB (8000000 KB) or higher depending on the environment.

Now WSUS is back online!

Going back to the basics… moving out of Amazon Drive!

As of June 8, 2017, it was announced that users signing up for Amazon Drive will no longer be able to select an unlimited cloud storage option. Instead they can choose either 100 GB for $11.99 per year or 1 TB for $59.99, with up to 30 TB available for an additional $59.99 per TB. (The prior pricing was unlimited everything for $59.99.) My data came to about 5 TB, which under the new pricing structure would cost me $300+ (and data is always growing!).

That is quite costly for just 5 TB of storage when I can buy two 8 TB drives and keep the data locally in a RAID configuration or a mirrored set. I shopped around with other popular cloud providers, but each and every one of them has some sort of limitation. I decided to purchase two 8 TB drives and maintain the data locally.

I found very little help on Google when searching for ways to move out of Amazon Drive with ease. I found a lot of cool little utilities, but none were able to do a clean and consistent sync copy/move. In most cases the application would either hang or leave the job incomplete.

I tried a lot of tools to get a synced local copy, but the process kept getting harder. I tried a lot of freeware and shareware utilities, as well as those offered by Amazon. I am listing my personal experiences here to save time for those who are in a similar situation.

Tools I tried:

Amazon Drive Desktop Sync

  • Horrible transfer speeds
  • Buggy software
  • Startup and resuming files delayed downloads significantly

SymLink (macOS/Linux)

  • Somewhat works, but metadata is lost

NetDrive

  • Mounts Amazon Cloud Drive as a network drive
  • Constant disconnects and too many app updates
  • Application hangs with large files
  • The service needed to be restarted multiple times to connect to Amazon

Cloudberry Explorer

  • Quirks around Admin Mode
  • Ghost files (0 KB) left over
  • Acts like an FTP client but is missing a lot of features

rClone (banned by Amazon at the time)

AllwaySync

  • The one-way transfer feature is nice, but it was taking a long time between files
  • This might have worked if my file set had been a whole lot smaller, but it failed on larger jobs

Expandrive

  • Similar to NetDrive but a whole lot more stable; still, it would fail on larger files

Odrive

  • Horrible interface. Didn't work most of the time.

…and a few more applications that didn't work out!

Syncovery

Syncovery was the winner in my case. It was the fastest of the tools and got me an exact copy out of Amazon Cloud Drive. It supports resuming, it is available on all platforms, it has a nice layout, and it can run as a scheduled job!

It took Syncovery literally two days to download all of my data. I was simply amazed at how efficiently it worked: it maintained a consistent speed and didn't lose any metadata. I ran a file check and all of the files checked out 100%.

The trial version worked in my case, and I am considering getting the Pro version. It excelled where all the others failed. It wasn't a resource hog and did the job on the first go. Thank you, Syncovery!


A couple of lessons learned in getting all my data downloaded successfully:

  1. Metadata is important, especially when dealing with older files. Try not to lose it; once it is lost there is no going back.
  2. Don’t copy to the same path as the original. Use an external drive and copy it there.
  3. If dealing with a lot of smaller files, break them into chunks or batches to avoid application hangs.
  4. Apart from Syncovery, some utilities may delete files from Amazon and put them in the Trash. If you notice files missing, make sure you look there; they are most certainly there. I personally didn't have this issue, but some people have reported it with other utilities.
  5. Share your experiences to help out others.

Conclusion

I am in no way promoting Syncovery as a product, but based on my personal experience it was the easiest way to move the amount of data I had from Amazon down to my local server. I am going to stay away from the public cloud space for a while, at least for my personal stuff. Given the pricing, the limitations on file sizes and types, and the amount of data I have, I am still searching for a good cloud store; I am evaluating ownCloud for now. If I ever go to a public cloud storage solution again, I am going to test my exit exercise/strategy before doing the bulk upload.

Another strategy people recommend is hosting all the files in a VM on AWS/Google/Azure. My issue there is access cost: as long as I access the data from within the VM I am fine, but any data I pull or access outside of the VM, I am paying for!

ESXi 6.0 not detecting BROCADE HBA adapter

Steps:

  1. Make sure the HBA is seated in a PCI slot and visible in the ESXi hardware list (see the commands after this list).
  2. Check whether the VMkernel can detect any storage via Fibre Channel.
    (The output will be a blank line if the HBA driver is missing, even though the HBA shows up as a PCI device per step 1.)
  3. Search for and download the relevant ESXi driver for the HBA. The recommended driver (bfa) version for the 82B in ESXi 5.1 is 3.0.0.0.
    You can download it from the following URL:
    https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXi50-BROCADE-bfa-3000&productId=229
  4. Download the driver and install it using the following instructions.
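
Here's roughly what steps 1 and 2 look like from the ESXi shell; a sketch, and exact output will vary by host:

    # Step 1: confirm the HBA shows up on the PCI bus
    lspci | grep -i brocade

    # Step 2: list the storage adapters the VMkernel can see
    esxcli storage core adapter list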

New Installation

For new installs, you should perform the following steps:

  1. Copy the VIB to the ESX server. Technically, you can place the file anywhere that is accessible to the ESX console shell, but for these instructions we'll assume the location is '/tmp'. An example of using the Linux 'scp' utility to copy the file from a local system to an ESX server located at 10.10.10.10 is shown after this list.
  2. Issue the following command (the full path to the VIB must be specified):
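
The original command examples did not survive in this copy of the post; a plausible reconstruction, with the VIB file name as a placeholder rather than the actual Brocade file name:

    # Step 1: copy the VIB from the local system to the ESX host
    scp brocade-bfa-3.0.0.0.vib root@10.10.10.10:/tmp/

    # Step 2: install the VIB, specifying its full path
    esxcli software vib install -v /full/path/to/driver.vib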

In the example above, this would be:
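
    esxcli software vib install -v /tmp/brocade-bfa-3.0.0.0.vib

(Assuming the file was copied to /tmp as in step 1; the file name is still a placeholder.)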

Note: Depending on the certificate used to sign the VIB, you may need to change the host acceptance level.  To do this, use the following command:
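
A sketch of that command; the level you actually need depends on how the VIB is signed (CommunitySupported here is just an example):

    esxcli software acceptance set --level=CommunitySupported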

Also, depending on the type of VIB being installed, you may have to put ESX into maintenance mode. This can be done through the VI Client, or by adding the '--maintenance-mode' option to the above esxcli command.

Upgrade Installation

The upgrade process is similar to a new install, except the command that should be issued is the following:
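
Using the same placeholder file name as above:

    esxcli software vib update -v /tmp/brocade-bfa-3.0.0.0.vib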

Reboot the host.

Now the host should detect the HBA and show the datastores.