
Going back to the basics… moving out of Amazon Drive!

As of June 8, 2017, Amazon announced that users signing up for Amazon Drive will no longer be able to select an unlimited cloud storage option. Instead they can choose either 100 GB for $11.99 per year or 1 TB for $59.99 per year, with up to 30 TB available for an additional $59.99 per TB. (The prior pricing was unlimited everything for $59.99.) My data came to about 5 TB, which under the new pricing structure would cost me $300+ per year (and data is always growing!).

That is quite costly for just 5 TB of storage when I can buy two 8 TB drives and keep the data locally in a RAID configuration or a mirrored set. I shopped around with other popular cloud providers, but each and every one of them has some sort of limitation. So I decided to purchase two 8 TB drives and maintain everything locally.

I found very little help on Google when searching for ways to move out of Amazon Drive with ease. I found a lot of cool little utilities, but none were able to do a clean and consistent sync copy/move. In most cases the application would either hang or leave the job incomplete.

I tried a lot of tools to get a synced local copy, but the process kept getting harder. I tried a lot of freeware and shareware utilities, as well as those offered by Amazon. I am listing my personal experiences here to save time for those who have a similar situation.

Tools I tried:

Amazon Drive Desktop Sync

  • Horrible transfer speeds
  • Buggy software
  • Startup and resuming files would delay downloads significantly.

SymLink (MacOS/ Linux)

  • Somewhat works, but metadata is lost.


  • Mounts the Amazon Cloud Drive as a network drive
  • Constant disconnects + too many app updates
  • Application hangs with large files
  • Service needed to be restarted multiple times to connect with Amazon

Cloudberry Explorer

  • Quirks around Admin Mode.
  • Ghostfiles (0kb) leftover.
  • Acts like an FTP Client but missing a lot features

rClone (Banned)


  • The one-way transfer feature is nice, but it was taking a long time between files
  • This might have worked if my file base was a whole lot smaller, but it failed for larger jobs.


  • Similar to NetDrive but a whole lot more stable; still, it would fail on larger files.


  • Horrible interface. Didn’t work most of the time.

…and a few more applications that didn’t work out!


Syncovery was the winner in my case. This tool was the best in speed and got me an exact copy out of Amazon Cloud Drive. It supports resuming! It is available on all platforms. It has a nice layout and can run as a scheduled job!

It took Syncovery literally 2 days to get all of my data downloaded. I was simply amazed at how efficiently this tool worked. It maintained a consistent speed and didn’t lose any metadata. I ran a file check and all files checked out 100%.

The trial version worked in my case, and I am considering getting the Pro version. It excelled where all the others failed. It wasn’t a resource hog and did the job in the first go! Thank you Syncovery!


Couple of lessons learned in getting success with all my data downloaded.

  1. Metadata is important, especially when dealing with older files. Try not to lose it; once it is lost there is no going back.
  2. Don’t copy to the same path as the original. Use an external drive and copy it there.
  3. If dealing with a lot of smaller files, break them into chunks or batches to avoid application hangs.
  4. Apart from Syncovery, some utilities might delete the files from Amazon and put them in Trash. If you notice files missing, make sure you look there; they are most certainly there. I personally didn’t have this issue, but some people have reported it with other utilities.
  5. Share your experiences to help out others.


I am in no way promoting a product from Syncovery; based on my personal experience I simply found it the easiest way to move the amount of data I had from Amazon down to my local server. I am going to stay away from the public cloud space for a while, at least for my personal stuff. Based on the pricing, the limitations on file size and types, and the amount of data I have, I am still searching for a good cloud store. I am evaluating ownCloud for now. If I ever go to a public cloud storage solution again, I am going to test my exit exercise/strategy prior to the bulk upload.

Another strategy people are recommending is hosting all the files in a VM on AWS/Google/Azure. My issue there is access cost. If my access stays within the VM I am good, but any data I pull or access out of the VM is egress I am paying for!

Get .Net Framework Version for the .DLL & .EXE files

Working with many app/dev teams, it is often hard to find out which version of .NET an application was designed or built in.

Now if your application server has multiple drives, then depending on which drive the application resides on, this information may be harder to find.

Let’s assume there are two drives C: and D:.

We will start with D: drive as it is easy.
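The method is a short PowerShell sketch along these lines ("D:\Apps" is a hypothetical folder; adjust it to wherever your application resides):

```powershell
# List each assembly's CLR runtime version next to its path.
# LoadFile throws for native (non-.NET) binaries, so we skip those.
Get-ChildItem -Path "D:\Apps" -Recurse -Include *.exe, *.dll | ForEach-Object {
    try {
        $ver = [System.Reflection.Assembly]::LoadFile($_.FullName).ImageRuntimeVersion
        "{0}`t{1}" -f $ver, $_.FullName
    } catch { }
}
```

ImageRuntimeVersion reports the CLR version the assembly was built against (for example v2.0.50727 or v4.0.30319), which maps to the .NET Framework family.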

Now the C: drive is a little more work. The above method won’t work because the C: drive has system files, and depending on your rights you may not have access to them.

You may get the following error:

But there is a way to get this accomplished. Good old DOS commands to the rescue! We are basically going to get a list of .exe and .dll files from the C: drive and then run the above code against it.

Let’s capture the files:
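For example, with good old dir (C:\Temp is a hypothetical output location):

```batch
REM Recursively list all .EXE and .DLL files on C: as bare full paths
dir C:\*.exe /s /b > C:\Temp\C_EXE_Paths.txt
dir C:\*.dll /s /b > C:\Temp\C_DLL_Paths.txt
```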

Now we have the .EXE files stored in C_EXE_Paths.txt; we query it for .NET versions and save the output to DotNetFiles_C_EXE.txt.

Similarly, we have the .DLL files stored in C_DLL_Paths.txt; we query it for .NET versions and save the output to DotNetFiles_C_DLL.txt.

You might get errors for files that do not meet the criteria or fail to list a .NET version.

This can be suppressed by using:
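One way to silence those errors in PowerShell is to wrap the load in try/catch (a sketch, reusing the hypothetical C:\Temp paths from above):

```powershell
Get-Content C:\Temp\C_EXE_Paths.txt | ForEach-Object {
    try {
        # Fails quietly for files that are not .NET assemblies
        "{0}`t{1}" -f ([System.Reflection.Assembly]::LoadFile($_).ImageRuntimeVersion), $_
    } catch { }
} | Out-File C:\Temp\DotNetFiles_C_EXE.txt
```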

The output would be similar to:

Now you can import this in Excel and go crazy!  😉

Additionally, if you want to detect which version of .NET is installed on your server, here is a cool utility (ASoft .NET Version Detector) that gets you the info, as well as download links to the installer in case you need to download and install it.

12 dig Command Examples for DNS

dig can be very useful in tracking down DNS related issues.

To install dig for Windows/Linux/macOS, click here.

  1. A basic dig command – dig a domain name. In the most basic of dig commands, you have a domain name you want to find information about, so you issue the following dig command:
    and get the following results:
    The dig command output has the following sections:

    Header: This displays the dig command version number, the global options used by the dig command, and a few additional headers.

    QUESTION SECTION: This displays the question we asked the DNS, i.e., your input. Since the default record type dig uses is A, this section indicates that we asked for the A record of the website.

    ANSWER SECTION: This displays the answer received from the DNS, i.e., your output. This displays the A record we asked for.

    AUTHORITY SECTION: This displays the DNS name servers that have the authority to respond to this query — basically the available name servers of the domain.

    ADDITIONAL SECTION: This displays the IP addresses of the name servers listed in the AUTHORITY SECTION.

    The stats section at the bottom displays a few dig command statistics, including how much time it took to execute the query.

  2. Display only the ANSWER Section of the dig command output
    For the most part, all you need to look at is the “ANSWER SECTION” of the dig command output. So we can turn off the other sections as shown below:
    +nocomments – Turn off the comment lines
    +noauthority – Turn off the authority section
    +noadditional – Turn off the additional section
    +nostats – Turn off the stats section
    +noanswer – Turn off the answer section (of course, you wouldn’t want to turn off the answer section)
    The following dig command displays only the ANSWER SECTION.
    Instead of disabling all the unwanted sections one by one, we can disable all sections using +noall (this turns off the answer section as well) and then add +answer, which shows only the answer section.

    The above command can also be written in a short form as shown below, which displays only the ANSWER SECTION.

  3. dig a TCP/IP address. I was trying to find the PTR record for the following IP address:
    but as you can see, I don’t get a PTR record in this dig output. To perform a reverse DNS lookup using the IP address, you need to use the -x option, like this:
    As you can see, this does indeed return a PTR record.
  4. How to get IP address(es) for a domain:
    An easy way to get the IP address(es) corresponding to a domain name is to add the “+short” option to your dig command. As the name implies, this gives you the dig short output, and if you don’t specify any other command line options, that output is the IP address. Here’s what it looks like for :
  5. Get MX record for a domain:
    Another common dig command need is to find an “MX record” for a domain name. This is easily done with the “dig mx” command, like this:
    You can also use option -t to pass the query type (for example: MX) as shown below.
  6. Show the nameservers for your domain
    Here’s how to query for a list of nameservers for a given domain, again using the ‘short’ option to keep the output down:
    You can also use option -t to pass the query type (for example: NS) as shown below.
  7. Query specific nameservers with dig
    Or, if you prefer the shorter version of the output:
  8. View ALL DNS records types using dig -t ANY
    To view all the record types (A, MX, NS, etc.), use ANY as the record type as shown below.
    (or) Use -t ANY
  9. Traceroute Information
    If you like the traceroute command, you can do something similar with dig to follow DNS nameservers, like this, using the ‘+short’ option to keep the output manageable:
  10. Query multiple sites from dig command line:
  11. Specify Port Number
    By default the dig command queries port 53 which is the standard DNS port, however we can optionally specify an alternate port if required. This may be useful if an external name server is configured to use a non standard port for some reason. We specify the port to query with the -p option, followed by the port number. In the below example we perform a DNS query to port 5300.
    Note that the external name server must actually be listening for traffic on the specified port, and its firewall will also need to allow the traffic through, otherwise the lookup will fail. In this example the connection times out, as the name server is not configured to listen on the random port 5300 that I selected for this example.
  12. Use IPv4 or IPv6
    By default our dig queries run over the IPv4 network. We can explicitly select the IPv4 transport with the -4 option, or the IPv6 transport with the -6 option.
    Short version:
    Hope this was able to explain how to use dig, or at least get you started. Do you ‘dig’ it?
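For quick reference, the twelve examples above boil down to commands roughly like these (example.com, example.org, and the name server 8.8.8.8 are placeholders):

```shell
dig example.com                      # 1. basic query (A record by default)
dig example.com +nocomments +noauthority +noadditional +nostats  # 2. trim sections
dig example.com +noall +answer       # 2. short form: answer section only
dig -x 8.8.8.8                       # 3. reverse (PTR) lookup for an IP
dig example.com +short               # 4. just the IP address(es)
dig example.com MX                   # 5. mail records (or: dig -t MX example.com)
dig example.com NS +short            # 6. name servers (or: dig -t NS example.com)
dig @8.8.8.8 example.com             # 7. query a specific name server
dig example.com ANY +noall +answer   # 8. all record types (or: dig -t ANY)
dig example.com +trace               # 9. follow the delegation chain
dig example.com example.org +short   # 10. query multiple names at once
dig @8.8.8.8 example.com -p 5300     # 11. query a non-standard port
dig example.com -4                   # 12. force IPv4 transport (-6 for IPv6)
```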

R.I.P. nslookup – Start using dig or host

I have been using nslookup for the longest time I can remember. Although this may be an old topic to some, it may be new to most Windows users.

Dear Windows users,

nslookup has been deprecated.

The organization that maintains the code for nslookup, the Internet Systems Consortium (ISC), has stated it very clearly: in the most recent version of nslookup (included with BIND 9), the following message appears:

ISC is the organization behind the Berkeley Internet Name Daemon (BIND). BIND is the most widely used DNS server in the world. nslookup is distributed with BIND.

If you run OS X or any current version of Linux, dig and host are already installed.


Dig (domain information groper) is a tool that is used for querying DNS servers for various DNS records, making it very useful for troubleshooting DNS problems.

For example, if we enter:

we get the following output:

There is a lot of information in the above output, but we can break each section down to get a better understanding of what we’re looking at. First, we are presented with the version and global options section:

This is followed by a section that gives us more in-depth technical information about the response, or answer:

Then we have a section that repeats our question back to us. This basically serves as a reminder of exactly what we told dig we want to look up:

The answer section is probably the section we’re most interested in. This section is where we find the IP addresses that correspond to where we pointed dig:

In our test case, we can now see what the domain name resolves to.

Finally, the last section shows us some more general statistics about the query. We have the amount of time the query takes, the address the query came from (our router IP), the time the query was placed, and the amount of data that was returned to us:

This example is a very basic example of a common lookup. More advanced lookups can be performed using dig, and therein lies its power. If we type in:


The host command is much like dig, but more succinct. If we enter:

host is also capable of running reverse lookups. You can provide it with an IP address, and it will tell you the name of the specific server associated with that IP. For example:

Try typing host -a followed by a website address and note the results. Yes, that’s right: host -a gives you the same output that you would get from a plain-old dig command with no options set.
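For reference, the host invocations discussed above look something like this (example.com and 8.8.8.8 are placeholders):

```shell
host example.com       # forward lookup: name to address (plus mail server info)
host 8.8.8.8           # reverse lookup: address back to a name
host -a example.com    # verbose output, comparable to plain dig
```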

Hopefully that quickly explains how dig/host can be used as an nslookup replacement. dig is a whole lot more informative and easier to use. See the top 12 commands used in dig here.

Installing dig

Windows:


  1. First go to the BIND downloads page and click on the download button.
  2. Choose the right version; for my Windows machine I’m downloading the win 64-bit version.
  3. Once the archive file has been downloaded successfully, extract the zip file’s contents to a temporary directory on your workstation.
  4. Go into this directory and run “BINDInstall.exe” as Administrator, then choose the “Tools only” option and the target directory where dig should be installed. I chose C:\Program Files\dig.
  5. Next, add the path of the dig folder (C:\Program Files\dig\bin) to the system PATH variable. In the Windows 10/Server 2012 R2/Server 2016 search box, type “environment variables”,
    then choose “Edit the system environment variables” and enter the admin password if needed. In the box that opens, there is an Environment Variables button; click on it and
    add the path

    Close all dialogs.

  6. Now we should be able to run the dig tool directly from the command line by typing dig.
  7. Depending on your system, an error dialog may open saying MSVCR110.dll is missing.
    Fix: At this point the dig tools should already work, and the MS Visual C++ redistributable should already be installed. If not, follow the next steps.
    MSVCR110.dll is the Microsoft Visual C++ Redistributable DLL that is needed for projects built with Visual Studio 2012. The DLL letters spell this out: MS = Microsoft, V = Visual, C = C++, R = Redistributable. This error appears when you run software that requires the Microsoft Visual C++ Redistributable 2012, which can easily be downloaded from the Microsoft website as an x86 or x64 edition. Depending on the software you wish to install, you need either the 32-bit or the 64-bit version. Refer to the following link: Visual C++ Redistributable for Visual Studio 2012 Update 4

    So download the mentioned update and install it.

    (Downloading and installing Visual Studio 2015 from the link did not work.)
  8. Download & Install:
  9. Depending on the system it may require a restart. I would go ahead and restart.
  10. Now it works!

Linux/ Mac:

In CentOS/RHEL/Fedora dig is part of the ‘bind-utils’ package
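On those distributions it can be installed with (run as root, or via sudo):

```shell
yum install bind-utils
```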


For Debian/Ubuntu based distributions it comes from the ‘dnsutils’ package.
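Likewise (run as root, or via sudo):

```shell
apt-get install dnsutils
```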



Dig is included in most Linux and Mac OS X installations by default via the Terminal.

How to Generate a Group Policy Report

This may be a noob topic, but it is an important one.

Applies To: Windows Server 2003, Windows Vista, Windows Server 2008, Windows 7, Windows Server 2003 R2, Windows Server 2008 R2, Windows Server 2012, Windows 8

Depending on the size of your organization, you could have a few Group Policy Objects (GPOs) or you could have many. Sometimes it is very hard to find out why a workstation or server is acting the way it is. I would say that GPOs are the heart of security in the Windows operating system.

A nice way to view which policies are being applied to the target Workstation/Server is by generating an .html file that shows all GPOs applied. The GPRESULT command displays the Resultant Set of Policy (RSoP) information for a remote user and computer.

Open Command Prompt and type the following:
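For example, to write the report to the desktop (the file name GPReport.html is just a choice):

```batch
REM Generate an HTML RSoP report on the desktop
gpresult /h "%USERPROFILE%\Desktop\GPReport.html"

REM Or target a remote computer/user (hypothetical names)
gpresult /s RemotePC /user CONTOSO\jdoe /h "%USERPROFILE%\Desktop\GPReport.html"
```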

Now open the file GPReport.html that is present on the desktop. It should look similar to the image below.

To read more about GPRESULT and switches allowed – click here.

Provisioning a New Office 365 User and Mailbox from Exchange Hybrid via PowerShell

Working with many Office365 clients, I receive queries on how to go about provisioning users and mailboxes for an Exchange hybrid deployment.

To begin with, let’s assume a couple things.

  1. We have a Windows 2012 R2 member server with Azure AD Connect (AAD Connect) version (or newer) and the Azure AD Module for PowerShell installed; and
  2. We have an Exchange 2013 CU11 (or newer) server configured for hybrid with an active O365 tenant.

Now that we’ve established a baseline, there are a couple of options to perform the task of provisioning an AD user, creating a mailbox, and assigning an Office 365 license.

  1. The first option would be to create an AD user, create an on-premises mailbox, migrate the mailbox to Office 365, and assign a license; or
  2. The second option would be to create an AD user, create a remote (or Office 365) mailbox, and assign a license.

In this post, I will cover the second option simply because it includes fewer steps and attempts to avoid confusion around where the mailbox should be created.

Do not create an AD user and then go to the Office 365 portal to create a new user and associated mailbox. This method will not properly create a synchronized O365 user and mailbox.


From the Exchange server, first create the AD user with remote mailbox using one command via Exchange Management Shell (EMS or Exchange PowerShell)…
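A sketch of that command; the name, UPN, OU, and password are placeholder values:

```powershell
New-RemoteMailbox -Name "Jane Doe" -FirstName Jane -LastName Doe `
    -UserPrincipalName jdoe@contoso.com `
    -OnPremisesOrganizationalUnit "contoso.com/Office 365 Users" `
    -Password (ConvertTo-SecureString "EnterPasswordHere" -AsPlainText -Force) `
    -ResetPasswordOnNextLogon $true
```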

In the command above, I created the AD user in an OU named “Office 365 Users”, set the password to “EnterPasswordHere”, and will require the user to change their password at next logon. However, I did not assign an SMTP address or remote routing address assuming that the email address policies are configured to be applied as new mailboxes are created.


Once the AD user and mailbox are created, the AD object must be synchronized to O365 in order to add the user with the associated mailbox in the tenant. With the new version of AAD Connect, the scheduled sync occurs every 30 minutes. In my case, I’m not that patient and will manually force a sync to O365.

From the server with AAD Connect installed, via an elevated PowerShell console, run the following command to perform the sync to O365…
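With the AAD Connect versions described above, the delta sync can be triggered like this:

```powershell
# Sync only the changes made since the last cycle
Start-ADSyncSyncCycle -PolicyType Delta
```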

This task will synchronize all changes made to AD since the user and mailbox were created.


In the final step, I assign an O365 license to the newly created and synchronized user. The following commands can be run from any machine that has both Microsoft Online Services Sign-in Assistant for IT Professionals RTW and Windows Azure Active Directory Module for Windows PowerShell installed. In my case, they are installed on each server, as well as my admin workstation.

Connect to O365 via PowerShell from an elevated PowerShell console; or using Azure AD Module for PowerShell console.

Confirm the new user does not have an O365 license assigned.

This command returns unlicensed O365 users in which the “isLicensed” parameter is “False”.

The next command returns the “AccountSkuId“, or subscription license(s), of my tenant that I will use to assign to the new user.

The AccountSkuId will look something similar to “tenantname:ENTERPRISEPACK“; where “ENTERPRISEPACK” represents my Office 365 Enterprise E3 subscription. Other subscriptions will have different representations.

Before I can assign any licenses to my new user, the user must be assigned a location (or country code). Since I am located in the United States, I use “US” as the two-letter country code for the user, using this command…

Now that I’ve set a location for the new user, I can assign a license from my associated O365 subscription, using this command…
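Taken together, the licensing steps above look roughly like this (the UPN jdoe@contoso.com and the tenant name are placeholders):

```powershell
# Connect to O365 (prompts for admin credentials)
Connect-MsolService

# Confirm the new user shows up as unlicensed (isLicensed = False)
Get-MsolUser -UnlicensedUsersOnly

# List the subscription SKUs (AccountSkuId values) in the tenant
Get-MsolAccountSku

# Set the usage location, then assign the license
Set-MsolUser -UserPrincipalName jdoe@contoso.com -UsageLocation US
Set-MsolUserLicense -UserPrincipalName jdoe@contoso.com -AddLicenses "tenantname:ENTERPRISEPACK"
```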

Finally, the user can access their assigned mailbox in Exchange Online.

Add Alternate Email Address or Recovery Email Address for Office365 Administrator

In Office365, depending on the admin role of an account, you may want to add an alternate email address for password recovery. This is basically a self-service password reset for Office365 administrators.

Quick way to do this is with PowerShell:
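A sketch (the UPN and recovery address are placeholders):

```powershell
Set-MsolUser -UserPrincipalName admin@contoso.com `
    -AlternateEmailAddresses @("recovery@example.com")
```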

If this setting is unset for an administrator, Office365 gives you a nice reminder about adding an alternate email address in case your primary account gets locked out.

You can add this information when first setting up the account:

It can also be added for an existing admin user: go to the Gear, then Office 365 settings, and edit the ‘me’ section, where you can enter your mobile phone number and alternate email.

Active Directory and Kerberos SPNs Made Easy!

What Is an SPN?

An SPN is a reference to a specific service, for example, an instance of SQL or a web application run by IIS. Since SPNs are specific, they reference not only what the service is (such as an SQL server), but also which hostname runs the instance and on which port it’s running (however, you don’t have to specify the port if running on default ports).

Service Principal Names are already in use for every computer and user account. Though not usually seen, there is a default SPN established at the time of account creation, identified as the SAMAccountName with a dollar sign appended to it. Therefore, the user Contoso\JoeUser would have a Service Principal Name of Contoso\JoeUser$, which is referenced by the domain during authentication and ticket granting.

When Should You Set an SPN?

Service Principal Names are not always necessary. Again, using the SQL Server as an example, once the SQL instance is established, a web application that uses the databases in the instance may point directly at the server. In that case, an SPN is not required, because there is no confusion about where the authentication is going to take place or where the service is located. However, in some cases you do not reference the SQL Server by direct name.

Another time that you may need to configure SPNs through the use of SetSPN is when using Kerberos to connect to a web application. In many cases, web applications running on IIS 7.5 will be using Kernel Mode authentication and will not require the use of SPNs to authenticate properly. But not all use cases can take advantage of Kernel Mode Authentication: SharePoint 2010 is an example of a web application that does not support Kernel Mode Authentication, even when running on IIS 7.5.

There are more use cases published by Microsoft that provide examples of when you will need to set a Service Principal Name with SetSPN.

Making it Simple:

There are a lot of articles out there on setting up Kerberos Service Principal Names but today I’m going to make it simple. Bear with me as I start off with the basics; by the end of the post it will all be very clear.

Throughout this post I’ll make reference to a scenario of a client computer connecting to an SQL server called however the same applies for any service, for example a web server where the client connects via HTTP.

The SQL server service is running under a domain service account called “domain\SQLSVC“. No SPNs have been set yet.

The Basics

Active directory user and computer accounts are objects in the active directory database. These objects have attributes. Attributes like Name and Description.

Computer and User accounts are actually very similar in the way they operate on a Windows domain and they both share an attribute called ServicePrincipalName. An account object can have multiple ServicePrincipalName attributes defined.

The setspn.exe tool manipulates this attribute. That’s all it does.

The Failure

The client wants to access the SQL server so he asks his domain controller: “Please may I have a ticket for accessing MSSQLSvc/”

Now the domain controller asks the active directory database: “Give me the name of the account object whose ServicePrincipalName is MSSQLSvc/

The active directory database replies: “Sorry, there are no account objects with that ServicePrincipalName”

So the domain controller asks the active directory database again: “Ok then, give me the account object whose ServicePrincipalName is HOST/

All computer accounts have, by default, ServicePrincipalName attributes set to:
HOST/[computername] and HOST/[computername].[domain]

So the active directory database replies to the domain controller: “The account object that has that ServicePrincipalName is’s computer account

The domain controller now creates a ticket that only the computer account of can read. He gives the ticket to the client.

The client goes to the SQL service on and says “here is my ticket, may I come in?”

The SQL service will attempt to read the ticket. The problem is, the SQL service is not running under the computer account; it is running under a domain service account. It cannot read the ticket; the ticket is only intended for the computer account. Authentication fails (and falls back to NTLM).

The Fix

Now let’s run the setspn.exe tool to manipulate the ServicePrincipalName attribute of the SQL service account.

We will also add sql1 (without the domain name) in case we want to access the server without the domain name appended.
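A sketch of the two commands; contoso.com and the default SQL port 1433 are placeholder values, and -s checks for duplicates before adding:

```batch
setspn -s MSSQLSvc/sql1.contoso.com:1433 contoso\SQLSVC
setspn -s MSSQLSvc/sql1:1433 contoso\SQLSVC
```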

Now run  through the scenario again and this time notice that the domain controller will return a ticket that the SQL server service account can read.

Obviously this is heavily paraphrased but hopefully it helps you understand the reason for setting the SPN attribute on the account that runs a given service.  Of course if the service runs under the local NetworkService or LocalSystem account then everything will just work because these local accounts represent the computer account in active directory.

SetSPN.exe Switches and Syntax

You may have noticed the “-a” switch used on the previous examples. SetSPN can be used with no switch, but then it doesn’t set an SPN, it displays them.

SetSPN contoso\SQLService_SCCM

This example displays all SPNs that have been set on the SQL service account. Here are the most common switches used with SetSPN:

-a    Add an entry to an account (explicitly)
-s    Add an entry to an account (only after checking for duplicates first)
-d    Delete an entry from an account
-x    Search the domain for duplicate SPNs
-q    Query the domain for a specific SPN

There are also a few switches that specify whether an account is a computer or user (-c and –u), but if you omit those you’re likely all right, as it will check for computers first and then check for users. If in your domain environment you have computers and users that share account names, then you will want to use the –u switch to modify user accounts.

Another way to check if SPN is working on the service account.

Open the Service Account properties and go to the Delegation tab. You should be able to see entries here if everything is set up correctly. Sometimes the SPN does not show up here even though a command prompt query says it’s there.

To make sure the SPNs show up:

Click Add on the window above. The Add Services dialog box shows up.

Click Users or Computers and add the same user account. (I know this sounds silly, because you are adding the account to itself.) But this way it can see the SPNs that are registered to the account and will show the available services automatically.

Once the services show up, click Select All and OK.

The services show up correctly to be used for Kerberos Only authentication.


Important things to know:

SPNs should be unique within the domain. If you set an AD account to have an SPN, do not set it on another account. This goes for the SPN being set on multiple computers, multiple users; it will also not function properly if there is both a user and a computer account that have the same SPN.

You can search for SPNs in the domain by using the –q switch. This will tell you if there is already an account that is using that SPN. For example:

And if you need to troubleshoot a problem with an SPN, a good place to start is by verifying that there are no duplicate entries:
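For example (the SPN shown is a placeholder):

```batch
REM Query the domain for a specific SPN
setspn -q MSSQLSvc/sql1.contoso.com:1433

REM Search the domain for duplicate SPNs
setspn -x
```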


Map a network drive using PowerShell

Make sure you are using the latest version of PowerShell. On Windows 8/10 run it as administrator and type the following:
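For example (the share path is a placeholder):

```powershell
New-PSDrive -Name "Z" -PSProvider "FileSystem" -Root "\\ServerName\Share" -Persist
```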


Z – is the Drive Letter

Within ” ” is the path of the network share that will be presented as the root of the drive letter Z

The -Persist parameter ensures that you can not only see the name of your new drive in Windows Explorer, but also know it’s still there the next time you log on.

-Name <String>
Specifies a name for the new drive. For persistent mapped network drives, type a drive letter. For temporary drives, you are not limited to drive letters.
Required? true
Position? 1

-PSProvider <String>
Specifies the Windows PowerShell provider, for example, FileSystem or Registry.
Required? true
Position? 2

-Root <String>
Specifies the data store location, for example, \\Server\Drivers, or a registry key such as HKLM:\Software\Microsoft\Windows NT\CurrentVersion.
Required? true
Position? 3

Speed up Active Directory & DNS replication between Sites

Using the standard GUI Microsoft Management Consoles, it is not possible to make Active Directory replication truly fast; the best you can achieve through the administrator consoles is to bring replication between domain controllers down to 15 minutes. These large time values were built into Active Directory at version 1 because inter-site connections in that era of computing and networking had much lower bandwidth, with frame relay or 56k circuits being the most common. Since then, inter-site connections and Internet speeds have increased tremendously, so faster domain controller replication is possible even over WAN links.

Fast Intersite Replication Interval – Speed up DC Replication, Updates are in Seconds

To enable faster intersite replication, at nearly the speed of intra-site or LAN replication, use ADSI Edit.
Start ADSI Edit and go to
Configuration > Sites > Inter-Site Transports > IP.
Note this setting cannot be enabled for SMTP intersite links.
Unless it has been renamed, right-click on the default intersite link and choose Properties. Then scroll down to the options line, double-click it, and change the value to 1.
<not set> is the default unless this option has been previously modified. Once changed to 1, click OK twice to save and close the properties window.
Force a replication using Sites and Services so this setting gets pushed/pulled to the other domain controllers.
Test by creating a couple of test accounts in AD.
Check your other domain controller or controllers for the new account. You will see it appear in seconds.
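If you prefer the command line, replication can also be forced with repadmin:

```batch
REM Push changes to all partners, across all partitions and site links
repadmin /syncall /AdeP
```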

Mohammed Wasay

Dallas based Design Technologist & Hybrid Developer

Secured By miniOrange