Sunday, December 9, 2018



Top eight VMware vSphere backup best practices


Ease your VMware vSphere backup burden with these eight vSphere
backup best practices.


What you will learn in this tip: The architecture and operation of a virtual environment differ greatly from those of a traditional backup environment and demand specific data backup techniques. In this tip, you'll learn the top VMware vSphere backup best practices.
When it comes to backing up virtual machines in VMware vSphere, you need to leverage the strengths of virtualization to maximize your backup efficiency. You also need to know what to back up as well as how to back it up. In addition, you can't use the same principles that you use in a traditional environment to back up a virtual environment. The following are eight vSphere backup best practices.

Don't back up virtual machines at the guest OS layer

With traditional servers you typically install a backup agent on the guest operating system that the backup server contacts when it needs to back up the data on the server. But this method isn't efficient in a virtual environment because it causes unnecessary resource consumption on the virtual machine (VM), which can impact its performance as well as the performance of other VMs running on the host. You should instead back up at the virtualization layer; this means using image-level backups that copy the large .vmdk file without involving the guest OS. To do this, you must use a backup application designed to work with virtualization, one that can back up the VM's virtual disk directly without involving the guest OS or the host. This eliminates the resource consumption that normally occurs when backing up a VM at the guest OS layer and ensures your VMs keep all of their resources for their workloads.

Leverage the vStorage APIs

The vStorage APIs were introduced with vSphere as a replacement for the VMware Consolidated Backup (VCB) framework that was released with VI3 to help offload backup processing from the host. Not only do they allow for easier access to a VM's virtual disk file, but they also contain features that can improve backup speeds, such as Changed Block Tracking (CBT). CBT keeps track of any blocks that have changed since the last backup, so a backup application can simply query the VMkernel for that information. This quick operation means the backup application no longer needs to track changes itself, which allows for much quicker incremental backups. The vStorage APIs provide a much more efficient mechanism for backing up VMs, and you should use backup applications that take full advantage of them.
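As an illustrative sketch only -- the article doesn't prescribe a tool, and the VM name here is hypothetical -- CBT can be enabled per VM through its advanced configuration, for example with the open-source govc CLI while the VM is powered off. Backup products that use the vStorage APIs typically toggle this setting for you:
govc vm.change -vm db01 -e "ctkEnabled=true" -e "scsi0:0.ctkEnabled=true"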

Know how quiescing and VSS work

If you're backing up VMs that host transactional applications like database and email servers, it's critical that you quiesce them so they are in the proper state to be backed up. This is called an application-consistent backup: before the backup begins, applications are paused so any outstanding writes and transactions can be committed to disk. This ensures the server is in a proper state so no data is lost if a restore is needed. This type of quiescing only works with applications that specifically support being told to pause and write pending data when necessary. VMware Tools contains a driver that works with Microsoft Volume Shadow Copy Service (VSS) to quiesce applications before they are backed up. This VMware Tools driver hasn't always supported all Windows operating systems, so many vendors have come up with their own drivers instead. Therefore, you should make sure you are using a supported VMware Tools driver or have installed the vendor-supplied driver on your VMs. Also make sure that the VSS service isn't disabled and that everything is configured properly to perform an application-consistent backup.
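A quick way to confirm VSS health inside a Windows guest before a backup runs (a general Windows check, not specific to any backup product) is to list the VSS writers; each writer should report a state of Stable with no last error:
vssadmin list writers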

Don't skimp on backup resources

To ensure that you have the shortest backup windows possible, make sure you have adequate hardware for your backup server so it doesn't become a bottleneck when performing backups. While having adequate network bandwidth is critical, having enough CPU and memory resources is, too. Your backup server is doing more than just moving data from a source server to a target storage device; it's also doing things like data deduplication and compression to help reduce the size of backups. These processes require a lot of CPU and memory to keep up with all the data flowing through the server. Make sure you follow the backup vendor's hardware recommendations for the backup server. This is one area where it can't hurt to give it more than it needs -- backups can slow down dramatically if the backup server does not have adequate resources.

Snapshots are not backups

Virtual machine snapshots should never be used as a primary backup means. Snapshots are OK for short-term, ad hoc backups of VMs, but they come with penalties. When a snapshot is created, all writes to the VM's disk file are deflected to a new delta disk file, and the original disk becomes read-only. The delta disk file grows in 16 MB increments as data is written to it, and each growth increment causes a lock on the LUN that it resides on, which can degrade performance. The more snapshots you have running, the greater the impact on the performance of all the VMs running on that LUN. Snapshots also take up additional disk space on your datastores -- each one can grow up to the size of the original disk. If you run out of disk space on a datastore, all of its VMs will shut down. Merging snapshot data back into the original disk when you delete a snapshot is also a heavy I/O operation that can affect the performance of the VM. In addition, because snapshots create new virtual disks that link back to the original, some features may become unavailable, and problems can arise with the mapping between the original disk and its snapshots. As a result, use snapshots sparingly and delete them as soon as you no longer need them.

Schedule backups carefully

Backups in a virtual environment can strain resources because of the shared virtualization architecture. As a result, you should plan your backup schedule to avoid putting too much concentrated stress on a single resource. For example, don't back up too many VMs on the same host or the same LUN concurrently; try to balance your backup schedule to even out resource usage so no single resource gets overutilized. If you don't, your backups may slow down and also degrade the performance of your VMs.

Know your Fault Tolerance backup alternatives

Almost all virtualization backup solutions that use image-level backups rely on VM snapshots to stop writes to the virtual disk while backups are running. The VMware Fault Tolerance (FT) feature uses two VMs, a primary and a secondary located on separate hosts, that both share the same virtual disk file. Currently, the Fault Tolerance feature doesn't support VM snapshots, which can make backing up FT-enabled VMs a challenge. To get around this limitation, you need to look at alternatives for backing up the VM. One is to temporarily disable the FT feature while the backup is running, which allows snapshots to be taken. Disabling preserves the secondary VM, and FT can easily be enabled again once the backup completes. It is possible to automate this with PowerShell, and with pre- and post-backup scripts you can automate the whole process. Another method is to create another copy of the VM by cloning it, either through vCenter Server or using vCenter Converter; the clone can be backed up and deleted afterwards. You can also use storage-level snapshots or back up the VM using an agent installed inside the OS.

Don't forget to back up host and vCenter Server configs

If you ever lose a host or vCenter Server, you can easily rebuild them, but you lose all your configuration information. Therefore, it's a good idea to periodically back up the information. When you back up a host, you're typically only backing up the VMs and not any of the files that reside in the host's management console. While you shouldn't back up the files inside the management console, you should back up the configuration information to make it easier to rebuild a host.
For ESX hosts, you can use the esxcfg-info Service Console command, which will output a ton of configuration information into a text file. For ESXi hosts, you can use the vicfg-cfgbackup command that's part of the vSphere CLI to output configuration information to a text file. For ESX hosts you cannot restore the information from the esxcfg-info output, but at least you will know what you need to reconfigure. For ESXi hosts, you can also use the vicfg-cfgbackup command to restore the configuration to a host. For vCenter Servers, it's critical to back up the database it uses, which contains all the configuration information that is unique to vCenter Server. This includes configuration information on clusters, resource pools, permissions, alarms, performance data and much more. With a good database backup you can simply reinstall vCenter Server, point it at the database, and you'll be back up and running. Also make sure you back up the vCenter Server SSL certificate folder located in the vCenter Server data directory. This contains the SSL certificates used to communicate securely with ESX and ESXi hosts as well as clients.
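As a rough illustration (the host name and paths here are hypothetical), the save and restore operations look like this when run from a machine with the vSphere CLI installed:
vicfg-cfgbackup --server esxi01.example.com --username root -s /backups/esxi01.bak
vicfg-cfgbackup --server esxi01.example.com --username root -l /backups/esxi01.bak
On an ESX host, the configuration dump is simply redirected to a file:
esxcfg-info > /tmp/esx01-config.txt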
As you can see, although VMware has eased virtual machine backup in the latest version of vSphere, there are still a number of things to watch out for. These eight vSphere backup tips will help make vSphere backup a little less painful.

Monday, September 17, 2018

Linux audit files to see who made changes to a file



How do I audit file events such as read/write? How can I use audit to see who changed a file in Linux?
The answer is to use the 2.6 kernel's audit system. Modern Linux kernels (2.6.x) come with the auditd daemon, which is responsible for writing audit records to the disk. During startup, the rules in /etc/audit.rules are read by this daemon. You can open the /etc/audit.rules file and make changes such as setting the audit log file location and other options. The default file is good enough to get started with auditd.
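For example, to make a watch persistent across reboots, you can place rules in /etc/audit.rules using the same syntax that auditctl accepts on the command line (a small illustration; the keys are arbitrary strings):
-w /etc/passwd -p wa -k password-file
-w /etc/shadow -p wa -k shadow-file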
In order to use the audit facility you need the following utilities:
=> auditctl – a command to assist in controlling the kernel's audit system. You can get status, and add or delete rules in the kernel audit system. Setting a watch on a file is accomplished with this command.
=> ausearch – a command that can query the audit daemon logs for events based on different search criteria.
=> aureport – a tool that produces summary reports of the audit system logs.
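For instance, once some events have been recorded, aureport can summarize them (see the aureport man page for the full option list); the first command below summarizes file-access events with numeric fields interpreted, and the second reports authentication attempts:
# aureport -f -i --summary
# aureport -au -i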
Note that all of the following instructions were tested on CentOS 4.x, Fedora Core and RHEL 4/5 Linux.

Task: install audit package

The audit package contains the user-space utilities for storing and searching the audit records generated by the audit subsystem in the Linux 2.6 kernel. CentOS/Red Hat and Fedora Core include the audit RPM package. Use the yum or up2date command to install the package:
# yum install audit
or
# up2date install audit
Auto start auditd service on boot
# ntsysv
OR
# chkconfig auditd on
Now start service:
# /etc/init.d/auditd start

How do I set a watch on a file for auditing?

Let us say you would like to audit the /etc/passwd file. You need to type the command as follows:
# auditctl -w /etc/passwd -p war -k password-file
Where,
  • -w /etc/passwd : Insert a watch for the file system object at given path i.e. watch file called /etc/passwd
  • -p war : Set permissions filter for a file system watch. It can be r for read, w for write, x for execute, a for append.
  • -k password-file : Set a filter key on a /etc/passwd file (watch). The password-file is a filterkey (string of text that can be up to 31 bytes long). It can uniquely identify the audit records produced by the watch. You need to use password-file string or phrase while searching audit logs.
In short, you are monitoring (read: watching) the /etc/passwd file for anyone (including a syscall) that may perform a write, append or read operation on it.
Wait for some time or as a normal user run command as follows:
$ grep 'something' /etc/passwd
$ vi /etc/passwd
Following are more examples:

File System audit rules

Add a watch on “/etc/shadow” with the arbitrary filterkey “shadow-file” that generates records for “reads, writes, executes, and appends” on “shadow”
# auditctl -w /etc/shadow -k shadow-file -p rwxa

Syscall audit rule

The next rule suppresses auditing for mount syscall exits
# auditctl -a exit,never -S mount

File System audit rule

Add a watch on "/tmp" with the filterkey "webserver-watch-tmp" that generates records for "executes" on "/tmp" (useful for a webserver):
# auditctl -w /tmp -p x -k webserver-watch-tmp

Syscall audit rule using PID

To see all syscalls made by a program called sshd (pid 1005):
# auditctl -a entry,always -S all -F pid=1005

How do I find out who changed or accessed a file /etc/passwd?

Use ausearch command as follows:
# ausearch -f /etc/passwd
OR
# ausearch -f /etc/passwd | less
OR
# ausearch -f /etc/passwd -i | less
Where,
  • -f /etc/passwd : Only search for this file
  • -i : Interpret numeric entities into text. For example, uid is converted to account name.
Output:
----
type=PATH msg=audit(03/16/2007 14:52:59.985:55) : name=/etc/passwd flags=follow,open inode=23087346 dev=08:02 mode=file,644 ouid=root ogid=root rdev=00:00
type=CWD msg=audit(03/16/2007 14:52:59.985:55) :  cwd=/webroot/home/lighttpd
type=FS_INODE msg=audit(03/16/2007 14:52:59.985:55) : inode=23087346 inode_uid=root inode_gid=root inode_dev=08:02 inode_rdev=00:00
type=FS_WATCH msg=audit(03/16/2007 14:52:59.985:55) : watch_inode=23087346 watch=passwd filterkey=password-file perm=read,write,append perm_mask=read
type=SYSCALL msg=audit(03/16/2007 14:52:59.985:55) : arch=x86_64 syscall=open success=yes exit=3 a0=7fbffffcb4 a1=0 a2=2 a3=6171d0 items=1 pid=12551 auid=unknown(4294967295) uid=lighttpd gid=lighttpd euid=lighttpd suid=lighttpd fsuid=lighttpd egid=lighttpd sgid=lighttpd fsgid=lighttpd comm=grep exe=/bin/grep
Let us try to understand output
  • audit(03/16/2007 14:52:59.985:55) : Audit log time
  • uid=lighttpd gid=lighttpd : User IDs in numerical format. By passing the -i option to the command you can convert most numeric data to a human-readable format. In our example, the user lighttpd used the grep command to open the file
  • exe=”/bin/grep” : Command grep used to access /etc/passwd file
  • perm_mask=read : File was open for read operation
So from the log files you can clearly see who read the file using grep or made changes to it using the vi/vim text editor. The log provides tons of other information; you need to read the man pages and documentation to understand the raw log format.

Other useful examples

Search for events with date and time stamps. If the date is omitted, today is assumed; if the time is omitted, now is assumed. Use 24-hour clock time rather than AM or PM to specify time. An example date is 10/24/05. An example of time is 18:00:00.
# ausearch -ts today -k password-file
# ausearch -ts 3/12/07 -k password-file
Search for an event matching the given executable name using the -x option. For example, find out who has accessed /etc/passwd using the rm command:
# ausearch -ts today -k password-file -x rm
# ausearch -ts 3/12/07 -k password-file -x rm
Search for an event with the given user ID (UID). For example, find out if user vivek (uid 506) tried to open /etc/passwd:
# ausearch -ts today -k password-file -x rm -ui 506
# ausearch -k password-file -ui 506
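A few auditctl options are also handy when managing the watches themselves: -l lists the active rules, -W removes a watch (mirroring the -w that created it), and -D deletes all rules:
# auditctl -l
# auditctl -W /etc/passwd -p war -k password-file
# auditctl -D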

Thursday, August 23, 2018

Airplane mode turns itself on (DESKTOP PC) - HOW TO FIX IT!

To fix the issue, we suggest that you follow the methods below:
Method 1: Modify the Network Adapter Properties
1. Use Windows shortcut keys Win + X (or right click the Start menu) and select Device Manager from the menu.
2. Expand Network adapters, right click the network adapter of your device and select Properties.
3. Select Power Management tab from the pop-up dialog box and uncheck the item Allow the computer to turn off this device to save power.
4. Click OK to save the changes.
Method 2: Disable Non-Microsoft Services
Some programs or services can affect the Windows 10 airplane mode. You could try to disable some non-Microsoft services to fix the issue. If you happen to know the program, disable it directly. If not, it may take some time to figure it out.
1. Use Windows shortcut keys Win + R to launch Run.
2. Type msconfig into the box and press Enter.
3. Select the Services tab, check Hide all Microsoft services and click the Disable all button. Then click Apply.
4. Select the Startup tab, click Open Task Manager.
5. Select Startup tab from the new pop-up dialog and disable all the startup items.
You need to reboot the computer and re-enable the services you’ve disabled one by one to find out the problematic services or programs that result in the Windows 10 airplane mode error. Once you figure them out, disable them again.
Method 3: Update or Reinstall the Network Adapter Drivers
Problematic drivers can bring about Windows 10 airplane mode errors and other Windows 10 issues, so it is necessary to fix the network adapter drivers. In Device Manager, right click your network adapter and select Update driver; if that doesn't help, uninstall the device and restart the computer so that Windows reinstalls the driver automatically.

Monday, July 16, 2018

How to skip domain joining during client deployment in a Windows Server 2012 Essentials network 


  • This post describes a temporary solution that allows client computers to connect to Windows Server 2012 Essentials without joining the Windows Server 2012 Essentials domain. Please read the following notes carefully before you take any action. (This also applies to Windows Server 2016 Essentials.)
    Description
    When deploying Pro/Enterprise/Ultimate Windows client computers in a Windows Server 2012 Essentials network, joining the Windows Server 2012 Essentials domain is mandatory. If the client computer is already joined to another domain, you are required to manually leave the existing domain; otherwise, the client deployment process will be blocked.
    Currently we have received requests from customers asking for the option to skip domain joining in a client deployment. As a result, in this article we provide a solution so that the client can connect to the server and utilize the majority of client features without joining the domain.
    Before you take any action, please read the following note. 
    Note:  
    If you skip joining the domain, the following areas will be impacted:
      • All features that require that you be joined to the domain will not be available, including domain credentials, Group Policy, and VPN.
      • Any third-party add-ons and applications that require that you join the domain will not work properly.
      • Skipping domain joining in an off-premises client deployment is not supported.
      • This solution is only supported on the following Windows client versions:  
        • Windows 7 Professional
        • Windows 7 Enterprise
        • Windows 7 Ultimate  
        • Windows 8 Pro
        • Windows 8 Enterprise
    To skip joining the domain during a client deployment
    1. On your client computer, go to Start and search for "cmd".
    2. In the search results, find cmd.exe and run it as administrator.
    3. Type the following command:
      reg add "HKLM\SOFTWARE\Microsoft\Windows Server\ClientDeployment" /v SkipDomainJoin /t REG_DWORD /d 1
    4. Complete the steps on the Connect Computers to the Server Help topic.  
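    To verify that the value was written before you run the connector (a quick optional check; reg query is a standard Windows command, and this step is not part of the original instructions):
      reg query "HKLM\SOFTWARE\Microsoft\Windows Server\ClientDeployment" /v SkipDomainJoin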

Friday, March 9, 2018

How To Migrate Linux Servers Part 3 - Final Steps


Introduction

There are many scenarios where you might have to move your data and operating requirements from one server to another. You may need to implement your solutions in a new datacenter, upgrade to a larger machine, or transition to new hardware or a new VPS provider.
Whatever your reasons, there are many different considerations you should make when migrating from one system to another. Getting functionally equivalent configurations can be difficult if you are not operating with a configuration management solution such as Chef, Puppet, or Ansible. You need to not only transfer data, but also configure your services to operate in the same way on a new machine.
In our last article, we covered how to transfer data with rsync and migrate your database. We will continue our migration in this article by migrating users, groups, mail, crontabs, and other settings.

Migrate Users and Groups

Although your primary concern may be for your services and programs, we need to pay attention to users and groups as well.
Most services that need specific users to operate will create these users and groups at installation. However, this still leaves users and groups that have been created manually or through other methods.
Luckily, all of the information for users and groups is contained within a few files. The main files we need to look at are:
  • /etc/passwd: This file defines our users and basic attributes. Despite its name, this file no longer contains any password information. Instead, it focuses on username, user and primary group numbers, home directories, and default shells.
  • /etc/shadow: This file contains the actual information about passwords for each user. It should contain a line for each of the users defined in the passwd file, along with a hash of their password and some information about password policies.
  • /etc/group: This file defines each group available on your system. Basically, this just contains the group name and the associated group number, along with any usernames that use this as a supplementary group.
  • /etc/gshadow: This file contains a line for each group on the system. It basically lists the group, a password that can be used by non-group members to access the group, a list of administrators, and a list of members.
While it may seem like a good idea to just copy these files directly from the source system onto the new system, this can cause complications and is not recommended.
One of the main issues that can come up is conflicting group and user id numbers. If software that creates its own users and groups is installed in a different order between the systems, the user and group numbers can be different, causing conflicts.
It is instead better to leave the majority of these files alone and only adjust the values that we need. We can do this in a number of ways.

Creating Migration Files

Regardless of the method we'd like to use to add users to our new system, we should generate a list of the users, groups, etc. that should be transferred and added.
A method that has been floating around the internet for a while is described below:
We will create a file associated with each of the above files that we need to modify. They will contain all of the appropriate transfer information.
First, figure out what the ID limit between regular and system users is on your machine. This is typically either 500 or 1000 depending on your system. If you have a regular user, an easy way to find out is to inspect the /etc/passwd file and see where the regular user accounts start:
less /etc/passwd
Afterwards, we can use this number (the first regular user ID number, in the 3rd column) to set the limit on our command. We won't be exporting users or groups below this limit. We will also exclude the "nobody" account that is given the user ID of "65534".
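On most modern distributions you can also read this threshold directly from /etc/login.defs instead of eyeballing the passwd file:
grep '^UID_MIN' /etc/login.defs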
We can create a sync file for our /etc/passwd file by typing this. Substitute the limit# with the lowest regular user number you discovered in the /etc/passwd file:
awk -v LIMIT=limit# -F: '($3>=LIMIT) && ($3!=65534)' /etc/passwd > /root/passwd.sync
Afterwards, we can do a similar thing to make a group sync file:
awk -v LIMIT=limit# -F: '($3>=LIMIT) && ($3!=65534)' /etc/group > /root/group.sync
We can use the usernames within the range we're interested in from our /etc/passwd file to get the values we want from our shadow file:
awk -v LIMIT=limit# -F: '($3>=LIMIT) && ($3!=65534) {print $1}' /etc/passwd | tee - | egrep -f - /etc/shadow > /root/shadow.sync
For the /etc/gshadow file, we'll do a similar operation:
awk -v LIMIT=limit# -F: '($3>=LIMIT) && ($3!=65534) {print $1}' /etc/group | tee - | egrep -f - /etc/gshadow > /root/gshadow.sync
Once we know the commands we want to run, we can add them to our script after a regular SSH command and then rsync the results off, like this (note that the awk field references are escaped with backslashes so the local shell doesn't expand them before they reach the remote host):
ssh 111.222.333.444 "awk -v LIMIT=limit# -F: '(\$3>=LIMIT) && (\$3!=65534)' /etc/passwd > /root/passwd.sync"
ssh 111.222.333.444 "awk -v LIMIT=limit# -F: '(\$3>=LIMIT) && (\$3!=65534)' /etc/group > /root/group.sync"
ssh 111.222.333.444 "awk -v LIMIT=limit# -F: '(\$3>=LIMIT) && (\$3!=65534) {print \$1}' /etc/passwd | tee - | egrep -f - /etc/shadow > /root/shadow.sync"
ssh 111.222.333.444 "awk -v LIMIT=limit# -F: '(\$3>=LIMIT) && (\$3!=65534) {print \$1}' /etc/group | tee - | egrep -f - /etc/gshadow > /root/gshadow.sync"
rsync 111.222.333.444:/root/passwd.sync /root/
rsync 111.222.333.444:/root/group.sync /root/
rsync 111.222.333.444:/root/shadow.sync /root/
rsync 111.222.333.444:/root/gshadow.sync /root/

Manually Add Users

If we want to just leave a comment in our script file and do this step manually, the vipw and vigr commands are recommended, because they lock the files while editing and guard against corruption. You can edit the files manually by typing:
vipw
Passing the -s flag edits the associated shadow file, and passing the -g flag edits the group file.
You may be tempted to just add the lines from the files directly onto the end of the associated file on the new system like this:
cat /root/passwd.sync >> /etc/passwd
If you choose to go this route, you must be aware that there can be ID conflicts if the ID is already taken by another user on the new system.
You can also add each username using the available tools on the system after getting a list from the source computer. The useradd command allows you to quickly create user accounts to match the source computer (note that -p expects an already-encrypted password, not a plain-text one):
useradd -s /path/to/shell -m -d /home/username -p encrypted_password -G supplementary_groups
You can use the *.sync files for reference and add them in this way.
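If you would rather loop over the sync file than type each account by hand, a minimal sketch is below; it deliberately omits -u so the new system assigns fresh IDs, in line with the conflict warning earlier, and it assumes the fields in your passwd.sync file are intact:
# let the system assign new UIDs/GIDs; fields come from our passwd.sync file
while IFS=: read -r user pass uid gid gecos home shell; do
    useradd -m -d "$home" -s "$shell" -c "$gecos" "$user"
done < /root/passwd.sync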

Automatically Add Users

If we instead want to script the user and group additions within our file, we can easily do that too. We'll want to comment these out after the first successful run, though, because the script would otherwise attempt to create the users and groups multiple times.
There is a command called newusers that can bulk-add users from a file. This is perfect for us, but we want to modify our files first to remove the user and group IDs; the command will then assign the next available IDs on the new system.
We can strip the group and user IDs from the passwd file like this:
awk 'BEGIN { OFS=FS=":"; } {$3=""; $4=""; } { print; }' /root/passwd.sync > /root/passwd.sync.mod
We can apply this new modified file like this:
newusers /root/passwd.sync.mod
This will add all of the users from the file to the local /etc/passwd file. It will also create the associated user group automatically. You will have to manually add any additional groups that aren't associated with a user to the /etc/group file. Use your migration files to edit the appropriate files.
For the /etc/shadow file, you can copy the second column from your shadow.sync file into the second column of the associated account in the new system. This will transfer the passwords for your accounts to the new system.
You can attempt to script these changes, but this may be one case where it is easier to do it by hand. Remember to comment out any user or group lines after the users and groups are configured.
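If you do decide to script it, one cautious sketch is to feed the saved hashes to usermod, which expects an already-encrypted value for -p:
# re-apply the saved password hashes from our shadow.sync file
while IFS=: read -r user hash _; do
    usermod -p "$hash" "$user"
done < /root/shadow.sync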

Transfer Mail and Jobs to New System

Now that your users have been transferred from the old system and their home directories have been populated by the rsync commands that have been running, you can migrate each user's mail as well. We want to replicate the cron jobs too.
We can begin by doing another rsync command for the spool directory. Within the spool directory on our source system, we can usually see some important files:
ls /var/spool
anacron   cron   mail   plymouth   rsyslog
We want to transfer the mail directory to our target server, so we can add an rsync line that looks like this to our migration script:
rsync -avz --progress 111.222.333.444:/var/spool/mail/* /var/spool/mail/
Another directory within /var/spool that we want to pay attention to is the cron directory. This directory keeps cron and at jobs, which are used for scheduling. The crontabs directory within it contains each individual user's crontab, which is used to schedule that user's jobs.
We want to preserve the automated tasks that our users have assigned. We can do this with yet another rsync command:
rsync -avz --progress 111.222.333.444:/var/spool/cron/crontabs/* /var/spool/cron/crontabs/
This will get the individual users' crontabs onto our new system. However, there are other crontabs that we need to move. Within the /etc directory, there is a crontab file and a number of other directories that contain cron information.
ls /etc | grep cron
anacrontab
cron.d
cron.daily
cron.hourly
cron.monthly
crontab
cron.weekly
The crontab file contains system-wide cron details. The other items are directories that contain other cron information. Look into them and decide if they contain any information you need.
Once again, use rsync to transfer the relevant cron information to the new system.
rsync -avz --progress 111.222.333.444:/etc/crontab /etc/crontab
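If directories like cron.d or cron.daily contain jobs you need, they can be pulled over the same way; for example (adjust the list to match what you actually found above):
rsync -avz --progress 111.222.333.444:/etc/cron.d/ /etc/cron.d/
rsync -avz --progress 111.222.333.444:/etc/cron.daily/ /etc/cron.daily/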
Once you have your cron information on your new system, you should verify that it works. This is a manual step, so you'll have to do this at the end.
The only way of doing this correctly is to log in as each individual user and run the commands in that user's crontab manually. This will make sure that there are no permission issues or missing file paths that would cause these commands to fail silently when run automatically.
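To see at a glance what you will be walking through, you can first list every user's crontab; the path below matches the Debian-style layout shown earlier, while on RHEL-type systems the spool is /var/spool/cron:
for f in /var/spool/cron/crontabs/*; do
    echo "== crontab for $(basename "$f") =="
    crontab -l -u "$(basename "$f")"
done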

Restart Services

At the end of your migration script, you should make sure that all of the appropriate services are restarted, reloaded, flushed, etc. You need to do this using whatever mechanisms are appropriate for the operating system that you are using.
For instance, if we're migrating a LAMP stack on Ubuntu, we can restart the important processes by typing:
service mysql restart
service apache2 restart
service php5-fpm restart
You can add these to the end of your migration script as-is, and they should operate as expected.
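On newer distributions that have moved to systemd, the equivalent restarts would look like the following; the service names vary by distribution and stack, so treat these as placeholders:
systemctl restart mysql
systemctl restart apache2
systemctl restart php5-fpm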

Test Sites and Services

After you have finished your migration script and run it with all of the syncing and modifications, and performed all of the necessary manual steps, you should test out your new system.
There are quite a few areas that you'll want to check. Pay attention to any associated log files as you're testing to see if any issues come up.
First, you'll want to check the directory sizes after you've transferred. For instance, if you have a /data partition that you've rsynced, you will want to go to that directory on both the source and target computers and run the du command:
cd /data
du -hs
471M    .
Verify that the sizes are close to the same. There might be slight differences between the original and the new system, but they should be minor. If there is a large disparity, you should investigate why.
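One way to pin down a disparity is an rsync dry run (-n), which lists the files that still differ without copying anything:
rsync -avzn 111.222.333.444:/data/ /data/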
Next, you can check the processes that are running on each machine. You can do this by looking for important information in the ps output:
ps auxw
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0  27024  2844 ?        Ss   Feb26   0:00 /sbin/init
root         2  0.0  0.0      0     0 ?        S    Feb26   0:00 [kthreadd]
root         3  0.0  0.0      0     0 ?        S    Feb26   0:00 [ksoftirqd/0]
root         4  0.0  0.0      0     0 ?        S    Feb26   0:00 [kworker/0:0]
root         5  0.0  0.0      0     0 ?        S<   Feb26   0:00 [kworker/0:0H]
. . .
You can also replicate some of the checks that you did initially on the source machine to see if you have emulated the environment on the new machine:
netstat -nlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.1.1:53            0.0.0.0:*               LISTEN      1564/dnsmasq    
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      2886/cupsd      
tcp        0      0 0.0.0.0:445             0.0.0.0:*               LISTEN      752/smbd        
tcp        0      0 0.0.0.0:139             0.0.0.0:*               LISTEN      752/
. . .
Again, another option is:
lsof -nPi
COMMAND     PID        USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
smbd        752        root   26u  IPv6    9705      0t0  TCP *:445 (LISTEN)
smbd        752        root   27u  IPv6    9706      0t0  TCP *:139 (LISTEN)
smbd        752        root   28u  IPv4    9707      0t0  TCP *:445 (LISTEN)
smbd        752        root   29u  IPv4    9708      0t0  TCP *:139 (LISTEN)
. . .
You should go through the package versions of your important services, as we did in the first article, to verify that the versions of important packages match. The way to do this will be system dependent.
If you transferred a web server or a LAMP stack, you should definitely test your sites on the new server.
You can do this easily by modifying your hosts file (on your local computer) to point to your new server instead of the old one. You can then test to see if your server accepts requests correctly and that all of the components are operating together in the correct way.
The way that you modify your local hosts file differs depending on the operating system you are using. If you are using a *nix-based operating system like OS X or Linux, you can modify the hosts file on your local system like this:
sudo nano /etc/hosts
Inside, you need to add an entry to point your domain name to the IP address of your new server, so that your computer intercepts the request and routes it to the new location for testing.
The lines you can add may look something like this:
111.222.333.444     www.domain.com
111.222.333.444     domain.com
Add any subdomains that are used throughout your site configuration as well (images.domain.com, files.domain.com, etc.). Once you have added the host lines, save and close the file.
If you are on OS X, you may need to flush your DNS cache for your computer to see the new content. On older versions (10.4 and earlier) that is done with:
sudo lookupd -flushcache
On more recent versions, use:
sudo dscacheutil -flushcache
On Linux, this should work automatically.
On Windows, you'll have to edit the C:\Windows\System32\Drivers\etc\hosts file as an administrator. Add the lines in the same fashion that we did above for the *nix versions.
After your hosts file is edited on your local workstation, you should be able to access the test server by going to your domain name. Test everything you possibly can and make sure that all of the components can communicate with each other and respond in the correct way.
After you have completed testing, remember to open the hosts file again and remove the lines you added.

Migrate Firewall Rules

Remember that you need to migrate your firewall rules to your new server. Keep in mind that, prior to loading the rules into your new server, you will want to review them for anything that needs to be updated, such as changed IP addresses or ranges.
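For an iptables-based firewall, a minimal sketch of the transfer might look like this; review and update the dump before restoring it on the target:
# on the source server: dump the active ruleset
iptables-save > /root/iptables.rules
# copy it down, review it, then load it on the new server
rsync 111.222.333.444:/root/iptables.rules /root/
iptables-restore < /root/iptables.rules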

Change DNS Settings

When you've thoroughly tested your new server, look through your migration script and make sure that no portion of it is going to be reversing modifications you've made.
Afterwards, run the script one more time to bring over the most recent data from your source server.
Once you have all of the newest data on your target server, you can modify the DNS servers for your domain to point to your new server. Make sure that every reference to the old server's IP is replaced with the new server's information.
The DNS servers will take some time to update. After all of the DNS servers have gotten your new changes, you may have to run the migration script a final time to make sure that any stray requests that were still going to your original server are transferred.
Look closely at your MySQL commands to ensure that you are not throwing away or overwriting data that has been written to either the old or new servers.

Conclusion

If all went well, your new server should now be up and running, accepting requests and handling all of the data that was on your previous server. You should continue to closely monitor the situation and keep an eye out for any anomalies that may come up.
Migrations, even when done properly, are not trivial, and many issues can come up. The best chance of successfully migrating a live server is to understand your system as well as you can before you begin. Every system is different, and each time you will have to work around new issues. Do not attempt to migrate if you do not have time to troubleshoot issues that may arise.