
XEN Cluster - VM failover


Linux - Ubuntu 8.04 Server

I am not going to go through each and every step of the Ubuntu installation as that is really straightforward. I downloaded the 8.04 Server version from http://www.ubuntu.com/getubuntu/download and started the installation on one of the nodes.

During disk partitioning, select manual mode and delete any existing partitions (if there are any).
Create one partition for your main system and make it 5GB. This partition will be used for your Dom0 and 5GB should be more than enough.
Create a second partition for your swap for Dom0. Make this 512MB as we later will configure Dom0 to use 512MB of RAM.
Create a third partition with all your remaining space (see the example layout below).
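For reference, a hypothetical end result might look like this (device names and the size of the third partition depend on your disk; only the 5GB root and 512MB swap follow from the steps above):

/dev/sda1   5GB     ext3, mounted as /        (Dom0 root)
/dev/sda2   512MB   swap                      (Dom0 swap)
/dev/sda3   rest of the disk, no filesystem   (will become the LVM volume group later)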

At the package selection step, do not select any packages; leave everything deselected.

A few minutes later you should have a basic installation of Ubuntu 8.04 Server running.
Ubuntu - Base configuration


A few things need to be done to the freshly installed Ubuntu Server before we start installing XEN.

Most of the commands need to be executed with super user rights, so we start by typing:
sudo su


I installed SSH early on so I didn't need to run everything locally on the console:
apt-get install ssh


Configure the network interfaces with static IPs:
nano /etc/network/interfaces


and configure it so it looks something like this:


auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 10.1.1.100
netmask 255.255.255.0
network 10.1.1.0
broadcast 10.1.1.255
gateway 10.1.1.1

# Secondary interface used as the link between the two nodes
auto eth1
iface eth1 inet static
address 192.168.1.100
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255

This setup uses eth0 to connect to my network and eth1 as the link between the two nodes. For node ha2, use the IP addresses 10.1.1.101 and 192.168.1.101.

Edit /etc/hosts and add both cluster nodes:
nano /etc/hosts


my file looks like this:
127.0.0.1 localhost
127.0.1.1 ha1.domain.local ha1
10.1.1.100 ha1.domain.local ha1
10.1.1.101 ha2.domain.local ha2
192.168.1.100 ha1X
192.168.1.101 ha2X


For node ha2, change the second line to: 127.0.1.1 ha2.domain.local ha2

Edit the /etc/hostname file to match your Fully Qualified Domain Name (FQDN):
nano /etc/hostname

ha1.domain.local

and for node ha2 change this to: ha2.domain.local

Reboot the machine to confirm everything is working:
reboot


You should now be able to connect to your static IP address, and your hostname should be your FQDN. Run:
hostname
hostname -f



and they should both return your FQDN, like this:
root@ha1:/# hostname
ha1.domain.local
root@ha1:/# hostname -f
ha1.domain.local

Install NTP to have your time synchronized. This is important for your VMs, especially when migrating a VM to the other node:
apt-get install ntp



Now we are done with the basic configuration of the system and can move on to the fun part: installing Xen, DRBD & Heartbeat.
Installing XEN


I will use only packages during this installation, which makes installing XEN on Ubuntu a quick and hassle-free process. I will only cover the important parts and make comments where needed; if you want a more comprehensive guide to installing XEN on Ubuntu, please check this link: http://howtoforge.com/ubuntu-8.04-server-install-xen-from-ubuntu-repositories
apt-get install ubuntu-xen-server


Now you should have Xen installed.

Even though you might not use loop devices in your XEN setup, it can be a good idea to increase the number of allowed loop devices so you don't run into trouble later if you decide to use them. Edit /etc/modules and modify the line with "loop" as below:
nano /etc/modules



loop max_loop=64

Now it is time to reboot your system so it will boot with the new xen kernel:
reboot


After reboot, run:
uname -r


to confirm your system is using the new xen kernel. It should look like this:


root@ha1:/# uname -r
2.6.24-19-xen
Configure LVM

To install LVM, run:
apt-get install lvm2


We will now create a Volume Group on the third partition created during installation. On my system this is /dev/sda3. You can run:
fdisk -l


to confirm this.

Do the following to create a volume group called "vg":
pvcreate /dev/sda3
vgcreate vg /dev/sda3


To check the status of the volume group, run:
vgdisplay



and then reboot:
reboot



Within this volume group we will later create logical volumes that will be used by our VMs. You can create the logical volumes manually when installing a new VM (see the sketch below), but further down I will use xen-tools, which does all of that for us.
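As a rough sketch, doing it manually for a hypothetical VM called "myvm" would look something like this (the name and sizes are only examples; xen-tools will run the equivalent commands for us below):

lvcreate -L 5G -n myvm-disk vg
lvcreate -L 384M -n myvm-swap vg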
Configure XEN-tools

If you have a two-machine cluster like mine, it is not strictly necessary to configure xen-tools on both of them; you could decide to always use one of them when adding a new DomU. For flexibility I have both machines configured the same way so I can run xen-tools on either.

Xen-tools is installed automatically as part of the Xen installation above. So now we just need to configure it. The configuration for xen-tools is stored in /etc/xen-tools/xen-tools.conf

Run:
nano /etc/xen-tools/xen-tools.conf


and perform the following changes in the xen-tools.conf file:

We first uncomment the line with LVM and specify the volume group we created above:


lvm = vg

Configure the "Disk and Sizing options" section. Mine looks like this at the moment:
size = 5Gb # Disk image size.
memory = 384Mb # Memory size
swap = 384Mb # Swap size
# noswap = 1 # Don't use swap at all for the new system.
fs = ext3 # use the EXT3 filesystem for the disk image.
dist = hardy # Default distribution to install.
image = sparse # Specify sparse vs. full disk images.


The disk size and the swap size will be used to create Logical Volumes (LVs) in the VG specified above.

Next we edit the "Networking" section. Mine looks like this:
gateway = 10.1.1.1
netmask = 255.255.255.0
broadcast = 10.1.1.255


We will specify the IP address of the VM and its hostname when creating the VM with xen-tools.

Uncomment the line with passwd to always be asked for the password of your VM. It should look like:


passwd = 1

As I have a 64-bit AMD CPU, I set the following value for my architecture:


arch=amd64

Next, and last, we change the mirrors used when installing Debian and Ubuntu. This is how mine looks, using mirrors in the Netherlands:
# The default mirror for debootstrap to install Debian-derived distributions
#
mirror = http://ftp.nl.debian.org/debian/

#
# A mirror suitable for use when installing the Dapper release of Ubuntu.
#
mirror = http://nl.archive.ubuntu.com/ubuntu/

#
# If you like you could use per-distribution mirrors, which will
# be more useful if you're working in an environment where you want
# to regularly use multiple distributions:
#
mirror_sid=http://ftp.nl.debian.org/debian
mirror_sarge=http://ftp.nl.debian.org/debian
mirror_etch=http://ftp.nl.debian.org/debian
# mirror_dapper=http://archive.ubuntu.com/ubuntu
# mirror_edgy=http://archive.ubuntu.com/ubuntu
# mirror_feisty=http://archive.ubuntu.com/ubuntu
# mirror_gutsy=http://archive.ubuntu.com/ubuntu


For Debian I had to enable the per-distribution mirrors as shown above; otherwise installing sid, sarge or etch would fail.

Now we are done configuring /etc/xen-tools/xen-tools.conf

The default setting in xen-tools.conf uses the debootstrap installation method. For installing Fedora and CentOS you need to use rinse. If you want to try that later on, install rinse by running:
apt-get install rinse


Now we are ready to use xen-tools!
Using XEN-tools to install DomU

Perform this step on only one of your machines, either HA1 or HA2, to create your first DomU.

Time to install our first DomU. The part of xen-tools used for this is called xen-create-image. The only two parameters we supply to xen-create-image are the IP address and the hostname of the DomU; all other settings are taken from xen-tools.conf, but you can override any of them on the command line. For example, to change the distribution you would add --dist=etch to install Debian etch instead of Ubuntu Hardy.

Now, run the following to create a DomU with IP 10.1.1.50 and hostname "test":
xen-create-image --hostname=test --ip=10.1.1.50


After a while the process should complete successfully and your DomU will be ready. For me it took about 5 minutes.

xen-create-image has now automatically created two LVs in your VG. If you used the hostname "test", you will have one LV called "test-disk" and another called "test-swap". The full paths to these are /dev/vg/test-disk and /dev/vg/test-swap.

Run:
lvdisplay


and you will see the details of your two LVs.

You should now be able to start your DomU called "test". The config files for your DomUs are stored in /etc/xen/. We first change to that folder:
root@ha1:/# cd /etc/xen


In /etc/xen/ you will find test.cfg, which is the config file for your newly created DomU.
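The exact contents depend on your xen-tools version, but the generated file should look roughly like the sketch below (paths and values are approximate and for illustration only; the disk section is the part we will change later for DRBD):

kernel  = '/boot/vmlinuz-2.6.24-19-xen'
ramdisk = '/boot/initrd.img-2.6.24-19-xen'
memory  = '384'
name    = 'test'
root    = '/dev/xvda2 ro'
disk    = [ 'phy:/dev/vg/test-swap,xvda1,w', 'phy:/dev/vg/test-disk,xvda2,w' ]
vif     = [ 'ip=10.1.1.50' ]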

Start the DomU:
root@ha1:/etc/xen# xm create test.cfg


You can also add -c to the line above; that will automatically bring you to the console of the DomU. If you didn't, you can check the status of the running DomU with:
xm list


To change to the console of the running DomU, run:
xm console test



If everything went fine you will now see the login prompt of your DomU test. To leave console mode in your DomU, press Ctrl and ]:
Ctrl + ]


Perfect! You now have your first virtual machine running.
NEXT

We will soon go ahead and configure DRBD and Heartbeat to allow for live migration and high availability. This is the fun part! But before we do that we need to duplicate the LV disk setup on the other machine.

So on HA2 we need to create one LV with 5GB and another one with 384MB.

Run:
root@ha2:/# lvcreate -L 5G -n test-disk vg



to create "test-disk" with 5GB of space. We use the same name here for simplicity, but the names don't need to match, as we will specify them later in the DRBD configuration.

Run:
root@ha2:/# lvcreate -L 384M -n test-swap vg


to create "test-swap" with 384MB of space.

Now we have the same disk setup on both machines.


Install and configure DRBD


To install DRBD, run:
apt-get install drbd8-utils



The configuration file for DRBD is /etc/drbd.conf. We will now configure it to use the LVs we created above, and later we will change the Xen configuration of our test DomU to use the DRBD devices instead of the LVs directly. Edit /etc/drbd.conf:
root@ha1:/# nano /etc/drbd.conf


Below is a copy of my settings with all default comments removed from the file:
global {
usage-count yes;
}

common {
syncer { rate 90M; }
}

resource test-disk {
protocol C;
startup {
wfc-timeout 120; ## 2 min
degr-wfc-timeout 120; ## 2 minutes.
}
disk {
on-io-error detach;
}
net {
allow-two-primaries;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;

timeout 60;
connect-int 10;
ping-int 10;
max-buffers 2048;
max-epoch-size 2048;
}
syncer {
}

on ha1.domain.local {
address 192.168.1.100:7789;
device /dev/drbd1;
disk /dev/vg/test-disk;
meta-disk /dev/vg/meta[0];

}

on ha2.domain.local {
address 192.168.1.101:7789;
device /dev/drbd1;
disk /dev/vg/test-disk;
meta-disk /dev/vg/meta[0];
}
}

resource test-swap {
protocol C;
startup {
wfc-timeout 120; ## 2 min
degr-wfc-timeout 120; ## 2 minutes.
}
disk {
on-io-error detach;
}
net {
allow-two-primaries;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;

timeout 60;
connect-int 10;
ping-int 10;
max-buffers 2048;
max-epoch-size 2048;
}
syncer {
}

on ha1.domain.local {
address 192.168.1.100:7790;
device /dev/drbd2;
disk /dev/vg/test-swap;
meta-disk /dev/vg/meta[1];
}
on ha2.domain.local {
address 192.168.1.101:7790;
device /dev/drbd2;
disk /dev/vg/test-swap;
meta-disk /dev/vg/meta[1];
}
}


I have given each DRBD resource the same name as its corresponding LV, so DRBD resource test-disk uses LV test-disk.

Next we will create a separate volume to store DRBD's meta data. Meta data is used by DRBD to store information about the device and can be either internal or external. Internal meta data is easier to set up for a new device but requires resizing operations when using an already formatted device. Read more about this at: http://www.drbd.org/users-guide/ch-internals.html

As we already have data on our LVs, created by xen-tools, we will use external meta data. So we create another LV with 1GB of space:
lvcreate -L 1G -n meta vg



In drbd.conf above you will see that the meta-disk lines are specified as /dev/vg/meta[0] and /dev/vg/meta[1]. The same device can be used to store meta data for several DRBD resources; that is done by adding [X] after the device name.

To initialize the meta data, run:
drbdadm create-md test-disk
drbdadm create-md test-swap


Redo the DRBD configuration above on the other node if you haven't already done so.

Now we will start DRBD on both nodes. Run:
/etc/init.d/drbd start



To check the status of DRBD, use the following commands:
/etc/init.d/drbd status
cat /proc/drbd



Before the replication of data begins, we have to make one node the primary for each DRBD resource. Run the following on the node where you installed your DomU above to replicate the data to the other node (do NOT run it on the other node):
drbdsetup /dev/drbd1 primary -o
drbdsetup /dev/drbd2 primary -o



Check the status again to see that the replication has started:
/etc/init.d/drbd status
cat /proc/drbd


If everything is OK, once the data is replicated you should see something like this for each DRBD device when running /etc/init.d/drbd status. It should state Primary/Secondary and UpToDate/UpToDate:
/etc/init.d/drbd status

1:test-disk Connected Primary/Secondary UpToDate/UpToDate C
2:test-swap Connected Primary/Secondary UpToDate/UpToDate C


Now we are done setting up the DRBD devices for our LVs. Next we configure our DomU to use DRBD.


Configure DomU to use your DRBD device


The configuration files for your DomUs are stored in /etc/xen/, so we first change to that directory:
cd /etc/xen


In here you have your test.cfg file. Edit it by running:
/etc/xen# nano test.cfg


You will find a section that looks like below:
disk = [
'phy:/dev/vg/test-swap,xvda1,w',
'phy:/dev/vg/test-disk,xvda2,w',
]


Edit this section so it looks like this:


disk = [
'drbd:test-swap,xvda1,w',
'drbd:test-disk,xvda2,w',
]

Done! That is all; your DomU is now ready to be started using the DRBD devices.

If your DomU is still running we first have to stop it. Run:
xm shutdown test


Try to start your DomU again:
xm create test.cfg -c


Hopefully everything went fine. Log in and then shut down your DomU:
shutdown -h now


When you are back at your Dom0 prompt, check the DRBD status again:
/etc/init.d/drbd status


You will now see the status of the DRBD devices as Secondary/Secondary:


1:test-disk Connected Secondary/Secondary UpToDate/UpToDate C
2:test-swap Connected Secondary/Secondary UpToDate/UpToDate C

The good thing about this is that Xen takes care of your DRBD devices and will automatically bring them up and down as needed. The same goes when you start a DomU: Xen will first make sure the DRBD devices are in primary mode before starting the DomU.
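Behind the scenes the drbd: disk type is handled by the block-drbd helper script that ships with DRBD. Assuming the Ubuntu package installs it in the usual place, you can sanity-check that it is present with:

ls -l /etc/xen/scripts/block-drbd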


Configure DomU on your other Dom0 node


Now we need to prepare the other Dom0 node to be able to run the DomU called test. This is simply done by copying /etc/xen/test.cfg from ha1 to ha2. Copy the whole file or just its contents, whichever is easier for you. When the file is copied we will try to start the DomU on ha2.
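For example, assuming root logins over SSH are allowed between the nodes (otherwise copy it via your normal user account and move it into place with sudo), something like this from ha1 would do it:

root@ha1:/# scp /etc/xen/test.cfg root@ha2X:/etc/xen/test.cfg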

First make sure that DomU test is not running on ha1:
root@ha1:/# xm list


If it is running, stop it:
root@ha1:/# xm shutdown test


Then go to ha2 and start it:
root@ha2:/# xm create test.cfg


If everything is fine you should see DomU test running. Check with xm list:
root@ha2:/# xm list


Perfect! You can now start the DomU on both nodes. (Note: Don't try to start the DomU on both nodes at the same time.)



Next we will get to the really cool stuff!
We will go ahead and configure live migration so we can move a DomU between the two Dom0 nodes with hardly any downtime.


Configure Live Migration


By default XEN does not allow live migration; we have to enable it in /etc/xen/xend-config.sxp. Make sure the following line is commented out, so it looks like this:


#(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')

and that the following line is not commented, it should look like this:


(xend-relocation-port 8002)

Restart xend (a reload has no effect, but a restart will not kill running DomUs). Run:
/etc/init.d/xend restart


Make sure to make these two changes on both nodes, ha1 & ha2.

NOW, let's try a live migration!

If everything is in order, your DomU called test should be running on your ha2 node. Please confirm this with xm list:
root@ha2:/# xm list

Name ID Mem VCPUs State Time(s)
Domain-0 0 500 2 r----- 10268.0
test 7 384 1 -b---- 4694.7


To migrate a DomU we use the xm migrate command. Run:
root@ha2:/# xm migrate test ha1X --live


So first we write xm migrate, then the name of the DomU we want to migrate, in this case test, then the hostname or IP of the other node, in this case ha1X (remember that we specified ha1X and ha2X in the hosts file on both nodes and mapped them to the IPs of the crossover connection between ha1 and ha2). We end the command with --live, which instructs Xen to do a live migration.

If everything went fine, your DomU test should now be running on ha1. Run:
root@ha1:/# xm list

Name ID Mem VCPUs State Time(s)
Domain-0 0 500 2 r----- 10260.1
test 11 384 1 -b---- 2.2


Try the migration a few times between the two nodes. Try accessing the DomU during a live migration: ping it from another machine on your network, open an SSH session, and see how incredibly fast it migrates. I have not timed it exactly, but running a normal ping from a Windows machine I lose at most one packet during a migration.


Install and Configure Heartbeat

We will use Heartbeat, part of the Linux High Availability project (http://www.linux-ha.org/), to monitor our XEN resources and provide failover between our two nodes. I currently use version 2 of Heartbeat but with version 1's configuration files, so that is what I will describe below. Read more at the link above about the differences between the configuration files.

To install Heartbeat we will use the package from the Ubuntu repository. Run this on both nodes:
apt-get install heartbeat



The configuration files for Heartbeat are stored in /etc/ha.d/. We need to configure the following files: authkeys, haresources and ha.cf.

We will start with authkeys and ha.cf, as they are the easiest to explain.
authkeys is used to configure authentication between the cluster nodes. Configure it to look something like this:
root@ha1:/etc/ha.d# nano authkeys

auth 1
1 sha1 SecretKey123!!!


This tells Heartbeat to use the sha1 method with the supplied key. Note: make sure to copy the exact same file to your ha2 node.
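Heartbeat also refuses to start if authkeys is readable by anyone other than root, so restrict its permissions on both nodes:

root@ha1:/etc/ha.d# chmod 600 /etc/ha.d/authkeys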

Let's continue with ha.cf. This file contains the configuration for Heartbeat: the nodes in the cluster, how they communicate, and timer settings. I will use the version 1 configuration format. Configure the file to look something like this:
root@ha1:/etc/ha.d# nano ha.cf

logfacility local0
udpport 694
keepalive 1
deadtime 10
warntime 3
initdead 20
ucast eth0 10.1.1.100
ucast eth0 10.1.1.101
auto_failback on
watchdog /dev/watchdog
debugfile /var/log/ha-debug
node ha1.domain.local
node ha2.domain.local


The two lines above that begin with "ucast eth0" configure the heartbeat communication. The reason I have put both nodes' IP addresses there is so the file can be identical on both nodes; Heartbeat will ignore the IP of the local machine, so this is perfectly fine. Note: make sure to copy the exact same file to your ha2 node.

Now we will continue with the haresources file. The file itself is very simple, but we need to set up the resources used by Heartbeat, which requires some explanation. We begin with the file; configure it to look like this:
root@ha1:/etc/ha.d# nano haresources



ha1.domain.local xendomainsHA1
ha2.domain.local xendomainsHA2

Looks very simple, doesn't it? What this means is that the resource (the script that will control our DomUs) xendomainsHA1 will default to node HA1, and xendomainsHA2 to node HA2. These two scripts are copies of the /etc/init.d/xendomains script, modified for our two-node cluster. We need this to be able to differentiate between DomUs belonging to HA1 and HA2 respectively.

First we copy /etc/init.d/xendomains twice to create xendomainsHA1 and xendomainsHA2:
root@ha1:/# cp /etc/init.d/xendomains /etc/ha.d/resource.d/xendomainsHA1
root@ha1:/# cp /etc/init.d/xendomains /etc/ha.d/resource.d/xendomainsHA2



Now we edit both files and change two lines so they look like below:
root@ha1:/# nano /etc/ha.d/resource.d/xendomainsHA1

LOCKFILE=/var/lock/xendomainsHA1
XENDOM_CONFIG=/etc/default/xendomainsHA1
root@ha1:/# nano /etc/ha.d/resource.d/xendomainsHA2


LOCKFILE=/var/lock/xendomainsHA2
XENDOM_CONFIG=/etc/default/xendomainsHA2

Please make sure that all the Heartbeat configuration above is exactly the same on node HA2. Below the configuration will differ slightly.

As you noticed above, we specified different configuration files for the two resources. There is a default configuration file already located in /etc/default, called xendomains. We will copy it as below:
root@ha1:/# cp /etc/default/xendomains /etc/default/xendomainsHA1



We copy only xendomainsHA1 to begin with; we will modify it and later use it as the basis for xendomainsHA2.
root@ha1:/# nano /etc/default/xendomainsHA1



XENDOMAINS_MIGRATE="ha2X --live"

This will allow live migration to HA2 when the node is shut down.

XENDOMAINS_SAVE=

Disable save feature.

XENDOMAINS_SHUTDOWN_ALL=

Disable this to prevent ALL DomUs from being shut down, even those not controlled by this script.

XENDOMAINS_RESTORE=false

Disable as we don't save DomUs

XENDOMAINS_AUTO=/etc/xen/auto/HA1

Point to the location of the DomU configuration files that will be controlled by this script.

XENDOMAINS_AUTO_ONLY=true

Only DomUs started via config files in XENDOMAINS_AUTO will be managed
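Put together, the relevant lines in /etc/default/xendomainsHA1 on ha1 should end up looking roughly like this (all other lines in the file can stay at their defaults):

XENDOMAINS_MIGRATE="ha2X --live"
XENDOMAINS_SAVE=
XENDOMAINS_SHUTDOWN_ALL=
XENDOMAINS_RESTORE=false
XENDOMAINS_AUTO=/etc/xen/auto/HA1
XENDOMAINS_AUTO_ONLY=true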

Now we copy xendomainsHA1 to create xendomainsHA2:
root@ha1:/# cp /etc/default/xendomainsHA1 /etc/default/xendomainsHA2


And we modify xendomainsHA2 to point to the correct folder for DomU configuration files:
root@ha1:/# nano /etc/default/xendomainsHA2



XENDOMAINS_AUTO=/etc/xen/auto/HA2

Now we can copy /etc/default/xendomainsHA1 and /etc/default/xendomainsHA2 to node HA2. Do that by any means you want, either file transfer or copy and paste.
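Again, assuming root SSH access between the nodes, something like this from ha1 would work:

root@ha1:/# scp /etc/default/xendomainsHA1 /etc/default/xendomainsHA2 root@ha2X:/etc/default/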

On HA2 we need to modify the settings for live migration:
root@ha2:/# nano /etc/default/xendomainsHA1



XENDOMAINS_MIGRATE="ha1X --live"root@ha2:/# nano /etc/default/xendomainsHA2



XENDOMAINS_MIGRATE="ha1X --live"

Next we need to create the two folders /etc/xen/auto/HA1 and /etc/xen/auto/HA2 referenced above. Do this on both nodes, HA1 & HA2:
mkdir /etc/xen/auto/HA1
mkdir /etc/xen/auto/HA2



Create a symlink on both nodes in /etc/xen/auto/HA1 pointing to our test.cfg file in /etc/xen/:
ln -s /etc/xen/test.cfg /etc/xen/auto/HA1/test


Whenever creating a new DomU, you need to decide whether you want it to run on HA1 or HA2 by default; the location of its symlink decides that.
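For example, a hypothetical new DomU called "web1" that should normally run on HA2 would get its symlink in the HA2 folder instead:

ln -s /etc/xen/web1.cfg /etc/xen/auto/HA2/web1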

Remove the default xendomains script from starting automatically; Heartbeat will now control this for us. Do this on both nodes:
update-rc.d -f xendomains remove



Shut down DomU test if it is running:
xm shutdown test


Start Heartbeat manually on both nodes:
/etc/init.d/heartbeat start



Hopefully everything is fine. Try to reboot one node at a time to see that DomU test is migrated between the two nodes.
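For example, after rebooting ha1 you can confirm from ha2 that DomU test was migrated there and that its DRBD resources followed along:

root@ha2:/# xm list
root@ha2:/# /etc/init.d/drbd status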
