Saturday, August 20, 2016
How to Install, Create and Manage LXC (Linux Containers) in RHEL/CentOS 7
LXC, an acronym for Linux Containers, is a lightweight, Linux kernel based virtualization solution that runs on top of the operating system, allowing you to run multiple isolated distributions at the same time.
The difference between LXC and KVM virtualization is that LXC doesn't emulate hardware; instead, containers share the host's kernel and are isolated through kernel namespaces, similar to chroot applications.
This makes LXC a very fast virtualization solution compared to hardware virtualization solutions such as KVM, XEN or VMware.
This article will guide you through installing, deploying and running LXC containers on CentOS/RHEL and Fedora distributions.
Requirements
A working Linux operating system with minimal installation:
Installation of CentOS 7 Linux
Installation of RHEL 7
Installation of Fedora 23 Server
Step 1: Installing LXC Virtualization in Linux
1. LXC virtualization is provided through the EPEL repository. In order to use this repo, open a terminal and install the EPEL repository on your system by issuing the following command:
# yum install epel-release
2. Before continuing with the LXC installation process, make sure the Perl language interpreter, debootstrap and libvirt packages are installed by issuing the below command.
# yum install debootstrap perl libvirt
3. Finally install LXC virtualization solution with the following command.
# yum install lxc lxc-templates
4. After the LXC packages have been installed, verify that the LXC and libvirt daemons are running, starting them if needed.
# systemctl status lxc.service
# systemctl start lxc.service
# systemctl start libvirtd
# systemctl status lxc.service
Sample Output
Check LXC Daemon Status
[root@tecmint ~]# systemctl status lxc.service
lxc.service - LXC Container Initialization and Autoboot Code
Loaded: loaded (/usr/lib/systemd/system/lxc.service; disabled)
Active: inactive (dead)
[root@tecmint ~]# systemctl start lxc.service
[root@tecmint ~]# systemctl status lxc.service
lxc.service - LXC Container Initialization and Autoboot Code
Loaded: loaded (/usr/lib/systemd/system/lxc.service; disabled)
Active: active (exited) since Fri 2016-04-01 02:33:36 EDT; 1min 37s ago
Process: 2250 ExecStart=/usr/libexec/lxc/lxc-autostart-helper start (code=exited, status=0/SUCCESS)
Process: 2244 ExecStartPre=/usr/libexec/lxc/lxc-devsetup (code=exited, status=0/SUCCESS)
Main PID: 2250 (code=exited, status=0/SUCCESS)
Apr 01 02:33:06 mail systemd[1]: Starting LXC Container Initialization and Autoboot Code...
Apr 01 02:33:06 mail lxc-devsetup[2244]: Creating /dev/.lxc
Apr 01 02:33:06 mail lxc-devsetup[2244]: /dev is devtmpfs
Apr 01 02:33:06 mail lxc-devsetup[2244]: Creating /dev/.lxc/user
Apr 01 02:33:36 mail lxc-autostart-helper[2250]: Starting LXC autoboot containers: [ OK ]
Apr 01 02:33:36 mail systemd[1]: Started LXC Container Initialization and Autoboot Code.
Also check the LXC kernel virtualization configuration by issuing the below command.
# lxc-checkconfig
Sample Output
Check LXC Kernel Virtualization Configuration
[root@tecmint ~]# lxc-checkconfig
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.10.0-229.el7.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled
--- Checkpoint/Restore ---
checkpoint restore: missing
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled
Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
Step 2: Create and Manage LXC Containers in Linux
5. To list the available LXC container templates already installed on your system, issue the below command.
# ls -alh /usr/share/lxc/templates/
List LXC Container Templates
total 344K
drwxr-xr-x. 2 root root 4.0K Apr 1 02:32 .
drwxr-xr-x. 6 root root 100 Apr 1 02:32 ..
-rwxr-xr-x. 1 root root 11K Nov 15 10:19 lxc-alpine
-rwxr-xr-x. 1 root root 14K Nov 15 10:19 lxc-altlinux
-rwxr-xr-x. 1 root root 11K Nov 15 10:19 lxc-archlinux
-rwxr-xr-x. 1 root root 9.7K Nov 15 10:19 lxc-busybox
-rwxr-xr-x. 1 root root 29K Nov 15 10:19 lxc-centos
-rwxr-xr-x. 1 root root 11K Nov 15 10:19 lxc-cirros
-rwxr-xr-x. 1 root root 17K Nov 15 10:19 lxc-debian
-rwxr-xr-x. 1 root root 18K Nov 15 10:19 lxc-download
-rwxr-xr-x. 1 root root 49K Nov 15 10:19 lxc-fedora
-rwxr-xr-x. 1 root root 28K Nov 15 10:19 lxc-gentoo
-rwxr-xr-x. 1 root root 14K Nov 15 10:19 lxc-openmandriva
-rwxr-xr-x. 1 root root 14K Nov 15 10:19 lxc-opensuse
-rwxr-xr-x. 1 root root 35K Nov 15 10:19 lxc-oracle
-rwxr-xr-x. 1 root root 12K Nov 15 10:19 lxc-plamo
-rwxr-xr-x. 1 root root 6.7K Nov 15 10:19 lxc-sshd
-rwxr-xr-x. 1 root root 23K Nov 15 10:19 lxc-ubuntu
-rwxr-xr-x. 1 root root 12K Nov 15 10:19 lxc-ubuntu-cloud
6. The process of creating an LXC container is very simple. The command syntax to create a new container is explained below.
In the below excerpt we'll create a new container named mydcb based on the Debian template; its minimal root filesystem will be downloaded automatically (using debootstrap) during creation.
Creating LXC Container
[root@tecmint ~]# lxc-create -n mydcb -t debian
debootstrap is /usr/sbin/debootstrap
Checking cache download in /var/cache/lxc/debian/rootfs-jessie-amd64 ...
Downloading debian minimal ...
W: Cannot check Release signature; keyring file not available /usr/share/keyrings/debian-archive-keyring.gpg
I: Retrieving Release
I: Retrieving Packages
I: Validating Packages
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
...
...
7. After a series of base packages and dependencies have been downloaded and installed on your system, the container will be created. When the process finishes, a message will display your default root account password. To be safe, change this password as soon as you start and log in to the container console.
Sample Output
...
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
Failed to read /proc/cmdline. Ignoring: No such file or directory
invoke-rc.d: policy-rc.d denied execution of start.
Timezone in container is not configured. Adjust it manually.
Root password is 'root', please change !
Generating locales (this might take a while)...
en_IN.en_IN...character map file `en_IN' not found: No such file or directory
/usr/share/i18n/locales/en_IN:55: LC_MONETARY: unknown character in field `currency_symbol'
done
Generation complete.
8. Now, you can use lxc-ls to list your containers and lxc-info to obtain information about a running/stopped container.
To start the newly created container in the background (it will run as a daemon when you specify the -d option), issue the following command:
# lxc-start -n mydcb -d
9. After the container has been started, you can list running containers using the lxc-ls --active command and get detailed information about a running container with lxc-info, as shown below.
# lxc-ls --active
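For example, to get detailed information (state, PID, IP, resource usage) about the mydcb container created earlier:
# lxc-info -n mydcb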
10. To log in to the container console, issue the lxc-console command against a running container name. Log in with the user root and the default password reported when the container was created (root in this case).
Once logged in to the container, you can run several commands: verify the distribution by displaying the /etc/issue.net file content, change the root password by issuing the passwd command, or view details about the network interfaces with ifconfig.
[root@tecmint~]# lxc-console -n mydcb
Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself
Debian GNU/Linux 8 mydcb tty1
mydcb login: root
Password:
Last login: Fri Apr 1 07:39:08 UTC 2016 on console
Linux mydcb 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@mydcb:~# cat /etc/issue.net
Debian GNU/Linux 8
root@mydcb:~# ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3e:d9:21:d7
inet6 addr: fe80::216:3eff:fed9:21d7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:107 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5796 (5.6 KiB) TX bytes:648 (648.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root@mydcb:~# passwd root
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
11. To detach from the container console and go back to your host console, leaving the container running, press Ctrl+a followed by q on the keyboard.
To stop a running container, issue the following command.
# lxc-stop -n mydcb
12. In order to create an LXC container based on an Ubuntu template, enter the /usr/sbin/ directory and create the following debootstrap symlink.
# cd /usr/sbin
# ln -s debootstrap qemu-debootstrap
13. Edit the qemu-debootstrap file with the Vi editor and replace the two MIRROR lines, as sketched below.
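A minimal sketch, assuming the intent is to point the script's two default mirror variables at the Ubuntu archive; the variable names and URLs below are illustrative assumptions, so check your own copy of the script:
# vi /usr/sbin/qemu-debootstrap
DEF_MIRROR="http://archive.ubuntu.com/ubuntu"        # assumed value
DEF_HTTPS_MIRROR="https://archive.ubuntu.com/ubuntu" # assumed value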
14. Finally, create a new LXC container based on the Ubuntu template by issuing the same lxc-create command.
Once the process of generating the Ubuntu container finishes, a message will display your container's default login credentials, as illustrated in the sample output below.
# lxc-create -n myubuntu -t ubuntu
Sample Output
Create LXC Ubuntu Container
Checking cache download in /var/cache/lxc/precise/rootfs-amd64 ...
Installing packages in template: ssh,vim,language-pack-en
Downloading ubuntu precise minimal ...
15. To create a specific container based on a locally installed template, use the following syntax:
16. For instance, specific containers for different distro releases and architectures can also be created from the generic download template, which pulls prebuilt images from the LXC repositories, as illustrated in the example after the below list of switches.
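# lxc-create -n <container-name> -t <template-name>
For example, to build a Fedora container from the locally installed lxc-fedora template (the container name myfedora is just an illustrative choice):
# lxc-create -n myfedora -t fedora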
Here is the list of lxc-create command line switches:
-n = name
-t = template
-d = distribution
-a = arch
-r = release
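Note that -n and -t are options of lxc-create itself, while the distribution, release and arch options belong to the download template and are passed after a double dash (--). A sketch, assuming a 64-bit Ubuntu Trusty image is wanted (the container name is just an example):
# lxc-create -n myubuntu-trusty -t download -- -d ubuntu -r trusty -a amd64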
17. Containers can be deleted from your host with the lxc-destroy command issued against a container name.
# lxc-destroy -n mywheezy
18. A container can be cloned from an existing (stopped) container by issuing the lxc-clone command:
# lxc-clone mydcb mydcb-clone
19. Finally, all created containers reside in the /var/lib/lxc/ directory. If for some reason you need to manually adjust container settings, edit the config file in each container's directory, as shown below.
# ls /var/lib/lxc
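For example, the settings of the mydcb container live in its own config file, which can be opened with a text editor:
# vi /var/lib/lxc/mydcb/config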
These are just the basic things you need to know in order to work your way around LXC.
Configure High-Availability Cluster on CentOS 7 / RHEL 7
A High-Availability cluster, also known as a failover cluster (active-passive cluster), is one of the most widely used cluster types in production environments. This cluster provides continued availability of services even when one of the nodes in the group fails. If the server running an application fails for some reason (e.g. a hardware failure), the cluster software (Pacemaker) will restart the application on another node.
In production, this type of cluster is mainly used for databases, custom applications and file sharing. Failover is not just starting an application; it involves a series of operations, such as mounting filesystems, configuring networks and starting dependent applications.
CentOS 7 / RHEL 7 supports failover clusters using Pacemaker. Here we will look at configuring the Apache (web) server as a highly available application. As mentioned, failover is a series of operations, so we also need to configure the filesystem and network as resources. For the filesystem, we will use shared storage served over iSCSI.
Our Environment:
Cluster Nodes:
node1.itzgeek.local 192.168.12.11
node2.itzgeek.local 192.168.12.12
iSCSI Storage:
server.itzgeek.local 192.168.12.20
All nodes run CentOS Linux release 7.2.1511 (Core) on VMware Workstation.
Building Infrastructure:
iSCSI shared storage:
Shared storage is one of the most important resources in a high-availability cluster, as it holds the data of the running application. All the nodes in the cluster have access to the shared storage, so they always see the most recent data. SAN is the most widely used shared storage in production environments; here, we will configure a cluster with iSCSI storage for demonstration purposes.
Here, we will create a 10GB LVM disk on the iSCSI server to use as shared storage for our cluster nodes. Let's list the available disks attached to the target server using the following command.
[root@server ~]# fdisk -l | grep -i sd
Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 209715199 104344576 8e Linux LVM
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
From the above output, you can see that my system has a 10GB disk (/dev/sdb). Create an LVM volume on /dev/sdb (replace /dev/sdb with your disk name), as sketched below.
Now list the hard disks attached to the nodes; once the iSCSI LUN has been discovered and logged in to, you will find a new disk (sdb) added to each node. Run the below command on both nodes.
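A minimal sketch of the LVM part, assuming a volume group named vg_iscsi and a logical volume named lv_iscsi (both names are illustrative); exporting the logical volume as an iSCSI LUN to the nodes is assumed to be configured separately:
[root@server ~]# pvcreate /dev/sdb
[root@server ~]# vgcreate vg_iscsi /dev/sdb
[root@server ~]# lvcreate -l 100%FREE -n lv_iscsi vg_iscsi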
# fdisk -l | grep -i sd
Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 209715199 104344576 8e Linux LVM
Disk /dev/sdb: 10.7 GB, 10733223936 bytes, 20963328 sectors
Format the newly detected disk with ext4.
[root@node1 ~]# mkfs.ext4 /dev/sdb
Setup Cluster Nodes:
Make a host entry for all nodes on each node; the cluster uses host names to communicate with each other (see the sketch below). Perform the below tasks on all of your cluster nodes.
Install the cluster packages (pacemaker) on all nodes using the below command.
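A sketch of the host entries, using the node names and addresses from the environment section above (the short aliases are optional additions):
# vi /etc/hosts
192.168.12.11 node1.itzgeek.local node1
192.168.12.12 node2.itzgeek.local node2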
# yum install pcs fence-agents-all -y
Allow the high-availability services through the firewall so the nodes can communicate properly; you can skip this step if the system doesn't have firewalld installed.
Use the below command to get the status of the cluster.
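With firewalld, the built-in high-availability service definition covers the required ports:
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload
Before its status can be checked, the cluster also has to be authenticated, created and started. A hedged sketch of those steps on CentOS 7 follows; set the same hacluster password on both nodes and run the pcs cluster commands on one node only (the cluster name itzgeek_cluster matches the status output below):
# passwd hacluster
# systemctl start pcsd
# systemctl enable pcsd
# pcs cluster auth node1.itzgeek.local node2.itzgeek.local
# pcs cluster setup --name itzgeek_cluster node1.itzgeek.local node2.itzgeek.local
# pcs cluster start --all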
[root@node1 ~]# pcs cluster status
Cluster Status:
Last updated: Fri Mar 25 11:18:52 2016 Last change: Fri Mar 25 11:16:44 2016 by hacluster via crmd on node1.itzgeek.local
Stack: corosync
Current DC: node1.itzgeek.local (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
2 nodes and 0 resources configured
Online: [ node1.itzgeek.local node2.itzgeek.local ]
PCSD Status:
node1.itzgeek.local: Online
node2.itzgeek.local: Online
Run the below command to get detailed information about the cluster, including its resources, the Pacemaker status and node details.
[root@node1 ~]# pcs status
Cluster name: itzgeek_cluster
WARNING: no stonith devices and stonith-enabled is not false
Last updated: Fri Mar 25 11:19:25 2016 Last change: Fri Mar 25 11:16:44 2016 by hacluster via crmd on node1.itzgeek.local
Stack: corosync
Current DC: node1.itzgeek.local (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
2 nodes and 0 resources configured
Online: [ node1.itzgeek.local node2.itzgeek.local ]
Full list of resources:
PCSD Status:
node1.itzgeek.local: Online
node2.itzgeek.local: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Fencing Devices:
The fencing device is a hardware/software device that helps to disconnect a problem node, by resetting the node or by cutting off its access to the shared storage. My demo cluster is running on top of VMware virtual machines, so I am not showing a fencing device setup here, but you can follow a dedicated guide to set up a fencing device.
Preparing resources:
Apache Web Server:
Install the Apache web server on both nodes.
# yum install -y httpd wget
Edit the configuration file.
# vi /etc/httpd/conf/httpd.conf
Add the below content at the end of the file on all your cluster nodes; the Pacemaker Apache resource agent uses this server-status page to monitor the health of the web server.
<Location /server-status>
SetHandler server-status
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Location>
Now we need to use the shared storage for storing the web content (HTML) files. Perform the below operations on any one of the nodes.
[root@node2 ~]# mount /dev/sdb /var/www/
[root@node2 ~]# mkdir /var/www/html
[root@node2 ~]# mkdir /var/www/cgi-bin
[root@node2 ~]# mkdir /var/www/error
[root@node2 ~]# restorecon -R /var/www
[root@node2 ~]# cat <<-END >/var/www/html/index.html
<html>
<body>Hello This Is Coming From ITzGeek Cluster</body>
</html>
END
[root@node2 ~]# umount /var/www
Allow the Apache (HTTP) service through the firewall on all nodes, as sketched below.
Create an IP address resource; this will act as a virtual IP for Apache. Clients will use this IP to access the web content instead of an individual node's IP. A sketch of the resource creation follows.
Since we are not using fencing, disable it (STONITH). You must disable it to be able to start the cluster resources, but disabling STONITH in a production environment is not recommended.
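With firewalld, something like the following on each node allows HTTP traffic:
# firewall-cmd --permanent --add-service=http
# firewall-cmd --reload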
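The pcs status output later in this guide shows three resources (httpd_fs, httpd_vip, httpd_ser) grouped under apache, so the creation step would look roughly like the sketch below, run on one node. The virtual IP 192.168.12.100 is only an assumption; substitute a free address in your subnet:
# pcs resource create httpd_fs Filesystem device="/dev/sdb" directory="/var/www" fstype="ext4" --group apache
# pcs resource create httpd_vip IPaddr2 ip=192.168.12.100 cidr_netmask=24 --group apache
# pcs resource create httpd_ser apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group apache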
# pcs property set stonith-enabled=false
Check the status of the cluster.
[root@node1 ~]# pcs status
Cluster name: itzgeek_cluster
Last updated: Fri Mar 25 13:47:55 2016 Last change: Fri Mar 25 13:31:58 2016 by root via cibadmin on node1.itzgeek.local
Stack: corosync
Current DC: node2.itzgeek.local (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
2 nodes and 3 resources configured
Online: [ node1.itzgeek.local node2.itzgeek.local ]
Full list of resources:
Resource Group: apache
httpd_vip (ocf::heartbeat:IPaddr2): Started node1.itzgeek.local
httpd_ser (ocf::heartbeat:apache): Started node1.itzgeek.local
httpd_fs (ocf::heartbeat:Filesystem): Started node1.itzgeek.local
PCSD Status:
node1.itzgeek.local: Online
node2.itzgeek.local: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Once the cluster is up and running, point a web browser to the Apache virtual IP; you should get the web page created earlier ("Hello This Is Coming From ITzGeek Cluster").
Let's check the failover of the resources by stopping the cluster on the active node, as sketched below.
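For example, assuming node1 currently holds the resources (as in the status output above), stopping the cluster services on it should move the apache resource group over to node2:
# pcs cluster stop node1.itzgeek.local
Afterwards, running pcs status on node2 should show the resources started on node2.itzgeek.local.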