CentOS Virtualization Hypervisor Setup and Management

Author: Ken Zahorec

This document provides instructions for setting up and maintaining an environment that can be used to provide secure, high performance virtualization services from a typical server. Management of the virtualization server can easily be accomplished remotely via Windows, OS X, or desktop Linux. The document assumes that the reader has some degree of experience with Linux, including the GNU Bash shell and the general Linux command line built-ins and utilities available in RedHat/CentOS/Scientific Linux environments. Items in the document specific to CentOS 6 or CentOS 7 are marked as such. When not specified, the instruction applies to either system.

http://tiswww.case.edu/php/chet/bash/bashref.html

http://wiki.centos.org/

https://www.centos.org/modules/newbb/index.php?cat=9

Table of Contents

1 CentOS Virtualization Host Setup

1.1 Time Required

1.2 Network Requirements

1.3 Assumptions

2 CentOS Linux/KVM Virtualization Host

2.1 Instructions for Installing the Virtualization Host software

2.1.1 Download Installation Media Data

2.1.2 Create the Bootable Installation Media

2.1.3 Partitioning the Virtualization Host Storage device

2.2 Additional Configuration and Setup of the CentOS Virtualization Host

2.2.1 Network Adapter Setup (CentOS 7)

2.2.2 Network Adapter Setup

2.2.3 Proxy Setup

2.2.4 Network Time synchronization

2.2.5 Virtualization Host Clock Setup

2.2.6 Update System to Latest Software and Add Required Packages

2.2.7 Firewall setup

2.2.8 Adding SPICE service to the CentOS virtualization Server

2.3 Setting up User Accounts and Access to the CentOS Virtualization Host

2.3.1 Create user accounts for each user that will be managing the hypervisor

2.3.2 User administrative access “root”

2.4 Using virt-manager to Configure some Common pre-configured Hypervisor Network Backings (optional items, not mandatory)

2.4.1 Create a “virt_routed” routed network backing

2.4.2 Create a “virt_isolated” internal private network backing

2.5 Managing the CentOS Virtualization Host

2.6 Accessing virt-manager from a Remote OS management client on various operating systems

2.6.1 Management from Windows (XP, 7, 8, 8.1)

2.6.2 Management from Desktop Linux such as Ubuntu, Mint, Fedora, CentOS

2.6.3 Management from OS X

2.7 Creating Virtual Machines and Installing Guest OS in the CentOS Virtualization Server's Hypervisor

2.7.1 VM Guest QXL video and virtualized (virtio) drivers for improved VM guest support

2.7.2 Creating mountable iso images from the host file system

2.7.3 Virtual Desktop Interface (VDI) with KVM Spice

2.7.3.1 Fix a given static port value to a VM Display console:

2.7.4 External User Client access to the VM console

2.7.5 Windows 7 guests with Spice Channels

2.7.6 Windows 2012 R2 Server guests

2.7.7 Fedora and Ubuntu guests with Spice

2.7.8 Increasing Disk storage available on an existing VM

2.8 Care and Feeding of the CentOS virtualization host

2.9 Duplication and/or Backup of the CentOS Virtualization Host system

2.9.1 Preparation of the CentOS virtualization host system before duplicating for use in another system

2.9.2 Running the Clonezilla Duplication or Backup

3 Shutdown of the CentOS Virtualization Server

4 Configure and Control APC SmartUPS During a Power Failure

4.1 Multiple server hosts sharing the same UPS

4.1.1 Avoid thrashing on restore of power

4.1.2 Protect the LAN switch power

4.1.3 Consider a small dedicated network switch for power management

5 Logical Volume Manager notes on usage

6 yum and rpm notes

 

 

1 CentOS Virtualization Host Setup

1.1 Time Required

The process of setting up the virtualization environment can take anywhere from about 45 minutes to an hour or so, assuming the reader has sufficient experience with Linux.

1.2 Network Requirements

Network and Internet access is required in order to complete the overall process of setting up the virtualization host and required virtual machines.

1.3 Assumptions

This instruction is specific to a particular operational instance of the virtualization hypervisor. When using this document, do not re-use the specific hostname, IP address, or other host-specific settings, which could cause collisions within a common network.

 

It is assumed that the reader knows and understands the general environment being deployed into. The planned network address and related settings, the location of DNS servers, the location of the NTP time server, the desired host name, and user account name(s) should be determined beforehand.

 

2 CentOS Linux/KVM Virtualization Host

This is a server running CentOS (current) Minimal 64-bit with KVM/QEMU and virtualization management packages added to it.

http://www.centos.org/

The server hardware can be an enterprise grade blade, a standalone server, or a high-end general purpose desktop computer system. I have set up the hypervisor on rack mounted Dell 2950 and various Cisco UCS server systems. I have also set up the hypervisor on various AMD and Intel desktop systems. The overall system is extremely reliable, secure, and powerful. It is also very easy to manage from any of various possible client systems: Windows, Linux, or OS X. The system can also be used to provide an accelerated remote virtual desktop interface (VDI) experience for Windows or Linux guests.

The processor in the CentOS virtualization host system must be 64 bit and be also advanced enough to perform hardware based virtualization—as many systems today are. This means that virtualization extensions must be available and enabled in the processor. In some cases a BIOS setting may control this processor capability—i.e. it might have to be enabled through the BIOS.

http://virt-tools.org/learning/check-hardware-virt/

The CentOS Linux “minimal” server does not provide a desktop GUI—it is headless. The server does provide a remotely accessible graphical environment for management and access to the hypervisor and virtualized resources. The graphical virtualization management environment, provided by “virt-manager”, can be accessed via an external client desktop system management host running any of Windows, OSX, or desktop Linux systems.

Management of the virtualization hypervisor is accomplished via individual user accounts established and defined at the server. There is no need to login to the server as root to manage the hypervisor based resources.

2.1 Instructions for Installing the Virtualization Host software

2.1.1 Download Installation Media Data

First step is to download a copy of CentOS “minimal” 64-bit .iso installation image from one of the many official public mirrors.

For downloading CentOS 6

http://isoredirect.centos.org/centos/6/isos/x86_64/

For downloading CentOS 7

http://isoredirect.centos.org/centos/7/isos/x86_64/

2.1.2 Create the Bootable Installation Media

The system can be installed from CD/DVD or USB thumb drive.

Bootable CD/DVD

Burn the downloaded .iso file to CDROM/DVD media which will then be used as the installation media for the CentOS virtualization server.

http://www.centos.org/docs/5/html/CD_burning_howto.html

Bootable USB Thumb Drive

A bootable thumb drive can easily be created using the “dd” utility program. You must be careful to specify the correct output destination device (“of=”) because dd will overwrite any data at the destination device. The drive should be enumerated, but not mounted, during this operation. For example, assuming that /dev/sdd is the correct device identifier for the thumb drive:

# dd if=CentOS-7.0-1406-x86_64-Minimal.iso of=/dev/sdd
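
Before running dd, it is worth double-checking that the destination really is the thumb drive. A quick way to do this (a minimal check; /dev/sdd is just the example device used above) is to list the block devices and confirm the expected size and the absence of mount points:

# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

After dd completes, run sync to flush any buffered writes before removing the drive:

# sync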

Boot the installation media created above, CD/DVD or USB thumb drive, to begin the installation process.

2.1.3 Partitioning the Virtualization Host Storage device

While in the disk partitioning tool, change the partitioning storage volumes by adding a dedicated logical volume for virtualization storage. The installer's default storage arrangement commits most storage to “/home”. The problem with this is that KVM/QEMU images, by default, are created at “/var/lib/libvirt/images/”, which is not under the /home mount point. If you plan to have many virtual machine images and use the standard suggested partitioning, you will likely run out of space on the filesystem root partition “/” as you add more and more virtual machines. The main point here is that we want to provide a separate logical volume for virtualization storage apart from the “/” and “/home” filesystems.

CentOS 6: Make sure to affirm that you want to review the partitioning during the installation. This will give you the chance to make some adjustments in the sizes and volumes created during the CentOS installation process. We will need to make a few adjustments in the volumes and sizes in order to optimize the system storage allotted to the virtual machines. The installation process will eventually stop at the graphical partitioning tool.

Create a new logical volume at “/var”. You will first have to free up some space in the current logical volumes to make room for the new logical volume. Name the new logical volume something like “lv_var”. This is where the virtual machine storage files will reside. Most of the available space on the hard drive should be committed to this new volume. In addition to VM guest storage volumes, we will also use this area to store .iso images which can be mounted to VM CDROM for installing software.

Reduce the /home partition to 40GB or so and provide a reasonable amount of storage, perhaps 20 GB, for the root filesystem (/). Commit the remaining available storage to the new logical volume “lv_var”.

Allow the installation to continue and complete.

CentOS 7: Select Installation Destination from the main installation dialog.

Select (check) the desired installation target drive, select “I will configure partitioning”, and then select “Done” in the upper left.

Afterwards you will be in the Manual Partitioning tool. LVM should be selected by default. Highlight any existing partition on the target drive and choose the “-” (minus) button to remove that partition. In the confirmation dialog you can select a checkbox that reads “Delete all other file systems in the Unknown root as well.” Delete all of the partitions on the drive; when complete, no partitions will be shown in the lower left area of the Manual Partitioning tool.

If there happens to be a problem or failure with the installer getting the new partition scheme in place, it may be because of an existing foreign filesystem on the server (perhaps ESXi). In this case, just go through a default CentOS install and “reclaim” all space on the hard drive when prompted. Use the default CentOS partition scheme provided, unmodified. It should take only a couple of minutes to completely install CentOS minimal.

Then start an entirely new install; the installer will now see the previously installed Linux filesystem in place and can handle it properly in the partitioning tool without failure.

With LVM selected, click on the text item titled “Click here to create them automatically.” This provides a suggested partitioning which we will modify as detailed below.

Significantly reduce “/home” and root “/” to provide space for a new mount point, and commit most of the storage space to that new mount point. Reasonable sizes, following the CentOS 6 guidance above, are roughly 40 GB for /home and 20 GB for the root filesystem.

This is done by changing the desired size and selecting the apply button on the right.

Create a new mount point by choosing /var. It will automatically become logical volume “centos-var”, which is fine.

Set the size of /var to all remaining space. If you request more than the remaining space for /var, the amount will automatically be reduced to whatever actually remains. With all required partitions now in view, including the new /var, select “Done” in the upper left to return to the main installation dialog.

After partitioning and the initial minimal installation have completed, log in as root and verify that the expected amount of storage is available at the filesystem mount points. Basically, this means performing a quick review of the storage allocations on the host system.

# df -h

...output shows mount points and usage in human readable form.
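
The LVM reporting tools give a complementary view of the volume groups and logical volumes created during partitioning (see the LVM notes section later in this document):

# lvs

# vgs

...output lists the logical volumes and volume groups with their sizes and free space.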

2.2 Additional Configuration and Setup of the CentOS Virtualization Host

In this section we log into the CentOS system for the first time and perform system configuration tasks.

Editing Files in the CentOS minimal command environment

Files can be edited in the CentOS minimal command environment using either “nano” or “vi”. Later on, after the base group has been installed, “vim” will be available and provides additional editing, coloration, and help features.

http://www.nano-editor.org/dist/v2.2/nano.html

http://www.vim.org/docs.php

http://vimdoc.sourceforge.net/

Using a display and keyboard attached to the virtualization host: Log into the server command environment as “root”. This will take you to a root command prompt.

2.2.1 Network Adapter Setup (CentOS 7)

CentOS 7 uses systemd and has changed network interface names from the typical eth? pattern to a new en? pattern. Check how they are enumerated by inspecting the network-scripts directory; the ifcfg-? scripts indicate the particular network interfaces that have been enumerated.

# ls /etc/sysconfig/network-scripts/

Try to bring up an interface that has a physical network connection to the desired adapter. For example, assuming network device “enp3s0f0” was enumerated on the host:

ifup enp3s0f0

Use the Network Adapter Setup instruction below, noting the difference in network interface name above.

2.2.2 Network Adapter Setup

CentOS installation processing will auto-configure all available network adapters for DHCP by default, but does not bring up any interfaces. You can look in /etc/sysconfig/network-scripts to see which adapters are configured. For eth0 you will see a script named ifcfg-eth0; for eth1, a script named ifcfg-eth1. The ifup command can be used to bring a network adapter online. Examples follow; substitute actual network interface adapters as necessary.

CentOS 6:

ifup eth0

CentOS 7:

# ifup enp3s0f0

Make sure that the physical LAN connection to the server is made at the server LAN port that will be used for managing the server. One single connection is all that is required at this point. On some server systems we have a multitude of network adapters. To determine which adapter name to use, first plug in the physical connection, then try bringing each available interface up until you get a connection established with an address assigned to that interface. This assumes the network you are connected to has a DHCP service available. For example:

# ifup eth2

 ...failed

# ifup eth3

 ...failed

# ifup eth4

 ...success

CentOS 6: Network adapters are typically enumerated beginning with “eth”, for example:

eth0, eth1, eth2, eth3, etc.

CentOS 7: Network adapters on CentOS 7 are enumerated using a different convention, beginning with “enp”. For example:

enp3s0f0

Check the network status to determine if we have obtained an IP address from a remote DHCP server.

ip addr

To make the network default to always start at boot you will need to edit the appropriate “ifcfg-?” network adapter config script located in /etc/sysconfig/network-scripts/. The ONBOOT entry in the script must be changed to “yes”:

ONBOOT=yes
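
If you prefer to script the change, a one-line edit with sed can be used (a sketch assuming the interface is eth0 and the script currently reads ONBOOT=no):

# sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0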

If the virtualization server host will be connected to the LAN using a static IP, then a specific configuration is required. Either the ncurses or the manual method can be used; the manual method is recommended.

Ncurses Interactive method: This will bring up a textual user interface wizard:

system-config-network

Manual method: Configuration for static ip can be accomplished using the following example as a guide. Substitute individual settings below as appropriate for the target system network environment. Any 10.9.x.x entries in the example below will need to be adjusted as necessary for the LAN environment. Do not change the UUID or HWADDR entries provided in your target system to the values below.

CentOS 6: Configure eth4

vi /etc/sysconfig/network-scripts/ifcfg-eth4

DEVICE=eth4

HWADDR=30:E4:DB:C2:95:AA

TYPE=Ethernet

UUID=e679b570-3fc1-4e50-8e3c-f48977e823a0

ONBOOT=yes

DEFROUTE=yes

PEERDNS=no

DNS1=10.9.77.74

DNS2=10.9.77.75

BOOTPROTO=none

IPADDR=10.9.217.104

NETMASK=255.255.255.0

GATEWAY=10.9.217.1

 

CentOS 7: Configure enp3s0f0

vi /etc/sysconfig/network-scripts/ifcfg-enp3s0f0

HWADDR=A4:4C:11:29:AB:DA

TYPE=Ethernet

BOOTPROTO=none

DEFROUTE=yes

PEERDNS=no

DNS1=10.9.177.74

DNS2=10.9.177.75

PEERROUTES=no

IPV4_FAILURE_FATAL=no

IPV6INIT=yes

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_PEERDNS=yes

IPV6_PEERROUTES=yes

IPV6_FAILURE_FATAL=no

NAME=enp3s0f0

UUID=c47725de-33b9-4ef9-a66f-e7ff500b9a6a

ONBOOT=yes

IPADDR=10.9.217.104

NETMASK=255.255.255.0

GATEWAY=10.9.217.1

Note that “PEERDNS=no” prevents any DHCP server from overwriting resolv.conf. The DNS server entries above are added to resolv.conf when the interface is activated and removed when the interface is deactivated.

Adjust system hostname:

CentOS 7: We can adjust the hostname using the hostnamectl command. For example, to show the hostname and other host-specific details:

hostnamectl status

To set hostname:

hostnamectl set-hostname et-virt104.mycorp.com

Simply log out, then log in, to see the hostname change at the command line.

CentOS 6: We can adjust the hostname via an edit of /etc/sysconfig/network

# vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=et-virt104.mycorp.com

Note: requires a network restart, logout, and login to complete systemwide change.

Add the hostname identifier to the tail end of the /etc/hosts file. Do not remove or change any entries that are already defined. This allows any hostname-directed commands which require resolution to an IP address, such as libvirtd-related operations, to be issued to the proper host name address. In this case we specify the loopback address as the resolver for both the short and long forms of the local hostname.

vi /etc/hosts

[...]

127.0.0.1 et-virt104 et-virt104.mycorp.com

Restart the networking to confirm and use the new settings.

CentOS 6: Restart the network interface

/etc/init.d/network restart

CentOS 7: Restart the network interface

# systemctl restart network.service

At this point we should have one single network adapter connected and functional.

Additional steps below can now be accomplished either locally or remotely from a management host connected to the CentOS virtualization server. See the latter part of this document for instructions on connecting various OS management hosts to the CentOS virtualization server.

The network DNS server can be made to provide remote name resolution for the new CentOS server. This is optional; we can use the raw IP address for direct access to the server instead of the server DNS name.

The only account defined at the server at this point is root, which can be used for the remainder of the system bootstrap process. Simply use ssh from a management host as follows:

ssh root@et-virt104.mycorp.com

or via direct IP address...

ssh root@10.9.217.104

2.2.3 Proxy Setup

Setting up the CentOS virtualization host requires an uninhibited route to the Internet. If you need proxy settings to reach the Internet from the local CentOS virtualization host, there are several ways to accomplish this. One of the best ways on CentOS is to add an independent custom.sh script into the filesystem at “/etc/profile.d/custom.sh”. This custom script contains the required environment settings for the proxy.

Below is an example of how to format the file contents in “custom.sh” to specifically use a known corporate proxy. Remove leading spaces and adjust the specific values assigned as necessary for use in the target network environment.

export http_proxy=http://myproxy.net:8080

export HTTP_PROXY=http://myproxy.net:8080

export https_proxy=http://myproxy.net:8080

export HTTPS_PROXY=http://myproxy.net:8080

export ftp_proxy=http://myproxy.net:8080

export FTP_PROXY=http://myproxy.net:8080

export no_proxy="localhost,127.0.0.1,10.0.0.0/8,.mycorp.dev,mycorp.com"

export NO_PROXY="localhost,127.0.0.1,10.0.0.0/8,.mycorp.dev,.mycorp.com"

You will need to log out and log in for these proxy settings to become active. Use the “env” command to dump all of the environment settings to the display. This will allow you to see what the proxy environment settings are currently set to:

env

An easier, and more useful, approach is to pipe the output of “env” to the file viewer program called “less”:

env | less

The less file viewer is a great way to inspect text files on a Linux system. The less viewer is very quick and powerful for viewing and/or searching for data in a file. Use “q” to exit less.
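
Note that scripts under /etc/profile.d are read only by login shells. If yum needs the proxy in other contexts as well, the proxy can also be set directly in yum's own configuration (the proxy URL below is the same placeholder used above):

# vi /etc/yum.conf

proxy=http://myproxy.net:8080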

2.2.4 Network Time synchronization

Time needs to be accurate in the hypervisor. VM guests use the virtualized clock in SeaBIOS, which is based on the hypervisor system time. To this end we need to make sure ntpd is configured and running at the virtualization host. Ntpd maintains system time based on access to network-based time services.

If no time server is reachable we can rely on the host hardware clock; although this is not an ideal option, it will function. Without a good reference, PC and server clocks tend to drift and will occasionally require setting to the proper time. Skip to the next section for details on using the hardware clock in the system as a reference.

CentOS 7: The ntpd daemon is not included in the CentOS 7 base install. It needs to be added to the system:

# yum -y install ntp

Edit /etc/ntp.conf if you need to use a specific external NTP service. The defaults may be fine if you can reach those servers.

# vi /etc/ntp.conf

# systemctl status ntpd.service

# systemctl enable ntpd.service

# systemctl start ntpd.service

Check to make sure ntp service is running.

CentOS 6:

# service ntpd status

CentOS 7:

# systemctl status ntpd.service

Check for successful access and response time to configured time servers.

# ntpq -p

The output of ntpq should provide specifics about the configured time server(s) contacted through ntpd. If the configured time reference servers cannot be reached, then other reference servers may need to be specified in the ntp configuration.

Adjust servers as necessary to point to a local server when required. The main point here is that we can reach a time server to help maintain accurate time in the hypervisor.

# vim /etc/ntp.conf
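
For example, the public pool entries in ntp.conf can be replaced with, or supplemented by, a local enterprise time server (timeserver.mycorp.com is a placeholder, matching the example used below):

server timeserver.mycorp.com iburst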

Set the system time to network time based on a reference server of choice. This requires the ntpd time service daemon to be stopped, because the daemon reserves the NTP port that ntpdate requires. The time server “timeserver.mycorp.com” below is an example of using a local enterprise time server as opposed to a public Internet time server; replace this entry with the appropriate server in the target environment, public or private (FQDN or IP address).

CentOS 6: Set the system time using service calls to control the ntpd service as follows.

# service ntpd stop

# ntpdate timeserver.mycorp.com

# service ntpd start

CentOS 7: Set the system time using systemctl calls to control the ntpd service as follows.

systemctl stop ntpd.service

# ntpdate timeserver.mycorp.com

systemctl start ntpd.service

If configuration changes were made to ntp.conf, check again for successful access and response time to configured time servers.

# ntpq -p

Sync the hypervisor hardware clock to match the now-correct system time. This sets the hardware clock to the current system date and time. It is not mandatory, but convenient, to have the hardware clock set precisely to the proper time.

hwclock --systohc

Finally, we can just do a simple check on the time. The output should be very accurate at this point—being that it is based on a reliable time reference.

date

2.2.5 Virtualization Host Clock Setup

If you were not able to use a time server as outlined in the previous section, manually set the system date, time, and zone. This would be done in an environment where the hypervisor host is used as the sole time reference. Substitute the current date, time, and zone below.

date --set="2014-05-02 18:38:42 EST"

Sync the system date and time to the hardware clock. This sets the host hardware clock chip to match the current system date and time. It is not mandatory, but a convenience, to have the hardware clock set precisely.

# hwclock --systohc

2.2.6 Update System to Latest Software and Add Required Packages

With the network interface up and running (eth4 in the earlier example), time set, and a clear route to the Internet, we will use “yum” to update the system software to the latest available packages.

yum update

Now we will add some useful software programs to the basic CentOS minimal system. This will create a generally more usable system and also provide the required host KVM/QEMU virtualization packages. The “base” group installs various packages that provide extra utilities and programs useful for management of the system.

yum -y groupinstall base

# yum -y install nmap

# yum -y install genisoimage

CentOS 7: Install X11 auth, fonts, and utils to enable remote display of graphical applications, using the X11 forwarding built into ssh. This will allow remote X11 connections for graphical programs, such as wireshark.

# yum -y install xorg-x11-xauth

# yum -y install xorg-x11-fonts-*

# yum -y install xorg-x11-utils

Wireshark is a great tool for network analysis. We will add it and configure permissions with setcap such that anyone in the wireshark group can run wireshark and capture from interfaces in unprivileged mode.

# yum -y install wireshark-gnome

setcap cap_net_raw,cap_net_admin=eip /usr/sbin/dumpcap
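
To confirm the capability was applied to dumpcap, query it with getcap; the output should look similar to the following:

# getcap /usr/sbin/dumpcap

/usr/sbin/dumpcap = cap_net_admin,cap_net_raw+eip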

Install the virtualization components, including qemu, KVM hypervisor, and various other packages using the following:

CentOS 6:

yum -y groupinstall "Virtualization"

yum -y groupinstall "Virtualization Platform"

CentOS 7:

yum -y groupinstall "Virtualization Host"

# yum -y install virt-manager

# yum -y install virt-install

CentOS 6: Install virt-manager on the virtualization host to allow remote management over an ssh connection for graphical virtualization management.

(FIXME: Instead of the following two “confirmed” groupinstall items, perhaps an improvement would be to install “virt-manager” and “virt-install” at this point, along with the resulting required dependencies as identified by yum.)

# yum -y groupinstall "Virtualization Client"

yum -y groupinstall "Virtualization Tools"

(FIXME: Instead of the following three “confirmed” installs, perhaps an improvement may be to use the xorg-x11-* items detailed earlier in this document for CentOS 7.)

yum -y install xorg-x11-xauth

yum -y install tigervnc

# yum -y install dejavu-lgc-sans-fonts

Use the policy kit (polkit) to allow local users in the qemu group to access the libvirtd backend.

# vim /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla

[libvirt Management Access]

Identity=unix-group:qemu

Action=org.libvirt.unix.manage

ResultAny=yes

ResultInactive=yes

ResultActive=yes

Restart the libvirt service

CentOS 6:

# service libvirtd restart

CentOS 7:

systemctl restart libvirtd.service

To check the status of the libvirtd service...

systemctl status libvirtd.service

Create the directory that will contain the .iso images needed for installing guest operating systems into the virtual machines planned for the environment. Furthermore, allow user members of the qemu group to write and execute from this directory. This directory holds iso images for CD-based installation into VM guests; qemu needs access to this area when connecting iso images to virtual CD drives.

# mkdir /var/lib/libvirt/images/iso-images

# chgrp qemu /var/lib/libvirt/images/iso-images

# chmod 775 /var/lib/libvirt/images/iso-images

Set the default group for all new objects created under iso-images to match the group assigned to that directory, the qemu group in this case.

# chmod g+s /var/lib/libvirt/images/iso-images
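
A directory listing confirms the group ownership, the group write permission, and the setgid bit (shown as “s” in the group execute position); the owner and date fields will differ:

# ls -ld /var/lib/libvirt/images/iso-images

drwxrwsr-x. 2 root qemu 4096 May  2 18:40 /var/lib/libvirt/images/iso-images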

For example, this directory would hold guest OS install images in .iso file format. Local qemu group users are able to write (or remotely scp) files into this directory. Files in this directory are also accessible to the libvirtd backend through use of virt-manager. Once CDROM-attached to a VM, an ISO file becomes locked and owned by the hypervisor layer “libvirtd”.

2.2.7 Firewall setup

Firewall adjustments can be made at the virtualization host. Make sure the firewall accommodates the hosts and ports needed to allow other systems to operate with the CentOS virtualization system.

CentOS 6:

Use the following as root.

system-config-firewall-tui

Use the tab key to navigate the tui and the Space bar to select. Un-check the “enable” box and select “OK”. This will disable the firewall for lab use.

CentOS 7:

Use the following, as root, to confirm that the firewalld service is running.

# firewall-cmd --state

If a GUI is preferred then...

# yum -y install firewall-config

# firewall-config

The firewall will need to be adjusted to permit remote access to the virtual machine console.

Changes to Services settings can be made only in the “Permanent” configuration view. Those changes become available in the “Runtime” configuration view once the system is restarted (or the “firewalld” daemon is manually restarted).

Access to firewall-config for adjustments requires root access and a graphical display. This can be accomplished from a login session at an external client host, which will present the X11-forwarded graphical interface for firewall-config.

The default zone for enumerated network interfaces is “public”. Interfaces can be changed as desired, for DMZ or other zones in the firewall-config GUI.

Assuming the default, the “vnc-server” service must be selected in the “public” zone to provide remote viewer client access via VNC or Spice.

You can broaden the vnc-server port range as required for however many virtual machine consoles you are planning to accommodate. The edit must be done in the “Permanent” configuration view.
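
The equivalent adjustment can also be made from the command line with firewall-cmd (a sketch assuming the default public zone and a 5900-5999 console port range):

# firewall-cmd --permanent --zone=public --add-port=5900-5999/tcp

# firewall-cmd --reload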

The pertinent service, vnc-server, is edited in the “Permanent” configuration view and is immediately activated.


2.2.8 Adding SPICE service to the CentOS virtualization Server

Desktop access in CentOS is provided using VNC by default. This section provides an alternative to VNC called Spice. Spice provides many features beyond a typical VNC connection, such as lower network bandwidth through compression and accelerated video at the client end.

http://www.spice-space.org/

http://wiki.centos.org/HowTos/Spice-libvirt

SPICE provides an accelerated and optimized protocol for Virtual Desktop Interface (VDI).

CentOS 6: Add SPICE packages to the CentOS virtualization host.

(We are not sure if we need spice-client or not. But it will not hurt to have it.)

# yum -y install spice-client

# yum -y install spice-protocol

CentOS 7: Add the SPICE protocol package to the CentOS virtualization host.

# yum -y install spice-protocol

Allow the spice service to be provided on all interfaces of the CentOS virtualization host. Edit qemu.conf to set the spice_listen and vnc_listen settings to 0.0.0.0 (all public interfaces).

# vim /etc/libvirt/qemu.conf

[...]

spice_listen = "0.0.0.0"

[...]

vnc_listen = "0.0.0.0"

[...]
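
Settings in qemu.conf are read when guests are started, so restart the libvirtd service (and any running VMs) for the new listen addresses to take effect:

CentOS 6:

# service libvirtd restart

CentOS 7:

# systemctl restart libvirtd.service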

2.3 Setting up User Accounts and Access to the CentOS Virtualization Host

2.3.1 Create user accounts for each user that will be managing the hypervisor

Create a new user account. This account will be one of possibly several used for managing virtualization resources at the virtualization host. The “userid” below will be the actual login id chosen for access to the hypervisor.

# adduser userid

# passwd userid

The new user should change their password after logging in the first time.

Append (-a) the qemu group membership into the new account, granting it permission to manage the hypervisor and access the iso-images area via the qemu group.

# usermod -a -G qemu userid

Optionally, grant the account rights to run wireshark to monitor and analyze network packets. This will allow the user to record, inspect, and analyze all network traffic at the virtualization host.

usermod -a -G wireshark userid

Add additional users as required, using the instructions above.

2.3.2 User administrative access “root”

CentOS 6: To use the “wheel” group as instructed below, you will need to uncomment the wheel group definition in sudoers file. This is not required in CentOS 7, where the wheel group is defined in the default configuration.

# visudo

...Remove the comment character # at the beginning of the %wheel group definition (the one requiring passwords to be used).

Direct login to the root account is not mandatory for normal VM management. If a user needs to manage the underlying CentOS hypervisor host system, they will need to be part of the “wheel” group. Grant this only to user accounts which need to manage the virtualization host system. Such users will be prompted for their password when performing restricted commands via sudo. The user will have to log in again after this change is made for it to take effect.

usermod -a -G wheel userid

This will permit them to run otherwise restricted commands using sudo. For example: “sudo yum update” to perform a system software update.

2.4 Using virt-manager to Configure some Common pre-configured Hypervisor Network Backings (optional items, not mandatory)

Here we will define and create some possible network backings for use with VM guests. These operations are done using graphical virt-manager accessed from a remote management host as described in the latter part of this document. They illustrate some of the flexible virtualized networking backings available at the hypervisor. The routed and isolated network backings are optional additions to the normal “default” and Macvtap directed backings provided in the default installation.

“default”

This is the VM default network backing. It is provided in the default installation of the libvirtd components. It has an internal DHCP server and will auto-configure network hosts into the 192.168.122.x internal network. VM guest IP messages are routed via NAT to the external network.

“*.macvtap”

There are a number of default-provided backings which use the Linux macvtap networking driver. These backings provide various network routes to devices and bridges within the hypervisor. Some examples include direct routes to the host adapters and the loopback device. For a specific macvtap backing example, consider “Host device eth0: macvtap”. This backing can be used to drop a virtual NIC tap directly at the hypervisor eth0 interface. In addition, macvtap provides a source mode offering VEPA, Bridge, Private, or Passthrough connections, allowing for various network-layer independent security and/or sharing needs.

“virt_routed”

Routed directly to any physical port on the outbound network without any translation. Hosts using this backing must cooperate with network requirements on the outer network, and the outer network must also cooperate with this inner network. In other words, outside routers must know this network exists and route external packets properly towards it, and outer networks must not treat packets from this inner network as alien. We will use virt-manager to build this backing.

“virt_isolated”

Routed internally within the virtual network. It has an internal DHCP server and will auto-configure network hosts into the 192.168.110.x internal network. This allows VM guests to talk only with other VM guests, but not to outside hosts. We will use virt-manager to build this backing.

Use a remote management host to access virt-manager using one of the qemu group management accounts. Follow the instructions in the next subsections to create the network backings; an equivalent command-line approach using virsh is sketched after the first walkthrough.

2.4.1 Create a “virt_routed” routed network backing

virt-manager → Edit → Connection Details → Virtual Networks (tab)

Add (plus sign)

Forward

Network name: virt_routed

Forward

Network: 192.168.100.0/24

Enable DHCP: unchecked (disabled)

Forwarding to physical network

Any physical device

Routed (not NAT)

Review settings presented

Finish
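
The same routed backing can also be created from the command line with virsh instead of the virt-manager wizard (a sketch; the gateway address 192.168.100.1 is an assumption for the 192.168.100.0/24 network above). First place the definition in a file such as virt_routed.xml:

<network>
  <name>virt_routed</name>
  <forward mode='route'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'/>
</network>

Then define, autostart, and start the network:

# virsh net-define virt_routed.xml

# virsh net-autostart virt_routed

# virsh net-start virt_routed

An isolated network such as the one in the next subsection is defined the same way, simply omitting the <forward> element.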

2.4.2 Create a “virt_isolated” internal private network backing

virt-manager → Edit → Connection Details → Virtual Networks (tab)

Add (plus sign)

Forward

Network name: virt_isolated

Forward

Network: 192.168.110.0/24

Enable DHCP: checked (enabled)

Isolated virtual network

Review settings presented

Finish

All three network backings should now be visible in the Virtual Networks tab in Connection Details of virt-manager.

Although not mandatory, we can perform a system restart now to help validate that we can successfully boot the system with all of our modifications at this point:

shutdown -r now

This completes the installation and setup of the CentOS virtualization host. Virt-manager, which is already installed on the virtualization server at this point, presents a secure graphical management interface which we can use to manage the virtual machines and associated resources. Virt-manager can also be used to gain access to the virtual machine's guest console or desktop. For examples of the virt-manager user interface see the screenshots at the following link.

http://virt-manager.org/screenshots/

2.5 Managing the CentOS Virtualization Host

To gain access to manage the virtualization environment, a Secure Shell (ssh) client with X11 forwarding capability is used. Secure Shell (ssh) clients are available for all major OS types. Once logged into the virtualization host you can run the graphical virtualization management application called “virt-manager”.

The CentOS virtualization host can be managed from a separate local or distant remote system connected through the outer network interface. A dedicated physical network can be set up for management. The network configuration is extremely flexible. As an example, on a Mac OS X desktop, virt-manager can be launched from the command shell of an ssh -X login to the virtualization host and used to open a guest console (such as a CentOS-Elastix-PBX guest). A Windows or Linux desktop host could be used in the same way.

2.6 Accessing virt-manager from a Remote OS management client on various operating systems

The following sections outline the process for setting up and gaining access to the virtualization host on various operating systems. In each case the “username” is assumed to be a local user at the virtualization host which is also a member of the qemu group.

2.6.1 Management from Windows (XP, 7, 8, 8.1)

In the Windows environment there is no local X11 server to display programs that use X11 widgets for their user interface, such as virt-manager. Xming is used in the Windows environment to provide a local X11 server which can display the output of an X11-based program. Linux desktop and OS X desktop environments normally provide this capability by default.

http://aruljohn.com/info/x11forwarding/

Install PuTTY

http://the.earth.li/~sgtatham/putty/latest/x86/putty.exe

Install xming

http://sourceforge.net/projects/xming/

Start xlaunch

options: select “Multiple windows”, and start no client program (just the X server)

Start PuTTY

Session: Host Name (or IP address)

Port: 22

Connection – SSH – X11

Enable X11 forwarding

X display location: localhost:0

Save the session under a specific name to allow you to re-use it.

Log in with the local username account credentials, then enter the following command:

virt-manager &

This will get you into virt-manager allowing you to manage the hypervisor at the virtualization host.

If you need to transfer files to or from the virtualization server use the secure file transfer utility provided by PuTTY.

pscp user1@host1:filespec user2@host2:filespec

2.6.2 Management from Desktop Linux such as Ubuntu, Mint, Fedora, CentOS

Open terminal session, then enter the following:

ssh -X username@virtserver

virt-manager &

This will get you into virt-manager allowing you to manage the hypervisor at the virtualization host.

If you need to transfer files to or from the virtualization server use the secure file transfer utility.

scp user1@host1:filespec user2@host2:filespec

For details on scp see the manual pages:

man scp

2.6.3 Management from OS X

Open terminal session

ssh -X username@virtserver

virt-manager &

This will get you into virt-manager allowing you to manage the hypervisor at the virtualization host.

If you need to transfer files to or from the virtualization server use the secure file transfer utility.

scp user1@host1:filespec user2@host2:filespec

For details on scp see the manual pages:

man scp

2.7 Creating Virtual Machines and Installing Guest OS in the CentOS Virtualization Server's Hypervisor

This section provides some useful information and concepts concerning creation of virtual machines in the CentOS Linux KVM hypervisor.

Key Mapping for OS X management hosts

CentOS 6: When creating virtual machines with virt-manager from a remote management system, set the “Keymap” under “Display VNC” to English US as follows:

Keymap: “en-us”

This prevents a key mapping issue when accessing the virtualized guest OS from an OS X management host. Without this explicit mapping, the OS X keyboard keys will not map correctly when accessing the VM guest.

CentOS 7: The specific “en-us” setting is not required. Using the default setting “Auto” works properly with OS X. (confirmed with Yosemite).

Fluid Mouse Control at the management host

To integrate simple mouse movement and mouse use between the management host and VM guest, be sure that an Input device “EvTouch USB Graphics Tablet” is included in the VM details. This provides fluid, simple mouse integration between the internal VM guest (console/desktop) and the remote management host desktop.

Proper Shutdown Signaling to a Linux VM Guest OS

When creating CentOS Minimal VM guests, make sure that acpid is installed within the guest OS. Acpid provides a daemon that listens for commands from the hypervisor during shutdown and restart sequences. This allows virt-manager and other libvirt virtualization management tools to gracefully shut down a Linux-based VM guest. To install and configure the acpid daemon in the Linux guest environment, use the following commands:

yum -y install acpid

service acpid start

chkconfig acpid on
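
For a CentOS 7 guest, the systemd equivalents would be (assuming the same acpid package name):

# yum -y install acpid

# systemctl enable acpid.service

# systemctl start acpid.service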

Linux guests created with Ubuntu distributions normally have acpid pre-installed as part of the base installation.

Default Route with Dual Network VM guests

In most cases a single network connection to a VM guest is all that is needed. But in some more complicated configurations a dual network or multi-headed network guest is needed. If your VM guests are using a single network connection then ignore this section.

Dual-network-head VM guests may not respond to pings or packet routes from an internal network while also connected to the Internet on the “default” network. This is because, on some systems, the default gateway gets toggled depending on whichever interface comes up last.

It is possible to resolve this issue—but beyond our scope or needs for this system. To provide support for both networks simultaneously we could add a routing policy table as suggested here: http://kindlund.wordpress.com/2007/11/19/configuring-multiple-default-routes-in-linux/

VM Startup when Virtualization Host starts

To autostart the VM during boot and startup of virtualization host, make sure to set the start option in the VM details view accordingly as follows:

virt-manager →  VM Details → Boot Options → “Start virtual machine on host boot up” (option should be checked)

2.7.1 VM Guest QXL video and virtualized (virtio) drivers for improved VM guest support

Typically, default emulated hardware is provided when setting up VM guests under virt-manager. This works absolutely fine in most cases and is the most straightforward and simple to setup. These defaults present a set of simulated (emulated) industry standard common hardware components to the guest (drive controller, network adapter, video hardware). Using emulated hardware tends to make common OS guests easy to install because they normally come pre-packaged with necessary drivers.

Optionally, greater performance is possible through use of virtio virtualized hardware components and QXL video. These components are written with the purpose of providing the best performance in the KVM virtualized environment. Setup of these components is slightly more complex.

When virtio and QXL video components are used, the corresponding drivers need to be present in the guest OS. That is, the virtualized hardware needs specific drivers in the guest OS to function properly.

Linux: Virtio drivers are available as simple package installs for most modern Linux distributions. In most cases virtio modules are already part of the kernel build.

Windows: Windows virtio drivers can be downloaded and installed into the Windows guest. The following links point to the latest stable builds for windows virtio drivers:

http://alt.fedoraproject.org/pub/alt/virtio-win/stable/

http://alt.fedoraproject.org/pub/alt/virtio-win/stable/virtio-win-0.1-74.iso

 

Windows virtio drivers can also be built from source:

http://www.spice-space.org/page/WinQXL

2.7.2 Creating mountable iso images from the host file system

For example, the command below creates a windows-installers.iso file in the current directory, built from all files located at and below the windows-installers directory. We end up with a mountable .iso file that appears to the guest as a mounted CDROM and contains the installers.

mkisofs -J -r -o ./windows-installers.iso ./windows-installers/
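
To verify the image contents before attaching it to a VM, the .iso file can be loop-mounted on the host and inspected (unmount it before attaching it to a guest):

# mount -o loop -t iso9660 ./windows-installers.iso /mnt

# ls /mnt

# umount /mnt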

2.7.3 Virtual Desktop Interface (VDI) with KVM Spice

If Spice support has been added per instructions above (Section 2), then it will be available for use with the virtual machines created within virtualization host. Some useful links follow:

http://people.freedesktop.org/~teuf/spice-doc/html/

https://elmarco.fedorapeople.org/manual.html

When building a new virtual machine, for the Display use a “spice server” connection instead of VNC. Make sure to enable service on all public connections instead of localhost only if you plan to access the VM from a remote host. When creating the Display device with Spice specified in virt-manager, it will force you to also enable a TLS port. To avoid having to use a TLS port value, first set up the Display device as VNC, then immediately afterwards change the protocol in the new Display device from “VNC” to “Spice”.

http://wiki.centos.org/HowTos/Spice-libvirt

Scan for open ports when debugging Spice/VNC. Make sure to either disable the firewall or specifically permit the assigned Spice ports. Spice ports are auto-assigned in the 5900 range, typically dynamically, as available, when the VM is started. Keep in mind that a local port listing does not indicate which ports are actually reachable by external hosts; the firewall may be blocking access to anything listed. Use nmap to show the open ports from a localhost perspective, which suggests the specific ports that may need to be permitted through the firewall.

# nmap -sT localhost

# nmap -sT my-virt-host-ip

The virtualization host can then be scanned from an external host using nmap. This confirms which virtualization host service ports are reachable from the specific external host where the nmap scan is run.

# nmap -Pn my-virt-host-ip

If the required port(s) are not available from the external host viewpoint, the firewall at the virtualization host may be blocking access.

Transport Layer Security (TLS):
TLS can be used with Spice. We do not have specific instructions for Spice TLS setup in this document. See /etc/libvirt/qemu.conf for more details on Spice with TLS.

2.7.3.1 Fix a given static port value to a VM Display console:

Spice and/or VNC service ports are normally auto-assigned at VM startup, starting with 5900. This prevents conflicts, but does not provide a consistent fixed URL for accessing the VM from a remote client. To configure a fixed URL we can assign a dedicated port value to a VM console by removing the default Display device in the VM and re-adding it with the connection specifics we need.

Spice without TLS: For example, we can add a new Display device as a VNC device, available on all interfaces, with fixed port 5999 at the hypervisor. Once we have created the VNC Display device, we can then change the protocol on the new Display device from VNC to Spice to enable the Spice protocol for the VM console display (without TLS).
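
The fixed port can also be set directly in the VM definition using virsh, which edits the same settings the virt-manager UI exposes (a sketch; vm-name is a placeholder, and the graphics element shown assumes Spice without TLS):

# virsh edit vm-name

[...]

<graphics type='spice' port='5999' autoport='no' listen='0.0.0.0'/>

[...]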

Spice channels: When adding a Display with the Spice protocol, some Spice channel devices are automatically inserted into the VM. This includes QXL video and VirtIO devices. You can keep these devices when switching the Display device to VNC if the guest OS can handle them.

For example, Windows Server 2012 R2 can accommodate the Spice QXL virtualized video adapter providing native driver support for high resolution choices for console video display in the guest—as opposed to typical VGA resolution limitations with other virtualized adapters. This can be used when providing VNC or Spice console access to the Windows Server 2012 VM guest.

Later versions of guest operating systems, in general, tend to have native support for the enhanced QXL virtualized adapter. This includes later and the latest versions of desktop Windows and desktop Linux, such as Linux Mint.

Spice with TLS: If we want to use TLS, we would add the new Display device choosing “Spice” display and provide the desired TLS port value in the UI. Use of TLS requires a certificate of authority and certificates be used between hosts. Details for setting up TLS are not included in this document.

Avoiding port conflicts between VM console access points: When mixing auto and static port assignments in the same hypervisor, a good approach might be to assign static, or fixed, port values from 5999 downward and allow the auto assigned ports to increment as they do from 5900 upwards. This avoids collisions and also provides console port value resources for up to 100 virtual machines. Make sure to accommodate whatever range you plan to use in the firewall configuration.

2.7.4 External User Client access to the VM console

Using a specialized client to access a VM is the suggested approach for typical user access. There is no need to provide virtualization management access to anyone who only needs to get to a guest desktop. Console-level VM access can be provided to anyone using a Spice/VNC client. The accessing user does not need any management privileges at the hypervisor, and does not need an account there. Access can be provided via a simple URL and use of a digest password.

Remote client access to the Spice-enabled guest can be provided by a number of different approaches with clients available for all major operating systems today.

Linux: Currently available Spice clients include remote-viewer, virt-viewer, spicec, spicy, and some others.

Windows 32/64-bit: A well-known Spice client for Windows is “virt-viewer”, available on the virtual machine manager download website.

http://virt-manager.org/download/

OS X: There is a Spice client called “Vinagre” available for OS X. The Spice support in this client is under development and is currently very slow, so Spice is not recommended with it. Using VNC instead is recommended and very reasonable. OS X also has a very capable built-in VNC client which works great with this solution, called “Screen Sharing”. The Safari web browser can also act as a front end for VNC connections.

http://www.davidtheexpert.com/post.php?id=5

2.7.5 Windows 7 guests with Spice Channels

To take full advantage of “Spice channels” we need proper “virtio” hardware devices defined in the VM and the proper drivers and software components in the guest OS. Here is a brief outline of how to accomplish this.

In the VM

- Devices for the hard drive controller, network adapter, video, and serial adapter should all be set to genuine “virtio” devices instead of the typical hardware-emulated devices.

- Make sure also that the device “Controller Virtio Serial” is included in the VM. This device is used to communicate with the Spice service and agent running in the guest OS. It provides auto-resizing of the desktop to fit the client and copy-paste support for the client accessing the desktop.

- CentOS 6: Remove the Tablet device from the VM. This emulated device is not needed when Spice is being used. In some cases it might cause problems with mouse control when in VDI—for example, having a missing mouse cursor in the VDI.

In the virt-manager VM view, the Spice channels and virtio devices then appear in the device list available to the guest.


The Windows installer will need the proper virtio storage driver added during the Windows CD boot installation process. This allows the Windows installation to access and manage the virtio hard drive.

1. This can be accomplished by adding a CD which contains the virtio drivers to the VM. This is done by optionally modifying VM settings before starting the VM for the first time.

2. Once the Windows install CD first boots, you should have two CDs attached: one with the Windows installer CD .iso and the other with the virtio CD .iso file.

3. When the Windows CD install boots, it will eventually pause on a screen that allows you to “add drivers”. Select this option and point to the respective directory on the virtio CD that contains the drivers. Once this is done, the Windows installer will present the specific drivers that match hardware in the VM. Select only the SCSI virtio driver in this case and continue. Windows will complete the installation to the virtio hard drive using this driver.

4. Continue with the basic Windows installation.

5. After Windows has completed installation, you can optionally install the virtio drivers for the remaining virtio devices manually using the Windows device manager, or install them later as part of the Spice guest tools installation process outlined below. Regardless, you will still want the Spice guest tools, because they provide Windows desktop integrated enhancements for the Spice desktop.

Upon completion of Windows installation

1. Shutdown the Windows guest

2. Disconnect the .iso CD storage files from both CD drives.

3. Remove one of the CD drives from the VM to reduce the number of CD drives to one.

After Windows is installed as a guest in the VM you will need to install the spice guest tools.

1. Restart the VM to bring up the Windows guest.

2. Connect the VM's single CD drive to a .iso image that contains the “Spice Guest Tools”.

3. Access the CD drive at the Windows guest and run the Spice Guest Tools installation process. If you did not choose to manually install the various virtio drivers earlier, then install each of the drivers that are presented during the installation of Spice Guest Tools.

Spice Guest Tools provide the additional virtio drivers for the virtio network adapter and virtio video. The Spice guest tools also provide the virtual display service (vdservice) and the virtual display agent (vdagent) process.

vdagent runs as a process and can be seen in the Task Manager “Processes” tab.

vdservice runs as a service and can be seen in the Windows service control manager (SCM).

2.7.6 Windows 2012 R2 Server guests

Using the QXL video adapter in the VM details provides a much greater assortment of higher display resolutions in the guest's console client view. These extended modes are available without adding any drivers to the guest; apparently Windows Server 2012 includes a driver for the QXL virtualized video hardware. Compare this to traditional hardware device emulation (VGA or Cirrus) for video, which provides only limited, lower resolution choices in the Windows Server VM guest.

2.7.7 Fedora and Ubuntu guests with Spice

Make sure to remove the Input Tablet device from the VM details. It is not needed with the Spice enabled guest OS and may cause problems with the mouse control if it is present.

2.7.8 Increasing Disk storage available on an existing VM

Disk drive size originally committed to a VM can be increased when necessary to accommodate more local hard drive capacity for the guest OS.

Create a backup copy of the image for safe-keeping.

# cp existing.img existing.img.bak

Create a blank raw qemu image equal to the amount of additional drive storage space planned.

# qemu-img create -f raw addon.raw 30G

Rename the original image file.

# mv existing.img existing.img.save

Append the blank image to the end of the original image.

# cat existing.img.save addon.raw >> existing.img

Now the existing.img should have the additional capacity available for use within the guest OS.

Use the drive management facilities of the guest OS to expand the used portion of the new hard drive space into the new unused space.

Windows – Use the disk management facility to expand used storage.

Linux – Boot an alternate rescue CD (as an iso-mounted virtual CD) that includes gparted to expand used storage. This can also be done using gparted or parted installed in the normal VM guest OS.
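Note that the concatenation method above assumes the original image is in raw format. Where the installed qemu-img provides the resize command, a raw or qcow2 image can instead be grown in place (shut the VM down first); the in-guest expansion steps still apply afterward:

# qemu-img resize existing.img +30G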

2.8 Care and Feeding of the CentOS virtualization host

To keep your system up to date with the latest software in the CentOS repositories, make sure to run updates regularly. Run the following command, via sudo or as root, to bring the system up to the latest available software and security fixes.

# yum update
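If you prefer to automate this, the optional yum-cron package can apply updates on a schedule. A minimal sketch follows; check the yum-cron configuration to control whether updates are applied automatically or only downloaded.

# yum install yum-cron

CentOS 6:

# chkconfig yum-cron on

# service yum-cron start

CentOS 7:

# systemctl enable yum-cron

# systemctl start yum-cron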

2.9 Duplication and/or Backup of the CentOS Virtualization Host system

One of the many advantages of Linux is how easily it accommodates variations in system hardware. The environment can be duplicated and run on nearly any PC or host server system. It will boot on an extremely wide range of 64-bit host systems with AMD or Intel processor hardware, from a modern high-end desktop system to a state of the art enterprise blade server, or anything in between.

The processor in the CentOS virtualization host system must be 64-bit and also advanced enough to perform hardware-based virtualization, as many systems today are. This means that virtualization extensions must be available in the processor.

http://virt-tools.org/learning/check-hardware-virt/
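As a quick local check, inspect the processor flags directly; a non-zero count means the Intel (vmx) or AMD (svm) virtualization extensions are present:

# egrep -c '(vmx|svm)' /proc/cpuinfo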

Duplication of the system can be achieved using Clonezilla, a GPL program designed for disk and image based backups and recovery.

http://clonezilla.org/

2.9.1 Preparation of the CentOS virtualization host system before duplicating for use in another system

Delete the udev rules file for preserving network adapter assignments to network interfaces.

/etc/udev/rules.d/70-persistent-net.rules

This file will be auto-generated upon the next boot in the new hardware system. The system will assign specific eth*/enp* (CentOS 6/7) references to available network adapters in the new system.

Remove the fixed MAC address in the Ethernet interface configuration. This will allow the configuration to remain in effect and assigned to specific adapter hardware when Ethernet adapters are encountered and enumerated in the new hardware system.

Edit the following files to remove the “HWADDR=” configuration line entry.

/etc/sysconfig/network-scripts/ifcfg-eth*

/etc/sysconfig/network-scripts/ifcfg-enp*

Removing the HWADDR line entry allows the configurations in the ifcfg-* files to be applied to the first two enumerated Ethernet adapters in the new system, rather than remaining tied to the old MAC addresses.
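Both preparation steps can be scripted. A minimal sketch, assuming the stock file locations; adjust the ifcfg-* globs to match the interfaces actually present:

# rm -f /etc/udev/rules.d/70-persistent-net.rules

# sed -i '/^HWADDR=/d' /etc/sysconfig/network-scripts/ifcfg-eth* /etc/sysconfig/network-scripts/ifcfg-enp*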

2.9.2 Running the Clonezilla Duplication or Backup

Use Clonezilla Boot CD, or Clonezilla Boot USB drive, to either duplicate the system or image it to a removable drive. Follow instructions closely and be careful not to destroy the original system drive by accidentally overwriting it.

Before beginning the duplication process, write down the drive model and serial numbers for both the source and destination hard drives. This will help when selecting the proper respective drive during the duplication and/or imaging process.
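On CentOS 7, for example, lsblk can report the model and serial numbers directly (the SERIAL column assumes a reasonably recent util-linux; on CentOS 6, smartctl or hdparm can provide the same details):

# lsblk -d -o NAME,SIZE,MODEL,SERIAL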

3 Shutdown of the CentOS Virtualization Server

During normal shutdown the virtualization server will pause and store the running state of the VMs. Upon startup of the hypervisor, the VMs will be resumed at the point they were previously paused. This makes startup quicker and prevents damage to guest systems during a shutdown.
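This suspend-and-resume behavior is provided by the libvirt-guests service. Assuming the stock CentOS packaging, it is governed by /etc/sysconfig/libvirt-guests; an illustrative excerpt:

ON_BOOT=start
ON_SHUTDOWN=suspend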

Pressing the power switch on the virtualization server will begin the shutdown process.

Alternatively, shutdown can be accomplished remotely using Linux commands. Root (su) access is required to shut down the system.

# shutdown -h now

4 Configure and Control APC SmartUPS During a Power Failure

Many UPSs can be controlled or monitored using Linux. The following briefly describes one way to configure the virtualization host for auto-shutdown during a power failure with a typical APC brand UPS. Many other third-party units are actually re-branded APC hardware, so the same approach can be used for them.

There is a manual here: http://www.apcupsd.com/manual/manual.html and some information here as well: http://www.cyberciti.biz/faq/debian-ubuntu-centos-rhel-install-apcups/

This section covers installing the apcupsd package and configuring it for use in the system.

CentOS 7: Requires obtaining the rpm from the Fedora project; it is built for EPEL 7.

http://pkgs.org/search/apcups

http://dl.fedoraproject.org/pub/epel/7/x86_64/a/

apcupsd-3.14.10-14.el7.x86_64.rpm

Download and transfer the file to the server using secure copy utility “scp” or any other means necessary.

CentOS 6: Same as CentOS 7 above, but use the EPEL 6 rpm from pkgs.org.

http://pkgs.org/search/apcups

http://dl.fedoraproject.org/pub/epel/6/x86_64/

apcupsd-3.14.10-2.el6.x86_64.rpm

Download and transfer the file to the server using secure copy utility “scp” or any other means necessary.

The apcupsd package requires SNMP libraries, provided via net-snmp.

# yum install net-snmp

Check the signature of the downloaded package, test the installation, then install:

# rpm --checksig apcupsd...

# rpm --install --test apcupsd...

# yum -y localinstall apcupsd...

Edit the apcupsd configuration file. Follow the instructions provided in comments within the config file.

# vim /etc/apcupsd/apcupsd.conf
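For orientation, here is a minimal sketch of the key directives for a single USB-connected APC unit; the values are illustrative and the comments in the file are authoritative (DEVICE is intentionally left blank so the USB unit is autodetected):

UPSCABLE usb
UPSTYPE usb
DEVICE
BATTERYLEVEL 10
MINUTES 5
TIMEOUT 0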

Test the UPS

# apctest

(be careful not to kill the UPS power to any running system with test item 1)

Enable and start the apcupsd daemon.

CentOS 6:

# chkconfig apcupsd on

# /etc/init.d/apcupsd start

CentOS 7:

# systemctl enable apcupsd.service

# systemctl start apcupsd.service

# systemctl status apcupsd.service

Check current status of the UPS

# apcaccess status

Check the apcupsd log file for events, such as brown-outs or power failures.

# tail -f /var/log/apcupsd.events

A desktop monitoring tool is available if you happen to have a desktop on your server. It is not necessary for a headless server, although it can be invoked and viewed through X access to the server in the same way wireshark or virt-manager is.

# yum install apcupsd-gui

Run it as follows:

# gapcmon

There is a web-based monitoring tool as well, but I am not recommending it for the CentOS virtualization server.

4.1 Multiple server hosts sharing the same UPS

For a server rack with all servers sharing the same UPS, connect the UPS USB cable to one server. Set up apcupsd on that server to run longer than the others; this is now considered the master server. The other servers are considered slaves and will monitor the master server's UPS. During a power failure the slave servers need to go down some time before the master to prevent them from losing the ability to monitor the UPS.

On the slave servers, install apcupsd as well, and configure it to monitor the master server. For example:

UPSCABLE ether

UPSTYPE net

DEVICE 10.9.217.101:3551

Adjust BATTERYLEVEL and MINUTES remaining for the slave servers to be slightly greater than on the master server. This ensures that, during a power failure, the slave servers go down earlier than the master server, providing a controlled soft shutdown for all hosts.
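For the slave arrangement above to work, the master's apcupsd must expose its Network Information Server so the slaves can poll it over port 3551. Assuming the default packaging, the relevant directives in the master's apcupsd.conf are:

NETSERVER on
NISIP 0.0.0.0
NISPORT 3551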

4.1.1 Avoid thrashing on restore of power

I generally do not set the system BIOS to start the system automatically on application of AC power. I don't want my hosts thrashing because they start up repeatedly on a UPS with an inadequate battery charge. Hosts are started manually after a power outage.

4.1.2 Protect the LAN switch power

Servers depend on communicating with each other in this arrangement, so we must make sure to include the network switch on the UPS. The switch MUST operate during power outages, or the protected system will not behave properly. Slave servers must be able to monitor the UPS over the network connection to the master server.

4.1.3 Consider a small dedicated network switch for power management

If the LAN switch is not part of the rack, then add a small unmanaged dedicated network switch to the rack. Use this switch on a secondary network for power supply signaling. Rack server hosts, in this case, would be configured with a secondary network and use a separate network connection for this purpose.

5 Logical Volume Manager notes on usage

We use the logical volume manager (LVM) in the CentOS hypervisor. Occasionally we may need to gather or retrieve data from another LVM hard drive that we have inserted into the system. The LVM command line can be used to manage logical-volume-based storage. Logical volumes are arranged within volume groups, similar to partitions within a hard drive, but unlike physical partitions, logical volumes are managed logically without specific ties to a particular drive or partition. The main advantage of LVM is that volumes can be extended across multiple hard drives as the need for storage increases in the system.
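As a concrete illustration of that advantage, a logical volume and its ext4 filesystem can be grown online once its volume group has free space; a minimal sketch with hypothetical names vg_data and lv_images:

# lvextend -L +10G /dev/vg_data/lv_images

# resize2fs /dev/vg_data/lv_images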

For starters, take a look at the man page for lvm.

# man lvm

To scan available logical volume groups

# vgscan

To activate a volume group (required when the system has not automatically activated it, such as when adding an older data drive to an existing system):

# vgchange -ay "my-logical-volume-group"

To enumerate, or scan, logical volumes below logical volume groups in a system

# lvs

Suppose we want to mount the volume. We can first create a mount point in the system.

# mkdir /mnt/lv-mount-point

Now we can mount the logical volume of choice to the mount point.

# mount /dev/logical-volume-group/logical-volume /mnt/lv-mount-point -o ro,user
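When finished with the drive, unmount it and deactivate the volume group before removing the hardware:

# umount /mnt/lv-mount-point

# vgchange -an "my-logical-volume-group"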

 

6 yum and rpm notes

The man page is a great place to start.

# man rpm

To find out all rpm packages that have been installed:

# rpm -qa | less

To list rpm packages that are not installed but are available for install:

# yum list available | less

To list installed rpm packages which have available updates:

# yum list updates | less

To list rpm packages which are installed from available repositories:

# yum list installed | less

To list installed and available groups:

# yum group list | less

To query information about a group:

# yum group info "group-name" | less

 

 

-document history-

this document: centos-virtualization-server_2015-02-26-00

 

Document prepared using LibreOffice Version 4.2.7.2 from The Document Foundation http://www.documentfoundation.org/

2014-10-10:

Improve section on setting up ntpd and also section for manual setting of the time.

2014-11-13:

Begin dividing out sections to address both CentOS 6 and CentOS 7.

2014-11-17:

Better description for partitioning in CentOS 7. Include details on customizing the partitioning in CentOS 7.

2014-11-19:

Add example for creating mountable .iso image with genisoimage utility.

2014-11-22:

Reformatting all sections to 6pt paragraph layout.

Adjust spacing to NBS. Correct formatting for transition to HTML.

Layout and optimize for dual use, as ODT and also exportable to HTML directly for use with website.

Remove TOC

2014-12-04:

Add TOC back in. Add section 5 notes on LVM

2014-12-09:

Removed manual spice and vnc port management which required root access to vm .xml definitions in /etc/libvirt/qemu/..

Added instruction for spice/vnc VM Display port via use of VM Details in virt-manager.  

Added macvtap and brief explanation of backing and source modes.

2014-12-10:

Vast improvement in the UPS section.

Included UPS setup for multiple hosts sharing the same UPS.

2014-12-12:

Add more information on hosts sharing a UPS

Minor layout improvements in sec 5 and 6

2014-12-20:

More cleanup and differentiation between CentOS 6 and CentOS 7.

2014-12-28-00:

Cleanup host management section to remove inner eth1 reference.

2015-01-02-00:

Add more information in the Firewall section for CentOS 7. Break out console key mapping for CentOS 7.

2015-01-12-00:

When creating user accounts, add user group access to wheel group for full management of hypervisor host system.

2015-01-13-01

Change some examples to match. Add second paragraph on assumptions at beginning of document.

2015-01-14-00:

Add page numbers.

2015-02-05-00:

Add installation of package “virt-install” in CentOS 7 specific items.

2015-02-10-00:

For both CentOS 6 and CentOS 7, change the large volume specification from /var/lib/libvirt/images over to /var. This provides the opportunity to create an independent dump directory on the large volume at /var/virt-backup/, which is used for dumping VMs for backup.

2015-02-18-00:

Add some useful links for Spice setup and configuration.

2015-02-19:

Add more details about setting up the Spice display device at the VM using static port assignments.

Remove “-O” option for OS detection on nmap examples.

2015-02-26-00:

Additional information on spice channels and use of QXL video adapter.

Add enable and start of apcupsd for CentOS 7.