
How to Install VMware vSphere Hypervisor


Introduction

Concepts and Definitions

Traditional vs. Virtualized Architecture

Don’t know what the VMware vSphere Hypervisor is? Before proceeding with this guide, please review the overview on VMware’s website.

First off, VMware terminology can be very confusing. vSphere is the term VMware uses to describe its server virtualization platform as a whole. There are different editions of vSphere, each with its own features, but they normally consist of a central management product, known as VMware vCenter Server (used as the primary management interface), as well as a series of ESXi hypervisors.

ESX vs ESXi

ESXi is simply the name given to VMware’s latest hypervisor architecture. Historically, VMware’s hypervisor was simply referred to as ESX. Recent, significant changes – most notably the removal of the COS (Console Operating System, often referred to as the Service Console) – to the hypervisor architecture reduced the hypervisor’s footprint dramatically and inspired a new name, ESXi. ESXi is meant to replace ESX going forward.

The VMware vSphere Hypervisor is simply the new name for what was formerly referred to as ‘ESXi Single Server’ or ‘free ESXi.’ In other words, these are all just different names for a free VMware hypervisor. Now called VMware vSphere Hypervisor, and freely available, it consists of both a VMware ESXi hypervisor and a VMware vSphere Client (used to manage the hypervisor as well as its virtual machines).

This guide focuses exclusively on this free offering and does not cover vSphere enterprise editions that would require installation and configuration of a vCenter Server instance and so forth. Simply put, the VMware vSphere Hypervisor is for standalone use only and precludes the use of many VMware Services. Enterprise deployments, including feature-rich, cloud-based implementations and the like, are outside the scope of this guide. This guide simply aims to familiarize you with VMware and virtualization as a concept.

Certain VMware virtualization concepts and definitions must be understood before proceeding:

  • Host: The physical hardware running the VMware hypervisor (ESXi) which hosts guest VMs.
  • Hypervisor: Virtualizing a physical server (a “host”) normally involves installing a software abstraction layer known as a hypervisor. The job of a hypervisor, also known as a VMM (Virtual Machine Monitor), is to abstract (to mask) the physical resources of a host into logical resources that can be pooled and shared. The hypervisor provides virtualization capabilities, that is: the ability to carve up a host’s resources into smaller “guest” computers called virtual machines, each with its own operating system, virtual CPU, network interfaces, storage, etc.
  • Virtual Machine (VM): a guest machine hosted on the VMware hypervisor (ESXi).

Getting Started

VMware ESXi Host

This step-by-step guide covers how to install the VMware vSphere Hypervisor on a “white box” (with 8 GB of memory, a 1 TB hard drive, and an AMD 64-bit processor). A “white box” is a personal computer or server without a registered brand name. We will cover installing the VMware hypervisor (ESXi) on the “white box” machine, vmvisor.colestock.test, and the VMware vSphere Client on a remote machine. Afterwards, two example guests (virtual machines) will be created, one Windows- and one Linux-based, using the vSphere Client program.


Required Software

The following software is used in this guide:

Note: You may be required to create an account to download the VMware software. After getting access to the downloads section, download both the ‘VMware ESXi 5.1 (CD ISO) Installable’ and ‘VMware vSphere Client 5.1’ packages. We will be burning the ESXi ISO to a disc before beginning our install. Place the VMware vSphere Client executable – currently ‘VMware-viclient-all-5.1.0-860230.exe’ – on a remote, Windows-based client system which can be used to manage the hypervisor once it is installed. Installation and use of VMware Tools is outside the scope of this guide (feel free to try this on your own!).
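
Before burning, it is worth verifying that the ISO downloaded intact. A minimal sketch, assuming the 5.1.0-799733 installer build referenced later in this guide and a Linux machine with md5sum available; compare the output against the checksum published on VMware’s download page:

# md5sum VMware-VMvisor-Installer-5.1.0-799733.x86_64.iso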


VMware vSphere Hypervisor

Installation

VMware Hypervisor (ESXi)

Verify Prerequisites

Download and burn the VMware ESXi 5.1.0 (vSphere Product Family) Installer CD.

If not already enabled, enable AMD-V (svm) or Intel VT-x (depending upon your chipset) virtualization support in the BIOS.

Change the boot order to list the DVD-ROM drive first; place the VMware ESXi disk into the DVD-ROM drive and then reboot the host.

Install ESXi

Upon reboot, the ESXi install will begin immediately.

The ESXi installer has a series of screens; below is a synopsis of the screens and the values to provide for each prompt:

Screen
Response

Installation Welcome
‘Enter’ (continue)

End User License Agreement
‘F11’ (accept and continue)

Select a Disk
Choose local ATA Disk

Confirm Disk Selection
‘Enter’ (OK)

Keyboard Layout
US Default

Root Password and Confirm
Enter and confirm the root user’s password.

Confirm Install
‘F11’ (install). Installation will begin; you should see something like ‘Installing ESXi 5.1.0’ with a running percentage.

Installation Complete
‘Enter’ (reboot). The installation CD is automatically ejected and the system is rebooted. When the server comes back up, initialization routines are run and the services are started.

Customize ESXi

At this point, we need to customize the initial ESXi installation to suit our needs. Specifically, we need to enable remote access via SSH, configure the network, and specify the fully-qualified hostname.

Once you see the “Download tools to manage this host” prompt, press F2 to customize the system.

Login using the root user and previously established password.

Afterwards, make the following changes:

Screen
Response

System Customization
Configure Management Network
IP Configuration
Change from ‘dynamic’ to ‘static.’ Assign a hard-coded value for ip address, subnet mask, and default gateway. In my case, 192.168.1.200, 255.255.255.0, and 192.168.1.1, respectively.

System Customization
Configure Management Network
DNS Configuration
Provide the desired hostname, e.g., vmvisor.colestock.test.

Apply changes and restart management network
‘Y’ (When prompted, confirm Yes)

System Customization
Troubleshooting Options
Enable SSH
If SSH is disabled, enable it. To apply your changes, press ESC to exit the screen, and ESC again to logout.

ESXi Post-Installation Checks

Verify Remote Connectivity:

From a remote machine, using a shell or client program, e.g., PuTTY, connect to the ESXi host (the hypervisor) as root using the ip address you used when installing:

# ssh root@192.168.1.200
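
Once connected, you can optionally confirm the hypervisor version from the ESXi shell; the build number in the output will reflect your particular installer:

~ # vmware -v
VMware ESXi 5.1.0 build-799733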

Congratulations! At this point, you should have a functioning hypervisor.

VMware vSphere Client

We will be using the VMware vSphere Client software to interact with our newly installed ESXi hypervisor. This client software not only allows us to control the host, but to create and modify virtual machines as well. Earlier in this guide, you were instructed to download and place the executable for the VMware vSphere Client software on a remote, Windows-based client system; navigate to that system to perform the following steps.

Install vSphere Client

On a remote Windows-based machine, launch the vSphere Client installer; the current build is VMware-viclient-all-5.1.0-860230.exe.

The installer has a series of screens; below is a synopsis of the screens and the values to provide for each prompt:

Screen
Response

Select the Language
‘English (United States)’

Welcome to the Installer
‘Next’

End-User Patent Agreement
‘Next’

License Agreement
I agree to the terms in the license agreement.

Destination Folder
Change installation destination (if desired). Click ‘Next.’

Ready to Install the Program
‘Install.’ Installation routine will begin.

Installation Completed
‘Finish’

vSphere Client Post-Installation Checks

Verify Remote Connectivity:

Using the recently installed VMware vSphere Client, connect to the ESXi hypervisor by launching the client program: Start > All Programs > VMware > VMware vSphere Client.

Connect by providing the ip address of the host as well as the root user and password.

If a security warning dialog box appears, check the ‘Install this certificate and do not display warnings’ checkbox and press ‘Ignore.’

If you were able to successfully connect, you will see a Home page with two sections: Inventory and Administration.


Creating Virtual Machines

We will be using the VMware vSphere Client software we installed earlier to create and modify virtual machines from a remote, Windows-based client.

Linux Example (CentOS 6.3, 64-bit)

If you have not already, logon to the ESXi hypervisor using the VMware vSphere Client.

Transfer the operating system ISO we want to use for installing the virtual machine to the ESXi hypervisor:

Screen
Response

Navigate to ‘Configuration’
Click on ‘Inventory.’ You should be taken to a ‘Getting Started’ tab under the ‘Inventory’ section. Click on the ‘Configuration’ tab.

Browse Datastore
Right-click on the datastore labeled ‘datastore1.’ Click ‘Browse Datastore…’

Create ‘isos’ Folder
Click on the ‘Create a new folder’ icon. Create a new folder named ‘isos.’

Upload ISO
Select the ‘upload files to this datastore’ icon. When the ‘Upload Items’ dialog box appears, navigate to the ISO on your local drive and select it. Click ‘Open.’ Wait for the ISO to be copied over the network to the datastore. Repeat as needed for additional ISOs.

The hypervisor should now be able to access the desired ISO, CentOS-6.3-x86_64-minimal.iso.

Create the virtual machine – we will call ours centos-x64-demo – by using the Create New Virtual Machine wizard which can be launched by right-clicking the ESXi server in the left pane and choosing ‘New Virtual Machine…’ or by simply typing CTRL+N. Below is a synopsis of the screens and the values to provide for each prompt:

Screen
Response

Configuration
‘Custom’

Name and Location
Provide the name of your virtual machine, e.g., centos-x64-demo

Storage
‘datastore1’

Guest Operating System
‘Linux’ and ‘CentOS 4/5/6 (64-bit)’

CPUs
Specify desired number of virtual sockets and cores per socket.

Memory
Specify desired amount of memory for the virtual machine.

Network
Specify ‘1’ for number of NICs to connect, ‘VM Network’ for the network, and ‘E1000’ as the adapter. Make sure that ‘Connect at Power On’ is specified.

SCSI Controller
‘LSI Logic SAS’

Select a Disk
‘Create a new virtual disk’

Create a Disk
Specify the desired capacity, e.g., 16 GB, ‘Thin Provision,’ and ‘Store with the virtual machine.’

Advanced Options
Keep the defaults. Click ‘Next.’

Ready to Complete
Review your choices and check the ‘Edit the virtual machine settings before completion’ checkbox. Click ‘Continue.’

Virtual Machine Properties – Hardware
Select ‘New CD/DVD (adding).’ Click the ‘Datastore ISO File’ radio button. Click ‘Browse’ and select the ISO you want to install from. Under ‘Device Status,’ click ‘Connect at power on.’ Click ‘Finish.’

Virtual Machines
At this point, the newly created virtual machine should be listed under the ‘Virtual Machines’ tab under ‘Inventory.’ Right-click on the name of the virtual machine, centos-x64-demo, and select Power, Power On.

centos-x64-demo Console
Right-click the virtual machine again and this time select ‘Open Console.’ From here you will be able to install the operating system using the ISO you mounted at boot time; you should see the ‘Welcome to CentOS 6.3’ screen.
Remember: issue CTRL+ALT to release the cursor from the virtual machine’s console.

Installation Options
‘Install or upgrade an existing system’

Disk Found
‘Skip’

CentOS Installation Splash
‘OK’

Language Selection
‘English’

Keyboard Selection
‘us’

Installation devices
‘Basic Storage Devices’

Storage Device Warning
‘Re-initialize’

Time Zone Selection
‘America/Denver’

Root Password
Provide and confirm root password.

Writing Storage Configuration to Disk
‘Write changes to disk.’ Package installation will begin.

Congratulations
‘Reboot’

Login
After the reboot, login as root using the previously established password.

Manually Configure Networking:

Once the system reboots, we will configure the network manually using the same console we used to install the operating system.

To manually configure the network interface we modify the /etc/sysconfig/network-scripts/ifcfg-eth0 file, adding or adjusting the existing values of IPADDR, NETMASK, GATEWAY, and ONBOOT.

# vi /etc/sysconfig/network-scripts/ifcfg-eth0

After editing, the file should resemble the following:

DEVICE="eth0"
IPADDR="192.168.1.201"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"
BOOTPROTO="none"
HWADDR="00:0C:29:80:26:D4"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="4247d58a-07ce-4327-9fbf-aba9f2096bcc"

To update the hostname, edit the following two configuration files: /etc/sysconfig/network and /etc/hosts.

Update /etc/sysconfig/network:

# vi /etc/sysconfig/network

Add/Modify the following line to reflect your hostname:

HOSTNAME=centos-x64-demo.colestock.test

Update /etc/hosts:

# vi /etc/hosts

Add/Modify the following line to reflect your hostname and ip address:

192.168.1.201 centos-x64-demo.colestock.test centos-x64-demo

To specify our DNS server(s) we manually edit the /etc/resolv.conf file, adding lines for each of our nameservers.

# vi /etc/resolv.conf

After editing, the file should resemble the following:

nameserver 192.168.1.1

Restart the network service to enable the changes.

# service network restart
Shutting down interface eth0: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface eth0: [ OK ]
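
As a quick sanity check before leaving the console, confirm that the address took effect and that the gateway is reachable; this sketch assumes the 192.168.1.201/192.168.1.1 values used above:

# ip addr show eth0
# ping -c 3 192.168.1.1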

Verify Remote Connectivity:

From a remote machine, using a shell or client program, e.g., PuTTY, connect to the ‘centos-x64-demo’ guest as root using the ip address you used when installing:

# ssh root@192.168.1.201

Windows Example (Windows Server 2008, 64-bit)

Transfer the operating system ISO we want to use for installing the virtual machine to the ESXi hypervisor, in this case, Windows_Server_2008_Enterprise_amd64_trial.iso.

Create the virtual machine – we will call ours windows-demo-2008 – by using the Create New Virtual Machine wizard which can be launched by right-clicking the ESXi server in the left pane and choosing ‘New Virtual Machine…’ or by simply typing CTRL+N. Below is a synopsis of the screens and the values to provide for each prompt:

Screen
Response

Configuration
‘Custom’

Name and Location
Provide the name of your virtual machine, e.g., windows-demo-2008

Storage
‘datastore1’

Guest Operating System
‘Windows’ and ‘Microsoft Windows Server 2008 R2 (64-bit)’

CPUs
Specify desired number of virtual sockets and cores per socket.

Memory
Specify desired amount of memory for the virtual machine.

Network
Specify ‘1’ for number of NICs to connect, ‘VM Network’ for the network, and ‘E1000’ as the adapter. Make sure that ‘Connect at Power On’ is specified.

SCSI Controller
‘LSI Logic SAS’

Select a Disk
‘Create a new virtual disk’

Create a Disk
Specify the desired capacity, e.g., 40 GB, ‘Thin Provision,’ and ‘Store with the virtual machine.’

Advanced Options
Keep the defaults. Click ‘Next.’

Ready to Complete
Review your choices and check the ‘Edit the virtual machine settings before completion’ checkbox. Click ‘Continue.’

Virtual Machine Properties – Hardware
Select ‘New CD/DVD (adding).’ Click the ‘Datastore ISO File’ radio button. Click ‘Browse’ and select the ISO you want to install from. Under ‘Device Status,’ click ‘Connect at power on.’ Click ‘Finish.’

Virtual Machines
At this point, the newly created virtual machine should be listed under the ‘Virtual Machines’ tab under ‘Inventory.’ Right-click on the name of the virtual machine, windows-demo-2008, and select Power, Power On. Right-click the virtual machine again and this time select ‘Open Console.’ From here you will be able to install the operating system on the virtual machine using the ISO you mounted at boot time.
Remember: issue CTRL+ALT to release the cursor from the virtual machine’s console.

windows-demo-2008 Console
The newly created virtual machine’s console window will appear and you should see the ‘Install Windows’ screen.

Install Windows
‘Install Now’

Install Windows (Product Key)
‘Next’

Install Windows (Confirm)
‘No’

Install Windows (Edition)
‘Windows Server 2008 Enterprise (Full Installation)’

Install Windows (License)
Check ‘I accept the license terms’; click ‘Next’

Install Windows (Type)
‘Custom (advanced)’

Install Windows (Location)
‘Disk 0’; click ‘Next.’ The installer will begin copying files.

Install Windows (Restart)
‘Restart now’

Install Windows (Change Password)
Change the password as the ‘Administrator’ user

Initial Configuration Tasks
(Time Zone)
Set the time and time zone, e.g., Mountain Time (GMT-0700)

Initial Configuration Tasks
(Configure Networking)
Configure networking, in particular ip address, network mask, gateway, dns server; in my case: 192.168.1.202, 255.255.255.0, 192.168.1.1, 192.168.1.1, respectively.

Initial Configuration Tasks
(Provide computer name)
Set the computer name, e.g., windows2008

Initial Configuration Tasks
(Enable Remote Desktop)
Choose ‘Allow connections from computers running any version of Remote Desktop’

Restart the computer for the hostname change to take effect.

Verify Remote Connectivity:

On a remote Windows-based client, remote desktop to the newly created virtual machine, e.g., windows-demo-2008, by navigating to Start > All Programs > Accessories > Remote Desktop Connection and connecting to the ip address of the virtual machine, e.g., 192.168.1.202, using the ‘windows2008\Administrator’ credentials:

Remote Desktop Connection


How to Install Xen


Introduction

Concepts and Definitions

Xen architecture diagram

Don’t know what the Xen Hypervisor is? Before proceeding with this guide, please review the Xen Overview on the Xen wiki.

In essence, Xen is a type 1, “bare-metal” virtual machine monitor (or hypervisor), which provides the ability to run one or more operating system instances on the same physical machine (or host). Xen, like other types of virtualization, is useful for many use cases such as server consolidation and isolation of production and development environments (e.g. corporate and personal environments on the same system).

Certain Xen-specific concepts and definitions must be understood before proceeding with this guide:

  • Bridged Networking: The most common Xen networking configuration that uses software bridging on domain-0 to allow guest domains – “virtual machines” – to appear on the network as individual hosts. In this configuration, a software bridge is created on domain-0, then backend virtual network devices and an optional physical Ethernet device are added in order to provide connectivity off the host. By omitting the physical Ethernet device, an isolated network containing only guest domains can be created.
  • Domain: a running instance of a virtual machine. “VM,” “guest,” and “domain” are interchangeable terms.
  • Domain-0 (dom0): The special, privileged “control” domain, which contains drivers for the hardware as well as the toolstack needed to control the VMs.
  • domU: Refers to any unprivileged (non-dom0) domain.
  • Hardware Virtual Machine (HVM): The term that Xen uses to refer to a guest type that utilizes so-called hardware-assisted virtualization. Its goal is to achieve full virtualization; in other words, to simulate a complete hardware environment, or virtual machine, in which an unmodified guest operating system executes in complete isolation. HVM guests need both a CPU with explicit support for virtualization (such as Intel VT or AMD-V) as well as a device emulator. Xen uses the QEMU project to emulate PC hardware devices for HVM guests. The QEMU “device manager” (qemu-dm) daemon runs as a backend process in dom0. This means that the virtualized machines see an emulated version of a fairly basic PC, including basic devices (BIOS, disk controllers, network adapters, graphics controllers, etc.).
  • Host: The physical hardware running Xen and hosting guest VMs.
  • Hypervisor: Virtualizing a physical server (a “host”) normally involves installing a software abstraction layer known as a hypervisor. The job of a hypervisor, also known as a VMM (Virtual Machine Monitor), is to abstract (to mask) the physical resources of a host into logical resources that can be pooled and shared. The hypervisor provides virtualization capabilities, that is: the ability to carve up a host’s resources into smaller “guest” computers called virtual machines, each having its own operating system, virtual CPU, network interfaces, storage, etc.
  • Paravirtualization (PV): a guest type (“PV guest” in Xen terminology) that is aware of the hypervisor and that can run efficiently without emulation or a CPU with special hardware extensions (such as Intel VT or AMD-V). So-called PV guests, however, must run a modified, paravirtualization-aware operating system. In other words, paravirtualization cannot support unmodified operating systems (e.g. Windows 2000/XP), so its compatibility and portability are considered to be poor. Paravirtualization is often referred to as OS-assisted virtualization.
  • Virtual Machine (VM): guest, virtual machine, domain.

Getting Started

Example Type 1 Hypervisor Architecture

This step-by-step guide covers how to install the Xen Hypervisor on a “white box” (with 8 GB of memory, a 1 TB hard drive, and an AMD 64-bit processor). A “white box” is a personal computer or server without a registered brand name. The host machine, xendom0.colestock.test, will run Debian “Squeeze” as its control domain, domain-0, as well as the most current platform-specific Xen software available. After installing and configuring the Xen Hypervisor, two example guests (virtual machines) will be created, one Windows- and one Linux-based, using the “Virtual Machine Manager” (virt-manager) toolstack.


Required Software

The following software is used in this guide:


Xen Hypervisor

Installation

Verify Prerequisites

Download and burn Debian DVD Disk 1.

If not already enabled, enable AMD-V (svm) or Intel VT-x (depending upon your chipset) virtualization support in the BIOS.

Change the boot order to list the DVD-ROM drive first; place the Debian disk into the DVD-ROM drive and then reboot the host.

Install Domain-0 (Host)

Upon reboot, the Debian install will begin immediately.

At the ‘Install’ menu, choose ‘Graphical Install.’

The Debian graphical installer has a series of screens; below is a synopsis of the screens and the values to provide for each prompt:

Screen
Response

Select a Language
‘English’

Select a Location
‘United States’

Configure the Keyboard
‘American English’

Configure the Network
‘Cancel’ when the installer attempts to configure DHCP

Configure the Network
‘Configure network manually’

Configure the Network
On the subsequent ‘Configure the network’ screens, provide the appropriate values for ip address, network mask, gateway, name server(s), and hostname.
In my case, I provided the following values, respectively: 192.168.1.200, 255.255.255.0, 192.168.1.1, 192.168.1.1, and xendom0.colestock.test

Set up users and passwords
Enter and confirm the root user’s password

Set up users and passwords
Enter full name of non-root user account, e.g., James Colestock

Set up users and passwords
Enter the account name for the non-root user, e.g., James

Partition disks
‘Guided – use entire disk’

Partition disks
Select the only available disk

Partition disks
Confirm that you want to create a new empty partition table on the device

Partition disks
Choose the ‘All files in one partition’ partitioning scheme

Partition disks
Delete all the partitions until only free space remains

Figure A

At this point you should see a partition table similar to ‘Figure A.’
Delete all the existing partitions until there is nothing but free space; for each: right-click, choosing the ‘Delete the partition’ option, and press ‘Continue.’

Partition disks
Figure B: Example Partition Table

Once all the existing partitions have been deleted, repartition the disk until you have the same layout as ‘Figure B.’
Note that the ‘boot’ partition should be the first partition you create; it should be ‘primary’, formatted ext2, and /boot should be its mount point.
The final, and largest, partition is reserved for domU storage; choose the ‘do not use the partition’ option for it – later on we will use a LVM (Logical Volume Manager) to initialize this disk partition.

Configure the package manager
‘No’ to skip scanning another CD or DVD

Configure the package manager
‘No’ to install without a network mirror
Note: we will be configuring the package manager by hand so that it is clear where the corresponding configuration file resides, etc.

Configuring popularity-contest
‘No’

Software selection
Figure C: Software Selection Screen

Choose the options that correspond to ‘Figure C,’ e.g., Graphical desktop environment, SSH server, and Standard system utilities.

Install the GRUB boot loader
‘Yes’ to install the GRUB boot loader to the master boot record

Finish the installation
The CD will be ejected automatically; press ‘Continue.’

Optionally, you could go back into the BIOS and change the boot order to start with the hard drive. At the very least, make sure that you remove the CD from the CD-ROM drive.

Domain-0 Post-Installation Checks

Verify Remote Connectivity:

From a remote machine, using a shell or client program, e.g., PuTTY, connect to the domain-0 host as root using the ip address you used when installing:

# ssh root@192.168.1.200

Verify Kernel:

root@xendom0:~# uname -a
Linux xendom0 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux

Verify Hostname:

root@xendom0:~# hostname
xendom0

Verify Filesystem:

root@xendom0:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 30G 3.2G 25G 12% /
tmpfs 3.7G 0 3.7G 0% /lib/init/rw
udev 3.7G 208K 3.7G 1% /dev
tmpfs 3.7G 0 3.7G 0% /dev/shm
/dev/sda1 268M 15M 239M 6% /boot

Verify Swap:

root@xendom0:~# swapon -s
Filename Type Size Used Priority
/dev/sda6 partition 11717624 0 -1

Verify CPU Hardware Virtualization:

If you don’t already know whether your CPU supports hardware virtualization, you can use either of the following two commands to verify [the "vmx" flag for Intel or the "svm" flag for AMD]:

grep "vmx" /proc/cpuinfo

or

grep "svm" /proc/cpuinfo

Install Xen Packages and Toolstacks

Configure APT (Advanced Package Tool)

We will be using Debian’s APT tool to download and install the latest Xen packages. Before we do, however, we need to update the mirror list and the local package index.

Update the /etc/apt/sources.list file:

# vi /etc/apt/sources.list

Comment out – “put a # in front of” – any ‘cdrom’ lines, including the following line:

deb cdrom:[Debian GNU/Linux 6.0.6 _Squeeze_ - Official amd64 DVD Binary-1 20120929-16:46]/ squeeze contrib main

Uncomment – “remove the # from in front of” – the following lines:

# deb http://ftp.debian.org/debian/ squeeze-updates main contrib
# deb-src http://ftp.debian.org/debian/ squeeze-updates main contrib

Add the following line to the end of the file:

deb http://ftp.us.debian.org/debian squeeze main

Update the APT package index using apt-get. The APT package index is essentially a database of available packages from the repositories defined in the /etc/apt/sources.list file. To update the local package index with the latest changes made in the repositories, type the following:

root@xendom0:~# apt-get update

Install Package Dependencies

Install the packages’ dependencies via the following series of commands:

root@xendom0:~# aptitude -y build-dep xen-linux-system-2.6.32-5-xen-amd64
root@xendom0:~# aptitude -y build-dep xen-qemu-dm-4.0
root@xendom0:~# aptitude -y build-dep qemu-system

Install Xen Packages

Install the core Xen packages using the following command:

root@xendom0:~# aptitude -y install xen-linux-system-2.6.32-5-xen-amd64 xen-qemu-dm-4.0 qemu-system

Install Toolstack Packages

Install toolstacks for working with Xen, e.g., xen-tools and virt-manager:

root@xendom0:~# apt-get -y install virt-manager virt-viewer xen-tools xen-utils

Afterwards, verify core Xen packages and toolstacks:

root@xendom0:~# dpkg --list 'xen*' | grep ^ii
ii xen-hypervisor-4.0-amd64 4.0.1-5.4 The Xen Hypervisor on AMD64
ii xen-linux-system-2.6.32-5-xen-amd64 2.6.32-46 Xen system with Linux 2.6.32 on 64-bit PCs (meta-package)
ii xen-qemu-dm-4.0 4.0.1-2+squeeze2 Xen Qemu Device Model virtual machine hardware emulator
ii xen-tools 4.2-1 Tools to manage Xen virtual servers
ii xen-utils-4.0 4.0.1-5.4 XEN administrative tools
ii xen-utils-common 4.0.0-1 XEN administrative tools - common files
ii xenstore-utils 4.0.1-5.4 Xenstore utilities for Xen

Verify virt* packages:

root@xendom0:~# dpkg --list 'virt*' | grep ^ii
ii virt-manager 0.8.4-8 desktop application for managing virtual machines
ii virt-viewer 0.2.1-1 Displaying the graphical console of a virtual machine
ii virtinst 0.500.3-2 Programs to create and clone virtual machines

Set-up Bridged Networking

Xen Bridge Networking Diagram

In Xen, virtual machines get access to an outside network via some kind of specialized network configuration. In our case, we will use the most common configuration known as bridged networking.

Before attempting to create a bridge you must have the bridge-utils package installed.

In my case, this package is already installed:

root@xendom0:~# dpkg --list 'bridge-utils*' | grep ^ii
ii bridge-utils 1.4-5 Utilities for configuring the Linux Ethernet bridge

If it weren’t, however, I could install it via apt-get:

root@xendom0:~# apt-get install bridge-utils

Create the Network Bridge, ‘xenbr0’:

Backup the current /etc/network/interfaces file.

root@xendom0:/etc/network# cp interfaces interfaces.orig

Open the file with the text editor of your choosing; we will be adjusting this configuration file, including environment-specific values, such as: ip address, network mask, gateway, dns-nameservers, and dns-search.

root@xendom0:/etc/network# vi interfaces

Before you make your edits to the /etc/network/interfaces file, it should look like this:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
address 192.168.1.200
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 192.168.1.1
dns-search colestock.test

After you make your edits to the /etc/network/interfaces file, it should look something like the following. Simply adjust the values below to reflect your environment:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet manual
pre-up ifconfig eth0 up
post-down ifconfig eth0 down

# Network Bridge, we will call ours xenbr0
auto xenbr0
iface xenbr0 inet static
bridge_ports eth0
address 192.168.1.200
netmask 255.255.255.0
broadcast 192.168.1.255
gateway 192.168.1.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 192.168.1.1
dns-search colestock.test

Bring up (and validate) the Network Bridge:

root@xendom0:/etc/network# ifup xenbr0

Use the brctl command to show all the ethernet bridges on your system (you should see your newly created bridge in the list):

root@xendom0:/etc/network# brctl show
bridge name bridge id STP enabled interfaces
pan0 8000.000000000000 no
xenbr0 8000.b8975a0f2f3f no eth0
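
You can also confirm that the management IP address now lives on the bridge rather than on eth0; a quick check:

root@xendom0:/etc/network# ifconfig xenbr0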

Prepare Storage

Earlier – during the domain-0 installation – we reserved the majority of our disk’s capacity on a large, unused disk partition; we will be using this partition to store our virtual machines (domU guests).

Specifically, we will be managing our storage with a Logical Volume Manager (LVM). Logical volume management, in general, is a method of allocating hard drive space into “logical” volumes that can easily be resized (in contrast to “traditional” disk partitioning techniques). A physical disk is divided into one or more physical volumes. Volume groups are, in turn, created by combining these physical volumes. Lastly, these volume groups are subdivided into virtual disks called logical volumes. Logical volumes may be used just like regular disks with filesystems created on them and mounted in the Unix/Linux filesystem tree.

Using an LVM, we will be creating a single volume group, named vgdomu, which will be carved up later on, as needed, to allocate storage to our virtual machines.

Make sure lvm2 is installed:

root@xendom0:~# dpkg --list 'lvm*' | grep ^ii

If not, install it via apt-get:

root@xendom0:~# apt-get install lvm2

And verify again:

root@xendom0:~# dpkg --list 'lvm*' | grep ^ii
ii lvm2 2.02.66-5 The Linux Logical Volume Manager

Use fdisk -l to find the large disk partition you reserved for domU storage; in my case, it is /dev/sda7:

root@xendom0:~# fdisk -l /dev/sda7

Disk /dev/sda7: 955.9 GB, 955902853120 bytes
255 heads, 63 sectors/track, 116215 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/sda7 doesn't contain a valid partition table

Initialize the physical volume for later use by the volume manager:

root@xendom0:~# pvcreate /dev/sda7
Physical volume "/dev/sda7" successfully created

Create a volume group called ‘vgdomu’ using the physical volume we initialized earlier, /dev/sda7:

root@xendom0:~# vgcreate vgdomu /dev/sda7
Volume group "vgdomu" successfully created

Verify the volume group’s creation via vgdisplay:

root@xendom0:~# /sbin/vgdisplay vgdomu
--- Volume group ---
VG Name vgdomu
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 890.25 GiB
PE Size 4.00 MiB
Total PE 227904
Alloc PE / Size 0 / 0
Free PE / Size 227904 / 890.25 GiB
VG UUID oSeWuD-ex7k-Vz1Z-HWEg-0Vlq-SKZj-Yd7MX6

Modify GRUB

Next, we need to adjust our boot loader to boot the Xen-enabled kernel first, by default. The boot loader, often referred to as a “bootstrap loader,” is a small program that places the operating system (OS) of a computer into memory. In our case, we are using GRUB (short for GNU GRand Unified Bootloader), a multi-boot loader that lets a computer choose among multiple operating systems at startup.

Change the boot order, placing the Xen-enabled kernel first:

root@xendom0:~# mv -i /etc/grub.d/20_linux_xen /etc/grub.d/09_linux_xen

Afterwards, update GRUB’s configuration:

root@xendom0:~# update-grub
Generating grub.cfg ...
Found background image: /usr/share/images/desktop-base/desktop-grub.png
Found linux image: /boot/vmlinuz-2.6.32-5-xen-amd64
Found initrd image: /boot/initrd.img-2.6.32-5-xen-amd64
Found linux image: /boot/vmlinuz-2.6.32-5-xen-amd64
Found initrd image: /boot/initrd.img-2.6.32-5-xen-amd64
Found linux image: /boot/vmlinuz-2.6.32-5-amd64
Found initrd image: /boot/initrd.img-2.6.32-5-amd64
done

Reboot and Post-Installation Checks

At this point, we are ready to reboot the system in order to load the Xen-enabled kernel:

root@xendom0:~# shutdown -r now

GRUB should now load the Xen-enabled amd64 kernel by default.

Logon to the box:

# ssh root@192.168.1.200

Verify Kernel:

root@xendom0:~# uname -a
Linux xendom0 2.6.32-5-xen-amd64 #1 SMP Sun Sep 23 13:49:30 UTC 2012 x86_64 GNU/Linux

Notice how the command now reflects “2.6.32-5-xen-amd64.”

Verify Domain-0:

In Xen, the domain-0 privileged guest should always be running. In order to verify this, we will be using the xm program, which is the main interface for managing Xen guest domains.

Use xm’s list command to ensure that Xen is running; at this point domain-0 should be the only domain in the list:

root@xendom0:~# xm list
Name ID Mem VCPUs State Time(s)
Domain-0 0 7015 4 r----- 273.8

Alternatively, you can get detailed information about the Xen host and hypervisor using xm’s info command; for example:

root@xendom0:~# xm info Domain-0
host : xendom0
release : 2.6.32-5-xen-amd64
version : #1 SMP Sun Sep 23 13:49:30 UTC 2012
machine : x86_64
nr_cpus : 4
nr_nodes : 1
cores_per_socket : 4
threads_per_core : 1
cpu_mhz : 2900
hw_caps : 178bf3ff:efd3fbff:00000000:00001310:00802001:00000000:000037ff:00000000
virt_caps : hvm
total_memory : 7671
free_memory : 1000
node_to_cpu : node0:0-3
node_to_memory : node0:1000
node_to_dma32_mem : node0:552
max_node_id : 0
xen_major : 4
xen_minor : 0
xen_extra : .1
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler : credit
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset : unavailable
xen_commandline : placeholder
cc_compiler : gcc version 4.4.5 (Debian 4.4.5-8)
cc_compile_by : ultrotter
cc_compile_domain : debian.org
cc_compile_date : Sat Sep 8 19:15:46 UTC 2012
xend_config_format : 4

Congratulations! At this point, you should have a barebones, functional Xen environment.


Configuration

Our barebones Xen installation needs further configuration to optimally run guest domains, etc.

Configure Domain-0

We need to make kernel changes in order to fine-tune dom0’s resource utilization.

First, we will limit dom0’s memory consumption by adding the following line to /etc/default/grub:

GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=512M;max:512M nosmp"

Using the text editor of your choice, edit the /etc/default/grub file:

root@xendom0:~# vi /etc/default/grub
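
If you would rather script the edit, appending the line works as well; this sketch assumes GRUB_CMDLINE_XEN_DEFAULT is not already set elsewhere in the file:

root@xendom0:~# echo 'GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=512M;max:512M nosmp"' >> /etc/default/grub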

Afterwards, update GRUB’s configuration to reflect this change:

root@xendom0:~# update-grub
Generating grub.cfg ...
Found background image: /usr/share/images/desktop-base/desktop-grub.png
Found linux image: /boot/vmlinuz-2.6.32-5-xen-amd64
Found initrd image: /boot/initrd.img-2.6.32-5-xen-amd64
Found linux image: /boot/vmlinuz-2.6.32-5-xen-amd64
Found initrd image: /boot/initrd.img-2.6.32-5-xen-amd64
Found linux image: /boot/vmlinuz-2.6.32-5-amd64
Found initrd image: /boot/initrd.img-2.6.32-5-amd64
done

Next, we will increase dom0’s CPU allocation.

Xen uses a credit-based CPU scheduler wherein each domain is assigned a weight, a proportional share of CPU per guest VM, and a cap, an optional upper limit on the CPU time consumable by a particular VM. For example, a domain with a weight of 512 will get twice as much CPU as a domain with a weight of 256 on a contended host. Legal weights range from 1 to 65535 and the default is 256.

List the current value:

root@xendom0:~# xm sched-credit -d Domain-0
Name ID Weight Cap
Domain-0 0 256 0

By default, dom0’s CPU weight is 256; we will double that using xm’s sched-credit command:

root@xendom0:~# xm sched-credit -d Domain-0 -w 512

Confirm the change:

root@xendom0:~# xm sched-credit -d Domain-0
Name ID Weight Cap
Domain-0 0 512 0

Configure Xen Daemon

Next, we will be making simple changes to the Xen daemon to support the functionality we need by modifying its configuration file, /etc/xen/xend-config.sxp.

Update the /etc/xen/xend-config.sxp file:

# vi /etc/xen/xend-config.sxp

Uncomment – “remove the # from in front of” – the following lines:

#(xend-unix-server no)
#(xend-unix-path /var/lib/xend/xend-socket)
#(vnc-listen '127.0.0.1')

Change the “(xend-unix-server no)” line to “yes”:

(xend-unix-server yes)

Change the (vnc-listen '127.0.0.1') line to use '0.0.0.0':

(vnc-listen '0.0.0.0')
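
A quick grep confirms that all three directives are now active; the output should include the uncommented lines exactly as set above:

root@xendom0:~# egrep 'xend-unix-server|xend-unix-path|vnc-listen' /etc/xen/xend-config.sxp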

Create Symbolic Link to Xen Libraries

Next, create a symbolic link as a workaround for a Virtual Machine Manager bug.

Specifically, you need to create a symbolic link to /usr/lib/xen-<your version here> in /usr/lib64/xen:

root@xendom0:~# ln -sf /usr/lib/xen-4.0 /usr/lib64/xen

Verify the symbolic link:

root@xendom0:/etc/xen# ls -lart /usr/lib64/xen
lrwxrwxrwx 1 root root 16 Nov 13 09:26 /usr/lib64/xen -> /usr/lib/xen-4.0

Reboot

Perform one final reboot of the system.

Afterwards, we will create our first virtual machine (or “domU”, “guest”):

root@xendom0:/etc/xen# shutdown -r now


Creating Virtual Machines

Although there are a lot of different ways to interface with Xen, we will be using a powerful desktop user interface for managing our virtual machines called Virtual Machine Manager. Often referred to by its package name, virt-manager, Virtual Machine Manager is just one part of a larger suite of tools that you can read more about on the virt-manager project website.

Linux Example (CentOS 6.3, 64-bit)

From a remote client with access to an X Server, such as Reflection X, MobaXterm, etc., connect to the Xen system:

ssh -l root 192.168.1.200

If you are performing these tasks from a Windows-based client and don’t have access to an X Server, I would highly recommend using MobaXterm Home Edition. Best of all: it’s FREE!

Create a logical volume, centos-test-disk, on the ‘vgdomu’ volume group we created earlier:

root@xendom0:~# lvcreate -L 16G -n centos-test-disk vgdomu
Logical volume "centos-test-disk" created

We will be using the aforementioned logical volume for our virtual machine’s storage.
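
Optionally, verify the new volume before proceeding; the 16 GB centos-test-disk volume should appear in the listing:

root@xendom0:~# lvs vgdomu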

Transfer or make available to the Xen system the operating system ISO from which you wish to install your virtual machine.

In my case, my ISOs are on a USB flash drive that I connect to the system:

root@xendom0:~# fdisk -l | grep NTFS
/dev/sdb1 1 21270 7816688 7 HPFS/NTFS

I recursively create the needed mount-point directory:

root@xendom0:~# mkdir -p /media/demo-flash-drive

Then, I mount the drive:

root@xendom0:~# mount -t ntfs-3g /dev/sdb1 /media/demo-flash-drive

I should now be able to access my desired ISO, CentOS-6.3-x86_64-minimal.iso:

root@xendom0:~# ls -latr /media/demo-flash-drive
total 5863236
-rwxrwxrwx 2 root root 315185152 Oct 25 16:31 VMware-VMvisor-Installer-5.1.0-799733.x86_64.iso
-rwxrwxrwx 2 root root 4658798592 Oct 26 09:19 debian-6.0.6-amd64-DVD-1.iso
-rwxrwxrwx 2 root root 346011648 Oct 27 10:11 CentOS-6.3-x86_64-minimal.iso
-rwxrwxrwx 2 root root 307267584 Oct 27 11:30 CentOS-6.3-i386-minimal.iso
-rwxrwxrwx 2 root root 12489728 Oct 30 09:36 MobaXterm_Personal_6.0.exe
-rwxrwxrwx 2 root root 364181248 Nov 6 11:45 VMware-viclient-all-5.1.0-860230.exe
drwxrwxrwx 1 root root 4096 Nov 6 11:56 .
drwxr-xr-x 4 root root 4096 Nov 13 09:45 ..

At this point we are ready to export our DISPLAY and then launch the virt-manager GUI program:

root@xendom0:~# export DISPLAY=192.168.1.4:0.0
root@xendom0:~# virt-manager

Create the virtual machine – we will call ours centos-test – by using the virt-manager graphical interface; below is a synopsis of the screens and the values to provide for each prompt:

Screen
Response

Virtual Machine Manager
Click on the Domain-0 machine and select ‘New’ in the menu to ‘Create a new virtual machine.’

New VM – Step 1 of 5
Enter the name of the new virtual machine, centos-test, which we will be installing from local media; press ‘Forward.’

New VM – Step 2 of 5
Browse to where your ISO is, select ‘Linux’ for OS type, ‘Generic 2.6.x kernel’ for Version, press ‘Forward.’

New VM – Step 3 of 5
Choose desired amount of memory and desired number of CPUs, e.g. 1024 and 1, press ‘Forward.’

New VM – Step 4 of 5
Choose ‘Select managed or other existing storage,’ browse to the logical volume, /dev/vgdomu/centos-test-disk. Make sure that the ‘Enable storage for this machine’ and ‘Allocate entire disk now’ checkboxes are checked, click ‘Forward.’

New VM – Step 5 of 5
Check the ‘Customize configuration before install’ checkbox. Under ‘Advanced options,’ select ‘Specify shared device name.’ Enter the previously configured network bridge, ‘xenbr0.’ The ‘Set a fixed MAC address’ checkbox should be checked. Click ‘Finish.’

New VM – Customize
Select ‘Display 0’ from the hardware list and click ‘Remove.’ Click ‘Add Hardware.’ Choose ‘Graphics’ from ‘Hardware type,’ then click ‘Forward.’ ‘VNC server’ should be selected for ‘Type.’ Check the ‘Listen on all public network interfaces’ checkbox. ‘Automatically allocated’ should be checked as well. Click ‘Forward,’ then ‘Finish.’

centos-test Virtual Machine
Click ‘Finish Install’ to finalize the creation of the virtual machine. A ‘Creating Virtual Machine’ dialog will follow.

centos-test Console
The newly created virtual machine’s console window will appear and you should see the ‘Welcome to CentOS 6.3’ screen.

Installation Options
‘Install or upgrade an existing system’

Disk Found
‘Skip’

CentOS Installation Splash
‘Next’

Select a Language
‘English’

Configure Keyboard
‘U.S. English’

Installation devices
‘Basic Storage Devices’

Storage Device Warning
‘Yes, discard any data’

Hostname
Enter fully-qualified hostname, e.g., centos-test.colestock.test, click ‘Configure Network’

Network Connections
Select the Wired network interface, ‘System eth0,’ to edit it

Editing System eth0
Provide your values for address, netmask, gateway, DNS servers, and search domains; make sure ‘Connect automatically’ is checked. Click ‘Next.’ In my case, 192.168.1.201, 255.255.255.0, 192.168.1.1, 192.168.1.1, and colestock.test, respectively.

Time Zone
‘America/Denver’

Root Password
Provide and confirm root password

Type of Installation
‘Use All Space,’ make sure ‘Review and modify partition layout’ is checked, click ‘Next’

Modify Partition Layout
Make any desired changes to the partition layout; for example, you may want to enlarge the swap partition to accommodate more memory being allocated to the guest. Click ‘Next’

Format Warnings
‘Format’

Writing Storage Configuration to Disk
‘Write changes to disk’

Boot Loader
Make sure ‘Install boot loader on /dev/xxxx’ is checked, click ‘Next.’ The installer will begin installing packages.

Congratulations
‘Reboot’

Virtual Machine Manager
Confirm that the new domU (“virtual machine”), centos-test, is listed in the Virtual Machine Manager with Domain-0; both should say ‘Running.’

Verify Remote Connectivity:

From a remote machine, using a shell or client program, e.g., PuTTY, connect to the ‘centos-test’ guest as root using the ip address you used when installing:

# ssh root@192.168.1.201

Windows Example (Windows Server 2008, 64-bit)

From a remote client with access to an X Server, such as Reflection X, MobaXterm, etc., connect to the Xen system:

ssh -l root 192.168.1.200

Create a logical volume, windows-disk, on the ‘vgdomu’ volume group we created earlier:

root@xendom0:~# lvcreate -L 20G -n windows-disk vgdomu
Logical volume "windows-disk" create

We will be using the aforementioned logical volume for our virtual machine’s storage.

Transfer or make available to the Xen system the operating system ISO you wish to install your virtual machine from.

In my case, my ISOs are on a USB flash drive that I connect to the system.

Unmount any other device, if applicable:

root@xendom0:~# umount /media/demo-flash-drive

Mount the USB flash drive:

root@xendom0:~# mount -t ntfs-3g /dev/sdc1 /media/demo-flash-drive

I should now be able to access my desired ISO, Windows_Server_2008_Enterprise_amd64_trial.iso:

root@xendom0:~# ls -lart /media/demo-flash-drive
total 2600848
-rwxrwxrwx 2 root root 2663258112 Oct 30 17:13 Windows_Server_2008_Enterprise_amd64_trial.iso
drwxrwxrwx 1 root root 4096 Nov 6 11:54 .
drwxr-xr-x 4 root root 4096 Nov 13 09:45 ..

At this point we are ready to launch virt-manager; export DISPLAY and then launch the GUI program:

root@xendom0:~# export DISPLAY=192.168.1.4:0.0
root@xendom0:~# virt-manager

Create the virtual machine – we will call ours windows-server-2008 – by using the virt-manager graphical interface; below is a synopsis of the screens and the values to provide for each prompt:

Screen
Response

Virtual Machine Manager
Click on the Domain-0 machine and select 'New' in the menu to 'Create a new virtual machine.'

New VM - Step 1 of 5
Enter the name of the new virtual machine, windows-server-2008. We will be installing from local media, press 'Forward.'

New VM - Step 2 of 5
Browse to where your ISO is, select 'Windows' for OS type, 'Microsoft Windows 2008' for Version, press 'Forward.'

New VM - Step 3 of 5
Choose desired amount of Memory and desired number of CPUs, e.g. 1024 and 1, press 'Forward.'

New VM - Step 4 of 5
Choose 'Select managed or other existing storage,' browse to the logical volume, /dev/vgdomu/windows-disk. Make sure that the 'Enable storage for this machine' and 'Allocate entire disk now' checkboxes are checked, click 'Forward.'

New VM - Step 5 of 5
Check the 'Customize configuration before install' checkbox. Under 'Advanced options,' select 'Specify shared device name.' Enter the previously configured network bridge, 'xenbr0.' The 'Set a fixed MAC address' checkbox should be checked. Click 'Finish.'

New VM - Customize
Select 'Display 0' from the hardware list and click 'Remove.' Click 'Add Hardware.' Choose 'Graphics' from 'Hardware type,' then click 'Forward.' 'VNC server' should be selected for 'Type.' Check the 'Listen on all public network interfaces' checkbox. 'Automatically allocated' should be checked as well. Click 'Forward,' then 'Finish.'

windows-server-2008 Virtual Machine
Click 'Finish Install' to finalize the creation of the virtual machine. A 'Creating Virtual Machine' dialog will follow.

windows-server-2008 Console
The newly created virtual machine's console window will appear and you should see the 'Install Windows' screen.

Install Windows
'Install Now'

Install Windows (Product Key)
'Next'

Install Windows (Confirm)
'No'

Install Windows (Edition)
'Windows Server 2008 Enterprise (Full Installation)'

Install Windows (License)
Check 'I accept the license terms'; click 'Next'

Install Windows (Type)
'Custom (advanced)'

Install Windows (Location)
'Disk 0'; click 'Next.' The installer will begin copying files.

Install Windows (Restart)
'Restart now'

Install Windows (Change Password)
Change the password as the 'Administrator' user

Initial Configuration Tasks
(Time Zone)
Set the time and time zone, e.g., Mountain Time (GMT-0700)

Initial Configuration Tasks
(Configure Networking)
Configure networking, in particular ip address, network mask, gateway, dns server; in my case: 192.168.1.202, 255.255.255.0, 192.168.1.1, 192.168.1.1, respectively.

Initial Configuration Tasks
(Provide computer name)
Set the computer name, e.g., windows2008

Initial Configuration Tasks
(Enable Remote Desktop)
Choose 'Allow connections from computers running any version of Remote Desktop'

Virtual Machine Manager
Confirm that the new domU ("virtual machine"), windows-server-2008, is listed in the Virtual Machine Manager with Domain-0; both should say 'Running.'

Restart the computer for the hostname change to take effect.

Verify Remote Connectivity:

On a remote Windows-based client, remote desktop to the newly created virtual machine, e.g., windows-server-2008, by navigating to Start > All Programs > Accessories > Remote Desktop Connection and connecting to the ip address of the virtual machine, e.g., 192.168.1.202, using the 'windows2008\Administrator' credentials:

Remote Desktop Connection


How to Install MySQL from Source (5.5.21)


Introduction

Getting Started

This step-by-step guide shows how to build MySQL from source – version 5.5.21 – on Linux. I used a host named ‘lamp1’ running CentOS 5.7 (x64). This guide exhibits the granular control that may be achieved by building MySQL yourself. In my case, great care was taken to produce a layout that could support many MySQL daemons running different versions and so forth. Laying MySQL out this way provides a lot more flexibility, especially come upgrade time.

Required Software

The following software is used in this guide:

Automated Scripts

This guide shows you how to build MySQL from source in a keystroke-by-keystroke fashion. You may also use the automated scripts that I have created to go through this process; I suggest downloading the scripts and modifying them as appropriate for your target environment.


Pre-installation Steps

Create ‘mysql’ Group & User

First, we need to create the Operating System group and user that will own the MySQL software as well as run the MySQL processes:

[root@lamp1]$ groupadd -g 505 mysql
[root@lamp1]$ useradd -u 505 -g mysql mysql

Set the password for the newly created MySQL user:

[root@lamp1]$ passwd mysql

Create Directories

Recursively create the initial directories:

[root@lamp1]$ mkdir -p /u01/app/LAMP/mysql/build
[root@lamp1]$ chown -R mysql:mysql /u01/app/LAMP/mysql

Install/Update OS Packages

To build MySQL there are at least 3 Operating System package dependencies; install/update as appropriate:

[root@lamp1]$ yum install -y ncurses-devel libaio-devel cmake

Copy Source Distribution to Build Directory

Reference the Required Software section. Download the source distribution of MySQL, make sure the newly created ‘mysql’ user has access to it and then copy it to the ‘build’ directory:

[root@lamp1]$ chown mysql:mysql mysql-5.5.21.tar.gz
[root@lamp1]$ cp mysql-5.5.21.tar.gz /u01/app/LAMP/mysql/build/.


Install MySQL

Set Environment Variables

Switch to the newly created ‘mysql’ user:

[root@lamp1]$ su - mysql

In order to streamline the installation, we will set a series of environment variables that may be used throughout the installation process:

[mysql@lamp1]$ MYSQL_PORT=3309
[mysql@lamp1]$ MYSQL_VERSION=5.5.21
[mysql@lamp1]$ MYSQL_BASE=/u01/app/LAMP/mysql
[mysql@lamp1]$ MYSQL_HOME=$MYSQL_BASE/$MYSQL_VERSION
[mysql@lamp1]$ MYSQL_BUILD_DIR=$MYSQL_BASE/build/mysql-$MYSQL_VERSION
[mysql@lamp1]$ MYSQL_DEFAULT_CHARSET=utf8
[mysql@lamp1]$ MYSQL_DEFAULT_COLLATION=utf8_general_ci
[mysql@lamp1]$ MYSQL_DATA_DIR=$MYSQL_BASE/data/$MYSQL_VERSION
[mysql@lamp1]$ MYSQL_TMP_DIR=$MYSQL_BASE/tmp/$MYSQL_VERSION
[mysql@lamp1]$ MYSQL_LOG_DIR=$MYSQL_BASE/logs/$MYSQL_VERSION
[mysql@lamp1]$ MYSQL_PID_DIR=$MYSQL_BASE/pid/$MYSQL_VERSION
[mysql@lamp1]$ MYSQL_LOGBIN_DIR=$MYSQL_BASE/log-bin/$MYSQL_VERSION
[mysql@lamp1]$ MYSQL_SOCKET_DIR=$MYSQL_BASE/socket/$MYSQL_VERSION
[mysql@lamp1]$ MYSQL_SOCKET=$MYSQL_SOCKET_DIR/mysql-$MYSQL_VERSION.sock
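
These variables only live in the current shell. If you expect to return to the build in a later session, it can help to keep the assignments in a small file you can source again; a minimal sketch, using a hypothetical file name:

[mysql@lamp1]$ vi ~/mysql-5.5.21-env.sh    # paste the variable assignments above into this file
[mysql@lamp1]$ . ~/mysql-5.5.21-env.sh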

Create Directories

One of the primary reasons for building MySQL from source is to have control over the default locations for files and so forth. Create the target directories that we will use for our server:

[mysql@lamp1]$ mkdir -p $MYSQL_HOME
[mysql@lamp1]$ mkdir -p $MYSQL_DATA_DIR
[mysql@lamp1]$ mkdir -p $MYSQL_TMP_DIR
[mysql@lamp1]$ mkdir -p $MYSQL_LOG_DIR
[mysql@lamp1]$ mkdir -p $MYSQL_LOGBIN_DIR
[mysql@lamp1]$ mkdir -p $MYSQL_PID_DIR
[mysql@lamp1]$ mkdir -p $MYSQL_SOCKET_DIR

Additionally, create the socket file used to connect to MySQL locally:

[mysql@lamp1]$ touch $MYSQL_SOCKET

Build MySQL

MySQL is built from source using a cross-platform, open-source tool called ‘cmake.’ When building MySQL using cmake there are a variety of configuration options; reference the MySQL reference manual under “MySQL Source-Configuration Options” for a comprehensive list. In my case, I am basically changing the default port, installation prefix & file locations, and installing storage engines over and above the defaults (MyISAM, Merge, Memory, and CSV storage engines are always included).

Cmake

Make sure ‘cmake’ is in the path:

[mysql@lamp1]$ export PATH=/usr/bin:$PATH

Change directory to the ‘build’ directory where you placed the source distribution:

[mysql@lamp1]$ cd $MYSQL_BASE/build

Extract the source distribution:

[mysql@lamp1]$ tar -xvf mysql-$MYSQL_VERSION.tar.gz

Change directory to the newly created version-specific ‘build’ directory:

[mysql@lamp1]$ cd $MYSQL_BUILD_DIR

Run ‘cmake’ providing the options specific to your environment:

[mysql@lamp1]$ cmake -DCMAKE_INSTALL_PREFIX=$MYSQL_HOME -DMYSQL_TCP_PORT=$MYSQL_PORT -DWITH_ARCHIVE_STORAGE_ENGINE=1 -DWITH_INNOBASE_STORAGE_ENGINE=1 -DWITH_PARTITION_STORAGE_ENGINE=1 -DDEFAULT_CHARSET=$MYSQL_DEFAULT_CHARSET -DDEFAULT_COLLATION=$MYSQL_DEFAULT_COLLATION -DMYSQL_UNIX_ADDR=$MYSQL_SOCKET
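
One gotcha worth noting: if you need to re-run ‘cmake’ with different options, delete the generated cache first so stale values from the previous run don’t persist (a sketch, assuming the in-source build used here):

[mysql@lamp1]$ rm -f $MYSQL_BUILD_DIR/CMakeCache.txt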

Make

Run ‘make’ (this will take some time!):

[mysql@lamp1]$ make

Make Install

Run ‘make install’:

[mysql@lamp1]$ make install

Update Profile

This guide assumes that there is a ‘.bash_profile’ script being used for the ‘mysql’ user’s environment. Modify the ‘.bash_profile’ script to add an environment variable named ‘MYSQL_HOME,’ adding it to the ‘PATH’ as well:

[mysql@lamp1]$ sed -i -e '/MYSQL_HOME/d' $HOME/.bash_profile
[mysql@lamp1]$ echo "export MYSQL_HOME=$MYSQL_HOME" >> $HOME/.bash_profile
[mysql@lamp1]$ echo 'export PATH=$MYSQL_HOME/bin:$MYSQL_HOME/scripts:$PATH' >> $HOME/.bash_profile

Source the environment:

[mysql@lamp1]$ . ~/.bash_profile

Create System Database

Create the system database and tables:

[mysql@lamp1]$ mysql_install_db --basedir=$MYSQL_HOME --datadir=$MYSQL_DATA_DIR --tmpdir=$MYSQL_TMP_DIR --user=$USER

Update ‘defaults’ File

Update the ‘defaults’ (.cnf) file as appropriate for your environment. In my case, I use ‘sed’ to add and modify numerous lines, starting from a stock ‘defaults’ (.cnf) file.

First copy the stock ‘defaults’ file of your choice from the ‘support-files’ directory to your ‘$MYSQL_HOME’:

[mysql@lamp1]$ cp $MYSQL_HOME/support-files/my-small.cnf $MYSQL_HOME/my.cnf

Update the newly copied ‘defaults’ file’s values as appropriate; I am changing the location of files, enabling ‘innodb’ storage engine options, etc.:

[mysql@lamp1]$ sed -i -e 's/^#inno/inno/g' $MYSQL_HOME/my.cnf
[mysql@lamp1]$ sed -i -e "s|^#log-bin=|log-bin=$MYSQL_LOGBIN_DIR/|g" $MYSQL_HOME/my.cnf
[mysql@lamp1]$ sed -i -e 's/#binlog_format/binlog_format/g' $MYSQL_HOME/my.cnf
[mysql@lamp1]$ sed -i -e "s|$MYSQL_HOME/data|$MYSQL_DATA_DIR|g" $MYSQL_HOME/my.cnf
[mysql@lamp1]$ sed -i -e "s|\[mysqld\]|[mysqld]\ndatadir=$MYSQL_DATA_DIR|g" $MYSQL_HOME/my.cnf
[mysql@lamp1]$ sed -i -e "s|\[mysqld\]|[mysqld]\ngeneral-log-file=$MYSQL_LOG_DIR/mysql-$MYSQL_VERSION.log|g" $MYSQL_HOME/my.cnf
[mysql@lamp1]$ sed -i -e "s|\[mysqld\]|[mysqld]\ngeneral-log=ON|g" $MYSQL_HOME/my.cnf
[mysql@lamp1]$ sed -i -e "s|\[mysqld\]|[mysqld]\nlog-error=$MYSQL_LOG_DIR/mysql-$MYSQL_VERSION.err|g" $MYSQL_HOME/my.cnf
[mysql@lamp1]$ sed -i -e "s|\[mysqld\]|[mysqld]\npid-file=$MYSQL_PID_DIR/mysql-$MYSQL_VERSION.pid|g" $MYSQL_HOME/my.cnf

Start MySQL

Start the ‘MySQL’ daemon:

[mysql@lamp1]$ mysqld_safe --defaults-file=$MYSQL_HOME/my.cnf &

Secure MySQL

Secure the server by running the provided utility, ‘mysql_secure_installation.’ Since the server was just installed, the root password will be blank (simply press ‘enter’ when prompted by this script). When prompted, choose ‘Y’ for all options and also provide (and confirm) a password for the ‘root’ user:

[mysql@lamp1]$ mysql_secure_installation --socket=$MYSQL_SOCKET -u root
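
As an optional smoke test (not part of the original steps), you can confirm the server answers on both the custom socket and the custom TCP port configured earlier; output will vary:

[mysql@lamp1]$ mysql --socket=$MYSQL_SOCKET -u root -p -e "SELECT VERSION();"
[mysql@lamp1]$ mysql --host=127.0.0.1 --port=$MYSQL_PORT -u root -p -e "SHOW VARIABLES LIKE 'datadir';"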


Post-installation Steps

Enable Automatic Startup/Shutdown

We will set-up a version-specific service – mysql-5.5.21 – using the script provided with the MySQL software.

Set a few environment variables to streamline the process:

[root@lamp1]$ MYSQL_VERSION=5.5.21
[root@lamp1]$ MYSQL_BASE=/u01/app/LAMP/mysql
[root@lamp1]$ MYSQL_HOME=$MYSQL_BASE/$MYSQL_VERSION

Copy/rename the script provided with the MySQL software:

[root@lamp1]$ cp $MYSQL_HOME/support-files/mysql.server /etc/init.d/mysql-$MYSQL_VERSION

Run ‘chkconfig’:

[root@lamp1]$ chkconfig --add mysql-$MYSQL_VERSION

Stop & Start the service to test:

[root@lamp1]$ service mysql-$MYSQL_VERSION stop
[root@lamp1]$ service mysql-$MYSQL_VERSION start
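
You can also confirm the service registration and runtime state; the ‘mysql.server’ script behind the service supports a ‘status’ argument:

[root@lamp1]$ chkconfig --list mysql-$MYSQL_VERSION
[root@lamp1]$ service mysql-$MYSQL_VERSION status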

At this point, you should have a fully-functioning MySQL server that will automatically shut down and start whenever the system is rebooted, etc.

Posted in MySQL

How to Upgrade Oracle RAC 11.2.0.2 to 11.2.0.3


Introduction

Getting Started

This step-by-step guide shows how to upgrade an existing Oracle RAC cluster from 11.2.0.2 to 11.2.0.3 on Linux. It is assumed that the cluster is ‘Administrator-Managed’ and that the software is non-shared. The guide also assumes job role separation under which ‘grid’ is the clusterware owner and ‘oracle’ is the database software owner. This guide uses a 3-node cluster running CentOS 5.7 (x64). There are three nodes: ‘rac1′, ‘rac2′, and ‘rac3′. First, a number of prerequisites for upgrading are satisfied. Next, the clusterware is upgraded in a ‘rolling’ fashion – one node at a time; ASM instances are concurrently upgraded at this time. Next, the database software is upgraded. Finally, existing databases – running on the previous version (11.2.0.2) – are taken offline, upgraded (to 11.2.0.3), and subsequently put back into service. The guide uses the ‘out-of-place’ upgrade method, where the patchset completely reinstalls the software, in contrast to previous patchsets that were applied to pre-existing software homes.

Required Software

The following software is used in this guide; available from Metalink:

  • Patch 6880880 (OPatch)
  • Patch 12539000 (Patch for Bug #12539000)
  • Patch 10404530 (11.2.0.3 Patchset, Disks 1-3); Clusterware is Disk 3; Database is Disks 1 & 2

Satisfy 11.2.0.3 Patchset Prerequisites

Update OPatch (6880880) on all Nodes

OPatch needs to be updated on all nodes and all software homes in order to apply the patch for Bug #12539000 – a prerequisite for applying the 11.2.0.3 patchset.

Create a common location for patches on all nodes:

[root@rac1]$ mkdir -p /u01/app/patches

[root@rac1]$ chgrp oinstall /u01/app/patches

[root@rac1]$ chmod 755 /u01/app/patches

On each remote node – in this case rac2 and rac3 – perform the same:

[root@rac1]$ ssh -l root rac2 'mkdir -p /u01/app/patches; chgrp oinstall /u01/app/patches; chmod 755 /u01/app/patches'
[root@rac1]$ ssh -l root rac3 'mkdir -p /u01/app/patches; chgrp oinstall /u01/app/patches; chmod 755 /u01/app/patches'

Copy the patch to the newly created common patch location:

[root@rac1]$ cp p6880880_112000_Linux-x86-64.zip /u01/app/patches/

[root@rac1]$ chown -R root:oinstall /u01/app/patches

[root@rac1]$ chmod -R 755 /u01/app/patches

On each remote node – in this case rac2 and rac3 – perform the same:

[root@rac1]$ scp p6880880_112000_Linux-x86-64.zip root@rac2:/u01/app/patches/
[root@rac1]$ scp p6880880_112000_Linux-x86-64.zip root@rac3:/u01/app/patches/

Unzip the OPatch update into the GRID HOME as the owner of the ‘grid’ software:

[grid@rac1]$ export GRID_HOME=/u01/app/11.2.0/grid

[grid@rac1]$ cd /u01/app/patches

[grid@rac1 patches]$ unzip -o p6880880_112000_Linux-x86-64.zip -d $GRID_HOME

On each remote node – in this case rac2 and rac3 – perform the same:

[grid@rac1]$ ssh rac2 "cd /u01/app/patches/; unzip -o p6880880_112000_Linux-x86-64.zip -d $GRID_HOME"
[grid@rac1]$ ssh rac3 "cd /u01/app/patches/; unzip -o p6880880_112000_Linux-x86-64.zip -d $GRID_HOME"

Note that the ‘-o’ switch overwrites existing files without prompting.

Unzip the OPatch update into the ORACLE HOME as the owner of the ‘oracle’ software:

[oracle@rac1]$ cd /u01/app/patches

[oracle@rac1]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/db_1

[oracle@rac1]$ unzip -o p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME

On each remote node – in this case rac2 and rac3 – perform the same:

[oracle@rac1]$ ssh rac2 "cd /u01/app/patches/; unzip -o p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME"
[oracle@rac1]$ ssh rac3 "cd /u01/app/patches/; unzip -o p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME"
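
Optionally, verify that each home now reports the updated OPatch version (run in both the GRID HOME and the ORACLE HOME as the respective owners):

[grid@rac1]$ $GRID_HOME/OPatch/opatch version
[oracle@rac1]$ $ORACLE_HOME/OPatch/opatch version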

Apply Patch for Bug #12539000 on all Nodes

Patch 12539000 needs to be applied to the clusterware and database homes on all nodes in order to apply the 11.2.0.3 patchset. We will apply this patch in a rolling fashion.

Propagate and Unzip Software

Copy the patch to the common patch location:

[root@rac1]$ cp p12539000_112020_Linux-x86-64.zip /u01/app/patches/

On each remote node – in this case rac2 and rac3 – perform the same:

[root@rac1]$ scp p12539000_112020_Linux-x86-64.zip root@rac2:/u01/app/patches/
[root@rac1]$ scp p12539000_112020_Linux-x86-64.zip root@rac3:/u01/app/patches/

Unzip the patch into the common patch location:

[root@rac1]$ cd /u01/app/patches/

[root@rac1 patches]$ unzip p12539000_112020_Linux-x86-64.zip

[root@rac1 patches]$ chown -R root:oinstall 12539000

[root@rac1 patches]$ chmod -R g+r 12539000

On each remote node – in this case rac2 and rac3 – perform the same:

[root@rac1]$ ssh -l root rac2 "cd /u01/app/patches/; unzip p12539000_112020_Linux-x86-64.zip; chown -R root:oinstall 12539000; chmod -R g+r 12539000"
[root@rac1]$ ssh -l root rac3 "cd /u01/app/patches/; unzip p12539000_112020_Linux-x86-64.zip; chown -R root:oinstall 12539000; chmod -R g+r 12539000"

Apply the Patch

Perform the next series of steps on each node, one-by-one, in order to apply the patch to both the clusterware (GRID HOME) and database software (ORACLE HOME) homes.

Stop Grid-managed Resources

Stop any resources associated with the ORACLE_HOME on the local node (as ‘oracle’):

[oracle@rac1]$ srvctl stop home -o $ORACLE_HOME -s /tmp/`hostname`.state -n rac1

Pre “root” Script

Run the following script as ‘root’:

[root@rac1]$ export GRID_HOME=/u01/app/11.2.0/grid

[root@rac1]$ $GRID_HOME/crs/install/rootcrs.pl -unlock

Apply the Patch (Grid Home)

Apply patch 12539000 to the local GRID HOME as the ‘grid’ software owner:

[grid@rac1]$ export GRID_HOME=/u01/app/11.2.0/grid

[grid@rac1]$ $GRID_HOME/OPatch/opatch napply -oh $GRID_HOME -local /u01/app/patches/12539000

Apply the Patch (Database Home)

Apply patch 12539000 to the local ORACLE HOME as the ‘oracle’ software owner:

[oracle@rac1]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/db_1

[oracle@rac1]$ $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/app/patches/12539000
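
To confirm the patch is now registered in a given home, you can (optionally) check the OPatch inventory:

[oracle@rac1]$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 12539000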

Post “root” Scripts

Run the following script as ‘root’:

[root@rac1]$ export GRID_HOME=/u01/app/11.2.0/grid

[root@rac1]$ $GRID_HOME/rdbms/install/rootadd_rdbms.sh

Due to unpublished Bug #10011084 and unpublished Bug #10323264, perform the following steps per Metalink Note 1268390.1:

“Take a backup of the file <GRID_HOME>/crs/install/crsconfig_lib.pm

# cd <GRID_HOME>/crs/install
# cp crsconfig_lib.pm crsconfig_lib.pm.bak

Make the following change in that file crsconfig_lib.pm

From
my @exp_func = qw(check_CRSConfig validate_olrconfig validateOCR
To
my @exp_func = qw(check_CRSConfig validate_olrconfig validateOCR read_file”
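
If you prefer to script the one-line change, a ‘sed’ command along the following lines should work – a sketch that assumes the target line ends exactly as quoted above (the backup copy has already been taken):

[root@rac1]$ cd /u01/app/11.2.0/grid/crs/install
[root@rac1 install]$ sed -i -e 's/\(my @exp_func = qw(check_CRSConfig validate_olrconfig validateOCR\)$/\1 read_file/' crsconfig_lib.pm
[root@rac1 install]$ grep 'validateOCR read_file' crsconfig_lib.pm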

Failure to perform the above will result in errors while running the ‘rootcrs.pl’ script. Modify the perl module per the above and run as ‘root’:

[root@rac1]$ /u01/app/11.2.0/grid/crs/install/rootcrs.pl -patch

Restart Grid-managed Resources

Restart resources associated with the ORACLE_HOME (as ‘oracle’):

[oracle@rac1]$ srvctl start home -o $ORACLE_HOME -s /tmp/`hostname`.state -n rac1

Again, perform the previous steps on each node of the cluster, one-by-one.


Apply 11.2.0.3 Patchset

Perform the following steps to upgrade both the clusterware and database software from 11.2.0.2 to 11.2.0.3. Afterwards, the database itself will be upgraded; during that window, the database and any applicable services will be unavailable to end-users.

Apply Patchset (Grid Home)

Verify Prerequisites

First, stage the software (as the ‘grid’ owner):

[grid@rac1]$ cd /u01/app/grid/software

[grid@rac1 software]$ unzip p10404530_112030_Linux-x86-64_3of7.zip

Run the cluster verification utility ‘cluvfy’ to ensure that the cluster is ready for upgrade:

[grid@rac1 software]$ cd grid

[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -n rac1,rac2,rac3 -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u01/app/11.2.0.3/grid -dest_version 11.2.0.3.0 -fixup -fixupdir /tmp -verbose

A successful run should yield ‘Pre-check for cluster services setup was successful.’

Apply the Patch

The patchset/upgrade is an “out-of-place” upgrade in which the software is completely re-installed and then migrated to. In this case, we will be installing the 11.2.0.3 software into a new software home: ‘/u01/app/11.2.0.3.’

Create Target Directories

Make the target installation directory on all nodes:

[root@rac1]$ mkdir -p /u01/app/11.2.0.3

[root@rac1]$ chown grid:oinstall /u01/app/11.2.0.3

On each remote node – in this case rac2 and rac3 – perform the same:

[root@rac1]$ ssh -l root rac2 'mkdir -p /u01/app/11.2.0.3'
[root@rac1]$ ssh -l root rac2 'chown grid:oinstall /u01/app/11.2.0.3'

[root@rac1]$ ssh -l root rac3 'mkdir -p /u01/app/11.2.0.3'
[root@rac1]$ ssh -l root rac3 'chown grid:oinstall /u01/app/11.2.0.3'

Prepare Environment

Prepare the shell environment – including DISPLAY – before beginning the software installation (as the ‘grid’ owner):

[grid@rac1]$ export LD_LIBRARY_PATH=/lib:/usr/lib:/usr/local/lib
[grid@rac1]$ unset ORA_CRS_HOME
[grid@rac1]$ unset ORACLE_BASE
[grid@rac1]$ unset ORACLE_HOME
[grid@rac1]$ unset ORACLE_SID
[grid@rac1]$ export DISPLAY=192.168.1.105:0.0

Install the Software

Install the new clusterware software by invoking the Oracle Universal Installer:

[grid@rac1]$ cd /u01/app/grid/software/grid
[grid@rac1 grid]$ ./runInstaller

The installer presents a series of screens; below is a synopsis of the screens and the values to provide at each prompt:

  • Download Software Updates: ‘Skip software updates’
  • Select Installation Option: ‘Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management’
  • Select Product Languages: ‘Next’
  • Grid Infrastructure Node Selection: ‘Next’
  • Privileged Operating System Groups: ‘Next’
  • Specify Installation Location: ‘/u01/app/11.2.0.3′; ‘Next’
  • Summary: ‘Next’
  • Install Product: N/A

Upgrade “root” Script

The post “root” script will migrate the node’s clusterware to the new software home. During this process the node will be taken out of service while ASM is upgraded and the clusterware stack is stopped and subsequently restarted under the newly installed software home. When prompted run the following script as root on each node, one-by-one:

[root@rac1]$ /u01/app/11.2.0.3/grid/rootupgrade.sh

On each remote node – in this case rac2 and rac3 – perform the same:

[root@rac1]$ ssh -l root rac2 /u01/app/11.2.0.3/grid/rootupgrade.sh
[root@rac1]$ ssh -l root rac3 /u01/app/11.2.0.3/grid/rootupgrade.sh

The ‘rootupgrade.sh’ script upgrades the Cluster Registry on the last node.

Verify Upgrade

After applying the patchset, ASM and CRS should show 11.2.0.3 as their “active” versions:

[grid@rac1]$ sqlplus "/ as sysasm"
SQL> select version from v$instance;
VERSION
-----------------
11.2.0.3.0

[grid@rac1]$ $GRID_HOME/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
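
Each node’s installed software version can (optionally) be checked as well; expect output similar to:

[grid@rac1]$ $GRID_HOME/bin/crsctl query crs softwareversion rac1
Oracle Clusterware version on node [rac1] is [11.2.0.3.0]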

Update dependent files

Update files that depend upon the clusterware version, namely the ‘grid’ user’s profile. On each node, update the profile with the new software location:

[grid@rac1]$ sed -i -e 's/11.2.0/11.2.0.3/g' $HOME/.bash_profile

[grid@rac1]$ ssh rac2 "sed -i -e 's/11.2.0/11.2.0.3/g' $HOME/.bash_profile"

[grid@rac1]$ ssh rac3 "sed -i -e 's/11.2.0/11.2.0.3/g' $HOME/.bash_profile"

Remove OLD Grid Home

Modify permissions of OLD Grid Home on all nodes, as ‘root’:

[root@rac1]$ chmod -R 755 /u01/app/11.2.0/grid
[root@rac1]$ chown -R grid /u01/app/11.2.0/grid
[root@rac1]$ chown grid /u01/app/11.2.0

On each remote node – in this case rac2 and rac3 – perform the same:

[root@rac1]$ ssh -l root rac2 "chmod -R 755 /u01/app/11.2.0/grid; chown -R grid /u01/app/11.2.0/grid; chown grid /u01/app/11.2.0"
[root@rac1]$ ssh -l root rac3 "chmod -R 755 /u01/app/11.2.0/grid; chown -R grid /u01/app/11.2.0/grid; chown grid /u01/app/11.2.0"

Run ‘deinstall’ from OLD Grid Home, as the ‘grid’ software owner:

[grid@rac1]$ /u01/app/11.2.0/grid/deinstall

Remove the empty directory from all nodes as ‘root’:

[root@rac1]$ rmdir /u01/app/11.2.0

[root@rac1]$ ssh -l root rac2 'rmdir /u01/app/11.2.0'

[root@rac1]$ ssh -l root rac3 'rmdir /u01/app/11.2.0'

Update Grid Software Inventory

Many subsequent steps require that Oracle’s software inventory reflect that each node is part of a cluster. After applying the patchset to the clusterware home, the inventory may not reflect this; run the following as the ‘grid’ software owner to update it as appropriate:

[grid@rac1]$ $GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME CRS=true

Apply Patchset (Oracle Home)

Unzip Software

First, stage the software (as the ‘oracle’ owner):

[oracle@rac1]$ cd /u01/app/oracle/software; unzip p10404530_112030_Linux-x86-64_1of7.zip; unzip p10404530_112030_Linux-x86-64_2of7.zip

Apply the Patch

The patchset/upgrade is an “out-of-place” upgrade in which the software is completely re-installed and then migrated to. In this case, we will be installing the 11.2.0.3 software into a new software home: ‘/u01/app/oracle/product/11.2.0.3/db_1.’ After the software is installed, each instance is migrated to the new software home then the database is taken offline, upgraded and subsequently restarted.

Create Target Directories

Make the target installation directory on all nodes:

[oracle@rac1]$ mkdir -p /u01/app/oracle/product/11.2.0.3/db_1

[oracle@rac1]$ ssh rac2 'mkdir -p /u01/app/oracle/product/11.2.0.3/db_1'

[oracle@rac1]$ ssh rac3 'mkdir -p /u01/app/oracle/product/11.2.0.3/db_1'

Prepare Environment

Prepare the shell environment – including DISPLAY – before beginning the software installation (as the ‘oracle’ owner):

[oracle@rac1]$ unset ORACLE_HOME
[oracle@rac1]$ unset ORACLE_BASE
[oracle@rac1]$ unset ORACLE_SID
[oracle@rac1]$ export LD_LIBRARY_PATH=/lib:/usr/lib:/usr/local/lib
[oracle@rac1]$ export PATH=.:/usr/local/java/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
[oracle@rac1]$ export DISPLAY=192.168.1.105:0.0

The gist here is that we are unsetting key variables and removing the OLD ORACLE_HOME from the PATH and LD_LIBRARY_PATH.

Install the Software

Install the new oracle database software by invoking the Oracle Universal Installer:

[oracle@rac1]$ cd /u01/app/oracle/software/database
[oracle@rac1 database]$ ./runInstaller

The installer presents a series of screens; below is a synopsis of the screens and the values to provide at each prompt:

  • Configure Security Updates: uncheck ‘I wish to receive security updates via My Oracle Support’
  • Download Software Updates: ‘Skip Software Updates’
  • Select Installation Option: ‘Install database software only’
  • Grid Installation Options: ‘Next’
  • Select Product Languages: ‘Next’
  • Select Database Edition: ‘Enterprise Edition’
  • Specify Installation Location: ‘/u01/app/oracle/product/11.2.0.3/db_1′; ‘Next’
  • Privileged Operating System Groups: ‘Next’
  • Summary: ‘Next’
  • Perform Prerequisite Checks: ‘Install’
  • Summary: ‘Next’
  • Install Product: N/A

When prompted run the ‘root.sh’ script on each of the nodes as ‘root’:

[root@rac1]$ /u01/app/oracle/product/11.2.0.3/db_1/root.sh

[oracle@rac1]$ ssh -l root rac2 /u01/app/oracle/product/11.2.0.3/db_1/root.sh

[oracle@rac1]$ ssh -l root rac3 /u01/app/oracle/product/11.2.0.3/db_1/root.sh

If using job role separation, then update the ‘oracle’ binary of the database software home on all nodes as ‘root’:

[root@rac1]$ chgrp asmadmin /u01/app/oracle/product/11.2.0.3/db_1/bin/oracle
[root@rac1]$ chmod 6751 /u01/app/oracle/product/11.2.0.3/db_1/bin/oracle

On each remote node – in this case rac2 and rac3 – perform the same:

[root@rac1]$ ssh -l root rac2 'chgrp asmadmin /u01/app/oracle/product/11.2.0.3/db_1/bin/oracle; chmod 6751 /u01/app/oracle/product/11.2.0.3/db_1/bin/oracle'
[root@rac1]$ ssh -l root rac3 'chgrp asmadmin /u01/app/oracle/product/11.2.0.3/db_1/bin/oracle; chmod 6751 /u01/app/oracle/product/11.2.0.3/db_1/bin/oracle'

Afterwards, a listing of the file should resemble the following:

[root@rac1]$ ls -lart /u01/app/oracle/product/11.2.0.3/db_1/bin/oracle
-rwsr-s--x 1 oracle asmadmin 232399431 Feb 23 19:33 /u01/app/oracle/product/11.2.0.3/db_1/bin/oracle

Manually Upgrade Database(s)

Now that the new version of the database software is in place, each database on the cluster needs to be moved over and upgraded per the following general process.

Pre-upgrade Information Tool

Log in to the database via the existing ORACLE_HOME and run the ‘utlu112i.sql’ script to determine whether the database is ready to be upgraded:

SQL> @/u01/app/oracle/product/11.2.0.3/db_1/rdbms/admin/utlu112i.sql

Take any corrective actions that are necessary per the output; for example, I needed to increase the ‘memory_target’ parameter of my instances:

SQL> alter system set memory_target=620M scope=spfile sid='*';

System altered.

Migrate instances’ files to NEW Oracle Home

Copy the ‘init.ora’ and password files associated with the instance into the new Oracle Home:

[oracle@rac1]$ export NEW_ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db_1

[oracle@rac1]$ cp $ORACLE_HOME/dbs/initracdb* $NEW_ORACLE_HOME/dbs/.

[oracle@rac1]$ cp $ORACLE_HOME/dbs/orapwrac* $NEW_ORACLE_HOME/dbs/.

On each remote node – in this case rac2 and rac3 – perform the same:

[oracle@rac1]$ ssh rac2 "cp $ORACLE_HOME/dbs/orapwrac* $NEW_ORACLE_HOME/dbs/."
[oracle@rac1]$ ssh rac3 "cp $ORACLE_HOME/dbs/orapwrac* $NEW_ORACLE_HOME/dbs/."

Stop Database/Service(s)

Stop the database you wish to upgrade including any dependent client-facing services; in my case, ‘racdb’ and ‘racsvc.colestock.test,’ respectively:

[oracle@rac1]$ srvctl stop service -d racdb -s racsvc.colestock.test

[oracle@rac1]$ srvctl stop database -d racdb

Update dependent files

Update ‘/etc/oratab’ to reflect new software location on all nodes:

[oracle@rac1]$ export NEW_ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db_1
[oracle@rac1]$ sed -e "s|$ORACLE_HOME|$NEW_ORACLE_HOME|g" /etc/oratab > /tmp/oratab; cat /tmp/oratab > /etc/oratab

[oracle@rac2]$ export NEW_ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db_1
[oracle@rac2]$ sed -e "s|$ORACLE_HOME|$NEW_ORACLE_HOME|g" /etc/oratab > /tmp/oratab; cat /tmp/oratab > /etc/oratab

[oracle@rac3]$ export NEW_ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db_1
[oracle@rac3]$ sed -e "s|$ORACLE_HOME|$NEW_ORACLE_HOME|g" /etc/oratab > /tmp/oratab; cat /tmp/oratab > /etc/oratab

The above assumes that there is only one database to upgrade.

Update files that depend upon the database software version, namely the ‘oracle’ user’s profile. On each node, update the profile with the new software location:

[oracle@rac1]$ sed -i -e 's/11.2.0/11.2.0.3/g' $HOME/.bash_profile

[oracle@rac1]$ ssh rac2 "sed -i -e 's/11.2.0/11.2.0.3/g' $HOME/.bash_profile"

[oracle@rac1]$ ssh rac3 "sed -i -e 's/11.2.0/11.2.0.3/g' $HOME/.bash_profile"

Upgrade Database

Source the environment for the database instance:

[oracle@rac1]$ export ORACLE_SID=racdb1; . oraenv

Verify shell environment uses NEW Oracle database software home:

[oracle@rac1]$ echo $ORACLE_SID
racdb1

[oracle@rac1]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0.3/db_1

[oracle@rac1]$ echo $PATH
.:/usr/local/java/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin:/u01/app/oracle/product/11.2.0.3/db_1/bin:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin:/u01/app/oracle/dba_scripts/common/bin

[oracle@rac1]$ echo $LD_LIBRARY_PATH
/u01/app/oracle/product/11.2.0.3/db_1/lib:/u01/app/oracle/product/11.2.0.3/db_1/oracm/lib:/lib:/usr/lib:/usr/local/lib

Start the instance, disable ‘cluster_database’ in the spfile, then restart in ‘Upgrade’ mode:

[oracle@rac1]$ sqlplus "/ as sysdba"

SQL> startup nomount;

SQL> alter system set cluster_database=false scope=spfile sid='racdb1';

SQL> shutdown immediate;

[oracle@rac1]$ sqlplus "/ as sysdba"

SQL> startup upgrade;

Run the ‘catupgrd.sql’ script to upgrade the database:

SQL> @$ORACLE_HOME/rdbms/admin/catupgrd.sql

Upon completion, ‘startup’ the database to start a new, fresh instance:

[oracle@rac1]$ sqlplus "/ as sysdba"

SQL> startup

Run ‘catuppst.sql’ script to perform upgrade steps that don’t require being in ‘Upgrade’ mode:

SQL> @$ORACLE_HOME/rdbms/admin/catuppst.sql

Run the post-upgrade status tool:

SQL> @$ORACLE_HOME/rdbms/admin/utlu112s.sql

A successful upgrade yields output similar to the following:

Oracle Database 11.2 Post-Upgrade Status Tool    02-24-2012 14:02:02
.
Component                          Current      Version     Elapsed Time
Name                               Status       Number      HH:MM:SS
.
Oracle Server
.                                  VALID        11.2.0.3.0  02:34:13
Oracle Real Application Clusters
.                                  VALID        11.2.0.3.0  00:00:18
Gathering Statistics
.                                               00:15:36
Total Upgrade Time: 02:50:11

PL/SQL procedure successfully completed.

Recompile the database and check for any invalid objects:

SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

SQL> select count(*) from dba_invalid_objects;

COUNT(*)
----------
0

Update Oracle Cluster Registry (OCR)

Update Oracle Cluster Registry with new software location for the database:

[oracle@rac1]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0.3/db_1

[oracle@rac1]$ srvctl upgrade database -d racdb -o $ORACLE_HOME
[oracle@rac1]$ srvctl config database -d racdb -v
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0.3/db_1
Oracle user: oracle
Spfile: +DATA/racdb/spfileracdb.ora
Domain: colestock.test
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2,racdb3
Disk Groups: DATA,FRA
Mount point paths:
Services: racsvc.colestock.test
Type: RAC
Database is administrator managed

Start Database/Service(s)

Restart the database you upgraded, including any dependent client-facing services:

[oracle@rac1]$ sqlplus "/ as sysdba"

SQL> alter system set cluster_database=TRUE scope=spfile sid='racdb1';

System altered.

SQL> shutdown immediate;

[oracle@rac1]$ srvctl start database -d racdb

Verify Upgrade

Verify that the database and services are operational and running on the new version:

[oracle@rac1]$ srvctl status database -d racdb -v
Instance racdb1 is running on node rac1 with online services racsvc.colestock.test. Instance status: Open.
Instance racdb2 is running on node rac2 with online services racsvc.colestock.test. Instance status: Open.
Instance racdb3 is running on node rac3 with online services racsvc.colestock.test. Instance status: Open.

[oracle@rac1]$ srvctl status service -d racdb -s racsvc.colestock.test
Service racsvc.colestock.test is running on instance(s) racdb1,racdb2,racdb3

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

Remove OLD Database Home

Once all databases have been upgraded, you can delete the OLD Oracle Home from all nodes using the following steps.

First, set ‘OLD_ORACLE_HOME’ to the pre-upgrade software home, then “detach” that home from all nodes:

[oracle@rac1]$ export OLD_ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
[oracle@rac1]$ $OLD_ORACLE_HOME/oui/bin/runInstaller -detachHome ORACLE_HOME=$OLD_ORACLE_HOME -silent -local
[oracle@rac1]$ ssh rac2 "$OLD_ORACLE_HOME/oui/bin/runInstaller -detachHome ORACLE_HOME=$OLD_ORACLE_HOME -silent -local"
[oracle@rac1]$ ssh rac3 "$OLD_ORACLE_HOME/oui/bin/runInstaller -detachHome ORACLE_HOME=$OLD_ORACLE_HOME -silent -local"

Remove the software directory from all nodes:

[oracle@rac1]$ rm -Rf $OLD_ORACLE_HOME
[oracle@rac1]$ ssh rac2 "rm -Rf $OLD_ORACLE_HOME"
[oracle@rac1]$ ssh rac3 "rm -Rf $OLD_ORACLE_HOME"

Posted in 11G, High Availability, Oracle, RAC

How to Add a Node to Oracle RAC (11gR2)


Introduction

Getting Started

This guide shows how to add a node to an existing 11gR2 Oracle RAC cluster. It is assumed that the node in question is available and is not part of a GNS/Grid Plug and Play cluster. In other words, the database is considered to be ‘Administrator-Managed.’ Also, the database software is non-shared and uses role separation where ‘grid’ is the clusterware owner and ‘oracle’ owns the database software. This guide uses a 2-node cluster running CentOS 5.7 (x64). There are two pre-existing nodes ‘rac1′ and ‘rac2.’ We will be adding ‘rac3′ to the cluster. This guide does not cover node preparation steps/prerequisites. The assumption is that since there is a pre-existing cluster the user knows how to prepare a node – from a prerequisite perspective – for cluster addition.


Verify Requirements

The cluster verify utility – ‘cluvfy’ – is used to determine that the new node is, in fact, ready to be added to the cluster.

Verify New Node (HWOS)

From an existing node, run ‘cluvfy’ to ensure that ‘rac3′ – the cluster node to be added – is ready from a hardware and operating system perspective:

[root@rac1]$ su - grid

[grid@rac1]$ export GRID_HOME=/u01/app/11.2.0/grid

[grid@rac1]$ $GRID_HOME/bin/cluvfy stage -post hwos -n rac3

If successful, the command will end with: ‘Post-check for hardware and operating system setup was successful.’ Otherwise, the script will print meaningful error messages.

Verify Peer (REFNODE)

The cluster verify utility – ‘cluvfy’ – is again used to determine the new node’s readiness. In this case, the new node is compared to an existing node to ensure compatibility/determine any conflicts. From an existing node, run ‘cluvfy’ to check inter-node compatibility:

[grid@rac1]$ $GRID_HOME/bin/cluvfy comp peer -refnode rac1 -n rac3 -orainv oinstall -osdba dba -verbose

In this case, existing node ‘rac1′ is compared with the new node ‘rac3,’ comparing such things as the existence/version of required binaries, kernel settings, etc. Invariably, the command will report that ‘Verification of peer compatibility was unsuccessful.’ This is because the command simply looks for mismatches between the systems in question, and certain properties will undoubtedly differ between systems. For example, the amount of free space in /tmp rarely matches exactly. Therefore, certain errors from this command can be safely ignored, such as ‘Free disk space for “/tmp”,’ and so forth. Differences in kernel settings and OS packages/rpms should, however, be addressed.

Verify New Node (NEW NODE PRE)

The cluster verify utility – ‘cluvfy’ – is used to determine the integrity of the cluster and whether it is ready for a new node. From an existing node, run ‘cluvfy’ to verify the integrity of the cluster:

[grid@rac1]$ $GRID_HOME/bin/cluvfy stage -pre nodeadd -n rac3 -fixup -verbose

If your shared storage is ASM using asmlib you may get an error – similar to the following – due to Bug #10310848:

ERROR:
PRVF-5449 : Check of Voting Disk location "ORCL:CRS1(ORCL:CRS1)" failed on the following nodes:

rac3:No such file or directory

PRVF-5431 : Oracle Cluster Voting Disk configuration check failed

The aforementioned error can be safely ignored whereas other errors should be addressed before continuing.


Extend Clusterware

The clusterware software will be extended to the new node.

Run “addNode.sh”

From an existing node, run “addNode.sh” to extend the clusterware to the new node ‘rac3:’

[grid@rac1]$ export IGNORE_PREADDNODE_CHECKS=Y

[grid@rac1]$ $GRID_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"

In my case, I am using ASM with asmlib on 11.2.0.2.0, so I experienced the aforementioned Bug #10310848. Therefore, I had to set the ‘IGNORE_PREADDNODE_CHECKS’ environment variable so that the command would run to completion; if you did not experience the bug in prior steps, you can omit it.

If the command is successful, you should see a prompt similar to the following:

The following configuration scripts need to be executed as the "root" user in each cluster node.
/u01/app/oraInventory/orainstRoot.sh #On nodes rac3
/u01/app/11.2.0/grid/root.sh #On nodes rac3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/11.2.0/grid was successful.

Run the configuration scripts on the new node as directed:

[root@rac3]$ /u01/app/oraInventory/orainstRoot.sh

[root@rac3]$ /u01/app/11.2.0/grid/root.sh

If successful, clusterware daemons, the listener, the ASM instance, etc. should be started by the ‘root.sh’ script:

[root@rac3]$ $GRID_HOME/bin/crs_stat -t -v -c rac3
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE rac3
ora....SM3.asm application 0/5 0/0 ONLINE ONLINE rac3
ora....C3.lsnr application 0/5 0/0 ONLINE ONLINE rac3
ora.rac3.ons application 0/3 0/0 ONLINE ONLINE rac3
ora.rac3.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac3
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE rac3

Verify New Node (NEW NODE POST)

Again, the cluster verify utility – ‘cluvfy’ – is used to verify that the clusterware has been extended to the new node properly:

[grid@rac1]$ $GRID_HOME/bin/cluvfy stage -post nodeadd -n rac3 -verbose

A successful run should yield ‘Post-check for node addition was successful.’


Extend Oracle Database Software

The Oracle database software will be extended to the new node.

Run “addNode.sh”

From an existing node – as the database software owner – run the following command to extend the Oracle database software to the new node ‘rac3:’

[oracle@rac1]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/db_1

[oracle@rac1]$ $ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"

If the command is successful, you should see a prompt similar to the following:

The following configuration scripts need to be executed as the "root" user in each cluster node.
/u01/app/oracle/product/11.2.0/db_1/root.sh #On nodes rac3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db_1 was successful.

Run the ‘root.sh’ script on the new node as directed:

[root@rac3]$ /u01/app/oracle/product/11.2.0/db_1/root.sh

Change ownership of ‘oracle’

If you are using job/role separation, then you will have to change the group and permissions of the ‘oracle’ executable in the newly created $ORACLE_HOME on ‘rac3:’

[root@rac3]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

[root@rac3]$ chgrp asmadmin $ORACLE_HOME/bin/oracle

[root@rac3]$ chmod 6751 $ORACLE_HOME/bin/oracle

[root@rac3]$ ls -lart $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 oracle asmadmin 228886450 Feb 21 11:33 /u01/app/oracle/product/11.2.0/db_1/bin/oracle

The end-goal is to have the permissions of the ‘oracle’ binary match the other nodes in the cluster.

Verify Administrative Privileges (ADMPRV)

Verify administrative privileges across all nodes in the cluster for the Oracle database software home:

[oracle@rac3]$ $ORACLE_HOME/bin/cluvfy comp admprv -o db_config -d $ORACLE_HOME -n rac1,rac2,rac3 -verbose

A successful run should yield ‘Verification of administrative privileges was successful.’


Add Instance to Clustered Database

A database instance will be established on the new node. Specifically, an instance named ‘racdb3′ will be added to ‘racdb’ – a pre-existing clustered database.

Satisfy Node Instance Dependencies

Satisfy all node instance dependencies, such as passwordfile, init.ora parameters, etc.

From the new node ‘rac3,’ run the following commands to create the passwordfile, ‘init.ora’ file, and ‘oratab’ entry for the new instance:

[oracle@rac3]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/db_1

[oracle@rac3]$ cd $ORACLE_HOME/dbs

[oracle@rac3 dbs]$ mv initracdb1.ora initracdb3.ora

[oracle@rac3 dbs]$ mv orapwracdb1 orapwracdb3

[oracle@rac3 dbs]$ echo "racdb3:$ORACLE_HOME:N" >> /etc/oratab

From a node with an existing instance of ‘racdb,’ issue the following commands to create the needed public log thread, undo tablespace, and ‘init.ora’ entries for the new instance:

[oracle@rac1]$ export ORACLE_SID=racdb1

[oracle@rac1]$ . oraenv
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@rac1]$ sqlplus "/ as sysdba"

SQL> alter database add logfile thread 3 group 7 ('+DATA','+FRA') size 100M, group 8 ('+DATA','+FRA') size 100M, group 9 ('+DATA','+FRA') size 100M;

SQL> alter database enable public thread 3;

SQL> create undo tablespace undotbs3 datafile '+DATA' size 200M;

SQL> alter system set undo_tablespace=undotbs3 scope=spfile sid='racdb3';

SQL> alter system set instance_number=3 scope=spfile sid='racdb3';

SQL> alter system set cluster_database_instances=3 scope=spfile sid='*';
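
As a quick sanity check (optional), confirm that the new redo thread and undo tablespace exist before starting the instance:

SQL> select thread#, status, enabled from v$thread;

SQL> select tablespace_name from dba_tablespaces where contents = 'UNDO';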

Update Oracle Cluster Registry (OCR)

The OCR will be updated to account for a new instance – ‘racdb3′ – being added to the ‘racdb’ cluster database as well as changes to a service – ‘racsvc.colestock.test.’

Add ‘racdb3′ instance to the ‘racdb’ database and verify:

[oracle@rac3]$ srvctl add instance -d racdb -i racdb3 -n rac3

[oracle@rac3]$ srvctl status database -d racdb -v
Instance racdb1 is running on node rac1 with online services racsvc.colestock.test. Instance status: Open.
Instance racdb2 is running on node rac2 with online services racsvc.colestock.test. Instance status: Open.
Instance racdb3 is not running on node rac3

[oracle@rac3]$ srvctl config database -d racdb
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/racdb/spfileracdb.ora
Domain: colestock.test
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2,racdb3
Disk Groups: DATA,FRA
Mount point paths:
Services: racsvc.colestock.test
Type: RAC
Database is administrator managed

‘racdb3′ has been added to the configuration.

Add the ‘racdb3′ instance to the ‘racsvc.colestock.test’ service and verify:

[oracle@rac3]$ srvctl add service -d racdb -s racsvc.colestock.test -r racdb3 -u

[oracle@rac3]$ srvctl config service -d racdb -s racsvc.colestock.test
Service name: racsvc.colestock.test
Service is enabled
Server pool: racdb_racsvc.colestock.test
Cardinality: 3
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: NONE
TAF failover retries: 5
TAF failover delay: 10
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: racdb3,racdb1,racdb2
Available instances:

Start the Instance

Now that all the prerequisites have been satisfied and OCR updated, the ‘racdb3′ instance will be started.

Start the newly created instance – ‘racdb3′ – and verify:

[oracle@rac3]$ srvctl start instance -d racdb -i racdb3

[oracle@rac3]$ srvctl status database -d racdb -v
Instance racdb1 is running on node rac1 with online services racsvc.colestock.test. Instance status: Open.
Instance racdb2 is running on node rac2 with online services racsvc.colestock.test. Instance status: Open.
Instance racdb3 is running on node rac3 with online services racsvc.colestock.test. Instance status: Open.

Posted in 11G, High Availability, Oracle, RAC

How to Delete a Node from Oracle RAC (11gR2)


Introduction

Getting Started

This guide shows how to remove a node from an existing 11gR2 Oracle RAC cluster. It is assumed that the node in question is available and is not part of a GNS/Grid Plug and Play cluster. In other words, the database is considered to be ‘Administrator-Managed.’ Also, the database software is non-shared. This guide uses a 3-node cluster running CentOS 5.7 (x64). The three nodes are ‘rac1,’ ‘rac2,’ and ‘rac3;’ we will be removing ‘rac3′ from the cluster.


Delete Node from Cluster

‘Unpin’ node

‘Unpin’ the node – in our case ‘rac3′ – from all nodes that are to remain in the cluster; in this case, ‘rac1′ and ‘rac2.’ Specify the node you plan on deleting in the command and do so on each remaining node in the cluster:

[root@rac1]$ export GRID_HOME=/u01/app/11.2.0/grid

[root@rac1]$ $GRID_HOME/bin/crsctl unpin css -n rac3
CRS-4667: Node rac3 successfully unpinned.

[root@rac2]$ export GRID_HOME=/u01/app/11.2.0/grid

[root@rac2]$ $GRID_HOME/bin/crsctl unpin css -n rac3
CRS-4667: Node rac3 successfully unpinned.
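
Before proceeding, you can confirm the pin state of every node from any remaining node; expect output similar to the following:

[root@rac1]$ $GRID_HOME/bin/olsnodes -t -s
rac1 Active Unpinned
rac2 Active Unpinned
rac3 Active Unpinned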

Remove RAC Database Instance(s)

The node we are removing houses an instance – ‘racdb3′ – which is part of a RAC database – ‘racdb.’ The instance ‘racdb3′ is in the preferred list of the service: ‘racsvc.colestock.test.’ In this step, we will remove the database instance ‘racdb3′ from ‘racdb.’ The ‘racsvc.colestock.test’ service will also be updated as appropriate.

Update the ‘racsvc.colestock.test’ service:

$ $GRID_HOME/bin/srvctl config service -d racdb -s racsvc.colestock.test -v
Service name: racsvc.colestock.test
Service is enabled
Server pool: racdb_racsvc.colestock.test
Cardinality: 3
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: NONE
TAF failover retries: 5
TAF failover delay: 10
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: racdb3,racdb2,racdb1
Available instances:

$ $GRID_HOME/bin/srvctl status service -d racdb -s racsvc.colestock.test -v
Service racsvc.colestock.test is running on instance(s) racdb1,racdb2,racdb3

As you can see, ‘racdb3′ is currently a preferred instance of the database service ‘racsvc.colestock.test.’

Remove the ‘racdb3′ instance from the ‘racsvc.colestock.test’ service:

$ $GRID_HOME/bin/srvctl modify service -d racdb -s racsvc.colestock.test -n -i racdb1,racdb2

$ $GRID_HOME/bin/srvctl config service -d racdb -s racsvc.colestock.test -v
Service name: racsvc.colestock.test
Service is enabled
Server pool: racdb_racsvc.colestock.test
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: NONE
TAF failover retries: 5
TAF failover delay: 10
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: racdb1,racdb2
Available instances:

$ $GRID_HOME/bin/srvctl status service -d racdb -s racsvc.colestock.test -v
Service racsvc.colestock.test is running on instance(s) racdb1,racdb2

Afterwards, the service should simply list ‘racdb1′ and ‘racdb2′ as its preferred instances.

Remove the ‘racdb3′ instance from the ‘racdb’ database using ‘dbca’ in ‘Silent Mode’:

[oracle@rac1]$ dbca -silent -deleteInstance -nodeList rac3 -gdbName racdb.colestock.test -instanceName racdb3 -sysDBAUserName sys -sysDBAPassword ******
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/racdb.log" for further details.

In this case, we ran the command as ‘oracle’ on ‘rac1.’ Afterwards, the database should only show 2 threads and the configuration of the database should show ‘racdb1′ and ‘racdb2′ as its only instances:

SQL> select thread#,status,instance from v$thread;

THREAD# STATUS INSTANCE
---------- ------ ------------------------------
1 OPEN racdb1
2 OPEN racdb2

$ $GRID_HOME/bin/srvctl config database -d racdb -v
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/racdb/spfileracdb.ora
Domain: colestock.test
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: DATA,FRA
Mount point paths:
Services: racsvc.colestock.test
Type: RAC
Database is administrator managed

Remove RAC Database Software

In this step, the Oracle RAC database software will be removed from the node that will be deleted. Additionally, the inventories of the remaining nodes will be updated to reflect the removal of the node’s Oracle RAC database software home, etc.

Any listener running on ‘rac3′ will need to be stopped:

[oracle@rac3]$ srvctl disable listener -n rac3

[oracle@rac3]$ srvctl stop listener -n rac3

[oracle@rac3]$ ps -ef | grep tns | grep -v grep

Update inventory on ‘rac3:’

[oracle@rac3]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/db_1

[oracle@rac3]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac3}" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 3874 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Remove the RAC Database Software from ‘rac3:’

[oracle@rac3]$ cd $ORACLE_HOME/deinstall

[oracle@rac3 deinstall]$ ./deinstall -local

Make sure to specify the ‘-local’ flag so as not to remove more than just the local node’s software, etc.

Update inventories/node list on the remaining nodes:

[oracle@rac1]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/db_1

[oracle@rac1]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 3889 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

The aforementioned command needs to be run from one of the remaining nodes. Note that ‘CLUSTER_NODES’ is a list of the nodes that are to remain in the cluster.

Remove Clusterware

Disable the Clusterware on ‘rac3′ as root:

[root@rac3]$ echo $GRID_HOME
/u01/app/11.2.0/grid

[root@rac3]$ $GRID_HOME/crs/install/rootcrs.pl -deconfig -force

Delete ‘rac3′ node from Clusterware configuration (on a remaining node):

[root@rac1]$ $GRID_HOME/bin/crsctl delete node -n rac3
CRS-4661: Node rac3 successfully deleted.

[root@rac1]$ $GRID_HOME/bin/olsnodes -t -s
rac1 Active Unpinned
rac2 Active Unpinned

Update inventory on ‘rac3′ as the ‘GRID’ software owner:

[grid@rac3]$ export GRID_HOME=/u01/app/11.2.0/grid

[grid@rac3]$ $GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={rac3}" CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 4095 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Once again, make sure you are using the ‘-local’ flag.

Remove the Clusterware Software from ‘rac3:’

[grid@rac3]$ cd $GRID_HOME/deinstall

[grid@rac3 deinstall]$ ./deinstall -local

Make sure to specify the ‘-local’ flag so as not to remove more than just the local node’s software, etc.

The script will provide commands to be run as ‘root’ in another window:

[root@rac3]$ /tmp/deinstall2012-02-20_00-08-21PM/perl/bin/perl -I/tmp/deinstall2012-02-20_00-08-21PM/perl/lib -I/tmp/deinstall2012-02-20_00-08-21PM/crs/install /tmp/deinstall2012-02-20_00-08-21PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2012-02-20_00-08-21PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
[root@rac3]$ rm -rf /etc/oraInst.loc

[root@rac3]$ rm -rf /opt/ORCLfmap

Update inventories/node list on the remaining nodes:

[grid@rac1]$ export GRID_HOME=/u01/app/11.2.0/grid

[grid@rac1]$ $GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={rac1,rac2}" CRS=TRUE

The aforementioned command needs to be run from one of the remaining nodes. Note that ‘CLUSTER_NODES’ is a list of the nodes that are to remain in the cluster.

Confirm that the node in question – rac3 – has been properly removed via:

[grid@rac1]$ $GRID_HOME/bin/cluvfy stage -post nodedel -n rac3

Performing post-checks for node removal

Checking CRS integrity...

CRS integrity check passed

Node removal check passed

Post-check for node removal was successful.

Posted in 11G, High Availability, Oracle, RAC

How to Build a LAMP


Introduction

Getting Started

This step-by-step guide shows how to build a LAMP (Linux-Apache-MySQL-PHP) stack. There are many ways to build a LAMP, including using freely-available packaged frameworks, etc. This guide builds the LAMP from scratch using primarily source software distributions; additionally, Oracle (OCI) support is compiled in. The guide assumes a Linux machine – bare metal or VM – is already available; performing the OS installation/configuration is beyond the scope of this demo. This guide was developed using an x64 VM with CentOS 5.6 (final) installed using VMware’s ‘Easy Install.’ If you are using a different distribution of Linux, you will need to change certain steps to account for subtle differences between distributions (e.g. yum vs. up2date). The host has a static IP address added to DNS and so forth. Ports expected to be used by HTTP/HTTPS have been opened as appropriate.

Software

We will be installing/updating OS packages via traditional package management tools. Additionally, we will be installing other software from source distributions obtained from outside sources; here is where to get the software appropriate for your distribution/platform:

  • OpenSSL 1.0.0g source distribution – openssl.org
  • Apache HTTP Server 2.2.22 source distribution – apache.org
  • PHP 5.3.10 source distribution – php.net
  • Oracle Instant Client 11.2 RPMs (basic, sqlplus, devel) – oracle.com


Install Apache HTTP Server

Create ‘apache’ user/group

If the user and group don’t already exist, create them:

$ groupadd -g 48 apache
$ useradd -d /dev/null -g apache -s /sbin/nologin -u 48 apache

Build OpenSSL (from source)

This guide compiles Apache with support for SSL, for which OpenSSL is required. If you already have OpenSSL installed, you can skip this step; otherwise, download the latest stable source distribution to a temporary location on the machine:

$ cd /tmp
$ wget http://www.openssl.org/source/openssl-1.0.0g.tar.gz

Unpack tarball:

$ tar -xvf openssl-1.0.0g.tar.gz --no-same-owner

Configure/Make/Install:

$ cd openssl-1.0.0g
$ ./config --prefix=/usr/local/ssl
$ make
$ make install

The preceding builds OpenSSL from source and installs it into /usr/local/ssl. Adjust the path – ‘prefix’ denotes the target install location – as appropriate for your system; just be sure to update the relevant subsequent commands for the new location.
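
A quick (optional) way to confirm the install is to ask the new binary for its version; expect output similar to:

$ /usr/local/ssl/bin/openssl version
OpenSSL 1.0.0g 18 Jan 2012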

Build Apache HTTP Server (from source)

Create directories and symbolic links:

$ export APACHE_BASE=/u01/app/LAMP/apache
$ mkdir -p $APACHE_BASE/build
$ mkdir -p $APACHE_BASE/httpd-2.2.22
$ ln -s $APACHE_BASE/httpd-2.2.22 $APACHE_BASE/httpd

Note the version-specific directory and the ‘httpd’ symbolic link pointing to it.

Download Apache HTTP Server source:

$ cd $APACHE_BASE/build
$ wget http://www.apache.org/dist/httpd/httpd-2.2.22.tar.gz

Unpack tarball:

$ tar -xvf httpd-2.2.22.tar.gz --no-same-owner

Configure/Make/Install:

$ cd httpd-2.2.22
$ ./configure --prefix=$APACHE_BASE/httpd --enable-ssl --with-ssl=/usr/local/ssl --enable-so
$ make
$ make install

The above installs Apache from source with support for SSL, assuming that OpenSSL is installed in /usr/local/ssl; again, adjust any paths as appropriate.
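
To verify the build (optional), print the compile settings; the output should report version 2.2.22 and the prefix chosen above:

$ $APACHE_BASE/httpd/bin/httpd -V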

Post-install Steps

Change owner/group ‘httpd’ runs as

$ sed -i -e "s/User daemon/User apache/" \
-e "s/Group daemon/Group apache/" \
$APACHE_BASE/httpd-2.2.22/conf/httpd.conf

In this case, two lines in the configuration file are changed so that the ‘httpd’ process will run as ‘apache.’

Secure Apache

Reasonable lock-down of binaries:

$ chmod 511 $APACHE_BASE/httpd-2.2.22/bin/httpd
$ cd $APACHE_BASE/httpd-2.2.22
$ chown 0 . bin conf logs
$ chgrp 0 . bin conf logs
$ chmod 755 . bin conf logs

Create new ‘DocumentRoot,’ change its ownership and modify permissions:

$ rm -rf $APACHE_BASE/httpd-2.2.22/htdocs
$ export DOC_BASE=/u01/app/LAMP/www
$ sudo groupadd -r lamp
$ usermod -a -G lamp jcolestock
$ mkdir -p $DOC_BASE
$ mkdir -p $DOC_BASE/html
$ chown root:lamp $DOC_BASE
$ chmod u=rwx,g=rwx,o=rx $DOC_BASE
$ chown root:lamp $DOC_BASE/html
$ chmod u=rwx,g=rwx,o=rx $DOC_BASE/html
$ find $DOC_BASE/html -depth -type d -exec chgrp lamp -- '{}' ';'
$ find $DOC_BASE/html -depth -type d -exec chmod u=rwx,g=rwxs,o=rx -- '{}' ';'
$ find $DOC_BASE/html -depth -type f -exec chgrp lamp -- '{}' ';'
$ find $DOC_BASE/html -depth -type f -exec chmod u=rw,g=rw,o-x -- '{}' ';'
$ sed -i -e "s|$APACHE_BASE/httpd/htdocs|$DOC_BASE/html|" $APACHE_BASE/httpd-2.2.22/conf/httpd.conf

We start by removing the old ‘DocumentRoot.’ Then we create a new ‘DocumentRoot’ owned by ‘root:lamp,’ where ‘lamp’ is a new OS group created for the authors of web content. I also granted ‘lamp’ to ‘jcolestock’ – my personal user – so that I can publish content to the LAMP; you will want to grant this group to any users who will publish content. Additionally, users who publish content must have their ‘umask’ set to ‘0002′ while publishing.

After that, we lock down the permissions on the new ‘DocumentRoot’ as appropriate. The last step is to update Apache’s configuration file to specify the new location of the ‘DocumentRoot.’

Add PHP MIME type support

$ vi $APACHE_BASE/httpd-2.2.22/conf/httpd.conf

Inside the ‘IfModule mime_module’ block add “AddType application/x-httpd-php .php .phtml”

$ grep .php $APACHE_BASE/httpd-2.2.22/conf/httpd.conf
LoadModule php5_module modules/libphp5.so
AddType application/x-httpd-php .php .phtml

Create /etc/init.d startup script for httpd

Create /etc/init.d/httpd-2.2.22 startup script

$ vi /etc/init.d/httpd-2.2.22

Modify the following script as appropriate for your environment:

#!/bin/bash
#
# httpd-2.2.22    Startup script for custom user LAMP
#
# chkconfig: - 85 15
# description: custom user LAMP plus Oracle support
# processname: httpd

APACHE_HOME=/u01/app/LAMP/apache/httpd
TNS_ADMIN=/u01/app/oracle/network/admin; export TNS_ADMIN
LD_LIBRARY_PATH=/usr/lib:/usr/lib64:$APACHE_HOME/lib:/usr/lib/oracle/11.2/client64/lib:/usr/lib/oracle/11.2/lib; export LD_LIBRARY_PATH
CONF_FILE=$APACHE_HOME/conf/httpd.conf
PIDFILE=$APACHE_HOME/logs/httpd-2.2.22.pid

# Bail out quietly if the configuration file is missing
if [ ! -f ${CONF_FILE} ]; then
    exit 0
fi

case "$1" in
    start)
        /bin/rm -f ${PIDFILE}
        cmdparm="start"
        cmdtext="starting"
        ;;
    restart)
        cmdparm="restart"
        cmdtext="restarting"
        ;;
    stop)
        cmdparm="stop"
        cmdtext="stopping"
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
esac

echo "httpd $cmdtext."

# Delegate to apachectl, capturing any error output
status=`${APACHE_HOME}/bin/apachectl $cmdparm 2>&1`

if [ $? != 0 ]; then
    echo "$status"
    exit 1
fi
exit 0

Set to startup automatically:

$ chkconfig --add httpd-2.2.22
$ chkconfig --list httpd-2.2.22
httpd-2.2.22 0:off 1:off 2:on 3:on 4:on 5:on 6:off

The above is very much a barebones startup script, but it should do the trick.


Install PHP

Required Packages

In most cases, some packages required for PHP are missing. Any missing packages will need to be installed prior to building PHP. Usually the following packages are missing and need to be installed: libxml2-devel, libpng-devel, libjpeg-devel, freetype-devel, fontconfig-devel, and zlib-devel.

$ rpm -qa libxml2-devel libpng-devel libjpeg-devel freetype-devel fontconfig-devel zlib-devel | sort -u
fontconfig-devel-2.4.1-7.el5
freetype-devel-2.2.1-28.el5_7.2
libjpeg-devel-6b-37
libpng-devel-1.2.10-7.1.el5_7.5
libxml2-devel-2.6.26-2.1.12.el5_7.2
zlib-devel-1.2.3-4.el5

Install any missing packages using your package management utility:

$ yum install libxml2-devel libpng-devel libjpeg-devel freetype-devel fontconfig-devel zlib-devel

MySQL Packages

Since we are building a LAMP, MySQL libraries are required in order to compile MySQL support into PHP.

Ensure the following MySQL packages are installed: mysql and mysql-devel.

$ rpm -qa mysql mysql-devel | sort -u
mysql-5.0.77-4.el5_6.6
mysql-devel-5.0.77-4.el5_6.6

If the packages are not installed, do so:

$ yum install mysql mysql-devel

Oracle Instant Client Packages

This guide also builds in Oracle/OCI support by installing the Oracle Instant Client and then compiling the corresponding libraries into PHP via the '--with-oci8' configure flag.

Download the appropriate RPMs from Oracle and install:

$ rpm -Uvh oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm \
oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64.rpm \
oracle-instantclient11.2-devel-11.2.0.3.0-1.x86_64.rpm

If you would like to skip this step, feel free; however, remember to omit the '--with-oci8' flag when configuring PHP.
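If you did install the Instant Client, a quick way to confirm the libraries landed where the PHP 'configure' command below expects them is to list the client library directory (this path matches the 64-bit 11.2 RPMs used here):

$ ls /usr/lib/oracle/11.2/client64/lib/libclntsh.so*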

Build PHP from source

Download PHP source:

$ cd $APACHE_BASE/build
$ wget -O php-5.3.10.tar.gz http://us3.php.net/get/php-5.3.10.tar.gz/from/us.php.net/mirror

Unpack tarball:

$ tar -xvf php-5.3.10.tar.gz --no-same-owner

Configure/Make/Install:

At this point, the prerequisites for compiling PHP should be satisfied. Remember to include the '--with-oci8' flag if you wish to compile in Oracle/OCI support (and have installed the Oracle Instant Client or equivalent). Additionally, make sure to update the 'configure' command to reflect the true path – on your system – to all relevant libraries and executables. Study this command carefully, identifying the installation locations on your specific system; paths will differ between 32-bit and 64-bit systems, between Linux distributions, etc. Feel free to modify or add compiler flags as appropriate for your needs. After you have updated the 'configure' command, run the following:

$ cd php-5.3.10
$ ./configure --prefix=$APACHE_BASE/httpd-2.2.22 --with-openssl-dir=/usr/local/ssl --with-zlib --with-zlib-dir=/usr/include --with-apxs2=$APACHE_BASE/httpd-2.2.22/bin/apxs --with-libdir=lib64 --with-gd --with-png-dir --with-jpeg-dir --with-freetype-dir --with-mysql=/usr/lib64/mysql --enable-mbstring --with-oci8=instantclient,/usr/lib/oracle/11.2/client64/lib
$ make
$ make install
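Assuming the default layout implied by the '--prefix' flag above, 'make install' also places a command-line PHP binary under the Apache home; it is handy for a quick sanity check of the build before involving Apache:

$ $APACHE_BASE/httpd-2.2.22/bin/php -v
$ $APACHE_BASE/httpd-2.2.22/bin/php -m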

Post-install Steps

Create ‘php.ini’ from template:

$ cp php.ini-production $APACHE_BASE/httpd-2.2.22/lib/php.ini

Add dynamic extensions to ‘php.ini’:

$ vi $APACHE_BASE/httpd-2.2.22/lib/php.ini

Add “extension=oci8.so” to ‘php.ini’

$ grep extension=oci $APACHE_BASE/httpd-2.2.22/lib/php.ini
extension=oci8.so
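To confirm the OCI8 shared library actually loads, run the CLI binary with the Instant Client libraries on the library path (mirroring what the startup script does for Apache); 'oci8' should appear in the module list:

$ export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
$ $APACHE_BASE/httpd-2.2.22/bin/php -m | grep -i oci8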


Functional Test

Start Apache

Before you can begin testing, you must start Apache:

$ service httpd-2.2.22 start
httpd starting.
$ ps -ef | grep httpd | grep -v grep
root 9221 1 0 Feb13 ? 00:00:00 /u01/app/LAMP/apache/httpd/bin/httpd -k start
apache 9222 9221 0 Feb13 ? 00:00:00 /u01/app/LAMP/apache/httpd/bin/httpd -k start
apache 9223 9221 0 Feb13 ? 00:00:00 /u01/app/LAMP/apache/httpd/bin/httpd -k start
apache 9224 9221 0 Feb13 ? 00:00:00 /u01/app/LAMP/apache/httpd/bin/httpd -k start
apache 9225 9221 0 Feb13 ? 00:00:00 /u01/app/LAMP/apache/httpd/bin/httpd -k start
apache 9226 9221 0 Feb13 ? 00:00:00 /u01/app/LAMP/apache/httpd/bin/httpd -k start
apache 9227 9221 0 Feb13 ? 00:00:00 /u01/app/LAMP/apache/httpd/bin/httpd -k start
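Optionally, confirm Apache is answering HTTP requests before testing PHP itself (substitute your own host name):

$ curl -I http://lamp1.colestock.test/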

Verify PHP

Log in as someone in the 'lamp' group – in my case, jcolestock – and publish a 'phpinfo' page in order to verify correct operation and configuration:

$ su - jcolestock
$ export DOC_ROOT=/u01/app/LAMP/www/html
$ umask 0002
$ echo '<?php phpinfo(); ?>' > $DOC_ROOT/phpinfo.php

Navigate to the page in a browser and verify it; for instance, in my environment: 'http://lamp1.colestock.test/phpinfo.php'

Verify Oracle

The example startup script in this demo uses '/u01/app/oracle/network/admin' as 'TNS_ADMIN'; this is where PHP will look for 'tnsnames.ora' and the like. To verify Oracle connectivity, I will place a valid service entry into this file and then try to connect to the service in question with a simple PHP script.

Add an entry to your ‘tnsnames.ora’; for example:

$ more /u01/app/oracle/network/admin/tnsnames.ora
DGSVC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oradb1.colestock.test)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = oradb2.colestock.test)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = dgsvc.colestock.test)
    )
  )
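Since the Instant Client 'sqlplus' RPM was installed earlier, you can verify that the entry resolves and that the service accepts connections before involving PHP at all (the credentials are illustrative, matching the script below):

$ export TNS_ADMIN=/u01/app/oracle/network/admin
$ export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
$ /usr/lib/oracle/11.2/client64/bin/sqlplus lamp_owner/lamp_password@DGSVC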

Publish the following PHP script, adjusting the username, password, and service name as appropriate:

$ su - jcolestock
$ umask 0002
$ export DOC_ROOT=/u01/app/LAMP/www/html
$ vi $DOC_ROOT/test-oracle.php

test-oracle.php

<?php
/* Change the username, password, and service name to yours */
$db_conn = ocilogon('lamp_owner', 'lamp_password', 'dgsvc');
$sql = "SELECT sys_context('userenv','instance_name') AS \"Instance\" FROM dual";
$parsed = ociparse($db_conn, $sql);
ociexecute($parsed);
$nrows = ocifetchstatement($parsed, $results);
for ($i = 0; $i < $nrows; $i++) {
    echo "You are connected to the following instance: " . $results["Instance"][$i];
}
?>

Navigate to the page in a browser and verify it; for instance, in my environment: 'http://lamp1.colestock.test/test-oracle.php'

You should get a response similar to mine: 'You are connected to the following instance: dgstdby'; in my case, 'dgstdby' is the name of the instance I am connected to.

Posted in Apache, LAMP, Linux, MySQL, PHP

Oracle Physical Data Guard Scripts

I have uploaded a new zip archive to the scripts section of my site. The archive in question – Oracle Physical Data Guard Scripts (11gR2) – contains scripts to build a complete, end-to-end "Physical" Data Guard configuration, including Data Guard Broker. All that is required is two machines – bare metal or otherwise – with matching Oracle database software installed.

Contents:
$ unzip -lv physical_dataguard_11gR2.zip
Archive: physical_dataguard_11gR2.zip
Length Method Size Cmpr Date Time CRC-32 Name
-------- ------ ------- ---- ---------- ----- -------- ----
137 Defl:N 82 40% 02-07-2012 07:14 f49fc59b create_catalog.sql
1397 Defl:N 481 66% 02-07-2012 13:39 cbb5c405 create_db.sql
836 Defl:N 316 62% 02-07-2012 14:04 b493126f create_service.sql
290 Defl:N 206 29% 02-07-2012 09:28 e8ca1f2c create_tablespaces.sql
444 Defl:N 229 48% 02-07-2012 07:14 ef48055c create_users.sql
1399 Defl:N 517 63% 02-07-2012 07:14 a62d4c35 disable_autojobs.sql
450 Defl:N 323 28% 02-07-2012 13:35 b57cec07 dropdb.bash
971 Defl:N 509 48% 02-07-2012 13:53 f30ccb33 duplicate_db.bash
626 Defl:N 366 42% 02-07-2012 10:46 a69f6b36 init.ora
3942 Defl:N 1376 65% 02-07-2012 14:35 9494fcb2 install-db.bash
1207 Defl:N 507 58% 02-07-2012 07:14 750216c7 p_drop_add_srl.prc
1357 Defl:N 461 66% 02-08-2012 15:37 0fa4d2b7 primary_dg_params.sql
324 Defl:N 202 38% 02-08-2012 18:37 763e3d32 primary-listener.ora
2949 Defl:N 1296 56% 02-08-2012 18:48 85b349c6 README
5873 Defl:N 1927 67% 02-08-2012 18:20 0bd37868 setup-dataguard.bash
124 Defl:N 115 7% 02-07-2012 12:49 4d364612 shutdb.bash
350 Defl:N 212 39% 02-08-2012 18:38 ea646c2c standby-listener.ora
282 Defl:N 188 33% 02-08-2012 15:23 41257b09 start-apply.bash
751 Defl:N 203 73% 02-08-2012 14:15 fe58e940 tnsnames.ora
-------- ------- --- -------
23709 9516 60% 19 files

Readme:
README

File: physical_dataguard_11gR2.zip

Scripts to create an end-to-end 'Physical' Data Guard configuration on Linux/Oracle 11G Release 2

Archive contains scripts to not only set-up a 'Physical' standby database configuration but also
to create the source primary database in the first place.

In other words, a complete, end-to-end 'Physical' Data Guard configuration can be built from scratch
provided: 2 hosts - one for the primary; one for the standby - with Oracle software installed
and basic networking (including listeners) established. Also, it is assumed that SSH User Equivalence
is in place for the 'oracle' OS user; this makes remote host operations more convenient.

Alternatively, the provided scripts can easily be adapted to some other, related purpose.
Ultimately, this archive is an all-in-one demonstration, whereas specific production configurations
are unique, each with its own circumstances.

Instructions:

1.) Unzip archive to desired directory (run all scripts from within this directory)
2.) Ensure that you have 2 hosts with Oracle >= 11gR2 database software installed
3.) Ensure that each of the hosts has a listener configured as well as basic networking.
(reference the sample tnsnames.ora, primary-listener.ora, and standby-listener.ora files)
4.) Ensure that SSH User Equivalence has been established between the 2 hosts for the 'oracle' user
5.) Update the configuration section of (install-db.bash); adjust as appropriate for your environment
6.) Make any desired changes to the 'barebones' parameter file (init.ora) - don't remove '{}' replacement tokens
This init.ora file becomes the basis for the primary's parameter file
7.) Make any desired changes to the 'barebones' create tablespaces script (create_tablespaces.sql)
8.) Run the (install-db.bash) script providing SYS, SYSTEM, and DBSNMP passwords when prompted:

$ ./install-db.bash

9.) Update the configuration section of (setup-dataguard.bash) adjust as appropriate for your environment

10.) Run the (setup-dataguard.bash) script providing the SYS password when prompted:

$ ./setup-dataguard.bash

Assumptions:

- SSH User Equivalency is in place for the 'oracle' user
- 'oracle' is the dba user
- Both hosts have the same 11gR2 database installed, in the same location
- Databases are installed on traditional file systems (e.g. ext3/ext4)
- FRA (Flash Recovery Area) is used, also on traditional file systems
- Databases are laid out identically except for the difference in 'instance_name'
(e.g. datafile location primary: /u01/app/oracle/oradata/primary/
standby: /u01/app/oracle/oradata/standby/)
- Data Guard Broker is set-up by default
- Instance names match the db_unique_name for each database
In other words, the instance names are different for clarity and management

2/7/2012 - Initial Creation
James Colestock, james@colestock.com

Posted in 11G, Data Guard, High Availability, Oracle, Replication

Oracle RAC Database Creation Scripts

I have uploaded a new zip archive to the scripts section of my site. The archive in question – Oracle RAC Database Creation Scripts (11gR2) – contains configurable scripts that automate the manual (non-DBCA) process of creating Oracle RAC databases. Many people live and breathe by the DBCA; I find it much more convenient and powerful to have this kind of granular control over the process.

Contents:
$ unzip -lv create_racdb_11gR2.zip
Archive: create_racdb_11gR2.zip
Length Method Size Ratio Date Time CRC-32 Name
-------- ------ ------- ----- ---- ---- ------ ----
1580 Defl:N 810 49% 02-03-12 20:57 5a72aa67 README
1552 Defl:N 705 55% 02-03-12 11:04 d2dacbb6 create_db.sql
319 Defl:N 224 30% 02-03-12 11:04 09576f21 create_tbsp.sql
437 Defl:N 258 41% 02-03-12 11:22 65a53d29 init.ora
7829 Defl:N 2553 67% 02-03-12 20:52 1e780c51 create_racdb.bash
-------- ------- --- -------
11717 4550 61% 5 files

Readme:
README

File: create_racdb_11gR2.zip

Script to manually create RAC databases on Linux/11G Release 2

Instructions:

Unzip archive to desired directory
Update configuration sections of create_racdb.bash
Make changes to barebones parameter file (init.ora) - don't remove '{}' tokens
Make changes to barebones create database script (create_db.sql)
Make changes to barebones create tablespaces script (create_tbsp.sql)
Run the script and provide SYS and SYSTEM passwords when prompted:

$ ./create_racdb.bash

Assumptions:

- Target database is RAC using ASM
- SSH Keys/passwordless user-equivalency is in place
- Using OMF with 2 diskgroups: +DATA & +FRA
- 'oracle' is the dba user
- 11G R2 and above; assumes SCAN
- Barebones init.ora (modify as appropriate)
- Barebones create_db.sql (modify as appropriate)
- Barebones create_tbsp.sql (modify as appropriate)
- User is prompted for SYS/SYSTEM password
- Assumes that, if job role separation is used and this is the first database you are installing,
the appropriate fixes to the ${ORACLE_HOME}/bin/oracle executable have been made on all target nodes.
Specifically, that the following has been run as root:
$ ${GRID_HOME}/bin/setasmgidwrap -o=${ORACLE_HOME}/bin/oracle
$ chmod 6751 ${ORACLE_HOME}/bin/oracle
If you experience ORA-12537: TNS:connection closed, consult Metalink Note 1069517.1
- Assumes an administrator-managed configuration with BASIC TAF and SELECT failover when creating the service

2/2/2012 - Initial Creation
James Colestock, james@colestock.com

Posted in 11G, Oracle, RAC, Scripts