Installation
Caveats
The following classes of systems need some extra attention for installation:
- systems running with eth1 but no eth0 (globes - SUN V65x)
- there's a "globe" post script now to handle this
- systems running with eth0, but having additional interfaces recognized as eth0 by anaconda
- picus1,2: there's a "e1000-no-e100" post script for those
- fatmans: there's an "acenic-no-e100" post script for those
- systems with certain ethernet cards that anaconda fails to bring up a second time
this problem seems to have vanished with SL 3.0.5
- Dell 2850: workaround is to put the ks.cfg file onto a floppy and boot with "ks=floppy", or to pack a special initrd with the ks.cfg in the root filesystem and boot with ks=file, or to use the -local option to SL3U.pl (example boot lines are sketched below)
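For illustration, the floppy and initrd workarounds boil down to standard anaconda kickstart boot lines like the following (the file name inside the initrd is whatever you packed in, /ks.cfg is just an example):
{{{
linux ks=floppy
linux ks=file:/ks.cfg
}}}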
Overview
SL3 hosts are installed using kickstart ([http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/sysadmin-guide/ch-kickstart2.html online manual]). The repository is mirrored from [ftp://ftp.scientificlinux.org/linux/scientific/ the ftp server at FNAL] and is located on the installation server, z.ifh.de, in /net1/z/DL6/SL. The host profiles for the kickstart install are kept in /net1/z/DL6/profiles, and some files needed during the postinstallation, before the AFS client is available, in /net1/z/DL6/postinstall (accessible through the http server running on z). More files, and most utility scripts, are located in /project/linux/SL3.
Installation takes the following steps:
- [#cfvamos Configure the host in VAMOS] This is important, because several variables must be set correctly since they are needed by the tools used in the following steps.
- [#profiles Create a system profile] Using CKS3, information from VAMOS and possibly from the AMS directory or the live host, a kickstart file is generated that will steer the installation process.
- [#ai Activate private key distribution] Only after this step, the host will be able to request its private keys and initial configuration cache from mentor.
- [#boot Prepare system boot into installation] Current options include PXE, CD-ROM, and hard disk. Other possible methods like USB stick, multiple floppies, or a tftp grub floppy are not yet available.
- Boot the system into installation: During boot, the system loads the kernel and initrd made available in the previous step. Networking information comes from a DHCP server (possible with all methods) or is provided on the kernel command line (CD-ROM & hard disk methods only). The installation system then locates the kickstart profile; its location is given on the kernel command line, provided by the tftp server (PXE method), entered manually (CD-ROM method), or written by the script preparing the hard disk boot. The kickstart profile contains all other information needed, including the repository location, partitioning & package selection, and a postinstall script that does some very basic configuration and retrieves and installs a one-time init script. After the first reboot, this init script (executing as the very last one) retrieves the system's private keys and initial vamos configuration cache, and then bootstraps our site mechanisms for system maintenance.
System Configuration in VAMOS
Choose a default derived from sl3-def. Defaults starting with "sl3-" are 32bit, those starting with "sl3a-" are 64bit. These will mainly differ in the settings for OS_ARCH and AFS_SYSNAME (see the sl3a-mod modifier). 64bit capable systems can run the 32bit version as well.
OS_ARCH is read by several tools in the following steps to determine what to install. The same is true for CF_SL_release: This variable determines which minor SL release the system will use. Both OS_ARCH and CF_SL_release affect the choice of installation kernel & initrd, installation repository, and yum repositories for updating and installing additional packages.
It should now be safe to do this step without disabling sue on the system, since sue.bootstrap will no longer permit OS_ARCH to change.
Run the Workflow whenever a system changes from DLx to SL3 or back, since some tools (scout) can only consult the netgroups to decide how things should be done. This is wrongwrongwrong, but ...
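As an illustration only (the actual values live in VAMOS and are largely set by the defaults and modifiers), here is roughly how the two key variables map to what the later steps use; the paths are the ones described in the sections below:
{{{
OS_ARCH=i386, CF_SL_release=304
  -> installation repository:   /net1/z/DL6/SL/304/i386
  -> PXE kernel/initrd on z:    vmlinuz.sl304 / initrd.sl304
  -> extra repositories under:  /net1/z/DL6/SL/304/i386_extra/
}}}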
Creating System Profiles
This is done with the tool CKS3.pl which reads "host.cks3" files and creates "host.ks" files from them, using additional information from VAMOS, the AMS directory, or the live system still running DL4, DL5 or SL3, as well as pre/post script building blocks from /project/linux/SL3/{pre|post}.
CKS3.pl is located in /project/linux/SL3/CKS3, and is fully perldoc'd. A sample DEFAULT.cks with many comments is located in the same directory.
To create a profile:
- You need to be a member of the sysprog unix group.
- Log into z.
- Go into /net1/z/DL6/profiles.
- Check whether a .cks3 file for your host exists.
- If it does, and you find you have to modify the file, make sure it is not a link to some other file before you do so.
- If it does not, create one by starting with a copy from a similar machine, or a copy of DEFAULT.cks3.
NO host.cks3 IS NEEDED AT ALL if you just want to upgrade or reinstall a normal system without changing disk partitioning, since DEFAULT.cks3 is always read and should cover this case completely.
- Run CKS3.pl, like this: ./CKS3.pl <host>
- Watch the output. Make sure you understand what the profile is going to do to the machine! If in doubt, read and understand the <host>.ks file before actually installing. Also make sure the SL release and architecture are what you want.
Activating Private Key Distribution
If you followed the instructions above (read the CKS3 output), you already know what to do: {{{ ssh mentor sudo activ-ai <host> }}} This activates the one-shot mechanism for giving the host (back) its private keys (root password, kerberos keyfile, vamos/ssh keys, ...). The init script retrieved during postinstall starts the script /products/ai/scripts/ai-start, which will NFS-export a certain directory to mentor, put an SSL public key there, and ask mentor to encrypt the credentials with that key and copy them into the directory. If after the installation the host has its credentials, it worked, and no other system can possibly have them as well. If it hasn't, the keys are burned and have to be scrubbed. Hasn't happened yet, but who knows.
If ai-start fails, the system will retry after 5 minutes. Mails will be sent to linuxroot@ifh.de from both mentor and the installing system, indicating that this happened. The reason is usually that this step was forgotten. Remember it has to be repeated before every reinstallation.
Booting the system into installation
There are several options:
- Perl Script If the system is still running a working DL4, DL5 or SL3 installation, this is the most convenient and reliable method: After logging on as root, run the script {{{ /project/linux/SL3/SL3U/SL3U.pl yes please
}}}
- and either let the script reboot the system after the countdown, or interrupt the countdown with ^C and reboot (or have the user reboot) later. The script will create an additional, default boot loader entry to start the installation system. By default, all needed information is appended to the kernel command line, including networking information, hence not even DHCP is needed. The script comes with full perldoc documentation. Some additional options are available or may even be necessary for certain hosts (see the Caveats above).
- SL CD-ROM
It is much more convenient now to use the [#unicd unified CD]
Images are /net/z/DL6/SL/<release>/<arch>/images/SL/boot.iso. The release and arch have to match the planned installation exactly, or the installation system will refuse to work.
- If the system has a valid DHCP entry (including the MAC address): At the boot prompt, enter "linux ks=nfs:z:/net1/z/DL6/profiles/". If the system has more than one network interface, add "ksdevice=link". If the system has more than one network interface *connected*, instead add "ksdevice=eth0" or whatever is appropriate.
- If the system has no valid DHCP entry yet: you have to add parameters like "ip=141.34.x.y netmask=255.255.255.0 gateway=141.34.x.1 dns=141.34.1.16". Alternatively, watch the log on the DHCP server, wait for the unknown MAC to appear, create a working entry for the host in /etc/dhcpd.conf, and restart the dhcp server. If you're quick enough, the client will receive an answer when it retries; otherwise it has to be booted again. Don't forget to remove the CD before the first reboot, or installation will start all over.
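Putting the pieces together, a complete boot prompt line for a host without a DHCP entry might look like this (one line, with "x" and "y" replaced by the host's actual address; this is just the combination of the parameters listed above):
{{{
linux ks=nfs:z:/net1/z/DL6/profiles/ ksdevice=eth0 ip=141.34.x.y netmask=255.255.255.0 gateway=141.34.x.1 dns=141.34.1.16
}}}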
Anchor(unicd) Grand Unified Boot CD
This CD has a boot loader menu that allows installing DL5 and SL3. It also has preset kernel parameters that save you almost all typing: ks=..., ksdevice=link, netmask=..., and dns=... are always set but hidden, and there are visible, editable templates for ip=... and gateway=... . Menu entries are:
- local This is the default, and will boot from the primary hard disk (whatever the BIOS thinks this is). Hence it's no problem if you forget to remove the CD during installation.
- dl5-dhcp Installs DL5, getting network parameters by DHCP.
- dl5-manual Installs DL5 with network parameters specified on the command line. When you select this entry, you'll see the tail of the preset options: {{{ "textmode=0 hostip=141.34.x.y gateway=141.34.x.1"
}}}
- Simply replace "x" and "y" by the appropriate values for the host and hit Enter.
- sl303_32-d Install SL3.0.3/i386 using dhcp.
- sl303_32-m Install SL3.0.3/i386. Network parameters are given on the command line (replace "x" and "y" in "ip=141.34.x.y gateway=141.34.x.1").
- sl303_64-d Install SL3.0.3/x86_64 using dhcp.
- sl303_64-m Install SL3.0.3/x86_64. Network parameters are given on the command line (replace "x" and "y" in "ip=141.34.x.y gateway=141.34.x.1").
- The entries for SL may vary over time, but generally follow the pattern sl<release>_<bits>-<method>, where bits is "32" or "64", and method is "d" for dhcp or "m" for manual. They have to be this cryptic because there's a 10 character limit for the labels.
The ISO image is /project/linux/SL3/doc/InstallCD.iso, and the script next to it (it's in the CD's root directory as well) can be used to create modified images, for additional SL releases or different DL5 install kernels etc.: simply edit the variables at the top (@SL_releases, $DL_kernel) and rerun the script on a system that has syslinux and mkisofs installed (tested on DL5 and SL3/32). The script will tell you the directory in /tmp where it writes the image.
- PXE This requires entries on both the DHCP and TFTP servers. The client will receive IP, netmask, gateway etc from the DHCP server, plus the information that it should fetch "pxelinux.0" from the TFTP server (actually, z) and run it. Then, pxelinux.0 will request the host configuration file (IP address in hex notation) from the TFTP server (a link in /tftpboot/pxelinux.cfg/). This will in turn
tell pxelinux.0 which kernel & initrd to retrieve from the TFTP server, and what parameters the kernel should receive on the command line.
TFTP & DHCP (script): As root on z, run
{{{ /project/linux/SL3/PXE/pxe <host>
}}}
- This will add the right link for the system in /tftpboot/pxelinux.cfg and also attempt to update the DHCP configuration on the right server.
- DHCP (manually): If the system has no valid DHCP entry yet, use the method described above for the CD-ROM boot (watch the DHCP server log for the unknown MAC) and create or complete the entry in the configuration file manually. In addition to IP and MAC, the following parameters have to be supplied (an illustrative entry is sketched below):
next-server 141.34.32.16; filename "pxelinux.0";
If there was an incomplete entry for the system, these will already be present after running the "pxe" utility script.
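A complete host entry in /etc/dhcpd.conf would then look roughly like this (MAC and IP are placeholders; only next-server and filename are the additions required for PXE):
{{{
host <host> {
    hardware ethernet 00:0e:xx:xx:xx:xx;
    fixed-address 141.34.x.y;
    next-server 141.34.32.16;
    filename "pxelinux.0";
}
}}}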
- The changes on the DHCP server will be reverted the next time the dhcp feature runs, so no cleanup is needed. To get rid of the link in /tftpboot/pxelinux.cfg, simply run (as root on z)
/project/linux/SL3/PXE/unpxe <host>
If the client boots via PXE afterwards, it will pick up the default configuration, which tells it to boot from its local disk. Anyway, using PXE is not recommended for systems which have no "one time boot menu" or a "request network boot" keystroke.
- GRUB Floppy As a last resort, one can try the grub floppy. This method will obtain the kernel and the initrd by tftp, hence a matching link has to exist in /tftpboot/pxelinux.cfg on the install server. If the host has a working dhcp entry, just boot the default entry on the floppy. If it doesn't, select the other entry, hit 'e' to get into the editor, replace all "x" and "y" in the templates for IP addresses and netmasks, and finally hit 'b' to boot the modified entry. The network drivers in GRUB only work for older cards. This is no problem because the more recent ones support PXE anyway. In particular, the 3C905 in PIII desktop PCs (najade or older PIII 750) is supported. The floppy image is located in /project/linux/SL3/Floppy. To adapt it to a new SL release or install kernel, simply loop mount the image and make the obvious changes in boot/grub/menu.lst.
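For orientation, the per-host file linked from /tftpboot/pxelinux.cfg follows the usual pxelinux format. A sketch only: the kernel/initrd names follow the pattern used in step 4 of "Adding a new SL3 release" below, the append parameters are illustrative, and the authoritative versions live in the SL<release>-<arch>-ks files on z:
{{{
default sl3-install
prompt 0
label sl3-install
    kernel vmlinuz.sl304
    append initrd=initrd.sl304 ks=nfs:z:/net1/z/DL6/profiles/ ksdevice=link
}}}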
System/Boot Method Matrix
||System||Script||CD||Floppy||PXE||
||older systems||yes||maybe||yes||no||
||PIII 750 desktop||yes||some(1)||yes||no(2)||
||Najade (PIII 850)||yes||some(1)||yes||no(2)||
||Nereide (P4 1700)||yes||yes||?||?||
||Oceanide (P4 2400)||yes||yes||?||yes||
||Hyade (Dell 350)||yes||yes||no||yes||
||Dryade (Dell 360)||yes||yes||no||yes||
||Satyr (Dell 370)||yes||yes||no||yes||
||Dell 380||yes||yes||no||yes||
||ice (intel serverboard)||yes||yes||yes||yes||
||fatman (same board)||yes||yes||yes||yes||
||Supermicro PIII||yes||yes||no||no||
||Supermicro Xeon||yes||yes||no||no||
||globe (SUN V65x)||yes||yes||no||yes||
||heliade (SUN V20z)||yes||yes||no||yes||
||Dell 1850||yes||yes||no||yes||
||Dell 2850||yes(3)||yes(3)||no||yes(3)||
- (1) This seems to depend on the motherboard and/or BIOS revision. In fact, some 850MHz models won't boot from CD while there are older 750MHz systems that will.
- (2) The PXE implementation in the MBA has a bug: it does not recognize the next-server argument and always tries to download the loader from the same server that sent the DHCP offer. Hence these systems can be PXE-booted by setting up a special DHCP&TFTP server.
- (3) Anaconda up to and including at least SL 3.0.4 has a problem bringing up the NIC a second time. Workarounds include putting the ks.cfg on a floppy or into the initrd, or using the -local switch to SL3U.pl.
Package Handling & Automatic Updates
See the "aaru" feature for how all this (except kernels) is handled.
There are three distinct mechanisms for package handling on the client:
- aaru (package updates) Handled by the aaru feature, the scripts /sbin/aaru.yum.daily and /sbin/aaru.yum.boot run yum to update installed packages. Yum [2] is told to use specific repository descriptions for these tasks, which are created by /sbin/aaru.yum.create before, according to the values of VAMOS variables OS_ARCH, CF_SL_release, CF_YUM_extrarepos* and CF_DZPM_AGING.
- yumsel (addition and removal of packages) Handled by the aaru feature, the script /sbin/yumsel installs additional packages or removes installed ones. Configuration files for this task are read from /etc/yumsel.d/, which is populated by /sbin/yumsel.populate before, according to the values of VAMOS variables CF_yumsel_*.
- KUSL3 (everything related to kernels) Handled by the kernel feature, this script deals with kernels and related packages (modules, source), according to the values of VAMOS variable Linux_kernel_version and a few others.
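Summarised, the scripts involved and the VAMOS variables that steer them are (this is only an overview of what is described above, not an invocation recipe; the relative order of the three mechanisms is not implied):
{{{
/sbin/aaru.yum.create   # writes yum repository descriptions from OS_ARCH, CF_SL_release, CF_YUM_extrarepos*, CF_DZPM_AGING
/sbin/aaru.yum.boot     # run yum to update installed packages (at boot)
/sbin/aaru.yum.daily    # run yum to update installed packages (daily)
/sbin/yumsel.populate   # fills /etc/yumsel.d/ from the CF_yumsel_* variables
/sbin/yumsel            # installs/removes packages according to /etc/yumsel.d/
/usr/sbin/KUSL3.pl      # kernels and related packages, driven by Linux_kernel_version and a few others
}}}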
SL Standard & Errata Packages
Errata are synced to arwen with /project/linux/SL3/sync-arwen.sh and then to z with /project/linux/SL3/sync-z.sh (still manually). Packages to be installed additionally by /sbin/yumsel or updated by /sbin/aaru.yum.boot and /sbin/aaru.yum.daily are NOT taken from the errata mirror created like this, but instead from "staged errata" directories created (also, still manually) by the script /project/linux/SL3/yum/stage-errata/stage-errata. The sync/stage scripts send mail to linuxroot@ifh.de unless in dryrun mode. The stage_errata script is fully perldoc'ed, the others are too simple.
Addon Packages (Zeuthen)
Most of these are found in /afs/ifh.de/packages/RPMS/@sys/System, with their (no)src rpms in /afs/ifh.de/packages/SRPMS and the source tarballs in /afs/ifh.de/packages/SOURCES. Some come from external sources like the dag repository (http://dag.wieers.com/home-made/), freshrpms (http://freshrpms.net/) or the SuSE 8.2/9.0 distributions. These latter ones are typically not accompanied by a src rpm.
After adding a package, make it available to yum like this:
cd /afs/.ifh.de/packages/RPMS/@sys/System
yum-arch .
arcx vos release $PWD
Selectable Addon Packages (Zeuthen)
There's a way to provide packages in selectable repositories. For example, this was used to install an openafs-1.2.13 update on selected systems while the default for SL3 was still 1.2.11, and we didn't want to have 1.2.13 on every system.
These packages reside in directories SL/<release>/<arch>_extra/<name> on the installation server. For example, the afs update packages for 3.0.4/i386 are in /net/z/DL6/SL/304/i386_extra/afs1213 . To have clients access this repository, set any vamos variable starting with CF_YUM_extrarepos (CF_YUM_extrarepos or CF_YUM_extrarepos_host or ...) to a space separated list of subdirectories in <arch>_extra.
For example, CF_YUM_extrarepos='afs1213' will make aaru.yum.create add this repository (accessible via nfs or http) to the host's yum configuration.
To make available packages in such a repository, you must yum-arch the *sub*directory (not <arch>_extra). While the installation server is still running DL5, use /project/linux/SL3/YUM-DL5/yum-arch-dl5 (ignore the error messages about it being unable to open some file in /tmp).
Note that matching kernel modules must still reside in a directory searched by the update script (see below). This should generally not cause problems since these aren't updated by yum anyway.
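Putting the two halves together, enabling the afs1213 example for a single host could look like this (CF_YUM_extrarepos_host is one of the accepted variable names; the yum-arch step runs on the installation server):
{{{
# in VAMOS, for the host in question:
CF_YUM_extrarepos_host='afs1213'

# on the installation server, index the *sub*directory:
cd /net/z/DL6/SL/304/i386_extra/afs1213
/project/linux/SL3/YUM-DL5/yum-arch-dl5 .
}}}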
Additional Modules for Kernel Updates
Handled by the kernel feature, the script /usr/sbin/KUSL3.pl reads its information about which kernels to install from VAMOS variables Linux_kernel_version and a few others, and carries out whatever needs to be done in order to install new kernels and remove old ones. The script is perldoc'ed.
Basically, set Linux_kernel_version in VAMOS, and on the host (after a sue.bootstrap) run "KUSL3.pl", make sure you like what it would do, then run "KUSL3.pl -x".
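A minimal walk-through of that sequence, with an illustrative kernel version taken from the example below:
{{{
# in VAMOS: set the desired kernel(s), e.g.
#   Linux_kernel_version = '2.4.21-27.0.2.EL'
# then on the host:
sue.bootstrap
KUSL3.pl        # dry run: check what it would install and remove
KUSL3.pl -x     # actually do it
}}}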
Kernels and additional packages are found in the repository mirror including the errata directory (CF_SL_release is used to find those), and in /afs/ifh.de/packages/RPMS/@sys/System (and some subdirectories).
If the variable "Linux_kernel_modules" is set to a (whitespace separated) list of module names, KUSL3 will install (and require the availability of) the corresponding kernel-module rpm. For example, if Linux_kernel_version is "2.4.21-20.0.1.EL 2.4.21-27.0.2.EL ", and Linux_kernel_modules is "foo bar", the mandatory modules are:
||name||version||release||
||kernel-module-foo-2.4.21-20.0.1.EL||latest||latest||
||kernel-module-bar-2.4.21-20.0.1.EL||latest||latest||
||kernel-module-foo-2.4.21-27.0.2.EL||latest||latest||
||kernel-module-bar-2.4.21-27.0.2.EL||latest||latest||
Generally speaking, kernel module packages must comply with the SL conventions.
KUSL3 will refuse to install a kernel if mandatory packages are not available. Non mandatory packages include kernel-source, sound modules, kernel-doc.
ALSA
Matching kernel-module-alsa-<kernelversion> packages are installed by KUSL3.pl if (a) they are available and (b) the package "alsa-driver" is installed (the latter should be the case on desktops after yumsel has run for the first time).
Both are created from the alsa-driver srpm found in /packages/SRPMS/System. Besides manual rebuilds, there is now the option to use the script /project/linux/SL3/modules/build-alsa-modules.pl.
Short instructions for building the kernel modules package manually (for an easier method, [#alsascrp see below]):
- Install the kernel-source rpm for the target kernel. For example, this is kernel-source-2.4.21-20.EL for both kernels 2.4.21-20.EL and 2.4.21-20.ELsmp. KUSL3 will do this for you if Linux_kernel_source is set accordingly (if in doubt, set it to "all" in VAMOS, and on the build system sue.bootstrap and run KUSL3). You need not be running the target kernel in order to build the modules.
- Clean the source directory:
cd /usr/src/linux-2.4.21.....
make mrproper
- Configure the source tree. First, find the right configuration: either the file /boot/config-<kernelversion> or a matching file /usr/src/linux-2.4.21..../configs/kernel-.... Then:
{{{cp <config for target kernel> .config
make oldconfig
make dep
make clean}}}
- Build the binary module package, like this:
cd /usr/src/packages/SPECS
rpm -ivh /packages/SRPMS/System/alsa-driver-1.0.7-1.nosrc.rpm
ln -s /packages/SOURCES/alsa/* ../SOURCES
rpmbuild -bb --target i686 --define "kernel 2.4.21-20.0.1.ELsmp" \
    alsa-driver-1.0.7-1.spec
- Repeat the clean, configure, and build steps for every target kernel. Modify the target and kernel version according to what you need. We'll typically need i686 for both SMP and UP kernels. The 64bit modules have to be built on a 64bit system. Note the ia32e kernel actually *is* SMP although the name doesn't say so, and there is no UP kernel for this architecture at all. Here's a table of what you probably want to build:
||target||version||needed||
||i686||2.4.21-20.0.1.ELsmp||definitely||
||i686||2.4.21-20.0.1.EL||definitely||
||ia32e||2.4.21-20.0.1.EL||definitely||
||x86_64||2.4.21-20.0.1.ELsmp||probably not||
- Copy the resulting kernel-module-alsa-<kernelversion> rpms to the right directory:
i686: /afs/.ifh.de/packages/RPMS/i586_rhel30/System/alsa
ia32e/x86_64: /afs/.ifh.de/packages/RPMS/amd64_rhel30/System/alsa
Then make them available to yum:
cd /afs/.ifh.de/..../System
yum-arch .
arcx vos release $PWD
There is no need to copy the alsa-driver rpms generated, unless a new alsa version has been built, in which case one of the resulting packages should be copied and yum-arch'd per target directory. After the copy/yum-arch step above, KUSL3 will pick up the modules.
Anchor(alsascrp) Scripted build of the kernel modules packages:
- Make sure the kernel-source rpms are installed for all kernels you want to build for.
- Make sure the whole kernel source tree(s) are owned by you, and that you can run ssh root@<buildhost>.
- Run the script like this:
/project/linux/SL3/modules/build-alsa-modules.pl 1.0.8-1
You'll be prompted for every kernel that the script can sensibly build modules for on this system. Pick the ones you want. Check the dryrun output, and once you like it:
/project/linux/SL3/modules/build-alsa-modules.pl -x 1.0.8-1
This should build everything you need (after going through the prompting again).
- Copy the output rpms into the repository (as described in the copy/yum-arch step for the manual build above). The script will print the commands that need to be executed.
ESD CAN Module (for PITZ Radiation Monitor)
This is similar to the ALSA modules, but:
- The srpms are in /packages/SRPMS/esdcan (different ACL from others due to proprietary license of source).
- There's no build script.
- Builds should be done manually, on pitzrap itself, and always against
a fresh kernel-source package:
- remove the kernel-source package(s)
- install the right kernel-source package for the kernel you want to build the module for
- configure it:
cd /usr/src/linux-2.4.....
cp configs/kernel-2.4.21-i686.config .config
make dep
make clean
- install the srpm (it doesn't matter from which kernel build it is):
rpm -ivh /packages/SRPMS/kernel-module-esdcan-...3.3.3-1.src.rpm
- build:
rpmbuild -ba [--define 'kernel 2.4.21....'] --target i686 ...spec
- copy the .i686.rpms to /afs/.ifh.de/packages/i586_rhel30/System/esdcan, yum-arch and release
Nvidia
Again, similar to alsa. Maybe a bit simpler since the spec will deal with the kernel sources correctly and without further attention (it makes a copy of the source directory and then does the right thing).
- install the right kernel-source package (there's a build requirement)
- install the srpm:
rpm -ivh /packages/SRPMS/System/nvidia-driver-1.0.7174-3.src.rpm
- build (on an SMP system, on a UP system the define changes accordingly):
rpmbuild -ba nvidia-driver-1.0.7174-3.spec
- on i386, will build i386 userspace packages
- on x86_64, will build userspace and kernel package for current kernel
rpmbuild -bb --target i686 nvidia-driver-1.0.7174-3.spec
- on i386, will build kernel module for running kernel
rpmbuild -bb --target i686 --define 'kernel 2.4.21-27.0.2.EL' ...
- on i386, will build kernel modules for other kernels
rpmbuild -bb --define 'kernel ...' --define 'build_module 1' nvidia...
- on x86_64, will build kernel module for other kernel
- copy the .rpms to /afs/.ifh.de/packages/@sys/System/nvidia, yum-arch and release
Adding a new SL3 release
There are quarterly releases of SL3, following Red Hat's updates to RHEL. Each new release must be made available for installation and updates.
Step 1: Mirror the new subdirectory
Modify sync-arwen.sh and sync-z.sh to include the new release. Make sure there's enough space on both arwen and z. Then run sync-arwen.sh, followed by sync-z.sh. If you're using 30rolling for testing, make a link like this:
/net1/z/DL6/SL/304 -> 30rolling
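i.e. something like this (on z, assuming 304 is the release being tested against the 30rolling tree):
{{{
cd /net1/z/DL6/SL
ln -s 30rolling 304
}}}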
Step 2: Create empty extra postinstall repositories
mkdir /net1/z/DL6/SL/304/i386_post
cd /net1/z/DL6/SL/304/i386_post
/project/linux/SL3/YUM-DL5/yum-arch-dl5 .
mkdir /net1/z/DL6/SL/304/x86_64_post
cd /net1/z/DL6/SL/304/x86_64_post
/project/linux/SL3/YUM-DL5/yum-arch-dl5 .
If some packages are needed at this stage, of course put them there...
Step 3: Create staged errata directories
Modify /project/linux/SL3/yum/stage-errata/stage-errata.cf to include the new release. Note if you're trying 30rolling as a test for the release, you must configure 30rolling, not 304 (or whatever). Now run stage-errata.
Step 4: Make the kernel/initrd available for PXE boot
Go into /tftpboot on z. Do something like
cp -i /net1/z/DL6/SL/304/i386/images/SL/pxeboot/vmlinuz vmlinuz.sl304
cp -i /net1/z/DL6/SL/304/x86_64/images/SL/pxeboot/vmlinuz vmlinuz.sl304amd64
cp -i /net1/z/DL6/SL/304/i386/images/SL/pxeboot/initrd.img initrd.sl304
cp -i /net1/z/DL6/SL/304/x86_64/images/SL/pxeboot/initrd.img initrd.sl304amd64
Then cd into pxelinux.cfg. Make copies of the relevant configuration files (cp SL303-i386-ks SL304-i386-ks; cp SL303-x86_64-ks SL304-x86_64-ks) and edit them accordingly (s/303/304/g).
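Spelled out, assuming the 3.0.3 files are the ones being copied (adjust the names for other releases), the sequence might look like this; the perl one-liner is just one way to apply the s/303/304/g substitution, editing by hand works as well:
{{{
cd /tftpboot/pxelinux.cfg
cp SL303-i386-ks   SL304-i386-ks
cp SL303-x86_64-ks SL304-x86_64-ks
perl -pi -e 's/303/304/g' SL304-i386-ks SL304-x86_64-ks
}}}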
Step 5: Make the release available in VAMOS
Fire up the GUI, select "vars" as the top object, go to CF_SL_release, choose the "values_host" tab, and add the new value to the available choices. Set it on some test host.
Step 6: Test
Make sure this works and sets the right link:
/project/linux/SL3/PXE/pxe <testhost>
Make sure this chooses the right directory:
cd /net1/z/DL6/profiles ./CKS3.pl <testhost>
Make sure SL3U works correctly:
ssh <testhost> /project/linux/SL3/SL3U/SL3U.pl yes please
Try an installation:
activ-ai <testhost>
- then boot it
Try updating an existing installation:
- set CF_SL_release for the host in VAMOS
- sue.bootstrap
- sue.update aaru
- have a look into /var/log/yum.log and check that everything still works
References
http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/sysadmin-guide/ch-kickstart2.html