These release notes describe the changes and new features included in the AFS® 3.4a release. These changes are not documented in the AFS System Administrator's Guide, AFS Command Reference Manual, or AFS User's Guide, although in some cases you are referred to the AFS documentation set for more information. This information is included in the sections titled ``AFS 3.4a Changes.''
Note that this document also contains AFS 3.3 release note information that has not been incorporated into the AFS documentation set. This information is included in the sections titled ``AFS 3.3 Changes.''
AFS 3.4a introduces many changes. It supports multihomed file servers for the first time; refer to the section on multihomed file servers in Chapter 14 for more information. The Backup System includes many changes to enhance performance and produce clearer status and error messages; refer to the information on the AFS Backup System in Chapter 5. Several changes have also been incorporated to fix bugs and improve performance of the Cache Manager, database servers, file servers, and the NFS/AFS Translator.
Note: Transarc provides backward compatibility with only the previous release of AFS. Therefore, except for the incompatibilities described in Section 3.3, AFS 3.4a is compatible with AFS 3.3; however, AFS 3.4a is not compatible with AFS 3.2.
The AFS 3.4a release includes the following interface and functional changes:
Chapter 2 - Supported AFS Systems
AFS 3.4a provides support for the following systems:
AFS 3.4a provides procedures and instructions for
AFS 3.4a provides changes to AFS authentication and login programs, including
AFS 3.4a provides several major and minor changes to the Backup System, including
AFS 3.4a includes a change to the bos addkey command that prevents the misuse of an existing key version number and prompts you to enter the key a second time for verification.
Chapter 7 - The fs Commands
AFS 3.4a provides several major and minor changes to the fs command suite, including
AFS 3.4a includes a new command suite, fstrace, consisting of eight commands that are used by the system administrator to diagnose problems within the AFS Cache Manager. The new fstrace command suite includes
AFS 3.4a provides several changes to the kas command suite, including
AFS 3.4a provides several changes to the package command and configuration lines, including
AFS 3.4a includes a new flag, -pipe, in the uss bulk command to assist you in running batch jobs without displaying the password prompt.
Chapter 12 - The vos Commands
AFS 3.4a provides several major and minor changes to the vos command suite, including
AFS 3.4a provides several major and minor changes to miscellaneous AFS commands, including
AFS 3.4a incorporates several functional changes, including
Several comments describing bugs fixed in AFS 3.4a are included in this chapter.
Chapter 16 - Documentation Corrections
Several comments describing documentation corrections are included in this chapter.
AFS 3.4a supports a number of new systems while dropping support for some obsolete systems. A complete list of the supported systems appears in a table at the end of this chapter.
The following supported systems are new for AFS 3.4a:
The following supported system is enhanced for AFS 3.4a:
The following systems that were supported in AFS 3.3 or AFS 3.3a are not supported in AFS 3.4a:
Table 2-1 lists all of the systems supported in AFS 3.4a. As in previous versions of AFS, use the system names shown in the table if you wish to use the @sys variable in pathnames (as discussed in Chapter 2 of the AFS System Administrator's Guide). The fs sysname command allows you to override the default value of the @sys variable, as shown in the example below.
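For example, to override the default @sys value on a client machine (the system name shown is illustrative; substitute the appropriate name from Table 2-1):
# fs sysname -newsys sun4m_54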
Table 2-1: Supported AFS Systems
System Name | Machines | Operating Systems |
AT&T/NCR Machines | ||
ncrx86_30 | AT&T/NCR System 3000 | 2.0.2 |
Digital Machines | ||
alpha_osf20 | DEC AXP | Digital UNIX 2.0 |
alpha_osf30 | DEC AXP | Digital UNIX 3.0 |
alpha_osf32 | DEC AXP | Digital UNIX 3.2 |
pmax_ul43 | DECstation 2100, 3100, or 5000 (single processor only) | Ultrix 4.3 |
pmax_ul43a | DECstation 2100, 3100, or 5000 (single processor only) | Ultrix 4.3a or 4.4 |
Hewlett-Packard Machines | ||
hp700_ux90 | Hewlett-Packard 9000 Series 700 | HP-UX 9.01, 9.03, or 9.05 |
hp800_ux90 | Hewlett-Packard 9000 Series 800 | HP-UX 9.0, 9.02, or 9.04 |
hp800_ux90 | Hewlett-Packard 9000 Series 800 MP | HP-UX 9.0 |
IBM Machines | ||
rs_aix32 | IBM RS/6000 | AIX 3.2 |
rs_aix41 | IBM RS/6000 | AIX 4.1 |
Silicon Graphics Machines | ||
sgi_52 | Silicon Graphics | IRIX 5.2 |
sgi_53 | Silicon Graphics | IRIX 5.3 |
Sun Machines | ||
sun4_411 | Sun 4 (except SPARCstations) | SunOS 4.1.1, 4.1.2, or 4.1.3 |
sun4c_411 | Sun SPARCstation IPC (and other models with "sun4c" kernel architecture) | SunOS 4.1.1, 4.1.2, or 4.1.3 |
sun4m_412 | Sun SPARCstation 4, 10, 20, and SPARCclassic (and other models with "sun4m" kernel architecture) | SunOS 4.1.2 or 4.1.3 |
sun4_53 | Sun 4 (except SPARCstations) | Solaris 2.3 |
sun4c_53 | Sun SPARCstation IPC (and other models with "sun4c" kernel architecture) | Solaris 2.3 |
sun4m_53 | Sun SPARCstation 4, 10, 20, and SPARCclassic (and other models with "sun4m" kernel architecture) | Solaris 2.3 |
sun4_54 | Sun 4 (except SPARCstations) | Solaris 2.4 |
sun4c_54 | Sun SPARCstation IPC (and other models with "sun4c" kernel architecture) | Solaris 2.4 |
sun4m_54 | Sun SPARCstation 4, 10, 20, and SPARCclassic (and other models with "sun4m" kernel architecture) | Solaris 2.4 |
When specifying the chunk size on HP-UX systems, use the default value (64 kilobytes). The use of a chunk size larger than the default may cause HP-UX systems to hang.
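For reference, the Cache Manager's chunk size is set with the afsd command's -chunksize argument, whose value is the base-two logarithm of the chunk size in bytes; a sketch follows (16 corresponds to the 64-kilobyte default, and all other afsd options are omitted):
# /usr/vice/etc/afsd -chunksize 16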
Do not run the SGI File System Reorganizer (fsr) on the /usr/vice/cache or /vicepx partitions. Running fsr on these partitions can cause corruption of the AFS cache.
This chapter explains how to upgrade your cell to AFS 3.4a from a previous version of AFS. If you are installing AFS for the first time, skip this chapter and refer to the AFS Installation Guide. Before performing the upgrade, please read all of the introductory material in the following sections.
Section 3.6 explains how to upgrade your cell from AFS 3.2 to AFS 3.4a, and includes the following subsections:
Section 3.7 explains how to downgrade your cell from AFS 3.4a to AFS 3.3a, and includes the following subsections:
Note: Transarc provides backward compatibility with only the previous release of AFS. Therefore, except for the incompatibilities described in Section 3.3, AFS 3.4a is compatible with AFS 3.3; however, AFS 3.4a is not compatible with AFS 3.2.
As Chapter 2 of this document makes clear, upgrading to AFS 3.4a from previous versions of AFS requires you to upgrade the operating system on several system types (for example, DEC machines to Ultrix 4.3, 4.3a, or 4.4; Hewlett-Packard machines to HP-UX 9.0, 9.01, or 9.03; and SGI machines to IRIX 5.2 or 5.3). If you need to upgrade an AFS machine to a new operating system version, you must take several actions to preserve AFS functionality before upgrading the operating system. These actions include:
Ensure the following before beginning any upgrade operations:
AFS 3.4a provides support for multihomed file server machines, which are machines that have multiple network interfaces and IP addresses. A multihomed file server can respond to a client RPC via a different network address than the one initially addressed by the client. By providing multiple paths through which a client's Cache Manager can communicate with it, a multihomed file server can increase the availability of computing resources and improve performance.
AFS 3.4a supports up to 16 addresses per multihomed file server machine. This enhancement requires a change in the way the Volume Location Database (VLDB) represents file server machines. In AFS 3.3 and earlier versions, the VLDB identified file server machines by a single network address. In AFS 3.4a, the VLDB uses a unique host identifier to identify each file server machine. The fileserver process on each file server machine generates this identifier automatically at startup and registers it with the vlserver process which maintains the VLDB. The identifier contains information about all of the machine's known network addresses and is updated at each restart of the fileserver process. A copy of the identifier is stored in each file server machine's /usr/afs/local/sysid file for possible use by the administrator. However, no intervention is required by administrators to generate the identifier or register it in the VLDB. Similarly, no action is required to update the VLDB to 3.4a format or version; the vlserver process performs the update automatically the first time an AFS 3.4a fileserver process registers its network addresses.
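If you wish to verify the addresses registered for your file server machines, you can display the contents of the VLDB's address registrations with the vos listaddrs command (shown as a minimal sketch; availability depends on your vos suite):
# vos listaddrs
The command lists the registered network addresses for each file server machine in the cell.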
Notes: You cannot run the database server processes (that is, servers for the Authentication, Protection, Volume Location, and Backup Databases) on multihomed machines. If you currently have AFS 3.3 file server and database server processes running on the same machine and you wish to use multihomed support, you must reconfigure these machines and move the database server functionality to another machine. AFS 3.4a does not support multihomed clients or multihomed database server machines.
When upgrading to AFS 3.4a, note the following:
Each file server machine's /usr/afs/local/sysid identifier file is unique to it. Take care not to copy a machine's /usr/afs/local/sysid file to any other machine.
If you have already upgraded some machines in your cell to AFS 3.4, you must upgrade to AFS 3.4a in a different order than cells upgrading from AFS 3.3. See section 3.5.
As mentioned previously, the AFS 3.4a vlserver process converts the VLDB to the new format automatically, whereas the conversion from AFS 3.2 to 3.3 required administrators to issue the vldb_convert command to convert the VLDB manually. Downgrading from AFS 3.4a to AFS 3.3a still requires a manual VLDB conversion using the vldb_convert command.
If upgrading from AFS 3.3 or 3.4, you do not need to bring your entire cell down during the upgrade. Restarting the vlserver and other database server processes causes a brief outage. Upgrading the fs process suite requires shutting it down, installing new kernel extensions and rebooting the machine, which interrupts file service; you can upgrade machines in the manner that least disrupts service, either one-by-one or simultaneously. Similarly, upgrading client machines requires installing new kernel extensions and rebooting, and can be done at your convenience.
AFS 3.4a makes the vos changeaddr command obsolete. File server machine addresses are registered automatically with the VL Server each time the File Server restarts.
The syscall slot number for AFS has been changed in AFS 3.4 so that AFS and DFS can coexist on the same machine. Previously, this syscall slot number conflicted with DFS.
If, as part of upgrading a DEC AXP file server machine to AFS 3.4a, you choose to upgrade from Digital UNIX version 2.0 to version 3.0 or 3.2, you must also run the fs_conv_osf30 conversion program on the machine's AFS server partitions. The syntax for the fs_conv_osf30 conversion program follows:
fs_conv_osf30 [convert | unconvert] [-part <AFS partition name or device>+] [-verbose] [-help]
Description:
The fs_conv_osf30 program converts Digital UNIX AFS server partitions between the version 2.0 format and the version 3.0/3.2 format. The fs_conv_osf30 program can perform two conversions:
The following command converts the Digital UNIX partition /vicepa to the new format, reporting its progress verbosely:
# fs_conv_osf30 /vicepa convert -verbose
The following command converts all Digital UNIX partitions on the machine back to the previous format, reporting its progress verbosely:
# fs_conv_osf30 unconvert -verbose
The issuer must be ``root'' on the machine on which the command is issued.
AFS 3.4a features a new version of the VLDB that supports multihomed
file servers; see Chapter 14 for additional information
on this feature. The first time an AFS 3.4a fileserver process starts
in your cell and registers its unique host identifier in the VLDB, the
vlserver process automatically converts the VLDB from version 3.3
format to version 3.4a format.
In the instructions that use the bos install and bos restart
commands in the following subsections, you may use the -cell, -localauth,
and -noauth arguments as appropriate.
You must perform the upgrade steps in this order:
Refer to the section of the AFS Installation Guide entitled ``Setting
Up Volumes to House AFS Binaries'' (in particular, to its subsection entitled
``Loading AFS Binaries into a Volume and Creating a Link to the Local Disk'')
for detailed instructions on copying AFS binaries into volumes.
Note the following about upgrading to AFS 3.4a:
# cd /afs/cellname/sysname/usr/afsws/root.server/usr/afs/bin
where cellname specifies your cell name and sysname specifies
the system type name.
Use the bos install command to copy the database server process
binaries into the /usr/afs/bin directory on each binary distribution
machine in turn:
# bos install -server binary distribution machine
-file buserver kaserver ptserver vlserver -dir /usr/afs/bin
where binary distribution machine is the name of the binary distribution
machine for each system type.
After ensuring that the binaries are installed on each of the server
machines, use the bos restart command to restart the database server
processes, beginning with the database server at the lowest network address:
# bos restart -server database server machine buserver
kaserver ptserver vlserver
where database server machine is the name of each database server
machine in turn (remember to start with the lowest-IP-addressed machine).
Remember to perform these steps on your database server machines, too
(even if they don't run the fs process suite, you should still upgrade
the BOS Server and other basic processes).
# bos shutdown machine name fs -wait
where machine name is the name of the server machine you are upgrading.
Change directories to your local cell's binary distribution directory
or Transarc's product tree. The following example shows the recommended
name for your local distribution location:
# cd /afs/cellname/sysname/usr/afsws/root.server/usr/afs/bin
where cellname specifies your cell name and sysname specifies
the system type name.
Use the bos install command to copy the server process binaries
into the /usr/afs/bin directory on each binary distribution machine
in turn:
# bos install -server binary distribution machine
-file bosserver fileserver runntp salvager upclient upserver volserver
-dir /usr/afs/bin
where binary distribution machine is the name of the binary distribution
machine for each system type.
If the machine you are upgrading is system type hp800_ux90 or
alpha_osf20, remember to upgrade all AFS binaries at this point;
see section 3.3.1 for details. If you are
upgrading a DEC AXP machine from Digital UNIX version 2.0 to version 3.0
or 3.2, perform the upgrade at this point, remembering to run the fs_conv_osf30
program too; see section 3.3.2.
Copy the AFS kernel extensions (libafs.o or equivalent) to the
local disk directory appropriate for dynamic loading (or kernel building,
if you must build a kernel on this system type). If the machine actually
runs client functionality (a Cache Manager), also copy the afsd
binary to the local /usr/vice/etc directory. The following example
command shows the recommended name for your local binary storage directory:
# cp -r /afs/cellname/sysname/usr/afsws/root.client/usr/vice/etc
/usr/vice/etc
where cellname specifies your cell name and sysname specifies
the system type name.
For specifics on installing the files needed for dynamic loading or
kernel building, consult the ``Getting Started'' section for this system
type in chapter 2 of the AFS Installation Guide.
Once you are satisfied that your cell is running smoothly at AFS 3.4a,
there is no need to retain the pre-AFS 3.4a versions of the server binaries
in the /usr/afs/bin directory (you can always use bos install
to reinstall them if it becomes necessary to downgrade). To reclaim the
disk space occupied in the /usr/afs/bin directory by .bak
and .old files, you can use the following command:
# bos prune -server file server machine -bak -old
where file server machine is the name of the machine on which
you wish to remove .old and .bak versions of AFS binaries.
Copy the AFS kernel extensions (libafs.o or equivalent) to the
local disk directory appropriate for dynamic loading (or kernel building,
if you must build a kernel on this system type). Also copy the afsd
binary file to the local /usr/vice/etc directory. The following
example command shows the recommended name for your local binary storage
directory:
# cp -r /afs/cellname/sysname/usr/afsws/root.client/usr/vice/etc
/usr/vice/etc
where cellname specifies your cell name and sysname specifies
the system type name.
For specifics on installing the files needed for dynamic loading or
kernel building, consult the ``Getting Started'' section for this system
type in chapter 4 of the AFS Installation Guide. Chapter 13
of these release notes provides information on using the afsd command
to configure the Cache Manager.
This section explains how to upgrade to AFS 3.4a from an earlier version
of AFS 3.4 (either AFS 3.4 Beta or the original AFS 3.4 General Availability
release; in the remainder of this section, ``AFS 3.4'' will refer to both
Beta and the original GA). These upgrade instructions require you to have
``root'' permissions. If you have not done so already, you should read
Sections 3.1 through 3.3.
If you are already running AFS 3.4 on any machines in your cell, then
the order in which you upgrade the various types of machines is different
than for cells still running AFS 3.2 or 3.3 only. Use the following to
guide your upgrade to AFS 3.4a. You must perform the steps in the order
indicated.
Once you are satisfied that your cell is running smoothly at AFS 3.4a,
there is no need to retain the pre-AFS 3.4a versions of the server binaries
in the /usr/afs/bin directory (you can always use bos install
to reinstall them if it becomes necessary to downgrade). To reclaim the
disk space occupied in the /usr/afs/bin directory by .bak
and .old files, you can use the following command:
# bos prune -server file server machine -bak -old
where file server machine is the name of the machine on which
you wish to remove .old and .bak versions of AFS binaries.
The following subsections contain instructions for upgrading to AFS
3.4 from AFS 3.2, 3.2a or 3.2b. If you have not done so already, you should
read Sections 3.1 through 3.3,
which contain information that should be understood prior to performing
the upgrade.
On each server machine that runs the fs process suite, issue the following bos shutdown command to shut it down:
# bos shutdown -server file server machine -instance fs
where file server machine is the name of the file server machine
on which the fs process suite is to be shut down.
On each database server machine, issue the following bos shutdown
command to shut down the database server and Update Server processes:
# bos shutdown -server database server machine -instance
vlserver kaserver ptserver buserver upclient upclientbin upclientetc upserver
where database server machine is the name of the database server
machine on which the vlserver, kaserver, ptserver,
buserver, upclient, upclientbin, upclientetc,
and upserver processes are to be shut down.
Change directories to your local cell's binary distribution directory
or Transarc's product tree. The following example shows the recommended
name for your local distribution location:
# cd /afs/cellname/sysname/usr/afsws/root.server/usr/afs/bin
where cellname specifies your cell name and sysname specifies
the system type name.
Use the bos install command to copy the server process binaries
into the /usr/afs/bin directory on each binary distribution machine
in turn:
# bos install -server binary distribution machine
-file *-dir /usr/afs/bin
where binary distribution machine is the name of the binary distribution
machine for each system type.
If the machine you are upgrading is system type hp800_ux90 or
alpha_osf20, remember to upgrade all AFS binaries at this point;
see section 3.3.1 for details. If you are
upgrading a DEC AXP machine from Digital UNIX version 2.0 to version 3.0
or 3.2, perform the upgrade at this point, remembering to run the fs_conv_osf30
program too; see section 3.3.2.
On the database server machine with the lowest network address,
copy the vldb.DB0 (database) file, preferably to a different file
system. If you copy the database to a directory in the same file system
as /usr/afs/db, make sure there are still 18 megabytes of free disk
space to accommodate the conversion process. Copy the file as follows:
# cp /usr/afs/db/vldb.DB0 pathname
where pathname is the name of the directory to which the database
file is to be copied.
On the database server machine with the lowest network address,
issue the vldb_convert command to convert the database to version
3 format. (The binary for this command should be in the /etc subdirectory
of the temporary storage area of the local disk.) You cannot convert the
VLDB from version 2 to version 4 in one command. You must first convert
the VLDB to version 3 format as shown in this step; the VLDB conversion
from version 3 to version 4 is automatic. The following
command completes the conversion in a few minutes.
# vldb_convert -to 3 -from 2
# cp -r /afs/cellname/sysname/usr/afsws/root.client/usr/vice/etc
/usr/vice/etc
where cellname specifies your cell name and sysname specifies
the system type name.
For specifics on installing the files needed for dynamic loading or
kernel building, consult the ``Getting Started'' section for this system
type in chapter 2 of the AFS Installation Guide.
Reboot each remaining database server machine.
Once you are satisfied that your cell is running smoothly at AFS 3.4a,
there is no need to retain the pre-AFS 3.4a versions of the server binaries
in the /usr/afs/bin directory (you can always use bos install
to reinstall them if it becomes necessary to downgrade). To reclaim the
disk space occupied in the /usr/afs/bin directory by .bak
and .old files, you can use the following command:
# bos prune -server file server machine -bak -old
where file server machine is the name of the machine on which
you wish to remove .old and .bak versions of AFS binaries.
# cp -r /afs/<cellname>/<sysname>/usr/afsws/root.client/usr/vice/etc
/usr/vice/etc
The following subsections contain instructions for downgrading from
AFS 3.4a to AFS 3.3a. If you have not done so already, you should read
Sections 3.1 through 3.3,
which contain information that should be understood prior to performing
the downgrade.
The following instructions assume that all file server and database
server machines are to be downgraded to full AFS 3.3a server functionality.
The instructions indicate steps that can be omitted in certain cases. Consider
the following before downgrading your cell's server machines:
On each server machine that runs the fs process suite,
issue the bos shutdown command to shut it down:
# bos shutdown -server file server machine -instance
fs
where file server machine is the name of the file server machine
on which the fs process suite is to be shut down.
On each database server machine, issue the bos shutdown
command to shut down the database server and Update Server processes:
# bos shutdown -server database server machine -instance
vlserver kaserver ptserver buserver upclient upclientbin upclientetc upserver
where database server machine is the name of the database server
machine on which the vlserver, kaserver, ptserver,
buserver, upclient, upclientbin, upclientetc,
and upserver processes are to be shut down.
Use the bos install command to copy the AFS 3.3a server process
binaries into the /usr/afs/bin directory on each binary distribution
machine in turn:
# bos install -server binary distribution machine
-file *-dir /usr/afs/bin
where binary distribution machine is the name of the binary distribution
machine for each system type.
On the database server machine with the lowest network address,
copy the vldb.DB0 (database) file, preferably to a different file
system. If you copy the database to a directory in the same file system
as /usr/afs/db, make sure there are still 18 megabytes of free disk
space to accommodate the conversion process. Copy the file as follows:
# cp /usr/afs/db/vldb.DB0 pathname
where pathname is the name of the directory to which the database
file is to be copied. Copying the vldb.DB0 file to a different directory
is strongly recommended because the conversion utility concludes by removing
the old version of the VLDB.
On the database server machine with the lowest network address,
issue the vldb_convert command to convert the database to AFS 3.3
format. (The binary for this command should be in the /etc subdirectory
of the temporary storage area of the local disk.) The command takes no
more than a few seconds to complete the conversion.
# vldb_convert -to 3 -from 4
# cp -r /afs/cellname/sysname/usr/afsws/root.client/usr/vice/etc
/usr/vice/etc
where cellname specifies your cell name and sysname specifies
the system type name.
For specifics on installing the files needed for dynamic loading or
kernel building, consult the ``Getting Started'' section for this system
type in chapter 2 of the AFS Installation Guide.
Reboot each remaining database server machine.
Copy the afsd binary file to /usr/vice/etc and AFS kernel
extensions (libafs.o or equivalent) to the local disk directory
appropriate for dynamic loading (or kernel building, if you must build
a kernel on this system type). For specifics, consult the ``Getting Started''
section for this system type in chapter 4 of the AFS Installation Guide.
The following example for dynamic loading shows the recommended name for
your local distribution location:
# cp -r /afs/cellname/sysname/usr/afsws/root.client/usr/vice/etc
/usr/vice/etc
Chapter 13 of these release notes provides
information on using the afsd command to configure the Cache Manager.
This chapter describes changes to AFS authentication and login programs
for version 3.4a. AFS 3.4a contains changes to the following:
This chapter also contains changes from the AFS 3.3 release that have
not been incorporated into the full AFS documentation set. These changes
are marked with the heading ``AFS 3.3 Changes.''
AFS 3.4a Changes
In AFS 3.4a, the AFS kaserver Authentication Server has improved
compatibility with MIT's Kerberos version 4 and 5 clients. Specifically,
the kaserver now listens for MIT Kerberos-format requests on UDP
port 88, in addition to UDP port 750. When those requests result in an
error, the kaserver now reports the error using the proper MIT error
codes.
In AFS 3.4a, the login program logs a failed login message in the system log file.
AFS 3.4a contains the following changes to the AIX login program:
The registry variable defines the domain in which users are administered.
In the /etc/security/user file on the local client machine running
the AIX 4.1 operating system, set the registry variable for default
users to DCE:
default: registry = DCE
default: SYSTEM = "AFS OR (AFS[UNAVAIL]
AND compat[SUCCESS])"
If the machine is both an AFS client and a DCE client, set the SYSTEM
variable to
default: SYSTEM = "DCE OR DCE[UNAVAIL]
OR AFS OR AFS[UNAVAIL] AND compat[SUCCESS]" DCE: program = /usr/vice/etc/afs_dynamic_auth
In the /etc/security/login.cfg file on the local client machine
running AIX 4.1, identify the AFS token with the following:
AFS: program = /usr/vice/etc/afs_dynamic_auth
The 3.3 version of the AFS login program supports secondary authentication on the AIX 3.2 operating system. In addition, the AFS login program now checks for local authentication as well.
Four changes have been made to the version of the login program distributed for Digital UNIX systems. After you enter your password at the login: prompt, the login program checks for authentication to the local machine; if necessary, it then prompts:
Enter AFS password:
The SGI login program imposes an 8-character limitation on passwords. Be aware that when using the integrated login program, SGI truncates the AFS password after the first eight characters.
For Solaris environments, AFS supports the existence of an /etc/default/login
file. In this file, you can set the following variables:
This chapter describes changes to the
AFS 3.4a Backup System, specifically, the Tape Coordinator and the backup
command suite. In particular, AFS 3.4a contains two new commands, backup
volsetrestore and backup interactive, and the following enhancements:
The AFS 3.4a Backup System supports a new user-defined configuration
file that allows you to automate tape operations with tape stackers and
jukebox devices. Upon startup, the butc command reads the backup
configuration file, /usr/afs/backup/CFG_<tape_device>,
and configures the Tape Coordinator according to the parameters defined
in the file. You can configure the Tape Coordinator to call executable
routines that suppress operator prompts and handle changing tapes within
a tape stacker or jukebox device by setting the MOUNT and UNMOUNT
parameters in the CFG_<tape_device> file.
You can also use the CFG_<tape_device> file to automate operations
on other types of tape devices or to files on a disk device. For example,
you can automate the backup dump process and dump to a file (up
to 2 GB) on a disk drive, instead of a tape drive, by configuring the FILE
parameter. You can also cancel automatic querying for tapes on a tape device
by configuring the AUTOQUERY parameter and turn off name checking
of tapes on a tape device by configuring the NAME_CHECK parameter.
The CFG_<tape_device> file does not replace the /usr/afs/backup/tapeconfig
file; the butc process still requires the tape device information
stored in that file.
Each backup device on a Tape Coordinator machine can have its own user-defined
configuration file. The file must reside in the /usr/afs/backup
directory and it must have a name of the form CFG_<tape_device>,
where <tape_device> is a variable part of the file name that
specifies the relevant device (jukebox or stacker). A separate file is
required for each backup device.
When starting a Tape Coordinator, the butc program reads the
CFG_<tape_device> file and configures the Tape Coordinator based
on the parameter settings it finds in the file. The configuration file
parameters are the following:
MOUNT <filename>
where <filename> is the name of the file that contains the
executable routine.
If you want the Backup System to support a tape stacker or jukebox device,
you can write an executable routine and put it in this file to perform
the tape mount operations for the device. By default, the Backup System
prompts the operator to mount a tape before opening the tape device file.
Prior to opening the tape device, the Tape Coordinator checks for the
MOUNT parameter in the CFG_<tape_device> configuration
file. The file specified by the MOUNT parameter contains an administrator-written script or program that mounts the tape device. When the Tape Coordinator locates the MOUNT parameter, it executes that file instead of prompting the operator to mount the tape. The executable routine runs with administrator rights. The following information
is passed from the Tape Coordinator to the executable routine via command
line arguments:
The tape operation (one of the following backup commands):
For an appended dump:
If the executable routine returns an exit code of 0, the Tape
Coordinator operation continues. If the executable routine returns an exit
code of 1, the Tape Coordinator operation aborts. If any other
exit code is returned by the routine, it causes the Tape Coordinator to
prompt the operator for the correct tape at the Tape Coordinator window.
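A minimal sketch of a MOUNT routine for a stacker follows; the stackerload utility and the argument positions are hypothetical, so adjust them to match the argument list above and your site's stacker control software:
#!/bin/sh
# /usr/afs/backup/stacker0.1 -- invoked by butc via the MOUNT parameter.
# The Tape Coordinator passes the tape operation and tape information
# as command-line arguments; the positions assumed here are illustrative.
operation=$1
tapename=$2
# Ask the (hypothetical) stacker control utility to load the tape.
if stackerload -device /dev/stacker0.1 -tape "$tapename"; then
    exit 0    # exit code 0: the Tape Coordinator continues the operation
else
    exit 1    # exit code 1: the Tape Coordinator aborts the operation
fi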
UNMOUNT <filename>
where <filename> is the name of the file which contains the
executable routine for use with a tape stacker or jukebox device.
After closing a tape device, the Tape Coordinator executes the routine
in the file specified by the UNMOUNT parameter (whether the close
operation succeeds or fails); the routine is called only once. The routine
specified by the UNMOUNT parameter removes a tape from the tape
device. The Backup System passes the following information to the executable
routine from the Tape Coordinator:
ASK { YES | NO }
There are two valid arguments for the ASK parameter:
AUTOQUERY { YES | NO }
There are two valid arguments for the AUTOQUERY parameter:
NAME_CHECK { YES | NO }
There are two valid arguments for the NAME_CHECK parameter:
BUFFERSIZE <size>
where <size> specifies the memory allocation for backup
dump and backup restore operations. By default, <size>
is specified in bytes. If you wish to use a different unit of measure,
you can specify kilobytes (for example, 10k) or megabytes (for example,
1m) when you specify the size.
For backup dump operations, volumes are read into the memory
buffer and then written out to tape, in contrast to the normal operation
of going from disk to the tape drive at a slower rate. This allows faster
transfers of volume data from a file server to the Tape Coordinator machine
and faster transfers (streaming of the tape drive) from memory to the tape
drive. A buffer size of 1 tape block (16 KB) is the default for the parameter
for a backup dump operation.
For backup restore operations, volumes are read from tape into the memory buffer and then written out to the File Server. This allows faster transfers of volume data from the tape drive to the Tape Coordinator machine and from memory to the File Server. A buffer size of 2 tape blocks (32 KB) is the default for the parameter for a backup restore operation.
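For example, the following CFG_<tape_device> line allocates a one-megabyte buffer for dump and restore operations:
BUFFERSIZE 1m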
FILE { YES | NO }
The FILE parameter has two valid arguments:
There are two general considerations concerning the CFG_<tape_device>
files (these considerations are discussed in detail in Section 5.1.1.1):
An example /usr/afs/backup/tapeconfig entry for the stacker device follows:
2G 5K /dev/stacker0.1 0
The following five lines comprise an example of a configuration file
for dealing with stacker-type automated backup equipment:
MOUNT /usr/afs/backup/stacker0.1
UNMOUNT /usr/afs/backup/stacker0.1
AUTOQUERY NO
ASK YES
NAME_CHECK NO
This example CFG_<tape_device> file sets the following conditions:
An example /usr/afs/backup/tapeconfig entry for the disk (file) device follows:
1536M 0K /dev/HSM_device 20
The following example CFG_<tape_device> file configures the
Backup System to dump directly to a file.
MOUNT /usr/afs/backup/file
FILE YES
ASK NO
This example CFG_<tape_device> file sets the following conditions:
The primary function of this routine is to establish a link between
the device file and the file to be dumped or restored. The UNIX ln -s
command creates the symbolic link between the two files.
A backup dump, backup restore, backup savedb, or
backup restoredb operation will link to a new file using the tapename
and tapeid parameters to build the file name. The tapename
and tapeid parameters are used so that backup restore operations
can easily link to the proper file.
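A sketch of such a routine follows; the argument positions and the /usr/backup/dumps directory are hypothetical, and the device file name matches the earlier /dev/HSM_device example:
#!/bin/sh
# MOUNT routine for dumping to a file on disk (FILE YES).
tapename=$1
tapeid=$2
# Remove any stale link, then point the configured device file at the
# per-dump disk file built from the tapename and tapeid parameters.
rm -f /dev/HSM_device
ln -s /usr/backup/dumps/${tapename}_${tapeid} /dev/HSM_device
exit 0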
AFS 3.4a contains several enhancements to error messages, log messages,
and error handling. These include the following:
In AFS 3.4a, the Tape Coordinator allows a scan to begin on any tape of a dump set. In previous versions of AFS, the Tape Coordinator had to start a scan on the first tape of a dump set.
The limitations to a tape scan are:
Previously, the Backup System prompted the user for the name of the
tape by sounding a bell and sending a message to the screen. The system
reprompted the user for the information every 15 seconds. The repeated
messages caused the user's information to scroll off the screen. In AFS
3.4a, the system initially prompts for the tape name by sounding a bell
and sending a message to the screen but reprompts every 15 seconds only
by sounding a bell. This modification keeps the user's information from
scrolling off the screen.
This section describes changes to individual commands in the backup
command suite for AFS 3.4a.
All commands in the backup command suite now support the -localauth
flag and the -cell argument.
This flag is useful only for commands issued on file server machines,
since client machines do not have a /usr/afs/etc/KeyFile file. It
is intended for cron-type processes or jobs included in the machine's
/usr/afs/local/BosConfig file. An example might be a command that
automatically runs the backup dump command on certain volumes for
archival backups. See the chapter in the AFS System Administrator's
Guide for information about backing up the system.
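As a sketch of such a cron-type entry (the instance name, volume set, dump level, and start time are hypothetical), the following bos create command adds a job to the machine's BosConfig file that runs an archival backup dump with the -localauth flag every day at 5:00 a.m.:
# bos create -server fs1.abc.com -instance backupuser -type cron -cmd "/usr/afs/bin/backup dump user /weekly -localauth" "05:00"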
The -localauth flag can also be used if the issuer is unable to authenticate with AFS but is logged into the local file server machine as the local superuser root.
A new command, backup volsetrestore, has been added to the backup
command suite. The backup volsetrestore command restores all volumes
in a volume set or restores one or more individual volumes. The command
is useful for recovering from catastrophic losses of data, such as the
loss of all volumes on multiple partitions of a file server machine or
the loss of multiple partitions from multiple file server machines. The
backup volsetrestore command can restore specialized collections
of volumes, as well as restore different volumes to different sites. In
contrast, the backup volrestore command restores one or more volumes
to a single site, and the backup diskrestore command restores all
volumes that reside on a single partition to the same partition.
The syntax of the backup volsetrestore command follows:
backup volsetrestore [-name <volume set name>] [-file <file
name>] [-portoffset <TC port offset>]
[-n] [-localauth] [-cell <cell name>] [-help]
The backup volsetrestore command restores the contents of specified
volumes from tape to the file system. The command performs a full restore
of each volume, restoring data from the last full dump and all subsequent
incremental dumps (if any) of each volume. Use the -name argument
or the -file argument to indicate the volumes to be restored.
Note that if you restore a volume to a site other than the site that
is indicated in the VLDB and if the volume resides in the location specified
in the VLDB, the existing version of the volume is removed when the volume
is restored and the volume's entry in the VLDB is updated accordingly.
If you restore a volume to the site at which it currently exists, the command
overwrites the existing version of the volume.
Using the -name Argument:
Use the -name argument of the backup volsetrestore command
to restore the volumes included in a specified volume set. The command
reads the VLDB to determine all volumes that satisfy fields of the entries
in the volume set. It then looks in the Backup Database to determine the
tapes that contain the last full dump and all subsequent incremental dumps
of each volume. It restores each volume included in an entry in the volume
set to the site listed in the VLDB, overwriting any existing version of
the volume.
You can specify the name of an existing volume set, or you can define
a new volume set and add entries that correspond to the volumes that need
to be restored. It can be useful to define a new volume set when you are
starting new file servers and want to create a new volume set for backing
up these file servers. For example, suppose you need to restore all volumes
that reside on the file server machines named fs1.abc.com and fs2.abc.com.
You can use the backup addvolset command to create a new volume
set. You can then use the backup addvolentry command to add the
following entries to the new volume set:
fs1.abc.com.*.* fs2.abc.com.*.*
These entries indicate all volumes on all partitions on the machines
named fs1.abc.com and fs2.abc.com. Once the new volume set
is defined, you can issue the backup volsetrestore command, specifying
the name of the volume set with the -name argument.
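A sketch of those commands (the volume set name fs.recovery is hypothetical; the machine names match the example above):
# backup addvolset -name fs.recovery
# backup addvolentry -name fs.recovery -server fs1.abc.com -partition ".*" -volumes ".*"
# backup addvolentry -name fs.recovery -server fs2.abc.com -partition ".*" -volumes ".*"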
For volume sets created for use with the backup volsetrestore
command, define entries that match the ReadWrite versions of volumes. The
Backup System then searches the Backup Database for a dump of the ReadWrite
or Backup volume. If you define a ReadOnly or Backup volume, the Backup
System will restore only that volume name (if the ReadWrite volume exists).
Also, the volume set expansion may miss volumes that have been dumped.
Using the -file Argument:
Use the -file argument of the backup volsetrestore command
to restore each volume that has an entry in a specified file. The command
examines the Backup Database to determine the tapes that contain the last
full dump and all subsequent incremental dumps of each specified volume.
It restores each volume to the site indicated in the specified file.
An entry for a volume in a file to be used with the command must have
the following format:
machine partition volume [comments...]
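For instance, a restore file might contain entries like the following (the machine, partition, and volume names are hypothetical; the fields are described below):
fs1.abc.com /vicepa user.jones
fs2.abc.com /vicepb user.smith restored after disk crash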
The entry provides the following information:
If you omit the -n flag, the backup volsetrestore command
returns the unique task ID number associated with the restore operation.
The task ID number is displayed in the command window directly following
the command line and in the Tape Coordinator's monitoring window if the
butc command is issued with debug level 1. The task ID number is
not the same as the job ID number, which is visible with the (backup)
jobs command if the backup volsetrestore command is issued in
interactive mode. The task ID number is a temporary number assigned to
the task by the Tape Coordinator, whereas the job ID number is a permanent
number assigned to the job by the Backup System. Since the job ID number
is permanent, it can be referenced. Note that the task ID and job ID numbers
are not assigned to the operation until the command actually begins to
restore volumes.
If you include the -n flag, the command displays the number of
volumes that would be restored, followed by a separate line of information
about each volume to be restored (its full and incremental dumps). For
each volume, the command provides the following output:
machine partition volume_dumped # as volume_restored;
tape_name; pos position number; date
The output provides the following information:
If you intend to write the output of the -n flag to a file for
use with the -file argument, you may have more than one entry for
a volume; the command ignores any additional lines for the volume, but
if you wish to exclude a volume you must remove all existing entries for
that volume in the file. You do not need to remove the number sign (#)
and the information that follows it; the command ignores any characters
that follow the third argument on a line.
When the -n flag is included, no task ID and job ID numbers are
reported because none are assigned.
The amount of time required for the backup volsetrestore command
to complete depends on the number of volumes to be restored. However, a
restore operation that includes a large number of volumes can take hours
to complete. To reduce the amount of time required for the operation, you
can execute multiple instances of the command simultaneously, specifying
disjoint volume sets with each command if you use the -name argument,
or indicating files that list different volumes with each command if you
use the -file argument. Depending on how the volumes to be restored
were dumped to tape, specifying disjoint volume sets can also enable you
to make the most efficient use of your backup tapes when many volumes need
to be restored.
The following example restores all volumes included in entries in the
volume set named data.restore, which was created expressly to restore
data to a pair of file server machines on which all data was corrupted
due to an error. All volumes are restored to the sites recorded in their
entries in the VLDB.
% backup volsetrestore data.restore
% backup volsetrestore -file /tmp/restore -portoffset 1
The issuer must be listed in the /usr/afs/etc/UserList file for
the specified cell.
-pname <permanent_tape_name>
where permanent_tape_name specifies the permanent name that the
user assigns to the tape.
The new syntax for the backup labeltape command is
backup labeltape [-name <AFS_tape_name>] [-size <tape
size in Kbytes, defaults to size in tapeconfig>] [-portoffset <TC
port offset>] [-pname <permanent_tape_name>] [-localauth]
[-cell <cell name>] [-help]
If the user does not explicitly name a tape with a permanent name, AFS
assigns a non-permanent name to the tape as it did previously. The Backup
System produces this non-permanent name by concatenating the volume set
and dump level with a tape sequence index number (for example, guests.monthly.3).
This name is not permanent and changes whenever the tape label is re-written
by a backup command (for example, when using the backup dump,
backup labeltape, and backup savedb commands). The AFS-assigned
non-permanent name is listed in the AFS tape name field in the
output resulting from the backup readlabel command.
As in AFS 3.3, the backup labeltape command overwrites the existing
tape label and destroys any data on the tape, for example, when the user
wishes to recycle a tape that was previously used to store other dumped
volumes. If the -pname argument is not supplied with the backup
labeltape command, the tape keeps its permanent name. A user can enter
a null name to remove the permanent name as shown in the following example:
backup labeltape -pname ""
In AFS 3.4a, the backup readlabel command lists the permanent
tape name, which users can assign with the backup labeltape command,
and the AFS tape name, which is assigned by AFS, in the output of the command.
If you designated a permanent tape name with the backup labeltape
command, the command displays the permanent tape name (tape name)
and the AFS-assigned tape name (AFS tape name), as shown in the
following output:
In AFS 3.4a, the backup scantape command lists the permanent
tape name, which users can assign with the backup labeltape command,
and the AFS tape name, which is assigned by AFS, in the output of the command.
If you designated a permanent tape name with the backup labeltape
command, the command displays the permanent tape name (tape name)
and the AFS-assigned tape name (AFS tape name), as shown in the
following output:
This chapter describes changes to the
bos command suite for AFS 3.4a. In particular, AFS 3.4a contains
changes to the bos addkey command.
These changes are marked with the heading ``AFS 3.4a Changes.''
This chapter also contains changes from the AFS 3.3 release that have
not been incorporated into the full AFS documentation set. These changes
are marked with the heading ``AFS 3.3 Changes.''
In AFS 3.4a, the bos addkey command has been updated to prompt
you twice for the key in the same manner that you are prompted to enter
a password during a password change. The prompt follows:
# bos addkey -server <machine name> -kvno 0
Input key:
Retype input key:
If you type the key incorrectly the second time, the command displays
the following error message and exits without adding a new key:
Input key mismatch
In AFS 3.3, if the -key argument was not provided on the command
line, the command only prompted you to enter the key once.
The bos addkey command has also been updated to prevent you from
reusing a key version number currently found in the /usr/afs/etc/KeyFile
file. This ensures that the current key is not overwritten with a new key, which would prevent users who still have tickets sealed with the current key from communicating with the file server.
AFS 3.3 Changes
In earlier versions of AFS, the bos addkey command required the
entry of a new key on the command line. This approach posed many obvious
security problems because the key was visible on the screen, in the process
entry for the ps command, and in the command history of the
issuer's shell.
To prevent these security risks, the -key argument has been made
optional on the bos addkey command. If you do not provide the argument
on the command line, you are prompted to enter the key in the same way
that you are prompted to enter a password during a password change.
The new syntax of the bos addkey command follows:
bos addkey -server <machine name> [-key <key>]
-kvno <key version number> [-cell <cell name>] [-noauth]
[-localauth] [-help]
The -long flag with the bos status command now displays
the pathnames of notifier programs associated with processes via the bos
create command.
This chapter describes changes to the
fs command suite for AFS 3.4a. In particular, AFS 3.4a contains
a new command, fs storebehind, and changes to the following fs
commands:
This chapter also contains changes from the AFS 3.3 release that have
not been incorporated into the full AFS documentation set. These changes
are marked with the heading ``AFS 3.3 Changes.''
AFS 3.4a supports preferences for Volume Location (VL) servers in addition
to preferences for file servers. These preferences are file server or VL
server machines from which the client machine's Cache Manager prefers to
access ReadOnly volumes or VLDB information, respectively. Preferences
are specified as servers and ranks. The first value is the name or IP address
of the server; the second is the numerical rank to be associated with that
server. The Cache Manager bases its preference on a numerical rank; the
smaller the numerical rank, the greater the Cache Manager's preference
for selecting that server. The numerical rank can be set by the Cache Manager
or by the user explicitly with the fs setserverprefs command.
Each Cache Manager stores a table of preferences for file server and
VL server machines. A preference is stored as a file server or VL server
machine's Internet Protocol (IP) address and an associated ``rank.'' A
file server or VL server machine's rank is an integer in the range from
0 to 65,534 that determines the Cache Manager's preference
for selecting the server machine when the Cache Manager must access a ReadOnly
replica or VLDB that resides on it. Preferences can bias the Cache Manager
to access ReadOnly replicas or VLDB information from machines that are ``near'' rather than from those that are ``distant'' (``near'' and ``distant'' refer to network distance rather than physical distance). Effective preferences can generally
reduce network traffic and result in faster access of data.
Most AFS cells have multiple database server machines running the vlserver
process. When a Cache Manager needs volume information from the VLDB, it
first contacts the VL server with the lowest numerical rank. If that VL
server is unavailable, it attempts to contact the VL server with the next
lowest rank. If all of a cell's VL servers are unavailable, the Cache Manager
will not be able to retrieve files from that cell.
A replicated AFS volume typically has multiple ReadOnly volumes. Each
ReadOnly volume provides the same data, but each resides on a different
file server. When the Cache Manager needs to access a ReadOnly volume,
it first contacts the VL server to determine the IP addresses of the file
servers on which the ReadOnly volume resides. The Cache Manager then checks
its internal table to determine the rank associated with each of the file
server machines. After comparing the ranks of the machines, the Cache Manager
attempts to access the ReadOnly volume on the machine that has the lowest
integer rank.
If the Cache Manager cannot access the ReadOnly volume on the server
with the lowest rank (possibly because of a server process, machine, or
network outage), the Cache Manager attempts to access the ReadOnly volume
on the server with the next lowest rank. The Cache Manager continues in
this manner until it either accesses the ReadOnly volume, or determines
that all of the relevant servers are unavailable.
If the Cache Manager is unable to access a server, it marks that server as ``down.'' The server's rank is unchanged, but the Cache Manager does not send requests to that server until it knows that the server has returned to service.
The Cache Manager assigns preferences to file servers as it accesses
files from volumes on those machines; the Cache Manager assigns preferences
to VL servers when it is first initialized. The Cache Manager stores the
preferences as IP addresses and associated ranks in the kernel of the client
machine. Because they are stored in the kernel of the client machine, the
preferences are recalculated when the client machine is rebooted. To rebuild
its preferences following initialization, the Cache Manager assigns a default
rank to each VL server listed in the /usr/vice/etc/CellServDB file
and to each file server that houses a copy of a ReadOnly volume from which
it accesses data. To display the Cache Manager's current set of file server
or VL server machine preferences, use the fs getserverprefs command.
By default, the command displays its output on standard output, but you
can direct the output to a specified file.
AFS provides commands for displaying and modifying a Cache Manager's
preferences for server machines. The fs getserverprefs command can
be used to display a Cache Manager's preferences for file server and VL
server machines.
The fs setserverprefs command can be used to set the preference
for one or more file server or VL server machines. Preferences are specified
with the command as server names and ranks. The first value is the name
or IP address of the server; the second is the numerical rank to be associated
with that server.
A Cache Manager's file server preferences are potentially derived from
four different sources:
The Cache Manager uses the preferences input via the fs setserverprefs
command, when they exist, over existing default preferences. The Cache
Manager uses the last preference entered from the combined input for a
particular server machine. For example, suppose the following sequential input is given for the file server fs1.abc.com via the fs setserverprefs command:
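A sketch of such input (the ranks are hypothetical):
# fs setserverprefs -servers fs1.abc.com 30000
# fs setserverprefs -servers fs1.abc.com 20000
After the second command, the Cache Manager records a rank of approximately 20,000 for fs1.abc.com (the specified rank plus the small random increment described later in this chapter); the earlier rank of 30,000 is discarded.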
A Cache Manager's VL server preferences are potentially derived from
two different sources:
In addition to file server preferences, the fs setserverprefs
command can set preferences for Volume Location (VL) servers via the -vlservers
argument. This section contains the revised command reference page for
the fs setserverprefs command.
fs setserverprefs [-servers <fileserver names and ranks>+]
[-vlservers <VL server names and ranks>+] [-file <input
from named file>] [-stdin] [-help]
Acceptable Abbreviations/Aliases:
fs sets [-se <fileserver names and ranks>+] [-vl <VL
server names and ranks>+] [-f <input from named file>] [-st]
[-h]
fs sp [-se <fileserver names and ranks>+] [-vl <VL
server names and ranks>+] [-f <input from named file>] [-st]
[-h]
Sets the Cache Manager's preferences for one or more file server or
VL server machines. These preferences are file server or VL server machines
from which the client machine's Cache Manager prefers to access ReadOnly
volumes or VLDB information, respectively. The Cache Manager bases its
preference on a numerical rank; the lower the numerical rank, the greater
the Cache Manager's preference for selecting that file server or VL server.
The numerical rank can be set by the Cache Manager or by the user explicitly
with the fs setserverprefs command.
Each Cache Manager stores a table of preferences for file server machines
and a table of preferences for VL server machines. A preference is stored
as a server machine's Internet Protocol (IP) address and an associated
``rank.'' A file server or VL server machine's rank is an integer in the
range from 1 to 65,534.
When the Cache Manager needs to access a VL server and look up information
in the VLDB, the Cache Manager checks its internal table to see which VL
server has the lowest recorded rank. The Cache Manager then attempts to
contact the VL server with the lowest rank. If multiple VL servers have
the same rank, the Cache Manager selects them in the order in which it
finds them in its internal table of preferences.
When the Cache Manager needs to access data from a ReadOnly volume,
it first contacts the VL server and accesses the VLDB to determine the
names of the file server machines on which a ReadOnly volume resides. If
multiple servers house the ReadOnly volume, the Cache Manager consults
its preferences for server machines and attempts to access the server with
the lowest recorded rank. If multiple servers have the same rank, the Cache
Manager selects them in the order in which it received their names from
the VL server.
If the Cache Manager cannot access the server with the lowest rank,
the Cache Manager attempts to access the server with the next-lowest rank.
The Cache Manager continues in this manner until it either succeeds in
accessing the ReadOnly volume (or VLDB) or determines that all of the appropriate
servers are unavailable.
The Cache Manager stores its server preferences in the kernel of the
local machine. The preferences are lost each time the Cache Manager is
initialized with the afsd command (each time the client machine
is rebooted). After it is initialized, the Cache Manager rebuilds its collection
of preferences by assigning a rank to each VL server listed in the /usr/vice/etc/CellServDB
file and to each file server that it contacts or that houses a ReadOnly
volume from which it accesses data. The Cache Manager makes no distinction
between preferences for servers from the local cell and those for servers
from a foreign cell. However, default preferences bias the Cache Manager
to select servers that are in the same subnetwork or network as the local
machine. You can use the fs setserverprefs command to alter the
default preferences.
If the fs setserverprefs command specifies a rank for a server
for which the Cache Manager has no rank, the command defines the server's
initial rank. If the command specifies a rank for a server for which the
Cache Manager already has a rank, the command changes the current rank
to match the specified rank. You can include the fs setserverprefs
command in a machine's initialization file to load a predefined collection
of server preferences when the machine is rebooted.
Using the fs setserverprefs command, you specify preferences
as pairs of values. The first value of the pair is the hostname (for example,
fs1.abc.com) or IP address, in dotted decimal format, of a file
server or VL server; the second value of the pair is the machine's numerical
rank, an integer in the range from 0 to 65,520. Note that
you must use the -vlservers argument with the fs setserverprefs
command to specify VL server preferences for the Cache Manager.
To reduce the chance that all clients consistently assign the same rank
to a given server (and thus to provide some load balancing among servers),
the Cache Manager adds a random number in the range from 0 (zero)
to 14 to each rank that you specify. For example, if you specify
a rank of 15,000 for a server, the Cache Manager records the rank
as an integer in the range from 15,000 to 15,014.
You can specify servers and their ranks on the command line with the
-servers argument, in a file named by the -file argument, or via standard
input with the -stdin flag.
The -servers, -file, and -stdin arguments are not
mutually exclusive. You can include any combination of these arguments
with the command. Note that the command does not verify the IP addresses
specified with any of its arguments. You can add a preference for an invalid
IP address; the Cache Manager stores such preferences in the kernel, but
it ignores them (the Cache Manager never needs to consult such preferences).
Allowing the Cache Manager to Assign Preferences to File Server Machines:
The Cache Manager bases default ranks that it calculates on IP addresses
rather than on actual physical considerations such as location or distance.
It uses the following heuristic to calculate default ranks for file
server machines only: a file server machine on the same subnetwork as the
local machine receives a base rank of 20,000; a file server machine on a
different subnetwork within the same network as the local machine receives
a base rank of 30,000; and a file server machine in a different network
receives a base rank of 40,000. (These base values, plus the random addition
described previously, produce the ranks shown in the fs getserverprefs
examples later in this chapter.)
The following command uses the -servers argument to set the Cache
Manager's preferences for the file server machines named fs3.abc.com
and 128.21.18.100, assigning each a rank of 25,000:
# fs setserverprefs -servers fs3.abc.com 25000 128.21.18.100 25000
The following command uses the -servers argument to set the Cache
Manager's preferences for the same two file server machines, but it also
uses the -file argument to read a collection of preferences from
a file that resides on the local machine in the /etc/fs.prefs file:
# fs setserverprefs -servers fs3.abc.com 25000 128.21.18.100
25000 -file /etc/fs.prefs
The /etc/fs.prefs file lists preferences as pairs of values, each pair
consisting of a file server machine's name or IP address followed by its rank.
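A hypothetical example of such a file's contents follows (the server names
and ranks are illustrative only):
fs1.abc.com 30000
fs2.abc.com 35000
128.21.18.101 25000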
The following command pipes preferences produced by a program, here a
local script named calc_prefs, to the fs setserverprefs command
via the -stdin flag:
# calc_prefs | fs setserverprefs -stdin
The following command uses the -vlservers argument to set the
Cache Manager's preferences for the VL server machines named fs1.abc.com,
fs3.abc.com, and fs4.abc.com with ranks of 10000,
30000, and 45000, respectively:
# fs setserverprefs -vlservers fs1.abc.com 10000 fs3.abc.com
30000 fs4.abc.com 45000
If you want VL server preferences to survive a reboot, you can add the
fs setserverprefs command to the startup files on your client machine.
The issuer must be ``root'' on the local machine.
In AFS 3.4a, the fs getserverprefs command can display preferences
for Volume Location (VL) servers via the -vlservers flag, in addition
to file servers. This section contains the revised command reference page
for the fs getserverprefs command.
fs getserverprefs [-file <output to named file>] [-numeric]
[-vlservers] [-help]
Acceptable Abbreviations/Aliases:
fs gets [-f <output to named file>] [-n] [-vl] [-h]
Displays the Cache Manager's preferences for file server or VL server
machines. These preferences are file server or VL server machines from
which the client machine's Cache Manager prefers to access ReadOnly volumes
or VLDB information, respectively. The Cache Manager bases its preference
on a numerical rank; the lower the numerical rank, the greater the Cache
Manager's preference for selecting that file server or VL server. The numerical
rank can be set by the Cache Manager or by the user explicitly with the
fs setserverprefs command. To display VL server preferences, you
must specify the -vlservers flag with the fs getserverprefs
command. Refer to the Description section of the fs setserverprefs
command for a discussion on how the Cache Manager assigns preferences to
file servers.
Each Cache Manager stores a table of preferences for file server machines
and a table of preferences for VL server machines. A preference is stored
as a server machine's Internet Protocol (IP) address and an associated
``rank.'' A file server or VL server machine's rank is an integer in the
range from 0 to 65,534. The default rank assigned to a VL
server is an integer in the range from 10,000 to 10,126.
The fs getserverprefs command displays file server rank information
on standard output (stdout) by default. To write the output to a file instead
of standard output (stdout), use the -file argument.
The fs getserverprefs command displays a separate line of output
for each file server or VL server machine for which it maintains a preference.
By default, each line consists of the name of a file server machine followed
by the machine's rank, as follows:
hostname rank
where hostname is the name of a file server machine, and rank
is the rank associated with the machine. If the -numeric flag is
included with the command, the command displays the IP address, in dotted
decimal format, of each file server machine instead of the machine's name.
The command also displays the IP address of any machine whose name it cannot
determine (for example, if a network outage prevents it from resolving
the address into the name).
The following command displays the Cache Manager's preferences for file
server machines:
% fs getserverprefs
The following command displays the same Cache Manager's preferences,
but the -numeric flag is included with the command to display the
IP addresses rather than names of the server machines. The IP address of
the local machine is 128.21.16.212. The two file server machines
on the same subnetwork as the local machine have ranks of 20,007
and 20,011; the two file server machines on a different subnetwork
in the same network as the local machine have ranks of 30,002 and
30,010; the remainder of the file server machines are in a different
network, so their ranks range from 40,000 to 40,012.
% fs getserverprefs -numeric
The following command displays the Cache Manager preferences for VL
servers by specifying the -vlservers flag.
% fs getserverprefs -vlservers
No privileges are required.
The fs storebehind command is a new command for controlling the
timing of data storage from the Cache Manager to the file server. The fs
storebehind command performs a delayed asynchronous write to the file
server for specified file(s). The fs storebehind command allows
the Cache Manager to return control to a closing application program before
the final portion of a file is completely transferred to the file server.
This command is useful for accessing and writing very large files in
AFS. For example, if you have finished working on a large database file,
the fs storebehind command can close the file in the background
and asynchronously write it to the file server while you move on to work
on something else.
The fs storebehind command does not change the normal AFS open
and close file semantics. Note that while the file is in the process of
being closed and stored to the file server, the user closing the file still
holds the file lock.
You can specify that a particular file (by using the -files and
-kbytes arguments together) or all files (by using the -allfiles
argument) be closed after control has been returned to the closing
application program. In either case, the value you specify indicates the
maximum amount of data that can remain to be written to the file server
after control is returned to the closing application.
The -kbytes and -files arguments must appear together
on the command line to define the asynchrony for a file. If you specify
only the -kbytes argument, you will see the following message:
fs: you must specify -kbytes and -files together
If you issue the fs storebehind command without arguments or
with the -verbose argument, the command displays the current default
Cache Manager asynchrony setting (the value for the -allfiles setting).
If you issue the fs storebehind command with the -files argument,
the command displays the current asynchrony setting for the named file.
If the delayed close and write on the specified file fails, the fs
storebehind command does not notify the application or inform the application
that the close and write operations failed.
In AFS 3.4a, the default for the Cache Manager store operation is to
complete the transfer of a closed file to the file server after returning
control to the application invoking the close. In AFS 3.3, the default
for the Cache Manager operation was to return control to a closing application
program after the final chunk of a file was completely written to the file
server.
The functionality of the fs storebehind command in AFS 3.4a (delayed
asynchronous writes) was previously provided by the default setting of
the afsd command. The default functionality of the AFS 3.4a Cache
Manager (complete the transfer of a closed file to the file server) was
previously provided by the -waitclose flag of the afsd command;
for this reason, the -waitclose flag has no effect on the operation
of the Cache Manager in AFS 3.4a.
The syntax for the fs storebehind command follows:
fs storebehind [-kbytes <asynchrony for specified names>]
[-files <specific pathnames>+] [-allfiles
<new default (KB)>] [-verbose] [-help]
The following command performs a delayed asynchronous write on the
test.data file and returns control to the application program when
500 KB of the file remains to be written to the file server:
% fs storebehind -kbytes 500 -files test.data
The following command performs a delayed asynchronous write on all files
in the client's AFS cache and returns control to the application program
when 100 KB of any file remains to be written to the file server.
% fs storebehind -allfiles 100
You also can combine the previous examples on the same command line.
The following command performs a delayed asynchronous write on the test.data
file and returns control to the application program when 500 KB of the
file remains to be written to the file server. For all other files in the
Cache Manager, the command returns control to the application program when
100 KB remains to be written to the file server.
% fs storebehind -kbytes 500 -files test.data -allfiles 100
The issuer must be ``root'' to set the -files and -allfiles
arguments on the command, or the issuer must have ``write'' permissions
on the file specified with the -files argument.
The fs checkservers command probes file servers to determine
if they are available and reports any file servers that did not respond
to the probe. The output of this command has been modified for AFS 3.4a.
The following AFS 3.3 example reports that the machines fs1.abc.com
and fs3.abc.com did not respond to the client machine's probe:
% fs checkservers -cell abc.com
In AFS 3.4a, the output of the fs checkservers command has been
modified. The new report follows:
% fs checkservers -cell abc.com
Each AFS client machine probes file server machines to determine whether
the file servers it has accessed since it has been ``up'' are available.
Specifically, each client machine probes those file servers that house
data the client has cached, in the local cell by default or in a cell
specified by the -cell argument. If a file server does not respond
to a probe, the client assumes the file server is unavailable due to
server or network problems.
AFS 3.3 Changes
In previous versions of AFS, the interval between probes was automatically
set to 3 minutes. For some uses, a 3-minute probe interval may be too long
or too short. Therefore, a new argument, -interval, has been added
to the fs checkservers command to allow you to specifically set
this interval. The default value is 180 seconds; the maximum and minimum
values are 10 minutes (600 seconds) and 1 second, respectively. To check
the current length of the interval, specify 0 with the -interval
argument.
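For example, the following command (a sketch using the syntax above) sets
the probe interval to 5 minutes:
# fs checkservers -interval 300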
Only ``root'' can issue the fs checkservers command with the
-interval argument. Once set, the probe interval remains set until
it is changed via this command or until the client machine is rebooted
(at which time it returns to the default setting). If you want the time
interval specified by the -interval argument to survive a reboot,
you can put the fs checkservers command in the startup files.
Several modifications have been made to the fs exportafs command
syntax for AFS 3.4a.
The -uidcheck and -submounts arguments of the fs exportafs
command now support an on or off selection.
fs exportafs -type <exporter name> [-start <start/stop
translator ( on |off )>] [-convert <convert
from afs to unix mode ( on |off )>] [-uidcheck
<run on strict 'uid check' mode ( on |off )>]
[-submounts <allow nfs mounts to subdirs of /afs/.. ( on |off)>]
[-help]
The following command displays the current settings of the NFS translator:
% fs exportafs nfs
To reset all arguments to their default values, execute the following
commands:
% fs exportafs nfs off
% fs exportafs nfs on
AFS 3.3 Changes
In the past, when using the NFS/AFS Translator, it was often easy to
assign a token mistakenly to the wrong user or to delete the wrong user
token by entering the wrong UID with the knfs command. A new flag,
-uidcheck, has been added to the fs exportafs command that,
when used, prevents users from assigning and deleting the tokens of other
users with the knfs command. You can use this feature only if your
users have the same UIDs in the /etc/passwd files (or equivalent)
on both the NFS client and the NFS server (the NFS/AFS Translator machine).
The -cell argument on fs commands now fully expands shortened
versions of a cell name (for example, tr is a shortened version
of the cellname transarc.com), provided the shortened version is
unique. The Cache Manager determines if a shortened version is unique by
consulting the CellServDB file.
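For example, assuming that transarc.com is the only cell name in
the CellServDB file beginning with ``tr'', the following two commands
are equivalent:
% fs checkservers -cell tr
% fs checkservers -cell transarc.com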
The following fs commands are affected by this change:
Two new flags, -id and -if, have been added to the fs
copyacl, fs listacl, and fs setacl commands to allow
AFS interaction with Transarc Corporation's AFS/DFS Migration Toolkit™.
The new flags provide no functionality outside of the Migration Toolkit.
The new syntax of the commands follows:
fs copyacl -fromdir <source directory (or DFS file)>
-todir <destination directory (or DFS file)>+ [-clear] [-id] [-if]
[-help]
fs listacl [-path <dir/file path>+] [-id] [-if] [-help]
fs setacl -dir <directory>+ -acl <access list entries>+ [-clear]
[-negative] [-id] [-if] [-help]
Refer to the AFS/DFS Migration Toolkit Administration Guide and Reference
for more information about these commands.
A new argument, -linkedcell, has been added to the fs newcell
command to allow AFS interaction with Transarc Corporation's AFS/DFS Migration
Toolkit.
The new syntax of the command follows:
fs newcell -name <cell name> -servers <primary servers>+
[-linkedcell <linked cell name>] [-help]
Refer to the AFS/DFS Migration Toolkit Administration Guide and Reference
for more information about the fs newcell command.
AFS 3.4a Changes
This chapter defines the fstrace commands that system administrators
employ to trace Cache Manager activity for debugging purposes. It assumes
the reader is familiar with the concepts described in the AFS System
Administrator's Guide, especially the operation of the AFS Cache Manager.
This chapter includes the following sections:
Section 8.1, About the fstrace
Command Suite
The fstrace command suite monitors
the internal activity of the Cache Manager and allows you to record, or
trace, in detail the processes executed by the AFS Cache Manager.
These processes, or events, executed by the Cache Manager comprise
the Cache Manager (cm) event set. Examples of cm events
are fetching files and looking up information for a listing of files and
subdirectories using any form of the ls command.
The functionality of the fstrace command suite replaces the functionality
provided by the fs debug command. The fstrace log process
is not intended to be a continuous process log as other AFS logs (FileLog,
VLLog, AFSLog, etc.) are. It is only intended for diagnosing
specific problems that occur within the AFS Cache Manager.
Following are the fstrace commands and their respective functions:
Caution should be used when enabling fstrace since the log can
grow in size very quickly; this can use valuable disk space if you are
writing to a file in the local file space. Additionally, if the size of
the log becomes too large, it may be difficult for AFS Product Support
to parse the results for pertinent information.
To use the fstrace kernel tracing utility, you must first enable
tracing and reserve, or allocate, space for the trace log with the
fstrace setset command. With this command, you can set the cm
event set to one of three states:
When AFS tracing is enabled, each time a cm event occurs, a message
is written to the trace log, cmfx. To diagnose a problem, you may
read the output of the trace log and analyze the processes executed by
the Cache Manager. The trace log has a default size of 60K; however, its
size can be increased or decreased.
If a problem is reproducible, clear the cmfx trace log with the
fstrace clear command and reproduce the problem. If the problem
is not easily reproduced, keep the state of the event set active
until the problem recurs.
To view the contents of the trace log and analyze the cm events,
use the fstrace dump command to copy the content lines of the trace
log to standard output (stdout) or to a file.
The fstrace command suite uses the standard message catalog facility
to format log messages; the Cache Manager catalog file, afszcm.cat,
must be in place (in the /usr/vice/etc/C directory) so that logging
can occur.
The fstrace setset command allows you to specify the state of the cm
event set. The state of an event set determines whether information on
the events in that event set is logged. To set the state of a kernel event
set, you must issue the command on the machine on which the event set resides.
The syntax of the command is as follows:
fstrace setset [-set <set_name>+] [-active] [-inactive]
[-dormant] [-help]
Example:
The following example sets the state of the cm event set to active.
# fstrace setset cm -active
The
trace log occupies 60K of kernel memory by default. You can change the
size of the log with the fstrace setlog command. If the specified
log already exists, it is cleared when this command is issued and a new
log of the given size is created. Otherwise, a log of the desired size
is created when the log is allocated. The syntax of the command is as follows:
fstrace setlog [-log <log_name>+] -buffersize <1-kilobyte_units>
[-help]
If a trace log fills and wraps, so that its oldest entries are overwritten,
the following message appears in place of the lost data when the log is dumped:
Log wrapped; data missing.
Example:
The following example sets the size of the cmfx kernel trace
log to 80 kilobytes.
# fstrace setlog cmfx 80
To view
the information in a trace log, you must copy the content lines of the
log to standard output (stdout) or to a file. The fstrace dump command
dumps trace logs to standard output (stdout) to allow you to analyze the
Cache Manager processes. You can also direct the contents of a trace log
dump to a file by using the -file argument.
To continuously dump a single trace log, issue the fstrace dump
command with the -follow argument. If you want to dump a trace log,
it must reside on the local machine. The syntax of the command is as follows:
fstrace dump [-set <set_name>+] [-follow <log_name>+]
[-file <output_filename>] [-sleep <seconds_between_reads>]
[-help]
The output of the fstrace dump command begins with a header in
the following format:
AFS Trace Dump -- Date: date time
Found n logs.
where date is the starting date of the trace log dump, time
is the starting time of the trace log dump, and n specifies the number
of logs found by the fstrace dump command.
The following is an example of a trace log dump header:
AFS Trace Dump -- Date: Fri Nov 18 10:44:38 1994
Found 1 logs.
The contents of the log follow the header and are comprised of messages
written to the log from an active event set. The messages written
to the log contain the following three components:
time timestamp, pid pid: event message
where timestamp is the number of seconds from the start of trace
logging, pid is the process ID number of the Cache Manager event,
and event message is the Cache Manager event that corresponds with
a function in the AFS source code.
The following is an example of a dumped trace log message:
time 749.641274, pid 3002:Returning code 2 from 19
A catalog file needs to be installed when AFS is installed in order
to format the messages that are written to a log file. If your message
looks similar to the following, verify that the catalog file (afszcm.cat)
was installed in the /usr/vice/etc/C directory:
raw op 232c, time 511.916288, pid 0
If the afszcm.cat file is not in the directory, copy it there
from Transarc's distribution location, your cell's distribution location,
or your AFS distribution tape.
Every 1024 seconds, a current time message is written to each log. This
message has the following format:
time timestamp, pid pid Current
time: unix_time
where timestamp is the number of seconds from the start of logging,
pid is the process ID number of the Cache Manager event, and unix_time
is the standard UNIX time (the number of seconds since 00:00:00 GMT,
January 1, 1970).
The current time message can be used to determine the actual time associated
with each log message: subtract the current time message's timestamp from
the log message's timestamp, and add the resulting number of seconds to
the unix_time value.
If the trace log has wrapped and overwritten older entries, the dump
includes the following message in place of the lost data:
Log wrapped; data missing.
Example:
The following example creates a dump file with the name cmfx.dump.file.1.
Issue the command as a continuous process by adding the -follow
and -sleep arguments. Setting the -sleep argument to 10 dumps
output from the kernel trace log to the file every 10 seconds.
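Based on the fstrace dump syntax shown above, a command of the
following form accomplishes this:
# fstrace dump -follow cmfx -file cmfx.dump.file.1 -sleep 10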
The
fstrace lslog command displays information about the cmfx
trace log. By default, the fstrace lslog command lists only the
name of the log. It optionally displays size and allocation information
when issued with the -long flag. The syntax is as follows:
fstrace lslog [-set <set_name>+] [-log <log_name>+]
[-long] [-help]
You must be ``root'' on the local machine to use this command.
Example:
The following example uses the -long flag to display additional
information about the cmfx trace log.
# fstrace lslog cmfx -long
Available logs:
The
fstrace lsset command displays information about the state of the
cm event set. The syntax of the command is as follows:
fstrace lsset [-set <set_name>+] [-help]
The -set argument specifies the name of the event set about which
information is to be displayed. The only valid value is cm. If
you omit the -set argument, the default is cm.
The output from this command lists the event set and its states. The
three event states for the cm event set are:
Example:
The following example displays the event set and its state on the local
machine.
# fstrace lsset cm
The fstrace clear command clears trace log data by log name or event
set; space remains allocated for the trace log in the kernel. When you
are no longer concerned with the information in a trace log, you can clear
the log to discard its contents. (To reclaim the kernel space allocated
to the log, set the event set to dormant with the fstrace setset
command.) The syntax of the command is as follows:
fstrace clear [-set <set_name>+] [-log <log_name>+]
[-help]
If the cmfx kernel trace log already exists and you wish to change
the size of the trace log, the fstrace setlog command automatically
clears the trace log when a new log of the given size is created.
Examples:
The following example clears the cmfx log used by the cm
event set on the local machine.
# fstrace clear cm
The following example also clears the cmfx log on the local machine.
# fstrace clear cmfx
The
fstrace apropos command and the fstrace help command display
the name and a short description for every fstrace command. If the
-topic argument is specified, the commands provide the short description
for only the command names listed. The fstrace help command provides
the syntax along with the short description when the -topic argument
is specified. The syntax of the commands is as follows:
fstrace apropos -topic <help string> [-help]
fstrace help [-topic <help string>+] [-help]
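For example, a command of the following form displays the syntax and short
description for the fstrace setlog command:
# fstrace help setlog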
This section contains a detailed example of the use of the fstrace
command suite. Assume that the Cache Manager on the local AFS client machine
is having difficulty accessing a volume on one of your cell's file servers.
As a result of the problem, you contacted your Transarc Product Support
Representative, who requested that you start collecting data in a kernel
trace log using the fstrace facility. After collecting a reasonable
amount of data in the log, you can send the log contents to Transarc for
evaluation. Your Transarc Product Support Representative will provide you
with guidelines for setting up the trace log and, after discussing your
situation with you, will determine how long you should continue collecting
data for a trace.
Before starting the kernel trace log, try to isolate the Cache Manager
on the AFS client machine that is experiencing the problem accessing the
file. You may need to instruct users to move to another machine to minimize
the Cache Manager traffic on this machine. Ensure that you have the fstrace
binary in the local file space, and not in AFS, and also place the dump
file in the local file space. It is recommended that you use tracing in
this manner to minimize the amount of unnecessary AFS traffic that will
be logged by the trace log. You must be ``root'' on the local client machine
to use the fstrace command suite. If you attempt to use an fstrace
command other than fstrace apropos and fstrace help without
being ``root,'' you will see the following error:
fstrace must be run as root
Before starting a kernel trace, check the state of the event set using
the fstrace lsset command.
# fstrace lsset cm
Available sets:
If tracing has been turned off and kernel memory is not allocated for
the trace log on the client machine, the following output is displayed:
Available sets:
In that case, issue the following command to activate the cm event set:
# fstrace setset cm -active
If tracing is enabled currently on the client machine, the following
output is displayed:
Available sets:
If tracing is enabled currently, you do not need to use the fstrace
setset command. However, you should issue the fstrace clear
command to clear the contents of the trace log. This action ensures that
you will remove data from the trace log that is not related to the problem
that you are currently experiencing with the Cache Manager.
# fstrace clear cm
After checking on the state of the event set, you should check the current
state of the kernel trace log using the fstrace lslog command. Use
the -long flag with this command to determine the size of the trace
log.
# fstrace lslog cmfx -long
If tracing has not been enabled previously or the cm event set
was set to active or inactive previously, output similar to the following
is displayed:
Available logs:
The fstrace tracing utility allocates 60 kilobytes of memory
to the trace log by default. You can increase or decrease the amount of
memory allocated to the kernel trace log by setting it with the fstrace
setlog command. The number specified with the -buffersize argument
represents the number of kilobytes allocated to the kernel trace log. If
you want to increase the size of the kernel trace log to 100 kilobytes,
issue the following command:
# fstrace setlog cmfx 100
After ensuring that the kernel trace log is configured for your needs,
you can set up a file into which you can dump the kernel trace log. For
example, create a dump file with the name cmfx.dump.file.1 using
the following fstrace dump command. Issue the command as a continuous
process by adding the -follow and -sleep arguments. Setting
the -sleep argument to 10 dumps output from the kernel trace log
to the file every 10 seconds.
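A command of the following form, based on the fstrace dump syntax,
accomplishes this:
# fstrace dump -follow cmfx -file cmfx.dump.file.1 -sleep 10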
If you want to clear the trace log, use the fstrace clear command:
# fstrace clear cm
If you want to reclaim the space allocated in the kernel for the cmfx
log, issue the following command:
# fstrace setset cm -dormant
This chapter describes changes to the
kas command suite for AFS 3.4a. In particular, AFS 3.4a contains
changes to the kas examine command.
AFS 3.4a also contains a change to the kas command ticket lifetime.
These changes are marked with the heading ``AFS 3.4a Changes.''
The kas command ticket is the ticket you receive from the Authentication
server when using any command in the kas command suite. Previously,
the ticket lifetime was set to 1 hour. In AFS 3.4a, the ticket lifetime
has been changed to 6 hours to enable you to work on extended operations
such as large Authentication Database listings.
The kas examine command, which displays information for an Authentication
Database entry, has been updated to display whether a user can reuse any
of his or her last twenty passwords. This value is set by the -reuse
argument of the kas setfields command.
The following example shows the privileged user smith examining
her own Authentication Database entry with the updated output as it appears
in AFS 3.4a. Note the information provided in the last line of output about
smith's password reuse status.
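The command used in such an example takes the following form (the example
output itself is not reproduced here):
% kas examine smith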
This chapter describes changes to the
package command and configuration file lines for AFS 3.4a. In particular,
AFS 3.4a allows relative pathnames and contains changes to the following
arguments on configuration lines:
In AFS 3.4a, the package command interprets relative pathnames
beginning with ``./'', ``../'', or ``/'' specified
by the actual file argument of the ``L'' configuration line.
The package command also interprets ``:'' and ``!''
characters contained within a pathname.
The minor device number argument is specified on the ``B''
and ``C'' configuration file lines with the package command.
In AFS 3.4a, the package command interprets the number specified
by the minor device number argument as a hexadecimal number, an
octal number, or a decimal number.
Previously, the package command interpreted the minor device
number as a decimal number only.
The package command continues to interpret the major device
number as a decimal number only.
The owner argument (formerly known as the owner name argument)
is specified on the ``B,'' ``C,'' ``D,'' ``F,''
``L,'' and ``S'' configuration file lines with the package
command. In AFS 3.4a, the package command interprets the owner
argument as a user name or a user ID (see the ``user'' named in the device's
``owner'' field in the output from the ls -l command).
The group argument (formerly known as the group name argument)
is specified on the ``B,'' ``C,'' ``D,'' ``F,''
``L,'' and ``S'' configuration file lines with the package
command. In AFS 3.4a, the package command interprets the group
argument as a group name or a group ID (see the ``group'' named in the
device's ``group'' field in the output from the ls -l command).
This chapter describes changes to the uss command suite for AFS
3.4a. In particular, AFS 3.4a contains changes to the uss bulk command.
These changes are marked with the heading ``AFS 3.4a Changes.''
This chapter also contains changes from the AFS 3.3 release that have
not been incorporated into the full AFS documentation set. These changes
are marked with the heading ``AFS 3.3 Changes.''
A new flag, -pipe, has been added to the uss bulk command.
The -pipe flag has been added to assist you in running batch jobs
without displaying the password prompt. The -pipe flag allows the
uss bulk command to accept input piped in from another program.
The new syntax for the uss bulk command follows:
uss bulk -file <bulk input file> [-template <pathname
of template file>] [-verbose] [-pipe] [-help]
AFS 3.3 Changes
The documentation correctly states that each type of line in the uss
bulk command has a syntax order similar to its corresponding uss
command. The syntax of the delete line corresponds to the syntax
of the uss delete command and the syntax of the add line
corresponds to the syntax of the uss add command.
However, both the AFS System Administrator's Guide and the AFS
Command Reference Manual provide incorrect information on the syntax
of the add line. The correct syntax of the add line follows:
add <login name> [:<full name>][:<initial
passwd>][:<password expires>]
The syntax of the uss add command is incorrect in the AFS documentation.
The correct syntax follows:
uss add -user <login name> [-realname <full name
in quotes>] [-pass <initial password>]
[-pwexpires <password expires in [0..254] days (0 => never)>]
[-server <FileServer for home volume>] [-partition <FileServer's
disk partition for home volume>] [-mount <home
directory mount point>] [-uid <uid to assign the user>]
[-template <pathname of template file>] [-verbose] [-var <auxiliary
argument pairs (Num val)>+] [-cell <cell
name>] [-admin <administrator to authenticate>]
[-dryrun] [-skipauth] [-overwrite] [-help]
This chapter describes changes to the vos command suite for AFS
3.4a. In particular, AFS 3.4a contains changes to the following vos
commands:
These changes are marked with the heading ``AFS 3.4a Changes.''
This chapter also contains changes from the AFS 3.3 release that have
not been incorporated into the full AFS documentation set. These changes
are marked with the heading ``AFS 3.3 Changes.''
In AFS 3.3, the vos restore command determined whether the volume
specified by the -name argument already existed on the partition
specified by the -server and -partition arguments. If the
volume existed on the specified partition, the vos restore command
asked whether you wanted to overwrite the volume. If you entered a yes
response, the vos restore command completely overwrote the existing
volume; if you entered a no response, the command aborted. It was
impossible to perform an incremental restore operation. If the volume did
not exist on the specified partition, the vos restore command aborted.
In AFS 3.4a, the vos restore command determines whether the volume
specified by the -name argument already exists on the partition
specified by the -server and -partition arguments. If the
volume exists, the vos restore command prompts you to choose one
of the following actions: abort the restore operation, completely overwrite
the existing volume (a full restore), or perform an incremental restore.
If standard input cannot be used for a prompt, the default action is
to abort the restore operation.
The vos restore command also includes a new -overwrite
argument for situations where you do not want to be prompted or where standard
input (stdin) is redirected and cannot be used for a prompt. The new command
syntax follows:
vos restore -server <machine name> -partition <partition
name> -name <name of volume to be restored>
[-file <dump file>] [-id <volume ID>]
[-overwrite <abort | full | incremental>] [-cell
<cell name>] [-noauth] [-localauth] [-verbose]
[-help]
The valid abbreviations for the -overwrite argument are the same
as those listed as valid responses to the prompt. The default action for
the -overwrite argument is abort.
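For example, a command of the following form (with illustrative server,
partition, volume, and file names) performs an incremental restore without
prompting:
% vos restore -server fs1.abc.com -partition /vicepb -name user.smith -file smith.dump -overwrite incremental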
The following are rules for using the vos restore command:
If the volume specified with the -name argument exists on the
specified partition and the -overwrite argument is not specified,
the command performs one of the following actions:
When the vos backup command creates a Backup volume successfully,
it returns the following message:
Created backup volume for ReadWrite volume name
However, if the VL server cannot locate the ReadWrite volume at the
site listed in the VLDB, the command exits without creating the Backup
volume. The command displays the following message telling you that the
operation aborted without creating the Backup volume:
vos: can't find volume ID or name 'volumeID or volume name'
Previously, the vos backup command exited without indicating
that the Backup volume was not created.
The vos create command has a new -maxquota argument. The
new syntax for the command follows:
vos create -server <machine name> -partition <partition
name> -name <volume name> [-maxquota
<initial quota (KB)>] [-cell <cell
name>] [-noauth] [-localauth] [-verbose] [-help]
The -maxquota argument specifies the maximum amount of disk space
the volume can use. Express the -maxquota argument in kilobyte blocks
(a value of 1024 is one megabyte). A value of 0 grants an
unlimited quota, but the size of the disk partition that houses the volume
places an absolute limit on the volume's maximum size. The default value
for the -maxquota argument is 5000.
Previously, when creating a volume, you had to use the vos create
command to create a volume, use the fs mkmount command to create
the mount point for the volume, and then use the fs setquota command
to set the quota for the volume. The -maxquota argument has been
added to the vos create command to allow you to create the volume
and set the quota in the same step. The -maxquota argument does
not replace the fs setquota command; you can still use the fs
setquota command to set or change the quota of a mounted volume.
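For example, a command of the following form (with illustrative server,
partition, and volume names) creates a volume and sets its quota to 10,000
kilobyte blocks in one step:
% vos create -server fs3.abc.com -partition /vicepa -name user.smith -maxquota 10000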
There is no change in the requirement for creating a mount point for
a volume; after creating the volume with the vos create command,
you still need to create a mount point for it with the fs mkmount
command.
In AFS 3.4a, the vos release command can now update up to half
of a ReadWrite volume's replicas simultaneously. This is done automatically
and internally; no arguments have been added to the vos release
command. Previously, the vos release command updated one replica
at a time.
In AFS 3.4a, a message has been added to inform you that the vos
rename command failed because the specified volume does not exist.
The message follows:
vos: Could not find entry for volume <oldname>
Previously, if you specified a nonexistent volume with the vos rename
command, the command did not inform you that it had failed; the command
appeared to have executed properly.
In AFS 3.4a, the vos syncserv command continues to check all
remaining servers even if it cannot contact one or more of the servers.
Each time a server cannot be contacted, a message identifying that server
is displayed, as in the following example:
Transaction call timed out for server 'fs1.abc.com'
Previously, the vos syncserv command attempted to contact every
file server on which a volume resided. If the command could not contact
a particular file server, it failed without attempting to contact the remaining
file servers. The command displayed a message stating that it could not
contact a file server without specifying which file server it attempted
to contact.
AFS 3.4a Changes
The vos dump and vos restore commands allow you to dump
and restore volumes from a named pipe without timing out the fileserver
process. This feature allows AFS to interoperate with third-party backup
systems.
The vos changeaddr command changes a file server's IP address.
Changing the IP address of a file server was a difficult task in earlier
versions of AFS. After changing the IP address, you had to run the vos
syncserv and vos syncvldb commands. Then you had to issue the
vos remsite command to remove site information associated with the
ReadOnly volumes under the old IP address. A new vos command, vos
changeaddr, allows you to change a simple file server's IP address
easily.
vos changeaddr -oldaddr <original IP address> -newaddr
<new IP address> [-cell <cell name>]
[-noauth] [-localauth] [-verbose] [-help]
The following command changes the IP address of a simple file server
from 128.21.16.214 to 128.21.16.221:
% vos changeaddr -oldaddr 128.21.16.214 -newaddr 128.21.16.221
-localauth
This chapter describes changes to miscellaneous
(non-suite) AFS commands for AFS 3.4a. In particular, AFS 3.4a contains
changes to the following miscellaneous commands:
This chapter also contains changes from the AFS 3.3 release that have
not been incorporated into the full AFS documentation set. These changes
are marked with the heading ``AFS 3.3 Changes.''
AFS 3.4a contains four changes to the afsd command:
To avoid problems resulting from a disk cache that is too large, AFS
now compares the disk cache size to the partition size when you issue the
afsd command. If the disk cache size is greater than 95% of the
partition size, AFS returns an appropriate message to standard output (stdout)
and exits without starting the Cache Manager. You cannot start the Cache
Manager until you reduce the size of the disk cache to less than 95% of
the partition size.
The functionality of the fs storebehind command in AFS
3.4a (delayed asynchronous writes) was previously provided by the default
setting of the afsd command. The default functionality of the AFS 3.4a Cache
Manager (completing the transfer of a closed file to the file server) was
previously provided by the -waitclose flag of the afsd command;
for this reason, the -waitclose flag has no effect on the operation
of the Cache Manager in AFS 3.4a.
AFS 3.4a contains two enhancements to the butc command:
The new syntax of the butc command follows:
butc [-port <port offset>] [-debuglevel < 0 |
1 | 2 >] [-cell <cell name>]
[-aixscsi] [-noautoquery] [-localauth] [-help]
The -localauth flag assigns the butc command a token that
never expires. You need to run the butc command with the -localauth
flag from a file server machine as ``root.'' This flag instructs the butc
command interpreter running on the local file server machine to construct
a server ticket using the server encryption key with the highest key version
number in the /usr/afs/etc/KeyFile file on the local file server
machine. The butc command presents the ticket to the Volume and/or
Volume Location (VL) server to use in mutual authentication. This flag
is only useful for commands issued on file server machines, since client
workstations do not have a /usr/afs/etc/KeyFile file. It is intended
for cron-type processes or jobs included in the machine's /usr/afs/local/BosConfig
file. The flag can also be used if the issuer is unable to authenticate
to AFS but is logged into the local file system as ``root.''
AFS 3.4a contains four enhancements to the fileserver command:
Previously, the fileserver command gave members of the system:administrators
group only implicit ``administer'' rights on all files. If a member of
the system:administrators group wanted to have access to a directory
path where he or she did not have explicit ``lookup'' rights, the system
administrator had to add ``lookup'' rights to each directory level on the
path.
fileserver [-d <debug level>] [-p <number of processes>]
[-spare <number of spare blocks>] [-pctspare
<percentage spare>] [-b <buffers>] [-l <large
vnodes>] [-s <small vnodes>] [-vc
<volume cachesize>] [-w <call back wait interval>]
[-cb <number of call backs>] [-banner
<print banner every 10 minutes>] [-novbc <whole volume
cbs disabled>] [-implicit <admin mode
bits: rlidwka>] [-hr <number of hours
between refreshing the host cps>] [-m <min percentage spare in
partition>] [-L <large server conf>]
[-S <Small server conf>] [-k <stack size>] [-help]
A disk reserve is a portion of the disk space that is reserved in the
event that a fileserver process puts the file server temporarily
over its disk space quota. When a partition, including its reserve, is
completely full, attempts to write to it fail with the following message:
No space left on device
Each File Server (fileserver) process generates a key with an
infinite lifetime (using the AFS key), which it uses to communicate with
the Protection Server (ptserver) process. In earlier versions of
AFS, if the AFS key on which the File Server key was based was removed,
the File Server could not communicate with the Protection Server because
the File Server was still using the old key, which the Protection Server
could no longer access. The only way to break this deadlock was to restart
the File Server. (When the File Server was restarted, it generated a new
key based on the latest AFS key.)
The fileserver program has been changed to remove this deficiency.
Now, if a fileserver process is unable to authenticate with the
ptserver process, the fileserver process generates a new
key based on the latest AFS key and attempts to authenticate again. This
change affects cells whose administrators followed Transarc's recommendations
on AFS key changes and retirement but did not restart the fileserver
processes on a regular basis (if ever). Administrators of these cells
no longer need to restart their fileserver processes as a result
of an AFS key change.
This change does not affect cells whose administrators
The -tmp flag has been removed from the klog command.
The -tmp flag is no longer necessary because there is a klog.krb
program available to authenticate to AFS from a Kerberos database. The
new syntax of the klog command follows:
klog [-x] [-principal <user name>] [-password <user's
password>] [-cell <cell name>] [-servers
<explicit list of servers>+] [-pipe] [-silent]
[-lifetime <ticket lifetime in hh[:mm[:ss]]>] [-setpag] [-help]
Use the klog.krb program for Kerberos authentication rather than
the klog command with the -tmp flag.
AFS 3.3 Changes
A new flag, -setpag, has been added to the klog command.
When run with this flag, the klog command creates a process authentication
group (PAG) prior to requesting authentication. The tokens created are
then placed in this newly created PAG.
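For example, a command of the following form (the user name is illustrative)
creates a new PAG and authenticates the user smith within it:
% klog -principal smith -setpag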
In AFS 3.4a, if you run the knfs command without the -id
argument, the command uses the getuid() function to identify the
issuer and grant appropriate permissions to the issuer of the command.
Previously, if you omitted the -id argument from the knfs
command, the command defaulted to granting system:anyuser permissions
to the issuer.
The pagsh command invokes the Bourne shell by default. If you
prefer the C shell over the Bourne shell, issue the following command to
invoke the C shell:
# pagsh -c /bin/csh
Two new flags have been added to the salvager command.
salvager [initcmd] [-partition <Name of partition to salvage>]
[-volumeid <Volume Id to salvage>] [-debug] [-nowrite] [-inodes]
[-force] [-oktozap] [-rootinodes] [-salvagedirs] [-blockreads]
[-parallel <# of max parallel partition salvaging>] [-tmpdir
<Name of dir to place tmp files>] [-showlog]
[-showsuid] [-help]
In AFS 3.4a, the scout command includes the name of the file
server in a message when a problem exists on a partition. An example of
the new message for a partition named /vicepx on a file server
named fs1.abc.com follows:
Could not get information on server fs1.abc.com partition
/vicepx
Previously, when a problem existed on a partition, the scout
command displayed the following message:
Could not get information on partition /vicepx
If the server name listed at the top of the screen had scrolled off,
the user might not know which server was involved.
The -level argument of the upclient command has been removed
because its functionality is duplicated by the -clear and -crypt
flags. The new syntax for the command follows:
upclient <hostname> [-clear] [-crypt] [-t <retry
time>] [-verbose] <dir>+ [-help]
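For example, a command of the following form (with an illustrative host
name) fetches the contents of the /usr/afs/etc directory from the
machine fs1.abc.com using encrypted transfers:
# upclient fs1.abc.com -crypt /usr/afs/etc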
In addition to its previous Volume Location Database (VLDB) conversion
values, the vldb_convert command now converts the VLDB from AFS
version 3.4a (4) format to AFS version 3.3 (3) format. The
value of 4 is only used with the -from argument.
VLDB upgrade conversions from AFS version 3.3 format to AFS version
3.4a format are not necessary. The version 3.3 VLDB is automatically converted
to a version 3.4a VLDB when you upgrade the vlserver binaries.
AFS 3.3 Changes
A new flag, -dumpvldb, has been added to the vldb_convert
command. The flag directs the command to produce debugging output. The
new syntax of the vldb_convert command follows:
vldb_convert [initcmd] [-to <goal version>] [-from <current
version>] [-path <pathname>] [-showversion]
[-dumpvldb] [-help]
AFS 3.4a contains two changes to the vlserver command:
The VLLog file can be set to record three different information
levels. You can enable logging in the VLLog file by using the following
command:
# kill -TSTP <process id for vlserver>
In the following example, the ps command is run to find the process
id of the vlserver process and the kill -TSTP command is
run to enable logging in the VLLog file:
# ps -axwu | more
USER   PID  %CPU  %MEM  SZ  RSS  TT  STAT  START   TIME  COMMAND
root    93   0.0   0.0  60    0   ?  IW    Feb 27  0:00  vlserver
# kill -TSTP 93
Use the same command to increase the current level of logging information
(that is, to change from the first level of logging information to the
second level or from the second level to the third level). A log entry
is created in the VLLog file to indicate any change in the VLLog
file detail level.
The first level of information contained in the VLLog file can
include the following messages:
You can disable logging for the vlserver process with the following
command:
# kill -HUP <process id for vlserver>
You can decrease the level of logging for the vlserver process
by issuing the following command:
# kill -HUP <process id for vlserver>
Afterwards, issue the following command to obtain the desired level
of logging:
# kill -TSTP <process id for vlserver>
AFS 3.4a contains two changes to the volserver command:
The new syntax of the volserver command follows:
/usr/afs/bin/volserver [-log] [-p <lwp processes>] [-help]
AFS 3.3 Changes
The -verbose flag has been removed from the volserver
command because the flag generates only two possible messages. The functionality
of the flag is now part of the base functionality of the command. In other
words, the AFS 3.3 version of the volserver command, when run without
any flags or arguments, behaves like the AFS 3.2 version of the command
when run with the -verbose flag.
The new dlog command is for use with Transarc Corporation's AFS/DFS
Migration Toolkit. The dlog command authenticates the AFS user specified
with the -principal argument to the DCE Security Service in the
DCE cell specified with the -cell argument. DCE authentication allows
the user to access the DCE cell from an AFS client via the Translator Server.
The command provides no functionality outside of the Migration Toolkit.
The syntax of the new command follows:
dlog [-principal <user name>] [-cell <cell name>]
[-password <user's password>] [-servers
<explicit list of servers>+] [-lifetime <ticket lifetime
in hh[:mm[:ss]]>] [-setpag] [-pipe] [-help]
Refer to the AFS/DFS Migration Toolkit Administration Guide and Reference
for more information on the dlog command.
The new dpass command is for use with Transarc Corporation's
AFS/DFS Migration Toolkit. The dpass command returns the DCE password
created for a user with the dm pass command. The command provides
no functionality outside of the Migration Toolkit.
The syntax of the new command follows:
dpass [-cell <original AFS cell name>] [-help]
Refer to the AFS/DFS Migration Toolkit Administration Guide and Reference
for more information on the dpass command.
The up command now returns a 0 only if it succeeds; otherwise,
the command returns a 1. Formerly, the command always returned a
0, regardless of success or failure.
Two new arguments, -frequency and -period, have been added
to the two xstat programs (xstat_cm_test and xstat_fs_test):
xstat_cm_test [initcmd] -cmname <Cache Manager name(s) to monitor>+
-collID <Collection(s) to fetch>+ [-onceonly] [-frequency <poll
frequency, in seconds>] [-period <data
collection time, in minutes>] [-debug] [-help]
xstat_fs_test [initcmd] -fsname <File Server name(s) to monitor>+
-collID <Collection(s) to fetch>+ [-onceonly] [-frequency <poll
frequency, in seconds>] [-period <data
collection time, in minutes>] [-debug] [-help]
In previous versions of AFS, the miscellaneous commands were inconsistent
in their use of the -help flag. These commands now consistently
use the -help flag to provide information on their syntax:
This chapter lists additional functionality added to AFS for the 3.4a
release, including:
This chapter also contains changes from the AFS 3.3 release that have
not been incorporated into the full AFS documentation set. These changes
are marked with the heading ``AFS 3.3 Changes.''
Multihomed file servers have multiple IP addresses. A multihomed file
server can respond to an RPC via a different IP address than the one initially
addressed by a client machine. By making several addresses available to
service client requests, a multihomed file server provides multiple paths
through which a client machine's Cache Manager can communicate with it,
increasing the availability of computing resources and improving performance.
A multihomed file server could choose to service an RPC through a different
IP address if there is heavy network traffic at the original IP address
which serviced a previous RPC. For example, assume a multihomed file server
originally responds to a client machine's service request at the IP address
199.206.34.62 (the addresses in this example are illustrative). When the
client machine sends another service request and that address is busy
servicing the requests of other client machines, the client machine's
Cache Manager selects another of the file server's IP addresses,
199.206.34.64, and attempts to send the RPC with the service request to
that address.
AFS 3.4a supports up to 16 addresses per multihomed file server machine.
File servers register their network addresses with the Volume Location
Database (VLDB) upon startup. In AFS 3.3 and earlier versions, file servers
were identified in the VLDB by a single IP address. In AFS 3.4a, file servers
are represented in the VLDB by a unique host identifier, which is created
by the fileserver process during the startup process. This file
server host identifier contains information about all known IP addresses
for that file server. These IP addresses are updated whenever the fileserver
process is restarted.
AFS 3.4a allows you to unlink open files in the AFS file space. Unlinking
an open file is a technique for creating short-lived temporary files when
you want to keep these files hidden from other users or do not want to
keep a permanent record of the file in your file system. When you unlink
an open file, the file server renames the file with a unique file name,
.__.afsxxxx, where xxxx is a random numeric or alphanumeric string generated
by the file server. The unlinked file's V file in the AFS cache
maintains the credentials from the former file and the new filename created
by the file server.
The unlinked file does not appear to users who view the contents
of the directory using the ls command. When the unlinked temporary
file is closed, the file server removes
the file from the disk on the file server machine permanently.
When the vfsck process determines that a partition needs to be
salvaged, vfsck creates a FORCESALVAGE flag on that partition.
Previously, the fileserver process did not check for a FORCESALVAGE
flag when rebooting the file server machine after a clean shutdown. The
file server attached all volumes even if a partition had a FORCESALVAGE
flag.
In AFS 3.4a, when the file server machine is rebooting, the fileserver
process looks for a FORCESALVAGE flag. If the fileserver
process detects such a flag on a partition, it detaches all of the volumes
that it has already attached and aborts, sending an appropriate message
to the /usr/afs/logs/FileLog file. The fileserver process
passes responsibility to the bosserver process, which causes the
salvager process to run. After the salvage is complete, the fileserver
process attaches all of the volumes properly.
For international character support, AFS 3.4a supports 8-bit characters
in file and directory names. AFS file and directory names were previously
restricted to 7-bit characters (ASCII).
AFS 3.4a allows you to create /vicepx partitions larger
than 2 GB. The maximum size of a /vicepx partition is the
same as the maximum partition size of the local file system. Although you can create /vicepx partitions larger than
2 GB, AFS 3.4a does not fully support volumes larger than 2 GB. Files in
an AFS volume are still limited to a maximum of 2 GB.
AFS commands with the -cell argument produce an error message
when the host name is missing from the /usr/vice/etc/CellServDB
file. In AFS 3.4a, these commands produce an error message indicating that
the command failed and that there is a problem with the CellServDB
file. The new message also tells which line of the CellServDB file
caused the failure. For example, the klog command now issues the
following message:
Can't properly parse host line xxx.xx.xx.xx in configuration file /usr/vice/etc/CellServDB
klog: error reading cell database
Can't get local cell name!
Previously, this message explained that the command had failed but did
not indicate that there was a problem with the CellServDB file.
In AFS 3.4a, if the volume name information is available in the cache,
the Cache Manager displays the volume name along with the volume ID when
reporting information about a particular volume. The following message
is an example of how the Cache Manager may display volume information:
Waiting for busy volume XXX (name) in cell XXX.
Previously in AFS, the Cache Manager displayed only the volume ID when
reporting information about a particular volume.
In AFS 3.4a, members of the system:administrators group have
both administer (a) and lookup (l) rights on
the access control list of every directory in the system.
Previously, members of the system:administrators group had only
implicit administer (a) rights on the access control list
of every directory in the system.
In AFS 3.4a, users are now able to access ReadOnly data from all database
servers, even when Ubik cannot attain a quorum; however, they cannot update
the data until Ubik has a quorum.
Previously, all database servers except the Protection Server provided
ReadOnly data, even when no quorum existed. Users were able to read data
but were not able to update the data until Ubik established a quorum.
AFS 3.4a supports up to 256 partitions per server. The names of these
partitions range from /vicepa to /vicepiv.
In earlier versions of AFS, each server had a maximum of 26 partitions.
The names of these partitions ranged from /vicepa to /vicepz.
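The naming sequence can be illustrated with a short C sketch (illustrative only, not an AFS interface); index 0 yields /vicepa, index 25 yields /vicepz, and index 255 yields /vicepiv:
#include <stdio.h>

/* Map a partition index (0-255) to its /vicep name: indices 0-25
 * yield one letter (a-z); indices 26-255 yield two letters (aa-iv). */
void vice_partition_name(int index, char name[16])
{
    if (index < 26)
        sprintf(name, "/vicep%c", 'a' + index);
    else
        sprintf(name, "/vicep%c%c",
                'a' + (index - 26) / 26, 'a' + (index - 26) % 26);
}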
For AIX 3.2 systems only, Transarc has updated the AIX remote (r*)
commands. AFS 3.4a now supports all group permission features. For example,
entries of the following type are supported:
+@NetGroup
-@NetGroup
-@HostName
Previously, AFS did not support group permission features in the /.rhosts
or /etc/hosts.equiv file.
In AFS 3.4a, it is no longer necessary to load the AFS kernel extensions
before starting the NFS daemons for the NFS/AFS Translator.
In AFS 3.4a, source customers can build AFS Cache Managers that function
as translators.
In AFS 3.3, the Rx Remote Procedure Call (RPC) system takes
better advantage of networks with large Maximum Transfer Unit (MTU) values.
Previously, the Ethernet MTU of 1500 bytes limited the efficiency of AFS
running on high-speed networks such as FDDI. The modifications allow for
higher throughput between machines directly attached to the high-speed
network.
Previously, executing fsync(2) on an AFS file caused changes
to the file to be written to the cache device and to the file server machine,
but it did not cause the changes to be written to the file server's non-volatile
storage. To provide maximum security for user data, fsync(2) now
does the latter. This modification further reduces the amount of changed
user data that can be jeopardized by a file server crash.
With these changes, fsync(2) consumes slightly more CPU and considerably
more disk I/O resources on the file server machine than it previously did.
In practice, this facility is infrequently used and the impact of the change
is negligible; however, any application that uses fsync(2) heavily
will suffer a performance penalty.
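For example, an application that must have its data on the file server's non-volatile storage before proceeding might pair each write with fsync(2), as in this sketch:
#include <unistd.h>

/* Returns 0 on success; after fsync(2) returns, the written data
 * has reached the file server's non-volatile storage. */
int write_durably(int fd, const char *buf, size_t len)
{
    if (write(fd, buf, len) != (ssize_t)len)
        return -1;
    return fsync(fd);
}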
Every AFS binary file includes a version string identifying the configuration
of sources used to produce the binary. This allows AFS Product Support
to more quickly determine which AFS release is being used and which patches
(if any) have been applied. Use the command strings filename | grep afs3,
where filename is the name of the appropriate binary file, to display the version string.
AFS 3.3 is limited with respect to file locking, as follows:
AFS does not support byte-range locks. This includes all lockf()
calls and those fcntl() calls that specify a byte offset to a file.
However, all operations on byte-range locks return a success value of 0.
In addition, the first time a byte-range locking operation is called by
a program, AFS displays the following message:
afs: byte-range lock/unlock ignored; make sure no one else
is running this program.
AFS 3.4a Changes
AFS 3.4a includes fixes for many bugs
in AFS 3.3, a subset of which are described in this chapter. This chapter
describes only the most visible fixes included with AFS 3.4a. Unless otherwise
noted, these bug fixes do not affect the documentation.
The following backup bugs have been fixed:
The error message produced when the Backup System cannot locate a host
entry in the Backup Database has been improved. Previously, the error message
stated:
backup: Unable to connect to tape coordinator at port TC_port_offset
Now, the error message states:
backup: No such host/port entry; Can't connect to tape coordinator
at port TC_port_offset
% pts createuser -name root -id 0
the command returned a message similar to the following:
User root has id 100232323
In AFS 3.4a, the previous command does not allocate an ID, but rather
displays the following error message and aborts without creating a user:
0 isn't a valid user id; aborting
As a workaround to this problem in AFS 3.4a, issuing the vos release
command with the -f flag takes into account any files deleted from
the ReadWrite volume and does not copy them into the ReadOnly volume; this
recovers some of the lost disk space without removing the ReadOnly volume
and releasing it again.
The -log argument of the volserver command has changed so that
the VolserLog file now contains all removal activity resulting
from the vos release command when the user specifies the -f flag.
The -server and -partition arguments of the vos backupsys
command now work as described in the AFS Command Reference Manual.
The vos backupsys command has been fixed so that now when a user
specifies a volume with the -server and -partition arguments,
the command checks whether the volume is a ReadOnly or a ReadWrite and
only creates a Backup for the ReadWrite version. If the user does not specify
a volume with the -server and -partition arguments, the command
automatically creates backup volumes for every ReadWrite volume for specific
servers and partitions.
Previously in AFS, if a user issued the vos backupsys command
specifying a volume with the -server and -partition arguments,
the command did not check whether the specified volume was a ReadOnly or
a ReadWrite. If the user issued the command specifying a ReadOnly volume,
the command may have backed up the ReadOnly volume to the site of the ReadWrite
version. The user may think that the backup copy is the most recent version
when it may not be, depending on when the ReadOnly version was last updated.
In AFS 3.4a, the default value for the afsd command's -volumes
argument is 128. The afsd command now accepts values within the
range of 50 to 3000 for the -volumes argument.
Previously, the -volumes argument of the afsd command
used the value of 128, regardless of the value indicated on the command
line. Even if you did not specify the -volumes argument on the command
line and wanted to use the default value of 50 (as stated in the AFS
Command Reference Manual), the option used the value of 128.
Warning: Overwriting most recent dump before current one has
finished
The dump is overwriting a tape belonging to the current dump set. The
Backup System does not allow you to overwrite a tape belonging to the current
dump set.
Can't overwrite tape containing the dump in progress
The dump is overwriting a tape in this dump set's hierarchy. The Backup
System displays a warning on standard output (stdout) and in the TE_<device_name>
and TL_<device_name> log files:
Warning: Overwriting parent dump (DumpID number)
If a parent (master) dump is not found in the Backup Database, the following warning appears:
Warning: Can't find parent dump number in backup database
AFS 3.3 includes fixes for many bugs in AFS 3.2, a subset of which are
described in this chapter. This chapter describes only the most visible
fixes included with AFS 3.3. Unless otherwise noted, these bug fixes do
not affect the documentation.
The following bug has been fixed for the AFS miscellaneous commands:
Previous versions of the AFS documentation contained some incorrect
or misleading information. Unless otherwise noted, these documentation
errors have not been corrected in the AFS 3.4a documentation:
Step
2 of Section 2.4.1, ``Using the Kernel Extension Facility on AIX Systems,''
on page 2-9 of the AFS Installation Guide states,
``If the machine is not going to act as an NFS/AFS translator:
# cd /usr/vice/etc
# ./cfgexport -a export.ext.nonafs''
Change the command lines to:
# cd /usr/vice/etc/dkload
# ./cfgexport -a export.ext.nonfs
The section on the operation of the AFS login program on page 2-40 of
the AFS System Administrator's Guide states,
``If no AFS token was granted [because of an incorrect password], the login
program attempts to log the user into the local file system.''
The
REQUIREMENTS/RESTRICTIONS section of the inetd, rcp, and
rsh command descriptions on pages 10-5, 10-12, and 10-16, respectively,
of the AFS Command Reference Manual contain
the following information in a bulleted list:
Previous versions of the AFS documentation contained some incorrect
or misleading information. Unless otherwise noted, these documentation
errors have not been corrected in the AFS 3.3 documentation:
On page 17-15 of the AFS System Administrator's Guide and
page 4-38 of the AFS Command Reference Manual
, the following recommendation is given for limiting consecutive failed
login attempts: ``For most cells, Transarc
Corporation recommends setting the limit on authentication to 5 attempts
and the lockout time to 25 minutes.''
This is not a good recommendation. Instead, you should follow the recommendation
listed in the AFS 3.3 Release Notes:
Recommendation: Transarc Corporation recommends a limit of 9
consecutive failed authentication attempts and a 25-minute lockout time.
Although some cells may want to use other limits, these should suffice
for most cells.
3.4. Procedures for Upgrading from AFS
3.3 to AFS 3.4a
Back to Table of Contents
Note: If you have already upgraded some machines in
your cell to AFS 3.4 Beta or GA, then the instructions in this section
are not appropriate for you. See section 3.5
instead.
The following subsections contain instructions for upgrading to AFS 3.4a
from AFS 3.3 (as previously mentioned, this also refers to AFS 3.3a). These
upgrade instructions require you to have ``root'' permissions. If you have
not done so already, you should read the instructions in Sections 3.1
through 3.3, which contain information
that you should understand prior to performing the upgrade.
You may upgrade the Cache Manager on AFS client-only (non-file server)
machines at any time during the cell upgrade, even before upgrading database
server processes, if you wish. See section 3.4.3.
Note: As a reminder, you cannot run the database server
processes on a multihomed machine. If you plan to make a current database
server machine multihomed, then you must first use the bos stop
command to stop the database server processes, changing their status in
the BosConfig file to NotRun. Then issue the bos delete
command on each machine to remove the database server processes completely
from the BosConfig file. Remember also to change the CellServDB
file on all server and client machines in your cell, and to register the
changes with Transarc. If you are running a system control machine, the
easiest way to alter CellServDB on all server machines is to issue
the bos delhost command against the system control machine, which
will propagate the changes.
It is recommended that you install the entire AFS 3.4a binary distribution
into a volume for each system type in your AFS filespace (recommended location:
/afs/cellname/sysname/usr/afsws), copying
it either from the AFS Binary Distribution tape or by network from the
Transarc AFS product directory, /afs/transarc.com/product/afs/3.4a/sysname.
Then run the bos install command against each binary distribution
machine to install the binaries to the local disk location of the existing
binaries (normally, /usr/afs/bin). When you restart the processes
using the bos restart command, the BOS Server moves the AFS 3.3
binary to a .bak file after renaming any current .bak file
to a .old file.
3.4.1. Upgrading the Database Server Processes
Change directories to your local cell's binary distribution directory
or Transarc's product tree. The following example shows the recommended
name for your local distribution location:
3.4.2. Upgrading the non-Database Server
Processes
After you have upgraded the vlserver and other database server processes
on the database server machines, you can proceed to upgrade the fs
process suite and other basic server processes (bosserver, runntp,
upclient and upserver) at your convenience. The machine is
unable to serve files for the duration of this upgrade, so you
may wish to perform it at the time and in the manner that will disturb your
users least.
Shut down the fs process suite to prevent it from accidentally
restarting before you have a chance to load the AFS 3.4a kernel extensions.
3.4.3. Upgrading the Cache Manager
on AFS Clients
The following instructions assume an AFS client is to be upgraded to full
AFS 3.4a functionality. Omit these steps if the AFS client will continue
to use AFS 3.3 or AFS 3.3a software.
3.5. Procedures for Upgrading from AFS 3.4 to AFS 3.4a
Back to Table of Contents
3.6. Procedures for Upgrading from AFS 3.2 to
AFS 3.4a
Back to Table of Contents
3.6.1. Upgrading Servers
The following instructions assume that all file servers and database servers
are to be upgraded to full AFS 3.4a server functionality. Consider the
following before upgrading your cell's servers:
On each server machine that runs the fs process suite,
issue the bos shutdown command to shut it down:
Note: You can verify the success of the conversion by
running the vldb_convert command with the -showversion flag.
On each server machine, copy the AFS kernel extensions (libafs.o
or equivalent) to the local disk directory appropriate for dynamic loading
(or kernel building, if you must build a kernel on this system type). If
the machine actually runs client functionality (a Cache Manager), also
copy the afsd binary to the local /usr/vice/etc directory.
The following example command shows the recommended name for your local
binary storage directory:
3.6.2. Upgrading the Cache Manager
on AFS Clients
The following instructions assume an AFS client is to be upgraded to full
AFS 3.4a functionality. Skip these steps if the AFS client will continue
to use AFS 3.2 software (though this is not recommended).
Chapter 13 of these release notes provides
information on using the afsd command to configure the Cache Manager.
3.7. Procedures for Downgrading from AFS 3.4a
to AFS 3.3a
Back to Table of Contents
3.7.1. Downgrading Servers
Perform the following steps to downgrade servers from AFS 3.4a to AFS 3.3a:
Note: You can verify the success of the conversion by
running the vldb_convert command with the -showversion flag.
On each server machine, copy the AFS kernel extensions (libafs.o
or equivalent) to the local disk directory appropriate for dynamic loading
(or kernel building, if you must build a kernel on this system type). If
the machine actually runs client functionality (a Cache Manager), also
copy the afsd binary to the local /usr/vice/etc directory.
The following example command shows the recommended name for your local
binary storage directory:
3.7.2. Downgrading the Cache Manager
on AFS Clients
The following instructions assume an AFS client is to be downgraded to
full AFS 3.3a functionality. Skip this section if the client will continue
to use AFS 3.4a software.
4. Authentication
Back to Table of Contents
These changes are marked with the heading ``AFS 3.4a Changes.''
4.1. Kerberos Support for the kaserver Process
Back to Table of Contents
4.2. Changes to the AFS login Program
Back to Table of Contents
4.3.1. Support for # and ! Entries in the /etc/passwd
File
AFS 3.4a
supports the pound sign (#) character in the local /etc/passwd
file on AIX machines. The # character indicates that the login
program goes directly to the AFS Authentication Database to check authentication
and skips AIX local authentication and AIX secondary authentication. It
is recommended that you include the standard AIX exclamation point (!)
character as an entry in the /etc/passwd file. The ! character
entry indicates that the login program checks for any AIX secondary
authentication.
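For illustration (the user name and field values are hypothetical), an /etc/passwd entry using the # convention might look like the following:
smith:#:1234:10:Sam Smith:/home/smith:/bin/csh
An entry with ! in the password field instead directs the login program to check AIX secondary authentication.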
4.3.2. Support for Alternative Authentication Programs
with AIX 4.1
Transarc does not supply a replacement login program for AIX 4.1 as it
does for AIX 3.2. Instead, Transarc supplies an external alternative authentication
program that is called by the AIX 4.1 login process. To take advantage
of this authentication program provided with AFS 3.4a, you must make the
following configuration changes to AIX 4.1 on the local client machine.
Ensure that you have installed the afs_dynamic_auth program in the /usr/vice/etc
directory on the local client machine.
Note: You must use DCE for the registry
variable. AFS is not a valid registry variable in AIX 4.1.
Note: In the /etc/security/user file on the local
client machine, set the registry variable of the user ``root'' to
files, as follows:
root:
        registry = files
The value files designates that user ``root'' can authenticate
using the local password files on the local machine only.
In the /etc/security/user file on the local client machine running
AIX 4.1:
If the machine is an AFS client only, set the SYSTEM variable
for default users to
In the /etc/security/login.cfg file on the local client machine
running AIX 4.1, identify the DCE authentication method with the
following:
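A sketch of the kind of stanza intended, assuming the afs_dynamic_auth program is installed in the directory named above:
DCE:
        program = /usr/vice/etc/afs_dynamic_auth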
Note: If you are using the afs_dynamic_kerbauth
alternative authentication program with AIX 4.1, AFS does not set the KRBTKFILE
environment variable.
4.3.3. Support for Secondary Authentication on AIX 3.2
AFS 3.3 Changes
4.4. Changes to the Digital UNIX (formerly DEC OSF/1)
login Program
Back to Table of Contents
Process authentication group (PAG) support is identical to that on other systems.
If this second AFS authentication attempt fails, you are authenticated
to the local file system only.
4.5. Limitations for SGI Passwords
Back to Table of Contents
4.6. Changes to the Solaris login Program
Back to Table of Contents
Note: AFS 3.4a does not support the SLEEPTIME and IDLEWEEKS variables.
5. The Backup System
Back to Table of Contents
These changes are marked with the heading ``AFS 3.4a Changes.''
5.1. A New Backup Configuration File for the Tape Coordinator
Back to Table of Contents
5.1.1. Creating a User-Defined Configuration File
Automated backup equipment, such as stackers and jukeboxes, can automatically
switch tapes during a backup dump operation. Jukeboxes can also
automatically fetch the proper tapes for a backup restore operation.
To handle the varying requirements of automated backup equipment, the user-defined
configuration file can be set up to call executable routines that you create
to operate your backup equipment. Through this configuration file, you
can select the level of automation you want the Tape Coordinator to use.
The following sections define each of the parameters in detail. Section
5.1.2 contains annotated, sample scripts
that illustrate typical routines to control automated backup equipment.
5.1.1.1. The MOUNT Parameter
The MOUNT
parameter provides a mechanism to load a tape through an automated backup
device. The MOUNT parameter takes a pathname as an argument:
If you do not specify the MOUNT parameter, the Backup System prompts
the operator to mount a tape. You can use the AUTOQUERY parameter
to prevent the Backup System from requesting the first tape (via the MOUNT
script or a prompt).
For a dump operation:
For any restore operations:
Note: If the MOUNT routine does not exit with
a status of 0, the Tape Coordinator will not call the UNMOUNT
routine.
5.1.1.2. UNMOUNT Parameter
The
UNMOUNT parameter specifies a file that contains an administrator-written
executable script or program. In this case, the executable routine removes
a tape from an automated backup device. If you want the Backup System to
support a tape stacker or jukebox device, you can write an executable routine
in this file to perform the tape unmount operations for the device. The
UNMOUNT parameter takes a pathname as an argument:
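A sketch of the entry (the pathname is illustrative):
UNMOUNT /usr/afs/backup/stacker0.1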
If the UNMOUNT parameter is not supplied, the default is
to take no action.
5.1.1.3. ASK Parameter
The ASK parameter determines whether the Backup System asks the
tape operator questions in response to error conditions (other than the
request to mount a tape) or assumes default answers.
The format for this parameter in the CFG_<tape_device>
configuration file is
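A sketch of the entry, assuming a YES/NO value:
ASK NO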
The error cases for the ASK parameter are:
5.1.1.4. AUTOQUERY Parameter
The
AUTOQUERY parameter determines whether to disable the Tape Coordinator's
initial prompt or MOUNT script execution for tape insertion when
executing backup commands involving a tape device. Use the AUTOQUERY
parameter in conjunction with the ASK parameter to disable all prompting
from the Backup System. The format for this parameter in the CFG_<tape_device>
configuration file is
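A sketch of the entry, assuming a YES/NO value:
AUTOQUERY NO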
5.1.1.5. NAME_CHECK Parameter
The
NAME_CHECK parameter determines whether the Backup System should
check tape names. Disabling tape name checking is useful for recycling
tapes without first relabeling them. The format for this parameter in the
CFG_<tape_device> configuration file is
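A sketch of the entry, assuming a YES/NO value:
NAME_CHECK NO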
5.1.1.6. BUFFERSIZE Parameter
The
BUFFERSIZE parameter allows the allocation of memory to increase
the performance of dump and restore operations with the Backup System.
The format for this parameter in the CFG_<tape_device>
configuration file is
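A sketch of the entry (the size value is illustrative):
BUFFERSIZE 16k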
5.1.1.7. FILE Parameter
The
FILE parameter specifies whether backup dump and backup
restore operations are written to or read from a tape device or a file.
The format for this parameter in the CFG_<tape_device>
configuration file is
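A sketch of the entry, assuming a YES/NO value:
FILE YES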
Note the following requirements if you specify the YES argument
(dump to a file or restore from a file):
5.1.2. Example of User-Defined Configuration
Files
The following example configuration files detail how you might structure
configuration files for stackers, jukeboxes, and file dumps. Consider these
files as examples and not as recommendations.
5.1.2.1. Example CFG_<tape_device> File for
Stackers
The following example /usr/afs/backup/tapeconfig file contains configuration
information for the tape stacker stacker0.1.
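A sketch of what such a file might contain, using the parameters described in Section 5.1.1 (values illustrative):
MOUNT /usr/afs/backup/stacker0.1
UNMOUNT /usr/afs/backup/stacker0.1
AUTOQUERY NO
ASK NO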
The previous example specifies the /usr/afs/backup/stacker0.1 file,
which contains an executable routine that initializes the stacker and loads
a tape. An example of such an executable routine follows:
This routine makes use of only two of the parameters passed to it by the
Backup System: tries and operation. It is a good practice
to watch the number of "tries" and exit if the number exceeds 1
(which implies that the stacker is out of tapes). Note that this routine
calls the stCmd_NextTape function for backup dump or backup
savedb operations; however, your file should call whatever routine
is required to load the next tape for your stacker. Also note that the
routine sets the appropriate exit code to prompt an operator to load a
tape if either the stacker cannot load a tape or a backup restore
operation is in progress.
#! /bin/csh -f
set devicefile = $1
set operation = $2
set tries = $3
set tapename = $4
set tapeid = $5
set exit_continue = 0
set exit_abort = 1
set exit_interactive = 2
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
if (${tries} > 1) then
    echo "Too many tries"
    exit ${exit_interactive}
endif
if (${operation} == "unmount") then
    echo "UnMount: Will leave tape in drive"
    exit ${exit_continue}
endif
if ((${operation} == "dump")       || \
    (${operation} == "appenddump") || \
    (${operation} == "savedb")) then
    stCmd_NextTape ${devicefile}
    if (${status} != 0) exit ${exit_interactive}
    echo "Will continue"
    exit ${exit_continue}
endif
if ((${operation} == "labeltape") || \
    (${operation} == "readlabel")) then
    echo "Will continue"
    exit ${exit_continue}
endif
echo "Prompt for tape"
exit ${exit_interactive}
5.1.2.2. Example CFG_<tape_device> File for
Dump to File
The following example /usr/afs/backup/tapeconfig file contains configuration
information for dumping to a file:
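A sketch of what such a file might contain, using the parameters described in Section 5.1.1 (values illustrative):
MOUNT /usr/afs/backup/file
FILE YES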
The following routine, contained in the /usr/afs/backup/file file,
demonstrates how to configure the Backup System to handle dumps to a file:
As with the stacker routine, this routine makes use of two of the parameters
passed to it by the Backup System: tries and operation.
The tries parameter monitors the number of attempts to write to
or read from a file. If the number of attempts exceeds 1, the
Backup System is unable to write to or read from the file specified in
the /usr/afs/backup/tapeconfig file. The routine will then exit
and return an exit code of 2 (which will cause the Backup System
to prompt the operator to load a tape). The operator can use this opportunity
to change the name of the file specified in the /usr/afs/backup/tapeconfig
file.
#! /bin/csh -f
set devicefile = $1
set operation = $2
set tries = $3
set tapename = $4
set tapeid = $5
set exit_continue = 0
set exit_abort = 1
set exit_interactive = 2
# - - - - - - - - - - - - - - - - - - - - - - - - - - - -
if (${tries} > 1) then
    echo "Too many tries"
    exit ${exit_interactive}
endif
if (${operation} == "labeltape") then
    echo "Won't label a tape/file"
    exit ${exit_abort}
endif
if ((${operation} == "dump")       || \
    (${operation} == "appenddump") || \
    (${operation} == "restore")    || \
    (${operation} == "savedb")     || \
    (${operation} == "restoredb")) then
    /bin/rm -f ${devicefile}
    /bin/ln -s /hsm/${tapename}_${tapeid} ${devicefile}
    if (${status} != 0) exit ${exit_abort}
endif
exit ${exit_continue}
5.2. Improved Error Messages and Error Handling
Back to Table of Contents
5.3. Permanent Tape Names
Back to Table of Contents
Note: The permanent tape name was set with the -name
argument of the backup labeltape command in the AFS 3.4 Beta product.
In AFS 3.4a, the permanent tape name is set with the -pname argument.
The AFS 3.4a backup labeltape command allows users to label tapes
explicitly with a permanent name. If a user supplies a permanent name for
a tape with the backup labeltape command's -pname argument,
the Backup System will use the permanent name (tape name) as the
tape is re-used or recycled. Labeling the tape with the backup
labeltape command's -pname argument also sets the AFS
tape name to NULL. The Backup System uses the permanent name until
the user explicitly changes it with the backup labeltape command's
-pname argument. It is recommended that permanent tape names be
unique since that is the tape name that is recorded in the Backup Database
and that is requested on backup restore operations. The permanent
name is listed in the tape name field in the output resulting
from the backup readlabel command. You should use the -name
argument to set the AFS tape name.
Note: If you use the -pname argument to label
the tape with a permanent name, you can no longer refer to the tape by
its AFS tape name. The Backup System and Backup Database will
only recognize the tape's permanent name on commands after labelling the
tape using the -pname argument of the backup labeltape command.
If the user does not explicitly name a tape with a permanent name, the
Backup System assigns a non-permanent name to the tape as it did in previous
AFS versions. The Backup System produces this non-permanent name by concatenating
the volume set and dump level with a tape sequence index number (for example,
guests.monthly.3). This name is not permanent and changes whenever
the tape label is rewritten by a backup command (for example, when
using the backup dump, backup labeltape, and backup savedb
commands). The
AFS-assigned non-permanent name is listed in the AFS tape name
field in the output resulting from the backup readlabel command.
Note: In AFS 3.3 or earlier, if a user labeled a tape
using the -name argument and used that tape in a tape recycling
scheme, the AFS Backup System enforced name checking by requesting that
the AFS tape name of the volume to be dumped or restored match the name
of the tape in the drive. In AFS 3.4a, if users set the permanent tape
name using the -pname argument, any pre-existing AFS tape name
on the tape label from AFS 3.3 or earlier is set to NULL and the AFS Backup
System cannot verify the tape being used for the dump or restore.
5.4. Tape Coordinator Enhancement
Back to Table of Contents
5.5. Modification to Backup Prompting
Back to Table of Contents
5.6. The backup Command
Back to Table of Contents
5.6.1. The -localauth Flag and -cell Argument (New)
AFS 3.4a Changes
5.6.1.1. The -localauth Flag
The -localauth flag assigns the backup command and butc
process a token that never expires. You need to run a backup command
with the -localauth flag from a file server machine as ``root.''
The -localauth flag instructs the backup command interpreter
running on the local file server machine to construct a service ticket
using the server encryption key with the highest key version number in
the /usr/afs/etc/KeyFile file on the local file server machine.
The backup command presents the ticket to the Volume and/or Volume
Location Server to use in mutual authentication.
5.6.1.2. The -cell Argument
The -cell argument specifies the cell in which the Backup System
and volumes that are affected by the backup command and butc
process reside. The issuer can abbreviate the cell name to the shortest
form that distinguishes it from the other cells listed in the /usr/vice/etc/CellServDB
file on the local client machine. By default, commands are executed in
the local cell, as defined in the /usr/vice/etc/ThisCell file on the local machine.
5.6.2. The backup volsetrestore Command (New)
AFS 3.4a Changes
Each volume entry in the file has the following format: machine partition volume
Note: If multiple volumes are to be restored, the port
offset order must be the same for all volumes. That is, all full dumps
must be done on the same port offset, all first-level incrementals on the
same port offset, etc.
Description:
The -n flag instructs the command to produce a list of the volumes
it would restore without actually restoring any volumes. The command also
provides information about the tapes that contain dumps of the volumes.
You can use the -n flag with the -file argument to determine
the tapes required to restore the indicated volumes. You can also use the
-n flag with the -name argument to construct a list of volumes
that would be restored with a specified volume set; you can then modify
the list of volumes as necessary to produce a file for use with the -file
argument. You could create a file for the backup volsetrestore command
if you want to restore volumes in a volume set to a different location,
restore only a subset of the volume set, or change the order of volume
restores within the volume set.
Do not use wildcards (for example, .*) in an entry. Also, do not
include a newline character in an entry for a volume; each entry must appear
on a single line of the file. Include only a single entry for each volume
in the file. The command uses only the first entry for a given volume;
it ignores all subsequent entries for the volume.
The command displays multiple lines of information for a volume if one
or more incremental dumps were performed since the last full dump of the
volume. The command displays one line of output for the last full dump
and one line of output for each incremental dump. It displays the lines
in the order in which the dumps would need to be restored, beginning with
the full dump. It does not necessarily present all of the lines for a volume
consecutively in the order in which the incremental dumps occurred.
Starting restore
backup: task ID of restore operation: 112
backup: Finished doing restore
The following example restores all volumes that have entries in the file
named /tmp/restore:
Starting restore
backup: task ID of restore operation: 113
backup: Finished doing restore
The /tmp/restore file has the following contents:
fs1.abc.com b user.morin
fs1.abc.com b user.vijay
fs1.abc.com b user.pierette
fs2.abc.com c user.frost
fs2.abc.com c user.wvh
fs2.abc.com c user.pbill
... ...
Privilege Required:
5.6.3. The backup labeltape Command
AFS 3.4a Changes
Note: The permanent tape name was set with the -name
argument of the backup labeltape command in the AFS 3.4 Beta product.
In AFS 3.4a, the permanent tape name is set with the -pname argument.
The backup labeltape command allows you to label tapes explicitly
with a permanent name in AFS 3.4a. The Backup System uses the tape's permanent
name as the tape is re-used or recycled and prompts for the tape by its
permanent name on backup restore operations. A tape keeps its permanent
name until the user explicitly changes it using the backup labeltape
command with the -pname argument. It is recommended that permanent
tape names be unique since that is the tape name that is recorded in the
Backup Database and that is requested on backup restore operations. The
-pname argument has been changed to
Note: When you label a tape, the backup labeltape
command removes all existing data on the tape. The backup labeltape
command also removes all information about the tape's corresponding dump
set (both its initial and appended dumps) from the Backup Database.
In AFS 3.3 or earlier, if a user labeled a tape using the -name
argument and used that tape in a tape recycling scheme, the AFS Backup
System enforced name checking by requesting that the AFS tape name of the
volume to be dumped or restored match the name of the tape in the drive.
In AFS 3.4a, if users set the permanent tape name using the -pname
argument, any pre-existing AFS tape name on the tape label from
AFS 3.3 or earlier is set to NULL and the AFS Backup System cannot verify
the tape being used for the dump or restore.
5.6.4. The backup readlabel Command
AFS 3.4a Changes
If you designated a permanent tape name, the backup readlabel
command displays both the permanent name (in the tape name field)
and the AFS tape name, as shown in the following output:
Tape label
tape name = monthly.guest.dump
AFS tape name = guests.monthly.3
creationTime = Sun Jan 1 00:10:00 1995
cell = abc.com
size = 2097152 Kbytes
dump path = /monthly
dump id = 78893700
useCount = 5
-- End of tape label --
If you did not designate a permanent tape name, the backup readlabel
command displays only the AFS-assigned tape name, as shown in the following output:
Tape label
AFS tape name = guests.monthly.3
creationTime = Wed Feb 1 00:53:20 1995
cell = abc.com
size = 2097152 Kbytes
dump path = /monthly
dump id = 791618000
useCount = 1
-- End of tape label --
5.6.5. The backup scantape Command
AFS 3.4a Changes
If you designated a permanent tape name, the backup scantape
command displays both the permanent name (in the tape name field)
and the AFS tape name, as shown in the following output:
Tape label
tape name = monthly.guest.dump
AFS tape name = guests.monthly.3
creationTime = Fri Nov 11 05:31:32 1994
cell = abc.com
size = 2097152 Kbytes
dump path = /monthly
dump id = 78893700
useCount = 5
-- End of tape label --
- - volume - -
volume name: user.guest10.backup
volume ID: 1937573829
dumpSetName: guests.monthly
dumpID 697065340
level 0
parentID 0
endTime 0
clonedate Fri Feb 7 05:03:23 1995
- - volume - -
volume name: user.guest11.backup
volume ID: 1938519386
dumpSetName: guests.monthly
dumpID 697065340
level 0
parentID 0
endTime 0
clonedate Fri Feb 7 05:05:17 1995
If you did not designate a permanent tape name, the backup scantape
command displays only the AFS-assigned tape name, as shown in the following output:
Tape label
AFS tape name = guests.monthly.3
creationTime = Fri Nov 11 05:31:32 1994
cell = abc.com
size = 2097152 Kbytes
dump path = /monthly
dump id = 697065340
useCount = 44
-- End of tape label --
6. The bos Commands
Back to Table of Contents
6.1. The bos addkey Command
Back to Table of Contents
6.2. The bos status Command
Back to Table of Contents
7. The fs Commands
Back to Table of Contents
These changes are marked with the heading ``AFS 3.4a Changes.''
7.1. Setting and Getting Cache Manager Server Preferences
Back to Table of Contents
The arguments and flag are not mutually exclusive, so multiple preferences
can be specified with one issuance of the command. You can include the
fs setserverprefs command in a machine's initialization file (the
rc.afs file or equivalent) to load server preferences at reboot.
The Cache Manager previously had a rank of 25000 for the file server
fs1.abc.com. The resulting rank for fs1.abc.com is 21000
because the Cache Manager uses the last rank entered with the fs setserverprefs
command (with the -stdin flag).
The fs setserverprefs command contains a -vlservers argument
that allows you to explicitly set VL server preferences and ranks. The
fs getserverprefs command contains a -vlservers flag that
allows the Cache Manager's VL server preferences and ranks to be displayed.
The AFS 3.4a Cache Manager supports preferences for VL servers; the Cache
Manager does not contact the Protection, Authentication, or Backup Database
servers, so no preferences apply to them.
7.1.1. The fs setserverprefs Command
AFS 3.4a Changes
Note: You can specify a unique preference for any of
the multihomed addresses available at a multihomed file server machine
using the fs setserverprefs command.
You cannot specify VL server preferences with the -file argument
or the -stdin flag. You can specify pairs of VL server machines
and their ranks explicitly via the -vlservers argument only.
As it does with ranks specified with the fs setserverprefs command,
the Cache Manager adds a random number in the range from 0 (zero)
to 14 to each initial rank that it determines. For example, when
it assigns an initial rank of 20,000 to a file server machine in
the same subnetwork as the local machine, the Cache Manager records the
actual rank as an integer in the range from 20,000 to 20,014.
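The effect can be sketched in C (illustrative only, not the actual Cache Manager code):
#include <stdlib.h>

/* Final rank = base rank from network proximity plus a random spread
 * of 0-14, so that equally ranked servers share the load. */
int final_rank(int base_rank)
{
    return base_rank + rand() % 15;
}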
Examples:
The following command uses the -servers argument to set the
Cache Manager's preferences for the file server machines named fs3.abc.com
and fs4.abc.com, the latter of which is specified by its IP address,
128.21.18.100. Assume that the file server machines reside on a
different subnetwork in the same network as the local machine, so by default
the Cache Manager would assign each a rank of 30,000 plus an integer
in the range from 0 to 14. To make the Cache Manager prefer
these file server machines over file server machines in other subnetworks
in the local network, you can use the fs setserverprefs command
to assign these machines ranks of 25,000, to which the Cache Manager
adds an integer in the range from 0 to 14.
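A sketch of such a command line (the machine names, address, and ranks are those described above):
% fs setserverprefs -servers fs3.abc.com 25000 128.21.18.100 25000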
Privilege Required:
128.21.16.214   7500
128.21.16.212   7500
121.86.33.41    39000
121.86.33.34    39000
121.86.33.36    41000
121.86.33.37    41000
Note: If you specify different ranks for the same file
server with the -servers argument, the -stdin flag, and the
-file argument, the Cache Manager uses the rank specified with the
-servers argument.
The following command uses the -stdin flag to read preferences from
standard input (stdin). The preferences are piped to the command from a
program, calc_prefs, which was written by the issuer to calculate
preferences based on values significant to the local cell.
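A sketch of such a pipeline (calc_prefs is the issuer's own program):
% calc_prefs | fs setserverprefs -stdin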
7.1.2. The fs getserverprefs Command
AFS 3.4a Changes
fs gp [-f <output to named file>] [-n] [-vl] [-h]
Output:
The following command displays the preferences (the list of file server
machines and their respective ranks) associated with the Cache Manager
on the local machine. The local machine belongs to the AFS cell named abc.com;
the ranks of the file server machines from the abc.com cell are
lower than the ranks of the file server machines from the foreign cell,
def.com. The command shows the IP addresses, not the names, of two
machines for which names cannot be determined.
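A sketch of the command line that produces such a display:
% fs getserverprefs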
Privilege Required:
fs2.abc.com       20007
fs3.abc.com       30002
fs1.abc.com       20011
fs4.abc.com       30010
server1.def.com   40002
121.86.33.34      40000
server6.def.com   40012
121.86.33.37      40005
128.21.16.214     20007
128.21.18.99      30002
128.21.16.212     20011
128.21.18.100     30010
121.86.33.41      40002
121.86.33.34      40000
121.86.33.36      40012
121.86.33.37      40005
fs2.abc.com       10005
fs3.abc.com       30004
fs1.abc.com       45003
7.2. The fs storebehind Command (New)
Back to Table of Contents
Caution: Make certain that you check the disk quota
for the volume to which the specified file belongs and that you do not
exceed the disk quota when using the fs storebehind command; if
you exceed the disk quota when writing the specified file, the portion
of the file that exceeds the disk quota will be lost.
If you exceed the disk quota, you will see the following message:
No space left on device
Examples:
The following command performs a delayed asynchronous write on the
test.data file and returns control to the application program when
500 KB of the file remains to be written to the file server.
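A sketch of such a command line, using the file name from the example:
% fs storebehind -kbytes 500 -files test.data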
Privilege Required:
7.3. The fs checkservers Command
Back to Table of Contents
These servers are still down: fs1.abc.com fs3.abc.com
These servers unavailable due to network or server problems: fs1.abc.com fs3.abc.com
7.4. The fs exportafs Command
Back to Table of Contents
When issuing any of these four arguments, you must select on or
off. If you do not specify a certain argument, the value of that
argument is either of the following:
The modified fs exportafs command syntax follows:
To find out the current status of the fs exportafs command arguments,
execute the following command:
7.5. The -cell Argument on fs Commands
Back to Table of Contents
7.6. The fs copyacl, fs listacl, and fs setacl Commands
Back to Table of Contents
7.7. The fs newcell Command
Back to Table of Contents
8. The fstrace Commands (New)
Back to Table of Contents
In addition, Section 8.9 provides
a step-by-step example of a kernel tracing session.
8.1. About the fstrace Command Suite
Back to Table of Contents
There are two groups of AFS customers and each will have a different purpose
for using the fstrace command suite:
Some of the reasons to start tracing with the fstrace commands are:
The logging provided by the fstrace utility can be a valuable tool
for debugging problems with the AFS Cache Manager. The types of problems
where this logging may be useful are Cache Manager access failures, crashes,
hangs, or Cache Manager data corruption. It is particularly helpful when
the problem is reproducible.
When a problem occurs, set the cm event set to active using
the fstrace setset command. When tracing is enabled on a busy AFS
client, the volume of events being recorded is significant; therefore,
when you are diagnosing problems, restrict AFS activity as much as possible
so that unrelated fstrace logging is minimized.
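A sketch of the command line (the cm event set is named in the text above):
# fstrace setset -set cm -active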
Note: If a particular command is causing problems, it
may be helpful to determine the UNIX process id (pid) of that command.
The output of the fstrace dump command can later be searched for
the given pid to show only those lines associated with the process of the
command that exhibits the problem with AFS.
8.1.1. Requirements for Using the fstrace
Command Suite
Except for the fstrace help and fstrace apropos commands,
which require no privilege, the issuer of the fstrace commands must
be "root" on the local machine. Before issuing an fstrace command,
verify that you have the necessary privilege.
8.1.2. Recommendations for Using the fstrace
Command Suite
Transarc recommends the following with regard to your use of the fstrace
command suite:
Keep the fstrace log open for only a short period of time. Ideally,
you should begin the trace and keep the log open long enough for a reasonable
data sample, make the trace inactive, and dump the trace log. On a busy
AFS client that has tracing enabled, the volume of Cache Manager events
being recorded can be significant. When debugging an AFS problem, you should
restrict AFS activity as much as possible so that unrelated fstrace logging
is minimized. In particular, the output of fstrace should not normally
be written into AFS, because doing so could itself generate additional fstrace output.
Because tracing may have a negative impact on system performance, leave
cm tracing in the dormant state when you are not diagnosing
problems.
8.2. Setting the State of an Event Set
Back to Table of Contents
You must be ``root'' on the local machine to use this command.
8.3. Changing the Size of Trace Logs
Back to Table of Contents
Because log data is stored in a finite, circular buffer, some of the data
can be overwritten before being read. If this happens, the following message
is sent to standard output (stdout) when data is being dumped:
Note: If this message appears in the middle of a dump,
which can happen under a heavy work load, it indicates that not all of
the log data is being written to the log or that some data is being overwritten.
Increasing the size of the log with the fstrace setlog command can
alleviate this problem.
You must be ``root'' on the local machine to use this command.
8.4. Dumping the Contents of Trace
Logs
Back to Table of Contents
At the beginning of the output of each dump is a header specifying the
date and time at which the dump began. The number of logs being dumped
is also displayed if the -follow argument is not specified. The
header appears as follows:
A trace log message is formatted as follows:
p0:Fri Nov 18 10:36:31 1994
Because log data is stored in a finite, circular buffer, some of the data
can be overwritten before being read. If this happens, the following message
appears at the appropriate place in the dump:
Note: If this message appears in the middle of a dump,
which can happen under a heavy work load, it indicates that not all of
the log data is being written to the log or some data is being overwritten.
Increasing the size of the log with the fstrace setlog command can
alleviate this problem.
You must be ``root'' on the local machine to use this command.
# fstrace dump -follow cmfx -file cmfx.dump.file.1 -sleep 10
AFS Trace Dump -
Date: Fri Apr 7 10:54:57 1995
Found 1 logs.
time 32.965783, pid 0: Fri Apr 7 10:45:52 1995
time 32.965783, pid 33657: Close 0x5c39ed8 flags 0x20
time 32.965897, pid 33657: Gn_close vp 0x5c39ed8 flags 0x20 (returns 0x0)
time 35.159854, pid 10891: Breaking callback for 5bd95e4 states 1024 (volume 0)
time 35.407081, pid 10891: Breaking callback for 5c0fadc states 1024 (volume 0)
8.5. Listing Information about
Trace Logs
Back to Table of Contents
When issued without the -long flag, the fstrace lslog command
displays only the name of the log.
cmfx : 60 kbytes (allocated)
8.6. Listing Information about
Event Sets
Back to Table of Contents
You must be ``root'' on the local machine to use this command.
Available sets:
cm active
8.7. Clearing Trace Logs
Back to Table of Contents
You must be ``root'' on the local machine to use this command.
8.8. Getting Help for Command Usage
Back to Table of Contents
8.9. A Sample Kernel Tracing
Session
Back to Table of Contents
If tracing has not been enabled previously or if tracing has been turned
off on the client machine, the following output is displayed:
If the current state of the cm event set is inactive or inactive
(dormant), turn on kernel tracing by issuing the fstrace setset
command with the -active flag.
cm inactive
cm inactive (dormant)
cm active
cmfx : 60 kbytes (allocated)
After dumping the trace log to the file cmfx.dump.file.1, send the
file to your Transarc Product Support Representative for evaluation.
# fstrace dump -follow cmfx -file cmfx.dump.file.1 -sleep 10
AFS Trace Dump -
Date: Fri Apr 7 10:54:57 1995
Found 1 logs.
time 32.965783, pid 0: Fri Apr 7 10:45:52 1995
time 32.965783, pid 33657: Close 0x5c39ed8 flags 0x20
time 32.965897, pid 33657: Gn_close vp 0x5c39ed8 flags 0x20 (returns 0x0)
time 35.159854, pid 10891: Breaking callback for 5bd95e4 states 1024 (volume 0)
time 35.407081, pid 10891: Breaking callback for 5c0fadc states 1024 (volume 0)
...
time 71.440456, pid 33658: Lookup adp 0x5bbdcf0 name g3oCKs fid (7564fb7e:588d240.2ff978a8.6)
time 71.440569, pid 33658: Returning code 2 from 19
time 71.440619, pid 33658: Gn_lookup vp 0x5bbdcf0 name g3oCKs (returns 0x2)
time 71.464989, pid 38267: Gn_open vp 0x5bbd000 flags 0x0 (returns 0x0)
AFS Trace Dump - Completed
9. The kas Commands
Back to Table of Contents
9.1. The kas Command Ticket Lifetime
Back to Table of Contents
9.2. The kas examine Command
Back to Table of Contents
% kas examine smith
Password for smith:
User data for smith (ADMIN)
key (0) cksum is 3414844392, last cpw: Thu Dec 23 16:05:44 1993
password will expire: Fri Jul 22 20:44:36 1994
5 consecutive unsuccessful authentications are permitted.
The lock time for this user is 25.5 minutes.
User is not locked.
entry never expires. Max ticket lifetime 100.00 hours.
last mod on Thu Jul 1 08:22:29 1993 by admin
permit password reuse
10. The package Command
Back to Table of Contents
These changes are marked with the heading ``AFS 3.4a Changes.''
10.1. The package Command Allows Relative Pathnames
Back to Table of Contents
10.2. Changes to minor device number Argument
Back to Table of Contents
10.3. Changes to owner Argument
Back to Table of Contents
10.4. Changes to group Argument
Back to Table of Contents
11. The uss Commands
Back to Table of Contents
11.1. The uss bulk Command
Back to Table of Contents
[-cell <cell name>] [-admin <administrator to authenticate>]
[-dryrun] [-skipauth] [-overwrite] [-pwexpires
<password expires in [0..254] days (0 => never)>] [-pipe] [-help]
[:<FileServer for home volume>][:<FileServer's disk
partition for home volume>]
[:<home directory mount point>][:<uid to assign
the user>][:<var1>][:<var2>]
[:<var3>][:<var4>][:<var5>][:<var6>][:<var7>][:<var8>][:<var9>]
11.2. The uss add Command
Back to Table of Contents
12. The vos Commands
Back to Table of Contents
In AFS 3.4a, the vos command can also perform dump and restore operations
from a named pipe.
12.1. The vos restore Command
Back to Table of Contents
The following abbreviations are valid responses to the prompt:
If the volume exists, but not on the specified partition, the vos restore
command prompts you to either fully restore or to abort the restore operation.
An incremental restore cannot be done.
12.2. The vos backup Command
Back to Table of Contents
12.3. The vos create Command
Back to Table of Contents
12.4. The vos release Command
Back to Table of Contents
12.5. The vos rename Command
Back to Table of Contents
12.6. The vos syncserv Command
Back to Table of Contents
12.7. Restoring from a Named Pipe
Back to Table of Contents
12.8. The vos changeaddr Command
Back to Table of Contents
Note: If you are using AFS 3.4a VL servers, the vos
changeaddr command has no effect on file server addresses. AFS 3.4a
VL servers automatically register the IP addresses of file server machines
upon restarting the fileserver process.
The syntax of the new command follows:
Note: This command does not change IP addresses contained
in any protection groups that you have defined with the pts creategroup
command. Use the pts rename command to change IP addresses in existing
groups. Changing the IP
address of a Ubik database server involves additional changes. Refer to
the AFS System Administrator's Guide for more information.
Examples:
13. Miscellaneous AFS Commands
Back to Table of Contents
These changes are marked with the heading ``AFS 3.4a Changes.''
13.1. The afsd Command
Back to Table of Contents
13.1.1. AFS Compares Cache Size to Partition Size
AFS clients
can panic and create warnings and error messages if the cache size is set
too close to or higher than the size of the underlying partition. The AFS
Command Reference Manual recommends the following cache sizes: for
a disk cache, devote no more than 95% of the partition on which
the cache resides; for a memory cache, first determine the maximum
amount of memory required to run processes and commands, and choose a
cache size that still leaves at least that much memory available.
13.1.2. Correct Interpretation of White Space in the
cacheinfo File
Changes
have been made to the afsd command's interpretation of the /usr/vice/etc/cacheinfo
file, which contains all of the information needed to run the Cache Manager.
Previously, the afsd command could not interpret spaces, carriage
returns, tabs, or blank lines that were inadvertently inserted into the
file. The afsd command failed if it found extra white space while
attempting to read the /usr/vice/etc/cacheinfo file. In AFS 3.4a,
the afsd command ignores extra white space.
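For reference, a sketch of a well-formed cacheinfo file; the three colon-separated fields are the AFS mount point, the cache directory, and the cache size in 1-KB blocks (values illustrative):
/afs:/usr/vice/cache:25000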
13.1.3. The -waitclose Flag Has No Effect on the afsd
Command
In AFS
3.4a, the default for the Cache Manager (afsd) operation is to complete
the transfer of a closed file to the file server before returning
control to the application invoking the close. In AFS 3.3, the default
for the Cache Manager operation was to return control to a closing application
program before the final chunk of a file was completely written
to the file server.
13.2. The butc Command
Back to Table of Contents
13.2.1. New -localauth Flag
The butc
command now includes the -localauth flag, which assigns the issuer
a token that never expires and displays an expiration date of NEVER. It
is useful when the issuer wants to run a backup process in the background.
13.2.2. Change to the -debuglevel Argument
In AFS
3.4a, the -debuglevel argument of the butc command, which
determines the amount of information the Tape Coordinator displays in the
Tape Coordinator window, has three legal values: 0, 1, and
2. The following describes the information supplied by the three
legal values:
In AFS 3.3, the -debuglevel argument had two legal values: 0
and 1.
13.3. The fileserver Command
Back to Table of Contents
13.3.1. Change in Default Value of Implicit Rights
In AFS 3.4a, the fileserver command gives members of the system:administrators
group implicit ``lookup'' (l) and ``administer'' (a) rights
on all files in an AFS cell; this is analogous to having an entry of ``system:administrators
la'' on the ACL of each file on the affected file server.
13.3.2. New -implicit Argument
A new argument, -implicit, has been added to the fileserver
command. The -implicit argument determines the rights that members
of the system:administrators group have for the files on the file
server on which the command is issued. The default value for this argument
is implicit ``lookup'' (l) and ``administer'' (a) rights
for members of the system:administrators group on the files on the
affected file server. The -implicit argument allows you to establish
different implicit rights for the system:administrators group on
a file-server-by-file-server basis.
Note: The -implicit argument always sets a minimum
of ``administer'' (a) rights for the system:administrators
group. If you issue the -implicit argument with the value ``none,''
the implicit rights for the system:administrators group will be
``administer'' (a).
The new syntax of the fileserver command follows:
13.3.3. Change in Default Value of the -m Argument
The -m argument of the fileserver command has been modified;
the -m argument only affects machines running the AIX version (rs_aix32,
rs_aix41) of AFS. The -m argument specifies the percentage
by which the fileserver process allows partitions on the file server
machine to exceed their quotas. Previously, the default value for this
argument was 5. The default value has been increased in AFS 3.4a
to 10.
Note: This change was necessary because AIX does not use the BSD
standard of keeping a disk reserve; the AIX version of the fileserver
process therefore creates a 10% disk reserve automatically.
The fileserver process now alerts you sooner when partitions on
the file server machine are approaching their quotas by returning the following
error message:
13.3.4. Change in Usage Options
Several options are now reflected in the fileserver command's usage
output. The options are as follows:
These options are included for debugging purposes and should only be used
with the help of an AFS Product Support Representative.
13.4. The klog Command
Back to Table of Contents
13.5. The knfs Command
Back to Table of Contents
13.6. The pagsh Command
Back to Table of Contents
13.7. The salvager Command
Back to Table of Contents
The new syntax of the salvager command follows:
13.8. The scout Command
Back to Table of Contents
13.9. The upclient Command
Back to Table of Contents
Note: The -crypt flag is not available in the
international version of this command.
13.10. The vldb_convert Command
Back to Table of Contents
13.11. The vlserver Command
Back to Table of Contents
13.11.1. Change in Values for -p Argument
The -p
argument of the vlserver command allows you to set the number of
server lightweight processes to run. The default value for the -p
argument of the vlserver command has been changed from 4 to 9. The
minimum value for this argument is 4, and the maximum value is 16.
13.11.2. New Log File for the vlserver Process
AFS 3.4a
supports a log file for the Volume Location (VL) Server (vlserver
process). When the vlserver process is started, the VL Server creates
an activity log file named /usr/afs/logs/VLLog, if the file does
not already exist. When the vlserver process creates a new VLLog
file, it copies the existing VLLog file to a file named VLLog.old.
You can examine this log file using the bos getlog command. By default,
no logging is done by the vlserver process.
The second level of information contained in the VLLog file can
include messages related to standard lookup operations, such as the following
messages:
The third level of information contained in the VLLog file can include
messages related to infrequent lookup operations, such as ListEntry
index=<id>.
13.12. The volserver Command
Back to Table of Contents
13.12.1. Change to the -log Flag
The -log flag causes the Volume Server to record, in the /usr/afs/logs/VolserLog file, the names of all users who successfully initiate a vos command. In AFS 3.4a, the VolserLog file also contains entries for any file removal activity that results from using the vos release command with the -f flag.
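For example, the following sketch forces a complete release of a volume (the volume name usr.smith is only an illustration); any resulting file removal activity is recorded in the VolserLog file:
   vos release usr.smith -f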
13.12.2. New -p Argument
A new argument, -p, has been added to the volserver command. The -p argument sets the number of server lightweight processes (LWPs) to run. The minimum value for this argument is 4, the maximum value is 16, and the default is 9.
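For example, the following sketch of a volserver command line enables logging and runs 12 LWPs (the value 12 is only an illustration):
   /usr/afs/bin/volserver -log -p 12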
13.13. The dlog Command
13.14. The dpass Command
13.15. The up Command
13.16. The xstat Utility
The new syntax of the two programs follows:
13.17. The -help Flag
14. Additional Functional Changes
These changes are marked with the heading ``AFS 3.4a Changes.''
14.1. Multihomed File Servers
Note: You can specify a unique preference for any of
the multihomed addresses available at a file server machine using the fs
setserverprefs command.
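For example, the following sketch assigns preference ranks to two addresses of a multihomed file server machine (the addresses and ranks are only illustrations; a lower rank indicates a stronger preference):
   fs setserverprefs -servers 192.0.2.10 20000 192.0.2.11 40000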
Note: AFS 3.4a does not support multihomed clients or multihomed database servers (the Authentication, Protection, Volume Location, and Backup Database servers).
For more information about starting a multihomed file server, refer to
Section 3.3.
14.2. Support for Unlinking Open Files
14.3. The fileserver Process Checks for FORCESALVAGE Flag
14.4. AFS Supports 8-Bit Characters in Filenames and Directories
14.5. AFS Supports Partitions Larger Than 2 GB
Previous versions of AFS restricted partition sizes to under 2 GB.
Note: You can read from and write to AFS volumes that are larger than 2 GB, but you cannot perform typical AFS volume operations, such as dumping, restoring, moving, or replicating the volume.
14.6. New CellServDB Error Message
14.7. Cache Manager May Show Volume Name
14.8. New Rights for the system:administrators Group
14.9. Improved Database Access During Elections
14.10. Increase in Server Partitions
14.11. Additional AIX 3.2 Support
14.12. Changes to the NFS/AFS Translator
14.13. Improved Networking Support
14.14. Modification to fsync()
14.15. Version Strings in Binaries
14.16. File Locking Operations
15. Bug Fixes
The following fs command bugs have been fixed:
The following package command bug has been fixed:
The following pts command bug has been fixed:
The pts createuser command does not allow you to create a user with an ID of 0. Prior to AFS 3.4a, if you specified an ID of 0 with the -id argument, the pts createuser command created an ID other than 0 and reported the created ID on standard output. For example, if you issued a command of the following form (the user name smith is only an illustration):
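   pts createuser -name smith -id 0
the command created the user smith with a nonzero AFS UID and reported that UID on standard output, rather than failing. In AFS 3.4a, such a command does not create the user.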
The following bugs in the vos command have been fixed:
Previously, the vos release command did not take into account any files deleted from the ReadWrite volume. When the ReadWrite volume was released for replication, these deleted files became zero-length files in the ReadOnly volumes, retaining the disk allocation given to the original files. This resulted in lost disk space; the only way to recover it was to remove the ReadOnly volume and generate it again.
The following bugs have been fixed for the AFS miscellaneous commands:
AFS 3.3 Changes
The dump is overwriting a tape within the most recent dump set. The
Backup System displays a warning on standard output (stdout) and in the
TE_<device_name> and TL_<device_name>
log files:
The Backup System then proceeds with the dump operation.
Instead, the Backup System prompts you for another tape.
The Backup System then proceeds with the dump operation.
The following bugs have been fixed for modified commands:
16. Documentation Corrections
AFS 3.3 Changes
TCP ports 113 and 601 are no longer used by AFS, so this description is obsolete.
© 1990-1996, Transarc Corporation