Sun Certified System Administrator for Solaris* 2.6 (Part 2) - Cramsession

1. The Solaris 2.X Network Environment

1.1 The Network Capabilities of the Solaris 2.x Network Environment

The Solaris 2.X network environment supports:

1.2 Definitions for the Following Terms: server, client, file server, AutoClient server, and standalone

Server - A system that provides services to other systems in its networked environment. These services include OS services for diskless clients and AutoClients, Solaris CD image and JumpStart* directory access for remote installations, and file sharing via the Network File System (NFS) service.

Client - A system that uses remote services from a server. Clients with limited disk space such as diskless, AutoClient and JavaStation* systems require a server to function.

File Server - Provides access to application and user data via the Network File System (NFS) service.

AutoClient Server - A system that provides AutoClient systems with access to the operating system and applications over the network.

Standalone - A system that can operate autonomously and does not require a server to function. It has enough disk space to contain root (/), /usr, and /export/home file systems and swap space. Thus it has local access to operating system software, executables, virtual memory space and user created files.

1.3 The Basic Hardware Components and File System Configurations for diskless clients, AutoClients, and standalone workstations

All three configurations require a CPU, memory, monitor, mouse and keyboard. A network interface is required for the diskless and AutoClient systems and is optional for the standalone workstation. A disk is required for the AutoClient and standalone configurations. A CD-ROM drive is also required for the standalone.

The diskless client accesses file systems remotely from a server. The disk on the AutoClient is used for local swapping and caching the root (/) and /usr file systems obtained from the server. The disk on the standalone workstation is used for root (/), /usr, and /export/home file systems and swap space. Thus it has local access to operating system software, executables, virtual memory space and user created files.

1.4 The file system configuration for diskless and AutoClient servers, and state the minimum required disk space in megabytes

The diskless client does not have a disk and must remotely access its root (/), /usr, /home and any other needed file systems from a server.

The AutoClient system requires a minimum of a 100 MB local disk for swapping and for caching the root (/) and /usr file systems downloaded from the AutoClient server. All other file systems must be remotely accessed from the server.


2. Installing a Server

2.1 The Three Phases of the Installation Process: System Configuration, System Installation, and the Post-installation Processes

System configuration: There are two methods to preconfigure system information. The first involves the use of the sysidcfg file. The second involves using a name service.

System installation: There are four methods for installing Solaris. These are interactive, Web Start, JumpStart and Custom JumpStart.

Post Installation: Post installation consists of adding the appropriate patches or packages.

2.2 The Minimum Required Disk Space, in Megabytes, for a Server Installation

The SPARC software groups range from a minimum of 281 MB to a maximum of 616 MB. Swap space must be a minimum of 32 MB.

2.3 Verification that a System Meets the Hardware Requirements for Installing the Solaris 2.x Environment on a Server

Verify that the hardware is supported using the Hardware Compatibility List.

2.4 The Terms software packages and software clusters

Software Package: A collection of files and directories required for a software product delivered in a standardized installable/removable format.

Software Cluster: A collection of related software packages that work together to provide a service or capability.

2.5 The Software Configuration Cluster Options

The software groups provide various clusters:

2.6 Use of Solaris Interactive Installation Program to Perform a Local Custom Installation of the Solaris 2.x Software

  1. For a new system, power the system on. For an existing system, enter boot net or boot cdrom at the ok prompt. Wait for the system to boot.
  2. If prompted, answer configuration questions
  3. Follow instructions on screen
  4. If space for diskless clients or AutoClients was allocated, use Solstice Host Manager to complete setup of the clients.
  5. Install co-packaged software using appropriate procedures.

3. Solstice* AdminSuite*

3.1 Capabilities of the Solstice AdminSuite

Solstice AdminSuite is a collection of GUI tools and commands used to perform administrative tasks such as managing users, groups, hosts, system files, printers, disks, file systems, terminals and modems. These tools and commands are faster than using numerous Solaris commands to perform the same tasks, update system files automatically (which eliminates the risk of editing errors), and allow systems to be managed remotely.

The AdminSuite consists of the following GUI tools:

In addition, several commands provide additional functionality such as software usage monitoring and halting/rebooting remote systems.

3.2 Installation of Solstice Software

Installation Process:

  1. Remove previous version of AdminSuite
  2. Become member of sysadmin group
  3. Install AdminSuite and AutoClient using the admin_install command.
  4. Install SunSoft Print Client
  5. Install the Software Usage Monitoring Toolkit
  6. Register printers with the Name Service
  7. Set up Automounter support

3.3 Installation of DiskSuite Software

Local Installation Process:

  1. Insert CD-ROM in CD-ROM drive and mount CD if it is not automounted.
  2. Change directory to the DiskSuite directory
  3. Choose the packages to be installed:
  4. Use admintool or pkgadd to install the chosen package(s).
  5. Include /usr/opt/SUNWmd/sbin in the PATH variable.

3.4 Add a Host Using Host Manager

To add support for a standalone system, OS server or other type of system using the Host Manager:

  1. Start Host Manager from the Solstice Launcher by clicking on the Host Manager icon and then selecting the name service.
  2. On the Host Manager window, select Add from the pull-down Edit menu; the Host Manager:Add window is displayed.
  3. Fill in the system information.
  4. To use remote install capabilities click on Enable Remote Install and select the appropriate options.
  5. Click on the OK button

The command line equivalent for adding a host uses the admhostadd command with the following arguments:

admhostadd -i client_ip_address -e client_ethernet_address specific_settings client

where specific_settings are arguments such as -x type=DATALESS, -x tz=US/Mountain, -x os=sparc.sun4c.Solaris_2.5, etc. and client is the system name of the client.

3.5 View Mount Point and Disk Slice Information Using the Storage Manager

The Storage Manager consists of the Load Context window, the File Manager and the Disk Manager tools. The Load Context window is used to select the host to manage with the File Manager and the disk set to manage with the Disk Manager.

To view mount point information using the Storage Manager:

  1. Start Storage Manager from the Solstice Launcher by clicking on the Storage Manager icon and then selecting the appropriate host name in the Storage Manager:Load Context window.
  2. On the Storage Manager:Load Context window, select the File Manager entry in the Property Browser frame. The main window of the File Manager is displayed, showing information on the mount points for the current context.

To view disk slice information using the Storage Manager:

  1. Start Storage Manager from the Solstice Launcher by clicking on the Storage Manager icon and then selecting the appropriate host name in the Storage Manager:Load Context window.
  2. On the Storage Manager:Load Context window, select the Disk Manager entry in the Property Browser frame. The main window of the Disk Manager is displayed, showing information on the slices for the current context.

3.6 View Time Zone Information Using the Database Manager

The Database Manager is a graphical user interface for managing the various network-related (/etc) system files such as hosts, passwd, services and timezone. To view timezone information (the contents of /etc/timezone) using the Database Manager:

  1. Start Database Manager from the Solstice Launcher by clicking on the Database Manager icon. The Database Manager Main Window is displayed.
  2. Select the name service used on the network.
  3. Select the desired host.
  4. Select the timezone file. The contents of the timezone file are displayed.

3.7 View the Characteristics of a Serial Port Using the Serial Port Manager

To view the characteristics of a serial port using the Serial Port Manager:

  1. Start Serial Port Manager from the Solstice Launcher by clicking on the Serial Port Manager icon. The Serial Port Manager Main Window is displayed.
  2. If the serial port is on another system, select the View pull-down menu and select a host from the list of hosts displayed in the Host window.
  3. Click on the port to be viewed to highlight it.
  4. Select Modify from the Edit pull-down menu to display the Serial Port Manager:Modify window.
  5. Click on the Cancel button when finished.

3.8 Add a User Using User Manager

To add a user account using the User Manager:

  1. Start User Manager from the Solstice Launcher by clicking on the User Manager icon.
  2. On the User Manager Main window, select Add from the pull-down Edit menu; the User Manager:Add window is displayed.
  3. Fill in the user account information. This includes user name, user ID, groups, password information, home directory path and login shell.
  4. Click on the OK button

3.9 Add a User to a Group Using the Group Manager

To add a user to a group using the Group Manager:

  1. Start Group Manager from the Solstice Launcher by clicking on the Group Manager icon. A list of groups is displayed.
  2. Select the group to be modified.
  3. On the Group Manager Main window, select Modify from the pull-down Edit menu; the Group Manager:Modify window is displayed.
  4. Add the user account to the Members List field.
  5. Click on the OK button

3.10 Add a Printer Using the Printer Manager

The Printer Manager can be used to install both locally attached printers and network printers.

To install a local printer using the Print Manager:

  1. Start Print Manager from the Solstice Launcher by clicking on the Print Manager icon and then selecting the name service.
  2. On the Print Manager main window, select Install Printer from the pull-down Edit menu and the Print Manager:Install Printer window is displayed.
  3. Fill in the printer information. This includes printer name, description, printer type, printer port and various printer options.
  4. Click on the OK button

To install a network printer using the Print Manager:

  1. Start Print Manager from the Solstice Launcher by clicking on the Print Manager icon and then selecting the name service.
  2. On the Print Manager main window, select Install Network Printer from the pull-down Edit menu and the Print Manager:Install Network Printer window is displayed.
  3. Fill in the printer information. This includes printer name, description, printer type, protocol and various printer options.
  4. Click on the OK button

4. The Boot PROM

4.1 Creation of a Custom Device Alias Using nvalias

The nvalias command can be used to create a custom device alias. The format of the command is:

nvalias alias device-path

This command is stored in the nvramrc parameter. The contents of the nvramrc parameter are called the script. In addition to storing user-defined commands, this parameter is used by device drivers to save start-up configuration variables, patch device driver code, and store installation-specific device configuration.

If the use-nvramrc? parameter is set to true, then the script is executed during start-up. The script editor nvedit can be used to copy the contents of the script into a temporary buffer where it can be edited. After editing, the nvstore command copies the contents of the temporary buffer back to nvramrc. The nvquit command discards the contents of the temporary buffer.

The alias defined by the nvalias command remains in the script until either the nvunalias or set-defaults command is executed. The set-defaults command can be undone by the nvrecover command (if the script has not been edited).

Any aliases defined by the devalias command are lost during a reboot or system reset. Aliases defined by the nvalias command are not lost.
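
For example, a minimal sketch from the ok prompt (the device path shown is illustrative and varies by system):

     ok nvalias mydisk /sbus@1f,0/espdma@e,8400000/esp@e,8800000/sd@3,0
     ok setenv use-nvramrc? true

After a reset, the system can then be booted with boot mydisk.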

4.2 Removal of a Custom Device Alias Using nvunalias

The nvunalias alias command deletes the specified alias from nvramrc.

4.3 Use of the Solaris* eeprom Command to Modify EEPROM

System configuration parameters are stored in the system non-volatile RAM (NVRAM) otherwise known as EEPROM. These parameters determine the initial configuration and related communication characteristics of the system and retain their value even if the power to the system is shut off.

The value of these parameters can be viewed via the Forth Monitor (OpenBoot) printenv command and modified by using the setenv OpenBoot command.

The eeprom(1M) system command can be used to both view and modify parameter values.

To view a parameter, use the syntax:

     eeprom parameter

where parameter is the name of the NVRAM parameter.

To modify a parameter, use the command:

     eeprom parameter=value

where parameter is the name of the NVRAM parameter and value is the value to assign to the parameter.
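
For example, to display and then change the auto-boot? parameter from a root shell (when using the C shell, the ? must be escaped or quoted):

     # eeprom auto-boot?
     auto-boot?=true
     # eeprom auto-boot?=false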

4.4 Use of the Boot Command Options to Observe System Boot Process

Setting the diag-switch? parameter to true causes power-on initialization messages to be displayed on TTYA.
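
Booting with the verbose option also displays device configuration messages as the system comes up, for example:

     ok boot -v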

4.5 Use of the Keyboard Commands to Abort a Hung System

The Stop A keyboard command, or keyboard chord, is used to abort the system and return to OpenBoot Monitor mode. The following table lists the available SPARC system keyboard chords:

COMMAND DESCRIPTION
Stop Bypass POST
Stop A Abort
Stop D Enter diagnostic mode
Stop F Enter FORTH Monitor on TTYA instead of probing
Stop N Reset contents of NVRAM to default values

5. The Boot Process

5.1 The Functionality Available at Each of the Eight System Run Levels

System Run Levels
Run Level State Functionality
0 Power-down Safe to turn off power to the system.
1 Administrative single-user All available file systems accessible, with user logins allowed. The terminal from which this run level is entered becomes the console.
2 Multiuser For normal operations. Multiple users can access the system and the entire file system. All daemons are running except for NFS server and syslog.
3 Multiuser w/ NFS For normal operations with NFS resource-sharing available.
4 Alternative multiuser This level is currently unavailable.
5 Power-down Shutdown the system and automatically turn off system power (if possible).
6 Reboot Shutdown the system to run level 0, and then reboot to multiuser state (or whatever level is the default in the inittab file).
s or S Single-user Single user mode with all file systems mounted and accessible.

5.2 The Phases of the Boot Process

Boot PROM (SPARC) or BIOS (x86), then boot programs (bootblk and ufsboot), then kernel initialization followed by the init process.

5.3 The Main Roles of the /sbin/init Program

The init program is a general process spawner. Its primary purpose is to create or stop processes based on the run level and the information stored in /etc/inittab. In addition, it sets the default environment variables defined in /etc/default/init.

5.4 The Main Features of the Autoconfiguration Process

The kernel consists of a small generic core with a platform-specific component and a set of modules. The system determines which devices are attached at boot time. Then the kernel configures itself dynamically, loading needed modules into memory. Device drivers are loaded automatically when devices are accessed. This dynamic loading is called autoconfiguration.

Autoconfiguration:

5.5 Adding Startup Files for Additional System Services

  1. Write a shell script that accepts the arguments start and stop and performs the appropriate actions for each (see the sketch after this list). This is the basic structure for all run control (rc) scripts.
  2. As superuser, copy the new rc script to the /etc/init.d directory.
  3. Determine the run level at which the service should start (typically 2, which implies the script needs to be linked into the /etc/rc2.d directory). Determine the two-digit number nn that controls the startup sequence (00 through 99). Link the new script from the /etc/init.d directory using ln /etc/init.d/filename /etc/rc?.d/Snnfilename, where the S implies start.
  4. Determine the run level at which the service should stop (typically 0, which implies the script needs to be linked into the /etc/rc0.d directory). Determine the two-digit number nn that controls the shutdown sequence (00 through 99); this sequence number determines the order in which services are stopped. Link the new script from the /etc/init.d directory using ln /etc/init.d/filename /etc/rc?.d/Knnfilename, where the K implies kill.
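
A minimal sketch of such an rc script, assuming a hypothetical daemon named mysvcd installed under /opt/mysvc:

     #!/sbin/sh
     # Hypothetical run control script: /etc/init.d/mysvc
     case "$1" in
     'start')
             # start the daemon if it is installed
             [ -x /opt/mysvc/bin/mysvcd ] && /opt/mysvc/bin/mysvcd &
             ;;
     'stop')
             # find the daemon's process ID and terminate it
             pid=`/usr/bin/ps -e | /usr/bin/grep mysvcd | /usr/bin/awk '{print $1}'`
             [ -n "$pid" ] && kill $pid
             ;;
     *)
             echo "Usage: $0 { start | stop }"
             exit 1
             ;;
     esac
     exit 0

It would then be copied and linked as described above, for example:

     # cp mysvc /etc/init.d
     # ln /etc/init.d/mysvc /etc/rc2.d/S88mysvc
     # ln /etc/init.d/mysvc /etc/rc0.d/K22mysvc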

5.6 The Three Directories That Contain Kernel Modules

Directory Contains
/platform/`uname -m`/kernel Platform-specific kernel modules
/kernel Common kernel modules needed by all platforms for booting
/usr/kernel Common kernel modules for all platforms within a particular instruction set

The directories that the kernel searches for kernel modules can be changed by use of the moddir variable in the /etc/system file.
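
For example, the default search path can be extended with a moddir entry in /etc/system (the added directory shown is illustrative):

     moddir: /kernel /usr/kernel /other/modules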

5.7 The File Used to Customize the Kernel Configuration Process

The /etc/system file is used to customize the way in which the kernel modules are loaded.
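
A brief sketch of typical /etc/system entries (comment lines begin with an asterisk; the values shown are illustrative):

     * Sample /etc/system entries
     set maxusers=40
     exclude: sys/shmsys
     forceload: drv/sd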


6. Changing Run Levels

6.1 Reasons (at Least Two) for Halting a System

Typical reasons for halting a system include adding or removing hardware (which requires powering the system off) and anticipating a power outage or other extended downtime.

6.2 The Five Commands Used to Change System Run Levels From the Command Line

There are 10 commands that can be used to change the run level:

Command Path Run Level(s) Description
fastboot /usr/ucb 6 Restart the operating system without checking the disks
fasthalt /usr/ucb 0 Stop the processor without checking the disks
halt /usr/sbin 0 Stop the processor
init /sbin 012356S Process control initialization
poweroff /usr/sbin 5 Stop the processor and power off the system (if possible)
reboot /usr/sbin 6 Restart the operating system
shutdown /usr/sbin 012356S Shutdown system
shutdown /usr/ucb 6S Shutdown system at a given time
telinit /etc 012356S Process control initialization
uadmin /sbin 056 Administrative Control

The init command can be used to change to any of the eight run levels by executing the commands identified in /etc/inittab and sending a SIGTERM (and possibly a SIGKILL) to any processes not in /etc/inittab. Three pseudo-states (a, b, and c) can be defined to execute commands without actually changing run levels. For each run level there is an entry in /etc/inittab that runs the appropriate /etc/rc? script, which in turn executes the scripts in the corresponding /etc/rc?.d directory.

The /usr/sbin/shutdown command provides a grace period and warning message capability along with executing the appropriate /etc/rc?.d scripts.
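
For example, to shut down to run level 0 after a 60-second grace period without prompting for confirmation:

     # /usr/sbin/shutdown -i0 -g60 -y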

The /usr/ucb/shutdown command shuts the system down to single user mode at the specified time. At intervals, a warning message is displayed on the terminals of logged in users. The time can be now to indicate immediate shutdown.

The telinit(1M) command is for compatibility and is actually linked to the init(1M) command.

The uadmin(1M) command provides basic administrative functions such as shutting down or rebooting a system.

6.3 Changing Run Levels Using the init and shutdown Commands

The init(1M) and shutdown(1M) commands can be used to change to the various run levels. Both execute the commands in the /etc/rc?.d directories. The shutdown(1M) also provides a grace period and warning message.


7. Device Configuration and Naming

7.1 The Structure that the Kernel Uses to Identify Devices Connected to the System

When the system is booted, the kernel builds a device hierarchy referred to as the device tree to represent the devices attached to the system. This tree is a hierarchy of interconnected buses with the devices attached to the buses as nodes. The root node is the main physical address bus.

Each device node can have:

The full device path name identifies a device in terms of its location in the device tree by identifying a series of node names separated by slashes with the root indicated by a leading slash. Each node name in the full device path name has the form:

driver-name@unit-address:device-arguments

Where driver-name identifies the device, @unit-address is the physical address of the device in the address space of its parent, and :device-arguments defines additional information regarding the device software.

Devices are referenced in three ways: by physical device names, by logical device names, and by instance names.

7.2 The Physical Device Names That Are Used to Identify a System's Devices

The physical device name of a device is the same as the full device path name. The physical device files are located under the /devices directory.

7.3 The Logical Device name Used by System Administrators to Reference Disk Devices and Explain When it is Used in the Solaris 2.x Environment

Logical device names are used to identify disk, tape and CD-ROM devices and provide either raw access (one character at a time) or block access (via a buffer for accessing large blocks of data). The logical names of SCSI devices identify the SCSI controller (bus), target (SCSI target ID), drive (almost always 0) and slice (partition).

For example: /dev/dsk/c1t2d0s3

dsk identifies the device as a block disk (rdsk would indicate a raw disk) addressed as SCSI controller 1, target 2, drive 0 and slice 3.

Logical device names are located under the /dev directory and are linked to the appropriate physical device name file under the /devices directory.

Logical device names are used by the following commands:

7.4 The Type(s) of Disk Devices and Disk Device Interfaces on Your System Using the format utility or dmesg Command

The format(1M) (logical device names) and dmesg(1M) (physical/instance names) commands can be used to display the disk devices.

7.5 The Instance Device name

An instance name is an abbreviated name for a device that is displayed by the dmesg(1M), sysdef(1M) and prtconf(1M) commands. For disks it typically consists of a driver binding name and an instance number, such as sd0.

7.6 Displaying System Configuration Information With prtconf Command

The prtconf(1M) command displays device information using both physical and instance names.

7.7 The Function of the /etc/path_to_inst File

The function of the /etc/path_to_inst file is to map the full device path name of devices to the instance name (driver binding name and instance number) of those devices.

The format of the file is:

     "physical name"    instance number    "driver binding number"


8. Disks, Partitions, and Format

8.1 Use of Solstice DiskSuite to "grow" or Concatenate a File System

To grow a file system, the following steps must be accomplished:

  1. If the slice in which the file system exists is not contained in a DiskSuite metadevice, then create a metadevice and include the slice in the metadevice.
  2. Unmount and then remount the partition.
  3. If the metadevice needs more space, use DiskSuite to expand the metadevice by adding (concatenating) another disk slice of appropriate size.
  4. Use the growfs(1M) command to grow the size of the file system.

The metadevices are located under the /dev/md/rdsk directory.
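
A brief sketch, assuming the file system on metadevice d8 is mounted at /export/home (the metadevice and slice names are illustrative):

     # metainit d8 2 1 c0t1d0s2 1 c0t2d0s2
     # growfs -M /export/home /dev/md/rdsk/d8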

8.2 Defining a Disk Label

A disk label or Volume Table of Contents (VTOC) is a special area of every disk set aside to store information about the disk controller, geometry and slices (partitions). The interactive format(1M) or the commands fmthard(1M) and fdisk(1M) can be used to create a VTOC.

8.3 Defining Disk Partitions/Slices

The slices or partitions of a disk are defined by a slice number, a tag that identifies its intended use and the starting/ending cylinder numbers. These partitions are then formatted and mounted as file systems. The interactive format(1M) or the commands fmthard(1M) and fdisk(1M) can be used to create a VTOC.

8.4 Displaying a Disk's Volume Table of Contents with the prtvtoc Command

The prtvtoc(1M) command expects as an argument either a block disk name (/dev/dsk) or a raw disk name (/dev/rdsk) of an existing slice or partition.
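
For example (device name illustrative):

     # prtvtoc /dev/rdsk/c0t3d0s2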

8.5 Use of the format Utility to Partition a Disk

8.6 Use of the format Utility to Create and Save a Customized Partition Table


9. The Solaris File Systems

9.1 The Necessity of fsck

File system inconsistencies caused by operator errors or defective hardware/software can result in the corruption and loss of data, the inability to perform operations, or even system failure. The fsck command checks the integrity of the internal set of tables used by a file system to keep track of used inodes and available blocks and attempts to correct any discovered inconsistencies.

9.2 Checking and Repairing a File System

The fsck command is used to check and repair file systems. File systems are usually checked automatically as they are mounted during a system boot. Also, fsck can be executed manually whenever file system damage is suspected. The file system should be unmounted while it is being checked. The fsck(1M) command can check cachefs, s5fs and ufs file systems.
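
For example, to check an unmounted file system (device name illustrative):

     # fsck /dev/rdsk/c0t3d0s5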

The superblock includes the following parameters:

The following components of a UFS file system are checked by fsck:

There are three types of data blocks: regular (or plain) data blocks, which contain the data of a file; symbolic-link data blocks, which contain the path name associated with a symbolic link; and directory data blocks, which contain directory entries. The fsck command can only check directory data blocks.

A file system state flag is used to record the condition of a file system:

Value Meaning
FSCLEAN The file system was unmounted cleanly. Will not be checked during boot.
FSSTABLE The file system has not changed since its last checkpoint.
FSACTIVE The file system has been modified and may not be synchronized with the in-memory copy of the superblock.
FSBAD The root file system was mounted when the state was not FSSTABLE or FSCLEAN.

9.3 The Purpose of the Key Disk-Based, RAM-Based, and Network-Based File Systems Used in the Solaris 2.x Operating Environment

Disk-based file systems reside on hard disks, CD-ROMs and diskettes. They provide data storage and access for the system to which they are attached. The data is permanent in that when the system is shut down in an orderly manner, the data is not lost. The types of disk-based file systems are Unix (UFS), High Sierra (HSFS) and DOS-based (PCFS).

RAM-based or virtual file systems are in-memory file systems that provide access to special kernel information and facilities. When the system is shut down, the information is lost. The types of virtual file systems are Cache (CacheFS), Temporary (TMPFS), Loopback (LOFS), Process (PROCFS), Named Pipe (FIFOFS), File Descriptor (FDFS), Name (NAMEFS), Special (SPECFS) and Swap (SWAPFS).

Network-based file systems are typically disk-based file systems that are accessible via a network and provide data storage and access for remote systems. The Network File System (NFS) is the only network-based file system available in the Solaris environment.

9.4 Cylinder Groups, Cylinder Group Blocks, Superblocks, Data Blocks, and File System Fragments for the ufs File System Type

To create a UFS file system on a disk slice or partition, the slice is divided into one or more cylinder groups. A cylinder group is one or more consecutive disk cylinders. A disk cylinder is a set of tracks across a group of platters that are the same radial distance from the center of the platter.

The cylinder group is divided into blocks. There are four types of blocks: the boot block, the superblock, inode blocks, and data blocks. The boot block is used to store information when booting the system. The superblock is used to record information about the file system. Inode blocks store all the information about a file except its name (which is stored in a directory). Data blocks are used to store the data associated with files and directories.

When the file system is created, the size of the data blocks can be specified as either 4096 or 8192 (the default) bytes. To reduce waste and make more efficient use of storage, a data block can be divided into subunits called fragments. The default fragment size is 1024 bytes. Thus a single data block can be used to store data from more than one file. Note that only the last data block of a file can be a fragment. As data is added, the blocks are reallocated.

9.5 Creation of New ufs File Systems

The mkfs(1M), mkfs_ufs or newfs(1M) commands can be used to create a new ufs file system. Although all types of file systems can be mounted and most can be checked using fsck(1M), the ufs file system is the only type that Solaris 2.6 can create.
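
For example (device name illustrative):

     # newfs /dev/rdsk/c0t3d0s7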


10. Mounting File Systems

10.1 Mounting and Unmounting Local File Systems

The mount(1M) and umount(1M) commands are used to mount and unmount file systems. Mounted file systems are listed in the mount table (/etc/mnttab). Also, the mountall and umountall commands can be used to mount or unmount all file systems specified in the default file system table (/etc/vfstab).
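
For example (device and mount point illustrative):

     # mount /dev/dsk/c0t3d0s7 /export/home
     # umount /export/home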

10.2 Mounting a File System of a Specified File System Type

When a file system is mounted using mount(1M), the type of file system is specified by the -F argument. The following types of file systems can be mounted: cachefs, hsfs, nfs, pcfs, s5fs, tmpfs and ufs.
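
For example, to mount a CD-ROM (HSFS) read-only (device name and mount point illustrative):

     # mount -F hsfs -o ro /dev/dsk/c0t6d0s0 /cdrom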

10.3 Mounting a File System that Disables the Default largefiles Option

By default, ufs file systems are mounted to support files that are larger than 2 GB in size. Support for large files can be disabled at mount time by specifying the -o nolargefiles option. However, if a file larger than 2 GB has existed on the file system since the last time fsck(1M) was executed, then the mount will fail.
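
For example (device name and mount point illustrative):

     # mount -o nolargefiles /dev/dsk/c0t3d0s7 /export/home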

10.4 Setting up a System to Mount a Local File System Automatically at Boot Time

A local file system can be set up to mount automatically by adding an entry for the file system in the default file system table, /etc/vfstab.
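
An illustrative /etc/vfstab entry for a local ufs file system (the fields are: device to mount, device to fsck, mount point, file system type, fsck pass, mount at boot, and mount options):

     /dev/dsk/c0t3d0s7  /dev/rdsk/c0t3d0s7  /export/home  ufs  2  yes  -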

10.5 Adding a Swap File and Adding a Swap File System

A swap file is created using the mkfile(1M) command. Then it is activated (made available) by using the swap(1M) command. In addition, an entry for the new swap file should be added to the default file systems table, /etc/vfstab.
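
For example, to create, activate and then verify a 100 MB swap file (path illustrative):

     # mkfile 100m /export/swapfile
     # swap -a /export/swapfile
     # swap -l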


11. Configuring the NFS Environment

11.1 The Functions of an NFS Server and an NFS Client

The NFS service enables computers of different architectures running different operating systems to share files across a network. It allows multiple computers to access the same files, which eliminates redundancy and improves consistency while reducing administration.

The NFS server provides access to its disk resources to other computers over the network. An NFS client is not required to have local disk storage space, since it can access the resources shared by an NFS server on an as-needed basis.

11.2 Conditions That Must be Met to Share Files in the NFS Environment

The system must be at run level 3 (or NFS has to be manually started at run level 2) and functioning properly. The system must be on a network and be accessible by other systems. The resources must be made available using the share(1M) command.

11.3 Making Resources Available and Unavailable for Mounting

Resources are made available and unavailable using the share(1M) and unshare(1M) commands. The -d option of the share(1M) command can be used to specify a description of the share, which can be viewed using the dfshares(1M) command.

Also, if the resources are added to the /etc/dfs/dfstab file, they can be made available and unavailable using the shareall(1M) and unshareall(1M) commands.
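
For example (path and description illustrative):

     # share -F nfs -o ro -d "technical docs" /export/docs
     # unshare /export/docs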

11.4 Providing Read/Write Internet Access to an NFS Resource Through a Web Browser

By default, all file systems shared via NFS are available for WebNFS access.

To modify the manner in which the resource is shared, edit the /etc/dfs/dfstab entry and restart NFS. To allow read/write access, remove the ro (read-only) option if specified and include the rw option. To make URLs relative to the resource as opposed to the server's root directory, include the public option. To load an HTML file instead of listing the directory when an NFS URL is accessed, include the index option.
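
An illustrative dfstab entry combining these options (path and file name hypothetical):

     share -F nfs -o rw,public,index=index.html /export/web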

In addition, if the NFS server is separated from the Internet via a firewall, the firewall must be configured to allow TCP connections on port 2049.

11.5 Editing the /etc/dfs/dfstab File on an NFS Server, to Enable Sharing of Resources

Entries in /etc/dfs/dfstab are shared automatically whenever NFS is started. To enable sharing of a resource, modify /etc/dfs/dfstab with any supported text editor and add a line consisting of a share(1M) command for the resource.
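
An illustrative /etc/dfs/dfstab entry (path and description hypothetical):

     share -F nfs -o rw -d "home directories" /export/home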

If the system is not in run level 3, enter init 3 to start NFS. If NFS is already running, then stop and restart NFS to enable the new share:

     /etc/init.d/nfs.server stop
     /etc/init.d/nfs.server start

11.6 Displaying a Server's Available Resources for Mounting

The dfshares(1M) command lists available resources shared by either the local or a remote system. Also, currently shared resources are listed in the /etc/dfs/sharetab file.

11.7 Mounting a Resource From Another System

Use the mount(1M) command to mount a remote resource:

     mount -F nfs -o options server:resource mount-point

Where options are any desired NFS options, server is the host name or IP address of the remote system, resource is the shared directory name of the remote resource and mount-point is the local directory where the resource should be mounted.

If the -F nfs argument is not used, then the mount(1M) command uses the default network file system type as specified in the /etc/dfs/fstypes file. Since Solaris only supports NFS, this default type is nfs.
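
For example (host, resource and mount point illustrative):

     # mount -F nfs -o ro saturn:/export/docs /docs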

Options include:

11.8 Editing the /etc/vfstab File to Mount Resources on an NFS Client

On the client NFS system, modify the /etc/vfstab file using any supported editor and add the following:

      server:resource - mount-point nfs - yes mode

Where server is the name of the NFS server, resource is the path name of the shared resource, the first - means there is no local raw device to fsck, mount-point is the directory on the client where the resource is to be mounted, nfs is the type of file system, the second - means no fsck pass is specified, yes implies mount at boot, and mode is the access mode such as rw for read/write or ro for read-only.
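
An illustrative entry (host and paths hypothetical):

     saturn:/export/home  -  /home/saturn  nfs  -  yes  rw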

11.9 The Function of These Commands: mountall, umountall, shareall, and unshareall


12. Using Automount

12.1 The Benefits of Using the automount

Automount is a client-side service that automatically mounts the appropriate file system when a client attempts to access a file system that is not mounted. This simplifies keeping track of which resources are needed or mounted at any particular time. Also, it eliminates the need to list remote file systems (NFS mounts) in /etc/vfstab, which allows faster booting and shutdown.

12.2 The Purpose of Each of the Types of Automount Maps

The three types of maps are the master map (auto_master), which lists the other maps to be used; direct maps, which specify mount points as absolute path names; and indirect maps, which specify mount points relative to a parent directory.

12.3 Setting Up automount to Read a Direct Map

After creating the direct map, edit /etc/auto_master to include an entry of the form:

     /-   direct_map   options

Where direct_map is the name of the direct map in the /etc directory and options are any desired mount options.
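
An illustrative direct map entry that mounts a remote directory read-only at /usr/local (server name hypothetical):

     /usr/local   -ro   saturn:/export/local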

12.4 Situations When the automountd Daemon Should be Restarted

The automount(1M) program should be rerun when any changes are made to the auto_master map or when entries are added to or deleted from a direct map. Modifications to existing entries in a direct map, or any changes to an indirect map, do not require rerunning automount(1M).
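
For example, to rerun it after editing the maps:

     # /usr/sbin/automount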


13. Naming Services

13.1 Difference Between Authentication and Authorization

Authentication is a way to restrict access to specific users when accessing a remote system; it can be set up at both the system and network level. For NIS+, every access request is authenticated by checking credentials.

Authorization is a way to restrict operations that the user can perform on the remote system once the user gains access. For NIS+, every component in the namespace specifies the type of operation it will accept and from whom.

13.2 The Three NIS+ Security Levels

Level Description
0 Designed for the initial setup and testing of an NIS+ namespace. The NIS+ server grants full access rights to everyone.
1 Not supported by NIS+.
2 Default. The highest security level. Authenticates all requests via the credential mechanism.

13.3 The Four Types of Access Rights and the Four Principal Categories Used in NIS+ Authorization

Access Right Description
read The principal can view the contents of the object.
modify The principal can change the contents of the object.
destroy The principal can delete the object.
create The principal can create new tables in a directory or new columns or entries in tables.

Class Description
owner The principal is the owner of the object.
group The principal is a member of the object's group.
world The principal has been authenticated but is not the owner or a member of the object's group.
nobody The principal has not been authenticated and gets no respect (The Rodney Dangerfields of NIS+).

13.4 The Name Service Switch Process and Determine Which Configuration is Appropriate for Your Network

The name service switch is a file (/etc/nsswitch.conf) that controls how network information is obtained. Each system has a switch file. Entries in the file determine how a particular type of information is obtained: that is, which naming services (NIS, NIS+, DNS, etc.) can be used to obtain which types of information (host, password, group) and in which order the naming services should be queried.
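
An illustrative nsswitch.conf entry that directs host lookups to NIS first, consulting the local files only if NIS is not running:

     hosts:  nis [NOTFOUND=return] files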

13.5 The Name Service Concept

A name service provides a centralized place to store the information necessary for users and systems to communicate with each other across a network. This includes system (host) names and addresses, user names and passwords, and information about groups and network services.

Without a centralized service, each system would have to maintain its own copy of the information (for example, using the /etc files of the original UNIX naming system). A centralized service eliminates redundancy, improves consistency and reduces administration.

In addition, a naming service also:

13.6 Available Name Services

Solaris 2.6 supports five name services:

  1. original UNIX naming system (/etc files)
  2. Domain Name System (DNS)
  3. Network Information Service (NIS)
  4. Network Information Service Plus (NIS+)
  5. Federated Naming Service (FNS) which conforms to the X/Open Federated Naming Specification (XFN)

13.7 The DNS

The Domain Name System (DNS) is part of the TCP/IP protocol suite and is the name service used by the Internet. It provides host name to IP address resolution as well as IP address to host name resolution. The namespace is divided into domains, which in turn are divided into subdomains (or zones), where one or more DNS servers are responsible for providing resolution services. All the DNS servers work together to provide resolution services across the entire namespace. The DNS server provided with Solaris 2.6 is version 4.9.4 (patch level 1) of the Berkeley Internet Name Domain (BIND) program, which is referred to as the Internet name daemon (in.named). Included with BIND are several DNS utilities such as nslookup, dig and dnsquery.

The host name and IP address information is stored in a set of ASCII files using a predefined syntax known as records.

13.8 The NIS Service

NIS is a distributed name service. It is a mechanism for identifying and locating network objects and resources. It provides a uniform storage and retrieval method for network-wide information in a transport-protocol- and media-independent fashion. The databases (called maps) can be distributed among NIS servers (master and slaves) and be updated from a central location in an automatic and reliable fashion.

13.9 Configuring an NIS Master, Slave and Client

To configure an NIS master:

  1. Check that the name service switch file /etc/nsswitch.conf is properly configured.
  2. Verify that the nodename and domainname are defined.
  3. Copy the source files that will be used to build the NIS maps to the source directory. Also copy the /etc/passwd file to the password source directory.
  4. Check that the copied source files are formatted correctly, remove comments and extraneous information, and verify that the files are up to date.
  5. Make any necessary changes to the makefile.
  6. Execute the ypinit(1M) command.
  7. Execute the make(1M) command.
  8. Copy the NIS template to /etc/nsswitch.conf to enable NIS.
  9. Execute the ypstart(1M) command to start NIS. NIS will start automatically on system reboot.

To configure an NIS Slave:

  1. Edit /etc/hosts to include entries for the NIS servers.
  2. Change to the /var/yp directory and execute /usr/sbin/ypinit -c to initialize the slave server as a client.
  3. If ypbind(1M) is running, stop and restart it.
  4. Initialize the slave server by executing /usr/sbin/ypinit -s master, where master is the name of the NIS master server.

To configure an NIS Client:

  1. Copy the NIS template to /etc/nsswitch.conf to enable NIS.
  2. Execute the command ypinit -c to use NIS as a name service.

13.10 The Function of NIS+

NIS+ is a network name service that can be used to store and retrieve information about workstation addresses, security information, mail information, Ethernet interfaces, and network services in a central location where all workstations have access to it. As with most name services, it provides a centralized service that eliminates redundancy, improves consistency and reduces administration costs.

13.11 The Table Objects of an NIS+ Domain

The following table summarizes the 16 preconfigured NIS+ tables:

Table Description
auto_home Location of all user home directories
auto_master Automounter map information
bootparams Location of the root, swap, and dump partitions of every diskless client in the domain
cred Credentials of the principals
ethers Ethernet addresses of every workstation
group Group name, ID, password and members of every UNIX group in the domain
hosts Network address of every workstation in the domain
mail_aliases Information about the mail aliases of users in the domain
netgroup Network groups and their members
netmasks Networks in the domain and their netmasks
networks Networks in the domains and their canonical names
passwd Password information about every user in the domain
protocols List of IP protocols used in the domain
RPC The RPC program numbers of RPC services available in the domain
services Names of IP services used in the domain and their port numbers
timezone Timezone of every workstation in the domain


14. Adding Network Clients

14.1 The Requirements of a Server to Support Diskless and AutoClient Network Clients

Support of network clients takes several forms:

For diskless clients and AutoClients, a server must provide the ability to remotely access the operating system and application file systems via NFS. Diskless clients access this information remotely, while AutoClients locally cache root (/) and /usr.

14.2 Use of Solstice to Add Support for a Network Client

The Host Manager, a tool provided with Solstice AdminSuite, is used to add support for AutoClients, Diskless clients, JavaStation clients and Dataless clients. Adding support for all types of clients follows the same high-level procedure:

  1. On the OS Server, add system information about the client using Host Manager.
  2. Using Host Manager, select OS release and define appropriate root and swap paths for the client.
  3. Boot the client and set the root password for the client.

14.3 The Necessary Steps to Create an OS Server

Using the Host Manager, a standalone system, dataless client or generic system can be converted to an OS server:

  1. Start Host Manager and select the appropriate host from the main window.
  2. From the Edit menu, select Convert to OS Server. The Host Manager Convert window is displayed with the host information fields filled-in.
  3. Specify the versions of operating systems (OS Services) that should be supported by the OS Server and click on the Add button.
  4. Click on the OK button, then select Save from the File menu.

The command admhostmod can be used instead of Host Manager to convert a system to an OS Server.

14.4 Files in the /etc Directory (at Least Three) That the Host Manager Edits When Adding Support for a Network Client

The following files can be modified by Host Manager: /etc/hosts, /etc/ethers, /etc/bootparams and /etc/timezone.


15. JumpStart - Automatic Installation

15.1 Overall Description of the JumpStart Feature

JumpStart is a method to automatically install Solaris on a new SPARC system by inserting the Solaris CD in the CD-ROM drive and powering on the system. The software installed is determined by a default profile based on the system model and the size of its disk(s). All new SPARC systems have the JumpStart software preinstalled on the boot disk.

Custom JumpStart is a method to automatically install groups of identical systems. To customize JumpStart, a text file called rules must be created that lists one or more profiles. A profile is a text file that defines how Solaris is to be installed on a group of systems. Once these files are completed, they are validated using the check script. In a non-networked environment, the validated files are placed on a diskette in the JumpStart directory and the system is booted. Then the appropriate profile is selected to direct the installation of Solaris. In a networked environment, the JumpStart directory is located on a network server.

15.2 The Main Components for Setting up a Network to use Automatic Installation

Note: Any of the four install methods (not just JumpStart) can be used when installing over the network.

The main components for setting up a network for automatic install are an install server (which holds the Solaris CD image), a boot server on each subnet (if required), and a configuration directory (on a profile server or diskette) containing the custom JumpStart files.

15.3 Setting Up the Network to Automatically Provide the Information Necessary to Configure a System

When a system is installed automatically, it needs to be able to locate network information about itself. In a NIS or NIS+ environment, it will attempt to use the name service to obtain this information. Use the Solstice Host Manager to add the information about the new client. If a name service is not being used, then the network information about the new client must be added to the /etc files of the install server or the boot server if required.

Another method to preconfigure system information is by creating a sysidcfg(4) file and making it available via the diskette drive or an NFS share.

15.4 Creation of an Install Server on the Network

On the system that will be the install server:

  1. Log in as root
  2. Insert the Solaris CD in the CD-ROM drive (and mount it if it does not automatically mount)
  3. If the Solaris CD is to be used for installation, stop here. If the Solaris CD is to be copied to the hard disk of the install server, then change directory to the Solaris_2.6/Tools directory on the Solaris CD and execute the setup_install_server command to copy the contents of the Solaris CD to hard disk.
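
For example (the CD mount point and target directory are illustrative):

     # cd /cdrom/cdrom0/s0/Solaris_2.6/Tools
     # ./setup_install_server /export/install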

15.5 Creation of a Boot Server on a Subnet

On the system that will be the boot server:

  1. Log in as root
  2. Mount the Solaris CD either by inserting it in the CD-ROM drive or mounting it via NFS from another system.
  3. Change directory to the Solaris_2.6/Tools directory on the Solaris CD and execute the setup_install_server -b command to copy the boot software from the Solaris CD to the hard disk.
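
For example (paths illustrative):

     # cd /cdrom/cdrom0/s0/Solaris_2.6/Tools
     # ./setup_install_server -b /export/boot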

15.6 Creation of a Configuration Directory with a Customized Rules File and Class Files

The custom JumpStart files can either be located on a diskette or on a server (referred to as a profile server) where they are shared via NFS. Preparing a custom JumpStart directory and files consists of:

  1. Creating the JumpStart directory on a diskette or on the profile server.
  2. Creating a rule for each group of systems in the rules file using the appropriate keywords and syntax.
  3. Creating a profile for each rule that specifies how a system will be installed, using the appropriate keywords and syntax.
  4. Testing the profile(s).
  5. Validating the rules file.

15.7 Adding Install Clients to Install Servers and Boot Servers

On the system that will be the install server or boot server (if required):

  1. Log in as root
  2. Mount the Solaris CD either by inserting it in the CD-ROM drive or mounting it via NFS from another system.
  3. Change directory to the Solaris_2.6/Tools directory on the Solaris CD and execute the add_install_client command.
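
A brief sketch (the client name, platform group and server paths are hypothetical):

     # cd /export/install/Solaris_2.6/Tools
     # ./add_install_client -s server1:/export/install -c server1:/export/jumpstart pluto sun4m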

15.8 Booting Install Clients

If booting a standalone system, insert the Solaris CD in the CD-ROM drive and, if appropriate, the diskette with the JumpStart or other configuration information in the diskette drive. If booting a networked system set up to install over the network, verify that the system is attached to the network and that the install server and any other required servers are available, along with any configuration information.

Power on the system and if necessary, identify the CD or the network as the source of the install.
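
For example, to start a custom JumpStart installation over the network from the ok prompt:

     ok boot net - install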


*Trademarks of Sun Microsystems, Inc.

Special Thanks to Darrell Ambro for writing this Cramsession.
Make sure to check out his extensive Solaris study guide at: http://ns.netmcr.com/~ambro/intro.htm