
How To Install GPFS On AIX?

Published Aug 29, 2025

Installing IBM GPFS, now known as IBM Spectrum Scale, on AIX involves a multi-stage process of preparing the environment, installing the software on each node, creating the GPFS cluster, and finally defining and mounting the file systems.

Prerequisites and environmental preparation

System and network requirements

  • Supported OS: Ensure all nodes are running a supported version of AIX, such as AIX 7.1 or 7.2 for recent versions of Spectrum Scale. Always verify compatibility with the specific GPFS release in the IBM Knowledge Center.
  • Network: Verify network connectivity and name resolution between all cluster nodes. Ping all nodes from each node using both short and fully qualified domain names.
  • Time synchronization: The system clocks of all nodes in the cluster must be synchronized. This can be achieved using Network Time Protocol (NTP).
  • SSH configuration: Set up passwordless SSH authentication for the root user between all nodes. This allows the GPFS administrative commands to execute remotely.
    • On each node, run ssh-keygen -t rsa -N "" to generate an SSH key pair.
    • Collect the public keys (id_rsa.pub) from all nodes and append them to the authorized_keys file for the root user on every node.
    • Create a .hushlogin file in root's home directory (/ on AIX) on every node to suppress login banners, which would otherwise pollute the output of remote GPFS commands.
  • Storage access: All nodes that will act as GPFS servers must have access to the shared storage, typically through a Storage Area Network (SAN).
    • Confirm the shared disks are visible using the lspv command.
    • Tune the I/O parameters for the Fibre Channel adapters and hdisk devices according to your storage vendor's recommendations. Common settings include num_cmd_elems and queue_depth.
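The SSH and disk-tuning steps above can be sketched as follows for a two-node cluster. The node names, device names, and tuning values here are placeholders; use your storage vendor's recommended values.

# ssh-keygen -t rsa -N ""
# cat ~/.ssh/id_rsa.pub | ssh node2 "cat >> ~/.ssh/authorized_keys"
# ssh node2 date
# chdev -l fcs0 -a num_cmd_elems=1024 -P
# chdev -l hdisk2 -a queue_depth=32 -a reserve_policy=no_reserve -P

Repeat the key exchange in the opposite direction from node2; the ssh node2 date check should return the remote date without a password prompt. The -P flag defers the chdev changes to the next reboot, and reserve_policy=no_reserve is typically required so that multiple nodes can open the shared disks concurrently.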

Preparing the software

  1. Download: Obtain the IBM Spectrum Scale installation packages for AIX from the IBM Passport Advantage website.
  2. Extract: Unpack the downloaded files into a temporary directory. If you are installing on multiple nodes from a central server, this directory should be accessible to all nodes, for example, by using NFS.
  3. Accept license: Navigate to the extracted software directory and run the license acceptance script, for example: gpfs_install --accept-license.
  4. Create TOC: Run the inutoc command to create a Table of Contents (TOC) file for the installation images.
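Assuming the images were extracted to /tmp/gpfs_base (a placeholder path), the preparation steps might look like:

# cd /tmp/gpfs_base
# inutoc .
# installp -p -aXY -d . gpfs.base

The -p flag runs installp in preview mode, which is a useful sanity check that the TOC and filesets are in order before the real installation.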

Step-by-step installation

1. Install GPFS packages

Execute the installp command on each node to install the core GPFS filesets.

# cd /path/to/extracted/gpfs_base
# installp -aXY -d . gpfs.base gpfs.msg.en_US


This installs the gpfs.base and English message filesets. If needed, you can install other filesets like gpfs.docs.data for man pages.

2. Install any required fix packs

If you have downloaded fix packs, install them after the base installation.

# cd /path/to/extracted/gpfs_fixes
# installp -aXY -d . all


3. Verify installation

After installing the packages on all nodes, use the lslpp command to check that the filesets were installed correctly.

# lslpp -L "gpfs.*"


GPFS cluster configuration

1. Create the GPFS cluster

This step is performed on the designated primary configuration manager node.

# /usr/lpp/mmfs/bin/mmcrcluster -N node1:manager-quorum,node2:quorum -p node1 -s node2 -r /usr/bin/ssh -R /usr/bin/scp


  • -N: Specifies the nodes in the cluster and their roles.
  • -p: Specifies the primary configuration server.
  • -s: Specifies the secondary configuration server.
  • -r and -R: Specify the remote shell and remote copy commands.
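For larger clusters it is usually easier to list the nodes in a file and pass that to -N. A sketch, with placeholder node names:

# cat /tmp/gpfs.nodes
node1:manager-quorum
node2:quorum
node3
# /usr/lpp/mmfs/bin/mmcrcluster -N /tmp/gpfs.nodes -p node1 -s node2 -r /usr/bin/ssh -R /usr/bin/scp

A node listed with no designation, like node3 here, defaults to a non-quorum client.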

2. Confirm cluster creation

Run mmlscluster to see the details of the newly created cluster.

3. Assign GPFS licenses

Grant a GPFS server license to each node in the cluster.

# mmchlicense server --accept -N all


4. Start GPFS on all nodes

Initiate the GPFS daemon on all cluster nodes.

# mmstartup -a


5. Verify GPFS state

Use mmgetstate to check that the GPFS daemon is active on all nodes.

# mmgetstate -a


File system creation

1. Prepare shared disks for NSDs

GPFS uses Network Shared Disks (NSDs) to represent storage devices shared across the cluster.

  • First, prepare a disk descriptor file, for example diskdesc.txt, to define the NSDs. Each line follows the format DiskName:ServerList::DiskUsage:FailureGroup:DesiredName:StoragePool:

    hdisk2:::dataAndMetadata::nsd1:
    hdisk3:::dataAndMetadata::nsd2:

  • Important: Ensure these hdisks are not already assigned to an AIX volume group; in the output of lspv, the volume group column for each disk should show None. For the simplest configuration, leave the ServerList field blank (all nodes are SAN-attached) and use the default storage pool.
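Recent Spectrum Scale releases prefer the stanza format for NSD definitions, and some require it. An equivalent stanza file (server and pool names are placeholders) might look like:

%nsd: device=/dev/hdisk2
  nsd=nsd1
  servers=node1,node2
  usage=dataAndMetadata
  failureGroup=1
  pool=system

The servers= line can be omitted when every node is SAN-attached and accesses the disk directly.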

2. Create the NSDs

Use the mmcrnsd command to create the NSDs based on the descriptor file.

# mmcrnsd -F /path/to/diskdesc.txt


3. Create the GPFS file system

Use the mmcrfs command to create a file system on the NSDs.

# mmcrfs /gpfs/fs1 fs1 -F /path/to/diskdesc.txt -B 1M


This creates a file system named fs1 mounted at /gpfs/fs1, with a 1 MB block size. Note that mmcrnsd rewrites the disk descriptor file, commenting out the original lines and inserting the generated NSD names, which is why the same file is passed to mmcrfs.

4. Mount the file system

Mount the newly created file system on all nodes.

# mmmount all -a


5. Verify the mounted file system

Confirm that the GPFS file system is mounted and accessible using the df command.

# df -g /gpfs/fs1


Troubleshooting tips

  • Startup failures: If the GPFS daemon fails to start on a node, check /var/adm/ras/mmfs.log.latest for error messages. Incompatible versions, failed configuration migrations, or kernel extension issues are common culprits.
  • Daemon crashes: If the GPFS daemon crashes, run mmshutdown followed by mmfsadm cleanup to clear residual segments. A reboot might be necessary if this fails.
  • Configuration propagation: If cluster configuration changes don't take effect, check for compatibility issues or network problems preventing the changes from being propagated to all nodes.
  • Storage access: If NSDs or file systems are not accessible, verify SAN connectivity and disk settings like reserve_policy.
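When chasing these issues, a few commands (the disk name is a placeholder) usually narrow things down quickly:

# tail -100 /var/adm/ras/mmfs.log.latest
# mmgetstate -aL
# mmlsnsd -X
# lsattr -El hdisk2 -a reserve_policy

mmgetstate -aL adds quorum information to the state listing, and mmlsnsd -X maps each NSD to its local device and shows whether it is reachable from each node.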