Overview      Standard Features      Configuration Information      Software      Ordering Information


HP XC System Software V3.2

HP XC System Software Version 3.2 is an HP-supported software stack for the operation and management of an HP Cluster Platform, as well as customized variations. Hereafter in this document, the term XC System Software refers to XC System Software V3.2, and the combination of HP Cluster Platform with XC System Software is called an XC system. The XC System is based on HP Integrity or HP ProLiant servers as well as HP workstations, connected by a high-speed interconnect (XC System Interconnect) such as an InfiniBand switch network, a Gigabit Ethernet switch network, the Quadrics QsNetII switch network from Quadrics Ltd, or the Myrinet-2000 switch network from Myricom. See later for a precise statement of which server-interconnect combinations are supported. In addition to the XC System Interconnect, each XC System includes an administration network (XC Administration Network), which may be shared with the XC System Interconnect. The XC Administration Network is based on Gigabit Ethernet switches and may include 10/100 Mbit Ethernet switches to support separate console and, optionally, administrative connections to each node.

The XC System Software includes HP-MPI and other HP proprietary and open source software that supply the infrastructure for optimal interconnect performance. It also includes a Linux distribution designed to be compatible with Red Hat® Enterprise Linux® Advanced Server 4.0, and software that provides users and system administrators with single-system attributes for managing and using XC system resources. In addition, XC System Software includes technology from Platform Computing Inc. (Platform) for job management and optimized throughput of parallel applications, and HP Scalable Visualization Array (SVA) Software for image visualization of large data sets.
XC system administrators can optionally install third-party software packages, for which separate licenses and support contracts may be required, that can be substituted for certain components of the XC System Software stack. Examples, discussed further below, include: a standard Red Hat Enterprise Linux WS 4.0 distribution, the MPICH parallelization library, and the Altair® PBS Professional™ batch processing system.

Product Overview

XC System Software creates a performance-optimized application environment that is manageable across a range of system scales. XC System Software is based on a standard Linux distribution combined with a number of open systems packages that leverage the work of the overall open systems community. This open systems base, combined with technology from Hewlett-Packard and its partners, achieves a powerful and complete solution for development, execution, and management of simultaneous parallel and serial applications. Preconfigured systems support up to 1024 nodes; larger systems may require custom configuration. The XC System Software is an integration of premier components from leading suppliers of cluster technologies. In selecting these components, several key criteria have been applied:

  • Capable – together, the components provide the features required for a complete high-performance cluster solution
  • Proven – the components have been widely used and proven in production facilities
  • Integrable – the components can be integrated into a single management and deployment environment with uniform semantics, and without multiple conflicting methods for job management or execution
  • Portable – the components can be combined to produce a whole system that can assure application portability across the platform family, and can adapt to future technologies

Typical usage is described in what follows. The XC System can be integrated into an existing network environment and presented to users, including system administrators, as a single host name and IP address on the external network. When a user logs into the XC system, the login session is directed to one of several nodes that have been enabled to accept login sessions. Upon logging into the system, users may then develop, execute, and manage their applications. For development, the environment consists of standard Linux tools, such as the gcc compiler and the gdb debugger, that are part of the XC System Software. The XC System Software also includes the HP Job Performance Analyzer and the HP-MPI message-passing library. HP-MPI provides binary compatibility across interconnects. The implementation of HP-MPI may invoke low-level interconnect-specific libraries such as the Quadrics QsNetII or the Myricom GM libraries, but entirely isolates the user from the requirement to directly access those libraries or even to be aware of them.
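
As an illustration, a user might build and launch a parallel application as follows. This is a sketch only: the application name is hypothetical, and the exact mpirun options depend on the installed HP-MPI version.

```shell
# Compile with the HP-MPI compiler wrapper; the resulting binary is
# interconnect-independent -- HP-MPI selects the low-level library
# (e.g. QsNetII or Myrinet GM) at run time.
mpicc -o hello_mpi hello_mpi.c

# Launch 8 ranks; on an XC system, mpirun can delegate process
# placement to SLURM (shown here with the -srun option).
mpirun -srun -n 8 ./hello_mpi
```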

SVA Software extends HP XC System Software to provide a comprehensive set of services for deployment of visualization applications, allowing them to be conveniently run in a Linux clustering environment. Key capabilities include:

  • Extends XC System Software for scheduling and running of visualization applications within the XC environment
    • Supports capturing and managing visualization-specific cluster information
    • Manages visualization resources and provides facilities for requesting and allocating resources for a job in a multi-user, multi-session environment
    • Provides display surface configuration tools to allow easy configuration of multi-panel displays
    • Provides launch tools, both generic and tailored to a specific application, that launch applications with appropriate environments and display surface configurations
    • Provides tools that extend serial applications to run in a clustered, multi-display environment
  • Extends XC System Software to include qualified versions of the necessary graphics drivers and open source libraries for the supported configuration, with these properties:
    • Best/appropriate version of each module
    • Built and qualified on an XC System
    • Verified integration with other Open Source modules
    • Verified integration with elements of the XC System
  • Provides a parallel compositing library that simplifies the task of distributed rendering and image compositing
  • Provides online documentation within a graphical interface for ease of access

HP XC System Software and HP StorageWorks Scalable File Share (SFS) provide a seamless solution for computing, storing, and visualizing data. This combination of software on the Cluster Platform system also provides a reference platform for certification of third-party and open source middleware and applications.

All nodes may be used to run user applications, but one or more of the nodes may also be used for other purposes such as login services, job launch, and system administration. Each node runs a full Linux environment. For job execution and policy-centric workload management, XC System Software provides an integrated resource management, scheduling, and job launch mechanism based on the underlying Platform LSF software (LSF) and SLURM technologies. SLURM is responsible for the allocation of resources to jobs. The resources are provided by nodes that are designated as compute resources. Processes of the job are assigned to (executed on) those allocated resources. As part of that process assignment and setup, the original submission environment is replicated within those allocated resources. SLURM also provides the means to arrange an XC System into distinct partitions. Serial or parallel jobs can be scheduled for execution within a given partition, provided that the partition has sufficient resources (for example, memory, or number of CPUs (also known as cores)) to execute that job. The entire system can be designated as a single partition, allowing parallel jobs to run across all of the CPUs of the XC System. Alternatively, the system administrator can divide the system into smaller partitions or can set up the system so that users bypass SLURM altogether. If the system is configured to bypass SLURM, then workload management can be accomplished either by LSF or by third-party workload managers such as PBS Pro, which is licensed and supported by Altair Engineering.

File I/O can be performed to locally mounted storage devices or, alternatively, to remote storage using NFS or, for high performance and high availability, Lustre to access the remote file systems. Intra-XC NFS operations can be configured to use either the XC Administration Network or the XC System Interconnect. By using a separate interconnect for administration and I/O, the XC System administrator is able to isolate user application traffic from administrative operations and monitoring. With this separation, application I/O performance and process communication can be made more predictable while still enabling administrative operations to proceed. Alternatively, the XC System administrator may prefer to route I/O across the administration network so that the I/O has less impact on the synchronicity of parallel applications. The XC System Software provides the flexibility to support such administrative needs. Any XC System node is permitted to export local files to NFS clients connected to the XC Administration Network. If the node is also connected to an external network, local files can be exported to an external NFS client. In order to reduce the impact of I/O on parallel computations, administrators may choose to organize the system so that I/O nodes are partitioned separately from compute nodes.
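
As a sketch of how an I/O node might export file systems, the following /etc/exports fragment uses hypothetical paths, host names, and subnet addresses:

```shell
# /etc/exports on an XC I/O node (illustrative values only).
# Export /scratch read-write to clients on the administration
# network, and /results read-only to one external client:
/scratch    172.20.0.0/16(rw,sync,no_root_squash)
/results    extclient.example.com(ro,sync)
```

After editing the file, `exportfs -ra` re-reads it on a standard Linux NFS server.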

Cluster system administration tasks are performed using standard Linux conventions with a command line interface. This approach is familiar to Linux system administrators, and therefore no prior cluster experience is required in order to manage an XC System. In addition, by following Linux conventions, the administrator is provided with the maximum ease and flexibility for combining XC-specific tasks with more generic administrative tasks.

System administrators install the system software from a node designated as the XC head node (head node) that is used to initiate booting the XC System. The head node is also capable of hosting the daemons that provide system services such as SLURM and LSF. The XC System may also be configured so that system services are distributed amongst several nodes, rather than centralized in a single head node. System services are organized into groups known as roles and the distribution of roles to nodes is done by default (but can be manually changed) with the cluster_config utility. This distribution is based on assumptions about requirements of system availability, scalability, heavy system load, and so on.

The XC System Software installation process is designed to allow the maximum amount of automation with the minimal amount of user intervention. Software installation is done by first loading software onto the XC head node and then distributing the system software to the other nodes using the SystemImager utility. This distribution is performed over the XC Administration network. The XC cluster_config utility is used during the initial configuration, to customize certain system files on each node, based on data gathered during the cluster discovery phase, and stored in the XC Configuration and Management Database.

Subsequent to initial configuration, the administrator can perform and log console commands such as starting, stopping, and booting nodes from the head node using the XC Console Management Facility (CMF) and communicating over the console branch of the XC Administration network. The administrator uses a single user-interface to report statistics and to post events based on data collected by scalable software that collects usage and environmental information (metrics) from the individual nodes. The XC software is delivered with pre-configured rules to monitor all critical systems and to report the information through one centralized interface. The monitoring technology for data-collection is supermon and the user-interface technology is Nagios. Nagios events can be monitored with a browser, as well as a command-line interface. History graphs can be displayed for selected metrics using graphical extensions to Nagios. For more complex system diagnostics, XC System Software also enables local fine-grain data collection, using the collectl monitoring tool. XC System Software also provides a scalable event-logging utility, syslog-ng. This utility aggregates cluster events of interest to a single location. In addition to monitoring and logging global information, it is occasionally necessary for the administrator to perform actions on every node, or on some subset of nodes, of the system. This capability is supplied by the global parallel shell utility, pdsh.
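
For example, pdsh can run a command across a set of nodes in a single step (node names are hypothetical; `dshbak` is pdsh's companion output formatter):

```shell
# Run a command on an explicit list of nodes:
pdsh -w n[1-16] uptime

# Query all nodes and collapse identical responses -- a quick way
# to spot, say, kernel version differences across the cluster:
pdsh -a uname -r | dshbak -c
```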


Standard Features

Technology Components

XC System Software integrates technology that has been widely tested and used by high performance computing sites. The key third-party technology components of XC System Software are based on:

  • Open Source Technology Components (see the table and the later section on Linux)
  • Drivers for the selected XC Interconnects
  • LSF Version 6.2
  • MySQL Version 4.1.20-1
  • HP-MPI Version 2.2.5

SVA Software, which is included in XC System Software, contains the following main pieces of visualization support infrastructure:

  • Visualization System Software (VSS): VSS is a set of HP-developed tools that provide additional resource management, configuration and application launch components specific to the HP SVA hardware option. VSS support includes a number of useful features to help in a variety of important tasks:
    • A Display Configuration Tool lets system administrators define single and multi-panel displays.
    • The Application Launch Components simplify the use of installed launch scripts. These scripts encapsulate a series of commands and functions for running a distributed visualization application. Underlying the scripts is a set of useful functions, environment variables, and a cluster configuration database. Users can customize the scripts and use features of the configuration database to develop site-specific and user-specific launch characteristics. Both single and multi-user systems are supported.
  • Drivers: Linux drivers for the NVIDIA® FX 1500, FX 3500, FX 4500, FX 5500, and G-Sync graphics cards.
  • Open Source Technology Components (see the table below).
  • Remote Access: Using the optional HP Remote Graphics Software, users can launch, manipulate, and view rendered images on hosts external to the SVA.

The following table lists the key open source components that have been integrated to provide the clustering and parallel computing features of the XC System Software, together with the version number used for integration. In addition to these components is the Linux distribution itself, including the packages (RPMs) listed later. All of this technology is covered by HP software support contracts for XC System Software, provided that the only changes made to the software are those authorized by Hewlett-Packard.

Key Open Source Technology Components
SystemImager 3.4.1
Linux Virtual Server 1.2.0
freeglut 2.2.0
Chromium 1.9
SLURM 1.0.15
Nagios 2.0b3
Nagios Plugins 1.4
RRDtool 1.2.15
NRPE 2.0
NSCA 2.4
Supermon 1.5
syslog-ng 1.6.2-5
pdsh (Parallel Distributed Shell) 2.10-4
Modules 3.1.6

Initial Installation, Service Roles and Configuration

XC System Software is delivered on a single DVD. The default configuration includes XC Linux for High Performance Computing (HPC Linux). This default configuration is installed by means of a kickstart file on the DVD, which enables the installation to proceed with minimal user intervention. The DVD is inserted into the DVD drive on the head node; the administrator starts up the Linux installation, and, when prompted, selects a hard drive to be used for installation.

The XC System Software can also be installed on a standard Red Hat Linux distribution. Installation and configuration instructions are provided in the XC installation guide.

During installation, the administrator is prompted for Software RAID1 on the head node, the disk to use for the installation, where to create /hptc_cluster, the disk partition layout if reusing existing partitions, time zone information, and the root password. Mirroring is supported on the head node and striping is allowed on client nodes. The installation then proceeds to completion on the head node, without any further input from the administrator, and without requiring swapping of DVDs or CDs. Additional software packages (RPMs) can be installed from the XC DVD before the cluster configuration is performed.

After installing software on the head node, the configuration phase begins, by default, with automated cluster topology discovery, a process facilitated by the pre-defined wiring schemes for XC Systems. Information gathered during the discovery phase includes the network topology, console addresses, network addresses, and the binding between the XC System Interconnect and XC Administration networks. Configuration continues through a simple user interface. Most nodes are designated as compute nodes and do not provide system services (daemons) to the rest of the cluster. The remaining nodes are designated as service nodes. These may also be configured, like the compute nodes, to execute user applications. The service nodes are each responsible for one or more service roles, where a role is a collection of related services. For example, a resource management role would include LSF and SLURM services (described in more detail later in this document). Certain roles are restricted to certain nodes. For example, a network role would require a node with an external network connection. On large clusters, roles are distributed amongst several service nodes.
In addition, certain services (for example, Nagios) are distributed amongst several nodes. This distribution of roles, and parallelization of services, allows efficient scaling of the XC System and also provides more opportunity for redundancy and availability of the system. The XC System Software determines, based on the hardware configuration, a default distribution of roles to nodes. The administrator can accept the default, or can manually override it. The configuration data is written to the XC Configuration and Management Database (CMDB), for subsequent retrieval by various utilities. The utilities access the CMDB via SQL commands, as provided by MySQL.
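
Because the CMDB is an ordinary MySQL database, site-specific tools can in principle query it directly. The following is purely illustrative: the database, table, and column names are invented, not the actual XC schema.

```shell
# Hypothetical query: list nodes holding the compute role.
mysql cmdb -e "SELECT node_name, role FROM node_roles WHERE role = 'compute';"
```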

In the next step of the installation, a collection of head node system files is assembled for installation on the other nodes. In this context, the head node is known as the golden client, and this assemblage is known as the golden image. The nodes are each booted over the network (DHCP boot) with an autoinstall kernel which sets up local file systems. The golden image is then distributed to all the nodes using the multicast feature (Flamethrower) of SystemImager. In the final phase of configuration, node-specific configuration information is imparted to each node, based in part on data in the CMDB. Each node ultimately boots and runs its copy of the operating system, using system files on a local disk. Node-specific updates are made through utilities that modify the CMDB; the affected nodes pick up the updated data the next time they are rebooted.

Updating Software

One of the most pervasive problems with traditional cluster system management is version skew. Immediately after the initial installation of a cluster, all nodes typically have the same system software (including compilers and other tools), with the exception of explicit node-specific files (for example, log files and configuration files). Then, as software updates become available, it often happens that some nodes acquire the updates and some do not. There are many ways in which that can happen. For example, the administrator might install the software individually on nodes which are not currently being used for critical jobs, and then forget to install it later on the other nodes. Over time, nodes tend to become increasingly dissimilar; this divergence is version skew. XC System Software has been designed so that version skew can be easily avoided. Simple utilities are provided which update the golden image based on system files that have been modified. Some system files, known as exclude files, have node-specific information; these are not used to update the golden image. XC System Software comes preconfigured with a standard collection of exclude files, but there are simple procedures for modifying this collection. After updating the golden image, various XC System Software utilities can be used to synchronize all of the nodes with the new golden image. Only modified files are propagated, and version skew is thus avoided.
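
The underlying mechanics can be sketched with the SystemImager commands themselves (XC supplies its own wrappers, so the exact commands and image names used on a real XC system may differ):

```shell
# On the image server: re-capture the golden client's files into
# the golden image (host and image names are illustrative).
si_getimage --golden-client headnode --image base_image

# On a client node: synchronize local files with the image.  Only
# files that differ are transferred (rsync underneath), and files
# in the exclude list are left untouched.
si_updateclient --server headnode --image base_image
```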

A similar operating philosophy is employed when performing a software upgrade, such as an update of the underlying Linux distribution. One of the potential difficulties encountered when updating the Linux distribution is that various configuration files require new formats with the new version of the operating system. As part of the update process, many configuration files are identified by the XC System Software so that they can be preserved from one version to the next and, if necessary, reformatted. After the operating system has been updated on the golden client, the golden image is updated and distributed to the other nodes as previously described. A system with XC System Software V2.1 (with an intermediate step), V3.0, or V3.1 can be upgraded in this way to V3.2.

Networks, User Accounts and Security

The Linux Virtual Server (LVS) is used to present a single host name to users so that they see the XC System as a single system for login access to prepare and launch high-performance applications. LVS directs each user login session to one of several potential login nodes. LVS can be used on nodes which have external network connections in addition to the connections that are mandatory for the XC. The XC System architecture does not restrict the number or types of nodes that have such external connections, nor does the architecture mandate the use of LVS for such nodes. Nodes without external network connections cannot accept connections from external addresses, but the XC System Software optionally supports Network Address Translation (NAT), which allows all nodes in the system to open network connections to external addresses. NAT can be used, for example, to include the XC System as a member of a NIS (Network Information Service) domain. The system administrator can choose to set up NAT on one node or, for extra availability and for scaling, on multiple nodes.
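
NAT of this kind is standard Linux functionality; a generic sketch (interface name and addresses are illustrative, and XC's own configuration tools automate the equivalent steps) looks like this:

```shell
# Masquerade traffic from internal nodes out the external interface
# and enable forwarding on the NAT node:
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward
```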

User accounts and access privileges can be set up on an XC System using standard Linux administrative procedures. For example, the system administrator can create a /etc/passwd file. User accounts are created on the head node and then become a part of the golden image which is propagated to all of the nodes. This has the advantage that it is simple; the disadvantage is that /etc/passwd must be pushed out to all the nodes with any user account change. NIS and LDAP are also commonly used in various standard configurations. For example, with NIS, either the head node can act as the NIS server or one or more of the service nodes can act as NIS slaves for an external server.

The XC System is set up by default to use secure shell (ssh). Every node is preconfigured with an IP firewall, for security purposes, to block communications on unused network ports. External system access is restricted to a small number of externally exposed ports. A larger number of ports are open between nodes in the system communicating over the Administration network. There are well-defined procedures for opening additional ports as required by new services that may be introduced on the system.
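
Opening an additional port follows ordinary iptables practice; the port number and subnet below are illustrative, and the documented XC procedure should be preferred on a real system:

```shell
# Accept TCP port 4545, but only from the administration network:
iptables -A INPUT -s 172.20.0.0/16 -p tcp --dport 4545 -j ACCEPT
# Persist the rule across reboots on RHEL-style systems:
service iptables save
```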

File System

Each node of the XC System has its own local copy of all the XC System Software files, including the Linux distribution, and also has its own local user files. Every node may also import files from NFS or Lustre file servers. As well as user files, there are shared system files used in global activities such as job launch. XC System Software supports NFS Versions 3 and 4, including both client and server functionality. XC System Software also enables Lustre client services for high-performance and high-availability file I/O. These Lustre client services require the separate installation of Lustre software provided with the HP StorageWorks Scalable File Share (SFS).

In the case of NFS files, these can be shared purely between the nodes of the XC System, or alternatively can be shared between the XC System and external systems. External NFS files can be shared with any node having a direct external network connection. It is also possible to set up NFS to import external files to XC nodes without external network connections, by routing through a node with an external network connection. The administrator can choose to use either the XC Administration network, or the XC System Interconnect, for NFS operations. The XC System Interconnect can potentially offer higher performance, but only at the potential expense of the performance of application communications.

For high-performance and/or high-availability file I/O, the HP StorageWorks Scalable File Share (SFS), based on Lustre file system technology, is recommended. The benefits accrue not only to application I/O, but also to system operations such as job launch (which has been observed to speed up considerably when using SFS, compared to NFS). The Lustre file system uses POSIX-compliant syntax and semantics. The XC System Software includes the kernel modifications required for Lustre client services, which enable the operation of the separately installable SFS client software. The industry-leading external SFS file server product is the HP StorageWorks SFS, which fully supports the XC System. HP SFS V2.2-1 includes XC Lustre client software. By default, the SFS file system is integrated with the XC System so that Lustre I/O is performed over the same high-speed interconnect fabric used by the XC System. For example, if the XC System Interconnect is based on a Quadrics QsNetII switch, then the SFS will serve files over ports on that switch. The file operations are able to proceed at the full bandwidth of the XC System Interconnect, since these operations are implemented directly over the low-level communications libraries. Further optimizations of file I/O can be achieved at the application level using special file system commands – implemented as ioctls – which allow a program to interrogate the attributes of the file system, modify the stripe size and other attributes of new (zero-length) files, and so on. Some of these optimizations are implicit in the HP-MPI I/O library, which implements the MPI-2 file I/O standard.
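
For instance, striping attributes can be set per file with the Lustre `lfs` utility (file names are hypothetical, and option spelling varies between Lustre versions; the Lustre 1.6-style form is shown):

```shell
# Stripe a new (zero-length) file across 4 OSTs with a 1 MB stripe:
lfs setstripe -s 1048576 -c 4 /mnt/lustre/output.dat

# Inspect the resulting layout:
lfs getstripe /mnt/lustre/output.dat
```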

Workload and Resource Management

XC System Software provides both direct job allocation and policy-scheduled interactive and batch queue management based on Platform's software product – LSF Version 6.2. XC system administrators can choose to run standard LSF (with or without LSF's HPC extensions which include Platform's Parallel Application Manager, or PAM), or they can elect to run LSF-HPC with SLURM. Alternatively, an XC system can be configured to run SLURM standalone or with the Maui scheduler, or it can be configured to run the Altair PBS Pro batch processing system.

LSF provides a rich set of scheduling policies for scheduling and prioritizing jobs based on combinations of static and dynamic system attributes, such as the number of processors, job attributes, time limits, and user attributes, including uid, gid, account id, project id, and priority-based quotas for size and time allocation. LSF policies include first-come-first-served, fairshare, hierarchical fairshare, deadline-constrained, exclusive, adaptive dispatch, backfill, preemptive, and job slot reservation. An extremely reliable and richly configurable network-based queuing system provides maximum flexibility and central administration of all work in the cluster while ensuring that all work submitted to the XC System runs to completion. LSF provides comprehensive job management, such as tracking and control. An extensible LSF checkpoint interface supports application-level checkpointing on the XC System, when supported by the application. In addition, LSF tracks job ids and elapsed time on the XC System, and provides an accounting log for system utilization and chargeback analysis. Accounting information includes submit, start, and stop time (waiting time in the queue can be calculated), total user time, total system time, total maximum memory usage (physical and virtual), job id, user id, project id, and number of CPUs used by the job.
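
Typical LSF usage might look like the following (queue name and application are hypothetical):

```shell
# Submit a 16-way batch job with a 60-minute run limit, writing
# output to a file named after the job id:
bsub -n 16 -q normal -W 60 -o out.%J ./my_app

# Review accounting data for a user's completed jobs:
bacct -u alice
```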

Simple Linux Utility for Resource Management (or SLURM) allocates access to nodes within the XC System, launches parallel and serial jobs, manages those jobs, and provides I/O control, signals, limits and so on. More importantly, SLURM allows system administrators to effectively manage hundreds (or even thousands) of compute nodes with a few simple commands. Nodes can be "drained" or marked "down" from further use for maintenance purposes. Nodes can be partitioned into a special partition for use by LSF jobs, and/or one or more other partitions, where users can directly submit jobs without the use of LSF. SLURM supports node 'features' predefined by the system administrator that can be specifically requested by user jobs during submission. These features are useful for identifying compute nodes with special characteristics, such as more memory, faster CPUs, additional local disk space, or any additional hardware installed on those nodes. These SLURM 'features' are recognized and supported by the LSF-HPC integration. Non-LSF partitions would typically be set up for different types of jobs such as development work, or to set aside nodes with different types of characteristics for special-purpose use. In addition, SLURM provides the ability to prevent users from logging into compute nodes that they have not previously allocated. This ensures exclusive access to nodes for parallel applications that could otherwise be affected by any additional undesired activity from other users.

Scalable application resource management is implemented through the combination of a control daemon (slurmctld) responsible for the entire XC System, and a collection of client daemons (slurmd) running on each of the XC nodes. The SLURM control daemon manages the queue of pending jobs, provides node state information, and allocates nodes. The client daemons are responsible for providing node status, job status, implementing remote execution, stream copy (stdin, stdout and stderr), and job control (signals). SLURM provides capabilities for access control, and checks resource limits before scheduling the user's jobs. It is worth noting that SLURM permits a single node to be used for multiple jobs. The SLURM srun command is used, when submitting jobs to the direct-launch partition, to allocate resources, initiate jobs that run interactively, execute jobs submitted by LSF, or attach to running jobs. Options to srun include the processor count, the node count, specific nodes to use or avoid, nodes that can be shared and so on. Groups of nodes can be organized according to user-defined features, and then used by SLURM and LSF for scheduling jobs. For example, if the XC System consists of a combination of HP rx1620 and rx2620 Integrity servers, then all of the rx1620 servers can be assigned the feature 'rx1620' and the others can be assigned the feature 'rx2620'. SLURM also provides an interface (sinfo) for viewing the state of partitions or nodes – for example, which nodes are currently available. If the user wishes to terminate a job or send an arbitrary signal, that is done with the SLURM scancel command, which can specify a user, a program name, a partition, job state, etc. The SLURM squeue command is used to monitor existing SLURM jobs.
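
The commands described above might be used as follows (application and feature names are illustrative):

```shell
# Launch 16 tasks across 4 nodes:
srun -n 16 -N 4 ./my_app

# Restrict a job to nodes carrying an administrator-defined feature:
srun -n 8 -C rx2620 ./my_app

# View partition/node state, list jobs, and cancel job 1234:
sinfo
squeue
scancel 1234
```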

LSF-HPC is integrated with SLURM to combine the use of SLURM's extensive launching options and resource knowledge with LSF's powerful scheduling to create a high-quality, comprehensive workload and resource management system on XC. The LSF scheduler obtains all compute resource information from SLURM, and then uses that information to dispatch appropriately-scheduled jobs to SLURM.  SLURM is run across the XC system, and LSF's scheduling operations and overhead for the XC system are confined to one node. In this integration, LSF regards the entire XC system as a single large "SLURM-based multi-processor machine". This makes it easy to add an XC system into an existing LSF cluster as a new compute resource.


The following LSF Version 6.2 features are not supported in XC System Software when using LSF-HPC with SLURM:

  • Interactive remote execution (LSF Base commands) and load-sharing shell between the XC nodes managed by LSF-HPC. However, the LSF Base commands and load-sharing shell can be used between the XC system and other LSF hosts.
  • The following LSF job limits are not supported: swap, process, and thread
  • Per-node load index support (given the CPU-based scheduling on XC, CPUs are expected to be either fully loaded with a user's application or idle waiting for a job)
  • LSF Analyzer


The XC System Software administrative tools provide mechanisms for configuring (see above), monitoring, controlling, and managing a node or groups of nodes. Nodes are monitored using Nagios, a system and network monitoring application that watches hosts and services specified by the administrator and provides alerts when problems arise and again when they are resolved. Some key features of Nagios include:

  • Monitoring of network services such as SMTP
  • Monitoring of host resources such as processor load
  • Simple plugin design that allows administrators to easily develop their own service checks
  • Parallelized service checks
  • Contact notifications when service or host problems occur and are resolved
  • Ability to define event handlers to be run during service or host events for proactive problem resolution
  • Support for implementing redundant monitoring hosts
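Nagios plugins are simply small programs that print a one-line status and return a conventional exit code (0 = OK, 1 = WARNING, 2 = CRITICAL). The following is a minimal sketch of that plugin model, checking the 1-minute load average against hypothetical thresholds:

```shell
# check_load WARN CRIT: Nagios-style plugin sketch (illustrative thresholds).
# Prints a status line and returns the conventional Nagios exit code:
# 0=OK, 1=WARNING, 2=CRITICAL.
check_load() {
    warn=$1
    crit=$2
    # 1-minute load average, truncated to an integer for the comparison
    load=$(cut -d' ' -f1 /proc/loadavg)
    load_int=${load%%.*}
    if [ "$load_int" -ge "$crit" ]; then
        echo "CRITICAL - load average is $load"; return 2
    elif [ "$load_int" -ge "$warn" ]; then
        echo "WARNING - load average is $load"; return 1
    else
        echo "OK - load average is $load"; return 0
    fi
}

check_load 4 8 || :   # demo call; a real plugin would exit with this code
```

A real deployment would register such a script as a Nagios service check; the function form above is only for illustration.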

A web interface is available for viewing current network status, notification and problem history, log files, and so on.
Nagios is also used to report information collected via the Supermon monitoring infrastructure. The data collected by Supermon includes both system performance metrics, such as CPU and memory utilization, and environmental data, such as fan speed and temperature (note that environmental data is not collected for the HP xw-series workstations). This data is collected on a regular basis via a call to Supermon, which invokes the central monitor-management console (a daemon on the monitor global node). The console communicates with daemons on regional nodes known as management hubs, which in turn communicate with the node-specific daemons. This hierarchy enables the scalability of data collection. Should a management hub fail, the management console takes over aggregation of data from the nodes that were previously serviced by the hub on the failed node. The Nagios plug-in requests the data, causes it to be entered in the CMDB, and then displays the result. The following is a partial list of node-health and processor-health metrics: context switches per second, interrupts per second, I/O in/out of kernel per second, 1-minute, 5-minute, and 15-minute processor load averages, free or shared memory, memory used as buffers or cache, processor usage (idle, nice, system, user), number of processes, condition of the realtime clock battery, swap free space, fan speed (if available), temperature (if available), and processor clock speed.

Graphical extensions to Nagios provide history-displays for the following node metrics: CPU usage (system, user, iowait, nice, idle), load average (1 minute, 5 minute, 15 minute), memory usage (user, buffered, shared), swap space, network traffic and interconnect bandwidth. History-displays are also provided for the following cluster-aggregated metrics: CPU usage, memory usage and Ethernet traffic. For each statistic graphed, the user can view any one of a number of graphs: past 1 hour or 2 hours in 1-minute increments; past 6 hours, 12 hours, or 1 day in 5 minute increments; past 2 days or week in 1 hour increments; past 2 weeks, month, or year in daily increments.

The XC System is typically managed from the head node using the XC administration network. One of the key system-control utilities is Parallel Distributed Shell (pdsh), a multithreaded remote shell client used to execute commands on multiple remote hosts in parallel. The implementation of pdsh relies on ssh as the main remote shell service. Separate facilities are provided on the head node for access to console commands. Subsequent to initial configuration, the administrator can perform and log console commands such as starting, stopping, and booting nodes from the head node, communicating over the console branch of the XC administration network (the XC Console Network). This is implemented with the XC Console Management Facility (CMF), which was adopted from HP AlphaServer SC System Software and depends on the XC Console Network being connected to the remote management port on each of the ProLiant or Integrity servers. Note that there are no console facilities for the HP xw-series workstations.
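For example, an administrator on the head node might use pdsh as in the following sketch (the node names are hypothetical):

```shell
pdsh -w n[1-16] uptime   # run uptime on nodes n1 through n16 in parallel
pdsh -a date             # run date on all nodes known to pdsh
```

Because output from many nodes is interleaved, pdsh prefixes each line with the originating node name.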

Serviceability and Diagnostics

XC System Software includes core components that facilitate the serviceability of the XC System and the XC System Software. For detection of potential problems relating to the XC System Interconnect, there are interconnect-specific diagnostic tools as well as generic tests that stress the network. XC Installation and Operational Verification Procedures (OVPs), which include a specialized SVA OVP, can be used to check whether system installation has been done correctly and whether the system behaves correctly. The OVPs include a performance health test for validating the performance of various XC System components, to ensure that they perform at the expected level. The performance health test can be used either by system administrators to validate the performance of the entire system or by non-privileged users to check for performance anomalies before and after a job run. The performance health test can be used to verify the performance of the following components: CPU usage, memory usage, CPU floating-point performance, memory bandwidth, and interconnect performance (uni-directional, bi-directional, and collectives: all-to-all, allgather, and allreduce). Users can specify cluster nodes on which to run the test, or perform the test within their own allocation. After identifying nodes with high memory usage, users can choose to free up the memory in buffer cache.

During normal operation of the XC System, various facilities are provided to help detect and diagnose problems. Crash dump support is provided as a way to trace the origins of software errors (which, in some cases, could be triggered by hardware failures). The sys_check utility is able to collect data and log files to further facilitate problem analysis. Hardware failures can be anticipated or detected by environmental monitoring using Nagios and Supermon as described elsewhere in this document. Alternatively, for more complex diagnostics, the collectl performance monitoring tool permits fine-grain analysis of local metrics. Collectl is a low-impact performance monitoring tool which is installed on all XC nodes and starts by default on all service nodes. The colplot tool is also installed and allows one to display web-based plots of a wide variety of performance data. For more information on running collectl and colplot, see the associated man pages and FAQs.

During a visualization session, the launch templates rely on system and user data contained in the Configuration Data Files. These files are editable and may become corrupted. The Configuration File Checker Utility will verify that the Configuration Data Files are valid. In some cases it may also be able to correct errors in the files.


XC System Software provides for improved system availability through the failover of key system services. When one of these services fails, it automatically fails over to another server with no disruption of service. An availability infrastructure provides a robust and generic system interface to failover tools such as HP Serviceguard or Heartbeat. XC System Software service failover has been verified with HP Serviceguard (specifically, testing was done using HP Serviceguard version 11.16.04). The XC System Software documentation provides XC-specific information on how to install and configure HP Serviceguard to enable the failover of XC system services. The XC System Software availability infrastructure is easily leveraged to use tools other than HP Serviceguard through the use of tool translators. Only the Serviceguard translator is supported on XC, but an example translator for Heartbeat, and related interface documentation, will be provided by HP Customer Service on request. The following services are failover-enabled at this time: the Nagios master, dbserver, NAT, and LVS.

There are other ways in which the XC system can be configured to increase redundancy and availability. For example, you can configure the XC System with the administration and login services available on different nodes (service nodes). For a complete list of XC services, see the HP XC Installation Guide. The XC System may also include separate I/O nodes with shared access to SAN storage. Should a node running one of these services (administration, login, or I/O) fail, the service can be manually resumed on an alternate node. There are potential single points of failure other than those caused by the loss of a service (daemon). One of the most critical resources is the global file system. For the highest availability, HP recommends that the global file system reside on a highly available file server such as SFS. In addition, there are some key job-management features that help to limit single points of failure. The SLURM and LSF job launch mechanisms, as well as scheduling and resource management, provide application-level failover support between the two administration nodes. For example, the SLURM control daemon can optionally be configured with a backup to allow failover in the event of a node failure. If the administration node running the SLURM and LSF services fails, the failure is detected and the services resume on the alternate administration node (for example, an LSF daemon is started on that node) with no disruption to users or to pending jobs. Running jobs will be restarted.


The XC System Software includes the Modules package (not to be confused with kernel modules), which provides for the dynamic modification of a user's environment via module files. The module command processes the information contained in the module file to alter or set the shell environment variables, such as PATH and MANPATH. Users can add their own module files to further modify their environment. The module files that are automatically installed and that are specific to XC System Software include Intel® compilers, mpi, and modules.
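A typical Modules session might look like the following sketch (the exact module file names depend on what is installed at a given site):

```shell
module avail        # list the module files available on this node
module load mpi     # alter PATH and MANPATH to select HP-MPI
module list         # show the currently loaded modules
module unload mpi   # undo the environment changes
```

Users commonly place module load commands in their shell startup files so that the environment is set up at login.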


XC System Software includes HP-MPI V2.2.5 (HP-MPI), which is fully integrated with SLURM resource management, including job launch as described above. HP-MPI complies with the MPI-1.2 and MPI-2 standards and is a high-performance, robust, high-quality, native implementation. HP-MPI provides low-latency, high-bandwidth point-to-point and collective communication routines. HP-MPI supports 32- and 64-bit applications, single- and multi-threaded, and provides tools to debug and instrument MPI execution. Also offered is MPICH 1.2.5 object compatibility when an application is built shared with HP-MPI. Included in HP-MPI is support for Quadrics Ltd. QsNetII, InfiniBand, and GigaBit Ethernet on Integrity systems, and Myrinet GM2.1, InfiniBand, and GigaBit Ethernet on HP ProLiant, including standard TCP/IP support. See later for a precise statement of which server-interconnect combinations are supported. Applications built with HP-MPI are interconnect-independent; it is not necessary to recompile or relink an application in order for it to operate on a different XC System Interconnect. XC System Software also permits the installation of third-party MPI libraries, which bypass SLURM resource management.
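As a sketch of the SLURM integration (the program name and rank count are hypothetical), an HP-MPI application is typically built with the mpicc compiler wrapper and launched through srun:

```shell
mpicc -o hello hello.c      # compile and link against HP-MPI
mpirun -srun -n 8 ./hello   # launch 8 ranks via SLURM's srun
```

Because the resulting binary is interconnect-independent, the same executable can run over GigaBit Ethernet or InfiniBand without relinking.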

Job Performance Analyzer

HP Job Performance Analyzer provides live, interactive, system-wide monitoring of a user application. It displays resource utilization of all cluster components (CPUs, memory, I/O, interconnect) simultaneously while an application is running. The tool does not require application modifications or repeated runs. It provides an easy-to-use graphical view of the entire cluster, which allows users to easily spot performance bottlenecks (such as hot spots, load imbalance, or excessive load). The hierarchical display allows users to select any of the cluster nodes to obtain more in-depth information over time at one-second intervals. This includes CPU and memory utilization, local disk traffic, interconnect (InfiniBand, Gigabit Ethernet) traffic, SFS traffic, and kernel information (such as interrupts, context switches, and swapping). This information can be used for improving both application performance and cluster efficiency. In addition to live monitoring, the tool provides the ability to save information in a log file that can later be used either to replay the application run (with pause, forward, and backward capabilities) or to plot graphs over time that allow users to analyze history information or compare different application runs. The tool imposes negligible overhead on application performance.

Visualization Resource Management

SVA Software supplies tools to build and maintain a set of Configuration Data Files describing the visualization resources provided by the Scalable Visualization Array components. Both a site configuration file and multiple user configuration files are provided. These files are used by system administrators, users and the job launch system to manage and allocate visualization resources, including visualization nodes, attached display devices, and applications.

The Site Configuration File is created after installing HP SVA Software on the cluster. A post-installation discovery process (the svaconfigure Utility) is run after software installation to identify all the nodes in the cluster, characterize them by role, and define a default set of Display Surfaces. Each Display Surface represents one or more display nodes and their associated display device, and is defined by the display nodes and the display devices that are physically attached to them. In the event that the user adds or removes nodes from the cluster, the system administrator must re-run the cluster discovery process using the svaconfigure Utility.

The initial generation of the Site Configuration File cannot automatically specify all the display surfaces because they are site-specific and may change. Furthermore, it cannot determine how multiple display devices are arranged when used as a single Display Surface. For example, if there are two display devices, they may be arranged left-to-right or top-to-bottom. To complete the Display Surface definitions, the Display Surface Configuration Tool is provided for the system administrator. This tool allows the user to perform a number of tasks, including:

  • List existing display surfaces (any user).
  • Create, change and delete display surfaces (requires root privileges).
  • Replace nodes in a display surface (requires root privileges).

Each user of the system is provided an associated User Configuration File. This file is a convenient method of defining each user's specific preferences and requirements when using the SVA system. The User Configuration File can override some of the default assumptions set up in the Site Configuration File. If a user has access to several different SVA systems, a User Configuration File can be created for each system.

Visualization Application Launch

The HP Scalable Visualization Array Software provides the necessary tools and settings to allow visualization applications to be conveniently deployed on an XC system. HP SVA Software provides support for the following activities necessary to successfully launch an application:

  • Define the desired display configuration
  • Allocate the required resources such as visualization nodes and display surfaces
  • Set up the environment and launch necessary servers and processes
  • Run the visualization application
  • Terminate the application cleanly, stopping servers and releasing resources

Parallel Compositing Library

The HP Scalable Visualization Array Software provides a tuned implementation of the Parallel Compositing Specification V1.1. This library enables applications to use the resources of a cluster such as the HP Scalable Visualization Array to render images in parallel. The library greatly simplifies the task of rendering partial images on different nodes in the cluster and then combining the multiple images to create and display the final image. The version released with SVA V2.1:

  • Optimizes the use of the network and graphics cards sold with HP SVA
  • Conforms to V1.1 of the Parallel Compositing Specification
  • Comes with a set of C & C++ code examples that illustrate using the library
  • Supports depth compositing and alpha blending
  • Can drive multi-tiled displays


The XC System Software distribution package includes XC Linux for High Performance Computing (HPC Linux), an LSB-compliant GNU/Linux distribution (kernel and supporting software packages) using Linux Kernel Version 2.6.

The XC System Software customer has the option of using HPC Linux or, on configurations with either GigaBit Ethernet or InfiniBand interconnects, Red Hat Enterprise Linux Workstation 4.0 (RHEL WS 4.0), or Red Hat Enterprise Linux Advanced Server 4.0 (RHEL AS 4.0) as the base operating system on their system. The XC System Software is designed, implemented, and tested to run on top of any of these distributions.

Linux is the open source operating system kernel, created by Linus Torvalds in the early 1990s and subsequently adopted and maintained by thousands of people worldwide. All official releases of the Linux kernel can be found on the official kernel web site. Hewlett-Packard will periodically update the HPC Linux software with new versions of the Linux kernel or with patches to the Linux kernel (see the section on Software Product Services).

The system utilities and services provided with HPC Linux come from the GNU project and from a variety of other open source projects. The base collection of GNU/Linux utilities provided with HPC Linux is defined to be compatible with the RHEL AS 4.0 collection of packages from Red Hat, Inc. The set of packages included with the XC System Software is neither supported nor endorsed by Red Hat. However, these packages are covered by HP software support contracts for XC System Software, provided that no changes are made to those packages, except as provided or explicitly authorized by Hewlett-Packard. HPC Linux is designed to be compatible with RHEL4 Update 4. In particular, if an application executes correctly on the standard RHEL released by Red Hat, then it should execute correctly with XC. In the event an application does not execute correctly with XC but does with the standard RHEL distributed by Red Hat, HP will treat this in the same manner as any other reported defect of HP software. Customers using RHEL AS 4.0 or RHEL WS 4.0 obtain their software updates directly from Red Hat, except for drivers for the InfiniBand interconnect, if configured; support and software updates for the InfiniBand interconnect are obtained directly from HP. Customers using HPC Linux obtain their base operating system support and software updates directly from HP.

Support and software updates for all XC System Software are provided directly by HP.


Configuration Information

Hardware Requirements
Overview of Processors and Systems Supported

XC System Software must be run on each node of a valid configuration, such as an HP Cluster Platform (CP) 3000, a CP3000BL, a CP4000, a CP4000BL, or a CP6000. (XC System Software also supports Cluster Platform Express systems, which are special cases of the other Cluster Platform systems.) CP3000 and CP3000BL systems are based on servers (in the case of the CP3000BL, the compute nodes are blade servers) or workstations containing Intel® Xeon™ processors with Intel Extended Memory 64 Technology. CP4000 and CP4000BL systems are based on servers (in the case of the CP4000BL, the compute nodes are blade servers) or workstations containing AMD Opteron processors. CP6000 systems are based on servers containing Intel Itanium® processors. Each server (node) must be connected to the others with a valid instance of an XC System Interconnect, a Cluster Platform Administration network, and a Cluster Platform Console Network. XC Systems with 32 nodes or fewer, and which have an XC System Interconnect consisting of GigaBit Ethernet, may be configured to combine the XC System Interconnect and the Cluster Platform Administration network into a single network with a single GigaBit Ethernet switch. The supported XC System Interconnects include GigaBit Ethernet and InfiniBand. In addition, XC System Software supports both Myricom Myrinet-2000 and Quadrics QsNetII on servers that were qualified with those interconnects with XC System Software V3.1. See the section about Interconnects for an explanation of which node combinations support which interconnect architecture. Valid hardware configurations and options must comply with the XC System specification provided in the HP Cluster Platform 3000, HP Cluster Platform 4000, or HP Cluster Platform 6000 Documentation CD-ROMs (part numbers AD162A, AD163A, and AD164A), or with a specification that is electronically equivalent.
Any other hardware configuration or option may be considered invalid unless the software documentation explicitly states otherwise or unless the XC product team explicitly reviews and accepts the configuration as an exception to the Cluster Platform configuration rules. Valid exceptions include select configurations in which Intel® Xeon™ ProLiant servers are combined with AMD Opteron ProLiant servers and also select configurations consisting of blade servers together with regular servers. XC System Software installation instructions can be found in the XC Installation Guide, which is part of the XC System Software Administration Documentation Kit. For more information on customizable configurations, please contact your HP sales representative.

Not all Cluster Platform configurations are supported by the XC System Software or SVA. The following sections describe the Cluster Platform configurations that are supported by XC System Software. For further details, consult the pertinent Cluster Platform documents in the HP Cluster Platform 3000, HP Cluster Platform 4000 or HP Cluster Platform 6000 Documentation CD-ROMs (part numbers AD162A, AD163A, and AD164A). Note that CP3000BL documentation will be found in the HP Cluster Platform 3000 Documentation CD-ROM.

Cluster Platform Administration network
The Cluster Platform Administration network is based on HP ProCurve 2848 and 2824 Gigabit Ethernet switches. Each Cluster Platform system has a single Utility Building Block (UBB) which includes a Cluster Platform Administration network root switch. In addition, each Cluster Platform may have several Compute Building Blocks (CBBs), each of which includes a Cluster Platform Administration network switch, connected to the Cluster Platform Administration network root switch in the UBB.

Scalable Visualization Array
The Cluster Platform 3000 or Cluster Platform 4000 system may be configured with the HP Scalable Visualization Array option, consisting of an array of supported visualization nodes. These are configured in Visualization Building Block (VBB) racks housing up to 8 workstation nodes, each with a varying number of embedded graphics options, or in Compute Building Blocks (CBBs) of ProLiant DL140 G3 or ProLiant DL145 G3 servers, each with either an NVIDIA FX1500 or FX3500 graphics card. XC System Software supports a maximum of 96 visualization nodes, with a maximum of 8 synchronized display channels per job.

Cluster Platform Console Network
The Cluster Platform Console Network is based on HP ProCurve 2650 Ethernet 10/100 network switches. Each Cluster Platform system has a single Utility Building Block (UBB) which includes a Cluster Platform Console Network root switch. In addition, each Cluster Platform may have several Compute Building Blocks (CBBs), each of which includes a Cluster Platform Console Network switch, connected to the Cluster Platform Console Network root switch in the UBB. The Cluster Platform Console Network root switch is connected to the Cluster Platform Administration network root switch. Note that the xw-series workstations do not have consoles and are therefore not connected to the Cluster Platform Console Network.

XC System Interconnect
The XC System Interconnect provides high-speed connectivity for parallel applications. The XC System supports several different switch fabrics for use as an XC System Interconnect. For information about which servers are qualified with Myricom and Quadrics interconnects, please see the QuickSpecs provided with earlier versions of the XC System Software. The supported XC System Interconnects for the CP3000, CP4000, and CP6000 systems are GigaBit Ethernet and InfiniBand (SDR and DDR, as supported on the servers). The supported XC System Interconnects for CP3000BL and CP4000BL systems are GigaBit Ethernet and, for systems with more than 2 enclosures, InfiniBand DDR. Note that although XC System Software supports InfiniBand on BladeSystems with 2 or fewer enclosures, the supported configurations differ from those of the CP3000BL or CP4000BL product, and thus each such system must be reviewed and accepted by the XC product team. XC Systems with 32 nodes or fewer, and which have an XC System Interconnect consisting of GigaBit Ethernet, may be configured to combine the XC System Interconnect and the Cluster Platform Administration network into a single network with a single GigaBit Ethernet switch. All nodes of the XC System are directly attached to the XC System Interconnect using one adapter per node. For Gigabit Ethernet-based systems, network connections use either the onboard 1000 Base-T NICs that come with certain server models or separate adapters that are available as options to the servers. For InfiniBand, the supported host channel adapters (HCAs) for SDR are dual-port PCI-X and PCI-e with onboard memory and single-port PCI-e with no onboard memory; for DDR they are dual-port PCI-e with onboard memory, single-port PCI-e with no onboard memory, and a single-port mezzanine HCA with no onboard memory (for blade servers only).

Nodes (Cluster Platform 3000 and Cluster Platform 3000BL)

XC System Software is supported on CP3000 systems consisting of HP ProLiant DL140 G2 and/or ProLiant DL140 G3 and/or ProLiant DL360 G4 and/or ProLiant DL360 G4p and/or ProLiant DL360 G5 Woodcrest and/or ProLiant DL380 G4 and/or ProLiant DL380 G5 Woodcrest servers and/or xw8200 workstations and/or xw8400 workstations. XC System Software is also supported on CP3000BL systems consisting of HP ProLiant BL460c and/or ProLiant BL480c blade servers with the option of DL380 G5 utility nodes. Some of the servers are available with the option of either single-core or dual-core processors; both options are supported by XC System Software. The ProLiant DL140 G3 server is available with graphics cards, which are supported by the SVA software included with XC System Software. One of the servers – often a ProLiant DL380 server – is designated as the head node. This node must have an internal SCSI disk drive, two processors, and a DVD drive. The head node must be attached to the CP3000's or CP3000BL's rack-mounted keyboard/monitor. The choice of a different head node is generally related to whether more PCI slots are required for extra connectivity to external systems.

Nodes (Cluster Platform 4000 and Cluster Platform 4000BL)

XC System Software is supported on CP4000 consisting of ProLiant DL145 G1 and/or ProLiant DL145 G2 and/or ProLiant DL145 G3 and/or ProLiant DL385 G1 and/or ProLiant DL385 G2 and/or ProLiant DL585 G1 and/or ProLiant DL585 G2 servers and/or xw9300 and/or xw9400 workstations. XC System Software is also supported on CP4000BL systems consisting of HP ProLiant BL465c and/or ProLiant BL485c blade servers with the option of DL385 utility nodes. The ProLiant DL145 G3 server is available with graphics cards, which are supported by the SVA software included with XC System Software. The servers, other than for the ProLiant DL145 G1 server, are available with single core or dual core processors. One of the servers – usually a ProLiant DL385 server – is designated as the head node. The head node cannot be a ProLiant DL145 G1 server. The head node must have an internal SCSI disk drive, two processors, and a DVD drive. The head node must be attached to the CP4000's rack-mounted keyboard/monitor. The choice of a different head node is generally related to whether more PCI slots are required for extra connectivity to external systems.

Nodes (Cluster Platform 6000)

XC System Software is supported on CP6000 consisting of Integrity rx1620 and/or Integrity rx2600 servers and/or Integrity rx2620 servers and/or Integrity rx2660 servers and/or Integrity rx4640 servers. One of the servers – often an rx2620 – is designated as the head node. This node must have an internal disk drive, two processors, and a DVD drive. The head node must be attached to the CP6000's rack-mounted keyboard/monitor. The choice of a different head node is generally related to whether more PCI slots are required for extra connectivity to external systems.


There are no special storage restrictions imposed by XC software. As a general rule, if an HP storage solution (for example, an HP StorageWorks SAN solution) is supported on Linux, then it is supported on XC. As a somewhat more precise statement, the following two conditions are sufficient:

  • The storage hardware is supported by the Cluster Platform system. It suffices to assure that the storage hardware will interface to the Cluster Platform system using an adapter that has been qualified for that Cluster Platform system. See the Cluster Platform documentation to find out which adapters are supported.
  • The storage software is supported on Red Hat Enterprise Linux 4.0 Update 4 (for XC Systems where the XC System Software is layered on a standard Red Hat distribution, the storage software must be supported on whichever Red Hat Enterprise Linux V4.0 update is installed).

In addition, the XC System supports high performance file I/O with the HP StorageWorks Scalable File Share (SFS). The XC System can be connected to SFS over external Ethernet connections or over the XC System Interconnect. When configuring the XC System to connect to the SFS using the XC System Interconnect, there must be enough switch ports available to accommodate the number of nodes in the XC System, as well as the number of nodes required by the SFS (see the HP StorageWorks Scalable File Share for EVA3000 Hardware Installation Guide). The switches for the combination XC/SFS system will be included as part of the CP3000, CP3000BL, CP4000 or CP6000 configuration, rather than as part of the SFS.

Disk Space Requirements for System Files
Precise file system and partition size requirements are provided in the XC System Software Installation Guide, which is part of the XC System Software Administration Documentation Kit. Each node must have one local disk with at least 36 GB to be used for system files, swap and user data. Typical minimal disk requirements for each node are:
  Partition            Minimum size
  /                    8.0 GB
  /boot or /boot/efi   0.2 GB
  /var                 4.0 GB
  /cluster             2.0 GB
  swap                 Same as memory size, rounded to the nearest 2 GB

NOTE: These sizes are approximate; actual sizes may vary depending on the user's system environment, configuration, and software options.
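The per-node minimums above can be restated as a quick disk-planning calculation. The sketch below is illustrative only and is not part of XC System Software; it assumes the swap rule means rounding the memory size up to the next 2 GB multiple, which is one reading of "to closest 2 GB" (the authoritative values are in the XC System Software Installation Guide).

```python
# Illustrative disk-planning helper (not part of XC System Software).
# Minimum partition sizes taken from the QuickSpecs table above.
import math

MIN_SIZES_GB = {
    "/": 8.0,
    "/boot": 0.2,    # or /boot/efi on Integrity (Itanium-based) nodes
    "/var": 4.0,
    "/cluster": 2.0,
}

def swap_size_gb(memory_gb: float) -> int:
    """Swap matches memory size, rounded up to the nearest 2 GB (assumed)."""
    return 2 * math.ceil(memory_gb / 2)

def minimum_disk_gb(memory_gb: float) -> float:
    """Sum of the minimum system partitions plus swap for one node."""
    return sum(MIN_SIZES_GB.values()) + swap_size_gb(memory_gb)

# Example: a node with 8 GB of memory needs 8 GB of swap, and its system
# partitions plus swap total 22.2 GB -- well under the required 36 GB disk,
# leaving the remainder for user data.
```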

Memory Requirements

At least 1 GB of memory per processor is required on each node of the XC System.



Software Requirements

XC System Software is self-contained and does not generally require the installation of any other software, though most sites will probably want to install compilers and other development tools.

Optional Software

For access to the HP SFS Lustre file system, it is necessary to install client software that is included with HP SFS V2.2-1.

HP Serviceguard can be installed and configured to provide application availability as well as improved availability (e.g. failover) of critical XC system services. The choice of HP Serviceguard product depends on the details of the cluster configuration (ISS and BCS refer to internal ordering processes; for ease of ordering on ProLiant servers, order ISS software products; otherwise order BCS software products).

  • HP Serviceguard for Linux – 307754-B26 (ISS – single license)
  • HP Serviceguard for Linux ProLiant Cluster – 305199-B26 (ISS – 2-node cluster)
  • HP Serviceguard SW & LTU for Linux – B9903BA (BCS – single license)
  • HP Serviceguard for Linux for Integrity – T2391AA#2AH (BCS)

Since XC System Software is compatible with a standard Linux distribution, third party software will work correctly with XC System Software if it works correctly with that Linux distribution. This is also true of many parallel distributed software packages, despite the fact that parallelism typically causes complex interdependencies between separate software components layered on top of the Linux distribution. Among the parallel distributed software packages that have been configured and tested with XC System Software are PBSPro for workload management and MPICH for parallel computing. For more information, see the HP XC HowTo whitepapers on the HP website.

XC System Software can be layered on the standard Linux distributions RHEL AS 4.0 and RHEL WS 4.0. The distributions cannot be mixed within a single XC System. In other words, if one of these distributions is installed on a node of the XC System, then that distribution will be installed on all the nodes of the XC System. The Linux distributions and support can be purchased from Red Hat or from HP. Some of the more popular HP offerings are:

  • RHEL HPC WS4 1 Yr Base 8 Pk ET /opt SW – 398384-B21
  • RHEL HPC WS4 1 Yr Addon 8 Pk ET/opt SW – 398385-B21
  • HP 1 yr 9x5 10 Incdnt Red Hat IA32 SW Tech Support – U3402E

Users desiring HP-supported remote delivery of graphics from workstations to their own or collaborators' desktops can order the optional HP Remote Graphics Software (RGS). Order one single-seat license (for an HP system) for each SVA workstation that will be running RGS. This license includes one receiver license. If additional receivers (remote desktops that will interact with SVA using Remote Graphics Software) are needed, order additional single-seat licenses (for a Receiver). See the product datasheet for more information and other options.

Additionally, the following open source visualization software packages have been tested together with XC System Software and SVA:

  • VirtualGL 2.0 and TurboVNC 3.2 (for remote graphics applications, these should be used together)
  • Paraview 2.4.4

Software Licensing Information

Use of XC System Software is subject to an HP license for each processor of the XC System, and the license terms in files in the physical media. Terms of the HP Software License are provided on the end user license agreement that is delivered with the XC System Software. For more information about the Hewlett-Packard Company licensing terms and policies, contact your local Hewlett-Packard office.

License Management Facility Support

XC System Software supports the FlexLM License Management Facility. The FlexLM license key is provided upon presentation of a valid license key request form. This form is delivered with the software license (see the ORDERING INFORMATION section of these QuickSpecs). For more information about installing the XC System Software license keys, refer to the XC System Software Installation Guide.

Software and Services Warranty

The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. For example, the warranty statement for XC System Software is as follows:

  1. HP Branded Software will materially conform to its Specifications. If a warranty period is not specified for HP Branded Software, the warranty period will be ninety (90) days from the delivery date, or the date of installation if installed by HP. If You schedule or delay installation by HP more than thirty (30) days after delivery, the warranty period begins on the 31st day after delivery. This limited warranty is subject to the terms, limitations, and exclusions contained in the limited warranty statement provided for Software in the country where the Software is located when the warranty claim is made.
  2. HP warrants that any physical media containing HP Branded Software will be shipped free of viruses.
  3. HP does not warrant that the operation of Software will be uninterrupted or error free, or that Software will operate in hardware and Software combinations other than as expressly required by HP in the Specifications or that Software will meet requirements specified by You.
  4. HP is not obligated to provide warranty services or support for any claims resulting from:
    1. improper site preparation, or site or environmental conditions that do not conform to HP's site specifications;
    2. Your non-compliance with Specifications;
    3. improper or inadequate maintenance or calibration;
    4. Your or third-party media, software, interfacing, supplies, or other products;
    5. modifications not performed or authorized by HP;
    6. virus, infection, worm or similar malicious code not introduced by HP; or
    7. abuse, negligence, accident, loss or damage in transit, fire or water damage, electrical disturbances, transportation by You, or other causes beyond HP's control.
  5. HP provides third-party products, software, and services that are not HP Branded "AS IS" without warranties of any kind, although the original manufacturers or third party suppliers of such products, software and services may provide their own warranties.
  6. If notified of a valid warranty claim during the warranty period, HP will, at its option, correct the warranty defect for HP Branded Software, or replace such Software. If HP is unable, within a reasonable time, to complete the correction, or replace such Software, You will be entitled to a refund of the purchase price paid upon prompt return of such Software to HP. You will pay expenses for return of such Software to HP. HP will pay expenses for shipment of repaired or replacement Software to You. This section states HP's entire liability for warranty claims.



Ordering Information

The following information is valid at time of release. Please contact your local Hewlett-Packard office for the most up-to-date information.

Software License and Support Parts

On an XC System, all of the processors must be licensed. So, for example, on an XC System with 128 ProLiant servers, each with 2 dual-core processors, order at least 256 XC System Software licenses (each of which licenses one processor). The licenses provide permanent rights to use a specific version of the software. Standard software support includes 9x5 telephone support, media kits for new distributions, and rights to new versions for the specified period of time. 24x7 software support extends the telephone support window to round-the-clock access. See the section below on Software Product Services.
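The license-count rule above is simple per-processor (per-socket) arithmetic, independent of core count. The following one-liner merely restates the example from the text and is not an HP ordering tool:

```python
# Illustrative license-count arithmetic (not an HP ordering tool).
# XC System Software is licensed per processor (socket); the number of
# cores per processor does not change the count.

def xc_licenses_needed(servers: int, processors_per_server: int) -> int:
    return servers * processors_per_server

# The example from the text: 128 ProLiant servers, each with 2 dual-core
# processors, requires at least 256 single-processor licenses.
print(xc_licenses_needed(128, 2))  # -> 256
```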

Order either BCS or ISS license parts. Typically, BCS parts are used when ordering a system consisting of HP Integrity servers and ISS parts are used when ordering a system consisting of HP ProLiant servers.

Part Number   Description                                  Product Line (internal ordering info)
BA687A        HP XC System Software 1 Proc Flex License    BCS
434066-B21    HP XC System Software 1 Proc Flex License    ISS

Support (some sample parts to be used with the ISS licenses):
UF089E        SW Phone (9x5 STS & Updates – unlimited) 1 Yr     N/A
UF091E        SW Phone (9x5 STS & Updates – unlimited) 3 Yrs    N/A
UF090E        SW 24x7 (24x7 STS & Updates – unlimited) 1 Yr     N/A
UF092E        SW 24x7 (24x7 STS & Updates – unlimited) 3 Yrs    N/A

Distribution Media

XC System Software, including sources for Open Source software, is available on DVDs, and its documentation is available on CD. The DVDs and CD are both included in the XC media and manual kits, HP XC System Software Media and Manuals BA686A (product line BCS) and 434067-B21 (product line ISS). Patch kits may be required and can be downloaded from the HP website.

Software Documentation

The HP XC System Software Documentation CD includes LSF, SLURM, and HP-MPI documentation as well as the following HP XC documents:

  • HP XC System Software Installation Guide
  • HP XC System Software Administration Guide
  • HP XC System Software Hardware Preparation Guide
  • HP XC System Software User's Guide

The following hardcopy documentation is also available for HP-MPI:

  • HP-MPI User's Guide (9th Edition): B6060-96018
  • HP-MPI Release Notes Version 2.2.5: T1919-90011B

Software Product Services

For the purposes of Hewlett-Packard Software Product Services, the product is regarded as XC System Software only if it has not been modified by the customer. Although the software includes licenses permitting modification of certain parts of the software, such modifications – unless they are provided by HP under a support agreement, or provided by HP as a licensed product – will result in a variation of the software that may no longer be called XC System Software, and that therefore is not the subject of any standard service agreement for XC System Software.

Standard software support includes customer access to technical resources during standard hours (see later), problem analysis, escalation management and resolution. HP also provides unlimited access to an electronic facility that includes a knowledge database with known symptoms and solutions, software product descriptions, specifications, and technical literature. In addition, HP will also make available certain software patches, including security patches, to the XC System Software. During the term of a standard software support contract, a customer is entitled to receive new versions of the software. With standard software support, customers can access technical resources via telephone, electronic communications or FAX where available, during standard business hours on standard business days, between the hours of 8:00 am and 5:00 pm, Monday through Friday, excluding HP holidays. 24x7 software support extends the access window to 24 hours a day, Monday through Sunday, including holidays. Business terms and conditions governing software services can be found at the HP website.


A variety of customer service options are available from Hewlett-Packard for XC System Software. Service offerings are also available for additional software packages that may be distributed along with XC System Software but that are otherwise not included as part of the XC System Software. For more information, contact your local Hewlett-Packard office.

Amongst the service options are software factory installation services, where the XC System Software is installed and configured on Cluster Platforms at the factory. There are also on-site Consulting and Integration Services available for XC System Software. These services are listed below.

Product Number   Description
HB480A1          HP Cluster Platform 4-8 node factory installation service
HB481A1          HP Cluster Platform 9-17 node factory installation service
HB482A1          HP Cluster Platform 18-33 node factory installation service
HB483A1          HP Cluster Platform 34-65 node factory installation service
HB484A1          HP Cluster Platform 66-129 node factory installation service
HB485A1          HP Cluster Platform 130-257 node factory installation service
U5628A 001       XC – one day on-site systems software knowledge transfer and five hours of customer integration management
U5628A 002       XC – two days on-site systems software knowledge transfer and six hours of customer integration management
U5628A 003       XC – three days on-site systems software knowledge transfer and six hours of customer integration management
U5628A 004       XC – five days on-site systems software knowledge transfer and six hours of customer integration management
U5628A 005       XC – ten days on-site systems software knowledge transfer and twelve hours of customer integration management

Product Number   Description                                                                    Duration
U5617A           XC Implementation Program Management                                           40 hours implementation management consulting
U5618A           XC cluster systems software QuickStart                                         5 days on-site consulting and six hours management coordination
U5619A           XC Cluster Applications Migration, Development, and Optimization QuickStart    5 days on-site consulting and six hours management coordination
U5620A           XC Implementation Program Management                                           80 hours implementation management consulting
U5621A           XC Cluster Systems QuickStart                                                  10 days on-site consulting and twelve hours management coordination
U5622A           XC Cluster Applications Migration, Development, and Optimization QuickStart    10 days on-site consulting and twelve hours management coordination
U5626A           XC Applications Programming and Migration                                      2 days on-site formal customer training
U5627A           XC Cluster Systems Administration Course                                       4 days on-site formal customer training
For more details, please see your HP Services representative.

© Copyright 2003-2007 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice.

Intel®, Itanium®, and Xeon® are trademarks or registered trademarks of Intel Corporation in the U.S. and other countries and are used under license. Pentium® is a U.S. registered trademark of Intel Corporation. UNIX® is a registered trademark of The Open Group. AMD and AMD Opteron are trademarks or registered trademarks of Advanced Micro Devices, Inc.

NVIDIA® is a registered trademark of NVIDIA Corporation.

Red Hat® is a registered trademark of Red Hat, Inc. in the United States and other countries.

Linux® is a registered trademark of Linus Torvalds.

Altair® PBS Professional™ is a registered trademark of Altair Engineering.

Restricted Rights Legend
Use, duplication or disclosure by the U.S. Government is subject to restrictions as set forth in subparagraph (c) (1) (ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013 for DOD agencies, and subparagraphs (c) (1) and (c) (2) of the Commercial Computer Software Restricted Rights clause at FAR 52.227-19 for other agencies.

3000 Hanover Street
Palo Alto, California 94304 U.S.A.

Use of this QuickSpecs and media is restricted to this product only. Additional copies of the programs may be made for security and back-up purposes only. Resale of the programs, in their present form or with alterations, is expressly prohibited.

   DA-12094 - Worldwide - Version 10 - March 22, 2007