UITS Research Technologies

Cyberinfrastructure enabling research and creative activities

High Performance Systems - Overview of Resources

Big Red II - Cray XE6/XK7

Big Red II is Indiana University's main system for high-performance parallel computing. With a theoretical peak performance (Rpeak) of one thousand trillion floating-point operations per second (1 petaFLOPS), Big Red II is among the world's fastest research supercomputers. Owned and operated solely by IU, Big Red II is designed to accelerate discovery in a wide variety of fields, including medicine, physics, fine arts, and global climate research, and to enable effective analysis of large, complex data sets (big data).

System description

  • 344 CPU-only compute nodes, each with two 16-core AMD Opteron Abu Dhabi x86_64 CPUs and 64 GB of RAM
  • 676 CPU/GPU compute nodes, each with one 16-core AMD Opteron Interlagos x86_64 CPU, one NVIDIA Tesla K20 GPU accelerator (a single Kepler GK110 GPU), and 32 GB of RAM

Highlights

  • Peak performance: 1 petaFLOPS
  • Uses the Data Capacitor II as its high-performance file system
  • Cray Linux Environment (based on SUSE Linux SLES 11)
  • Execution environments: Extreme Scalability Mode (ESM) and Cluster Compatibility Mode (CCM)
  • TORQUE coupled with the Moab job scheduler for batch job management (see the sample script below)
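
For illustration, here is a minimal sketch of a TORQUE batch script for Big Red II. The resource requests, walltime, and program name (my_mpi_program) are assumptions chosen for illustration, not values from IU documentation:

    #!/bin/bash
    # Request 2 CPU nodes with 32 cores each for one hour
    #PBS -l nodes=2:ppn=32,walltime=01:00:00
    #PBS -N esm_example
    cd $PBS_O_WORKDIR
    # In ESM, compute-node executables are launched with aprun;
    # 2 nodes x 32 cores each = 64 MPI ranks
    aprun -n 64 ./my_mpi_program

Submit the script with qsub and check its status with qstat, as on any TORQUE/Moab system.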

Mason

Mason (mason.indiana.edu) at Indiana University is a large-memory computer cluster configured to support data-intensive, high-performance computing tasks, such as genome assembly and analysis, for IU researchers and the National Center for Genome Analysis Support (NCGAS).

System description

  • 16 HP DL580 servers, each with 1.87 GHz x86_64 Intel (Nehalem-EX) processors, 32 cores per node, 512 GB of RAM, and 500 GB of local disk

Highlights

  • Peak performance: 3.83 trillion floating-point operations per second (3.83 teraFLOPS)
  • High-performance Lustre file system provided by the Data Capacitor II
  • Red Hat Enterprise Linux operating system
  • Each node connects to the IU Research Network via 10-gigabit Ethernet
  • Resource management provided by TORQUE
  • Scheduling services provided by Moab (a sample large-memory job script follows)
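
A large-memory job on Mason uses the same TORQUE conventions as Big Red II. A minimal sketch; the memory request, walltime, and program name are illustrative assumptions:

    #!/bin/bash
    # Request one whole 32-core node and most of its 512 GB of RAM
    #PBS -l nodes=1:ppn=32,vmem=500gb,walltime=08:00:00
    #PBS -N largemem_example
    cd $PBS_O_WORKDIR
    ./my_assembly_program --threads 32 input_reads.fastq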

Quarry Gateway Web Services Hosting System

The Quarry Gateway Web Services Hosting System at Indiana University is used solely for hosting Extreme Science and Engineering Discovery Environment (XSEDE) Science Gateway and Web Service allocations, and is restricted to members of approved projects that have a web service component.

The system consists of multiple Intel-based Hewlett-Packard (HP) servers geographically distributed for failover across the IU Bloomington and IUPUI Data Centers.

  • Eight HP ProLiant DL160 and two HP ProLiant DL360 front-end servers at each location host virtual machines (VMs) based on the Kernel-based Virtual Machine (KVM) virtualization infrastructure. Each server runs Ubuntu 14.04 LTS, and is configured with dual quad-core Intel Xeon E5603 processors and a 10-gigabit Ethernet (10 GbE) adapter. Each DL160 has 96 GB of RAM; each DL360 has 128 GB of RAM.
  • Four HP ProLiant DL180 servers with HP storage arrays and two HP ProLiant DL380 servers at each location provide VM block storage. Each server is configured with a quad-core Intel Xeon E5606 processor, a 10 GbE adapter, and a RAID controller attached to an HP storage array. Each DL180 has 12 GB of RAM; each DL380 has 32 GB of RAM.

A standard VM consists of one virtual CPU, 4 GB of memory, and 10 GB of persistent local storage. Service owners get root access to their VMs. Supported VM operating systems are Red Hat Enterprise Linux (RHEL), CentOS, Debian Stable, and Ubuntu Linux. To request a VM, fill out and submit the VM Request Form.
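
Once a VM is provisioned, its owner can confirm the allocated resources from a shell on the VM with standard Linux tools; on a standard VM the output should reflect roughly the figures above:

    # Number of virtual CPUs (1 on a standard VM)
    nproc
    # Total memory (about 4 GB on a standard VM)
    free -h
    # Persistent local storage (about 10 GB)
    df -h /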

Note: The Data Capacitor wide-area network (DCWAN) high-throughput file system can also be mounted to VMs. However, access to the DCWAN file system is reserved for long-term projects with storage and ongoing access requirements that cannot be met with other existing systems. To request DCWAN project space, fill out and submit the Project Allocation Request Form. Default allocations provide 10 TB, but requests for greater capacity may be granted after evaluation.

For more about the Quarry Gateway Web Services Hosting System, see IU Quarry User Guide in the XSEDE User Portal. If you have questions or need help, contact the XSEDE Help Desk.

Research Database Complex

The Research Database Complex (RDC) is dedicated to research-related databases and data-intensive applications that require a database. The RDC consists of three Hewlett-Packard DL180 G6 servers that provide Oracle database services and one Hewlett-Packard DL180 G6 server that provides MySQL database services; each server contains two Intel Xeon E5620 2.40 GHz processors and 72 GB of memory. Access to the RDC is available to all IU faculty and graduate students, as well as faculty-sponsored undergraduates and staff. To apply for an account, visit the Account Management Service at https://itaccounts.iu.edu.

Data Capacitor II (DC2)

The Data Capacitor II (DC2) is a high-speed, high-capacity storage facility for very large data sets. The DC2 scratch file system is mounted on Big Red II, Quarry, and Mason. DC2 is built on DataDirect Networks (DDN) Storage Fusion Architecture: DDN SFA12k-40 high-speed storage appliances combined with the open-source Lustre parallel file system deliver up to 50 GB/s of file system performance. To apply for an allocation on the Data Capacitor II, fill out and submit the online request form.
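
Because DC2 is a Lustre parallel file system, throughput for large files depends on how they are striped across storage targets. A brief sketch using standard Lustre commands; the directory path and stripe count are illustrative assumptions:

    # Show the current stripe settings for a directory
    lfs getstripe /N/dc2/scratch/username/mydata
    # Stripe new files in the directory across 8 object storage targets (OSTs)
    # so that large sequential reads and writes use several servers in parallel
    lfs setstripe -c 8 /N/dc2/scratch/username/mydata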

Research Storage

The Scholarly Data Archive (SDA) is a tape-based archive system with a raw tape capacity of 5.7 petabytes. The SDA is geographically distributed between the IUPUI and IU Bloomington campuses; each campus has an automated tape library capable of holding over 5,000 tapes and 24 high-speed tape drives. The SDA has an aggregate transfer rate of over 2 GB per second and can store files from about 1 MB to over 5 TB in size.

The Research File System (RFS) is a spinning-disk system currently offering 30 TB of total capacity. RFS can be mounted on the desktop or accessed over the web or via SFTP (see the example below). Unlike the SDA, RFS supports active editing of files and documents.

At all campuses, accounts are available only to faculty, staff, and graduate students. To apply for an account, visit the Account Management Service at https://itaccounts.iu.edu.
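
For example, files can be moved to RFS with any standard SFTP client. A minimal sketch; the hostname below is an assumption, so check the RFS documentation for the actual server name:

    # Connect to RFS over SFTP (hostname is an assumption)
    sftp username@rfs.iu.edu
    # At the sftp> prompt, upload a file to your RFS space
    put results.tar.gz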