Getting started on Karst
Karst (karst.uits.iu.edu) is Indiana University's newest
high-throughput computing cluster. Designed to deliver large amounts
of processing capacity over long periods of time, Karst's system
architecture provides IU researchers the advanced performance needed
to accommodate high-end, data-intensive applications critical to
scientific discovery and innovation. Karst also serves as a
"condominium cluster" environment for IU researchers, research labs,
departments, and schools.
System overview
Karst is equipped with 228 general access compute nodes and 28 condo nodes, plus 16 dedicated data nodes for separate handling of data-intensive operations. All nodes are IBM NeXtScale nx360 M4 servers, each equipped with two Intel Xeon E5-2650 v2 8-core processors. Each compute node has 32 GB of RAM and 250 GB of local disk storage. Each data node has 64 GB of RAM and 24 TB of local storage. All nodes run Red Hat Enterprise Linux (RHEL) 6 and are connected via 10-gigabit Ethernet to the IU Science DMZ.
Karst provides batch processing and node-level co-location services that make it well suited for running high-throughput and data-intensive parallel computing jobs. Karst uses TORQUE integrated with Moab Workload Manager to coordinate resource management and job scheduling. The Data Capacitor II and Data Capacitor Wide Area Network (DC-WAN) parallel file systems are mounted for temporary storage of research data. The Modules environment management package on Karst allows users to dynamically customize their shell environments.
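The Modules workflow mentioned above can be sketched as follows. The module names shown are examples only; run `module avail` on Karst to see what is actually installed.

```shell
# Inspect and adjust your software environment with Modules.
# Module names below (e.g., "intel") are illustrative examples.
module avail            # list all software modules available on the system
module load intel       # add a compiler module to your environment
module list             # show the modules currently loaded
module unload intel     # remove a module when you no longer need it
```

Modules edit your shell environment (PATH, LD_LIBRARY_PATH, and so on) dynamically, so loading or unloading a module takes effect immediately in the current session.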
The following documents will help you get started running compute jobs on Karst:
On this page:
- System overview
- Accounts, access, and user policies
- Setting up your user environment
- Running jobs
- X forwarding and interactive jobs
- Application-specific help
- Getting help
Accounts, access, and user policies
- Requesting an account
- Accessing Karst
- Using Karst Desktop Beta
- Setting up SSH public-key authentication
- What are my responsibilities as a computer user at IU?
- On IU's research systems, how much allocated and short-term storage capacity is available to me?
- Working with research data containing PHI
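As a sketch of the SSH public-key setup linked above (the username is a placeholder; substitute your IU username):

```shell
# Generate an SSH key pair on your local machine, then install the
# public key on Karst so you can log in without typing your passphrase
# each time. "username" is a placeholder for your IU username.
ssh-keygen -t rsa -b 4096                    # create ~/.ssh/id_rsa and id_rsa.pub
ssh-copy-id username@karst.uits.iu.edu       # append the public key to Karst's authorized_keys
ssh username@karst.uits.iu.edu               # subsequent logins use the key
```

If `ssh-copy-id` is not available on your system, you can instead append the contents of `~/.ssh/id_rsa.pub` to `~/.ssh/authorized_keys` on Karst manually.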
Setting up your user environment
- Using Modules to manage your software environment
- Changing your login shell or your passphrase
- Compilers on Karst
- Available software
Running jobs
- Queue information
- About Lustre file systems
- Specifying IU's Lustre file systems for batch jobs on Karst
- Using TORQUE to submit and manage batch jobs
- Running jobs on Karst
- Monitoring memory and CPU usage
- Using the IU Cyberinfrastructure Gateway to monitor batch jobs on Karst
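The TORQUE batch workflow covered by the documents above can be sketched as a minimal job script. The queue name, resource request, and module name are examples only; consult the queue information document and `module avail` for the actual values on Karst.

```shell
# job.pbs -- a minimal, hypothetical TORQUE job script.
# Queue, walltime, and module names are illustrative examples.
#PBS -N my_job                          # job name
#PBS -l nodes=1:ppn=16,walltime=01:00:00  # one node, 16 cores, 1 hour
#PBS -q batch                           # example queue name

cd $PBS_O_WORKDIR                       # start in the directory you submitted from
module load gcc                         # example module; load what your program needs
./my_program                            # run your application
```

You would then submit and monitor the job with TORQUE's command-line tools, for example `qsub job.pbs` to submit, `qstat -u $USER` to check status, and `qdel <jobid>` to cancel.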
X forwarding and interactive jobs
Getting help
- If you have a system-specific question about Big Red II, Karst, Mason, or the Research Database Complex (RDC), contact the High Performance Systems (HPS) team.
- If you have questions about the Scholarly Data Archive (SDA), contact the Research Storage team.
- If you have questions about shared scratch or project space on the Data Capacitor II or Data Capacitor Wide Area Network (DC-WAN) file system, contact the High Performance File Systems (HPFS) team.
- If you have questions about the development tools, compilers, scientific or numerical libraries, or debuggers available on the research computing system, contact the Scientific Applications and Performance Tuning (SciAPT) team.
- If you have questions about the statistical and mathematical applications available on the research computing systems, contact the Research Analytics group.
- If you have questions about the bioinformatics and genome analysis packages available on the research computing systems, email the National Center for Genome Analysis Support (NCGAS).
For general inquiries about UITS Research Technologies systems and services, complete and submit the Research Technologies request for help form.