High Performance Computing — HPC
Shared supercomputing resources for Columbia researchers.
Also known as HPC and Shared HPC.
CUIT’s High Performance Computing service provides clusters of computing resources that support research computing across numerous research groups and departments at the University, as well as additional projects and initiatives as demand and resources allow. The Shared Research Computing Policy Advisory Committee (SRCPAC) oversees the operation of existing HPC clusters through faculty-led subcommittees. SRCPAC is also responsible for the governance of the Shared Research Computing Facility (SRCF) as well as for making policy recommendations for shared research computing at the University.
The following resources are available to support research initiatives at Columbia:
Please note that HPC service is available 24x7. Downtimes for maintenance may be scheduled every 3 months. The duration of these planned outages varies but is typically less than a day and is announced to users in advance.
Ginsburg Shared HPC Cluster
Ginsburg went live in February 2021 and was purchased jointly by 33 research groups and departments.
The cluster is faculty-governed by the cross-disciplinary SRCPAC and is administered and supported by CUIT’s Research Computing Services (RCS) team.
Tentative retirement dates
- Ginsburg Phase 1 retirement: February 2025
- Ginsburg Phase 2 retirement: March 2027
- Ginsburg Phase 3 retirement: December 2027
286 nodes with a total of 9152 cores (32 cores per node)
All servers are equipped with dual Intel Xeon Gold 6226R processors (2.9 GHz):
- 191 Standard Nodes (192 GB)
- 56 High Memory Nodes (768 GB)
- 18 RTX 8000 GPU nodes (2 GPU modules per server)
- 4 V100S GPU nodes (2 GPU modules per server)
- 8 A100 GPU Nodes (2 GPU modules per server)
- 9 A40 GPU Nodes (2 GPU modules per server)
- 1 PB of DDN ES7790 Lustre storage
- HDR InfiniBand
- Red Hat Enterprise Linux 8
- Slurm job scheduler
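Because Ginsburg uses the Slurm scheduler, work is normally submitted as a batch script. The sketch below shows what a minimal single-GPU job might look like; the account name, module name, and workload script are placeholders (each group is assigned its own Slurm account), and the requested resources should be adjusted to your workload and the cluster's current limits rather than taken as required values.

```bash
#!/bin/bash
# Minimal Slurm batch script sketch for a single-GPU job.
# The account, module, and script names below are placeholders --
# substitute the values assigned to your research group.
#SBATCH --account=hypothetical_group   # your group's Slurm account (placeholder)
#SBATCH --job-name=gpu_test
#SBATCH --gres=gpu:1                   # request one GPU module
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=0-04:00                 # 4 hours, well under a 5-day wall-time cap

module load cuda                       # module name is illustrative
nvidia-smi                             # confirm which GPU was assigned
python my_training_script.py           # placeholder workload
```

Such a script would be submitted with `sbatch gpu_test.sh` and monitored with `squeue -u $USER`.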
Terremoto Shared HPC Cluster
The Terremoto cluster was launched in December 2018 and is located in the Columbia University Data Center.
The cluster is faculty-governed by the cross-disciplinary SRCPAC and is administered and supported by CUIT’s Research Computing Services (RCS) team.
Tentative retirement dates
- Terremoto Phase 1 retirement: December 2023
- Terremoto Phase 2 retirement: December 2024
137 nodes with a total of 3288 cores (24 cores per node)
Dell C6420 nodes with dual Intel Xeon Gold 6126 processors (2.6 GHz):
- 111 Standard Nodes (192 GB)
- 14 High Memory Nodes (768 GB)
- 12 GPU Nodes with two Nvidia V100 GPU modules
- EDR InfiniBand
- Red Hat Enterprise Linux 7
- Slurm job scheduler
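Terremoto likewise schedules jobs through Slurm. For quick tests and debugging, an interactive session can be more convenient than a batch job; the commands below are a sketch only, with a placeholder account name, and actual session limits depend on your group's share of the cluster.

```bash
# Request an interactive shell on one compute node for up to two hours.
# The account name is a placeholder for your group's Slurm account.
srun --account=hypothetical_group --pty -t 0-02:00 --mem=8G /bin/bash

# Once on the node, check your queued jobs and the available partitions:
squeue -u $USER
sinfo
```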

Habanero Shared HPC Cluster
The Habanero cluster was launched in November 2016 and is available for free use as well as for classroom teaching. The cluster is faculty-governed by the cross-disciplinary SRCPAC and is administered and supported by CUIT’s Research Computing Services (RCS) team.
302 nodes with a total of 7248 cores (24 cores per node)
HP ProLiant XL170r Gen9 nodes with dual Intel Xeon E5-2650 v4 processors (2.2 GHz):
- 234 standard memory nodes (128 GB)
- 41 high memory nodes (512 GB)
- 14 GPU nodes, each with two Nvidia K80 GPUs
- 13 GPU nodes, each with two Nvidia P100 GPUs
- 640 TB DDN GS7K GPFS storage
- EDR InfiniBand (FDR to storage)
- Red Hat Enterprise Linux 7
- Slurm job scheduler
Yeti Shared HPC Cluster
Yeti, retired in 2019, was located in the Shared Research Computing Facility (SRCF), a dedicated portion of the university data center on the Morningside campus.
Hotfoot Shared HPC Cluster
Hotfoot, now retired, was launched in 2009 as a partnership among the departments of Astronomy & Astrophysics, Statistics, and Economics plus other groups represented in the Social Science Computing Committee (SSCC); the Stockwell Laboratory; CUIT; the Office of the Executive Vice President for Research; and Arts & Sciences.
In later years the cluster ran the Torque/Moab resource manager/scheduler software and consisted of 32 nodes which provided 384 cores for running jobs. The system also included a 72 TB array of scratch storage.
CUIT offers four ways to leverage the computing power of our High Performance Computing resources.
Please note: Morningside, Lamont, and Nevis faculty and research staff are eligible for the Purchase option. Morningside, Lamont, and Nevis faculty, research staff, and sponsored students are eligible for the Rent and Free options.
Researchers may purchase servers and storage during periodic purchase opportunities scheduled and approved by faculty and administration governance committees. A variety of purchasing options are available with pricing tiers that reflect the level of computing capability purchased. Purchasers receive higher priority than others leveraging the HPC.
For more information on this option, please email [email protected].
An individual researcher may pay a set fee for a share of the system for one year as a single user with the ability to use additional computing capacity as it is available, based on system policies and availability. The current price is set at $1000/year.
Submit a request form for an HPC rental request now.
Researchers, including graduate students, post-docs, and sponsored undergraduates, may use the system on a low-priority, as-available basis. User support is limited to online documentation only.
Submit a request form for free HPC access now.
Instructors teaching a course or workshop addressing an aspect of computational research may request temporary access for their students. Access will typically be arranged in conjunction with a class project or assignment.
Submit a request form for HPC Education access now.
Current HPC contacts can request access to their HPC group for a new user by emailing [email protected]. This option is available to current authorized contacts only.
Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS), formerly XSEDE
As of September 1, 2022, XSEDE is known as ACCESS, an NSF-funded, nationwide collection of supercomputing systems available to researchers through merit-based allocations. The resources are free, but the application process can be competitive. All Columbia faculty members and postdoctoral researchers who are eligible principal investigators (PIs) can contact RCS to inquire about joining our ACCESS national HPC test allocation as a first step toward obtaining their own allocation. If you need to run jobs for more than our current 5-day maximum wall time, have a short or one-time project, or do not have a budget for our resources, ACCESS can be a good option.
There are a few types of allocations that Columbia researchers can use, listed below in order of ease of acquisition and amount of resources available.
Use Columbia's Discover allocation
This option covers very small-scale testing and benchmarking, and requests can be approved by Columbia's ACCESS representatives.
Accelerate ACCESS Allocation
Accelerate ACCESS allocations are one of the fastest ways to gain access to and start using ACCESS-allocated resources. Accelerate ACCESS requests require minimal documentation: a project description of no more than three (3) pages and the PI's CV. For resources beyond this tier, you would need to submit a Maximize ACCESS allocation request.
Maximize ACCESS Allocation
For projects that have progressed beyond the Accelerate phase, either in purpose or scale of computational activities, a Maximize ACCESS Request is appropriate. Requests for a Maximize ACCESS allocation are accepted and reviewed by the ACCESS Allocation Review Committee (AARC). Research requests are highly competitive.
- To begin, create an account and log in to ACCESS.
- Under Select an Identity Provider, type Columbia and select Columbia University, and click Log On.
- You can choose either option under "Select an information release consent duration"; "Ask me again if information to be provided to this service changes" is the default.
- Click Accept.
- Click Begin.
- Click Submit, type Columbia, and click Select; an email verification will be sent.
- Read the Terms and Conditions, tick the checkbox next to I Agree, and click Submit.
- Enter a new password and click Submit.
- Send an email to [email protected] letting them know you have created an ACCESS account.
Computing Resources Outside of Columbia
CUIT's Research Computing Services team has put together a list of additional, fee-based, third-party computing and storage platforms as a resource for our research community.
At this time, RCS does not recommend one resource over another, or provide support for these external services.
- Amazon Web Services including Elastic Compute Cloud (EC2) and storage services
- Columbia University Center for Computational Biology and Bioinformatics (C2B2) colocation and hosting services
- Cornell University Center for Advanced Computing and Red Cloud services
- New York State HPC Program: Resources at RPI, Brookhaven
- New York State HPC Consortium: Web-based resources at RPI, Brookhaven, Stony Brook, and University at Buffalo
- San Diego Supercomputing Center (SDSC) Cloud Storage Services
- USC Digital Repository
- Windows Azure