The Research Cluster Grant

The Silicon Mechanics Research Cluster Grant is a program designed to jumpstart research efforts where access to high-performance computing has been limited, outdated, or unavailable. Each year, the awarded clusters drive collaboration within institutions by providing access to the latest high-performance computing and GPU technologies.

Grant Details

The Application Submission Window is now Closed

Background
Rules
Program Q&A
Application Questions
Sample Cluster Configuration
Application


Background

For the fifth consecutive year, Silicon Mechanics is pleased to announce our sponsorship of a unique grant opportunity. Two institutions will be selected, and both will be awarded a complete high-performance computing cluster using the latest Intel processors and NVIDIA GPU accelerators. This grant program is open to all US and Canadian qualified post-secondary institutions, university-affiliated research institutions, non-profit research institutions, and researchers at federal labs with university affiliation. We are very interested in applications that show the most appropriate use of the technologies to be awarded.

We hope you will consider this opportunity to support your research by submitting a proposal, or by forwarding this announcement to others you know who could benefit from this grant.

If you have any questions, please do not hesitate to contact us at research-grant@siliconmechanics.com.


Rules

1. Open to all US and Canadian qualified post-secondary institutions, university-affiliated research institutions, non-profit research institutions, and researchers at federal labs with university affiliations.

2. The application portal will be accessible from this page at 9:00am Pacific on December 15th, 2015.

3. Submissions will be accepted through 11:59pm Pacific on March 1st, 2016.

4. The grant recipients will be announced on or before April 1st, 2016.

5. Submissions will be reviewed for merit and related impacts.

6. Applications should be no longer than 5 pages and must address the application questions.

7. The award is made at the sole discretion of Silicon Mechanics and its partner companies.

8. The cluster is covered by standard warranties from Silicon Mechanics.

9. Neither Silicon Mechanics nor its partners will be responsible for any costs incurred by the awarded organization as part of installing, operating, or managing the clusters.

10. Neither Silicon Mechanics nor its partners make any claim to any research conducted on the Research Cluster Grant, to any intellectual property generated or deployed on it, or to any intellectual property used in the application submission.


Program Q&A

Q: What is being awarded?
A: The award is for two high-performance computing clusters, one cluster per institution, including all systems and networking in a single rack enclosure. Specifications are listed in the Sample Cluster Configuration section below.

Q: Does the award include any monetary support?
A: No, the grant is only for the award of the cluster.

Q: Is this award for the use or loan of a cluster, or does my institution take possession of the cluster?
A: The awarded institutions will take delivery of the cluster and own it.

Q: Does the proposal have to come from an individual, or can it be submitted on behalf of a department or institution?
A: While the proposal can come from one person, we strongly encourage collaboration, both within and across departments of a single institution and across multiple institutions, so that the technology can be accessed by a larger research audience.

Q: Is the hardware configuration already determined or can I make changes?
A: As listed below, the configuration is pre-determined. That said, we will work with the receiving institution should specific infrastructure requirements need to change (e.g., power specifications). The configuration could also change based on revisions to technical specifications by the manufacturers and on component availability. The delivered configuration will be equal to or higher in specification than what is listed.

Q: What areas of research are of interest to the review committee?
A: No single area is the focus of this award process. We do not judge the research itself; that is, we are not judging whether genomic research is more important than environmental research. We are primarily interested in finding the most innovative uses of the awarded technology, and in the applicants who could take the most advantage of this newer technology in their research.

Q: Should educational aspects/impacts also be outlined in the proposal or do you care primarily about research and science?
A: We are very much interested in "educational aspects/impacts", so please do include those.

Q: We typically submit references as part of a grant submission. Will those need to be included in the 5 pages we submit?
A: References are not requested or required.

Q: What support is required from my institution?
A: Your application should confirm the following:
  • There is support within your institution for this grant.
  • There is sufficient access to a facility to house the proposed cluster.
  • There are technical resources available for managing the cluster.
  • Your name and institution may be used for reasonable marketing activities, including any (or all) of the following: interviews, photographs, press releases, exhibit participation, case studies, and brief descriptions of the research being conducted.

Q: What is the deadline for submitting a proposal?
A: March 1st, 2016, at 11:59pm Pacific.

Q: When will the award be made?
A: On or before April 1st, 2016.

Q: How do we submit our proposal?
A: The application portal will open on this site on December 15th, 2015 at 9:00am Pacific.

Q: If we have any questions, how do we submit those?
A: Send your questions to research-grant@siliconmechanics.com.


Application Questions

1. For the PI and any Co-PIs, include name, email address, telephone number, position, department(s), and institution(s) for which you will be conducting this research.

2. Please describe the research that you plan to undertake with this cluster.

3. How important is the proposed research activity to advancing knowledge and understanding within its own field or across different fields?

4. To what extent does the research take advantage of the technologies to be awarded? Be specific by describing the technology and how it is applicable to improving your research capabilities.

5. The goal of the Research Cluster Grant is to provide the latest HPC technology to the institution(s) with the greatest need. Please describe your need and why your institution should be awarded this grant. Detail how any departments, research groups, and/or other institutions will collaboratively utilize the cluster.

6. What IT resources do you currently have that are used for research computing?

7. Will there be any student access to the cluster? If so, at what level and for what purposes?


Sample Cluster Configuration

Note: the following configuration is what was awarded in 2015. The clusters to be awarded in 2016 will be similar, but will use the most current components available at that time.

Summary:

  • Head node with storage

  • 4 GPU / compute nodes

  • Gigabit & InfiniBand Networking

  • Rack with Power Distribution

One 2U head node with storage, featuring 2 Intel Xeon E5-2680v2 processors, 128 GB of Kingston DDR3-1600 RAM, 7.2 TB (raw) of Seagate Savvio 10K SAS storage controlled by an 8-port LSI RAID card, and 2 mirrored Intel enterprise SSDs for OS storage (see the capacity sketch after the parts list below). Network and connectivity are provided by a Mellanox ConnectX-3 FDR InfiniBand network adapter and an integrated Intel i350 gigabit Ethernet controller. Cluster management and job submission are provided by Bright Cluster Manager.

  • CPU: 2 x Intel Xeon E5-2680v2, 2.8 GHz (10-Core, 115W)

  • RAM: 128GB (8 x 16GB DDR3-1600 Registered ECC DIMMs)

  • Integrated NIC: Intel i350 Dual-Port Gigabit Ethernet Controller

  • InfiniBand: Mellanox Single-Port ConnectX-3 FDR InfiniBand Network Adapter

  • Management: Integrated IPMI 2.0 & KVM with Dedicated LAN

  • Hot-Swap Drives: 8 x 900GB Seagate Savvio 10K.6 (6Gb/s, 10K RPM, 64MB Cache) 2.5" SAS Hard Drives

  • OS Drives: 2 x 80GB Intel DC S3500 Series MLC (6Gb/s, 0.3 DWPD) 2.5" SATA SSDs

  • Drive Controller: LSI 9271-8i (8-Port Internal) 6Gb/s SAS RAID with Cache Vault Module

  • Power Supply: Redundant 740W Power Supplies, 80 PLUS Platinum Certified

  • OS: Current CentOS Distribution

  • Cluster Management: Bright Cluster Manager Advanced Edition - 1 Year Maintenance and Support
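
The 7.2 TB figure above is the raw capacity of the eight 900 GB data drives; usable capacity depends on how the array behind the LSI controller is configured, which is not specified here. A minimal sketch of the arithmetic, assuming hypothetical RAID 5 or RAID 6 layouts:

```python
# Usable-capacity sketch for the head node's 8 x 900 GB SAS array.
# The RAID levels below are assumptions for illustration; the grant
# materials only state that the drives sit behind an 8-port LSI RAID card.

DRIVES = 8
DRIVE_GB = 900  # decimal gigabytes, as marketed

def usable_gb(drives: int, drive_gb: int, parity_drives: int) -> int:
    """Capacity remaining after dedicating `parity_drives` worth of space to parity."""
    return (drives - parity_drives) * drive_gb

print(f"Raw:    {DRIVES * DRIVE_GB / 1000:.1f} TB")               # 7.2 TB
print(f"RAID 5: {usable_gb(DRIVES, DRIVE_GB, 1) / 1000:.1f} TB")  # 6.3 TB
print(f"RAID 6: {usable_gb(DRIVES, DRIVE_GB, 2) / 1000:.1f} TB")  # 5.4 TB
```

Filesystem overhead and the gap between decimal and binary units will further reduce the capacity actually reported to users.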

One 4U four-node compute / GPU system, each node featuring 2 Intel Xeon E5-2680v2 processors, 128 GB of Kingston DDR3-1600 RAM, 2 NVIDIA Tesla K40m GPU accelerators, and 2 x 400GB Intel DC S3700 enterprise SATA SSDs. Network and connectivity are provided by Mellanox ConnectX-3 FDR InfiniBand network adapters and integrated Intel i350 gigabit Ethernet controllers in each node. Cluster management and job submission are provided by Bright Cluster Manager running on the current version of the CentOS distribution. A tally of the combined cluster resources follows the parts list below.

  • CPU: 2 x Intel Xeon E5-2680v2, 2.8 GHz (10-Core, 115W)

  • RAM: 128GB (8 x 16GB DDR3-1600 Registered ECC DIMMs)

  • Integrated NIC: Intel Dual-Port i350 Gigabit Ethernet Controller

  • InfiniBand: Mellanox Single-Port ConnectX-3 FDR InfiniBand Network Adapter

  • Management: Integrated IPMI 2.0 & KVM with Dedicated LAN

  • GPU: 2 x NVIDIA Tesla K40m GPU Accelerators

  • Hot-Swap Drives: 2 x 400GB Intel DC S3700 Series HET-MLC (6Gb/s, 10 DWPD) 2.5" SATA SSDs

  • OS: Current CentOS Distribution

  • Cluster Management: Bright Cluster Manager Advanced Edition with 1 Year Maintenance and Support
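
Taken together, the head node and the four compute nodes define the resource envelope applicants can plan against. Below is a quick tally of the totals from the parts lists above; the per-GPU peak figure is an assumption based on commonly quoted Tesla K40 specifications, not something stated in the grant materials:

```python
# Aggregate resources of the awarded cluster, tallied from the parts lists above.

CORES_PER_CPU = 10         # Intel Xeon E5-2680v2
CPUS_PER_NODE = 2
RAM_PER_NODE_GB = 128
COMPUTE_NODES = 4
GPUS_PER_COMPUTE_NODE = 2  # NVIDIA Tesla K40m

# Assumption, not from the grant materials: ~1.43 TFLOPS double precision
# is the commonly quoted peak for a Tesla K40 with GPU Boost enabled.
K40M_PEAK_DP_TFLOPS = 1.43

nodes = COMPUTE_NODES + 1  # four compute nodes plus the head node
cores = nodes * CPUS_PER_NODE * CORES_PER_CPU
ram_gb = nodes * RAM_PER_NODE_GB
gpus = COMPUTE_NODES * GPUS_PER_COMPUTE_NODE

print(f"Nodes: {nodes}, CPU cores: {cores}, RAM: {ram_gb} GB, GPUs: {gpus}")
print(f"Approx. GPU peak: {gpus * K40M_PEAK_DP_TFLOPS:.1f} TFLOPS double precision")
# -> Nodes: 5, CPU cores: 100, RAM: 640 GB, GPUs: 8
# -> Approx. GPU peak: 11.4 TFLOPS double precision
```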

One 24U APC NetShelter SX standard rack enclosure, featuring a 1U Mellanox 18-port SwitchX-2 FDR InfiniBand switch for communications and a 1U HP ProCurve gigabit Ethernet switch for management. Metered PDUs and FDR InfiniBand and Cat6a Ethernet cabling are provided. A bandwidth sketch for the FDR fabric follows the parts list below.

  • Rack: APC NetShelter SX 24U Standard Rack Enclosure, 600mm (W) x 1070mm (D)

  • InfiniBand: Mellanox 18-Port SwitchX-2 FDR InfiniBand Unmanaged Switch with 1 Year Silver Support

  • Ethernet: HP 48-Port ProCurve 1GbE Managed Switch

  • Power Distribution: APC Metered Rack PDU, 20A/120V

  • Interconnects and Cabling:

    • Mellanox FDR InfiniBand Passive Copper Cables

    • Cat6a Ethernet Networking Cables
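
For applicants estimating communication costs, the headline rate of the FDR InfiniBand fabric follows from standard lane arithmetic. A short sketch; the per-lane signaling rate and 64b/66b encoding figures are standard InfiniBand FDR parameters rather than numbers taken from this configuration:

```python
# Back-of-the-envelope bandwidth for a 4x FDR InfiniBand link, such as the
# links between the nodes and the SwitchX-2 switch listed above.

LANES = 4                 # a standard 4x link
LANE_RATE_GBPS = 14.0625  # FDR signaling rate per lane
ENCODING = 64 / 66        # 64b/66b line encoding efficiency

signaling = LANES * LANE_RATE_GBPS
effective = signaling * ENCODING

print(f"Signaling rate:      {signaling:.2f} Gb/s")  # ~56.25 Gb/s, marketed as 56 Gb/s
print(f"Effective data rate: {effective:.1f} Gb/s (~{effective / 8:.1f} GB/s per direction)")
# -> ~54.5 Gb/s, roughly 6.8 GB/s before protocol overhead
```

Since the gigabit Ethernet switch carries management traffic, MPI and storage traffic have the InfiniBand fabric to themselves.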