The Research Cluster Grant

The Silicon Mechanics Research Cluster Grant is a program designed to jumpstart research efforts at institutions where access to high-performance computing is limited, outdated, or unavailable. Each year, the awarded cluster drives collaboration within institutions through access to the latest high-performance computing and GPU technologies.

Grant Details

Background
Rules
Program Q&A
Application Questions
Sample Cluster Configuration

Instructions PDF

Background

For the sixth consecutive year, Silicon Mechanics is pleased to announce our sponsorship of a unique grant opportunity. One institution will be awarded a complete high-performance computing cluster using the latest Intel processors and NVIDIA GPU accelerators. This grant program is open to all US and Canadian qualified post-secondary institutions, university-affiliated research institutions, non-profit research institutions, and researchers at federal labs with university affiliation. We are very interested in applications that show the most appropriate use of the technologies to be awarded.

We hope you will consider this opportunity to support your research by submitting a proposal, or by forwarding this to others you know who could benefit from this grant.

If you have any questions, please do not hesitate to contact us at research-grant@siliconmechanics.com.


Rules

1. Open to all US and Canadian qualified post-secondary institutions, university-affiliated research institutions, non-profit research institutions, and researchers at federal labs with university affiliations.

2. Submissions will be accepted through 11:59pm Pacific on March 1st, 2017.

3. The grant recipients will be announced on or before April 3rd, 2017.

4. Submissions will be reviewed for merit and related impacts.

5. Applications should be no longer than 5 pages and must address the application questions.

6. The award is made at the sole discretion of Silicon Mechanics and its partner companies.

7. The cluster is covered by standard warranties from Silicon Mechanics.

8. Neither Silicon Mechanics nor its partners will be responsible for any costs incurred by the awarded organization as part of installing, operating, or managing the cluster.

9. Neither Silicon Mechanics nor its partners make any claim to any research conducted on, or intellectual property generated or deployed on, the awarded cluster, or used in the application submission.


Program Q&A

Q: What is being awarded?
A: The award is for one high-performance computing cluster, including all systems and networking in a single rack enclosure. Specifications are listed below.

Q: Does the award include any monetary support?
A: No, the grant is only for the award of the cluster.

Q: Is this award for the use or loan of a cluster, or does my institution take possession of the cluster?
A: The awarded institutions would receive delivery of the cluster and own it.

Q: Does the proposal have to come from an individual, or can it be submitted on behalf of a department or institution?
A: While the proposal can come from one person, we strongly encourage collaboration, whether within and across departments of a single institution or across multiple institutions, so that the technology can be accessed by a larger research audience.

Q: Is the hardware configuration already determined or can I make changes?
A: As listed below, the configuration is pre-determined. That said, we will work with the receiving institution should specific infrastructure requirements call for changes (e.g., power specifications). The configuration may also change based on revisions to technical specifications by the manufacturers and on component availability. The delivered configuration will be equal to or higher in specification than what is listed.

Q: What areas of research are of interest to the review committee?
A: No single area of research is the focus of this award process. We do not judge the research itself; for example, we are not judging whether genomic research is more important than environmental research. We are primarily interested in finding the most innovative uses of the awarded technology, and in identifying who could take the most advantage of this newer technology for their research.

Q: Should educational aspects/impacts also be outlined in the proposal or do you care primarily about research and science?
A: We are very much interested in "educational aspects/impacts", so please do include those.

Q: We typically submit references as part of a grant submission. Will those need to be included in the 5 pages we submit?
A: References are not requested or required. We prefer that references not be included as they will not be taken into account in the review process.

Q: What support is required from my institution?
A: Your proposal must state that there is support within your institution for this grant, including a facility to house the proposed cluster and technical resources available for managing it. Also, as part of the award, your name and institution may be used for reasonable marketing activities including any (or all) of the following: interviews, photographs, press releases, exhibit participation, case studies, and brief descriptions of the research being conducted.

Q: What is the deadline for submitting a proposal?
A: March 1st, 2017, 11:59pm Pacific.

Q: When will the award be made?
A: On or before April 3rd, 2017.

Q: How do we submit our proposal?
A: Apply here. We have created a handy Instructions PDF to help.

Q: If we have any questions, how do we submit those?
A: Send your questions to research-grant@siliconmechanics.com.


Application Questions

1. For the PI and any Co-PIs, include name, email address, telephone number, position, department(s), and institution(s) for which you will be conducting this research.

2. Please describe the research that you plan to undertake with this cluster.

3. How important is the proposed research activity to advancing knowledge and understanding within its own field or across different fields?

4. To what extent does the research take advantage of the technologies to be awarded? Be specific by describing the technologies (see the proposed cluster configuration below) and how they are applicable to improving your research capabilities.

5. The goal of the Research Cluster Grant is to provide the latest HPC technology to the institution(s) with the greatest need. Please describe your need and why your institution should be awarded this grant. Detail how any departments, research groups, and/or other institutions will collaboratively utilize the cluster.

6. What IT resources do you currently have for research computing? That is, do you currently have access to any systems or clusters, either on campus or remotely (e.g. Internet2)? Describe the personnel who will be responsible for managing the cluster.

7. Will there be any student access to the cluster? If so, at what level and for what purposes?


Sample Cluster Configuration

Note: the following configuration is what was awarded in 2016. The cluster to be awarded in 2017 will be similar, but not identical.

Summary:

  • Head node with storage

  • 4 GPU / compute nodes

  • Ethernet & InfiniBand Networking

  • Rack with Power Distribution

One head node with storage, featuring 2 Intel Xeon E5-2650v3 processors, 256GB of DDR4-2400 RAM, 8 hot-swap 6TB SATA hard drives, and 2 x 800GB NVMe solid state OS drives. Network and connectivity provided by a Mellanox ConnectX-4 EDR InfiniBand network adapter and an integrated Intel X540 10 gigabit Ethernet controller. Cluster management provided by Bright Cluster Manager running on the current version of the CentOS distribution. A brief hardware sanity-check sketch follows the specification list below.

  • CPU: 2 x Intel Xeon E5-2650v3, 2.3 GHz (10-Core, 105W)

  • RAM: 256GB (8 x 32GB DDR4-2400 Registered ECC 1.2V LRDIMMs)

  • Integrated NIC: Intel X540 Dual-Port 10 Gigabit Ethernet Controller

  • InfiniBand: Mellanox Single-Port ConnectX-4 EDR InfiniBand and 100GbE Network Adapter

  • Management: Integrated IPMI 2.0 with Virtual Media over LAN and KVM-over-LAN Support

  • Hot-Swap Drives: 8 x 6TB Seagate Enterprise (6Gb/s, 7.2K RPM, 256MB Cache, 512e) 3.5” SATA Hard Drives -OR- 8 x 6TB Western Digital RE (6Gb/s, 7.2K RPM, 128MB Cache, 512n) 3.5” SATA Hard Drives

  • OS Drives: 2 x 800GB Intel DC P3700 Series HET-MLC (4GB/s, NVMe, 10 DWPD) 2.5” Solid State Drives -OR- 2 x 800GB HGST Ultrastar SN100 MLC (4GB/s, NVMe, 3 DWPD) 2.5” Solid State Drives

  • Drive Controller: LSI 9361-8i (8-Port Internal) 12Gb/s SAS/SATA RAID with CacheVault Module

  • Power Supply: Redundant 1000W Power Supplies, 80 PLUS Titanium Certified

  • OS: Current CentOS Distribution

  • Cluster Management: Bright Cluster Manager Advanced Edition with 1 Year Maintenance and Support
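
For illustration, the following is a minimal sanity-check sketch a receiving institution might run on the head node after delivery. It assumes a stock CentOS install with Python 3; the script name, expected values, and mount point are assumptions drawn from the 2016 sample specification above, not part of the grant.

    #!/usr/bin/env python3
    # check_head_node.py -- illustrative check against the 2016 sample
    # head-node specification (expected values are assumptions).
    import os
    import shutil

    EXPECTED_RAM_GB = 256        # 8 x 32GB DDR4-2400 LRDIMMs
    EXPECTED_LOGICAL_CPUS = 40   # 2 x 10-core E5-2650v3, Hyper-Threading on

    def total_ram_gb():
        # MemTotal in /proc/meminfo is reported in kB.
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) / (1024 * 1024)
        raise RuntimeError("MemTotal not found in /proc/meminfo")

    def main():
        print(f"Logical CPUs: {os.cpu_count()} ({EXPECTED_LOGICAL_CPUS} expected)")
        # Some RAM is reserved by firmware/kernel, so expect slightly less.
        print(f"RAM: {total_ram_gb():.0f} GB (~{EXPECTED_RAM_GB} GB expected)")
        usage = shutil.disk_usage("/")  # storage array may be mounted elsewhere
        print(f"Root filesystem: {usage.total / 1e12:.1f} TB total")

    if __name__ == "__main__":
        main()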

Four 2U GPU compute nodes (8U total), each node featuring 2 Intel Xeon E5-2680v3 processors, 256GB of Micron DDR4-2400 RAM, 2 NVIDIA Tesla K80 GPU accelerators, and 2 x 240GB Micron M510DC MLC SATA SSDs. Network and connectivity provided by Mellanox ConnectX-4 EDR InfiniBand network adapters and integrated Intel X540 10 gigabit Ethernet controllers. Advanced cluster monitoring and resource management provided by Bright Cluster Manager running on the current version of the CentOS distribution. A short GPU visibility check follows the per-node specifications below.

  • CPU: 2 x Intel Xeon E5-2680v3, 2.5 GHz (12-Core, HT, 30MB Cache, 120W) per Node

  • RAM: 256GB (8 x 32GB DDR4-2400 Registered ECC 1.2V LRDIMMs) per Node

  • Integrated NIC: Intel Dual-Port X540 10 Gigabit Ethernet Controller per Node

  • InfiniBand: Mellanox Single-Port ConnectX-4 EDR InfiniBand and 100GbE Network Adapter per Node

  • Management: Integrated IPMI 2.0 with Virtual Media over LAN and KVM-over-LAN Support per Node

  • GPU: 2 x NVIDIA Tesla K80 GPU Accelerators per Node

  • Hot-Swap Drives: 2 x 240GB Micron M510DC MLC (6Gb/s, 2 DWPD) 2.5" SATA SSDs per Node

  • OS: Current CentOS Distribution

  • Cluster Management: Bright Cluster Manager Advanced Edition with 1 Year Maintenance and Support
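
As a sketch of how the accelerators in each compute node might be verified, the fragment below shells out to NVIDIA's nvidia-smi utility (installed with the driver; the query flags shown are standard). Note that each Tesla K80 board carries two GPU chips, so a node with 2 x K80 typically enumerates four devices.

    #!/usr/bin/env python3
    # List GPU devices visible to the NVIDIA driver on this node.
    import subprocess

    def list_gpus():
        # nvidia-smi prints one CSV row per enumerated GPU device.
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=index,name,memory.total",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return [line.strip() for line in result.stdout.splitlines() if line.strip()]

    if __name__ == "__main__":
        gpus = list_gpus()
        for row in gpus:
            print(row)
        print(f"{len(gpus)} GPU device(s) visible")  # 2 x K80 -> typically 4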

One 24U APC NetShelter SX standard rack enclosure, featuring a 1U Mellanox SB7700 36-port Switch-IB EDR InfiniBand switch for data communications and a 1U HPE 1920-24G 24-port gigabit Ethernet switch for management. Metered PDUs and EDR InfiniBand and Ethernet cabling provided. A simple fabric-check sketch follows the list below.

  • Rack: APC NetShelter SX 24U Standard Rack Enclosure, 600mm (W) x 1070mm (D)

  • InfiniBand: Mellanox 36-port SB7700 Switch-IB EDR InfiniBand Switch with 1 Year Silver Support

  • Ethernet: HPE 24-Port 1920-24G 1GbE Switch with 1 Year Foundation Care Support

  • Power Distribution: APC 20A/200-240V Metered Rack PDU

  • Interconnects and Cabling:

    • 5 x Mellanox 1.5m 100Gbps QSFP28 Passive DAC Cables

    • 5 x Cat6a RJ45 Ethernet Networking Cables
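
Finally, a minimal fabric-check sketch across the compute nodes. It assumes an MPI stack and the mpi4py package are available (Bright Cluster Manager commonly provisions an MPI environment, though the grant specification does not promise one), and the node names in the launch command are placeholders.

    #!/usr/bin/env python3
    # check_fabric.py -- gather hostnames from every MPI rank so rank 0
    # can confirm that all nodes respond over the cluster interconnect.
    # Example launch (hypothetical node names):
    #   mpirun -np 4 --host node01,node02,node03,node04 python3 check_fabric.py
    import socket
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    hosts = comm.gather(socket.gethostname(), root=0)

    if comm.Get_rank() == 0:
        print(f"{comm.Get_size()} rank(s) responded:")
        for rank, host in enumerate(hosts):
            print(f"  rank {rank}: {host}")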