Texas Advanced Computing Center

New MLPerf Training and HPC Benchmark Results Showcase 49X Performance Gains in 5 Years

Retrieved on: 
Wednesday, November 8, 2023

Today, MLCommons® announced new results from two industry-standard MLPerf™ benchmark suites:

Key Points: 
  • The MLPerf Training v3.1 suite, which measures the performance of training machine learning models.
  • The MLPerf HPC (High Performance Computing) v3.0 benchmark suite, which is targeted at supercomputers and measures the performance of training machine learning models for scientific applications and data.
  • The MLPerf Training benchmark suite comprises full system tests that stress machine learning models, software, and hardware for a broad range of applications; a short sketch of how training performance is measured follows this list.
  • To view the results for MLPerf Training v3.1 and MLPerf HPC v3.0 and find additional information about the benchmarks, please visit the Training and HPC benchmark pages.
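
MLPerf Training scores a system by the wall-clock time it takes to train a reference model to a fixed quality target. The snippet below is only a minimal illustration of that time-to-target idea, not the MLCommons reference implementation; the logistic-regression model, synthetic data, and 0.95 accuracy target are placeholder assumptions.

    # Illustrative only: MLPerf Training reports the wall-clock time needed to
    # train a model to a fixed quality target. This toy example times a
    # logistic-regression fit on synthetic data until it reaches 95% accuracy;
    # the model, data, and target are placeholders, not an official benchmark.
    import time
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 20))                   # synthetic features
    y = (X @ rng.normal(size=20) > 0).astype(float)   # linearly separable labels

    w = np.zeros(20)
    lr = 0.1
    target_accuracy = 0.95

    start = time.perf_counter()
    for step in range(10_000):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted probabilities
        accuracy = ((p > 0.5) == y).mean()
        if accuracy >= target_accuracy:          # stop once the quality target is met
            break
        w -= lr * (X.T @ (p - y) / len(y))       # full-batch gradient step

    elapsed = time.perf_counter() - start
    print(f"time to target: {elapsed:.3f}s ({step + 1} steps, accuracy {accuracy:.3f})")

In the actual suite, the reference models and quality targets are fixed by MLCommons so that submissions from different systems are directly comparable.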

Tokyo Tech, Tohoku University, Fujitsu, and RIKEN start collaboration to develop distributed training of Large Language Models

Retrieved on: 
Monday, May 22, 2023

Tokyo Tech, Tohoku University, Fujitsu, and RIKEN are undertaking a joint initiative focused on research and development of distributed training for large language models (LLMs).

Key Points: 
  • Tokyo Tech, Tohoku University, Fujitsu, and RIKEN are undertaking a joint initiative focused on research and development of distributed training for large language models (LLMs).
  • The technology used in this initiative will allow the organizations to train large language models efficiently in the massively parallel computing environment of the supercomputer Fugaku (a minimal data-parallel training sketch follows this list).
  • Leveraging insights from deep learning and Japanese natural language processing research developed at Tohoku University, the partners will construct large-scale models.
  • Distributed Training of Large Language Models on Fugaku (Project Number: hp230254).
    Here, large language models are neural networks with hundreds of millions to billions of parameters that have been pre-trained on large amounts of data.
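
For context on what distributed training means here: in data-parallel training, every process holds a replica of the model, consumes a different shard of the data, and the per-step gradients are averaged across processes so the replicas stay in sync. The sketch below expresses that idea with PyTorch DistributedDataParallel purely as an illustration; it is not the software stack the Fugaku project will use, and the tiny linear model, random batches, and "gloo" backend are placeholder assumptions.

    # Minimal data-parallel training sketch (illustrative; not the Fugaku stack).
    # Launch with e.g.:  torchrun --nproc_per_node=4 ddp_sketch.py
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun supplies rank and world size via environment variables.
        dist.init_process_group(backend="gloo")   # use "nccl" on GPU clusters
        torch.manual_seed(dist.get_rank())        # each rank draws different data

        model = nn.Linear(512, 512)               # placeholder for a transformer LM
        ddp_model = DDP(model)                    # wraps the replica; syncs gradients
        optimizer = torch.optim.SGD(ddp_model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        for step in range(10):
            inputs = torch.randn(8, 512)          # this rank's shard of the batch
            targets = torch.randn(8, 512)
            optimizer.zero_grad()
            loss = loss_fn(ddp_model(inputs), targets)
            loss.backward()                       # gradients all-reduced across ranks
            optimizer.step()                      # every replica applies the same update

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Training at the scale described above typically combines this with other parallelism strategies (for example, model or pipeline parallelism), but gradient averaging across replicas is the core data-parallel idea.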

TACC's New Stampede3 Advances NSF Supercomputing Ecosystem

Retrieved on: 
Monday, July 24, 2023

AUSTIN, Texas, July 24, 2023 /PRNewswire/ -- The Texas Advanced Computing Center (TACC) today announced Stampede3, a powerful new Dell Technologies (NYSE: DELL) and Intel-based supercomputer that will enable groundbreaking open science research projects in the U.S. while leveraging the nation's previous high performance computing investments.

Key Points: 
  • For over a decade, the Stampede systems — Stampede (2012) and Stampede2 (2017) — have been flagships in the U.S. National Science Foundation's (NSF) XSEDE/ACCESS scientific supercomputing ecosystem.
  • Made possible by a $10 million award for new computer hardware from the NSF, Stampede3 will be the newest strategic resource for the nation's open science community when it enters full production in early 2024.
  • "In addition, the transition from Stampede2 to Stampede3 will be transparent to users easing the shift to a new system.
  • In addition, Stampede3 will be the only system in the NSF ACCESS environment to integrate the new Intel Max Series GPUs.

Bionano to Accelerate Data Processing Solution for Optical Genome Mapping Workflow with NVIDIA

Retrieved on: 
Thursday, January 12, 2023

SAN DIEGO, Jan. 12, 2023 (GLOBE NEWSWIRE) -- Bionano Genomics, Inc. (Nasdaq: BNGO) today announced a collaboration with NVIDIA to develop an acceleration platform for use in Bionano’s optical genome mapping (OGM) workflow.

Key Points: 
  • SAN DIEGO, Jan. 12, 2023 (GLOBE NEWSWIRE) -- Bionano Genomics, Inc. (Nasdaq: BNGO) today announced a collaboration with NVIDIA to develop an acceleration platform for use in Bionano’s optical genome mapping (OGM) workflow.
  • This collaboration is expected to significantly improve data processing speed while reducing time and cost associated with secondary analysis of OGM data.
  • The computation platform is designed to enable a small laboratory and information technology footprint, which would allow for rapid decentralized deployment.
  • Bionano will preview the solution with NVIDIA at the Advances in Genome Biology and Technology (AGBT) General Meeting, which will take place February 6-9, 2023, in Hollywood, Florida.

NVIDIA Hopper in Full Production

Retrieved on: 
Tuesday, September 20, 2022

SANTA CLARA, Calif., Sept. 20, 2022 (GLOBE NEWSWIRE) -- GTC—NVIDIA today announced that the NVIDIA H100 Tensor Core GPU is in full production, with global tech partners planning in October to roll out the first wave of products and services based on the groundbreaking NVIDIA Hopper™ architecture.

Key Points: 
  • A five-year license for the NVIDIA AI Enterprise software suite is now included with H100 for mainstream servers.
  • For customers who want to immediately try the new technology, NVIDIA announced that H100 on Dell PowerEdge servers is now available on NVIDIA LaunchPad, which provides free hands-on labs, giving companies access to the latest hardware and NVIDIA AI software.
  • NVIDIA Base Command and NVIDIA AI Enterprise software power every DGX system, enabling deployments from a single node to an NVIDIA DGX SuperPOD supporting advanced AI development of large language models and other massive workloads.
  • To learn more about NVIDIA Hopper and H100, watch Jensen Huang's GTC keynote.

Applied Materials Chief Technology Officer Dr. Om Nalamasu Receives IEEE Frederik Philips Award

Retrieved on: 
Friday, August 12, 2022

Dr. Nalamasu received the award for leadership in research and development of semiconductor materials, processes and equipment.

Key Points: 
  • Dr. Nalamasu received the award for leadership in research and development of semiconductor materials, processes and equipment.
  • Dr. Nalamasu is a world-renowned expert in materials science and has made seminal contributions to the fields of optical lithography and polymeric materials science and technology.
  • “I am honored and humbled to follow in the footsteps of the many great technologists and engineers who have received this award,” said Om Nalamasu.
  • Applied Materials, Inc. (Nasdaq: AMAT) is the leader in materials engineering solutions used to produce virtually every new chip and advanced display in the world.

GigaIO Announces Series of Composability Appliances Powered by AMD, First Edition Purpose-Built for Higher Education and Launched at ISC

Retrieved on: 
Tuesday, May 31, 2022

The GigaIO Composability Appliance: University Edition, powered by AMD, is a flexible environment for heterogeneous compute designed for Higher Education that can easily accommodate the different workloads required for teaching, faculty research, and graduate student research.

Key Points: 
  • The GigaIO Composability Appliance: University Edition, powered by AMD, is a flexible environment for heterogeneous compute designed for Higher Education that can easily accommodate the different workloads required for teaching, faculty research, and graduate student research.
  • According to GigaIO, AMD is the perfect partner for this venture because the two companies share a commitment to creating an open, industry standards-based platform.
  • GigaIO Composability Appliances are designed to accommodate a variety of accelerator types and brands and provide a truly vendor-agnostic environment.
  • The GigaIO Composability Appliance: University Edition, powered by AMD, is offered in three configurations and is available now.

GigaIO FabreX for Composable Infrastructure Now Supported Natively in NVIDIA Bright Cluster Manager 9.2

Retrieved on: 
Thursday, May 19, 2022

GigaIO, provider of the world's only open rack-scale computing platform for advanced scale workflows, today announced that GigaIO FabreX™ for composable infrastructure is now natively supported in NVIDIA Bright Cluster Manager 9.2.

Key Points: 
  • GigaIO, provider of the world's only open rack-scale computing platform for advanced scale workflows, today announced that GigaIO FabreX™ for composable infrastructure is now natively supported in NVIDIA Bright Cluster Manager 9.2.
  • With native support for Bright Cluster Manager 9.2, GigaIO FabreX customers can now compose and manage their compute systems to suit the needs of unique workloads from a single management interface.
  • Version 9.2 extends the goals of eliminating complexity and enabling flexibility by adding built-in support for composable infrastructure using GigaIO FabreX, where nodes can now be composed using Bright Cluster Manager Shell or BrightView.
  • Learn more about the native integration of FabreX for composable infrastructure in NVIDIA Bright Cluster Manager 9.2, available now.

GRC Secures $28 Million C Series Investment Led by SK Lubricants

Retrieved on: 
Thursday, March 31, 2022

GRC (Green Revolution Cooling), the leader in single-phase immersion cooling for data centers, today announced it has secured a $28 million C Series investment led by South Korea-based SK Lubricants.

Key Points: 
  • GRC (Green Revolution Cooling), the leader in single-phase immersion cooling for data centers, today announced it has secured a $28 million C Series investment led by South Korea-based SK Lubricants.
  • This most recent equity investment brings the company's total funding to date to $43 million.
  • The new funding will also help GRC to continue to expand its international footprint and global headcount.
  • Last year, GRC secured the Data Centre World Innovation Product of the Year Award.

GigaIO Awarded Lonestar6 Contract in TACC’s First Bid for Composable Disaggregated Infrastructure

Retrieved on: 
Tuesday, March 22, 2022

Lonestar6 is a 600-node system utilizing Milan-based AMD servers from Dell Technologies and A100 GPUs from NVIDIA, and is the first platform at TACC to incorporate Composable Disaggregated Infrastructure (CDI) in order to benefit from decentralized server infrastructure.

Key Points: 
  • Lonestar6 is a 600-node system utilizing Milan-based AMD servers from Dell Technologies and A100 GPUs from NVIDIA, and is the first platform at TACC to incorporate Composable Disaggregated Infrastructure (CDI) in order to benefit from decentralized server infrastructure.
  • GigaIO's composable infrastructure platform pairs the flexibility of CDI with the agility of the cloud, allowing researchers to build completely customized and otherwise impossible servers for their AI and HPC workflows.
  • GigaIO is providing Lonestar6's fabric infrastructure, including switches, cards, cables, JBOGs (Just a Bunch Of GPUs), and composition software.
  • Tens of thousands of scientists and students use TACC's supercomputers each year to answer complex questions in every field of science.