National Center for Computational Sciences

Rolls-Royce Rapidly Powers Sustainable Aviation with Ansys and Intel Technologies

Retrieved on: 
Wednesday, June 14, 2023

PITTSBURGH, June 14, 2023 /PRNewswire/ -- Ansys (NASDAQ: ANSS) collaborated with Rolls-Royce and Intel to cut the simulation time of the thermo-mechanical model of Rolls-Royce's gas-turbine engine from more than 1,000 hours to less than 10 hours, saving energy and development costs. The collaboration was also supported by computing resources at the Oak Ridge Leadership Computing Facility, by HPE, and by researchers at the National Center for Supercomputing Applications (NCSA).

Key Points: 
  • The advanced technology from Ansys and Intel also supports digital research and development (R&D), which incorporates simulation and digital twins to improve engine design for more sustainable, climate-neutral solutions for drive, propulsion, and power generation.
  • "We believe cutting-edge technologies from Ansys and Intel will enable us to develop smarter, cleaner, and safer engines to power a more sustainable future for aviation while also reducing our operational carbon footprint."
  • "We are confident that Ansys' simulation portfolio and Intel's compute power will equip Rolls-Royce engineers to positively impact the future of aviation."
  • Visit Ansys at the 2023 Paris Air Show in France from June 19-25 to learn more about simulation's impact across the aviation industry.

Green AI Cloud and Cerebras Systems Bring Industry-Leading AI Performance and Sustainability to Europe

Retrieved on: 
Wednesday, December 14, 2022

Cerebras Systems, the pioneer in high-performance artificial intelligence (AI) compute, and Green AI Cloud, the most sustainable supercompute platform in Europe, today announced the availability of Cerebras Cloud at Green AI.

Key Points: 
  • A cloud provider based in the EU, such as Green AI Cloud, enables customers across the EU to benefit from Cerebras' industry-leading AI compute while staying within the EU's data privacy framework.
  • As the leader in energy-efficient AI compute, Cerebras saw partnering with Green AI Cloud to deliver AI compute in Europe as an obvious choice.
  • Green AI Cloud is a European cloud service provider offering AI supercompute for the largest AI models available.

Cerebras Wafer-Scale Cluster Brings Push-Button Ease and Linear Performance Scaling to Large Language Models

Retrieved on: 
Wednesday, September 14, 2022

The key to the new Cerebras Wafer-Scale Cluster is its exclusive use of data parallelism. Data parallelism is the preferred approach for all AI work. However, it requires that all the calculations, including the largest matrix multiplications of the largest layer, fit on a single device, and that all the parameters fit in the device’s memory. Only the CS-2 -- and not graphics processing units -- meets both requirements for LLMs.
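
A minimal, framework-free sketch of that scheme in NumPy follows. It illustrates the general data-parallel technique rather than Cerebras' implementation, and the linear model, batch size, and learning rate are made up for the example. Each "device" holds a full copy of the parameters, computes gradients on its own shard of the batch, and the gradients are then averaged so every replica applies the same update.

    import numpy as np

    rng = np.random.default_rng(0)
    n_devices, batch, d_in, d_out = 4, 32, 8, 1   # hypothetical sizes

    # Full model parameters, replicated on every device: this is the
    # requirement noted above -- each device must hold all parameters
    # and run the largest layer on its own.
    W = rng.normal(size=(d_in, d_out))

    # Global batch of training data, split evenly across the devices.
    X = rng.normal(size=(batch, d_in))
    y = rng.normal(size=(batch, d_out))
    X_shards, y_shards = np.split(X, n_devices), np.split(y, n_devices)

    # Each device computes gradients on its shard (linear model, MSE loss).
    local_grads = []
    for Xs, ys in zip(X_shards, y_shards):
        pred = Xs @ W
        local_grads.append(2 * Xs.T @ (pred - ys) / len(Xs))

    # "All-reduce": average the gradients so every replica stays in sync.
    W -= 0.01 * np.mean(local_grads, axis=0)

The averaged gradient equals the gradient of the whole batch computed on one device, which is why data parallelism scales so cleanly once each device can hold the full model.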

Key Points: 
  • With a Wafer-Scale Cluster, users can distribute even the largest language models from a Jupyter notebook running on a laptop with just a few keystrokes.
  • Large language models (LLMs) are transforming entire industries across healthcare and life sciences, energy, financial services, transportation, entertainment, and more.
  • However, training large models on traditional hardware is challenging and time-consuming, and has been accomplished successfully by only a few organizations.
  • Instead, Cerebras Wafer-Scale Clusters deliver push-button allocation of work to compute and linear performance scaling from a single CS-2 up to 192 CS-2 systems.

Cerebras Systems Enables GPU-Impossible™ Long Sequence Lengths Improving Accuracy in Natural Language Processing Models

Retrieved on: 
Wednesday, August 31, 2022

Customers can now rapidly train Transformer-style natural language AI models with 20x longer sequences than is possible using traditional computer hardware.
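
The memory pressure behind that claim is visible in the attention step at the heart of every Transformer. The NumPy sketch below is a generic single-head self-attention with made-up dimensions, not Cerebras' code: every token attends to every other token, so the score matrix is seq_len x seq_len, and attention memory grows quadratically; a 20x longer sequence needs roughly 400x more of it per layer.

    import numpy as np

    def self_attention(X):
        """Single-head self-attention. X: (seq_len, d_model) embeddings."""
        seq_len, d = X.shape
        rng = np.random.default_rng(0)
        # Random projections stand in for learned query/key/value weights.
        Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        # Every token scores every other token: a (seq_len, seq_len) matrix.
        scores = Q @ K.T / np.sqrt(d)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        return weights @ V

    out_short = self_attention(np.ones((128, 64)))   # 128 x 128 scores
    out_long = self_attention(np.ones((2560, 64)))   # 20x the length:
    # 2560 x 2560 scores, about 400x the attention memory of the short run.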

Key Points: 
  • Long sequence lengths enable an NLP model to understand a given word within a larger, broader context.
  • By vastly enlarging the context (the sequence of words within which the target word is understood), Cerebras enables NLP models to demonstrate a more sophisticated understanding of language.
  • Training large models with massive data sets and long sequence lengths is an area in which the Cerebras CS-2 system, powered by the Wafer-Scale Engine (WSE-2), excels.

Computer History Museum Honors Cerebras Systems with New Display for Wafer-Scale Engine

Retrieved on: 
Wednesday, August 3, 2022

Cerebras Systems, the pioneer in accelerating artificial intelligence (AI) compute, and the Computer History Museum (CHM), the leading institution decoding technology, from its computing past and digital present to its future impact on humanity, today unveiled a new display featuring the Cerebras Wafer-Scale Engine (WSE).

Key Points: 
  • "It is the honor of a lifetime to be accepted into the Computer History Museum's world-renowned collection," said Andrew Feldman, CEO and co-founder of Cerebras Systems.
  • For more information on the new WSE display at the Computer History Museum, please tune into a livestream conversation on Wednesday, August 3 at 2:30 pm PT with Cerebras Systems CEO Andrew Feldman and Computer History Museum President & CEO Dan'l Lewin at https://www.youtube.com/computerhistory/live
  • Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types.

Cerebras Systems Sets Record for Largest AI Models Ever Trained on a Single Device

Retrieved on: 
Wednesday, June 22, 2022

By enabling a single CS-2 to train these models, Cerebras reduces the system engineering time necessary to run large natural language processing (NLP) models from months to minutes.

Key Points: 
  • The Cerebras CS-2 is a critical component that allows GSK to train language models using biological datasets at a scale and size previously unattainable.
  • These foundational models form the basis of many of our AI systems and play a vital role in the discovery of transformational medicines.
  • Cerebras' ability to bring large language models to the masses with cost-efficient, easy access opens up an exciting new era in AI.

Leading Supercomputer Sites Choose Cerebras for AI Acceleration

Retrieved on: 
Tuesday, May 31, 2022

"At Cerebras Systems, our goal is to revolutionize compute," said Andrew Feldman, CEO and co-founder of Cerebras Systems.

Key Points: 
  • LRZ is set to accelerate innovation and scientific discovery in Germany with the CS-2 in its forthcoming AI supercomputer.
  • Coming online this summer, the new supercomputer will enable Germany's researchers to bolster scientific research and innovation with AI.
  • PSC also doubled its AI capacity to 1.7 million AI cores with two CS-2 systems, powering the center's Neocortex supercomputer for high-performance AI.

NCSA Deploys Cerebras CS-2 in New HOLL-I Supercomputer for Large-Scale Artificial Intelligence

Retrieved on: 
Tuesday, May 31, 2022

“We’re thrilled to have the Cerebras CS-2 system up and running in our Center,” said Dr. Volodymyr Kindratenko, Director of the Center for Artificial Intelligence Innovation at NCSA. “This system is unique in the AI computing space in that we will have multiple clusters at NCSA that address the various levels of AI and machine learning needs -- Delta and HAL, our NVIDIA DGX, and now HOLL-I, consisting of the CS-2, as the crown jewel of our capabilities. Each system is at the correct scale for the various types of usage, and all have access to our shared center-wide TAIGA filesystem, eliminating delays and slowdowns caused by data migration as users move up the ladder of more intense machine learning computation.”

Key Points: 
  • Cerebras Systems, the pioneer in high-performance artificial intelligence (AI) computing, today announced that the National Center for Supercomputing Applications (NCSA) has deployed the Cerebras CS-2 system in its HOLL-I supercomputer.
  • It is powered by the largest processor ever built: the Cerebras Wafer-Scale Engine 2 (WSE-2).
  • "We founded Cerebras Systems with the audacious goal to forever change the AI compute landscape," said Andrew Feldman, CEO and Co-Founder, Cerebras Systems.

AMD Processors Accelerating Performance of Top Supercomputers Worldwide

Retrieved on: 
Tuesday, November 16, 2021

Finally, AMD EPYC 7003 series processors, which launched eight months ago, are utilized by 17 of the 75 AMD-powered supercomputers on the list, demonstrating the rapid adoption of the latest generation of EPYC processors.

Key Points: 
  • AMD is engaged broadly across the HPC industry to deliver the performance and efficiency of AMD EPYC and AMD Instinct products, along with the ROCm open ecosystem, to speed research.
  • The first partition is based on next-generation AMD EPYC processors, codenamed Genoa, and the second partition is based on 3rd Gen AMD EPYC processors and AMD Instinct MI250X accelerators.
  • Oak Ridge National Laboratory's Frontier exascale computer is powered by optimized 3rd Gen AMD EPYC processors and AMD Instinct MI250X accelerators.