Cerebras Systems and Jasper Partner on Pioneering Generative AI Work

Retrieved on: 
Tuesday, November 29, 2022

Jasper, the category-leading AI content platform, and Cerebras Systems, the pioneer in accelerating artificial intelligence (AI) compute, today announced a partnership to accelerate adoption and improve the accuracy of generative AI across enterprise and consumer applications.

Key Points: 
  • Using Cerebras' newly announced Andromeda AI supercomputer, Jasper can train its highly computationally intensive models in a fraction of the time and extend the reach of generative AI models to the masses.
  • With the power of the Cerebras Andromeda supercomputer, Jasper expects to dramatically advance AI work, including training GPT networks to fit AI outputs to all levels of end-user complexity and granularity.
  • Our collaboration with Cerebras accelerates the potential of generative AI, bringing its benefits to our rapidly growing customer base around the globe.

Cerebras Systems and Cirrascale Cloud Services® Introduce Cerebras AI Model Studio to Train GPT-Class Models with 8x Faster Time to Accuracy, at Half the Price of Traditional Cloud Providers

Retrieved on: 
Tuesday, November 29, 2022

Training Large Language Models (LLMs) is challenging and expensive -- multi-billion parameter models require months to train on clusters of GPUs and a team of engineers experienced in distributed programming and hybrid data-model parallelism. It is a multi-million dollar investment that many organizations simply cannot afford.

Key Points: 
  • The Cerebras AI Model Studio offers users the ability to train GPT-class models at half the cost of traditional cloud providers and requires only a few lines of code to get going.
  • The Cerebras AI Model Studio makes this dead simple: just load your dataset and run a script.
  • The Cerebras AI Model Studio offers users cloud access to the Cerebras Wafer-Scale Cluster, which enables GPU-impossible work with first-of-its-kind near-perfect linear scale performance.
  • Cirrascale Cloud Services, Cirrascale and the Cirrascale logo are trademarks or registered trademarks of Cirrascale Cloud Services LLC.
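To see why multi-billion-parameter models demand clusters in the first place, here is a back-of-the-envelope sketch of training memory. It assumes mixed-precision training with the Adam optimizer, a common but not universal setup; the per-parameter byte counts and the 80 GiB device figure are illustrative assumptions, not numbers from the announcement.

```python
import math

def training_bytes_per_param() -> int:
    """Approximate bytes of state held per parameter during training."""
    weights_fp16 = 2   # half-precision working weights
    grads_fp16 = 2     # half-precision gradients
    master_fp32 = 4    # full-precision master copy of the weights
    adam_m_fp32 = 4    # Adam first-moment estimate
    adam_v_fp32 = 4    # Adam second-moment estimate
    return weights_fp16 + grads_fp16 + master_fp32 + adam_m_fp32 + adam_v_fp32

def min_devices(params: float, mem_gib: float = 80.0) -> int:
    """Fewest devices needed just to hold training state (activations ignored)."""
    total_gib = params * training_bytes_per_param() / 2**30
    return math.ceil(total_gib / mem_gib)

# A 20-billion-parameter model carries roughly 300 GiB of weight/optimizer
# state, so it cannot fit on one 80 GiB device and must be sharded.
print(min_devices(20e9))  # → 4
```

Once the state no longer fits on one device, the weights themselves must be partitioned across devices (model parallelism), which is the distributed-programming burden the Model Studio is pitched as removing.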

Cerebras Unveils Andromeda, a 13.5 Million Core AI Supercomputer that Delivers Near-Perfect Linear Scaling for Large Language Models

Retrieved on: 
Monday, November 14, 2022

Near-perfect scaling means that as additional CS-2s are used, training time is reduced in near-perfect proportion. This includes large language models with very large sequence lengths, a task that is impossible to achieve on GPUs. In fact, GPU-impossible work was demonstrated by one of Andromeda's first users, who achieved near-perfect scaling on GPT-J at 2.5 billion and 25 billion parameters with long sequence lengths -- MSL of 10,240. The users attempted the same work on Polaris, a 2,000-GPU Nvidia A100 cluster, and the GPUs were unable to do the work because of GPU memory and memory bandwidth limitations.

Key Points: 
  • It is the only AI supercomputer to ever demonstrate near-perfect linear scaling on large language model workloads relying on simple data parallelism alone.
  • Unlike any known GPU-based cluster, Andromeda delivers near-perfect scaling via simple data parallelism across GPT-class large language models, including GPT-3, GPT-J and GPT-NeoX.
  • Andromeda delivers 13.5 million AI cores and near-perfect linear scaling across the largest language models, without the pain of distributed compute and parallel programming.
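A minimal sketch of what near-perfect linear scaling means in practice: adding systems should multiply throughput, and divide training time, almost exactly. The throughput numbers below are illustrative placeholders, not published Andromeda measurements.

```python
def scaling_efficiency(throughput_1: float, throughput_n: float, n: int) -> float:
    """Measured speedup on n systems divided by the ideal speedup of n."""
    return (throughput_n / throughput_1) / n

# Illustrative training throughputs (samples/sec) at 1, 4, and 16 systems.
measured = {1: 100.0, 4: 396.0, 16: 1568.0}
for n, tput in measured.items():
    print(f"{n:>2} systems: efficiency {scaling_efficiency(measured[1], tput, n):.2f}")
# →  1 systems: efficiency 1.00
# →  4 systems: efficiency 0.99
# → 16 systems: efficiency 0.98
```

An efficiency near 1.0 at every cluster size is the "near-perfect" claim; GPU clusters typically fall well below that as communication overhead grows with node count.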

Cerebras Systems and National Energy Technology Laboratory Set New Milestones for High-Performance, Energy-Efficient Field Equation Modeling Using Simple Python Interface

Retrieved on: 
Thursday, November 10, 2022

While this performance is consistent with hand-optimized assembly codes, the WFA provides an easy-to-use, high-level Python interface that allows users to form and solve field equations effortlessly.

Key Points: 
  • This work demonstrates the fastest known time-to-solution for field equations in computing history at scales up to several billion cells.
  • In the past, field equations have been memory bound, and in distributed systems, they are limited by node-to-node communication bandwidth.
  • NETL is a U.S. Department of Energy national laboratory that drives innovation and delivers technological solutions for an environmentally sustainable and prosperous energy future.
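The WFA's actual Python interface is not shown here, so as an illustrative stand-in, the following is a minimal NumPy sketch of the kind of field-equation workload described: one explicit Jacobi step for the 2D heat equation, a classic memory-bound stencil in which every cell reads its four neighbors each time step.

```python
import numpy as np

def heat_step(u: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """One explicit step of du/dt = alpha * laplacian(u), with dx = dt = 1."""
    out = u.copy()
    out[1:-1, 1:-1] += alpha * (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
        - 4.0 * u[1:-1, 1:-1]
    )
    return out

u = np.zeros((64, 64))
u[32, 32] = 100.0          # point heat source in the center
for _ in range(10):
    u = heat_step(u)       # each step streams the whole grid through memory
print(u[32, 32] < 100.0)   # → True: heat has diffused outward
```

Each step moves the entire grid through memory while performing only a handful of flops per cell, which is why such solvers are bandwidth-bound on conventional hardware; in distributed runs, the node-to-node communication limit the excerpt mentions compounds this.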

Sandia Awards Advanced Memory Technology R&D Contract to Cerebras Systems

Retrieved on: 
Monday, October 17, 2022

Sandia National Laboratories and its partners announced a new project today to investigate the application of Cerebras Systems' Wafer-Scale Engine technology to accelerate advanced simulation and computing applications in support of the nation's stockpile stewardship mission.

Key Points: 
  • The National Nuclear Security Administration's Advanced Simulation and Computing program is sponsoring the work, and Sandia, Lawrence Livermore and Los Alamos national labs will collaborate with Cerebras Systems on the project.
  • The goal of NNSA's Advanced Memory Technology research and development program is to develop technologies for use in future computing system procurements, said ASC program director Thuc Hoang.
  • The Cerebras Wafer-Scale Engine, currently the largest computer chip in the world, was built specifically for artificial intelligence and machine learning work, said Andrew Feldman, founder and CEO of Cerebras Systems.

Cerebras Systems Sets Record for Largest AI Models Ever Trained on a Single Device

Retrieved on: 
Wednesday, June 22, 2022

By enabling a single CS-2 to train these models, Cerebras reduces the system engineering time necessary to run large natural language processing (NLP) models from months to minutes.

Key Points: 
  • The Cerebras CS-2 is a critical component that allows GSK to train language models using biological datasets at a scale and size previously unattainable.
  • These foundational models form the basis of many of our AI systems and play a vital role in the discovery of transformational medicines.
  • Cerebras' ability to bring large language models to the masses with cost-efficient, easy access opens up an exciting new era in AI.

Leading Supercomputer Sites Choose Cerebras for AI Acceleration

Retrieved on: 
Tuesday, May 31, 2022

At Cerebras Systems, our goal is to revolutionize compute, said Andrew Feldman, CEO and co-founder of Cerebras Systems.

Key Points: 
  • LRZ is set to accelerate innovation and scientific discovery in Germany with the CS-2 in its forthcoming AI supercomputer.
  • Coming online this summer, the new supercomputer will enable Germany's researchers to bolster scientific research and innovation with AI.
  • PSC also doubled its AI capacity to 1.7 million AI cores, with two CS-2 systems powering the center's Neocortex supercomputer for high-performance AI.

NCSA Deploys Cerebras CS-2 in New HOLL-I Supercomputer for Large-Scale Artificial Intelligence

Retrieved on: 
Tuesday, May 31, 2022

“We’re thrilled to have the Cerebras CS-2 system up and running in our Center,” said Dr. Volodymyr Kindratenko, Director of the Center for Artificial Intelligence Innovation at NCSA. “This system is unique in the AI computing space in that we will have multiple clusters at NCSA that address the various levels of AI and machine learning needs -- Delta and HAL, our NVIDIA DGX, and now HOLL-I, consisting of the CS-2, as the crown jewel of our capabilities. Each system is at the correct scale for the various types of usage, and all have access to our shared center-wide TAIGA filesystem, eliminating delays and slowdowns caused by data migration as users move up the ladder of more intense machine learning computation.”

Key Points: 
  • Cerebras Systems, the pioneer in high performance artificial intelligence (AI) computing, today announced that the National Center for Supercomputing Applications (NCSA) has deployed the Cerebras CS-2 system in its HOLL-I supercomputer.
  • It is powered by the largest processor ever built, the Cerebras Wafer-Scale Engine 2 (WSE-2).
  • We founded Cerebras Systems with the audacious goal to forever change the AI compute landscape, said Andrew Feldman, CEO and Co-Founder, Cerebras Systems.

Leibniz Supercomputing Centre Accelerates AI Innovation in Bavaria with Next-Generation AI System from Cerebras Systems and Hewlett Packard Enterprise

Retrieved on: 
Wednesday, May 25, 2022

The Leibniz Supercomputing Centre (LRZ), Cerebras Systems, and Hewlett Packard Enterprise (HPE) today announced the joint development and delivery of a new system featuring next-generation AI technologies to significantly accelerate scientific research and innovation in AI for Bavaria.

Key Points: 
  • The system comprises the HPE Superdome Flex server and the Cerebras CS-2 system, making it the first solution in Europe to leverage the Cerebras CS-2 system.
  • "As an academic computing and national supercomputing centre, we provide researchers with advanced and reliable IT services for their science."
  • "AI methods work on CPU-based systems like SuperMUC-NG, and conversely, high-performance computing algorithms can achieve performance gains on systems like Cerebras."

Cerebras CS-2 System Awarded ‘Best in Show’ at Bio-IT World Conference & Expo

Retrieved on: 
Friday, May 6, 2022

Cerebras Systems, the pioneer in high performance artificial intelligence (AI) computing, received the Bio-IT World Conference & Expo's Best in Show award for the Cerebras CS-2 system, the world's fastest AI solution.

Key Points: 
  • At the conference, Cerebras also announced biopharmaceutical leader AbbVie as a customer, achieving 128 times the performance of a graphics processing unit (GPU) on a single CS-2.
  • From the pharmaceutical industry to the energy space to U.S. national laboratories, Cerebras customers have published CS-2 performance results exceeding those of hundreds of GPUs.
  • At Cerebras Systems, our goal is to enable AI that accelerates our customers' missions, said Andrew Feldman, CEO and co-founder of Cerebras Systems.