Leibniz Supercomputing Centre

Democratising access to quantum computing: IQM Quantum Computers launches “IQM Spark” for universities and labs

Retrieved on: 
Wednesday, August 30, 2023

IQM Quantum Computers (IQM), the European leader in building quantum computers, today launched “IQM Spark,” comprising a superconducting quantum computer and tailored learning experiences for universities and research labs worldwide.

Key Points: 
  • To help universities kick-start their quantum programs, IQM will provide free maintenance for one year, training for running the system, and learning materials accessible through IQM Academy, a user-friendly online platform.
  • With IQM Spark, students of all levels (bachelor, master, and PhD) will have the opportunity to learn hands-on about quantum computing.
  • With its technical track record and world-class expertise, IQM is also committed to collaborating with universities to drive advancements in quantum science.

Green AI Cloud and Cerebras Systems Bring Industry-Leading AI Performance and Sustainability to Europe

Retrieved on: 
Wednesday, December 14, 2022

Cerebras Systems, the pioneer in high-performance artificial intelligence (AI) compute, and Green AI Cloud, the most sustainable super compute platform in Europe, today announced the availability of Cerebras Cloud at Green AI.

Key Points: 
  • A cloud provider based in the EU, such as Green AI Cloud, enables customers across the EU to benefit from Cerebras' industry-leading AI compute while staying within EU data privacy frameworks.
  • For Cerebras, the leader in energy-efficient AI compute, partnering with Green AI Cloud to deliver AI compute was an obvious choice.
  • Green AI Cloud is a European cloud service provider offering AI super compute for the largest AI models available.

Cerebras Wafer-Scale Cluster Brings Push-Button Ease and Linear Performance Scaling to Large Language Models

Retrieved on: 
Wednesday, September 14, 2022

The key to the new Cerebras Wafer-Scale Cluster is the exclusive use of data parallelism. Data parallelism is the preferred approach for all AI work. However, data parallelism requires that all the calculations, including the largest matrix multiplications of the largest layer, fit on a single device, and that all the parameters fit in the device's memory. Only the CS-2, and not graphics processing units, achieves both characteristics for LLMs.
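The data parallelism described above can be sketched in a few lines. This is a generic illustration of the technique, not the Cerebras software stack: each worker holds a full copy of the model, computes the gradient on its own shard of the batch, and the shard gradients are averaged (the "all-reduce" step).

```python
# Generic sketch of data parallelism for a 1-D linear model y = w*x:
# every worker keeps a full copy of the parameter w, computes the
# gradient on its own shard of the global batch, and the per-shard
# gradients are averaged. Illustrative only; not the Cerebras API.

def grad(w, shard):
    """Mean-squared-error gradient dL/dw on one shard of (x, y) pairs."""
    return sum(2.0 * (w * x - y) * x for x, y in shard) / len(shard)

w = 0.0
data = [(float(x), 3.0 * x) for x in range(1, 9)]   # true slope is 3

# Split the global batch into equal shards, one per worker.
n_workers = 4
shards = [data[i::n_workers] for i in range(n_workers)]

avg_grad = sum(grad(w, s) for s in shards) / n_workers
full_grad = grad(w, data)

# With equal shard sizes, the averaged gradient equals the
# full-batch gradient, which is why data parallelism scales cleanly.
assert abs(avg_grad - full_grad) < 1e-9
```

Because averaging shard gradients reproduces the full-batch gradient exactly, adding workers changes only throughput, not the mathematics of the update.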

Key Points: 
  • With a Wafer-Scale Cluster, users can distribute even the largest language models from a Jupyter notebook running on a laptop with just a few keystrokes.
  • Large language models (LLMs) are transforming entire industries across healthcare and life sciences, energy, financial services, transportation, entertainment, and more.
  • However, training large models with traditional hardware is challenging and time-consuming, and has only successfully been accomplished by a few organizations.
  • Instead, Cerebras Wafer-Scale Clusters deliver push-button allocation of work to compute, and linear performance scaling from a single CS-2 to up to 192 CS-2 systems.

Cerebras Systems Enables GPU-Impossible™ Long Sequence Lengths Improving Accuracy in Natural Language Processing Models

Retrieved on: 
Wednesday, August 31, 2022

Customers can now rapidly train Transformer-style natural language AI models with 20x longer sequences than is possible using traditional computer hardware.

Key Points: 
  • Long sequence lengths enable an NLP model to understand a given word within a larger, broader context.
  • By vastly enlarging the context (the sequence of words within which the target word is understood), Cerebras enables NLP models to demonstrate a more sophisticated understanding of language.
  • Training large models with massive data sets and long sequence lengths is an area in which the Cerebras CS-2 system, powered by the Wafer-Scale Engine (WSE-2), excels.
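A back-of-envelope calculation shows why long sequences strain conventional hardware: standard Transformer attention materialises an (n x n) score matrix per head, so score memory grows quadratically with sequence length. The head count and precision below are illustrative assumptions, not Cerebras benchmarks.

```python
# Why long sequences are hard for standard Transformer attention:
# each layer materialises an (n x n) attention-score matrix per
# head, so memory for the scores grows quadratically with the
# sequence length n. Illustrative numbers, not vendor benchmarks.

def attention_score_bytes(seq_len, n_heads=16, bytes_per_val=2):
    """Bytes for one layer's attention-score matrices (fp16)."""
    return n_heads * seq_len * seq_len * bytes_per_val

base = attention_score_bytes(2048)
long = attention_score_bytes(2048 * 20)       # a 20x longer sequence

# 20x the sequence length means 400x the score memory.
assert long == base * 400
print(f"2k tokens:  {base / 2**20:.0f} MiB per layer")   # 128 MiB
print(f"40k tokens: {long / 2**30:.1f} GiB per layer")   # 50.0 GiB
```

The quadratic blow-up, 400x the memory for 20x the tokens, is the reason long-sequence training is usually truncated on conventional accelerators.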

Computer History Museum Honors Cerebras Systems with New Display for Wafer-Scale Engine

Retrieved on: 
Wednesday, August 3, 2022

Cerebras Systems, the pioneer in accelerating artificial intelligence (AI) compute, and the Computer History Museum (CHM), the leading institution decoding technology, its computing past, digital present, and future impact on humanity, today unveiled a new display featuring the Cerebras Wafer-Scale Engine (WSE).

Key Points: 
  • "It is the honor of a lifetime to be accepted into the Computer History Museum's world-renowned collection," said Andrew Feldman, CEO and co-founder of Cerebras Systems.
  • For more information on the new WSE display at the Computer History Museum, please tune into a livestream conversation on Wednesday, August 3 at 2:30 pm PT with Cerebras Systems CEO Andrew Feldman and Computer History Museum President & CEO Dan'l Lewin at https://www.youtube.com/computerhistory/live .
  • Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types.

Cerebras Systems Sets Record for Largest AI Models Ever Trained on a Single Device

Retrieved on: 
Wednesday, June 22, 2022

By enabling a single CS-2 to train these models, Cerebras reduces the system engineering time necessary to run large natural language processing (NLP) models from months to minutes.

Key Points: 
  • The Cerebras CS-2 is a critical component that allows GSK to train language models using biological datasets at a scale and size previously unattainable.
  • These foundational models form the basis of many of our AI systems and play a vital role in the discovery of transformational medicines.
  • Cerebras' ability to bring large language models to the masses with cost-efficient, easy access opens up an exciting new era in AI.

Leading Supercomputer Sites Choose Cerebras for AI Acceleration

Retrieved on: 
Tuesday, May 31, 2022

"At Cerebras Systems, our goal is to revolutionize compute," said Andrew Feldman, CEO and co-founder of Cerebras Systems.

Key Points: 
  • LRZ is set to accelerate innovation and scientific discovery in Germany with the CS-2 in its forthcoming AI supercomputer.
  • Coming online this summer, the new supercomputer will enable Germany's researchers to bolster scientific research and innovation with AI.
  • PSC also doubled its AI capacity to 1.7 million AI cores, with two CS-2 systems, powering the center's Neocortex supercomputer for high-performance AI.

Leibniz Supercomputing Centre Accelerates AI Innovation in Bavaria with Next-Generation AI System from Cerebras Systems and Hewlett Packard Enterprise

Retrieved on: 
Wednesday, May 25, 2022

The Leibniz Supercomputing Centre (LRZ), Cerebras Systems, and Hewlett Packard Enterprise (HPE) today announced the joint development and delivery of a new system featuring next-generation AI technologies to significantly accelerate scientific research and innovation in AI for Bavaria.

Key Points: 
  • The system comprises the HPE Superdome Flex server and the Cerebras CS-2 system, making it the first solution in Europe to leverage the CS-2.
  • "As an academic computing and national supercomputing centre, we provide researchers with advanced and reliable IT services for their science."
  • "AI methods work on CPU-based systems like SuperMUC-NG, and conversely, high-performance computing algorithms can achieve performance gains on systems like Cerebras."

Intel Open-Sources SYCLomatic Migration Tool to Help Developers Create Heterogeneous Code

Retrieved on: 
Thursday, May 19, 2022

What's New: Intel has released an open-source tool to migrate code to SYCL through a project called SYCLomatic, which helps developers more easily port CUDA code to SYCL and C++ to accelerate cross-architecture programming for heterogeneous architectures.

Key Points: 
  • How the SYCLomatic Tool Works: SYCLomatic assists developers in porting CUDA code to SYCL, typically migrating 90-95% of CUDA code automatically.
  • Since the current version of the code migration tool does not support migration to functors, we wrote a simple clang tool to refactor the resulting SYCL source code to meet our needs.
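A migration tool of this kind works by rewriting CUDA constructs into their SYCL equivalents. The table below lists a few well-known correspondences to give a flavour of what gets automated; it is an illustrative subset written by hand here, not SYCLomatic's actual rule set.

```python
# A few well-known CUDA -> SYCL correspondences of the kind a
# migration tool like SYCLomatic automates. Illustrative subset
# only; not SYCLomatic's actual rewrite rules.
cuda_to_sycl = {
    "cudaMalloc":       "sycl::malloc_device",
    "cudaMemcpy":       "queue.memcpy",
    "cudaFree":         "sycl::free",
    "__global__":       "a lambda passed to parallel_for",
    "threadIdx.x":      "item.get_local_id(0)",
    "blockIdx.x":       "item.get_group(0)",
    "__syncthreads()":  "item.barrier()",
}

for cuda, sycl in cuda_to_sycl.items():
    print(f"{cuda:18s} -> {sycl}")
```

The remaining 5-10% that tools cannot migrate automatically, such as the functor refactoring mentioned above, is where hand-written follow-up passes come in.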

Tachyum Launches Its German Website to Expand Business Development in the Region

Retrieved on: 
Tuesday, March 15, 2022

Tachyum, a semiconductor company with roots in Europe, is essential for the digital and technological sovereignty of the EU.

Key Points: 
  • Today, the European Union is heavily dependent on non-EU chip suppliers since it has only 10% of the global fabrication market.
  • Tachyum's Prodigy, the world's first universal processor, will help the EU achieve a leading position in the technology, supercomputing, and data center markets.
  • Prodigy's ability to seamlessly switch among these various workloads dramatically changes the competitive landscape and the economics of data centers.