Graphics processing unit

NVIDIA Blackwell Platform Arrives to Power a New Era of Computing

Retrieved on: 
Monday, March 18, 2024

The NVIDIA GB200 Grace Blackwell Superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect.

Key Points: 
  • The NVIDIA GB200 Grace Blackwell Superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect.
  • The rack-scale GB200 NVL72 system combines 36 Grace Blackwell Superchips, comprising 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink (figures tallied in the short sketch after this list).
  • The Blackwell product portfolio is supported by NVIDIA AI Enterprise, the end-to-end operating system for production-grade AI.
  • To learn more about the NVIDIA Blackwell platform, watch the GTC keynote and register to attend sessions from NVIDIA and industry leaders at GTC, which runs through March 21.
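
A quick arithmetic sketch in Python, assuming only the quantities quoted above (two B200 GPUs and one Grace CPU per GB200 Superchip, 36 Superchips at rack scale, 900GB/s NVLink chip-to-chip), tallies the rack-level figures; the aggregate-bandwidth line is simple multiplication for illustration, not an NVIDIA specification.

    # Back-of-the-envelope tally of the GB200 NVL72 figures quoted above.
    GPUS_PER_SUPERCHIP = 2        # two B200 Tensor Core GPUs per GB200
    CPUS_PER_SUPERCHIP = 1        # one Grace CPU per GB200
    SUPERCHIPS = 36               # superchips combined at rack scale
    NVLINK_C2C_GB_S = 900         # GB/s NVLink chip-to-chip per superchip

    gpus = GPUS_PER_SUPERCHIP * SUPERCHIPS              # 72 Blackwell GPUs
    cpus = CPUS_PER_SUPERCHIP * SUPERCHIPS              # 36 Grace CPUs
    aggregate_c2c_gb_s = NVLINK_C2C_GB_S * SUPERCHIPS   # illustrative aggregate, in GB/s

    print(f"{gpus} GPUs, {cpus} CPUs, ~{aggregate_c2c_gb_s / 1000:.1f} TB/s aggregate chip-to-chip bandwidth")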

TSMC and Synopsys Bring Breakthrough NVIDIA Computational Lithography Platform to Production

Retrieved on: 
Monday, March 18, 2024

“Computational lithography is a cornerstone of chip manufacturing,” said Jensen Huang, founder and CEO of NVIDIA.

Key Points: 
  • “Computational lithography is a cornerstone of chip manufacturing,” said Jensen Huang, founder and CEO of NVIDIA.
  • Computational lithography is the most compute-intensive workload in the semiconductor manufacturing process, consuming tens of billions of hours per year on CPUs (a rough scale check follows this list).
  • “We are moving NVIDIA cuLitho into production at TSMC, leveraging this computational lithography technology to drive a critical component of semiconductor scaling.”
    Since its introduction last year, cuLitho has enabled TSMC to open new opportunities for innovative patterning technologies.
  • Synopsys’ Proteus™ optical proximity correction software running on the NVIDIA cuLitho software library significantly speeds computational workloads compared to current CPU-based methods.
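
To give a sense of scale for "tens of billions of hours per year on CPUs," the sketch below converts an assumed round figure of 10 billion CPU hours, chosen purely for illustration and not taken from the source, into an equivalent number of continuously running cores.

    # Rough scale check; 10 billion CPU hours per year is an assumed round figure.
    cpu_hours_per_year = 10e9
    hours_per_year = 365 * 24                     # 8,760 hours in a year
    equivalent_cores = cpu_hours_per_year / hours_per_year
    print(f"~{equivalent_cores / 1e6:.1f} million CPU cores running around the clock")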

Voltron Data Advances Theseus, Making It the First Petabyte Scale Query Engine for Large Scale Data Processing

Retrieved on: 
Monday, March 18, 2024

“Voltron Data is playing an important role in bridging analytics and AI infrastructure usage in the Era of AI.

Key Points: 
  • “Voltron Data is playing an important role in bridging analytics and AI infrastructure usage in the Era of AI.
  • “Theseus enables enterprises to use GPUs to quickly analyze log data, machine data and large amounts of tabular data generated on the fly – this is because GPUs, with their sheer computational power, are great at processing data, especially data moving too fast to thoroughly index.
  • Theseus queries data where it lies and uses the sheer computational power of hardware accelerators to analyze massive datasets as quickly as possible (an illustrative GPU query pattern is sketched after this list).
  • HPE is the first partner to embed Theseus as its accelerated data processing engine within HPE Ezmeral Unified Analytics Software.
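
Theseus itself is proprietary, but the pattern described above, querying columnar data in place on the GPU rather than building CPU-side indexes first, can be illustrated with the open-source RAPIDS cuDF library. The sketch below is an analogy for that pattern under assumed inputs (the Parquet path and column names are hypothetical), not Voltron Data's engine.

    # Illustrative GPU-accelerated query over columnar log data using RAPIDS cuDF.
    import cudf

    # Hypothetical Parquet log file; cuDF loads it directly into GPU memory.
    logs = cudf.read_parquet("s3://example-bucket/logs/2024-03-18.parquet")

    # Count errors per service on the GPU, without building a CPU-side index.
    errors_by_service = (
        logs[logs["level"] == "ERROR"]
        .groupby("service")
        .size()
        .sort_values(ascending=False)
    )
    print(errors_by_service.head(10))

The filter and aggregation both execute on the accelerator, which is the property the quote above attributes to GPUs for fast-moving log and machine data.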

Pliops Unlocks Limitless Potential of AI at NVIDIA GTC 2024

Retrieved on: 
Monday, March 18, 2024

Pliops, a leading provider of data processors for cloud and enterprise data centers, will be on hand at NVIDIA GTC this week to tackle these challenges head-on.

Key Points: 
  • Pliops, a leading provider of data processors for cloud and enterprise data centers, will be on hand at NVIDIA GTC this week to tackle these challenges head-on.
  • The Pliops XDP-AccelKV universal data acceleration engine significantly eases the scalability challenges of AI/GenAI applications.
  • The memory capacity and bandwidth limits GPUs face when loading AI/GenAI models prevent them from fully utilizing their computational power.
  • It extends HBM with fast storage, enabling terabyte-scale AI applications to run on a single GPU (a schematic two-tier sketch follows this list).
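
The announcement does not describe an API, but the idea of extending scarce HBM with a fast storage tier can be sketched as a simple two-level key-value lookup. Everything below, the class name, the plain-dict stand-in for HBM, and the shelve-backed stand-in for fast storage, is a hypothetical illustration of the general pattern, not Pliops XDP-AccelKV.

    # Hypothetical two-tier key-value store: a small in-memory "HBM" tier backed
    # by a larger on-disk tier, illustrating the HBM-extension idea in general terms.
    import shelve

    class TieredKVStore:
        def __init__(self, hbm_capacity, disk_path):
            self.hbm = {}                        # stand-in for scarce GPU HBM
            self.hbm_capacity = hbm_capacity
            self.disk = shelve.open(disk_path)   # stand-in for fast NVMe-class storage

        def put(self, key, value):
            if len(self.hbm) >= self.hbm_capacity:
                # Spill an entry to the storage tier when the fast tier is full.
                old_key, old_value = self.hbm.popitem()
                self.disk[old_key] = old_value
            self.hbm[key] = value

        def get(self, key):
            # Serve from the fast tier if possible, otherwise fall back to storage.
            return self.hbm.get(key, self.disk.get(key))

    # Usage sketch: values spill to the storage tier once the fast tier fills up.
    store = TieredKVStore(hbm_capacity=2, disk_path="/tmp/kv_spill")
    for i in range(4):
        store.put(f"key{i}", f"value{i}")
    print(store.get("key0"), store.get("key3"))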

Clique raises $8M in Series A to revolutionize the way smart contracts access data

Retrieved on: 
Thursday, March 14, 2024

This latest funding round aims to power Clique's mission of enabling efficient, optimal allocation of compute resources for applications with differing preferences.

Key Points: 
  • This latest funding round aims to power Clique's mission of enabling efficient, optimal allocation of compute resources for applications with differing preferences.
  • To date, Clique has enabled over US$3.5 billion in on-chain transactions through the use of its protocol.
  • The Clique Compute Coordination Network organizes various off-chain compute resources, allocating them as needed by both general applications and smart contracts.
  • This allows applications to access different compute resources and data easily, with the ability to adjust for preferences around trust, privacy, performance, and cost.

Applied Digital Secures Contract with AI Customer, Together AI

Retrieved on: 
Thursday, March 14, 2024

DALLAS, March 14, 2024 (GLOBE NEWSWIRE) -- Applied Digital Corporation (Nasdaq: APLD) ("Applied Digital" or the "Company"), a designer, builder, and operator of next-generation digital infrastructure designed for High-Performance Computing (“HPC”) applications, today announced the onboarding of another AI customer, Together AI. Applied Digital received an $18 million prepayment on the $75 million contract, under which it has fully onboarded one GPU compute cluster and provided access to its second cluster.

Key Points: 
  • Applied Digital's robust infrastructure aligns with Together AI's commitment to innovation, seeking to meet the surging demands of AI and HPC markets.
  • DALLAS, March 14, 2024 (GLOBE NEWSWIRE) -- Applied Digital Corporation (Nasdaq: APLD) ("Applied Digital" or the "Company"), a designer, builder, and operator of next-generation digital infrastructure designed for High-Performance Computing (“HPC”) applications, today announced the onboarding of another AI customer, Together AI.
  • “Our partnership with Together AI highlights the effectiveness of our cloud service in propelling forward-thinking AI ventures towards scalability,” said Applied Digital CEO and Chairman Wes Cummins.
  • We believe Applied Digital is poised to capitalize on this momentum, spearheading advancements in these complementary sectors.

Broadcom Delivers Industry’s First 51.2-Tbps Co-Packaged Optics Ethernet Switch Platform for Scalable AI Systems

Retrieved on: 
Thursday, March 14, 2024

The product integrates eight silicon photonics-based 6.4-Tbps optical engines with Broadcom’s best-in-class StrataXGS® Tomahawk®5 switch chip.

Key Points: 
  • The product integrates eight silicon photonics-based 6.4-Tbps optical engines with Broadcom’s best-in-class StrataXGS® Tomahawk®5 switch chip (a rough bandwidth and power tally follows this list).
  • The optical interconnect is critical for both front-end and back-end networks in large scale generative AI clusters.
  • Today, pluggable optical transceivers consume approximately 50% of system power and constitute more than 50% of the cost of a traditional switch system.
  • “Our partnership with Broadcom to design systems with highly integrated co-packaged optics will enable a more power efficient future network.
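
The figures in this item combine into a rough budget. The sketch below checks that eight 6.4-Tbps engines account for the platform's 51.2 Tbps and illustrates the quoted ~50% power share of pluggable optics against an assumed 2 kW total switch-system power, a placeholder that is not from the source.

    # Bandwidth and power arithmetic from the figures quoted above.
    engines = 8
    tbps_per_engine = 6.4
    total_tbps = engines * tbps_per_engine      # 51.2 Tbps, matching the platform figure

    assumed_system_power_w = 2000               # placeholder total switch power, NOT from the source
    pluggable_share = 0.5                       # pluggable optics ~50% of system power (quoted)
    pluggable_power_w = assumed_system_power_w * pluggable_share

    print(f"{total_tbps:.1f} Tbps total; pluggable optics would draw ~{pluggable_power_w:.0f} W "
          f"of an assumed {assumed_system_power_w} W system")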

Broadcom Extends Technology and Volume Leadership on AI Optical Components

Retrieved on: 
Wednesday, March 13, 2024

Broadcom’s state-of-the-art optics technologies facilitate high-speed interconnects for front-end and back-end networks of large-scale generative AI compute clusters.

Key Points: 
  • Broadcom’s state-of-the-art optics technologies facilitate high-speed interconnects for front-end and back-end networks of large-scale generative AI compute clusters.
  • VCSEL and EML technologies play a crucial role in enabling high-speed interconnects for AI and ML systems.
  • “Generative AI has unleashed a network transformation necessitating an order of magnitude increase in high-speed optical links compared to standard network requirements,” said Near Margalit, Ph.D.
  • Broadcom is once again among the first suppliers of components enabling the next generation of optical transceivers.”
    “Enterprises continue to demand larger AI clusters, elevating the importance of cutting-edge optical interconnects,” said Craig Thompson, vice president of LinkX products at NVIDIA.

Worksport to Advance Power Electronics with Gallium Nitride Semiconductors, Improving on Silicon-Based Technology

Retrieved on: 
Wednesday, March 13, 2024

West Seneca, New York, March 13, 2024 (GLOBE NEWSWIRE) -- Worksport Ltd. (Nasdaq: WKSP) (“Worksport” or the “Company”), an innovative automotive accessory manufacturer dedicated to developing clean energy solutions, announced today its strategic move to integrate Gallium Nitride (“GaN”) semiconductors in its upcoming product offerings. By choosing GaN semiconductors over the more prevalent silicon-based chips that power even AI chip leader NVIDIA Corporation (“NVIDIA”), Worksport has embarked on an innovative journey to advance its technology to new heights through various partnerships, including a recently announced partnership with Infineon Technologies AG, a leading producer of GaN semiconductors.

Key Points: 
  • Silicon has been the dominant material for semiconductor manufacturing for decades due to its widespread availability and ease of processing.
  • Compared to silicon, GaN-based power switches feature lower overall capacitances and have no anti-parallel body diode.
  • This can yield power converters with much higher power density than is typical of silicon-based designs, as the switching-loss relation sketched after this list suggests.
  • By leveraging GaN technology, Worksport believes that its future products will be able to deliver unmatched performance while also consuming less energy, resulting in longer battery life due to higher-efficiency power converters.
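
The link between lower switch capacitance and higher power density can be made concrete with the standard hard-switching loss approximation below. It is a textbook relation with generic symbols, not a figure from Worksport or Infineon.

    P_{\mathrm{sw}} \approx \tfrac{1}{2}\, C_{\mathrm{oss}}\, V_{\mathrm{ds}}^{2}\, f_{\mathrm{sw}}

Here C_oss is the switch output capacitance, V_ds the switched voltage, and f_sw the switching frequency. Because a GaN device presents a lower C_oss and adds no body-diode reverse-recovery charge, a converter can run at a higher f_sw within the same loss budget, which shrinks magnetics and filter components and raises power density, consistent with the points above.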

Dekube Unveils Revolutionary Decentralized Computing Power Network to Democratize AI Development

Retrieved on: 
Tuesday, March 12, 2024

Hong Kong, March 12, 2024 (GLOBE NEWSWIRE) -- In a landmark move poised to change the landscape of Artificial Intelligence (AI) development, Dekube has announced the launch of its decentralized computing power network.

Key Points: 
  • Hong Kong, March 12, 2024 (GLOBE NEWSWIRE) -- In a landmark move poised to change the landscape of Artificial Intelligence (AI) development, Dekube has announced the launch of its decentralized computing power network.
  • AI technology, while advancing rapidly, has been largely dominated by tech giants due to the prohibitive costs of training and developing AI models.
  • In addition to democratizing AI, Dekube addresses the environmental concerns associated with the extensive use of computing resources in AI development.
  • By utilizing idle GPU computing power, Dekube presents a more sustainable model that reduces the carbon footprint associated with AI training and development.