Coprocessors

OSS Expands AI on the Fly® Product Line, Adding PCI Express 4.0 Expansion System with Eight NVIDIA V100S Tensor Core GPUs

Retrieved on: 
Tuesday, January 21, 2020

The 4U value expansion system adds massive compute capability to any Gen 3 or Gen 4 server via two OSS PCIe x16 Gen 4 links.

Key Points: 
  • In conjunction with V100S Tensor Core GPUs, it delivers up to 1,040 teraFLOPS of tensor performance and 65.6 teraFLOPS of double precision performance, accelerating both computational science and data science.
  • The NVIDIA V100S Tensor Core GPU brings CUDA Cores and Tensor Cores in a unified architecture to enable mixed-precision computing.
  • In addition, NVIDIA V100S GPUs offer FP64 precision for scientific computing applications like simulations, and INT8 precision for AI inference.
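The aggregate figures above are easy to sanity-check against the per-GPU contribution. A minimal sketch, assuming the system totals split evenly across the eight V100S GPUs (the per-GPU values are derived here, not quoted in the release):

```python
# Derive per-GPU throughput from the eight-GPU system totals quoted above.
NUM_GPUS = 8
system_tensor_tflops = 1040.0  # mixed-precision Tensor Core total (teraFLOPS)
system_fp64_tflops = 65.6      # double-precision total (teraFLOPS)

# Assumes all eight V100S GPUs contribute equally to the system totals.
per_gpu_tensor_tflops = system_tensor_tflops / NUM_GPUS
per_gpu_fp64_tflops = system_fp64_tflops / NUM_GPUS

print(f"Per-GPU tensor throughput: {per_gpu_tensor_tflops:.1f} teraFLOPS")
print(f"Per-GPU FP64 throughput:   {per_gpu_fp64_tflops:.1f} teraFLOPS")
```

The derived figures (130 and 8.2 teraFLOPS per GPU) match NVIDIA's published V100S specifications, which supports the even-split reading of the system totals.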

ResNet-50 Score Confirms Leading Inference Performance of Groq Processor

Retrieved on: 
Tuesday, January 7, 2020

MOUNTAIN VIEW, California, Jan. 7, 2020 /PRNewswire/ -- Groq, the inventor of the Tensor Streaming Processor (TSP) architecture and a new class of compute, today announced that the Groq processor has achieved 21,700 inferences per second (IPS) for ResNet-50 v2 inference.

Key Points: 
  • Groq's level of inference performance exceeds that of other commercially available neural network architectures, with throughput that more than doubles the ResNet-50 score of the incumbent GPU-based architecture.
  • ResNet-50 is an inference benchmark for image classification and is often used as a standard for measuring performance of machine learning accelerators.
  • Headquartered in Mountain View, CA, Groq delivers industry leading performance, accuracy and sub-millisecond latency with efficient, software-driven solutions for compute-intensive applications.
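Since the Groq result is reported as raw throughput, the implied per-inference service time can be checked with simple arithmetic. A minimal sketch (the one-request-at-a-time reading is an assumption here; throughput only equals the reciprocal of latency when inferences are processed sequentially):

```python
# Implied per-inference time at the reported ResNet-50 v2 throughput.
ips = 21_700  # inferences per second, from the announcement

microseconds_per_inference = 1_000_000 / ips
print(f"{microseconds_per_inference:.1f} us per inference")  # ~46.1 us
```

At roughly 46 microseconds per image under this assumption, the figure is consistent with the sub-millisecond latency claim later in the item.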

OWC Unveils the Akitio Node Titan at CES 2020 -- a Game-Changing eGPU for Desktop-Class Graphics Performance in Notebooks, All-in-Ones and Small Form Factor Slotless Computers

Retrieved on: 
Sunday, January 5, 2020

The Node Titan turns a Thunderbolt 3-equipped notebook, all-in-one, or small form factor computer into a high-performance gaming, video editing and graphics workstation for a fraction of the cost of buying or building a new machine.

Key Points: 
  • The Node Titan turns internally limited graphics editing notebook computers like the MacBook Pro into a powerful NLE (Non-Linear Editing) machine.
  • By adding a second Node Titan, performance-hungry users can tap into the processing power of multiple eGPUs for exponential gains in timeline performance and accelerated renders/exports.
  • The Node Titan features a class-leading 650W power supply that can handle power-hungry cards like the Radeon RX Vega 64.
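The 650 W figure can be put in context with a rough power-budget check. A minimal sketch; the board-power number below is a typical published TDP and is an assumption here, not a figure from the release:

```python
# Rough headroom check for the Node Titan's 650 W power supply.
PSU_WATTS = 650

# Typical board power (TDP) for a power-hungry card; assumed, not quoted.
VEGA_64_TDP_WATTS = 295

headroom = PSU_WATTS - VEGA_64_TDP_WATTS
print(f"Headroom after the GPU: {headroom} W")  # 355 W
```

Under these assumed numbers, the supply leaves ample margin for transient power spikes and the enclosure's own overhead.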

Silverdraft Announces World's First 10-GPU Graphics Engine

Retrieved on: 
Thursday, December 19, 2019

BOISE, Idaho, Dec. 19, 2019 /PRNewswire/ -- Silverdraft announced today the full production launch of the world's first 10-GPU system for real-time ray tracing and visual effects rendering.

Key Points: 
  • The Devil GPU is the world's most flexible, scalable, and configurable GPU compute engine, designed to accelerate demanding visualization workloads.
  • "I am beyond excited to announce the full production of this groundbreaking system."
  • Silverdraft is unshackling Artists & Engineers so they can freely explore and create in new mediums with dramatic impact.

Global Artificial Intelligence (AI) Chips Market 2019-2023 | 39% CAGR Projection Through 2023 | Technavio

Retrieved on: 
Wednesday, December 18, 2019

The global artificial intelligence (AI) chips market is expected to post a CAGR of around 39% during the period 2019-2023, according to the latest market research report by Technavio.

Key Points: 
  • View the full release here: https://www.businesswire.com/news/home/20191218005451/en/
    Technavio has announced its latest market research report titled "Global Artificial Intelligence (AI) Chips Market 2019-2023".
  • Growth drivers identified in the report are fueling the global artificial intelligence (AI) chips market.
  • Global Artificial Intelligence (AI) Chips Market: Segmentation Analysis
    This market report segments the global artificial intelligence (AI) chips market by product (GPUs, ASICs, CPUs, and FPGAs) and geography (Americas, APAC, and EMEA).

Karma Group to Leverage NVIDIA DRIVE AGX Platform for Next-Generation Autonomous Electric Vehicles

Retrieved on: 
Wednesday, December 18, 2019

SUZHOU, China, Dec. 18, 2019 /PRNewswire/ -- Southern California-based automaker and high-tech incubator Karma Group today announced during the NVIDIA GPU Technology Conference in China that it will leverage the NVIDIA DRIVE AGX Xavier and Pegasus AI computing platforms for future autonomous electric vehicle capabilities.

Key Points: 
  • Karma's vehicle platforms consist of its family of Revero vehicles and Project e-Klipse, its all-electric global platform starting in 2021.
  • Karma will leverage both NVIDIA DRIVE AGX Xavier and DRIVE AGX Pegasus AI computing platforms for its autonomous driving systems.
  • At the core of the DRIVE AGX platform is the auto-grade NVIDIA Xavier system-on-a-chip, the first processor developed for autonomous driving.

Didi Chuxing Teams with NVIDIA for Autonomous Driving and Cloud Computing

Retrieved on: 
Wednesday, December 18, 2019

SUZHOU, China, Dec. 17, 2019 (GLOBE NEWSWIRE) -- GTC China -- NVIDIA and Didi Chuxing (DiDi), the world's leading mobile transportation platform, today announced that DiDi will leverage NVIDIA GPUs and AI technology to develop autonomous driving and cloud computing solutions.

Key Points: 
  • DiDi will use NVIDIA GPUs in the data center for training machine learning algorithms and NVIDIA DRIVE for inference on its Level 4 autonomous driving vehicles.
  • "Developing safe autonomous vehicles requires end-to-end AI, in the cloud and in the car," said Rishi Dhall, vice president of Autonomous Vehicles at NVIDIA.
  • For cloud computing, DiDi will also build an AI infrastructure and launch virtual GPU (vGPU) cloud servers for computing, rendering and gaming.

NVIDIA Enables Era of Interactive Conversational AI with New Inference Software

Retrieved on: 
Wednesday, December 18, 2019

SUZHOU, China, Dec. 17, 2019 (GLOBE NEWSWIRE) -- GTC China -- NVIDIA today introduced groundbreaking inference software that developers everywhere can use to deliver conversational AI applications, slashing inference latency that until now has impeded true, interactive engagement.

Key Points: 
  • Some of the world's largest, most innovative companies are already taking advantage of NVIDIA's conversational AI acceleration capabilities.
  • NVIDIA's inference platform, which includes TensorRT as well as several NVIDIA CUDA-X AI libraries and NVIDIA GPUs, delivers low-latency, high-throughput inference for applications beyond conversational AI, including image classification, fraud detection, segmentation, object detection and recommendation engines.
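The latency constraint behind "interactive" conversational AI can be made concrete with a simple budget. A minimal sketch; the ~300 ms end-to-end target and the per-stage numbers are illustrative assumptions, not figures from the announcement:

```python
# Illustrative latency budget for a speech-in, speech-out pipeline.
BUDGET_MS = 300  # assumed threshold for a response to feel interactive

# Hypothetical per-stage inference latencies (milliseconds).
stages_ms = {
    "speech recognition": 60,
    "language understanding": 30,
    "speech synthesis": 110,
}

total_ms = sum(stages_ms.values())
headroom_ms = BUDGET_MS - total_ms
print(f"Pipeline total: {total_ms} ms, headroom: {headroom_ms} ms")
```

Under these assumed numbers the pipeline fits the budget; the point of the inference software described above is to shrink each stage's latency so the whole exchange stays below such a threshold.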

Global Digital Signal Processors Market Analysis, Trends, and Forecasts 2019-2025 - ResearchAndMarkets.com

Retrieved on: 
Thursday, December 12, 2019

The "Digital Signal Processors - Market Analysis, Trends, and Forecasts" report has been added to ResearchAndMarkets.com's offering.

Key Points: 
  • The digital signal processors market worldwide is projected to grow by US$7.7 billion, driven by a compound annual growth rate of 8.3%.
  • DSP multiprocessors on a die, one of the segments analyzed and sized in this study, is projected to grow at over 9.2%.
  • The shifting dynamics supporting this growth make it critical for businesses in this space to keep abreast of the changing pulse of the market.
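The excerpt quotes absolute growth (US$7.7 billion) and a CAGR (8.3%) but not a base-year market size; the implied base can be backed out from those two figures. A minimal sketch, assuming six compounding periods for 2019-2025 (the period count is not stated in the report excerpt):

```python
# Back out the implied 2019 market size from the quoted figures.
growth_usd_bn = 7.7   # projected absolute growth, US$ billions
cagr = 0.083
years = 6             # assumption: 2019 -> 2025 as six compounding periods

# base * ((1 + cagr)**years - 1) == growth  =>  solve for base
base_usd_bn = growth_usd_bn / ((1 + cagr) ** years - 1)
print(f"Implied 2019 market size: ~US${base_usd_bn:.1f}B")  # ~US$12.6B
```

The result is only as good as the assumed period count; it is shown here to illustrate how the quoted growth and rate relate, not as a figure from the report.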

Enflame Technology Announces CloudBlazer with DTU Chip on GLOBALFOUNDRIES 12LP FinFET Platform for Data Center Training

Retrieved on: 
Thursday, December 12, 2019

Shanghai, China, Dec. 12, 2019 (GLOBE NEWSWIRE) -- In conjunction with the launch of Enflame's CloudBlazer T10, Enflame Technology and GLOBALFOUNDRIES (GF) today announced a new high-performing deep learning accelerator solution for data center training.

Key Points: 
  • Designed to accelerate deep learning deployment, the accelerator's core Deep Thinking Unit (DTU) is based on GF's 12LP FinFET platform with 2.5D packaging to deliver fast, power-efficient data processing for cloud-based AI training platforms.
  • Enflame's DTU leverages GF's 12LP FinFET platform with more than 14 billion transistors packaged in advanced 2.5D, and supports the PCIe 4.0 interface and the Enflame Smart Link high-speed interconnection.
  • "Enflame is focused on accelerating on-chip communications to increase the speed and accuracy of neural network training while reducing data center power consumption," said Arthur Zhang, Enflame Tech COO.
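The PCIe 4.0 interface mentioned above sets a hard ceiling on host-to-accelerator bandwidth, which follows directly from the published signaling rate. A minimal sketch, assuming a full x16 link (the release does not state the lane width):

```python
# Theoretical one-direction bandwidth of a PCIe 4.0 x16 link.
GT_PER_S = 16          # PCIe 4.0 signaling rate per lane (gigatransfers/s)
ENCODING = 128 / 130   # 128b/130b line encoding overhead
LANES = 16             # assumed x16 link width

gbytes_per_s = GT_PER_S * ENCODING * LANES / 8  # bits -> bytes
print(f"~{gbytes_per_s:.1f} GB/s per direction")  # ~31.5 GB/s
```

That ~31.5 GB/s ceiling is the raw link rate; real transfer throughput is somewhat lower after protocol overhead, which is one reason accelerator vendors add proprietary interconnects like Enflame Smart Link for chip-to-chip traffic.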