Coprocessors

NVIDIA Expands NVIDIA Clara, Adds Global Healthcare Partners to Take on COVID-19

Retrieved on: Thursday, May 14, 2020

Launched today, NVIDIA Clara Guardian uses intelligent video analytics and automatic speech recognition so a new generation of smart hospitals can monitor patients' vital signs while limiting staff exposure.

Key Points: 
  • Running on the just-announced NVIDIA A100 GPUs, NVIDIA Clara Parabricks set a record for whole-genome DNA sequencing analysis, slashing analysis time to just under 20 minutes.
  • NVIDIA Clara contains domain-specific AI training and deployment workflow tools that allowed NVIDIA and NIH to develop the models in under three weeks.
  • Building robust AI models is a global priority, but sharing data across borders remains challenging.

NVIDIA’s New Ampere Data Center GPU in Full Production

Retrieved on: Thursday, May 14, 2020

A universal workload accelerator, the A100 is also built for data analytics, scientific computing and cloud graphics.

Key Points: 
  • The NVIDIA A100 GPU delivers a 20x AI performance leap and accelerates the end-to-end machine learning workflow, from data analytics to training to inference.
  • NVIDIA A100 will simultaneously boost throughput and drive down the cost of data centers.
  • NVIDIA's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics and revolutionized parallel computing.

NVIDIA Jarvis Simplifies Building State-of-the-Art Conversational AI Services

Retrieved on: Thursday, May 14, 2020

SANTA CLARA, Calif., May 14, 2020 (GLOBE NEWSWIRE) -- GTC 2020 -- NVIDIA today announced the release of NVIDIA Jarvis, a GPU-accelerated application framework that allows companies to use video and speech data to build state-of-the-art conversational AI services customized for their own industry, products and customers.

Key Points: 
  • NVIDIA Jarvis can help the healthcare, financial services, education and retail industries automate their overloaded customer support with speed and accuracy.
  • Applications built with Jarvis can take advantage of innovations in the new NVIDIA A100 Tensor Core GPU for AI computing and the latest optimizations in NVIDIA TensorRT for inference.
  • Jarvis addresses the challenges of building conversational AI by offering an end-to-end deep learning pipeline.
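
An end-to-end conversational AI pipeline of the kind described chains three model stages: automatic speech recognition, language understanding, and speech synthesis. A schematic sketch of that flow, with placeholder stages standing in for the models (the function names and canned responses are illustrative, not the Jarvis API):

```python
def speech_to_text(audio: bytes) -> str:
    """ASR stage: a real system would run acoustic and language models here."""
    return "what is my account balance"

def understand(text: str) -> dict:
    """NLU stage: a placeholder intent classifier."""
    return {"intent": "check_balance", "slots": {}}

def text_to_speech(text: str) -> bytes:
    """TTS stage: a placeholder speech synthesizer."""
    return text.encode()

def converse(audio: bytes) -> bytes:
    """End-to-end pipeline: ASR -> NLU -> response -> TTS."""
    query = speech_to_text(audio)
    intent = understand(query)["intent"]
    reply = {"check_balance": "your balance is $100"}.get(intent, "sorry?")
    return text_to_speech(reply)

print(converse(b"raw audio"))
```

Running the whole chain on the GPU, as the release describes, avoids the latency of shuttling intermediate results between separate services at each stage.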

NVIDIA Ships World’s Most Advanced AI System — NVIDIA DGX A100 — to Fight COVID-19; Third-Generation DGX Packs Record 5 Petaflops of AI Performance

Retrieved on: Thursday, May 14, 2020

"NVIDIA DGX A100 is the ultimate instrument for advancing AI," said Jensen Huang, founder and CEO of NVIDIA.

Key Points: 
  • NVIDIA DGX A100 is the first AI system built for the end-to-end machine learning workflow, from data analytics to training to inference.
  • Multiple smaller workloads can be accelerated by partitioning the DGX A100 into as many as 56 instances per system, using the A100 Multi-Instance GPU (MIG) feature.
  • NVIDIA also revealed its next-generation DGX SuperPOD, a cluster of 140 DGX A100 systems capable of achieving 700 petaflops of AI computing power.
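
The figures above are mutually consistent: a DGX A100 contains eight A100 GPUs, the Multi-Instance GPU feature splits each A100 into up to seven instances, and a SuperPOD multiplies the per-system 5 petaflops by 140 systems. A quick sanity check of that arithmetic:

```python
# Sanity-check the DGX A100 figures quoted above.
GPUS_PER_SYSTEM = 8        # A100 GPUs in one DGX A100
MIG_SLICES_PER_GPU = 7     # max Multi-Instance GPU partitions per A100
PFLOPS_PER_SYSTEM = 5      # AI petaflops per DGX A100
SUPERPOD_SYSTEMS = 140     # DGX A100 systems in a DGX SuperPOD

max_instances = GPUS_PER_SYSTEM * MIG_SLICES_PER_GPU
superpod_pflops = SUPERPOD_SYSTEMS * PFLOPS_PER_SYSTEM

print(max_instances)    # 56 instances per DGX A100 system
print(superpod_pflops)  # 700 petaflops across the SuperPOD
```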

Vicor 1200A ChiP-set Enables Higher Performance AI Accelerator Cards

Retrieved on: Thursday, May 14, 2020

A driver, MCD4609, and a pair of MCM4609 current multiplier modules supply up to 650A continuous and 1,200A peak.

Key Points: 
  • Powering GPU and OCP Accelerator Module (OAM) Artificial Intelligence (AI) cards, the 4609 ChiP-set is in mass production and available to new customers on the Vicor Hydra II evaluation board.
  • Vicor IP on the critical path to Power-on-Package LPD and VPD solutions enables unparalleled current density and efficient current delivery for advanced processors in applications including AI accelerator cards, AI high density clusters and high-speed networking.
  • Vicor, FPA, ChiP, MCM, GCM and MCD are trademarks of Vicor Corporation.

NVIDIA Announces Upcoming Events for Financial Community

Retrieved on: Wednesday, May 13, 2020

SANTA CLARA, Calif., May 13, 2020 (GLOBE NEWSWIRE) -- NVIDIA will present at the following events for the financial community:

Key Points: 
  • Interested parties can listen to the live audio webcast of NVIDIA's presentation at these events, available at investor.nvidia.com.
  • Replays of the webcasts will be available for 90 days afterward.
  • NVIDIA's (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics and revolutionized parallel computing.
  • More recently, GPU deep learning ignited modern AI, the next era of computing, with the GPU acting as the brain of computers, robots and self-driving cars that can perceive and understand the world.

NEUCHIPS Announces World's First Deep Learning Recommendation Model (DLRM) Accelerator: RecAccel

Retrieved on: Tuesday, May 12, 2020

Running open-source PyTorch DLRM, RecAccel™ outperforms server-class CPU and inference GPU by 28X and 65X, respectively.

Key Points: 
  • It is equipped with an ultra-high-capacity, high-bandwidth memory subsystem for embedding table lookup and a massively parallel compute FPGA for neural network inference.
  • "Fast and accurate recommendation inference is the key to e-commerce business success," said Dr.Youn-Long Lin, CEO of NEUCHIPS.
  • About NEUCHIPS: NEUCHIPS Corp. is an application-specific compute solution provider based in Hsinchu, Taiwan.

Run:AI Creates First Fractional GPU Sharing for Kubernetes Deep Learning Workloads

Retrieved on: Wednesday, May 6, 2020

TEL AVIV, Israel, May 6, 2020 /PRNewswire/ -- Run:AI, a company virtualizing AI infrastructure, today released the first fractional GPU sharing system for deep learning workloads on Kubernetes.

Key Points: 
  • Today's de facto standard for deep learning workloads is to run them in containers orchestrated by Kubernetes.
  • The fractional GPU system enables several deep learning workloads to run in containers side-by-side on the same GPU without interfering with each other.
  • "Run:AI's fractional GPU system lets companies unleash the full capacity of their hardware so they can scale up their deep learning more quickly and efficiently."

OSS’ AI on the Fly Autonomous Vehicles Orders Top $2.1 Million

Retrieved on: Thursday, April 30, 2020

ESCONDIDO, Calif., April 30, 2020 (GLOBE NEWSWIRE) -- One Stop Systems, Inc. (Nasdaq: OSS), a leader in specialized high-performance edge computing, has received more than $2.1 million in purchase orders from two customers for OSS AI on the Fly system elements to be used in their next-generation autonomous vehicles.

Key Points: 
  • Last June, OSS announced an exclusive joint design-in from a leading international rideshare company for the design, engineering, prototyping and production of AI on the Fly system elements for use in 150 next-generation autonomous vehicles.
  • The company's proprietary technology enables AI on the Fly for autonomous vehicles by integrating powerful GPUs and the latest generation of PCI Express.
  • OSS recently featured its solutions for autonomous vehicles at the NVIDIA GTC Digital virtual tradeshow in March.

CyberLink FaceMe® Enables Facial Recognition on NEC’s All-in-One Personal Computers

Retrieved on: Thursday, April 23, 2020

Facial recognition is one of the fastest growing technologies, and FaceMe is one of the world's leading solutions.

Key Points: 
  • FaceMe can run on low-power CPUs to enable facial recognition on cost-effective IoT/AIoT devices, as well as on high-end servers, workstations and GPU-equipped personal computers, where it delivers high performance efficiently.
  • Founded in 1996, CyberLink Corp. (5203.TW) is the world leader in multimedia software and AI facial recognition technology.
  • With years of research in the fields of artificial intelligence and facial recognition, CyberLink has developed the FaceMe Facial Recognition Engine.