Coprocessors

Global Digital Signal Processor Market (2019-2025) - Partnerships, Collaborations and Agreements - ResearchAndMarkets.com

Retrieved on: 
Wednesday, April 8, 2020

The Global Digital Signal Processor Market size is expected to reach $16.4 billion by 2025, growing at a CAGR of 8.2% during the forecast period.

Key Points: 
  • The Global Digital Signal Processor Market size is expected to reach $16.4 billion by 2025, growing at a CAGR of 8.2% during the forecast period.
  • The digital signal processor is a specialized microchip or microprocessor whose architecture is customized for digital signal processing requirements.
  • Significant investments in the research, production, and development of digital signal processors are contributing to the growth of the digital signal processor market across North American countries.
  • Asia Pacific is a prominent hub for electronics manufacturing, and the widespread use of digital signal processors in the region's electronics industry is expected to drive the growth of the digital signal processor market there.
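
As a rough check on the headline figures, the sketch below applies the standard CAGR relation, assuming the forecast period spans the report's 2019-2025 window (six compounding years); the dollar values are simply the ones quoted above.

```python
# Quick sanity check of the headline figures, assuming the forecast
# period runs 2019-2025 (six compounding years) as the report title suggests.
end_value_usd_bn = 16.4   # projected 2025 market size, USD billions
cagr = 0.082              # compound annual growth rate
years = 2025 - 2019       # forecast horizon

# CAGR definition: end = start * (1 + cagr) ** years
implied_start = end_value_usd_bn / (1 + cagr) ** years
print(f"Implied 2019 market size: ~${implied_start:.1f}B")  # ~= $10.2B
```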

Element AI Showcases Element AI Orkestrator GPU Management Product at NVIDIA GTC Digital

Retrieved on: 
Friday, April 3, 2020

At the NVIDIA GTC Inception Startup Showcase, Element AI CEO and Co-founder JF Gagné presents "Enabling Human-Machine Collaboration," demonstrating how Element AI Orkestrator effectively schedules and allocates GPU clusters for optimal workload balancing.

Key Points: 
  • At the NVIDIA GTC Inception Startup Showcase, Element AI CEO and Co-founder JF Gagné presents "Enabling Human-Machine Collaboration," demonstrating how Element AI Orkestrator effectively schedules and allocates GPU clusters for optimal workload balancing.
  • Element AI Orkestrator is the first product in a suite of tools from Element AI that will help organizations become AI-ready by accelerating the end-to-end process of building and deploying AI models and applications.
  • Element AI and the Element AI logo are trademarks of Element AI Inc.
  • Element AI Orkestrator is a trademark of Element AI, and may be registered or pending registration in several jurisdictions.
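
The announcement does not describe Orkestrator's internals, so the sketch below is only a minimal illustration of the scheduling problem such a tool addresses: placing queued jobs on the GPU nodes with the most free capacity. The class and job names are hypothetical, and the greedy policy is my own assumption, not Element AI's algorithm.

```python
from dataclasses import dataclass

@dataclass
class GpuNode:
    name: str
    total_gpus: int
    used_gpus: int = 0

    @property
    def free_gpus(self) -> int:
        return self.total_gpus - self.used_gpus

@dataclass
class Job:
    name: str
    gpus_needed: int

def schedule(jobs: list[Job], nodes: list[GpuNode]) -> dict[str, str]:
    """Greedily place each job on the node with the most free GPUs."""
    placement = {}
    for job in sorted(jobs, key=lambda j: j.gpus_needed, reverse=True):
        candidates = [n for n in nodes if n.free_gpus >= job.gpus_needed]
        if not candidates:
            placement[job.name] = "queued"   # wait until capacity frees up
            continue
        node = max(candidates, key=lambda n: n.free_gpus)
        node.used_gpus += job.gpus_needed
        placement[job.name] = node.name
    return placement

cluster = [GpuNode("node-a", 8), GpuNode("node-b", 4)]
jobs = [Job("train-resnet", 4), Job("finetune-bert", 2), Job("hparam-sweep", 8)]
print(schedule(jobs, cluster))
# {'hparam-sweep': 'node-a', 'train-resnet': 'node-b', 'finetune-bert': 'queued'}
```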

2nd Gen AMD EPYC™ Processors and AMD Radeon Instinct™ MI25 GPUs Extend Microsoft Azure High Performance Cloud Offerings

Retrieved on: 
Wednesday, March 25, 2020

SANTA CLARA, Calif., March 25, 2020 (GLOBE NEWSWIRE) -- Today, AMD announced that the 2nd Gen AMD EPYC processors and AMD Radeon Instinct MI25 GPUs are extending performance advantages through Microsoft Azure NVv4 virtual machines (VMs).

Key Points: 
  • SANTA CLARA, Calif., March 25, 2020 (GLOBE NEWSWIRE) -- Today, AMD announced that the 2nd Gen AMD EPYC processors and AMD Radeon Instinct MI25 GPUs are extending performance advantages through Microsoft Azure NVv4 virtual machines (VMs).
  • The Azure NVv4 VMs are also the first 2nd Gen AMD EPYC- and AMD Radeon Instinct-powered VMs from any cloud provider, and the first Azure virtual desktop supported by AMD processors.
  • "Working together with Microsoft Azure, one of our foundational AMD EPYC partners, we are excited to extend our performance advantages to new virtualization workloads with the first-ever 2nd Gen AMD EPYC- and AMD Radeon Instinct MI25-powered VMs from any cloud provider," AMD said in the announcement.
  • NVv4 VM: Powered by 2nd Gen AMD EPYC CPUs and AMD Radeon Instinct MI25 GPUs, NVv4 delivers a modern desktop and workstation experience in the cloud.
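
For context, the sketch below shows one way an NVv4-series virtual machine might be requested with the Azure Python SDK (azure-mgmt-compute). The resource group, image, NIC ID, and credentials are placeholders, and Standard_NV8as_v4 is used only as an example of the NVv4 size family; this is an illustrative sketch under those assumptions, not AMD's or Microsoft's published sample.

```python
# Minimal sketch of requesting an NVv4-series VM (2nd Gen AMD EPYC CPU +
# Radeon Instinct MI25 GPU partition) with the Azure Python SDK.
# Resource names, image, NIC ID, and credentials below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = compute.virtual_machines.begin_create_or_update(
    "my-resource-group",
    "nvv4-desktop",
    {
        "location": "eastus",
        # NVv4 sizes expose fractional or whole MI25 GPUs; Standard_NV8as_v4
        # is used here purely as an example.
        "hardware_profile": {"vm_size": "Standard_NV8as_v4"},
        "storage_profile": {
            "image_reference": {"id": "<managed-image-or-marketplace-image-id>"}
        },
        "os_profile": {
            "computer_name": "nvv4-desktop",
            "admin_username": "azureuser",
            "admin_password": "<strong-password>",
        },
        "network_profile": {"network_interfaces": [{"id": "<nic-resource-id>"}]},
    },
)
print(poller.result().provisioning_state)
```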

BOXX Introduces New NVIDIA-Powered Data Center System and More at GTC Digital

Retrieved on: 
Wednesday, March 25, 2020

"FLEXX provides all the power and performance of a desktop workstation inside a rack-mounted, high-density form factor," said Bill Leasure, BOXX VP of Marketing.

Key Points: 
  • "FLEXX provides all the power and performance of a desktop workstation inside a rack-mounted, high-density form factor," said Bill Leasure, BOXX VP of Marketing.
  • Powered by NVIDIA RTX GPUs, this unique system enables organizations to accelerate workflows while working remotely, with access to data, creative content, team projects, and more.
  • With the new FLEXX system, enterprises can provision Quadro Virtual Workstations in minutes, enabling designers and artists working from home to stay productive.
  • For 24 years, BOXX has combined record-setting performance, speed, and reliability with unparalleled industry knowledge to become the trusted choice of creative professionals worldwide.

Avnet to Distribute Mipsology's Breakthrough FPGA Deep Learning Inference Acceleration Software in APAC

Retrieved on: 
Wednesday, March 25, 2020

This agreement extends Avnet's IoT ecosystem, bringing Mipsology's breakthrough deep learning inference acceleration solution to its Asia customers.

Key Points: 
  • This agreement extends Avnet's IoT ecosystem, bringing Mipsology's breakthrough deep learning inference acceleration solution to its Asia customers.
  • Zebra eliminates the need for FPGA expertise, making FPGAs as easy to use for deep learning inference acceleration as CPUs and GPUs.
  • Zebra dramatically accelerates computation in the inference phase, reducing latency and boosting the performance of machine learning applications.
  • Mipsology is a groundbreaking startup focused on state-of-the-art acceleration for deep learning inference.
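
Mipsology's own API isn't described in the announcement, so the snippet below only illustrates the "drop-in accelerator" idea it gestures at, using ONNX Runtime's pluggable execution providers as an analogous (but unrelated) mechanism: the application code stays the same and only the provider list names the backend. The resnet50.onnx path is a placeholder.

```python
# Illustration of the "drop-in accelerator" idea with ONNX Runtime's pluggable
# execution providers (an analogous mechanism, not Mipsology's actual API):
# the application code is identical and only the provider list changes.
import numpy as np
import onnxruntime as ort

def run_inference(model_path: str, batch: np.ndarray, providers: list[str]):
    session = ort.InferenceSession(model_path, providers=providers)
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: batch})[0]

batch = np.random.rand(8, 3, 224, 224).astype(np.float32)

# CPU baseline and GPU-accelerated run differ only in the provider list;
# an FPGA-backed provider would slot in the same way.
cpu_out = run_inference("resnet50.onnx", batch, ["CPUExecutionProvider"])
gpu_out = run_inference("resnet50.onnx", batch,
                        ["CUDAExecutionProvider", "CPUExecutionProvider"])
```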

Supermicro Accelerates AI and Deep Learning from the Data Center to the Edge with New NVIDIA NGC-Ready Servers

Retrieved on: 
Tuesday, March 24, 2020

Supermicro is highlighting many of these systems today at the Supermicro GPU Live Forum in conjunction with NVIDIA GTC Digital.

Key Points: 
  • Supermicro is highlighting many of these systems today at the Supermicro GPU Live Forum in conjunction with NVIDIA GTC Digital.
  • Supermicro NGC-Ready systems allow customers to train AI models using NVIDIA V100 Tensor Core GPUs and to perform inference using NVIDIA T4 Tensor Core GPUs.
  • "With support for fast networking and storage, as well as NVIDIA GPUs, our Supermicro NGC-Ready systems are the most scalable and reliable servers to support AI.
  • As the leader in AI system technology, Supermicro offers multi-GPU optimized thermal designs that provide the highest performance and reliability for AI, deep learning, and HPC applications.
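
The train-on-V100 / infer-on-T4 split mentioned above follows a common pattern; the generic PyTorch sketch below shows it with a toy model and random data (nothing here is Supermicro- or NGC-specific): full-precision training on whatever GPU is available, then mixed-precision inference of the kind T4 Tensor Cores accelerate.

```python
# Generic train-then-serve sketch: full-precision training (V100-class GPU),
# then mixed-precision inference (T4-class GPU). Model and data are toys.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# --- training loop (would run on a V100-class training GPU) ---
for _ in range(100):
    x = torch.randn(32, 128, device=device)
    y = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

# --- inference (would run on a T4-class GPU; autocast uses FP16 Tensor Cores) ---
model.eval()
with torch.no_grad(), torch.cuda.amp.autocast(enabled=device.type == "cuda"):
    logits = model(torch.randn(32, 128, device=device))
print(logits.argmax(dim=1))
```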

LUCID Launches Helios™ Flex Time-of-Flight Module for Accurate 3D Object Detection and Measurement

Retrieved on: 
Wednesday, March 11, 2020

LUCID Vision Labs, Inc., a designer and manufacturer of unique and innovative industrial vision cameras, today announced the series production of its new Helios Flex 3D Time-of-Flight module.

Key Points: 
  • LUCID Vision Labs, Inc., a designer and manufacturer of unique and innovative industrial vision cameras, today announced the series production of its new Helios Flex 3D Time-of-Flight module.
  • The Helios Flex module includes a Software Development Kit (SDK) with GPU-accelerated depth processing and runs at 30 frames per second.
  • LUCID's new Helios Flex ToF module is easily integrated with embedded platforms such as the NVIDIA Jetson TX2 for accelerated depth processing.
  • LUCID's own ArenaFlex SDK includes easy-to-use controls for the Helios Flex ToF module.
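
To make the "3D object detection and measurement" use case concrete, the sketch below back-projects a depth frame into a point cloud with the standard pinhole model. The frame is simulated and the intrinsics are placeholder values rather than the Helios Flex calibration; nothing here uses LUCID's SDK.

```python
# Back-projecting a time-of-flight depth frame into a 3D point cloud, the basic
# step behind 3D object detection and measurement. The depth frame is simulated
# and the camera intrinsics (fx, fy, cx, cy) are placeholder values.
import numpy as np

H, W = 480, 640
fx = fy = 500.0          # focal lengths in pixels (placeholder)
cx, cy = W / 2, H / 2    # principal point (placeholder)

depth_mm = np.full((H, W), 1500.0) + 50.0 * np.random.rand(H, W)  # fake ~1.5 m scene

# Pixel grid -> camera-space XYZ via the pinhole model: X = (u - cx) * Z / fx, etc.
u, v = np.meshgrid(np.arange(W), np.arange(H))
z = depth_mm / 1000.0                     # metres
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)

print(points.shape)         # (307200, 3): one 3D point per pixel
print(points[:, 2].mean())  # average measured distance, ~1.53 m
```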

Neousys Technology Announces New Nuvo-7166GC Edge AI Inference Platform, Powered by NVIDIA Accelerated Computing

Retrieved on: 
Thursday, March 5, 2020

TAIPEI, Taiwan, March 5, 2020 /PRNewswire-PRWeb/ -- The NVIDIA edge computing platform spans from the power-efficient NVIDIA Jetson Nano to full-rack NVIDIA T4 servers.

Key Points: 
  • TAIPEI, Taiwan, March 5, 2020 /PRNewswire-PRWeb/ -- The NVIDIA edge computing platform spans from the power-efficient NVIDIA Jetson Nano to full-rack NVIDIA T4 servers.
  • Nuvo-7166GC is a leading ruggedized AI inference platform built around Neousys' patented Cassette module technology, which provides optimized cooling for NVIDIA T4 GPUs to ensure stable system operation in harsh environments.
  • Taking advantage of NVIDIA T4 GPUs, Neousys Technology's Nuvo-7166GC is designed for advanced inference acceleration applications such as voice, video, image and recommendation services.
  • Established in 2010, Neousys Technology designs and manufactures rugged embedded modules and systems with core expertise ranging from embedded computing to data acquisition and processing.

Next-Generation AMD EPYC™ CPUs and Radeon™ Instinct GPUs Enable El Capitan Supercomputer at Lawrence Livermore National Laboratory to Break 2 Exaflops Barrier

Retrieved on: 
Wednesday, March 4, 2020

With delivery expected in early 2023, the El Capitan system is projected to be the world's fastest supercomputer, with more than 2 exaflops of double-precision performance.

Key Points: 
  • With delivery expected in early 2023, the El Capitan system is projected to be the world's fastest supercomputer, with more than 2 exaflops of double-precision performance.
  • This record-setting performance will support National Nuclear Security Administration requirements for its primary mission of ensuring the safety, security, and reliability of the nation's nuclear stockpile.
  • "El Capitan will drive unprecedented advancements in HPC and AI, powered by next-generation AMD EPYC CPUs and Radeon Instinct GPUs," said Forrest Norrod, senior vice president and general manager, Datacenter and Embedded Systems Group, AMD.
  • AMD technology within El Capitan includes:
    Next-generation AMD EPYC processors, codenamed "Genoa," featuring the "Zen 4" processor core.
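
As a quick back-of-the-envelope on what the 2-exaflop figure means (my arithmetic, not from the announcement):

```python
# What "more than 2 exaflops" means in concrete terms.
exaflop = 1e18                       # floating-point operations per second
el_capitan_peak = 2 * exaflop        # quoted double-precision target

# Example workload: a simulation requiring 10**21 double-precision operations.
workload_flops = 1e21
seconds_at_peak = workload_flops / el_capitan_peak
print(f"{seconds_at_peak:.0f} s at peak")          # 500 s, under 9 minutes

# The same workload on a 100-petaflop machine (1e17 FLOP/s) takes 20x longer.
print(f"{workload_flops / 1e17 / 3600:.1f} h on a 100 PF system")  # ~2.8 h
```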

Gyrfalcon Acclaimed by Frost & Sullivan for Optimizing AI-enabled Solutions with AI Accelerator Chipsets

Retrieved on: 
Wednesday, February 26, 2020

This recognition underscores GTI's commitment to optimizing AI-powered solutions that deliver high performance with low energy consumption.

Key Points: 
  • This recognition underscores GTI's commitment to optimizing AI-powered solutions that deliver high performance with low energy consumption.
  • Data from the device's sensors is first handled by a host processor and then undergoes further processing in the AI accelerator (a minimal sketch of this flow follows the list).
  • Embedded with application-specific models, AI accelerator chips process the incoming data from the host processor before routing the results back to the host device to run the dedicated application.
  • Models can be designed around the application's requirements and embedded in the AI accelerator chip.
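
As referenced above, here is a minimal sketch of that host-processor / AI-accelerator split. The functions and the toy "model" are stand-ins of my own; a real deployment would offload run_on_accelerator() to the accelerator chip.

```python
# Minimal sketch of the host-processor / AI-accelerator split described above.
# The preprocessing and "accelerator" model below are simple stand-ins.
import numpy as np

def host_preprocess(raw_sensor_data: np.ndarray) -> np.ndarray:
    """Host processor: normalize raw sensor samples before offload."""
    return (raw_sensor_data - raw_sensor_data.mean()) / (raw_sensor_data.std() + 1e-8)

def run_on_accelerator(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """AI accelerator: apply the embedded application-specific model."""
    return features @ weights            # stand-in for the on-chip network

def host_application(scores: np.ndarray) -> int:
    """Host device: consume the accelerator's result in the application."""
    return int(scores.argmax())

sensor_frame = np.random.rand(1, 64)                  # fake sensor reading
model_weights = np.random.rand(64, 10)                # "embedded" model
features = host_preprocess(sensor_frame)
scores = run_on_accelerator(features, model_weights)  # offloaded step
print("predicted class:", host_application(scores))
```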