PyTorch

Tachyum Prodigy Native AI Supports TensorFlow and PyTorch

Retrieved on: 
Wednesday, August 26, 2020

Applications range from self-driving vehicles to more sophisticated, control-intensive disciplines like Spiking Neural Nets, Explainable AI, Symbolic AI and Bio AI.

Key Points: 
  • Applications range from self-driving vehicles to more sophisticated, control-intensive disciplines like Spiking Neural Nets, Explainable AI, Symbolic AI and Bio AI.
  • When deployed into AI environments, Prodigy is able to simplify software processes, accelerate performance, save energy and better incorporate rich data sets to allow for faster innovation.
  • With open-source solutions like TensorFlow and PyTorch, a hundred times more programmers can leverage the frameworks to code large-scale ML applications on Prodigy.
  • Tachyum's Prodigy can run HPC applications, convolution AI, explainable AI, general AI, bio AI and spiking neural networks, as well as normal data center workloads on a single homogeneous processor platform with its simple programming model.

Lightmatter Introduces Optical Processor to Speed Compute for Next-Generation Artificial Intelligence

Retrieved on: 
Monday, August 17, 2020

Lightmatter, a leader in silicon photonics processors, today announces its artificial intelligence (AI) photonic processor, a general-purpose AI inference accelerator that uses light to compute and transport data.

Key Points: 
  • Lightmatter, a leader in silicon photonics processors, today announces its artificial intelligence (AI) photonic processor, a general-purpose AI inference accelerator that uses light to compute and transport data.
  • Using light to calculate and communicate within the chip reduces heat, leading to an orders-of-magnitude reduction in energy consumption per chip and dramatic improvements in processor speed.
  • On August 18th, Lightmatter's VP of Engineering, Carl Ramey, will present the company's photonic processor architecture at Hot Chips 32.
  • Lightmatter's photonic processor runs standard machine learning frameworks, including PyTorch and TensorFlow, enabling state-of-the-art AI algorithms.

NEUCHIPS Announces World's First Deep Learning Recommendation Model (DLRM) Accelerator: RecAccel

Retrieved on: 
Tuesday, May 12, 2020

Running open-source PyTorch DLRM, RecAccel™ outperforms a server-class CPU and an inference GPU by 28X and 65X, respectively.

Key Points: 
  • Running open-source PyTorch DLRM, RecAccel™ outperforms a server-class CPU and an inference GPU by 28X and 65X, respectively.
  • It is equipped with an ultra-high-capacity, high-bandwidth memory subsystem for embedding table lookup and a massively parallel compute FPGA for neural network inference.
  • "Fast and accurate recommendation inference is the key to e-commerce business success," said Dr. Youn-Long Lin, CEO of NEUCHIPS.
  • About NEUCHIPS: NEUCHIPS Corp. is an application-specific compute solution provider based in Hsinchu, Taiwan.
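
The architecture described above — embedding-table lookups for sparse features feeding a neural network for dense compute — can be sketched in PyTorch. The class name, layer sizes, and feature counts below are illustrative assumptions, not NEUCHIPS code; the sketch only shows the two workload components a DLRM accelerator targets.

```python
import torch
import torch.nn as nn

class TinyDLRM(nn.Module):
    """Toy DLRM-style model: embedding tables for sparse categorical
    features plus an MLP over dense features (illustrative sizes)."""
    def __init__(self, num_embeddings=1000, dim=16, num_sparse=2, num_dense=4):
        super().__init__()
        # one embedding table per sparse feature -- the lookup-heavy part
        self.tables = nn.ModuleList(
            [nn.Embedding(num_embeddings, dim) for _ in range(num_sparse)]
        )
        self.bottom = nn.Sequential(nn.Linear(num_dense, dim), nn.ReLU())
        self.top = nn.Sequential(nn.Linear(dim * (num_sparse + 1), 1), nn.Sigmoid())

    def forward(self, dense, sparse):
        # sparse: (batch, num_sparse) integer ids, one lookup per table
        emb = [table(sparse[:, i]) for i, table in enumerate(self.tables)]
        x = torch.cat([self.bottom(dense)] + emb, dim=1)
        return self.top(x)  # click-probability scores in [0, 1]

model = TinyDLRM()
scores = model(torch.randn(8, 4), torch.randint(0, 1000, (8, 2)))
print(scores.shape)  # torch.Size([8, 1])
```

In production-scale DLRM the embedding tables can run to many gigabytes, which is why the accelerator pairs a high-capacity, high-bandwidth memory subsystem (for the lookups) with a massively parallel compute FPGA (for the network).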

OpenCV.org partners with Microsoft Azure to provide GPU computing to its Deep Learning course students

Retrieved on: 
Monday, February 24, 2020

The third course covers Deep Learning for solving various computer vision problems.

Key Points: 
  • The third course covers Deep Learning for solving various computer vision problems.
  • In addition to teaching students the theory behind Deep Learning, the hands-on course covers practical considerations needed to successfully train Deep Neural Networks.
  • "OpenCV.org is honored to receive free GPU time from Microsoft on its Azure Platform for our students enrolled in the Deep Learning with PyTorch course."
  • Fortunately for newly enrolled students of the Deep Learning with PyTorch course, Microsoft will generously provide them with 100 hours of GPU time on their Microsoft Azure Cloud Platform.

Paperspace Introduces Free GPU Cloud Service For ML Developers

Retrieved on: 
Thursday, October 10, 2019

NEW YORK, Oct. 10, 2019 /PRNewswire/ -- Paperspace announced today "Gradient Community Notebooks", a free cloud GPU service based on Jupyter notebooks designed for machine learning and deep learning development.

Key Points: 
  • NEW YORK, Oct. 10, 2019 /PRNewswire/ -- Paperspace announced today "Gradient Community Notebooks", a free cloud GPU service based on Jupyter notebooks designed for machine learning and deep learning development.
  • Now, any developer working with popular deep learning frameworks such as PyTorch, TensorFlow, Keras, and OpenCV, can launch and collaborate on their ML projects.
  • "GPUs are essential to ML development, yet the services available today are complex and prohibitively expensive for many developers," said Dillon Erb, CEO & Co-founder, Paperspace.
  • "This is precisely why we created Gradient Community: to make GPU and ML development resources widely accessible and easy to deploy."

Gyrfalcon Technology Introduces IP Licensing Model for Greater Customization for AI Chips from "Edge to Cloud"

Retrieved on: 
Wednesday, April 24, 2019

The AI cores accelerate the convolutional neural network (CNN) on AI frameworks like Caffe, PyTorch, and TensorFlow.

Key Points: 
  • The AI cores accelerate the convolutional neural network (CNN) on AI frameworks like Caffe, PyTorch, and TensorFlow.
  • "GTI is differentiating itself at a time when the market has been saturated with companies talking about AI chips.
  • We have been producing our three different AI Accelerator chips, now in the hands of customers designing end products, so our silicon is proven."
  • Gyrfalcon Technology Inc. (GTI) is the world's leading developer of high-performance AI Accelerators that use low power and are packaged in low-cost, small-sized chips.

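
As a point of reference for what such accelerators offload, a minimal convolutional network in PyTorch looks like the sketch below; the layer sizes are arbitrary and purely illustrative (the convolution layers are where most of the multiply-accumulate work the AI cores accelerate lives).

```python
import torch
import torch.nn as nn

# toy CNN of the kind such accelerators target; conv layers dominate the FLOPs
net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),   # (N, 3, 32, 32) -> (N, 8, 32, 32)
    nn.ReLU(),
    nn.MaxPool2d(2),                             # -> (N, 8, 16, 16)
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # -> (N, 16, 16, 16)
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                     # -> (N, 16, 1, 1)
    nn.Flatten(),                                # -> (N, 16)
    nn.Linear(16, 10),                           # -> (N, 10) class logits
)
logits = net(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```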
AppTek Announces PyTorch Backend for RETURNN

Retrieved on: 
Tuesday, March 19, 2019

The combination of RETURNN and PyTorch allows high scalability, utilizing high degrees of parallelization either to process large amounts of data simultaneously or to increase throughput.

Key Points: 
  • The combination of RETURNN and PyTorch allows high scalability, utilizing high degrees of parallelization either to process large amounts of data simultaneously or to increase throughput.
  • "We are excited to see the power of RETURNN unfold using the PyTorch back-end; we believe that RETURNN will bring benefits to scientists doing rapid product development."
  • AppTek's lead scientist on the effort, Patrick Doetsch, said: "We are happy to announce that we successfully integrated PyTorch as a third back-end into our acoustic model training software, RETURNN."
  • Just as with our TensorFlow and Theano back-ends, the PyTorch version allows users to train state-of-the-art acoustic LSTM models using PyTorch modules.
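
The kind of model the announcement describes — an LSTM mapping acoustic feature frames to per-frame class posteriors — can be sketched in plain PyTorch. The class name, feature dimension, and class count below are illustrative assumptions, not RETURNN code.

```python
import torch
import torch.nn as nn

class AcousticLSTM(nn.Module):
    """Toy acoustic model: bidirectional LSTM over feature frames,
    one class posterior per frame (illustrative dimensions)."""
    def __init__(self, feat_dim=40, hidden=64, num_classes=42):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, frames):                  # frames: (batch, time, feat_dim)
        h, _ = self.lstm(frames)                # (batch, time, 2 * hidden)
        return self.out(h).log_softmax(-1)      # per-frame class log-probs

model = AcousticLSTM()
logp = model(torch.randn(2, 100, 40))           # 2 utterances, 100 frames each
print(logp.shape)  # torch.Size([2, 100, 42])
```

Data parallelism of the kind RETURNN exploits comes from batching many utterances per step; the recurrent computation itself is sequential along the time axis.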

Pyro Probabilistic Programming Language Becomes Newest LF Deep Learning Project

Retrieved on: 
Thursday, February 21, 2019

SAN FRANCISCO, Feb. 21, 2019 /PRNewswire/ -- The LF Deep Learning Foundation (LF DL), a Linux Foundation project that supports and sustains open source innovation in artificial intelligence (AI), machine learning (ML), and deep learning (DL), announces the Pyro project, started by Uber, as its newest incubation project.

Key Points: 
  • SAN FRANCISCO, Feb. 21, 2019 /PRNewswire/ -- The LF Deep Learning Foundation (LF DL), a Linux Foundation project that supports and sustains open source innovation in artificial intelligence (AI), machine learning (ML), and deep learning (DL), announces the Pyro project, started by Uber, as its newest incubation project.
  • Built on top of the PyTorch framework, Pyro is a deep probabilistic programming framework that facilitates large-scale exploration of AI models, making deep learning model development and testing quicker and more seamless.
  • "The LF Deep Learning Foundation is excited to welcome Pyro to our family of projects.
  • Today's announcement of Uber's contribution of the project brings us closer to our goal of building a comprehensive ecosystem of AI, machine learning and deep learning projects," said Ibrahim Haddad, Executive Director of the LF DL.
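
Pyro expresses probabilistic models with primitives such as `pyro.sample` and automates inference over them. As a rough illustration of the underlying idea using only `torch.distributions` — the PyTorch layer Pyro builds on — the sketch below fits the mode of the posterior over a regression weight by gradient descent on the negative log joint. All names and numbers are illustrative assumptions, not Pyro code.

```python
import torch
from torch.distributions import Normal

# Model: w ~ Normal(0, 10);  y_i ~ Normal(w * x_i, 1).  Infer w from data.
torch.manual_seed(0)
x = torch.linspace(-1, 1, 50)
y = 3.0 * x + 0.1 * torch.randn(50)        # synthetic data, true slope w = 3

w = torch.zeros((), requires_grad=True)    # point estimate of the weight
opt = torch.optim.Adam([w], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    log_prior = Normal(0.0, 10.0).log_prob(w)
    log_lik = Normal(w * x, 1.0).log_prob(y).sum()
    loss = -(log_prior + log_lik)          # negative log joint density
    loss.backward()
    opt.step()
print(w.item())  # ends near the true slope of 3
```

Pyro's contribution is automating this kind of inference (and far richer variants, e.g. stochastic variational inference over deep models) so the user only writes the generative model.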