PyTorch

Lightmatter Introduces Optical Processor to Speed Compute for Next-Generation Artificial Intelligence

Retrieved on: 
Monday, August 17, 2020

Lightmatter, a leader in silicon photonics processors, today announces its artificial intelligence (AI) photonic processor, a general-purpose AI inference accelerator that uses light to compute and transport data.

Key Points: 
  • Lightmatter, a leader in silicon photonics processors, today announces its artificial intelligence (AI) photonic processor, a general-purpose AI inference accelerator that uses light to compute and transport data.
  • Using light to calculate and communicate within the chip reduces heat, leading to an orders-of-magnitude reduction in energy consumption per chip and dramatic improvements in processor speed.
  • On August 18th, Lightmatter's VP of Engineering, Carl Ramey, will present the company's photonic processor architecture at Hot Chips 32.
  • Lightmatter's photonic processor runs standard machine learning frameworks, including PyTorch and TensorFlow, enabling state-of-the-art AI algorithms.
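Lightmatter's SDK is not described in the announcement, so the snippet below is only a minimal sketch of the framework-level workload such an accelerator would take over: a stock PyTorch model put into evaluation mode and traced to TorchScript, a common hand-off format for inference back-ends. The placeholder model, input shape, and TorchScript export are illustrative assumptions, not Lightmatter's documented workflow.

    import torch
    import torch.nn as nn

    # Placeholder network standing in for a real inference workload.
    model = nn.Sequential(
        nn.Linear(256, 512),
        nn.ReLU(),
        nn.Linear(512, 10),
    ).eval()

    # Trace to TorchScript; accelerator back-ends typically consume an
    # exported graph like this rather than eager Python code.
    example = torch.randn(1, 256)
    scripted = torch.jit.trace(model, example)

    with torch.no_grad():
        logits = scripted(example)   # shape: (1, 10)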

NEUCHIPS Announces World's First Deep Learning Recommendation Model (DLRM) Accelerator: RecAccel

Retrieved on: 
Tuesday, May 12, 2020

Running open-source PyTorch DLRM, RecAccel™ outperforms server-class CPU and inference GPU by 28X and 65X, respectively.

Key Points: 
  • Running open-source PyTorch DLRM, RecAccel™ outperforms server-class CPU and inference GPU by 28X and 65X, respectively.
  • It is equipped with an ultra-high-capacity, high-bandwidth memory subsystem for embedding table lookup and a massively parallel compute FPGA for neural network inference.
  • "Fast and accurate recommendation inference is the key to e-commerce business success," said Dr.Youn-Long Lin, CEO of NEUCHIPS.
  • About NEUCHIPS: NEUCHIPS Corp. is an application-specific compute solution provider based in Hsinchu, Taiwan.
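The open-source PyTorch DLRM cited above pairs large sparse embedding tables (the memory-capacity and bandwidth-heavy lookups) with small dense MLPs (the compute-heavy inference). The following is a heavily simplified, hypothetical DLRM-style module in plain PyTorch, not NEUCHIPS' RecAccel code; the table counts, dimensions, and sum-pooled feature concatenation are assumptions chosen only to show where the two stages sit.

    import torch
    import torch.nn as nn

    class TinyDLRM(nn.Module):
        """Toy DLRM-style model: sparse embedding lookups feeding dense MLPs."""

        def __init__(self, num_embeddings=1000, dim=16, num_tables=4, dense_in=13):
            super().__init__()
            # Embedding tables: the part served by a high-capacity memory subsystem.
            self.tables = nn.ModuleList(
                nn.EmbeddingBag(num_embeddings, dim, mode="sum") for _ in range(num_tables)
            )
            # Dense MLPs: the part served by the parallel compute fabric.
            self.bottom = nn.Sequential(nn.Linear(dense_in, dim), nn.ReLU())
            self.top = nn.Sequential(nn.Linear(dim * (num_tables + 1), 1), nn.Sigmoid())

        def forward(self, dense, sparse):
            # sparse: one LongTensor of category ids per table, shape (batch, ids_per_bag).
            pooled = [table(ids) for table, ids in zip(self.tables, sparse)]
            features = torch.cat([self.bottom(dense)] + pooled, dim=1)
            return self.top(features)

    model = TinyDLRM().eval()
    dense = torch.randn(8, 13)
    sparse = [torch.randint(0, 1000, (8, 3)) for _ in range(4)]
    with torch.no_grad():
        scores = model(dense, sparse)   # predicted click probabilities, shape (8, 1)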

OpenCV.org partners with Microsoft Azure to provide GPU computing to its Deep Learning course students

Retrieved on: 
Monday, February 24, 2020

The third course covers Deep Learning for solving various computer vision problems.

Key Points: 
  • The third course covers Deep Learning for solving various computer vision problems.
  • In addition to teaching students the theory behind Deep Learning, the hands-on course covers practical considerations needed to successfully train Deep Neural Networks.
  • "OpenCV.org is honored to receive free GPU time by Microsoft on its Azure Platform for our students enrolled in the Deep Learning with PyTorch course.
  • Fortunately for newly-enrolled students of the Deep Learning with PyTorch course, Microsoft will generously provide them with 100 hours of GPU time on their Microsoft Azure Cloud Platform .
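As a rough idea of what those GPU hours get used for, here is a minimal, self-contained PyTorch training loop of the kind a course exercise might run on an Azure GPU instance; the tiny network, synthetic data, and hyperparameters are placeholders, not course material.

    import torch
    import torch.nn as nn

    # Use the GPU when one is available (e.g. on an Azure GPU instance).
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Placeholder classifier and synthetic data standing in for a real exercise.
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 128),
        nn.ReLU(),
        nn.Linear(128, 10),
    ).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    images = torch.randn(64, 1, 28, 28, device=device)
    labels = torch.randint(0, 10, (64,), device=device)

    for step in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()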

Paperspace Introduces Free GPU Cloud Service For ML Developers

Retrieved on: 
Thursday, October 10, 2019

NEW YORK, Oct. 10, 2019 /PRNewswire/ -- Paperspace announced today "Gradient Community Notebooks", a free cloud GPU service based on Jupyter notebooks designed for machine learning and deep learning development.

Key Points: 
  • NEW YORK, Oct. 10, 2019 /PRNewswire/ -- Paperspace announced today "Gradient Community Notebooks", a free cloud GPU service based on Jupyter notebooks designed for machine learning and deep learning development.
  • Now, any developer working with popular deep learning frameworks such as PyTorch, TensorFlow, Keras, and OpenCV, can launch and collaborate on their ML projects.
  • "GPUs are essential to ML development, yet the services available today are complex and prohibitively expensive for many developers," said Dillon Erb, CEO & Co-founder, Paperspace.
  • "This is precisely why we created Gradient Community: to make GPU and ML development resources widely accessible and easy to deploy.

Gyrfalcon Technology Introduces IP Licensing Model for Greater Customization for AI Chips from "Edge to Cloud"

Retrieved on: 
Wednesday, April 24, 2019

The AI cores accelerate the convolutional neural network (CNN) on AI frameworks like Caffe, PyTorch, and TensorFlow.

Key Points: 
  • The AI cores accelerate the convolutional neural network (CNN) on AI frameworks like Caffe, PyTorch, and TensorFlow.
  • "GTI is differentiating itself at a time when the market has been saturated with companies talking about AI chips.
  • We have been producing our three different AI Accelerator chips, now in the hands of customers designing end products, so our silicon is proven.
  • Gyrfalcon Technology Inc. (GTI) is the world's leading developer of high-performance AI Accelerators that use low power and are packaged in low-cost, small-sized chips.

AppTek Announces PyTorch Backend for RETURNN

Retrieved on: 
Tuesday, March 19, 2019

The combination of RETURNN and PyTorch allows high scalability, using a high degree of parallelization either to process large amounts of data simultaneously or to increase throughput.

Key Points: 
  • The combination of RETURNN and PyTorch allows high scalability, using a high degree of parallelization either to process large amounts of data simultaneously or to increase throughput.
  • "We are excited to see the power of RETURNN unfold using the PyTorch back-end; we believe that RETURNN will bring benefits to scientists who do rapid product development."
  • AppTek's lead scientist on this effort, Patrick Doetsch, said: "We are happy to announce that we successfully integrated PyTorch as a third back-end into our acoustic model training software, RETURNN."
  • Just as with our TensorFlow and Theano back-ends, the PyTorch version allows users to train state-of-the-art acoustic LSTM models using PyTorch modules.
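RETURNN's own configuration format and PyTorch back-end API are not reproduced here; the sketch below uses only stock PyTorch modules to illustrate the kind of LSTM acoustic model described in the key points above, mapping frame-level speech features to per-frame label scores. The layer sizes, the bidirectional choice, and the label inventory are assumptions.

    import torch
    import torch.nn as nn

    class LSTMAcousticModel(nn.Module):
        """Frame-wise acoustic model: feature frames in, per-frame label logits out."""

        def __init__(self, n_features=40, hidden=512, n_layers=4, n_labels=5000):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, num_layers=n_layers,
                                bidirectional=True, batch_first=True)
            self.output = nn.Linear(2 * hidden, n_labels)

        def forward(self, frames):
            # frames: (batch, time, n_features), e.g. log-mel filterbank features.
            hidden, _ = self.lstm(frames)
            return self.output(hidden)   # (batch, time, n_labels)

    model = LSTMAcousticModel()
    feats = torch.randn(2, 300, 40)      # two utterances, 300 frames each
    logits = model(feats)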

Pyro Probabilistic Programming Language Becomes Newest LF Deep Learning Project

Retrieved on: 
Thursday, February 21, 2019

SAN FRANCISCO, Feb. 21, 2019 /PRNewswire/ -- The LF Deep Learning Foundation (LF DL), a Linux Foundation project that supports and sustains open source innovation in artificial intelligence (AI), machine learning (ML), and deep learning (DL), announces the Pyro project, started by Uber, as its newest incubation project.

Key Points: 
  • SAN FRANCISCO, Feb. 21, 2019 /PRNewswire/ -- The LF Deep Learning Foundation (LF DL), a Linux Foundation project that supports and sustains open source innovation in artificial intelligence (AI), machine learning (ML), and deep learning (DL), announces the Pyro project, started by Uber, as its newest incubation project.
  • Built on top of the PyTorch framework, Pyro is a deep probabilistic programming framework that facilitates large-scale exploration of AI models, making deep learning model development and testing quicker and more seamless.
  • "The LF Deep Learning Foundation is excited to welcome Pyro to our family of projects.
  • Today's announcement of Uber's contribution of the project brings us closer to our goal of building a comprehensive ecosystem of AI, machine learning and deep learning projects," said Ibrahim Haddad, Executive Director of the LF DL.
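Pyro's core idea, probabilistic models written as ordinary PyTorch code with named sample sites and fitted by stochastic variational inference, fits in a few lines. The toy Gaussian model and hyperparameters below are illustrative assumptions, not part of the announcement.

    import torch
    import pyro
    import pyro.distributions as dist
    from pyro.infer import SVI, Trace_ELBO
    from pyro.infer.autoguide import AutoNormal
    from pyro.optim import Adam

    def model(data):
        # Latent mean with a standard-normal prior.
        mu = pyro.sample("mu", dist.Normal(0.0, 1.0))
        # Conditionally independent observations.
        with pyro.plate("data", len(data)):
            pyro.sample("obs", dist.Normal(mu, 1.0), obs=data)

    data = torch.randn(100) + 3.0        # toy observations centred near 3
    guide = AutoNormal(model)            # automatic variational approximation
    svi = SVI(model, guide, Adam({"lr": 0.05}), loss=Trace_ELBO())

    for step in range(500):
        loss = svi.step(data)            # one gradient step on the ELBO
    print(loss)                          # final ELBO loss after fitting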