PyTorch

AMD Unveils Purpose-Built, FPGA-Based Accelerator for Ultra-Low Latency Electronic Trading

Retrieved on: 
Wednesday, September 27, 2023

SANTA CLARA, Calif., Sept. 27, 2023 (GLOBE NEWSWIRE) -- AMD (NASDAQ: AMD) today announced the AMD Alveo™ UL3524 accelerator card, a new fintech accelerator designed for ultra-low latency electronic trading applications. Already deployed by leading trading firms and enabling multiple solution partner offerings, the Alveo UL3524 provides proprietary traders, market makers, hedge funds, brokerages, and exchanges with a state-of-the-art FPGA platform for electronic trading at nanosecond (ns) speed.

Key Points: 
  • Solution partners Alpha Data, Exegy and Hypertec add to a growing ecosystem of ultra-low latency solutions for the fintech market.
  • “In ultra-low latency trading, a nanosecond can determine the difference between a profitable or losing trade,” said Hamid Salehi, director of product marketing at AMD.
  • The AMD Virtex™ UltraScale+ VU2P FPGA powering the Alveo UL3524 accelerator card is enabling ultra-low latency appliances from Alpha Data.
  • "The new Virtex UltraScale+ FPGA from AMD brings a step change to ultra-low latency trading and networking,” said David Miller, managing director of Alpha Data.

Vultr Launches GPU Stack and Container Registry for AI Model Acceleration Worldwide

Retrieved on: 
Tuesday, September 26, 2023

The GPU Stack supports instant provisioning of the full array of NVIDIA GPUs, while the new Vultr Container Registry makes AI pre-trained NVIDIA NGC models globally available for on-demand provisioning, development, training, tuning and inference.

Key Points: 
  • Available across Vultr’s 32 cloud data center locations on all six continents, the new Vultr GPU Stack and Container Registry accelerate collaboration and the development and deployment of AI and machine learning (ML) models.
  • Vultr also launched its new Kubernetes-based Vultr Container Registry, fully integrated with the Vultr GPU Stack (a pull sketch in Python follows this list).
  • A model trained and tuned on the GPU Stack is then available in each company’s private container registry, accessible only to authorized users.
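
The bullets above describe pulling pre-trained NGC model containers from a private registry. A minimal sketch of that workflow using the Docker SDK for Python follows; the registry hostname, repository path, tag, and credentials are placeholders invented for illustration, not actual Vultr endpoints.

```python
# Sketch: pulling a pre-trained model image from a private Vultr
# Container Registry with the Docker SDK for Python (pip install docker).
# Registry host, repository, tag, and credentials below are placeholders.
import docker

client = docker.from_env()

# Authenticate against the private registry (placeholder credentials).
client.login(
    username="example-user",
    password="example-api-token",
    registry="registry.example.vultr.com",
)

# Pull a pre-trained model container for local tuning or inference.
image = client.images.pull(
    "registry.example.vultr.com/ngc/pytorch-llm",  # placeholder repository
    tag="latest",
)
print(image.id)
```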

Tachyum Runs x86-64 Binaries on Prodigy FPGA as Its Key Milestone

Retrieved on: 
Tuesday, September 26, 2023

Tachyum® today announced that it has successfully demonstrated seamless execution of a non-native (x86_64) application under Linux running on the Prodigy FPGA emulation system.

Key Points: 
  • A standard dynamic binary translator efficiently runs unmodified Linux x86 binaries right out of the box, with no recompilation or porting required (a toy sketch of the technique follows this list).
  • Prodigy allows users to mix x86 applications with native Prodigy applications, as previously demonstrated by running native Prodigy Apache web servers alongside x86 Linux binary databases.
  • “Demonstrating the ability to run x86-64 binary applications on the Prodigy processor emulation is a key milestone for Tachyum and further validates our architecture before tape out,” said Dr. Radoslav Danilak, founder and CEO of Tachyum.
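
To picture what a dynamic binary translator does, here is a deliberately tiny Python sketch: guest instructions are decoded once, translated into host-executable closures, and cached so hot code pays the translation cost only on first execution. The toy "guest ISA" is invented for illustration; this shows the general technique, not Tachyum's translator.

```python
# Toy dynamic binary translation (DBT) sketch: translate each guest
# instruction into a host closure on first touch, then reuse the cached
# translation on subsequent executions.
GUEST_PROGRAM = [
    ("mov", "r0", 10),     # r0 = 10
    ("mov", "r1", 32),     # r1 = 32
    ("add", "r0", "r1"),   # r0 += r1
    ("halt",),
]

def translate(instr):
    """Translate one guest instruction into a host-executable closure."""
    op = instr[0]
    if op == "mov":
        _, dst, imm = instr
        return lambda regs: regs.__setitem__(dst, imm)
    if op == "add":
        _, dst, src = instr
        return lambda regs: regs.__setitem__(dst, regs[dst] + regs[src])
    if op == "halt":
        return None  # signals end of execution
    raise ValueError(f"unknown guest opcode: {op}")

def run(program):
    cache = {}                # translation cache: guest PC -> host closure
    regs = {"r0": 0, "r1": 0}
    pc = 0
    while pc < len(program):
        if pc not in cache:   # translate only on first execution
            cache[pc] = translate(program[pc])
        host_code = cache[pc]
        if host_code is None: # halt
            break
        host_code(regs)
        pc += 1
    return regs

print(run(GUEST_PROGRAM))  # {'r0': 42, 'r1': 32}
```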

MongoDB Launches Advanced Data Management Capabilities to Run Applications Anywhere

Retrieved on: 
Tuesday, September 26, 2023

LONDON, Sept. 26, 2023 /PRNewswire/ -- MongoDB, Inc. (NASDAQ: MDB) today at MongoDB.local London announced MongoDB Atlas for the Edge, a set of capabilities that make it easier for organizations to deploy applications closer to where real-time data is generated, processed, and stored—across devices, on-premises data centers, and major cloud providers. With MongoDB Atlas for the Edge, data is securely stored and synchronized in real time across data sources and destinations to provide highly available, resilient, and reliable applications. Organizations can now use MongoDB Atlas for the Edge to build, deploy, and manage applications that are accessible virtually anywhere for use cases like connected vehicles, smart factories, and supply chain optimization—without the complexity typically associated with operating distributed applications at the edge. To get started with MongoDB Atlas for the Edge, visit mongodb.com/use-cases/edge-computing.

Key Points: 
  • Tens of thousands of customers and millions of developers today rely on MongoDB Atlas to run business-critical applications for real-time inventory management, predictive maintenance, and high-volume financial transactions.
  • Run applications in locations with intermittent network connectivity: With MongoDB Atlas Edge Server and Atlas Device Sync, organizations can use a pre-built, local-first data synchronization layer for applications running on kiosks or on mobile and IoT devices to prevent data loss and improve offline application experiences (a minimal driver sketch follows this list).
  • Easily secure edge applications for data privacy and compliance: MongoDB Atlas for the Edge helps organizations ensure their edge deployments are secure with built-in security capabilities.
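
To make the local-first pattern concrete, here is a minimal PyMongo sketch that writes to a local Edge Server endpoint instead of the remote cloud cluster, so the write succeeds even while offline and syncs once connectivity returns. The URI, port, database, and collection names are placeholders; consult the Atlas Edge Server documentation for real connection details.

```python
# Local-first write against a MongoDB Atlas Edge Server via the standard
# PyMongo driver (pip install pymongo). URI and names are placeholders.
from datetime import datetime, timezone

from pymongo import MongoClient

# Connect to the local Edge Server rather than the remote Atlas cluster;
# the Edge Server synchronizes with Atlas when the network is available.
client = MongoClient("mongodb://localhost:27021")  # placeholder port
inventory = client["store"]["inventory"]

# This write lands on the edge immediately and syncs upstream later.
inventory.insert_one({
    "sku": "A-1001",
    "qty_delta": -1,
    "recorded_at": datetime.now(timezone.utc),
})

print(inventory.count_documents({"sku": "A-1001"}))
```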

OctaiPipe releases version 2.0 of its Federated Learning Operations (FL-Ops) platform for Critical Infrastructure on-device AI

Retrieved on: 
Thursday, September 21, 2023

OctaiPipe, the Federated Learning Operations (FL-Ops) company, today announced general availability of version 2.0 of its platform.

Key Points: 
  • Federated Learning Operations (FL-Ops) enables the deployment of AI to the edge and the management of distributed learning across a network of intelligent devices (a minimal aggregation sketch follows this list).
  • "Society depends on the resilience, performance and security of our Critical Infrastructure – by making Federated Learning for IoT easy to deploy, OctaiPipe is ensuring Critical Infrastructure can continue to be trusted in the age of AI."
  • Watch the OctaiPipe v2.0 demo here: https://youtu.be/NCKB6tI_wck
  • Launched in 2022, OctaiPipe is an end-to-end Federated Learning (FL) Edge AI platform optimised for creating, deploying, and managing machine learning IoT solutions in Critical Infrastructure environments.
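
For readers new to federated learning, the aggregation step it relies on fits in a few lines. The sketch below implements generic federated averaging (FedAvg) over simulated edge devices with NumPy: only model weights travel to the aggregator, never the raw data. This illustrates the technique itself and is unrelated to OctaiPipe's actual API.

```python
# Generic federated averaging (FedAvg) sketch: each simulated device runs
# local SGD on its private data; the server averages the resulting weights,
# weighting each device by its local sample count.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's local training: a few SGD steps on linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fedavg(local_weights, sizes):
    """Server-side aggregation, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])
global_w = np.zeros(2)

# Three simulated edge devices with differently sized private datasets.
devices = []
for n in (50, 120, 80):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    devices.append((X, y))

for _ in range(10):  # federated rounds
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    global_w = fedavg(local_ws, [len(y) for _, y in devices])

print(global_w)  # approaches [2.0, -3.0] without sharing raw data
```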

Intel Innovation 2023: Accelerating the Convergence of AI and Security

Retrieved on: 
Wednesday, September 20, 2023

Key Points: 
  • Developers eager to harness AI face challenges that impede the widespread deployment of solutions from client and edge to data center and cloud.
  • Intel is committed to addressing these challenges with a broad software-defined, silicon-accelerated approach that is grounded in openness, choice, trust and security.
  • By delivering the tools that streamline development of secure AI applications and ease the investment required to maintain and scale those solutions, Intel is empowering developers to bring AI everywhere.
  • Visit the Intel Newsroom to catch up on the announcements, which include news from Intel manufacturing, hardware, software and services.

Tachyum Adds LLVM for AI and Linux Rust Support

Retrieved on: 
Tuesday, September 19, 2023

Tachyum® today announced the expansion of its Prodigy software ecosystem with the addition of LLVM for AI and Linux Rust support.

Key Points: 
  • Rust is a multi-paradigm, general-purpose programming language with an emphasis on performance, type safety and concurrency, and has become the second language officially accepted for Linux kernel development.
  • LLVM plays a large role in every major AI framework, including PyTorch and TensorFlow, whose AI compilers build on LLVM for native instruction generation (a minimal LLVM-JIT illustration follows this list).
  • Additionally, standalone AI compilers, like Apache TVM, are also based on the LLVM compiler infrastructure.
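
The LLVM point above is easy to demonstrate from Python. Numba JIT-compiles numeric functions to native code through LLVM (via llvmlite), the same lowering pattern AI compilers build on; this is a generic illustration, not Tachyum's or any framework's specific compiler.

```python
# Minimal LLVM-backed JIT illustration using Numba (pip install numba).
# The first call compiles relu_sum to native code through LLVM; later
# calls run the cached machine code directly.
import numpy as np
from numba import njit

@njit
def relu_sum(x):
    total = 0.0
    for v in x:
        if v > 0.0:
            total += v
    return total

x = np.random.randn(1_000_000)
relu_sum(x)         # triggers LLVM compilation on first call
print(relu_sum(x))  # runs the compiled native code
```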

Cadence Accelerates On-Device and Edge AI Performance and Efficiency with New Neo NPU IP and NeuroWeave SDK for Silicon Design

Retrieved on: 
Wednesday, September 13, 2023

The new highly scalable Cadence® Neo™ Neural Processing Units (NPUs) deliver a wide range of AI performance in a low-energy footprint, bringing new levels of performance and efficiency to AI SoCs.

Key Points: 
  • Complementing the AI hardware, the new NeuroWeave™ Software Development Kit (SDK) provides developers with a “one-tool” AI software solution across Cadence AI and Tensilica® IP products for no-code AI development.
  • View the full release here: https://www.businesswire.com/news/home/20230913932401/en/

Howso Launches Open-Source AI Engine, a Powerful Alternative to Black-Box AI

Retrieved on: 
Wednesday, September 13, 2023

RALEIGH, N.C., Sept. 13, 2023 /PRNewswire-PRWeb/ -- Howso, provider of explainable AI, today announced the release of a free open-source version of Howso Engine, a fully auditable ML framework that offers a powerful alternative to black-box AI libraries such as PyTorch and JAX. Howso, formerly known as Diveplane, is launching Howso Engine to enable data scientists, ML engineers, and developers to leverage the full power of AI while achieving state-of-the-art transparency and interpretability.

Key Points: 
  • "With this open-source release, we take a monumental step forward in our mission to make explainable AI the global standard.
  • "At Howso, we refuse to accept the status quo of black-box AI plagued by hallucinations, security risks, bias, and errors."

Dihuni Ships GPU Servers for Generative AI and LLM Applications

Retrieved on: 
Friday, September 1, 2023

MCLEAN, Va., Sept. 1, 2023 /PRNewswire/ -- Dihuni, a leading artificial intelligence (AI), data center and Internet of Things (IoT) solutions company today announced it has started shipping new OptiReady GPU servers and workstations designed for Generative AI and LLM applications.

Key Points: 
  • Dihuni has enabled a new suite of GPU servers with an online configurator that lets customers easily select GPU, CPU and other configuration options.
  • Servers can be purchased stand-alone; for larger deployments such as LLM and generative AI workloads, Dihuni offers fully racked and cabled pods of high-performance GPU clusters.
  • The complete line of new Generative AI accelerated GPU servers allows flexibility for students, researchers, scientists, architects and designers to select systems that can be sized correctly and optimized for their AI and HPC applications.