Press Room Archives - Microway | We Speak HPC & AI
https://www.microway.com/category/press-room/

Microway Selected as NVIDIA Public Sector Partner of the Year for the Americas
https://www.microway.com/press-room/microway-selected-as-nvidia-public-sector-partner-of-the-year-for-the-americas/

Microway recognized for leadership in deployments at major laboratories and defense contractors

April 4, 2023–Plymouth, MA

Microway, a leading provider of solutions for the intersection of AI and HPC, today announced that it has been selected as the NVIDIA Partner Network (NPN) 2023 Public Sector Partner of the Year in the Americas.

[Image: Partner logo - Microway is an NVIDIA Elite Partner]

The prestigious honor is awarded to a single partner per region in the public sector annually.

Microway architects clusters, servers, and NVIDIA DGX™ system and AI software deployments for customers running workloads in AI and HPC. These end users run the world’s most demanding applications—and trust in Microway’s expertise in designing superior hardware and software deployments to meet their needs and performance requirements.

In 2022, Microway delivered complex deployments to customers throughout the public sector domain. These included multi-hundred GPU clusters with NVIDIA A100 Tensor Core GPUs and NVIDIA DGX AI supercomputing deployments, which include NVIDIA AI Enterprise, the software stack of the NVIDIA AI platform. The NPN program has recognized Microway for its leading role in delivering many robust, successful deployments ready to run applications on day one after power-on.

The NPN program provides tools, training, and support that help enable Microway employees to deliver such deployments with performance, ease of use, and high value-add.

“Microway is honored to be recognized by NVIDIA for our leadership in delivering solutions to the public sector,” said Ann Fried, CEO of Microway. “As we continue to grow, we count on our close collaboration with NVIDIA to ensure our success.”

“Accelerated computing and AI are paving the way for new developments in autonomous systems, robotics, cybersecurity, disaster response and healthcare,” said Anthony Robbins, vice president of the North American public sector business at NVIDIA. “Microway’s expertise in providing custom-built AI systems using NVIDIA technology is helping government agencies and enterprises solve their hardest problems, improve energy efficiency and discovery, and make communities safer and more connected.”

The annual NPN awards program honors a select group of partners that have distinguished themselves as leading providers of NVIDIA accelerated computing technology and service delivery.

About Microway, Inc.

Microway builds solutions for the intersection of AI and HPC. These include clusters, servers, quiet workstations, and NVIDIA DGX system solutions designed for bleeding-edge computational performance. These products serve demanding users in the enterprise, government, and academia.

Since 1982, customers have trusted Microway to deliver unique and superior deployments—enabling them to remain at the forefront of supercomputing and solve the world’s toughest challenges. Microway’s strategic partners include NVIDIA, Intel, AMD, DDN, and IBM. Classified as a small business, woman owned and operated, Microway’s GSA Schedule is GS-35F-0431N.

Microway Named Partner of the Year for Higher Education by NVIDIA
https://www.microway.com/press-room/microway-named-partner-of-the-year-for-higher-education-by-nvidia/

Microway delivered, deployed, and integrated NVIDIA DGX systems at MSOE, OSU, Clemson, and other higher education institutions in 2019

Jul 17, 2020–Plymouth, MA

Microway, a leading provider of advanced computational clusters, servers, and workstations for AI and HPC applications, today announced that it has been selected by the NVIDIA Partner Network (NPN) as the 2019 Higher Education Partner of the Year for the Americas.

[Image: Partner logo - Microway is an NVIDIA Elite Partner]

The NPN selected Microway for leading the way in delivery of and installation support for servers, high-speed storage, networking, software, and management in the higher education space. In 2019, Microway provided several NVIDIA DGX™ systems for deep learning and AI, as well as cutting-edge research applications to the Milwaukee School of Engineering, Oregon State University, Clemson University, and Stevens Institute of Technology—and many others. In each of these deployments, Microway’s expertise in cluster integration and NVIDIA DGX technology was essential to delivering complete solutions that met the schools’ unique needs and were operational right from the start.

“Microway is honored to be recognized as the Higher Education Partner of the Year by NVIDIA,” said Ann Fried, CEO at Microway. “Working together with NVIDIA, we were able to deliver and support a number of NVIDIA DGX systems for research and education at universities in 2019, and we look forward to continuing to support our higher education customers with industry-leading NVIDIA technology in the future.”

“The team at Microway is unmatched in their knowledge of and support for seamlessly integrating NVIDIA technology into higher education research facilities, across a wide range of applications,” said Cheryl Martin, Director of Global Business Development Higher Education and Research at NVIDIA. “We’ve chosen to honor them for their work connecting a number of North American educational facilities with high-performance NVIDIA technology in 2019.”

The NPN honors its top North American partners that have shown growth in their GPU business through leadership and investments they have made throughout the year. The annual awards program honors a select few partners who have distinguished themselves as leading providers of NVIDIA GPU technology and service delivery.

About Microway, Inc.

Microway builds solutions for the intersection of AI and HPC.  These include supercomputers, clusters, servers and quiet workstations designed for bleeding-edge computational performance. These products serve demanding users in the enterprise, government, and academia.

Since 1982, customers have trusted Microway to deliver unique and superior deployments—enabling them to remain at the forefront of supercomputing and solve the world’s toughest challenges. Microway’s strategic partners include NVIDIA, Intel, AMD, DDN, NetApp, and IBM. Classified as a small business, woman owned and operated, Microway’s GSA Schedule is GS-35F-0431N.

Microway Supports AI Leaders, Delivers First NVIDIA DGX A100 to a US Educational Institution
https://www.microway.com/press-room/microway-delivers-first-nvidia-dgx-a100-to-a-us-educational-institution/

HPC & AI solution provider to deliver world’s most advanced AI system to the University of Florida

May 20, 2020–Plymouth, MA

[Image: NVIDIA DGX A100]

Microway, a leading provider of advanced hardware for AI and HPC, announces it is delivering the first NVIDIA DGX A100 system to a higher education institution in the United States. DGX A100 is the world’s first 5 petaflops AI system to consolidate the power and capabilities of an entire data center, supporting analytics, training and inference in a single flexible platform.

The University of Florida will install multiple DGX A100 systems to accelerate AI research as the university works to infuse AI across its curriculum. The systems will be used to explore and validate how new technology can meet the needs of the University of Florida’s advanced research and education projects in artificial intelligence and deep learning—including the Deep Cloud and PRISMA projects. The deployment will include the DGX A100 systems, as well as Microway software integration and deployment services to ensure easy adoption and integration of AI into the university’s infrastructure.

“The University of Florida is doing critical work in advancing AI, and Microway is ensuring that their DGX A100 systems support students and faculty as quickly as possible,” said Ann Fried, CEO of Microway. “AI is rapidly transforming all industries, and Microway is ready to support AI adoption with our expertise in delivering the world’s most advanced AI system – the NVIDIA DGX A100.”

An NVIDIA Partner Network member, Microway is ready to support organizations and enterprises seeking to adopt AI with DGX A100 systems at launch. Customers in all domains, not just higher education, can leverage Microway’s AI and DGX systems delivery expertise to get started with the latest AI system from NVIDIA as rapidly as possible.

About Microway, Inc.

Microway builds solutions for the intersection of AI and HPC.  These include supercomputers, clusters, servers and quiet workstations designed for bleeding-edge computational performance. These products serve demanding users in the enterprise, government, and academia.

Since 1982, customers have trusted Microway to deliver unique and superior deployments—enabling them to remain at the forefront of supercomputing and solve the world’s toughest challenges. Microway’s strategic partners include NVIDIA, Intel, AMD, DDN, Mellanox, NetApp, and IBM. Classified as a small business, woman owned and operated, Microway’s GSA Schedule is GS-35F-0431N.

Microway Announces Delivery of Its Largest 2nd Gen AMD EPYC™ Processor-powered Cluster to Date
https://www.microway.com/press-room/microway-announces-delivery-of-its-largest-2nd-gen-amd-epyc-processor-powered-cluster/

Greater core counts, higher memory bandwidth, and up to two times the HPC application performance in CPU benchmarks

March 27, 2020–Plymouth, MA

[Image: Diagram of the AMD EPYC CPU]

Microway, a leading provider of computational clusters, servers, and workstations for AI and HPC applications, announces it has delivered a full-scale 2nd Gen AMD EPYC processor-powered cluster to a major insurance company. Refreshing their existing cluster gave the company more than double the compute cores they had previously, far superior memory bandwidth, and PCIe® 4 capabilities to drive vastly improved HPC application performance.

The new cluster features dual, redundant head nodes for management and operation, a total of 2,816 processor cores, 11TB of memory, and a 2GB/s scratch storage device in each compute node. Also included is Mellanox 100Gb EDR InfiniBand connectivity between all nodes.

In addition to supplying the cluster, Microway architected a 64-port fabric that optimizes cost without affecting overall workload performance. The fabric minimizes switch count without sacrificing latency, was validated to meet the application’s bandwidth needs, and supplies additional ports for cluster growth and for linking in parallel storage.
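As a rough illustration of how these figures fit together, the sketch below derives an approximate node count and per-node memory from the totals quoted above. The per-node CPU configuration is not stated in this release, so the 64-cores-per-node figure (dual-socket, 32-core 2nd Gen EPYC CPUs) is an assumption for illustration only.

```python
# Back-of-the-envelope sizing from the figures in this release.
# ASSUMPTION: 64 cores per node (dual 32-core 2nd Gen EPYC CPUs); the
# actual per-node configuration is not stated in the announcement.
TOTAL_CORES = 2816        # from the release
TOTAL_MEMORY_TB = 11      # from the release
FABRIC_PORTS = 64         # from the release
CORES_PER_NODE = 64       # assumed

nodes = TOTAL_CORES // CORES_PER_NODE                  # 44 compute nodes
memory_per_node_gb = TOTAL_MEMORY_TB * 1024 / nodes    # ~256 GB per node
spare_ports = FABRIC_PORTS - (nodes + 2)               # ports left after compute + head nodes

print(f"Approx. compute nodes:   {nodes}")
print(f"Approx. memory per node: {memory_per_node_gb:.0f} GB")
print(f"Fabric ports remaining for growth and storage: {spare_ports}")
```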

No-Cost Remote Cluster Benchmarking Drives Technology Selection

Working with Microway experts, the company used Microway’s test drive cluster at no cost to evaluate the 2nd Gen AMD EPYC processors with the latest CFD application offerings and its in-house code. This offering is available for any prospective Microway cluster deployment in North America.

Based on a head-to-head comparison, the 2nd Gen AMD EPYC CPU-based cluster came out ahead of existing x86 architecture-based cluster nodes and cloud offerings. The rigorous evaluation process provided a high level of confidence in the company’s decision to deploy in-house resources to meet their exacting performance specifications.

“We are excited to announce this deployment of an extremely cost-effective and high-performance cluster powered by 2nd Gen AMD EPYC processors,” said Eliot Eshelman, VP of HPC Initiatives at Microway. “The 2nd Gen AMD EPYC processors deliver fantastic performance and hold many CPU benchmark world records. Moreover, future generations of AMD EPYC CPUs will be at the heart of the US Department of Energy’s Oak Ridge National Laboratory (ORNL) Frontier and Lawrence Livermore National Laboratory (LLNL) El Capitan exascale supercomputers.”

Full Portfolio of AMD EPYC Based Solutions

Microway now offers a comprehensive portfolio of 2nd Gen AMD EPYC Processor-based solutions. These include WhisperStation quiet workstations in single and dual-socket configurations, the Navion 2U Twin² cluster nodes used in this deployment, and a complete line of Navion 1-4U AMD EPYC CPU-based servers. These solutions can incorporate GPU accelerators with the advantages of PCIe 4.

Microway Navion 2nd Gen AMD EPYC CPU-based clusters are available for a wide array of applications. Experts can tailor the compute nodes, fabric architecture, and parallel storage to specific workload requirements. Cluster sizes can seamlessly scale into the tens of thousands of cores.

Rather than navigate complicated Tier 1 vendor organizations, Microway customers are always assigned a dedicated technical architect who “Speaks HPC & AI.” The result is a superior engagement and often a far superior technical design.

For customers who are considering AMD EPYC processor solutions, Microway technical advisors can walk them through the potential advantages of the platform and architect a custom solution. In addition, an in-depth technical review of the technology is available on Microway’s HPC Blog: 2nd Gen AMD EPYC CPUs: A Groundbreaking Leap for HPC.

About Microway, Inc.
Microway builds solutions for the intersection of AI and HPC.  These include supercomputers, clusters, servers and quiet workstations designed for bleeding-edge computational performance. These products serve demanding users in the enterprise, government, and academia.

Since 1982, customers have trusted Microway to deliver unique and superior deployments—enabling them to remain at the forefront of supercomputing and solve the world’s toughest challenges. Microway’s strategic partners include NVIDIA, Intel, AMD, DDN, Mellanox, NetApp, and IBM. Classified as a small business, woman owned and operated, Microway’s GSA Schedule is GS-35F-0431N.

AMD, the AMD Arrow logo, EPYC and combinations thereof are trademarks of Advanced Micro Devices, Inc.

NVIDIA DGX-2 Systems Supplied by Microway Accelerate Research at Oak Ridge National Laboratory
https://www.microway.com/press-room/microway-nvidia-dgx-2-systems-accelerate-research-at-oak-ridge-national-laboratory/

Two systems for machine learning were installed and running benchmarks within 4 hours of the first crate being opened

February 12, 2020–Plymouth, MA

[Image: NVIDIA DGX-2]

Microway, a leading provider of computational clusters, servers, and workstations for AI and HPC, announces it has delivered two NVIDIA DGX-2 AI systems to the US Department of Energy’s Oak Ridge National Laboratory (ORNL) that have since opened new opportunities and enabled new scientific results for machine learning and data-intensive computing groups.

Thanks to the unique features of the new NVIDIA DGX-2 AI systems and their rapid and successful installation, ORNL research teams have been able to expand existing projects and launch innovative new ones focused on machine learning and AI with advanced architectures throughout the lab.

The DGX-2 systems feature a unique density of 16 NVIDIA V100 GPUs plus innovative NVIDIA NVSwitch technology to fully interconnect all GPUs. Since their delivery, they have proven extraordinarily complementary to ORNL’s record-breaking 200 petaflop “Summit” supercomputer.
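For readers unfamiliar with what NVSwitch changes in practice, the short sketch below (assuming PyTorch with CUDA is available) checks that every visible GPU reports direct peer-to-peer access to every other GPU, which is the uniform all-to-all connectivity NVSwitch provides across the 16 GPUs in a DGX-2.

```python
# Sketch: confirm uniform peer-to-peer reachability across all visible GPUs.
# On a DGX-2-class system, NVSwitch makes every GPU pair directly reachable.
import torch

n = torch.cuda.device_count()
print(f"GPUs visible: {n}")

all_pairs_ok = all(
    torch.cuda.can_device_access_peer(i, j)
    for i in range(n)
    for j in range(n)
    if i != j
)
print("Direct peer-to-peer access between every GPU pair:", all_pairs_ok)
```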

Enabling innovative projects

These specialized systems allow users to work on a uniquely large and complex set of problems that other large GPU solutions are incapable of tackling.

“I get requests for access to these systems often, from both researchers and students from all over the lab who want to learn on the best hardware around. These requests cover the full range of use cases and the DGX-2s never fail to impress. As word of mouth, combined with outreach, ramps up, I only see usage of these systems increasing,” said Chris Layton, Linux Systems Engineer for ORNL’s Compute and Data Environment for Science (CADES) team.

Heng Ma, a postdoctoral research associate in the Center for Molecular Biophysics, shared that the DGX-2 systems have made scaling projects up to the Summit system easier and more successful. “We use machine learning algorithms to control Molecular Dynamics simulations… For my current projects, I use the DGX-2 to produce a prototype of data, which later on we are trying to move to Summit. So, this prototype is like the proof of concept that it actually works before we actually put it on Summit.”

The Compute and Data Environment for Science (CADES) team at ORNL sought this groundbreaking new architecture to help advance their research. The ORNL team then decided to trust AI & HPC specialist Microway with their deployment. They were rewarded with the two DGX-2 systems physically installed, up and running, and doing benchmark testing within 4 hours of the first crate being opened.

Deployment: running benchmarks 4 hours after the crates were opened

As an experienced cluster integrator and NVIDIA Partner Network Elite DGX partner, Microway’s role was essential to delivering a complete solution that was operational as soon as it was installed.

In the weeks before delivery, Microway experts performed a careful system, storage, and network architecture design and design review with ORNL personnel and NVIDIA solutions architects—enabling rapid installation and setup once Microway personnel arrived onsite.

Delivery of the two new machines also required careful advance logistical preparation to ensure that the room, network, contacts, cooling, and system admins were all ready for the installation and launch of the DGX-2 systems. The ORNL admin and Microway teams collaborated constantly via phone, email, and web prior to the installation to ensure a smooth deployment.

“Microway was able to, via their installation crew, make the integration of the DGX-2 into the CADES environment a smooth process… This was done under a deadline and they met all the timelines flawlessly. Microway was able to update the DGX-2s to a point where they were ready for the CADES team to hit the ground running in configuring them for end users,” Layton shared.

Upon arriving onsite, the Microway team uncrated the systems, readied racks, installed the systems into the racks, ran power and network cabling, updated all firmware, deployed the complete DGX software stack, and readied the systems for benchmark testing.

Into the Future

The new DGX-2 systems have already provided unexpected capacities and insights to the ORNL team, and the researchers expect that this will continue into the future.

Groups in fields as diverse as molecular biophysics, geographic data science, and AI-driven biosystems modeling have all utilized the new hardware deployment to drive their science and research since delivery.

The systems have attracted attention across the lab. Additional users at ORNL have selected a third DGX-2 for a recent deployment. As with the initial systems, the Microway team has ensured a smooth delivery experience and rapid bringup.

About Microway, Inc.
Microway builds solutions for the intersection of AI and HPC.  These include supercomputers, clusters, servers and quiet workstations designed for bleeding-edge computational performance. These products serve demanding users in the enterprise, government, and academia.

Since 1982, customers have trusted Microway to deliver unique and superior deployments—enabling them to remain at the forefront of supercomputing and solve the world’s toughest challenges. Microway’s strategic partners include NVIDIA, Intel, AMD, DDN, Mellanox, NetApp, and IBM. Classified as a small business, woman owned and operated, Microway’s GSA Schedule is GS-35F-0431N.

With Microway Supercomputing Cluster, UMass Dartmouth Ramps Up Research Programs
https://www.microway.com/press-room/umass-dartmouth-ramps-research-programs-with-microway-cluster/

University scientists enjoy access to the world’s most advanced computing architectures

January 16, 2020–Plymouth, MA

[Image: UMass Dartmouth and Microway logos]

Microway, a leading provider of computational clusters, servers, and workstations for AI and HPC, announces that research activities are accelerating at the University of Massachusetts Dartmouth since the installation of a new supercomputing cluster.

UMass Dartmouth’s powerful new cluster from Microway affords the university five times the compute performance its researchers enjoyed previously, with over 85% more total memory and over four times the aggregate memory bandwidth. It includes a heterogeneous system architecture featuring a wide array of computational engines.

Some of the main high-performance computing research activities on campus include deep learning, astrophysical simulation, computational quantum chemistry, molecular dynamics simulation, solids analysis, computational fluid dynamics, systems security research, and the development and application of novel numerical methods.

This new cluster purchase was funded through an Office of Naval Research (ONR) DURIP grant award.

Serving Users Across a Research Campus

The deployment has helped the UMass Dartmouth campus serve, attract, and retain faculty, undergraduate students, and those seeking advanced degrees. The Center for Scientific Computing and Visualization Research administers the new compute resource.

With its new cluster, CSCVR is undertaking cutting-edge work. Mathematics researchers are developing new numerical algorithms on the new deployment. A primary focus is astrophysics, particularly the study of black holes and stars.

“Our engineering researchers,” says Gaurav Khanna, Co-Director of UMass Dartmouth’s Center for Scientific Computing & Visualization Research, “are very actively focused on computational engineering, and there are people in mechanical engineering who look at fluid and solid object interactions.” This type of research is known as two-phase fluid flow. Practical applications include modeling windmills and developing better designs for the materials on the windmill, such as blade coatings, as well as improved designs for the blades themselves.

This team is also looking at wave energy converters in ocean buoys. “As buoys bob up and down,” Khanna explains, “you can use that motion to generate electricity. You can model that into the computation of that environment and then try to optimize the parameters needed to have the most efficient design for that type of buoy.”

A final area of interest to this team is ocean weather systems. Here, UMass Dartmouth researchers are building large models to predict regional currents in the ocean, weather patterns, and weather changes.

A Hybrid Architecture for a Broad Array of Workloads

The UMass Dartmouth cluster reflects a hybrid design to appeal to a wide array of the campus’ workloads.

Over 50 nodes include Intel Xeon Scalable Processors, DDR4 memory, SSDs and Mellanox ConnectX-5 EDR 100Gb InfiniBand. A subset of systems also feature NVIDIA V100 GPU Accelerators for GPU computing applications.

Equally important is a second subset of IBM Power Systems AC922 compute nodes, based on POWER9 CPUs with 2nd Generation NVLink. These systems are similar to those utilized in the world’s #1 and #2 most powerful Summit and Sierra supercomputers at ORNL and LLNL. The NVIDIA NVLink interfaces built into both the POWER9 CPUs and the NVIDIA GPUs ensure a broad pipeline between CPU and GPU for data-intensive workloads.
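The practical benefit of that CPU-to-GPU NVLink pipeline shows up in host-to-device transfers. The sketch below (assuming PyTorch with CUDA; it is an illustration, not the benchmark methodology used at UMass Dartmouth or ORNL) times a pinned-memory copy of the sort data-intensive workloads perform constantly; on the AC922 nodes this copy rides NVLink, while on the x86 nodes it traverses PCIe.

```python
# Sketch: time a 1 GiB pinned host-to-device copy. Results depend entirely
# on the platform (NVLink-attached vs. PCIe-attached GPUs).
import time
import torch

host = torch.empty(256 * 1024 * 1024, dtype=torch.float32).pin_memory()  # 1 GiB, page-locked

torch.cuda.synchronize()
start = time.perf_counter()
on_gpu = host.to("cuda", non_blocking=True)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

gib = host.numel() * host.element_size() / 2**30
print(f"Host-to-device: {gib:.1f} GiB in {elapsed * 1000:.1f} ms ({gib / elapsed:.1f} GiB/s)")
```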

The deployment of the hybrid architecture system was critical to meeting the users’ needs. It also allowed those on the UMass Dartmouth campus to test and move workloads onto the larger national laboratory systems at ORNL.

Microway was one of the few vendors able to deliver a unified system with a mix of x86 and POWER9 nodes, provide complete software integration across both kinds of nodes in the cluster, and offer a single point of sale and warranty coverage.

Microway was selected as the vendor for the new cluster through an open bidding process. “They not only competed well on the price,” says Khanna, “but they were also the only company that could deliver the kind of heterogeneous system we wanted with a mixture of architecture.”

About Microway, Inc.
Microway builds solutions for the intersection of AI and HPC. These include clusters, servers, and quiet workstations designed for bleeding-edge computational performance. These products serve demanding users in the enterprise, government, and academia.

Since 1982, customers have trusted Microway to deliver them unique and superior deployments—enabling them to remain at the forefront of supercomputing and solve the world’s toughest challenges. Microway is an NVIDIA NPN Elite Solution Provider and an IBM Business Partner. Classified as a small business, woman owned and operated, Microway’s GSA Schedule is GS-35F-0431N.

ScaleMatrix DDC Technology + Microway Enable Enterprises to Deploy 13 PFLOP NVIDIA DGX-powered ‘AI Anywhere’ Infrastructure with Flexible, ‘No Data Center Required’ Design
https://www.microway.com/press-room/ai-anywhere-solution-scalematrix-nvidia-microway/

“Deploy Anywhere at Any Scale” Solution for AI Supercomputing Infrastructure Made Possible Using DDC Turn-Key Cabinet Technology + Microway

Nov 18, 2019–Denver, CO

ScaleMatrix, through its subsidiary company, DDC, the global leader in providing scalable data center-to-edge solutions based on its patented Dynamic Density Control™ (DDC) cabinet technology, today announced that it is collaborating with NVIDIA and Microway to deliver SKUs for an 8 petaFLOPS and a 13 petaFLOPS ‘Supercomputer Anywhere’ solution using its DDC S-Series cabinets.

DDC Cabinet Technology, purpose-built for scaling dense computing, enables a modular ‘deploy anywhere at any scale’ approach to computing through its S-Series platform: a pressurized ‘clean room quality’ air conditioning system combined with a closed-loop, water-chilled liquid cooling system, all encased in a ruggedized cabinet complete with biometric security, air filtration, and fire suppression capabilities. The modular S-Series cabinets can be erected anywhere power and a roof exist. Through this system, ScaleMatrix and Microway will deliver flexible SKU options for customizable ‘Supercomputer Anywhere’ systems powered by NVIDIA DGX systems that can be deployed virtually anywhere, regardless of data center resource availability, to meet the high-performance computing (HPC) and AI needs of any organization.

The ‘AI Anywhere’ composable SKU will offer a design configuration based on the NVIDIA DGX-1 system, consisting of a single rack containing 13 DGX-1 units and delivering a computing payload of 13 petaFLOPS. Additional configuration options are based on NVIDIA DGX-2 systems, housing a DGX POD configuration of four DGX-2 systems that delivers 8 petaFLOPS of compute power. The composable ‘AI Anywhere’ SKU will operate between 42kW and 49kW fully loaded, within the precision-tuned temperature and airflow management system provided by the DDC S-Series cabinet. The units will be sold complete with storage and networking following DGX POD reference architecture designs such as NetApp’s ONTAP AI solution. Prior to delivery, Microway will integrate all hardware and software within the DDC cabinet solution, including the full NVIDIA DGX software stack and deep learning and AI framework containers, with the DGX systems, NetApp ONTAP storage, and Mellanox switching. End users simply install the DDC cabinet platform, connect network interfaces, power on the systems, and begin loading data and starting training runs.
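The headline numbers follow from NVIDIA’s published per-system figures: each DGX-1 delivers roughly 1 petaFLOPS and each DGX-2 roughly 2 petaFLOPS of Tensor Core deep learning performance (about 125 TFLOPS per V100 GPU). The short sketch below simply reproduces that arithmetic.

```python
# How the 13 PF and 8 PF figures above add up (Tensor Core peak, per
# NVIDIA's published numbers of ~125 TFLOPS per V100 GPU).
V100_TENSOR_TFLOPS = 125

dgx1_pflops = 8 * V100_TENSOR_TFLOPS / 1000     # 8 V100s per DGX-1  -> ~1 PF
dgx2_pflops = 16 * V100_TENSOR_TFLOPS / 1000    # 16 V100s per DGX-2 -> ~2 PF

print(f"13 x DGX-1: {13 * dgx1_pflops:.0f} petaFLOPS")   # 13
print(f" 4 x DGX-2: {4 * dgx2_pflops:.0f} petaFLOPS")    # 8
```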

“The Dynamic Density Control S-Series cabinet technology has been used in our ScaleMatrix cloud and colocation data centers since 2010, from which we offer customers infinite scalability and density for computing deployments,” said Chris Orlando, co-founder and principal of ScaleMatrix and DDC. “DDC technology is a mature and proven system, which solves the density challenges other complex liquid cooling systems are trying to solve, but without the mess and hassle of immersion cooling or risky hardware modifications to expensive chips. DDC technology offers the familiar “plug-and-play” approach to manipulating computing at the rack level, giving familiarity and peace of mind to IT managers and opening up the world of possibility to where you can place and procure computing power. In addition, DDC provides surgical control of supply side airflow and temperature management, delivering an ideal operating environment which ensures the best performance for critical AI and enterprise hardware. Artificial Intelligence (AI) will one day be looked at as a broad service in the same way that Internet and mobile access technology are looked at today across industries. Through the creation and delivery of these systems with our partners at NVIDIA and Microway, we are taking a big step towards making powerful computing at immense scales possible wherever it is needed, without the hassles associated with traditional data center facilities.”

“Quickly building enterprise-grade AI infrastructure can be a challenge for some organizations which may not have an AI-ready data center,” said Charlie Boyle, vice president and general manager of DGX Systems at NVIDIA. “NVIDIA DGX systems provide world-leading AI compute performance, and DDC technology extends the value of DGX systems in a ‘deploy-anywhere’ form-factor that overcomes the challenge of finding the right facilities to host the infrastructure.”

https://www.scalematrix.com/nvidia-dgx/

About ScaleMatrix
ScaleMatrix delivers colocation, cloud, backup, disaster recovery, and professional support services from national variable-density data centers that leverage the future-proof Dynamic Density Control™ (DDC) cabinet platform. With power density and efficiency significantly impacting IT costs, these specialized data centers enable ScaleMatrix to deliver exceptionally priced, future-proofed colocation services and ultra-dense cloud hosting capabilities which provide valuable differentiation in today’s ever-changing market. These data center and technology innovations provide clients with a competitive edge and scalable efficiency which helps grow their businesses. Visit www.scalematrix.com for more information.

About DDC
DDC, a subsidiary company of ScaleMatrix, designs and manufactures cabinet and enclosure technologies to enable the deployment of any hardware, at any density, anywhere. The portfolio includes a variety of modular and edge solutions which allow efficient operation of IT hardware in nearly any environment, in either modular or self-contained form factors. The DDC family of products includes fire suppression, various security options, shock mounting, extreme environment support, and other key features which ensure the success of any IT deployment, anywhere.

About Microway, Inc.

Microway builds solutions for the intersection of AI and HPC.  These include supercomputers, clusters, servers and quiet workstations designed for bleeding-edge computational performance. These products serve demanding users in the enterprise, government, and academia.

Since 1982, customers have trusted Microway to deliver unique and superior deployments—enabling them to remain at the forefront of supercomputing and solve the world’s toughest challenges. Microway’s strategic partners include NVIDIA, Intel, AMD, DDN, Mellanox, NetApp, and IBM. Classified as a small business, woman owned and operated, Microway’s GSA Schedule is GS-35F-0431N.

Microway Will Present Ultra-Quiet WhisperStation for COMSOL Multiphysics® Workloads at COMSOL Conference
https://www.microway.com/press-room/whisperstation-for-comsol-multiphysics-workloads-at-comsol-2019-conference/

Workstations, servers, and clusters designed for superior performance on multiphysics simulation

Oct 1, 2019–Plymouth, MA

Microway, a leading provider of computational clusters, servers, and workstations for HPC and AI, announces it will introduce an updated WhisperStation workstation for COMSOL Multiphysics at the COMSOL Conference 2019, Oct 2-4, 2019, at the Boston Marriott Newton in Newton, MA.

WhisperStation for COMSOL is architected from the ground up specifically for COMSOL Multiphysics® simulation software workloads. The performance recommendations used in the configurations reflect a collaboration with COMSOL’s in-house testing teams, delivering superior performance across a variety of modules and models. The updated systems now include 2nd Generation Intel® Xeon® Scalable Processors (codenamed Cascade Lake-SP) or the latest generation of Xeon W-3000 series processors.

With their deep understanding of the COMSOL application, Microway experts can help customers apply their budget dollars to hardware upgrades that are especially effective for the workload. Each configuration is custom-calibrated for the end user.
For budget-limited WhisperStation systems, Microway offers high-clock-speed CPUs, marrying the high single-threaded throughput required for small models with appropriate levels of parallelism. For beefier WhisperStation for COMSOL configurations, Microway-architected systems offer increased core counts only when they do not compromise clock speeds. This adds support for multi-threaded parallelism without compromising baseline performance. WhisperStation for COMSOL configurations also consider memory bandwidth and memory capacity for large models, as well as COMSOL’s unique disk I/O requirements. They typically include large memory spaces and NVMe flash to support these needs.

Microway also offers NumberSmasher 1U-4U servers and clusters specially architected for COMSOL users. COMSOL users may schedule a consultation with a Microway HPC expert to help match the right configuration and scale to their workload.
Among notable customers running COMSOL on Microway hardware is Oak Ridge National Laboratory (ORNL), one of the largest users of COMSOL in the world. Notable COMSOL-certified consultants also utilize and recommend Microway hardware for their workloads.

“Microway is one of the few vendors offering a high-performance, yet ultra-quiet workstation offering for COMSOL workloads,” said Eliot Eshelman, VP of HPC Initiatives at Microway. “The net result for end users is that WhisperStation for COMSOL delivers much better performance, performance per dollar, and a superior out-of-the box experience to systems not architected with COMSOL’s unique requirements in mind.”

About Microway, Inc.

Microway builds solutions for the intersection of AI and HPC.  These include supercomputers, clusters, servers and quiet workstations designed for bleeding-edge computational performance. These products serve demanding users in the enterprise, government, and academia.

Since 1982, customers have trusted Microway to deliver unique and superior deployments—enabling them to remain at the forefront of supercomputing and solve the world’s toughest challenges. Microway’s strategic partners include NVIDIA, Intel, AMD, DDN, Mellanox, NetApp, and IBM. Classified as a small business, woman owned and operated, Microway’s GSA Schedule is GS-35F-0431N.

Microway Supplies Custom NVIDIA DGX POD-based AI Supercomputer to Milwaukee School of Engineering
https://www.microway.com/press-room/microway-supplies-nvidia-dgx-pod-based-ai-supercomputer-to-milwaukee-school-of-engineering/

A turn-key cluster for AI education and applied research

Sep 16, 2019–Plymouth, MA

Microway, a leading provider of computational clusters, servers, and workstations for AI and HPC applications, announces it supplied Milwaukee School of Engineering (MSOE) with an NVIDIA® DGX™ POD-based supercomputer for education and applied research. This supercomputer forms the centerpiece of the university’s new computer science program and will support an expansion of deep learning and AI education designed to permeate across the institution. On September 13, 2019, MSOE hosted a ribbon-cutting to showcase the new Dwight and Dian Diercks Computational Science Hall and the new cluster, which will be used as part of MSOE’s expanded computer science education programming, applied research, and high-performance computing (HPC).

DGX POD is a reference architecture that provides a blueprint for designing large-scale data center infrastructure that can support modern artificial intelligence (AI) development. It is based on the NVIDIA DGX SATURNV AI supercomputer, which powers internal NVIDIA AI research and development used in autonomous vehicles, robotics, graphics, HPC, and other domains.

As an experienced cluster integrator and NVIDIA Partner Network Elite DGX partner, Microway’s role was essential to delivering a complete solution that was operational on day one. Microway experts performed a careful system, storage, and network architecture design and design review with MSOE IT personnel and NVIDIA solutions architects to meet MSOE’s specific AI education and computer science needs.

“ROSIE” – A Centerpiece for AI on the MSOE Campus

The cluster design includes three racks of DGX servers, high-speed storage, 100G networking, and management servers, along with NVIDIA NGC deep learning containers and the NVIDIA DGX Software stack, deployed and managed with NVIDIA DeepOps. It features three NVIDIA DGX-1 AI systems with NVIDIA V100 Tensor Core GPU accelerators; twenty Microway NumberSmasher Xeon + NVIDIA T4 GPU teaching compute nodes; and access to NGC, which provides an online registry of software stacks optimized for deep learning, machine learning and HPC applications, as well as pre-trained models and model training scripts. Also included in the deployment are high-performance storage arrays and a larger general-purpose storage pool from storage partner NetApp.

Microway’s design and integration experts worked closely with the MSOE team to ensure the custom DGX POD-based configuration met user needs. Microway delivered and installed the cluster fully integrated and ready to run after weeks of intensive integration and stress testing at Microway’s facility. Thorough testing ensured not only system functionality and stability, but also performance, with analysis of GPU throughput, local NVMe cache throughput, and network storage throughput. The teams worked together to customize storage, networking, and cluster software.

Revolutionary Deployment for Classroom Computer Science and AI Instruction

Unlike many university programs, in which supercomputer access is usually limited to graduate students in computer labs, this configuration gives undergraduate students at MSOE supercomputer access in the classroom, enabling training of the next AI workforce. Traditional supercomputers require that users be familiar with command line interfaces and workload managers. The DeepOps install Microway has provided to MSOE allows a student to access the “ROSIE” cluster in their web browser and start a DGX-1 or NVIDIA T4 GPU deep learning session with the click of a button.
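As a concrete, hypothetical picture of what such a one-click session might run, the sketch below shows the kind of first exercise a student could execute once a GPU-backed session launches: confirm the allocated device and take a single toy training step. The notebook-style workflow is an assumption; the release describes only a web browser and a one-click launch.

```python
# Hypothetical first exercise inside a launched GPU session on a cluster
# like "ROSIE": confirm the allocated GPU and run one toy training step.
import torch
import torch.nn as nn
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
name = torch.cuda.get_device_name(0) if device == "cuda" else "CPU"
print("Allocated device:", name)                # e.g. a V100 (DGX-1) or T4 node

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(64, 784, device=device)         # stand-in for a batch of images
y = torch.randint(0, 10, (64,), device=device)  # stand-in labels

loss = F.cross_entropy(model(x), y)
loss.backward()
opt.step()
print("One training step complete, loss:", loss.item())
```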

“We are extremely pleased by the opportunity to work with NVIDIA and MSOE on this significant new education and applied research facility,” said Eliot Eshelman, VP of Strategic Accounts and HPC Initiatives at Microway. “Microway’s expertise, combined with NVIDIA’s DGX POD architecture, enabled us to deliver a new type of cluster that melds the best of HPC with the latest developments in deep learning. In addition to enabling new research, this cluster simplifies student usage for studies of data analytics, AI, and computer science.”

About Microway, Inc.

Microway builds solutions for the intersection of AI and HPC.  These include supercomputers, clusters, servers and quiet workstations designed for bleeding-edge computational performance. These products serve demanding users in the enterprise, government, and academia.

Since 1982, customers have trusted Microway to deliver unique and superior deployments—enabling them to remain at the forefront of supercomputing and solve the world’s toughest challenges. Microway’s strategic partners include NVIDIA, Intel, AMD, DDN, Mellanox, NetApp, and IBM. Classified as a small business, woman owned and operated, Microway’s GSA Schedule is GS-35F-0431N.

Microway Provides Vyasa Analytics NVIDIA® DGX-1™ and NumberSmasher® GPU Server
https://www.microway.com/press-room/microway-provides-vyasa-analytics-nvidia-dgx-1-and-gpu-server/

AI analytics leader enhances scale, develops new capabilities with new deployment

Aug 15, 2019–Plymouth, MA

Microway, a leading provider of computational clusters, servers, and workstations for AI and HPC applications, announces it has provided an NVIDIA® DGX-1™ supercomputer and Microway NumberSmasher® Tesla® GPU Server to deep-learning leader Vyasa Analytics. The new hardware enables Vyasa Analytics’ next phase of growth.

The NVIDIA® DGX-1™ Deep Learning Appliance delivers the fastest performance available when training neural networks and running production-scale classification workloads. The system leverages the power of eight built-in NVIDIA® Tesla® V100 GPUs with NVIDIA® NVLink™ Technology and Tensor Cores to boost the speed of deep learning training. NVIDIA® DGX-1™ performs 140X faster deep learning training when compared to a CPU-only server.

The system includes NVIDIA’s Deep Learning software stack and NGC containers. Immediately after installation, the system was ready to train models and scale Vyasa’s software. The easy-to-use DIGITS deep learning training system and interface available on DGX-1™ helps users manage training data, monitor performance, and design, compare, and select networks.
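To make the training side concrete, the sketch below (an illustration only, not Vyasa’s Cortex code or the DIGITS workflow) shows a standard mixed-precision PyTorch loop of the kind that engages the V100 Tensor Cores described above; scaling it across all eight GPUs in a DGX-1 would typically add torch.nn.parallel.DistributedDataParallel.

```python
# Generic mixed-precision training loop; FP16 matrix math is what the
# V100's Tensor Cores accelerate. Assumes PyTorch on a CUDA-capable system.
import torch
import torch.nn as nn
import torch.nn.functional as F

device = torch.device("cuda")
model = nn.Sequential(nn.Linear(2048, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()            # keeps FP16 gradients numerically stable

x = torch.randn(4096, 2048, device=device)      # synthetic batch
y = torch.randint(0, 10, (4096,), device=device)

for step in range(10):
    opt.zero_grad()
    with torch.cuda.amp.autocast():             # runs eligible ops in FP16 on Tensor Cores
        loss = F.cross_entropy(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()

print("final loss:", loss.item())
```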

Microway’s NumberSmasher® Tesla® GPU Servers integrate 1–10 NVIDIA® Tesla® V100 GPUs with flexible GPU density. These servers are fully configurable for any customized workload. The Vyasa Analytics deployment utilized this configurability to deploy early R&D environments and test new concepts—scaled up onto the DGX-1™ when ready.

Vyasa Analytics provides a deep learning analytics platform for leading organizations in the life sciences, healthcare, business intelligence, and legal verticals. Vyasa’s highly-scalable deep learning software, Cortex, operating on NVIDIA® GPUs and Microway server hardware, applies deep learning-based analytics to enterprise data of a variety of types: text, image, chemical structure, and more. Use cases include analyzing multiple large-scale text sources and streams that include millions of documents in order to discover patterns, relationships, and trends for patent analysis, competitive intelligence or drug repurposing.

“These systems have enabled us to branch out into a number of R&D areas that were really critical for us to be able to innovate and build out new types of deep learning approaches,” says Dr. Christopher Bouton, founder and CEO of Vyasa Analytics. “As a company working in the deep learning space, we see Microway and NVIDIA® as key partners in our ability to build innovative novel deep learning algorithms for a wide range of content types.”

Microway experts architected, integrated, tested, and installed the deployments.

For NVIDIA® DGX-1™ specifications and pricing, visit https://www.microway.com/preconfiguredsystems/nvidia-dgx-1-deep-learning-system/

# # #

About Microway, Inc.
Microway builds solutions for the intersection of AI and HPC. These include clusters, servers, and quiet workstations designed for bleeding-edge computational performance. These products serve demanding users in the enterprise, government, and academia.

Since 1982, customers have trusted Microway to deliver them unique and superior deployments—enabling them to remain at the forefront of supercomputing and solve the world’s toughest challenges. Microway is an NVIDIA NPN Elite Solution Provider, an Intel Platinum Technology Provider and HPC Data Center Specialist, and an IBM Business Partner. Classified as a small business, woman owned and operated, Microway’s GSA Schedule is GS-35F-0431N.
