Articles and news about RedLine Performance Solutions!
IBM Spectrum Scale (formerly known as GPFS) is a high-performance, highly scalable global file system that provides a single namespace for data. Given its long history with high-performance computing (HPC) and data-intensive media and data-stream-serving applications, Spectrum Scale has traditionally been viewed as a niche data solution: complex to install, optimize, and maintain, […]
Editor’s Note: RedLine Senior Program Analyst Kit Menlove contributed to this post. For industry and academia, managing workflows for large-scale data-intensive computational processes is a constant challenge. As every industry and scientific discipline becomes more reliant on increasingly complex and robust computational solutions, the ability to create repeatable and agile end-to-end processes becomes an […]
There are many build systems available for software development, and if your software is still using the venerable Make system, it can certainly pay to investigate an upgrade. Kitware’s cross-platform CMake system is one such framework, and it offers a considerable improvement. At its base level, CMake offers a single approach to building your software […]
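To give a flavor of the single-approach idea, here is a minimal sketch of a CMake build description for a hypothetical C project named "solver" (all file, directory, and target names below are illustrative, not taken from the post):

```cmake
# Hypothetical minimal CMakeLists.txt for a small C project.
cmake_minimum_required(VERSION 3.10)
project(solver C)

# One target definition replaces hand-written Make rules; CMake
# generates platform-appropriate build files (Makefiles, Ninja,
# Visual Studio projects) from this single description.
add_executable(solver src/main.c src/grid.c)
target_include_directories(solver PRIVATE include)
```

From that one file, `cmake -S . -B build && cmake --build build` produces a native build on any supported platform.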
While the annual SC conference is always a worthy experience, SC16 may have been the most rewarding ever – filled, as it was, with some fantastic insights from fellow HPC professionals, news on the latest technologies that will advance our industry, and the chance to catch up with colleagues and friends old and new. At […]
3D XPoint is a dramatic new memory technology developed jointly by Intel and Micron. Intel claims that 3D XPoint is 1,000 times faster than NAND, has 1,000 times its endurance, and is 10 times denser than DRAM. Thus, this non-volatile computer storage medium could significantly disrupt current HPC technologies. This technology is more than just […]
The programming language Fortran has been in existence since 1957, and a large percentage of software that runs on HPC systems worldwide is written, at least in part, in Fortran. Because the language has been around for so long, and so many scientists and engineers learned it early in their careers and have utilized it […]
There’s a big difference between basic system monitoring and performance monitoring. In the world of HPC, this distinction is greatly magnified. In the former case, monitoring often boils down to checking binary indicators to make sure system components are up or down, on or off, available or not. Red light/green light monitoring is certainly a […]
As HPC architectures continue to evolve and offer ever-increasing performance, it has become imperative to adapt existing software in order to fully harness that power. As discussed in an earlier post, the architectural approach to parallelism has come full circle in many ways. Nonetheless, MPI remains a fixture in HPC software design, and finding ways […]
In HPC, dealing with change-related problems is inevitable. But dealing with unauthorized change is unacceptable. Here’s why.
For high-performance computing professionals, a deep understanding of how programs execute is vital. Often, that knowledge can spell the difference between making proper use of computing resources and meeting deadlines – or not. Keys to optimizing HPC resources include profiling program elements such as functions or subroutines and identifying and alleviating potential bottlenecks. Profiling refers […]
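The excerpt stops short of a concrete example, but the core idea — profiling individual functions to locate and alleviate bottlenecks — can be sketched with Python's standard-library cProfile module (the function names below are illustrative, not from the original article):

```python
import cProfile
import pstats

def slow_sum_of_squares(n):
    # Naive accumulation loop: a deliberate hotspot a profiler would flag.
    total = 0
    for i in range(n):
        total += i * i
    return total

def fast_sum_of_squares(n):
    # Closed-form equivalent of the loop above: sum of i^2 for i in [0, n).
    return (n - 1) * n * (2 * n - 1) // 6

if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    slow = slow_sum_of_squares(200_000)
    fast = fast_sum_of_squares(200_000)
    profiler.disable()

    # Both paths produce identical results; the profile report shows
    # nearly all time spent in the naive loop, pointing at the bottleneck.
    assert slow == fast
    pstats.Stats(profiler).sort_stats("tottime").print_stats(5)
```

HPC codes in C or Fortran follow the same workflow with tools such as gprof or vendor profilers: measure first, then optimize the functions the profile actually implicates.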