Updated by Discover-HPC on Feb 25, 2017

HPC

transtec

transtec has been developing and producing high-quality IT systems for over 30 years. Customers can combine our IT modules - clients, servers and storage systems - with services to create efficient and reliable IT systems. We develop individual solutions tailored exactly to customer requirements, ranging from special-purpose systems to complete IT development plans, and we support our customers throughout the IT system's life cycle. Our passion for high-performance, supercomputing systems and sophisticated solutions connects us to the universities and research & development institutes and departments we work with. Small and medium-sized businesses, as well as the public sector in particular, value our intelligent, affordable and uncomplicated solutions for the virtualisation of servers, clients and storage systems.

#vendor #hardware #germany #europe #workstations #servers #storage #backup #networking #cabling


Avnet

Avnet, Inc. is one of the largest distributors of electronic components, IT solutions and embedded technology. Avnet offers a breadth and depth of service capabilities, such as supply-chain and design-chain services, logistics solutions, product assembly, device programming, computer system configuration and integration, financing, marketing services and technical seminars, all in addition to its core distribution services.

#vendor #hardware #services #usa #storage #backup #security #virtualization #networking #servers

CPU 24/7

CPU 24/7 are the experts for high performance computing, providing scalable and fast HPC clusters. CPU 24/7 specialises in providing High Performance Computing (HPC) systems and computing power "on demand" for industry and universities, for applications in development and research, either in the form of permanently available Tailored Configurations or as flexibly usable computing capacity via the CPU 24/7 Resource Area - each with a ready-to-work workplace environment.

#vendor #services #germany #europe


ANSYS

ANSYS has pioneered the development and application of simulation methods to solve the most challenging product engineering problems. Applied to design concepts, final-stage testing, validation and troubleshooting of existing designs, software from ANSYS can significantly speed design and development times, reduce costs, and provide insight and understanding into product and process performance.

#vendor #software #usa


ExaCT

The mission of co-design within the Center for Exascale Simulation of Combustion in Turbulence (ExaCT) is to absorb the sweeping changes necessary for exascale computing into software and to ensure that the hardware is developed to meet the requirements of these real-world combustion computations. ExaCT will perform the multi-disciplinary research required to iteratively co-design all aspects of combustion simulation, including math algorithms for partial differential equations, programming models, scientific data management and analytics for in situ uncertainty quantification and topological analysis, and architectural simulation to explore hardware tradeoffs with combustion proxy applications representing the workload of the exascale ecosystem. ExaCT comprises six DOE laboratories (SNL, ORNL, LLNL, LANL, LBNL, NREL) and five university partners (The University of Texas at Austin, Stanford University, Georgia Institute of Technology, The University of Utah, and Rutgers, The State University of New Jersey), involving the multi-disciplinary interaction of combustion scientists, computer scientists, and applied mathematicians.

#user #consortium #collaboration #usa #research #exascale

Sandia National Laboratories Computing Research

Sandia is a multiprogram engineering and science laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the US Department of Energy's National Nuclear Security Administration. Sandia's enduring mission is to provide engineering and science support for America's nuclear weapons stockpile. Computing Research at Sandia creates technology and solutions for many of our nation's most demanding national security challenges. The Computing Research portfolio spans the spectrum from fundamental research to state-of-the-art applications. Our work includes computer system architecture (both hardware and software); enabling technology for modeling physical and engineering systems; and supporting research in discrete mathematics, data analytics, cognitive modeling, and decision support. The Computing Research enterprise is closely tied to the laboratories' broader set of missions and strategies. Application areas include nuclear weapons, cyber security, and energy and CO2 challenges such as climate modeling, alternative energy technologies, and improvements to the power grid. We also serve as stewards of important capabilities for the nation in high-strain-rate physics, scientific visualization, mesh generation, and computational materials.

#user #government #usa #research

Los Alamos National Laboratory

Government agencies everywhere look to us to solve difficult issues in areas like biothreats, physics, green technology, and nuclear stockpile stewardship. LANL provides world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. The High Performance Computing (HPC) Division supports the Laboratory mission by managing world-class Supercomputing Centers. This includes specifying, operating, and assisting in the use of both open and secure high performance computing, storage, and emerging data-intensive information science production systems for multiple programs.

#user #government #usa #research

NREL HPC

One of NREL's most powerful energy research tools, the High Performance Computing Data Center, leads to increased efficiency and lower costs for important renewable energy research and technologies. It is also one of the most energy-efficient data centers in the world. The High Performance Computing (HPC) center at the National Renewable Energy Laboratory (NREL) provides high-speed, large-scale computer processing to advance research on renewable energy and energy efficiency technologies. Through computer modeling and simulation, researchers can explore processes and technologies that cannot be directly observed in a laboratory or that would be too expensive or too time-consuming to study otherwise. NREL's HPC center is home to the largest HPC system in the world dedicated to advancing renewable energy and energy efficiency technologies.

#user #government #usa #research

Rutgers, The State University of New Jersey HPC

The Rutgers University Parallel Computer is a Beowulf cluster operated by the Center for Materials Theory at the Department of Physics and Astronomy. The cluster consists of more than 2938 CPU cores and 3584 GPU cores. Its computational power exceeds 30 teraflops, which ranks it among the most powerful Rutgers University computing resources.

#user #academia #usa #research

DOE INCITE

The allocations of computing time were awarded to researchers from academia, government, and industry through the Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program, which was created to provide access to the DOE's Leadership Computing Facility with centers at Argonne and Oak Ridge national laboratories. The program aims to accelerate scientific discoveries and technological innovations by awarding, on a competitive basis, time on supercomputers to researchers with large-scale, computationally intensive projects that address “grand challenges” in science and engineering.

#user #resource #government #usa #research

ORNL - Oak Ridge National Laboratory

Oak Ridge National Laboratory's Computing and Computational Sciences Directorate conducts state-of-the-art research and development in computer and computational sciences in support of DOE's missions and programs. We develop and deploy leading-edge computing and information technology capabilities to keep computational sciences at a level comparable to experimental sciences in the pursuit of scientific discovery and technical innovation.

#user #government #usa #research

University of Texas at Austin HPC

Within ME Technology and Educational Resources (METER), the Mechanical Engineering (ME) Department maintains a remotely accessible High Performance Computing (HPC) Linux cluster of 12 rack-mounted Dell PowerEdge 2950 workstations and 9 rack-mounted Dell PowerEdge T610 servers running Ubuntu Linux.

#user #academia #usa #research

LLNL - Lawrence Livermore National Laboratory

We’re making the world safer by shaping the frontiers of HPC, data sciences, and computer science. Our programs are meeting our mission to sustain leading-edge discipline research, develop and maintain strong partnerships, and deliver creative technologies and software solutions. We design, develop, and deploy HPC capabilities not only in support of Livermore’s mission and program goals, such as the nation’s Stockpile Stewardship Program, but to improve national security and advance U.S. economic competitiveness. We partner with industry, foreign governments, and nontraditional government sponsors to drive technology advancements, speed development of new applications, and help cultivate other collaborations. HPC powers scientific discovery. We’re advancing the most complex modeling, simulation, and analysis as a peer to theory and experiment.

#user #government #usa #research

DOE ASCR - Advanced Scientific Computing Research

The mission of the Advanced Scientific Computing Research (ASCR) program is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy (DOE). A particular challenge of this program is fulfilling the science potential of emerging computing systems and other novel computing architectures, which will require numerous significant modifications to today's tools and techniques to deliver on the promise of exascale science.

#user #government #usa #research

FASTMath SciDAC

The FASTMath SciDAC Institute develops and deploys scalable mathematical algorithms and software tools for reliable simulation of complex physical phenomena and collaborates with application scientists to ensure the usefulness and applicability of FASTMath technologies. The FASTMath team brings together preeminent scientists in a broad range of applied mathematics areas. The FASTMath team has a proven track record of developing new mathematical technologies and algorithms, tackling difficult algorithmic and implementation issues as computer architectures undergo a fundamental shift, and engaging multiple application domains to enable new scientific discovery.

#user #government #collaboration #consortium #usa #research

ALCF - Argonne National Laboratory

The Argonne Leadership Computing Facility (ALCF) is one of two leadership computing facilities supported by the U.S. Department of Energy (DOE). The ALCF provides the computational science community with a world-class computing capability dedicated to breakthrough science and engineering. It began operation in 2006, with its team providing expertise and assistance to support user projects, to achieve top performance of applications, and to maximize the benefits of ALCF resources. The ALCF's mission is to accelerate major scientific discoveries and engineering breakthroughs for humanity by designing and providing world-leading computing facilities in partnership with the computational science community.

#user #government #usa #research

Southern Methodist University CSC

The mission of the Center for Scientific Computation at Southern Methodist University is to stimulate interdisciplinary education and research in simulation-based engineering and science. The Center for Scientific Computation provides the main resource for scientific computing collaboration at Southern Methodist University. The SMUHPC cluster was an instrumental resource in SMU's contribution to the discovery of the Higgs boson, as featured in a CBS news story.

#user #academia #usa #research

Stony Brook University IACS

The mission of IACS is to make sustained advances in the fundamental techniques of computation and in high-impact applications, with a vision that by 2017 we will be an internationally recognized center having vibrant multidisciplinary research and education programs, and demonstrated economic benefit to New York State.

#user #academia #usa #research

BNL Computational Science Center

The Brookhaven National Laboratory Computational Science Center (CSC) brings together researchers in biology, chemistry, physics and medicine with applied mathematicians and computer scientists to take advantage of the new opportunities for scientific discovery made possible by modern computers. In support of this, the center has a close alliance with applied mathematicians and computer scientists at Stony Brook University and Columbia University. At the CSC, computer clusters running the Linux operating system – typically containing 100 to 200 processors – are currently available for performing scientific calculations for Brookhaven researchers and their collaborators.

#user #government #usa #research

UW-Madison ACI

The University of Wisconsin-Madison Advanced Computing Initiative delivers a combination of shared computing resources and shared human resources to enable a broad range of researchers to improve their use of computers in their scholarly work. The successful outcome provides expertise, hardware, and software in the right ratio to empower the research mission of UW-Madison through computation.

#user #academia #usa #research

CHTC - Center for High Throughput Computing

The Center for High Throughput Computing (CHTC) offers a variety of large-scale computing resources and services for UW-affiliated researchers and their collaborators, including classically-defined high-throughput computing (HTC) and high-performance computing (HPC) resources. CHTC services and hardware are funded by the National Institutes of Health (NIH), the Department of Energy (DOE), the National Science Foundation (NSF), and various grants from the University itself.

#user #academia #usa #research

SUPER SciDAC

The SUPER project is a broadly-based SciDAC institute with expertise in compilers and other system tools, performance engineering, energy management, and resilience. The goal of the project is to ensure that DOE's computational scientists can successfully exploit the emerging generation of high performance computing (HPC) systems. This goal is being met by providing application scientists with strategies and tools to productively maximize performance, conserve energy, and attain resilience.

#user #government #usa #research

SciDAC - Scientific Discovery through Advanced Computing

Scientific computing, including modeling and simulation, has become crucial for research problems that are insoluble by traditional theoretical and experimental approaches, hazardous to study in the laboratory, or time-consuming or expensive to solve by traditional means.

#user #government #usa #research

University of Maryland HPC

The Division of Information Technology at the University of Maryland maintains several High Performance Computing (HPC) clusters. In particular, the two Deepthought HPC clusters are campus resources which are available for production use by campus researchers requiring compute cycles for parallel codes and applications.

#user #academia #usa #research

PNNL HPC

At Pacific Northwest National Laboratory, our mission is to transform the world through courageous discovery and innovation. Our vision: PNNL science and technology inspires and enables the world to live prosperously, safely, and securely. Our values of integrity, creativity, collaboration, impact and courage provide the foundation for all we do. With multidisciplinary expertise spanning technical pillars of high-performance computing, data science, and computational mathematics, we work toward building computational capabilities that position PNNL as a computing powerhouse. We also focus on enhancing the Science of Computing to achieve high-performance, power-efficient, and reliable computing at extreme scales for a spectrum of scientific endeavors that address significant problems of national interest, especially among PNNL’s core pursuits—energy, the environment, national security, and fundamental science. PNNL is successful in high performance computing by merging multiple areas of science and technology.

#user #government #usa #research