Index

A

Accelerated Scientific Discovery (ASD) projects, 208

ACF, see Advanced Computer Facility

ACTA, see Advanced Complex Trait Analysis

Active Sun, 215

Adaptive mesh refinement (AMR), 214

Advanced Complex Trait Analysis (ACTA), 13

Advanced Computer Facility (ACF), 31

Advanced Research WRF (ARW) model, 202

Allocation unit, 34

ALPS, see Application Level Placement Scheduler

Amdahl’s law, 191

AMR, see Adaptive mesh refinement

Application Level Placement Scheduler (ALPS), 103

ARCHER, 7–40

applications and workloads, 11–14

Advanced Complex Trait Analysis open-source software, 13

big data/data intensive computing, 13

computational fluid dynamics, 13

earth sciences, 12

Fluidity ocean modelling application, 12

highlights of main applications, 13–14

Lattice-Boltzmann approach, 12

Ludwig, 14

materials science and chemistry, 12

nanoscience, 12

physical life sciences, 13

plasma physics, 12

R statistical language, 13

soft matter physics, 12

SPRINT, 14

Unified Model MicroMag, 12

VASP, 14

data center/facility, 31–34

Advanced Computer Facility, 31

infrastructure, 31–33

innovations and challenges, 33–34

location and history, 31

Power Usage Effectiveness, 33

hardware architecture, 16–19

Aries router, 17

compute node architecture, 16–17

Dragonfly topology, 17

external login nodes, 17–18

home filesystems, 18–19

interconnect, 17

non-uniform memory access, 17

pre- and post-processing nodes, 18

QuickPath Interconnect, 16

service node architecture, 17

storage systems, 18–19

work filesystems, 19

long-term storage and data analytics, 28–31

access to UK-RDF and moving data, 29–30

connectivity, 29

data analytic cluster, 29

data analytic workflows, 30–31

Data Transfer Nodes, 30

Globus Toolkit, 30

UK-RDF technical details, 28–29

UK research data facility overview, 28

overview, 8–11

HECToR, 9

Moore’s Law, 9

on-site acceptance tests, 11

peak performance, 9

procurement working group, 10

sponsor/program background, 8–10

Statement of Requirements, 11

timeline, 10–11

programming system, 21–27

Allinea MAP, 26

Cray Message Passing Toolkit, 21

CrayPAT, 25–26

debugging, 27

Fortran Coarrays, 21

hybrid models, 23–24

languages and compilers, 24–25

message passing, 21

NUMA-aware memory allocation, 24

OpenSHMEM, 21, 22

performance tools, 25–27

PGAS, 21–23

POSIX threads, 23

programming models, 21–24

Remote Memory Access, 22

Scalasca, 27

shared memory, 23

Unified Parallel C, 21

SAFE (Service Administration from EPCC), 36–40

custom analysis, 39

dynamic reports, 38

integrated reports, 38

job wait times, 39

reporting system, 38

resource allocations, 37

service functions, 40

SQL relational database, 36

system overview, 14–15

ARCHER software configuration, 16

Cray Linux Environment, 15

file systems, 15

Linux, 15

Lustre, 15

post-processing nodes, 15

system software, 19–21

Compute Node Linux, 19

Intel Hyperthreads, 20

interactive access, 20

Job Launcher Nodes, 20

job submission script, 20

job submission system (PBS Pro/ALPS), 20–21

operating system, 19

queuing structure, 20

tools, 20

system statistics, 34–36

allocation unit, 34

Aries interconnect, 34

dominant codes, 35

job sizes, 36, 37

Aries router, 17, 101

ARW model, see Advanced Research WRF model

ASD projects, see Accelerated Scientific Discovery projects

B

Back-fill processing, 130

Bassi, 43

Bellman, 142, 143

B/F, see Byte/flop value

Big data analysis, 13, 109, 193

Bioenergy, Peregrine and, 169

Bioinformatics, 47

Burst Buffer, 75

Byte/flop value (B/F), 132

C

Caldera, 191

CCM, see Cluster Compatibility Mode

CDC, see Control Data Corporation

CDUs, see Cooling Distribution Units

CESM, see Community Earth System Model

CFD, see Computational Fluid Dynamics

CGCMs, see Coupled general circulation models

CLE, see Cray Linux Environment

CLES, see Cray Lustre File System

Cluster Compatibility Mode (CCM), 153

CMU, see Configuration Management Utility

CNL, see Compute Node Linux

Community Atmosphere Model, 211

Community Earth System Model (CESM), 201

Computational Fluid Dynamics (CFD), 146

Compute Node Linux (CNL), 19, 65, 100

Computer Room Air Conditioner (CRAC) units, 156

Configuration Management Utility (CMU), 176

Control Data Corporation (CDC), 42

Cooling Distribution Units (CDUs), 167, 174

Coupled general circulation models (CGCMs), 210

CRAC units, see Computer Room Air Conditioner units

Cray

Data Management Platform, 65

Linux Environment (CLE), 15, 19, 152

Lustre File System (CLES), 98

Message Passing Toolkit, 21

Sonexion Storage Manager (CSSM) middleware, 60

CSSM middleware, see Cray Sonexion Storage Manager middleware

CUDA programming language, 200

D

Dark Energy Spectroscopic Instrument (DESI), 71

DARPA, see Defense Advanced Research Projects Agency

DART, see Data Assimilation Research Testbed

Data analysis and visualization (DAV), 177

Data Assimilation Research Testbed (DART), 201

Data Management Platform (DMP), 65

Data Transfer Nodes (DTN), 30

Data virtualization service (DVS), 63

DAV, see Data analysis and visualization

DaVinci, 43

Defense Advanced Research Projects Agency (DARPA), 45

Dell Opteron Infiniband cluster, 142

Density functional theory (DFT), 92

DESI, see Dark Energy Spectroscopic Instrument

Design-build contractor (Peregrine), 166

DFT, see Density functional theory

Direct numerical simulation (DNS), 203

DMP, see Data Management Platform

DNS, see Direct numerical simulation

Dragonfly Network, 53

DTN, see Data Transfer Nodes

DVS, see Data virtualization service

E

Earth system science, see Yellowstone

Edison, 41–80

Bassi, 43

batch queues, 44

bottleneck, 43

code changes, 43

DaVinci, 43

early application results, 71–74

better combustion for new fuels, 73–74

Dark Energy Spectroscopic Instrument, 71

geologic sequestration, 71

graphene and carbon nanotubes, 72–73

large-scale structure of the universe, 71

sequestered CO2, 71

syngas, 73

Edison supercomputer (NERSC-7), 44–48

accelerator design and development, 47

astrophysics, 47

bioinformatics, 47

biology, 47

climate science, 47

computer science, 47

data motion, 45

Defense Advanced Research Projects Agency, 45

energy storage, 47

fundamental particles and interactions, 47

fusion science, 47

materials science, 47

NERSC computer time, 48

peak performance, 45

solar energy, 46

user base and science areas, 46–48

exascale computing and future of NERSC, 74–78

application readiness, 76–78

Burst Buffer, 75

chip parallelism, 77

collaboration with ACES, 75–76

NERSC-8 as pre-exascale system, 74–76

white-box test systems, 77

Xeon Phi processor, 74

Franklin, 43

Jacquard, 43

Parallel Distributed Systems Facility, 43

physical infrastructure, 67–70

Computational Research and Theory facility, 67–69

Cray XC30 cooling design, 69–70

evaporative cooling, 68

procurement strategy, 48–51

best value, 51

flop count, 50

Intel tick-tock model, 51

Moore’s law, 51

overlapping systems, 51

requirements gathering, 49

sustained performance, 49–51

system architecture, 52–66

cabinet and chassis, 55–57

compute blades, 55

compute node Linux, 65

Cray Data Management Platform, 65

Cray Sonexion 1600 hardware, 60–63

Cray Sonexion Storage Manager middleware, 60

data management platform and login nodes, 65

data virtualization service, 63

Dragonfly Network, 53

Edison file systems, 58–60

embedded server modules, 60

FDR InfiniBand rack switches, 61

global file system, 63–64

global home and common, 64

global project, 64

global scratch, 64

Hardware Supervisory System, 57

In-Target Probe, 57

interconnect, 52–55

intra-chassis network, 53, 54

Lustre file system, 59

Management Target, 62

MDRAID write intent bitmaps, 61

MDS failover, 61

Metadata Management Units, 60

metadata performance, 59

power distribution unit, 55

processing cabinet, 57

processor and memory, 52

quad processor daughter cards, 55

Rank-1 details, 53

Rank-2 details, 54

Rank-3 details, 54–55

Scalable Storage Units, 60

service blades, 55–57

storage and I/O, 58–65

System Management Workstation, 55

system software, 65–66

third-party debuggers, 66

user services, 70–71

Embedded server modules (ESM), 60

Energy Recovery Water (ERW), 167

Energy Reuse Effectiveness, 164

Energy Systems Integration Facility (ESIF), 164

EoR survey, see Epoch of Reionization survey

Epoch of Reionization (EoR) survey, 147

eQuest DOE software, 218

ERW, see Energy Recovery Water

ESIF, see Energy Systems Integration Facility

ESM, see Embedded server modules

Extra Packages for Enterprise Linux (EPEL) RPM repository, 176

Extreme Scalability Mode, 153

F

Fast Fourier transforms, 92

FCFS, see First-come, first-served

FDR InfiniBand rack switches, 61

First-come, first-served (FCFS), 123

Fluidity ocean modelling application, 12

Fortran Coarrays, 21

Franklin, 43

Full wave seismic data assimilation (FWSDA), 212–214

FWSDA, see Full wave seismic data assimilation

G

General Parallel File System (GPFS), 98

Geologic sequestration, 71

Geyser, 191

GLADE, see Globally Accessible Data Environment

Globally Accessible Data Environment (GLADE), 190, 191

GPCRs, see G protein-coupled receptors

GPFS, see General Parallel File System

G protein-coupled receptors (GPCRs), 151

H

Hardware Supervisory System (HSS), 57

HECToR, 9, 36

Hewlett-Packard (HP), 171

Configuration Management Utility, 176

s8500 liquid cooled enclosure, 174

High Performance Computing (HPC), 1

Cray, 153

evolution of, 42

GROMACS, 151

history of, 1–5

many-core in, 109

NCAR-Wyoming Supercomputing Center, 188

Red Hat Enterprise Linux, 197

showcase data center, 164

software challenges, 5

UK services, 8

Yellowstone, 189

High-Performance Storage System (HPSS) archival system, 188

History of HPC, 1–5

benchmarks, 1

commodity clusters, 2

HPC ecosystems, 2

HPC software summary, 4

increased use and scale of, 2

I/O software, 4

significant systems, 3

HLRN-III, 81–113

accounting and user management, 106–108

central LDAP server, 108

resource allocation, 106

single-system layer, 108

web interfaces, 108

data center facility, 105–106

HLRN Supercomputing Alliance, 85–90

Administrative Council, 86

application benchmark, 89

benchmarks, 88–90

bodies, 85–86

funding, 86

HPC Consultants, 86

I/O benchmarks, 89

performance rating methodology, 89

procurements, 86–88

science-political endeavor, 85

Scientific Board, 86

single-system view, 88

Technical Council, 86

North-German Vector Computer Association, 82

preparing for the future, 108–111

expert groups, 110

full bisection bandwidth, 110

Gottfried system, 109

HLRN-IV and beyond, 110–111

Intel Parallel Computing Centers, 110

many-core in HPC, 109–110

mean time between failure, 110

second phase installation of HLRN-III, 108–109

strong scaling, 110

research fields, 92–97

chemistry and material sciences, 92–93

code scalability, 93

density functional theory, 92

earth sciences, 93–94

engineering, 95–96

fast Fourier transforms, 92

OpenFOAM, 95

parallelized large-eddy simulation model, 94

PHOENIX astrophysics code, 96

physics, 96–97

scientific computing and computer science at ZIB, 82–83

scientific domains and workloads, 90–92

climate development, 91

CPU time usage breakdown, 90, 91

job clouds, 91

key performance indicator, 90

wall-clock time, 91

software ecosystem, 100–104

Application Level Placement Scheduler, 103

application software, packages and libraries, 104

Compute Node Linux, 100

OpenFOAM, 104

software for program development and optimization, 103

SUSE Linux Enterprise, 100

system software, 100–103

storage strategy and pre- and post-processing, 105

supercomputers at ZIB (past to present), 83–85

financial burden, 85

massively parallel processing, 83

Moore’s law, 83

shared-memory processing, 83

system architecture, 97–100

Cray Lustre File System, 98

General Parallel File System, 98

hardware configuration, 97–100

network attached storage, 98

Nvidia Kepler GPUs, 97

Xeon Phi cluster, 97

HP, see Hewlett-Packard

HPC, see High Performance Computing

HPSS archival system, see High-Performance Storage System archival system

HSS, see Hardware Supervisory System

I

IBM

Blue Gene, 27

GPFS, 63, 198

iDataPlex cluster supercomputer, 191

Parallel Environment runtime system, 207–208

RS/6000 SP system, 43

TS3500 tape library, 29, 154

Insight Center Visualization Room, 177

In-Target Probe (ITP), 57

Intel

Composer Suite, 25

Hyperthreads, 20

Metadata Servers, 61

Parallel Computing Centers (IPCC), 110

Sandy Bridge CPUs, 65

tick-tock model, 51

Trace Analyzer, 176

Xeon E5-2695 v2 Ivy Bridge processors, 45, 46, 52

Xeon Phi coprocessors, 164, 173

Intra-chassis network, 53, 54

IPCC, see Intel Parallel Computing Centers

ITP, see In-Target Probe

Ivy Bridge processors (Intel), 46

J

Jacquard, 43

Job

clouds, 91

submission scripts, 21

Job Launcher Nodes, 20

K

K computer, 115–139

benchmark applications, 131–134

application performance, 132

byte/flop value, 132

FFB, 133

LatticeQCD, 133–134

NICAM and Seism3D, 133

optimized performance, 134

PHASE and RSDFT, 134

Trinaryx3 algorithm, 134

early results, 125–127

benchmark results, 125–126

Gordon Bell prizes, 126–127

HPC challenge, 125

real-space density functional theory code, 126

universe, composition of, 127

facilities, 134–137

air handling units, 136

chiller building, 135

cross-section of building, 137

pillar-free computer room, 137

research building, 135

seismic isolated structures, 136

operation, 127–131

back-fill processing, 130

conditions satisfied, 128

early access prior to official operation, 127–129

huge jobs, 129

job limitations, 130

job scheduler, 131

operation policy after official operation, 129–130

Tofu network, 129

utilization statistics, 130–131

overview, 116–120

application software development programs, 119–120

development history, 118–119

development targets and schedule, 116–119

kanji character, 116

LINPACK benchmark program, 116

prototype system, 117

target requirements, 116

system overview, 120–125

batch job flow, 124

compute nodes, 122

first-come, first-served, 123

hardware, 120–122

hybrid programming model, 124

message passing interface, 123

parallel programming model, 123

programming models, 124

system software, 123–124

Tofu network, 121

Knights Landing (KNL) processor, 77

KNL processor, see Knights Landing (KNL) processor

L

Large-eddy simulation (LES), 203

Lattice QCD simulation code, 132

LDAP server, 108

Leadership in Energy and Environmental Design (LEED), 164

LEED, see Leadership in Energy and Environmental Design

LES, see Large-eddy simulation

Lignocellulose, 169

Linda programming model, 171

Lindgren, 141–162

applications and workloads, 146–151

climate modeling, 146–147

Computational Fluid Dynamics, 146

CRESTA, 147

DNA systems, 151

Epoch of Reionization survey, 147

G protein-coupled receptors, 151

GROMACS, 151

highlights of main applications, 149–151

Message Passing Interface, 148

Nek5000, 149–150

data center facilities housing, 154–155

computer room, 154

cooling capacity, 155

heat exchangers, 155

heat re-use system, 155

power feeds, 155

future after, 161

heat re-use system, 156–160

air-water heat exchangers, 158

ambient cooling, 158

Chemistry building, 160

Computer Room Air Conditioner units, 156

district cooling, 157

environmentally friendly system, 156

KTH cooling grid, 159

major problem, 157

risk of water leakages, 157

overview, 141–146

Bellman, 142, 143

Dell Opteron Infiniband cluster, 142

Lindgren project timeline, 144–146

Lindgren system, 152

PDC Center for High-Performance Computing, 142–144

Tier-0 resources, 144

why Lindgren was needed, 144

programming system, 153–154

Cray HPC Suite, 153

HPC libraries, 154

storage, visualization, and analytics, 154

system software, 152–153

Cluster Compatibility Mode, 153

Cray Linux Environment, 152

Extreme Scalability Mode, 153

system statistics, 155–156

allocation rounds, 156

average usage, 156

user quotas, 156

LINPACK benchmark program, 116, 125

Linux

compute node, 65

computing cluster, 43

Cray Linux Environment, 19, 65

data transfer commands, 30

Extra Packages for Enterprise Linux, 176

Jacquard, 43

Red Hat Enterprise Linux, 197

SUSE Linux, 100, 103, 152

Load Sharing Facility (LSF), 198

Lookup tables, 23

LSF, see Load Sharing Facility

Ludwig, 14

M

Madden-Julian Oscillation (MJO), 212, 214

Magnetohydrodynamics (MHD), 203

Management Target (MGT), 62

Massively parallel processing (MPP), 83

Materials physics and chemistry, Peregrine and, 168

MDS failover, 61

Mean time between failure (MTBF), 110

Mean time between system failure (MTBSF), 207

Mellanox

ConnectX-3 FDR adapter, 192

FDR Infiniband interconnect, 191

Message passing interface (MPI), 123, 148

Message Passing Toolkit (MPT), 21

Metadata Management Units (MMUs), 60

MGT, see Management Target

MHD, see Magnetohydrodynamics

MJO, see Madden-Julian Oscillation

MMUs, see Metadata Management Units

Model for Prediction Across Scales (MPAS), 202, 214

Moore’s law, 9, 51, 83

MPAS, see Model for Prediction Across Scales

MPI, see Message passing interface

MPP, see Massively parallel processing

MPT, see Message Passing Toolkit

MTBF, see Mean time between failure

MTBSF, see Mean time between system failure

N

NAS, see Network Attached Storage

National Center for Atmospheric Research (NCAR), 187

National Centers for Environmental Prediction (NCEP), 202

National Renewable Energy Laboratory (NREL), see Peregrine

National Science Foundation (NSF), 187

Natural Environment Research Council (NERC), 8

NCAR, see National Center for Atmospheric Research

NCAR-Wyoming Supercomputing Center (NWSC), see Yellowstone

NCEP, see National Centers for Environmental Prediction

NERC, see Natural Environment Research Council

NERSC, see Edison

NERSC-8 Knights Landing processor, 77

Nested Regional Climate Model (NRCM), 201

Network Attached Storage (NAS), 98, 171, 174

NMM solver, see Nonhydrostatic Mesoscale Model solver

Nonhydrostatic Mesoscale Model (NMM) solver, 202

Non-uniform memory access (NUMA), 17

NRCM, see Nested Regional Climate Model

NSF, see National Science Foundation

NUMA, see Non-uniform memory access

NVIDIA

graphics processing unit, 193

Quadro 6000 graphics processing unit, 193

NWSC (NCAR-Wyoming Supercomputing Center), see Yellowstone

O

Oakland Scientific Facility (OSF), 67

OpenFOAM, 95, 104

OpenSHMEM, 21

Oracle SAM-QFS, 100

OSF, see Oakland Scientific Facility

P

PALM, see Parallelized large-eddy simulation model

Parallel Distributed Systems Facility (PDSF), 43

Parallel File System (PFS), 171, 174

Parallelized large-eddy simulation model (PALM), 94

Parallel Ocean Program (POP), 202, 211

Partitioned Global Address Space (PGAS) models, 21

PCG solver, see Preconditioned conjugate gradient solver

PDSF, see Parallel Distributed Systems Facility

PDU, see Power distribution unit

Peregrine, 163–184

applications and workloads, 168–171

application highlights, 169–171

bioenergy, 169

computational tasks and domain examples, 168–169

Hartree-Fock methods, 170

lignocellulose, 169

Linda programming model, 171

materials physics and chemistry, 168

Vienna Ab Initio Simulation Program, 170

Weather Research and Forecasting Model, 170

wind resource modeling, 169

data center/facility, 179–182

ESIF data center electrical power, 181

ESIF data center mechanical infrastructure, 180–181

fire suppression, 182

NREL energy efficient data center, 179–180

physical security, 182

power interruptions, 181–182

return on investment, 182

hardware architecture, 171–174

Cooling Distribution Units, 174

HP s8500 liquid cooled enclosure, 174

Hydronics subsystem, 174

interconnect, 173

Network Attached Storage, 171, 174

Parallel File System, 171, 174

SandyBridge nodes, 171

top connectivity, 173

overview, 163–168

capture of waste heat, 165

Cooling Distribution Units, 167

design-build contractor, 166

design features, efficiency and sustainability measures, 164–166

direct liquid cooling, 165

Energy Recovery Water, 167

Energy Reuse Effectiveness, 164

holistic chips to bricks approach, 164

hot work, 167

HPC data center, 164

LEED Platinum Certification, 164

primary side, 167

secondary side, 167

smooth piping, 165

sponsor/program background, 166

timeline, 166–168

programming system, 176

Intel Trace Analyzer, 176

Rogue-Wave Totalview debugger, 176

system overview, 171

system software, 175–176

Extra Packages for Enterprise Linux RPM repository, 176

HP Configuration Management Utility, 176

RedHat Package Manager, 176

system statistics, 182–183

visualization and analysis, 177–178

Collaboration Room, 178

data analysis and visualization, 177

OpenGL, 177

TurboVNC, 177

VirtualGL, 177

Visualization Room, 177

PFS, see Parallel File System

PGAS models, see Partitioned Global Address Space models

PHOENIX astrophysics code, 96

POP, see Parallel Ocean Program

Power distribution unit (PDU), 55

Power Usage Effectiveness, 33

Power Use Efficiency, 218

Preconditioned conjugate gradient (PCG) solver, 202

Procurement working group (PWG), 10

PWG, see Procurement working group

Python language, 200

Q

QFT, see Quasi fat tree

QPDCs, see Quad processor daughter cards

QPI, see QuickPath Interconnect

Quad processor daughter cards (QPDCs), 55

Quasi fat tree (QFT), 195

QuickPath Interconnect (QPI), 16

Quiet Sun, 215–216

R

RDA, see Research Data Archive

Real-space density functional theory (RSDFT) code, 126

Red Hat Enterprise Linux (RHEL), 197

RedHat Package Manager (RPM), 176

Remote Memory Access (RMA), 22

Research Data Archive (RDA), 188

RHEL, see Red Hat Enterprise Linux

RMA, see Remote Memory Access

Rogue-Wave Totalview debugger, 176

Routing engine chains, 195

RPM, see RedHat Package Manager

RSDFT code, see Real-space density functional theory code

R statistical language, 13

S

Sandia National Laboratory (SNL), 73

Scalable Storage Units (SSUs), 60

SCEC, see Southern California Earthquake Center

Shared-memory processing (SMP), 83

Simulation rate in years per wall-clock day (SYPD), 204

SLES, see SUSE Linux Enterprise

SMP, see Shared-memory processing

SNL, see Sandia National Laboratory

Solar convection zone, 216

Southern California Earthquake Center (SCEC), 212

SPRINT, 14

SQL relational database, 36

SSP, see Sustained System Performance

SSUs, see Scalable Storage Units

SUSE Linux Enterprise (SLES), 100

Sustained System Performance (SSP), 50

Swedish tier-1 system, see Lindgren

Syngas, 73

SYPD, see Simulation rate in years per wall-clock day

T

Third-party debuggers, 66

Tick-tock model, 51

Tier-0 resources, 144

Tofu network, 121, 129

Top connectivity, 173

Top-of-rack (TOR) switches, 193

TOR switches, see Top-of-rack switches

Trinaryx3 algorithm, 134

U

UCAR, see University Corporation for Atmospheric Research

Unified Model MicroMag, 12

Unified Parallel C (UPC), 21

Universe, composition of, 127

University Corporation for Atmospheric Research (UCAR), 187

UPC, see Unified Parallel C

V

Variable frequency drive (VFD) cooling components, 218

VASP, see Vienna Ab Initio Simulation Program

VFD cooling components, see Variable frequency drive cooling components

Vienna Ab Initio Simulation Program (VASP), 14, 170

W

WACCM, see Whole Atmosphere Community Climate Model

Weather Research and Forecasting (WRF) model, 170, 201

White-box test systems, 77

Whole Atmosphere Community Climate Model (WACCM), 202

WIBs, see Write intent bitmaps

Wind resource modeling, 169

Wrapper scripts, 200

WRF model, see Weather Research and Forecasting model

Write intent bitmaps (WIBs), 61

X

Xeon Phi processor (Intel), 45, 74, 164, 173

XML reporting language, 39

Y

Yellowstone, 185–224

early science results, 208–216

Accelerated Scientific Discovery projects, 208

Active Sun, 215

adaptive mesh refinement, 214

Community Atmosphere Model, 211

coupled general circulation models, 210

full wave seismic data assimilation, 212–214

high-resolution coupled climate experiment, 210–212

Madden-Julian Oscillation, 212, 214

magnetic field of Quiet Sun, 215–216

Parallel Ocean Program, 211

regional-scale prediction of future climate and air quality, 208–210

resolving mesoscale features in coupled climate simulations, 212

San Andreas Fault zone, 213

solar convection zone, 216

toward global cloud-resolving models, 214–215

visualization clusters, 214

future challenges, 219–220

hardware architecture, 192–196

A-groups, 194

B-group, 194

interconnect, 193–196

Mellanox ConnectX-3 FDR adapter, 192

NVIDIA graphics processing unit, 193

processors and nodes, 192–193

quasi fat tree, 195

rack-level packaging, 193

routing engine chains, 195

spine switch modules, 194

storage system, 196

TOR switch, 194

NWSC facility, 216–219

early experience in operation, 219

flexibility, 217

green design, 216

initial building shell, 217

overview and design, 216–218

Power Use Efficiency, 218

variable frequency drive cooling components, 218

worst-case initial load, 219

overview, 186–188

community-driven computer models, 187

High-Performance Storage System archival system, 188

Research Data Archive, 188

science motivation, 186–187

scientific capabilities, 186

sponsor and program background, 187–188

project timeline, 188–190

NWSC construction, 188–189

vendor proposals, 190

Yellowstone procurement, 189–190

system overview, 190–192

Amdahl’s law, 191

budget allocation, 191

Caldera, 191

Geyser, 191

Globally Accessible Data Environment, 190

IBM iDataPlex cluster supercomputer, 191

Mellanox FDR Infiniband interconnect, 191

system software, 196–200

CUDA programming language, 200

disk filesystem and tape archive, 198–199

GLADE storage cluster, 198

Load Sharing Facility, 198

Nagios instance, 199

operating systems and system management, 197–198

programming environment, 200

Python language, 200

Red Hat Enterprise Linux, 197

system monitoring, 199

wrapper scripts, 200

system statistics, 205–208

disk storage usage, 206

job success rates, 207

mean time between system failure, 207

reliability, uptime, and utilization, 207–208

system usage patterns, 205–206

workload and application performance, 200–205

Advanced Research WRF model, 202

application domain descriptions, 201–204

application performance, 204–205

climate science, 201–202

Community Earth System Model, 201

Data Assimilation Research Testbed, 201

data assimilation systems, 203–204

direct numerical simulation, 203

earth sciences, 203

fluid dynamics and turbulence, 203

geospace sciences, 202–203

large-eddy simulation, 203

magnetohydrodynamics applications, 203

Model for Prediction Across Scales, 202

Nested Regional Climate Model, 201

Nonhydrostatic Mesoscale Model solver, 202

ocean sciences, 202

Parallel Ocean Program, 202

preconditioned conjugate gradient solver, 202

weather prediction and atmospheric chemistry, 202

Weather Research and Forecasting model, 201

Whole Atmosphere Community Climate Model, 202

Z

Zuse Institute Berlin (ZIB), see HLRN-III
