A
Accelerated Scientific Discovery (ASD) projects, 208
ACF, see Advanced Computer Facility
ACTA, see Advanced Complex Trait Analysis
Active Sun, 215
Adaptive mesh refinement (AMR), 214
Advanced Complex Trait Analysis (ACTA), 13
Advanced Computer Facility (ACF), 31
Advanced Research WRF (ARW) model, 202
Allocation unit, 34
ALPS, see Application Level Placement Scheduler
Amdahl’s law, 191
AMR, see Adaptive mesh refinement
Application Level Placement Scheduler (ALPS), 103
applications and workloads, 11–14
Advanced Complex Trait Analysis open-source software, 13
big data/data intensive computing, 13
computational fluid dynamics, 13
earth sciences, 12
Fluidity ocean modelling application, 12
highlights of main applications, 13–14
Lattice-Boltzmann approach, 12
Ludwig, 14
materials science and chemistry, 12
nanoscience, 12
physical life sciences, 13
plasma physics, 12
R statistical language, 13
soft matter physics, 12
SPRINT, 14
Unified Model MicroMag, 12
VASP, 14
Advanced Computer Facility, 31
innovations and challenges, 33–34
location and history, 31
Power Usage Effectiveness, 33
Aries router, 17
compute node architecture, 16–17
Dragonfly topology, 17
interconnect, 17
non-uniform memory access, 17
pre- and post-processing nodes, 18
QuickPath Interconnect, 16
service node architecture, 17
work filesystems, 19
long-term storage and data analytics, 28–31
access to UK-RDF and moving data, 29–30
connectivity, 29
data analytic cluster, 29
data analytic workflows, 30–31
Data Transfer Nodes, 30
Globus Toolkit, 30
UK-RDF technical details, 28–29
UK research data facility overview, 28
HECToR, 9
Moore’s Law, 9
on-site acceptance tests, 11
peak performance, 9
procurement working group, 10
sponsor/program background, 8–10
Statement of Requirements, 11
Allinea MAP, 26
Cray Message Passing Toolkit, 21
debugging, 27
Fortran Coarrays, 21
languages and compilers, 24–25
message passing, 21
NUMA-aware memory allocation, 24
POSIX threads, 23
Remote Memory Access, 22
Scalasca, 27
shared memory, 23
Unified Parallel C, 21
SAFE (Service Administration from EPCC), 36–40
custom analysis, 39
dynamic reports, 38
integrated reports, 38
job wait times, 39
reporting system, 38
resource allocations, 37
service functions, 40
SQL relational database, 36
ARCHER software configuration, 16
Cray Linux Environment, 15
file systems, 15
Linux, 15
Lustre, 15
post-processing nodes, 15
Compute Node Linux, 19
Intel Hyperthreads, 20
interactive access, 20
Job Launcher Nodes, 20
job submission script, 20
job submission system (PBS Pro/ALPS), 20–21
operating system, 19
queuing structure, 20
tools, 20
allocation unit, 34
Aries interconnect, 34
dominant codes, 35
ARW model, see Advanced Research WRF model
ASD projects, see Accelerated Scientific Discovery projects
B
Back-fill processing, 130
Bassi, 43
B/F, see Byte/flop value
Big data analysis, 13, 109, 193
Bioenergy, Peregrine and, 169
Bioinformatics, 47
Burst Buffer, 75
Byte/flop value (B/F), 132
C
Caldera, 191
CCM, see Cluster Compatibility Mode
CDC, see Control Data Corporation
CDUs, see Cooling Distribution Units
CESM, see Community Earth System Model
CFD, see Computational Fluid Dynamics
CGCMs, see Coupled general circulation models
CLE, see Cray Linux Environment
CLES, see Cray Lustre File System
Cluster Compatibility Mode (CCM), 153
CMU, see Configuration Management Utility
CNL, see Compute Node Linux
Community Atmosphere Model, 211
Community Earth System Model (CESM), 201
Computational Fluid Dynamics (CFD), 146
Compute Node Linux (CNL), 19, 65, 100
Computer Room Air Conditioner (CRAC) units, 156
Configuration Management Utility (CMU), 176
Control Data Corporation (CDC), 42
Cooling Distribution Units (CDUs), 167, 174
Coupled general circulation models (CGCMs), 210
CRAC units, see Computer Room Air Conditioner units
Cray
Data Management Platform, 65
Linux Environment (CLE), 15, 19, 152
Lustre File System (CLES), 98
Message Passing Toolkit, 21
Sonexion Storage Manager (CSSM) middleware, 60
CSSM middleware, see Cray Sonexion Storage Manager middleware
CUDA programming language, 200
D
Dark Energy Spectroscopic Instrument (DESI), 71
DARPA, see Defense Advanced Research Projects Agency
DART, see Data Assimilation Research Testbed
Data analysis and visualization (DAV), 177
Data Assimilation Research Testbed (DART), 201
Data Management Platform (DMP), 65
Data Transfer Nodes (DTN), 30
Data virtualization service (DVS), 63
DAV, see Data analysis and visualization
DaVinci, 43
Defense Advanced Research Projects Agency (DARPA), 45
Dell Opteron InfiniBand cluster, 142
Density functional theory (DFT), 92
DESI, see Dark Energy Spectroscopic Instrument
Design-build contractor (Peregrine), 166
DFT, see Density functional theory
Direct numerical simulation (DNS), 203
DMP, see Data Management Platform
DNS, see Direct numerical simulation
Dragonfly Network, 53
DTN, see Data Transfer Nodes
DVS, see Data virtualization service
E
Earth system science, see Yellowstone
Bassi, 43
batch queues, 44
bottleneck, 43
code changes, 43
DaVinci, 43
early application results, 71–74
better combustion for new fuels, 73–74
Dark Energy Spectroscopic Instrument, 71
geologic sequestration, 71
graphene and carbon nanotubes, 72–73
large-scale structure of the universe, 71
sequestered CO2, 71
syngas, 73
Edison supercomputer (NERSC-7), 44–48
accelerator design and development, 47
astrophysics, 47
bioinformatics, 47
biology, 47
climate science, 47
computer science, 47
data motion, 45
Defense Advanced Research Projects Agency, 45
energy storage, 47
fundamental particles and interactions, 47
fusion science, 47
materials science, 47
NERSC computer time, 48
peak performance, 45
solar energy, 46
user base and science areas, 46–48
exascale computing and future of NERSC, 74–78
Burst Buffer, 75
chip parallelism, 77
collaboration with ACES, 75–76
NERSC-8 as pre-exascale system, 74–76
white-box test systems, 77
Xeon Phi processor, 74
Franklin, 43
Jacquard, 43
Parallel Distributed Systems Facility, 43
physical infrastructure, 67–70
Computational Research and Theory facility, 67–69
Cray XC30 cooling design, 69–70
evaporative cooling, 68
best value, 51
flop count, 50
Intel tick-tock model, 51
Moore’s law, 51
overlapping systems, 51
requirements gathering, 49
compute blades, 55
compute node Linux, 65
Cray Data Management Platform, 65
Cray Sonexion 1600 hardware, 60–63
Cray Sonexion Storage Manager middleware, 60
data management platform and login nodes, 65
data virtualization service, 63
Dragonfly Network, 53
embedded server modules, 60
FDR InfiniBand rack switches, 61
global home and common, 64
global project, 64
global scratch, 64
Hardware Supervisory System, 57
In-Target Probe, 57
Lustre file system, 59
Management Target, 62
MDRAID write intent bitmaps, 61
MDS failover, 61
Metadata Management Units, 60
metadata performance, 59
power distribution unit, 55
processing cabinet, 57
processor and memory, 52
quad processor daughter cards, 55
Rank-1 details, 53
Rank-2 details, 54
Scalable Storage Units, 60
System Management Workstation, 55
third-party debuggers, 66
Embedded server modules (ESM), 60
Energy Recovery Water (ERW), 167
Energy Reuse Effectiveness, 164
Energy Systems Integration Facility (ESIF), 164
EoR survey, see Epoch of Reionization survey
Epoch of Reionization (EoR) survey, 147
eQuest DOE software, 218
ERW, see Energy Recovery Water
ESIF, see Energy Systems Integration Facility
ESM, see Embedded server modules
Extra Packages for Enterprise Linux (EPEL) RPM repository, 176
Extreme Scalability Mode, 153
F
Fast Fourier transforms, 92
FCFS, see First-come, first-served
FDR InfiniBand rack switches, 61
First-come, first-served (FCFS), 123
Fluidity ocean modelling application, 12
Fortran Coarrays, 21
Franklin, 43
Full wave seismic data assimilation (FWSDA), 212–214
FWSDA, see Full wave seismic data assimilation
G
General Parallel File System (GPFS), 98
Geologic sequestration, 71
Geyser, 191
GLADE, see Globally Accessible Data Environment
Globally Accessible Data Environment (GLADE), 190, 191
GPCRs, see G protein-coupled receptors
GPFS, see General Parallel File System
G protein-coupled receptors (GPCRs), 151
H
Hardware Supervisory System (HSS), 57
Hewlett-Packard (HP), 171
Configuration Management Utility, 176
s8500 liquid cooled enclosure, 174
High Performance Computing (HPC), 1
Cray, 153
evolution of, 42
GROMACS, 151
many-core in, 109
NCAR-Wyoming Supercomputing Center, 188
Red Hat Enterprise Linux, 197
showcase data center, 164
software challenges, 5
UK services, 8
Yellowstone, 189
High-Performance Storage System (HPSS) archival system, 188
benchmarks, 1
commodity clusters, 2
HPC ecosystems, 2
HPC software summary, 4
increased use and scale of, 2
I/O software, 4
significant systems, 3
accounting and user management, 106–108
central LDAP server, 108
resource allocation, 106
single-system layer, 108
web interfaces, 108
HLRN Supercomputing Alliance, 85–90
Administrative Council, 86
application benchmark, 89
funding, 86
HPC Consultants, 86
I/O benchmarks, 89
performance rating methodology, 89
science-political endeavor, 85
Scientific Board, 86
single-system view, 88
Technical Council, 86
North-German Vector Computer Association, 82
preparing for the future, 108–111
expert groups, 110
full bisection bandwidth, 110
Gottfried system, 109
Intel Parallel Computing Centers, 110
mean time between failure, 110
second phase installation of HLRN-III, 108–109
strong scaling, 110
chemistry and material sciences, 92–93
code scalability, 93
density functional theory, 92
fast Fourier transforms, 92
OpenFOAM, 95
parallelized large-eddy simulation model, 94
PHOENIX astrophysics code, 96
scientific computing and computer science at ZIB, 82–83
scientific domains and workloads, 90–92
climate development, 91
CPU time usage breakdown, 90, 91
job clouds, 91
key performance indicator, 90
wall-clock time, 91
Application Level Placement Scheduler, 103
application software, packages and libraries, 104
Compute Node Linux, 100
OpenFOAM, 104
software for program development and optimization, 103
SUSE Linux Enterprise, 100
storage strategy and pre- and post-processing, 105
supercomputers at ZIB (past to present), 83–85
financial burden, 85
massively parallel processing, 83
Moore’s law, 83
shared-memory processing, 83
Cray Lustre File System, 98
General Parallel File System, 98
hardware configuration, 97–100
network attached storage, 98
Nvidia Kepler GPUs, 97
Xeon Phi cluster, 97
HP, see Hewlett-Packard
HPC, see High Performance Computing
HPSS archival system, see High-Performance Storage System archival system
HSS, see Hardware Supervisory System
I
IBM
Blue Gene, 27
iDataPlex cluster supercomputer, 191
Parallel Environment runtime system, 207–208
RS/6000 SP system, 43
Insight Center Visualization Room, 177
In-Target Probe (ITP), 57
Intel
Composer Suite, 25
Hyperthreads, 20
Metadata Servers, 61
Parallel Computing Centers (IPCC), 110
Sandy Bridge CPUs, 65
tick-tock model, 51
Trace Analyzer, 176
Xeon E5-2695 v2 Ivy Bridge processors, 45, 46, 52
Xeon Phi coprocessors, 164, 173
IPCC, see Intel Parallel Computing Centers
ITP, see In-Target Probe
Ivy Bridge processors (Intel), 46
J
Jacquard, 43
Job
clouds, 91
submission scripts, 21
Job Launcher Nodes, 20
K
benchmark applications, 131–134
application performance, 132
byte/flop value, 132
FFB, 133
NICAM and Seism3D, 133
optimized performance, 134
PHASE and RSDFT, 134
Trinaryx3 algorithm, 134
HPC challenge, 125
real-space density functional theory code, 126
universe, composition of, 127
air handling units, 136
chiller building, 135
cross-section of building, 137
pillar-free computer room, 137
research building, 135
seismic isolated structures, 136
back-fill processing, 130
conditions satisfied, 128
early access prior to official operation, 127–129
huge jobs, 129
job limitations, 130
job scheduler, 131
operation policy after official operation, 129–130
Tofu network, 129
utilization statistics, 130–131
application software development programs, 119–120
development targets and schedule, 116–119
kanji character, 116
LINPACK benchmark program, 116
prototype system, 117
target requirements, 116
batch job flow, 124
compute nodes, 122
first-come, first-served, 123
hybrid programming model, 124
message passing interface, 123
parallel programming model, 123
programming models, 124
Tofu network, 121
Knight’s Landing (KNL) processor, 77
KNL processor, see Knight’s Landing (KNL) processor
L
Large-eddy simulation (LES), 203
Lattice QCD simulation code, 132
LDAP server, 108
Leadership in Energy and Environmental Design (LEED), 164
LEED, see Leadership in Energy and Environmental Design
LES, see Large-eddy simulation
Lignocellulose, 169
Linda programming model, 171
applications and workloads, 146–151
Computational Fluid Dynamics, 146
CRESTA, 147
DNA systems, 151
Epoch of Reionization survey, 147
G protein-coupled receptors, 151
GROMACS, 151
highlights of main applications, 149–151
Message Passing Interface, 148
data center facilities housing, 154–155
computer room, 154
cooling capacity, 155
heat exchangers, 155
heat re-use system, 155
power feeds, 155
future after, 161
air-water heat exchangers, 158
ambient cooling, 158
Chemistry building, 160
Computer Room Air Conditioner units, 156
district cooling, 157
environmentally friendly system, 156
KTH cooling grid, 159
major problem, 157
risk of water leakages, 157
Dell Opteron Infiniband cluster, 142
Lindgren project timeline, 144–146
Lindgren system, 152
PDC Center for High-Performance Computing, 142–144
Tier-0 resources, 144
why Lindgren was needed, 144
Cray HPC Suite, 153
HPC libraries, 154
storage, visualization, and analytics, 154
Cluster Compatibility Mode, 153
Cray Linux Environment, 152
Extreme Scalability Mode, 153
allocation rounds, 156
average usage, 156
user quotas, 156
LINPACK benchmark program, 116, 125
Linux
compute node, 65
computing cluster, 43
Cray Linux Environment, 19, 65
data transfer commands, 30
Extra Packages for Enterprise Linux, 176
Jacquard, 43
Red Hat Enterprise Linux, 197
Load Sharing Facility (LSF), 198
Lookup tables, 23
LSF, see Load Sharing Facility
Ludwig, 14
M
Madden-Julian Oscillation (MJO), 212, 214
Magnetohydrodynamics (MHD), 203
Management Target (MGT), 62
Massively parallel processing (MPP), 83
Materials physics and chemistry, Peregrine and, 168
MDS failover, 61
Mean time between failure (MTBF), 110
Mean time between system failure (MTBSF), 207
Mellanox
ConnectX-3 FDR adapter, 192
FDR InfiniBand interconnect, 191
Message passing interface (MPI), 123, 148
Message Passing Toolkit (MPT), 21
Metadata Management Units (MMUs), 60
MGT, see Management Target
MHD, see Magnetohydrodynamics
MJO, see Madden-Julian Oscillation
MMUs, see Metadata Management Units
Model for Prediction Across Scales (MPAS), 202, 214
MPAS, see Model for Prediction Across Scales
MPI, see Message passing interface
MPP, see Massively parallel processing
MPT, see Message Passing Toolkit
MTBF, see Mean time between failure
MTBSF, see Mean time between system failure
N
NAS, see Network Attached Storage
National Center for Atmospheric Research (NCAR), 187
National Centers for Environmental Prediction (NCEP), 202
National Renewable Energy Laboratory (NREL), see Peregrine
National Science Foundation (NSF), 187
Natural Environment Research Council (NERC), 8
NCAR, see National Center for Atmospheric Research
NCAR-Wyoming Supercomputing Center (NWSC), see Yellowstone
NCEP, see National Centers for Environmental Prediction
NERC, see Natural Environment Research Council
NERSC, see Edison
NERSC-8 Knight’s Landing processor, 77
Nested Regional Climate Model (NRCM), 201
Network Attached Storage (NAS), 98, 171, 174
NMM solver, see Nonhydrostatic Mesoscale Model solver
Nonhydrostatic Mesoscale Model (NMM) solver, 202
Non-uniform memory access (NUMA), 17
NRCM, see Nested Regional Climate Model
NSF, see National Science Foundation
NUMA, see Non-uniform memory access
NVIDIA
graphics processing unit, 193
Quadro 6000 graphics processing unit, 193
NWSC (NCAR-Wyoming Supercomputing Center), see Yellowstone
O
Oakland Scientific Facility (OSF), 67
OpenSHMEM, 21
Oracle SAM-QFS, 100
OSF, see Oakland Scientific Facility
P
PALM, see Parallelized large-eddy simulation model
Parallel Distributed Systems Facility (PDSF), 43
Parallel File System (PFS), 171, 174
Parallelized large-eddy simulation model (PALM), 94
Parallel Ocean Program (POP), 202, 211
Partitioned Global Address Space (PGAS) models, 21
PCG solver, see Preconditioned conjugate gradient solver
PDSF, see Parallel Distributed Systems Facility
PDU, see Power distribution unit
applications and workloads, 168–171
application highlights, 169–171
bioenergy, 169
computational tasks and domain examples, 168–169
Hartree-Fock methods, 170
lignocellulose, 169
Linda programming model, 171
materials physics and chemistry, 168
Vienna Ab Initio Simulation Program, 170
Weather Research and Forecasting Model, 170
wind resource modeling, 169
ESIF data center electrical power, 181
ESIF data center mechanical infrastructure, 180–181
fire suppression, 182
NREL energy efficient data center, 179–180
physical security, 182
return on investment, 182
hardware architecture, 171–174
Cooling Distribution Units, 174
HP s8500 liquid cooled enclosure, 174
Hydronics subsystem, 174
interconnect, 173
Network Attached Storage, 171, 174
Parallel File System, 171, 174
Sandy Bridge nodes, 171
top connectivity, 173
capture of waste heat, 165
Cooling Distribution Units, 167
design-build contractor, 166
design features, efficiency and sustainability measures, 164–166
direct liquid cooling, 165
Energy Recovery Water, 167
Energy Reuse Effectiveness, 164
holistic chips to bricks approach, 164
hot work, 167
HPC data center, 164
LEED Platinum Certification, 164
primary side, 167
secondary side, 167
smooth piping, 165
sponsor/program background, 166
programming system, 176
Intel Trace Analyzer, 176
Rogue-Wave Totalview debugger, 176
system overview, 171
Extra Packages for Enterprise Linux RPM repository, 176
HP Configuration Management Utility, 176
RedHat Package Manager, 176
visualization and analysis, 177–178
Collaboration Room, 178
data analysis and visualization, 177
OpenGL, 177
TurboVNC, 177
VirtualGL, 177
Visualization Room, 177
PFS, see Parallel File System
PGAS models, see Partitioned Global Address Space models
PHOENIX astrophysics code, 96
POP, see Parallel Ocean Program
Power distribution unit (PDU), 55
Power Usage Effectiveness, 33
Power Use Efficiency, 218
Preconditioned conjugate gradient (PCG) solver, 202
Procurement working group (PWG), 10
PWG, see Procurement working group
Python language, 200
Q
QFT, see Quasi fat tree
QPDCs, see Quad processor daughter cards
QPI, see QuickPath Interconnect
Quad processor daughter cards (QPDCs), 55
Quasi fat tree (QFT), 195
QuickPath Interconnect (QPI), 16
R
RDA, see Research Data Archive
Real-space density functional theory (RSDFT) code, 126
Red Hat Enterprise Linux (RHEL), 197
RedHat Package Manager (RPM), 176
Remote Memory Access (RMA), 22
Research Data Archive (RDA), 188
RHEL, see Red Hat Enterprise Linux
RMA, see Remote Memory Access
Rogue-Wave Totalview debugger, 176
Routing engine chains, 195
RPM, see RedHat Package Manager
RSDFT code, see Real-space density functional theory code
R statistical language, 13
S
Sandia National Laboratory (SNL), 73
Scalable Storage Units (SSUs), 60
SCEC, see Southern California Earthquake Center
Shared-memory processing (SMP), 83
Simulation rate in years per wall-clock day (SYPD), 204
SLES, see SUSE Linux Enterprise
SMP, see Shared-memory processing
SNL, see Sandia National Laboratory
Solar convection zone, 216
Southern California Earthquake Center (SCEC), 212
SPRINT, 14
SQL relational database, 36
SSP, see Sustained System Performance
SSUs, see Scalable Storage Units
SUSE Linux Enterprise (SLES), 100
Sustained System Performance (SSP), 50
Swedish tier-1 system, see Lindgren
Syngas, 73
SYPD, see Simulation rate in years per wall-clock day
T
Third-party debuggers, 66
Tick-tock model, 51
Tier-0 resources, 144
Top connectivity, 173
Top-of-rack (TOR) switches, 193
TOR switches, see Top-of-rack switches
Trinaryx3 algorithm, 134
U
UCAR, see University Corporation for Atmospheric Research
Unified Model MicroMag, 12
Unified Parallel C (UPC), 21
Universe, composition of, 127
University Corporation for Atmospheric Research (UCAR), 187
UPC, see Unified Parallel C
V
Variable frequency drive (VFD) cooling components, 218
VASP, see Vienna Ab Initio Simulation Program
VFD cooling components, see Variable frequency drive cooling components
Vienna Ab Initio Simulation Program (VASP), 14, 170
W
WACCM, see Whole Atmosphere Community Climate Model
Weather Research and Forecasting (WRF) model, 170, 201
White-box test systems, 77
Whole Atmosphere Community Climate Model (WACCM), 202
WIBs, see Write intent bitmaps
Wind resource modeling, 169
Wrapper scripts, 200
WRF model, see Weather Research and Forecasting model
Write intent bitmaps (WIBs), 61
X
Xeon Phi processor (Intel), 45, 74, 164, 173
XML reporting language, 39
Y
early science results, 208–216
Accelerated Scientific Discovery projects, 208
Active Sun, 215
adaptive mesh refinement, 214
Community Atmosphere Model, 211
coupled general circulation models, 210
full wave seismic data assimilation, 212–214
high-resolution coupled climate experiment, 210–212
Madden-Julian Oscillation, 212, 214
magnetic field of Quiet Sun, 215–216
Parallel Ocean Program, 211
regional-scale prediction of future climate and air quality, 208–210
resolving mesoscale features in coupled climate simulations, 212
San Andreas Fault zone, 213
solar convection zone, 216
toward global cloud-resolving models, 214–215
visualization clusters, 214
hardware architecture, 192–196
A-groups, 194
B-group, 194
Mellanox ConnectX-3 FDR adapter, 192
NVIDIA graphics processing unit, 193
quasi fat tree, 195
rack-level packaging, 193
routing engine chains, 195
spine switch modules, 194
storage system, 196
TOR switch, 194
early experience in operation, 219
flexibility, 217
green design, 216
initial building shell, 217
Power Use Efficiency, 218
variable frequency drive cooling components, 218
worst-case initial load, 219
community-driven computer models, 187
High-Performance Storage System archival system, 188
Research Data Archive, 188
scientific capabilities, 186
sponsor and program background, 187–188
vendor proposals, 190
Yellowstone procurement, 189–190
Amdahl’s law, 191
budget allocation, 191
Caldera, 191
Geyser, 191
Globally Accessible Data Environment, 190
IBM iDataPlex cluster supercomputer, 191
Mellanox FDR InfiniBand interconnect, 191
CUDA programming language, 200
disk filesystem and tape archive, 198–199
GLADE storage cluster, 198
Load Sharing Facility, 198
Nagios instance, 199
operating systems and system management, 197–198
programming environment, 200
Python language, 200
Red Hat Enterprise Linux, 197
system monitoring, 199
wrapper scripts, 200
disk storage usage, 206
job success rates, 207
mean time between system failure, 207
reliability, uptime, and utilization, 207–208
system usage patterns, 205–206
workload and application performance, 200–205
Advanced Research WRF model, 202
application domain descriptions, 201–204
application performance, 204–205
Community Earth System Model, 201
Data Assimilation Research Testbed, 201
data assimilation systems, 203–204
direct numerical simulation, 203
earth sciences, 203
fluid dynamics and turbulence, 203
large-eddy simulation, 203
magnetohydrodynamics applications, 203
Model for Prediction Across Scales, 202
Nested Regional Climate Model, 201
Nonhydrostatic Mesoscale Model solver, 202
ocean sciences, 202
Parallel Ocean Program, 202
preconditioned conjugate gradient solver, 202
weather prediction and atmospheric chemistry, 202
Weather Research and Forecasting model, 201
Whole Atmosphere Community Climate Model, 202
Z
Zuse Institute Berlin (ZIB), see HLRN-III