Glossary

ABMS See agent-based modeling and simulation.

Agent The individually identifiable component of an agent-based model. An agent typically executes some type of behavior and represents a decision maker at some level.

Agent-based modeling and simulation (ABMS) A method of simulation that computes the system-level consequences of the behaviors of individuals. ABMS is also known as individual-based simulation or individual-based modeling.

Ahmad–Cohen neighbor scheme (ACS) Division of the gravitational force acting on a particle into two components: an irregular one, from nearby particles, and a regular one, from distant particles. Aarseth's direct N-body codes from NBODY5 onward (NBODY6, NBODY6++) employ the ACS. Each part of the force can be assigned its own time step, which determines the update interval for that force component only, while the other component is extrapolated with low-order Taylor series polynomials if needed. The idea of the ACS could be generalized to more levels, but this does not seem to be in common use at this time.
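
As an illustrative sketch (the notation here is generic and not taken from any particular code), the force on particle i is split as F_i = F_i,irr + F_i,reg, where the irregular part is recomputed on a short time step Δt_irr and the regular part on a much longer step Δt_reg; between regular updates, the regular component is extrapolated by a low-order Taylor series such as F_reg(t) ≈ F_reg(t_0) + (t − t_0)Ḟ_reg(t_0).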

Base Object Model (BOM) A Simulation Interoperability Standards Organization (SISO) standard for describing a reusable piece part of a simulation.

BOM See Base Object Model.

CCA See Common Component Architecture.

Cloud computing Cloud computing describes a computing model for IT services based on the Internet. The fundamental concept of cloud computing is that processing and data do not reside in a specified, known, or static location. Typically, cloud computing infrastructures provide dynamically scalable, virtualized resources over the Internet. From the user's perspective, cloud computing takes the form of Web-based tools and applications that are accessed via a Web browser.

Cluster See cluster computer.

Cluster computer A collection of linked computers working closely together to “emulate” a single powerful computer. The components of a cluster computer are typically connected through fast local area networks. Clusters are usually deployed to improve performance and availability while being much more cost-effective than single computers of comparable specification.

Common Component Architecture (CCA) A standard for component-based software engineering used in high-performance (also known as scientific) computing.

Computational model Refers to the computer realization (implemented software) of a simulation model.

Discrete-event simulation A simulation technique where the operation of a system is modeled as a series of events. These events are ordered internally with respect to one another (i.e., earlier events occur before later ones) such that the simulation proceeds by removing events from a queue and executing them.
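
A minimal sketch of such an event loop is shown below; the event names and the two-event example are illustrative only and are not taken from any particular simulator.

```python
import heapq
import itertools

def run(initial_events):
    """Minimal discrete-event loop: pop the earliest event, execute it,
    and push any follow-up events it returns (illustrative sketch only)."""
    counter = itertools.count()                 # tie-breaker for equal time stamps
    queue = []
    for time, action in initial_events:
        heapq.heappush(queue, (time, next(counter), action))
    while queue:
        time, _, action = heapq.heappop(queue)  # earliest event first
        for new_time, new_action in action(time):
            heapq.heappush(queue, (new_time, next(counter), new_action))

def departure(t):
    print("departure at", t)
    return []                                   # no follow-up events

def arrival(t):
    print("arrival at", t)
    return [(t + 2.0, departure)]               # schedule a departure 2.0 time units later

run([(0.0, arrival)])
```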

Distributed simulation system An application consisting of distributed simulation modules.

e-infrastructure An electronic computing and communication environment (hardware, software, data and information repositories, services and digital libraries, and communication networks and protocols) and the people and organizational structures needed to support large-scale research and development efforts. e-Infrastructures enable collaborations across geographically dispersed locations by providing shared access to unique or distributed facilities, resources, instruments, services, and communications. e-Infrastructures usually consist of (1) high-performance communication networks connecting the collaborating sites; (2) grid computing to facilitate the sharing of nontrivial amounts of resources such as processing elements, storage capacity, and network bandwidth; (3) supercomputing capabilities to address large-scale computing tasks; (4) databases and information repositories that are shared by the participating organizations; (5) globally operating research and development communities that collaborate to solve highly challenging scientific and engineering problems; and (6) standards to enable the sharing, interoperation, and communication in e-infrastructures.

Evolutionary algorithm A generic population-based optimization algorithm based on iteration and the principles of evolution (variation, selection, and heredity). At each iteration, operators create diversity and a fitness function evaluates individuals within the population based on some characteristics. Individuals correspond to potential solutions, and the fittest individual corresponds to the best solution found.
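
The following sketch shows one possible generation-by-generation loop over bit-string individuals; the fitness function, operators, and parameter values are illustrative assumptions rather than part of the definition.

```python
import random

def evolve(fitness, length=10, pop_size=30, generations=50, mutation_rate=0.1):
    """Toy evolutionary algorithm over bit strings (illustrative sketch only)."""
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Variation: one-point crossover plus mutation create the next generation.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            child = [1 - bit if random.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Example: maximize the number of ones in a bit string.
best = evolve(fitness=sum)
print(best, sum(best))
```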

Federation object model (FOM) Describes the shared objects, attributes, and interactions for a distributed simulation system based on HLA.

FOM See federation object model.

Gene regulatory network A set of genes that interact with each other via the products they express, namely, forms of RNA and proteins. Interacting genes may activate or switch on the expression of another gene, or they may repress or switch off the expression of another gene. Gene regulatory networks are used by cells to control their life-support mechanisms.

GPU See graphical processing unit.

Graphical processing unit (GPU) A specialized microprocessor that handles and accelerates 3-D or 2-D graphic rendering. Owing to their highly parallel structure, GPUs are very effective not only in graphics computing but also in a wide range of other complex computing tasks.

GRAPE See Gravity Pipe.

Gravitational potential softening Softening of the singularity of the Newtonian (or Coulombian) potential Φ between particles. In astrophysical N-body simulations, a Plummer softening is often used, which is Φ(r) = −Gm/√(r² + ε²) for the potential of a single particle of mass m (G is the gravitational constant; r is the distance from the particle’s center; ε is the scaling radius for the size of the particle). It is the true potential of a gas sphere whose density is given by Plummer’s model, which is in fact an n = 5 polytrope. Note that Plummer softening differs from the true Newtonian potential Φ(r) = −Gm/r at all radii. More modern variants of softening, which are used for smoothed particle hydrodynamics, use softening kernels that differ from the Newtonian potential only within a compact region around the particle (of the order of a few ε).
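
As a small numerical sketch (the values of Gm and ε are arbitrary assumptions), the softened potential can be compared with the unsoftened one as follows:

```python
import math

def plummer_potential(r, gm=1.0, eps=0.01):
    """Plummer-softened point-mass potential; finite even at r = 0 (sketch)."""
    return -gm / math.sqrt(r * r + eps * eps)

def newtonian_potential(r, gm=1.0):
    """Unsoftened Newtonian point-mass potential, singular at r = 0."""
    return -gm / r

# The softened potential stays finite as r -> 0 but differs from -Gm/r at every radius.
print(plummer_potential(0.0), plummer_potential(1.0), newtonian_potential(1.0))
```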

Gravity Pipe (GRAPE) A special-purpose computer designed to accelerate the calculation of forces in simulations of interacting particles. GRAPE systems have been used for N-body calculations in astrophysics, molecular dynamics models, the study of magnetism, and many other applications.

Grid computing Grid computing combines computer resources from multiple administrative domains to run complex computing applications. Typically, grid computing environments are less tightly coupled, more heterogeneous, and more geographically dispersed than conventional cluster and supercomputers.

HAND Highly Available Dynamic Deployment Infrastructure; it facilitates dynamic deployment of grid services in the Globus Toolkit.

Hermite scheme Two-point interpolation scheme that uses a function value and its a priori known first derivative in order to obtain fourth-order accuracy. In gravitational N-body simulations, the gravitational force and its time derivative are used to obtain a fourth-order time integration scheme that needs to store particle data at only two points in time, which is convenient for parallelization and memory management. Recently, generalizations have been described that go up to eighth-order time integration, which then requires a priori knowledge of up to the fourth derivative of the gravitational force.
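
As a sketch of the fourth-order variant (the notation is illustrative), positions and velocities are first predicted from the stored force a_0 and its time derivative ȧ_0, x_p = x_0 + v_0Δt + a_0Δt²/2 + ȧ_0Δt³/6 and v_p = v_0 + a_0Δt + ȧ_0Δt²/2; the force a_1 and derivative ȧ_1 evaluated at the predicted values then enter a Hermite-interpolation corrector that raises the overall accuracy of the step to fourth order.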

High-level architecture (HLA) An IEEE standard for distributed computer simulation systems. Communication between simulations is managed by a run-time infrastructure (RTI).

High-performance computing (HPC) HPC uses supercomputers, computer clusters, or other large-scale or distributed computing technology to solve large-scale computing problems. Nowadays, any computer system that is capable of a teraflop computing performance is considered an HPC computer.

HLA See high-level architecture.

HPC See high-performance computing.

Job Usually, a grid job is a binary executable or command to be run in a remote resource (machine). The remote server is sometimes referred to as a “contact” or a “gatekeeper.” When a job is submitted to a remote gatekeeper (server) for execution, it can run in two different modes: batch and nonbatch. Usually, when a job runs in batch mode, the remote submission call will return immediately with a job identifier, which can later be used to obtain the output of the call. In the nonbatch job submission mode, the client will wait for the remote gatekeeper to follow through with the execution and will return the output. Batch mode submission is useful for jobs that take a long time, such as process-intensive computations.

Master–worker pattern The master–worker pattern is used in distributed computing to address easy-to-parallelize large-scale computing tasks. This pattern typically consists of two types of entities: a master and workers. The master initiates and controls the computing process by creating a work set of tasks, putting them in some “shared space,” and then waiting for the tasks to be pulled from the space and completed by the workers. One of the advantages of the master–worker pattern is that it automatically balances the load because the work set is shared and the workers continue to pull work from the set until there is no more work to be done. Algorithms implementing the master–worker pattern usually scale well, provided that the number of tasks is much higher than the number of workers and that the tasks are similar in terms of the amount of time they need to be completed.
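
A minimal sketch of the pattern using Python's standard multiprocessing pool (the task itself is an illustrative placeholder):

```python
from multiprocessing import Pool

def work(task):
    """Worker: complete one independent task (here, a toy computation)."""
    return task * task

if __name__ == "__main__":
    tasks = range(100)                        # the master's shared work set
    with Pool(processes=4) as workers:
        # Workers repeatedly pull one task at a time until the set is exhausted,
        # which balances the load automatically.
        results = list(workers.imap_unordered(work, tasks, chunksize=1))
    print(sum(results))
```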

MCT See Model Coupling Toolkit.

Message Passing Interface (MPI) A message passing parallel programming model in which data are moved from the address space of one process to that of another process through cooperative operations on each process. The MPI standard includes a language-independent message passing library interface specification.
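
A minimal sketch of cooperative point-to-point communication, assuming the mpi4py Python bindings (an assumption; any MPI binding would serve):

```python
# Run with, e.g.:  mpiexec -n 2 python ping.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()                         # identifier of this process

if rank == 0:
    comm.send({"payload": 42}, dest=1, tag=0)  # move data into rank 1's address space
elif rank == 1:
    data = comm.recv(source=0, tag=0)          # cooperative receive on rank 1
    print("rank 1 received", data)
```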

Model Coupling Toolkit (MCT) A set of tools for coupling message passing parallel models.

Model reduction Refers to the approximation of a model aiming at a simplified model that is easier to analyze but preserves the essential properties of the original model.

MPI See Message Passing Interface.

Multiscale Coupling Library and Environment (MUSCLE) A software framework for building simulations according to the complex automata theory.

Multiscale Multiphysics Scientific Environment (MUSE) A software environment for astrophysical applications in which different simulation models of star systems are incorporated into a single framework.

MUSCLE See Multiscale Coupling Library and Environment.

MUSE See Multiscale Multiphysics Scientific Environment.

Object-oriented programming (OOP) A programming paradigm in which data and the operations on that data are encapsulated in an object. Other features of OOP include inheritance, where one object may inherit the data and operations of parent object(s), and polymorphism, where an object can be used in place of its parent object while retaining its own behavior.
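
A short Python sketch of these ideas (the class names are illustrative):

```python
class Shape:
    """Parent object encapsulating data (a name) and an operation on it."""
    def __init__(self, name):
        self.name = name
    def area(self):
        raise NotImplementedError

class Circle(Shape):          # inheritance: Circle reuses Shape's data and operations
    def __init__(self, radius):
        super().__init__("circle")
        self.radius = radius
    def area(self):           # polymorphism: its own behavior behind the shared interface
        return 3.141592653589793 * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        super().__init__("square")
        self.side = side
    def area(self):
        return self.side ** 2

# A Circle or Square can be used wherever a Shape is expected, yet keeps its own behavior.
for shape in (Circle(1.0), Square(2.0)):
    print(shape.name, shape.area())
```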

Ordinary differential equation In chemical kinetic theory, the interactions between species are commonly expressed using ordinary differential equations (ODEs). An ODE is a relation that contains functions of only one independent variable (typically t) and one or more of its derivatives with respect to that variable. The order of an ODE is determined by the highest derivative it contains (e.g., a first-order ODE involves only the first derivative of the function). The equation dx/dt = f(x(t), t) is an example of a first-order ODE involving the independent variable t, a function of this variable, x(t), and a derivative of this function, dx/dt. Since a derivative specifies a rate of change, such an equation states how a function changes but does not specify the function itself. Given sufficient initial conditions, various methods are available to determine the unknown function. The difference between ordinary differential equations and partial differential equations is that partial differential equations involve partial derivatives with respect to several variables.
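
One of the many available methods is the forward Euler scheme; the sketch below applies it to the illustrative equation dx/dt = −x (an assumption chosen only because its exact solution, e^(−t), is easy to check):

```python
def euler(f, x0, t0, t_end, dt):
    """Forward Euler: advance dx/dt = f(x, t) from the initial condition x(t0) = x0."""
    t, x = t0, x0
    while t < t_end:
        x += dt * f(x, t)   # follow the stated rate of change over one small step
        t += dt
    return x

# dx/dt = -x with x(0) = 1; the result should be close to exp(-1) ~ 0.3679.
print(euler(lambda x, t: -x, x0=1.0, t0=0.0, t_end=1.0, dt=0.001))
```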

Parallel discrete-event simulation A distributed version of discrete-event simulation in which the events and their execution may occur in parallel across machines or processes. This usually includes some mechanism(s) for synchronizing the execution of events across machines or processes.

Parameter sweep A technique to explore or characterize a process, procedure, or function by means of a carefully generated set of input parameter combinations or configurations. The term parameter may cover a wide range of concepts, including structured data files, numeric or symbolic values, vectors or matrices, or executable models or programs. For example, a parameter sweep experiment may generate suitable inputs to explore a cost function or to create the energy surface for a 3-D graph. Sensitivity analysis could be viewed as a form of parameter sweep experiment in which inputs are systematically varied to analyze how sensitively a model responds to variations of individual or groups of state variables. In contrast to parameter estimation and parameter optimization techniques, the parameter sweep procedure does not normally incorporate feedback from the output of a process or model to iteratively steer and adapt the generation of parameter combinations.
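
A minimal sketch of a sweep over a full grid of two numeric parameters (the cost function and parameter ranges are illustrative assumptions):

```python
from itertools import product

def cost(a, b):
    """Illustrative function to be explored; there is no feedback from its output."""
    return (a - 1.0) ** 2 + (b + 2.0) ** 2

a_values = [0.0, 0.5, 1.0, 1.5]      # carefully generated input combinations:
b_values = [-3.0, -2.0, -1.0]        # here, the full Cartesian product of two ranges
for a, b in product(a_values, b_values):
    print(f"a={a}, b={b} -> cost={cost(a, b)}")
```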

Partial differential equation Similar to an ordinary differential equation except that it involves functions with more than one independent variable.

Regularization Classical method of celestial mechanics to transform the equations of motion for the two-body problem into a harmonic oscillator problem, removing the coordinate singularity at zero separation and allowing analytic continuation of solutions through the zero-separation point. Regularization is commonly used in some direct N-body simulation codes to allow a more efficient integration of perturbed dominant pairs of particles at nonzero separation.
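
As a sketch of the planar (Levi–Civita) case: writing the relative coordinate of the pair as a complex number z, substituting z = u², and transforming the time variable by dt = r dτ (with r = |z|) converts the bound two-body equation of motion into a harmonic-oscillator equation for u, which has no singularity at r = 0; the Kustaanheimo–Stiefel (KS) transformation extends the same idea to three dimensions.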

RTI See run-time infrastructure.

Run-time Infrastructure (RTI) A communication bus used by HLA simulation modules.

SAN See storage area network.

SCA See Service Component Architecture.

Sensitivity analysis An important tool to study the dependence of systems on their parameters. Sensitivity analysis helps to identify those parameters that have a significant impact on the system output and to capture the essential characteristics of the system. Sensitivity analysis is particularly useful for complex biological networks with a large number of variables and parameters.
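
A common worked form (illustrative, not specific to any system) is the normalized sensitivity coefficient S_i = (∂y/∂p_i)(p_i/y), which measures the relative change of an output y caused by a relative change of a parameter p_i; in practice the derivative is often approximated by a finite difference such as (y(p_i + δ) − y(p_i))/δ for a small perturbation δ.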

Service A network-enabled entity that provides a specific capability, for example, the ability to move files, create processes, or verify access rights. A service is defined in terms of the protocol one uses to interact with it and the behavior expected in response to various protocol message exchanges (i.e., service = protocol + behavior). A service may or may not be persistent (i.e., always available); be able to detect and/or recover from certain errors; run with privileges; or have a distributed implementation for enhanced scalability. If variants are possible, then discovery mechanisms that allow a client to determine the properties of a particular instantiation of a service are important.

Service Component Architecture (SCA) A set of specifications that describe a model for building applications and systems using a service-oriented architecture.

Service-oriented architecture (SOA) An SOA is a collection of services that communicate with each other, either by simple data passing or by coordinating an activity across multiple services. Some means of connecting services to each other is needed. The main advantages of SOA include loose coupling, ease and flexibility of reuse, scalability, interoperability, and service abstraction from the underlying technology.

Simulation Interoperability Standards Organization (SISO) An international organization dedicated to the promotion of modeling and simulation interoperability and reuse for the benefit of a broad range of modeling and simulation communities.

Simulation model A formal (mathematical) description of a natural phenomenon or an engineering artifact used to simulate the behavior of the phenomenon or artifact. To facilitate efficient simulation, a simulation model is typically implemented as a computer program or software (referred to as a computational model).

SISO See Simulation Interoperability Standards Organization.

SOA See service-oriented architecture.

Storage area network (SAN) A storage area network is a special type of network that is separate from LANs and WANs. A SAN is usually dedicated to connecting all the storage resources attached to various servers.

Supercomputer A supercomputer is a computer at the leading edge of current processing capacity. Supercomputers are typically used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling, physical simulations, and so on. In the context of supercomputing, the distinction between capability computing and capacity computing is becoming relevant. Capability computing is normally concerned with maximum computing power to solve a large problem in the shortest amount of time. Capacity computing, on the other hand, is concerned with efficient, cost-effective computing power to solve somewhat large problems or many small problems or to prepare for a run on a capability system.

Virtualization In computing, virtualization refers to the creation of a virtual (rather than actual) version of something such as an operating system (OS), a server, a storage device, or network resources. The usual goal of virtualization is to centralize administrative tasks while improving scalability and workload management. Virtualization is part of a trend in which computer processing power is seen as a utility that clients can pay for only as needed. Some common virtualizations include the following: Hardware virtualization refers to the execution of software in an environment separated from the underlying hardware resources. Memory virtualization refers to the aggregation of RAM resources from networked systems into a single memory pool. Storage virtualization refers to the separation of logical storage from physical storage. Data virtualization refers to the presentation of data as an abstract layer, independent of underlying database systems, structures, and storage. Database virtualization refers to the decoupling of the database layer, which lies between the storage and application layers within the application stack. Network virtualization refers to the creation of a virtualized network addressing space within or across network subnets. Software virtualization: (1) OS-level virtualization refers to the hosting of multiple virtualized environments within a single OS instance; (2) application virtualization refers to the hosting of individual applications in an environment separated from the underlying OS; and (3) virtual machine refers to a software implementation of a computer that executes programs like a real computer.

Virtual machine A software implementation of a computer that executes programs like a physical machine. Software running inside a virtual machine is limited to the resources and abstractions provided by the virtual machine—it cannot cross the boundaries of the “virtual world” established by the virtual machine. Virtual machines are categorized into two major types: (1) a system virtual machine, which provides a complete system platform that supports the execution of a complete operating system, and (2) a process virtual machine, which is designed to run a single program, which means that it supports a single process.

Virtual organization (VO) Refers to a set of individuals and/or institutions related to one another by some level of trust and by rules for sharing resources, services, and applications.

VO See virtual organization.

Web service (WS) Refers to a Web-based application that uses open, XML-based standards and transport protocols to exchange data with clients. A WS is designed to support interoperable machine-to-machine interaction over a network.

Web Service Resource Framework (WSRF) A Web service extension providing a set of operations that Web services may implement to become stateful.

WS See Web service.

WSRF See Web Service Resource Framework.
