Parallel and Distributed Computational Fluid Dynamics: Experimental Results and Challenges
NASA Technical Reports Server (NTRS)
Djomehri, Mohammad Jahed; Biswas, R.; VanderWijngaart, R.; Yarrow, M.
2000-01-01
This paper describes several results of parallel and distributed computing using a large-scale production flow solver program. A coarse-grained parallelization, based on clustering of discretization grids combined with partitioning of large grids for load balancing, is presented. An assessment is given of its performance on distributed and distributed-shared memory platforms using large-scale scientific problems. An experiment with this solver, adapted to a Wide Area Network execution environment, is presented. We also give a comparative performance assessment of computation and communication times on both the tightly and loosely coupled machines.
Application of a distributed network in computational fluid dynamic simulations
NASA Technical Reports Server (NTRS)
Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.; Deshpande, Ashish
1994-01-01
A general-purpose 3-D, incompressible Navier-Stokes algorithm is implemented on a network of concurrently operating workstations using parallel virtual machine (PVM) and compared with its performance on a CRAY Y-MP and on an Intel iPSC/860. The problem is relatively computationally intensive, and has a communication structure based primarily on nearest-neighbor communication, making it ideally suited to message passing. Such problems are frequently encountered in computational fluid dynamics (CFD), and their solution is increasingly in demand. The communication structure is explicitly coded in the implementation to fully exploit the regularity in message passing in order to produce a near-optimal solution. Results are presented for various grid sizes using up to eight processors.
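The nearest-neighbor communication pattern described above can be sketched with a 1-D Jacobi relaxation over a decomposed grid. This is a minimal illustration, not the paper's solver: the worker count, grid size, and function names are hypothetical, and real PVM message passing is replaced by direct reads of neighboring subdomains.

```python
import numpy as np

def jacobi_step_distributed(subdomains):
    """One Jacobi relaxation sweep over a 1-D grid split into subdomains.

    Each subdomain first exchanges one boundary value with its left and
    right neighbors (the nearest-neighbor message pattern), then updates
    its own points independently, as each worker would in parallel.
    """
    # "Halo exchange": collect the ghost values each subdomain needs.
    halos = []
    for i, sub in enumerate(subdomains):
        left = subdomains[i - 1][-1] if i > 0 else 0.0   # Dirichlet 0 at ends
        right = subdomains[i + 1][0] if i < len(subdomains) - 1 else 0.0
        halos.append((left, right))
    # Local update: each point becomes the average of its two neighbors.
    updated = []
    for sub, (left, right) in zip(subdomains, halos):
        padded = np.concatenate(([left], sub, [right]))
        updated.append(0.5 * (padded[:-2] + padded[2:]))
    return updated

# Split a 16-point grid across 4 hypothetical workers and relax toward
# the zero-boundary Laplace solution (which is identically zero).
grid = np.linspace(0.0, 1.0, 16)
parts = np.array_split(grid, 4)
for _ in range(200):
    parts = jacobi_step_distributed(parts)
result = np.concatenate(parts)
```

Only the two halo values cross subdomain boundaries per sweep, which is why such stencil codes map so naturally onto message passing.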
Dynamic resource allocation scheme for distributed heterogeneous computer systems
NASA Technical Reports Server (NTRS)
Liu, Howard T. (Inventor); Silvester, John A. (Inventor)
1991-01-01
This invention relates to resource allocation in computer systems and, more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating the jobs queued up for busy nodes to idle or less-busy nodes. In accordance with the algorithm (SIDA for short), load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node is idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
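The receiver-initiated transfer rule can be sketched as follows. This is an illustrative toy, not the patented SIDA implementation: the threshold value, node names, and queue lengths are assumptions.

```python
def pull_job(queues, receiver, high_threshold=5):
    """Receiver-initiated load sharing: called when `receiver` finishes a
    job while below its own threshold, or when its wakeup timer fires.
    It pulls one job from the most heavily burdened node, provided that
    node's queue exceeds the high threshold; otherwise no transfer
    happens, so no extra overhead is imposed when all nodes are busy."""
    donor = max(queues, key=queues.get)
    if donor != receiver and queues[donor] > high_threshold:
        queues[donor] -= 1
        queues[receiver] += 1
        return donor
    return None  # no node is burdened enough; decline to transfer

queues = {"A": 9, "B": 1, "C": 4}
donor = pull_job(queues, "B")   # B just went idle and pulls a job from A
```

Because the lightly loaded node initiates the pull, heavily loaded nodes never spend cycles searching for help, which matches the patent's stated goal for heavily loaded conditions.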
Evidence for complex, collective dynamics and emergent, distributed computation in plants.
Peak, David; West, Jevin D; Messinger, Susanna M; Mott, Keith A
2004-01-27
It has been suggested that some biological processes are equivalent to computation, but quantitative evidence for that view is weak. Plants must solve the problem of adjusting stomatal apertures to allow sufficient CO2 uptake for photosynthesis while preventing excessive water loss. Under some conditions, stomatal apertures become synchronized into patches that exhibit richly complicated dynamics, similar to behaviors found in cellular automata that perform computational tasks. Using sequences of chlorophyll fluorescence images from leaves of Xanthium strumarium L. (cocklebur), we quantified spatial and temporal correlations in stomatal dynamics. Our values are statistically indistinguishable from those of the same correlations found in the dynamics of automata that compute. These results are consistent with the proposition that a plant solves its optimal gas exchange problem through an emergent, distributed computation performed by its leaves. PMID:14732685
Dynamic Load Balancing for Adaptive Computations on Distributed-Memory Machines
NASA Technical Reports Server (NTRS)
1999-01-01
Dynamic load balancing is central to adaptive mesh-based computations on large-scale parallel computers. The principal investigator has investigated various issues of the dynamic load balancing problem under NASA JOVE and JAG grants. The major accomplishments of the project are two graph partitioning algorithms and a load balancing framework. The S-HARP dynamic graph partitioner is the fastest dynamic graph partitioner reported to date. It can partition a graph of over 100,000 vertices in 0.25 seconds on a 64-processor Cray T3E distributed-memory multiprocessor while maintaining scalability of over 16-fold speedup. Other widely used dynamic graph partitioners take a second or more while giving low scalability of a few-fold speedup on 64 processors. These results have been published in journals and peer-reviewed flagship conferences.
Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems
NASA Technical Reports Server (NTRS)
Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.
2000-01-01
The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many subdomains called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computation. In environments with computers of different architectures, operating systems, CPU speeds, memory sizes, loads, and network speeds, balancing the loads and managing the communication between processors become crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running the NASA-based code ADPAC to demonstrate the developed tools for dynamic load balancing.
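The core scheduling idea, distributing blocks across processors of unequal speed so that estimated finish times stay balanced, can be sketched with a greedy heuristic. This is a simplified, static stand-in for the paper's dynamic tools; the block costs, processor speeds, and function names are illustrative.

```python
def assign_blocks(block_costs, proc_speeds):
    """Greedy longest-block-first assignment: each block goes to the
    processor whose estimated finish time (accumulated load) stays
    smallest, where a block's time on processor i is cost / speed_i."""
    loads = [0.0] * len(proc_speeds)
    assignment = {}
    for block, cost in sorted(enumerate(block_costs), key=lambda bc: -bc[1]):
        proc = min(range(len(proc_speeds)),
                   key=lambda i: loads[i] + cost / proc_speeds[i])
        loads[proc] += cost / proc_speeds[proc]
        assignment[block] = proc
    return assignment, loads

# Five blocks on two heterogeneous processors (one twice as fast).
assignment, loads = assign_blocks([4.0, 3.0, 2.0, 2.0, 1.0], [2.0, 1.0])
```

A dynamic balancer would rerun such an assignment as measured speeds and loads drift during the hours-long computation.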
Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.
2001-04-09
The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in those clinics that have only single-detector systems and thus are not able to perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projection samples truncate the patient's body, this can result in additional image artifacts. To overcome these sources of bias in conventional image based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of
Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus
NASA Technical Reports Server (NTRS)
Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle
1999-01-01
This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically-distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.
Chambers, F.B.; Duce, D.A.; Jones, G.P.
1984-01-01
CONTENTS: The Dataflow Approach: Fundamentals of dataflow. Architecture and performance. Assembler level programming. High level dataflow programming. Declarative systems: Functional programming. Logic programming and Prolog. The "language first" approach. Towards a successor to von Neumann. Loosely-coupled systems: Architectures. Communications. Distributed filestores. Mechanisms for distributed control. Distributed operating systems. Programming languages. Closely-coupled systems: Architecture. Programming languages. Run-time support. Development aids. Cyba-M. Polyproc. Modeling and verification: Using algebra for concurrency. Reasoning about concurrent systems. Each chapter includes references. Index.
A distributed, dynamic, parallel computational model: the role of noise in velocity storage
Merfeld, Daniel M.
2012-01-01
Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic “real-time” calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique, “particle filtering,” that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
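The filtering scheme described above can be illustrated with a generic bootstrap particle filter tracking angular velocity through a first-order (leaky) velocity-storage stage. This is a sketch, not the paper's model: the time constant, noise levels, and rotation profile are illustrative assumptions, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(measurements, n=500, tau=16.0, dt=0.1,
                    proc_sigma=0.05, meas_sigma=0.5):
    """Bootstrap particle filter: many noisy copies of a leaky-integrator
    velocity state are propagated in parallel, weighted against each
    noisy 'afferent' measurement, and resampled. The particle cloud's
    spread implicitly sets how strongly measurements correct the state."""
    particles = rng.normal(measurements[0], 1.0, n)
    estimates = []
    for y in measurements:
        # Propagate through the leaky integrator, plus process noise.
        particles = particles * np.exp(-dt / tau) + rng.normal(0.0, proc_sigma, n)
        # Weight by the likelihood of the noisy measurement.
        w = np.exp(-0.5 * ((y - particles) / meas_sigma) ** 2)
        w /= w.sum()
        particles = rng.choice(particles, size=n, p=w)  # resample
        estimates.append(particles.mean())
    return np.array(estimates)

# Post-rotatory decay of angular velocity, observed through afferent noise.
true_v = 10.0 * np.exp(-np.arange(100) * 0.1 / 16.0)
obs = true_v + rng.normal(0.0, 0.5, 100)
est = particle_filter(obs)
```

With many particles this filter approaches the Kalman filter for this linear-Gaussian toy case, which mirrors the paper's optimality comparison.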
NASA Astrophysics Data System (ADS)
Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G.
2005-04-01
In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore, the algorithm was applied to in vivo data. In five pigs, sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method. The influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes are more likely characterized by discrete TCs, whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately compared to discrete TCs.
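The TC analysis can be illustrated by fitting a step response against a basis of exponential relaxations. This is a sketch with synthetic data: the candidate TC grid, amplitudes, and time constants are assumptions, and the paper fits a continuous distribution rather than this small discrete grid.

```python
import numpy as np

t = np.linspace(0.0, 5.0, 200)   # time after the pressure step-up, s

def fgc_response(amplitudes, taus, t):
    """Fractional gas content after a pressure step, modeled as a
    superposition of exponentially relaxing compartments (discrete TCs)."""
    return sum(a * (1.0 - np.exp(-t / tau)) for a, tau in zip(amplitudes, taus))

# Synthetic "healthy lung" signal with two discrete time constants.
signal = fgc_response([0.7, 0.3], [0.3, 1.5], t)

# Recover compartment amplitudes on a grid of candidate TCs by linear
# least squares; amplitudes of absent TCs should come back near zero.
candidate_taus = np.array([0.3, 0.6, 1.5, 3.0])
basis = np.stack([1.0 - np.exp(-t / tau) for tau in candidate_taus], axis=1)
amps, *_ = np.linalg.lstsq(basis, signal, rcond=None)
```

A continuous-distribution fit replaces this small grid with a dense one plus regularization, since exponential bases become severely ill-conditioned as the grid is refined.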
FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.
Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora
2013-09-01
In this paper, we present a distributed computing system, called DCMARK, aimed at solving partial differential equations at the basis of many investigation fields, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network paradigm, which allows us to divide the differential equation system solving into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test the present idea, we choose to implement DCMARK on a single FPGA, designing the single processor in order to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to study the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it allows us to reduce the processing time more than other similar systems in the literature. To ensure a high level of reconfigurability, we design a compact system on programmable chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results. PMID:24808576
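The kind of computation each cell performs can be sketched with a serial finite-difference KdV solver over 200 cells. This is an illustrative reference implementation, not the FPGA design: the domain size, time step, and stencils are assumptions; note that each cell only reads its four nearest neighbors, the locality the CNN architecture exploits.

```python
import numpy as np

N, L = 200, 50.0          # 200 cells, matching the solver size in the paper
dx = L / N
dt = 1e-4
x = np.arange(N) * dx

def kdv_step(u):
    """One explicit finite-difference step of the KdV equation
    u_t + 6 u u_x + u_xxx = 0 with periodic boundaries, using central
    differences for u_x and a five-point stencil for u_xxx."""
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxxx = (np.roll(u, -2) - 2 * np.roll(u, -1)
            + 2 * np.roll(u, 1) - np.roll(u, 2)) / (2 * dx**3)
    return u - dt * (6 * u * ux + uxxx)

# A single soliton: u = (c/2) sech^2(sqrt(c)/2 (x - x0)), speed c.
c, x0 = 1.0, 12.5
u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - x0)) ** 2
mass0 = u.sum() * dx      # discretely conserved by this periodic scheme
for _ in range(2000):
    u = kdv_step(u)
mass = u.sum() * dx
```

On the FPGA, each of the 200 cells evaluates its own stencil concurrently, so the wall-clock time per step is independent of N.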
NASA Astrophysics Data System (ADS)
Hopkins, Paul; Fortini, Andrea; Archer, Andrew J.; Schmidt, Matthias
2010-12-01
We describe a test particle approach based on dynamical density functional theory (DDFT) for studying the correlated time evolution of the particles that constitute a fluid. Our theory provides a means of calculating the van Hove distribution function by treating its self and distinct parts as the two components of a binary fluid mixture, with the "self " component having only one particle, the "distinct" component consisting of all the other particles, and using DDFT to calculate the time evolution of the density profiles for the two components. We apply this approach to a bulk fluid of Brownian hard spheres and compare to results for the van Hove function and the intermediate scattering function from Brownian dynamics computer simulations. We find good agreement at low and intermediate densities using the very simple Ramakrishnan-Yussouff [Phys. Rev. B 19, 2775 (1979)] approximation for the excess free energy functional. Since the DDFT is based on the equilibrium Helmholtz free energy functional, we can probe a free energy landscape that underlies the dynamics. Within the mean-field approximation we find that as the particle density increases, this landscape develops a minimum, while an exact treatment of a model confined situation shows that for an ergodic fluid this landscape should be monotonic. We discuss possible implications for slow, glassy, and arrested dynamics at high densities.
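For a free (non-interacting) Brownian fluid, the self part of the van Hove function can be generated directly from simulated dynamics and checked against its known Gaussian form. This is a sketch only: the paper's system is interacting hard spheres, and the dimension (1-D) and all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdamped Brownian dynamics of non-interacting particles (1-D here
# for brevity; the paper treats interacting hard spheres).
D, dt, steps, n = 1.0, 1e-3, 1000, 20000
x = np.zeros(n)                      # displacement of each particle
for _ in range(steps):
    x += np.sqrt(2.0 * D * dt) * rng.normal(size=n)

# Self part of the van Hove function G_s(r, t): the probability density
# of a particle's displacement r after time t. For free diffusion it is
# a Gaussian of variance 2*D*t.
t = steps * dt
G_s, edges = np.histogram(x, bins=60, range=(-6.0, 6.0), density=True)
var = x.var()
```

In the DDFT picture, this one tagged particle is the "self" component of a binary mixture whose density profile spreads in time exactly as this histogram does.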
NASA Astrophysics Data System (ADS)
Alman, D. A.; Ruzic, D. N.; Brooks, J. N.
2001-10-01
Reflection coefficients of carbon and hydrocarbon molecules have been calculated with a molecular dynamics code. The code uses the Brenner hydrocarbon potential, an empirical many-body potential that can model the chemical bonding in small hydrocarbon molecules and graphite surfaces. A variety of incident energies and angles have been studied. Typical results for carbon show reflection coefficients of about 0.4 at thermal energies, decreasing to a minimum of 0.15 at 10-20 eV, and then increasing again. Distributed computing is used to distribute the work among 10-20 desktop PCs in the laboratory. The system consists of a client application run on all of the PCs and a single server machine that distributes work and compiles the results sent back from the clients. The client-server software is written in Java and requires no commercial software packages. Thus, the MD code benefits from multiprocessor-like speed-up at no additional cost by using the idle CPU cycles that would otherwise be wasted. These calculations represent an important improvement to the WBC code, which has been used to model surface erosion, core plasma contamination, and tritium codeposition in many fusion design studies and experiments.
NASA Astrophysics Data System (ADS)
Zhang, Liang; Chill, Samuel T.; Henkelman, Graeme
2015-11-01
A distributed replica dynamics (DRD) method is proposed to calculate rare-event molecular dynamics using distributed computational resources. Similar to Voter's parallel replica dynamics (PRD) method, the dynamics of independent replicas of the system are calculated on different computational clients. In DRD, each replica runs molecular dynamics from an initial state for a fixed simulation time and then reports information about the trajectory back to the server. A simulation clock on the server accumulates the simulation time of each replica until one reports a transition to a new state. Subsequent calculations are initiated from within this new state and the process is repeated to follow the state-to-state evolution of the system. DRD is designed to work with asynchronous and distributed computing resources in which the clients may not be able to communicate with each other. Additionally, clients can be added or removed from the simulation at any point in the calculation. Even with heterogeneous computing clients, we prove that the DRD method reproduces the correct probability distribution of escape times. We also show this correspondence numerically; molecular dynamics simulations of Al(100) adatom diffusion using PRD and DRD give consistent exponential distributions of escape times. Finally, we discuss guidelines for choosing the optimal number of replicas and replica trajectory length for the DRD method.
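The correctness argument, that accumulating fixed-length replica segments on a server clock until the first transition reproduces an exponential escape-time distribution, can be checked numerically. This is a toy in which transitions are drawn as a Poisson process; the rate, segment length, and replica count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def drd_escape_time(rate, n_replicas, segment):
    """Serve fixed-length trajectory segments to replicas and accumulate
    their simulation time on a server clock until a segment contains a
    transition; return the accumulated time at that transition.
    Transitions follow a Poisson process of the given rate, the
    first-order-kinetics regime in which PRD/DRD are exact."""
    clock = 0.0
    while True:
        for _ in range(n_replicas):           # clients report in some order
            t_event = rng.exponential(1.0 / rate)
            if t_event < segment:             # this segment saw a transition
                return clock + t_event
            clock += segment                  # transition-free time is banked

times = np.array([drd_escape_time(rate=0.5, n_replicas=8, segment=0.3)
                  for _ in range(4000)])
```

By memorylessness, banking transition-free segments before the first event leaves the escape time exactly exponential, so the sample mean and standard deviation should both approach 1/rate = 2.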
Devereux, Mike; Raghunathan, Shampa; Fedorov, Dmitri G; Meuwly, Markus
2014-10-14
A truncated multipole expansion can be re-expressed exactly using an appropriate arrangement of point charges. This means that groups of point charges that are shifted away from nuclear coordinates can be used to achieve accurate electrostatics for molecular systems. We introduce a multipolar electrostatic model formulated in this way for use in computationally efficient multipolar molecular dynamics simulations with well-defined forces and energy conservation in NVE (constant number-volume-energy) simulations. A framework is introduced to distribute torques arising from multipole moments throughout a molecule, and a refined fitting approach is suggested to obtain atomic multipole moments that are optimized for accuracy and numerical stability in a force field context. The formulation of the charge model is outlined as it has been implemented into CHARMM, with application to test systems involving H2O and chlorobenzene. As well as ease of implementation and computational efficiency, the approach can be used to provide snapshots for multipolar QM/MM calculations in QM/MM-MD studies and easily combined with a standard point-charge force field to allow mixed multipolar/point charge simulations of large systems. PMID:26588121
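The underlying idea, that a truncated multipole expansion can be reproduced by shifted point charges, is easy to check for the lowest nontrivial moment: two charges ±q separated by d with q·d = μ recover the ideal dipole potential in the far field. This is a sketch in Gaussian-like units; the geometry and values are illustrative, not the paper's charge arrangements.

```python
import numpy as np

def dipole_potential_exact(mu, r_vec):
    """Electrostatic potential of an ideal point dipole mu along z."""
    r = np.linalg.norm(r_vec)
    return mu * r_vec[2] / r**3

def dipole_potential_charges(mu, d, r_vec):
    """The same dipole re-expressed as two point charges +-q placed at
    +-d/2 along z with q*d = mu: shifted point charges standing in for
    a multipole moment, as in the abstract."""
    q = mu / d
    offset = np.array([0.0, 0.0, d / 2.0])
    return (q / np.linalg.norm(r_vec - offset)
            - q / np.linalg.norm(r_vec + offset))

r_vec = np.array([1.0, 1.0, 2.0])        # observation point
exact = dipole_potential_exact(1.0, r_vec)
approx = dipole_potential_charges(1.0, 0.05, r_vec)
```

Shrinking the separation d (or moving the observation point further out) drives the residual error, which enters at the octopole order, toward zero, while keeping every interaction a simple charge-charge term with well-defined forces.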
Simulations of ozone distributions in an aircraft cabin using computational fluid dynamics
NASA Astrophysics Data System (ADS)
Rai, Aakash C.; Chen, Qingyan
2012-07-01
Ozone is a major pollutant of indoor air. Many studies have demonstrated the adverse health effects of ozone and of the byproducts generated by ozone-initiated reactive chemistry in an indoor environment. This study developed a Computational Fluid Dynamics (CFD) model to predict the ozone distribution in an aircraft cabin. The model was used to simulate the distribution of ozone in an aircraft cabin mockup for the following cases: (1) empty cabin; (2) cabin with seats; (3) cabin with soiled T-shirts; (4) occupied cabin with simple human geometry; and (5) occupied cabin with detailed human geometry. The agreement was generally good between the CFD results and the available experimental data. The ozone removal rate, deposition velocity, retention ratio, and breathing zone levels were well predicted in those cases. The CFD model predicted breathing zone ozone concentration to be 77-99% of the average cabin ozone concentration depending on the seat location. The ozone concentration at the breathing zone in the cabin environment can better assess the health risk to passengers and can be used to develop strategies for a healthier cabin environment.
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
The following reports are presented on this project: A first year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; A second year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.
Davidson, Kyle M; Sushil, Shrinivasan; Eggleton, Charles D; Marten, Mark R
2003-01-01
Nonideal mixing in many fermentation processes can lead to concentration gradients in nutrients, oxygen, and pH, among others. These gradients are likely to influence cellular behavior, growth, or yield of the fermentation process. Frequency of exposure to these gradients can be defined by the circulation time distribution (CTD). There are few examples of CTDs in the literature, and experimental determination of CTD is at best a challenging task. The goal in this study was to determine whether computational fluid dynamics (CFD) software (FLUENT 4 and MixSim) could be used to characterize the CTD in a single-impeller mixing tank. To accomplish this, CFD software was used to simulate flow fields in three different mixing tanks by meshing the tanks with a grid of elements and solving the Navier-Stokes equations using the kappa-epsilon turbulence model. Tracer particles were released from a reference zone within the simulated flow fields, particle trajectories were simulated for 30 s, and the time taken for these tracer particles to return to the reference zone was calculated. CTDs determined by experimental measurement, which showed distinct features (log-normal, bimodal, and unimodal), were compared with CTDs determined using CFD simulation. Reproducing the signal processing procedures used in each of the experiments, CFD simulations captured the characteristic features of the experimentally measured CTDs. The CFD data also suggest new signal processing procedures that would predict unimodal CTDs for all three tanks. PMID:14524709
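The simulation procedure, releasing tracers from a reference zone and timing their first return, can be sketched with a one-dimensional stochastic circulation loop in place of a CFD flow field. This is a toy model: the loop length, mean speed, noise level, and zone width are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def return_time(loop_length=1.0, zone_width=0.05, v=0.1, sigma=0.01, dt=0.1):
    """Advect one tracer around a circulation loop (mean speed v plus
    turbulent jitter sigma) until it re-enters the reference zone
    [0, zone_width); returns the elapsed circulation time."""
    pos, t = zone_width, 0.0      # released at the zone's downstream edge
    while True:
        pos = (pos + v * dt + sigma * np.sqrt(dt) * rng.normal()) % loop_length
        t += dt
        if pos < zone_width:      # tracer is back in the reference zone
            return t

# Build the circulation time distribution from many tracer trajectories.
times = np.array([return_time() for _ in range(500)])
ctd, edges = np.histogram(times, bins=20, density=True)
```

The histogram of return times is the CTD itself; richer flow fields (recirculation zones, dead volumes) are what produce the log-normal and bimodal shapes seen experimentally.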
Cardea: Providing Support for Dynamic Resource Access in a Distributed Computing Environment
NASA Technical Reports Server (NTRS)
Lepro, Rebekah
2003-01-01
The environment framing the modern authorization process spans domains of administration, relies on many different authentication sources, and manages complex attributes as part of the authorization process. Cardea facilitates dynamic access control within this environment as a central function of an inter-operable authorization framework. The system departs from the traditional authorization model by separating the authentication and authorization processes, distributing the responsibility for authorization data and allowing collaborating domains to retain control over their implementation mechanisms. Critical features of the system architecture and its handling of the authorization process differentiate the system from existing authorization components by addressing common needs not adequately addressed by existing systems. Continuing system research seeks to enhance the implementation of the current authorization model employed in Cardea, increase the robustness of current features, further the framework for establishing trust and promote interoperability with existing security mechanisms.
NASA Technical Reports Server (NTRS)
1989-01-01
An overview of computational fluid dynamics (CFD) activities at the Langley Research Center is given. The role of supercomputers in CFD research, algorithm development, multigrid approaches to computational fluid flows, aerodynamics computer programs, computational grid generation, turbulence research, and studies of rarefied gas flows are among the topics that are briefly surveyed.
Coping with distributed computing
Cormell, L.
1992-09-01
The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent: he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given.
Kuttler, Andreas; Dimke, Thomas; Kern, Steven; Helmlinger, Gabriel; Stanski, Donald; Finelli, Luca A
2010-12-01
We introduce how biophysical modeling in pharmaceutical research and development, combining physiological observations at the tissue, organ and system level with selected drug physicochemical properties, may contribute to a greater and non-intuitive understanding of drug pharmacokinetics and therapeutic design. Based on rich first-principle knowledge combined with experimental data at both conception and calibration stages, and leveraging our insights on disease processes and drug pharmacology, biophysical modeling may provide a novel and unique opportunity to interactively characterize detailed drug transport, distribution, and subsequent therapeutic effects. This innovative approach is exemplified through a three-dimensional (3D) computational fluid dynamics model of the spinal canal motivated by questions arising during pharmaceutical development of one molecular therapy for spinal cord injury. The model was based on actual geometry reconstructed from magnetic resonance imaging data, subsequently transformed into a parametric 3D geometry and a corresponding finite-volume representation. With dynamics controlled by transient Navier-Stokes equations, the model was implemented in a commercial multi-physics software environment established in the automotive and aerospace industries. While predictions were performed in silico, the underlying biophysical models relied on multiple sources of experimental data and knowledge from scientific literature. The results have provided insights into the primary factors that can influence the intrathecal distribution of drug after lumbar administration. This example illustrates how the approach connects the causal chain underlying drug distribution, starting with the technical aspect of drug delivery systems, through physiology-driven drug transport, then eventually linking to tissue penetration, binding, residence, and ultimately clearance. Currently supporting our drug development projects with an improved understanding of systems
Computer security in DOE distributed computing systems
Hunteman, W.J.
1990-01-01
The modernization of DOE facilities amid limited funding is creating pressure on DOE facilities to find innovative approaches to their daily activities. Distributed computing systems are becoming cost-effective solutions for improving productivity. This paper defines and describes typical distributed computing systems in the DOE. The special computer security problems present in distributed computing systems are identified and compared with those of traditional computer systems. The existing DOE computer security policy supports only basic networks and traditional computer systems and does not address distributed computing systems. A review of the existing policy requirements is followed by an analysis of the policy as it applies to distributed computing systems. Suggested changes in the DOE computer security policy are identified and discussed. The long lead time in updating DOE policy will require guidelines for applying the existing policy to distributed systems. Some possible interim approaches are identified and discussed. 2 refs.
NASA Astrophysics Data System (ADS)
Yohana, Eflita; Yulianto, Mohamad Endy; Kwang-Hwang, Choi; Putro, Bondantio; Yohanes Aditya W., A.
2015-12-01
The study of humidity distribution inside a room has been widely conducted using computational fluid dynamics (CFD). Here, the simulation employed inputs from an experiment on air humidity reduction in a sample house. The liquid desiccant CaCl2 was used in this study to absorb humidity from the air, so that the magnitude of the humidity reduction occurring during the experiment could be obtained. The experiment was conducted at 8 in the morning with a liquid desiccant concentration of 50%, a nozzle diameter of 0.2 mm in the dehumidifier, and an air flow rate into the sample house of 2.35 m3/min. DHT 11 sensors installed at both the inlet and outlet sides of the room recorded changes in humidity and temperature during the experiment. In the normal condition, without the dehumidifier running, the sensors recorded an average indoor temperature of 28°C and an RH of 65%. The experimental results showed that the relative humidity inside the sample house decreased to 52% at the inlet position. Further, the CFD simulation provided the temperature and relative humidity distributions inside the sample house. It showed that the 50% liquid desiccant concentration decreased during the run, while the relative humidity distribution was considerably good, with an average RH of 55%, accompanied by an increase in air temperature to 29.2°C inside the sample house.
NASA Technical Reports Server (NTRS)
Corker, Kevin M.; Pisanich, Gregory; Lebacqz, J. Victor (Technical Monitor)
1998-01-01
This paper presents a set of studies in full mission simulation and the development of a predictive computational model of human performance in control of complex airspace operations. NASA and the FAA have initiated programs of research and development to provide flight crew, airline operations and air traffic managers with automation aids to increase capacity in en route and terminal area to support the goals of safe, flexible, predictable and efficient operations. In support of these developments, we present a computational model to aid design that includes representation of multiple cognitive agents (both human operators and intelligent aiding systems). The demands of air traffic management require representation of many intelligent agents sharing world-models, coordinating action/intention, and scheduling goals and actions in a potentially unpredictable world of operations. The operator-model structure includes attention functions, action priority, and situation assessment. The cognitive model has been expanded to include working memory operations including retrieval from long-term store, and interference. The operator's activity structures have been developed to provide for anticipation (knowledge of the intention and action of remote operators), and to respond to failures of the system and other operators in the system in situation-specific paradigms. System stability and operator actions can be predicted by using the model. The model's predictive accuracy was verified using the full-mission simulation data of commercial flight deck operations with advanced air traffic management techniques.
Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.
1997-12-31
The aim of the work performed is to develop a 3D parallel program for the numerical calculation of a gas dynamics problem with heat conductivity on distributed-memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two basically different approaches to the structure of massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition that is reconstructed at each temporal cycle, and is a development of parallelization algorithms for multiprocessor CS with shared memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of VNIIEF and LLNL staff. A large number of numerical experiments have been carried out with up to 256 processors, and the parallelization efficiency has been evaluated as a function of processor number and parameters.
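The block-decomposition bookkeeping behind such a scheme can be sketched in a few lines. This is an illustrative reconstruction, not the VNIIEF program: the function names, the 64^3 grid, and the 4x2x2 processor mesh are invented for the example.

```python
# Minimal sketch of a static 3D block decomposition of an NX x NY x NZ cell
# grid over a px x py x pz processor mesh, with one layer of ghost cells per
# face for nearest-neighbor exchange (illustrative only).

def decompose(n, p):
    """Split n cells over p processors as evenly as possible; return slab sizes."""
    base, rem = divmod(n, p)
    return [base + (1 if i < rem else 0) for i in range(p)]

def block_sizes(nx, ny, nz, px, py, pz):
    """Return the (sx, sy, sz) cell counts of every processor block."""
    xs, ys, zs = decompose(nx, px), decompose(ny, py), decompose(nz, pz)
    return [(sx, sy, sz) for sx in xs for sy in ys for sz in zs]

def halo_cells(sx, sy, sz):
    """Ghost cells exchanged per step for a block with six face neighbors."""
    return 2 * (sx * sy + sy * sz + sx * sz)

blocks = block_sizes(64, 64, 64, 4, 2, 2)
total = sum(sx * sy * sz for sx, sy, sz in blocks)
print(total)                    # every interior cell covered exactly once
print(halo_cells(*blocks[0]))   # communication volume of one block per step
```

Because the partition is fixed (the paper's second approach), the halo pattern is computed once; the first approach would recompute `blocks` every temporal cycle.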
Santoro, Domenico; Raisee, Mehrdad; Moghaddami, Mostafa; Ducoste, Joel; Sasges, Micheal; Liberti, Lorenzo; Notarnicola, Michele
2010-08-15
Advanced Oxidation Processes (AOPs) promoted by ultraviolet light are innovative and potentially cost-effective solutions for treating persistent pollutants recalcitrant to conventional water and wastewater treatment. While several studies have been performed during the past decade to improve the fundamental understanding of the UV-H2O2 AOP and its kinetic modeling, Computational Fluid Dynamics (CFD) has only recently emerged as a powerful tool that allows a deeper understanding of complex photochemical processes in environmental and reactor engineering applications. In this paper, a comprehensive kinetic model of the UV-H2O2 AOP was coupled with the Reynolds-averaged Navier-Stokes (RANS) equations using CFD to predict the oxidation of tributyl phosphate (TBP) and tri(2-chloroethyl) phosphate (TCEP) in two different photoreactors: a parallel- and a cross-flow UV device employing a UV lamp emitting primarily 253.7 nm radiation. CFD simulations, obtained for both turbulent and laminar flow regimes and compared with experimental data over a wide range of UV doses, enabled the spatial visualization of hydrogen peroxide and hydroxyl radical distributions in the photoreactor. The annular photoreactor displayed consistently better oxidation performance than the cross-flow system due to the absence of recirculation zones, as confirmed by the hydroxyl radical dose distributions. Notably, this discrepancy was found to depend strongly on, and correlate directly with, the hydroxyl radical rate constant, becoming relevant as conditions approach diffusion-controlled reaction regimes (k(C,OH) > 10^9 M^-1 s^-1). PMID:20704221
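The pollutant-decay kinetics underlying such models reduce, at pseudo-steady state, to first-order decay in the hydroxyl radical field. The sketch below is a hedged toy, not the paper's CFD-coupled mechanism: the rate constant and radical concentration are assumed round numbers for illustration only.

```python
# Toy pseudo-steady-state kinetics for a UV-H2O2 AOP: if the hydroxyl radical
# level [OH] is roughly constant, a pollutant C decays as
#   dC/dt = -k_OH * [OH] * C.
# Values below are illustrative assumptions, not from the paper.
import math

k_OH = 1.0e9        # M^-1 s^-1, second-order OH rate constant (assumed)
OH_ss = 1.0e-13     # M, pseudo-steady hydroxyl radical concentration (assumed)

def residual_fraction(t):
    """C(t)/C0 for first-order decay at rate k_OH*[OH]."""
    return math.exp(-k_OH * OH_ss * t)

def log_removal(t):
    """log10(C0/C(t)), a common reporting unit for oxidation performance."""
    return k_OH * OH_ss * t / math.log(10)

print(residual_fraction(60.0))   # fraction remaining after 60 s of exposure
print(log_removal(60.0))
```

The CFD coupling in the paper replaces the constant `OH_ss` with a spatially resolved radical field transported by the RANS flow solution.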
Guyot, Y; Luyten, F P; Schrooten, J; Papantoniou, I; Geris, L
2015-12-01
Bone tissue engineering strategies use flow-through perfusion bioreactors to apply mechanical stimuli to cells seeded on porous scaffolds. Cells grow on the scaffold surface but also bridge the scaffold pores, leading to a fully filled scaffold that follows the scaffold's geometric characteristics. Current computational fluid dynamics approaches for tissue engineering bioreactor systems have mostly been carried out for empty scaffolds. The effect of 3D cell growth and extracellular matrix formation (termed neotissue growth in this study) on the surrounding fluid flow field is a challenge yet to be tackled. In this work a combined approach was followed, linking curvature-driven cell growth to fluid dynamics modeling. The level-set method (LSM) was employed to capture neotissue growth driven by curvature, while the Stokes and Darcy equations, combined in the Brinkman equation, provided information on the distribution of the shear stress profile at the neotissue/medium interface and within the neotissue itself during growth. The neotissue was assumed to be micro-porous, allowing flow through its structure while at the same time allowing the simulation of complete scaffold filling without numerical convergence issues. The results show a significant difference in the amplitude of shear stress for cells located within the micro-porous neotissue or at the neotissue/medium interface, demonstrating the importance of including the neotissue in the calculation of the mechanical stimulation of cells during culture. The presented computational framework is applied to different scaffold pore geometries, demonstrating its potential as a design tool for scaffold architecture that takes the growing neotissue into account. Biotechnol. Bioeng. 2015;112: 2591-2600. © 2015 Wiley Periodicals, Inc. PMID:26059101
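The curvature term that drives growth in the level-set formulation can be illustrated numerically. The sketch below is an assumption-laden toy, not the authors' solver: it checks that the discrete curvature kappa = div(grad phi / |grad phi|) of a signed-distance circle recovers 1/r, the quantity a curvature-driven velocity would use.

```python
# Discrete interface curvature of a level-set function (illustrative sketch).
# For a signed-distance circle of radius r0, kappa near the zero level set
# should be close to 1/r0.
import numpy as np

n, L = 200, 2.0                       # grid points per side, half-width of domain
h = 2 * L / (n - 1)
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
r0 = 1.0
phi = np.sqrt(X**2 + Y**2) - r0       # signed distance to a circle of radius r0

gx, gy = np.gradient(phi, h)          # axis 0 is x with indexing="ij"
norm = np.sqrt(gx**2 + gy**2) + 1e-12
nxx = np.gradient(gx / norm, h)[0]    # d/dx of unit-normal x-component
nyy = np.gradient(gy / norm, h)[1]    # d/dy of unit-normal y-component
kappa = nxx + nyy                     # divergence of the unit normal

band = np.abs(phi) < h                # narrow band around the interface
print(float(kappa[band].mean()))      # should be close to 1/r0 = 1.0
```

In the paper's framework this curvature sets the interface velocity, while the Brinkman equation supplies the shear stress on and inside the moving neotissue.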
Distributed instruction set computer
Wang, L.
1989-01-01
The Distributed Instruction Set Computer, or DISC for short, is an experimental computer system for fine-grained parallel processing. DISC employs a new parallel instruction set, an Early Binding and Scheduling data-tagging scheme, and a distributed control mechanism to explore a software dataflow control method in a multiple-functional-unit system. With zero system control overhead, multiple instructions are executed in parallel and/or out of order at a peak rate of n instructions/cycle, where n is the number of functional units. Quantitative simulation results indicate that a DISC system with 16 functional units can deliver a maximum 7.7X performance speedup over a single-functional-unit system at the same clock speed. Exploring a new parallel instruction set and distributed control mechanism, DISC represents three major breakthroughs in the domain of fine-grained parallel processing: (1) a fast multiple-instruction issuing mechanism; (2) parallel and/or out-of-order execution; (3) a software dataflow control scheme.
Cooperative Fault Tolerant Distributed Computing
Fagg, Graham E.
2006-03-15
HARNESS was proposed as a system that combined the best of emerging technologies found in current distributed computing research and commercial products into a very flexible, dynamically adaptable framework that applications could use to evolve and better handle their execution environment. The HARNESS system was designed using the considerable experience from previous projects such as PVM, MPI, IceT and Cumulvs. As such, the system was designed to avoid the common problems of these existing systems: it has no single point of failure and can survive machine, node, and software failures. Additional features included improved inter-component connectivity, with full support for dynamic downloading of additional components at run-time, reducing the pressure on application developers to build in all the libraries they need in advance.
Computational fluid dynamic control
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Deabreu-Garcia, Alex
1989-01-01
A general technique is presented for modeling fluid, or gas, dynamic systems specifically for the development of control systems. The numerical methods which are generally used in computational fluid dynamics are borrowed to create either continuous-time or discrete-time models of the particular fluid system. The resulting equations can be either left in a nonlinear form, or easily linearized about an operating point. As there are typically very many states in these systems, the usual linear model reduction methods can be used on them to allow a low-order controller to be designed. A simple example is given which typifies many internal flow control problems. The resulting control is termed computational fluid dynamic control.
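The route from a CFD discretization to a control-oriented linear model can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's example: a 1D advection-diffusion duct, its grid size, and its parameters are all invented, but the pattern (discretize, linearize, obtain x' = A x + B u, then hand A and B to model-reduction and controller-design tools) is the one the abstract describes.

```python
# Discretizing a 1D advection-diffusion flow with the finite differences a CFD
# code would use yields a linear state-space model x' = A x + B u.
# Parameters are illustrative; cell Peclet number u*dx/nu = 0.4 < 2, so central
# differencing of the advection term is well behaved.
import numpy as np

n, dx = 50, 0.02          # interior grid points, spacing (assumed)
u, nu = 1.0, 0.05         # advection speed, diffusivity (assumed)

A = np.zeros((n, n))
for i in range(n):
    A[i, i] = -2 * nu / dx**2
    if i > 0:
        A[i, i - 1] = nu / dx**2 + u / (2 * dx)   # diffusion + central advection
    if i < n - 1:
        A[i, i + 1] = nu / dx**2 - u / (2 * dx)
B = np.zeros((n, 1))
B[0, 0] = nu / dx**2 + u / (2 * dx)               # inlet boundary as the control input

eigs = np.linalg.eigvals(A)
print(eigs.real.max())    # all eigenvalues in the left half-plane: a stable model
```

With many states, as the abstract notes, standard linear model reduction would be applied to (A, B) before designing a low-order controller.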
NASA Astrophysics Data System (ADS)
Chung, T. J.
2002-03-01
Computational fluid dynamics (CFD) techniques are used to study and solve complex fluid flow and heat transfer problems. This comprehensive text ranges from elementary concepts for the beginner to state-of-the-art CFD for the practitioner. It discusses and illustrates the basic principles of finite difference (FD), finite element (FE), and finite volume (FV) methods, with step-by-step hand calculations. Chapters go on to examine structured and unstructured grids, adaptive methods, computing techniques, and parallel processing. Finally, the author describes a variety of practical applications to problems in turbulence, reacting flows and combustion, acoustics, combined mode radiative heat transfer, multiphase flows, electromagnetic fields, and relativistic astrophysical flows. Students and practitioners--particularly in mechanical, aerospace, chemical, and civil engineering--will use this authoritative text to learn about and apply numerical techniques to the solution of fluid dynamics problems.
Computational fluid dynamics research
NASA Technical Reports Server (NTRS)
Chandra, Suresh; Jones, Kenneth; Hassan, Hassan; Mcrae, David Scott
1992-01-01
The focus of research in the computational fluid dynamics (CFD) area is two fold: (1) to develop new approaches for turbulence modeling so that high speed compressible flows can be studied for applications to entry and re-entry flows; and (2) to perform research to improve CFD algorithm accuracy and efficiency for high speed flows. Research activities, faculty and student participation, publications, and financial information are outlined.
Dynamic Load Balancing for Computational Plasticity on Parallel Computers
NASA Technical Reports Server (NTRS)
Pramono, Eddy; Simon, Horst
1994-01-01
The simulation of computational plasticity on a complex structure remains a formidable computational task, especially when a highly nonlinear, complex material model is used. It appears that the computational requirements for such a problem can only be satisfied by massively parallel architectures. In order to effectively harness the tremendous computational power provided by such architectures, it is imperative to investigate the algorithmic and implementation issues pertaining to dynamic load balancing for computational plasticity on highly parallel, distributed-memory, multiple-instruction, multiple-data (MIMD) computers. This paper measures the effectiveness of the algorithms developed for handling dynamic load balancing.
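The abstract does not specify the balancing algorithms, but the core problem can be illustrated with a standard stand-in: element costs change as plasticity develops, so elements are periodically re-assigned with a greedy longest-processing-time (LPT) heuristic. The weights and processor count below are invented for the example.

```python
# Toy dynamic load balancing: re-assign finite elements to processors after
# their costs change, using the greedy LPT heuristic (heaviest element first,
# always onto the currently lightest processor). Illustrative only.
import heapq

def balance(weights, nproc):
    """Return a list of (load, proc_id, element_ids) after greedy LPT assignment."""
    heap = [(0.0, p, []) for p in range(nproc)]
    heapq.heapify(heap)
    for elem, w in sorted(enumerate(weights), key=lambda t: -t[1]):
        load, p, elems = heapq.heappop(heap)   # lightest processor so far
        elems.append(elem)
        heapq.heappush(heap, (load + w, p, elems))
    return heap

# 1000 elements: a plastic zone (first 100) is 10x more expensive than the rest.
weights = [10.0] * 100 + [1.0] * 900
parts = balance(weights, 8)
loads = sorted(load for load, _, _ in parts)
print(loads[-1] / (sum(weights) / 8))   # imbalance ratio; ~1.0 is perfect
```

In a real plasticity code the re-balance would be triggered when measured per-processor times drift apart, and data-migration cost would be weighed against the imbalance removed.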
Computational reacting gas dynamics
NASA Technical Reports Server (NTRS)
Lam, S. H.
1993-01-01
In the study of high speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solutions, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified under three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'intractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full-model. In other words, the present research aims to utilize computations to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed reacting CFD numerical results will become much easier. The approach of the research is called computational singular perturbation (CSP).
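The timescale separation that CSP exploits can be shown on a deliberately simple system. The sketch below is not from the paper: the two-species linear kinetics and its rate constants are invented to expose the stiffness ratio that makes the fast mode a candidate for algebraic (equilibrium) replacement.

```python
# Illustrative stiff kinetics: x1' = -1000*x1 + x2, x2' = x1 - x2
# (fast species x1, slow species x2). The Jacobian eigenvalues expose the
# stiffness ratio; CSP-style reduction would replace the fast mode with an
# algebraic relation on the slow manifold (here x1 ~ x2/1000).
import numpy as np

J = np.array([[-1000.0, 1.0],
              [1.0, -1.0]])
lam = np.sort(np.linalg.eigvals(J).real)   # ascending: fastest (most negative) first
stiffness = lam[0] / lam[-1]               # ratio of fastest to slowest timescale
print(stiffness)                            # large ratio -> stiff, reducible system
```

After the fast transient decays, only the slow equation need be integrated, which is precisely the kind of simplified-model derivation the abstract assigns to computation rather than to hand analysis.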
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.
2014-01-01
Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically, using inputs and the Student-T distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-T distribution can encompass the exact solution.
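The idea can be sketched against the exact benchmark the abstract names: for fully developed laminar flow between infinite stationary plates, the peak velocity is u_max = (-dp/dx) h^2 / (8 mu). The sketch below is a hedged toy, not the paper's procedure: the nominal values, the 1% input perturbation level, and the 30-run sample size are assumptions for illustration.

```python
# Treat the CFD inputs (pressure gradient, gap, viscosity) as uncertain,
# evaluate the output repeatedly, and wrap the sample in a Student-T interval;
# check that the interval encompasses the exact laminar solution.
import random, statistics, math

random.seed(1)
dpdx, h, mu = -1.0, 0.01, 1.0e-3        # Pa/m, m, Pa*s (assumed nominal inputs)
u_exact = -dpdx * h**2 / (8 * mu)       # exact plane-channel peak velocity

def sample():
    """One surrogate 'CFD run' with 1%-level perturbations on each input."""
    f = lambda v: v * (1 + random.gauss(0, 0.01))
    return -f(dpdx) * f(h)**2 / (8 * f(mu))

runs = [sample() for _ in range(30)]
mean, sd = statistics.mean(runs), statistics.stdev(runs)
t95 = 2.045                             # Student-T critical value, 29 dof, 95%
lo, hi = mean - t95 * sd, mean + t95 * sd
print(lo <= u_exact <= hi)              # interval should encompass the exact value
```

A real application would replace `sample()` with actual solver runs, which is what makes the Student-T treatment (small samples, unknown variance) the natural choice.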
GRIMD: distributed computing for chemists and biologists
Piotto, Stefano; Biasi, Luigi Di; Concilio, Simona; Castiglione, Aniello; Cattaneo, Giuseppe
2014-01-01
Motivation: Biologists and chemists are facing problems of high computational complexity that require the use of several computers organized in clusters or in specialized grids. Examples of such problems can be found in molecular dynamics (MD), in silico screening, and genome analysis. Grid Computing and Cloud Computing are becoming prevalent mainly because of their competitive performance/cost ratio. Regrettably, the diffusion of Grid Computing is strongly limited by two factors: it is confined to scientists with a strong Computer Science background, and the analysis of the large amount of data produced can be cumbersome. We have developed a package named GRIMD to provide an easy and flexible implementation of distributed computing for the Bioinformatics community. GRIMD is very easy to install and maintain, and it does not require any specific Computer Science skill. Moreover, it permits preliminary analyses on the distributed machines to reduce the amount of data to transfer. GRIMD is very flexible because it shields the typical computational biologist from the need to write specific code for tasks such as molecular dynamics or docking calculations. Furthermore, it permits an efficient use of GPU cards whenever possible. GRIMD calculations scale almost linearly and therefore make efficient use of each machine in the network. Here, we provide a few examples of grid computing in computational biology (MD and docking) and bioinformatics (proteome analysis). Availability: GRIMD is available for free for noncommercial research at www.yadamp.unisa.it/grimd Supplementary information: www.yadamp.unisa.it/grimd/howto.aspx PMID:24516326
Heterogeneous Distributed Computing for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Sunderam, Vaidy S.
1998-01-01
The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions for, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM[1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.
Chang, Yang; Zhao, Xiao-zhuo; Wang, Cheng; Ning, Fang-gang; Zhang, Guo-an
2015-01-01
Inhalation injury is an important cause of death after thermal burns. This study was designed to simulate the velocity and temperature distribution of inhalation thermal injury in the upper airway in humans using computational fluid dynamics. Cervical computed tomography images of three Chinese adults were imported to Mimics software to produce three-dimensional models. After grids were established and boundary conditions were defined, the simulation time was set at 1 minute and the gas temperature was set to 80 to 320°C using ANSYS software (ANSYS, Canonsburg, PA) to simulate the velocity and temperature distribution of inhalation thermal injury. Cross-sections were cut at 2-mm intervals, and maximum airway temperature and velocity were recorded for each cross-section. The maximum velocity peaked in the lower part of the nasal cavity and then decreased with air flow. The velocities in the epiglottis and glottis were higher than those in the surrounding areas. Further, the maximum airway temperature decreased from the nasal cavity to the trachea. Computational fluid dynamics technology can be used to simulate the velocity and temperature distribution of inhaled heated air. PMID:25412055
Computational fluid dynamic applications
Chang, S.-L.; Lottes, S. A.; Zhou, C. Q.
2000-04-03
The rapid advancement of computational capability, including speed and memory size, has prompted the wide use of computational fluid dynamics (CFD) codes to simulate complex flow systems. CFD simulations are used to study the operating problems encountered in a system, to evaluate the impacts of operation/design parameters on the performance of a system, and to investigate novel design concepts. CFD codes are generally developed based on the conservation laws of mass, momentum, and energy that govern the characteristics of a flow. The governing equations are simplified and discretized for a selected computational grid system. Numerical methods are selected to simplify and calculate approximate flow properties. For turbulent, reacting, and multiphase flow systems, the complex processes relating to these aspects of the flow, i.e., turbulent diffusion, combustion kinetics, interfacial drag and heat and mass transfer, etc., are described in mathematical models, based on a combination of fundamental physics and empirical data, that are incorporated into the code. CFD simulation has been applied to a large variety of practical and industrial scale flow systems.
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: (1) aerospace system and component representation using a hierarchical object-oriented component model, which enables the use of multimodels and enforces component interoperability; (2) a collaborative software environment that streamlines the process of developing, sharing, and integrating aerospace design and analysis models; and (3) a distributed infrastructure that enables Web-based exchange of models, simplifying the collaborative design process and supporting computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.
Distributed GPU Computing in GIScience
NASA Astrophysics Data System (ADS)
Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.
2013-12-01
Geoscientists strive to discover potential principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by increasing amounts of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, as GPU-based technology has matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared with traditional microprocessors, the modern GPU offers outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build up a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as graphics-targeted devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Keywords: Geovisualization, GIScience, Spatiotemporal Studies. References: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. Visualization and Computer Graphics, IEEE
Kendziorra, Carsten; Meyer, Henning; Dewey, Marc
2014-01-01
This paper presents a phase detection algorithm for four-dimensional (4D) cardiac computed tomography (CT) analysis. The algorithm detects a phase, i.e. a specific three-dimensional (3D) image out of several time-distributed 3D images, with high contrast in the left ventricle and low contrast in the right ventricle. The purpose is to use the automatically detected phase in an existing algorithm that automatically aligns the images along the heart axis. Decision making is based on the contrast agent distribution over time. It was implemented in KardioPerfusion, a software framework currently being developed for 4D CT myocardial perfusion analysis. Agreement of the phase detection algorithm with two reference readers was 97% (95% CI: 82–100%). Mean duration for detection was 0.020 s (95% CI: 0.018–0.022 s), substantially less than the time the readers needed. Thus, this algorithm is an accurate and fast tool that can improve the workflow of clinical examinations. PMID:25545863
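The selection criterion can be illustrated with a toy reconstruction. This is not the KardioPerfusion implementation, which is not given in the abstract: the synthetic bolus curves and the simple "LV minus RV enhancement" score are invented to show the decision principle (high left-ventricle contrast, low right-ventricle contrast).

```python
# Toy phase selection from time-distributed contrast measurements: pick the
# frame where the left ventricle is bright and the right ventricle has washed
# out. Curves and score are illustrative assumptions.

def detect_phase(lv_means, rv_means):
    """Return the index of the frame maximizing LV minus RV enhancement."""
    scores = [lv - rv for lv, rv in zip(lv_means, rv_means)]
    return max(range(len(scores)), key=scores.__getitem__)

# Synthetic bolus curves: contrast reaches the right ventricle first.
rv = [50, 300, 250, 120, 80, 60]    # HU, right ventricle washes out early
lv = [50, 80, 200, 320, 260, 150]   # HU, left ventricle peaks later
print(detect_phase(lv, rv))          # -> 3: LV bright, RV already washed out
```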
BESIII production with distributed computing
NASA Astrophysics Data System (ADS)
Zhang, X. M.; Yan, T.; Zhao, X. H.; Ma, Z. T.; Yan, X. F.; Lin, T.; Deng, Z. Y.; Li, W. D.; Belov, S.; Pelevanyuk, I.; Zhemchugov, A.; Cai, H.
2015-12-01
Distributed computing is necessary nowadays for high energy physics experiments to organize heterogeneous computing resources all over the world to process enormous amounts of data. The BESIII experiment in China has established its own distributed computing system, based on DIRAC, as a supplement to local clusters, collecting cluster, grid, desktop and cloud resources from collaborating member institutes around the world. The system consists of workload management and data management to deal with the BESIII Monte Carlo production workflow in a distributed environment. A dataset-based data transfer system has been developed to support data movements among sites. File and metadata management tools and a job submission frontend have been developed to provide a virtual layer for BESIII physicists to use distributed resources. Moreover, the paper describes the experience of coping with limited grid expertise and manpower within the BESIII community.
Computational Fluid Dynamics Library
2005-03-04
CFDLib05 is the Los Alamos Computational Fluid Dynamics LIBrary. This is a collection of hydrocodes using a common data structure and a common numerical method, for problems ranging from single-field, incompressible flow to multi-species, multi-field, compressible flow. The data structure is multi-block, with a so-called structured grid in each block. The numerical method is a Finite-Volume scheme employing a state vector that is fully cell-centered. This means that the integral form of the conservation laws is solved on the physical domain that is represented by a mesh of control volumes. The typical control volume is an arbitrary quadrilateral in 2D and an arbitrary hexahedron in 3D. The Finite-Volume scheme is for time-unsteady flow and remains well coupled by means of time- and space-centered fluxes; if a steady state solution is required, the problem is integrated forward in time until the user is satisfied that the state is stationary.
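The defining property of a cell-centered finite-volume scheme, that cell averages are updated only through face fluxes so the total conserved quantity is preserved to round-off, can be shown in one dimension. This is a generic sketch, not CFDLib's scheme: a first-order upwind flux on a periodic domain, with invented grid and time-step values.

```python
# One-dimensional cell-centered finite-volume update for q_t + u q_x = 0,
# periodic domain, u > 0, first-order upwind face fluxes (illustrative sketch).

def fv_step(q, u, dt, dx):
    """Advance cell averages q by one step; flux[i] crosses the left face of cell i."""
    n = len(q)
    flux = [u * q[i - 1] for i in range(n)]        # upwind: take the cell to the left
    return [q[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]

dx, u = 0.01, 1.0
dt = 0.5 * dx / u                                  # CFL number 0.5
q = [1.0 if 0.4 <= i * dx < 0.6 else 0.0 for i in range(100)]
total0 = sum(q) * dx
for _ in range(200):
    q = fv_step(q, u, dt, dx)
print(abs(sum(q) * dx - total0))                   # should be at round-off level
```

Because every interior flux appears once with each sign, the sum over cells telescopes, which is exactly why the integral (conservation) form is the natural basis for such a library.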
Distributed computing at the SSCL
Cormell, L.; White, R.
1993-05-01
The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent; he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.
Hydronic distribution system computer model
Andrews, J.W.; Strasser, J.J.
1994-10-01
A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley. This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.
Biomolecular dynamics by computer analysis
Eilbeck, J.C.; Lomdahl, P.S.; Scott, A.C.
1984-01-01
As numerical tools (computers and display equipment) become more powerful and the atomic structures of important biological molecules become known, the importance of detailed computation of nonequilibrium biomolecular dynamics increases. In this manuscript we report results from a well developed study of the hydrogen bonded polypeptide crystal acetanilide, a model protein. Directions for future research are suggested. 9 references, 6 figures.
Computational aspects of multibody dynamics
NASA Technical Reports Server (NTRS)
Park, K. C.
1989-01-01
Computational aspects are addressed that impact the requirements for developing a next-generation software system for flexible multibody dynamics simulation, including: criteria for selecting candidate formulations, pairing of formulations with appropriate solution procedures, the need for concurrent algorithms to utilize computer hardware advances, and provisions for allowing open-ended yet modular analysis modules.
ATLAS Distributed Computing in LHC Run2
NASA Astrophysics Data System (ADS)
Campana, Simone
2015-12-01
The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. Flexible computing utilization exploring the use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been introduced to better manage the lifecycle of the data. In this note, an overview of operational experience with the new system and its evolution is presented.
Computer animation challenges for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine
2012-07-01
Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.
Computational Workbench for Multibody Dynamics
NASA Technical Reports Server (NTRS)
Edmonds, Karina
2007-01-01
PyCraft is a computer program that provides an interactive, workbench-like computing environment for developing and testing algorithms for multibody dynamics. Examples of multibody dynamic systems amenable to analysis with the help of PyCraft include land vehicles, spacecraft, robots, and molecular models. PyCraft is based on the Spatial-Operator-Algebra (SOA) formulation for multibody dynamics. The SOA operators enable construction of simple and compact representations of complex multibody dynamical equations. Within the PyCraft computational workbench, users can, essentially, use the high-level SOA operator notation to represent a variety of dynamical quantities and algorithms and to perform computations interactively. PyCraft provides a Python-language interface to underlying C++ code. Working with SOA concepts, a user can create and manipulate Python-level operator classes in order to implement and evaluate new dynamical quantities and algorithms. During use of PyCraft, virtually all SOA-based algorithms are available for computational experiments.
Overlapping clusters for distributed computation.
Mirrokni, Vahab; Andersen, Reid; Gleich, David F.
2010-11-01
Scalable, distributed algorithms must address communication problems. We investigate overlapping clusters, or vertex partitions that intersect, for graph computations. This setup stores more of the graph than required but then affords the ease of implementation of vertex partitioned algorithms. Our hope is that this technique allows us to reduce communication in a computation on a distributed graph. The motivation above draws on recent work in communication avoiding algorithms. Mohiyuddin et al. (SC09) design a matrix-powers kernel that gives rise to an overlapping partition. Fritzsche et al. (CSC2009) develop an overlapping clustering for a Schwarz method. Both techniques extend an initial partitioning with overlap. Our procedure generates overlap directly. Indeed, Schwarz methods are commonly used to capitalize on overlap. Elsewhere, overlapping communities (Ahn et al, Nature 2009; Mishra et al. WAW2007) are now a popular model of structure in social networks. These have long been studied in statistics (Cole and Wishart, CompJ 1970). We present two types of results: (i) an estimated swapping probability ρ∞; and (ii) the communication volume of a parallel PageRank solution (link-following α = 0.85) using an additive Schwarz method. The volume ratio is the amount of extra storage for the overlap (2 means we store the graph twice). Below, as the ratio increases, the swapping probability and PageRank communication volume decreases.
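The PageRank computation mentioned in this abstract (link-following α = 0.85) can be sketched with a plain power iteration. This is a generic illustration in Python/NumPy, not the paper's additive Schwarz solver, and the tiny three-node graph is invented for the example:

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-10, max_iter=200):
    """Power iteration for PageRank on a small adjacency matrix.

    adj[i, j] = 1 if there is a link from node i to node j.
    """
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-stochastic transition matrix; dangling nodes jump uniformly.
    P = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None],
                 1.0 / n)
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_new = alpha * P.T @ x + (1 - alpha) / n
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# Toy 3-node cycle: 0 -> 1 -> 2 -> 0; by symmetry the ranks are uniform.
A = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], float)
r = pagerank(A)
```

In a distributed setting, each cluster would iterate on its own (overlapping) block of `P` and exchange boundary values; the communication volume of that exchange is what the paper measures.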
ERIC Educational Resources Information Center
Rimland, Jeffrey C.
2013-01-01
In many evolving systems, inputs can be derived from both human observations and physical sensors. Additionally, many computation and analysis tasks can be performed by either human beings or artificial intelligence (AI) applications. For example, weather prediction, emergency event response, assistive technology for various human sensory and…
Molecular dynamics on vector computers
NASA Astrophysics Data System (ADS)
Sullivan, F.; Mountain, R. D.; Oconnell, J.
1985-10-01
An algorithm called the method of lights (MOL) has been developed for the computerized simulation of molecular dynamics. The MOL, implemented on the CYBER 205 computer, is based on sorting and reformulating the manner in which neighbor lists are compiled, and it uses data structures compatible with specialized vector statements that perform parallel computations. The MOL is found to reduce running time over standard methods in scalar form, and vectorization is shown to produce an order-of-magnitude reduction in execution time.
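The sorting idea behind neighbor-list construction can be illustrated with a much-simplified one-dimensional sketch (a hypothetical example, not the actual MOL, which handles three-dimensional systems with vectorized data structures): after sorting the coordinates, each particle need only scan forward until the separation exceeds the cutoff, instead of testing all O(N²) pairs.

```python
import numpy as np

def neighbor_pairs_sorted(x, cutoff):
    """Find all pairs within `cutoff` in 1-D by sorting coordinates first."""
    order = np.argsort(x)
    xs = x[order]
    pairs = []
    n = len(xs)
    for i in range(n):
        j = i + 1
        # Scan forward only while the sorted gap stays within the cutoff.
        while j < n and xs[j] - xs[i] <= cutoff:
            pairs.append((order[i], order[j]))
            j += 1
    return pairs

positions = np.array([0.0, 0.3, 1.0, 5.0])
close = neighbor_pairs_sorted(positions, cutoff=0.5)
```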
Rakowski, Cynthia L.; Serkowski, John A.; Richmond, Marshall C.
2010-12-01
The U.S. Army Corps of Engineers-Portland District (CENWP) has ongoing work to improve the survival of juvenile salmonids (smolt) migrating past The Dalles Dam. As part of that effort, a spillwall was constructed to improve juvenile egress through the tailrace downstream of the stilling basin. The spillwall was designed to improve smolt survival by decreasing smolt retention time in the spillway tailrace and the exposure to predators on the spillway shelf. The spillwall guides spillway flows, and hence smolt, more quickly into the thalweg. In this study, an existing computational fluid dynamics (CFD) model was modified and used to characterize tailrace hydraulics between the new spillwall and the Washington shore for six different total river flows. The effect of spillway flow distribution was simulated for three spill patterns at the lowest total river flow. The commercial CFD solver, STAR-CD version 4.1, was used to solve the unsteady Reynolds-averaged Navier-Stokes equations together with the k-epsilon turbulence model. Free surface motion was simulated using the volume-of-fluid (VOF) technique. The model results were used in two ways. First, results graphics were provided to CENWP and regional fisheries agency representatives for use and comparison to the same flow conditions at a reduced-scale physical model. The CFD results were very similar in flow pattern to that produced by the reduced-scale physical model but these graphics provided a quantitative view of velocity distribution. During the physical model work, an additional spill pattern was tested. Subsequently, that spill pattern was also simulated in the numerical model. The CFD streamlines showed that the hydraulic conditions were likely to be beneficial to fish egress at the higher total river flows (120 kcfs and greater, uniform flow distribution). At the lowest flow case, 90 kcfs, it was necessary to use a non-uniform distribution. Of the three distributions tested, splitting the flow evenly between
Analog computation with dynamical systems
NASA Astrophysics Data System (ADS)
Siegelmann, Hava T.; Fishman, Shmuel
1998-09-01
Physical systems exhibit various levels of complexity: their long-term dynamics may converge to fixed points or exhibit complex chaotic behavior. This paper presents a theory that makes it possible to interpret natural processes as special-purpose analog computers. Since physical systems are naturally described in continuous time, a definition of computational complexity for continuous-time systems is required. In analogy with the classical discrete theory we develop fundamentals of computational complexity for dynamical systems, discrete or continuous in time, on the basis of an intrinsic time scale of the system. Dissipative dynamical systems are classified into the computational complexity classes P_d, Co-RP_d, NP_d and EXP_d, corresponding to their standard counterparts, according to the complexity of their long-term behavior. The complexity of chaotic attractors relative to regular ones leads to the conjecture P_d ≠ NP_d. Continuous-time flows have been proven useful in solving various practical problems. Our theory provides the tools for an algorithmic analysis of such flows. As an example we analyze the continuous Hopfield network.
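The continuous Hopfield flow analyzed as the paper's example can be integrated numerically as a minimal sketch (the weights, input, and Euler scheme here are invented for illustration; the paper's analysis is analytical):

```python
import numpy as np

def hopfield_flow(W, I, u0, dt=0.01, steps=5000):
    """Euler integration of a continuous Hopfield network:
        du/dt = -u + W @ tanh(u) + I
    For symmetric W with spectral norm below 1 the flow contracts
    and converges to a fixed point (a "computation" ending in an answer).
    """
    u = np.array(u0, float)
    for _ in range(steps):
        u = u + dt * (-u + W @ np.tanh(u) + I)
    return u

W = np.array([[0.0, 0.5], [0.5, 0.0]])  # symmetric coupling
I = np.array([0.1, -0.1])
u_star = hopfield_flow(W, I, u0=[0.2, 0.2])
# At a fixed point the right-hand side vanishes:
residual = -u_star + W @ np.tanh(u_star) + I
```

The intrinsic time scale of this flow, in the paper's sense, is set by the exponential rate at which the residual decays.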
Cooperative Autonomic Management in Dynamic Distributed Systems
NASA Astrophysics Data System (ADS)
Xu, Jing; Zhao, Ming; Fortes, José A. B.
The centralized management of large distributed systems is often impractical, particularly when both the topology and status of the system change dynamically. This paper proposes an approach to application-centric self-management in large distributed systems consisting of a collection of autonomic components that join and leave the system dynamically. Cooperative autonomic components self-organize into a dynamically created overlay network. Through local information sharing with neighbors, each component gains access to global information as needed for optimizing performance of applications. The approach has been validated and evaluated by developing a decentralized autonomic system consisting of multiple autonomic application managers previously developed for the In-VIGO grid-computing system. Using analytical results from complex random network and measurements done in a prototype system, we demonstrate the robustness, self-organization and adaptability of our approach, both theoretically and experimentally.
Dynamic computing random access memory
NASA Astrophysics Data System (ADS)
Traversa, F. L.; Bonani, F.; Pershin, Y. V.; Di Ventra, M.
2014-07-01
The present von Neumann computing paradigm involves a significant amount of information transfer between a central processing unit and memory, with concomitant limitations in the actual execution speed. However, it has been recently argued that a different form of computation, dubbed memcomputing (Di Ventra and Pershin 2013 Nat. Phys. 9 200-2) and inspired by the operation of our brain, can resolve the intrinsic limitations of present day architectures by allowing for computing and storing of information on the same physical platform. Here we show a simple and practical realization of memcomputing that utilizes easy-to-build memcapacitive systems. We name this architecture dynamic computing random access memory (DCRAM). We show that DCRAM provides massively-parallel and polymorphic digital logic, namely it allows for different logic operations with the same architecture, by varying only the control signals. In addition, by taking into account realistic parameters, its energy expenditures can be as low as a few fJ per operation. DCRAM is fully compatible with CMOS technology, can be realized with current fabrication facilities, and therefore can really serve as an alternative to the present computing technology.
Distributed Computing at Belle II
NASA Astrophysics Data System (ADS)
Bansal, Vikas; Belle Collaboration, II
2016-03-01
The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start physics data taking in 2018 and will accumulate 50 ab-1 of e+e- collision data, about 50 times larger than the data set of the earlier Belle experiment. The computing requirements of Belle II are comparable to those of a Run 1 high-pT LHC experiment. Computing will make full use of high-speed networking and of the Computing Grids in North America, Asia and Europe. Results of an initial MC simulation campaign with 5 ab-1 equivalent luminosity will be described.
Distributed computing and nuclear reactor analysis
Brown, F.B.; Derstine, K.L.; Blomquist, R.N.
1994-03-01
Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations.
Computational Methods for Structural Mechanics and Dynamics
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)
1989-01-01
Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.
Visualization of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)
1995-01-01
Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e. real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components, including specific visualization techniques, and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.
Next generation distributed computing for cancer research.
Agarwal, Pankaj; Owzar, Kouros
2014-01-01
Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing. PMID:25983539
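The MapReduce model that Hadoop implements can be sketched in-process to show the programming pattern (a toy illustration, not Hadoop's actual Java API; the chromosome-count example is invented, loosely echoing the NGS read-alignment benchmark mentioned above):

```python
from collections import defaultdict
from itertools import chain

def map_reduce(records, mapper, reducer):
    """Minimal in-process sketch of the MapReduce model:
    map each record to (key, value) pairs, shuffle by key, then reduce.
    Hadoop distributes exactly these three phases across a cluster."""
    shuffled = defaultdict(list)
    for key, value in chain.from_iterable(mapper(r) for r in records):
        shuffled[key].append(value)
    return {key: reducer(key, values) for key, values in shuffled.items()}

# Toy example: count reads aligned to each chromosome.
reads = ["chr1", "chr2", "chr1", "chr1"]
counts = map_reduce(reads,
                    mapper=lambda chrom: [(chrom, 1)],
                    reducer=lambda k, vs: sum(vs))
```

The appeal for NGS workloads is that the mapper runs independently per read, so the alignment step parallelizes with no coordination until the shuffle.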
Evaluation of distributed computing tools
Stanberry, L.
1992-10-28
The original goal stated in the collaboration agreement from LCC's perspective was "to show that networking tools available in UNICOS perform well enough to meet the requirements of LCC customers." This translated into evaluating how easy it was to port ELROS over CRI's ISO 2.0, which itself is a port of ISODE to the Cray. In addition we tested the interoperability of ELROS and ISO 2.0 programs running on the Cray, and communicating with each other, and with servers or clients running on other machines. To achieve these goals from LCC's side, we ported ELROS to the Cray, and also obtained and installed a copy of the ISO 2.0 distribution from CRI. CRI's goal for the collaboration was to evaluate the usability of ELROS. In particular, we were interested in their potential feedback on the use of ELROS in implementing ISO protocols--whether ELROS would be easier to use and perform better than other tools that form part of the standard ISODE system. To help achieve these goals for CRI, we provided them with a distribution tar file containing the ELROS system, once we had completed our port of ELROS to the Cray.
A Software Rejuvenation Framework for Distributed Computing
NASA Technical Reports Server (NTRS)
Chau, Savio
2009-01-01
A performability-oriented conceptual framework for software rejuvenation has been constructed as a means of increasing levels of reliability and performance in distributed stateful computing. As used here, performability-oriented signifies that the construction of the framework is guided by the concept of analyzing the ability of a given computing system to deliver services with gracefully degradable performance. The framework is especially intended to support applications that involve stateful replicas of server computers.
Traffic Dynamics of Computer Networks
NASA Astrophysics Data System (ADS)
Fekete, Attila
2008-10-01
Two important aspects of the Internet, namely the properties of its topology and the characteristics of its data traffic, have attracted growing attention of the physics community. My thesis has considered problems of both aspects. First I studied the stochastic behavior of TCP, the primary algorithm governing traffic in the current Internet, in an elementary network scenario consisting of a standalone infinite-sized buffer and an access link. The effect of the fast recovery and fast retransmission (FR/FR) algorithms is also considered. I showed that my model can be extended further to involve the effect of link propagation delay, characteristic of WAN. I continued my thesis with the investigation of finite-sized semi-bottleneck buffers, where packets can be dropped not only at the link, but also at the buffer. I demonstrated that the behavior of the system depends only on a certain combination of the parameters. Moreover, an analytic formula was derived that gives the ratio of packet loss rate at the buffer to the total packet loss rate. This formula makes it possible to treat buffer-losses as if they were link-losses. Finally, I studied computer networks from a structural perspective. I demonstrated through fluid simulations that the distribution of resources, specifically the link bandwidth, has a serious impact on the global performance of the network. Then I analyzed the distribution of edge betweenness in a growing scale-free tree under the condition that a local property, the in-degree of the "younger" node of an arbitrary edge, is known in order to find an optimum distribution of link capacity. The derived formula is exact even for finite-sized networks. I also calculated the conditional expectation of edge betweenness, rescaled for infinite networks.
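The TCP congestion-window sawtooth that underlies the stochastic behavior studied in this thesis can be caricatured with a simple additive-increase/multiplicative-decrease loop (a toy illustration only; the loss rule below, halving whenever the window reaches an assumed capacity, is invented and far simpler than the thesis's buffer and link-loss models):

```python
def aimd_trace(capacity, n_rtts, w0=1.0):
    """Toy AIMD congestion-window trace, one value per round-trip time:
    the window grows by 1 per RTT and halves on "loss", which here is
    simply whenever the window reaches `capacity`."""
    w, trace = w0, []
    for _ in range(n_rtts):
        trace.append(w)
        w = w / 2 if w >= capacity else w + 1
    return trace

trace = aimd_trace(capacity=8, n_rtts=20)
```

Even this caricature reproduces the qualitative sawtooth whose statistics (together with FR/FR, propagation delay, and finite buffers) the thesis analyzes in detail.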
Distributed Dynamic State Estimation with Extended Kalman Filter
Du, Pengwei; Huang, Zhenyu; Sun, Yannan; Diao, Ruisheng; Kalsi, Karanjit; Anderson, Kevin K.; Li, Yulan; Lee, Barry
2011-08-04
Increasing complexity associated with large-scale renewable resources and novel smart-grid technologies necessitates real-time monitoring and control. Our previous work applied the extended Kalman filter (EKF) to phasor measurement unit (PMU) data for dynamic state estimation. However, high computational complexity creates significant challenges for real-time applications. In this paper, the problem of distributed dynamic state estimation is investigated. A domain decomposition method is proposed to utilize decentralized computing resources. The performance of distributed dynamic state estimation is tested on a 16-machine, 68-bus test system.
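A single EKF predict/update cycle, the building block that the domain decomposition distributes, might look as follows. This is a generic textbook sketch in Python/NumPy, not the paper's decomposed implementation, and the scalar toy model is invented:

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of the extended Kalman filter.
    f, h are the (nonlinear) process and measurement functions;
    F, H return their Jacobians at the current estimate."""
    # Predict: propagate state and covariance through the process model.
    x_pred = f(x)
    Fx = F(x)
    P_pred = Fx @ P @ Fx.T + Q
    # Update: correct the prediction with the measurement z.
    Hx = H(x_pred)
    S = Hx @ P_pred @ Hx.T + R
    K = P_pred @ Hx.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new

# Scalar toy: decaying process, direct (PMU-like) measurement of the state.
x0, P0 = np.array([0.0]), np.eye(1)
f = lambda x: 0.9 * x;  F = lambda x: np.array([[0.9]])
h = lambda x: x;        H = lambda x: np.eye(1)
Q, R = 0.01 * np.eye(1), 0.1 * np.eye(1)
x1, P1 = ekf_step(x0, P0, np.array([1.0]), f, F, h, H, Q, R)
```

In a domain-decomposed scheme, each subsystem would run this cycle on its own state block and exchange boundary estimates, which is what makes decentralized computing resources usable.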
Research on Computational Fluid Dynamics and Turbulence
NASA Technical Reports Server (NTRS)
1986-01-01
Preconditioning matrices for Chebyshev derivative operators in several space dimensions; the Jacobi matrix technique in computational fluid dynamics; and Chebyshev techniques for periodic problems are discussed.
Object-oriented Tools for Distributed Computing
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1993-01-01
Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.
Parallel Computation Of Forward Dynamics Of Manipulators
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1993-01-01
This report presents parallel algorithms and a special parallel architecture for computation of the forward dynamics of robotic manipulators. They are the products of an effort to find the best method of parallel computation to achieve the required computational efficiency. Significant speedup of computation is anticipated, as well as cost reduction.
Distributed Real-Time Computing with Harness
Di Saverio, Emanuele; Cesati, Marco; Di Biagio, Christian; Pennella, Guido; Engelmann, Christian
2007-01-01
Modern parallel and distributed computing solutions are often built on a "middleware" software layer providing a higher and common level of service between computational nodes. Harness is an adaptable, plugin-based middleware framework for parallel and distributed computing. This paper reports recent research and development results of using Harness for real-time distributed computing applications in the context of an industrial environment with the need to perform several safety-critical tasks. The presented work exploits the modular architecture of Harness in conjunction with a lightweight threaded implementation to resolve several real-time issues by adding three new Harness plug-ins that provide a prioritized lightweight execution environment, low-latency communication facilities, and local timestamped event logging.
Predictive Dynamic Security Assessment through Advanced Computing
Huang, Zhenyu; Diao, Ruisheng; Jin, Shuangshuang; Chen, Yousu
2014-11-30
Traditional dynamic security assessment is limited by several factors and thus falls short of providing real-time information that is predictive for power system operation. These factors include the steady-state assumption of current operating points, static transfer limits, and low computational speed. This paper addresses these factors and frames predictive dynamic security assessment. The primary objective of predictive dynamic security assessment is to enhance the functionality and computational process of dynamic security assessment through the use of high-speed phasor measurements and the application of advanced computing technologies for faster-than-real-time simulation. This paper presents algorithms, computing platforms, and simulation frameworks that constitute the predictive dynamic security assessment capability. Examples of phasor application and fast computation for dynamic security assessment are included to demonstrate the feasibility and speed enhancement for real-time applications.
Dance Dynamics: Computers and Dance.
ERIC Educational Resources Information Center
Gray, Judith A., Ed.; And Others
1983-01-01
Five articles discuss the use of computers in dance and dance education. They describe: (1) a computerized behavioral profile of a dance teacher; (2) computer-based dance notation; (3) elementary school computer-assisted dance instruction; (4) quantified analysis of dance criticism; and (5) computerized simulation of human body movements in a…
High-performance computing and distributed systems
Loken, S.C.; Greiman, W.; Jacobson, V.L.; Johnston, W.E.; Robertson, D.W.; Tierney, B.L.
1992-09-01
We present a scenario for a fully distributed computing environment in which computing, storage, and I/O elements are configured on demand into "virtual systems" that are optimal for the solution of a particular problem. We also present two pilot projects that illustrate some of the elements and issues of this scenario. The goal of this work is to make the most powerful computing systems those that are logically assembled from network-based components, and to make those systems available independent of the geographic location of the constituent elements.
Data Integration in Computer Distributed Systems
NASA Astrophysics Data System (ADS)
Kwiecień, Błażej
In this article the author analyzes the problem of data integration in distributed computer systems. Exchange of information between different levels in the integrated pyramid of enterprise processes is fundamental to efficient enterprise operation. Communication and data exchange between levels are not always uniform, owing to the use of different network protocols, communication media, system response times, etc.
Computer Systems for Distributed and Distance Learning.
ERIC Educational Resources Information Center
Anderson, M.; Jackson, David
2000-01-01
Discussion of network-based learning focuses on a survey of computer systems for distributed and distance learning. Both Web-based systems and non-Web-based systems are reviewed in order to highlight some of the major trends of past projects and to suggest ways in which progress may be made in the future. (Contains 92 references.) (Author/LRW)
Great Expectations: Distributed Financial Computing at Cornell.
ERIC Educational Resources Information Center
Schulden, Louise; Sidle, Clint
1988-01-01
The Cornell University Distributed Accounting (CUDA) system is an attempt to provide departments a software tool for better managing their finances, creating microcomputer standards, creating a vehicle for better administrative microcomputer support, and ensuring local systems are consistent with central computer systems. (Author/MLW)
Vectorization of computer programs with applications to computational fluid dynamics
NASA Astrophysics Data System (ADS)
Gentzsch, W.
Techniques for adapting serial computer programs to the architecture of modern vector computers are presented and illustrated with examples, mainly from the field of computational fluid dynamics. The limitations of conventional computers are reviewed; the vector computers CRAY-1S and CDC-CYBER 205 are characterized; and chapters are devoted to vectorization of FORTRAN programs, sample-program vectorization on five different vector and parallel-architecture computers, restructuring of basic linear-algebra algorithms, iterative methods, vectorization of simple numerical algorithms, and fluid-dynamics vectorization on the CRAY-1 (including an implicit Beam and Warming scheme, an implicit finite-difference method for laminar boundary-layer equations, the Galerkin method, and a direct Monte Carlo simulation). Diagrams, charts, tables, and photographs are provided.
Distributed Computing Framework for Synthetic Radar Application
NASA Technical Reports Server (NTRS)
Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael
2006-01-01
We are developing an extensible software framework, in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech), and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.
Research computing in a distributed cloud environment
NASA Astrophysics Data System (ADS)
Fransham, K.; Agarwal, A.; Armstrong, P.; Bishop, A.; Charbonneau, A.; Desmarais, R.; Hill, N.; Gable, I.; Gaudet, S.; Goliath, S.; Impey, R.; Leavett-Brown, C.; Ouellete, J.; Paterson, M.; Pritchet, C.; Penfold-Brown, D.; Podaima, W.; Schade, D.; Sobie, R. J.
2010-11-01
The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.
NASA Technical Reports Server (NTRS)
Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)
1990-01-01
This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic load balancing. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks; each second input/output interface includes a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, giving each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, giving each computer the ability to establish a communications link with another of the computers, bypassing the remainder. Each computer is controlled by a resident copy of a common operating system. Communication between computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers; the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of those functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
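The split-token mechanism described in this record can be illustrated with a minimal sketch: a small "moving" portion travels between computers and carries a pointer to a "resident" portion held in one node's memory. All class names, fields, and the toy function are illustrative assumptions, not drawn from the patent.

```python
from dataclasses import dataclass

@dataclass
class ResidentPortion:
    node_id: int    # computer whose memory holds the data
    data: list      # operands for the function to execute

@dataclass
class MovingPortion:
    function: str       # function to be executed
    resident_node: int  # where the resident portion lives
    resident_key: str   # lookup key within that node's memory

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.memory = {}  # resident portions keyed by string

    def store(self, key, data):
        """Keep the bulky data locally; return the small moving token."""
        self.memory[key] = ResidentPortion(self.node_id, data)
        return MovingPortion("sum", self.node_id, key)

    def execute(self, token, cluster):
        """Dereference the moving portion to reach the resident data."""
        home = cluster[token.resident_node]
        resident = home.memory[token.resident_key]
        if token.function == "sum":
            return sum(resident.data)
        raise ValueError(token.function)

cluster = {0: Node(0), 1: Node(1)}
token = cluster[0].store("job-1", [1, 2, 3])
# Any node can execute the token because its location info travels with it.
result = cluster[1].execute(token, cluster)
print(result)  # 6
```

The point of the split is that only the lightweight moving portion crosses the network repeatedly; the data stays put until a computer actually needs it.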
Distributed Storage Systems for Data Intensive Computing
Vazhkudai, Sudharshan S; Butt, Ali R; Ma, Xiaosong
2012-01-01
In this chapter, the authors present an overview of the utility of distributed storage systems in supporting modern applications that are increasingly becoming data intensive. Their coverage of distributed storage systems is based on the requirements imposed by data intensive computing and not a mere summary of storage systems. To this end, they delve into several aspects of supporting data-intensive analysis, such as data staging, offloading, checkpointing, and end-user access to terabytes of data, and illustrate the use of novel techniques and methodologies for realizing distributed storage systems therein. The data deluge from scientific experiments, observations, and simulations is affecting all of the aforementioned day-to-day operations in data-intensive computing. Modern distributed storage systems employ techniques that can help improve application performance, alleviate I/O bandwidth bottleneck, mask failures, and improve data availability. They present key guiding principles involved in the construction of such storage systems, associated tradeoffs, design, and architecture, all with an eye toward addressing challenges of data-intensive scientific applications. They highlight the concepts involved using several case studies of state-of-the-art storage systems that are currently available in the data-intensive computing landscape.
Fluid dynamics computer programs for NERVA turbopump
NASA Technical Reports Server (NTRS)
Brunner, J. J.
1972-01-01
During the design of the NERVA turbopump, numerous computer programs were developed for the analyses of fluid dynamic problems within the machine. Program descriptions, example cases, users instructions, and listings for the majority of these programs are presented.
Comparing the Performance of Two Dynamic Load Distribution Methods
NASA Technical Reports Server (NTRS)
Kale, L. V.
1987-01-01
Parallel processing of symbolic computations on a message-passing multiprocessor presents one challenge: to effectively utilize the available processors, the load must be distributed uniformly to all the processors. However, the structure of these computations cannot be predicted in advance, so static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods through extensive simulation studies. The two schemes are the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that although simpler, the CWN is significantly more effective at distributing the work than the Gradient Model.
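The neighborhood-based idea behind a scheme like CWN can be sketched in a few lines: an overloaded node hands new work to the least-loaded node in its immediate neighborhood instead of consulting global state. The ring topology, threshold, and one-step contraction below are illustrative simplifications, not the paper's actual algorithm (which contracts work iteratively).

```python
def make_ring(n):
    """Ring topology: each node's neighbors are the two adjacent nodes."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def contract_within_neighborhood(loads, neighbors, origin, threshold=2):
    """Place a new unit of work on `origin`, or push it to the lightest
    neighbor if origin's load exceeds that neighbor's by `threshold`."""
    target = min(neighbors[origin], key=lambda j: loads[j])
    if loads[origin] - loads[target] >= threshold:
        loads[target] += 1
        return target
    loads[origin] += 1
    return origin

loads = {i: 0 for i in range(8)}
ring = make_ring(8)
# All 100 tasks originate at node 0; its neighbors absorb the overflow.
for _ in range(100):
    contract_within_neighborhood(loads, ring, origin=0)
print(loads)
```

Even this one-step version spreads the hotspot's work across its neighborhood using only local load information, which is the property the simulation studies in the paper measure at scale.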
A Computational Fluid Dynamics Algorithm on a Massively Parallel Computer
NASA Technical Reports Server (NTRS)
Jespersen, Dennis C.; Levit, Creon
1989-01-01
The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. We investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. We find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as today's most powerful conventional supercomputers.
Distributed computation of supremal conditionally controllable sublanguages
NASA Astrophysics Data System (ADS)
Komenda, Jan; Masopust, Tomáš
2016-02-01
In this paper, we further develop the coordination control framework for discrete-event systems with both complete and partial observations. First, a weaker sufficient condition for the computation of the supremal conditionally controllable sublanguage and conditionally normal sublanguage is presented. Then we show that this condition can be imposed by synthesising a-posteriori supervisors. The paper further generalises the previous study by considering general, non-prefix-closed languages. Moreover, we prove that for prefix-closed languages the supremal conditionally controllable sublanguage and conditionally normal sublanguage can always be computed in a distributed way, without any of the restrictive conditions we have used in the past.
Open Source Live Distributions for Computer Forensics
NASA Astrophysics Data System (ADS)
Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele
Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment), that contains a collection of tools wrapped up in a user-friendly environment. The CAINE forensic framework introduces important novel features aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that guides digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.
Computational Methods for Dynamic Stability and Control Derivatives
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.
2003-01-01
Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.
Computational Methods for Dynamic Stability and Control Derivatives
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.
2004-01-01
Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.
Subtlenoise: sonification of distributed computing operations
NASA Astrophysics Data System (ADS)
Love, P. A.
2015-12-01
The operation of distributed computing systems requires comprehensive monitoring to ensure reliability and robustness. There are two components found in most monitoring systems: one being visually rich time-series graphs and another being notification systems for alerting operators under certain pre-defined conditions. In this paper the sonification of monitoring messages is explored using an architecture that fits easily within existing infrastructures, based on mature open-source technologies such as ZeroMQ, Logstash, and SuperCollider (a synthesis engine). Message attributes are mapped onto audio attributes based on a broad classification of the message (continuous or discrete metrics) while keeping the audio stream subtle in nature. The benefits of audio rendering are described in the context of distributed computing operations and may provide a less intrusive way to understand the operational health of these systems.
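The message-to-audio mapping this record describes can be sketched as a small translation function: continuous metrics map smoothly onto pitch, while discrete events get fixed tones, both kept quiet so the stream stays subtle. The attribute names, ranges, and tone choices are assumptions for illustration, not taken from the Subtlenoise implementation.

```python
def scale(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] into [out_lo, out_hi], clamping."""
    frac = (min(max(value, lo), hi) - lo) / (hi - lo)
    return out_lo + frac * (out_hi - out_lo)

def sonify(message):
    """Translate one monitoring message into audio parameters."""
    if message["kind"] == "continuous":
        # e.g. a transfer-rate metric: higher rate -> higher pitch,
        # low amplitude so the background stream stays unobtrusive.
        return {
            "freq_hz": scale(message["value"], 0.0, 100.0, 220.0, 880.0),
            "amp": 0.1,
        }
    # Discrete events (e.g. a job failure) get fixed, distinct tones.
    tones = {"job_ok": 440.0, "job_failed": 110.0}
    return {"freq_hz": tones.get(message["event"], 330.0), "amp": 0.2}

print(sonify({"kind": "continuous", "value": 50.0}))
# midpoint of the metric range maps to 550.0 Hz
```

In a full pipeline these parameter dicts would be forwarded over a message bus to the synthesis engine; the mapping layer itself stays this simple.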
COLD-SAT Dynamic Model Computer Code
NASA Technical Reports Server (NTRS)
Bollenbacher, G.; Adams, N. S.
1995-01-01
COLD-SAT Dynamic Model (CSDM) computer code implements six-degree-of-freedom, rigid-body mathematical model for simulation of spacecraft in orbit around Earth. Investigates flow dynamics and thermodynamics of subcritical cryogenic fluids in microgravity. Consists of three parts: translation model, rotation model, and slosh model. Written in FORTRAN 77.
Radar data processing using a distributed computational system
NASA Astrophysics Data System (ADS)
Mota, Gilberto F.
1992-06-01
This research specifies and validates a new concurrent decomposition scheme, called Confined Space Search Decomposition (CSSD), to exploit parallelism of Radar Data Processing algorithms using a Distributed Computational System. To formalize the specification, we propose and apply an object-oriented methodology called Decomposition Cost Evaluation Model (DCEM). To reduce the penalties of load imbalance, we propose a distributed dynamic load balance heuristic called Object Reincarnation (OR). To validate the research, we first compare our decomposition with an identified alternative using the proposed DCEM model and then develop a theoretical prediction of selected parameters. We also develop a simulation to check the Object Reincarnation Concept.
Aono, Masashi; Gunji, Yukio-Pegio
2003-10-01
The emergence derived from errors is of key importance for both novel computing and novel usage of the computer. In this paper, we propose an implementable experimental plan for biological computing so as to elicit the emergent property of complex systems. An individual plasmodium of the true slime mold Physarum polycephalum acts as the slime mold computer. Modifying the Elementary Cellular Automaton so that it entails the global synchronization problem of parallel computing provides the NP-complete problem solved by the slime mold computer. The possibility of solving the problem by giving neither all possible results nor an explicit prescription of solution-seeking is discussed. In slime mold computing, the distributivity in the local computing logic can change dynamically, and its parallel non-distributed computing cannot be reduced to the spatial addition of multiple serial computings. The computing system based on exhaustive absence of the super-system may produce something more than filling the vacancy. PMID:14563567
Fast Parallel Computation Of Manipulator Inverse Dynamics
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1991-01-01
Method for fast parallel computation of inverse dynamics problem, essential for real-time dynamic control and simulation of robot manipulators, undergoing development. Enables exploitation of a high degree of parallelism and achievement of significant computational efficiency, while minimizing various communication and synchronization overheads as well as the complexity of the required computer architecture. Universal real-time robotic controller and simulator (URRCS) consists of internal host processor and several SIMD processors with ring topology. Architecture modular and expandable: more SIMD processors added to match size of problem. Operate asynchronously and in MIMD fashion.
Computational strategies for three-dimensional flow simulations on distributed computer systems
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Weed, Richard A.
1995-01-01
This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
Computational Physics and Evolutionary Dynamics
NASA Astrophysics Data System (ADS)
Fontana, Walter
2000-03-01
One aspect of computational physics deals with the characterization of statistical regularities in materials. Computational physics meets biology when these materials can evolve. RNA molecules are a case in point. The folding of RNA sequences into secondary structures (shapes) inspires a simple biophysically grounded genotype-phenotype map that can be explored computationally and in the laboratory. We have identified some statistical regularities of this map and begin to understand their evolutionary consequences. (1) "typical shapes": Only a small subset of shapes realized by the RNA folding map is typical, in the sense of containing shapes that are realized significantly more often than others. Consequence: evolutionary histories mostly involve typical shapes, and thus exhibit generic properties. (2) "neutral networks": Sequences folding into the same shape are mutationally connected into a network that reaches across sequence space. Consequence: Evolutionary transitions between shapes reflect the fraction of boundary shared by the corresponding neutral networks in sequence space. The notion of a (dis)continuous transition can be made rigorous. (3) "shape space covering": Given a random sequence, a modest number of mutations suffices to reach a sequence realizing any typical shape. Consequence: The effective search space for evolutionary optimization is greatly reduced, and adaptive success is less dependent on initial conditions. (4) "plasticity mirrors variability": The repertoire of low energy shapes of a sequence is an indicator of how much and in which ways its energetically optimal shape can be altered by a single point mutation. Consequence: (i) Thermodynamic shape stability and mutational robustness are intimately linked. (ii) When natural selection favors the increase of stability, extreme mutational robustness -- to the point of an evolutionary dead-end -- is produced as a side effect. (iii) The hallmark of robust shapes is modularity.
The future of PanDA in ATLAS distributed computing
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.
2015-12-01
Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.
Computational fluid dynamics - The coming revolution
NASA Technical Reports Server (NTRS)
Graves, R. A., Jr.
1982-01-01
The development of aerodynamic theory is traced from the days of Aristotle to the present, with the next stage in computational fluid dynamics dependent on superspeed computers for flow calculations. Additional attention is given to the history of numerical methods inherent in writing computer codes applicable to viscous and inviscid analyses for complex configurations. The advent of the superconducting Josephson junction is noted to place configurational demands on computer design to avoid limitations imposed by the speed of light, and a Japanese projection of a computer capable of several hundred billion operations/sec is mentioned. The NASA Numerical Aerodynamic Simulator is described, showing capabilities of a billion operations/sec with a memory of 240 million words using existing technology. Near-term advances in fluid dynamics are discussed.
Single neuron dynamics and computation.
Brunel, Nicolas; Hakim, Vincent; Richardson, Magnus J E
2014-04-01
At the single neuron level, information processing involves the transformation of input spike trains into an appropriate output spike train. Building upon the classical view of a neuron as a threshold device, models have been developed in recent years that take into account the diverse electrophysiological make-up of neurons and accurately describe their input-output relations. Here, we review these recent advances and survey the computational roles that they have uncovered for various electrophysiological properties, for dendritic arbor anatomy as well as for short-term synaptic plasticity. PMID:24492069
Distributed neural computations for embedded sensor networks
NASA Astrophysics Data System (ADS)
Peckens, Courtney A.; Lynch, Jerome P.; Pei, Jin-Song
2011-04-01
Wireless sensing technologies have recently emerged as an inexpensive and robust method of data collection in a variety of structural monitoring applications. In comparison with cabled monitoring systems, wireless systems offer low-cost and low-power communication between a network of sensing devices. Wireless sensing networks possess embedded data processing capabilities which allow for data processing directly at the sensor, thereby eliminating the need for the transmission of raw data. In this study, the Volterra/Wiener neural network (VWNN), a powerful modeling tool for nonlinear hysteretic behavior, is decentralized for embedment in a network of wireless sensors so as to take advantage of each sensor's processing capabilities. The VWNN was chosen for modeling nonlinear dynamic systems because its architecture is computationally efficient and allows computational tasks to be decomposed for parallel execution. In the algorithm, each sensor collects its own data and performs a series of calculations. It then shares its resulting calculations with every other sensor in the network, while the other sensors are simultaneously exchanging their information. Because resource conservation is important in embedded sensor design, the data is pruned wherever possible to eliminate excessive communication between sensors. Once a sensor has its required data, it continues its calculations and computes a prediction of the system acceleration. The VWNN is embedded in the computational core of the Narada wireless sensor node for on-line execution. Data generated by a steel framed structure excited by seismic ground motions is used for validation of the embedded VWNN model.
Distributed Computing for the Pierre Auger Observatory
NASA Astrophysics Data System (ADS)
Chudoba, J.
2015-12-01
The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been successfully used. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years VO auger has belonged to the top ten of EGI users by total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with small and less reliable sites used for the bulk production. The new system also has the ability to use available resources in clouds. The Dirac File Catalog replaced LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we give a comparison of the old and new production systems and report the experience of migrating to the new system.
Three-Dimensional Computational Fluid Dynamics
Haworth, D.C.; O'Rourke, P.J.; Ranganathan, R.
1998-09-01
Computational fluid dynamics (CFD) is one discipline falling under the broad heading of computer-aided engineering (CAE). CAE, together with computer-aided design (CAD) and computer-aided manufacturing (CAM), comprise a mathematical-based approach to engineering product and process design, analysis and fabrication. In this overview of CFD for the design engineer, our purposes are three-fold: (1) to define the scope of CFD and motivate its utility for engineering, (2) to provide a basic technical foundation for CFD, and (3) to convey how CFD is incorporated into engineering product and process design.
Computer simulation of microstructural dynamics
Grest, G.S.; Anderson, M.P.; Srolovitz, D.J.
1985-01-01
Since many of the physical properties of materials are determined by their microstructure, it is important to be able to predict and control microstructural development. A number of approaches have been taken to study this problem, but they assume that the grains can be described as spherical or hexagonal and that growth occurs in an average environment. We have developed a new technique to bridge the gap between the atomistic interactions and the macroscopic scale by discretizing the continuum system such that the microstructure retains its topological connectedness, yet is amenable to computer simulations. Using this technique, we have studied grain growth in polycrystalline aggregates. The temporal evolution and grain morphology of our model are in excellent agreement with experimental results for metals and ceramics.
Progress in the dynamical parton distributions
Jimenez-Delgado, Pedro
2012-06-01
The present status of the (JR) dynamical parton distribution functions is reported. Different theoretical improvements, including the determination of the strange sea input distribution, the treatment of correlated errors and the inclusion of alternative data sets, are discussed. Highlights in the ongoing developments as well as (very) preliminary results in the determination of the strong coupling constant are presented.
Pseudo-interactive monitoring in distributed computing
Sfiligoi, I.; Bradley, D.; Livny, M. (Wisconsin U., Madison)
2009-05-01
Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.
Dynamic void distribution in myoglobin and five mutants.
Jiang, Yingying; Kirmizialtin, Serdal; Sanchez, Isaac C
2014-01-01
Globular proteins contain cavities/voids that play specific roles in controlling protein function. Elongated cavities provide migration channels for the transport of ions and small molecules to the active center of a protein or enzyme. Using Monte Carlo and Molecular Dynamics on fully atomistic protein/water models, a new computational methodology is introduced that takes into account the protein's dynamic structure and maps all the cavities in and on the surface. To demonstrate its utility, the methodology is applied to study cavity structure in myoglobin and five of its mutants. Computed cavity and channel size distributions reveal significant differences relative to the wild type myoglobin. Computer visualization of the channels leading to the heme center indicates restricted ligand access for the mutants consistent with the existing interpretations. The new methodology provides a quantitative measure of cavity structure and distributions and can become a valuable tool for the structural characterization of proteins. PMID:24500195
Interoperable PKI Data Distribution in Computational Grids
Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.; Smith, Sean W.
2008-07-25
One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that inter-operate across domain boundaries. Public Key Infrastructures (PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources, particularly when the Certification Authority (CA) is alien to the relying party. This article shows how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).
Airport Simulations Using Distributed Computational Resources
NASA Technical Reports Server (NTRS)
McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Daniel (Technical Monitor)
2002-01-01
The Virtual National Airspace Simulation (VNAS) will improve the safety of Air Transportation. In 2001, using simulation and information management software running over a distributed network of super-computers, researchers at NASA Ames, Glenn, and Langley Research Centers developed a working prototype of a virtual airspace. This VNAS prototype modeled daily operations of the Atlanta airport by integrating measured operational data and simulation data on up to 2,000 flights a day. The concepts and architecture developed by NASA for this prototype are integral to the National Airspace Simulation, supporting the development of strategies for improving aviation safety and identifying precursors to component failure.
Technology Transfer Automated Retrieval System (TEKTRAN)
AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic and water quality (H/WQ) simulation components under the Java Connection Framework (JCF) and the Object Modeling System (OMS) environmental modeling framework. AgES-W is implicitly scala...
Computational fluid dynamics - A personal view
NASA Technical Reports Server (NTRS)
Hussaini, M. Y.
1989-01-01
This paper provides a personal view of computational fluid dynamics. The main theme is divided into two categories - one dealing with algorithms and engineering applications and the other with scientific investigations. The former category may be termed computational aerodynamics, with the objective of providing reliable aerodynamic or engineering predictions. The latter category is essentially basic research, where the algorithmic tools are used to unravel and elucidate fluid-dynamic phenomena hard to obtain in a laboratory. A critique of the numerical solution techniques for both compressible and incompressible flows is included. The discussion on scientific investigations deals in particular with transition and turbulence.
Fast Parallel Computation Of Multibody Dynamics
NASA Technical Reports Server (NTRS)
Fijany, Amir; Kwan, Gregory L.; Bagherzadeh, Nader
1996-01-01
The constraint-force algorithm is a fast, efficient, parallel-computation algorithm for solving the forward dynamics problem of a multibody system such as a robot arm or vehicle. It solves the problem in minimum time proportional to log(N) by use of an optimal number of processors proportional to N, where N is the number of dynamical degrees of freedom; in this sense, the constraint-force algorithm is both a time-optimal and a processor-optimal parallel-processing algorithm.
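The log(N)-time, N-processor bound quoted above rests on recursive-doubling combining patterns such as the parallel prefix (scan) operation. The sketch below is illustrative Python, not the constraint-force algorithm itself: it simulates the scan on a single machine to show why ceil(log2 N) combining rounds suffice when one processor is available per element.

```python
def recursive_doubling_prefix(values):
    """Log-depth prefix sum via recursive doubling (Hillis-Steele scan).

    Illustrative only: this shows the recursive-doubling pattern that
    lets a chain of N dependent quantities be combined in O(log N)
    parallel steps with one processor per element, the same style of
    bound claimed for the constraint-force algorithm.
    """
    x = list(values)
    n = len(x)
    stride = 1
    rounds = 0
    while stride < n:
        # Every update in a round reads the *old* array, so all n
        # updates are independent and would execute as one parallel step.
        x = [x[i] + x[i - stride] if i >= stride else x[i]
             for i in range(n)]
        stride *= 2
        rounds += 1
    return x, rounds
```

For N = 8 elements the scan completes in 3 rounds, i.e. log2(8), rather than the 7 sequential additions a naive chain would need.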
Distributed Data Mining using a Public Resource Computing Framework
NASA Astrophysics Data System (ADS)
Cesario, Eugenio; de Caria, Nicola; Mastroianni, Carlo; Talia, Domenico
The public resource computing paradigm is often used as a successful and low cost mechanism for the management of several classes of scientific and commercial applications that require the execution of a large number of independent tasks. Public computing frameworks, also known as “Desktop Grids”, exploit the computational power and storage facilities of private computers, or “workers”. Despite the inherent decentralized nature of the applications to which they are devoted, these systems often adopt a centralized mechanism for the assignment of jobs and distribution of input data, as is the case for BOINC, the most popular framework in this realm. We present a decentralized framework that aims at increasing the flexibility and robustness of public computing applications, thanks to two basic features: (i) the adoption of a P2P protocol for dynamically matching the job specifications with the worker characteristics, without relying on centralized resources; (ii) the use of distributed cache servers for an efficient dissemination and reutilization of data files. This framework is exploitable for a wide set of applications. In this work, we describe how a Java prototype of the framework was used to tackle the problem of mining frequent itemsets from a transactional dataset, and show some preliminary yet interesting performance results that demonstrate the efficiency improvements that can derive from the presented architecture.
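The mining task distributed across the workers above, frequent itemsets from a transactional dataset, is classically solved with Apriori-style level-wise search. A minimal single-machine sketch (Python rather than the paper's Java prototype; the function name and structure are my own):

```python
def apriori(transactions, min_support):
    """Minimal Apriori frequent-itemset miner (illustrative sketch).

    transactions: list of sets of items.
    min_support: absolute support count threshold.
    Returns {frozenset(itemset): support_count} for all frequent itemsets.
    """
    def count(candidates):
        # Count how many transactions contain each candidate itemset,
        # keeping only those meeting the support threshold.
        counts = {c: 0 for c in candidates}
        for t in transactions:
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        return {c: n for c, n in counts.items() if n >= min_support}

    items = {frozenset([i]) for t in transactions for i in t}
    frequent = count(items)
    result = dict(frequent)
    k = 2
    while frequent:
        # Level-wise candidate generation: join frequent (k-1)-itemsets.
        prev = list(frequent)
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        frequent = count(candidates)
        result.update(frequent)
        k += 1
    return result
```

In the decentralized setting described in the abstract, each worker would count candidates over its partition of the transactions, with the cache servers disseminating the dataset partitions.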
Dynamic object management for distributed data structures
NASA Technical Reports Server (NTRS)
Totty, Brian K.; Reed, Daniel A.
1992-01-01
In distributed-memory multiprocessors, remote memory accesses incur larger delays than local accesses. Hence, insightful allocation and access of distributed data can yield substantial performance gains. The authors argue for the use of dynamic data management policies encapsulated within individual distributed data structures. Distributed data structures offer performance, flexibility, abstraction, and system independence. This approach is supported by data from a trace-driven simulation study of parallel scientific benchmarks. Experimental data on memory locality, message count, message volume, and communication delay suggest that data-structure-specific data management is superior to a single, system-imposed policy.
Visualization of unsteady computational fluid dynamics
NASA Technical Reports Server (NTRS)
Haimes, Robert
1994-01-01
A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamic (CFD) results is presented. This environment requires a supercomputer-class machine; massively parallel processors (MPPs) and clusters of workstations acting as a single MPP (by concurrently working on the same task) provide the required computational bandwidth for CFD calculations of transient problems. The cluster of reduced instruction set computer (RISC) workstations is a recent development driven by the low cost and high performance that workstation vendors provide. The cluster, with the proper software, can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3 revision 1.00, make up the bulk of this report.
Graphics supercomputer for computational fluid dynamics research
NASA Astrophysics Data System (ADS)
Liaw, Goang S.
1994-11-01
The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer (a PC-486 DX2 with a built-in 10-BaseT Ethernet card), a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted into a research computer lab by adding furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.
Visualization of unsteady computational fluid dynamics
NASA Astrophysics Data System (ADS)
Haimes, Robert
1994-11-01
A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamic (CFD) results is presented. This environment requires a supercomputer-class machine; massively parallel processors (MPPs) and clusters of workstations acting as a single MPP (by concurrently working on the same task) provide the required computational bandwidth for CFD calculations of transient problems. The cluster of reduced instruction set computer (RISC) workstations is a recent development driven by the low cost and high performance that workstation vendors provide. The cluster, with the proper software, can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3 revision 1.00, make up the bulk of this report.
Final Report Computational Analysis of Dynamical Systems
Guckenheimer, John
2012-05-08
This is the final report for DOE Grant DE-FG02-93ER25164, initiated in 1993. This grant supported research of John Guckenheimer on computational analysis of dynamical systems. During that period, seventeen individuals received PhD degrees under the supervision of Guckenheimer and over fifty publications related to the grant were produced. This document contains copies of these publications.
Computational fluid dynamics in oil burner design
Butcher, T.A.
1997-09-01
In Computational Fluid Dynamics, the differential equations that describe flow, heat transfer, and mass transfer are solved approximately using a laborious numerical procedure. Flows of practical interest in burner design are always turbulent, adding the further complexity of requiring a turbulence model. This paper presents a model for burner design.
From Cnn Dynamics to Cellular Wave Computers
NASA Astrophysics Data System (ADS)
Roska, Tamas
2013-01-01
Embedded in a historical overview, the development of the Cellular Wave Computing paradigm is presented, starting from the standard CNN dynamics. The theoretical aspects, the physical implementation, the innovation process, as well as the biological relevance are discussed in detail. Finally, the latest developments, the physical versus virtual cellular machines, as well as some open questions are presented.
Distributed Computing Software Building-Blocks for Ubiquitous Computing Societies
NASA Astrophysics Data System (ADS)
Kim, K. H. (Kane)
The steady approach of advanced nations toward realization of ubiquitous computing societies has given birth to rapidly growing demands for new-generation distributed computing (DC) applications. Consequently, economic and reliable construction of new-generation DC applications is currently a major issue faced by the software technology research community. What is needed is a new-generation DC software engineering technology which is at least multiple times more effective in constructing new-generation DC applications than the currently practiced technologies are. In particular, this author believes that a new-generation building-block (BB), which is much more advanced than the current-generation DC object that is a small extension of the object model embedded in languages C++, Java, and C#, is needed. Such a BB should enable systematic and economic construction of DC applications that are capable of taking critical actions with 100-microsecond-level or even 10-microsecond-level timing accuracy, fault tolerance, and security enforcement while being easily expandable and taking advantage of all sorts of network connectivity. Some directions considered worth pursuing for finding such BBs are discussed.
Parallel computation of manipulator inverse dynamics
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1991-01-01
In this article, parallel computation of manipulator inverse dynamics is investigated. A hierarchical graph-based mapping approach is devised to analyze the inherent parallelism in the Newton-Euler formulation at several computational levels, and to derive the features of an abstract architecture for exploitation of parallelism. At each level, a parallel algorithm represents the application of a parallel model of computation that transforms the computation into a graph whose structure defines the features of an abstract architecture, i.e., number of processors, communication structure, etc. Data-flow analysis is employed to derive the time lower bound in the computation as well as the sequencing of the abstract architecture. The features of the target architecture are defined by optimization of the abstract architecture to exploit maximum parallelism while minimizing architectural complexity. An architecture is designed and implemented that is capable of efficient exploitation of parallelism at several computational levels. The computation time of the Newton-Euler formulation for a 6-degree-of-freedom (dof) general manipulator is measured as 187 microsec. The increase in computation time for each additional dof is 23 microsec, which leads to a computation time of less than 500 microsec, even for a 12-dof redundant arm.
Optimal dynamic remapping of parallel computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Reynolds, Paul F., Jr.
1987-01-01
A large class of computations is characterized by a sequence of phases, with phase changes occurring unpredictably. The decision problem was considered regarding the remapping of workload to processors in a parallel computation when the utility of remapping and the future behavior of the workload are uncertain; phases exhibit stable execution requirements during a given phase, but requirements may change radically between phases. For these problems a workload assignment generated for one phase may hinder performance during the next phase. This problem is treated formally for a probabilistic model of computation with at most two phases. The fundamental problem of balancing the expected remapping performance gain against the delay cost was addressed. Stochastic dynamic programming is used to show that the remapping decision policy minimizing the expected running time of the computation has an extremely simple structure. Because the gain may not be predictable, the performance of a heuristic policy that does not require estimation of the gain is examined. The heuristic method's feasibility is demonstrated by its use on an adaptive fluid dynamics code on a multiprocessor. The results suggest that, except in extreme cases, the remapping decision problem is essentially that of dynamically determining whether gain can be achieved by remapping after a phase change. The results also suggest that this heuristic is applicable to computations with more than two phases.
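The core trade-off in the abstract, amortized per-step gain from a better mapping versus the one-time delay cost of remapping, can be sketched as a simple decision rule. This is an illustrative reconstruction, not the paper's actual policy; all names and the imbalance measure are assumptions.

```python
def should_remap(step_times, remap_cost, expected_steps_remaining):
    """Heuristic remap decision (illustrative sketch).

    step_times: per-processor time for one step under the current mapping.
    remap_cost: one-time delay incurred by remapping.
    expected_steps_remaining: steps expected before the next phase change.

    Remap when the per-step penalty of the current mapping (the gap
    between the slowest processor and the average, i.e. the time an
    ideally balanced mapping could recover) amortized over the remaining
    steps exceeds the one-time remapping cost.
    """
    avg = sum(step_times) / len(step_times)
    per_step_penalty = max(step_times) - avg
    return per_step_penalty * expected_steps_remaining > remap_cost
```

With a slow outlier processor and many steps remaining, remapping pays; near the end of a phase the same imbalance is not worth the delay.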
An Applet-based Anonymous Distributed Computing System.
ERIC Educational Resources Information Center
Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael
2001-01-01
Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)
Absolute nonlocality via distributed computing without communication
NASA Astrophysics Data System (ADS)
Czekaj, Ł.; Pawłowski, M.; Vértesi, T.; Grudka, A.; Horodecki, M.; Horodecki, R.
2015-09-01
Understanding the role that quantum entanglement plays as a resource in various information processing tasks is one of the crucial goals of quantum information theory. Here we propose an alternative perspective for studying quantum entanglement: distributed computation of functions without communication between nodes. To formalize this approach, we propose identity games. Surprisingly, despite no signaling, we obtain that nonlocal quantum strategies beat classical ones in terms of winning probability for identity games originating from certain bipartite and multipartite functions. Moreover we show that, for a majority of functions, access to general nonsignaling resources doubles the success probability in comparison to classical ones for a large enough number of outputs. Because there are no constraints on the inputs and no processing of the outputs in the identity games, they detect very strong types of correlations: absolute nonlocality.
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
Mesh generation has long been recognized as a bottleneck in the CFD process. While much research on automating the volume mesh generation process has been relatively successful, these methods rely on an appropriate initial surface triangulation to work properly. Surface discretization has been one of the least automated steps in computational simulation due to its dependence on implicitly defined CAD surfaces and curves. Differences between CAD geometry engines manifest themselves in discrepancies in their interpretation of the same entities. This lack of "good" geometry causes significant problems for mesh generators, requiring users to "repair" the CAD geometry before mesh generation. The problem is exacerbated when CAD geometry is translated to other forms (e.g., IGES) which do not include important topological and construction information in addition to entity geometry. One technique to avoid these problems is to access the CAD geometry directly from the mesh generating software, rather than through files. By accessing the geometry model (not a discretized version) in its native environment, this approach avoids translation to a format which can deplete the model of topological information. Our approach to enabling models developed in the Denali software environment to directly access CAD geometry and functions is through an Application Programming Interface (API) known as CAPRI. CAPRI provides a layer of indirection through which CAD-specific data may be accessed by an application program using CAD-system-neutral C and FORTRAN language function calls. CAPRI supports a general set of CAD operations such as truth testing, geometry construction, and entity queries.
LHCbDirac: distributed computing in LHCb
NASA Astrophysics Data System (ADS)
Stagni, F.; Charpentier, P.; Graciani, R.; Tsaregorodtsev, A.; Closier, J.; Mathe, Z.; Ubeda, M.; Zhelezov, A.; Lanciotti, E.; Romanovskiy, V.; Ciba, K. D.; Casajus, A.; Roiser, S.; Sapunov, M.; Remenska, D.; Bernardoff, V.; Santana, R.; Nandakumar, R.
2012-12-01
We present LHCbDirac, an extension of the DIRAC community Grid solution that handles LHCb specificities. The DIRAC software was developed for many years within LHCb only. Nowadays it is generic software, used by many scientific communities worldwide. Each community wanting to take advantage of DIRAC has to develop an extension, containing all the necessary code for handling their specific cases. LHCbDirac is an actively developed extension, implementing the LHCb computing model and workflows, handling all the distributed computing activities of LHCb. Such activities include real data processing (reconstruction, stripping and streaming), Monte-Carlo simulation and data replication. Other activities are group and user analysis, data management, resources management and monitoring, data provenance, and accounting for user and production jobs. LHCbDirac also provides extensions of the DIRAC interfaces, including a secure web client, python APIs and CLIs. Before putting a new release into production, a number of certification tests are run in a dedicated setup. This contribution highlights the versatility of the system, also presenting the experience with real data processing, data and resources management, and monitoring for activities and resources.
Automating usability of ATLAS Distributed Computing resources
NASA Astrophysics Data System (ADS)
Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration
2014-06-01
The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time-granularity and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we describe SAAB's working principles and features. We also present the decrease of human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
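The idea of inferring a blacklisting decision from a history of monitoring test outcomes can be sketched as a sliding-window rule. This is not the actual SAAB algorithm; the class, window size, and threshold are all illustrative assumptions.

```python
from collections import deque

class AutoBlacklist:
    """Illustrative sketch of history-based automatic blacklisting.

    Keeps a sliding window of recent test outcomes per resource and
    blacklists a resource when too many recent tests failed; the
    resource clears automatically as fresh passing tests roll the
    failures out of the window.
    """
    def __init__(self, window=10, max_failures=5):
        self.window = window
        self.max_failures = max_failures
        self.history = {}  # resource -> deque of booleans (True = pass)

    def record(self, resource, passed):
        h = self.history.setdefault(resource, deque(maxlen=self.window))
        h.append(passed)

    def blacklisted(self, resource):
        h = self.history.get(resource, ())
        return sum(1 for ok in h if not ok) >= self.max_failures
```

The automatic clearing is the key operational property: once a site recovers, no human action is needed to restore it.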
Dynamic data distributions in Vienna Fortran
NASA Technical Reports Server (NTRS)
Chapman, Barbara; Mehrotra, Piyush; Moritsch, Hans; Zima, Hans
1993-01-01
Vienna Fortran is a machine-independent language extension of Fortran, which is based upon the Single-Program-Multiple-Data (SPMD) paradigm and allows the user to write programs for distributed-memory systems using global addresses. The language features focus mainly on the issue of distributing data across virtual processor structures. Those features of Vienna Fortran that allow the data distributions of arrays to change dynamically, depending on runtime conditions, are discussed. The relevant language features are discussed, their implementation is outlined, and how they may be used in applications is described.
Dynamic Singularity Spectrum Distribution of Sea Clutter
NASA Astrophysics Data System (ADS)
Xiong, Gang; Yu, Wenxian; Zhang, Shuning
2015-12-01
Fractal and multifractal theory have provided new approaches for radar signal processing and target detection against an ocean background. However, the related research mainly focuses on the fractal dimension or multifractal spectrum (MFS) of sea clutter. In this paper, a new dynamic singularity analysis method for sea clutter using the MFS distribution is developed, based on moving detrending analysis (DMA-MFSD). Theoretically, we introduce the time information by using the cyclic auto-correlation of sea clutter. For the transient correlation series, the instantaneous singularity spectrum based on the multifractal detrending moving analysis (MF-DMA) algorithm is calculated, and the dynamic singularity spectrum distribution of sea clutter is acquired. In addition, we analyze the time-varying singularity exponent ranges and the maximum position function in DMA-MFSD of sea clutter. For real sea clutter data in level III sea state, we analyze the dynamic singularity spectrum distribution and conclude that radar sea clutter has non-stationary, time-varying scale characteristics and exhibits a time-varying singularity spectrum distribution under the proposed DMA-MFSD method. The DMA-MFSD method will also provide a reference for nonlinear dynamics and multifractal signal processing.
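The detrending-moving-average building block underlying the method above can be sketched at a single scale. This is a simplified, q = 2-only illustration of the first step of MF-DMA, not the paper's DMA-MFSD algorithm; the function name and boundary handling are my own assumptions.

```python
import numpy as np

def dma_fluctuation(x, window):
    """Detrending moving-average (DMA) fluctuation at one scale.

    Simplified sketch: integrate the mean-removed series, subtract a
    centered moving average of length `window` as the local trend, and
    return the RMS of the residual. Repeating this over many scales n
    and fitting F(n) ~ n^H yields the scaling (Hurst) exponent; the
    multifractal version raises the residuals to varying powers q.
    """
    y = np.cumsum(x - np.mean(x))          # integrated profile
    kernel = np.ones(window) / window
    trend = np.convolve(y, kernel, mode="same")  # moving-average trend
    resid = y - trend
    return np.sqrt(np.mean(resid ** 2))
```

For an uncorrelated series the residual grows with the window size (roughly as window^0.5), which is what the scale-by-scale fit exploits.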
The brain dynamics of linguistic computation
Murphy, Elliot
2015-01-01
Neural oscillations at distinct frequencies are increasingly being related to a number of basic and higher cognitive faculties. Oscillations enable the construction of coherently organized neuronal assemblies through establishing transitory temporal correlations. By exploring the elementary operations of the language faculty—labeling, concatenation, cyclic transfer—alongside neural dynamics, a new model of linguistic computation is proposed. It is argued that the universality of language, and the true biological source of Universal Grammar, is not to be found purely in the genome as has long been suggested, but more specifically within the extraordinarily preserved nature of mammalian brain rhythms employed in the computation of linguistic structures. Computational-representational theories are used as a guide in investigating the neurobiological foundations of the human “cognome”—the set of computations performed by the nervous system—and new directions are suggested for how the dynamics of the brain (the “dynome”) operate and execute linguistic operations. The extent to which brain rhythms are the suitable neuronal processes which can capture the computational properties of the human language faculty is considered against a backdrop of existing cartographic research into the localization of linguistic interpretation. Particular focus is placed on labeling, the operation elsewhere argued to be species-specific. A Basic Label model of the human cognome-dynome is proposed, leading to clear, causally-addressable empirical predictions, to be investigated by a suggested research program, Dynamic Cognomics. In addition, a distinction between minimal and maximal degrees of explanation is introduced to differentiate between the depth of analysis provided by cartographic, rhythmic, neurochemical, and other approaches to computation. PMID:26528201
The Gain of Resource Delegation in Distributed Computing Environments
NASA Astrophysics Data System (ADS)
Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander
In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, loosely coupled networks of autonomously acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, granting idle resources can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups we show the possible gain of this approach and analyze the dynamics of the workload-adaptive reconfiguration behavior.
Visualization of unsteady computational fluid dynamics
NASA Technical Reports Server (NTRS)
Haimes, Robert
1995-01-01
The current computing environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamic (CFD) results is a super-computer class machine. The Massively Parallel Processors (MPP's) such as the 160 node IBM SP2 at NAS and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array) provide the required computation bandwidth for CFD calculations of transient problems. Work is in progress on a set of software tools designed specifically to address visualizing 3D unsteady CFD results in these super-computer-like environments. The visualization is concurrently executed with the CFD solver. The parallel version of Visual3, pV3 required splitting up the unsteady visualization task to allow execution across a network of workstation(s) and compute servers. In this computing model, the network is almost always the bottleneck so much of the effort involved techniques to reduce the size of the data transferred between machines.
Visualization of unsteady computational fluid dynamics
NASA Astrophysics Data System (ADS)
Haimes, Robert
1995-10-01
The current computing environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamic (CFD) results is a super-computer class machine. The Massively Parallel Processors (MPP's) such as the 160 node IBM SP2 at NAS and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array) provide the required computation bandwidth for CFD calculations of transient problems. Work is in progress on a set of software tools designed specifically to address visualizing 3D unsteady CFD results in these super-computer-like environments. The visualization is concurrently executed with the CFD solver. The parallel version of Visual3, pV3 required splitting up the unsteady visualization task to allow execution across a network of workstation(s) and compute servers. In this computing model, the network is almost always the bottleneck so much of the effort involved techniques to reduce the size of the data transferred between machines.
Spectral Methods for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Zang, T. A.; Streett, C. L.; Hussaini, M. Y.
1994-01-01
As a tool for large-scale computations in fluid dynamics, spectral methods were prophesied in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics have broadened considerably. The motivation for the use of spectral
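The transform method credited above to Orszag and to Eliasen, Machenauer, and Rasmussen rests on a simple mechanism: evaluate derivatives in wavenumber space and nonlinear products in physical space, moving between the two with FFTs at O(N log N) cost instead of the O(N^2) convolution. A minimal sketch of the spectral derivative for a periodic function (illustrative code; the function name is my own):

```python
import numpy as np

def spectral_derivative(u):
    """Fourier spectral derivative of samples u on a periodic [0, 2*pi) grid.

    Transform to wavenumber space with an FFT, multiply each mode by
    i*k, and transform back. For smooth periodic functions this is
    accurate to near machine precision with very few points, which is
    the appeal of spectral methods over finite differences.
    """
    n = len(u)
    k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers in FFT order
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
```

Differentiating sin(x) on as few as 32 points reproduces cos(x) to roughly 1e-14, whereas a second-order finite difference on the same grid is only accurate to about 1e-3.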
An Overview of Cloud Computing in Distributed Systems
NASA Astrophysics Data System (ADS)
Divakarla, Usha; Kumari, Geetha
2010-11-01
Cloud computing is an emerging trend in the field of distributed computing, having evolved from grid computing and distributed computing. The cloud plays an important role in large organizations by maintaining very large data sets with limited resources. It also supports resource sharing through virtual machines provided by the cloud service provider. This paper gives an overview of cloud organization and some of the basic security issues pertaining to the cloud.
Fault Diagnosis in a Fully Distributed Local Computer Network.
NASA Astrophysics Data System (ADS)
Kwag, Hye Keun
Local computer networks are being installed in diverse application areas. Many of the networks employ a distributed control scheme, which has advantages in performance and reliability over a centralized one. However, distribution of control increases the difficulty in locating faulty hardware elements. Consequently, advantages may not be fully realized unless measures are taken to account for the difficulties of fault diagnosis; yet, not much work has been done in this area. A hardcore is defined as a node or a part of a node which is fault-free and which can diagnose other elements in a system. Faults are diagnosed in most existing distributed local computer networks by assuming that every node, or a part of every node, is a fixed hardcore: a fixed node or a part of a fixed node is always a hardcore. Maintaining such high reliability may not be possible or cost-effective for some systems. A distributed network contains dynamically redundant elements, and it is reasonable to assume that fewer nodes are simultaneously faulty than are fault-free at any point in the life cycle of the network. A diagnostic model is proposed herein which determines binary evaluation results according to the status of the testing and tested nodes, and which leads the network to dynamically locate a fault-free node (a hardcore). This diagnostic model is, in most cases, simpler to implement and more cost-effective than the fixed hardcore. The selected hardcore can diagnose the other elements and can locate permanent faults. In a hop-by-hop test, the destination node and every intermediate node in a path test the transmitted data. This dissertation presents another method to locate an element with frequent transient faults; it checks data only at the destination, thereby eliminating the need for a hop-by-hop test.
The use of computers for instruction in fluid dynamics
NASA Technical Reports Server (NTRS)
Watson, Val
1987-01-01
Applications for computers which improve instruction in fluid dynamics are examined. Computers can be used to illustrate three-dimensional flow fields and simple fluid dynamics mechanisms, to solve fluid dynamics problems, and for electronic sketching. The usefulness of computer applications is limited by computer speed, memory, and software and the clarity and field of view of the projected display. Proposed advances in personal computers which will address these limitations are discussed. Long range applications for computers in education are considered.
Distributed Design and Analysis of Computer Experiments
Doak, Justin
2002-11-11
DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an
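DDACE itself is a C++ library and its API is not reproduced here; the sampling workflow the abstract describes (ranges plus a sampling scheme in, input points out) can be sketched with one common scheme, Latin hypercube sampling, in a few lines. The variable names and ranges below are hypothetical:

```python
import random

def latin_hypercube(ranges, n_samples, seed=0):
    """Draw n_samples points over the given variable ranges: one value per
    equal-width stratum per variable, strata shuffled to decorrelate
    variables (Latin hypercube sampling)."""
    rng = random.Random(seed)
    per_var = {}
    for name, (lo, hi) in ranges.items():
        width = (hi - lo) / n_samples
        strata = [lo + i * width + rng.random() * width for i in range(n_samples)]
        rng.shuffle(strata)
        per_var[name] = strata
    # Transpose into a list of input points to feed an application code
    return [{name: per_var[name][i] for name in ranges} for i in range(n_samples)]

# e.g. a temperature and a material variable, as in the abstract's example
points = latin_hypercube({"temperature": (300.0, 400.0), "stiffness": (1.0, 5.0)}, 10)
```

Each variable's ten values land one per decile of its range, so even a small sample covers the whole input space.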
Computation in Dynamically Bounded Asymmetric Systems
Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney
2015-01-01
Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable ‘expansion’ dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems. PMID:25617645
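The linear-threshold dynamics described in this abstract are easy to simulate directly. The following is a minimal sketch, not the paper's model: a two-unit network with asymmetric cross-inhibition (a winner-take-all motif), integrated with forward Euler. The weights and inputs are illustrative choices:

```python
import numpy as np

def simulate_lt_network(W, b, x0, dt=0.01, steps=5000, tau=1.0):
    """Euler-integrate tau * dx/dt = -x + max(0, W x + b): a network of
    linear threshold neurons whose positive response is unbounded."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += (dt / tau) * (-x + np.maximum(0.0, W @ x + b))
    return x

# Asymmetric cross-inhibition: unit 0 inhibits unit 1 less than vice versa.
# From a near-symmetric start, the unstable interior fixed point is left via
# 'expansion' dynamics and the system contracts onto a one-winner state.
W = np.array([[0.0, -2.0],
              [-1.0,  0.0]])
b = np.array([1.0, 0.6])
x = simulate_lt_network(W, b, x0=[0.1, 0.1])
```

The run settles with one unit near its uninhibited value and the other silenced, illustrating the expansion-then-contraction behavior the abstract describes.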
Arterioportal shunts on dynamic computed tomography
Nakayama, T.; Hiyama, Y.; Ohnishi, K.; Tsuchiya, S.; Kohno, K.; Nakajima, Y.; Okuda, K.
1983-05-01
Thirty-two patients, 20 with hepatocellular carcinoma and 12 with liver cirrhosis, were examined by dynamic computed tomography (CT) using intravenous bolus injection of contrast medium and by celiac angiography. Dynamic CT disclosed arterioportal shunting in four cases of hepatocellular carcinoma and in one of cirrhosis. In three of the former, the arterioportal shunt was adjacent to a mass lesion on CT, suggesting tumor invasion into the portal branch. In one with hepatocellular carcinoma, the shunt was remote from the mass. In the case with cirrhosis, there was no mass. In these last two cases, the shunt might have been caused by prior percutaneous needle puncture. In another case of hepatocellular carcinoma, celiac angiography but not CT demonstrated an arterioportal shunt. Thus, dynamic CT was diagnostic in five of six cases of arteriographically demonstrated arterioportal shunts.
A computational model for dynamic vision
NASA Technical Reports Server (NTRS)
Moezzi, Saied; Weymouth, Terry E.
1990-01-01
This paper describes a novel computational model for dynamic vision which promises to be both powerful and robust. Furthermore the paradigm is ideal for an active vision system where camera vergence changes dynamically. Its basis is the retinotopically indexed object-centered encoding of the early visual information. Specifically, the relative distances of objects to a set of referents is encoded in image registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integration of depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.
Human systems dynamics: Toward a computational model
NASA Astrophysics Data System (ADS)
Eoyang, Glenda H.
2012-09-01
A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high dimension, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high dimension, and nonlinear conceptual model of the complex dynamics of human systems.
Computational fluid dynamics uses in fluid dynamics/aerodynamics education
NASA Technical Reports Server (NTRS)
Holst, Terry L.
1994-01-01
The field of computational fluid dynamics (CFD) has advanced to the point where it can now be used for the purpose of fluid dynamics physics education. Because of the tremendous wealth of information available from numerical simulation, certain fundamental concepts can be efficiently communicated using an interactive graphical interrogation of the appropriate numerical simulation data base. In other situations, a large amount of aerodynamic information can be communicated to the student by interactive use of simple CFD tools on a workstation or even in a personal computer environment. The emphasis in this presentation is to discuss ideas for how this process might be implemented. Specific examples, taken from previous publications, will be used to highlight the presentation.
NASA Technical Reports Server (NTRS)
Devasia, Santosh; Bayo, Eduardo
1993-01-01
This paper addresses the problem of inverse dynamics for articulated flexible structures with both lumped and distributed actuators. This problem arises, for example, in the combined vibration minimization and trajectory control of space robots and structures. A new inverse dynamics scheme for computing the nominal lumped and distributed inputs for tracking a prescribed trajectory is given.
Concept for a distributed processor computer
NASA Technical Reports Server (NTRS)
Bogue, P. N.; Burnett, G. J.; Koczela, L. J.
1970-01-01
Future generation computer utilizes cell of single metal oxide semiconductor wafer containing general purpose processor section and small memory of approximately 512 words of 16 bits each. Cells are organized into groups and groups interconnected to form computer.
Computational Fluid Dynamics of rising droplets
Wagner, Matthew; Francois, Marianne M.
2012-09-05
The main goal of this study is to perform simulations of droplet dynamics using Truchas, a LANL-developed computational fluid dynamics (CFD) software, and compare them to a computational study of Hysing et al.[IJNMF, 2009, 60:1259]. Understanding droplet dynamics is of fundamental importance in liquid-liquid extraction, a process used in the nuclear fuel cycle to separate various components. Simulations of a single droplet rising by buoyancy are conducted in two-dimensions. Multiple parametric studies are carried out to ensure the problem set-up is optimized. An Interface Smoothing Length (ISL) study and mesh resolution study are performed to verify convergence of the calculations. ISL is a parameter for the interface curvature calculation. Further, wall effects are investigated and checked against existing correlations. The ISL study found that the optimal ISL value is 2.5Δx, with Δx being the mesh cell spacing. The mesh resolution study found that the optimal mesh resolution is d/h = 40, for d = drop diameter and h = Δx. In order for wall effects on terminal velocity to be insignificant, a conservative wall width of 9d or a nonconservative wall width of 7d can be used. The percentage difference between Hysing et al.[IJNMF, 2009, 60:1259] and Truchas for the velocity profiles vary from 7.9% to 9.9%. The computed droplet velocity and interface profiles are found in agreement with the study. The CFD calculations are performed on multiple cores, using LANL's Institutional High Performance Computing.
Determination of eigenvalues of dynamical systems by symbolic computation
NASA Technical Reports Server (NTRS)
Howard, J. C.
1982-01-01
A symbolic computation technique for determining the eigenvalues of dynamical systems is described wherein algebraic operations, symbolic differentiation, matrix formulation and inversion, etc., can be performed on a digital computer equipped with a formula-manipulation compiler. An example is included that demonstrates the facility with which the system dynamics matrix and the control distribution matrix from the state space formulation of the equations of motion can be processed to obtain eigenvalue loci as a function of a system parameter. The example chosen to demonstrate the technique is a fourth-order system representing the longitudinal response of a DC-8 aircraft to elevator inputs. This simplified system has two dominant modes, one of which is lightly damped and the other well damped. The loci may be used to determine the value of the controlling parameter that satisfies design requirements. The results were obtained using the MACSYMA symbolic manipulation system.
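The workflow this abstract describes (a symbolic state matrix, a characteristic polynomial, eigenvalue loci swept over a parameter) maps directly onto a modern computer algebra system. The sketch below uses SymPy on a generic second-order system; the DC-8 fourth-order model and its MACSYMA session are not reproduced here:

```python
import sympy as sp

# Symbolic state matrix of a damped oscillator, with stiffness k as the
# loci parameter (illustrative; not the abstract's DC-8 model)
k, s = sp.symbols('k s', real=True)
A = sp.Matrix([[0, 1],
               [-k, -2]])                  # xdot = A x

char_poly = (s * sp.eye(2) - A).det()      # characteristic polynomial det(sI - A)
eigs = sp.solve(char_poly, s)              # eigenvalues as functions of k

# Trace the loci over parameter values: real roots for k <= 1,
# a lightly damped complex pair for k > 1
loci = [[sp.simplify(e.subs(k, kv)) for e in eigs] for kv in (0.5, 1.0, 2.0)]
```

Sweeping k and tabulating `loci` reproduces, in miniature, the eigenvalue-loci-versus-parameter plots the abstract's technique was built to generate.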
Advances in computational fluid dynamics solvers for modern computing environments
NASA Astrophysics Data System (ADS)
Hertenstein, Daniel; Humphrey, John R.; Paolini, Aaron L.; Kelmelis, Eric J.
2013-05-01
EM Photonics has been investigating the application of massively multicore processors to a key problem area: Computational Fluid Dynamics (CFD). While the capabilities of CFD solvers have continually increased and improved to support features such as moving bodies and adjoint-based mesh adaptation, the software architecture has often lagged behind. This has led to poor scaling as core counts reach the tens of thousands. In the modern High Performance Computing (HPC) world, clusters with hundreds of thousands of cores are becoming the standard. In addition, accelerator devices such as NVIDIA GPUs and Intel Xeon Phi are being installed in many new systems. It is important for CFD solvers to take advantage of the new hardware as the computations involved are well suited for the massively multicore architecture. In our work, we demonstrate that new features in NVIDIA GPUs are able to empower existing CFD solvers by example using AVUS, a CFD solver developed by the Air Force Research Laboratory (AFRL) and the Volcanic Ash Advisory Center (VAAC). The effort has resulted in increased performance and scalability without sacrificing accuracy. There are many well-known codes in the CFD space that can benefit from this work, such as FUN3D, OVERFLOW, and TetrUSS. Such codes are widely used in the commercial, government, and defense sectors.
Accommodating Heterogeneity in a Debugger for Distributed Computations
NASA Technical Reports Server (NTRS)
Hood, Robert; Cheng, Doreen; Cooper, D. M. (Technical Monitor)
1994-01-01
In an ongoing project at NASA Ames Research Center, we are building a debugger for distributed computations running on a heterogeneous set of machines. Historically, such debuggers have been built as front-ends to existing source-level debuggers on the target platforms. In effect, these back-end debuggers are providing a collection of debugger services to a client. The major drawback is that because of inconsistencies among the back-end debuggers, the front-end must use a different protocol when talking to each back-end debugger. This can make the front-end quite complex. We have avoided this complexity problem by defining the client-server debugger protocol. While it does require vendors to adapt their existing debugger code to meet the protocol, vendors are generally interested in doing so because the approach has several advantages. In addition to solving the heterogeneous platform debugging problem, it will be possible to write interesting debugger user interfaces that can be easily ported across a variety of machines. This will likely encourage investment in application-domain specific debuggers. In fact, the user interface of our debugger will be geared to scientists developing computational fluid dynamics codes. This paper describes some of the problems encountered in developing a portable debugger for heterogeneous, distributed computing and how the architecture of our debugger avoids them. It then provides a detailed description of the debugger client-server protocol. Some of the more interesting attributes of the protocol are: (1) It is object-oriented; (2) It uses callback functions to capture the asynchronous nature of debugging in a procedural fashion; (3) It contains abstractions, such as in-line instrumentation, for the debugging of computationally intensive programs; (4) For remote debugging, it has operations that enable the implementor to optimize message passing traffic between client and server. The soundness of the protocol is being tested through
Computational fluid dynamics: Transition to design applications
NASA Technical Reports Server (NTRS)
Bradley, R. G.; Bhateley, I. C.; Howell, G. A.
1987-01-01
The development of aerospace vehicles, over the years, was an evolutionary process in which engineering progress in the aerospace community was based, generally, on prior experience and data bases obtained through wind tunnel and flight testing. Advances in the fundamental understanding of flow physics, wind tunnel and flight test capability, and mathematical insights into the governing flow equations were translated into improved air vehicle design. The modern day field of Computational Fluid Dynamics (CFD) is a continuation of the growth in analytical capability and the digital mathematics needed to solve the more rigorous form of the flow equations. Some of the technical and managerial challenges that result from rapidly developing CFD capabilities, some of the steps being taken by the Fort Worth Division of General Dynamics to meet these challenges, and some of the specific areas of application for high performance air vehicles are presented.
Computational dynamics of acoustically driven microsphere systems
NASA Astrophysics Data System (ADS)
Glosser, Connor; Piermarocchi, Carlo; Li, Jie; Dault, Dan; Shanker, B.
2016-01-01
We propose a computational framework for the self-consistent dynamics of a microsphere system driven by a pulsed acoustic field in an ideal fluid. Our framework combines a molecular dynamics integrator describing the dynamics of the microsphere system with a time-dependent integral equation solver for the acoustic field that makes use of fields represented as surface expansions in spherical harmonic basis functions. The presented approach allows us to describe the interparticle interaction induced by the field as well as the dynamics of trapping in counter-propagating acoustic pulses. The integral equation formulation leads to equations of motion for the microspheres describing the effect of nondissipative drag forces. We show (1) that the field-induced interactions between the microspheres give rise to effective dipolar interactions, with effective dipoles defined by their velocities and (2) that the dominant effect of an ultrasound pulse through a cloud of microspheres gives rise mainly to a translation of the system, though we also observe both expansion and contraction of the cloud determined by the initial system geometry.
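The molecular-dynamics half of the framework described above can be sketched independently of the acoustic integral-equation solver. Below is a standard velocity-Verlet integrator with a placeholder force (a harmonic trap standing in for the field-induced trapping force); the coupling to a real field solver, and all parameter values, are illustrative assumptions:

```python
import numpy as np

def velocity_verlet(pos, vel, force_fn, mass, dt, steps):
    """Standard velocity-Verlet integration; force_fn(pos) -> per-particle
    forces. In the paper's framework the force would come from the acoustic
    integral-equation solver; here it is any callable."""
    f = force_fn(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = force_fn(pos)
        vel += 0.5 * dt * f / mass
    return pos, vel

# Placeholder force: a harmonic trap pulling each sphere toward the origin,
# standing in for the radiation force of counter-propagating trapping pulses.
trap = lambda p: -1.0 * p
p, v = velocity_verlet(np.array([[1.0, 0.0, 0.0]]), np.zeros((1, 3)),
                       trap, mass=1.0, dt=0.01, steps=1000)
```

Velocity Verlet is the usual choice here because it is time-reversible and conserves energy to second order in the timestep, which matters for long trapping simulations.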
Shuttle rocket booster computational fluid dynamics
NASA Technical Reports Server (NTRS)
Chung, T. J.; Park, O. Y.
1988-01-01
Additional results and a revised and improved computer program listing from the shuttle rocket booster computational fluid dynamics formulations are presented. Numerical calculations for the flame zone of solid propellants are carried out using the Galerkin finite elements, with perturbations expanded to the zeroth, first, and second orders. The results indicate that amplification of oscillatory motions does indeed prevail in high frequency regions. For the second order system, the trend is similar to the first order system for low frequencies, but instabilities may appear at frequencies lower than those of the first order system. The most significant effect of the second order system is that the admittance is extremely oscillatory between moderately high frequency ranges.
Computational fluid dynamics of reaction injection moulding
NASA Astrophysics Data System (ADS)
Mateus, Artur; Mitchell, Geoffrey; Bártolo, Paulo
2012-09-01
The modern approach to the development of moulds for injection moulding (Reaction Injection Moulding - RIM, Thermoplastic Injection Moulding - TIM and others) differs from the conventional approach based exclusively on the designer's experience and hypotheses. The increasing complexity of moulds and client demands for improved quality, shorter delivery times, and lower prices require novel approaches to developing optimal moulds and moulded parts. The development of more accurate computational tools is fundamental to optimize both the injection moulding process and the design, quality and durability of the moulds. This paper focuses on the RIM process, proposing a novel thermo-rheo-kinetic model. The proposed model was implemented in general-purpose Computational Fluid Dynamics (CFD) software. The model accurately describes both the flow and curing stages. Simulation results were validated against experimental results.
LaRC computational dynamics overview
NASA Technical Reports Server (NTRS)
Husner, J. M.
1989-01-01
Present research centers on the development of advanced computational methods for transient simulation analyses. Aircraft, launch vehicles and space structure components are potential applications, but primary focus is presently on large space structures. There are both in-house and out-of-house activities. The in-house activity centers around the development of a multibody simulation tool for truss-like structures called LATDYN for Large Angle Transient DYNamics. Multibody analysis involves articulation of structural components as well as robotic maneuvers. These items are necessary for construction (erection or deployment) of large space structures in orbit and the carrying out of certain operations on board the space station. Thus, part of the in-house activity involves the development of methods which treat the changing mass, stiffness and constraints associated with articulating systems. The out-of-house activity involves subcycling, development of large deformation/motion beam formulation, constraint stabilization and direct time integration transient algorithms in parallel computing.
Computational fluid dynamics in cardiovascular disease.
Lee, Byoung-Kwon
2011-08-01
Computational fluid dynamics (CFD) is a mechanical engineering field for analyzing fluid flow, heat transfer, and associated phenomena, using computer-based simulation. CFD is a widely adopted methodology for solving complex problems in many modern engineering fields. The merit of CFD is that new and improved devices and system designs can be developed, and existing equipment optimized, through computational simulations, resulting in enhanced efficiency and lower operating costs. However, in the biomedical field, CFD is still emerging. The main reason why CFD in the biomedical field has lagged behind is the tremendous complexity of human body fluid behavior. Recently, CFD biomedical research is more accessible, because high performance hardware and software are easily available with advances in computer science. All CFD processes contain three main components to provide useful information, such as pre-processing, solving mathematical equations, and post-processing. Initial accurate geometric modeling and boundary conditions are essential to achieve adequate results. Medical imaging, such as ultrasound imaging, computed tomography, and magnetic resonance imaging can be used for modeling, and Doppler ultrasound, pressure wire, and non-invasive pressure measurements are used for flow velocity and pressure as a boundary condition. Many simulations and clinical results have been used to study congenital heart disease, heart failure, ventricle function, aortic disease, and carotid and intra-cranial cerebrovascular diseases. With decreasing hardware costs and rapid computing times, researchers and medical scientists may increasingly use this reliable CFD tool to deliver accurate results. A realistic, multidisciplinary approach is essential to accomplish these tasks. Indefinite collaborations between mechanical engineers and clinical and medical scientists are essential. CFD may be an important methodology to understand the pathophysiology of the development and
Two-phase computational fluid dynamics
Rothe, P.H.
1991-07-26
The results of the project illustrate the feasibility of multiphase computational fluid dynamics (CFD) software. Existing CFD software is capable of simulating particle fields, certain droplet fields, and certain free surface flows, and does so routinely in engineering applications. Stratified flows can be addressed by a multiphase CFD code, once one is developed with suitable capabilities. The groundwork for such a code has been laid. Calculations performed for stratified flows demonstrate the accuracy achievable and the convergence of the methodology. Extension of the stratified flow methodology to other segregated flows such as slug or annular flow faces no inherent limits. The research has commercial application in the development of multiphase CFD computer programs.
Computational Fluid Dynamics Technology for Hypersonic Applications
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2003-01-01
Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state-of-art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.
Computational Fluid Dynamics Symposium on Aeropropulsion
NASA Technical Reports Server (NTRS)
1991-01-01
Recognizing the considerable advances that have been made in computational fluid dynamics, the Internal Fluid Mechanics Division of NASA Lewis Research Center sponsored this symposium with the objective of providing a forum for exchanging information regarding recent developments in numerical methods, physical and chemical modeling, and applications. This conference publication is a compilation of 4 invited and 34 contributed papers presented in six sessions: algorithms one and two, turbomachinery, turbulence, components application, and combustors. Topics include numerical methods, grid generation, chemically reacting flows, turbulence modeling, inlets, nozzles, and unsteady flows.
High performance computations using dynamical nucleation theory
NASA Astrophysics Data System (ADS)
Windus, T. L.; Kathmann, S. M.; Crosby, L. D.
2008-07-01
Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described.
Verification and Validation in Computational Fluid Dynamics
OBERKAMPF, WILLIAM L.; TRUCANO, TIMOTHY G.
2002-03-01
Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V&V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V&V, and develops a number of extensions to existing ideas. The review of the development of V&V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V&V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized.
Distributing an executable job load file to compute nodes in a parallel computer
Gooding, Thomas M.
2016-09-13
Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
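The tree-based scheme in this abstract — each node reporting upward when it or any descendant participates, with the reported links forming the class route for the broadcast — can be sketched in a few lines. The following Python is a hypothetical illustration; the names (`Node`, `build_class_route`) are ours, not from the patent.

```python
# Hypothetical sketch of the class-route construction described above.
# Each node reports to its parent the link over which it receives data
# whenever it, or any descendant, participates in the job; the collected
# (parent, child) links form the class route for broadcasting the load file.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    participating: bool = False
    children: list = field(default_factory=list)

def build_class_route(node, route):
    """Depth-first pass: returns True if this node or any descendant
    participates; appends (parent, child) links for active subtrees."""
    child_flags = [build_class_route(c, route) for c in node.children]
    for child, flag in zip(node.children, child_flags):
        if flag:
            route.append((node.name, child.name))
    return node.participating or any(child_flags)
```

Broadcasting the executable load file then only needs to traverse the collected links, skipping subtrees with no participating nodes.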
A uniform approach for programming distributed heterogeneous computing systems
Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas
2014-01-01
Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015
Distributed computing environments for future space control systems
NASA Technical Reports Server (NTRS)
Viallefont, Pierre
1993-01-01
The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.
Optimization of an interactive distributive computer network
NASA Technical Reports Server (NTRS)
Frederick, V.
1985-01-01
The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.
Direct modeling for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Xu, Kun
2015-06-01
All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct
Utilizing parallel optimization in computational fluid dynamics
NASA Astrophysics Data System (ADS)
Kokkolaras, Michael
1998-12-01
General problems of interest in computational fluid dynamics are investigated by means of optimization. Specifically, in the first part of the dissertation, a method of optimal incremental function approximation is developed for the adaptive solution of differential equations. Various concepts and ideas utilized by numerical techniques employed in computational mechanics and artificial neural networks (e.g. function approximation and error minimization, variational principles and weighted residuals, and adaptive grid optimization) are combined to formulate the proposed method. The basis functions and associated coefficients of a series expansion, representing the solution, are optimally selected by a parallel direct search technique at each step of the algorithm according to appropriate criteria; the solution is built sequentially. In this manner, the proposed method is adaptive in nature, although a grid is neither built nor adapted in the traditional sense using a-posteriori error estimates. Variational principles are utilized for the definition of the objective function to be extremized in the associated optimization problems, ensuring that the problem is well-posed. Complicated data structures and expensive remeshing algorithms and systems solvers are avoided. Computational efficiency is increased by using low-order basis functions and concurrent computing. Numerical results and convergence rates are reported for a range of steady-state problems, including linear and nonlinear differential equations associated with general boundary conditions, and illustrate the potential of the proposed method. Fluid dynamics applications are emphasized. Conclusions are drawn by discussing the method's limitations, advantages, and possible extensions. The second part of the dissertation is concerned with the optimization of the viscous-inviscid-interaction (VII) mechanism in an airfoil flow analysis code. The VII mechanism is based on the concept of a transpiration velocity
Spatiotemporal dynamics of distributed synthetic genetic circuits
NASA Astrophysics Data System (ADS)
Kanakov, Oleg; Laptyeva, Tetyana; Tsimring, Lev; Ivanchenko, Mikhail
2016-04-01
We propose and study models of two distributed synthetic gene circuits, a toggle switch and an oscillator, each split between two cell strains and coupled via quorum-sensing signals. The distributed toggle switch relies on mutual repression of the two strains, while the oscillator comprises two strains, one of which acts as an activator for the other, which in turn acts as a repressor. The distributed toggle switch can exhibit mobile fronts, switching the system from the weaker to the stronger spatially homogeneous state. The circuit can also act as a biosensor, with the switching front dynamics determined by the properties of an external signal. The distributed oscillator system displays another biosensor functionality: oscillations emerge once a small amount of one cell strain appears amid the other, present in abundance. Distribution of synthetic gene circuits among multiple strains allows one to reduce crosstalk among different parts of the overall system and also decreases the energetic burden of the synthetic circuit per cell, which may allow for enhanced functionality and viability of engineered cells.
Distributed-Computer System Optimizes SRB Joints
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.
1991-01-01
Initial calculations of redesign of joint on solid rocket booster (SRB) that failed during Space Shuttle tragedy showed redesign increased weight. Optimization techniques applied to determine whether weight could be reduced while keeping joint closed and limiting stresses. Analysis system developed by use of existing software coupling structural analysis with optimization computations. Software designed executable on network of computer workstations. Took advantage of parallelism offered by finite-difference technique of computing gradients to enable several workstations to contribute simultaneously to solution of problem. Key features, effective use of redundancies in hardware and flexible software, enabling optimization to proceed with minimal delay and decreased overall time to completion.
Dynamic shared state maintenance in distributed virtual environments
NASA Astrophysics Data System (ADS)
Hamza-Lup, Felix George
Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model. An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene. Mixed Reality (MR) and VR environments must bring the human participant interaction into the loop through a wide range of electronic motion sensors, and haptic devices. Part of the work presented here defines a novel criterion for categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory including 3D visualization applications using custom built head-mounted displays (HMDs) with optical motion tracking and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, results show that the dynamic shared state can be maintained consistent at multiple remotely located sites. In further consideration of the latency problems and in the light of the current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for
Efficient gradient computation for dynamical models
Sengupta, B.; Friston, K.J.; Penny, W.D.
2014-01-01
Data assimilation is a fundamental issue that arises across many scales in neuroscience — ranging from the study of single neurons using single electrode recordings to the interaction of thousands of neurons using fMRI. Data assimilation involves inverting a generative model that can not only explain observed data but also generate predictions. Typically, the model is inverted or fitted using conventional tools of (convex) optimization that invariably extremise some functional — norms, minimum descriptive length, variational free energy, etc. Generally, optimisation rests on evaluating the local gradients of the functional to be optimized. In this paper, we compare three different gradient estimation techniques that could be used for extremising any functional in time — (i) finite differences, (ii) forward sensitivities and a method based on (iii) the adjoint of the dynamical system. We demonstrate that the first-order gradients of a dynamical system, linear or non-linear, can be computed most efficiently using the adjoint method. This is particularly true for systems where the number of parameters is greater than the number of states. For such systems, integrating several sensitivity equations – as required with forward sensitivities – proves to be most expensive, while finite-difference approximations have an intermediate efficiency. In the context of neuroimaging, adjoint based inversion of dynamical causal models (DCMs) can, in principle, enable the study of models with large numbers of nodes and parameters. PMID:24769182
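The trade-off the abstract describes can be seen on a toy discrete dynamical system x_{k+1} = θ·x_k with cost J = x_N, for which the exact gradient is dJ/dθ = N·θ^(N-1). The Python sketch below is our own illustration (not the authors' code): it compares a central finite-difference estimate, which needs extra forward solves per parameter, with the adjoint computation, which needs only one forward and one backward pass.

```python
# Toy system x_{k+1} = theta * x_k, x_0 = 1, cost J = x_N = theta**N,
# so the exact gradient is dJ/dtheta = N * theta**(N-1).
def forward(theta, n):
    xs = [1.0]
    for _ in range(n):
        xs.append(theta * xs[-1])
    return xs

def grad_fd(theta, n, h=1e-6):
    # Central finite differences: two extra forward solves per parameter.
    return (forward(theta + h, n)[-1] - forward(theta - h, n)[-1]) / (2 * h)

def grad_adjoint(theta, n):
    # Adjoint method: one forward pass plus one backward pass, independent
    # of the number of parameters in the system.
    xs = forward(theta, n)
    lam, grad = 1.0, 0.0          # lam = dJ/dx_k, seeded at the final state
    for k in range(n - 1, -1, -1):
        grad += lam * xs[k]       # d x_{k+1} / d theta = x_k
        lam *= theta              # d x_{k+1} / d x_k   = theta
    return grad
```

For many parameters the adjoint's advantage grows: the backward pass is shared, whereas finite differences and forward sensitivities scale with the parameter count, which is the paper's central point.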
Cardea: Dynamic Access Control in Distributed Systems
NASA Technical Reports Server (NTRS)
Lepro, Rebekah
2004-01-01
Modern authorization systems span domains of administration, rely on many different authentication sources, and manage complex attributes as part of the authorization process. This paper presents Cardea, a distributed system that facilitates dynamic access control, as a valuable piece of an interoperable authorization framework. First, the authorization model employed in Cardea and its functionality goals are examined. Next, critical features of the system architecture and its handling of the authorization process are examined. Then the SAML and XACML standards, as incorporated into the system, are analyzed. Finally, the future directions of this project are outlined and connection points with general components of an authorization system are highlighted.
Visualization of Unsteady Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Haimes, Robert
1997-01-01
The current compute environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamic (CFD) results is a super-computer class machine. The Massively Parallel Processors (MPP's) such as the 160 node IBM SP2 at NAS and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array and the J90 cluster) provide the required computation bandwidth for CFD calculations of transient problems. If we follow the traditional computational analysis steps for CFD (and we wish to construct an interactive visualizer) we need to be aware of the following: (1) Disk space requirements. A single snapshot must contain at least the values (primitive variables) stored at the appropriate locations within the mesh. For most simple 3D Euler solvers that means 5 floating point words. Navier-Stokes solutions with turbulence models may contain 7 state variables. (2) Disk speed vs. computational speed. The time required to read the complete solution of a saved time frame from disk is now longer than the compute time for a set number of iterations from an explicit solver. Depending on the hardware and solver, an iteration of an implicit code may also take less time than reading the solution from disk. If one examines the performance improvements of the last decade or two, it is easy to see that relying on disk performance (vs. CPU improvement) may not be the best method for enhancing interactivity. (3) Cluster and parallel machine I/O problems. Disk access time is much worse within current parallel machines and clusters of workstations that are acting in concert to solve a single problem. In this case we are not trying to read the volume of data; rather, we are running the solver, and the solver outputs the solution. These traditional network interfaces must be used for the file system. (4) Numerics of particle traces. Most visualization tools can work upon a single snapshot of the data but some visualization tools for transient
Dynamic Voltage Regulation Using Distributed Energy Resources
Xu, Yan; Rizy, D Tom; Li, Fangxing; Kueck, John D
2007-01-01
Many distributed energy resources (DE) are near load centres and equipped with power electronics converters to interface with the grid, therefore it is feasible for DE to provide ancillary services such as voltage regulation, nonactive power compensation, and power factor correction. A synchronous condenser and a microturbine with an inverter interface are implemented in parallel in a distribution system to regulate the local voltage. Voltage control schemes of the inverter and the synchronous condenser are developed. The experimental results show that both the inverter and the synchronous condenser can regulate the local voltage instantaneously, while the dynamic response of the inverter is faster than the synchronous condenser; and that integrated voltage regulation (multiple DE perform voltage regulation) can increase the voltage regulation capability, increase the lifetime of the equipment, and reduce the capital and operation costs.
Verification and validation in computational fluid dynamics
NASA Astrophysics Data System (ADS)
Oberkampf, William L.; Trucano, Timothy G.
2002-04-01
Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V&V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V&V, and develops a number of extensions to existing ideas. The review of the development of V&V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V&V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized. The fundamental strategy of validation is to assess how accurately the computational results compare with the experimental data, with quantified error and uncertainty estimates for both. This strategy employs a hierarchical methodology that segregates and simplifies the physical and coupling phenomena involved in the complex engineering system of interest. A hypersonic cruise missile is used as an example of how this hierarchical structure is formulated. The discussion of validation assessment also encompasses a number of other important topics. A set of guidelines is proposed for designing and conducting validation experiments, supported by an explanation of how validation experiments are different
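One standard way to carry out the verification activity described above — determining the accuracy of numerical solutions — is a grid-convergence study: compute the observed order of accuracy from solutions on three systematically refined grids, then Richardson-extrapolate toward the zero-spacing value. The following Python is a minimal sketch of that procedure (the function names are ours, not from the paper).

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy p from solutions on three grids related
    by a uniform refinement ratio r (coarse -> medium -> fine)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def richardson_estimate(f_medium, f_fine, r, p):
    """Richardson-extrapolated estimate of the exact (zero-spacing) value."""
    return f_fine + (f_fine - f_medium) / (r**p - 1.0)
```

If the observed order p matches the formal order of the discretization, that is evidence the code is solving its equations correctly in the asymptotic range; a mismatch flags a coding or resolution problem before any comparison with experiment is attempted.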
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1992-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
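As a concrete illustration of the divide-and-conquer idea, the toy Python below implements classical alternating Schwarz for the 1-D Laplace equation on two overlapping subdomains. It is a didactic sketch under our own simplifications, not the authors' backstep solver: for 1-D Laplace, each subdomain solve reduces to linear interpolation between the subdomain's two boundary values, so the iteration structure is visible with no linear algebra.

```python
def schwarz_1d(n=10, overlap=2, iters=30):
    """Alternating Schwarz for u'' = 0 on [0,1] with u(0)=0, u(1)=1.

    Exact solution: u(x) = x. Each subdomain solve of the 1-D Laplace
    equation is linear interpolation between the subdomain's boundary
    values; the overlap couples the two solves and drives convergence."""
    u = [0.0] * (n + 1)
    u[n] = 1.0
    mid = n // 2
    subdomains = [(0, mid + overlap), (mid - overlap, n)]  # inclusive index ranges
    for _ in range(iters):
        for a, b in subdomains:
            ua, ub = u[a], u[b]                 # current interface values
            for i in range(a + 1, b):
                u[i] = ua + (ub - ua) * (i - a) / (b - a)
    return u
```

The error contracts by a fixed factor per sweep that improves with the overlap width, which is why overlap (and, at scale, a good interface preconditioner) matters.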
Computational fluid dynamics of airfoils and wings
NASA Technical Reports Server (NTRS)
Garabedian, P.; Mcfadden, G.
1982-01-01
It is pointed out that transonic flow is one of the fields where computational fluid dynamics turns out to be most effective. Codes for the design and analysis of supercritical airfoils and wings have become standard tools of the aircraft industry. The present investigation is concerned with mathematical models and theorems which account for some of the progress that has been made. The most successful aerodynamics codes are those for the analysis of flow at off-design conditions where weak shock waves appear. A major breakthrough was achieved by Murman and Cole (1971), who conceived of a retarded difference scheme which incorporates artificial viscosity to capture shocks in the supersonic zone. This concept has been used to develop codes for the analysis of transonic flow past a swept wing. Attention is given to the trailing edge and the boundary layer, entropy inequalities and wave drag, shockless airfoils, and the inverse swept wing code.
High performance computations using dynamical nucleation theory
Windus, Theresa L.; Kathmann, Shawn M.; Crosby, Lonnie D.
2008-07-14
Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities are described. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A "master-slave" solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are also described. This work was supported by the U.S. Department of Energy's (DOE) Office of Basic Energy Sciences, Chemical Sciences program. The Pacific Northwest National Laboratory is operated by Battelle for DOE.
Lectures series in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Thompson, Kevin W.
1987-01-01
The lecture notes cover the basic principles of computational fluid dynamics (CFD). They are oriented more toward practical applications than theory, and are intended to serve as a unified source for basic material in the CFD field as well as an introduction to more specialized topics in artificial viscosity and boundary conditions. Each chapter in the text is associated with a videotaped lecture. The basic properties of conservation laws, wave equations, and shock waves are described. The duality of the conservation law and wave representations is investigated, and shock waves are examined in some detail. Finite difference techniques are introduced for the solution of wave equations and conservation laws. Stability analysis for finite difference approximations is presented. A consistent description of artificial viscosity methods is provided. Finally, the problem of nonreflecting boundary conditions is treated.
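The finite difference and artificial viscosity themes of these notes can be illustrated with the first-order upwind scheme for linear advection u_t + c u_x = 0, which is algebraically identical to a central difference plus an artificial viscosity of c·dx/2. A minimal Python sketch (our illustration, not taken from the lecture notes):

```python
def upwind_step(u, c, dt, dx):
    """One first-order upwind step for u_t + c u_x = 0 with c > 0.

    Upwinding equals central differencing plus an artificial viscosity of
    c*dx/2; that built-in dissipation is what lets the scheme capture
    discontinuities without oscillation. The inflow value u[0] is held fixed."""
    nu = c * dt / dx                      # CFL number; stable for 0 < nu <= 1
    return [u[0]] + [u[i] - nu * (u[i] - u[i - 1]) for i in range(1, len(u))]
```

At CFL number exactly 1 the scheme propagates a step profile without smearing (u_new[i] = u[i-1]); for smaller CFL numbers the artificial viscosity smooths the front, which is the trade-off the notes discuss.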
Domain decomposition algorithms and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Chan, Tony F.
1988-01-01
In the past several years, domain decomposition has been a very popular topic, partly motivated by the potential for parallelization. While a large body of theory and algorithms has been developed for model elliptic problems, these methods are only recently starting to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics is investigated. Some examples are two-dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator, to be used in the iterative solution of the interface problem, is described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.
Artificial Intelligence In Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Vogel, Alison Andrews
1991-01-01
Paper compares four first-generation artificial-intelligence (AI) software systems for computational fluid dynamics. Includes: Expert Cooling Fan Design System (EXFAN), PAN AIR Knowledge System (PAKS), grid-adaptation program MITOSIS, and Expert Zonal Grid Generation (EZGrid). Focuses on knowledge-based ("expert") software systems. Analyzes intended tasks, kinds of knowledge possessed, magnitude of effort required to codify knowledge, how quickly constructed, performances, and return on investment. On basis of comparison, concludes AI most successful when applied to well-formulated problems solved by classifying or selecting preenumerated solutions. In contrast, application of AI to poorly understood or poorly formulated problems generally results in long development time and large investment of effort, with no guarantee of success.
Computational modeling of intraocular gas dynamics.
Noohi, P; Abdekhodaie, M J; Cheng, Y L
2015-01-01
The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The presented model predicted intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. SF6, both pure and diluted with air, was considered as the injected gas. The presented results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF6 was used, no significant expansion was observed. Also, head positioning for the treatment of the retinal tear influenced the rate of gas absorption. Moreover, the determined tolerance angle depended on the bubble and tear size: greater bubble expansion and a smaller retinal tear resulted in a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm the tolerance angle using pure SF6 is 1.4 times that using SF6 diluted with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency. PMID:26682529
Distributed computing environment monitoring and user expectations
Cottrell, R.L.A.; Logg, C.A.
1995-11-01
This paper discusses the growing need for distributed system monitoring and compares it with current practice. It then identifies the components of distributed system monitoring and shows how they are implemented and successfully used at one site today to address Local Area Network (LAN), network services and applications, Wide Area Network (WAN), and host monitoring. It shows how this monitoring can be used to develop realistic service-level expectations and also identifies the costs. Finally, the paper briefly discusses future challenges in network monitoring.
A comparison of queueing, cluster and distributed computing systems
NASA Technical Reports Server (NTRS)
Kaplan, Joseph A.; Nelson, Michael L.
1993-01-01
Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a collection of workstations on a network does not by itself constitute a cluster; cluster management software is necessary to harness the collective computing power. A variety of cluster management and queueing systems are compared: Distributed Queueing System (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.
Computational fluid dynamics of left ventricular ejection.
Georgiadis, J G; Wang, M; Pasipoularides, A
1992-01-01
The present investigation addresses the effects of simple geometric variations on intraventricular ejection dynamics, by methods from computational fluid dynamics. It is an early step in incorporating more and more relevant characteristics of the ejection process, such as a continuously changing irregular geometry, in numerical simulations. We consider the effects of varying chamber eccentricities and outflow valve orifice-to-inner surface area ratios on instantaneous ejection gradients along the axis of symmetry of the left ventricle. The equation of motion for the streamfunction was discretized and solved iteratively with specified boundary conditions on a boundary-fitted adaptive grid, using an alternating-direction-implicit (ADI) algorithm. The unsteady aspects of the ejection process were subsequently introduced into the numerical simulation. It was shown that for given chamber volume and outflow orifice area, higher chamber eccentricities require higher ejection pressure gradients for the same velocity and local acceleration values at the aortic annulus than do more spherical shapes. This finding is referable to the rise in local acceleration effects along the outflow axis. This is to be contrasted with the case of outflow orifice stenosis, in which it was shown that it is the convective acceleration effects that are intensified strongly. PMID:1562106
Nonlinear ship waves and computational fluid dynamics
MIYATA, Hideaki; ORIHARA, Hideo; SATO, Yohei
2014-01-01
Research undertaken in the first author's laboratory at the University of Tokyo over the past 30 years is highlighted. The discovery of nonlinear waves (named Free-Surface Shock Waves) in the vicinity of a ship advancing at constant speed provided the starting point for the development of innovative technologies in ship hull-form design. Based on these findings, a multitude of computational fluid dynamics (CFD) techniques have been developed over this period, and are highlighted in this paper. The TUMMAC code has been developed for wave problems, based on a rectangular grid system, while the WISDAM code treats both wave and viscous flow problems in the framework of a boundary-fitted grid system. These two techniques are able to cope with almost all fluid-dynamical problems relating to ships, including resistance, ship motion, and ride-comfort issues. Consequently, the two codes have contributed significantly to progress in the technology of ship design, and now form an integral part of the ship-designing process. PMID:25311139
Computational Fluid Dynamics - Applications in Manufacturing Processes
NASA Astrophysics Data System (ADS)
Beninati, Maria Laura; Kathol, Austin; Ziemian, Constance
2012-11-01
A new Computational Fluid Dynamics (CFD) exercise has been developed for the undergraduate introductory fluid mechanics course at Bucknell University. The goal is to develop a computational exercise that students complete which links the manufacturing processes course and the concurrent fluid mechanics course in a way that reinforces the concepts in both. In general, CFD is used as a tool to increase student understanding of the fundamentals in a virtual world. A ``learning factory,'' which is currently in development at Bucknell, seeks to use the laboratory as a means to link courses that previously seemed to have little correlation. A large part of the manufacturing processes course is a project using an injection molding machine. The flow of pressurized molten polyurethane into the mold cavity can also be an example of fluid motion (a jet of liquid hitting a plate) that is applied in manufacturing. The students will run a CFD process that captures this flow using their virtual mold created with a graphics package, such as SolidWorks. The laboratory structure is currently being implemented and analyzed as a part of the ``learning factory''. Lastly, a survey taken before and after the CFD exercise demonstrates a better understanding of both CFD and the manufacturing process.
A lightweight communication library for distributed computing
NASA Astrophysics Data System (ADS)
Groen, Derek; Rieder, Steven; Grosso, Paola; de Laat, Cees; Portegies Zwart, Simon
2010-01-01
We present MPWide, a platform-independent communication library for performing message passing between computers. Our library allows coupling of several local message passing interface (MPI) applications through a long-distance network and is specifically optimized for such communications. The implementation is deliberately kept lightweight and platform independent, and the library can be installed and used without administrative privileges. The only requirements are a C++ compiler and at least one open port to a wide-area network on each site. In this paper we present the library, describe the user interface, present performance tests and apply MPWide in a large-scale cosmological N-body simulation on a network of two computers, one in Amsterdam and the other in Tokyo.
Dynamic algorithm for correlation noise estimation in distributed video coding
NASA Astrophysics Data System (ADS)
Thambu, Kuganeswaran; Fernando, Xavier; Guan, Ling
2010-01-01
Low-complexity encoders at the expense of high-complexity decoders are advantageous in wireless video sensor networks. Distributed video coding (DVC) achieves this complexity balance, where the receivers compute side information (SI) by interpolating the key frames. Side information is modeled as a noisy version of the input video frame. In practice, correlation noise estimation at the receiver is a complex problem, and currently the noise is estimated from a residual variance between pixels of the key frames. The estimated (fixed) variance is then used to calculate the bit-metric values. In this paper, we introduce a new variance estimation technique that relies on the bit pattern of each pixel and is dynamically calculated over the entire motion environment, which helps to calculate the soft-value information required by the decoder. Our results show that the proposed bit-based dynamic variance estimation significantly improves peak signal-to-noise ratio (PSNR) performance.
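The fixed-versus-dynamic variance distinction above can be illustrated with a small sketch. The frames are synthetic, and the block-local estimate below is a simplified stand-in for the paper's bit-pattern rule, not its actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8-bit key frames; the side information (SI) is their average.
key_prev = rng.integers(0, 256, (64, 64)).astype(float)
key_next = np.clip(key_prev + rng.normal(0.0, 8.0, key_prev.shape), 0, 255)
side_info = 0.5 * (key_prev + key_next)

# Baseline: one fixed variance from the key-frame residual, used for all pixels.
residual = 0.5 * (key_next - key_prev)
fixed_var = residual.var()

# Dynamic alternative (a sketch, not the paper's exact bit-pattern rule):
# estimate a local variance per 8x8 block, so the noise model can follow motion.
blocks = residual.reshape(8, 8, 8, 8).swapaxes(1, 2)   # (row, col, 8, 8) tiles
local_var = blocks.var(axis=(2, 3))                    # one variance per block

assert fixed_var > 0.0 and local_var.shape == (8, 8)
```

A decoder using `local_var` in its bit metrics can weight reliable regions more heavily than a single global variance allows, which is the intuition behind the dynamic scheme.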
Moments of inclination error distribution computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1981-01-01
A FORTRAN-coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight-history data base consists of orbit insertion errors in the trajectory parameters: altitude, velocity, flight-path angle, flight azimuth, latitude, and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions, and detailed FORTRAN coding information are included.
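The kind of moment computation the program performs can be sketched in miniature. The error samples below are invented for illustration; the actual program uses the Scott flight-history data base and a closed-form solution, neither of which is reproduced here:

```python
import statistics

# Hypothetical inclination-error samples (degrees); stand-ins for the
# flight-history data base of orbit insertion errors.
errors = [0.012, -0.004, 0.021, 0.003, -0.015, 0.009, -0.002, 0.018]

mean = statistics.fmean(errors)                  # first moment
var = statistics.pvariance(errors, mu=mean)      # second central moment
# Third central moment (an unnormalized skew measure), computed directly.
m3 = sum((e - mean) ** 3 for e in errors) / len(errors)
```

These sample moments characterize the inclination-error distribution in the same sense as the statistics the program predicts.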
Parallel Computing Environments and Methods for Power Distribution System Simulation
Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.
2005-11-10
The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC memory and its run time increases to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.
Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models
The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...
LaRC local area networks to support distributed computing
NASA Technical Reports Server (NTRS)
Riddle, E. P.
1984-01-01
The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, workstations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there has been a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources has increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, workstations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.
A distributed computing model for telemetry data processing
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.
1994-01-01
We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
Clock distribution system for digital computers
Wyman, Robert H.; Loomis, Jr., Herschel H.
1981-01-01
Apparatus for eliminating, in each clock distribution amplifier of a clock distribution system, sequential pulse catch-up error due to one pulse "overtaking" a prior clock pulse. The apparatus includes timing means to produce a periodic electromagnetic signal with a fundamental frequency having a fundamental frequency component V′_01(t); an array of N signal characteristic detector means, with detector means No. 1 receiving the timing means signal and producing a change-of-state signal V_1(t) in response to receipt of a signal above a predetermined threshold; N substantially identical filter means, one filter means being operatively associated with each detector means, for receiving the change-of-state signal V_n(t) and producing a modified change-of-state signal V′_n(t) (n = 1, …, N) having a fundamental frequency component that is substantially proportional to V′_01(t − θ_n(t)), with a cumulative phase shift θ_n(t) having a time derivative that may be made uniformly and arbitrarily small; and with the detector means n+1 (1 ≤ n
Nonlinear Fluid Computations in a Distributed Environment
NASA Technical Reports Server (NTRS)
Atwood, Christopher A.; Smith, Merritt H.
1995-01-01
The performance of a loosely and a tightly-coupled workstation cluster is compared against a conventional vector supercomputer for the solution of the Reynolds-averaged Navier-Stokes equations. The application geometries include a transonic airfoil, a tiltrotor wing/fuselage, and a wing/body/empennage/nacelle transport. Decomposition is of the manager-worker type, with solution of one grid zone per worker process coupled using the PVM message passing library. Task allocation is determined by grid size and processor speed, subject to available memory penalties. Each fluid zone is computed using an implicit diagonal scheme in an overset mesh framework, while relative body motion is accomplished using an additional worker process to re-establish grid communication.
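Task allocation by grid size and processor speed, with a memory penalty, can be sketched as a greedy assignment. The zone sizes, worker speeds, and penalty factor below are illustrative assumptions, not values from the paper:

```python
# Manager-worker allocation sketch: each grid zone goes to the worker whose
# projected completion time (accumulated grid points / speed) is lowest, with
# a large penalty once a worker's memory budget would be exceeded.
zones = [120_000, 90_000, 60_000, 45_000, 30_000, 15_000]  # grid points per zone
workers = [
    {"speed": 2.0, "mem": 200_000, "load": 0},   # fast node, larger memory
    {"speed": 1.0, "mem": 150_000, "load": 0},   # slower node, less memory
]

def projected_time(w, zone):
    t = (w["load"] + zone) / w["speed"]
    if w["load"] + zone > w["mem"]:   # memory penalty: strongly discourage paging
        t *= 10
    return t

for zone in sorted(zones, reverse=True):          # place the largest zones first
    best = min(workers, key=lambda w: projected_time(w, zone))
    best["load"] += zone

loads = [w["load"] for w in workers]
```

Largest-first placement with a projected-time cost is a common heuristic for this kind of static load balancing; the paper's actual allocation rule may differ in detail.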
Distributed sensor networks with collective computation
Lanman, D. R.
2001-01-01
Simulations of a network of N sensors have been performed. The simulation space contains a number of sound sources and a large number of sensors. Each sensor is equipped with an omni-directional microphone and is capable of measuring only the time of arrival of a signal. Sensors are able to wirelessly transmit and receive packets of information, and have some computing power. The sensors were programmed to merge all information (received packets as well as local measurements) into a 'world view' for that node. This world view is then transmitted. In this way, information can slowly diffuse across the network. One node was monitored in the network as a proxy for when information had diffused across the network. Simulations demonstrated that the energy expended per sensor per time step was approximately independent of N.
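The 'world view' diffusion described above can be illustrated with a toy simulation. The line topology and synchronous broadcast rule are simplifying assumptions made here, not the paper's actual network model:

```python
# Each node starts knowing only its own measurement; on every time step it
# merges the world views broadcast by its radio neighbours (adjacent nodes
# on a line). We count steps until every node's view is complete.
N = 10
views = [{i} for i in range(N)]           # node i initially knows only item i

steps = 0
while any(len(v) < N for v in views):
    snapshot = [set(v) for v in views]    # all nodes transmit simultaneously
    for i in range(N):
        for j in (i - 1, i + 1):          # receive from line neighbours
            if 0 <= j < N:
                views[i] |= snapshot[j]
    steps += 1
```

On a line of N nodes, information from one end takes N-1 hops to reach the other, which matches the slow diffusion the abstract describes.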
The process group approach to reliable distributed computing
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1992-01-01
The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.
Computational fluid dynamics modelling in cardiovascular medicine
Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P
2016-01-01
This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards ‘digital patient’ or ‘virtual physiological human’ representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. PMID:26512019
Transient dynamic distributed strain sensing using photonic crystal fibres
NASA Astrophysics Data System (ADS)
Samad, Shafeek A.; Hegde, G. M.; Roy Mahapatra, D.; Hanagud, S.
2014-02-01
A technique to determine the strain field in a one-dimensional (1D) photonic crystal (PC) involving the high strain rates and high temperatures around shock or ballistic impact is proposed. Transient strain sensing is important in aerospace and other structural health monitoring (SHM) applications. We consider a MEMS-based smart sensor design with a photonic crystal integrated on a silicon substrate for dynamic strain correlation. Deeply etched silicon rib waveguides with distributed Bragg reflectors are suitable candidates for miniaturization of sensing elements, replacing the conventional FBG. The main objective here is to investigate the effect of non-uniform strain localization on the sensor output. Computational analysis is performed to determine the static and dynamic strain sensing characteristics of the 1D photonic crystal based sensor. The structure is designed and modeled using the finite element method. Dynamic localization of the strain field is observed. The distributed strain field is used to calculate the PC waveguide response. The sensitivity of the proposed sensor is estimated to be 0.6 pm/μɛ.
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications, and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily allow visitors to volunteer their computing resources toward running advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational units. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform enabling large-scale hydrological simulations and model runs in an open and integrated environment.
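The chunked queue management described above can be sketched in a few lines. An in-memory Python queue stands in for the platform's relational database and JavaScript clients; names and the chunk count are illustrative:

```python
# Volunteer-computing flow sketch: work is split into small chunks, volunteers
# request a chunk, compute it, and submit the result until the job is complete.
from collections import deque

chunks = deque(range(8))        # 8 small spatial/computational work units
results = {}

def request_chunk():
    """Hand the next pending chunk to a volunteer, or None when done."""
    return chunks.popleft() if chunks else None

def submit_result(chunk_id, value):
    """Record a completed chunk's result (the database's role in the paper)."""
    results[chunk_id] = value

# A hypothetical volunteer node drains the queue.
while True:
    c = request_chunk()
    if c is None:
        break
    submit_result(c, c * c)     # stand-in for a model-run result
```

A production system would also track leases and re-queue chunks from volunteers that disconnect mid-computation, which is one reason the paper uses a database-backed queue.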
Computer Power: Part 1: Distribution of Power (and Communications).
ERIC Educational Resources Information Center
Price, Bennett J.
1988-01-01
Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)
Developing a Distributed Computing Architecture at Arizona State University.
ERIC Educational Resources Information Center
Armann, Neil; And Others
1994-01-01
Development of Arizona State University's computing architecture, designed to ensure that all new distributed computing pieces will work together, is described. Aspects discussed include the business rationale, the general architectural approach, characteristics and objectives of the architecture, specific services, and impact on the university…
Pladipus Enables Universal Distributed Computing in Proteomics Bioinformatics.
Verheggen, Kenneth; Maddelein, Davy; Hulstaert, Niels; Martens, Lennart; Barsnes, Harald; Vaudel, Marc
2016-03-01
The use of proteomics bioinformatics substantially contributes to an improved understanding of proteomes, but this novel and in-depth knowledge comes at the cost of increased computational complexity. Parallelization across multiple computers, a strategy termed distributed computing, can be used to handle this increased complexity; however, setting up and maintaining a distributed computing infrastructure requires resources and skills that are not readily available to most research groups. Here we propose a free and open-source framework named Pladipus that greatly facilitates the establishment of distributed computing networks for proteomics bioinformatics tools. Pladipus is straightforward to install and operate thanks to its user-friendly graphical interface, allowing complex bioinformatics tasks to be run easily on a network instead of a single computer. As a result, any researcher can benefit from the increased computational efficiency provided by distributed computing, hence empowering them to tackle more complex bioinformatics challenges. Notably, it enables any research group to perform large-scale reprocessing of publicly available proteomics data, thus supporting the scientific community in mining these data for novel discoveries. PMID:26510693
Product limit estimation for capturing of pressure distribution dynamics.
Wininger, Michael; Crane, Barbara A
2016-05-01
Measurement of contact pressures at the wheelchair-seating interface is a critically important approach for laboratory research and clinical application in monitoring risk for pressure ulceration. As yet, measures obtained from pressure mapping are static in nature: there is no accounting for changes in pressure distribution over time, despite the well-known interaction between time and pressure in risk estimation. Here, we introduce the first dynamic analysis for distribution of pressure data, based on the Kaplan-Meier (KM) product limit estimator (PLE), a ubiquitous tool encountered in clinical trials and survival analysis. In this approach, the pressure array-over-time data set is sub-sampled two frames at a time (random pairing), and the similarity of their pressure distributions is quantified via a correlation coefficient. A large number (here: 100) of these frame pairs is then sorted into descending order of correlation value and visualized as a KM curve; we build confidence limits via a bootstrap computed over 1000 replications. PLEs and the KM estimator have robust statistical support and extensive development: the opportunities for extended application are substantial. We propose that the KM-PLE in particular, and dynamic analysis in general, may provide key leverage on future development of seating technology, and valuable new insight into extant datasets. PMID:27021374
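The random-pairing step of the KM-PLE can be sketched as follows. The pressure frames are synthetic, and the bootstrap confidence band is omitted for brevity:

```python
# Sample random frame pairs from a pressure-map sequence, correlate each pair,
# and sort the coefficients in descending order to form the KM-style curve.
import random

random.seed(1)
T, P = 30, 16                     # 30 frames, 16 pressure cells each
frames = [[random.random() + 0.05 * t for _ in range(P)] for t in range(T)]

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

pairs = [random.sample(range(T), 2) for _ in range(100)]   # random pairing
corrs = sorted((pearson(frames[i], frames[j]) for i, j in pairs), reverse=True)
```

Plotting `corrs` against pair rank yields the descending step curve the abstract describes; repeating the procedure on resampled pairs would give the bootstrap confidence limits.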
Computational fluid dynamics in ventilation: Practical approach
NASA Astrophysics Data System (ADS)
Fontaine, J. R.
The potential of computational fluid dynamics (CFD) for conceiving ventilation systems is shown through the simulation of five practical cases. The following examples are considered: capture of pollutants on a surface treating tank equipped with a unilateral suction slot in the presence of a disturbing air draft opposed to suction; dispersion of solid aerosols inside fume cupboards; performance comparison of two general ventilation systems in a silkscreen printing workshop; ventilation of a large open painting area; and oil fog removal inside a mechanical engineering workshop. Whereas the first two problems are analyzed through two-dimensional numerical simulations, the three other cases require three-dimensional modeling. For the surface treating tank case, numerical results are compared to laboratory experiment data. All simulations are carried out using EOL, a CFD software package specially devised to deal with air quality problems in industrial ventilated premises. It contains many analysis tools to interpret the results in terms familiar to the industrial hygienist. Much experimental work has been undertaken to validate the predictions of EOL for ventilation flows.
Computational social dynamic modeling of group recruitment.
Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.
2004-01-01
The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model for group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with abstract agents, which permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to different scenario development for inner city gang recruitment.
Object Orientated Methods in Computational Fluid Dynamics.
NASA Astrophysics Data System (ADS)
Tabor, Gavin; Weller, Henry; Jasak, Hrvoje; Fureby, Christer
1997-11-01
We outline the aims of the FOAM code, a Finite Volume Computational Fluid Dynamics code written in C++, and discuss the use of Object Orientated Programming (OOP) methods to achieve these aims. The intention when writing this code was to make it as easy as possible to alter the modelling: this was achieved by making the top level syntax of the code as close as possible to conventional mathematical notation for tensors and partial differential equations. Object orientation enables us to define classes for both types of objects, and the operator overloading possible in C++ allows normal symbols to be used for the basic operations. The introduction of features such as automatic dimension checking of equations helps to enforce correct coding of models. We also discuss the use of OOP techniques such as data encapsulation and code reuse. As examples of the flexibility of this approach, we discuss the implementation of turbulence modelling using RAS and LES. The code is used to simulate turbulent flow for a number of test cases, including fully developed channel flow and flow around obstacles. We also demonstrate the use of the code for solving structural calculations and magnetohydrodynamics.
Childers, L.; Liming, L.; Foster, I.; Mathematics and Computer Science; Univ. of Chicago
2008-10-15
This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software; (2
Computation and Optimization of Dose Distributions for Rotational Stereotactic Radiosurgery
NASA Astrophysics Data System (ADS)
Fox, Timothy Harold
1994-01-01
The stereotactic radiosurgery technique presented in this work is the patient rotator method, which rotates the patient in a sitting position with a stereotactic head frame attached to the skull while collimated non-coplanar radiation beams from a 6 MV medical linear accelerator are delivered to the target point. The hypothesis of this dissertation is that accurate, three-dimensional dose distributions can be computed and optimized for the patient rotator method used in stereotactic radiosurgery. This dissertation presents research results in three areas related to computing and optimizing dose distributions for the patient rotator method. A three-dimensional dose model was developed to calculate the dose at any point in the cerebral cortex using a circular and adjustable collimator system and the geometry of the radiation beam with respect to the target point. The computed dose distributions compared to experimental measurements had an average maximum deviation of <0.7 mm for the relative isodose distributions greater than 50%. A system was developed to qualitatively and quantitatively visualize the computed dose distributions with patient anatomy. A registration method was presented for transforming each dataset to a common reference system. A method for computing the intersections of anatomical contours' boundaries was developed to calculate dose-volume information. The system efficiently and accurately reduced the large computed, volumetric sets of dose data, medical images, and anatomical contours to manageable images and graphs. A computer-aided optimization method was developed for rigorously selecting beam angles and weights to minimize the dose to normal tissue. Linear programming was applied as the optimization method. The computed optimal beam angles and weights for a defined objective function and dose constraints exhibited a superior dose distribution compared to a standard plan. The developed dose model, qualitative and quantitative visualization
Distriblets: Java-Based Distributed Computing on the Web.
ERIC Educational Resources Information Center
Finkel, David; Wills, Craig E.; Brennan, Brian; Brennan, Chris
1999-01-01
Describes a system, written in the Java programming language, for using the World Wide Web to distribute computational tasks to multiple hosts on the Web. Describes the programs written to carry out the load distribution, the structure of a "distriblet" class, and experiences in using this system. (Author/LRW)
Computational determination of absorbed dose distributions from gamma ray sources
NASA Astrophysics Data System (ADS)
Zhou, Chuanyu; Inanc, Feyzi
2001-04-01
A biomedical procedure known as brachytherapy involves insertion of many radioactive seeds into a diseased gland in order to eliminate diseased tissue. For such implementations, the spatial distribution of absorbed dose is very important. A simulation tool has been developed to determine the spatial distribution of absorbed dose in heterogeneous environments where the gamma ray source consists of many small internal radiation emitters. The computation is based on the integral transport method, and the computations are done in a parallel fashion. Preliminary results involving 137Cs and 125I sources surrounded by water, and comparison of the results to the experimental and computational data available in the literature, are presented.
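The superposition of dose from many small seeds can be illustrated with a deliberately simplified point-kernel model. This is a sketch of the general idea only, not the integral transport method of the abstract; the function name, seed representation, and attenuation coefficient `mu` are assumptions.

```python
import math

def absorbed_dose(point, seeds, mu=0.0969):
    """Toy point-kernel dose estimate at `point` from a list of seeds
    given as ((x, y, z), strength) pairs: an inverse-square geometry
    factor times exponential attenuation in water. `mu` (1/cm) is an
    assumed linear attenuation coefficient, not taken from the paper."""
    x, y, z = point
    dose = 0.0
    for (sx, sy, sz), strength in seeds:
        r = math.dist((x, y, z), (sx, sy, sz))
        r = max(r, 1e-6)  # avoid the singularity at the seed position
        dose += strength * math.exp(-mu * r) / (4 * math.pi * r * r)
    return dose
```

Because each seed contributes independently, the loop over seeds is also where a parallel implementation, as mentioned in the abstract, would naturally split the work.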
Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments
Jin, Shuangshuang; Chen, Yousu; Wu, Di; Diao, Ruisheng; Huang, Zhenyu
2015-12-09
Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by a protective branch switching operation. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using a single-processor-based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
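The distributed-memory decomposition that an MPI implementation uses can be sketched in a single process: each "rank" owns a contiguous slice of the state vector and receives one-cell halos from its neighbours instead of reading shared memory. This is an illustrative stand-in (a 1-D explicit diffusion-like update, not the power-grid equations of the paper); the function name and update stencil are assumptions.

```python
def simulate_distributed_step(x, dt, n_ranks):
    """One explicit time step of x_i += dt*(x_{i-1} - 2*x_i + x_{i+1}),
    computed rank by rank over contiguous slices. Halo values come from
    the old state, mimicking MPI message exchange; boundary cells are
    clamped to their neighbours."""
    n = len(x)
    chunk = (n + n_ranks - 1) // n_ranks
    new_x = x[:]  # gather buffer for all ranks' results
    for rank in range(n_ranks):
        lo, hi = rank * chunk, min((rank + 1) * chunk, n)
        # "received" halo cells from neighbouring ranks
        left = x[lo - 1] if lo > 0 else x[lo]
        right = x[hi] if hi < n else x[hi - 1]
        for i in range(lo, hi):
            xm = x[i - 1] if i > lo else left
            xp = x[i + 1] if i < hi - 1 else right
            new_x[i] = x[i] + dt * (xm - 2 * x[i] + xp)
    return new_x
```

Because the explicit update reads only old values, the result is identical regardless of how many ranks the domain is split across; that invariance is exactly what makes such schemes attractive for both OpenMP and MPI parallelization.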
A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Kwak, Dochan (Technical Monitor)
2001-01-01
NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
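The workload-balancing goal can be illustrated with the classic longest-processing-time greedy heuristic: sort tasks by weight and repeatedly assign the next task to the least-loaded processor. This is not the MinEX algorithm itself (which also accounts for data movement and communication cost); the function name and interface are assumptions.

```python
import heapq

def greedy_partition(task_weights, n_procs):
    """Longest-processing-time assignment: give each task, heaviest
    first, to the currently least-loaded processor. Returns a mapping
    from processor id to its list of assigned task weights."""
    loads = [(0.0, p, []) for p in range(n_procs)]
    heapq.heapify(loads)  # min-heap keyed on current load
    for w in sorted(task_weights, reverse=True):
        load, p, tasks = heapq.heappop(loads)
        tasks.append(w)
        heapq.heappush(loads, (load + w, p, tasks))
    return {p: tasks for _, p, tasks in loads}
```

A MinEX-style partitioner would extend the cost function popped from the heap to include the data-movement penalty of migrating a task away from where its data resides.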
Arcade: A Web-Java Based Framework for Distributed Computing
NASA Technical Reports Server (NTRS)
Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
Distributed heterogeneous environments are being increasingly used to execute a variety of large-scale simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.
Computational fluid dynamic modelling of cavitation
NASA Technical Reports Server (NTRS)
Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.
1993-01-01
Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability of incorporating thermodynamic effects of cryogenic fluids. Extensions of the current two-dimensional steady state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.
Experiment Dashboard for Monitoring of the LHC Distributed Computing Systems
NASA Astrophysics Data System (ADS)
Andreeva, J.; Devesas Campos, M.; Tarragon Cros, J.; Gaidioz, B.; Karavakis, E.; Kokoszkiewicz, L.; Lanciotti, E.; Maier, G.; Ollivier, W.; Nowotka, M.; Rocha, R.; Sadykov, T.; Saiz, P.; Sargsyan, L.; Sidorova, I.; Tuckett, D.
2011-12-01
LHC experiments are currently taking collision data. The distributed computing model chosen by the four main LHC experiments allows physicists to benefit from resources spread all over the world. The distributed model and the scale of LHC computing activities increase the level of complexity of the middleware, and also the chances of possible failures or inefficiencies in the components involved. In order to ensure the required performance and functionality of the LHC computing system, monitoring the status of the distributed sites and services, as well as monitoring LHC computing activities, is among the key factors. Over the last years, the Experiment Dashboard team has been working on a number of applications that facilitate the monitoring of different activities, including following up jobs and transfers, and also site and service availability. This presentation describes the Experiment Dashboard applications used by the LHC experiments and the experience gained during the first months of data taking.
Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds
NASA Astrophysics Data System (ADS)
Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.
In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures currently are undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability, together with issues including data distribution, software heterogeneity, and ad hoc hardware availability, commonly forces scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.
Computer aided analysis and optimization of mechanical system dynamics
NASA Technical Reports Server (NTRS)
Haug, E. J.
1984-01-01
The purpose is to outline a computational approach to spatial dynamics of mechanical systems that substantially enlarges the scope of consideration to include flexible bodies, feedback control, hydraulics, and related interdisciplinary effects. Design sensitivity analysis and optimization is the ultimate goal. The approach to computer generation and solution of the system dynamic equations and graphical methods for creating animations as output is outlined.
NASA Astrophysics Data System (ADS)
Nagiyev, A. E.; Sherstnyova, A. I.; Botygin, I. A.; Galanova, N. Y.
2016-06-01
The results of statistical model experiments on various load balancing algorithms in distributed computing systems are presented. Software tools were developed that allow the creation of a virtual infrastructure of a distributed computing system in accordance with the intended objective of the research, which is focused on multi-agent and multithreaded data processing. A diagram of the control processing of requests from the terminal devices, providing effective dynamic horizontal scaling of computing power at peak loads, is proposed.
Trends in computational capabilities for fluid dynamics
NASA Technical Reports Server (NTRS)
Peterson, V. L.
1985-01-01
Milestones in the development of computational aerodynamics are reviewed together with past, present, and future computer performance (speed and memory) trends. Factors influencing computer performance requirements for both steady and unsteady flow simulations are identified. Estimates of computer speed and memory that are required to calculate both inviscid and viscous, steady and unsteady flows about airfoils, wings, and simple wing body configurations are presented and compared to computer performance which is either currently available, or is expected to be available before the end of this decade. Finally, estimates of the amounts of computer time that are required to determine flutter boundaries of airfoils and wings at transonic Mach numbers are presented and discussed.
Trends in computational capabilities for fluid dynamics
NASA Technical Reports Server (NTRS)
Peterson, V. L.
1984-01-01
Milestones in the development of computational aerodynamics are reviewed together with past, present, and future computer performance (speed and memory) trends. Factors influencing computer performance requirements for both steady and unsteady flow simulations are identified. Estimates of computer speed and memory that are required to calculate both inviscid and viscous, steady and unsteady flows about airfoils, wings, and simple wing body configurations are presented and compared to computer performance which is either currently available, or is expected to be available before the end of this decade. Finally, estimates of the amounts of computer time that are required to determine flutter boundaries of airfoils and wings at transonic Mach numbers are presented and discussed.
AIR INGRESS ANALYSIS: COMPUTATIONAL FLUID DYNAMIC MODELS
Chang H. Oh; Eung S. Kim; Richard Schultz; Hans Gougar; David Petti; Hyung S. Kang
2010-08-01
The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in very high temperature reactors (VHTRs). Phenomena Identification and Ranking Studies to date have ranked an air ingress event, following on the heels of a VHTR depressurization, as important with regard to core safety. Consequently, the development of advanced air ingress-related models and verification and validation data are a very high priority. Following a loss of coolant and system depressurization incident, air will enter the core of the High Temperature Gas Cooled Reactor through the break, possibly causing oxidation of the in-core and reflector graphite structures. Simple core and plant models indicate that, under certain circumstances, the oxidation may proceed at an elevated rate with additional heat generated from the oxidation reaction itself. Under postulated conditions of fluid flow and temperature, excessive degradation of the lower plenum graphite can lead to a loss of structural support. Excessive oxidation of core graphite can also lead to the release of fission products into the confinement, which could be detrimental to reactor safety. The computational fluid dynamic models developed in this study will improve our understanding of this phenomenon. This paper presents two-dimensional and three-dimensional CFD results for the quantitative assessment of the air ingress phenomena. A portion of the results for the density-driven stratified flow in the inlet pipe will be compared with experimental results.
COMPUTATIONAL FLUID DYNAMICS MODELING ANALYSIS OF COMBUSTORS
Mathur, M.P.; Freeman, Mark; Gera, Dinesh
2001-11-06
In the current fiscal year FY01, several CFD simulations were conducted to investigate the effects of moisture in biomass/coal, particle injection locations, and flow parameters on carbon burnout and NO{sub x} inside a 150 MW GEEZER industrial boiler. Various simulations were designed to predict the suitability of biomass cofiring in coal combustors, and to explore the possibility of using biomass as a reburning fuel to reduce NO{sub x}. Some additional CFD simulations were also conducted on the CERF combustor to examine the combustion characteristics of pulverized coal in enriched O{sub 2}/CO{sub 2} environments. Most of the CFD models available in the literature treat particles as point masses with uniform temperature inside the particles. This isothermal condition may not be suitable for larger biomass particles. To this end, a stand-alone program was developed from first principles to account for heat conduction from the surface of the particle to its center. It is envisaged that the recently developed non-isothermal stand-alone module will be integrated with the Fluent solver during the next fiscal year to accurately predict the carbon burnout from larger biomass particles. Anisotropy in heat transfer will be explored using different conductivities in the radial and axial directions. The above models will be validated/tested on various full-scale industrial boilers. The current NO{sub x} modules will be modified to account for local CH, CH{sub 2}, and CH{sub 3} radical chemistry; currently they are based on global chemistry. It may also be worth exploring the effect of an enriched O{sub 2}/CO{sub 2} environment on carbon burnout and NO{sub x} concentration. The research objective of this study is to develop a 3-Dimensional Combustor Model for Biomass Co-firing and reburning applications using the Fluent Computational Fluid Dynamics Code.
Integrated computer simulation on FIR FEL dynamics
Furukawa, H.; Kuruma, S.; Imasaki, K.
1995-12-31
An integrated computer simulation code has been developed to analyze RF-Linac FEL dynamics. First, a simulation code for the electron beam acceleration and transport processes in an RF-Linac (LUNA) was developed to analyze the characteristics of the electron beam in the RF-Linac and to optimize the parameters of the RF-Linac. Second, a space-time dependent 3D FEL simulation code (Shipout) was developed. The RF-Linac FEL total simulations have been performed by using the electron beam data from LUNA in Shipout. The number of particles used in an RF-Linac FEL total simulation is approximately 1000. The CPU time for the simulation of 1 round trip is about 1.5 minutes. At ILT/ILE, Osaka, an 8.5 MeV RF-Linac with a photo-cathode RF-gun is used for FEL oscillation experiments. Using a 2 cm wiggler, FEL oscillation at a wavelength of approximately 46 {mu}m is investigated. By the simulations using LUNA with the parameters of an ILT/ILE experiment, the pulse shape and the energy spectra of the electron beam at the end of the linac are estimated. The pulse shape of the electron beam at the end of the linac has a sharp rise-up and slowly decays as a function of time. By the RF-Linac FEL total simulations with the parameters of an ILT/ILE experiment, the dependencies of the start-up of the FEL oscillations on the pulse shape of the electron beam at the end of the linac are estimated. Coherent spontaneous emission effects and a quick start-up of FEL oscillations have been observed in the RF-Linac FEL total simulations.
Dynamic leaching test of personal computer components.
Li, Yadong; Richardson, Jay B; Niu, Xiaojun; Jackson, Ollie J; Laster, Jeremy D; Walker, Aaron K
2009-11-15
A dynamic leaching test (DLT) was developed and used to evaluate the leaching of toxic substances from electronic waste in the environment. The major components in personal computers (PCs), including motherboards, hard disc drives, floppy disc drives, and compact disc drives, were tested. The tests lasted for 2 years for motherboards and 1.5 years for the disc drives. The extraction fluids for the standard toxicity characteristic leaching procedure (TCLP) and synthetic precipitation leaching procedure (SPLP) were used as the DLT leaching solutions. A total of 18 elements including Ag, Al, As, Au, Ba, Be, Cd, Cr, Cu, Fe, Ga, Ni, Pd, Pb, Sb, Se, Sn, and Zn were analyzed in the DLT leachates. Only Al, Cu, Fe, Ni, Pb, and Zn were commonly found in the DLT leachates of the PC components. Their leaching levels were much higher in TCLP extraction fluid than in SPLP extraction fluid. The toxic heavy metal Pb was found to continuously leach out of the components over the entire test periods. The cumulative amount of Pb leached out of the motherboards in TCLP extraction fluid reached 2.0 g per motherboard over the 2-year test period, and that in SPLP extraction fluid was 75-90% less. The leaching rates or levels of Pb were largely affected by the content of galvanized steel in the PC components: the higher the steel content, the lower the Pb leaching rate. The findings suggest that obsolete PCs disposed of in landfills or discarded in the environment continuously release Pb for years when subjected to landfill leachate or rain. PMID:19616380
Bringing the CMS distributed computing system into scalable operations
NASA Astrophysics Data System (ADS)
Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.
2010-04-01
Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.
Computational fluid dynamics on a massively parallel computer
NASA Technical Reports Server (NTRS)
Jespersen, Dennis C.; Levit, Creon
1989-01-01
A finite difference code was implemented for the compressible Navier-Stokes equations on the Connection Machine, a massively parallel computer. The code is based on the ARC2D/ARC3D program and uses the implicit factored algorithm of Beam and Warming. The code uses odd-even elimination to solve linear systems. Timings and computation rates are given for the code, and a comparison is made with a Cray X-MP.
Aircraft T-tail flutter predictions using computational fluid dynamics
NASA Astrophysics Data System (ADS)
Attorni, A.; Cavagna, L.; Quaranta, G.
2011-02-01
The paper presents the application of computational aeroelasticity (CA) methods to the analysis of T-tail stability in the transonic regime. For this flow condition, unsteady aerodynamics show a significant dependence on the aircraft equilibrium flight configuration, which rules both the position of shock waves in the flow field and the load distribution on the horizontal tail plane. Both these elements have an influence on the aerodynamic forces, and so on the aeroelastic stability of the system. The numerical procedure proposed makes it possible to investigate flutter stability for a free-flying aircraft, iterating until convergence the following sequence of sub-problems: search for the trimmed condition for the deformable aircraft; linearize the system about the stated equilibrium point; predict the aeroelastic stability boundaries using the inferred linear model. An innovative approach based on sliding meshes makes it possible to represent the changes of the computational fluid domain due to the motion of control surfaces used to trim the aircraft. To highlight the importance of keeping the linear model always aligned to the trim condition, and at the same time the capabilities of the computational fluid dynamics approach, the method is applied to a real aircraft with a T-tail configuration: the P180.
New security infrastructure model for distributed computing systems
NASA Astrophysics Data System (ADS)
Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.
2016-02-01
In this paper we propose a new approach to setting up a user-friendly and yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on the public key infrastructure along with proxy certificates, which are used for rights delegation. In practice, the contradiction between the limited lifetime of the proxy certificates and the unpredictable time of request processing is a big issue for the end users of the system. We propose to use hashes with unlimited lifetime, individual to each request, instead of proxy certificates. Our approach avoids the use of proxy certificates altogether. Thus the security infrastructure of a distributed computing system becomes easier to develop, support, and use.
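A per-request credential with no expiry can be illustrated with a keyed hash (HMAC) over the request identifier. This is a sketch of the general idea under assumed names and interfaces, not the scheme defined in the paper.

```python
import hashlib
import hmac

def issue_request_token(secret_key, request_id):
    """Illustrative per-request credential: an HMAC over the request id
    with a service-side secret. Unlike a proxy certificate it carries no
    lifetime; it is valid only for the single request it names."""
    return hmac.new(secret_key, request_id.encode(), hashlib.sha256).hexdigest()

def verify_request_token(secret_key, request_id, token):
    """Constant-time check that the token matches this request."""
    expected = issue_request_token(secret_key, request_id)
    return hmac.compare_digest(expected, token)
```

Because the token is bound to one request id, a long-running job never outlives its credential, which is exactly the proxy-certificate problem the abstract describes.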
Nonlinear structural analysis on distributed-memory computers
NASA Technical Reports Server (NTRS)
Watson, Brian C.; Noor, Ahmed K.
1995-01-01
A computational strategy is presented for the nonlinear static and postbuckling analyses of large complex structures on massively parallel computers. The strategy is designed for distributed-memory, message-passing parallel computer systems. The key elements of the proposed strategy are: (1) a multiple-parameter reduced basis technique; (2) a nested dissection (or multilevel substructuring) ordering scheme; (3) parallel assembly of global matrices; and (4) a parallel sparse equation solver. The effectiveness of the strategy is assessed by applying it to thermo-mechanical postbuckling analyses of stiffened composite panels with cutouts, and nonlinear large-deflection analyses of HSCT models on Intel Paragon XP/S computers. The numerical studies presented demonstrate the advantages of nested dissection-based solvers over traditional skyline-based solvers on distributed memory machines.
Exact score distribution computation for ontological similarity searches
2011-01-01
Background: Semantic similarity searches in ontologies are an important component of many bioinformatic algorithms, e.g., finding functionally related proteins with the Gene Ontology or phenotypically similar diseases with the Human Phenotype Ontology (HPO). We have recently shown that the performance of semantic similarity searches can be improved by ranking results according to the probability of obtaining a given score at random rather than by the scores themselves. However, to date, there are no algorithms for computing the exact distribution of semantic similarity scores, which is necessary for computing the exact P-value of a given score. Results: In this paper we consider the exact computation of score distributions for similarity searches in ontologies, and introduce a simple null hypothesis which can be used to compute a P-value for the statistical significance of similarity scores. We concentrate on measures based on Resnik's definition of ontological similarity. A new algorithm is proposed that collapses subgraphs of the ontology graph and thereby allows fast score distribution computation. The new algorithm is several orders of magnitude faster than the naive approach, as we demonstrate by computing score distributions for similarity searches in the HPO. It is shown that exact P-value calculation improves clinical diagnosis using the HPO compared to approaches based on sampling. Conclusions: The new algorithm enables for the first time exact P-value calculation via exact score distribution computation for ontology similarity searches. The approach is applicable to any ontology for which the annotation-propagation rule holds and can improve any bioinformatic method that makes use only of the raw similarity scores. The algorithm was implemented in Java, supports any ontology in OBO format, and is available for non-commercial and academic usage under: https://compbio.charite.de/svn/hpo/trunk/src/tools/significance/ PMID:22078312
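The naive baseline that the paper's subgraph-collapsing algorithm accelerates can be sketched on a toy ontology: enumerate every possible query of size k and tabulate the resulting Resnik-style scores to obtain the exact null distribution. This is a hedged illustration only; the ontology, IC values, and similarity variant below are assumptions, and the actual algorithm avoids this full enumeration.

```python
from collections import Counter
from itertools import combinations

# Toy ontology (hypothetical): each term's ancestor set, with the
# annotation-propagation rule already applied.
ancestors = {
    "a": {"a", "root"}, "b": {"b", "root"},
    "c": {"c", "a", "root"}, "d": {"d", "a", "root"},
}
ic = {"root": 0.0, "a": 1.0, "b": 1.0, "c": 2.0, "d": 2.0}

def resnik(q, t):
    # Resnik similarity: IC of the most informative common ancestor.
    return max(ic[x] for x in ancestors[q] & ancestors[t])

def sim(query, target):
    # Max-pairwise Resnik similarity between two term sets.
    return max(resnik(q, t) for q in query for t in target)

def exact_null(target, k):
    # Enumerate every size-k query to obtain the exact null score
    # distribution (the naive approach the paper speeds up).
    terms = list(ancestors)
    return Counter(sim(set(qs), target) for qs in combinations(terms, k))

dist = exact_null({"c"}, 2)
total = sum(dist.values())
pval = sum(n for s, n in dist.items() if s >= 2.0) / total  # exact P-value
```

On this toy ontology half of all two-term queries reach the top score, so ranking by P-value rather than raw score would treat that score as unremarkable.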
First Experiences with LHC Grid Computing and Distributed Analysis
Fisk, Ian
2010-12-01
In this presentation the experiences of the LHC experiments with grid computing were described, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes employing the infrastructure for distributed analysis. Finally, the expected evolution and future plans are outlined.
Computation of glint, glare, and solar irradiance distribution
Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh
2015-08-11
Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.
Distributed computer taxonomy based on O/S structure
NASA Technical Reports Server (NTRS)
Foudriat, Edwin C.
1985-01-01
The taxonomy considers the resource structure at the operating system level. It compares a communication-based taxonomy with the new taxonomy to illustrate how the latter better matches the client's view of the distributed computer. The results illustrate the fundamental features and what is required to construct fully distributed processing systems. The problem of using network computers on the space station is addressed. A detailed discussion of the taxonomy is not given here; information is given in the form of charts and diagrams that were used to illustrate a talk.
Computing Bisectors in a Dynamic Geometry Environment
ERIC Educational Resources Information Center
Botana, Francisco
2013-01-01
In this note, an approach combining dynamic geometry and automated deduction techniques is used to study the bisectors between points and curves. Usual teacher constructions for bisectors are discussed, showing that inherent limitations in dynamic geometry software impede their thorough study. We show that the interactive sketching of bisectors…
Eleventh Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion
NASA Technical Reports Server (NTRS)
Williams, R. W. (Compiler)
1993-01-01
Conference publication includes 79 abstracts and presentations and 3 invited presentations given at the Eleventh Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion held at George C. Marshall Space Flight Center, April 20-22, 1993. The purpose of the workshop is to discuss experimental and computational fluid dynamic activities in rocket propulsion. The workshop is an open meeting for government, industry, and academia. A broad range of topics is discussed, including computational fluid dynamic methodology, liquid and solid rocket propulsion, turbomachinery, combustion, heat transfer, and grid generation.
Distribution of dynamic loads for multiple cooperating robot manipulators
NASA Technical Reports Server (NTRS)
Walker, Ian D.; Marcus, Steven I.; Freeman, Robert A.
1989-01-01
For the situation of multiple cooperating manipulators handling a single object, a formulation is presented which allows load distribution of the combined system to be made while taking manipulator dynamics into account. First, object dynamics are used to transform the motion task, and an integrated procedure for modeling arm dynamics is detailed. Then, a method is introduced which transforms the object load to the joint level. At this level, various methods of load distribution that allow subtask performance are proposed. These methods allow the desired object motion while selecting loads that alleviate manipulator dynamic loads.
Computer architecture evaluation for structural dynamics computations: Project summary
NASA Technical Reports Server (NTRS)
Standley, Hilda M.
1989-01-01
The intent of the proposed effort is the examination of the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.
Computing Nonequilibrium Conformational Dynamics of Structured Nucleic Acid Assemblies.
Sedeh, Reza Sharifi; Pan, Keyao; Adendorff, Matthew Ralph; Hallatschek, Oskar; Bathe, Klaus-Jürgen; Bathe, Mark
2016-01-12
Synthetic nucleic acids can be programmed to form precise three-dimensional structures on the nanometer-scale. These thermodynamically stable complexes can serve as structural scaffolds to spatially organize functional molecules including multiple enzymes, chromophores, and force-sensing elements with internal dynamics that include substrate reaction-diffusion, excitonic energy transfer, and force-displacement response that often depend critically on both the local and global conformational dynamics of the nucleic acid assembly. However, high molecular weight assemblies exhibit long time-scale and large length-scale motions that cannot easily be sampled using all-atom computational procedures such as molecular dynamics. As an alternative, here we present a computational framework to compute the overdamped conformational dynamics of structured nucleic acid assemblies and apply it to a DNA-based tweezer, a nine-layer DNA origami ring, and a pointer-shaped DNA origami object, which consist of 204, 3,600, and over 7,000 basepairs, respectively. The framework employs a mechanical finite element model for the DNA nanostructure combined with an implicit solvent model to either simulate the Brownian dynamics of the assembly or alternatively compute its Brownian modes. Computational results are compared with an all-atom molecular dynamics simulation of the DNA-based tweezer. Several hundred microseconds of Brownian dynamics are simulated for the nine-layer ring origami object to reveal its long time-scale conformational dynamics, and the first ten Brownian modes of the pointer-shaped structure are predicted. PMID:26636351
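The overdamped Brownian dynamics that the framework simulates can be illustrated on a single mode. The following is a hedged sketch under strong simplifications: a one-dimensional harmonic potential stands in for the mechanical finite element model of the assembly, the implicit solvent reduces to a scalar friction coefficient, and all parameter values are assumptions.

```python
import random

def brownian_step(x, grad_u, gamma, kT, dt, rng):
    # Euler-Maruyama step of overdamped (Brownian) dynamics:
    #   dx = -(1/gamma) * dU/dx * dt + sqrt(2*kT*dt/gamma) * dW
    return (x - grad_u(x) / gamma * dt
            + (2.0 * kT * dt / gamma) ** 0.5 * rng.gauss(0.0, 1.0))

# Harmonic potential U = k*x^2/2 as a one-mode stand-in for the
# finite element stiffness of a DNA assembly (parameters assumed).
k, gamma, kT, dt = 1.0, 1.0, 1.0, 0.01
rng = random.Random(0)
x, second_moment, n = 0.0, 0.0, 200_000
for _ in range(n):
    x = brownian_step(x, lambda y: k * y, gamma, kT, dt, rng)
    second_moment += x * x
var = second_moment / n  # should approach kT/k by equipartition
```

Because the dynamics are overdamped, inertia never appears; each mode relaxes on the time scale gamma/k, which is what makes long time-scale motions of large assemblies tractable compared with all-atom molecular dynamics.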
Sonovestibular symptoms evaluated by computed dynamic posturography.
Teszler, C B; Ben-David, J; Podoshin, L; Sabo, E
2000-01-01
The investigation of stability under bilateral acoustic stimulation was undertaken in an attempt to mimic the real-life conditions of a noisy environment (e.g., industry, aviation). The Tullio phenomenon evaluated by computed dynamic posturography (CDP) under acoustic stimulation is reflected in postural unsteadiness, rather than in the classic nystagmus. With such a method, the dangerous effects of noise-induced instability can be assessed and prevented. Three groups of subjects were examined. The first (group A) included 20 patients who complained of sonovestibular symptoms (i.e., Tullio phenomenon) on the background of an inner-ear disease. The second group (B) included 20 neurootological patients without a history of Tullio phenomenon. Group C consisted of 20 patients with normal hearing, as controls. A pure-tone stimulus of 1,000 Hz at 110 dB was delivered binaurally for 20 seconds during condition 5 and condition 6 of the CDP sensory organization test. The sequence of six sensory organization conditions was performed three times with two intermissions of 15-20 minutes between the trials. The first was performed in the regular mode (quiet stance). This was followed 20 minutes later by a trial carried out in quiet stance in sensory organization tests (SOTs) 1 through 4, and with acoustic stimulation in SOT 5 and SOT 6. The last test was performed in quiet stance throughout (identical to the first trial). A significant drop in the composite equilibrium score was witnessed in group A patients upon acoustic stimulation (p < .0001). This imbalance did not disappear completely until 20 minutes later when the third sensory organization trial was performed. In fact, the composite score obtained on the last SOT was still significantly worse than the baseline. Group B and the normal subjects (group C) showed no significant change in composite score. As regards the vestibular ratio score, again, group A marked a drop on stimulation with sound (p < .004). This decrease
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm
MPWide: Light-weight communication library for distributed computing
NASA Astrophysics Data System (ADS)
Groen, Derek; Rieder, Steven; Grosso, Paola; de Laat, Cees; Portegies Zwart, Simon
2012-12-01
MPWide is a light-weight communication library for distributed computing. It is specifically developed to allow message passing over long-distance networks using path-specific optimizations. An early version of MPWide was used in the Gravitational Billion Body Project to allow simulations across multiple supercomputers.
SAGA: A standardized access layer to heterogeneous Distributed Computing Infrastructure
NASA Astrophysics Data System (ADS)
Merzky, Andre; Weidner, Ole; Jha, Shantenu
2015-09-01
Distributed Computing Infrastructure is characterized by interfaces that are heterogeneous, both syntactically and semantically. SAGA represents the most comprehensive community effort to date to address this heterogeneity by defining a simple, uniform access layer. In this paper, we describe the basic concepts underpinning its design and development. We also discuss RADICAL-SAGA, which is the most widely used implementation of SAGA.
Chandrasekhar equations and computational algorithms for distributed parameter systems
NASA Technical Reports Server (NTRS)
Burns, J. A.; Ito, K.; Powers, R. K.
1984-01-01
The Chandrasekhar equations arising in optimal control problems for linear distributed parameter systems are considered. The equations are derived via approximation theory. This approach is used to obtain existence, uniqueness, and strong differentiability of the solutions and provides the basis for a convergent computation scheme for approximating feedback gain operators. A numerical example is presented to illustrate these ideas.
ADDRESSING ENVIRONMENTAL ENGINEERING CHALLENGES WITH COMPUTATIONAL FLUID DYNAMICS
This paper discusses the status and application of Computational Fluid Dynamics (CFD) models to address environmental engineering challenges for a more detailed understanding of air pollutant source emissions, atmospheric dispersion and resulting human exposure. CFD simulations ...
PC BEEPOP - A PERSONAL COMPUTER HONEY BEE POPULATION DYNAMICS MODEL
PC BEEPOP is a computer model that simulates honey bee (Apis mellifera L.) colony population dynamics. The model consists of a system of interdependent elements, including colony condition, environmental variability, colony energetics, and contaminant exposure. It includes a mortal...
Distributed computation of graphics primitives on a transputer network
NASA Technical Reports Server (NTRS)
Ellis, Graham K.
1988-01-01
A method is developed for distributing the computation of graphics primitives on a parallel processing network. Off-the-shelf transputer boards are used to perform the graphics transformations and scan-conversion tasks that would normally be assigned to a single transputer based display processor. Each node in the network performs a single graphics primitive computation. Frequently requested tasks can be duplicated on several nodes. The results indicate that the current distribution of commands on the graphics network shows a performance degradation when compared to the graphics display board alone. A change to more computation per node for every communication (perform more complex tasks on each node) may cause the desired increase in throughput.
A fault detection service for wide area distributed computations.
Stelling, P.
1998-06-09
The potential for faults in distributed computing systems is a significant complicating factor for application developers. While a variety of techniques exist for detecting and correcting faults, the implementation of these techniques in a particular context can be difficult. Hence, we propose a fault detection service designed to be incorporated, in a modular fashion, into distributed computing systems, tools, or applications. This service uses well-known techniques based on unreliable fault detectors to detect and report component failure, while allowing the user to trade off timeliness of reporting against false positive rates. We describe the architecture of this service, report on experimental results that quantify its cost and accuracy, and describe its use in two applications: monitoring the status of system components of the GUSTO computational grid testbed, and as part of the NetSolve network-enabled numerical solver.
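The timeliness versus false-positive tradeoff of an unreliable fault detector can be sketched with a heartbeat scheme. This is a hedged illustration of the general technique, not the service's actual protocol; the class and parameter names are assumptions.

```python
class HeartbeatDetector:
    """Unreliable fault detector: a component is suspected when no
    heartbeat has arrived within `timeout` seconds. A shorter timeout
    reports failures sooner but raises the false-positive rate; that
    is the tradeoff exposed to the user."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, component, now):
        # Record the arrival time of a component's heartbeat.
        self.last_seen[component] = now

    def suspects(self, now):
        # Components whose last heartbeat is older than the timeout.
        return {c for c, t in self.last_seen.items()
                if now - t > self.timeout}

det = HeartbeatDetector(timeout=5.0)
det.heartbeat("solver", now=0.0)
det.heartbeat("monitor", now=3.0)
assert det.suspects(now=4.0) == set()       # nothing suspected yet
assert det.suspects(now=6.0) == {"solver"}  # solver missed its deadline
```

A suspicion here is only a hint, not proof of failure: a slow network can delay a heartbeat past the timeout, which is exactly why such detectors are called unreliable.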
NASA Astrophysics Data System (ADS)
Giorgino, Toni; Harvey, M. J.; de Fabritiis, Gianni
2010-08-01
Distributed computing (DC) projects tackle large computational problems by exploiting the donated processing power of thousands of volunteered computers, connected through the Internet. To efficiently employ the computational resources of one of world's largest DC efforts, GPUGRID, the project scientists require tools that handle hundreds of thousands of tasks which run asynchronously and generate gigabytes of data every day. We describe RBoinc, an interface that allows computational scientists to embed the DC methodology into the daily work-flow of high-throughput experiments. By extending the Berkeley Open Infrastructure for Network Computing (BOINC), the leading open-source middleware for current DC projects, with mechanisms to submit and manage large-scale distributed computations from individual workstations, RBoinc turns distributed grids into cost-effective virtual resources that can be employed by researchers in work-flows similar to conventional supercomputers. The GPUGRID project is currently using RBoinc for all of its in silico experiments based on molecular dynamics methods, including the determination of binding free energies and free energy profiles in all-atom models of biomolecules.
Embedding dynamical networks into distributed models
NASA Astrophysics Data System (ADS)
Innocenti, Giacomo; Paoletti, Paolo
2015-07-01
Large networks of interacting dynamical systems are well-known for the complex behaviours they are able to display, even when each node features a quite simple dynamics. Despite examples of such networks being widespread both in nature and in technological applications, the interplay between the local and the macroscopic behaviour, through the interconnection topology, is still not completely understood. Moreover, traditional analytical methods for dynamical response analysis fail because of the intrinsically large dimension of the phase space of the network which makes the general problem intractable. Therefore, in this paper we develop an approach aiming to condense all the information in a compact description based on partial differential equations. By focusing on propagative phenomena, rigorous conditions under which the original network dynamical properties can be successfully analysed within the proposed framework are derived as well. A network of Fitzhugh-Nagumo systems is finally used to illustrate the effectiveness of the proposed method.
Video/Computer Techniques for Static and Dynamic Experimental Mechanics
NASA Astrophysics Data System (ADS)
Maddux, Gene E.
1987-09-01
Recent advances in video camera and processing technology, coupled with the development of relatively inexpensive but powerful mini- and micro-computers are providing new capabilities for the experimentalist. This paper will present an overview of current areas of application and an insight into the selection of video/computer systems. The application of optical techniques for most experimental mechanics efforts involves the generation of fringe patterns that can be related to the response of an object to some loading condition. The data reduction process may be characterized as a search for fringe position information. These techniques include methods such as holographic interferometry, speckle metrology, moire, and photoelasticity. Although considerable effort has been expended in developing specialized techniques to convert these patterns to useful engineering data, there are particular advantages to the video approach. Other optical techniques are used which do not produce fringe patterns. Among these is a relatively new area of video application; that of determining the time-history of the response of a structure to dynamic excitation. In particular, these systems have been used to perform modal surveys of large, flexible space structures which make the use of conventional test instrumentation difficult, if not impossible. Video recordings of discrete targets distributed on a vibrating structure can be processed to obtain displacement, velocity, and acceleration data.
Parallel matrix transpose algorithms on distributed memory concurrent computers
Choi, Jaeyoung; Dongarra, J.; Walker, D. W.
1994-12-31
This paper describes parallel matrix transpose algorithms on distributed memory concurrent processors. We assume that the matrix is distributed over a P × Q processor template with a block scattered data distribution. P, Q, and the block size can be arbitrary, so the algorithms have wide applicability. The algorithms make use of non-blocking, point-to-point communication between processors. The use of nonblocking communication allows a processor to overlap the messages that it sends to different processors, thereby avoiding unnecessary synchronization. Combined with the matrix multiplication routine, C = A · B, the algorithms are used to compute parallel multiplications of transposed matrices, C = A^T · B^T, in the PUMMA package. Details of the parallel implementation of the algorithms are given, and results are presented for runs on the Intel Touchstone Delta computer.
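The block scattered distribution and the communication pattern a transpose induces can be sketched by computing which processor owns each block before and after the transpose. This is a hedged sketch of the data-distribution bookkeeping only, not the PUMMA algorithms themselves; the function names are assumptions.

```python
def owner(i, j, P, Q):
    # Block-scattered (block-cyclic) distribution: block (i, j) lives
    # on processor (i mod P, j mod Q) of the P x Q template.
    return (i % P, j % Q)

def transpose_messages(nb, P, Q):
    # (source, destination) processor pairs that must exchange data
    # when forming the transpose: block (i, j) of A becomes block
    # (j, i) of A^T, so it moves from owner(i, j) to owner(j, i).
    msgs = set()
    for i in range(nb):
        for j in range(nb):
            src, dst = owner(i, j, P, Q), owner(j, i, P, Q)
            if src != dst:
                msgs.add((src, dst))
    return msgs

# 4 x 4 blocks of a matrix on a 2 x 3 processor template
exchanges = transpose_messages(4, 2, 3)
```

Since each such pair must exchange blocks in both directions, posting all sends as non-blocking operations lets a processor overlap its outgoing messages instead of serializing them, which is the synchronization saving the abstract describes.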
(U) Computation acceleration using dynamic memory
Hakel, Peter
2014-10-24
Many computational applications require the repeated use of quantities, whose calculations can be expensive. In order to speed up the overall execution of the program, it is often advantageous to replace computation with extra memory usage. In this approach, computed values are stored and then, when they are needed again, they are quickly retrieved from memory rather than being calculated again at great cost. Sometimes, however, the precise amount of memory needed to store such a collection is not known in advance, and only emerges in the course of running the calculation. One problem accompanying such a situation is wasted memory space in overdimensioned (and possibly sparse) arrays. Another issue is the overhead of copying existing values to a new, larger memory space, if the original allocation turns out to be insufficient. In order to handle these runtime problems, the programmer therefore has the extra task of addressing them in the code.
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob F.; Biswas, Rupak; Frumkin, Michael; Feng, Huiyu; Biegel, Bryan (Technical Monitor)
2001-01-01
The contents include: 1) A brief history of NPB; 2) What is (not) being measured by NPB; 3) Irregular dynamic applications (UA Benchmark); and 4) Wide area distributed computing (NAS Grid Benchmarks-NGB). This paper is presented in viewgraph form.
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
NASA Technical Reports Server (NTRS)
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
AGIS: Evolution of Distributed Computing information system for ATLAS
NASA Astrophysics Data System (ADS)
Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.
2015-12-01
ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.
Some Aspects of uncertainty in computational fluid dynamics results
NASA Technical Reports Server (NTRS)
Mehta, U. B.
1991-01-01
Uncertainties are inherent in computational fluid dynamics (CFD). These uncertainties need to be systematically addressed and managed. Sources of these uncertainties are discussed. Some recommendations are made for the quantification of CFD uncertainties. A practical method of uncertainty analysis is based on sensitivity analysis. When CFD is used to design fluid dynamic systems, sensitivity-uncertainty analysis is essential.
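The sensitivity-based uncertainty analysis mentioned above can be sketched at its simplest: estimate the derivative of an output with respect to an uncertain input, then propagate the input uncertainty linearly. This is a hedged toy illustration; the drag-like function and every number below are assumptions, not results from the paper.

```python
def sensitivity_uncertainty(f, x0, dx, sigma):
    # First-order sensitivity analysis: estimate df/dx by central
    # differences, then propagate the input uncertainty linearly:
    #   u_f ~= |df/dx| * sigma_x
    dfdx = (f(x0 + dx) - f(x0 - dx)) / (2.0 * dx)
    return abs(dfdx) * sigma

# Toy "CFD output": a drag-like quantity as a function of Mach number
# (the functional form and values are illustrative assumptions).
drag = lambda mach: 0.02 + 0.1 * mach ** 2
u = sensitivity_uncertainty(drag, 0.8, 1e-4, 0.01)
```

In a real CFD setting each evaluation of f is a full flow solve, so the derivative is often obtained from a handful of perturbed runs, or from adjoint methods, rather than from many samples.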
Computer Visualization of Many-Particle Quantum Dynamics
Ozhigov, A. Y.
2009-03-10
In this paper I show the importance of computer visualization in the study of many-particle quantum dynamics. Such visualization becomes an indispensable illustrative tool for understanding the behavior of dynamic swarm-based quantum systems. It is also an important component of the corresponding simulation framework, and can simplify the study of the underlying algorithms for multi-particle quantum systems.
The Computer Simulation of Liquids by Molecular Dynamics.
ERIC Educational Resources Information Center
Smith, W.
1987-01-01
Proposes a mathematical computer model for the behavior of liquids using the classical dynamic principles of Sir Isaac Newton and the molecular dynamics method invented by other scientists. Concludes that other applications will be successful using supercomputers to go beyond simple Newtonian physics. (CW)
Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery
Luttman, A.
2012-03-30
The main long-term goal of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis based on observed spatiotemporal evolution is performed using an objective function based on expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis proceeding from observations and priors. A mathematical formulation for computing flow fields is set up, and the minimizer of the resulting problem is computed. An application to oceanic flow based on sea surface temperature is presented.
Dynamic traffic assignment on parallel computers
Nagel, K.; Frye, R.; Jakob, R.; Rickert, M.; Stretz, P.
1998-12-01
The authors describe part of the current framework of the TRANSIMS traffic research project at the Los Alamos National Laboratory. It includes parallel implementations of a route planner and a microscopic traffic simulation model. They present performance figures and results of an offline load-balancing scheme used in one of the iterative re-planning runs required for dynamic route assignment.
Computer program for flexible rotor dynamics analysis
NASA Technical Reports Server (NTRS)
Shen, F. A.
1974-01-01
Program analyzes general nonaxisymmetric and nonsynchronous transient and steady-state rotor dynamic performance of bending- and shear-wise flexible rotor-bearing system under various operating conditions. Program can be used as analytical study tool for general transient spin-speed and/or non-axisymmetric rotor motion.
Generating dynamic simulations of movement using computed muscle control.
Thelen, Darryl G; Anderson, Frank C; Delp, Scott L
2003-03-01
Computation of muscle excitation patterns that produce coordinated movements of muscle-actuated dynamic models is an important and challenging problem. Using dynamic optimization to compute excitation patterns comes at a large computational cost, which has limited the use of muscle-actuated simulations. This paper introduces a new algorithm, which we call computed muscle control, that uses static optimization along with feedforward and feedback controls to drive the kinematic trajectory of a musculoskeletal model toward a set of desired kinematics. We illustrate the algorithm by computing a set of muscle excitations that drive a 30-muscle, 3-degree-of-freedom model of pedaling to track measured pedaling kinematics and forces. Only 10 min of computer time were required to compute muscle excitations that reproduced the measured pedaling dynamics, which is over two orders of magnitude faster than conventional dynamic optimization techniques. Simulated kinematics were within 1 degree of experimental values, simulated pedal forces were within one standard deviation of measured pedal forces for nearly all of the crank cycle, and computed muscle excitations were similar in timing to measured electromyographic patterns. The speed and accuracy of this new algorithm improves the feasibility of using detailed musculoskeletal models to simulate and analyze movement. PMID:12594980
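The feedforward-plus-feedback core of computed muscle control can be sketched on one degree of freedom: command an acceleration equal to the desired acceleration plus PD feedback on the kinematic error. This is a hedged sketch only; the static-optimization step that maps the commanded acceleration to muscle excitations is omitted, a double integrator stands in for the musculoskeletal dynamics, and the gains are assumptions.

```python
import math

def computed_control_accel(q, qd, q_des, qd_des, qdd_des, kp, kv):
    # Feedforward acceleration plus PD feedback on the tracking error,
    # the control-law form used in computed muscle control.
    return qdd_des + kv * (qd_des - qd) + kp * (q_des - q)

# Track q_des(t) = sin(t) with a double-integrator stand-in for the
# model dynamics (the real method converts this acceleration into
# muscle excitations via static optimization).
dt, q, qd = 0.001, 0.0, 0.0
kp, kv = 400.0, 40.0  # critically damped: kv = 2*sqrt(kp)
for step in range(2000):
    t = step * dt
    acc = computed_control_accel(q, qd, math.sin(t), math.cos(t),
                                 -math.sin(t), kp, kv)
    qd += acc * dt  # forward-Euler integration of the stand-in plant
    q += qd * dt
# q now closely tracks sin(t) near t = 2.0
```

Because the feedback only corrects errors around a feedforward trajectory, each time step needs one static optimization rather than a trajectory-wide dynamic optimization, which is the source of the reported speedup.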
Potential applications of computational fluid dynamics to biofluid analysis
NASA Technical Reports Server (NTRS)
Kwak, D.; Chang, J. L. C.; Rogers, S. E.; Rosenfeld, M.
1988-01-01
Computational fluid dynamics has developed to the stage where it has become an indispensable part of aerospace research and design. In view of the advances made in aerospace applications, the computational approach can be used for biofluid mechanics research. Several flow simulation methods developed for aerospace problems are briefly discussed for potential applications to biofluids, especially to blood flow analysis.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.; Wang, Yun
1994-01-01
Based on a distributed parameter model for vibrations, an approximate finite dimensional dynamic compensator is designed to suppress vibrations (multiple modes with a broad band of frequencies) of a circular plate with Kelvin-Voigt damping and clamped boundary conditions. The control is realized via piezoceramic patches bonded to the plate and is calculated from information available from several pointwise observed state variables. Examples from computational studies as well as use in laboratory experiments are presented to demonstrate the effectiveness of this design.
Distributed parallel computing in stochastic modeling of groundwater systems.
Dong, Yanhui; Li, Guomin; Xu, Haizhen
2013-03-01
Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution time of 500 realizations is reduced to 3% of that of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. PMID:22823593
Dynamics of money and income distributions
NASA Astrophysics Data System (ADS)
Repetowicz, Przemysław; Hutzler, Stefan; Richmond, Peter
2005-10-01
We study the model of interacting agents proposed by Chakraborti and Chakrabarti [Eur. Phys. J. B 17 (2000) 167] that allows agents to both save and exchange wealth. Closed equations for the wealth distribution are developed using a mean field approximation. We show that when all agents have the same fixed savings propensity, subject to certain well-defined approximations defined in the text, these equations yield the conjecture proposed by Chakraborti and Chakrabarti [Eur. Phys. J. B 17 (2000) 167] for the form of the stationary agent wealth distribution. If the savings propensity for the equations is chosen according to some random distribution, we show further that the wealth distribution for large values of wealth displays a Pareto-like power-law tail, i.e., P(w) ∼ w^(-1-a). However, the value of a for the model is exactly 1. Exact numerical simulations for the model illustrate how, as the savings distribution function narrows to zero, the wealth distribution changes from a Pareto form to an exponential function. Intermediate regions of wealth may be approximately described by a power law with a > 1. However, a never reaches the values of ∼1.6-1.7 that characterise empirical wealth data. This conclusion is not changed if three-body agent exchange processes are allowed. We conclude that other mechanisms are required if the model is to agree with empirical wealth data.
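The pairwise exchange rule of the Chakraborti–Chakrabarti model is easy to simulate directly. A minimal Monte Carlo sketch, assuming the standard update with a common savings propensity λ; population size, step count, and seed are arbitrary choices:

```python
import random

def simulate(n=500, steps=200_000, lam=0.5, seed=1):
    """Pairwise wealth-exchange model with common savings propensity lam.
    In each step a random pair pools the non-saved fraction of their
    wealth and splits it randomly; total wealth is conserved exactly."""
    random.seed(seed)
    w = [1.0] * n
    for _ in range(steps):
        i, j = random.randrange(n), random.randrange(n)
        if i == j:
            continue
        eps = random.random()
        pool = (1.0 - lam) * (w[i] + w[j])  # wealth put up for exchange
        w[i], w[j] = lam * w[i] + eps * pool, lam * w[j] + (1.0 - eps) * pool
    return w

wealth = simulate()
```

With lam = 0 this reduces to the pure exchange model with an exponential stationary distribution; a nonzero common propensity shifts the mode away from zero, as the abstract's mean-field analysis describes.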
Parallel Domain Decomposition Preconditioning for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai; Kutler, Paul (Technical Monitor)
1998-01-01
This viewgraph presentation gives an overview of the parallel domain decomposition preconditioning for computational fluid dynamics. Details are given on some difficult fluid flow problems, stabilized spatial discretizations, and Newton's method for solving the discretized flow equations. Schur complement domain decomposition is described through basic formulation, simplifying strategies (including iterative subdomain and Schur complement solves, matrix element dropping, localized Schur complement computation, and supersparse computations), and performance evaluation.
Common Accounting System for Monitoring the ATLAS Distributed Computing Resources
NASA Astrophysics Data System (ADS)
Karavakis, E.; Andreeva, J.; Campana, S.; Gayazov, S.; Jezequel, S.; Saiz, P.; Sargsyan, L.; Schovancova, J.; Ueda, I.; Atlas Collaboration
2014-06-01
This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources, either generic or ATLAS-specific. This set of tools provides high-quality, scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.
Distributed Computer Networks in Support of Complex Group Practices
Wess, Bernard P.
1978-01-01
The economics of medical computer networks are presented in context with the patient care and administrative goals of medical networks. Design alternatives and network topologies are discussed with an emphasis on medical network design requirements in distributed data base design, telecommunications, satellite systems, and software engineering. The success of the medical computer networking technology is predicated on the ability of medical and data processing professionals to design comprehensive, efficient, and virtually impenetrable security systems to protect data bases, network access and services, and patient confidentiality.
Osmosis : a molecular dynamics computer simulation study
NASA Astrophysics Data System (ADS)
Lion, Thomas
Osmosis is a phenomenon of critical importance in a variety of processes ranging from the transport of ions across cell membranes and the regulation of blood salt levels by the kidneys to the desalination of water and the production of clean energy using potential osmotic power plants. However, despite its importance and over one hundred years of study, there is ongoing confusion concerning the nature of the microscopic dynamics of the solvent particles in their transfer across the membrane. In this thesis the microscopic dynamical processes underlying osmotic pressure and concentration gradients are investigated using molecular dynamics (MD) simulations. I first present a new derivation for the local pressure that can be used for determining osmotic pressure gradients. Using this result, the steady-state osmotic pressure is studied in a minimal model for an osmotic system and the steady-state density gradients are explained using a simple mechanistic hopping model for the solvent particles. The simulation setup is then modified, allowing us to explore the timescales involved in the relaxation dynamics of the system in the period preceding the steady state. Further consideration is also given to the relative roles of diffusive and non-diffusive solvent transport in this period. Finally, in a novel modification to the classic osmosis experiment, the solute particles are driven out of equilibrium by the input of energy. The effect of this modification on the osmotic pressure and the osmotic flow is studied and we find that active solute particles can cause reverse osmosis to occur. The possibility of defining a new "osmotic effective temperature" is also considered and compared to the results of diffusive and kinetic temperatures.
Dynamics of Bottlebrush Networks: A Computational Study
NASA Astrophysics Data System (ADS)
Dobrynin, Andrey; Cao, Zhen; Sheiko, Sergei
We study the dynamics of deformation of bottlebrush networks using molecular dynamics simulations and theoretical calculations. Analysis of our simulation results shows that the dynamics of bottlebrush network deformation can be described by a Rouse model for polydisperse networks with an effective Rouse time of the bottlebrush network strand, τR = τ0 Ns^2 (Nsc + 1), where Ns is the number-average degree of polymerization of the bottlebrush backbone strands between crosslinks, Nsc is the degree of polymerization of the side chains, and τ0 is a characteristic monomeric relaxation time. At time scales t smaller than the Rouse time, t < τR, the time-dependent network shear modulus decays with time as G(t) ~ ρkBT(τ0/t)^(1/2), where ρ is the monomer number density. However, at time scales t larger than the Rouse time of the bottlebrush strands between crosslinks, the network response is purely elastic, with shear modulus G(t) = G0, where G0 is the equilibrium shear modulus at small deformation. The stress evolution in the bottlebrush networks can be described by a universal function of t/τR. NSF DMR-1409710.
Borštnik, Urban; Miller, Benjamin T; Brooks, Bernard R; Janežič, Dušanka
2011-11-15
Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007
A fractal approach to dynamic inference and distribution analysis
van Rooij, Marieke M. J. W.; Nash, Bertha A.; Rajaraman, Srinivasan; Holden, John G.
2013-01-01
Event-distributions inform scientists about the variability and dispersion of repeated measurements. This dispersion can be understood from a complex systems perspective, and quantified in terms of fractal geometry. The key premise is that a distribution's shape reveals information about the governing dynamics of the system that gave rise to the distribution. Two categories of characteristic dynamics are distinguished: additive systems governed by component-dominant dynamics and multiplicative or interdependent systems governed by interaction-dominant dynamics. A logic by which systems governed by interaction-dominant dynamics are expected to yield mixtures of lognormal and inverse power-law samples is discussed. These mixtures are described by a so-called cocktail model of response times derived from human cognitive performances. The overarching goals of this article are twofold: First, to offer readers an introduction to this theoretical perspective and second, to offer an overview of the related statistical methods. PMID:23372552
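The cocktail idea above, a mixture of a lognormal body and an inverse power-law tail, can be sampled directly with standard library generators. A hedged sketch; the mixture weight and all distribution parameters below are illustrative assumptions, not values from the article:

```python
import random

def cocktail_sample(n=10_000, p_lognormal=0.6, mu=-0.5, sigma=0.4,
                    alpha=2.0, x_min=1.0, seed=7):
    """Draw response-time-like values from a lognormal/inverse power-law
    mixture: each draw comes from the lognormal component with
    probability p_lognormal, otherwise from a Pareto (power-law) tail."""
    random.seed(seed)
    out = []
    for _ in range(n):
        if random.random() < p_lognormal:
            out.append(random.lognormvariate(mu, sigma))
        else:
            out.append(x_min * random.paretovariate(alpha))
    return out

sample = cocktail_sample()
```

Fitting the mixture weight to an empirical distribution is then one way to quantify how far a system leans toward interaction-dominant (power-law-like) dynamics.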
The process group approach to reliable distributed computing
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1991-01-01
The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, more fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.
A new computational structure for real-time dynamics
Izaguirre, A.; Hashimoto, Minoru
1992-08-01
The authors present an efficient structure for the computation of robot dynamics in real time. The fundamental characteristic of this structure is the division of the computation into a high-priority synchronous task and low-priority background tasks, possibly sharing the resources of a conventional computing unit based on commercial microprocessors. The background tasks compute the inertial and gravitational coefficients as well as the forces due to the velocities of the joints. In each control sample period, the high-priority synchronous task computes the product of the inertial coefficients by the accelerations of the joints and performs the summation of the torques due to the velocities and gravitational forces. Kircanski et al. (1986) have shown that the bandwidth of the variation of joint angles and of their velocities is an order of magnitude less than the variation of joint accelerations. This result agrees with the experiments the authors have carried out using a PUMA 260 robot. Two main strategies contribute to reduce the computational burden associated with the evaluation of the dynamic equations. The first involves the use of efficient algorithms for the evaluation of the equations. The second is aimed at reducing the number of dynamic parameters by identifying beforehand the linear dependencies among these parameters, as well as carrying out a significance analysis of the parameters' contribution to the final joint torques. The actual code used to evaluate this dynamic model is entirely computer generated from experimental data, requiring no other manual intervention than performing a campaign of measurements.
Distributed-Memory Computing With the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA)
NASA Technical Reports Server (NTRS)
Riley, Christopher J.; Cheatwood, F. McNeil
1997-01-01
The Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA), a Navier-Stokes solver, has been modified for use in a parallel, distributed-memory environment using the Message-Passing Interface (MPI) standard. A standard domain decomposition strategy is used in which the computational domain is divided into subdomains with each subdomain assigned to a processor. Performance is examined on dedicated parallel machines and a network of desktop workstations. The effect of domain decomposition and frequency of boundary updates on performance and convergence is also examined for several realistic configurations and conditions typical of large-scale computational fluid dynamic analysis.
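The domain decomposition strategy described above, dividing the grid into subdomains and periodically exchanging boundary (halo) values, can be illustrated without MPI by a serial stand-in. A minimal sketch on a 1-D Laplace relaxation; the subdomain and sweep counts are arbitrary, and a real LAURA-style run would perform the exchanges with MPI messages rather than an in-memory snapshot:

```python
def decompose(n_cells, n_sub):
    """Split n_cells into contiguous subdomains as evenly as possible."""
    base, extra = divmod(n_cells, n_sub)
    bounds, start = [], 0
    for r in range(n_sub):
        size = base + (1 if r < extra else 0)
        bounds.append((start, start + size))
        start += size
    return bounds

def relax(u, n_sub=4, sweeps=200):
    """Jacobi sweeps applied subdomain by subdomain, reading neighbor
    values from a per-sweep snapshot that plays the role of the halo
    exchange at subdomain faces."""
    n = len(u)
    bounds = decompose(n, n_sub)
    for _ in range(sweeps):
        ghosts = list(u)           # stand-in for the boundary update/exchange
        new = list(u)
        for lo, hi in bounds:
            for i in range(max(lo, 1), min(hi, n - 1)):
                new[i] = 0.5 * (ghosts[i - 1] + ghosts[i + 1])
        u = new
    return u

# 1-D Laplace problem with fixed ends u=0 and u=1; the solution is linear
u = relax([0.0] * 31 + [1.0], n_sub=4, sweeps=2000)
```

Lowering the exchange frequency (snapshotting every k sweeps instead of every sweep) mimics the boundary-update-frequency trade-off the paper examines: fewer messages, slower convergence.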
Dynamics of the Kuramoto equation with spatially distributed control
NASA Astrophysics Data System (ADS)
Kashchenko, Ilia; Kaschenko, Sergey
2016-05-01
We consider a scalar complex equation with spatially distributed control. Its dynamical properties are studied by asymptotic methods when the control coefficient is either sufficiently large or sufficiently small and the distribution function is either almost symmetric or significantly nonsymmetric relative to zero. In all cases we reduce the original equation to a quasinormal form: a family of special parabolic equations that contain neither large nor small parameters and whose nonlocal dynamics determine the behaviour of solutions of the original equation.
BRaTS@Home and BOINC Distributed Computing for Parallel Computation
NASA Astrophysics Data System (ADS)
Coss, David Raymond; Flores, R.
2008-09-01
Utilizing Internet connectivity, the Berkeley Open Infrastructure for Network Computing (BOINC) provides parallel computing power without the expense of purchasing a computer cluster. BOINC, written in C++, is an open source system, acting as an intermediary between the project server and the BOINC client on the volunteer's computer. By using the idle time of computers of volunteer participants, BOINC allows scientists to build a computer cluster at the price of one server. As an example of such computational capabilities, I have developed BRaTS@Home, standing for BRaTS Ray Trace Simulation, using the BOINC distributed computing system to perform gravitational lensing ray-tracing simulations. Though BRaTS@Home is only one of many projects, 182 users in 26 different countries participate in the project. From June 2007 to April 2008, 795 computers have connected to the project server, providing an average computing power of 1.1 billion floating point operations per second(FLOPS), while the entire BOINC system averages over 1000 teraFLOPS, as of April 2008. Preliminary results of the project's gravitational ray-tracing simulations will be shown.
Semiquantum key distribution with secure delegated quantum computation
NASA Astrophysics Data System (ADS)
Li, Qin; Chan, Wai Hong; Zhang, Shengyu
2016-01-01
Semiquantum key distribution allows a quantum party to share a random key with a “classical” party who only can prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution.
Elemental: a new framework for distributed memory dense matrix computations.
Romero, N.; Poulson, J.; Marker, B.; Hammond, J.; Van de Geijn, R.
2012-02-14
Parallelizing dense matrix computations to distributed memory architectures is a well-studied subject and generally considered to be among the best understood domains of parallel computing. Two packages, developed in the mid 1990s, still enjoy regular use: ScaLAPACK and PLAPACK. With the advent of many-core architectures, which may very well take the shape of distributed memory architectures within a single processor, these packages must be revisited since the traditional MPI-based approaches will likely need to be extended. Thus, this is a good time to review lessons learned since the introduction of these two packages and to propose a simple yet effective alternative. Preliminary performance results show the new solution achieves competitive, if not superior, performance on large clusters.
Accuracy of subsurface temperature distributions computed from pulsed photothermal radiometry.
Smithies, D J; Milner, T E; Tanenbaum, B S; Goodman, D M; Nelson, J S
1998-09-01
Pulsed photothermal radiometry (PPTR) is a non-contact method for determining the temperature increase in subsurface chromophore layers immediately following pulsed laser irradiation. In this paper the inherent limitations of PPTR are identified. A time record of infrared emission from a test material due to laser heating of a subsurface chromophore layer is calculated and used as input data for a non-negatively constrained conjugate gradient algorithm. Position and magnitude of temperature increase in a model chromophore layer immediately following pulsed laser irradiation are computed. Differences between simulated and computed temperature increase are reported as a function of thickness, depth and signal-to-noise ratio (SNR). The average depth of the chromophore layer and integral of temperature increase in the test material are accurately predicted by the algorithm. When the thickness/depth ratio is less than 25%, the computed peak temperature increase is always significantly less than the true value. Moreover, the computed thickness of the chromophore layer is much larger than the true value. The accuracy of the computed subsurface temperature distribution is investigated with the singular value decomposition of the kernel matrix. The relatively small number of right singular vectors that may be used (8% of the rank of the kernel matrix) to represent the simulated temperature increase in the test material limits the accuracy of PPTR. We show that relative error between simulated and computed temperature increase is essentially constant for a particular thickness/depth ratio. PMID:9755938
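The inversion step in PPTR, recovering a non-negative subsurface temperature profile from blurred surface data, can be sketched with projected gradient descent standing in for the paper's non-negatively constrained conjugate gradient algorithm. The exponential kernel and the toy profile below are assumptions for illustration only:

```python
import math

def nonneg_solve(K, y, iters=5000):
    """Projected gradient descent for min ||K x - y||^2 subject to x >= 0,
    a simple stand-in for a non-negatively constrained CG solver."""
    m, n = len(K), len(K[0])
    # conservative step size from a row-sum bound on ||K||^2
    step = 1.0 / (2.0 * max(sum(abs(v) for v in row) for row in K) ** 2)
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(K[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [2.0 * sum(K[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, x[j] - step * g[j]) for j in range(n)]  # project onto x >= 0
    return x

# toy subsurface "temperature increase" profile and an exponential blur kernel
x_true = [0.0, 1.0, 2.0, 0.0, 0.0]
K = [[math.exp(-abs(i - j)) for j in range(5)] for i in range(5)]
y = [sum(K[i][j] * x_true[j] for j in range(5)) for i in range(5)]
x_rec = nonneg_solve(K, y)
```

With noiseless data and a well-conditioned kernel the profile is recovered almost exactly; the accuracy limits the paper reports arise when the kernel's usable singular vectors are few relative to its rank and noise is present.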
Integrating Xgrid into the HENP distributed computing model
NASA Astrophysics Data System (ADS)
Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.
2008-07-01
Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortless for users already familiar with the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.
Exponential rise of dynamical complexity in quantum computing through projections
Burgarth, Daniel Klaus; Facchi, Paolo; Giovannetti, Vittorio; Nakazato, Hiromichi; Pascazio, Saverio; Yuasa, Kazuya
2014-01-01
The ability of quantum systems to host exponentially complex dynamics has the potential to revolutionize science and technology. Therefore, much effort has been devoted to developing protocols for computation, communication and metrology that exploit this scaling, despite formidable technical difficulties. Here we show that the mere frequent observation of a small part of a quantum system can turn its dynamics from a very simple one into an exponentially complex one, capable of universal quantum computation. After discussing examples, we go on to show that this effect is generally to be expected: almost any quantum dynamics becomes universal once ‘observed’ as outlined above. Conversely, we show that any complex quantum dynamics can be ‘purified’ into a simpler one in larger dimensions. We conclude by demonstrating that even local noise can lead to an exponentially complex dynamics. PMID:25300692
Computing interface motion in compressible gas dynamics
NASA Technical Reports Server (NTRS)
Mulder, W.; Osher, S.; Sethian, James A.
1992-01-01
An analysis is conducted of the coupling of Osher and Sethian's (1988) 'Hamilton-Jacobi' level set formulation of the equations of motion for propagating interfaces to a system of conservation laws for compressible gas dynamics, giving attention to both the conservative and nonconservative differencing of the level set function. The capabilities of the method are illustrated in view of the results of numerical convergence studies of the compressible Rayleigh-Taylor and Kelvin-Helmholtz instabilities for air-air and air-helium boundaries.
The magnitude distribution of dynamically triggered earthquakes
NASA Astrophysics Data System (ADS)
Hernandez, Stephen
Large dynamic strains carried by seismic waves are known to trigger seismicity far from their source region. It is unknown, however, whether surface waves trigger only small earthquakes, or whether they can also trigger large, societally significant earthquakes. To address this question, we use a mixing model approach in which total seismicity is decomposed into two broad subclasses: "triggered" events initiated or advanced by far-field dynamic strains, and "untriggered" spontaneous events consisting of everything else. The b-value of a mixed data set, bMIX, is decomposed into a weighted sum of the b-values of its constituent components, bT and bU. For populations of earthquakes subjected to dynamic strain, the fraction of earthquakes that are likely triggered, fT, is estimated via inter-event time ratios and used to invert for bT. The confidence bounds on bT are estimated by multiple inversions of bootstrap resamplings of bMIX and fT. For Californian seismicity, the data are consistent with a single-parameter Gutenberg-Richter hypothesis governing the magnitudes of both triggered and untriggered earthquakes. Triggered earthquakes therefore seem just as likely to be societally significant as any other population of earthquakes.
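The decomposition stated above turns into arithmetic directly. A sketch that takes the weighted-sum relation b_MIX = f_T·b_T + (1 − f_T)·b_U at face value and inverts it for b_T, with a simple percentile bootstrap over resampled inputs for confidence bounds; the numerical inputs are made up for illustration:

```python
import random

def invert_bt(b_mix, b_u, f_t):
    """Solve b_MIX = f_T*b_T + (1 - f_T)*b_U for the triggered b-value b_T."""
    return (b_mix - (1.0 - f_t) * b_u) / f_t

def bootstrap_bt(b_mix_samples, f_t_samples, b_u=1.0, n_boot=2000, seed=3):
    """Percentile confidence bounds on b_T from resampled b_MIX and f_T."""
    random.seed(seed)
    ests = sorted(invert_bt(random.choice(b_mix_samples), b_u,
                            random.choice(f_t_samples))
                  for _ in range(n_boot))
    return ests[int(0.025 * n_boot)], ests[int(0.975 * n_boot)]

# if the mixed catalogue shares the untriggered b-value, so must the
# triggered subpopulation, regardless of the triggered fraction
bt = invert_bt(b_mix=1.0, b_u=1.0, f_t=0.2)
lo, hi = bootstrap_bt([0.95, 1.0, 1.05], [0.15, 0.2, 0.25])
```

Note that small f_T amplifies the uncertainty in b_T, which is why the bootstrap over fT matters as much as the one over bMIX.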
Photonic nonlinear transient computing with multiple-delay wavelength dynamics.
Martinenghi, Romain; Rybalko, Sergei; Jacquot, Maxime; Chembo, Yanne K; Larger, Laurent
2012-06-15
We report on the experimental demonstration of a hybrid optoelectronic neuromorphic computer based on a complex nonlinear wavelength dynamics including multiple delayed feedbacks with randomly defined weights. This neuromorphic approach is based on a new paradigm of a brain-inspired computational unit, intrinsically differing from Turing machines. This recent paradigm consists in expanding the input information to be processed into a higher dimensional phase space, through the nonlinear transient response of a complex dynamics excited by the input information. The computed output is then extracted via a linear separation of the transient trajectory in the complex phase space. The hyperplane separation is derived from a learning phase consisting of the resolution of a regression problem. The processing capability originates from the nonlinear transient, resulting in nonlinear transient computing. The computational performance is successfully evaluated on a standard benchmark test, namely, a spoken digit recognition task. PMID:23004274
A new technique for fast dynamic focusing law computing
NASA Astrophysics Data System (ADS)
Fritsch, C.; Cruza, J. F.; Brizuela, J.; Camacho, J.; Moreno, J. M.
2012-05-01
Dynamic focusing requires computing the individual delays for every element and every focus in the image. This is an easy and relatively fast task if the inspected medium is homogeneous. Nevertheless, difficulties arise in the presence of interfaces (e.g., wedges, immersion): refraction effects require solving Snell's law for every focus and element to find the fastest ray entry point on the interface. The process is simple but takes a long time. This work presents a new technique to compute the focusing delays for an equivalent virtual array that operates in the second medium only, thus avoiding any interface. It is nearly as fast as computing the focal laws in the homogeneous case and an order of magnitude faster than methods based on Snell's law or Fermat's principle. Furthermore, the technique is completely general and can be applied to any equipment having dynamic focusing capabilities. In fact, the technique is especially well suited for real-time focal-law computing hardware.
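The per-focus entry-point search that the virtual-array technique avoids can itself be written in a few lines, which makes the cost comparison concrete. A brute-force Fermat-principle sketch; the geometry and wave speeds are illustrative assumptions, with units arbitrary but consistent:

```python
import math

def fastest_ray(xe, h1, xf, h2, c1, c2, n=20001):
    """Brute-force Fermat search: scan candidate entry points x on the
    interface (y = 0) and keep the one minimizing travel time from an
    element at (xe, h1) in medium 1 to a focus at (xf, -h2) in medium 2."""
    lo, hi = min(xe, xf) - 1.0, max(xe, xf) + 1.0
    best_t, best_x = float("inf"), lo
    for k in range(n):
        x = lo + (hi - lo) * k / (n - 1)
        t = math.hypot(x - xe, h1) / c1 + math.hypot(xf - x, h2) / c2
        if t < best_t:
            best_t, best_x = t, x
    return best_t, best_x

# e.g. a wedge-like setup: slower medium 1 above a faster medium 2
t_min, x_entry = fastest_ray(xe=0.0, h1=30.0, xf=20.0, h2=40.0, c1=2.7, c2=5.9)
```

At the optimum the entry point satisfies Snell's law, sin(θ1)/c1 = sin(θ2)/c2; repeating this minimization for every element-focus pair is precisely the per-focus cost that makes interface imaging slow and motivates the virtual-array shortcut.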
Computational fluid dynamics combustion analysis evaluation
NASA Technical Reports Server (NTRS)
Kim, Y. M.; Shang, H. M.; Chen, C. P.; Ziebarth, J. P.
1992-01-01
This study involves the development of numerical modelling in spray combustion. These modelling efforts are mainly motivated by the need to improve computational efficiency in the stochastic particle tracking method as well as to incorporate the physical submodels of turbulence, combustion, vaporization, and dense spray effects. The present mathematical formulation and numerical methodologies can be cast into any time-marching pressure correction methodology (PCM) such as the FDNS code and the MAST code. A sequence of validation cases involving steady burning sprays and transient evaporating sprays is included.
A computer test of holographic flavour dynamics
NASA Astrophysics Data System (ADS)
Filev, Veselin G.; O'Connor, Denjoe
2016-05-01
We perform computer simulations of the Berkooz-Douglas (BD) matrix model, holographically dual to the D0/D4-brane intersection. We generate the fundamental condensate versus bare mass curve of the theory both holographically and from simulations of the BD model. Our studies show excellent agreement between the two approaches in the deconfined phase of the theory and significant deviations in the confined phase. We argue that the discrepancy in the confined phase is explained by the embedding of the D4-brane, which yields stronger α' corrections to the condensate in this phase.
Perspective: Computer simulations of long time dynamics.
Elber, Ron
2016-02-14
Atomically detailed computer simulations of complex molecular events have attracted the imagination of many researchers in the field as a source of comprehensive information on chemical, biological, and physical processes. However, one of the greatest limitations of these simulations is the accessible time scale. The physical time scales reachable by straightforward simulation are too short to address many interesting and important molecular events. In the last decade, significant advances were made in several directions (theory, software, and hardware) that substantially expand the capabilities and accuracy of these techniques. This perspective describes and critically examines some of these advances. PMID:26874473
Dynamics of strongly coupled spatially distributed logistic equations with delay
NASA Astrophysics Data System (ADS)
Kashchenko, I. S.; Kashchenko, S. A.
2015-04-01
The dynamics of a system of two logistic delay equations with spatially distributed coupling is studied. The coupling coefficient is assumed to be sufficiently large. Special nonlinear systems of parabolic equations are constructed such that the behavior of their solutions is determined in the first approximation by the dynamical properties of the original system.
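A minimal numerical illustration of two logistic delay equations with strong mutual coupling is given below. The direct two-component coupling is a simplified stand-in for the spatially distributed coupling analysed in the paper, and all parameter values are assumptions:

```python
import numpy as np

def coupled_logistic_delay(r=1.5, K=10.0, T=1.0, dt=0.01, t_end=50.0):
    """Euler integration of two logistic delay equations with strong mutual
    coupling K (illustrative stand-in for distributed coupling):
        u1' = r*u1*(1 - u1(t-T)) + K*(u2 - u1)
        u2' = r*u2*(1 - u2(t-T)) + K*(u1 - u2)
    """
    lag = int(T / dt)
    n = int(t_end / dt)
    u1 = np.full(n + lag, 0.5)   # constant history on [-T, 0]
    u2 = np.full(n + lag, 0.6)
    for i in range(lag, n + lag - 1):
        d1 = r * u1[i] * (1 - u1[i - lag]) + K * (u2[i] - u1[i])
        d2 = r * u2[i] * (1 - u2[i - lag]) + K * (u1[i] - u2[i])
        u1[i + 1] = u1[i] + dt * d1
        u2[i + 1] = u2[i] + dt * d2
    return u1[lag:], u2[lag:]

u1, u2 = coupled_logistic_delay()
# a large coupling coefficient rapidly synchronizes the two components
print("final gap:", abs(u1[-1] - u2[-1]))
```

With a large coupling coefficient the two components synchronize quickly, after which the pair behaves like a single logistic delay equation.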
Multi-VO support in IHEP's distributed computing environment
NASA Astrophysics Data System (ADS)
Yan, T.; Suo, B.; Zhao, X. H.; Zhang, X. M.; Ma, Z. T.; Yan, X. F.; Lin, T.; Deng, Z. Y.; Li, W. D.; Belov, S.; Pelevanyuk, I.; Zhemchugov, A.; Cai, H.
2015-12-01
Inspired by the success of BESDIRAC, the DIRAC-based distributed computing environment for the BESIII experiment, several other experiments operated by the Institute of High Energy Physics (IHEP), such as the Circular Electron Positron Collider (CEPC), the Jiangmen Underground Neutrino Observatory (JUNO), the Large High Altitude Air Shower Observatory (LHAASO), and the Hard X-ray Modulation Telescope (HXMT), are willing to use DIRAC to integrate the geographically distributed computing resources made available by their collaborations. To minimize manpower and hardware costs, we extended the BESDIRAC platform to support a multi-VO scenario instead of setting up a self-contained distributed computing environment for each VO. This makes DIRAC a service for the community of these experiments. To support multiple VOs, the system architecture of BESDIRAC was adjusted for scalability. The VOMS and DIRAC servers were reconfigured to manage users and groups belonging to several VOs. A lightweight storage resource manager, StoRM, is employed as the central SE to integrate local and grid data. A frontend system is designed for users' bulk job splitting, submission, and management, with plugins to support new VOs. A monitoring and accounting system is also planned to ease system administration and VO-related resource usage accounting.
Computationally intensive econometrics using a distributed matrix-programming language.
Doornik, Jurgen A; Hendry, David F; Shephard, Neil
2002-06-15
This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling. PMID:12804277
Algorithm-dependent fault tolerance for distributed computing
P. D. Hough; M. e. Goldsby; E. J. Walsh
2000-02-01
Large-scale distributed systems assembled from commodity parts, like CPlant, have become common tools in the distributed computing world. Because of their size and diversity of parts, these systems are prone to failures. Applications that are being run on these systems have not been equipped to efficiently deal with failures, nor is there vendor support for fault tolerance. Thus, when a failure occurs, the application crashes. While most programmers make use of checkpoints to allow for restarting of their applications, this is cumbersome and incurs substantial overhead. In many cases, there are more efficient and more elegant ways in which to address failures. The goal of this project is to develop a software architecture for the detection of and recovery from faults in a cluster computing environment. The detection phase relies on the latest techniques developed in the fault tolerance community. Recovery is being addressed in an application-dependent manner, thus allowing the programmer to take advantage of algorithmic characteristics to reduce the overhead of fault tolerance. This architecture will allow large-scale applications to be more robust in high-performance computing environments that are comprised of clusters of commodity computers such as CPlant and SMP clusters.
Distributing Data from Desktop to Hand-Held Computers
NASA Technical Reports Server (NTRS)
Elmore, Jason L.
2005-01-01
A system of server and client software formats and redistributes data from commercially available desktop computers to commercially available hand-held computers via both wired and wireless networks. This software is an inexpensive means of enabling engineers and technicians to gain access to current sensor data while working in locations in which such data would otherwise be inaccessible. The sensor data are first gathered by a data-acquisition server computer, then transmitted via a wired network to a data-distribution computer that executes the server portion of the present software. Data in all sensor channels -- both raw sensor outputs in millivolt units and results of conversion to engineering units -- are made available for distribution. Selected subsets of the data are transmitted to each hand-held computer via the wired and then the wireless network. The selection of the subsets and the choice of the sequences and formats for displaying the data are made by means of a user interface generated by the client portion of the software. The data displayed on the screens of hand-held units can be updated at rates from 1 to
Incomplete fusion dynamics by spin distribution measurements
Singh, D.; Ali, R.; Ansari, M. Afzal; Singh, Pushpendra P.; Sharma, M. K.; Singh, B. P.; Babu, K. Surendra; Sinha, Rishi K.; Kumar, R.; Muralithar, S.; Singh, R. P.; Bhowmik, R. K.
2010-02-15
Spin distributions for various evaporation residues populated via complete and incomplete fusion of ^16O with ^124Sn at 6.3 MeV/nucleon have been measured using a charged-particle (Z=1,2)-γ coincidence technique. Experimentally measured spin distributions of the residues produced as incomplete fusion products, associated with the 'fast' α- and 2α-emission channels observed in the 'forward cone', are found to be distinctly different from those of the residues produced as complete fusion products. Moreover, 'fast' α-particles that arise from larger angular momentum in the entrance channel are populated at relatively higher driving input angular momentum than those produced through complete fusion. The incomplete fusion residues are populated in a limited, higher-angular-momentum range, in contrast to the complete fusion products, which are populated over a broad spin range.
Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Sweby, Peter K.
1997-01-01
The global nonlinear behavior of finite discretizations for constant time steps and fixed or adaptive grid spacings is studied using tools from dynamical systems theory. Detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in the understanding of long time behavior of numerical integration and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that was difficult to detect without extensive grid and temporal refinement studies and some knowledge from dynamical systems theory. Studies revealed the various possible dangers of misinterpreting numerical simulation of realistic complex flows that are constrained by available computing power. In large scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both computation and interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in the understanding of the global nonlinear behavior of numerical algorithms and to aid the identification of the sources of numerical uncertainties in CFD.
User's Manual for Computer Program ROTOR. [to calculate tilt-rotor aircraft dynamic characteristics
NASA Technical Reports Server (NTRS)
Yasue, M.
1974-01-01
A detailed description of a computer program to calculate tilt-rotor aircraft dynamic characteristics is presented. This program consists of two parts: (1) the natural frequencies and corresponding mode shapes of the rotor blade and wing are developed from structural data (mass distribution and stiffness distribution); and (2) the frequency response (to gust and blade pitch control inputs) and eigenvalues of the tilt-rotor dynamic system, based on the natural frequencies and mode shapes, are derived. Sample problems are included to assist the user.
Challenges to computing plasma thruster dynamics
Smith, G. A.
1992-01-01
This paper describes computational challenges in modeling the high thrust and specific impulse (I_sp) expected from the proposed ion-compressed antimatter nuclear (ICAN) propulsion system. This concept uses antiprotons to induce fission reactions that jump-start a microfission/fusion process in a target compressed by low-energy ion beams. The ICAN system could readily provide the high energy density required for interplanetary space missions of short duration. In conventional rocket design, thrust is obtained by expelling a propellant under high pressure through a nozzle. A larger I_sp can be achieved by operating the system at a higher temperature. Full ionization of the propellant at high temperature introduces new and challenging questions in the design of plasma thrusters.
Distributed Computation Resources for Earth System Grid Federation (ESGF)
NASA Astrophysics Data System (ADS)
Duffy, D.; Doutriaux, C.; Williams, D. N.
2014-12-01
The Intergovernmental Panel on Climate Change (IPCC), prompted by the United Nations General Assembly, has published a series of papers in their Fifth Assessment Report (AR5) on processes, impacts, and mitigations of climate change in 2013. The science used in these reports was generated by an international group of domain experts. They studied various scenarios of climate change through the use of highly complex computer models to simulate the Earth's climate over long periods of time. The resulting total data of approximately five petabytes are stored in a distributed data grid known as the Earth System Grid Federation (ESGF). Through the ESGF, consumers of the data can find and download data with limited capabilities for server-side processing. The Sixth Assessment Report (AR6) is already in the planning stages and is estimated to create as much as two orders of magnitude more data than the AR5 distributed archive. It is clear that data analysis capabilities currently in use will be inadequate to allow for the necessary science to be done with AR6 data—the data will just be too big. A major paradigm shift from downloading data to local systems to perform data analytics must evolve to moving the analysis routines to the data and performing these computations on distributed platforms. In preparation for this need, the ESGF has started a Compute Working Team (CWT) to create solutions that allow users to perform distributed, high-performance data analytics on the AR6 data. The team will be designing and developing a general Application Programming Interface (API) to enable highly parallel, server-side processing throughout the ESGF data grid. This API will be integrated with multiple analysis and visualization tools, such as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT), netCDF Operator (NCO), and others. This presentation will provide an update on the ESGF CWT's overall approach toward enabling the necessary storage proximal computational
[Dysfunction of mitochondrial dynamic and distribution in Amyotrophic Lateral Sclerosis].
Walczak, Jarosław; Szczepanowska, Joanna
2015-01-01
Amyotrophic lateral sclerosis (ALS) is a complex disease leading to the degeneration of motor neurons. Among the early symptoms of many neurodegenerative disorders is mitochondrial dysfunction. For several decades, changes in mitochondrial morphology have been observed in tissues of patients with ALS. Mitochondria are highly dynamic organelles that constantly undergo fusion and fission and are actively transported within the cell. Proper mitochondrial dynamics and distribution are crucial for cell survival, especially for neuronal cells with long axons. This article summarizes the current knowledge about the role of mitochondrial dynamics and distribution in the pathophysiology of the familial and sporadic forms of ALS. PMID:26689011
Use of computational fluid dynamics in the design of dynamic contrast enhanced imaging phantoms.
Hariharan, Prasanna; Freed, Melanie; Myers, Matthew R
2013-09-21
Phantoms for dynamic contrast enhanced (DCE) imaging modalities such as DCE computed tomography (DCE-CT) and DCE magnetic resonance imaging (DCE-MRI) are valuable tools for evaluating and comparing imaging systems. It is important for the contrast-agent distribution within the phantom to possess a time dependence that replicates a curve observed clinically, known as the 'tumor-enhancement curve'. It is also important for the concentration field within the lesion to be as uniform as possible. This study demonstrates how computational fluid dynamics (CFD) can be applied to achieve these goals within design constraints. The distribution of the contrast agent within the simulated phantoms was investigated in relation to the influence of three factors of the phantom design. First, the interaction between the inlets and the uniformity of the contrast agent within the phantom was modeled. Second, pumps were programmed using a variety of schemes and the resultant dynamic uptake curves were compared to tumor-enhancement curves obtained from clinical data. Third, the effectiveness of pulsing the inlet flow rate to produce faster equilibration of the contrast-agent distribution was quantified. The models employed a spherical lesion and design constraints (lesion diameter, inlet-tube size and orientation, contrast-agent flow rates and fluid properties) taken from a recently published DCE-MRI phantom study. For DCE-MRI in breast cancer detection, where the target tumor-enhancement curve varies on the scale of hundreds of seconds, optimizing the number of inlet tubes and their orientation was found to be adequate for attaining concentration uniformity and reproducing the target tumor-enhancement curve. For DCE-CT in liver tumor detection, where the tumor-enhancement curve varies on a scale of tens of seconds, the use of an iterated inlet condition (programmed into the pump) enabled the phantom to reproduce the target tumor-enhancement curve within a few per cent beyond about 6
Secure distributed genome analysis for GWAS and sequence comparison computation
2015-01-01
Background The rapid increase in the availability and volume of genomic data makes significant advances in biomedical research possible, but sharing of genomic data poses challenges due to the highly sensitive nature of such data. To address the challenges, a competition for secure distributed processing of genomic data was organized by the iDASH research center. Methods In this work we propose techniques for securing computation with real-life genomic data for minor allele frequency and chi-squared statistics computation, as well as distance computation between two genomic sequences, as specified by the iDASH competition tasks. We put forward novel optimizations, including a generalization of a version of mergesort, which might be of independent interest. Results We provide implementation results of our techniques based on secret sharing that demonstrate practicality of the suggested protocols and also report on performance improvements due to our optimization techniques. Conclusions This work describes our techniques, findings, and experimental results developed and obtained as part of iDASH 2015 research competition to secure real-life genomic computations and shows feasibility of securely computing with genomic data in practice. PMID:26733307
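As an illustration of the secret-sharing style of computation used in such protocols, here is a minimal additive secret-sharing sketch for aggregating allele counts. The field size, party count, and task framing are assumptions for illustration; the actual iDASH protocols are considerably more involved:

```python
import random

P = 2**61 - 1  # prime modulus for the additive secret-sharing field (assumption)

def share(x, n_parties=3):
    """Split integer x into n additive shares that sum to x mod P.
    Any subset of fewer than n shares reveals nothing about x."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# each participant secret-shares their minor allele count (0, 1, or 2)
counts = [2, 0, 1, 2, 1]
all_shares = [share(c) for c in counts]

# each party sums its own shares locally; only the aggregate is reconstructed
party_sums = [sum(s[i] for s in all_shares) % P for i in range(3)]
total = reconstruct(party_sums)
print("aggregate minor allele count:", total)  # 6, without revealing individual counts
maf = total / (2 * len(counts))                # minor allele frequency
```

Additive sharing makes sums free (each party adds locally); products and comparisons, needed for chi-squared statistics and sequence distances, require interactive protocols on top of this primitive.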
SD-CAS: Spin Dynamics by Computer Algebra System.
Filip, Xenia; Filip, Claudiu
2010-11-01
A computer algebra tool for describing the Liouville-space quantum evolution of nuclear spin-1/2 systems is introduced and implemented within a computational framework named Spin Dynamics by Computer Algebra System (SD-CAS). A distinctive feature compared with numerical and previous computer algebra approaches to spin dynamics problems is that no matrix representation for spin operators is used in SD-CAS, which gives the computations a fully symbolic character. Spin correlations are stored in SD-CAS as four-entry nested lists whose size increases linearly with the number of spins in the system, and they are easily mapped onto analytical expressions in terms of spin operator products. For the SD-CAS spin correlations so defined, a set of specialized functions and procedures is introduced that is essential for implementing basic spin algebra operations, such as spin operator products, commutators, and scalar products. These provide results in an abstract algebraic form; specific procedures to quantitatively evaluate such symbolic expressions with respect to the spin interaction parameters and experimental conditions involved are also discussed. Although the main focus of the present work is on laying the foundation for symbolic spin dynamics computation in NMR based on a non-matrix formalism, practical aspects are also considered throughout the theoretical development. In particular, specific SD-CAS routines have been implemented using the YACAS computer algebra package (http://yacas.sourceforge.net), and their functionality is demonstrated on a few illustrative examples. PMID:20843716
NASA Technical Reports Server (NTRS)
Kleis, Stanley J.; Truong, Tuan; Goodwin, Thomas J.
2004-01-01
This report is a documentation of a fluid dynamic analysis of the proposed Automated Static Culture System (ASCS) cell module mixing protocol. The report consists of a review of some basic fluid dynamics principles appropriate for the mixing of a patch of high oxygen content media into the surrounding media which is initially depleted of oxygen, followed by a computational fluid dynamics (CFD) study of this process for the proposed protocol over a range of the governing parameters. The time histories of oxygen concentration distributions and mechanical shear levels generated are used to characterize the mixing process for different parameter values.
A computationally efficient OMP-based compressed sensing reconstruction for dynamic MRI
NASA Astrophysics Data System (ADS)
Usman, M.; Prieto, C.; Odille, F.; Atkinson, D.; Schaeffter, T.; Batchelor, P. G.
2011-04-01
Compressed sensing (CS) methods in MRI are computationally intensive. Thus, designing novel CS algorithms that can perform faster reconstructions is crucial for everyday applications. We propose a computationally efficient orthogonal matching pursuit (OMP)-based reconstruction, specifically suited to cardiac MR data. According to the energy distribution of a y-f space obtained from a sliding window reconstruction, we label the y-f space as static or dynamic. For static y-f space images, a computationally efficient masked OMP reconstruction is performed, whereas for dynamic y-f space images, standard OMP reconstruction is used. The proposed method was tested on a dynamic numerical phantom and two cardiac MR datasets. Depending on the field of view composition of the imaging data, compared to the standard OMP method, reconstruction speedup factors ranging from 1.5 to 2.5 are achieved.
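The core OMP step (greedy atom selection followed by a least-squares fit on the current support) can be sketched as follows. The `mask` argument mimics, in spirit only, restricting the search space for static y-f regions; the dictionary sizes and test signal are illustrative assumptions:

```python
import numpy as np

def omp(A, y, sparsity, mask=None):
    """Orthogonal matching pursuit: greedily pick the atom of A most
    correlated with the residual, then least-squares fit on the support.
    `mask` optionally restricts the search to a candidate support
    (loosely inspired by the paper's masked OMP for static regions)."""
    residual = y.copy()
    support = []
    candidates = np.arange(A.shape[1]) if mask is None else np.asarray(mask)
    for _ in range(sparsity):
        corr = np.abs(A[:, candidates].T @ residual)
        support.append(candidates[np.argmax(corr)])
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 128))
A /= np.linalg.norm(A, axis=0)          # unit-norm atoms
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [1.0, -2.0, 1.5]  # 3-sparse ground truth
x_hat = omp(A, A @ x_true, sparsity=3)
print(np.nonzero(x_hat)[0])  # expected to recover the support {5, 40, 90}
```

Restricting `candidates` to a precomputed mask shrinks the correlation search at every iteration, which is where the speedup of a masked reconstruction comes from.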
Experience with automatic, dynamic load balancing and adaptive finite element computation
Wheat, S.R.; Devine, K.D.; Maccabe, A.B.
1993-10-01
Distributed memory, massively parallel (MP), MIMD technology has enabled the development of applications requiring computational resources previously unobtainable. Structural mechanics and fluid dynamics applications, for example, are often solved by finite element methods (FEMs) requiring millions of degrees of freedom to accurately simulate physical phenomena. Adaptive methods, which automatically refine or coarsen meshes and vary the order of accuracy of the numerical solution, offer greater robustness and computational efficiency than traditional FEMs by reducing the amount of computation required away from physical structures such as shock waves and boundary layers. On MP computers, FEMs frequently result in distributed processor load imbalances. To overcome load imbalance, many MP FEMs use static load balancing as a preprocessor to the finite element calculation. Adaptive methods complicate the load imbalance problem, since the work per element is not uniform across the solution domain and changes as the computation proceeds. Therefore, dynamic load balancing is required to maintain global load balance. We describe a dynamic, fine-grained, element-based data migration system that maintains global load balance and is effective in the presence of changing work loads. Global load balance is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method utilizes an automatic element management system library into which a programmer integrates the application's computational description. The library's flexibility supports a large class of finite element and finite difference based applications.
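The overlapping-neighborhood idea can be illustrated with a toy diffusion-style balancer on a ring of processors. The ring topology, transfer fraction, and initial loads below are assumptions for illustration, not the element-migration system described above:

```python
def balance_step(loads, alpha=0.25):
    """One round of local load balancing on a ring: each processor
    exchanges a fraction alpha of any load difference with its two
    neighbors. Because neighborhoods overlap, repeated local exchanges
    drive the whole ring toward global balance."""
    n = len(loads)
    new = loads[:]
    for i in range(n):
        for j in (i - 1, i + 1):
            j %= n
            delta = alpha * (loads[i] - loads[j]) / 2
            new[i] -= delta   # shed work toward the lighter neighbor
            new[j] += delta
    return new

loads = [100.0, 0.0, 0.0, 0.0, 40.0, 0.0, 0.0, 20.0]
for _ in range(50):
    loads = balance_step(loads)
print([round(x, 2) for x in loads])  # converges toward the mean load of 20
```

Each exchange conserves total work, so the scheme never creates or destroys load; it only diffuses imbalance until every neighborhood, and hence the ring, is level.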
Introducing Computational Fluid Dynamics Simulation into Olfactory Display
NASA Astrophysics Data System (ADS)
Ishida, Hiroshi; Yoshida, Hitoshi; Nakamoto, Takamichi
An olfactory display is a device that delivers various odors to the user's nose. It can be used to add special effects to movies and games by releasing odors relevant to the scenes shown on the screen. In order to provide high-presence olfactory stimuli to the users, the display must be able to generate realistic odors with appropriate concentrations in a timely manner together with visual and audio playbacks. In this paper, we propose to use computational fluid dynamics (CFD) simulations in conjunction with the olfactory display. Odor molecules released from their source are transported mainly by turbulent flow, and their behavior can be extremely complicated even in a simple indoor environment. In the proposed system, a CFD solver is employed to calculate the airflow field and the odor dispersal in the given environment. An odor blender is used to generate the odor with the concentration determined based on the calculated odor distribution. Experimental results on presenting odor stimuli synchronously with movie clips show the effectiveness of the proposed system.
Computational Fluid Dynamics Analysis of Canadian Supercritical Water Reactor (SCWR)
NASA Astrophysics Data System (ADS)
Movassat, Mohammad; Bailey, Joanne; Yetisir, Metin
2015-11-01
A Computational Fluid Dynamics (CFD) simulation was performed on the proposed design for the Canadian SuperCritical Water Reactor (SCWR). The proposed Canadian SCWR is a 1200 MW(e) supercritical light-water cooled nuclear reactor with pressurized fuel channels. The reactor concept uses an inlet plenum that all fuel channels are attached to and an outlet header nested inside the inlet plenum. The coolant enters the inlet plenum at 350 C and exits the outlet header at 625 C. The operating pressure is approximately 26 MPa. The high pressure and high temperature outlet conditions result in a higher electric conversion efficiency as compared to existing light water reactors. In this work, CFD simulations were performed to model fluid flow and heat transfer in the inlet plenum, outlet header, and various parts of the fuel assembly. The ANSYS Fluent solver was used for simulations. Results showed that mass flow rate distribution in fuel channels varies radially and the inner channels achieve higher outlet temperatures. At the outlet header, zones with rotational flow were formed as the fluid from 336 fuel channels merged. Results also suggested that insulation of the outlet header should be considered to reduce the thermal stresses caused by the large temperature gradients.
Numerical simulation of landfill aeration using computational fluid dynamics.
Fytanidis, Dimitrios K; Voudrias, Evangelos A
2014-04-01
The present study is an application of Computational Fluid Dynamics (CFD) to the numerical simulation of landfill aeration systems. Specifically, the CFD algorithms provided by the commercial solver ANSYS Fluent 14.0, combined with an in-house source code developed to modify the main solver, were used. The unsaturated multiphase flow of air and liquid phases and the biochemical processes for aerobic biodegradation of the organic fraction of municipal solid waste were simulated taking into consideration their temporal and spatial evolution, as well as complex effects, such as oxygen mass transfer across phases, unsaturated flow effects (capillary suction and unsaturated hydraulic conductivity), temperature variations due to biochemical processes and environmental correction factors for the applied kinetics (Monod and 1st order kinetics). The developed model results were compared with literature experimental data. Also, pilot scale simulations and sensitivity analysis were implemented. Moreover, simulation results of a hypothetical single aeration well were shown, while its zone of influence was estimated using both the pressure and oxygen distribution. Finally, a case study was simulated for a hypothetical landfill aeration system. Both a static (steadily positive or negative relative pressure with time) and a hybrid (following a square wave pattern of positive and negative values of relative pressure with time) scenarios for the aeration wells were examined. The results showed that the present model is capable of simulating landfill aeration and the obtained results were in good agreement with corresponding previous experimental and numerical investigations. PMID:24525420
Dynamic ventilation imaging from four-dimensional computed tomography
NASA Astrophysics Data System (ADS)
Guerrero, Thomas; Sanders, Kevin; Castillo, Edward; Zhang, Yin; Bidaut, Luc; Pan, Tinsu; Komaki, Ritsuko
2006-02-01
A novel method for dynamic ventilation imaging of the full respiratory cycle from four-dimensional computed tomography (4D CT) acquired without added contrast is presented. Three cases with 4D CT images obtained with respiratory gated acquisition for radiotherapy treatment planning were selected. Each of the 4D CT data sets was acquired during resting tidal breathing. A deformable image registration algorithm mapped each (voxel) corresponding tissue element across the 4D CT data set. From local average CT values, the change in fraction of air per voxel (i.e. local ventilation) was calculated. A 4D ventilation image set was calculated using pairs formed with the maximum expiration image volume, first the exhalation then the inhalation phases representing a complete breath cycle. A preliminary validation using manually determined lung volumes was performed. The calculated total ventilation was compared to the change in contoured lung volumes between the CT pairs (measured volume). A linear regression resulted in a slope of 1.01 and a correlation coefficient of 0.984 for the ventilation images. The spatial distribution of ventilation was found to be case specific and a 30% difference in mass-specific ventilation between the lower and upper lung halves was found. These images may be useful in radiotherapy planning.
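The per-voxel ventilation quantity described above (change in fraction of air per voxel) can be illustrated with a simplified linear air-tissue mixture model. Tissue at 0 HU and air at -1000 HU are modeling assumptions, and the paper's exact formulation over registered 4D CT phases differs:

```python
def air_fraction(hu):
    """Approximate fraction of air in a voxel from its CT number,
    assuming tissue ~ 0 HU and air ~ -1000 HU (linear mixture model)."""
    return max(0.0, min(1.0, -hu / 1000.0))

def local_ventilation(hu_exhale, hu_inhale):
    """Change in air content between matched voxels at exhale and inhale;
    in practice the voxels are matched by deformable image registration."""
    return air_fraction(hu_inhale) - air_fraction(hu_exhale)

# a voxel going from -700 HU at exhale to -850 HU at inhale gained air
print(round(local_ventilation(-700, -850), 3))  # 0.15
```

Summing this quantity over the registered lung voxels gives a total ventilation that can be checked against the change in contoured lung volume, which is essentially the validation reported in the abstract.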
Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations
NASA Technical Reports Server (NTRS)
Chrisochoides, Nikos
1995-01-01
We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDEs) on multiprocessors. Multithreading is used as a means of exploiting concurrency at the processor level in order to tolerate the synchronization costs inherent in traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask the overheads of dynamically balancing processor workloads with the computations required for the actual numerical solution of the PDEs. Multithreading can also simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data-parallel adaptive PDE computations. On the other hand, multithreading itself increases software complexity and can make code re-use more difficult.
Distributed dynamic load on composite laminates
NASA Astrophysics Data System (ADS)
Langella, A.; Lopresto, V.; Caprino, G.
2016-05-01
An experimental activity conducted to assess the impact behavior, at room and low temperature, of carbon-fibre/vinylester-resin laminates used in the shipbuilding industry is reported. The conditions of a hull impacting, at low temperature, a solid body suspended in water were reproduced. Test equipment was designed and built to reproduce the real behaviour of the material in water and to obtain a load distribution over the entire surface of the specimen. The results were obtained by impacting laminates placed between a cylindrical steel impactor and a bag containing water. A falling-weight machine, equipped with an instrumented steel impactor and a thermal chamber, was adopted for the experimental tests. The impact behaviour in hostile environments was compared with the behaviour at room temperature, and the data obtained under distributed-load conditions were compared with the results from concentrated loads: completely different behaviour was observed between the two loading conditions in terms of the load-displacement curve. The effect of the impact on the laminates was related to the delaminations, evaluated by ultrasonic scanning, and to the indentation.
Computational Fluid Dynamics Simulation of Fluidized Bed Polymerization Reactors
Rong Fan
2006-08-09
Fluidized bed (FB) reactors are widely used in the polymerization industry due to their superior heat- and mass-transfer characteristics. Nevertheless, problems associated with local overheating of polymer particles and excessive agglomeration leading to defluidization of FB reactors still persist and limit the range of operating temperatures that can be safely achieved in plant-scale reactors. Much work has been devoted to the modeling of FB polymerization reactors, and quite a few models are available in the open literature, such as the well-mixed model developed by McAuley, Talbot, and Harris (1994), the constant bubble size model (Choi and Ray, 1985), and the heterogeneous three-phase model (Fernandes and Lona, 2002). Most of these works focus on kinetic aspects, but from an industrial viewpoint the behavior of FB reactors should be modeled by considering the particle and fluid dynamics in the reactor. Computational fluid dynamics (CFD) is a powerful tool for understanding the effect of fluid dynamics on chemical reactor performance. For single-phase flows, CFD models for turbulent reacting flows are now well understood and routinely applied to investigate complex flows with detailed chemistry. For multiphase flows, the state of the art in CFD models is changing rapidly, and it is now possible to predict reasonably well the flow characteristics of gas-solid FB reactors with mono-dispersed, non-cohesive solids. This thesis is organized into seven chapters. In Chapter 2, an overview of fluidized bed polymerization reactors is given, and a simplified two-site kinetic mechanism is discussed. Some basic theories used in this work are given in detail in Chapter 3. First, the governing equations and other constitutive equations for the multi-fluid model are summarized, and the kinetic theory for describing the solid stress tensor is discussed. The detailed derivation of DQMOM for the population balance equation is given in the second section. In this section
Portable lamp with dynamically controlled lighting distribution
Siminovitch, Michael J.; Page, Erik R.
2001-01-01
A double lamp table or floor lamp lighting system has a pair of compact fluorescent lamps (CFLs) arranged vertically with a reflective septum in between. By selectively turning on one or both of the CFLs, down lighting, up lighting, or both up and down lighting is produced. The control system can also vary the light intensity from each CFL. The reflective septum ensures that almost all the light produced by each lamp will be directed into the desired light distribution pattern, which is selected and easily changed by the user. Planar compact fluorescent lamps, e.g. circular CFLs, particularly oriented horizontally, are preferable. CFLs provide energy efficiency. The lighting system may be designed for the home, hospitality, office or other environments.
Computational fluid dynamics - Current capabilities and directions for the future
NASA Technical Reports Server (NTRS)
Kutler, Paul
1989-01-01
Computational fluid dynamics (CFD) has made great strides in the detailed simulation of complex fluid flows, including some of those not before understood. It is now being routinely applied to some rather complicated problems and starting to affect the design cycle of aerospace flight vehicles and their components. It is being used to complement, and is being complemented by, experimental studies. Several examples are presented in the paper to illustrate the current state of the art. Included is a discussion of the barriers to accomplishing the basic objective of numerical simulation. In addition, the directions for the future in the discipline of computational fluid dynamics are addressed.
ATLAS Distributed Computing Monitoring tools during the LHC Run I
NASA Astrophysics Data System (ADS)
Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration
2014-06-01
This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or service expert; ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and ATLAS Management, for long-term trends and accounting information about ATLAS Distributed Computing resources. During LHC Run I a significant development effort was invested in standardizing the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed with the visual identity of the provided graphical elements in mind, and with re-usability of the visualization components across the different tools. A rich family of filtering and searching options enhancing the available user interfaces comes naturally with this separation of the data and visualization layers. With a variety of reliable monitoring data accessible through standardized interfaces, automating actions under well-defined conditions that correlate multiple data sources has become feasible. In this contribution we also discuss the automated exclusion of degraded resources and their automated recovery in various activities.
Qualification of a computer program for drill string dynamics
Stone, C.M.; Carne, T.G.; Caskey, B.C.
1985-01-01
A four point plan for the qualification of the GEODYN drill string dynamics computer program is described. The qualification plan investigates both modal response and transient response of a short drill string subjected to simulated cutting loads applied through a polycrystalline diamond compact (PDC) bit. The experimentally based qualification shows that the analytical techniques included in Phase 1 GEODYN correctly simulate the dynamic response of the bit-drill string system. 6 refs., 8 figs.
Population-based learning of load balancing policies for a distributed computer system
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; Wah, Benjamin W.
1993-01-01
Effective load-balancing policies use dynamic resource information to schedule tasks in a distributed computer system. We present a novel method for automatically learning such policies. At each site in our system, we use a comparator neural network to predict the relative speedup of an incoming task using only the resource-utilization patterns obtained prior to the task's arrival. Outputs of these comparator networks are broadcast periodically over the distributed system, and the resource schedulers at each site use these values to determine the best site for executing an incoming task. The delays incurred in propagating workload information and tasks from one site to another, as well as the dynamic and unpredictable nature of workloads in multiprogrammed multiprocessors, may cause the workload pattern at the time of execution to differ from patterns prevailing at the times of load-index computation and decision making. Our load-balancing policy accommodates this uncertainty by using certain tunable parameters. We present a population-based machine-learning algorithm that adjusts these parameters in order to achieve high average speedups with respect to local execution. Our results show that our load-balancing policy, when combined with the comparator neural network for workload characterization, is effective in exploiting idle resources in a distributed computer system.
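The scheduling decision described above can be sketched in a few lines. The site names, speedup values, and the `best_site` helper are hypothetical; in the paper the predictions come from per-site comparator neural networks fed with resource-utilization patterns, not from a fixed table.

```python
# Hypothetical predicted relative speedups broadcast by each site.
predicted_speedup = {"siteA": 0.9, "siteB": 1.7, "siteC": 1.2}

def best_site(predictions, threshold=1.0):
    """Send the task to the site with the highest predicted speedup over
    local execution; run locally if no site is predicted to beat it."""
    site, speedup = max(predictions.items(), key=lambda kv: kv[1])
    return site if speedup > threshold else "local"

print(best_site(predicted_speedup))  # prints "siteB"
```

The tunable parameters mentioned in the abstract (here reduced to a single `threshold`) are what the population-based learning algorithm adjusts.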
Forward and adjoint sensitivity computation of chaotic dynamical systems
Wang, Qiqi
2013-02-15
This paper describes a forward algorithm and an adjoint algorithm for computing sensitivity derivatives in chaotic dynamical systems, such as the Lorenz attractor. The algorithms compute the derivative of long-time-averaged "statistical" quantities with respect to infinitesimal perturbations of the system parameters, and are demonstrated on the Lorenz attractor. We show that sensitivity derivatives of statistical quantities can be accurately estimated using a single, short trajectory (over a time interval of 20) on the Lorenz attractor.
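As a rough illustration of the quantity being differentiated, the sketch below integrates the Lorenz system with forward Euler and forms a naive central-difference estimate of d<z>/d(rho). The step sizes and initial condition are arbitrary choices, and the noisiness of such finite-difference estimates for chaotic systems is precisely what motivates the paper's specialized forward and adjoint algorithms.

```python
def lorenz_average_z(rho, sigma=10.0, beta=8.0 / 3.0, dt=0.001, t_total=20.0):
    """Time average of z along one Lorenz trajectory (forward Euler) -- the
    kind of long-time-averaged 'statistical' quantity whose parameter
    sensitivity the forward/adjoint algorithms estimate."""
    x, y, z = 1.0, 1.0, 25.0
    n = int(t_total / dt)
    z_sum = 0.0
    for _ in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        z_sum += z
    return z_sum / n

# Naive central-difference sensitivity d<z>/d(rho); noisy for chaotic systems.
drho = 0.5
sens = (lorenz_average_z(28.0 + drho) - lorenz_average_z(28.0 - drho)) / (2 * drho)
```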
Computational Fluid Dynamics. [numerical methods and algorithm development
NASA Technical Reports Server (NTRS)
1992-01-01
This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.
Aono, Masashi; Naruse, Makoto; Kim, Song-Ju; Wakabayashi, Masamitsu; Hori, Hirokazu; Ohtsu, Motoichi; Hara, Masahiko
2013-06-18
Biologically inspired computing devices and architectures are expected to overcome the limitations of conventional technologies in terms of solving computationally demanding problems, adapting to complex environments, reducing energy consumption, and so on. We previously demonstrated that a primitive single-celled amoeba (a plasmodial slime mold), which exhibits complex spatiotemporal oscillatory dynamics and sophisticated computing capabilities, can be used to search for a solution to a very hard combinatorial optimization problem. We successfully extracted the essential spatiotemporal dynamics by which the amoeba solves the problem. This amoeba-inspired computing paradigm can be implemented by various physical systems that exhibit suitable spatiotemporal dynamics resembling the amoeba's problem-solving process. In this Article, we demonstrate that photoexcitation transfer phenomena in certain quantum nanostructures mediated by optical near-field interactions generate the amoebalike spatiotemporal dynamics and can be used to solve the satisfiability problem (SAT), which is the problem of judging whether a given logical proposition (a Boolean formula) is self-consistent. SAT is related to diverse application problems in artificial intelligence, information security, and bioinformatics and is a crucially important nondeterministic polynomial time (NP)-complete problem, which is believed to become intractable for conventional digital computers when the problem size increases. We show that our amoeba-inspired computing paradigm dramatically outperforms a conventional stochastic search method. These results indicate the potential for developing highly versatile nanoarchitectonic computers that realize powerful solution searching with low energy consumption. PMID:23565603
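To make the problem concrete, the sketch below is a brute-force SAT check in a DIMACS-style encoding; its 2^n cost is what makes heuristic and physical solvers such as the amoeba-inspired approach attractive for large instances. The encoding and example formula are illustrative, not taken from the Article.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exhaustive satisfiability check for a CNF formula in a DIMACS-style
    encoding (literal k: variable |k| is True if k > 0, False if k < 0).
    Cost grows as 2**n_vars."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
cnf = [[1, -2], [2, 3], [-1, -3]]
print(brute_force_sat(cnf, 3))          # True
print(brute_force_sat([[1], [-1]], 1))  # False
```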
Analytical and Computational Properties of Distributed Approaches to MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches-Optimization by Linear Decomposition and Collaborative Optimization-are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis
Next generation database relational solutions for ATLAS distributed computing
NASA Astrophysics Data System (ADS)
Dimitrov, G.; Maeno, T.; Garonne, V.; Atlas Collaboration
2014-06-01
The ATLAS Distributed Computing (ADC) project delivers production tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system sustained the needed computing activities with high efficiency during the first run of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Databases are a vital part of the whole ADC system. The Oracle Relational Database Management System (RDBMS) has addressed the majority of the ADC database requirements for many years. Much expertise was gained through the years, and it will serve as a good foundation for the next-generation PanDA (Production ANd Distributed Analysis) and DDM (Distributed Data Management) systems. In this paper we present the current production ADC database solutions, notably the planned changes to the PanDA system and the next-generation ATLAS DDM system called Rucio. Significant work was performed on studying different solutions to arrive at the best relational and physical database model for performance and scalability, in order to be ready for deployment and operation in 2014.
Parallel matrix transpose algorithms on distributed memory concurrent computers
Choi, J.; Walker, D.W.; Dongarra, J.J. |
1993-10-01
This paper describes parallel matrix transpose algorithms on distributed memory concurrent processors. It is assumed that the matrix is distributed over a P x Q processor template with a block scattered data distribution. P, Q, and the block size can be arbitrary, so the algorithms have wide applicability. The communication schemes of the algorithms are determined by the greatest common divisor (GCD) of P and Q. If P and Q are relatively prime, the matrix transpose algorithm involves complete exchange communication. If P and Q are not relatively prime, processors are divided into GCD groups and the communication operations are overlapped for different groups of processors. Processors transpose GCD wrapped diagonal blocks simultaneously, and the matrix can be transposed with LCM/GCD steps, where LCM is the least common multiple of P and Q. The algorithms make use of non-blocking, point-to-point communication between processors. The use of nonblocking communication allows a processor to overlap the messages that it sends to different processors, thereby avoiding unnecessary synchronization. Combined with the matrix multiplication routine, C = A{center_dot}B, the algorithms are used to compute parallel multiplications of transposed matrices, C = A{sup T}{center_dot}B{sup T}, in the PUMMA package. Details of the parallel implementation of the algorithms are given, and results are presented for runs on the Intel Touchstone Delta computer.
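The GCD/LCM bookkeeping described above can be computed directly. The `transpose_schedule` helper is hypothetical (not code from PUMMA); it only reproduces the counting rule stated in the abstract.

```python
from math import gcd

def transpose_schedule(P, Q):
    """Communication steps and grouping for transposing a block-scattered
    matrix on a P x Q processor template, following the GCD/LCM rule."""
    g = gcd(P, Q)
    lcm = P * Q // g
    steps = lcm // g                        # matrix transposed in LCM/GCD steps
    if g == 1:
        pattern = "complete exchange"       # P and Q relatively prime
    else:
        pattern = f"{g} overlapped groups"  # processors split into GCD groups
    return steps, pattern

print(transpose_schedule(4, 6))   # (6, '2 overlapped groups')
print(transpose_schedule(3, 4))   # (12, 'complete exchange')
```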
Modeling and Computer Simulation: Molecular Dynamics and Kinetic Monte Carlo
Wirth, B.D.; Caturla, M.J.; Diaz de la Rubia, T.
2000-10-10
Recent years have witnessed tremendous advances in the realistic multiscale simulation of complex physical phenomena, such as irradiation and aging effects of materials, made possible by the enormous progress achieved in computational physics for calculating reliable, yet tractable interatomic potentials and the vast improvements in computational power and parallel computing. As a result, computational materials science is emerging as an important complement to theory and experiment to provide fundamental materials science insight. This article describes the atomistic modeling techniques of molecular dynamics (MD) and kinetic Monte Carlo (KMC), and an example of their application to radiation damage production and accumulation in metals. It is important to note at the outset that the primary objective of atomistic computer simulation should be obtaining physical insight into atomic-level processes. Classical molecular dynamics is a powerful method for obtaining insight about the dynamics of physical processes that occur on relatively short time scales. Current computational capability allows treatment of atomic systems containing as many as 10{sup 9} atoms for times on the order of 100 ns (10{sup -7}s). The main limitation of classical MD simulation is the relatively short times accessible. Kinetic Monte Carlo provides the ability to reach macroscopic times by modeling diffusional processes and time-scales rather than individual atomic vibrations. Coupling MD and KMC has developed into a powerful, multiscale tool for the simulation of radiation damage in metals.
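The residence-time stepping that lets KMC reach macroscopic times can be sketched as follows: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed increment. The event table and rates below are purely illustrative.

```python
import math
import random

def kmc_step(rates, rng):
    """One residence-time kinetic Monte Carlo step: choose an event with
    probability proportional to its rate, then advance the clock by
    dt = -ln(u) / R_total, exponentially distributed."""
    r_total = sum(rates.values())
    pick = rng.random() * r_total
    acc = 0.0
    chosen = None
    for event, r in rates.items():
        acc += r
        if pick <= acc:
            chosen = event
            break
    dt = -math.log(rng.random()) / r_total
    return chosen, dt

# Toy defect-kinetics event table (rates in 1/s, purely illustrative)
rates = {"vacancy_hop": 1.0e6, "interstitial_hop": 1.0e9, "recombination": 1.0e3}
event, dt = kmc_step(rates, random.Random(0))
```

Because the clock advances by event rates rather than by atomic vibration periods, a KMC trajectory can span seconds while an MD trajectory of the same cost spans nanoseconds.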
NASA Astrophysics Data System (ADS)
Dobes, Jiri; Deconinck, Herman
2008-06-01
Multidimensional upwind residual distribution (RD) schemes have become an appealing alternative to more widespread finite volume and finite element methods (FEM) for solving compressible fluid flows. The RD approach allows to construct nonlinear second order and non-oscillatory methods at the same time. They are routinely used for steady state calculations of complex flow problems, e.g., 3D turbulent transonic industrial-type simulations [H. Deconinck, K. Sermeus, R. Abgrall, Status of multidimensional upwind residual distribution schemes and applications in aeronautics, AIAA Paper 2000-2328, AIAA, 2000; K. Sermeus, H. Deconinck, Drag prediction validation of a multi-dimensional upwind solver, CFD-based aircraft drag prediction and reduction, VKI Lecture Series 2003-02, Von Karman Institute for Fluid Dynamics, Chaussee de Waterloo 72, B-1640 Rhode Saint Genese, Belgium, 2003]. Despite its maturity, some problems are still present for the nonlinear schemes developed up to now: namely poor iterative convergence for transonic problems and a decrease of accuracy in smooth parts of the flow, caused by a weak L2 instability [M. Ricchiuto, Construction and analysis of compact residual discretizations for conservation laws on unstructured meshes. Ph.D. Thesis, Universite Libre de Bruxelles, Von Karman Institute for Fluid Dynamics, 2005]. We have developed a new formulation of a blended scheme between the second order linear LDA [R. Abgrall, M. Mezine, Residual distribution scheme for steady problems, 33rd Computational Fluid Dynamics course, VKI Lecture Series 2003-05, Von Karman Institute for Fluid Dynamics, Chaussee de Waterloo 72, B-1640 Rhode Saint Genese, Belgium, 2003] scheme and the first order N scheme. The blending coefficient is based on a simple shock capturing operator and it is properly scaled such that second order accuracy is preserved. The approach is extended to unsteady flow problems using a consistent formulation of the LDA scheme with the mass
Contributed Review: Distributed optical fibre dynamic strain sensing
NASA Astrophysics Data System (ADS)
Masoudi, Ali; Newson, Trevor P.
2016-01-01
Extensive research on Brillouin- and Raman-based distributed optical fibre sensors over the past two decades has resulted in the commercialization of distributed sensors capable of measuring static and quasi-static phenomena such as temperature and strain. Recently, the focus has shifted towards developing distributed sensors for the measurement of dynamic phenomena such as dynamic strain and sound waves. This article reviews the current state of the art in distributed optical fibre sensors capable of quantifying dynamic vibrations. The most important aspects of the Rayleigh and Brillouin scattering processes used for distributed dynamic measurement are studied. The principles of the sensing techniques used to measure dynamic perturbations are analyzed, followed by a case study of the most recent advances in this field. It is shown that Rayleigh-based sensors have a longer sensing range and a higher frequency range, but their spatial resolution is limited to 1 m. Brillouin-based sensors, on the other hand, have shown higher spatial resolution but comparatively lower frequency and sensing ranges.
In-Memory Computing Architectures for Sparse Distributed Memory.
Kang, Mingu; Shanbhag, Naresh R
2016-08-01
This paper presents an energy-efficient and high-throughput architecture for Sparse Distributed Memory (SDM)-a computational model of the human brain [1]. The proposed SDM architecture is based on the recently proposed in-memory computing kernel for machine learning applications called Compute Memory (CM) [2], [3]. CM achieves energy and throughput efficiencies by deeply embedding computation into the memory array. SDM-specific techniques such as hierarchical binary decision (HBD) are employed to reduce the delay and energy further. The CM-based SDM (CM-SDM) is a mixed-signal circuit, and hence circuit-aware behavioral, energy, and delay models in a 65 nm CMOS process are developed in order to predict system performance of SDM architectures in the auto- and hetero-associative modes. The delay and energy models indicate that CM-SDM, in general, can achieve up to 25 × and 12 × delay and energy reduction, respectively, over conventional SDM. When classifying 16 × 16 binary images with high noise levels (input bad pixel ratios: 15%-25%) into nine classes, all SDM architectures are able to generate output bad pixel ratios (Bo) ≤ 2%. The CM-SDM exhibits negligible loss in accuracy, i.e., its Bo degradation is within 0.4% as compared to that of the conventional SDM. PMID:27305686
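A minimal software sketch of Kanerva-style SDM (not the mixed-signal CM-SDM circuit) illustrates the addressing scheme the architecture accelerates: hard locations within a Hamming radius of the address are activated, writes increment or decrement their counters, and reads sum and threshold the counters. The sizes and activation radius below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, RADIUS = 64, 500, 28   # word length, number of hard locations, radius

addresses = rng.integers(0, 2, size=(M, N))   # fixed random hard-location addresses
counters = np.zeros((M, N), dtype=int)        # up/down counters per location

def activate(addr):
    """Boolean mask of hard locations within Hamming distance RADIUS of addr."""
    return np.count_nonzero(addresses != addr, axis=1) <= RADIUS

def write(addr, word):
    sel = activate(addr)
    counters[sel] += np.where(word == 1, 1, -1)

def read(addr):
    sums = counters[activate(addr)].sum(axis=0)
    return (sums > 0).astype(int)

word = rng.integers(0, 2, size=N)
write(word, word)                 # auto-associative store
recalled = read(word)             # recall from the clean address
noisy = word.copy(); noisy[:6] ^= 1
recalled_noisy = read(noisy)      # recall from a corrupted address
```

With enough hard locations activated by both the clean and corrupted addresses, the stored word is recovered; this noise tolerance is what the image-classification experiment in the paper exercises.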
Overset grid applications on distributed memory MIMD computers
NASA Technical Reports Server (NTRS)
Chawla, Kalpana; Weeratunga, Sisira
1994-01-01
Analysis of modern aerospace vehicles requires the computation of flowfields about complex three dimensional geometries composed of regions with varying spatial resolution requirements. Overset grid methods allow the use of proven structured grid flow solvers to address the twin issues of geometrical complexity and the resolution variation by decomposing the complex physical domain into a collection of overlapping subdomains. This flexibility is accompanied by the need for irregular intergrid boundary communication among the overlapping component grids. This study investigates a strategy for implementing such a static overset grid implicit flow solver on distributed memory, MIMD computers; i.e., the 128 node Intel iPSC/860 and the 208 node Intel Paragon. Performance data for two composite grid configurations characteristic of those encountered in present day aerodynamic analysis are also presented.
A Combined Geometric Approach for Computational Fluid Dynamics on Dynamic Grids
NASA Technical Reports Server (NTRS)
Slater, John W.
1995-01-01
A combined geometric approach for computational fluid dynamics is presented for the analysis of unsteady flow about mechanisms whose components are in moderate relative motion. For a CFD analysis, the total dynamics problem involves the dynamics of three aspects: geometry modeling, grid generation, and flow modeling. The interrelationships between these three aspects allow a more natural formulation of the problem and the sharing of information, which can be advantageous to the computation of the dynamics. The approach is applied to planar geometries with the use of an efficient multi-block, structured grid generation method to compute unsteady, two-dimensional and axisymmetric flow. The applications presented include the computation of the unsteady, inviscid flow about a hinged flap with flap deflections and a high-speed inlet with centerbody motion as part of the unstart/restart operation.
A spatiotemporal dynamic distributed solution to the MEG inverse problem.
Lamus, Camilo; Hämäläinen, Matti S; Temereanca, Simona; Brown, Emery N; Purdon, Patrick L
2012-11-01
MEG/EEG are non-invasive imaging techniques that record brain activity with high temporal resolution. However, estimation of brain source currents from surface recordings requires solving an ill-conditioned inverse problem. Converging lines of evidence in neuroscience, from neuronal network models to resting-state imaging and neurophysiology, suggest that cortical activation is a distributed spatiotemporal dynamic process, supported by both local and long-distance neuroanatomic connections. Because spatiotemporal dynamics of this kind are central to brain physiology, inverse solutions could be improved by incorporating models of these dynamics. In this article, we present a model for cortical activity based on nearest-neighbor autoregression that incorporates local spatiotemporal interactions between distributed sources in a manner consistent with neurophysiology and neuroanatomy. We develop a dynamic maximum a posteriori expectation-maximization (dMAP-EM) source localization algorithm for estimation of cortical sources and model parameters based on the Kalman Filter, the Fixed Interval Smoother, and the EM algorithms. We apply the dMAP-EM algorithm to simulated experiments as well as to human experimental data. Furthermore, we derive expressions to relate our dynamic estimation formulas to those of standard static models, and show how dynamic methods optimally assimilate past and future data. Our results establish the feasibility of spatiotemporal dynamic estimation in large-scale distributed source spaces with several thousand source locations and hundreds of sensors, with resulting inverse solutions that provide substantial performance improvements over static methods. PMID:22155043
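A scalar Kalman filter illustrates the filtering machinery at the core of such dynamic estimators. This toy model stands in for the high-dimensional source-space filter/smoother/EM combination of dMAP-EM; all parameters and the simulated data are illustrative.

```python
import numpy as np

def kalman_filter(y, a, q, c, r, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for x_t = a*x_{t-1} + w,  y_t = c*x_t + v,
    with process noise variance q and measurement noise variance r."""
    x, p = x0, p0
    estimates = []
    for yt in y:
        x, p = a * x, a * a * p + q       # predict
        k = p * c / (c * c * p + r)       # Kalman gain
        x = x + k * (yt - c * x)          # update with the measurement
        p = (1.0 - k * c) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0.0, 0.1, 200))   # slowly drifting latent source
obs = truth + rng.normal(0.0, 1.0, 200)        # noisy sensor readings
est = kalman_filter(obs, a=1.0, q=0.01, c=1.0, r=1.0)
```

The dynamic estimate tracks the latent signal far better than the raw observations do, which is the sense in which dynamic methods "optimally assimilate past and future data" (the smoother additionally uses future samples).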
A model of cerebellar computations for dynamical state estimation
NASA Technical Reports Server (NTRS)
Paulin, M. G.; Hoffman, L. F.; Assad, C.
2001-01-01
The cerebellum is a neural structure that is essential for agility in vertebrate movements. Its contribution to motor control appears to be due to a fundamental role in dynamical state estimation, which also underlies its role in various non-motor tasks. Single spikes in vestibular sensory neurons carry information about head state. We show how computations for optimal dynamical state estimation may be accomplished when signals are encoded in spikes. This provides a novel way to design dynamical state estimators, and a novel way to interpret the structure and function of the cerebellum.
A scalable parallel graph coloring algorithm for distributed memory computers.
Bozdag, Doruk; Manne, Fredrik; Gebremedhin, Assefaw H.; Catalyurek, Umit; Boman, Erik Gunnar
2005-02-01
In large-scale parallel applications a graph coloring is often carried out to schedule computational tasks. In this paper, we describe a new distributed memory algorithm for doing the coloring itself in parallel. The algorithm operates in an iterative fashion; in each round vertices are speculatively colored based on limited information, and then a set of incorrectly colored vertices, to be recolored in the next round, is identified. Parallel speedup is achieved in part by reducing the frequency of communication among processors. Experimental results on a PC cluster using up to 16 processors show that the algorithm is scalable.
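A serial emulation of this round structure is easy to write down. The sketch below is our simplification, not the authors' distributed code: in each round, pending vertices color themselves greedily from a possibly stale snapshot, equal-colored neighbors are detected, and the higher-numbered endpoint of each conflict is re-queued for the next round.

```python
def speculative_color(adj):
    """Round-based speculative coloring on an adjacency dict
    (vertex -> set of neighbors). Serially emulates the distributed
    scheme: color with limited information, then fix conflicts."""
    color = {}
    pending = list(adj)
    while pending:
        # Speculative phase: color from a snapshot of known colors.
        snapshot = dict(color)
        for v in pending:
            taken = {snapshot.get(u) for u in adj[v]}
            c = 0
            while c in taken:
                c += 1
            color[v] = c
        # Conflict detection: between equal-colored neighbors, the
        # higher-numbered vertex is recolored in the next round.
        pending = [v for v in color
                   if any(color.get(u) == color[v] and u < v
                          for u in adj[v])]
    return color
```

Each round the lowest-numbered conflicted vertex always keeps its color, so the pending set shrinks strictly and the loop terminates.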
Job monitoring on DIRAC for Belle II distributed computing
NASA Astrophysics Data System (ADS)
Kato, Yuji; Hayasaka, Kiyoshi; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo
2015-12-01
We developed a monitoring system for Belle II distributed computing that consists of active and passive methods. In this paper we describe the passive monitoring system, in which information stored in the DIRAC database is processed and visualized. We divide the DIRAC workload management flow into steps and store characteristic variables that indicate issues. These variables are chosen carefully based on our experience and then visualized. As a result, we are able to detect issues effectively. Finally, we discuss future development toward automated log analysis, notification of issues, and disabling of problematic sites.
KNET - DISTRIBUTED COMPUTING AND/OR DATA TRANSFER PROGRAM
NASA Technical Reports Server (NTRS)
Hui, J.
1994-01-01
KNET facilitates distributed computing between a UNIX compatible local host and a remote host which may or may not be UNIX compatible. It is capable of automatic remote login. That is, it performs on the user's behalf the chore of handling host selection, user name, and password to the designated host. Once the login has been successfully completed, the user may interactively communicate with the remote host. Data output from the remote host may be directed to the local screen, to a local file, and/or to a local process. Conversely, data input from the keyboard, a local file, or a local process may be directed to the remote host. KNET takes advantage of the multitasking and terminal mode control features of the UNIX operating system. A parent process is used as the upper layer for interfacing with the local user. A child process is used for a lower layer for interfacing with the remote host computer, and optionally one or more child processes can be used for the remote data output. Output may be directed to the screen and/or to the local processes under the control of a data pipe switch. In order for KNET to operate, the local and remote hosts must observe a common communications protocol. KNET is written in ANSI standard C-language for computers running UNIX. It has been successfully implemented on several Sun series computers and a DECstation 3100 and used to run programs remotely on VAX VMS and UNIX based computers. It requires 100K of RAM under SunOS and 120K of RAM under DEC RISC ULTRIX. An electronic copy of the documentation is provided on the distribution medium. The standard distribution medium for KNET is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. KNET was developed in 1991 and is a copyrighted work with all copyright vested in NASA. UNIX is a registered trademark of AT&T Bell Laboratories. Sun and SunOS are trademarks of Sun Microsystems, Inc. DECstation, VAX, VMS, and
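The layered parent/child design can be illustrated with ordinary pipes between processes. In this toy sketch a local `tr` command stands in for the remote host, and the pipes play the role of KNET's data-pipe switch; none of this is KNET's actual code.

```python
import subprocess

# Parent layer: interfaces with the "user" (here, a fixed input string).
# Child layer: interfaces with the "remote host" (here, a local `tr`
# process that uppercases whatever it receives).
child = subprocess.Popen(["tr", "a-z", "A-Z"],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                         text=True)
out, _ = child.communicate("hello remote\n")
print(out, end="")  # child output routed back to the parent's screen
```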
Performance Evaluation of Three Distributed Computing Environments for Scientific Applications
NASA Technical Reports Server (NTRS)
Fatoohi, Rod; Weeratunga, Sisira; Lasinski, T. A. (Technical Monitor)
1994-01-01
We present performance results for three distributed computing environments using the three simulated CFD applications in the NAS Parallel Benchmark suite. These environments are the DCF cluster, the LACE cluster, and an Intel iPSC/860 machine. The DCF is a prototypic cluster of loosely coupled SGI R3000 machines connected by Ethernet. The LACE cluster is a tightly coupled cluster of 32 IBM RS6000/560 machines connected by Ethernet as well as by either FDDI or an IBM Allnode switch. Results of several parallel algorithms for the three simulated applications are presented and analyzed based on the interplay between the communication requirements of an algorithm and the characteristics of the communication network of a distributed system.
Textbook Multigrid Efficiency for Computational Fluid Dynamics Simulations
NASA Technical Reports Server (NTRS)
Brandt, Achi; Thomas, James L.; Diskin, Boris
2001-01-01
Considerable progress over the past thirty years has been made in the development of large-scale computational fluid dynamics (CFD) solvers for the Euler and Navier-Stokes equations. Computations are used routinely to design the cruise shapes of transport aircraft through complex-geometry simulations involving the solution of 25-100 million equations; in this arena the number of wind-tunnel tests for a new design has been substantially reduced. However, simulations of the entire flight envelope of the vehicle, including maximum lift, buffet onset, flutter, and control effectiveness, have not been as successful in eliminating the reliance on wind-tunnel testing. These simulations involve unsteady flows with more separation and stronger shock waves than at cruise. The main reasons limiting further inroads of CFD into the design process are: (1) the reliability of turbulence models; and (2) the time and expense of the numerical simulation. Because of the prohibitive resolution requirements of direct simulations at high Reynolds numbers, transition and turbulence modeling is expected to remain an issue for the near term. This paper addresses the latter problem by attempting to attain optimal efficiencies in solving the governing equations. Current CFD codes, based on multigrid acceleration techniques and multistage Runge-Kutta time-stepping schemes, are typically able to converge lift and drag values for cruise configurations within approximately 1000 residual evaluations. An optimally convergent method is defined as having textbook multigrid efficiency (TME), meaning the solutions to the governing system of equations are attained in computational work that is a small (less than 10) multiple of the operation count in the discretized system of equations (residual equations). In this paper, a distributed relaxation approach to achieving TME for Reynolds-averaged Navier-Stokes (RANS) equations is discussed along with the foundations that form the
Remote Visualization and Remote Collaboration On Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Watson, Val; Lasinski, T. A. (Technical Monitor)
1995-01-01
A new technology has been developed for remote visualization that provides remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as fluid dynamics simulations or measurements). Based on this technology, some World Wide Web sites on the Internet are providing fluid dynamics data for educational or testing purposes. This technology is also being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics and wind tunnel testing. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit).
Applied Computational Fluid Dynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Kwak, Dochan (Technical Monitor)
1994-01-01
The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.
Current capabilities and future directions in computational fluid dynamics
NASA Technical Reports Server (NTRS)
1986-01-01
A summary of significant findings is given, followed by specific recommendations for future directions of emphasis for computational fluid dynamics development. The discussion is organized into three application areas: external aerodynamics, hypersonics, and propulsion - and followed by a turbulence modeling synopsis.
Computational fluid dynamics development and validation at Bell Helicopter
NASA Astrophysics Data System (ADS)
Narramore, J. C.
1995-08-01
An overview of the development of the Computational Fluid Dynamics (CFD) methodology at Bell Helicopter Textron is given. As new technologies have been developed their functionality has been assessed by their ability to reproduce wind tunnel measurements in a timely manner. Examples of some of these correlation study results are provided.
Computational fluid dynamics applications to improve crop production systems
Technology Transfer Automated Retrieval System (TEKTRAN)
Computational fluid dynamics (CFD), the numerical analysis and simulation of fluid flow processes, has emerged from the development stage and is nowadays a robust design tool. It is widely used to study various transport phenomena involving fluid flow and heat and mass transfer, providing det...
Visualizing Instructional Design: The Potential of Dynamic Computer Presentations.
ERIC Educational Resources Information Center
Knupfer, Nancy Nelson; And Others
Graduate students often have difficulty understanding the concepts behind the various models of instructional design (ID). In order to help students in an introductory ID course come to a better understanding of the similarities and differences between various instructional models, the models were developed into dynamic computer graphics to use…
System design and algorithmic development for computational steering in distributed environments
Wu, Qishi; Zhu, Mengxia; Gu, Yi; Rao, Nageswara S
2010-03-01
Supporting visualization pipelines over wide-area networks is critical to enabling large-scale scientific applications that require visual feedback to interactively steer online computations. We propose a remote computational steering system that employs analytical models to estimate the cost of computing and communication components and optimizes the overall system performance in distributed environments with heterogeneous resources. We formulate and categorize the visualization pipeline configuration problems for maximum frame rate into three classes according to the constraints on node reuse or resource sharing, namely no, contiguous, and arbitrary reuse. We prove all three problems to be NP-complete and present heuristic approaches based on a dynamic programming strategy. The superior performance of the proposed solution is demonstrated with extensive simulation results in comparison with existing algorithms and is further evidenced by experimental results collected on a prototype implementation deployed over the Internet.
Flap Dynamics in Aspartic Proteases: A Computational Perspective.
Mahanti, Mukul; Bhakat, Soumendranath; Nilsson, Ulf J; Söderhjelm, Pär
2016-08-01
Recent advances in biochemistry and drug design have placed proteases as one of the critical target groups for developing novel small-molecule inhibitors. Among all proteases, aspartic proteases have gained significant attention due to their role in HIV/AIDS, malaria, Alzheimer's disease, etc. The binding cleft is covered by one or two β-hairpins (flaps) which need to be opened before a ligand can bind. After binding, the flaps close to retain the ligand in the active site. Development of computational tools has improved our understanding of flap dynamics and its role in ligand recognition. In the past decade, several computational approaches, for example molecular dynamics (MD) simulations, coarse-grained simulations, replica-exchange molecular dynamics (REMD) and metadynamics, have been used to understand flap dynamics and conformational motions associated with flap movements. This review is intended to summarize the computational progress towards understanding the flap dynamics of proteases and to be a reference for future studies in this field. PMID:26872937
Computational Fluid Dynamics Demonstration of Rigid Bodies in Motion
NASA Technical Reports Server (NTRS)
Camarena, Ernesto; Vu, Bruce T.
2011-01-01
The Design Analysis Branch (NE-Ml) at the Kennedy Space Center has not had the ability to accurately couple Rigid Body Dynamics (RBD) and Computational Fluid Dynamics (CFD). OVERFLOW-D is a flow solver that has been developed by NASA to have the capability to analyze and simulate dynamic motions with up to six Degrees of Freedom (6-DOF). Two simulations were prepared over the course of the internship to demonstrate 6DOF motion of rigid bodies under aerodynamic loading. The geometries in the simulations were based on a conceptual Space Launch System (SLS). The first simulation that was prepared and computed was the motion of a Solid Rocket Booster (SRB) as it separates from its core stage. To reduce computational time during the development of the simulation, only half of the physical domain with respect to the symmetry plane was simulated. Then a full solution was prepared and computed. The second simulation was a model of the SLS as it departs from a launch pad under a 20 knot crosswind. This simulation was reduced to Two Dimensions (2D) to reduce both preparation and computation time. By allowing 2-DOF for translations and 1-DOF for rotation, the simulation predicted unrealistic rotation. The simulation was then constrained to only allow translations.
An Optimization Framework for Dynamic, Distributed Real-Time Systems
NASA Technical Reports Server (NTRS)
Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara
2003-01-01
This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments, utility, and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model to produce feasible, optimal resource allocations.
Performance Evaluation of Communication Software Systems for Distributed Computing
NASA Technical Reports Server (NTRS)
Fatoohi, Rod
1996-01-01
In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed, and compared. These systems are: the BSD socket programming interface; IONA's Orbix, an implementation of the CORBA specification; and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.
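Point-to-point comparisons of this kind usually reduce to a ping-pong microbenchmark: send a small message, wait for the echo, and average over many round trips. A minimal socket version (our sketch, not the paper's test harness) looks like:

```python
import socket
import threading
import time

def echo_once(server_sock, n_msgs, msg_size):
    """Server side: accept one connection and echo every message back."""
    conn, _ = server_sock.accept()
    with conn:
        for _ in range(n_msgs):
            buf = b""
            while len(buf) < msg_size:
                buf += conn.recv(msg_size - len(buf))
            conn.sendall(buf)

def measure_rtt(host, port, n_msgs=200, msg_size=64):
    """Average round-trip time, in microseconds, for small messages."""
    with socket.create_connection((host, port)) as s:
        # Disable Nagle's algorithm so small messages go out immediately.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        payload = b"x" * msg_size
        t0 = time.perf_counter()
        for _ in range(n_msgs):
            s.sendall(payload)
            buf = b""
            while len(buf) < msg_size:
                buf += s.recv(msg_size - len(buf))
        return (time.perf_counter() - t0) / n_msgs * 1e6
```

Run against loopback this measures software overhead only; across Ethernet, FDDI, HiPPI, or ATM it would expose the network stack as well.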
Localised distributions and criteria for correctness in complex Langevin dynamics
Aarts, Gert; Giudice, Pietro; Seiler, Erhard
2013-10-15
Complex Langevin dynamics can solve the sign problem appearing in numerical simulations of theories with a complex action. In order to justify the procedure, it is important to understand the properties of the real and positive distribution which is effectively sampled during the stochastic process. In the context of a simple model, we study this distribution by solving the Fokker–Planck equation as well as by brute force, and relate the results to the recently derived criteria for correctness. We demonstrate analytically that it is possible for the distribution to have support only in a strip of the complexified configuration space, in which case correct results are expected. Highlights: characterisation of the equilibrium distribution sampled in complex Langevin dynamics; connection between criteria for correctness and breakdown; solution of the Fokker–Planck equation in the case of real noise; analytical determination of support in complexified space.
A global plan policy for coherent co-operation in distributed dynamic load balancing algorithms
NASA Astrophysics Data System (ADS)
Kara, M.
1995-12-01
Distributed-control dynamic load balancing algorithms are known to have several advantages over centralized algorithms, such as scalability and fault tolerance. Distributed implies that the control is decentralized and that a copy of the algorithm (called a scheduler) is replicated on each host of the network. However, distributed control also contributes to a lack of global goals and a lack of coherence. This paper presents a new algorithm called DGP (decentralized global plans) that addresses the problem of coherence and co-ordination in distributed dynamic load balancing algorithms. The DGP algorithm is based on a strategy called global plans (GP), and aims at maintaining all computational loads of a distributed system within a band called delta. The rationale for the design of DGP is to allow each scheduler to consider the actions of its peer schedulers. With this level of co-ordination, the schedulers can act more as a coherent team. This new approach first explicitly specifies a global goal and then designs a strategy around this global goal such that each scheduler (i) takes into account local decisions made by other schedulers; (ii) takes into account the effect of its local decisions on the overall system; and (iii) ensures load balancing. An experimental evaluation of DGP with two other well-known dynamic load balancing algorithms published in the literature shows that DGP performs consistently better. More significantly, the results indicate that the global plan approach provides a better framework for the design of distributed dynamic load balancing algorithms.
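The band idea can be sketched in a few lines. The following is a deliberately simplified, centralized caricature of ours (a real DGP scheduler is decentralized and acts on local information): hosts above mean + delta ship work toward hosts below mean - delta, targeting the mean load.

```python
def rebalance(loads, delta):
    """One simplified balancing pass: bring every host's load into
    the band [mean - delta, mean + delta] by moving work from
    over-loaded donors to under-loaded receivers."""
    mean = sum(loads) / len(loads)
    lo, hi = mean - delta, mean + delta
    loads = list(loads)
    donors = [i for i, l in enumerate(loads) if l > hi]
    receivers = [i for i, l in enumerate(loads) if l < lo]
    for d in donors:
        for r in receivers:
            if loads[d] <= mean:
                break
            # Move just enough work to pull both hosts toward the mean.
            transfer = min(loads[d] - mean, mean - loads[r])
            if transfer > 0:
                loads[d] -= transfer
                loads[r] += transfer
    return loads
```

Note that total load is conserved; only its placement changes.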
GAiN: Distributed Array Computation with Python
Daily, Jeffrey A.
2009-05-01
Scientific computing makes use of very large, multidimensional numerical arrays - typically, gigabytes to terabytes in size - much larger than can fit on even the largest single compute node. Such arrays must be distributed across a "cluster" of nodes. Global Arrays is a cluster-based software system from Battelle Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate these arrays. Written in and for the C and FORTRAN programming languages, it takes advantage of high-performance cluster interconnections to allow any node in the cluster to access data on any other node very rapidly. The "numpy" module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. numpy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, numpy is inherently serial. Our system, GAiN (Global Arrays in NumPy), is a parallel extension to Python that accesses Global Arrays through numpy. This allows parallel processing and/or larger problem sizes to be harnessed almost transparently within new or existing numpy programs.
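The ownership model behind such systems can be mimicked locally: chunks of a global array live on different nodes, each node reduces only its own chunk, and a final reduction combines the partial results. A serial numpy sketch (our illustration of the idea, not GAiN's actual API):

```python
import numpy as np

def distribute(arr, n_nodes):
    """Split a 1-D global array into per-node chunks (ownership
    emulated locally; in GAiN these would live on separate nodes)."""
    return np.array_split(arr, n_nodes)

def global_sum(chunks):
    """Each 'node' reduces its own chunk; a final reduction combines
    the partial sums, as in a distributed reduce."""
    return sum(chunk.sum() for chunk in chunks)
```

The point of GAiN is that this chunking and the two-level reduction happen transparently behind the familiar numpy interface.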
A Riemannian framework for orientation distribution function computing.
Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid
2009-01-01
Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Diffusion Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in Information Geometry theory and has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on Information Geometry and sparse representation of orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map, and geodesic have closed forms, and the weighted Fréchet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The order-1/2 Rényi entropy of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on ODF fields is proposed based on the weighted Fréchet mean. We validate our methods on synthetic and real data experiments. Compared with existing Riemannian frameworks on ODF, our framework is model-free. The estimation of the parameters, i.e., the Riemannian coordinates, is robust and linear. Moreover, it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation. PMID:20426075
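On a discretized density, the square-root representation used in such frameworks maps each PDF to a point on the unit sphere, where the geodesic distance has the closed form arccos of the inner product. The sketch below is our own discrete illustration of that idea, not the paper's code:

```python
import numpy as np

def fisher_rao_distance(p, q):
    """Geodesic distance between two discretized PDFs (each summing
    to 1): map each to its square root, a point on the unit sphere,
    and take the great-circle arc length."""
    cos_angle = np.clip(np.sqrt(p) @ np.sqrt(q), -1.0, 1.0)
    return np.arccos(cos_angle)

def geometric_anisotropy(p):
    """GA as described above: the geodesic distance from the
    (discretized) ODF to the isotropic one."""
    iso = np.full_like(p, 1.0 / len(p))
    return fisher_rao_distance(p, iso)
```

A uniform density has GA exactly zero, and sharper densities sit farther from the isotropic point on the sphere.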
Computational dynamics for robotics systems using a non-strict computational approach
NASA Technical Reports Server (NTRS)
Orin, David E.; Wong, Ho-Cheung; Sadayappan, P.
1989-01-01
A Non-Strict computational approach for real-time robotics control computations is proposed. In contrast to the traditional approach to scheduling such computations, based strictly on task dependence relations, the proposed approach relaxes precedence constraints and scheduling is guided instead by the relative sensitivity of the outputs with respect to the various paths in the task graph. An example of the computation of the Inverse Dynamics of a simple inverted pendulum is used to demonstrate the reduction in effective computational latency through use of the Non-Strict approach. A speedup of 5 has been obtained when the processes of the task graph are scheduled to reduce the latency along the crucial path of the computation. While error is introduced by the relaxation of precedence constraints, the Non-Strict approach has a smaller error than the conventional Strict approach for a wide range of input conditions.
Towards dynamic remote data auditing in computational clouds.
Sookhak, Mehdi; Akhunzada, Adnan; Gani, Abdullah; Khurram Khan, Muhammad; Anuar, Nor Badrul
2014-01-01
Cloud computing is a significant shift of computational paradigm where computing as a utility and storing data remotely have a great potential. Enterprise and businesses are now more interested in outsourcing their data to the cloud to lessen the burden of local data storage and maintenance. However, the outsourced data and the computation outcomes are not continuously trustworthy due to the lack of control and physical possession of the data owners. To better streamline this issue, researchers have now focused on designing remote data auditing (RDA) techniques. The majority of these techniques, however, are only applicable for static archive data and are not subject to audit the dynamically updated outsourced data. We propose an effectual RDA technique based on algebraic signature properties for cloud storage system and also present a new data structure capable of efficiently supporting dynamic data operations like append, insert, modify, and delete. Moreover, this data structure empowers our method to be applicable for large-scale data with minimum computation cost. The comparative analysis with the state-of-the-art RDA schemes shows that the proposed scheme is secure and highly efficient in terms of the computation and communication overhead on the auditor and server. PMID:25121114
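The property that makes algebraic signatures useful for auditing is their linearity: the signature of a sum of blocks equals the sum of the signatures. As a toy illustration only (the modulus P and base ALPHA are arbitrary choices of ours, not the paper's construction):

```python
P = 2**31 - 1   # prime modulus (toy choice)
ALPHA = 7       # signature base (toy choice)

def algebraic_sig(block):
    """Toy algebraic signature: sig(b) = sum(b_i * ALPHA**i) mod P.
    Linearity, sig(x + y) == sig(x) + sig(y) (mod P), lets an auditor
    verify an aggregate response without seeing the raw blocks."""
    s, power = 0, 1
    for b in block:
        s = (s + b * power) % P
        power = (power * ALPHA) % P
    return s
```

A challenge-response audit exploits exactly this: the server returns a combined block and the auditor checks it against combined stored signatures.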
Characteristic ion distributions in the dynamic auroral transition region
NASA Astrophysics Data System (ADS)
Zeng, W.; Horwitz, J. L.; Tu, J.-N.
2006-04-01
A Dynamic Fluid Kinetic (DyFK) simulation is conducted to study the H+/O+ flows and distribution functions in the high-latitude dynamic transition region, specifically from 1000 km to about 4000 km altitude. Here, the collisional-to-collisionless transition region is that region where Coulomb collisions have significant but not dominant effects on the ion distributions. In this study, a simulation flux tube, which extends from 120 km to 3 RE altitude, is assumed to experience a pulse of auroral effects for approximately 20 minutes, including both soft electron precipitation and transverse wave heating, and then according to different geophysical circumstances, either to relax following the cessation of such auroral effects or to be heated further continuously by waves with power at higher frequencies. Our principal purpose in this investigation is to elicit the characteristic ion distribution functions in the auroral transition region, where both collisions and kinetic processes play significant roles. The characteristics of the simulated O+ and H+ velocity distributions, such as kidney bean shaped H+ distributions, and O+ distributions having cold cores with upward folded conic wings, resemble those observed by satellites at similar altitudes and geographic conditions. From the simulated distribution function results under different geophysical conditions, we find that O+-O+ and O+-H+ collisions, in conjunction with the kinetic and auroral processes, are key factors in the velocity distributions up to 4000 km altitude, especially for the low speed portions, for both O+ and H+ ions.
Enabling 3D-Liver Perfusion Mapping from MR-DCE Imaging Using Distributed Computing
Leporq, Benjamin; Camarasu-Pop, Sorina; Davila-Serrano, Eduardo E.; Pilleul, Frank; Beuf, Olivier
2013-01-01
An MR acquisition protocol and a processing method using distributed computing on the European Grid Infrastructure (EGI) to allow 3D liver perfusion parametric mapping after Magnetic Resonance Dynamic Contrast Enhanced (MR-DCE) imaging are presented. Seven patients (one healthy control and six with chronic liver diseases) were prospectively enrolled after liver biopsy. MR-dynamic acquisition was continuously performed in free-breathing during two minutes after simultaneous intravascular contrast agent (MS-325 blood pool agent) injection. Hepatic capillary system was modeled by a 3-parameters one-compartment pharmacokinetic model. The processing step was parallelized and executed on the EGI. It was modeled and implemented as a grid workflow using the Gwendia language and the MOTEUR workflow engine. Results showed good reproducibility in repeated processing on the grid. The results obtained from the grid were well correlated with ROI-based reference method ran locally on a personal computer. The speed-up range was 71 to 242 with an average value of 126. In conclusion, distributed computing applied to perfusion mapping brings significant speed-up to quantification step to be used for further clinical studies in a research context. Accuracy would be improved with higher image SNR accessible on the latest 3T MR systems available today. PMID:27006915
Evensky, D.A.; Gentile, A.C.; Armstrong, R.C.
1998-03-19
Increasingly, high-performance computing means the use of very large heterogeneous clusters of machines. Using and maintaining such clusters requires communication between the machines in a time-efficient and secure manner. Lilith is a general-purpose tool that provides highly scalable, secure, and easy distribution of user code across a heterogeneous computing platform. By handling the details of code distribution and communication, such a framework allows for the rapid development of tools for the use and management of large distributed systems. Lilith is written in Java, taking advantage of Java's unique features of loading and distributing code dynamically, its platform independence, its thread support, and its provision of graphical components, which facilitate easy-to-use resulting tools. The authors describe the use of Lilith in a tool developed for maintaining the large distributed cluster at their institution and present details of the Lilith architecture and user API for general development of scalable tools.
NASA Technical Reports Server (NTRS)
Thorp, Scott A.
1992-01-01
This presentation will discuss the development of a NASA Geometry Exchange Specification for transferring aerodynamic surface geometry between LeRC systems and grid generation software used for computational fluid dynamics research. The proposed specification is based on a subset of the Initial Graphics Exchange Specification (IGES). The presentation will include discussion of how the NASA-IGES standard will accommodate improved computer aided design inspection methods and reverse engineering techniques currently being developed. The presentation is in viewgraph format.
Incorporating geometrically complex vegetation in a computational fluid dynamic framework
NASA Astrophysics Data System (ADS)
Boothroyd, Richard; Hardy, Richard; Warburton, Jeff; Rosser, Nick
2015-04-01
Vegetation is known to have a significant influence on the hydraulic, geomorphological, and ecological functioning of river systems. Vegetation acts as a blockage to flow, thereby causing additional flow resistance and influencing flow dynamics, in particular flow conveyance. These processes need to be incorporated into flood models to improve predictions used in river management. However, current practice in representing vegetation in hydraulic models is either through roughness parameterisation or process understanding derived experimentally from flow through highly simplified configurations of fixed, rigid cylinders. It is suggested that such simplifications inadequately describe the geometric complexity that characterises vegetation, and therefore the modelled flow dynamics may be oversimplified. This paper addresses this issue by using an approach combining field and numerical modelling techniques. Terrestrial Laser Scanning (TLS) with waveform processing has been applied to collect a sub-mm, 3-dimensional representation of Prunus laurocerasus, a species invasive to the UK that has been increasingly recorded in riparian zones. Multiple scan perspectives produce a highly detailed point cloud (>5,000,000 individual data points) which is reduced in post-processing using an octree-based voxelisation technique. The method retains the geometric complexity of the vegetation by subdividing the point cloud into 0.01 m3 cubic voxels. The voxelised representation is subsequently read into a computational fluid dynamic (CFD) model using a Mass Flux Scaling Algorithm, allowing the vegetation to be directly represented in the modelling framework. Results demonstrate the development of a complex flow field around the vegetation. The downstream velocity profile is characterised by two distinct inflection points. A high velocity zone in the near-bed (plant-stem) region is apparent due to the lack of significant near-bed foliage. Above this, a zone of reduced velocity is
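A minimal sketch of the voxelisation idea (not the authors' octree implementation) can be written by snapping points to a regular grid and keeping one entry per occupied cube. The voxel size and the synthetic point cloud below are illustrative assumptions:

```python
import numpy as np

def voxelise(points, voxel_size=0.1):
    """Reduce a dense point cloud to occupied cubic voxels by flooring
    coordinates onto a regular grid (a simplified stand-in for the
    octree-based voxelisation described in the abstract)."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    occupied = np.unique(idx, axis=0)        # one row per occupied voxel
    centres = (occupied + 0.5) * voxel_size  # voxel centres for the CFD mesh
    return centres

# Synthetic "plant": many redundant TLS returns along a vertical stem
rng = np.random.default_rng(0)
stem = np.column_stack([rng.normal(0.0, 0.01, 50000),
                        rng.normal(0.0, 0.01, 50000),
                        rng.uniform(0.0, 1.0, 50000)])
voxels = voxelise(stem, voxel_size=0.1)
print(len(stem), "points ->", len(voxels), "voxels")
```

The reduction is dramatic because millions of near-coincident laser returns collapse into a small set of occupied cells, while the plant's overall geometry is preserved at the chosen voxel resolution.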
A Textbook for a First Course in Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Zingg, D. W.; Pulliam, T. H.; Nixon, David (Technical Monitor)
1999-01-01
This paper describes and discusses the textbook, Fundamentals of Computational Fluid Dynamics by Lomax, Pulliam, and Zingg, which is intended for a graduate-level first course in computational fluid dynamics. This textbook emphasizes fundamental concepts in developing, analyzing, and understanding numerical methods for the partial differential equations governing the physics of fluid flow. Its underlying philosophy is that the theory of linear algebra and the attendant eigenanalysis of linear systems provides a mathematical framework to describe and unify most numerical methods in common use in the field of fluid dynamics. Two linear model equations, the linear convection and diffusion equations, are used to illustrate concepts throughout. Emphasis is on the semi-discrete approach, in which the governing partial differential equations (PDE's) are reduced to systems of ordinary differential equations (ODE's) through a discretization of the spatial derivatives. The ordinary differential equations are then reduced to ordinary difference equations (OΔE's) using a time-marching method. This methodology, using the progression from PDE through ODE's to OΔE's, together with the use of the eigensystems of tridiagonal matrices and the theory of OΔE's, gives the book its distinctiveness and provides a sound basis for a deep understanding of fundamental concepts in computational fluid dynamics.
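The PDE-to-ODE-to-difference-equation progression described above can be illustrated on the linear convection equation u_t + a u_x = 0, one of the book's two model equations. The discretization choices below (second-order central differences in space, classical RK4 in time, periodic domain) are one representative pairing for illustration, not a prescription from the text:

```python
import numpy as np

# Semi-discrete approach for u_t + a u_x = 0 on a periodic domain:
# spatial discretization reduces the PDE to the ODE system du/dt = A u,
# and a time-marching method turns that into an ordinary difference equation.
a, n = 1.0, 64
dx = 2 * np.pi / n
x = np.arange(n) * dx

# Second-order central difference of the spatial derivative (circulant A)
A = np.zeros((n, n))
for j in range(n):
    A[j, (j + 1) % n] = -a / (2 * dx)
    A[j, (j - 1) % n] = +a / (2 * dx)

def rk4_step(u, dt):
    """One step of classical RK4 applied to du/dt = A u."""
    k1 = A @ u
    k2 = A @ (u + 0.5 * dt * k1)
    k3 = A @ (u + 0.5 * dt * k2)
    k4 = A @ (u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u = np.sin(x)                       # one Fourier mode as initial condition
dt = 0.5 * dx / a                   # CFL number 0.5
steps = int(round(2 * np.pi / a / dt))
for _ in range(steps):
    u = rk4_step(u, dt)
# After one full period the exact solution returns to the initial profile;
# the small residual is the dispersive phase error of the central difference.
err = np.max(np.abs(u - np.sin(x)))
print(err)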
Computational fluid dynamic analysis of hybrid rocket combustor flowfields
NASA Technical Reports Server (NTRS)
Venkateswaran, S.; Merkle, C. L.
1995-01-01
Computational fluid dynamic analyses of the Navier-Stokes equations coupled with solid-phase pyrolysis, gas-phase combustion, turbulence and radiation are performed to study hybrid rocket combustor flowfields. The computational study is closely co-ordinated with a companion experimental program using a planar slab burner configuration with HTPB as fuel and gaseous oxygen. Computational predictions agree reasonably well with measurement data of fuel regression rates and surface temperatures. Additionally, most of the parametric trends predicted by the model are in general agreement with experimental trends. The computational model is applied to extend the results from the lab-scale to a full-scale axisymmetric configuration. The numerical predictions indicate that the full-scale configuration burns at a slower rate than the lab-scale combustor under identical specific flow rate conditions. The results demonstrate that detailed CFD analyses can play a useful role in the design of hybrid combustors.
Parallel Computational Fluid Dynamics: Current Status and Future Requirements
NASA Technical Reports Server (NTRS)
Simon, Horst D.; VanDalsem, William R.; Dagum, Leonardo; Kutler, Paul (Technical Monitor)
1994-01-01
One of the key objectives of the Applied Research Branch in the Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is the accelerated introduction of highly parallel machines into a full operational environment. In this report we discuss the performance results obtained from the implementation of some computational fluid dynamics (CFD) applications on the Connection Machine CM-2 and the Intel iPSC/860. We summarize some of the experience gained so far with the parallel testbed machines at the NAS Applied Research Branch. Then we discuss the long-term computational requirements for accomplishing some of the grand challenge problems in computational aerosciences. We argue that only massively parallel machines will be able to meet these grand challenge requirements, and we outline the computer science and algorithm research challenges ahead.
Dynamic overset grid communication on distributed memory parallel processors
NASA Technical Reports Server (NTRS)
Barszcz, Eric; Weeratunga, Sisira K.; Meakin, Robert L.
1993-01-01
A parallel distributed memory implementation of intergrid communication for dynamic overset grids is presented. Included are discussions of various options considered during development. Results are presented comparing an Intel iPSC/860 to a single processor Cray Y-MP. Results for grids in relative motion show the iPSC/860 implementation to be faster than the Cray implementation.
Distributed Framework for Dynamic Telescope and Instrument Control
NASA Technical Reports Server (NTRS)
Ames, Troy J.; Case, Lynne
2002-01-01
Traditionally, instrument command and control systems have been developed specifically for a single instrument. Such solutions are frequently expensive and are inflexible to support the next instrument development effort. NASA Goddard Space Flight Center is developing an extensible framework, known as Instrument Remote Control (IRC), that applies to any kind of instrument that can be controlled by a computer. IRC combines the platform-independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe graphical user interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms. The IRC framework provides the ability to communicate to components anywhere on a network using the JXTA protocol for dynamic discovery of distributed components. JXTA (see http://www.jxta.org) is a generalized protocol that allows any devices connected by a network to communicate in a peer-to-peer manner. IRC uses JXTA to advertise a device's IML and discover devices of interest on the network. Devices can join or leave the network and thus join or leave the instrument control environment of IRC. Currently, several astronomical instruments are working with the IRC development team to develop custom components for IRC to control their instruments. These instruments include: High resolution Airborne Wideband Camera (HAWC), a first light instrument for the Stratospheric Observatory for Infrared Astronomy (SOFIA); Submillimeter And Far Infrared Experiment (SAFIRE), a Principal Investigator instrument for SOFIA; and Fabry-Perot Interferometer Bolometer Research Experiment (FIBRE), a prototype of the SAFIRE instrument, used at the Caltech Submillimeter Observatory (CSO). Most recently, we have
NASA Technical Reports Server (NTRS)
Kramer, Williams T. C.; Simon, Horst D.
1994-01-01
This tutorial is intended as a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.
The coupling of fluids, dynamics, and controls on advanced architecture computers
NASA Technical Reports Server (NTRS)
Atwood, Christopher
1995-01-01
This grant provided for the demonstration of coupled controls, body dynamics, and fluids computations in a workstation cluster environment, and for an investigation of the impact of peer-to-peer communication on flow solver performance and robustness. The findings of these investigations were documented in the conference articles. The attached publication, 'Towards Distributed Fluids/Controls Simulations', documents the solution and scaling of the coupled Navier-Stokes, Euler rigid-body dynamics, and state feedback control equations for a two-dimensional canard-wing. The poor scaling shown was due to serialized grid connectivity computation and Ethernet bandwidth limits. The scaling of a peer-to-peer communication flow code on an IBM SP-2 was also shown. The scaling of the code on the switched-fabric-linked nodes was good, with a 2.4 percent loss due to communication of intergrid boundary point information. The code performance on 30 worker nodes was 1.7 (mu)s/point/iteration, a factor of three over a Cray C-90 head. The attached paper, 'Nonlinear Fluid Computations in a Distributed Environment', documents the effect of several computational-rate-enhancing methods on convergence. For the cases shown, the highest throughput was achieved using boundary updates at each step, with the manager process performing communication tasks only. Constrained domain decomposition of the implicit fluid equations did not degrade the convergence rate or final solution. The scaling of a coupled body/fluid dynamics problem on an Ethernet-linked cluster was also shown.
NASA Technical Reports Server (NTRS)
Weed, Richard Allen; Sankar, L. N.
1994-01-01
An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has motivated research into procedures for implementing a parallel computing system composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate code performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.
Parallel algorithms and architecture for computation of manipulator forward dynamics
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Parallel computation of manipulator forward dynamics is investigated. Considering three classes of algorithms for the solution of the problem, that is, the O(n), the O(n exp 2), and the O(n exp 3) algorithms, parallelism in the problem is analyzed. It is shown that the problem belongs to the class NC and that the time and processor bounds are of O((log n) exp 2) and O(n exp 4), respectively. However, the fastest stable parallel algorithms achieve a computation time of O(n) and can be derived by parallelization of the O(n exp 3) serial algorithms. Parallel computation of the O(n exp 3) algorithms requires the development of parallel algorithms for a set of fundamentally different problems, that is, the Newton-Euler formulation, the computation of the inertia matrix, decomposition of the symmetric, positive definite matrix, and the solution of triangular systems. Parallel algorithms for this set of problems are developed which can be efficiently implemented on a unique architecture, a triangular array of n(n+2)/2 processors with a simple nearest-neighbor interconnection. This architecture is particularly suitable for VLSI and WSI implementations. The developed parallel algorithm, compared to the best serial O(n) algorithm, achieves an asymptotic speedup of more than two orders of magnitude in the computation of the forward dynamics.
Dynamic species distribution models from categorical survey data.
Mieszkowska, Nova; Milligan, Gregg; Burrows, Michael T; Freckleton, Rob; Spencer, Matthew
2013-11-01
1. Species distribution models are static models for the distribution of a species, based on Hutchinson's niche concept. They make probabilistic predictions about the distribution of a species, but do not have a temporal interpretation. In contrast, density-structured models based on categorical abundance data make it possible to incorporate population dynamics into species distribution modelling. 2. Using dynamic species distribution models, temporal aspects of a species' distribution can be investigated, including the predictability of future abundance categories and the expected persistence times of local populations, and how these may respond to environmental or anthropogenic drivers. 3. We built density-structured models for two intertidal marine invertebrates, the Lusitanian trochid gastropods Phorcus lineatus and Gibbula umbilicalis, based on 9 years of field data from around the United Kingdom. Abundances were recorded on a categorical scale, and stochastic models for year-to-year changes in abundance category were constructed with winter mean sea surface temperature (SST) and wave fetch (a measure of the exposure of a shore) as explanatory variables. 4. Both species were more likely to be present at sites with high SST, but differed in their responses to wave fetch. Phorcus lineatus had more predictable future abundance and longer expected persistence times than G. umbilicalis. This is consistent with the longer lifespan of P. lineatus. 5. Where data from multiple time points are available, dynamic species distribution models of the kind described here have many applications in population and conservation biology. These include allowing for changes over time when combining historical and contemporary data, and predicting how climate change might alter future abundance conditional on current distributions. PMID:23889003
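A minimal sketch of a density-structured model of this kind is a Markov chain on abundance categories: the stationary distribution gives the long-run probability of each category, and the expected persistence time in a category follows from its self-transition probability. The transition probabilities below are invented for illustration; the paper estimates them from survey data with SST and wave fetch as covariates:

```python
import numpy as np

# Year-to-year transitions between abundance categories
# (0 = absent, 1 = rare, 2 = common); illustrative numbers only.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

def stationary(P):
    """Long-run category distribution: the left eigenvector of P
    for eigenvalue 1, normalized to sum to one."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def expected_persistence(P, k):
    """Expected run length in category k: the run is geometric with
    continue-probability P[k, k], so the mean is 1 / (1 - P[k, k])."""
    return 1.0 / (1.0 - P[k, k])

pi = stationary(P)
print(pi)
print(expected_persistence(P, 2))   # mean years spent 'common' per visit
```

A longer-lived species such as P. lineatus corresponds, in this picture, to larger diagonal entries, hence longer expected persistence times and more predictable future categories.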
Immersed boundary conditions method for computational fluid dynamics problems
NASA Astrophysics Data System (ADS)
Husain, Syed Zahid
This dissertation presents implicit spectrally-accurate algorithms based on the concept of immersed boundary conditions (IBC) for solving a range of computational fluid dynamics (CFD) problems where the physical domains involve boundary irregularities. Both fixed and moving irregularities are considered, with particular emphasis placed on two-dimensional moving boundary problems. The physical model problems considered are comprised of the Laplace operator, the biharmonic operator, and the Navier-Stokes equations, and thus cover the most commonly encountered types of operators in CFD analyses. The IBC algorithm uses a fixed and regular computational domain with the flow domain immersed inside the computational domain. Boundary conditions along the edges of the time-dependent flow domain enter the algorithm in the form of internal constraints. Spectral spatial discretization for two-dimensional problems is based on Fourier expansions in the stream-wise direction and Chebyshev expansions in the normal-to-the-wall direction. Up to fourth-order implicit temporal discretization methods have been implemented. The IBC algorithm is shown to deliver the theoretically predicted accuracy in both time and space. Construction of the boundary constraints in the IBC algorithm provides degrees of freedom in excess of those required to formulate a closed system of algebraic equations. The 'classical IBC formulation' works by retaining the number of boundary constraints just sufficient to form a closed system of equations. The use of additional boundary constraints leads to the 'over-determined formulation' of the IBC algorithm. Over-determined systems are explored in order to improve the accuracy of the IBC method and to expand its applicability to more extreme geometries. Standard direct over-determined solvers based on evaluation of pseudo-inverses of the complete coefficient matrices have been tested on three model problems, namely, the Laplace equation, the biharmonic equation
Static and dynamic assessment of myocardial perfusion by computed tomography.
Danad, Ibrahim; Szymonifka, Jackie; Schulman-Marcus, Joshua; Min, James K
2016-08-01
Recent developments in computed tomography (CT) technology have fulfilled the prerequisites for the clinical application of myocardial CT perfusion (CTP) imaging. The evaluation of myocardial perfusion by CT can be achieved by static or dynamic scan acquisitions. Although both approaches have proved clinically feasible, substantial barriers need to be overcome before its routine clinical application. The current review provides an outline of the current status of CTP imaging and also focuses on disparities between static and dynamic CTPs for the evaluation of myocardial blood flow. PMID:27013250
Computer simulation of multi-rigid body dynamics and control
NASA Technical Reports Server (NTRS)
Swaminadham, M.; Moon, Young I.; Venkayya, V. B.
1990-01-01
The objective is to set up and analyze benchmark problems on multibody dynamics and to verify the predictions of two multibody computer simulation codes. TREETOPS and DISCOS have been used to run three example problems: a one-degree-of-freedom spring-mass-dashpot system, an inverted pendulum system, and a triple pendulum. To study the dynamics and control interaction, an inverted planar pendulum with an external body force and a torsional control spring was modeled as a hinge-connected two-rigid-body system. TREETOPS and DISCOS were used for the time history simulation of this problem. System state space variables and their time derivatives from the two simulation codes were compared.
Measurement and Information Extraction in Complex Dynamics Quantum Computation
NASA Astrophysics Data System (ADS)
Casati, Giulio; Montangero, Simone
Quantum Information processing has several different applications: some of them can be performed controlling only few qubits simultaneously (e.g. quantum teleportation or quantum cryptography) [1]. Usually, the transmission of large amount of information is performed repeating several times the scheme implemented for few qubits. However, to exploit the advantages of quantum computation, the simultaneous control of many qubits is unavoidable [2]. This situation increases the experimental difficulties of quantum computing: maintaining quantum coherence in a large quantum system is a difficult task. Indeed a quantum computer is a many-body complex system and decoherence, due to the interaction with the external world, will eventually corrupt any quantum computation. Moreover, internal static imperfections can lead to quantum chaos in the quantum register thus destroying computer operability [3]. Indeed, as it has been shown in [4], a critical imperfection strength exists above which the quantum register thermalizes and quantum computation becomes impossible. We showed such effects on a quantum computer performing an efficient algorithm to simulate complex quantum dynamics [5,6].
Evaluation of Secure Computation in a Distributed Healthcare Setting.
Kimura, Eizen; Hamada, Koki; Kikuchi, Ryo; Chida, Koji; Okamoto, Kazuya; Manabe, Shirou; Kuroda, Tomohiko; Matsumura, Yasushi; Takeda, Toshihiro; Mihara, Naoki
2016-01-01
Issues related to ensuring patient privacy and data ownership in clinical repositories prevent the growth of translational research. Previous studies have used an aggregator agent to obscure clinical repositories from the data user, and to ensure the privacy of output using statistical disclosure control. However, there remain several issues that must be considered. One such issue is that a data breach may occur when multiple nodes conspire. Another is that the agent may eavesdrop on or leak a user's queries and their results. We have implemented a secure computing method so that the data used by each party can be kept confidential even if all of the other parties conspire to crack the data. We deployed our implementation at three geographically distributed nodes connected to a high-speed layer two network. The performance of our method, with respect to processing times, suggests suitability for practical use. PMID:27577361
Research into display sharing techniques for distributed computing environments
NASA Technical Reports Server (NTRS)
Hugg, Steven B.; Fitzgerald, Paul F., Jr.; Rosson, Nina Y.; Johns, Stephen R.
1990-01-01
The X-based Display Sharing solution for distributed computing environments is described. The Display Sharing prototype includes the base functionality for telecast and display copy requirements. Since the prototype implementation is modular and the system design provided flexibility for Mission Control Center Upgrade (MCCU) operational considerations, the prototype implementation can be the baseline for a production Display Sharing implementation. To facilitate the process the following discussions are presented: Theory of operation; System architecture; Using the prototype; Software description; Research tools; Prototype evaluation; and Outstanding issues. The prototype is based on the concept of a dedicated central host performing the majority of the Display Sharing processing, allowing minimal impact on each individual workstation. Each workstation participating in Display Sharing hosts programs to facilitate the user's access to Display Sharing as the host machine.
Computational fluid dynamics applications at McDonnell Douglas
NASA Technical Reports Server (NTRS)
Hakkinen, R. J.
1987-01-01
Representative examples are presented of applications and development of advanced Computational Fluid Dynamics (CFD) codes for aerodynamic design at the McDonnell Douglas Corporation (MDC). Transonic potential and Euler codes, interactively coupled with boundary layer computation, and solutions of slender-layer Navier-Stokes approximation are applied to aircraft wing/body calculations. An optimization procedure using evolution theory is described in the context of transonic wing design. Euler methods are presented for analysis of hypersonic configurations, and helicopter rotors in hover and forward flight. Several of these projects were accepted for access to the Numerical Aerodynamic Simulation (NAS) facility at the NASA-Ames Research Center.
Computational fluid dynamics studies of nuclear rocket performance
NASA Technical Reports Server (NTRS)
Stubbs, Robert M.; Kim, Suk C.; Benson, Thomas J.
1994-01-01
A CFD analysis of a low pressure nuclear rocket concept is presented with the use of an advanced chemical kinetics, Navier-Stokes code. The computations describe the flow field in detail, including gas dynamic, thermodynamic and chemical properties, as well as global performance quantities such as specific impulse. Computational studies of several rocket nozzle shapes are conducted in an attempt to maximize hydrogen recombination. These Navier-Stokes calculations, which include real gas and viscous effects, predict lower performance values than have been reported heretofore.
Operational computer graphics in the flight dynamics environment
NASA Technical Reports Server (NTRS)
Jeletic, James F.
1989-01-01
Over the past five years, the Flight Dynamics Division of the National Aeronautics and Space Administration's (NASA's) Goddard Space Flight Center has incorporated computer graphics technology into its operational environment. In an attempt to increase the effectiveness and productivity of the Division, computer graphics software systems have been developed that display spacecraft tracking and telemetry data in 2-d and 3-d graphic formats that are more comprehensible than the alphanumeric tables of the past. These systems vary in functionality from real-time mission monitoring systems, to mission planning utilities, to system development tools. Here, the capabilities and architecture of these systems are discussed.
Computational fluid dynamics studies of nuclear rocket performance
NASA Technical Reports Server (NTRS)
Stubbs, Robert M.; Benson, Thomas J.; Kim, Suk C.
1991-01-01
A CFD analysis of a low pressure nuclear rocket concept is presented with the use of an advanced chemical kinetics, Navier-Stokes code. The computations describe the flow field in detail, including gas dynamic, thermodynamic and chemical properties, as well as global performance quantities such as specific impulse. Computational studies of several rocket nozzle shapes are conducted in an attempt to maximize hydrogen recombination. These Navier-Stokes calculations, which include real gas and viscous effects, predict lower performance values than have been reported heretofore.
Computational fluid dynamics applications at McDonnell Douglas
NASA Astrophysics Data System (ADS)
Hakkinen, R. J.
1987-03-01
Representative examples are presented of applications and development of advanced Computational Fluid Dynamics (CFD) codes for aerodynamic design at the McDonnell Douglas Corporation (MDC). Transonic potential and Euler codes, interactively coupled with boundary layer computation, and solutions of slender-layer Navier-Stokes approximation are applied to aircraft wing/body calculations. An optimization procedure using evolution theory is described in the context of transonic wing design. Euler methods are presented for analysis of hypersonic configurations, and helicopter rotors in hover and forward flight. Several of these projects were accepted for access to the Numerical Aerodynamic Simulation (NAS) facility at the NASA-Ames Research Center.
Computational Fluid Dynamics Analysis of Thoracic Aortic Dissection
NASA Astrophysics Data System (ADS)
Tang, Yik; Fan, Yi; Cheng, Stephen; Chow, Kwok
2011-11-01
Thoracic Aortic Dissection (TAD) is a cardiovascular disease with high mortality. An aortic dissection is formed when blood infiltrates the layers of the vascular wall, and a new artificial channel, the false lumen, is created. The expansion of the blood vessel due to the weakened wall enhances the risk of rupture. Computational fluid dynamics analysis is performed to study the hemodynamics of this pathological condition. Both idealized geometry and realistic patient configurations from computed tomography (CT) images are investigated. Physiological boundary conditions from in vivo measurements are employed. Flow configuration and biomechanical forces are studied. Quantitative analysis allows clinicians to assess the risk of rupture in making decisions regarding surgical intervention.
Symbolic dynamics and computation in model gene networks.
Edwards, R.; Siegelmann, H. T.; Aziza, K.; Glass, L.
2001-03-01
We analyze a class of ordinary differential equations representing a simplified model of a genetic network. In this network, the model genes control the production rates of other genes by a logical function. The dynamics in these equations are represented by a directed graph on an n-dimensional hypercube (n-cube) in which each edge is directed in a unique orientation. The vertices of the n-cube correspond to orthants of state space, and the edges correspond to boundaries between adjacent orthants. The dynamics in these equations can be represented symbolically. Starting from a point on the boundary between neighboring orthants, the equation is integrated until the boundary is crossed for a second time. Each different cycle, corresponding to a different sequence of orthants that are traversed during the integration of the equation always starting on a boundary and ending the first time that same boundary is reached, generates a different letter of the alphabet. A word consists of a sequence of letters corresponding to a possible sequence of orthants that arise from integration of the equation starting and ending on the same boundary. The union of the words defines the language. Letters and words correspond to analytically computable Poincare maps of the equation. This formalism allows us to define bifurcations of chaotic dynamics of the differential equation that correspond to changes in the associated language. Qualitative knowledge about the dynamics found by integrating the equation can be used to help solve the inverse problem of determining the underlying network generating the dynamics. This work places the study of dynamics in genetic networks in a context comprising both nonlinear dynamics and the theory of computation. (c) 2001 American Institute of Physics. PMID:12779450
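The symbolic itinerary described above (integrate within an orthant, record a symbol at each boundary crossing) can be sketched for a toy two-gene negative feedback loop. The network, focal-point values, and function names below are illustrative choices, not the class of equations analyzed in the paper, and a crude Euler integrator stands in for the analytically computable Poincare maps:

```python
import numpy as np

def focal(orthant):
    """Logical interaction: gene 0 is produced when gene 1 is off,
    gene 1 is produced when gene 0 is on; a two-gene negative
    feedback loop (illustrative choice)."""
    x_on, y_on = orthant
    return np.array([1.0 if not y_on else -1.0,
                     1.0 if x_on else -1.0])

def orthant_sequence(x0, dt=1e-3, t_max=20.0):
    """Integrate dx/dt = focal(orthant) - x and record the symbolic
    itinerary: the sequence of orthants of state space visited."""
    x = np.array(x0, dtype=float)
    seq = [tuple(x > 0)]
    for _ in range(int(t_max / dt)):
        x += dt * (focal(tuple(x > 0)) - x)
        o = tuple(x > 0)
        if o != seq[-1]:
            seq.append(o)
    return seq

seq = orthant_sequence([0.5, -0.5])
print(seq[:6])
```

For this loop each orthant's focal point lies across a boundary, so the trajectory cycles through all four orthants of the plane, and the itinerary is the periodic word one would read off the directed 2-cube graph.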
Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex
Procyk, Emmanuel; Dominey, Peter Ford
2016-01-01
Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to represent diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of dorsal anterior cingulate cortex of monkeys which revealed similar network dynamics. We argue that reservoir computing is a
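A minimal echo-state sketch shows the core mechanism the abstract describes: a fixed random recurrent network recombines present and past inputs into a mixed-selectivity state, and only a linear readout is trained. The task (recalling the input from three steps earlier), the network sizes, and the ridge parameter are illustrative assumptions, far simpler than the monkey cognitive task modelled in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in, washout, T = 200, 1, 200, 2000

# Fixed random recurrent weights, scaled to spectral radius 0.9
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(size=(n_res, n_in))

u = rng.uniform(-1, 1, size=(T, n_in))          # random input stream
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])            # reservoir update
    states[t] = x

# Mixed selectivity over present and past inputs lets a linear readout
# recover temporal context: ridge-train it to output the input 3 steps ago.
delay = 3
X = states[washout:]
y = u[washout - delay:T - delay, 0]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
mse = np.mean((X @ W_out - y) ** 2)
print("readout MSE:", mse, " target variance:", np.var(y))
```

Only W_out is learned; the recurrent weights stay random and fixed, which is the sense in which the reservoir provides pre-coded representations that learning then selectively amplifies.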
Toward unification of taxonomy databases in a distributed computer environment
Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi
1994-12-31
All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid in biological research by computer. The taxonomy databases are, however, not consistently unified in a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful for comparing many research results and investigating future research directions from existing research results. In particular, it will be useful for comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existing taxonomy databases and eliminate inconsistencies (errors) that are present in them. Inconsistencies occur particularly in the restructuring of the existing taxonomy databases, since classification rules for constructing the taxonomy have rapidly changed with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases in a distributed computer environment. The methodology is implemented in the relational database management system SYBASE.
Distributed computer system enhances productivity for SRB joint optimization
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.
1987-01-01
Initial calculations of a redesign of the solid rocket booster joint that failed during the shuttle tragedy showed that the design had a weight penalty associated with it. Optimization techniques were to be applied to determine if there was any way to reduce the weight while keeping the joint opening closed and limiting the stresses. To allow engineers to examine as many alternatives as possible, a system was developed consisting of existing software that coupled structural analysis with optimization and that would execute on a network of computer workstations. To reduce turnaround time, this system took advantage of the parallelism offered by the finite difference technique of computing gradients, allowing several workstations to contribute to the solution of the problem simultaneously. The resulting system reduced the time to complete one optimization cycle from two hours to one-half hour, with the potential of reducing it to 15 minutes. The current distributed system, which contains numerous extensions, requires a one-hour turnaround per optimization cycle; this would take four hours with the sequential system.
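The parallelism exploited here comes from the fact that each finite-difference gradient component requires only independent analyses. A minimal sketch, with an invented quadratic objective standing in for the structural analysis and a thread pool standing in for the network of workstations:

```python
from concurrent.futures import ThreadPoolExecutor

def objective(x):
    # stand-in for one structural analysis returning the joint weight
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + x[0] * x[1]

def fd_component(x, i, h=1e-6):
    # one central-difference gradient component: two independent analyses
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (objective(xp) - objective(xm)) / (2.0 * h)

def gradient(x, workers=4):
    # every component is independent, so the analyses can run on
    # separate workstations; threads stand in for remote workers here
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fd_component, x, i) for i in range(len(x))]
        return [f.result() for f in futures]

g = gradient([0.0, 0.0])   # analytic gradient at the origin is (-2, 2)
```

With n design variables, the 2n analyses per gradient can proceed concurrently, which is where the reported reduction from two hours to one-half hour per optimization cycle comes from.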
Convergence dynamics of the Bak Sneppen model: Activity rate and waiting time distribution
NASA Astrophysics Data System (ADS)
Tirnakli, Ugur; Lyra, Marcelo L.
2007-02-01
In this work, we study the convergence dynamics of two independent random configurations of the Bak-Sneppen model of self-organized criticality evolving under the same external noise. A recently proposed measure of the Hamming distance, which considers the minimum difference between displaced configurations, is used. The displacement evolves intermittently in time. We compute the jump activity rate and the waiting time distribution and report on their asymptotic power-law scaling, which characterizes the slow relaxation and the absence of characteristic length and time scales typical of critical dynamical systems.
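The displacement-minimized distance can be illustrated schematically: treat each configuration as values on a ring and take the smallest mean absolute difference over all cyclic shifts (the paper's exact definition may differ in detail):

```python
def displaced_distance(a, b):
    """Mean absolute difference between two ring configurations,
    minimized over all cyclic displacements of one of them."""
    n = len(a)
    return min(
        sum(abs(a[i] - b[(i + s) % n]) for i in range(n)) / n
        for s in range(n)
    )

x = [0.1, 0.5, 0.3, 0.9]
y = [0.3, 0.9, 0.1, 0.5]          # x rotated by two sites
z = [0.1, 0.5, 0.3, 0.8]          # x with one site perturbed

d_rotated = displaced_distance(x, y)    # a pure shift counts as "no damage"
d_perturbed = displaced_distance(x, z)  # small but nonzero
```

Under such a measure, two copies of the model that merely drift relative to each other register zero distance, so the observed intermittent jumps isolate genuine configuration differences.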
Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang
2011-01-01
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
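For contrast with the paper's non-reversible construction, standard Gibbs sampling of a Boltzmann distribution over binary units — the conventional MCMC scheme that the authors argue is inconsistent with spiking dynamics — can be sketched as follows (the two-unit weights and biases are illustrative assumptions):

```python
import math
import random

def gibbs_boltzmann(W, b, steps=20000, seed=1):
    """Random-scan Gibbs sampling of p(z) ~ exp(z.Wz/2 + b.z) over binary z."""
    rng = random.Random(seed)
    n = len(b)
    z = [0] * n
    counts = [0] * n
    for _ in range(steps):
        k = rng.randrange(n)                      # pick one unit to resample
        field = b[k] + sum(W[k][j] * z[j] for j in range(n) if j != k)
        z[k] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-field)) else 0
        for i in range(n):
            counts[i] += z[i]
    return [c / steps for c in counts]            # estimated marginals

# two units with a positive coupling tend to be active together;
# the exact marginal P(z_i = 1) here is (1 + e) / (3 + e), about 0.65
m = gibbs_boltzmann([[0.0, 1.0], [1.0, 0.0]], [0.0, 0.0])
```

In the neural reading, `field` plays the role of a membrane potential and the sigmoid an instantaneous firing probability; the paper's contribution is replacing this reversible update with a non-reversible chain whose additional random variables reflect temporal processes of spiking activity such as refractoriness.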
Combining dynamical decoupling with fault-tolerant quantum computation
Ng, Hui Khoon; Preskill, John; Lidar, Daniel A.
2011-07-15
We study how dynamical decoupling (DD) pulse sequences can improve the reliability of quantum computers. We prove upper bounds on the accuracy of DD-protected quantum gates and derive sufficient conditions for DD-protected gates to outperform unprotected gates. Under suitable conditions, fault-tolerant quantum circuits constructed from DD-protected gates can tolerate stronger noise and have a lower overhead cost than fault-tolerant circuits constructed from unprotected gates. Our accuracy estimates depend on the dynamics of the bath that couples to the quantum computer and can be expressed either in terms of the operator norm of the bath's Hamiltonian or in terms of the power spectrum of bath correlations; we explain in particular how the performance of recursively generated concatenated pulse sequences can be analyzed from either viewpoint. Our results apply to Hamiltonian noise models with limited spatial correlations.
Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Andrade, P.; Babik, M.; Bhatt, K.; Chand, P.; Collados, D.; Duggal, V.; Fuente, P.; Hayashi, S.; Imamagic, E.; Joshi, P.; Kalmady, R.; Karnani, U.; Kumar, V.; Lapka, W.; Quick, R.; Tarragon, J.; Teige, S.; Triantafyllidis, C.
2012-12-01
The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO[1] managers, service managers, management), from different middleware providers (ARC[2], dCache[3], gLite[4], UNICORE[5] and VDT[6]), consortiums (WLCG[7], EMI[11], EGI[15], OSG[13]), and operational teams (GOC[16], OMB[8], OTAG[9], CSIRT[10]). The seamless harmonization of these distributed actors is exercised daily in the monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to its visualization on the MyWLCG[27] portal, where it is exposed to other clients. This monitoring workflow profits from the interoperability established between the SAM[19] and RSV[20] frameworks. We show how these two distributed structures are capable of uniting technologies and hiding the complexity around them, making them easy for the community to use. Finally, the different supported deployment strategies, tailored not only for monitoring the entire infrastructure but also for monitoring sites and virtual organizations, are presented and the associated operational benefits highlighted.
Piccinelli, Marina; Vergara, Christian; Antiga, Luca; Forzenigo, Laura; Biondetti, Pietro; Domanin, Maurizio
2013-11-01
The aim of the present work is to quantitatively assess the three-dimensional distributions of the displacements experienced during the cardiac cycle by the luminal boundary of abdominal aortic aneurysms (AAA) and to correlate them with the local bulk hemodynamics. Ten patients were imaged by means of time-resolved computed tomography, and each patient-specific vascular morphology was reconstructed for all available time frames. The AAA lumen boundary motion was tracked, and the lumen boundary displacements (LBD) computed for each time frame. The intra-aneurysm hemodynamic quantities, specifically wall shear stress (WSS), were evaluated with computational fluid dynamics simulations. Co-localization of LBD and WSS distributions was evaluated by means of the Pearson correlation coefficient. A clear anisotropic distribution of LBD was evidenced in both space and time; a combination of AAA lumen boundary inward- and outward-directed motions was assessed. A co-localization between the largest outward LBD and high WSS was demonstrated, supporting the hypothesis of a mechanistic relationship between anisotropic displacement and the hemodynamic forces related to the impingement of the blood on the lumen boundary. The presence of anisotropic displacements of the AAA lumen boundary and their link to hemodynamic forces have been assessed, highlighting a new possible role for hemodynamics in the study of AAA progression. PMID:23446648
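The co-localization measure used here is the Pearson correlation between two fields sampled at the same surface nodes; a minimal sketch with invented nodal values (real LBD/WSS fields would come from the image registration and CFD steps):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical values at five matched surface nodes
lbd = [0.4, 1.2, 2.1, 0.8, 1.7]   # outward lumen boundary displacement (mm)
wss = [0.3, 0.9, 1.6, 0.7, 1.2]   # wall shear stress at the same nodes (Pa)
r = pearson(lbd, wss)
```

A coefficient near +1, as in this toy data, is the kind of result that supports the reported co-localization of the largest outward displacements with high WSS.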
Microphysical and Dynamical Influences on Cirrus Cloud Optical Depth Distributions
Kay, J.; Baker, M.; Hegg, D.
2005-03-18
Cirrus cloud inhomogeneity occurs at scales greater than the cirrus radiative smoothing scale (~100 m), but less than typical global climate model (GCM) resolutions (~300 km). Therefore, calculating cirrus radiative impacts in GCMs requires an optical depth distribution parameterization. Radiative transfer calculations are sensitive to optical depth distribution assumptions (Fu et al. 2000; Carlin et al. 2002). Using Raman lidar observations, we quantify cirrus timescales and optical depth distributions at the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site in Lamont, OK (USA). We demonstrate the sensitivity of outgoing longwave radiation (OLR) calculations to assumed optical depth distributions and to the temporal resolution of optical depth measurements. Recent work has highlighted the importance of dynamics and nucleation for cirrus evolution (Haag and Karcher 2004; Karcher and Strom 2003). We need to understand the main controls on cirrus optical depth distributions to incorporate cirrus variability into model radiative transfer calculations. With an explicit ice microphysics parcel model, we aim to understand the influence of ice nucleation mechanism and imposed dynamics on cirrus optical depth distributions.
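Why the OLR calculation is sensitive to the optical depth distribution follows from Jensen's inequality: cloud emissivity 1 − exp(−τ) is concave in τ, so computing OLR from the mean optical depth is not the same as averaging OLR over the distribution. A toy illustration (the two-value OLR blend and the lognormal τ distribution are assumptions for demonstration only):

```python
import math
import random

def olr(tau, clear=280.0, cloudy=120.0):
    """Toy OLR (W/m^2): blend clear-sky and opaque-cloud values
    by the cloud emissivity 1 - exp(-tau)."""
    eps = 1.0 - math.exp(-tau)
    return (1.0 - eps) * clear + eps * cloudy

rng = random.Random(42)
taus = [rng.lognormvariate(-0.5, 1.0) for _ in range(100_000)]  # assumed pdf

mean_tau = sum(taus) / len(taus)
olr_of_mean = olr(mean_tau)                            # single-value shortcut
mean_of_olr = sum(olr(t) for t in taus) / len(taus)    # distribution-aware
bias = mean_of_olr - olr_of_mean                       # positive: shortcut underestimates OLR
```

The sign of the bias (using the mean τ overestimates cloud emissivity and hence underestimates OLR) is exactly the kind of error a distribution parameterization is meant to remove.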
Dynamic computed tomographic scans in experimental brain abscess.
Enzmann, D R; Placone, R C; Britt, R H
1984-01-01
Dynamic computed tomographic scans were performed in an experimental brain abscess model to establish criteria that could be utilized in abscess staging. The vascular phase of the time-density curves did not differentiate cerebritis and capsule stages. The amount of residual enhancement after the first pass of an intra-arterial contrast bolus differed between major abscess stages, the greater residual enhancement being noted in the capsule stage. PMID:6462439
COMPUTER MODEL OF TEMPERATURE DISTRIBUTION IN OPTICALLY PUMPED LASER RODS
NASA Technical Reports Server (NTRS)
Farrukh, U. O.
1994-01-01
Managing the thermal energy that accumulates within a solid-state laser material under active pumping is of critical importance in the design of laser systems. Earlier models that calculated the temperature distribution in laser rods were single dimensional and assumed laser rods of infinite length. This program presents a new model which solves the temperature distribution problem for finite dimensional laser rods and calculates both the radial and axial components of temperature distribution in these rods. The modeled rod is either side-pumped or end-pumped by a continuous or a single pulse pump beam. (At the present time, the model cannot handle a multiple pulsed pump source.) The optical axis is assumed to be along the axis of the rod. The program also assumes that it is possible to cool different surfaces of the rod at different rates. The user defines the laser rod material characteristics, determines the types of cooling and pumping to be modeled, and selects the time frame desired via the input file. The program contains several self checking schemes to prevent overwriting memory blocks and to provide simple tracing of information in case of trouble. Output for the program consists of 1) an echo of the input file, 2) diffusion properties, radius and length, and time for each data block, 3) the radial increments from the center of the laser rod to the outer edge of the laser rod, and 4) the axial increments from the front of the laser rod to the other end of the rod. This program was written in Microsoft FORTRAN77 and implemented on a Tandon AT with a 287 math coprocessor. The program can also run on a VAX 750 mini-computer. It has a memory requirement of about 147 KB and was developed in 1989.
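The finite-rod heat flow the program models can be sketched with an explicit finite-difference diffusion update on a small 2D (axial × radial) grid whose surfaces are held at the coolant temperature; this Cartesian toy omits the cylindrical metric terms, material properties, and pump-source details of the actual model:

```python
def diffuse(T, alpha=0.1, steps=200):
    """Explicit finite-difference heat diffusion on a 2D grid with
    fixed-temperature boundaries. Stable for alpha = D*dt/dx^2 <= 0.25."""
    nz, nr = len(T), len(T[0])
    for _ in range(steps):
        new = [row[:] for row in T]
        for i in range(1, nz - 1):
            for j in range(1, nr - 1):
                new[i][j] = T[i][j] + alpha * (
                    T[i + 1][j] + T[i - 1][j]
                    + T[i][j + 1] + T[i][j - 1] - 4.0 * T[i][j])
        T = new
    return T

# a single pump pulse deposits heat at the rod centre;
# all surfaces are held at the coolant temperature (0)
nz, nr = 11, 11
T = [[0.0] * nr for _ in range(nz)]
T[5][5] = 100.0
T = diffuse(T)
```

Allowing different boundary rows/columns to be held at different values would mimic the program's ability to cool different surfaces of the rod at different rates.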
Metastable atoms in a Mg beam: Excitation dynamics and velocity distribution
Giusfredi, G.; Godone, A.; Bava, E.; Novero, C.
1988-03-01
We describe the realization of a source of Mg atoms in the metastable 3s3p ³P triplet state, and we report a theoretical model describing the dynamic process of production of metastable atoms via electron impact excitation. Experimental results concerning atomic flux and velocity distribution are reported and compared with the theoretical model; the efficiency of production of metastable atoms was 40%, in good agreement with the computed value, and the velocity distribution showed a dependence on the discharge current close to the theoretical prediction.
Dynamic analysis of spur gears using computer program DANST
NASA Technical Reports Server (NTRS)
Oswald, Fred B.; Lin, Hsiang Hsi; Liou, Chuen-Huei; Valco, Mark J.
1993-01-01
DANST is a computer program for static and dynamic analysis of spur gear systems. The program can be used for parametric studies to predict the effect on dynamic load and tooth bending stress of spur gears due to operating speed, torque, stiffness, damping, inertia, and tooth profile. DANST performs geometric modeling and dynamic analysis for low- or high-contact-ratio spur gears. DANST can simulate gear systems with contact ratio ranging from one to three. It was designed to be easy to use, and it is extensively documented by comments in the source code. This report describes the installation and use of DANST. It covers input data requirements and presents examples. The report also compares DANST predictions for gear tooth loads and bending stress to experimental and finite element results.
Universality in survivor distributions: Characterizing the winners of competitive dynamics.
Luck, J M; Mehta, A
2015-11-01
We investigate the survivor distributions of a spatially extended model of competitive dynamics in different geometries. The model consists of a deterministic dynamical system of individual agents at specified nodes, which might or might not survive the predatory dynamics: all stochasticity is brought in by the initial state. Every such initial state leads to a unique and extended pattern of survivors and nonsurvivors, which is known as an attractor of the dynamics. We show that the number of such attractors grows exponentially with system size, so that their exact characterization is limited to only very small systems. Given this, we construct an analytical approach based on inhomogeneous mean-field theory to calculate survival probabilities for arbitrary networks. This powerful (albeit approximate) approach shows how universality arises in survivor distributions via a key concept-the dynamical fugacity. Remarkably, in the large-mass limit, the survivor probability of a node becomes independent of network geometry and assumes a simple form which depends only on its mass and degree. PMID:26651747
An investigation of computational modeling on phase distribution phenomena in vertical pipes
Bangxian Wu; Chang, S.L.; Lottes, S.A.
1995-07-01
A phase distribution phenomenon is observed in many gas/solid flows. An analysis of this phenomenon indicates that particle turbulence has a significant impact on the dispersion of particles in a vertical pipe flow. A new particle turbulence model has been developed to describe the phenomenon, based on the inclusion of particle turbulence dynamics in the transport equations. The main features of the model include a new transport equation for particle turbulent kinetic energy, a new expression for the radial particle diffusion flux replacing Fick's law, and a new turbulent viscosity correlation. The particle turbulence model was incorporated into a computational fluid dynamics code to predict particle dispersion in a vertical pipe flow. Preliminary results show the expected trend of particle accumulation near the wall.
Dynamics of Biofilm Regrowth in Drinking Water Distribution Systems
Husband, S.; Loza, V.; Boxall, J.
2016-01-01
The majority of biomass within water distribution systems is in the form of attached biofilm. This is known to be central to drinking water quality degradation following treatment, yet little understanding of the dynamics of these highly heterogeneous communities exists. This paper presents original information on such dynamics, with findings demonstrating patterns of material accumulation, seasonality, and influential factors. Rigorous flushing operations repeated over a 1-year period on an operational chlorinated system in the United Kingdom are presented here. Intensive monitoring and sampling were undertaken, including time-series turbidity and detailed microbial analysis using 16S rRNA Illumina MiSeq sequencing. The results show that bacterial dynamics were influenced by differences in the supplied water and by the material remaining attached to the pipe wall following flushing. Turbidity, metals, and phosphate were the main factors correlated with the distribution of bacteria in the samples. Coupled with the lack of inhibition of biofilm development due to residual chlorine, this suggests that limiting inorganic nutrients, rather than organic carbon, might be a viable component in treatment strategies to manage biofilms. The research also showed that repeat flushing exerted beneficial selective pressure, giving another reason for flushing being a viable advantageous biofilm management option. This work advances our understanding of microbiological processes in drinking water distribution systems and helps inform strategies to optimize asset performance. IMPORTANCE This research provides novel information regarding the dynamics of biofilm formation in real drinking water distribution systems made of different materials. This new knowledge on microbiological process in water supply systems can be used to optimize the performance of the distribution network and to guarantee safe and good-quality drinking water to consumers. PMID:27208119
Applying uncertainty quantification to multiphase flow computational fluid dynamics
Gel, A; Garg, R; Tong, C; Shahnam, M; Guenther, C
2013-07-01
Multiphase computational fluid dynamics plays a major role in design and optimization of fossil fuel based reactors. There is a growing interest in accounting for the influence of uncertainties associated with physical systems to increase the reliability of computational simulation based engineering analysis. The U.S. Department of Energy's National Energy Technology Laboratory (NETL) has recently undertaken an initiative to characterize uncertainties associated with computer simulation of reacting multiphase flows encountered in energy producing systems such as a coal gasifier. The current work presents the preliminary results in applying non-intrusive parametric uncertainty quantification and propagation techniques with NETL's open-source multiphase computational fluid dynamics software MFIX. For this purpose an open-source uncertainty quantification toolkit, PSUADE developed at the Lawrence Livermore National Laboratory (LLNL) has been interfaced with MFIX software. In this study, the sources of uncertainty associated with numerical approximation and model form have been neglected, and only the model input parametric uncertainty with forward propagation has been investigated by constructing a surrogate model based on data-fitted response surface for a multiphase flow demonstration problem. Monte Carlo simulation was employed for forward propagation of the aleatory type input uncertainties. Several insights gained based on the outcome of these simulations are presented such as how inadequate characterization of uncertainties can affect the reliability of the prediction results. Also a global sensitivity study using Sobol' indices was performed to better understand the contribution of input parameters to the variability observed in response variable.
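Forward propagation of parametric input uncertainty through a surrogate, plus a first-order Sobol' sensitivity index, can be sketched without PSUADE or MFIX; the polynomial response surface and uniform input distributions below are invented for illustration:

```python
import random
import statistics

def surrogate(x1, x2):
    """Stand-in for a data-fitted response surface of the CFD model."""
    return 3.0 * x1 + x2 ** 2

rng = random.Random(0)

# forward Monte Carlo propagation of the (aleatory) input uncertainty
samples = [surrogate(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(2000)]
var_y = statistics.pvariance(samples)

# first-order Sobol' index of x1: Var(E[Y | x1]) / Var(Y),
# estimated with a simple (inefficient) double loop
cond_means = []
for _ in range(200):
    x1 = rng.uniform(0, 1)
    cond_means.append(sum(surrogate(x1, rng.uniform(0, 1)) for _ in range(200)) / 200)
S1 = statistics.pvariance(cond_means) / var_y
```

Practical toolkits use far more sample-efficient estimators (e.g. Saltelli-style designs) than this double loop, but the decomposition of output variance into input contributions is the same idea the study applies to MFIX.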
Finite element dynamic analysis on CDC STAR-100 computer
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lambiotte, J. J., Jr.
1978-01-01
Computational algorithms are presented for the finite element dynamic analysis of structures on the CDC STAR-100 computer. The spatial behavior is described using higher-order finite elements. The temporal behavior is approximated by using either the central difference explicit scheme or Newmark's implicit scheme. In each case the analysis is broken up into a number of basic macro-operations. Discussion is focused on the organization of the computation and the mode of storage of different arrays to take advantage of the STAR pipeline capability. The potential of the proposed algorithms is discussed and CPU times are given for performing the different macro-operations for a shell modeled by higher order composite shallow shell elements having 80 degrees of freedom.
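The central difference explicit scheme named above, applied to a single undamped degree of freedom m·x'' + k·x = 0, takes only a few lines (the one-DOF system is a stand-in for the assembled finite element matrices):

```python
def central_difference(m, k, x0, v0, dt, steps):
    """Explicit central-difference time integration of m*x'' + k*x = 0."""
    # start-up: estimate x(-dt) from the initial conditions
    a0 = -k * x0 / m
    x_prev = x0 - v0 * dt + 0.5 * a0 * dt * dt
    x, traj = x0, [x0]
    for _ in range(steps):
        a = -k * x / m
        x_next = 2.0 * x - x_prev + a * dt * dt   # central difference in time
        x_prev, x = x, x_next
        traj.append(x)
    return traj

# omega = sqrt(k/m) = 2, so the period is pi; after ~one period x returns to 1
traj = central_difference(m=1.0, k=4.0, x0=1.0, v0=0.0, dt=0.01, steps=314)
```

The explicit scheme is only conditionally stable (dt bounded by the highest structural frequency), whereas Newmark's implicit scheme trades that restriction for a linear solve per step; the paper's concern is organizing either set of macro-operations to suit the STAR-100 pipeline.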
NASA Technical Reports Server (NTRS)
Sohn, Andrew; Biswas, Rupak; Simon, Horst D.
1996-01-01
The computational requirements for an adaptive solution of unsteady problems change as the simulation progresses. This causes workload imbalance among processors on a parallel machine which, in turn, requires significant data movement at runtime. We present a new dynamic load-balancing framework, called JOVE, that balances the workload across all processors with a global view. Whenever the computational mesh is adapted, JOVE is activated to eliminate the load imbalance. JOVE has been implemented on an IBM SP2 distributed-memory machine in MPI for portability. Experimental results for two model meshes demonstrate that mesh adaption with load balancing gives more than a sixfold improvement over one without load balancing. We also show that JOVE gives a 24-fold speedup on 64 processors compared to sequential execution.
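JOVE's global view of the workload can be illustrated with a generic rebalancing heuristic; the abstract does not spell out JOVE's actual algorithm, so a greedy longest-processing-time assignment serves here as a common stand-in:

```python
import heapq

def balance(loads, nproc):
    """Greedily assign workloads (largest first) to the least-loaded processor."""
    heap = [(0.0, p) for p in range(nproc)]    # (total load, processor id)
    heapq.heapify(heap)
    assign = [[] for _ in range(nproc)]
    for i in sorted(range(len(loads)), key=lambda i: -loads[i]):
        total, p = heapq.heappop(heap)
        assign[p].append(i)
        heapq.heappush(heap, (total + loads[i], p))
    return assign

# after mesh adaption, partition workloads have become uneven (made-up values)
loads = [9.0, 7.5, 6.0, 4.0, 3.5, 2.0, 1.5, 1.0]
parts = balance(loads, 4)
totals = [sum(loads[i] for i in part) for part in parts]
```

A real framework must also weigh the data-movement cost of reassigning mesh regions against the gain in balance, which is where the sixfold improvement reported above comes from.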
High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation
Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn
2014-11-14
Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, a comparison of our method against two popular serial libraries, and application to numerous science datasets.
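For intuition about what is being computed (though not about the paper's distributed neighbor-exchange algorithm), a brute-force discrete Voronoi assignment labels every grid cell with the index of its nearest site:

```python
def discrete_voronoi(sites, nx, ny):
    """Label each cell of an nx-by-ny grid with the index of its nearest site."""
    return [[min(range(len(sites)),
                 key=lambda s: (sites[s][0] - x) ** 2 + (sites[s][1] - y) ** 2)
             for x in range(nx)]
            for y in range(ny)]

# two sites at opposite corners split the grid into two Voronoi regions
labels = discrete_voronoi([(0, 0), (9, 9)], 10, 10)
```

This brute force is O(cells × sites), which is exactly what fails at billions of particles; a distributed algorithm instead exchanges only the points near subdomain boundaries needed to make each subdomain's local Delaunay/Voronoi cells globally correct.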
Computational Fluid Dynamics Program at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Holst, Terry L.
1989-01-01
The Computational Fluid Dynamics (CFD) Program at NASA Ames Research Center is reviewed and discussed. The technical elements of the CFD Program are listed and briefly discussed. These elements include algorithm research, research and pilot code development, scientific visualization, advanced surface representation, volume grid generation, and numerical optimization. Next, the discipline of CFD is briefly discussed and related to other areas of research at NASA Ames, including experimental fluid dynamics, computer science research, computational chemistry, and numerical aerodynamic simulation. These areas combine with CFD to form a larger area of research, which might collectively be called computational technology. The ultimate goal of computational technology research at NASA Ames is to increase the physical understanding of the world in which we live, solve problems of national importance, and increase the technical capabilities of the aerospace community. Next, the major programs at NASA Ames that either use CFD technology or perform research in CFD are listed and discussed. Briefly, this list includes turbulent/transition physics and modeling, high-speed real gas flows, interdisciplinary research, turbomachinery demonstration computations, complete aircraft aerodynamics, rotorcraft applications, powered lift flows, high alpha flows, multiple body aerodynamics, and incompressible flow applications. Some of the individual problems actively being worked on in each of these areas are listed to help define the breadth or extent of CFD involvement in each of these major programs. State-of-the-art examples of various CFD applications are presented to highlight most of these areas. The main emphasis of this portion of the presentation is on examples which will not otherwise be treated at this conference by the individual presentations. Finally, a list of principal current limitations and expected future directions is given.
Distributed computing feasibility in a non-dedicated homogeneous distributed system
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Sun, Xian-He
1993-01-01
The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, the task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. We propose that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
An immunity-based model for dynamic distributed intrusion detection
NASA Astrophysics Data System (ADS)
Qiao, Peili; Wang, Tong; Su, Jie
2008-03-01
Traditional intrusion detection systems mostly adopt a centralized analysis engine, which suffers from a higher false-alarm rate and a lack of self-adaptability, and is thus increasingly unable to meet the extensive security demands of distributed network environments. An immunity-based model combining immune theory, data mining, and data fusion techniques for dynamic distributed intrusion detection is proposed in this paper. This system presents a method for establishing and evolving the early gene set, and defines the sets of Self, Nonself, and Immunity cells. Moreover, a detailed description is given of the architecture and working mechanism of the model, and the characteristics of the model are analyzed.
Autonomous Dynamic Soaring Platform for Distributed Mobile Sensor Arrays
BOSLOUGH, MARK B. E.
2002-06-01
This project makes use of "biomimetic behavioral engineering" in which adaptive strategies used by animals in the real world are applied to the development of autonomous robots. The key elements of the biomimetic approach are to observe and understand a survival behavior exhibited in nature, to create a mathematical model and simulation capability for that behavior, to modify and optimize the behavior for a desired robotics application, and to implement it. The application described in this report is dynamic soaring, a behavior that certain sea birds use to extract flight energy from laminar wind velocity gradients in the shallow atmospheric boundary layer directly above the ocean surface. Theoretical calculations, computational proof-of-principle demonstrations, and the first instrumented experimental flight test data for dynamic soaring are presented to address the feasibility of developing dynamic soaring flight control algorithms to sustain the flight of unmanned airborne vehicles (UAVs). Both hardware and software were developed for this application. Eight-foot custom foam sailplanes were built and flown in a steep shear gradient. A logging device was designed and constructed with custom software to record flight data during dynamic soaring maneuvers. A computational toolkit was developed to simulate dynamic soaring in special cases and with a full 6-degree of freedom flight dynamics model in a generalized time-dependent wind field. Several 3-dimensional visualization tools were built to replay the flight simulations. A realistic aerodynamics model of an eight-foot sailplane was developed using measured aerodynamic derivatives. Genetic programming methods were developed and linked to the simulations and visualization tools. These tools can now be generalized for other biomimetic behavior applications.
PHENIX On-Line Distributed Computing System Architecture
Desmond, Edmond; Haggerty, John; Kehayias, Hyon Joo; Purschke, Martin L.; Witzig, Chris; Kozlowski, Thomas
1997-05-22
PHENIX is one of the two large experiments at the Relativistic Heavy Ion Collider (RHIC) currently under construction at Brookhaven National Laboratory. The detector consists of 11 sub-detectors that are further subdivided into 29 units ("granules") that can be operated independently, which includes simultaneous data taking with independent data streams and independent triggers. The detector has 250,000 channels and is read out by front end modules, where the data is buffered in a pipeline while awaiting the level trigger decision. Zero suppression and calibration is done after the level accept in custom built data collection modules (DCMs) with DSPs before the data is sent to an event builder (design throughput of 2 Gb/sec) and higher level triggers. The On-line Computing Systems Group (ONCS) has two responsibilities. Firstly, it is responsible for receiving the data from the event builder, routing it through a network of workstations to consumer processes, and archiving it at a data rate of 20 MB/sec. Secondly, it is responsible for the overall configuration, control, and operation of the detector and data acquisition chain, which comprises the software integration for several thousand custom built hardware modules. The software must furthermore support the independent operation of the above mentioned granules, which includes the coordination of processes that run in 60-100 VME processors and workstations. ONCS has adapted the Shlaer-Mellor Object Oriented Methodology for the design of the top layer software. CORBA is used as the communication layer between the distributed objects, which are implemented as asynchronous finite state machines. We will give an overview of the PHENIX online system with the main focus on the system architecture, software components, and integration tasks of the On-line Computing group ONCS, and report on the status of the current prototypes.
Maintaining Traceability in an Evolving Distributed Computing Environment
NASA Astrophysics Data System (ADS)
Collier, I.; Wartel, R.
2015-12-01
The management of risk is fundamental to the operation of any distributed computing infrastructure. Identifying the cause of incidents is essential to prevent them from re-occurring. In addition, it is a goal to contain the impact of an incident while keeping services operational. For the response to incidents to be acceptable, it needs to be commensurate with the scale of the problem. The minimum level of traceability for distributed computing infrastructure usage is to be able to identify the source of all actions (executables, file transfers, pilot jobs, portal jobs, etc.) and the individual who initiated them. In addition, sufficiently fine-grained controls, such as blocking the originating user and monitoring to detect abnormal behaviour, are necessary for keeping services operational. It is essential to be able to understand the cause and to fix any problems before re-enabling access for the user. The aim is to be able to answer the basic questions who, what, where, and when concerning any incident. This requires retaining all relevant information, including timestamps and the digital identity of the user, sufficient to identify, for each service instance, every security event including at least the following: connect, authenticate, authorize (including identity changes) and disconnect. In traditional grid infrastructures (WLCG, EGI, OSG, etc.) best practices and procedures for gathering and maintaining the information required to maintain traceability are well established. In particular, sites collect and store information required to ensure traceability of events at their sites. With the increased use of virtualisation and private and public clouds for HEP workloads, established procedures, which are unable to see 'inside' running virtual machines, no longer capture all the information required. Maintaining traceability will at least involve a shift of responsibility from sites to Virtual Organisations (VOs), bringing with it new requirements for their
The Relation between Approximation in Distribution and Shadowing in Molecular Dynamics
NASA Astrophysics Data System (ADS)
Tupper, Paul
2009-01-01
Molecular dynamics refers to the computer simulation of a material at the atomic level. An open problem in numerical analysis is to explain the apparent reliability of molecular dynamics simulations. The difficulty is that individual trajectories computed in molecular dynamics are accurate for only short time intervals, whereas apparently reliable information can be extracted from very long-time simulations. It has been conjectured that long molecular dynamics trajectories have low-dimensional statistical features that accurately approximate those of the original system. Another conjecture is that numerical trajectories satisfy the shadowing property: they are close over long time intervals to exact trajectories but with different initial conditions. We prove that these two views are actually equivalent to each other, after we suitably modify the concept of shadowing. A key ingredient of our result is a general theorem that allows us to take random elements of a metric space that are close in distribution and embed them in the same probability space so that they are close in a strong sense. This result is similar to the Strassen-Dudley theorem except that a mapping is provided between the two random elements. Our results on shadowing are motivated by molecular dynamics but apply to the approximation of any dynamical system when initial conditions are selected according to a probability measure.
Computational Fluid Dynamic simulations of pipe elbow flow.
Homicz, Gregory Francis
2004-08-01
One problem facing today's nuclear power industry is flow-accelerated corrosion and erosion in pipe elbows. The Korean Atomic Energy Research Institute (KAERI) is performing experiments in their Flow-Accelerated Corrosion (FAC) test loop to better characterize these phenomena, and develop advanced sensor technologies for the condition monitoring of critical elbows on a continuous basis. In parallel with these experiments, Sandia National Laboratories is performing Computational Fluid Dynamic (CFD) simulations of the flow in one elbow of the FAC test loop. The simulations are being performed using the FLUENT commercial software developed and marketed by Fluent, Inc. The model geometry and mesh were created using the GAMBIT software, also from Fluent, Inc. This report documents the results of the simulations that have been made to date; baseline results employing the RNG k-ε turbulence model are presented. The predicted value for the diametrical pressure coefficient is in reasonably good agreement with published correlations. Plots of the velocities, pressure field, wall shear stress, and turbulent kinetic energy adjacent to the wall are shown within the elbow section. Somewhat to our surprise, these indicate that the maximum values of both wall shear stress and turbulent kinetic energy occur near the elbow entrance, on the inner radius of the bend. Additional simulations were performed for the same conditions, but with the RNG k-ε model replaced by either the standard k-ε or the realizable k-ε turbulence model. The predictions using the standard k-ε model are quite similar to those obtained in the baseline simulation. However, with the realizable k-ε model, more significant differences are evident. The maximums in both wall shear stress and turbulent kinetic energy now appear on the outer radius, near the elbow exit, and are ≈11% and 14% greater, respectively, than those predicted in the baseline calculation.
Dynamical next-to-next-to-leading order parton distributions
Jimenez-Delgado, P.; Reya, E.
2009-04-01
Utilizing recent deep inelastic scattering measurements (σ_r, F_{2,3,L}) and data on hadronic dilepton production we determine at next-to-next-to-leading order (NNLO, 3-loop) of QCD the dynamical parton distributions of the nucleon generated radiatively from valencelike positive input distributions at an optimally chosen low resolution scale (Q_0^2 < 1 GeV^2). These are compared with 'standard' NNLO distributions generated from positive input distributions at some fixed and higher resolution scale (Q_0^2 > 1 GeV^2). Although the NNLO corrections imply in both approaches an improved value of χ^2, typically χ^2_NNLO ≈ 0.9 χ^2_NLO, present deep inelastic scattering data are still not sufficiently accurate to distinguish between NLO results and the minute NNLO effects of a few percent, despite the fact that the dynamical NNLO uncertainties are somewhat smaller than the NLO ones and both are, as expected, smaller than those of their standard counterparts. The dynamical predictions for F_L(x, Q^2) become perturbatively stable already at Q^2 = 2-3 GeV^2 where precision measurements could even delineate NNLO effects in the very small-x region. This is in contrast to the common standard approach, but NNLO/NLO differences are here less distinguishable due to the larger 1σ uncertainty bands. Within the dynamical approach we obtain α_s(M_Z^2) = 0.1124 ± 0.0020, whereas the somewhat less constrained standard fit gives α_s(M_Z^2) = 0.1158 ± 0.0035.
Dynamic stall computations using a zonal Navier-Stokes model. Master's thesis
Conroyd, J.H.
1988-06-01
A zonal Navier-Stokes model is installed and verified on the NASA Ames Cray X/MP-48 computer and is used to calculate the flow field about a NACA 0012 airfoil oscillating in pitch. Surface-pressure distributions and integrated lift, pitching moment, and drag coefficient versus angle of attack are compared to existing experimental data for four cases and existing computational data for one case. These cases involve deep dynamic stall and fully detached flow at and below a free-stream Mach number of .184. The flow field about the oscillating airfoil is investigated through the study of pressure, vorticity, local velocity, and stream function. Finally, the effects of pitch rate on dynamic stall are investigated.
Computational modeling of dynamic behaviors of human teeth.
Liao, Zhipeng; Chen, Junning; Zhang, Zhongpu; Li, Wei; Swain, Michael; Li, Qing
2015-12-16
Despite the importance of the dynamic behaviors of dental and periodontal structures to clinical practice, the biomechanical roles of anatomic sophistication and material properties in the quantification of vibratory characteristics remain under-studied. This paper aimed to generate an anatomically accurate and structurally detailed 3D finite element (FE) maxilla model and explore the dynamic behaviors of human teeth by characterizing the natural frequencies (NFs) and mode shapes. FE models with different levels of structural integrity and material properties were established to quantify the effects of modeling techniques on the computation of vibratory characteristics. The results showed that the integrity of the computational model considerably influences the characterization of vibratory behaviors, as evidenced by declined NFs and perceptibly altered mode shapes resulting from the models with higher degrees of completeness and accuracy. A primary NF of 889 Hz and the corresponding mode shape, featuring linguo-buccal vibration of the maxillary right 2nd molar, were obtained based on the complete maxilla model. It was found that the periodontal ligament (PDL), a connective soft tissue, plays an important role in quantifying NFs. It was also revealed that damping and heterogeneity of materials contribute to the quantification of vibratory characteristics. The study provided important biomechanical insights and clinical references for future studies on dynamic behaviors of dental and periodontal structures. PMID:26584964
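The role the abstract assigns to the PDL (a soft connective layer whose presence lowers the computed natural frequencies) can be illustrated with a deliberately crude lumped model. The two-mass chain below and all stiffness and mass values are illustrative assumptions, not quantities from the paper's FE model:

```python
# Toy sketch: natural frequencies of a 2-DOF spring-mass chain, very loosely
# bone -> (soft spring, "PDL-like") -> tooth. Solving det(K - lam*M) = 0
# reduces to a quadratic in lam = (2*pi*f)^2. All values are illustrative.
import math

def natural_frequencies_2dof(m1, m2, k1, k2):
    """Return the two natural frequencies (Hz), mass m1 grounded via k1,
    coupled to mass m2 via k2."""
    # Characteristic polynomial:
    # m1*m2*lam^2 - (m1*k2 + m2*(k1 + k2))*lam + k1*k2 = 0
    a = m1 * m2
    b = -(m1 * k2 + m2 * (k1 + k2))
    c = k1 * k2
    disc = math.sqrt(b * b - 4 * a * c)
    lams = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
    return [math.sqrt(lam) / (2 * math.pi) for lam in lams]
```

Softening the coupling spring (the stand-in for the PDL) lowers the fundamental frequency, qualitatively mirroring the declined NFs reported for the more complete models.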
Digital computer program for generating dynamic turbofan engine models (DIGTEM)
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Krosel, S. M.; Szuch, J. R.; Westerkamp, E. J.
1983-01-01
This report describes DIGTEM, a digital computer program that simulates two-spool, two-stream turbofan engines. The turbofan engine model in DIGTEM contains steady-state performance maps for all of the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. Altogether there are 16 state variables and state equations. DIGTEM features a backward-difference integration scheme for integrating stiff systems. It trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off-design points and iterates to a balanced engine condition. Transients can also be run. They are generated by defining controls as a function of time (open-loop control) in a user-written subroutine (TMRSP). DIGTEM has run on the IBM 370/3033 computer using implicit integration with time steps ranging from 1.0 msec to 1.0 sec. DIGTEM is generalized in the aerothermodynamic treatment of components.
Above the cloud computing orbital services distributed data model
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2014-05-01
Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to perform a single goal. The above the cloud computing (ATCC) concept aims to create ad-hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage and accessing of this data. This model includes appropriate identification features for this highly distributed environment. It also deals with business model constraints such as data ownership, retention and the rights of the storing craft to access, resell, transmit or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with unique constraints of the orbital environment and tags data with business model (contractual) obligation data.
Automatic distribution of vision-tasks on computing clusters
NASA Astrophysics Data System (ADS)
Müller, Thomas; Tran, Binh An; Knoll, Alois
2011-01-01
In this paper a consistent, efficient, and yet convenient system for parallel computer vision, and in fact also realtime actuator control, is proposed. The system implements the multi-agent paradigm and a blackboard information storage. This, in combination with a generic interface for hardware abstraction and integration of external software components, is set up on the basis of the message passing interface (MPI). The system allows for data- and task-parallel processing, and supports both synchronous communication strategies, where data exchange is triggered by events, and asynchronous strategies, where data is polled. Also, by duplication of processing units (agents), redundant processing is possible to achieve greater robustness. As the system automatically distributes the task units to available resources, and a monitoring concept allows for the combination of tasks and their composition into complex processes, efficient parallel vision / robotics applications can be developed quickly. Multiple vision based applications have already been implemented, including academic, research related fields and prototypes for industrial automation. For the scientific community the system has recently been launched open-source.
Computational spectroscopy using the Quantum ESPRESSO distribution (Invited)
NASA Astrophysics Data System (ADS)
Baroni, S.; Giannozzi, P.
2009-12-01
Quantum ESPRESSO (QE) [1,2] is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials. QE is freely available to researchers around the world under the terms of the GNU General Public License. In this talk I will introduce the QE distribution, with emphasis on some of its features that may appeal to the Earth Sciences and Mineralogy communities. I will focus on the determination of vibrational frequencies to be used for spectroscopic purposes, for the determination of soft modes leading to mechanical instabilities, and as ingredients for the simulation of thermal properties in the (quasi-)harmonic approximations. I will conclude with some recent developments which allow for the simulation of electronic (absorption and photo-emission) spectroscopies, using many-body and time-dependent density-functional perturbation theories. [1] P. Giannozzi et al., J. Phys.: Condens. Matter 21, 395502 (2009); http://dx.doi.org/10.1088/0953-8984/21/39/395502 [2] http://www.quantum-espresso.org
Reviews of computing technology: Fiber distributed data interface
Johnson, A.J.
1991-12-01
Fiber Distributed Data Interface, more commonly known as FDDI, is the name of the standard that describes a new local area network (LAN) technology for the '90s. This technology is based on fiber optics communications and, at a data transmission rate of 100 million bits per second (Mbps), provides a full order of magnitude improvement over previous LAN standards such as Ethernet and Token Ring. FDDI as a standard has been accepted by all major computer manufacturers and is a national standard as defined by the American National Standards Institute (ANSI). FDDI will become part of the US Government Open Systems Interconnection Profile (GOSIP) under Version 3 GOSIP and will become an international standard promoted by the International Standards Organization (ISO). It is important to note that there are no competing standards for high performance LANs, so FDDI acceptance is nearly universal. This technology report describes FDDI as a technology, looks at the applications of this technology, examines the current economics of using it, and describes activities and plans by the Information Resource Management (IRM) department to implement this technology at the Savannah River Site.
Parallelizing Sylvester-like operations on a distributed memory computer
Hu, D.Y.; Sorensen, D.C.
1994-12-31
Discretization of linear operators arising in applied mathematics often leads to matrices with the following structure: M(x) = (D ⊗ A + B ⊗ I_n + V)x, where x ∈ R^{mn}; B, D ∈ R^{n×n}; A ∈ R^{m×m}; and V ∈ R^{mn×mn}; both D and V are diagonal. For notational convenience, the authors assume that both A and B are symmetric. All the results throughout this paper can be easily extended to the cases with general A and B. The linear operator on R^{mn} defined above can be viewed as a generalization of the Sylvester operator: S(x) = (I_m ⊗ A + B ⊗ I_n)x. The authors therefore refer to it as a Sylvester-like operator. The schemes discussed in this paper therefore also apply to the Sylvester operator. In this paper, the authors present the SIMD scheme for parallelization of the Sylvester-like operator on a distributed memory computer. This scheme is designed to approach the best possible efficiency by avoiding unnecessary communication among processors.
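The Kronecker structure (with D and V diagonal) is what makes the operator cheap to apply without ever assembling an mn × mn matrix. The sequential sketch below illustrates that matrix-free idea only; it is not the authors' SIMD scheme, and the column-major vec ordering is an assumed convention:

```python
# Sketch: apply M(x) = (diag(d) ⊗ A + B ⊗ I_m + diag(v)) x matrix-free,
# using the identity (B ⊗ A) vec(X) = vec(A X B^T) with x = vec(X), X m-by-n.
# The vec-ordering convention here is an assumption for illustration.

def matmul(P, Q):
    """Plain dense matrix product on lists of lists."""
    rows, inner, cols = len(P), len(Q), len(Q[0])
    return [[sum(P[i][t] * Q[t][j] for t in range(inner)) for j in range(cols)]
            for i in range(rows)]

def sylvester_like(A, B, d, v, x):
    """Compute (diag(d) ⊗ A + B ⊗ I_m + diag(v)) x for x in R^{mn}."""
    m, n = len(A), len(B)
    # Unpack x into X (m x n): column j of X is x[j*m : (j+1)*m].
    X = [[x[j * m + i] for j in range(n)] for i in range(m)]
    AX = matmul(A, X)                      # block-diagonal term acts as A X columnwise
    Bt = [[B[j][i] for j in range(n)] for i in range(n)]
    XBt = matmul(X, Bt)                    # (B ⊗ I_m) x  ->  X B^T
    y = []
    for j in range(n):
        for i in range(m):
            k = j * m + i
            y.append(d[j] * AX[i][j] + XBt[i][j] + v[k] * x[k])
    return y
```

Each output entry combines one column of A X scaled by the diagonal of D, the corresponding entry of X Bᵀ, and the pointwise diagonal term, so only O(mn(m+n)) work and no Kronecker product is needed.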
NASA Technical Reports Server (NTRS)
Schilling, D. L.; Oh, S. J.; Thau, F.
1975-01-01
Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.
Applications of Computational Methods for Dynamic Stability and Control Derivatives
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Spence, Angela M.
2004-01-01
Initial steps in the application of a low-order panel method computational fluid dynamic (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions of some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.
Ah Min, Kyoung; Zhang, Xinyuan; Yu, Jing-yu; Rosania, Gus R.
2013-01-01
Quantitative structure-activity relationship (QSAR) studies and mechanistic mathematical modeling approaches have been independently employed for analyzing and predicting the transport and distribution of small molecule chemical agents in living organisms. Both of these computational approaches have been useful to interpret experiments measuring the transport properties of small molecule chemical agents, in vitro and in vivo. Nevertheless, mechanistic cell-based pharmacokinetic models have been especially useful to guide the design of experiments probing the molecular pathways underlying small molecule transport phenomena. Unlike QSAR models, mechanistic models can be integrated from microscopic to macroscopic levels, to analyze the spatiotemporal dynamics of small molecule chemical agents from intracellular organelles to whole organs, well beyond the experiments and training data sets upon which the models are based. Based on differential equations, mechanistic models can also be integrated with other differential equations-based systems biology models of biochemical networks or signaling pathways. Although the origin and evolution of mathematical modeling approaches aimed at predicting drug transport and distribution has occurred independently from systems biology, we propose that the incorporation of mechanistic cell-based computational models of drug transport and distribution into a systems biology modeling framework is a logical next-step for the advancement of systems pharmacology research. PMID:24218242
Local Community Mining on Distributed and Dynamic Networks From a Multiagent Perspective.
Bu, Zhan; Wu, Zhiang; Cao, Jie; Jiang, Yichuan
2016-04-01
Distributed and dynamic networks are ubiquitous in many real-world applications. Due to the huge-scale, decentralized, and dynamic characteristics, the global topological view is either too hard to obtain or even not available. So, most existing community detection methods working on the global view fail to handle such decentralized and dynamic large networks. In this paper, we propose a novel autonomy-oriented computing-based method for community mining (AOCCM) from the multiagent perspective in the distributed environment. In particular, AOCCM utilizes reactive agents to pick the neighborhood node with the largest structural similarity as the candidate node, and thus determine whether it should be added into local community based on the modularity gain. We further improve AOCCM to a more efficient incremental version named AOCCM-i for mining communities from dynamic networks. AOCCM and AOCCM-i can be easily expanded to detect both nonoverlapping and overlapping global community structures. Experimental results on real-life networks demonstrate that the proposed methods can reduce the computational cost by avoiding repeated structural similarity calculation and can still obtain the high-quality communities. PMID:26087512
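The local-expansion idea described above can be sketched in a few lines: from a seed node, repeatedly pick the most structurally similar frontier node and accept it only if a local quality score improves. The Jaccard similarity and the local-modularity measure used here are illustrative stand-ins, not the exact definitions used by AOCCM:

```python
# Toy sketch of greedy local community growth (illustrative measures only,
# not the AOCCM algorithm itself). Graph g: node -> list of neighbours.

def jaccard(g, u, v):
    """Structural similarity of u and v via closed neighbourhoods."""
    a, b = set(g[u]) | {u}, set(g[v]) | {v}
    return len(a & b) / len(a | b)

def local_modularity(g, com):
    """Internal edge fraction of the community's incident edges."""
    internal = sum(1 for u in com for v in g[u] if v in com) / 2
    boundary = sum(1 for u in com for v in g[u] if v not in com)
    return internal / (internal + boundary) if internal + boundary else 0.0

def grow_community(g, seed):
    com = {seed}
    while True:
        frontier = {v for u in com for v in g[u]} - com
        if not frontier:
            break
        # candidate: frontier node most similar to any community member
        cand = max(frontier, key=lambda v: max(jaccard(g, u, v) for u in com))
        if local_modularity(g, com | {cand}) > local_modularity(g, com):
            com.add(cand)
        else:
            break
    return com
```

On a graph made of two triangles joined by a single bridge edge, seeding at one triangle recovers exactly that triangle, since absorbing the bridge node would lower the local score.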
ERIC Educational Resources Information Center
Klaff, Vivian; Handler, Paul
Available on the University of Illinois PLATO IV Computer system, the Population Dynamic Group computer-aided instruction program for teaching population dynamics is described and explained. The computer-generated visual graphics enable fast and intuitive understanding of the dynamics of population and of the concepts and data of population. The…
Aggregation dynamics explain vegetation patch-size distributions.
Irvine, M A; Bull, J C; Keeling, M J
2016-04-01
Vegetation patch-size distributions have been an intense area of study for theoreticians and applied ecologists alike in recent years. Of particular interest is the seemingly ubiquitous nature of power-law patch-size distributions emerging in a number of diverse ecosystems. The leading explanation of the emergence of these power-laws is due to local facilitative mechanisms. There is also a common transition from power law to exponential distribution when a system is under global pressure, such as grazing or lack of rainfall. These phenomena require a simple mechanistic explanation. Here, we study vegetation patches from a spatially implicit, patch dynamic viewpoint. We show that under minimal assumptions a power-law patch-size distribution appears as a natural consequence of aggregation. A linear death term also leads to an exponential term in the distribution for any non-zero death rate. This work shows the origin of the breakdown of the power-law under increasing pressure and shows that in general, we expect to observe a power law with an exponential cutoff (rather than pure power laws). The estimated parameters of this distribution also provide insight into the underlying ecological mechanisms of aggregation and death. PMID:26742959
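The distribution family the authors argue for, a power law with an exponential cutoff, is easy to write down explicitly. The exponent and cutoff values below are arbitrary illustration choices, not fitted parameters from the paper:

```python
# Illustrative sketch: p(s) ∝ s^(-alpha) * exp(-s / s_c) over patch sizes
# s = 1..s_max. As s_c -> infinity the pure power law is recovered; a small
# s_c mimics a system under global pressure (grazing, low rainfall).
import math

def truncated_power_law(alpha, s_c, s_max):
    """Normalised patch-size distribution p(s) for s = 1..s_max."""
    w = [s ** -alpha * math.exp(-s / s_c) for s in range(1, s_max + 1)]
    z = sum(w)
    return [x / z for x in w]

p_free = truncated_power_law(1.5, 1e9, 1000)      # effectively pure power law
p_stressed = truncated_power_law(1.5, 10.0, 1000) # strong exponential cutoff
```

Comparing the two on a log-log plot would show the characteristic bend: identical small-patch behaviour, but the stressed distribution loses its heavy tail.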
Design & implementation of distributed spatial computing node based on WPS
NASA Astrophysics Data System (ADS)
Liu, Liping; Li, Guoqing; Xie, Jibo
2014-03-01
Currently, the research work of SIG (Spatial Information Grid) technology mostly emphasizes on the spatial data sharing in grid environment, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in grid environment, this paper does a systematical research of the key technologies to construct Spatial Computing Node based on the WPS (Web Processing Service) specification by OGC (Open Geospatial Consortium). And a framework of Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype of Spatial Computing Node is implemented and the relevant verification work under the environment is completed.
Molecular Dynamics, Monte Carlo Simulations, and Langevin Dynamics: A Computational Review
Paquet, Eric; Viktor, Herna L.
2015-01-01
Macromolecular structures, such as neuraminidases, hemagglutinins, and monoclonal antibodies, are not rigid entities. Rather, they are characterised by their flexibility, which is the result of the interaction and collective motion of their constituent atoms. This conformational diversity has a significant impact on their physicochemical and biological properties. Among these are their structural stability, the transport of ions through the M2 channel, drug resistance, macromolecular docking, binding energy, and rational epitope design. To assess these properties and to calculate the associated thermodynamical observables, the conformational space must be efficiently sampled and the dynamic of the constituent atoms must be simulated. This paper presents algorithms and techniques that address the abovementioned issues. To this end, a computational review of molecular dynamics, Monte Carlo simulations, Langevin dynamics, and free energy calculation is presented. The exposition is made from first principles to promote a better understanding of the potentialities, limitations, applications, and interrelations of these computational methods. PMID:25785262
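As a minimal concrete instance of one of the methods the review covers, here is an Euler-Maruyama integration of overdamped Langevin dynamics in a harmonic well; this is a hedged sketch with illustrative parameter values, not code from the review:

```python
# Overdamped Langevin dynamics, Euler-Maruyama discretisation:
#   x_{k+1} = x_k - dU/dx * dt + sqrt(2*kT*dt) * xi,  xi ~ N(0, 1)
# For U(x) = 0.5*k*x^2 the sampled equilibrium variance should approach kT/k.
import math
import random

def langevin_harmonic(k=1.0, kT=1.0, dt=0.01, n_steps=200_000, seed=42):
    rng = random.Random(seed)
    x, samples = 0.0, []
    noise = math.sqrt(2.0 * kT * dt)
    for step in range(n_steps):
        force = -k * x                     # -dU/dx for the harmonic well
        x += force * dt + noise * rng.gauss(0.0, 1.0)
        if step > n_steps // 10:           # discard a burn-in transient
            samples.append(x)
    return samples

samples = langevin_harmonic()
var = sum(s * s for s in samples) / len(samples)
# For this potential the exact equilibrium variance is kT/k.
```

The same loop, with an inertial term and a thermostat, is the skeleton of the full Langevin and molecular dynamics schemes the review discusses; Monte Carlo methods instead sample the Boltzmann distribution directly.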
Theoretical and computational dynamics of a compressible flow
NASA Technical Reports Server (NTRS)
Pai, Shih-I; Luo, Shijun
1991-01-01
An introduction to the theoretical and computational fluid dynamics of a compressible fluid is presented. The general topics addressed include: thermodynamics and physical properties of compressible fluids; 1D flow of an inviscid compressible fluid; shock waves; fundamental equations of the dynamics of a compressible inviscid non-heat-conducting and radiating fluid; method of small perturbations, linearized theory; 2D subsonic steady potential flow; hodograph and rheograph methods; exact solutions of 2D isentropic steady flow equations; 2D steady transonic and hypersonic flows; method of characteristics; linearized theory of 3D potential flow; nonlinear theory of 3D compressible flow; anisentropic (rotational) flow of an inviscid compressible fluid; electromagnetogasdynamics; multiphase flows; and flows of a compressible fluid with transport phenomena.
Computational modeling approaches to the dynamics of oncolytic viruses.
Wodarz, Dominik
2016-05-01
Replicating oncolytic viruses represent a promising treatment approach against cancer, specifically targeting the tumor cells. Significant progress has been made through experimental and clinical studies. Besides these approaches, however, mathematical models can be useful when analyzing the dynamics of virus spread through tumors, because the interactions between a growing tumor and a replicating virus are complex and nonlinear, making them difficult to understand by experimentation alone. Mathematical models have provided significant biological insight into the field of virus dynamics, and similar approaches can be adopted to study oncolytic viruses. The review discusses this approach and highlights some of the challenges that need to be overcome in order to build mathematical and computational models that are clinically predictive. WIREs Syst Biol Med 2016, 8:242-252. doi: 10.1002/wsbm.1332 For further resources related to this article, please visit the WIREs website. PMID:27001049
SciDAC Advances and Applications in Computational Beam Dynamics
Ryne, R.; Abell, D.; Adelmann, A.; Amundson, J.; Bohn, C.; Cary, J.; Colella, P.; Dechow, D.; Decyk, V.; Dragt, A.; Gerber, R.; Habib, S.; Higdon, D.; Katsouleas, T.; Ma, K.-L.; McCorquodale, P.; Mihalcea, D.; Mitchell, C.; Mori, W.; Mottershead, C.T.; Neri, F.; Pogorelov, I.; Qiang, J.; Samulyak, R.; Serafini, D.; Shalf, J.; Siegerist, C.; Spentzouris, P.; Stoltz, P.; Terzic, B.; Venturini, M.; Walstrom, P.
2005-06-26
SciDAC has had a major impact on computational beam dynamics and the design of particle accelerators. Particle accelerators--which account for half of the facilities in the DOE Office of Science Facilities for the Future of Science 20 Year Outlook--are crucial for US scientific, industrial, and economic competitiveness. Thanks to SciDAC, accelerator design calculations that were once thought impossible are now carried out routinely, and new challenging and important calculations are within reach. SciDAC accelerator modeling codes are being used to get the most science out of existing facilities, to produce optimal designs for future facilities, and to explore advanced accelerator concepts that may hold the key to qualitatively new ways of accelerating charged particle beams. In this poster we present highlights from the SciDAC Accelerator Science and Technology (AST) project Beam Dynamics focus area in regard to algorithm development, software development, and applications.
Emotions are emergent processes: they require a dynamic computational architecture
Scherer, Klaus R.
2009-01-01
Emotion is a cultural and psychobiological adaptation mechanism which allows each individual to react flexibly and dynamically to environmental contingencies. From this claim flows a description of the elements theoretically needed to construct a virtual agent with the ability to display human-like emotions and to respond appropriately to human emotional expression. This article offers a brief survey of the desirable features of emotion theories that make them ideal blueprints for agent models. In particular, the component process model of emotion is described, a theory which postulates emotion-antecedent appraisal on different levels of processing that drive response system patterning predictions. In conclusion, investing seriously in emergent computational modelling of emotion using a nonlinear dynamic systems approach is suggested. PMID:19884141
Distributed Energy Resources and Dynamic Microgrid: An Integrated Assessment
NASA Astrophysics Data System (ADS)
Shang, Duo Rick
The overall goal of this thesis is to improve understanding of the benefit of DERs to both utilities and electricity end-users when integrated into the power distribution system. To achieve this goal, a series of two studies was conducted to assess the value of DERs when integrated with new power paradigms. First, the arbitrage value of DERs was examined in markets with time-variant electricity pricing rates (e.g., time of use, real time pricing) under a smart grid distribution paradigm. This study uses a stochastic optimization model to estimate the potential profit from electricity price arbitrage over a five-year period. The optimization process involves two types of PHEVs (PHEV-10 and PHEV-40) under three scenarios with different assumptions about technology performance, the electricity market, and PHEV owner types. The simulation results indicate that expected arbitrage profit is not a viable option to engage PHEVs in dispatching and in providing ancillary services without more favorable policy and PHEV battery technologies; a subsidy, a change in electricity tariff, or both are needed. Second, the concept of a dynamic microgrid was examined as a measure to improve distribution resilience, and the prices of this emerging service were estimated. An economic load dispatch (ELD) model is developed to estimate the market-clearing price in a hypothetical community with a single-bid-auction electricity market. The results show that the electricity market clearing price on the dynamic microgrid is predominantly decided by the power output and cost of electricity of each type of DG. In circumstances where CHP is the only source, the electricity market clearing price in the island is even cheaper than the on-grid electricity price at normal times. Integration of PHEVs in the dynamic microgrid will increase electricity market clearing prices. This demonstrates that the dynamic microgrid is an economically viable alternative to enhance grid resilience.
Toward a Dynamically Reconfigurable Computing and Communication System for Small Spacecraft
NASA Technical Reports Server (NTRS)
Kifle, Muli; Andro, Monty; Tran, Quang K.; Fujikawa, Gene; Chu, Pong P.
2003-01-01
Future science missions will require the use of multiple spacecraft with multiple sensor nodes autonomously responding and adapting to a dynamically changing space environment. The acquisition of random scientific events will require rapidly changing network topologies, distributed processing power, and a dynamic resource management strategy. Optimum utilization and configuration of spacecraft communications and navigation resources will be critical in meeting the demand of these stringent mission requirements. There are two important trends to follow with respect to NASA's (National Aeronautics and Space Administration) future scientific missions: the use of multiple satellite systems and the development of an integrated space communications network. Reconfigurable computing and communication systems may enable versatile adaptation of a spacecraft system's resources by dynamic allocation of the processor hardware to perform new operations or to maintain functionality due to malfunctions or hardware faults. Advancements in FPGA (Field Programmable Gate Array) technology make it possible to incorporate major communication and network functionalities in FPGA chips and provide the basis for a dynamically reconfigurable communication system. Advantages of higher computation speeds and accuracy are envisioned with tremendous hardware flexibility to ensure maximum survivability of future science mission spacecraft. This paper discusses the requirements, enabling technologies, and challenges associated with dynamically reconfigurable space communications systems.
Immersive visualization for enhanced computational fluid dynamics analysis.
Quam, David J; Gundert, Timothy J; Ellwein, Laura; Larkee, Christopher E; Hayden, Paul; Migrino, Raymond Q; Otake, Hiromasa; LaDisa, John F
2015-03-01
Modern biomedical computer simulations produce spatiotemporal results that are often viewed at a single point in time on standard 2D displays. An immersive visualization environment (IVE) with 3D stereoscopic capability can mitigate some shortcomings of 2D displays via improved depth cues and active movement to further appreciate the spatial localization of imaging data with temporal computational fluid dynamics (CFD) results. We present a semi-automatic workflow for the import, processing, rendering, and stereoscopic visualization of high resolution, patient-specific imaging data, and CFD results in an IVE. Versatility of the workflow is highlighted with current clinical sequelae known to be influenced by adverse hemodynamics to illustrate potential clinical utility. PMID:25378201
Computational strategies in the dynamic simulation of constrained flexible MBS
NASA Technical Reports Server (NTRS)
Amirouche, F. M. L.; Xie, M.
1993-01-01
This research focuses on the computational dynamics of flexible constrained multibody systems. At first a recursive mapping formulation of the kinematical expressions in a minimum dimension as well as the matrix representation of the equations of motion are presented. The method employs Kane's equation, FEM, and concepts of continuum mechanics. The generalized active forces are extended to include the effects of high temperature conditions, such as creep, thermal stress, and elastic-plastic deformation. The time variant constraint relations for rolling/contact conditions between two flexible bodies are also studied. The constraints for validation of MBS simulation of gear meshing contact using a modified Timoshenko beam theory are also presented. The last part deals with minimization of vibration/deformation of the elastic beam in multibody systems making use of time variant boundary conditions. The above methodologies and computational procedures developed are being implemented in a program called DYAMUS.
Computational Fluid Dynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Kutler, Paul
1994-01-01
Computational fluid dynamics (CFD) is beginning to play a major role in the aircraft industry of the United States because of the realization that CFD can be a new and effective design tool and thus could provide a company with a competitive advantage. It is also playing a significant role in research institutions, both governmental and academic, as a tool for researching new fluid physics, as well as supplementing and complementing experimental testing. In this presentation, some of the progress made to date in CFD at NASA Ames will be reviewed. The presentation addresses the status of CFD in terms of methods, examples of CFD solutions, and computer technology. In addition, the role CFD will play in supporting the revolutionary goals set forth by the Aeronautical Policy Review Committee established by the Office of Science and Technology Policy is noted. The need for validated CFD tools is also briefly discussed.
Parallelization of implicit finite difference schemes in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel
1990-01-01
Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than the explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second and higher order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult and non-intuitive. The parallelization of the implicit schemes that are used for solving the Euler and the thin layer Navier-Stokes equations and that require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices is discussed. Three-dimensional cases are emphasized and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and the computation aspects of these methods is presented. The effect of the boundary conditions on the parallel schemes is also discussed.
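The global recurrences described in this abstract can be made concrete with the scalar analogue of the block tri-diagonal systems it mentions: the Thomas algorithm. The sketch below is illustrative only (plain Python, not the paper's implementation); each elimination step depends on the previous one, which is exactly the sequential data dependency that the partitioning and scheduling schemes must work around.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i].

    Illustrates the second-order recurrence (forward elimination, then back
    substitution) that creates the global data dependency discussed above.
    """
    n = len(d)
    cp = [0.0] * n  # modified upper-diagonal coefficients
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]   # each step depends on the previous one
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):    # back substitution, again sequential
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

A naive per-row parallelization is impossible here because both loops are recurrences; this is why the paper resorts to partitioning and scheduling strategies instead.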
Computational methods. [Calculation of dynamic loading to offshore platforms
Maeda, H. (Inst. of Industrial Science)
1993-02-01
With regard to computational methods for hydrodynamic forces, the identification of marine hydrodynamics in offshore technology is discussed first. General computational methods, the state of the art, and uncertainties in flow problems in offshore technology are then presented, with developed, developing, and undeveloped problems categorized, and future work follows. Marine hydrodynamics consists of water-surface and underwater fluid dynamics. It covers not only hydrodynamics proper but also aerodynamics such as wind load or current-wave-wind interaction, hydrodynamics such as cavitation, underwater noise, multi-phase flow such as two-phase flow in pipes or air bubbles in water or surface and internal waves, and magneto-hydrodynamics such as propulsion due to superconductivity. Among them, two key words are focused on as the identification of marine hydrodynamics in offshore technology: free surface and vortex shedding.
Data Point Averaging for Computational Fluid Dynamics Data
NASA Technical Reports Server (NTRS)
Norman, Jr., David (Inventor)
2016-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
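The averaging step described in this patent abstract can be sketched as follows. The function name and data layout are illustrative assumptions, not the patented implementation: each sub-area gets one value, the mean of the CFD data at the points falling inside it.

```python
def average_by_subarea(points, subareas):
    """Average CFD point data per sub-area, as the abstract describes.

    points: list of (x, y, value) tuples from the CFD solution.
    subareas: dict mapping a sub-area name to a membership predicate on (x, y).
    Returns one averaged value per sub-area (None if a sub-area is empty).
    """
    results = {}
    for name, contains in subareas.items():
        # determine the sub-set of points belonging to this sub-area
        vals = [v for (x, y, v) in points if contains(x, y)]
        results[name] = sum(vals) / len(vals) if vals else None
    return results
```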
Use of computational fluid dynamics in respiratory medicine.
Fernández Tena, Ana; Casan Clarà, Pere
2015-06-01
Computational Fluid Dynamics (CFD) is a computer-based tool for simulating fluid movement. The main advantages of CFD over other fluid mechanics studies include: substantial savings in time and cost, the analysis of systems or conditions that are very difficult to simulate experimentally (as is the case of the airways), and a practically unlimited level of detail. We used the Ansys-Fluent CFD program to develop a conducting airway model to simulate different inspiratory flow rates and the deposition of inhaled particles of varying diameters, obtaining results consistent with those reported in the literature using other procedures. We hope this approach will enable clinicians to further individualize the treatment of different respiratory diseases. PMID:25618456
Data Point Averaging for Computational Fluid Dynamics Data
NASA Technical Reports Server (NTRS)
Norman, David, Jr. (Inventor)
2014-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
The very local Hubble flow: Computer simulations of dynamical history
NASA Astrophysics Data System (ADS)
Chernin, A. D.; Karachentsev, I. D.; Valtonen, M. J.; Dolgachev, V. P.; Domozhilova, L. M.; Makarov, D. I.
2004-02-01
The phenomenon of the very local (≤3 Mpc) Hubble flow is studied on the basis of the data of recent precision observations. A set of computer simulations is performed to trace the trajectories of the flow galaxies back in time to the epoch of the formation of the Local Group. It is found that the ``initial conditions'' of the flow are drastically different from the linear velocity-distance relation. The simulations enable one also to recognize the major trends of the flow evolution and identify the dynamical role of universal antigravity produced by the cosmic vacuum.
Executive Summary: Special Section on Credible Computational Fluid Dynamics Simulations
NASA Technical Reports Server (NTRS)
Mehta, Unmeel B.
1998-01-01
This summary presents the motivation for the Special Section on the credibility of computational fluid dynamics (CFD) simulations, its objective, its background and context, its content, and its major conclusions. Verification and validation (V&V) are the processes for establishing the credibility of CFD simulations. Validation assesses whether correct things are performed and verification assesses whether they are performed correctly. Various aspects of V&V are discussed. Progress is made in verification of simulation models. Considerable effort is still needed for developing a systematic validation method that can assess the credibility of simulated reality.
Computer studies of multiple-quantum spin dynamics
Murdoch, J.B.
1982-11-01
The excitation and detection of multiple-quantum (MQ) transitions in Fourier transform NMR spectroscopy is an interesting problem in the quantum mechanical dynamics of spin systems as well as an important new technique for investigation of molecular structure. In particular, multiple-quantum spectroscopy can be used to simplify overly complex spectra or to separate the various interactions between a nucleus and its environment. The emphasis of this work is on computer simulation of spin-system evolution to better relate theory and experiment.
Continuing Validation of Computational Fluid Dynamics for Supersonic Retropropulsion
NASA Technical Reports Server (NTRS)
Schauerhamer, Daniel Guy; Trumble, Kerry A.; Kleb, Bil; Carlson, Jan-Renee; Edquist, Karl T.
2011-01-01
A large step in the validation of Computational Fluid Dynamics (CFD) for Supersonic Retropropulsion (SRP) is shown through the comparison of three Navier-Stokes solvers (DPLR, FUN3D, and OVERFLOW) and wind tunnel test results. The test was designed specifically for CFD validation and was conducted in the Langley supersonic 4 x 4 Unitary Plan Wind Tunnel and includes variations in the number of nozzles, Mach and Reynolds numbers, thrust coefficient, and angles of orientation. Code-to-code and code-to-test comparisons are encouraging and possible error sources are discussed.
Computer Modeling of Real-Time Dynamic Lighting
NASA Technical Reports Server (NTRS)
Maida, James C.; Pace, J.; Novak, J.; Russo, Dane M. (Technical Monitor)
2000-01-01
Space Station tasks involve procedures that are very complex and highly dependent on the availability of visual information. In many situations, cameras are used as tools to help overcome the visual and physical restrictions associated with space flight. However, these cameras are affected by the dynamic lighting conditions of space. Training for these conditions is necessary. The current project builds on the findings of an earlier NRA-funded project, which revealed improved performance by humans when trained with computer graphics and lighting effects such as shadows and glare.
New Challenges in Visualization of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Gerald-Yamasaki, Michael; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The development of visualization systems for analyzing computational fluid dynamics data has been driven by the increasing size and complexity of the data. New extensions of the system domain into analysis of data from multiple sources, parameter space studies, and multidisciplinary studies in support of integrated aeronautical design systems provide new challenges for the visualization system developer. Recent work at NASA Ames Research Center in visualization systems, automatic flow feature detection, unsteady flow visualization techniques, and a new area, data exploitation, will be discussed in the context of NASA information technology initiatives.
Computational complexity of ecological and evolutionary spatial dynamics
Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A.
2015-01-01
There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP). PMID:26644569
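For contrast with the spatial results, the well-mixed (non-spatial) version of the invader-takeover question does have a simple closed form: the fixation probability of a single mutant in a Moran process. The sketch below is a standard textbook formula, not taken from this paper; the paper's point is precisely that spatial variants can fail to admit such simple equations.

```python
def moran_fixation_probability(r, n):
    """Probability that one mutant with relative fitness r takes over a
    well-mixed resident population of size n (standard Moran process).

    This is the non-spatial baseline; spatial ecological dynamics generally
    lack such a closed form, which is the complexity result discussed above.
    """
    if r == 1.0:
        return 1.0 / n                      # neutral mutant
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r ** n)
```

For example, a neutral mutant fixes with probability 1/n, while a strongly advantageous mutant (r = 2) in a large population fixes with probability close to 1 - 1/r = 0.5.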
PArallel Reacting Multiphase FLOw Computational Fluid Dynamic Analysis
2002-06-01
PARMFLO is a parallel multiphase reacting flow computational fluid dynamics (CFD) code. It can perform steady or unsteady simulations in three space dimensions. It is intended for use in engineering CFD analysis of industrial flow system components. Its parallel processing capabilities allow it to be applied to problems that use at least an order of magnitude more computational cells than the number that can be used on a typical single processor workstation (about 10(exp 6) cells in parallel processing mode versus about 10(exp 5) cells in serial processing mode). Alternately, by spreading the work of a CFD problem that could be run on a single workstation over a group of computers on a network, it can bring the runtime down by an order of magnitude or more (typically from many days to less than one day). The software was implemented using the industry standard Message-Passing Interface (MPI) and domain decomposition in one spatial direction. The phases of a flow problem may include an ideal gas mixture with an arbitrary number of chemical species, and dispersed droplet and particle phases. Regions of porous media may also be included within the domain. The porous media may be packed beds, foams, or monolith catalyst supports. With these features, the code is especially suited to analysis of mixing of reactants in the inlet chamber of catalytic reactors coupled to computation of product yields that result from the flow of the mixture through the catalyst coated support structure.
PArallel Reacting Multiphase FLOw Computational Fluid Dynamic Analysis
Lottes, Steven A.
2002-06-01
PARMFLO is a parallel multiphase reacting flow computational fluid dynamics (CFD) code. It can perform steady or unsteady simulations in three space dimensions. It is intended for use in engineering CFD analysis of industrial flow system components. Its parallel processing capabilities allow it to be applied to problems that use at least an order of magnitude more computational cells than the number that can be used on a typical single processor workstation (about 10(exp 6) cells in parallel processing mode versus about 10(exp 5) cells in serial processing mode). Alternately, by spreading the work of a CFD problem that could be run on a single workstation over a group of computers on a network, it can bring the runtime down by an order of magnitude or more (typically from many days to less than one day). The software was implemented using the industry standard Message-Passing Interface (MPI) and domain decomposition in one spatial direction. The phases of a flow problem may include an ideal gas mixture with an arbitrary number of chemical species, and dispersed droplet and particle phases. Regions of porous media may also be included within the domain. The porous media may be packed beds, foams, or monolith catalyst supports. With these features, the code is especially suited to analysis of mixing of reactants in the inlet chamber of catalytic reactors coupled to computation of product yields that result from the flow of the mixture through the catalyst coated support structure.
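The one-directional domain decomposition mentioned in the abstract can be illustrated with a generic index-partitioning sketch. This is a common MPI pattern, not PARMFLO's actual code: the cell range along one spatial direction is split into near-equal contiguous slabs, one per rank.

```python
def decompose_1d(n_cells, n_ranks):
    """Split n_cells contiguous cells in one spatial direction among n_ranks.

    Returns a (start, end) half-open range per rank, with the remainder
    spread over the lowest-numbered ranks so slab sizes differ by at most 1.
    A generic sketch of MPI-style 1D domain decomposition.
    """
    base, extra = divmod(n_cells, n_ranks)
    ranges, start = [], 0
    for r in range(n_ranks):
        size = base + (1 if r < extra else 0)  # spread remainder over low ranks
        ranges.append((start, start + size))
        start += size
    return ranges
```

In a real MPI run each rank would compute only its own slab and exchange ghost-cell data with its two neighbors, which matches the nearest-neighbor communication structure typical of these CFD codes.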
Computational simulation of hematocrit effects on arterial gas embolism dynamics
Mukundakrishnan, Karthik; Ayyaswamy, Portonovo S.; Eckmann, David M.
2012-01-01
Background Recent computational investigations have shed light into the various hydrodynamic mechanisms at play during arterial gas embolism that may result in endothelial cell (EC) injury. Other recent studies have suggested that variations in hematocrit level may play an important role in determining the severity of neurological complications due to decompression sickness associated with gas embolism. Methods Towards developing a comprehensive picture, we have computationally modeled the effect of hematocrit variations on the motion of a nearly occluding gas bubble in arterial blood vessels of various sizes. The computational methodology is based on an axisymmetric finite difference immersed boundary numerical method to precisely track the dynamics of the blood-bubble interface. Hematocrit variations are taken to be in the range 0.2–0.6. The chosen blood vessel sizes correspond to small arteries, and small and large arterioles in normal humans. Results Relevant hydrodynamic interactions between the gas bubble and EC-lined vessel lumen have been characterized and quantified as a function of hematocrit levels. In particular, the variations in shear stress, spatial and temporal shear stress gradients, and the gap between bubble and vascular endothelium surfaces that contribute to EC injury have been computed. Discussion The results suggest that in small arteries, the deleterious hydrodynamic effects of the gas embolism on EC-lined cell wall are significantly amplified as the hematocrit levels increase. However, such pronounced variations with hematocrit levels are not observed in the arterioles. PMID:22303587
Computational complexity of ecological and evolutionary spatial dynamics.
Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A
2015-12-22
There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP). PMID:26644569
NASA Technical Reports Server (NTRS)
Hardman, R. R.; Mahan, J. R.; Smith, M. H.; Gelhausen, P. A.; Van Dalsem, W. R.
1991-01-01
The need for a validation technique for computational fluid dynamics (CFD) codes in STOVL applications has led to research efforts to apply infrared thermal imaging techniques to visualize gaseous flow fields. Specifically, a heated, free-jet test facility was constructed. The gaseous flow field of the jet exhaust was characterized using an infrared imaging technique in the 2 to 5.6 micron wavelength band as well as conventional pitot tube and thermocouple methods. These infrared images are compared to computer-generated images using the equations of radiative exchange based on the temperature distribution in the jet exhaust measured with the thermocouple traverses. Temperature and velocity measurement techniques, infrared imaging, and the computer model of the infrared imaging technique are presented and discussed. From the study, it is concluded that infrared imaging techniques coupled with the radiative exchange equations applied to CFD models are a valid method to qualitatively verify CFD codes used in STOVL applications.
Mackiewicz, Dorota; de Oliveira, Paulo Murilo Castro; Moss de Oliveira, Suzana; Cebrat, Stanisław
2013-01-01
Recombination is the main cause of genetic diversity. Thus, errors in this process can lead to chromosomal abnormalities. Recombination events are confined to narrow chromosome regions called hotspots in which characteristic DNA motifs are found. Genomic analyses have shown that both recombination hotspots and DNA motifs are distributed unevenly along human chromosomes and are much more frequent in the subtelomeric regions of chromosomes than in their central parts. Clusters of motifs roughly follow the distribution of recombination hotspots whereas single motifs show a negative correlation with the hotspot distribution. To model the phenomena related to recombination, we carried out computer Monte Carlo simulations of genome evolution. Computer simulations generated uneven distribution of hotspots with their domination in the subtelomeric regions of chromosomes. They also revealed that purifying selection eliminating defective alleles is strong enough to cause such hotspot distribution. After sufficiently long time of simulations, the structure of chromosomes reached a dynamic equilibrium, in which number and global distribution of both hotspots and defective alleles remained statistically unchanged, while their precise positions were shifted. This resembles the dynamic structure of human and chimpanzee genomes, where hotspots change their exact locations but the global distributions of recombination events are very similar. PMID:23776462
Analytical formulae for computing dominance from species-abundance distributions.
Fung, Tak; Villain, Laura; Chisholm, Ryan A
2015-12-01
The evenness of an ecological community affects ecosystem structure, functioning and stability, and has implications for biodiversity conservation. In uneven communities, most species are rare while a few dominant species drive ecosystem-level properties. In even communities, dominance is lower, with possibly many species playing key ecological roles. The dominance aspect of evenness can be measured as a decreasing function of the proportion of species required to make up a fixed fraction (e.g., half) of individuals in a community. Here we sought general rules about dominance in ecological communities by linking dominance mathematically to the parameters of common theoretical species-abundance distributions (SADs). We found that if a community's SAD was log-series or lognormal, then dominance was almost inevitably high, with fewer than 40% of species required to account for 90% of all individuals. Dominance for communities with an exponential SAD was lower but still typically high, with fewer than 40% of species required to account for 70% of all individuals. In contrast, communities with a gamma SAD only exhibited high dominance when the average species abundance was below a threshold of approximately 100. Furthermore, we showed that exact values of dominance were highly scale-dependent, exhibiting non-linear trends with changing average species abundance. We also applied our formulae to SADs derived from a mechanistic community model to demonstrate how dominance can increase with environmental variance. Overall, our study provides a rigorous basis for theoretical explorations of the dynamics of dominance in ecological communities, and how this affects ecosystem functioning and stability. PMID:26409166
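The dominance measure described above, a function of the proportion of species needed to account for a fixed fraction of all individuals, can be computed directly from an abundance list. A minimal sketch, with names chosen for illustration (the paper works with analytical formulae for theoretical SADs rather than this empirical calculation):

```python
def species_proportion_for_fraction(abundances, fraction=0.5):
    """Proportion of species (ranked from most to least abundant) needed to
    account for `fraction` of all individuals in the community.

    Dominance is a decreasing function of this proportion: a small value
    means a few species dominate, a large value means an even community.
    """
    ranked = sorted(abundances, reverse=True)
    total = sum(ranked)
    running, k = 0, 0
    for a in ranked:
        running += a
        k += 1
        if running >= fraction * total:
            break
    return k / len(ranked)
```

For instance, in a highly uneven community one dominant species may cover half of all individuals, while in a perfectly even community half of the species are needed.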
Evaluation of DEC's GIGAswitch for distributed parallel computing
Chen, H.; Hutchins, J.; Brandt, J.
1993-10-01
One of Sandia's research efforts is to reduce the end-to-end communication delay in a parallel-distributed computing environment. GIGAswitch is DEC's implementation of a gigabit local area network based on switched FDDI technology. Using the GIGAswitch, the authors intend to minimize the medium access latency suffered by shared-medium FDDI technology. Experimental results show that the GIGAswitch adds 16.5 microseconds of switching and bridging delay to an end-to-end communication. Although the added latency causes a 1.8% throughput degradation and a 5% line efficiency degradation, the availability of dedicated bandwidth is much more than what is available to a workstation on a shared medium. For example, ten directly connected workstations each would have a dedicated bandwidth of 95 Mbps, but if they were sharing the FDDI bandwidth, each would have 10% of the total bandwidth, i.e., less than 10 Mbps. In addition, they have found that when there is no output port contention, the switch's aggregate bandwidth will scale up to multiples of its port bandwidth. However, with output port contention, the throughput and latency performance suffered significantly. Their mathematical and simulation models indicate that the GIGAswitch line efficiency could be as low as 63% when there are nine input ports contending for the same output port. The data indicate that the delay introduced by contention at the server workstation is 50 times that introduced by the GIGAswitch. The authors conclude that the GIGAswitch meets the performance requirements of today's high-end workstations and that the switched FDDI technology provides an alternative that utilizes existing workstation interfaces while increasing the aggregate bandwidth. However, because the speed of workstations is increasing by a factor of 2 every 1.5 years, the switched FDDI technology is only good as an interim solution.
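The shared- versus switched-bandwidth arithmetic in the abstract can be captured in a one-line model. The 100 Mbps nominal FDDI rate is an assumption consistent with the figures quoted (95 Mbps usable per dedicated port, 10% of the total per station on a shared ring):

```python
FDDI_MBPS = 100.0  # nominal FDDI ring bandwidth (assumed round figure)

def per_station_mbps(n_stations, switched, port_mbps=95.0):
    """Back-of-the-envelope comparison from the abstract: a switched port
    gives each station its full port bandwidth, while a shared medium
    divides the nominal ring bandwidth among all stations."""
    return port_mbps if switched else FDDI_MBPS / n_stations
```

With ten stations this reproduces the abstract's example: 95 Mbps each when switched, about 10 Mbps each on the shared medium.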
NASA Astrophysics Data System (ADS)
Murphy, Shane; Scala, Antonio; Lorito, Stefano; Herrero, Andre; Festa, Gaetano; Nielsen, Stefan; Trasatti, Elisa; Tonini, Roberto; Romano, Fabrizio; Molinari, Irene
2016-04-01
Stochastic slip modelling based on general scaling features with uniform slip probability over the fault plane is commonly employed in tsunami and seismic hazard assessment. However, dynamic rupture effects driven by specific fault geometry and frictional conditions can potentially control the slip probability. Unfortunately, dynamic simulations can be computationally intensive, preventing their extensive use for hazard analysis. The aim of this study is to produce a computationally efficient stochastic model that incorporates slip features observed in dynamic simulations. Dynamic rupture simulations are performed along a transect representing an average along-depth profile on the Tohoku subduction interface. The surrounding media, effective normal stress and friction law are simplified. Uncertainty in the nucleation location and pre-stress distribution is accounted for by using randomly located nucleation patches and stochastic pre-stress distributions for 500 simulations. The 1D slip distributions are converted to moment magnitudes on the fault plane using empirical scaling laws, with the ensemble producing a magnitude range of 7.8-9.6. To measure the systematic spatial slip variation and its dependence on earthquake magnitude, we introduce the concept of the Slip Probability density Function (SPF). We find that while the stochastic SPF is magnitude invariant, the dynamically derived SPF is magnitude-dependent and shows pronounced slip amplification near the surface for M > 8.6 events. To incorporate these dynamic features in the stochastic source models, we sub-divide the dynamically derived SPFs into 0.2 magnitude bins and compare them with the stochastic SPF in order to generate a depth- and magnitude-dependent transfer function. Applying this function to the traditional stochastic slip distribution allows for an approximated but efficient incorporation of regionally specific dynamic features in a modified source model, to be used specifically when a significant
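The transfer-function idea described in this abstract can be sketched as follows. All arrays, the depth grid, and the exponential near-surface amplification are hypothetical stand-ins chosen for illustration, not values from the study:

```python
import numpy as np

# Depth- and magnitude-dependent transfer function: ratio of a dynamically
# derived SPF to a magnitude-invariant (uniform) stochastic SPF, applied
# multiplicatively to a stochastic slip profile.
depth = np.linspace(0.0, 50.0, 11)                       # km along dip (toy grid)
spf_stochastic = np.full_like(depth, 1.0 / depth.size)   # uniform SPF

# Toy dynamic SPF with near-surface slip amplification, as the abstract
# reports for M > 8.6 events.
spf_dynamic = np.exp(-depth / 20.0)
spf_dynamic /= spf_dynamic.sum()

transfer = spf_dynamic / spf_stochastic                  # transfer function

# Modify a stochastic slip realization and renormalize so the total slip
# (a proxy for seismic moment here) is preserved.
slip_stochastic = np.random.default_rng(0).lognormal(size=depth.size)
slip_modified = slip_stochastic * transfer
slip_modified *= slip_stochastic.sum() / slip_modified.sum()
```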
A dynamic computer model of a kicking well
Nickens, H.V.
1987-06-01
The dynamic effects of variable pump flow rate, formation-influx distribution, blowout-preventer (BOP) and choke closure, choke adjustments, and well stabilization are accounted for in the analysis of a kicking well by solution of the appropriate mass- and momentum-balance equations. In the gas/liquid regions, the system is modeled as separated flow. The mass-balance equations are solved for the gas and liquid separately, but the momentum equation is solved for the mixture. The model predicts the detailed flow and pressure response of the well at all times and wellbore locations during the kick. The model is currently capable of simulating a single kick in a vertical hole (surface or subsea) with water-based mud and the bit on bottom. The driller's method (DM), the wait-and-weight (WW) method, or dynamic kill-control procedures for either a "perfect controller" (constant bottomhole pressure (BHP)) or any of several choke-control procedures may be simulated.
NASA Technical Reports Server (NTRS)
Norby, W. P.; Ladd, J. A.; Yuhas, A. J.
1996-01-01
A procedure has been developed for predicting peak dynamic inlet distortion. This procedure combines Computational Fluid Dynamics (CFD) and distortion synthesis analysis to obtain a prediction of peak dynamic distortion intensity and the associated instantaneous total pressure pattern. A prediction of the steady state total pressure pattern at the Aerodynamic Interface Plane is first obtained using an appropriate CFD flow solver. A corresponding inlet turbulence pattern is obtained from the CFD solution via a correlation linking root mean square (RMS) inlet turbulence to a formulation of several CFD parameters representative of flow turbulence intensity. This correlation was derived using flight data obtained from the NASA High Alpha Research Vehicle flight test program and several CFD solutions at conditions matching the flight test data. A distortion synthesis analysis is then performed on the predicted steady state total pressure and RMS turbulence patterns to yield a predicted value of dynamic distortion intensity and the associated instantaneous total pressure pattern.
The Influence of Synaptic Weight Distribution on Neuronal Population Dynamics
Buice, Michael; Koch, Christof; Mihalas, Stefan
2013-01-01
The manner in which different distributions of synaptic weights onto cortical neurons shape their spiking activity remains an open question. To characterize a homogeneous neuronal population, we use the master equation for generalized leaky integrate-and-fire neurons with shot-noise synapses. We develop fast semi-analytic numerical methods to solve this equation for either current or conductance synapses, with and without synaptic depression. We show that its solutions match simulations of equivalent neuronal networks better than those of the Fokker-Planck equation, and we compute bounds on the network response to non-instantaneous synapses. We apply these methods to study different synaptic weight distributions in feed-forward networks. We characterize the synaptic amplitude distributions using a set of measures, called tail weight numbers, designed to quantify the preponderance of very strong synapses. Even when synaptic amplitude distributions are equated for both the total current and average synaptic weight, distributions with sparse but strong synapses produce higher responses for small inputs, leading to a larger operating range. Furthermore, despite their small number, such synapses enable the network to respond faster and with more stability in the face of external fluctuations. PMID:24204219
Dynamic Root Distribution in the Community Land Model
NASA Astrophysics Data System (ADS)
Drewniak, B. A.
2015-12-01
Roots are responsible for water and nutrient uptake to meet plant needs, coupling the aboveground and belowground ecosystems and driving photosynthesis. Roots respond to their environment with foraging strategies to maximize nutrient acquisition. However, roots have one of the simplest representations in Earth System Models (ESMs). Most root algorithms in ESMs consist of a fixed rooting depth and distribution, which vary only with plant functional type (PFT). Although this method works in general for many ecosystems, there are several regions (e.g., arid, boreal) where root distribution is either overestimated or underestimated, resulting in plant-stress-induced productivity loss. In order to allow ecosystems to respond to environmental changes such as climate change, roots require a time-varying structure that can adapt to the heterogeneity of water and nitrogen in the soil. This work presents a new approach to representing roots in the Community Land Model. The methodology is designed to optimize root distribution for both water and nitrogen uptake, with priority given to plant water needs. The roots can respond to the soil's vertical profile of nutrients, influencing the plant-extractable resources and therefore the aboveground vegetation dynamics. The dynamic root profile results in an increase in gross primary productivity and crop yield.
A Process for Comparing Dynamics of Distributed Space Systems Simulations
NASA Technical Reports Server (NTRS)
Cures, Edwin Z.; Jackson, Albert A.; Morris, Jeffery C.
2009-01-01
The paper describes a process that was developed for comparing the primary orbital dynamics behavior between space systems distributed simulations. This process is used to characterize and understand the fundamental fidelities and compatibilities of the modeling of orbital dynamics between spacecraft simulations. This is required for high-latency distributed simulations such as NASA's Integrated Mission Simulation and must be understood when reporting results from simulation executions. This paper presents 10 principal comparison tests along with their rationale and examples of the results. The Integrated Mission Simulation (IMSim) (formerly known as the Distributed Space Exploration Simulation (DSES)) is a NASA research and development project focusing on the technologies and processes that are related to the collaborative simulation of complex space systems involved in the exploration of our solar system. Currently, the NASA centers that are actively participating in the IMSim project are the Ames Research Center, the Jet Propulsion Laboratory (JPL), the Johnson Space Center (JSC), the Kennedy Space Center, the Langley Research Center and the Marshall Space Flight Center. In concept, each center participating in IMSim has its own set of simulation models and environment(s). These simulation tools are used to build the various simulation products that are used for scientific investigation, engineering analysis, system design, training, planning, operations and more. Working individually, these production simulations provide important data to various NASA projects.
Asteroids@home-A BOINC distributed computing project for asteroid shape reconstruction
NASA Astrophysics Data System (ADS)
Ďurech, J.; Hanuš, J.; Vančo, R.
2015-11-01
We present the project Asteroids@home that uses distributed computing to solve the time-consuming inverse problem of shape reconstruction of asteroids. The project uses the Berkeley Open Infrastructure for Network Computing (BOINC) framework to distribute, collect, and validate small computational units that are solved independently at individual computers of volunteers connected to the project. Shapes, rotational periods, and orientations of the spin axes of asteroids are reconstructed from their disk-integrated photometry by the lightcurve inversion method.
Adaptive-mesh algorithms for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Roe, Philip L.; Quirk, James
1993-01-01
The basic goal of adaptive-mesh algorithms is to distribute computational resources wisely by increasing the resolution of 'important' regions of the flow and decreasing the resolution of regions that are less important. While this goal is worthwhile, implementing schemes with this degree of sophistication remains more of an art than a science. In this paper, the basic pieces of adaptive-mesh algorithms are described and some of the possible ways to implement them are discussed and compared. These basic pieces are the data structure to be used, the generation of an initial mesh, the criterion used to adapt the mesh to the solution, and the flow-solver algorithm on the resulting mesh. Each of these is discussed, with particular emphasis on methods suitable for the computation of compressible flows.
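As an illustration of the adaptation-criterion piece discussed in this abstract, here is a minimal sketch of one common choice: flagging cells for refinement where an undivided density difference exceeds a threshold. The function name, the 1-D setting, and the threshold value are assumptions for illustration, not the paper's specific scheme:

```python
import numpy as np

def flag_cells_for_refinement(rho, threshold=0.1):
    """Return a boolean mask of 1-D cells whose undivided central
    difference |rho[i+1] - rho[i-1]| exceeds `threshold`."""
    grad = np.abs(np.roll(rho, -1) - np.roll(rho, 1))
    grad[0] = grad[-1] = 0.0   # ignore the periodic wrap at the boundaries
    return grad > threshold

# A discontinuity ("shock") at i = 50: only cells adjacent to it are flagged,
# so refinement effort concentrates on the 'important' region of the flow.
rho = np.ones(100)
rho[50:] = 2.0
flags = flag_cells_for_refinement(rho)
```

Using undivided (rather than divided) differences makes the criterion insensitive to the local mesh spacing, which is a common practical choice in compressible-flow AMR codes.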
NASA Astrophysics Data System (ADS)
Gullberg, Grant T.; Reutter, Bryan W.; Sitek, Arkadiusz; Maltz, Jonathan S.; Budinger, Thomas F.
2010-10-01
The very nature of nuclear medicine, the visual representation of injected radiopharmaceuticals, implies imaging of dynamic processes such as the uptake and wash-out of radiotracers from body organs. For years, nuclear medicine has been touted as the modality of choice for evaluating function in health and disease. This evaluation is greatly enhanced using single photon emission computed tomography (SPECT), which permits three-dimensional (3D) visualization of tracer distributions in the body. However, to fully realize the potential of the technique requires the imaging of in vivo dynamic processes of flow and metabolism. Tissue motion and deformation must also be addressed. Absolute quantification of these dynamic processes in the body has the potential to improve diagnosis. This paper presents a review of advancements toward the realization of the potential of dynamic SPECT imaging and a brief history of the development of the instrumentation. A major portion of the paper is devoted to the review of special data processing methods that have been developed for extracting kinetics from dynamic cardiac SPECT data acquired using rotating detector heads that move as radiopharmaceuticals exchange between biological compartments. Recent developments in multi-resolution spatiotemporal methods enable one to estimate kinetic parameters of compartment models of dynamic processes using data acquired from a single camera head with slow gantry rotation. The estimation of kinetic parameters directly from projection measurements improves bias and variance over the conventional method of first reconstructing 3D dynamic images, generating time-activity curves from selected regions of interest and then estimating the kinetic parameters from the generated time-activity curves. Although the potential applications of SPECT for imaging dynamic processes have not been fully realized in the clinic, it is hoped that this review illuminates the potential of SPECT for dynamic imaging.
Dynamic single photon emission computed tomography—basic principles and cardiac applications
Gullberg, Grant T; Reutter, Bryan W; Sitek, Arkadiusz; Maltz, Jonathan S; Budinger, Thomas F
2011-01-01
The very nature of nuclear medicine, the visual representation of injected radiopharmaceuticals, implies imaging of dynamic processes such as the uptake and wash-out of radiotracers from body organs. For years, nuclear medicine has been touted as the modality of choice for evaluating function in health and disease. This evaluation is greatly enhanced using single photon emission computed tomography (SPECT), which permits three-dimensional (3D) visualization of tracer distributions in the body. However, to fully realize the potential of the technique requires the imaging of in vivo dynamic processes of flow and metabolism. Tissue motion and deformation must also be addressed. Absolute quantification of these dynamic processes in the body has the potential to improve diagnosis. This paper presents a review of advancements toward the realization of the potential of dynamic SPECT imaging and a brief history of the development of the instrumentation. A major portion of the paper is devoted to the review of special data processing methods that have been developed for extracting kinetics from dynamic cardiac SPECT data acquired using rotating detector heads that move as radiopharmaceuticals exchange between biological compartments. Recent developments in multi-resolution spatiotemporal methods enable one to estimate kinetic parameters of compartment models of dynamic processes using data acquired from a single camera head with slow gantry rotation. The estimation of kinetic parameters directly from projection measurements improves bias and variance over the conventional method of first reconstructing 3D dynamic images, generating time-activity curves from selected regions of interest and then estimating the kinetic parameters from the generated time-activity curves. Although the potential applications of SPECT for imaging dynamic processes have not been fully realized in the clinic, it is hoped that this review illuminates the potential of SPECT for dynamic imaging.
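The compartment-model kinetics this review discusses can be illustrated with a minimal one-tissue-compartment sketch generating a time-activity curve from an arterial input function. The rate constants, the input function, and the time grid are illustrative assumptions, not values from the paper:

```python
import numpy as np

def tissue_curve(t, ca, K1, k2):
    """Integrate dC/dt = K1*Ca(t) - k2*C(t) with forward Euler on a
    uniform time grid t, given the arterial input function Ca."""
    c = np.zeros_like(t)
    dt = t[1] - t[0]
    for i in range(1, t.size):
        c[i] = c[i - 1] + dt * (K1 * ca[i - 1] - k2 * c[i - 1])
    return c

t = np.linspace(0.0, 60.0, 601)   # minutes (toy grid)
ca = np.exp(-t / 10.0)            # toy arterial input function
c = tissue_curve(t, ca, K1=0.5, k2=0.1)   # hypothetical rate constants
```

In dynamic SPECT, curves like `c` are what the review's "direct from projections" methods estimate without first reconstructing 3D images for each time frame.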
Nelsen, J.M.
1995-06-01
Computational fluid dynamic studies of 3-D, fixed-geometry, gore-shaped parachute canopies are presented. Both solid and ribbon canopies with a 10% vent diameter are investigated. The flowfields analyzed are laminar and compressible, spanning both the subsonic and supersonic regimes. Results presented include characterization of the local and global flowfields and of the internal and external canopy surface pressure distributions. The canopy surface pressure distributions may be utilized in subsequent structural analyses to assess the integrity of the parachute canopy fabric components.
Improvement in computational fluid dynamics through boundary verification and preconditioning
NASA Astrophysics Data System (ADS)
Folkner, David E.
This thesis improves the accuracy and efficiency of computational fluid dynamics through two main methods: a new boundary-condition verification procedure and preconditioning techniques. First, a new verification approach that addresses boundary conditions was developed. In order to apply the verification approach to a large range of arbitrary boundary conditions, it was necessary to develop a unifying mathematical formulation. A framework was developed that allows for the application of Dirichlet, Neumann, and extrapolation boundary conditions, or in some cases the equations of motion directly. Verification of boundary-condition techniques was performed using exact solutions from canonical fluid dynamics test cases. Second, to reduce computation time and improve accuracy, preconditioning algorithms were applied via artificial dissipation schemes. A new convective upwind and split pressure (CUSP) scheme was devised and was shown to be more effective than traditional preconditioning schemes in certain scenarios. The new scheme was compared with traditional schemes for unsteady flows in which both convective and acoustic effects dominate. Both the boundary conditions and the preconditioning algorithms were implemented in the context of a "strand grid" solver. While not the focus of this thesis, strand grids provide automatic viscous-quality meshing and are suitable for moving-mesh overset problems.
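A minimal sketch of a unified ghost-cell treatment covering the three boundary-condition kinds the abstract names. The 1-D finite-volume layout, function name, and conventions are assumptions for illustration, not the thesis's actual formulation:

```python
import numpy as np

def apply_boundary(u, kind, value=0.0, dx=1.0):
    """Set the left ghost cell u[0] of a 1-D array so the chosen condition
    holds at the face between u[0] and the first interior cell u[1]."""
    if kind == "dirichlet":        # u = value at the face (face avg)
        u[0] = 2.0 * value - u[1]
    elif kind == "neumann":        # du/dx = value at the face
        u[0] = u[1] - value * dx
    elif kind == "extrapolation":  # zeroth-order extrapolation
        u[0] = u[1]
    else:
        raise ValueError(f"unknown boundary kind: {kind}")
    return u

u = np.array([0.0, 1.0, 2.0, 3.0])
apply_boundary(u, "dirichlet", value=0.5)  # face average becomes 0.5
```

Dispatching all three conditions through one interface like this is what makes a single verification procedure applicable to "a large range of arbitrary boundary conditions."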
Issues in computational fluid dynamics code verification and validation
Oberkampf, W.L.; Blottner, F.G.
1997-09-01
A broad range of mathematical modeling errors of fluid flow physics and numerical approximation errors are addressed in computational fluid dynamics (CFD). It is strongly believed that if CFD is to have a major impact on the design of engineering hardware and flight systems, the level of confidence in complex simulations must substantially improve. To better understand the present limitations of CFD simulations, a wide variety of physical modeling, discretization, and solution errors are identified and discussed. Here, discretization and solution errors refer to all errors caused by conversion of the original partial differential, or integral, conservation equations representing the physical process, to algebraic equations and their solution on a computer. The impact of boundary conditions on the solution of the partial differential equations and their discrete representation will also be discussed. Throughout the article, clear distinctions are made between the analytical mathematical models of fluid dynamics and the numerical models. Lax's Equivalence Theorem and its frailties in practical CFD solutions are pointed out. Distinctions are also made between the existence and uniqueness of solutions to the partial differential equations as opposed to the discrete equations. Two techniques are briefly discussed for the detection and quantification of certain types of discretization and grid resolution errors.
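One standard technique for detecting and quantifying grid-resolution errors of the kind this abstract discusses is Richardson extrapolation; whether it is among the two techniques the paper covers is not stated here, so this is a generic sketch under illustrative values:

```python
def richardson_extrapolate(f_h, f_2h, p=2, r=2.0):
    """Given a solution quantity f_h on grid spacing h and f_2h on spacing
    r*h, with observed order of accuracy p, estimate the grid-converged
    value and the discretization error in f_h."""
    f_exact = f_h + (f_h - f_2h) / (r**p - 1.0)
    error_h = f_exact - f_h
    return f_exact, error_h

# Example: a quantity with true value 1.0 and an O(h^2) leading error term,
# so halving h cuts the error by a factor of 4.
f_h, f_2h = 1.0 + 0.01, 1.0 + 0.04
f_exact, err = richardson_extrapolate(f_h, f_2h)
```

The same ratio of solution changes across three grids also yields the observed order `p`, which is how codes check that the asymptotic range has been reached before trusting the error estimate.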
Computational fluid dynamics modeling for emergency preparedness & response
Lee, R.L.; Albritton, J.R.; Ermak, D.L.; Kim, J.
1995-07-01
Computational fluid dynamics (CFD) has played an increasing role in the improvement of atmospheric dispersion modeling. This is because many dispersion models are now driven by meteorological fields generated from CFD models or, in numerical weather prediction's terminology, prognostic models. Whereas most dispersion models typically involve one or a few scalar, uncoupled equations, the prognostic equations are a set of highly coupled, nonlinear equations whose solution requires a significant level of computational power. Until recently, such computer power could be found only in CRAY-class supercomputers. Recent advances in computer hardware and software have enabled modestly priced, high-performance workstations to match the computational power of some mainframes. Thus, desktop-class machines that were once limited to dispersion calculations driven by diagnostic wind fields may now be used to calculate complex flows using prognostic CFD models. The Atmospheric Release and Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory (LLNL) has, for the past several years, taken advantage of the improvements in hardware technology to develop a national emergency response capability based on executing diagnostic models on workstations. Diagnostic models that provide wind fields are, in general, simple to implement, robust, and require minimal execution time. Such models have been the cornerstones of the ARAC operational system for the past ten years. Kamada (1992) provides a review of diagnostic models and their applications to dispersion problems. However, because these models typically contain little physics beyond mass conservation, their performance is extremely sensitive to the quantity and quality of input meteorological data and, in spite of their utility, can be applied with confidence to only modestly complex flows.