Science.gov

Sample records for distributed dynamical computation

  1. Parallel and Distributed Computational Fluid Dynamics: Experimental Results and Challenges

    NASA Technical Reports Server (NTRS)

    Djomehri, Mohammad Jahed; Biswas, R.; VanderWijngaart, R.; Yarrow, M.

    2000-01-01

    This paper describes several results of parallel and distributed computing using a large-scale production flow solver program. A coarse-grained parallelization based on clustering of discretization grids, combined with partitioning of large grids for load balancing, is presented. An assessment is given of its performance on distributed and distributed-shared memory platforms using large-scale scientific problems. An experiment with this solver, adapted to a wide area network execution environment, is also presented. Finally, we give a comparative performance assessment of computation and communication times on both tightly and loosely coupled machines.

  2. Application of a distributed network in computational fluid dynamic simulations

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.; Deshpande, Ashish

    1994-01-01

    A general-purpose 3-D, incompressible Navier-Stokes algorithm is implemented on a network of concurrently operating workstations using the Parallel Virtual Machine (PVM) system and compared with its performance on a CRAY Y-MP and on an Intel iPSC/860. The problem is relatively computationally intensive and has a communication structure based primarily on nearest-neighbor communication, making it ideally suited to message passing. Such problems are frequently encountered in computational fluid dynamics (CFD), and their solution is increasingly in demand. The communication structure is explicitly coded in the implementation to fully exploit the regularity in message passing in order to produce a near-optimal solution. Results are presented for various grid sizes using up to eight processors.

  3. Dynamic resource allocation scheme for distributed heterogeneous computer systems

    NASA Technical Reports Server (NTRS)

    Liu, Howard T. (Inventor); Silvester, John A. (Inventor)

    1991-01-01

    This invention relates to resource allocation in computer systems, and more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating jobs queued up for busy nodes to idle, or less busy, nodes. In accordance with the algorithm (SIDA for short), load sharing is initiated by the server device in such a manner that extra overhead is not imposed on the system during heavily loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., those over a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node that is burdened below a pre-established threshold level, or (2) a node has been idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
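
    A minimal sketch of the dual-mode, server-initiated policy described above, in Python; the Node class, thresholds, and single-job transfer are illustrative assumptions, not details taken from the patent:

      HIGH, LOW = 8, 2   # illustrative queue-length thresholds, not from the patent

      class Node:
          def __init__(self, name):
              self.name = name
              self.queue = []

          def workload(self):
              # The patent combines local queue length with a service-rate
              # ratio; plain queue length stands in for that here.
              return len(self.queue)

          def request_work(self, nodes):
              # Server-initiated: an idle or lightly loaded node pulls one
              # job from the most heavily burdened node above HIGH.
              donor = max(nodes, key=Node.workload)
              if donor is not self and donor.workload() > HIGH:
                  self.queue.append(donor.queue.pop())

      nodes = [Node(f"n{i}") for i in range(4)]
      nodes[0].queue = list(range(12))      # one overloaded node
      for n in nodes[1:]:
          if n.workload() < LOW:            # fires on job completion or wakeup timer
              n.request_work(nodes)
      print([n.workload() for n in nodes])  # -> [9, 1, 1, 1]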

  4. Evidence for complex, collective dynamics and emergent, distributed computation in plants.

    PubMed

    Peak, David; West, Jevin D; Messinger, Susanna M; Mott, Keith A

    2004-01-27

    It has been suggested that some biological processes are equivalent to computation, but quantitative evidence for that view is weak. Plants must solve the problem of adjusting stomatal apertures to allow sufficient CO2 uptake for photosynthesis while preventing excessive water loss. Under some conditions, stomatal apertures become synchronized into patches that exhibit richly complicated dynamics, similar to behaviors found in cellular automata that perform computational tasks. Using sequences of chlorophyll fluorescence images from leaves of Xanthium strumarium L. (cocklebur), we quantified spatial and temporal correlations in stomatal dynamics. Our values are statistically indistinguishable from those of the same correlations found in the dynamics of automata that compute. These results are consistent with the proposition that a plant solves its optimal gas exchange problem through an emergent, distributed computation performed by its leaves. PMID:14732685

  5. Dynamic Load Balancing for Adaptive Computations on Distributed-Memory Machines

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Dynamic load balancing is central to adaptive mesh-based computations on large-scale parallel computers. The principal investigator has investigated various issues of the dynamic load balancing problem under NASA JOVE and JAG grants. The major accomplishments of the project are two graph partitioning algorithms and a load balancing framework. The S-HARP dynamic graph partitioner is the fastest among the known dynamic graph partitioners to date. It can partition a graph of over 100,000 vertices in 0.25 seconds on a 64-processor Cray T3E distributed-memory multiprocessor while maintaining a scalability of over 16-fold speedup. Other known and widely used dynamic graph partitioners take a second or two while giving a low scalability of only a few-fold speedup on 64 processors. These results have been published in journals and peer-reviewed flagship conferences.

  6. Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.

    2000-01-01

    The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many sub-domains, called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computation. In environments with computers of different architectures, operating systems, CPU speeds, memory sizes, loads, and network speeds, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running the NASA-based code ADPAC to demonstrate the developed tools for dynamic load balancing.
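
    The block-distribution idea can be illustrated with a greedy heuristic over hypothetical block costs and processor speeds (the actual tools also account for communication costs and measured load, which are omitted here):

      import heapq

      def assign_blocks(block_costs, proc_speeds):
          """Greedily place the largest blocks on the least-loaded
          processor, weighting load by relative processor speed."""
          heap = [(0.0, p) for p in range(len(proc_speeds))]  # (busy time, proc)
          heapq.heapify(heap)
          placement = {}
          for b, cost in sorted(enumerate(block_costs), key=lambda x: -x[1]):
              t, p = heapq.heappop(heap)
              placement[b] = p
              heapq.heappush(heap, (t + cost / proc_speeds[p], p))
          return placement

      # Four blocks of unequal cost on two processors, one twice as fast.
      print(assign_blocks([4.0, 3.0, 2.0, 1.0], [2.0, 1.0]))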

  7. Fast computation of statistical uncertainty for spatiotemporal distributions estimated directly from dynamic cone beam SPECT projections

    SciTech Connect

    Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.

    2001-04-09

    The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in those clinics that have only single-detector systems and thus are not able to perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projection samples truncate the patient's body, this can result in additional image artifacts. To overcome these sources of bias in conventional image-based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of the estimated spatiotemporal distributions.

  8. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  9. A distributed, dynamic, parallel computational model: the role of noise in velocity storage

    PubMed Central

    Merfeld, Daniel M.

    2012-01-01

    Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic "real-time" calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique, "particle filtering," that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
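
    A toy one-dimensional sketch in the spirit of the model: replicas of a leaky-integrator velocity-storage stage are propagated with noise and corrected with a gain computed from the particle variance. All constants below are illustrative, not the paper's fitted values:

      import numpy as np

      rng = np.random.default_rng(0)
      dt, tau = 0.01, 16.0            # time step (s), storage time constant (s)
      n, T = 1000, 3000               # particles, steps
      true_v, sigma_aff = 60.0, 5.0   # head velocity (deg/s), afferent noise (deg/s)

      particles = np.zeros(n)
      for _ in range(T):
          # Propagate each replica of the internal model with process noise.
          particles += dt * (-particles / tau) + rng.normal(0, 0.5, n)
          afferent = true_v + rng.normal(0, sigma_aff)
          # Gain computed from the particle variance, as in the paper.
          gain = particles.var() / (particles.var() + sigma_aff**2)
          particles += gain * (afferent - particles)
      print(f"steady-state estimate: {particles.mean():.1f} deg/s")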

  10. Distributed computing

    SciTech Connect

    Chambers, F.B.; Duce, D.A.; Jones, G.P.

    1984-01-01

    CONTENTS: The Dataflow Approach: Fundamentals of dataflow. Architecture and performance. Assembler level programming. High level dataflow programming. Declarative systems: Functional programming. Logic programming and prolog. The "language first" approach. Towards a successor to von Neumann. Loosely-coupled systems: Architectures. Communications. Distributed filestores. Mechanisms for distributed control. Distributed operating systems. Programming languages. Closely-coupled systems: Architecture. Programming languages. Run-time support. Development aids. Cyba-M. Polyproc. Modeling and verification: Using algebra for concurrency. Reasoning about concurrent systems. Each chapter includes references. Index.

  11. Analysis of discrete and continuous distributions of ventilatory time constants from dynamic computed tomography

    NASA Astrophysics Data System (ADS)

    Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G.

    2005-04-01

    In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step-up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore, the algorithm was applied to in vivo data. In five pigs, sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content (FGC) in the imaged slice was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method, and the influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes can more likely be characterized by discrete TCs, whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately than discrete TCs.
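
    The core numerical step, recovering a distribution of time constants from a washout curve, can be sketched as a non-negative least-squares fit over an exponential basis (illustrative; not necessarily the authors' exact algorithm):

      import numpy as np
      from scipy.optimize import nnls

      t = np.linspace(0, 4.0, 80)                # time after pressure step (s)
      taus = np.logspace(-1.5, 0.7, 40)          # candidate time constants (s)
      # Synthetic FGC curve: two compartments with TCs 0.3 s and 1.5 s, plus noise.
      signal = 1.0 - 0.7*np.exp(-t/0.3) - 0.3*np.exp(-t/1.5)
      signal += np.random.default_rng(1).normal(0, 0.005, t.size)

      # Each column is the step response of one exponential compartment.
      basis = 1.0 - np.exp(-t[:, None] / taus[None, :])
      weights, _ = nnls(basis, signal)
      for tau, w in zip(taus, weights):
          if w > 0.05:
              print(f"tau = {tau:.2f} s, weight = {w:.2f}")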

  12. FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.

    PubMed

    Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora

    2013-09-01

    In this paper, we present a distributed computing system, called DCMARK, aimed at solving the partial differential equations that underlie many fields of investigation, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network (CNN) paradigm, which allows us to divide the solution of the differential equation system into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test the present idea, we chose to implement DCMARK on a single FPGA, designing the single processor to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to studying the properties of 1-, 2-, and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it reduces the elaboration time more than other similar systems in the literature. To ensure a high level of reconfigurability, we designed a compact system on programmable chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results. PMID:24808576
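
    A serial sketch of the kind of locally coupled KdV integration that DCMARK parallelizes one cell per processor; the grid, time step, and initial soliton below are illustrative choices:

      import numpy as np

      N, dx, dt = 200, 0.5, 1e-3

      def rhs(u):
          # Nearest- and next-nearest-neighbour stencils only: the local
          # coupling that lets each cell map onto one small processor.
          ux = (np.roll(u, -1) - np.roll(u, 1)) / (2*dx)
          uxxx = (np.roll(u, -2) - 2*np.roll(u, -1)
                  + 2*np.roll(u, 1) - np.roll(u, 2)) / (2*dx**3)
          return -6.0*u*ux - uxxx            # KdV: u_t = -6 u u_x - u_xxx

      x = dx*np.arange(N)
      c = 1.0
      u = 0.5*c/np.cosh(0.5*np.sqrt(c)*(x - 25.0))**2   # one-soliton initial state
      u0 = u.copy()

      for _ in range(5000):                  # classic RK4 stepping to t = 5
          k1 = rhs(u); k2 = rhs(u + 0.5*dt*k1)
          k3 = rhs(u + 0.5*dt*k2); k4 = rhs(u + dt*k3)
          u += dt*(k1 + 2*k2 + 2*k3 + k4)/6
      print("mass drift:", abs((u - u0).sum())*dx)      # conserved-quantity check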

  13. The van Hove distribution function for Brownian hard spheres: Dynamical test particle theory and computer simulations for bulk dynamics

    NASA Astrophysics Data System (ADS)

    Hopkins, Paul; Fortini, Andrea; Archer, Andrew J.; Schmidt, Matthias

    2010-12-01

    We describe a test particle approach based on dynamical density functional theory (DDFT) for studying the correlated time evolution of the particles that constitute a fluid. Our theory provides a means of calculating the van Hove distribution function by treating its self and distinct parts as the two components of a binary fluid mixture, with the "self" component having only one particle, the "distinct" component consisting of all the other particles, and using DDFT to calculate the time evolution of the density profiles for the two components. We apply this approach to a bulk fluid of Brownian hard spheres and compare to results for the van Hove function and the intermediate scattering function from Brownian dynamics computer simulations. We find good agreement at low and intermediate densities using the very simple Ramakrishnan-Yussouff [Phys. Rev. B 19, 2775 (1979)] approximation for the excess free energy functional. Since the DDFT is based on the equilibrium Helmholtz free energy functional, we can probe a free energy landscape that underlies the dynamics. Within the mean-field approximation we find that as the particle density increases, this landscape develops a minimum, while an exact treatment of a model confined situation shows that for an ergodic fluid this landscape should be monotonic. We discuss possible implications for slow, glassy, and arrested dynamics at high densities.
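
    The Brownian-dynamics side of such a comparison is easy to sketch for non-interacting particles: the self part of the van Hove function is the histogram of displacements after time t, which should be Gaussian in the free case (hard-sphere interactions are omitted in this sketch):

      import numpy as np

      rng = np.random.default_rng(2)
      N, D, dt, steps = 5000, 1.0, 1e-3, 200
      x = np.zeros(N)                       # 1-D positions, ideal Brownian particles
      for _ in range(steps):
          x += np.sqrt(2*D*dt) * rng.normal(size=N)

      t = steps*dt
      G_s, edges = np.histogram(x, bins=np.linspace(-3, 3, 61), density=True)
      r = 0.5*(edges[1:] + edges[:-1])
      gauss = np.exp(-r**2/(4*D*t)) / np.sqrt(4*np.pi*D*t)   # exact free-particle G_s
      print("max deviation from Gaussian:", np.abs(G_s - gauss).max())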

  14. Molecular Dynamics Calculation of Carbon/Hydrocarbon Reflection Coefficients on a Graphite Surface Employing Distributed Computing

    NASA Astrophysics Data System (ADS)

    Alman, D. A.; Ruzic, D. N.; Brooks, J. N.

    2001-10-01

    Reflection coefficients of carbon and hydrocarbon molecules have been calculated with a molecular dynamics code. The code uses the Brenner hydrocarbon potential, an empirical many-body potential that can model the chemical bonding in small hydrocarbon molecules and graphite surfaces. A variety of incident energies and angles have been studied. Typical results for carbon show reflection coefficients of about 0.4 at thermal energy, decreasing to a minimum of 0.15 at 10-20 eV, and then increasing again. Distributed computing is used to spread the work among 10-20 desktop PCs in the laboratory. The system consists of a client application run on all of the PCs and a single server machine that distributes work and compiles the results sent back from the clients. The client-server software is written in Java and requires no commercial software packages. Thus, the MD code benefits from multiprocessor-like speed-up at no additional cost by using the idle CPU cycles that would otherwise be wasted. These calculations represent an important improvement to the WBC code, which has been used to model surface erosion, core plasma contamination, and tritium codeposition in many fusion design studies and experiments.

  15. Distributed replica dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Chill, Samuel T.; Henkelman, Graeme

    2015-11-01

    A distributed replica dynamics (DRD) method is proposed to calculate rare-event molecular dynamics using distributed computational resources. Similar to Voter's parallel replica dynamics (PRD) method, the dynamics of independent replicas of the system are calculated on different computational clients. In DRD, each replica runs molecular dynamics from an initial state for a fixed simulation time and then reports information about the trajectory back to the server. A simulation clock on the server accumulates the simulation time of each replica until one reports a transition to a new state. Subsequent calculations are initiated from within this new state and the process is repeated to follow the state-to-state evolution of the system. DRD is designed to work with asynchronous and distributed computing resources in which the clients may not be able to communicate with each other. Additionally, clients can be added or removed from the simulation at any point in the calculation. Even with heterogeneous computing clients, we prove that the DRD method reproduces the correct probability distribution of escape times. We also show this correspondence numerically; molecular dynamics simulations of Al(100) adatom diffusion using PRD and DRD give consistent exponential distributions of escape times. Finally, we discuss guidelines for choosing the optimal number of replicas and replica trajectory length for the DRD method.
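
    The server-side bookkeeping that makes DRD work can be sketched as follows; clients are mocked with exponential escape times, and the dephasing and correlated-event details of the real method are omitted:

      import random

      random.seed(3)
      rate, segment = 0.02, 50.0   # true escape rate; per-replica MD segment length

      def run_replica():
          """A client runs one fixed-length MD segment and reports
          whether (and when) a transition occurred within it."""
          t_escape = random.expovariate(rate)
          return (True, t_escape) if t_escape < segment else (False, segment)

      escape_times = []
      for _ in range(2000):               # repeat the state-to-state procedure
          clock = 0.0
          while True:
              hit, t = run_replica()      # replicas may run on any client, any order
              if hit:
                  escape_times.append(clock + t)
                  break
              clock += segment            # server accumulates simulation time

      mean = sum(escape_times) / len(escape_times)
      print(f"measured mean escape time {mean:.1f} vs 1/rate = {1/rate:.1f}")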

  16. A Novel, Computationally Efficient Multipolar Model Employing Distributed Charges for Molecular Dynamics Simulations.

    PubMed

    Devereux, Mike; Raghunathan, Shampa; Fedorov, Dmitri G; Meuwly, Markus

    2014-10-14

    A truncated multipole expansion can be re-expressed exactly using an appropriate arrangement of point charges. This means that groups of point charges that are shifted away from nuclear coordinates can be used to achieve accurate electrostatics for molecular systems. We introduce a multipolar electrostatic model formulated in this way for use in computationally efficient multipolar molecular dynamics simulations with well-defined forces and energy conservation in NVE (constant number-volume-energy) simulations. A framework is introduced to distribute torques arising from multipole moments throughout a molecule, and a refined fitting approach is suggested to obtain atomic multipole moments that are optimized for accuracy and numerical stability in a force field context. The formulation of the charge model is outlined as it has been implemented into CHARMM, with application to test systems involving H2O and chlorobenzene. In addition to ease of implementation and computational efficiency, the approach can be used to provide snapshots for multipolar QM/MM calculations in QM/MM-MD studies and can easily be combined with a standard point-charge force field to allow mixed multipolar/point-charge simulations of large systems. PMID:26588121
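
    The central identity, that a truncated multipole can be reproduced by displaced point charges, is easy to verify numerically for a dipole; the charge magnitude and moment below are arbitrary illustrative values:

      import numpy as np

      mu = np.array([0.0, 0.0, 1.85])     # target dipole moment (illustrative units)
      q = 10.0                            # magnitude of the auxiliary charges
      d = mu / q                          # separation that reproduces mu exactly

      charges = [( q,  0.5*d),            # (charge, position relative to the site)
                 (-q, -0.5*d)]

      print("monopole:", sum(c for c, _ in charges))       # 0: no spurious charge
      print("dipole:  ", sum(c*r for c, r in charges))     # equals mu

      # Far-field check: potential of the pair vs. the ideal point dipole.
      p = np.array([0.0, 0.0, 10.0])
      v_pair = sum(c/np.linalg.norm(p - r) for c, r in charges)
      v_dip = mu @ p / np.linalg.norm(p)**3
      print(f"pair: {v_pair:.6f}   ideal dipole: {v_dip:.6f}")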

  17. Simulations of ozone distributions in an aircraft cabin using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Rai, Aakash C.; Chen, Qingyan

    2012-07-01

    Ozone is a major pollutant of indoor air. Many studies have demonstrated the adverse health effects of ozone and of the byproducts generated by ozone-initiated reactive chemistry in an indoor environment. This study developed a Computational Fluid Dynamics (CFD) model to predict the ozone distribution in an aircraft cabin. The model was used to simulate the distribution of ozone in an aircraft cabin mockup for the following cases: (1) empty cabin; (2) cabin with seats; (3) cabin with soiled T-shirts; (4) occupied cabin with simple human geometry; and (5) occupied cabin with detailed human geometry. The agreement was generally good between the CFD results and the available experimental data. The ozone removal rate, deposition velocity, retention ratio, and breathing-zone levels were well predicted in those cases. The CFD model predicted the breathing-zone ozone concentration to be 77-99% of the average cabin ozone concentration, depending on the seat location. Breathing-zone ozone concentrations better assess the health risk to passengers and can be used to develop strategies for a healthier cabin environment.

  18. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    The following reports are presented on this project: a first-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; a second-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.

  1. Using computational fluid dynamics software to estimate circulation time distributions in bioreactors.

    PubMed

    Davidson, Kyle M; Sushil, Shrinivasan; Eggleton, Charles D; Marten, Mark R

    2003-01-01

    Nonideal mixing in many fermentation processes can lead to concentration gradients in nutrients, oxygen, and pH, among others. These gradients are likely to influence cellular behavior, growth, or yield of the fermentation process. Frequency of exposure to these gradients can be defined by the circulation time distribution (CTD). There are few examples of CTDs in the literature, and experimental determination of CTD is at best a challenging task. The goal in this study was to determine whether computational fluid dynamics (CFD) software (FLUENT 4 and MixSim) could be used to characterize the CTD in a single-impeller mixing tank. To accomplish this, CFD software was used to simulate flow fields in three different mixing tanks by meshing the tanks with a grid of elements and solving the Navier-Stokes equations using the kappa-epsilon turbulence model. Tracer particles were released from a reference zone within the simulated flow fields, particle trajectories were simulated for 30 s, and the time taken for these tracer particles to return to the reference zone was calculated. CTDs determined by experimental measurement, which showed distinct features (log-normal, bimodal, and unimodal), were compared with CTDs determined using CFD simulation. Reproducing the signal processing procedures used in each of the experiments, CFD simulations captured the characteristic features of the experimentally measured CTDs. The CFD data suggests new signal processing procedures that predict unimodal CTDs for all three tanks. PMID:14524709
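
    The particle-tracking post-processing reduces to collecting first-return times to a reference zone; a toy one-dimensional stochastic surrogate for the flow field (not FLUENT output) shows how a CTD is tallied:

      import numpy as np

      rng = np.random.default_rng(4)
      n_tracers, t_max, dt = 500, 30.0, 0.01
      zone = (0.0, 0.1)                    # reference zone on a unit circulation loop

      returns = []
      for _ in range(n_tracers):
          s, t, left = 0.05, 0.0, False    # tracer released inside the zone
          while t < t_max:
              # Mean circulation drift plus a crude turbulent dispersion term.
              s = (s + 0.3*dt + 0.05*np.sqrt(dt)*rng.normal()) % 1.0
              t += dt
              if not (zone[0] <= s <= zone[1]):
                  left = True
              elif left:                   # first return after having left the zone
                  returns.append(t)
                  break

      print(f"{len(returns)} returns, mean circulation time {np.mean(returns):.2f} s")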

  2. Cardea: Providing Support for Dynamic Resource Access in a Distributed Computing Environment

    NASA Technical Reports Server (NTRS)

    Lepro, Rebekah

    2003-01-01

    The environment framing the modern authorization process spans domains of administration, relies on many different authentication sources, and manages complex attributes as part of the authorization process. Cardea facilitates dynamic access control within this environment as a central function of an interoperable authorization framework. The system departs from the traditional authorization model by separating the authentication and authorization processes, distributing the responsibility for authorization data, and allowing collaborating domains to retain control over their implementation mechanisms. Critical features of the system architecture and its handling of the authorization process differentiate the system from existing authorization components by addressing common needs not adequately addressed by existing systems. Continuing system research seeks to enhance the implementation of the current authorization model employed in Cardea, increase the robustness of current features, further the framework for establishing trust, and promote interoperability with existing security mechanisms.

  3. Computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    1989-01-01

    An overview of computational fluid dynamics (CFD) activities at the Langley Research Center is given. The role of supercomputers in CFD research, algorithm development, multigrid approaches to computational fluid flows, aerodynamics computer programs, computational grid generation, turbulence research, and studies of rarefied gas flows are among the topics that are briefly surveyed.

  4. Coping with distributed computing

    SciTech Connect

    Cormell, L.

    1992-09-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent, he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given.

  5. Understanding pharmacokinetics using realistic computational models of fluid dynamics: biosimulation of drug distribution within the CSF space for intrathecal drugs.

    PubMed

    Kuttler, Andreas; Dimke, Thomas; Kern, Steven; Helmlinger, Gabriel; Stanski, Donald; Finelli, Luca A

    2010-12-01

    We introduce how biophysical modeling in pharmaceutical research and development, combining physiological observations at the tissue, organ and system level with selected drug physicochemical properties, may contribute to a greater and non-intuitive understanding of drug pharmacokinetics and therapeutic design. Based on rich first-principles knowledge combined with experimental data at both the conception and calibration stages, and leveraging our insights on disease processes and drug pharmacology, biophysical modeling may provide a novel and unique opportunity to interactively characterize detailed drug transport, distribution, and subsequent therapeutic effects. This innovative approach is exemplified through a three-dimensional (3D) computational fluid dynamics model of the spinal canal, motivated by questions arising during pharmaceutical development of one molecular therapy for spinal cord injury. The model was based on actual geometry reconstructed from magnetic resonance imaging data, subsequently transformed into a parametric 3D geometry and a corresponding finite-volume representation. With dynamics controlled by the transient Navier-Stokes equations, the model was implemented in a commercial multi-physics software environment established in the automotive and aerospace industries. While predictions were performed in silico, the underlying biophysical models relied on multiple sources of experimental data and knowledge from the scientific literature. The results have provided insights into the primary factors that can influence the intrathecal distribution of drug after lumbar administration. This example illustrates how the approach connects the causal chain underlying drug distribution, starting with the technical aspects of drug delivery systems, through physiology-driven drug transport, then eventually linking to tissue penetration, binding, residence, and ultimately clearance. Currently supporting our drug development projects with an improved understanding of systems

  6. Computer security in DOE distributed computing systems

    SciTech Connect

    Hunteman, W.J.

    1990-01-01

    The modernization of DOE facilities amid limited funding is creating pressure on DOE facilities to find innovative approaches to their daily activities. Distributed computing systems are becoming cost-effective solutions to improved productivity. This paper defines and describes typical distributed computing systems in the DOE. The special computer security problems present in distributed computing systems are identified and compared with traditional computer systems. The existing DOE computer security policy supports only basic networks and traditional computer systems and does not address distributed computing systems. A review of the existing policy requirements is followed by an analysis of the policy as it applies to distributed computing systems. Suggested changes in the DOE computer security policy are identified and discussed. The long lead time in updating DOE policy will require guidelines for applying the existing policy to distributed systems. Some possible interim approaches are identified and discussed. 2 refs.

  7. The simulation of temperature distribution and relative humidity with liquid concentration of 50% using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Yohana, Eflita; Yulianto, Mohamad Endy; Kwang-Hwang, Choi; Putro, Bondantio; Yohanes Aditya W., A.

    2015-12-01

    The study of humidity distribution inside a room by simulation has been widely conducted using computational fluid dynamics (CFD). Here, the simulation was performed using inputs from an experiment on air humidity reduction in a sample house. The liquid desiccant CaCl2 was used in this study to absorb humidity from the air, so that the magnitude of the humidity reduction occurring during the experiment could be obtained. The experiment was conducted in the morning at 8 o'clock with a liquid desiccant concentration of 50%, a nozzle dimension of 0.2 mm attached to the dehumidifier, and an air flow rate into the sample house of 2.35 m3/min. On both the inlet and outlet sides of the room, a DHT 11 sensor was installed and used to record changes in humidity and temperature during the experiment. In the normal condition, without the dehumidifier turned on, the sensors recorded an average room temperature of 28°C and an RH of 65%. The experiment showed that the relative humidity inside the sample house decreased to 52% at the inlet position. Further, the CFD simulation results showed the temperature and relative humidity distributions inside the sample house: with the liquid desiccant concentration of 50%, the relative humidity distribution was considerably good, with an average RH of 55%, accompanied by an increase in air temperature to 29.2°C inside the sample house.

  8. Computational Model of Human and System Dynamics in Free Flight: Studies in Distributed Control Technologies

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Pisanich, Gregory; Lebacqz, J. Victor (Technical Monitor)

    1998-01-01

    This paper presents a set of studies in full mission simulation and the development of a predictive computational model of human performance in control of complex airspace operations. NASA and the FAA have initiated programs of research and development to provide flight crew, airline operations and air traffic managers with automation aids to increase capacity in en route and terminal area to support the goals of safe, flexible, predictable and efficient operations. In support of these developments, we present a computational model to aid design that includes representation of multiple cognitive agents (both human operators and intelligent aiding systems). The demands of air traffic management require representation of many intelligent agents sharing world-models, coordinating action/intention, and scheduling goals and actions in a potentially unpredictable world of operations. The operator-model structure includes attention functions, action priority, and situation assessment. The cognitive model has been expanded to include working memory operations including retrieval from long-term store, and interference. The operator's activity structures have been developed to provide for anticipation (knowledge of the intention and action of remote operators), and to respond to failures of the system and other operators in the system in situation-specific paradigms. System stability and operator actions can be predicted by using the model. The model's predictive accuracy was verified using the full-mission simulation data of commercial flight deck operations with advanced air traffic management techniques.

  9. 3-D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS)

    SciTech Connect

    Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.

    1997-12-31

    The aim of the work performed is to develop a 3D parallel program for the numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two basically different approaches to the structure of massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition that is reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shareable memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments have been carried out with different numbers of processors, up to 256, and the efficiency of parallelization has been evaluated as a function of the number of processors and their parameters.
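
    The second, non-reconstructed approach amounts to a fixed 3-D block partitioning of the data matrix; the index arithmetic can be sketched independently of any message-passing library (all sizes illustrative):

      def decompose(n, parts):
          """Split n cells into `parts` nearly equal contiguous ranges."""
          base, extra = divmod(n, parts)
          out, start = [], 0
          for p in range(parts):
              size = base + (1 if p < extra else 0)
              out.append((start, start + size))
              start += size
          return out

      # A 96x64x64 mesh on a 4x2x2 processor grid -> one subdomain per processor.
      px, py, pz = 4, 2, 2
      xs, ys, zs = decompose(96, px), decompose(64, py), decompose(64, pz)
      for rank in range(px*py*pz):
          i, j, k = rank % px, (rank // px) % py, rank // (px*py)
          print(rank, xs[i], ys[j], zs[k])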

  10. Modeling hydroxyl radical distribution and trialkyl phosphates oxidation in UV-H2O2 photoreactors using computational fluid dynamics.

    PubMed

    Santoro, Domenico; Raisee, Mehrdad; Moghaddami, Mostafa; Ducoste, Joel; Sasges, Micheal; Liberti, Lorenzo; Notarnicola, Michele

    2010-08-15

    Advanced Oxidation Processes (AOPs) promoted by ultraviolet light are innovative and potentially cost-effective solutions for treating persistent pollutants recalcitrant to conventional water and wastewater treatment. While several studies have been performed during the past decade to improve the fundamental understanding of the UV-H2O2 AOP and its kinetic modeling, Computational Fluid Dynamics (CFD) has only recently emerged as a powerful tool that allows a deeper understanding of complex photochemical processes in environmental and reactor engineering applications. In this paper, a comprehensive kinetic model of the UV-H2O2 AOP was coupled with the Reynolds-averaged Navier-Stokes (RANS) equations using CFD to predict the oxidation of tributyl phosphate (TBP) and tri(2-chloroethyl) phosphate (TCEP) in two different photoreactors: a parallel-flow and a cross-flow UV device employing a UV lamp emitting primarily 253.7 nm radiation. CFD simulations, obtained for both turbulent and laminar flow regimes and compared with experimental data over a wide range of UV doses, enabled the spatial visualization of hydrogen peroxide and hydroxyl radical distributions in the photoreactor. The annular photoreactor displayed consistently better oxidation performance than the cross-flow system due to the absence of recirculation zones, as confirmed by the hydroxyl radical dose distributions. Notably, this discrepancy was found to be strongly dependent on, and directly correlated with, the hydroxyl radical rate constant, becoming relevant for conditions approaching diffusion-controlled reaction regimes (k(C,OH) > 10^9 M^-1 s^-1). PMID:20704221
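
    Stripped of the CFD transport, the local chemistry the model couples to the flow is ordinary rate kinetics; a minimal sketch of TBP decay under a quasi-steady hydroxyl-radical level (rate constant and concentrations are placeholders, not the paper's values):

      import numpy as np

      k_OH = 5.0e9        # M^-1 s^-1, placeholder second-order rate constant
      c_OH = 1.0e-13      # M, quasi-steady [OH] set by the UV field and H2O2 dose
      tbp0 = 1.0e-6       # M, initial TBP concentration

      # Pseudo-first-order decay: d[TBP]/dt = -k_OH [OH] [TBP]
      t = np.linspace(0, 60, 7)           # s
      tbp = tbp0 * np.exp(-k_OH * c_OH * t)
      for ti, ci in zip(t, tbp):
          print(f"t = {ti:4.0f} s   [TBP] = {ci:.3e} M")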

  11. A three-dimensional computational fluid dynamics model of shear stress distribution during neotissue growth in a perfusion bioreactor.

    PubMed

    Guyot, Y; Luyten, F P; Schrooten, J; Papantoniou, I; Geris, L

    2015-12-01

    Bone tissue engineering strategies use flow through perfusion bioreactors to apply mechanical stimuli to cells seeded on porous scaffolds. Cells grow on the scaffold surface but also bridge the scaffold pores, leading to a fully filled scaffold that follows the scaffold's geometric characteristics. Current computational fluid dynamics approaches for tissue engineering bioreactor systems have mostly been carried out for empty scaffolds. The effect of 3D cell growth and extracellular matrix formation (termed in this study neotissue growth) on the surrounding fluid flow field is a challenge yet to be tackled. In this work a combined approach was followed, linking curvature-driven cell growth to fluid dynamics modeling. The level-set method (LSM) was employed to capture neotissue growth driven by curvature, while the Stokes and Darcy equations, combined in the Brinkman equation, provided information regarding the distribution of the shear stress profile at the neotissue/medium interface and within the neotissue itself during growth. The neotissue was assumed to be micro-porous, allowing flow through its structure while at the same time allowing the simulation of complete scaffold filling without numerical convergence issues. The results show a significant difference in the amplitude of shear stress for cells located within the micro-porous neotissue or at the neotissue/medium interface, demonstrating the importance of taking the neotissue into account when calculating the mechanical stimulation of cells during culture. The presented computational framework is applied to different scaffold pore geometries, demonstrating its potential as a design tool for scaffold architecture that takes the growing neotissue into account. Biotechnol. Bioeng. 2015;112: 2591-2600. © 2015 Wiley Periodicals, Inc. PMID:26059101

  12. Distributed instruction set computer

    SciTech Connect

    Wang, L.

    1989-01-01

    The Distributed Instruction Set Computer, or DISC for short, is an experimental computer system for fine-grained parallel processing. DISC employs a new parallel instruction set, an Early Binding and Scheduling data tagging scheme, and a distributed control mechanism to explore a software dataflow control method in a multiple-functional-unit system. With zero system control overhead, multiple instructions are executed in parallel and/or out of order at a peak rate of n instructions/cycle, where n is the number of functional units. Quantitative simulation results indicate that a DISC system with 16 functional units can deliver a maximal 7.7X performance speedup over a single-functional-unit system at the same clock speed. Exploring a new parallel instruction set and distributed control mechanism, DISC represents three major breakthroughs in the domain of fine-grained parallel processing: (1) a fast multiple-instruction issuing mechanism; (2) parallel and/or out-of-order execution; (3) a software dataflow control scheme.

  13. Cooperative Fault Tolerant Distributed Computing

    SciTech Connect

    Fagg, Graham E.

    2006-03-15

    HARNESS was proposed as a system that combined the best of the emerging technologies found in current distributed computing research and commercial products into a very flexible, dynamically adaptable framework that could be used by applications to allow them to evolve and better handle their execution environment. The HARNESS system was designed using the considerable experience from previous projects such as PVM, MPI, IceT and Cumulvs. As such, the system was designed to avoid the common problems found with these current systems: it has no single point of failure and can survive machine, node, and software failures. Additional features included improved inter-component connectivity, with full support for dynamic downloading of additional components at run time, thus reducing the pressure on application developers to build in all the libraries they need in advance.

  14. Computational fluid dynamic control

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Deabreu-Garcia, Alex

    1989-01-01

    A general technique is presented for modeling fluid, or gas, dynamic systems specifically for the development of control systems. The numerical methods which are generally used in computational fluid dynamics are borrowed to create either continuous-time or discrete-time models of the particular fluid system. The resulting equations can be either left in a nonlinear form, or easily linearized about an operating point. As there are typically very many states in these systems, the usual linear model reduction methods can be used on them to allow a low-order controller to be designed. A simple example is given which typifies many internal flow control problems. The resulting control is termed computational fluid dynamic control.
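
    A minimal instance of the idea: discretize a 1-D diffusion system by finite differences to obtain a linear state-space model, then keep only its slowest modes for low-order controller design (sizes and parameters are illustrative):

      import numpy as np

      N, L, alpha = 50, 1.0, 0.1
      dx = L/(N + 1)
      # Tridiagonal A from the second-difference stencil (Dirichlet ends):
      # the kind of many-state model a CFD discretization produces.
      A = (np.diag(-2*np.ones(N)) + np.diag(np.ones(N-1), 1)
           + np.diag(np.ones(N-1), -1)) * alpha/dx**2

      # Modal truncation: retain the r slowest (least negative) eigenmodes.
      w, V = np.linalg.eigh(A)
      r = 4
      idx = np.argsort(w)[-r:]
      Ar = np.diag(w[idx])                # reduced dynamics for controller design
      print("kept eigenvalues:", np.round(w[idx], 2))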

  15. Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Chung, T. J.

    2002-03-01

    Computational fluid dynamics (CFD) techniques are used to study and solve complex fluid flow and heat transfer problems. This comprehensive text ranges from elementary concepts for the beginner to state-of-the-art CFD for the practitioner. It discusses and illustrates the basic principles of finite difference (FD), finite element (FE), and finite volume (FV) methods, with step-by-step hand calculations. Chapters go on to examine structured and unstructured grids, adaptive methods, computing techniques, and parallel processing. Finally, the author describes a variety of practical applications to problems in turbulence, reacting flows and combustion, acoustics, combined mode radiative heat transfer, multiphase flows, electromagnetic fields, and relativistic astrophysical flows. Students and practitioners--particularly in mechanical, aerospace, chemical, and civil engineering--will use this authoritative text to learn about and apply numerical techniques to the solution of fluid dynamics problems.

  16. Computational fluid dynamics research

    NASA Technical Reports Server (NTRS)

    Chandra, Suresh; Jones, Kenneth; Hassan, Hassan; Mcrae, David Scott

    1992-01-01

    The focus of research in the computational fluid dynamics (CFD) area is two fold: (1) to develop new approaches for turbulence modeling so that high speed compressible flows can be studied for applications to entry and re-entry flows; and (2) to perform research to improve CFD algorithm accuracy and efficiency for high speed flows. Research activities, faculty and student participation, publications, and financial information are outlined.

  17. Dynamic Load Balancing for Computational Plasticity on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Pramono, Eddy; Simon, Horst

    1994-01-01

    The simulation of computational plasticity on a complex structure remains a formidable computational task, especially when a highly nonlinear, complex material model is used. It appears that the computational requirements for such a problem can only be satisfied by massively parallel architectures. In order to effectively harness the tremendous computational power provided by such architectures, it is imperative to investigate and study the algorithmic and implementation issues pertaining to dynamic load balancing for computational plasticity on highly parallel, distributed-memory, multiple-instruction, multiple-data computers. This paper measures the effectiveness of the algorithms developed in handling dynamic load balancing.

  18. Computational reacting gas dynamics

    NASA Technical Reports Server (NTRS)

    Lam, S. H.

    1993-01-01

    In the study of high speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solutions, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified under three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'intractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full model. In other words, the present research aims to utilize computations to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed reacting CFD numerical results will become much easier. The approach of the research is called computational singular perturbation (CSP).

  19. Numerical Uncertainty Analysis for Computational Fluid Dynamics using Student T Distribution -- Application of CFD Uncertainty Analysis Compared to Exact Analytical Solution

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.

    2014-01-01

    Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-T distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-T distribution can encompass the exact solution.
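
    The statistical core of such an approach, a small-sample confidence interval from the Student-t distribution, looks like this on hypothetical CFD outputs (the sample values are made up):

      import numpy as np
      from scipy import stats

      # Pressure drop (Pa) from a few CFD runs with perturbed inputs (hypothetical).
      samples = np.array([412.0, 405.5, 417.3, 409.8, 414.1])
      n = samples.size
      mean, s = samples.mean(), samples.std(ddof=1)

      conf = 0.95
      t_crit = stats.t.ppf(0.5 + conf/2, df=n - 1)   # two-sided critical value
      half_width = t_crit * s / np.sqrt(n)
      print(f"{mean:.1f} +/- {half_width:.1f} Pa ({conf:.0%} confidence)")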

  20. GRIMD: distributed computing for chemists and biologists

    PubMed Central

    Piotto, Stefano; Biasi, Luigi Di; Concilio, Simona; Castiglione, Aniello; Cattaneo, Giuseppe

    2014-01-01

    Motivation: Biologists and chemists are facing problems of high computational complexity that require the use of several computers organized in clusters or in specialized grids. Examples of such problems can be found in molecular dynamics (MD), in silico screening, and genome analysis. Grid computing and cloud computing are becoming prevalent mainly because of their competitive performance/cost ratio. Regrettably, the diffusion of grid computing is strongly limited by two main factors: it is confined to scientists with a strong computer science background, and the analysis of the large amount of data produced can be cumbersome. We have developed a package named GRIMD to provide an easy and flexible implementation of distributed computing for the bioinformatics community. GRIMD is very easy to install and maintain, and it does not require any specific computer science skill. Moreover, it permits preliminary analysis on the distributed machines to reduce the amount of data to transfer. GRIMD is very flexible because it shields the typical computational biologist from the need to write specific code for tasks such as molecular dynamics or docking calculations. Furthermore, it permits an efficient use of GPU cards whenever possible. GRIMD calculations scale almost linearly and, therefore, exploit each machine in the network efficiently. Here, we provide a few examples of grid computing in computational biology (MD and docking) and bioinformatics (proteome analysis). Availability: GRIMD is available for free for noncommercial research at www.yadamp.unisa.it/grimd. Supplementary information: www.yadamp.unisa.it/grimd/howto.aspx PMID:24516326

  1. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM [1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  2. Simulation of the Velocity and Temperature Distribution of Inhalation Thermal Injury in a Human Upper Airway Model by Application of Computational Fluid Dynamics.

    PubMed

    Chang, Yang; Zhao, Xiao-zhuo; Wang, Cheng; Ning, Fang-gang; Zhang, Guo-an

    2015-01-01

    Inhalation injury is an important cause of death after thermal burns. This study was designed to simulate the velocity and temperature distribution of inhalation thermal injury in the upper airway in humans using computational fluid dynamics. Cervical computed tomography images of three Chinese adults were imported to Mimics software to produce three-dimensional models. After grids were established and boundary conditions were defined, the simulation time was set at 1 minute and the gas temperature was set to 80 to 320°C using ANSYS software (ANSYS, Canonsburg, PA) to simulate the velocity and temperature distribution of inhalation thermal injury. Cross-sections were cut at 2-mm intervals, and maximum airway temperature and velocity were recorded for each cross-section. The maximum velocity peaked in the lower part of the nasal cavity and then decreased with air flow. The velocities in the epiglottis and glottis were higher than those in the surrounding areas. Further, the maximum airway temperature decreased from the nasal cavity to the trachea. Computational fluid dynamics technology can be used to simulate the velocity and temperature distribution of inhaled heated air. PMID:25412055
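
    The post-processing step described (cross-sections every 2 mm, recording each section's maxima) reduces to a per-slice maximum once the fields are sampled per section. A sketch under an assumed array layout, not the study's code:

        import numpy as np

        n_sections, pts = 150, 400         # ~300 mm of airway at 2 mm spacing
        rng = np.random.default_rng(1)
        temperature = 37 + 80 * rng.random((n_sections, pts))  # degC, stand-in field
        velocity = 12 * rng.random((n_sections, pts))          # m/s, stand-in field

        max_T = temperature.max(axis=1)    # hottest point on each cross-section
        max_v = velocity.max(axis=1)
        for i in (0, 50, 100):
            print(f"section {i}: z={2*i} mm, Tmax={max_T[i]:.1f} C, vmax={max_v[i]:.2f} m/s")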

  3. Computational fluid dynamic applications

    SciTech Connect

    Chang, S.-L.; Lottes, S. A.; Zhou, C. Q.

    2000-04-03

    The rapid advancement of computational capability including speed and memory size has prompted the wide use of computational fluid dynamics (CFD) codes to simulate complex flow systems. CFD simulations are used to study the operating problems encountered in a system, to evaluate the impacts of operation/design parameters on the performance of a system, and to investigate novel design concepts. CFD codes are generally developed based on the conservation laws of mass, momentum, and energy that govern the characteristics of a flow. The governing equations are simplified and discretized for a selected computational grid system. Numerical methods are selected to calculate approximate flow properties. For turbulent, reacting, and multiphase flow systems the complex processes relating to these aspects of the flow, i.e., turbulent diffusion, combustion kinetics, interfacial drag and heat and mass transfer, etc., are described in mathematical models, based on a combination of fundamental physics and empirical data, that are incorporated into the code. CFD simulation has been applied to a large variety of practical and industrial scale flow systems.

  4. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: (1) aerospace system and component representation using a hierarchical object-oriented component model which enables the use of multimodels and enforces component interoperability; (2) a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and (3) development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process, and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.

  5. Distributed GPU Computing in GIScience

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

    Geoscientists strive to discover potential principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing amount of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, with GPU-based technology maturing in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU, as a compelling alternative microprocessor, has outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphics rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build up a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as specific graphics-targeted devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies Reference: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. Visualization and Computer Graphics, IEEE

  6. Implementation of a Phase Detection Algorithm for Dynamic Cardiac Computed Tomography Analysis Based on Time Dependent Contrast Agent Distribution

    PubMed Central

    Kendziorra, Carsten; Meyer, Henning; Dewey, Marc

    2014-01-01

    This paper presents a phase detection algorithm for four-dimensional (4D) cardiac computed tomography (CT) analysis. The algorithm detects a phase, i.e. a specific three-dimensional (3D) image out of several time-distributed 3D images, with high contrast in the left ventricle and low contrast in the right ventricle. The purpose is to use the automatically detected phase in an existing algorithm that automatically aligns the images along the heart axis. Decision making is based on the contrast agent distribution over time. It was implemented in KardioPerfusion – a software framework currently being developed for 4D CT myocardial perfusion analysis. Agreement of the phase detection algorithm with two reference readers was 97% (95% CI: 82–100%). Mean duration for detection was 0.020 s (95% CI: 0.018–0.022 s), substantially less than the time the readers needed. Thus, this algorithm is an accurate and fast tool that can improve the workflow of clinical examinations. PMID:25545863
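
    The selection rule described above can be stated in a few lines: pick the phase whose mean intensity is high in the left ventricle and low in the right ventricle. A sketch with assumed array layout and ROI masks (not the KardioPerfusion code):

        import numpy as np

        phases, shape = 20, (32, 64, 64)   # 4D CT: time x z x y x x
        rng = np.random.default_rng(2)
        ct = 100 * rng.random((phases, *shape))        # stand-in intensities
        lv_mask = np.zeros(shape, bool); lv_mask[:, 10:20, 10:20] = True
        rv_mask = np.zeros(shape, bool); rv_mask[:, 40:50, 40:50] = True

        lv = ct[:, lv_mask].mean(axis=1)   # LV enhancement over time
        rv = ct[:, rv_mask].mean(axis=1)   # RV enhancement over time
        best = int(np.argmax(lv - rv))     # high LV, low RV contrast
        print("selected phase:", best)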

  7. BESIII production with distributed computing

    NASA Astrophysics Data System (ADS)

    Zhang, X. M.; Yan, T.; Zhao, X. H.; Ma, Z. T.; Yan, X. F.; Lin, T.; Deng, Z. Y.; Li, W. D.; Belov, S.; Pelevanyuk, I.; Zhemchugov, A.; Cai, H.

    2015-12-01

    Distributed computing is necessary nowadays for high energy physics experiments to organize heterogeneous computing resources all over the world to process enormous amounts of data. The BESIII experiment in China has established its own distributed computing system, based on DIRAC, as a supplement to local clusters, collecting cluster, grid, desktop and cloud resources from collaborating member institutes around the world. The system consists of workload management and data management to deal with the BESIII Monte Carlo production workflow in a distributed environment. A dataset-based data transfer system has been developed to support data movements among sites. File and metadata management tools and a job submission frontend have been developed to provide a virtual layer for BESIII physicists to use distributed resources. Moreover, the paper presents the experience gained in coping with the limited grid expertise and manpower available within the BESIII community.

  8. Computational Fluid Dynamics Library

    Energy Science and Technology Software Center (ESTSC)

    2005-03-04

    CFDLib05 is the Los Alamos Computational Fluid Dynamics LIBrary. This is a collection of hydrocodes using a common data structure and a common numerical method, for problems ranging from single-field, incompressible flow, to multi-species, multi-field, compressible flow. The data structure is multi-block, with a so-called structured grid in each block. The numerical method is a Finite-Volume scheme employing a state vector that is fully cell-centered. This means that the integral form of the conservation laws is solved on the physical domain that is represented by a mesh of control volumes. The typical control volume is an arbitrary quadrilateral in 2D and an arbitrary hexahedron in 3D. The Finite-Volume scheme is for time-unsteady flow and remains well coupled by means of time and space centered fluxes; if a steady state solution is required, the problem is integrated forward in time until the user is satisfied that the state is stationary.
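
    A minimal sketch of the ingredients named above (not CFDLib itself): a cell-centered finite-volume update for 1D linear advection, marched in time until the state stops changing.

        import numpy as np

        n, c = 100, 1.0                    # cells, advection speed
        dx = 1.0 / n
        dt = 0.5 * dx / c                  # CFL-limited time step
        u = np.zeros(n); u[0] = 1.0        # cell-centered state vector

        for step in range(5000):
            flux = c * u                   # upwind flux carried by each cell
            # Conservative update: du/dt = -(F_in - F_out)/dx, inflow held at 1.
            u_new = u - dt / dx * (flux - np.roll(flux, 1))
            u_new[0] = 1.0
            if np.abs(u_new - u).max() < 1e-12:
                break                      # steady state reached
            u = u_new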

  9. Distributed computing at the SSCL

    SciTech Connect

    Cormell, L.; White, R.

    1993-05-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent, he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.

  10. Hydronic distribution system computer model

    SciTech Connect

    Andrews, J.W.; Strasser, J.J.

    1994-10-01

    A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley Laboratory (LBL). This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.

  11. Biomolecular dynamics by computer analysis

    SciTech Connect

    Eilbeck, J.C.; Lomdahl, P.S.; Scott, A.C.

    1984-01-01

    As numerical tools (computers and display equipment) become more powerful and the atomic structures of important biological molecules become known, the importance of detailed computation of nonequilibrium biomolecular dynamics increases. In this manuscript we report results from a well developed study of the hydrogen bonded polypeptide crystal acetanilide, a model protein. Directions for future research are suggested. 9 references, 6 figures.

  12. Computational aspects of multibody dynamics

    NASA Technical Reports Server (NTRS)

    Park, K. C.

    1989-01-01

    Computational aspects are addressed which impact the requirements for developing a next generation software system for flexible multibody dynamics simulation. These include: criteria for selecting candidate formulations, pairing of formulations with appropriate solution procedures, the need for concurrent algorithms to utilize computer hardware advances, and provisions for allowing open-ended yet modular analysis modules.

  13. ATLAS Distributed Computing in LHC Run2

    NASA Astrophysics Data System (ADS)

    Campana, Simone

    2015-12-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. Flexible computing utilization exploring the use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defined to better manage the lifecycle of the data. In this note, an overview of operational experience with the new system and its evolution is presented.

  14. Computer animation challenges for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine

    2012-07-01

    Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.

  15. Computational Workbench for Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2007-01-01

    PyCraft is a computer program that provides an interactive, workbench-like computing environment for developing and testing algorithms for multibody dynamics. Examples of multibody dynamic systems amenable to analysis with the help of PyCraft include land vehicles, spacecraft, robots, and molecular models. PyCraft is based on the Spatial-Operator-Algebra (SOA) formulation for multibody dynamics. The SOA operators enable construction of simple and compact representations of complex multibody dynamical equations. Within the PyCraft computational workbench, users can, essentially, use the high-level SOA operator notation to represent the variety of dynamical quantities and algorithms and to perform computations interactively. PyCraft provides a Python-language interface to underlying C++ code. Working with SOA concepts, a user can create and manipulate Python-level operator classes in order to implement and evaluate new dynamical quantities and algorithms. During use of PyCraft, virtually all SOA-based algorithms are available for computational experiments.

  16. Overlapping clusters for distributed computation.

    SciTech Connect

    Mirrokni, Vahab; Andersen, Reid; Gleich, David F.

    2010-11-01

    Scalable, distributed algorithms must address communication problems. We investigate overlapping clusters, or vertex partitions that intersect, for graph computations. This setup stores more of the graph than required but then affords the ease of implementation of vertex partitioned algorithms. Our hope is that this technique allows us to reduce communication in a computation on a distributed graph. The motivation above draws on recent work in communication avoiding algorithms. Mohiyuddin et al. (SC09) design a matrix-powers kernel that gives rise to an overlapping partition. Fritzsche et al. (CSC2009) develop an overlapping clustering for a Schwarz method. Both techniques extend an initial partitioning with overlap. Our procedure generates overlap directly. Indeed, Schwarz methods are commonly used to capitalize on overlap. Elsewhere, overlapping communities (Ahn et al., Nature 2009; Mishra et al., WAW2007) are now a popular model of structure in social networks. These have long been studied in statistics (Cole and Wishart, CompJ 1970). We present two types of results: (i) an estimated swapping probability ρ∞; and (ii) the communication volume of a parallel PageRank solution (link-following α = 0.85) using an additive Schwarz method. The volume ratio is the amount of extra storage for the overlap (2 means we store the graph twice). Below, as the ratio increases, the swapping probability and PageRank communication volume decreases.
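
    For reference, the PageRank iteration mentioned above is a few lines of linear algebra. This single-machine sketch uses the same link-following probability α = 0.85; the paper's contribution, the overlapping partitioning, is not reproduced here, and the small graph is made up.

        import numpy as np

        A = np.array([[0, 1, 1, 0],        # adjacency: row i links to column j
                      [0, 0, 1, 0],
                      [1, 0, 0, 1],
                      [0, 0, 1, 0]], float)
        P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
        alpha, n = 0.85, A.shape[0]

        x = np.full(n, 1.0 / n)
        for _ in range(100):
            x_new = alpha * x @ P + (1 - alpha) / n
            if np.abs(x_new - x).sum() < 1e-12:
                break
            x = x_new
        print("PageRank:", x)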

  17. Hybrid Human-Computing Distributed Sense-Making: Extending the SOA Paradigm for Dynamic Adjudication and Optimization of Human and Computer Roles

    ERIC Educational Resources Information Center

    Rimland, Jeffrey C.

    2013-01-01

    In many evolving systems, inputs can be derived from both human observations and physical sensors. Additionally, many computation and analysis tasks can be performed by either human beings or artificial intelligence (AI) applications. For example, weather prediction, emergency event response, assistive technology for various human sensory and…

  18. Molecular dynamics on vector computers

    NASA Astrophysics Data System (ADS)

    Sullivan, F.; Mountain, R. D.; Oconnell, J.

    1985-10-01

    An algorithm called the method of lights (MOL) has been developed for the computerized simulation of molecular dynamics. The MOL, implemented on the CYBER 205 computer, is based on sorting and reformulating the manner in which neighbor lists are compiled, and it uses data structures compatible with specialized vector statements that perform parallel computations. The MOL is found to reduce running time over standard methods in scalar form, and vectorization is shown to produce an order-of-magnitude reduction in execution time.
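
    In the same spirit as a sort-based neighbor search (the actual method-of-lights bookkeeping is more elaborate), the sketch below sorts particles along one axis and only scans forward while the 1D separation is within the cutoff; the particle count and cutoff are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        pos = 10.0 * rng.random((500, 3))  # particle coordinates
        rc = 1.0                           # interaction cutoff

        order = np.argsort(pos[:, 0])      # sort on x once per list rebuild
        p = pos[order]
        pairs = []
        for i in range(len(p)):
            j = i + 1
            while j < len(p) and p[j, 0] - p[i, 0] < rc:   # prune via sorted axis
                if np.sum((p[j] - p[i]) ** 2) < rc * rc:
                    pairs.append((order[i], order[j]))
                j += 1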

  19. Computational Fluid Dynamics Modeling of The Dalles Project: Effects of Spill Flow Distribution Between the Washington Shore and the Tailrace Spillwall

    SciTech Connect

    Rakowski, Cynthia L.; Serkowski, John A.; Richmond, Marshall C.

    2010-12-01

    The U.S. Army Corps of Engineers-Portland District (CENWP) has ongoing work to improve the survival of juvenile salmonids (smolt) migrating past The Dalles Dam. As part of that effort, a spillwall was constructed to improve juvenile egress through the tailrace downstream of the stilling basin. The spillwall was designed to improve smolt survival by decreasing smolt retention time in the spillway tailrace and the exposure to predators on the spillway shelf. The spillwall guides spillway flows, and hence smolt, more quickly into the thalweg. In this study, an existing computational fluid dynamics (CFD) model was modified and used to characterize tailrace hydraulics between the new spillwall and the Washington shore for six different total river flows. The effect of spillway flow distribution was simulated for three spill patterns at the lowest total river flow. The commercial CFD solver, STAR-CD version 4.1, was used to solve the unsteady Reynolds-averaged Navier-Stokes equations together with the k-epsilon turbulence model. Free surface motion was simulated using the volume-of-fluid (VOF) technique. The model results were used in two ways. First, results graphics were provided to CENWP and regional fisheries agency representatives for use and comparison to the same flow conditions at a reduced-scale physical model. The CFD results were very similar in flow pattern to that produced by the reduced-scale physical model but these graphics provided a quantitative view of velocity distribution. During the physical model work, an additional spill pattern was tested. Subsequently, that spill pattern was also simulated in the numerical model. The CFD streamlines showed that the hydraulic conditions were likely to be beneficial to fish egress at the higher total river flows (120 kcfs and greater, uniform flow distribution). At the lowest flow case, 90 kcfs, it was necessary to use a non-uniform distribution. Of the three distributions tested, splitting the flow evenly between

  20. Analog computation with dynamical systems

    NASA Astrophysics Data System (ADS)

    Siegelmann, Hava T.; Fishman, Shmuel

    1998-09-01

    Physical systems exhibit various levels of complexity: their long term dynamics may converge to fixed points or exhibit complex chaotic behavior. This paper presents a theory that makes it possible to interpret natural processes as special purpose analog computers. Since physical systems are naturally described in continuous time, a definition of computational complexity for continuous time systems is required. In analogy with the classical discrete theory we develop fundamentals of computational complexity for dynamical systems, discrete or continuous in time, on the basis of an intrinsic time scale of the system. Dissipative dynamical systems are classified into the computational complexity classes P_d, Co-RP_d, NP_d and EXP_d, corresponding to their standard counterparts, according to the complexity of their long term behavior. The complexity of chaotic attractors relative to regular ones leads to the conjecture P_d ≠ NP_d. Continuous time flows have been proven useful in solving various practical problems. Our theory provides the tools for an algorithmic analysis of such flows. As an example we analyze the continuous Hopfield network.
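
    A numerical sketch of the kind of flow analyzed above: a continuous Hopfield network integrated with forward Euler. With a symmetric weight matrix the trajectory settles onto a fixed point, the "answer" of the analog computation; the network size and step size here are assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        W = rng.standard_normal((8, 8))
        W = (W + W.T) / 2                  # symmetric couplings
        np.fill_diagonal(W, 0)
        x = rng.standard_normal(8)

        dt = 0.01
        for _ in range(200000):
            dx = -x + W @ np.tanh(x)       # Hopfield dynamics dx/dt
            x = x + dt * dx
            if np.abs(dx).max() < 1e-9:    # converged to an attractor
                break
        print("fixed point:", np.round(np.tanh(x), 3))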

  1. Cooperative Autonomic Management in Dynamic Distributed Systems

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Zhao, Ming; Fortes, José A. B.

    The centralized management of large distributed systems is often impractical, particularly when both the topology and the status of the system change dynamically. This paper proposes an approach to application-centric self-management in large distributed systems consisting of a collection of autonomic components that join and leave the system dynamically. Cooperative autonomic components self-organize into a dynamically created overlay network. Through local information sharing with neighbors, each component gains access to global information as needed for optimizing the performance of applications. The approach has been validated and evaluated by developing a decentralized autonomic system consisting of multiple autonomic application managers previously developed for the In-VIGO grid-computing system. Using analytical results from complex random network theory and measurements done in a prototype system, we demonstrate the robustness, self-organization and adaptability of our approach, both theoretically and experimentally.

  2. Dynamic computing random access memory.

    PubMed

    Traversa, F L; Bonani, F; Pershin, Y V; Di Ventra, M

    2014-07-18

    The present von Neumann computing paradigm involves a significant amount of information transfer between a central processing unit and memory, with concomitant limitations in the actual execution speed. However, it has been recently argued that a different form of computation, dubbed memcomputing (Di Ventra and Pershin 2013 Nat. Phys. 9 200-2) and inspired by the operation of our brain, can resolve the intrinsic limitations of present day architectures by allowing for computing and storing of information on the same physical platform. Here we show a simple and practical realization of memcomputing that utilizes easy-to-build memcapacitive systems. We name this architecture dynamic computing random access memory (DCRAM). We show that DCRAM provides massively-parallel and polymorphic digital logic, namely it allows for different logic operations with the same architecture, by varying only the control signals. In addition, by taking into account realistic parameters, its energy expenditures can be as low as a few fJ per operation. DCRAM is fully compatible with CMOS technology, can be realized with current fabrication facilities, and therefore can really serve as an alternative to the present computing technology. PMID:24972387

  3. Dynamic computing random access memory

    NASA Astrophysics Data System (ADS)

    Traversa, F. L.; Bonani, F.; Pershin, Y. V.; Di Ventra, M.

    2014-07-01

    The present von Neumann computing paradigm involves a significant amount of information transfer between a central processing unit and memory, with concomitant limitations in the actual execution speed. However, it has been recently argued that a different form of computation, dubbed memcomputing (Di Ventra and Pershin 2013 Nat. Phys. 9 200-2) and inspired by the operation of our brain, can resolve the intrinsic limitations of present day architectures by allowing for computing and storing of information on the same physical platform. Here we show a simple and practical realization of memcomputing that utilizes easy-to-build memcapacitive systems. We name this architecture dynamic computing random access memory (DCRAM). We show that DCRAM provides massively-parallel and polymorphic digital logic, namely it allows for different logic operations with the same architecture, by varying only the control signals. In addition, by taking into account realistic parameters, its energy expenditures can be as low as a few fJ per operation. DCRAM is fully compatible with CMOS technology, can be realized with current fabrication facilities, and therefore can really serve as an alternative to the present computing technology.

  4. Distributed Computing at Belle II

    NASA Astrophysics Data System (ADS)

    Bansal, Vikas; Belle Collaboration, II

    2016-03-01

    The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start physics data taking in 2018 and will accumulate 50 ab-1 of e+e- collision data, about 50 times larger than the data set of the earlier Belle experiment. The computing requirements of Belle II are comparable to those of a RUN I high-pT LHC experiment. Computing will make full use of high speed networking and of the Computing Grids in North America, Asia and Europe. Results of an initial MC simulation campaign with 5 ab-1 equivalent luminosity will be described.

  5. Distributed computing and nuclear reactor analysis

    SciTech Connect

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-03-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations.

  6. Computational Methods for Structural Mechanics and Dynamics

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  7. Visualization of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e. real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components including specific visualization techniques and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.

  8. Next generation distributed computing for cancer research.

    PubMed

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing. PMID:25983539
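
    As a toy illustration of the MapReduce pattern that Hadoop implements (a pure Python stand-in, not Hadoop itself), the sketch below counts k-mers across sequencing reads via independent map tasks and a keyed reduce; the reads and k are made up.

        from collections import Counter
        from functools import reduce

        reads = ["ACGTAC", "GTACGT", "ACGTTT"]     # stand-in NGS reads

        def map_task(read, k=3):
            # Emit (k-mer, count) pairs for one read, independently of others.
            return Counter(read[i:i + k] for i in range(len(read) - k + 1))

        def reduce_task(a, b):
            a.update(b)                            # merge partial counts by key
            return a

        kmer_counts = reduce(reduce_task, (map_task(r) for r in reads), Counter())
        print(kmer_counts.most_common(3))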

  9. Next Generation Distributed Computing for Cancer Research

    PubMed Central

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing. PMID:25983539

  10. Evaluation of distributed computing tools

    SciTech Connect

    Stanberry, L.

    1992-10-28

    The original goal stated in the collaboration agreement from LCC's perspective was "to show that networking tools available in UNICOS perform well enough to meet the requirements of LCC customers." This translated into evaluating how easy it was to port ELROS over CRI's ISO 2.0, which itself is a port of ISODE to the Cray. In addition we tested the interoperability of ELROS and ISO 2.0 programs running on the Cray, and communicating with each other, and with servers or clients running on other machines. To achieve these goals from LCC's side, we ported ELROS to the Cray, and also obtained and installed a copy of the ISO 2.0 distribution from CRI. CRI's goal for the collaboration was to evaluate the usability of ELROS. In particular, we were interested in their potential feedback on the use of ELROS in implementing ISO protocols--whether ELROS would be easier to use and perform better than other tools that form part of the standard ISODE system. To help achieve these goals for CRI, we provided them with a distribution tar file containing the ELROS system, once we had completed our port of ELROS to the Cray.

  11. Evaluation of distributed computing tools

    SciTech Connect

    Stanberry, L.

    1992-10-28

    The original goal stated in the collaboration agreement from LCC's perspective was "to show that networking tools available in UNICOS perform well enough to meet the requirements of LCC customers." This translated into evaluating how easy it was to port ELROS over CRI's ISO 2.0, which itself is a port of ISODE to the Cray. In addition we tested the interoperability of ELROS and ISO 2.0 programs running on the Cray, and communicating with each other, and with servers or clients running on other machines. To achieve these goals from LCC's side, we ported ELROS to the Cray, and also obtained and installed a copy of the ISO 2.0 distribution from CRI. CRI's goal for the collaboration was to evaluate the usability of ELROS. In particular, we were interested in their potential feedback on the use of ELROS in implementing ISO protocols--whether ELROS would be easier to use and perform better than other tools that form part of the standard ISODE system. To help achieve these goals for CRI, we provided them with a distribution tar file containing the ELROS system, once we had completed our port of ELROS to the Cray.

  12. A Software Rejuvenation Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chau, Savio

    2009-01-01

    A performability-oriented conceptual framework for software rejuvenation has been constructed as a means of increasing levels of reliability and performance in distributed stateful computing. As used here, performability-oriented signifies that the construction of the framework is guided by the concept of analyzing the ability of a given computing system to deliver services with gracefully degradable performance. The framework is especially intended to support applications that involve stateful replicas of server computers.

  13. Distributed Dynamic State Estimation with Extended Kalman Filter

    SciTech Connect

    Du, Pengwei; Huang, Zhenyu; Sun, Yannan; Diao, Ruisheng; Kalsi, Karanjit; Anderson, Kevin K.; Li, Yulan; Lee, Barry

    2011-08-04

    Increasing complexity associated with large-scale renewable resources and novel smart-grid technologies necessitates real-time monitoring and control. Our previous work applied the extended Kalman filter (EKF) with the use of phasor measurement unit (PMU) data for dynamic state estimation. However, high computational complexity creates significant challenges for real-time applications. In this paper, the problem of distributed dynamic state estimation is investigated. One domain decomposition method is proposed to utilize decentralized computing resources. The performance of distributed dynamic state estimation is tested on a 16-machine, 68-bus test system.
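
    For orientation, a generic extended Kalman filter step is sketched below (not the paper's power-system model); f and h are assumed nonlinear dynamics and measurement functions with Jacobians F and H evaluated at the current estimate.

        import numpy as np

        def ekf_step(x, P, z, f, F, h, H, Q, R):
            # Predict the state and covariance through the nonlinear dynamics.
            x_pred = f(x)
            Fk = F(x)
            P_pred = Fk @ P @ Fk.T + Q
            # Update with the measurement z (e.g., PMU phasors).
            Hk = H(x_pred)
            S = Hk @ P_pred @ Hk.T + R                       # innovation covariance
            K = P_pred @ Hk.T @ np.linalg.solve(S, np.eye(S.shape[0]))
            x_new = x_pred + K @ (z - h(x_pred))
            P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
            return x_new, P_new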

  14. Traffic Dynamics of Computer Networks

    NASA Astrophysics Data System (ADS)

    Fekete, Attila

    2008-10-01

    Two important aspects of the Internet, namely the properties of its topology and the characteristics of its data traffic, have attracted growing attention of the physics community. My thesis has considered problems of both aspects. First I studied the stochastic behavior of TCP, the primary algorithm governing traffic in the current Internet, in an elementary network scenario consisting of a standalone infinite-sized buffer and an access link. The effect of the fast recovery and fast retransmission (FR/FR) algorithms is also considered. I showed that my model can be extended further to involve the effect of link propagation delay, characteristic of WAN. I continued my thesis with the investigation of finite-sized semi-bottleneck buffers, where packets can be dropped not only at the link, but also at the buffer. I demonstrated that the behavior of the system depends only on a certain combination of the parameters. Moreover, an analytic formula was derived that gives the ratio of packet loss rate at the buffer to the total packet loss rate. This formula makes it possible to treat buffer-losses as if they were link-losses. Finally, I studied computer networks from a structural perspective. I demonstrated through fluid simulations that the distribution of resources, specifically the link bandwidth, has a serious impact on the global performance of the network. Then I analyzed the distribution of edge betweenness in a growing scale-free tree under the condition that a local property, the in-degree of the "younger" node of an arbitrary edge, is known in order to find an optimum distribution of link capacity. The derived formula is exact even for finite-sized networks. I also calculated the conditional expectation of edge betweenness, rescaled for infinite networks.

  15. Research on Computational Fluid Dynamics and Turbulence

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Preconditioning matrices for Chebyshev derivative operators in several space dimensions; the Jacobi matrix technique in computational fluid dynamics; and Chebyshev techniques for periodic problems are discussed.

  16. Parallel Computation Of Forward Dynamics Of Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1993-01-01

    Report presents parallel algorithms and special parallel architecture for computation of forward dynamics of robotics manipulators. Products of effort to find best method of parallel computation to achieve required computational efficiency. Significant speedup of computation anticipated as well as cost reduction.

  17. Object-oriented Tools for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1993-01-01

    Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.

  18. Distributed Real-Time Computing with Harness

    SciTech Connect

    Di Saverio, Emanuele; Cesati, Marco; Di Biagio, Christian; Pennella, Guido; Engelmann, Christian

    2007-01-01

    Modern parallel and distributed computing solutions are often built on a "middleware" software layer providing a higher and common level of service between computational nodes. Harness is an adaptable, plugin-based middleware framework for parallel and distributed computing. This paper reports recent research and development results on using Harness for real-time distributed computing applications in the context of an industrial environment with the need to perform several safety-critical tasks. The presented work exploits the modular architecture of Harness in conjunction with a lightweight threaded implementation to resolve several real-time issues by adding three new Harness plug-ins to provide a prioritized lightweight execution environment, low latency communication facilities, and local timestamped event logging.

  19. Predictive Dynamic Security Assessment through Advanced Computing

    SciTech Connect

    Huang, Zhenyu; Diao, Ruisheng; Jin, Shuangshuang; Chen, Yousu

    2014-11-30

    Traditional dynamic security assessment is limited by several factors and thus falls short in providing real-time information to be predictive for power system operation. These factors include the steady-state assumption of current operating points, static transfer limits, and low computational speed. This paper addresses these factors and frames predictive dynamic security assessment. The primary objective of predictive dynamic security assessment is to enhance the functionality and computational process of dynamic security assessment through the use of high-speed phasor measurements and the application of advanced computing technologies for faster-than-real-time simulation. This paper presents algorithms, computing platforms, and simulation frameworks that constitute the predictive dynamic security assessment capability. Examples of phasor application and fast computation for dynamic security assessment are included to demonstrate the feasibility and speed enhancement for real-time applications.

  20. Dance Dynamics: Computers and Dance.

    ERIC Educational Resources Information Center

    Gray, Judith A., Ed.; And Others

    1983-01-01

    Five articles discuss the use of computers in dance and dance education. They describe: (1) a computerized behavioral profile of a dance teacher; (2) computer-based dance notation; (3) elementary school computer-assisted dance instruction; (4) quantified analysis of dance criticism; and (5) computerized simulation of human body movements in a…

  1. High-performance computing and distributed systems

    SciTech Connect

    Loken, S.C.; Greiman, W.; Jacobson, V.L.; Johnston, W.E.; Robertson, D.W.; Tierney, B.L.

    1992-09-01

    We present a scenario for a fully distributed computing environment in which computing, storage, and I/O elements are configured on demand into "virtual systems" that are optimal for the solution of a particular problem. We also present two pilot projects that illustrate some of the elements and issues of this scenario. The goal of this work is to make the most powerful computing systems those that are logically assembled from network based components, and to make those systems available independent of the geographic location of the constituent elements.

  2. High-performance computing and distributed systems

    SciTech Connect

    Loken, S.C.; Greiman, W.; Jacobson, V.L.; Johnston, W.E.; Robertson, D.W.; Tierney, B.L.

    1992-09-01

    We present a scenario for a fully distributed computing environment in which computing, storage, and I/O elements are configured on demand into "virtual systems" that are optimal for the solution of a particular problem. We also present two pilot projects that illustrate some of the elements and issues of this scenario. The goal of this work is to make the most powerful computing systems those that are logically assembled from network based components, and to make those systems available independent of the geographic location of the constituent elements.

  3. Data Integration in Computer Distributed Systems

    NASA Astrophysics Data System (ADS)

    Kwiecień, Błażej

    In this article the author analyzes the problem of data integration in computer distributed systems. The exchange of information between different levels in the integrated pyramid of an enterprise process is fundamental to efficient enterprise operation. Communication and data exchange between levels are not always uniform, because of the necessity of using different network protocols, communication media, system response times, etc.

  4. Great Expectations: Distributed Financial Computing at Cornell.

    ERIC Educational Resources Information Center

    Schulden, Louise; Sidle, Clint

    1988-01-01

    The Cornell University Distributed Accounting (CUDA) system is an attempt to provide departments a software tool for better managing their finances, creating microcomputer standards, creating a vehicle for better administrative microcomputer support, and ensuring local systems are consistent with central computer systems. (Author/MLW)

  5. Computer Systems for Distributed and Distance Learning.

    ERIC Educational Resources Information Center

    Anderson, M.; Jackson, David

    2000-01-01

    Discussion of network-based learning focuses on a survey of computer systems for distributed and distance learning. Both Web-based systems and non-Web-based systems are reviewed in order to highlight some of the major trends of past projects and to suggest ways in which progress may be made in the future. (Contains 92 references.) (Author/LRW)

  6. Vectorization of computer programs with applications to computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Gentzsch, W.

    Techniques for adapting serial computer programs to the architecture of modern vector computers are presented and illustrated with examples, mainly from the field of computational fluid dynamics. The limitations of conventional computers are reviewed; the vector computers CRAY-1S and CDC-CYBER 205 are characterized; and chapters are devoted to vectorization of FORTRAN programs, sample-program vectorization on five different vector and parallel-architecture computers, restructuring of basic linear-algebra algorithms, iterative methods, vectorization of simple numerical algorithms, and fluid-dynamics vectorization on CRAY-1 (including an implicit beam and warming scheme, an implicit finite-difference method for laminar boundary-layer equations, the Galerkin method and a direct Monte Carlo simulation). Diagrams, charts, tables, and photographs are provided.

  7. Distributed Computing Framework for Synthetic Radar Application

    NASA Technical Reports Server (NTRS)

    Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael

    2006-01-01

    We are developing an extensible software framework, in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech), and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.
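
    A sketch of the framework idea described above (hypothetical classes, not the Pyre API): processing stages are interchangeable objects wired into a data-flow graph, so a radar pipeline is just an ordered list of components.

        class Component:
            def process(self, data):
                raise NotImplementedError

        class RangeCompress(Component):
            def process(self, data):
                return [x * 2 for x in data]       # stand-in for matched filtering

        class Detect(Component):
            def process(self, data):
                return [x for x in data if x > 3]  # stand-in thresholding

        def run_pipeline(stages, data):
            for stage in stages:                   # each stage feeds the next
                data = stage.process(data)
            return data

        print(run_pipeline([RangeCompress(), Detect()], [1, 2, 3]))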

  8. Research computing in a distributed cloud environment

    NASA Astrophysics Data System (ADS)

    Fransham, K.; Agarwal, A.; Armstrong, P.; Bishop, A.; Charbonneau, A.; Desmarais, R.; Hill, N.; Gable, I.; Gaudet, S.; Goliath, S.; Impey, R.; Leavett-Brown, C.; Ouellete, J.; Paterson, M.; Pritchet, C.; Penfold-Brown, D.; Podaima, W.; Schade, D.; Sobie, R. J.

    2010-11-01

    The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.

  9. Distributed computing system with dual independent communications paths between computers and employing split tokens

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic load balancing. The system comprises a plurality of computers each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces for providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces providing each computer with the ability to establish a communications link with another of the computers bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communications between respective ones of the computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers and wherein the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message whereby collisions between messages are detected and avoided.
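
    The split-token idea can be pictured as a small data structure (the names below are invented for illustration): the moving part travels from computer to computer and carries the address of the resident part, which stays in some node's memory.

        from dataclasses import dataclass

        @dataclass
        class ResidentPart:
            node_id: int                   # which computer's memory holds it
            payload: bytes                 # data used when the function executes

        @dataclass
        class MovingPart:
            function: str                  # operation the token requests
            resident_node: int             # location of the resident part
            resident_key: int              # lookup key within that node's store

        store = {7: ResidentPart(node_id=2, payload=b"state")}   # node 2's memory
        token = MovingPart(function="integrate", resident_node=2, resident_key=7)
        resident = store[token.resident_key]   # receiver resolves the second portion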

  10. Distributed Storage Systems for Data Intensive Computing

    SciTech Connect

    Vazhkudai, Sudharshan S; Butt, Ali R; Ma, Xiaosong

    2012-01-01

    In this chapter, the authors present an overview of the utility of distributed storage systems in supporting modern applications that are increasingly becoming data intensive. Their coverage of distributed storage systems is based on the requirements imposed by data intensive computing and not a mere summary of storage systems. To this end, they delve into several aspects of supporting data-intensive analysis, such as data staging, offloading, checkpointing, and end-user access to terabytes of data, and illustrate the use of novel techniques and methodologies for realizing distributed storage systems therein. The data deluge from scientific experiments, observations, and simulations is affecting all of the aforementioned day-to-day operations in data-intensive computing. Modern distributed storage systems employ techniques that can help improve application performance, alleviate I/O bandwidth bottleneck, mask failures, and improve data availability. They present key guiding principles involved in the construction of such storage systems, associated tradeoffs, design, and architecture, all with an eye toward addressing challenges of data-intensive scientific applications. They highlight the concepts involved using several case studies of state-of-the-art storage systems that are currently available in the data-intensive computing landscape.

  11. Fluid dynamics computer programs for NERVA turbopump

    NASA Technical Reports Server (NTRS)

    Brunner, J. J.

    1972-01-01

    During the design of the NERVA turbopump, numerous computer programs were developed for the analyses of fluid dynamic problems within the machine. Program descriptions, example cases, users instructions, and listings for the majority of these programs are presented.

  12. Comparing the Performance of Two Dynamic Load Distribution Methods

    NASA Technical Reports Server (NTRS)

    Kale, L. V.

    1987-01-01

    Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: To effectively utilize the available processors, the load must be distributed uniformly to all the processors. However, the structure of these computations cannot be predicted in advance. So, static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods with extensive simulation studies. The two schemes are: the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that although simpler, the CWN is significantly more effective at distributing the work than the Gradient model.

  13. A Computational Fluid Dynamics Algorithm on a Massively Parallel Computer

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. We investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. We find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as today's most powerful conventional supercomputers.

  14. Distributed computation of supremal conditionally controllable sublanguages

    NASA Astrophysics Data System (ADS)

    Komenda, Jan; Masopust, Tomáš

    2016-02-01

    In this paper, we further develop the coordination control framework for discrete-event systems with both complete and partial observations. First, a weaker sufficient condition for the computation of the supremal conditionally controllable sublanguage and conditionally normal sublanguage is presented. Then we show that this condition can be imposed by synthesising a posteriori supervisors. The paper further generalises the previous study by considering general, non-prefix-closed languages. Moreover, we prove that for prefix-closed languages the supremal conditionally controllable sublanguage and conditionally normal sublanguage can always be computed in a distributed way, without any of the restrictive conditions we have used in the past.

  15. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment), that contains a collection of tools wrapped up into a user-friendly environment. The CAINE forensic framework introduces important novel features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that guides digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  16. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2003-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.
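
    To see why such derivatives appear only in combinations (in generic small-amplitude notation, assumed here rather than taken from the paper): in a forced pitch oscillation about a fixed axis, the angle of attack and pitch rate vary together, so the out-of-phase part of the measured moment mixes two derivatives,

        \[
        \alpha(t) = \alpha_0 + \Delta\alpha \sin\omega t, \qquad
        C_m(t) \approx \bar{C}_m
          + \Delta\alpha\, C_{m_\alpha} \sin\omega t
          + \Delta\alpha\, k \left( C_{m_q} + C_{m_{\dot\alpha}} \right) \cos\omega t,
        \qquad k = \frac{\omega \bar{c}}{2V},
        \]

    neglecting higher-order terms in the reduced frequency k. The test therefore determines only the sum in parentheses, which is what the proposed techniques attempt to separate.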

  17. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2004-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  18. COLD-SAT Dynamic Model Computer Code

    NASA Technical Reports Server (NTRS)

    Bollenbacher, G.; Adams, N. S.

    1995-01-01

    COLD-SAT Dynamic Model (CSDM) computer code implements six-degree-of-freedom, rigid-body mathematical model for simulation of spacecraft in orbit around Earth. Investigates flow dynamics and thermodynamics of subcritical cryogenic fluids in microgravity. Consists of three parts: translation model, rotation model, and slosh model. Written in FORTRAN 77.
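
    A six-degree-of-freedom rigid-body model of the kind CSDM implements reduces to Newton's second law for translation plus Euler's equations for rotation. A minimal Python sketch under those assumptions (principal-axis inertias, simple Euler integration; an illustration, not the FORTRAN 77 code itself):

        import numpy as np

        def sixdof_step(r, v, w, inertia, force, torque, mass, dt):
            """Advance position r, velocity v, and body angular rate w by dt.
            `inertia` holds the three principal moments of inertia."""
            # translation: Newton's second law
            v = v + force / mass * dt
            r = r + v * dt
            # rotation: Euler's equations about the principal axes
            i1, i2, i3 = inertia
            wdot = np.array([
                (torque[0] - (i3 - i2) * w[1] * w[2]) / i1,
                (torque[1] - (i1 - i3) * w[2] * w[0]) / i2,
                (torque[2] - (i2 - i1) * w[0] * w[1]) / i3,
            ])
            w = w + wdot * dt
            return r, v, w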

  19. Subtlenoise: sonification of distributed computing operations

    NASA Astrophysics Data System (ADS)

    Love, P. A.

    2015-12-01

    The operation of distributed computing systems requires comprehensive monitoring to ensure reliability and robustness. Two components are found in most monitoring systems: visually rich time-series graphs, and notification systems that alert operators under certain pre-defined conditions. In this paper the sonification of monitoring messages is explored, using an architecture that fits easily within existing infrastructures and builds on mature open-source technologies such as ZeroMQ, Logstash, and SuperCollider (a synth engine). Message attributes are mapped onto audio attributes based on a broad classification of the message (continuous or discrete metrics), while keeping the audio stream subtle in nature. The benefits of audio rendering are described in the context of distributed computing operations; they may provide a less intrusive way to understand the operational health of these systems.
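
    The mapping layer of such a system can be very small. The sketch below shows one plausible message-to-audio mapping in Python; the field names and the mapping rules are assumptions for illustration, not the paper's schema, and a real system would send the result to a synth engine such as SuperCollider rather than print it.

        def to_audio(msg):
            """Map one monitoring message onto audio attributes."""
            # continuous metrics get a lower register than discrete events
            base_hz = 220.0 if msg["kind"] == "continuous" else 440.0
            deviation = min(max(msg["deviation"], -0.5), 0.5)
            return {
                "freq_hz": base_hz * (1.0 + deviation),   # pitch tracks deviation
                "amp": {"info": 0.1, "warn": 0.25, "error": 0.5}[msg["level"]],
                "dur_s": 0.2,                             # short, subtle events
            }

        print(to_audio({"kind": "continuous", "deviation": 0.1, "level": "warn"}))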

  20. Radar data processing using a distributed computational system

    NASA Astrophysics Data System (ADS)

    Mota, Gilberto F.

    1992-06-01

    This research specifies and validates a new concurrent decomposition scheme, called Confined Space Search Decomposition (CSSD), to exploit parallelism of Radar Data Processing algorithms using a Distributed Computational System. To formalize the specification, we propose and apply an object-oriented methodology called Decomposition Cost Evaluation Model (DCEM). To reduce the penalties of load imbalance, we propose a distributed dynamic load balance heuristic called Object Reincarnation (OR). To validate the research, we first compare our decomposition with an identified alternative using the proposed DCEM model and then develop a theoretical prediction of selected parameters. We also develop a simulation to check the Object Reincarnation Concept.

  1. Beyond input-output computings: error-driven emergence with parallel non-distributed slime mold computer.

    PubMed

    Aono, Masashi; Gunji, Yukio-Pegio

    2003-10-01

    The emergence that derives from errors is of key importance both for novel computing and for novel uses of the computer. In this paper, we propose an implementable experimental plan for biological computing that elicits the emergent properties of complex systems. An individual plasmodium of the true slime mold Physarum polycephalum acts as the slime mold computer. Modifying an elementary cellular automaton so that it entails the global synchronization problem of parallel computing provides an NP-complete problem for the slime mold computer to solve. The possibility of solving the problem without supplying either all possible results or an explicit prescription for seeking solutions is discussed. In slime mold computing, the distributivity of the local computing logic can change dynamically, and its parallel non-distributed computing cannot be reduced to the spatial addition of multiple serial computations. A computing system based on the exhaustive absence of a super-system may produce something more than merely filling the vacancy. PMID:14563567

  2. Fast Parallel Computation Of Manipulator Inverse Dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    Method for fast parallel computation of inverse dynamics problem, essential for real-time dynamic control and simulation of robot manipulators, undergoing development. Enables exploitation of high degree of parallelism and achievement of significant computational efficiency, while minimizing various communication and synchronization overheads as well as complexity of required computer architecture. Universal real-time robotic controller and simulator (URRCS) consists of internal host processor and several SIMD processors with ring topology. Architecture modular and expandable: more SIMD processors added to match size of problem. Processors operate asynchronously and in MIMD fashion.

  3. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device-independent fashion and load balancing. A flow solver called TEAM, presently in use at Lockheed Aeronautical Systems Company, was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms, including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha workstations in the graphics visualization lab at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented; specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated; specifically, static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
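
    The manager-worker strategy from item (2) amounts to a pool that hands grid blocks to whichever worker is idle. A minimal Python sketch of the pattern (illustrative only; in TEAM the work items are grid blocks exchanged over PVM, not the dummy workload used here):

        from multiprocessing import Pool

        def solve_block(block_id):
            """Stand-in for relaxing one grid block of the flow domain."""
            return block_id, sum(i * i for i in range(100000))  # dummy work

        if __name__ == "__main__":
            # the pool dispatches blocks to free workers as they finish,
            # which balances load even when block costs are uneven
            with Pool(processes=4) as pool:
                for block_id, _ in pool.imap_unordered(solve_block, range(16)):
                    print(f"block {block_id} done")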

  4. Computational Physics and Evolutionary Dynamics

    NASA Astrophysics Data System (ADS)

    Fontana, Walter

    2000-03-01

    One aspect of computational physics deals with the characterization of statistical regularities in materials. Computational physics meets biology when these materials can evolve. RNA molecules are a case in point. The folding of RNA sequences into secondary structures (shapes) inspires a simple biophysically grounded genotype-phenotype map that can be explored computationally and in the laboratory. We have identified some statistical regularities of this map and begin to understand their evolutionary consequences. (1) "Typical shapes": only a small subset of shapes realized by the RNA folding map is typical, in the sense of containing shapes that are realized significantly more often than others. Consequence: evolutionary histories mostly involve typical shapes, and thus exhibit generic properties. (2) "Neutral networks": sequences folding into the same shape are mutationally connected into a network that reaches across sequence space. Consequence: evolutionary transitions between shapes reflect the fraction of boundary shared by the corresponding neutral networks in sequence space. The notion of a (dis)continuous transition can be made rigorous. (3) "Shape space covering": given a random sequence, a modest number of mutations suffices to reach a sequence realizing any typical shape. Consequence: the effective search space for evolutionary optimization is greatly reduced, and adaptive success is less dependent on initial conditions. (4) "Plasticity mirrors variability": the repertoire of low-energy shapes of a sequence is an indicator of how much and in which ways its energetically optimal shape can be altered by a single point mutation. Consequence: (i) thermodynamic shape stability and mutational robustness are intimately linked. (ii) When natural selection favors the increase of stability, extreme mutational robustness -- to the point of an evolutionary dead-end -- is produced as a side effect. (iii) The hallmark of robust shapes is modularity.

  5. Computational fluid dynamics - The coming revolution

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1982-01-01

    The development of aerodynamic theory is traced from the days of Aristotle to the present, with the next stage in computational fluid dynamics dependent on superspeed computers for flow calculations. Additional attention is given to the history of numerical methods inherent in writing computer codes applicable to viscous and inviscid analyses for complex configurations. The advent of the superconducting Josephson junction is noted to place configurational demands on computer design to avoid limitations imposed by the speed of light, and a Japanese projection of a computer capable of several hundred billion operations/sec is mentioned. The NASA Numerical Aerodynamic Simulator is described, showing capabilities of a billion operations/sec with a memory of 240 million words using existing technology. Near-term advances in fluid dynamics are discussed.

  6. The future of PanDA in ATLAS distributed computing

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.

  7. Single neuron dynamics and computation.

    PubMed

    Brunel, Nicolas; Hakim, Vincent; Richardson, Magnus J E

    2014-04-01

    At the single neuron level, information processing involves the transformation of input spike trains into an appropriate output spike train. Building upon the classical view of a neuron as a threshold device, models have been developed in recent years that take into account the diverse electrophysiological make-up of neurons and accurately describe their input-output relations. Here, we review these recent advances and survey the computational roles that they have uncovered for various electrophysiological properties, for dendritic arbor anatomy as well as for short-term synaptic plasticity. PMID:24492069
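
    The "classical view of a neuron as a threshold device" that this review builds on is captured by the leaky integrate-and-fire model. A standard textbook sketch in Python follows (the parameter values are illustrative, not from the review):

        import numpy as np

        def lif_spike_times(i_in, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
            """Integrate tau*dv/dt = -v + i; emit a spike and reset at threshold."""
            v, spikes = 0.0, []
            for k, i in enumerate(i_in):
                v += dt / tau * (-v + i)      # leaky integration of the input
                if v >= v_th:                 # the 'threshold device' step
                    spikes.append(k * dt)
                    v = v_reset
            return spikes

        # constant suprathreshold drive produces regular firing
        print(lif_spike_times(np.full(5000, 1.5)))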

  8. Distributed neural computations for embedded sensor networks

    NASA Astrophysics Data System (ADS)

    Peckens, Courtney A.; Lynch, Jerome P.; Pei, Jin-Song

    2011-04-01

    Wireless sensing technologies have recently emerged as an inexpensive and robust method of data collection in a variety of structural monitoring applications. In comparison with cabled monitoring systems, wireless systems offer low-cost and low-power communication between a network of sensing devices. Wireless sensing networks possess embedded data processing capabilities which allow for data processing directly at the sensor, thereby eliminating the need for the transmission of raw data. In this study, the Volterra/Wiener neural network (VWNN), a powerful modeling tool for nonlinear hysteretic behavior, is decentralized for embedment in a network of wireless sensors so as to take advantage of each sensor's processing capabilities. The VWNN was chosen for modeling nonlinear dynamic systems because its architecture is computationally efficient and allows computational tasks to be decomposed for parallel execution. In the algorithm, each sensor collects its own data and performs a series of calculations. It then shares its resulting calculations with every other sensor in the network, while the other sensors are simultaneously exchanging their information. Because resource conservation is important in embedded sensor design, the data is pruned wherever possible to eliminate excessive communication between sensors. Once a sensor has its required data, it continues its calculations and computes a prediction of the system acceleration. The VWNN is embedded in the computational core of the Narada wireless sensor node for on-line execution. Data generated by a steel framed structure excited by seismic ground motions is used for validation of the embedded VWNN model.

  9. Three-Dimensional Computational Fluid Dynamics

    SciTech Connect

    Haworth, D.C.; O'Rourke, P.J.; Ranganathan, R.

    1998-09-01

    Computational fluid dynamics (CFD) is one discipline falling under the broad heading of computer-aided engineering (CAE). CAE, together with computer-aided design (CAD) and computer-aided manufacturing (CAM), comprise a mathematical-based approach to engineering product and process design, analysis and fabrication. In this overview of CFD for the design engineer, our purposes are three-fold: (1) to define the scope of CFD and motivate its utility for engineering, (2) to provide a basic technical foundation for CFD, and (3) to convey how CFD is incorporated into engineering product and process design.

  10. Distributed Computing for the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Chudoba, J.

    2015-12-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years, VO auger has been among the top ten EGI users by total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with small and less reliable sites used for the bulk production. The new system can also use available resources in clouds. The Dirac File Catalog replaced LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we give a comparison of the old and the new production systems and report our experience migrating to the new system.

  11. Progress in the dynamical parton distributions

    SciTech Connect

    Jimenez-Delgado, Pedro

    2012-06-01

    The present status of the (JR) dynamical parton distribution functions is reported. Different theoretical improvements, including the determination of the strange sea input distribution, the treatment of correlated errors and the inclusion of alternative data sets, are discussed. Highlights in the ongoing developments as well as (very) preliminary results in the determination of the strong coupling constant are presented.

  12. Computer simulation of microstructural dynamics

    SciTech Connect

    Grest, G.S.; Anderson, M.P.; Srolovitz, D.J.

    1985-01-01

    Since many of the physical properties of materials are determined by their microstructure, it is important to be able to predict and control microstructural development. A number of approaches have been taken to study this problem, but they assume that the grains can be described as spherical or hexagonal and that growth occurs in an average environment. We have developed a new technique to bridge the gap between the atomistic interactions and the macroscopic scale by discretizing the continuum system such that the microstructure retains its topological connectedness, yet is amenable to computer simulations. Using this technique, we have studied grain growth in polycrystalline aggregates. The temporal evolution and grain morphology of our model are in excellent agreement with experimental results for metals and ceramics.
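
    The discretization described here is in the spirit of the Monte Carlo Potts model of grain growth: the microstructure is a lattice of orientation labels, and a grain boundary is a bond between unlike labels. Below is a minimal zero-temperature Python sketch of that family of models, with one common acceptance rule; it is a generic illustration, not the authors' exact procedure.

        import random

        def potts_sweep(spin):
            """One Monte Carlo sweep: reassign random sites to a neighbor's
            orientation when that does not increase the number of
            unlike-neighbor bonds (the boundary energy)."""
            n = len(spin)
            for _ in range(n * n):
                i, j = random.randrange(n), random.randrange(n)
                nbrs = [spin[(i + 1) % n][j], spin[(i - 1) % n][j],
                        spin[i][(j + 1) % n], spin[i][(j - 1) % n]]
                new = random.choice(nbrs)                   # candidate orientation
                d_e = sum(new != s for s in nbrs) - sum(spin[i][j] != s for s in nbrs)
                if d_e <= 0:
                    spin[i][j] = new

        q, n = 48, 64    # 48 orientations on a 64x64 lattice
        lattice = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
        for _ in range(100):
            potts_sweep(lattice)    # grains coarsen sweep by sweep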

  13. Pseudo-interactive monitoring in distributed computing

    SciTech Connect

    Sfiligoi, I.; Bradley, D.; Livny, M.; /Wisconsin U., Madison

    2009-05-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures results in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach that uses the regular batch system capabilities of Condor to let users access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  14. Dynamic void distribution in myoglobin and five mutants.

    PubMed

    Jiang, Yingying; Kirmizialtin, Serdal; Sanchez, Isaac C

    2014-01-01

    Globular proteins contain cavities/voids that play specific roles in controlling protein function. Elongated cavities provide migration channels for the transport of ions and small molecules to the active center of a protein or enzyme. Using Monte Carlo and Molecular Dynamics on fully atomistic protein/water models, a new computational methodology is introduced that takes into account the protein's dynamic structure and maps all the cavities in and on the surface. To demonstrate its utility, the methodology is applied to study cavity structure in myoglobin and five of its mutants. Computed cavity and channel size distributions reveal significant differences relative to the wild type myoglobin. Computer visualization of the channels leading to the heme center indicates restricted ligand access for the mutants consistent with the existing interpretations. The new methodology provides a quantitative measure of cavity structure and distributions and can become a valuable tool for the structural characterization of proteins. PMID:24500195

  15. Interoperable PKI Data Distribution in Computational Grids

    SciTech Connect

    Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.; Smith, Sean W.

    2008-07-25

    One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that inter-operate across domain boundaries. Public Key Infrastructures (PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources, particularly when the Certification Authority (CA) is alien to the relying party. This article presents how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).

  16. Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The Virtual National Airspace Simulation (VNAS) will improve the safety of air transportation. In 2001, using simulation and information management software running over a distributed network of super-computers, researchers at NASA Ames, Glenn, and Langley Research Centers developed a working prototype of a virtual airspace. This VNAS prototype modeled daily operations of the Atlanta airport by integrating measured operational data and simulation data on up to 2,000 flights a day. The concepts and architecture developed by NASA for this prototype are integral to the National Airspace Simulation, supporting the development of strategies for improving aviation safety and identifying precursors to component failure.

  17. Advances in the spatially distributed ages-w model: parallel computation, java connection framework (JCF) integration, and streamflow/nitrogen dynamics assessment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic and water quality (H/WQ) simulation components under the Java Connection Framework (JCF) and the Object Modeling System (OMS) environmental modeling framework. AgES-W is implicitly scala...

  18. Computational fluid dynamics - A personal view

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.

    1989-01-01

    This paper provides a personal view of computational fluid dynamics. The main theme is divided into two categories - one dealing with algorithms and engineering applications and the other with scientific investigations. The former category may be termed computational aerodynamics, with the objective of providing reliable aerodynamic or engineering predictions. The latter category is essentially basic research, where the algorithmic tools are used to unravel and elucidate fluid-dynamic phenomena hard to obtain in a laboratory. A critique of the numerical solution techniques for both compressible and incompressible flows is included. The discussion on scientific investigations deals in particular with transition and turbulence.

  19. Fast Parallel Computation Of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Kwan, Gregory L.; Bagherzadeh, Nader

    1996-01-01

    Constraint-force algorithm fast, efficient, parallel-computation algorithm for solving forward dynamics problem of multibody system like robot arm or vehicle. Solves problem in minimum time proportional to log(N) by use of optimal number of processors proportional to N, where N is number of dynamical degrees of freedom: in this sense, constraint-force algorithm both time-optimal and processor-optimal parallel-processing algorithm.

  20. Distributed Data Mining using a Public Resource Computing Framework

    NASA Astrophysics Data System (ADS)

    Cesario, Eugenio; de Caria, Nicola; Mastroianni, Carlo; Talia, Domenico

    The public resource computing paradigm is often used as a successful and low-cost mechanism for the management of several classes of scientific and commercial applications that require the execution of a large number of independent tasks. Public computing frameworks, also known as “Desktop Grids”, exploit the computational power and storage facilities of private computers, or “workers”. Despite the inherently decentralized nature of the applications to which they are devoted, these systems often adopt a centralized mechanism for the assignment of jobs and the distribution of input data, as is the case for BOINC, the most popular framework in this realm. We present a decentralized framework that aims at increasing the flexibility and robustness of public computing applications, thanks to two basic features: (i) the adoption of a P2P protocol for dynamically matching the job specifications with the worker characteristics, without relying on centralized resources; (ii) the use of distributed cache servers for efficient dissemination and reutilization of data files. This framework is exploitable for a wide set of applications. In this work, we describe how a Java prototype of the framework was used to tackle the problem of mining frequent itemsets from a transactional dataset, and show some preliminary yet interesting performance results that demonstrate the efficiency improvements that can derive from the presented architecture.
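
    For the frequent-itemset task, the division of labor is easy to picture: each worker counts items over its own partition and the counts are merged, which is the first pass of Apriori. A small Python sketch under that assumption (the framework's P2P job matching and cache servers are not modeled here):

        from collections import Counter

        def local_counts(partition):
            """Worker-side job: item counts over one slice of the dataset."""
            counts = Counter()
            for basket in partition:
                counts.update(set(basket))     # count each item once per basket
            return counts

        def frequent_items(partitions, min_support):
            """Merge the workers' counts; keep items meeting min_support."""
            total = Counter()
            for part in partitions:            # each call would run on a worker
                total += local_counts(part)
            return {item for item, n in total.items() if n >= min_support}

        data = [["a", "b"], ["a", "c"], ["a", "b", "c"], ["b"]]
        print(frequent_items([data[:2], data[2:]], min_support=2))   # {'a', 'b', 'c'}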

  1. Dynamic object management for distributed data structures

    NASA Technical Reports Server (NTRS)

    Totty, Brian K.; Reed, Daniel A.

    1992-01-01

    In distributed-memory multiprocessors, remote memory accesses incur larger delays than local accesses. Hence, insightful allocation and access of distributed data can yield substantial performance gains. The authors argue for the use of dynamic data management policies encapsulated within individual distributed data structures. Distributed data structures offer performance, flexibility, abstraction, and system independence. This approach is supported by data from a trace-driven simulation study of parallel scientific benchmarks. Experimental data on memory locality, message count, message volume, and communication delay suggest that data-structure-specific data management is superior to a single, system-imposed policy.

  2. Visualization of unsteady computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Haimes, Robert

    1994-11-01

    A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamic (CFD) results is presented. This environment requires a super-computer-class machine; massively parallel processors (MPPs) and clusters of workstations acting as a single MPP (by concurrently working on the same task) provide the required computational bandwidth for CFD calculations of transient problems. Clusters of reduced instruction set computers (RISC) are a recent advent, based on the low cost and high performance that workstation vendors provide. A cluster, with the proper software, can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3, revision 1.00 make up the bulk of this report.

  3. Visualization of unsteady computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1994-01-01

    A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamic (CFD) results is presented. This environment requires a super-computer-class machine; massively parallel processors (MPPs) and clusters of workstations acting as a single MPP (by concurrently working on the same task) provide the required computational bandwidth for CFD calculations of transient problems. Clusters of reduced instruction set computers (RISC) are a recent advent, based on the low cost and high performance that workstation vendors provide. A cluster, with the proper software, can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3, revision 1.00 make up the bulk of this report.

  4. Graphics supercomputer for computational fluid dynamics research

    NASA Astrophysics Data System (ADS)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer, PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted into a research computer lab by adding some furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  5. Final Report Computational Analysis of Dynamical Systems

    SciTech Connect

    Guckenheimer, John

    2012-05-08

    This is the final report for DOE Grant DE-FG02-93ER25164, initiated in 1993. This grant supported research of John Guckenheimer on computational analysis of dynamical systems. During that period, seventeen individuals received PhD degrees under the supervision of Guckenheimer and over fifty publications related to the grant were produced. This document contains copies of these publications.

  6. Computational fluid dynamics in oil burner design

    SciTech Connect

    Butcher, T.A.

    1997-09-01

    In Computational Fluid Dynamics, the differential equations which describe flow, heat transfer, and mass transfer are approximately solved using a very laborious numerical procedure. Flows of practical interest in burner design are always turbulent, adding the complexity of requiring a turbulence model. This paper presents a model for burner design.

  7. From Cnn Dynamics to Cellular Wave Computers

    NASA Astrophysics Data System (ADS)

    Roska, Tamas

    2013-01-01

    Embedded in a historical overview, the development of the Cellular Wave Computing paradigm is presented, starting from the standard CNN dynamics. The theoretical aspects, the physical implementation, the innovation process, as well as the biological relevance are discussed in detail. Finally, the latest developments, the physical versus virtual cellular machines, as well as some open questions are presented.

  8. Parallel computation of manipulator inverse dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    In this article, parallel computation of manipulator inverse dynamics is investigated. A hierarchical graph-based mapping approach is devised to analyze the inherent parallelism in the Newton-Euler formulation at several computational levels, and to derive the features of an abstract architecture for exploitation of parallelism. At each level, a parallel algorithm represents the application of a parallel model of computation that transforms the computation into a graph whose structure defines the features of an abstract architecture, i.e., number of processors, communication structure, etc. Data-flow analysis is employed to derive the time lower bound in the computation as well as the sequencing of the abstract architecture. The features of the target architecture are defined by optimization of the abstract architecture to exploit maximum parallelism while minimizing architectural complexity. An architecture is designed and implemented that is capable of efficient exploitation of parallelism at several computational levels. The computation time of the Newton-Euler formulation for a 6-degree-of-freedom (dof) general manipulator is measured as 187 microsec. The increase in computation time for each additional dof is 23 microsec, which leads to a computation time of less than 500 microsec, even for a 12-dof redundant arm.

  9. Optimal dynamic remapping of parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Reynolds, Paul F., Jr.

    1987-01-01

    A large class of computations is characterized by a sequence of phases, with phase changes occurring unpredictably. The decision problem considered is that of remapping workload to processors in a parallel computation when the utility of remapping and the future behavior of the workload are uncertain: execution requirements are stable during a given phase, but may change radically between phases. For such problems, a workload assignment generated for one phase may hinder performance during the next phase. This problem is treated formally for a probabilistic model of computation with at most two phases. The fundamental problem of balancing the expected remapping performance gain against the delay cost is addressed. Stochastic dynamic programming is used to show that the remapping decision policy minimizing the expected running time of the computation has an extremely simple structure. Because the gain may not be predictable, the performance of a heuristic policy that does not require estimation of the gain is examined. The heuristic method's feasibility is demonstrated by its use on an adaptive fluid dynamics code on a multiprocessor. The results suggest that, except in extreme cases, the remapping decision problem is essentially that of dynamically determining whether gain can be achieved by remapping after a phase change. The results also suggest that this heuristic is applicable to computations with more than two phases.
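
    The shape of the trade-off can be written compactly (in notation assumed here for illustration, not the paper's): if remapping costs a delay C and improves the processing rate by Delta over an expected remaining phase duration L, remapping after a phase change pays off when

        \[
        \mathbb{E}\left[\Delta \cdot L\right] \;>\; C .
        \]

    This inequality is only an illustration of the gain-versus-delay comparison involved; the heuristic policy examined in the paper avoids estimating the left-hand side directly.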

  10. Distributed Computing Software Building-Blocks for Ubiquitous Computing Societies

    NASA Astrophysics Data System (ADS)

    Kim, K. H. (Kane)

    The steady approach of advanced nations toward realization of ubiquitous computing societies has given birth to rapidly growing demands for new-generation distributed computing (DC) applications. Consequently, economic and reliable construction of new-generation DC applications is currently a major issue faced by the software technology research community. What is needed is a new-generation DC software engineering technology which is at least multiple times more effective in constructing new-generation DC applications than the currently practiced technologies are. In particular, this author believes that a new-generation building-block (BB), which is much more advanced than the current-generation DC object that is a small extension of the object model embedded in languages C++, Java, and C#, is needed. Such a BB should enable systematic and economic construction of DC applications that are capable of taking critical actions with 100-microsecond-level or even 10-microsecond-level timing accuracy, fault tolerance, and security enforcement while being easily expandable and taking advantage of all sorts of network connectivity. Some directions considered worth pursuing for finding such BBs are discussed.

  11. An Applet-based Anonymous Distributed Computing System.

    ERIC Educational Resources Information Center

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  12. Absolute nonlocality via distributed computing without communication

    NASA Astrophysics Data System (ADS)

    Czekaj, Ł.; Pawłowski, M.; Vértesi, T.; Grudka, A.; Horodecki, M.; Horodecki, R.

    2015-09-01

    Understanding the role that quantum entanglement plays as a resource in various information processing tasks is one of the crucial goals of quantum information theory. Here we propose an alternative perspective for studying quantum entanglement: distributed computation of functions without communication between nodes. To formalize this approach, we propose identity games. Surprisingly, despite no signaling, we find that nonlocal quantum strategies beat classical ones in terms of winning probability for identity games originating from certain bipartite and multipartite functions. Moreover, we show that, for a majority of functions and a large enough number of outputs, access to general nonsignaling resources boosts the success probability by a factor of two compared to classical ones. Because there are no constraints on the inputs and no processing of the outputs in the identity games, they detect very strong types of correlations: absolute nonlocality.

  13. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation: Second Year Progress Report

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    Mesh generation has long been recognized as a bottleneck in the CFD process. While much research on automating the volume mesh generation process has been relatively successful, these methods rely on an appropriate initial surface triangulation to work properly. Surface discretization has been one of the least automated steps in computational simulation due to its dependence on implicitly defined CAD surfaces and curves. Differences in CAD geometry engines manifest themselves in discrepancies in their interpretation of the same entities. This lack of "good" geometry causes significant problems for mesh generators, requiring users to "repair" the CAD geometry before mesh generation. The problem is exacerbated when CAD geometry is translated to other forms (e.g., IGES) which do not include important topological and construction information in addition to entity geometry. One technique to avoid these problems is to access the CAD geometry directly from the mesh generating software, rather than through files. By accessing the geometry model (not a discretized version) in its native environment, this approach avoids translation to a format which can deplete the model of topological information. Our approach to enabling models developed in the Denali software environment to directly access CAD geometry and functions is through an Application Programming Interface (API) known as CAPRI. CAPRI provides a layer of indirection through which CAD-specific data may be accessed by an application program using CAD-system-neutral C and FORTRAN language function calls. CAPRI supports a general set of CAD operations such as truth testing, geometry construction, and entity queries.

  14. LHCbDirac: distributed computing in LHCb

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Charpentier, P.; Graciani, R.; Tsaregorodtsev, A.; Closier, J.; Mathe, Z.; Ubeda, M.; Zhelezov, A.; Lanciotti, E.; Romanovskiy, V.; Ciba, K. D.; Casajus, A.; Roiser, S.; Sapunov, M.; Remenska, D.; Bernardoff, V.; Santana, R.; Nandakumar, R.

    2012-12-01

    We present LHCbDirac, an extension of the DIRAC community Grid solution that handles LHCb specificities. The DIRAC software was developed for many years within LHCb only; nowadays it is a generic software product, used by many scientific communities worldwide. Each community wanting to take advantage of DIRAC has to develop an extension containing all the necessary code for handling their specific cases. LHCbDirac is an actively developed extension, implementing the LHCb computing model and workflows and handling all the distributed computing activities of LHCb. Such activities include real data processing (reconstruction, stripping and streaming), Monte-Carlo simulation and data replication. Other activities are group and user analysis, data management, resource management and monitoring, data provenance, and accounting for user and production jobs. LHCbDirac also provides extensions of the DIRAC interfaces, including a secure web client, python APIs and CLIs. Before putting a new release in production, a number of certification tests are run in a dedicated setup. This contribution highlights the versatility of the system, also presenting the experience with real data processing, data and resources management, and monitoring for activities and resources.

  15. Automating usability of ATLAS Distributed Computing resources

    NASA Astrophysics Data System (ADS)

    Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration

    2014-06-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage areas' monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows monitoring the status of storage resources with fine time-granularity and taking automatic actions in foreseen cases, like automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up on problems, where and when needed. In this work we show SAAB's working principles and features. We also present the decrease in human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
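
    SAAB's actual inference algorithm processes the full history of monitoring-test outcomes, but its spirit can be conveyed by a much simpler rule. Below is a hypothetical Python sketch; the sliding window and threshold are invented for illustration and are not SAAB's parameters.

        def blacklist_decision(history, window=20, max_fail_frac=0.7):
            """Blacklist a storage area when too many of its recent
            monitoring tests failed. `history` is a list of booleans,
            True for a passed test."""
            recent = history[-window:]
            fail_frac = sum(1 for ok in recent if not ok) / len(recent)
            return "blacklist" if fail_frac > max_fail_frac else "keep"

        print(blacklist_decision([True] * 5 + [False] * 15))   # -> blacklist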

  16. Dynamic data distributions in Vienna Fortran

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Mehrotra, Piyush; Moritsch, Hans; Zima, Hans

    1993-01-01

    Vienna Fortran is a machine-independent language extension of Fortran, which is based upon the Single-Program-Multiple-Data (SPMD) paradigm and allows the user to write programs for distributed-memory systems using global addresses. The language features focus mainly on the issue of distributing data across virtual processor structures. Those features of Vienna Fortran that allow the data distributions of arrays to change dynamically, depending on runtime conditions are discussed. The relevant language features are discussed, their implementation is outlined, and how they may be used in applications is described.

  17. Dynamic Singularity Spectrum Distribution of Sea Clutter

    NASA Astrophysics Data System (ADS)

    Xiong, Gang; Yu, Wenxian; Zhang, Shuning

    2015-12-01

    Fractal and multifractal theory have provided new approaches for radar signal processing and target detection against an ocean background. However, related research has mainly focused on the fractal dimension or multifractal spectrum (MFS) of sea clutter. In this paper, a new dynamic singularity analysis method for sea clutter using the MFS distribution is developed, based on moving detrending analysis (DMA-MFSD). Theoretically, we introduce time information by using the cyclic auto-correlation of sea clutter. For the transient correlation series, the instantaneous singularity spectrum based on the multifractal detrending moving analysis (MF-DMA) algorithm is calculated, and the dynamic singularity spectrum distribution of sea clutter is acquired. In addition, we analyze the time-varying singularity exponent ranges and the maximum position function in the DMA-MFSD of sea clutter. For real sea clutter data recorded in sea state level III, we analyze the dynamic singularity spectrum distribution and conclude that radar sea clutter has non-stationary and time-varying scaling characteristics and exhibits a time-varying singularity spectrum distribution under the proposed DMA-MFSD method. The DMA-MFSD method will also provide a reference for nonlinear dynamics and multifractal signal processing.

  18. The brain dynamics of linguistic computation

    PubMed Central

    Murphy, Elliot

    2015-01-01

    Neural oscillations at distinct frequencies are increasingly being related to a number of basic and higher cognitive faculties. Oscillations enable the construction of coherently organized neuronal assemblies through establishing transitory temporal correlations. By exploring the elementary operations of the language faculty—labeling, concatenation, cyclic transfer—alongside neural dynamics, a new model of linguistic computation is proposed. It is argued that the universality of language, and the true biological source of Universal Grammar, is not to be found purely in the genome as has long been suggested, but more specifically within the extraordinarily preserved nature of mammalian brain rhythms employed in the computation of linguistic structures. Computational-representational theories are used as a guide in investigating the neurobiological foundations of the human “cognome”—the set of computations performed by the nervous system—and new directions are suggested for how the dynamics of the brain (the “dynome”) operate and execute linguistic operations. The extent to which brain rhythms are the suitable neuronal processes which can capture the computational properties of the human language faculty is considered against a backdrop of existing cartographic research into the localization of linguistic interpretation. Particular focus is placed on labeling, the operation elsewhere argued to be species-specific. A Basic Label model of the human cognome-dynome is proposed, leading to clear, causally-addressable empirical predictions, to be investigated by a suggested research program, Dynamic Cognomics. In addition, a distinction between minimal and maximal degrees of explanation is introduced to differentiate between the depth of analysis provided by cartographic, rhythmic, neurochemical, and other approaches to computation. PMID:26528201

  19. The Gain of Resource Delegation in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander

    In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, loosely coupled networks of autonomously acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, granting idle resources to others can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups, we show the possible gain of this approach and analyze the dynamics of the workload-adaptive reconfiguration behavior.
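
    A toy version of such a leasing policy makes the mechanics concrete. In the Python sketch below, the thresholds and site records are invented for illustration (the paper's actual exchange policies differ): over-utilized sites lease one node at a time from sites that are lightly loaded and have idle capacity.

        def reconfigure(sites, hi=0.9, lo=0.3):
            """One load-dependent reconfiguration round: lease idle nodes
            from lightly loaded sites to overloaded ones."""
            donors = [s for s in sites if s["util"] < lo and s["idle"] > 0]
            for site in sites:
                if site["util"] > hi and donors:
                    donor = donors[0]
                    donor["idle"] -= 1                       # one node changes hands
                    site["leased"] = site.get("leased", 0) + 1
                    if donor["idle"] == 0:
                        donors.pop(0)                        # donor exhausted

        sites = [{"name": "A", "util": 0.95, "idle": 0},
                 {"name": "B", "util": 0.20, "idle": 4}]
        reconfigure(sites)
        print(sites)   # site A now holds one node leased from B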

  20. Visualization of unsteady computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Haimes, Robert

    1995-10-01

    The current computing environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamic (CFD) results is a super-computer class machine. The Massively Parallel Processors (MPP's) such as the 160 node IBM SP2 at NAS and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array) provide the required computation bandwidth for CFD calculations of transient problems. Work is in progress on a set of software tools designed specifically to address visualizing 3D unsteady CFD results in these super-computer-like environments. The visualization is concurrently executed with the CFD solver. The parallel version of Visual3, pV3 required splitting up the unsteady visualization task to allow execution across a network of workstation(s) and compute servers. In this computing model, the network is almost always the bottleneck so much of the effort involved techniques to reduce the size of the data transferred between machines.

  1. Visualization of unsteady computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1995-01-01

    The current computing environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamic (CFD) results is a super-computer class machine. The Massively Parallel Processors (MPP's) such as the 160 node IBM SP2 at NAS and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array) provide the required computation bandwidth for CFD calculations of transient problems. Work is in progress on a set of software tools designed specifically to address visualizing 3D unsteady CFD results in these super-computer-like environments. The visualization is concurrently executed with the CFD solver. The parallel version of Visual3, pV3 required splitting up the unsteady visualization task to allow execution across a network of workstation(s) and compute servers. In this computing model, the network is almost always the bottleneck so much of the effort involved techniques to reduce the size of the data transferred between machines.

  2. Spectral Methods for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Streett, C. L.; Hussaini, M. Y.

    1994-01-01

    As a tool for large-scale computations in fluid dynamics, spectral methods were prophesied in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral

  3. An Overview of Cloud Computing in Distributed Systems

    NASA Astrophysics Data System (ADS)

    Divakarla, Usha; Kumari, Geetha

    2010-11-01

    Cloud computing is the emerging trend in the field of distributed computing. Cloud computing evolved from grid computing and distributed computing. Cloud plays an important role in huge organizations in maintaining huge data with limited resources. Cloud also helps in resource sharing through some specific virtual machines provided by the cloud service provider. This paper gives an overview of the cloud organization and some of the basic security issues pertaining to the cloud.

  4. Fault Diagnosis in a Fully Distributed Local Computer Network.

    NASA Astrophysics Data System (ADS)

    Kwag, Hye Keun

    Local computer networks are being installed in diverse application areas. Many of the networks employ a distributed control scheme, which has advantages in performance and reliability over a centralized one. However, distribution of control increases the difficulty in locating faulty hardware elements. Consequently, advantages may not be fully realized unless measures are taken to account for the difficulties of fault diagnosis; yet, not much work has been done in this area. A hardcore is defined as a node or a part of a node which is fault-free and which can diagnose other elements in a system. Faults are diagnosed in most existing distributed local computer networks by assuming that every node, or a part of every node, is a fixed hardcore: a fixed node or a part of a fixed node is always a hardcore. Maintaining such high reliability may not be possible or cost-effective for some systems. A distributed network contains dynamically redundant elements, and it is reasonable to assume that fewer nodes are simultaneously faulty than are fault-free at any point in the life cycle of the network. A diagnostic model is proposed herein which determines binary evaluation results according to the status of the testing and tested nodes, and which leads the network to dynamically locate a fault-free node (a hardcore). This diagnostic model is, in most cases, simpler to implement and more cost-effective than the fixed hardcore. The selected hardcore can diagnose the other elements and can locate permanent faults. In a hop-by-hop test, the destination node and every intermediate node in a path test the transmitted data. This dissertation presents another method to locate an element with frequent transient faults; it checks data only at the destination, thereby eliminating the need for a hop-by-hop test.
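
    As a toy illustration of the binary-evaluation idea (the encoding below is hypothetical, not the dissertation's exact model): if fewer nodes are faulty than fault-free, a node that a majority of testers evaluates as good can be adopted as the hardcore.

    ```python
    import random

    # Toy distributed-diagnosis sketch: every node tests every other node and
    # reports a binary result; fault-free testers report truthfully, faulty
    # testers report arbitrary values. Assuming fewer faulty than fault-free
    # nodes, a node that a majority of testers evaluates as good is fault-free.
    random.seed(1)
    N_NODES = 7
    faulty = {2, 5}  # unknown to the algorithm; |faulty| < N_NODES / 2

    def test(tester, tested):
        """Binary evaluation: 0 = 'tested node looks good', 1 = 'looks faulty'."""
        if tester in faulty:
            return random.randint(0, 1)      # unreliable tester
        return 1 if tested in faulty else 0  # reliable tester

    def locate_hardcore():
        for candidate in range(N_NODES):
            votes = [test(t, candidate) for t in range(N_NODES) if t != candidate]
            if sum(votes) < len(votes) / 2:  # majority says 'good'
                return candidate             # usable as the hardcore
        return None

    hardcore = locate_hardcore()
    print("selected hardcore:", hardcore, "| actually faulty:", hardcore in faulty)
    ```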

  5. The use of computers for instruction in fluid dynamics

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1987-01-01

    Applications for computers which improve instruction in fluid dynamics are examined. Computers can be used to illustrate three-dimensional flow fields and simple fluid dynamics mechanisms, to solve fluid dynamics problems, and for electronic sketching. The usefulness of computer applications is limited by computer speed, memory, and software and the clarity and field of view of the projected display. Proposed advances in personal computers which will address these limitations are discussed. Long range applications for computers in education are considered.

  6. Computation in Dynamically Bounded Asymmetric Systems

    PubMed Central

    Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney

    2015-01-01

    Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable ‘expansion’ dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems. PMID:25617645
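
    A minimal simulation of the motif combination the paper describes, with illustrative parameters of our own rather than the paper's: rectified-linear units whose self-excitation exceeds the leak (unstable expansion), bounded and steered by a shared inhibitory loop so that the state contracts onto a single winner.

    ```python
    import numpy as np

    # Minimal winner-take-all network of linear threshold units with
    # asymmetric connectivity (parameters illustrative, not the paper's):
    # two excitatory units with unstable self-excitation (alpha > 1) and a
    # shared inhibitory unit. Expansion from positive feedback is bounded
    # by the inhibitory loop, and the state contracts onto one winner.
    f = lambda v: np.maximum(v, 0.0)         # linear threshold (rectification)
    alpha, beta = 1.4, 1.0                   # self-excitation, inhibition strength

    x = np.zeros(2)                          # excitatory units
    y = 0.0                                  # shared inhibitory unit
    b = np.array([1.0, 0.9])                 # unit 0 gets slightly stronger input
    dt = 0.01
    for _ in range(8000):                    # forward-Euler integration
        x = x + dt * (-x + alpha * f(x) - beta * f(y) + b)
        y = y + dt * (-y + beta * f(x).sum())

    print("excitatory:", np.round(x, 3), "inhibitory:", round(y, 3))
    ```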

  7. Distributed Design and Analysis of Computer Experiments

    SciTech Connect

    Doak, Justin

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an
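
    DDACE itself is a C++ library, but its workflow is easy to sketch in Python with hypothetical input ranges and a stand-in application code: stratified (Latin hypercube) sampling over the declared ranges, one application run per sample, and a simple least-squares response surface in place of MARS.

    ```python
    import numpy as np

    # Sketch of a DDACE-style workflow (ranges and the 'application code'
    # are hypothetical): sample, run, fit a response surface.
    rng = np.random.default_rng(0)
    n_samples, ranges = 20, [(300.0, 400.0), (0.1, 0.5)]  # temperature, material

    def latin_hypercube(n, dims):
        # One stratified, shuffled sample per interval in each dimension.
        strata = rng.permuted(np.tile(np.arange(n), (dims, 1)), axis=1).T
        return (strata + rng.random((n, dims))) / n

    unit = latin_hypercube(n_samples, len(ranges))
    X = np.array([[lo + (hi - lo) * u for u, (lo, hi) in zip(row, ranges)]
                  for row in unit])

    def application(temp, mat):              # stand-in for the user's code
        return 0.01 * (temp - 350.0) ** 2 + 50.0 * mat

    y = np.array([application(*row) for row in X])

    # Quadratic response surface via least squares (in place of MARS).
    A = np.column_stack([np.ones(n_samples), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("response-surface coefficients:", np.round(coeffs, 4))
    ```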

  8. Distributed Design and Analysis of Computer Experiments

    Energy Science and Technology Software Center (ESTSC)

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation

  9. A computational model for dynamic vision

    NASA Technical Reports Server (NTRS)

    Moezzi, Saied; Weymouth, Terry E.

    1990-01-01

    This paper describes a novel computational model for dynamic vision which promises to be both powerful and robust. Furthermore, the paradigm is ideal for an active vision system where camera vergence changes dynamically. Its basis is the retinotopically indexed, object-centered encoding of early visual information. Specifically, the relative distances of objects to a set of referents are encoded in image-registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integration of depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.

  10. Arterioportal shunts on dynamic computed tomography

    SciTech Connect

    Nakayama, T.; Hiyama, Y.; Ohnishi, K.; Tsuchiya, S.; Kohno, K.; Nakajima, Y.; Okuda, K.

    1983-05-01

    Thirty-two patients, 20 with hepatocellular carcinoma and 12 with liver cirrhosis, were examined by dynamic computed tomography (CT) using intravenous bolus injection of contrast medium and by celiac angiography. Dynamic CT disclosed arterioportal shunting in four cases of hepatocellular carcinoma and in one of cirrhosis. In three of the former, the arterioportal shunt was adjacent to a mass lesion on CT, suggesting tumor invasion into the portal branch. In one with hepatocellular carcinoma, the shunt was remote from the mass. In the case with cirrhosis, there was no mass. In these last two cases, the shunt might have been caused by prior percutaneous needle puncture. In another case of hepatocellular carcinoma, celiac angiography but not CT demonstrated an arterioportal shunt. Thus, dynamic CT was diagnostic in five of six cases of arteriographically demonstrated arterioportal shunts.

  11. Human systems dynamics: Toward a computational model

    NASA Astrophysics Data System (ADS)

    Eoyang, Glenda H.

    2012-09-01

    A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high-dimensional, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, the CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high-dimensional, and nonlinear conceptual model of the complex dynamics of human systems.

  12. Computational fluid dynamics uses in fluid dynamics/aerodynamics education

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    1994-01-01

    The field of computational fluid dynamics (CFD) has advanced to the point where it can now be used for the purpose of fluid dynamics physics education. Because of the tremendous wealth of information available from numerical simulation, certain fundamental concepts can be efficiently communicated using an interactive graphical interrogation of the appropriate numerical simulation data base. In other situations, a large amount of aerodynamic information can be communicated to the student by interactive use of simple CFD tools on a workstation or even in a personal computer environment. The emphasis in this presentation is to discuss ideas for how this process might be implemented. Specific examples, taken from previous publications, will be used to highlight the presentation.

  13. Inverse dynamics: Simultaneous trajectory tracking and vibration reduction with distributed actuators

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh; Bayo, Eduardo

    1993-01-01

    This paper addresses the problem of inverse dynamics for articulated flexible structures with both lumped and distributed actuators. This problem arises, for example, in the combined vibration minimization and trajectory control of space robots and structures. A new inverse dynamics scheme for computing the nominal lumped and distributed inputs for tracking a prescribed trajectory is given.

  14. Computational Fluid Dynamics of rising droplets

    SciTech Connect

    Wagner, Matthew; Francois, Marianne M.

    2012-09-05

    The main goal of this study is to perform simulations of droplet dynamics using Truchas, a LANL-developed computational fluid dynamics (CFD) software, and compare them to a computational study of Hysing et al. [IJNMF, 2009, 60:1259]. Understanding droplet dynamics is of fundamental importance in liquid-liquid extraction, a process used in the nuclear fuel cycle to separate various components. Simulations of a single droplet rising by buoyancy are conducted in two dimensions. Multiple parametric studies are carried out to ensure the problem set-up is optimized. An Interface Smoothing Length (ISL) study and a mesh resolution study are performed to verify convergence of the calculations. ISL is a parameter for the interface curvature calculation. Further, wall effects are investigated and checked against existing correlations. The ISL study found that the optimal ISL value is 2.5Δx, with Δx being the mesh cell spacing. The mesh resolution study found that the optimal mesh resolution is d/h = 40, where d is the drop diameter and h = Δx. In order for wall effects on terminal velocity to be insignificant, a conservative wall width of 9d or a nonconservative wall width of 7d can be used. The percentage difference between Hysing et al. [IJNMF, 2009, 60:1259] and Truchas for the velocity profiles varies from 7.9% to 9.9%. The computed droplet velocity and interface profiles are found to be in agreement with the study. The CFD calculations are performed on multiple cores, using LANL's Institutional High Performance Computing.

  15. Concept for a distributed processor computer

    NASA Technical Reports Server (NTRS)

    Bogue, P. N.; Burnett, G. J.; Koczela, L. J.

    1970-01-01

    A future-generation computer utilizes cells of a single metal oxide semiconductor wafer, each containing a general-purpose processor section and a small memory of approximately 512 words of 16 bits each. Cells are organized into groups, and the groups are interconnected to form the computer.

  16. Determination of eigenvalues of dynamical systems by symbolic computation

    NASA Technical Reports Server (NTRS)

    Howard, J. C.

    1982-01-01

    A symbolic computation technique for determining the eigenvalues of dynamical systems is described wherein algebraic operations, symbolic differentiation, matrix formulation and inversion, etc., can be performed on a digital computer equipped with a formula-manipulation compiler. An example is included that demonstrates the facility with which the system dynamics matrix and the control distribution matrix from the state space formulation of the equations of motion can be processed to obtain eigenvalue loci as a function of a system parameter. The example chosen to demonstrate the technique is a fourth-order system representing the longitudinal response of a DC 8 aircraft to elevator inputs. This simplified system has two dominant modes, one of which is lightly damped and the other well damped. The loci may be used to determine the value of the controlling parameter that satisfies design requirements. The results were obtained using the MACSYMA symbolic manipulation system.
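
    The same style of computation can be reproduced today with a symbolic package such as sympy. The abstract does not reproduce the fourth-order DC 8 matrix, so the sketch below uses a hypothetical second-order system with a scalar feedback gain k and traces the eigenvalue loci as k varies.

    ```python
    import sympy as sp

    # Symbolic eigenvalue loci in the spirit of the MACSYMA example. The
    # fourth-order DC-8 matrix is not given in the abstract, so this uses a
    # hypothetical second-order system with a feedback gain k.
    k = sp.symbols('k', real=True, nonnegative=True)
    A = sp.Matrix([[sp.Rational(-1, 2), 1],
                   [-2 - k, sp.Rational(-4, 5)]])   # A(k), entries chosen by us

    eigs = list(A.eigenvals())                      # symbolic eigenvalues
    for lam in eigs:
        sp.pprint(sp.simplify(lam))

    # Numerical loci as the gain sweeps: the real part (damping) is fixed
    # by the trace; the frequency grows with k through the determinant.
    for k_val in (0, 1, 4):
        print(k_val, [lam.evalf(4, subs={k: k_val}) for lam in eigs])
    ```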

  17. Advances in computational fluid dynamics solvers for modern computing environments

    NASA Astrophysics Data System (ADS)

    Hertenstein, Daniel; Humphrey, John R.; Paolini, Aaron L.; Kelmelis, Eric J.

    2013-05-01

    EM Photonics has been investigating the application of massively multicore processors to a key problem area: Computational Fluid Dynamics (CFD). While the capabilities of CFD solvers have continually increased and improved to support features such as moving bodies and adjoint-based mesh adaptation, the software architecture has often lagged behind. This has led to poor scaling as core counts reach the tens of thousands. In the modern High Performance Computing (HPC) world, clusters with hundreds of thousands of cores are becoming the standard. In addition, accelerator devices such as NVIDIA GPUs and Intel Xeon Phi are being installed in many new systems. It is important for CFD solvers to take advantage of the new hardware as the computations involved are well suited for the massively multicore architecture. In our work, we demonstrate, using AVUS as an example, that new features in NVIDIA GPUs are able to empower existing CFD solvers; AVUS is a CFD solver developed by the Air Force Research Laboratory (AFRL) and the Volcanic Ash Advisory Center (VAAC). The effort has resulted in increased performance and scalability without sacrificing accuracy. There are many well-known codes in the CFD space that can benefit from this work, such as FUN3D, OVERFLOW, and TetrUSS. Such codes are widely used in the commercial, government, and defense sectors.

  18. Accommodating Heterogeneity in a Debugger for Distributed Computations

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Cheng, Doreen; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In an ongoing project at NASA Ames Research Center, we are building a debugger for distributed computations running on a heterogeneous set of machines. Historically, such debuggers have been built as front-ends to existing source-level debuggers on the target platforms. In effect, these back-end debuggers are providing a collection of debugger services to a client. The major drawback is that because of inconsistencies among the back-end debuggers, the front-end must use a different protocol when talking to each back-end debugger. This can make the front-end quite complex. We have avoided this complexity problem by defining the client-server debugger protocol. While it does require vendors to adapt their existing debugger code to meet the protocol, vendors are generally interested in doing so because the approach has several advantages. In addition to solving the heterogeneous platform debugging problem, it will be possible to write interesting debugger user interfaces that can be easily ported across a variety of machines. This will likely encourage investment in application-domain specific debuggers. In fact, the user interface of our debugger will be geared to scientists developing computational fluid dynamics codes. This paper describes some of the problems encountered in developing a portable debugger for heterogeneous, distributed computing and how the architecture of our debugger avoids them. It then provides a detailed description of the debugger client-server protocol. Some of the more interesting attributes of the protocol are: (1) It is object-oriented; (2) It uses callback functions to capture the asynchronous nature of debugging in a procedural fashion; (3) It contains abstractions, such as in-line instrumentation, for the debugging of computationally intensive programs; (4) For remote debugging, it has operations that enable the implementor to optimize message passing traffic between client and server. The soundness of the protocol is being tested through

  19. Computational fluid dynamics: Transition to design applications

    NASA Technical Reports Server (NTRS)

    Bradley, R. G.; Bhateley, I. C.; Howell, G. A.

    1987-01-01

    The development of aerospace vehicles, over the years, was an evolutionary process in which engineering progress in the aerospace community was based, generally, on prior experience and data bases obtained through wind tunnel and flight testing. Advances in the fundamental understanding of flow physics, wind tunnel and flight test capability, and mathematical insights into the governing flow equations were translated into improved air vehicle design. The modern day field of Computational Fluid Dynamics (CFD) is a continuation of the growth in analytical capability and the digital mathematics needed to solve the more rigorous form of the flow equations. Some of the technical and managerial challenges that result from rapidly developing CFD capabilities, some of the steps being taken by the Fort Worth Division of General Dynamics to meet these challenges, and some of the specific areas of application for high performance air vehicles are presented.

  20. Computational dynamics of acoustically driven microsphere systems.

    PubMed

    Glosser, Connor; Piermarocchi, Carlo; Li, Jie; Dault, Dan; Shanker, B

    2016-01-01

    We propose a computational framework for the self-consistent dynamics of a microsphere system driven by a pulsed acoustic field in an ideal fluid. Our framework combines a molecular dynamics integrator describing the dynamics of the microsphere system with a time-dependent integral equation solver for the acoustic field that makes use of fields represented as surface expansions in spherical harmonic basis functions. The presented approach allows us to describe the interparticle interaction induced by the field as well as the dynamics of trapping in counter-propagating acoustic pulses. The integral equation formulation leads to equations of motion for the microspheres describing the effect of nondissipative drag forces. We show (1) that the field-induced interactions between the microspheres give rise to effective dipolar interactions, with effective dipoles defined by their velocities and (2) that the dominant effect of an ultrasound pulse through a cloud of microspheres gives rise mainly to a translation of the system, though we also observe both expansion and contraction of the cloud determined by the initial system geometry. PMID:26871188

  1. Computational dynamics of acoustically driven microsphere systems

    NASA Astrophysics Data System (ADS)

    Glosser, Connor; Piermarocchi, Carlo; Li, Jie; Dault, Dan; Shanker, B.

    2016-01-01

    We propose a computational framework for the self-consistent dynamics of a microsphere system driven by a pulsed acoustic field in an ideal fluid. Our framework combines a molecular dynamics integrator describing the dynamics of the microsphere system with a time-dependent integral equation solver for the acoustic field that makes use of fields represented as surface expansions in spherical harmonic basis functions. The presented approach allows us to describe the interparticle interaction induced by the field as well as the dynamics of trapping in counter-propagating acoustic pulses. The integral equation formulation leads to equations of motion for the microspheres describing the effect of nondissipative drag forces. We show (1) that the field-induced interactions between the microspheres give rise to effective dipolar interactions, with effective dipoles defined by their velocities and (2) that the dominant effect of an ultrasound pulse through a cloud of microspheres gives rise mainly to a translation of the system, though we also observe both expansion and contraction of the cloud determined by the initial system geometry.

  2. Shuttle rocket booster computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chung, T. J.; Park, O. Y.

    1988-01-01

    Additional results and a revised and improved computer program listing from the shuttle rocket booster computational fluid dynamics formulations are presented. Numerical calculations for the flame zone of solid propellants are carried out using the Galerkin finite elements, with perturbations expanded to the zeroth, first, and second orders. The results indicate that amplification of oscillatory motions does indeed prevail in high frequency regions. For the second order system, the trend is similar to the first order system for low frequencies, but instabilities may appear at frequencies lower than those of the first order system. The most significant effect of the second order system is that the admittance is extremely oscillatory between moderately high frequency ranges.

  3. LaRC computational dynamics overview

    NASA Technical Reports Server (NTRS)

    Husner, J. M.

    1989-01-01

    Present research centers on the development of advanced computational methods for transient simulation analyses. Aircraft, launch vehicles and space structure components are potential applications, but primary focus is presently on large space structures. There are both in-house and out-of-house activities. The in-house activity centers around the development of a multibody simulation tool for truss-like structures called LATDYN for Large Angle Transient DYNamics. Multibody analysis involves articulation of structural components as well as robotic maneuvers. These items are necessary for construction (erection or deployment) of large space structures in orbit and the carrying out of certain operations on board the space station. Thus, part of the in-house activity involves the development of methods which treat the changing mass, stiffness and constraints associated with articulating systems. The out-of-house activity involves subcycling, development of large deformation/motion beam formulation, constraint stabilization and direct time integration transient algorithms in parallel computing.

  4. Computational fluid dynamics of reaction injection moulding

    NASA Astrophysics Data System (ADS)

    Mateus, Artur; Mitchell, Geoffrey; Bártolo, Paulo

    2012-09-01

    The modern approach to the development of moulds for injection moulding (Reaction Injection Moulding - RIM, Thermoplastic Injection Moulding - TIM and others) differs from the conventional approach based exclusively on the designer's experience and hypotheses. The increasing complexity of moulds and clients' demands for improved quality, shorter delivery times, and lower prices require novel approaches to developing optimal moulds and moulded parts. The development of more accurate computational tools is fundamental to optimize both the injection moulding processes and the design, quality and durability of the moulds. This paper focuses on the RIM process, proposing a novel thermo-rheo-kinetic model. The proposed model was implemented in general-purpose Computational Fluid Dynamics (CFD) software and accurately describes both the flow and curing stages. Simulation results were validated against experimental results.

  5. Computational fluid dynamics in cardiovascular disease.

    PubMed

    Lee, Byoung-Kwon

    2011-08-01

    Computational fluid dynamics (CFD) is a mechanical engineering field for analyzing fluid flow, heat transfer, and associated phenomena using computer-based simulation. CFD is a widely adopted methodology for solving complex problems in many modern engineering fields. The merits of CFD are the development of new and improved devices and system designs and the optimization of existing equipment through computational simulation, resulting in enhanced efficiency and lower operating costs. In the biomedical field, however, CFD is still emerging. The main reason why CFD in the biomedical field has lagged behind is the tremendous complexity of human body fluid behavior. Recently, CFD biomedical research has become more accessible, because high performance hardware and software are easily available with advances in computer science. All CFD processes contain three main components to provide useful information: pre-processing, solving mathematical equations, and post-processing. Accurate initial geometric modeling and boundary conditions are essential to achieve adequate results. Medical imaging, such as ultrasound imaging, computed tomography, and magnetic resonance imaging, can be used for modeling, and Doppler ultrasound, pressure wire, and non-invasive pressure measurements are used for flow velocity and pressure as boundary conditions. Many simulations and clinical results have been used to study congenital heart disease, heart failure, ventricle function, aortic disease, and carotid and intra-cranial cerebrovascular diseases. With decreasing hardware costs and rapid computing times, researchers and medical scientists may increasingly use this reliable CFD tool to deliver accurate results. A realistic, multidisciplinary approach is essential to accomplish these tasks, and continuing collaboration between mechanical engineers and clinical and medical scientists is essential. CFD may be an important methodology to understand the pathophysiology of the development and

  6. Computational fluid dynamics symposium on aeropropulsion

    SciTech Connect

    Not Available

    1991-01-01

    Recognizing the considerable advances that have been made in computational fluid dynamics, the Internal Fluid Mechanics Division of NASA Lewis Research Center sponsored this symposium with the objective of providing a forum for exchanging information regarding recent developments in numerical methods, physical and chemical modeling, and applications. This conference publication is a compilation of 4 invited and 34 contributed papers presented in six sessions: algorithms one and two, turbomachinery, turbulence, components application, and combustors. Topics include numerical methods, grid generation, chemically reacting flows, turbulence modeling, inlets, nozzles, and unsteady flows.

  7. Two-phase computational fluid dynamics

    SciTech Connect

    Rothe, P.H.

    1991-07-26

    The results of the project illustrate the feasibility of multiphase computational fluid dynamics (CFD) software. Existing CFD software is capable of simulating particle fields, certain droplet fields, and certain free surface flows, and does so routinely in engineering applications. Stratified flows can be addressed by a multiphase CFD code, once one is developed with suitable capabilities. The groundwork for such a code has been laid. Calculations performed for stratified flows demonstrate the accuracy achievable and the convergence of the methodology. Extension of the stratified flow methodology to other segregated flows, such as slug or annular flow, faces no inherent limits. The research has commercial application in the development of multiphase CFD computer programs.

  8. Computational Fluid Dynamics Technology for Hypersonic Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2003-01-01

    Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state of the art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.

  9. High performance computations using dynamical nucleation theory

    NASA Astrophysics Data System (ADS)

    Windus, T. L.; Kathmann, S. M.; Crosby, L. D.

    2008-07-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described.
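
    The proposed 'master-slave' organization is straightforward to sketch: a master farms independent Monte Carlo walkers out to slave processes and aggregates their statistics. The sketch below uses Python multiprocessing and a stand-in energy function; the real target described in the paper is the NWChem implementation.

    ```python
    import math
    import multiprocessing as mp
    import random

    # Schematic of a master-slave Monte Carlo layout (the move set and the
    # quadratic 'energy' are stand-ins, not DNTMC's actual physics).
    def mc_walker(args):
        seed, n_steps = args
        rng = random.Random(seed)
        x, energy, accepted = 0.0, 0.0, 0
        for _ in range(n_steps):
            trial = x + rng.uniform(-0.5, 0.5)
            e_trial = trial * trial
            # Metropolis acceptance with beta = 1.
            if e_trial < energy or rng.random() < math.exp(energy - e_trial):
                x, energy, accepted = trial, e_trial, accepted + 1
        return energy, accepted / n_steps

    if __name__ == '__main__':
        with mp.Pool(processes=4) as pool:   # four 'slave' ranks
            results = pool.map(mc_walker, [(seed, 10000) for seed in range(8)])
        energies = [e for e, _ in results]
        print("mean energy:", sum(energies) / len(energies))
        print("acceptance ratios:", [round(a, 2) for _, a in results])
    ```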

  10. Computational Fluid Dynamics Symposium on Aeropropulsion

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Recognizing the considerable advances that have been made in computational fluid dynamics, the Internal Fluid Mechanics Division of NASA Lewis Research Center sponsored this symposium with the objective of providing a forum for exchanging information regarding recent developments in numerical methods, physical and chemical modeling, and applications. This conference publication is a compilation of 4 invited and 34 contributed papers presented in six sessions: algorithms one and two, turbomachinery, turbulence, components application, and combustors. Topics include numerical methods, grid generation, chemically reacting flows, turbulence modeling, inlets, nozzles, and unsteady flows.

  11. Verification and Validation in Computational Fluid Dynamics

    SciTech Connect

    OBERKAMPF, WILLIAM L.; TRUCANO, TIMOTHY G.

    2002-03-01

    Verification and validation (V and V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V and V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V and V, and develops a number of extensions to existing ideas. The review of the development of V and V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V and V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized.
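
    One routine verification activity the paper describes, measuring a computational solution against an analytical one, can be illustrated by computing the observed order of accuracy under grid refinement (the test problem here is ours, not the paper's):

    ```python
    import numpy as np

    # Code-verification sketch: measure the observed order of accuracy of a
    # second-order central-difference derivative against the analytic
    # solution (the test problem is illustrative).
    def discretization_error(n):
        x = np.linspace(0.0, np.pi, n)
        h = x[1] - x[0]
        u = np.sin(x)
        dudx = (u[2:] - u[:-2]) / (2 * h)     # interior central differences
        return np.abs(dudx - np.cos(x[1:-1])).max(), h

    errors = [discretization_error(n) for n in (17, 33, 65, 129)]
    for (e1, h1), (e2, h2) in zip(errors, errors[1:]):
        p = np.log(e1 / e2) / np.log(h1 / h2)  # observed order, should -> 2
        print(f"h = {h2:.4f}  error = {e2:.2e}  observed order = {p:.2f}")
    ```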

  12. Distributing an executable job load file to compute nodes in a parallel computer

    DOEpatents

    Gooding, Thomas M.

    2016-09-13

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
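
    A toy rendition of the claimed steps on a made-up tree of compute nodes: each node reports up if it or any descendant participates in the job, the union of reporting nodes forms the class route, and the load file is broadcast along that route only.

    ```python
    # Toy class-route construction on a tree of compute nodes (node ids and
    # topology are made up): a node joins the route if it participates in
    # the job or any descendant does, and reports that link to its parent.
    children = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
    participating = {4, 6}                    # nodes actually running the job

    def build_class_route(node, route):
        """Post-order walk: returns True if this subtree joins the route."""
        flags = [build_class_route(c, route) for c in children[node]]
        if node in participating or any(flags):
            route.add(node)                   # this node forwards the load file
            return True
        return False

    route = set()
    build_class_route(0, route)
    print("class route:", sorted(route))      # {0, 1, 2, 4, 6}
    print("broadcast the executable load file along these links only")
    ```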

  13. Distributing an executable job load file to compute nodes in a parallel computer

    DOEpatents

    Gooding, Thomas M.

    2016-08-09

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.

  14. A uniform approach for programming distributed heterogeneous computing systems

    PubMed Central

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-01-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015

  15. Distributed computing environments for future space control systems

    NASA Technical Reports Server (NTRS)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  16. Direct modeling for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct

  17. Optimization of an interactive distributive computer network

    NASA Technical Reports Server (NTRS)

    Frederick, V.

    1985-01-01

    The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.

  18. Spatiotemporal dynamics of distributed synthetic genetic circuits

    NASA Astrophysics Data System (ADS)

    Kanakov, Oleg; Laptyeva, Tetyana; Tsimring, Lev; Ivanchenko, Mikhail

    2016-04-01

    We propose and study models of two distributed synthetic gene circuits, a toggle switch and an oscillator, each split between two cell strains and coupled via quorum-sensing signals. The distributed toggle switch relies on mutual repression of the two strains, and the oscillator comprises two strains, one of which acts as an activator for the other, which in turn acts as a repressor. The distributed toggle switch can exhibit mobile fronts, switching the system from the weaker to the stronger spatially homogeneous state. The circuit can also act as a biosensor, with the switching front dynamics determined by the properties of an external signal. The distributed oscillator system displays another biosensor functionality: oscillations emerge once a small amount of one cell strain appears amid the other, present in abundance. Distribution of synthetic gene circuits among multiple strains allows one to reduce crosstalk among different parts of the overall system and also to decrease the energetic burden of the synthetic circuit per cell, which may allow for enhanced functionality and viability of engineered cells.
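
    A minimal ODE caricature of the distributed toggle switch (well-mixed, with illustrative parameters of our own rather than the paper's spatial model): each strain's quorum signal represses the other strain, and the initial bias selects which of the two stable states the system settles into.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Two-strain mutual-repression caricature: a and b are the quorum
    # signal levels of the two strains (parameters illustrative).
    alpha, n, delta = 4.0, 2.0, 1.0

    def toggle(t, y):
        a, b = y
        da = alpha / (1.0 + b ** n) - delta * a   # strain A repressed by B
        db = alpha / (1.0 + a ** n) - delta * b   # strain B repressed by A
        return [da, db]

    for y0 in ([2.0, 0.1], [0.1, 2.0]):           # two initial biases
        sol = solve_ivp(toggle, (0.0, 50.0), y0, rtol=1e-8)
        print("start", y0, "-> steady state", np.round(sol.y[:, -1], 3))
    ```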

  19. Utilizing parallel optimization in computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Kokkolaras, Michael

    1998-12-01

    General problems of interest in computational fluid dynamics are investigated by means of optimization. Specifically, in the first part of the dissertation, a method of optimal incremental function approximation is developed for the adaptive solution of differential equations. Various concepts and ideas utilized by numerical techniques employed in computational mechanics and artificial neural networks (e.g. function approximation and error minimization, variational principles and weighted residuals, and adaptive grid optimization) are combined to formulate the proposed method. The basis functions and associated coefficients of a series expansion, representing the solution, are optimally selected by a parallel direct search technique at each step of the algorithm according to appropriate criteria; the solution is built sequentially. In this manner, the proposed method is adaptive in nature, although a grid is neither built nor adapted in the traditional sense using a-posteriori error estimates. Variational principles are utilized for the definition of the objective function to be extremized in the associated optimization problems, ensuring that the problem is well-posed. Complicated data structures and expensive remeshing algorithms and systems solvers are avoided. Computational efficiency is increased by using low-order basis functions and concurrent computing. Numerical results and convergence rates are reported for a range of steady-state problems, including linear and nonlinear differential equations associated with general boundary conditions, and illustrate the potential of the proposed method. Fluid dynamics applications are emphasized. Conclusions are drawn by discussing the method's limitations, advantages, and possible extensions. The second part of the dissertation is concerned with the optimization of the viscous-inviscid-interaction (VII) mechanism in an airfoil flow analysis code. The VII mechanism is based on the concept of a transpiration velocity

  20. Distributed-Computer System Optimizes SRB Joints

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1991-01-01

    Initial calculations for the redesign of the joint on the solid rocket booster (SRB) that failed during the Space Shuttle tragedy showed that the redesign increased weight. Optimization techniques were applied to determine whether weight could be reduced while keeping the joint closed and limiting stresses. An analysis system was developed using existing software coupling structural analysis with optimization computations. The software was designed to be executable on a network of computer workstations. It took advantage of the parallelism offered by the finite-difference technique of computing gradients to enable several workstations to contribute simultaneously to the solution of the problem. Key features are the effective use of redundancies in hardware and flexible software, enabling optimization to proceed with minimal delay and decreasing the overall time to completion.
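
    The parallelism noted above comes from the fact that each finite-difference gradient component is an independent perturbed analysis, so one analysis can run per worker. A sketch with a stand-in objective (not the SRB structural model):

    ```python
    import math
    import multiprocessing as mp

    # Each forward-difference gradient entry is an independent analysis, so
    # the entries can be farmed out concurrently -- the property the SRB
    # joint study exploited. The 'analysis' below is a stand-in objective.
    H = 1e-6  # forward-difference step

    def analysis(design):
        weight = sum(d * d for d in design)   # stand-in structural analysis
        return weight + math.sin(design[0])

    def fd_component(args):
        design, i, f0 = args
        bumped = list(design)
        bumped[i] += H
        return (analysis(bumped) - f0) / H    # one independent gradient entry

    if __name__ == '__main__':
        design = [1.0, 2.0, 0.5, -0.3]
        f0 = analysis(design)
        with mp.Pool() as pool:               # one perturbed run per worker
            grad = pool.map(fd_component,
                            [(design, i, f0) for i in range(len(design))])
        print("gradient:", [round(g, 4) for g in grad])
    ```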

  1. Dynamic shared state maintenance in distributed virtual environments

    NASA Astrophysics Data System (ADS)

    Hamza-Lup, Felix George

    Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model. An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene. Mixed Reality (MR) and VR environments must bring the human participant interaction into the loop through a wide range of electronic motion sensors, and haptic devices. Part of the work presented here defines a novel criterion for categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory including 3D visualization applications using custom built head-mounted displays (HMDs) with optical motion tracking and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, results show that the dynamic shared state can be maintained consistent at multiple remotely located sites. In further consideration of the latency problems and in the light of the current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for

  2. Efficient gradient computation for dynamical models

    PubMed Central

    Sengupta, B.; Friston, K.J.; Penny, W.D.

    2014-01-01

    Data assimilation is a fundamental issue that arises across many scales in neuroscience — ranging from the study of single neurons using single electrode recordings to the interaction of thousands of neurons using fMRI. Data assimilation involves inverting a generative model that can not only explain observed data but also generate predictions. Typically, the model is inverted or fitted using conventional tools of (convex) optimization that invariably extremise some functional — norms, minimum descriptive length, variational free energy, etc. Generally, optimisation rests on evaluating the local gradients of the functional to be optimized. In this paper, we compare three different gradient estimation techniques that could be used for extremising any functional in time — (i) finite differences, (ii) forward sensitivities and a method based on (iii) the adjoint of the dynamical system. We demonstrate that the first-order gradients of a dynamical system, linear or non-linear, can be computed most efficiently using the adjoint method. This is particularly true for systems where the number of parameters is greater than the number of states. For such systems, integrating several sensitivity equations – as required with forward sensitivities – proves to be most expensive, while finite-difference approximations have an intermediate efficiency. In the context of neuroimaging, adjoint based inversion of dynamical causal models (DCMs) can, in principle, enable the study of models with large numbers of nodes and parameters. PMID:24769182
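
    The adjoint advantage is easy to demonstrate on a scalar discrete-time system of our own choosing: one forward and one backward pass yield the gradient at a cost independent of the number of parameters, and the result agrees with finite differences and the closed form.

    ```python
    # Gradient of a loss on a dynamical system via (i) finite differences
    # and (ii) the adjoint method. The system and loss are illustrative,
    # chosen so both can be checked against the closed form
    # dJ/da = (x_K - target) * K * a**(K-1) * x0.
    a, x0, target, K = 0.95, 1.0, 0.2, 50

    def forward(a):
        xs = [x0]
        for _ in range(K):                    # x_{k+1} = a * x_k
            xs.append(a * xs[-1])
        return xs

    def loss(a):
        return 0.5 * (forward(a)[-1] - target) ** 2

    # (i) finite differences: one extra forward solve per parameter.
    h = 1e-7
    g_fd = (loss(a + h) - loss(a)) / h

    # (ii) adjoint: one forward plus one backward solve, independent of
    # the number of parameters.
    xs = forward(a)
    lam = xs[-1] - target                     # terminal adjoint dJ/dx_K
    g_adj = 0.0
    for k in reversed(range(K)):
        g_adj += lam * xs[k]                  # accumulate dJ/da contribution
        lam = a * lam                         # adjoint recursion lam_k = a*lam_{k+1}

    print("finite difference:", g_fd)
    print("adjoint          :", g_adj)
    print("closed form      :", (xs[-1] - target) * K * a ** (K - 1) * x0)
    ```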

  3. Cardea: Dynamic Access Control in Distributed Systems

    NASA Technical Reports Server (NTRS)

    Lepro, Rebekah

    2004-01-01

    Modern authorization systems span domains of administration, rely on many different authentication sources, and manage complex attributes as part of the authorization process. This paper presents Cardea, a distributed system that facilitates dynamic access control, as a valuable piece of an interoperable authorization framework. First, the authorization model employed in Cardea and its functionality goals are examined. Next, critical features of the system architecture and its handling of the authorization process are examined. Then the SAML and XACML standards, as incorporated into the system, are analyzed. Finally, the future directions of this project are outlined and connection points with general components of an authorization system are highlighted.

  4. Dynamic Voltage Regulation Using Distributed Energy Resources

    SciTech Connect

    Xu, Yan; Rizy, D Tom; Li, Fangxing; Kueck, John D

    2007-01-01

    Many distributed energy resources (DE) are near load centres and equipped with power electronics converters to interface with the grid; it is therefore feasible for DE to provide ancillary services such as voltage regulation, nonactive power compensation, and power factor correction. A synchronous condenser and a microturbine with an inverter interface are implemented in parallel in a distribution system to regulate the local voltage. Voltage control schemes for the inverter and the synchronous condenser are developed. The experimental results show that both the inverter and the synchronous condenser can regulate the local voltage instantaneously, while the dynamic response of the inverter is faster than that of the synchronous condenser; and that integrated voltage regulation (multiple DE performing voltage regulation) can increase the voltage regulation capability, increase the lifetime of the equipment, and reduce capital and operation costs.

  5. Visualization of Unsteady Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The current computing environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamic (CFD) results is a super-computer class machine. The Massively Parallel Processors (MPP's) such as the 160 node IBM SP2 at NAS and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array and the J90 cluster) provide the required computation bandwidth for CFD calculations of transient problems. If we follow the traditional computational analysis steps for CFD (and we wish to construct an interactive visualizer) we need to be aware of the following: (1) Disk space requirements. A single snap-shot must contain at least the values (primitive variables) stored at the appropriate locations within the mesh. For most simple 3D Euler solvers that means 5 floating point words. Navier-Stokes solutions with turbulence models may contain 7 state-variables. (2) Disk speed vs. computational speeds. The time required to read the complete solution of a saved time frame from disk is now longer than the compute time for a set number of iterations from an explicit solver. Depending on the hardware and solver, an iteration of an implicit code may also take less time than reading the solution from disk. If one examines the performance improvements in the last decade or two, it is easy to see that relying on disk performance (vs. CPU improvement) may not be the best method for enhancing interactivity. (3) Cluster and parallel machine I/O problems. Disk access time is much worse within current parallel machines and clusters of workstations that are acting in concert to solve a single problem. In this case we are not trying to read the volume of data, but are running the solver, and the solver outputs the solution. These traditional network interfaces must be used for the file system. (4) Numerics of particle traces. Most visualization tools can work upon a single snap shot of the data but some visualization tools for transient
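
    The disk-space requirement in point (1) is quick to quantify; the mesh size, variable count, and frame count below are hypothetical:

    ```python
    # Back-of-envelope disk budget for unsteady CFD snapshots as the
    # abstract describes (mesh size and frame count are hypothetical).
    nodes = 5_000_000          # mesh points
    variables = 5              # primitive variables per point (Euler)
    bytes_per_word = 4         # single-precision floating point
    frames = 1000              # saved time steps

    per_frame = nodes * variables * bytes_per_word
    total = per_frame * frames
    print(f"one snapshot: {per_frame / 2**20:.1f} MiB")
    print(f"full run    : {total / 2**30:.1f} GiB")
    ```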

  6. Verification and validation in computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Oberkampf, William L.; Trucano, Timothy G.

    2002-04-01

    Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V&V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V&V, and develops a number of extensions to existing ideas. The review of the development of V&V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V&V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized. The fundamental strategy of validation is to assess how accurately the computational results compare with the experimental data, with quantified error and uncertainty estimates for both. This strategy employs a hierarchical methodology that segregates and simplifies the physical and coupling phenomena involved in the complex engineering system of interest. A hypersonic cruise missile is used as an example of how this hierarchical structure is formulated. The discussion of validation assessment also encompasses a number of other important topics. A set of guidelines is proposed for designing and conducting validation experiments, supported by an explanation of how validation experiments are different

  7. Domain decomposition algorithms and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    In the past several years, domain decomposition has been a very popular topic, motivated in part by the potential for parallelization. While a large body of theory and algorithms has been developed for model elliptic problems, these methods have only recently begun to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics is investigated: two-dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator, to be used in the iterative solution of the interface equations, is described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.
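
    The flavor of these methods can be conveyed by a toy example. The sketch below applies an alternating (multiplicative) Schwarz iteration with two overlapping subdomains to a 1D Poisson problem; it illustrates the general domain-decomposition idea only, not the interface preconditioners analyzed in the paper, and all sizes are arbitrary.

      import numpy as np

      # Alternating Schwarz iteration for -u'' = 1 on (0, 1), u(0) = u(1) = 0,
      # with two overlapping subdomains (a minimal domain-decomposition sketch).
      n = 99                       # interior grid points
      h = 1.0 / (n + 1)
      x = np.linspace(h, 1 - h, n)
      f = np.ones(n)
      u = np.zeros(n + 2)          # includes the physical boundary values

      def solve_subdomain(lo, hi):
          """Solve the local Dirichlet problem on interior indices lo..hi,
          taking boundary data at both ends from the current global iterate."""
          m = hi - lo + 1
          A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
               - np.diag(np.ones(m - 1), -1)) / h**2
          rhs = f[lo - 1:hi].copy()
          rhs[0]  += u[lo - 1] / h**2     # left boundary from current iterate
          rhs[-1] += u[hi + 1] / h**2     # right boundary from current iterate
          u[lo:hi + 1] = np.linalg.solve(A, rhs)

      for it in range(30):
          solve_subdomain(1, 60)          # left subdomain (indices 1..60)
          solve_subdomain(40, n)          # right subdomain, overlapping 40..60

      exact = x * (1 - x) / 2
      print("max error:", np.abs(u[1:-1] - exact).max())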

  8. High performance computations using dynamical nucleation theory

    SciTech Connect

    Windus, Theresa L.; Kathmann, Shawn M.; Crosby, Lonnie D.

    2008-07-14

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular-level physics of critical challenges in science. In this paper, the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular-scale nucleation rate constants - and its parallel capabilities are described. The potential for bottlenecks and the challenges of running on future petascale or larger resources are delineated. A "master-slave" solution is proposed to scale to the petascale and will be developed in the NWChem software. Mathematical and data analysis challenges are also described. This work was supported by the U.S. Department of Energy's (DOE) Office of Basic Energy Sciences, Chemical Sciences program. The Pacific Northwest National Laboratory is operated by Battelle for DOE.
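
    The proposed "master-slave" organization is a generic task-farm pattern. The following minimal Python sketch illustrates it with a stand-in sampling task; it is not NWChem code, and the mc_sample function and all parameters are invented for illustration.

      import multiprocessing as mp

      def mc_sample(seed):
          # Stand-in "energy evaluation": average of a few pseudo-random draws.
          import random
          rng = random.Random(seed)
          return sum(rng.random() for _ in range(1000)) / 1000

      def worker(tasks, results):
          for seed in iter(tasks.get, None):        # run until the None sentinel
              results.put((seed, mc_sample(seed)))

      if __name__ == "__main__":
          tasks, results = mp.Queue(), mp.Queue()
          procs = [mp.Process(target=worker, args=(tasks, results))
                   for _ in range(4)]
          for p in procs:
              p.start()
          for seed in range(100):                   # master hands out work items
              tasks.put(seed)
          for _ in procs:
              tasks.put(None)                       # one sentinel per worker
          samples = [results.get() for _ in range(100)]
          for p in procs:
              p.join()
          print("collected", len(samples), "samples")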

  9. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1992-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
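
    As a minimal illustration of Newton's method serving as the nonlinear iteration, the sketch below solves a 1D steady viscous Burgers discretization with a finite-difference Jacobian. It is a toy stand-in under assumed parameters, not the 2D backstep problem or the multiple-discretization strategy of the paper.

      import numpy as np

      # Newton's method on a 1D steady viscous Burgers discretization:
      # u u_x = nu u_xx with u(0) = 1, u(1) = -1 (illustrative parameters).
      n, nu = 49, 0.1
      h = 1.0 / (n + 1)

      def residual(u):
          U = np.concatenate(([1.0], u, [-1.0]))       # apply boundary values
          ux  = (U[2:] - U[:-2]) / (2 * h)             # central first derivative
          uxx = (U[2:] - 2 * U[1:-1] + U[:-2]) / h**2  # second derivative
          return U[1:-1] * ux - nu * uxx

      u = np.linspace(1.0, -1.0, n + 2)[1:-1]          # initial guess
      for it in range(20):
          F = residual(u)
          if np.linalg.norm(F) < 1e-10:
              break
          J = np.zeros((n, n))                         # finite-difference Jacobian
          eps = 1e-7
          for j in range(n):
              du = np.zeros(n); du[j] = eps
              J[:, j] = (residual(u + du) - F) / eps
          u -= np.linalg.solve(J, F)
      print("Newton steps:", it, " residual:", np.linalg.norm(residual(u)))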

  10. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  11. Computational fluid dynamics of airfoils and wings

    NASA Technical Reports Server (NTRS)

    Garabedian, P.; Mcfadden, G.

    1982-01-01

    It is pointed out that transonic flow is one of the fields where computational fluid dynamics turns out to be most effective. Codes for the design and analysis of supercritical airfoils and wings have become standard tools of the aircraft industry. The present investigation is concerned with mathematical models and theorems which account for some of the progress that has been made. The most successful aerodynamics codes are those for the analysis of flow at off-design conditions where weak shock waves appear. A major breakthrough was achieved by Murman and Cole (1971), who conceived of a retarded difference scheme which incorporates artificial viscosity to capture shocks in the supersonic zone. This concept has been used to develop codes for the analysis of transonic flow past a swept wing. Attention is given to the trailing edge and the boundary layer, entropy inequalities and wave drag, shockless airfoils, and the inverse swept wing code.

  12. Lecture series in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Thompson, Kevin W.

    1987-01-01

    The lecture notes cover the basic principles of computational fluid dynamics (CFD). They are oriented more toward practical applications than theory, and are intended to serve as a unified source for basic material in the CFD field as well as an introduction to more specialized topics in artificial viscosity and boundary conditions. Each chapter in the text is associated with a videotaped lecture. The basic properties of conservation laws, wave equations, and shock waves are described. The duality of the conservation law and wave representations is investigated, and shock waves are examined in some detail. Finite difference techniques are introduced for the solution of wave equations and conservation laws. Stability analysis for finite difference approximations is presented. A consistent description of artificial viscosity methods is provided. Finally, the problem of nonreflecting boundary conditions is treated.
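
    As a small companion to the material on finite differences and artificial viscosity, the following sketch advances the linear advection equation with the Lax-Friedrichs scheme, whose built-in dissipation is the simplest example of the artificial-viscosity idea; all parameters are illustrative.

      import numpy as np

      # Lax-Friedrichs scheme for u_t + a u_x = 0 on a periodic domain.
      a, nx, cfl = 1.0, 200, 0.8
      dx = 1.0 / nx
      dt = cfl * dx / abs(a)
      x = np.arange(nx) * dx
      u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # square pulse initial data

      for step in range(int(0.5 / dt)):               # advect until t = 0.5
          up = np.roll(u, -1)                         # u_{j+1} (periodic)
          um = np.roll(u, 1)                          # u_{j-1}
          u = 0.5 * (up + um) - a * dt / (2 * dx) * (up - um)

      # The scheme is monotone at this CFL number: the pulse is smeared by the
      # built-in dissipation but remains bounded between 0 and 1.
      print("min/max after advection:", u.min(), u.max())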

  13. Artificial Intelligence In Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1991-01-01

    Paper compares four first-generation artificial-intelligence (AI) software systems for computational fluid dynamics. Includes: Expert Cooling Fan Design System (EXFAN), PAN AIR Knowledge System (PAKS), grid-adaptation program MITOSIS, and Expert Zonal Grid Generation (EZGrid). Focuses on knowledge-based ("expert") software systems. Analyzes intended tasks, kinds of knowledge possessed, magnitude of effort required to codify knowledge, how quickly constructed, performances, and return on investment. On basis of comparison, concludes AI most successful when applied to well-formulated problems solved by classifying or selecting preenumerated solutions. In contrast, application of AI to poorly understood or poorly formulated problems generally results in long development time and large investment of effort, with no guarantee of success.

  14. Computational modeling of intraocular gas dynamics.

    PubMed

    Noohi, P; Abdekhodaie, M J; Cheng, Y L

    2015-01-01

    The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The presented model predicted the intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. SF6, both pure and diluted with air, was considered as the injected gas. The presented results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF6 was used, no significant expansion was observed. Head positioning for the treatment of the retinal tear also influenced the rate of gas absorption. Moreover, the determined tolerance angle depended on the bubble and tear size: greater bubble expansion and a smaller retinal tear yielded a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm the tolerance angle when using pure SF6 is 1.4 times that of diluted SF6 with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency. PMID:26682529

  15. Computational modeling of intraocular gas dynamics

    NASA Astrophysics Data System (ADS)

    Noohi, P.; Abdekhodaie, M. J.; Cheng, Y. L.

    2015-12-01

    The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The presented model predicted the intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. SF6, both pure and diluted with air, was considered as the injected gas. The presented results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF6 was used, no significant expansion was observed. Head positioning for the treatment of the retinal tear also influenced the rate of gas absorption. Moreover, the determined tolerance angle depended on the bubble and tear size: greater bubble expansion and a smaller retinal tear yielded a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm the tolerance angle when using pure SF6 is 1.4 times that of diluted SF6 with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency.

  16. Distributed computing environment monitoring and user expectations

    SciTech Connect

    Cottrell, R.L.A.; Logg, C.A.

    1995-11-01

    This paper discusses the growing need for distributed system monitoring and compares it with current practice. It then identifies the components of distributed system monitoring and shows how they are implemented and successfully used at one site today to address the Local Area Network (LAN), network services and applications, the Wide Area Network (WAN), and host monitoring. It shows how this monitoring can be used to develop realistic service-level expectations and also identifies the costs. Finally, the paper briefly discusses future challenges in network monitoring.

  17. A comparison of queueing, cluster and distributed computing systems

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster: cluster management software is necessary to harness the collective computing power. A variety of cluster management and queueing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  18. Computational fluid dynamics of left ventricular ejection.

    PubMed

    Georgiadis, J G; Wang, M; Pasipoularides, A

    1992-01-01

    The present investigation addresses the effects of simple geometric variations on intraventricular ejection dynamics, by methods from computational fluid dynamics. It is an early step in incorporating more and more relevant characteristics of the ejection process, such as a continuously changing irregular geometry, in numerical simulations. We consider the effects of varying chamber eccentricities and outflow valve orifice-to-inner surface area ratios on instantaneous ejection gradients along the axis of symmetry of the left ventricle. The equation of motion for the streamfunction was discretized and solved iteratively with specified boundary conditions on a boundary-fitted adaptive grid, using an alternating-direction-implicit (ADI) algorithm. The unsteady aspects of the ejection process were subsequently introduced into the numerical simulation. It was shown that for given chamber volume and outflow orifice area, higher chamber eccentricities require higher ejection pressure gradients for the same velocity and local acceleration values at the aortic annulus than do more spherical shapes. This finding is attributable to the rise in local acceleration effects across the outflow axis. This is to be contrasted with the case of outflow orifice stenosis, in which it was shown that it is the convective acceleration effects that are intensified strongly. PMID:1562106
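
    For readers unfamiliar with the ADI idea used here, the sketch below performs one Peaceman-Rachford ADI step for a 2D diffusion problem. It illustrates the alternating-direction structure only, under arbitrary parameters, and is not the streamfunction solver of the study.

      import numpy as np
      from scipy.linalg import solve_banded

      # One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on the unit square
      # with zero Dirichlet boundaries.
      n, dt = 50, 1e-4
      h = 1.0 / (n + 1)
      r = dt / (2 * h**2)

      # Banded form of the implicit tridiagonal operator (I + r*[−1, 2, −1]).
      ab = np.zeros((3, n))
      ab[0, 1:] = -r                  # superdiagonal
      ab[1, :]  = 1 + 2 * r           # main diagonal
      ab[2, :-1] = -r                 # subdiagonal

      def explicit_1d(u, axis):
          """Apply (I - r*[−1, 2, −1]) along one axis, zero boundaries."""
          up = np.zeros_like(u); um = np.zeros_like(u)
          if axis == 0:
              up[:-1] = u[1:]; um[1:] = u[:-1]
          else:
              up[:, :-1] = u[:, 1:]; um[:, 1:] = u[:, :-1]
          return (1 - 2 * r) * u + r * (up + um)

      u = np.random.rand(n, n)        # some initial interior field
      # Half step: implicit in x, explicit in y.
      rhs = explicit_1d(u, axis=1)
      u = np.column_stack([solve_banded((1, 1), ab, rhs[:, j]) for j in range(n)])
      # Half step: implicit in y, explicit in x.
      rhs = explicit_1d(u, axis=0)
      u = np.vstack([solve_banded((1, 1), ab, rhs[i, :]) for i in range(n)])
      print("field norm after one ADI step:", np.linalg.norm(u))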

  19. Nonlinear ship waves and computational fluid dynamics

    PubMed Central

    MIYATA, Hideaki; ORIHARA, Hideo; SATO, Yohei

    2014-01-01

    Research undertaken in the first author’s laboratory at the University of Tokyo over the past 30 years is highlighted. The finding that nonlinear waves (named Free-Surface Shock Waves) occur in the vicinity of a ship advancing at constant speed provided the starting point for the progress of innovative technologies in ship hull-form design. Based on these findings, a multitude of Computational Fluid Dynamic (CFD) techniques have been developed over this period, and are highlighted in this paper. The TUMMAC code has been developed for wave problems, based on a rectangular grid system, while the WISDAM code treats both wave and viscous flow problems in the framework of a boundary-fitted grid system. These two techniques are able to cope with almost all fluid dynamical problems relating to ships, including resistance, ship’s motion and ride-comfort issues. Consequently, the two codes have contributed significantly to the progress in the technology of ship design, and now form an integral part of the ship-designing process. PMID:25311139

  20. Computational Fluid Dynamics - Applications in Manufacturing Processes

    NASA Astrophysics Data System (ADS)

    Beninati, Maria Laura; Kathol, Austin; Ziemian, Constance

    2012-11-01

    A new Computational Fluid Dynamics (CFD) exercise has been developed for the undergraduate introductory fluid mechanics course at Bucknell University. The goal is to develop a computational exercise that links the manufacturing processes course and the concurrent fluid mechanics course in a way that reinforces the concepts in both. In general, CFD is used as a tool to increase student understanding of the fundamentals in a virtual world. A ``learning factory,'' currently in development at Bucknell, seeks to use the laboratory as a means to link courses that at first glance seemed to have little correlation. A large part of the manufacturing processes course is a project using an injection molding machine. The flow of pressurized molten polyurethane into the mold cavity is also an example of fluid motion (a jet of liquid hitting a plate) applied in manufacturing. The students run a CFD process that captures this flow using a virtual mold created with a graphics package, such as SolidWorks. The laboratory structure is currently being implemented and analyzed as a part of the ``learning factory''. Lastly, surveys taken before and after the CFD exercise demonstrate a better understanding of both CFD and the manufacturing process.

  1. A lightweight communication library for distributed computing

    NASA Astrophysics Data System (ADS)

    Groen, Derek; Rieder, Steven; Grosso, Paola; de Laat, Cees; Portegies Zwart, Simon

    2010-01-01

    We present MPWide, a platform-independent communication library for performing message passing between computers. Our library allows coupling of several local message passing interface (MPI) applications through a long-distance network and is specifically optimized for such communications. The implementation is deliberately kept lightweight and platform independent, and the library can be installed and used without administrative privileges. The only requirements are a C++ compiler and at least one open port to a wide-area network on each site. In this paper we present the library, describe the user interface, present performance tests and apply MPWide in a large-scale cosmological N-body simulation on a network of two computers, one in Amsterdam and the other in Tokyo.
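
    The core service such a library provides is framed point-to-point message transfer over sockets. The following Python sketch of length-prefixed TCP messaging conveys the idea; it is an illustration only and does not reproduce the MPWide C++ API, and the host name and port in the usage comments are placeholders.

      import socket
      import struct

      # Minimal length-prefixed message framing over TCP, sketching the kind of
      # point-to-point transfer a wide-area communication library performs.

      def send_msg(sock, payload: bytes):
          # 4-byte big-endian length header followed by the payload.
          sock.sendall(struct.pack("!I", len(payload)) + payload)

      def recv_exact(sock, nbytes):
          buf = b""
          while len(buf) < nbytes:
              chunk = sock.recv(nbytes - len(buf))
              if not chunk:
                  raise ConnectionError("peer closed connection")
              buf += chunk
          return buf

      def recv_msg(sock) -> bytes:
          (length,) = struct.unpack("!I", recv_exact(sock, 4))
          return recv_exact(sock, length)

      # Usage sketch (server side):
      #   srv = socket.create_server(("", 6000)); conn, _ = srv.accept()
      #   print(recv_msg(conn))
      # Client side:
      #   c = socket.create_connection(("server.example.org", 6000))
      #   send_msg(c, b"halo particle block 42")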

  2. Dynamic algorithm for correlation noise estimation in distributed video coding

    NASA Astrophysics Data System (ADS)

    Thambu, Kuganeswaran; Fernando, Xavier; Guan, Ling

    2010-01-01

    Low-complexity encoders at the expense of high-complexity decoders are advantageous in wireless video sensor networks. Distributed video coding (DVC) achieves this complexity balance: the receiver computes side information (SI) by interpolating the key frames. Side information is modeled as a noisy version of the input video frame. In practice, correlation noise estimation at the receiver is a complex problem, and currently the noise is estimated from the residual variance between pixels of the key frames; the estimated (fixed) variance is then used to calculate the bit-metric values. In this paper, we introduce a new variance estimation technique that relies on the bit pattern of each pixel and is dynamically calculated over the entire motion environment, which helps to compute the soft-value information required by the decoder. Our results show that the proposed bit-based dynamic variance estimation significantly improves peak signal-to-noise ratio (PSNR) performance.
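
    For context, the fixed-variance baseline that the paper improves upon can be sketched as follows. The frame data and the simple averaging interpolation are stand-ins for illustration; a real decoder would use motion-compensated interpolation and a Laplacian noise model parameterized by this variance.

      import numpy as np

      # Baseline correlation-noise estimate in DVC: side information is an
      # interpolation of two key frames, and the (fixed) noise variance is
      # taken from the key-frame residual.

      def side_info_and_variance(key_prev, key_next):
          si = 0.5 * (key_prev.astype(float) + key_next.astype(float))
          residual = 0.5 * (key_next.astype(float) - key_prev.astype(float))
          return si, residual.var()

      key_prev = np.random.randint(0, 256, (64, 64))   # stand-in key frames
      key_next = np.random.randint(0, 256, (64, 64))
      si, var = side_info_and_variance(key_prev, key_next)
      print("estimated correlation-noise variance:", var)
      # This single variance feeds the bit metrics; the paper instead adapts
      # the variance dynamically from the bit pattern of each pixel.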

  3. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.

  4. Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models

    EPA Science Inventory

    The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...

  5. Parallel Computing Environments and Methods for Power Distribution System Simulation

    SciTech Connect

    Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

    2005-11-10

    The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC memory and its run time increases to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.

  6. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, workstations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there has been a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources has increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, workstations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  7. A distributed computing model for telemetry data processing

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  8. A distributed computing model for telemetry data processing

    NASA Astrophysics Data System (ADS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  9. Clock distribution system for digital computers

    DOEpatents

    Wyman, Robert H.; Loomis, Jr., Herschel H.

    1981-01-01

    Apparatus for eliminating, in each clock distribution amplifier of a clock distribution system, sequential pulse catch-up error due to one pulse "overtaking" a prior clock pulse. The apparatus includes timing means to produce a periodic electromagnetic signal with a fundamental frequency having a fundamental frequency component V'_01(t); an array of N signal characteristic detector means, with detector means No. 1 receiving the timing means signal and producing a change-of-state signal V_1(t) in response to receipt of a signal above a predetermined threshold; N substantially identical filter means, one filter means being operatively associated with each detector means, for receiving the change-of-state signal V_n(t) and producing a modified change-of-state signal V'_n(t) (n = 1, ..., N) having a fundamental frequency component that is substantially proportional to V'_01(t - θ_n(t)) with a cumulative phase shift θ_n(t) having a time derivative that may be made uniformly and arbitrarily small; and with the detector means n+1 (1 ≤ n

  10. Distributed sensor networks with collective computation

    SciTech Connect

    Lanman, D. R.

    2001-01-01

    Simulations of a network of N sensors have been performed. The simulation space contains a number of sound sources and a large number of sensors. Each sensor is equipped with an omni-directional microphone and is capable of measuring only the time of arrival of a signal. Sensors are able to wirelessly transmit and receive packets of information, and have some computing power. The sensors were programmed to merge all information (received packets as well as local measurements) into a 'world view' for that node. This world view is then transmitted. In this way, information can slowly diffuse across the network. One node was monitored in the network as a proxy for when information had diffused across the network. Simulations demonstrated that the energy expended per sensor per time step was approximately independent of N.

  11. Nonlinear Fluid Computations in a Distributed Environment

    NASA Technical Reports Server (NTRS)

    Atwood, Christopher A.; Smith, Merritt H.

    1995-01-01

    The performance of loosely- and tightly-coupled workstation clusters is compared against a conventional vector supercomputer for the solution of the Reynolds-averaged Navier-Stokes equations. The application geometries include a transonic airfoil, a tiltrotor wing/fuselage, and a wing/body/empennage/nacelle transport. Decomposition is of the manager-worker type, with the solution of one grid zone per worker process, coupled using the PVM message-passing library. Task allocation is determined by grid size and processor speed, subject to available-memory penalties. Each fluid zone is computed using an implicit diagonal scheme in an overset mesh framework, while relative body motion is accommodated using an additional worker process to re-establish grid communication.
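
    Task allocation by grid size and processor speed can be sketched as a greedy weighted assignment. The zone names, sizes, and relative speeds below are invented for illustration and are not the paper's actual cases.

      # Greedy static allocation of grid zones to workers, weighting work by
      # grid size and capacity by processor speed. All numbers are illustrative.

      zones  = {"wing": 900_000, "fuselage": 600_000, "nacelle": 250_000,
                "empennage": 150_000}                  # points per zone (assumed)
      speeds = {"ws1": 1.0, "ws2": 1.0, "ws3": 0.5}    # relative processor speeds

      load = {w: 0.0 for w in speeds}                  # normalized load per worker
      assignment = {}
      for zone, size in sorted(zones.items(), key=lambda kv: -kv[1]):
          # Place the largest remaining zone where it finishes earliest.
          w = min(load, key=lambda w: load[w] + size / speeds[w])
          assignment[zone] = w
          load[w] += size / speeds[w]

      print(assignment)
      print({w: round(l) for w, l in load.items()})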

  12. Computational fluid dynamics modelling in cardiovascular medicine.

    PubMed

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. PMID:26512019

  13. Computational fluid dynamics modelling in cardiovascular medicine

    PubMed Central

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards ‘digital patient’ or ‘virtual physiological human’ representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. PMID:26512019

  14. Transient dynamic distributed strain sensing using photonic crystal fibres

    NASA Astrophysics Data System (ADS)

    Samad, Shafeek A.; Hegde, G. M.; Roy Mahapatra, D.; Hanagud, S.

    2014-02-01

    A technique is proposed to determine the strain field in a one-dimensional (1D) photonic crystal (PC) under the high strain rates and high temperatures found around shock or ballistic impact. Transient strain sensing is important in aerospace and other structural health monitoring (SHM) applications. We consider a MEMS-based smart sensor design with a photonic crystal integrated on a silicon substrate for dynamic strain correlation. Deeply etched silicon rib waveguides with distributed Bragg reflectors are suitable candidates for the miniaturization of sensing elements, replacing the conventional FBG. The main objective here is to investigate the effect of non-uniform strain localization on the sensor output. Computational analysis is performed to determine the static and dynamic strain sensing characteristics of the 1D photonic-crystal-based sensor. The structure is designed and modeled using the Finite Element Method. Dynamic localization of the strain field is observed, and the distributed strain field is used to calculate the PC waveguide response. The sensitivity of the proposed sensor is estimated to be 0.6 pm/μɛ.
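
    For orientation, a Bragg-type reflector's strain response follows the standard relation Δλ = λ_B (1 − p_e) ε. The sketch below evaluates this with assumed fibre-like values; the silicon rib-waveguide device here has its own effective index and photoelastic response, which is why its reported sensitivity (~0.6 pm/μɛ) differs from the number this toy calculation gives.

      # Strain response of a Bragg-type reflector: shift = lambda_B * (1 - p_e) * strain.
      # Values below are assumed for illustration only.

      lambda_B = 1550e-9        # Bragg wavelength, m (assumed)
      p_e = 0.22                # effective photoelastic coefficient (assumed)

      def bragg_shift_pm(strain):
          """Wavelength shift in picometres for a given (dimensionless) strain."""
          return lambda_B * (1 - p_e) * strain * 1e12

      for microstrain in (1, 10, 100):
          print(f"{microstrain:>4} microstrain -> "
                f"{bragg_shift_pm(microstrain * 1e-6):.2f} pm")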

  15. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  16. Product limit estimation for capturing of pressure distribution dynamics.

    PubMed

    Wininger, Michael; Crane, Barbara A

    2016-05-01

    Measurement of contact pressures at the wheelchair-seating interface is a critically important approach for laboratory research and clinical application in monitoring risk for pressure ulceration. As yet, measures obtained from pressure mapping are static in nature: there is no accounting for changes in pressure distribution over time, despite the well-known interaction between time and pressure in risk estimation. Here, we introduce the first dynamic analysis for distribution of pressure data, based on the Kaplan-Meier (KM) Product Limit Estimator (PLE), a ubiquitous tool encountered in clinical trials and survival analysis. In this approach, the pressure array-over-time data set is sub-sampled two frames at a time (random pairing), and the similarity of their pressure distributions is quantified via a correlation coefficient. A large number (here: 100) of these frame pairs is then sorted into descending order of correlation value and visualized as a KM curve; we build confidence limits via a bootstrap computed over 1000 replications. PLEs and the KM curve have robust statistical support and extensive development: the opportunities for extended application are substantial. We propose that the KM-PLE in particular, and dynamic analysis in general, may provide key leverage on future development of seating technology, and valuable new insight into extant datasets. PMID:27021374
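
    The described procedure is concrete enough to sketch directly: sample random frame pairs, correlate them, sort the correlations into descending order as a KM-style curve, and bootstrap a confidence band. The sketch below uses synthetic pressure-map data and is an illustration of the procedure as described, not the authors' code.

      import numpy as np

      rng = np.random.default_rng(0)
      frames = rng.random((500, 16, 16))       # stand-in pressure-map sequence

      def km_curve(frames, n_pairs=100):
          """Correlations of randomly paired frames, sorted descending."""
          idx = rng.integers(0, len(frames), size=(n_pairs, 2))
          corrs = [np.corrcoef(frames[i].ravel(), frames[j].ravel())[0, 1]
                   for i, j in idx]
          return np.sort(corrs)[::-1]

      curve = km_curve(frames)                 # 100 sorted correlations
      # Bootstrap: resample the correlations with replacement, re-sort each
      # replicate descending, and take pointwise percentiles.
      boot = np.sort(rng.choice(curve, size=(1000, curve.size),
                                replace=True))[:, ::-1]
      lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
      print("median correlation:", np.median(curve))
      print("95% band at rank 50:", lo[50], hi[50])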

  17. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications, and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational units. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.

  18. Computer Power: Part 1: Distribution of Power (and Communications).

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1988-01-01

    Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)

  19. Pladipus Enables Universal Distributed Computing in Proteomics Bioinformatics.

    PubMed

    Verheggen, Kenneth; Maddelein, Davy; Hulstaert, Niels; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2016-03-01

    The use of proteomics bioinformatics substantially contributes to an improved understanding of proteomes, but this novel and in-depth knowledge comes at the cost of increased computational complexity. Parallelization across multiple computers, a strategy termed distributed computing, can be used to handle this increased complexity; however, setting up and maintaining a distributed computing infrastructure requires resources and skills that are not readily available to most research groups. Here we propose a free and open-source framework named Pladipus that greatly facilitates the establishment of distributed computing networks for proteomics bioinformatics tools. Pladipus is straightforward to install and operate thanks to its user-friendly graphical interface, allowing complex bioinformatics tasks to be run easily on a network instead of a single computer. As a result, any researcher can benefit from the increased computational efficiency provided by distributed computing, hence empowering them to tackle more complex bioinformatics challenges. Notably, it enables any research group to perform large-scale reprocessing of publicly available proteomics data, thus supporting the scientific community in mining these data for novel discoveries. PMID:26510693

  20. Developing a Distributed Computing Architecture at Arizona State University.

    ERIC Educational Resources Information Center

    Armann, Neil; And Others

    1994-01-01

    Development of Arizona State University's computing architecture, designed to ensure that all new distributed computing pieces will work together, is described. Aspects discussed include the business rationale, the general architectural approach, characteristics and objectives of the architecture, specific services, and impact on the university…

  1. Computational social dynamic modeling of group recruitment.

    SciTech Connect

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model of group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with the abstract agents, which permit the model to include social concepts (a gang) or institutional concepts (a school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents and abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to scenario development for inner-city gang recruitment.

  2. Object Orientated Methods in Computational Fluid Dynamics.

    NASA Astrophysics Data System (ADS)

    Tabor, Gavin; Weller, Henry; Jasak, Hrvoje; Fureby, Christer

    1997-11-01

    We outline the aims of the FOAM code, a Finite Volume Computational Fluid Dynamics code written in C++, and discuss the use of Object Orientated Programming (OOP) methods to achieve these aims. The intention when writing this code was to make it as easy as possible to alter the modelling: this was achieved by making the top-level syntax of the code as close as possible to conventional mathematical notation for tensors and partial differential equations. Object orientation enables us to define classes for both types of objects, and the operator overloading possible in C++ allows normal symbols to be used for the basic operations. The introduction of features such as automatic dimension checking of equations helps to enforce correct coding of models. We also discuss the use of OOP techniques such as data encapsulation and code reuse. As examples of the flexibility of this approach, we discuss the implementation of turbulence modelling using RAS and LES. The code is used to simulate turbulent flow for a number of test cases, including fully developed channel flow and flow around obstacles. We also demonstrate the use of the code for structural calculations and magnetohydrodynamics.
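
    The automatic dimension checking described here rests on operator overloading. FOAM itself is C++, but the idea can be sketched in a few lines of Python with a made-up Dimensioned class; this is an illustration of the concept, not the FOAM API.

      # Minimal analogue of FOAM-style automatic dimension checking through
      # operator overloading. Dimensions are (mass, length, time) exponents.

      class Dimensioned:
          def __init__(self, value, dims):
              self.value, self.dims = value, tuple(dims)

          def __add__(self, other):
              # Addition is only meaningful between like-dimensioned quantities.
              if self.dims != other.dims:
                  raise ValueError(f"dimension mismatch: {self.dims} vs {other.dims}")
              return Dimensioned(self.value + other.value, self.dims)

          def __mul__(self, other):
              # Multiplication adds dimension exponents.
              dims = tuple(a + b for a, b in zip(self.dims, other.dims))
              return Dimensioned(self.value * other.value, dims)

          def __repr__(self):
              return (f"{self.value} [M^{self.dims[0]} "
                      f"L^{self.dims[1]} T^{self.dims[2]}]")

      velocity = Dimensioned(3.0, (0, 1, -1))           # m/s
      time     = Dimensioned(2.0, (0, 0, 1))            # s
      print(velocity * time)                            # 6.0 [M^0 L^1 T^0]
      print(velocity + Dimensioned(1.0, (0, 1, -1)))    # OK: same dimensions
      # velocity + time would raise ValueError, catching a miscoded equation.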

  3. Computational fluid dynamics in ventilation: Practical approach

    NASA Astrophysics Data System (ADS)

    Fontaine, J. R.

    The potential of computational fluid dynamics (CFD) for designing ventilation systems is shown through the simulation of five practical cases. The following examples are considered: capture of pollutants on a surface-treating tank equipped with a unilateral suction slot in the presence of a disturbing air draft opposed to the suction; dispersion of solid aerosols inside fume cupboards; performance comparison of two general ventilation systems in a silkscreen printing workshop; ventilation of a large open painting area; and oil fog removal inside a mechanical engineering workshop. Whereas the first two problems are analyzed through two-dimensional numerical simulations, the three other cases require three-dimensional modeling. For the surface-treating tank case, numerical results are compared with laboratory experiment data. All simulations are carried out using EOL, a CFD software package specially devised to deal with air quality problems in industrial ventilated premises. It contains many analysis tools to interpret the results in terms familiar to the industrial hygienist. Much experimental work has been undertaken to validate the predictions of EOL for ventilation flows.

  4. Perspectives on distributed computing : thirty people, four user types, and the distributed computing user experience.

    SciTech Connect

    Childers, L.; Liming, L.; Foster, I.; Mathematics and Computer Science; Univ. of Chicago

    2008-10-15

    This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open-source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software; (2

  5. Computation and Optimization of Dose Distributions for Rotational Stereotactic Radiosurgery

    NASA Astrophysics Data System (ADS)

    Fox, Timothy Harold

    1994-01-01

    The stereotactic radiosurgery technique presented in this work is the patient rotator method, which rotates the patient in a sitting position with a stereotactic head frame attached to the skull while collimated non-coplanar radiation beams from a 6 MV medical linear accelerator are delivered to the target point. The hypothesis of this dissertation is that accurate, three-dimensional dose distributions can be computed and optimized for the patient rotator method used in stereotactic radiosurgery. This dissertation presents research results in three areas related to computing and optimizing dose distributions for the patient rotator method. A three-dimensional dose model was developed to calculate the dose at any point in the cerebral cortex using a circular, adjustable collimator system and the geometry of the radiation beam with respect to the target point. The computed dose distributions, compared with experimental measurements, had an average maximum deviation of <0.7 mm for the relative isodose distributions greater than 50%. A system was developed to qualitatively and quantitatively visualize the computed dose distributions with patient anatomy. A registration method was presented for transforming each dataset to a common reference system. A method for computing the intersections of anatomical contours' boundaries was developed to calculate dose-volume information. The system efficiently and accurately reduced the large computed volumetric sets of dose data, medical images, and anatomical contours to manageable images and graphs. A computer-aided optimization method was developed for rigorously selecting beam angles and weights to minimize the dose to normal tissue. Linear programming was applied as the optimization method. The computed optimal beam angles and weights, for a defined objective function and dose constraints, exhibited a superior dose distribution compared with a standard plan. The developed dose model, qualitative and quantitative visualization
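
    The linear-programming formulation can be sketched as follows: minimize total normal-tissue dose subject to every target point receiving at least the prescription dose. The dose matrices below are random stand-ins, not output of the dissertation's dose model, and the problem sizes are arbitrary.

      import numpy as np
      from scipy.optimize import linprog

      # LP selection of beam weights: minimize total dose to normal-tissue
      # points subject to each target point receiving the prescription dose.
      rng = np.random.default_rng(1)
      n_beams = 8
      D_target = rng.uniform(0.5, 1.0, (20, n_beams))  # dose to target points per unit weight
      D_normal = rng.uniform(0.0, 0.4, (50, n_beams))  # dose to normal-tissue points
      prescription = 1.0

      c = D_normal.sum(axis=0)       # objective: total normal-tissue dose
      # D_target @ w >= prescription  is rewritten as  -D_target @ w <= -prescription
      A_ub = -D_target
      b_ub = -prescription * np.ones(D_target.shape[0])

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n_beams)
      print("optimal beam weights:", np.round(res.x, 3))
      print("minimum target dose :", (D_target @ res.x).min())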

  6. Distriblets: Java-Based Distributed Computing on the Web.

    ERIC Educational Resources Information Center

    Finkel, David; Wills, Craig E.; Brennan, Brian; Brennan, Chris

    1999-01-01

    Describes a system for using the World Wide Web to distribute computational tasks to multiple hosts on the Web that is written in Java programming language. Describes the programs written to carry out the load distribution, the structure of a "distriblet" class, and experiences in using this system. (Author/LRW)

  7. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    SciTech Connect

    Jin, Shuangshuang; Chen, Yousu; Wu, Di; Diao, Ruisheng; Huang, Zhenyu

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch-switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using single-processor-based dynamic simulation solutions. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.

  8. Computational determination of absorbed dose distributions from gamma ray sources

    NASA Astrophysics Data System (ADS)

    Zhou, Chuanyu; Inanc, Feyzi

    2001-04-01

    A biomedical procedure known as brachytherapy involves the insertion of many radioactive seeds into a diseased gland to eliminate diseased tissue. For such implementations, the spatial distribution of absorbed dose is very important. A simulation tool has been developed to determine the spatial distribution of absorbed dose in heterogeneous environments where the gamma ray source consists of many small internal radiation emitters. The computation is based on an integral transport method and is performed in parallel. Preliminary results involving 137Cs and 125I sources surrounded by water, and comparison of the results to the experimental and computational data available in the literature, are presented.

  9. A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)

    2001-01-01

    NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.

  10. Arcade: A Web-Java Based Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large-scale simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  11. Computational fluid dynamic modelling of cavitation

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability to incorporate the thermodynamic effects of cryogenic fluids. Extensions of the current two-dimensional steady-state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis shows good potential for improved understanding of many phenomena associated with cavity flows.

  12. Computer aided analysis and optimization of mechanical system dynamics

    NASA Technical Reports Server (NTRS)

    Haug, E. J.

    1984-01-01

    The purpose is to outline a computational approach to spatial dynamics of mechanical systems that substantially enlarges the scope of consideration to include flexible bodies, feedback control, hydraulics, and related interdisciplinary effects. Design sensitivity analysis and optimization is the ultimate goal. The approach to computer generation and solution of the system dynamic equations and graphical methods for creating animations as output is outlined.

  13. Experiment Dashboard for Monitoring of the LHC Distributed Computing Systems

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Devesas Campos, M.; Tarragon Cros, J.; Gaidioz, B.; Karavakis, E.; Kokoszkiewicz, L.; Lanciotti, E.; Maier, G.; Ollivier, W.; Nowotka, M.; Rocha, R.; Sadykov, T.; Saiz, P.; Sargsyan, L.; Sidorova, I.; Tuckett, D.

    2011-12-01

    LHC experiments are currently taking collision data. The distributed computing model chosen by the four main LHC experiments allows physicists to benefit from resources spread all over the world. The distributed model and the scale of LHC computing activities increase the level of complexity of the middleware, and also the chances of failures or inefficiencies in the components involved. In order to ensure the required performance and functionality of the LHC computing system, monitoring the status of the distributed sites and services, as well as monitoring LHC computing activities, are among the key factors. Over the last years, the Experiment Dashboard team has been working on a number of applications that facilitate the monitoring of different activities, including the follow-up of jobs and transfers as well as site and service availability. This presentation describes the Experiment Dashboard applications used by the LHC experiments and the experience gained during the first months of data taking.

  14. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexity arises because the urgent desire for scalability, together with issues including data distribution, software heterogeneity, and ad hoc hardware availability, commonly forces scientists into the simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  15. Description and development of the means of a model experiment for load balancing in distributed computing systems

    NASA Astrophysics Data System (ADS)

    Nagiyev, A. E.; Sherstnyova, A. I.; Botygin, I. A.; Galanova, N. Y.

    2016-06-01

    The results of statistical model experiments on various load balancing algorithms in distributed computing systems are presented. Software tools were developed that allow a virtual infrastructure of a distributed computing system to be created in accordance with the intended objective of the research, which is focused on multi-agent and multithreaded data processing. A scheme for controlling the processing of requests from terminal devices is proposed, providing effective dynamic horizontal scaling of computing power at peak loads.
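
    As a hedged sketch of the kind of dynamic horizontal scaling studied above (the thresholds and capacity figures are assumed, not the authors'), a controller can grow or shrink the worker pool with the request queue length:

```python
# Hedged sketch (invented thresholds, not the authors' scheme): a controller
# that scales the number of worker agents with the request queue length --
# the kind of dynamic horizontal scaling the experiment above studies.
def scale(workers, queue_len, per_worker_capacity=10,
          min_workers=1, max_workers=64):
    load = queue_len / (workers * per_worker_capacity)
    if load > 1.0:                                   # peak load: add capacity
        workers = min(max_workers, workers * 2)
    elif load < 0.25 and workers > min_workers:
        workers = max(min_workers, workers // 2)     # mostly idle: shrink
    return workers

w = 4
for q in [20, 200, 500, 60, 5]:       # simulated queue-length samples
    w = scale(w, q)
    print(f"queue={q:4d} -> workers={w}")
```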

  16. Trends in computational capabilities for fluid dynamics

    NASA Technical Reports Server (NTRS)

    Peterson, V. L.

    1985-01-01

    Milestones in the development of computational aerodynamics are reviewed together with past, present, and future computer performance (speed and memory) trends. Factors influencing computer performance requirements for both steady and unsteady flow simulations are identified. Estimates of the computer speed and memory required to calculate both inviscid and viscous, steady and unsteady flows about airfoils, wings, and simple wing-body configurations are presented and compared to computer performance that is either currently available or expected to be available before the end of this decade. Finally, estimates of the amounts of computer time required to determine flutter boundaries of airfoils and wings at transonic Mach numbers are presented and discussed.
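
    As a back-of-the-envelope illustration of how such memory estimates arise (all figures below are assumed for the example and are not Peterson's), the requirement scales with the grid point count times the storage per point:

```python
# Illustrative arithmetic only (assumed numbers, not the paper's estimates):
# memory for a structured 3-D grid grows with the point count times the
# number of words stored per point.
nx, ny, nz = 300, 100, 100       # hypothetical wing-body grid dimensions
vars_per_point = 32              # conserved variables, metrics, workspace
bytes_per_word = 8               # 64-bit floating point
total = nx * ny * nz * vars_per_point * bytes_per_word
print(f"{total / 2**30:.1f} GiB")   # ~0.7 GiB for this assumed grid
```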

  17. Trends in computational capabilities for fluid dynamics

    NASA Technical Reports Server (NTRS)

    Peterson, V. L.

    1984-01-01

    Milestones in the development of computational aerodynamics are reviewed together with past, present, and future computer performance (speed and memory) trends. Factors influencing computer performance requirements for both steady and unsteady flow simulations are identified. Estimates of the computer speed and memory required to calculate both inviscid and viscous, steady and unsteady flows about airfoils, wings, and simple wing-body configurations are presented and compared to computer performance that is either currently available or expected to be available before the end of this decade. Finally, estimates of the amounts of computer time required to determine flutter boundaries of airfoils and wings at transonic Mach numbers are presented and discussed.

  18. Dynamic leaching test of personal computer components.

    PubMed

    Li, Yadong; Richardson, Jay B; Niu, Xiaojun; Jackson, Ollie J; Laster, Jeremy D; Walker, Aaron K

    2009-11-15

    A dynamic leaching test (DLT) was developed and used to evaluate the leaching of toxic substances from electronic waste in the environment. The major components of personal computers (PCs), including motherboards, hard disc drives, floppy disc drives, and compact disc drives, were tested. The tests lasted 2 years for motherboards and 1.5 years for the disc drives. The extraction fluids for the standard toxicity characteristic leaching procedure (TCLP) and synthetic precipitation leaching procedure (SPLP) were used as the DLT leaching solutions. A total of 18 elements, including Ag, Al, As, Au, Ba, Be, Cd, Cr, Cu, Fe, Ga, Ni, Pd, Pb, Sb, Se, Sn, and Zn, were analyzed in the DLT leachates. Only Al, Cu, Fe, Ni, Pb, and Zn were commonly found in the DLT leachates of the PC components. Their leaching levels were much higher in TCLP extraction fluid than in SPLP extraction fluid. The toxic heavy metal Pb was found to leach continuously out of the components over the entire test periods. The cumulative amount of Pb leached out of the motherboards in TCLP extraction fluid reached 2.0 g per motherboard over the 2-year test period; that in SPLP extraction fluid was 75-90% less. The leaching rates and levels of Pb were largely affected by the content of galvanized steel in the PC components: the higher the steel content, the lower the Pb leaching rate. The findings suggest that obsolete PCs disposed of in landfills or discarded in the environment continuously release Pb for years when subjected to landfill leachate or rain. PMID:19616380

  19. AIR INGRESS ANALYSIS: COMPUTATIONAL FLUID DYNAMIC MODELS

    SciTech Connect

    Chang H. Oh; Eung S. Kim; Richard Schultz; Hans Gougar; David Petti; Hyung S. Kang

    2010-08-01

    The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in very high temperature reactors (VHTRs). Phenomena Identification and Ranking Studies to date have ranked an air ingress event, following on the heels of a VHTR depressurization, as important with regard to core safety. Consequently, the development of advanced air-ingress-related models and verification and validation data is a very high priority. Following a loss-of-coolant and system depressurization incident, air will enter the core of the High Temperature Gas Cooled Reactor through the break, possibly causing oxidation of the in-core and reflector graphite structures. Simple core and plant models indicate that, under certain circumstances, the oxidation may proceed at an elevated rate with additional heat generated from the oxidation reaction itself. Under postulated conditions of fluid flow and temperature, excessive degradation of the lower plenum graphite can lead to a loss of structural support. Excessive oxidation of core graphite can also lead to the release of fission products into the confinement, which could be detrimental to reactor safety. The computational fluid dynamics models developed in this study will improve our understanding of this phenomenon. This paper presents two-dimensional and three-dimensional CFD results for the quantitative assessment of the air ingress phenomena. A portion of the results for the density-driven stratified flow in the inlet pipe is compared with experimental results.

  20. COMPUTATIONAL FLUID DYNAMICS MODELING ANALYSIS OF COMBUSTORS

    SciTech Connect

    Mathur, M.P.; Freeman, Mark; Gera, Dinesh

    2001-11-06

    In the current fiscal year FY01, several CFD simulations were conducted to investigate the effects of moisture in biomass/coal, particle injection locations, and flow parameters on carbon burnout and NO{sub x} inside a 150 MW GEEZER industrial boiler. Various simulations were designed to predict the suitability of biomass cofiring in coal combustors, and to explore the possibility of using biomass as a reburning fuel to reduce NO{sub x}. Some additional CFD simulations were also conducted on the CERF combustor to examine the combustion characteristics of pulverized coal in enriched O{sub 2}/CO{sub 2} environments. Most of the CFD models available in the literature treat particles as point masses with uniform temperature inside the particles. This isothermal condition may not be suitable for larger biomass particles. To this end, a stand-alone program was developed from first principles to account for heat conduction from the surface of the particle to its center. It is envisaged that the recently developed non-isothermal stand-alone module will be integrated with the Fluent solver during the next fiscal year to accurately predict the carbon burnout of larger biomass particles. Anisotropy in heat transfer will be explored using different conductivities in the radial and axial directions. The above models will be validated and tested on various full-scale industrial boilers. The current NO{sub x} modules will be modified to account for local CH, CH{sub 2}, and CH{sub 3} radical chemistry; currently they are based on global chemistry. It may also be worth exploring the effect of an enriched O{sub 2}/CO{sub 2} environment on carbon burnout and NO{sub x} concentration. The research objective of this study is to develop a three-dimensional combustor model for biomass cofiring and reburning applications using the Fluent computational fluid dynamics code.
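
    The non-isothermal particle module is described above only at the level of first principles; as an indicative sketch (assumed properties, boundary values, and discretization, not the project's code), 1-D radial heat conduction in a spherical particle can be advanced with explicit finite differences:

```python
# Hedged sketch of 1-D radial heat conduction in a spherical particle, the
# kind of non-isothermal particle model described above. Assumed properties
# and boundary values; explicit finite differences on a uniform radial grid.
import numpy as np

N, R = 50, 1e-3                      # radial points, particle radius (m)
alpha = 1e-7                         # thermal diffusivity (m^2/s), assumed
r = np.linspace(0.0, R, N)
dr = r[1] - r[0]
dt = 0.1 * dr**2 / alpha             # stable explicit time step
T = np.full(N, 300.0)                # initial particle temperature (K)

for _ in range(2000):                # advance roughly 0.8 s of heating
    T[-1] = 1200.0                   # hot gas holds the surface temperature
    lap = np.zeros(N)
    # spherical Laplacian: d/dr(r^2 dT/dr) / r^2, second order in the interior
    lap[1:-1] = ((r[1:-1] + dr/2)**2 * (T[2:] - T[1:-1])
               - (r[1:-1] - dr/2)**2 * (T[1:-1] - T[:-2])) / (r[1:-1]**2 * dr**2)
    lap[0] = 6.0 * (T[1] - T[0]) / dr**2   # symmetry condition at the center
    T = T + alpha * dt * lap

print(f"center {T[0]:.0f} K, surface {T[-1]:.0f} K")   # center lags the surface
```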

  1. Integrated computer simulation on FIR FEL dynamics

    SciTech Connect

    Furukawa, H.; Kuruma, S.; Imasaki, K.

    1995-12-31

    An integrated computer simulation code has been developed to analyze RF-Linac FEL dynamics. First, a simulation code for the electron beam acceleration and transport processes in the RF-Linac (LUNA) was developed to analyze the characteristics of the electron beam in the RF-Linac and to optimize the RF-Linac parameters. Second, a space-time dependent 3D FEL simulation code (Shipout) was developed. Total RF-Linac FEL simulations have been performed by using the electron beam data from LUNA in Shipout. The number of particles used in a total RF-Linac FEL simulation is approximately 1000, and the CPU time for simulating one round trip is about 1.5 minutes. At ILT/ILE, Osaka, an 8.5 MeV RF-Linac with a photo-cathode RF gun is used for FEL oscillation experiments. Using a 2 cm wiggler, FEL oscillation at wavelengths of approximately 46 {mu}m is investigated. In simulations using LUNA with the parameters of an ILT/ILE experiment, the pulse shape and energy spectra of the electron beam at the end of the linac are estimated. The pulse shape of the electron beam at the end of the linac has a sharp rise and decays slowly as a function of time. In total RF-Linac FEL simulations with the parameters of an ILT/ILE experiment, the dependence of the start-up of FEL oscillation on the pulse shape of the electron beam at the end of the linac is estimated. Coherent spontaneous emission effects and a quick start-up of FEL oscillation have been observed in these simulations.

  2. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning, and scale testing of the data and workload management tools, the various computing workflows, and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis, including the sites and the workload and data management tools; validating the distributed production system through functionality, reliability, and scale tests; helping sites to commission, configure, and optimize their networking and storage through scale-testing data transfers and data processing; and improving the efficiency of accessing data across the CMS computing system, from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing, as well as the improvements accomplished towards efficient, reliable, and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers, with the aim of stressing the experiment and Grid data management and workload management systems; site commissioning procedures and tools to monitor and improve site availability and reliability; and activities targeted at the commissioning of the distributed production, user analysis, and monitoring systems.

  3. Computational fluid dynamics on a massively parallel computer

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    A finite difference code was implemented for the compressible Navier-Stokes equations on the Connection Machine, a massively parallel computer. The code is based on the ARC2D/ARC3D program and uses the implicit factored algorithm of Beam and Warming. The code uses odd-even elimination to solve the linear systems. Timings and computation rates are given for the code, and a comparison is made with a Cray X-MP.
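
    Odd-even elimination reduces a tridiagonal system by repeatedly eliminating each unknown's neighbours at doubling strides, leaving log2(n) fully parallel sweeps. The sketch below is an illustrative serial rendering of that idea (not the ARC2D/ARC3D implementation):

```python
# Sketch of odd-even elimination for a tridiagonal system
#   a[i] x[i-1] + b[i] x[i] + c[i] x[i+1] = d[i]
# Each sweep eliminates the neighbours at distance s; after log2(n) sweeps
# every equation involves a single unknown. Illustrative only.
import numpy as np

def odd_even_solve(a, b, c, d):
    n = len(b)
    a, b, c, d = (v.astype(float) for v in (a, b, c, d))
    s = 1
    while s < n:
        def shifted(v, k, fill):
            # neighbour values at distance s; identity rows past the boundary
            out = np.full(n, fill)
            if k > 0:
                out[k:] = v[:-k]
            else:
                out[:k] = v[-k:]
            return out
        am = shifted(a, s, 0.0); bm = shifted(b, s, 1.0)
        cm = shifted(c, s, 0.0); dm = shifted(d, s, 0.0)
        ap = shifted(a, -s, 0.0); bp = shifted(b, -s, 1.0)
        cp = shifted(c, -s, 0.0); dp = shifted(d, -s, 0.0)
        alpha = -a / bm                  # eliminate x[i-s] using row i-s
        gamma = -c / bp                  # eliminate x[i+s] using row i+s
        a, c = alpha * am, gamma * cp    # couplings now reach distance 2s
        d = d + alpha * dm + gamma * dp
        b = b + alpha * cm + gamma * ap
        s *= 2
    return d / b                         # fully decoupled equations

n = 8
a = np.full(n, -1.0); a[0] = 0.0
c = np.full(n, -1.0); c[-1] = 0.0
b = np.full(n, 2.0)
x_true = np.arange(1.0, n + 1)
d = a * np.roll(x_true, 1) + b * x_true + c * np.roll(x_true, -1)
print(np.allclose(odd_even_solve(a, b, c, d), x_true))  # True
```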

  4. Aircraft T-tail flutter predictions using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Attorni, A.; Cavagna, L.; Quaranta, G.

    2011-02-01

    The paper presents the application of computational aeroelasticity (CA) methods to the analysis of T-tail stability in the transonic regime. For this flow condition, the unsteady aerodynamics show a significant dependence on the aircraft's equilibrium flight configuration, which governs both the position of shock waves in the flow field and the load distribution on the horizontal tail plane. Both of these elements influence the aerodynamic forces, and hence the aeroelastic stability of the system. The proposed numerical procedure investigates flutter stability for a free-flying aircraft by iterating the following sequence of sub-problems until convergence: search for the trimmed condition of the deformable aircraft; linearize the system about the resulting equilibrium point; and predict the aeroelastic stability boundaries using the inferred linear model. An innovative approach based on sliding meshes represents the changes of the computational fluid domain due to the motion of the control surfaces used to trim the aircraft. To highlight the importance of keeping the linear model aligned with the trim condition, and at the same time the capabilities of the computational fluid dynamics approach, the method is applied to a real aircraft with a T-tail configuration: the P180.
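
    The structure of that iteration can be shown with a runnable toy stand-in (the model below is a made-up two-state system, not an aircraft, and the function names are ours):

```python
# Toy, runnable stand-in for the iteration described above: re-trim,
# linearize about the trim point, and re-assess stability until the trim
# state settles. The "aircraft" here is a fabricated 2x2 system.
import numpy as np

def solve_trim(q):                 # fixed-point "trim" of a toy nonlinear model
    return 0.5 * np.tanh(q) + 0.2

def linearize(q):                  # toy Jacobian, dependent on the trim point
    return np.array([[ 0.0,           1.0],
                     [-1.0 - q, -0.1 + 0.3 * q]])

q = 0.0
for _ in range(50):
    q_new = solve_trim(q)                         # 1. trim the system
    A = linearize(q_new)                          # 2. linearize about trim
    growth = max(np.linalg.eigvals(A).real)      # 3. stability: flutter if > 0
    if abs(q_new - q) < 1e-10:
        break                                     # trim state has converged
    q = q_new
print(f"trim state {q:.6f}, max growth rate {growth:+.4f}")
```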

  5. New security infrastructure model for distributed computing systems

    NASA Astrophysics Data System (ADS)

    Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.

    2016-02-01

    In this paper we propose a new approach to setting up a user-friendly and yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on a public key infrastructure along with proxy certificates, which are used for rights delegation. In practice, the contradiction between the limited lifetime of proxy certificates and the unpredictable time of request processing is a big issue for the end users of the system. We propose to use hashes that are unlimited in time and individual to each request, instead of proxy certificates. Our approach avoids the use of proxy certificates altogether, so the security infrastructure of a distributed computing system becomes easier to develop, support, and use.
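
    A minimal sketch of the idea, assuming an HMAC construction and invented identifiers (the paper does not specify the hash scheme): each request receives its own non-expiring token derived from a service-held secret, so no time-limited proxy certificate is involved.

```python
# Illustrative sketch (not the authors' protocol): a per-request token
# derived from a service-held secret. The token never expires and is bound
# to one request, standing in for a time-limited proxy certificate.
import hmac, hashlib, secrets

SERVICE_SECRET = secrets.token_bytes(32)   # held by the authorization service

def issue_token(user_id: str, request_id: str) -> str:
    msg = f"{user_id}:{request_id}".encode()
    return hmac.new(SERVICE_SECRET, msg, hashlib.sha256).hexdigest()

def verify_token(user_id: str, request_id: str, token: str) -> bool:
    expected = issue_token(user_id, request_id)
    return hmac.compare_digest(expected, token)   # constant-time comparison

tok = issue_token("alice", "job-42")
print(verify_token("alice", "job-42", tok))   # True
print(verify_token("alice", "job-43", tok))   # False: token bound to request
```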

  6. Nonlinear structural analysis on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Noor, Ahmed K.

    1995-01-01

    A computational strategy is presented for the nonlinear static and postbuckling analyses of large complex structures on massively parallel computers. The strategy is designed for distributed-memory, message-passing parallel computer systems. The key elements of the proposed strategy are: (1) a multiple-parameter reduced basis technique; (2) a nested dissection (or multilevel substructuring) ordering scheme; (3) parallel assembly of global matrices; and (4) a parallel sparse equation solver. The effectiveness of the strategy is assessed by applying it to thermo-mechanical postbuckling analyses of stiffened composite panels with cutouts, and nonlinear large-deflection analyses of HSCT models on Intel Paragon XP/S computers. The numerical studies presented demonstrate the advantages of nested dissection-based solvers over traditional skyline-based solvers on distributed memory machines.
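
    As a toy illustration of the nested dissection ordering (element 2 of the strategy; the example is ours, on a simple chain graph rather than a structural mesh), each half is numbered recursively and the separator last, which bounds fill-in and exposes independent subproblems to parallel solution:

```python
# Toy sketch (not the paper's solver): nested-dissection ordering of a
# 1-D chain graph -- number each half recursively, separator last. The two
# halves are independent, which is what a parallel sparse solver exploits.
def nested_dissection(nodes):
    if len(nodes) <= 2:
        return list(nodes)
    mid = len(nodes) // 2
    separator = [nodes[mid]]                 # removing it splits the chain
    left, right = nodes[:mid], nodes[mid + 1:]
    return nested_dissection(left) + nested_dissection(right) + separator

print(nested_dissection(list(range(15))))
# halves are ordered first and are independent; separators come last
```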

  7. Exact score distribution computation for ontological similarity searches

    PubMed Central

    2011-01-01

    Background Semantic similarity searches in ontologies are an important component of many bioinformatics algorithms, e.g., finding functionally related proteins with the Gene Ontology or phenotypically similar diseases with the Human Phenotype Ontology (HPO). We have recently shown that the performance of semantic similarity searches can be improved by ranking results according to the probability of obtaining a given score at random rather than by the scores themselves. However, to date, there are no algorithms for computing the exact distribution of semantic similarity scores, which is necessary for computing the exact P-value of a given score. Results In this paper we consider the exact computation of score distributions for similarity searches in ontologies, and introduce a simple null hypothesis which can be used to compute a P-value for the statistical significance of similarity scores. We concentrate on measures based on Resnik's definition of ontological similarity. A new algorithm is proposed that collapses subgraphs of the ontology graph and thereby allows fast score distribution computation. The new algorithm is several orders of magnitude faster than the naive approach, as we demonstrate by computing score distributions for similarity searches in the HPO. It is shown that exact P-value calculation improves clinical diagnosis using the HPO compared to approaches based on sampling. Conclusions The new algorithm enables, for the first time, exact P-value calculation via exact score distribution computation for ontology similarity searches. The approach is applicable to any ontology for which the annotation-propagation rule holds and can improve any bioinformatics method that makes use only of the raw similarity scores. The algorithm was implemented in Java, supports any ontology in OBO format, and is available for non-commercial and academic usage under: https://compbio.charite.de/svn/hpo/trunk/src/tools/significance/ PMID:22078312
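
    As a toy illustration of the quantities involved (the miniature ontology below is invented, and the exact distribution is obtained by brute enumeration, which is precisely the naive approach the paper's subgraph-collapsing algorithm accelerates):

```python
# Toy illustration (tiny made-up ontology, not the paper's algorithm): the
# exact null distribution of a Resnik-style score, obtained here by brute
# enumeration of all possible queries, then an exact P-value from it.
from collections import Counter
from itertools import combinations

ic = {"root": 0.0, "A": 1.0, "B": 1.0, "A1": 2.0, "A2": 2.0}  # info content
ancestors = {"root": {"root"}, "A": {"root", "A"}, "B": {"root", "B"},
             "A1": {"root", "A", "A1"}, "A2": {"root", "A", "A2"}}

def resnik(t1, t2):
    # IC of the most informative common ancestor
    return max(ic[a] for a in ancestors[t1] & ancestors[t2])

target = {"A1", "B"}
def score(query):   # best-match average of query terms against the target
    return sum(max(resnik(q, t) for t in target) for q in query) / len(query)

# exact null distribution over all 2-term queries (exhaustive enumeration)
dist = Counter(score(q) for q in combinations(ic, 2))
total = sum(dist.values())
observed = score(("A1", "A2"))
p = sum(c for s, c in dist.items() if s >= observed) / total
print(f"score={observed:.2f}  exact P-value={p:.3f}")
```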

  8. Computation of glint, glare, and solar irradiance distribution

    SciTech Connect

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2015-08-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on the surface of a receiver in a concentrating solar power system, or the glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, the images comprising pixels with respective intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity, or the glint/glare corresponding to the entity, is computed.
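
    A hedged sketch of the core idea (the calibration model and constants are assumptions, not the published method): receiver pixel intensities are referenced to the intensity of the Sun's image to obtain a per-pixel concentration ratio, then scaled to physical units.

```python
# Hedged sketch only (assumed calibration model, not the patented method):
# pixel intensities on the receiver are scaled by the intensity of the Sun's
# image to express irradiance in "suns", then converted to W/m^2.
import numpy as np

DNI = 1000.0                       # direct normal irradiance, W/m^2 (assumed)

def irradiance_map(receiver_img, sun_img, exposure_ratio=1.0):
    """Both images: 2-D arrays of linear (not gamma-corrected) intensities."""
    sun_peak = sun_img.max() * exposure_ratio   # reference: one-sun intensity
    suns = receiver_img / sun_peak              # concentration ratio per pixel
    return suns * DNI                           # irradiance in W/m^2

rng = np.random.default_rng(0)
sun = rng.uniform(0.8, 1.0, (8, 8))             # synthetic images for the demo
recv = rng.uniform(0.0, 500.0, (16, 16))
flux = irradiance_map(recv, sun)
print(f"peak flux: {flux.max():.0f} W/m^2")
```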

  9. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect

    Fisk, Ian

    2010-12-01

    This presentation reviewed the experiences of the LHC experiments with grid computing, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. Finally, the expected evolution and future plans are outlined.

  10. Distributed computer taxonomy based on O/S structure

    NASA Technical Reports Server (NTRS)

    Foudriat, Edwin C.

    1985-01-01

    The taxonomy considers the resource structure at the operating system level. It compares a communication-based taxonomy with the new taxonomy to illustrate how the latter better captures the client's view of the distributed computer. The results illustrate the fundamental features of, and what is required to construct, fully distributed processing systems. The problem of using network computers on the space station is addressed. A detailed discussion of the taxonomy is not given here; information is presented in the form of charts and diagrams that were used to illustrate a talk.