Science.gov

Sample records for distributed dynamical computation

  1. Improving flow distribution in influent channels using computational fluid dynamics.

    PubMed

    Park, No-Suk; Yoon, Sukmin; Jeong, Woochang; Lee, Seungjae

    2016-10-01

    In a wastewater treatment plant, the flow distribution in the influent channel, where the inflow is split among the treatment processes, strongly affects process efficiency, and a weir is the typical structure used for flow distribution; to the authors' knowledge, however, there is a paucity of research on flow distribution in an open channel with a weir. In this study, the influent channel of a full-scale wastewater treatment plant was used, with a suppressed rectangular weir whose horizontal crest spans the full channel width. The flow distribution in the influent channel was analyzed using a validated computational fluid dynamics model to investigate (1) a comparison of single-phase and two-phase simulation, (2) an improvement procedure for the prototype channel, and (3) the effect of the inflow rate on flow distribution. The results show that two-phase simulation is more reliable because it captures the free-surface fluctuations. Preventing short-circuit flow should be the first consideration for improving flow distribution, and differences in kinetic energy with inflow rate produce different flow distribution trends. The authors believe this case study is helpful for improving flow distribution in influent channels.
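
    For reference, the flow split over such a weir is commonly described by the textbook discharge relation for a suppressed rectangular weir; the formula below is a standard reference relation, not one given in the abstract.

    ```latex
    % Discharge over a suppressed rectangular weir (textbook relation):
    % Q: discharge, C_d: discharge coefficient, b: crest width (the full
    % channel width here), H: head over the crest, g: gravity.
    Q = \tfrac{2}{3}\, C_d \sqrt{2g}\; b\, H^{3/2}
    ```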

  2. Parallel and Distributed Computational Fluid Dynamics: Experimental Results and Challenges

    NASA Technical Reports Server (NTRS)

    Djomehri, Mohammad Jahed; Biswas, R.; VanderWijngaart, R.; Yarrow, M.

    2000-01-01

    This paper describes several results of parallel and distributed computing using a large-scale production flow solver program. A coarse-grained parallelization based on clustering of discretization grids, combined with partitioning of large grids for load balancing, is presented. An assessment is given of its performance on distributed and distributed-shared memory platforms using large-scale scientific problems. An experiment with this solver, adapted to a wide area network execution environment, is presented. We also give a comparative performance assessment of computation and communication times on both tightly and loosely coupled machines.

  3. An evaluation of biosurveillance grid--dynamic algorithm distribution across multiple computer nodes.

    PubMed

    Tsai, Ming-Chi; Tsui, Fu-Chiang; Wagner, Michael M

    2007-10-11

    Performing fast data analysis to detect disease outbreaks plays a critical role in real-time biosurveillance. In this paper, we describe and evaluate an Algorithm Distribution Manager Service (ADMS) based on grid technologies, which dynamically partitions and distributes detection algorithms across multiple computers. We compared the execution time of the analysis on a single computer and on a grid network (3 computing nodes) with and without dynamic algorithm distribution. We found that long-running algorithms completed approximately three times earlier in the distributed environment than on a single computer, while short-running algorithms performed worse in the distributed environment. The dynamic algorithm distribution approach also outperformed static algorithm distribution. This pilot study shows great potential for reducing lengthy analysis times through dynamic algorithm partitioning and parallel processing, and provides the opportunity to distribute algorithms from a client to remote computers in a grid network.

  4. Dynamic resource allocation scheme for distributed heterogeneous computer systems

    NASA Technical Reports Server (NTRS)

    Liu, Howard T. (Inventor); Silvester, John A. (Inventor)

    1991-01-01

    This invention relates to resource allocation in computer systems and, more particularly, to a method and associated apparatus for shortening response time and improving the efficiency of a heterogeneous distributed networked computer system by reallocating jobs queued for busy nodes to idle or less-busy nodes. In accordance with the algorithm (SIDA for short), load sharing is initiated by the server device such that extra overhead is not imposed on the system during heavily loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., above a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node burdened below a pre-established threshold level, or (2) a node has been idle for a period of time established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
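
    The abstract gives enough detail for a toy sketch of the transfer policy. The Python below is a minimal, hypothetical rendering (class and threshold names are invented, not from the patent): a receiving node pulls one job from the most burdened node when its own workload indicator, queue length over service rate, drops below a threshold.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        """Hypothetical node model; names are illustrative, not the patent's."""
        name: str
        service_rate: float                 # jobs completed per unit time
        queue: list = field(default_factory=list)

        def workload(self) -> float:
            # Workload indicator from the abstract: local queue length
            # combined with the local service rate.
            return len(self.queue) / self.service_rate

    def pull_job(nodes, receiver, low_threshold=1.0):
        """Receiver pulls one job from the most burdened node when its own
        workload indicator drops below a threshold (triggered on job
        completion or by a wakeup timer in the patented scheme)."""
        if receiver.workload() >= low_threshold:
            return None                     # receiver busy enough; no transfer
        donor = max(nodes, key=lambda n: n.workload())
        if donor is receiver or not donor.queue:
            return None
        job = donor.queue.pop()             # move one queued job to the idle node
        receiver.queue.append(job)
        return job

    # Example: node "b" just went idle and pulls a job from node "a".
    nodes = [Node("a", 2.0, ["j1", "j2", "j3"]), Node("b", 1.0, [])]
    moved = pull_job(nodes, nodes[1])
    ```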

  5. Distributed Computing.

    ERIC Educational Resources Information Center

    Ryland, Jane N.

    1988-01-01

    The microcomputer revolution, in which small and large computers have gained tremendously in capability, has created a distributed computing environment. This circumstance presents administrators with the opportunities and the dilemmas of choosing appropriate computing resources for each situation. (Author/MSE)

  6. Dynamic Load Balancing for Adaptive Computations on Distributed-Memory Machines

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Dynamic load balancing is central to adaptive mesh-based computations on large-scale parallel computers. The principal investigator has investigated various issues of the dynamic load balancing problem under NASA JOVE and JAG grants. The major accomplishments of the project are two graph partitioning algorithms and a load balancing framework. The S-HARP dynamic graph partitioner is the fastest dynamic graph partitioner known to date. It can partition a graph of over 100,000 vertices in 0.25 seconds on a 64-processor Cray T3E distributed-memory multiprocessor while maintaining scalability of over 16-fold speedup. Other known and widely used dynamic graph partitioners take a second or two while giving low scalability of a few-fold speedup on 64 processors. These results have been published in journals and peer-reviewed flagship conferences.

  7. Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.

    2000-01-01

    The developed methodology aims at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many sub-domains, called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computation. In environments with computers of different architectures, operating systems, CPU speeds, memory sizes, loads, and network speeds, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.

  8. Fast computation of statistical uncertainty for spatiotemporal distributions estimated directly from dynamic cone beam SPECT projections

    SciTech Connect

    Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.

    2001-04-09

    The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in those clinics that have only single-detector systems and thus are not able to perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projections truncate the patient's body, this can result in additional image artifacts. To overcome these sources of bias in conventional image-based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of

  9. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  10. A distributed, dynamic, parallel computational model: the role of noise in velocity storage

    PubMed Central

    Merfeld, Daniel M.

    2012-01-01

    Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic "real-time" calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique, "particle filtering," that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
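
    As a companion to the description above, here is a minimal bootstrap particle filter for one-dimensional angular velocity in Python. It is an illustrative sketch only: the leaky-integrator state model, noise levels, and particle count are assumptions, not the paper's actual model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter(measurements, dt=0.01, tau=16.0, q=0.05, r=0.5, n=1000):
        """Minimal bootstrap particle filter for 1-D angular velocity.
        tau (velocity-storage time constant), q (process noise SD), and
        r (afferent noise SD) are illustrative values, not the paper's."""
        particles = np.zeros(n)
        estimates = []
        for z in measurements:
            # Predict: leaky integration of the velocity state plus noise.
            particles = particles * np.exp(-dt / tau) + rng.normal(0.0, q, n)
            # Weight by the likelihood of the noisy afferent measurement
            # (log-space for numerical stability).
            logw = -0.5 * ((z - particles) / r) ** 2
            w = np.exp(logw - logw.max())
            w /= w.sum()
            estimates.append(np.sum(w * particles))
            # Resample so particles concentrate where likelihood is high;
            # their spread is what sets the effective filter gain.
            particles = rng.choice(particles, size=n, p=w)
        return np.array(estimates)

    # Example: constant yaw velocity observed through afferent noise.
    obs = 10.0 + rng.normal(0.0, 0.5, 500)
    est = particle_filter(obs)
    ```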

  11. Distributed dynamical computation in neural circuits with propagating coherent activity patterns.

    PubMed

    Gong, Pulin; van Leeuwen, Cees

    2009-12-01

    Activity in neural circuits is spatiotemporally organized. Its spatial organization consists of multiple, localized coherent patterns, or patchy clusters. These patterns propagate across the circuits over time. This type of collective behavior has ubiquitously been observed, both in spontaneous activity and evoked responses; its function, however, has remained unclear. We construct a spatially extended, spiking neural circuit that generates emergent spatiotemporal activity patterns, thereby capturing some of the complexities of the patterns observed empirically. We elucidate what kind of fundamental function these patterns can serve by showing how they process information. As self-sustained objects, localized coherent patterns can signal information by propagating across the neural circuit. Computational operations occur when these emergent patterns interact, or collide with each other. The ongoing behaviors of these patterns naturally embody both distributed, parallel computation and cascaded logical operations. Such distributed computations enable the system to work in an inherently flexible and efficient way. Our work leads us to propose that propagating coherent activity patterns are the underlying primitives with which neural circuits carry out distributed dynamical computation.

  12. FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.

    PubMed

    Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora

    2013-09-01

    In this paper, we present a distributed computing system, called DCMARK, aimed at solving partial differential equations at the basis of many investigation fields, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network paradigm, which allows us to divide the differential equation system solving into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test the present idea, we choose to implement DCMARK on a single FPGA, designing the single processor in order to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to study the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it reduces the elaboration time more than other, similar systems in the literature. To ensure a high level of reconfigurability, we design a compact system on programmable chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results.
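
    To make the test problem concrete, the Python sketch below integrates the KdV equation u_t + 6 u u_x + u_xxx = 0 on 200 periodic cells with an RK4 time stepper. It mirrors only a PC-side reference computation, not the FPGA microarchitecture, and the grid, step size, and soliton initial condition are assumed values.

    ```python
    import numpy as np

    N, L = 200, 50.0                 # 200 cells, matching the paper's test case
    dx = L / N
    x = np.arange(N) * dx
    dt = 1e-4                        # small step for the stiff dispersive term

    def rhs(u):
        # Periodic central differences for u_x and u_xxx.
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
        uxxx = (np.roll(u, -2) - 2 * np.roll(u, -1)
                + 2 * np.roll(u, 1) - np.roll(u, 2)) / (2 * dx**3)
        return -6.0 * u * ux - uxxx

    def rk4_step(u, dt):
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    # One-soliton initial condition: u = (c/2) sech^2(sqrt(c)/2 (x - x0)).
    c, x0 = 1.0, L / 4
    u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - x0))**2
    for _ in range(10000):           # integrate to t = 1
        u = rk4_step(u, dt)
    ```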

  13. The van Hove distribution function for brownian hard spheres: dynamical test particle theory and computer simulations for bulk dynamics.

    PubMed

    Hopkins, Paul; Fortini, Andrea; Archer, Andrew J; Schmidt, Matthias

    2010-12-14

    We describe a test particle approach based on dynamical density functional theory (DDFT) for studying the correlated time evolution of the particles that constitute a fluid. Our theory provides a means of calculating the van Hove distribution function by treating its self and distinct parts as the two components of a binary fluid mixture, with the "self" component having only one particle and the "distinct" component consisting of all the other particles, and using DDFT to calculate the time evolution of the density profiles for the two components. We apply this approach to a bulk fluid of Brownian hard spheres and compare to results for the van Hove function and the intermediate scattering function from Brownian dynamics computer simulations. We find good agreement at low and intermediate densities using the very simple Ramakrishnan-Yussouff [Phys. Rev. B 19, 2775 (1979)] approximation for the excess free energy functional. Since the DDFT is based on the equilibrium Helmholtz free energy functional, we can probe a free energy landscape that underlies the dynamics. Within the mean-field approximation we find that as the particle density increases, this landscape develops a minimum, while an exact treatment of a model confined situation shows that for an ergodic fluid this landscape should be monotonic. We discuss possible implications for slow, glassy, and arrested dynamics at high densities.

  14. On the simulation of protein folding by short time scale molecular dynamics and distributed computing.

    PubMed

    Fersht, Alan R

    2002-10-29

    There are proposals to overcome the current incompatibilities between the time scales of protein folding and molecular dynamics simulation by using a large number of short simulations of only tens of nanoseconds (distributed computing). According to the principles of first-order kinetic processes, a sufficiently large number of short simulations will include, de facto, a small number of long time scale events that have proceeded to completion. But protein folding is not an elementary kinetic step: folding has a series of early conformational steps that lead to lag phases at the beginning of the kinetics. The presence of these lag phases can bias short simulations toward selecting minor pathways that have fewer or faster lag steps and so miss the major folding pathways. Attempts to circumvent the lags by using loosely coupled parallel simulations that search for first-order transitions are also problematic because of the difficulty of detecting transitions in molecular dynamics simulations. Nevertheless, the procedure of using parallel independent simulations is perfectly valid and quite feasible once the time scale of simulation proceeds past the lag phases into a single exponential region.
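
    The first-order argument the author critiques can be stated in one line; the relation below is the standard single-exponential result assumed by the distributed-computing approach, not a formula from the paper.

    ```latex
    % Probability that one simulation of length t captures a first-order
    % event with rate constant k (so N runs capture roughly N k t events):
    P_{\mathrm{fold}}(t) \;=\; 1 - e^{-kt} \;\approx\; kt \qquad (kt \ll 1)
    ```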

  15. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    The following reports are presented on this project: a first-year progress report on the Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; a second-year progress report on the same framework; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.

  16. Quantifying the residence time distribution of surface transient storage in streams: A computational fluid dynamics approach

    NASA Astrophysics Data System (ADS)

    Jackson, T. R.; Drost, K. J.; Haggerty, R.; Apte, S. V.

    2011-12-01

    Transient storage is the sum of surface transient storage (STS) and hyporheic transient storage (HTS), and separating the two storage components is challenging. A number of studies have attempted to determine the relationship between transient storage and stream channel properties; however, difficulties ensue when attempting to calculate STS. The present study attempts to develop a predictive relationship between a stream's STS residence time distribution (RTD) and physically based, field-measurable properties of natural streams. Our approach is to use field measurements to constrain a computational fluid dynamics (CFD) model of STS, and to use both to develop and test a predictive model of the STS RTD. Field sites were located on Oak and Soap creeks in the Willamette Valley near Corvallis, Oregon. Data collection included: (1) obtaining detailed stream and STS zone morphologies through dense survey measurements; (2) determining turbulence parameters and CFD model boundary inputs from stream and storage zone velocity measurements with a Marsh-McBirney meter and an acoustic Doppler velocimeter; (3) quantifying the RTD and its mean using salt tracer injections and electrical conductivity probes; and (4) estimating mixing layer parameters using velocity measurements and a visual dye. Preliminary results from the CFD model, comparison to field data, and the resulting insights into the RTD will be presented.

  17. Using computational fluid dynamics software to estimate circulation time distributions in bioreactors.

    PubMed

    Davidson, Kyle M; Sushil, Shrinivasan; Eggleton, Charles D; Marten, Mark R

    2003-01-01

    Nonideal mixing in many fermentation processes can lead to concentration gradients in nutrients, oxygen, and pH, among others. These gradients are likely to influence cellular behavior, growth, or yield of the fermentation process. Frequency of exposure to these gradients can be defined by the circulation time distribution (CTD). There are few examples of CTDs in the literature, and experimental determination of CTD is at best a challenging task. The goal in this study was to determine whether computational fluid dynamics (CFD) software (FLUENT 4 and MixSim) could be used to characterize the CTD in a single-impeller mixing tank. To accomplish this, CFD software was used to simulate flow fields in three different mixing tanks by meshing the tanks with a grid of elements and solving the Navier-Stokes equations using the kappa-epsilon turbulence model. Tracer particles were released from a reference zone within the simulated flow fields, particle trajectories were simulated for 30 s, and the time taken for these tracer particles to return to the reference zone was calculated. CTDs determined by experimental measurement, which showed distinct features (log-normal, bimodal, and unimodal), were compared with CTDs determined using CFD simulation. Reproducing the signal processing procedures used in each of the experiments, CFD simulations captured the characteristic features of the experimentally measured CTDs. The CFD data suggests new signal processing procedures that predict unimodal CTDs for all three tanks.
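
    A CTD of the kind described is straightforward to assemble once return times are available. The Python sketch below is a hypothetical post-processing step: the random samples merely stand in for CFD trajectory output, and none of the names come from the study.

    ```python
    import numpy as np

    def circulation_time_distribution(return_times, t_max=30.0, bins=60):
        """Build a normalized CTD from simulated tracer return times (s)."""
        hist, edges = np.histogram(return_times, bins=bins,
                                   range=(0.0, t_max), density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, hist

    # Log-normal samples stand in for simulated return times; a log-normal
    # shape was one of the measured CTD features reported above.
    rng = np.random.default_rng(1)
    samples = rng.lognormal(mean=1.5, sigma=0.5, size=5000)
    t, ctd = circulation_time_distribution(samples)
    ```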

  18. Chapter on Distributed Computing

    DTIC Science & Technology

    1989-02-01

    MIT/LCS/TM-384, MIT Laboratory for Computer Science. Title: Chapter on Distributed Computing. Authors: Leslie Lamport, Nancy Lynch. Keywords: distributed computing; distributed systems models; distributed algorithms; message-passing; shared variables.

  19. Design and performance evaluation of dynamic wavelength scheduled hybrid WDM/TDM PON for distributed computing applications.

    PubMed

    Zhu, Min; Guo, Wei; Xiao, Shilin; Dong, Yi; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2009-01-19

    This paper investigates the design and implementation of distributed computing applications in a local area network. We propose a novel Dynamical Wavelength Scheduled Hybrid WDM/TDM Passive Optical Network, termed DWS-HPON. The system is implemented using spectrum slicing techniques of a broadband light source and an overlay broadcast-signaling scheme. The Time-Wavelength Co-Allocation (TWCA) problem is defined, and an effective greedy approach to this problem is presented for aggregating large files in distributed computing applications. The simulations demonstrate that the performance is improved significantly compared with the conventional TDM-over-WDM PON.
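
    The abstract does not spell out the greedy rule, so the Python below is only a plausible sketch of a Time-Wavelength Co-Allocation heuristic: schedule the largest file transfers first, each on the wavelength that frees up earliest (a longest-processing-time assignment). The paper's actual algorithm may differ in detail.

    ```python
    import heapq

    def greedy_twca(file_sizes, n_wavelengths, rate=1.0):
        """Assign file transfers to (wavelength, time slot) pairs greedily.
        Returns the schedule and its makespan; names are illustrative."""
        # Min-heap of (next_free_time, wavelength_id).
        heap = [(0.0, w) for w in range(n_wavelengths)]
        heapq.heapify(heap)
        schedule = []
        for size in sorted(file_sizes, reverse=True):
            start, w = heapq.heappop(heap)      # earliest-free wavelength
            finish = start + size / rate
            schedule.append((w, start, finish, size))
            heapq.heappush(heap, (finish, w))
        makespan = max(finish for _, _, finish, _ in schedule)
        return schedule, makespan

    # Example: six transfers aggregated over three wavelengths.
    schedule, makespan = greedy_twca([40, 10, 25, 5, 30, 20], n_wavelengths=3)
    ```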

  20. High-throughput all-atom molecular dynamics simulations using distributed computing.

    PubMed

    Buch, I; Harvey, M J; Giorgino, T; Anderson, D P; De Fabritiis, G

    2010-03-22

    Although molecular dynamics simulation methods are useful in the modeling of macromolecular systems, they remain computationally expensive, with production work requiring costly high-performance computing (HPC) resources. We review recent innovations in accelerating molecular dynamics on graphics processing units (GPUs), and we describe GPUGRID, a volunteer computing project that uses the GPU resources of nondedicated desktop and workstation computers. In particular, we demonstrate the capability of simulating thousands of all-atom molecular trajectories generated at an average of 20 ns/day each (for systems of approximately 30,000-80,000 atoms). In conjunction with a potential of mean force (PMF) protocol for computing binding free energies, we demonstrate the use of GPUGRID in the computation of accurate binding affinities of the Src SH2 domain/pYEEI ligand complex by reconstructing the PMF over 373 umbrella sampling windows of 55 ns each (20.5 μs of total simulated time). We obtain a standard free energy of binding of -8.7 ± 0.4 kcal/mol, within 0.7 kcal/mol of the experimental result. This infrastructure will provide the basis for a robust system for high-throughput accurate binding affinity prediction.

  1. Program Facilitates Distributed Computing

    NASA Technical Reports Server (NTRS)

    Hui, Joseph

    1993-01-01

    KNET computer program facilitates distribution of computing between UNIX-compatible local host computer and remote host computer, which may or may not be UNIX-compatible. Capable of automatic remote log-in. User communicates interactively with remote host computer. Data output from remote host computer directed to local screen, to local file, and/or to local process. Conversely, data input from keyboard, local file, or local process directed to remote host computer. Written in ANSI standard C language.

  2. Distributed computing in bioinformatics.

    PubMed

    Jain, Eric

    2002-01-01

    This paper provides an overview of methods and current applications of distributed computing in bioinformatics. Distributed computing is a strategy of dividing a large workload among multiple computers to reduce processing time, or to make use of resources such as programs and databases that are not available on all computers. Participating computers may be connected either through a local high-speed network or through the Internet.
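
    The strategy described, dividing a large workload among multiple machines, can be illustrated with a single-machine analogue using Python's standard library; a real deployment would distribute the chunks over a network, which this sketch does not do, and the task function is a made-up stand-in.

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def analyze(chunk):
        """Stand-in for a per-chunk bioinformatics task (e.g. scoring)."""
        return sum(len(seq) for seq in chunk)

    def split(work, n):
        """Divide a large workload into roughly equal chunks."""
        k = max(1, len(work) // n)
        return [work[i:i + k] for i in range(0, len(work), k)]

    if __name__ == "__main__":
        sequences = ["ACGT" * i for i in range(1, 1001)]
        # Each worker processes one chunk in parallel; partial results
        # are combined afterwards, exactly as in the strategy above.
        with ProcessPoolExecutor(max_workers=4) as pool:
            partials = list(pool.map(analyze, split(sequences, 4)))
        total = sum(partials)
    ```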

  3. Parallel and Distributed Computing.

    DTIC Science & Technology

    1986-12-12

    ... program was devoted to parallel and distributed computing. Support for this part of the program was obtained from the present Army contract and a ... Umesh Vazirani. A workshop on parallel and distributed computing was held from May 19 to May 23, 1986, and drew 141 participants. Keywords: Mathematical programming; Protocols; Randomized algorithms. (Author)

  4. Cardea: Providing Support for Dynamic Resource Access in a Distributed Computing Environment

    NASA Technical Reports Server (NTRS)

    Lepro, Rebekah

    2003-01-01

    The environment framing the modern authorization process spans domains of administration, relies on many different authentication sources, and manages complex attributes as part of the authorization process. Cardea facilitates dynamic access control within this environment as a central function of an interoperable authorization framework. The system departs from the traditional authorization model by separating the authentication and authorization processes, distributing the responsibility for authorization data, and allowing collaborating domains to retain control over their implementation mechanisms. Critical features of the system architecture and its handling of the authorization process differentiate the system from existing authorization components by addressing common needs not adequately addressed by existing systems. Continuing system research seeks to enhance the implementation of the current authorization model employed in Cardea, increase the robustness of current features, further the framework for establishing trust, and promote interoperability with existing security mechanisms.

  5. Using computer simulations to determine the limitations of dynamic clamp stimuli applied at the soma in mimicking distributed conductance sources

    PubMed Central

    Lin, Risa J.

    2011-01-01

    In previous studies we used the technique of dynamic clamp to study how temporal modulation of inhibitory and excitatory inputs control the frequency and precise timing of spikes in neurons of the deep cerebellar nuclei (DCN). Although this technique is now widely used, it is limited to interpreting conductance inputs as being location independent; i.e., all inputs that are biologically distributed across the dendritic tree are applied to the soma. We used computer simulations of a morphologically realistic model of DCN neurons to compare the effects of purely somatic vs. distributed dendritic inputs in this cell type. We applied the same conductance stimuli used in our published experiments to the model. To simulate variability in neuronal responses to repeated stimuli, we added a somatic white current noise to reproduce subthreshold fluctuations in the membrane potential. We were able to replicate our dynamic clamp results with respect to spike rates and spike precision for different patterns of background synaptic activity. We found only minor differences in the spike pattern generation between focal or distributed input in this cell type even when strong inhibitory or excitatory bursts were applied. However, the location dependence of dynamic clamp stimuli is likely to be different for each cell type examined, and the simulation approach developed in the present study will allow a careful assessment of location dependence in all cell types. PMID:21325676
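
    For context, the dynamic clamp current command referred to above takes the standard textbook form (this is the generic relation, not an equation from the paper):

    ```latex
    % Dynamic clamp injects, at the soma, the current a conductance input
    % would produce (g: commanded conductance, V_m: measured membrane
    % potential, E_rev: reversal potential of the mimicked synapse):
    I_{\mathrm{inj}}(t) = g(t)\,\bigl(V_m(t) - E_{\mathrm{rev}}\bigr)
    ```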

  6. Estimation of equivalence ratio distribution in diesel spray using a computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Suzuki, Yasumasa; Tsujimura, Taku; Kusaka, Jin

    2014-08-01

    It is important to understand the mechanisms of mixing and atomization in a diesel spray. In addition, computational prediction of the mixing behavior and internal structure of a diesel spray is expected to promote further understanding of diesel sprays and the development of diesel engines, including fuel injection devices. In this study, we predicted the formation of a diesel fuel spray with a 3D-CFD code and validated the application by comparing experimental results for the spray behavior and for the equivalence ratio visualized by Rayleigh-scatter imaging under several ambient, injection, and fuel conditions. Using applicable constants in the KH-RT model, the liquid spray length can be predicted at a quantitative level under various injection, ambient, and fuel conditions. On the other hand, changes in the vapor penetration and in the fuel mass fraction and equivalence ratio distributions with changing injection and ambient conditions are not captured quantitatively. The 3D-CFD code used in this study predicts the spray cone angle and the entrainment of ambient gas excessively; there is therefore the possibility of improving prediction accuracy by refining the fuel droplet breakup and evaporation models and by quantitative prediction of the spray cone angle.

  7. Coping with distributed computing

    SciTech Connect

    Cormell, L.

    1992-09-01

    The rapid increase in the availability of high-performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desktop is well known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent: he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on desktops throughout an entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions and products for distributed computing and management will be given.

  9. Understanding pharmacokinetics using realistic computational models of fluid dynamics: biosimulation of drug distribution within the CSF space for intrathecal drugs.

    PubMed

    Kuttler, Andreas; Dimke, Thomas; Kern, Steven; Helmlinger, Gabriel; Stanski, Donald; Finelli, Luca A

    2010-12-01

    We introduce how biophysical modeling in pharmaceutical research and development, combining physiological observations at the tissue, organ and system level with selected drug physicochemical properties, may contribute to a greater and non-intuitive understanding of drug pharmacokinetics and therapeutic design. Based on rich first-principle knowledge combined with experimental data at both conception and calibration stages, and leveraging our insights on disease processes and drug pharmacology, biophysical modeling may provide a novel and unique opportunity to interactively characterize detailed drug transport, distribution, and subsequent therapeutic effects. This innovative approach is exemplified through a three-dimensional (3D) computational fluid dynamics model of the spinal canal motivated by questions arising during pharmaceutical development of one molecular therapy for spinal cord injury. The model was based on actual geometry reconstructed from magnetic resonance imaging data, subsequently transformed into a parametric 3D geometry and a corresponding finite-volume representation. With dynamics controlled by transient Navier-Stokes equations, the model was implemented in a commercial multi-physics software environment established in the automotive and aerospace industries. While predictions were performed in silico, the underlying biophysical models relied on multiple sources of experimental data and knowledge from scientific literature. The results have provided insights into the primary factors that can influence the intrathecal distribution of drug after lumbar administration. This example illustrates how the approach connects the causal chain underlying drug distribution, starting with the technical aspect of drug delivery systems, through physiology-driven drug transport, then eventually linking to tissue penetration, binding, residence, and ultimately clearance. Currently supporting our drug development projects with an improved understanding of systems

  10. The Survivable Distributed Computing Environment

    DTIC Science & Technology

    1994-06-01

    ... an architecture for a survivable Distributed Computing Environment (SDCE). In essence, the SDCE will be a base upon which survivable distributed ... and/or ISIS distributed computing environments to provide many of the SDCE requirements.

  11. Computational Model of Human and System Dynamics in Free Flight: Studies in Distributed Control Technologies

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Pisanich, Gregory; Lebacqz, J. Victor (Technical Monitor)

    1998-01-01

    This paper presents a set of studies in full mission simulation and the development of a predictive computational model of human performance in control of complex airspace operations. NASA and the FAA have initiated programs of research and development to provide flight crew, airline operations and air traffic managers with automation aids to increase capacity in en route and terminal area to support the goals of safe, flexible, predictable and efficient operations. In support of these developments, we present a computational model to aid design that includes representation of multiple cognitive agents (both human operators and intelligent aiding systems). The demands of air traffic management require representation of many intelligent agents sharing world-models, coordinating action/intention, and scheduling goals and actions in a potentially unpredictable world of operations. The operator-model structure includes attention functions, action priority, and situation assessment. The cognitive model has been expanded to include working memory operations including retrieval from long-term store, and interference. The operator's activity structures have been developed to provide for anticipation (knowledge of the intention and action of remote operators), and to respond to failures of the system and other operators in the system in situation-specific paradigms. System stability and operator actions can be predicted by using the model. The model's predictive accuracy was verified using the full-mission simulation data of commercial flight deck operations with advanced air traffic management techniques.

  12. 3-D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS)

    SciTech Connect

    Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.

    1997-12-31

    The aim of the work performed is to develop a 3D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS), satisfying the condition that numerical results be independent of the number of processors involved. Two basically different approaches to the structure of massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shared memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of VNIIEF and LLNL staff. A large number of numerical experiments has been carried out with up to 256 processors, and the efficiency of parallelization has been evaluated as a function of processor number and processor parameters.
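
    The second approach, a static decomposition that is not rebuilt each time step, can be sketched in a few lines of Python; the function name and the processor layout below are illustrative assumptions, not taken from the paper.

    ```python
    def decompose_3d(nx, ny, nz, px, py, pz):
        """Static 3-D block decomposition: split an nx*ny*nz grid among
        px*py*pz processors, returning the index ranges each one owns.
        The split is independent of how results are later combined."""
        def ranges(n, p):
            base, extra = divmod(n, p)
            out, start = [], 0
            for i in range(p):
                size = base + (1 if i < extra else 0)
                out.append((start, start + size))
                start += size
            return out
        blocks = {}
        for i, xr in enumerate(ranges(nx, px)):
            for j, yr in enumerate(ranges(ny, py)):
                for k, zr in enumerate(ranges(nz, pz)):
                    blocks[(i, j, k)] = (xr, yr, zr)
        return blocks

    # Example: 64 processors arranged as a 4x4x4 grid over a 200^3 mesh.
    blocks = decompose_3d(200, 200, 200, 4, 4, 4)
    ```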

  13. Modeling hydroxyl radical distribution and trialkyl phosphates oxidation in UV-H2O2 photoreactors using computational fluid dynamics.

    PubMed

    Santoro, Domenico; Raisee, Mehrdad; Moghaddami, Mostafa; Ducoste, Joel; Sasges, Micheal; Liberti, Lorenzo; Notarnicola, Michele

    2010-08-15

    Advanced Oxidation Processes (AOPs) promoted by ultraviolet light are innovative and potentially cost-effective solutions for treating persistent pollutants recalcitrant to conventional water and wastewater treatment. While several studies have been performed during the past decade to improve the fundamental understanding of the UV-H2O2 AOP and its kinetic modeling, Computational Fluid Dynamics (CFD) has only recently emerged as a powerful tool that allows a deeper understanding of complex photochemical processes in environmental and reactor engineering applications. In this paper, a comprehensive kinetic model of the UV-H2O2 AOP was coupled with the Reynolds-averaged Navier-Stokes (RANS) equations using CFD to predict the oxidation of tributyl phosphate (TBP) and tri(2-chloroethyl) phosphate (TCEP) in two different photoreactors: a parallel- and a cross-flow UV device employing a UV lamp emitting primarily 253.7 nm radiation. CFD simulations, obtained for both turbulent and laminar flow regimes and compared with experimental data over a wide range of UV doses, enabled the spatial visualization of hydrogen peroxide and hydroxyl radical distributions in the photoreactor. The annular photoreactor displayed consistently better oxidation performance than the cross-flow system due to the absence of recirculation zones, as confirmed by the hydroxyl radical dose distributions. Notably, such discrepancy was found to be strongly dependent on and directly correlated with the hydroxyl radical rate constant, becoming relevant for conditions approaching diffusion-controlled reaction regimes (k_OH > 10^9 M^-1 s^-1).

  14. A three-dimensional computational fluid dynamics model of shear stress distribution during neotissue growth in a perfusion bioreactor.

    PubMed

    Guyot, Y; Luyten, F P; Schrooten, J; Papantoniou, I; Geris, L

    2015-12-01

    Bone tissue engineering strategies use flow through perfusion bioreactors to apply mechanical stimuli to cells seeded on porous scaffolds. Cells grow on the scaffold surface but also by bridging the scaffold pores, leading to a fully filled scaffold that follows the scaffold's geometric characteristics. Current computational fluid dynamics approaches for tissue engineering bioreactor systems have mostly been carried out for empty scaffolds. The effect of 3D cell growth and extracellular matrix formation (termed in this study neotissue growth) on the surrounding fluid flow field is a challenge yet to be tackled. In this work, a combined approach was followed linking curvature-driven cell growth to fluid dynamics modeling. The level-set method (LSM) was employed to capture neotissue growth driven by curvature, while the Stokes and Darcy equations, combined in the Brinkman equation, provided information regarding the distribution of the shear stress profile at the neotissue/medium interface and within the neotissue itself during growth. The neotissue was assumed to be micro-porous, allowing flow through its structure while at the same time allowing the simulation of complete scaffold filling without numerical convergence issues. The results show a significant difference in the amplitude of shear stress for cells located within the micro-porous neotissue or at the neotissue/medium interface, demonstrating the importance of including the neotissue in the calculation of the mechanical stimulation of cells during culture. The presented computational framework is applied to different scaffold pore geometries, demonstrating its potential as a design tool for scaffold architecture that takes the growing neotissue into account. Biotechnol. Bioeng. 2015;112: 2591-2600. © 2015 Wiley Periodicals, Inc.
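
    The Brinkman combination of Stokes and Darcy mentioned above takes the standard form below (symbols as usually defined; this is the generic equation, not necessarily the authors' exact formulation):

    ```latex
    % Brinkman equation: Stokes flow in the free medium plus Darcy drag
    % inside the micro-porous neotissue (u: velocity, p: pressure,
    % mu: viscosity, k: neotissue permeability), with incompressibility:
    \nabla p = \mu \nabla^2 \mathbf{u} - \frac{\mu}{k}\,\mathbf{u},
    \qquad \nabla \cdot \mathbf{u} = 0
    ```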

  15. Cooperative Fault Tolerant Distributed Computing

    SciTech Connect

    Fagg, Graham E.

    2006-03-15

    HARNESS was proposed as a system that combined the best of emerging technologies found in current distributed computing research and commercial products into a very flexible, dynamically adaptable framework that applications could use to evolve and better handle their execution environment. The HARNESS system was designed using the considerable experience from previous projects such as PVM, MPI, IceT, and Cumulvs. As such, the system was designed to avoid the common problems found in these systems: it has no single point of failure and can survive machine, node, and software failures. Additional features included improved inter-component connectivity, with full support for dynamic downloading of additional components at run time, thus reducing the pressure on application developers to build in all the libraries they need in advance.

  16. Task allocation in a distributed computing system

    NASA Technical Reports Server (NTRS)

    Seward, Walter D.

    1987-01-01

    A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.

  17. Computational reacting gas dynamics

    NASA Technical Reports Server (NTRS)

    Lam, S. H.

    1993-01-01

    In the study of high-speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solutions, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified under three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'intractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full-model. In other words, the present research aims to utilize computations to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed reacting CFD numerical results will become much easier. The approach of the research is called computational singular perturbation (CSP).

  18. Knowledge and Distributed computation

    DTIC Science & Technology

    1990-05-01

    ... convincing evidence that reasoning in terms of knowledge can lead to ... about distributed computation, and we extend the standard ... can be made precise in the context of computer science. In this thesis, we provide convincing evidence that reasoning in terms of knowledge can lead ... against different adversaries. We show how different adversaries lead to different definitions of probabilistic knowledge, and given a particular adversary ...

  19. Numerical Uncertainty Analysis for Computational Fluid Dynamics using Student T Distribution -- Application of CFD Uncertainty Analysis Compared to Exact Analytical Solution

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, marcel; Shallhorn, Paul A.

    2014-01-01

    Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically, using inputs and the Student-t distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms, coupled with the Student-t distribution, can encompass the exact solution.
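
    A minimal Python rendering of the idea, computing a Student-t confidence half-interval from repeated CFD runs with perturbed inputs, is given below; the function and the sample values are illustrative, not the paper's actual procedure.

    ```python
    import numpy as np
    from scipy import stats

    def cfd_uncertainty(samples, confidence=0.95):
        """Mean and Student-t uncertainty half-interval for a CFD output
        from n repeated runs with perturbed inputs."""
        x = np.asarray(samples, dtype=float)
        n = x.size
        # Two-sided critical value with n - 1 degrees of freedom.
        t_crit = stats.t.ppf(0.5 * (1.0 + confidence), df=n - 1)
        half_width = t_crit * x.std(ddof=1) / np.sqrt(n)
        return x.mean(), half_width

    # Example: hypothetical pressure-drop predictions from five runs.
    mean, u = cfd_uncertainty([101.2, 99.8, 100.5, 100.9, 100.1])
    ```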

  20. GRIMD: distributed computing for chemists and biologists

    PubMed Central

    Piotto, Stefano; Biasi, Luigi Di; Concilio, Simona; Castiglione, Aniello; Cattaneo, Giuseppe

    2014-01-01

    Motivation: Biologists and chemists face problems of high computational complexity that require the use of several computers organized in clusters or in specialized grids. Examples of such problems can be found in molecular dynamics (MD), in silico screening, and genome analysis. Grid computing and cloud computing are becoming prevalent mainly because of their competitive performance/cost ratio. Regrettably, the diffusion of grid computing is strongly limited by two main factors: it is confined to scientists with a strong computer science background, and the analysis of the large amount of data produced can be cumbersome. We have developed a package named GRIMD to provide an easy and flexible implementation of distributed computing for the bioinformatics community. GRIMD is very easy to install and maintain, and it does not require any specific computer science skill. Moreover, it permits preliminary analysis on the distributed machines to reduce the amount of data to transfer. GRIMD is very flexible because it shields the typical computational biologist from the need to write specific code for tasks such as molecular dynamics or docking calculations. Furthermore, it permits efficient use of GPU cards whenever possible. GRIMD calculations scale almost linearly and therefore exploit each machine in the network efficiently. Here, we provide a few examples of grid computing in computational biology (MD and docking) and bioinformatics (proteome analysis). Availability: GRIMD is available for free for noncommercial research at www.yadamp.unisa.it/grimd Supplementary information: www.yadamp.unisa.it/grimd/howto.aspx PMID:24516326

  1. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM [1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  2. Simulation of the Velocity and Temperature Distribution of Inhalation Thermal Injury in a Human Upper Airway Model by Application of Computational Fluid Dynamics.

    PubMed

    Chang, Yang; Zhao, Xiao-zhuo; Wang, Cheng; Ning, Fang-gang; Zhang, Guo-an

    2015-01-01

    Inhalation injury is an important cause of death after thermal burns. This study was designed to simulate the velocity and temperature distribution of inhalation thermal injury in the upper airway in humans using computational fluid dynamics. Cervical computed tomography images of three Chinese adults were imported to Mimics software to produce three-dimensional models. After grids were established and boundary conditions were defined, the simulation time was set at 1 minute and the gas temperature was set to 80 to 320°C using ANSYS software (ANSYS, Canonsburg, PA) to simulate the velocity and temperature distribution of inhalation thermal injury. Cross-sections were cut at 2-mm intervals, and maximum airway temperature and velocity were recorded for each cross-section. The maximum velocity peaked in the lower part of the nasal cavity and then decreased with air flow. The velocities in the epiglottis and glottis were higher than those in the surrounding areas. Further, the maximum airway temperature decreased from the nasal cavity to the trachea. Computational fluid dynamics technology can be used to simulate the velocity and temperature distribution of inhaled heated air.

  3. Computational fluid dynamic applications

    SciTech Connect

    Chang, S.-L.; Lottes, S. A.; Zhou, C. Q.

    2000-04-03

    The rapid advancement of computational capability, including speed and memory size, has prompted the wide use of computational fluid dynamics (CFD) codes to simulate complex flow systems. CFD simulations are used to study the operating problems encountered in a system, to evaluate the impacts of operation/design parameters on the performance of a system, and to investigate novel design concepts. CFD codes are generally developed based on the conservation laws of mass, momentum, and energy that govern the characteristics of a flow. The governing equations are simplified and discretized for a selected computational grid system, and numerical methods are selected to calculate approximate flow properties. For turbulent, reacting, and multiphase flow systems, the complex processes relating to these aspects of the flow, i.e., turbulent diffusion, combustion kinetics, interfacial drag, heat and mass transfer, etc., are described by mathematical models, based on a combination of fundamental physics and empirical data, that are incorporated into the code. CFD simulation has been applied to a large variety of practical and industrial-scale flow systems.

  4. Distributed GPU Computing in GIScience

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

    Geoscientists strive to discover the principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing volume of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, as GPU technology has matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared with the traditional microprocessor, the modern GPU is a compelling alternative with outstanding parallel processing capability, cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphics rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as graphics-targeted devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies. Reference: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. Visualization and Computer Graphics, IEEE

  5. Computational astrophysical fluid dynamics

    NASA Technical Reports Server (NTRS)

    Norman, Michael L.; Clarke, David A.; Stone, James M.

    1991-01-01

    The field of astrophysical fluid dynamics (AFD) is described as an emerging discipline which derives historically from both the theory of stellar evolution and space plasma physics. The fundamental physical assumption behind AFD is that fluid equations of motion accurately describe the evolution of plasmas on scales that are large in comparison with particle interaction length scales. Particular attention is given to purely fluid models of large-scale astrophysical plasmas. The role of computer simulation in AFD research is also highlighted and a suite of general-purpose application codes for AFD research is discussed. The codes are called ZEUS-2D and ZEUS-3D and solve the equations of AFD in two and three dimensions, respectively, in several coordinate geometries for general initial and boundary conditions. The topics of bipolar outflows from protostars, galactic superbubbles and supershells, and extragalactic radio sources are addressed.

  6. Development of a Dynamically Configurable,Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: aerospace system and component representation using a hierarchical object-oriented component model which enables the use of multimodels and enforces component interoperability; a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process, and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.

  7. BESIII production with distributed computing

    NASA Astrophysics Data System (ADS)

    Zhang, X. M.; Yan, T.; Zhao, X. H.; Ma, Z. T.; Yan, X. F.; Lin, T.; Deng, Z. Y.; Li, W. D.; Belov, S.; Pelevanyuk, I.; Zhemchugov, A.; Cai, H.

    2015-12-01

    Distributed computing is necessary nowadays for high energy physics experiments to organize heterogeneous computing resources all over the world to process enormous amounts of data. The BESIII experiment in China, has established its own distributed computing system, based on DIRAC, as a supplement to local clusters, collecting cluster, grid, desktop and cloud resources from collaborating member institutes around the world. The system consists of workload management and data management to deal with the BESIII Monte Carlo production workflow in a distributed environment. A dataset-based data transfer system has been developed to support data movements among sites. File and metadata management tools and a job submission frontend have been developed to provide a virtual layer for BESIII physicists to use distributed resources. Moreover, the paper shows the experience to cope with lack of grid experience and low manpower among the BESIII community.

  8. Distributed computing at the SSCL

    SciTech Connect

    Cormell, L.; White, R.

    1993-05-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent, he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.

  9. A Wigner Distribution Analysis of Scattering Dynamics

    NASA Astrophysics Data System (ADS)

    Weeks, David; Lacy, Brent

    2009-04-01

    Using the time dependent Channel Packet Method (CPM) [D. E. Weeks, T. A. Niday, S. H. Yang, J. Chem. Phys. 125, 164301 (2006)], a Fourier transformation of the correlation function between evolving wave packets is used to compute scattering matrix elements. The correlation function can also be used to compute a Wigner distribution as a function of time and energy. This scattering Wigner distribution is then used to investigate times at which various energetic contributions to the scattering matrix are made during a molecular collision. We compute scattering Wigner distributions for a variety of molecular systems and use them to characterize the associated molecular dynamics. In particular, the square well provides a simple and easily modified potential to study the relationship between the scattering Wigner distribution and wave packet dynamics. Additional systems that are being studied include the collinear H + H2 molecular reaction, and the non-adiabatic B + H2 molecular collision.

  10. Towards an Infrastructure for MLS Distributed Computing

    DTIC Science & Technology

    1998-01-01

    Distributed computing owes its success to the development of infrastructure, middleware, and standards (e.g., CORBA) by the computing industry. This... Government must protect national security information against unauthorized information flow. To support MLS distributed computing, an MLS infrastructure... protection of classified information and use both the emerging distributed computing and commercial security infrastructures. The resulting infrastructure

  11. Biomolecular dynamics by computer analysis

    SciTech Connect

    Eilbeck, J.C.; Lomdahl, P.S.; Scott, A.C.

    1984-01-01

    As numerical tools (computers and display equipment) become more powerful and the atomic structures of important biological molecules become known, the importance of detailed computation of nonequilibrium biomolecular dynamics increases. In this manuscript we report results from a well developed study of the hydrogen bonded polypeptide crystal acetanilide, a model protein. Directions for future research are suggested. 9 references, 6 figures.

  12. Distributed Computing in Universities and Colleges.

    ERIC Educational Resources Information Center

    Sircar, Sumit

    1979-01-01

    Analyzes the implications of distributed computing in institutions of higher education. Discusses (1) the extent to which the quality of computing might be enhanced by adopting a distributed computing approach, (2) variations in distributed systems design and the cost of adoption, and (3) administration of distributed systems. (Author/CMV)

  13. Computer animation challenges for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine

    2012-07-01

    Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.

  14. Computational Workbench for Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2007-01-01

    PyCraft is a computer program that provides an interactive, workbench-like computing environment for developing and testing algorithms for multibody dynamics. Examples of multibody dynamic systems amenable to analysis with the help of PyCraft include land vehicles, spacecraft, robots, and molecular models. PyCraft is based on the Spatial-Operator-Algebra (SOA) formulation for multibody dynamics. The SOA operators enable construction of simple and compact representations of complex multibody dynamical equations. Within the PyCraft computational workbench, users can, essentially, use the high-level SOA operator notation to represent the variety of dynamical quantities and algorithms and to perform computations interactively. PyCraft provides a Python-language interface to underlying C++ code. Working with SOA concepts, a user can create and manipulate Python-level operator classes in order to implement and evaluate new dynamical quantities and algorithms. During use of PyCraft, virtually all SOA-based algorithms are available for computational experiments.

  15. Overlapping clusters for distributed computation.

    SciTech Connect

    Mirrokni, Vahab; Andersen, Reid; Gleich, David F.

    2010-11-01

    Scalable, distributed algorithms must address communication problems. We investigate overlapping clusters, or vertex partitions that intersect, for graph computations. This setup stores more of the graph than required but then affords the ease of implementation of vertex partitioned algorithms. Our hope is that this technique allows us to reduce communication in a computation on a distributed graph. The motivation above draws on recent work in communication avoiding algorithms. Mohiyuddin et al. (SC09) design a matrix-powers kernel that gives rise to an overlapping partition. Fritzsche et al. (CSC2009) develop an overlapping clustering for a Schwarz method. Both techniques extend an initial partitioning with overlap. Our procedure generates overlap directly. Indeed, Schwarz methods are commonly used to capitalize on overlap. Elsewhere, overlapping communities (Ahn et al, Nature 2009; Mishra et al. WAW2007) are now a popular model of structure in social networks. These have long been studied in statistics (Cole and Wishart, CompJ 1970). We present two types of results: (i) an estimated swapping probability ρ∞; and (ii) the communication volume of a parallel PageRank solution (link-following α = 0.85) using an additive Schwarz method. The volume ratio is the amount of extra storage for the overlap (2 means we store the graph twice). Below, as the ratio increases, the swapping probability and PageRank communication volume decreases.

  16. Hybrid Human-Computing Distributed Sense-Making: Extending the SOA Paradigm for Dynamic Adjudication and Optimization of Human and Computer Roles

    ERIC Educational Resources Information Center

    Rimland, Jeffrey C.

    2013-01-01

    In many evolving systems, inputs can be derived from both human observations and physical sensors. Additionally, many computation and analysis tasks can be performed by either human beings or artificial intelligence (AI) applications. For example, weather prediction, emergency event response, assistive technology for various human sensory and…

  17. Computational Fluid Dynamics Modeling of The Dalles Project: Effects of Spill Flow Distribution Between the Washington Shore and the Tailrace Spillwall

    SciTech Connect

    Rakowski, Cynthia L.; Serkowski, John A.; Richmond, Marshall C.

    2010-12-01

    The U.S. Army Corps of Engineers-Portland District (CENWP) has ongoing work to improve the survival of juvenile salmonids (smolt) migrating past The Dalles Dam. As part of that effort, a spillwall was constructed to improve juvenile egress through the tailrace downstream of the stilling basin. The spillwall was designed to improve smolt survival by decreasing smolt retention time in the spillway tailrace and the exposure to predators on the spillway shelf. The spillwall guides spillway flows, and hence smolt, more quickly into the thalweg. In this study, an existing computational fluid dynamics (CFD) model was modified and used to characterize tailrace hydraulics between the new spillwall and the Washington shore for six different total river flows. The effect of spillway flow distribution was simulated for three spill patterns at the lowest total river flow. The commercial CFD solver, STAR-CD version 4.1, was used to solve the unsteady Reynolds-averaged Navier-Stokes equations together with the k-epsilon turbulence model. Free surface motion was simulated using the volume-of-fluid (VOF) technique. The model results were used in two ways. First, results graphics were provided to CENWP and regional fisheries agency representatives for use and comparison to the same flow conditions at a reduced-scale physical model. The CFD results were very similar in flow pattern to that produced by the reduced-scale physical model but these graphics provided a quantitative view of velocity distribution. During the physical model work, an additional spill pattern was tested. Subsequently, that spill pattern was also simulated in the numerical model. The CFD streamlines showed that the hydraulic conditions were likely to be beneficial to fish egress at the higher total river flows (120 kcfs and greater, uniform flow distribution). At the lowest flow case, 90 kcfs, it was necessary to use a non-uniform distribution. Of the three distributions tested, splitting the flow evenly between

  18. Performance of the ISIS Distributed Computing Toolkit

    DTIC Science & Technology

    1994-06-22

    Performance of the ISIS Distributed Computing Toolkit. Kenneth P. Birman...isis.com. Please cite as Technical Report TR-94-1432, Dept. of Computer Science, Cornell University. Keywords: distributed computing, performance, process groups, atomic broadcast, causal and total message ordering, cbcast, abcast, multiple process groups

  19. Deadlock Detection in Distributed Computing Systems.

    DTIC Science & Technology

    1982-06-01

    With the advent of distributed computing systems, the problem of deadlock, which has been essentially solved for centralized computing systems, has...reappeared. Existing centralized deadlock detection techniques are either too expensive or they do not work correctly in distributed computing systems...incorrect. Additionally, although fault-tolerance is usually listed as an advantage of distributed computing systems, little has been done to analyze

  20. On Deadlock Detection in Distributed Computing Systems.

    DTIC Science & Technology

    1983-04-01

    With the advent of distributed computing systems, the problem of deadlock, which has been essentially solved for centralized computing systems, has...reappeared. Existing centralized deadlock detection techniques are either too expensive or they do not work correctly in distributed computing systems

  1. A Different Look at Secure Distributed Computation

    DTIC Science & Technology

    1997-06-01

    Still, the worst-case view dominates the secure computing literature in general and the secure distributed computing literature in... The model we now suggest represents distributed computing as two or more interwoven networks of competing nodes.

  2. Computer Graphics Simulations of Sampling Distributions.

    ERIC Educational Resources Information Center

    Gordon, Florence S.; Gordon, Sheldon P.

    1989-01-01

    Describes the use of computer graphics simulations to enhance student understanding of sampling distributions that arise in introductory statistics. Highlights include the distribution of sample proportions, the distribution of the difference of sample means, the distribution of the difference of sample proportions, and the distribution of sample…

  3. Parallel and Distributed Computing Combinatorial Algorithms

    DTIC Science & Technology

    1993-10-01

    PARALLEL AND DISTRIBUTED COMPUTING COMBINATORIAL ALGORITHMS (grant F49620-92-J-0125, Dr. Leighton). ...on several problems involving parallel and distributed computing and combinatorial optimization. This research is reported in the numerous papers that...network decomposition. In Proceedings of the Eleventh Annual ACM Symposium on Principles of Distributed Computing, August 1992.

  4. Modular Programming Techniques for Distributed Computing Tasks

    DTIC Science & Technology

    2004-08-01

    Modular Programming Techniques for Distributed Computing Tasks. Anthony Cowley, Hwa-Chow Hsu, Camillo J. Taylor, GRASP Laboratory, University of... Keywords: network, distributed computing, software design. 1. INTRODUCTION: As efforts to field sensor networks, or teams of mobile robots, become more...

  5. Distributed Computing Environment for Mine Warfare Command

    DTIC Science & Technology

    1993-06-01

    AD-A268 799. Naval Postgraduate School, Monterey, California, 1993. Thesis: Distributed Computing Environment for Mine Warfare Command. Contents include: Distributed Computing; Standards for Open Systems; the OSI Model; the DOD Model.

  6. A Weibull distribution accrual failure detector for cloud computing

    PubMed Central

    Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are used to build high availability distributed systems as the fundamental component. To meet the requirement of a complicated large-scale distributed system, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on Weibull Distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229

  7. A Weibull distribution accrual failure detector for cloud computing.

    PubMed

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are used to build high availability distributed systems as the fundamental component. To meet the requirement of a complicated large-scale distributed system, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on Weibull Distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
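
    The abstracts above do not reproduce the detector's equations; the sketch below only illustrates the general accrual idea under an assumed Weibull model: heartbeat inter-arrival times are fitted to a Weibull distribution, and the suspicion level of a monitored node grows with the Weibull CDF of the time elapsed since the last heartbeat.

```python
# Toy accrual failure detector based on a Weibull model of heartbeat
# inter-arrival times (an assumed design sketch, not the paper's code).
import numpy as np
from scipy.stats import weibull_min

class WeibullAccrualDetector:
    def __init__(self):
        self.gaps = []              # observed heartbeat inter-arrival times
        self.last_heartbeat = None

    def heartbeat(self, now):
        if self.last_heartbeat is not None:
            self.gaps.append(now - self.last_heartbeat)
        self.last_heartbeat = now

    def suspicion(self, now):
        """Suspicion level in [0, 1): Weibull CDF of the elapsed silence."""
        if self.last_heartbeat is None or len(self.gaps) < 5:
            return 0.0
        shape, loc, scale = weibull_min.fit(self.gaps, floc=0.0)
        return weibull_min.cdf(now - self.last_heartbeat, shape,
                               loc=loc, scale=scale)

# Usage: feed heartbeats, then query suspicion; a threshold such as 0.99
# would mark the monitored node as suspected.
det = WeibullAccrualDetector()
rng = np.random.default_rng(0)
t = 0.0
for gap in rng.gamma(shape=4.0, scale=0.25, size=20):
    det.heartbeat(t)
    t += gap
print("suspicion after a long silence:", det.suspicion(t + 3.0))
```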

  8. A distributed computing approach to mission operations support. [for spacecraft

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1975-01-01

    Computing mission operation support includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability, and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.

  9. Distributed Computing at Belle II

    NASA Astrophysics Data System (ADS)

    Bansal, Vikas; Belle Collaboration, II

    2016-03-01

    The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start physics data taking in 2018 and will accumulate 50 ab-1 of e+e- collision data, about 50 times larger than the data set of the earlier Belle experiment. The computing requirements of Belle II are comparable to those of a RUN I high-pT LHC experiment. Computing will make full use of high speed networking and of the Computing Grids in North America, Asia and Europe. Results of an initial MC simulation campaign with 5 ab-1 equivalent luminosity will be described.

  10. Distributed computing and nuclear reactor analysis

    SciTech Connect

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-03-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations.

  11. Equilibrium distribution from distributed computing (simulations of protein folding).

    PubMed

    Scalco, Riccardo; Caflisch, Amedeo

    2011-05-19

    Multiple independent molecular dynamics (MD) simulations are often carried out starting from a single protein structure or a set of conformations that do not correspond to a thermodynamic ensemble. Therefore, a significant statistical bias is usually present in the Markov state model generated by simply combining the whole MD sampling into a network whose nodes and links are clusters of snapshots and transitions between them, respectively. Here, we introduce a depth-first search algorithm to extract from the whole conformation space network the largest ergodic component, i.e., the subset of nodes of the network whose transition matrix corresponds to an ergodic Markov chain. For multiple short MD simulations of a globular protein (as in distributed computing), the steady state, i.e., stationary distribution determined using the largest ergodic component, yields more accurate free energy profiles and mean first passage times than the original network or the ergodic network obtained by imposing detailed balance by means of symmetrization of the transition counts.
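
    As a rough illustration of the idea (an assumed implementation that uses strongly connected components in place of the paper's depth-first search), the sketch below restricts a transition-count matrix to the largest strongly connected component of the transition graph, renormalizes it, and computes the stationary distribution of the restricted chain:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

def stationary_on_largest_ergodic_component(counts):
    """counts[i, j]: observed transitions i -> j pooled from many short runs."""
    # Largest strongly connected component of the transition graph.
    n_comp, labels = connected_components(counts > 0, directed=True,
                                          connection='strong')
    largest = np.argmax(np.bincount(labels))
    keep = np.flatnonzero(labels == largest)

    # Row-normalized transition matrix restricted to that component.
    sub = counts[np.ix_(keep, keep)].astype(float)
    T = sub / sub.sum(axis=1, keepdims=True)

    # Stationary distribution: left eigenvector of T with eigenvalue 1.
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = np.abs(pi) / np.abs(pi).sum()
    return keep, pi

# Toy example: node 3 is a "source" state visited only on the way in,
# so it is excluded from the ergodic component.
counts = np.array([[0, 5, 1, 0],
                   [4, 0, 2, 0],
                   [1, 3, 0, 0],
                   [1, 0, 0, 0]])
print(stationary_on_largest_ergodic_component(counts))
```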

  12. Connecting micro dynamics and population distributions in system dynamics models.

    PubMed

    Fallah-Fini, Saeideh; Rahmandad, Hazhir; Chen, Hsin-Jen; Xue, Hong; Wang, Youfa

    2013-01-01

    Researchers use system dynamics models to capture the mean behavior of groups of indistinguishable population elements (e.g., people) aggregated in stock variables. Yet, many modeling problems require capturing the heterogeneity across elements with respect to some attribute(s) (e.g., body weight). This paper presents a new method to connect the micro-level dynamics associated with elements in a population with the macro-level population distribution along an attribute of interest without the need to explicitly model every element. We apply the proposed method to model the distribution of Body Mass Index and its changes over time in a sample population of American women obtained from the U.S. National Health and Nutrition Examination Survey. Comparing the results with those obtained from an individual-based model that captures the same phenomena shows that our proposed method delivers accurate results with less computation than the individual-based model.

  13. Connecting micro dynamics and population distributions in system dynamics models

    PubMed Central

    Rahmandad, Hazhir; Chen, Hsin-Jen; Xue, Hong; Wang, Youfa

    2014-01-01

    Researchers use system dynamics models to capture the mean behavior of groups of indistinguishable population elements (e.g., people) aggregated in stock variables. Yet, many modeling problems require capturing the heterogeneity across elements with respect to some attribute(s) (e.g., body weight). This paper presents a new method to connect the micro-level dynamics associated with elements in a population with the macro-level population distribution along an attribute of interest without the need to explicitly model every element. We apply the proposed method to model the distribution of Body Mass Index and its changes over time in a sample population of American women obtained from the U.S. National Health and Nutrition Examination Survey. Comparing the results with those obtained from an individual-based model that captures the same phenomena shows that our proposed method delivers accurate results with less computation than the individual-based model. PMID:25620842

  14. Dynamical Properties of Polymers: Computational Modeling

    SciTech Connect

    CURRO, JOHN G.; ROTTACH, DANA; MCCOY, JOHN D.

    2001-01-01

    The free volume distribution has been a qualitatively useful concept by which dynamical properties of polymers, such as the penetrant diffusion constant, viscosity, and glass transition temperature, could be correlated with static properties. In an effort to put this on a more quantitative footing, we define the free volume distribution as the probability of finding a spherical cavity of radius R in a polymer liquid. This is identical to the insertion probability in scaled particle theory, and is related to the chemical potential of hard spheres of radius R in a polymer in the Henry's law limit. We used the Polymer Reference Interaction Site Model (PRISM) theory to compute the free volume distribution of semiflexible polymer melts as a function of chain stiffness. Good agreement was found with the corresponding free volume distributions obtained from MD simulations. Surprisingly, the free volume distribution was insensitive to the chain stiffness, even though the single chain structure and the intermolecular pair correlation functions showed a strong dependence on chain stiffness. We also calculated the free volume distributions of polyisobutylene (PIB) and polyethylene (PE) at 298K and at elevated temperatures from PRISM theory. We found that PIB has more of its free volume distributed in smaller size cavities than for PE at the same temperature.

  15. Computational Methods for Structural Mechanics and Dynamics

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  16. Nonlinear dynamics as an engine of computation

    NASA Astrophysics Data System (ADS)

    Kia, Behnam; Lindner, John F.; Ditto, William L.

    2017-03-01

    Control of chaos teaches that control theory can tame the complex, random-like behaviour of chaotic systems. This alliance between control methods and physics, known as cybernetical physics, opens the door to many applications, including dynamics-based computing. In this article, we introduce nonlinear dynamics and its rich, sometimes chaotic behaviour as an engine of computation. We review our work that has demonstrated how to compute using nonlinear dynamics. Furthermore, we investigate the interrelationship between invariant measures of a dynamical system and its computing power to strengthen the bridge between physics and computation. This article is part of the themed issue 'Horizons of cybernetical physics'.

  17. Nonlinear dynamics as an engine of computation.

    PubMed

    Kia, Behnam; Lindner, John F; Ditto, William L

    2017-03-06

    Control of chaos teaches that control theory can tame the complex, random-like behaviour of chaotic systems. This alliance between control methods and physics, known as cybernetical physics, opens the door to many applications, including dynamics-based computing. In this article, we introduce nonlinear dynamics and its rich, sometimes chaotic behaviour as an engine of computation. We review our work that has demonstrated how to compute using nonlinear dynamics. Furthermore, we investigate the interrelationship between invariant measures of a dynamical system and its computing power to strengthen the bridge between physics and computation. This article is part of the themed issue 'Horizons of cybernetical physics'.

  18. Next generation distributed computing for cancer research.

    PubMed

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing.

  19. Next Generation Distributed Computing for Cancer Research

    PubMed Central

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing. PMID:25983539

  20. Parallel distributed computing using Python

    NASA Astrophysics Data System (ADS)

    Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro

    2011-09-01

    This work presents two software components aimed to relieve the costs of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state of the art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated to PETSc-FEM, an MPI and PETSc based parallel, multiphysics, finite elements code developed at CIMEC laboratory. This software infrastructure supports research activities related to simulation of fluid flows with applications ranging from the design of microfluidic devices for biochemical analysis to modeling of large-scale stream/aquifer interactions.
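
    For readers unfamiliar with the package, a minimal mpi4py example (not taken from the paper) that computes a distributed sum with an allreduce:

```python
# Minimal mpi4py sketch: each rank computes a partial sum of a global array
# and the results are combined with an allreduce. Run with, e.g.:
#   mpiexec -n 4 python partial_sums.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 1_000_000                       # global problem size (illustrative)
counts = [n // size + (1 if r < n % size else 0) for r in range(size)]
start = sum(counts[:rank])
local = np.arange(start, start + counts[rank], dtype=np.float64)

local_sum = local.sum()
total = comm.allreduce(local_sum, op=MPI.SUM)

if rank == 0:
    print("distributed sum:", total, "expected:", n * (n - 1) / 2)
```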

  1. Distributed Computing and Collaboration Framework (DCCF)

    DTIC Science & Technology

    2002-09-01

    The Distributed Computing and Collaboration Framework has been developed by the Space and Naval Warfare Systems Center, San Diego (a Naval research and development facility), under the sponsorship of the Office of Naval

  2. Simulation model of load balancing in distributed computing systems

    NASA Astrophysics Data System (ADS)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network, and the widespread use of software for design and pre-production in mechanical engineering have led large industrial enterprises and small engineering companies alike to implement complex computer systems for the efficient solution of production and management tasks. Such systems are generally built on distributed heterogeneous computer systems. The analytical problems solved by these systems are the key subjects of research, but the system-wide problems of efficiently distributing (balancing) the computational load and of accommodating input, intermediate and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes, and the selection of a node to which the user's request is handed off in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing optimal schedules in a distributed system that dynamically changes its infrastructure is an important task.
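
    A minimal sketch of the monitoring-and-selection loop described above (the selection rule and node attributes are illustrative, not the paper's algorithm): each incoming request is dispatched to the node with the lowest current load estimate.

```python
import heapq
import random

# Toy least-loaded dispatcher: nodes report their load, and the balancer
# sends each incoming request to the node with the smallest estimated load.
class LeastLoadedBalancer:
    def __init__(self, node_names):
        # heap of (estimated_load, node_name)
        self.heap = [(0.0, name) for name in node_names]
        heapq.heapify(self.heap)

    def dispatch(self, request_cost):
        load, name = heapq.heappop(self.heap)   # least-loaded node
        heapq.heappush(self.heap, (load + request_cost, name))
        return name

balancer = LeastLoadedBalancer(["node-a", "node-b", "node-c"])
random.seed(1)
for i in range(10):
    cost = random.uniform(0.5, 2.0)             # simulated task size
    print(f"request {i} (cost {cost:.2f}) -> {balancer.dispatch(cost)}")
```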

  3. A Software Rejuvenation Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chau, Savio

    2009-01-01

    A performability-oriented conceptual framework for software rejuvenation has been constructed as a means of increasing levels of reliability and performance in distributed stateful computing. As used here, performability-oriented signifies that the construction of the framework is guided by the concept of analyzing the ability of a given computing system to deliver services with gracefully degradable performance. The framework is especially intended to support applications that involve stateful replicas of server computers.

  4. Object-oriented Tools for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1993-01-01

    Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.

  5. Research on Computational Fluid Dynamics and Turbulence

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Preconditioning matrices for Chebyshev derivative operators in several space dimensions; the Jacobi matrix technique in computational fluid dynamics; and Chebyshev techniques for periodic problems are discussed.

  6. Distributed Real-Time Computing with Harness

    SciTech Connect

    Di Saverio, Emanuele; Cesati, Marco; Di Biagio, Christian; Pennella, Guido; Engelmann, Christian

    2007-01-01

    Modern parallel and distributed computing solutions are often built onto a "middleware" software layer providing a higher and common level of service between computational nodes. Harness is an adaptable, plugin-based middleware framework for parallel and distributed computing. This paper reports recent research and development results of using Harness for real-time distributed computing applications in the context of an industrial environment with the needs to perform several safety critical tasks. The presented work exploits the modular architecture of Harness in conjunction with a lightweight threaded implementation to resolve several real-time issues by adding three new Harness plug-ins to provide a prioritized lightweight execution environment, low latency communication facilities, and local timestamped event logging.

  7. The impact of distributed computing on education

    NASA Technical Reports Server (NTRS)

    Utku, S.; Lestingi, J.; Salama, M.

    1982-01-01

    In this paper, developments in digital computer technology since the early Fifties are reviewed briefly, and the parallelism which exists between these developments and developments in analysis and design procedures of structural engineering is identified. The recent trends in digital computer technology are examined in order to establish the fact that distributed processing is now an accepted philosophy for further developments. The impact of this on the analysis and design practices of structural engineering is assessed by first examining these practices from a data processing standpoint to identify the key operations and data bases, and then fitting them to the characteristics of distributed processing. The merits and drawbacks of the present philosophy in educating structural engineers are discussed and projections are made for the industry-academia relations in the distributed processing environment of structural analysis and design. An ongoing experiment of distributed computing in a university environment is described.

  8. Predictive Dynamic Security Assessment through Advanced Computing

    SciTech Connect

    Huang, Zhenyu; Diao, Ruisheng; Jin, Shuangshuang; Chen, Yousu

    2014-11-30

    Traditional dynamic security assessment is limited by several factors and thus falls short in providing real-time information to be predictive for power system operation. These factors include the steady-state assumption of current operating points, static transfer limits, and low computational speed. This paper addresses these factors and frames predictive dynamic security assessment. The primary objective of predictive dynamic security assessment is to enhance the functionality and computational process of dynamic security assessment through the use of high-speed phasor measurements and the application of advanced computing technologies for faster-than-real-time simulation. This paper presents algorithms, computing platforms, and simulation frameworks that constitute the predictive dynamic security assessment capability. Examples of phasor application and fast computation for dynamic security assessment are included to demonstrate the feasibility and speed enhancement for real-time applications.

  9. Computer Simulation of Strong Ground Motion near a Fault Using Dynamic Fault Rupture Modeling: Spatial Distribution of the Peak Ground Velocity Vectors

    NASA Astrophysics Data System (ADS)

    Miyatake, T.

    Computer simulation was used to study the nature of the strong ground motion near a strike-slip fault. The faulting process was modeled by stress release with fixed rupture velocity in a uniform elastic half-space or layered half-space. The fourth-order 3-D finite-difference method with staggered grids was employed to compute both ground motions and slip histories on the fault. The fault rupture was assumed to start from a point and propagate circularly with 0.8 times shear-wave velocity. In the present paper, we focused on the spatial pattern of ground velocity vectors, i.e., the direction of strong motions. In the case of bilateral rupture propagation, the strong fault parallel ground motion appeared near the center of the fault. The fault normal motions of ground velocity appeared near the edges of the fault. In the case of unilateral rupture, the fault parallel motion appeared near the starting point; however, the amplitude was lower than that for the bilateral rupture case. The fault normal motion was predominant near the terminal point of the rupture. The results were applied to the earthquake damage data, especially the directions in which simple bodies overturned and wooden houses collapsed, caused by the 1927 Tango, the 1930 Kita-Izu, and the 1948 Fukui earthquakes. The spatial distributions of the direction data were found to reflect the strong ground motions generated from the earthquake source process.

  10. Molecular dynamics on hypercube parallel computers

    NASA Astrophysics Data System (ADS)

    Smith, W.

    1991-03-01

    The implementation of molecular dynamics on parallel computers is described, with particular reference to hypercube computers. Three particular algorithms are described: replicated data (RD); systolic loop (SLS-G), and parallelised link-cells (PLC), all of which have good load balancing. The performance characteristics of each algorithm and the factors affecting their scaling properties are discussed. The article is pedagogic in intent, to introduce a novice to the main aspects of parallel computing in molecular dynamics.
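
    A sketch of the replicated-data (RD) idea mentioned above, under assumed details and using MPI rather than a hypercube-specific library: every rank holds all particle coordinates, evaluates forces for its own subset of the pair list, and the partial force arrays are summed across ranks.

```python
# Replicated-data sketch for a pairwise force sum (illustrative only):
# all ranks hold every coordinate; each rank evaluates a disjoint subset
# of pairs; an allreduce combines the partial forces.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 64
rng = np.random.default_rng(42)          # same seed -> replicated coordinates
pos = rng.random((n, 3))

pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
forces = np.zeros_like(pos)

# Round-robin assignment of pairs to ranks.
for i, j in pairs[rank::size]:
    rij = pos[i] - pos[j]
    r2 = np.dot(rij, rij)
    f = rij / r2**2                      # toy repulsive force, not a real potential
    forces[i] += f
    forces[j] -= f

total_forces = np.empty_like(forces)
comm.Allreduce(forces, total_forces, op=MPI.SUM)
if rank == 0:
    print("net force (should be ~0):", total_forces.sum(axis=0))
```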

  11. Dance Dynamics: Computers and Dance.

    ERIC Educational Resources Information Center

    Gray, Judith A., Ed.; And Others

    1983-01-01

    Five articles discuss the use of computers in dance and dance education. They describe: (1) a computerized behavioral profile of a dance teacher; (2) computer-based dance notation; (3) elementary school computer-assisted dance instruction; (4) quantified analysis of dance criticism; and (5) computerized simulation of human body movements in a…

  12. Pattern recognition and massively distributed computing.

    PubMed

    Davies, E Keith; Glick, Meir; Harrison, Karl N; Richards, W Graham

    2002-12-01

    A feature of Peter Kollman's research was his exploitation of the latest computational techniques to devise novel applications of the free energy perturbation method. He would certainly have seized upon the opportunities offered by massively distributed computing. Here we describe the use of over a million personal computers to perform virtual screening of 3.5 billion druglike molecules against protein targets by pharmacophore pattern matching, together with other applications of pattern recognition such as docking ligands without any a priori knowledge about the binding site location.

  13. Great Expectations: Distributed Financial Computing at Cornell.

    ERIC Educational Resources Information Center

    Schulden, Louise; Sidle, Clint

    1988-01-01

    The Cornell University Distributed Accounting (CUDA) system is an attempt to provide departments a software tool for better managing their finances, creating microcomputer standards, creating a vehicle for better administrative microcomputer support, and insuring local systems are consistent with central computer systems. (Author/MLW)

  14. Distributed Computing: Options in the Eighties.

    ERIC Educational Resources Information Center

    Klingenstein, Kenneth; Devine, Gary D.

    1985-01-01

    University administrative data processing is moving toward a more distributed environment. An architecture must be established that incorporates central sites, campus centers, and end users in a networked pool of computer systems, with applications located at appropriate nodes in the network. (Author/MLW)

  15. Computer Systems for Distributed and Distance Learning.

    ERIC Educational Resources Information Center

    Anderson, M.; Jackson, David

    2000-01-01

    Discussion of network-based learning focuses on a survey of computer systems for distributed and distance learning. Both Web-based systems and non-Web-based systems are reviewed in order to highlight some of the major trends of past projects and to suggest ways in which progress may be made in the future. (Contains 92 references.) (Author/LRW)

  16. Distributed computing system with dual independent communications paths between computers and employing split tokens

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic balancing of the loads. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers, bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communication between respective computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers, where the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of those functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.

  17. Computing spatial information from Fourier coefficient distributions.

    PubMed

    Heinz, William F; Werbin, Jeffrey L; Lattman, Eaton; Hoh, Jan H

    2011-05-01

    The spatial relationships between molecules can be quantified in terms of information. In the case of membranes, the spatial organization of molecules in a bilayer is closely related to biophysically and biologically important properties. Here, we present an approach to computing spatial information based on Fourier coefficient distributions. The Fourier transform (FT) of an image contains a complete description of the image, and the values of the FT coefficients are uniquely associated with that image. For an image where the distribution of pixels is uncorrelated, the FT coefficients are normally distributed and uncorrelated. Further, the probability distribution for the FT coefficients of such an image can readily be obtained by Parseval's theorem. We take advantage of these properties to compute the spatial information in an image by determining the probability of each coefficient (both real and imaginary parts) in the FT, then using the Shannon formalism to calculate information. By using the probability distribution obtained from Parseval's theorem, an effective distance from the uncorrelated or most uncertain case is obtained. The resulting quantity is an information computed in k-space (kSI). This approach provides a robust, facile and highly flexible framework for quantifying spatial information in images and other types of data (of arbitrary dimensions). The kSI metric is tested on a 2D Ising model, frequently used as a model for lipid bilayer; and the temperature-dependent phase transition is accurately determined from the spatial information in configurations of the system.
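
    The abstract does not reproduce the estimator itself; the sketch below only illustrates the general recipe under assumed details: take the Fourier transform of an image, use Parseval's theorem to fix the variance of the Gaussian null for the coefficients, and measure how far the observed coefficient distribution departs from that null.

```python
import numpy as np
from scipy.stats import norm

def k_space_information(image, bins=64):
    """Toy k-space information score (assumed form, not the paper's estimator):
    KL divergence of the observed Fourier-coefficient distribution from the
    Gaussian null whose variance is fixed by Parseval's theorem."""
    x = image - image.mean()
    F = np.fft.fft2(x)
    coeffs = np.concatenate([F.real.ravel(), F.imag.ravel()])

    # Parseval: sum |F_k|^2 = N * sum x_i^2, so under an uncorrelated null the
    # real/imaginary parts are approximately N(0, N*var(x)/2).
    sigma = np.sqrt(x.size * x.var() / 2.0)

    edges = np.linspace(-6 * sigma, 6 * sigma, bins + 1)
    p, _ = np.histogram(coeffs, bins=edges)
    p = p / p.sum()
    q = np.diff(norm.cdf(edges, scale=sigma))    # null probability per bin
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))   # bits

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))                      # uncorrelated pixels
stripes = (np.indices((64, 64)).sum(axis=0) % 8 < 4).astype(float)
print(k_space_information(noise), k_space_information(stripes))
```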

  18. Distributed Computing Framework for Synthetic Radar Application

    NASA Technical Reports Server (NTRS)

    Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael

    2006-01-01

    We are developing an extensible software framework, in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech), and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.

  19. Research computing in a distributed cloud environment

    NASA Astrophysics Data System (ADS)

    Fransham, K.; Agarwal, A.; Armstrong, P.; Bishop, A.; Charbonneau, A.; Desmarais, R.; Hill, N.; Gable, I.; Gaudet, S.; Goliath, S.; Impey, R.; Leavett-Brown, C.; Ouellete, J.; Paterson, M.; Pritchet, C.; Penfold-Brown, D.; Podaima, W.; Schade, D.; Sobie, R. J.

    2010-11-01

    The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.

  20. GEANT4 distributed computing for compact clusters

    NASA Astrophysics Data System (ADS)

    Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.

    2014-11-01

    A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User designed 'work tickets' are distributed to clients using a client-server work flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 for large discrete data sets such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions or simply speeding the throughput of a single model.

  1. A Computational Fluid Dynamics Algorithm on a Massively Parallel Computer

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. We investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. We find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as today's most powerful conventional supercomputers.

  2. Accelerating Computation of DNA Sequence Alignment in Distributed Environment

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Li, Guiyang; Deaton, Russel

    Sequence similarity and alignment are among the most important operations in computational biology. However, analyzing large sets of DNA sequences seems to be impractical on a regular PC. Using multiple threads with the JavaParty mechanism, this project successfully extended the capabilities of regular Java to a distributed environment for the simulation of DNA computation. With the aid of JavaParty and the design of multiple threads, the results of this study demonstrated that the modified regular Java program could perform parallel computing without using RMI or socket communication. In this paper, an efficient method for modeling and comparing DNA sequences with dynamic programming and JavaParty is first proposed; the results of this method in a distributed environment are then discussed.
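
    As a reminder of the dynamic programming kernel underlying such comparisons, the sketch below computes a Needleman-Wunsch global alignment score for a pair of DNA sequences; it illustrates the per-pair work unit that could be farmed out to threads, not the JavaParty implementation from the paper.

```python
# Minimal Needleman-Wunsch global alignment score, the classic dynamic
# programming kernel for DNA sequence comparison.  Illustrative only.

def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[n][m]

print(nw_score("GATTACA", "GCATGCA"))
```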

  3. A directory service for configuring high-performance distributed computations

    SciTech Connect

    Fitzgerald, S.; Kesselman, C.; Foster, I.

    1997-08-01

    High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.

  4. Fluid dynamics computer programs for NERVA turbopump

    NASA Technical Reports Server (NTRS)

    Brunner, J. J.

    1972-01-01

    During the design of the NERVA turbopump, numerous computer programs were developed for the analyses of fluid dynamic problems within the machine. Program descriptions, example cases, users instructions, and listings for the majority of these programs are presented.

  5. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment), that contains a collection of tools wrapped up into a user-friendly environment. The CAINE forensic framework introduces several important novel features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that guides digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  6. Comparing the Performance of Two Dynamic Load Distribution Methods

    NASA Technical Reports Server (NTRS)

    Kale, L. V.

    1987-01-01

    Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: to utilize the available processors effectively, the load must be distributed uniformly across all the processors. However, the structure of these computations cannot be predicted in advance, so static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods through extensive simulation studies. The two schemes are the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that, although simpler, CWN is significantly more effective at distributing the work than the Gradient Model.
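
    The sketch below is a toy version of the neighborhood-contracting idea, assuming a simple load table and neighbour lists: a newly spawned task is forwarded to the lightest neighbour until no neighbour is appreciably less loaded. It illustrates the flavour of such schemes, not the exact CWN or Gradient Model algorithms.

```python
# Toy sketch of distributing new work within a neighborhood: a node keeps a
# newly spawned task only if no neighbour is appreciably less loaded,
# otherwise it forwards the task to the lightest neighbour.  Illustrative only.

def place_task(node, load, neighbours, threshold=2):
    """Return the node that should receive a task spawned at `node`."""
    current = node
    while True:
        lightest = min(neighbours[current], key=load.get)
        if load[lightest] + threshold < load[current]:
            current = lightest          # contract the task toward lighter nodes
        else:
            load[current] += 1          # accept the task locally
            return current

# 4-node ring with uneven load
load = {0: 9, 1: 3, 2: 5, 3: 8}
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(place_task(0, load, neighbours))   # task spawned at the busiest node
```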

  7. Can distributed delays perfectly stabilize dynamical networks?

    NASA Astrophysics Data System (ADS)

    Omi, Takahiro; Shinomoto, Shigeru

    2008-04-01

    Signal transmission delays tend to destabilize dynamical networks, leading to oscillation, but their dispersion contributes oppositely, toward stabilization. We analyze an integrodifferential equation that describes the collective dynamics of a neural network with distributed signal delays. With Γ-distributed delays less dispersed than the exponential distribution, the system exhibits reentrant phenomena, in which stability is first lost and then recovered as the mean delay is increased. With delays dispersed more broadly than the exponential, the system never destabilizes.
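
    A generic form of this kind of model is sketched below: a rate equation whose feedback is averaged over past states with a Gamma-distributed delay kernel. The exact equation and notation analyzed in the paper may differ.

```latex
% Generic rate model with a Gamma-distributed feedback delay (illustrative;
% the exact equation analyzed in the paper may differ).
\begin{align}
  \tau \frac{dx(t)}{dt} &= -x(t) + F\!\left(\int_{0}^{\infty} g(s)\, x(t-s)\, ds\right), \\
  g(s) &= \frac{s^{\kappa-1} e^{-s/\theta}}{\Gamma(\kappa)\,\theta^{\kappa}},
  \qquad \text{mean delay } \kappa\theta,\quad \text{CV } 1/\sqrt{\kappa},
\end{align}
% so that kappa = 1 recovers the exponential kernel and larger kappa gives
% less dispersed (more sharply peaked) delays.
```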

  8. Beyond input-output computings: error-driven emergence with parallel non-distributed slime mold computer.

    PubMed

    Aono, Masashi; Gunji, Yukio-Pegio

    2003-10-01

    The emergence derived from errors is of key importance both for novel computing and for novel uses of the computer. In this paper, we propose an implementable experimental plan for biological computing so as to elicit the emergent property of complex systems. An individual plasmodium of the true slime mold Physarum polycephalum acts in the slime mold computer. Modifying the Elementary Cellular Automaton so that it entails the global synchronization problem of parallel computing provides the NP-complete problem to be solved by the slime mold computer. The possibility of solving the problem by giving neither all possible results nor an explicit prescription of solution-seeking is discussed. In slime mold computing, the distributivity in the local computing logic can change dynamically, and its parallel non-distributed computing cannot be reduced to the spatial addition of multiple serial computings. The computing system based on the exhaustive absence of the super-system may produce something more than filling the vacancy.

  9. BES-III distributed computing status

    NASA Astrophysics Data System (ADS)

    Belov, S. D.; Deng, Z. Y.; Korenkov, V. V.; Li, W. D.; Lin, T.; Ma, Z. T.; Nicholson, C.; Pelevanyuk, I. S.; Suo, B.; Trofimov, V. V.; Tsaregorodtsev, A. U.; Uzhinskiy, A. V.; Yan, T.; Yan, X. F.; Zhang, X. M.; Zhemchugov, A. S.

    2016-09-01

    The BES-III experiment at the Institute of High Energy Physics (Beijing, China) is aimed at precision measurements of e+e- annihilation in the energy range from 2.0 to 4.6 GeV. The world's largest samples of J/psi and psi' events and unique samples of XYZ data have already been collected. The expected increase of the data volume in the coming years required a significant evolution of the computing model, namely a shift from centralized data processing to a distributed one. This report summarizes the current design of the BES-III distributed computing system, some of the key decisions, and the experience gained during two years of operations.

  10. Subtlenoise: sonification of distributed computing operations

    NASA Astrophysics Data System (ADS)

    Love, P. A.

    2015-12-01

    The operation of distributed computing systems requires comprehensive monitoring to ensure reliability and robustness. There are two components found in most monitoring systems: one being visually rich time-series graphs and the other being notification systems for alerting operators under certain pre-defined conditions. In this paper the sonification of monitoring messages is explored using an architecture that fits easily within existing infrastructures, based on mature open-source technologies such as ZeroMQ, Logstash, and SuperCollider (a synthesis engine). Message attributes are mapped onto audio attributes based on a broad classification of the message (continuous or discrete metrics), while keeping the audio stream subtle in nature. The benefits of audio rendering are described in the context of distributed computing operations; this approach may provide a less intrusive way to understand the operational health of these systems.
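
    The sketch below illustrates the attribute-mapping idea with invented message fields: a discrete event selects a short percussive sound, while a continuous metric is mapped onto pitch at low amplitude to keep the stream subtle. It is not the paper's actual mapping or SuperCollider code.

```python
# Toy sketch of mapping monitoring-message attributes onto audio attributes.
# The message fields and the mapping are invented for illustration; the paper's
# pipeline feeds ZeroMQ/Logstash messages to a SuperCollider synthesis engine.

def to_audio_event(msg):
    """Map one monitoring message onto (instrument, pitch, amplitude)."""
    if msg["kind"] == "discrete":             # e.g. a job completed / failed
        instrument = "click"
        pitch_hz = 880 if msg.get("ok", True) else 220
        amplitude = 0.2
    else:                                      # continuous metric, e.g. transfer rate
        instrument = "pad"
        value = max(0.0, min(1.0, msg["value"]))       # normalise to [0, 1]
        pitch_hz = 200 + 600 * value                   # low rate -> low pitch
        amplitude = 0.05                               # keep continuous streams subtle
    return instrument, pitch_hz, amplitude

print(to_audio_event({"kind": "discrete", "ok": False}))
print(to_audio_event({"kind": "continuous", "value": 0.7}))
```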

  11. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2003-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  12. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2004-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  13. Workshop on Populations & Crowds: Dynamics, Disruptions and their Computational Models

    DTIC Science & Technology

    2015-01-01

    Final Report: Workshop on Populations & Crowds: Dynamics, Disruptions and their Computational Models. Reporting period Aug-2012 to 9-Aug-2013. Approved for public release; distribution unlimited. Keywords: disruptions, social networks.

  14. Radar data processing using a distributed computational system

    NASA Astrophysics Data System (ADS)

    Mota, Gilberto F.

    1992-06-01

    This research specifies and validates a new concurrent decomposition scheme, called Confined Space Search Decomposition (CSSD), to exploit parallelism of Radar Data Processing algorithms using a Distributed Computational System. To formalize the specification, we propose and apply an object-oriented methodology called Decomposition Cost Evaluation Model (DCEM). To reduce the penalties of load imbalance, we propose a distributed dynamic load balance heuristic called Object Reincarnation (OR). To validate the research, we first compare our decomposition with an identified alternative using the proposed DCEM model and then develop a theoretical prediction of selected parameters. We also develop a simulation to check the Object Reincarnation Concept.

  15. Fast Parallel Computation Of Manipulator Inverse Dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    Method for fast parallel computation of inverse dynamics problem, essential for real-time dynamic control and simulation of robot manipulators, undergoing development. Enables exploitation of high degree of parallelism and achievement of significant computational efficiency, while minimizing various communication and synchronization overheads as well as complexity of required computer architecture. Universal real-time robotic controller and simulator (URRCS) consists of internal host processor and several SIMD processors with ring topology. Architecture modular and expandable: more SIMD processors added to match size of problem. Operate asynchronously and in MIMD fashion.

  16. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  17. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Astrophysics Data System (ADS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-08-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  18. The future of PanDA in ATLAS distributed computing

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.

  19. Computational fluid dynamics - The coming revolution

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1982-01-01

    The development of aerodynamic theory is traced from the days of Aristotle to the present, with the next stage in computational fluid dynamics dependent on superspeed computers for flow calculations. Additional attention is given to the history of numerical methods inherent in writing computer codes applicable to viscous and inviscid analyses for complex configurations. The advent of the superconducting Josephson junction is noted to place configurational demands on computer design to avoid limitations imposed by the speed of light, and a Japanese projection of a computer capable of several hundred billion operations/sec is mentioned. The NASA Numerical Aerodynamic Simulator is described, showing capabilities of a billion operations/sec with a memory of 240 million words using existing technology. Near-term advances in fluid dynamics are discussed.

  20. Computational plasticity algorithm for particle dynamics simulations

    NASA Astrophysics Data System (ADS)

    Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.

    2017-03-01

    The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.

  1. Distributed Computing for the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Chudoba, J.

    2015-12-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years the VO auger has been among the top ten EGI users in terms of total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with small and less reliable sites used for the bulk production. The new system also has the possibility of using available resources in clouds. The Dirac File Catalog replaced the LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we give a comparison of the old and the new production systems and report the experience of migrating to the new system.

  2. Three-Dimensional Computational Fluid Dynamics

    SciTech Connect

    Haworth, D.C.; O'Rourke, P.J.; Ranganathan, R.

    1998-09-01

    Computational fluid dynamics (CFD) is one discipline falling under the broad heading of computer-aided engineering (CAE). CAE, together with computer-aided design (CAD) and computer-aided manufacturing (CAM), comprises a mathematically based approach to engineering product and process design, analysis and fabrication. In this overview of CFD for the design engineer, our purposes are three-fold: (1) to define the scope of CFD and motivate its utility for engineering, (2) to provide a basic technical foundation for CFD, and (3) to convey how CFD is incorporated into engineering product and process design.

  3. Pseudo-interactive monitoring in distributed computing

    SciTech Connect

    Sfiligoi, I.; Bradley, D.; Livny, M.; /Wisconsin U., Madison

    2009-05-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  4. Pseudo-interactive monitoring in distributed computing

    NASA Astrophysics Data System (ADS)

    Sfiligoi, I.; Bradley, D.; Livny, M.

    2010-04-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  5. A Hundred Impossibility Proofs for Distributed Computing

    DTIC Science & Technology

    1989-08-01

    distributed computing . In this category, I include not just results that say that a particular task cannot be accomplished, but also lower bound results, which say that a task cannot be accomplished within a certain bound on cost. I started out with a simple plan for preparing this talk: I would spend a couple of weeks reading all the impossibility proofs in our fields, and would categorize them according to the ideas used. Then I would make wise and general observations, and try to predict where the future of this area is headed. That turned out to be a bit too ambitious;

  6. Interoperable PKI Data Distribution in Computational Grids

    SciTech Connect

    Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.; Smith, Sean W.

    2008-07-25

    One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that inter-operate across domain boundaries. Public Key Infrastructures (PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources, particularly when the Certification Authority (CA) is alien to the relying party. This article presents how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).

  7. Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The Virtual National Airspace Simulation (VNAS) will improve the safety of Air Transportation. In 2001, using simulation and information management software running over a distributed network of super-computers, researchers at NASA Ames, Glenn, and Langley Research Centers developed a working prototype of a virtual airspace. This VNAS prototype modeled daily operations of the Atlanta airport by integrating measured operational data and simulation data on up to 2,000 flights a day. The concepts and architecture developed by NASA for this prototype are integral to the National Airspace Simulation to support the development of strategies improving aviation safety, identifying precursors to component failure.

  8. Dynamic MTW: a dynamic bandwidth distribution scheme in EPON

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Ge, Liangwei; Zeng, Lieguang

    2002-08-01

    An algorithm to improve the bandwidth utilization for EPON by using dynamic bandwidth distribution is put forward. System performance, such as queuing delay under self-similar traffic, is simulated by using OPNET.

  9. Information modification and particle collisions in distributed computation.

    PubMed

    Lizier, Joseph T; Prokopenko, Mikhail; Zomaya, Albert Y

    2010-09-01

    Distributed computation can be described in terms of the fundamental operations of information storage, transfer, and modification. To describe the dynamics of information in computation, we need to quantify these operations on a local scale in space and time. In this paper we extend previous work regarding the local quantification of information storage and transfer, to explore how information modification can be quantified at each spatiotemporal point in a system. We introduce the separable information, a measure which locally identifies information modification events where separate inspection of the sources to a computation is misleading about its outcome. We apply this measure to cellular automata, where it is shown to be the first direct quantitative measure to provide evidence for the long-held conjecture that collisions between emergent particles therein are the dominant information modification events.
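
    As a rough sketch of how such a measure can be constructed (the paper's exact definition and notation may differ), the separable information at a cell combines local information storage with the local transfer from each causal source:

```latex
% Rough sketch of the construction (the paper's exact notation may differ):
% local active information storage plus local apparent transfer entropy
% summed over the causal sources V_X of cell X at time n+1.
\begin{equation}
  s_X(n+1) \;=\; a_X(n+1) \;+\; \sum_{Y \in \mathcal{V}_X \setminus X} t_{Y \to X}(n+1),
\end{equation}
% where negative values of s_X flag spatiotemporal points at which inspecting
% the storage and each transfer source separately is misleading about the
% outcome, i.e. candidate information-modification events.
```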

  10. Progress in the dynamical parton distributions

    SciTech Connect

    Jimenez-Delgado, Pedro

    2012-06-01

    The present status of the (JR) dynamical parton distribution functions is reported. Different theoretical improvements, including the determination of the strange sea input distribution, the treatment of correlated errors and the inclusion of alternative data sets, are discussed. Highlights in the ongoing developments as well as (very) preliminary results in the determination of the strong coupling constant are presented.

  11. Dynamic Associations in Nonlinear Computing Arrays

    NASA Astrophysics Data System (ADS)

    Huberman, B. A.; Hogg, T.

    1985-10-01

    We experimentally show that nonlinear parallel arrays can be made to compute with attractors. This leads to fast adaptive behavior in which dynamical associations can be made between different inputs which initially produce sharply distinct outputs. We first define a set of simple local procedures which allow a general computing structure to change its state in time so as to produce classical Pavlovian conditioning. We then examine the dynamics of coalescence and dissociation of attractors with a number of quantitative experiments. We also show how such arrays exhibit generalization and differentiation of inputs in their behavior.

  12. Numerical analysis of the dynamics of distributed vortex configurations

    NASA Astrophysics Data System (ADS)

    Govorukhin, V. N.

    2016-08-01

    A numerical algorithm is proposed for analyzing the dynamics of distributed plane vortex configurations in an inviscid incompressible fluid. At every time step, the algorithm involves the computation of unsteady vortex flows, an analysis of the configuration structure with the help of heuristic criteria, the visualization of the distribution of marked particles and vorticity, the construction of streamlines of fluid particles, and the computation of the field of local Lyapunov exponents. The inviscid incompressible fluid dynamic equations are solved by applying a meshless vortex method. The algorithm is used to investigate the interaction of two and three identical distributed vortices with various initial positions in the flow region with and without the Coriolis force.
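
    A minimal point-vortex sketch of the meshless idea is given below: vorticity is carried by particles, and each particle is advected by the regularized Biot-Savart velocity induced by all the others. It is illustrative only; the paper's algorithm also includes structure analysis, marked-particle visualization, and local Lyapunov exponent fields.

```python
# Minimal point-vortex sketch of a 2D meshless vortex method: vorticity is
# carried by particles, each advected by the regularised Biot-Savart velocity
# induced by all the others.  Illustrative only; not the paper's algorithm.

import numpy as np

def induced_velocity(pos, gamma, eps=1e-3):
    """Velocity at each particle induced by all vortex particles."""
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]   # x_i - x_j
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]   # y_i - y_j
    r2 = dx**2 + dy**2 + eps**2                    # regularised squared distance
    u = np.sum(-gamma[None, :] * dy / (2 * np.pi * r2), axis=1)
    v = np.sum( gamma[None, :] * dx / (2 * np.pi * r2), axis=1)
    return np.stack([u, v], axis=1)

# two identical distributed vortices, each represented by a small particle cloud
rng = np.random.default_rng(0)
pos = np.concatenate([rng.normal([-1.0, 0.0], 0.1, (50, 2)),
                      rng.normal([ 1.0, 0.0], 0.1, (50, 2))])
gamma = np.full(100, 1.0 / 50)                     # equal circulation per particle

dt = 0.01
for _ in range(1000):                              # explicit Euler time stepping
    pos += dt * induced_velocity(pos, gamma)
print(pos.mean(axis=0))                            # centroid as a sanity check
```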

  13. Fast Parallel Computation Of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Kwan, Gregory L.; Bagherzadeh, Nader

    1996-01-01

    Constraint-force algorithm fast, efficient, parallel-computation algorithm for solving forward dynamics problem of multibody system like robot arm or vehicle. Solves problem in minimum time proportional to log(N) by use of optimal number of processors proportional to N, where N is number of dynamical degrees of freedom: in this sense, constraint-force algorithm both time-optimal and processor-optimal parallel-processing algorithm.

  14. Shipboard Application of a Ring Structured Distributed Computing System.

    DTIC Science & Technology

    Considerable research is currently going on into the application of distributed computing systems. They appear particularly suitable for the...structured distributed computing system might be adapted to function in this environment. Included in this consideration are the feasibility of

  15. Development of a Defence Distributed Computing Environment (DCE) Database Demonstrator,

    DTIC Science & Technology

    1995-11-01

    This report discusses the development of a Defence Distributed Computing Environment (DCE) database demonstrator program. The Demonstrator program...showcases the interoperability, portability, survivability and security features of Open Software Foundation’s Distributed Computing Environment.

  16. Advances in the spatially distributed ages-w model: parallel computation, java connection framework (JCF) integration, and streamflow/nitrogen dynamics assessment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic and water quality (H/WQ) simulation components under the Java Connection Framework (JCF) and the Object Modeling System (OMS) environmental modeling framework. AgES-W is implicitly scala...

  17. Models and Measurements of Parallelism for a Distributed Computer System.

    DTIC Science & Technology

    1982-01-01

    ... that parallel execution of the processes comprising an application program will defray the overhead costs of distributed computing.

  18. Probability distributions of molecular observables computed from Markov models.

    PubMed

    Noé, Frank

    2008-06-28

    Molecular dynamics (MD) simulations can be used to estimate transition rates between conformational substates of the simulated molecule. Such an estimation is associated with statistical uncertainty, which depends on the number of observed transitions. In turn, it induces uncertainties in any property computed from the simulation, such as free energy differences or the time scales involved in the system's kinetics. Assessing these uncertainties is essential for testing the reliability of a given observation and also to plan further simulations in such a way that the most serious uncertainties will be reduced with minimal effort. Here, a rigorous statistical method is proposed to approximate the complete statistical distribution of any observable of an MD simulation provided that one can identify conformational substates such that the transition process between them may be modeled with a memoryless jump process, i.e., Markov or Master equation dynamics. The method is based on sampling the statistical distribution of Markov transition matrices that is induced by the observed transition events. It allows physically meaningful constraints to be included, such as sampling only matrices that fulfill detailed balance, or matrices that produce a predefined equilibrium distribution of states. The method is illustrated on microsecond (μs) MD simulations of a hexapeptide for which the distributions and uncertainties of the free energy differences between conformations, the transition matrix elements, and the transition matrix eigenvalues are estimated. It is found that both constraints, detailed balance and predefined equilibrium distribution, can significantly reduce the uncertainty of some observables.
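
    A minimal sketch of the core sampling idea follows: each row of the transition matrix is drawn from the Dirichlet posterior implied by the observed transition counts, and an observable (here the slowest implied timescale) is computed for every sample. The counts are invented, and the constraints discussed in the paper (e.g. detailed balance) are not enforced in this simplified version.

```python
# Sketch of sampling the distribution of an observable induced by uncertainty
# in an estimated Markov transition matrix: each row is drawn from the
# Dirichlet posterior implied by the observed transition counts, and the
# slowest implied timescale is computed for every sample.  Simplified: no
# detailed-balance or fixed-equilibrium constraints are enforced here.

import numpy as np

rng = np.random.default_rng(1)

# invented transition counts between 3 conformational substates
counts = np.array([[90, 8, 2],
                   [10, 80, 10],
                   [ 3, 12, 85]])
lag_time = 1.0  # time corresponding to one counted transition step

timescales = []
for _ in range(5000):
    # sample each row of the transition matrix from its Dirichlet posterior
    T = np.vstack([rng.dirichlet(row + 1) for row in counts])
    eigvals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    mu2 = eigvals[1]                      # second-largest eigenvalue magnitude
    timescales.append(-lag_time / np.log(mu2))

lo, hi = np.percentile(timescales, [2.5, 97.5])
print(f"slowest implied timescale: {np.mean(timescales):.2f} "
      f"(95% interval {lo:.2f} - {hi:.2f})")
```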

  19. HL-20 computational fluid dynamics analysis

    NASA Astrophysics Data System (ADS)

    Weilmuenster, K. James; Greene, Francis A.

    1993-09-01

    The essential elements of a computational fluid dynamics analysis of the HL-20/personnel launch system aerothermal environment at hypersonic speeds including surface definition, grid generation, solution techniques, and visual representation of results are presented. Examples of solution technique validation through comparison with data from ground-based facilities are presented, along with results from computations at flight conditions. Computations at flight points indicate that real-gas effects have little or no effect on vehicle aerodynamics and, at these conditions, results from approximate techniques for determining surface heating are comparable with those obtained from Navier-Stokes solutions.

  20. HL-20 computational fluid dynamics analysis

    NASA Technical Reports Server (NTRS)

    Weilmuenster, K. J.; Greene, Francis A.

    1993-01-01

    The essential elements of a computational fluid dynamics analysis of the HL-20/personnel launch system aerothermal environment at hypersonic speeds including surface definition, grid generation, solution techniques, and visual representation of results are presented. Examples of solution technique validation through comparison with data from ground-based facilities are presented, along with results from computations at flight conditions. Computations at flight points indicate that real-gas effects have little or no effect on vehicle aerodynamics and, at these conditions, results from approximate techniques for determining surface heating are comparable with those obtained from Navier-Stokes solutions.

  1. Testing the CDF distributed computing framework

    SciTech Connect

    Bartsch, Valeria; Baranovski, Andrew; Belforte, Stefano; Burgon-Lyon, Morag; Garzoglio, Gabriele; Herber, Randolph; Illingworth, Robert; Kennedy, Rob; Kerzel, Ulrich; Kreymer, Art; Leslie, Matt; Loebel-Carpenter, Lauri; Lueking, Lee; Lyon, Adam; Merritt, Wyatt; Ratnikov, Fedor; Sill, Alan; St. Denis, Richard; Stonjek, Stefan; Terekhov, Igor; Trumbo, Julie; /Fermilab /Oxford U. /INFN, Trieste /Glasgow U. /Karlsruhe U. /Rutgers U., Piscataway /Texas Tech.

    2004-12-01

    A major source of CPU power for CDF (Collider Detector at Fermilab) is the CAF (Central Analysis Farm) [1] at Fermilab. The CAF is a farm of computers running Linux with access to the CDF data handling system and databases that allows CDF collaborators to run batch analysis jobs. Besides providing CPU power, it has a good monitoring tool. The CAF software is a wrapper around a batch system, either fbsng [3] or Condor, to submit jobs in a uniform way, so that submission to the CAF clusters inside and outside Fermilab is possible from many computers with Kerberos authentication. It is mainly used to access datasets which comprise a large number of files and to analyze the data. Up to now the DCache system has been used to access the files. In autumn 2004 some of the important datasets will only be readable with the help of the data handling system SAM (Sequential Access to data via Metadata) [2]. This will be done in order to switch to using only one data handling system at Fermilab and on the remote sites. SAM has been used in Run II to store, manage, deliver and track the processing of all data. It is designed to copy data to remote sites with remote analysis in mind. To prove that CAF and SAM could provide the required CPU power and data handling, stress tests of the combined system were carried out. A second goal of CDF is to distribute computing: in 2005, 50% of the computing shall be located outside of Fermilab. For this purpose CDF will use the DCAF (Decentralized CDF Analysis Farms) in combination with SAM. To achieve user friendliness, the SAM station environment has to be common to all stations and adaptations to the environment have to be made.

  2. Graphics supercomputer for computational fluid dynamics research

    NASA Astrophysics Data System (ADS)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer, PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted to a research computer lab by adding some furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  3. Visualization of unsteady computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1994-01-01

    A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamic (CFD) results is presented. This environment requires a supercomputer-class machine; massively parallel processors (MPPs) and clusters of workstations acting as a single MPP (by concurrently working on the same task) provide the required computational bandwidth for CFD calculations of transient problems. Clusters of reduced instruction set computer (RISC) workstations are a recent development made possible by the low cost and high performance that workstation vendors provide. The cluster, with the proper software, can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3, revision 1.00 make up the bulk of this report.

  4. An Applet-based Anonymous Distributed Computing System.

    ERIC Educational Resources Information Center

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  5. Computational fluid dynamics in oil burner design

    SciTech Connect

    Butcher, T.A.

    1997-09-01

    In Computational Fluid Dynamics, the differential equations which describe flow, heat transfer, and mass transfer are solved approximately using a very laborious numerical procedure. Flows of practical interest to burner design are always turbulent, adding the further complexity of requiring a turbulence model. This paper presents a model for burner design.

  6. Final Report Computational Analysis of Dynamical Systems

    SciTech Connect

    Guckenheimer, John

    2012-05-08

    This is the final report for DOE Grant DE-FG02-93ER25164, initiated in 1993. This grant supported research of John Guckenheimer on computational analysis of dynamical systems. During that period, seventeen individuals received PhD degrees under the supervision of Guckenheimer and over fifty publications related to the grant were produced. This document contains copies of these publications.

  7. Absolute nonlocality via distributed computing without communication

    NASA Astrophysics Data System (ADS)

    Czekaj, Ł.; Pawłowski, M.; Vértesi, T.; Grudka, A.; Horodecki, M.; Horodecki, R.

    2015-09-01

    Understanding the role that quantum entanglement plays as a resource in various information processing tasks is one of the crucial goals of quantum information theory. Here we propose an alternative perspective for studying quantum entanglement: distributed computation of functions without communication between nodes. To formalize this approach, we propose identity games. Surprisingly, despite no signaling, we obtain that nonlocal quantum strategies beat classical ones in terms of winning probability for identity games originating from certain bipartite and multipartite functions. Moreover, we show that, for a majority of functions, access to general nonsignaling resources boosts the success probability by a factor of two in comparison to classical resources for a sufficiently large number of outputs. Because there are no constraints on the inputs and no processing of the outputs in the identity games, they detect very strong types of correlations: absolute nonlocality.

  8. LHCbDirac: distributed computing in LHCb

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Charpentier, P.; Graciani, R.; Tsaregorodtsev, A.; Closier, J.; Mathe, Z.; Ubeda, M.; Zhelezov, A.; Lanciotti, E.; Romanovskiy, V.; Ciba, K. D.; Casajus, A.; Roiser, S.; Sapunov, M.; Remenska, D.; Bernardoff, V.; Santana, R.; Nandakumar, R.

    2012-12-01

    We present LHCbDirac, an extension of the DIRAC community Grid solution that handles LHCb specificities. The DIRAC software has been developed for many years within LHCb only. Nowadays it is a generic software, used by many scientific communities worldwide. Each community wanting to take advantage of DIRAC has to develop an extension, containing all the necessary code for handling their specific cases. LHCbDirac is an actively developed extension, implementing the LHCb computing model and workflows handling all the distributed computing activities of LHCb. Such activities include real data processing (reconstruction, stripping and streaming), Monte-Carlo simulation and data replication. Other activities are groups and user analysis, data management, resources management and monitoring, data provenance, accounting for user and production jobs. LHCbDirac also provides extensions of the DIRAC interfaces, including a secure web client, python APIs and CLIs. Before putting in production a new release, a number of certification tests are run in a dedicated setup. This contribution highlights the versatility of the system, also presenting the experience with real data processing, data and resources management, monitoring for activities and resources.

  9. Computation-distributed probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Wang, Junjie; Zhao, Lingling; Su, Xiaohong; Shi, Chunmei; Ma, JiQuan

    2016-12-01

    Particle probability hypothesis density filtering has become a promising approach for multi-target tracking due to its capability of handling an unknown and time-varying number of targets in a nonlinear, non-Gaussian system. However, its computational complexity increases linearly with the number of obtained observations and the number of particles, which can be very time consuming, particularly when numerous targets and clutter exist in the surveillance region. To address this issue, we present a distributed-computation particle probability hypothesis density (PHD) filter for target tracking. It runs several local decomposed particle PHD filters in parallel on separate processing elements. Each processing element takes responsibility for a portion of the particles but all measurements, and provides local estimates. A central unit controls particle exchange among the processing elements and specifies a fusion rule to match and fuse the estimates from the different local filters. The proposed framework is suitable for parallel implementation. Simulations verify that the proposed method can provide significant acceleration while maintaining accuracy comparable to the standard particle PHD filter.
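
    The partition/fuse pattern described above is sketched below with a generic stand-in weight update: particles are split across processing elements, each element sees all measurements but updates only its own weights, and a central unit fuses the local estimates. It is not the full PHD recursion.

```python
# Sketch of the partition/fuse pattern: split the particles across processing
# elements, let every element update its own weights against all measurements,
# and fuse the local estimates centrally.  The weight update is a generic
# stand-in, not the full PHD recursion.

import numpy as np

rng = np.random.default_rng(2)

def local_filter(particles, weights, measurements, noise=0.5):
    """One processing element: update its particle weights against all
    measurements and return a local estimate (weight mass, weighted sum)."""
    like = np.zeros(len(particles))
    for z in measurements:
        like += np.exp(-0.5 * ((particles - z) / noise) ** 2)
    w = weights * like
    return w.sum(), (w * particles).sum()

# global particle set describing positions on a 1-D surveillance region
particles = rng.uniform(0.0, 10.0, 4000)
weights = np.full(4000, 1.0 / 4000)
measurements = np.array([3.0, 7.0])

# partition the particles across 4 processing elements (run serially here)
locals_ = [local_filter(particles[idx], weights[idx], measurements)
           for idx in np.array_split(np.arange(4000), 4)]

# central unit: fuse the local estimates into one global weighted mean
mass = sum(m for m, _ in locals_)
mean = sum(s for _, s in locals_) / mass
print(f"fused position estimate: {mean:.2f}")
```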

  10. Automating usability of ATLAS Distributed Computing resources

    NASA Astrophysics Data System (ADS)

    Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration

    2014-06-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes history of storage monitoring tests outcome. SAAB accomplishes both the tasks of providing global monitoring as well as automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage areas monitoring and central management at all levels. Such review has involved the reordering and optimization of SAM tests deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows monitoring the storage resources status with fine time-granularity and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, the human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB working principles and features. We present also the decrease of human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.

  11. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation: Second Year Progress Report

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    Mesh generation has long been recognized as a bottleneck in the CFD process. While much research on automating the volume mesh generation process has been relatively successful, these methods rely on an appropriate initial surface triangulation to work properly. Surface discretization has been one of the least automated steps in computational simulation due to its dependence on implicitly defined CAD surfaces and curves. Differences in CAD geometry engines manifest themselves in discrepancies in their interpretation of the same entities. This lack of "good" geometry causes significant problems for mesh generators, requiring users to "repair" the CAD geometry before mesh generation. The problem is exacerbated when CAD geometry is translated to other forms (e.g., IGES) which do not include important topological and construction information in addition to entity geometry. One technique to avoid these problems is to access the CAD geometry directly from the mesh generating software, rather than through files. By accessing the geometry model (not a discretized version) in its native environment, this approach avoids translation to a format which can deplete the model of topological information. Our approach to enabling models developed in the Denali software environment to directly access CAD geometry and functions is through an Application Programming Interface (API) known as CAPRI. CAPRI provides a layer of indirection through which CAD-specific data may be accessed by an application program using CAD-system-neutral C and FORTRAN language function calls. CAPRI supports a general set of CAD operations such as truth testing, geometry construction and entity queries.

  12. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics

    PubMed Central

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.

    2012-01-01

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as “multistate”. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
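
    A toy numerical sketch of the "linear combination of bonding topologies" idea follows: diabatic state energies and a coupling form a small Hamiltonian whose lowest eigenvalue is the reactive surface. The two-state potentials and coupling constant are invented for illustration and are not a parameterization from the paper.

```python
# Toy sketch of the multiconfigurational ("multistate") idea: the reactive
# potential is the lowest eigenvalue of a small Hamiltonian built from diabatic
# state energies and couplings.  The potentials and coupling below are invented
# for illustration; they are not a parameterisation from the paper.

import numpy as np

def diabatic_energies(x):
    """Two harmonic 'bonding topologies' displaced along a reaction coordinate."""
    v1 = 0.5 * (x + 1.0) ** 2          # reactant-like topology
    v2 = 0.5 * (x - 1.0) ** 2 + 0.1    # product-like topology
    return v1, v2

def ground_state(x, coupling=0.2):
    v1, v2 = diabatic_energies(x)
    h = np.array([[v1, -coupling],
                  [-coupling, v2]])
    evals, evecs = np.linalg.eigh(h)
    return evals[0], evecs[:, 0]       # reactive surface and state weights

for x in (-1.0, 0.0, 1.0):
    e, c = ground_state(x)
    print(f"x={x:+.1f}  E0={e:.3f}  weights={np.round(c**2, 3)}")
```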

  13. The Gain of Resource Delegation in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander

    In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, loosely coupled networks of autonomously acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, the granting of idle resources can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups we show the possible gain of this approach and analyze the dynamics in workload-adaptive reconfiguration behavior.

  14. The brain dynamics of linguistic computation

    PubMed Central

    Murphy, Elliot

    2015-01-01

    Neural oscillations at distinct frequencies are increasingly being related to a number of basic and higher cognitive faculties. Oscillations enable the construction of coherently organized neuronal assemblies through establishing transitory temporal correlations. By exploring the elementary operations of the language faculty—labeling, concatenation, cyclic transfer—alongside neural dynamics, a new model of linguistic computation is proposed. It is argued that the universality of language, and the true biological source of Universal Grammar, is not to be found purely in the genome as has long been suggested, but more specifically within the extraordinarily preserved nature of mammalian brain rhythms employed in the computation of linguistic structures. Computational-representational theories are used as a guide in investigating the neurobiological foundations of the human “cognome”—the set of computations performed by the nervous system—and new directions are suggested for how the dynamics of the brain (the “dynome”) operate and execute linguistic operations. The extent to which brain rhythms are the suitable neuronal processes which can capture the computational properties of the human language faculty is considered against a backdrop of existing cartographic research into the localization of linguistic interpretation. Particular focus is placed on labeling, the operation elsewhere argued to be species-specific. A Basic Label model of the human cognome-dynome is proposed, leading to clear, causally-addressable empirical predictions, to be investigated by a suggested research program, Dynamic Cognomics. In addition, a distinction between minimal and maximal degrees of explanation is introduced to differentiate between the depth of analysis provided by cartographic, rhythmic, neurochemical, and other approaches to computation. PMID:26528201

  15. Visualization of unsteady computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1995-01-01

    The current computing environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamic (CFD) results is a super-computer class machine. The Massively Parallel Processors (MPP's) such as the 160 node IBM SP2 at NAS and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array) provide the required computation bandwidth for CFD calculations of transient problems. Work is in progress on a set of software tools designed specifically to address visualizing 3D unsteady CFD results in these super-computer-like environments. The visualization is concurrently executed with the CFD solver. The parallel version of Visual3, pV3, required splitting up the unsteady visualization task to allow execution across a network of workstation(s) and compute servers. In this computing model, the network is almost always the bottleneck, so much of the effort involved techniques to reduce the size of the data transferred between machines.

  16. Feasibility Study of Computational Fluid Dynamics Simulation of Coronary Computed Tomography Angiography Based on Dual-Source Computed Tomography

    PubMed Central

    Lu, Jing; Yu, Jie; Shi, Heshui

    2017-01-01

    Background Adding functional features to morphological features offers a new method for non-invasive assessment of myocardial perfusion. This study aimed to explore technical routes for assessing the left coronary artery pressure gradient, wall shear stress distribution and blood flow velocity distribution, combining a three-dimensional coronary model based on high-resolution dual-source computed tomography (CT) with computational fluid dynamics (CFD) simulation. Methods Three cases of no obvious stenosis, mild stenosis and severe stenosis in the left anterior descending (LAD) artery were enrolled. Images acquired on dual-source CT were input into the software packages Mimics, ICEMCFD and FLUENT to simulate the pressure gradient, wall shear stress distribution and blood flow velocity distribution. The coronary enhancement ratio of the coronary artery was measured for comparison with the pressure gradient. Results Results conformed to theoretical values and showed differences between normal and abnormal samples. Conclusions The study preliminarily verified the essential parameters and basic techniques of blood flow numerical simulation and showed the approach to be feasible. PMID:27924174

  17. Computational fluid dynamics using CATIA created geometry

    NASA Astrophysics Data System (ADS)

    Gengler, Jeanne E.

    1989-07-01

    A method has been developed to link the geometry definition residing on a CAD/CAM system with a computational fluid dynamics (CFD) tool needed to evaluate aerodynamic designs and requiring the memory capacity of a supercomputer. Requirements for surfaces suitable for CFD analysis are discussed. Techniques for developing surfaces and verifying their smoothness are compared, showing the capability of the CAD/CAM system. The utilization of a CAD/CAM system to create a computational mesh is explained, and the mesh interaction with the geometry and input file preparation for the CFD analysis is discussed.

  18. Dynamic data distributions in Vienna Fortran

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Mehrotra, Piyush; Moritsch, Hans; Zima, Hans

    1993-01-01

    Vienna Fortran is a machine-independent language extension of Fortran, which is based upon the Single-Program-Multiple-Data (SPMD) paradigm and allows the user to write programs for distributed-memory systems using global addresses. The language features focus mainly on the issue of distributing data across virtual processor structures. Those features of Vienna Fortran that allow the data distributions of arrays to change dynamically, depending on runtime conditions, are discussed. The relevant language features are discussed, their implementation is outlined, and how they may be used in applications is described.

  19. The use of computers for instruction in fluid dynamics

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1987-01-01

    Applications for computers which improve instruction in fluid dynamics are examined. Computers can be used to illustrate three-dimensional flow fields and simple fluid dynamics mechanisms, to solve fluid dynamics problems, and for electronic sketching. The usefulness of computer applications is limited by computer speed, memory, and software and the clarity and field of view of the projected display. Proposed advances in personal computers which will address these limitations are discussed. Long range applications for computers in education are considered.

  20. Spectral Methods for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Streett, C. L.; Hussaini, M. Y.

    1994-01-01

    As a tool for large-scale computations in fluid dynamics, spectral methods were prophesized in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral

  1. Dynamic Singularity Spectrum Distribution of Sea Clutter

    NASA Astrophysics Data System (ADS)

    Xiong, Gang; Yu, Wenxian; Zhang, Shuning

    2015-12-01

    Fractal and multifractal theory have provided new approaches for radar signal processing and target detection against an ocean background. However, the related research has mainly focused on the fractal dimension or the multifractal spectrum (MFS) of sea clutter. In this paper, a new dynamic singularity analysis method for sea clutter using the MFS distribution is developed, based on moving detrending analysis (DMA-MFSD). Theoretically, we introduce time information by using the cyclic auto-correlation of sea clutter. For the transient correlation series, the instantaneous singularity spectrum based on the multifractal detrending moving average (MF-DMA) algorithm is calculated, and the dynamic singularity spectrum distribution of sea clutter is acquired. In addition, we analyze the time-varying singularity exponent ranges and the maximum position function in DMA-MFSD of sea clutter. For real sea clutter data recorded in sea state level III, we analyze the dynamic singularity spectrum distribution and conclude that radar sea clutter has non-stationary and time-varying scale characteristics and exhibits a time-varying singularity spectrum distribution under the proposed DMA-MFSD method. The DMA-MFSD method will also provide a reference for nonlinear dynamics and multifractal signal processing.
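
    A compressed, purely illustrative sketch of a moving-average-detrended multifractal spectrum estimate is given below; it omits the cyclic auto-correlation step and the time-resolved (dynamic) aspect that distinguish DMA-MFSD, and all numerical choices are placeholders.

      import numpy as np

      def mf_dma_spectrum(x, scales, qs):
          """Simplified moving-average-detrended multifractal spectrum (illustrative only)."""
          y = np.cumsum(x - np.mean(x))                      # profile of the series
          logF = np.empty((len(qs), len(scales)))
          for j, n in enumerate(scales):
              trend = np.convolve(y, np.ones(n) / n, mode="same")   # moving-average trend
              eps = y - trend                                 # detrended residual
              nseg = len(eps) // n
              seg = eps[:nseg * n].reshape(nseg, n)
              F2 = np.mean(seg**2, axis=1)                    # per-segment variance
              for i, q in enumerate(qs):
                  if abs(q) < 1e-12:
                      logF[i, j] = 0.5 * np.mean(np.log(F2))
                  else:
                      logF[i, j] = np.log(np.mean(F2**(q / 2.0))**(1.0 / q))
          # generalized Hurst exponents from slopes of log F_q(n) versus log n
          h = np.array([np.polyfit(np.log(scales), logF[i], 1)[0] for i in range(len(qs))])
          tau = qs * h - 1.0
          alpha = np.gradient(tau, qs)                        # singularity strengths
          f_alpha = qs * alpha - tau                          # singularity spectrum
          return alpha, f_alpha

      x = np.random.randn(4096)                               # placeholder "clutter" series
      alpha, f_alpha = mf_dma_spectrum(x, scales=np.array([16, 32, 64, 128, 256]),
                                       qs=np.linspace(-5, 5, 21))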

  2. Flight Simulation of Taketombo Based on Computational Fluid Dynamics and Computational Flight Dynamics

    NASA Astrophysics Data System (ADS)

    Kawamura, Kohei; Ueno, Yosuke; Nakamura, Yoshiaki

    In the present study we have developed a numerical method to simulate the flight dynamics of a small flying body with unsteady motion, where both aerodynamics and flight dynamics are fully considered. A key point of this numerical code is to use computational fluid dynamics and computational flight dynamics at the same time, which is referred to as CFD2, or double CFDs, where several new ideas are adopted in the governing equations, the method to make each quantity nondimensional, and the coupling method between aerodynamics and flight dynamics. This numerical code can be applied to simulate the unsteady motion of small vehicles such as micro air vehicles (MAV). As a sample calculation, we take up Taketombo, or a bamboo dragonfly, and its free flight in the air is demonstrated. The eventual aim of this research is to virtually fly an aircraft with arbitrary motion to obtain aerodynamic and flight dynamic data, which cannot be taken in the conventional wind tunnel.

  3. Distributed Design and Analysis of Computer Experiments

    SciTech Connect

    Doak, Justin

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an
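
    The sampling-then-analysis workflow can be sketched in a few lines; this is an illustration of the idea only, and the function names, input bounds, and the quadratic surrogate standing in for MARS are assumptions rather than DDACE's actual interface.

      import numpy as np

      rng = np.random.default_rng(0)

      def latin_hypercube(n_samples, bounds):
          """One stratified (Latin hypercube) point per interval in each dimension."""
          d = len(bounds)
          strata = np.array([rng.permutation(n_samples) for _ in range(d)]).T
          u = (strata + rng.random((n_samples, d))) / n_samples   # jitter within each stratum
          lo = np.array([b[0] for b in bounds])
          hi = np.array([b[1] for b in bounds])
          return lo + u * (hi - lo)

      def application_code(x):               # stand-in for the user's simulation code
          temperature, conductivity = x
          return 0.3 * temperature + 2.0 * conductivity**2 + rng.normal(0.0, 0.1)

      X = latin_hypercube(50, bounds=[(300.0, 400.0), (0.5, 1.5)])
      y = np.array([application_code(x) for x in X])

      # quadratic response surface by least squares (a simple stand-in for MARS)
      A = np.column_stack([np.ones(len(X)), X, X**2])
      coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
      print(coeffs)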

  4. A Distributed Computing Infrastructure for Computational Thermodynamic Calculations of Solid-Liquid Phase Equilibria

    NASA Astrophysics Data System (ADS)

    Ghiorso, M. S.; Kress, V. C.

    2004-12-01

    routines is being accessed. Fourth, the flexibility of calling library functions means that the client has more control over the configuration and output of the MELTS calculation. Fifth, if the client computer is a multi-processor compute cluster capable of issuing parallel requests to the MELTS "remote" library, then these requests may be in turn parallelized to the server compute cluster to enhance throughput and performance. Application of this computational model to fluid dynamical simulations of melting and transport in the Earth's mantle is envisioned. Further information and example clients for utilizing the current prototype library for distributed computing applications can be found at http://melts.uchicago.edu.

  5. State space representations of distributed fluid line dynamics

    NASA Technical Reports Server (NTRS)

    Yao, H.; Goodson, R. E.; Leonard, R. G.

    1974-01-01

    The purpose of this paper is to demonstrate the convenience of using a systematic, straightforward procedure to obtain meaningful dynamic information for a class of complex distributed parameter fluid line networks. System transients in the time domain are determined by means of state space techniques. Digital computer implementation yields a simple but consistent way of obtaining overall system time solutions. A step-by-step analysis procedure flow chart is shown in Appendix I which illustrates the basic approach for modeling, approximating and selecting digital techniques for simulating the dynamic response of fluid line systems.
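
    The time-domain procedure can be illustrated with a minimal sketch, assuming an already-derived lumped state-space approximation of a line segment; the matrices, time step, and step input below are entirely placeholder values, not a derived fluid line model.

      import numpy as np
      from scipy.linalg import expm

      A = np.array([[0.0, 1.0], [-400.0, -4.0]])     # placeholder second-order line mode
      B = np.array([[0.0], [400.0]])
      dt = 1e-3
      Ad = expm(A * dt)                               # exact discretization for piecewise-constant input
      Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B     # zero-order-hold input matrix

      x = np.zeros((2, 1))
      response = []
      for k in range(500):
          u = np.array([[1.0]])                       # step in upstream pressure (placeholder input)
          x = Ad @ x + Bd @ u
          response.append(x[0, 0])
      print(response[-1])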

  6. Computation in Dynamically Bounded Asymmetric Systems

    PubMed Central

    Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney

    2015-01-01

    Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable ‘expansion’ dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems. PMID:25617645
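
    A toy simulation of such an asymmetrically connected linear-threshold network is sketched below; the weight statistics and drive are arbitrary placeholders chosen so the dynamics settle, not parameters from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      N = 20
      W = 0.2 * rng.standard_normal((N, N))   # asymmetric recurrent weights (W != W.T)
      np.fill_diagonal(W, 0.0)
      b = rng.standard_normal(N)              # external drive
      x = np.zeros(N)
      dt = 0.01

      for _ in range(5000):
          rate = np.maximum(x, 0.0)           # linear-threshold activation, unbounded above
          x = x + dt * (-x + W @ rate + b)    # leaky dynamics with recurrent input

      print(np.round(x, 2))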

  7. The ICAAP Project, Part Three: OSF Distributed Computing Environment.

    ERIC Educational Resources Information Center

    Cantor, Scott

    1997-01-01

    DCE (Distributed Computing Environment) is a collection of services, tools, and libraries for building the infrastructure necessary for distributed computing within an enterprise. This article discusses the Open Software Foundation (OSF); the components of DCE, including the Directory and Security Services, the Distributed Time Service, and the…

  8. A computational model for dynamic vision

    NASA Technical Reports Server (NTRS)

    Moezzi, Saied; Weymouth, Terry E.

    1990-01-01

    This paper describes a novel computational model for dynamic vision which promises to be both powerful and robust. Furthermore the paradigm is ideal for an active vision system where camera vergence changes dynamically. Its basis is the retinotopically indexed object-centered encoding of the early visual information. Specifically, the relative distances of objects to a set of referents is encoded in image registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integration of depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.

  9. Arterioportal shunts on dynamic computed tomography

    SciTech Connect

    Nakayama, T.; Hiyama, Y.; Ohnishi, K.; Tsuchiya, S.; Kohno, K.; Nakajima, Y.; Okuda, K.

    1983-05-01

    Thirty-two patients, 20 with hepatocellular carcinoma and 12 with liver cirrhosis, were examined by dynamic computed tomography (CT) using intravenous bolus injection of contrast medium and by celiac angiography. Dynamic CT disclosed arterioportal shunting in four cases of hepatocellular carcinoma and in one of cirrhosis. In three of the former, the arterioportal shunt was adjacent to a mass lesion on CT, suggesting tumor invasion into the portal branch. In one with hepatocellular carcinoma, the shunt was remote from the mass. In the case with cirrhosis, there was no mass. In these last two cases, the shunt might have been caused by prior percutaneous needle puncture. In another case of hepatocellular carcinoma, celiac angiography but not CT demonstrated an arterioportal shunt. Thus, dynamic CT was diagnostic in five of six cases of arteriographically demonstrated arterioportal shunts.

  10. Computational fluid dynamics uses in fluid dynamics/aerodynamics education

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    1994-01-01

    The field of computational fluid dynamics (CFD) has advanced to the point where it can now be used for the purpose of fluid dynamics physics education. Because of the tremendous wealth of information available from numerical simulation, certain fundamental concepts can be efficiently communicated using an interactive graphical interrogation of the appropriate numerical simulation data base. In other situations, a large amount of aerodynamic information can be communicated to the student by interactive use of simple CFD tools on a workstation or even in a personal computer environment. The emphasis in this presentation is to discuss ideas for how this process might be implemented. Specific examples, taken from previous publications, will be used to highlight the presentation.

  11. Inverse dynamics: Simultaneous trajectory tracking and vibration reduction with distributed actuators

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh; Bayo, Eduardo

    1993-01-01

    This paper addresses the problem of inverse dynamics for articulated flexible structures with both lumped and distributed actuators. This problem arises, for example, in the combined vibration minimization and trajectory control of space robots and structures. A new inverse dynamics scheme for computing the nominal lumped and distributed inputs for tracking a prescribed trajectory is given.

  12. Inverse dynamics: Simultaneous trajectory tracking and vibration reduction with distributed actuators

    NASA Astrophysics Data System (ADS)

    Devasia, Santosh; Bayo, Eduardo

    1993-02-01

    This paper addresses the problem of inverse dynamics for articulated flexible structures with both lumped and distributed actuators. This problem arises, for example, in the combined vibration minimization and trajectory control of space robots and structures. A new inverse dynamics scheme for computing the nominal lumped and distributed inputs for tracking a prescribed trajectory is given.

  13. Computational Fluid Dynamics of rising droplets

    SciTech Connect

    Wagner, Matthew; Francois, Marianne M.

    2012-09-05

    The main goal of this study is to perform simulations of droplet dynamics using Truchas, a LANL-developed computational fluid dynamics (CFD) software, and compare them to a computational study of Hysing et al. [IJNMF, 2009, 60:1259]. Understanding droplet dynamics is of fundamental importance in liquid-liquid extraction, a process used in the nuclear fuel cycle to separate various components. Simulations of a single droplet rising by buoyancy are conducted in two dimensions. Multiple parametric studies are carried out to ensure the problem set-up is optimized. An Interface Smoothing Length (ISL) study and mesh resolution study are performed to verify convergence of the calculations. ISL is a parameter for the interface curvature calculation. Further, wall effects are investigated and checked against existing correlations. The ISL study found that the optimal ISL value is 2.5Δx, with Δx being the mesh cell spacing. The mesh resolution study found that the optimal mesh resolution is d/h = 40, where d is the drop diameter and h = Δx. In order for wall effects on terminal velocity to be insignificant, a conservative wall width of 9d or a nonconservative wall width of 7d can be used. The percentage difference between Hysing et al. [IJNMF, 2009, 60:1259] and Truchas for the velocity profiles varies from 7.9% to 9.9%. The computed droplet velocity and interface profiles are found in agreement with the study. The CFD calculations are performed on multiple cores, using LANL's Institutional High Performance Computing.

  14. Development of Distributed Computing Systems Software Design Methodologies.

    DTIC Science & Technology

    1982-11-05

    Final report: Development of Distributed Computing Systems Software Design Methodologies, Northwestern University, Evanston, IL; author Stephen S. Yau.

  15. Determination of eigenvalues of dynamical systems by symbolic computation

    NASA Technical Reports Server (NTRS)

    Howard, J. C.

    1982-01-01

    A symbolic computation technique for determining the eigenvalues of dynamical systems is described wherein algebraic operations, symbolic differentiation, matrix formulation and inversion, etc., can be performed on a digital computer equipped with a formula-manipulation compiler. An example is included that demonstrates the facility with which the system dynamics matrix and the control distribution matrix from the state space formulation of the equations of motion can be processed to obtain eigenvalue loci as a function of a system parameter. The example chosen to demonstrate the technique is a fourth-order system representing the longitudinal response of a DC-8 aircraft to elevator inputs. This simplified system has two dominant modes, one of which is lightly damped and the other well damped. The loci may be used to determine the value of the controlling parameter that satisfies design requirements. The results were obtained using the MACSYMA symbolic manipulation system.
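
    The same idea can be sketched today with SymPy in place of MACSYMA; the 2x2 matrix and gain parameter below are illustrative stand-ins, not the fourth-order DC-8 longitudinal model.

      import numpy as np
      import sympy as sp

      k, lam = sp.symbols('k lam')
      A = sp.Matrix([[0, 1],
                     [-2 - k, -sp.Rational(1, 2)]])   # placeholder dynamics with control gain k
      char_poly = A.charpoly(lam).as_expr()           # symbolic characteristic polynomial
      roots = sp.solve(char_poly, lam)                # eigenvalues as functions of k

      for k_val in np.linspace(0.0, 5.0, 6):          # eigenvalue loci versus the parameter
          eigs = [complex(sp.N(r.subs(k, k_val))) for r in roots]
          print(f"k = {k_val:.1f}: {eigs}")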

  16. Distributing an executable job load file to compute nodes in a parallel computer

    DOEpatents

    Gooding, Thomas M.

    2016-08-09

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.

  17. Distributing an executable job load file to compute nodes in a parallel computer

    DOEpatents

    Gooding, Thomas M.

    2016-09-13

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
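
    A toy rendering of the class-route idea common to the two records above is sketched here; the tree shape, node identifiers, and job membership are illustrative placeholders rather than the patented mechanism itself.

      def build_class_route(tree, participants, node=0):
          """tree: dict parent -> list of children; returns the set of nodes on the route."""
          route = set()
          for child in tree.get(node, []):
              route |= build_class_route(tree, participants, child)
          if node in participants or route:
              route.add(node)      # node joins because it, or a descendant, participates
          return route

      def broadcast_load_file(tree, route, load_file, node=0):
          if node not in route:
              return
          print(f"node {node} receives {load_file}")
          for child in tree.get(node, []):
              broadcast_load_file(tree, route, load_file, child)

      tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
      route = build_class_route(tree, participants={3, 6})
      broadcast_load_file(tree, route, "job.elf")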

  18. Computational stability analysis of dynamical systems

    NASA Astrophysics Data System (ADS)

    Nikishkov, Yuri Gennadievich

    2000-10-01

    Due to increased available computer power, the analysis of nonlinear flexible multi-body systems, fixed-wing aircraft and rotary-wing vehicles is relying on increasingly complex, large scale models. An important aspect of the dynamic response of flexible multi-body systems is the potential presence of instabilities. Stability analysis is typically performed on simplified models with the smallest number of degrees of freedom required to capture the physical phenomena that cause the instability. The system stability boundaries are then evaluated using the characteristic exponent method or Floquet theory for systems with constant or periodic coefficients, respectively. As the number of degrees of freedom used to represent the system increases, these methods become increasingly cumbersome, and quickly unmanageable. In this work, a novel approach is proposed, the Implicit Floquet Analysis, which evaluates the largest eigenvalues of the transition matrix using the Arnoldi algorithm, without the explicit computation of this matrix. This method is far more computationally efficient than the classical approach and is ideally suited for systems involving a large number of degrees of freedom. The proposed approach is conveniently implemented as a postprocessing step to any existing simulation tool. The application of the method to a geometrically nonlinear multi-body dynamics code is presented. This work also focuses on the implementation of trimming algorithms and the development of tools for the graphical representation of numerical simulations and stability information for multi-body systems.
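
    The core idea can be sketched with an off-the-shelf Arnoldi solver: the largest Floquet multipliers are eigenvalues of the monodromy (transition) matrix, accessed only through matrix-vector products, each of which is one period of time integration. The small random periodic system below is a placeholder for a multibody model.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.sparse.linalg import LinearOperator, eigs

      rng = np.random.default_rng(0)
      n = 8                                    # number of states (placeholder model size)
      omega = 2.0 * np.pi
      T = 2.0 * np.pi / omega                  # period of the coefficients
      A0 = -0.5 * np.eye(n) + 0.2 * rng.standard_normal((n, n))
      A1 = 0.3 * rng.standard_normal((n, n))

      def rhs(t, y):
          return (A0 + A1 * np.cos(omega * t)) @ y

      def monodromy_matvec(v):
          """One period of integration = action of the transition matrix on v."""
          v = np.asarray(v, dtype=float).ravel()
          return solve_ivp(rhs, (0.0, T), v, rtol=1e-8, atol=1e-10).y[:, -1]

      M = LinearOperator((n, n), matvec=monodromy_matvec, dtype=float)
      multipliers = eigs(M, k=3, which="LM", return_eigenvectors=False)
      print("largest Floquet multipliers:", multipliers)   # any |mu| > 1 signals instability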

  19. Accommodating Heterogeneity in a Debugger for Distributed Computations

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Cheng, Doreen; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In an ongoing project at NASA Ames Research Center, we are building a debugger for distributed computations running on a heterogeneous set of machines. Historically, such debuggers have been built as front-ends to existing source-level debuggers on the target platforms. In effect, these back-end debuggers are providing a collection of debugger services to a client. The major drawback is that because of inconsistencies among the back-end debuggers, the front-end must use a different protocol when talking to each back-end debugger. This can make the front-end quite complex. We have avoided this complexity problem by defining the client-server debugger protocol. While it does require vendors to adapt their existing debugger code to meet the protocol, vendors are generally interested in doing so because the approach has several advantages. In addition to solving the heterogeneous platform debugging problem, it will be possible to write interesting debugger user interfaces that can be easily ported across a variety of machines. This will likely encourage investment in application-domain specific debuggers. In fact, the user interface of our debugger will be geared to scientists developing computational fluid dynamics codes. This paper describes some of the problems encountered in developing a portable debugger for heterogeneous, distributed computing and how the architecture of our debugger avoids them. It then provides a detailed description of the debugger client-server protocol. Some of the more interesting attributes of the protocol are: (1) It is object-oriented; (2) It uses callback functions to capture the asynchronous nature of debugging in a procedural fashion; (3) It contains abstractions, such as in-line instrumentation, for the debugging of computationally intensive programs; (4) For remote debugging, it has operations that enable the implementor to optimize message passing traffic between client and server. The soundness of the protocol is being tested through

  20. Time Delay Systems with Distribution Dependent Dynamics

    DTIC Science & Technology

    2006-05-10

    sensitivity function for general nonlinear ordinary differential equations (ODEs) in a Banach space. Here we only show the construction of the abstract...shear: A nonlinear stick-slip formulation. CRSC-TR06-07, February, 2006; Differential Equations and Nonlinear Mechanics. Banks, H.T. and H.K. Nguyen (to...dependent dynamical system (in this case a complicated system of partial differential equations) for which the distribution PL must be estimated in some

  1. Distributed computing environments for future space control systems

    NASA Technical Reports Server (NTRS)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  2. A uniform approach for programming distributed heterogeneous computing systems

    PubMed Central

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-01-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015

  3. A uniform approach for programming distributed heterogeneous computing systems.

    PubMed

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-12-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.

  4. Computational fluid dynamics in cardiovascular disease.

    PubMed

    Lee, Byoung-Kwon

    2011-08-01

    Computational fluid dynamics (CFD) is a mechanical engineering field for analyzing fluid flow, heat transfer, and associated phenomena, using computer-based simulation. CFD is a widely adopted methodology for solving complex problems in many modern engineering fields. The merit of CFD is developing new and improved devices and system designs, and optimization is conducted on existing equipment through computational simulations, resulting in enhanced efficiency and lower operating costs. However, in the biomedical field, CFD is still emerging. The main reason why CFD in the biomedical field has lagged behind is the tremendous complexity of human body fluid behavior. Recently, CFD biomedical research is more accessible, because high performance hardware and software are easily available with advances in computer science. All CFD processes contain three main components to provide useful information, such as pre-processing, solving mathematical equations, and post-processing. Initial accurate geometric modeling and boundary conditions are essential to achieve adequate results. Medical imaging, such as ultrasound imaging, computed tomography, and magnetic resonance imaging can be used for modeling, and Doppler ultrasound, pressure wire, and non-invasive pressure measurements are used for flow velocity and pressure as a boundary condition. Many simulations and clinical results have been used to study congenital heart disease, heart failure, ventricle function, aortic disease, and carotid and intra-cranial cerebrovascular diseases. With decreasing hardware costs and rapid computing times, researchers and medical scientists may increasingly use this reliable CFD tool to deliver accurate results. A realistic, multidisciplinary approach is essential to accomplish these tasks. Indefinite collaborations between mechanical engineers and clinical and medical scientists are essential. CFD may be an important methodology to understand the pathophysiology of the development and

  5. Shuttle rocket booster computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chung, T. J.; Park, O. Y.

    1988-01-01

    Additional results and a revised and improved computer program listing from the shuttle rocket booster computational fluid dynamics formulations are presented. Numerical calculations for the flame zone of solid propellants are carried out using the Galerkin finite elements, with perturbations expanded to the zeroth, first, and second orders. The results indicate that amplification of oscillatory motions does indeed prevail in high frequency regions. For the second order system, the trend is similar to the first order system for low frequencies, but instabilities may appear at frequencies lower than those of the first order system. The most significant effect of the second order system is that the admittance is extremely oscillatory between moderately high frequency ranges.

  6. Computational fluid dynamics: Transition to design applications

    NASA Technical Reports Server (NTRS)

    Bradley, R. G.; Bhateley, I. C.; Howell, G. A.

    1987-01-01

    The development of aerospace vehicles, over the years, was an evolutionary process in which engineering progress in the aerospace community was based, generally, on prior experience and data bases obtained through wind tunnel and flight testing. Advances in the fundamental understanding of flow physics, wind tunnel and flight test capability, and mathematical insights into the governing flow equations were translated into improved air vehicle design. The modern day field of Computational Fluid Dynamics (CFD) is a continuation of the growth in analytical capability and the digital mathematics needed to solve the more rigorous form of the flow equations. Some of the technical and managerial challenges that result from rapidly developing CFD capabilities, some of the steps being taken by the Fort Worth Division of General Dynamics to meet these challenges, and some of the specific areas of application for high performance air vehicles are presented.

  7. Verification of computer users using keystroke dynamics.

    PubMed

    Obaidat, M S; Sadoun, B

    1997-01-01

    This paper presents techniques to verify the identity of computer users from the keystroke dynamics of the user's login string, treated as characteristic patterns, using pattern recognition and neural network techniques. This work is a continuation of our previous work where only interkey times were used as features for identifying computer users. In this work we used the key hold times for classification and then compared the performance with the former interkey time-based technique. Then we use the combined interkey and hold times for the identification process. We applied several neural network and pattern recognition algorithms for verifying computer users as they type their password phrases. It was found that hold times are more effective than interkey times and the best identification performance was achieved by using both time measurements. An identification accuracy of 100% was achieved when the combined hold and interkey times were used as features with the fuzzy ARTMAP, radial basis function networks (RBFN), and learning vector quantization (LVQ) neural network paradigms. Other neural network and classical pattern recognition algorithms such as backpropagation with a sigmoid transfer function (BP, Sigm), hybrid sum-of-products (HSOP), sum-of-products (SOP), potential function and Bayes' rule algorithms gave moderate performance.
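
    The two kinds of timing features compared above can be illustrated with a short sketch; the event format and the press-to-press definition of the interkey interval are assumptions made for illustration, and the classifier stage (ARTMAP, RBFN, LVQ, ...) is omitted.

      def keystroke_features(events):
          """events: list of (key, press_time, release_time) in seconds, in typing order."""
          hold_times = [release - press for _, press, release in events]
          interkey_times = [events[i + 1][1] - events[i][1] for i in range(len(events) - 1)]
          return hold_times + interkey_times        # combined feature vector

      events = [("p", 0.00, 0.08), ("a", 0.15, 0.22), ("s", 0.31, 0.40), ("s", 0.47, 0.55)]
      print(keystroke_features(events))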

  8. Domain decomposition algorithms and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    Some of the new domain decomposition algorithms are applied to two model problems in computational fluid dynamics: the two-dimensional convection-diffusion problem and the incompressible driven cavity flow problem. First, a brief introduction to the various approaches of domain decomposition is given, and a survey of domain decomposition preconditioners for the operator on the interface separating the subdomains is then presented. For the convection-diffusion problem, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is examined.
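
    A textbook illustration of the divide-and-conquer idea for the first model problem is an alternating overlapping Schwarz iteration on a 1-D convection-diffusion equation; the sketch below is illustrative only and is not one of the interface preconditioners surveyed in the paper.

      import numpy as np

      N, c = 101, 20.0                        # grid points and convection coefficient (placeholders)
      x = np.linspace(0.0, 1.0, N)
      h = x[1] - x[0]
      f = np.ones(N)                          # right-hand side of -u'' + c u' = f, u(0) = u(1) = 0

      def solve_subdomain(u, lo, hi):
          """Direct solve on interior points lo+1..hi-1 with u[lo], u[hi] as Dirichlet data."""
          m = hi - lo - 1
          A = np.zeros((m, m))
          b = f[lo + 1:hi].copy()
          for i in range(m):
              A[i, i] = 2.0 / h**2
              if i > 0:
                  A[i, i - 1] = -1.0 / h**2 - c / (2.0 * h)
              if i < m - 1:
                  A[i, i + 1] = -1.0 / h**2 + c / (2.0 * h)
          b[0] += (1.0 / h**2 + c / (2.0 * h)) * u[lo]
          b[-1] += (1.0 / h**2 - c / (2.0 * h)) * u[hi]
          u[lo + 1:hi] = np.linalg.solve(A, b)

      u = np.zeros(N)
      left, right = (0, 60), (40, N - 1)      # two subdomains overlapping on points 40..60
      for _ in range(20):                     # Schwarz sweeps: each solve uses the other's latest data
          solve_subdomain(u, *left)
          solve_subdomain(u, *right)
      print(u.max())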

  9. Efficient quantum computing of complex dynamics.

    PubMed

    Benenti, G; Casati, G; Montangero, S; Shepelyansky, D L

    2001-11-26

    We propose a quantum algorithm which uses the number of qubits in an optimal way and efficiently simulates a physical model with rich and complex dynamics described by the quantum sawtooth map. The numerical study of the effect of static imperfections in the quantum computer hardware shows that the main elements of the phase space structures are accurately reproduced up to a time scale which is polynomial in the number of qubits. The errors generated by these imperfections are more significant than the errors of random noise in gate operations.

  10. Computational Fluid Dynamics Symposium on Aeropropulsion

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Recognizing the considerable advances that have been made in computational fluid dynamics, the Internal Fluid Mechanics Division of NASA Lewis Research Center sponsored this symposium with the objective of providing a forum for exchanging information regarding recent developments in numerical methods, physical and chemical modeling, and applications. This conference publication is a compilation of 4 invited and 34 contributed papers presented in six sessions: algorithms one and two, turbomachinery, turbulence, components application, and combustors. Topics include numerical methods, grid generation, chemically reacting flows, turbulence modeling, inlets, nozzles, and unsteady flows.

  11. Computational Fluid Dynamics Technology for Hypersonic Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2003-01-01

    Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state-of-art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.

  12. Optimization of an interactive distributive computer network

    NASA Technical Reports Server (NTRS)

    Frederick, V.

    1985-01-01

    The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.

  13. Equation solvers for distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    A large number of scientific and engineering problems require the rapid solution of large systems of simultaneous equations. The performance of parallel computers in this area now dwarfs traditional vector computers by nearly an order of magnitude. This talk describes the major issues involved in parallel equation solvers with particular emphasis on the Intel Paragon, IBM SP-1 and SP-2 processors.

  14. Direct modeling for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct

  15. Comparison of TCP automatic tuning techniques for distributed computing

    SciTech Connect

    Weigle, E. H.; Feng, W. C.

    2002-01-01

    Rather than painful, manual, static, per-connection optimization of TCP buffer sizes simply to achieve acceptable performance for distributed applications, many researchers have proposed techniques to perform this tuning automatically. This paper first discusses the relative merits of the various approaches in theory, and then provides substantial experimental data concerning two competing implementations - the buffer autotuning already present in Linux 2.4.x and 'Dynamic Right-Sizing.' This paper reveals heretofore unknown aspects of the problem and current solutions, provides insight into the proper approach for various circumstances, and points toward ways to further improve performance. TCP, for good or ill, is the only protocol widely available for reliable end-to-end congestion-controlled network communication, and thus it is the one used for almost all distributed computing. Unfortunately, TCP was not designed with high-performance computing in mind - its original design decisions focused on long-term fairness first, with performance a distant second. Thus users must often perform tortuous manual optimizations simply to achieve acceptable behavior. The most important and often most difficult task is determining and setting appropriate buffer sizes. Because of this, at least six ways of automatically setting these sizes have been proposed. In this paper, we compare and contrast these tuning methods. First we explain each method, followed by an in-depth discussion of their features. Next we discuss the experiments to fully characterize two particularly interesting methods (Linux 2.4 autotuning and Dynamic Right-Sizing). We conclude with results and possible improvements.
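
    The core of the manual tuning being automated away can be stated in a few lines: size the socket buffers to at least one bandwidth-delay product. The path figures below are assumed examples, and real kernels may clamp or rescale the requested values.

      import socket

      bandwidth_bps = 1_000_000_000        # 1 Gb/s path (assumed)
      rtt_s = 0.05                         # 50 ms round-trip time (assumed)
      bdp_bytes = int(bandwidth_bps / 8 * rtt_s)   # ~6.25 MB needed to keep the pipe full

      sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
      sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
      print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))   # kernel may clamp or double this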

  16. Distributed Computing Environment: An Architecture For Supporting Change?

    DTIC Science & Technology

    1995-11-01

    Distributed Computing Environment (DCE) has been in development for about five years but has only been widely used in the last two years. It consists...these services form an architecture for distributed computing that enables users to carry out the new, cheaper operations they require with the

  17. Distributed Computing: Considerations for Its Use within Educational Environments.

    ERIC Educational Resources Information Center

    Pratt, S. J.

    1985-01-01

    Emphasizing more effective use of existing equipment, this article highlights distributed computing design considerations applicable to educational environments; identifies potential roles of networking in the provision of adequate teaching aids; presents a networking model; and describes the development of a distributed computing configuration at…

  18. Distributed-Computer System Optimizes SRB Joints

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1991-01-01

    Initial calculations for the redesign of the joint on the solid rocket booster (SRB) that failed during the Space Shuttle tragedy showed that the redesign increased weight. Optimization techniques were applied to determine whether weight could be reduced while keeping the joint closed and limiting stresses. An analysis system was developed using existing software, coupling structural analysis with optimization computations. The software was designed to be executable on a network of computer workstations. It took advantage of the parallelism offered by the finite-difference technique of computing gradients, enabling several workstations to contribute simultaneously to the solution of the problem. Key features are the effective use of redundancies in hardware and flexible software, enabling optimization to proceed with minimal delay and decreased overall time to completion.
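
    The parallelism exploited here can be sketched as follows: each forward-difference gradient component requires an independent perturbed analysis, so the analyses can be farmed out to separate workers (processes standing in for networked workstations). The objective function is a placeholder, not the SRB joint structural analysis.

      from multiprocessing import Pool
      import numpy as np

      def structural_analysis(design):           # placeholder for the expensive analysis code
          return float(np.sum(design**2) + 0.1 * np.sin(design).sum())

      def fd_gradient(design, h=1e-6, workers=4):
          f0 = structural_analysis(design)
          perturbed = [design + h * np.eye(len(design))[i] for i in range(len(design))]
          with Pool(workers) as pool:
              f_plus = pool.map(structural_analysis, perturbed)   # one analysis per worker
          return (np.array(f_plus) - f0) / h

      if __name__ == "__main__":
          print(fd_gradient(np.array([1.0, 2.0, 3.0, 4.0])))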

  19. Spatiotemporal dynamics of distributed synthetic genetic circuits

    NASA Astrophysics Data System (ADS)

    Kanakov, Oleg; Laptyeva, Tetyana; Tsimring, Lev; Ivanchenko, Mikhail

    2016-04-01

    We propose and study models of two distributed synthetic gene circuits, a toggle switch and an oscillator, each split between two cell strains and coupled via quorum-sensing signals. The distributed toggle switch relies on mutual repression of the two strains, and the oscillator is composed of two strains, one of which acts as an activator for the other, which in turn acts as a repressor. The distributed toggle switch can exhibit mobile fronts, switching the system from the weaker to the stronger spatially homogeneous state. The circuit can also act as a biosensor, with the switching front dynamics determined by the properties of an external signal. The distributed oscillator system displays another biosensor functionality: oscillations emerge once a small amount of one cell strain appears amid the other, present in abundance. Distribution of synthetic gene circuits among multiple strains allows one to reduce crosstalk among different parts of the overall system and also decreases the energetic burden of the synthetic circuit per cell, which may allow for enhanced functionality and viability of engineered cells.
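
    A well-mixed (non-spatial) caricature of the distributed toggle switch can be written as four ODEs, one repressor and one quorum-sensing signal per strain; the functional forms and parameters below are placeholders, not the models analyzed in the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      a, d, k, g, n = 10.0, 1.0, 1.0, 1.0, 2.0   # production, decay, signal rates, Hill exponent

      def rhs(t, y):
          x1, x2, s1, s2 = y                     # repressor levels and quorum signals per strain
          return [a / (1 + s2**n) - d * x1,      # strain 1 repressed by strain 2's signal
                  a / (1 + s1**n) - d * x2,      # strain 2 repressed by strain 1's signal
                  k * x1 - g * s1,
                  k * x2 - g * s2]

      sol = solve_ivp(rhs, (0, 50), [5.0, 0.1, 0.0, 0.0])
      print(sol.y[:2, -1])                       # one strain "wins": bistable switch behaviour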

  20. Configuring computation tree topologies on a distributed computing system

    SciTech Connect

    Woei Lin; Chuan-lin Wu

    1983-01-01

    The authors describe an approach to connecting hardware resources for high-performance computation. Two basic algorithms are designed for configuring binary tree topologies. The configuring command can be issued from any processing node. The algorithms can select proper nodes for connection while maintaining good utilization of processing nodes. 7 references.

  1. Visualization of Unsteady Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The current computing environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamic (CFD) results is a super-computer class machine. The Massively Parallel Processors (MPP's) such as the 160 node IBM SP2 at NAS and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array and the J90 cluster) provide the required computation bandwidth for CFD calculations of transient problems. If we follow the traditional computational analysis steps for CFD (and we wish to construct an interactive visualizer) we need to be aware of the following: (1) Disk space requirements. A single snap-shot must contain at least the values (primitive variables) stored at the appropriate locations within the mesh. For most simple 3D Euler solvers that means 5 floating point words. Navier-Stokes solutions with turbulence models may contain 7 state-variables. (2) Disk speed vs. computational speeds. The time required to read the complete solution of a saved time frame from disk is now longer than the compute time for a set number of iterations from an explicit solver. Depending on the hardware and solver, an iteration of an implicit code may also take less time than reading the solution from disk. If one examines the performance improvements in the last decade or two, it is easy to see that relying on disk performance (vs. CPU improvement) may not be the best route to enhancing interactivity. (3) Cluster and parallel machine I/O problems. Disk access time is much worse within current parallel machines and clusters of workstations that are acting in concert to solve a single problem. In this case we are not trying to read the volume of data, but are running the solver and the solver outputs the solution. These traditional network interfaces must be used for the file system. (4) Numerics of particle traces. Most visualization tools can work upon a single snapshot of the data but some visualization tools for transient

  2. Computational fluid dynamics in coronary artery disease.

    PubMed

    Sun, Zhonghua; Xu, Lei

    2014-12-01

    Computational fluid dynamics (CFD) is a widely used method in mechanical engineering to solve complex problems by analysing fluid flow, heat transfer, and associated phenomena by using computer simulations. In recent years, CFD has been increasingly used in biomedical research of coronary artery disease because of the availability of high-performance hardware and software. CFD techniques have been applied to study cardiovascular haemodynamics through simulation tools to predict the behaviour of circulatory blood flow in the human body. CFD simulation based on 3D luminal reconstructions can be used to analyse the local flow fields and flow profiling due to changes of coronary artery geometry, thus identifying risk factors for development and progression of coronary artery disease. This review aims to provide an overview of the CFD applications in coronary artery disease, including biomechanics of atherosclerotic plaques, plaque progression and rupture, and regional haemodynamics relative to plaque location and composition. A critical appraisal is given to a more recently developed application, fractional flow reserve based on CFD computation, with regard to its diagnostic accuracy in the detection of haemodynamically significant coronary artery disease.

  3. Computation of the sampling distribution of coherence estimate.

    PubMed

    Nadarajah, Saralees; Kotz, Samuel

    2006-12-01

    The recent paper published by Miranda de Sa (2004) derived, for the first time, the sampling distribution of the coherence estimated between two signals. The paper also considered computational issues of the sampling distribution, using an approximate method. In this short note, we provide several one-line programs for the exact computation of various measures of the sampling distribution. The advantages of using these programs are discussed.
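
    For the special case of zero true coherence, the magnitude-squared coherence estimated from L independent segments has the closed-form null distribution P(estimate <= x) = 1 - (1 - x)^(L-1), which already supports one-line computations in the spirit of the note. The sketch below covers only this null case and is not a reproduction of the paper's programs; the general nonzero-coherence distribution is omitted:

        # Sampling distribution of the magnitude-squared coherence estimate under
        # the null hypothesis of zero true coherence, from L independent segments.

        def null_cdf(x, L):
            """P(coherence estimate <= x) when the true coherence is zero."""
            return 1.0 - (1.0 - x) ** (L - 1)

        def critical_value(alpha, L):
            """Threshold exceeded with probability alpha under the null."""
            return 1.0 - alpha ** (1.0 / (L - 1))

        L = 32
        print(critical_value(0.05, L))  # ~0.092: larger estimates are significant
        print(1.0 - null_cdf(0.2, L))   # p-value for an observed coherence of 0.2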

  4. Cardea: Dynamic Access Control in Distributed Systems

    NASA Technical Reports Server (NTRS)

    Lepro, Rebekah

    2004-01-01

    Modern authorization systems span domains of administration, rely on many different authentication sources, and manage complex attributes as part of the authorization process. This paper presents Cardea, a distributed system that facilitates dynamic access control, as a valuable piece of an inter-operable authorization framework. First, the authorization model employed in Cardea and its functionality goals are examined. Next, critical features of the system architecture and its handling of the authorization process are examined. Then the SAML and XACML standards, as incorporated into the system, are analyzed. Finally, the future directions of this project are outlined and connection points with general components of an authorization system are highlighted.

  5. Distributing Computer Resources in Education and Training.

    ERIC Educational Resources Information Center

    Bell, Wynne

    1982-01-01

    The future direction of computers in educational settings is the topic of speculation. It is noted that resources in education are so meagre that only practical ventures can be considered. Suggestions are made for stretching available resources and maximizing the benefits to be gained through the new technology. (MP)

  6. A comparison of queueing, cluster and distributed computing systems

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  7. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
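
    The divide-and-conquer idea is easiest to see on a one-dimensional model problem. The sketch below is a toy alternating-Schwarz iteration for -u'' = f with two overlapping subdomains, not the authors' backstep flow solver; the grid size and overlap are arbitrary choices:

        # Overlapping Schwarz domain decomposition for -u'' = 1, u(0) = u(1) = 0.
        import numpy as np

        n = 101
        h = 1.0 / (n - 1)
        f = np.ones(n)
        u = np.zeros(n)

        def solve_subdomain(u, lo, hi):
            """Direct solve of the Dirichlet subproblem on nodes lo..hi."""
            m = hi - lo - 1
            A = (np.diag(2.0 * np.ones(m))
                 - np.diag(np.ones(m - 1), 1)
                 - np.diag(np.ones(m - 1), -1))
            b = h * h * f[lo + 1:hi]
            b[0] += u[lo]      # boundary data taken from the current iterate
            b[-1] += u[hi]
            u[lo + 1:hi] = np.linalg.solve(A, b)

        for it in range(50):               # alternating sweeps over the subdomains
            solve_subdomain(u, 0, 60)      # subdomain 1: nodes 0..60
            solve_subdomain(u, 40, n - 1)  # subdomain 2: nodes 40..100 (overlap 40..60)

        print(u[n // 2])                   # approaches the exact value 0.125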

  8. Sawfishes stealth revealed using computational fluid dynamics.

    PubMed

    Bradney, D R; Davidson, A; Evans, S P; Wueringer, B E; Morgan, D L; Clausen, P D

    2017-02-27

    Detailed computational fluid dynamics simulations for the rostrum of three species of sawfish (Pristidae) revealed that negligible turbulent flow is generated from all rostra during lateral swipe prey manipulation and swimming. These results suggest that sawfishes are effective stealth hunters that may not be detected by their teleost prey's lateral line sensory system during pursuits. Moreover, during lateral swipes, the rostra were found to induce little velocity into the surrounding fluid. Consistent with previous data of sawfish feeding behaviour, these data indicate that the rostrum is therefore unlikely to be used to stir up the bottom to uncover benthic prey. Whilst swimming with the rostrum inclined at a small angle to the horizontal, the coefficient of drag of the rostrum is relatively low and the coefficient of lift is zero.

  9. Lectures series in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Thompson, Kevin W.

    1987-01-01

    The lecture notes cover the basic principles of computational fluid dynamics (CFD). They are oriented more toward practical applications than theory, and are intended to serve as a unified source for basic material in the CFD field as well as an introduction to more specialized topics in artificial viscosity and boundary conditions. Each chapter in the text is associated with a videotaped lecture. The basic properties of conservation laws, wave equations, and shock waves are described. The duality of the conservation law and wave representations is investigated, and shock waves are examined in some detail. Finite difference techniques are introduced for the solution of wave equations and conservation laws. Stability analysis for finite difference approximations is presented. A consistent description of artificial viscosity methods is provided. Finally, the problem of nonreflecting boundary conditions is treated.
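
    As a small illustration of the finite-difference material, the sketch below advances the linear advection equation u_t + a u_x = 0 with the Lax-Friedrichs scheme, whose built-in numerical dissipation acts as a form of artificial viscosity. The grid, CFL number, and initial pulse are hypothetical choices, not examples taken from the notes:

        # Lax-Friedrichs scheme for u_t + a u_x = 0 with periodic boundaries.
        import numpy as np

        a, nx, nt = 1.0, 200, 150
        dx = 1.0 / nx
        dt = 0.8 * dx / a               # CFL number 0.8, inside the stability limit
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square pulse

        for n in range(nt):
            up = np.roll(u, -1)         # u_{i+1}
            um = np.roll(u, 1)          # u_{i-1}
            u = 0.5 * (up + um) - 0.5 * a * dt / dx * (up - um)

        print(u.max(), u.min())         # extrema stay bounded (stable, but smeared)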

  10. A perspective of computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Kutler, P.

    1986-01-01

    Computational fluid dynamics (CFD) is maturing, and is at a stage in its technological life cycle in which it is now routinely applied to some rather complicated problems; it is starting to have an impact on the design cycle of aerospace flight vehicles and their components. CFD is also being used to better understand the fluid physics of flows heretofore not understood, such as three-dimensional separation. CFD is also being used to complement, and is being complemented by, experiments. In this paper, the primary and secondary pacing items that governed CFD in the past are reviewed and updated. The future prospects of CFD are explored, offering those working in the discipline challenges that should extend the technological life cycle and further increase the capabilities of a proven, demonstrated technology.

  11. Domain decomposition algorithms and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    In the past several years, domain decomposition has been a very popular topic, motivated in part by the potential for parallelization. While a large body of theory and algorithms has been developed for model elliptic problems, these methods are only recently starting to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics is investigated: two-dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator, to be used in the iterative solution of the interface system, are described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.
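
    The boundary probe idea admits a compact sketch: when the interface operator S is well approximated by a tridiagonal matrix, three suitably chosen probe vectors recover that approximation from matrix-vector products alone. The code below is a minimal illustration under that assumption; the test matrix is hypothetical, and this is not the paper's implementation:

        # Recover a tridiagonal approximation of an interface operator by probing.
        import numpy as np

        def probe_tridiagonal(apply_S, n):
            """Build tridiagonal T ~ S from three products S @ p_k."""
            T = np.zeros((n, n))
            for k in range(3):
                p = np.zeros(n)
                p[k::3] = 1.0           # probe vector: ones at positions k, k+3, ...
                y = apply_S(p)
                for j in range(k, n, 3):
                    for i in range(max(0, j - 1), min(n, j + 2)):
                        T[i, j] = y[i]  # only one probed column lies within the band
            return T

        n = 12
        S = (np.diag(2.0 * np.ones(n))
             - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1))
        T = probe_tridiagonal(lambda v: S @ v, n)
        print(np.allclose(S, T))        # exact recovery when S is truly tridiagonal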

  12. Protein Dynamics from NMR and Computer Simulation

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Kravchenko, Olga; Kemple, Marvin; Likic, Vladimir; Klimtchuk, Elena; Prendergast, Franklyn

    2002-03-01

    Proteins exhibit internal motions from the millisecond to sub-nanosecond time scale. The challenge is to relate these internal motions to biological function. A strategy to address this aim is to apply a combination of several techniques including high-resolution NMR, computer simulation of molecular dynamics (MD), molecular graphics, and finally molecular biology, the latter to generate appropriate samples. Two difficulties that arise are: (1) the time scale which is most directly biologically relevant (ms to μs) is not readily accessible by these techniques and (2) the techniques focus on local and not collective motions. We will outline methods using ¹³C-NMR to help alleviate the second problem, as applied to intestinal fatty acid binding protein, a relatively small intracellular protein believed to be involved in fatty acid transport and metabolism. This work is supported in part by PHS Grant GM34847 (FGP) and by a fellowship from the American Heart Association (QW).

  13. Artificial Intelligence In Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1991-01-01

    Paper compares four first-generation artificial-intelligence (AI) software systems for computational fluid dynamics. Includes: Expert Cooling Fan Design System (EXFAN), PAN AIR Knowledge System (PAKS), grid-adaptation program MITOSIS, and Expert Zonal Grid Generation (EZGrid). Focuses on knowledge-based ("expert") software systems. Analyzes intended tasks, kinds of knowledge possessed, magnitude of effort required to codify knowledge, how quickly constructed, performances, and return on investment. On basis of comparison, concludes AI most successful when applied to well-formulated problems solved by classifying or selecting preenumerated solutions. In contrast, application of AI to poorly understood or poorly formulated problems generally results in long development time and large investment of effort, with no guarantee of success.

  14. Computational modeling of intraocular gas dynamics

    NASA Astrophysics Data System (ADS)

    Noohi, P.; Abdekhodaie, M. J.; Cheng, Y. L.

    2015-12-01

    The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The presented model predicted intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. Both pure SF6 and SF6 diluted with air were considered as the injected gas. The presented results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF6 was used, no significant expansion was observed. Also, head positioning for the treatment of the retinal tear influenced the rate of gas absorption. Moreover, the determined tolerance angle depended on the bubble and tear size: greater bubble expansion and a smaller retinal tear yielded a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm, the tolerance angle using pure SF6 is 1.4 times greater than that using SF6 diluted with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency.

  15. Computational modeling of intraocular gas dynamics.

    PubMed

    Noohi, P; Abdekhodaie, M J; Cheng, Y L

    2015-12-18

    The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The presented model predicted intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. Both pure SF6 and SF6 diluted with air were considered as the injected gas. The presented results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF6 was used, no significant expansion was observed. Also, head positioning for the treatment of the retinal tear influenced the rate of gas absorption. Moreover, the determined tolerance angle depended on the bubble and tear size: greater bubble expansion and a smaller retinal tear yielded a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm, the tolerance angle using pure SF6 is 1.4 times greater than that using SF6 diluted with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency.

  16. Tactical Airborne Distributed Computing and Networks

    DTIC Science & Technology

    1981-10-01

    …identifiable, common goal. That goal may be the supplying of general-purpose computing support, a collection of integrated applications such as corporate… …in the air-to-air weapon system. Range of target was still being entered into the system from a subjective appreciation supplied by the pilot.

  17. Bioreactor studies and computational fluid dynamics.

    PubMed

    Singh, H; Hutmacher, D W

    2009-01-01

    The hydrodynamic environment "created" by bioreactors for the culture of a tissue engineered construct (TEC) is known to influence cell migration, proliferation and extracellular matrix production. However, tissue engineers have looked at bioreactors as black boxes within which TECs are cultured mainly by trial and error, as the complex relationship between the hydrodynamic environment and tissue properties remains elusive, yet is critical to the production of clinically useful tissues. It is well known in the chemical and biotechnology field that a more detailed description of fluid mechanics and nutrient transport within process equipment can be achieved via the use of computational fluid dynamics (CFD) technology. Hence, the coupling of experimental methods and computational simulations forms a synergistic relationship that can potentially yield greater and yet more cohesive data sets for bioreactor studies. This review aims at discussing the rationale of using CFD in bioreactor studies related to tissue engineering, as fluid flow processes and phenomena have direct implications on cellular response such as migration and/or proliferation. We conclude that CFD should be seen by tissue engineers as an invaluable tool allowing us to analyze and visualize the impact of fluidic forces and stresses on cells and TECs.

  18. Bioreactor Studies and Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Singh, H.; Hutmacher, D. W.

    The hydrodynamic environment “created” by bioreactors for the culture of a tissue engineered construct (TEC) is known to influence cell migration, proliferation and extracellular matrix production. However, tissue engineers have looked at bioreactors as black boxes within which TECs are cultured mainly by trial and error, as the complex relationship between the hydrodynamic environment and tissue properties remains elusive, yet is critical to the production of clinically useful tissues. It is well known in the chemical and biotechnology field that a more detailed description of fluid mechanics and nutrient transport within process equipment can be achieved via the use of computational fluid dynamics (CFD) technology. Hence, the coupling of experimental methods and computational simulations forms a synergistic relationship that can potentially yield greater and yet more cohesive data sets for bioreactor studies. This review aims at discussing the rationale of using CFD in bioreactor studies related to tissue engineering, as fluid flow processes and phenomena have direct implications on cellular response such as migration and/or proliferation. We conclude that CFD should be seen by tissue engineers as an invaluable tool allowing us to analyze and visualize the impact of fluidic forces and stresses on cells and TECs.

  19. Computational Fluid Dynamics - Applications in Manufacturing Processes

    NASA Astrophysics Data System (ADS)

    Beninati, Maria Laura; Kathol, Austin; Ziemian, Constance

    2012-11-01

    A new Computational Fluid Dynamics (CFD) exercise has been developed for the undergraduate introductory fluid mechanics course at Bucknell University. The goal is to develop a computational exercise that students complete, which links the manufacturing processes course and the concurrent fluid mechanics course in a way that reinforces the concepts in both. In general, CFD is used as a tool to increase student understanding of the fundamentals in a virtual world. A ``learning factory,'' which is currently in development at Bucknell, seeks to use the laboratory as a means to link courses that previously seemed to have little correlation at first glance. A large part of the manufacturing processes course is a project using an injection molding machine. The flow of pressurized molten polyurethane into the mold cavity can also be an example of fluid motion (a jet of liquid hitting a plate) that is applied in manufacturing. The students will run a CFD process that captures this flow using their virtual mold created with a graphics package, such as SolidWorks. The laboratory structure is currently being implemented and analyzed as a part of the ``learning factory''. Lastly, surveys taken before and after the CFD exercise demonstrate a better understanding of both the CFD and the manufacturing process.

  20. Computational Fluid Dynamics Modeling of Bacillus anthracis ...

    EPA Pesticide Factsheets

    Journal Article Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived from computed tomography (CT) or µCT images. Both models encompassed airways that extended from the external nose to the lung with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation-exhalation breathing conditions using average species-specific minute volumes. Four different exposure scenarios were modeled in the rabbit based upon experimental inhalation studies. For comparison, human simulations were conducted at the highest exposure concentration used during the rabbit experimental exposures. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Despite the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the upper conducting airways of the human at the same air concentration of anthrax spores. This greater deposition of spores in the upper airways in the human resulted in lower penetration and deposition in the tracheobronchial airways and the deep lung than that predict

  1. Nonlinear ship waves and computational fluid dynamics

    PubMed Central

    MIYATA, Hideaki; ORIHARA, Hideo; SATO, Yohei

    2014-01-01

    Research works undertaken in the first author’s laboratory at the University of Tokyo over the past 30 years are highlighted. Finding of the occurrence of nonlinear waves (named Free-Surface Shock Waves) in the vicinity of a ship advancing at constant speed provided the start-line for the progress of innovative technologies in the ship hull-form design. Based on these findings, a multitude of the Computational Fluid Dynamic (CFD) techniques have been developed over this period, and are highlighted in this paper. The TUMMAC code has been developed for wave problems, based on a rectangular grid system, while the WISDAM code treats both wave and viscous flow problems in the framework of a boundary-fitted grid system. These two techniques are able to cope with almost all fluid dynamical problems relating to ships, including the resistance, ship’s motion and ride-comfort issues. Consequently, the two codes have contributed significantly to the progress in the technology of ship design, and now form an integral part of the ship-designing process. PMID:25311139

  2. Nonlinear ship waves and computational fluid dynamics.

    PubMed

    Miyata, Hideaki; Orihara, Hideo; Sato, Yohei

    2014-01-01

    Research works undertaken in the first author's laboratory at the University of Tokyo over the past 30 years are highlighted. Finding of the occurrence of nonlinear waves (named Free-Surface Shock Waves) in the vicinity of a ship advancing at constant speed provided the start-line for the progress of innovative technologies in the ship hull-form design. Based on these findings, a multitude of the Computational Fluid Dynamic (CFD) techniques have been developed over this period, and are highlighted in this paper. The TUMMAC code has been developed for wave problems, based on a rectangular grid system, while the WISDAM code treats both wave and viscous flow problems in the framework of a boundary-fitted grid system. These two techniques are able to cope with almost all fluid dynamical problems relating to ships, including the resistance, ship's motion and ride-comfort issues. Consequently, the two codes have contributed significantly to the progress in the technology of ship design, and now form an integral part of the ship-designing process.

  3. The Design Methodology of Distributed Computer Systems.

    DTIC Science & Technology

    1980-12-01

    This remedies most of the drawbacks of the centralized approach. However, due to the inherent communication delay, the chosen control node may get an…alternative approach is the Bayesian approach advocated by Littlewood (LIT 79(B)). Here we postulate a prior distribution for each of 1, 2, …, j. Then…Chapter 2 describes a top-down development approach. The development process is divided into four successive phases: (1) requirement, and

  4. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, workstations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there has been a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources has increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, workstations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  5. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.

  6. A Computability Theory for Distributed Systems.

    DTIC Science & Technology

    1986-03-13

    the following two elementary properties. * [p] is an equivalence relation over system computations. * For z a prefix of V, there is an event on p…We note that the two conditions in the last sentence of the theorem are not exclusive. Observation 1. Any occurrence of "P" in a…

  7. A distributed computing model for telemetry data processing

    NASA Astrophysics Data System (ADS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  8. A distributed computing model for telemetry data processing

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
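
    The hybrid model can be caricatured in a few lines: a central hub covers the client-server distribution of telemetry, while peers forward synthesized values directly to one another. The Python sketch below is purely illustrative; the class and parameter names are hypothetical and do not represent the actual JSC protocol:

        # Hybrid client-server / peer-to-peer information sharing (illustrative).

        class Hub:                           # client-server half of the hybrid
            def __init__(self):
                self.subscribers = {}        # parameter name -> list of peers

            def subscribe(self, parameter, peer):
                self.subscribers.setdefault(parameter, []).append(peer)

            def publish(self, parameter, value):
                for peer in self.subscribers.get(parameter, []):
                    peer.receive(parameter, value)

        class Peer:                          # peer-to-peer half of the hybrid
            def __init__(self, name):
                self.name, self.data, self.neighbors = name, {}, []

            def receive(self, parameter, value):
                self.data[parameter] = value

            def share(self, parameter):      # forward a synthesized value directly
                for other in self.neighbors:
                    other.receive(parameter, self.data[parameter])

        hub, a, b = Hub(), Peer("prime"), Peer("backup")
        a.neighbors.append(b)
        hub.subscribe("cabin_pressure", a)
        hub.publish("cabin_pressure", 14.7)           # server-to-client path
        a.data["trend"] = "stable"; a.share("trend")  # peer-to-peer path
        print(b.data)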

  9. Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models

    EPA Science Inventory

    The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...

  10. Approximation modeling for the online performance management of distributed computing systems.

    PubMed

    Kusic, Dara; Kandasamy, Nagarajan; Jiang, Guofei

    2008-10-01

    A promising method of automating management tasks in computing systems is to formulate them as control or optimization problems in terms of performance metrics. For an online optimization scheme to be of practical value in a distributed setting, however, it must successfully tackle the curses of dimensionality and modeling. This paper develops a hierarchical control framework to solve performance management problems in distributed computing systems operating in a data center. Concepts from approximation theory are used to reduce the computational burden of controlling such large-scale systems. The relevant approximations are made in the construction of the dynamical models to predict system behavior and in the solution of the associated control equations. Using a dynamic resource-provisioning problem as a case study, we show that a computing system managed by the proposed control framework with approximation models realizes profit gains that are, in the best case, within 1% of a controller using an explicit model of the system.
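
    The flavor of the framework can be conveyed with a toy lookahead provisioner: forecast the workload with a cheap approximation model, then pick the server count that maximizes predicted profit over the horizon. Every model, price, and capacity below is a hypothetical stand-in for the paper's far more detailed formulation:

        # Toy limited-lookahead resource provisioning (illustrative numbers only).

        def predict_workload(history, horizon=3):
            """Crude approximation model: persistence forecast of the last sample."""
            return [history[-1]] * horizon

        def profit(servers, load, revenue_per_req=0.05, cost_per_server=2.0,
                   capacity=100):
            served = min(load, servers * capacity)
            return served * revenue_per_req - servers * cost_per_server

        def provision(history, max_servers=10):
            forecast = predict_workload(history)
            return max(range(1, max_servers + 1),
                       key=lambda k: sum(profit(k, load) for load in forecast))

        print(provision([350, 420, 510]))   # server count for the next interval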

  11. System Design for On-line Distributed Computational Visualization and Steering

    SciTech Connect

    Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S

    2006-01-01

    We propose a distributed computing framework for network-optimized visualization and steering of real-time scientific simulations and computations executed on a remote host, such as a workstation, cluster, or supercomputer. Unlike conventional "batch" simulations, this system enables: (i) monitoring of an ongoing remote computation using visualization tools, and (ii) on-line specification of simulation parameters to interactively steer remote computations. Using performance models for transport channels and visualization modules, we develop a dynamic programming method to optimize the realization of the visualization pipeline over a wide-area network to maximize the frame rate. We present experimental results to illustrate the effectiveness of this system.
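
    The dynamic-programming step can be rendered in simplified form: pipeline modules are placed, in order, along a chain of network nodes, and the recurrence minimizes the bottleneck stage time, whose reciprocal bounds the frame rate. The costs and topology below are hypothetical, and this is a simplified reading of the method rather than the paper's actual algorithm:

        # Map a visualization pipeline onto a node chain to minimize the bottleneck.

        def map_pipeline(c, d, s, b):
            """c[j]: compute cost of module j; d[j]: data from module j to j+1;
            s[i]: speed of node i; b[i]: bandwidth of link between nodes i, i+1."""
            m, k = len(c), len(s)
            INF = float("inf")
            best = [[INF] * k for _ in range(m)]
            best[0][0] = c[0] / s[0]          # the data source sits on node 0
            for j in range(1, m):
                for i in range(k):            # node hosting module j
                    for i2 in range(i + 1):   # node hosting module j - 1
                        link = max((d[j - 1] / b[l] for l in range(i2, i)),
                                   default=0.0)
                        cand = max(best[j - 1][i2], link, c[j] / s[i])
                        best[j][i] = min(best[j][i], cand)
            return best[m - 1][k - 1]         # display runs on the last node

        # simulation -> filter -> render across three nodes of decreasing speed
        print(map_pipeline(c=[4.0, 2.0, 1.0], d=[8.0, 1.0],
                           s=[4.0, 2.0, 1.0], b=[2.0, 10.0]))

    In this tiny example the optimum keeps the data-reducing filter on the fast source node, so the heavy 8-unit transfer never crosses the slow first link.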

  12. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  13. Actors: A Model of Concurrent Computation in Distributed Systems.

    DTIC Science & Technology

    1985-06-01

    Technical Report 844. Actors: A Model of Concurrent Computation in Distributed Systems. Gul A. Agha, MIT Artificial Intelligence Laboratory.

  14. Distributed sensor networks with collective computation

    SciTech Connect

    Lanman, D. R.

    2001-01-01

    Simulations of a network of N sensors have been performed. The simulation space contains a number of sound sources and a large number of sensors. Each sensor is equipped with an omni-directional microphone and is capable of measuring only the time of arrival of a signal. Sensors are able to wirelessly transmit and receive packets of information, and have some computing power. The sensors were programmed to merge all information (received packets as well as local measurements) into a 'world view' for that node. This world view is then transmitted. In this way, information can slowly diffuse across the network. One node was monitored in the network as a proxy for when information had diffused across the network. Simulations demonstrated that the energy expended per sensor per time step was approximately independent of N.
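
    The diffusion mechanism invites a compact simulation: each node's "world view" is the set of nodes it has heard from, views merge along radio links once per time step, and a monitored node signals when information has spread everywhere. The parameters below (100 nodes, unit square, 0.2 radio range) are hypothetical, not the paper's setup:

        # Gossip-style diffusion of world views across a random sensor field.
        import random

        random.seed(1)
        N, radio = 100, 0.2
        pos = [(random.random(), random.random()) for _ in range(N)]
        nbrs = [[j for j in range(N) if j != i and
                 (pos[i][0] - pos[j][0]) ** 2 + (pos[i][1] - pos[j][1]) ** 2
                 < radio ** 2]
                for i in range(N)]

        view = [{i} for i in range(N)]   # each node starts knowing only itself
        rounds = 0
        while len(view[0]) < N and rounds < 100:   # node 0 is the monitored proxy
            new = [set(v) for v in view]
            for i in range(N):
                for j in nbrs[i]:        # merge every received packet
                    new[i] |= view[j]
            view = new
            rounds += 1
        print(rounds, len(view[0]))      # rounds until node 0 has heard everyone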

  15. Nonlinear Fluid Computations in a Distributed Environment

    NASA Technical Reports Server (NTRS)

    Atwood, Christopher A.; Smith, Merritt H.

    1995-01-01

    The performance of a loosely and tightly-coupled workstation cluster is compared against a conventional vector supercomputer for the solution of the Reynolds-averaged Navier-Stokes equations. The application geometries include a transonic airfoil, a tiltrotor wing/fuselage, and a wing/body/empennage/nacelle transport. Decomposition is of the manager-worker type, with solution of one grid zone per worker process coupled using the PVM message passing library. Task allocation is determined by grid size and processor speed, subject to available memory penalties. Each fluid zone is computed using an implicit diagonal scheme in an overset mesh framework, while relative body motion is accomplished using an additional worker process to re-establish grid communication.
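
    The allocation rule, tasks assigned by grid size and processor speed, behaves like the classic longest-processing-time heuristic: sort zones largest-first and always hand the next zone to the worker with the smallest estimated finish time. The sketch below is a simplified stand-in with hypothetical zone sizes and speeds, not the PVM code itself:

        # Greedy zone-to-worker allocation weighted by processor speed.
        import heapq

        def allocate(zone_sizes, worker_speeds):
            heap = [(0.0, w) for w in range(len(worker_speeds))]  # (finish, id)
            heapq.heapify(heap)
            assignment = {w: [] for w in range(len(worker_speeds))}
            for size in sorted(zone_sizes, reverse=True):
                t, w = heapq.heappop(heap)       # worker that frees up earliest
                assignment[w].append(size)
                heapq.heappush(heap, (t + size / worker_speeds[w], w))
            return assignment

        # one fast worker (speed 2.0) and two slower ones
        print(allocate([90, 70, 40, 30, 20, 10], worker_speeds=[2.0, 1.0, 1.0]))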

  16. Clock distribution system for digital computers

    DOEpatents

    Wyman, Robert H.; Loomis, Jr., Herschel H.

    1981-01-01

    Apparatus for eliminating, in each clock distribution amplifier of a clock distribution system, sequential pulse catch-up error due to one pulse "overtaking" a prior clock pulse. The apparatus includes timing means to produce a periodic electromagnetic signal with a fundamental frequency having a fundamental frequency component V'_01(t); an array of N signal characteristic detector means, with detector means No. 1 receiving the timing means signal and producing a change-of-state signal V_1(t) in response to receipt of a signal above a predetermined threshold; N substantially identical filter means, one filter means being operatively associated with each detector means, for receiving the change-of-state signal V_n(t) and producing a modified change-of-state signal V'_n(t) (n = 1, ..., N) having a fundamental frequency component that is substantially proportional to V'_01(t - θ_n(t)), with a cumulative phase shift θ_n(t) having a time derivative that may be made uniformly and arbitrarily small; and with the detector means n+1 (1 ≤ n

  17. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of millions of computers on the Internet, and direct it toward running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors may volunteer their computing resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in pieces of small spatial and computational size. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.

  18. Computer Power: Part 1: Distribution of Power (and Communications).

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1988-01-01

    Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)

  19. The Penalty of Context-Switch Time in Distributed Computing

    DTIC Science & Technology

    1988-05-13

    Context-switch time is a significant cost in distributed computing, affecting throughput and response time. We report statistics gathered for a large network of Sun 2's, Sun 3's and DEC VAX computers.

  20. Developing a Distributed Computing Architecture at Arizona State University.

    ERIC Educational Resources Information Center

    Armann, Neil; And Others

    1994-01-01

    Development of Arizona State University's computing architecture, designed to ensure that all new distributed computing pieces will work together, is described. Aspects discussed include the business rationale, the general architectural approach, characteristics and objectives of the architecture, specific services, and impact on the university…

  1. Pladipus Enables Universal Distributed Computing in Proteomics Bioinformatics.

    PubMed

    Verheggen, Kenneth; Maddelein, Davy; Hulstaert, Niels; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2016-03-04

    The use of proteomics bioinformatics substantially contributes to an improved understanding of proteomes, but this novel and in-depth knowledge comes at the cost of increased computational complexity. Parallelization across multiple computers, a strategy termed distributed computing, can be used to handle this increased complexity; however, setting up and maintaining a distributed computing infrastructure requires resources and skills that are not readily available to most research groups. Here we propose a free and open-source framework named Pladipus that greatly facilitates the establishment of distributed computing networks for proteomics bioinformatics tools. Pladipus is straightforward to install and operate thanks to its user-friendly graphical interface, allowing complex bioinformatics tasks to be run easily on a network instead of a single computer. As a result, any researcher can benefit from the increased computational efficiency provided by distributed computing, hence empowering them to tackle more complex bioinformatics challenges. Notably, it enables any research group to perform large-scale reprocessing of publicly available proteomics data, thus supporting the scientific community in mining these data for novel discoveries.

  2. Computational fluid dynamics modelling in cardiovascular medicine

    PubMed Central

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards ‘digital patient’ or ‘virtual physiological human’ representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. PMID:26512019

  3. Computational fluid dynamics modelling in cardiovascular medicine.

    PubMed

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges.

  4. Perspectives on distributed computing : thirty people, four user types, and the distributed computing user experience.

    SciTech Connect

    Childers, L.; Liming, L.; Foster, I.; Mathematics and Computer Science; Univ. of Chicago

    2008-10-15

    This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software; (2

  5. Distriblets: Java-Based Distributed Computing on the Web.

    ERIC Educational Resources Information Center

    Finkel, David; Wills, Craig E.; Brennan, Brian; Brennan, Chris

    1999-01-01

    Describes a system, written in the Java programming language, for using the World Wide Web to distribute computational tasks to multiple hosts on the Web. Describes the programs written to carry out the load distribution, the structure of a "distriblet" class, and experiences in using this system. (Author/LRW)

  6. Transient dynamic distributed strain sensing using photonic crystal fibres

    NASA Astrophysics Data System (ADS)

    Samad, Shafeek A.; Hegde, G. M.; Roy Mahapatra, D.; Hanagud, S.

    2014-02-01

    A technique to determine the strain field in a one-dimensional (1D) photonic crystal (PC) involving the high strain rates and high temperatures around shock or ballistic impact is proposed. Transient strain sensing is important in aerospace and other structural health monitoring (SHM) applications. We consider a MEMS-based smart sensor design with a photonic crystal integrated on a silicon substrate for dynamic strain correlation. Deeply etched silicon rib waveguides with distributed Bragg reflectors are suitable candidates for miniaturization of sensing elements, replacing the conventional FBG. The main objective here is to investigate the effect of non-uniform strain localization on the sensor output. Computational analysis is done to determine the static and dynamic strain sensing characteristics of the 1D photonic crystal based sensor. The structure is designed and modeled using the Finite Element Method. Dynamic localization of the strain field is observed. The distributed strain field is used to calculate the PC waveguide response. The sensitivity of the proposed sensor is estimated to be 0.6 pm/με.

  7. A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Kwak, Dochan (Technical Monitor)

    2001-01-01

    NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.

  8. Computational social dynamic modeling of group recruitment.

    SciTech Connect

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model for group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with the abstract agents, which permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to scenario development for inner-city gang recruitment.

  9. Arcade: A Web-Java Based Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  10. Product limit estimation for capturing of pressure distribution dynamics.

    PubMed

    Wininger, Michael; Crane, Barbara A

    2016-05-01

    Measurement of contact pressures at the wheelchair-seating interface is a critically important approach for laboratory research and clinical application in monitoring risk for pressure ulceration. As yet, measures obtained from pressure mapping are static in nature: there is no accounting for changes in pressure distribution over time, despite the well-known interaction between time and pressure in risk estimation. Here, we introduce the first dynamic analysis for the distribution of pressure data, based on the Kaplan-Meier (KM) Product Limit Estimator (PLE), a ubiquitous tool encountered in clinical trials and survival analysis. In this approach, the pressure array-over-time data set is sub-sampled two frames at a time (random pairing), and the similarity of their pressure distributions is quantified via a correlation coefficient. A large number (here: 100) of these frame pairs is then sorted into descending order of correlation value and visualized as a KM curve; we build confidence limits via a bootstrap computed over 1000 replications. PLEs and the KM curve have robust statistical support and extensive development: the opportunities for extended application are substantial. We propose that the KM-PLE in particular, and dynamic analysis in general, may provide key leverage in the future development of seating technology, and valuable new insight into extant datasets.
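
    The procedure reduces to a few array operations: draw random frame pairs, correlate each pair, sort the correlations in descending order to form the curve, and bootstrap for confidence limits. The sketch below runs on synthetic pressure-map data; the drifting pattern, frame count, and 16 x 16 map size are hypothetical:

        # KM-PLE-style curve from random frame pairings of a pressure-map sequence.
        import numpy as np

        rng = np.random.default_rng(0)
        base = rng.random((16, 16))                     # shared postural pattern
        t = np.linspace(0, 1, 300)[:, None, None]
        frames = rng.random((300, 16, 16)) + 3.0 * t * base   # drift over time

        def km_curve(frames, n_pairs=100):
            idx = rng.integers(0, len(frames), size=(n_pairs, 2))  # random pairs
            r = [np.corrcoef(frames[i].ravel(), frames[j].ravel())[0, 1]
                 for i, j in idx]
            return np.sort(r)[::-1]                     # descending, KM-plot style

        curve = km_curve(frames)
        boot = np.array([np.sort(rng.choice(curve, size=curve.size))[::-1]
                         for _ in range(1000)])         # bootstrap, 1000 replications
        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
        print(curve[:3], lo[0], hi[0])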

  11. Protocols for configuring computation loops on a distributed multiprocessor system

    SciTech Connect

    Woei Lin; Chuan-lin Wu

    1983-01-01

    Protocols for configuring computation loops in a multiprocessing system are examined. Processing nodes are connected by a reconfigurable communication subnet using a multistage interconnection network. Configuration protocols are presented in terms of distributed algorithms such that processing nodes are configured in loop topologies. The configurability of loop topologies is first investigated. It is verified that the communication subnet can emulate loop distributed systems. It is also proven that multiple loops of various lengths can be configured in the distributed network. The technique demonstrated for configuring loop topologies can be used to configure other computation topologies. 6 references.

  12. One approach for evaluating the Distributed Computing Design System (DCDS)

    NASA Technical Reports Server (NTRS)

    Ellis, J. T.

    1985-01-01

    The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  13. Description and development of the means of a model experiment for load balancing in distributed computing systems

    NASA Astrophysics Data System (ADS)

    Nagiyev, A. E.; Sherstnyova, A. I.; Botygin, I. A.; Galanova, N. Y.

    2016-06-01

    The results of statistical model experiments on various load-balancing algorithms in distributed computing systems are presented. Software tools were developed that allow creating a virtual infrastructure of a distributed computing system in accordance with the intended objective of the research, which is focused on multi-agent and multithreaded data processing. A scheme for controlling the processing of requests from terminal devices, providing effective dynamic horizontal scaling of computing power at peak loads, is proposed.

  14. Exact Score Distribution Computation for Similarity Searches in Ontologies

    NASA Astrophysics Data System (ADS)

    Schulz, Marcel H.; Köhler, Sebastian; Bauer, Sebastian; Vingron, Martin; Robinson, Peter N.

    Semantic similarity searches in ontologies are an important component of many bioinformatic algorithms, e.g., protein function prediction with the Gene Ontology. In this paper we consider the exact computation of score distributions for similarity searches in ontologies, and introduce a simple null hypothesis which can be used to compute a P-value for the statistical significance of similarity scores. We concentrate on measures based on Resnik’s definition of ontological similarity. A new algorithm is proposed that collapses subgraphs of the ontology graph and thereby allows fast score distribution computation. The new algorithm is several orders of magnitude faster than the naive approach, as we demonstrate by computing score distributions for similarity searches in the Human Phenotype Ontology.
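
    A minimal sketch of the Resnik-style similarity that these score distributions are computed over is given below, assuming a toy ontology: the parent map and the term probabilities are invented for illustration, and the score is the information content of the most informative common ancestor, IC(t) = -log p(t).

      import math

      # Toy DAG: child -> set of parents (invented for illustration).
      parents = {"a": {"root"}, "b": {"root"}, "c": {"a", "b"}, "d": {"a"}}

      def ancestors(term):
          seen, stack = set(), [term]
          while stack:
              t = stack.pop()
              if t not in seen:
                  seen.add(t)
                  stack.extend(parents.get(t, ()))
          return seen

      # Term probabilities p(t) from (invented) annotation frequencies.
      p = {"root": 1.0, "a": 0.5, "b": 0.4, "c": 0.1, "d": 0.2}

      def resnik(t1, t2):
          """IC of the most informative common ancestor."""
          common = ancestors(t1) & ancestors(t2)
          return max(-math.log(p[t]) for t in common)

      print(resnik("c", "d"))  # shared ancestor "a": -log 0.5 = 0.693...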

  15. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    SciTech Connect

    Jin, Shuangshuang; Chen, Yousu; Wu, Di; Diao, Ruisheng; Huang, Zhenyu

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve with a single-processor-based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.

  16. Computational fluid dynamic modelling of cavitation

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability of incorporating thermodynamic effects of cryogenic fluids into the analysis. Extensions of the current two-dimensional steady state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.

  17. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.

  18. Integrated computer simulation on FIR FEL dynamics

    SciTech Connect

    Furukawa, H.; Kuruma, S.; Imasaki, K.

    1995-12-31

    An integrated computer simulation code has been developed to analyze the RF-Linac FEL dynamics. First, a simulation code for the electron beam acceleration and transport processes in the RF-Linac (LUNA) has been developed to analyze the characteristics of the electron beam in the RF-Linac and to optimize its parameters. Second, a space-time dependent 3D FEL simulation code (Shipout) has been developed. The RF-Linac FEL total simulations have been performed by using the electron beam data from LUNA in Shipout. The number of particles used in an RF-Linac FEL total simulation is approximately 1000, and the CPU time for simulating one round trip is about 1.5 minutes. At ILT/ILE, Osaka, an 8.5 MeV RF-Linac with a photo-cathode RF-gun is used for FEL oscillation experiments. Using a 2 cm wiggler, FEL oscillation at a wavelength of approximately 46 μm is investigated. Simulations using LUNA with the parameters of an ILT/ILE experiment estimate the pulse shape and energy spectra of the electron beam at the end of the linac: the pulse has a sharp rise and slowly decays as a function of time. RF-Linac FEL total simulations with the same parameters estimate the dependence of the start-up of the FEL oscillations on the pulse shape of the electron beam at the end of the linac. Coherent spontaneous emission effects and a quick start-up of FEL oscillations have been observed in the RF-Linac FEL total simulations.

  19. Dynamic leaching test of personal computer components.

    PubMed

    Li, Yadong; Richardson, Jay B; Niu, Xiaojun; Jackson, Ollie J; Laster, Jeremy D; Walker, Aaron K

    2009-11-15

    A dynamic leaching test (DLT) was developed and used to evaluate the leaching of toxic substances from electronic waste in the environment. The major components in personal computers (PCs), including motherboards, hard disc drives, floppy disc drives, and compact disc drives, were tested. The tests lasted 2 years for motherboards and 1.5 years for the disc drives. The extraction fluids for the standard toxicity characteristic leaching procedure (TCLP) and synthetic precipitation leaching procedure (SPLP) were used as the DLT leaching solutions. A total of 18 elements, including Ag, Al, As, Au, Ba, Be, Cd, Cr, Cu, Fe, Ga, Ni, Pd, Pb, Sb, Se, Sn, and Zn, were analyzed in the DLT leachates. Only Al, Cu, Fe, Ni, Pb, and Zn were commonly found in the DLT leachates of the PC components, and their leaching levels were much higher in TCLP extraction fluid than in SPLP extraction fluid. The toxic heavy metal Pb was found to leach continuously out of the components over the entire test periods. The cumulative amount of Pb leached from the motherboards in TCLP extraction fluid reached 2.0 g per motherboard over the 2-year test period; in SPLP extraction fluid it was 75-90% less. The leaching rates and levels of Pb were largely affected by the content of galvanized steel in the PC components: the higher the steel content, the lower the Pb leaching rate. The findings suggest that obsolete PCs disposed of in landfills or discarded in the environment continuously release Pb for years when subjected to landfill leachate or rain.

  20. Computational Fluid Dynamics of Acoustically Driven Bubble Systems

    NASA Astrophysics Data System (ADS)

    Glosser, Connor; Lie, Jie; Dault, Daniel; Balasubramaniam, Shanker; Piermarocchi, Carlo

    2014-03-01

    The development of modalities for precise, targeted drug delivery has become increasingly important in medical care in recent years. Assemblages of microbubbles steered by acoustic pressure fields present one potential vehicle for such delivery. Modeling the collective response of multi-bubble systems to an intense, externally applied ultrasound field requires accurately capturing acoustic interactions between bubbles and the externally applied field, and their effect on the evolution of bubble kinetics. In this work, we present a methodology for multiphysics simulation based on an efficient transient boundary integral equation (TBIE) coupled with molecular dynamics (MD) to compute trajectories of multiple acoustically interacting bubbles in an ideal fluid under pulsed acoustic excitation. For arbitrary configurations of spherical bubbles, the TBIE solver self-consistently models transient surface pressure distributions at bubble-fluid interfaces due to acoustic interactions and relative potential flows induced by bubble motion. Forces derived from the resulting pressure distributions act as driving terms in the MD update at each timestep. The resulting method efficiently and accurately captures individual bubble dynamics for clouds containing up to hundreds of bubbles.

  1. Validation of Magnetic Resonance Thermometry by Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Rydquist, Grant; Owkes, Mark; Verhulst, Claire M.; Benson, Michael J.; Vanpoppel, Bret P.; Burton, Sascha; Eaton, John K.; Elkins, Christopher P.

    2016-11-01

    Magnetic Resonance Thermometry (MRT) is a new experimental technique that can create fully three-dimensional temperature fields in a noninvasive manner. However, validation is still required to determine the accuracy of measured results. One method of examination is to compare data gathered experimentally to data computed with computational fluid dynamics (CFD). In this study, large-eddy simulations have been performed with the NGA computational platform to generate data for a comparison with previously run MRT experiments. The experimental setup consisted of a heated jet inclined at 30° injected into a larger channel. In the simulations, viscosity and density were scaled according to the local temperature to account for differences in buoyant and viscous forces. A mesh-independence study was performed with 5-million-, 15-million-, and 45-million-cell meshes. The program Star-CCM+ was used to simulate the complete experimental geometry, and its results were compared to data generated from NGA. Overall, both programs show good agreement with the experimental data gathered with MRT. With these data, the validity of MRT as a diagnostic tool has been shown, and the tool can be used to further our understanding of a range of flows with non-trivial temperature distributions.

  2. New security infrastructure model for distributed computing systems

    NASA Astrophysics Data System (ADS)

    Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.

    2016-02-01

    In this paper we propose a new approach to setting up a user-friendly and yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on a public key infrastructure along with proxy certificates, which are used for rights delegation. In practice, the contradiction between the limited lifetime of proxy certificates and the unpredictable processing time of a request is a big issue for the end users of the system. We propose to use hashes without a lifetime limit, individual to each request, instead of proxy certificates. Our approach avoids the use of proxy certificates altogether, making the security infrastructure of a distributed computing system easier to develop, support, and use.

  3. Generalization of the logistic distribution in the dynamic model of wind direction

    NASA Astrophysics Data System (ADS)

    Kaplya, E. V.

    2016-12-01

    Statistical regularity in the dynamics of wind direction has been found. The density distribution of the increment of the wind-direction angle has been approximated using a generalized advanced logistic distribution. The advanced logistic distribution involves an additional power-law parameter. The parameters of the approximation function have been computed from experimental data using the method of least squares. The consistency of the proposed function with meteorological data has been tested using Pearson's chi-squared test and the Kolmogorov test.

  4. Spatially distributed characterization of soil-moisture dynamics using travel-time distributions

    NASA Astrophysics Data System (ADS)

    Heße, Falk; Zink, Matthias; Kumar, Rohini; Samaniego, Luis; Attinger, Sabine

    2017-01-01

    Travel-time distributions are a comprehensive tool for the characterization of hydrological system dynamics. Unlike the streamflow hydrograph, they describe the movement and storage of water within and throughout the hydrological system. Until recently, studies using such travel-time distributions have generally either been applied to lumped models or to real-world catchments using available time series, e.g., stable isotopes. Whereas the former are limited in their realism and lack information on the spatial arrangements of the relevant quantities, the latter are limited in their use of available data sets. In our study, we employ the spatially distributed mesoscale Hydrological Model (mHM) and apply it to a catchment in central Germany. Being able to draw on multiple large data sets for calibration and verification, we generate a large array of spatially distributed states and fluxes. These hydrological outputs are then used to compute the travel-time distributions for every grid cell in the modeling domain. A statistical analysis indicates the general soundness of the upscaling scheme employed in mHM and reveals precipitation, saturated soil moisture and potential evapotranspiration as important predictors for explaining the spatial heterogeneity of mean travel times. In addition, we demonstrate and discuss the high information content of mean travel times for characterization of internal hydrological processes.
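
    As a small illustration of the final analysis step, the sketch below computes mean travel times as the first moment of discretized per-cell travel-time distributions; the arrays are synthetic stand-ins, since the mHM outputs themselves are not reproduced here.

      import numpy as np

      # Hypothetical discretized travel-time distributions: one row per
      # grid cell, one column per time bin; values are invented.
      t = np.linspace(0.5, 365.0, 200)                        # days
      ttd = np.random.default_rng(0).dirichlet(np.ones(200), size=4)

      # Normalize each row to a density, then take the first moment.
      dt = t[1] - t[0]
      pdf = ttd / (ttd.sum(axis=1, keepdims=True) * dt)
      mean_tt = (pdf * t).sum(axis=1) * dt                    # one value per cell
      print(mean_tt)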

  5. Dynamic stiffness method for space frames under distributed harmonic loads

    NASA Astrophysics Data System (ADS)

    Dumir, P. C.; Saha, D. C.; Sengupta, S.

    1992-10-01

    An exact dynamic equivalent load vector for space frames subjected to harmonic distributed loads has been derived using the dynamic stiffness approach. The Taylor series expansion of the dynamic equivalent load vector has revealed that the static consistent equivalent load vector used in a 12-degree-of-freedom, two-noded finite element for a space frame is just the first term of the series. The dynamic stiffness approach using the exact dynamic equivalent load vector requires discretization of a member subjected to distributed loads into only one element. The results of the dynamic stiffness method are compared with those of the finite element method for illustrative problems.

  6. Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.

    NASA Astrophysics Data System (ADS)

    Elliott, William Dewey

    1995-01-01

    A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over
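
    For contrast with the scaling discussion above, here is the naive O(N^2) pairwise 1/r sum that methods such as FMA and MDMA are designed to avoid for large N; the charges, positions, and units are invented, and this sketch deliberately does not implement the multipole machinery itself.

      import numpy as np

      def coulomb_energy_naive(q, pos):
          """Direct O(N^2) pairwise 1/r sum: the baseline cost that
          multipole methods reduce toward O(N)."""
          n = len(q)
          e = 0.0
          for i in range(n):
              for j in range(i + 1, n):
                  e += q[i] * q[j] / np.linalg.norm(pos[i] - pos[j])
          return e

      rng = np.random.default_rng(1)
      q = rng.choice([-1.0, 1.0], size=50)      # arbitrary unit charges
      pos = rng.random((50, 3))                 # arbitrary positions
      print(coulomb_energy_naive(q, pos))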

  7. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2015-08-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.

  8. Distributed computer taxonomy based on O/S structure

    NASA Technical Reports Server (NTRS)

    Foudriat, Edwin C.

    1985-01-01

    The taxonomy considers the resource structure at the operating system level. It compares a communication based taxonomy with the new taxonomy to illustrate how the latter does a better job when related to the client's view of the distributed computer. The results illustrate the fundamental features and what is required to construct fully distributed processing systems. The problem of using network computers on the space station is addressed. A detailed discussion of the taxonomy is not given here. Information is given in the form of charts and diagrams that were used to illustrate a talk.

  9. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect

    Fisk, Ian

    2010-12-01

    In this presentation, the experiences of the LHC experiments using grid computing are described, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. Finally, the expected evolution and future plans are outlined.

  10. Bayesian uncertainty quantification and propagation in molecular dynamics simulations: A high performance computing framework

    NASA Astrophysics Data System (ADS)

    Angelikopoulos, Panagiotis; Papadimitriou, Costas; Koumoutsakos, Petros

    2012-10-01

    We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.
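
    The transitional MCMC scheme itself is beyond a short example, but a plain random-walk Metropolis sketch conveys the Bayesian calibration idea: below, a hypothetical one-parameter force field is sampled against a synthetic Gaussian misfit that stands in for the expensive MD-versus-data comparison; all names and numbers are invented.

      import numpy as np

      rng = np.random.default_rng(2)

      def log_posterior(eps):
          """Stand-in for log-likelihood + log-prior: a Gaussian misfit
          between a cheap surrogate of an MD observable and a synthetic
          measurement (flat prior implied)."""
          predicted = 3.0 * eps              # hypothetical surrogate of MD
          return -0.5 * ((predicted - 1.2) / 0.1) ** 2

      # Random-walk Metropolis over the force-field parameter eps.
      chain, eps = [], 0.5
      lp = log_posterior(eps)
      for _ in range(5000):
          prop = eps + 0.05 * rng.standard_normal()
          lp_prop = log_posterior(prop)
          if np.log(rng.random()) < lp_prop - lp:
              eps, lp = prop, lp_prop
          chain.append(eps)

      print(np.mean(chain[1000:]))   # posterior mean, close to 0.4 here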

  11. Bayesian uncertainty quantification and propagation in molecular dynamics simulations: a high performance computing framework.

    PubMed

    Angelikopoulos, Panagiotis; Papadimitriou, Costas; Koumoutsakos, Petros

    2012-10-14

    We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.

  12. Eleventh Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion

    NASA Technical Reports Server (NTRS)

    Williams, R. W. (Compiler)

    1993-01-01

    Conference publication includes 79 abstracts and presentations and 3 invited presentations given at the Eleventh Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion held at George C. Marshall Space Flight Center, April 20-22, 1993. The purpose of the workshop is to discuss experimental and computational fluid dynamic activities in rocket propulsion. The workshop is an open meeting for government, industry, and academia. A broad range of topics is discussed, including computational fluid dynamic methodology, liquid and solid rocket propulsion, turbomachinery, combustion, heat transfer, and grid generation.

  13. Dynamic Equilibrium Explained Using the Computer

    ERIC Educational Resources Information Center

    Sariçayir, Hakan; Sahin, Musa; Üce, Musa

    2006-01-01

    Since their introduction into schools, educators have tried to utilize computers in classes in order to make difficult topics more comprehensible. Chemistry educators, when faced with the task of teaching a topic that cannot be taught through experiments in a laboratory, resort to computers to help students visualize difficult concepts and…

  14. Computing Bisectors in a Dynamic Geometry Environment

    ERIC Educational Resources Information Center

    Botana, Francisco

    2013-01-01

    In this note, an approach combining dynamic geometry and automated deduction techniques is used to study the bisectors between points and curves. Usual teacher constructions for bisectors are discussed, showing that inherent limitations in dynamic geometry software impede their thorough study. We show that the interactive sketching of bisectors…

  15. How do Chinese cities grow? A distribution dynamics approach

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Xin; He, Ling-Yun

    2017-03-01

    This paper examines the dynamic behavior of city size using a distribution dynamics approach with Chinese city data for the period 1984-2010. Instead of convergence, divergence or parallel growth, multimodality and persistence are the dominant characteristics in the distribution dynamics of Chinese prefectural cities. Moreover, initial city size matters: initially small and medium-sized cities exhibit a strong tendency toward convergence, while large cities show significant persistence and multimodality over the sample period. Examination of the regional city groups shows that locational fundamentals have an important impact on the distribution dynamics of city size.

  16. Computer architecture evaluation for structural dynamics computations: Project summary

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1989-01-01

    The intent of the proposed effort is the examination of the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.

  17. CMS Monte Carlo production operations in a distributed computing environment

    SciTech Connect

    Mohapatra, A.; Lazaridis, C.; Hernandez, J.M.; Caballero, J.; Hof, C.; Kalinin, S.; Flossdorf, A.; Abbrescia, M.; De Filippis, N.; Donvito, G.; Maggi, G.; /Bari U. /INFN, Bari /INFN, Pisa /Vrije U., Brussels /Brussels U. /Imperial Coll., London /CERN /Princeton U. /Fermilab

    2008-01-01

    Monte Carlo production for the CMS experiment is carried out in a distributed computing environment; the goal of producing 30M simulated events per month in the first half of 2007 has been reached. A brief overview of the production operations and statistics is presented.

  18. Distributed Computing with Centralized Support Works at Brigham Young.

    ERIC Educational Resources Information Center

    McDonald, Kelly; Stone, Brad

    1992-01-01

    Brigham Young University (Utah) has addressed the need for maintenance and support of distributed computing systems on campus by implementing a program patterned after a national business franchise, providing the support and training of a centralized administration but allowing each unit to operate much as an independent small business.…

  19. Chandrasekhar equations and computational algorithms for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Ito, K.; Powers, R. K.

    1984-01-01

    The Chandrasekhar equations arising in optimal control problems for linear distributed parameter systems are considered. The equations are derived via approximation theory. This approach is used to obtain existence, uniqueness, and strong differentiability of the solutions and provides the basis for a convergent computation scheme for approximating feedback gain operators. A numerical example is presented to illustrate these ideas.

  20. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is 1) developing highly accurate parallel numerical algorithms, 2) conducting preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporating newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm

  1. A fault detection service for wide area distributed computations.

    SciTech Connect

    Stelling, P.

    1998-06-09

    The potential for faults in distributed computing systems is a significant complicating factor for application developers. While a variety of techniques exist for detecting and correcting faults, the implementation of these techniques in a particular context can be difficult. Hence, we propose a fault detection service designed to be incorporated, in a modular fashion, into distributed computing systems, tools, or applications. This service uses well-known techniques based on unreliable fault detectors to detect and report component failure, while allowing the user to trade off timeliness of reporting against false positive rates. We describe the architecture of this service, report on experimental results that quantify its cost and accuracy, and describe its use in two applications: monitoring the status of system components of the GUSTO computational grid testbed, and as part of the NetSolve network-enabled numerical solver.
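
    A minimal sketch of the heartbeat-timeout idea behind such unreliable fault detectors follows; the class and method names are invented, and the single timeout parameter embodies the timeliness-versus-false-positive trade-off mentioned above.

      import time

      class HeartbeatMonitor:
          """Unreliable failure detector: suspects a component when no
          heartbeat has arrived within `timeout` seconds. A shorter
          timeout reports failures sooner but raises the rate of false
          positives."""

          def __init__(self, timeout):
              self.timeout = timeout
              self.last_seen = {}

          def heartbeat(self, component):
              self.last_seen[component] = time.monotonic()

          def suspected(self, component):
              last = self.last_seen.get(component)
              return last is None or time.monotonic() - last > self.timeout

      mon = HeartbeatMonitor(timeout=2.0)
      mon.heartbeat("worker-1")
      print(mon.suspected("worker-1"))  # False right after a heartbeat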

  2. Sonovestibular symptoms evaluated by computed dynamic posturography.

    PubMed

    Teszler, C B; Ben-David, J; Podoshin, L; Sabo, E

    2000-01-01

    The investigation of stability under bilateral acoustic stimulation was undertaken in an attempt to mimic the real-life conditions of a noisy environment (e.g., industry, aviation). The Tullio phenomenon evaluated by computed dynamic posturography (CDP) under acoustic stimulation is reflected in postural unsteadiness, rather than in the classic nystagmus. With such a method, the dangerous effects of noise-induced instability can be assessed and prevented. Three groups of subjects were studied. The first (group A) included 20 patients who complained of sonovestibular symptoms (i.e., Tullio phenomenon) on the background of an inner-ear disease. The second group (B) included 20 neurootological patients without a history of Tullio phenomenon. Group C consisted of 20 patients with normal hearing, as controls. A pure-tone stimulus of 1,000 Hz at 110 dB was delivered binaurally for 20 seconds during condition 5 and condition 6 of the CDP sensory organization test. The sequence of six sensory organization conditions was performed three times with two intermissions of 15-20 minutes between the trials. The first trial was performed in the regular mode (quiet stance). This was followed 20 minutes later by a trial carried out in quiet stance in sensory organization tests (SOTs) 1 through 4, and with acoustic stimulation in SOT 5 and SOT 6. The last trial was performed in quiet stance throughout (identical to the first trial). A significant drop in the composite equilibrium score was witnessed in group A patients upon acoustic stimulation (p < .0001). This imbalance did not disappear completely until 20 minutes later when the third sensory organization trial was performed. In fact, the composite score obtained on the last SOT was still significantly worse than the baseline. Group B and the normal subjects (group C) showed no significant change in composite score. As regards the vestibular ratio score, again, group A marked a drop on stimulation with sound (p < .004). This decrease

  3. ADDRESSING ENVIRONMENTAL ENGINEERING CHALLENGES WITH COMPUTATIONAL FLUID DYNAMICS

    EPA Science Inventory

    This paper discusses the status and application of Computational Fluid Dynamics (CFD) models to address environmental engineering challenges for more detailed understanding of air pollutant source emissions, atmospheric dispersion and resulting human exposure. CFD simulations ...

  4. Autonomous management of distributed information systems using evolutionary computation techniques

    NASA Astrophysics Data System (ADS)

    Oates, Martin J.

    1999-03-01

    As the size of typical industrial strength information systems continues to rise, particularly in the arena of Internet based management information systems and multimedia servers, the issue of managing data distribution over clusters or 'farms' to overcome performance and scalability issues is becoming of paramount importance. Further, where access is global, this can cause points of geographically localized load contention to 'follow the sun' during the day. Traditional site mirroring is not overly effective in addressing this contention and so a more dynamic approach is being investigated to tackle load balancing. The general objective is to manage a self-adapting, distributed database so as to reliably and consistently provide near optimal performance as perceived by client applications. Such a management system must be ultimately capable of operating over a range of time varying usage profiles and fault scenarios, incorporate considerations for communications network delays, multiple updates and maintenance operations. It must also be shown to be capable of being scaled in a practical fashion to ever larger sized networks and databases. Two key components of such an automated system are an optimiser capable of efficiently finding new configuration options, and a suitable model of the system capable of accurately reflecting the performance (or any other required quality of service metric) of the real world system. As conditions change in the real world system, these are fed into the model. The optimiser is then run to find new configurations which are tested in the model prior to implementation in the real world. The model therefore forms an evaluation function which the optimiser utilises to direct its search. Whilst it has already been shown that Genetic Algorithms can provide good solutions to this problem, there are a number of issues associated with this approach. In particular, for industrial strength applications, it must be shown that the GA employed

  5. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
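
    The dependence on inherently sequential segments is Amdahl's law; the short computation below, with an assumed serial fraction of 40%, shows why the observed speedups were low.

      def amdahl_speedup(serial_fraction, n_procs):
          """Speedup bound when `serial_fraction` of the work cannot be
          parallelized: S(N) = 1 / (s + (1 - s) / N)."""
          s = serial_fraction
          return 1.0 / (s + (1.0 - s) / n_procs)

      # With an assumed 40% inherently sequential code, even 32 processors
      # give less than 2.5x speedup, consistent with the low speedups above.
      for n in (2, 8, 32):
          print(n, round(amdahl_speedup(0.4, n), 2))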

  6. Performance Assessment of OVERFLOW on Distributed Computing Environment

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Rizk, Yehia M.

    2000-01-01

    The aerodynamic computer code, OVERFLOW, with a multi-zone overset grid feature, has been parallelized to enhance its performance on distributed and shared memory paradigms. Practical application benchmarks have been set to assess the efficiency of the code's parallelism on high-performance architectures. The code's performance has also been experimented with in the context of the distributed computing paradigm on distant computer resources using the Information Power Grid (IPG) toolkit, Globus. Two parallel versions of the code, namely OVERFLOW-MPI and -MLP, have been developed around the natural coarse-grained parallelism inherent in a multi-zonal domain decomposition paradigm. The algorithm invokes a strategy that forms a number of groups, each consisting of a zone, a cluster of zones, and/or a partition of a large zone. Each group can be thought of as a process with one or more threads assigned to it, and all groups run in parallel. The -MPI version of the code uses explicit message-passing based on the standard MPI library for sending and receiving interzonal boundary data across processors. The -MLP version employs no message-passing paradigm; the boundary data is transferred through the shared memory. The -MPI code is suited for both distributed and shared memory architectures, while the -MLP code can only be used on shared memory platforms. The IPG applications are implemented by the -MPI code using the Globus toolkit. While a computational task is distributed across multiple computer resources, the parallelism can be explored on each resource alone. Performance studies are carried out with some practical aerodynamic problems with complex geometries, consisting of 2.5 to 33 million grid points and a large number of zonal blocks. The computations were executed primarily on SGI Origin 2000 multiprocessors and on the Cray T3E. OVERFLOW's IPG applications are carried out on NASA homogeneous metacomputing machines located at three sites, Ames, Langley and Glenn. Plans

  7. Income dynamics with a stationary double Pareto distribution

    NASA Astrophysics Data System (ADS)

    Toda, Alexis Akira

    2011-04-01

    Once controlled for the trend, the distribution of personal income appears to be double Pareto, a distribution that obeys the power law exactly in both the upper and the lower tails. I propose a model of income dynamics with a stationary distribution that is consistent with this fact. Using US male wage data for 1970-1993, I estimate the power law exponent in two ways—(i) from each cross section, assuming that the distribution has converged to the stationary distribution, and (ii) from a panel directly estimating the parameters of the income dynamics model—and obtain the same value of 8.4.
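
    The paper's estimation procedures are not reproduced here, but a standard Hill-type maximum-likelihood estimate of a Pareto tail exponent from a single cross section looks as follows; the sample is synthetic with a known exponent of 3, and the tail threshold is an illustrative choice.

      import numpy as np

      def pareto_tail_exponent(x, threshold):
          """Hill/ML estimate of alpha for P(X > x) ~ x^(-alpha)
          using observations at or above `threshold`."""
          tail = x[x >= threshold]
          return tail.size / np.sum(np.log(tail / threshold))

      # Synthetic 'income' sample with a known tail exponent of 3.
      rng = np.random.default_rng(3)
      x_min = 1.0
      sample = x_min * (rng.pareto(3.0, size=100_000) + 1.0)
      print(pareto_tail_exponent(sample, threshold=2.0))  # close to 3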

  8. Beyond the NAS Parallel Benchmarks: Measuring Dynamic Program Performance and Grid Computing Applications

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Biswas, Rupak; Frumkin, Michael; Feng, Huiyu; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The contents include: 1) A brief history of NPB; 2) What is (not) being measured by NPB; 3) Irregular dynamic applications (UA Benchmark); and 4) Wide area distributed computing (NAS Grid Benchmarks-NGB). This paper is presented in viewgraph form.

  9. (U) Computation acceleration using dynamic memory

    SciTech Connect

    Hakel, Peter

    2014-10-24

    Many computational applications require the repeated use of quantities whose calculations can be expensive. In order to speed up the overall execution of the program, it is often advantageous to replace computation with extra memory usage. In this approach, computed values are stored and then, when they are needed again, they are quickly retrieved from memory rather than being calculated again at great cost. Sometimes, however, the precise amount of memory needed to store such a collection is not known in advance, and only emerges in the course of running the calculation. One problem accompanying such a situation is wasted memory space in overdimensioned (and possibly sparse) arrays. Another issue is the overhead of copying existing values to a new, larger memory space, if the original allocation turns out to be insufficient. The programmer therefore has the extra task of addressing these runtime problems in the code.
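
    In a high-level language, this trade of memory for recomputation reduces to memoization with dynamically growing storage, as sketched below; the Fibonacci recursion is only a stand-in for an expensive, repeatedly needed quantity, not the report's application.

      from functools import lru_cache

      @lru_cache(maxsize=None)      # cache storage grows as needed
      def expensive(n):
          """Stand-in for a costly, repeatedly requested computation."""
          if n < 2:
              return n
          return expensive(n - 1) + expensive(n - 2)

      print(expensive(200))  # fast: each value is computed once, then reused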

  10. Quantum and classical dynamics in adiabatic computation

    NASA Astrophysics Data System (ADS)

    Crowley, P. J. D.; Đurić, T.; Vinci, W.; Warburton, P. A.; Green, A. G.

    2014-10-01

    Adiabatic transport provides a powerful way to manipulate quantum states. By preparing a system in a readily initialized state and then slowly changing its Hamiltonian, one may achieve quantum states that would otherwise be inaccessible. Moreover, a judicious choice of final Hamiltonian whose ground state encodes the solution to a problem allows adiabatic transport to be used for universal quantum computation. However, the dephasing effects of the environment limit the quantum correlations that an open system can support and degrade the power of such adiabatic computation. We quantify this effect by allowing the system to evolve over a restricted set of quantum states, providing a link between physically inspired classical optimization algorithms and quantum adiabatic optimization. This perspective allows us to develop benchmarks to bound the quantum correlations harnessed by an adiabatic computation. We apply these to the D-Wave Vesuvius machine with revealing—though inconclusive—results.

  11. A Computational Wireless Network Backplane: Performance in a Distributed Speaker Identification Application Postprint

    DTIC Science & Technology

    2008-12-01

    ... traffic patterns are intense but constrained to a local area. Examples include peer-to-peer applications or sensor data processing in the region. ... DWARF, a general distributed application execution framework for wireless ad-hoc networks which dynamically allocates computation resources and manages

  12. Vibration suppression with approximate finite dimensional compensators for distributed systems: Computational methods and experimental results

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Smith, Ralph C.; Wang, Yun

    1994-01-01

    Based on a distributed parameter model for vibrations, an approximate finite dimensional dynamic compensator is designed to suppress vibrations (multiple modes with a broad band of frequencies) of a circular plate with Kelvin-Voigt damping and clamped boundary conditions. The control is realized via piezoceramic patches bonded to the plate and is calculated from information available from several pointwise observed state variables. Examples from computational studies as well as use in laboratory experiments are presented to demonstrate the effectiveness of this design.

  13. Computer modeling and simulation of a 20kHz ac distribution system for Space Station

    NASA Technical Reports Server (NTRS)

    Tsai, Fu-Sheng; Lee, Fred C.

    1987-01-01

    A computer model of a 20 kHz, ac distribution testbed for Space Station is presented. The system consists of six resonant inverters, a one-hundred-meter transmission line, and three load receivers: a dc receiver, a bidirectional receiver, and an ac receiver. A model library is generated characterizing all system components. The system's local and global behaviors are investigated using the EASY5 dynamic analysis program.

  14. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% of those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling.
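
    The batch-processing pattern described here, minus MODFLOW and the Java Parallel Processing Framework, can be sketched as independent stochastic realizations distributed over worker processes; the model function below is an invented stand-in returning one scalar output per realization.

      import numpy as np
      from multiprocessing import Pool

      def run_realization(seed):
          """Stand-in for one stochastic groundwater-model run."""
          rng = np.random.default_rng(seed)
          conductivity = rng.lognormal(mean=0.0, sigma=1.0, size=1000)
          return conductivity.mean()          # a scalar model output

      if __name__ == "__main__":
          with Pool(processes=10) as pool:    # workers, like the cluster nodes
              results = pool.map(run_realization, range(500))  # 500 realizations
          print(np.mean(results), np.std(results))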

  15. Potential applications of computational fluid dynamics to biofluid analysis

    NASA Technical Reports Server (NTRS)

    Kwak, D.; Chang, J. L. C.; Rogers, S. E.; Rosenfeld, M.; Kwak, D.

    1988-01-01

    Computational fluid dynamics was developed to the stage where it has become an indispensable part of aerospace research and design. In view of advances made in aerospace applications, the computational approach can be used for biofluid mechanics research. Several flow simulation methods developed for aerospace problems are briefly discussed for potential applications to biofluids, especially to blood flow analysis.

  16. Computer Visualization of Many-Particle Quantum Dynamics

    SciTech Connect

    Ozhigov, A. Y.

    2009-03-10

    In this paper I show the importance of computer visualization in the study of many-particle quantum dynamics. Such visualization becomes an indispensable illustrative tool for understanding the behavior of dynamic swarm-based quantum systems. It is also an important component of the corresponding simulation framework, and can simplify the study of underlying algorithms for multi-particle quantum systems.

  17. The Computer Simulation of Liquids by Molecular Dynamics.

    ERIC Educational Resources Information Center

    Smith, W.

    1987-01-01

    Proposes a mathematical computer model for the behavior of liquids using the classical dynamic principles of Sir Isaac Newton and the molecular dynamics method invented by other scientists. Concludes that other applications will be successful using supercomputers to go beyond simple Newtonian physics. (CW)

  18. File and metadata management for BESIII distributed computing

    NASA Astrophysics Data System (ADS)

    Nicholson, C.; Lin, L.; Deng, Z. Y.; Li, W. D.; Zhang, X. M.; Zheng, Y. H.

    2012-12-01

    The BESIII experiment at the Institute of High Energy Physics (IHEP), Beijing, uses the high-luminosity BEPCII e+e- collider to study physics in the τ-charm energy region around 3.7 GeV; BEPCII has produced the world's largest samples of J/ψ and ψ′ events to date. An order of magnitude increase in the data sample size over the 2011-2012 data-taking period demanded a move from a very centralized to a distributed computing environment, as well as the development of an efficient file and metadata management system. While BESIII is on a smaller scale than some other HEP experiments, this poses particular challenges for its distributed computing and data management system. These constraints include limited resources and manpower, and low quality of network connections to IHEP. Drawing on the rich experience of the HEP community, a system has been developed which meets these constraints. The design and development of the BESIII distributed data management system, including its integration with other BESIII distributed computing components, such as job management, are presented here.

  19. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    NASA Astrophysics Data System (ADS)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
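
    The key ingredient, distance-dependent connection probability, can be sketched in a few lines; the neuron positions on a ring and the parameters p0 and sigma below are illustrative assumptions, not the paper's values.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 200
      pos = np.linspace(0.0, 1.0, n, endpoint=False)   # neurons on a ring

      # Pairwise ring distances, then a distance-dependent connection
      # probability p(d) = p0 * exp(-d / sigma).
      d = np.abs(pos[:, None] - pos[None, :])
      d = np.minimum(d, 1.0 - d)                       # wrap around the ring
      p_connect = 0.5 * np.exp(-d / 0.05)              # p0 = 0.5, sigma = 0.05
      np.fill_diagonal(p_connect, 0.0)                 # no self-connections

      adjacency = rng.random((n, n)) < p_connect
      print(adjacency.sum(), "connections")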

  20. Computational spectroscopy, dynamics, and photochemistry of photosensory flavoproteins.

    PubMed

    Domratcheva, Tatiana; Udvarhelyi, Anikó; Shahi, Abdul Rehaman Moughal

    2014-01-01

    Extensive interest in photosensory proteins stimulated computational studies of flavins and flavoproteins in the past decade. This review is dedicated to the three central topics of these studies: calculations of flavin UV-visible and IR spectra, simulated dynamics of photoreceptor proteins, and flavin photochemistry. Accordingly, this chapter is divided into three parts; each part describes corresponding computational protocols, summarizes computational results, and discusses the emerging mechanistic picture.

  1. Parallel Domain Decomposition Preconditioning for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai; Kutler, Paul (Technical Monitor)

    1998-01-01

    This viewgraph presentation gives an overview of the parallel domain decomposition preconditioning for computational fluid dynamics. Details are given on some difficult fluid flow problems, stabilized spatial discretizations, and Newton's method for solving the discretized flow equations. Schur complement domain decomposition is described through basic formulation, simplifying strategies (including iterative subdomain and Schur complement solves, matrix element dropping, localized Schur complement computation, and supersparse computations), and performance evaluation.
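
    For reference, the Schur complement at the heart of such domain-decomposition formulations can be illustrated numerically on a small random block system (the block sizes and diagonal shift below are arbitrary): eliminating the interior unknowns leaves the reduced interface operator S = A22 - A21 A11^{-1} A12.

      import numpy as np

      rng = np.random.default_rng(5)

      # 2x2 block system: interior unknowns (A11) and interface unknowns (A22).
      n_i, n_g = 6, 3
      A11 = rng.random((n_i, n_i)) + n_i * np.eye(n_i)   # diagonally dominant
      A12, A21 = rng.random((n_i, n_g)), rng.random((n_g, n_i))
      A22 = rng.random((n_g, n_g)) + n_g * np.eye(n_g)

      # Schur complement: the interface system left after eliminating
      # the interior unknowns.
      S = A22 - A21 @ np.linalg.solve(A11, A12)
      print(S.shape)   # reduced (3, 3) interface problem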

  2. Towards a Population Dynamics Theory for Evolutionary Computing: Learning from Biological Population Dynamics in Nature

    NASA Astrophysics Data System (ADS)

    Ma, Zhanshan (Sam)

    In evolutionary computing (EC), population size is one of the critical parameters that a researcher has to deal with. Hence, it was no surprise that the pioneers of EC, such as De Jong (1975) and Holland (1975), had already studied the population sizing from the very beginning of EC. What is perhaps surprising is that more than three decades later, we still largely depend on the experience or ad-hoc trial-and-error approach to set the population size. For example, in a recent monograph, Eiben and Smith (2003) indicated: "In almost all EC applications, the population size is constant and does not change during the evolutionary search." Despite enormous research on this issue in recent years, we still lack a well accepted theory for population sizing. In this paper, I propose to develop a population dynamics theory for EC with inspiration from the population dynamics theory of biological populations in nature. Essentially, the EC population is considered as a dynamic system over time (generations) and space (search space or fitness landscape), similar to the spatial and temporal dynamics of biological populations in nature. With this conceptual mapping, I propose to 'transplant' the biological population dynamics theory to EC via three steps: (i) experimentally test the feasibility—whether or not emulating natural population dynamics improves the EC performance; (ii) comparatively study the underlying mechanisms—why there are improvements, primarily via statistical modeling analysis; (iii) conduct theoretical analysis with theoretical models such as percolation theory and extended evolutionary game theory that are generally applicable to both EC and natural populations. This article is a summary of a series of studies we have performed to achieve the general goal [27][30]-[32]. In the following, I start with an extremely brief introduction on the theory and models of natural population dynamics (Sections 1 & 2). In Sections 4 to 6, I briefly discuss three

  3. Dynamic traffic assignment on parallel computers

    SciTech Connect

    Nagel, K.; Frye, R.; Jakob, R.; Rickert, M.; Stretz, P.

    1998-12-01

    The authors describe part of the current framework of the TRANSIMS traffic research project at the Los Alamos National Laboratory. It includes parallel implementations of a route planner and a microscopic traffic simulation model. They present performance figures and results of an offline load-balancing scheme used in one of the iterative re-planning runs required for dynamic route assignment.

  4. Distributed Computer Networks in Support of Complex Group Practices

    PubMed Central

    Wess, Bernard P.

    1978-01-01

    The economics of medical computer networks are presented in context with the patient care and administrative goals of medical networks. Design alternatives and network topologies are discussed with an emphasis on medical network design requirements in distributed data base design, telecommunications, satellite systems, and software engineering. The success of the medical computer networking technology is predicated on the ability of medical and data processing professionals to design comprehensive, efficient, and virtually impenetrable security systems to protect data bases, network access and services, and patient confidentiality.

  5. EST analysis pipeline: use of distributed computing resources.

    PubMed

    González, Francisco Javier; Vizcaíno, Juan Antonio

    2011-01-01

    This chapter describes how a pipeline for the analysis of expressed sequence tag (EST) data can be implemented, based on our previous experience generating ESTs from Trichoderma spp. We focus on key steps in the workflow, such as the processing of raw data from the sequencers, the clustering of ESTs, and the functional annotation of the sequences using BLAST, InterProScan, and BLAST2GO. Some of the steps require the use of intensive computing power. Since these resources are not available for small research groups or institutes without bioinformatics support, an alternative will be described: the use of distributed computing resources (local grids and Amazon EC2).

  6. Computed voltage distributions around solar electric propulsion spacecraft

    NASA Technical Reports Server (NTRS)

    Stevens, N. J.

    1979-01-01

    The NASA Charging Analyzer Program is used to conduct preliminary computations of the voltage distributions around such large spacecraft in geomagnetic substorm environments at geosynchronous altitudes. Both standard operating voltage (±150 volts on solar arrays) and direct-drive (+1200 volts on arrays) configurations are considered. Thruster-off simulations are computed for both operating-voltage configurations, while simulated thruster-on conditions are evaluated only for the direct-drive configuration. These simulated thruster operations appear to alleviate surface charging.

  7. Distributed-Memory Computing With the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA)

    NASA Technical Reports Server (NTRS)

    Riley, Christopher J.; Cheatwood, F. McNeil

    1997-01-01

    The Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA), a Navier-Stokes solver, has been modified for use in a parallel, distributed-memory environment using the Message-Passing Interface (MPI) standard. A standard domain decomposition strategy is used in which the computational domain is divided into subdomains with each subdomain assigned to a processor. Performance is examined on dedicated parallel machines and a network of desktop workstations. The effect of domain decomposition and frequency of boundary updates on performance and convergence is also examined for several realistic configurations and conditions typical of large-scale computational fluid dynamic analysis.
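
    The decomposition described here can be sketched in a few lines with mpi4py: each rank owns a subdomain plus ghost cells, and neighboring ranks exchange boundary values every few sweeps, mirroring the trade-off between update frequency and convergence studied in the paper. The relaxation step and the update interval below are illustrative stand-ins, not LAURA's actual scheme.

        # Run with: mpiexec -n 4 python laura_sketch.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        u = np.zeros(102)                       # 100 interior cells + 2 ghost cells
        u[1:-1] = rank                          # dummy subdomain data

        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        update_every = 2                        # boundary-update frequency
        for it in range(100):
            if it % update_every == 0:          # exchange ghost cells with neighbors
                comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
                comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
            u[1:-1] = 0.5 * (u[:-2] + u[2:])    # stand-in for one relaxation sweep

        print(rank, float(u[1]), float(u[-2]))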

  8. Numerical ray-tracing approach with laser intensity distribution for LIDAR signal power function computation

    NASA Astrophysics Data System (ADS)

    Shi, Guangyuan; Li, Song; Huang, Ke; Li, Zile; Zheng, Guoxing

    2016-10-01

    We have developed a new numerical ray-tracing approach for LIDAR signal power function computation, in which the light round-trip propagation is analyzed by geometrical optics and a simple experiment is employed to acquire the laser intensity distribution. It is relatively more accurate and flexible than previous methods. In particular, we discuss the relationship between the inclination angle and the dynamic range of the detector output signal in a biaxial LIDAR system. The results indicate that an appropriate negative angle can compress the signal dynamic range. This technique has been validated by comparison with real measurements.
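
    The effect being exploited is geometric: in a biaxial system the overlap between the laser beam and the receiver field of view grows with range, so tilting the beam can trade near-field signal against far-field signal. The sketch below evaluates a crude overlap model inside the standard single-scattering form P(R) ~ O(R) exp(-2*sigma*R)/R^2 and reports the output dynamic range for positive, zero, and negative inclination angles; the overlap formula, the sign convention, and every parameter value are illustrative assumptions, not the paper's ray-tracing model.

        import numpy as np

        def overlap(R, theta, d0=0.3, div=1e-3, fov=2e-3):
            # Crude 1-D overlap between the laser spot (radius R*div) and the
            # receiver field of view (radius R*fov) whose axes start d0 apart;
            # in this convention a negative theta tilts the beam away from the
            # receiver axis, suppressing the near-field overlap.
            sep = np.abs(d0 - R * theta)
            spot, view = R * div, R * fov
            return np.clip((view + spot - 2 * sep) / (2 * spot), 0.0, 1.0)

        def signal(R, theta, sigma=1e-4):
            # Single-scattering LIDAR form: P(R) ~ O(R) * exp(-2*sigma*R) / R^2
            return overlap(R, theta) * np.exp(-2 * sigma * R) / R**2

        R = np.linspace(50, 5000, 500)
        for theta in (1e-4, 0.0, -1e-4):
            P = signal(R, theta)
            P = P[P > 0]
            print("theta=%+.0e  dynamic range=%.0f" % (theta, P.max() / P.min()))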

  9. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  10. Parallel and distributed computation for fault-tolerant object recognition

    NASA Technical Reports Server (NTRS)

    Wechsler, Harry

    1988-01-01

    The distributed associative memory (DAM) model is suggested for distributed and fault-tolerant computation as it relates to object recognition tasks. The fault tolerance is with respect to geometrical distortions (scale and rotation), noisy inputs, occlusion/overlap, and memory faults. An experimental system was developed for fault-tolerant structure recognition which shows the feasibility of such an approach. The approach is further extended to the problem of multisensory data integration and applied successfully to the recognition of colored polyhedral objects.
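
    A correlation-matrix memory makes the fault-tolerance claim concrete: because every association is superposed across all weights, corrupting part of the probe or part of the memory degrades recall gracefully rather than catastrophically. The sketch below stores bipolar patterns as summed outer products and recalls one of them through both input noise and a simulated memory fault; the sizes and thresholding rule are illustrative choices, not the paper's exact DAM formulation.

        import numpy as np

        rng = np.random.default_rng(0)

        def store(patterns):
            # Distributed storage: every stored pair is superposed into one
            # weight matrix, so no single weight is critical to recall.
            n = patterns.shape[1]
            M = np.zeros((n, n))
            for p in patterns:
                M += np.outer(p, p)
            return M

        def recall(M, probe):
            return np.where(M @ probe >= 0, 1.0, -1.0)

        patterns = rng.choice([-1.0, 1.0], size=(5, 64))
        M = store(patterns)

        noisy = patterns[0].copy()
        noisy[:10] *= -1                     # corrupt 10 of 64 input bits
        M[:16, :] = 0                        # simulate a partial memory fault
        match = np.mean(recall(M, noisy) == patterns[0])
        print("recalled fraction:", match)   # graceful, not total, degradation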

  11. Dynamics of the return distribution in the Korean financial market

    NASA Astrophysics Data System (ADS)

    Yang, Jae-Suk; Chae, Seungbyung; Jung, Woo-Sung; Moon, Hie-Tae

    2006-03-01

    In this paper, we studied the dynamics of the log-return distribution of the Korea Composite Stock Price Index (KOSPI) from 1992 to 2004. Based on a microscopic spin model, we found that while the index during the late 1990s showed a power-law distribution, the distribution in the early 2000s was exponential. This change in distribution shape was caused by the duration and velocity, among other parameters, of the information that flowed into the market.
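
    Distinguishing the two regimes amounts to fitting competing tail models to the largest |returns| and comparing their likelihoods. The sketch below does this on synthetic heavy-tailed data (a stand-in for the KOSPI returns, which are not reproduced here), using the maximum-likelihood estimate for an exponential tail and the Hill estimator for a power-law tail; the 500-point tail cutoff is an arbitrary illustrative choice.

        import numpy as np

        rng = np.random.default_rng(1)
        returns = 0.01 * rng.standard_t(df=3, size=20000)   # heavy-tailed stand-in

        tail = np.sort(np.abs(returns))[-500:]   # the 500 largest |log-returns|
        xmin = tail[0]

        lam = 1.0 / np.mean(tail - xmin)                       # exponential MLE
        alpha = 1.0 + len(tail) / np.sum(np.log(tail / xmin))  # Hill estimator

        ll_exp = len(tail) * np.log(lam) - lam * np.sum(tail - xmin)
        ll_pow = (len(tail) * np.log((alpha - 1) / xmin)
                  - alpha * np.sum(np.log(tail / xmin)))
        print("alpha=%.2f  logL(power)=%.1f  logL(exp)=%.1f"
              % (alpha, ll_pow, ll_exp))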

  12. A new computational structure for real-time dynamics

    SciTech Connect

    Izaguirre, A.; Hashimoto, Minoru

    1992-08-01

    The authors present an efficient structure for the computation of robot dynamics in real time. The fundamental characteristic of this structure is the division of the computation into a high-priority synchronous task and low-priority background tasks, possibly sharing the resources of a conventional computing unit based on commercial microprocessors. The background tasks compute the inertial and gravitational coefficients as well as the forces due to the velocities of the joints. In each control sample period, the high-priority synchronous task computes the product of the inertial coefficients by the accelerations of the joints and performs the summation of the torques due to the velocities and gravitational forces. Kircanski et al. (1986) have shown that the bandwidth of the variation of joint angles and of their velocities is an order of magnitude less than the variation of joint accelerations. This result agrees with the experiments the authors have carried out using a PUMA 260 robot. Two main strategies contribute to reduce the computational burden associated with the evaluation of the dynamic equations. The first involves the use of efficient algorithms for the evaluation of the equations. The second is aimed at reducing the number of dynamic parameters by identifying beforehand the linear dependencies among these parameters, as well as carrying out a significance analysis of the parameters' contribution to the final joint torques. The actual code used to evaluate this dynamic model is entirely computer generated from experimental data, requiring no other manual intervention than performing a campaign of measurements.
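
    The division of labor described above can be mocked up directly: a slow background update refreshes the configuration-dependent terms, while every control tick performs only the cheap multiply-add. The rate ratio, the toy inertia and velocity/gravity terms, and the commanded acceleration below are all illustrative assumptions; the point is the structure of the loop, not the dynamics themselves.

        import numpy as np

        def inertia(q):
            # Stand-in for the configuration-dependent inertia matrix M(q).
            return np.diag(1.0 + 0.1 * np.cos(q))

        def velocity_gravity(q, qd):
            # Stand-in for Coriolis/centrifugal plus gravity torques.
            return 0.05 * qd + 9.8 * np.sin(q)

        n, dt, ratio = 2, 0.001, 10           # background task runs 10x slower
        q, qd = np.zeros(n), np.zeros(n)
        M, vg = inertia(q), velocity_gravity(q, qd)

        for step in range(1000):
            if step % ratio == 0:             # low-priority background task:
                M = inertia(q)                #   refresh inertial coefficients
                vg = velocity_gravity(q, qd)  #   and velocity/gravity torques
            qdd_cmd = np.sin(0.01 * step) * np.ones(n)  # commanded acceleration
            tau = M @ qdd_cmd + vg            # high-priority task: multiply-add only
            qd += qdd_cmd * dt                # propagate the toy joint state
            q += qd * dt

        print("final torque:", tau)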

  13. Neural Computations in a Dynamical System with Multiple Time Scales

    PubMed Central

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what computational benefit the brain gains from having such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions. PMID:27679569

  14. Integrating Xgrid into the HENP distributed computing model

    NASA Astrophysics Data System (ADS)

    Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.

    2008-07-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortless for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  15. Robot-Arm Dynamic Control by Computer

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Tarn, Tzyh J.; Chen, Yilong J.

    1987-01-01

    Feedforward and feedback schemes linearize responses to control inputs. The method for control of a robot arm is based on computed nonlinear feedback and state transformations to linearize the system and decouple the robot end-effector motions along each of the Cartesian axes, augmented with an optimal scheme for correction of errors in the workspace. The major new feature of the control method is that the optimal error-correction loop operates directly at the task level, not at the joint-servocontrol level.
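
    The core of such computed-nonlinear-feedback (computed-torque) control fits in a few lines: cancel the modeled nonlinear terms, then use the inertia matrix to impose decoupled, critically damped second-order error dynamics. The toy two-joint model and gain values below are hypothetical; this is a sketch of the principle rather than the paper's task-level scheme.

        import numpy as np

        def M(q):                # toy diagonal inertia matrix
            return np.diag(2.0 + np.cos(q))

        def n_term(q, qd):       # toy Coriolis + gravity torques
            return 0.1 * qd + 9.8 * np.sin(q)

        Kp, Kd, dt = 25.0, 10.0, 0.001          # Kd^2 = 4*Kp: critical damping
        q, qd = np.array([0.5, -0.2]), np.zeros(2)
        q_des = np.array([1.0, 0.3])

        for _ in range(5000):
            e, edot = q_des - q, -qd
            # Computed nonlinear feedback: cancel n(q, qd) and use M(q) to
            # impose decoupled linear error dynamics e'' + Kd e' + Kp e = 0.
            tau = M(q) @ (Kp * e + Kd * edot) + n_term(q, qd)
            qdd = np.linalg.solve(M(q), tau - n_term(q, qd))   # plant response
            qd += qdd * dt
            q += qd * dt

        print("final q:", q, " target:", q_des)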

  16. Temporal Distributional Limit Theorems for Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Dolgopyat, Dmitry; Sarig, Omri

    2017-02-01

    Suppose {T^t} is a Borel flow on a complete separable metric space X, f : X → ℝ is Borel, and x ∈ X. A temporal distributional limit theorem is a scaling limit for the distributions of the random variables X_T := ∫_0^t f(T^s x) ds, where t is chosen uniformly at random from [0, T], x is fixed, and T → ∞. We discuss such laws for irrational rotations, Anosov flows, and horocycle flows.

  17. Semiquantum key distribution with secure delegated quantum computation.

    PubMed

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-27

    Semiquantum key distribution allows a quantum party to share a random key with a "classical" party who can only prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution.

  18. Semiquantum key distribution with secure delegated quantum computation

    PubMed Central

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-01

    Semiquantum key distribution allows a quantum party to share a random key with a “classical” party who can only prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution. PMID:26813384

  19. A biological solution to a fundamental distributed computing problem.

    PubMed

    Afek, Yehuda; Alon, Noga; Barad, Omer; Hornstein, Eran; Barkai, Naama; Bar-Joseph, Ziv

    2011-01-14

    Computational and biological systems are often distributed so that processors (cells) jointly solve a task, without any of them receiving all inputs or observing all outputs. Maximal independent set (MIS) selection is a fundamental distributed computing procedure that seeks to elect a set of local leaders in a network. A variant of this problem is solved during the development of the fly's nervous system, when sensory organ precursor (SOP) cells are chosen. By studying SOP selection, we derived a fast algorithm for MIS selection that combines two attractive features. First, processors do not need to know their degree; second, it has an optimal message complexity while only using one-bit messages. Our findings suggest that simple and efficient algorithms can be developed on the basis of biologically derived insights.
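
    In the spirit of the SOP-derived algorithm, the sketch below runs synchronous rounds in which each still-active node "fires" with a probability that ramps up over time; a node that fires while none of its active neighbors does joins the MIS, and its neighbors withdraw. Nodes never consult their degree, and the only information exchanged is the one-bit fire/no-fire signal. The probability schedule and constants are illustrative, not the paper's exact protocol.

        import random

        def sop_style_mis(adj, rounds=60):
            # adj: node -> set of neighbors. Nodes never use their degree;
            # the only "message" is the one-bit "I fired" signal.
            active, mis, p = set(adj), set(), 0.05
            for _ in range(rounds):
                p = min(0.9, 1.2 * p)                  # firing probability ramps up
                fired = {v for v in active if random.random() < p}
                for v in sorted(fired):
                    if v in active and not (fired & adj[v] & active):
                        mis.add(v)                     # fired alone: local leader
                        active -= {v} | adj[v]         # neighbors withdraw
                if not active:
                    break
            return mis

        adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
        print(sop_style_mis(adj))    # an independent set, e.g. {0, 3} or {1, 3}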

  20. Optimal eigenvalue computation on distributed-memory MIMD multiprocessors

    SciTech Connect

    Crivelli, S.; Jessup, E. R.

    1992-10-01

    Simon proves that bisection is not the optimal method for computing an eigenvalue on a single vector processor. In this paper, we show that his analysis does not extend in a straightforward way to the computation of an eigenvalue on a distributed-memory MIMD multiprocessor. In particular, we show how the optimal number of sections (and processors) to use for multisection depends on variables such as the matrix size and certain parameters inherent to the machine. We also show that parallel multisection outperforms the variant of parallel bisection proposed by Swarztrauber for this problem on a distributed-memory MIMD multiprocessor. We present the results of experiments on the 64-processor Intel iPSC/2 hypercube and the 512-processor Intel Touchstone Delta mesh multiprocessor.
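
    Multisection generalizes bisection by evaluating the Sturm count at several interior points per step, one per processor, so each step shrinks the search interval by the number of sections rather than by half. A serial sketch for a symmetric tridiagonal matrix follows; the Sturm-count recurrence is standard, while the section count and the test matrix are illustrative choices.

        import numpy as np

        def sturm_count(d, e, x):
            # Number of eigenvalues of the symmetric tridiagonal matrix with
            # diagonal d and off-diagonal e lying below x (LDL^T sign count).
            count, t = 0, d[0] - x
            if t < 0:
                count += 1
            for i in range(1, len(d)):
                if t == 0.0:
                    t = -1e-300          # standard guard against zero pivots
                t = d[i] - x - e[i - 1] ** 2 / t
                if t < 0:
                    count += 1
            return count

        def multisection(d, e, lo, hi, k, sections=8, tol=1e-10):
            # Locate the k-th smallest eigenvalue. The interior counts in each
            # step are independent, i.e. one per processor; sections=2 reduces
            # to plain bisection.
            while hi - lo > tol:
                xs = np.linspace(lo, hi, sections + 1)
                counts = [sturm_count(d, e, x) for x in xs[1:-1]]
                for x, c in zip(xs[1:-1], counts):
                    if c >= k:           # the target eigenvalue lies left of x
                        hi = x
                        break
                    lo = x
            return 0.5 * (lo + hi)

        n = 100
        d, e = np.full(n, 2.0), np.full(n - 1, -1.0)     # 1-D Laplacian matrix
        lam1 = multisection(d, e, 0.0, 4.0, k=1)
        print(lam1, 2 - 2 * np.cos(np.pi / (n + 1)))     # matches the exact value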

  1. Osmosis : a molecular dynamics computer simulation study

    NASA Astrophysics Data System (ADS)

    Lion, Thomas

    Osmosis is a phenomenon of critical importance in a variety of processes ranging from the transport of ions across cell membranes and the regulation of blood salt levels by the kidneys to the desalination of water and the production of clean energy using potential osmotic power plants. However, despite its importance and over one hundred years of study, there is ongoing confusion concerning the nature of the microscopic dynamics of the solvent particles in their transfer across the membrane. In this thesis the microscopic dynamical processes underlying osmotic pressure and concentration gradients are investigated using molecular dynamics (MD) simulations. I first present a new derivation for the local pressure that can be used for determining osmotic pressure gradients. Using this result, the steady-state osmotic pressure is studied in a minimal model for an osmotic system, and the steady-state density gradients are explained using a simple mechanistic hopping model for the solvent particles. The simulation setup is then modified, allowing us to explore the timescales involved in the relaxation dynamics of the system in the period preceding the steady state. Further consideration is also given to the relative roles of diffusive and non-diffusive solvent transport in this period. Finally, in a novel modification to the classic osmosis experiment, the solute particles are driven out of equilibrium by the input of energy. The effect of this modification on the osmotic pressure and the osmotic flow is studied, and we find that active solute particles can cause reverse osmosis to occur. The possibility of defining a new "osmotic effective temperature" is also considered and compared to the results of diffusive and kinetic temperatures.

  2. Dynamics of Bottlebrush Networks: A Computational Study

    NASA Astrophysics Data System (ADS)

    Dobrynin, Andrey; Cao, Zhen; Sheiko, Sergei

    We study the dynamics of deformation of bottlebrush networks using molecular dynamics simulations and theoretical calculations. Analysis of our simulation results shows that the dynamics of bottlebrush network deformation can be described by a Rouse model for polydisperse networks with an effective Rouse time of the bottlebrush network strand, τ_R = τ_0 N_s² (N_sc + 1), where N_s is the number-average degree of polymerization of the bottlebrush backbone strands between crosslinks, N_sc is the degree of polymerization of the side chains, and τ_0 is a characteristic monomeric relaxation time. At time scales t smaller than the Rouse time, t < τ_R, the time-dependent network shear modulus decays with time as G(t) ~ ρ k_B T (τ_0/t)^(1/2), where ρ is the monomer number density. However, at time scales t larger than the Rouse time of the bottlebrush strands between crosslinks, the network response is purely elastic with shear modulus G(t) = G_0, where G_0 is the equilibrium shear modulus at small deformation. The stress evolution in the bottlebrush networks can be described by a universal function of t/τ_R. NSF DMR-1409710.

  3. Distributed Cognition (DCOG): Foundations for a Computational Associative Memory Model

    DTIC Science & Technology

    2006-08-01

    This isolates the skateboard as the one that doesn't belong. Certain automatic, attention-shifting mechanisms will be required in our model. …

  4. Scalable Quantum Networks for Distributed Computing and Sensing

    DTIC Science & Technology

    2016-04-01

    We identified two barriers to the implementation of large-scale photonic quantum networks. First, as scalability requires …

  5. Dynamic Stall Computations Using a Zonal Navier-Stokes Model

    DTIC Science & Technology

    1988-06-01

    The zonal Navier-Stokes model is used to calculate the flow field about a NACA 0012 airfoil oscillating in pitch. Surface pressure distributions and integrated lift, pitching-moment, and drag coefficients versus angle of attack are compared to existing experimental data for four cases and to existing computational …

  6. Computationally intensive econometrics using a distributed matrix-programming language.

    PubMed

    Doornik, Jurgen A; Hendry, David F; Shephard, Neil

    2002-06-15

    This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling.

  7. Multi-VO support in IHEP's distributed computing environment

    NASA Astrophysics Data System (ADS)

    Yan, T.; Suo, B.; Zhao, X. H.; Zhang, X. M.; Ma, Z. T.; Yan, X. F.; Lin, T.; Deng, Z. Y.; Li, W. D.; Belov, S.; Pelevanyuk, I.; Zhemchugov, A.; Cai, H.

    2015-12-01

    Inspired by the success of BESDIRAC, the distributed computing environment based on DIRAC for the BESIII experiment, several other experiments operated by the Institute of High Energy Physics (IHEP), such as the Circular Electron Positron Collider (CEPC), the Jiangmen Underground Neutrino Observatory (JUNO), the Large High Altitude Air Shower Observatory (LHAASO) and the Hard X-ray Modulation Telescope (HXMT), are willing to use DIRAC to integrate the geographically distributed computing resources made available by their collaborations. In order to minimize manpower and hardware cost, we extended the BESDIRAC platform to support a multi-VO scenario, instead of setting up a self-contained distributed computing environment for each VO. This makes DIRAC a service for the community of those experiments. To support multiple VOs, the system architecture of BESDIRAC was adjusted for scalability. The VOMS and DIRAC servers are reconfigured to manage users and groups belonging to several VOs. A lightweight storage resource manager, StoRM, is employed as the central SE to integrate local and grid data. A frontend system is designed for users' massive job splitting, submission and management, with plugins to support new VOs. A monitoring and accounting system is also considered, to ease system administration and VO-related resource usage accounting.

  8. Dynamical localization simulated on a few-qubit quantum computer

    SciTech Connect

    Benenti, Giuliano; Montangero, Simone; Casati, Giulio; Shepelyansky, Dima L.

    2003-05-01

    We show that a quantum computer operating with a small number of qubits can simulate the dynamical localization of classical chaos in a system described by the quantum sawtooth map model. The dynamics of the system is computed efficiently up to a time t ≥ ℓ, and then the localization length ℓ can be obtained with accuracy ν by means of order 1/ν² computer runs, followed by coarse-grained projective measurements on the computational basis. We also show that in the presence of static imperfections, a reliable computation of the localization length is possible without error correction up to an imperfection threshold which drops polynomially with the number of qubits.

  9. Exponential rise of dynamical complexity in quantum computing through projections.

    PubMed

    Burgarth, Daniel Klaus; Facchi, Paolo; Giovannetti, Vittorio; Nakazato, Hiromichi; Pascazio, Saverio; Yuasa, Kazuya

    2014-10-10

    The ability of quantum systems to host exponentially complex dynamics has the potential to revolutionize science and technology. Therefore, much effort has been devoted to developing protocols for computation, communication and metrology which exploit this scaling, despite formidable technical difficulties. Here we show that the mere frequent observation of a small part of a quantum system can turn its dynamics from a very simple one into an exponentially complex one, capable of universal quantum computation. After discussing examples, we go on to show that this effect is generally to be expected: almost any quantum dynamics becomes universal once 'observed' as outlined above. Conversely, we show that any complex quantum dynamics can be 'purified' into a simpler one in larger dimensions. We conclude by demonstrating that even local noise can lead to an exponentially complex dynamics.

  10. Exponential rise of dynamical complexity in quantum computing through projections

    PubMed Central

    Burgarth, Daniel Klaus; Facchi, Paolo; Giovannetti, Vittorio; Nakazato, Hiromichi; Pascazio, Saverio; Yuasa, Kazuya

    2014-01-01

    The ability of quantum systems to host exponentially complex dynamics has the potential to revolutionize science and technology. Therefore, much effort has been devoted to developing protocols for computation, communication and metrology which exploit this scaling, despite formidable technical difficulties. Here we show that the mere frequent observation of a small part of a quantum system can turn its dynamics from a very simple one into an exponentially complex one, capable of universal quantum computation. After discussing examples, we go on to show that this effect is generally to be expected: almost any quantum dynamics becomes universal once ‘observed’ as outlined above. Conversely, we show that any complex quantum dynamics can be ‘purified’ into a simpler one in larger dimensions. We conclude by demonstrating that even local noise can lead to an exponentially complex dynamics. PMID:25300692

  11. Distributing Data from Desktop to Hand-Held Computers

    NASA Technical Reports Server (NTRS)

    Elmore, Jason L.

    2005-01-01

    A system of server and client software formats and redistributes data from commercially available desktop to commercially available hand-held computers via both wired and wireless networks. This software is an inexpensive means of enabling engineers and technicians to gain access to current sensor data while working in locations in which such data would otherwise be inaccessible. The sensor data are first gathered by a data-acquisition server computer, then transmitted via a wired network to a data-distribution computer that executes the server portion of the present software. Data in all sensor channels -- both raw sensor outputs in millivolt units and results of conversion to engineering units -- are made available for distribution. Selected subsets of the data are transmitted to each hand-held computer via the wired and then a wireless network. The selection of the subsets and the choice of the sequences and formats for displaying the data is made by means of a user interface generated by the client portion of the software. The data displayed on the screens of hand-held units can be updated at rates from 1 to

  12. Lightweight distributed computing for intraoperative real-time image guidance

    NASA Astrophysics Data System (ADS)

    Suwelack, Stefan; Katic, Darko; Wagner, Simon; Spengler, Patrick; Bodenstedt, Sebastian; Röhl, Sebastian; Dillmann, Rüdiger; Speidel, Stefanie

    2012-02-01

    In order to provide real-time intraoperative guidance, computer assisted surgery (CAS) systems often rely on computationally expensive algorithms. The real-time constraint is especially challenging if several components such as intraoperative image processing, soft tissue registration or context aware visualization are combined in a single system. In this paper, we present a lightweight approach to distribute the workload over several workstations based on the OpenIGTLink protocol. We use XML-based message passing for remote procedure calls and native types for transferring data such as images, meshes or point coordinates. Two different, but typical scenarios are considered in order to evaluate the performance of the new system. First, we analyze a real-time soft tissue registration algorithm based on a finite element (FE) model. Here, we use the proposed approach to distribute the computational workload between a primary workstation that handles sensor data processing and visualization and a dedicated workstation that runs the real-time FE algorithm. We show that the additional overhead that is introduced by the technique is small compared to the total execution time. Furthermore, the approach is used to speed up a context aware augmented reality based navigation system for dental implant surgery. In this scenario, the additional delay for running the computationally expensive reasoning server on a separate workstation is less than a millisecond. The results show that the presented approach is a promising strategy to speed up real-time CAS systems.
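
    Although the paper's system is built on the OpenIGTLink protocol with XML message passing, the same client/server split can be sketched with Python's standard-library XML-RPC as a stand-in: the primary workstation keeps sensor handling and visualization local and delegates the expensive step to a second machine as a remote procedure call. The function name, port, and payload below are hypothetical placeholders, not the paper's interface.

        # Stdlib XML-RPC as a hypothetical stand-in for the OpenIGTLink link.
        import threading
        from xmlrpc.server import SimpleXMLRPCServer
        from xmlrpc.client import ServerProxy

        def deform_mesh(points, force):
            # Placeholder for the expensive remote step (e.g. the FE update
            # or the reasoning server); here it just shifts x-coordinates.
            return [[x + force, y, z] for x, y, z in points]

        # "Dedicated workstation": exposes the heavy computation as an RPC.
        server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
        server.register_function(deform_mesh)
        threading.Thread(target=server.serve_forever, daemon=True).start()

        # "Primary workstation": sensor processing and visualization stay
        # local; only the expensive call crosses the network.
        worker = ServerProxy("http://localhost:8000")
        mesh = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
        print(worker.deform_mesh(mesh, 0.1))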

  13. Algorithm-dependent fault tolerance for distributed computing

    SciTech Connect

    Hough, P. D.; Goldsby, M. E.; Walsh, E. J.

    2000-02-01

    Large-scale distributed systems assembled from commodity parts, like CPlant, have become common tools in the distributed computing world. Because of their size and diversity of parts, these systems are prone to failures. Applications that are being run on these systems have not been equipped to efficiently deal with failures, nor is there vendor support for fault tolerance. Thus, when a failure occurs, the application crashes. While most programmers make use of checkpoints to allow for restarting of their applications, this is cumbersome and incurs substantial overhead. In many cases, there are more efficient and more elegant ways in which to address failures. The goal of this project is to develop a software architecture for the detection of and recovery from faults in a cluster computing environment. The detection phase relies on the latest techniques developed in the fault tolerance community. Recovery is being addressed in an application-dependent manner, thus allowing the programmer to take advantage of algorithmic characteristics to reduce the overhead of fault tolerance. This architecture will allow large-scale applications to be more robust in high-performance computing environments that are comprised of clusters of commodity computers such as CPlant and SMP clusters.

  14. Simulation of emission tomography using grid middleware for distributed computing.

    PubMed

    Thomason, M G; Longton, R F; Gregor, J; Smith, G T; Hutson, R K

    2004-09-01

    SimSET is Monte Carlo simulation software for emission tomography. This paper describes a simple but effective scheme for parallel execution of SimSET using NetSolve, a client-server system for distributed computation. NetSolve (version 1.4.1) is "grid middleware" which enables a user (the client) to run specific computations remotely and simultaneously on a grid of networked computers (the servers). Since the servers do not have to be identical machines, computation may take place in a heterogeneous environment. To take advantage of diversity in machines and their workloads, a client-side scheduler was implemented for the Monte Carlo simulation. The scheduler partitions the total decay events by taking into account the inherent compute-speeds and recent average workloads, i.e., the scheduler assigns more decay events to processors expected to give faster service and fewer decay events to those expected to give slower service. When compute-speeds and sustained workloads are taken into account, the speed-up is essentially linear in the number of equivalent "maximum-service" processors. One modification in the SimSET code (version 2.6.2.3) was made to ensure that the total number of decay events specified by the user is maintained in the distributed simulation. No other modifications in the standard SimSET code were made. Each processor runs complete SimSET code for its assignment of decay events, independently of others running simultaneously. Empirical results are reported for simulation of a clinical-quality lung perfusion study.
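
    The scheduler's partitioning rule is the interesting piece: decay events are split in proportion to an effective service rate derived from inherent compute speed and recent workload, and the rounding must preserve the user-specified total. A sketch with largest-remainder rounding follows; the rate formula is an illustrative guess at "faster expected service gets more events", not SimSET's or NetSolve's exact weighting.

        def assign_events(total_events, speeds, loads):
            # Effective service rate: faster CPUs with lighter recent loads
            # are assigned more decay events; largest-remainder rounding
            # keeps the user-specified total exactly.
            rates = [s / (1.0 + l) for s, l in zip(speeds, loads)]
            shares = [total_events * r / sum(rates) for r in rates]
            counts = [int(s) for s in shares]
            leftover = total_events - sum(counts)
            by_remainder = sorted(range(len(shares)),
                                  key=lambda i: shares[i] - counts[i],
                                  reverse=True)
            for i in by_remainder[:leftover]:
                counts[i] += 1
            return counts

        # Three heterogeneous servers: (relative speed, recent average workload)
        counts = assign_events(1_000_000, speeds=[1.0, 2.5, 1.8],
                               loads=[0.2, 0.9, 0.0])
        print(counts, sum(counts))   # the total of decay events is preserved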

  15. Some rotorcraft applications of computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Mccroskey, W. J.

    1988-01-01

    The growing application of computational aerodynamics to nonlinear rotorcraft problems is outlined, with particular emphasis on the development of new methods based on the Euler and thin-layer Navier-Stokes equations. Rotor airfoil characteristics can now be calculated accurately over a wide range of transonic flow conditions. However, unsteady 3-D viscous codes remain in the research stage, and a numerical simulation of the complete flow field about a helicopter in forward flight is not now feasible. Nevertheless, impressive progress is being made in preparation for future supercomputers that will enable meaningful calculations to be made for arbitrary rotorcraft configurations.

  16. Perspective: Computer simulations of long time dynamics

    SciTech Connect

    Elber, Ron

    2016-02-14

    Atomically detailed computer simulations of complex molecular events attracted the imagination of many researchers in the field as providing comprehensive information on chemical, biological, and physical processes. However, one of the greatest limitations of these simulations is that of time scales. The physical time scales accessible to straightforward simulations are too short to address many interesting and important molecular events. In the last decade significant advances were made in different directions (theory, software, and hardware) that significantly expand the capabilities and accuracies of these techniques. This perspective describes and critically examines some of these advances.

  17. Perspective: Computer simulations of long time dynamics

    PubMed Central

    Elber, Ron

    2016-01-01

    Atomically detailed computer simulations of complex molecular events attracted the imagination of many researchers in the field as providing comprehensive information on chemical, biological, and physical processes. However, one of the greatest limitations of these simulations is that of time scales. The physical time scales accessible to straightforward simulations are too short to address many interesting and important molecular events. In the last decade significant advances were made in different directions (theory, software, and hardware) that significantly expand the capabilities and accuracies of these techniques. This perspective describes and critically examines some of these advances. PMID:26874473

  18. Computational fluid dynamics combustion analysis evaluation

    NASA Technical Reports Server (NTRS)

    Kim, Y. M.; Shang, H. M.; Chen, C. P.; Ziebarth, J. P.

    1992-01-01

    This study involves the development of numerical modelling in spray combustion. These modelling efforts are mainly motivated by the need to improve the computational efficiency of the stochastic particle tracking method as well as to incorporate the physical submodels of turbulence, combustion, vaporization, and dense spray effects. The present mathematical formulation and numerical methodologies can be cast in any time-marching pressure correction methodology (PCM), such as the FDNS code and MAST code. A sequence of validation cases involving steady burning sprays and transient evaporating sprays will be included.

  19. A fractal approach to dynamic inference and distribution analysis.

    PubMed

    van Rooij, Marieke M J W; Nash, Bertha A; Rajaraman, Srinivasan; Holden, John G

    2013-01-01

    Event-distributions inform scientists about the variability and dispersion of repeated measurements. This dispersion can be understood from a complex systems perspective, and quantified in terms of fractal geometry. The key premise is that a distribution's shape reveals information about the governing dynamics of the system that gave rise to the distribution. Two categories of characteristic dynamics are distinguished: additive systems governed by component-dominant dynamics and multiplicative or interdependent systems governed by interaction-dominant dynamics. A logic by which systems governed by interaction-dominant dynamics are expected to yield mixtures of lognormal and inverse power-law samples is discussed. These mixtures are described by a so-called cocktail model of response times derived from human cognitive performances. The overarching goals of this article are twofold: First, to offer readers an introduction to this theoretical perspective and second, to offer an overview of the related statistical methods.

  20. Computing interface motion in compressible gas dynamics

    NASA Technical Reports Server (NTRS)

    Mulder, W.; Osher, S.; Sethian, James A.

    1992-01-01

    An analysis is conducted of the coupling of Osher and Sethian's (1988) 'Hamilton-Jacobi' level set formulation of the equations of motion for propagating interfaces to a system of conservation laws for compressible gas dynamics, giving attention to both the conservative and nonconservative differencing of the level set function. The capabilities of the method are illustrated in view of the results of numerical convergence studies of the compressible Rayleigh-Taylor and Kelvin-Helmholtz instabilities for air-air and air-helium boundaries.

  1. Multi-threaded, discrete event simulation of distributed computing systems

    NASA Astrophysics Data System (ADS)

    Legrand, Iosif; MONARC Collaboration

    2001-10-01

    The LHC experiments have envisaged computing systems of unprecedented complexity, for which it is necessary to provide a realistic description and modeling of data access patterns, and of many jobs running concurrently on large-scale distributed systems and exchanging very large amounts of data. A process-oriented approach to discrete event simulation is well suited to describe various activities running concurrently, as well as the stochastic arrival patterns specific to this type of simulation. Threaded objects or "Active Objects" can provide a natural way to map the specific behaviour of distributed data processing into the simulation program. The simulation tool developed within MONARC is based on Java (TM) technology, which provides adequate tools for developing a flexible and distributed process-oriented simulation. Proper graphics tools, and ways to analyze data interactively, are essential in any simulation project. The design elements, status and features of the MONARC simulation tool are presented. The program allows realistic modeling of complex data access patterns by multiple concurrent users in large scale computing systems in a wide range of possible architectures, from centralized to highly distributed. A comparison between queuing theory and realistic client-server measurements is also presented.
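
    The essence of process-oriented discrete event simulation is that each concurrent activity is written as a sequential process that repeatedly yields "how long until my next event", while a scheduler advances a single simulated clock through an event heap. In the minimal sketch below, Python generators stand in for the threaded "Active Objects"; the arrival processes and horizon are illustrative, not MONARC's model.

        import heapq, random

        def simulate(processes, until=100.0):
            # Event list keyed on simulated time; ties broken by process id
            # so the heap never compares generator objects.
            clock, queue, events = 0.0, [], 0
            for pid, proc in enumerate(processes):
                heapq.heappush(queue, (next(proc), pid, proc))
            while queue:
                t, pid, proc = heapq.heappop(queue)
                if t > until:
                    break
                clock, events = t, events + 1
                try:
                    heapq.heappush(queue, (clock + next(proc), pid, proc))
                except StopIteration:
                    pass                     # this process has finished
            return clock, events

        def arrivals(rate):
            # One concurrent activity: a stochastic stream of job arrivals.
            while True:
                yield random.expovariate(rate)

        clock, events = simulate([arrivals(0.5), arrivals(1.0)])
        print("stopped at t=%.1f after %d events" % (clock, events))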

  2. User's Manual for Computer Program ROTOR. [to calculate tilt-rotor aircraft dynamic characteristics

    NASA Technical Reports Server (NTRS)

    Yasue, M.

    1974-01-01

    A detailed description of a computer program to calculate tilt-rotor aircraft dynamic characteristics is presented. This program consists of two parts: (1) the natural frequencies and corresponding mode shapes of the rotor blade and wing are developed from structural data (mass distribution and stiffness distribution); and (2) the frequency response (to gust and blade pitch control inputs) and eigenvalues of the tilt-rotor dynamic system, based on the natural frequencies and mode shapes, are derived. Sample problems are included to assist the user.

  3. Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised

    NASA Technical Reports Server (NTRS)

    Yee, Helen C.; Sweby, Peter K.

    1997-01-01

    The global nonlinear behavior of finite discretizations for constant time steps and fixed or adaptive grid spacings is studied using tools from dynamical systems theory. Detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in the understanding of long time behavior of numerical integration and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that was difficult to detect without extensive grid and temporal refinement studies and some knowledge from dynamical systems theory. Studies revealed the various possible dangers of misinterpreting numerical simulation of realistic complex flows that are constrained by available computing power. In large scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both computation and interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in the understanding of the global nonlinear behavior of numerical algorithms and to aid the identification of the sources of numerical uncertainties in CFD.

  4. Oxygen and seizure dynamics: II. Computational modeling

    PubMed Central

    Wei, Yina; Ullah, Ghanim; Ingram, Justin

    2014-01-01

    Electrophysiological recordings show intense neuronal firing during epileptic seizures leading to enhanced energy consumption. However, the relationship between oxygen metabolism and seizure patterns has not been well studied. Recent studies have developed fast and quantitative techniques to measure oxygen microdomain concentration during seizure events. In this article, we develop a biophysical model that accounts for these experimental observations. The model is an extension of the Hodgkin-Huxley formalism and includes the neuronal microenvironment dynamics of sodium, potassium, and oxygen concentrations. Our model accounts for metabolic energy consumption during and following seizure events. We can further account for the experimental observation that hypoxia can induce seizures, with seizures occurring only within a narrow range of tissue oxygen pressure. We also reproduce the interplay between excitatory and inhibitory neurons seen in experiments, accounting for the different oxygen levels observed during seizures in excitatory vs. inhibitory cell layers. Our findings offer a more comprehensive understanding of the complex interrelationship among seizures, ion dynamics, and energy metabolism. PMID:24671540

  5. Distributed Computation Resources for Earth System Grid Federation (ESGF)

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Doutriaux, C.; Williams, D. N.

    2014-12-01

    The Intergovernmental Panel on Climate Change (IPCC), prompted by the United Nations General Assembly, published a series of papers in its Fifth Assessment Report (AR5) on processes, impacts, and mitigations of climate change in 2013. The science used in these reports was generated by an international group of domain experts, who studied various scenarios of climate change through the use of highly complex computer models to simulate the Earth's climate over long periods of time. The resulting total of approximately five petabytes of data is stored in a distributed data grid known as the Earth System Grid Federation (ESGF). Through the ESGF, consumers of the data can find and download data with limited capabilities for server-side processing. The Sixth Assessment Report (AR6) is already in the planning stages and is estimated to create as much as two orders of magnitude more data than the AR5 distributed archive. It is clear that the data analysis capabilities currently in use will be inadequate to allow for the necessary science to be done with AR6 data: the data will just be too big. A major paradigm shift is required, from downloading data to local systems to moving the analysis routines to the data and performing these computations on distributed platforms. In preparation for this need, the ESGF has started a Compute Working Team (CWT) to create solutions that allow users to perform distributed, high-performance data analytics on the AR6 data. The team will be designing and developing a general Application Programming Interface (API) to enable highly parallel, server-side processing throughout the ESGF data grid. This API will be integrated with multiple analysis and visualization tools, such as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT), netCDF Operator (NCO), and others. This presentation will provide an update on the ESGF CWT's overall approach toward enabling the necessary storage-proximal computational capabilities.

  6. Dynamics of strongly coupled spatially distributed logistic equations with delay

    NASA Astrophysics Data System (ADS)

    Kashchenko, I. S.; Kashchenko, S. A.

    2015-04-01

    The dynamics of a system of two logistic delay equations with spatially distributed coupling is studied. The coupling coefficient is assumed to be sufficiently large. Special nonlinear systems of parabolic equations are constructed such that the behavior of their solutions is determined in the first approximation by the dynamical properties of the original system.

  7. Entry Times Distribution for Dynamical Balls on Metric Spaces

    NASA Astrophysics Data System (ADS)

    Haydn, N.; Yang, F.

    2017-03-01

    We show that the entry and return times for dynamical balls (Bowen balls) are exponentially distributed for systems that have an α-mixing invariant measure with certain regularities. We also show that systems modeled by Young towers have exponential entry time distributions for dynamical balls. Finally, we apply the results to conformal maps and expanding maps on the interval.

  8. Challenges to computing plasma thruster dynamics

    SciTech Connect

    Smith, G.A. )

    1992-01-01

    This paper describes computational challenges in modeling the high thrust and specific impulse (I_sp) expected from the proposed ion-compressed antimatter nuclear (ICAN) propulsion system. This concept uses antiprotons to induce fission reactions that jump-start a microfission/fusion process in a target compressed by low-energy ion beams. The ICAN system could readily provide the high energy density required for interplanetary space missions of short duration. In conventional rocket design, thrust is obtained by expelling a propellant under high pressure through a nozzle. A larger I_sp can be achieved by operating the system at a higher temperature. Full ionization of the propellant at high temperature introduces new and challenging questions in the design of plasma thrusters.

  9. Experience with automatic, dynamic load balancing and adaptive finite element computation

    SciTech Connect

    Wheat, S.R.; Devine, K.D.; Maccabe, A.B.

    1993-10-01

    Distributed memory, Massively Parallel (MP), MIMD technology has enabled the development of applications requiring computational resources previously unobtainable. Structural mechanics and fluid dynamics applications, for example, are often solved by finite element methods (FEMs) requiring millions of degrees of freedom to accurately simulate physical phenomena. Adaptive methods, which automatically refine or coarsen meshes and vary the order of accuracy of the numerical solution, offer greater robustness and computational efficiency than traditional FEMs by reducing the amount of computation required away from physical structures such as shock waves and boundary layers. On MP computers, FEMs frequently result in distributed processor load imbalances. To overcome load imbalance, many MP FEMs use static load balancing as a preprocessor to the finite element calculation. Adaptive methods complicate the load imbalance problem since the work per element is not uniform across the solution domain and changes as the computation proceeds. Therefore, dynamic load balancing is required to maintain global load balance. We describe a dynamic, fine-grained, element-based data migration system that maintains global load balance and is effective in the presence of changing work loads. Global load balance is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method utilizes an automatic element management system library to which a programmer integrates the application's computational description. The library's flexibility supports a large class of finite element and finite difference based applications.
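
    Local balancing within overlapping neighborhoods behaves like a diffusion process on the processor graph: each processor trades a fraction of any load difference with its neighbors, and because neighborhoods overlap, repeated local exchanges drive the global distribution toward uniform. A first-order diffusion sketch is shown below; the topology, damping factor, and sweep count are illustrative, not the paper's migration system.

        import numpy as np

        def balance(loads, adj, alpha=0.5, sweeps=40):
            # First-order diffusion: every processor migrates a fraction of
            # the load difference to each neighbor; overlapping neighborhoods
            # let purely local exchanges reach a global balance.
            loads = np.asarray(loads, dtype=float)
            for _ in range(sweeps):
                flow = np.zeros_like(loads)
                for i, nbrs in adj.items():
                    for j in nbrs:
                        flow[i] += alpha * (loads[j] - loads[i]) / len(nbrs)
                loads += flow
            return loads

        # Four processors in a ring; element counts after adaptive refinement
        adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
        print(balance([120, 40, 10, 30], adj))   # tends toward the mean (50 each)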

  10. A modular system for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    McCarthy, D. R.; Foutch, D. W.; Shurtleff, G. E.

    This paper describes the Modular System for Computational Fluid Dynamics (MOSYS), a software facility for the construction and execution of arbitrary solution procedures on multizone, structured body-fitted grids. It focuses on the structure and capabilities of MOSYS and the philosophy underlying its design. The system offers different levels of capability depending on the objectives of the user. It enables the applications engineer to quickly apply a variety of methods to geometrically complex problems. The methods developer can implement new algorithms in a simple form and immediately apply them to problems of both theoretical and practical interest. And for the code builder it constitutes a toolkit for fast construction of CFD codes tailored to various purposes. These capabilities are illustrated through applications to a particularly complex problem encountered in aircraft propulsion systems, namely, the analysis of a landing aircraft in reverse thrust.

  11. SD-CAS: Spin Dynamics by Computer Algebra System.

    PubMed

    Filip, Xenia; Filip, Claudiu

    2010-11-01

    A computer algebra tool for describing the Liouville-space quantum evolution of nuclear 1/2-spins is introduced and implemented within a computational framework named Spin Dynamics by Computer Algebra System (SD-CAS). A distinctive feature compared with numerical and previous computer algebra approaches to solving spin dynamics problems is that no matrix representation for spin operators is used in SD-CAS, which gives the performed computations a fully symbolic character. Spin correlations are stored in SD-CAS as four-entry nested lists whose size increases linearly with the number of spins in the system, and they are easily mapped into analytical expressions in terms of spin operator products. For the so-defined SD-CAS spin correlations, a set of specialized functions and procedures is introduced that is essential for implementing basic spin algebra operations, such as spin operator products, commutators, and scalar products. They provide results in an abstract algebraic form; specific procedures to quantitatively evaluate such symbolic expressions with respect to the involved spin interaction parameters and experimental conditions are also discussed. Although the main focus of the present work is on laying the foundation for spin dynamics symbolic computation in NMR based on a non-matrix formalism, practical aspects are also considered throughout the theoretical development process. In particular, specific SD-CAS routines have been implemented using the YACAS computer algebra package (http://yacas.sourceforge.net), and their functionality was demonstrated on a few illustrative examples.

  12. Use of computational fluid dynamics in the design of dynamic contrast enhanced imaging phantoms

    NASA Astrophysics Data System (ADS)

    Hariharan, Prasanna; Freed, Melanie; Myers, Matthew R.

    2013-09-01

    Phantoms for dynamic contrast enhanced (DCE) imaging modalities such as DCE computed tomography (DCE-CT) and DCE magnetic resonance imaging (DCE-MRI) are valuable tools for evaluating and comparing imaging systems. It is important for the contrast-agent distribution within the phantom to possess a time dependence that replicates a curve observed clinically, known as the ‘tumor-enhancement curve’. It is also important for the concentration field within the lesion to be as uniform as possible. This study demonstrates how computational fluid dynamics (CFD) can be applied to achieve these goals within design constraints. The distribution of the contrast agent within the simulated phantoms was investigated in relation to the influence of three factors of the phantom design. First, the interaction between the inlets and the uniformity of the contrast agent within the phantom was modeled. Second, pumps were programmed using a variety of schemes and the resultant dynamic uptake curves were compared to tumor-enhancement curves obtained from clinical data. Third, the effectiveness of pulsing the inlet flow rate to produce faster equilibration of the contrast-agent distribution was quantified. The models employed a spherical lesion and design constraints (lesion diameter, inlet-tube size and orientation, contrast-agent flow rates and fluid properties) taken from a recently published DCE-MRI phantom study. For DCE-MRI in breast cancer detection, where the target tumor-enhancement curve varies on the scale of hundreds of seconds, optimizing the number of inlet tubes and their orientation was found to be adequate for attaining concentration uniformity and reproducing the target tumor-enhancement curve. For DCE-CT in liver tumor detection, where the tumor-enhancement curve varies on a scale of tens of seconds, the use of an iterated inlet condition (programmed into the pump) enabled the phantom to reproduce the target tumor-enhancement curve within a few per cent beyond about

  13. Automated Static Culture System Cell Module Mixing Protocol and Computational Fluid Dynamics Analysis

    NASA Technical Reports Server (NTRS)

    Kleis, Stanley J.; Truong, Tuan; Goodwin, Thomas J.

    2004-01-01

    This report documents a fluid dynamic analysis of the proposed Automated Static Culture System (ASCS) cell module mixing protocol. The report consists of a review of some basic fluid dynamics principles appropriate for the mixing of a patch of high oxygen content media into the surrounding media, which is initially depleted of oxygen, followed by a computational fluid dynamics (CFD) study of this process for the proposed protocol over a range of the governing parameters. The time histories of oxygen concentration distributions and the mechanical shear levels generated are used to characterize the mixing process for different parameter values.

  14. Some Contributions to Computational Fluid Dynamics.

    NASA Astrophysics Data System (ADS)

    Miller, Harvey Philip

    A three-dimensional, time-dependent free surface model has been developed for predicting the velocity field and surface height variations in a tidal bay. An explicit finite difference numerical solution is obtained by transforming the vertical coordinate in the governing model equations. The ocean-bay interface open boundary condition is incorporated without approximation into the hydrodynamic model by employing a staggered grid Richardson lattice. The momentum equations ignore horizontal diffusion, which is justifiably small for the South Biscayne Bay. Another three-dimensional, time-dependent free surface model for the South Biscayne Bay is used for application to suspended particle transport. A unique mass-conserving numerical model is used for solving the concentration equation by an explicit finite difference scheme. The effects of constant particle settling velocity and bottom bed deposition rate are compared and discussed. For convection dominated coastal flows, the flux-corrected transport (FCT) method is compared with other low-dispersive, explicit finite difference schemes for the two-dimensional linear advection of 2-D Gaussian initial temperature distributions of various half-widths. The flow field is specified a priori as consisting of a slowly varying, oscillating, uniform x-component of velocity and a constant y-component of velocity. This type of flow field is typically encountered in near-coastal waters. The artificial numerical effects of diffusion (dissipation), dispersion, and anisotropy are discussed. Finally, two-dimensional linear advection solutions of transported fluid temperature are explored by implementing high resolution, high order explicit finite difference schemes. A comparison of the flux-corrected transport (FCT) methods is made with other total variation diminishing (TVD) schemes for the 2-D Gaussian initial temperature distributions of various half-widths. Further clipping of the sharply peaked Gaussian distribution in 2-D

  15. Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations

    NASA Technical Reports Server (NTRS)

    Chrisochoides, Nikos

    1995-01-01

    We present a multithreaded model for the dynamic load balancing of numerical, adaptive computations required for the solution of partial differential equations (PDEs) on multiprocessors. Multithreading is used as a means of exploiting concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDEs. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data-parallel adaptive PDE computations. Unfortunately, multithreading does not always reduce program complexity, often makes code re-use difficult, and increases software complexity.

  16. Numerical simulation of landfill aeration using computational fluid dynamics.

    PubMed

    Fytanidis, Dimitrios K; Voudrias, Evangelos A

    2014-04-01

    The present study is an application of Computational Fluid Dynamics (CFD) to the numerical simulation of landfill aeration systems. Specifically, the CFD algorithms provided by the commercial solver ANSYS Fluent 14.0, combined with an in-house source code developed to modify the main solver, were used. The unsaturated multiphase flow of air and liquid phases and the biochemical processes for aerobic biodegradation of the organic fraction of municipal solid waste were simulated taking into consideration their temporal and spatial evolution, as well as complex effects, such as oxygen mass transfer across phases, unsaturated flow effects (capillary suction and unsaturated hydraulic conductivity), temperature variations due to biochemical processes, and environmental correction factors for the applied kinetics (Monod and first-order kinetics). The results of the developed model were compared with experimental data from the literature. Pilot-scale simulations and a sensitivity analysis were also carried out. Moreover, simulation results for a hypothetical single aeration well were presented, and its zone of influence was estimated using both the pressure and oxygen distributions. Finally, a case study of a hypothetical landfill aeration system was simulated. Both a static scenario (steadily positive or negative relative pressure with time) and a hybrid scenario (a square-wave pattern of positive and negative relative pressure with time) for the aeration wells were examined. The results showed that the present model is capable of simulating landfill aeration, and the obtained results were in good agreement with corresponding previous experimental and numerical investigations.

  17. Computational Fluid Dynamics Analysis of Canadian Supercritical Water Reactor (SCWR)

    NASA Astrophysics Data System (ADS)

    Movassat, Mohammad; Bailey, Joanne; Yetisir, Metin

    2015-11-01

    A Computational Fluid Dynamics (CFD) simulation was performed on the proposed design for the Canadian SuperCritical Water Reactor (SCWR). The proposed Canadian SCWR is a 1200 MW(e) supercritical light-water cooled nuclear reactor with pressurized fuel channels. The reactor concept uses an inlet plenum to which all fuel channels are attached and an outlet header nested inside the inlet plenum. The coolant enters the inlet plenum at 350 C and exits the outlet header at 625 C. The operating pressure is approximately 26 MPa. The high-pressure and high-temperature outlet conditions result in a higher electric conversion efficiency compared to existing light water reactors. In this work, CFD simulations were performed to model fluid flow and heat transfer in the inlet plenum, outlet header, and various parts of the fuel assembly. The ANSYS Fluent solver was used for the simulations. Results showed that the mass flow rate distribution in the fuel channels varies radially, and the inner channels achieve higher outlet temperatures. At the outlet header, zones with rotational flow formed as the fluid from 336 fuel channels merged. Results also suggested that insulation of the outlet header should be considered to reduce the thermal stresses caused by the large temperature gradients.

  18. Fluid dynamics parallel computer development at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Townsend, James C.; Zang, Thomas A.; Dwoyer, Douglas L.

    1987-01-01

    To accomplish more detailed simulations of highly complex flows, such as the transition to turbulence, fluid dynamics research requires computers much more powerful than any available today. Only parallel processing on multiple-processor computers offers hope for achieving the required effective speeds. Looking ahead to the use of these machines, the fluid dynamicist faces three issues: algorithm development for near-term parallel computers, architecture development for future increases in computer power, and assessment of the possible advantages of special-purpose designs. Two projects at NASA Langley address these issues. Software development and algorithm exploration are being done on the FLEX/32 Parallel Processing Research Computer. New architecture features are being explored in the special-purpose hardware design of the Navier-Stokes Computer. These projects are complementary and are producing promising results.

  19. Population-based learning of load balancing policies for a distributed computer system

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; Wah, Benjamin W.

    1993-01-01

    Effective load-balancing policies use dynamic resource information to schedule tasks in a distributed computer system. We present a novel method for automatically learning such policies. At each site in our system, we use a comparator neural network to predict the relative speedup of an incoming task using only the resource-utilization patterns obtained prior to the task's arrival. Outputs of these comparator networks are broadcast periodically over the distributed system, and the resource schedulers at each site use these values to determine the best site for executing an incoming task. The delays incurred in propagating workload information and tasks from one site to another, as well as the dynamic and unpredictable nature of workloads in multiprogrammed multiprocessors, may cause the workload pattern at the time of execution to differ from patterns prevailing at the times of load-index computation and decision making. Our load-balancing policy accommodates this uncertainty by using certain tunable parameters. We present a population-based machine-learning algorithm that adjusts these parameters in order to achieve high average speedups with respect to local execution. Our results show that our load-balancing policy, when combined with the comparator neural network for workload characterization, is effective in exploiting idle resources in a distributed computer system.
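
    As a rough illustration of the scheduling flow described above, the sketch below scores candidate sites with a stand-in comparator network and routes an incoming task to the best one. The one-hidden-layer architecture, the random weights, and the four-element utilization vectors are illustrative assumptions, not the authors' trained network.

```python
import numpy as np

def predicted_speedup(utilization, W, w_out):
    # Hypothetical comparator network: maps a site's recent
    # resource-utilization pattern to a predicted relative speedup
    # for an incoming task (weights here are untrained placeholders).
    hidden = np.tanh(utilization @ W)
    return float(hidden @ w_out)

rng = np.random.default_rng(0)
W, w_out = rng.normal(size=(4, 8)), rng.normal(size=8)

# Each site periodically broadcasts its prediction; the resource
# scheduler sends the incoming task to the site with the highest
# predicted speedup relative to local execution.
sites = {name: rng.random(4) for name in ["site-A", "site-B", "site-C"]}
best = max(sites, key=lambda name: predicted_speedup(sites[name], W, w_out))
print("dispatch task to:", best)
```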

  20. Improving CMS data transfers among its distributed computing facilities

    NASA Astrophysics Data System (ADS)

    Flix, J.; Magini, N.; Sartirana, A.

    2011-12-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.

  1. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key words: autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis.

  2. Parallel matrix transpose algorithms on distributed memory concurrent computers

    SciTech Connect

    Choi, J.; Walker, D.W.; Dongarra, J.J. |

    1993-10-01

    This paper describes parallel matrix transpose algorithms on distributed memory concurrent processors. It is assumed that the matrix is distributed over a P x Q processor template with a block scattered data distribution. P, Q, and the block size can be arbitrary, so the algorithms have wide applicability. The communication schemes of the algorithms are determined by the greatest common divisor (GCD) of P and Q. If P and Q are relatively prime, the matrix transpose algorithm involves complete exchange communication. If P and Q are not relatively prime, processors are divided into GCD groups and the communication operations are overlapped for different groups of processors. Processors transpose GCD wrapped diagonal blocks simultaneously, and the matrix can be transposed with LCM/GCD steps, where LCM is the least common multiple of P and Q. The algorithms make use of non-blocking, point-to-point communication between processors. The use of nonblocking communication allows a processor to overlap the messages that it sends to different processors, thereby avoiding unnecessary synchronization. Combined with the matrix multiplication routine, C = A{center_dot}B, the algorithms are used to compute parallel multiplications of transposed matrices, C = A{sup T}{center_dot}B{sup T}, in the PUMMA package. Details of the parallel implementation of the algorithms are given, and results are presented for runs on the Intel Touchstone Delta computer.
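
    The communication structure is easy to reproduce. The following sketch (a minimal Python illustration, not the PUMMA code) computes the GCD grouping and the LCM/GCD step count the abstract refers to for a given processor template.

```python
from math import gcd

def transpose_schedule(P, Q):
    """For a P x Q template with block-scattered distribution, return the
    number of GCD processor groups and the LCM/GCD communication steps."""
    g = gcd(P, Q)
    lcm = P * Q // g
    return g, lcm // g

# A 4 x 6 template has GCD = 2, so processors split into 2 groups and the
# transpose takes LCM/GCD = 12/2 = 6 steps; a relatively prime 3 x 4
# template (GCD = 1) degenerates to a complete exchange in 12 steps.
for P, Q in [(4, 6), (3, 4)]:
    groups, steps = transpose_schedule(P, Q)
    print(f"{P} x {Q}: {groups} group(s), {steps} step(s)")
```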

  3. Amoeba-inspired nanoarchitectonic computing: solving intractable computational problems using nanoscale photoexcitation transfer dynamics.

    PubMed

    Aono, Masashi; Naruse, Makoto; Kim, Song-Ju; Wakabayashi, Masamitsu; Hori, Hirokazu; Ohtsu, Motoichi; Hara, Masahiko

    2013-06-18

    Biologically inspired computing devices and architectures are expected to overcome the limitations of conventional technologies in terms of solving computationally demanding problems, adapting to complex environments, reducing energy consumption, and so on. We previously demonstrated that a primitive single-celled amoeba (a plasmodial slime mold), which exhibits complex spatiotemporal oscillatory dynamics and sophisticated computing capabilities, can be used to search for a solution to a very hard combinatorial optimization problem. We successfully extracted the essential spatiotemporal dynamics by which the amoeba solves the problem. This amoeba-inspired computing paradigm can be implemented by various physical systems that exhibit suitable spatiotemporal dynamics resembling the amoeba's problem-solving process. In this Article, we demonstrate that photoexcitation transfer phenomena in certain quantum nanostructures mediated by optical near-field interactions generate the amoebalike spatiotemporal dynamics and can be used to solve the satisfiability problem (SAT), which is the problem of judging whether a given logical proposition (a Boolean formula) is self-consistent. SAT is related to diverse application problems in artificial intelligence, information security, and bioinformatics and is a crucially important nondeterministic polynomial time (NP)-complete problem, which is believed to become intractable for conventional digital computers when the problem size increases. We show that our amoeba-inspired computing paradigm dramatically outperforms a conventional stochastic search method. These results indicate the potential for developing highly versatile nanoarchitectonic computers that realize powerful solution searching with low energy consumption.

  4. Computational Fluid Dynamics Simulation of Fluidized Bed Polymerization Reactors

    SciTech Connect

    Fan, Rong

    2006-01-01

    Fluidized bed (FB) reactors are widely used in the polymerization industry due to their superior heat- and mass-transfer characteristics. Nevertheless, problems associated with local overheating of polymer particles and excessive agglomeration leading to defluidization of FB reactors still persist and limit the range of operating temperatures that can be safely achieved in plant-scale reactors. Much work has been done on modeling FB polymerization reactors, and quite a few models are available in the open literature, such as the well-mixed model developed by McAuley, Talbot, and Harris (1994), the constant bubble size model (Choi and Ray, 1985), and the heterogeneous three-phase model (Fernandes and Lona, 2002). Most of these works focus on kinetic aspects, but from an industrial viewpoint, the behavior of FB reactors should be modeled by considering the particle and fluid dynamics in the reactor. Computational fluid dynamics (CFD) is a powerful tool for understanding the effect of fluid dynamics on chemical reactor performance. For single-phase flows, CFD models for turbulent reacting flows are now well understood and routinely applied to investigate complex flows with detailed chemistry. For multiphase flows, the state of the art in CFD models is changing rapidly, and it is now possible to predict reasonably well the flow characteristics of gas-solid FB reactors with mono-dispersed, non-cohesive solids. This thesis is organized into seven chapters. In Chapter 2, an overview of fluidized bed polymerization reactors is given, and a simplified two-site kinetic mechanism is discussed. Some basic theories used in our work are given in detail in Chapter 3. First, the governing equations and other constitutive equations for the multi-fluid model are summarized, and the kinetic theory for describing the solid stress tensor is discussed. The detailed derivation of DQMOM for the population balance equation is given as the second section. In this section

  5. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  6. Qualification of a computer program for drill string dynamics

    SciTech Connect

    Stone, C.M.; Carne, T.G.; Caskey, B.C.

    1985-01-01

    A four point plan for the qualification of the GEODYN drill string dynamics computer program is described. The qualification plan investigates both modal response and transient response of a short drill string subjected to simulated cutting loads applied through a polycrystalline diamond compact (PDC) bit. The experimentally based qualification shows that the analytical techniques included in Phase 1 GEODYN correctly simulate the dynamic response of the bit-drill string system. 6 refs., 8 figs.

  7. A Scalable Distribution Network Risk Evaluation Framework via Symbolic Dynamics

    PubMed Central

    Yuan, Kai; Liu, Jian; Liu, Kaipei; Tan, Tianyuan

    2015-01-01

    Background Evaluations of electric power distribution network risks must address the problems of incomplete information and changing dynamics. A risk evaluation framework should be adaptable to a specific situation and an evolving understanding of risk. Methods This study investigates the use of symbolic dynamics to abstract raw data. After introducing symbolic dynamics operators, Kolmogorov-Sinai entropy and Kullback-Leibler relative entropy are used to quantitatively evaluate relationships between risk sub-factors and main factors. For layered risk indicators, where the factors are categorized into four main factors (device, structure, load, and special operation), a merging algorithm that uses the operators to calculate the risk factors is discussed. Finally, an example from the Sanya Power Company is given to demonstrate the feasibility of the proposed method. Conclusion Distribution networks are exposed to their environment and can be affected by many external factors. The topology and the operating mode of a distribution network are dynamic, so faults and their consequences are probabilistic. PMID:25789859
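
    As a minimal illustration of the entropy machinery mentioned above, the sketch below computes the Kullback-Leibler relative entropy between two symbol distributions; the four-symbol alphabet and the probabilities are hypothetical stand-ins for symbolized risk-factor data.

```python
import numpy as np

def kl_relative_entropy(p, q, eps=1e-12):
    # D(p || q): how strongly distribution p of a symbolized sub-factor
    # diverges from distribution q of a main factor's symbols.
    p = np.asarray(p, dtype=float); q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical 4-symbol alphabet (e.g., discretized risk levels)
sub_factor = [0.50, 0.30, 0.15, 0.05]
main_factor = [0.40, 0.35, 0.20, 0.05]
print(kl_relative_entropy(sub_factor, main_factor))
```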

  8. Overset grid applications on distributed memory MIMD computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana; Weeratunga, Sisira

    1994-01-01

    Analysis of modern aerospace vehicles requires the computation of flowfields about complex three dimensional geometries composed of regions with varying spatial resolution requirements. Overset grid methods allow the use of proven structured grid flow solvers to address the twin issues of geometrical complexity and the resolution variation by decomposing the complex physical domain into a collection of overlapping subdomains. This flexibility is accompanied by the need for irregular intergrid boundary communication among the overlapping component grids. This study investigates a strategy for implementing such a static overset grid implicit flow solver on distributed memory, MIMD computers; i.e., the 128 node Intel iPSC/860 and the 208 node Intel Paragon. Performance data for two composite grid configurations characteristic of those encountered in present day aerodynamic analysis are also presented.

  9. Distributed computing for membrane-based modeling of action potential propagation.

    PubMed

    Porras, D; Rogers, J M; Smith, W M; Pollard, A E

    2000-08-01

    Action potential propagation simulations with physiologic membrane currents and macroscopic tissue dimensions are computationally expensive. We, therefore, analyzed distributed computing schemes to reduce execution time in workstation clusters by parallelizing solutions with message passing. Four schemes were considered in two-dimensional monodomain simulations with the Beeler-Reuter membrane equations. Parallel speedups measured with each scheme were compared to theoretical speedups, recognizing the relationship between speedup and code portions that executed serially. A data decomposition scheme based on total ionic current provided the best performance. Analysis of communication latencies in that scheme led to a load-balancing algorithm in which measured speedups at 89 +/- 2% and 75 +/- 8% of theoretical speedups were achieved in homogeneous and heterogeneous clusters of workstations. Speedups in this scheme with the Luo-Rudy dynamic membrane equations exceeded 3.0 with eight distributed workstations. Cluster speedups were comparable to those measured during parallel execution on a shared memory machine.
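
    The relationship between speedup and the serially executing code portions that the authors use as a yardstick is Amdahl's law; the sketch below evaluates it for an assumed serial fraction (the 5% figure is illustrative, not taken from the paper).

```python
def amdahl_speedup(serial_fraction, workers):
    # Theoretical bound on parallel speedup when a fixed fraction of the
    # code executes serially and the rest parallelizes perfectly.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

# With eight workstations and an assumed 5% serial portion, the bound is
# about 5.9; measured speedups are then reported as a fraction of this.
for n in (2, 4, 8):
    print(n, "workstations ->", round(amdahl_speedup(0.05, n), 2))
```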

  10. KNET - DISTRIBUTED COMPUTING AND/OR DATA TRANSFER PROGRAM

    NASA Technical Reports Server (NTRS)

    Hui, J.

    1994-01-01

    KNET facilitates distributed computing between a UNIX-compatible local host and a remote host which may or may not be UNIX compatible. It is capable of automatic remote login. That is, it performs on the user's behalf the chore of handling host selection, user name, and password to the designated host. Once the login has been successfully completed, the user may interactively communicate with the remote host. Data output from the remote host may be directed to the local screen, to a local file, and/or to a local process. Conversely, data input from the keyboard, a local file, or a local process may be directed to the remote host. KNET takes advantage of the multitasking and terminal mode control features of the UNIX operating system. A parent process is used as the upper layer for interfacing with the local user. A child process is used as the lower layer for interfacing with the remote host computer, and optionally one or more child processes can be used for the remote data output. Output may be directed to the screen and/or to the local processes under the control of a data pipe switch. In order for KNET to operate, the local and remote hosts must observe a common communications protocol. KNET is written in ANSI standard C-language for computers running UNIX. It has been successfully implemented on several Sun series computers and a DECstation 3100 and used to run programs remotely on VAX VMS and UNIX based computers. It requires 100K of RAM under SunOS and 120K of RAM under DEC RISC ULTRIX. An electronic copy of the documentation is provided on the distribution medium. The standard distribution medium for KNET is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. KNET was developed in 1991 and is a copyrighted work with all copyright vested in NASA. UNIX is a registered trademark of AT&T Bell Laboratories. Sun and SunOS are trademarks of Sun Microsystems, Inc. DECstation, VAX, VMS, and

  11. Job monitoring on DIRAC for Belle II distributed computing

    NASA Astrophysics Data System (ADS)

    Kato, Yuji; Hayasaka, Kiyoshi; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo

    2015-12-01

    We developed a monitoring system for Belle II distributed computing, which consists of active and passive methods. In this paper we describe the passive monitoring system, where information stored in the DIRAC database is processed and visualized. We divide the DIRAC workload management flow into steps and store characteristic variables which indicate issues. These variables are chosen carefully based on our experiences, then visualized. As a result, we are able to effectively detect issues. Finally, we discuss the future development for automating log analysis, notification of issues, and disabling problematic sites.

  12. Performance Evaluation of Three Distributed Computing Environments for Scientific Applications

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod; Weeratunga, Sisira; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    We present performance results for three distributed computing environments using the three simulated CFD applications in the NAS Parallel Benchmark suite. These environments are the DCF cluster, the LACE cluster, and an Intel iPSC/860 machine. The DCF is a prototypic cluster of loosely coupled SGI R3000 machines connected by Ethernet. The LACE cluster is a tightly coupled cluster of 32 IBM RS6000/560 machines connected by Ethernet as well as by either FDDI or an IBM Allnode switch. Results of several parallel algorithms for the three simulated applications are presented and analyzed based on the interplay between the communication requirements of an algorithm and the characteristics of the communication network of a distributed system.

  13. Morphing-Based Shape Optimization in Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Rousseau, Yannick; Men'Shov, Igor; Nakamura, Yoshiaki

    In this paper, a Morphing-based Shape Optimization (MbSO) technique is presented for solving Optimum-Shape Design (OSD) problems in Computational Fluid Dynamics (CFD). The proposed method couples Free-Form Deformation (FFD) and Evolutionary Computation, and, as its name suggests, relies on the morphing of shape and computational domain, rather than direct shape parameterization. Advantages of the FFD approach compared to traditional parameterization are first discussed. Then, examples of shape and grid deformations by FFD are presented. Finally, the MbSO approach is illustrated and applied through an example: the design of an airfoil for a future Mars exploration airplane.
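
    To make the FFD idea concrete, here is a minimal 2-D sketch: shape points embedded in a control lattice are displaced through Bernstein-polynomial blending of the control points, so moving a few control points smoothly morphs the whole shape. The 3 x 3 lattice and the perturbation are illustrative assumptions; this is not the authors' implementation.

```python
import numpy as np
from math import comb

def bernstein(i, n, t):
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def ffd_2d(points, ctrl):
    """Map points with lattice coordinates (s, t) in [0,1]^2 through a
    Bezier lattice `ctrl` of shape (l+1, m+1, 2)."""
    l, m = ctrl.shape[0] - 1, ctrl.shape[1] - 1
    out = np.zeros_like(points, dtype=float)
    for k, (s, t) in enumerate(points):
        for i in range(l + 1):
            for j in range(m + 1):
                out[k] += bernstein(i, l, s) * bernstein(j, m, t) * ctrl[i, j]
    return out

# Control lattice initially coincident with the unit square (identity map);
# nudging one control point deforms every embedded shape point smoothly.
ctrl = np.stack(np.meshgrid(np.linspace(0, 1, 3),
                            np.linspace(0, 1, 3), indexing="ij"), axis=-1)
ctrl[1, 2] += [0.0, 0.2]            # raise the middle-top control point
airfoil_pts = np.array([[0.5, 0.9], [0.25, 0.5]])
print(ffd_2d(airfoil_pts, ctrl))
```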

  14. Portable lamp with dynamically controlled lighting distribution

    DOEpatents

    Siminovitch, Michael J.; Page, Erik R.

    2001-01-01

    A double lamp table or floor lamp lighting system has a pair of compact fluorescent lamps (CFLs) arranged vertically with a reflective septum in between. By selectively turning on one or both of the CFLs, down lighting, up lighting, or both up and down lighting is produced. The control system can also vary the light intensity from each CFL. The reflective septum insures that almost all the light produced by each lamp will be directed into the desired light distribution pattern which is selected and easily changed by the user. Planar compact fluorescent lamps, e.g. circular CFLs, particularly oriented horizontally, are preferable. CFLs provide energy efficiency. The lighting system may be designed for the home, hospitality, office or other environments.

  15. A model of cerebellar computations for dynamical state estimation.

    PubMed

    Paulin, M G; Hoffman, L F; Assad, C

    2001-11-01

    The cerebellum is a neural structure that is essential for agility in vertebrate movements. Its contribution to motor control appears to be due to a fundamental role in dynamical state estimation, which also underlies its role in various non-motor tasks. Single spikes in vestibular sensory neurons carry information about head state. We show how computations for optimal dynamical state estimation may be accomplished when signals are encoded in spikes. This provides a novel way to design dynamical state estimators, and a novel way to interpret the structure and function of the cerebellum.

  16. A model of cerebellar computations for dynamical state estimation

    NASA Technical Reports Server (NTRS)

    Paulin, M. G.; Hoffman, L. F.; Assad, C.

    2001-01-01

    The cerebellum is a neural structure that is essential for agility in vertebrate movements. Its contribution to motor control appears to be due to a fundamental role in dynamical state estimation, which also underlies its role in various non-motor tasks. Single spikes in vestibular sensory neurons carry information about head state. We show how computations for optimal dynamical state estimation may be accomplished when signals are encoded in spikes. This provides a novel way to design dynamical state estimators, and a novel way to interpret the structure and function of the cerebellum.

  17. Atomistic protein folding simulations on the submillisecond time scale using worldwide distributed computing.

    PubMed

    Pande, Vijay S; Baker, Ian; Chapman, Jarrod; Elmer, Sidney P; Khaliq, Siraj; Larson, Stefan M; Rhee, Young Min; Shirts, Michael R; Snow, Christopher D; Sorin, Eric J; Zagrovic, Bojan

    2003-01-01

    Atomistic simulations of protein folding have the potential to be a great complement to experimental studies, but have been severely limited by the time scales accessible with current computer hardware and algorithms. By employing a worldwide distributed computing network of tens of thousands of PCs and algorithms designed to efficiently utilize this new many-processor, highly heterogeneous, loosely coupled distributed computing paradigm, we have been able to simulate hundreds of microseconds of atomistic molecular dynamics. This has allowed us to directly simulate the folding mechanism and to accurately predict the folding rate of several fast-folding proteins and polymers, including a nonbiological helix, polypeptide alpha-helices, a beta-hairpin, and a three-helix bundle protein from the villin headpiece. Our results demonstrate that one can reach the time scales needed to simulate fast folding using distributed computing, and that potential sets used to describe interatomic interactions are sufficiently accurate to reach the folded state with experimentally validated rates, at least for small proteins.

  18. Textbook Multigrid Efficiency for Computational Fluid Dynamics Simulations

    NASA Technical Reports Server (NTRS)

    Brandt, Achi; Thomas, James L.; Diskin, Boris

    2001-01-01

    Considerable progress over the past thirty years has been made in the development of large-scale computational fluid dynamics (CFD) solvers for the Euler and Navier-Stokes equations. Computations are used routinely to design the cruise shapes of transport aircraft through complex-geometry simulations involving the solution of 25-100 million equations; in this arena the number of wind-tunnel tests for a new design has been substantially reduced. However, simulations of the entire flight envelope of the vehicle, including maximum lift, buffet onset, flutter, and control effectiveness, have not been as successful in eliminating the reliance on wind-tunnel testing. These simulations involve unsteady flows with more separation and stronger shock waves than at cruise. The main reasons limiting further inroads of CFD into the design process are: (1) the reliability of turbulence models; and (2) the time and expense of the numerical simulation. Because of the prohibitive resolution requirements of direct simulations at high Reynolds numbers, transition and turbulence modeling is expected to remain an issue for the near term. The focus of this paper addresses the latter problem by attempting to attain optimal efficiencies in solving the governing equations. Typically, current CFD codes based on the use of multigrid acceleration techniques and multistage Runge-Kutta time-stepping schemes are able to converge lift and drag values for cruise configurations within approximately 1000 residual evaluations. An optimally convergent method is defined as having textbook multigrid efficiency (TME), meaning the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in the discretized system of equations (residual equations). In this paper, a distributed relaxation approach to achieving TME for Reynolds-averaged Navier-Stokes (RANS) equations is discussed along with the foundations that form the

  19. System design and algorithmic development for computational steering in distributed environments

    SciTech Connect

    Wu, Qishi; Zhu, Mengxia; Gu, Yi; Rao, Nageswara S

    2010-03-01

    Supporting visualization pipelines over wide-area networks is critical to enabling large-scale scientific applications that require visual feedback to interactively steer online computations. We propose a remote computational steering system that employs analytical models to estimate the cost of computing and communication components and optimizes the overall system performance in distributed environments with heterogeneous resources. We formulate and categorize the visualization pipeline configuration problems for maximum frame rate into three classes according to the constraints on node reuse or resource sharing, namely no, contiguous, and arbitrary reuse. We prove all three problems to be NP-complete and present heuristic approaches based on a dynamic programming strategy. The superior performance of the proposed solution is demonstrated with extensive simulation results in comparison with existing algorithms and is further evidenced by experimental results collected on a prototype implementation deployed over the Internet.

  20. Remote Visualization and Remote Collaboration On Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Watson, Val; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    A new technology has been developed for remote visualization that provides remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as fluid dynamics simulations or measurements). Based on this technology, some World Wide Web sites on the Internet are providing fluid dynamics data for educational or testing purposes. This technology is also being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics and wind tunnel testing. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit).

  1. Computational fluid dynamics applications to improve crop production systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Computational fluid dynamics (CFD), numerical analysis and simulation tools of fluid flow processes have emerged from the development stage and become nowadays a robust design tool. It is widely used to study various transport phenomena which involve fluid flow, heat and mass transfer, providing det...

  2. Current capabilities and future directions in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    1986-01-01

    A summary of significant findings is given, followed by specific recommendations for future directions of emphasis for computational fluid dynamics development. The discussion is organized into three application areas: external aerodynamics, hypersonics, and propulsion - and followed by a turbulence modeling synopsis.

  3. Computational Issues in the Control of Quantum Dynamics

    NASA Astrophysics Data System (ADS)

    Rabitz, Herschel

    2003-03-01

    The control of quantum phenomena embraces a variety of applications, with the most common implementation involving tailored laser pulses to steer the dynamics of a quantum system towards some specified observable outcome. The theoretical and computational aspects of this subject are intimately tied to the growing experimental capabilities, especially the ability to perform massive numbers of high throughput experiments. Computational studies in this context have special roles. Especially important is the use of computational techniques to develop new control algorithms, which ultimately would be implemented in the laboratory to guide the control of complex quantum systems. Beyond control alone, many of the same concepts can be exploited for the performance of experiments optimally tuned for inversion, to extract Hamiltonian information. The latter scenario poses very high demands on the efficiency of solving the quantum dynamics equations to extract the information content from the experimental data. The concept of exploiting a computational quantum control tool kit will be introduced as a means for addressing many of these challenges.

  4. Computational Fluid Dynamics Demonstration of Rigid Bodies in Motion

    NASA Technical Reports Server (NTRS)

    Camarena, Ernesto; Vu, Bruce T.

    2011-01-01

    The Design Analysis Branch (NE-M1) at the Kennedy Space Center has not had the ability to accurately couple Rigid Body Dynamics (RBD) and Computational Fluid Dynamics (CFD). OVERFLOW-D is a flow solver that has been developed by NASA to have the capability to analyze and simulate dynamic motions with up to six Degrees of Freedom (6-DOF). Two simulations were prepared over the course of the internship to demonstrate 6-DOF motion of rigid bodies under aerodynamic loading. The geometries in the simulations were based on a conceptual Space Launch System (SLS). The first simulation that was prepared and computed was the motion of a Solid Rocket Booster (SRB) as it separates from its core stage. To reduce computational time during the development of the simulation, only half of the physical domain with respect to the symmetry plane was simulated. Then a full solution was prepared and computed. The second simulation was a model of the SLS as it departs from a launch pad under a 20-knot crosswind. This simulation was reduced to Two Dimensions (2D) to reduce both preparation and computation time. By allowing 2-DOF for translations and 1-DOF for rotation, the simulation predicted unrealistic rotation. The simulation was then constrained to allow only translations.

  5. Performance evaluation of communication software systems for distributed computing

    NASA Astrophysics Data System (ADS)

    Fatoohi, R. A.

    1997-09-01

    In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI and ATM. The performance results for three communication software systems are presented, analysed and compared. These systems are: BSD socket programming interface, IONA's Orbix, an implementation of the CORBA specification and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.

  6. Performance Evaluation of Communication Software Systems for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed and compared. These systems are: BSD socket programming interface, IONA's Orbix, an implementation of the CORBA specification, and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.

  7. Dynamical quorum sensing and clustering dynamics in a population of spatially distributed active rotators

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Hidetsugu; Maeyama, Satomi

    2013-02-01

    A model of clustering dynamics is proposed for a population of spatially distributed active rotators. A transition from excitable to oscillatory dynamics is induced by an increase in the local density of active rotators, which is interpreted as dynamical quorum sensing. In the oscillatory regime, phase waves propagate without decay, which generates an effectively long-range interaction in the clustering dynamics. The clustering process is thereby facilitated, and only one dominant cluster appears rapidly as a result of the dynamical quorum sensing. An exact localized solution is found for a simplified model equation, and the competitive dynamics between two localized states is studied numerically.
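
    The excitable-to-oscillatory transition can be seen in a single active rotator, whose phase obeys dphi/dt = omega - a*sin(phi): for a > omega the phase rests at a fixed point (excitable), while for a < omega it rotates continuously (oscillatory). The sketch below, with illustrative parameters, counts full rotations in each regime; in the paper the effective regime is set by the local density of rotators rather than by a fixed parameter.

```python
import numpy as np

def count_rotations(omega=1.0, a=1.2, dt=0.01, steps=5000):
    # Integrate dphi/dt = omega - a*sin(phi) with forward Euler and count
    # completed 2*pi rotations ("firings").
    phi, rotations = 0.1, 0
    for _ in range(steps):
        prev = phi
        phi += dt * (omega - a * np.sin(phi))
        if np.floor(phi / (2 * np.pi)) > np.floor(prev / (2 * np.pi)):
            rotations += 1
    return rotations

print("excitable   (a = 1.2):", count_rotations(a=1.2))  # rests: 0 rotations
print("oscillatory (a = 0.8):", count_rotations(a=0.8))  # rotates repeatedly
```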

  8. Interactive computer code for dynamic and soil structure interaction analysis

    SciTech Connect

    Mulliken, J.S.

    1995-12-01

    A new interactive computer code is presented in this paper for dynamic and soil-structure interaction (SSI) analyses. The computer program FETA (Finite Element Transient Analysis) is a self-contained interactive graphics environment for IBM PCs that is used for the development of structural and soil models as well as post-processing dynamic analysis output. Full 3-D isometric views of the soil-structure system, animation of displacements, frequency and time domain responses at nodes, and response spectra are all graphically available simply by pointing and clicking with a mouse. FETA's finite element solver performs 2-D and 3-D frequency and time domain soil-structure interaction analyses. The solver can be directly accessed from the graphical interface on a PC, or run on a number of other computer platforms.

  9. GAiN: Distributed Array Computation with Python

    SciTech Connect

    Daily, Jeffrey A.

    2009-05-01

    Scientific computing makes use of very large, multidimensional numerical arrays - typically, gigabytes to terabytes in size - much larger than can fit on even the largest single compute node. Such arrays must be distributed across a "cluster" of nodes. Global Arrays is a cluster-based software system from Battelle Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate these arrays. Written in and for the C and FORTRAN programming languages, it takes advantage of high-performance cluster interconnections to allow any node in the cluster to access data on any other node very rapidly. The "numpy" module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. numpy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, numpy is inherently serial. Our system, GAiN (Global Arrays in NumPy), is a parallel extension to Python that accesses Global Arrays through numpy. This allows parallel processing and/or larger problem sizes to be harnessed almost transparently within new or existing numpy programs.

  10. Computational dynamics for robotics systems using a non-strict computational approach

    NASA Technical Reports Server (NTRS)

    Orin, David E.; Wong, Ho-Cheung; Sadayappan, P.

    1989-01-01

    A Non-Strict computational approach for real-time robotics control computations is proposed. In contrast to the traditional approach to scheduling such computations, based strictly on task dependence relations, the proposed approach relaxes precedence constraints and scheduling is guided instead by the relative sensitivity of the outputs with respect to the various paths in the task graph. An example of the computation of the Inverse Dynamics of a simple inverted pendulum is used to demonstrate the reduction in effective computational latency through use of the Non-Strict approach. A speedup of 5 has been obtained when the processes of the task graph are scheduled to reduce the latency along the crucial path of the computation. While error is introduced by the relaxation of precedence constraints, the Non-Strict approach has a smaller error than the conventional Strict approach for a wide range of input conditions.
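
    For reference, the inverse dynamics of a simple inverted pendulum, the example computation named above, reduces to a one-line torque formula. The sketch below uses textbook point-mass dynamics with illustrative parameter values; the paper's exact model is not given in the abstract.

```python
import numpy as np

def pendulum_torque(theta, theta_dot, theta_ddot,
                    m=1.0, L=1.0, b=0.1, g=9.81):
    # Torque required to realize a desired motion of a point mass m on a
    # massless rod of length L, with theta measured from upright and
    # viscous damping b:
    #   tau = m*L^2*theta_ddot + b*theta_dot - m*g*L*sin(theta)
    return m * L**2 * theta_ddot + b * theta_dot - m * g * L * np.sin(theta)

# Torque needed to hold the pendulum stationary 10 degrees from upright
# (negative: it must counteract the destabilizing gravity torque).
print(pendulum_torque(np.radians(10.0), 0.0, 0.0))
```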

  11. Dynamic Load Balancing for Finite Element Calculations on Parallel Computers. Chapter 1

    NASA Technical Reports Server (NTRS)

    Pramono, Eddy; Simon, Horst D.; Sohn, Andrew; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    Computational requirements of full-scale computational fluid dynamics change as the computation progresses on a parallel machine. The change in computational intensity causes workload imbalance among processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework for dynamic load balancing of CFD applications, called Jove, is presented. One processor is designated as the decision maker, Jove, while the others are assigned to the computational fluid dynamics work. Processors running CFD send flags to Jove after a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while the other processors continue working with the current data and load distribution. Jove goes through several steps to decide whether the new distribution should be adopted, including preliminary evaluation, partitioning, processor reassignment, cost evaluation, and a final decision. Jove running on a single SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full-scale grid partitioning on the target machine, the SP2.

  12. Enabling 3D-Liver Perfusion Mapping from MR-DCE Imaging Using Distributed Computing.

    PubMed

    Leporq, Benjamin; Camarasu-Pop, Sorina; Davila-Serrano, Eduardo E; Pilleul, Frank; Beuf, Olivier

    2013-01-01

    An MR acquisition protocol and a processing method using distributed computing on the European Grid Infrastructure (EGI) to allow 3D liver perfusion parametric mapping after Magnetic Resonance Dynamic Contrast Enhanced (MR-DCE) imaging are presented. Seven patients (one healthy control and six with chronic liver diseases) were prospectively enrolled after liver biopsy. MR dynamic acquisition was performed continuously in free breathing for two minutes after simultaneous intravascular contrast agent (MS-325 blood pool agent) injection. The hepatic capillary system was modeled by a three-parameter one-compartment pharmacokinetic model. The processing step was parallelized and executed on the EGI. It was modeled and implemented as a grid workflow using the Gwendia language and the MOTEUR workflow engine. Results showed good reproducibility in repeated processing on the grid. The results obtained from the grid were well correlated with those of a ROI-based reference method run locally on a personal computer. The speed-up ranged from 71 to 242, with an average value of 126. In conclusion, distributed computing applied to perfusion mapping brings a significant speed-up to the quantification step, to be used for further clinical studies in a research context. Accuracy would be improved with the higher image SNR accessible on the latest 3T MR systems available today.
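
    The per-voxel quantification step that the grid parallelizes amounts to fitting a small kinetic model to each time course. The sketch below fits a generic three-parameter one-compartment form (inflow constant, washout rate, bolus delay) to synthetic data; the arterial input function and the exact parameterization are assumptions for illustration and may differ from the paper's hepatic model.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 120.0, 120)                     # s: two-minute scan
aif = np.exp(-t / 40.0) * (1.0 - np.exp(-t / 5.0))   # synthetic arterial input

def one_compartment(t, k_in, k_out, delay):
    # Tissue curve = k_in * (delayed arterial input convolved with a
    # monoexponential washout exp(-k_out * t)); three free parameters.
    dt = t[1] - t[0]
    delayed = np.interp(t - delay, t, aif, left=0.0)
    return k_in * np.convolve(delayed, np.exp(-k_out * t))[: t.size] * dt

# Simulate one voxel's time course, then recover the parameters by fitting.
rng = np.random.default_rng(42)
truth = (0.05, 0.02, 3.0)
signal = one_compartment(t, *truth) + rng.normal(0.0, 1e-4, t.size)
fitted, _ = curve_fit(one_compartment, t, signal, p0=(0.01, 0.01, 1.0))
print("recovered (k_in, k_out, delay):", fitted)
```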

  13. Dynamic self-assembly in living systems as computation.

    SciTech Connect

    Bouchard, Ann Marie; Osbourn, Gordon Cecil

    2004-06-01

    Biochemical reactions taking place in living systems that map different inputs to specific outputs are intuitively recognized as performing information processing. Conventional wisdom distinguishes such proteins, whose primary function is to transfer and process information, from proteins that perform the vast majority of the construction, maintenance, and actuation tasks of the cell (assembling and disassembling macromolecular structures, producing movement, and synthesizing and degrading molecules). In this paper, we examine the computing capabilities of biological processes in the context of the formal model of computing known as the random access machine (RAM) [Dewdney AK (1993) The New Turing Omnibus. Computer Science Press, New York], which is equivalent to a Turing machine [Minsky ML (1967) Computation: Finite and Infinite Machines. Prentice-Hall, Englewood Cliffs, NJ]. When viewed from the RAM perspective, we observe that many of these dynamic self-assembly processes - synthesis, degradation, assembly, movement - do carry out computational operations. We also show that the same computing model is applicable at other hierarchical levels of biological systems (e.g., cellular or organism networks as well as molecular networks). We present stochastic simulations of idealized protein networks designed explicitly to carry out a numeric calculation. We explore the reliability of such computations and discuss error-correction strategies (algorithms) employed by living systems. Finally, we discuss some real examples of dynamic self-assembly processes that occur in living systems, and describe the RAM computer programs they implement. Thus, by viewing the processes of living systems from the RAM perspective, a far greater fraction of these processes can be understood as computing than has been previously recognized.

  14. Towards Dynamic Remote Data Auditing in Computational Clouds

    PubMed Central

    Khurram Khan, Muhammad; Anuar, Nor Badrul

    2014-01-01

    Cloud computing is a significant shift of computational paradigm where computing as a utility and storing data remotely have a great potential. Enterprise and businesses are now more interested in outsourcing their data to the cloud to lessen the burden of local data storage and maintenance. However, the outsourced data and the computation outcomes are not continuously trustworthy due to the lack of control and physical possession of the data owners. To better streamline this issue, researchers have now focused on designing remote data auditing (RDA) techniques. The majority of these techniques, however, are only applicable for static archive data and are not subject to audit the dynamically updated outsourced data. We propose an effectual RDA technique based on algebraic signature properties for cloud storage system and also present a new data structure capable of efficiently supporting dynamic data operations like append, insert, modify, and delete. Moreover, this data structure empowers our method to be applicable for large-scale data with minimum computation cost. The comparative analysis with the state-of-the-art RDA schemes shows that the proposed scheme is secure and highly efficient in terms of the computation and communication overhead on the auditor and server. PMID:25121114

  15. Towards dynamic remote data auditing in computational clouds.

    PubMed

    Sookhak, Mehdi; Akhunzada, Adnan; Gani, Abdullah; Khurram Khan, Muhammad; Anuar, Nor Badrul

    2014-01-01

    Cloud computing is a significant shift of computational paradigm where computing as a utility and storing data remotely have a great potential. Enterprise and businesses are now more interested in outsourcing their data to the cloud to lessen the burden of local data storage and maintenance. However, the outsourced data and the computation outcomes are not continuously trustworthy due to the lack of control and physical possession of the data owners. To better streamline this issue, researchers have now focused on designing remote data auditing (RDA) techniques. The majority of these techniques, however, are only applicable for static archive data and are not subject to audit the dynamically updated outsourced data. We propose an effectual RDA technique based on algebraic signature properties for cloud storage system and also present a new data structure capable of efficiently supporting dynamic data operations like append, insert, modify, and delete. Moreover, this data structure empowers our method to be applicable for large-scale data with minimum computation cost. The comparative analysis with the state-of-the-art RDA schemes shows that the proposed scheme is secure and highly efficient in terms of the computation and communication overhead on the auditor and server.

  16. Lilith: A Java framework for the development of scalable tools for high performance distributed computing platforms

    SciTech Connect

    Evensky, D.A.; Gentile, A.C.; Armstrong, R.C.

    1998-03-19

    Increasingly, high performance computing constitutes the use of very large heterogeneous clusters of machines. The use and maintenance of such clusters are subject to complexities of communication between the machines in a time-efficient and secure manner. Lilith is a general purpose tool that provides a highly scalable, secure, and easy distribution of user code across a heterogeneous computing platform. By handling the details of code distribution and communication, such a framework allows for the rapid development of tools for the use and management of large distributed systems. Lilith is written in Java, taking advantage of Java's unique features of loading and distributing code dynamically, its platform independence, its thread support, and its provision of graphical components to facilitate easy-to-use resultant tools. The authors describe the use of Lilith in a tool developed for the maintenance of the large distributed cluster at their institution and present details of the Lilith architecture and user API for the general user development of scalable tools.

  17. Computer Modeling of Crystallization and Crystal Size distributions

    NASA Astrophysics Data System (ADS)

    Amenta, R. V.

    2002-05-01

    The crystal size distribution of an igneous rock has been shown to be related to the crystallization kinetics. In order to better understand crystallization processes, the nucleation and growth of crystals in a closed system is modeled computationally and graphically. Units of volume analogous to unit cells are systematically attached to stationary crystal nuclei. The number of volume units attached to each crystal per growth stage is proportional to the crystal size, ensuring that crystal dimensional growth rates are constant regardless of size. The number of new crystal nuclei per total system volume that form in each growth stage increases exponentially. Cumulative crystal size distributions (CCSD) are determined for various stages of crystallization (30 percent, 60 percent, etc.) from a database generated by the computer model, and each distribution is fit to an exponential function of the same form. Simulation results show that the CCSD functions fit the data reasonably well (by R-squared), with the greatest misfit at 100 percent crystallization. The crystal size distribution at each percentage of crystallization can be obtained from the derivative of the respective CCSD function. The log form of each crystal size distribution (CSD) is a linear function with negative slope. Results show that the slopes of the CSD functions at percentages of crystallization up to 90 percent are parallel, but the slope at 100 percent crystallization differs from the others, although it is still in approximate alignment. We suggest that real crystallization of igneous rocks may show this pattern. In the early stages of crystallization, crystals are far apart and CSDs are ideal, as predicted by theory based on the growth of crystals in a brine. At advanced stages of crystallization, growth collision boundaries develop between crystals. As contiguity increases, crystals become blocked and inactive because they can no longer grow. As crystallization approaches 100 percent, a significant number of inactive crystals exist, resulting in
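
    The log-linear CSD described above is the classical population-density form n(L) = n0 * exp(-L/(G*tau)), whose log plot is a straight line with slope -1/(G*tau). The sketch below recovers that slope from synthetic size data; the exponential sample is an assumption standing in for the model-generated database.

```python
import numpy as np

# Synthetic crystal sizes with G*tau = 0.5 mm, so the expected slope of
# ln n(L) versus crystal size L is -1/(G*tau) = -2.0 per mm.
rng = np.random.default_rng(0)
sizes = rng.exponential(scale=0.5, size=20000)

counts, edges = np.histogram(sizes, bins=30, range=(0.0, 3.0))
centers = 0.5 * (edges[:-1] + edges[1:])
nonzero = counts > 0
slope, intercept = np.polyfit(centers[nonzero], np.log(counts[nonzero]), 1)
print(f"fitted slope = {slope:.2f} (expected about -2.0)")
```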

  18. CytoSolve: A Scalable Computational Method for Dynamic Integration of Multiple Molecular Pathway Models.

    PubMed

    Ayyadurai, V A Shiva; Dewey, C Forbes

    2011-03-01

    A grand challenge of computational systems biology is to create a molecular pathway model of the whole cell. Current approaches involve merging smaller molecular pathway models' source codes to create a large monolithic model (computer program) that runs on a single computer. Such a larger model is difficult, if not impossible, to maintain given ongoing updates to the source codes of the smaller models. This paper describes a new system called CytoSolve that dynamically integrates computations of smaller models that can run in parallel across different machines without the need to merge the source codes of the individual models. This approach is demonstrated on the classic Epidermal Growth Factor Receptor (EGFR) model of Kholodenko. The EGFR model is split into four smaller models and each smaller model is distributed on a different machine. Results from four smaller models are dynamically integrated to generate identical results to the monolithic EGFR model running on a single machine. The overhead for parallel and dynamic computation is approximately twice that of a monolithic model running on a single machine. The CytoSolve approach provides a scalable method since smaller models may reside on any computer worldwide, where the source code of each model can be independently maintained and updated.
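
    A toy version of the integration loop conveys the idea: each smaller model contributes its own rate of change for the shared species vector, and a controller sums the contributions and advances the state, so no source code is ever merged. The two lambda "submodels" and the forward-Euler stepping below are illustrative simplifications, not the CytoSolve internals.

```python
import numpy as np

def integrate_submodels(submodels, y0, t_end, dt):
    # Controller loop: ask every submodel for its flux contribution to the
    # shared species state, sum the fluxes, and advance one time slice.
    # In CytoSolve the submodel evaluations could run on separate machines.
    y, t = np.array(y0, dtype=float), 0.0
    while t < t_end:
        dydt = sum(model(t, y) for model in submodels)
        y, t = y + dt * dydt, t + dt
    return y

# Two hypothetical pathway fragments sharing species [A, B]:
forward = lambda t, y: np.array([-0.5 * y[0], +0.5 * y[0]])  # A -> B
reverse = lambda t, y: np.array([+0.1 * y[1], -0.1 * y[1]])  # B -> A
print(integrate_submodels([forward, reverse], [1.0, 0.0], 10.0, 0.01))
```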

  19. Classification of bacterial contamination using image processing and distributed computing.

    PubMed

    Ahmed, W M; Bayraktar, B; Bhunia, A; Hirleman, E D; Robinson, J P; Rajwa, B

    2013-01-01

    Disease outbreaks due to contaminated food are a major concern not only for the food-processing industry but also for the public at large. Techniques for automated detection and classification of microorganisms can be a great help in preventing outbreaks and maintaining the safety of the nation's food supply. Identification and classification of foodborne pathogens using colony scatter patterns is a promising new label-free technique that utilizes image-analysis and machine-learning tools. However, the feature-extraction tools employed for this approach are computationally complex, and choosing the right combination of scatter-related features requires extensive testing with different feature combinations. In the presented work we used computer clusters to speed up the feature-extraction process, which enables us to analyze the contribution of different scatter-based features to the overall classification accuracy. A set of 1000 scatter patterns representing ten different bacterial strains was used. Zernike and Chebyshev moments as well as Haralick texture features were computed from the available light-scatter patterns. The most promising features were first selected using Fisher's discriminant analysis, and subsequently a support-vector-machine (SVM) classifier with a linear kernel was used. With extensive testing we were able to identify a small subset of features that produced the desired results in terms of classification accuracy and execution speed. The use of distributed computing for scatter-pattern analysis, feature extraction, and selection provides a feasible mechanism for large-scale deployment of a light scatter-based approach to bacterial classification.
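
    A hedged sketch of the pipeline, assuming the mahotas and scikit-learn libraries and using synthetic stand-in images and labels (the real study used 1000 measured scatter patterns; Chebyshev moments are omitted here):

```python
import numpy as np
import mahotas.features as mf
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def scatter_features(img):
    """Moment and texture features from one 8-bit scatter-pattern image."""
    zernike = mf.zernike_moments(img, radius=img.shape[0] // 2)
    haralick = mf.haralick(img).mean(axis=0)   # average over the 4 directions
    return np.concatenate([zernike, haralick])

def fisher_score(X, y):
    """Fisher's discriminant ratio per feature: between/within class variance."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    between = sum((X[y == c].mean(axis=0) - mu) ** 2 for c in classes)
    within = sum(X[y == c].var(axis=0) for c in classes) + 1e-12
    return between / within

# synthetic stand-ins: ten "strains", ten random images each
rng = np.random.default_rng(0)
images = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(100)]
labels = np.repeat(np.arange(10), 10)

X = np.array([scatter_features(im) for im in images])
keep = np.argsort(fisher_score(X, labels))[-20:]        # top-20 features
Xtr, Xte, ytr, yte = train_test_split(X[:, keep], labels, test_size=0.3)
clf = SVC(kernel="linear").fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))   # near chance on random stand-ins
```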

  20. An Optimization Framework for Dynamic, Distributed Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara

    2003-01-01

    This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments and utility and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model for producing feasible, optimal resource allocations.
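
    The utility/service-level idea can be illustrated with a small greedy sketch (not the paper's algorithm): each task offers several service levels, and upgrades are bought by marginal utility per unit of CPU, so service degrades gracefully when resources are scarce:

```python
# Minimal sketch (not the paper's algorithm): each task offers several service
# levels, each with a CPU demand and a utility; pick levels greedily by
# marginal utility per marginal CPU within the budget.
def allocate(tasks, cpu_budget):
    # tasks: {name: [(cpu_demand, utility), ...]} with levels sorted ascending
    chosen = {name: 0 for name in tasks}            # start at lowest level
    spent = sum(tasks[n][0][0] for n in tasks)
    while True:
        best, best_ratio = None, 0.0
        for n, levels in tasks.items():
            i = chosen[n]
            if i + 1 < len(levels):
                dcpu = levels[i + 1][0] - levels[i][0]
                dutil = levels[i + 1][1] - levels[i][1]
                if spent + dcpu <= cpu_budget and dutil / dcpu > best_ratio:
                    best, best_ratio = n, dutil / dcpu
        if best is None:
            return chosen, spent
        spent += tasks[best][chosen[best] + 1][0] - tasks[best][chosen[best]][0]
        chosen[best] += 1

tasks = {"track": [(1, 2), (2, 5), (4, 6)], "log": [(1, 1), (3, 2)]}
print(allocate(tasks, cpu_budget=5))    # upgrades "track" first: best ratio
```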

  1. Integrating aerodynamic surface modeling for computational fluid dynamics with computer aided structural analysis, design, and manufacturing

    NASA Technical Reports Server (NTRS)

    Thorp, Scott A.

    1992-01-01

    This presentation will discuss the development of a NASA Geometry Exchange Specification for transferring aerodynamic surface geometry between LeRC systems and grid generation software used for computational fluid dynamics research. The proposed specification is based on a subset of the Initial Graphics Exchange Specification (IGES). The presentation will include discussion of how the NASA-IGES standard will accommodate improved computer aided design inspection methods and reverse engineering techniques currently being developed. The presentation is in viewgraph format.

  2. Localised distributions and criteria for correctness in complex Langevin dynamics

    SciTech Connect

    Aarts, Gert; Giudice, Pietro; Seiler, Erhard

    2013-10-15

    Complex Langevin dynamics can solve the sign problem appearing in numerical simulations of theories with a complex action. In order to justify the procedure, it is important to understand the properties of the real and positive distribution, which is effectively sampled during the stochastic process. In the context of a simple model, we study this distribution by solving the Fokker–Planck equation as well as by brute force, and relate the results to the recently derived criteria for correctness. We demonstrate analytically that it is possible that the distribution has support only in a strip in the complexified configuration space, in which case correct results are expected. Highlights: characterisation of the equilibrium distribution sampled in complex Langevin dynamics; connection between criteria for correctness and breakdown; solution of the Fokker–Planck equation in the case of real noise; analytical determination of support in complexified space.
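
    For a flavor of the method, a minimal complex Langevin sketch for a Gaussian toy model with complex action S(z) = sigma z^2/2 (illustrative only; not the model studied in this record). The variable is complexified and driven by real noise; the long-time average of z^2 should reproduce the analytically continued result 1/sigma:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0 + 0.5j               # complex "action" S(z) = sigma * z**2 / 2
eps, nsteps, ntherm = 0.01, 500_000, 5_000

z = 0.0 + 0.0j
acc, n = 0.0 + 0.0j, 0
for step in range(nsteps):
    # complexified update: drift -dS/dz = -sigma*z, plus real Gaussian noise
    z = z - eps * sigma * z + np.sqrt(2 * eps) * rng.standard_normal()
    if step >= ntherm:
        acc += z * z
        n += 1

print("CL estimate of <z^2>:", acc / n)
print("exact 1/sigma       :", 1 / sigma)   # 0.8 - 0.4j
```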

  3. Estimation of free-energy differences from computed work distributions: an application of Jarzynski's equality.

    PubMed

    Echeverria, Ignacia; Amzel, L Mario

    2012-09-13

    Equilibrium free-energy differences can be computed from nonequilibrium molecular dynamics (MD) simulations using Jarzynski's equality (Jarzynski, C. Phys. Rev. Lett. 1997, 78, 2690) by combining a large set of independent trajectories (path ensemble). Here we present the multistep trajectory combination (MSTC) method to compute free-energy differences, which by combining trajectories significantly reduces the number of trajectories necessary to generate a representative path ensemble. This method generates well-sampled work distributions, even for large systems, by combining parts of a relatively small number of trajectories carried out in steps. To assess the efficiency of the MSTC method, we derived analytical expressions and used them to compute the bias and the variance of the free-energy estimates along with numerically calculated values. We show that the MSTC method significantly reduces both the bias and variance of the free-energy estimates compared to the estimates obtained using single-step trajectories. In addition, because in the MSTC method the process is divided into steps, it is feasible to compute the reverse transition. By combining the forward and reverse processes, the free-energy difference can be computed using Crooks' fluctuation theorem (Crooks, G. E. J. Stat. Phys. 1998, 90, 1481 and Crooks, G. E. Phys. Rev. E 2000, 61, 2361) or Bennett's acceptance ratio (Bennett, C. H. J. Comput. Phys. 1976, 22, 245), which further reduces the bias and variance of the estimates.
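
    Jarzynski's equality states that ⟨exp(-W/kT)⟩ = exp(-ΔF/kT) over the work distribution, so ΔF = -kT ln⟨exp(-W/kT)⟩. A minimal estimator, checked against the exact result for Gaussian work distributions, ΔF = ⟨W⟩ - var(W)/(2kT):

```python
import numpy as np

def jarzynski_free_energy(work, kT=1.0):
    """Free-energy difference from nonequilibrium work samples via
    Jarzynski's equality: dF = -kT * ln < exp(-W/kT) >.
    Uses log-sum-exp for numerical stability."""
    w = np.asarray(work) / kT
    return -kT * (np.logaddexp.reduce(-w) - np.log(len(w)))

# Toy check with a Gaussian work distribution (hypothetical numbers):
rng = np.random.default_rng(1)
work = rng.normal(loc=2.0, scale=1.0, size=100_000)
print(jarzynski_free_energy(work))   # ~ 2.0 - 1.0/2 = 1.5
```

    The finite-sample bias of this exponential average is exactly what the record's analytical expressions quantify; with few trajectories the estimate is dominated by rare low-work samples, which motivates combining forward and reverse processes.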

  4. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    NASA Technical Reports Server (NTRS)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly expanding field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative, alongside the more conventional supercomputers based on a small number of powerful vector processors and the massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  5. Parallel Computational Fluid Dynamics: Current Status and Future Requirements

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; VanDalsem, William R.; Dagum, Leonardo; Kutler, Paul (Technical Monitor)

    1994-01-01

    One of the key objectives of the Applied Research Branch in the Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is the accelerated introduction of highly parallel machines into a full operational environment. In this report we discuss the performance results obtained from the implementation of some computational fluid dynamics (CFD) applications on the Connection Machine CM-2 and the Intel iPSC/860. We summarize some of the experience gained so far with the parallel testbed machines at the NAS Applied Research Branch. Then we discuss the long-term computational requirements for accomplishing some of the grand challenge problems in computational aerosciences. We argue that only massively parallel machines will be able to meet these grand challenge requirements, and we outline the computer science and algorithm research challenges ahead.

  6. Interactive computational models of particle dynamics using virtual reality

    SciTech Connect

    Canfield, T.; Diachin, D.; Freitag, L.; Heath, D.; Herzog, J.; Michels, W.

    1996-12-31

    An increasing number of industrial applications rely on computational models to reduce costs in product design, development, and testing cycles. Here, the authors discuss an interactive environment for the visualization, analysis, and modification of computational models used in industrial settings. In particular, they focus on interactively placing massless, massed, and evaporating particulate matter in computational fluid dynamics applications. They discuss the numerical model used to compute the particle pathlines in the fluid flow for display and analysis. They briefly describe the toolkits developed for vector and scalar field visualization, interactive particulate source placement, and a three-dimensional GUI interface. This system is currently used in two industrial applications, and they present the tools in the context of these applications. They summarize the current state of the project and offer directions for future research.
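
    Particle pathlines of the kind described are typically obtained by integrating dx/dt = u(x, t); a minimal sketch with classical RK4 and a placeholder vortex velocity field (a real system would interpolate the CFD solution instead):

```python
import numpy as np

def velocity(p, t):
    """Placeholder steady velocity field (a 2-D vortex); a CFD application
    would interpolate its computed flow field here instead."""
    x, y = p
    return np.array([-y, x])

def pathline(p0, t0, t1, dt):
    """Integrate a massless particle's pathline with classical RK4."""
    p, t, path = np.asarray(p0, float), t0, [np.asarray(p0, float)]
    while t < t1:
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(p + dt * k3, t + dt)
        p = p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        path.append(p)
    return np.array(path)

print(pathline([1.0, 0.0], 0.0, 3.14, 0.01)[-1])  # ~ half an orbit: (-1, 0)
```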

  7. Computational strategies for three-dimensional flow simulations on distributed computer systems. Ph.D. Thesis Semiannual Status Report, 15 Aug. 1993 - 15 Feb. 1994

    NASA Technical Reports Server (NTRS)

    Weed, Richard Allen; Sankar, L. N.

    1994-01-01

    An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has led to research into procedures for implementing a parallel computing system composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate code performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.

  8. A Textbook for a First Course in Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zingg, D. W.; Pulliam, T. H.; Nixon, David (Technical Monitor)

    1999-01-01

    This paper describes and discusses the textbook, Fundamentals of Computational Fluid Dynamics by Lomax, Pulliam, and Zingg, which is intended for a graduate level first course in computational fluid dynamics. This textbook emphasizes fundamental concepts in developing, analyzing, and understanding numerical methods for the partial differential equations governing the physics of fluid flow. Its underlying philosophy is that the theory of linear algebra and the attendant eigenanalysis of linear systems provides a mathematical framework to describe and unify most numerical methods in common use in the field of fluid dynamics. Two linear model equations, the linear convection and diffusion equations, are used to illustrate concepts throughout. Emphasis is on the semi-discrete approach, in which the governing partial differential equations (PDE's) are reduced to systems of ordinary differential equations (ODE's) through a discretization of the spatial derivatives. The ordinary differential equations are then reduced to ordinary difference equations (O(Delta)E's) using a time-marching method. This methodology, using the progression from PDE through ODE's to O(Delta)E's, together with the use of the eigensystems of tridiagonal matrices and the theory of O(Delta)E's, gives the book its distinctiveness and provides a sound basis for a deep understanding of fundamental concepts in computational fluid dynamics.
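
    The semi-discrete progression the book emphasizes can be shown in a few lines: discretize u_t + a u_x = 0 in space to get the ODE system du/dt = Au, then time-march it to obtain difference equations (a sketch, not code from the textbook):

```python
import numpy as np

# Semi-discrete approach for the linear convection equation u_t + a u_x = 0
# on a periodic grid: PDE -> ODE system du/dt = A u via central differences,
# then an RK4 time march turns the ODEs into ordinary difference equations.
a, nx = 1.0, 100
dx = 2 * np.pi / nx
x = np.arange(nx) * dx

A = np.zeros((nx, nx))           # second-order central difference, periodic
for j in range(nx):
    A[j, (j + 1) % nx] = -a / (2 * dx)
    A[j, (j - 1) % nx] = +a / (2 * dx)

u = np.sin(x)                    # initial condition
dt, nsteps = 0.4 * dx / a, 500
for _ in range(nsteps):          # RK4 time march of du/dt = A u
    k1 = A @ u
    k2 = A @ (u + 0.5 * dt * k1)
    k3 = A @ (u + 0.5 * dt * k2)
    k4 = A @ (u + dt * k3)
    u = u + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

exact = np.sin(x - a * dt * nsteps)
print("max error:", np.abs(u - exact).max())   # small dispersive phase error
```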

  9. Incorporating geometrically complex vegetation in a computational fluid dynamic framework

    NASA Astrophysics Data System (ADS)

    Boothroyd, Richard; Hardy, Richard; Warburton, Jeff; Rosser, Nick

    2015-04-01

    Vegetation is known to have a significant influence on the hydraulic, geomorphological, and ecological functioning of river systems. Vegetation acts as a blockage to flow, thereby causing additional flow resistance and influencing flow dynamics, in particular flow conveyance. These processes need to be incorporated into flood models to improve predictions used in river management. However, the current practice in representing vegetation in hydraulic models is either through roughness parameterisation or process understanding derived experimentally from flow through highly simplified configurations of fixed, rigid cylinders. It is suggested that such simplifications inadequately describe the geometric complexity that characterises vegetation, and therefore the modelled flow dynamics may be oversimplified. This paper addresses this issue by using an approach combining field and numerical modelling techniques. Terrestrial Laser Scanning (TLS) with waveform processing has been applied to collect a sub-mm, 3-dimensional representation of Prunus laurocerasus, an invasive species to the UK that has been increasingly recorded in riparian zones. Multiple scan perspectives produce a highly detailed point cloud (>5,000,000 individual data points) which is reduced in post processing using an octree-based voxelisation technique. The method retains the geometric complexity of the vegetation by subdividing the point cloud into 0.01 m3 cubic voxels. The voxelised representation is subsequently read into a computational fluid dynamic (CFD) model using a Mass Flux Scaling Algorithm, allowing the vegetation to be directly represented in the modelling framework. Results demonstrate the development of a complex flow field around the vegetation. The downstream velocity profile is characterised by two distinct inflection points. A high velocity zone in the near-bed (plant-stem) region is apparent due to the lack of significant near-bed foliage. Above this, a zone of reduced velocity is
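
    The voxelisation step can be sketched with plain numpy (0.01 m is taken here as the voxel edge length, an assumption about the record's "0.01 m3 cubic voxels"; random points stand in for the laurel point cloud):

```python
import numpy as np

def voxelize(points, voxel=0.01):
    """Reduce a TLS point cloud to occupied cubic voxels of edge `voxel` (m).
    Returns the unique voxel indices and their centre coordinates."""
    idx = np.floor(points / voxel).astype(np.int64)
    occupied = np.unique(idx, axis=0)
    centres = (occupied + 0.5) * voxel
    return occupied, centres

# random stand-in for a multi-million-point scan of the shrub
pts = np.random.default_rng(0).uniform(0, 2.0, size=(1_000_000, 3))
occ, centres = voxelize(pts, voxel=0.01)
print(len(pts), "points ->", len(occ), "occupied 0.01 m voxels")
```

    The occupied-voxel centres are what a Mass Flux Scaling Algorithm would then treat as blocked cells in the CFD mesh.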

  10. The coupling of fluids, dynamics, and controls on advanced architecture computers

    NASA Technical Reports Server (NTRS)

    Atwood, Christopher

    1995-01-01

    This grant provided for the demonstration of coupled controls, body dynamics, and fluids computations in a workstation cluster environment, and an investigation of the impact of peer-peer communication on flow solver performance and robustness. The findings of these investigations were documented in the conference articles. The attached publication, 'Towards Distributed Fluids/Controls Simulations', documents the solution and scaling of the coupled Navier-Stokes, Euler rigid-body dynamics, and state feedback control equations for a two-dimensional canard-wing. The poor scaling shown was due to serialized grid connectivity computation and Ethernet bandwidth limits. The scaling of a peer-to-peer communication flow code on an IBM SP-2 was also shown. The scaling of the code on the switched fabric-linked nodes was good, with a 2.4 percent loss due to communication of intergrid boundary point information. The code performance on 30 worker nodes was 1.7 μs/point/iteration, or a factor of three over a Cray C-90 head. The attached paper, 'Nonlinear Fluid Computations in a Distributed Environment', documents the effect of several computational rate enhancing methods on convergence. For the cases shown, the highest throughput was achieved using boundary updates at each step, with the manager process performing communication tasks only. Constrained domain decomposition of the implicit fluid equations did not degrade the convergence rate or final solution. The scaling of a coupled body/fluid dynamics problem on an Ethernet-linked cluster was also shown.

  11. Research into display sharing techniques for distributed computing environments

    NASA Technical Reports Server (NTRS)

    Hugg, Steven B.; Fitzgerald, Paul F., Jr.; Rosson, Nina Y.; Johns, Stephen R.

    1990-01-01

    The X-based Display Sharing solution for distributed computing environments is described. The Display Sharing prototype includes the base functionality for telecast and display copy requirements. Since the prototype implementation is modular and the system design provided flexibility for Mission Control Center Upgrade (MCCU) operational considerations, the prototype implementation can be the baseline for a production Display Sharing implementation. To facilitate the process the following discussions are presented: Theory of operation; System architecture; Using the prototype; Software description; Research tools; Prototype evaluation; and Outstanding issues. The prototype is based on the concept of a dedicated central host performing the majority of the Display Sharing processing, allowing minimal impact on each individual workstation. Each workstation participating in Display Sharing hosts programs to facilitate the user's access to Display Sharing on the host machine.

  12. Parallel algorithms and architecture for computation of manipulator forward dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel computation of manipulator forward dynamics is investigated. Considering three classes of algorithms for the solution of the problem, that is, the O(n), the O(n exp 2), and the O(n exp 3) algorithms, parallelism in the problem is analyzed. It is shown that the problem belongs to the class NC and that the time and processor bounds are of O((log n) exp 2) and O(n exp 4), respectively. However, the fastest stable parallel algorithms achieve a computation time of O(n) and can be derived by parallelization of the O(n exp 3) serial algorithms. Parallel computation of the O(n exp 3) algorithms requires the development of parallel algorithms for a set of fundamentally different problems, that is, the Newton-Euler formulation, the computation of the inertia matrix, decomposition of the symmetric, positive definite matrix, and the solution of triangular systems. Parallel algorithms for this set of problems are developed which can be efficiently implemented on a unique architecture, a triangular array of n(n+2)/2 processors with a simple nearest-neighbor interconnection. This architecture is particularly suitable for VLSI and WSI implementations. The developed parallel algorithm, compared to the best serial O(n) algorithm, achieves an asymptotic speedup of more than two orders of magnitude in the computation of the forward dynamics.

  13. Computational fluid dynamics studies of nuclear rocket performance

    NASA Technical Reports Server (NTRS)

    Stubbs, Robert M.; Kim, Suk C.; Benson, Thomas J.

    1994-01-01

    A CFD analysis of a low pressure nuclear rocket concept is presented with the use of an advanced chemical kinetics, Navier-Stokes code. The computations describe the flow field in detail, including gas dynamic, thermodynamic and chemical properties, as well as global performance quantities such as specific impulse. Computational studies of several rocket nozzle shapes are conducted in an attempt to maximize hydrogen recombination. These Navier-Stokes calculations, which include real gas and viscous effects, predict lower performance values than have been reported heretofore.

  14. Operational computer graphics in the flight dynamics environment

    NASA Technical Reports Server (NTRS)

    Jeletic, James F.

    1989-01-01

    Over the past five years, the Flight Dynamics Division of the National Aeronautics and Space Administration's (NASA's) Goddard Space Flight Center has incorporated computer graphics technology into its operational environment. In an attempt to increase the effectiveness and productivity of the Division, computer graphics software systems have been developed that display spacecraft tracking and telemetry data in 2-D and 3-D graphic formats that are more comprehensible than the alphanumeric tables of the past. These systems vary in functionality from real-time mission monitoring systems, to mission planning utilities, to system development tools. Here, the capabilities and architecture of these systems are discussed.

  15. Multitasking the code ARC3D. [for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Barton, John T.; Hsiung, Christopher C.

    1986-01-01

    The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.

  16. Computational fluid dynamics applications at McDonnell Douglas

    NASA Technical Reports Server (NTRS)

    Hakkinen, R. J.

    1987-01-01

    Representative examples are presented of applications and development of advanced Computational Fluid Dynamics (CFD) codes for aerodynamic design at the McDonnell Douglas Corporation (MDC). Transonic potential and Euler codes, interactively coupled with boundary layer computation, and solutions of slender-layer Navier-Stokes approximation are applied to aircraft wing/body calculations. An optimization procedure using evolution theory is described in the context of transonic wing design. Euler methods are presented for analysis of hypersonic configurations, and helicopter rotors in hover and forward flight. Several of these projects were accepted for access to the Numerical Aerodynamic Simulation (NAS) facility at the NASA-Ames Research Center.

  17. Computer simulation of multirigid body dynamics and control

    NASA Technical Reports Server (NTRS)

    Swaminadham, M.; Moon, Young I.; Venkayya, V. B.

    1990-01-01

    The objective is to set up and analyze benchmark problems on multibody dynamics and to verify the predictions of two multibody computer simulation codes. TREETOPS and DISCOS have been used to run three example problems - a one degree-of-freedom spring-mass-dashpot system, an inverted pendulum system, and a triple pendulum. To study the dynamics and control interaction, an inverted planar pendulum with an external body force and a torsional control spring was modeled as a hinge-connected two-rigid-body system. TREETOPS and DISCOS effected the time history simulation of this problem. System state space variables and their time derivatives from the two simulation codes were compared.

  18. Distributed Framework for Dynamic Telescope and Instrument Control

    NASA Technical Reports Server (NTRS)

    Ames, Troy J.; Case, Lynne

    2002-01-01

    Traditionally, instrument command and control systems have been developed specifically for a single instrument. Such solutions are frequently expensive and are inflexible to support the next instrument development effort. NASA Goddard Space Flight Center is developing an extensible framework, known as Instrument Remote Control (IRC), that applies to any kind of instrument that can be controlled by a computer. IRC combines the platform-independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe graphical user interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms. The IRC framework provides the ability to communicate to components anywhere on a network using the JXTA protocol for dynamic discovery of distributed components. JXTA (see http://www.jxta.org) is a generalized protocol that allows any devices connected by a network to communicate in a peer-to-peer manner. IRC uses JXTA to advertise a device's IML and discover devices of interest on the network. Devices can join or leave the network and thus join or leave the instrument control environment of IRC. Currently, several astronomical instruments are working with the IRC development team to develop custom components for IRC to control their instruments. These instruments include: High resolution Airborne Wideband Camera (HAWC), a first light instrument for the Stratospheric Observatory for Infrared Astronomy (SOFIA); Submillimeter And Far Infrared Experiment (SAFIRE), a Principal Investigator instrument for SOFIA; and Fabry-Perot Interferometer Bolometer Research Experiment (FIBRE), a prototype of the SAFIRE instrument, used at the Caltech Submillimeter Observatory (CSO). Most recently, we have

  19. Toward unification of taxonomy databases in a distributed computer environment

    SciTech Connect

    Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi

    1994-12-31

    All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid in biological research by computer. The taxonomy databases are, however, not consistently unified with a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful in comparing many research results and in investigating future research directions from existing research results. In particular, it will be useful in comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existing taxonomy databases and eliminate inconsistencies (errors) that are present in them. Inconsistencies occur particularly in the restructuring of the existing taxonomy databases, since classification rules for constructing the taxonomy have rapidly changed with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases in a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.

  20. Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    PubMed Central

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-01-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
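
    For orientation, a sketch of the reversible baseline the paper contrasts with: Gibbs updates of binary units whose firing probability is a sigmoid of their summed input sample a Boltzmann distribution. The paper itself replaces this with a non-reversible chain suited to spiking dynamics; this is only the abstract sampling picture:

```python
import numpy as np

# Gibbs sampling of p(z) ~ exp(z'Wz/2 + b'z) by binary "neurons": unit k
# fires with probability sigmoid(W[k] @ z + b[k]), its "membrane potential".
rng = np.random.default_rng(0)
n = 5
W = rng.normal(0, 0.5, (n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)                      # symmetric, zero diagonal
b = rng.normal(0, 0.5, n)

z = rng.integers(0, 2, n).astype(float)
counts = np.zeros(n)
nsweeps = 20_000
for _ in range(nsweeps):
    for k in range(n):                      # one Gibbs sweep
        u = W[k] @ z + b[k]
        z[k] = rng.random() < 1 / (1 + np.exp(-u))
    counts += z

print("marginal firing probabilities:", counts / nsweeps)
```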

  1. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Babik, M.; Bhatt, K.; Chand, P.; Collados, D.; Duggal, V.; Fuente, P.; Hayashi, S.; Imamagic, E.; Joshi, P.; Kalmady, R.; Karnani, U.; Kumar, V.; Lapka, W.; Quick, R.; Tarragon, J.; Teige, S.; Triantafyllidis, C.

    2012-12-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO[1] managers, service managers, management), from different middleware providers (ARC[2], dCache[3], gLite[4], UNICORE[5] and VDT[6]), consortiums (WLCG[7], EMI[11], EGI[15], OSG[13]), and operational teams (GOC[16], OMB[8], OTAG[9], CSIRT[10]). The seamless harmonization of these distributed actors is in daily use for monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG[27] portal where it is exposed to other clients. This monitoring workflow profits from the interoperability established between the SAM[19] and RSV[20] frameworks. We show how these two distributed structures are capable of uniting technologies and hiding the complexity around them, making them easy to be used by the community. Finally, the different supported deployment strategies, tailored not only for monitoring the entire infrastructure but also for monitoring sites and virtual organizations, are presented and the associated operational benefits highlighted.

  2. Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex

    PubMed Central

    Procyk, Emmanuel; Dominey, Peter Ford

    2016-01-01

    Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to represent diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of dorsal anterior cingulate cortex of monkeys which revealed similar network dynamics. We argue that reservoir computing is a

  3. Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex.

    PubMed

    Enel, Pierre; Procyk, Emmanuel; Quilodran, René; Dominey, Peter Ford

    2016-06-01

    Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to represent diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of dorsal anterior cingulate cortex of monkeys which revealed similar network dynamics. We argue that reservoir computing is a
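
    A minimal echo state network, one concrete instance of the reservoir computing framework both records describe: a fixed random recurrent layer mixes current and past inputs (the source of mixed selectivity), and only a linear readout is trained (a sketch, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 300, 1
Win = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

def run(inputs):
    """Drive the fixed random reservoir and collect its states."""
    x, states = np.zeros(n_res), []
    for u in inputs:
        x = np.tanh(Win @ np.atleast_1d(u) + W @ x)
        states.append(x)
    return np.array(states)

# toy context task: from a random input stream, recall the input 5 steps ago
u = rng.uniform(-1, 1, 2000)
X, y = run(u)[100:], u[95:-5]                     # drop washout; delayed target
Wout = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)  # ridge readout
print("train MSE:", np.mean((X @ Wout - y) ** 2))
```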

  4. Analysis of nuclear thermal propulsion systems using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Stubbs, Robert M.; Kim, Suk C.; Papp, John L.

    1993-01-01

    Computational fluid dynamics (CFD) analyses of nuclear rockets with relatively low chamber pressures were carried out to assess the merits of using such low pressures to take advantage of hydrogen dissociation and recombination. The computations, using a Navier-Stokes code with chemical kinetics, describe the flow field in detail, including gas dynamics, thermodynamic and chemical properties, and provide global performance quantities such as specific impulse and thrust. Parametric studies were performed varying chamber temperature, chamber pressure and nozzle size. Chamber temperature was varied between 2700 K and 3600 K, and chamber pressure between 0.1 atm. and 10 atm. Performance advantages associated with lower chamber pressures are shown to occur at the higher chamber temperatures. Viscous losses are greater at lower chamber pressures and can be decreased in larger nozzles where the boundary layer is a smaller fraction of the flow field.

  5. Distributed Adaptive Particle Swarm Optimizer in Dynamic Environment

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E

    2007-01-01

    In the real world, we have to frequently deal with searching and tracking an optimal solution in a dynamical and noisy environment. This demands that the algorithm not only find the optimal solution but also track the trajectory of the changing solution. Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique, which can find an optimal, or near optimal, solution to a numerical and qualitative problem. In PSO algorithm, the problem solution emerges from the interactions between many simple individual agents called particles, which make PSO an inherently distributed algorithm. However, the traditional PSO algorithm lacks the ability to track the optimal solution in a dynamic and noisy environment. In this paper, we present a distributed adaptive PSO (DAPSO) algorithm that can be used for tracking a non-stationary optimal solution in a dynamically changing and noisy environment.
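
    For reference, the standard (static-environment) PSO update that DAPSO builds on, as a minimal numpy sketch; the paper's contribution is the adaptive machinery on top of this for re-tracking an optimum that moves over time:

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
    """Minimize f with a basic particle swarm: velocities blend inertia,
    attraction to each particle's best, and attraction to the global best."""
    rng = np.random.default_rng(0)
    x = rng.uniform(*bounds, (n, dim))          # positions
    v = np.zeros((n, dim))                      # velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()]                    # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g, pval.min()

print(pso(lambda p: np.sum(p ** 2), dim=3))     # optimum near the origin
```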

  6. Dynamic species distribution models from categorical survey data.

    PubMed

    Mieszkowska, Nova; Milligan, Gregg; Burrows, Michael T; Freckleton, Rob; Spencer, Matthew

    2013-11-01

    1. Species distribution models are static models for the distribution of a species, based on Hutchinson's niche concept. They make probabilistic predictions about the distribution of a species, but do not have a temporal interpretation. In contrast, density-structured models based on categorical abundance data make it possible to incorporate population dynamics into species distribution modelling. 2. Using dynamic species distribution models, temporal aspects of a species' distribution can be investigated, including the predictability of future abundance categories and the expected persistence times of local populations, and how these may respond to environmental or anthropogenic drivers. 3. We built density-structured models for two intertidal marine invertebrates, the Lusitanian trochid gastropods Phorcus lineatus and Gibbula umbilicalis, based on 9 years of field data from around the United Kingdom. Abundances were recorded on a categorical scale, and stochastic models for year-to-year changes in abundance category were constructed with winter mean sea surface temperature (SST) and wave fetch (a measure of the exposure of a shore) as explanatory variables. 4. Both species were more likely to be present at sites with high SST, but differed in their responses to wave fetch. Phorcus lineatus had more predictable future abundance and longer expected persistence times than G. umbilicalis. This is consistent with the longer lifespan of P. lineatus. 5. Where data from multiple time points are available, dynamic species distribution models of the kind described here have many applications in population and conservation biology. These include allowing for changes over time when combining historical and contemporary data, and predicting how climate change might alter future abundance conditional on current distributions.
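
    The persistence-time idea can be sketched with a small transition matrix over abundance categories (illustrative numbers, not the paper's fitted models): expected persistence is the mean time to absorption in the "absent" state, t = (I - Q)^{-1} 1, where Q is the transition matrix restricted to occupied states:

```python
import numpy as np

# States: 0 = absent, 1 = low, 2 = high abundance; rows sum to 1.
P = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.60, 0.20],
              [0.05, 0.35, 0.60]])

Q = P[1:, 1:]                                   # transitions among occupied states
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))  # fundamental-matrix formula
print("expected persistence (time steps) from low, high:", t)
```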

  7. Validation of Computational Fluid Dynamics Simulations for Realistic Flows (Preprint)

    DTIC Science & Technology

    2007-12-01

    In these calculations, the reference length is the vortex core radius, and the reference flow conditions are the free-stream conditions with the Mach number M...

  8. Using Soft Computing Technologies for the Simulation of LCAC Dynamics

    DTIC Science & Technology

    2011-09-01

    The simulation produces real-time, time-domain predictions of the vehicle's dynamics as a function of the control signals given by the driver. Keywords: free-running LCAC model, faster-than-real-time simulation, soft computing technology. The LCAC, like all hovercraft, rides on a cushion of air; the air is supplied to the cushion by four centrifugal fans driven by the craft's gas turbine.

  9. Combining Dynamical Decoupling with Fault-Tolerant Quantum Computation

    DTIC Science & Technology

    2009-11-17

    arXiv:0911.3202v1 [quant-ph], 17 Nov 2009. Combining dynamical decoupling with fault-tolerant quantum computation. Hui Khoon Ng (Institute for Quantum Information, California Institute of Technology, Pasadena, CA 91125, USA), Daniel A. Lidar (Departments of Chemistry, Electrical Engineering, and Physics, and Center for Quantum Information Science & Technology, University of Southern California, Los Angeles), and John Preskill (Institute for Quantum Information, California Institute of Technology).

  10. Dynamic Scaling of Island-size Distribution on Anisotropic Surfaces

    NASA Astrophysics Data System (ADS)

    Li, Maozhi; Wang, E. G.; Liu, Banggui; Zhang, Zhenyu

    2002-03-01

    Dynamic scaling of island-size distribution on isotropic and anisotropic surfaces in submonolayer growth is systematically studied using kinetic Monte Carlo simulations. It is found that the island-size distribution in anisotropic submonolayer growth exhibits a general dynamic scaling behavior. An analytic expression is proposed for the scaling function, and is compared with the simulation results. This scaling function not only improves previous results for the isotropic growth (1), but also describes the scaling behavior of the island-size distribution in anisotropic submonolayer growth very well (2). 1. J. G. Amar and F. Family, Phys. Rev. Lett. 74, 2066 (1995). 2. M. Z. Li, E. G. Wang, B. G. Liu, and Z. Y. Zhang, Phys. Rev. Lett. (submitted).

  11. Computer simulation of methanol exchange dynamics around cations and anions

    SciTech Connect

    Roy, Santanu; Dang, Liem X.

    2016-03-03

    In this paper, we present the first computer simulation of methanol exchange dynamics between the first and second solvation shells around different cations and anions. After water, methanol is the most frequently used solvent for ions. Methanol has different structural and dynamical properties than water, so its ion solvation process is different. To this end, we performed molecular dynamics simulations using polarizable potential models to describe methanol-methanol and ion-methanol interactions. In particular, we computed methanol exchange rates by employing the transition state theory, the Impey-Madden-McDonald method, the reactive flux approach, and the Grote-Hynes theory. We observed that methanol exchange occurs at a nanosecond time scale for Na+ and at a picosecond time scale for other ions. We also observed a trend in which, for like charges, the exchange rate is slower for smaller ions because they are more strongly bound to methanol. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. The calculations were carried out using computer resources provided by the Office of Basic Energy Sciences.

  12. High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation

    SciTech Connect

    Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn

    2014-11-14

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method using two popular serial libraries, and application to numerous science datasets.
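
    The serial baseline that the paper parallelizes is readily available; a sketch using scipy (the paper's own algorithm adds the spatial decomposition and the automatic neighbor-point exchange across subdomains):

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

# Serial tessellation of a 3-D point set.  A distributed version decomposes
# space into blocks, tessellates each block like this, and exchanges points
# near block boundaries so cells that straddle boundaries come out correct.
pts = np.random.default_rng(0).random((10_000, 3))
tri = Delaunay(pts)          # tetrahedra indexing into pts
vor = Voronoi(pts)           # dual Voronoi diagram
print("tetrahedra:", len(tri.simplices), "| Voronoi vertices:", len(vor.vertices))
```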

  13. Impact of Load Balancing on Unstructured Adaptive Grid Computations for Distributed-Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak; Simon, Horst D.

    1996-01-01

    The computational requirements for an adaptive solution of unsteady problems change as the simulation progresses. This causes workload imbalance among processors on a parallel machine which, in turn, requires significant data movement at runtime. We present a new dynamic load-balancing framework, called JOVE, that balances the workload across all processors with a global view. Whenever the computational mesh is adapted, JOVE is activated to eliminate the load imbalance. JOVE has been implemented on an IBM SP2 distributed-memory machine in MPI for portability. Experimental results for two model meshes demonstrate that mesh adaption with load balancing gives more than a sixfold improvement over one without load balancing. We also show that JOVE gives a 24-fold speedup on 64 processors compared to sequential execution.

  14. Finite element dynamic analysis on CDC STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lambiotte, J. J., Jr.

    1978-01-01

    Computational algorithms are presented for the finite element dynamic analysis of structures on the CDC STAR-100 computer. The spatial behavior is described using higher-order finite elements. The temporal behavior is approximated by using either the central difference explicit scheme or Newmark's implicit scheme. In each case the analysis is broken up into a number of basic macro-operations. Discussion is focused on the organization of the computation and the mode of storage of different arrays to take advantage of the STAR pipeline capability. The potential of the proposed algorithms is discussed and CPU times are given for performing the different macro-operations for a shell modeled by higher order composite shallow shell elements having 80 degrees of freedom.
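
    The central-difference explicit scheme mentioned here reduces to long vector operations of the kind a pipelined machine favors; a minimal sketch for M u'' + K u = f with a lumped (diagonal) mass matrix and hypothetical two-DOF values:

```python
import numpy as np

def central_difference(M_diag, K, f, u0, v0, dt, nsteps):
    """Explicit central-difference time march for M u'' + K u = f:
    u_{n+1} = 2 u_n - u_{n-1} + dt^2 M^{-1} (f - K u_n)."""
    a0 = (f - K @ u0) / M_diag
    u_prev = u0 - dt * v0 + 0.5 * dt ** 2 * a0   # fictitious step n = -1
    u = u0.copy()
    for _ in range(nsteps):
        a = (f - K @ u) / M_diag                 # accelerations
        u_next = 2 * u - u_prev + dt ** 2 * a
        u_prev, u = u, u_next
    return u

M_diag = np.array([1.0, 1.0])                    # lumped mass (hypothetical)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])         # stiffness (hypothetical)
u = central_difference(M_diag, K, np.zeros(2),
                       np.array([1.0, 0.0]), np.zeros(2), dt=0.01, nsteps=1000)
print(u)
```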

  15. Computational fluid dynamics capability for the solid fuel ramjet projectile

    NASA Astrophysics Data System (ADS)

    Nusca, Michael J.; Chakravarthy, Sukumar R.; Goldberg, Uriel C.

    1988-12-01

    A computational fluid dynamics solution of the Navier-Stokes equations has been applied to the internal and external flow of inert solid-fuel ramjet projectiles. Computational modeling reveals internal flowfield details not attainable by flight or wind tunnel measurements, thus contributing to the current investigation into the flight performance of solid-fuel ramjet projectiles. The present code employs numerical algorithms termed total variation diminishing (TVD). Computational solutions indicate the importance of several special features of the code, including the zonal grid framework, the TVD scheme, and a recently developed backflow turbulence model. The solutions are compared with results of internal surface pressure measurements. As demonstrated by these comparisons, the use of a backflow turbulence model distinguishes between satisfactory and poor flowfield predictions.

  16. Applying uncertainty quantification to multiphase flow computational fluid dynamics

    SciTech Connect

    Gel, A; Garg, R; Tong, C; Shahnam, M; Guenther, C

    2013-07-01

    Multiphase computational fluid dynamics plays a major role in design and optimization of fossil fuel based reactors. There is a growing interest in accounting for the influence of uncertainties associated with physical systems to increase the reliability of computational simulation based engineering analysis. The U.S. Department of Energy's National Energy Technology Laboratory (NETL) has recently undertaken an initiative to characterize uncertainties associated with computer simulation of reacting multiphase flows encountered in energy producing systems such as a coal gasifier. The current work presents the preliminary results in applying non-intrusive parametric uncertainty quantification and propagation techniques with NETL's open-source multiphase computational fluid dynamics software MFIX. For this purpose an open-source uncertainty quantification toolkit, PSUADE developed at the Lawrence Livermore National Laboratory (LLNL) has been interfaced with MFIX software. In this study, the sources of uncertainty associated with numerical approximation and model form have been neglected, and only the model input parametric uncertainty with forward propagation has been investigated by constructing a surrogate model based on data-fitted response surface for a multiphase flow demonstration problem. Monte Carlo simulation was employed for forward propagation of the aleatory type input uncertainties. Several insights gained based on the outcome of these simulations are presented such as how inadequate characterization of uncertainties can affect the reliability of the prediction results. Also a global sensitivity study using Sobol' indices was performed to better understand the contribution of input parameters to the variability observed in response variable.
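
    Non-intrusive forward propagation in miniature, with a hypothetical quadratic surrogate standing in for the data-fitted response surface: sample the uncertain inputs, evaluate the surrogate, and summarize the induced response distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(restitution, friction):
    # hypothetical response surface standing in for the expensive CFD code
    return 1.0 + 0.8 * restitution - 0.3 * friction + 0.5 * restitution * friction

n = 100_000
restitution = rng.uniform(0.8, 1.0, n)   # aleatory input uncertainties
friction = rng.uniform(0.2, 0.6, n)      # (hypothetical ranges)
y = surrogate(restitution, friction)     # Monte Carlo forward propagation
print(f"response mean = {y.mean():.3f}, std = {y.std():.3f}, "
      f"95% interval = [{np.quantile(y, 0.025):.3f}, {np.quantile(y, 0.975):.3f}]")
```

    The same sample set could feed a Sobol'-style sensitivity analysis to apportion the response variance between the two inputs and their interaction.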

  17. A computer code for beam dynamics simulations in SFRFQ structure

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Chen, J. E.; Lu, Y. R.; Yan, X. Q.; Zhu, K.; Fang, J. X.; Guo, Z. Y.

    2007-03-01

    A computer code (SFRFQCODEv1.0) is developed to analyze the beam dynamics of Separated Function Radio Frequency Quadruples (SFRFQ) structure. Calculations show that the transverse and longitudinal stability can be ensured by selecting proper dynamic and structure parameters. This paper describes the beam dynamical mechanism of SFRFQ, and presents a design example of SFRFQ cavity, which will be used as a post accelerator of a 26 MHz 1 MeV O + Integrated Split Ring (ISR) RFQ and accelerate O + from 1 to 1.5 MeV. Three electrostatic quadruples are adopted to realize the transverse beam matching from ISR RFQ to SFRFQ cavity. This setting is also useful for the beam size adjustment and its applications.

  18. Dynamic analysis of spur gears using computer program DANST

    NASA Astrophysics Data System (ADS)

    Oswald, Fred B.; Lin, Hsiang Hsi; Liou, Chuen-Huei; Valco, Mark J.

    1993-06-01

    DANST is a computer program for static and dynamic analysis of spur gear systems. The program can be used for parametric studies to predict the effect on dynamic load and tooth bending stress of spur gears due to operating speed, torque, stiffness, damping, inertia, and tooth profile. DANST performs geometric modeling and dynamic analysis for low- or high-contact-ratio spur gears. DANST can simulate gear systems with contact ratio ranging from one to three. It was designed to be easy to use, and it is extensively documented by comments in the source code. This report describes the installation and use of DANST. It covers input data requirements and presents examples. The report also compares DANST predictions for gear tooth loads and bending stress to experimental and finite element results.

  19. Dynamic analysis of spur gears using computer program DANST

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Lin, Hsiang Hsi; Liou, Chuen-Huei; Valco, Mark J.

    1993-01-01

    DANST is a computer program for static and dynamic analysis of spur gear systems. The program can be used for parametric studies to predict the effect on dynamic load and tooth bending stress of spur gears due to operating speed, torque, stiffness, damping, inertia, and tooth profile. DANST performs geometric modeling and dynamic analysis for low- or high-contact-ratio spur gears. DANST can simulate gear systems with contact ratio ranging from one to three. It was designed to be easy to use, and it is extensively documented by comments in the source code. This report describes the installation and use of DANST. It covers input data requirements and presents examples. The report also compares DANST predictions for gear tooth loads and bending stress to experimental and finite element results.

  20. Dynamic analysis of spur gears using computer program DANST

    SciTech Connect

    Oswald, F.B.; Lin, H.H.; Liou, Chuenheui; Valco, M.J.

    1993-06-01

    DANST is a computer program for static and dynamic analysis of spur gear systems. The program can be used for parametric studies to predict the effect on dynamic load and tooth bending stress of spur gears due to operating speed, torque, stiffness, damping, inertia, and tooth profile. DANST performs geometric modeling and dynamic analysis for low- or high-contact-ratio spur gears. DANST can simulate gear systems with contact ratio ranging from one to three. It was designed to be easy to use, and it is extensively documented by comments in the source code. This report describes the installation and use of DANST. It covers input data requirements and presents examples. The report also compares DANST predictions for gear tooth loads and bending stress to experimental and finite element results. 14 refs.

  1. Multi-heuristic dynamic task allocation using genetic algorithms in a heterogeneous distributed system.

    PubMed

    Page, Andrew J; Keane, Thomas M; Naughton, Thomas J

    2010-07-01

    We present a multi-heuristic evolutionary task allocation algorithm to dynamically map tasks to processors in a heterogeneous distributed system. It utilizes a genetic algorithm, combined with eight common heuristics, in an effort to minimize the total execution time. It operates on batches of unmapped tasks and can preemptively remap tasks to processors. The algorithm has been implemented on a Java distributed system and evaluated with a set of six problems from the areas of bioinformatics, biomedical engineering, computer science and cryptography. Experiments using up to 150 heterogeneous processors show that the algorithm achieves better efficiency than other state-of-the-art heuristic algorithms.
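
    A minimal sketch of the general scheme (not the paper's exact operators or its eight heuristics): a chromosome maps each task to a processor, fitness is the makespan on a heterogeneous system, and selection, crossover, and mutation search the mapping space:

```python
import random

random.seed(0)
N_TASKS, N_PROCS, POP, GENS = 40, 6, 60, 200
# heterogeneous costs: time of task t on processor p = base work / proc speed
cost = [[random.uniform(1, 10) / speed
         for speed in (0.5, 1, 1, 2, 2, 4)] for _ in range(N_TASKS)]

def makespan(chrom):
    """Finish time of the busiest processor under this task-to-proc mapping."""
    load = [0.0] * N_PROCS
    for t, p in enumerate(chrom):
        load[p] += cost[t][p]
    return max(load)

pop = [[random.randrange(N_PROCS) for _ in range(N_TASKS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=makespan)
    elite = pop[:POP // 4]                             # truncation selection
    children = []
    while len(elite) + len(children) < POP:
        a, b = random.sample(elite, 2)
        cut = random.randrange(N_TASKS)                # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:                      # mutation: remap one task
            child[random.randrange(N_TASKS)] = random.randrange(N_PROCS)
        children.append(child)
    pop = elite + children

print("best makespan:", round(makespan(min(pop, key=makespan)), 2))
```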

  2. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.

  3. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    NASA Astrophysics Data System (ADS)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving, yet effective management of a scalable application in a heterogeneous distributed computing environment remains a non-trivial issue; control systems that operate in networks are especially affected. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automation of the problem solving. Advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of multi-agent control for the systems in a parallel mode with various degrees of detailed elaboration.

  4. Smoluchowski coagulation models of sea ice thickness distribution dynamics

    NASA Astrophysics Data System (ADS)

    Godlovitch, D.; Illner, R.; Monahan, A.

    2011-12-01

    Sea ice thickness distributions display a ubiquitous exponential decrease with thickness. This tail characterizes the range of ice thickness produced by mechanical redistribution of ice through the process of ridging, rafting, and shearing. We investigate how well the thickness distribution can be simulated by representing mechanical redistribution as a generalized stacking process. Such processes are naturally described by a well-studied class of models known as Smoluchowski Coagulation Models (SCMs), which describe the dynamics of a population of fixed-mass "particles" which combine in pairs to form a "particle" with the combined mass of the constituent pair at a rate which depends on the mass of the interacting particles. Like observed sea ice thickness distributions, the mass distribution of the populations generated by SCMs has an exponential or quasi-exponential form. We use SCMs to model sea ice, identifying mass-increasing particle combinations with thickness-increasing ice redistribution processes. Our model couples an SCM component with a thermodynamic component and generates qualitatively accurate thickness distributions with a variety of rate kernels. Our results suggest that the exponential tail of the sea ice thickness distribution arises from the nature of the ridging process, rather than specific physical properties of sea ice or the spatial arrangement of floes, and that the relative strengths of the dynamic and thermodynamic processes are key in accurately simulating the rate at which the sea ice thickness tail drops off with thickness.
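
    To make the stacking picture concrete, here is a deliberately crude stochastic Smoluchowski sketch (constant kernel, with a unit-thickness source standing in for thermodynamic growth). The coupled model in the paper uses richer, mass-dependent kernels and a real thermodynamic component; none of the numbers below are from it.

      # Crude stochastic Smoluchowski sketch; constant kernel, illustrative only.
      import random
      from collections import Counter

      rng = random.Random(1)
      population = [1] * 500          # floes, each with unit thickness initially

      for step in range(20000):
          # coagulation: a random pair "ridges" into one floe of combined thickness
          i, j = rng.sample(range(len(population)), 2)
          population[i] += population[j]
          population.pop(j)
          # source: thermodynamic growth replenishes thin ice, keeping N roughly fixed
          population.append(1)

      histogram = Counter(population)
      for h in sorted(histogram)[:10]:            # inspect the thin end of the tail
          print(f"thickness {h}: {histogram[h]} floes")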

  5. Microphysical and Dynamical Influences on Cirrus Cloud Optical Depth Distributions

    SciTech Connect

    Kay, J.; Baker, M.; Hegg, D.

    2005-03-18

    Cirrus cloud inhomogeneity occurs at scales greater than the cirrus radiative smoothing scale (~100 m), but less than typical global climate model (GCM) resolutions (~300 km). Therefore, calculating cirrus radiative impacts in GCMs requires an optical depth distribution parameterization. Radiative transfer calculations are sensitive to optical depth distribution assumptions (Fu et al. 2000; Carlin et al. 2002). Using Raman lidar observations, we quantify cirrus timescales and optical depth distributions at the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site in Lamont, OK (USA). We demonstrate the sensitivity of outgoing longwave radiation (OLR) calculations to assumed optical depth distributions and to the temporal resolution of optical depth measurements. Recent work has highlighted the importance of dynamics and nucleation for cirrus evolution (Haag and Karcher 2004; Karcher and Strom 2003). We need to understand the main controls on cirrus optical depth distributions to incorporate cirrus variability into model radiative transfer calculations. With an explicit ice microphysics parcel model, we aim to understand the influence of ice nucleation mechanism and imposed dynamics on cirrus optical depth distributions.

  6. Computational Fluid Dynamics Program at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    1989-01-01

    The Computational Fluid Dynamics (CFD) Program at NASA Ames Research Center is reviewed and discussed. The technical elements of the CFD Program are listed and briefly discussed. These elements include algorithm research, research and pilot code development, scientific visualization, advanced surface representation, volume grid generation, and numerical optimization. Next, the discipline of CFD is briefly discussed and related to other areas of research at NASA Ames including experimental fluid dynamics, computer science research, computational chemistry, and numerical aerodynamic simulation. These areas combine with CFD to form a larger area of research, which might collectively be called computational technology. The ultimate goal of computational technology research at NASA Ames is to increase the physical understanding of the world in which we live, solve problems of national importance, and increase the technical capabilities of the aerospace community. Next, the major programs at NASA Ames that either use CFD technology or perform research in CFD are listed and discussed. Briefly, this list includes turbulent/transition physics and modeling, high-speed real gas flows, interdisciplinary research, turbomachinery demonstration computations, complete aircraft aerodynamics, rotorcraft applications, powered lift flows, high alpha flows, multiple body aerodynamics, and incompressible flow applications. Some of the individual problems actively being worked in each of these areas are listed to help define the breadth or extent of CFD involvement in each of these major programs. State-of-the-art examples of various CFD applications are presented to highlight most of these areas. The main emphasis of this portion of the presentation is on examples which will not otherwise be treated at this conference by the individual presentations. Finally, a list of principal current limitations and expected future directions is given.

  7. Stationary transmission distribution of random spike trains by dynamical synapses

    NASA Astrophysics Data System (ADS)

    Hahnloser, Richard H.

    2003-02-01

    Many nonlinearities in neural media are strongly dependent on spike timing jitter and intrinsic dynamics of synaptic transmission. Here we are interested in the stationary density of evoked postsynaptic potentials transmitted by depressing synapses for Poisson spike trains of fixed mean rates. We present a nonperturbative iterative method for computing the stationary density over increasing intervals. We conclude by showing how this method generalizes to other types of synapses, such as facilitating and hybrid synapses.
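
    The flavor of the problem can be reproduced in a few lines: sample the stationary amplitude distribution of a Tsodyks-Markram-style depressing synapse driven by a Poisson train, here by long simulation rather than the paper's nonperturbative iterative method. Parameters are illustrative.

      # Stationary PSP amplitudes at a depressing synapse under Poisson drive.
      # Tsodyks-Markram-style depression; sampled by simulation, not the paper's
      # iterative method. Parameter values are illustrative.
      import random, math

      rng = random.Random(0)
      rate, tau_rec, U = 20.0, 0.5, 0.4   # spike rate (Hz), recovery (s), release fraction
      R = 1.0                              # available synaptic resources
      amplitudes = []
      for _ in range(100_000):
          dt = rng.expovariate(rate)                       # Poisson inter-spike interval
          R = 1.0 - (1.0 - R) * math.exp(-dt / tau_rec)    # recovery between spikes
          amplitudes.append(U * R)                         # evoked PSP amplitude
          R *= (1.0 - U)                                   # depression: resources consumed

      mean = sum(amplitudes) / len(amplitudes)
      print(f"mean stationary amplitude: {mean:.3f}")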

  8. Smoluchowski Coagulation Models Of Sea Ice Thickness Distribution Dynamics

    NASA Astrophysics Data System (ADS)

    Godlovitch, D.; Illner, R.; Monahan, A. H.

    2011-12-01

    Sea ice thickness distributions display a ubiquitous exponential decrease with thickness. This tail characterises the range of ice thickness produced by mechanical redistribution of ice through the process of ridging, rafting, and shearing. It is possible to simulate thickness distribution dynamics by representing mechanical redistribution as a generalized stacking process. Stacking processes may be described by a class of models known as Smoluchowski Coagulation models, which originated in Statistical Mechanics and describe the dynamics of a population of fixed-mass "particles" which combine in pairs to form a "particle" with the combined mass of the constituent pair at a rate which depends on the mass of the interacting particles. We use SCMs to model sea ice, identifying mass-increasing particle combinations with thickness-increasing ice redistribution processes. Our model couples an SCM component with a thermodynamic component and generates qualitatively accurate thickness distributions. The model behaviour suggests that the exponential tail of the sea ice thickness distribution arises from the nature of the ridging process, rather than specific physical properties of sea ice or the spatial arrangement of floes, and that the relative strengths of the dynamic and thermodynamic processes are key in accurately simulating the rate at which the sea ice thickness tail drops off with thickness.

  9. Maintaining Traceability in an Evolving Distributed Computing Environment

    NASA Astrophysics Data System (ADS)

    Collier, I.; Wartel, R.

    2015-12-01

    The management of risk is fundamental to the operation of any distributed computing infrastructure. Identifying the cause of incidents is essential to prevent them from re-occurring. In addition, it is a goal to contain the impact of an incident while keeping services operational. For the response to incidents to be acceptable, it needs to be commensurate with the scale of the problem. The minimum level of traceability for distributed computing infrastructure usage is to be able to identify the source of all actions (executables, file transfers, pilot jobs, portal jobs, etc.) and the individual who initiated them. In addition, sufficiently fine-grained controls, such as blocking the originating user and monitoring to detect abnormal behaviour, are necessary for keeping services operational. It is essential to be able to understand the cause and to fix any problems before re-enabling access for the user. The aim is to be able to answer the basic questions of who, what, where, and when concerning any incident. This requires retaining all relevant information, including timestamps and the digital identity of the user, sufficient to identify, for each service instance, every security event, including at least the following: connect, authenticate, authorize (including identity changes), and disconnect. In traditional grid infrastructures (WLCG, EGI, OSG, etc.) best practices and procedures for gathering and maintaining the information required to maintain traceability are well established. In particular, sites collect and store the information required to ensure traceability of events at their sites. With the increased use of virtualisation and private and public clouds for HEP workloads, established procedures, which are unable to see 'inside' running virtual machines, no longer capture all the information required. Maintaining traceability will at least involve a shift of responsibility from sites to Virtual Organisations (VOs), bringing with it new requirements for their

  10. High-performance computing, high-speed networks, and configurable computing environments: progress toward fully distributed computing.

    PubMed

    Johnston, W E; Jacobson, V L; Loken, S C; Robertson, D W; Tierney, B L

    1992-01-01

    The next several years will see the maturing of a collection of technologies that will enable fully and transparently distributed computing environments. Networks will be used to configure independent computing, storage, and I/O elements into "virtual systems" that are optimal for solving a particular problem. This environment will make the most powerful computing systems those that are logically assembled from network-based components and will also make those systems available to a widespread audience. Anticipating that the necessary technology and communications infrastructure will be available in the next 3 to 5 years, we are developing and demonstrating prototype applications that test and exercise the currently available elements of this configurable environment. The Lawrence Berkeley Laboratory (LBL) Information and Computing Sciences and Research Medicine Divisions have collaborated with the Pittsburgh Supercomputer Center to demonstrate one distributed application that illuminates the issues and potential of using networks to configure virtual systems. This application allows the interactive visualization of large three-dimensional (3D) scalar fields (voxel data sets) by using a network-based configuration of heterogeneous supercomputers and workstations. The specific test case is visualization of 3D magnetic resonance imaging (MRI) data. The virtual system architecture consists of a Connection Machine-2 (CM-2) that performs surface reconstruction from the voxel data, a Cray Y-MP that renders the resulting geometric data into an image, and a workstation that provides the display of the image and the user interface for specifying the parameters for the geometry generation and 3D viewing. These three elements are configured into a virtual system by using several different network technologies. This paper reviews the current status of the software, hardware, and communications technologies that are needed to enable this configurable environment. These

  11. Universality in survivor distributions: Characterizing the winners of competitive dynamics

    NASA Astrophysics Data System (ADS)

    Luck, J. M.; Mehta, A.

    2015-11-01

    We investigate the survivor distributions of a spatially extended model of competitive dynamics in different geometries. The model consists of a deterministic dynamical system of individual agents at specified nodes, which might or might not survive the predatory dynamics: all stochasticity is brought in by the initial state. Every such initial state leads to a unique and extended pattern of survivors and nonsurvivors, which is known as an attractor of the dynamics. We show that the number of such attractors grows exponentially with system size, so that their exact characterization is limited to only very small systems. Given this, we construct an analytical approach based on inhomogeneous mean-field theory to calculate survival probabilities for arbitrary networks. This powerful (albeit approximate) approach shows how universality arises in survivor distributions via a key concept—the dynamical fugacity. Remarkably, in the large-mass limit, the survivor probability of a node becomes independent of network geometry and assumes a simple form which depends only on its mass and degree.

  12. Dynamics of Biofilm Regrowth in Drinking Water Distribution Systems

    PubMed Central

    Husband, S.; Loza, V.; Boxall, J.

    2016-01-01

    ABSTRACT The majority of biomass within water distribution systems is in the form of attached biofilm. This is known to be central to drinking water quality degradation following treatment, yet little understanding of the dynamics of these highly heterogeneous communities exists. This paper presents original information on such dynamics, with findings demonstrating patterns of material accumulation, seasonality, and influential factors. Rigorous flushing operations repeated over a 1-year period on an operational chlorinated system in the United Kingdom are presented here. Intensive monitoring and sampling were undertaken, including time-series turbidity and detailed microbial analysis using 16S rRNA Illumina MiSeq sequencing. The results show that bacterial dynamics were influenced by differences in the supplied water and by the material remaining attached to the pipe wall following flushing. Turbidity, metals, and phosphate were the main factors correlated with the distribution of bacteria in the samples. Coupled with the lack of inhibition of biofilm development due to residual chlorine, this suggests that limiting inorganic nutrients, rather than organic carbon, might be a viable component in treatment strategies to manage biofilms. The research also showed that repeat flushing exerted beneficial selective pressure, giving another reason for flushing being a viable advantageous biofilm management option. This work advances our understanding of microbiological processes in drinking water distribution systems and helps inform strategies to optimize asset performance. IMPORTANCE This research provides novel information regarding the dynamics of biofilm formation in real drinking water distribution systems made of different materials. This new knowledge on microbiological process in water supply systems can be used to optimize the performance of the distribution network and to guarantee safe and good-quality drinking water to consumers. PMID:27208119

  13. Evolution of geometrically necessary dislocation density from computational dislocation dynamics

    NASA Astrophysics Data System (ADS)

    Guruprasad, P. J.; Benzerga, A. A.

    2009-07-01

    This paper presents a method for calculating geometrically necessary dislocation (GND) densities in dislocation dynamics simulations. The evolution of suitably defined averages of GND density, as well as maps showing the spatially nonuniform distribution of GNDs, are analyzed under uniaxial loading. Focus is laid on the resolution dependence of the very notion of GND density, its dependence upon the physical dimensions of plastically deformed specimens, and its sensitivity to initial conditions. Acknowledgments: Support from the National Science Foundation (CMMI-0748187) is gratefully acknowledged.

  14. A dynamical model describing stock market price distributions

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume; Montero, Miquel; Porrà, Josep M.

    2000-08-01

    High-frequency data in finance have led to a deeper understanding of the probability distributions of market prices. Several facts seem to be well established by empirical evidence. Specifically, probability distributions have the following properties: (i) They are not Gaussian and their center is well adjusted by Lévy distributions. (ii) They are long-tailed but have finite moments of any order. (iii) They are self-similar on many time scales. Finally, (iv) at small time scales, price volatility follows a non-diffusive behavior. We extend Merton's ideas on speculative price formation and present a dynamical model resulting in a characteristic function that explains in a natural way all of the above features. The knowledge of such a distribution opens a new and useful way of quantifying financial risk. The results of the model agree, with a high degree of accuracy, with empirical data taken from historical records of the Standard & Poor's 500 cash index.
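
    For illustration only, the snippet below simulates a Merton-style jump-diffusion, a standard way to produce the fat-tailed, non-Gaussian returns the paper sets out to explain; the paper's actual model is specified through its characteristic function and is not reproduced here. All parameter values are arbitrary.

      # Merton-style jump-diffusion: illustrative fat tails, not the paper's model.
      import random, math

      rng = random.Random(42)
      mu, sigma, lam, jump_sd, dt = 0.05, 0.2, 5.0, 0.03, 1 / 252

      def poisson(rate):
          # inverse-transform (Knuth) sampling of a Poisson count
          l, k, p = math.exp(-rate), 0, 1.0
          while True:
              p *= rng.random()
              if p <= l:
                  return k
              k += 1

      def daily_return():
          diffusive = (mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
          jumps = sum(rng.gauss(0, jump_sd) for _ in range(poisson(lam * dt)))
          return diffusive + jumps

      returns = [daily_return() for _ in range(100_000)]
      m = sum(returns) / len(returns)
      var = sum((r - m) ** 2 for r in returns) / len(returns)
      kurt = sum((r - m) ** 4 for r in returns) / len(returns) / var**2
      print(f"excess kurtosis: {kurt - 3:.2f}  (0 for a Gaussian)")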

  15. Protein folding by distributed computing and the denatured state ensemble.

    PubMed

    Marianayagam, Neelan J; Fawzi, Nicolas L; Head-Gordon, Teresa

    2005-11-15

    The distributed computing (DC) paradigm in conjunction with the folding@home (FH) client server has been used to study the folding kinetics of small peptides and proteins, giving excellent agreement with experimentally measured folding rates, although pathways sampled in these simulations are not always consistent with the folding mechanism. In this study, we use a coarse-grain model of protein L, whose two-state kinetics have been characterized in detail by using long-time equilibrium simulations, to rigorously test a FH protocol using approximately 10,000 short-time, uncoupled folding simulations starting from an extended state of the protein. We show that the FH results give non-Poisson distributions and early folding events that are unphysical, whereas longer folding events experience a correct barrier to folding but are not representative of the equilibrium folding ensemble. Using short-time, uncoupled folding simulations started from an equilibrated denatured state ensemble (DSE), we also do not get agreement with the equilibrium two-state kinetics because of overrepresented folding events arising from higher energy subpopulations in the DSE. The DC approach using uncoupled short trajectories can make contact with traditionally measured experimental rates and folding mechanism when starting from an equilibrated DSE, when the simulation time is long enough to sample the lowest energy states of the unfolded basin and the simulated free-energy surface is correct. However, the DC paradigm, together with faster time-resolved and single-molecule experiments, can also reveal the breakdown in the two-state approximation due to observation of folding events from higher energy subpopulations in the DSE.
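
    The rate estimator under scrutiny is easy to state: with N short uncoupled trajectories of length t and n folding events, a Poisson (single-exponential) process gives k ≈ n/(N·t). The sketch below, with purely hypothetical numbers, also shows how an overrepresented fast subpopulation, as from a non-equilibrated starting ensemble, biases the estimate upward.

      # Distributed-computing rate estimator k = n_folded / (N * t), and the bias
      # introduced by a fast "unphysical" subpopulation. Numbers are illustrative.
      import random

      rng = random.Random(7)
      k_true, t_short, N = 1e-3, 50.0, 10_000   # rate (1/ns), trajectory length (ns)

      def first_passage(fast_fraction=0.0):
          # optionally mix in a fast subpopulation folding 100x faster
          rate = 100 * k_true if rng.random() < fast_fraction else k_true
          return rng.expovariate(rate)

      for frac in (0.0, 0.05):
          folded = sum(first_passage(frac) <= t_short for _ in range(N))
          print(f"fast fraction {frac:.0%}: estimated k = {folded / (N * t_short):.2e}"
                f"  (true {k_true:.0e})")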

  16. Systems analysis of the space shuttle. [communication systems, computer systems, and power distribution

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.; Oh, S. J.; Thau, F.

    1975-01-01

    Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.

  17. Use of the Web by a Distributed Research group Performing Distributed Computing

    NASA Astrophysics Data System (ADS)

    Burke, David A.; Peterkin, Robert E.

    2001-06-01

    A distributed research group that uses distributed computers faces a spectrum of challenges--some of which can be met by using various electronic means of communication. The particular challenge of our group involves three physically separated research entities. We have had to link two collaborating groups at AFRL and NRL together for software development, and the same AFRL group with a LANL group for software applications. We are developing and using a pair of general-purpose, portable, parallel, unsteady, plasma physics simulation codes. The first collaboration is centered around a formal weekly video teleconference on relatively inexpensive equipment that we have set up in convenient locations in our respective laboratories. The formal virtual meetings are augmented with informal virtual meetings as the need arises. Both collaborations share research data in a variety of forms on a secure URL that is set up behind the firewall at the AFRL. Of course, a computer-generated animation is a particularly efficient way of displaying results from time-dependent numerical simulations, so we generally like to post such animations (along with proper documentation) on our web page. In this presentation, we will discuss some of our accomplishments and disappointments.

  18. Computational Fluid Dynamic simulations of pipe elbow flow.

    SciTech Connect

    Homicz, Gregory Francis

    2004-08-01

    One problem facing today's nuclear power industry is flow-accelerated corrosion and erosion in pipe elbows. The Korean Atomic Energy Research Institute (KAERI) is performing experiments in their Flow-Accelerated Corrosion (FAC) test loop to better characterize these phenomena, and develop advanced sensor technologies for the condition monitoring of critical elbows on a continuous basis. In parallel with these experiments, Sandia National Laboratories is performing Computational Fluid Dynamic (CFD) simulations of the flow in one elbow of the FAC test loop. The simulations are being performed using the FLUENT commercial software developed and marketed by Fluent, Inc. The model geometry and mesh were created using the GAMBIT software, also from Fluent, Inc. This report documents the results of the simulations that have been made to date; baseline results employing the RNG k-ε turbulence model are presented. The predicted value for the diametrical pressure coefficient is in reasonably good agreement with published correlations. Plots of the velocities, pressure field, wall shear stress, and turbulent kinetic energy adjacent to the wall are shown within the elbow section. Somewhat to our surprise, these indicate that the maximum values of both wall shear stress and turbulent kinetic energy occur near the elbow entrance, on the inner radius of the bend. Additional simulations were performed for the same conditions, but with the RNG k-ε model replaced by either the standard k-ε or the realizable k-ε turbulence model. The predictions using the standard k-ε model are quite similar to those obtained in the baseline simulation. However, with the realizable k-ε model, more significant differences are evident. The maximums in both wall shear stress and turbulent kinetic energy now appear on the outer radius, near the elbow exit, and are ~11% and ~14% greater, respectively, than those predicted in the baseline calculation.

  19. Computational approaches to analyse and predict small molecule transport and distribution at cellular and subcellular levels.

    PubMed

    Min, Kyoung Ah; Zhang, Xinyuan; Yu, Jing-yu; Rosania, Gus R

    2014-01-01

    Quantitative structure-activity relationship (QSAR) studies and mechanistic mathematical modeling approaches have been independently employed for analysing and predicting the transport and distribution of small molecule chemical agents in living organisms. Both of these computational approaches have been useful for interpreting experiments measuring the transport properties of small molecule chemical agents, in vitro and in vivo. Nevertheless, mechanistic cell-based pharmacokinetic models have been especially useful to guide the design of experiments probing the molecular pathways underlying small molecule transport phenomena. Unlike QSAR models, mechanistic models can be integrated from microscopic to macroscopic levels, to analyse the spatiotemporal dynamics of small molecule chemical agents from intracellular organelles to whole organs, well beyond the experiments and training data sets upon which the models are based. Based on differential equations, mechanistic models can also be integrated with other differential equations-based systems biology models of biochemical networks or signaling pathways. Although the origin and evolution of mathematical modeling approaches aimed at predicting drug transport and distribution has occurred independently from systems biology, we propose that the incorporation of mechanistic cell-based computational models of drug transport and distribution into a systems biology modeling framework is a logical next step for the advancement of systems pharmacology research.
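
    As a toy example of the mechanistic, differential-equation style of model the authors advocate (not a model from the paper), the sketch below integrates passive permeation of a small molecule into the cytosol and then into an organelle; all parameters and compartment choices are hypothetical.

      # Minimal mechanistic cell-based PK sketch; hypothetical parameters.
      from scipy.integrate import solve_ivp

      P_cell, P_org = 1e-4, 1e-5      # membrane permeabilities (cm/s)
      A_cell, A_org = 1e-5, 1e-7      # membrane areas (cm^2)
      V_cell, V_org = 4e-9, 4e-12     # compartment volumes (mL)
      C_out = 1.0                      # constant extracellular concentration (uM)

      def rates(t, y):
          c_cyto, c_org = y
          j_in = P_cell * A_cell * (C_out - c_cyto)    # flux across plasma membrane
          j_org = P_org * A_org * (c_cyto - c_org)     # flux into the organelle
          return [(j_in - j_org) / V_cell, j_org / V_org]

      sol = solve_ivp(rates, (0, 3600), [0.0, 0.0], max_step=10.0)
      print(f"after 1 h: cytosol {sol.y[0, -1]:.3f} uM, organelle {sol.y[1, -1]:.3f} uM")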

  20. Above the cloud computing orbital services distributed data model

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2014-05-01

    Technology miniaturization and system architecture advancements have created an opportunity to significantly lower the cost of many types of space missions by sharing capabilities between multiple spacecraft. Historically, most spacecraft have been atomic entities that (aside from their communications with and tasking by ground controllers) operate in isolation. Several notable examples exist; however, these are purpose-designed systems that collaborate to perform a single goal. The above the cloud computing (ATCC) concept aims to create ad-hoc collaboration between service provider and consumer craft. Consumer craft can procure processing, data transmission, storage, imaging and other capabilities from provider craft. Because of onboard storage limitations, communications link capability limitations and limited windows of communication, data relevant to or required for various operations may span multiple craft. This paper presents a model for the identification, storage and accessing of this data. This model includes appropriate identification features for this highly distributed environment. It also deals with business model constraints such as data ownership, retention and the rights of the storing craft to access, resell, transmit or discard the data in its possession. The model ensures data integrity and confidentiality (to the extent applicable to a given data item), deals with unique constraints of the orbital environment and tags data with business model (contractual) obligation data.

  1. Parallelizing Sylvester-like operations on a distributed memory computer

    SciTech Connect

    Hu, D.Y.; Sorensen, D.C.

    1994-12-31

    Discretization of linear operators arising in applied mathematics often leads to matrices with the following structure: M(x) = (D ⊗ A + B ⊗ I_n + V)x, where x ∈ R^{mn}, B, D ∈ R^{n×n}, A ∈ R^{m×m}, and V ∈ R^{mn×mn}; both D and V are diagonal. For notational convenience, the authors assume that both A and B are symmetric; all the results throughout this paper can easily be extended to the case of general A and B. The linear operator on R^{mn} defined above can be viewed as a generalization of the Sylvester operator S(x) = (I_m ⊗ A + B ⊗ I_n)x. The authors therefore refer to it as a Sylvester-like operator, and the schemes discussed in this paper also apply to the Sylvester operator. In this paper, the authors present a SIMD scheme for parallelization of the Sylvester-like operator on a distributed memory computer. This scheme is designed to approach the best possible efficiency by avoiding unnecessary communication among processors.
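
    The structure being exploited can be illustrated by applying such an operator without ever forming the mn-by-mn Kronecker matrix. The sketch below assumes column-major vec ordering, under which the B term appears as B ⊗ I_m (the abstract's indexing may reflect a different vec convention); shapes and data are arbitrary.

      # Apply a Sylvester-like operator through reshapes instead of Kronecker
      # products; verified against the explicitly assembled matrix.
      import numpy as np

      m, n = 4, 3
      rng = np.random.default_rng(0)
      A = rng.standard_normal((m, m)); A = A + A.T        # symmetric, as in the paper
      B = rng.standard_normal((n, n)); B = B + B.T
      d = rng.standard_normal(n)                           # diagonal of D
      v = rng.standard_normal(m * n)                       # diagonal of V

      def apply_op(x):
          X = x.reshape((m, n), order="F")                 # unvec (column-major)
          Y = A @ X @ np.diag(d) + X @ B                   # (D ⊗ A)x and (B ⊗ I)x terms
          return Y.reshape(-1, order="F") + v * x          # add the diagonal V term

      # check against the explicitly assembled Kronecker matrix
      M = np.kron(np.diag(d), A) + np.kron(B, np.eye(m)) + np.diag(v)
      x = rng.standard_normal(m * n)
      print(np.allclose(apply_op(x), M @ x))               # True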

  2. Automatic distribution of vision-tasks on computing clusters

    NASA Astrophysics Data System (ADS)

    Müller, Thomas; Tran, Binh An; Knoll, Alois

    2011-01-01

    In this paper a consistent, efficient, and yet convenient system for parallel computer vision, and in fact also real-time actuator control, is proposed. The system implements the multi-agent paradigm and a blackboard information storage. This, in combination with a generic interface for hardware abstraction and integration of external software components, is set up on the basis of the message passing interface (MPI). The system allows for data- and task-parallel processing, and supports both synchronous communication strategies, in which data exchange is triggered by events, and asynchronous strategies, in which data is polled. Also, by duplication of processing units (agents), redundant processing is possible to achieve greater robustness. As the system automatically distributes the task units to available resources, and a monitoring concept allows for the combination of tasks and their composition into complex processes, efficient parallel vision/robotics applications can be developed quickly. Multiple vision-based applications have already been implemented, including academic, research-related fields and prototypes for industrial automation. The system has recently been released open-source for the scientific community.
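
    The paper's framework is not reproduced here, but the underlying MPI pattern, a master rank handing independent task units to whichever workers are free, can be sketched with mpi4py; process_image and the frame names are hypothetical placeholders.

      # Minimal MPI task-farm sketch (mpi4py); generic pattern, not the paper's code.
      # Run with e.g.: mpiexec -n 4 python taskfarm.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      TAG_WORK, TAG_DONE = 1, 2

      def process_image(task):
          return f"result-of-{task}"            # stand-in for a real vision algorithm

      if rank == 0:                              # master: hand out tasks on demand
          tasks = [f"frame{i:04d}" for i in range(20)]
          results, active = [], 0
          for worker in range(1, size):          # prime each worker with one task
              if tasks:
                  comm.send(tasks.pop(), dest=worker, tag=TAG_WORK)
                  active += 1
              else:
                  comm.send(None, dest=worker, tag=TAG_WORK)
          while active:
              status = MPI.Status()
              results.append(comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE, status=status))
              nxt = tasks.pop() if tasks else None
              comm.send(nxt, dest=status.Get_source(), tag=TAG_WORK)
              if nxt is None:
                  active -= 1
          print(f"collected {len(results)} results")
      else:                                      # worker: loop until sent None
          while True:
              task = comm.recv(source=0, tag=TAG_WORK)
              if task is None:
                  break
              comm.send(process_image(task), dest=0, tag=TAG_DONE)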

  3. Computational spectroscopy using the Quantum ESPRESSO distribution (Invited)

    NASA Astrophysics Data System (ADS)

    Baroni, S.; Giannozzi, P.

    2009-12-01

    Quantum ESPRESSO (QE) [1,2] is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials. QE is freely available to researchers around the world under the terms of the GNU General Public License. In this talk I will introduce the QE distribution, with emphasis on some of its features that may appeal to the Earth Sciences and Mineralogy communities. I will focus on the determination of vibrational frequencies to be used for spectroscopic purposes, for the determination of soft modes leading to mechanical instabilities, and as ingredients for the simulation of thermal properties in the (quasi-) harmonic approximations. I will conclude with some recent developments which are allowing for the simulation of electronic (absorption and photo-emission) spectroscopies, using many-body and time-dependent density-functional perturbation theories. [1] P. Giannozzi et al. J. Phys.: Condens. Matter 21, 395502 (2009); http://dx.doi.org/10.1088/0953-8984/21/39/395502 [2] http://www.quantum-espresso.org

  4. Insights from molecular dynamics simulations for computational protein design.

    PubMed

    Childers, Matthew Carter; Daggett, Valerie

    2017-02-01

    A grand challenge in the field of structural biology is to design and engineer proteins that exhibit targeted functions. Although much success on this front has been achieved, design success rates remain low, an ever-present reminder of our limited understanding of the relationship between amino acid sequences and the structures they adopt. In addition to experimental techniques and rational design strategies, computational methods have been employed to aid in the design and engineering of proteins. Molecular dynamics (MD) is one such method that simulates the motions of proteins according to classical dynamics. Here, we review how insights into protein dynamics derived from MD simulations have influenced the design of proteins. One of the greatest strengths of MD is its capacity to reveal information beyond what is available in the static structures deposited in the Protein Data Bank. In this regard simulations can be used to directly guide protein design by providing atomistic details of the dynamic molecular interactions contributing to protein stability and function. MD simulations can also be used as a virtual screening tool to rank, select, identify, and assess potential designs. MD is uniquely poised to inform protein design efforts where the application requires realistic models of protein dynamics and atomic level descriptions of the relationship between dynamics and function. Here, we review cases where MD simulations were used to modulate protein stability and protein function by providing information regarding the conformation(s), conformational transitions, interactions, and dynamics that govern stability and function. In addition, we discuss cases where conformations from protein folding/unfolding simulations have been exploited for protein design, yielding novel outcomes that could not be obtained from static structures.

  5. Distributed Computing Over New Technology Networks: Quality of Service for CORBA Objects.

    DTIC Science & Technology

    1996-10-01

    technology. This was accomplished in four (largely sequential) steps: (1) Study the impact of new technology networks on distributed computing environments...distributed programs such as C3 or collaborative planning applications; (2) Study how Distributed Computing Environments (DCEs) should support QoS in

  6. Computer Assisted Instruction of Population Dynamics: A New Approach to Population Education. Report No. T-19.

    ERIC Educational Resources Information Center

    Klaff, Vivian; Handler, Paul

    Available on the University of Illinois PLATO IV Computer system, the Population Dynamic Group computer-aided instruction program for teaching population dynamics is described and explained. The computer-generated visual graphics enable fast and intuitive understanding of the dynamics of population and of the concepts and data of population. The…

  7. Analog computation through high-dimensional physical chaotic neuro-dynamics

    NASA Astrophysics Data System (ADS)

    Horio, Yoshihiko; Aihara, Kazuyuki

    2008-07-01

    Conventional von Neumann computers have difficulty in solving complex and ill-posed real-world problems. However, living organisms often face such problems in real life, and must quickly obtain suitable solutions through physical, dynamical, and collective computations involving vast assemblies of neurons. These highly parallel computations through high-dimensional dynamics (computation through dynamics) are completely different from the numerical computations on von Neumann computers (computation through algorithms). In this paper, we explore a novel computational mechanism with high-dimensional physical chaotic neuro-dynamics. We physically constructed two hardware prototypes using analog chaotic-neuron integrated circuits. These systems combine analog computations with chaotic neuro-dynamics and digital computation through algorithms. We used quadratic assignment problems (QAPs) as benchmarks. The first prototype utilizes an analog chaotic neural network with 800-dimensional dynamics. An external algorithm constructs a solution for a QAP using the internal dynamics of the network. In the second system, 300-dimensional analog chaotic neuro-dynamics drive a tabu-search algorithm. We demonstrate experimentally that both systems efficiently solve QAPs through physical chaotic dynamics. We also qualitatively analyze the underlying mechanism of the highly parallel and collective analog computations by observing global and local dynamics. Furthermore, we introduce spatial and temporal mutual information to quantitatively evaluate the system dynamics. The experimental results confirm the validity and efficiency of the proposed computational paradigm with the physical analog chaotic neuro-dynamics.
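
    A single chaotic neuron map, in one common discrete-time form of the Aihara model that inspired such circuits, gives a feel for the dynamics; the parameter values below are illustrative and are not taken from the hardware prototypes.

      # One chaotic neuron map (a common form of the Aihara model); illustrative.
      import math

      def sigmoid(y, eps=0.02):
          return 1.0 / (1.0 + math.exp(-y / eps))

      k, alpha, a = 0.7, 1.0, 0.5      # decay, refractory strength, bias
      y = 0.1
      trajectory = []
      for t in range(60):
          y = k * y - alpha * sigmoid(y) + a    # internal-state update
          trajectory.append(sigmoid(y))          # neuron output in [0, 1]

      print(" ".join(f"{x:.2f}" for x in trajectory[-10:]))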

  8. Real-time computational attention model for dynamic scenes analysis: from implementation to evaluation

    NASA Astrophysics Data System (ADS)

    Courboulay, Vincent; Perreira Da Silva, Matthieu

    2012-06-01

    Providing real-time analysis of the huge amount of data generated by computer vision algorithms in interactive applications is still an open problem, and solving it promises great advances across a wide variety of fields. When using dynamic scene analysis algorithms for computer vision, a trade-off must be found between the quality of the results expected and the amount of computer resources allocated to each task. This is usually a design-time decision, implemented through the choice of pre-defined algorithms and parameters; however, this way of doing things limits the generality of the system. An adaptive vision system provides a more flexible solution, as its analysis strategy can be changed according to the new information available. As a consequence, such a system requires some kind of guiding mechanism to explore the scene faster and more efficiently. We propose a visual attention system that adapts its processing according to the interest (or salience) of each element of the dynamic scene. Positioned between hierarchical saliency-based models and distributed competitive ones, we propose a hierarchical yet competitive and non-saliency-based model. Our original approach allows the generation of attentional focus points without needing either a saliency map or an explicit inhibition-of-return mechanism. This new real-time computational model is based on a preys/predators system. The use of this kind of dynamical system is justified by an adjustable trade-off between nondeterministic attentional behavior and properties of stability, reproducibility, and reactiveness.
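
    The dynamical core is a preys/predators system; the minimal Lotka-Volterra sketch below (plain Euler integration, illustrative constants) shows the oscillatory, reactive behavior such systems supply. In the actual model, prey maps would be fed by image features and predator activity peaks would give the focus of attention; that coupling is not shown here.

      # Minimal Lotka-Volterra preys/predators sketch; illustrative constants.
      prey, pred = 1.0, 0.5
      a, b, c, d, dt = 1.0, 1.0, 1.0, 1.0, 0.01
      for step in range(3000):
          dprey = (a * prey - b * prey * pred) * dt     # prey growth minus predation
          dpred = (c * prey * pred - d * pred) * dt     # predation gain minus decay
          prey, pred = prey + dprey, pred + dpred
          if step % 500 == 0:
              print(f"t={step * dt:4.1f}  prey={prey:.2f}  predators={pred:.2f}")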

  9. Applications of Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.

    2004-01-01

    Initial steps in the application of a low-order panel method computational fluid dynamics (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and they include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions about some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.

  10. Digital computer program for generating dynamic turbofan engine models (DIGTEM)

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Krosel, S. M.; Szuch, J. R.; Westerkamp, E. J.

    1983-01-01

    This report describes DIGTEM, a digital computer program that simulates two-spool, two-stream turbofan engines. The turbofan engine model in DIGTEM contains steady-state performance maps for all of the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. Altogether there are 16 state variables and state equations. DIGTEM features a backward-difference integration scheme for integrating stiff systems. It trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off-design points and iterates to a balanced engine condition. Transients can also be run. They are generated by defining controls as a function of time (open-loop control) in a user-written subroutine (TMRSP). DIGTEM has run on the IBM 370/3033 computer using implicit integration with time steps ranging from 1.0 msec to 1.0 sec. DIGTEM is generalized in the aerothermodynamic treatment of components.
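
    A backward-difference step of the kind DIGTEM uses for stiff systems can be shown in miniature: a backward-Euler update solved by Newton iteration on a stiff scalar test equation. DIGTEM itself integrates 16 coupled state equations; this example is only illustrative.

      # Backward-Euler (backward-difference) step with a Newton solve, applied to
      # a stiff test equation. The step size far exceeds the explicit limit.
      import math

      def f(t, y):                     # stiff test problem: y' = -1000 (y - cos t)
          return -1000.0 * (y - math.cos(t))

      def dfdy(t, y):
          return -1000.0

      def backward_euler_step(t, y, h):
          z = y                         # Newton solve of  z - y - h f(t+h, z) = 0
          for _ in range(10):
              g = z - y - h * f(t + h, z)
              z -= g / (1.0 - h * dfdy(t + h, z))
          return z

      t, y, h = 0.0, 0.0, 0.05
      while t < 1.0:
          y = backward_euler_step(t, y, h)
          t += h
      print(f"y(1) ~ {y:.4f}  (quasi-steady solution tracks cos(1) = {math.cos(1):.4f})")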

  11. [Dynamic distribution of methandrostenolone in the body of white rats].

    PubMed

    Krylov, Iu F; Krynetskiĭ, E Iu; Prokhorov, B S; Sapozhnikova, G A; Smirnov, P A

    1987-01-01

    The dynamics of distribution of the anabolic steroid hormone methandrostenolone and the routes of its elimination from the organism of Wistar rats were studied using radioisotope methods and high-performance liquid chromatography. Methandrostenolone metabolites were shown to be excreted mainly in the urine. Methandrostenolone metabolism is a complicated process in the course of which redistribution of metabolites among various organs occurs. The anabolic effect of methandrostenolone is supposed to be due to the formation of its metabolites.

  12. Determining Dynamical Path Distributions usingMaximum Relative Entropy

    DTIC Science & Technology

    2015-05-31

    information. MaxCal is just the Principle of Maximum Entropy (MaxEnt) where the constraints are changing in time. This simply amounts to an additional...

  13. Molecular Dynamics, Monte Carlo Simulations, and Langevin Dynamics: A Computational Review

    PubMed Central

    Paquet, Eric; Viktor, Herna L.

    2015-01-01

    Macromolecular structures, such as neuraminidases, hemagglutinins, and monoclonal antibodies, are not rigid entities. Rather, they are characterised by their flexibility, which is the result of the interaction and collective motion of their constituent atoms. This conformational diversity has a significant impact on their physicochemical and biological properties. Among these are their structural stability, the transport of ions through the M2 channel, drug resistance, macromolecular docking, binding energy, and rational epitope design. To assess these properties and to calculate the associated thermodynamical observables, the conformational space must be efficiently sampled and the dynamics of the constituent atoms must be simulated. This paper presents algorithms and techniques that address the abovementioned issues. To this end, a computational review of molecular dynamics, Monte Carlo simulations, Langevin dynamics, and free energy calculation is presented. The exposition is made from first principles to promote a better understanding of the potentialities, limitations, applications, and interrelations of these computational methods. PMID:25785262
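
    As a minimal concrete instance of the third method reviewed, the sketch below runs overdamped Langevin dynamics (Euler-Maruyama discretization) in a one-dimensional double-well potential; units and parameters are arbitrary, and this is in no way a macromolecular simulation.

      # Overdamped Langevin dynamics (Euler-Maruyama) in a 1-D double well.
      import math, random

      rng = random.Random(0)
      kT, gamma, dt = 1.0, 1.0, 1e-3

      def force(x):                        # V(x) = (x^2 - 1)^2  ->  F = -dV/dx
          return -4.0 * x * (x * x - 1.0)

      x, sign_changes, prev_side = -1.0, 0, -1
      noise = math.sqrt(2.0 * kT * dt / gamma)
      for step in range(1_000_000):
          x += force(x) * dt / gamma + noise * rng.gauss(0.0, 1.0)
          side = 1 if x > 0 else -1
          if side != prev_side:            # count sign changes of x (barrier passages)
              sign_changes += 1
              prev_side = side
      print(f"barrier passages observed: {sign_changes}")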

  14. Toward a Dynamically Reconfigurable Computing and Communication System for Small Spacecraft

    NASA Technical Reports Server (NTRS)

    Kifle, Muli; Andro, Monty; Tran, Quang K.; Fujikawa, Gene; Chu, Pong P.

    2003-01-01

    Future science missions will require the use of multiple spacecraft with multiple sensor nodes autonomously responding and adapting to a dynamically changing space environment. The acquisition of random scientific events will require rapidly changing network topologies, distributed processing power, and a dynamic resource management strategy. Optimum utilization and configuration of spacecraft communications and navigation resources will be critical in meeting the demand of these stringent mission requirements. There are two important trends to follow with respect to NASA's (National Aeronautics and Space Administration) future scientific missions: the use of multiple satellite systems and the development of an integrated space communications network. Reconfigurable computing and communication systems may enable versatile adaptation of a spacecraft system's resources by dynamic allocation of the processor hardware to perform new operations or to maintain functionality due to malfunctions or hardware faults. Advancements in FPGA (Field Programmable Gate Array) technology make it possible to incorporate major communication and network functionalities in FPGA chips and provide the basis for a dynamically reconfigurable communication system. Advantages of higher computation speeds and accuracy are envisioned with tremendous hardware flexibility to ensure maximum survivability of future science mission spacecraft. This paper discusses the requirements, enabling technologies, and challenges associated with dynamically reconfigurable space communications systems.

  15. Computational model of particle deposition in the nasal cavity under steady and dynamic flow.

    PubMed

    Karakosta, Paraskevi; Alexopoulos, Aleck H; Kiparissides, Costas

    2015-01-01

    A computational model for flow and particle deposition in a three-dimensional representation of the human nasal cavity is developed. Simulations of steady state and dynamic airflow during inhalation are performed at flow rates of 9-60 l/min. Depositions for particles of size 0.5-20 μm are determined and compared with experimental and simulation results from the literature in terms of deposition efficiencies. The nasal model is validated by comparison with experimental and simulation results from the literature for particle deposition under steady-state flow. The distribution of deposited particles in the nasal cavity is presented in terms of an axial deposition distribution as well as a bivariate axial deposition and particle size distribution. Simulations of dynamic airflow and particle deposition during an inhalation cycle are performed for different nasal cavity outlet pressure variations and different particle injections. The total particle deposition efficiency under dynamic flow is found to depend strongly on the dynamics of airflow as well as the type of particle injection.

  16. Data Point Averaging for Computational Fluid Dynamics Data

    NASA Technical Reports Server (NTRS)

    Norman, David, Jr. (Inventor)

    2014-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
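
    The averaging idea, as described in the abstract, can be sketched directly: bin the CFD surface points into sub-areas and average the parameter values within each bin. The grid, the 4x4 sub-area layout, and the synthetic heat-flux values below are all hypothetical.

      # Sub-area averaging of CFD point data; synthetic data, hypothetical layout.
      import numpy as np

      rng = np.random.default_rng(3)
      xy = rng.uniform(0.0, 1.0, size=(5000, 2))      # CFD point locations on the surface
      heat_flux = np.sin(xy[:, 0] * 6) + 0.1 * rng.standard_normal(5000)

      nx = ny = 4                                      # 4x4 grid of sub-areas
      ix = np.minimum((xy[:, 0] * nx).astype(int), nx - 1)
      iy = np.minimum((xy[:, 1] * ny).astype(int), ny - 1)
      cell = ix * ny + iy                              # sub-area index for each point

      sums = np.bincount(cell, weights=heat_flux, minlength=nx * ny)
      counts = np.bincount(cell, minlength=nx * ny)
      averages = sums / np.maximum(counts, 1)          # one value per sub-area
      print(averages.reshape(nx, ny).round(2))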

  17. Computational Fluid Dynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Kutler, Paul

    1994-01-01

    Computational fluid dynamics (CFD) is beginning to play a major role in the aircraft industry of the United States because of the realization that CFD can be a new and effective design tool and thus could provide a company with a competitive advantage. It is also playing a significant role in research institutions, both governmental and academic, as a tool for researching new fluid physics, as well as supplementing and complementing experimental testing. In this presentation, some of the progress made to date in CFD at NASA Ames will be reviewed. The presentation addresses the status of CFD in terms of methods, examples of CFD solutions, and computer technology. In addition, the role CFD will play in supporting the revolutionary goals set forth by the Aeronautical Policy Review Committee established by the Office of Science and Technology Policy is noted. The need for validated CFD tools is also briefly discussed.

  18. Computational strategies in the dynamic simulation of constrained flexible MBS

    NASA Technical Reports Server (NTRS)

    Amirouche, F. M. L.; Xie, M.

    1993-01-01

    This research focuses on the computational dynamics of flexible constrained multibody systems. First, a recursive mapping formulation of the kinematical expressions in a minimum dimension, together with the matrix representation of the equations of motion, is presented. The method employs Kane's equations, the finite element method, and concepts of continuum mechanics. The generalized active forces are extended to include the effects of high-temperature conditions, such as creep, thermal stress, and elastic-plastic deformation. The time-variant constraint relations for rolling/contact conditions between two flexible bodies are also studied, as are the constraints for validation of MBS simulation of gear meshing contact using a modified Timoshenko beam theory. The last part deals with minimization of vibration/deformation of an elastic beam in multibody systems making use of time-variant boundary conditions. The methodologies and computational procedures developed are being implemented in a program called DYAMUS.

  19. Computational fluid dynamics (CFD) and its potential for nuclear applications

    SciTech Connect

    Weber, D.P.; Wei, T.Y.C.; Rock, D.T.; Rizwan-Uddin; Brewster, R.A.; Jonnavithula, S.

    1999-11-01

    The purpose of this paper is to examine the use of these advanced models, methods and computing environments for nuclear applications to determine if the industry can expect to derive the same benefit as other industries, such as the automotive and the aerospace industries. As an example, the authors will examine the use of modern computational fluid dynamics (CFD) capability for subchannel analysis, which is an important part of the analysis technology used by utilities to ensure safe and economical design and operation of reactors. In the current deregulated environment, it is possible that by use of these enhanced techniques, the thermal and electrical output of current reactors may be increased without any increase in cost and at no compromise in safety.

  20. Multilevel model reduction for uncertainty quantification in computational structural dynamics

    NASA Astrophysics Data System (ADS)

    Ezvan, O.; Batou, A.; Soize, C.; Gagliardini, L.

    2017-02-01

    This work deals with an extension of the reduced-order models (ROMs) that are classically constructed by modal analysis in linear structural dynamics, for the case in which the computational models are assumed to be uncertain. It is based on a multilevel projection strategy that introduces three reduced-order bases obtained by a spatial filtering methodology applied to local displacements. This filtering involves global shape functions for the kinetic energy. The proposed multilevel stochastic ROM is constructed using the nonparametric probabilistic approach of uncertainties, which allows a specific level of uncertainty to be assigned to each type of displacement associated with the corresponding vibration regime. The proposed methodology is applied to the computational model of an automobile structure, for which the multilevel stochastic ROM is identified with respect to experimental measurements. This identification is performed by solving a statistical inverse problem.
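
    The classical single-level starting point that this work extends, a modal-analysis ROM, is easy to sketch; a small spring-mass chain stands in for the automobile model, and no uncertainty modeling is included.

      # Classical modal reduction: project M x'' + K x = f onto the lowest modes.
      import numpy as np
      from scipy.linalg import eigh

      n, n_modes = 50, 6
      M = np.eye(n)                                     # unit masses
      K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # spring-chain stiffness

      # generalized eigenproblem K phi = w^2 M phi; keep the lowest modes
      w2, Phi = eigh(K, M)
      Phi_r = Phi[:, :n_modes]                          # reduced-order basis

      # reduced operators: with x ~ Phi_r q, dynamics become q'' + diag(w^2) q = Phi_r^T f
      K_r = Phi_r.T @ K @ Phi_r
      M_r = Phi_r.T @ M @ Phi_r
      print("reduced stiffness is diagonal:", np.allclose(K_r, np.diag(np.diag(K_r))))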

  1. Parallelization of implicit finite difference schemes in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel

    1990-01-01

    Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than the explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second and higher order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult and non-intuitive. The parallelization of the implicit schemes that are used for solving the Euler and the thin layer Navier-Stokes equations and that require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices is discussed. Three-dimensional cases are emphasized and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and the computation aspects of these methods is presented. The effect of the boundary conditions on the parallel schemes is also discussed.
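
    The serial kernel at issue can be made concrete with the scalar Thomas algorithm for a tridiagonal system: both sweeps are first-order recurrences, which is exactly the global data dependency discussed above. Block versions replace the scalars with small matrices.

      # Thomas algorithm for a tridiagonal system; both sweeps are recurrences.
      def thomas(a, b, c, d):
          """Solve a tridiagonal system: a = sub-, b = main-, c = super-diagonal."""
          n = len(d)
          cp, dp = [0.0] * n, [0.0] * n
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):                      # forward elimination (recurrence)
              denom = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / denom if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
          x = [0.0] * n
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):             # back substitution (recurrence)
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      # 1-D Poisson-like test: -x_{i-1} + 2 x_i - x_{i+1} = 1, zero boundaries
      n = 8
      print(thomas([-1.0] * n, [2.0] * n, [-1.0] * n, [1.0] * n))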

  2. Dynamic computing resource allocation in online flood monitoring and prediction

    NASA Astrophysics Data System (ADS)

    Kuchar, S.; Podhoranyi, M.; Vavrik, R.; Portero, A.

    2016-08-01

    This paper presents tools and methodologies for dynamic allocation of high performance computing resources during operation of the Floreon+ online flood monitoring and prediction system. The resource allocation is done throughout the execution of supported simulations to meet the required service quality levels for system operation. It also ensures flexible reactions to changing weather and flood situations, as it is not economically feasible to operate online flood monitoring systems in the full performance mode during non-flood seasons. Different service quality levels are therefore described for different flooding scenarios, and the runtime manager controls them by allocating only minimal resources currently expected to meet the deadlines. Finally, an experiment covering all presented aspects of computing resource allocation in rainfall-runoff and Monte Carlo uncertainty simulation is performed for the area of the Moravian-Silesian region in the Czech Republic.
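
    A toy version of the runtime manager's decision rule (assumed logic for illustration, not Floreon+ code) is to allocate the fewest nodes whose predicted runtime meets the deadline implied by the current alert level.

      # Toy deadline-driven allocator; fixed parallel efficiency is an assumption.
      def nodes_needed(work_cpu_hours, deadline_hours, efficiency=0.8, max_nodes=64):
          """Smallest node count with predicted runtime <= deadline."""
          for n in range(1, max_nodes + 1):
              if work_cpu_hours / (n * efficiency) <= deadline_hours:
                  return n
          raise RuntimeError("deadline unreachable even at full allocation")

      # a flood alert tightens the deadline, so more resources are claimed
      print(nodes_needed(work_cpu_hours=40, deadline_hours=6))   # routine monitoring
      print(nodes_needed(work_cpu_hours=40, deadline_hours=1))   # active flood event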

  3. Computational methods. [Calculation of dynamic loading to offshore platforms

    SciTech Connect

    Maeda, H. . Inst. of Industrial Science)

    1993-02-01

    With regard to computational methods for hydrodynamic forces, the identification of marine hydrodynamics in offshore technology is discussed first. Then general computational methods, the state of the art, and uncertainties in flow problems in offshore technology are surveyed, with developed, developing, and undeveloped problems categorized; future work follows. Marine hydrodynamics consists of water-surface and underwater fluid dynamics. It covers not only hydrodynamics proper but also aerodynamics, such as wind load or current-wave-wind interaction; hydrodynamic phenomena such as cavitation and underwater noise; multi-phase flow, such as two-phase flow in pipes, air bubbles in water, and surface and internal waves; and magneto-hydrodynamics, such as propulsion based on superconductivity. Among these, two key words are singled out as the identification of marine hydrodynamics in offshore technology: free surface and vortex shedding.

  4. Data Point Averaging for Computational Fluid Dynamics Data

    NASA Technical Reports Server (NTRS)

    Norman, Jr., David (Inventor)

    2016-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.

  5. Computational methods of the Advanced Fluid Dynamics Model

    SciTech Connect

    Bohl, W.R.; Wilhelm, D.; Parker, F.R.; Berthier, J.; Maudlin, P.J.; Schmuck, P.; Goutagny, L.; Ichikawa, S.; Ninokata, H.; Luck, L.B.

    1987-01-01

    To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development.

  6. Use of computational fluid dynamics in respiratory medicine.

    PubMed

    Fernández Tena, Ana; Casan Clarà, Pere

    2015-06-01

    Computational Fluid Dynamics (CFD) is a computer-based tool for simulating fluid movement. The main advantages of CFD over other fluid mechanics studies include: substantial savings in time and cost, the analysis of systems or conditions that are very difficult to simulate experimentally (as is the case of the airways), and a practically unlimited level of detail. We used the Ansys-Fluent CFD program to develop a conducting airway model to simulate different inspiratory flow rates and the deposition of inhaled particles of varying diameters, obtaining results consistent with those reported in the literature using other procedures. We hope this approach will enable clinicians to further individualize the treatment of different respiratory diseases.
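
    On the particle-deposition side of such simulations, a standard first-order descriptor is the particle Stokes number, which gauges the tendency of an inhaled particle to deposit by inertial impaction rather than follow the airflow. The snippet evaluates the textbook relation St = rho_p * d_p^2 * U / (18 * mu * D); the airway diameter and velocity are illustrative assumptions, not values from the paper.

        def stokes_number(d_p, velocity, diameter, rho_p=1000.0, mu_air=1.8e-5):
            """Inertial impaction parameter of a particle in an airway (SI units).

            Large St favors deposition by impaction; small St lets the
            particle follow the air stream around bends and bifurcations.
            """
            return rho_p * d_p**2 * velocity / (18.0 * mu_air * diameter)

        # Illustrative trachea-like conditions: D = 18 mm, mean velocity 1.3 m/s.
        for d_um in (1.0, 5.0, 10.0):
            st = stokes_number(d_um * 1e-6, velocity=1.3, diameter=0.018)
            print(f"{d_um:4.1f} um particle: St = {st:.2e}")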

  7. Theoretical and computational dynamics of a compressible flow

    NASA Technical Reports Server (NTRS)

    Pai, Shih-I; Luo, Shijun

    1991-01-01

    An introduction to the theoretical and computational fluid dynamics of a compressible fluid is presented. The general topics addressed include: thermodynamics and physical properties of compressible fluids; 1D flow of an inviscid compressible fluid; shock waves; fundamental equations of the dynamics of a compressible inviscid non-heat-conducting and radiating fluid; the method of small perturbations and linearized theory; 2D subsonic steady potential flow; hodograph and rheograph methods; exact solutions of the 2D isentropic steady flow equations; 2D steady transonic and hypersonic flows; the method of characteristics; linearized theory of 3D potential flow; nonlinear theory of 3D compressible flow; anisentropic (rotational) flow of an inviscid compressible fluid; electromagnetogasdynamics; multiphase flows; and flows of a compressible fluid with transport phenomena.
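
    Among the listed topics, the normal-shock relations are compact enough to state outright: for a perfect gas with specific-heat ratio gamma, the static-pressure jump across a normal shock at upstream Mach number M1 is p2/p1 = 1 + 2*gamma*(M1^2 - 1)/(gamma + 1). These are the standard gas-dynamics relations, not code from the book:

        def normal_shock_pressure_ratio(mach1: float, gamma: float = 1.4) -> float:
            """Static pressure ratio p2/p1 across a normal shock (perfect gas)."""
            if mach1 <= 1.0:
                raise ValueError("A normal shock requires supersonic upstream flow.")
            return 1.0 + 2.0 * gamma * (mach1**2 - 1.0) / (gamma + 1.0)

        def normal_shock_downstream_mach(mach1: float, gamma: float = 1.4) -> float:
            """Downstream Mach number M2 behind a normal shock (perfect gas)."""
            num = 1.0 + 0.5 * (gamma - 1.0) * mach1**2
            den = gamma * mach1**2 - 0.5 * (gamma - 1.0)
            return (num / den) ** 0.5

        print(normal_shock_pressure_ratio(2.0))    # 4.5 for air (gamma = 1.4)
        print(normal_shock_downstream_mach(2.0))   # ~0.577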

  8. The role of computational fluid dynamics (CFD) in hair science.

    PubMed

    Spicka, Peter; Grald, Eric

    2004-01-01

    The use of computational fluid dynamics (CFD) as a virtual prototyping tool is widespread in the consumer packaged goods industry. CFD refers to the calculation on a computer of the velocity, pressure, temperature, and chemical species concentrations within a flowing liquid or gas. Because the performance of manufacturing equipment and product designs can be simulated on the computer, using CFD yields significant time and cost savings when compared to traditional physical testing methods. CFD has been used to design, scale up and troubleshoot mixing tanks, spray dryers, heat exchangers and other process equipment. Recently, computer models of the capillary wicking process inside fibrous structures have been added to CFD software. These models have been used to gain a better understanding of the absorbent performance of diapers and feminine protection products. The same models can also be used to represent the movement of shampoo, conditioner, colorants and other products through the hair and scalp. In this paper, we provide an introduction to CFD and show some examples of its application to the manufacture of consumer products. We also provide some examples showing the potential of CFD for understanding the performance of products applied to the hair and scalp.
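
    The capillary wicking these fiber models capture is classically approximated by the Lucas-Washburn relation L(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu)). The quick evaluation below uses this textbook relation with round water-like numbers; it is not the vendor's fibrous-structure model.

        import math

        def washburn_length(t, radius, surf_tension, viscosity, contact_deg=0.0):
            """Lucas-Washburn penetration length of a liquid into a capillary (SI)."""
            cos_theta = math.cos(math.radians(contact_deg))
            return math.sqrt(surf_tension * radius * cos_theta * t / (2.0 * viscosity))

        # Water-like fluid (gamma = 72 mN/m, mu = 1 mPa*s) in a 10-micron capillary:
        for t in (0.01, 0.1, 1.0):
            L = washburn_length(t, radius=10e-6, surf_tension=0.072, viscosity=1e-3)
            print(f"t = {t:5.2f} s: L = {L * 1000:.2f} mm")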

  9. Computational complexity of ecological and evolutionary spatial dynamics.

    PubMed

    Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A

    2015-12-22

    There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP).
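
    The basic quantity the authors ask about has a closed form in the simplest, non-spatial case: in a well-mixed Moran process with population size N, a single mutant of relative fitness r fixes with probability rho = (1 - 1/r) / (1 - 1/r^N). It is the spatial structures studied in the paper that make the computation hard; the well-mixed baseline is a few lines:

        def moran_fixation_probability(r: float, n: int) -> float:
            """Fixation probability of one mutant of relative fitness r
            in a well-mixed Moran process with population size n."""
            if r == 1.0:
                return 1.0 / n       # neutral mutant: fixation by drift alone
            return (1.0 - 1.0 / r) / (1.0 - 1.0 / r**n)

        # A 10% fitness advantage in a population of 100 fixes ~9% of the time:
        print(moran_fixation_probability(1.1, 100))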

  10. Modeling fires in adjacent ship compartments with computational fluid dynamics

    SciTech Connect

    Wix, S.D.; Cole, J.K.; Koski, J.A.

    1998-05-10

    This paper presents an analysis of the thermal effects on radioactive material (RAM) transportation packages of a fire in an adjacent compartment. The analysis assumes that the adjacent hold fire is some sort of engine room fire. Computational fluid dynamics (CFD) analysis tools were used in order to include convective heat transfer effects. The analysis results were compared to experimental data gathered in a series of tests on the US Coast Guard ship Mayo Lykes located at Mobile, Alabama.

  11. Continuing Validation of Computational Fluid Dynamics for Supersonic Retropropulsion

    NASA Technical Reports Server (NTRS)

    Schauerhamer, Daniel Guy; Trumble, Kerry A.; Kleb, Bil; Carlson, Jan-Renee; Edquist, Karl T.

    2011-01-01

    A large step in the validation of Computational Fluid Dynamics (CFD) for Supersonic Retropropulsion (SRP) is shown through the comparison of three Navier-Stokes solvers (DPLR, FUN3D, and OVERFLOW) and wind tunnel test results. The test was designed specifically for CFD validation and was conducted in the Langley supersonic 4 x 4 Unitary Plan Wind Tunnel; it includes variations in the number of nozzles, Mach and Reynolds numbers, thrust coefficient, and angles of orientation. Code-to-code and code-to-test comparisons are encouraging, and possible error sources are discussed.

  12. Robust dynamical decoupling for quantum computing and quantum memory.

    PubMed

    Souza, Alexandre M; Alvarez, Gonzalo A; Suter, Dieter

    2011-06-17

    Dynamical decoupling (DD) is a popular technique for protecting qubits from the environment. However, unless special care is taken, experimental errors in the control pulses used in this technique can destroy the quantum information instead of preserving it. Here, we investigate techniques for making DD sequences robust against different types of experimental errors while retaining good decoupling efficiency in a fluctuating environment. We present experimental data from solid-state nuclear spin qubits and introduce a new DD sequence that is suitable for quantum computing and quantum memory.
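
    The effect DD is meant to produce is easiest to see in a toy model: a qubit dephasing under quasi-static random noise keeps its coherence when equally spaced pi-pulses refocus the accumulated phase. The sketch below assumes perfect pulses and Gaussian quasi-static noise, which are simplifications, not the paper's experimental conditions.

        import random, cmath

        def coherence(n_segments, total_time=1.0, noise_std=20.0, n_trials=2000):
            """Average |<exp(i*phi)>| of a qubit dephasing under random noise.

            The evolution is split into n_segments equal intervals; a pi-pulse
            between consecutive intervals flips the sign of the phase acquired
            afterwards, so alternating segments cancel (an ideal echo train).
            n_segments = 1 means no pulses at all (free induction decay).
            """
            seg_time = total_time / n_segments
            acc = 0j
            for _ in range(n_trials):
                omega = random.gauss(0.0, noise_std)   # one noise value per shot
                phase = sum((-1) ** k * omega * seg_time for k in range(n_segments))
                acc += cmath.exp(1j * phase)
            return abs(acc / n_trials)

        print("free evolution  :", coherence(1))   # ~0: coherence destroyed
        print("7-pulse DD train:", coherence(8))   # ~1: phase refocused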

  13. Computer studies of multiple-quantum spin dynamics

    SciTech Connect

    Murdoch, J.B.

    1982-11-01

    The excitation and detection of multiple-quantum (MQ) transitions in Fourier transform NMR spectroscopy is an interesting problem in the quantum mechanical dynamics of spin systems as well as an important new technique for investigation of molecular structure. In particular, multiple-quantum spectroscopy can be used to simplify overly complex spectra or to separate the various interactions between a nucleus and its environment. The emphasis of this work is on computer simulation of spin-system evolution to better relate theory and experiment.
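
    Simulations of this kind boil down to propagating a density matrix under the spin Hamiltonian. A minimal two-spin sketch is below; the offsets and coupling are arbitrary illustrative values, and real multiple-quantum NMR simulations add pulse propagators and much larger spin systems.

        import numpy as np
        from scipy.linalg import expm

        # Spin-1/2 operators; two-spin operators are built with Kronecker products.
        sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
        sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
        one = np.eye(2, dtype=complex)

        # Weakly coupled two-spin Hamiltonian (rad/s): offsets plus J coupling.
        w1, w2, j_hz = 2 * np.pi * 100.0, 2 * np.pi * 250.0, 10.0
        H = (w1 * np.kron(sz, one) + w2 * np.kron(one, sz)
             + 2 * np.pi * j_hz * np.kron(sz, sz))

        rho = np.kron(sx, one) + np.kron(one, sx)   # state after a 90-degree pulse
        obs = rho.copy()                            # detect transverse magnetization
        U = expm(-1j * H * 1e-4)                    # propagator for one 0.1 ms step

        fid = []
        for _ in range(1024):
            rho = U @ rho @ U.conj().T              # density-matrix evolution
            fid.append(np.trace(obs @ rho).real)
        # An FFT of fid shows lines at the two offsets, each split by the coupling.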

  14. Computer Modeling of Real-Time Dynamic Lighting

    NASA Technical Reports Server (NTRS)

    Maida, James C.; Pace, J.; Novak, J.; Russo, Dane M. (Technical Monitor)

    2000-01-01

    Space Station tasks involve procedures that are very complex and highly dependent on the availability of visual information. In many situations, cameras are used as tools to help overcome the visual and physical restrictions associated with space flight. However, these cameras are affected by the dynamic lighting conditions of space, and training for these conditions is necessary. The current project builds on the findings of an earlier NRA-funded project, which revealed improved human performance when training included computer graphics with lighting effects such as shadows and glare.

  15. New Challenges in Visualization of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The development of visualization systems for analyzing computational fluid dynamics data has been driven by the increasing size and complexity of the data. Extensions of the system domain to the analysis of data from multiple sources, parameter-space studies, and multidisciplinary studies in support of integrated aeronautical design systems pose new challenges for the visualization system developer. Recent work at NASA Ames Research Center on visualization systems, automatic flow-feature detection, unsteady flow visualization techniques, and a new area, data exploitation, is discussed in the context of NASA information technology initiatives.

  16. The very local Hubble flow: Computer simulations of dynamical history

    NASA Astrophysics Data System (ADS)

    Chernin, A. D.; Karachentsev, I. D.; Valtonen, M. J.; Dolgachev, V. P.; Domozhilova, L. M.; Makarov, D. I.

    2004-02-01

    The phenomenon of the very local (≤3 Mpc) Hubble flow is studied on the basis of recent precision observations. A set of computer simulations is performed to trace the trajectories of the flow galaxies back in time to the epoch of the formation of the Local Group. It is found that the "initial conditions" of the flow are drastically different from the linear velocity-distance relation. The simulations also enable one to recognize the major trends of the flow evolution and to identify the dynamical role of the universal antigravity produced by the cosmic vacuum.
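
    The backward integration described here amounts to evolving each flow galaxy in the combined gravity of the Local Group and the antigravity of the cosmic vacuum, whose radial equation of motion is r'' = -GM/r^2 + (Lambda c^2 / 3) r; the vacuum term can equivalently be written Omega_vac * H0^2 * r. A minimal leapfrog sketch with round illustrative parameters (not the paper's fitted values):

        G = 4.50e-15        # gravitational constant, Mpc^3 / (Msun * Gyr^2)
        M_LG = 2.0e12       # assumed Local Group mass, Msun
        H0 = 0.0716         # Hubble constant, 1/Gyr (about 70 km/s/Mpc)
        OMEGA_VAC = 0.7     # vacuum (dark energy) density parameter

        def acceleration(r):
            """Local Group gravity plus cosmic-vacuum antigravity, Mpc/Gyr^2."""
            return -G * M_LG / r**2 + OMEGA_VAC * H0**2 * r

        def integrate(r0, v0, dt=-0.001, n_steps=10000):
            """Leapfrog integration; a negative dt traces the orbit back in time."""
            r, v = r0, v0
            a = acceleration(r)
            for _ in range(n_steps):
                v_half = v + 0.5 * a * dt
                r += v_half * dt
                a = acceleration(r)
                v = v_half + 0.5 * a * dt
            return r, v

        # A flow galaxy now at 2 Mpc receding at ~150 km/s (0.153 Mpc/Gyr),
        # traced 10 Gyr into the past: the distance shrinks toward the Local
        # Group, far off today's linear velocity-distance relation.
        print(integrate(2.0, 0.153))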

  17. Computer based training for flight dynamics and METEOSAT spacecraft

    NASA Astrophysics Data System (ADS)

    Thomas, Graham Roland

    With its friendly language and fully integrated graphics and communications capabilities, the Flight Dynamics Computer Based Training (CBT) Facility gives the developer everything required to turn knowledge into sophisticated technical training courses. It incorporates high-quality graphics and has an open communications interface to allow current and future connections to external applications. For the author, it provides a simple and effective suite of commands for developing training material. For the trainee, the logical layout and hypertext access to help and graphical data provide a quick and pleasant learning experience.

  18. Molecular Dynamics Computer Simulations of Multidrug RND Efflux Pumps.

    PubMed

    Ruggerone, Paolo; Vargiu, Attilio V; Collu, Francesca; Fischer, Nadine; Kandt, Christian

    2013-01-01

    Over-expression of multidrug efflux pumps of the Resistance Nodulation Division (RND) protein super family counts among the main causes for microbial resistance against pharmaceuticals. Understanding the molecular basis of this process is one of the major challenges of modern biomedical research, involving a broad range of experimental and computational techniques. Here we review the current state of RND transporter investigation employing molecular dynamics simulations providing conformational samples of transporter components to obtain insights into the functional mechanism underlying efflux pump-mediated antibiotics resistance in Escherichia coli and Pseudomonas aeruginosa.

  19. Dynamic computer simulations of electrophoresis: three decades of active research.

    PubMed

    Thormann, Wolfgang; Caslavska, Jitka; Breadmore, Michael C; Mosher, Richard A

    2009-06-01

    Dynamic models for electrophoresis are based upon model equations derived from the transport concepts in solution, together with user-defined conditions. They can predict the movement of ions theoretically and are thus the most versatile tool for exploring the fundamentals of electrokinetic separations. Since its inception three decades ago, dynamic computer simulation software and its use have progressed significantly, and Electrophoresis played a pivotal role in that endeavor, as a large proportion of the fundamental and application papers were published in this periodical. Software is available that simulates all basic electrophoretic systems, including moving boundary electrophoresis, zone electrophoresis, ITP, IEF and EKC, and their combinations under almost exactly the same conditions used in the laboratory. This has been employed to show the detailed mechanisms of many of the fundamental phenomena that occur in electrophoretic separations. Dynamic electrophoretic simulations are relevant for separations on any scale and instrumental format, including free-fluid preparative, gel, capillary and chip electrophoresis. This review includes a historical overview, a survey of current simulators, simulation examples and a discussion of the applications and achievements of dynamic simulation.
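
    At their core, the "model equations derived from the transport concepts in solution" are electromigration-diffusion balances of the form dc/dt = -d(mu*c*E)/dx + D * d^2c/dx^2 for each ion. The explicit finite-difference sketch below treats a single analyte in a uniform field; real simulators couple many species through electroneutrality and a spatially varying field, and all values here are illustrative.

        import numpy as np

        def electrophorese(c, dt, dx, mu=5e-8, E=1e4, D=1e-9, steps=2000):
            """Advance c(x) under electromigration plus diffusion (SI units).

            dc/dt = -d(mu*c*E)/dx + D * d^2c/dx^2 with a uniform field E (V/m),
            mobility mu (m^2/V/s) and diffusivity D (m^2/s); upwind migration.
            """
            v = mu * E                               # migration velocity, m/s
            for _ in range(steps):
                adv = -v * dt / dx * (c - np.roll(c, 1))
                dif = D * dt / dx**2 * (np.roll(c, -1) - 2 * c + np.roll(c, 1))
                c = c + adv + dif
            return c

        x = np.linspace(0.0, 0.01, 400)              # 1 cm channel
        c0 = np.exp(-((x - 0.002) / 2e-4) ** 2)      # injected sample zone
        c = electrophorese(c0, dt=1e-3, dx=x[1] - x[0])
        # In 2 s the zone migrates mu*E*t = 1 mm down the channel, broadened
        # by diffusion.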