Science.gov

Sample records for distributed dynamical computation

  1. Improving flow distribution in influent channels using computational fluid dynamics.

    PubMed

    Park, No-Suk; Yoon, Sukmin; Jeong, Woochang; Lee, Seungjae

    2016-10-01

    Although the flow distribution in an influent channel, where the inflow is split among the treatment processes of a wastewater treatment plant, greatly affects the efficiency of those processes, and a weir is the typical structure used for flow distribution, to the authors' knowledge there is a paucity of research on the flow distribution in an open channel with a weir. In this study, the influent channel of a full-scale wastewater treatment plant was examined, with a suppressed rectangular weir whose horizontal crest crosses the full channel width. The flow distribution in the influent channel was analyzed using a validated computational fluid dynamics model to investigate (1) the comparison of single-phase and two-phase simulation, (2) the improvement procedure for the prototype channel, and (3) the effect of the inflow rate on flow distribution. The results show that two-phase simulation is more reliable because it describes the free-surface fluctuations, that preventing short-circuit flow should be the first consideration when improving flow distribution, and that differences in kinetic energy with inflow rate produce different flow distribution trends. The authors believe that this case study is helpful for improving flow distribution in an influent channel.

  2. Distributed computations in a dynamic, heterogeneous Grid environment

    NASA Astrophysics Data System (ADS)

    Dramlitsch, Thomas

    2003-06-01

    In order to face the rapidly increasing need for computational resources of various scientific and engineering applications, one has to think of new ways to make more efficient use of the world's current computational resources. In this respect, the growing speed of wide area networks has made a new kind of distributed computing possible: metacomputing, or (distributed) Grid computing. This is a rather new and uncharted field in computational science. The rapidly increasing speed of networks even outperforms the average increase of processor speed: processor speeds double on average every 18 months, whereas network bandwidths double every 9 months. Due to this development of local and wide area networks, Grid computing will certainly play a key role in the future of parallel computing. This type of distributed computing, however, differs from traditional parallel computing in many ways, since it has to deal with many problems not occurring in classical parallel computing, for example heterogeneity, authentication, and slow networks, to mention only a few. Some of those problems, e.g. the allocation of distributed resources and the provision of information about these resources to the application, have already been addressed by the Globus software. Unfortunately, as far as we know, hardly any application or middleware software takes advantage of this information, since most parallelizing algorithms for finite differencing codes are implicitly designed for single-supercomputer or cluster execution. We show that although it is possible to apply classical parallelizing algorithms in a Grid environment, in most cases the observed efficiency of the executed code is very poor. In this work we close this gap. In our thesis, we will: show that an execution of classical parallel codes in Grid environments is possible but very slow; analyze this situation of bad performance; nail down bottlenecks in communication; remove unnecessary overhead and…

  3. Parallel and Distributed Computational Fluid Dynamics: Experimental Results and Challenges

    NASA Technical Reports Server (NTRS)

    Djomehri, Mohammad Jahed; Biswas, R.; VanderWijngaart, R.; Yarrow, M.

    2000-01-01

    This paper describes several results of parallel and distributed computing using a large-scale production flow solver program. A coarse-grained parallelization based on clustering of discretization grids, combined with partitioning of large grids for load balancing, is presented. An assessment is given of its performance on distributed and distributed-shared memory platforms using large-scale scientific problems. An experiment with this solver, adapted to a Wide Area Network execution environment, is presented. We also give a comparative performance assessment of computation and communication times on both tightly and loosely coupled machines.

  4. An Evaluation of Biosurveillance Grid—Dynamic Algorithm Distribution Across Multiple Computer Nodes

    PubMed Central

    Tsai, Ming-Chi; Tsui, Fu-Chiang; Wagner, Michael M.

    2007-01-01

    Performing fast data analysis to detect disease outbreaks plays a critical role in real-time biosurveillance. In this paper, we described and evaluated an Algorithm Distribution Manager Service (ADMS) based on grid technologies, which dynamically partitions and distributes detection algorithms across multiple computers. We compared the execution time needed to perform the analysis on a single computer and on a grid network (3 computing nodes) with and without dynamic algorithm distribution. We found that algorithms with long runtimes completed approximately three times earlier in the distributed environment than on a single computer, while short-runtime algorithms performed worse in the distributed environment. The dynamic algorithm distribution approach also performed better than a static algorithm distribution approach. This pilot study shows great potential to reduce lengthy analysis times through dynamic algorithm partitioning and parallel processing, and provides the opportunity to distribute algorithms from a client to remote computers in a grid network. PMID:18693936

  5. An evaluation of biosurveillance grid--dynamic algorithm distribution across multiple computer nodes.

    PubMed

    Tsai, Ming-Chi; Tsui, Fu-Chiang; Wagner, Michael M

    2007-10-11

    Performing fast data analysis to detect disease outbreaks plays a critical role in real-time biosurveillance. In this paper, we described and evaluated an Algorithm Distribution Manager Service (ADMS) based on grid technologies, which dynamically partitions and distributes detection algorithms across multiple computers. We compared the execution time needed to perform the analysis on a single computer and on a grid network (3 computing nodes) with and without dynamic algorithm distribution. We found that algorithms with long runtimes completed approximately three times earlier in the distributed environment than on a single computer, while short-runtime algorithms performed worse in the distributed environment. The dynamic algorithm distribution approach also performed better than a static algorithm distribution approach. This pilot study shows great potential to reduce lengthy analysis times through dynamic algorithm partitioning and parallel processing, and provides the opportunity to distribute algorithms from a client to remote computers in a grid network.

  6. Gravitation Field Calculations on a Dynamic Lattice by Distributed Computing

    NASA Astrophysics Data System (ADS)

    Mähönen, Petri; Punkka, Veikko

    A new method of numerically calculating the time evolution of a gravitational field in general relativity is introduced. Vierbein (tetrad) formalism, a dynamic lattice, and massively parallelized computation are suggested, as they are expected to speed up the calculations considerably and facilitate the solution of problems previously considered too hard to be solved, such as the time evolution of a system consisting of two or more black holes or the structure of wormholes.

  7. Gravitational field calculations on a dynamic lattice by distributed computing.

    NASA Astrophysics Data System (ADS)

    Mähönen, P.; Punkka, V.

    A new method of numerically calculating the time evolution of a gravitational field in general relativity is introduced. Vierbein (tetrad) formalism, a dynamic lattice, and massively parallelized computation are suggested, as they are expected to speed up the calculations considerably and facilitate the solution of problems previously considered too hard to be solved, such as the time evolution of a system consisting of two or more black holes or the structure of wormholes.

  8. Dynamic resource allocation scheme for distributed heterogeneous computer systems

    NASA Technical Reports Server (NTRS)

    Liu, Howard T. (Inventor); Silvester, John A. (Inventor)

    1991-01-01

    This invention relates to resource allocation in computer systems and, more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating jobs queued up for busy nodes to idle or less-busy nodes. In accordance with the algorithm (SIDA for short), load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node whose burden is below a pre-established threshold level, or (2) a node has been idle for a period of time, as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
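
    A minimal sketch of the dual-mode, server-initiated policy described above, assuming illustrative values for the threshold, the wakeup period, and the workload indicator (queue length scaled by service rate); the patent's exact formulation may differ.

```python
import random

THRESHOLD = 3        # workload level separating "heavily" from "lightly" burdened
WAKEUP_PERIOD = 5    # idle ticks before a node's wakeup timer fires

class Node:
    def __init__(self, service_rate):
        self.queue = []
        self.service_rate = service_rate
        self.idle_ticks = 0

    def workload(self):
        # Workload indicator: local queue length scaled by local service rate.
        return len(self.queue) / self.service_rate

def pull_job(receiver, nodes):
    # Receiver-initiated transfer from the most heavily burdened node.
    donor = max(nodes, key=Node.workload)
    if donor is not receiver and donor.workload() > THRESHOLD:
        receiver.queue.append(donor.queue.pop())

def tick(nodes):
    for node in nodes:
        if node.queue and random.random() < node.service_rate:
            node.queue.pop(0)                     # a job finishes here...
            if node.workload() < THRESHOLD:
                pull_job(node, nodes)             # mode 1: pull while lightly loaded
        elif not node.queue:
            node.idle_ticks += 1
            if node.idle_ticks >= WAKEUP_PERIOD:  # mode 2: wakeup timer on idle node
                node.idle_ticks = 0
                pull_job(node, nodes)

# One fast node gradually relieves two slow, backlogged nodes.
nodes = [Node(0.9), Node(0.2), Node(0.2)]
nodes[1].queue, nodes[2].queue = list(range(20)), list(range(20))
for _ in range(300):
    tick(nodes)
print([len(n.queue) for n in nodes])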

  9. Distributed Computing.

    ERIC Educational Resources Information Center

    Ryland, Jane N.

    1988-01-01

    The microcomputer revolution, in which small and large computers have gained tremendously in capability, has created a distributed computing environment. This circumstance presents administrators with the opportunities and the dilemmas of choosing appropriate computing resources for each situation. (Author/MSE)

  10. Dynamic Load Balancing for Adaptive Computations on Distributed-Memory Machines

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Dynamic load balancing is central to adaptive mesh-based computations on large-scale parallel computers. The principal investigator has investigated various issues of the dynamic load balancing problem under NASA JOVE and JAG grants. The major accomplishments of the project are two graph partitioning algorithms and a load balancing framework. The S-HARP dynamic graph partitioner is the fastest of the dynamic graph partitioners known to date. It can partition a graph of over 100,000 vertices in 0.25 seconds on a 64-processor Cray T3E distributed-memory multiprocessor while maintaining scalability of over 16-fold speedup. Other known and widely used dynamic graph partitioners take a second or two while giving low scalability of a few-fold speedup on 64 processors. These results have been published in journals and peer-reviewed flagship conferences.

  11. Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.

    2000-01-01

    The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many subdomains, called blocks, and solve the governing equations over these blocks. The dynamic load balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computation. In environments with computers of different architecture, operating system, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of the changes in the computer environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
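
    As a rough illustration of the block-distribution idea, the sketch below greedily assigns blocks to the processor with the earliest projected finish time, weighting work by processor speed. The block costs and speeds are hypothetical run-time measurements, not values from the paper.

```python
import heapq

def distribute_blocks(block_costs, proc_speeds):
    """Assign each block to the processor with the earliest projected finish time."""
    heap = [(0.0, p) for p in range(len(proc_speeds))]   # (projected time, processor)
    heapq.heapify(heap)
    assignment = {}
    for block, cost in sorted(enumerate(block_costs), key=lambda bc: -bc[1]):
        t, p = heapq.heappop(heap)
        assignment[block] = p
        heapq.heappush(heap, (t + cost / proc_speeds[p], p))  # faster CPU, more work
    return assignment

# Eight blocks of varying cost on three processors of unequal speed:
print(distribute_blocks([4, 2, 7, 1, 3, 5, 2, 6], [1.0, 2.0, 0.5]))
```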

  12. Fast computation of statistical uncertainty for spatiotemporal distributions estimated directly from dynamic cone beam SPECT projections

    SciTech Connect

    Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.

    2001-04-09

    The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in those clinics that have only single-detector systems and thus are not able to perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projection samples truncate the patient's body, this can result in additional image artifacts. To overcome these sources of bias in conventional image-based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of…
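
    A hedged sketch of the direct least-squares idea, assuming a toy system matrix that combines hypothetical per-angle geometric factors with a temporal basis (simple polynomials here rather than the splines used in the papers); the closed-form covariance of the estimate stands in for the statistical uncertainty being evaluated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_angles, n_regions, n_basis = 120, 3, 4
t = np.linspace(0.0, 1.0, n_angles)            # acquisition time of each projection

B = np.vander(t, n_basis, increasing=True)     # temporal basis (polynomials here)
G = rng.uniform(0.1, 1.0, (n_angles, n_regions))   # hypothetical geometric factors

# Each unknown is one (region, basis) coefficient; projections mix them linearly.
A = np.einsum('ar,ab->arb', G, B).reshape(n_angles, -1)
c_true = rng.normal(size=n_regions * n_basis)
p = A @ c_true + 0.01 * rng.normal(size=n_angles)  # noisy dynamic projections

c_hat, *_ = np.linalg.lstsq(A, p, rcond=None)
cov = 0.01**2 * np.linalg.inv(A.T @ A)         # closed-form uncertainty of c_hat
print("max coefficient error:", float(np.abs(c_hat - c_true).max()))
print("mean predicted std  :", float(np.sqrt(np.diag(cov)).mean()))
```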

  13. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  14. A distributed, dynamic, parallel computational model: the role of noise in velocity storage

    PubMed Central

    Merfeld, Daniel M.

    2012-01-01

    Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic "real-time" calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique, "particle filtering", that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
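
    The sketch below is a much-simplified, ensemble-flavored version of the scheme described above: many copies of a first-order observer run in parallel with afferent and internal noise, and the spread of the copies sets the update gain. All parameters (canal time constant, noise levels, stimulus) are illustrative, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, tau = 2000, 0.01, 6.0     # particles, step (s), canal time constant (s)
steps = int(60 / dt)
sigma_aff = 5.0                  # afferent noise std (deg/s)

omega = 60.0                     # true constant yaw velocity (deg/s)
lp = 0.0                         # low-pass state of the canal dynamics
particles = np.zeros(N)          # parallel noisy copies of the observer state
est = np.zeros(steps)

for k in range(steps):
    lp += dt * (omega - lp) / tau
    afferent = (omega - lp) + rng.normal(0.0, sigma_aff)   # high-pass canal + noise
    particles += dt * (-particles / tau) + rng.normal(0.0, 0.5, N)   # predict
    gain = particles.var() / (particles.var() + sigma_aff**2)        # from spread
    particles += gain * (afferent - particles)                       # update
    est[k] = particles.mean()

print(f"estimate at 1 s: {est[int(1/dt)]:.0f} deg/s, at 30 s: {est[int(30/dt)]:.0f} deg/s")
```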

  15. Distributed dynamical computation in neural circuits with propagating coherent activity patterns.

    PubMed

    Gong, Pulin; van Leeuwen, Cees

    2009-12-01

    Activity in neural circuits is spatiotemporally organized. Its spatial organization consists of multiple, localized coherent patterns, or patchy clusters. These patterns propagate across the circuits over time. This type of collective behavior has ubiquitously been observed, both in spontaneous activity and evoked responses; its function, however, has remained unclear. We construct a spatially extended, spiking neural circuit that generates emergent spatiotemporal activity patterns, thereby capturing some of the complexities of the patterns observed empirically. We elucidate what kind of fundamental function these patterns can serve by showing how they process information. As self-sustained objects, localized coherent patterns can signal information by propagating across the neural circuit. Computational operations occur when these emergent patterns interact, or collide with each other. The ongoing behaviors of these patterns naturally embody both distributed, parallel computation and cascaded logical operations. Such distributed computations enable the system to work in an inherently flexible and efficient way. Our work leads us to propose that propagating coherent activity patterns are the underlying primitives with which neural circuits carry out distributed dynamical computation.

  16. FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.

    PubMed

    Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora

    2013-09-01

    In this paper, we present a distributed computing system, called DCMARK, aimed at solving partial differential equations at the basis of many investigation fields, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network paradigm, which allows us to divide the differential equation system solving into many parallel integration operations to be executed by a custom multiprocessor system. We push the number of processors to the limit of one processor for each equation. In order to test the present idea, we chose to implement DCMARK on a single FPGA, designing the single processor in order to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to studying the properties of 1-, 2- and 3-D locally interconnected dynamical systems. In order to test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and perform a comparison between simulations conducted on a high performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it allows us to reduce the processing time more than other similar systems in the literature. To ensure a high level of reconfigurability, we designed a compact system on programmable chip managed by a softcore processor, which controls the fast data/control communication between our system and a PC host. An intuitive graphical user interface allows us to change the calculation parameters and plot the results.
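
    For concreteness, here is a minimal serial sketch of the kind of locally coupled computation DCMARK maps onto one processor per cell: a 200-cell periodic Korteweg-de Vries solver in which each cell's update uses only nearby cells. The grid spacing, time step, and RK4 integrator are illustrative choices, not details from the paper.

```python
import numpy as np

n, dx, dt = 200, 0.5, 1e-3
x = np.arange(n) * dx

def rhs(u):
    # u_t = -6 u u_x - u_xxx with periodic, locally coupled finite differences
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxxx = (np.roll(u, -2) - 2*np.roll(u, -1) + 2*np.roll(u, 1) - np.roll(u, 2)) / (2 * dx**3)
    return -6 * u * ux - uxxx

c = 1.0                                   # soliton speed
u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - 25))**2

for _ in range(5000):                     # classic RK4 stepping up to t = 5
    k1 = rhs(u); k2 = rhs(u + 0.5*dt*k1)
    k3 = rhs(u + 0.5*dt*k2); k4 = rhs(u + dt*k3)
    u += dt * (k1 + 2*k2 + 2*k3 + k4) / 6

print(f"soliton peak now near x = {x[np.argmax(u)]:.1f} (started at 25.0)")
```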

  17. The van Hove distribution function for brownian hard spheres: dynamical test particle theory and computer simulations for bulk dynamics.

    PubMed

    Hopkins, Paul; Fortini, Andrea; Archer, Andrew J; Schmidt, Matthias

    2010-12-14

    We describe a test particle approach based on dynamical density functional theory (DDFT) for studying the correlated time evolution of the particles that constitute a fluid. Our theory provides a means of calculating the van Hove distribution function by treating its self and distinct parts as the two components of a binary fluid mixture, with the "self" component having only one particle, the "distinct" component consisting of all the other particles, and using DDFT to calculate the time evolution of the density profiles for the two components. We apply this approach to a bulk fluid of Brownian hard spheres and compare to results for the van Hove function and the intermediate scattering function from Brownian dynamics computer simulations. We find good agreement at low and intermediate densities using the very simple Ramakrishnan-Yussouff [Phys. Rev. B 19, 2775 (1979)] approximation for the excess free energy functional. Since the DDFT is based on the equilibrium Helmholtz free energy functional, we can probe a free energy landscape that underlies the dynamics. Within the mean-field approximation we find that as the particle density increases, this landscape develops a minimum, while an exact treatment of a model confined situation shows that for an ergodic fluid this landscape should be monotonic. We discuss possible implications for slow, glassy, and arrested dynamics at high densities.
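
    As a toy illustration of measuring the self part of the van Hove function from simulation, the sketch below uses ideal (non-interacting) Brownian particles, for which G_s(r,t) must match the free-diffusion Gaussian up to sampling noise; the paper's hard-sphere interactions and DDFT treatment are well beyond this check.

```python
import numpy as np

rng = np.random.default_rng(2)
N, D, dt, steps = 5000, 1.0, 1e-3, 1000      # particles, diffusivity, step, steps
r0 = np.zeros((N, 3))
r = r0.copy()
for _ in range(steps):
    r += rng.normal(0.0, np.sqrt(2 * D * dt), (N, 3))   # overdamped Brownian step

t = steps * dt
disp = np.linalg.norm(r - r0, axis=1)
hist, edges = np.histogram(disp, bins=50, range=(0.0, 6.0), density=True)
rc = 0.5 * (edges[:-1] + edges[1:])
G_s = hist / (4 * np.pi * rc**2)             # self van Hove from P(|dr|)
G_theory = (4 * np.pi * D * t) ** -1.5 * np.exp(-rc**2 / (4 * D * t))
print("max |G_s - Gaussian| (smallest-r bins are noisiest):",
      float(np.abs(G_s - G_theory).max()))
```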

  18. On the simulation of protein folding by short time scale molecular dynamics and distributed computing.

    PubMed

    Fersht, Alan R

    2002-10-29

    There are proposals to overcome the current incompatibilities between the time scales of protein folding and molecular dynamics simulation by using a large number of short simulations of only tens of nanoseconds (distributed computing). According to the principles of first-order kinetic processes, a sufficiently large number of short simulations will include, de facto, a small number of long time scale events that have proceeded to completion. But protein folding is not an elementary kinetic step: folding has a series of early conformational steps that lead to lag phases at the beginning of the kinetics. The presence of these lag phases can bias short simulations toward selecting minor pathways that have fewer or faster lag steps and so miss the major folding pathways. Attempts to circumvent the lags by using loosely coupled parallel simulations that search for first-order transitions are also problematic because of the difficulty of detecting transitions in molecular dynamics simulations. Nevertheless, the procedure of using parallel independent simulations is perfectly valid and quite feasible once the time scale of simulation proceeds past the lag phases into a single exponential region.
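
    A back-of-the-envelope check of the kinetic argument above, with made-up rate constants: for a single-exponential process with rate k, the fraction of runs of length t that complete is 1 - exp(-kt), so many short runs always capture a few completed events, while a preceding lag step (modeled here as one extra, slower exponential step) suppresses that count.

```python
import numpy as np

k, t_sim, M = 1e-4, 10.0, 100_000    # folding rate (1/ns), run length (ns), runs

p_single = 1.0 - np.exp(-k * t_sim)  # completion probability per run, no lag
print(f"expected completions without lag: {M * p_single:.0f}")

# Two-step A -> I -> F with a lag step of mean 5 ns before the folding step:
rng = np.random.default_rng(3)
t_fold = rng.exponential(5.0, M) + rng.exponential(1.0 / k, M)
print(f"observed completions with lag   : {(t_fold < t_sim).sum()}")
```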

  19. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    The following reports are presented on this project: A first year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; A second year progress report on: Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.

  1. Quantifying the residence time distribution of surface transient storage in streams: A computational fluid dynamics approach

    NASA Astrophysics Data System (ADS)

    Jackson, T. R.; Drost, K. J.; Haggerty, R.; Apte, S. V.

    2011-12-01

    Transient storage is the sum of surface transient storage (STS) and hyporheic transient storage (HTS), and separating the two storage components is challenging. A number of studies have attempted to determine the relationship between transient storage and stream channel properties; however, difficulties ensue when attempting to calculate STS. The present study attempts to develop a predictive relationship between a stream's STS residence time distribution (RTD) and physically based, field-measurable properties of natural streams. Our approach is to use field measurements to constrain a computational fluid dynamics (CFD) model of STS and use both to develop and test a predictive model of the STS RTD. Field sites were located on Oak and Soap creeks in the Willamette Valley near Corvallis, Oregon. Data collection included: (1) obtaining detailed stream and STS zone morphologies through dense survey measurements; (2) determining turbulence parameters and CFD model boundary inputs from stream and storage zone velocity measurements with a Marsh-McBirney and acoustic Doppler velocimeter; (3) quantifying the RTD and its mean using salt tracer injections and electrical conductivity probes; and (4) estimating mixing layer parameters using velocity measurements and a visual dye. Preliminary results from the CFD model and comparison to field data will be presented, along with the resulting insights into the RTD.

  2. Using computational fluid dynamics software to estimate circulation time distributions in bioreactors.

    PubMed

    Davidson, Kyle M; Sushil, Shrinivasan; Eggleton, Charles D; Marten, Mark R

    2003-01-01

    Nonideal mixing in many fermentation processes can lead to concentration gradients in nutrients, oxygen, and pH, among others. These gradients are likely to influence cellular behavior, growth, or yield of the fermentation process. Frequency of exposure to these gradients can be defined by the circulation time distribution (CTD). There are few examples of CTDs in the literature, and experimental determination of CTD is at best a challenging task. The goal in this study was to determine whether computational fluid dynamics (CFD) software (FLUENT 4 and MixSim) could be used to characterize the CTD in a single-impeller mixing tank. To accomplish this, CFD software was used to simulate flow fields in three different mixing tanks by meshing the tanks with a grid of elements and solving the Navier-Stokes equations using the kappa-epsilon turbulence model. Tracer particles were released from a reference zone within the simulated flow fields, particle trajectories were simulated for 30 s, and the time taken for these tracer particles to return to the reference zone was calculated. CTDs determined by experimental measurement, which showed distinct features (log-normal, bimodal, and unimodal), were compared with CTDs determined using CFD simulation. Reproducing the signal processing procedures used in each of the experiments, CFD simulations captured the characteristic features of the experimentally measured CTDs. The CFD data suggests new signal processing procedures that predict unimodal CTDs for all three tanks.
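
    A toy version of the return-time calculation described above: tracer particles are released in a reference zone of a prescribed circulating flow (a single solid-body vortex standing in for the simulated impeller flow), advected with random-walk dispersion, and their first return times to the zone are collected. Every parameter here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
N, dt, t_max = 2000, 0.01, 30.0
center = np.array([0.5, 0.0])                     # reference zone center, radius 0.05
pos = center + rng.uniform(-0.05, 0.05, (N, 2))   # release inside the zone
returned = np.full(N, np.nan)

def velocity(p):
    # Solid-body vortex about the origin, standing in for the CFD flow field.
    return np.stack([-p[:, 1], p[:, 0]], axis=1)

in_zone_prev = np.ones(N, dtype=bool)
for step in range(1, int(t_max / dt) + 1):
    pos += velocity(pos) * dt + rng.normal(0.0, 0.02, (N, 2))   # advect + disperse
    in_zone = np.linalg.norm(pos - center, axis=1) < 0.05
    first_return = in_zone & ~in_zone_prev & np.isnan(returned)
    returned[first_return] = step * dt
    in_zone_prev = in_zone

print(f"mean circulation time: {np.nanmean(returned):.2f} s "
      f"({np.isfinite(returned).mean():.0%} of tracers returned)")
```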

  3. Chapter on Distributed Computing

    DTIC Science & Technology

    1989-02-01

    MIT Laboratory for Computer Science technical memo MIT/LCS/TM-384, "Chapter on Distributed Computing," by Leslie Lamport and Nancy… Keywords: distributed computing, distributed systems models, distributed algorithms, message-passing, shared variables.

  4. Design and performance evaluation of dynamic wavelength scheduled hybrid WDM/TDM PON for distributed computing applications.

    PubMed

    Zhu, Min; Guo, Wei; Xiao, Shilin; Dong, Yi; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2009-01-19

    This paper investigates the design and implementation of distributed computing applications in local area networks. We propose a novel Dynamical Wavelength Scheduled Hybrid WDM/TDM Passive Optical Network, termed DWS-HPON. The system is implemented using spectrum slicing techniques of a broadband light source and an overlay broadcast-signaling scheme. The Time-Wavelength Co-Allocation (TWCA) Problem is defined and an effective greedy approach to this problem is presented for aggregating large files in distributed computing applications. The simulations demonstrate that the performance is improved significantly compared with the conventional TDM-over-WDM PON.
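
    The following is an illustrative greedy in the spirit of the TWCA idea as summarized above: each large-file transfer is placed on the wavelength that frees up earliest, longest transfers first. The abstract does not spell out the algorithm, so the objective and tie-breaking here are assumptions.

```python
def twca_greedy(transfer_times, n_wavelengths):
    """Place each transfer on the wavelength that frees up earliest, longest first."""
    free_at = [0.0] * n_wavelengths
    schedule = []
    for duration in sorted(transfer_times, reverse=True):
        w = min(range(n_wavelengths), key=free_at.__getitem__)
        schedule.append((w, free_at[w], free_at[w] + duration))
        free_at[w] += duration
    return schedule

# Aggregating eight large-file transfers over three wavelengths:
for wavelength, start, end in twca_greedy([40, 10, 25, 5, 30, 20, 15, 35], 3):
    print(f"lambda{wavelength}: {start:5.1f} -> {end:5.1f}")
```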

  5. High-throughput all-atom molecular dynamics simulations using distributed computing.

    PubMed

    Buch, I; Harvey, M J; Giorgino, T; Anderson, D P; De Fabritiis, G

    2010-03-22

    Although molecular dynamics simulation methods are useful in the modeling of macromolecular systems, they remain computationally expensive, with production work requiring costly high-performance computing (HPC) resources. We review recent innovations in accelerating molecular dynamics on graphics processing units (GPUs), and we describe GPUGRID, a volunteer computing project that uses the GPU resources of nondedicated desktop and workstation computers. In particular, we demonstrate the capability of simulating thousands of all-atom molecular trajectories generated at an average of 20 ns/day each (for systems of approximately 30 000-80 000 atoms). In conjunction with a potential of mean force (PMF) protocol for computing binding free energies, we demonstrate the use of GPUGRID in the computation of accurate binding affinities of the Src SH2 domain/pYEEI ligand complex by reconstructing the PMF over 373 umbrella sampling windows of 55 ns each (20.5 μs of total data). We obtain a standard free energy of binding of -8.7 +/- 0.4 kcal/mol, within 0.7 kcal/mol of experimental results. This infrastructure will provide the basis for a robust system for high-throughput accurate binding affinity prediction.

  6. Program Facilitates Distributed Computing

    NASA Technical Reports Server (NTRS)

    Hui, Joseph

    1993-01-01

    KNET computer program facilitates distribution of computing between UNIX-compatible local host computer and remote host computer, which may or may not be UNIX-compatible. Capable of automatic remote log-in. User communicates interactively with remote host computer. Data output from remote host computer directed to local screen, to local file, and/or to local process. Conversely, data input from keyboard, local file, or local process directed to remote host computer. Written in ANSI standard C language.

  7. Distributed computing in bioinformatics.

    PubMed

    Jain, Eric

    2002-01-01

    This paper provides an overview of methods and current applications of distributed computing in bioinformatics. Distributed computing is a strategy of dividing a large workload among multiple computers to reduce processing time, or to make use of resources such as programs and databases that are not available on all computers. Participating computers may be connected either through a local high-speed network or through the Internet.
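
    A minimal illustration of the workload-splitting strategy described above, using local worker processes (the same pattern extends to networked hosts). The scoring function is a trivial stand-in rather than a real bioinformatics tool.

```python
from multiprocessing import Pool

def gc_content(seq: str) -> float:
    # Trivial per-item analysis standing in for a real sequence tool.
    return (seq.count("G") + seq.count("C")) / len(seq)

if __name__ == "__main__":
    sequences = ["ACGTACGT", "GGGCCC", "ATATATAT", "CGCGAT"] * 1000
    with Pool() as pool:                       # divide the workload among workers
        results = pool.map(gc_content, sequences, chunksize=250)
    print(f"scored {len(results)} sequences; mean GC = {sum(results)/len(results):.3f}")
```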

  8. Parallel and Distributed Computing.

    DTIC Science & Technology

    1986-12-12

    Part of the program was devoted to parallel and distributed computing. Support for this part of the program was obtained from the present Army contract and a… Umesh Vazirani. A workshop on parallel and distributed computing was held from May 19 to May 23, 1986, and drew 141 participants. Keywords: mathematical programming; protocols; randomized algorithms. (Author)

  9. Dynamics of computational ecosystems

    NASA Astrophysics Data System (ADS)

    Kephart, J. O.; Hogg, T.; Huberman, B. A.

    1989-07-01

    Recently, Huberman and Hogg [in The Ecology of Computation, edited by B. A. Huberman (North-Holland, 1988), pp. 77-115] analyzed the dynamics of resource allocation in a model of computational ecosystems which incorporated many of the features endemic to large distributed processing systems, including distributed control, asynchrony, resource contention, and cooperation among agents and the concomitant problems of incomplete knowledge and delayed information. In this paper we supplement an analysis of several simple examples of computational ecosystems with computer simulations to gain insight into the effects of time delays, cooperation, multiple resources, inhomogeneity, etc. The simulations verify Huberman and Hogg's prediction of persistent oscillations and chaos, and confirm the Ceccatto-Huberman [Proc. Natl. Acad. Sci. U.S.A. 86, 3443 (1989)] prediction of extremely long-lived metastable states in computational ecosystems. Extending the analysis to inhomogeneous systems, we show that they can be more stable than homogeneous systems because agents with different computational needs settle into different strategic niches, and that overly clever local decision-making algorithms can induce chaotic behavior.
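
    The sketch below is a toy delayed-feedback version of such a computational ecosystem: agents choose between two resources using congestion information that is tau steps old, and sufficient delay turns the stable equilibrium into the persistent oscillations discussed above. The payoffs, rates, and logistic choice rule are illustrative assumptions, not the paper's exact model.

```python
import math
from collections import deque

def simulate(tau, steps=600, alpha=0.2, beta=5.0):
    f = 0.9                                    # fraction of agents on resource 1
    history = deque([f] * (tau + 1), maxlen=tau + 1)
    trace = []
    for _ in range(steps):
        f_seen = history[0]                    # congestion information, tau steps old
        pay1, pay2 = 2.0 - f_seen, 0.5 + f_seen    # payoffs fall with crowding
        rho = 1.0 / (1.0 + math.exp(-beta * (pay1 - pay2)))   # prob. of choosing 1
        f += alpha * (rho - f)                 # share of agents re-deciding per step
        history.append(f)
        trace.append(f)
    return trace

print("tau = 0 :", [round(v, 3) for v in simulate(0)[-4:]])    # settles to a fixed point
print("tau = 10:", [round(v, 3) for v in simulate(10)[-4:]])   # keeps oscillating
```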

  10. Cardea: Providing Support for Dynamic Resource Access in a Distributed Computing Environment

    NASA Technical Reports Server (NTRS)

    Lepro, Rebekah

    2003-01-01

    The environment framing the modern authorization process spans domains of administration, relies on many different authentication sources, and manages complex attributes as part of the authorization process. Cardea facilitates dynamic access control within this environment as a central function of an interoperable authorization framework. The system departs from the traditional authorization model by separating the authentication and authorization processes, distributing the responsibility for authorization data, and allowing collaborating domains to retain control over their implementation mechanisms. Critical features of the system architecture and its handling of the authorization process differentiate the system from existing authorization components by addressing common needs not adequately addressed by existing systems. Continuing system research seeks to enhance the implementation of the current authorization model employed in Cardea, increase the robustness of current features, further the framework for establishing trust and promote interoperability with existing security mechanisms.

  11. Using computer simulations to determine the limitations of dynamic clamp stimuli applied at the soma in mimicking distributed conductance sources

    PubMed Central

    Lin, Risa J.

    2011-01-01

    In previous studies we used the technique of dynamic clamp to study how temporal modulation of inhibitory and excitatory inputs control the frequency and precise timing of spikes in neurons of the deep cerebellar nuclei (DCN). Although this technique is now widely used, it is limited to interpreting conductance inputs as being location independent; i.e., all inputs that are biologically distributed across the dendritic tree are applied to the soma. We used computer simulations of a morphologically realistic model of DCN neurons to compare the effects of purely somatic vs. distributed dendritic inputs in this cell type. We applied the same conductance stimuli used in our published experiments to the model. To simulate variability in neuronal responses to repeated stimuli, we added a somatic white current noise to reproduce subthreshold fluctuations in the membrane potential. We were able to replicate our dynamic clamp results with respect to spike rates and spike precision for different patterns of background synaptic activity. We found only minor differences in the spike pattern generation between focal or distributed input in this cell type even when strong inhibitory or excitatory bursts were applied. However, the location dependence of dynamic clamp stimuli is likely to be different for each cell type examined, and the simulation approach developed in the present study will allow a careful assessment of location dependence in all cell types. PMID:21325676
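
    As a reminder of what a somatic dynamic-clamp stimulus amounts to, the sketch below converts conductance waveforms into current against the instantaneous membrane potential of a leaky integrate-and-fire stand-in for the modeled neuron, with added somatic current noise. All parameters are illustrative, not those of the DCN model.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, T = 1e-4, 1.0                          # 0.1 ms steps, 1 s of simulated time
n = int(T / dt)
C, gL, EL = 200e-12, 10e-9, -60e-3         # capacitance (F), leak (S), rest (V)
Ee, Ei, Vth, Vreset = 0.0, -75e-3, -50e-3, -55e-3

ge = 7e-9 * (1 + 0.5 * rng.standard_normal(n)).clip(0)    # excitatory g(t), S
gi = 10e-9 * (1 + 0.5 * rng.standard_normal(n)).clip(0)   # inhibitory g(t), S
noise = 20e-12 * rng.standard_normal(n)                   # somatic current noise, A

V, spikes = EL, 0
for k in range(n):
    I = ge[k] * (Ee - V) + gi[k] * (Ei - V)   # conductance stimulus -> current
    V += dt * (gL * (EL - V) + I + noise[k]) / C
    if V > Vth:
        V, spikes = Vreset, spikes + 1

print(f"firing rate: {spikes / T:.1f} Hz")
```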

  12. Estimation of equivalence ratio distribution in diesel spray using a computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Suzuki, Yasumasa; Tsujimura, Taku; Kusaka, Jin

    2014-08-01

    It is important to understand the mechanisms of mixing and atomization in a diesel spray. In addition, computational prediction of the mixing behavior and internal structure of a diesel spray is expected to promote further understanding of diesel sprays and the development of diesel engines, including fuel injection devices. In this study, we predicted the formation of a diesel fuel spray with a 3D-CFD code and validated the application by comparing experimental results of the fuel spray behavior and the equivalence ratio visualized by Rayleigh-scatter imaging under various ambient, injection, and fuel conditions. Using the applicable constants of the KH-RT model, the liquid spray length can be predicted quantitatively under various fuel injection, ambient, and fuel conditions. On the other hand, the changes in vapor penetration and in the fuel mass fraction and equivalence ratio distributions with fuel injection and ambient conditions are not captured quantitatively. The 3D-CFD code used in this study overpredicts the spray cone angle and the entrainment of ambient gas; therefore, there is the possibility of improving the prediction accuracy by refining the fuel droplet breakup and evaporation models and by quantitative prediction of the spray cone angle.

  13. Coping with distributed computing

    SciTech Connect

    Cormell, L.

    1992-09-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent, he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given.

  14. Nonuniform Moving Boundary Method for Computational Fluid Dynamics Simulation of Intrathecal Cerebrospinal Flow Distribution in a Cynomolgus Monkey.

    PubMed

    Khani, Mohammadreza; Xing, Tao; Gibbs, Christina; Oshinski, John N; Stewart, Gregory R; Zeller, Jillynne R; Martin, Bryn A

    2017-08-01

    A detailed quantification and understanding of cerebrospinal fluid (CSF) dynamics may improve detection and treatment of central nervous system (CNS) diseases and help optimize CSF system-based delivery of CNS therapeutics. This study presents a computational fluid dynamics (CFD) model that utilizes a nonuniform moving boundary approach to accurately reproduce the nonuniform distribution of CSF flow along the spinal subarachnoid space (SAS) of a single cynomolgus monkey. A magnetic resonance imaging (MRI) protocol was developed and applied to quantify subject-specific CSF space geometry and flow and define the CFD domain and boundary conditions. An algorithm was implemented to reproduce the axial distribution of unsteady CSF flow by nonuniform deformation of the dura surface. Results showed that maximum difference between the MRI measurements and CFD simulation of CSF flow rates was <3.6%. CSF flow along the entire spine was laminar with a peak Reynolds number of ∼150 and average Womersley number of ∼5.4. Maximum CSF flow rate was present at the C4-C5 vertebral level. Deformation of the dura ranged up to a maximum of 134 μm. Geometric analysis indicated that total spinal CSF space volume was ∼8.7 ml. Average hydraulic diameter, wetted perimeter, and SAS area were 2.9 mm, 37.3 mm and 27.24 mm2, respectively. CSF pulse wave velocity (PWV) along the spine was quantified to be 1.2 m/s.
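
    As a quick consistency check on the dimensionless numbers reported above, the sketch below recomputes the Womersley number from the reported hydraulic diameter, assuming CSF properties close to water at body temperature and a cardiac frequency of about 90 beats/min (both assumptions, not values from the abstract), and backs out the peak velocity implied by a Reynolds number of 150.

```python
import math

rho, mu = 1000.0, 0.7e-3      # CSF density (kg/m^3) and viscosity (Pa s), assumed
Dh = 2.9e-3                   # hydraulic diameter from the geometric analysis (m)
freq = 1.5                    # cardiac frequency, ~90 beats/min (assumed)

omega = 2 * math.pi * freq
alpha = (Dh / 2) * math.sqrt(omega * rho / mu)   # Womersley number
U_peak = 150 * mu / (rho * Dh)                   # velocity that gives Re = 150
print(f"Womersley number alpha = {alpha:.1f}")   # ~5.3, near the reported ~5.4
print(f"peak velocity for Re = 150: {100 * U_peak:.1f} cm/s")
```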

  15. Understanding pharmacokinetics using realistic computational models of fluid dynamics: biosimulation of drug distribution within the CSF space for intrathecal drugs.

    PubMed

    Kuttler, Andreas; Dimke, Thomas; Kern, Steven; Helmlinger, Gabriel; Stanski, Donald; Finelli, Luca A

    2010-12-01

    We describe how biophysical modeling in pharmaceutical research and development, combining physiological observations at the tissue, organ, and system level with selected drug physicochemical properties, may contribute to a greater and non-intuitive understanding of drug pharmacokinetics and therapeutic design. Based on rich first-principle knowledge combined with experimental data at both conception and calibration stages, and leveraging our insights on disease processes and drug pharmacology, biophysical modeling may provide a novel and unique opportunity to interactively characterize detailed drug transport, distribution, and subsequent therapeutic effects. This innovative approach is exemplified through a three-dimensional (3D) computational fluid dynamics model of the spinal canal, motivated by questions arising during pharmaceutical development of one molecular therapy for spinal cord injury. The model was based on actual geometry reconstructed from magnetic resonance imaging data, subsequently transformed into a parametric 3D geometry and a corresponding finite-volume representation. With dynamics controlled by transient Navier-Stokes equations, the model was implemented in a commercial multi-physics software environment established in the automotive and aerospace industries. While predictions were performed in silico, the underlying biophysical models relied on multiple sources of experimental data and knowledge from the scientific literature. The results have provided insights into the primary factors that can influence the intrathecal distribution of drug after lumbar administration. This example illustrates how the approach connects the causal chain underlying drug distribution, starting with the technical aspect of drug delivery systems, through physiology-driven drug transport, then eventually linking to tissue penetration, binding, residence, and ultimately clearance. Currently supporting our drug development projects with an improved understanding of systems…

  16. The Survivable Distributed Computing Environment

    DTIC Science & Technology

    1994-06-01

    …an architecture for a survivable Distributed Computing Environment (SDCE). In essence, the SDCE will be a base upon which survivable distributed… and/or ISIS Distributed Computing Environments to provide many of the SDCE requirements.

  17. Simplified Distributed Computing

    NASA Astrophysics Data System (ADS)

    Li, G. G.

    2006-05-01

    Distributed computing runs from high-performance parallel computing and GRID computing to an environment where idle CPU cycles and storage space of numerous networked systems are harnessed to work together through the Internet. In this work we focus on building an easy and affordable solution for computationally intensive problems in scientific applications, based on existing technology and hardware resources. This system consists of a series of controllers. When a job request is detected by a monitor or initialized by an end user, the job manager launches the specific job handler for this job. The job handler pre-processes the job, partitions the job into relatively independent tasks, and distributes the tasks into the processing queue. The task handler picks up the related tasks, processes the tasks, and puts the results back into the processing queue. The job handler also monitors and examines the tasks and the results, and assembles the task results into the overall solution for the job request when all tasks are finished for each job. A resource manager configures and monitors all participating nodes. A distributed agent is deployed on all participating nodes to manage the software download and report the status. The processing queue is the key to the success of this distributed system. We use BEA's Weblogic JMS queue in our implementation. It guarantees message delivery and has message priority and retry features so that the tasks never get lost. The entire system is built on J2EE technology and can be deployed on heterogeneous platforms. It can handle algorithms and applications developed in any language on any platform. J2EE adaptors are provided to manage and connect the existing applications to the system, so that the applications and algorithms running on Unix, Linux and Windows can all work together. This system is easy and fast to develop based on the industry's well-adopted technology. It is highly scalable and heterogeneous. It is…
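
    A structural sketch of the partition/process/assemble flow described above, with Python's standard queue standing in for the WebLogic JMS queue (so no delivery guarantees, priorities, or retries here); the four-way partition and the square-summing task are arbitrary examples.

```python
import queue
import threading

tasks, results = queue.Queue(), queue.Queue()

def task_handler():
    while True:
        part = tasks.get()
        results.put(sum(x * x for x in part))      # process one task
        tasks.task_done()

def job_handler(job, n_parts=4):
    for part in (job[i::n_parts] for i in range(n_parts)):   # partition the job
        tasks.put(part)
    total = sum(results.get() for _ in range(n_parts))       # assemble results
    print(f"job result: {total}")

for _ in range(3):                                 # three participating nodes
    threading.Thread(target=task_handler, daemon=True).start()

job_handler(list(range(1000)))                     # e.g., sum of squares 0..999
```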

  18. Computational Model of Human and System Dynamics in Free Flight: Studies in Distributed Control Technologies

    NASA Technical Reports Server (NTRS)

    Corker, Kevin M.; Pisanich, Gregory; Lebacqz, J. Victor (Technical Monitor)

    1998-01-01

    This paper presents a set of studies in full mission simulation and the development of a predictive computational model of human performance in control of complex airspace operations. NASA and the FAA have initiated programs of research and development to provide flight crew, airline operations and air traffic managers with automation aids to increase capacity in en route and terminal area to support the goals of safe, flexible, predictable and efficient operations. In support of these developments, we present a computational model to aid design that includes representation of multiple cognitive agents (both human operators and intelligent aiding systems). The demands of air traffic management require representation of many intelligent agents sharing world-models, coordinating action/intention, and scheduling goals and actions in a potentially unpredictable world of operations. The operator-model structure includes attention functions, action priority, and situation assessment. The cognitive model has been expanded to include working memory operations including retrieval from long-term store, and interference. The operator's activity structures have been developed to provide for anticipation (knowledge of the intention and action of remote operators), and to respond to failures of the system and other operators in the system in situation-specific paradigms. System stability and operator actions can be predicted by using the model. The model's predictive accuracy was verified using the full-mission simulation data of commercial flight deck operations with advanced air traffic management techniques.

  19. 3-D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS)

    SciTech Connect

    Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.

    1997-12-31

    The aim of the work performed is to develop a 3D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two basically different approaches to the structure of massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shared memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments have been carried out with different numbers of processors, up to 256, and the efficiency of parallelization has been evaluated as a function of processor number and parameters.
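
    A minimal sketch of the bookkeeping behind a fixed (non-reconstructed) decomposition, under the assumption of slab cuts along one axis: the slab boundaries depend only on the grid, so partial results reduce to essentially the same answer for any processor count (exact bitwise reproducibility additionally requires a fixed reduction order).

```python
import numpy as np

def decompose(nz, nproc):
    """Split nz grid planes into contiguous slabs, one slab per processor."""
    base, extra = divmod(nz, nproc)
    bounds, z0 = [], 0
    for p in range(nproc):
        z1 = z0 + base + (1 if p < extra else 0)
        bounds.append((z0, z1))
        z0 = z1
    return bounds

field = np.random.default_rng(6).random((32, 32, 32))
for nproc in (1, 4, 8):
    partials = [field[:, :, z0:z1].sum() for z0, z1 in decompose(32, nproc)]
    print(f"{nproc} processors -> total = {sum(partials):.12f}")
```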

  20. Modeling hydroxyl radical distribution and trialkyl phosphates oxidation in UV-H2O2 photoreactors using computational fluid dynamics.

    PubMed

    Santoro, Domenico; Raisee, Mehrdad; Moghaddami, Mostafa; Ducoste, Joel; Sasges, Micheal; Liberti, Lorenzo; Notarnicola, Michele

    2010-08-15

    Advanced Oxidation Processes (AOPs) promoted by ultraviolet light are innovative and potentially cost-effective solutions for treating persistent pollutants recalcitrant to conventional water and wastewater treatment. While several studies have been performed during the past decade to improve the fundamental understanding of the UV-H2O2 AOP and its kinetic modeling, Computational Fluid Dynamics (CFD) has only recently emerged as a powerful tool that allows a deeper understanding of complex photochemical processes in environmental and reactor engineering applications. In this paper, a comprehensive kinetic model of the UV-H2O2 AOP was coupled with the Reynolds averaged Navier-Stokes (RANS) equations using CFD to predict the oxidation of tributyl phosphate (TBP) and tri(2-chloroethyl) phosphate (TCEP) in two different photoreactors: a parallel- and a cross-flow UV device employing a UV lamp emitting primarily 253.7 nm radiation. CFD simulations, obtained for both turbulent and laminar flow regimes and compared with experimental data over a wide range of UV doses, enabled the spatial visualization of hydrogen peroxide and hydroxyl radical distributions in the photoreactor. The annular photoreactor displayed consistently better oxidation performance than the cross-flow system due to the absence of recirculation zones, as confirmed by the hydroxyl radical dose distributions. Notably, this discrepancy was found to be strongly dependent on and directly correlated with the hydroxyl radical rate constant, becoming relevant for conditions approaching diffusion-controlled reaction regimes (k(C,OH) > 10^9 M^-1 s^-1).

  1. A three-dimensional computational fluid dynamics model of shear stress distribution during neotissue growth in a perfusion bioreactor.

    PubMed

    Guyot, Y; Luyten, F P; Schrooten, J; Papantoniou, I; Geris, L

    2015-12-01

    Bone tissue engineering strategies use flow-through perfusion bioreactors to apply mechanical stimuli to cells seeded on porous scaffolds. Cells grow on the scaffold surface but also bridge the scaffold pores, leading to a fully filled scaffold that follows the scaffold's geometric characteristics. Current computational fluid dynamics approaches for tissue engineering bioreactor systems have mostly been carried out for empty scaffolds. The effect of 3D cell growth and extracellular matrix formation (termed in this study neotissue growth) on the surrounding fluid flow field is a challenge yet to be tackled. In this work a combined approach was followed, linking curvature-driven cell growth to fluid dynamics modeling. The level-set method (LSM) was employed to capture neotissue growth driven by curvature, while the Stokes and Darcy equations, combined in the Brinkman equation, provided information regarding the distribution of the shear stress profile at the neotissue/medium interface and within the neotissue itself during growth. The neotissue was assumed to be micro-porous, allowing flow through its structure while at the same time allowing the simulation of complete scaffold filling without numerical convergence issues. The results show a significant difference in the amplitude of shear stress for cells located within the micro-porous neotissue or at the neotissue/medium interface, demonstrating the importance of including the neotissue in the calculation of the mechanical stimulation of cells during culture. The presented computational framework is applied to different scaffold pore geometries, demonstrating its potential as a design tool for scaffold architecture that takes the growing neotissue into account. Biotechnol. Bioeng. 2015;112: 2591-2600. © 2015 Wiley Periodicals, Inc.

  2. Cooperative Fault Tolerant Distributed Computing

    SciTech Connect

    Fagg, Graham E.

    2006-03-15

    HARNESS was proposed as a system that combined the best of emerging technologies found in current distributed computing research and commercial products into a very flexible, dynamically adaptable framework that could be used by applications to allow them to evolve and better handle their execution environment. The HARNESS system was designed using the considerable experience from previous projects such as PVM, MPI, IceT, and Cumulvs. As such, the system was designed to avoid the common problems found with these current systems: it has no single point of failure and is able to survive machine, node, and software failures. Additional features included improved inter-component connectivity, with full support for dynamic downloading of additional components at run time, thus reducing the pressure on application developers to build in all the libraries they need in advance.

  3. Dynamics of Computing Structures

    NASA Astrophysics Data System (ADS)

    Huberman, B. A.

    1985-01-01

    Complex systems, such as biological organisms and computing structures, lie between the realms of statistical mechanics and the physics of a few degrees of freedom. Moreover, they can exhibit self-organized behavior which in many cases is characterized by learning, recognition and fault tolerance. This talk will describe studies of adaptive parallel computers which are capable of reliable learning and recognition. The existence of attractors in their dynamical behavior leads to a novel self-repairing mechanism which has been tested by quantitative experiments. Moreover, we will show how these highly concurrent structures, which are capable of universal computation, can be used to study simple, fault-tolerant, perceptual tasks.

  4. Task allocation in a distributed computing system

    NASA Technical Reports Server (NTRS)

    Seward, Walter D.

    1987-01-01

    A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
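
    An illustrative comparison of the static versus dynamic allocation styles discussed above, on identical processors with made-up task costs: the static schedule fixes assignments in advance (round-robin), while the dynamic one lets each idle processor pull the next task from a shared queue (longest-first here).

```python
import heapq

tasks = [7, 3, 9, 2, 5, 8, 1, 6, 4, 10]   # arbitrary task costs
P = 3                                     # identical processors

# Static: round-robin assignment fixed before execution.
print("static makespan :", max(sum(tasks[p::P]) for p in range(P)))

# Dynamic: each idle processor pulls the next task (longest-first queue order).
heap = [(0.0, p) for p in range(P)]
for t in sorted(tasks, reverse=True):
    time, p = heapq.heappop(heap)
    heapq.heappush(heap, (time + t, p))
print("dynamic makespan:", max(finish for finish, _ in heap))
```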

  5. Dynamic Load Balancing for Computational Plasticity on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Pramono, Eddy; Simon, Horst

    1994-01-01

    The simulation of computational plasticity on a complex structure remains a formidable computational task, especially when a highly nonlinear, complex material model is used. It appears that the computational requirements for such a problem can only be satisfied by massively parallel architectures. In order to effectively harness the tremendous computational power provided by such architectures, it is imperative to investigate and study the algorithmic and implementation issues pertaining to dynamic load balancing for computational plasticity on highly parallel, distributed-memory, multiple-instruction, multiple-data computers. This paper measures the effectiveness of the algorithms developed for handling dynamic load balancing.

  6. Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Chung, T. J.

    2002-03-01

    Computational fluid dynamics (CFD) techniques are used to study and solve complex fluid flow and heat transfer problems. This comprehensive text ranges from elementary concepts for the beginner to state-of-the-art CFD for the practitioner. It discusses and illustrates the basic principles of finite difference (FD), finite element (FE), and finite volume (FV) methods, with step-by-step hand calculations. Chapters go on to examine structured and unstructured grids, adaptive methods, computing techniques, and parallel processing. Finally, the author describes a variety of practical applications to problems in turbulence, reacting flows and combustion, acoustics, combined mode radiative heat transfer, multiphase flows, electromagnetic fields, and relativistic astrophysical flows. Students and practitioners--particularly in mechanical, aerospace, chemical, and civil engineering--will use this authoritative text to learn about and apply numerical techniques to the solution of fluid dynamics problems.

  7. Dynamic Architecture Computer

    DTIC Science & Technology

    1988-12-01

    of Engineering of the Air Force Institute of Technology Air University In Partial Fulfillment of Master of Science in Electrical Engineering Accession...architecture. The review assured that this study did not duplicate previous studies and provided the background information for this study. 4 Analysis of...a dynamic architecture computer based on the information obtained from the analysis outlined in the steps above. Analysis of Results. This concluding

  8. Computational reacting gas dynamics

    NASA Technical Reports Server (NTRS)

    Lam, S. H.

    1993-01-01

    In the study of high speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solutions, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified under three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'intractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full model. In other words, the present research aims to use computation to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed numerical results will become much easier. The approach of the research is called computational singular perturbation (CSP).
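
    Heading (2), the stiffness of the equations, can be made concrete with a toy two-timescale kinetics system (rate constants invented for illustration; the implicit Radau integrator stands in for the specialized solvers a production reacting-flow code would use):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    K_SLOW, K_FAST = 1.0, 1.0e6  # widely separated rates make the system stiff

    def rhs(t, y):
        y0, y1 = y
        return [-K_SLOW * y0,          # slow decay of the "reservoir" species
                K_FAST * (y0 - y1)]    # fast relaxation of y1 onto the slow manifold y1 = y0

    # An implicit method takes large steps once the fast transient has died out;
    # an explicit method would need steps of order 1/K_FAST throughout.
    sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], method="Radau", rtol=1e-8, atol=1e-10)
    print(sol.y[:, -1])  # after the transient, y1 tracks y0 almost exactly
    ```

    This collapse of the fast species onto a slow manifold is the kind of model reduction that CSP aims to automate.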

  9. Knowledge and Distributed computation

    DTIC Science & Technology

    1990-05-01

    convincing evidence that reasoning in terms of knowledge can lead to ... results about distributed computation, and we extend the standard...can be made precise in the context of computer science. In this thesis, we provide convincing evidence that reasoning in terms of knowledge can lead...against different adversaries. We show how different adversaries lead to different definitions of probabilistic knowledge, and given a particular adversary

  10. Numerical Uncertainty Analysis for Computational Fluid Dynamics using Student T Distribution -- Application of CFD Uncertainty Analysis Compared to Exact Analytical Solution

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.

    2014-01-01

    Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-t distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-t distribution can encompass the exact solution.
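
    A minimal sketch of the Student-t interval computation implied above, with made-up values standing in for repeated CFD estimates of a flow quantity:

    ```python
    import numpy as np
    from scipy import stats

    # hypothetical CFD estimates of a centerline velocity (m/s), one per
    # perturbed-input run; the numbers are illustrative only
    samples = np.array([1.02, 0.98, 1.05, 1.01, 0.97, 1.03])

    n = samples.size
    mean = samples.mean()
    sem = samples.std(ddof=1) / np.sqrt(n)   # standard error of the mean
    t_crit = stats.t.ppf(0.975, df=n - 1)    # two-sided 95% Student-t critical value

    lo, hi = mean - t_crit * sem, mean + t_crit * sem
    print(f"estimate = {mean:.4f}, 95% interval = [{lo:.4f}, {hi:.4f}]")
    ```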

  11. GRIMD: distributed computing for chemists and biologists

    PubMed Central

    Piotto, Stefano; Biasi, Luigi Di; Concilio, Simona; Castiglione, Aniello; Cattaneo, Giuseppe

    2014-01-01

    Motivation: Biologists and chemists are facing problems of high computational complexity that require the use of several computers organized in clusters or in specialized grids. Examples of such problems can be found in molecular dynamics (MD), in silico screening, and genome analysis. Grid Computing and Cloud Computing are becoming prevalent mainly because of their competitive performance/cost ratio. Regrettably, the diffusion of Grid Computing is strongly limited by two main factors: it is confined to scientists with a strong Computer Science background, and the analysis of the large amount of data produced can be cumbersome. We have developed a package named GRIMD to provide an easy and flexible implementation of distributed computing for the Bioinformatics community. GRIMD is very easy to install and maintain, and it does not require any specific Computer Science skill. Moreover, it permits preliminary analysis on the distributed machines to reduce the amount of data to transfer. GRIMD is very flexible because it shields the typical computational biologist from the need to write specific code for tasks such as molecular dynamics or docking calculations. Furthermore, it permits an efficient use of GPU cards whenever possible. GRIMD calculations scale almost linearly and, therefore, allow each machine in the network to be exploited efficiently. Here, we provide a few examples of grid computing in computational biology (MD and docking) and bioinformatics (proteome analysis). Availability: GRIMD is available for free for noncommercial research at www.yadamp.unisa.it/grimd Supplementary information: www.yadamp.unisa.it/grimd/howto.aspx PMID:24516326

  12. [Vertical distribution of formaldehyde concentration and simulated temperature and wind velocity from computational fluid dynamics in a gross anatomy laboratory].

    PubMed

    Takayanagi, Masaaki; Fujita, Toshio; Mikuni, Tsunebumi; Sakai, Makoto; Ishikawa, Youichi; Murakami, Kunio; Kimura, Akihiko; Kakuta, Sachiko; Sato, Fumi

    2008-03-01

    Cadavers for gross anatomy laboratories are typically embalmed in formaldehyde. Thus, medical students and instructors are exposed to formaldehyde vapors emitted from cadavers during dissection. In an attempt to improve the dissection environment, we examined indoor formaldehyde concentrations in a gross anatomy laboratory. Air samples were taken from 20, 110, 160, and 230 cm above the floor between dissection beds to represent areas near the floor, in the breathing zone of sitting students, in the breathing zone of standing students, and near the ceiling, respectively. Formaldehyde vapors were thoroughly diffused from the floor to the ceiling, suggesting that medical students are exposed to similar concentrations of formaldehyde regardless of distance from the floor. Computational fluid dynamics showed that cadavers are warmed by overhead fluorescent lights and the body heat of anatomy students, and indicated that the diffusion of formaldehyde vapors is increased by lighting and the body temperature of students. Computational fluid dynamics also showed that gentle convection from anatomy students and cadavers carries formaldehyde vapors upward, while downward flow near admission ports diffuses formaldehyde vapors from the ceiling to the floor in the anatomy laboratory.

  13. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM [1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  14. Simulation of the Velocity and Temperature Distribution of Inhalation Thermal Injury in a Human Upper Airway Model by Application of Computational Fluid Dynamics.

    PubMed

    Chang, Yang; Zhao, Xiao-zhuo; Wang, Cheng; Ning, Fang-gang; Zhang, Guo-an

    2015-01-01

    Inhalation injury is an important cause of death after thermal burns. This study was designed to simulate the velocity and temperature distribution of inhalation thermal injury in the upper airway in humans using computational fluid dynamics. Cervical computed tomography images of three Chinese adults were imported to Mimics software to produce three-dimensional models. After grids were established and boundary conditions were defined, the simulation time was set at 1 minute and the gas temperature was set to 80 to 320°C using ANSYS software (ANSYS, Canonsburg, PA) to simulate the velocity and temperature distribution of inhalation thermal injury. Cross-sections were cut at 2-mm intervals, and maximum airway temperature and velocity were recorded for each cross-section. The maximum velocity peaked in the lower part of the nasal cavity and then decreased with air flow. The velocities in the epiglottis and glottis were higher than those in the surrounding areas. Further, the maximum airway temperature decreased from the nasal cavity to the trachea. Computational fluid dynamics technology can be used to simulate the velocity and temperature distribution of inhaled heated air.

  15. Computational fluid dynamic applications

    SciTech Connect

    Chang, S.-L.; Lottes, S. A.; Zhou, C. Q.

    2000-04-03

    The rapid advancement of computational capability, including speed and memory size, has prompted the wide use of computational fluid dynamics (CFD) codes to simulate complex flow systems. CFD simulations are used to study the operating problems encountered in a system, to evaluate the impacts of operation/design parameters on the performance of a system, and to investigate novel design concepts. CFD codes are generally developed based on the conservation laws of mass, momentum, and energy that govern the characteristics of a flow. The governing equations are simplified and discretized for a selected computational grid system, and numerical methods are selected to calculate approximate flow properties. For turbulent, reacting, and multiphase flow systems, the complex processes relating to these aspects of the flow, i.e., turbulent diffusion, combustion kinetics, interfacial drag and heat and mass transfer, etc., are described by mathematical models, based on a combination of fundamental physics and empirical data, that are incorporated into the code. CFD simulation has been applied to a large variety of practical and industrial scale flow systems.
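
    To make the discretize-and-flux idea concrete, here is a generic textbook finite-volume update for 1D linear advection (an illustrative sketch, not code from the systems described):

    ```python
    import numpy as np

    def upwind_advection(u0, a, dx, dt, steps):
        """First-order finite-volume scheme for u_t + a u_x = 0 with a > 0:
        each cell average is updated by the difference of its face fluxes a*u,
        with the upwind (left) cell supplying each face value."""
        u = u0.copy()
        c = a * dt / dx
        assert 0 < c <= 1.0, "CFL condition violated"
        for _ in range(steps):
            u[1:] -= c * (u[1:] - u[:-1])  # flux difference; u[0] is held as inflow
        return u

    x = np.linspace(0.0, 1.0, 200)
    u0 = np.exp(-200.0 * (x - 0.3) ** 2)  # Gaussian pulse
    u = upwind_advection(u0, a=1.0, dx=x[1] - x[0], dt=0.004, steps=100)
    ```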

  16. Distributed GPU Computing in GIScience

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

    Geoscientists strive to discover potential principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing amount of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, with GPU-based technology maturing in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU, as a compelling alternative, has outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build up a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as specialized graphics devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies Reference: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. Visualization and Computer Graphics, IEEE
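
    To make the CPU/GPU contrast concrete, the following hedged sketch uses the CuPy library (an assumption; the presentation does not name its software stack), which mirrors the NumPy API on NVIDIA GPUs:

    ```python
    import numpy as np
    import cupy as cp  # assumes an NVIDIA GPU and a CuPy installation

    n = 4096
    a_cpu = np.random.rand(n, n).astype(np.float32)

    a_gpu = cp.asarray(a_cpu)           # host -> device transfer
    b_gpu = a_gpu @ a_gpu               # matrix product executes on the GPU
    cp.cuda.Stream.null.synchronize()   # wait for the kernel to finish

    b_cpu = cp.asnumpy(b_gpu)           # device -> host transfer
    ```

    The transfer steps matter: in a distributed GPU framework like the one described, keeping data resident on the device between operations is usually the difference between a speedup and a slowdown.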

  17. Development of a Dynamically Configurable,Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: aerospace system and component representation using a hierarchical object-oriented component model which enables the use of multimodels and enforces component interoperability; a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process, and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.

  18. Computational astrophysical fluid dynamics

    NASA Technical Reports Server (NTRS)

    Norman, Michael L.; Clarke, David A.; Stone, James M.

    1991-01-01

    The field of astrophysical fluid dynamics (AFD) is described as an emerging discipline which derives historically from both the theory of stellar evolution and space plasma physics. The fundamental physical assumption behind AFD is that fluid equations of motion accurately describe the evolution of plasmas on scales that are large in comparison with particle interaction length scales. Particular attention is given to purely fluid models of large-scale astrophysical plasmas. The role of computer simulation in AFD research is also highlighted and a suite of general-purpose application codes for AFD research is discussed. The codes are called ZEUS-2D and ZEUS-3D and solve the equations of AFD in two and three dimensions, respectively, in several coordinate geometries for general initial and boundary conditions. The topics of bipolar outflows from protostars, galactic superbubbles and supershells, and extragalactic radio sources are addressed.

  19. BESIII production with distributed computing

    NASA Astrophysics Data System (ADS)

    Zhang, X. M.; Yan, T.; Zhao, X. H.; Ma, Z. T.; Yan, X. F.; Lin, T.; Deng, Z. Y.; Li, W. D.; Belov, S.; Pelevanyuk, I.; Zhemchugov, A.; Cai, H.

    2015-12-01

    Distributed computing is necessary nowadays for high energy physics experiments to organize heterogeneous computing resources all over the world to process enormous amounts of data. The BESIII experiment in China has established its own distributed computing system, based on DIRAC, as a supplement to local clusters, collecting cluster, grid, desktop and cloud resources from collaborating member institutes around the world. The system consists of workload management and data management components to deal with the BESIII Monte Carlo production workflow in a distributed environment. A dataset-based data transfer system has been developed to support data movement among sites. File and metadata management tools and a job submission frontend have been developed to provide a virtual layer for BESIII physicists to use distributed resources. Moreover, the paper describes the experience of coping with a lack of grid expertise and limited manpower within the BESIII community.

  20. Sputnik: ad hoc distributed computation.

    PubMed

    Völkel, Gunnar; Lausser, Ludwig; Schmid, Florian; Kraus, Johann M; Kestler, Hans A

    2015-04-15

    In bioinformatics applications, computationally demanding algorithms are often parallelized to speed up computation. Nevertheless, setting up computational environments for distributed computation is often tedious. The aim of this project was a lightweight, ad hoc setup and fault-tolerant computation requiring only a Java runtime and no administrator rights, while utilizing all CPU cores most effectively. The Sputnik framework provides ad hoc distributed computation on the Java Virtual Machine and uses all supplied CPU cores fully. It provides a graphical user interface for deployment setup and a web user interface displaying the current status of computation jobs. Neither a permanent setup nor administrator privileges are required. We demonstrate the utility of our approach on feature selection of microarray data. The Sputnik framework is available on GitHub http://github.com/sysbio-bioinf/sputnik under the Eclipse Public License. hkestler@fli-leibniz.de or hans.kestler@uni-ulm.de Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Distributed computing at the SSCL

    SciTech Connect

    Cormell, L.; White, R.

    1993-05-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desktop is well known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent; he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.

  2. Computational Fluid Dynamics Library

    SciTech Connect

    Kashiwa, B. A.; Padial, N. T.; Rauenzahn, R. M.; VanderHeyden, W. B.

    2005-03-04

    CFDLib05 is the Los Alamos Computational Fluid Dynamics LIBrary. This is a collection of hydrocodes using a common data structure and a common numerical method, for problems ranging from single-field, incompressible flow, to multi-species, multi-field, compressible flow. The data structure is multi-block, with a so-called structured grid in each block. The numerical method is a Finite-Volume scheme employing a state vector that is fully cell-centered. This means that the integral form of the conservation laws is solved on the physical domain that is represented by a mesh of control volumes. The typical control volume is an arbitrary quadrilateral in 2D and an arbitrary hexahedron in 3D. The Finite-Volume scheme is for time-unsteady flow and remains well coupled by means of time and space centered fluxes; if a steady state solution is required, the problem is integrated forward in time until the user is satisfied that the state is stationary.

  3. Low Power Computing in Distributed Systems

    DTIC Science & Technology

    2006-04-01

    IEEE Communications Magazine, Volume 40, Issue 8, pp. 102-114, Aug. 2002. [3] E. R. Post and M. Orth, "Smart Fabric, or Wearable Computing," Proc...www.cse.psu.edu/~mdl/software.htm [20] http://carlsberg.mit.edu/JouleTrack/ [21] M. Srivastava, A. Chandrakasan, R. Brodersen, "Predictive system shutdown...Dynamic Load Balancing in Distributed Systems," IEEE International Conference on Systems, Man and Cybernetics, pp. 3795-3799, 1995. [27] A. Talukder

  4. A Wigner Distribution Analysis of Scattering Dynamics

    NASA Astrophysics Data System (ADS)

    Weeks, David; Lacy, Brent

    2009-04-01

    Using the time-dependent Channel Packet Method (CPM) [D. E. Weeks, T. A. Niday, S. H. Yang, J. Chem. Phys. 125, 164301 (2006)], a Fourier transformation of the correlation function between evolving wave packets is used to compute scattering matrix elements. The correlation function can also be used to compute a Wigner distribution as a function of time and energy. This scattering Wigner distribution is then used to investigate the times at which various energetic contributions to the scattering matrix are made during a molecular collision. We compute scattering Wigner distributions for a variety of molecular systems and use them to characterize the associated molecular dynamics. In particular, the square well provides a simple and easily modified potential to study the relationship between the scattering Wigner distribution and wave packet dynamics. Additional systems being studied include the collinear H + H2 molecular reaction and the non-adiabatic B + H2 molecular collision.

  5. Oleuropein: Molecular Dynamics and Computation.

    PubMed

    Gentile, Luigi; Uccella, Nicola A; Sivakumar, Ganapathy

    2017-09-11

    Olive oil and table olive biophenols have been shown to significantly enrich the hedonic-sensory and nutritional quality of the Mediterranean diet. Oleuropein is one of the predominant biophenols in green olives and leaves; it not only has noteworthy free-radical quenching activity but also putatively reduces the incidence of various cancers. Clinical trials suggest that the consumption of extra virgin olive oil reduces the risk of several degenerative diseases. The oleuropein-based bioactives in olive oil could reduce tumor necrosis factor α, interleukin-1β and nitric oxide. Therefore, the quality of olive bioactives should be preserved and even improved because of their disease-fighting properties. Understanding the molecular dynamics of oleuropein is crucial to increasing olive oil and table olive quality. The objective of this review is to provide the molecular dynamics and computational mapping of oleuropein. It is a biophenol-secoiridoid expressing different functionalities, such as two π-bonds, two esters, two acetals, one catechol, and four hexose hydroxyls, within a molecular weight of 540. The sequential bond-breaking mechanisms were analyzed through unimolecular reactions under electrospray ionization, collision-activated dissociation, and fast atom bombardment mass spectrometry. The solvent-free reactivity of oleuropein leads to glucose loss and bioactive aglycone-dialdehydes via secoiridoid ring opening. The electron distribution of oleuropein revealed that free-radical non-polar processes occur from its highest occupied molecular orbital, while the lowest unoccupied molecular orbital is clearly devoted to nucleophilic and base-site reactivity. This molecular dynamics and computational mapping of oleuropein could contribute to the engineering of olive-based biomedicine and/or functional food. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  6. Towards an Infrastructure for MLS Distributed Computing

    DTIC Science & Technology

    1998-01-01

    Distributed computing owes its success to the development of infrastructure, middleware, and standards (e.g., CORBA) by the computing industry. This...Government must protect national security information against unauthorized information flow. To support MLS distributed computing, an MLS infrastructure...protection of classified information and use both the emerging distributed computing and commercial security infrastructures. The resulting infrastructure

  7. Distributed Computing in Universities and Colleges.

    ERIC Educational Resources Information Center

    Sircar, Sumit

    1979-01-01

    Analyzes the implications of distributed computing in institutions of higher education. Discusses (1) the extent to which the quality of computing might be enhanced by adopting a distributed computing approach, (2) variations in distributed systems design and the cost of adoption, and (3) administration of distributed systems. (Author/CMV)

  8. Biomolecular dynamics by computer analysis

    SciTech Connect

    Eilbeck, J.C.; Lomdahl, P.S.; Scott, A.C.

    1984-01-01

    As numerical tools (computers and display equipment) become more powerful and the atomic structures of important biological molecules become known, the importance of detailed computation of nonequilibrium biomolecular dynamics increases. In this manuscript we report results from a well developed study of the hydrogen bonded polypeptide crystal acetanilide, a model protein. Directions for future research are suggested. 9 references, 6 figures.

  9. Computer animation challenges for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine

    2012-07-01

    Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.
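
    The core of the grid-based methods surveyed above is the semi-Lagrangian advection step popularized by Stam's "stable fluids"; a minimal sketch (grid units and the rotating test field are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def semi_lagrangian_advect(q, vx, vy, dt):
        """Trace each grid point backward along the velocity field and sample the
        advected quantity there with bilinear interpolation; unconditionally
        stable, which is why animation favors it over accuracy-oriented schemes."""
        ny, nx = q.shape
        y, x = np.mgrid[0:ny, 0:nx].astype(float)
        x_back = x - dt * vx   # departure points, velocities in cells per step
        y_back = y - dt * vy
        return map_coordinates(q, [y_back, x_back], order=1, mode="nearest")

    # advect a smoke blob through a rigid rotation for a few steps
    ny, nx = 64, 64
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    vx, vy = -(y - ny / 2) * 0.05, (x - nx / 2) * 0.05
    q = np.exp(-((x - 20.0) ** 2 + (y - 32.0) ** 2) / 40.0)
    for _ in range(10):
        q = semi_lagrangian_advect(q, vx, vy, dt=1.0)
    ```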

  10. Computational Workbench for Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2007-01-01

    PyCraft is a computer program that provides an interactive, workbench-like computing environment for developing and testing algorithms for multibody dynamics. Examples of multibody dynamic systems amenable to analysis with the help of PyCraft include land vehicles, spacecraft, robots, and molecular models. PyCraft is based on the Spatial-Operator-Algebra (SOA) formulation for multibody dynamics. The SOA operators enable construction of simple and compact representations of complex multibody dynamical equations. Within the PyCraft computational workbench, users can, essentially, use the high-level SOA operator notation to represent a variety of dynamical quantities and algorithms and to perform computations interactively. PyCraft provides a Python-language interface to underlying C++ code. Working with SOA concepts, a user can create and manipulate Python-level operator classes in order to implement and evaluate new dynamical quantities and algorithms. During use of PyCraft, virtually all SOA-based algorithms are available for computational experiments.

  11. Computational Fluid Dynamics.

    DTIC Science & Technology

    1986-06-01

    dual variable method can be applied to the predictive model of the fluid dynamics associated with an axially symmetric centerbody combustor being...general nonlinear, parameter-dependent equations of the form F(z, A) = 0, where F is a nonlinear mapping, z is the state variable representing the solu...represents, in general, a differentiable manifold in the combined space of the state variable and the parameter vector. This requires a regularity assumption

  12. Overlapping clusters for distributed computation.

    SciTech Connect

    Mirrokni, Vahab; Andersen, Reid; Gleich, David F.

    2010-11-01

    Scalable, distributed algorithms must address communication problems. We investigate overlapping clusters, or vertex partitions that intersect, for graph computations. This setup stores more of the graph than required but then affords the ease of implementation of vertex-partitioned algorithms. Our hope is that this technique allows us to reduce communication in a computation on a distributed graph. The motivation above draws on recent work in communication-avoiding algorithms. Mohiyuddin et al. (SC09) design a matrix-powers kernel that gives rise to an overlapping partition. Fritzsche et al. (CSC2009) develop an overlapping clustering for a Schwarz method. Both techniques extend an initial partitioning with overlap. Our procedure generates overlap directly. Indeed, Schwarz methods are commonly used to capitalize on overlap. Elsewhere, overlapping communities (Ahn et al., Nature 2009; Mishra et al., WAW2007) are now a popular model of structure in social networks. These have long been studied in statistics (Cole and Wishart, CompJ 1970). We present two types of results: (i) an estimated swapping probability ρ∞; and (ii) the communication volume of a parallel PageRank solution (link-following α = 0.85) using an additive Schwarz method. The volume ratio is the amount of extra storage for the overlap (2 means we store the graph twice). Below, as the ratio increases, the swapping probability and PageRank communication volume decrease.
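
    A toy calculation of the two quantities reported above, under simplifying assumptions (one-hop ghost vertices, one unit of communication per cut edge per iteration; not the paper's estimator):

    ```python
    from collections import defaultdict

    def comm_volume(edges, part):
        """Without overlap, every edge crossing parts costs a message per iteration."""
        return sum(1 for u, v in edges if part[u] != part[v])

    def storage_ratio(edges, part):
        """With one-hop overlap, each part also stores ghost copies of its
        boundary neighbors; cut-edge messages disappear at the cost of extra
        storage (the 'volume ratio': 2 means the graph is stored twice)."""
        stored = defaultdict(set)
        for v, p in part.items():
            stored[p].add(v)
        for u, v in edges:
            if part[u] != part[v]:
                stored[part[u]].add(v)  # ghost copy of v in u's part
                stored[part[v]].add(u)  # ghost copy of u in v's part
        return sum(len(s) for s in stored.values()) / len(part)

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
    part = {0: "A", 1: "A", 2: "B", 3: "B"}
    print(comm_volume(edges, part), storage_ratio(edges, part))  # 3 cut edges, ratio 2.0
    ```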

  13. Hybrid Human-Computing Distributed Sense-Making: Extending the SOA Paradigm for Dynamic Adjudication and Optimization of Human and Computer Roles

    ERIC Educational Resources Information Center

    Rimland, Jeffrey C.

    2013-01-01

    In many evolving systems, inputs can be derived from both human observations and physical sensors. Additionally, many computation and analysis tasks can be performed by either human beings or artificial intelligence (AI) applications. For example, weather prediction, emergency event response, assistive technology for various human sensory and…

  15. Adaptive file allocation in distributed computer systems

    NASA Astrophysics Data System (ADS)

    Mahmood, A.; Khan, H. U.; Fatmi, H. A.

    1994-12-01

    An algorithm to dynamically reallocate database files in a computer network is presented. The proposed algorithm uses a best-fit approach to allocate and delete beneficial file copies. The key problem of economically estimating future access and update patterns is discussed, and an algorithm based on the Gabor-Kolmogorov learning process is presented to estimate the access and update patterns. A distributed candidate selection algorithm is presented to reduce the number of files and nodes involved in reallocation. Simulation results are presented to demonstrate both the accuracy and the efficiency of the proposed algorithms.
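
    A hedged toy version of the economic test behind such reallocation decisions (the cost model and parameters are illustrative, not the paper's algorithm):

    ```python
    def copy_is_beneficial(reads, writes, remote_read_cost, update_cost, storage_cost):
        """Create a file copy on a node only if the remote reads it saves
        outweigh the extra update traffic and storage it incurs."""
        return reads * remote_read_cost > writes * update_cost + storage_cost

    # estimated per-period access pattern for one node; a simple
    # exponential-smoothing estimator could stand in for the
    # Gabor-Kolmogorov learning process when forecasting these counts
    print(copy_is_beneficial(reads=120, writes=10,
                             remote_read_cost=1.0, update_cost=5.0, storage_cost=20.0))
    ```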

  16. Pair distribution function computed tomography.

    PubMed

    Jacques, Simon D M; Di Michiel, Marco; Kimber, Simon A J; Yang, Xiaohao; Cernik, Robert J; Beale, Andrew M; Billinge, Simon J L

    2013-01-01

    An emerging theme of modern composites and devices is the coupling of nanostructural properties of materials with their targeted arrangement at the microscale. Of the imaging techniques developed that provide insight into such designer materials and devices, those based on diffraction are particularly useful. However, to date, these have been heavily restrictive, providing information only on materials that exhibit high crystallographic ordering. Here we describe a method that uses a combination of X-ray atomic pair distribution function analysis and computed tomography to overcome this limitation. It allows the structure of nanocrystalline and amorphous materials to be identified, quantified and mapped. We demonstrate the method with a phantom object and subsequently apply it to resolving, in situ, the physicochemical states of a heterogeneous catalyst system. The method may have potential impact across a range of disciplines from materials science, biomaterials, geology, environmental science, palaeontology and cultural heritage to health.

  17. Computational Fluid Dynamics Modeling of The Dalles Project: Effects of Spill Flow Distribution Between the Washington Shore and the Tailrace Spillwall

    SciTech Connect

    Rakowski, Cynthia L.; Serkowski, John A.; Richmond, Marshall C.

    2010-12-01

    The U.S. Army Corps of Engineers-Portland District (CENWP) has ongoing work to improve the survival of juvenile salmonids (smolt) migrating past The Dalles Dam. As part of that effort, a spillwall was constructed to improve juvenile egress through the tailrace downstream of the stilling basin. The spillwall was designed to improve smolt survival by decreasing smolt retention time in the spillway tailrace and the exposure to predators on the spillway shelf. The spillwall guides spillway flows, and hence smolt, more quickly into the thalweg. In this study, an existing computational fluid dynamics (CFD) model was modified and used to characterize tailrace hydraulics between the new spillwall and the Washington shore for six different total river flows. The effect of spillway flow distribution was simulated for three spill patterns at the lowest total river flow. The commercial CFD solver, STAR-CD version 4.1, was used to solve the unsteady Reynolds-averaged Navier-Stokes equations together with the k-epsilon turbulence model. Free surface motion was simulated using the volume-of-fluid (VOF) technique. The model results were used in two ways. First, results graphics were provided to CENWP and regional fisheries agency representatives for use and comparison to the same flow conditions at a reduced-scale physical model. The CFD results were very similar in flow pattern to that produced by the reduced-scale physical model but these graphics provided a quantitative view of velocity distribution. During the physical model work, an additional spill pattern was tested. Subsequently, that spill pattern was also simulated in the numerical model. The CFD streamlines showed that the hydraulic conditions were likely to be beneficial to fish egress at the higher total river flows (120 kcfs and greater, uniform flow distribution). At the lowest flow case, 90 kcfs, it was necessary to use a non-uniform distribution. Of the three distributions tested, splitting the flow evenly between

  18. Performance of the ISIS Distributed Computing Toolkit

    DTIC Science & Technology

    1994-06-22

    Performance of the ISIS Distributed Computing Toolkit* Kenneth P. Birman...isis.com. Please cite as Technical Report TR-94-1432, Dept. of Computer Science, Cornell University. Performance of the Isis Distributed Computing Toolkit... Distributed computing, performance, process groups, atomic broadcast, causal and total message ordering, cbcast, abcast, multiple process groups

  19. Deadlock Detection in Distributed Computing Systems.

    DTIC Science & Technology

    1982-06-01

    With the advent of distributed computing systems, the problem of deadlock, which has been essentially solved for centralized computing systems, has...reappeared. Existing centralized deadlock detection techniques are either too expensive or they do not work correctly in distributed computing systems...incorrect. Additionally, although fault-tolerance is usually listed as an advantage of distributed computing systems, little has been done to analyze

  20. Computational dynamics of soft machines

    NASA Astrophysics Data System (ADS)

    Hu, Haiyan; Tian, Qiang; Liu, Cheng

    2017-06-01

    Soft machine refers to a kind of mechanical system made of soft materials to complete sophisticated missions, such as handling a fragile object and crawling along a narrow tunnel corner, under low cost control and actuation. Hence, soft machines have raised great challenges to computational dynamics. In this review article, recent studies of the authors on the dynamic modeling, numerical simulation, and experimental validation of soft machines are summarized in the framework of multibody system dynamics. The dynamic modeling approaches are presented first for the geometric nonlinearities of coupled overall motions and large deformations of a soft component, the physical nonlinearities of a soft component made of hyperelastic or elastoplastic materials, and the frictional contacts/impacts of soft components, respectively. Then the computation approach is outlined for the dynamic simulation of soft machines governed by a set of differential-algebraic equations of very high dimensions, with an emphasis on the efficient computations of the nonlinear elastic force vector of finite elements. The validations of the proposed approaches are given via three case studies, including the locomotion of a soft quadrupedal robot, the spinning deployment of a solar sail of a spacecraft, and the deployment of a mesh reflector of a satellite antenna, as well as the corresponding experimental studies. Finally, some remarks are made for future studies.

  1. VLSI Design, Parallel Computation and Distributed Computing

    DTIC Science & Technology

    1991-09-30

    Operations Research and Management Science ... Product Planning and Inventory, North-Holland, 1979 ... Significant progress has been made on the development of efficient sorting circuits, network management protocols for high-speed networks, and distributed graph

  2. Validation of computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Sacher, P. W.; Bradley, R. G., Jr.; Schmidt, W.

    1989-05-01

    The Fluid Dynamics Panel AGARD Symposium entitled Validation of Computational Fluid Dynamics is reviewed and evaluated. The purpose of the Symposium was to assess the state of the art of Validation of Computer Codes and to ensure that the mathematical and numerical schemes employed in the codes correctly model the critical physics of the flow field under consideration. The evaluator addresses each of the papers presented separately and makes general comments on the seven major topic sessions. In addition, a Poster Session is reviewed in detail. It is evident that the new possibilities of CFD provide efficient tools for Analysis and Design in the Aeronautical Industry, but it is also evident that in spite of the existence of a number of excellent experimental databases, there is still a need for efforts in validating the computer programs both by experiment as well as by numerical exercises.

  3. On Deadlock Detection in Distributed Computing Systems.

    DTIC Science & Technology

    1983-04-01

    With the advent of distributed computing systems, the problem of deadlock, which has been essentially solved for centralized computing systems, has...reappeared. Existing centralized deadlock detection techniques are either too expensive or they do not work correctly in distributed computing systems

  4. A Different Look at Secure Distributed Computation

    DTIC Science & Technology

    1997-06-01

    9, 12]. Still, the worst-case view dominates the secure computing literature in general and the secure distributed computing literature in...The model we now suggest represents distributed computing as two or more interwoven networks of competing nodes. In 1997, pp. 109-115, the

  5. Computer Graphics Simulations of Sampling Distributions.

    ERIC Educational Resources Information Center

    Gordon, Florence S.; Gordon, Sheldon P.

    1989-01-01

    Describes the use of computer graphics simulations to enhance student understanding of sampling distributions that arise in introductory statistics. Highlights include the distribution of sample proportions, the distribution of the difference of sample means, the distribution of the difference of sample proportions, and the distribution of sample…

  6. Cooperative Autonomic Management in Dynamic Distributed Systems

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Zhao, Ming; Fortes, José A. B.

    The centralized management of large distributed systems is often impractical, particularly when both the topology and status of the system change dynamically. This paper proposes an approach to application-centric self-management in large distributed systems consisting of a collection of autonomic components that join and leave the system dynamically. Cooperative autonomic components self-organize into a dynamically created overlay network. Through local information sharing with neighbors, each component gains access to global information as needed for optimizing performance of applications. The approach has been validated and evaluated by developing a decentralized autonomic system consisting of multiple autonomic application managers previously developed for the In-VIGO grid-computing system. Using analytical results from complex random network theory and measurements done in a prototype system, we demonstrate the robustness, self-organization and adaptability of our approach, both theoretically and experimentally.
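
    The "local information sharing with neighbors" mechanism can be illustrated with a toy gossip-averaging loop (the ring topology and pairwise-averaging rule are illustrative assumptions, not the In-VIGO implementation):

    ```python
    import random

    def gossip_average(values, neighbors, rounds=200):
        """Each component repeatedly averages its local estimate with a random
        neighbor; every estimate converges to the global mean using only
        local exchanges, with no central coordinator."""
        v = dict(values)
        nodes = list(v)
        for _ in range(rounds):
            a = random.choice(nodes)
            b = random.choice(neighbors[a])
            v[a] = v[b] = (v[a] + v[b]) / 2.0  # pairwise averaging step
        return v

    values = {0: 10.0, 1: 0.0, 2: 4.0, 3: 2.0}
    ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(gossip_average(values, ring))  # all entries approach the mean, 4.0
    ```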

  7. Parallel and Distributed Computing Combinatorial Algorithms

    DTIC Science & Technology

    1993-10-01

    PARALLEL AND DISTRIBUTED COMPUTING COMBINATORIAL ALGORITHMS. 6. AUTHOR(S): 2304/DS F49620-92-J-0125 DR. LEIGHTON. 7. PERFORMING ORGANIZATION NAME...on several problems involving parallel and distributed computing and combinatorial optimization. This research is reported in the numerous papers that...network decomposition. In Proceedings of the Eleventh Annual ACM Symposium on Principles of Distributed Computing, August 1992. [15] B. Awerbuch, B

  8. Modular Programming Techniques for Distributed Computing Tasks

    DTIC Science & Technology

    2004-08-01

    Modular Programming Techniques for Distributed Computing Tasks Anthony Cowley, Hwa-Chow Hsu, Camillo J. Taylor GRASP Laboratory University of...network, distributed computing , software design 1. INTRODUCTION As efforts to field sensor networks, or teams of mobile robots, become more...TITLE AND SUBTITLE Modular Programming Techniques for Distributed Computing Tasks 5a. CONTRACT NUMBER 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER

  9. Distributed Computing Environment for Mine Warfare Command

    DTIC Science & Technology

    1993-06-01

    AD-A268 799. NAVAL POSTGRADUATE SCHOOL, Monterey, California, 1993. THESIS: DISTRIBUTED COMPUTING ENVIRONMENT...Title (include security classification): DISTRIBUTED COMPUTING ENVIRONMENT FOR MINE WARFARE COMMAND ... DISTRIBUTED COMPUTING ... A. STANDARDS FOR OPEN SYSTEMS ... 1. OSI Model ... 2. DOD Model

  10. A Weibull distribution accrual failure detector for cloud computing

    PubMed Central

    Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component used to build high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared using public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229

  11. A Weibull distribution accrual failure detector for cloud computing.

    PubMed

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component used to build high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared using public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
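
    A minimal sketch of the idea, assuming SciPy's Weibull fit as the estimator (the paper's own estimator and interface may differ): the suspicion level grows as the silence since the last heartbeat moves into the tail of the fitted inter-arrival distribution.

    ```python
    import numpy as np
    from scipy import stats

    class WeibullAccrualDetector:
        """Accrual failure detector over a sliding window of heartbeat gaps."""

        def __init__(self, window=100):
            self.intervals, self.last, self.window = [], None, window

        def heartbeat(self, now):
            if self.last is not None:
                self.intervals = (self.intervals + [now - self.last])[-self.window:]
            self.last = now

        def suspicion(self, now):
            """phi = -log10 P(a heartbeat is still pending); higher means the
            monitored process is more likely to have failed."""
            if len(self.intervals) < 10:
                return 0.0
            shape, _, scale = stats.weibull_min.fit(self.intervals, floc=0)
            p_later = stats.weibull_min.sf(now - self.last, shape, loc=0, scale=scale)
            return -np.log10(max(p_later, 1e-300))
    ```

    An application would suspect the remote process once suspicion(now) crosses a threshold chosen for the desired detection-speed/accuracy trade-off.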

  12. A distributed computing approach to mission operations support. [for spacecraft

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1975-01-01

    Computing support for mission operations includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale, third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.

  13. Distributed Computing at Belle II

    NASA Astrophysics Data System (ADS)

    Bansal, Vikas; Belle II Collaboration

    2016-03-01

    The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start physics data taking in 2018 and will accumulate 50 ab-1 of e+e- collision data, about 50 times larger than the data set of the earlier Belle experiment. The computing requirements of Belle II are comparable to those of a RUN I high-pT LHC experiment. Computing will make full use of high speed networking and of the Computing Grids in North America, Asia and Europe. Results of an initial MC simulation campaign with 5 ab-1 equivalent luminosity will be described.

  14. Distributed computing and nuclear reactor analysis

    SciTech Connect

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-03-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations.

  15. Parallel computational fluid dynamics - Implementations and results

    NASA Technical Reports Server (NTRS)

    Simon, Horst D. (Editor)

    1992-01-01

    The present volume on parallel CFD discusses implementations on parallel machines, numerical algorithms for parallel CFD, and performance evaluation and computer science issues. Attention is given to a parallel algorithm for compressible flows through rotor-stator combinations, a massively parallel Euler solver for unstructured grids, a fast scheme to analyze 3D disk airflow on a parallel computer, and a block implicit multigrid solution of the Euler equations. Topics addressed include a 3D ADI algorithm on distributed memory multiprocessors, clustered element-by-element computations for fluid flow, hypercube FFT and the Fourier pseudospectral method, and an investigation of parallel iterative algorithms for CFD. Also discussed are fluid dynamics using interface methods on parallel processors, sorting for particle flow simulation on the connection machine, a large grain mapping method, and efforts toward a Teraflops capability for CFD.

  16. Connecting micro dynamics and population distributions in system dynamics models.

    PubMed

    Fallah-Fini, Saeideh; Rahmandad, Hazhir; Chen, Hsin-Jen; Xue, Hong; Wang, Youfa

    2013-01-01

    Researchers use system dynamics models to capture the mean behavior of groups of indistinguishable population elements (e.g., people) aggregated in stock variables. Yet, many modeling problems require capturing the heterogeneity across elements with respect to some attribute(s) (e.g., body weight). This paper presents a new method to connect the micro-level dynamics associated with elements in a population with the macro-level population distribution along an attribute of interest without the need to explicitly model every element. We apply the proposed method to model the distribution of Body Mass Index and its changes over time in a sample population of American women obtained from the U.S. National Health and Nutrition Examination Survey. Comparing the results with those obtained from an individual-based model that captures the same phenomena shows that our proposed method delivers accurate results with less computation than the individual-based model.

  17. Connecting micro dynamics and population distributions in system dynamics models

    PubMed Central

    Rahmandad, Hazhir; Chen, Hsin-Jen; Xue, Hong; Wang, Youfa

    2014-01-01

    Researchers use system dynamics models to capture the mean behavior of groups of indistinguishable population elements (e.g., people) aggregated in stock variables. Yet, many modeling problems require capturing the heterogeneity across elements with respect to some attribute(s) (e.g., body weight). This paper presents a new method to connect the micro-level dynamics associated with elements in a population with the macro-level population distribution along an attribute of interest without the need to explicitly model every element. We apply the proposed method to model the distribution of Body Mass Index and its changes over time in a sample population of American women obtained from the U.S. National Health and Nutrition Examination Survey. Comparing the results with those obtained from an individual-based model that captures the same phenomena shows that our proposed method delivers accurate results with less computation than the individual-based model. PMID:25620842

  18. Equilibrium distribution from distributed computing (simulations of protein folding).

    PubMed

    Scalco, Riccardo; Caflisch, Amedeo

    2011-05-19

    Multiple independent molecular dynamics (MD) simulations are often carried out starting from a single protein structure or a set of conformations that do not correspond to a thermodynamic ensemble. Therefore, a significant statistical bias is usually present in the Markov state model generated by simply combining the whole MD sampling into a network whose nodes and links are clusters of snapshots and transitions between them, respectively. Here, we introduce a depth-first search algorithm to extract from the whole conformation space network the largest ergodic component, i.e., the subset of nodes of the network whose transition matrix corresponds to an ergodic Markov chain. For multiple short MD simulations of a globular protein (as in distributed computing), the steady state, i.e., stationary distribution determined using the largest ergodic component, yields more accurate free energy profiles and mean first passage times than the original network or the ergodic network obtained by imposing detailed balance by means of symmetrization of the transition counts.
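
    A sketch of the extraction step, using strongly connected components from networkx in place of the authors' own depth-first search (the convergence tolerance and toy counts are illustrative):

    ```python
    import numpy as np
    import networkx as nx

    def largest_ergodic_component(transition_counts):
        """Restrict a Markov state model to its largest strongly connected set of
        nodes and return the stationary distribution of the restricted chain."""
        g = nx.DiGraph((i, j) for (i, j), c in transition_counts.items() if c > 0)
        nodes = sorted(max(nx.strongly_connected_components(g), key=len))
        idx = {n: k for k, n in enumerate(nodes)}
        t = np.zeros((len(nodes), len(nodes)))
        for (i, j), c in transition_counts.items():
            if i in idx and j in idx:
                t[idx[i], idx[j]] = c
        t /= t.sum(axis=1, keepdims=True)  # strong connectivity keeps rows nonzero
        pi = np.full(len(nodes), 1.0 / len(nodes))
        for _ in range(5000):              # power iteration (assumes aperiodicity)
            new = pi @ t
            if np.abs(new - pi).max() < 1e-13:
                break
            pi = new
        return nodes, pi / pi.sum()

    counts = {(0, 1): 5, (1, 0): 4, (1, 1): 1, (2, 2): 7, (3, 0): 2}  # node 3 is transient
    print(largest_ergodic_component(counts))
    ```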

  19. Dynamical Properties of Polymers: Computational Modeling

    SciTech Connect

    CURRO, JOHN G.; ROTTACH, DANA; MCCOY, JOHN D.

    2001-01-01

    The free volume distribution has been a qualitatively useful concept by which dynamical properties of polymers, such as the penetrant diffusion constant, viscosity, and glass transition temperature, could be correlated with static properties. In an effort to put this on a more quantitative footing, we define the free volume distribution as the probability of finding a spherical cavity of radius R in a polymer liquid. This is identical to the insertion probability in scaled particle theory, and is related to the chemical potential of hard spheres of radius R in a polymer in the Henry's law limit. We used the Polymer Reference Interaction Site Model (PRISM) theory to compute the free volume distribution of semiflexible polymer melts as a function of chain stiffness. Good agreement was found with the corresponding free volume distributions obtained from MD simulations. Surprisingly, the free volume distribution was insensitive to the chain stiffness, even though the single chain structure and the intermolecular pair correlation functions showed a strong dependence on chain stiffness. We also calculated the free volume distributions of polyisobutylene (PIB) and polyethylene (PE) at 298K and at elevated temperatures from PRISM theory. We found that PIB has more of its free volume distributed in smaller size cavities than for PE at the same temperature.
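
    A toy estimate of this insertion probability, assuming a surrogate configuration of random point sites in place of an MD- or PRISM-derived melt; box size, site count, and trial count are arbitrary. P(R) relates to the hard-sphere excess chemical potential via beta*mu_ex(R) = -ln P(R).

    ```python
    # Widom-style trial insertion: P(R) is the fraction of random trial points
    # whose nearest polymer site lies farther than R. Random points stand in
    # for a real melt configuration; all parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    L = 10.0                                     # cubic box edge
    sites = rng.uniform(0, L, size=(400, 3))     # surrogate polymer sites

    def insertion_probability(R, n_trials=4000):
        trials = rng.uniform(0, L, size=(n_trials, 3))
        d = sites[None, :, :] - trials[:, None, :]
        d -= L * np.round(d / L)                 # minimum-image convention
        return float((np.linalg.norm(d, axis=2) > R).all(axis=1).mean())

    for R in (0.1, 0.2, 0.4, 0.8):
        print(f"P(R={R}) ~ {insertion_probability(R):.4f}")
    ```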

  20. Computational Methods for Structural Mechanics and Dynamics

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson (Editor); Housner, Jerrold M. (Editor); Tanner, John A. (Editor); Hayduk, Robert J. (Editor)

    1989-01-01

    Topics addressed include: transient dynamics; transient finite element method; transient analysis in impact and crash dynamic studies; multibody computer codes; dynamic analysis of space structures; multibody mechanics and manipulators; spatial and coplanar linkage systems; flexible body simulation; multibody dynamics; dynamical systems; and nonlinear characteristics of joints.

  1. Next Generation Distributed Computing for Cancer Research

    PubMed Central

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing. PMID:25983539

  2. Next generation distributed computing for cancer research.

    PubMed

    Agarwal, Pankaj; Owzar, Kouros

    2014-01-01

    Advances in next generation sequencing (NGS) and mass spectrometry (MS) technologies have provided many new opportunities and angles for extending the scope of translational cancer research while creating tremendous challenges in data management and analysis. The resulting informatics challenge is invariably not amenable to the use of traditional computing models. Recent advances in scalable computing and associated infrastructure, particularly distributed computing for Big Data, can provide solutions for addressing these challenges. In this review, the next generation of distributed computing technologies that can address these informatics problems is described from the perspective of three key components of a computational platform, namely computing, data storage and management, and networking. A broad overview of scalable computing is provided to set the context for a detailed description of Hadoop, a technology that is being rapidly adopted for large-scale distributed computing. A proof-of-concept Hadoop cluster, set up for performance benchmarking of NGS read alignment, is described as an example of how to work with Hadoop. Finally, Hadoop is compared with a number of other current technologies for distributed computing.

  3. Nonlinear dynamics as an engine of computation.

    PubMed

    Kia, Behnam; Lindner, John F; Ditto, William L

    2017-03-06

    Control of chaos teaches that control theory can tame the complex, random-like behaviour of chaotic systems. This alliance between control methods and physics (cybernetical physics) opens the door to many applications, including dynamics-based computing. In this article, we introduce nonlinear dynamics and its rich, sometimes chaotic behaviour as an engine of computation. We review our work that has demonstrated how to compute using nonlinear dynamics. Furthermore, we investigate the interrelationship between invariant measures of a dynamical system and its computing power to strengthen the bridge between physics and computation. This article is part of the themed issue 'Horizons of cybernetical physics'.
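
    As a concrete taste of the "invariant measure" side of this programme, the sketch below numerically estimates the invariant density of the fully chaotic logistic map and checks it against the known analytic form; this is a textbook illustration, not the authors' computing scheme.

    ```python
    # Estimate the invariant measure of the logistic map x -> 4x(1-x) by
    # histogramming a long orbit; the analytic density is
    # 1 / (pi * sqrt(x * (1 - x))).
    import numpy as np

    x = 0.123456
    samples = []
    for _ in range(200000):
        x = 4.0 * x * (1.0 - x)
        samples.append(x)

    hist, edges = np.histogram(samples, bins=50, range=(0, 1), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    analytic = 1.0 / (np.pi * np.sqrt(centers * (1.0 - centers)))
    print(np.round(hist[:5], 2), np.round(analytic[:5], 2))  # close agreement
    ```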

  4. Nonlinear dynamics as an engine of computation

    NASA Astrophysics Data System (ADS)

    Kia, Behnam; Lindner, John F.; Ditto, William L.

    2017-03-01

    Control of chaos teaches that control theory can tame the complex, random-like behaviour of chaotic systems. This alliance between control methods and physics (cybernetical physics) opens the door to many applications, including dynamics-based computing. In this article, we introduce nonlinear dynamics and its rich, sometimes chaotic behaviour as an engine of computation. We review our work that has demonstrated how to compute using nonlinear dynamics. Furthermore, we investigate the interrelationship between invariant measures of a dynamical system and its computing power to strengthen the bridge between physics and computation. This article is part of the themed issue 'Horizons of cybernetical physics'.

  5. Distributed Generalized Dynamic Barrier Synchronization

    NASA Astrophysics Data System (ADS)

    Agarwal, Shivali; Joshi, Saurabh; Shyamasundar, Rudrapatna K.

    Barrier synchronization is widely used in shared-memory parallel programs to synchronize between phases of data-parallel algorithms. With the proliferation of many-core processors, barrier synchronization has been adapted for higher-level language abstractions in new languages such as X10, wherein the processes participating in barrier synchronization are not known a priori, and the processes in distinct "places" do not share memory. Thus, the challenge is not only to achieve barrier synchronization in a distributed setting without any centralized controller, but also to deal with the dynamic nature of such synchronization, as processes are free to join and drop out at any synchronization phase. In this paper, we describe a solution for generalized distributed barrier synchronization wherein processes can dynamically join or drop out of barrier synchronization; that is, participating processes are not known a priori. Using the policy of permitting a process to join only at the beginning of each phase, we arrive at a solution that ensures (i) Progress: a process executing phase k will enter phase k + 1 unless it wants to drop out of synchronization (assuming the phase execution of the processes terminates), and (ii) Starvation Freedom: a new process that wants to join a phase synchronization group that has already started does so in a finite number of phases. The above protocol is further generalized to multiple groups of processes (possibly non-disjoint) engaged in barrier synchronization.
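
    A minimal shared-memory sketch of such a barrier (the paper's setting is distributed X10 "places" without shared memory, so this only illustrates the phase-boundary join policy; class and method names are invented):

    ```python
    # Sketch: a phase barrier whose membership can change at run time. Per the
    # policy above, a process joining mid-phase is admitted only at the next
    # phase boundary.
    import threading

    class DynamicBarrier:
        def __init__(self):
            self._cond = threading.Condition()
            self._parties = 0      # currently registered processes
            self._pending = 0      # joiners waiting for the next boundary
            self._arrived = 0      # registered processes at the barrier
            self._phase = 0

        def join(self):
            with self._cond:
                if self._arrived == 0:          # phase not in progress: admit now
                    self._parties += 1
                else:                           # defer admission to next boundary
                    self._pending += 1
                    phase = self._phase
                    self._cond.wait_for(lambda: self._phase != phase)

        def drop(self):
            with self._cond:
                self._parties -= 1
                if self._arrived > 0 and self._arrived >= self._parties:
                    self._advance()             # last awaited party just left

        def wait(self):
            with self._cond:
                self._arrived += 1
                phase = self._phase
                if self._arrived >= self._parties:
                    self._advance()
                else:
                    self._cond.wait_for(lambda: self._phase != phase)

        def _advance(self):                     # caller must hold the lock
            self._parties += self._pending
            self._pending = self._arrived = 0
            self._phase += 1
            self._cond.notify_all()
    ```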

  6. Parallel distributed computing using Python

    NASA Astrophysics Data System (ADS)

    Dalcin, Lisandro D.; Paz, Rodrigo R.; Kler, Pablo A.; Cosimo, Alejandro

    2011-09-01

    This work presents two software components aimed to relieve the costs of accessing high-performance parallel computing resources within a Python programming environment: MPI for Python and PETSc for Python. MPI for Python is a general-purpose Python package that provides bindings for the Message Passing Interface (MPI) standard using any back-end MPI implementation. Its facilities allow parallel Python programs to easily exploit multiple processors using the message passing paradigm. PETSc for Python provides access to the Portable, Extensible Toolkit for Scientific Computation (PETSc) libraries. Its facilities allow sequential and parallel Python applications to exploit state of the art algorithms and data structures readily available in PETSc for the solution of large-scale problems in science and engineering. MPI for Python and PETSc for Python are fully integrated with PETSc-FEM, an MPI and PETSc based parallel, multiphysics, finite elements code developed at CIMEC laboratory. This software infrastructure supports research activities related to simulation of fluid flows with applications ranging from the design of microfluidic devices for biochemical analysis to modeling of large-scale stream/aquifer interactions.
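
    A minimal MPI for Python example of the message-passing model described above (script name and process count are arbitrary; run under an MPI launcher such as `mpiexec -n 4 python demo.py`):

    ```python
    # Each process owns a slice of the data; a collective reduction combines
    # the partial sums on all ranks.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    local = np.arange(rank * 100, (rank + 1) * 100, dtype='d')
    total = comm.allreduce(local.sum(), op=MPI.SUM)

    if rank == 0:
        print(f"{size} processes, global sum = {total}")
    ```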

  7. Distributed Computing and Collaboration Framework (DCCF)

    DTIC Science & Technology

    2002-09-01

    The Distributed Computing and Collaboration Framework has been developed by the Space and Naval Warfare Systems Center, San Diego (a Naval research and development facility), under the sponsorship of the Office of Naval

  8. Decentralized Resource Management in Distributed Computer Systems.

    DTIC Science & Technology

    1982-02-01

    Report from the Archons project, which is performing research in the science and engineering of what we term distributed computers. Recoverable table-of-contents fragments: 3.2 Classification of Synchronization Techniques; 3.2.1 Access Synchronization; 3.2.2 Coordinating Synchronization; 3.2.3 Meta-synchronization; 3.4 Access Synchronization Techniques; 3.4.1 Access Synchronization in Shared Memory Computer Systems; 3.4.2 Concepts and Issues in Distributed...

  9. Visualization of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e. real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components, including specific visualization techniques, and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.

  10. Simulation model of load balancing in distributed computing systems

    NASA Astrophysics Data System (ADS)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over networks, and the widespread availability of software for design and pre-production in mechanical engineering have led large industrial enterprises and small engineering companies alike to implement complex computer systems for efficiently solving production and management tasks. Such computer systems are generally built on distributed heterogeneous computer systems. The analytical problems solved by such systems are the key models of research, but the system-wide problems of efficiently distributing (balancing) the computational load and of placing input, intermediate and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes, and the selection of a node to which a user's request is routed in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system whose infrastructure changes dynamically is an important task.
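
    A toy sketch of the monitoring-and-selection loop described above; the node names, costs, and the least-loaded selection rule are placeholders rather than the paper's algorithm.

    ```python
    # Illustrative balancer: track per-node load and route each request to the
    # currently least-loaded node; complete() is the monitoring update.
    import random

    class LoadBalancer:
        def __init__(self, nodes):
            self.load = {n: 0 for n in nodes}

        def submit(self, cost):
            node = min(self.load, key=self.load.get)   # least-loaded node wins
            self.load[node] += cost
            return node

        def complete(self, node, cost):
            self.load[node] -= cost

    lb = LoadBalancer(["node-a", "node-b", "node-c"])
    for _ in range(6):
        print(lb.submit(random.randint(1, 5)), dict(lb.load))
    ```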

  11. Evaluation of distributed computing tools

    SciTech Connect

    Stanberry, L.

    1992-10-28

    The original goal stated in the collaboration agreement from LCC's perspective was "to show that networking tools available in UNICOS perform well enough to meet the requirements of LCC customers." This translated into evaluating how easy it was to port ELROS over CRI's ISO 2.0, which itself is a port of ISODE to the Cray. In addition we tested the interoperability of ELROS and ISO 2.0 programs running on the Cray, and communicating with each other, and with servers or clients running on other machines. To achieve these goals from LCC's side, we ported ELROS to the Cray, and also obtained and installed a copy of the ISO 2.0 distribution from CRI. CRI's goal for the collaboration was to evaluate the usability of ELROS. In particular, we were interested in their potential feedback on the use of ELROS in implementing ISO protocols--whether ELROS would be easier to use and perform better than other tools that form part of the standard ISODE system. To help achieve these goals for CRI, we provided them with a distribution tar file containing the ELROS system, once we had completed our port of ELROS to the Cray.

  13. A Software Rejuvenation Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chau, Savio

    2009-01-01

    A performability-oriented conceptual framework for software rejuvenation has been constructed as a means of increasing levels of reliability and performance in distributed stateful computing. As used here, performability-oriented signifies that the construction of the framework is guided by the concept of analyzing the ability of a given computing system to deliver services with gracefully degradable performance. The framework is especially intended to support applications that involve stateful replicas of server computers.

  14. Computational Fluid Dynamics in the United Kingdom.

    DTIC Science & Technology

    1987-04-01

    Royal Aircraft Establishment, Farnborough, England, Technical Memorandum Aero 2098 (received for printing 3 April 1987): Computational Fluid Dynamics in the United Kingdom, by M. G. Hall and S. P. Fiddes. Summary: A review...

  15. Object-oriented Tools for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1993-01-01

    Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.

  16. Distributed Real-Time Computing with Harness

    SciTech Connect

    Di Saverio, Emanuele; Cesati, Marco; Di Biagio, Christian; Pennella, Guido; Engelmann, Christian

    2007-01-01

    Modern parallel and distributed computing solutions are often built on a "middleware" software layer providing a higher and common level of service between computational nodes. Harness is an adaptable, plug-in-based middleware framework for parallel and distributed computing. This paper reports recent research and development results of using Harness for real-time distributed computing applications in the context of an industrial environment with the need to perform several safety-critical tasks. The presented work exploits the modular architecture of Harness in conjunction with a lightweight threaded implementation to resolve several real-time issues by adding three new Harness plug-ins to provide a prioritized lightweight execution environment, low-latency communication facilities, and local timestamped event logging.

  17. Research on Computational Fluid Dynamics and Turbulence

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Preconditioning matrices for Chebyshev derivative operators in several space dimensions; the Jacobi matrix technique in computational fluid dynamics; and Chebyshev techniques for periodic problems are discussed.

  18. The impact of distributed computing on education

    NASA Technical Reports Server (NTRS)

    Utku, S.; Lestingi, J.; Salama, M.

    1982-01-01

    In this paper, developments in digital computer technology since the early Fifties are reviewed briefly, and the parallelism which exists between these developments and developments in analysis and design procedures of structural engineering is identified. The recent trends in digital computer technology are examined in order to establish the fact that distributed processing is now an accepted philosophy for further developments. The impact of this on the analysis and design practices of structural engineering is assessed by first examining these practices from a data processing standpoint to identify the key operations and data bases, and then fitting them to the characteristics of distributed processing. The merits and drawbacks of the present philosophy in educating structural engineers are discussed and projections are made for the industry-academia relations in the distributed processing environment of structural analysis and design. An ongoing experiment of distributed computing in a university environment is described.

  20. Improved Pyrolysis Micro reactor Design via Computational Fluid Dynamics Simulations

    DTIC Science & Technology

    2017-05-23

    Briefing charts, 25 April 2017 - 23 May 2017: Improved Pyrolysis Micro-Reactor Design via Computational Fluid Dynamics Simulations, by Ghanshyam L. Vaghjiani. DISTRIBUTION A: Approved for public release, distribution unlimited (PA Clearance 17247).

  1. Model dynamics for quantum computing

    NASA Astrophysics Data System (ADS)

    Tabakin, Frank

    2017-08-01

    A model master equation suitable for quantum computing dynamics is presented. In an ideal quantum computer (QC), a system of qubits evolves in time unitarily and, by virtue of their entanglement, interfere quantum mechanically to solve otherwise intractable problems. In the real situation, a QC is subject to decoherence and attenuation effects due to interaction with an environment and with possible short-term random disturbances and gate deficiencies. The stability of a QC under such attacks is a key issue for the development of realistic devices. We assume that the influence of the environment can be incorporated by a master equation that includes unitary evolution with gates, supplemented by a Lindblad term. Lindblad operators of various types are explored; namely, steady, pulsed, gate friction, and measurement operators. In the master equation, we use the Lindblad term to describe short-time intrusions by random Lindblad pulses. The phenomenological master equation is then extended to include a nonlinear Beretta term that describes the evolution of a closed system with increasing entropy. An external bath environment is stipulated by a fixed temperature in two different ways. Here we explore the case of a simple one-qubit system in preparation for generalization to multi-qubit, qutrit and hybrid qubit-qutrit systems. This model master equation can be used to test the stability of memory and the efficacy of quantum gates. The properties of such hybrid master equations are explored, with emphasis on the role of thermal equilibrium and entropy constraints. Several significant properties of time-dependent qubit evolution are revealed by this simple study.
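
    A minimal numerical sketch of a master equation of this Lindblad form for one qubit with a single steady (amplitude-damping) operator; the Hamiltonian, rate, and integrator are illustrative, and the pulsed, gate-friction, measurement, and Beretta terms are omitted.

    ```python
    # Euler integration of d(rho)/dt = -i[H, rho] + L rho L^dag
    #                                  - (1/2){L^dag L, rho}.
    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_minus: |1> -> |0>
    H = 0.5 * sx                                     # toy Hamiltonian, hbar = 1
    gamma = 0.1
    L = np.sqrt(gamma) * sm

    rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in excited state
    dt = 0.01
    for _ in range(2000):
        comm = H @ rho - rho @ H
        diss = (L @ rho @ L.conj().T
                - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
        rho = rho + dt * (-1j * comm + diss)
    print("excited-state population after t=20:", rho[1, 1].real)
    ```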

  2. Predictive Dynamic Security Assessment through Advanced Computing

    SciTech Connect

    Huang, Zhenyu; Diao, Ruisheng; Jin, Shuangshuang; Chen, Yousu

    2014-11-30

    Traditional dynamic security assessment is limited by several factors and thus falls short in providing real-time information to be predictive for power system operation. These factors include the steady-state assumption of current operating points, static transfer limits, and low computational speed. This paper addresses these factors and frames predictive dynamic security assessment. The primary objective of predictive dynamic security assessment is to enhance the functionality and computational process of dynamic security assessment through the use of high-speed phasor measurements and the application of advanced computing technologies for faster-than-real-time simulation. This paper presents algorithms, computing platforms, and simulation frameworks that constitute the predictive dynamic security assessment capability. Examples of phasor application and fast computation for dynamic security assessment are included to demonstrate the feasibility and speed enhancement for real-time applications.

  3. Computer Simulation of Strong Ground Motion near a Fault Using Dynamic Fault Rupture Modeling: Spatial Distribution of the Peak Ground Velocity Vectors

    NASA Astrophysics Data System (ADS)

    Miyatake, T.

    Computer simulation was used to study the nature of the strong ground motion near a strike-slip fault. The faulting process was modeled by stress release with fixed rupture velocity in a uniform elastic half-space or layered half-space. The fourth-order 3-D finite-difference method with staggered grids was employed to compute both ground motions and slip histories on the fault. The fault rupture was assumed to start from a point and propagate circularly at 0.8 times the shear-wave velocity. In the present paper, we focused on the spatial pattern of ground velocity vectors, i.e., the direction of strong motions. In the case of bilateral rupture propagation, strong fault-parallel ground motion appeared near the center of the fault. Fault-normal ground velocity appeared near the edges of the fault. In the case of unilateral rupture, fault-parallel motion appeared near the starting point; however, the amplitude was lower than in the bilateral rupture case. Fault-normal motion was predominant near the terminal point of the rupture. The results were applied to earthquake damage data, especially the directions in which simple bodies overturned and wooden houses collapsed, from the 1927 Tango, the 1930 Kita-Izu, and the 1948 Fukui earthquakes. The spatial distributions of the direction data were found to reflect the strong ground motions generated by the earthquake source process.

  4. Pattern recognition and massively distributed computing.

    PubMed

    Davies, E Keith; Glick, Meir; Harrison, Karl N; Richards, W Graham

    2002-12-01

    A feature of Peter Kollman's research was his exploitation of the latest computational techniques to devise novel applications of the free energy perturbation method. He would certainly have seized upon the opportunities offered by massively distributed computing. Here we describe the use of over a million personal computers to perform virtual screening of 3.5 billion druglike molecules against protein targets by pharmacophore pattern matching, together with other applications of pattern recognition such as docking ligands without any a priori knowledge about the binding site location.

  5. Molecular dynamics on hypercube parallel computers

    NASA Astrophysics Data System (ADS)

    Smith, W.

    1991-03-01

    The implementation of molecular dynamics on parallel computers is described, with particular reference to hypercube computers. Three particular algorithms are described: replicated data (RD); systolic loop (SLS-G), and parallelised link-cells (PLC), all of which have good load balancing. The performance characteristics of each algorithm and the factors affecting their scaling properties are discussed. The article is pedagogic in intent, to introduce a novice to the main aspects of parallel computing in molecular dynamics.
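
    A sketch of the replicated-data (RD) strategy named above, assuming mpi4py and a toy pairwise force law: every process holds all coordinates, computes its share of the pair forces, and a global sum leaves the complete force array on every node.

    ```python
    # Replicated data: identical coordinates everywhere (same RNG seed), a
    # round-robin share of pairs per rank, then an Allreduce of the forces.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 64
    rng = np.random.default_rng(42)            # same seed: replicated coords
    x = rng.uniform(0, 10, size=(n, 3))

    f_local = np.zeros_like(x)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for k in range(rank, len(pairs), size):    # this rank's share of pairs
        i, j = pairs[k]
        r = x[i] - x[j]
        fij = r / np.dot(r, r) ** 2            # toy soft repulsion, not LJ
        f_local[i] += fij
        f_local[j] -= fij

    f = np.zeros_like(f_local)
    comm.Allreduce(f_local, f, op=MPI.SUM)     # every node now has all forces
    if rank == 0:
        print("net force (should be ~0):", f.sum(axis=0))
    ```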

  6. Dance Dynamics: Computers and Dance.

    ERIC Educational Resources Information Center

    Gray, Judith A., Ed.; And Others

    1983-01-01

    Five articles discuss the use of computers in dance and dance education. They describe: (1) a computerized behavioral profile of a dance teacher; (2) computer-based dance notation; (3) elementary school computer-assisted dance instruction; (4) quantified analysis of dance criticism; and (5) computerized simulation of human body movements in a…

  8. Great Expectations: Distributed Financial Computing at Cornell.

    ERIC Educational Resources Information Center

    Schulden, Louise; Sidle, Clint

    1988-01-01

    The Cornell University Distributed Accounting (CUDA) system is an attempt to provide departments a software tool for better managing their finances, creating microcomputer standards, creating a vehicle for better administrative microcomputer support, and insuring local systems are consistent with central computer systems. (Author/MLW)

  10. Distributed Computing: Options in the Eighties.

    ERIC Educational Resources Information Center

    Klingenstein, Kenneth; Devine, Gary D.

    1985-01-01

    University administrative data processing is moving toward a more distributed environment. An architecture must be established that incorporates central sites, campus centers, and end users in a networked pool of computer systems, with applications located at appropriate nodes in the network. (Author/MLW)

  11. Data Integration in Computer Distributed Systems

    NASA Astrophysics Data System (ADS)

    Kwiecień, Błażej

    In this article the author analyzes the problem of data integration in distributed computer systems. The exchange of information between different levels in the integrated pyramid of enterprise processes is fundamental to efficient enterprise operation. Communication and data exchange between levels are not always uniform, owing to the need for different network protocols, communication media, system response times, etc.

  12. Computer Systems for Distributed and Distance Learning.

    ERIC Educational Resources Information Center

    Anderson, M.; Jackson, David

    2000-01-01

    Discussion of network-based learning focuses on a survey of computer systems for distributed and distance learning. Both Web-based systems and non-Web-based systems are reviewed in order to highlight some of the major trends of past projects and to suggest ways in which progress may be made in the future. (Contains 92 references.) (Author/LRW)

  13. Distributed computing system with dual independent communications paths between computers and employing split tokens

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic load balancing. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers while bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communications between respective computers are by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is held in the memory of at least one of the computers, the location of the second portion being part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of those functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.

  14. Computing spatial information from Fourier coefficient distributions.

    PubMed

    Heinz, William F; Werbin, Jeffrey L; Lattman, Eaton; Hoh, Jan H

    2011-05-01

    The spatial relationships between molecules can be quantified in terms of information. In the case of membranes, the spatial organization of molecules in a bilayer is closely related to biophysically and biologically important properties. Here, we present an approach to computing spatial information based on Fourier coefficient distributions. The Fourier transform (FT) of an image contains a complete description of the image, and the values of the FT coefficients are uniquely associated with that image. For an image where the distribution of pixels is uncorrelated, the FT coefficients are normally distributed and uncorrelated. Further, the probability distribution for the FT coefficients of such an image can readily be obtained by Parseval's theorem. We take advantage of these properties to compute the spatial information in an image by determining the probability of each coefficient (both real and imaginary parts) in the FT, then using the Shannon formalism to calculate information. By using the probability distribution obtained from Parseval's theorem, an effective distance from the uncorrelated or most uncertain case is obtained. The resulting quantity is an information measure computed in k-space (kSI). This approach provides a robust, facile and highly flexible framework for quantifying spatial information in images and other types of data (of arbitrary dimensions). The kSI metric is tested on a 2D Ising model, frequently used as a model for lipid bilayers, and the temperature-dependent phase transition is accurately determined from the spatial information in configurations of the system.
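
    An illustrative reading of this procedure (not the paper's exact formula): standardize the FT coefficients by the null-hypothesis variance obtained from Parseval's theorem, then measure, in bits, how far their empirical distribution sits from the uncorrelated Gaussian case.

    ```python
    # Distance of the observed FT-coefficient distribution from the
    # uncorrelated (Gaussian) null case, as a KL divergence in bits.
    import numpy as np

    def ksi_distance(image, bins=41):
        x = image - image.mean()
        F = np.fft.fft2(x)
        coeffs = np.concatenate([F.real.ravel(), F.imag.ravel()])
        sigma = np.sqrt(x.size * x.var() / 2.0)   # null std via Parseval
        z = coeffs / sigma
        hist, edges = np.histogram(z, bins=bins, range=(-5, 5), density=True)
        width = edges[1] - edges[0]
        centers = 0.5 * (edges[:-1] + edges[1:])
        null = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)
        mask = hist > 0
        return np.sum(hist[mask] * np.log2(hist[mask] / null[mask])) * width

    rng = np.random.default_rng(1)
    random_img = rng.integers(0, 2, (64, 64)).astype(float)
    striped_img = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)
    # the ordered image deviates far more from the uncorrelated case
    print(ksi_distance(random_img), ksi_distance(striped_img))
    ```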

  15. Research computing in a distributed cloud environment

    NASA Astrophysics Data System (ADS)

    Fransham, K.; Agarwal, A.; Armstrong, P.; Bishop, A.; Charbonneau, A.; Desmarais, R.; Hill, N.; Gable, I.; Gaudet, S.; Goliath, S.; Impey, R.; Leavett-Brown, C.; Ouellete, J.; Paterson, M.; Pritchet, C.; Penfold-Brown, D.; Podaima, W.; Schade, D.; Sobie, R. J.

    2010-11-01

    The recent increase in availability of Infrastructure-as-a-Service (IaaS) computing clouds provides a new way for researchers to run complex scientific applications. However, using cloud resources for a large number of research jobs requires significant effort and expertise. Furthermore, running jobs on many different clouds presents even more difficulty. In order to make it easy for researchers to deploy scientific applications across many cloud resources, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. In response to a user's job submission to a batch system, the Cloud Scheduler manages the distribution and deployment of user-customized virtual machines across multiple clouds. We describe the motivation for and implementation of a distributed cloud using the Cloud Scheduler that is spread across both commercial and dedicated private sites, and present some early results of scientific data analysis using the system.

  16. Distributed Computing Framework for Synthetic Radar Application

    NASA Technical Reports Server (NTRS)

    Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael

    2006-01-01

    We are developing an extensible software framework, in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech), and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.

  17. GEANT4 distributed computing for compact clusters

    NASA Astrophysics Data System (ADS)

    Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.

    2014-11-01

    A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User-designed 'work tickets' are distributed to clients using a client-server work flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 on large discrete data sets, such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions, or simply speeding the throughput of a single model.

  18. Can distributed delays perfectly stabilize dynamical networks?

    NASA Astrophysics Data System (ADS)

    Omi, Takahiro; Shinomoto, Shigeru

    2008-04-01

    Signal transmission delays tend to destabilize dynamical networks, leading to oscillation, but their dispersion contributes oppositely, toward stabilization. We analyze an integrodifferential equation that describes the collective dynamics of a neural network with distributed signal delays. With Γ-distributed delays less dispersed than the exponential distribution, the system exhibits reentrant phenomena, in which stability is lost and then recovered as the mean delay is increased. With delays dispersed more highly than the exponential, the system never destabilizes.
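
    A runnable sketch of such a system, assuming a scalar rate model with inhibitory feedback through a Gamma-distributed delay of integer order p; the "linear chain trick" converts the integrodifferential equation into ordinary ODEs. All parameters are illustrative.

    ```python
    # dx/dt = -x + g * (Gamma-delayed x); a chain of p relaxation stages with
    # rate a = p / tau realizes a Gamma(p, a) delay kernel of mean tau.
    import numpy as np
    from scipy.integrate import solve_ivp

    p = 3           # Gamma order: larger p means less dispersed delays
    tau = 2.0       # mean delay
    g = -2.0        # inhibitory feedback gain

    def rhs(t, u):
        x, y = u[0], u[1:]
        dx = -x + g * y[-1]                 # y[-1] is the delayed signal
        a = p / tau
        dy = a * (np.concatenate(([x], y[:-1])) - y)
        return np.concatenate(([dx], dy))

    sol = solve_ivp(rhs, (0, 200), np.concatenate(([1.0], np.zeros(p))),
                    max_step=0.05)
    tail = sol.y[0, sol.t > 150]
    print("late-time amplitude:", tail.max() - tail.min())  # grows if unstable
    ```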

  19. Comparing the Performance of Two Dynamic Load Distribution Methods

    NASA Technical Reports Server (NTRS)

    Kale, L. V.

    1987-01-01

    Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: to effectively utilize the available processors, the load must be distributed uniformly to all the processors. However, the structure of these computations cannot be predicted in advance. So, static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods with extensive simulation studies. The two schemes are: the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that although simpler, the CWN is significantly more effective at distributing the work than the Gradient Model.
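
    A toy sketch of the CWN idea under stated assumptions (a ring topology, queue length as the load measure, and a small hop limit; the actual scheme's rules and thresholds differ):

    ```python
    # A node keeps new work only if it is the least loaded in its
    # neighborhood; otherwise it contracts the work to a less-loaded neighbor.
    class Node:
        def __init__(self, name):
            self.name, self.queue, self.neighbors = name, [], []

        def submit(self, task, hops_left=3):
            target = min(self.neighbors + [self], key=lambda n: len(n.queue))
            if target is self or hops_left == 0:
                self.queue.append(task)            # keep the work locally
            else:
                target.submit(task, hops_left - 1) # contract to a neighbor

    # ring of 8 nodes, all work injected at node 0
    nodes = [Node(i) for i in range(8)]
    for i, n in enumerate(nodes):
        n.neighbors = [nodes[(i - 1) % 8], nodes[(i + 1) % 8]]
    for t in range(40):
        nodes[0].submit(t)
    print([len(n.queue) for n in nodes])   # work spreads around the ring
    ```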

  20. Accelerating Computation of DNA Sequence Alignment in Distributed Environment

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Li, Guiyang; Deaton, Russel

    Sequence similarity and alignment are among the most important operations in computational biology. However, analyzing large sets of DNA sequences is impractical on a regular PC. Using multiple threads with the JavaParty mechanism, this project successfully extended the capabilities of regular Java to a distributed environment for the simulation of DNA computation. With the aid of JavaParty and the design of multiple threads, the results of this study demonstrated that the modified regular Java program could perform parallel computing without using RMI or socket communication. In this paper, an efficient method for modeling and comparing DNA sequences with dynamic programming and JavaParty is first proposed. Additionally, results of this method in a distributed environment are discussed.
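
    For concreteness, the serial dynamic-programming kernel that such a system distributes: a minimal global-alignment (Needleman-Wunsch) score with a linear gap penalty. Scoring values are arbitrary; the paper's contribution is running many such comparisons in parallel with JavaParty, not this kernel itself.

    ```python
    # Needleman-Wunsch global alignment score with O(min(m, n)) memory.
    def nw_score(a, b, match=1, mismatch=-1, gap=-2):
        cols = len(b) + 1
        prev = [j * gap for j in range(cols)]        # DP row for empty prefix
        for i in range(1, len(a) + 1):
            cur = [i * gap] + [0] * len(b)
            for j in range(1, cols):
                diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                cur[j] = max(diag, prev[j] + gap, cur[j - 1] + gap)
            prev = cur
        return prev[-1]

    print(nw_score("GATTACA", "GCATGCA"))
    ```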

  1. A directory service for configuring high-performance distributed computations

    SciTech Connect

    Fitzgerald, S.; Kesselman, C.; Foster, I.

    1997-08-01

    High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.
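
    Since the data representation and API are adopted from LDAP, a lookup of resource state in such a directory would resemble the following hedged sketch (using the Python ldap3 package; the endpoint, search base, object class, and attribute names are invented for illustration):

    ```python
    # Query an LDAP-style directory for compute resources and their state.
    from ldap3 import Server, Connection, ALL

    server = Server("ldap://mds.example.org", get_info=ALL)  # hypothetical
    conn = Connection(server, auto_bind=True)

    conn.search(search_base="o=grid",
                search_filter="(objectClass=GlobusComputeResource)",
                attributes=["hostName", "cpuCount", "freeMemory"])
    for entry in conn.entries:
        print(entry.hostName, entry.cpuCount, entry.freeMemory)
    ```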

  2. Fluid dynamics computer programs for NERVA turbopump

    NASA Technical Reports Server (NTRS)

    Brunner, J. J.

    1972-01-01

    During the design of the NERVA turbopump, numerous computer programs were developed for the analyses of fluid dynamic problems within the machine. Program descriptions, example cases, users instructions, and listings for the majority of these programs are presented.

  3. A Computational Fluid Dynamics Algorithm on a Massively Parallel Computer

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. We investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. We find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as today's most powerful conventional supercomputers.

  4. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  5. BES-III distributed computing status

    NASA Astrophysics Data System (ADS)

    Belov, S. D.; Deng, Z. Y.; Korenkov, V. V.; Li, W. D.; Lin, T.; Ma, Z. T.; Nicholson, C.; Pelevanyuk, I. S.; Suo, B.; Trofimov, V. V.; Tsaregorodtsev, A. U.; Uzhinskiy, A. V.; Yan, T.; Yan, X. F.; Zhang, X. M.; Zhemchugov, A. S.

    2016-09-01

    The BES-III experiment at the Institute of High Energy Physics (Beijing, China) is aimed at precision measurements of e+e- annihilation in the energy range from 2.0 to 4.6 GeV. The world's largest samples of J/psi and psi' events and unique samples of XYZ data have already been collected. The expected increase of the data volume in the coming years required a significant evolution of the computing model, namely a shift from centralized data processing to a distributed one. This report summarizes the current design of the BES-III distributed computing system, some of the key decisions, and the experience gained during two years of operations.

  6. Subtlenoise: sonification of distributed computing operations

    NASA Astrophysics Data System (ADS)

    Love, P. A.

    2015-12-01

    The operation of distributed computing systems requires comprehensive monitoring to ensure reliability and robustness. There are two components found in most monitoring systems: one being visually rich time-series graphs and another being notification systems for alerting operators under certain pre-defined conditions. In this paper the sonification of monitoring messages is explored using an architecture that fits easily within existing infrastructures based on mature open-source technologies such as ZeroMQ, Logstash, and SuperCollider (a synth engine). Message attributes are mapped onto audio attributes based on broad classification of the message (continuous or discrete metrics) but keeping the audio stream subtle in nature. The benefits of audio rendering are described in the context of distributed computing operations and may provide a less intrusive way to understand the operational health of these systems.
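
    A hedged sketch of the receiving end of such a pipeline: subscribe to monitoring messages over ZeroMQ and map a message attribute onto an audio attribute (here, a metric value onto pitch). The endpoint and message schema are invented; a real setup would forward the note events to a synth engine such as SuperCollider.

    ```python
    # Subscribe to JSON monitoring messages and map values to MIDI pitches.
    import json
    import zmq

    ctx = zmq.Context()
    sock = ctx.socket(zmq.SUB)
    sock.connect("tcp://monitor.example.org:5556")    # hypothetical publisher
    sock.setsockopt_string(zmq.SUBSCRIBE, "")         # all topics

    def to_midi_pitch(value, lo=0.0, hi=100.0):
        """Map a continuous metric onto a subtle two-octave pitch range."""
        frac = min(max((value - lo) / (hi - lo), 0.0), 1.0)
        return int(48 + 24 * frac)

    while True:
        msg = json.loads(sock.recv_string())  # e.g. {"metric": ..., "value": ...}
        print("note-on", to_midi_pitch(msg["value"]), "for", msg["metric"])
    ```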

  7. Beyond input-output computings: error-driven emergence with parallel non-distributed slime mold computer.

    PubMed

    Aono, Masashi; Gunji, Yukio-Pegio

    2003-10-01

    The emergence derived from errors is of key importance for both novel computing and novel usage of the computer. In this paper, we propose an implementable experimental plan for biological computing that elicits the emergent properties of complex systems. An individual plasmodium of the true slime mold Physarum polycephalum acts as the slime mold computer. Modifying the Elementary Cellular Automaton so that it entails the global synchronization problem of parallel computing provides the NP-complete problem to be solved by the slime mold computer. The possibility of solving the problem by giving neither all possible results nor an explicit prescription for solution-seeking is discussed. In slime mold computing, the distributivity of the local computing logic can change dynamically, and its parallel non-distributed computing cannot be reduced to the spatial addition of multiple serial computings. The computing system, based on the exhaustive absence of a super-system, may produce something more than filling the vacancy.
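
    For reference, the unmodified object the plan starts from, an Elementary Cellular Automaton update rule (the rule number, lattice size, and seeding below are arbitrary):

    ```python
    # Generic ECA step: the rule's bit at index 4*left + 2*center + right
    # gives each cell's next state; boundaries are periodic.
    import numpy as np

    def eca_step(cells, rule=110):
        table = [(rule >> k) & 1 for k in range(8)]
        left, right = np.roll(cells, 1), np.roll(cells, -1)
        idx = 4 * left + 2 * cells + right
        return np.array([table[i] for i in idx], dtype=int)

    cells = np.zeros(31, dtype=int)
    cells[15] = 1
    for _ in range(8):
        print("".join(".#"[c] for c in cells))
        cells = eca_step(cells)
    ```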

  8. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2004-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.
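
    For context, a sketch of why these derivatives come in combinations and how two motions can separate them (standard practice; the abstract does not say whether this coincides with any of the paper's three techniques). A forced pitch oscillation varies pitch rate and angle-of-attack rate together, so the moment response yields

    $$C_{m_q} + C_{m_{\dot\alpha}},$$

    whereas a plunging motion changes $\alpha$ at zero pitch rate $q$ and yields $C_{m_{\dot\alpha}}$ alone, so that

    $$C_{m_q} = \left(C_{m_q} + C_{m_{\dot\alpha}}\right)_{\text{pitch}} - \left(C_{m_{\dot\alpha}}\right)_{\text{plunge}}.$$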

  9. Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.; Murphy, Patrick C.

    2003-01-01

    Force and moment measurements from an F-16XL during forced pitch oscillation tests result in dynamic stability derivatives, which are measured in combinations. Initial computational simulations of the motions and combined derivatives are attempted via a low-order, time-dependent panel method computational fluid dynamics code. The code dynamics are shown to be highly questionable for this application and the chosen configuration. However, three methods to computationally separate such combined dynamic stability derivatives are proposed. One of the separation techniques is demonstrated on the measured forced pitch oscillation data. Extensions of the separation techniques to yawing and rolling motions are discussed. In addition, the possibility of considering the angles of attack and sideslip state vector elements as distributed quantities, rather than point quantities, is introduced.

  10. Radar data processing using a distributed computational system

    NASA Astrophysics Data System (ADS)

    Mota, Gilberto F.

    1992-06-01

    This research specifies and validates a new concurrent decomposition scheme, called Confined Space Search Decomposition (CSSD), to exploit parallelism of Radar Data Processing algorithms using a Distributed Computational System. To formalize the specification, we propose and apply an object-oriented methodology called Decomposition Cost Evaluation Model (DCEM). To reduce the penalties of load imbalance, we propose a distributed dynamic load balance heuristic called Object Reincarnation (OR). To validate the research, we first compare our decomposition with an identified alternative using the proposed DCEM model and then develop a theoretical prediction of selected parameters. We also develop a simulation to check the Object Reincarnation Concept.

  11. Research in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Murman, Earll M.

    1987-01-01

    The numerical integration of quasi-one-dimensional unsteady flow problems which involve finite-rate chemistry is discussed, expressed in terms of conservative-form Euler and species conservation equations. Hypersonic viscous calculations for delta wing geometries are also examined. The conical Navier-Stokes equations model was selected in order to investigate the effects of viscous-inviscid interactions. The more complete three-dimensional model is beyond the available computing resources. The flux vector splitting method with van Leer's MUSCL differencing is being used. Preliminary results were computed for several conditions.

  12. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  13. Computational strategies for three-dimensional flow simulations on distributed computer systems

    NASA Astrophysics Data System (ADS)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-08-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  14. Fast Parallel Computation Of Manipulator Inverse Dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    Method for fast parallel computation of inverse dynamics problem, essential for real-time dynamic control and simulation of robot manipulators, undergoing development. Enables exploitation of high degree of parallelism and, achievement of significant computational efficiency, while minimizing various communication and synchronization overheads as well as complexity of required computer architecture. Universal real-time robotic controller and simulator (URRCS) consists of internal host processor and several SIMD processors with ring topology. Architecture modular and expandable: more SIMD processors added to match size of problem. Operate asynchronously and in MIMD fashion.

  15. The future of PanDA in ATLAS distributed computing

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.

  16. Distributed Computing for the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Chudoba, J.

    2015-12-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years, VO auger has ranked among the top ten EGI users by total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with the small and less reliable sites used for bulk production. The new system can also use available cloud resources. The Dirac File Catalog replaced LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we compare the old and the new production systems and report our experience migrating to the new system.

  17. Dynamics and computation in functional shifts

    NASA Astrophysics Data System (ADS)

    Namikawa, Jun; Hashimoto, Takashi

    2004-07-01

    We introduce a new type of shift dynamics as an extended model of symbolic dynamics, and investigate the characteristics of shift spaces from the viewpoints of both dynamics and computation. This shift dynamics is called a functional shift, which is defined by a set of bi-infinite sequences of some functions on a set of symbols. To analyse the complexity of functional shifts, we measure them in terms of topological entropy, and locate their languages in the Chomsky hierarchy. Through this study, we argue that considering functional shifts from the viewpoints of both dynamics and computation gives us opposite results about the complexity of systems. We also describe a new class of shift spaces whose languages are not recursively enumerable.
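
    For an ordinary subshift, the topological entropy used here as a complexity measure can be estimated by counting admissible words, h = lim (1/n) log N(n), where N(n) is the number of length-n words in the language. A minimal sketch for the golden-mean shift (binary sequences with the word '11' forbidden), shown for illustration rather than for functional shifts themselves:

        # Estimate topological entropy h = lim (1/n) log2 N(n) by counting
        # admissible words of the golden-mean shift (forbidden word: "11").
        from itertools import product
        from math import log2

        for n in (4, 8, 12, 16):
            count = sum("11" not in "".join(w)
                        for w in product("01", repeat=n))
            print(n, count, log2(count) / n)  # -> log2 of the golden ratio,
                                              #    about 0.6942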

  18. Computational plasticity algorithm for particle dynamics simulations

    NASA Astrophysics Data System (ADS)

    Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.

    2017-03-01

    The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.
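
    The explicit/implicit distinction drawn above can be seen in a one-dimensional toy problem: a particle falling onto a stiff penalty contact at x = 0. The sketch below is not the paper's algorithm; it freezes the contact state at the start of each step, and with the stated stiffness the explicit scheme gains energy at each contact passage while backward Euler stays stable:

        # Explicit vs implicit (backward Euler) stepping for a particle
        # falling onto a stiff penalty contact at x = 0.
        G, K, M, DT, STEPS = 9.81, 1.0e6, 1.0, 1.0e-3, 2000

        def step_explicit(x, v):
            f = -G + (-K * x / M if x < 0.0 else 0.0)
            return x + DT * v, v + DT * f

        def step_implicit(x, v):
            if x < 0.0:   # contact: solve the linear backward-Euler system
                v = (v - DT * (G + K * x / M)) / (1.0 + DT * DT * K / M)
            else:
                v = v - DT * G
            return x + DT * v, v

        for name, step in (("explicit", step_explicit),
                           ("implicit", step_implicit)):
            x, v = 1.0, 0.0
            for _ in range(STEPS):
                x, v = step(x, v)
            print(name, x, v)   # explicit bounces grow; implicit settles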

  19. Computational fluid dynamics - The coming revolution

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1982-01-01

    The development of aerodynamic theory is traced from the days of Aristotle to the present, with the next stage in computational fluid dynamics dependent on superspeed computers for flow calculations. Additional attention is given to the history of numerical methods inherent in writing computer codes applicable to viscous and inviscid analyses for complex configurations. The advent of the superconducting Josephson junction is noted to place configurational demands on computer design to avoid limitations imposed by the speed of light, and a Japanese projection of a computer capable of several hundred billion operations/sec is mentioned. The NASA Numerical Aerodynamic Simulator is described, showing capabilities of a billion operations/sec with a memory of 240 million words using existing technology. Near-term advances in fluid dynamics are discussed.

  20. Experimental and computational studies of dynamic stall

    NASA Technical Reports Server (NTRS)

    Carr, L. W.; Platzer, M. F.; Chandrasekhara, M. S.; Ekaterinaris, J.

    1989-01-01

    A review of dynamic stall research in progress under the Navy-NASA Joint Institute of Aeronautics is presented. This effort, which includes both experimental and computational studies of the dynamic stall process, is directed toward better understanding and modeling of the fluid flow that occurs on helicopters and aircraft flying in conditions that induce dynamic stall. The results of research now in progress are presented, with discussion of the experimental program on compressibility effects on dynamic stall, related CFD studies of the stall process based on Navier-Stokes modeling, and viscous-inviscid flow modeling of the incipient stall process.

  1. Three-Dimensional Computational Fluid Dynamics

    SciTech Connect

    Haworth, D.C.; O'Rourke, P.J.; Ranganathan, R.

    1998-09-01

    Computational fluid dynamics (CFD) is one discipline falling under the broad heading of computer-aided engineering (CAE). CAE, together with computer-aided design (CAD) and computer-aided manufacturing (CAM), comprise a mathematical-based approach to engineering product and process design, analysis and fabrication. In this overview of CFD for the design engineer, our purposes are three-fold: (1) to define the scope of CFD and motivate its utility for engineering, (2) to provide a basic technical foundation for CFD, and (3) to convey how CFD is incorporated into engineering product and process design.

  2. Dynamic MTW: a dynamic bandwidth distribution scheme in EPON

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Ge, Liangwei; Zeng, Lieguang

    2002-08-01

    An algorithm to improve the bandwidth utilization for EPON by using dynamic bandwidth distribution is put forward. System performance, such as queuing delay under self-similar traffic, is simulated by using OPNET.
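
    A dynamic bandwidth distribution of this general kind can be sketched as a grant-sizing loop in which the OLT caps each ONU's grant by a maximum transmission window (MTW) and recycles unused cycle capacity, which is the sense in which the window becomes dynamic. The following is a generic illustration with hypothetical report values, not the paper's algorithm:

        # Toy dynamic bandwidth allocation for one EPON polling cycle: grant
        # min(report, MTW) per ONU, then redistribute leftover capacity to
        # still-backlogged ONUs (so an ONU may exceed the static MTW).
        def grant_cycle(reports, cycle_capacity, mtw):
            grants = {o: min(b, mtw) for o, b in reports.items()}
            spare = cycle_capacity - sum(grants.values())
            while spare > 0:
                needy = [o for o in reports if reports[o] > grants[o]]
                if not needy:
                    break
                share = max(spare // len(needy), 1)
                for o in needy:
                    extra = min(share, reports[o] - grants[o], spare)
                    grants[o] += extra
                    spare -= extra
                    if spare == 0:
                        break
            return grants

        # Hypothetical per-ONU backlog reports, in bytes.
        print(grant_cycle({"onu1": 5000, "onu2": 20000, "onu3": 500},
                          cycle_capacity=16000, mtw=8000))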

  3. Pseudo-interactive monitoring in distributed computing

    SciTech Connect

    Sfiligoi, I.; Bradley, D.; Livny, M.; /Wisconsin U., Madison

    2009-05-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  4. Pseudo-interactive monitoring in distributed computing

    NASA Astrophysics Data System (ADS)

    Sfiligoi, I.; Bradley, D.; Livny, M.

    2010-04-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.
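
    The mechanism can be caricatured as a small agent that a job wrapper runs beside the payload: it polls for a command file, executes only whitelisted read-only commands in the job's sandbox, and writes the output back where the user can read it. The following is a file-based sketch assuming a shared filesystem; the actual Condor-based implementation described in the paper works through the batch system itself:

        # Pseudo-interactive monitoring agent (sketch): poll for a command,
        # run it if whitelisted, return the output to the user.
        import pathlib, shlex, subprocess, time

        WHITELIST = {"ls", "cat", "ps", "top", "lsof", "netstat"}
        INBOX = pathlib.Path("cmd.txt")    # user drops e.g. "ps -ef" here
        OUTBOX = pathlib.Path("cmd.out")

        def poll_once():
            if not INBOX.exists():
                return
            argv = shlex.split(INBOX.read_text())
            INBOX.unlink()
            if not argv or argv[0] not in WHITELIST:
                OUTBOX.write_text("rejected: %r\n" % (argv,))
                return
            try:
                r = subprocess.run(argv, capture_output=True,
                                   text=True, timeout=30)
                OUTBOX.write_text(r.stdout + r.stderr)
            except subprocess.TimeoutExpired:
                OUTBOX.write_text("timed out\n")

        while True:        # vi-style sessions are out; ls/cat/ps work fine
            poll_once()
            time.sleep(5)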

  5. Progress in the dynamical parton distributions

    SciTech Connect

    Jimenez-Delgado, Pedro

    2012-06-01

    The present status of the (JR) dynamical parton distribution functions is reported. Different theoretical improvements, including the determination of the strange sea input distribution, the treatment of correlated errors and the inclusion of alternative data sets, are discussed. Highlights in the ongoing developments as well as (very) preliminary results in the determination of the strong coupling constant are presented.

  6. Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The Virtual National Airspace Simulation (VNAS) will improve the safety of Air Transportation. In 2001, using simulation and information management software running over a distributed network of super-computers, researchers at NASA Ames, Glenn, and Langley Research Centers developed a working prototype of a virtual airspace. This VNAS prototype modeled daily operations of the Atlanta airport by integrating measured operational data and simulation data on up to 2,000 flights a day. The concepts and architecture developed by NASA for this prototype are integral to the National Airspace Simulation to support the development of strategies improving aviation safety, identifying precursors to component failure.

  7. A Hundred Impossibility Proofs for Distributed Computing

    DTIC Science & Technology

    1989-08-01

    distributed computing. In this category, I include not just results that say that a particular task cannot be accomplished, but also lower bound results, which say that a task cannot be accomplished within a certain bound on cost. I started out with a simple plan for preparing this talk: I would spend a couple of weeks reading all the impossibility proofs in our field, and would categorize them according to the ideas used. Then I would make wise and general observations, and try to predict where the future of this area is headed. That turned out to be a bit too ambitious;

  8. Interoperable PKI Data Distribution in Computational Grids

    SciTech Connect

    Pala, Massimiliano; Cholia, Shreyas; Rea, Scott A.; Smith, Sean W.

    2008-07-25

    One of the most successful working examples of virtual organizations, computational grids need authentication mechanisms that inter-operate across domain boundaries. Public Key Infrastructures (PKIs) provide sufficient flexibility to allow resource managers to securely grant access to their systems in such distributed environments. However, as PKIs grow and services are added to enhance both security and usability, users and applications must struggle to discover available resources, particularly when the Certification Authority (CA) is alien to the relying party. This article presents how to overcome these limitations of the current grid authentication model by integrating the PKI Resource Query Protocol (PRQP) into the Grid Security Infrastructure (GSI).

  9. Numerical analysis of the dynamics of distributed vortex configurations

    NASA Astrophysics Data System (ADS)

    Govorukhin, V. N.

    2016-08-01

    A numerical algorithm is proposed for analyzing the dynamics of distributed plane vortex configurations in an inviscid incompressible fluid. At every time step, the algorithm involves the computation of unsteady vortex flows, an analysis of the configuration structure with the help of heuristic criteria, the visualization of the distribution of marked particles and vorticity, the construction of streamlines of fluid particles, and the computation of the field of local Lyapunov exponents. The inviscid incompressible fluid dynamic equations are solved by applying a meshless vortex method. The algorithm is used to investigate the interaction of two and three identical distributed vortices with various initial positions in the flow region with and without the Coriolis force.
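
    The simplest meshless vortex computation, point vortices rather than the distributed patches studied here, integrates the induced velocities directly: in complex coordinates, vortex k obeys dz_k*/dt = (1/2(pi)i) sum_{j != k} Gamma_j / (z_k - z_j). A minimal sketch:

        # Point-vortex dynamics: three identical vortices on an equilateral
        # triangle rotate rigidly, so the radii should stay near 1.
        import numpy as np

        def velocities(z, gamma):
            dz = z[:, None] - z[None, :]              # pairwise separations
            with np.errstate(divide="ignore", invalid="ignore"):
                inv = np.where(dz != 0, 1.0 / dz, 0.0)   # no self-induction
            w = (gamma[None, :] * inv).sum(axis=1) / (2j * np.pi)
            return np.conj(w)                         # dz/dt = conj(u - iv)

        def rk4_step(z, gamma, dt):
            k1 = velocities(z, gamma)
            k2 = velocities(z + 0.5 * dt * k1, gamma)
            k3 = velocities(z + 0.5 * dt * k2, gamma)
            k4 = velocities(z + dt * k3, gamma)
            return z + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

        z = np.exp(2j * np.pi * np.arange(3) / 3)
        gamma = np.ones(3)
        for _ in range(1000):
            z = rk4_step(z, gamma, dt=0.01)
        print(np.abs(z))                              # ~ [1. 1. 1.]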

  10. Information modification and particle collisions in distributed computation.

    PubMed

    Lizier, Joseph T; Prokopenko, Mikhail; Zomaya, Albert Y

    2010-09-01

    Distributed computation can be described in terms of the fundamental operations of information storage, transfer, and modification. To describe the dynamics of information in computation, we need to quantify these operations on a local scale in space and time. In this paper we extend previous work regarding the local quantification of information storage and transfer, to explore how information modification can be quantified at each spatiotemporal point in a system. We introduce the separable information, a measure which locally identifies information modification events where separate inspection of the sources to a computation is misleading about its outcome. We apply this measure to cellular automata, where it is shown to be the first direct quantitative measure to provide evidence for the long-held conjecture that collisions between emergent particles therein are the dominant information modification events.

  11. Distributed Data Mining using a Public Resource Computing Framework

    NASA Astrophysics Data System (ADS)

    Cesario, Eugenio; de Caria, Nicola; Mastroianni, Carlo; Talia, Domenico

    The public resource computing paradigm is often used as a successful and low cost mechanism for the management of several classes of scientific and commercial applications that require the execution of a large number of independent tasks. Public computing frameworks, also known as “Desktop Grids”, exploit the computational power and storage facilities of private computers, or “workers”. Despite the inherently decentralized nature of the applications to which they are devoted, these systems often adopt a centralized mechanism for the assignment of jobs and distribution of input data, as is the case for BOINC, the most popular framework in this realm. We present a decentralized framework that aims at increasing the flexibility and robustness of public computing applications, thanks to two basic features: (i) the adoption of a P2P protocol for dynamically matching the job specifications with the worker characteristics, without relying on centralized resources; (ii) the use of distributed cache servers for an efficient dissemination and reutilization of data files. This framework is exploitable for a wide set of applications. In this work, we describe how a Java prototype of the framework was used to tackle the problem of mining frequent itemsets from a transactional dataset, and show some preliminary yet interesting performance results that prove the efficiency improvements that can derive from the presented architecture.

  12. Advances in the spatially distributed ages-w model: parallel computation, java connection framework (JCF) integration, and streamflow/nitrogen dynamics assessment

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic and water quality (H/WQ) simulation components under the Java Connection Framework (JCF) and the Object Modeling System (OMS) environmental modeling framework. AgES-W is implicitly scala...

  13. Dynamic Associations in Nonlinear Computing Arrays

    NASA Astrophysics Data System (ADS)

    Huberman, B. A.; Hogg, T.

    1985-10-01

    We experimentally show that nonlinear parallel arrays can be made to compute with attractors. This leads to fast adaptive behavior in which dynamical associations can be made between different inputs which initially produce sharply distinct outputs. We first define a set of simple local procedures which allow a general computing structure to change its state in time so as to produce classical Pavlovian conditioning. We then examine the dynamics of coalescence and dissociation of attractors with a number of quantitative experiments. We also show how such arrays exhibit generalization and differentiation of inputs in their behavior.
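
    Attractor-based association of this kind is conveniently illustrated with a tiny Hopfield-style network, a related but not identical architecture, in which stored patterns become fixed points and a corrupted input relaxes onto the nearest stored pattern:

        # Hopfield-style sketch: store two patterns with a Hebbian rule,
        # then let a corrupted input relax to its attractor.
        import numpy as np

        patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                             [1, 1, 1, 1, -1, -1, -1, -1]], dtype=float)
        W = sum(np.outer(p, p) for p in patterns)
        np.fill_diagonal(W, 0.0)          # no self-coupling

        def relax(x, steps=5):
            for _ in range(steps):        # synchronous sign updates
                x = np.sign(W @ x)
            return x

        probe = patterns[0].copy()
        probe[:2] *= -1                   # corrupt two of eight bits
        print(relax(probe))               # recovers patterns[0]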

  14. Shipboard Application of a Ring Structured Distributed Computing System.

    DTIC Science & Technology

    Considerable research is currently going on into the application of distributed computing systems. They appear particularly suitable for the...structured distributed computing system might be adapted to function in this environment. Included in this consideration are the feasibility of

  15. Development of a Defence Distributed Computing Environment (DCE) Database Demonstrator,

    DTIC Science & Technology

    1995-11-01

    This report discusses the development of a Defence Distributed Computing Environment (DCE) database demonstrator program. The Demonstrator program...showcases the interoperability, portability, survivability and security features of Open Software Foundation’s Distributed Computing Environment.

  16. Fast Parallel Computation Of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Kwan, Gregory L.; Bagherzadeh, Nader

    1996-01-01

    Constraint-force algorithm fast, efficient, parallel-computation algorithm for solving forward dynamics problem of multibody system like robot arm or vehicle. Solves problem in minimum time proportional to log(N) by use of optimal number of processors proportional to N, where N is number of dynamical degrees of freedom: in this sense, constraint-force algorithm both time-optimal and processor-optimal parallel-processing algorithm.

  17. Models and Measurements of Parallelism for a Distributed Computer System.

    DTIC Science & Technology

    1982-01-01

    that parallel execution of the processes comprising an application program will defray the overhead costs of distributed computing. This...of Different Approaches to Distributed Computing", Proceedings of the 1st International Conference on Distributed Computer Systems, Huntsville, AL...Oct. 1-5, 1979), pp. 222-232. [20] Liskov, B., "Primitives for Distributed Computing", Proceedings of the 7th Symposium on Operating System

  18. Probability distributions of molecular observables computed from Markov models.

    PubMed

    Noé, Frank

    2008-06-28

    Molecular dynamics (MD) simulations can be used to estimate transition rates between conformational substates of the simulated molecule. Such an estimation is associated with statistical uncertainty, which depends on the number of observed transitions. In turn, it induces uncertainties in any property computed from the simulation, such as free energy differences or the time scales involved in the system's kinetics. Assessing these uncertainties is essential for testing the reliability of a given observation and also to plan further simulations in such a way that the most serious uncertainties will be reduced with minimal effort. Here, a rigorous statistical method is proposed to approximate the complete statistical distribution of any observable of an MD simulation provided that one can identify conformational substates such that the transition process between them may be modeled with a memoryless jump process, i.e., Markov or Master equation dynamics. The method is based on sampling the statistical distribution of Markov transition matrices that is induced by the observed transition events. It allows physically meaningful constraints to be included, such as sampling only matrices that fulfill detailed balance, or matrices that produce a predefined equilibrium distribution of states. The method is illustrated on μs MD simulations of a hexapeptide for which the distributions and uncertainties of the free energy differences between conformations, the transition matrix elements, and the transition matrix eigenvalues are estimated. It is found that both constraints, detailed balance and predefined equilibrium distribution, can significantly reduce the uncertainty of some observables.
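
    The core of the method, sampling transition matrices consistent with the observed transition counts and pushing each sample through the observable, can be sketched with independent Dirichlet posteriors on the rows. This minimal version omits the detailed-balance constraint discussed in the paper, which needs a more elaborate sampler; the count matrix is hypothetical:

        # Sample transition matrices from the count-induced posterior and
        # propagate into an observable: the slowest implied timescale.
        import numpy as np

        rng = np.random.default_rng(1)
        counts = np.array([[90, 10,  0],     # hypothetical observed counts
                           [10, 80, 10],
                           [ 0, 10, 90]])
        tau = 1.0                            # lag time of the count matrix

        def sample_timescale():
            T = np.vstack([rng.dirichlet(row + 1) for row in counts])
            lam = np.sort(np.abs(np.linalg.eigvals(T)))[-2]
            return -tau / np.log(lam)        # implied relaxation timescale

        s = [sample_timescale() for _ in range(1000)]
        print(np.percentile(s, [5, 50, 95]))  # uncertainty band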

  19. Testing the CDF distributed computing framework

    SciTech Connect

    Bartsch, Valeria; Baranovski, Andrew; Belforte, Stefano; Burgon-Lyon, Morag; Garzoglio, Gabriele; Herber, Randolph; Illingworth, Robert; Kennedy, Rob; Kerzel, Ulrich; Kreymer, Art; Leslie, Matt; Loebel-Carpenter, Lauri; Lueking, Lee; Lyon, Adam; Merritt, Wyatt; Ratnikov, Fedor; Sill, Alan; St. Denis, Richard; Stonjek, Stefan; Terekhov, Igor; Trumbo, Julie; /Fermilab /Oxford U. /INFN, Trieste /Glasgow U. /Karlsruhe U. /Rutgers U., Piscataway /Texas Tech.

    2004-12-01

    A major source of CPU power for CDF (Collider Detector at Fermilab) is the CAF (Central Analysis Farm) [1] at Fermilab. The CAF is a farm of computers running Linux with access to the CDF data handling system and databases to allow CDF collaborators to run batch analysis jobs. Besides providing CPU power, it has a good monitoring tool. The CAF software is a wrapper around a batch system, either fbsng [3] or Condor, to submit jobs in a uniform way. Submission to the CAF clusters inside and outside Fermilab is therefore possible from many computers with Kerberos authentication. It is mainly used to access datasets which comprise a large number of files and analyze the data. Up to now the DCache system has been used to access the files. In autumn 2004 some of the important datasets will only be readable with the help of the data handling system SAM (Sequential Access to data via Metadata) [2]. This will be done in order to switch to using only one data handling system at Fermilab and on the remote sites. SAM has been used in Run II to store, manage, deliver and track the processing of all data. It is designed to copy data to remote sites with remote analysis in mind. To prove that CAF and SAM could provide the required CPU power and data handling, stress tests of the combined system were carried out. A second goal of CDF is to distribute computing: in 2005, 50% of the computing shall be located outside of Fermilab. For this purpose CDF will use the DCAF (Decentralized CDF Analysis Farms) in combination with SAM. To achieve user friendliness, the SAM station environment has to be common to all stations, and adaptations to the environment have to be made.

  20. HL-20 computational fluid dynamics analysis

    NASA Astrophysics Data System (ADS)

    Weilmuenster, K. James; Greene, Francis A.

    1993-09-01

    The essential elements of a computational fluid dynamics analysis of the HL-20/personnel launch system aerothermal environment at hypersonic speeds including surface definition, grid generation, solution techniques, and visual representation of results are presented. Examples of solution technique validation through comparison with data from ground-based facilities are presented, along with results from computations at flight conditions. Computations at flight points indicate that real-gas effects have little or no effect on vehicle aerodynamics and, at these conditions, results from approximate techniques for determining surface heating are comparable with those obtained from Navier-Stokes solutions.

  1. HL-20 computational fluid dynamics analysis

    NASA Technical Reports Server (NTRS)

    Weilmuenster, K. J.; Greene, Francis A.

    1993-01-01

    The essential elements of a computational fluid dynamics analysis of the HL-20/personnel launch system aerothermal environment at hypersonic speeds including surface definition, grid generation, solution techniques, and visual representation of results are presented. Examples of solution technique validation through comparison with data from ground-based facilities are presented, along with results from computations at flight conditions. Computations at flight points indicate that real-gas effects have little or no effect on vehicle aerodynamics and, at these conditions, results from approximate techniques for determining surface heating are comparable with those obtained from Navier-Stokes solutions.

  2. An Applet-based Anonymous Distributed Computing System.

    ERIC Educational Resources Information Center

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  3. Dynamic object management for distributed data structures

    NASA Technical Reports Server (NTRS)

    Totty, Brian K.; Reed, Daniel A.

    1992-01-01

    In distributed-memory multiprocessors, remote memory accesses incur larger delays than local accesses. Hence, insightful allocation and access of distributed data can yield substantial performance gains. The authors argue for the use of dynamic data management policies encapsulated within individual distributed data structures. Distributed data structures offer performance, flexibility, abstraction, and system independence. This approach is supported by data from a trace-driven simulation study of parallel scientific benchmarks. Experimental data on memory locality, message count, message volume, and communication delay suggest that data-structure-specific data management is superior to a single, system-imposed policy.

  4. Visualization of unsteady computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1994-01-01

    A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamics (CFD) results is presented. This environment requires a supercomputer: massively parallel processors (MPPs) and clusters of workstations acting as a single MPP (by concurrently working on the same task) provide the required computational bandwidth for CFD calculations of transient problems. The cluster of reduced instruction set computer (RISC) workstations is a recent advent based on the low cost and high performance that workstation vendors provide. The cluster, with the proper software, can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address visualizing 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3, revision 1.00, make up the bulk of this report.

  5. Graphics supercomputer for computational fluid dynamics research

    NASA Astrophysics Data System (ADS)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer, a PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted to a research computer lab by adding some furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  6. Computational fluid dynamics and supercomputers, chapter 6

    NASA Astrophysics Data System (ADS)

    Gentzsch, W.

    1988-03-01

    It is important to optimally adapt codes and algorithms to the vector or parallel computer in use. In addition to needing faster and larger supercomputers, users must be much better trained than for (scalar) general-purpose computers. Details are given on restructuring typical numerical algorithms to achieve superior performance on vector computers. The focus, of course, is on Computational Fluid Dynamics. During the last two decades CFD has gained an important position alongside experiments in wind tunnels and analytical methods. The main objective of CFD is to simulate dynamic flow fields through the numerical solution of the governing equations, e.g., the Navier-Stokes equations, using high-speed computers. The simulation of 2-D inviscid and viscous flows on vector computers does not present any difficulties with respect to memory requirements or computation time. In 3-D, however, one has to compute some 20 to 30 variables per mesh point per time-step or iteration, such as the velocity components, density, pressure, enthalpy, temperature, concentrations, dissipative fluxes, local time steps, geometry coefficients, dummy arrays, etc. Computations in 3-D are therefore restricted to fairly coarse meshes as well as to solutions that are often not fully converged. The large amount of CPU time involved and the fact that the data cannot be contained in central memory are the main reasons for the long elapsed times of CFD applications. In these cases, the mapping of the problem onto the architecture of the machine, and in particular onto special organizations of the memory, must be fully considered to take full advantage of the vector computer.
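
    The storage pressure described here is easy to make concrete. Assuming, purely for illustration, a 100^3 mesh, 25 variables per point and 8-byte words:

        % Illustrative memory estimate for one copy of a 3-D CFD field:
        \[
          100^{3}\ \text{points} \times 25\ \tfrac{\text{variables}}{\text{point}}
          \times 8\ \tfrac{\text{B}}{\text{variable}}
          = 2 \times 10^{8}\ \text{B} = 200\ \text{MB},
        \]

    before counting additional time levels and work arrays, which is why such fields routinely exceeded the central memory of the vector machines of the day.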

  7. Computational fluid dynamics in oil burner design

    SciTech Connect

    Butcher, T.A.

    1997-09-01

    In Computational Fluid Dynamics, the differential equations which describe flow, heat transfer, and mass transfer are approximately solved using a very laborious numerical procedure. Flows of practical interest to burner designs are always turbulent, adding to the complexity of requiring a turbulence model. This paper presents a model for burner design.

  8. Final Report Computational Analysis of Dynamical Systems

    SciTech Connect

    Guckenheimer, John

    2012-05-08

    This is the final report for DOE Grant DE-FG02-93ER25164, initiated in 1993. This grant supported research of John Guckenheimer on computational analysis of dynamical systems. During that period, seventeen individuals received PhD degrees under the supervision of Guckenheimer and over fifty publications related to the grant were produced. This document contains copies of these publications.

  9. Optimal dynamic remapping of parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Reynolds, Paul F., Jr.

    1987-01-01

    A large class of computations are characterized by a sequence of phases, with phase changes occurring unpredictably. The decision problem considered here is the remapping of workload to processors in a parallel computation when the utility of remapping and the future behavior of the workload are uncertain: execution requirements are stable during a given phase, but requirements may change radically between phases. For these problems a workload assignment generated for one phase may hinder performance during the next phase. This problem is treated formally for a probabilistic model of computation with at most two phases. The fundamental problem of balancing the expected remapping performance gain against the delay cost is addressed. Stochastic dynamic programming is used to show that the remapping decision policy minimizing the expected running time of the computation has an extremely simple structure. Because the gain may not be predictable, the performance of a heuristic policy that does not require estimation of the gain is examined. The heuristic method's feasibility is demonstrated by its use on an adaptive fluid dynamics code on a multiprocessor. The results suggest that except in extreme cases, the remapping decision problem is essentially that of dynamically determining whether gain can be achieved by remapping after a phase change. The results also suggest that this heuristic is applicable to computations with more than two phases.
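
    The extremely simple policy structure referred to is essentially a threshold rule: remap after a phase change when the expected gain over the phase's expected remaining lifetime outweighs the one-time delay. A toy version of that comparison, with illustrative parameter names:

        # Threshold rule for dynamic remapping: remap when expected savings
        # over the remaining phase lifetime exceed the one-time delay cost.
        def should_remap(step_time_now, step_time_after,
                         expected_remaining_steps, remap_delay):
            gain_per_step = step_time_now - step_time_after
            return gain_per_step * expected_remaining_steps > remap_delay

        # 20% faster steps, ~500 steps left, 40 s to remap: worth it.
        print(should_remap(1.0, 0.8, 500, 40.0))   # True (100 s > 40 s)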

  10. Absolute nonlocality via distributed computing without communication

    NASA Astrophysics Data System (ADS)

    Czekaj, Ł.; Pawłowski, M.; Vértesi, T.; Grudka, A.; Horodecki, M.; Horodecki, R.

    2015-09-01

    Understanding the role that quantum entanglement plays as a resource in various information processing tasks is one of the crucial goals of quantum information theory. Here we propose an alternative perspective for studying quantum entanglement: distributed computation of functions without communication between nodes. To formalize this approach, we propose identity games. Surprisingly, despite the absence of signaling, we find that nonlocal quantum strategies beat classical ones in terms of winning probability for identity games originating from certain bipartite and multipartite functions. Moreover we show that, for a majority of functions, access to general nonsignaling resources doubles the success probability in comparison to classical ones for a large enough number of outputs. Because there are no constraints on the inputs and no processing of the outputs in the identity games, they detect very strong types of correlations: absolute nonlocality.

  11. LHCbDirac: distributed computing in LHCb

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Charpentier, P.; Graciani, R.; Tsaregorodtsev, A.; Closier, J.; Mathe, Z.; Ubeda, M.; Zhelezov, A.; Lanciotti, E.; Romanovskiy, V.; Ciba, K. D.; Casajus, A.; Roiser, S.; Sapunov, M.; Remenska, D.; Bernardoff, V.; Santana, R.; Nandakumar, R.

    2012-12-01

    We present LHCbDirac, an extension of the DIRAC community Grid solution that handles LHCb specificities. The DIRAC software was developed for many years within LHCb only; nowadays it is generic software, used by many scientific communities worldwide. Each community wanting to take advantage of DIRAC has to develop an extension containing all the necessary code for handling its specific cases. LHCbDirac is an actively developed extension, implementing the LHCb computing model and workflows and handling all the distributed computing activities of LHCb. Such activities include real data processing (reconstruction, stripping and streaming), Monte Carlo simulation and data replication. Other activities include group and user analysis, data management, resource management and monitoring, data provenance, and accounting for user and production jobs. LHCbDirac also provides extensions of the DIRAC interfaces, including a secure web client, python APIs and CLIs. Before a new release is put into production, a number of certification tests are run in a dedicated setup. This contribution highlights the versatility of the system, also presenting the experience with real data processing, data and resources management, and monitoring for activities and resources.

  12. Computation-distributed probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Wang, Junjie; Zhao, Lingling; Su, Xiaohong; Shi, Chunmei; Ma, JiQuan

    2016-12-01

    Particle probability hypothesis density filtering has become a promising approach for multi-target tracking due to its capability of handling an unknown and time-varying number of targets in a nonlinear, non-Gaussian system. However, its computational complexity increases linearly with the number of obtained observations and the number of particles, which can be very time consuming, particularly when numerous targets and clutter exist in the surveillance region. To address this issue, we present a computation-distributed particle probability hypothesis density (PHD) filter for target tracking. It runs several local decomposed particle PHD filters in parallel on separate processing elements. Each processing element takes responsibility for a portion of the particles but all measurements, and provides local estimates. A central unit controls particle exchange among the processing elements and specifies a fusion rule to match and fuse the estimates from the different local filters. The proposed framework is suitable for parallel implementation. Simulations verify that the proposed method can significantly accelerate processing while maintaining accuracy comparable to the standard particle PHD filter.

  13. Automating usability of ATLAS Distributed Computing resources

    NASA Astrophysics Data System (ADS)

    Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration

    2014-06-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both tasks: providing global monitoring and performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage areas monitoring and central management at all levels. This review has involved the reordering and optimization of SAM tests deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time granularity and automatic actions to be taken in foreseen cases, such as outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB working principles and features. We also present the decrease in human interaction achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
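
    The inference over test history can be approximated, at sketch level only, by a sliding-window rule with hysteresis; SAAB's actual algorithm is more involved:

        # Toy storage-area blacklisting from a stream of test outcomes:
        # blacklist when the recent success rate falls below a threshold,
        # whitelist again once the window recovers (hysteresis).
        from collections import deque

        WINDOW, BLACK_BELOW, WHITE_ABOVE = 12, 0.5, 0.9

        class AreaStatus:
            def __init__(self):
                self.history = deque(maxlen=WINDOW)
                self.blacklisted = False

            def record(self, passed):
                self.history.append(bool(passed))
                rate = sum(self.history) / len(self.history)
                if self.blacklisted and rate >= WHITE_ABOVE:
                    self.blacklisted = False
                elif not self.blacklisted and rate < BLACK_BELOW:
                    self.blacklisted = True
                return self.blacklisted

        area = AreaStatus()
        for outcome in [True] * 6 + [False] * 8 + [True] * 12:
            area.record(outcome)
        print(area.blacklisted)      # False: the site has recovered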

  14. Dynamics of protein distributions in cell populations

    NASA Astrophysics Data System (ADS)

    Brenner, Naama; Farkash, Keren; Braun, Erez

    2006-09-01

    A population of cells exhibits wide phenotypic variation even if it is genetically homogeneous. In particular, individual cells differ from one another in the amount of protein they express under a given regulatory system under fixed conditions. Here we study how protein distributions in a population of the yeast S. cerevisiae are shaped by a balance of processes: protein production—an intracellular process—and protein dilution due to cell division—a population process. We measure protein distributions by employing a green fluorescent protein (GFP) reporter under the regulation of the yeast GAL system, under conditions where it is metabolically essential. Cell populations are grown in chemostats, thus allowing control of the environment and stable measurements of distribution dynamics over many generations. Despite the essential functional role of the GAL system in a pure galactose medium, steady-state distributions are found to be universally broad, with exponential tails and a large standard-deviation-to-mean ratio. Under several different perturbations the dynamics of the distribution is observed to be asymmetric, with a much longer time to build a wide expression distribution from below compared with a fast relaxation of the distribution toward steady state from above. These results show that the main features of the protein distributions are largely determined by population effects and are less sensitive to intracellular biochemical noise.
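
    The production-dilution balance can be caricatured in a few lines: bursty production within each generation, binomial partitioning of molecules at division. All parameters below are arbitrary illustrative choices, yet the stationary distribution is already broad, with a standard-deviation-to-mean ratio of order one:

        # Caricature of protein-distribution shaping: geometric bursts of
        # production each generation, then binomial halving at division.
        import numpy as np

        rng = np.random.default_rng(2)
        n_cells, generations = 20000, 60
        burst_rate, mean_burst = 5, 40   # arbitrary illustrative values

        p = np.zeros(n_cells, dtype=int)
        for _ in range(generations):
            bursts = rng.poisson(burst_rate, n_cells)
            p = p + bursts * rng.geometric(1.0 / mean_burst, n_cells)
            p = rng.binomial(p, 0.5)     # division: half to each daughter

        print(p.mean(), p.std() / p.mean())   # broad stationary distribution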

  15. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation: Second Year Progress Report

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    Mesh generation has long been recognized as a bottleneck in the CFD process. While much research on automating the volume mesh generation process has been relatively successful, these methods rely on an appropriate initial surface triangulation to work properly. Surface discretization has been one of the least automated steps in computational simulation due to its dependence on implicitly defined CAD surfaces and curves. Differences in CAD geometry engines manifest themselves in discrepancies in their interpretation of the same entities. This lack of "good" geometry causes significant problems for mesh generators, requiring users to "repair" the CAD geometry before mesh generation. The problem is exacerbated when CAD geometry is translated to other forms (e.g., IGES) which do not include important topological and construction information in addition to entity geometry. One technique to avoid these problems is to access the CAD geometry directly from the mesh generating software, rather than through files. By accessing the geometry model (not a discretized version) in its native environment, this approach avoids translation to a format which can deplete the model of topological information. Our approach to enabling models developed in the Denali software environment to directly access CAD geometry and functions is through an Application Programming Interface (API) known as CAPRI. CAPRI provides a layer of indirection through which CAD-specific data may be accessed by an application program using CAD-system-neutral C and FORTRAN language function calls. CAPRI supports a general set of CAD operations such as truth testing, geometry construction and entity queries.

  16. The Gain of Resource Delegation in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander

    In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, a loosely coupled network of autonomously acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In the case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, granting idle resources to others can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups we show the possible gain of this approach and analyze the dynamics in workload-adaptive reconfiguration behavior.

  17. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics.

    PubMed

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A

    2012-12-11

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as "multistate". These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations.

  18. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics

    PubMed Central

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.

    2012-01-01

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as “multistate”. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
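
    In equations, the linear combination described in both versions of this record is the standard multiconfigurational (EVB-style) construction: the system state is expanded over bonding topologies, the ground-state energy comes from a small eigenproblem, and the Hellmann-Feynman theorem yields the forces (for normalized coefficients):

        \[
          |\Psi(\mathbf{x})\rangle = \sum_i c_i(\mathbf{x})\,|i\rangle,
          \qquad
          \mathbf{H}(\mathbf{x})\,\mathbf{c} = E(\mathbf{x})\,\mathbf{c},
        \]
        \[
          \mathbf{F} = -\nabla_{\mathbf{x}} E
                     = -\,\mathbf{c}^{\top}
                       \bigl(\nabla_{\mathbf{x}}\mathbf{H}\bigr)\,\mathbf{c},
        \]

    where the diagonal entries of H are the energies of the individual bonding topologies and the off-diagonal entries couple them; the long-ranged Coulombic redundancy discussed above arises because every topology's energy and forces must be evaluated at each step.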

  19. Dynamic data distributions in Vienna Fortran

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Mehrotra, Piyush; Moritsch, Hans; Zima, Hans

    1993-01-01

    Vienna Fortran is a machine-independent language extension of Fortran, which is based upon the Single-Program-Multiple-Data (SPMD) paradigm and allows the user to write programs for distributed-memory systems using global addresses. The language features focus mainly on the issue of distributing data across virtual processor structures. Those features of Vienna Fortran that allow the data distributions of arrays to change dynamically, depending on runtime conditions are discussed. The relevant language features are discussed, their implementation is outlined, and how they may be used in applications is described.

  20. Dynamic Singularity Spectrum Distribution of Sea Clutter

    NASA Astrophysics Data System (ADS)

    Xiong, Gang; Yu, Wenxian; Zhang, Shuning

    2015-12-01

    Fractal and multifractal theory have provided new approaches for radar signal processing and target detection against an ocean background. However, related research has mainly focused on the fractal dimension or multifractal spectrum (MFS) of sea clutter. In this paper, a new dynamic singularity analysis method for sea clutter using the MFS distribution is developed, based on detrending moving average analysis (DMA-MFSD). Theoretically, we introduce time information by using the cyclic auto-correlation of sea clutter. For the transient correlation series, the instantaneous singularity spectrum based on the multifractal detrending moving average (MF-DMA) algorithm is calculated, and the dynamic singularity spectrum distribution of sea clutter is acquired. In addition, we analyze the time-varying singularity exponent ranges and the maximum position function in DMA-MFSD of sea clutter. For real sea clutter data in level III sea state, we analyze the dynamic singularity spectrum distribution and conclude that radar sea clutter is non-stationary with time-varying scale characteristics and exhibits a time-varying singularity spectrum distribution under the proposed DMA-MFSD method. DMA-MFSD will also provide a reference for nonlinear dynamics and multifractal signal processing.
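
    For reference, the quantities underlying MF-DMA: the qth-order fluctuation function about the moving-average trend scales with window size n, and the singularity spectrum follows by Legendre transform:

        \[
          F_q(n) = \Bigl\{\frac{1}{N}\sum_{i=1}^{N}
                   \bigl|x(i)-\tilde{x}(i)\bigr|^{q}\Bigr\}^{1/q}
                 \sim n^{\,h(q)},
          \qquad \tau(q) = q\,h(q) - 1,
        \]
        \[
          \alpha = \frac{d\tau}{dq},
          \qquad f(\alpha) = q\,\alpha - \tau(q),
        \]

    where \tilde{x}(i) is the moving-average detrend over windows of size n; the dynamic variant of this paper evaluates such spectra on successive segments to obtain a time-indexed distribution.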

  1. The brain dynamics of linguistic computation

    PubMed Central

    Murphy, Elliot

    2015-01-01

    Neural oscillations at distinct frequencies are increasingly being related to a number of basic and higher cognitive faculties. Oscillations enable the construction of coherently organized neuronal assemblies through establishing transitory temporal correlations. By exploring the elementary operations of the language faculty—labeling, concatenation, cyclic transfer—alongside neural dynamics, a new model of linguistic computation is proposed. It is argued that the universality of language, and the true biological source of Universal Grammar, is not to be found purely in the genome as has long been suggested, but more specifically within the extraordinarily preserved nature of mammalian brain rhythms employed in the computation of linguistic structures. Computational-representational theories are used as a guide in investigating the neurobiological foundations of the human “cognome”—the set of computations performed by the nervous system—and new directions are suggested for how the dynamics of the brain (the “dynome”) operate and execute linguistic operations. The extent to which brain rhythms are the suitable neuronal processes which can capture the computational properties of the human language faculty is considered against a backdrop of existing cartographic research into the localization of linguistic interpretation. Particular focus is placed on labeling, the operation elsewhere argued to be species-specific. A Basic Label model of the human cognome-dynome is proposed, leading to clear, causally-addressable empirical predictions, to be investigated by a suggested research program, Dynamic Cognomics. In addition, a distinction between minimal and maximal degrees of explanation is introduced to differentiate between the depth of analysis provided by cartographic, rhythmic, neurochemical, and other approaches to computation. PMID:26528201

  2. Computational and dynamic models in neuroimaging

    PubMed Central

    Friston, Karl J.; Dolan, Raymond J.

    2010-01-01

    This article reviews the substantial impact computational neuroscience has had on neuroimaging over the past years. It builds on the distinction between models of the brain as a computational machine and computational models of neuronal dynamics per se; i.e., models of brain function and biophysics. Both sorts of model borrow heavily from computational neuroscience, and both have enriched the analysis of neuroimaging data and the type of questions we address. To illustrate the role of functional models in imaging neuroscience, we focus on optimal control and decision (game) theory; the models used here provide a mechanistic account of neuronal computations and the latent (mental) states represented by the brain. In terms of biophysical modelling, we focus on dynamic causal modelling, with a special emphasis on recent advances in neural-mass models for hemodynamic and electrophysiological time series. Each example emphasises the role of generative models, which embed our hypotheses or questions, and the importance of model comparison (i.e., hypothesis testing). We will return to this theme when trying to contextualise recent trends in relation to each other. PMID:20036335

  3. Visualization of unsteady computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1995-01-01

    The current computing environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamic (CFD) results is a super-computer class machine. Massively Parallel Processors (MPPs), such as the 160-node IBM SP2 at NAS, and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array) provide the required computation bandwidth for CFD calculations of transient problems. Work is in progress on a set of software tools designed specifically to address visualizing 3D unsteady CFD results in these super-computer-like environments. The visualization is concurrently executed with the CFD solver. The parallel version of Visual3, pV3, required splitting up the unsteady visualization task to allow execution across a network of workstation(s) and compute servers. In this computing model, the network is almost always the bottleneck, so much of the effort involved techniques to reduce the size of the data transferred between machines.

  4. Feasibility Study of Computational Fluid Dynamics Simulation of Coronary Computed Tomography Angiography Based on Dual-Source Computed Tomography

    PubMed Central

    Lu, Jing; Yu, Jie; Shi, Heshui

    2017-01-01

    Background Adding functional features to morphological features offers a new method for non-invasive assessment of myocardial perfusion. This study aimed to explore technical routes for assessing the left coronary artery pressure gradient, wall shear stress distribution and blood flow velocity distribution, combining a three-dimensional coronary model based on high-resolution dual-source computed tomography (CT) with computational fluid dynamics (CFD) simulation. Methods Three cases of no obvious stenosis, mild stenosis and severe stenosis in the left anterior descending (LAD) artery were enrolled. Images acquired on dual-source CT were input into the software packages Mimics, ICEMCFD and FLUENT to simulate the pressure gradient, wall shear stress distribution and blood flow velocity distribution. The coronary enhancement ratio was measured for comparison with the pressure gradient. Results The results conformed to theoretical values and showed differences between normal and abnormal samples. Conclusions The study preliminarily verified the essential parameters and basic techniques of blood flow numerical simulation and demonstrated the feasibility of the approach. PMID:27924174

  5. Computational fluid dynamics using CATIA created geometry

    SciTech Connect

    Gengler, J.E.

    1989-01-01

    A method has been developed to link the geometry definition residing on a CAD/CAM system with a computational fluid dynamics (CFD) tool needed to evaluate aerodynamic designs and requiring the memory capacity of a supercomputer. Requirements for surfaces suitable for CFD analysis are discussed. Techniques for developing surfaces and verifying their smoothness are compared, showing the capability of the CAD/CAM system. The utilization of a CAD/CAM system to create a computational mesh is explained, and the mesh interaction with the geometry and input file preparation for the CFD analysis is discussed.

  6. Computational fluid dynamics using CATIA created geometry

    NASA Astrophysics Data System (ADS)

    Gengler, Jeanne E.

    1989-07-01

    A method has been developed to link the geometry definition residing on a CAD/CAM system with a computational fluid dynamics (CFD) tool needed to evaluate aerodynamic designs and requiring the memory capacity of a supercomputer. Requirements for surfaces suitable for CFD analysis are discussed. Techniques for developing surfaces and verifying their smoothness are compared, showing the capability of the CAD/CAM system. The utilization of a CAD/CAM system to create a computational mesh is explained, and the mesh interaction with the geometry and input file preparation for the CFD analysis is discussed.

  7. State space representations of distributed fluid line dynamics

    NASA Technical Reports Server (NTRS)

    Yao, H.; Goodson, R. E.; Leonard, R. G.

    1974-01-01

    The purpose of this paper is to demonstrate the convenience of using a systematic, straightforward procedure to obtain meaningful dynamic information for a class of complex distributed parameter fluid line networks. System transients in the time domain are determined by means of state space techniques. Digital computer implementation yields a simple but consistent way of obtaining overall system time solutions. A step-by-step analysis procedure flow chart is shown in Appendix I which illustrates the basic approach for modeling, approximating and selecting digital techniques for simulating the dynamic response of fluid line systems.
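
    A minimal sketch of the kind of computation described above, assuming a lumped inertance-compliance-resistance approximation of a fluid line (the segment values, blocked-end boundary condition, and step input are illustrative choices, not taken from the paper); it assembles the state matrices and computes a time-domain transient with SciPy:

```python
import numpy as np
from scipy.signal import StateSpace, lsim

# Lumped approximation of a fluid transmission line: N segments, each with
# inertance L, compliance C and resistance R (illustrative values). States
# alternate [q1, p1, q2, p2, ...]; input is the upstream pressure, the far
# end is blocked (q_{N+1} = 0).
N = 10
L, C, R = 1.0, 1.0, 0.1

A = np.zeros((2*N, 2*N))
B = np.zeros((2*N, 1))
for i in range(N):
    qi, pi = 2*i, 2*i + 1
    # L dq_i/dt = p_{i-1} - p_i - R q_i   (p_0 is the input pressure)
    A[qi, pi] = -1.0/L
    A[qi, qi] = -R/L
    if i > 0:
        A[qi, pi - 2] = 1.0/L
    else:
        B[qi, 0] = 1.0/L
    # C dp_i/dt = q_i - q_{i+1}
    A[pi, qi] = 1.0/C
    if i < N - 1:
        A[pi, qi + 2] = -1.0/C

Cmat = np.zeros((1, 2*N)); Cmat[0, -1] = 1.0      # observe end pressure
sys = StateSpace(A, B, Cmat, np.zeros((1, 1)))

t = np.linspace(0.0, 50.0, 2000)
u = np.ones_like(t)                               # step in upstream pressure
tout, y, x = lsim(sys, u, t)                      # time-domain transient
```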

  8. Spectral Methods for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Streett, C. L.; Hussaini, M. Y.

    1994-01-01

    As a tool for large-scale computations in fluid dynamics, spectral methods were prophesied in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral
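
    The transform method mentioned above evaluates nonlinear terms by differentiating in spectral space and multiplying pointwise in physical space. A minimal FFT sketch of that idea (grid size and test function are arbitrary, not from the paper):

```python
import numpy as np

# Transform (pseudospectral) evaluation of a nonlinear term u*u_x:
# differentiate in spectral space, multiply pointwise in physical space.
N = 64
x = 2*np.pi*np.arange(N)/N
k = np.fft.fftfreq(N, d=1.0/N)          # integer wavenumbers
u = np.sin(x)

u_hat = np.fft.fft(u)
ux = np.real(np.fft.ifft(1j*k*u_hat))   # spectral derivative of u
nonlinear = u*ux                        # product formed on the grid

# sanity check against the exact value sin(x)*cos(x)
assert np.allclose(nonlinear, np.sin(x)*np.cos(x), atol=1e-10)
```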

  9. Spectral Methods for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Streett, C. L.; Hussaini, M. Y.

    1994-01-01

    As a tool for large-scale computations in fluid dynamics, spectral methods were prophesied in 1944, born in 1954, virtually buried in the mid-1960's, resurrected in 1969, evangelized in the 1970's, and catholicized in the 1980's. The use of spectral methods for meteorological problems was proposed by Blinova in 1944 and the first numerical computations were conducted by Silberman (1954). By the early 1960's computers had achieved sufficient power to permit calculations with hundreds of degrees of freedom. For problems of this size the traditional way of computing the nonlinear terms in spectral methods was expensive compared with finite-difference methods. Consequently, spectral methods fell out of favor. The expense of computing nonlinear terms remained a severe drawback until Orszag (1969) and Eliasen, Machenauer, and Rasmussen (1970) developed the transform methods that still form the backbone of many large-scale spectral computations. The original proselytes of spectral methods were meteorologists involved in global weather modeling and fluid dynamicists investigating isotropic turbulence. The converts who were inspired by the successes of these pioneers remained, for the most part, confined to these and closely related fields throughout the 1970's. During that decade spectral methods appeared to be well-suited only for problems governed by ordinary differential equations or by partial differential equations with periodic boundary conditions. And, of course, the solution itself needed to be smooth. Some of the obstacles to wider application of spectral methods were: (1) poor resolution of discontinuous solutions; (2) inefficient implementation of implicit methods; and (3) drastic geometric constraints. All of these barriers have undergone some erosion during the 1980's, particularly the latter two. As a result, the applicability and appeal of spectral methods for computational fluid dynamics has broadened considerably. The motivation for the use of spectral

  10. Distributed Design and Analysis of Computer Experiments

    SciTech Connect

    Doak, Justin

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an
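
    DDACE itself is a C++ library; as a rough stand-in for the workflow it describes (sample uncertain inputs over suspected ranges, run the application, analyze input/output relationships), here is a hedged Python sketch using SciPy's Latin hypercube sampler, with made-up variable ranges and a synthetic "application code":

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample over one temperature and two material variables,
# scaled to suspected ranges (all ranges invented for illustration).
sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=50)                       # 50 points in [0, 1)^3
lo = [250.0, 1.0e9, 0.25]
hi = [350.0, 5.0e9, 0.35]
X = qmc.scale(unit, lo, hi)

# stand-in for the user's application code producing one output per run
rng = np.random.default_rng(0)
y = X @ np.array([0.1, 1.0e-9, 2.0]) + rng.normal(size=50)

# input/output relationship: correlation of the output with each input
r = np.corrcoef(np.column_stack([X, y]), rowvar=False)[-1, :-1]
print("input/output correlations:", r)
```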

  11. Flight Simulation of Taketombo Based on Computational Fluid Dynamics and Computational Flight Dynamics

    NASA Astrophysics Data System (ADS)

    Kawamura, Kohei; Ueno, Yosuke; Nakamura, Yoshiaki

    In the present study we have developed a numerical method to simulate the flight dynamics of a small flying body with unsteady motion, where both aerodynamics and flight dynamics are fully considered. A key point of this numerical code is to use computational fluid dynamics and computational flight dynamics at the same time, which is referred to as CFD2, or double CFDs, where several new ideas are adopted in the governing equations, the method to make each quantity nondimensional, and the coupling method between aerodynamics and flight dynamics. This numerical code can be applied to simulate the unsteady motion of small vehicles such as micro air vehicles (MAV). As a sample calculation, we take up Taketombo, or a bamboo dragonfly, and its free flight in the air is demonstrated. The eventual aim of this research is to virtually fly an aircraft with arbitrary motion to obtain aerodynamic and flight dynamic data, which cannot be taken in the conventional wind tunnel.
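
    The coupling idea — advancing the aerodynamics and the flight dynamics inside the same time loop — can be caricatured in a few lines. In the paper the force comes from an unsteady CFD solve each step; the sketch below substitutes a quadratic-drag placeholder, and every constant is invented:

```python
# One time loop advancing "aerodynamics" and flight dynamics together.
# A quadratic-drag expression stands in for the per-step CFD force solve.
m, g = 0.05, 9.81            # mass [kg], gravity [m/s^2]
c_d = 0.02                   # lumped drag constant (illustrative)
z, w = 10.0, 0.0             # altitude [m], vertical velocity [m/s]
dt = 1.0e-3

for _ in range(5000):
    F_aero = -c_d*w*abs(w)   # placeholder for the CFD force evaluation
    w += dt*(F_aero/m - g)   # flight-dynamics update using that force
    z += dt*w
print(f"state after 5 s: z = {z:.2f} m, w = {w:.2f} m/s")
```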

  12. Delaunay triangulation and computational fluid dynamics meshes

    NASA Technical Reports Server (NTRS)

    Posenau, Mary-Anne K.; Mount, David M.

    1992-01-01

    In aerospace computational fluid dynamics (CFD) calculations, the Delaunay triangulation of suitable quadrilateral meshes can lead to unsuitable triangulated meshes. Here, we present case studies which illustrate the limitations of using structured grid generation methods which produce points in a curvilinear coordinate system for subsequent triangulations for CFD applications. We discuss conditions under which meshes of quadrilateral elements may not produce a Delaunay triangulation suitable for CFD calculations, particularly with regard to high aspect ratio, skewed quadrilateral elements.
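
    The failure mode described above is easy to reproduce: triangulating points taken from a stretched structured grid yields high-aspect-ratio triangles. A small sketch with SciPy (the boundary-layer-like grid spacing and the edge-ratio quality measure are illustrative choices):

```python
import numpy as np
from scipy.spatial import Delaunay

# Points from a stretched structured grid: the Delaunay triangulation is
# free to connect points across cells, producing badly shaped triangles.
x = np.linspace(0.0, 1.0, 20)
y = np.logspace(-4, 0, 20)                 # strong clustering near y = 0
X, Y = np.meshgrid(x, y)
pts = np.column_stack([X.ravel(), Y.ravel()])
tri = Delaunay(pts)

def edge_ratio(simplex):
    p = pts[simplex]
    e = [np.linalg.norm(p[i] - p[j]) for i, j in ((0, 1), (1, 2), (2, 0))]
    return max(e)/min(e)

ratios = np.array([edge_ratio(s) for s in tri.simplices])
print("worst edge-length ratio:", ratios.max())
```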

  13. High performance computational chemistry: Towards fully distributed parallel algorithms

    SciTech Connect

    Guest, M.F.; Apra, E.; Bernholdt, D.E.

    1994-07-01

    An account is given of work in progress within the High Performance Computational Chemistry Group (HPCC) at the Pacific Northwest Laboratory (PNL) to develop molecular modeling software applications for massively parallel processors (MPPs). A discussion of the issues in developing scalable parallel algorithms is presented, with a particular focus on the distribution, as opposed to the replication, of key data structures. Replication of large data structures limits the maximum calculation size by imposing a low ratio of processors to memory. Only applications that distribute both data and computation across processors are truly scalable. The use of shared data structures, which may be independently accessed by each process even in a distributed-memory environment, greatly simplifies development and provides a significant performance enhancement. In describing tools to support this programming paradigm, an outline is given of the implementation and performance of a highly efficient and scalable algorithm to perform quadratically convergent, self-consistent field calculations on molecular systems. A brief account is given of the development of corresponding MPP capabilities in the areas of periodic Hartree Fock, Moeller-Plesset perturbation theory (MP2), density functional theory, and molecular dynamics. Performance figures are presented using both the Intel Touchstone Delta and Kendall Square Research KSR-2 supercomputers.

  14. A Distributed Computing Infrastructure for Computational Thermodynamic Calculations of Solid-Liquid Phase Equilibria

    NASA Astrophysics Data System (ADS)

    Ghiorso, M. S.; Kress, V. C.

    2004-12-01

    routines is being accessed. Fourth, the flexibility of calling library functions means that the client has more control over the configuration and output of the MELTS calculation. Fifth, if the client computer is a multi-processor compute cluster capable of issuing parallel requests to the MELTS "remote" library, then these requests may be in turn parallelized to the server compute cluster to enhance throughput and performance. Application of this computational model to fluid dynamical simulations of melting and transport in the Earth's mantle is envisioned. Further information and example clients for utilizing the current prototype library for distributed computing applications can be found at http://melts.uchicago.edu.

  15. The ICAAP Project, Part Three: OSF Distributed Computing Environment.

    ERIC Educational Resources Information Center

    Cantor, Scott

    1997-01-01

    DCE (Distributed Computing Environment) is a collection of services, tools, and libraries for building the infrastructure necessary for distributed computing within an enterprise. This articles discusses the Open Software Foundation (OSF); the components of DCE, including the Directory and Security Services, the Distributed Time Service, and the…

  16. Computation in Dynamically Bounded Asymmetric Systems

    PubMed Central

    Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney

    2015-01-01

    Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable ‘expansion’ dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems. PMID:25617645
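
    A minimal simulation of the class of networks analyzed above — asymmetrically connected linear-threshold (rectified) units, unbounded above — using simple Euler integration; the weight statistics and input drive are illustrative, not the paper's parameters:

```python
import numpy as np

# Euler simulation of an asymmetrically connected linear-threshold network:
# tau dx/dt = -x + [W x + b]_+ , rectified and unbounded above.
# Reduce the weight gain if the trajectory diverges.
rng = np.random.default_rng(1)
n = 50
W = rng.normal(0.0, 1.2/np.sqrt(n), (n, n))   # asymmetric random weights
np.fill_diagonal(W, 0.0)
b = rng.uniform(0.0, 1.0, n)                  # constant input drive

x = np.zeros(n)
dt, tau = 0.01, 1.0
norms = []
for _ in range(5000):
    x = x + (dt/tau)*(-x + np.maximum(W @ x + b, 0.0))
    norms.append(np.linalg.norm(x))           # transient expansion, then settling
```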

  17. Inverse dynamics: Simultaneous trajectory tracking and vibration reduction with distributed actuators

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh; Bayo, Eduardo

    1993-01-01

    This paper addresses the problem of inverse dynamics for articulated flexible structures with both lumped and distributed actuators. This problem arises, for example, in the combined vibration minimization and trajectory control of space robots and structures. A new inverse dynamics scheme for computing the nominal lumped and distributed inputs for tracking a prescribed trajectory is given.

  18. Inverse dynamics: Simultaneous trajectory tracking and vibration reduction with distributed actuators

    NASA Astrophysics Data System (ADS)

    Devasia, Santosh; Bayo, Eduardo

    1993-02-01

    This paper addresses the problem of inverse dynamics for articulated flexible structures with both lumped and distributed actuators. This problem arises, for example, in the combined vibration minimization and trajectory control of space robots and structures. A new inverse dynamics scheme for computing the nominal lumped and distributed inputs for tracking a prescribed trajectory is given.

  19. Meshless methods for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Katz, Aaron Jon

    While the generation of meshes has always posed challenges for computational scientists, the problem has become more acute in recent years. Increased computational power has enabled scientists to tackle problems of increasing size and complexity. While algorithms have seen great advances, mesh generation has lagged behind, creating a computational bottleneck. For industry and government looking to impact current and future products with simulation technology, mesh generation imposes great challenges. Many generation procedures often lack automation, requiring many man-hours, which are becoming far more expensive than computer hardware. More automated methods are less reliable for complex geometry with sharp corners, concavity, or otherwise complex features. Most mesh generation methods to date require a great deal of user expertise to obtain accurate simulation results. Since the application of computational methods to real world problems appears to be paced by mesh generation, alleviating this bottleneck potentially impacts an enormous field of problems. Meshless methods applied to computational fluid dynamics is a relatively new area of research designed to help alleviate the burden of mesh generation. Despite their recent inception, there exists no shortage of formulations and algorithms for meshless schemes in the literature. A brief survey of the field reveals varied approaches arising from diverse mathematical backgrounds applied to a wide variety of applications. All meshless schemes attempt to bypass the use of a conventional mesh entirely or in part by discretizing governing partial differential equations on scattered clouds of points. A goal of the present thesis is to develop a meshless scheme for computational fluid dynamics and evaluate its performance compared with conventional methods. The meshless schemes developed in this work compare favorably with conventional finite volume methods in terms of accuracy and efficiency for the Euler and Navier
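
    One common meshless building block consistent with the description above is a local least-squares fit on a scattered cloud of points, which yields derivative estimates without any mesh connectivity. A sketch under those assumptions (the sampled field, cloud, and neighbour count are arbitrary):

```python
import numpy as np

# Meshless building block: estimate du/dx at a point from a scattered cloud
# of neighbours by a local least-squares linear fit -- no mesh required.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, (200, 2))        # scattered point cloud
u = np.sin(pts[:, 0]) + 0.5*pts[:, 1]         # sampled field

x0 = np.array([0.0, 0.0])                     # evaluation point
d = np.linalg.norm(pts - x0, axis=1)
nbr = np.argsort(d)[:12]                      # nearest scattered neighbours

# fit u ~ a + b*(x - x0) + c*(y - y0) over the local cloud
A = np.column_stack([np.ones(len(nbr)),
                     pts[nbr, 0] - x0[0],
                     pts[nbr, 1] - x0[1]])
coef, *_ = np.linalg.lstsq(A, u[nbr], rcond=None)
print("du/dx estimate:", coef[1], "(exact: 1.0)")
```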

  20. Arterioportal shunts on dynamic computed tomography

    SciTech Connect

    Nakayama, T.; Hiyama, Y.; Ohnishi, K.; Tsuchiya, S.; Kohno, K.; Nakajima, Y.; Okuda, K.

    1983-05-01

    Thirty-two patients, 20 with hepatocellular carcinoma and 12 with liver cirrhosis, were examined by dynamic computed tomography (CT) using intravenous bolus injection of contrast medium and by celiac angiography. Dynamic CT disclosed arterioportal shunting in four cases of hepatocellular carcinoma and in one of cirrhosis. In three of the former, the arterioportal shunt was adjacent to a mass lesion on CT, suggesting tumor invasion into the portal branch. In one with hepatocellular carcinoma, the shunt was remote from the mass. In the case with cirrhosis, there was no mass. In these last two cases, the shunt might have been caused by prior percutaneous needle puncture. In another case of hepatocellular carcinoma, celiac angiography but not CT demonstrated an arterioportal shunt. Thus, dynamic CT was diagnostic in five of six cases of arteriographically demonstrated arterioportal shunts.

  1. A computational model for dynamic vision

    NASA Technical Reports Server (NTRS)

    Moezzi, Saied; Weymouth, Terry E.

    1990-01-01

    This paper describes a novel computational model for dynamic vision which promises to be both powerful and robust. Furthermore the paradigm is ideal for an active vision system where camera vergence changes dynamically. Its basis is the retinotopically indexed object-centered encoding of the early visual information. Specifically, the relative distances of objects to a set of referents is encoded in image registered maps. To illustrate the efficacy of the method, it is applied to the problem of dynamic stereo vision. Integration of depth information over multiple frames obtained by a moving robot generally requires precise information about the relative camera position from frame to frame. Usually, this information can only be approximated. The method facilitates the integration of depth information without direct use or knowledge of camera motion.

  2. Human systems dynamics: Toward a computational model

    NASA Astrophysics Data System (ADS)

    Eoyang, Glenda H.

    2012-09-01

    A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high dimension, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high dimension, and nonlinear conceptual model of the complex dynamics of human systems.

  3. Computational fluid dynamics uses in fluid dynamics/aerodynamics education

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    1994-01-01

    The field of computational fluid dynamics (CFD) has advanced to the point where it can now be used for the purpose of fluid dynamics physics education. Because of the tremendous wealth of information available from numerical simulation, certain fundamental concepts can be efficiently communicated using an interactive graphical interrogation of the appropriate numerical simulation data base. In other situations, a large amount of aerodynamic information can be communicated to the student by interactive use of simple CFD tools on a workstation or even in a personal computer environment. The emphasis in this presentation is to discuss ideas for how this process might be implemented. Specific examples, taken from previous publications, will be used to highlight the presentation.

  4. Concept for a distributed processor computer

    NASA Technical Reports Server (NTRS)

    Bogue, P. N.; Burnett, G. J.; Koczela, L. J.

    1970-01-01

    A future-generation computer utilizes cells, each consisting of a single metal oxide semiconductor wafer containing a general-purpose processor section and a small memory of approximately 512 words of 16 bits each. Cells are organized into groups, and the groups are interconnected to form the computer.

  5. Hybrid computer technique yields random signal probability distributions

    NASA Technical Reports Server (NTRS)

    Cameron, W. D.

    1965-01-01

    Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.
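
    A purely digital analogue of what the hybrid computer computed — probability distributions of instantaneous and peak amplitudes of a random signal — takes only a few lines today (the Gaussian test signal and bin count are arbitrary choices):

```python
import numpy as np

# Estimate the probability distributions of instantaneous and peak
# amplitudes of a random signal from histograms.
rng = np.random.default_rng(0)
sig = rng.normal(size=200_000)                 # test signal (Gaussian noise)

inst_pdf, edges = np.histogram(sig, bins=100, density=True)

# local peaks: samples larger than both neighbours
mid = sig[1:-1]
peaks = mid[(mid > sig[:-2]) & (mid > sig[2:])]
peak_pdf, _ = np.histogram(peaks, bins=edges, density=True)
```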

  6. Development of Distributed Computing Systems Software Design Methodologies.

    DTIC Science & Technology

    1982-11-05

    Scanned DTIC report documentation page; the OCR text is largely illegible. Recoverable fields: Final Report, "Development of Distributed Computing Systems Software Design Methodologies," Stephen S. Yau, Northwestern Univ., Evanston, IL, Dept. of Electrical…

  7. Computational Fluid Dynamics of rising droplets

    SciTech Connect

    Wagner, Matthew; Francois, Marianne M.

    2012-09-05

    The main goal of this study is to perform simulations of droplet dynamics using Truchas, a LANL-developed computational fluid dynamics (CFD) software, and compare them to a computational study of Hysing et al. [IJNMF, 2009, 60:1259]. Understanding droplet dynamics is of fundamental importance in liquid-liquid extraction, a process used in the nuclear fuel cycle to separate various components. Simulations of a single droplet rising by buoyancy are conducted in two dimensions. Multiple parametric studies are carried out to ensure the problem set-up is optimized. An Interface Smoothing Length (ISL) study and a mesh resolution study are performed to verify convergence of the calculations. ISL is a parameter for the interface curvature calculation. Further, wall effects are investigated and checked against existing correlations. The ISL study found that the optimal ISL value is 2.5Δx, with Δx being the mesh cell spacing. The mesh resolution study found that the optimal mesh resolution is d/h = 40, where d is the drop diameter and h = Δx. In order for wall effects on terminal velocity to be insignificant, a conservative wall width of 9d or a nonconservative wall width of 7d can be used. The percentage differences between Hysing et al. [IJNMF, 2009, 60:1259] and Truchas for the velocity profiles vary from 7.9% to 9.9%. The computed droplet velocity and interface profiles are found to be in agreement with that study. The CFD calculations are performed on multiple cores, using LANL's Institutional High Performance Computing.

  8. Force distribution in a granular medium under dynamic loading

    NASA Astrophysics Data System (ADS)

    Danylenko, Vyacheslav A.; Mykulyak, Sergiy V.; Polyakovskyi, Volodymyr O.; Kulich, Vasyl V.; Oleynik, Ivan I.

    2017-07-01

    Force distribution in a granular medium subjected to an impulse loading is investigated in experiments and computer simulations. An experimental technique is developed to measure the forces acting on individual grains at the bottom of a granular sample consisting of steel balls. A discrete element method simulation is also performed under conditions mimicking those of the experiment. Both simulation and experiment display exponentially decaying maximum-force distributions at the bottom of the sample in the range of large forces. In addition, the simulations reveal an exponential force distribution throughout the sample and uncover correlation properties of the interparticle forces during dynamic loading of the granular samples. The simulated time dependence of the coordination number, orientational order parameter, correlation radius, and force distribution clearly demonstrates the nonequilibrium character of the deformation process in a granular medium under impulse loading.
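
    Testing for an exponentially decaying force tail, as reported above, amounts to checking for a straight line on a semi-log histogram. A sketch with synthetic forces standing in for the measured or simulated ones:

```python
import numpy as np

# An exponentially decaying force tail is a straight line on a semi-log
# histogram; fit its slope over the large-force range.
rng = np.random.default_rng(3)
forces = rng.exponential(scale=1.0, size=5000)     # synthetic stand-in

counts, edges = np.histogram(forces, bins=60, density=True)
mids = 0.5*(edges[:-1] + edges[1:])
tail = (mids > forces.mean()) & (counts > 0)       # large-force range only

slope, _ = np.polyfit(mids[tail], np.log(counts[tail]), 1)
print("fitted decay rate:", -slope)                # ~1/scale for this sample
```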

  9. Time Delay Systems with Distribution Dependent Dynamics

    DTIC Science & Technology

    2006-05-10

    The abstract text is fragmentary OCR. Recoverable content: construction of a sensitivity function for general nonlinear ordinary differential equations (ODEs) in a Banach space; a nonlinear stick-slip formulation for shear (CRSC-TR06-07, February 2006; Differential Equations and Nonlinear Mechanics; Banks, H.T. and H.K. Nguyen); and a distribution-dependent dynamical system (a complicated system of partial differential equations) for which the distribution P_L must be estimated.

  10. Determination of eigenvalues of dynamical systems by symbolic computation

    NASA Technical Reports Server (NTRS)

    Howard, J. C.

    1982-01-01

    A symbolic computation technique for determining the eigenvalues of dynamical systems is described wherein algebraic operations, symbolic differentiation, matrix formulation and inversion, etc., can be performed on a digital computer equipped with a formula-manipulation compiler. An example is included that demonstrates the facility with which the system dynamics matrix and the control distribution matrix from the state space formulation of the equations of motion can be processed to obtain eigenvalue loci as a function of a system parameter. The example chosen to demonstrate the technique is a fourth-order system representing the longitudinal response of a DC 8 aircraft to elevator inputs. This simplified system has two dominant modes, one of which is lightly damped and the other well damped. The loci may be used to determine the value of the controlling parameter that satisfies design requirements. The results were obtained using the MACSYMA symbolic manipulation system.
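
    The workflow described — form the system dynamics matrix symbolically, then trace eigenvalue loci as a parameter varies — maps naturally onto SymPy in place of MACSYMA. The sketch below uses an invented fourth-order matrix, not the DC-8 model from the paper:

```python
import numpy as np
import sympy as sp

# Symbolic state matrix with a free parameter k, then numeric eigenvalue
# loci as k sweeps a range (matrix entries invented for illustration).
k, s = sp.symbols('k s', real=True)
A = sp.Matrix([[-0.02,  0.10,    0.0,     -9.8],
               [-0.10, -1.00,  250.0,      0.0],
               [ 0.00, -0.005, -1.5 - k,   0.0],
               [ 0.00,  0.00,    1.0,      0.0]])

char_poly = A.charpoly(s)        # symbolic characteristic polynomial in s, k

loci = [np.linalg.eigvals(np.array(A.subs(k, kv), dtype=float))
        for kv in np.linspace(0.0, 2.0, 21)]
```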

  11. Distributing an executable job load file to compute nodes in a parallel computer

    DOEpatents

    Gooding, Thomas M.

    2016-09-13

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.
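
    The participation logic in the claim can be sketched as a bottom-up tree walk: a node joins the class route if it participates in the job or if any descendant does, and the load file is then broadcast along that route. A toy Python version (node layout and job membership invented for illustration):

```python
from dataclasses import dataclass, field

# Bottom-up tree walk: a node joins the class route if it participates in
# the job or any descendant does; the broadcast then follows that route.
@dataclass
class Node:
    rank: int
    participating: bool = False
    children: list = field(default_factory=list)

def build_class_route(node, route):
    """Return True if this node or any descendant participates."""
    in_job = node.participating
    for child in node.children:
        in_job |= build_class_route(child, route)  # child reports upward
    if in_job:
        route.add(node.rank)
    return in_job

# toy tree: root 0, two interior nodes, ranks 3 and 4 run the job
tree = Node(0, children=[Node(1, children=[Node(3, participating=True)]),
                         Node(2, children=[Node(4, participating=True)])])
route = set()
build_class_route(tree, route)
print(sorted(route))        # all nodes the broadcast must traverse
```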

  12. Distributing an executable job load file to compute nodes in a parallel computer

    DOEpatents

    Gooding, Thomas M.

    2016-08-09

    Distributing an executable job load file to compute nodes in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: determining, by a compute node in the parallel computer, whether the compute node is participating in a job; determining, by the compute node in the parallel computer, whether a descendant compute node is participating in the job; responsive to determining that the compute node is participating in the job or that the descendant compute node is participating in the job, communicating, by the compute node to a parent compute node, an identification of a data communications link over which the compute node receives data from the parent compute node; constructing a class route for the job, wherein the class route identifies all compute nodes participating in the job; and broadcasting the executable load file for the job along the class route for the job.

  13. Accommodating Heterogeneity in a Debugger for Distributed Computations

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Cheng, Doreen; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In an ongoing project at NASA Ames Research Center, we are building a debugger for distributed computations running on a heterogeneous set of machines. Historically, such debuggers have been built as front-ends to existing source-level debuggers on the target platforms. In effect, these back-end debuggers are providing a collection of debugger services to a client. The major drawback is that because of inconsistencies among the back-end debuggers, the front-end must use a different protocol when talking to each back-end debugger. This can make the front-end quite complex. We have avoided this complexity problem by defining the client-server debugger protocol. While it does require vendors to adapt their existing debugger code to meet the protocol, vendors are generally interested in doing so because the approach has several advantages. In addition to solving the heterogeneous platform debugging problem, it will be possible to write interesting debugger user interfaces that can be easily ported across a variety of machines. This will likely encourage investment in application-domain specific debuggers. In fact, the user interface of our debugger will be geared to scientists developing computational fluid dynamics codes. This paper describes some of the problems encountered in developing a portable debugger for heterogeneous, distributed computing and how the architecture of our debugger avoids them. It then provides a detailed description of the debugger client-server protocol. Some of the more interesting attributes of the protocol are: (1) It is object-oriented; (2) It uses callback functions to capture the asynchronous nature of debugging in a procedural fashion; (3) It contains abstractions, such as in-line instrumentation, for the debugging of computationally intensive programs; (4) For remote debugging, it has operations that enable the implementor to optimize message passing traffic between client and server. The soundness of the protocol is being tested through

  14. Accommodating Heterogeneity in a Debugger for Distributed Computations

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Cheng, Doreen; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In an ongoing project at NASA Ames Research Center, we are building a debugger for distributed computations running on a heterogeneous set of machines. Historically, such debuggers have been built as front-ends to existing source-level debuggers on the target platforms. In effect, these back-end debuggers are providing a collection of debugger services to a client. The major drawback is that because of inconsistencies among the back-end debuggers, the front-end must use a different protocol when talking to each back-end debugger. This can make the front-end quite complex. We have avoided this complexity problem by defining the client-server debugger protocol. While it does require vendors to adapt their existing debugger code to meet the protocol, vendors are generally interested in doing so because the approach has several advantages. In addition to solving the heterogeneous platform debugging problem, it will be possible to write interesting debugger user interfaces that can be easily ported across a variety of machines. This will likely encourage investment in application-domain specific debuggers. In fact, the user interface of our debugger will be geared to scientists developing computational fluid dynamics codes. This paper describes some of the problems encountered in developing a portable debugger for heterogeneous, distributed computing and how the architecture of our debugger avoids them. It then provides a detailed description of the debugger client-server protocol. Some of the more interesting attributes of the protocol are: (1) It is object-oriented; (2) It uses callback functions to capture the asynchronous nature of debugging in a procedural fashion; (3) It contains abstractions, such as in-line instrumentation, for the debugging of computationally intensive programs; (4) For remote debugging, it has operations that enable the implementor to optimize message passing traffic between client and server. The soundness of the protocol is being tested through

  15. Computational stability analysis of dynamical systems

    NASA Astrophysics Data System (ADS)

    Nikishkov, Yuri Gennadievich

    2000-10-01

    Due to increased available computer power, the analysis of nonlinear flexible multi-body systems, fixed-wing aircraft and rotary-wing vehicles is relying on increasingly complex, large scale models. An important aspect of the dynamic response of flexible multi-body systems is the potential presence of instabilities. Stability analysis is typically performed on simplified models with the smallest number of degrees of freedom required to capture the physical phenomena that cause the instability. The system stability boundaries are then evaluated using the characteristic exponent method or Floquet theory for systems with constant or periodic coefficients, respectively. As the number of degrees of freedom used to represent the system increases, these methods become increasingly cumbersome, and quickly unmanageable. In this work, a novel approach is proposed, the Implicit Floquet Analysis, which evaluates the largest eigenvalues of the transition matrix using the Arnoldi algorithm, without the explicit computation of this matrix. This method is far more computationally efficient than the classical approach and is ideally suited for systems involving a large number of degrees of freedom. The proposed approach is conveniently implemented as a postprocessing step to any existing simulation tool. The application of the method to a geometrically nonlinear multi-body dynamics code is presented. This work also focuses on the implementation of trimming algorithms and the development of tools for the graphical representation of numerical simulations and stability information for multi-body systems.
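
    The key trick above — extracting the dominant Floquet multipliers with Arnoldi iteration without ever forming the transition matrix — can be sketched with SciPy: the matrix-vector product is one integration of the periodic system over a period. The toy 4-state periodic system below is invented; the method's payoff is of course for much larger systems:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.sparse.linalg import LinearOperator, eigs

# Implicit Floquet idea: the transition (monodromy) matrix is never formed.
# Its action on a vector is one integration over a period, so Arnoldi (eigs)
# extracts the dominant Floquet multipliers from matrix-vector products.
T = 2*np.pi

def rhs(t, x):
    # toy 4-state system with periodic coefficients (coupled Mathieu-like)
    q1, p1, q2, p2 = x
    return [p1, -(1.0 + 0.3*np.cos(t))*q1 - 0.05*p1 + 0.1*q2,
            p2, -(1.2 + 0.2*np.cos(t))*q2 - 0.05*p2 + 0.1*q1]

def monodromy_matvec(v):
    return solve_ivp(rhs, (0.0, T), v, rtol=1e-10, atol=1e-12).y[:, -1]

M = LinearOperator((4, 4), matvec=monodromy_matvec)
mults = eigs(M, k=2, which='LM', return_eigenvectors=False)
print("dominant Floquet multipliers:", mults)   # |mult| < 1 means stable
```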

  16. Dynamic causal modelling of distributed electromagnetic responses

    PubMed Central

    Daunizeau, Jean; Kiebel, Stefan J.; Friston, Karl J.

    2009-01-01

    In this note, we describe a variant of dynamic causal modelling for evoked responses as measured with electroencephalography or magnetoencephalography (EEG and MEG). We depart from equivalent current dipole formulations of DCM, and extend it to provide spatiotemporal source estimates that are spatially distributed. The spatial model is based upon neural-field equations that model neuronal activity on the cortical manifold. We approximate this description of electrocortical activity with a set of local standing-waves that are coupled through their temporal dynamics. The ensuing distributed DCM models sources as a mixture of overlapping patches on the cortical mesh. Time-varying activity in this mixture, caused by activity in other sources and exogenous inputs, is propagated through appropriate lead-field or gain-matrices to generate observed sensor data. This spatial model has three key advantages. First, it is more appropriate than equivalent current dipole models, when real source activity is distributed locally within a cortical area. Second, the spatial degrees of freedom of the model can be specified and therefore optimised using model selection. Finally, the model is linear in the spatial parameters, which finesses model inversion. Here, we describe the distributed spatial model and present a comparative evaluation with conventional equivalent current dipole (ECD) models of auditory processing, as measured with EEG. PMID:19398015

  17. Distributed computing environments for future space control systems

    NASA Technical Reports Server (NTRS)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  18. A uniform approach for programming distributed heterogeneous computing systems

    PubMed Central

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-01-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015
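
    The runtime behaviour described — building a DAG of commands from event dependencies and executing it — reduces to topological scheduling. A hedged sketch of that idea (command names and dependencies are invented; libWater does this dynamically over OpenCL command queues):

```python
from collections import defaultdict, deque

# Dependency-driven command scheduling: each command lists the events it
# waits on; the implied DAG is executed in topological order.
commands = {
    "write_A":  [],
    "write_B":  [],
    "kernel_1": ["write_A", "write_B"],
    "read_C":   ["kernel_1"],
}

indeg = {c: len(deps) for c, deps in commands.items()}
users = defaultdict(list)
for c, deps in commands.items():
    for d in deps:
        users[d].append(c)

ready = deque(c for c, n in indeg.items() if n == 0)
order = []
while ready:
    c = ready.popleft()
    order.append(c)                     # all dependencies satisfied
    for u in users[c]:
        indeg[u] -= 1
        if indeg[u] == 0:
            ready.append(u)
print(order)
```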

  19. A uniform approach for programming distributed heterogeneous computing systems.

    PubMed

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-12-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.

  20. Computational fluid dynamics: Transition to design applications

    NASA Technical Reports Server (NTRS)

    Bradley, R. G.; Bhateley, I. C.; Howell, G. A.

    1987-01-01

    The development of aerospace vehicles, over the years, was an evolutionary process in which engineering progress in the aerospace community was based, generally, on prior experience and data bases obtained through wind tunnel and flight testing. Advances in the fundamental understanding of flow physics, wind tunnel and flight test capability, and mathematical insights into the governing flow equations were translated into improved air vehicle design. The modern day field of Computational Fluid Dynamics (CFD) is a continuation of the growth in analytical capability and the digital mathematics needed to solve the more rigorous form of the flow equations. Some of the technical and managerial challenges that result from rapidly developing CFD capabilities, some of the steps being taken by the Fort Worth Division of General Dynamics to meet these challenges, and some of the specific areas of application for high performance air vehicles are presented.

  1. Computational fluid dynamics in cardiovascular disease.

    PubMed

    Lee, Byoung-Kwon

    2011-08-01

    Computational fluid dynamics (CFD) is a mechanical engineering field for analyzing fluid flow, heat transfer, and associated phenomena, using computer-based simulation. CFD is a widely adopted methodology for solving complex problems in many modern engineering fields. The merit of CFD is developing new and improved devices and system designs, and optimization is conducted on existing equipment through computational simulations, resulting in enhanced efficiency and lower operating costs. However, in the biomedical field, CFD is still emerging. The main reason why CFD in the biomedical field has lagged behind is the tremendous complexity of human body fluid behavior. Recently, CFD biomedical research is more accessible, because high performance hardware and software are easily available with advances in computer science. All CFD processes contain three main components to provide useful information, such as pre-processing, solving mathematical equations, and post-processing. Initial accurate geometric modeling and boundary conditions are essential to achieve adequate results. Medical imaging, such as ultrasound imaging, computed tomography, and magnetic resonance imaging can be used for modeling, and Doppler ultrasound, pressure wire, and non-invasive pressure measurements are used for flow velocity and pressure as a boundary condition. Many simulations and clinical results have been used to study congenital heart disease, heart failure, ventricle function, aortic disease, and carotid and intra-cranial cerebrovascular diseases. With decreasing hardware costs and rapid computing times, researchers and medical scientists may increasingly use this reliable CFD tool to deliver accurate results. A realistic, multidisciplinary approach is essential to accomplish these tasks. Indefinite collaborations between mechanical engineers and clinical and medical scientists are essential. CFD may be an important methodology to understand the pathophysiology of the development and

  2. Computational Fluid Dynamics in Cardiovascular Disease

    PubMed Central

    2011-01-01

    Computational fluid dynamics (CFD) is a mechanical engineering field for analyzing fluid flow, heat transfer, and associated phenomena, using computer-based simulation. CFD is a widely adopted methodology for solving complex problems in many modern engineering fields. The merit of CFD is developing new and improved devices and system designs, and optimization is conducted on existing equipment through computational simulations, resulting in enhanced efficiency and lower operating costs. However, in the biomedical field, CFD is still emerging. The main reason why CFD in the biomedical field has lagged behind is the tremendous complexity of human body fluid behavior. Recently, CFD biomedical research is more accessible, because high performance hardware and software are easily available with advances in computer science. All CFD processes contain three main components to provide useful information, such as pre-processing, solving mathematical equations, and post-processing. Initial accurate geometric modeling and boundary conditions are essential to achieve adequate results. Medical imaging, such as ultrasound imaging, computed tomography, and magnetic resonance imaging can be used for modeling, and Doppler ultrasound, pressure wire, and non-invasive pressure measurements are used for flow velocity and pressure as a boundary condition. Many simulations and clinical results have been used to study congenital heart disease, heart failure, ventricle function, aortic disease, and carotid and intra-cranial cerebrovascular diseases. With decreasing hardware costs and rapid computing times, researchers and medical scientists may increasingly use this reliable CFD tool to deliver accurate results. A realistic, multidisciplinary approach is essential to accomplish these tasks. Indefinite collaborations between mechanical engineers and clinical and medical scientists are essential. CFD may be an important methodology to understand the pathophysiology of the development and

  3. Shuttle rocket booster computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chung, T. J.; Park, O. Y.

    1988-01-01

    Additional results and a revised and improved computer program listing from the shuttle rocket booster computational fluid dynamics formulations are presented. Numerical calculations for the flame zone of solid propellants are carried out using the Galerkin finite elements, with perturbations expanded to the zeroth, first, and second orders. The results indicate that amplification of oscillatory motions does indeed prevail in high frequency regions. For the second order system, the trend is similar to the first order system for low frequencies, but instabilities may appear at frequencies lower than those of the first order system. The most significant effect of the second order system is that the admittance is extremely oscillatory between moderately high frequency ranges.

  4. Computational fluid dynamics in brain aneurysms

    PubMed Central

    Sforza, Daniel M.; Putman, Christopher M.; Cebral, Juan R.

    2013-01-01

    SUMMARY Because of its ability to deal with any geometry, image-based computational fluid dynamics (CFD) has been progressively used to investigate the role of hemodynamics in the underlying mechanisms governing the natural history of cerebral aneurysms. Despite great progress in methodological developments and many studies using patient-specific data, there are still significant controversies about the precise governing processes and divergent conclusions from apparently contradictory results. Sorting out these issues requires a global vision of the state of the art and a unified approach to solving this important scientific problem. Towards this end, this paper reviews the contributions made using patient-specific CFD models to further the understanding of these mechanisms, and highlights the great potential of patient-specific computational models for clinical use in the assessment of aneurysm rupture risk and patient management. PMID:25364852

  5. Optimization of an interactive distributive computer network

    NASA Technical Reports Server (NTRS)

    Frederick, V.

    1985-01-01

    The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.

  6. Equation solvers for distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    A large number of scientific and engineering problems require the rapid solution of large systems of simultaneous equations. The performance of parallel computers in this area now dwarfs traditional vector computers by nearly an order of magnitude. This talk describes the major issues involved in parallel equation solvers with particular emphasis on the Intel Paragon, IBM SP-1 and SP-2 processors.

  7. Computational Fluid Dynamics Symposium on Aeropropulsion

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Recognizing the considerable advances that have been made in computational fluid dynamics, the Internal Fluid Mechanics Division of NASA Lewis Research Center sponsored this symposium with the objective of providing a forum for exchanging information regarding recent developments in numerical methods, physical and chemical modeling, and applications. This conference publication is a compilation of 4 invited and 34 contributed papers presented in six sessions: algorithms one and two, turbomachinery, turbulence, components application, and combustors. Topics include numerical methods, grid generation, chemically reacting flows, turbulence modeling, inlets, nozzles, and unsteady flows.

  8. Computational Fluid Dynamics Technology for Hypersonic Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2003-01-01

    Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state of the art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.

  9. Parallel Processing for Computational Continuum Dynamics.

    DTIC Science & Technology

    1985-05-10

    Scanned DTIC report documentation page; the OCR text is largely illegible. Recoverable fields: Final Report, "Parallel Processing for Computational Continuum Dynamics," Joseph F. McGrath, KMS Fusion, Inc., P.O. Box 1567, Ann Arbor, MI 48106, 10 May 1985; contract F49620-84-C-0111.

  10. Domain decomposition algorithms and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    Some of the new domain decomposition algorithms are applied to two model problems in computational fluid dynamics: the two-dimensional convection-diffusion problem and the incompressible driven cavity flow problem. First, a brief introduction to the various approaches of domain decomposition is given, and a survey of domain decomposition preconditioners for the operator on the interface separating the subdomains is then presented. For the convection-diffusion problem, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is examined.
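
    For a concrete feel for the subdomain/interface interplay discussed above, here is a sketch of the classical alternating Schwarz iteration on a 1D Poisson model problem (not one of the paper's two test problems); each sweep re-solves one overlapping subdomain using the current iterate as Dirichlet data on the interior interface:

```python
import numpy as np

# Alternating Schwarz for -u'' = 1 on (0,1), u(0) = u(1) = 0, with two
# overlapping subdomains solved in turn by a direct finite-difference solve.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)

def solve_sub(i0, i1, left, right):
    """Direct FD solve of -u'' = f on x[i0..i1] with Dirichlet data."""
    m = i1 - i0 - 1
    A = (2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1))/h**2
    b = f[i0 + 1:i1].copy()
    b[0] += left/h**2
    b[-1] += right/h**2
    return np.linalg.solve(A, b)

u = np.zeros(n)
a2, b1 = 40, 60                  # overlap region is x[40..60]
for _ in range(20):
    u[1:b1] = solve_sub(0, b1, 0.0, u[b1])              # subdomain [0, x[60]]
    u[a2 + 1:n - 1] = solve_sub(a2, n - 1, u[a2], 0.0)  # subdomain [x[40], 1]

print("max error vs exact x(1-x)/2:", np.abs(u - x*(1 - x)/2).max())
```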

  11. Colour in visualisation for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Kinnear, David; Atherton, Mark; Collins, Michael; Dokhan, Jason; Karayiannis, Tassos

    2006-06-01

    Colour is used in computational fluid dynamic (CFD) simulations in two key ways. First it is used to visualise the geometry and allow the engineer to be confident that the model constructed is a good representation of the engineering situation. Once an analysis has been completed, colour is used in post-processing the data from the simulations to illustrate the complex fluid mechanic phenomena under investigation. This paper describes these two uses of colour and provides some examples to illustrate the key visualisation approaches used in CFD.

  13. Efficient quantum computing of complex dynamics.

    PubMed

    Benenti, G; Casati, G; Montangero, S; Shepelyansky, D L

    2001-11-26

    We propose a quantum algorithm which uses the number of qubits in an optimal way and efficiently simulates a physical model with rich and complex dynamics described by the quantum sawtooth map. The numerical study of the effect of static imperfections in the quantum computer hardware shows that the main elements of the phase space structures are accurately reproduced up to a time scale which is polynomial in the number of qubits. The errors generated by these imperfections are more significant than the errors of random noise in gate operations.
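
    The ideal sawtooth map itself (without gate imperfections) is straightforward to simulate classically with the split-operator/FFT method, which is how phase-space structures like those above can be checked; a minimal Python sketch with assumed kick strength k and rotation constant T:

        import numpy as np

        n_qubits = 8
        N = 2 ** n_qubits                          # Hilbert-space dimension
        k, T = 1.5, 0.5                            # assumed map parameters

        theta = 2 * np.pi * np.arange(N) / N       # position grid on [0, 2*pi)
        m = np.fft.fftfreq(N, d=1.0 / N)           # integer momentum grid

        kick = np.exp(1j * k * (theta - np.pi) ** 2 / 2)   # diagonal in theta
        rot = np.exp(-1j * T * m ** 2 / 2)                 # diagonal in momentum

        psi = np.ones(N, complex) / np.sqrt(N)     # momentum eigenstate m = 0
        for _ in range(100):                       # 100 iterations of the map
            psi = np.fft.ifft(rot * np.fft.fft(kick * psi))

        prob = np.abs(np.fft.fft(psi)) ** 2 / N    # final momentum distribution
        print("norm:", round(prob.sum(), 6), " <m^2>:", (m ** 2 * prob).sum())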

  14. Verification of computer users using keystroke dynamics.

    PubMed

    Obaidat, M S; Sadoun, B

    1997-01-01

    This paper presents techniques to verify the identity of computer users from the keystroke dynamics of the user's login string, using pattern recognition and neural network techniques. This work is a continuation of our previous work, where only interkey times were used as features for identifying computer users. In this work we used the key hold times for classification and then compared the performance with the former interkey time-based technique. We then used the combined interkey and hold times for the identification process. We applied several neural network and pattern recognition algorithms to verify computer users as they type their password phrases. It was found that hold times are more effective than interkey times, and the best identification performance was achieved by using both time measurements. An identification accuracy of 100% was achieved when the combined hold- and interkey-time-based features were used with the fuzzy ARTMAP, radial basis function network (RBFN), and learning vector quantization (LVQ) neural network paradigms. Other neural network and classical pattern recognition algorithms, such as backpropagation with a sigmoid transfer function (BP, Sigm), hybrid sum-of-products (HSOP), sum-of-products (SOP), potential function, and Bayes' rule algorithms, gave moderate performance.
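
    A minimal sketch of the feature side of such a scheme, with hold and interkey times fed to a simple z-score template check instead of the paper's neural classifiers (all timings, names, and the threshold are invented for illustration):

        import numpy as np

        def features(events):
            """events: list of (key, press_time, release_time) in typing order."""
            hold = [r - p for _, p, r in events]               # key hold times
            inter = [events[i + 1][1] - events[i][1]           # press-to-press gaps
                     for i in range(len(events) - 1)]
            return np.array(hold + inter)

        def enroll(samples):
            X = np.stack([features(s) for s in samples])       # template from logins
            return X.mean(axis=0), X.std(axis=0) + 1e-6

        def verify(template, attempt, threshold=3.0):
            mean, std = template
            z = np.abs((features(attempt) - mean) / std)       # per-feature z-scores
            return bool(z.mean() < threshold)

        s1 = [("a", 0.00, 0.09), ("b", 0.20, 0.28), ("c", 0.41, 0.50)]
        s2 = [("a", 0.00, 0.10), ("b", 0.22, 0.30), ("c", 0.44, 0.52)]
        tpl = enroll([s1, s2])
        print(verify(tpl, [("a", 0.00, 0.09), ("b", 0.21, 0.29), ("c", 0.42, 0.51)]))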

  15. Comparison of TCP automatic tuning techniques for distributed computing

    SciTech Connect

    Weigle, E. H.; Feng, W. C.

    2002-01-01

    Rather than painful, manual, static, per-connection optimization of TCP buffer sizes simply to achieve acceptable performance for distributed applications, many researchers have proposed techniques to perform this tuning automatically. This paper first discusses the relative merits of the various approaches in theory, and then provides substantial experimental data concerning two competing implementations - the buffer autotuning already present in Linux 2.4.x and 'Dynamic Right-Sizing.' This paper reveals heretofore unknown aspects of the problem and current solutions, provides insight into the proper approach for various circumstances, and points toward ways to further improve performance. TCP, for good or ill, is the only protocol widely available for reliable end-to-end congestion-controlled network communication, and thus it is the one used for almost all distributed computing. Unfortunately, TCP was not designed with high-performance computing in mind - its original design decisions focused on long-term fairness first, with performance a distant second. Thus users must often perform tortuous manual optimizations simply to achieve acceptable behavior. The most important and often most difficult task is determining and setting appropriate buffer sizes. Because of this, at least six ways of automatically setting these sizes have been proposed. In this paper, we compare and contrast these tuning methods. First we explain each method, followed by an in-depth discussion of their features. Next we discuss the experiments to fully characterize two particularly interesting methods (Linux 2.4 autotuning and Dynamic Right-Sizing). We conclude with results and possible improvements.
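
    For contrast with autotuning, the manual static tuning criticized here amounts to setting each connection's buffers to the bandwidth-delay product by hand; a minimal Python sketch with assumed bandwidth and round-trip time (the kernel may clamp the requested size):

        import socket

        bandwidth_bps = 100e6        # assumed path bandwidth: 100 Mb/s
        rtt_s = 0.080                # assumed round-trip time: 80 ms
        bdp = int(bandwidth_bps * rtt_s / 8)       # bandwidth-delay product, bytes

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp)
        print("requested", bdp, "bytes; kernel granted",
              sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))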

  16. Distributed Computing: Considerations for Its Use within Educational Environments.

    ERIC Educational Resources Information Center

    Pratt, S. J.

    1985-01-01

    Emphasizing more effective use of existing equipment, this article highlights distributed computing design considerations applicable to educational environments; identifies potential roles of networking in the provision of adequate teaching aids; presents a networking model; and describes the development of a distributed computing configuration at…

  17. Distributed Computing Environment: An Architecture For Supporting Change?

    DTIC Science & Technology

    1995-11-01

    Distributed Computing Environment (DCE) has been in development for about five years but has only been widely used in the last two years. It consists...these services form an architecture for distributed computing that enables users to carry out the new, cheaper operations they require with the

  18. Direct modeling for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Kun

    2015-06-01

    All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier-Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. Here, the CFD is more or less a direct

  19. Spatiotemporal dynamics of distributed synthetic genetic circuits

    NASA Astrophysics Data System (ADS)

    Kanakov, Oleg; Laptyeva, Tetyana; Tsimring, Lev; Ivanchenko, Mikhail

    2016-04-01

    We propose and study models of two distributed synthetic gene circuits, a toggle switch and an oscillator, each split between two cell strains and coupled via quorum-sensing signals. The distributed toggle switch relies on mutual repression of the two strains, while the oscillator comprises two strains, one of which acts as an activator for the other, which in turn acts as a repressor. The distributed toggle switch can exhibit mobile fronts, switching the system from the weaker to the stronger spatially homogeneous state. The circuit can also act as a biosensor, with the switching-front dynamics determined by the properties of an external signal. The distributed oscillator system displays another biosensor functionality: oscillations emerge once a small amount of one cell strain appears amid the other, present in abundance. Distributing synthetic gene circuits among multiple strains allows one to reduce crosstalk among different parts of the overall system and also decreases the energetic burden of the synthetic circuit per cell, which may allow for enhanced functionality and viability of engineered cells.
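
    A well-mixed caricature of the distributed toggle switch reduces to four ODEs, two repressor levels coupled through two quorum signals; a minimal Python sketch with assumed rate constants (the paper's models are spatially extended, which this omits):

        import numpy as np
        from scipy.integrate import solve_ivp

        alpha, beta, n = 10.0, 1.0, 2.0        # assumed production, signal, Hill constants

        def rhs(t, y):
            x1, x2, s1, s2 = y                 # repressors x_i, quorum signals s_i
            return [alpha / (1 + s2 ** n) - x1,    # strain 1 repressed by strain 2's signal
                    alpha / (1 + s1 ** n) - x2,    # and vice versa
                    beta * x1 - s1,
                    beta * x2 - s2]

        sol = solve_ivp(rhs, (0, 50), [5.0, 0.1, 0.0, 0.0])
        print(f"steady state: x1={sol.y[0, -1]:.2f}, x2={sol.y[1, -1]:.2f}")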

  20. Utilizing parallel optimization in computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Kokkolaras, Michael

    1998-12-01

    General problems of interest in computational fluid dynamics are investigated by means of optimization. Specifically, in the first part of the dissertation, a method of optimal incremental function approximation is developed for the adaptive solution of differential equations. Various concepts and ideas utilized by numerical techniques employed in computational mechanics and artificial neural networks (e.g. function approximation and error minimization, variational principles and weighted residuals, and adaptive grid optimization) are combined to formulate the proposed method. The basis functions and associated coefficients of a series expansion, representing the solution, are optimally selected by a parallel direct search technique at each step of the algorithm according to appropriate criteria; the solution is built sequentially. In this manner, the proposed method is adaptive in nature, although a grid is neither built nor adapted in the traditional sense using a-posteriori error estimates. Variational principles are utilized for the definition of the objective function to be extremized in the associated optimization problems, ensuring that the problem is well-posed. Complicated data structures and expensive remeshing algorithms and systems solvers are avoided. Computational efficiency is increased by using low-order basis functions and concurrent computing. Numerical results and convergence rates are reported for a range of steady-state problems, including linear and nonlinear differential equations associated with general boundary conditions, and illustrate the potential of the proposed method. Fluid dynamics applications are emphasized. Conclusions are drawn by discussing the method's limitations, advantages, and possible extensions. The second part of the dissertation is concerned with the optimization of the viscous-inviscid-interaction (VII) mechanism in an airfoil flow analysis code. The VII mechanism is based on the concept of a transpiration velocity
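
    A crude stand-in for the flavor of the first method: Gaussian basis functions are added to a series expansion one term at a time, with each term's parameters chosen by search to reduce a penalized residual for a model boundary-value problem (random search and a least-squares residual replace the dissertation's parallel direct search and variational objective):

        import numpy as np

        rng = np.random.default_rng(2)
        x = np.linspace(0, 1, 101)
        dx = x[1] - x[0]
        rhs = -np.pi ** 2 * np.sin(np.pi * x)      # u'' = rhs, u(0) = u(1) = 0

        def residual(u):
            r = np.zeros_like(u)
            r[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2 - rhs[1:-1]
            return np.sum(r ** 2) + 100 * (u[0] ** 2 + u[-1] ** 2)   # BC penalty

        u = np.zeros_like(x)
        for _ in range(6):                         # add one basis function per pass
            best, best_r = u, residual(u)
            for _ in range(4000):                  # random search over (c, m, w)
                c, m, w = rng.uniform(-2, 2), rng.uniform(0, 1), rng.uniform(0.05, 0.5)
                cand = u + c * np.exp(-((x - m) / w) ** 2)
                r = residual(cand)
                if r < best_r:
                    best, best_r = cand, r
            u = best

        print("max error vs exact sin(pi*x):", np.abs(u - np.sin(np.pi * x)).max())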

  1. Distributed-Computer System Optimizes SRB Joints

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1991-01-01

    Initial calculations of redesign of joint on solid rocket booster (SRB) that failed during Space Shuttle tragedy showed redesign increased weight. Optimization techniques applied to determine whether weight could be reduced while keeping joint closed and limiting stresses. Analysis system developed by use of existing software coupling structural analysis with optimization computations. Software designed executable on network of computer workstations. Took advantage of parallelism offered by finite-difference technique of computing gradients to enable several workstations to contribute simultaneously to solution of problem. Key features were effective use of redundancies in hardware and flexible software, enabling optimization to proceed with minimal delay and decreasing overall time to completion.
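
    The parallelism exploited here rests on finite-difference gradient components being independent, so each workstation can evaluate one perturbed analysis concurrently; a minimal Python sketch with a toy stand-in for the structural analysis:

        from multiprocessing import Pool
        import math

        def analysis(x):
            """Toy stand-in for the structural analysis code (assumed)."""
            return (x[0] - 1.0) ** 2 + math.sin(x[1]) + x[0] * x[1]

        def fd_component(args):
            x, i, h, f0 = args                     # perturb one design variable
            xp = list(x)
            xp[i] += h
            return (analysis(xp) - f0) / h

        if __name__ == "__main__":
            x0, h = [2.0, 0.5], 1e-6
            f0 = analysis(x0)
            with Pool() as pool:                   # one worker per gradient component
                grad = pool.map(fd_component, [(x0, i, h, f0) for i in range(len(x0))])
            print("finite-difference gradient:", grad)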

  2. Dynamic shared state maintenance in distributed virtual environments

    NASA Astrophysics Data System (ADS)

    Hamza-Lup, Felix George

    Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model. An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene. Mixed Reality (MR) and VR environments must bring the human participant interaction into the loop through a wide range of electronic motion sensors, and haptic devices. Part of the work presented here defines a novel criterion for categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory including 3D visualization applications using custom built head-mounted displays (HMDs) with optical motion tracking and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, results show that the dynamic shared state can be maintained consistent at multiple remotely located sites. In further consideration of the latency problems and in the light of the current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for
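
    One plausible form of such a quaternion-based assessment (an assumed metric, not necessarily the thesis's exact one) is the angle of the relative rotation between the orientations of a shared entity at two sites:

        import numpy as np

        def quat_angle(q1, q2):
            """Angle (radians) of the relative rotation between unit quaternions (w,x,y,z)."""
            q1 = np.asarray(q1, float); q1 /= np.linalg.norm(q1)
            q2 = np.asarray(q2, float); q2 /= np.linalg.norm(q2)
            dot = abs(np.dot(q1, q2))              # abs() treats q and -q as equal
            return 2 * np.arccos(np.clip(dot, -1.0, 1.0))

        site_a = [0.9239, 0.0, 0.3827, 0.0]        # 45 deg about y
        site_b = [0.9063, 0.0, 0.4226, 0.0]        # 50 deg about y
        print("drift:", np.degrees(quat_angle(site_a, site_b)), "deg")   # ~5 deg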

  3. Cardea: Dynamic Access Control in Distributed Systems

    NASA Technical Reports Server (NTRS)

    Lepro, Rebekah

    2004-01-01

    Modern authorization systems span domains of administration, rely on many different authentication sources, and manage complex attributes as part of the authorization process. This paper presents Cardea, a distributed system that facilitates dynamic access control, as a valuable piece of an inter-operable authorization framework. First, the authorization model employed in Cardea and its functionality goals are examined. Next, critical features of the system architecture and its handling of the authorization process are examined. The SAML and XACML standards, as incorporated into the system, are then analyzed. Finally, the future directions of this project are outlined and connection points with general components of an authorization system are highlighted.

  4. Dynamics of cerium distribution during oscillatory extraction

    SciTech Connect

    Smirnov, A.V.; Afonin, M.A.; Sedov, V.M.

    1995-01-01

    The dynamics of the Ce distribution in a nonequilibrium extraction process are investigated. Results are presented for the extraction of Ce by tributylphosphate in organic diluents from an aqueous solution in which a Belousov-Zhabotinskii oscillating reaction is occurring. The time dependences of [Ce] in the organic phase of the oscillatory extraction system that are synchronous with the redox-potential dependences are obtained for the first time. The effect of the initial concentrations of Ce, oxidant, and organic substrate on the principal parameters of the oscillatory extraction, such as the frequency, amplitude, etc., is found. A conclusion is made about the ability to control an oscillatory extraction process.

  5. Configuring computation tree topologies on a distributed computing system

    SciTech Connect

    Woei Lin; Chuan-lin Wu

    1983-01-01

    The authors describe an approach to connecting hardware resources for high-performance computation. Two basic algorithms are designed for configuring binary tree topologies. The configuring command can be issued from any processing node. The algorithms can select proper nodes for connection while maintaining good utilization of processing nodes. 7 references.

  6. Computational Fluid Dynamics Study for Optimization of a Fin Design

    DTIC Science & Technology

    2005-09-28

    Computational Fluid Dynamics Study for Optimization of a Fin Design. Ravi Ramamurti and William C. Sandberg; Grant Number 64-6093-A-5.

  7. Visualization of Unsteady Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1997-01-01

    The current compute environment that most researchers are using for the calculation of 3D unsteady Computational Fluid Dynamics (CFD) results is a supercomputer-class machine. Massively Parallel Processors (MPPs) such as the 160-node IBM SP2 at NAS, and clusters of workstations acting as a single MPP (like NAS's SGI Power-Challenge array and the J90 cluster), provide the required computational bandwidth for CFD calculations of transient problems. If we follow the traditional computational analysis steps for CFD (and we wish to construct an interactive visualizer) we need to be aware of the following: (1) Disk space requirements. A single snapshot must contain at least the values (primitive variables) stored at the appropriate locations within the mesh. For most simple 3D Euler solvers that means 5 floating-point words. Navier-Stokes solutions with turbulence models may contain 7 state variables. (2) Disk speed vs. computational speed. The time required to read the complete solution of a saved time frame from disk is now longer than the compute time for a set number of iterations from an explicit solver. Depending on the hardware and solver, an iteration of an implicit code may also take less time than reading the solution from disk. If one examines the performance improvements of the last decade or two, it is easy to see that relying on disk performance (versus CPU improvement) may not be the best route to enhancing interactivity. (3) Cluster and parallel machine I/O problems. Disk access time is much worse within current parallel machines and clusters of workstations that are acting in concert to solve a single problem. In this case we are not trying to read the volume of data; rather, we are running the solver, and the solver outputs the solution. These traditional network interfaces must be used for the file system. (4) Numerics of particle traces. Most visualization tools can work upon a single snapshot of the data but some visualization tools for transient

  8. Computing dynamic classification images from correlation maps.

    PubMed

    Lu, Hongjing; Liu, Zili

    2006-05-22

    We used Pearson's correlation to compute dynamic classification images of biological motion in a point-light display. Observers discriminated whether a human figure that was embedded in dynamic white Gaussian noise was walking forward or backward. Their responses were correlated with the Gaussian noise fields frame by frame, across trials. The resultant correlation map gave rise to a sequence of dynamic classification images that were clearer than either the standard method of A. J. Ahumada and J. Lovell (1971) or the optimal weighting method of R. F. Murray, P. J. Bennett, and A. B. Sekuler (2002). Further, the correlation coefficients of all the point lights were similar to each other when overlapping pixels between forward and backward walkers were excluded. This pattern is consistent with the hypothesis that the point-light walker is represented in a global manner, as opposed to a fixed subset of point lights being more important than others. We conjecture that the superior performance of the correlation map may reflect inherent nonlinearities in processing biological motion, which are incompatible with the assumptions underlying the previous methods.
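
    A minimal sketch of the correlation-map computation with a synthetic observer (all data invented): each noise pixel is correlated with the trial-by-trial responses, and the pixels the observer actually relies on stand out:

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, h, w = 2000, 16, 16
        noise = rng.normal(size=(n_trials, h, w))  # noise fields, one frame per trial

        template = np.zeros((h, w))
        template[8, 4:12] = 1.0                    # pixels the synthetic observer uses

        drive = (noise * template).sum(axis=(1, 2))
        resp = (drive + rng.normal(scale=0.5, size=n_trials) > 0).astype(float)

        # Pearson correlation of each pixel's noise with the response, over trials
        nz = (noise - noise.mean(0)) / noise.std(0)
        rz = (resp - resp.mean()) / resp.std()
        corr_map = (nz * rz[:, None, None]).mean(axis=0)
        print("peak pixel:", np.unravel_index(np.abs(corr_map).argmax(), corr_map.shape))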

  9. Computational fluid dynamics in coronary artery disease.

    PubMed

    Sun, Zhonghua; Xu, Lei

    2014-12-01

    Computational fluid dynamics (CFD) is a widely used method in mechanical engineering to solve complex problems by analysing fluid flow, heat transfer, and associated phenomena by using computer simulations. In recent years, CFD has been increasingly used in biomedical research of coronary artery disease because of its high performance hardware and software. CFD techniques have been applied to study cardiovascular haemodynamics through simulation tools to predict the behaviour of circulatory blood flow in the human body. CFD simulation based on 3D luminal reconstructions can be used to analyse the local flow fields and flow profiling due to changes of coronary artery geometry, thus, identifying risk factors for development and progression of coronary artery disease. This review aims to provide an overview of the CFD applications in coronary artery disease, including biomechanics of atherosclerotic plaques, plaque progression and rupture; regional haemodynamics relative to plaque location and composition. A critical appraisal is given to a more recently developed application, fractional flow reserve based on CFD computation with regard to its diagnostic accuracy in the detection of haemodynamically significant coronary artery disease. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Computation of the sampling distribution of coherence estimate.

    PubMed

    Nadarajah, Saralees; Kotz, Samuel

    2006-12-01

    The recent paper published by Miranda de Sa (2004) derived, for the first time, the sampling distribution of the coherence estimated between two signals. The paper also considered computational issues of the sampling distribution, using an approximate method. In this short note, we provide several one-line programs for the exact computation of various measures of the sampling distribution. The advantages of using these programs are discussed.
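
    The zero-coherence case is easy to check numerically: for two independent signals averaged over L segments, the magnitude-squared coherence estimate has mean 1/L. A Monte Carlo sketch (a numerical check, not the note's closed-form programs):

        import numpy as np

        rng = np.random.default_rng(1)
        L, nfft, reps, b = 8, 64, 5000, 5          # segments, FFT size, trials, bin
        ks = np.empty(reps)
        for i in range(reps):
            X = np.fft.rfft(rng.normal(size=(L, nfft)))
            Y = np.fft.rfft(rng.normal(size=(L, nfft)))
            sxy = (X[:, b] * np.conj(Y[:, b])).mean()          # cross-spectrum
            ks[i] = np.abs(sxy) ** 2 / ((np.abs(X[:, b]) ** 2).mean()
                                        * (np.abs(Y[:, b]) ** 2).mean())
        print("mean MSC estimate:", ks.mean(), " theory 1/L =", 1 / L)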

  11. Distributing Computer Resources in Education and Training.

    ERIC Educational Resources Information Center

    Bell, Wynne

    1982-01-01

    The future direction of computers in educational settings is the topic of speculation. It is noted that resources in education are so meagre that only practical ventures can be considered. Suggestions are made for stretching available resources and maximizing the benefits to be gained through the new technology. (MP)

  13. A comparison of queueing, cluster and distributed computing systems

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing System (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  14. Distributed metadata in a high performance computing environment

    DOEpatents

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
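
    A minimal sketch of the routing idea in such a system (hypothetical class and key names, not the patented implementation): metadata keys are hash-partitioned across burst buffers so that any node can recompute which buffer to ask:

        import hashlib

        class BurstBuffer:
            def __init__(self, name):
                self.name, self.kv = name, {}      # local slice of the key-value store
            def put(self, key, value):
                self.kv[key] = value
            def get(self, key):
                return self.kv.get(key)

        def owner(buffers, key):
            """Deterministic hash partition of metadata keys across buffers."""
            h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
            return buffers[h % len(buffers)]

        buffers = [BurstBuffer(f"bb{i}") for i in range(3)]
        key = "block:/scratch/run42/chunk-007"      # hypothetical metadata key
        owner(buffers, key).put(key, {"offset": 4096, "length": 1 << 20})
        bb = owner(buffers, key)                    # any node recomputes the owner
        print(bb.name, "->", bb.get(key))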

  15. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  16. Artificial Intelligence In Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1991-01-01

    Paper compares four first-generation artificial-intelligence (AI) software systems for computational fluid dynamics. Includes: Expert Cooling Fan Design System (EXFAN), PAN AIR Knowledge System (PAKS), grid-adaptation program MITOSIS, and Expert Zonal Grid Generation (EZGrid). Focuses on knowledge-based ("expert") software systems. Analyzes intended tasks, kinds of knowledge possessed, magnitude of effort required to codify knowledge, how quickly constructed, performances, and return on investment. On basis of comparison, concludes AI most successful when applied to well-formulated problems solved by classifying or selecting preenumerated solutions. In contrast, application of AI to poorly understood or poorly formulated problems generally results in long development time and large investment of effort, with no guarantee of success.

  17. High performance computations using dynamical nucleation theory

    SciTech Connect

    Windus, Theresa L.; Kathmann, Shawn M.; Crosby, Lonnie D.

    2008-07-14

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities are described. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A "master-slave" solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are also described. This work was supported by the U.S. Department of Energy's (DOE) Office of Basic Energy Sciences, Chemical Sciences program. The Pacific Northwest National Laboratory is operated by Battelle for DOE.
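
    A toy master-worker layout of the kind proposed (a stand-in, not NWChem's DNTMC): the master farms out independent Monte Carlo walks and aggregates the partial averages returned by the workers:

        from multiprocessing import Pool
        import random

        def worker(args):
            seed, n_steps = args                   # one independent walk per worker
            rng, x, acc = random.Random(seed), 0.0, 0.0
            for _ in range(n_steps):
                x += rng.gauss(0.0, 0.1)           # toy "configuration" move
                acc += x * x                       # toy observable
            return acc / n_steps

        if __name__ == "__main__":
            with Pool() as pool:                   # the master gathers partial averages
                partials = pool.map(worker, [(seed, 10000) for seed in range(8)])
            print("aggregated estimate:", sum(partials) / len(partials))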

  18. Domain decomposition algorithms and computation fluid dynamics

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.

    1988-01-01

    In the past several years, domain decomposition was a very popular topic, partly motivated by the potential of parallelization. While a large body of theory and algorithms were developed for model elliptic problems, they are only recently starting to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics are investigated. Some examples are two dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator to be used in the iterative solution of the interface solution is described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.

  19. Lectures series in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Thompson, Kevin W.

    1987-01-01

    The lecture notes cover the basic principles of computational fluid dynamics (CFD). They are oriented more toward practical applications than theory, and are intended to serve as a unified source for basic material in the CFD field as well as an introduction to more specialized topics in artificial viscosity and boundary conditions. Each chapter in the text is associated with a videotaped lecture. The basic properties of conservation laws, wave equations, and shock waves are described. The duality of the conservation law and wave representations is investigated, and shock waves are examined in some detail. Finite difference techniques are introduced for the solution of wave equations and conservation laws. Stability analysis for finite difference approximations is presented. A consistent description of artificial viscosity methods is provided. Finally, the problem of nonreflecting boundary conditions is treated.
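
    In the spirit of the material on finite differences and stability, a minimal example: the Lax-Friedrichs scheme for the linear advection equation u_t + a u_x = 0, with the CFL number kept below 1 as the stability analysis requires:

        import numpy as np

        a, nx, cfl = 1.0, 200, 0.9                 # wave speed, grid points, CFL <= 1
        dx = 1.0 / nx
        dt = cfl * dx / abs(a)

        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = np.exp(-200 * (x - 0.3) ** 2)          # initial Gaussian pulse

        for _ in range(int(0.4 / dt)):             # advect to t ~ 0.4 (periodic domain)
            up, um = np.roll(u, -1), np.roll(u, 1)
            u = 0.5 * (up + um) - a * dt / (2 * dx) * (up - um)

        print("peak now near x =", x[np.argmax(u)])    # expect ~0.3 + 0.4 = 0.7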

  20. Protein Dynamics from NMR and Computer Simulation

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Kravchenko, Olga; Kemple, Marvin; Likic, Vladimir; Klimtchuk, Elena; Prendergast, Franklyn

    2002-03-01

    Proteins exhibit internal motions from the millisecond to sub-nanosecond time scale. The challenge is to relate these internal motions to biological function. A strategy to address this aim is to apply a combination of several techniques including high-resolution NMR, computer simulation of molecular dynamics (MD), molecular graphics, and finally molecular biology, the latter to generate appropriate samples. Two difficulties that arise are: (1) the time scale which is most directly biologically relevant (ms to μs) is not readily accessible by these techniques and (2) the techniques focus on local and not collective motions. We will outline methods using ^13C-NMR to help alleviate the second problem, as applied to intestinal fatty acid binding protein, a relatively small intracellular protein believed to be involved in fatty acid transport and metabolism. This work is supported in part by PHS Grant GM34847 (FGP) and by a fellowship from the American Heart Association (QW).

  1. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1992-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.

  2. Automated domain decomposition for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1990-01-01

    Automation of flow-field zoning in two-dimensions is an important step towards easing the three-dimensional grid generation bottleneck in computational fluid dynamics. A knowledge-based approach works well, but several aspects of flow-field zoning make the use of such an approach challenging. A proposed model and language to describe the process of zoning a flow field are presented, followed by a discussion of the implementation of EZGrid, a knowledge-based two-dimensional (2-D) flow-field zoner. Results are shown for representative two-dimensional aerodynamic configurations. Finally, an approach to the evaluation of flow-field zonings is described and used to compare the performance of EZGrid with that of a human expert.

  3. A perspective of computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Kutler, P.

    1986-01-01

    Computational fluid dynamics (CFD) is maturing, and is at a stage in its technological life cycle in which it is now routinely applied to some rather complicated problems; it is starting to create an impact on the design cycle of aerospace flight vehicles and their components. CFD is also being used to better understand the fluid physics of flows heretofore not understood, such as three-dimensional separation. CFD both complements and is complemented by experiments. In this paper, the primary and secondary pacing items that governed CFD in the past are reviewed and updated. The future prospects of CFD are explored, offering those working in the discipline challenges that should extend the technological life cycle and further increase the capabilities of a proven, demonstrated technology.

  4. Sawfishes stealth revealed using computational fluid dynamics.

    PubMed

    Bradney, D R; Davidson, A; Evans, S P; Wueringer, B E; Morgan, D L; Clausen, P D

    2017-02-27

    Detailed computational fluid dynamics simulations for the rostrum of three species of sawfish (Pristidae) revealed that negligible turbulent flow is generated from all rostra during lateral swipe prey manipulation and swimming. These results suggest that sawfishes are effective stealth hunters that may not be detected by their teleost prey's lateral line sensory system during pursuits. Moreover, during lateral swipes, the rostra were found to induce little velocity into the surrounding fluid. Consistent with previous data of sawfish feeding behaviour, these data indicate that the rostrum is therefore unlikely to be used to stir up the bottom to uncover benthic prey. Whilst swimming with the rostrum inclined at a small angle to the horizontal, the coefficient of drag of the rostrum is relatively low and the coefficient of lift is zero.

  6. Computational modeling of intraocular gas dynamics

    NASA Astrophysics Data System (ADS)

    Noohi, P.; Abdekhodaie, M. J.; Cheng, Y. L.

    2015-12-01

    The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The presented model predicted intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. SF6, either pure or diluted with air, was considered as the injected gas. The presented results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF6 was used, no significant expansion was observed. Also, head positioning for the treatment of the retinal tear influenced the rate of gas absorption. Moreover, the determined tolerance angle depended on the bubble and tear size: greater bubble expansion and a smaller retinal tear produced a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm the tolerance angle when using pure SF6 is 1.4 times that when using SF6 diluted with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency.

  7. Computational modeling of intraocular gas dynamics.

    PubMed

    Noohi, P; Abdekhodaie, M J; Cheng, Y L

    2015-12-18

    The purpose of this study was to develop a computational model to simulate the dynamics of intraocular gas behavior in the pneumatic retinopexy (PR) procedure. The presented model predicted intraocular gas volume at any time and determined the tolerance angle within which a patient can maneuver while the gas still completely covers the tear(s). Computational fluid dynamics calculations were conducted to describe the PR procedure. The geometrical model was constructed based on rabbit and human eye dimensions. SF6, either pure or diluted with air, was considered as the injected gas. The presented results indicated that the composition of the injected gas affected the gas absorption rate and gas volume. After injection of pure SF6, the bubble expanded to 2.3 times its initial volume during the first 23 h, but when diluted SF6 was used, no significant expansion was observed. Also, head positioning for the treatment of the retinal tear influenced the rate of gas absorption. Moreover, the determined tolerance angle depended on the bubble and tear size: greater bubble expansion and a smaller retinal tear produced a greater tolerance angle. For example, after 23 h, for a tear size of 2 mm the tolerance angle when using pure SF6 is 1.4 times that when using SF6 diluted with 80% air. The composition of the injected gas and the condition of the tear in PR may dramatically affect the gas absorption rate and gas volume. Quantifying these effects helps to predict the tolerance angle and improve treatment efficiency.

  8. SETI@home, BOINC, and Volunteer Distributed Computing

    NASA Astrophysics Data System (ADS)

    Korpela, Eric J.

    2012-05-01

    Volunteer computing, also known as public-resource computing, is a form of distributed computing that relies on members of the public donating the processing power, Internet connection, and storage capabilities of their home computers. Projects that utilize this mode of distributed computation can potentially access millions of Internet-attached central processing units (CPUs) that provide PFLOPS (thousands of trillions of floating-point operations per second) of processing power. In addition, these projects can access the talents of the volunteers themselves. Projects span a wide variety of domains including astronomy, biochemistry, climatology, physics, and mathematics. This review provides an introduction to volunteer computing and some of the difficulties involved in its implementation. I describe the dominant infrastructure for volunteer computing in some depth and provide descriptions of a small number of projects as an illustration of the variety of projects that can be undertaken.

  9. Nonlinear ship waves and computational fluid dynamics.

    PubMed

    Miyata, Hideaki; Orihara, Hideo; Sato, Yohei

    2014-01-01

    Research works undertaken in the first author's laboratory at the University of Tokyo over the past 30 years are highlighted. Finding of the occurrence of nonlinear waves (named Free-Surface Shock Waves) in the vicinity of a ship advancing at constant speed provided the start-line for the progress of innovative technologies in the ship hull-form design. Based on these findings, a multitude of the Computational Fluid Dynamic (CFD) techniques have been developed over this period, and are highlighted in this paper. The TUMMAC code has been developed for wave problems, based on a rectangular grid system, while the WISDAM code treats both wave and viscous flow problems in the framework of a boundary-fitted grid system. These two techniques are able to cope with almost all fluid dynamical problems relating to ships, including the resistance, ship's motion and ride-comfort issues. Consequently, the two codes have contributed significantly to the progress in the technology of ship design, and now form an integral part of the ship-designing process.

  10. Nonlinear ship waves and computational fluid dynamics

    PubMed Central

    MIYATA, Hideaki; ORIHARA, Hideo; SATO, Yohei

    2014-01-01

    Research works undertaken in the first author’s laboratory at the University of Tokyo over the past 30 years are highlighted. Finding of the occurrence of nonlinear waves (named Free-Surface Shock Waves) in the vicinity of a ship advancing at constant speed provided the start-line for the progress of innovative technologies in the ship hull-form design. Based on these findings, a multitude of the Computational Fluid Dynamic (CFD) techniques have been developed over this period, and are highlighted in this paper. The TUMMAC code has been developed for wave problems, based on a rectangular grid system, while the WISDAM code treats both wave and viscous flow problems in the framework of a boundary-fitted grid system. These two techniques are able to cope with almost all fluid dynamical problems relating to ships, including the resistance, ship’s motion and ride-comfort issues. Consequently, the two codes have contributed significantly to the progress in the technology of ship design, and now form an integral part of the ship-designing process. PMID:25311139

  11. Computational Fluid Dynamics Modeling of Bacillus anthracis ...

    EPA Pesticide Factsheets

    Journal Article Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived from computed tomography (CT) or µCT images. Both models encompassed airways that extended from the external nose to the lung with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation-exhalation breathing conditions using average species-specific minute volumes. Four different exposure scenarios were modeled in the rabbit based upon experimental inhalation studies. For comparison, human simulations were conducted at the highest exposure concentration used during the rabbit experimental exposures. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Despite the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the upper conducting airways of the human at the same air concentration of anthrax spores. This greater deposition of spores in the upper airways in the human resulted in lower penetration and deposition in the tracheobronchial airways and the deep lung than that predict

  13. Bioreactor studies and computational fluid dynamics.

    PubMed

    Singh, H; Hutmacher, D W

    2009-01-01

    The hydrodynamic environment "created" by bioreactors for the culture of a tissue engineered construct (TEC) is known to influence cell migration, proliferation and extra cellular matrix production. However, tissue engineers have looked at bioreactors as black boxes within which TECs are cultured mainly by trial and error, as the complex relationship between the hydrodynamic environment and tissue properties remains elusive, yet is critical to the production of clinically useful tissues. It is well known in the chemical and biotechnology field that a more detailed description of fluid mechanics and nutrient transport within process equipment can be achieved via the use of computational fluid dynamics (CFD) technology. Hence, the coupling of experimental methods and computational simulations forms a synergistic relationship that can potentially yield greater and yet, more cohesive data sets for bioreactor studies. This review aims at discussing the rationale of using CFD in bioreactor studies related to tissue engineering, as fluid flow processes and phenomena have direct implications on cellular response such as migration and/or proliferation. We conclude that CFD should be seen by tissue engineers as an invaluable tool allowing us to analyze and visualize the impact of fluidic forces and stresses on cells and TECs.

  14. Bioreactor Studies and Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Singh, H.; Hutmacher, D. W.

    The hydrodynamic environment “created” by bioreactors for the culture of a tissue engineered construct (TEC) is known to influence cell migration, proliferation and extra cellular matrix production. However, tissue engineers have looked at bioreactors as black boxes within which TECs are cultured mainly by trial and error, as the complex relationship between the hydrodynamic environment and tissue properties remains elusive, yet is critical to the production of clinically useful tissues. It is well known in the chemical and biotechnology field that a more detailed description of fluid mechanics and nutrient transport within process equipment can be achieved via the use of computational fluid dynamics (CFD) technology. Hence, the coupling of experimental methods and computational simulations forms a synergistic relationship that can potentially yield greater and yet, more cohesive data sets for bioreactor studies. This review aims at discussing the rationale of using CFD in bioreactor studies related to tissue engineering, as fluid flow processes and phenomena have direct implications on cellular response such as migration and/or proliferation. We conclude that CFD should be seen by tissue engineers as an invaluable tool allowing us to analyze and visualize the impact of fluidic forces and stresses on cells and TECs.

  15. Computational Fluid Dynamics - Applications in Manufacturing Processes

    NASA Astrophysics Data System (ADS)

    Beninati, Maria Laura; Kathol, Austin; Ziemian, Constance

    2012-11-01

    A new Computational Fluid Dynamics (CFD) exercise has been developed for the undergraduate introductory fluid mechanics course at Bucknell University. The goal is to develop a computational exercise that links the manufacturing processes course and the concurrent fluid mechanics course in a way that reinforces the concepts in both. In general, CFD is used as a tool to increase student understanding of the fundamentals in a virtual world. A "learning factory," which is currently in development at Bucknell, seeks to use the laboratory as a means to link courses that previously seemed to have little correlation at first glance. A large part of the manufacturing processes course is a project using an injection molding machine. The flow of pressurized molten polyurethane into the mold cavity is also an example of fluid motion (a jet of liquid hitting a plate) applied in manufacturing. The students run a CFD process that captures this flow using a virtual mold created with a graphics package such as SolidWorks. The laboratory structure is currently being implemented and analyzed as part of the "learning factory." Lastly, a survey taken before and after the CFD exercise demonstrates a better understanding of both CFD and the manufacturing process.

  16. The Design Methodology of Distributed Computer Systems.

    DTIC Science & Technology

    1980-12-01

    This remedies most of the drawbacks of the centralized approach. However, due to the inherent communication delay, the chosen control node may get an...alternative approach is the Bayesian approach advocated by Littlewood (LIT 79(B)). Here we postulate a prior distribution for each of 1, 2, ... j. Then...Chapter 2 describes a top-down development approach. The development process is divided into four successive phases: (1) requirements, and

  17. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, workstations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the workload on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, workstations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  18. The Design Methodology of Distributed Computer Systems.

    DTIC Science & Technology

    1980-05-01

    of possible reconfiguration strategies and to select the one that is best suited for a given dynamic environment. The evaluation is based on the...satisfied. Figure 2.1a: an inconsistent system; Figure 2.1b: a consistent system. The practical implication behind this system classification...program once. The procedure for constructing the RP-graph of a concurrent system can be best illustrated by an example. Figure 3.1 shows a concurrent

  19. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.

  20. A Computability Theory for Distributed Systems.

    DTIC Science & Technology

    1986-03-13

    the following two elementary properties: [p] is an equivalence relation over system computations; for z a prefix of V, there is an event on p...assumption. We note that the two conditions in the last sentence of the theorem are not exclusive. Observation 1: any occurrence of "P" in a...Basic Tense Logic, in D. Gabbay and F. Guenthner (eds.), Handbook of...; 10th POPL (1983) 141-154; [MP3] Manna, Z., Pnueli, A., Verification of Con

  1. A distributed computing model for telemetry data processing

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.
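
    A minimal sketch of the information-sharing pattern described (hypothetical message format and parameter names): a parameter is published once and fanned out to every subscribing flight-controller application:

        from collections import defaultdict

        class InfoHub:
            def __init__(self):
                self.subs = defaultdict(list)      # parameter name -> callbacks
            def subscribe(self, name, callback):
                self.subs[name].append(callback)
            def publish(self, name, value, source):
                msg = {"name": name, "value": value, "source": source}
                for cb in self.subs[name]:         # fan out to every subscriber
                    cb(msg)

        hub = InfoHub()
        hub.subscribe("cabin_pressure", lambda m: print("display app:", m))
        hub.subscribe("cabin_pressure", lambda m: print("limit checker:", m))
        hub.publish("cabin_pressure", 14.7, source="telemetry")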

  2. A distributed computing model for telemetry data processing

    NASA Astrophysics Data System (ADS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  3. Approximation modeling for the online performance management of distributed computing systems.

    PubMed

    Kusic, Dara; Kandasamy, Nagarajan; Jiang, Guofei

    2008-10-01

    A promising method of automating management tasks in computing systems is to formulate them as control or optimization problems in terms of performance metrics. For an online optimization scheme to be of practical value in a distributed setting, however, it must successfully tackle the curses of dimensionality and modeling. This paper develops a hierarchical control framework to solve performance management problems in distributed computing systems operating in a data center. Concepts from approximation theory are used to reduce the computational burden of controlling such large-scale systems. The relevant approximations are made in the construction of the dynamical models to predict system behavior and in the solution of the associated control equations. Using a dynamic resource-provisioning problem as a case study, we show that a computing system managed by the proposed control framework with approximation models realizes profit gains that are, in the best case, within 1% of a controller using an explicit model of the system.
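
    The paper's hierarchical controller is not spelled out in the abstract; the following is a minimal sketch of the underlying idea of limited-lookahead provisioning against an approximate workload model. The forecast rule, cost figures, and function names are all invented for illustration.

```python
# Minimal limited-lookahead provisioning sketch (hypothetical model and
# numbers; the paper's hierarchical controller is far more elaborate).

def predict_arrivals(history, horizon):
    """Crude approximation model: persist the last observed arrival rate."""
    return [history[-1]] * horizon

def best_allocation(history, horizon=3, max_servers=10,
                    revenue_per_req=0.02, cost_per_server=1.0,
                    service_rate=100.0):
    """Enumerate server counts and pick the one maximizing predicted profit."""
    forecast = predict_arrivals(history, horizon)
    best_n, best_profit = 0, float("-inf")
    for n in range(1, max_servers + 1):
        profit = 0.0
        for lam in forecast:
            served = min(lam, n * service_rate)   # capacity-limited throughput
            profit += served * revenue_per_req - n * cost_per_server
        if profit > best_profit:
            best_n, best_profit = n, profit
    return best_n, best_profit

print(best_allocation(history=[850.0]))  # provisions ~8 servers for 850 req/s
```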

  4. System Design for On-line Distributed Computational Visualization and Steering

    SciTech Connect

    Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S

    2006-01-01

    We propose a distributed computing framework for network-optimized visualization and steering of real-time scientific simulations and computations executed on a remote host, such as a workstation, cluster, or supercomputer. Unlike conventional "batch" simulations, this system enables: (i) monitoring of an ongoing remote computation using visualization tools, and (ii) on-line specification of simulation parameters to interactively steer remote computations. Using performance models for transport channels and visualization modules, we develop a dynamic programming method to optimize the realization of the visualization pipeline over a wide-area network to maximize the frame rate. We present experimental results to illustrate the effectiveness of this system.
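
    As a rough illustration of the dynamic-programming idea, assuming (hypothetically) that the pipeline's modules must be mapped in order onto a chain of hosts and that the frame rate is limited by the slowest compute or transfer stage:

```python
# Sketch of dynamic programming that maps an ordered visualization pipeline
# onto an ordered chain of hosts so as to minimize the bottleneck time (and
# hence maximize frame rate = 1 / bottleneck). All costs are hypothetical.

def min_bottleneck(work, data, speed, bw):
    """
    work[i]  : compute cost of module i
    data[i]  : bytes sent from module i to module i+1
    speed[j] : processing speed of host j (hosts form a chain j = 0..k-1)
    bw[j]    : bandwidth of link between host j and host j+1
    Modules are assigned to hosts in non-decreasing order along the chain.
    """
    m, k = len(work), len(speed)
    INF = float("inf")
    # dp[i][j] = best bottleneck placing modules 0..i with module i on host j
    dp = [[INF] * k for _ in range(m)]
    for j in range(k):
        dp[0][j] = work[0] / speed[j]
    for i in range(1, m):
        for j in range(k):
            for jp in range(j + 1):          # previous module on host jp <= j
                t = max(dp[i - 1][jp], work[i] / speed[j])
                if jp < j:                    # data crosses links jp..j-1
                    t = max(t, max(data[i - 1] / bw[l] for l in range(jp, j)))
                dp[i][j] = min(dp[i][j], t)
    return min(dp[m - 1])

# Example: a 3-module pipeline over a 3-host chain.
print(min_bottleneck(work=[4.0, 9.0, 2.0], data=[1.0, 0.5],
                     speed=[1.0, 3.0, 1.0], bw=[2.0, 2.0]))
```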

  5. Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models

    EPA Science Inventory

    The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...

  7. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  8. Actors: A Model of Concurrent Computation in Distributed Systems.

    DTIC Science & Technology

    1985-06-01

    AD-A157 917. Technical Report 844: Actors: A Model of Concurrent Computation in Distributed Systems. Gul A. Agha, MIT Artificial Intelligence Laboratory. This document has been approved for public release and sale; its ...

  9. Toward personalized nasal surgery using computational fluid dynamics.

    PubMed

    Rhee, John S; Pawar, Sachin S; Garcia, Guilherme J M; Kimbell, Julia S

    2011-01-01

    To evaluate whether virtual surgery performed on 3-dimensional (3D) nasal airway models can predict postsurgical biophysical parameters obtained by computational fluid dynamics (CFD). Presurgery and postsurgery computed tomographic scans of a patient undergoing septoplasty and right inferior turbinate reduction (ITR) were used to generate 3D models of the nasal airway. Prior to obtaining the postsurgery scan, the presurgery model was digitally altered to generate 3 virtual surgery models: (1) right ITR only, (2) septoplasty only, and (3) septoplasty with right ITR. The results of the virtual surgery CFD analyses were compared with postsurgical CFD outcome measures, including nasal resistance, unilateral airflow allocation, and regional airflow distribution. Postsurgery CFD analysis and all virtual surgery models predicted similar reductions in overall nasal resistance, as well as more balanced airflow distribution between sides, primarily in the middle region, when compared with the presurgery state. In contrast, virtual ITR alone produced little change in either nasal resistance or regional airflow allocation. We present an innovative approach for assessing functional outcomes of nasal surgery using CFD techniques. This preliminary study suggests that virtual nasal surgery has the potential to be a predictive tool that will enable surgeons to perform personalized nasal surgery using computer simulation techniques. Further investigation involving correlation of patient-reported measures with CFD outcome measures in multiple individuals is under way.

  10. Nonlinear Fluid Computations in a Distributed Environment

    NASA Technical Reports Server (NTRS)

    Atwood, Christopher A.; Smith, Merritt H.

    1995-01-01

    The performance of a loosely- and tightly-coupled workstation cluster is compared against a conventional vector supercomputer for the solution of the Reynolds-averaged Navier-Stokes equations. The application geometries include a transonic airfoil, a tiltrotor wing/fuselage, and a wing/body/empennage/nacelle transport. Decomposition is of the manager-worker type, with solution of one grid zone per worker process coupled using the PVM message-passing library. Task allocation is determined by grid size and processor speed, subject to available memory penalties. Each fluid zone is computed using an implicit diagonal scheme in an overset mesh framework, while relative body motion is accomplished using an additional worker process to re-establish grid communication.
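
    A minimal sketch of speed-weighted task allocation in this spirit (a greedy heuristic, not necessarily the scheme used in the paper; zone sizes and speeds are illustrative):

```python
# Sketch of manager-worker task allocation weighted by processor speed,
# in the spirit of (but not identical to) the zone-per-worker scheme above.

def allocate_zones(zone_sizes, proc_speeds):
    """Greedily assign each grid zone to the worker that would finish
    its resulting load soonest, weighting load by processor speed."""
    loads = [0.0] * len(proc_speeds)
    assignment = {}
    # Largest zones first, so small zones can fill in the gaps.
    for zid, size in sorted(enumerate(zone_sizes), key=lambda z: -z[1]):
        w = min(range(len(proc_speeds)),
                key=lambda i: (loads[i] + size) / proc_speeds[i])
        loads[w] += size
        assignment[zid] = w
    return assignment, loads

assignment, loads = allocate_zones([120e3, 80e3, 40e3, 40e3], [1.0, 1.0, 2.0])
print(assignment, loads)   # the 2x-faster worker absorbs the largest zone
```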

  11. Distributed sensor networks with collective computation

    SciTech Connect

    Lanman, D. R.

    2001-01-01

    Simulations of a network of N sensors have been performed. The simulation space contains a number of sound sources and a large number of sensors. Each sensor is equipped with an omni-directional microphone and is capable of measuring only the time of arrival of a signal. Sensors are able to wirelessly transmit and receive packets of information, and have some computing power. The sensors were programmed to merge all information (received packets as well as local measurements) into a 'world view' for that node. This world view is then transmitted. In this way, information can slowly diffuse across the network. One node was monitored in the network as a proxy for when information had diffused across the network. Simulations demonstrated that the energy expended per sensor per time step was approximately independent of N.
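
    The merge step that lets information diffuse across the network can be sketched as follows; the keying on (sensor, source) pairs and the field names are invented for illustration:

```python
# Sketch of the "world view" merge-and-diffuse idea: each node keeps the
# freshest time-of-arrival measurement it has heard for every (sensor,
# source) pair and rebroadcasts the merged view. Field names are invented.

def merge_world_views(local, received):
    """Merge a received world view into the local one, keeping the entry
    with the newest timestamp for each key."""
    for key, (toa, timestamp) in received.items():
        if key not in local or timestamp > local[key][1]:
            local[key] = (toa, timestamp)
    return local

node_a = {("s1", "src7"): (0.0123, 5)}
node_b = {("s1", "src7"): (0.0125, 9), ("s2", "src7"): (0.0101, 8)}
print(merge_world_views(node_a, node_b))
# -> node A now carries B's fresher measurement plus a new sensor entry
```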

  12. Clock distribution system for digital computers

    DOEpatents

    Wyman, Robert H.; Loomis, Jr., Herschel H.

    1981-01-01

    Apparatus for eliminating, in each clock distribution amplifier of a clock distribution system, sequential pulse catch-up error due to one pulse "overtaking" a prior clock pulse. The apparatus includes timing means to produce a periodic electromagnetic signal with a fundamental frequency having a fundamental frequency component V'_01(t); an array of N signal characteristic detector means, with detector means No. 1 receiving the timing means signal and producing a change-of-state signal V_1(t) in response to receipt of a signal above a predetermined threshold; N substantially identical filter means, one filter means being operatively associated with each detector means, for receiving the change-of-state signal V_n(t) and producing a modified change-of-state signal V'_n(t) (n = 1, ..., N) having a fundamental frequency component that is substantially proportional to V'_01(t - θ_n(t)) with a cumulative phase shift θ_n(t) having a time derivative that may be made uniformly and arbitrarily small; and with the detector means n+1 (1 ≤ n

  13. Nature computes: information processing in quantum dynamical systems.

    PubMed

    Wiesner, Karoline

    2010-09-01

    Nature intrinsically computes. It has been suggested that the entire universe is a computer, in particular, a quantum computer. To corroborate this idea we require tools to quantify the information processing. Here we review a theoretical framework for quantifying information processing in a quantum dynamical system. So-called intrinsic quantum computation combines tools from dynamical systems theory, information theory, quantum mechanics, and computation theory. We will review how far the framework has been developed and what some of the main open questions are. On the basis of this framework we discuss upper and lower bounds for intrinsic information storage in a quantum dynamical system.

  14. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them towards running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications, and to utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computing resources to run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
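
    The queue-management idea can be sketched with a relational database; the schema and column names below are invented, and the snippet is a minimal illustration rather than the platform's actual implementation:

```python
import sqlite3

# Minimal sketch of the relational queue idea: simulation tiles are queued
# in a database and leased to volunteer nodes one at a time. The schema and
# the name "tile" are invented for illustration.

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tasks (
    id INTEGER PRIMARY KEY, tile TEXT, status TEXT DEFAULT 'pending')""")
db.executemany("INSERT INTO tasks (tile) VALUES (?)",
               [(f"tile_{i}",) for i in range(4)])

def lease_task(conn):
    """Hand the next pending tile to a volunteer and mark it in flight."""
    row = conn.execute(
        "SELECT id, tile FROM tasks WHERE status = 'pending' LIMIT 1").fetchone()
    if row:
        conn.execute("UPDATE tasks SET status = 'running' WHERE id = ?", (row[0],))
    return row

print(lease_task(db))   # -> (1, 'tile_0')
```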

  15. Computer Power: Part 1: Distribution of Power (and Communications).

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1988-01-01

    Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)

  16. Developing a Distributed Computing Architecture at Arizona State University.

    ERIC Educational Resources Information Center

    Armann, Neil; And Others

    1994-01-01

    Development of Arizona State University's computing architecture, designed to ensure that all new distributed computing pieces will work together, is described. Aspects discussed include the business rationale, the general architectural approach, characteristics and objectives of the architecture, specific services, and impact on the university…

  17. The Penalty of Context-Switch Time in Distributed Computing

    DTIC Science & Technology

    1988-05-13

    Context-switch time is a significant cost in distributed computing, affecting throughput and response time. We report statistics gathered for a large network of Sun 2's, Sun 3's, and DEC VAX computers.

  19. Beta distributions: A computer program for probabilities and fractile points

    NASA Technical Reports Server (NTRS)

    Brownlow, J. D.; Swaroop, R.

    1979-01-01

    A beta distribution is specified by range parameters a < b and two shape parameters alpha > 0 and beta > 0. The computer program presented calculates any desired probability and/or fractile point for specified values of a, b, alpha, and beta. This program additionally computes gamma function values for integer and noninteger arguments.
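
    An equivalent computation is easy to express today with SciPy (a modern stand-in for the FORTRAN program; all values are arbitrary examples):

```python
from scipy.stats import beta
from scipy.special import gamma

# Probabilities and fractile points of a beta distribution on [a, b],
# mirroring the program's inputs (example values only).
a, b = 2.0, 10.0          # range parameters
alpha, bta = 2.5, 4.0     # shape parameters

dist = beta(alpha, bta, loc=a, scale=b - a)  # beta rescaled to [a, b]
print(dist.cdf(5.0))      # probability P(X <= 5)
print(dist.ppf(0.95))     # 95th-percentile fractile point
print(gamma(3), gamma(2.5))  # gamma function at integer and noninteger arguments
```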

  1. Pladipus Enables Universal Distributed Computing in Proteomics Bioinformatics.

    PubMed

    Verheggen, Kenneth; Maddelein, Davy; Hulstaert, Niels; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2016-03-04

    The use of proteomics bioinformatics substantially contributes to an improved understanding of proteomes, but this novel and in-depth knowledge comes at the cost of increased computational complexity. Parallelization across multiple computers, a strategy termed distributed computing, can be used to handle this increased complexity; however, setting up and maintaining a distributed computing infrastructure requires resources and skills that are not readily available to most research groups. Here we propose a free and open-source framework named Pladipus that greatly facilitates the establishment of distributed computing networks for proteomics bioinformatics tools. Pladipus is straightforward to install and operate thanks to its user-friendly graphical interface, allowing complex bioinformatics tasks to be run easily on a network instead of a single computer. As a result, any researcher can benefit from the increased computational efficiency provided by distributed computing, hence empowering them to tackle more complex bioinformatics challenges. Notably, it enables any research group to perform large-scale reprocessing of publicly available proteomics data, thus supporting the scientific community in mining these data for novel discoveries.

  2. Transient dynamic distributed strain sensing using photonic crystal fibres

    NASA Astrophysics Data System (ADS)

    Samad, Shafeek A.; Hegde, G. M.; Roy Mahapatra, D.; Hanagud, S.

    2014-02-01

    A technique to determine the strain field in a one-dimensional (1D) photonic crystal (PC) involving the high strain rates and high temperatures around shock or ballistic impact is proposed. Transient strain sensing is important in aerospace and other structural health monitoring (SHM) applications. We consider a MEMS-based smart sensor design with a photonic crystal integrated on a silicon substrate for dynamic strain correlation. Deeply etched silicon rib waveguides with distributed Bragg reflectors are suitable candidates for the miniaturization of sensing elements, replacing the conventional FBG. The main objective here is to investigate the effect of non-uniform strain localization on the sensor output. Computational analysis is done to determine the static and dynamic strain sensing characteristics of the 1D photonic crystal based sensor. The structure is designed and modeled using the Finite Element Method. Dynamic localization of the strain field is observed. The distributed strain field is used to calculate the PC waveguide response. The sensitivity of the proposed sensor is estimated to be 0.6 pm/μɛ.

  3. Toward Distributed Service Discovery in Pervasive Computing Environments

    DTIC Science & Technology

    2006-02-01

    ... Library at www.computer.org/publications/dlib. IEEE Transactions on Mobile Computing, Vol. 5, No. 2, February 2006 ... "Library for Parallel Simulation of Large-Scale Wireless Networks," Proc. 12th Workshop on Parallel and Distributed Simulations, 1998. Chakraborty et al. ... computing, digital libraries, electronic commerce, and trusted information systems. She has published eight books and more than 100 refereed articles in ...

  4. Computational fluid dynamics modelling in cardiovascular medicine

    PubMed Central

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards ‘digital patient’ or ‘virtual physiological human’ representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges. PMID:26512019

  5. Computational fluid dynamics modelling in cardiovascular medicine.

    PubMed

    Morris, Paul D; Narracott, Andrew; von Tengg-Kobligk, Hendrik; Silva Soto, Daniel Alejandro; Hsiao, Sarah; Lungu, Angela; Evans, Paul; Bressloff, Neil W; Lawford, Patricia V; Hose, D Rodney; Gunn, Julian P

    2016-01-01

    This paper reviews the methods, benefits and challenges associated with the adoption and translation of computational fluid dynamics (CFD) modelling within cardiovascular medicine. CFD, a specialist area of mathematics and a branch of fluid mechanics, is used routinely in a diverse range of safety-critical engineering systems, which increasingly is being applied to the cardiovascular system. By facilitating rapid, economical, low-risk prototyping, CFD modelling has already revolutionised research and development of devices such as stents, valve prostheses, and ventricular assist devices. Combined with cardiovascular imaging, CFD simulation enables detailed characterisation of complex physiological pressure and flow fields and the computation of metrics which cannot be directly measured, for example, wall shear stress. CFD models are now being translated into clinical tools for physicians to use across the spectrum of coronary, valvular, congenital, myocardial and peripheral vascular diseases. CFD modelling is apposite for minimally-invasive patient assessment. Patient-specific (incorporating data unique to the individual) and multi-scale (combining models of different length- and time-scales) modelling enables individualised risk prediction and virtual treatment planning. This represents a significant departure from traditional dependence upon registry-based, population-averaged data. Model integration is progressively moving towards 'digital patient' or 'virtual physiological human' representations. When combined with population-scale numerical models, these models have the potential to reduce the cost, time and risk associated with clinical trials. The adoption of CFD modelling signals a new era in cardiovascular medicine. While potentially highly beneficial, a number of academic and commercial groups are addressing the associated methodological, regulatory, education- and service-related challenges.

  6. Product limit estimation for capturing of pressure distribution dynamics.

    PubMed

    Wininger, Michael; Crane, Barbara A

    2016-05-01

    Measurement of contact pressures at the wheelchair-seating interface is a critically important approach for laboratory research and clinical application in monitoring risk for pressure ulceration. As yet, measures obtained from pressure mapping are static in nature: there is no accounting for changes in pressure distribution over time, despite the well-known interaction between time and pressure in risk estimation. Here, we introduce the first dynamic analysis for distribution of pressure data, based on the Kaplan-Meier (KM) Product Limit Estimator (PLE), a ubiquitous tool encountered in clinical trials and survival analysis. In this approach, the pressure array-over-time data set is sub-sampled two frames at a time (random pairing), and their similarity of pressure distribution is quantified via a correlation coefficient. A large number (here: 100) of these frame pairs is then sorted into descending order of correlation value and visualized as a KM curve; we build confidence limits via a bootstrap computed over 1000 replications. PLEs and the KM have robust statistical support and extensive development: the opportunities for extended application are substantial. We propose that the KM-PLE in particular, and dynamic analysis in general, may provide key leverage on future development of seating technology, and valuable new insight into extant datasets.
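
    A minimal sketch of the KM-PLE procedure on synthetic pressure data (the 100 frame pairs and 1000-replication bootstrap follow the abstract; everything else is illustrative):

```python
import numpy as np

# Sketch of the KM-PLE idea described above: randomly pair pressure frames,
# score each pair by correlation, sort the scores in descending order, and
# bootstrap confidence bands. The pressure data here are synthetic.

rng = np.random.default_rng(1)
frames = rng.random((200, 16 * 16))   # 200 frames of a 16x16 pressure map

def pair_correlations(frames, n_pairs=100):
    i, j = rng.integers(0, len(frames), (2, n_pairs))
    return np.array([np.corrcoef(frames[a], frames[b])[0, 1]
                     for a, b in zip(i, j)])

curve = np.sort(pair_correlations(frames))[::-1]          # KM-style curve
boot = np.sort(np.stack([np.sort(pair_correlations(frames))[::-1]
                         for _ in range(1000)]), axis=0)
lo, hi = boot[25], boot[975]   # approximate 95% bootstrap band per rank
print(curve[:5], lo[:5], hi[:5])
```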

  7. Distriblets: Java-Based Distributed Computing on the Web.

    ERIC Educational Resources Information Center

    Finkel, David; Wills, Craig E.; Brennan, Brian; Brennan, Chris

    1999-01-01

    Describes a system for using the World Wide Web to distribute computational tasks to multiple hosts on the Web that is written in Java programming language. Describes the programs written to carry out the load distribution, the structure of a "distriblet" class, and experiences in using this system. (Author/LRW)

  9. Perspectives on distributed computing : thirty people, four user types, and the distributed computing user experience.

    SciTech Connect

    Childers, L.; Liming, L.; Foster, I.; Mathematics and Computer Science; Univ. of Chicago

    2008-10-15

    This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software; (2

  10. A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)

    2001-01-01

    NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
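
    MinEX itself is not specified in the abstract; the following toy rebalancing step merely illustrates the stated trade-off between balancing processor workloads and limiting data movement. All weights and the stopping rule are invented.

```python
# Toy rebalancing step in the spirit of MinEX (not the published algorithm):
# move tasks off the most loaded processor only while the gain in balance
# outweighs an assumed data-movement penalty.

def rebalance(loads, tasks, move_cost=2.0):
    """loads: current load per processor; tasks: {proc: [task weights]}."""
    moved = []
    while True:
        src = max(range(len(loads)), key=lambda p: loads[p])
        dst = min(range(len(loads)), key=lambda p: loads[p])
        if not tasks[src]:
            break
        w = min(tasks[src])                    # cheapest task to move
        gain = loads[src] - max(loads[dst] + w, loads[src] - w)
        if gain <= move_cost:                  # movement no longer pays off
            break
        tasks[src].remove(w)
        tasks[dst].append(w)
        loads[src] -= w
        loads[dst] += w
        moved.append((w, src, dst))
    return moved

loads = [30.0, 8.0, 10.0]
tasks = {0: [10.0, 12.0, 8.0], 1: [8.0], 2: [10.0]}
print(rebalance(loads, tasks), loads)   # one move, then the penalty dominates
```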

  11. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    SciTech Connect

    Jin, Shuangshuang; Chen, Yousu; Wu, Di; Diao, Ruisheng; Huang, Zhenyu

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using a single-processor based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
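
    A minimal distributed-memory sketch of this pattern using mpi4py (an assumption: a placeholder decay model stands in for the differential-algebraic generator dynamics; run under mpirun):

```python
from mpi4py import MPI

# Minimal distributed-memory sketch in the spirit of the MPI implementation:
# each rank advances its share of generator states, then all ranks exchange
# results each time step. The "dynamics" here is a placeholder decay model.

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_gen, dt = 16, 0.01
mine = range(rank, n_gen, size)              # round-robin generator split
state = {g: 1.0 + 0.1 * g for g in mine}

for _ in range(100):
    for g in mine:
        state[g] += dt * (-0.5 * state[g])   # placeholder ODE step
    # Gather every rank's partial states so all ranks see the full system.
    full = {}
    for part in comm.allgather(state):
        full.update(part)

if rank == 0:
    print(sorted(full.items())[:3])
```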

  12. Description and development of the means of a model experiment for load balancing in distributed computing systems

    NASA Astrophysics Data System (ADS)

    Nagiyev, A. E.; Sherstnyova, A. I.; Botygin, I. A.; Galanova, N. Y.

    2016-06-01

    Results of statistical model experiments on various load-balancing algorithms in distributed computing systems are presented. Software tools were developed that allow the creation of a virtual infrastructure of a distributed computing system in accordance with the intended objective of research focused on multi-agent and multithreaded data processing. A scheme for controlling the processing of requests from terminal devices, providing effective dynamic horizontal scaling of computing power at peak loads, is proposed.

  13. Arcade: A Web-Java Based Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  14. Computational social dynamic modeling of group recruitment.

    SciTech Connect

    Berry, Nina M.; Lee, Marinna; Pickett, Marc; Turnley, Jessica Glicken; Smrcka, Julianne D.; Ko, Teresa H.; Moy, Timothy David; Wu, Benjamin C.

    2004-01-01

    The Seldon software toolkit combines concepts from agent-based modeling and social science to create a computational social dynamic model of group recruitment. The underlying recruitment model is based on a unique three-level hybrid agent-based architecture that contains simple agents (level one), abstract agents (level two), and cognitive agents (level three). The uniqueness of this architecture begins with the abstract agents, which permit the model to include social concepts (gang) or institutional concepts (school) in a typical software simulation environment. The future addition of cognitive agents to the recruitment model will provide a unique entity that does not exist in any agent-based modeling toolkit to date. We use social networks to provide an integrated mesh within and between the different levels. This Java-based toolkit is used to analyze different social concepts based on initialization input from the user. The input alters a set of parameters used to influence the values associated with the simple agents, abstract agents, and the interactions (simple agent-simple agent or simple agent-abstract agent) between these entities. The results of the phase-1 Seldon toolkit provide insight into how certain social concepts apply to different scenario development for inner-city gang recruitment.

  15. Computational fluid dynamics in ventilation: Practical approach

    NASA Astrophysics Data System (ADS)

    Fontaine, J. R.

    The potential of computational fluid dynamics (CFD) for designing ventilation systems is shown through the simulation of five practical cases. The following examples are considered: capture of pollutants on a surface-treating tank equipped with a unilateral suction slot in the presence of a disturbing air draft opposed to suction; dispersion of solid aerosols inside fume cupboards; performance comparison of two general ventilation systems in a silkscreen printing workshop; ventilation of a large open painting area; and oil fog removal inside a mechanical engineering workshop. Whereas the first two problems are analyzed through two-dimensional numerical simulations, the three other cases require three-dimensional modeling. For the surface-treating tank case, numerical results are compared to laboratory experiment data. All simulations are carried out using EOL, a CFD software package specially devised to deal with air quality problems in industrial ventilated premises. It contains many analysis tools to interpret the results in terms familiar to the industrial hygienist. Much experimental work has been undertaken to validate the predictions of EOL for ventilation flows.

  16. One approach for evaluating the Distributed Computing Design System (DCDS)

    NASA Technical Reports Server (NTRS)

    Ellis, J. T.

    1985-01-01

    The Distributed Computing Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  17. Protocols for configuring computation loops on a distributed multiprocessor system

    SciTech Connect

    Woei Lin; Chuan-lin Wu

    1983-01-01

    Protocols for configuring computation loops in a multiprocessing system are examined. Processing nodes are connected by a reconfigurable communication subnet using a multistage interconnection network. Configuration protocols are presented in terms of distributed algorithms such that processing nodes are configured in loop topologies. The configurability of loop topologies is first investigated. It is verified that the communication subnet can emulate loop distributed systems. It is also proven that multiple loops of various lengths can be configured in the distributed network. The technique demonstrated for configuring loop topologies can be used to configure other computation topologies. 6 references.

  18. Experiment Dashboard for Monitoring of the LHC Distributed Computing Systems

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Devesas Campos, M.; Tarragon Cros, J.; Gaidioz, B.; Karavakis, E.; Kokoszkiewicz, L.; Lanciotti, E.; Maier, G.; Ollivier, W.; Nowotka, M.; Rocha, R.; Sadykov, T.; Saiz, P.; Sargsyan, L.; Sidorova, I.; Tuckett, D.

    2011-12-01

    The LHC experiments are currently taking collision data. The distributed computing model chosen by the four main LHC experiments allows physicists to benefit from resources spread all over the world. The distributed model and the scale of LHC computing activities increase the level of complexity of the middleware, and also the chances of possible failures or inefficiencies in the components involved. In order to ensure the required performance and functionality of the LHC computing system, monitoring the status of the distributed sites and services, as well as monitoring LHC computing activities, is among the key factors. Over the last years, the Experiment Dashboard team has been working on a number of applications that facilitate the monitoring of different activities, including the following up of jobs and transfers as well as site and service availabilities. This presentation describes the Experiment Dashboard applications used by the LHC experiments and the experience gained during the first months of data taking.

  19. Exact Score Distribution Computation for Similarity Searches in Ontologies

    NASA Astrophysics Data System (ADS)

    Schulz, Marcel H.; Köhler, Sebastian; Bauer, Sebastian; Vingron, Martin; Robinson, Peter N.

    Semantic similarity searches in ontologies are an important component of many bioinformatic algorithms, e.g., protein function prediction with the Gene Ontology. In this paper we consider the exact computation of score distributions for similarity searches in ontologies, and introduce a simple null hypothesis which can be used to compute a P-value for the statistical significance of similarity scores. We concentrate on measures based on Resnik’s definition of ontological similarity. A new algorithm is proposed that collapses subgraphs of the ontology graph and thereby allows fast score distribution computation. The new algorithm is several orders of magnitude faster than the naive approach, as we demonstrate by computing score distributions for similarity searches in the Human Phenotype Ontology.
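
    The naive approach that the paper's subgraph-collapsing algorithm accelerates can be sketched directly; the toy ontology, information-content values, and uniform-pair null hypothesis below are invented for illustration:

```python
from collections import Counter
from fractions import Fraction

# Naive exact score distribution for Resnik-style similarity (the baseline
# the paper's subgraph-collapsing algorithm accelerates). Terms, ancestor
# sets, and information-content (IC) values are toy data.

ic = {"root": 0.0, "A": 1.0, "B": 1.0, "C": 2.0}          # information content
ancestors = {"root": {"root"}, "A": {"root", "A"},
             "B": {"root", "B"}, "C": {"root", "A", "C"}}

def resnik(t1, t2):
    """IC of the most informative common ancestor of t1 and t2."""
    return max(ic[a] for a in ancestors[t1] & ancestors[t2])

terms = list(ic)
# Null hypothesis: a query term pair is drawn uniformly from all pairs.
dist = Counter(resnik(t1, t2) for t1 in terms for t2 in terms)
total = sum(dist.values())

def p_value(score):
    """P(similarity >= observed score) under the uniform null."""
    return Fraction(sum(n for s, n in dist.items() if s >= score), total)

print(dict(dist), p_value(2.0))   # only (C, C) reaches a score of 2.0
```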

  20. Application of concurrent processing to structural dynamic response computations

    NASA Technical Reports Server (NTRS)

    Ransom, J.; Sotraasli, O.; Fulton, R.

    1984-01-01

    Described are the experiences gained from solving for the dynamic response of two simple structures on an experimental Multiple Instruction Multiple Data (MIMD) computer called the Finite Element Machine. MIMD computing concepts are introduced, the concurrent algorithmic techniques implemented are described, and results are given for the two example problems. The results show computational speedups of up to 7.83 using eight of the Finite Element Machine processors and indicate that significant computational speedups are possible for large-order structural computations.

  1. Computational fluid dynamic modelling of cavitation

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability of incorporating thermodynamic effects of cryogenic fluids. Extensions of the current two-dimensional steady-state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.

  2. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.

  3. Visualization of vortical flows in computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Volkov, K. N.; Emel'yanov, V. N.; Teterina, I. V.; Yakovchuk, M. S.

    2017-08-01

    The concepts and methods of the visual representation of fluid dynamics computations of vortical flows are studied. Approaches to the visualization of vortical flows based on the use of various definitions of a vortex and various tests for its identification are discussed. Examples of the visual representation of solutions to some fluid dynamics problems related to the computation of vortical flows in jets, channels, and cavities and of the computation of separated flows occurring in flows around bodies of various shapes are discussed.
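
    One widely used identification test of the kind the paper surveys is the Q-criterion, which flags regions where rotation dominates strain. A minimal sketch on a synthetic 2D rotating field:

```python
import numpy as np

# Q-criterion vortex identification (Q > 0 where rotation dominates strain),
# one of several tests of the kind the paper surveys. The velocity field
# below is a synthetic solid-body-like vortex.

n = 64
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = -Y, X                                  # rotating velocity field

dudx, dudy = np.gradient(u, x, x, axis=(0, 1))
dvdx, dvdy = np.gradient(v, x, x, axis=(0, 1))

# Velocity-gradient split into strain (S, symmetric) and rotation (W) norms.
S2 = dudx**2 + dvdy**2 + 0.5 * (dudy + dvdx) ** 2
W2 = 0.5 * (dudy - dvdx) ** 2
Q = 0.5 * (W2 - S2)

print("vortex cells (Q > 0):", int((Q > 0).sum()), "of", n * n)
```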

  4. Dynamic leaching test of personal computer components.

    PubMed

    Li, Yadong; Richardson, Jay B; Niu, Xiaojun; Jackson, Ollie J; Laster, Jeremy D; Walker, Aaron K

    2009-11-15

    A dynamic leaching test (DLT) was developed and used to evaluate the leaching of toxic substances from electronic waste in the environment. The major components in personal computers (PCs), including motherboards, hard disc drives, floppy disc drives, and compact disc drives, were tested. The tests lasted 2 years for motherboards and 1.5 years for the disc drives. The extraction fluids for the standard toxicity characteristic leaching procedure (TCLP) and the synthetic precipitation leaching procedure (SPLP) were used as the DLT leaching solutions. A total of 18 elements, including Ag, Al, As, Au, Ba, Be, Cd, Cr, Cu, Fe, Ga, Ni, Pd, Pb, Sb, Se, Sn, and Zn, were analyzed in the DLT leachates. Only Al, Cu, Fe, Ni, Pb, and Zn were commonly found in the DLT leachates of the PC components. Their leaching levels were much higher in the TCLP extraction fluid than in the SPLP extraction fluid. The toxic heavy metal Pb was found to leach continuously out of the components over the entire test periods. The cumulative amount of Pb leached from the motherboards in TCLP extraction fluid reached 2.0 g per motherboard over the 2-year test period; in SPLP extraction fluid it was 75-90% less. The leaching rates and levels of Pb were largely affected by the content of galvanized steel in the PC components: the higher the steel content, the lower the Pb leaching rate. The findings suggest that obsolete PCs disposed of in landfills or discarded in the environment continuously release Pb for years when subjected to landfill leachate or rain.

  5. COMPUTATIONAL FLUID DYNAMICS MODELING ANALYSIS OF COMBUSTORS

    SciTech Connect

    Mathur, M.P.; Freeman, Mark; Gera, Dinesh

    2001-11-06

    In the current fiscal year FY01, several CFD simulations were conducted to investigate the effects of moisture in biomass/coal, particle injection locations, and flow parameters on carbon burnout and NOx inside a 150 MW GEEZER industrial boiler. Various simulations were designed to predict the suitability of biomass cofiring in coal combustors, and to explore the possibility of using biomass as a reburning fuel to reduce NOx. Some additional CFD simulations were also conducted on the CERF combustor to examine the combustion characteristics of pulverized coal in enriched O2/CO2 environments. Most of the CFD models available in the literature treat particles as point masses with uniform temperature inside the particles. This isothermal condition may not be suitable for larger biomass particles. To this end, a stand-alone program was developed from first principles to account for heat conduction from the surface of the particle to its center. It is envisaged that the recently developed non-isothermal stand-alone module will be integrated with the Fluent solver during the next fiscal year to accurately predict the carbon burnout from larger biomass particles. Anisotropy in heat transfer will be explored using different conductivities in the radial and axial directions. The above models will be validated and tested on various full-scale industrial boilers. The current NOx modules, which are based on global chemistry, will be modified to account for local CH, CH2, and CH3 radical chemistry. It may also be worth exploring the effect of an enriched O2/CO2 environment on carbon burnout and NOx concentration. The research objective of this study is to develop a 3-dimensional combustor model for biomass co-firing and reburning applications using the Fluent computational fluid dynamics code.

  6. AIR INGRESS ANALYSIS: COMPUTATIONAL FLUID DYNAMIC MODELS

    SciTech Connect

    Chang H. Oh; Eung S. Kim; Richard Schultz; Hans Gougar; David Petti; Hyung S. Kang

    2010-08-01

    The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in very high temperature reactors (VHTRs). Phenomena Identification and Ranking Studies to date have ranked an air ingress event, following on the heels of a VHTR depressurization, as important with regard to core safety. Consequently, the development of advanced air-ingress-related models and verification and validation data is a very high priority. Following a loss-of-coolant and system depressurization incident, air will enter the core of the High Temperature Gas Cooled Reactor through the break, possibly causing oxidation of the in-core and reflector graphite structure. Simple core and plant models indicate that, under certain circumstances, the oxidation may proceed at an elevated rate with additional heat generated from the oxidation reaction itself. Under postulated conditions of fluid flow and temperature, excessive degradation of the lower plenum graphite can lead to a loss of structural support. Excessive oxidation of core graphite can also lead to the release of fission products into the confinement, which could be detrimental to reactor safety. The computational fluid dynamics models developed in this study will improve our understanding of this phenomenon. This paper presents two-dimensional and three-dimensional CFD results for the quantitative assessment of the air ingress phenomena. A portion of the results for the density-driven stratified flow in the inlet pipe will be compared with experimental results.

  7. Integrated computer simulation on FIR FEL dynamics

    SciTech Connect

    Furukawa, H.; Kuruma, S.; Imasaki, K.

    1995-12-31

    An integrated computer simulation code has been developed to analyze RF-linac FEL dynamics. First, a simulation code for the electron beam acceleration and transport processes in the RF linac (LUNA) has been developed to analyze the characteristics of the electron beam in the RF linac and to optimize its parameters. Second, a space-time dependent 3D FEL simulation code (Shipout) has been developed. Total RF-linac FEL simulations have been performed by using the electron beam data from LUNA in Shipout. The number of particles used in a total RF-linac FEL simulation is approximately 1000, and the CPU time for the simulation of one round trip is about 1.5 minutes. At ILT/ILE, Osaka, an 8.5 MeV RF linac with a photo-cathode RF gun is used for FEL oscillation experiments. Using a 2 cm wiggler, FEL oscillation at wavelengths of approximately 46 μm is investigated. Simulations using LUNA with the parameters of an ILT/ILE experiment estimate the pulse shape and energy spectra of the electron beam at the end of the linac. The pulse shape of the electron beam at the end of the linac has a sharp rise and slowly decays as a function of time. Total RF-linac FEL simulations with the parameters of an ILT/ILE experiment estimate the dependence of the start-up of the FEL oscillations on this pulse shape. Coherent spontaneous emission effects and a quick start-up of FEL oscillations have been observed in the total RF-linac FEL simulations.

  8. Computational Fluid Dynamics of Acoustically Driven Bubble Systems

    NASA Astrophysics Data System (ADS)

    Glosser, Connor; Lie, Jie; Dault, Daniel; Balasubramaniam, Shanker; Piermarocchi, Carlo

    2014-03-01

    The development of modalities for precise, targeted drug delivery has become increasingly important in medical care in recent years. Assemblages of microbubbles steered by acoustic pressure fields present one potential vehicle for such delivery. Modeling the collective response of multi-bubble systems to an intense, externally applied ultrasound field requires accurately capturing acoustic interactions between bubbles and the externally applied field, and their effect on the evolution of bubble kinetics. In this work, we present a methodology for multiphysics simulation based on an efficient transient boundary integral equation (TBIE) coupled with molecular dynamics (MD) to compute trajectories of multiple acoustically interacting bubbles in an ideal fluid under pulsed acoustic excitation. For arbitrary configurations of spherical bubbles, the TBIE solver self-consistently models transient surface pressure distributions at bubble-fluid interfaces due to acoustic interactions and relative potential flows induced by bubble motion. Forces derived from the resulting pressure distributions act as driving terms in the MD update at each timestep. The resulting method efficiently and accurately captures individual bubble dynamics for clouds containing up to hundreds of bubbles.

  9. Towards Personalized Nasal Surgery Using Computational Fluid Dynamics

    PubMed Central

    Rhee, John S.; Pawar, Sachin S.; Garcia, Guilherme J.M.; Kimbell, Julia S.

    2013-01-01

    Objective To evaluate whether virtual surgery (VS) performed on 3D nasal airway models can predict post-surgical, biophysical parameters obtained by computational fluid dynamics (CFD). Methods Pre- and post- surgery CT scans of a patient undergoing septoplasty and right inferior turbinate reduction (ITR) were used to generate 3D models of the nasal airway. Prior to obtaining the post-surgery scan, the pre-surgery model was digitally altered to generate three VS models: 1) right ITR only, 2) septoplasty only, and 3) septoplasty with right ITR. The results of the VS CFD analyses were compared with post-surgical CFD outcome measures including nasal resistance, unilateral airflow allocation, and regional airflow distribution. Results Post-surgery CFD analysis and all VS models predicted similar reductions in overall nasal resistance, as well as more balanced airflow distribution between sides, primarily in the middle region, when compared with the pre-surgery state. In contrast, virtual ITR alone produced little change in either nasal resistance or regional airflow allocation. Conclusions We present an innovative approach for assessing functional outcomes of nasal surgery using CFD techniques. This preliminary study suggests that virtual nasal surgery has the potential to be a predictive tool that will enable surgeons to perform personalized nasal surgery using computer simulation techniques. Further investigation involving correlation of patient-reported measures with CFD outcome measures in multiple individuals is underway. PMID:21502467

  10. Dynamic stiffness method for space frames under distributed harmonic loads

    NASA Astrophysics Data System (ADS)

    Dumir, P. C.; Saha, D. C.; Sengupta, S.

    1992-10-01

    An exact dynamic equivalent load vector for space frames subjected to harmonic distributed loads has been derived using the dynamic stiffness approach. The Taylor series expansion of the dynamic equivalent load vector reveals that the static consistent equivalent load vector used in a 12-degree-of-freedom two-node finite element for a space frame is just the first term of the series. The dynamic stiffness approach using the exact dynamic equivalent load vector requires discretization of a member subjected to distributed loads into only one element. The results of the dynamic stiffness method are compared with those of the finite element method for illustrative problems.
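
    In schematic form (our notation, not necessarily the paper's), the expansion in even powers of the excitation frequency reads:

```latex
% Frequency expansion of the exact dynamic equivalent load vector
% (schematic; notation ours). The leading term coincides with the static
% consistent load vector of the 12-DOF two-node space-frame element.
\mathbf{f}_e(\omega) \;=\; \mathbf{f}_e^{(0)} \;+\; \omega^{2}\,\mathbf{f}_e^{(2)}
\;+\; \omega^{4}\,\mathbf{f}_e^{(4)} \;+\; \cdots
```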

  11. Spatially distributed characterization of soil-moisture dynamics using travel-time distributions

    NASA Astrophysics Data System (ADS)

    Heße, Falk; Zink, Matthias; Kumar, Rohini; Samaniego, Luis; Attinger, Sabine

    2017-01-01

    Travel-time distributions are a comprehensive tool for the characterization of hydrological system dynamics. Unlike the streamflow hydrograph, they describe the movement and storage of water within and throughout the hydrological system. Until recently, studies using such travel-time distributions have generally either been applied to lumped models or to real-world catchments using available time series, e.g., stable isotopes. Whereas the former are limited in their realism and lack information on the spatial arrangements of the relevant quantities, the latter are limited in their use of available data sets. In our study, we employ the spatially distributed mesoscale Hydrological Model (mHM) and apply it to a catchment in central Germany. Being able to draw on multiple large data sets for calibration and verification, we generate a large array of spatially distributed states and fluxes. These hydrological outputs are then used to compute the travel-time distributions for every grid cell in the modeling domain. A statistical analysis indicates the general soundness of the upscaling scheme employed in mHM and reveals precipitation, saturated soil moisture and potential evapotranspiration as important predictors for explaining the spatial heterogeneity of mean travel times. In addition, we demonstrate and discuss the high information content of mean travel times for characterization of internal hydrological processes.

  12. Generalization of the logistic distribution in the dynamic model of wind direction

    NASA Astrophysics Data System (ADS)

    Kaplya, E. V.

    2016-12-01

    Statistical regularity in the dynamics of wind direction has been found. The density distribution of the increment of the wind-direction angle has been approximated using a generalized (advanced) logistic distribution, which involves an additional power-law parameter. The parameters of the approximation function have been computed from experimental data using the method of least squares. The consistency of the proposed function with meteorological data has been tested using Pearson's chi-squared test and the Kolmogorov test.
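
    The abstract does not give the exact parameterization, so the sketch below assumes the Type-I generalized logistic density, whose extra shape parameter p plays the role of the additional power-law parameter mentioned above. It fits the density to a histogram by least squares, as described, and then applies Pearson's chi-squared test; the synthetic "wind-direction increments" are a hypothetical stand-in for the meteorological data.

    ```python
    # Sketch: least-squares fit of a Type-I generalized logistic density,
    # followed by a Pearson chi-squared goodness-of-fit test.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import chisquare

    def gen_logistic_pdf(x, mu, s, p):
        """Type-I generalized logistic density; p = 1 recovers the logistic."""
        z = np.exp(-(x - mu) / s)
        return (p / s) * z / (1.0 + z) ** (p + 1)

    # Hypothetical stand-in for measured wind-direction increments (degrees).
    rng = np.random.default_rng(42)
    dtheta = rng.logistic(loc=0.0, scale=5.0, size=5000)

    counts, edges = np.histogram(dtheta, bins=40, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Least-squares fit of the density to the empirical histogram.
    popt, _ = curve_fit(gen_logistic_pdf, centers, counts, p0=(0.0, 5.0, 1.0))

    # Pearson chi-squared test of the fitted model against observed counts.
    n, width = len(dtheta), edges[1] - edges[0]
    observed, _ = np.histogram(dtheta, bins=edges)
    expected = gen_logistic_pdf(centers, *popt) * n * width
    expected *= observed.sum() / expected.sum()   # match totals
    stat, pval = chisquare(observed, expected, ddof=3)  # 3 fitted parameters
    print(popt, stat, pval)
    ```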

  13. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for storing, computing on, and analyzing data. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark, a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is built on the latest virtualized computing infrastructures and a distributed computing architecture. OpenStack and Docker are used to build a multi-user cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph, and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools that apply to other domains with spatial properties.

  14. Validation of Magnetic Resonance Thermometry by Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Rydquist, Grant; Owkes, Mark; Verhulst, Claire M.; Benson, Michael J.; Vanpoppel, Bret P.; Burton, Sascha; Eaton, John K.; Elkins, Christopher P.

    2016-11-01

    Magnetic Resonance Thermometry (MRT) is a new experimental technique that can create fully three-dimensional temperature fields in a noninvasive manner. However, validation is still required to determine the accuracy of measured results. One method of examination is to compare data gathered experimentally to data computed with computational fluid dynamics (CFD). In this study, large-eddy simulations have been performed with the NGA computational platform to generate data for a comparison with previously run MRT experiments. The experimental setup consisted of a heated jet inclined at 30° injected into a larger channel. In the simulations, viscosity and density were scaled according to the local temperature to account for differences in buoyant and viscous forces. A mesh-independence study was performed with 5-million-, 15-million-, and 45-million-cell meshes. The program Star-CCM+ was used to simulate the complete experimental geometry, and its results were compared to data generated from NGA. Overall, both programs show good agreement with the experimental data gathered with MRT. With these data, the validity of MRT as a diagnostic tool has been shown, and the tool can be used to further our understanding of a range of flows with non-trivial temperature distributions.

  15. Methods for Optimal Output Prediction in Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Kast, Steven Michael

    In a Computational Fluid Dynamics (CFD) simulation, not all data is of equal importance. Instead, the goal of the user is often to compute certain critical outputs - such as lift and drag - accurately. While in recent years CFD simulations have become routine, ensuring accuracy in these outputs is still surprisingly difficult. Unacceptable levels of output error arise even in industry-standard simulations, such as the steady flow around commercial aircraft. This problem is only exacerbated when simulating more complex, unsteady flows. In this thesis, we present a mesh adaptation strategy for unsteady problems that can automatically reduce errors in outputs of interest. This strategy applies to problems in which the computational domain deforms in time - such as flapping-flight simulations - and relies on an unsteady adjoint to identify regions of the mesh contributing most to the output error. This error is then driven down via refinement of the critical regions in both space and time. Here, we demonstrate this strategy on a series of flapping-wing problems in two and three dimensions, using high-order discontinuous Galerkin (DG) methods for both spatial and temporal discretizations. Compared to other methods, results indicate that this strategy can deliver a desired level of output accuracy with significant reductions in computational cost. After concluding our work on mesh adaptation, we take a step back and investigate another idea for obtaining output accuracy: adapting the numerical method itself. In particular, we show how the test space of discontinuous finite element methods can be "optimized" to achieve accuracy in certain outputs or regions. In this work, we compute test functions that ensure accuracy specifically along domain boundaries. These regions - which are vital to both scalar outputs (such as lift and drag) and distributions (such as pressure and skin friction) - are often the most important from an engineering standpoint.

  16. New security infrastructure model for distributed computing systems

    NASA Astrophysics Data System (ADS)

    Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.

    2016-02-01

    In this paper we propose a new approach to setting up a user-friendly and yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on a public key infrastructure along with proxy certificates, which are used for rights delegation. In practice, the contradiction between the limited lifetime of the proxy certificates and the unpredictable time of request processing is a big issue for the end users of the system. We propose to use hashes with unlimited lifetimes, individual to each request, instead of proxy certificates. Our approach avoids the use of proxy certificates altogether, so the security infrastructure of a distributed computing system becomes easier to develop, support and use.
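
    The abstract does not specify the hash construction, so the following is only a minimal sketch of the idea: a token bound to a single request, computed as an HMAC over a unique request identifier, which never expires and therefore cannot be invalidated by a long queue wait the way a proxy-certificate lifetime can. All names and the shared-secret scheme are illustrative assumptions.

    ```python
    # Sketch of per-request, non-expiring authentication tokens as an
    # alternative to proxy certificates. The HMAC construction and all
    # names here are hypothetical, not taken from the paper.
    import hmac
    import hashlib
    import uuid

    SERVICE_SECRET = b"shared-secret-provisioned-out-of-band"

    def issue_request_token(user_id: str) -> tuple[str, str]:
        """Issue a token bound to exactly one request; it has no lifetime."""
        request_id = uuid.uuid4().hex
        mac = hmac.new(SERVICE_SECRET, f"{user_id}:{request_id}".encode(),
                       hashlib.sha256).hexdigest()
        return request_id, mac

    def verify_request_token(user_id: str, request_id: str, mac: str) -> bool:
        """Recompute the HMAC and compare in constant time."""
        expected = hmac.new(SERVICE_SECRET, f"{user_id}:{request_id}".encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, mac)

    rid, tok = issue_request_token("alice")
    assert verify_request_token("alice", rid, tok)
    ```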

  17. Nonlinear structural analysis on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Noor, Ahmed K.

    1995-01-01

    A computational strategy is presented for the nonlinear static and postbuckling analyses of large complex structures on massively parallel computers. The strategy is designed for distributed-memory, message-passing parallel computer systems. The key elements of the proposed strategy are: (1) a multiple-parameter reduced basis technique; (2) a nested dissection (or multilevel substructuring) ordering scheme; (3) parallel assembly of global matrices; and (4) a parallel sparse equation solver. The effectiveness of the strategy is assessed by applying it to thermo-mechanical postbuckling analyses of stiffened composite panels with cutouts, and nonlinear large-deflection analyses of HSCT models on Intel Paragon XP/S computers. The numerical studies presented demonstrate the advantages of nested dissection-based solvers over traditional skyline-based solvers on distributed memory machines.

  18. Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.

    NASA Astrophysics Data System (ADS)

    Elliott, William Dewey

    1995-01-01

    A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over

  19. On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems

    NASA Astrophysics Data System (ADS)

    Junge, Oliver; Kevrekidis, Ioannis G.

    2017-06-01

    We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as, e.g., saddle type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.
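
    A minimal sketch of this variational idea follows: choose n points, then minimize the distance between each point's image under the dynamics and its nearest neighbour in the set, plus a weak Lennard-Jones-style repulsion that spreads the points evenly. The Hénon map, the optimizer, and all weights are illustrative choices, not taken from the paper.

    ```python
    # Sketch: variational approximation of an invariant set of the Henon map.
    import numpy as np
    from scipy.optimize import minimize

    def henon(X, a=1.4, b=0.3):
        """Henon map applied row-wise to an (n, 2) array of points."""
        return np.stack([1.0 - a * X[:, 0]**2 + X[:, 1], b * X[:, 0]], axis=1)

    def objective(flat, n, sigma=0.05, w=1e-4):
        X = flat.reshape(n, 2)
        # squared distance from each image point to its nearest set point
        d2 = ((henon(X)[:, None, :] - X[None, :, :])**2).sum(-1)
        invariance = d2.min(axis=1).sum()
        # Lennard-Jones-type pairwise term for an even point distribution
        i, j = np.triu_indices(n, k=1)
        r2 = ((X[i] - X[j])**2).sum(-1) + 1e-12
        lj = ((sigma**2 / r2)**6 - (sigma**2 / r2)**3).sum()
        return invariance + w * lj

    n = 60
    rng = np.random.default_rng(0)
    x0 = rng.uniform(-1.0, 1.0, size=(n, 2)).ravel()
    res = minimize(objective, x0, args=(n,), method="L-BFGS-B",
                   options={"maxiter": 2000})
    print(res.fun)   # small values indicate an approximately invariant set
    ```

    The nearest-neighbour term makes the objective nonsmooth, so a derivative-free method (e.g., Nelder-Mead) or a smooth soft-minimum can be substituted if L-BFGS-B stalls.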

  20. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2015-08-11

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.

  1. Computation of glint, glare, and solar irradiance distribution

    DOEpatents

    Ho, Clifford Kuofei; Khalsa, Siri Sahib Singh

    2017-08-01

    Described herein are technologies pertaining to computing the solar irradiance distribution on a surface of a receiver in a concentrating solar power system or glint/glare emitted from a reflective entity. At least one camera captures images of the Sun and the entity of interest, wherein the images have pluralities of pixels having respective pluralities of intensity values. Based upon the intensity values of the pixels in the respective images, the solar irradiance distribution on the surface of the entity or glint/glare corresponding to the entity is computed.

  2. Distributed computer taxonomy based on O/S structure

    NASA Technical Reports Server (NTRS)

    Foudriat, Edwin C.

    1985-01-01

    The taxonomy considers the resource structure at the operating system level. It compares a communication based taxonomy with the new taxonomy to illustrate how the latter does a better job when related to the client's view of the distributed computer. The results illustrate the fundamental features and what is required to construct fully distributed processing systems. The problem of using network computers on the space station is addressed. A detailed discussion of the taxonomy is not given here. Information is given in the form of charts and diagrams that were used to illustrate a talk.

  3. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect

    Fisk, Ian

    2010-12-01

    In this presentation the experiences of the LHC experiments using grid computing were presented with a focus on experience with distributed analysis. After many years of development, preparation, exercises, and validation the LHC (Large Hadron Collider) experiments are in operations. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes employing the infrastructure for distributed analysis. At the end the expected evolution and future plans are outlined.

  4. How to Reduce Computational Time in Distributed Hydrological Modeling?

    NASA Astrophysics Data System (ADS)

    Khan, U.; Tuteja, N. K.; Ajami, H.; Sharma, A.

    2012-12-01

    Fluxes simulated with the Equivalent Cross-sections approach are close to the reference fluxes, while the computational time is reduced significantly, on the order of ~7 to ~10 times. The U3M-2d model evaluation is performed by comparing the simulated soil moisture of hillslope cross-sections with the observed soil moisture at several locations in the Wagga Wagga experimental catchment. Results illustrate that the model is capable of producing consistent results and capturing daily soil moisture dynamics. Results from this study indicate that an Equivalent Cross-section based distributed hydrological modeling approach has the potential to reduce the computational time significantly while retaining the same order of accuracy. References: Khan, U., A. Sharma, and N. K. Tuteja (2009), A new approach for delineation of hydrologic response units in large catchments, in 18th IMACS World Congress MODSIM 2009, International Conference, Modelling and Simulation Society of Australia and New Zealand, edited by R. S. Anderssen, R. D. Braddock and L. T. H. Newham, pp. 3521-3527, Cairns, Australia.

  5. Distributed MRI reconstruction using Gadgetron-based cloud computing.

    PubMed

    Xue, Hui; Inati, Souheil; Sørensen, Thomas Sangild; Kellman, Peter; Hansen, Michael S

    2015-03-01

    To expand the open source Gadgetron reconstruction framework to support distributed computing and to demonstrate that a multinode version of the Gadgetron can be used to provide nonlinear reconstruction with clinically acceptable latency. The Gadgetron framework was extended with new software components that enable an arbitrary number of Gadgetron instances to collaborate on a reconstruction task. This cloud-enabled version of the Gadgetron was deployed on three different distributed computing platforms ranging from a heterogeneous collection of commodity computers to the commercial Amazon Elastic Compute Cloud. The Gadgetron cloud was used to provide nonlinear, compressed-sensing reconstruction on a clinical scanner with low reconstruction latency (e.g., cardiac and neuroimaging applications). The proposed setup was able to handle acquisition and ℓ1-SPIRiT reconstruction of nine high temporal resolution real-time, cardiac short-axis cine acquisitions, covering the ventricles for functional evaluation, in under 1 min. A three-dimensional high-resolution brain acquisition with 1 mm(3) isotropic pixel size was acquired and reconstructed with nonlinear reconstruction in less than 5 min. A distributed computing enabled Gadgetron provides a scalable way to improve reconstruction performance using commodity cluster computing. Nonlinear, compressed sensing reconstruction can be deployed clinically with low image reconstruction latency. © 2014 Wiley Periodicals, Inc.

  6. Parallel Processing for Computational Continuum Dynamics,

    DTIC Science & Technology

    1985-01-01

    Multiple Instruction stream, Multiple Data stream (MIMD). An example of a machine of this type is the HEP H1000 computer manufactured by Denelcor... parallel architecture in general and for the HEP H1000 computer in particular. The approach is a step-by-step procedure based on a progression from the... (Heterogeneous Element Processor) by Denelcor has MIMD architecture. The HEP computer is designed to combine from one up to 16 Process Execution Modules (PEMs

  7. How do Chinese cities grow? A distribution dynamics approach

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Xin; He, Ling-Yun

    2017-03-01

    This paper examines the dynamic behavior of city size using a distribution dynamics approach with Chinese city data for the period 1984-2010. Instead of convergence, divergence or parallel growth, multimodality and persistence are the dominant characteristics in the distribution dynamics of Chinese prefectural cities. Moreover, initial city size matters: initially small and medium-sized cities exhibit a strong tendency toward convergence, while large cities show significant persistence and multimodality in the sample period. Examination of the regional city groups shows that locational fundamentals have an important impact on the distribution dynamics of city size.

  8. Bayesian uncertainty quantification and propagation in molecular dynamics simulations: A high performance computing framework

    NASA Astrophysics Data System (ADS)

    Angelikopoulos, Panagiotis; Papadimitriou, Costas; Koumoutsakos, Petros

    2012-10-01

    We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.
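
    The paper uses a parallel transitional Markov chain Monte Carlo with surrogate models; as a much-reduced illustration of the same Bayesian calibration idea, the sketch below runs a plain random-walk Metropolis sampler over a single Lennard-Jones well-depth parameter. The "observed" data, the toy model, and all parameter values are hypothetical stand-ins.

    ```python
    # Sketch: Bayesian calibration of one force-field parameter via plain
    # Metropolis sampling (standing in for the paper's transitional MCMC).
    import numpy as np

    rng = np.random.default_rng(0)

    def model(eps, r):
        """Toy observable: Lennard-Jones energy at separations r."""
        return 4.0 * eps * ((1.0 / r)**12 - (1.0 / r)**6)

    r_obs = np.linspace(1.0, 2.5, 20)
    eps_true, noise = 0.30, 0.02
    y_obs = model(eps_true, r_obs) + rng.normal(0.0, noise, r_obs.size)

    def log_post(eps):
        if not (0.0 < eps < 2.0):            # uniform prior on (0, 2)
            return -np.inf
        resid = y_obs - model(eps, r_obs)
        return -0.5 * np.sum(resid**2) / noise**2

    chain, eps = [], 1.0
    lp = log_post(eps)
    for _ in range(20000):
        prop = eps + rng.normal(0.0, 0.05)   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            eps, lp = prop, lp_prop
        chain.append(eps)
    post = np.array(chain[5000:])            # discard burn-in
    print(post.mean(), post.std())           # posterior mean near eps_true
    ```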

  9. Bayesian uncertainty quantification and propagation in molecular dynamics simulations: a high performance computing framework.

    PubMed

    Angelikopoulos, Panagiotis; Papadimitriou, Costas; Koumoutsakos, Petros

    2012-10-14

    We present a Bayesian probabilistic framework for quantifying and propagating the uncertainties in the parameters of force fields employed in molecular dynamics (MD) simulations. We propose a highly parallel implementation of the transitional Markov chain Monte Carlo for populating the posterior probability distribution of the MD force-field parameters. Efficient scheduling algorithms are proposed to handle the MD model runs and to distribute the computations in clusters with heterogeneous architectures. Furthermore, adaptive surrogate models are proposed in order to reduce the computational cost associated with the large number of MD model runs. The effectiveness and computational efficiency of the proposed Bayesian framework is demonstrated in MD simulations of liquid and gaseous argon.

  10. Eleventh Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion

    NASA Technical Reports Server (NTRS)

    Williams, R. W. (Compiler)

    1993-01-01

    Conference publication includes 79 abstracts and presentations and 3 invited presentations given at the Eleventh Workshop for Computational Fluid Dynamic Applications in Rocket Propulsion held at George C. Marshall Space Flight Center, April 20-22, 1993. The purpose of the workshop is to discuss experimental and computational fluid dynamic activities in rocket propulsion. The workshop is an open meeting for government, industry, and academia. A broad number of topics are discussed including computational fluid dynamic methodology, liquid and solid rocket propulsion, turbomachinery, combustion, heat transfer, and grid generation.

  11. Integration of Cloud resources in the LHCb Distributed Computing

    NASA Astrophysics Data System (ADS)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack); it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. With this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  12. Computing Bisectors in a Dynamic Geometry Environment

    ERIC Educational Resources Information Center

    Botana, Francisco

    2013-01-01

    In this note, an approach combining dynamic geometry and automated deduction techniques is used to study the bisectors between points and curves. Usual teacher constructions for bisectors are discussed, showing that inherent limitations in dynamic geometry software impede their thorough study. We show that the interactive sketching of bisectors…

  13. Computing Bisectors in a Dynamic Geometry Environment

    ERIC Educational Resources Information Center

    Botana, Francisco

    2013-01-01

    In this note, an approach combining dynamic geometry and automated deduction techniques is used to study the bisectors between points and curves. Usual teacher constructions for bisectors are discussed, showing that inherent limitations in dynamic geometry software impede their thorough study. We show that the interactive sketching of bisectors…

  14. Dynamic Equilibrium Explained Using the Computer

    ERIC Educational Resources Information Center

    Sariçayir, Hakan; Sahin, Musa; Üce, Musa

    2006-01-01

    Since their introduction into schools, educators have tried to utilize computers in classes in order to make difficult topics more comprehensible. Chemistry educators, when faced with the task of teaching a topic that cannot be taught through experiments in a laboratory, resort to computers to help students visualize difficult concepts and…

  15. Distributed Computing with Centralized Support Works at Brigham Young.

    ERIC Educational Resources Information Center

    McDonald, Kelly; Stone, Brad

    1992-01-01

    Brigham Young University (Utah) has addressed the need for maintenance and support of distributed computing systems on campus by implementing a program patterned after a national business franchise, providing the support and training of a centralized administration but allowing each unit to operate much as an independent small business.…

  16. CMS Monte Carlo production operations in a distributed computing environment

    SciTech Connect

    Mohapatra, A.; Lazaridis, C.; Hernandez, J.M.; Caballero, J.; Hof, C.; Kalinin, S.; Flossdorf, A.; Abbrescia, M.; De Filippis, N.; Donvito, G.; Maggi, G.; /Bari U. /INFN, Bari /INFN, Pisa /Vrije U., Brussels /Brussels U. /Imperial Coll., London /CERN /Princeton U. /Fermilab

    2008-01-01

    Monte Carlo production for the CMS experiment is carried out in a distributed computing environment; the goal of producing 30M simulated events per month in the first half of 2007 has been reached. A brief overview of the production operations and statistics is presented.

  17. Chandrasekhar equations and computational algorithms for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Ito, K.; Powers, R. K.

    1984-01-01

    The Chandrasekhar equations arising in optimal control problems for linear distributed parameter systems are considered. The equations are derived via approximation theory. This approach is used to obtain existence, uniqueness, and strong differentiability of the solutions and provides the basis for a convergent computation scheme for approximating feedback gain operators. A numerical example is presented to illustrate these ideas.

  18. A fault detection service for wide area distributed computations.

    SciTech Connect

    Stelling, P.

    1998-06-09

    The potential for faults in distributed computing systems is a significant complicating factor for application developers. While a variety of techniques exist for detecting and correcting faults, the implementation of these techniques in a particular context can be difficult. Hence, we propose a fault detection service designed to be incorporated, in a modular fashion, into distributed computing systems, tools, or applications. This service uses well-known techniques based on unreliable fault detectors to detect and report component failure, while allowing the user to trade off timeliness of reporting against false positive rates. We describe the architecture of this service, report on experimental results that quantify its cost and accuracy, and describe its use in two applications: monitoring the status of system components of the GUSTO computational grid testbed, and as part of the NetSolve network-enabled numerical solver.
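
    A minimal sketch of the heartbeat-style unreliable failure detector described above follows: a component is suspected once its heartbeats stop arriving within a timeout, and the timeout is exactly the knob that trades reporting timeliness against the false-positive rate. All names here are hypothetical, not taken from the paper.

    ```python
    # Sketch: a timeout-based (unreliable) heartbeat failure detector.
    import time

    class HeartbeatFailureDetector:
        def __init__(self, timeout_s: float):
            self.timeout_s = timeout_s          # larger => fewer false positives,
            self.last_seen: dict[str, float] = {}  # but slower fault reporting

        def heartbeat(self, component: str) -> None:
            """Record a heartbeat from a monitored component."""
            self.last_seen[component] = time.monotonic()

        def suspected(self) -> list[str]:
            """Components whose heartbeats are overdue (possibly falsely)."""
            now = time.monotonic()
            return [c for c, t in self.last_seen.items()
                    if now - t > self.timeout_s]

    detector = HeartbeatFailureDetector(timeout_s=0.05)
    detector.heartbeat("solver-node-1")
    time.sleep(0.1)                 # simulate a missed heartbeat interval
    print(detector.suspected())     # ['solver-node-1']
    ```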

  19. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate newly developed algorithms into actual simulation packages. This work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm

  20. Computer architecture evaluation for structural dynamics computations: Project summary

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1989-01-01

    The intent of the proposed effort is the examination of the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.

  1. Income dynamics with a stationary double Pareto distribution

    NASA Astrophysics Data System (ADS)

    Toda, Alexis Akira

    2011-04-01

    Once controlled for the trend, the distribution of personal income appears to be double Pareto, a distribution that obeys the power law exactly in both the upper and the lower tails. I propose a model of income dynamics with a stationary distribution that is consistent with this fact. Using US male wage data for 1970-1993, I estimate the power law exponent in two ways—(i) from each cross section, assuming that the distribution has converged to the stationary distribution, and (ii) from a panel directly estimating the parameters of the income dynamics model—and obtain the same value of 8.4.
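
    The abstract does not spell out the cross-sectional estimator; a standard choice for estimating a Pareto tail exponent is the maximum-likelihood (Hill) estimator over the upper tail, sketched here on synthetic Pareto-tailed data as a stand-in for the wage observations.

    ```python
    # Sketch: Hill (maximum-likelihood) estimate of a Pareto tail exponent.
    import numpy as np

    rng = np.random.default_rng(0)
    alpha_true = 8.4
    # 1 + Lomax(alpha) is a classical Pareto with x_min = 1
    incomes = rng.pareto(alpha_true, size=50000) + 1.0

    def hill_exponent(x: np.ndarray, x_min: float) -> float:
        """MLE of the Pareto exponent alpha for the tail x >= x_min."""
        tail = x[x >= x_min]
        return tail.size / np.log(tail / x_min).sum()

    print(hill_exponent(incomes, x_min=1.5))   # should be near alpha_true
    ```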

  2. ADDRESSING ENVIRONMENTAL ENGINEERING CHALLENGES WITH COMPUTATIONAL FLUID DYNAMICS

    EPA Science Inventory

    This paper discusses the status and application of Computational Fluid Dynamics (CFD) models to address environmental engineering challenges for more detailed understanding of air pollutant source emissions, atmospheric dispersion and resulting human exposure. CFD simulations ...

  3. ADDRESSING ENVIRONMENTAL ENGINEERING CHALLENGES WITH COMPUTATIONAL FLUID DYNAMICS

    EPA Science Inventory

    This paper discusses the status and application of Computational Fluid Dynamics (CFD) models to address environmental engineering challenges for more detailed understanding of air pollutant source emissions, atmospheric dispersion and resulting human exposure. CFD simulations ...

  4. Sonovestibular symptoms evaluated by computed dynamic posturography.

    PubMed

    Teszler, C B; Ben-David, J; Podoshin, L; Sabo, E

    2000-01-01

    The investigation of stability under bilateral acoustic stimulation was undertaken in an attempt to mimic the real-life conditions of a noisy environment (e.g., industry, aviation). The Tullio phenomenon evaluated by computed dynamic posturography (CDP) under acoustic stimulation is reflected in postural unsteadiness, rather than in the classic nystagmus. With such a method, the dangerous effects of noise-induced instability can be assessed and prevented. Three groups of subjects were studied. The first (group A) included 20 patients who complained of sonovestibular symptoms (i.e., Tullio phenomenon) on the background of an inner-ear disease. The second group (B) included 20 neurootological patients without a history of Tullio phenomenon. Group C consisted of 20 patients with normal hearing, as controls. A pure-tone stimulus of 1,000 Hz at 110 dB was delivered binaurally for 20 seconds during condition 5 and condition 6 of the CDP sensory organization test. The sequence of six sensory organization conditions was performed three times, with two intermissions of 15-20 minutes between the trials. The first trial was performed in the regular mode (quiet stance). This was followed 20 minutes later by a trial carried out in quiet stance for sensory organization tests (SOTs) 1 through 4, and with acoustic stimulation in SOT 5 and SOT 6. The last trial was performed in quiet stance throughout (identical to the first trial). A significant drop in the composite equilibrium score was observed in group A patients upon acoustic stimulation (p < .0001). This imbalance did not disappear completely until 20 minutes later, when the third sensory organization trial was performed. In fact, the composite score obtained on the last SOT was still significantly worse than the baseline. Group B and the normal subjects (group C) showed no significant change in composite score. As regards the vestibular ratio score, again, group A marked a drop on stimulation with sound (p < .004). This decrease

  5. Autonomous management of distributed information systems using evolutionary computation techniques

    NASA Astrophysics Data System (ADS)

    Oates, Martin J.

    1999-03-01

    As the size of typical industrial strength information systems continues to rise, particularly in the arena of Internet based management information systems and multimedia servers, the issue of managing data distribution over clusters or `farms' to overcome performance and scalability issues is becoming of paramount importance. Further, where access is global, this can cause points of geographically localized load contention to `follow the sun' during the day. Traditional site mirroring is not overly effective in addressing this contention and so a more dynamic approach is being investigated to tackle load balancing. The general objective is to manage a self-adapting, distributed database so as to reliably and consistently provide near optimal performance as perceived by client applications. Such a management system must be ultimately capable of operating over a range of time varying usage profiles and fault scenarios, incorporate considerations for communications network delays, multiple updates and maintenance operations. It must also be shown to be capable of being scaled in a practical fashion to ever larger sized networks and databases. Two key components of such an automated system are an optimiser capable of efficiently finding new configuration options, and a suitable model of the system capable of accurately reflecting the performance (or any other required quality of service metric) of the real world system. As conditions change in the real world system, these are fed into the model. The optimiser is then run to find new configurations which are tested in the model prior to implementation in the real world. The model therefore forms an evaluation function which the optimiser utilises to direct its search. Whilst it has already been shown that Genetic Algorithms can provide good solutions to this problem, there are a number of issues associated with this approach. In particular, for industrial strength applications, it must be shown that the GA employed

  6. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
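
    The dependence of T_par on the sequential fraction is the familiar Amdahl's-law bound, stated here for reference (the formula is standard and not quoted from the report):

    ```latex
    % Amdahl's law: with sequential fraction f and p processors,
    \[
      T_{\mathrm{par}} \;=\; T_{\mathrm{seq}}\!\left(f + \frac{1-f}{p}\right),
      \qquad
      S(p) \;=\; \frac{T_{\mathrm{seq}}}{T_{\mathrm{par}}}
            \;=\; \frac{1}{f + (1-f)/p} \;\le\; \frac{1}{f}.
    \]
    ```

    For example, a sequential fraction of f = 0.2 caps the speedup at 5 no matter how many processors are used, which is consistent with the low speedups reported above.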

  7. Performance Assessment of OVERFLOW on Distributed Computing Environment

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Rizk, Yehia M.

    2000-01-01

    The aerodynamic computer code, OVERFLOW, with a multi-zone overset grid feature, has been parallelized to enhance its performance on distributed and shared memory paradigms. Practical application benchmarks have been set to assess the efficiency of the code's parallelism on high-performance architectures. The code's performance has also been experimented with in the context of the distributed computing paradigm on distant computer resources using the Information Power Grid (IPG) toolkit, Globus. Two parallel versions of the code, namely OVERFLOW-MPI and -MLP, have been developed around the natural coarse-grained parallelism inherent in a multi-zonal domain decomposition paradigm. The algorithm invokes a strategy that forms a number of groups, each consisting of a zone, a cluster of zones and/or a partition of a large zone. Each group can be thought of as a process with one or more threads assigned to it, and all groups run in parallel. The -MPI version of the code uses explicit message-passing based on the standard MPI library for sending and receiving interzonal boundary data across processors. The -MLP version employs no message-passing paradigm; the boundary data is transferred through shared memory. The -MPI code is suited for both distributed and shared memory architectures, while the -MLP code can only be used on shared memory platforms. The IPG applications are implemented by the -MPI code using the Globus toolkit. While a computational task is distributed across multiple computer resources, the parallelism can be explored on each resource alone. Performance studies are achieved with some practical aerodynamic problems with complex geometries, consisting of 2.5 up to 33 million grid points and a large number of zonal blocks. The computations were executed primarily on SGI Origin 2000 multiprocessors and on the Cray T3E. OVERFLOW's IPG applications are carried out on NASA homogeneous metacomputing machines located at three sites, Ames, Langley and Glenn. Plans

  8. Performance Assessment of OVERFLOW on Distributed Computing Environment

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Rizk, Yehia M.

    2000-01-01

    The aerodynamic computer code, OVERFLOW, with a multi-zone overset grid feature, has been parallelized to enhance its performance on distributed and shared memory paradigms. Practical application benchmarks have been set to assess the efficiency of the code's parallelism on high-performance architectures. The code's performance has also been experimented with in the context of the distributed computing paradigm on distant computer resources using the Information Power Grid (IPG) toolkit, Globus. Two parallel versions of the code, namely OVERFLOW-MPI and -MLP, have been developed around the natural coarse-grained parallelism inherent in a multi-zonal domain decomposition paradigm. The algorithm invokes a strategy that forms a number of groups, each consisting of a zone, a cluster of zones and/or a partition of a large zone. Each group can be thought of as a process with one or more threads assigned to it, and all groups run in parallel. The -MPI version of the code uses explicit message-passing based on the standard MPI library for sending and receiving interzonal boundary data across processors. The -MLP version employs no message-passing paradigm; the boundary data is transferred through shared memory. The -MPI code is suited for both distributed and shared memory architectures, while the -MLP code can only be used on shared memory platforms. The IPG applications are implemented by the -MPI code using the Globus toolkit. While a computational task is distributed across multiple computer resources, the parallelism can be explored on each resource alone. Performance studies are achieved with some practical aerodynamic problems with complex geometries, consisting of 2.5 up to 33 million grid points and a large number of zonal blocks. The computations were executed primarily on SGI Origin 2000 multiprocessors and on the Cray T3E. OVERFLOW's IPG applications are carried out on NASA homogeneous metacomputing machines located at three sites, Ames, Langley and Glenn. Plans

  9. Beyond the NAS Parallel Benchmarks: Measuring Dynamic Program Performance and Grid Computing Applications

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Biswas, Rupak; Frumkin, Michael; Feng, Huiyu; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The contents include: 1) A brief history of NPB; 2) What is (not) being measured by NPB; 3) Irregular dynamic applications (UA Benchmark); and 4) Wide area distributed computing (NAS Grid Benchmarks-NGB). This paper is presented in viewgraph form.

  10. Vibration suppression with approximate finite dimensional compensators for distributed systems: Computational methods and experimental results

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Smith, Ralph C.; Wang, Yun

    1994-01-01

    Based on a distributed parameter model for vibrations, an approximate finite dimensional dynamic compensator is designed to suppress vibrations (multiple modes with a broad band of frequencies) of a circular plate with Kelvin-Voigt damping and clamped boundary conditions. The control is realized via piezoceramic patches bonded to the plate and is calculated from information available from several pointwise observed state variables. Examples from computational studies as well as use in laboratory experiments are presented to demonstrate the effectiveness of this design.

  11. Computer modeling and simulation of a 20kHz ac distribution system for Space Station

    NASA Technical Reports Server (NTRS)

    Tsai, Fu-Sheng; Lee, Fred C.

    1987-01-01

    A computer model of a 20 kHz, ac distribution testbed for Space Station is presented. The system consists of six resonant inverters, a one-hundred-meter transmission line, and three load receivers: a dc receiver, a bidirectional receiver, and an ac receiver. A model library is generated characterizing all system components. The system's local and global behaviors are investigated using the EASY5 dynamic analysis program.

  12. A Computational Wireless Network Backplane: Performance in a Distributed Speaker Identification Application Postprint

    DTIC Science & Technology

    2008-12-01

    traffic patterns are intense but constrained to a local area. Examples include peer-to-peer applications or sensor data processing in the region. In such... vol. 30, no. 4, pp. 68-74, 1997. [7] J. Dean and S. Ghemawat, "MapReduce: simplified data processing on large clusters," Commun. ACM, vol. 51, no. 1... DWARF, a general distributed application execution framework for wireless ad-hoc networks which dynamically allocates computation resources and manages

  13. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    PubMed

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
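
    As a much-reduced illustration of the Monte Carlo confirmation mentioned above, the sketch below samples pairs of log-normal peak heights and, for each pair, bisects for the smallest separation at which the summed profile of two Gaussian peaks is bimodal. The Gaussian peak shape, the unit peak width, and the bimodality criterion for "minimum resolution" (Rs = d / 4σ) are illustrative assumptions, not the paper's exact definitions.

    ```python
    # Sketch: Monte Carlo estimate of the minimum-resolution distribution
    # for log-normal peak heights, using a bimodality criterion.
    import numpy as np

    rng = np.random.default_rng(0)

    def is_bimodal(h1, h2, d):
        """True if two unit-width Gaussian peaks of heights h1, h2 separated
        by d produce a summed profile with an interior local minimum."""
        x = np.linspace(-1.0, d + 1.0, 800)
        g = h1 * np.exp(-0.5 * x**2) + h2 * np.exp(-0.5 * (x - d)**2)
        mid = g[1:-1]
        return bool(np.any((mid < g[:-2]) & (mid < g[2:])))

    def min_resolution(h1, h2, lo=0.0, hi=8.0, tol=1e-3):
        """Bisect for the smallest separation giving a bimodal profile;
        report Rs = d / (4 sigma) with sigma = 1."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if is_bimodal(h1, h2, mid):
                hi = mid
            else:
                lo = mid
        return hi / 4.0

    # log-normal peak heights, as in the mixtures discussed above
    heights = rng.lognormal(mean=0.0, sigma=1.0, size=(500, 2))
    rs = np.array([min_resolution(h1, h2) for h1, h2 in heights])
    print(f"mean minimum resolution: {rs.mean():.3f}")
    ```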

  14. Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery

    SciTech Connect

    Luttman, A.

    2012-03-30

    The main (long-term) goal of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis based on observed spatiotemporal evolution is performed using an objective function based on expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis proceeding from the observations and priors. A mathematical formulation for computing flow fields is set up, and the minimizer for the problem is computed. An application to oceanic flow based on sea surface temperature is presented.

  15. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
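
    The pattern described above works because Monte Carlo realizations are independent and can be farmed out to a worker pool. The paper uses the Java Parallel Processing Framework around MODFLOW runs; in the sketch below, a Python process pool and a toy "model run" stand in for both.

    ```python
    # Sketch: batch-parallel processing of independent stochastic realizations.
    import numpy as np
    from multiprocessing import Pool

    def run_realization(seed: int) -> float:
        """Hypothetical stand-in for one stochastic groundwater model run."""
        rng = np.random.default_rng(seed)
        conductivity = rng.lognormal(mean=-5.0, sigma=1.0, size=1000)
        return conductivity.mean()   # placeholder output statistic

    if __name__ == "__main__":
        with Pool(processes=10) as pool:      # cf. the paper's 50 threads
            results = pool.map(run_realization, range(500))
        print(np.mean(results), np.std(results))
    ```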

  16. A Reliable Distributed Computing System Architecture for Planetary Rover

    NASA Astrophysics Data System (ADS)

    Jingping, C.; Yunde, J.

    A computing system is one of the most important parts of a planetary rover; it is crucial to the rover's functional capability and survival probability. When the planetary rover executes tasks, it needs to react to events in time and to tolerate faults caused by the environment or by itself. To meet these requirements, the planetary rover computing system architecture should be reactive, highly reliable, adaptable, consistent and extendible. This paper introduces a reliable distributed computing system architecture for a planetary rover. This architecture integrates new ideas and technologies from hardware architecture, software architecture, network architecture, fault-tolerance techniques and intelligent control system architecture. The architecture defines three dimensions of fault containment regions: the channel dimension, the lane dimension and the integrity dimension. The whole computing system has three channels. The channels provide the main fault containment regions for system hardware; a channel is the ultimate line of defense against a single physical fault. The lanes are the secondary fault containment regions for physical faults; they can be used to improve the capability for fault diagnosis within a channel and can improve coverage with respect to design faults through hardware and software diversity. They can also serve as backups for each other to improve availability and computing capability. The integrity dimension provides a fault containment region for software design. Its purpose

  17. Metastability for the Blume-Capel model with distribution of magnetic anisotropy using different dynamics.

    PubMed

    Yamamoto, Yoh; Park, Kyungwha

    2013-07-01

    We investigate the relaxation time of magnetization or the lifetime of the metastable state for a spin S=1 square-lattice ferromagnetic Blume-Capel model with distribution of magnetic anisotropy (with small variances), using two different dynamics such as Glauber and phonon-assisted dynamics. At each lattice site, the Blume-Capel model allows three spin projections (+1, 0, -1) and a site-dependent magnetic anisotropy parameter. For each dynamic, we examine the low-temperature lifetime in two dynamic regions with different sizes of the critical droplet and at the boundary between the regions, within the single-droplet regime. We compute the average lifetime of the metastable state for a fixed lattice size, using both kinetic Monte Carlo simulations and the absorbing Markov chains method in the zero-temperature limit. We find that for both dynamics the lifetime obeys a modified Arrhenius-like law, where the energy barrier of the metastable state depends on the temperature and standard deviation of the distribution of magnetic anisotropy for a given field and magnetic anisotropy and that an explicit form of this dependence differs in different dynamic regions for different dynamics. Interestingly, the phonon-assisted dynamic prevents transitions between degenerate states, which results in a large increase in the energy barrier at the region boundary compared to that for the Glauber dynamic. However, the introduction of a small distribution of magnetic anisotropy allows the spin system to relax via lower-energy pathways such that the energy barrier greatly decreases. In addition, for the phonon-assisted dynamic, even the prefactor of the lifetime is substantially reduced for a broad distribution of magnetic anisotropy in both regions considered, in contrast to the Glauber dynamic. Our findings show that overall the phonon-assisted dynamic is more significantly affected by the distribution of magnetic anisotropy than the Glauber dynamic.
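
    A heavily simplified sketch of the Glauber-dynamics half of this study follows: single-spin-flip dynamics for the S = 1 Blume-Capel model with a site-dependent anisotropy D_i, measuring the metastable lifetime as the first time the magnetization changes sign. The sign conventions of the Hamiltonian terms and all parameter values are illustrative assumptions, not the paper's.

    ```python
    # Sketch: Glauber dynamics for the S = 1 Blume-Capel model with
    # distributed anisotropy; lifetime = sweeps until magnetization <= 0.
    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative parameters: L x L lattice, coupling J, field H, inverse
    # temperature beta, and a narrow distribution of anisotropies D_i.
    L, J, Hfield, beta = 16, 1.0, -0.5, 2.0
    D = rng.normal(1.0, 0.05, size=(L, L))
    s = np.ones((L, L), dtype=int)   # metastable start: all spins +1, H < 0

    def site_energy(si, i, j):
        """Energy of spin value si at site (i, j); conventions assumed."""
        nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
              + s[i, (j + 1) % L] + s[i, (j - 1) % L])
        return -J * si * nb - D[i, j] * si * si - Hfield * si

    sweeps = 0
    while s.sum() > 0 and sweeps < 100000:
        for _ in range(L * L):       # one Monte Carlo sweep
            i, j = rng.integers(L, size=2)
            new = rng.choice([v for v in (-1, 0, 1) if v != s[i, j]])
            dE = site_energy(new, i, j) - site_energy(s[i, j], i, j)
            if rng.random() < 1.0 / (1.0 + np.exp(beta * dE)):  # Glauber rate
                s[i, j] = new
        sweeps += 1
    print("metastable lifetime:", sweeps, "sweeps")
    ```

    Averaging this lifetime over many runs and anisotropy draws would give the quantity whose Arrhenius-like temperature dependence the paper analyzes.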

  18. Computer Visualization of Many-Particle Quantum Dynamics

    SciTech Connect

    Ozhigov, A. Y.

    2009-03-10

    In this paper I show the importance of computer visualization in research on many-particle quantum dynamics. Such visualization becomes an indispensable illustrative tool for understanding the behavior of dynamic swarm-based quantum systems. It is also an important component of the corresponding simulation framework, and it can simplify the study of the underlying algorithms for multi-particle quantum systems.

  19. The Computer Simulation of Liquids by Molecular Dynamics.

    ERIC Educational Resources Information Center

    Smith, W.

    1987-01-01

    Proposes a mathematical computer model for the behavior of liquids using the classical dynamic principles of Sir Isaac Newton and the molecular dynamics method invented by other scientists. Concludes that other applications will be successful using supercomputers to go beyond simple Newtonian physics. (CW)

  20. The Computer Simulation of Liquids by Molecular Dynamics.

    ERIC Educational Resources Information Center

    Smith, W.

    1987-01-01

    Proposes a mathematical computer model for the behavior of liquids using the classical dynamic principles of Sir Isaac Newton and the molecular dynamics method invented by other scientists. Concludes that other applications will be successful using supercomputers to go beyond simple Newtonian physics. (CW)

  1. Potential applications of computational fluid dynamics to biofluid analysis

    NASA Technical Reports Server (NTRS)

    Kwak, D.; Chang, J. L. C.; Rogers, S. E.; Rosenfeld, M.; Kwak, D.

    1988-01-01

    Computational fluid dynamics was developed to the stage where it has become an indispensable part of aerospace research and design. In view of advances made in aerospace applications, the computational approach can be used for biofluid mechanics research. Several flow simulation methods developed for aerospace problems are briefly discussed for potential applications to biofluids, especially to blood flow analysis.

  2. (U) Computation acceleration using dynamic memory

    SciTech Connect

    Hakel, Peter

    2014-10-24

    Many computational applications require the repeated use of quantities, whose calculations can be expensive. In order to speed up the overall execution of the program, it is often advantageous to replace computation with extra memory usage. In this approach, computed values are stored and then, when they are needed again, they are quickly retrieved from memory rather than being calculated again at great cost. Sometimes, however, the precise amount of memory needed to store such a collection is not known in advance, and only emerges in the course of running the calculation. One problem accompanying such a situation is wasted memory space in overdimensioned (and possibly sparse) arrays. Another issue is the overhead of copying existing values to a new, larger memory space, if the original allocation turns out to be insufficient. In order to handle these runtime problems, the programmer therefore has the extra task of addressing them in the code.
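
    A minimal illustration of the trade described above follows: repeated expensive computation is replaced by storage, and because a hash map grows on demand, no array needs to be overdimensioned up front and no copy-on-resize logic is required. The report concerns compiled codes with preallocated arrays; Python is used here only to show the idea.

    ```python
    # Sketch: memoization with a dynamically sized hash map instead of a
    # preallocated (and possibly overdimensioned) array.
    import math

    _cache: dict[float, float] = {}

    def expensive_quantity(x: float) -> float:
        """Return a memoized value, computing it only on first request."""
        if x not in _cache:
            # stand-in for a costly kernel evaluation
            _cache[x] = sum(math.sin(k * x) / (k * k) for k in range(1, 200000))
        return _cache[x]

    expensive_quantity(0.7)   # slow: computed and stored
    expensive_quantity(0.7)   # fast: retrieved from memory
    ```

    In Python the standard-library decorator functools.lru_cache provides the same behavior with an optional bound on the cache size.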

  3. Quantum and classical dynamics in adiabatic computation

    NASA Astrophysics Data System (ADS)

    Crowley, P. J. D.; Đurić, T.; Vinci, W.; Warburton, P. A.; Green, A. G.

    2014-10-01

    Adiabatic transport provides a powerful way to manipulate quantum states. By preparing a system in a readily initialized state and then slowly changing its Hamiltonian, one may achieve quantum states that would otherwise be inaccessible. Moreover, a judicious choice of final Hamiltonian whose ground state encodes the solution to a problem allows adiabatic transport to be used for universal quantum computation. However, the dephasing effects of the environment limit the quantum correlations that an open system can support and degrade the power of such adiabatic computation. We quantify this effect by allowing the system to evolve over a restricted set of quantum states, providing a link between physically inspired classical optimization algorithms and quantum adiabatic optimization. This perspective allows us to develop benchmarks to bound the quantum correlations harnessed by an adiabatic computation. We apply these to the D-Wave Vesuvius machine with revealing—though inconclusive—results.

  4. File and metadata management for BESIII distributed computing

    NASA Astrophysics Data System (ADS)

    Nicholson, C.; Lin, L.; Deng, Z. Y.; Li, W. D.; Zhang, X. M.; Zheng, Y. H.

    2012-12-01

    The BESIII experiment at the Institute of High Energy Physics (IHEP), Beijing, uses the high-luminosity BEPCII e+e- collider to study physics in the τ-charm energy region around 3.7 GeV; BEPCII has produced the world's largest samples of J/ψ and ψ′ events to date. An order of magnitude increase in the data sample size over the 2011-2012 data-taking period demanded a move from a very centralized to a distributed computing environment, as well as the development of an efficient file and metadata management system. While BESIII is on a smaller scale than some other HEP experiments, this poses particular challenges for its distributed computing and data management system. These constraints include limited resources and manpower, and low quality of network connections to IHEP. Drawing on the rich experience of the HEP community, a system has been developed which meets these constraints. The design and development of the BESIII distributed data management system, including its integration with other BESIII distributed computing components, such as job management, are presented here.

  5. Dynamic oscillations predicted by computer studies

    SciTech Connect

    Butts, M.M.; Smith, H.S.

    1991-01-01

    During the latter part of 1988, a study was begun to review the dynamic stability performance of a power company's plant. The scope of the study was to identify any operating conditions that might contribute to system oscillations and to examine alternative solutions that would control these oscillations. The study was performed in several phases. This paper discusses the study process, utilizing two different software packages for the analysis: Dynamic stability studies using time-domain software and Eigenvalue analysis using frequency-domain software.

  6. Towards a Population Dynamics Theory for Evolutionary Computing: Learning from Biological Population Dynamics in Nature

    NASA Astrophysics Data System (ADS)

    Ma, Zhanshan (Sam)

    In evolutionary computing (EC), population size is one of the critical parameters that a researcher has to deal with. Hence, it was no surprise that the pioneers of EC, such as De Jong (1975) and Holland (1975), had already studied population sizing from the very beginning of EC. What is perhaps surprising is that more than three decades later, we still largely depend on experience or an ad-hoc trial-and-error approach to set the population size. For example, in a recent monograph, Eiben and Smith (2003) indicated: "In almost all EC applications, the population size is constant and does not change during the evolutionary search." Despite enormous research on this issue in recent years, we still lack a well-accepted theory for population sizing. In this paper, I propose to develop a population dynamics theory for EC with inspiration from the population dynamics theory of biological populations in nature. Essentially, the EC population is considered as a dynamic system over time (generations) and space (search space or fitness landscape), similar to the spatial and temporal dynamics of biological populations in nature. With this conceptual mapping, I propose to 'transplant' the biological population dynamics theory to EC via three steps: (i) experimentally test the feasibility (whether or not emulating natural population dynamics improves the EC performance); (ii) comparatively study the underlying mechanisms (why there are improvements), primarily via statistical modeling analysis; (iii) conduct theoretical analysis with theoretical models such as percolation theory and extended evolutionary game theory that are generally applicable to both EC and natural populations. This article is a summary of a series of studies we have performed to achieve the general goal [27][30]-[32]. In the following, I start with an extremely brief introduction to the theory and models of natural population dynamics (Sections 1 & 2). In Sections 4 to 6, I briefly discuss three

  7. Symplectic molecular dynamics simulations on specially designed parallel computers.

    PubMed

    Borstnik, Urban; Janezic, Dusanka

    2005-01-01

    We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches, fewer steps and faster execution of each step, enables fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
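
    The split-integration idea can be illustrated on a toy one-dimensional oscillator (a sketch of the generic kick/analytic-propagation splitting, not the actual SISM equations): the stiff harmonic force is advanced analytically as a phase-space rotation, so the time step is limited only by the weak anharmonic force, which enters as half-step kicks.

        import numpy as np

        omega, eps = 10.0, 0.1      # stiff harmonic frequency, weak anharmonicity
        dt, steps = 0.05, 2000      # step not limited by the fast vibration

        def slow_force(x):
            return -eps * x ** 3    # soft force, treated numerically

        x, v = 1.0, 0.0
        E0 = 0.5 * v**2 + 0.5 * omega**2 * x**2 + 0.25 * eps * x**4
        for _ in range(steps):
            v += 0.5 * dt * slow_force(x)                    # half kick
            c, s = np.cos(omega * dt), np.sin(omega * dt)    # exact harmonic flow:
            x, v = x * c + (v / omega) * s, v * c - x * omega * s
            v += 0.5 * dt * slow_force(x)                    # half kick
        E1 = 0.5 * v**2 + 0.5 * omega**2 * x**2 + 0.25 * eps * x**4
        print("relative energy drift:", abs(E1 - E0) / E0)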

  8. Parallel Domain Decomposition Preconditioning for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai; Kutler, Paul (Technical Monitor)

    1998-01-01

    This viewgraph presentation gives an overview of the parallel domain decomposition preconditioning for computational fluid dynamics. Details are given on some difficult fluid flow problems, stabilized spatial discretizations, and Newton's method for solving the discretized flow equations. Schur complement domain decomposition is described through basic formulation, simplifying strategies (including iterative subdomain and Schur complement solves, matrix element dropping, localized Schur complement computation, and supersparse computations), and performance evaluation.
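
    For readers unfamiliar with the basic formulation, here is a two-block worked example of a Schur complement solve in NumPy (our illustration; the presentation itself concerns its use as a preconditioner for stabilized discretizations):

        import numpy as np

        rng = np.random.default_rng(0)
        n_i, n_g = 6, 2                 # interior and interface unknowns
        A = rng.normal(size=(n_i + n_g,) * 2) + 10.0 * np.eye(n_i + n_g)
        b = rng.normal(size=n_i + n_g)

        A_ii, A_ig = A[:n_i, :n_i], A[:n_i, n_i:]
        A_gi, A_gg = A[n_i:, :n_i], A[n_i:, n_i:]
        b_i, b_g = b[:n_i], b[n_i:]

        # Eliminate the interior: S = A_gg - A_gi A_ii^{-1} A_ig acts on the
        # interface alone, which is what the domain decomposition method solves
        # (exactly here; approximately, via dropping, in a preconditioner).
        S = A_gg - A_gi @ np.linalg.solve(A_ii, A_ig)
        x_g = np.linalg.solve(S, b_g - A_gi @ np.linalg.solve(A_ii, b_i))
        x_i = np.linalg.solve(A_ii, b_i - A_ig @ x_g)

        assert np.allclose(A @ np.concatenate([x_i, x_g]), b)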

  9. Computational spectroscopy, dynamics, and photochemistry of photosensory flavoproteins.

    PubMed

    Domratcheva, Tatiana; Udvarhelyi, Anikó; Shahi, Abdul Rehaman Moughal

    2014-01-01

    Extensive interest in photosensory proteins stimulated computational studies of flavins and flavoproteins in the past decade. This review is dedicated to the three central topics of these studies: calculations of flavin UV-visible and IR spectra, simulated dynamics of photoreceptor proteins, and flavin photochemistry. Accordingly, this chapter is divided into three parts; each part describes corresponding computational protocols, summarizes computational results, and discusses the emerging mechanistic picture.

  10. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    NASA Astrophysics Data System (ADS)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
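
    The key ingredient, distance-dependent connectivity, is easy to reproduce. A hedged sketch (parameter values are ours, not the paper's): neurons on a ring connect with probability decaying exponentially in their separation.

        import numpy as np

        rng = np.random.default_rng(1)
        n, p0, sigma = 200, 0.5, 0.1

        pos = np.linspace(0.0, 1.0, n, endpoint=False)   # positions on a ring
        d = np.abs(pos[:, None] - pos[None, :])
        d = np.minimum(d, 1.0 - d)                       # periodic distance
        prob = p0 * np.exp(-d / sigma)                   # decays with distance
        np.fill_diagonal(prob, 0.0)                      # no self-connections

        W = (rng.random((n, n)) < prob).astype(float)    # sampled adjacency
        print("mean out-degree:", W.sum(axis=0).mean())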

  11. Dynamics of the return distribution in the Korean financial market

    NASA Astrophysics Data System (ADS)

    Yang, Jae-Suk; Chae, Seungbyung; Jung, Woo-Sung; Moon, Hie-Tae

    2006-03-01

    In this paper, we studied the dynamics of the log-return distribution of the Korea Composite Stock Price Index (KOSPI) from 1992 to 2004. Based on the microscopic spin model, we found that while the index during the late 1990s showed a power-law distribution, the distribution in the early 2000s was exponential. This change in distribution shape was caused by the duration and velocity, among other parameters, of the information that flowed into the market.

  12. Distributed Computer Networks in Support of Complex Group Practices

    PubMed Central

    Wess, Bernard P.

    1978-01-01

    The economics of medical computer networks are presented in context with the patient care and administrative goals of medical networks. Design alternatives and network topologies are discussed with an emphasis on medical network design requirements in distributed data base design, telecommunications, satellite systems, and software engineering. The success of the medical computer networking technology is predicated on the ability of medical and data processing professionals to design comprehensive, efficient, and virtually impenetrable security systems to protect data bases, network access and services, and patient confidentiality.

  13. EST analysis pipeline: use of distributed computing resources.

    PubMed

    González, Francisco Javier; Vizcaíno, Juan Antonio

    2011-01-01

    This chapter describes how a pipeline for the analysis of expressed sequence tag (EST) data can be implemented, based on our previous experience generating ESTs from Trichoderma spp. We focus on key steps in the workflow, such as the processing of raw data from the sequencers, the clustering of ESTs, and the functional annotation of the sequences using BLAST, InterProScan, and BLAST2GO. Some of the steps require the use of intensive computing power. Since these resources are not available for small research groups or institutes without bioinformatics support, an alternative will be described: the use of distributed computing resources (local grids and Amazon EC2).

  14. ISIS: A System for Fault-Tolerant Distributed Computing

    DTIC Science & Technology

    1986-04-01

    ISIS: A System for Fault-Tolerant Distributed Computing. Kenneth P. Birman. TR 86-744, April 1986, Department of Computer Science, Cornell University, Ithaca, New York. Abstract fragment: ISIS extends a conventional operating system by introducing a new programming abstraction, the resilient object.

  15. Common Accounting System for Monitoring the ATLAS Distributed Computing Resources

    NASA Astrophysics Data System (ADS)

    Karavakis, E.; Andreeva, J.; Campana, S.; Gayazov, S.; Jezequel, S.; Saiz, P.; Sargsyan, L.; Schovancova, J.; Ueda, I.; Atlas Collaboration

    2014-06-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources, either generic or ATLAS-specific. This set of tools provides high-quality, scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  16. Computed voltage distributions around solar electric propulsion spacecraft

    NASA Technical Reports Server (NTRS)

    Stevens, N. J.

    1979-01-01

    The NASA Charging Analyzer Program is used to conduct preliminary computations of the voltage distributions around such large spacecraft in geomagnetic substorm environments at geosynchronous altitudes. Both a standard operating voltage (+ or - 150 volts on solar arrays) and a direct-drive (+1200 volts on arrays) configuration are considered. Thruster-off simulations are computed for both operating voltage configurations, while the effect of simulated thruster-on conditions is evaluated only for the direct-drive configuration. These simulated thruster operations appear to alleviate surface charging.

  17. Temporal Distributional Limit Theorems for Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Dolgopyat, Dmitry; Sarig, Omri

    2017-02-01

    Suppose {T^t} is a Borel flow on a complete separable metric space X, f: X → ℝ is Borel, and x ∈ X. A temporal distributional limit theorem is a scaling limit for the distributions of the random variables X_T := ∫_0^t f(T^s x) ds, where t is chosen uniformly at random from [0, T], x is fixed, and T → ∞. We discuss such laws for irrational rotations, Anosov flows, and horocycle flows.

  18. The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations

    PubMed Central

    Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka

    2011-01-01

    Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007

  19. Dynamic traffic assignment on parallel computers

    SciTech Connect

    Nagel, K.; Frye, R.; Jakob, R.; Rickert, M.; Stretz, P.

    1998-12-01

    The authors describe part of the current framework of the TRANSIMS traffic research project at the Los Alamos National Laboratory. It includes parallel implementations of a route planner and a microscopic traffic simulation model. They present performance figures and results of an offline load-balancing scheme used in one of the iterative re-planning runs required for dynamic route assignment.

  20. Numerical ray-tracing approach with laser intensity distribution for LIDAR signal power function computation

    NASA Astrophysics Data System (ADS)

    Shi, Guangyuan; Li, Song; Huang, Ke; Li, Zile; Zheng, Guoxing

    2016-10-01

    We have developed a new numerical ray-tracing approach for LIDAR signal power function computation, in which the light round-trip propagation is analyzed by geometrical optics and a simple experiment is employed to acquire the laser intensity distribution. It is relatively more accurate and flexible than previous methods. We discuss in particular the relationship between the inclination angle and the dynamic range of the detector output signal in a biaxial LIDAR system. Results indicate that an appropriate negative angle can compress the signal dynamic range. This technique has been successfully validated by comparison with real measurements.

  1. Distributed-Memory Computing With the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA)

    NASA Technical Reports Server (NTRS)

    Riley, Christopher J.; Cheatwood, F. McNeil

    1997-01-01

    The Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA), a Navier-Stokes solver, has been modified for use in a parallel, distributed-memory environment using the Message-Passing Interface (MPI) standard. A standard domain decomposition strategy is used in which the computational domain is divided into subdomains with each subdomain assigned to a processor. Performance is examined on dedicated parallel machines and a network of desktop workstations. The effect of domain decomposition and frequency of boundary updates on performance and convergence is also examined for several realistic configurations and conditions typical of large-scale computational fluid dynamic analysis.

  2. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  3. Parallel and distributed computation for fault-tolerant object recognition

    NASA Technical Reports Server (NTRS)

    Wechsler, Harry

    1988-01-01

    The distributed associative memory (DAM) model is suggested for distributed and fault-tolerant computation as it relates to object recognition tasks. The fault-tolerance is with respect to geometrical distortions (scale and rotation), noisy inputs, occlusion/overlap, and memory faults. An experimental system was developed for fault-tolerant structure recognition which shows the feasibility of such an approach. The approach is further extended to the problem of multisensory data integration and applied successfully to the recognition of colored polyhedral objects.

  4. Generalized dynamic engine simulation techniques for the digital computer

    NASA Technical Reports Server (NTRS)

    Sellers, J.; Teren, F.

    1974-01-01

    Recently advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design-point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar all-digital programs on future engine simulation philosophy is also discussed.

  5. Generalized dynamic engine simulation techniques for the digital computer

    NASA Technical Reports Server (NTRS)

    Sellers, J.; Teren, F.

    1974-01-01

    Recently advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design-point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar all-digital programs on future engine simulation philosophy is also discussed.

  6. Generalized dynamic engine simulation techniques for the digital computers

    NASA Technical Reports Server (NTRS)

    Sellers, J.; Teren, F.

    1975-01-01

    Recently advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar digital programs on future engine simulation philosophy is also discussed.

  7. Integrating Xgrid into the HENP distributed computing model

    NASA Astrophysics Data System (ADS)

    Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.

    2008-07-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortless for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  8. A biological solution to a fundamental distributed computing problem.

    PubMed

    Afek, Yehuda; Alon, Noga; Barad, Omer; Hornstein, Eran; Barkai, Naama; Bar-Joseph, Ziv

    2011-01-14

    Computational and biological systems are often distributed so that processors (cells) jointly solve a task, without any of them receiving all inputs or observing all outputs. Maximal independent set (MIS) selection is a fundamental distributed computing procedure that seeks to elect a set of local leaders in a network. A variant of this problem is solved during the development of the fly's nervous system, when sensory organ precursor (SOP) cells are chosen. By studying SOP selection, we derived a fast algorithm for MIS selection that combines two attractive features. First, processors do not need to know their degree; second, it has an optimal message complexity while only using one-bit messages. Our findings suggest that simple and efficient algorithms can be developed on the basis of biologically derived insights.
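
    The flavor of such an algorithm is easy to convey in code. A hedged sketch of a randomized MIS election in this spirit (ours, not the authors' implementation): undecided nodes "fire" a one-bit message with fixed probability, never consulting their degree.

        import random

        def mis(adj, rounds=1000, p=0.1, seed=0):
            rng = random.Random(seed)
            undecided, leaders = set(adj), set()
            for _ in range(rounds):
                if not undecided:
                    break
                fired = {v for v in undecided if rng.random() < p}
                for v in fired:
                    # A node that fires alone in its neighbourhood becomes a leader.
                    if not any(u in fired for u in adj[v]):
                        leaders.add(v)
                # Leaders and their neighbours drop out of the election.
                undecided -= leaders | {u for v in leaders for u in adj[v]}
            return leaders

        adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}   # a 6-cycle
        print(sorted(mis(adj)))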

  9. Semiquantum key distribution with secure delegated quantum computation

    PubMed Central

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-01

    Semiquantum key distribution allows a quantum party to share a random key with a “classical” party who only can prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution. PMID:26813384

  10. Computer-aided coordination and overcurrent protection for distribution systems

    SciTech Connect

    Tolbert, L.M.

    1995-03-01

    Overcurrent protection and coordination studies for electrical distribution systems have become much easier to perform with the emergence of several commercially available software programs that run on a personal computer. These programs have built-in libraries of protective device time-current curves, damage curves for cable and transformers, and motor starting curves, thereby facilitating the design of a selectively coordinated protection system which is also well-protected. Additionally, design time when utilizing computers is far less than the previous method of tracing manufacturers' curves on transparent paper. Basic protection and coordination principles are presented in this paper along with several helpful suggestions for designing electrical protection systems. A step-by-step methodology is presented to illustrate the design concepts when using software for selecting and coordinating the protective devices in distribution systems.

  11. Semiquantum key distribution with secure delegated quantum computation.

    PubMed

    Li, Qin; Chan, Wai Hong; Zhang, Shengyu

    2016-01-27

    Semiquantum key distribution allows a quantum party to share a random key with a "classical" party who only can prepare and measure qubits in the computational basis or reorder some qubits when he has access to a quantum channel. In this work, we present a protocol where a secret key can be established between a quantum user and an almost classical user who only needs the quantum ability to access quantum channels, by securely delegating quantum computation to a quantum server. We show the proposed protocol is robust even when the delegated quantum server is a powerful adversary, and is experimentally feasible with current technology. As one party of our protocol is the most quantum-resource efficient, it can be more practical and significantly widen the applicability scope of quantum key distribution.

  12. Optimal eigenvalue computation on distributed-memory MIMD multiprocessors

    SciTech Connect

    Crivelli, S.; Jessup, E. R.

    1992-10-01

    Simon proves that bisection is not the optimal method for computing an eigenvalue on a single vector processor. In this paper, we show that his analysis does not extend in a straightforward way to the computation of an eigenvalue on a distributed-memory MIMD multiprocessor. In particular, we show how the optimal number of sections (and processors) to use for multisection depends on variables such as the matrix size and certain parameters inherent to the machine. We also show that parallel multisection outperforms the variant of parallel bisection proposed by Swarztrauber for this problem on a distributed-memory MIMD multiprocessor. We present the results of experiments on the 64-processor Intel iPSC/2 hypercube and the 512-processor Intel Touchstone Delta mesh multiprocessor.
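
    To make the method concrete, here is a hedged serial sketch of multisection via Sturm-sequence counts for a symmetric tridiagonal matrix (our illustration; in the setting of the paper each section boundary would be evaluated by a separate processor):

        import numpy as np

        def count_below(diag, off, x):
            """Eigenvalues of the tridiagonal matrix smaller than x, counted
            via the signs of the LDL^T pivots (Sturm sequence)."""
            count, d = 0, diag[0] - x
            for i in range(1, len(diag)):
                if d < 0:
                    count += 1
                if d == 0.0:
                    d = 1e-300          # nudge an exactly-zero pivot
                d = diag[i] - x - off[i - 1] ** 2 / d
            return count + (1 if d < 0 else 0)

        def multisection(diag, off, lo, hi, k, sections=8, tol=1e-10):
            """Shrink [lo, hi] around the k-th smallest eigenvalue; bisection
            is the special case sections = 2."""
            while hi - lo > tol:
                pts = np.linspace(lo, hi, sections + 1)
                counts = [count_below(diag, off, p) for p in pts]  # parallelizable
                for j in range(sections):
                    if counts[j] < k <= counts[j + 1]:
                        lo, hi = pts[j], pts[j + 1]
                        break
            return 0.5 * (lo + hi)

        n = 50
        diag, off = np.full(n, 2.0), np.full(n - 1, -1.0)
        print(multisection(diag, off, 0.0, 4.0, k=1),
              2 - 2 * np.cos(np.pi / (n + 1)))      # exact smallest eigenvalue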

  13. Neural Computations in a Dynamical System with Multiple Time Scales

    PubMed Central

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what is the computational benefit for the brain to have such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions. PMID:27679569

  14. A new computational structure for real-time dynamics

    SciTech Connect

    Izaguirre, A.; Hashimoto, Minoru

    1992-08-01

    The authors present an efficient structure for the computation of robot dynamics in real time. The fundamental characteristic of this structure is the division of the computation into a high-priority synchronous task and low-priority background tasks, possibly sharing the resources of a conventional computing unit based on commercial microprocessors. The background tasks compute the inertial and gravitational coefficients as well as the forces due to the velocities of the joints. In each control sample period, the high-priority synchronous task computes the product of the inertial coefficients by the accelerations of the joints and performs the summation of the torques due to the velocities and gravitational forces. Kircanski et al. (1986) have shown that the bandwidth of the variation of joint angles and of their velocities is an order of magnitude less than the variation of joint accelerations. This result agrees with the experiments the authors have carried out using a PUMA 260 robot. Two main strategies contribute to reduce the computational burden associated with the evaluation of the dynamic equations. The first involves the use of efficient algorithms for the evaluation of the equations. The second is aimed at reducing the number of dynamic parameters by identifying beforehand the linear dependencies among these parameters, as well as carrying out a significance analysis of the parameters' contribution to the final joint torques. The actual code used to evaluate this dynamic model is entirely computer generated from experimental data, requiring no other manual intervention than performing a campaign of measurements.
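
    The division of labour described above can be sketched in a few lines (illustrative stand-in dynamics, not the authors' robot model):

        import numpy as np

        class TwoRateDynamics:
            """Slow background task refreshes configuration-dependent terms;
            the fast synchronous task only multiplies and adds each sample."""

            def __init__(self, n_joints):
                self.M = np.eye(n_joints)       # inertial coefficients
                self.h = np.zeros(n_joints)     # velocity + gravity torques

            def background_update(self, q, qd):
                # Expensive; may run every few sample periods, because q and qd
                # vary an order of magnitude more slowly than the accelerations.
                self.M = np.diag(1.0 + 0.1 * np.cos(q))     # stand-in inertia
                self.h = 0.05 * qd + 0.981 * np.sin(q)      # stand-in h(q, qd)

            def synchronous_torque(self, qdd):
                # High-priority, every control sample: tau = M(q) qdd + h(q, qd).
                return self.M @ qdd + self.h

        dyn = TwoRateDynamics(3)
        dyn.background_update(q=np.zeros(3), qd=np.zeros(3))
        print(dyn.synchronous_torque(qdd=np.array([0.1, 0.0, -0.2])))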

  15. A fractal approach to dynamic inference and distribution analysis

    PubMed Central

    van Rooij, Marieke M. J. W.; Nash, Bertha A.; Rajaraman, Srinivasan; Holden, John G.

    2013-01-01

    Event-distributions inform scientists about the variability and dispersion of repeated measurements. This dispersion can be understood from a complex systems perspective, and quantified in terms of fractal geometry. The key premise is that a distribution's shape reveals information about the governing dynamics of the system that gave rise to the distribution. Two categories of characteristic dynamics are distinguished: additive systems governed by component-dominant dynamics and multiplicative or interdependent systems governed by interaction-dominant dynamics. A logic by which systems governed by interaction-dominant dynamics are expected to yield mixtures of lognormal and inverse power-law samples is discussed. These mixtures are described by a so-called cocktail model of response times derived from human cognitive performances. The overarching goals of this article are twofold: First, to offer readers an introduction to this theoretical perspective and second, to offer an overview of the related statistical methods. PMID:23372552

  16. A fractal approach to dynamic inference and distribution analysis.

    PubMed

    van Rooij, Marieke M J W; Nash, Bertha A; Rajaraman, Srinivasan; Holden, John G

    2013-01-01

    Event-distributions inform scientists about the variability and dispersion of repeated measurements. This dispersion can be understood from a complex systems perspective, and quantified in terms of fractal geometry. The key premise is that a distribution's shape reveals information about the governing dynamics of the system that gave rise to the distribution. Two categories of characteristic dynamics are distinguished: additive systems governed by component-dominant dynamics and multiplicative or interdependent systems governed by interaction-dominant dynamics. A logic by which systems governed by interaction-dominant dynamics are expected to yield mixtures of lognormal and inverse power-law samples is discussed. These mixtures are described by a so-called cocktail model of response times derived from human cognitive performances. The overarching goals of this article are twofold: First, to offer readers an introduction to this theoretical perspective and second, to offer an overview of the related statistical methods.
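
    A hedged sampler for such a cocktail (parameter values are ours, purely for illustration): a proportion p of trials comes from a lognormal body, the remainder from an inverse power-law (Pareto) tail.

        import numpy as np

        rng = np.random.default_rng(5)
        n, p = 100_000, 0.6
        mu, sigma = -0.5, 0.4        # lognormal component
        alpha, t_min = 2.5, 1.0      # tail with P(T > t) ~ (t / t_min)^(-alpha)

        body = rng.random(n) < p
        t = np.where(body,
                     rng.lognormal(mu, sigma, n),
                     t_min * (1.0 - rng.random(n)) ** (-1.0 / alpha))  # inverse CDF
        print("median:", np.median(t), "99th percentile:", np.quantile(t, 0.99))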

  17. Osmosis : a molecular dynamics computer simulation study

    NASA Astrophysics Data System (ADS)

    Lion, Thomas

    Osmosis is a phenomenon of critical importance in a variety of processes ranging from the transport of ions across cell membranes and the regulation of blood salt levels by the kidneys to the desalination of water and the production of clean energy using potential osmotic power plants. However, despite its importance and over one hundred years of study, there is an ongoing confusion concerning the nature of the microscopic dynamics of the solvent particles in their transfer across the membrane. In this thesis the microscopic dynamical processes underlying osmotic pressure and concentration gradients are investigated using molecular dynamics (MD) simulations. I first present a new derivation for the local pressure that can be used for determining osmotic pressure gradients. Using this result, the steady-state osmotic pressure is studied in a minimal model for an osmotic system and the steady-state density gradients are explained using a simple mechanistic hopping model for the solvent particles. The simulation setup is then modified, allowing us to explore the timescales involved in the relaxation dynamics of the system in the period preceding the steady state. Further consideration is also given to the relative roles of diffusive and non-diffusive solvent transport in this period. Finally, in a novel modification to the classic osmosis experiment, the solute particles are driven out of equilibrium by the input of energy. The effect of this modification on the osmotic pressure and the osmotic flow is studied, and we find that active solute particles can cause reverse osmosis to occur. The possibility of defining a new "osmotic effective temperature" is also considered and compared to the results of diffusive and kinetic temperatures.

  18. Robot-Arm Dynamic Control by Computer

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Tarn, Tzyh J.; Chen, Yilong J.

    1987-01-01

    Feedforward and feedback schemes linearize responses to control inputs. A method for control of a robot arm is based on computed nonlinear feedback and state transformations that linearize the system and decouple the robot end-effector motions along each of the Cartesian axes, augmented with an optimal scheme for correction of errors in the workspace. The major new feature of the control method is that the optimal error-correction loop operates directly on the task level and not on the joint-servocontrol level.
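
    For a single-link arm, the computed-torque form of this idea fits in a few lines (a textbook-style sketch under an assumed plant model, not the scheme's actual implementation):

        import numpy as np

        # Assumed plant: I*qdd + b*qd + m*g*l*sin(q) = tau.
        I, b, m, g, l = 0.5, 0.1, 1.0, 9.81, 0.3
        Kp, Kv = 100.0, 20.0          # gains of the outer linear loop

        def computed_torque(q, qd, q_des, qd_des, qdd_des):
            e, ed = q_des - q, qd_des - qd
            v = qdd_des + Kv * ed + Kp * e                 # linear error correction
            return I * v + b * qd + m * g * l * np.sin(q)  # cancel the nonlinearity

        dt, q, qd = 1e-3, 0.0, 0.0                         # track a sinusoid
        for k in range(5000):
            t = k * dt
            tau = computed_torque(q, qd, np.sin(t), np.cos(t), -np.sin(t))
            qdd = (tau - b * qd - m * g * l * np.sin(q)) / I
            q, qd = q + dt * qd, qd + dt * qdd
        print("tracking error after 5 s:", abs(np.sin(5.0) - q))

    With exact cancellation the closed loop reduces to the linear error dynamics e'' + Kv e' + Kp e = 0, which is the decoupling the summary refers to.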

  19. Distributed Cognition (DCOG): Foundations for a Computational Associative Memory Model

    DTIC Science & Technology

    2006-08-01

    Distributed Cognition (DCOG): Foundations for a Computational Associative Memory Model. Robert G. Eggleston. AFRL-HE-WP-TR-2006-0160. Abstract fragment: "This isolates the skateboard as the one that doesn't belong. Certain automatic, attention-shifting mechanisms will be required in our model."

  20. Dynamics of Bottlebrush Networks: A Computational Study

    NASA Astrophysics Data System (ADS)

    Dobrynin, Andrey; Cao, Zhen; Sheiko, Sergei

    We study the dynamics of deformation of bottlebrush networks using molecular dynamics simulations and theoretical calculations. Analysis of our simulation results shows that the dynamics of bottlebrush network deformation can be described by a Rouse model for polydisperse networks with an effective Rouse time of the bottlebrush network strand, τ_R = τ_0 N_s^2 (N_sc + 1), where N_s is the number-average degree of polymerization of the bottlebrush backbone strands between crosslinks, N_sc is the degree of polymerization of the side chains, and τ_0 is a characteristic monomeric relaxation time. At time scales t smaller than the Rouse time, t < τ_R, the time-dependent network shear modulus decays with time as G(t) ~ ρ k_B T (τ_0/t)^{1/2}, where ρ is the monomer number density. However, at time scales t larger than the Rouse time of the bottlebrush strands between crosslinks, the network response is purely elastic, with shear modulus G(t) = G_0, where G_0 is the equilibrium shear modulus at small deformation. The stress evolution in the bottlebrush networks can be described by a universal function of t/τ_R. NSF DMR-1409710.
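
    A quick consistency check (our own substitution, not a result quoted from the abstract): matching the short-time decay to the plateau at t = τ_R gives

        G_0 \sim \rho k_B T \left( \frac{\tau_0}{\tau_R} \right)^{1/2}
            = \frac{\rho k_B T}{N_s \sqrt{N_{sc} + 1}},

    so longer side chains (larger N_sc) dilute the stress-bearing backbone and soften the plateau.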

  1. Distributed Learning and Information Dynamics In Networked Autonomous Systems

    DTIC Science & Technology

    2015-11-20

    Distributed Learning and Information Dynamics in Networked Autonomous Systems. Eric Feron, Georgia Tech Research Corporation. AFRL-AFOSR-VA-TR-2015-0387 (MURI-09); period of performance 2009 to June 30, 2015. Abstract fragment: the project concerns operations of teams of autonomous vehicles that learn and adapt to uncertain and hostile environments under effective utilization of communications resources.

  2. Equilibrium distribution of heavy quarks in Fokker-Planck dynamics

    PubMed

    Walton; Rafelski

    2000-01-03

    We obtain an explicit generalization, within Fokker-Planck dynamics, of Einstein's relation between drag, diffusion, and the equilibrium distribution for a spatially homogeneous system, considering both the transverse and longitudinal diffusion for dimension n>1. We provide a complete characterization of the equilibrium distribution in terms of the drag and diffusion transport coefficients. We apply this analysis to charm quark dynamics in a thermal quark-gluon plasma for the case of collisional equilibration.
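
    As a one-dimensional illustration of the kind of relation meant here (our sketch with a generic momentum-space drag A(p) and diffusion D(p); the paper treats n > 1 with distinct transverse and longitudinal coefficients), the zero-flux stationary state of

        \partial_t f = \partial_p \left[ A(p)\, f + \partial_p \bigl( D(p)\, f \bigr) \right]

    integrates to

        f_{\mathrm{eq}}(p) \propto \frac{1}{D(p)} \exp\left( - \int^{p} \frac{A(p')}{D(p')}\, dp' \right),

    and demanding that f_eq be the thermal distribution fixes the drag in terms of the diffusion, which is the Einstein relation in this setting.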

  3. Liquid rocket performance computer model with distributed energy release

    NASA Technical Reports Server (NTRS)

    Combs, L. P.

    1972-01-01

    Development of a computer program for analyzing the effects of bipropellant spray combustion processes on liquid rocket performance is described and discussed. The distributed energy release (DER) computer program was designed to become part of the JANNAF liquid rocket performance evaluation methodology and to account for performance losses associated with the propellant combustion processes, e.g., incomplete spray gasification, imperfect mixing between sprays and their reacting vapors, residual mixture ratio striations in the flow, and two-phase flow effects. The DER computer program begins by initializing the combustion field at the injection end of a conventional liquid rocket engine, based on injector and chamber design detail, and on propellant and combustion gas properties. It analyzes bipropellant combustion, proceeding stepwise down the chamber from those initial conditions through the nozzle throat.

  4. Computationally intensive econometrics using a distributed matrix-programming language.

    PubMed

    Doornik, Jurgen A; Hendry, David F; Shephard, Neil

    2002-06-15

    This paper reviews the need for powerful computing facilities in econometrics, focusing on concrete problems which arise in financial economics and in macroeconomics. We argue that the profession is being held back by the lack of easy-to-use generic software which is able to exploit the availability of cheap clusters of distributed computers. Our response is to extend, in a number of directions, the well-known matrix-programming interpreted language Ox developed by the first author. We note three possible levels of extensions: (i) Ox with parallelization explicit in the Ox code; (ii) Ox with a parallelized run-time library; and (iii) Ox with a parallelized interpreter. This paper studies and implements the first case, emphasizing the need for deterministic computing in science. We give examples in the context of financial economics and time-series modelling.

  5. Ensuring data consistency over CMS distributed computing system

    SciTech Connect

    Rossman, Paul; /Fermilab

    2009-05-01

    CMS utilizes a distributed infrastructure of computing centers to custodially store data, to provide organized processing resources, and to provide analysis computing resources for users. Integrated over the whole system, even in the first year of data taking, the available disk storage approaches 10 petabytes of space. Maintaining consistency between the data bookkeeping, the data transfer system, and physical storage is an interesting technical and operations challenge. In this paper we will discuss the CMS effort to ensure that data is consistently available at all computing centers. We will discuss the technical tools that monitor the consistency of the catalogs and the physical storage as well as the operations model used to find and solve inconsistencies.

  6. Multi-VO support in IHEP's distributed computing environment

    NASA Astrophysics Data System (ADS)

    Yan, T.; Suo, B.; Zhao, X. H.; Zhang, X. M.; Ma, Z. T.; Yan, X. F.; Lin, T.; Deng, Z. Y.; Li, W. D.; Belov, S.; Pelevanyuk, I.; Zhemchugov, A.; Cai, H.

    2015-12-01

    Inspired by the success of BESDIRAC, the distributed computing environment based on DIRAC for the BESIII experiment, several other experiments operated by the Institute of High Energy Physics (IHEP), such as the Circular Electron Positron Collider (CEPC), the Jiangmen Underground Neutrino Observatory (JUNO), the Large High Altitude Air Shower Observatory (LHAASO) and the Hard X-ray Modulation Telescope (HXMT), are willing to use DIRAC to integrate the geographically distributed computing resources made available by their collaborations. In order to minimize manpower and hardware cost, we extended the BESDIRAC platform to support a multi-VO scenario, instead of setting up a self-contained distributed computing environment for each VO. This provides DIRAC as a service for the community of those experiments. To support multiple VOs, the system architecture of BESDIRAC is adjusted for scalability. The VOMS and DIRAC servers are reconfigured to manage users and groups belonging to several VOs. A lightweight storage resource manager, StoRM, is employed as the central SE to integrate local and grid data. A frontend system is designed for users' massive job splitting, submission and management, with plugins to support new VOs. A monitoring and accounting system is also considered to ease system administration and VO-related resource usage accounting.

  7. Dynamic Stall Computations Using a Zonal Navier-Stokes Model

    DTIC Science & Technology

    1988-06-01

    Dynamic Stall Computations Using a Zonal Navier-Stokes Model. Conroyd, Jack H. Abstract fragment: the method is used to calculate the flow field about a NACA 0012 airfoil oscillating in pitch; surface pressure distributions and integrated lift, pitching moment, and drag coefficient versus angle of attack are compared to existing experimental data for four cases and to existing computational results.

  8. Algorithm-dependent fault tolerance for distributed computing

    SciTech Connect

    P. D. Hough; M. e. Goldsby; E. J. Walsh

    2000-02-01

    Large-scale distributed systems assembled from commodity parts, like CPlant, have become common tools in the distributed computing world. Because of their size and diversity of parts, these systems are prone to failures. Applications that are being run on these systems have not been equipped to efficiently deal with failures, nor is there vendor support for fault tolerance. Thus, when a failure occurs, the application crashes. While most programmers make use of checkpoints to allow for restarting of their applications, this is cumbersome and incurs substantial overhead. In many cases, there are more efficient and more elegant ways in which to address failures. The goal of this project is to develop a software architecture for the detection of and recovery from faults in a cluster computing environment. The detection phase relies on the latest techniques developed in the fault tolerance community. Recovery is being addressed in an application-dependent manner, thus allowing the programmer to take advantage of algorithmic characteristics to reduce the overhead of fault tolerance. This architecture will allow large-scale applications to be more robust in high-performance computing environments that are comprised of clusters of commodity computers such as CPlant and SMP clusters.

  9. Distributing Data from Desktop to Hand-Held Computers

    NASA Technical Reports Server (NTRS)

    Elmore, Jason L.

    2005-01-01

    A system of server and client software formats and redistributes data from commercially available desktop to commercially available hand-held computers via both wired and wireless networks. This software is an inexpensive means of enabling engineers and technicians to gain access to current sensor data while working in locations in which such data would otherwise be inaccessible. The sensor data are first gathered by a data-acquisition server computer, then transmitted via a wired network to a data-distribution computer that executes the server portion of the present software. Data in all sensor channels -- both raw sensor outputs in millivolt units and results of conversion to engineering units -- are made available for distribution. Selected subsets of the data are transmitted to each hand-held computer via the wired and then a wireless network. The selection of the subsets and the choice of the sequences and formats for displaying the data is made by means of a user interface generated by the client portion of the software. The data displayed on the screens of hand-held units can be updated at rates from 1 to

  10. Lightweight distributed computing for intraoperative real-time image guidance

    NASA Astrophysics Data System (ADS)

    Suwelack, Stefan; Katic, Darko; Wagner, Simon; Spengler, Patrick; Bodenstedt, Sebastian; Röhl, Sebastian; Dillmann, Rüdiger; Speidel, Stefanie

    2012-02-01

    In order to provide real-time intraoperative guidance, computer assisted surgery (CAS) systems often rely on computationally expensive algorithms. The real-time constraint is especially challenging if several components such as intraoperative image processing, soft tissue registration or context aware visualization are combined in a single system. In this paper, we present a lightweight approach to distribute the workload over several workstations based on the OpenIGTLink protocol. We use XML-based message passing for remote procedure calls and native types for transferring data such as images, meshes or point coordinates. Two different, but typical scenarios are considered in order to evaluate the performance of the new system. First, we analyze a real-time soft tissue registration algorithm based on a finite element (FE) model. Here, we use the proposed approach to distribute the computational workload between a primary workstation that handles sensor data processing and visualization and a dedicated workstation that runs the real-time FE algorithm. We show that the additional overhead that is introduced by the technique is small compared to the total execution time. Furthermore, the approach is used to speed up a context aware augmented reality based navigation system for dental implant surgery. In this scenario, the additional delay for running the computationally expensive reasoning server on a separate workstation is less than a millisecond. The results show that the presented approach is a promising strategy to speed up real-time CAS systems.

  11. Exponential rise of dynamical complexity in quantum computing through projections

    PubMed Central

    Burgarth, Daniel Klaus; Facchi, Paolo; Giovannetti, Vittorio; Nakazato, Hiromichi; Pascazio, Saverio; Yuasa, Kazuya

    2014-01-01

    The ability of quantum systems to host exponentially complex dynamics has the potential to revolutionize science and technology. Therefore, much effort has been devoted to developing protocols for computation, communication and metrology, which exploit this scaling, despite formidable technical difficulties. Here we show that the mere frequent observation of a small part of a quantum system can turn its dynamics from a very simple one into an exponentially complex one, capable of universal quantum computation. After discussing examples, we go on to show that this effect is generally to be expected: almost any quantum dynamics becomes universal once ‘observed’ as outlined above. Conversely, we show that any complex quantum dynamics can be ‘purified’ into a simpler one in larger dimensions. We conclude by demonstrating that even local noise can lead to an exponentially complex dynamics. PMID:25300692

  12. Exponential rise of dynamical complexity in quantum computing through projections.

    PubMed

    Burgarth, Daniel Klaus; Facchi, Paolo; Giovannetti, Vittorio; Nakazato, Hiromichi; Pascazio, Saverio; Yuasa, Kazuya

    2014-10-10

    The ability of quantum systems to host exponentially complex dynamics has the potential to revolutionize science and technology. Therefore, much effort has been devoted to developing protocols for computation, communication and metrology, which exploit this scaling, despite formidable technical difficulties. Here we show that the mere frequent observation of a small part of a quantum system can turn its dynamics from a very simple one into an exponentially complex one, capable of universal quantum computation. After discussing examples, we go on to show that this effect is generally to be expected: almost any quantum dynamics becomes universal once 'observed' as outlined above. Conversely, we show that any complex quantum dynamics can be 'purified' into a simpler one in larger dimensions. We conclude by demonstrating that even local noise can lead to an exponentially complex dynamics.

  13. Photonic Nonlinear Transient Computing with Multiple-Delay Wavelength Dynamics

    NASA Astrophysics Data System (ADS)

    Martinenghi, Romain; Rybalko, Sergei; Jacquot, Maxime; Chembo, Yanne K.; Larger, Laurent

    2012-06-01

    We report on the experimental demonstration of a hybrid optoelectronic neuromorphic computer based on a complex nonlinear wavelength dynamics including multiple delayed feedbacks with randomly defined weights. This neuromorphic approach is based on a new paradigm of a brain-inspired computational unit, intrinsically differing from Turing machines. This recent paradigm consists in expanding the input information to be processed into a higher dimensional phase space, through the nonlinear transient response of a complex dynamics excited by the input information. The computed output is then extracted via a linear separation of the transient trajectory in the complex phase space. The hyperplane separation is derived from a learning phase consisting of the resolution of a regression problem. The processing capability originates from the nonlinear transient, resulting in nonlinear transient computing. The computational performance is successfully evaluated on a standard benchmark test, namely, a spoken digit recognition task.
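
    A minimal software analogue of this computing scheme (an echo-state-style sketch of our own, not the optoelectronic hardware): expand the input into a high-dimensional nonlinear transient, then train only a linear readout.

        import numpy as np

        rng = np.random.default_rng(2)
        n_res, n_steps = 200, 2000

        W_in = rng.normal(scale=0.5, size=n_res)
        W = rng.normal(size=(n_res, n_res)) / np.sqrt(n_res) * 0.9   # contracting

        u = rng.uniform(-1, 1, n_steps)     # input sequence
        target = np.roll(u, 3)              # toy task: recall the input 3 steps back

        x, states = np.zeros(n_res), []
        for t in range(n_steps):
            x = np.tanh(W @ x + W_in * u[t])   # nonlinear transient expansion
            states.append(x.copy())
        X = np.array(states)

        # Linear separation in the expanded phase space: ridge-regression readout.
        W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ target)
        print("train NMSE:", np.mean((X @ W_out - target) ** 2) / np.var(target))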

  14. Dynamical localization simulated on a few-qubit quantum computer

    SciTech Connect

    Benenti, Giuliano; Montangero, Simone; Casati, Giulio; Shepelyansky, Dima L.

    2003-05-01

    We show that a quantum computer operating with a small number of qubits can simulate the dynamical localization of classical chaos in a system described by the quantum sawtooth map model. The dynamics of the system is computed efficiently up to a time t ≥ l, and then the localization length l can be obtained with accuracy ν by means of order 1/ν² computer runs, followed by coarse-grained projective measurements on the computational basis. We also show that in the presence of static imperfections, a reliable computation of the localization length is possible without error correction up to an imperfection threshold which drops polynomially with the number of qubits.

  15. Simulation of emission tomography using grid middleware for distributed computing.

    PubMed

    Thomason, M G; Longton, R F; Gregor, J; Smith, G T; Hutson, R K

    2004-09-01

    SimSET is Monte Carlo simulation software for emission tomography. This paper describes a simple but effective scheme for parallel execution of SimSET using NetSolve, a client-server system for distributed computation. NetSolve (version 1.4.1) is "grid middleware" which enables a user (the client) to run specific computations remotely and simultaneously on a grid of networked computers (the servers). Since the servers do not have to be identical machines, computation may take place in a heterogeneous environment. To take advantage of diversity in machines and their workloads, a client-side scheduler was implemented for the Monte Carlo simulation. The scheduler partitions the total decay events by taking into account the inherent compute-speeds and recent average workloads, i.e., the scheduler assigns more decay events to processors expected to give faster service and fewer decay events to those expected to give slower service. When compute-speeds and sustained workloads are taken into account, the speed-up is essentially linear in the number of equivalent "maximum-service" processors. One modification in the SimSET code (version 2.6.2.3) was made to ensure that the total number of decay events specified by the user is maintained in the distributed simulation. No other modifications in the standard SimSET code were made. Each processor runs complete SimSET code for its assignment of decay events, independently of others running simultaneously. Empirical results are reported for simulation of a clinical-quality lung perfusion study.
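
    The scheduling rule lends itself to a compact sketch (names and the load model are illustrative, not taken from the SimSET or NetSolve sources):

        def partition_events(total_events, servers):
            """servers: list of (name, speed, recent_load) with load in [0, 1)."""
            eff = {name: speed * (1.0 - load) for name, speed, load in servers}
            total = sum(eff.values())
            shares = {name: int(total_events * e / total) for name, e in eff.items()}
            # Assign the rounding remainder to the fastest effective server so the
            # user-specified total number of decay events is preserved exactly.
            shares[max(eff, key=eff.get)] += total_events - sum(shares.values())
            return shares

        print(partition_events(1_000_000,
                               [("fast", 2.0, 0.1), ("slow", 1.0, 0.5), ("busy", 3.0, 0.9)]))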

  16. Dynamics of strongly coupled spatially distributed logistic equations with delay

    NASA Astrophysics Data System (ADS)

    Kashchenko, I. S.; Kashchenko, S. A.

    2015-04-01

    The dynamics of a system of two logistic delay equations with spatially distributed coupling is studied. The coupling coefficient is assumed to be sufficiently large. Special nonlinear systems of parabolic equations are constructed such that the behavior of their solutions is determined in the first approximation by the dynamical properties of the original system.

  17. Entry Times Distribution for Dynamical Balls on Metric Spaces

    NASA Astrophysics Data System (ADS)

    Haydn, N.; Yang, F.

    2017-03-01

    We show that the distribution of entry and return times for dynamical balls (Bowen balls) is exponential for systems that have an α-mixing invariant measure with certain regularities. We also show that systems modeled by Young's towers have exponential entry time distributions for dynamical balls. We also apply the results to conformal maps and expanding maps on the interval.

  18. Computational Fluid Dynamics Modeling of Parachute Clusters

    DTIC Science & Technology

    1997-11-01

    Ramakrishnan. November 1997. Approved for public release; distribution is unlimited. Abstract fragments: the left and right states (qL, qR) are obtained from a Taylor series expansion of q about the centroid of the corresponding cell; the solver handles hexahedral cells and does not require any information regarding the shapes of the cells and faces.

  19. Distributed computing testbed for a remote experimental environment

    SciTech Connect

    Butner, D.N.; Casper, T.A.; Howard, B.C.; Henline, P.A.; Davis, S.L.; Barnes, D.; Greenwood, D.E.

    1995-09-18

    Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a "Collaboratory." The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large scale experimental facility.

  20. Multi-threaded, discrete event simulation of distributed computing systems

    NASA Astrophysics Data System (ADS)

    Legrand, Iosif; MONARC Collaboration

    2001-10-01

    The LHC experiments have envisaged computing systems of unprecedented complexity, for which it is necessary to provide a realistic description and modeling of data access patterns, and of many jobs running concurrently on large scale distributed systems and exchanging very large amounts of data. A process oriented approach for discrete event simulation is well suited to describe various activities running concurrently, as well as the stochastic arrival patterns specific to this type of simulation. Threaded objects or "Active Objects" can provide a natural way to map the specific behaviour of distributed data processing into the simulation program. The simulation tool developed within MONARC is based on Java (TM) technology which provides adequate tools for developing a flexible and distributed process oriented simulation. Proper graphics tools, and ways to analyze data interactively, are essential in any simulation project. The design elements, status and features of the MONARC simulation tool are presented. The program allows realistic modeling of complex data access patterns by multiple concurrent users in large scale computing systems in a wide range of possible architectures, from centralized to highly distributed. Comparison between queuing theory and realistic client-server measurements is also presented.
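
    The process-oriented, discrete event core described above can be reduced to a priority queue of timestamped events. A toy Python sketch in that spirit (illustrative only; MONARC itself is a far richer Java framework):

        import heapq, random

        def run(num_jobs=5, seed=0):
            rng = random.Random(seed)
            events = []  # (time, job_id, action) tuples, ordered by time
            for j in range(num_jobs):
                # Stochastic arrival pattern: exponential inter-arrivals.
                heapq.heappush(events, (rng.expovariate(1.0), j, "start"))
            while events:
                t, j, action = heapq.heappop(events)
                if action == "start":
                    # Service time stands in for data access plus processing.
                    heapq.heappush(events, (t + rng.expovariate(0.5), j, "finish"))
                else:
                    print(f"t={t:6.2f}  job {j} finished")

        run()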

  1. User's Manual for Computer Program ROTOR. [to calculate tilt-rotor aircraft dynamic characteristics

    NASA Technical Reports Server (NTRS)

    Yasue, M.

    1974-01-01

    A detailed description of a computer program to calculate tilt-rotor aircraft dynamic characteristics is presented. This program consists of two parts: (1) the natural frequencies and corresponding mode shapes of the rotor blade and wing are developed from structural data (mass distribution and stiffness distribution); and (2) the frequency response (to gust and blade pitch control inputs) and eigenvalues of the tilt-rotor dynamic system, based on the natural frequencies and mode shapes, are derived. Sample problems are included to assist the user.
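
    Step (1) above is a generalized eigenvalue problem built from the mass and stiffness distributions. A minimal sketch (the 2-DOF matrices are invented for illustration; ROTOR's actual blade and wing discretization is not reproduced):

        import numpy as np
        from scipy.linalg import eigh

        M = np.diag([2.0, 1.0])            # lumped mass distribution
        K = np.array([[300.0, -100.0],
                      [-100.0, 100.0]])    # stiffness distribution
        # Solve K @ phi = omega^2 * M @ phi for frequencies and mode shapes.
        omega_sq, modes = eigh(K, M)
        print("natural frequencies (rad/s):", np.sqrt(omega_sq))
        print("mode shapes (columns):\n", modes)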

  2. Computing interface motion in compressible gas dynamics

    NASA Technical Reports Server (NTRS)

    Mulder, W.; Osher, S.; Sethian, James A.

    1992-01-01

    An analysis is conducted of the coupling of Osher and Sethian's (1988) 'Hamilton-Jacobi' level set formulation of the equations of motion for propagating interfaces to a system of conservation laws for compressible gas dynamics, giving attention to both the conservative and nonconservative differencing of the level set function. The capabilities of the method are illustrated in view of the results of numerical convergence studies of the compressible Rayleigh-Taylor and Kelvin-Helmholtz instabilities for air-air and air-helium boundaries.
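
    For context, the level set formulation represents the moving interface as the zero level set of a function phi advected by the flow (standard form shown; the paper's coupling to the gas-dynamics conservation laws is more involved):

        \[
          \phi_t + \mathbf{u}\cdot\nabla\phi = 0, \qquad
          \Gamma(t) = \{\,\mathbf{x} : \phi(\mathbf{x},t) = 0\,\}
        \]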

  3. Computational fluid dynamics combustion analysis evaluation

    NASA Technical Reports Server (NTRS)

    Kim, Y. M.; Shang, H. M.; Chen, C. P.; Ziebarth, J. P.

    1992-01-01

    This study involves the development of numerical modelling in spray combustion. These modelling efforts are motivated mainly by the need to improve the computational efficiency of the stochastic particle tracking method and to incorporate the physical submodels of turbulence, combustion, vaporization, and dense spray effects. The present mathematical formulation and numerical methodologies can be cast into any time-marching pressure correction methodology (PCM), such as the FDNS code and the MAST code. A sequence of validation cases involving steady burning sprays and transient evaporating sprays is included.

  4. Perspective: Computer simulations of long time dynamics

    PubMed Central

    Elber, Ron

    2016-01-01

    Atomically detailed computer simulations of complex molecular events attracted the imagination of many researchers in the field as providing comprehensive information on chemical, biological, and physical processes. However, one of the greatest limitations of these simulations is of time scales. The physical time scales accessible to straightforward simulations are too short to address many interesting and important molecular events. In the last decade significant advances were made in different directions (theory, software, and hardware) that significantly expand the capabilities and accuracies of these techniques. This perspective describes and critically examines some of these advances. PMID:26874473

  6. Some rotorcraft applications of computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Mccroskey, W. J.

    1988-01-01

    The growing application of computational aerodynamics to nonlinear rotorcraft problems is outlined, with particular emphasis on the development of new methods based on the Euler and thin-layer Navier-Stokes equations. Rotor airfoil characteristics can now be calculated accurately over a wide range of transonic flow conditions. However, unsteady 3-D viscous codes remain in the research stage, and a numerical simulation of the complete flow field about a helicopter in forward flight is not now feasible. Nevertheless, impressive progress is being made in preparation for future supercomputers that will enable meaningful calculations to be made for arbitrary rotorcraft configurations.

  7. Distributed Computation Resources for Earth System Grid Federation (ESGF)

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Doutriaux, C.; Williams, D. N.

    2014-12-01

    The Intergovernmental Panel on Climate Change (IPCC), prompted by the United Nations General Assembly, has published a series of papers in their Fifth Assessment Report (AR5) on processes, impacts, and mitigations of climate change in 2013. The science used in these reports was generated by an international group of domain experts. They studied various scenarios of climate change through the use of highly complex computer models to simulate the Earth's climate over long periods of time. The resulting total data of approximately five petabytes are stored in a distributed data grid known as the Earth System Grid Federation (ESGF). Through the ESGF, consumers of the data can find and download data with limited capabilities for server-side processing. The Sixth Assessment Report (AR6) is already in the planning stages and is estimated to create as much as two orders of magnitude more data than the AR5 distributed archive. It is clear that data analysis capabilities currently in use will be inadequate to allow for the necessary science to be done with AR6 data—the data will just be too big. A major paradigm shift from downloading data to local systems to perform data analytics must evolve to moving the analysis routines to the data and performing these computations on distributed platforms. In preparation for this need, the ESGF has started a Compute Working Team (CWT) to create solutions that allow users to perform distributed, high-performance data analytics on the AR6 data. The team will be designing and developing a general Application Programming Interface (API) to enable highly parallel, server-side processing throughout the ESGF data grid. This API will be integrated with multiple analysis and visualization tools, such as the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT), netCDF Operator (NCO), and others. This presentation will provide an update on the ESGF CWT's overall approach toward enabling the necessary storage proximal computational

  8. Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised

    NASA Technical Reports Server (NTRS)

    Yee, Helen C.; Sweby, Peter K.

    1997-01-01

    The global nonlinear behavior of finite discretizations for constant time steps and fixed or adaptive grid spacings is studied using tools from dynamical systems theory. Detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in the understanding of long time behavior of numerical integration and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that was difficult to detect without extensive grid and temporal refinement studies and some knowledge from dynamical systems theory. Studies revealed the various possible dangers of misinterpreting numerical simulation of realistic complex flows that are constrained by available computing power. In large scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both computation and interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in the understanding of the global nonlinear behavior of numerical algorithms and to aid the identification of the sources of numerical uncertainties in CFD.
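
    A classical illustration of this theme: forward Euler applied to the logistic ODE u' = u(1 - u) converges to the true steady state u = 1 for small time steps, but locks onto a spurious period-2 orbit once the step exceeds the linearized stability limit (h = 2 for this problem). A short sketch:

        def euler_logistic(u0, h, steps):
            # Iterate the forward Euler map u <- u + h*u*(1 - u).
            u = u0
            for _ in range(steps):
                u = u + h * u * (1.0 - u)
            return u

        for h in (0.5, 1.9, 2.3):
            a = euler_logistic(0.3, h, 2000)
            b = euler_logistic(0.3, h, 2001)
            # For h = 2.3 the two late iterates differ: a spurious period-2
            # "steady state" of the scheme, not of the ODE.
            print(f"h={h}: consecutive late iterates {a:.4f}, {b:.4f}")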

  9. Oxygen and seizure dynamics: II. Computational modeling

    PubMed Central

    Wei, Yina; Ullah, Ghanim; Ingram, Justin

    2014-01-01

    Electrophysiological recordings show intense neuronal firing during epileptic seizures leading to enhanced energy consumption. However, the relationship between oxygen metabolism and seizure patterns has not been well studied. Recent studies have developed fast and quantitative techniques to measure oxygen microdomain concentration during seizure events. In this article, we develop a biophysical model that accounts for these experimental observations. The model is an extension of the Hodgkin-Huxley formalism and includes the neuronal microenvironment dynamics of sodium, potassium, and oxygen concentrations. Our model accounts for metabolic energy consumption during and following seizure events. We can further account for the experimental observation that hypoxia can induce seizures, with seizures occurring only within a narrow range of tissue oxygen pressure. We also reproduce the interplay between excitatory and inhibitory neurons seen in experiments, accounting for the different oxygen levels observed during seizures in excitatory vs. inhibitory cell layers. Our findings offer a more comprehensive understanding of the complex interrelationship among seizures, ion dynamics, and energy metabolism. PMID:24671540

  10. Secure distributed genome analysis for GWAS and sequence comparison computation

    PubMed Central

    2015-01-01

    Background The rapid increase in the availability and volume of genomic data makes significant advances in biomedical research possible, but sharing of genomic data poses challenges due to the highly sensitive nature of such data. To address the challenges, a competition for secure distributed processing of genomic data was organized by the iDASH research center. Methods In this work we propose techniques for securing computation with real-life genomic data for minor allele frequency and chi-squared statistics computation, as well as distance computation between two genomic sequences, as specified by the iDASH competition tasks. We put forward novel optimizations, including a generalization of a version of mergesort, which might be of independent interest. Results We provide implementation results of our techniques based on secret sharing that demonstrate practicality of the suggested protocols and also report on performance improvements due to our optimization techniques. Conclusions This work describes our techniques, findings, and experimental results developed and obtained as part of iDASH 2015 research competition to secure real-life genomic computations and shows feasibility of securely computing with genomic data in practice. PMID:26733307
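
    The secret-sharing idea underlying such protocols can be shown in toy form: each count is split into random additive shares so no single party sees the raw data, yet sums (and hence allele frequencies) remain recoverable. The modulus and interfaces below are illustrative, not those of the iDASH protocols:

        import random

        P = 2_147_483_647  # public prime modulus (illustrative)

        def share(value, n_parties=3):
            # Split value into n additive shares modulo P.
            shares = [random.randrange(P) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % P)
            return shares

        def reconstruct(shares):
            return sum(shares) % P

        allele_counts = [2, 0, 1, 1, 2]        # per-individual allele counts
        shared = [share(c) for c in allele_counts]
        # Each party sums its own shares locally; only sums are combined.
        party_sums = [sum(col) % P for col in zip(*shared)]
        print("allele frequency:", reconstruct(party_sums) / (2 * len(allele_counts)))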

  11. Distribution and dynamics of hayscented fern following stand harvest

    Treesearch

    Songlin Fei; Peter J. Gould; Melanie J. Kaeser; Kim C. Steiner

    2008-01-01

    The distribution and dynamics of hayscented fern were examined as part of a large-scale study of oak regeneration in Pennsylvania. The study included 69 stands covering 3,333 acres in two physiographic provinces. Hayscented fern was more widely distributed and occurred at higher densities in the Allegheny Plateau physiographic province versus the Ridge and Valley...

  12. Experience with automatic, dynamic load balancing and adaptive finite element computation

    SciTech Connect

    Wheat, S.R.; Devine, K.D.; Maccabe, A.B.

    1993-10-01

    Distributed memory, Massively Parallel (MP), MIMD technology has enabled the development of applications requiring computational resources previously unobtainable. Structural mechanics and fluid dynamics applications, for example, are often solved by finite element methods (FEMs) requiring millions of degrees of freedom to accurately simulate physical phenomena. Adaptive methods, which automatically refine or coarsen meshes and vary the order of accuracy of the numerical solution, offer greater robustness and computational efficiency than traditional FEMs by reducing the amount of computation required away from physical structures such as shock waves and boundary layers. On MP computers, FEMs frequently result in distributed processor load imbalances. To overcome load imbalance, many MP FEMs use static load balancing as a preprocessor to the finite element calculation. Adaptive methods complicate the load imbalance problem since the work per element is not uniform across the solution domain and changes as the computation proceeds. Therefore, dynamic load balancing is required to maintain global load balance. We describe a dynamic, fine-grained, element-based data migration system that maintains global load balance and is effective in the presence of changing work loads. Global load balance is achieved by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. The method utilizes an automatic element management system library to which a programmer integrates the application's computational description. The library's flexibility supports a large class of finite element and finite difference based applications.
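
    The overlapping-neighborhood idea can be caricatured in a few lines: each neighborhood relaxes its members toward the neighborhood mean, and the overlap propagates load globally. A sketch under invented data structures (the library's real element-migration machinery is not reproduced):

        def balance_step(loads, neighborhoods):
            # loads: per-processor work; neighborhoods: overlapping index tuples.
            new = list(loads)
            for hood in neighborhoods:
                mean = sum(new[p] for p in hood) / len(hood)
                for p in hood:
                    # Migrate half the surplus toward the neighborhood mean.
                    new[p] += 0.5 * (mean - new[p])
            return new

        loads = [100.0, 60.0, 20.0, 20.0]
        hoods = [(0, 1, 2), (1, 2, 3)]         # overlap at processors 1 and 2
        for _ in range(5):
            loads = balance_step(loads, hoods)
        print([round(x) for x in loads])       # drifts toward the global mean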

  13. Automated Static Culture System Cell Module Mixing Protocol and Computational Fluid Dynamics Analysis

    NASA Technical Reports Server (NTRS)

    Kleis, Stanley J.; Truong, Tuan; Goodwin, Thomas J.

    2004-01-01

    This report is a documentation of a fluid dynamic analysis of the proposed Automated Static Culture System (ASCS) cell module mixing protocol. The report consists of a review of some basic fluid dynamics principles appropriate for the mixing of a patch of high oxygen content media into the surrounding media which is initially depleted of oxygen, followed by a computational fluid dynamics (CFD) study of this process for the proposed protocol over a range of the governing parameters. The time histories of oxygen concentration distributions and mechanical shear levels generated are used to characterize the mixing process for different parameter values.

  14. Use of computational fluid dynamics in the design of dynamic contrast enhanced imaging phantoms

    NASA Astrophysics Data System (ADS)

    Hariharan, Prasanna; Freed, Melanie; Myers, Matthew R.

    2013-09-01

    Phantoms for dynamic contrast enhanced (DCE) imaging modalities such as DCE computed tomography (DCE-CT) and DCE magnetic resonance imaging (DCE-MRI) are valuable tools for evaluating and comparing imaging systems. It is important for the contrast-agent distribution within the phantom to possess a time dependence that replicates a curve observed clinically, known as the ‘tumor-enhancement curve’. It is also important for the concentration field within the lesion to be as uniform as possible. This study demonstrates how computational fluid dynamics (CFD) can be applied to achieve these goals within design constraints. The distribution of the contrast agent within the simulated phantoms was investigated in relation to the influence of three factors of the phantom design. First, the interaction between the inlets and the uniformity of the contrast agent within the phantom was modeled. Second, pumps were programmed using a variety of schemes and the resultant dynamic uptake curves were compared to tumor-enhancement curves obtained from clinical data. Third, the effectiveness of pulsing the inlet flow rate to produce faster equilibration of the contrast-agent distribution was quantified. The models employed a spherical lesion and design constraints (lesion diameter, inlet-tube size and orientation, contrast-agent flow rates and fluid properties) taken from a recently published DCE-MRI phantom study. For DCE-MRI in breast cancer detection, where the target tumor-enhancement curve varies on the scale of hundreds of seconds, optimizing the number of inlet tubes and their orientation was found to be adequate for attaining concentration uniformity and reproducing the target tumor-enhancement curve. For DCE-CT in liver tumor detection, where the tumor-enhancement curve varies on a scale of tens of seconds, the use of an iterated inlet condition (programmed into the pump) enabled the phantom to reproduce the target tumor-enhancement curve within a few per cent beyond about
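
    As an illustration of the target curves mentioned above, a gamma-variate form is a common parameterization of tumor enhancement in the DCE literature; the functional form and parameter values below are assumptions for illustration, not the curves used in the cited study:

        import math

        def enhancement(t, t0=10.0, A=1.0, alpha=3.0, beta=30.0):
            # Gamma-variate: zero before arrival time t0, then a skewed peak.
            if t <= t0:
                return 0.0
            return A * (t - t0) ** alpha * math.exp(-(t - t0) / beta)

        for t in range(0, 301, 60):
            print(f"t={t:3d} s  relative enhancement {enhancement(t):10.1f}")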

  15. Computational fluid dynamics - An introduction for engineers

    NASA Astrophysics Data System (ADS)

    Abbott, Michael Barry; Basco, David R.

    An introduction to the fundamentals of CFD for engineers and physical scientists is presented. The principal definitions, basic ideas, and most common methods used in CFD are presented, and the application of these methods to the description of free surface, unsteady, and turbulent flow is shown. Emphasis is on the numerical treatment of incompressible unsteady fluid flow with primary applications to water problems using the finite difference method. While traditional areas of application like hydrology, hydraulic and coastal engineering and oceanography get the main emphasis, newer areas of application such as medical fluid dynamics, bioengineering, and soil physics and chemistry are also addressed. The possibilities and limitations of CFD are pointed out along with the relations of CFD to other branches of science.

  16. A modular system for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    McCarthy, D. R.; Foutch, D. W.; Shurtleff, G. E.

    This paper describes the Modular System for Computational Fluid Dynamics (MOSYS), a software facility for the construction and execution of arbitrary solution procedures on multizone, structured body-fitted grids. It focuses on the structure and capabilities of MOSYS and the philosophy underlying its design. The system offers different levels of capability depending on the objectives of the user. It enables the applications engineer to quickly apply a variety of methods to geometrically complex problems. The methods developer can implement new algorithms in a simple form, and immediately apply them to problems of both theoretical and practical interest. For the code builder, it constitutes a toolkit for fast construction of CFD codes tailored to various purposes. These capabilities are illustrated through applications to a particularly complex problem encountered in aircraft propulsion systems, namely, the analysis of a landing aircraft in reverse thrust.

  17. Challenges to computing plasma thruster dynamics

    SciTech Connect

    Smith, G.A.

    1992-01-01

    This paper describes computational challenges in describing the high thrust and specific impulse (Isp) expected from the proposed ion-compressed antimatter nuclear (ICAN) propulsion system. This concept uses antiprotons to induce fission reactions that jump start a microfission/fusion process in a target compressed by low-energy ion beams. The ICAN system could readily provide the high energy density required for interplanetary space missions of short duration. In conventional rocket design, thrust is obtained by expelling a propellant under high pressure through a nozzle. A larger Isp can be achieved by operating the system at a higher temperature. Full ionization of propellant at high temperature introduces new and challenging questions in the design of plasma thrusters.

  18. A Scalable Distribution Network Risk Evaluation Framework via Symbolic Dynamics

    PubMed Central

    Yuan, Kai; Liu, Jian; Liu, Kaipei; Tan, Tianyuan

    2015-01-01

    Background Evaluations of electric power distribution network risks must address the problems of incomplete information and changing dynamics. A risk evaluation framework should be adaptable to a specific situation and an evolving understanding of risk. Methods This study investigates the use of symbolic dynamics to abstract raw data. After introducing symbolic dynamics operators, Kolmogorov-Sinai entropy and Kullback-Leibler relative entropy are used to quantitatively evaluate relationships between risk sub-factors and main factors. For layered risk indicators, where the factors are categorized into four main groups – device, structure, load and special operation – a merging algorithm using operators to calculate the risk factors is discussed. Finally, an example from the Sanya Power Company is given to demonstrate the feasibility of the proposed method. Conclusion Distribution networks are exposed systems that can be affected by many external factors. The topology and the operating mode of a distribution network are dynamic, so faults and their consequences are probabilistic. PMID:25789859
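
    Of the two quantities named in the Methods, the Kullback-Leibler relative entropy is the simpler to sketch: it measures how a sub-factor's symbol distribution departs from a main factor's. The alphabets and probabilities below are made-up examples:

        import math

        def kl_divergence(p, q):
            # D(p || q) = sum_i p_i * log(p_i / q_i); requires q_i > 0
            # wherever p_i > 0.
            return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

        main_factor = [0.70, 0.20, 0.10]   # e.g. symbol probabilities {low, mid, high}
        sub_factor  = [0.50, 0.30, 0.20]
        print("D(sub || main) =", kl_divergence(sub_factor, main_factor))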

  19. Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations

    NASA Technical Reports Server (NTRS)

    Chrisochoides, Nikos

    1995-01-01

    We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDEs) on multiprocessors. Multithreading is used as a means of exploiting concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDEs. Multithreading can also simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data parallel adaptive PDE computations. Multithreading does not, however, always reduce program complexity; it often makes code reuse difficult and increases software complexity.

  20. Population-based learning of load balancing policies for a distributed computer system

    NASA Technical Reports Server (NTRS)

    Mehra, Pankaj; Wah, Benjamin W.

    1993-01-01

    Effective load-balancing policies use dynamic resource information to schedule tasks in a distributed computer system. We present a novel method for automatically learning such policies. At each site in our system, we use a comparator neural network to predict the relative speedup of an incoming task using only the resource-utilization patterns obtained prior to the task's arrival. Outputs of these comparator networks are broadcast periodically over the distributed system, and the resource schedulers at each site use these values to determine the best site for executing an incoming task. The delays incurred in propagating workload information and tasks from one site to another, as well as the dynamic and unpredictable nature of workloads in multiprogrammed multiprocessors, may cause the workload pattern at the time of execution to differ from patterns prevailing at the times of load-index computation and decision making. Our load-balancing policy accommodates this uncertainty by using certain tunable parameters. We present a population-based machine-learning algorithm that adjusts these parameters in order to achieve high average speedups with respect to local execution. Our results show that our load-balancing policy, when combined with the comparator neural network for workload characterization, is effective in exploiting idle resources in a distributed computer system.
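
    The decision rule described above can be sketched as follows; the comparator network is replaced by a stub, and the names and threshold are hypothetical (its real inputs would be resource-utilization patterns observed before the task's arrival):

        def predicted_speedup(site):
            # Stand-in for the comparator neural network's broadcast output.
            return site["cpu_free"] * site["mem_free"]

        def pick_site(sites, threshold=0.5):
            best = max(sites, key=predicted_speedup)
            # The tunable threshold accommodates stale workload information:
            # prefer local execution unless the remote estimate clears it.
            return best if predicted_speedup(best) > threshold else None

        sites = [{"name": "A", "cpu_free": 0.8, "mem_free": 0.9},
                 {"name": "B", "cpu_free": 0.3, "mem_free": 0.5}]
        choice = pick_site(sites)
        print("execute at:", choice["name"] if choice else "local")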

  2. Numerical simulation of landfill aeration using computational fluid dynamics.

    PubMed

    Fytanidis, Dimitrios K; Voudrias, Evangelos A

    2014-04-01

    The present study is an application of Computational Fluid Dynamics (CFD) to the numerical simulation of landfill aeration systems. Specifically, the CFD algorithms provided by the commercial solver ANSYS Fluent 14.0, combined with an in-house source code developed to modify the main solver, were used. The unsaturated multiphase flow of air and liquid phases and the biochemical processes for aerobic biodegradation of the organic fraction of municipal solid waste were simulated, taking into consideration their temporal and spatial evolution as well as complex effects such as oxygen mass transfer across phases, unsaturated flow effects (capillary suction and unsaturated hydraulic conductivity), temperature variations due to biochemical processes, and environmental correction factors for the applied kinetics (Monod and 1st order kinetics). The model results were compared with experimental data from the literature, and pilot-scale simulations and a sensitivity analysis were also carried out. Moreover, simulation results of a hypothetical single aeration well were shown, and its zone of influence was estimated using both the pressure and oxygen distributions. Finally, a case study was simulated for a hypothetical landfill aeration system, examining both a static scenario (steadily positive or negative relative pressure over time) and a hybrid scenario (a square-wave pattern of positive and negative relative pressure over time) for the aeration wells. The results showed that the present model is capable of simulating landfill aeration, and the obtained results were in good agreement with corresponding previous experimental and numerical investigations.
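
    The Monod kinetics mentioned above take a simple saturating form; the dual-substrate variant with an environmental correction factor sketched here uses invented parameter values, not the study's calibrated constants:

        def monod_rate(mu_max, S, K_s, O2, K_O2, f_T=1.0):
            # Growth limited by substrate S and oxygen O2, scaled by a
            # temperature (environmental) correction factor f_T.
            return mu_max * (S / (K_s + S)) * (O2 / (K_O2 + O2)) * f_T

        print(monod_rate(mu_max=0.2, S=50.0, K_s=10.0, O2=0.05, K_O2=0.01))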

  3. Computational Fluid Dynamics Analysis of Canadian Supercritical Water Reactor (SCWR)

    NASA Astrophysics Data System (ADS)

    Movassat, Mohammad; Bailey, Joanne; Yetisir, Metin

    2015-11-01

    A Computational Fluid Dynamics (CFD) simulation was performed on the proposed design for the Canadian SuperCritical Water Reactor (SCWR). The proposed Canadian SCWR is a 1200 MW(e) supercritical light-water cooled nuclear reactor with pressurized fuel channels. The reactor concept uses an inlet plenum that all fuel channels are attached to and an outlet header nested inside the inlet plenum. The coolant enters the inlet plenum at 350 C and exits the outlet header at 625 C. The operating pressure is approximately 26 MPa. The high pressure and high temperature outlet conditions result in a higher electric conversion efficiency as compared to existing light water reactors. In this work, CFD simulations were performed to model fluid flow and heat transfer in the inlet plenum, outlet header, and various parts of the fuel assembly. The ANSYS Fluent solver was used for simulations. Results showed that mass flow rate distribution in fuel channels varies radially and the inner channels achieve higher outlet temperatures. At the outlet header, zones with rotational flow were formed as the fluid from 336 fuel channels merged. Results also suggested that insulation of the outlet header should be considered to reduce the thermal stresses caused by the large temperature gradients.

  4. Introducing Computational Fluid Dynamics Simulation into Olfactory Display

    NASA Astrophysics Data System (ADS)

    Ishida, Hiroshi; Yoshida, Hitoshi; Nakamoto, Takamichi

    An olfactory display is a device that delivers various odors to the user's nose. It can be used to add special effects to movies and games by releasing odors relevant to the scenes shown on the screen. In order to provide high-presence olfactory stimuli to the users, the display must be able to generate realistic odors with appropriate concentrations in a timely manner together with visual and audio playbacks. In this paper, we propose to use computational fluid dynamics (CFD) simulations in conjunction with the olfactory display. Odor molecules released from their source are transported mainly by turbulent flow, and their behavior can be extremely complicated even in a simple indoor environment. In the proposed system, a CFD solver is employed to calculate the airflow field and the odor dispersal in the given environment. An odor blender is used to generate the odor with the concentration determined based on the calculated odor distribution. Experimental results on presenting odor stimuli synchronously with movie clips show the effectiveness of the proposed system.

  5. Distributed storage and cloud computing: a test case

    NASA Astrophysics Data System (ADS)

    Piano, S.; Della Ricca, G.

    2014-06-01

    Since 2003 the computing farm hosted by the INFN Tier3 facility in Trieste has supported the activities of many scientific communities. Hundreds of jobs from 45 different VOs, including those of the LHC experiments, are processed simultaneously. Given that normally the requirements of the different computational communities are not synchronized, the probability that at any given time the resources owned by one of the participants are not fully utilized is quite high. A balanced compensation should in principle allocate the free resources to other users, but there are limits to this mechanism. In fact, the Trieste site may not hold the amount of data needed to attract enough analysis jobs, and even in that case there could be a lack of bandwidth for their access. The Trieste ALICE and CMS computing groups, in collaboration with other Italian groups, aim to overcome the limitations of existing solutions using two approaches: sharing the data among all the participants taking full advantage of GARR-X wide area networks (10 GB/s) and integrating the resources dedicated to batch analysis with the ones reserved for dynamic interactive analysis, through modern solutions such as cloud computing.

  6. Some Contributions to Computational Fluid Dynamics.

    NASA Astrophysics Data System (ADS)

    Miller, Harvey Philip

    A three-dimensional, time-dependent free surface model has been developed for predicting the velocity field and surface height variations in a tidal bay. An explicit finite difference numerical solution is obtained by transforming the vertical coordinate in the governing model equations. The ocean-bay interface open boundary condition is incorporated without approximation into the hydrodynamic model by employing a staggered grid Richardson lattice. The momentum equations ignore horizontal diffusion, which is justifiably small for the South Biscayne Bay. Another three-dimensional, time-dependent free surface model for the South Biscayne Bay is used for application to suspended particle transport. A unique mass-conserving numerical model is used for solving the concentration equation by an explicit finite difference scheme. The effects of constant particle settling velocity and bottom bed deposition rate are compared and discussed. For convection dominated coastal flows, the flux-corrected transport (FCT) method is compared with other low-dispersive, explicit finite difference schemes for the two-dimensional linear advection of 2-D Gaussian initial temperature distributions of various half-widths. The flow field is specified a priori as consisting of a slowly varying, oscillating, uniform x-component of velocity, and a constant y-component of velocity. This type of flow field is typically encountered in near-coastal waters. The artificial numerical effects of diffusion (dissipation), dispersion, and anisotropy are discussed. Finally, two-dimensional linear advection solutions of transported fluid temperature are explored by implementing high resolution, high order explicit finite difference schemes. A comparison of the flux-corrected transport (FCT) methods is made with other total variation diminishing (TVD) schemes for the 2-D Gaussian initial temperature distributions of various half-widths. Further clipping of the sharply peaked Gaussian distribution in 2-D

  7. Fluid dynamics parallel computer development at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Townsend, James C.; Zang, Thomas A.; Dwoyer, Douglas L.

    1987-01-01

    To accomplish more detailed simulations of highly complex flows, such as the transition to turbulence, fluid dynamics research requires computers much more powerful than any available today. Only parallel processing on multiple-processor computers offers hope for achieving the required effective speeds. Looking ahead to the use of these machines, the fluid dynamicist faces three issues: algorithm development for near-term parallel computers, architecture development for future computer power increases, and assessment of possible advantages of special purpose designs. Two projects at NASA Langley address these issues. Software development and algorithm exploration is being done on the FLEX/32 Parallel Processing Research Computer. New architecture features are being explored in the special purpose hardware design of the Navier-Stokes Computer. These projects are complementary and are producing promising results.

  9. Improving CMS data transfers among its distributed computing facilities

    NASA Astrophysics Data System (ADS)

    Flix, J.; Magini, N.; Sartirana, A.

    2011-12-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.

  10. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    NASA Astrophysics Data System (ADS)

    Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration

    2014-06-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to early identify malfunctions and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort was invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and re-usability of the visualization bits across the different tools. A rich family of filtering and searching options enhancing the available user interfaces comes naturally with the separation of the data and visualization layers. With a variety of reliable monitoring data accessible through standardized interfaces, automating actions under well defined conditions correlating multiple data sources has become feasible. We also discuss the automated exclusion of degraded resources and their automated recovery in various activities.

  11. Parallel matrix transpose algorithms on distributed memory concurrent computers

    SciTech Connect

    Choi, J.; Walker, D.W.; Dongarra, J.J.

    1993-10-01

    This paper describes parallel matrix transpose algorithms on distributed memory concurrent processors. It is assumed that the matrix is distributed over a P x Q processor template with a block scattered data distribution. P, Q, and the block size can be arbitrary, so the algorithms have wide applicability. The communication schemes of the algorithms are determined by the greatest common divisor (GCD) of P and Q. If P and Q are relatively prime, the matrix transpose algorithm involves complete exchange communication. If P and Q are not relatively prime, processors are divided into GCD groups and the communication operations are overlapped for different groups of processors. Processors transpose GCD wrapped diagonal blocks simultaneously, and the matrix can be transposed with LCM/GCD steps, where LCM is the least common multiple of P and Q. The algorithms make use of non-blocking, point-to-point communication between processors. The use of nonblocking communication allows a processor to overlap the messages that it sends to different processors, thereby avoiding unnecessary synchronization. Combined with the matrix multiplication routine, C = A·B, the algorithms are used to compute parallel multiplications of transposed matrices, C = A^T·B^T, in the PUMMA package. Details of the parallel implementation of the algorithms are given, and results are presented for runs on the Intel Touchstone Delta computer.
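
    The counting argument above is easy to reproduce. A sketch that forms GCD processor groups and reports the LCM/GCD step count (the grouping rule (p - q) mod GCD is a plausible reading; the sketch does not reproduce PUMMA's actual data movement):

        from math import gcd

        def transpose_schedule(P, Q):
            g = gcd(P, Q)
            steps = (P * Q // g) // g          # LCM / GCD communication steps
            groups = [[(p, q) for p in range(P) for q in range(Q)
                       if (p - q) % g == r] for r in range(g)]
            return steps, groups

        steps, groups = transpose_schedule(P=4, Q=6)
        print(f"{len(groups)} groups, {steps} steps each")  # 2 groups, 6 steps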

  12. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches-Optimization by Linear Decomposition and Collaborative Optimization-are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  13. Development of computational fluid dynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Inouye, M.

    1984-01-01

    Ames Research Center has the lead role among NASA centers to conduct research in computational fluid dynamics. The past, the present, and the future prospects in this field are reviewed. Past accomplishments include pioneering computer simulations of fluid dynamics problems that have made computers valuable in complementing wind tunnels for aerodynamic research. The present facilities include the most powerful computers built in the United States. Three examples of viscous flow simulations are presented: an afterbody with an exhaust plume, a blunt fin mounted on a flat plate, and the Space Shuttle. The future prospects include implementation of the Numerical Aerodynamic Simulation Processing System that will provide the capability for solving the viscous flow field around an aircraft in a matter of minutes.

  14. Computational Fluid Dynamics Simulation of Fluidized Bed Polymerization Reactors

    SciTech Connect

    Fan, Rong

    2006-01-01

    Fluidized bed (FB) reactors are widely used in the polymerization industry due to their superior heat- and mass-transfer characteristics. Nevertheless, problems associated with local overheating of polymer particles and excessive agglomeration leading to FB reactor defluidization still persist and limit the range of operating temperatures that can be safely achieved in plant-scale reactors. Many researchers have worked on the modeling of FB polymerization reactors, and quite a few models are available in the open literature, such as the well-mixed model developed by McAuley, Talbot, and Harris (1994), the constant bubble size model (Choi and Ray, 1985) and the heterogeneous three phase model (Fernandes and Lona, 2002). Most of these works focus on the kinetic aspects, but from an industrial viewpoint, the behavior of FB reactors should be modeled by considering the particle and fluid dynamics in the reactor. Computational fluid dynamics (CFD) is a powerful tool for understanding the effect of fluid dynamics on chemical reactor performance. For single-phase flows, CFD models for turbulent reacting flows are now well understood and routinely applied to investigate complex flows with detailed chemistry. For multiphase flows, the state-of-the-art in CFD models is changing rapidly and it is now possible to predict reasonably well the flow characteristics of gas-solid FB reactors with mono-dispersed, non-cohesive solids. This thesis is organized into seven chapters. In Chapter 2, an overview of fluidized bed polymerization reactors is given, and a simplified two-site kinetic mechanism is discussed. Some basic theories used in our work are given in detail in Chapter 3. First, the governing equations and other constitutive equations for the multi-fluid model are summarized, and the kinetic theory for describing the solid stress tensor is discussed. The detailed derivation of DQMOM for the population balance equation is given as the second section. In this section

  15. Amoeba-inspired nanoarchitectonic computing: solving intractable computational problems using nanoscale photoexcitation transfer dynamics.

    PubMed

    Aono, Masashi; Naruse, Makoto; Kim, Song-Ju; Wakabayashi, Masamitsu; Hori, Hirokazu; Ohtsu, Motoichi; Hara, Masahiko

    2013-06-18

    Biologically inspired computing devices and architectures are expected to overcome the limitations of conventional technologies in terms of solving computationally demanding problems, adapting to complex environments, reducing energy consumption, and so on. We previously demonstrated that a primitive single-celled amoeba (a plasmodial slime mold), which exhibits complex spatiotemporal oscillatory dynamics and sophisticated computing capabilities, can be used to search for a solution to a very hard combinatorial optimization problem. We successfully extracted the essential spatiotemporal dynamics by which the amoeba solves the problem. This amoeba-inspired computing paradigm can be implemented by various physical systems that exhibit suitable spatiotemporal dynamics resembling the amoeba's problem-solving process. In this Article, we demonstrate that photoexcitation transfer phenomena in certain quantum nanostructures mediated by optical near-field interactions generate the amoebalike spatiotemporal dynamics and can be used to solve the satisfiability problem (SAT), which is the problem of judging whether a given logical proposition (a Boolean formula) is self-consistent. SAT is related to diverse application problems in artificial intelligence, information security, and bioinformatics and is a crucially important nondeterministic polynomial time (NP)-complete problem, which is believed to become intractable for conventional digital computers when the problem size increases. We show that our amoeba-inspired computing paradigm dramatically outperforms a conventional stochastic search method. These results indicate the potential for developing highly versatile nanoarchitectonic computers that realize powerful solution searching with low energy consumption.
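
    For contrast with the amoeba-inspired dynamics, a conventional stochastic search baseline of the kind such comparisons use can be sketched as a random-walk SAT solver (the exact baseline method of the study is not specified here; the clause encoding is invented):

        import random

        def random_walk_sat(clauses, n_vars, flips=10_000, seed=1):
            # Each literal is (variable_index, negated_flag).
            rng = random.Random(seed)
            assign = [rng.random() < 0.5 for _ in range(n_vars)]
            sat = lambda lit: assign[lit[0]] != lit[1]
            for _ in range(flips):
                unsat = [c for c in clauses if not any(sat(l) for l in c)]
                if not unsat:
                    return assign              # self-consistent assignment found
                var = rng.choice(rng.choice(unsat))[0]
                assign[var] = not assign[var]  # flip one variable at random
            return None

        # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
        f = [[(0, False), (1, True)], [(1, False), (2, False)], [(0, True), (2, True)]]
        print(random_walk_sat(f, 3))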

  16. Qualification of a computer program for drill string dynamics

    SciTech Connect

    Stone, C.M.; Carne, T.G.; Caskey, B.C.

    1985-01-01

    A four point plan for the qualification of the GEODYN drill string dynamics computer program is described. The qualification plan investigates both modal response and transient response of a short drill string subjected to simulated cutting loads applied through a polycrystalline diamond compact (PDC) bit. The experimentally based qualification shows that the analytical techniques included in Phase 1 GEODYN correctly simulate the dynamic response of the bit-drill string system. 6 refs., 8 figs.

  17. Computational Fluid Dynamics. [numerical methods and algorithm development

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  18. Portable lamp with dynamically controlled lighting distribution

    DOEpatents

    Siminovitch, Michael J.; Page, Erik R.

    2001-01-01

    A double lamp table or floor lamp lighting system has a pair of compact fluorescent lamps (CFLs) arranged vertically with a reflective septum in between. By selectively turning on one or both of the CFLs, down lighting, up lighting, or both up and down lighting is produced. The control system can also vary the light intensity from each CFL. The reflective septum insures that almost all the light produced by each lamp will be directed into the desired light distribution pattern which is selected and easily changed by the user. Planar compact fluorescent lamps, e.g. circular CFLs, particularly oriented horizontally, are preferable. CFLs provide energy efficiency. The lighting system may be designed for the home, hospitality, office or other environments.

  19. Open-Source, Distributed Computational Environment for Virtual Materials Exploration

    DTIC Science & Technology

    2015-01-01

    like LAMMPS with big enough systems, and the link between molecular dynamics materials simulations and FEM parameters. ...offering dynamic runtime extension using plugins, and offering reusable software libraries that expose features already well tested in the main...FEM solvers. The hot start method was coordinated using Python scripts and input files, and would serve as an initial prototype to be implemented in

  20. Overset grid applications on distributed memory MIMD computers

    NASA Technical Reports Server (NTRS)

    Chawla, Kalpana; Weeratunga, Sisira

    1994-01-01

    Analysis of modern aerospace vehicles requires the computation of flowfields about complex three dimensional geometries composed of regions with varying spatial resolution requirements. Overset grid methods allow the use of proven structured grid flow solvers to address the twin issues of geometrical complexity and the resolution variation by decomposing the complex physical domain into a collection of overlapping subdomains. This flexibility is accompanied by the need for irregular intergrid boundary communication among the overlapping component grids. This study investigates a strategy for implementing such a static overset grid implicit flow solver on distributed memory, MIMD computers; i.e., the 128 node Intel iPSC/860 and the 208 node Intel Paragon. Performance data for two composite grid configurations characteristic of those encountered in present day aerodynamic analysis are also presented.