ERIC Educational Resources Information Center
Zillesen, Pieter G. van Schaick
This paper introduces a hardware- and software-independent model for producing educational computer simulation environments. The model, which is based on the results of 32 studies of educational computer simulation program production, implies that educational computer simulation environments are specified, constructed, tested, implemented, and…
Evaluation of Visual Computer Simulator for Computer Architecture Education
ERIC Educational Resources Information Center
Imai, Yoshiro; Imai, Masatoshi; Moritoh, Yoshio
2013-01-01
This paper presents a trial evaluation, conducted in 2009-2011, of a visual computer simulator that has been developed to serve simultaneously as both an instruction facility and a learning tool. It illustrates an example of computer architecture education for university students and the use of an e-learning tool for assembly programming in order to…
SIMULTANEOUS DIFFERENTIAL EQUATION COMPUTER
Collier, D.M.; Meeks, L.A.; Palmer, J.P.
1960-05-10
A description is given for an electronic simulator for a system of simultaneous differential equations, including nonlinear equations. As a specific example, a homogeneous nuclear reactor system including a reactor fluid, heat exchanger, and a steam boiler may be simulated, with the nonlinearity resulting from a consideration of temperature effects taken into account. The simulator includes three operational amplifiers, a multiplier, appropriate potential sources, and interconnecting R-C networks.
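A digital counterpart of such a simultaneous-equation simulator is easy to sketch. The toy model below, an assumed two-state power/temperature system with nonlinear temperature feedback and not the patent's actual circuit equations, integrates the coupled nonlinear ODEs numerically:

```python
# Minimal sketch: integrating coupled nonlinear ODEs "simultaneously",
# as the analog simulator does with amplifiers and R-C networks.
# The model and coefficients are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

def reactor(t, y, alpha=-0.005, beta=0.1, tau=5.0):
    """Toy reactor: y[0] = normalized power, y[1] = temperature rise."""
    power, temp = y
    dpower = alpha * temp * power      # nonlinear temperature feedback
    dtemp = beta * power - temp / tau  # heat deposited minus heat removed
    return [dpower, dtemp]

sol = solve_ivp(reactor, (0.0, 100.0), [1.0, 0.0])
print(sol.y[:, -1])  # final power and temperature rise
```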
NASA Technical Reports Server (NTRS)
Sturdza, Peter (Inventor); Martins-Rivas, Herve (Inventor); Suzuki, Yoshifumi (Inventor)
2014-01-01
A fluid-flow simulation over a computer-generated surface is generated using a quasi-simultaneous technique. The simulation includes a fluid-flow mesh of inviscid and boundary-layer fluid cells. An initial fluid property for an inviscid fluid cell is determined using an inviscid fluid simulation that does not simulate fluid viscous effects. An initial boundary-layer fluid property for a boundary-layer fluid cell is determined using the initial fluid property and a viscous fluid simulation that simulates fluid viscous effects. An updated boundary-layer fluid property is determined for the boundary-layer fluid cell using the initial fluid property, the initial boundary-layer fluid property, and an interaction law. The interaction law approximates the inviscid fluid simulation using a matrix of aerodynamic influence coefficients computed using a two-dimensional surface panel technique and a fluid-property vector. An updated fluid property is determined for the inviscid fluid cell using the updated boundary-layer fluid property.
Computational Modeling Approaches to Multiscale Design of Icephobic Surfaces
NASA Technical Reports Server (NTRS)
Tallman, Aaron; Wang, Yan; Vargas, Mario
2017-01-01
To aid in the design of surfaces that prevent icing, a model and computational simulation of impact ice formation at the single droplet scale was implemented. The nucleation of a single supercooled droplet impacting a substrate, in rime ice conditions, was simulated using open-source computational fluid dynamics (CFD) software. No existing model simulates the simultaneous impact and freezing of a single supercooled water droplet; for the 10-week project, a low-fidelity feasibility study was the goal.
Damage progression in Composite Structures
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
1996-01-01
A computational simulation tool is used to evaluate the various stages of damage progression in composite materials during Iosipescu shear testing. Unidirectional composite specimens with either the major or minor material axis in the load direction are considered. Damage progression characteristics are described for each specimen using two types of boundary conditions. A procedure is outlined regarding the use of computational simulation in composites testing. Iosipescu shear testing using the V-notched beam specimen is a convenient method to measure both shear strength and shear stiffness simultaneously. The evaluation of composite test response can be made more productive and informative via computational simulation of progressive damage and fracture. Computational simulation performs a complete evaluation of laminated composite fracture via assessment of ply and subply level damage/fracture processes.
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost of large-scale simulation. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal code subregion in turn; the second processed the entire population simultaneously. The parallelizable SRA showed computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to perform a country-wide simulation. Thus, this parallel algorithm makes it possible to use ABMs for large-scale simulation with limited computational resources.
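The abstract does not spell out the SRA's sliding or synchronization rules, so the sketch below shows only the generic region-parallel pattern it exploits: agents bucketed by subregion, with each subregion stepped in its own worker process. The agent fields, infection rule, and region count are all invented for illustration.

```python
# Hedged sketch of region-parallel ABM stepping; not the paper's SRA.
from multiprocessing import Pool
from collections import defaultdict
import random

def step_region(agents):
    """Toy within-region step: each susceptible may catch the infection."""
    infected = any(a["state"] == "I" for a in agents)
    for a in agents:
        if a["state"] == "S" and infected and random.random() < 0.05:
            a["state"] = "I"
    return agents

if __name__ == "__main__":
    agents = [{"id": i, "region": i % 8,
               "state": "I" if random.random() < 0.01 else "S"}
              for i in range(10000)]
    regions = defaultdict(list)
    for a in agents:
        regions[a["region"]].append(a)
    with Pool() as pool:                       # one worker task per subregion
        updated = pool.map(step_region, list(regions.values()))
    agents = [a for region in updated for a in region]
    print(sum(a["state"] == "I" for a in agents), "infected after one step")
```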
A heterogeneous computing environment for simulating astrophysical fluid flows
NASA Technical Reports Server (NTRS)
Cazes, J.
1994-01-01
In the Concurrent Computing Laboratory in the Department of Physics and Astronomy at Louisiana State University we have constructed a heterogeneous computing environment that permits us to routinely simulate complicated three-dimensional fluid flows and to readily visualize the results of each simulation via three-dimensional animation sequences. An 8192-node MasPar MP-1 computer with 0.5 GBytes of RAM provides 250 MFlops of execution speed for our fluid flow simulations. Utilizing the parallel virtual machine (PVM) language, at periodic intervals data is automatically transferred from the MP-1 to a cluster of workstations where individual three-dimensional images are rendered for inclusion in a single animation sequence. Work is underway to replace executions on the MP-1 with simulations performed on the 512-node CM-5 at NCSA and to simultaneously gain access to more potent volume rendering workstations.
Sabouri, Sepideh; Matene, Elhacene; Vinet, Alain; Richer, Louis-Philippe; Cardinal, René; Armour, J Andrew; Pagé, Pierre; Kus, Teresa; Jacquemet, Vincent
2014-01-01
Epicardial high-density electrical mapping is a well-established experimental instrument for monitoring in vivo the activity of the atria in response to modulations of the autonomic nervous system in sinus rhythm. In regions that are not accessible by epicardial mapping, noncontact endocardial mapping performed through a balloon catheter may provide a more comprehensive description of atrial activity. We developed a computer model of the canine right atrium to compare epicardial and noncontact endocardial mapping. The model was derived from an experiment in which electroanatomical reconstruction, epicardial mapping (103 electrodes), noncontact endocardial mapping (2048 virtual electrodes computed from a 64-channel balloon catheter), and direct-contact endocardial catheter recordings were simultaneously performed in a dog. The recording system was simulated in the computer model. For simulations and experiments (after atrio-ventricular node suppression), activation maps were computed during sinus rhythm. Repolarization was assessed by measuring the area under the atrial T wave (ATa), a marker of repolarization gradients. Results showed epicardial-endocardial correlation coefficients of 0.80 and 0.63 (two dog experiments) and 0.96 (simulation) between activation times, and correlation coefficients of 0.57 and 0.46 (two dog experiments) and 0.92 (simulation) between ATa values. Despite the distance (balloon to atrial wall) and the dimension reduction (64 electrodes), some information about atrial repolarization remained present in noncontact signals.
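As a hedged numerical illustration of the quantities compared above, the sketch below computes an ATa-like area under a synthetic atrial T wave for each of 103 "electrodes" and correlates the epicardial map with a noisier stand-in for the noncontact endocardial map. All signals are synthetic; nothing here reproduces the study's model.

```python
# Toy ATa maps and their epicardial-endocardial correlation (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.2, 400)         # 200 ms repolarization window
dt = t[1] - t[0]
# 103 "epicardial electrodes": one T-wave shape, random per-site amplitude
epi = np.sin(np.pi * t / 0.2)[None, :] * rng.uniform(0.5, 1.5, (103, 1))
# degraded stand-in for the noncontact endocardial view of the same sites
endo = 0.8 * epi + 0.1 * rng.standard_normal(epi.shape)

ata_epi = epi.sum(axis=1) * dt         # ATa: area under the atrial T wave
ata_endo = endo.sum(axis=1) * dt
r = np.corrcoef(ata_epi, ata_endo)[0, 1]
print(f"epicardial-endocardial ATa correlation: {r:.2f}")
```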
Multi-channel, passive, short-range anti-aircraft defence system
NASA Astrophysics Data System (ADS)
Gapiński, Daniel; Krzysztofik, Izabela; Koruba, Zbigniew
2018-01-01
The paper presents a novel method for tracking several air targets simultaneously. The developed concept concerns a multi-channel, passive, short-range anti-aircraft defence system based on the programmed selection of air targets and an algorithm of simultaneous synchronisation of several modified optical scanning seekers. The above system is supposed to facilitate simultaneous firing of several self-guided infrared rocket missiles at many different air targets. From the available information, it appears that, currently, there are no passive self-guided seekers that fulfil such tasks. This paper contains theoretical discussions and simulations of simultaneous detection and tracking of many air targets by mutually integrated seekers of several rocket missiles. The results of computer simulation research have been presented in a graphical form.
Automating NEURON Simulation Deployment in Cloud Resources.
Stockton, David B; Santamaria, Fidel
2017-01-01
Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Cloud Computing, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model.
NASA Technical Reports Server (NTRS)
Schilling, D. L.; Oh, S. J.; Thau, F.
1975-01-01
Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.
Hybrid computer optimization of systems with random parameters
NASA Technical Reports Server (NTRS)
White, R. C., Jr.
1972-01-01
A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.
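A purely digital sketch of the same idea, with the missile dynamics replaced by an assumed quadratic miss-distance surrogate: the performance measure is an expectation over a random system parameter, estimated by Monte Carlo inside the objective and minimized over the design variables (the parameter's mean and spread).

```python
# Monte Carlo-in-the-loop optimization sketch; the model is a toy surrogate.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def expected_miss(design, n_runs=2000):
    """Monte Carlo estimate of mean miss distance over a random parameter."""
    mu, sigma = design[0], abs(design[1])
    gain = rng.normal(mu, sigma, n_runs)     # random system parameter
    miss = (gain - 2.0) ** 2 + 0.5 * sigma   # toy performance surrogate
    return miss.mean()

result = minimize(expected_miss, x0=[0.0, 1.0], method="Nelder-Mead")
print(result.x)   # drifts toward mean 2.0 and a small spread
```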
A Note on Verification of Computer Simulation Models
ERIC Educational Resources Information Center
Aigner, Dennis J.
1972-01-01
Establishes an argument that questions the validity of one "test" of goodness-of-fit (the extent to which a series of obtained measures agrees with a series of theoretical measures) for the simulated time path of a simple endogenous (internally developed) variable in a simultaneous, perhaps dynamic, econometric model. (Author)
NeuroManager: a workflow analysis based simulation management engine for computational neuroscience
Stockton, David B.; Santamaria, Fidel
2015-01-01
We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to supercomputer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in MATLAB, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college Biology education. To design and develop NeuroManager we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This resulted in 22 stages of simulation submission workflow. The software incorporates progress notification, automatic organization, labeling, and time-stamping of data and results, and integrated access to MATLAB's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project. PMID:26528175
A computer simulation of an adaptive noise canceler with a single input
NASA Astrophysics Data System (ADS)
Albert, Stuart D.
1991-06-01
A description of an adaptive noise canceler using Widrow's LMS algorithm is presented. A computer simulation of canceler performance (adaptive convergence time and frequency transfer function) was written for use as a design tool. The simulations, assumptions, and input parameters are described in detail. The simulation is used in a design example to predict the performance of an adaptive noise canceler in the simultaneous presence of both strong and weak narrow-band signals (a cosited frequency hopping radio scenario). On the basis of the simulation results, it is concluded that the simulation is suitable for use as an adaptive noise canceler design tool; i.e., it can be used to evaluate the effect of design parameter changes on canceler performance.
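The report's simulator itself is not available here, but the standard single-input arrangement it describes (a delayed copy of the input serving as the LMS reference, so narrow-band components are predicted and cancelled) can be sketched directly. Tap count, step size, and delay below are illustrative guesses, not the report's design values.

```python
# Single-input adaptive canceler (adaptive line enhancer) with LMS updates.
import numpy as np

def lms_canceler(x, n_taps=32, mu=0.005, delay=8):
    """Predict the narrow-band part of x from a delayed copy of x (LMS),
    and output the prediction error (broadband residue)."""
    w = np.zeros(n_taps)
    out = np.zeros_like(x)
    for n in range(n_taps + delay, len(x)):
        ref = x[n - delay - n_taps:n - delay][::-1]  # delayed tap vector
        y = w @ ref               # narrow-band (predictable) estimate
        e = x[n] - y              # error: broadband residue
        w += 2 * mu * e * ref     # Widrow-Hoff LMS weight update
        out[n] = e
    return out

fs = 8000.0
t = np.arange(20000) / fs
x = np.sin(2 * np.pi * 1000.0 * t) + 0.1 * np.random.randn(t.size)
print(np.var(x[-4000:]), np.var(lms_canceler(x)[-4000:]))  # tone suppressed
```

Because the 1 kHz tone is periodic in the 8-sample delay while the noise decorrelates, the filter learns to predict, and the error output to remove, only the tone.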
Computer Aided Battery Engineering Consortium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pesaran, Ahmad
A multi-national lab collaborative team was assembled, including experts from academia and industry, to enhance the recently developed Computer-Aided Battery Engineering for Electric Drive Vehicles (CAEBAT)-II battery crush modeling tools and to develop microstructure models for electrode design, both computationally efficient. Task 1: The new Multi-Scale Multi-Domain model framework (GH-MSMD) provides 100x to 1,000x computation speed-up in battery electrochemical/thermal simulation while retaining the modularity of particle-, electrode-, cell-, and pack-level domains. The increased speed enables direct use of the full model in parameter identification. Task 2: Mechanical-electrochemical-thermal (MECT) models for mechanical abuse simulation were simultaneously coupled, enabling simultaneous modeling of electrochemical reactions during a short circuit when necessary. The interactions between mechanical failure and battery cell performance were studied, and the flexibility of the model for various battery structures and loading conditions was improved. Model validation is ongoing against test data from Sandia National Laboratories, and the ABDT tool was established in ANSYS. Task 3: Microstructural modeling was conducted to enhance next-generation electrode designs. This 3-year project will validate models for a variety of electrodes, complementing Advanced Battery Research programs. Prototype tools have been developed for electrochemical simulation and geometric reconstruction.
Analysis of the possibilities and limits of the Moldflow method
NASA Astrophysics Data System (ADS)
Brierre, M.
1982-01-01
The Moldflow information and computation service is presented. Moldflow is a computer program and data bank available as a computer aid to dimensioning thermoplastic injection molding equipment and processes. It is based on the simultaneous solution of thermal and rheological equations and is intended to completely simulate the injection process. The Moldflow system is described and algorithms are discussed, based on Moldflow listings.
Strong coupling in electromechanical computation
NASA Astrophysics Data System (ADS)
Füzi, János
2000-06-01
A method is presented to carry out simultaneously electromagnetic field and force computation, electrical circuit analysis and mechanical computation to simulate the dynamic operation of electromagnetic actuators. The equation system is solved by a predictor-corrector scheme containing a Powell error minimization algorithm which ensures that every differential equation (coil current, field strength rate, flux rate, speed of the keeper) is fulfilled within the same time step.
Flow field prediction in full-scale Carrousel oxidation ditch by using computational fluid dynamics.
Yang, Yin; Wu, Yingying; Yang, Xiao; Zhang, Kai; Yang, Jiakuan
2010-01-01
In order to optimize the flow field in a full-scale Carrousel oxidation ditch with many sets of disc aerators operating simultaneously, an experimentally validated numerical tool based on computational fluid dynamics (CFD) was proposed. A full-scale, closed-loop bioreactor (Carrousel oxidation ditch) in the Ping Dingshan Sewage Treatment Plant in Ping Dingshan City, a medium-sized city in Henan Province of China, was evaluated using CFD. A moving wall model was created to simulate the many sets of disc aerators that create fluid motion in the ditch. The simulated results were acceptable compared with the experimental data, and the following conclusions were reached: (1) the new moving wall method can simulate the flow field in a Carrousel oxidation ditch with many sets of disc aerators operating simultaneously, while significantly reducing the total number of grid cells and thus the computational load; and (2) CFD modeling generally characterized the flow pattern in the full-scale tank, and 3D simulation could be a good supplement for improving the hydrodynamic performance in oxidation ditch designs.
A New Computational Technique for the Generation of Optimised Aircraft Trajectories
NASA Astrophysics Data System (ADS)
Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto
2017-12-01
A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
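The adaptive bisection refinement is specific to the paper, but the plain ɛ-constraint scalarization it builds on is standard: minimize one objective while bounding the other, then sweep the bound to trace a Pareto front. The sketch below does the sweep on a fixed grid with two toy objectives standing in for flight time and fuel burn.

```python
# Plain epsilon-constraint sweep over two toy objectives (not the paper's
# adaptive bisection variant or its aircraft dynamics).
import numpy as np
from scipy.optimize import minimize

def f_time(x):   # surrogate for flight time
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f_fuel(x):   # surrogate for fuel burn
    return x[0] ** 2 + (x[1] - 1.0) ** 2

front = []
for eps in np.linspace(0.1, 2.0, 10):        # sweep the bound on f_fuel
    res = minimize(f_time, x0=[0.5, 0.5],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x, e=eps: e - f_fuel(x)}])
    front.append((f_time(res.x), f_fuel(res.x)))
for time_val, fuel_val in front:             # one Pareto point per bound
    print(f"time={time_val:.3f}  fuel={fuel_val:.3f}")
```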
NASA Technical Reports Server (NTRS)
Dayman, B., Jr.; Fiore, A. W.
1974-01-01
The present work discusses in general terms the various kinds of ground facilities, in particular, wind tunnels, which support aerodynamic testing. Since not all flight parameters can be simulated simultaneously, an important problem consists in matching parameters. It is pointed out that there is a lack of wind tunnels for a complete Reynolds-number simulation. Using a computer to simulate flow fields can result in considerable reduction of wind-tunnel hours required to develop a given flight vehicle.
Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics
NASA Technical Reports Server (NTRS)
Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy
2006-01-01
This viewgraph presentation reviews the use of Adaptive Mesh Refinement (AMR) in simulating the crustal dynamics of Earth's surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements when compared to generating/running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's surface is spurred by proposed future NASA missions, such as InSAR, for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computational fluid dynamics for predictive simulation of complex flows around complex structures.
Data multiplexing in radio interferometric calibration
NASA Astrophysics Data System (ADS)
Yatawatta, Sarod; Diblen, Faruk; Spreeuw, Hanno; Koopmans, L. V. E.
2018-03-01
New and upcoming radio interferometers will produce unprecedented amounts of data that demand extremely powerful computers for processing. This is a limiting factor due to the large computational power and energy costs involved. Such limitations restrict several key data processing steps in radio interferometry. One such step is calibration, where systematic errors in the data are determined and corrected. Accurate calibration is an essential component in reaching many scientific goals in radio astronomy, and the use of consensus optimization that exploits the continuity of systematic errors across frequency significantly improves calibration accuracy. In order to reach full consensus, data at all frequencies need to be calibrated simultaneously. In the SKA regime, this can become intractable if the available compute agents do not have the resources to process data from all frequency channels simultaneously. In this paper, we propose a multiplexing scheme based on the alternating direction method of multipliers (ADMM) with cyclic updates. With this scheme, it is possible to calibrate the full data set simultaneously using far fewer compute agents than the number of frequencies at which data are available. We give simulation results to show the feasibility of the proposed multiplexing scheme in simultaneously calibrating a full data set when a limited number of compute agents are available.
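A heavily simplified sketch of the multiplexing idea, assuming a scalar gain per frequency channel in place of the full interferometric measurement model: consensus ADMM pulls per-channel estimates toward a common value, and cyclic updates touch only one subset of channels per iteration, emulating fewer compute agents than frequency channels.

```python
# Consensus ADMM with cyclic (multiplexed) updates on a toy scalar-gain model.
import numpy as np

rng = np.random.default_rng(2)
n_freq, n_agents, rho = 24, 4, 1.0
data = 2.0 + 0.3 * rng.standard_normal(n_freq)  # noisy per-channel gains

x = np.zeros(n_freq)   # local per-frequency estimates
z = 0.0                # consensus value (continuity across frequency)
u = np.zeros(n_freq)   # scaled dual variables

for it in range(60):
    # cyclic multiplexing: only one subset of channels is updated per pass
    active = np.arange(n_freq) % n_agents == it % n_agents
    # local update: argmin (x - d)^2 + rho (x - z + u)^2, in closed form
    x[active] = (data[active] + rho * (z - u[active])) / (1.0 + rho)
    z = np.mean(x + u)                    # consensus update
    u[active] += x[active] - z            # dual update for active channels
print(round(z, 3), "vs channel mean", round(data.mean(), 3))
```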
Vecchiato, Giovanni; Borghini, Gianluca; Aricò, Pietro; Graziani, Ilenia; Maglione, Anton Giulio; Cherubino, Patrizia; Babiloni, Fabio
2016-10-01
Brain-computer interfaces (BCIs) are widely used for clinical applications and exploited to design robotic and interactive systems for healthy people. We provide evidence that a sensorimotor electroencephalographic (EEG) BCI system can be controlled while piloting a flight simulator and attending a double attentional task simultaneously. Ten healthy subjects were trained to learn how to manage a flight simulator, use the BCI system, and answer the attentional tasks independently. Afterward, EEG activity was collected during a first flight, where subjects were required to concurrently use the BCI, and a second flight, where they were required to simultaneously use the BCI and answer the attentional tasks. Results showed that the concurrent use of the BCI system during the flight simulation does not affect flight performance. However, BCI performance decreases from 83 to 63 % while attending additional alertness and vigilance tasks. This work shows that it is possible to successfully control a BCI system during the execution of multiple tasks, such as piloting a flight simulator with an extra cognitive load induced by attentional tasks. Such a framework aims to foster knowledge of BCI systems embedded in vehicles and robotic devices to allow the simultaneous execution of secondary tasks.
Integrated computational materials engineering: Tools, simulations and new applications
Madison, Jonathan D.
2016-03-30
Integrated Computational Materials Engineering (ICME) is a relatively new methodology with tremendous potential to revolutionize how science, engineering, and manufacturing work together. ICME was motivated by the desire to derive greater understanding throughout each portion of the materials development life cycle, while simultaneously reducing the time from discovery to implementation [1,2].
Identifying Differential Item Functioning in Multi-Stage Computer Adaptive Testing
ERIC Educational Resources Information Center
Gierl, Mark J.; Lai, Hollis; Li, Johnson
2013-01-01
The purpose of this study is to evaluate the performance of CATSIB (Computer Adaptive Testing-Simultaneous Item Bias Test) for detecting differential item functioning (DIF) when items in the matching and studied subtest are administered adaptively in the context of a realistic multi-stage adaptive test (MST). MST was simulated using a 4-item…
1985-03-01
scene contents should provide the needed information simultaneously in each perspective as prioritized. For the others, the requirement is that...turn the airplane using nosewheel steering until lineup is accomplished. Minimize side loads. (3) Apply forward elevator pressure to ensure positive...simultaneously advancing the power toward the computed takeoff setting. Set final takeoff thrust by approximately 60 knots. (6) As the airplane accelerates, keep
Versatile microwave-driven trapped ion spin system for quantum information processing
Piltz, Christian; Sriarunothai, Theeraphot; Ivanov, Svetoslav S.; Wölk, Sabine; Wunderlich, Christof
2016-01-01
Using trapped atomic ions, we demonstrate a tailored and versatile effective spin system suitable for quantum simulations and universal quantum computation. By simply applying microwave pulses, selected spins can be decoupled from the remaining system and, thus, can serve as a quantum memory, while simultaneously, other coupled spins perform conditional quantum dynamics. Also, microwave pulses can change the sign of spin-spin couplings, as well as their effective strength, even during the course of a quantum algorithm. Taking advantage of the simultaneous long-range coupling between three spins, a coherent quantum Fourier transform—an essential building block for many quantum algorithms—is efficiently realized. This approach, which is based on microwave-driven trapped ions and is complementary to laser-based methods, opens a new route to overcoming technical and physical challenges in the quest for a quantum simulator and a quantum computer. PMID:27419233
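For reference, the three-spin quantum Fourier transform realized in the experiment corresponds to the 8x8 unitary constructed below; the sketch just builds it, checks unitarity, and shows its action on a computational basis state. This is the textbook matrix, not the microwave pulse sequence applied to the ions.

```python
# Textbook QFT unitary for three spins (dimension 8).
import numpy as np

def qft_matrix(n_qubits):
    """Discrete Fourier transform unitary on n_qubits spins."""
    dim = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(dim), np.arange(dim))
    return np.exp(2j * np.pi * j * k / dim) / np.sqrt(dim)

F = qft_matrix(3)                               # three-spin QFT, dim 8
print(np.allclose(F.conj().T @ F, np.eye(8)))   # unitarity check: True
state = np.zeros(8); state[1] = 1.0             # |001> in the computational basis
print(np.round(F @ state, 3))                   # equal magnitudes, linear phase
```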
Introducing Seismic Tomography with Computational Modeling
NASA Astrophysics Data System (ADS)
Neves, R.; Neves, M. L.; Teodoro, V.
2011-12-01
Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics, and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students to improve their mathematical or programming knowledge and simultaneously focus on learning seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises that give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and to focus on learning seismological concepts and processes while taking advantage of basic scientific computation methods and tools.
NASA Astrophysics Data System (ADS)
Huppert, J.; Michal Lomask, S.; Lazarowitz, R.
2002-08-01
Computer-assisted learning, including simulated experiments, has great potential to address the problem-solving process, which is a complex activity; it requires a highly structured approach in order to understand the use of simulations as an instructional device. This study is based on a computer simulation program, 'The Growth Curve of Microorganisms', which required tenth-grade biology students to use problem-solving skills while simultaneously manipulating three independent variables in one simulated experiment. The aims were to investigate the computer simulation's impact on students' academic achievement and on their mastery of science process skills in relation to their cognitive stages. The results indicate that the concrete and transition operational students in the experimental group achieved significantly higher academic achievement than their counterparts in the control group. The higher the cognitive operational stage, the higher the students' achievement, except in the control group, where students in the concrete and transition operational stages did not differ. Girls achieved equally with boys in the experimental group. Students' academic achievement may indicate the potential impact a computer simulation program can have, enabling students with low reasoning abilities to cope successfully with learning concepts and principles in science that require high cognitive skills.
Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture
NASA Technical Reports Server (NTRS)
Jones, W. H.
1983-01-01
The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.
NASA Astrophysics Data System (ADS)
Watanabe, Koji; Matsuno, Kenichi
This paper presents a new method for simulating flows driven by a body traveling with neither restrictions on its motion nor a limit on the size of the region. In the method, named the 'Moving Computational Domain Method', the whole computational domain, including the bodies inside it, moves through physical space. Since the whole grid of the computational domain moves according to the movement of the body, the flow solver must be constructed on a moving grid system, and it is important for the solver to satisfy the physical and geometric conservation laws simultaneously on the moving grid. For this purpose, the Moving-Grid Finite-Volume Method is employed as the flow solver. The Moving Computational Domain Method makes it possible to simulate flows driven by any kind of body motion in a region of any size while satisfying the physical and geometric conservation laws simultaneously. In this paper, the method is applied to the flow around a high-speed car passing through a hairpin curve. The distinctive flow field driven by the car at the hairpin curve is demonstrated in detail. The results show the promising features of the method.
Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi
2011-11-01
Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks.
Computer simulation of on-orbit manned maneuvering unit operations
NASA Technical Reports Server (NTRS)
Stuart, G. M.; Garcia, K. D.
1986-01-01
Simulation of spacecraft on-orbit operations is discussed in reference to Martin Marietta's Space Operations Simulation laboratory's use of computer software models to drive a six-degree-of-freedom moving base carriage and two target gimbal systems. In particular, key simulation issues and related computer software models associated with providing real-time, man-in-the-loop simulations of the Manned Maneuvering Unit (MMU) are addressed, with special attention given to how effectively these models and motion systems simulate the MMU's actual on-orbit operations. The weightless effects of the space environment require the development of entirely new devices for locomotion. Since the access to space is very limited, it is necessary to design, build, and test these new devices within the physical constraints of earth using simulators. The simulation method that is discussed here is the technique of using computer software models to drive a Moving Base Carriage (MBC) that is capable of providing simultaneous six-degree-of-freedom motions. This method, utilized at Martin Marietta's Space Operations Simulation (SOS) laboratory, provides the ability to simulate the operation of manned spacecraft, provides the pilot with proper three-dimensional visual cues, and allows training of on-orbit operations. The purpose here is to discuss significant MMU simulation issues, the related models that were developed in response to these issues, and how effectively these models simulate the MMU's actual on-orbit operations.
Data acquisition and path selection decision making for an autonomous roving vehicle
NASA Technical Reports Server (NTRS)
Frederick, D. K.; Shen, C. N.; Yerazunis, S. W.
1976-01-01
Problems related to the guidance of an autonomous rover for unmanned planetary exploration were investigated. Topics included in these studies were: simulation, on an interactive graphics computer system, of the Rapid Estimation Technique for detection of discrete obstacles; incorporation of a simultaneous Bayesian estimate of states and inputs in the Rapid Estimation Scheme; development of methods for estimating actual laser rangefinder errors and their application to data provided by the Jet Propulsion Laboratory; and modification of a path selection system simulation computer code for evaluation of a hazard detection system based on laser rangefinder data.
Novel 3D/VR interactive environment for MD simulations, visualization and analysis.
Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P
2014-12-18
The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
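A sketch of the serial baseline being accelerated: Gillespie's direct-method SSA for an assumed birth-death toy model, with many realizations collected for statistics. On the GPU each realization would map to its own thread; here they simply run in a loop.

```python
# Gillespie direct-method SSA on a toy birth-death model (serial baseline).
import numpy as np

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0, rng=None):
    """One SSA realization: returns the molecule count at t_end."""
    rng = rng or np.random.default_rng()
    t, x = 0.0, x0
    while t < t_end:
        rates = (k_birth, k_death * x)
        total = rates[0] + rates[1]
        t += rng.exponential(1.0 / total)     # time to the next reaction
        if rng.random() < rates[0] / total:   # pick which reaction fires
            x += 1
        else:
            x -= 1
    return x

rng = np.random.default_rng(3)
finals = [ssa_birth_death(rng=rng) for _ in range(200)]  # 200 realizations
print(np.mean(finals))    # approaches k_birth / k_death = 100
```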
2011-02-07
Sensor UGVs (SUGV) or Disruptor UGVs, depending on their payload. The SUGVs included vision, GPS/IMU, and LIDAR systems for identifying and tracking...employed by all the MAGICian research groups. Objects of interest were tracked using standard LIDAR and Computer Vision template-based feature...tracking approaches. Mapping was solved through Multi-Agent particle-filter based Simultaneous Localization and Mapping (SLAM). Our system contains
Magician Simulator. A Realistic Simulator for Heterogenous Teams of Autonomous Robots
2011-01-18
IMU, and LIDAR systems for identifying and tracking mobile OOI at long range (>20 m), providing early warnings and allowing neutralization from a...LIDAR and Computer Vision template-based feature tracking approaches. Mapping was solved through Multi-Agent particle-filter based Simultaneous Localization and Mapping (SLAM). Our system contains two maps, a physical map and an influence map (location of hostile OOI, explored and unexplored
Chen, Jin; Venugopal, Vivek; Intes, Xavier
2011-01-01
Time-resolved fluorescence optical tomography allows 3-dimensional localization of multiple fluorophores based on lifetime contrast while providing a unique data set for improved resolution. However, to employ the full fluorescence time measurements, a light propagation model that accurately simulates weakly diffused and multiply scattered photons is required. In this article, we derive a computationally efficient Monte Carlo based method to compute time-gated fluorescence Jacobians for the simultaneous imaging of two fluorophores with lifetime contrast. The Monte Carlo based formulation is validated on a synthetic murine model simulating the uptake in the kidneys of two distinct fluorophores with lifetime contrast. Experimentally, the method is validated using capillaries filled with 2.5 nmol of ICG and IRDye™800CW, respectively, embedded in a diffuse medium mimicking the average optical properties of mice. Combining multiple time gates in one inverse problem allows the simultaneous reconstruction of multiple fluorophores with increased resolution and minimal crosstalk using the proposed formulation. PMID:21483610
NASA Astrophysics Data System (ADS)
Lynch, Amanda H.; Abramson, David; Görgen, Klaus; Beringer, Jason; Uotila, Petteri
2007-10-01
Fires in the Australian savanna have been hypothesized to affect monsoon evolution, but the hypothesis is controversial and the effects have not been quantified. A distributed computing approach allows the development of a challenging experimental design that permits simultaneous variation of all fire attributes. The climate model simulations are distributed around multiple independent computer clusters in six countries, an approach that has potential for a range of other large simulation applications in the earth sciences. The experiment clarifies that savanna burning can shape the monsoon through two mechanisms. Boundary-layer circulation and large-scale convergence is intensified monotonically through increasing fire intensity and area burned. However, thresholds of fire timing and area are evident in the consequent influence on monsoon rainfall. In the optimal band of late, high intensity fires with a somewhat limited extent, it is possible for the wet season to be significantly enhanced.
Time-partitioning simulation models for calculation on parallel computers
NASA Technical Reports Server (NTRS)
Milner, Edward J.; Blech, Richard A.; Chima, Rodrick V.
1987-01-01
A technique allowing time-staggered solution of partial differential equations is presented in this report. Using this technique, called time-partitioning, simulation execution speedup is proportional to the number of processors used, because all processors operate simultaneously, each updating the solution grid at a different time point. The technique is limited by neither the number of processors available nor the dimension of the solution grid. Time-partitioning was used to obtain the flow pattern through a cascade of airfoils, modeled by the Euler partial differential equations. An execution speedup factor of 1.77 was achieved using a two-processor Cray X-MP/24 computer.
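A serial emulation of the time-partitioning idea for a backward-Euler 1D heat equation (grid size, pipeline depth, and coefficients are illustrative, not the report's): because the implicit update at each time level is solved iteratively, P "processors" can each perform one inner Jacobi iteration at their own time level per sweep, consuming the predecessor level's current, not-yet-converged iterate.

```python
# Time-partitioned pipeline, emulated serially: P levels iterated at once.
import numpy as np

nx, P, r = 50, 4, 0.4          # grid points, pipeline depth, alpha*dt/dx**2
u0 = np.zeros(nx)
u0[nx // 2] = 1.0              # converged solution at time level 0 (heat spike)
levels = [u0.copy() for _ in range(P)]   # current iterates for levels 1..P

for sweep in range(200):
    # snapshot: each processor reads its predecessor's iterate from last sweep
    prev = [u0] + levels[:-1]
    for p in range(P):
        new = levels[p].copy()
        # one Jacobi iteration of the backward-Euler system
        # (1 + 2r) u_i - r (u_{i+1} + u_{i-1}) = b_i, with b = previous level
        new[1:-1] = (prev[p][1:-1]
                     + r * (levels[p][2:] + levels[p][:-2])) / (1 + 2 * r)
        levels[p] = new
print(levels[-1].sum())        # ~ total initial heat after four implicit steps
```

After enough sweeps every level holds its converged solution, which is what lets all processors stay busy simultaneously on a real machine.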
NASA Astrophysics Data System (ADS)
Chun, Poo-Reum; Lee, Se-Ah; Yook, Yeong-Geun; Choi, Kwang-Sung; Cho, Deog-Geun; Yu, Dong-Hun; Chang, Won-Seok; Kwon, Deuk-Chul; Im, Yeon-Ho
2013-09-01
Although plasma etch profile simulation has attracted much interest for developing reliable plasma etching, large gaps remain between the current state of research and predictive modeling due to the inherent complexity of plasma processes. As an effort to address this issue, we present a 3D feature profile simulation coupled with a well-defined plasma-surface kinetic model for the silicon dioxide etching process under fluorocarbon plasmas. To capture realistic plasma-surface reaction behaviors, a polymer-layer-based surface kinetic model was proposed that considers simultaneous polymer deposition and oxide etching. The plasma-surface model was then used to calculate the speed function for the 3D topology simulation, which consists of a multiple level-set based moving algorithm and a ballistic transport module. In addition, the time-consuming computations in the ballistic transport calculation were accelerated drastically by GPU-based numerical computation, enabling real-time computation. Finally, we demonstrate that the surface kinetic model can be coupled successfully to 3D etch profile simulations of high-aspect-ratio contact hole plasma etching.
Nurhuda, M; Rouf, A
2017-09-01
The paper presents a method for the simultaneous computation of the eigenfunctions and eigenvalues of the stationary Schrödinger equation on a grid, without imposing a boundary-value condition. The method is based on a filter operator, which selects the eigenfunction from a wave packet at a rate comparable to a δ function. The efficacy and reliability of the method are demonstrated by comparing the simulation results with analytical or numerical solutions obtained using other methods for various boundary-value conditions. It is found that the method is robust, accurate, and reliable. Further prospects of the filter method for simulating the Schrödinger equation in higher-dimensional spaces are also highlighted.
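The paper's filter operator is not reproduced here. As a related illustration of selecting an eigenpair from a wave packet on a grid, the sketch below uses imaginary-time damping for a harmonic oscillator (with hbar = m = omega = 1), which suppresses excited components and converges to the ground state with energy 0.5.

```python
# Ground-state extraction from a wave packet by imaginary-time damping.
import numpy as np

x = np.linspace(-8.0, 8.0, 256)
dx = x[1] - x[0]
V = 0.5 * x**2                         # harmonic potential
psi = np.exp(-((x - 1.0) ** 2))        # arbitrary initial wave packet
dt = 0.002

def H(psi):
    """Discrete Hamiltonian: H = -0.5 d^2/dx^2 + V (periodic stencil)."""
    lap = (np.roll(psi, 1) - 2.0 * psi + np.roll(psi, -1)) / dx**2
    return -0.5 * lap + V * psi

for _ in range(8000):                  # (1 - dt*H) ~ e^{-H dt}: damps the
    psi = psi - dt * H(psi)            # high-energy components of the packet
    psi /= np.sqrt(np.sum(psi**2) * dx)

energy = np.sum(psi * H(psi)) * dx     # Rayleigh quotient -> eigenvalue
print(energy)                          # ~0.5, the exact ground-state energy
```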
Gibbs sampling on large lattice with GMRF
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Allard, Denis
2018-02-01
Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields with category fields obtained by discrete simulation methods such as multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence, the effect of the choice of boundary conditions, of the correlation range and of GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than in the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it realistic to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
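The coding-set idea is easy to demonstrate on a small example. The sketch below runs simultaneous Gibbs updates on the two colours of a checkerboard for a simple first-order CAR-type GMRF on a torus, computing the neighbour sums by vectorised array shifts (a convolution); the parameters are illustrative, and the truncation and acceptance/rejection step described in the abstract are omitted.

```python
# Checkerboard (two coding sets) Gibbs sampling for a CAR-type GMRF on a
# torus: points of one colour share no neighbours, so they can be updated
# simultaneously from their full conditionals.
import numpy as np

rng = np.random.default_rng(0)
n, rho, tau = 64, 0.24, 1.0       # rho < 0.25 keeps the precision matrix valid

def neighbour_sum(x):
    # Periodic 4-neighbour sum via array shifts (equivalent to a convolution)
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0)
            + np.roll(x, 1, 1) + np.roll(x, -1, 1))

ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
black = (ii + jj) % 2 == 0        # the two coding sets
x = rng.standard_normal((n, n))

for sweep in range(200):
    for mask in (black, ~black):
        # Full conditional: x_i | rest ~ N(rho * neighbour sum, 1/(4 tau))
        mu = rho * neighbour_sum(x)
        draw = mu + rng.standard_normal((n, n)) / np.sqrt(4.0 * tau)
        x = np.where(mask, draw, x)

print(f"sample mean {x.mean():.3f}, variance {x.var():.3f}")
```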
NASA Technical Reports Server (NTRS)
Reid, G. F.
1976-01-01
A technique is presented for determining state variable feedback gains that will place both the poles and zeros of a selected transfer function of a dual-input control system at pre-determined locations in the s-plane. Leverrier's algorithm is used to determine the numerator and denominator coefficients of the closed-loop transfer function as functions of the feedback gains. The values of gain that match these coefficients to those of a pre-selected model are found by solving two systems of linear simultaneous equations. The algorithm has been used in a computer simulation of the CH-47 helicopter to control longitudinal dynamics.
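The core of the method is the Leverrier (Faddeev-LeVerrier) recursion, which produces the characteristic polynomial coefficients of the state matrix, and whose intermediate matrices also yield the adjugate of (sI - A) needed for the numerator coefficients. A minimal sketch, with an arbitrary companion-form matrix as the example:

```python
# Faddeev-LeVerrier recursion: characteristic polynomial coefficients of A,
# i.e. the denominator of a state-space transfer function.
import numpy as np

def leverrier(A):
    """Return [1, c1, ..., cn] with det(sI - A) = s^n + c1 s^(n-1) + ... + cn."""
    n = A.shape[0]
    coeffs = [1.0]
    M = np.zeros_like(A, dtype=float)
    for k in range(1, n + 1):
        M = A @ M + coeffs[-1] * np.eye(n)   # M_k = A M_{k-1} + c_{k-1} I
        coeffs.append(-np.trace(A @ M) / k)  # c_k = -tr(A M_k) / k
    return np.array(coeffs)

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])          # poles at -1, -2, -3
print(leverrier(A))                          # -> [1. 6. 11. 6.]
print(np.poly(A))                            # NumPy's result, as a check
```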
A polymorphic reconfigurable emulator for parallel simulation
NASA Technical Reports Server (NTRS)
Parrish, E. A., Jr.; Mcvey, E. S.; Cook, G.
1980-01-01
Microprocessor and arithmetic support chip technology was applied to the design of a reconfigurable emulator for real-time flight simulation. The system developed consists of a master control system, which performs all man-machine interactions and configures the hardware to emulate a given aircraft, and numerous slave compute modules (SCMs), which comprise the parallel computational units. It is shown that all parts of the state equations can be worked on simultaneously but that the algebraic equations cannot (unless they are slowly varying). Attempts to obtain algorithms that allow parallel updates are reported. The word length and step size to be used in the SCMs are determined, and the architecture of the hardware and software is described.
Overlapped Fourier coding for optical aberration removal
Horstmeyer, Roarke; Ou, Xiaoze; Chung, Jaebum; Zheng, Guoan; Yang, Changhuei
2014-01-01
We present an imaging procedure that simultaneously optimizes a camera’s resolution and retrieves a sample’s phase over a sequence of snapshots. The technique, termed overlapped Fourier coding (OFC), first digitally pans a small aperture across a camera’s pupil plane with a spatial light modulator. At each aperture location, a unique image is acquired. The OFC algorithm then fuses these low-resolution images into a full-resolution estimate of the complex optical field incident upon the detector. Simultaneously, the algorithm utilizes redundancies within the acquired dataset to computationally estimate and remove unknown optical aberrations and system misalignments via simulated annealing. The result is an imaging system that can computationally overcome its optical imperfections to offer enhanced resolution, at the expense of taking multiple snapshots over time. PMID:25321982
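The acquisition step of OFC is simple to express. The sketch below simulates panning a small circular aperture across the Fourier (pupil) plane of a complex field and recording the resulting stack of low-resolution intensity images; the field, aperture radius, and scan grid are illustrative assumptions, and the published fusion/annealing recovery step is not reproduced here.

```python
# Forward model of overlapped Fourier coding: one low-resolution intensity
# image per sub-aperture position in the pupil plane.
import numpy as np

rng = np.random.default_rng(1)
n = 256
field = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")

def low_res_image(field, cx, cy, radius=0.08):
    """Intensity image seen through one circular sub-aperture at (cx, cy)."""
    pupil = np.fft.fft2(field)
    mask = (fx - cx) ** 2 + (fy - cy) ** 2 < radius**2
    return np.abs(np.fft.ifft2(pupil * mask)) ** 2

# Overlapping grid of aperture centres (spacing smaller than the diameter)
centres = [(cx, cy) for cx in np.linspace(-0.2, 0.2, 5)
                    for cy in np.linspace(-0.2, 0.2, 5)]
stack = np.stack([low_res_image(field, cx, cy) for cx, cy in centres])
print(stack.shape)   # (25, 256, 256): the dataset an OFC solver would fuse
```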
NASA Astrophysics Data System (ADS)
Li, Jinghe; Song, Linping; Liu, Qing Huo
2016-02-01
A simultaneous multiple-frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions, simulating the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver of the 2D volume integral equation for the forward computation. The inversion technique combines the efficient FFT algorithm, to speed up the matrix-vector multiplication, with the stable convergence of the simultaneous multiple-frequency CSI in the iteration process. As a result, this method is capable of effective quantitative conductivity image reconstruction for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples validate the effectiveness and capacity of the simultaneous multiple-frequency CSI method for a limited array view in VEP.
A Balanced Mixture of Antagonistic Pressures Promotes the Evolution of Parallel Movement
NASA Astrophysics Data System (ADS)
Demšar, Jure; Štrumbelj, Erik; Lebar Bajec, Iztok
2016-12-01
A common hypothesis about the origins of collective behaviour suggests that animals might live and move in groups to increase their chances of surviving predator attacks. This hypothesis is supported by several studies that use computational models to simulate natural evolution. These studies, however, either tune an ad-hoc model to ‘reproduce’ collective behaviour, or concentrate on a single type of predation pressure, or infer the emergence of collective behaviour from an increase in prey density. In nature, prey are often targeted by multiple predator species simultaneously and this might have played a pivotal role in the evolution of collective behaviour. We expand on previous research by using an evolutionary rule-based system to simulate the evolution of prey behaviour when prey are subject to multiple simultaneous predation pressures. We analyse the evolved behaviour via prey density, polarization, and angular momentum. Our results suggest that a mixture of antagonistic external pressures that simultaneously steer prey towards grouping and dispersing might be required for prey individuals to evolve dynamic parallel movement.
CFD simulation of flow through heart: a perspective review.
Khalafvand, S S; Ng, E Y K; Zhong, L
2011-01-01
The heart is an organ which pumps blood around the body by contraction of its muscular wall. The heart contains a coupled system of the wall motion and the blood motion; both must be computed simultaneously, which makes biological computational fluid dynamics (CFD) difficult. The wall of the heart is not rigid, so proper boundary conditions are essential for CFD modelling, and fluid-wall interaction is very important for realistic CFD modelling. Many of the assumptions made in CFD simulations of the heart keep them far from a real model. Realistic fluid-structure interaction models, with the structure computed by the finite element method and the blood flow by CFD, use more realistic coupling algorithms. This type of method is very powerful for resolving the complex properties of the cardiac structure and the sensitive interaction of fluid and structure. The final goal of heart modelling is to simulate total heart function by integrating cardiac anatomy, electrical activation, mechanics, metabolism and fluid mechanics together in one computational framework.
NASA Astrophysics Data System (ADS)
Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme
2013-04-01
Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with the wave set-up contribution are estimated here at one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is computation of the joint probability of simultaneous wave height and still sea level; the second is interpretation of that joint probability to assess the sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first is a multivariate extreme value distribution of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte-Carlo simulation, in which the estimation is more accurate but needs more calculation time, and the classical ocean engineering design-contour method of inverse-FORM type, which is simpler and allows more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare results from the two approaches with the two methods. To be able to use both the Monte-Carlo simulation and the design-contour method, the wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach over the multivariate extreme value approach when an extreme sea level occurs when either the surge or the wave height is large. We discuss the validity of the ocean engineering design-contour method, which is an alternative when the computation of sea levels is too complex for the Monte-Carlo simulation method.
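To make the Monte-Carlo route concrete, the sketch below estimates a 100-year return level for still water level plus wave set-up. The marginals, the Gaussian-copula correlation, and the set-up rule (taken here as 0.2 times the significant wave height) are all assumptions made for the illustration, not values from the Cherbourg study.

```python
# Illustrative Monte-Carlo return-level estimate for sea level with set-up.
import numpy as np

rng = np.random.default_rng(2)
n_years = 200_000                 # simulated years of annual maxima

# Correlated standard normals -> Gaussian copula between surge and waves
r = 0.6
z1 = rng.standard_normal(n_years)
z2 = r * z1 + np.sqrt(1 - r**2) * rng.standard_normal(n_years)

still = 4.0 + 0.4 * z1            # illustrative still-water-level marginal (m)
hs = np.exp(0.8 + 0.3 * z2)       # illustrative wave-height marginal (m)

setup = 0.2 * hs                  # simple empirical wave set-up formula
total = still + setup

T = 100.0                         # target return period (years)
level = np.quantile(total, 1.0 - 1.0 / T)
print(f"{T:.0f}-year sea level with set-up: {level:.2f} m")
```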
Persistence of opinion in the Sznajd consensus model: computer simulation
NASA Astrophysics Data System (ADS)
Stauffer, D.; de Oliveira, P. M. C.
2002-12-01
The density of never-changed opinions during the Sznajd consensus-finding process decays with time t as 1/t^θ. We find θ ≈ 3/8 for a chain, compatible with the exact Ising result of Derrida et al. In higher dimensions, however, the exponent differs from the Ising θ. With simultaneous updating of sublattices instead of the usual random sequential updating, the number of persistent opinions decays roughly exponentially. Some of the simulations used multi-spin coding.
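A minimal sketch of the measurement is given below: a 1D Sznajd chain with random sequential updating, where an agreeing pair convinces its two outer neighbours, and the density of never-changed opinions is logged over time. Chain length and number of sweeps are illustrative.

```python
# 1D Sznajd model: track the density of opinions that have never changed.
# On a chain the density should decay roughly as t**(-3/8).
import numpy as np

rng = np.random.default_rng(3)
N, sweeps = 5000, 200
s = rng.choice([-1, 1], size=N)
never_changed = np.ones(N, dtype=bool)

for t in range(1, sweeps + 1):
    for _ in range(N):                      # one sweep = N elementary updates
        i = rng.integers(N)
        j = (i + 1) % N
        if s[i] == s[j]:                    # an agreeing pair convinces
            for k in ((i - 1) % N, (j + 1) % N):   # its two outer neighbours
                if s[k] != s[i]:
                    s[k] = s[i]
                    never_changed[k] = False
    if t in (10, 30, 100, 200):
        print(t, never_changed.mean())      # slope on a log-log plot ~ -3/8
```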
A Mathematical Model for the Middle Ear Ventilation
NASA Astrophysics Data System (ADS)
Molnárka, G.; Miletics, E. M.; Fücsek, M.
2008-09-01
Otitis media is one of the most common childhood illnesses, so investigation of human middle ear ventilation is a topical problem. Earlier investigations, using both experimental and theoretical approaches, can be found in [1]-[3]. Here we give a new mathematical and computer model to simulate this ventilation process. The model is able to describe the diffusion and flow processes simultaneously, and therefore gives more precise results than earlier models did. The article contains the mathematical model and some results of the simulation.
ERIC Educational Resources Information Center
Tambade, Popat S.
2011-01-01
The objective of this article is to graphically illustrate to the students the physical phenomenon of motion of charged particle under the action of simultaneous electric and magnetic fields by simulating particle motion on a computer. Differential equations of motions are solved analytically and path of particle in three-dimensional space are…
Resolution enhancement using simultaneous couple illumination
NASA Astrophysics Data System (ADS)
Hussain, Anwar; Martínez Fuentes, José Luis
2016-10-01
A super-resolution technique based on structured illumination created by a liquid crystal on silicon spatial light modulator (LCOS-SLM) is presented. Single and simultaneous pairs of tilted beams are generated to illuminate a target object. Resolution enhancement of an optical 4f system is demonstrated by using numerical simulations. The resulting intensity images are recorded at a charge-coupled device (CCD) and stored in computer memory for further processing. One-dimensional enhancement can be performed with only 15 images, whereas complete two-dimensional improvement requires 153 different images. The resolution of the optical system is extended three times compared to the band-limited system.
Crevillén-García, D
2018-04-01
Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
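One common way to realise the output-side reduction is principal component analysis of training snapshots, with an independent Gaussian-process emulator per retained coefficient. The sketch below does exactly that for a toy "simulator"; the toy model, kernel, and component count are assumptions for illustration and do not reproduce the paper's input-side reduction.

```python
# Output-PCA Gaussian-process emulator for a simulator with a spatial-field
# output: reduce outputs with an SVD, emulate each coefficient with a GP.
import numpy as np

rng = np.random.default_rng(4)

def simulator(theta, x):
    # Toy stand-in for an expensive solver: a parameterised 1D field
    return np.sin(3 * theta[0] * x) * np.exp(-theta[1] * x)

x = np.linspace(0, 1, 200)                     # output grid
thetas = rng.uniform(0.2, 1.0, size=(40, 2))   # training design (inputs)
Y = np.array([simulator(t, x) for t in thetas])

Ym = Y.mean(axis=0)                            # output reduction via SVD/PCA
U, S, Vt = np.linalg.svd(Y - Ym, full_matrices=False)
k = 5
coeffs = (Y - Ym) @ Vt[:k].T                   # training coefficients (40, k)

def rbf(A, B, ell=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

K = rbf(thetas, thetas) + 1e-8 * np.eye(len(thetas))
alpha = np.linalg.solve(K, coeffs)             # one GP per component

def emulate(theta_new):
    ks = rbf(theta_new[None, :], thetas)       # cross-covariances (1, 40)
    return Ym + (ks @ alpha) @ Vt[:k]          # back to the full field

t_test = np.array([0.5, 0.7])
err = np.abs(emulate(t_test)[0] - simulator(t_test, x)).max()
print(f"max emulation error at a test point: {err:.3f}")
```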
Addition of simultaneous heat and solute transport and variable fluid viscosity to SEAWAT
Thorne, D.; Langevin, C.D.; Sukop, M.C.
2006-01-01
SEAWAT is a finite-difference computer code designed to simulate coupled variable-density ground water flow and solute transport. This paper describes a new version of SEAWAT that adds the ability to simultaneously model energy and solute transport, which is necessary, for example, for simulating the transport of heat and salinity in coastal aquifers. This work extends the equation of state for fluid density to vary as a function of temperature and/or solute concentration. The program has also been modified to represent the effects of variable fluid viscosity as a function of temperature and/or concentration. The viscosity mechanism is verified against an analytical solution, and a test of temperature-dependent viscosity is provided. Finally, the classic Henry-Hilleke problem is solved with the new code. © 2006 Elsevier Ltd. All rights reserved.
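The kind of coupling described, density and viscosity as functions of temperature and concentration, can be written down compactly. In the sketch below the density slopes are rough, commonly quoted linearisations and the viscosity-temperature correlation is a standard empirical formula for water; treat all coefficients as illustrative rather than as the values used in the SEAWAT code.

```python
# Illustrative equations of state coupling density and viscosity to
# temperature and solute concentration.
def density(conc, temp, rho0=1000.0, t0=25.0):
    """Fluid density (kg/m^3) vs. salinity (kg/m^3) and temperature (C)."""
    return rho0 + 0.7 * conc - 0.3 * (temp - t0)   # rough linear slopes

def viscosity(temp):
    """Dynamic viscosity of water (Pa*s) vs. temperature (C)."""
    return 2.394e-5 * 10.0 ** (248.37 / (temp + 133.15))

print(density(35.0, 10.0))                # cool, saline water is densest
print(viscosity(10.0), viscosity(30.0))   # viscosity falls as water warms
```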
Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines
Tan, Yunhao; Hua, Jing; Qin, Hong
2009-01-01
In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines, which can simultaneously and accurately represent the geometric, material, and other properties of the object. With the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior, because they unify the geometric and material properties in the simulation. The visualization can be computed directly from the object's geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation, without interpolation or resampling. We have applied the framework to the biomechanical simulation of brain deformations, such as brain shift during surgery and brain injury under blunt impact. We have compared our simulation results with the ground truth obtained through intra-operative magnetic resonance imaging and with real biomechanical experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636
TSaT-MUSIC: a novel algorithm for rapid and accurate ultrasonic 3D localization
NASA Astrophysics Data System (ADS)
Mizutani, Kyohei; Ito, Toshio; Sugimoto, Masanori; Hashizume, Hiromichi
2011-12-01
We describe a fast and accurate indoor localization technique using the multiple signal classification (MUSIC) algorithm. The MUSIC algorithm is known as a high-resolution method for estimating directions of arrival (DOAs) or propagation delays. A critical problem in using the MUSIC algorithm for localization is its computational complexity. Therefore, we devised a novel algorithm called Time Space additional Temporal-MUSIC, which can rapidly and simultaneously identify DOAs and delays of multicarrier ultrasonic waves from transmitters. Computer simulations have proved that the computation time of the proposed algorithm is almost constant in spite of increasing numbers of incoming waves and is faster than that of existing methods based on the MUSIC algorithm. The robustness of the proposed algorithm is discussed through simulations. Experiments in real environments showed that the standard deviation of position estimations in 3D space is less than 10 mm, which is satisfactory for indoor localization.
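For readers unfamiliar with MUSIC, the essential computation is small: form the sample covariance of the array snapshots, split off the noise subspace, and scan a steering-vector grid for pseudospectrum peaks. The sketch below does this for two plane waves at a uniform linear array; the geometry, SNR, and grid are illustrative, and the TSaT extension of the paper is not reproduced.

```python
# Minimal MUSIC direction-of-arrival estimation at a uniform linear array.
import numpy as np

rng = np.random.default_rng(5)
m, d, snapshots = 8, 0.5, 400              # sensors, spacing (wavelengths)
true_doas = np.deg2rad([-20.0, 35.0])

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(m) * np.sin(theta))

A = np.stack([steering(t) for t in true_doas], axis=1)
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots))
X = A @ S + 0.1 * noise

R = X @ X.conj().T / snapshots             # sample covariance
w, V = np.linalg.eigh(R)                   # ascending eigenvalues
En = V[:, :-2]                             # noise subspace (all but 2 largest)

grid = np.deg2rad(np.linspace(-90, 90, 1801))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
              for t in grid])              # MUSIC pseudospectrum

local_max = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
top2 = local_max[np.argsort(p[local_max])[-2:]]
print(np.sort(np.rad2deg(grid[top2])))     # close to [-20, 35]
```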
Modeling and Simulation of A Microchannel Cooling System for Vitrification of Cells and Tissues.
Wang, Y; Zhou, X M; Jiang, C J; Yu, Y T
The microchannel heat exchange system has several advantages and can be used to enhance heat transfer for vitrification. To evaluate the microchannel cooling method and to analyze the effects of key parameters such as channel structure, flow rate and sample size, a computational fluid dynamics model is applied to study the two-phase flow in microchannels and the related heat transfer process. The fluid-solid coupling problem is solved with a whole-field solution method (i.e., the flow profile in the channels and the temperature distribution in the system are simulated simultaneously). Simulation indicates that a cooling rate >10^4 °C/min is easily achievable using the microchannel method at a high flow rate for a broad range of sample sizes. Channel size and the material used have a significant impact on cooling performance. Computational fluid dynamics is useful for optimizing the design and operation of the microchannel system.
Software Framework for Advanced Power Plant Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
John Widmann; Sorin Munteanu; Aseem Jain
2010-08-01
This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.
Coupled multi-disciplinary simulation of composite engine structures in propulsion environment
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Singhal, Surendra N.
1992-01-01
A computational simulation procedure is described for the coupled response of multi-layered multi-material composite engine structural components which are subjected to simultaneous multi-disciplinary thermal, structural, vibration, and acoustic loadings, including the effect of hostile environments. The simulation is based on a three-dimensional finite element analysis technique in conjunction with structural mechanics codes and with acoustic analysis methods. The composite material behavior is assessed at the various composite scales, i.e., the laminate/ply/constituents (fiber/matrix), via a nonlinear material characterization model. Sample cases exhibiting nonlinear geometrical, material, loading, and environmental behavior of aircraft engine fan blades are presented. Results for deformed shape, vibration frequency, mode shapes, and acoustic noise emitted from the fan blade are discussed for their coupled effect in hot and humid environments. Results such as acoustic noise for coupled composite-mechanics/heat transfer/structural/vibration/acoustic analyses demonstrate the effectiveness of coupled multi-disciplinary computational simulation and the various advantages of composite materials compared to metals.
Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.
Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve
2011-11-01
Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.
The Structure and Properties of Silica Glass Nanostructures using Novel Computational Systems
NASA Astrophysics Data System (ADS)
Doblack, Benjamin N.
The structure and properties of silica glass nanostructures are examined using computational methods in this work. Standard synthesis methods of silica and its associated material properties are first discussed in brief. A review of prior experiments on this amorphous material is also presented. Background and methodology for the simulation of mechanical tests on amorphous bulk silica and nanostructures are later presented. A new computational system for the accurate and fast simulation of silica glass is also presented, using an appropriate interatomic potential for this material within the open-source molecular dynamics computer program LAMMPS. This alternative computational method uses modern graphics processors, Nvidia CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model select materials, this enhancement allows the addition of accelerated molecular dynamics simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal of this project is to investigate the structure and size dependent mechanical properties of silica glass nanohelical structures under tensile MD conditions using the innovative computational system. Specifically, silica nanoribbons and nanosprings are evaluated which revealed unique size dependent elastic moduli when compared to the bulk material. For the nanoribbons, the tensile behavior differed widely between the models simulated, with distinct characteristic extended elastic regions. In the case of the nanosprings simulated, more clear trends are observed. In particular, larger nanospring wire cross-sectional radii (r) lead to larger Young's moduli, while larger helical diameters (2R) resulted in smaller Young's moduli. Structural transformations and theoretical models are also analyzed to identify possible factors which might affect the mechanical response of silica nanostructures under tension. The work presented outlines an innovative simulation methodology, and discusses how results can be validated against prior experimental and simulation findings. The ultimate goal is to develop new computational methods for the study of nanostructures which will make the field of materials science more accessible, cost effective and efficient.
Development and validation of real-time simulation of X-ray imaging with respiratory motion.
Vidal, Franck P; Villard, Pierre-Frédéric
2016-04-01
We present a framework that combines evolutionary optimisation, soft tissue modelling and ray tracing on the GPU to simultaneously compute the respiratory motion and X-ray imaging in real time. Our aim is to provide validated building blocks with high fidelity to closely match both the human physiology and the physics of X-rays. A CPU-based set of algorithms is presented to model organ behaviours during respiration. Soft tissue deformation is computed with an extension of the Chain Mail method. Rigid elements move according to kinematic laws. A GPU-based surface rendering method is proposed to compute the X-ray image using the Beer-Lambert law. It is provided as an open-source library. A quantitative validation study is provided to objectively assess the accuracy of both components: (i) the respiration against anatomical data, and (ii) the X-ray against the Beer-Lambert law and the results of Monte Carlo simulations. Our implementation can be used in various applications, such as interactive medical virtual environments for training percutaneous transhepatic cholangiography in interventional radiology, 2D/3D registration, computation of digitally reconstructed radiographs, and simulation of 4D sinograms to test tomography reconstruction tools. Copyright © 2015 Elsevier Ltd. All rights reserved.
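The X-ray component rests on the Beer-Lambert law, I = I0 exp(-sum(mu_i dx_i)) along each ray. A minimal voxel-grid sketch (with made-up attenuation coefficients, not the paper's anatomical data or GPU renderer):

```python
# Beer-Lambert attenuation of single rays through a voxelised phantom.
import numpy as np

mu = np.zeros((100, 100))          # linear attenuation map (1/cm), 1 mm voxels
mu[30:70, 30:70] = 0.2             # soft-tissue-like block
mu[45:55, 45:55] = 0.5             # denser inclusion

def transmit(mu, row, i0=1.0, dx=0.1):
    """Intensity after a horizontal ray crosses one row of the phantom."""
    return i0 * np.exp(-np.sum(mu[row, :] * dx))

for row in (10, 40, 50):           # rays through air, tissue, and inclusion
    print(row, transmit(mu, row))
```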
Organ radiation exposure with EOS: GATE simulations versus TLD measurements
NASA Astrophysics Data System (ADS)
Clavel, A. H.; Thevenard-Berger, P.; Verdun, F. R.; Létang, J. M.; Darbon, A.
2016-03-01
EOS® is an innovative X-ray imaging system allowing the acquisition of two simultaneous images of a patient in the standing position during the vertical scan of two orthogonal fan beams. This study aimed to compute organ radiation exposure of a patient in the particular geometry of this system. Two different positions of the patient in the machine were studied, corresponding to postero-anterior plus left lateral projections (PA-LLAT) and antero-posterior plus right lateral projections (AP-RLAT). To achieve this goal, a Monte-Carlo simulation was developed based on the GATE environment. To model the physical properties of the patient, a computational phantom was produced based on computed tomography scan data of an anthropomorphic phantom. The simulations provided several organ doses, which were compared to previously published dose results measured with Thermo Luminescent Detectors (TLD) in the same conditions and with the same phantom. The simulation results showed good agreement with measured doses at the TLD locations for both AP-RLAT and PA-LLAT projections. This study also showed that assessing the organ dose from only a sample of locations, rather than considering the whole organ, introduced significant bias, depending on the organs and projections.
1974-08-01
[OCR residue from a 1974 report: table-of-contents entries (Node Control Logic; Pitch Channel Frequency Response; Yaw Channel Frequency Response; Analog Computer Mechanization) and the header of Table I, Elements of the Sigma 5 Digital Computer System. Only the following sentence is recoverable:] The MIOP can transfer control signals to or from the CPU and can handle up to 32 I/O channels, each operating simultaneously, provided the overall data…
Coupled multi-disciplinary composites behavior simulation
NASA Technical Reports Server (NTRS)
Singhal, Surendra N.; Murthy, Pappu L. N.; Chamis, Christos C.
1993-01-01
The capabilities of the computer code CSTEM (Coupled Structural/Thermal/Electro-Magnetic Analysis) are discussed and demonstrated. CSTEM computationally simulates the coupled response of layered multi-material composite structures subjected to simultaneous thermal, structural, vibration, acoustic, and electromagnetic loads and includes the effect of aggressive environments. The composite material behavior and structural response is determined at its various inherent scales: constituents (fiber/matrix), ply, laminate, and structural component. The thermal and mechanical properties of the constituents are considered to be nonlinearly dependent on various parameters such as temperature and moisture. The acoustic and electromagnetic properties also include dependence on vibration and electromagnetic wave frequencies, respectively. The simulation is based on a three dimensional finite element analysis in conjunction with composite mechanics and with structural tailoring codes, and with acoustic and electromagnetic analysis methods. An aircraft engine composite fan blade is selected as a typical structural component to demonstrate the CSTEM capabilities. Results of various coupled multi-disciplinary heat transfer, structural, vibration, acoustic, and electromagnetic analyses for temperature distribution, stress and displacement response, deformed shape, vibration frequencies, mode shapes, acoustic noise, and electromagnetic reflection from the fan blade are discussed for their coupled effects in hot and humid environments. Collectively, these results demonstrate the effectiveness of the CSTEM code in capturing the coupled effects on the various responses of composite structures subjected to simultaneous multiple real-life loads.
Evaluation of Airframe Noise Reduction Concepts via Simulations Using a Lattice Boltzmann Approach
NASA Technical Reports Server (NTRS)
Fares, Ehab; Casalino, Damiano; Khorrami, Mehdi R.
2015-01-01
Unsteady computations are presented for a high-fidelity, 18% scale, semi-span Gulfstream aircraft model in landing configuration, i.e. flap deflected at 39 degrees and main landing gear deployed. The simulations employ the lattice Boltzmann solver PowerFLOW® to simultaneously capture the flow physics and acoustics in the near field. Sound propagation to the far field is obtained using a Ffowcs Williams and Hawkings acoustic analogy approach. In addition to the baseline geometry, which was presented previously, various noise reduction concepts for the flap and main landing gear are simulated. In particular, care is taken to fully resolve the complex geometrical details associated with these concepts in order to capture the resulting intricate local flow field, thus enabling accurate prediction of their acoustic behavior. To determine aeroacoustic performance, the far-field noise predicted with the concepts applied is compared to high-fidelity simulations of the untreated baseline configurations. To assess the accuracy of the computed results, the aerodynamic and aeroacoustic impact of the noise reduction concepts is evaluated numerically and compared to experimental results for the same model. The trends and effectiveness of the simulated noise reduction concepts compare well with measured values and demonstrate that the computational approach is capable of capturing the primary effects of the acoustic treatment on a full aircraft model.
Monte Carlo simulation of nonadiabatic expansion in cometary atmospheres - Halley
NASA Astrophysics Data System (ADS)
Hodges, R. R.
1990-02-01
Monte Carlo methods developed for the characterization of velocity-dependent collision processes and ballistic transport in planetary exospheres form the basis of the present computer simulation of icy comet atmospheres, which iteratively undertakes the simultaneous determination of velocity distributions for five neutral species (water, together with suprathermal OH, H2, O, and H) in a flow regime varying from the hydrodynamic to the ballistic. Experimental data from the neutral mass spectrometer carried by Giotto for its March 1986 encounter with Halley are compared with a model atmosphere.
Computer-Based Technologies in Dentistry: Types and Applications
Albuha Al-Mussawi, Raja’a M.; Farid, Farzaneh
2016-01-01
During dental education, dental students learn how to examine patients, make diagnosis, plan treatment and perform dental procedures perfectly and efficiently. However, progresses in computer-based technologies including virtual reality (VR) simulators, augmented reality (AR) and computer aided design/computer aided manufacturing (CAD/CAM) systems have resulted in new modalities for instruction and practice of dentistry. Virtual reality dental simulators enable repeated, objective and assessable practice in various controlled situations. Superimposition of three-dimensional (3D) virtual images on actual images in AR allows surgeons to simultaneously visualize the surgical site and superimpose informative 3D images of invisible regions on the surgical site to serve as a guide. The use of CAD/CAM systems for designing and manufacturing of dental appliances and prostheses has been well established. This article reviews computer-based technologies, their application in dentistry and their potentials and limitations in promoting dental education, training and practice. Practitioners will be able to choose from a broader spectrum of options in their field of practice by becoming familiar with new modalities of training and practice. PMID:28392819
Using parallel computing for the display and simulation of the space debris environment
NASA Astrophysics Data System (ADS)
Möckel, M.; Wiedemann, C.; Flegel, S.; Gelhaus, J.; Vörsmann, P.; Klinkrad, H.; Krag, H.
2011-07-01
Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.
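The key property the abstract relies on, that every object can be propagated independently, is what makes the problem data-parallel. The sketch below shows the same structure on the CPU: an analytical Kepler propagation vectorised over a whole population with NumPy, where each array element would map to one GPU thread in an OpenCL or CUDA port. The orbital elements are random illustrative values.

```python
# Vectorised analytical (Kepler) orbit propagation: all objects advance
# independently, so the whole population is updated in one fused sweep.
import numpy as np

rng = np.random.default_rng(6)
mu = 398600.4418                             # Earth GM, km^3/s^2
n_obj = 100_000

a = rng.uniform(6800.0, 8000.0, n_obj)       # semi-major axis (km)
e = rng.uniform(0.0, 0.02, n_obj)            # eccentricity
M0 = rng.uniform(0.0, 2 * np.pi, n_obj)      # mean anomaly at epoch

def propagate(dt):
    """Mean -> eccentric anomaly for all objects at once (Newton iteration)."""
    n_motion = np.sqrt(mu / a**3)            # mean motion (rad/s)
    M = (M0 + n_motion * dt) % (2 * np.pi)
    E = M.copy()
    for _ in range(8):                       # converges quickly for small e
        E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
    return E

E = propagate(3600.0)                        # every object, one hour later
r = a * (1 - e * np.cos(E))                  # orbital radius per object
print(r[:3])
```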
Phipps, Eric T.; D'Elia, Marta; Edwards, Harold C.; ...
2017-04-18
Quantifying simulation uncertainties is a critical component of rigorous predictive simulation. A key component of this is forward propagation of uncertainties in simulation input data to output quantities of interest. Typical approaches involve repeated sampling of the simulation over the uncertain input data, and can require numerous samples when accurately propagating uncertainties from large numbers of sources. Often simulation processes from sample to sample are similar and much of the data generated from each sample evaluation could be reused. We explore a new method for implementing sampling methods that simultaneously propagates groups of samples together in an embedded fashion, which we call embedded ensemble propagation. We show how this approach takes advantage of properties of modern computer architectures to improve performance by enabling reuse between samples, reducing memory bandwidth requirements, improving memory access patterns, improving opportunities for fine-grained parallelization, and reducing communication costs. We describe a software technique for implementing embedded ensemble propagation based on the use of C++ templates and describe its integration with various scientific computing libraries within Trilinos. We demonstrate improved performance, portability and scalability for the approach applied to the simulation of partial differential equations on a variety of CPU, GPU, and accelerator architectures, including up to 131,072 cores on a Cray XK7 (Titan).
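The embedding idea can be shown without Trilinos: give every field array a trailing sample axis, so one stencil sweep advances all ensemble members at once with contiguous memory access. A minimal sketch for an explicit 1D heat equation with an uncertain diffusivity per sample (sizes and parameters are illustrative; the real implementation uses C++ templates over an ensemble scalar type):

```python
# Embedded-ensemble style propagation: 32 samples advance together through
# one fused stencil update instead of 32 separate simulations.
import numpy as np

rng = np.random.default_rng(7)
nx, s = 1000, 32
kappa = rng.uniform(0.5, 1.5, s)            # uncertain diffusivity per sample
dx = 1.0 / nx
dt = 0.2 * dx**2 / kappa.max()              # stable for the stiffest sample

u = np.zeros((nx, s))                       # trailing sample axis
u[nx // 2, :] = 1.0 / dx                    # same initial spike everywhere

for _ in range(500):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u += dt * kappa[None, :] * lap          # one fused update, all samples

print(u[nx // 2, :5])                       # sample-to-sample spread
```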
NASA Astrophysics Data System (ADS)
Brandon, Simon; Derby, Jeffrey J.; Atherton, L. Jeffrey; Roberts, David H.; Vital, Russel L.
1993-09-01
A novel process modification, the simultaneous growth of three cylindrical Cr:LiCaAlF6 (Cr:LiCAF) crystals from a common seed in a vertical Bridgman furnace of rectangular cross section, is assessed using computational modeling. The analysis employs the FIDAP finite-element package and accounts for three-dimensional, steady-state, conductive heat transfer throughout the system. The induction heating system is rigorously simulated via solution of Maxwell's equations. The implementation of realistic thermal boundary conditions and furnace details is shown to be important. Furnace design features are assessed through calculations, and simulations indicate expected growth conditions to be favorable. In addition, the validity of using ampoules containing "dummy" charges for experimental furnace characterization measurements is examined through test computations.
Solar Ellerman Bombs in 1D Radiative Hydrodynamics
NASA Astrophysics Data System (ADS)
Reid, A.; Mathioudakis, M.; Kowalski, A.; Doyle, J. G.; Allred, J. C.
2017-02-01
Recent observations from the Interface Region Imaging Spectrograph appear to show impulsive brightenings in high-temperature lines which, when combined with simultaneous ground-based observations in Hα, appear co-spatial with Ellerman Bombs (EBs). We use the RADYN one-dimensional radiative transfer code in an attempt to reproduce the observed line profiles and simulate the atmospheric conditions of these events. Combined with the MULTI/RH line synthesis codes, we compute the Hα, Ca II 8542 Å, and Mg II h and k lines for these simulated events and compare them to previous observations. Our findings hint that the presence of superheated regions in the photosphere (>10,000 K) is not a plausible explanation for the production of EB signatures. While we are able to recreate EB-like line profiles in Hα, Ca II 8542 Å, and Mg II h and k, we cannot achieve agreement with all of these simultaneously.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, John; Jankovsky, Zachary; Metzroth, Kyle G
2018-04-04
The purpose of the ADAPT code is to generate Dynamic Event Trees (DETs) using a user-specified set of simulators. ADAPT can utilize any simulation tool which meets a minimal set of requirements. ADAPT is based on the concept of DETs, which use explicit modeling of the deterministic dynamic processes that take place during a nuclear reactor plant system (or other complex system) evolution, along with stochastic modeling. When DETs are used to model various aspects of Probabilistic Risk Assessment (PRA), all accident progression scenarios starting from an initiating event are considered simultaneously. DET branching occurs at user-specified times and/or when an action is required by the system and/or the operator. These outcomes then decide how the dynamic system variables will evolve in time for each DET branch. Since two different outcomes at a DET branching may lead to completely different paths for system evolution, the next branching for these paths may occur not only at separate times, but can be based on different branching criteria. The computational infrastructure allows for flexibility in ADAPT to link with different system simulation codes, parallel processing of the scenarios under consideration, on-line scenario management (initiation as well as termination), analysis of results, and user-friendly graphical capabilities. The ADAPT system is designed for a distributed computing environment; the scheduler can track multiple concurrent branches simultaneously. The scheduler is modularized so that the DET branching strategy can be modified (e.g. biasing towards the worst-case scenario/event). Independent database systems store data from the simulation tasks and the DET structure so that the event tree can be constructed and analyzed later. ADAPT is provided with a user-friendly client which can easily sort through and display the results of an experiment, precluding the need for the user to manually inspect individual simulator runs.
Simultaneous Inference For The Mean Function Based on Dense Functional Data
Cao, Guanqun; Yang, Lijian; Todem, David
2012-01-01
A polynomial spline estimator is proposed for the mean function of dense functional data, together with a simultaneous confidence band which is asymptotically correct. In addition, the spline estimator and its accompanying confidence band enjoy oracle efficiency, in the sense that they are asymptotically the same as if all random trajectories were observed entirely and without errors. The confidence band is also extended to the difference of the mean functions of two populations of functional data. Simulation experiments provide strong evidence corroborating the asymptotic theory, and the computation is efficient. The confidence band procedure is illustrated by analyzing near infrared spectroscopy data. PMID:22665964
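A generic way to see what a simultaneous band does (as opposed to pointwise intervals) is to calibrate the band half-width against the supremum deviation. The sketch below fits a least-squares spline to the sample mean of dense functional data and calibrates the band by bootstrapping the sup-deviation; this bootstrap calibration is a stand-in for the paper's asymptotic construction, and all sizes are illustrative.

```python
# Spline mean estimate with a bootstrap-calibrated simultaneous band.
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(8)
n_curves, n_grid = 100, 200
t = np.linspace(0, 1, n_grid)
truth = np.sin(2 * np.pi * t)
curves = truth + 0.3 * rng.standard_normal((n_curves, n_grid))

knots = np.r_[[0, 0, 0, 0], np.linspace(0.1, 0.9, 9), [1, 1, 1, 1]]
fit = lambda y: make_lsq_spline(t, y, knots, k=3)(t)
mean_hat = fit(curves.mean(axis=0))

sups = []                                   # bootstrap the sup-deviation
for _ in range(500):
    idx = rng.integers(n_curves, size=n_curves)
    sups.append(np.max(np.abs(fit(curves[idx].mean(axis=0)) - mean_hat)))
half_width = np.quantile(sups, 0.95)        # simultaneous, not pointwise

covered = np.all(np.abs(mean_hat - truth) <= half_width)
print(f"band half-width {half_width:.3f}, truth covered: {covered}")
```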
Atmospheric cloud physics thermal systems analysis
NASA Technical Reports Server (NTRS)
1977-01-01
Engineering analyses performed on the Atmospheric Cloud Physics (ACPL) Science Simulator expansion chamber and associated thermal control/conditioning system are reported. Analyses were made to develop a verified thermal model and to perform parametric thermal investigations to evaluate systems performance characteristics. Thermal network representations of solid components and the complete fluid conditioning system were solved simultaneously using the Systems Improved Numerical Differencing Analyzer (SINDA) computer program.
Design of a Multi-Touch Tabletop for Simulation-Based Training
2014-06-01
receive, for example using point and click mouse-based computer interactions to specify the routes that vehicles take as part of a convoy...learning, coordination and support for planning. We first provide background in tabletop interaction in general and survey earlier efforts to use...tremendous progress over the past five years. Touch detection technologies now enable multiple users to interact simultaneously on large areas with
Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.
Salis, Howard; Kaznessis, Yiannis
2005-02-01
The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
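For context, the discrete part that the hybrid method retains is the exact stochastic simulation algorithm (SSA). A minimal sketch for a birth-death process is below; the rate constants are illustrative, and the partitioning and Langevin treatment of fast reactions from the paper are not shown.

```python
# Exact SSA (Gillespie) for a birth-death process: 0 -> X at rate k_prod,
# X -> 0 at rate k_deg * x.
import numpy as np

rng = np.random.default_rng(9)
k_prod, k_deg = 10.0, 0.1
x, t, t_end = 0, 0.0, 100.0

while t < t_end:
    a = np.array([k_prod, k_deg * x])    # reaction propensities
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)       # exponential time to next reaction
    if rng.random() * a0 < a[0]:         # pick which reaction fires
        x += 1
    else:
        x -= 1

print(f"final copy number {x}; steady-state mean is k_prod/k_deg = 100")
```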
NASA Astrophysics Data System (ADS)
Spandan, Vamsi; Meschini, Valentina; Ostilla-Mónico, Rodolfo; Lohse, Detlef; Querzoli, Giorgio; de Tullio, Marco D.; Verzicco, Roberto
2017-11-01
In this paper we show and discuss how the deformation dynamics of closed liquid-liquid interfaces (for example drops and bubbles) can be replicated with the use of a phenomenological interaction potential model. This new approach to simulating liquid-liquid interfaces is based on the fundamental principle of minimum potential energy, where the total potential energy depends on the extent of deformation of a spring network distributed on the surface of the immersed drop or bubble. Simulating liquid-liquid interfaces using this model requires computing ad-hoc elastic constants, which is done through a reverse-engineering approach. The results from our simulations agree very well with previous studies on the deformation of drops in standard flow configurations such as a deforming drop in a shear flow or cross flow. The interaction potential model is highly versatile, computationally efficient and can be easily incorporated into generic single-phase fluid solvers to also simulate complex fluid-structure interaction problems. This is shown by simulating flow in the left ventricle of the heart with mechanical and natural mitral valves, where the imposed flow, the motion of the ventricle and the valves dynamically govern the behaviour of each other. Results from these simulations are compared with ad-hoc in-house experimental measurements. Finally, we present a simple and easy-to-implement parallelisation scheme, as high performance computing is unavoidable when studying large-scale problems involving several thousands of simultaneously deforming bodies in highly turbulent flows.
NASA Astrophysics Data System (ADS)
Dávila, H. Olaya; Sevilla, A. C.; Castro, H. F.; Martínez, S. A.
2016-07-01
Using the Geant4-based simulation framework SciFW, a detailed simulation was performed for a detector array in the hybrid tomography prototype for small animals called ClearPET/XPAD, which was built at the Centre de Physique des Particules de Marseille. The detector system consists of an array of phoswich scintillation detectors, LSO (cerium-doped lutetium oxyorthosilicate, Lu2SiO5:Ce) and LuYAP (cerium-doped lutetium-yttrium orthoaluminate, Lu0.7Y0.3AlO3:Ce), for Positron Emission Tomography (PET), and the hybrid pixel detector XPAD for Computed Tomography (CT). Simultaneous acquisitions of the deposited energy and the corresponding time and position for each recorded event were analyzed independently for both detectors, including the interference between the PET and CT detection modules. Using a phantom, information was obtained about the amount of radiation reaching each phoswich crystal and the XPAD detector, in order to study the effect of radiation attenuation and of the positioning of the 22Na radioactive source. The proposed simulation will improve the arrangement of the detector rings, and the interference values will be taken into account in new versions of the detectors.
NASA Astrophysics Data System (ADS)
Spurzem, R.; Berczik, P.; Zhong, S.; Nitadori, K.; Hamada, T.; Berentzen, I.; Veles, A.
2012-07-01
Astrophysical computer simulations of dense star clusters in galactic nuclei with supermassive black holes are presented, using new cost-efficient supercomputers in China accelerated by graphics processing units (GPUs). We use large high-accuracy direct N-body simulations with a Hermite scheme and block time steps, parallelised across a large number of nodes on the large scale and across many GPU thread processors on each node on the small scale. A sustained performance of more than 350 Tflop/s is reached for a science run using 1600 Fermi C2050 GPUs simultaneously; a detailed performance model is presented, together with studies for the largest GPU clusters in China, with up to Petaflop/s performance and 7000 Fermi GPU cards. In our case study we look at two supermassive black holes with equal and unequal masses embedded in a dense stellar cluster in a galactic nucleus. The hardening processes due to interactions between black holes and stars, the effects of rotation in the stellar system, and relativistic forces between the black holes are simultaneously taken into account. The simulation stops at the complete relativistic merger of the black holes.
Elenchezhiyan, M; Prakash, J
2015-09-01
In this work, state estimation schemes for non-linear hybrid dynamic systems subjected to stochastic state disturbances and random errors in measurements using interacting multiple-model (IMM) algorithms are formulated. In order to compute both the discrete modes and the continuous state estimates of a hybrid dynamic system, either an IMM extended Kalman filter (IMM-EKF) or an IMM-based derivative-free Kalman filter is proposed in this study. The efficacy of the proposed IMM-based state estimation schemes is demonstrated by conducting Monte-Carlo simulation studies on a two-tank hybrid system and a switched non-isothermal continuous stirred tank reactor system. Extensive simulation studies reveal that the proposed IMM-based state estimation schemes are able to generate fairly accurate continuous state estimates and discrete modes. In the presence and absence of sensor bias, the simulation studies reveal that the proposed IMM unscented Kalman filter (IMM-UKF) based simultaneous state and parameter estimation scheme outperforms the multiple-model UKF (MM-UKF) based simultaneous state and parameter estimation scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Handford, Matthew L.; Srinivasan, Manoj
2016-02-01
Robotic lower limb prostheses can improve the quality of life for amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations, which track experimentally determined non-amputee walking kinematics, here we explicitly model the human-prosthesis interaction to predict the user's walking kinematics. We obtain simulations of an amputee using an ankle-foot prosthesis by simultaneously optimizing human movements and prosthesis actuation, minimizing a weighted sum of human metabolic and prosthesis costs. The resulting Pareto-optimal solutions predict that increasing the prosthesis energy cost, decreasing the prosthesis mass, and allowing asymmetric gaits all decrease the human metabolic rate for a given speed and alter the human kinematics. The metabolic rate increases monotonically with speed. Remarkably, by performing an analogous optimization for a non-amputee, we predict that an amputee walking with an appropriately optimized robotic prosthesis can have a lower metabolic cost than a non-amputee, even when the non-amputee's ankle torques are assumed to be cost-free.
Energy conserving, linear scaling Born-Oppenheimer molecular dynamics.
Cawkwell, M J; Niklasson, Anders M N
2012-10-07
Born-Oppenheimer molecular dynamics simulations with long-term conservation of the total energy and a computational cost that scales linearly with system size have been obtained simultaneously. Linear scaling with a low pre-factor is achieved using density matrix purification with sparse matrix algebra and a numerical threshold on matrix elements. The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] yields microcanonical trajectories with the approximate forces obtained from the linear scaling method that exhibit no systematic drift over hundreds of picoseconds and which are indistinguishable from trajectories computed using exact forces.
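The density matrix purification at the heart of the linear-scaling step can be sketched with the McWeeny iteration; this dense NumPy toy stands in for the sparse, thresholded version described in the abstract.

    import numpy as np

    def mcweeny_purify(P, tol=1e-10, max_iter=100):
        # Iterate P <- 3 P^2 - 2 P^3 until idempotent. P must have
        # eigenvalues in [0, 1] (e.g. a suitably scaled and shifted
        # Hamiltonian). The linear-scaling production version uses
        # sparse matrix algebra with numerical thresholds on elements.
        for _ in range(max_iter):
            P2 = P @ P
            P_new = 3.0 * P2 - 2.0 * P2 @ P
            if np.linalg.norm(P_new - P) < tol:
                return P_new
            P = P_new
        return P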
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hori, T.; Hirahara, K.; Hashimoto, C.; Hori, M.
2016-12-01
Estimation of coseismic/postseismic slip from postseismic deformation observation data is an important topic in the field of geodetic inversion. Estimation methods for this purpose are expected to be improved by introducing numerical simulation tools (e.g. the finite element (FE) method) for viscoelastic deformation, in which the computation model is of high fidelity to the available high-resolution crustal data. The authors have proposed a large-scale simulation method using such FE high-fidelity models (HFM), assuming the use of a large-scale computing environment such as the K computer in Japan (Ichimura et al. 2016). On the other hand, the values of viscosity in the heterogeneous viscoelastic structure of the high-fidelity model are not known a priori. In this study, we developed an adjoint-based optimization method incorporating an HFM, in which the fault slip and the asthenosphere viscosity are estimated simultaneously. We carried out numerical experiments using synthetic crustal deformation data. We constructed an HFM in a domain of 2048x1536x850 km, which includes the Tohoku region in northeast Japan, based on Ichimura et al. (2013). We used the model geometry data sets of JTOPO30 (2003), Koketsu et al. (2008) and the CAMP standard model (Hashimoto et al. 2004). The geometry of the crustal structures in the HFM is resolved at 1 km, resulting in 36 billion degrees of freedom. Synthetic crustal deformation data due to prescribed coseismic slip and afterslip at the locations of GEONET and GPS/A observation points and S-net are used. The target inverse analysis is formulated as the minimization of the L2 norm of the difference between the FE simulation results and the observation data with respect to viscosity and fault slip, combining a quasi-Newton algorithm with the adjoint method. This combination decreases the number of forward analyses required in the optimization. As a result, we are now able to finish the estimation in less than 17 hours using 2560 nodes of the K computer. The target inverse analysis is thus completed in a realistic time thanks to the combination of the fast solver and the adjoint method. In the future, we would like to apply the method to actual data.
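The optimization loop the abstract describes (quasi-Newton iteration driven by adjoint gradients) has the following generic shape; here a random linear operator stands in for the FE viscoelastic solve, so this Python toy illustrates the mathematical structure only, not the authors' solver.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    G = rng.standard_normal((200, 20))    # toy linear forward model standing in
    theta_true = rng.standard_normal(20)  # for the 36-billion-DOF FE solve
    obs = G @ theta_true + 0.01 * rng.standard_normal(200)

    def misfit_and_grad(theta):
        # L2 misfit; in the real problem the gradient would come from one
        # adjoint solve rather than an explicit matrix transpose.
        r = G @ theta - obs
        return 0.5 * r @ r, G.T @ r

    res = minimize(misfit_and_grad, np.zeros(20), jac=True, method="L-BFGS-B")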
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bootsma, G. J., E-mail: Gregory.Bootsma@rmp.uhn.on.ca; Verhaegen, F.; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3G 1A4
2015-01-15
Purpose: X-ray scatter is a significant impediment to image quality improvements in cone-beam CT (CBCT). The authors present and demonstrate a novel scatter correction algorithm using a scatter estimation method that simultaneously combines multiple Monte Carlo (MC) CBCT simulations through the use of a concurrently evaluated fitting function, referred to as concurrent MC fitting (CMCF). Methods: The CMCF method uses concurrently run MC CBCT scatter projection simulations at a subset of the projection angles in the projection set, P, to be corrected. The scattered photons reaching the detector in each MC simulation are simultaneously aggregated by an algorithm which computes the scatter detector response, S_MC. S_MC is fit to a function, S_F, and if the fit of S_F is within a specified goodness of fit (GOF), the simulations are terminated. The fit, S_F, is then used to interpolate the scatter distribution over all pixel locations for every projection angle in the set P. The CMCF algorithm was tested using a frequency-limited sum of sines and cosines as the fitting function on both simulated and measured data. The simulated data consisted of an anthropomorphic head and a pelvis phantom created from CT data, simulated with and without the use of a compensator. The measured data were pelvis scans of a phantom and of a patient taken on an Elekta Synergy platform. The simulated data were used to evaluate various GOF metrics as well as to determine a suitable fitness value. The simulated data were also used to quantitatively evaluate the image quality improvements provided by the CMCF method. A qualitative analysis was performed on the measured data by comparing the CMCF scatter-corrected reconstruction to the original uncorrected reconstruction, to a reconstruction corrected with a constant scatter estimate, and to a reconstruction created using a set of projections taken with a small cone angle. Results: Pearson's correlation, r, proved to be a suitable GOF metric, correlating strongly with the actual error of the scatter fit, S_F. Fitting the scatter distribution to a limited sum of sine and cosine functions using a low-pass-filtered fast Fourier transform provided a computationally efficient and accurate fit. The CMCF algorithm reduces the number of photon histories required by over four orders of magnitude. The simulated experiments showed that using a compensator reduced the computational time by a factor between 1.5 and 1.75. The scatter estimates for the simulated and measured data were computed in 35–93 s and 114–122 s, respectively, using 16 Intel Xeon cores (3.0 GHz). The CMCF scatter correction improved the contrast-to-noise ratio by 10%–50% and reduced the reconstruction error to under 3% for the simulated phantoms. Conclusions: The novel CMCF algorithm significantly reduces the computation time required to estimate the scatter distribution by reducing the statistical noise in the MC scatter estimate and by limiting the number of projection angles that must be simulated. Using the scatter estimate provided by the CMCF algorithm to correct both simulated and real projection data showed improved reconstruction image quality.
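The two ingredients of the fitting step, a band-limited Fourier fit and a Pearson-r goodness-of-fit test used as the termination criterion, might be sketched as follows in Python; the cutoff and threshold values are illustrative assumptions, not the paper's.

    import numpy as np
    from scipy.stats import pearsonr

    def lowpass_fit(scatter, keep=8):
        # Fit a noisy MC scatter projection to a frequency-limited sum of
        # sines and cosines by zeroing all but the lowest spatial frequencies.
        F = np.fft.rfft2(scatter)
        mask = np.zeros(F.shape)
        mask[:keep, :keep] = 1.0
        mask[-keep:, :keep] = 1.0   # negative frequencies along axis 0
        return np.fft.irfft2(F * mask, s=scatter.shape)

    def good_enough(scatter, fit, r_min=0.95):
        # Terminate the concurrent MC runs once the correlation between
        # the raw MC estimate and its smooth fit exceeds a threshold.
        r, _ = pearsonr(scatter.ravel(), fit.ravel())
        return r >= r_min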
Chitale, Rohan; Ghobrial, George M; Lobel, Darlene; Harrop, James
2013-10-01
The learning and development of technical skills are paramount for neurosurgical trainees. External influences and a need to maximize efficiency and proficiency have encouraged advancements in simulator-based learning models. The objective was to confirm the importance of establishing an educational curriculum for teaching minimally invasive techniques of pedicle screw placement using a computer-enhanced physical model of percutaneous pedicle screw placement with simultaneous didactic and technical components. A 2-hour educational curriculum was created to educate neurosurgical residents on the anatomy, pathophysiology, and technical aspects associated with image-guided pedicle screw placement. Pre- and post-didactic practical and written scores were analyzed and compared. Scores were calculated for each participant on the basis of the optimal pedicle screw starting point and trajectory for both fluoroscopy and computed tomographic navigation. Eight trainees participated in this module. Mean scores on the written didactic test improved from 78% to 100%. The technical component scores improved from 58.8 to 52.9 for fluoroscopic guidance and from 28.3 to 26.6 for computed tomography-navigated guidance (lower scores reflect smaller deviations from the optimal starting point and trajectory). Didactic and technical quantitative scores with a simulator-based educational curriculum improved objectively measured resident performance. A minimally invasive spine simulation model and curriculum may serve a valuable function in the education of neurosurgical residents and in outcomes for patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrier, C.; Holcman, D., E-mail: david.holcman@ens.fr; Mathematical Institute, Oxford OX2 6GG, Newton Institute
The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space in search of small binding targets, such as buffers or active sites. Bridging the small and large spatial scales is achieved by rare events representing Brownian particles finding small targets, characterized by long-time distributions. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory, coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets that trigger vesicular release.
Modeling plastic deformation of post-irradiated copper micro-pillars
NASA Astrophysics Data System (ADS)
Crosby, Tamer; Po, Giacomo; Ghoniem, Nasr M.
2014-12-01
We present here an application of a fundamentally new theoretical framework for the description of the simultaneous evolution of radiation damage and plasticity that can describe both in situ and ex situ deformation of structural materials [1]. The theory is based on the variational principle of maximum entropy production rate, with constraints on dislocation climb motion imposed by point defect fluxes resulting from irradiation. The developed theory is implemented in a new computational code that facilitates the simulation of irradiated and unirradiated materials alike in a consistent fashion [2]. Discrete Dislocation Dynamics (DDD) computer simulations are presented here for irradiated fcc metals that address the phenomenon of dislocation channel formation in post-irradiated copper. The focus of the simulations is on the role of micro-pillar boundaries and the statistics of dislocation pinning by stacking-fault tetrahedra (SFTs) in the onset of dislocation channel and incipient surface crack formation. The simulations show that the spatial heterogeneity in the distribution of SFTs naturally leads to localized plastic deformation and incipient surface fracture of micro-pillars.
NASA Technical Reports Server (NTRS)
Jansen, B. J., Jr.
1998-01-01
The features of the data acquisition and control systems of the NASA Langley Research Center's Jet Noise Laboratory are presented. The Jet Noise Laboratory is a facility that simulates realistic mixed-flow turbofan jet engine nozzle exhaust systems in simulated flight. The system is capable of acquiring data for a complete take-off assessment of noise and nozzle performance. This paper describes the development of an integrated system to control and measure the behavior of model jet nozzles featuring dual independent high-pressure combusting air streams in wind tunnel flow. The acquisition and control system is capable of simultaneous measurement of forces, moments, static and dynamic model pressures and temperatures, and jet noise. The design concepts for the coordination of the control computers and multiple data acquisition computers and instruments are discussed. The control system design and implementation are explained, describing the features, equipment, and experience of using a primarily personal-computer-based system. Areas for future development are examined.
Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong
2016-01-01
Most previous quantum computations make use of only one degree of freedom (DoF) of photons. An experimental system may possess various DoFs simultaneously. In this paper, using the weak cross-Kerr nonlinearity, we investigate parallel quantum computation based on photonic systems with two DoFs. We construct nearly deterministic controlled-NOT (CNOT) gates operating on the polarization and spatial DoFs of two-photon or one-photon systems. These CNOT gates show that two photonic DoFs can, in theory, be encoded as independent qubits without an auxiliary DoF. Only coherent states are required. Thus one half of the quantum simulation resources may be saved in quantum applications if more complicated circuits are involved. Hence, one may trade off implementation complexity against simulation resources by using different photonic systems. These CNOT gates are also used to implement various applications, including quantum teleportation and quantum superdense coding. PMID:27424767
Exploiting current-generation graphics hardware for synthetic-scene generation
NASA Astrophysics Data System (ADS)
Tanner, Michael A.; Keen, Wayne A.
2010-04-01
Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide multiple computational cores (240 shader cores on current NVIDIA GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. Taking advantage of this potential requires algorithm implementations structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) are presented, including various language trade-offs between conventional shader programming, the Compute Unified Device Architecture (CUDA) and the Open Computing Language (OpenCL), performance trades, and possible pathways for future tool development.
NASA Technical Reports Server (NTRS)
Piziali, R. A.; Trenka, A. R.
1974-01-01
The results of a study to investigate the theoretical potential of a jet-flap control system for reducing the vertical and horizontal non-cancelling helicopter rotor blade root shears are presented. A computer simulation describing the jet-flap control rotor system was developed to examine the reduction of each harmonic of the transmitted shears as a function of various rotor and jet parameters, rotor operating conditions and rotor configurations. The computer simulation of the airloads included the influences of nonuniform inflow and blade elastic motions. (No hub motions were allowed.) The rotor trim and total rotor power (including jet compressor power) were also determined. It was found that all harmonics of the transmitted horizontal and vertical shears could be suppressed simultaneously using a single jet control.
Optical memories in digital computing
NASA Technical Reports Server (NTRS)
Alford, C. O.; Gaylord, T. K.
1979-01-01
High-capacity optical memories with relatively high data-transfer rates and multiport simultaneous-access capability may serve as the basis for new computer architectures. Several computer structures that might profitably use such memories are: (a) a simultaneous record-access system, (b) a simultaneously-shared-memory computer system, and (c) a parallel digital processing structure.
Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messer, Bronson; Sewell, Christopher; Heitmann, Katrin
2015-01-01
Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.
NASA Astrophysics Data System (ADS)
Bootsma, Gregory J.
X-ray scatter in cone-beam computed tomography (CBCT) is known to reduce image quality by introducing image artifacts, reducing contrast, and limiting computed tomography (CT) number accuracy. The extent of the effect of x-ray scatter on CBCT image quality is determined by the shape and magnitude of the scatter distribution in the projections. A method to allay the effects of scatter is imperative if CBCT is to be applied to a wider domain of clinical problems. The work contained herein proposes such a method. A characterization of the scatter distribution is carried out through the use of a validated Monte Carlo (MC) model. The effects of imaging parameters and compensators on the scatter distribution are investigated. The spectral frequency components of the scatter distribution in CBCT projection sets are analyzed using Fourier analysis and found to reside predominantly in the low-frequency domain. The exact frequency extents of the scatter distribution are explored for different imaging configurations and patient geometries. Based on the Fourier analysis, it is hypothesized that the scatter distribution can be represented by a finite sum of sine and cosine functions. Fitting the MC scatter distribution estimates reduces the MC computation time by diminishing the number of photon tracks required by over three orders of magnitude. The fitting method is incorporated into a novel scatter correction method using an algorithm that simultaneously combines multiple MC scatter simulations. Running concurrent MC simulations while simultaneously fitting the results allows the physical accuracy and flexibility of MC methods to be maintained while enhancing the overall efficiency. CBCT projection set scatter estimates using the algorithm are computed on the order of 1-2 minutes instead of hours or days. The resulting scatter-corrected reconstructions show a reduction in artifacts and an improvement in tissue contrast and voxel value accuracy.
NASA Astrophysics Data System (ADS)
Cattaneo, A.; Blaizot, J.; Devriendt, J. E. G.; Mamon, G. A.; Tollet, E.; Dekel, A.; Guiderdoni, B.; Kucukbas, M.; Thob, A. C. R.
2017-10-01
GalICS 2.0 is a new semi-analytic code to model the formation and evolution of galaxies in a cosmological context. N-body simulations based on a Planck cosmology are used to construct halo merger trees, track subhaloes, compute spins and measure concentrations. The accretion of gas on to galaxies and the morphological evolution of galaxies are modelled with prescriptions derived from hydrodynamic simulations. Star formation and stellar feedback are described with phenomenological models (as in other semi-analytic codes). GalICS 2.0 computes rotation speeds from the gravitational potential of the dark matter, the disc and the central bulge. As the rotation speed depends not only on the virial velocity but also on the ratio of baryons to dark matter within a galaxy, our calculation predicts a Tully-Fisher relation different from that of models in which vrot ∝ vvir. This is why GalICS 2.0 is able to reproduce the galaxy stellar mass function and the Tully-Fisher relation simultaneously. Our results are also in agreement with halo masses from weak lensing and satellite kinematics, gas fractions, the relation between star formation rate (SFR) and stellar mass, the evolution of the cosmic SFR density, bulge-to-disc ratios, disc sizes and the Faber-Jackson relation.
Presentation Extensions of the SOAP
NASA Technical Reports Server (NTRS)
Carnright, Robert; Stodden, David; Coggi, John
2009-01-01
A set of extensions of the Satellite Orbit Analysis Program (SOAP) enables simultaneous and/or sequential presentation of information from multiple sources. SOAP is used in the aerospace community as a means of collaborative visualization and analysis of data on planned spacecraft missions. The following definitions of terms also describe the display modalities of SOAP as now extended: (a) "View" signifies an animated three-dimensional (3D) scene, a two-dimensional still image, a plot of numerical data, or any other visible display derived from a computational simulation or other data source; (b) "Viewport" signifies a rectangular portion of a computer-display window containing a view; (c) "Palette" signifies a collection of one or more viewports configured for simultaneous (split-screen) display in the same window; (d) "Slide" signifies a palette with a beginning and ending time and an animation time step; and (e) "Presentation" signifies a prescribed sequence of slides. For example, multiple 3D views from different locations can be crafted for simultaneous display and combined with numerical plots and other representations of data for both qualitative and quantitative analysis. The resulting sets of views can be temporally sequenced to convey visual impressions of a sequence of events for a planned mission.
NASA Astrophysics Data System (ADS)
Bai, Chao-ying; He, Lei-yu; Li, Xing-wang; Sun, Jia-yu
2018-05-01
To conduct forward modeling and simultaneous inversion in a complex geological model, including an irregular topography (or an irregular reflector or a velocity anomaly), in this paper we combine our previous multiphase arrival-tracking method in triangular (2D) or tetrahedral (3D) cell models (referred to as the triangular shortest-path method, TSPM) with a linearized inversion solver (a damped minimum-norm, constrained least-squares problem solved using the conjugate gradient method, DMNCLS-CG) to formulate a simultaneous travel-time inversion method that updates both the velocity field and the reflector geometry using multiphase arrival times. In the triangular/tetrahedral cells, we derive the partial derivative of the velocity variation with respect to the depth change of the reflector. Numerical simulation results show that the computational accuracy can be tuned to high precision in forward modeling, and that irregular velocity anomalies and reflector geometry can be accurately recovered in the simultaneous inversion, because triangular/tetrahedral cells can easily fit an irregular topography or subsurface interface.
Komarov, Ivan; D'Souza, Roshan M
2012-01-01
The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques for simulating reaction kinetics in situations where the concentration of the reactants is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required to execute a single run and the need for multiple runs in parameter-sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of the GSSA are prohibitively expensive for computing parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration using graphics processing units (GPUs). We parallelize the execution of a single realization across the threads in a warp (fine-grained parallelism); a warp is a collection of threads that are executed synchronously on a single multiprocessor. Warps executing in parallel on different multiprocessors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8x-120x performance gain over various state-of-the-art serial algorithms when simulating different types of models.
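For reference, the exact direct-method GSSA that such GPU variants accelerate fits in a few lines of Python; this serial sketch is the textbook baseline algorithm, not the warp-parallel implementation described above.

    import numpy as np

    def gillespie_direct(x0, stoich, rates, propensity, t_end, seed=0):
        # x0: initial copy numbers (n_species,); stoich: state-change
        # vectors (n_reactions, n_species); propensity(x, rates) -> a_j >= 0.
        rng = np.random.default_rng(seed)
        t, x = 0.0, x0.copy()
        traj = [(t, x.copy())]
        while t < t_end:
            a = propensity(x, rates)
            a0 = a.sum()
            if a0 == 0.0:
                break                         # no reaction can fire
            t += rng.exponential(1.0 / a0)    # time to next reaction
            j = rng.choice(len(a), p=a / a0)  # which reaction fires
            x += stoich[j]
            traj.append((t, x.copy()))
        return traj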
NASA Astrophysics Data System (ADS)
Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia
The Time Warp algorithm [3] offers a run-time recovery mechanism that deals with causality errors. These run-time recovery mechanisms consist of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute GVT, which is used in discrete-event simulation to reclaim memory, commit output, detect termination, and handle errors. However, the computation of GVT requires dealing with the transient message problem and the simultaneous reporting problem. These problems can be dealt with in an efficient manner by Samadi's algorithm [8], which works well in the presence of causality errors. However, the performance of both the Time Warp and Samadi's algorithms depends on the latency involved in GVT computation. Both algorithms give poor latency for large simulation systems, especially in the presence of causality errors. To improve the latency and reduce processor idle time, we implement tree and butterfly barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as tree and butterfly barriers with the optimistic algorithm not only minimizes the GVT latency but also minimizes the processor idle time.
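The role of the tree barrier can be seen in a sequential emulation of the min-reduction that produces the GVT estimate; this Python toy collapses each processor's local minimum (its local virtual time together with its in-transit message timestamps) up a binary tree, taking ceil(log2 P) combining rounds, which is the latency behavior the barrier structure determines. All names are illustrative.

    def gvt_tree_reduce(local_min_times):
        # local_min_times[p]: min of processor p's local virtual time and
        # the timestamps of its unacknowledged (transient) messages.
        vals = list(local_min_times)
        rounds = 0
        while len(vals) > 1:   # pairwise combine, one tree level per round
            vals = [min(vals[i:i + 2]) for i in range(0, len(vals), 2)]
            rounds += 1
        return vals[0], rounds # GVT estimate and number of barrier rounds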
Modeling of anomalous electron mobility in Hall thrusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koo, Justin W.; Boyd, Iain D.
Accurate modeling of the anomalous electron mobility is absolutely critical for successful simulation of Hall thrusters. In this work, existing computational models for the anomalous electron mobility are used to simulate the UM/AFRL P5 Hall thruster (a 5 kW laboratory model) in a two-dimensional axisymmetric hybrid particle-in-cell Monte Carlo collision code. Comparison to experimental results indicates that, while these computational models can be tuned to reproduce the correct thrust or discharge current, it is very difficult to match all integrated performance parameters (thrust, power, discharge current, etc.) simultaneously. Furthermore, multiple configurations of these computational models can produce reasonable integrated performance parameters. A semiempirical electron mobility profile is constructed from a combination of internal experimental data and modeling assumptions. This semiempirical electron mobility profile is used in the code and results in more accurate simulation of both the integrated performance parameters and the mean potential profile of the thruster. Results indicate that the anomalous electron mobility, while absolutely necessary in the near-field region, provides a substantially smaller contribution to the total electron mobility in the high Hall current region near the thruster exit plane.
Sampling free energy surfaces as slices by combining umbrella sampling and metadynamics.
Awasthi, Shalini; Kapil, Venkat; Nair, Nisanth N
2016-06-15
Metadynamics (MTD) is a very powerful technique for sampling high-dimensional free energy landscapes, and due to its self-guiding property, the method has been successful in studying complex reactions and conformational changes. MTD sampling is based on filling the free energy basins with biasing potentials, and thus for cases with flat, broad, and unbound free energy wells, the computational time required to sample them becomes very large. To alleviate this problem, we combine the standard Umbrella Sampling (US) technique with MTD to sample orthogonal collective variables (CVs) simultaneously. Within this scheme, we construct the equilibrium distribution of CVs from biased distributions obtained from independent MTD simulations with umbrella potentials. Reweighting is carried out by a procedure that combines US reweighting and Tiwary-Parrinello MTD reweighting within the Weighted Histogram Analysis Method (WHAM). The approach is ideal for controlled sampling of a CV in an MTD simulation, making it computationally efficient for sampling flat, broad, and unbound free energy surfaces. This technique also allows for distributed sampling of a high-dimensional free energy surface, further increasing the computational efficiency. We demonstrate the application of this technique to sampling high-dimensional surfaces for various chemical reactions using ab initio and QM/MM hybrid molecular dynamics simulations. Further, to carry out MTD bias reweighting for computing forward reaction barriers in ab initio or QM/MM simulations, we propose a computationally affordable approach that does not require recrossing trajectories.
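The WHAM self-consistency loop that underlies the reweighting can be sketched as below; this Python toy works on pre-binned histograms and omits the Tiwary-Parrinello time-dependent MTD bias correction, which in the actual scheme enters through modified bias energies.

    import numpy as np

    def wham(hist, bias, beta, n_iter=2000):
        # hist: (S, B) counts per umbrella window s and CV bin b;
        # bias: (S, B) bias energy of window s evaluated at bin b.
        N = hist.sum(axis=1)                # samples per window
        f = np.zeros(len(N))                # window free-energy shifts
        for _ in range(n_iter):
            num = hist.sum(axis=0)
            den = (N[:, None] * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
            p = num / den                   # unbiased bin probabilities
            f = -np.log((np.exp(-beta * bias) * p).sum(axis=1)) / beta
            f -= f[0]                       # fix the arbitrary offset
        return p / p.sum(), f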
NASA Astrophysics Data System (ADS)
Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe
2017-08-01
Objective. Functional near-infrared spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update the mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare it to static decoders and to unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of the activation region, expansions of the activation region, and combined contractions and shifts of the activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation pattern change simulation: 99% versus <54% in two-choice classification accuracy. Significance. We believe GMMAC will be useful for clinical fNIRS-based brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.
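As a rough illustration of the idea of classifying while tracking drift, the Python sketch below labels new feature vectors with a Gaussian mixture and then nudges each component toward the points assigned to it; this simplified EM-style update merely stands in for the variational Bayesian inference the paper actually uses, and the learning rate is an assumption.

    import numpy as np
    from scipy.stats import multivariate_normal

    def gmm_classify_update(X, means, covs, weights, lr=0.05):
        # Responsibilities of each mixture component for each sample.
        K = len(weights)
        resp = np.stack([w * multivariate_normal.pdf(X, m, c)
                         for w, m, c in zip(weights, means, covs)], axis=1)
        resp /= resp.sum(axis=1, keepdims=True)
        labels = resp.argmax(axis=1)        # unsupervised class labels
        for k in range(K):                  # drift the means toward the data
            nk = resp[:, k].sum()
            if nk > 0:
                mk = (resp[:, k, None] * X).sum(axis=0) / nk
                means[k] = (1 - lr) * means[k] + lr * mk
        return labels, means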
Nawafleh, Noor; Öchsner, Andreas; George, Roy
2018-01-01
PURPOSE The aim of this in vitro study was to investigate the fracture resistance under chewing simulation of implant-supported posterior restorations (crowns cemented to hybrid abutments) made of different all-ceramic materials. MATERIALS AND METHODS Monolithic zirconia (MZr) and monolithic lithium disilicate (MLD) crowns for the mandibular first molar were fabricated using computer-aided design/computer-aided manufacturing technology and then cemented to zirconia hybrid abutments (Ti-based). Each group was divided into two subgroups (n=10): (A) a control group, in which crowns were subjected to a single load to fracture; (B) a test group, in which crowns underwent chewing simulation using multiple loads for 1.2 million cycles at 1.2 Hz with simultaneous thermocycling between 5℃ and 55℃. Data were statistically analyzed with one-way ANOVA and a post-hoc test. RESULTS All tested crowns survived chewing simulation, a 100% survival rate. However, wear facets were observed on all crowns at the occlusal contact point. The fracture load of the monolithic lithium disilicate crowns was statistically significantly lower than that of the monolithic zirconia crowns. The fracture load was also significantly reduced in both all-ceramic materials after exposure to chewing simulation and thermocycling. Crowns of all test groups exhibited cohesive fracture within the monolithic crown structure only, and no abutment fractures or screw loosening were observed. CONCLUSION When supported by implants, monolithic zirconia restorations cemented to hybrid abutments withstand masticatory forces. Fatigue loading accompanied by simultaneous thermocycling significantly reduces the strength of both all-ceramic materials. Further research is needed to define the potentials, limits, and long-term serviceability of the materials and hybrid abutments. PMID:29503716
Real time animation of space plasma phenomena
NASA Technical Reports Server (NTRS)
Jordan, K. F.; Greenstadt, E. W.
1987-01-01
In pursuit of real-time animation of computer-simulated space plasma phenomena, the code was rewritten for the Massively Parallel Processor (MPP). The program creates a dynamic representation of the global bow shock which is based on actual spacecraft data and designed for three-dimensional graphic output. This output consists of time-slice sequences which make up the frames of the animation. With the MPP, 16384, 512, or 4 frames can be calculated simultaneously, depending upon which characteristic is being computed. The run time was greatly reduced, which promotes a rapid sequence of images and makes real-time animation a foreseeable goal. The addition of more complex phenomenology to the constructed computer images is now possible, and work proceeds to generate these images.
Simultaneous Heat and Mass Transfer Model for Convective Drying of Building Material
NASA Astrophysics Data System (ADS)
Upadhyay, Ashwani; Chandramohan, V. P.
2018-04-01
A mathematical model of simultaneous heat and moisture transfer is developed for convective drying of building material. A rectangular brick is considered as the sample object. A finite-difference method with a semi-implicit scheme is used for solving the transient governing heat and mass transfer equations. A convective boundary condition is used, as the product is exposed to hot air. The heat and mass transfer equations are coupled through the diffusion coefficient, which is assumed to be a function of the temperature of the product. A set of algebraic equations is generated through space and time discretization. The discretized algebraic equations are solved by the Gauss-Seidel method via iteration. Grid- and time-independence studies are performed to find the optimum number of nodal points and time steps, respectively. A MATLAB computer code is developed to solve the heat and mass transfer equations simultaneously. Transient heat and mass transfer simulations are performed to find the temperature and moisture distribution inside the brick.
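A minimal Python sketch of one semi-implicit step solved by Gauss-Seidel iteration is given below (the paper itself uses MATLAB); the 1D grid, boundary treatment, and tolerance are illustrative assumptions. At each time step the temperature field would be advanced first, the diffusion coefficient re-evaluated from it, and the moisture field advanced with the updated coefficient, which is how the two equations stay coupled.

    import numpy as np

    def step_gauss_seidel(u, coef, dx, dt, h_conv, u_inf, tol=1e-8):
        # Advance one field one semi-implicit step on a 1D grid:
        # (I - dt*coef*d2/dx2) u_new = u_old, solved by Gauss-Seidel sweeps.
        # x = 0 is a symmetry plane; x = L has a convective (Robin)
        # boundary with h_conv = h/k.
        r = coef * dt / dx**2
        un = u.copy()
        while True:
            err = 0.0
            for i in range(1, len(u) - 1):
                new = (u[i] + r * (un[i - 1] + un[i + 1])) / (1 + 2 * r)
                err = max(err, abs(new - un[i]))
                un[i] = new
            un[0] = un[1]                                    # symmetry plane
            un[-1] = (un[-2] + h_conv * dx * u_inf) / (1 + h_conv * dx)
            if err < tol:
                return un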
Modeling And Simulation Of Multimedia Communication Networks
NASA Astrophysics Data System (ADS)
Vallee, Richard; Orozco-Barbosa, Luis; Georganas, Nicolas D.
1989-05-01
In this paper, we present a simulation study of a browsing system involving radiological image servers. The proposed IEEE 802.6 DQDB MAN standard is designated as the computer network to transfer radiological images from file servers to medical workstations, and to simultaneously support real-time voice communications. Storage and transmission of original raster-scanned images and of images compressed according to pyramid data structures are considered. Different types of browsing, as well as various image sizes and bit rates in the DQDB MAN, are also compared. The elapsed time, measured from the time an image request is issued until the image is displayed on the monitor, is the parameter used to evaluate system performance. Simulation results show that image browsing can be supported by the DQDB MAN.
CT to Cone-beam CT Deformable Registration With Simultaneous Intensity Correction
Zhen, Xin; Gu, Xuejun; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B.
2012-01-01
Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration step of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and the second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image, as well as ensuring the intensity consistency between the two modalities. DISC is implemented on computer graphics processing units (GPUs) in compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and six clinical head-and-neck cancer patient data. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons. PMID:23032638
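The adaptive intensity-correction step, matching the first and second moments of CBCT patch intensities to the CT, can be sketched as follows in Python; this shows only the moment-matching operation, not the demons deformation update it is interleaved with, and the patch size is an assumption.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def intensity_correct(cbct, ct, size=5, eps=1e-6):
        # Local (patch) means and standard deviations via box filtering.
        mu_c, mu_t = uniform_filter(cbct, size), uniform_filter(ct, size)
        sd_c = np.sqrt(np.maximum(uniform_filter(cbct**2, size) - mu_c**2, 0)) + eps
        sd_t = np.sqrt(np.maximum(uniform_filter(ct**2, size) - mu_t**2, 0))
        # Match first and second moments of each CBCT patch to the CT.
        return (cbct - mu_c) / sd_c * sd_t + mu_t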
NASA Astrophysics Data System (ADS)
Volkov, D.
2017-12-01
We introduce an algorithm for the simultaneous reconstruction of faults and of the slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that it provides a way of quantifying uncertainties as part of the final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation, we developed an algorithm for the numerical solution of the stochastic minimization problem that can be easily implemented on a parallel multi-core platform, and we discuss techniques to save computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data recorded during a slow slip event in Guerrero, Mexico.
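The stochastic sampling at the core of such a Bayesian approach can be illustrated with a bare-bones random-walk Metropolis sampler in Python; this generic sketch is not the authors' parallel algorithm, and log_post, the step size, and the parameterization are all placeholders.

    import numpy as np

    def metropolis(log_post, x0, n_steps, step, seed=0):
        # Random-walk Metropolis over fault/slip parameters packed in x.
        rng = np.random.default_rng(seed)
        x, lp = x0.copy(), log_post(x0)
        samples = []
        for _ in range(n_steps):
            xp = x + step * rng.standard_normal(x.shape)
            lpp = log_post(xp)
            if np.log(rng.random()) < lpp - lp:   # accept/reject
                x, lp = xp, lpp
            samples.append(x.copy())
        return np.array(samples)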
NASA Astrophysics Data System (ADS)
Enin, S. S.; Omelchenko, E. Y.; Fomin, N. V.; Beliy, A. V.
2018-03-01
This paper describes a computer model of an overhead crane system. The overhead crane system consists of hoisting, trolley and crane mechanisms, as well as a two-axis payload system. With the help of the differential equations of motion of the specified mechanisms, derived through the Lagrange equations of the second kind, it is possible to build a computer model of an overhead crane. The computer model was implemented in Matlab. Transients of the coordinate, linear speed and motor torque of the trolley and crane mechanisms were simulated. In addition, transients of payload sway about the vertical axis were obtained. A trajectory of the trolley mechanism operating simultaneously with the crane mechanism is presented in the paper, as well as a two-axis trajectory of the payload. The computer model of an overhead crane is a useful means for studying positioning control and anti-sway control systems.
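A single-axis simplification of such a model (trolley plus payload pendulum, with the equations of motion obtainable from the Lagrange equations of the second kind) can be integrated in a few lines; the masses, cable length, and drive force below are illustrative, and this Python sketch parallels the paper's Matlab model rather than reproducing it.

    import numpy as np
    from scipy.integrate import solve_ivp

    def crane_rhs(t, y, m_t, m_p, l, F, g=9.81):
        # y = [trolley position, trolley speed, sway angle, angular rate];
        # the payload hangs from a rigid cable of length l.
        x, dx, th, dth = y
        s, c = np.sin(th), np.cos(th)
        ddx = (F(t) + m_p * s * (l * dth**2 + g * c)) / (m_t + m_p * s**2)
        ddth = -(ddx * c + g * s) / l
        return [dx, ddx, dth, ddth]

    sol = solve_ivp(crane_rhs, (0.0, 10.0), [0.0, 0.0, 0.0, 0.0],
                    args=(50.0, 10.0, 2.0, lambda t: 20.0 if t < 2.0 else 0.0),
                    max_step=0.01)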
NASA Astrophysics Data System (ADS)
Lunt, T.; Fuchs, J. C.; Mank, K.; Feng, Y.; Brochard, F.; Herrmann, A.; Rohde, V.; Endstrasser, N.; ASDEX Upgrade Team
2010-11-01
A generally available and easy-to-use viewer for the simultaneous visualisation of the ASDEX Upgrade vacuum vessel computer aided design models, diagnostics and magnetic geometry, solutions of 3D plasma simulation codes and 2D camera images was developed. Here we report on the working principle of this software and give several examples of its technical and scientific application.
NASA Astrophysics Data System (ADS)
Feng, Bo; Gao, Feng; Zhao, Huijuan; Zhang, Limin; Li, Jiao; Zhou, Zhongxing
2018-02-01
The purpose of this work is to introduce and study a novel x-ray beam irradiation pattern for X-ray Luminescence Computed Tomography (XLCT), termed multiple intensity-weighted narrow-beam irradiation. The proposed XLCT imaging method is studied through simulations of x-ray and diffuse-light propagation. The optical photons emitted by x-ray-excitable nanophosphors were collected by optical fiber bundles from the right-side surface of the phantom. Image reconstruction is based on simulated measurements from 6 or 12 angular projections in 3- or 5-beam scanning modes. The proposed XLCT imaging method is compared against constant-intensity-weighted narrow-beam XLCT. From the reconstructed XLCT images, we found that the Dice similarity and the quantitative ratio of the targets showed a clear degree of improvement. The results demonstrate that the proposed method can simultaneously offer high image quality and fast image acquisition.
Nowak, Andreas; Langebach, Robin; Klemm, Eckart; Heller, Winfried
2012-04-01
We describe an innovative computer-based method for the analysis of gas flow using a modified airway management technique to perform percutaneous dilatational tracheotomy (PDT) with a rigid tracheotomy endoscope (TED). A test lung was connected via an artificial trachea to the tracheotomy endoscope and ventilated using superimposed high-frequency jet ventilation (SHFJV). Red packed cells were instilled during the puncture phase of a simulated percutaneous tracheotomy in a trachea model, and the migration of the red packed cells during breathing was continuously measured. Simultaneously, the gas flow within the endoscope was numerically simulated. In the experimental study, no backflow of blood from the trachea into the endoscope occurred during the use of SHFJV, nor was any blood transported into the lower respiratory tract. In parallel, the numerical simulations of the openings of the TED showed almost exclusively positive volume flows. Under the conditions investigated, there is no risk of blood aspiration during PDT using the TED with simultaneous SHFJV ventilation. In addition, there is no risk of impaired endoscopic visibility due to a backflow of blood into the TED. The numerical simulation method offers excellent insight into the fluid flow even under highly transient conditions such as jet ventilation.
Efficient field-theoretic simulation of polymer solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villet, Michael C.; Fredrickson, Glenn H., E-mail: ghf@mrl.ucsb.edu; Department of Materials, University of California, Santa Barbara, California 93106
2014-12-14
We present several developments that facilitate the efficient field-theoretic simulation of polymers by complex Langevin sampling. A regularization scheme using finite Gaussian excluded volume interactions is used to derive a polymer solution model that appears free of ultraviolet divergences and hence is well-suited for lattice-discretized field theoretic simulation. We show that such models can exhibit ultraviolet sensitivity, a numerical pathology that dramatically increases sampling error in the continuum lattice limit, and further show that this pathology can be eliminated by appropriate model reformulation by variable transformation. We present an exponential time differencing algorithm for integrating complex Langevin equations for field theoretic simulation, and show that the algorithm exhibits excellent accuracy and stability properties for our regularized polymer model. These developments collectively enable substantially more efficient field-theoretic simulation of polymers, and illustrate the importance of simultaneously addressing analytical and numerical pathologies when implementing such computations.
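A first-order exponential time differencing step for a Langevin equation with a stiff linear part, the kind of integrator the abstract refers to, might look like this in Python; the operator is assumed diagonal (e.g. in Fourier space) and the noise normalization is illustrative.

    import numpy as np

    def etd1_langevin_step(w, L, nonlin, h, rng):
        # One ETD1 step for dw/dt = -L w + N(w) + noise, treating the
        # linear term exactly. L: positive diagonal operator (array);
        # nonlin(w): nonlinear force; noise std sqrt(2h) per step here.
        E = np.exp(-L * h)
        drift = E * w + (1.0 - E) / L * nonlin(w)
        return drift + np.sqrt(2.0 * h) * rng.standard_normal(w.shape)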
Simulating Isotope Enrichment by Gaseous Diffusion
NASA Astrophysics Data System (ADS)
Reed, Cameron
2015-04-01
A desktop-computer simulation of isotope enrichment by gaseous diffusion has been developed. The simulation incorporates two non-interacting point-mass species whose members pass through a cascade of cells containing porous membranes, retaining constant speeds as they reflect off the walls of the cells and off the spaces between holes in the membranes. A particular feature is the periodic forward recycling of enriched material to cells further along the cascade, with simultaneous return of depleted material to preceding cells. The number of particles, the mass ratio, the initial fractional abundance of the lighter species, and the time between recycling operations can be chosen by the user. The simulation is simple enough to be understood on the basis of two-dimensional kinematics, and demonstrates that the fractional abundance of the lighter species increases along the cascade. The logic of the simulation will be described and results of some typical runs will be presented and discussed.
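The expected behavior of such a cascade can be cross-checked against the ideal-stage analytic model, in which each stage multiplies the light-to-heavy abundance ratio by the Graham's-law factor sqrt(m_heavy/m_light); the Python sketch below is that analytic counterpart, not the particle simulation itself, with UF6 molecular masses as an example.

    import numpy as np

    def cascade(f0, m_light=349.0, m_heavy=352.0, n_stages=20):
        # f0: feed fractional abundance of the lighter species.
        alpha = np.sqrt(m_heavy / m_light)   # ideal single-stage factor
        f = f0
        for _ in range(n_stages):
            ratio = alpha * f / (1.0 - f)    # enrich the abundance ratio
            f = ratio / (1.0 + ratio)
        return f

    print(cascade(0.00711))   # natural uranium feed through 20 ideal stages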
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Strong, J.; Woodward, R. H.; Pierce, H.
1991-01-01
Results are presented on an automatic stereo analysis of cloud-top heights from nearly simultaneous satellite image pairs from the GOES and NOAA satellites, using a massively parallel processor computer. Comparisons of computer-derived height fields and manually analyzed fields show that the automatic technique holds promise for performing routine stereo analysis in a real-time environment, providing a useful forecasting tool by augmenting observational data sets of severe thunderstorms and hurricanes. Simulations using synthetic stereo data show that it is possible to automatically resolve small-scale features such as 4000-m-diameter clouds to about 1500 m in the vertical.
Adaptive thinking & leadership simulation game training for special forces officers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raybourn, Elaine Marie; Mendini, Kip; Heneghan, Jerry
Complex problem solving approaches and novel strategies employed by the military at the squad, team, and commander level are often best learned experientially. Since live action exercises can be costly, advances in simulation game training technology offer exciting ways to enhance current training. Computer games provide an environment for active, critical learning. Games open up possibilities for simultaneous learning on multiple levels; players may learn from contextual information embedded in the dynamics of the game, from the organic process generated by the game, and through the risks, benefits, costs, outcomes, and rewards of the alternative strategies that result from decision making. In the present paper we discuss a multiplayer computer game simulation created for the Adaptive Thinking & Leadership (ATL) Program to train Special Forces team leaders. The ATL training simulation consists of a scripted single-player environment and an immersive multiplayer environment for classroom use which leverages immersive computer game technology. We define adaptive thinking as consisting of competencies such as negotiation and consensus-building skills, the ability to communicate effectively, analyze ambiguous situations, be self-aware, think innovatively, and critically use effective problem solving skills. Each of these competencies is an essential element of leader development training for the U.S. Army Special Forces. The ATL simulation is used to augment experiential learning in the curriculum for the U.S. Army JFK Special Warfare Center & School (SWCS) course in Adaptive Thinking & Leadership. The school is incorporating the ATL simulation game into two additional training pipelines (the PSYOPS and Civil Affairs Qualification Courses) that are also concerned with developing cultural awareness, interpersonal communication adaptability, and rapport-building skills. In the present paper, we discuss the design, development, and deployment of the training simulation, and emphasize how the multiplayer simulation game is successfully used in the Special Forces officer training program.
NASA Astrophysics Data System (ADS)
Karamat, Muhammad I.; Farncombe, Troy H.
2015-10-01
Simultaneous multi-isotope Single Photon Emission Computed Tomography (SPECT) imaging has a number of applications in cardiac, brain, and cancer imaging. The major concern, however, is the significant crosstalk contamination due to photon scatter between the different isotopes. The current study focuses on a method of crosstalk compensation between two isotopes in simultaneous dual-isotope SPECT acquisition, applied to cancer imaging using 99mTc and 111In. We have developed an iterative image reconstruction technique that simulates the photon down-scatter from one isotope into the acquisition window of the second isotope. Our approach uses an accelerated Monte Carlo (MC) technique for the forward-projection step of an iterative reconstruction algorithm. The MC-estimated scatter contamination of a radionuclide in a given projection view is then used to compensate for the photon contamination in the acquisition window of the other nuclide. We use a modified ordered-subset expectation maximization (OS-EM) algorithm, named simultaneous ordered-subset expectation maximization (Sim-OSEM), to perform this step. We have undertaken a number of simulation tests and phantom studies to verify this approach. The proposed reconstruction technique was also evaluated by reconstruction of experimentally acquired phantom data. Reconstruction using Sim-OSEM showed very promising results in terms of contrast recovery and uniformity of the object background compared to alternative reconstruction methods implementing other scatter correction schemes (i.e., triple-energy-window correction or separately acquired projection data). In this study the evaluation is based on the quality of the reconstructed images and the activity estimated using Sim-OSEM. To quantify the possible improvement in spatial resolution and signal-to-noise ratio (SNR) observed in this study, further simulation and experimental studies are required.
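The structure of such a crosstalk-aware update can be illustrated with a toy MLEM iteration in Python, in which the down-scatter estimated from the current image of the other isotope enters the forward model as an additive term; the matrices here are generic placeholders (the actual method computes the scatter term by accelerated Monte Carlo).

    import numpy as np

    def sim_osem_update(x_tc, x_in, A, S, y_tc, eps=1e-12):
        # A: photopeak system matrix; S: maps 111In activity to its
        # down-scatter counts in the 99mTc window; y_tc: measured counts.
        fwd = A @ x_tc + S @ x_in + eps        # forward model with crosstalk
        back = A.T @ (y_tc / fwd)
        return x_tc * back / (A.T @ np.ones(len(y_tc)) + eps)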
NASA Astrophysics Data System (ADS)
Langenberg, J. H.; Bucur, I. B.; Archirel, P.
1997-09-01
We show that in the simple case of van der Waals ionic clusters, the optimisation of orbitals within VB can be easily simulated with the help of pseudopotentials. The procedure yields the ground and the first excited states of the cluster simultaneously. This makes the calculation of potential energy surfaces for tri- and tetraatomic clusters possible, with very acceptable computation times. We give potential curves for (ArCO)+, (ArN2)+ and N4+. An application to the simulation of the SCF method is shown for Na+-H2O.
QMMMW: A wrapper for QM/MM simulations with QUANTUM ESPRESSO and LAMMPS
NASA Astrophysics Data System (ADS)
Ma, Changru; Martin-Samos, Layla; Fabris, Stefano; Laio, Alessandro; Piccinin, Simone
2015-10-01
We present QMMMW, a new program aimed at performing Quantum Mechanics/Molecular Mechanics (QM/MM) molecular dynamics. The package operates as a wrapper that patches the PWscf code included in the QUANTUM ESPRESSO distribution and the LAMMPS Molecular Dynamics Simulator. It is designed around three guidelines: (i) a minimal amount of modification to the parent codes, (ii) flexibility and computational efficiency of the communication layer, and (iii) accuracy of the Hamiltonian describing the interaction between the QM and MM subsystems. These three features are seldom present simultaneously in other QM/MM implementations. The QMMMW project is hosted by qe-forge at
Vortical structures for nanomagnetic memory induced by dipole-dipole interaction in monolayer disks
NASA Astrophysics Data System (ADS)
Liu, Zhaosen; Ciftja, Orion; Zhang, Xichao; Zhou, Yan; Ian, Hou
2018-05-01
It is well known that magnetic domains in nanodisks can be used as storage units for computer memory. Using two quantum simulation approaches, we show here that spin vortices, which are chirality-free, can be induced on magnetic monolayer nanodisks by the dipole-dipole interaction (DDI) in the disk plane. When the DDI is sufficiently strong, vortical and anti-vortical multi-domain textures can be generated simultaneously. In particular, a spin vortex can be easily created and deleted by either external magnetic or electrical signals, making such vortices ideal for use in nanomagnetic memory and logic devices. We demonstrate these properties in our simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias
2016-08-11
This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model 100s to 1000s of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.
Validation of chemistry models employed in a particle simulation method
NASA Technical Reports Server (NTRS)
Haas, Brian L.; Mcdonald, Jeffrey D.
1991-01-01
The chemistry models employed in a statistical particle simulation method, as implemented on the Intel iPSC/860 multiprocessor computer, are validated and applied. Chemical relaxation of five-species air in adiabatic gas reservoirs in thermal equilibrium involves 34 simultaneous dissociation, recombination, and atomic-exchange reactions. The reaction rates employed in the analytic solutions are obtained from Arrhenius experimental correlations as functions of temperature. Favorable agreement with the analytic solutions validates the simulation when applied to relaxation of O2 toward equilibrium in reservoirs dominated by dissociation and recombination, respectively, and when applied to relaxation of air in the temperature range 5000 to 30,000 K. A flow of O2 over a circular cylinder at high Mach number is simulated to demonstrate application of the method to multidimensional reactive flows.
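The Arrhenius-form rate evaluation underlying the analytic comparisons can be sketched compactly; the modified-Arrhenius coefficients below are illustrative placeholders, not the study's five-species reaction set.

```python
import math

def arrhenius_rate(T, A, b, Ea_over_R):
    """Modified Arrhenius rate k(T) = A * T**b * exp(-Ea/(R*T))."""
    return A * T**b * math.exp(-Ea_over_R / T)

# Hypothetical O2-dissociation-like coefficients, for demonstration only.
for T in (5000.0, 15000.0, 30000.0):
    print(T, arrhenius_rate(T, A=2.0e21, b=-1.5, Ea_over_R=59500.0))
```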
Dual-Tracer PET Using Generalized Factor Analysis of Dynamic Sequences
Fakhri, Georges El; Trott, Cathryn M.; Sitek, Arkadiusz; Bonab, Ali; Alpert, Nathaniel M.
2013-01-01
Purpose: With single-photon emission computed tomography, simultaneous imaging of two physiological processes relies on discrimination of the energy of the emitted gamma rays, whereas the application of dual-tracer imaging to positron emission tomography (PET) imaging has been limited by the characteristic 511-keV emissions. Procedures: To address this limitation, we developed a novel approach based on generalized factor analysis of dynamic sequences (GFADS) that exploits spatio-temporal differences between radiotracers and applied it to near-simultaneous imaging of 2-deoxy-2-[18F]fluoro-D-glucose (FDG) (brain metabolism) and 11C-raclopride (D2) with simulated human data and experimental rhesus monkey data. We show theoretically and verify by simulation and measurement that GFADS can separate FDG and raclopride measurements that are made nearly simultaneously. Results: The theoretical development shows that GFADS can decompose the studies at several levels: (1) it decomposes the FDG and raclopride studies so that they can be analyzed as though they were obtained separately; (2) if additional physiologic/anatomic constraints can be imposed, further decomposition is possible; (3) for the example of raclopride, specific and nonspecific binding can be determined on a pixel-by-pixel basis. We found good agreement between the estimated GFADS factors and the simulated ground truth time activity curves (TACs), and between the GFADS factor images and the corresponding ground truth activity distributions, with errors less than 7.3±1.3 %. Biases in estimation of specific D2 binding and relative metabolism activity were within 5.9±3.6 % compared to the ground truth values. We also evaluated our approach in simultaneous dual-isotope brain PET studies in a rhesus monkey and obtained accuracy of better than 6 % in a mid-striatal volume, for striatal activity estimation. Conclusions: Dynamic image sequences acquired following near-simultaneous injection of two PET radiopharmaceuticals can be separated into components based on the differences in their kinetics, provided their kinetic behaviors are distinct. PMID:23636489
Glowacki, David R; O'Connor, Michael; Calabró, Gaetano; Price, James; Tew, Philip; Mitchell, Thomas; Hyde, Joseph; Tew, David P; Coughtrie, David J; McIntosh-Smith, Simon
2014-01-01
With advances in computational power, the rapidly growing role of computational/simulation methodologies in the physical sciences, and the development of new human-computer interaction technologies, the field of interactive molecular dynamics seems destined to expand. In this paper, we describe and benchmark the software algorithms and hardware setup for carrying out interactive molecular dynamics utilizing an array of consumer depth sensors. The system works by interpreting the human form as an energy landscape, and superimposing this landscape on a molecular dynamics simulation to chaperone the motion of the simulated atoms, affecting both graphics and sonified simulation data. GPU acceleration has been key to achieving our target of 60 frames per second (FPS), giving an extremely fluid interactive experience. GPU acceleration has also allowed us to scale the system for use in immersive 360° spaces with an array of up to ten depth sensors, allowing several users to simultaneously chaperone the dynamics. The flexibility of our platform for carrying out molecular dynamics simulations has been considerably enhanced by wrappers that facilitate fast communication with a portable selection of GPU-accelerated molecular force evaluation routines. In this paper, we describe a 360° atmospheric molecular dynamics simulation we have run in a chemistry/physics education context. We also describe initial tests in which users have been able to chaperone the dynamics of 10-alanine peptide embedded in an explicit water solvent. Using this system, both expert and novice users have been able to accelerate peptide rare event dynamics by 3-4 orders of magnitude.
Pham, Tuan Anh; Ogitsu, Tadashi; Lau, Edmond Y; Schwegler, Eric
2016-10-21
Establishing an accurate and predictive computational framework for the description of complex aqueous solutions is an ongoing challenge for density functional theory based first-principles molecular dynamics (FPMD) simulations. In this context, important advances have been made in recent years, including the development of sophisticated exchange-correlation functionals. On the other hand, simulations based on simple generalized gradient approximation (GGA) functionals remain an active field, particularly in the study of complex aqueous solutions due to a good balance between the accuracy, computational expense, and the applicability to a wide range of systems. Such simulations are often performed at elevated temperatures to artificially "correct" for GGA inaccuracies in the description of liquid water; however, a detailed understanding of how the choice of temperature affects the structure and dynamics of other components, such as solvated ions, is largely unknown. To address this question, we carried out a series of FPMD simulations at temperatures ranging from 300 to 460 K for liquid water and three representative aqueous solutions containing solvated Na + , K + , and Cl - ions. We show that simulations at 390-400 K with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional yield water structure and dynamics in good agreement with experiments at ambient conditions. Simultaneously, this computational setup provides ion solvation structures and ion effects on water dynamics consistent with experiments. Our results suggest that an elevated temperature around 390-400 K with the PBE functional can be used for the description of structural and dynamical properties of liquid water and complex solutions with solvated ions at ambient conditions.
Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin
2017-08-01
Surrogate-based simulation-optimization is an effective approach for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is the key to such studies. However, previous work has generally relied on a stand-alone surrogate model and has rarely tried to improve the surrogate's approximation accuracy by combining several methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and we conducted a comparative study to select a better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using radial basis function artificial neural networks (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance; the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while simultaneously maintaining high computational accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
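A minimal sketch of the performance-weighted ensemble idea follows: here inverse-RMSE weights computed on validation data stand in for the paper's set-pair-analysis weights, and the three "surrogates" are noisy stand-ins rather than trained RBFANN/SVR/Kriging models.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x) + 0.5 * x                 # hidden "simulator"
x_val = rng.uniform(0, 2, 40)
y_val = f(x_val)

# Pretend these are RBFANN / SVR / Kriging predictions on validation data.
preds = {
    "rbfann":  y_val + rng.normal(0, 0.30, 40),
    "svr":     y_val + rng.normal(0, 0.20, 40),
    "kriging": y_val + rng.normal(0, 0.10, 40),
}
rmse = {k: np.sqrt(np.mean((p - y_val) ** 2)) for k, p in preds.items()}
w = {k: 1.0 / r for k, r in rmse.items()}             # better surrogate, larger weight
total = sum(w.values())
w = {k: v / total for k, v in w.items()}              # normalize weights to sum to 1

ensemble = sum(w[k] * preds[k] for k in preds)        # ES prediction
print("weights:", {k: round(v, 3) for k, v in w.items()})
print("ensemble RMSE:", np.sqrt(np.mean((ensemble - y_val) ** 2)))
```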
Improved atmospheric 3D BSDF model in earthlike exoplanet using ray-tracing based method
NASA Astrophysics Data System (ADS)
Ryu, Dongok; Kim, Sug-Whan; Seong, Sehyun
2012-10-01
Studies of planetary radiative transfer computation have become important elements of the disk-averaged spectral characterization of potential exoplanets. In this paper, we report an improved ray-tracing based atmospheric simulation model as part of a 3-D Earth-like planet model with three principal sub-components, i.e., land, sea, and atmosphere. Any changes in ray paths and their characteristics, such as radiative power and direction, are computed as the rays experience reflection, refraction, transmission, absorption, and scattering. The improved atmospheric BSDF algorithm uses Q. Liu's combined Rayleigh and aerosol Henyey-Greenstein scattering phase function. The input cloud-free atmosphere model consists of 48 layers with vertical absorption profiles and a scattering layer, with input characteristics taken from the GIOVANNI database. Total Solar Irradiance data are obtained from the Solar Radiation and Climate Experiment (SORCE) mission. Using the aerosol scattering computation, we first tested the atmospheric scattering effects with an imaging simulation of HRIV, EPOXI. We then examined the computational validity of the atmospheric model against measurements of global, direct, and diffuse radiation taken from NREL (National Renewable Energy Laboratory) pyranometers and pyrheliometers at a ground station, for cases of a single incident angle and of simultaneous multiple incident angles of the solar beam.
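The Henyey-Greenstein phase function at the core of the combined scattering model has a simple closed form; the sketch below is a standard implementation, with the asymmetry parameter g chosen arbitrarily for illustration.

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """p(theta) with asymmetry parameter g in (-1, 1), normalized over the sphere."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

mu = np.linspace(-1.0, 1.0, 5)
print(henyey_greenstein(mu, g=0.7))   # strongly forward-peaked aerosol scattering
```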
Improving Fidelity of Launch Vehicle Liftoff Acoustic Simulations
NASA Technical Reports Server (NTRS)
Liever, Peter; West, Jeff
2016-01-01
Launch vehicles experience high acoustic loads during ignition and liftoff affected by the interaction of rocket plume generated acoustic waves with launch pad structures. Highly parallelized Computational Fluid Dynamics (CFD) analysis tools optimized for the NAS computer systems, such as the Loci/CHEM program, now enable simulation of time-accurate, turbulent, multi-species plume formation and interaction with launch pad geometry and capture the generation of acoustic noise at the source regions in the plume shear layers and impingement regions. These CFD solvers are robust in capturing the acoustic fluctuations, but they are too dissipative to accurately resolve the propagation of the acoustic waves throughout the launch environment domain along the vehicle. A hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) modeling framework has been developed to improve such liftoff acoustic environment predictions. The framework combines the existing highly-scalable NASA production CFD code, Loci/CHEM, with a high-order accurate discontinuous Galerkin (DG) solver, Loci/THRUST, developed in the same computational framework. Loci/THRUST employs a low-dissipation, high-order, unstructured DG method to accurately propagate acoustic waves away from the source regions across large distances. The DG solver is currently capable of computing up to 4th-order solutions for non-linear, conservative acoustic field propagation. Higher-order boundary conditions are implemented to accurately model the reflection and refraction of acoustic waves on launch pad components. The DG solver accepts generalized unstructured meshes, enabling efficient application of common mesh generation tools for CHEM and THRUST simulations. The DG solution is coupled with the CFD solution at interface boundaries placed near the CFD acoustic source regions. Both simulations are executed simultaneously with coordinated boundary condition data exchange.
Burton, Brett M; Aras, Kedar K; Good, Wilson W; Tate, Jess D; Zenger, Brian; MacLeod, Rob S
2018-05-21
The biophysical basis for electrocardiographic evaluation of myocardial ischemia stems from the notion that ischemic tissues develop, with relative uniformity, along the endocardial aspects of the heart. These injured regions of subendocardial tissue give rise to intramural currents that lead to ST segment deflections within electrocardiogram (ECG) recordings. The concept of subendocardial ischemic regions is often used in clinical practice, providing a simple and intuitive description of ischemic injury; however, such a model grossly oversimplifies the presentation of ischemic disease, inadvertently leading to errors in ECG-based diagnoses. Furthermore, recent experimental studies have brought the subendocardial ischemia paradigm into question, suggesting instead a more distributed pattern of tissue injury. These findings come from experiments and so have both the impact and the limitations of measurements from living organisms. Computer models have often been employed to overcome the constraints of experimental approaches and have a robust history in cardiac simulation. To this end, we have developed a computational simulation framework aimed at elucidating the effects of ischemia on measurable cardiac potentials. To validate our framework, we simulated, visualized, and analyzed 226 experimentally derived acute myocardial ischemic events. Simulation outcomes agreed both qualitatively (feature comparison) and quantitatively (correlation, average error, and significance) with experimentally obtained epicardial measurements, particularly under conditions of elevated ischemic stress. Our simulation framework introduces a novel approach to incorporating subject-specific geometric models and experimental results that are highly resolved in space and time into computational models. We propose this framework as a means to advance the understanding of the underlying mechanisms of ischemic disease while simultaneously putting in place the computational infrastructure necessary to study and improve ischemia models aimed at reducing diagnostic errors in the clinic.
Using Adjoint Methods to Improve 3-D Velocity Models of Southern California
NASA Astrophysics Data System (ADS)
Liu, Q.; Tape, C.; Maggi, A.; Tromp, J.
2006-12-01
We use adjoint methods popular in climate and ocean dynamics to calculate Fréchet derivatives for tomographic inversions in southern California. The Fréchet derivative of an objective function χ(m), where m denotes the Earth model, may be written in the generic form δχ = ∫ K_m(x) δln m(x) d³x, where δln m = δm/m denotes the relative model perturbation. For illustrative purposes, we construct the 3-D finite-frequency banana-doughnut kernel K_m, corresponding to the misfit of a single traveltime measurement, by simultaneously computing the 'adjoint' wave field s† forward in time and reconstructing the regular wave field s backward in time. The adjoint wave field is produced by using the time-reversed velocity at the receiver as a fictitious source, while the regular wave field is reconstructed on the fly by propagating the last frame of the wave field saved by a previous forward simulation backward in time. The approach is based upon the spectral-element method, and only two simulations are needed to produce density, shear-wave, and compressional-wave sensitivity kernels. This method is applied to the SCEC southern California velocity model. Various density, shear-wave, and compressional-wave sensitivity kernels are presented for different phases in the seismograms. We also generate 'event' kernels for Pnl, S and surface waves, which are the Fréchet kernels of misfit functions that measure the P, S or surface wave traveltime residuals at all the receivers simultaneously for one particular event. Effectively, an event kernel is a sum of weighted Fréchet kernels, with weights determined by the associated traveltime anomalies. By the nature of the 3-D simulation, every event kernel is also computed based upon just two simulations, i.e., its construction costs the same amount of computation time as an individual banana-doughnut kernel. One can think of the sum of the event kernels for all available earthquakes, called the 'misfit' kernel, as a graphical representation of the gradient of the misfit function. With the capability of computing both the value of the misfit function and its gradient, which assimilates the traveltime anomalies, we are ready to use a non-linear conjugate gradient algorithm to iteratively improve velocity models of southern California.
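A schematic 1-D illustration of the kernel assembly may help: a regular field propagated from the source and an adjoint field injected at the receiver are combined as K(x) ≈ -∑_t s†(x, T-t) ∂²s/∂t²(x, t) Δt. For brevity the sketch stores all frames in memory, whereas the paper reconstructs the regular field backward in time on the fly; the physics (1-D, periodic boundaries) is deliberately simplified and is not the spectral-element implementation.

```python
import numpy as np

nx, nt, cdt2 = 300, 600, 0.25                  # grid points, time steps, (c*dt/dx)^2

def propagate(src_pos, wavelet):
    """Second-order finite-difference wave propagation; returns all time frames."""
    frames = np.zeros((nt, nx))
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    for it in range(nt):
        lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
        u_next = 2 * u - u_prev + cdt2 * lap
        u_next[src_pos] += wavelet(it)         # inject source (or adjoint) wavelet
        u_prev, u = u, u_next
        frames[it] = u
    return frames

def ricker(it, t0=60.0, w=10.0):
    return (1 - 2 * ((it - t0) / w) ** 2) * np.exp(-(((it - t0) / w) ** 2))

s = propagate(30, ricker)                      # regular field from the source
s_adj = propagate(270, ricker)                 # adjoint field from the receiver

d2s_dt2 = np.gradient(np.gradient(s, axis=0), axis=0)
kernel = -(s_adj[::-1] * d2s_dt2).sum(axis=0)  # time-reversed adjoint * acceleration
print("kernel shape:", kernel.shape, "| max |K| at cell", int(np.argmax(np.abs(kernel))))
```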
A high-resolution physically-based global flood hazard map
NASA Astrophysics Data System (ADS)
Kaheil, Y.; Begnudelli, L.; McCollum, J.
2016-12-01
We present the results from a physically-based global flood hazard model. The model uses a physically-based hydrologic model to simulate river discharges and a 2D hydrodynamic model to simulate inundation. The model is set up to allow application of large-scale flood hazard analysis through efficient use of parallel computing. For hydrology, we use the Hillslope River Routing (HRR) model. HRR accounts for surface hydrology using Green-Ampt parameterization. The model is calibrated against observed discharge data from the Global Runoff Data Centre (GRDC) network, among other publicly-available datasets. The parallel-computing framework takes advantage of the river network structure to minimize cross-processor messages, and thus significantly increases computational efficiency. For inundation, we implemented a computationally-efficient 2D finite-volume model with wetting/drying. The approach consists of simulating floods along the river network by forcing the hydraulic model with the streamflow hydrographs simulated by HRR, scaled up to given return levels, e.g., 100 years. The model is distributed such that each available processor takes the next simulation. Given an approximate cost criterion, the simulations are ordered from most demanding to least demanding to ensure that all processors finish almost simultaneously. Upon completing all simulations, the maximum envelope of flood depth is taken to generate the final map. The model is applied globally, with selected results shown from different continents and regions. The maps shown depict flood depth and extent at different return periods. These maps, which are currently available at 3 arc-sec resolution (~90 m), can be made available at higher resolutions where high-resolution DEMs are available. The maps can be utilized by flood risk managers at the national, regional, and even local levels to further understand their flood risk exposure, exercise certain measures of mitigation, and/or transfer the residual risk financially through flood insurance programs.
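The "most demanding first" dispatch described above is essentially greedy longest-processing-time (LPT) scheduling; a small sketch follows, with invented per-simulation costs.

```python
import heapq

def lpt_schedule(costs, n_procs):
    """Assign jobs (largest first) to the currently least-loaded processor."""
    heap = [(0.0, p) for p in range(n_procs)]          # (load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_procs)}
    for job, cost in sorted(enumerate(costs), key=lambda jc: -jc[1]):
        load, p = heapq.heappop(heap)                  # least-loaded processor
        assignment[p].append(job)
        heapq.heappush(heap, (load + cost, p))
    return assignment, max(load for load, _ in heap)   # makespan = slowest processor

costs = [9.5, 1.2, 3.3, 7.1, 4.4, 2.8, 6.0, 5.5]       # hypothetical per-reach run times
assignment, makespan = lpt_schedule(costs, n_procs=3)
print(assignment, "makespan ~", makespan)
```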
Optical systolic array processor using residue arithmetic
NASA Technical Reports Server (NTRS)
Jackson, J.; Casasent, D.
1983-01-01
The use of residue arithmetic to increase the accuracy and reduce the dynamic range requirements of optical matrix-vector processors is evaluated. It is determined that matrix-vector operations and iterative algorithms can be performed totally in residue notation. A new parallel residue quantizer circuit is developed which significantly improves the performance of the systolic array feedback processor. Results are presented of a computer simulation of this system used to solve a set of three simultaneous equations.
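Residue arithmetic works channel-wise: a number is stored as its residues modulo pairwise coprime bases, addition and multiplication act independently on each channel with small dynamic range, and the Chinese remainder theorem recovers the integer. A brief sketch, with arbitrarily chosen moduli:

```python
from math import prod

MODULI = (5, 7, 9, 11)   # pairwise coprime moduli (illustrative choice)

def to_residue(x):
    return tuple(x % m for m in MODULI)

def add(a, b):
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def mul(a, b):
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_residue(r):
    """Chinese remainder reconstruction of the integer from its residues."""
    M = prod(MODULI)
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

a, b = to_residue(123), to_residue(17)
print(from_residue(add(a, b)), from_residue(mul(a, b)))   # 140, 2091
```

The channel independence is what maps naturally onto parallel optical or systolic hardware: each residue channel can be processed by its own small-range unit with no carries between channels.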
NASA Astrophysics Data System (ADS)
Hautot, Felix; Dubart, Philippe; Bacri, Charles-Olivier; Chagneau, Benjamin; Abou-Khalil, Roger
2017-09-01
New developments in the fields of robotics and computer vision enable sensor fusion for fast real-time localization of radiological measurements in space, together with near-real-time identification and characterization of radioactive sources. These capabilities make nuclear investigations more efficient for operator dosimetry evaluation, intervention scenarios, and risk mitigation and simulation, for example in accidents in unknown, potentially contaminated areas or during dismantling operations.
OBERON: OBliquity and Energy balance Run on N-body systems
NASA Astrophysics Data System (ADS)
Forgan, Duncan H.
2016-08-01
OBERON (OBliquity and Energy balance Run on N-body systems) models the climate of Earth-like planets under the effects of an arbitrary number and arrangement of other bodies, such as stars, planets, and moons. The code, written in C++, simultaneously computes N-body motions using a 4th-order Hermite integrator, simulates climates using a 1D latitudinal energy balance model, and evolves the orbital spin of bodies using the equations of Laskar (1986a,b).
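A compact sketch of a 1D latitudinal diffusive energy balance model of the general kind named above follows; the insolation profile and all constants are textbook-style placeholders, not OBERON's C++ implementation, and Python is used here for illustration.

```python
import numpy as np

n = 60
lat = np.linspace(-87, 87, n) * np.pi / 180.0
x = np.sin(lat)                                       # standard EBM coordinate
T = np.full(n, 280.0)                                 # temperature [K]
S0, albedo, A, B, D = 340.0, 0.3, 203.3, 2.09, 0.55   # illustrative constants
S = S0 * (1.0 - 0.477 * 0.5 * (3 * x**2 - 1))         # annual-mean insolation (P2 form)
C, dt = 2.0e8, 86400.0                                # heat capacity, one-day step

for _ in range(365 * 20):                             # integrate ~20 years to equilibrium
    flux = (1 - x**2) * np.gradient(T, x)             # meridional diffusive flux
    lap = np.gradient(flux, x)
    T += dt / C * (S * (1 - albedo) - (A + B * (T - 273.15)) + D * lap)
print("equator/pole temperatures [K]:", T[n // 2].round(1), T[0].round(1))
```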
Potjans, Wiebke; Morrison, Abigail; Diesmann, Markus
2010-01-01
A major puzzle in the field of computational neuroscience is how to relate system-level learning in higher organisms to synaptic plasticity. Recently, plasticity rules depending not only on pre- and post-synaptic activity but also on a third, non-local neuromodulatory signal have emerged as key candidates to bridge the gap between the macroscopic and the microscopic level of learning. Crucial insights into this topic are expected to be gained from simulations of neural systems, as these allow the simultaneous study of the multiple spatial and temporal scales that are involved in the problem. In particular, synaptic plasticity can be studied during the whole learning process, i.e., on a time scale of minutes to hours and across multiple brain areas. Implementing neuromodulated plasticity in large-scale network simulations where the neuromodulatory signal is dynamically generated by the network itself is challenging, because the network structure is commonly defined purely by the connectivity graph without explicit reference to the embedding of the nodes in physical space. Furthermore, the simulation of networks with realistic connectivity entails the use of distributed computing. A neuromodulated synapse must therefore be informed in an efficient way about the neuromodulatory signal, which is typically generated by a population of neurons located on different machines than either the pre- or post-synaptic neuron. Here, we develop a general framework to solve the problem of implementing neuromodulated plasticity in a time-driven distributed simulation, without reference to a particular implementation language, neuromodulator, or neuromodulated plasticity mechanism. We implement our framework in the simulator NEST and demonstrate excellent scaling up to 1024 processors for simulations of a recurrent network incorporating neuromodulated spike-timing dependent plasticity. PMID:21151370
NASA Astrophysics Data System (ADS)
Kaur, K.; Laanearu, J.; Annus, I.
2017-10-01
Numerical experiments are carried out for qualitative and quantitative interpretation of the multi-phase flow processes associated with malfunctioning of the Tallinn storm-water system during rain storms. The investigations focus on the single-line inverted siphon, which is used as an under-road connection of pipes in the storm-water system under interest. A multi-phase flow solver of the Computational Fluid Dynamics software OpenFOAM is used to simulate the three-phase flow dynamics in the hydraulic system. The CFD simulations are performed with different inflow rates under the same initial conditions. The computational results are compared for two cases: (1) the design flow rate and (2) a larger flow rate, for emptying the initially filled inverted siphon of a slurry fluid. The larger flow-rate situations are of particular interest for detecting possible flooding. In this regard, it is anticipated that the CFD solutions provide important insight into the functioning of the inverted siphon under restricted water-flow conditions in the simultaneous presence of air and slurry fluid.
Systems-on-chip approach for real-time simulation of wheel-rail contact laws
NASA Astrophysics Data System (ADS)
Mei, T. X.; Zhou, Y. J.
2013-04-01
This paper presents the development of a systems-on-chip approach to speed up the simulation of wheel-rail contact laws, which can be used to reduce the requirement for high-performance computers and to enable real-time simulation for hardware-in-the-loop experimental studies of the latest vehicle dynamics and control technologies. The wheel-rail contact laws are implemented using a field programmable gate array (FPGA) device with a design that substantially outperforms modern general-purpose PC platforms or fixed-architecture digital signal processor devices in terms of processing time, configuration flexibility and cost. In order to utilise the FPGA's parallel-processing capability, the operations in the contact-law algorithms are arranged in a parallel manner and multiple contact patches are tackled simultaneously in the design. The interface between the FPGA device and the host PC is achieved by using a high-throughput, low-latency Ethernet link. The development is based on the FASTSIM algorithm, although the design can be adapted and expanded for even more computationally demanding tasks.
Wafer hotspot prevention using etch aware OPC correction
NASA Astrophysics Data System (ADS)
Hamouda, Ayman; Power, Dave; Salama, Mohamed; Chen, Ao
2016-03-01
As technology development advances into deep-sub-wavelength nodes, multiple patterning is becoming more essential to achieving the technology shrink requirements. Recently, simultaneous correction of multiple mask patterns has been proposed in Optical Proximity Correction (OPC) technology to enable multiple-patterning awareness during OPC correction. This is essential to prevent inter-layer hot-spots during the final pattern transfer. In the state-of-the-art literature, multi-layer awareness is achieved using simultaneous resist-contour simulations to predict and correct hot-spots during mask generation. However, this approach assumes a uniform etch shrink response for all patterns independent of their proximity, which is not sufficient for full prevention of inter-exposure hot-spots, for example different color space violations post etch or via coverage/enclosure post etch. In this paper, we explain the need to include the etch component during multiple-patterning OPC. We also introduce a novel approach for etch-aware simultaneous multiple-patterning OPC, in which we calibrate and verify a lumped model that includes the combined resist and etch responses. Adding this extra simulation condition during OPC is suitable for full-chip processing from a computational intensity point of view. Using this model during OPC to predict and correct inter-exposure hot-spots is similar to previously proposed multiple-patterning OPC, yet our proposed approach more accurately corrects post-etch defects as well.
Load Balancing Strategies for Multiphase Flows on Structured Grids
NASA Astrophysics Data System (ADS)
Olshefski, Kristopher; Owkes, Mark
2017-11-01
The computation time required to perform large simulations of complex systems is currently one of the leading bottlenecks of computational research. Parallelization allows multiple processing cores to perform calculations simultaneously and reduces computation times. However, load imbalances between processors waste computing resources as processors wait for others to complete imbalanced tasks. In multiphase flows, these imbalances arise from the additional computational effort required at the gas-liquid interface. However, many current load balancing schemes are designed only for unstructured-grid applications. The purpose of this research is to develop a load balancing strategy while maintaining the simplicity of a structured grid. Several approaches are investigated, including brute-force oversubscription, node oversubscription through Message Passing Interface (MPI) commands, and shared-memory load balancing using OpenMP. Each of these strategies is tested with a simple one-dimensional model prior to implementation in the three-dimensional NGA code. Current results show load balancing will reduce computational time by at least 30%.
NASA Astrophysics Data System (ADS)
Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.
2004-07-01
The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions meeting given constraints for the tumour as well as for any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. From our experimental observation, including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not compromise this property. The smoothness properties are compared with those from other optimization algorithms, including simulated annealing and the gradient descent method. The simultaneous nature of these algorithms is ideally suited to parallel computing technologies.
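Cimmino's method projects the current iterate onto all violated half-spaces simultaneously and moves to a weighted average of the projections, which is what makes it naturally parallel. Below is a minimal sketch for a system of linear inequalities Ax <= b with equal weights; the problem data are randomly generated for illustration and are not a planning case.

```python
import numpy as np

def cimmino(A, b, n_iter=2000, lam=1.8):
    """Simultaneous projection for Ax <= b; lam in (0, 2), weights sum to 1."""
    m, n = A.shape
    w = np.full(m, 1.0 / m)
    x = np.zeros(n)
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(n_iter):
        viol = np.maximum(A @ x - b, 0.0)        # only violated constraints project
        x -= lam * (A.T @ (w * viol / row_norm2))
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 10))
b = A @ rng.uniform(0.5, 1.0, 10) + 0.1          # feasible system by construction
x = cimmino(A, b)
print("max constraint violation:", np.maximum(A @ x - b, 0).max())
```

Because every constraint contributes independently at each sweep, the per-iteration work parallelizes trivially across beamlet constraints.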
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelbard, F.; Fitzgerald, J.W.; Hoppel, W.A.
1998-07-01
We present the theoretical framework and computational methods that were used by Fitzgerald et al. [this issue (a), (b)] describing a one-dimensional sectional model to simulate multicomponent aerosol dynamics in the marine boundary layer. The concepts and limitations of modeling spatially varying multicomponent aerosols are elucidated. New numerical sectional techniques are presented for simulating multicomponent aerosol growth, settling, and eddy transport, coupled to time-dependent and spatially varying condensing vapor concentrations. Comparisons are presented with new exact solutions for settling and particle growth by simultaneous dynamic condensation of one vapor and by instantaneous equilibration with a spatially varying second vapor. © 1998 American Geophysical Union
Discontinuous Galerkin Methods for Turbulence Simulation
NASA Technical Reports Server (NTRS)
Collis, S. Scott
2002-01-01
A discontinuous Galerkin (DG) method is formulated, implemented, and tested for simulation of compressible turbulent flows. The method is applied to turbulent channel flow at low Reynolds number, where it is found to successfully predict low-order statistics with fewer degrees of freedom than traditional numerical methods. This reduction is achieved by utilizing local hp-refinement such that the computational grid is refined simultaneously in all three spatial coordinates with decreasing distance from the wall. Another advantage of DG is that Dirichlet boundary conditions can be enforced weakly through integrals of the numerical fluxes. Both for a model advection-diffusion problem and for turbulent channel flow, weak enforcement of wall boundaries is found to improve results at low resolution. Such weak boundary conditions may play a pivotal role in wall modeling for large-eddy simulation.
GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation.
Hess, Berk; Kutzner, Carsten; van der Spoel, David; Lindahl, Erik
2008-03-01
Molecular simulation is an extremely useful, but computationally very expensive tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now both achieves extremely high performance on single processors from algorithmic optimizations and hand-coded routines and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations also in parallel. To improve the scaling properties of the common particle mesh Ewald electrostatics algorithms, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct and reciprocal space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems, but it also provides high simulation performance on quite modest numbers of standard cluster nodes.
Users matter : multi-agent systems model of high performance computing cluster users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Hood, C. S.; Decision and Information Sciences
2005-01-01
High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.
The potential of multi-port optical memories in digital computing
NASA Technical Reports Server (NTRS)
Alford, C. O.; Gaylord, T. K.
1975-01-01
A high-capacity memory with a relatively high data transfer rate and multi-port simultaneous access capability may serve as the basis for new computer architectures. The implementation of a multi-port optical memory is discussed. Several computer structures are presented that might profitably use such a memory. These structures include (1) a simultaneous record access system, (2) a simultaneously shared memory computer system, and (3) a parallel digital processing structure.
Phase transformations at interfaces: Observations from atomistic modeling
Frolov, T.; Asta, M.; Mishin, Y.
2016-10-01
Here, we review the recent progress in theoretical understanding and atomistic computer simulations of phase transformations in materials interfaces, focusing on grain boundaries (GBs) in metallic systems. Recently developed simulation approaches enable the search and structural characterization of GB phases in single-component metals and binary alloys, calculation of thermodynamic properties of individual GB phases, and modeling of the effect of the GB phase transformations on GB kinetics. Atomistic simulations demonstrate that the GB transformations can be induced by varying the temperature, loading the GB with point defects, or varying the amount of solute segregation. The atomic-level understanding obtained from such simulations can provide input for further development of thermodynamics theories and continuous models of interface phase transformations while simultaneously serving as a testing ground for validation of theories and models. They can also help interpret and guide experimental work in this field.
A qualitative and quantitative assessment for a bone marrow harvest simulator.
Machado, Liliane S; Moraes, Ronei M
2009-01-01
Several approaches have been proposed to perform assessment in training simulators based on virtual reality. There are two kinds of assessment methods: offline and online. The main requirements for online training assessment methodologies applied to virtual reality systems are low computational complexity and high accuracy. Several approaches for general cases that satisfy these requirements can be found in the literature. A drawback of those approaches is their unsatisfactory handling of specific cases, as in some medical procedures, where both quantitative and qualitative information are available to perform the assessment. In this paper, we present an approach to online training assessment based on a Modified Naive Bayes method that can handle qualitative and quantitative variables simultaneously. A specific medical case was simulated in a bone marrow harvest simulator. The results obtained were satisfactory and demonstrated the applicability of the method.
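A minimal sketch of a naive Bayes assessor that mixes quantitative (Gaussian) and qualitative (categorical) variables, in the spirit of the Modified Naive Bayes described above, follows; the training data, features, and classes are invented for illustration.

```python
import numpy as np

# Training data: quantitative (force [N], angle [deg]) plus a qualitative grip label.
Xq = np.array([[2.1, 30.], [2.3, 28.], [2.0, 33.], [4.8, 55.], [5.1, 60.], [4.5, 52.]])
Xc = np.array(["firm", "firm", "firm", "loose", "loose", "firm"])
y  = np.array([0, 0, 0, 1, 1, 1])          # 0 = correct execution, 1 = incorrect

def fit(Xq, Xc, y):
    model = {}
    for c in np.unique(y):
        q, cat = Xq[y == c], Xc[y == c]
        model[c] = {
            "prior": np.mean(y == c),
            "mu": q.mean(axis=0), "sig": q.std(axis=0) + 1e-6,
            "cat": {v: (np.sum(cat == v) + 1) / (len(cat) + 2)  # Laplace smoothing
                    for v in np.unique(Xc)},
        }
    return model

def log_posterior(model, xq, xc):
    out = {}
    for c, m in model.items():
        lp = np.log(m["prior"])
        lp += -0.5 * np.sum(((xq - m["mu"]) / m["sig"]) ** 2
                            + np.log(2 * np.pi * m["sig"] ** 2))   # Gaussian features
        lp += np.log(m["cat"].get(xc, 1e-6))                       # categorical feature
        out[c] = lp
    return out

model = fit(Xq, Xc, y)
print(log_posterior(model, np.array([2.2, 31.0]), "firm"))   # favors class 0
```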
Finite Element Analysis of Single Wheat Mechanical Response to Wind and Rain Loads
NASA Astrophysics Data System (ADS)
Liang, Li; Guo, Yuming
One variety of wheat in the breeding process was chosen to determine its morphological traits and biomechanical properties. ANSYS was used to build a mechanical model of the wheat plant under wind load, and the dynamic response of the plant to wind load was simulated. The maximum von Mises stress was obtained using ANSYS. The changing stress and displacement of each node and finite element during the simulation can be output as displacement and stress contour plots (nephograms). The load support capability can thus be evaluated to predict wheat lodging. It is concluded that computer simulation technology has unique advantages of convenience and efficiency in simulating the mechanical response of wheat stalks under wind and rain loads. In particular, various load types can be applied to the model while the deformation process is observed simultaneously.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Reaserch Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132
Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulations on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulations on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode and on two GPUs with 384 and 2304 cores. In the GPU simulations, each core computes one photon, so a large number of photons are computed simultaneously. Results show that the simulation times on the GPUs were significantly shorter than on the CPU. Simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained from 10^8 histories upward and photon energies from 60 keV to 90 keV. Statistical analysis shows that the quality of the GPU and CPU images is essentially the same.
Predictive Control of Networked Multiagent Systems via Cloud Computing.
Liu, Guo-Ping
2017-01-18
This paper studies the design and analysis of networked multiagent predictive control systems via cloud computing. A cloud predictive control scheme for networked multiagent systems (NMASs) is proposed to achieve consensus and stability simultaneously and to compensate for network delays actively. The design of the cloud predictive controller for NMASs is detailed. The analysis of the cloud predictive control scheme gives the necessary and sufficient conditions of stability and consensus of closed-loop networked multiagent control systems. The proposed scheme is verified to characterize the dynamical behavior and control performance of NMASs through simulations. The outcome provides a foundation for the development of cooperative and coordinative control of NMASs and its applications.
Simulation of polymer translocation through protein channels
Muthukumar, M.; Kong, C. Y.
2006-01-01
A modeling algorithm is presented to compute simultaneously polymer conformations and ionic current, as single polymer molecules undergo translocation through protein channels. The method is based on a combination of Langevin dynamics for coarse-grained models of polymers and the Poisson–Nernst–Planck formalism for ionic current. For the illustrative example of ssDNA passing through the α-hemolysin pore, vivid details of conformational fluctuations of the polymer inside the vestibule and β-barrel compartments of the protein pore, and their consequent effects on the translocation time and extent of blocked ionic current are presented. In addition to yielding insights into several experimentally reported puzzles, our simulations offer experimental strategies to sequence polymers more efficiently. PMID:16567657
ERIC Educational Resources Information Center
Coleman, Mari Beth; Cherry, Rebecca A.; Moore, Tara C.; Yujeong, Park; Cihak, David F.
2015-01-01
The purpose of this study was to compare the effects of teacher-directed simultaneous prompting to computer-assisted simultaneous prompting for teaching sight words to 3 elementary school students with intellectual disability. Activities in the computer-assisted condition were designed with Intellitools Classroom Suite software whereas traditional…
NASA Astrophysics Data System (ADS)
Hoover, Wm. G.; Hoover, Carol G.
2012-02-01
We compare the Gram-Schmidt and covariant phase-space-basis-vector descriptions for three time-reversible harmonic oscillator problems, in two, three, and four phase-space dimensions respectively. The two-dimensional problem can be solved analytically. The three-dimensional and four-dimensional problems studied here are simultaneously chaotic, time-reversible, and dissipative. Our treatment is intended to be pedagogical, for use in an updated version of our book on Time Reversibility, Computer Simulation, and Chaos. Comments are very welcome.
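The Gram-Schmidt phase-space-basis procedure named above is, in practice, repeated QR re-orthonormalization of tangent vectors; the sketch below applies it to the Hénon map as a stand-in system, not to the oscillator problems of the paper.

```python
import numpy as np

# Hénon map as a stand-in dynamical system with a known Lyapunov spectrum.
a, b = 1.4, 0.3
f = lambda x: np.array([1 - a * x[0]**2 + x[1], b * x[0]])
jac = lambda x: np.array([[-2 * a * x[0], 1.0], [b, 0.0]])

x = np.array([0.1, 0.1])
Q = np.eye(2)                                # tangent-space basis vectors
lyap = np.zeros(2)
n = 20000
for _ in range(n):
    Q = jac(x) @ Q                           # evolve the basis along the orbit
    Q, R = np.linalg.qr(Q)                   # Gram-Schmidt re-orthonormalization
    lyap += np.log(np.abs(np.diag(R)))       # accumulate stretching factors
    x = f(x)
print("Lyapunov exponents:", lyap / n)       # ~ (+0.42, -1.62) for the Hénon map
```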
Permanent bending and alignment of ZnO nanowires.
Borschel, Christian; Spindler, Susann; Lerose, Damiana; Bochmann, Arne; Christiansen, Silke H; Nietzsche, Sandor; Oertel, Michael; Ronning, Carsten
2011-05-06
Ion beams can be used to permanently bend and re-align nanowires after growth. We have irradiated ZnO nanowires with energetic ions, achieving bending and alignment in different directions. Not only the bending of single nanowires is studied in detail, but also the simultaneous alignment of large ensembles of ZnO nanowires. Computer simulations reveal how the bending is initiated by ion beam induced damage. Detailed structural characterization identifies dislocations to relax stresses and make the bending and alignment permanent, even surviving annealing procedures.
Applications of complex systems theory in nursing education, research, and practice.
Clancy, Thomas R; Effken, Judith A; Pesut, Daniel
2008-01-01
The clinical and administrative processes in today's healthcare environment are becoming increasingly complex. Multiple providers, new technology, competition, and the growing ubiquity of information all contribute to the notion of health care as a complex system. A complex system (CS) is characterized by a highly connected network of entities (e.g., physical objects, people, or groups of people) from which higher-order behavior emerges. Research in the transdisciplinary field of CS has focused on the use of computational modeling and simulation as a methodology for analyzing CS behavior. The creation of virtual worlds through computer simulation allows researchers to analyze multiple variables simultaneously and begin to understand behaviors that are common regardless of the discipline. The application of CS principles, mediated through computer simulation, informs nursing practice of the benefits and drawbacks of new procedures, protocols, and practices before having to actually implement them. The inclusion of new computational tools and their applications in nursing education is also gaining attention. For example, education in CSs and applied computational methods has been endorsed by The Institute of Medicine, the American Organization of Nurse Executives, and the American Association of Colleges of Nursing as essential training for nurse leaders. The purpose of this article is to review the current research literature regarding CS science within the context of expert practice and its implications for the education of nurse leaders. The article focuses on 3 broad areas: CS defined, a literature review with exemplars from CS research, and applications of CS theory in nursing leadership education. The article also highlights the key role nursing informaticists play in integrating emerging computational tools into the analysis of complex nursing systems.
NASA Astrophysics Data System (ADS)
Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.
2017-12-01
Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computation time requirements. In order to facilitate global gridded crop modeling with various scenarios (i.e., different crops, management schedules, fertilizer rates, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. A local desktop with 14 cores (28 threads) was used to test the framework on Iringa, Tanzania, which comprises 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were also used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file, the user-defined number of CPU threads divides the EPIC simulations into jobs. Using the EPIC input data formatters, the raw database is formatted into EPIC input data, and the formatted data moves into the EPIC simulation jobs. Then 28 EPIC jobs run simultaneously, and only the result files of interest are parsed and passed to the output analyzers. We applied various scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a job list for distributed parallel computing. After all simulations are completed, parallelized output analyzers process all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data. For example, serial processing for the Iringa test case would require 113 hours, while the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
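The dispatch pattern described, a fixed pool of workers each taking the next EPIC scenario job, can be sketched with Python's standard multiprocessing module; run_epic and its fake yield response are hypothetical stand-ins for the input-formatting, EPIC-execution, and output-parsing steps.

```python
from multiprocessing import Pool
from itertools import product

def run_epic(job):
    """Stand-in for formatting inputs, invoking EPIC, and parsing its outputs."""
    cell_id, slope, fert = job
    yield_t_ha = 0.05 * fert - 0.3 * slope + 2.0     # fake yield response, demo only
    return cell_id, slope, fert, yield_t_ha

if __name__ == "__main__":
    slopes = range(7)                                # 7 slope classes
    ferts = range(0, 240, 10)                        # 24 fertilizer levels
    jobs = list(product(range(100), slopes, ferts))  # 100 toy grid cells
    with Pool(processes=28) as pool:                 # 28 threads, as in the paper
        results = pool.map(run_epic, jobs, chunksize=64)
    print(len(results), "scenario runs completed")
```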
NASA Astrophysics Data System (ADS)
Lin, Tsungpo
Performance engineers face a major challenge in modeling and simulating after-market power systems because of system degradation and measurement errors. Currently, most of the power generation industry uses deterministic data matching to calibrate models and cascade system degradation, which introduces significant calibration uncertainty and, with it, risk in providing performance guarantees. In this research work, maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new-and-clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares based data reconciliation and the GED technique based on hypothesis testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using response surface equations (RSE) and system/process decomposition are incorporated into the simultaneous SDRMC scheme. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees that arise from uncertainties in performance simulation.
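The core of the simultaneous scheme, reconciling measurements and calibrating model parameters in one weighted least-squares problem solved by Levenberg-Marquardt, can be sketched as follows. The toy model, measurements, and standard deviations are invented for illustration and are not the combined-cycle plant model of the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

def model(u, theta):
    """Toy performance model: output as a function of input u and parameters theta."""
    return theta[0] * u + theta[1] * u**2

u_meas = np.array([1.0, 2.0, 3.0, 4.0])             # measured inputs
y_meas = np.array([2.2, 5.9, 11.4, 18.3])           # measured outputs
sig_u, sig_y = 0.05, 0.2                            # measurement standard deviations

def residuals(z):
    u_hat, theta = z[:4], z[4:]
    return np.concatenate([
        (u_hat - u_meas) / sig_u,                   # reconcile the measured inputs
        (model(u_hat, theta) - y_meas) / sig_y,     # calibrate the model to outputs
    ])

z0 = np.concatenate([u_meas, [1.0, 1.0]])           # start from raw data, unit params
sol = least_squares(residuals, z0, method="lm")     # Levenberg-Marquardt optimizer
print("reconciled inputs:", sol.x[:4].round(3), "| calibrated theta:", sol.x[4:].round(3))
```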
Simulation of Attitude and Trajectory Dynamics and Control of Multiple Spacecraft
NASA Technical Reports Server (NTRS)
Stoneking, Eric T.
2009-01-01
Agora software is a simulation of spacecraft attitude and orbit dynamics. It supports spacecraft models composed of multiple rigid bodies or flexible structural models. Agora simulates multiple spacecraft simultaneously, supporting rendezvous, proximity operations, and precision formation flying studies. The Agora environment includes ephemerides for all planets and major moons in the solar system, supporting design studies for deep space as well as geocentric missions. The environment also contains standard models for gravity, atmospheric density, and magnetic fields. Disturbance force and torque models include aerodynamic, gravity-gradient, solar radiation pressure, and third-body gravitation. In addition to the dynamic and environmental models, Agora supports geometrical visualization through an OpenGL interface. Prototype models are provided for common sensors, actuators, and control laws. A clean interface accommodates linking in actual flight code in place of the prototype control laws. The same simulation may be used for rapid feasibility studies, and then used for flight software validation as the design matures. Agora is open-source and portable across computing platforms, making it customizable and extensible. It is written to support the entire GNC (guidance, navigation, and control) design cycle, from rapid prototyping and design analysis, to high-fidelity flight code verification. As a top-down design, Agora is intended to accommodate a large range of missions, anywhere in the solar system. Both two-body and three-body flight regimes are supported, as well as seamless transition between them. Multiple spacecraft may be simultaneously simulated, enabling simulation of rendezvous scenarios, as well as formation flying. Built-in reference frames and orbit perturbation dynamics provide accurate modeling of precision formation control.
Multibody dynamic simulation of knee contact mechanics
Bei, Yanhong; Fregly, Benjamin J.
2006-01-01
Multibody dynamic musculoskeletal models capable of predicting muscle forces and joint contact pressures simultaneously would be valuable for studying clinical issues related to knee joint degeneration and restoration. Current three-dimensional multibody knee models are either quasi-static with deformable contact or dynamic with rigid contact. This study proposes a computationally efficient methodology for combining multibody dynamic simulation methods with a deformable contact knee model. The methodology requires preparation of the articular surface geometry, development of efficient methods to calculate distances between contact surfaces, implementation of an efficient contact solver that accounts for the unique characteristics of human joints, and specification of an application programming interface for integration with any multibody dynamic simulation environment. The current implementation accommodates natural or artificial tibiofemoral joint models, small or large strain contact models, and linear or nonlinear material models. Applications are presented for static analysis (via dynamic simulation) of a natural knee model created from MRI and CT data and dynamic simulation of an artificial knee model produced from manufacturer’s CAD data. Small and large strain natural knee static analyses required 1 min of CPU time and predicted similar contact conditions except for peak pressure, which was higher for the large strain model. Linear and nonlinear artificial knee dynamic simulations required 10 min of CPU time and predicted similar contact force and torque but different contact pressures, which were lower for the nonlinear model due to increased contact area. This methodology provides an important step toward the realization of dynamic musculoskeletal models that can predict in vivo knee joint motion and loading simultaneously. PMID:15564115
Toward Interactive Scenario Analysis and Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gayle, Thomas R.; Summers, Kenneth Lee; Jungels, John
2015-01-01
As Modeling and Simulation (M&S) tools have matured, their applicability and importance have increased across many national security challenges. In particular, they provide a way to test how something may behave without the need to do real-world testing. However, current and future changes across several factors including capabilities, policy, and funding are driving a need for rapid response or evaluation in ways that many M&S tools cannot address. Issues around large data, computational requirements, delivery mechanisms, and analyst involvement already exist and pose significant challenges. Furthermore, rising expectations, rising input complexity, and increasing depth of analysis will only increase the difficulty of these challenges. In this study we examine whether innovations in M&S software coupled with advances in "cloud" computing and "big-data" methodologies can overcome many of these challenges. In particular, we propose a simple, horizontally-scalable distributed computing environment that could provide the foundation (i.e. "cloud") for next-generation M&S-based applications based on the notion of "parallel multi-simulation". In our context, the goal of parallel multi-simulation is to consider as many simultaneous paths of execution as possible. Therefore, with sufficient resources, the complexity is dominated by the cost of single scenario runs as opposed to the number of runs required. We show the feasibility of this architecture through a stable prototype implementation coupled with the Umbra Simulation Framework [6]. Finally, we highlight the utility through multiple novel analysis tools and by showing the performance improvement compared to existing tools.
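A minimal sketch of the parallel multi-simulation idea, under our own assumptions rather than the Umbra architecture: fan independent scenario runs out across worker processes so that, with enough resources, wall time approaches the cost of a single run.

```python
# Toy "parallel multi-simulation": evaluate many scenario variants concurrently.
# The scenario model (a seeded random walk) is purely illustrative.
from multiprocessing import Pool
import random

def run_scenario(seed):
    """Stand-in for one full simulation run, parameterized by a scenario seed."""
    rng = random.Random(seed)
    state = 0.0
    for _ in range(100_000):
        state += rng.uniform(-1.0, 1.0)
    return seed, state

if __name__ == "__main__":
    scenarios = range(64)                    # 64 simultaneous paths of execution
    with Pool() as pool:
        results = pool.map(run_scenario, scenarios)
    best = max(results, key=lambda r: r[1])
    print("scenario with largest final state:", best)
```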
CHOLLA: A New Massively Parallel Hydrodynamics Code for Astrophysical Simulation
NASA Astrophysics Data System (ADS)
Schneider, Evan E.; Robertson, Brant E.
2015-04-01
We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256³) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.
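Cholla itself combines CTU, PPM, and GPU kernels; as a far simpler illustration of the Riemann-solver-based finite-volume update at the heart of such codes, here is a first-order 1D Euler solver with an HLL flux on the Sod shock tube. Resolution, CFL number, and boundary handling are our choices.

```python
# First-order finite-volume 1D Euler solver with an HLL Riemann solver (sketch).
import numpy as np

GAMMA = 1.4

def primitives(U):
    rho = U[0]; v = U[1] / rho
    p = (GAMMA - 1.0) * (U[2] - 0.5 * rho * v**2)
    return rho, v, p

def flux(U):
    rho, v, p = primitives(U)
    return np.array([rho * v, rho * v**2 + p, (U[2] + p) * v])

def hll_flux(UL, UR):
    rL, vL, pL = primitives(UL); rR, vR, pR = primitives(UR)
    cL = np.sqrt(GAMMA * pL / rL); cR = np.sqrt(GAMMA * pR / rR)
    sL = min(vL - cL, vR - cR)              # simple wave-speed estimates
    sR = max(vL + cL, vR + cR)
    if sL >= 0.0: return flux(UL)
    if sR <= 0.0: return flux(UR)
    return (sR * flux(UL) - sL * flux(UR) + sL * sR * (UR - UL)) / (sR - sL)

# Sod shock tube initial condition on N cells
N, dx = 200, 1.0 / 200
U = np.zeros((3, N))
U[0, :N//2], U[0, N//2:] = 1.0, 0.125                 # density
p0 = np.where(np.arange(N) < N//2, 1.0, 0.1)          # pressure
U[2] = p0 / (GAMMA - 1.0)                             # total energy (zero velocity)

t, t_end = 0.0, 0.2
while t < t_end:
    rho, v, p = primitives(U)
    dt = 0.4 * dx / np.max(np.abs(v) + np.sqrt(GAMMA * p / rho))  # CFL condition
    F = np.array([hll_flux(U[:, i], U[:, i+1]) for i in range(N - 1)]).T
    U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])    # conservative update
    t += dt
print("density profile sample:", U[0, ::40].round(3))
```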
Bühren, Jens; Yoon, Geunyoung; MacRae, Scott; Huxlin, Krystel
2010-01-01
PURPOSE To simulate the simultaneous contribution of optical zone decentration and pupil dilation on retinal image quality using wavefront error data from a myopic photorefractive keratectomy (PRK) cat model. METHODS Wavefront error differences were obtained from five cat eyes 19±7 weeks (range: 12 to 24 weeks) after spherical myopic PRK for −6.00 diopters (D) (three eyes) and −10.00 D (two eyes). A computer model was used to simulate decentration of a 6-mm sub-aperture relative to the measured wavefront error difference. Changes in image quality (visual Strehl ratio based on the optical transfer function [VSOTF]) were computed for simulated decentrations from 0 to 1500 μm over pupil diameters of 3.5 to 6.0 mm in 0.5-mm steps. For each eye, a bivariate regression model was applied to calculate the simultaneous contribution of pupil dilation and decentration on the pre- to postoperative change of the log VSOTF. RESULTS Pupil diameter and decentration explained up to 95% of the variance of VSOTF change (adjusted R² = 0.95). Pupil diameter had a higher impact on VSOTF (median β = −0.88, P<.001) than decentration (median β = −0.45, P<.001). If decentration-induced lower order aberrations were corrected, the impact of decentration further decreased (β = −0.26) compared to the influence of pupil dilation (β = −0.95). CONCLUSIONS Both pupil dilation and decentration of the optical zone affected the change of retinal image quality (VSOTF) after myopic PRK, with decentration exerting a lower impact on VSOTF change. Thus, under physiological conditions pupil dilation is likely to have more effect on VSOTF change after PRK than optical zone decentration. PMID:20229950
Simultaneous quaternion estimation (QUEST) and bias determination
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm that minimizes Wahba's loss function, are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.
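For orientation, a minimal sketch of the optimization underlying QUEST-type estimators: Wahba's problem, the weighted least-squares fit of a rotation to vector observations. QUEST solves it as a quaternion eigenvalue problem; the sketch below uses the equivalent (and shorter) SVD solution instead, with synthetic star-tracker-style data of our choosing.

```python
# Wahba's problem solved via SVD (equivalent optimum to QUEST's eigenproblem).
import numpy as np

def wahba_svd(body_vecs, ref_vecs, weights):
    """Rotation R (body <- reference) minimizing 0.5*sum_i w_i ||b_i - R r_i||^2."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    U, _, Vt = np.linalg.svd(B)
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])  # proper rotation
    return U @ M @ Vt

rng = np.random.default_rng(1)
true_R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
refs = [np.array([1., 0., 0.]), np.array([0., 0., 1.])]         # catalog directions
obs = [true_R @ r + rng.normal(0, 1e-3, 3) for r in refs]       # noisy measurements
obs = [o / np.linalg.norm(o) for o in obs]
R_hat = wahba_svd(obs, refs, weights=[1.0, 1.0])
print("attitude error (deg):",
      np.degrees(np.arccos((np.trace(R_hat.T @ true_R) - 1) / 2)))
```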
Aguiar Santos, Susana; Robens, Anne; Boehm, Anna; Leonhardt, Steffen; Teichmann, Daniel
2016-01-01
A new prototype of a multi-frequency electrical impedance tomography system is presented. The system uses a field-programmable gate array as the main controller and is configured to measure at different frequencies simultaneously through a composite waveform. Both real and imaginary components of the data are computed for each frequency and sent to a personal computer over an Ethernet connection, where both time-difference and frequency-difference images are reconstructed and visualized. The system has been tested for both time-difference and frequency-difference imaging for diverse sets of frequency pairs in a resistive/capacitive test unit and in self-experiments. To our knowledge, this is the first work that shows preliminary frequency-difference images of in-vivo experiments. Results of time-difference imaging were compared with simulation results, showing that the new prototype performs well at all frequencies in the tested range of 60 kHz–960 kHz. For frequency-difference images, further development of the algorithms and an improved normalization process are required to correctly reconstruct and interpret the resulting images. PMID:27463715
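One plausible way (our assumption, not necessarily this system's firmware) to recover per-frequency real and imaginary components from a composite waveform is digital quadrature demodulation, sketched here with sample rate and frequencies chosen so each tone spans an integer number of cycles.

```python
# Quadrature (lock-in) demodulation of a composite multi-frequency signal.
import numpy as np

fs = 10_000_000                      # sample rate (Hz), assumed
t = np.arange(5000) / fs             # 0.5 ms window: integer cycles for all tones
freqs = [60_000, 240_000, 960_000]   # simultaneously injected frequencies (Hz)

# Synthetic measured voltage: each tone with its own amplitude and phase
signal = (0.8 * np.cos(2*np.pi*freqs[0]*t + 0.3)
          + 0.5 * np.cos(2*np.pi*freqs[1]*t - 1.0)
          + 0.2 * np.cos(2*np.pi*freqs[2]*t + 2.0))

for f in freqs:
    i = 2.0 * np.mean(signal * np.cos(2*np.pi*f*t))    # in-phase (real) part
    q = -2.0 * np.mean(signal * np.sin(2*np.pi*f*t))   # quadrature (imaginary) part
    print(f"{f/1e3:5.0f} kHz: amplitude={np.hypot(i, q):.3f}, "
          f"phase={np.arctan2(q, i):+.3f} rad")
```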
A Combined Eulerian-Lagrangian Data Representation for Large-Scale Applications.
Sauer, Franz; Xie, Jinrong; Ma, Kwan-Liu
2017-10-01
The Eulerian and Lagrangian reference frames each provide a unique perspective when studying and visualizing results from scientific systems. As a result, many large-scale simulations produce data in both formats, and analysis tasks that simultaneously utilize information from both representations are becoming increasingly popular. However, due to their fundamentally different nature, drawing correlations between these data formats is a computationally difficult task, especially in a large-scale setting. In this work, we present a new data representation which combines both reference frames into a joint Eulerian-Lagrangian format. By reorganizing Lagrangian information according to the Eulerian simulation grid into a "unit cell" based approach, we can provide an efficient out-of-core means of sampling, querying, and operating with both representations simultaneously. We also extend this design to generate multi-resolution subsets of the full data to suit the viewer's needs and provide a fast flow-aware trajectory construction scheme. We demonstrate the effectiveness of our method using three large-scale real world scientific datasets and provide insight into the types of performance gains that can be achieved.
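A small sketch of the "unit cell" reorganization described above, under our own naming: sort Lagrangian particles by the Eulerian cell that contains them, so any per-cell query touches one contiguous slice of particle data.

```python
# Bin Lagrangian particles by Eulerian grid cell for fast per-cell queries.
import numpy as np

def build_cell_index(positions, grid_min, cell_size, dims):
    """Return particle order sorted by flat cell id, plus per-cell offsets."""
    ijk = np.floor((positions - grid_min) / cell_size).astype(int)
    ijk = np.clip(ijk, 0, np.array(dims) - 1)
    flat = np.ravel_multi_index(ijk.T, dims)
    order = np.argsort(flat, kind="stable")            # particles grouped by cell
    counts = np.bincount(flat, minlength=np.prod(dims))
    offsets = np.concatenate([[0], np.cumsum(counts)])
    return order, offsets

rng = np.random.default_rng(0)
pos = rng.random((100_000, 3))                         # particles in the unit cube
dims = (16, 16, 16)
order, offsets = build_cell_index(pos, np.zeros(3), 1.0 / 16, dims)

cell = np.ravel_multi_index((3, 7, 9), dims)           # query one Eulerian cell
in_cell = order[offsets[cell]:offsets[cell + 1]]
print(f"cell (3,7,9) holds {in_cell.size} particles")
```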
NASA Astrophysics Data System (ADS)
Pavel, Akeed A.; Khan, Mehjabeen A.; Kirawanich, Phumin; Islam, N. E.
2008-10-01
A methodology to simulate memory structures with metal nanocrystal islands embedded as a floating gate in a high-κ dielectric material for simultaneous enhancement of programming speed and retention time is presented. The computational concept is based on a model for charge transport in nano-scaled structures presented earlier, where quantum mechanical tunneling is defined through the wave impedance that is analogous to transmission line theory. The effects of the substrate-tunnel dielectric conduction band offset and the metal work function on the tunneling current that determines the programming speed and retention time are demonstrated. Simulation results confirm that a high-κ dielectric material can increase programming current due to its lower conduction band offset with the substrate and also can be effectively integrated with suitable embedded metal nanocrystals having high work function for efficient data retention. A nano-memory cell designed with silver (Ag) nanocrystals embedded in Al2O3 has been compared with a similar structure consisting of Si nanocrystals in SiO2 to validate the concept.
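The paper's transport model uses a wave-impedance (transmission-line) formulation; as a simpler, swapped-in stand-in, the sketch below uses the WKB approximation to show the same qualitative trend: a lower conduction-band offset (barrier height) sharply increases tunneling probability. Barrier heights and thickness are illustrative assumptions, not the paper's values.

```python
# WKB tunneling through a rectangular barrier (stand-in for the paper's
# wave-impedance model): T ~ exp(-2 * kappa * d).
import numpy as np

HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J

def wkb_transmission(barrier_height_ev, thickness_nm, energy_ev=0.0):
    phi = (barrier_height_ev - energy_ev) * EV          # effective barrier (J)
    kappa = np.sqrt(2.0 * M_E * phi) / HBAR             # decay constant (1/m)
    return np.exp(-2.0 * kappa * thickness_nm * 1e-9)

# 2 nm tunnel dielectric: SiO2-like offset (~3.1 eV) vs high-k-like offset (~2.1 eV)
for name, offset in [("SiO2-like, 3.1 eV", 3.1), ("high-k-like, 2.1 eV", 2.1)]:
    print(f"{name}: T = {wkb_transmission(offset, 2.0):.3e}")
```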
Computer-Simulated Arthroscopic Knee Surgery: Effects of Distraction on Resident Performance.
Cowan, James B; Seeley, Mark A; Irwin, Todd A; Caird, Michelle S
2016-01-01
Orthopedic surgeons cite "full focus" and "distraction control" as important factors for achieving excellent outcomes. Surgical simulation is a safe and cost-effective way for residents to practice surgical skills, and it is a suitable tool to study the effects of distraction on resident surgical performance. This study investigated the effects of distraction on arthroscopic knee simulator performance among residents at various levels of experience. The authors hypothesized that environmental distractions would negatively affect performance. Twenty-five orthopedic surgery residents performed a diagnostic knee arthroscopy computer simulation according to a checklist of structures to identify and tasks to complete. Participants were evaluated on arthroscopy time, number of chondral injuries, instances of looking down at their hands, and completion of checklist items. Residents repeated this task at least 2 weeks later while simultaneously answering distracting questions. During distracted simulation, the residents had significantly fewer completed checklist items (P<.02) compared with the initial simulation. Senior residents completed the initial simulation in less time (P<.001), with fewer chondral injuries (P<.005) and fewer instances of looking down at their hands (P<.012), compared with junior residents. Senior residents also completed 97% of the diagnostic checklist, whereas junior residents completed 89% (P<.019). During distracted simulation, senior residents continued to complete tasks more quickly (P<.006) and with fewer instances of looking down at their hands (P<.042). Residents at all levels appear to be susceptible to the detrimental effects of distraction when performing arthroscopic simulation. Addressing even straightforward questions intraoperatively may affect surgeon performance. Copyright 2016, SLACK Incorporated.
Numerical Propulsion System Simulation (NPSS): An Award Winning Propulsion System Simulation Tool
NASA Technical Reports Server (NTRS)
Stauber, Laurel J.; Naiman, Cynthia G.
2002-01-01
The Numerical Propulsion System Simulation (NPSS) is a full propulsion system simulation tool used by aerospace engineers to predict and analyze the aerothermodynamic behavior of commercial jet aircraft, military applications, and space transportation. The NPSS framework was developed to support aerospace, but other applications are already leveraging the initial capabilities, such as aviation safety, ground-based power, and alternative energy conversion devices such as fuel cells. By using the framework and developing the necessary components, future applications that NPSS could support include nuclear power, water treatment, biomedicine, chemical processing, and marine propulsion. NPSS will dramatically reduce the time, effort, and expense necessary to design and test jet engines. It accomplishes that by generating sophisticated computer simulations of an aerospace object or system, thus enabling engineers to "test" various design options without having to conduct costly, time-consuming real-life tests. The ultimate goal of NPSS is to create a numerical "test cell" that enables engineers to create complete engine simulations overnight on cost-effective computing platforms. Using NPSS, engine designers will be able to analyze different parts of the engine simultaneously, perform different types of analysis simultaneously (e.g., aerodynamic and structural), and perform analysis in a more efficient and less costly manner. NPSS will cut the development time of a new engine in half, from 10 years to 5 years. And NPSS will have a similar effect on the cost of development: new jet engines will cost about a billion dollars to develop rather than two billion. NPSS is also being applied to the development of space transportation technologies, and it is expected that similar efficiencies and cost savings will result. Advancements of NPSS in fiscal year 2001 included enhancing the NPSS Developer's Kit to easily integrate external components of varying fidelities, providing the initial Visual-Based Syntax (VBS) capability, and developing additional capabilities to support space transportation. NPSS was supported under NASA's High Performance Computing and Communications Program. Through the NASA/Industry Cooperative Effort agreement, NASA Glenn and its industry and Government partners are developing NPSS. The NPSS team consists of propulsion experts and software engineers from GE Aircraft Engines, Pratt & Whitney, The Boeing Company, Honeywell, Rolls-Royce Corporation, Williams International, Teledyne Continental Motors, Arnold Engineering Development Center, Wright Patterson Air Force Base, and the NASA Glenn Research Center. Glenn is leading the way in developing NPSS--a method for solving complex design problems that's faster, better, and cheaper.
NASA Technical Reports Server (NTRS)
Ross, M. D.; Linton, S. W.; Parnas, B. R.
2000-01-01
A quasi-three-dimensional finite-volume numerical simulator was developed to study passive voltage spread in vestibular macular afferents. The method, borrowed from computational fluid dynamics, discretizes events transpiring in small volumes over time. The simulated afferent had three calyces with processes. The number of processes and synapses, and the direction and timing of synapse activation, were varied. Simultaneous synapse activation resulted in the shortest latency, while directional activation (proximal to distal and distal to proximal) yielded the most regular discharges. Color-coded visualizations showed that the simulator discretized events and demonstrated that discharge produced a distal spread of voltage from the spike initiator into the ending. The simulations indicate that directional input, morphology, and the timing of synapse activation can affect discharge properties, as does the distal spread of voltage from the spike initiator. The finite volume method has generality and can be applied to more complex neurons to explore discrete synaptic effects in four dimensions.
NASA Astrophysics Data System (ADS)
Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard
2007-06-01
The simultaneous use of the Reaction Ensemble Monte Carlo (ReMC) method and the Adaptive Erpenbeck EOS (AE-EOS) method allows us to calculate directly the thermodynamic and chemical equilibrium of a mixture on the Hugoniot curve. The ReMC method allows the chemical equilibrium of the detonation products to be reached, and the AE-EOS method constrains the system to satisfy the Hugoniot relation. Once the Crussard curve of the detonation products has been established, the CJ state properties may be calculated. An additional NPT simulation is performed at the CJ conditions in order to compute derivative thermodynamic quantities such as Cp, Cv, the Grüneisen gamma, the sound velocity, and the compressibility factor. Several explosives have been studied, including PETN, nitromethane, tetranitromethane, and hexanitroethane. In these first simulations, solid carbon is treated, where necessary, using an EOS.
EEG-fMRI Bayesian framework for neural activity estimation: a simulation study
NASA Astrophysics Data System (ADS)
Croce, Pierpaolo; Basti, Alessio; Marzetti, Laura; Zappasodi, Filippo; Del Gratta, Cosimo
2016-12-01
Objective. Due to the complementary nature of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and given the possibility of simultaneous acquisition, joint data analysis can afford a better estimation of the underlying neural activity. In this simulation study we want to show the benefit of joint EEG-fMRI neural activity estimation in a Bayesian framework. Approach. We built a dynamic Bayesian framework in order to perform joint EEG-fMRI neural activity time course estimation. The neural activity originates in a given brain area and is detected by means of both measurement techniques. We have chosen a resting state neural activity situation to address the worst case in terms of the signal-to-noise ratio. To infer information from EEG and fMRI concurrently we used a tool belonging to the sequential Monte Carlo (SMC) methods: the particle filter (PF). Main results. First, despite a high computational cost, we showed the feasibility of such an approach. Second, we obtained an improvement in neural activity reconstruction when using both EEG and fMRI measurements. Significance. The proposed simulation shows the improvements in neural activity reconstruction with simultaneous EEG-fMRI data. The application of such an approach to real data allows a better comprehension of the neural dynamics.
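As a concrete illustration of the SMC tool named above, here is a minimal bootstrap particle filter tracking a scalar latent "activity" from noisy observations. The linear-Gaussian toy model and all parameter values are ours, chosen only to show the propagate/weight/resample cycle.

```python
# Minimal bootstrap particle filter on a synthetic scalar state-space model.
import numpy as np

rng = np.random.default_rng(42)
T, N = 100, 2000                     # time steps, particles
a, q, r = 0.95, 0.1, 0.5             # state transition, process/observation noise

# Simulate a ground-truth activity trace and its noisy measurements
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t-1] + rng.normal(0, q)
y = x_true + rng.normal(0, r, T)

particles = rng.normal(0, 1, N)
estimate = np.zeros(T)
for t in range(T):
    particles = a * particles + rng.normal(0, q, N)      # propagate
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)     # likelihood weights
    w /= w.sum()
    estimate[t] = np.dot(w, particles)                   # posterior mean
    particles = rng.choice(particles, size=N, p=w)       # resample

print("RMSE of filtered estimate:", np.sqrt(np.mean((estimate - x_true) ** 2)))
```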
Normalized Temperature Contrast Processing in Infrared Flash Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2016-01-01
The paper presents further development of the normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image (pixel-intensity) contrast and normalized temperature contrast are provided, as are methods of converting image contrast to temperature contrast and vice versa. Normalized contrast processing in flash thermography is useful in the quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of the normalized temperature contrast involves a flash thermography data acquisition set-up with a high-reflectivity foil and high-emissivity tape such that the foil, the tape, and the test object are imaged simultaneously. Methods of assessing other quantitative parameters, such as the emissivity of the object, the afterglow heat flux, the reflection temperature change, and the surface temperature during flash thermography, are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating the afterglow heat flux and the reflection temperature evolution in flash thermography simulation are also discussed.
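For intuition, a hedged sketch of one common normalized-contrast definition (the defect pixel's temperature rise relative to a sound reference region's rise); this is an illustration of the general idea, not necessarily the exact formulation of the paper.

```python
# Normalized contrast from a flash-thermography image stack (one common form).
import numpy as np

def normalized_contrast(stack, pre_flash, defect_yx, ref_y, ref_x):
    """stack: (frames, H, W) post-flash surface temperatures;
    pre_flash: (H, W) image taken before the flash."""
    rise = stack - pre_flash                          # temperature rise field
    defect = rise[:, defect_yx[0], defect_yx[1]]      # rise over the defect
    ref = rise[:, ref_y, ref_x].mean(axis=(1, 2))     # mean rise, sound region
    return defect / ref - 1.0                         # zero when defect == sound

# Synthetic demo: a subsurface flaw slows the cool-down at one pixel
frames, H, W = 50, 8, 8
t = np.arange(1, frames + 1)[:, None, None]
stack = 20.0 + 5.0 / np.sqrt(t) * np.ones((frames, H, W))
stack[:, 4, 4] += 0.5 * np.exp(-t[:, 0, 0] / 20.0)    # defect signature
C = normalized_contrast(stack, np.full((H, W), 20.0), (4, 4),
                        ref_y=slice(0, 2), ref_x=slice(0, 2))
print("peak normalized contrast:", C.max())
```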
EOG-sEMG Human Interface for Communication
Tamura, Hiroki; Yan, Mingmin; Sakurai, Keiko; Tanno, Koichi
2016-01-01
The aim of this study is to present electrooculogram (EOG) and surface electromyogram (sEMG) signals that can be used as a human-computer interface. Establishing an efficient alternative channel for communication without overt speech and hand movements is important for increasing the quality of life for patients suffering from amyotrophic lateral sclerosis, muscular dystrophy, or other illnesses. In this paper, we propose an EOG-sEMG human-computer interface system for communication using both cross-channel and parallel-line channels on the face with the same electrodes. This system records EOG and sEMG signals simultaneously as a “dual-modality” for pattern recognition. Although as many as four patterns could be recognized, in consideration of the patients' condition we chose only two EOG classes (left and right motion) and two sEMG classes (left blink and right blink), which are easy to realize in simulation and monitoring tasks. From the simulation results, our system achieved four-pattern classification with an accuracy of 95.1%. PMID:27418924
NASA Astrophysics Data System (ADS)
Bao, Xiurong; Zhao, Qingchun; Yin, Hongxi; Qin, Jie
2018-05-01
In this paper, an all-optical parallel reservoir computing (RC) system with two channels for optical packet header recognition is proposed and simulated, based on a semiconductor ring laser (SRL) with the characteristic of bidirectional light paths. The parallel optical loops are built through cross-feedback of the bidirectional light paths, where each optical loop can independently recognize an injected optical packet header. Two input signals are mapped and recognized simultaneously by training the all-optical parallel reservoir, which is attributed to the nonlinear states in the laser. The recognition of optical packet headers from 4 bits to 32 bits on the two channels is implemented in simulations that optimize the system parameters, and the resulting optimal recognition error ratio is 0. Since this structure can be combined with wavelength division multiplexing (WDM) optical packet switching networks, the wavelength of each channel of optical packet headers to be recognized can be different, and a better recognition result can be obtained.
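The reservoir here is an optical ring laser, but the training scheme is the generic RC one: a fixed nonlinear dynamical system plus a linear readout fit by ridge regression. The software analogue below (an echo state network on a toy header-detection task) only illustrates that scheme; sizes, spectral radius, and the 4-bit header are our assumptions.

```python
# Echo state network as a software analogue of reservoir-computing training.
import numpy as np

rng = np.random.default_rng(0)
n_res = 100
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # spectral radius < 1

def run_reservoir(u):
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Task: flag positions where the last 4 bits match a fixed header pattern
bits = rng.integers(0, 2, 2000)
header = np.array([1, 0, 1, 1])
target = np.array([i >= 3 and np.all(bits[i-3:i+1] == header)
                   for i in range(len(bits))], dtype=float)
X = run_reservoir(bits.astype(float))
ridge = 1e-6                                          # regularized readout fit
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ target)
pred = (X @ W_out) > 0.5
print("header-detection error rate:", np.mean(pred != (target > 0.5)))
```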
NASA Astrophysics Data System (ADS)
Dodani, Sheel C.; Kiss, Gert; Cahn, Jackson K. B.; Su, Ye; Pande, Vijay S.; Arnold, Frances H.
2016-05-01
The dynamic motions of protein structural elements, particularly flexible loops, are intimately linked with diverse aspects of enzyme catalysis. Engineering of these loop regions can alter protein stability, substrate binding and even dramatically impact enzyme function. When these flexible regions are unresolvable structurally, computational reconstruction in combination with large-scale molecular dynamics simulations can be used to guide the engineering strategy. Here we present a collaborative approach that consists of both experiment and computation and led to the discovery of a single mutation in the F/G loop of the nitrating cytochrome P450 TxtE that simultaneously controls loop dynamics and completely shifts the enzyme's regioselectivity from the C4 to the C5 position of L-tryptophan. Furthermore, we find that this loop mutation is naturally present in a subset of homologous nitrating P450s and confirm that these uncharacterized enzymes exclusively produce 5-nitro-L-tryptophan, a previously unknown biosynthetic intermediate.
Computational design of high efficiency release targets for use at ISOL facilities
NASA Astrophysics Data System (ADS)
Liu, Y.; Alton, G. D.; Middleton, J. W.
1999-06-01
This report describes efforts made at the Oak Ridge National Laboratory to design high-efficiency-release targets that simultaneously incorporate the short diffusion lengths, high permeabilities, controllable temperatures, and heat removal properties required for the generation of useful radioactive ion beam (RIB) intensities for nuclear physics and astrophysics research using the isotope separation on-line (ISOL) technique. Short diffusion lengths are achieved either by using thin fibrous target materials or by coating thin layers of selected target material onto low-density carbon fibers such as reticulated vitreous carbon fiber (RVCF) or carbon-bonded-carbon-fiber (CBCF) to form highly permeable composite target matrices. Computational studies which simulate the generation and removal of primary beam deposited heat from target materials have been conducted to optimize the design of target/heat-sink systems for generating RIBs. The results derived from diffusion release-rate simulation studies for selected targets and thermal analyses of temperature distributions within a prototype target/heat-sink system subjected to primary ion beam irradiation will be presented in this report.
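A sketch of the diffusion release-rate series behind such simulations, for the idealized fiber geometry: the released fraction from an infinite cylinder of radius a with zero surface concentration is a classic Bessel-series result. The diffusivity below is an assumed illustrative value; the point is how strongly the short diffusion length (small radius) helps.

```python
# Fractional diffusion release from an infinite cylinder (classic series result).
import numpy as np
from scipy.special import jn_zeros

def fractional_release(D, a, t, n_terms=200):
    """M(t)/M_inf for an infinite cylinder; D [m^2/s], a [m], t [s]."""
    beta = jn_zeros(0, n_terms)                        # zeros of J0
    terms = (4.0 / beta**2) * np.exp(-D * beta**2 * t / a**2)
    return 1.0 - terms.sum()

D = 1e-13                                              # assumed diffusivity, m^2/s
for a in (5e-6, 50e-6):                                # 5 um vs 50 um fiber radius
    f = fractional_release(D, a, t=1.0)
    print(f"radius {a*1e6:4.0f} um: released fraction after 1 s = {f:.3f}")
```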
NASA Astrophysics Data System (ADS)
Bernstein, V.; Kolodney, E.
2017-10-01
We have recently observed, both experimentally and computationally, the phenomenon of postcollision multifragmentation in sub-keV surface collisions of a C60 projectile, namely, the delayed multiparticle breakup of a strongly impact-deformed and vibrationally excited large cluster projectile into several large fragments after leaving the surface. Molecular dynamics simulations with extensive statistics revealed a nearly simultaneous event, within a sub-psec time window. Here we study, computationally, additional essential aspects of this new delayed collisional fragmentation which were not addressed before. Specifically, we study the delayed (binary) fission channel for different impact energies, both by calculating mass distributions over all fission events and by calculating and analyzing lifetime distributions of the scattered projectile. We observe asymmetric fission resulting in a most probable fission channel, and we find an activated exponential (statistical) decay. Finally, we also calculate and discuss the fragment mass distribution in (triple) multifragmentation over different time windows, in terms of the most abundant fragments.
MODFLOW-2005 : the U.S. Geological Survey modular ground-water model--the ground-water flow process
Harbaugh, Arlen W.
2005-01-01
This report presents MODFLOW-2005, which is a new version of the finite-difference ground-water model commonly called MODFLOW. Ground-water flow is simulated using a block-centered finite-difference approach. Layers can be simulated as confined or unconfined. Flow associated with external stresses, such as wells, areal recharge, evapotranspiration, drains, and rivers, also can be simulated. The report includes detailed explanations of physical and mathematical concepts on which the model is based, an explanation of how those concepts are incorporated in the modular structure of the computer program, instructions for using the model, and details of the computer code. The modular structure consists of a MAIN Program and a series of highly independent subroutines. The subroutines are grouped into 'packages.' Each package deals with a specific feature of the hydrologic system that is to be simulated, such as flow from rivers or flow into drains, or with a specific method of solving the set of simultaneous equations resulting from the finite-difference method. Several solution methods are incorporated, including the Preconditioned Conjugate-Gradient method. The division of the program into packages permits the user to examine specific hydrologic features of the model independently. This also facilitates development of additional capabilities because new packages can be added to the program without modifying the existing packages. The input and output systems of the computer program also are designed to permit maximum flexibility. The program is designed to allow other capabilities, such as transport and optimization, to be incorporated, but this report is limited to describing the ground-water flow capability. The program is written in Fortran 90 and will run without modification on most computers that have a Fortran 90 compiler.
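To make the finite-difference idea concrete, here is a toy analogue (emphatically not MODFLOW's code): steady confined flow with uniform properties reduces to Laplace's equation for head, solved below by Jacobi iteration on a square grid with fixed heads on the left and right and no-flow top and bottom. Grid size, head values, and the solver choice are ours; MODFLOW offers stronger solvers such as the Preconditioned Conjugate-Gradient method.

```python
# Toy steady-state ground-water head solve on a block-centered grid.
import numpy as np

N = 50                                  # interior grid is N x N cells
h = np.zeros((N + 2, N + 2))
h[:, 0], h[:, -1] = 10.0, 5.0           # constant-head boundaries (meters)

for sweep in range(20_000):
    h[0, :], h[-1, :] = h[1, :], h[-2, :]       # no-flow top/bottom (mirror)
    old = h[1:-1, 1:-1].copy()
    h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                            + h[1:-1, :-2] + h[1:-1, 2:])   # 5-point stencil
    if np.max(np.abs(h[1:-1, 1:-1] - old)) < 1e-6:
        break

print(f"converged after {sweep} sweeps; mid-row heads:", h[N//2, ::10].round(2))
```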
Current status of robotic simulators in acquisition of robotic surgical skills.
Kumar, Anup; Smith, Roger; Patel, Vipul R
2015-03-01
This article provides an overview of the current status of simulator systems in the robotic surgery training curriculum, focusing on the available training simulators and their comparison, new technologies introduced in simulation with a focus on training concepts, and the existing challenges and future perspectives of simulator training in robotic surgery. The different virtual reality simulators available on the market, such as dVSS, dVT, RoSS, ProMIS and SEP, have shown face, content and construct validity in robotic skills training for novices outside the operating room. Recently, augmented reality simulators like HoST, Maestro AR and RobotiX Mentor have been introduced in robotic training, providing a more realistic operating environment and placing more emphasis on procedure-specific robotic training. Further, the Xperience Team Trainer, which provides training to the console surgeon and the bed-side assistant simultaneously, has recently been introduced to emphasize the importance of teamwork and proper coordination. Simulator training holds an important place in the current robotic training curriculum of future robotic surgeons. There is a need for more procedure-specific augmented reality simulator training that utilizes advancements in computing and graphical capabilities for new innovations in simulator technology. Further studies are required to establish its cost-benefit ratio along with concurrent and predictive validity.
NASA Astrophysics Data System (ADS)
Sundberg, Mikaela
While the distinction between theory and experiment is often used to discuss the place of simulation from a philosophical viewpoint, other distinctions are possible from a sociological perspective. Turkle (1995) distinguishes between cultures of calculation and cultures of simulation and relates these cultures to modernity and postmodernity, respectively. What can we understand about contemporary simulation practices in science by looking at them from the point of view of these two computer cultures? What new questions does such an analysis raise for further studies? On the basis of two case studies, the present paper compares and discusses simulation activities in astrophysics and meteorology. It argues that simulation practices manifest aspects of both cultures simultaneously, but in different situations. By employing the dichotomies surface/depth, play/seriousness, and extreme/reasonable to characterize and operationalize cultures of calculation and cultures of simulation as sensitizing concepts, the analysis shows how simulation code work shifts from development to use, the importance of, but also resistance toward, extensive visualization, and how simulation modelers play with extreme values yet also try to achieve reasonable results compared with observations.
NASA Technical Reports Server (NTRS)
Plante, Ianik; Cucinotta, Francis A.
2011-01-01
Radiolytic species are formed approximately 1 ps after the passage of ionizing radiation through matter. After their formation, they diffuse and chemically react with other radiolytic species and neighboring biological molecules, leading to various types of oxidative damage. Therefore, the simulation of radiation chemistry is of considerable importance for understanding how radiolytic species damage biological molecules [1]. The step-by-step simulation of chemical reactions is difficult, because the radiolytic species are distributed non-homogeneously in the medium. Consequently, computational approaches based on Green functions for diffusion-influenced reactions should be used [2]. Recently, Green functions for more complex types of reactions have been published [3-4]. We have developed exact random variate generators for these Green functions [5], which will allow us to use them in radiation chemistry codes. Moreover, simulating chemistry using the Green functions is computationally very demanding, because the probabilities of reaction between each pair of particles must be evaluated at each timestep [2]. This kind of problem is well suited to general-purpose graphics processing units (GPGPUs), which can handle a large number of similar calculations simultaneously. These new developments will allow us to include more complex reactions in chemistry codes and to improve the calculation time. This code should be of importance for linking radiation track structure simulations and DNA damage models.
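One building block of such pairwise-reaction simulations has a closed form: for a fully absorbing pair (the Smoluchowski boundary condition), the probability that two species starting at separation r0 have reacted by time t is (R/r0) erfc((r0 - R)/(2*sqrt(D*t))), with R the contact radius and D the mutual diffusion coefficient. The numerical values below are illustrative, not from the paper.

```python
# Pair reaction probability for a diffusion-influenced, fully absorbing reaction.
import numpy as np
from scipy.special import erfc

def reaction_probability(r0, R, D, t):
    """P(reacted by t) for contact radius R, mutual diffusivity D, r0 > R."""
    return (R / r0) * erfc((r0 - R) / (2.0 * np.sqrt(D * t)))

# Illustrative radical pair: R = 0.5 nm, D = 5e-9 m^2/s, initial separation 1 nm
for t in (1e-12, 1e-9, 1e-6):
    p = reaction_probability(r0=1e-9, R=0.5e-9, D=5e-9, t=t)
    print(f"t = {t:.0e} s: reaction probability = {p:.4f}")
```

As t grows, the probability approaches the geometric limit R/r0, the chance the pair ever meets rather than diffusing apart.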
Simple Queueing Model Applied to the City of Portland
NASA Astrophysics Data System (ADS)
Simon, Patrice M.; Esser, Jörg; Nagel, Kai
We use a simple traffic micro-simulation model based on queueing dynamics as introduced by Gawron [IJMPC, 9(3):393, 1998] in order to simulate traffic in Portland/Oregon. Links have a flow capacity, that is, they do not release more vehicles per second than is possible according to their capacity. This leads to queue build-up if demand exceeds capacity. Links also have a storage capacity, which means that once a link is full, vehicles that want to enter the link need to wait. This leads to queue spill-back through the network. The model is compatible with route-plan-based approaches such as TRANSIMS, where each vehicle attempts to follow its pre-computed path. Yet, both the data requirements and the computational requirements are considerably lower than for the full TRANSIMS microsimulation. Indeed, the model uses standard emme/2 network data, and runs about eight times faster than real time with more than 100 000 vehicles simultaneously in the simulation on a single Pentium-type CPU. We derive the model's fundamental diagrams and explain them. The simulation is used to simulate traffic on the emme/2 network of the Portland (Oregon) metropolitan region (20 000 links). Demand is generated by a simplified home-to-work destination assignment that generates about half a million trips for the morning peak. Route assignment is done by iterative feedback between micro-simulation and router. An iterative solution of the route assignment for the above problem can be achieved within about half a day of computing time on a desktop workstation. We compare results with field data and with results of traditional assignment runs by the Portland Metropolitan Planning Organization. Thus, with a model such as this one, it is possible to use a dynamic, activities-based approach to transportation simulation (such as in TRANSIMS) with affordable data and hardware. This should enable systematic research about the coupling of demand generation, route assignment, and micro-simulation output.
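A minimal sketch of the two capacity constraints described above, with our own class and parameter names: each link releases at most its flow capacity per step, and a full downstream link (storage capacity reached) blocks entry, producing spill-back.

```python
# Toy queueing-link traffic dynamics: flow capacity plus storage capacity.
from collections import deque

class Link:
    def __init__(self, flow_capacity, storage_capacity):
        self.flow, self.storage = flow_capacity, storage_capacity
        self.queue = deque()

    def has_space(self):
        return len(self.queue) < self.storage

def step(upstream, downstream):
    """Move up to upstream.flow vehicles, stopping when downstream is full."""
    moved = 0
    while moved < upstream.flow and upstream.queue and downstream.has_space():
        downstream.queue.append(upstream.queue.popleft())
        moved += 1
    return moved

a = Link(flow_capacity=4, storage_capacity=100)
b = Link(flow_capacity=2, storage_capacity=10)     # bottleneck link
a.queue.extend(range(60))                          # 60 vehicles queued upstream
for t in range(10):
    for _ in range(min(b.flow, len(b.queue))):     # vehicles exit the network
        b.queue.popleft()
    step(a, b)
    print(f"t={t}: link a holds {len(a.queue)}, link b holds {len(b.queue)}")
```

Once link b fills, inflow from a is throttled to b's exit rate of two vehicles per step, which is exactly the spill-back mechanism the model relies on.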
Differential modal Zernike wavefront sensor employing a computer-generated hologram: a proposal.
Mishra, Sanjay K; Bhatt, Rahul; Mohan, Devendra; Gupta, Arun Kumar; Sharma, Anurag
2009-11-20
The process of Zernike mode detection with a Shack-Hartmann wavefront sensor is computationally extensive. A holographic modal wavefront sensor has therefore evolved to process the data optically by use of the concept of equal and opposite phase bias. Recently, a multiplexed computer-generated hologram (CGH) technique was developed in which the output is in the form of bright dots that specify the presence and strength of a specific Zernike mode. We propose a wavefront sensor using the concept of phase biasing in the latter technique such that the output is a pair of bright dots for each mode to be sensed. A normalized difference signal between the intensities of the two dots is proportional to the amplitude of the sensed Zernike mode. In our method the number of holograms to be multiplexed is decreased, thereby reducing the modal cross talk significantly. We validated the proposed method through simulation studies for several cases. The simulation results demonstrate simultaneous wavefront detection of lower-order Zernike modes with a resolution better than λ/50 over the wide measurement range of ±3.5λ, with much reduced cross talk at high speed.
Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A
2009-06-01
In this paper, we propose a novel solution for arbitrary noncausal, multidimensional hidden Markov models (HMMs) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We thus extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
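For reference, here is the classical 1-D Viterbi algorithm that the paper generalizes to multiple dimensions: the most likely hidden state sequence under a causal chain model, computed in log space. The two-state toy parameters are ours.

```python
# Classical 1-D Viterbi decoding (the building block extended in the paper).
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """log_pi: (S,) initial; log_A: (S,S) transitions; log_B: (S,O) emissions."""
    S, T = log_A.shape[0], len(obs)
    delta = log_pi + log_B[:, obs[0]]
    psi = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # scores[i, j]: state i -> j
        psi[t] = scores.argmax(axis=0)           # best predecessor of each state
        delta = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta.argmax())]                 # backtrack from the best end state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Two hidden states, three observation symbols (toy model)
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
log_B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi(log_pi, log_A, log_B, obs=[0, 1, 2, 2, 1]))
```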
NASA Astrophysics Data System (ADS)
Venkatachari, Balaji Shankar; Chang, Chau-Lyan
2016-11-01
The focus of this study is scale-resolving simulations of the canonical normal-shock/isotropic-turbulence interaction using unstructured tetrahedral meshes and the space-time conservation element solution element (CESE) method. Despite decades of development in unstructured mesh methods and their potential benefits of easier mesh generation around complex geometries and mesh adaptation, direct numerical and large-eddy simulations of turbulent flows are predominantly carried out using structured hexahedral meshes. This is due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for unstructured meshes that can resolve multiple physical scales and flow discontinuities simultaneously. The CESE method, owing to its Riemann-solver-free shock-capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space, has the potential to accurately simulate turbulent flows using tetrahedral meshes. As part of the study, various regimes of the shock-turbulence interaction (wrinkled and broken-shock regimes) will be investigated, along with a study of how adaptive refinement of tetrahedral meshes benefits this problem. The research funding for this paper has been provided by the Revolutionary Computational Aerosciences (RCA) subproject under the NASA Transformative Aeronautics Concepts Program (TACP).
NASA Astrophysics Data System (ADS)
Michelon, M. F.; Antonelli, A.
2010-03-01
We have developed a methodology to study the thermodynamics of order-disorder transformations in n-component substitutional alloys that combines nonequilibrium methods, which can efficiently compute free energies, with Monte Carlo simulations in which configurational and vibrational degrees of freedom are considered simultaneously on an equal-footing basis. Furthermore, with this methodology one can easily perform simulations in the canonical and isobaric-isothermal ensembles, which allows the investigation of the bulk volume effect. We have applied this methodology to calculate the configurational and vibrational contributions to the entropy of the Ni3Al alloy as functions of temperature. The simulations show that when the volume of the system is kept constant, the vibrational entropy does not change upon the transition, while constant-pressure calculations indicate that the volume increase at the order-disorder transition causes a vibrational entropy increase of 0.08 kB/atom. This is significant when compared to the configurational entropy increase of 0.27 kB/atom. Our calculations also indicate that including vibrations reduces the order-disorder transition temperature determined solely from the configurational degrees of freedom by about 30%.
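As a toy illustration of the configurational sampling involved (far simpler than the paper's coupled configurational-vibrational scheme for Ni3Al), the sketch below runs Metropolis Monte Carlo on a 2D Ising lattice; only the acceptance rule is the point of contact, and all parameters are ours.

```python
# Metropolis Monte Carlo on a 2D Ising lattice (toy order-disorder sampling).
import numpy as np

rng = np.random.default_rng(7)
L, J, kT = 32, 1.0, 2.0
spins = rng.choice([-1, 1], size=(L, L))

def site_energy(s, i, j):
    """Energy of site (i, j) with periodic nearest-neighbor coupling."""
    return -J * s[i, j] * (s[(i+1) % L, j] + s[(i-1) % L, j]
                           + s[i, (j+1) % L] + s[i, (j-1) % L])

for sweep in range(200):
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        dE = -2.0 * site_energy(spins, i, j)      # energy change if flipped
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            spins[i, j] *= -1                     # Metropolis acceptance

print("final |magnetization| per site:", abs(spins.mean()))   # order parameter
```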
NASA Technical Reports Server (NTRS)
Hammrs, Stephan R.
2008-01-01
Virtual Satellite (VirtualSat) is a computer program that creates an environment that facilitates the development, verification, and validation of flight software for a single spacecraft or for multiple spacecraft flying in formation. In this environment, enhanced functionality and autonomy of the navigation, guidance, and control systems of a spacecraft are provided by a virtual satellite, that is, a computational model that simulates the dynamic behavior of the spacecraft. Within this environment, it is possible to execute any associated software whose development could benefit from knowledge of, and possible interaction (typically, exchange of data) with, the virtual satellite. Examples of associated software include programs for simulating spacecraft power and thermal-management systems. This environment is independent of the flight hardware that will eventually host the flight software, making it possible to develop the software at the same time as, or even before, the hardware is delivered. Optionally, by use of interfaces included in VirtualSat, real hardware can be used instead of being simulated. The flight software, coded in the C or C++ programming language, is compilable and loadable into VirtualSat without any special modifications. Thus, VirtualSat can serve as a relatively inexpensive software test-bed for the development, testing, integration, and post-launch maintenance of spacecraft flight software.
Numerical demonstration of neuromorphic computing with photonic crystal cavities.
Laporte, Floris; Katumba, Andrew; Dambre, Joni; Bienstman, Peter
2018-04-02
We propose a new design for a passive photonic reservoir computer on a silicon photonics chip which can be used in the context of optical communication applications, and study it through detailed numerical simulations. The design consists of a photonic crystal cavity with a quarter-stadium shape, which is known to foster interesting mixing dynamics. These mixing properties turn out to be very useful for memory-dependent optical signal processing tasks, such as header recognition. The proposed, ultra-compact photonic crystal cavity exhibits a memory of up to 6 bits, while simultaneously accepting bitrates in a wide region of operation. Moreover, because of the inherent low losses in a high-Q photonic crystal cavity, the proposed design is very power efficient.
Spatial distribution of nuclei in progressive nucleation: Modeling and application
NASA Astrophysics Data System (ADS)
Tomellini, Massimo
2018-04-01
Phase transformations ruled by non-simultaneous nucleation and growth do not lead to a random distribution of nuclei. Since nucleation is only allowed in the untransformed portion of space, the positions of nuclei are correlated. In this article an analytical approach is presented for computing the pair-correlation function of nuclei in progressive nucleation. This quantity is further employed for characterizing the spatial distribution of nuclei through the nearest-neighbor distribution function. The modeling is developed for nucleation in 2D space with a power growth law, and it is applied to describe electrochemical nucleation, where correlation effects are significant. Comparison with both computer simulations and experimental data lends support to the model, which gives insights into the transition from Poissonian to correlated nearest-neighbor probability density.
A VLBI variance-covariance analysis interactive computer program. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bock, Y.
1980-01-01
An interactive computer program (in FORTRAN) for the variance-covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies, and optimal design problems. The interactive mode is especially suited to these types of analyses, providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process, emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions towards developing optimal observation schedules are included.
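The core computation of such an analysis fits in a few lines: for a linearized observation model y = A x + e with weight matrix W = diag(1/sigma_i^2), the parameter covariance is (A^T W A)^{-1}, which lets one judge a candidate observing schedule without real data. The design matrix and precision below are synthetic placeholders.

```python
# Formal variance-covariance analysis of a linearized observation model.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_par = 40, 3                      # e.g., 3 baseline-vector components
A = rng.normal(size=(n_obs, n_par))       # design matrix from a candidate schedule
sigma = np.full(n_obs, 30e-12)            # assumed 30 ps delay precision
W = np.diag(1.0 / sigma**2)

cov = np.linalg.inv(A.T @ W @ A)          # parameter covariance matrix
print("formal parameter uncertainties:", np.sqrt(np.diag(cov)))
print("correlation(1,2):", cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1]))
```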
NASA Astrophysics Data System (ADS)
Stellmach, Stephan; Hansen, Ulrich
2008-05-01
Numerical simulations of the process of convection and magnetic field generation in planetary cores still fail to reach geophysically realistic control parameter values. Future progress in this field depends crucially on efficient numerical algorithms which are able to take advantage of the newest generation of parallel computers. Desirable features of simulation algorithms include (1) spectral accuracy, (2) an operation count per time step that is small and roughly proportional to the number of grid points, (3) memory requirements that scale linear with resolution, (4) an implicit treatment of all linear terms including the Coriolis force, (5) the ability to treat all kinds of common boundary conditions, and (6) reasonable efficiency on massively parallel machines with tens of thousands of processors. So far, algorithms for fully self-consistent dynamo simulations in spherical shells do not achieve all these criteria simultaneously, resulting in strong restrictions on the possible resolutions. In this paper, we demonstrate that local dynamo models in which the process of convection and magnetic field generation is only simulated for a small part of a planetary core in Cartesian geometry can achieve the above goal. We propose an algorithm that fulfills the first five of the above criteria and demonstrate that a model implementation of our method on an IBM Blue Gene/L system scales impressively well for up to O(104) processors. This allows for numerical simulations at rather extreme parameter values.
Multiple-Flat-Panel System Displays Multidimensional Data
NASA Technical Reports Server (NTRS)
Gundo, Daniel; Levit, Creon; Henze, Christopher; Sandstrom, Timothy; Ellsworth, David; Green, Bryan; Joly, Arthur
2006-01-01
The NASA Ames hyperwall is a display system designed to facilitate the visualization of sets of multivariate and multidimensional data like those generated in complex engineering and scientific computations. The hyperwall includes a 7×7 matrix of computer-driven flat-panel video display units, each presenting an image of 1,280 × 1,024 pixels. The term hyperwall reflects the fact that this system is a more capable successor to prior computer-driven multiple-flat-panel display systems known by names that include the generic term powerwall and the trade names PowerWall and Powerwall. Each of the 49 flat-panel displays is driven by a rack-mounted, dual-central-processing-unit, workstation-class personal computer equipped with a high-performance graphical-display circuit card and with a hard-disk drive having a storage capacity of 100 GB. Each such computer is a slave node in a master/slave computing/data-communication system (see Figure 1). The computer that acts as the master node is similar to the slave-node computers, except that it runs the master portion of the system software and is equipped with a keyboard and mouse for control by a human operator. The system utilizes commercially available master/slave software along with custom software that enables the human controller to interact simultaneously with any number of selected slave nodes. In a powerwall, a single rendering task is spread across multiple processors and then the multiple outputs are tiled into one seamless super-display. It must be noted that the hyperwall concept subsumes the powerwall concept in that a single scene could be rendered as a mosaic image on the hyperwall. However, the hyperwall offers a wider set of capabilities to serve a different purpose: The hyperwall concept is one of (1) simultaneously displaying multiple different but related images, and (2) providing means for composing and controlling such sets of images. In place of elaborate software or hardware crossbar switches, the hyperwall concept substitutes reliance on the human visual system for integration, synthesis, and discrimination of patterns in complex and high-dimensional data spaces represented by the multiple displayed images. The variety of multidimensional data sets that can be displayed on the hyperwall is practically unlimited. For example, Figure 2 shows a hyperwall display of surface pressures and streamlines from a computational simulation of airflow about an aerospacecraft at various Mach numbers and angles of attack. In this display, Mach numbers increase from left to right and angles of attack increase from bottom to top. That is, all images in the same column represent simulations at the same Mach number, while all images in the same row represent simulations at the same angle of attack. The same viewing transformations and the same mapping from surface pressure to colors were used in generating all the images.
Stochastic simulation of the spray formation assisted by a high pressure
NASA Astrophysics Data System (ADS)
Gorokhovski, M.; Chtab-Desportes, A.; Voloshina, I.; Askarova, A.
2010-03-01
The stochastic model of spray formation in the vicinity of the injector and in the far field has been described and assessed by comparison with measurements under Diesel-like conditions. In the proposed mesh-free approach, the 3D configuration of the continuous liquid core is simulated stochastically by an ensemble of spatial trajectories of specifically introduced stochastic particles. The parameters of the stochastic process are presumed from the physics of primary atomization. The spray formation model consists in the computation of the spatial distribution of the probability of finding the non-fragmented liquid jet in the near-to-injector region. This model is combined with the KIVA II computation of an atomizing Diesel spray in two ways. First, simultaneously with the gas-phase RANS computation, the ensemble of stochastic particles is tracked and the probability field of their positions is calculated, which is used for sampling the initial locations of primary blobs. Second, the velocity increment of the gas due to the liquid injection is computed from the mean volume fraction of the simulated liquid core. Two novelties are proposed in the secondary atomization modeling. The first is due to the unsteadiness of the injection velocity: when the injection velocity increment in time is decreasing, supplementary breakup may be induced, and therefore the critical Weber number is based on this increment. Second, a new stochastic model of the secondary atomization is proposed, in which intermittent turbulent stretching is taken into account as the main mechanism. The measurements reported by Arcoumanis et al. (time history of the mean axial centre-line droplet velocity and of the centre-line Sauter mean diameter) are compared with the computations.
A novel visual-inertial monocular SLAM
NASA Astrophysics Data System (ADS)
Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo
2018-02-01
With the development of sensors and of the computer vision research community, cameras, which are accurate, compact, well-understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features is a system that obtains motion information from image acquisition equipment and rebuilds the structure of an unknown environment. We provide an analysis of bio-inspired flights in insects, employing a novel technique based on SLAM, and combine visual and inertial measurements to achieve high accuracy and robustness. We present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach yields a more accurate quantitative simulation of insect navigation and can reach centimeter-level positioning accuracy.
Development of Human Posture Simulation Method for Assessing Posture Angles and Spinal Loads
Lu, Ming-Lun; Waters, Thomas; Werren, Dwight
2015-01-01
Video-based posture analysis employing a biomechanical model is gaining popularity for ergonomic assessments. A human posture simulation method of estimating multiple body postural angles and spinal loads from a video record was developed to expedite ergonomic assessments. The method was evaluated by a repeated-measures study design with three trunk flexion levels, two lift asymmetry levels, three viewing angles, and three trial repetitions as experimental factors. The study comprised two phases evaluating the accuracy of simulating one's own and other people's lifting postures via a proxy of a computer-generated humanoid. The mean values of the accuracy of simulating self and humanoid postures were 12° and 15°, respectively. The repeatability of the method for the same lifting condition was excellent (~2°). The least simulation error was associated with the side viewing angle. The estimated back compressive force and moment, calculated by a three-dimensional biomechanical model, exhibited underestimation in the range of 5%. The posture simulation method enables researchers to simultaneously quantify body posture angles and spinal loading variables with accuracy and precision comparable to on-screen posture-matching methods. PMID:26361435
GADEN: A 3D Gas Dispersion Simulator for Mobile Robot Olfaction in Realistic Environments.
Monroy, Javier; Hernandez-Bennets, Victor; Fan, Han; Lilienthal, Achim; Gonzalez-Jimenez, Javier
2017-06-23
This work presents a simulation framework developed under the widely used Robot Operating System (ROS) to enable the validation of robotics systems and gas sensing algorithms under realistic environments. The framework is rooted in the principles of computational fluid dynamics and filament dispersion theory, modeling wind flow and gas dispersion in 3D real-world scenarios (i.e., accounting for walls, furniture, etc.). Moreover, it integrates the simulation of different environmental sensors, such as metal oxide gas sensors, photo ionization detectors, or anemometers. We illustrate the potential and applicability of the proposed tool by presenting a simulation case in a complex and realistic office-like environment where gas leaks of different chemicals occur simultaneously. Furthermore, we accomplish quantitative and qualitative validation by comparing our simulated results against real-world data recorded inside a wind tunnel where methane was released under different wind flow profiles. Based on these results, we conclude that our simulation framework can provide a good approximation to real world measurements when advective airflows are present in the environment.
Multispectral computational ghost imaging with multiplexed illumination
NASA Astrophysics Data System (ADS)
Huang, Jian; Shi, Dongfeng
2017-07-01
Computational ghost imaging has attracted wide attention from researchers in many fields over the last two decades. Multispectral imaging as one application of computational ghost imaging possesses spatial and spectral resolving abilities, and is very useful for surveying scenes and extracting detailed information. Existing multispectral imagers mostly utilize narrow band filters or dispersive optical devices to separate light of different wavelengths, and then use multiple bucket detectors or an array detector to record them separately. Here, we propose a novel multispectral ghost imaging method that uses one single bucket detector with multiplexed illumination to produce a colored image. The multiplexed illumination patterns are produced by three binary encoded matrices (corresponding to the red, green and blue colored information, respectively) and random patterns. The results of the simulation and experiment have verified that our method can be effective in recovering the colored object. Multispectral images are produced simultaneously by one single-pixel detector, which significantly reduces the amount of data acquisition.
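The core of such a reconstruction is a correlation between the single bucket signal and the per-channel illumination patterns. Below is a minimal numpy sketch of multiplexed-illumination ghost imaging under simplifying assumptions (an independent random binary pattern per color channel, a noiseless bucket detector, a toy object); it illustrates the principle only and is not the authors' binary-matrix encoding scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C, M = 16, 16, 3, 8000   # image size, color channels, number of shots

# Hypothetical colored test object (one binary patch per channel).
obj = np.zeros((C, H, W))
obj[0, 2:6, 2:12] = 1.0    # red
obj[1, 7:12, 1:8] = 1.0    # green
obj[2, 10:14, 9:15] = 1.0  # blue

# Multiplexed illumination: an independent random binary pattern per channel.
patterns = rng.integers(0, 2, size=(M, C, H, W)).astype(float)

# Single bucket detector: one scalar per shot, summing light from all channels.
bucket = np.einsum('mchw,chw->m', patterns, obj)

# Correlation reconstruction, channel by channel; cross-channel terms average
# out because the channel patterns are statistically independent.
recon = np.einsum('m,mchw->chw', bucket - bucket.mean(), patterns) / M

for c, name in enumerate('RGB'):
    r = np.corrcoef(recon[c].ravel(), obj[c].ravel())[0, 1]
    print(f'{name} channel correlation with ground truth: {r:.2f}')
```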
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.
1995-01-01
Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. Georgia Tech has proposed the development of an Integrated Design Engineering Simulator that will merge Integrated Product and Process Development with interdisciplinary analysis techniques and state-of-the-art computational technologies. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. The current status of development is given and future directions are outlined.
Numerical Simulations of Buoyancy Effects in low Density Gas Jets
NASA Technical Reports Server (NTRS)
Satti, R. P.; Pasumarthi, K. S.; Agrawal, A. K.
2004-01-01
This paper deals with the computational analysis of buoyancy effects in the near field of an isothermal helium jet injected into a quiescent ambient air environment. The transport equations of helium mass fraction coupled with the conservation equations of mixture mass and momentum were solved using a staggered-grid finite volume method. Laminar, axisymmetric, unsteady flow conditions were considered for the analysis. An orthogonal system with non-uniform grids was used to capture the instability phenomena. Computations were performed for Earth gravity and during transition from Earth to different gravitational levels. The flow physics was described by simultaneous visualizations of velocity and concentration fields at Earth and microgravity conditions. Computed results were validated by comparison with experimental data, substantiating that the buoyancy-induced global flow oscillations present in Earth gravity are absent in microgravity. The dependence of oscillation frequency and amplitude on gravitational forcing was presented to further quantify the buoyancy effects.
New spatial diversity equalizer based on PLL
NASA Astrophysics Data System (ADS)
Rao, Wei
2011-10-01
A new spatial diversity equalizer (SDE) based on a phase-locked loop (PLL) is proposed to overcome inter-symbol interference (ISI) and phase rotations simultaneously in digital communication systems. The proposed SDE consists of an equal-gain combining technique based on the well-known constant modulus algorithm (CMA) for blind equalization, together with a PLL. Compared with a conventional SDE, the proposed SDE has not only a faster convergence rate and lower residual error but also the ability to recover the carrier-phase rotation. The efficiency of the method is demonstrated by computer simulation.
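A minimal sketch of the per-branch core of such a receiver: CMA tap adaptation followed by a first-order decision-directed PLL that tracks the carrier-phase rotation, assuming QPSK symbols. Equal-gain combining across the diversity branches and the paper's exact update rules are omitted, and the step sizes are illustrative.

```python
import numpy as np

def cma_pll(x, n_taps=11, mu_cma=1e-3, mu_pll=1e-2, R2=2.0):
    """Blind equalization (CMA) plus carrier-phase tracking (PLL) for one
    diversity branch. x: received complex baseband samples (QPSK assumed;
    R2 = E|s|^4 / E|s|^2 = 2 for unit-power-per-dimension QPSK)."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                       # center-spike initialization
    theta = 0.0                                # PLL phase estimate
    out = np.empty(len(x) - n_taps, dtype=complex)
    for k in range(len(out)):
        u = x[k:k + n_taps][::-1]              # equalizer regressor
        y = w @ u                              # equalized sample
        w -= mu_cma * y * (abs(y) ** 2 - R2) * np.conj(u)   # CMA tap update
        z = y * np.exp(-1j * theta)            # de-rotate with PLL estimate
        d = np.sign(z.real) + 1j * np.sign(z.imag)          # QPSK decision
        theta += mu_pll * np.angle(z * np.conj(d))          # phase update
        out[k] = z
    return out
```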
2011-08-01
resource management games (e.g., SimCity 2000), board game simulations (e.g., VASSAL), and abstract games (e.g., Tetris). The second purpose of the... which occur simultaneously (e.g., StarCraft). Board game: a computer game that emulates a board game (e.g., Archon). 2D Side View: a game... a mouse. Joypad: e.g., a PlayStation/Xbox controller. Accelerometer: e.g., a Wii controller. Touch...
Verification of a three-dimensional viscous flow analysis for a single stage compressor
NASA Astrophysics Data System (ADS)
Matsuoka, Akinori; Hashimoto, Keisuke; Nozaki, Osamu; Kikuchi, Kazuo; Fukuda, Masahiro; Tamura, Atsuhiro
1992-12-01
A transonic flowfield around the rotor blades of a highly loaded single-stage axial compressor was numerically analyzed by a three-dimensional compressible Navier-Stokes code using a Chakravarthy-Osher-type total variation diminishing (TVD) scheme. A stage analysis, which computes the flowfields around the inlet guide vane (IGV) and the rotor blades simultaneously, was carried out. Compared with design values and experimental data, the computed results show slight quantitative differences. However, the numerical calculation reproduces well the pressure-rise characteristics of the compressor and its flow pattern, including the strong shock surface.
FESTR: Finite-Element Spectral Transfer of Radiation spectroscopic modeling and analysis code
Hakel, Peter
2016-10-01
Here we report on the development of a new spectral postprocessor of hydrodynamic simulations of hot, dense plasmas. Based on given time histories of one-, two-, and three-dimensional spatial distributions of materials, and their local temperature and density conditions, spectroscopically-resolved signals are computed. The effects of radiation emission and absorption by the plasma on the emergent spectra are simultaneously taken into account. This program can also be used independently of hydrodynamic calculations to analyze available experimental data with the goal of inferring plasma conditions.
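The heart of such a postprocessor is the formal solution of the radiative transfer equation, marched cell by cell along a ray with emission and absorption handled together. A minimal sketch, assuming a constant source function and opacity within each cell (names, shapes, and units are illustrative, not FESTR's API):

```python
import numpy as np

def emergent_spectrum(kappa, source, ds, I0=0.0):
    """March the 1-D radiative transfer equation through a chain of cells,
    accounting for emission and absorption simultaneously.
    kappa:  (ncells, nfreq) absorption coefficient [1/cm]
    source: (ncells, nfreq) source function (emissivity/absorptivity ratio)
    ds:     (ncells,) path length through each cell [cm]
    I0:     intensity entering the first cell."""
    I = np.full(kappa.shape[1], I0, dtype=float)
    for k, s, d in zip(kappa, source, ds):
        att = np.exp(-k * d)           # transmission through this cell
        I = I * att + s * (1.0 - att)  # attenuate, then add self-emission
    return I
```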
Reconstructed Image Spatial Resolution of Multiple Coincidences Compton Imager
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2010-02-01
We study the multiple coincidences Compton imager (MCCI) which is based on a simultaneous acquisition of several photons emitted in cascade from a single nuclear decay. Theoretically, this technique should provide a major improvement in localization of a single radioactive source as compared to a standard Compton camera. In this work, we investigated the performance and limitations of MCCI using Monte Carlo computer simulations. Spatial resolutions of the reconstructed point source have been studied as a function of the MCCI parameters, including geometrical dimensions and detector characteristics such as materials, energy and spatial resolutions.
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
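The building block shared by NSGA II and the Pareto-set bookkeeping in MOCBA is the dominance test. Below is a small generic sketch of a non-dominated filter for minimization problems, e.g., over (average length of stay, resource waste cost) pairs estimated by the discrete-event simulation; it is not the paper's implementation.

```python
import numpy as np

def non_dominated(costs):
    """Boolean mask of Pareto non-dominated rows; costs has shape
    (n_solutions, n_objectives) with every objective minimized."""
    n = len(costs)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # j dominates i if it is no worse in all objectives, better in one
        dominators = np.all(costs <= costs[i], axis=1) & \
                     np.any(costs < costs[i], axis=1)
        if dominators.any():
            keep[i] = False
    return keep

# Hypothetical allocations scored on (mean stay [min], waste cost [$]):
costs = np.array([[42.0, 910.0], [45.0, 800.0], [44.0, 950.0], [41.0, 990.0]])
print(non_dominated(costs))  # [ True  True False  True]
```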
Quantum walks and wavepacket dynamics on a lattice with twisted photons.
Cardano, Filippo; Massa, Francesco; Qassim, Hammam; Karimi, Ebrahim; Slussarenko, Sergei; Paparo, Domenico; de Lisio, Corrado; Sciarrino, Fabio; Santamato, Enrico; Boyd, Robert W; Marrucci, Lorenzo
2015-03-01
The "quantum walk" has emerged recently as a paradigmatic process for the dynamic simulation of complex quantum systems, entanglement production and quantum computation. Hitherto, photonic implementations of quantum walks have mainly been based on multipath interferometric schemes in real space. We report the experimental realization of a discrete quantum walk taking place in the orbital angular momentum space of light, both for a single photon and for two simultaneous photons. In contrast to previous implementations, the whole process develops in a single light beam, with no need of interferometers; it requires optical resources scaling linearly with the number of steps; and it allows flexible control of input and output superposition states. Exploiting the latter property, we explored the system band structure in momentum space and the associated spin-orbit topological features by simulating the quantum dynamics of Gaussian wavepackets. Our demonstration introduces a novel versatile photonic platform for quantum simulations.
Zhu, Sha; Degnan, James H; Goldstien, Sharyn J; Eldon, Bjarki
2015-09-15
There has been increasing interest in coalescent models that admit multiple mergers of ancestral lineages, and in modeling hybridization and coalescence simultaneously. Hybrid-Lambda is a software package that simulates gene genealogies under multiple-merger and Kingman's coalescent processes within species networks or species trees. Hybrid-Lambda allows different coalescent processes to be specified for different populations, and allows time to be converted between generations and coalescent units by specifying a population size for each population. In addition, Hybrid-Lambda can generate simulated datasets, assuming the infinitely-many-sites mutation model, and compute the F_ST statistic. As an illustration, we apply Hybrid-Lambda to infer the time of subdivision of certain marine invertebrates under different coalescent processes. Hybrid-Lambda makes it possible to investigate biogeographic concordance among high-fecundity species exhibiting skewed offspring distributions.
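As a point of reference for what such simulators compute, here is a minimal sketch of coalescence waiting times under Kingman's coalescent for a single panmictic population. Hybrid-Lambda generalizes this to multiple-merger (Lambda) coalescents and to species networks with hybridization, which are not sketched here.

```python
import numpy as np

def kingman_coalescent_times(n, Ne=1.0, rng=None):
    """Simulate the merger times for a sample of n lineages under Kingman's
    coalescent; times are in coalescent units scaled by population size Ne."""
    if rng is None:
        rng = np.random.default_rng()
    times = []
    t = 0.0
    for k in range(n, 1, -1):
        rate = k * (k - 1) / 2 / Ne          # pairwise coalescence rate
        t += rng.exponential(1.0 / rate)     # exponential waiting time
        times.append(t)
    return np.array(times)

print(kingman_coalescent_times(10)[-1])  # TMRCA; E[TMRCA] = 2(1 - 1/n)
```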
Automatic Clustering Using Multi-objective Particle Swarm and Simulated Annealing
Abubaker, Ahmad; Baharum, Adam; Alrefaei, Mahmoud
2015-01-01
This paper puts forward a new automatic clustering algorithm based on multi-objective particle swarm optimization and simulated annealing, “MOPSOSA”. The proposed algorithm is capable of automatic clustering, which is appropriate for partitioning datasets into a suitable number of clusters. MOPSOSA combines the features of multi-objective particle swarm optimization (PSO) and multi-objective simulated annealing (MOSA). Three cluster validity indices were optimized simultaneously to establish the suitable number of clusters and the appropriate clustering for a dataset. The first cluster validity index is centred on Euclidean distance, the second on the point-symmetry distance, and the last is based on short distance. A number of algorithms have been compared with the MOPSOSA algorithm in resolving clustering problems by determining the actual number of clusters and optimal clustering. Computational experiments were carried out to study fourteen artificial and five real-life datasets. PMID:26132309
Scout: high-performance heterogeneous computing made simple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jablin, James; McCormick, Patrick; Herlihy, Maurice
2011-01-26
Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.
ERIC Educational Resources Information Center
Ozen, Arzu; Ergenekon, Yasemin; Ulke-Kurkcuoglu, Burcu
2017-01-01
The current study investigated the relation between simultaneous prompting (SP), computer-assisted instruction (CAI), and the receptive identification of target pictures (presented on laptop computer) for four preschool students with developmental disabilities. The students' acquisition of nontarget information through observational learning also…
Multidisciplinary tailoring of hot composite structures
NASA Technical Reports Server (NTRS)
Singhal, Surendra N.; Chamis, Christos C.
1993-01-01
A computational simulation procedure is described for multidisciplinary analysis and tailoring of layered multi-material hot composite engine structural components subjected to simultaneous multiple discipline-specific thermal, structural, vibration, and acoustic loads. The effect of aggressive environments is also simulated. The simulation is based on a three-dimensional finite element analysis technique in conjunction with structural mechanics codes, thermal/acoustic analysis methods, and tailoring procedures. The integrated multidisciplinary simulation procedure is general-purpose, including the coupled effects of nonlinearities in structure geometry, material, loading, and environmental complexities. The composite material behavior is assessed at all composite scales, i.e., laminate/ply/constituents (fiber/matrix), via a nonlinear hygro-thermo-mechanical material characterization model. Sample tailoring cases exhibiting nonlinear material/loading/environmental behavior of aircraft engine fan blades are presented. The various multidisciplinary loads lead to different tailored designs, even ones competing with each other, as in the case of minimum material cost versus minimum structure weight and in the case of minimum vibration frequency versus minimum acoustic noise.
IGMS: An Integrated ISO-to-Appliance Scale Grid Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hansen, Timothy M.
This paper describes the Integrated Grid Modeling System (IGMS), a novel electric power system modeling platform for integrated transmission-distribution analysis that co-simulates off-the-shelf tools on high performance computing (HPC) platforms to offer unprecedented resolution from ISO markets down to appliances and other end uses. Specifically, the system simultaneously models hundreds or thousands of distribution systems in co-simulation with detailed Independent System Operator (ISO) markets and AGC-level reserve deployment. IGMS uses a new MPI-based hierarchical co-simulation framework to connect existing sub-domain models. Our initial efforts integrate open-source tools for wholesale markets (FESTIV), bulk AC power flow (MATPOWER), and full-featured distribution systems including physics-based end-use and distributed generation models (many instances of GridLAB-D[TM]). The modular IGMS framework enables tool substitution and additions for multi-domain analyses. This paper describes the IGMS tool, characterizes its performance, and demonstrates the impacts of the coupled simulations for analyzing high-penetration solar PV and price-responsive load scenarios.
Seismic Wave Propagation on the Tablet Computer
NASA Astrophysics Data System (ADS)
Emoto, K.
2015-12-01
Tablet computers have become widely used in recent years, and their performance is improving year by year. Some have performance comparable to a personal computer of a few years ago with respect to calculation speed and memory size. Convenience and intuitive operation are the advantages of the tablet computer compared to the desktop PC. I developed an iPad application for the numerical simulation of seismic wave propagation. The numerical simulation is based on the 2D finite difference method with the staggered-grid scheme. The number of grid points is 512 x 384 = 196,608. The grid spacing is 200 m in both horizontal and vertical directions; that is, the calculation area is 102 km x 77 km. The time step is 0.01 s. In order to reduce the user's waiting time, the image of the wave field is drawn simultaneously with the calculation rather than playing a movie after the whole calculation. P and S wave energies are plotted on the screen every 20 steps (0.2 s). There is a trade-off between smooth simulation and the resolution of the wave field image. In the current setting, it takes about 30 s to calculate 10 s of wave propagation (50 image updates). The seismogram at the receiver is displayed below the wave field and updated in real time. The default medium structure consists of 3 layers. The layer boundary is defined by 10 movable points with linear interpolation. Users can intuitively change to an arbitrary boundary shape by moving the points. Users can also easily change the source and receiver positions. A favorite structure can be saved and loaded. For advanced simulation, users can introduce random velocity fluctuations whose spectrum can be changed to an arbitrary shape. By using this application, everyone can simulate seismic wave propagation without specialized knowledge of the elastic wave equation. So far, the Japanese version of the application has been released on the App Store; the English version is in preparation.
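As an illustration of the numerical core, the sketch below advances a 2-D staggered-grid finite-difference scheme on the grid described in the abstract (512 x 384 points, 200 m spacing, 0.01 s step). It solves the acoustic simplification rather than the app's elastic P/S system, and the wave speed and density are hypothetical.

```python
import numpy as np

def step(p, vx, vz, c, rho, dt, dx):
    """One staggered-grid finite-difference time step (2-D acoustic)."""
    # particle velocities live between pressure nodes
    vx -= dt / (rho * dx) * (p[:, 1:] - p[:, :-1])
    vz -= dt / (rho * dx) * (p[1:, :] - p[:-1, :])
    # pressure update from the velocity divergence (interior nodes)
    div = np.zeros_like(p)
    div[:, 1:-1] += (vx[:, 1:] - vx[:, :-1]) / dx
    div[1:-1, :] += (vz[1:, :] - vz[:-1, :]) / dx
    p -= dt * rho * c ** 2 * div
    return p, vx, vz

nz, nx, dx, dt = 384, 512, 200.0, 0.01   # grid from the abstract
c, rho = 4000.0, 2500.0                  # hypothetical medium properties
p = np.zeros((nz, nx))
vx = np.zeros((nz, nx - 1))
vz = np.zeros((nz - 1, nx))
p[nz // 2, nx // 2] = 1.0                # impulsive point source
for _ in range(1000):                    # 10 s of propagation (CFL = 0.2)
    p, vx, vz = step(p, vx, vz, c, rho, dt, dx)
```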
Indoor Multi-Sensor Acquisition System for Projects on Energy Renovation of Buildings.
Armesto, Julia; Sánchez-Villanueva, Claudio; Patiño-Cambeiro, Faustino; Patiño-Barbeito, Faustino
2016-05-28
Energy rehabilitation actions in buildings have become a great economic opportunity for the construction sector. They also constitute a strategic goal in the European Union (EU), given the energy dependence and the climate change commitments of its member states. About 75% of existing buildings in the EU were built when energy efficiency codes had not been developed. Approximately 75% to 90% of those standing buildings are expected to remain in use in 2050. Significant advances have been achieved in energy analysis, simulation tools, and computational fluid dynamics for building energy evaluation. However, the gap between predictions and real savings might still be improved. The geomatics and computer science disciplines can really help in modelling, inspection, and diagnosis procedures. This paper presents a multi-sensor acquisition system capable of automatically and simultaneously capturing three-dimensional geometric information, thermographic, optical, and panoramic images, an ambient temperature map, a relative humidity map, and a light level map. The system integrates a navigation system based on a Simultaneous Localization and Mapping (SLAM) approach that allows georeferencing every measurement to its position in the building. The described equipment optimizes the energy inspection and diagnosis steps and facilitates the energy modelling of the building.
Cartesian control of redundant robots
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Glass, K.
1989-01-01
A Cartesian-space position/force controller is presented for redundant robots. The proposed control structure partitions the control problem into a nonredundant position/force trajectory tracking problem and a redundant mapping problem between the Cartesian control input F ∈ R^m and the robot actuator torque T ∈ R^n (for redundant robots, m < n). The underdetermined nature of the F → T map is exploited so that the robot redundancy is utilized to improve the dynamic response of the robot. This dynamically optimal F → T map is implemented locally (in time) so that it is computationally efficient for on-line control; however, it is shown that the map possesses globally optimal characteristics. Additionally, it is demonstrated that the dynamically optimal F → T map can be modified so that the robot redundancy is used to simultaneously improve the dynamic response and realize any specified kinematic performance objective (e.g., manipulability maximization or obstacle avoidance). Computer simulation results are given for a four-degree-of-freedom planar redundant robot under Cartesian control, and demonstrate that position/force trajectory tracking and effective redundancy utilization can be achieved simultaneously with the proposed controller.
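The underdetermined F → T relation can be written as a particular solution plus a null-space term. The sketch below uses the dynamically consistent generalized inverse from Khatib's operational-space formulation as one concrete, textbook way to realize such a dynamically optimal map with a secondary objective; the paper's exact construction may differ.

```python
import numpy as np

def force_to_torque(J, M, F, tau0):
    """Dynamically consistent map from a Cartesian control input F (m-vector)
    to actuator torques T (n-vector, m < n). The null-space term lets the
    redundancy serve a secondary objective tau0 (e.g., a manipulability or
    obstacle-avoidance gradient) without disturbing end-effector dynamics.
    J: (m, n) task Jacobian; M: (n, n) joint-space inertia matrix."""
    Minv = np.linalg.inv(M)                    # joint-space inertia inverse
    Lam = np.linalg.inv(J @ Minv @ J.T)        # operational-space inertia
    Jbar = Minv @ J.T @ Lam                    # dynamically consistent inverse
    N = np.eye(M.shape[0]) - J.T @ Jbar.T      # dynamically consistent
    return J.T @ F + N @ tau0                  # null-space projector
```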
NASA Technical Reports Server (NTRS)
Eidson, T. M.; Erlebacher, G.
1994-01-01
While parallel computers offer significant computational performance, it is generally necessary to evaluate several programming strategies. Two programming strategies for a fairly common problem, a periodic tridiagonal solver, are developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies. The particular tridiagonal solver evaluated is used in many computational fluid dynamics simulation codes. The feature that makes this algorithm unique is that these simulation codes usually require simultaneous solutions for multiple right-hand sides (RHS) of the system of equations. Each RHS solution is independent and thus can be computed in parallel. Thus a Gaussian-elimination-type algorithm can be used in a parallel computation, and more complicated approaches such as cyclic reduction are not required. The two strategies are a transpose strategy and a distributed solver strategy. For the transpose strategy, the data is moved so that a subset of all the RHS problems is solved on each of the several processors. This usually requires significant data movement between processor memories across a network. The second strategy has the algorithm pass the data across processor boundaries in a chained manner, which usually requires significantly less data movement. An approach to accomplish this second strategy in a near-perfectly load-balanced manner is developed. In addition, an algorithm is shown to directly transform a sequential Gaussian-elimination-type algorithm into the parallel, chained, load-balanced algorithm.
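The shared core of both strategies is a Gaussian-elimination (Thomas) tridiagonal solve applied to many independent right-hand sides at once; the RHS dimension is the axis that is either transposed across processors or chained between them. A minimal serial sketch for the non-periodic case (the periodic solver in the paper additionally needs a Sherman-Morrison-style corner correction, omitted here):

```python
import numpy as np

def tridiag_solve_multi(a, b, c, R):
    """Thomas-algorithm solve of a tridiagonal system for many independent
    right-hand sides at once. a, b, c: sub-, main-, and super-diagonals
    (length n; a[0] and c[-1] unused). R: (n, nrhs) right-hand sides.
    The nrhs axis is the natural one to distribute across processors."""
    n = len(b)
    cp = np.empty(n)
    Rp = R.astype(float)
    cp[0] = c[0] / b[0]
    Rp[0] = Rp[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        Rp[i] = (Rp[i] - a[i] * Rp[i - 1]) / m   # vectorized over all RHS
    for i in range(n - 2, -1, -1):        # back substitution
        Rp[i] -= cp[i] * Rp[i + 1]
    return Rp
```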
NASA Astrophysics Data System (ADS)
Kim, Sungho
2017-06-01
Automatic target recognition (ATR) is a traditionally challenging problem in military applications because of the wide range of infrared (IR) image variations and the limited number of training images. IR variations are caused by various three-dimensional target poses, noncooperative weather conditions (fog and rain), and difficult target acquisition environments. Recently, deep convolutional neural network-based approaches for RGB images (RGB-CNN) showed breakthrough performance in computer vision problems, such as object detection and classification. The direct application of RGB-CNN to the IR ATR problem fails to work because of the IR database problems (limited database size and IR image variations). An IR variation-reduced deep CNN (IVR-CNN) to cope with these problems is presented. The problem of limited IR database size is solved by a commercial thermal simulator (OKTAL-SE). The second problem of IR variations is mitigated by the proposed shifted ramp function-based intensity transformation, which can suppress the background and enhance the target contrast simultaneously. The experimental results on the synthesized IR images generated by the thermal simulator (OKTAL-SE) validated the feasibility of IVR-CNN for military ATR applications.
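The intensity transformation can be pictured as a ramp shifted to the background level: intensities below the shift are clipped, intensities above are stretched and saturated. The exact function and parameter values are the paper's; the version below is an illustrative guess with hypothetical `shift` and `slope` knobs.

```python
import numpy as np

def shifted_ramp(img, shift, slope=1.0):
    """Illustrative shifted-ramp intensity transform: clip background
    intensities below `shift` to zero, linearly stretch intensities above it,
    and saturate at 1, suppressing background while enhancing target
    contrast. `shift` and `slope` are hypothetical tuning parameters."""
    out = slope * (img.astype(float) - shift)
    return np.clip(out, 0.0, 1.0)
```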
Advanced Signal Processing for Integrated LES-RANS Simulations: Anti-aliasing Filters
NASA Technical Reports Server (NTRS)
Schlueter, J. U.
2003-01-01
Currently, a wide variety of flow phenomena are addressed with numerical simulations. Many flow solvers are optimized to simulate a limited spectrum of flow effects effectively, such as single parts of a flow system, but are either inadequate or too expensive to be applied to a very complex problem. As an example, the flow through a gas turbine can be considered. In the compressor and the turbine section, the flow solver has to be able to handle the moving blades, model the wall turbulence, and predict the pressure and density distribution properly. This can be done by a flow solver based on the Reynolds-Averaged Navier-Stokes (RANS) approach. On the other hand, the flow in the combustion chamber is governed by large scale turbulence, chemical reactions, and the presence of fuel spray. Experience shows that these phenomena require an unsteady approach. Hence, for the combustor, the use of a Large Eddy Simulation (LES) flow solver is desirable. While many design problems of a single flow passage can be addressed by separate computations, only the simultaneous computation of all parts can guarantee the proper prediction of multi-component phenomena, such as compressor/combustor instability and combustor/turbine hot-streak migration. Therefore, a promising strategy to perform full aero-thermal simulations of gas-turbine engines is the use of a RANS flow solver for the compressor sections, an LES flow solver for the combustor, and again a RANS flow solver for the turbine section.
NASA Astrophysics Data System (ADS)
Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming
2018-04-01
Probability of detection (POD) is widely used for measuring the reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, while it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as ultrasonic NDT, the empirical information needed for POD methods can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed for cost savings of MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat bottom hole flaw in an aluminum block. The results show that the stochastic surrogates have at least two orders of magnitude faster convergence on the statistics than direct Monte Carlo sampling (MCS). Moreover, the evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.
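Of the three NIPC variants, ordinary least squares is the simplest to sketch. For a single standard-normal input, the surrogate is a Hermite-polynomial expansion fitted to a modest number of simulation runs, after which the output mean and variance follow directly from the coefficients; the function `f` below is a stand-in for the (here hypothetical) physics-based measurement model, not UTSim2.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

def pce_ols(xi, y, degree):
    """Fit a 1-D polynomial chaos expansion by ordinary least squares.
    xi: standard-normal input samples; y: simulator outputs at xi."""
    Psi = hermevander(xi, degree)              # probabilists' Hermite basis
    coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    return coef

rng = np.random.default_rng(1)
xi = rng.standard_normal(200)
f = lambda x: np.sin(x) + 0.1 * x ** 2         # stand-in for the simulator
coef = pce_ols(xi, f(xi), degree=8)

# Output statistics follow from orthogonality: E[He_j He_k] = k! * delta_jk
mean = coef[0]
var = sum(coef[k] ** 2 * factorial(k) for k in range(1, len(coef)))
print(mean, var)
```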
NASA Technical Reports Server (NTRS)
Drozda, Tomasz G.; Axdahl, Erik L.; Cabell, Karen F.
2014-01-01
With the increasing costs of physics experiments and simultaneous increase in availability and maturity of computational tools it is not surprising that computational fluid dynamics (CFD) is playing an increasingly important role, not only in post-test investigations, but also in the early stages of experimental planning. This paper describes a CFD-based effort executed in close collaboration between computational fluid dynamicists and experimentalists to develop a virtual experiment during the early planning stages of the Enhanced Injection and Mixing project at NASA Langley Research Center. This project aims to investigate supersonic combustion ramjet (scramjet) fuel injection and mixing physics, improve the understanding of underlying physical processes, and develop enhancement strategies and functional relationships relevant to flight Mach numbers greater than 8. The purpose of the virtual experiment was to provide flow field data to aid in the design of the experimental apparatus and the in-stream rake probes, to verify the nonintrusive measurements based on NO-PLIF, and to perform pre-test analysis of quantities obtainable from the experiment and CFD. The approach also allowed for the joint team to develop common data processing and analysis tools, and to test research ideas. The virtual experiment consisted of a series of Reynolds-averaged simulations (RAS). These simulations included the facility nozzle, the experimental apparatus with a baseline strut injector, and the test cabin. Pure helium and helium-air mixtures were used to determine the efficacy of different inert gases to model hydrogen injection. The results of the simulations were analyzed by computing mixing efficiency, total pressure recovery, and stream thrust potential. As the experimental effort progresses, the simulation results will be compared with the experimental data to calibrate the modeling constants present in the CFD and validate simulation fidelity. CFD will also be used to investigate different injector concepts, improve understanding of the flow structure and flow physics, and develop functional relationships. Both RAS and large eddy simulations (LES) are planned for post-test analysis of the experimental data.
Physics-based multiscale coupling for full core nuclear reactor simulation
Gaston, Derek R.; Permann, Cody J.; Peterson, John W.; ...
2015-10-01
Numerical simulation of nuclear reactors is a key technology in the quest for improvements in efficiency, safety, and reliability of both existing and future reactor designs. Historically, simulation of an entire reactor was accomplished by linking together multiple existing codes that each simulated a subset of the relevant multiphysics phenomena. Recent advances in the MOOSE (Multiphysics Object Oriented Simulation Environment) framework have enabled a new approach: multiple domain-specific applications, all built on the same software framework, are efficiently linked to create a cohesive application. This is accomplished with a flexible coupling capability that allows a variety of different data exchanges to occur simultaneously on high performance parallel computational hardware. Examples based on the KAIST-3A benchmark core, as well as a simplified Westinghouse AP-1000 configuration, demonstrate the power of this new framework for tackling, in a coupled, multiscale manner, crucial reactor phenomena such as CRUD-induced power shift and fuel shuffle.
GPU-accelerated Red Blood Cells Simulations with Transport Dissipative Particle Dynamics.
Blumers, Ansel L; Tang, Yu-Hang; Li, Zhen; Li, Xuejin; Karniadakis, George E
2017-08-01
Mesoscopic numerical simulations provide a unique approach for the quantification of the chemical influences on red blood cell functionalities. The transport dissipative particle dynamics (tDPD) method can lead to such effective multiscale simulations due to its ability to simultaneously capture mesoscopic advection, diffusion, and reaction. In this paper, we present a GPU-accelerated red blood cell simulation package based on a tDPD adaptation of our red blood cell model, which can correctly recover the cell membrane viscosity, elasticity, bending stiffness, and cross-membrane chemical transport. The package essentially processes all computational workloads in parallel by GPU, and it incorporates multi-stream scheduling and non-blocking MPI communications to improve inter-node scalability. Our code is validated for accuracy and compared against the CPU counterpart for speed. Strong scaling and weak scaling are also presented to characterize scalability. We observe a speedup of 10.1 on one GPU over all 16 cores within a single node, and a weak scaling efficiency of 91% across 256 nodes. The program enables quick-turnaround and high-throughput numerical simulations for investigating chemical-driven red blood cell phenomena and disorders.
On the Performance of Alternate Conceptual Ecohydrological Models for Streamflow Prediction
NASA Astrophysics Data System (ADS)
Naseem, Bushra; Ajami, Hoori; Cordery, Ian; Sharma, Ashish
2016-04-01
A merging of a lumped conceptual hydrological model with two conceptual dynamic vegetation models is presented to assess the performance of these models for simultaneous simulations of streamflow and leaf area index (LAI). Two conceptual dynamic vegetation models with differing representations of ecological processes are merged with a lumped conceptual hydrological model (HYMOD) to predict catchment-scale streamflow and LAI. The merged RR-LAI-I model computes relative leaf biomass based on transpiration rates, while the RR-LAI-II model computes above-ground green and dead biomass based on net primary productivity and water use efficiency in response to soil moisture dynamics. To assess the performance of these models, daily discharge and the 8-day MODIS LAI product for 27 catchments of 90-1600 km² in size located in the Murray-Darling Basin in Australia are used. Our results illustrate that when single-objective optimisation was focused on maximizing the objective function for streamflow or LAI, the other, un-calibrated predicted outcome (LAI if streamflow is the focus) was consistently compromised. Thus, single-objective optimisation cannot take into account the essence of all processes in the conceptual ecohydrological models. However, multi-objective optimisation showed great strength for streamflow and LAI predictions. Both response outputs were better simulated by RR-LAI-II than RR-LAI-I due to better representation of physical processes such as net primary productivity (NPP) in RR-LAI-II. Our results highlight that simultaneous calibration of streamflow and LAI using a multi-objective algorithm proves to be an attractive tool for improved streamflow predictions.
NASA Astrophysics Data System (ADS)
Yamamoto, Tetsuya; Takeda, Kazuki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. To further improve the BER performance, cyclic delay transmit diversity (CDTD) can be used. CDTD simultaneously transmits the same signal from different antennas after adding different cyclic delays to increase the number of equivalent propagation paths. Although a joint use of CDTD and MMSE-FDE for direct sequence code division multiple access (DS-CDMA) achieves larger frequency diversity gain, the BER performance improvement is limited by the residual inter-chip interference (ICI) after FDE. In this paper, we propose joint FDE and despreading for DS-CDMA using CDTD. Equalization and despreading are simultaneously performed in the frequency-domain to suppress the residual ICI after FDE. A theoretical conditional BER analysis is presented for the given channel condition. The BER analysis is confirmed by computer simulation.
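A minimal sketch of the receiver chain for one block: one-tap MMSE equalization in the frequency domain followed by despreading. For simplicity the despreading below happens in the time domain after the inverse FFT, whereas the paper performs equalization and despreading jointly in the frequency domain; `H` denotes the equivalent channel frequency response seen after the CDTD cyclic delays.

```python
import numpy as np

def mmse_fde(r, H, es_n0):
    """One-tap MMSE frequency-domain equalization of one received block.
    r: (Nc,) received chips; H: (Nc,) channel frequency response;
    es_n0: signal-to-noise ratio (linear)."""
    R = np.fft.fft(r)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / es_n0)   # MMSE weights
    return np.fft.ifft(W * R)

def despread(chips, code):
    """Correlate equalized chips against the spreading code (SF chips/symbol)."""
    sf = len(code)
    return chips.reshape(-1, sf) @ np.conj(code) / sf
```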
NASA Astrophysics Data System (ADS)
Fishkova, T. Ya.
2018-01-01
An optimal set of geometric and electrical parameters of a high-aperture electrostatic charged-particle spectrograph with a range of simultaneously recorded energies of E/E_min = 1-50 has been found by computer simulation, which is especially important for the energy analysis of charged particles during fast processes in various materials. The spectrograph consists of two coaxial electrodes with end faces closed by flat electrodes. The external electrode, with a conical-cylindrical form, is cut into parts with potentials that increase linearly, except for the last cylindrical part, which is electrically connected to the rear end electrode. The internal cylindrical electrode and the front end electrode are grounded. In the entire energy range, the system is sharply focused on the internal cylindrical electrode, which provides an energy resolution of no worse than 3 × 10^-3.
Hirose, Makoto; Shimomura, Kei; Suzuki, Akihiro; Burdet, Nicolas; Takahashi, Yukio
2016-05-30
The sample size must be less than the diffraction-limited focal spot size of the incident beam in single-shot coherent X-ray diffraction imaging (CXDI) based on a diffract-before-destruction scheme using X-ray free electron lasers (XFELs). This is currently a major limitation preventing its wider applications. We here propose multiple defocused CXDI, in which isolated objects are sequentially illuminated with a divergent beam larger than the objects and the coherent diffraction pattern of each object is recorded. This method can simultaneously reconstruct both the objects and the probe from the coherent X-ray diffraction patterns without any a priori knowledge. We performed a computer simulation of the proposed method and then successfully demonstrated it in a proof-of-principle experiment at SPring-8. The proposed method allows us to not only observe broad samples but also characterize focused XFEL beams.
A Method to Measure and Estimate Normalized Contrast in Infrared Flash Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2016-01-01
The paper presents further development of the normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image (pixel intensity) contrast and normalized temperature contrast are provided, as are methods of converting image contrast to temperature contrast and vice versa. Normalized contrast processing in flash thermography is useful in the quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of the normalized temperature contrast involves a flash thermography data-acquisition set-up with a high-reflectivity foil and high-emissivity tape such that the foil, the tape, and the test object are imaged simultaneously. Methods of assessing other quantitative parameters, such as the emissivity of the object, the afterglow heat flux, the reflection temperature change, and the surface temperature during flash thermography, are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating the afterglow heat flux and the reflection temperature evolution in flash thermography simulation are also discussed.
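One common form of normalized pixel-intensity contrast divides the post-flash intensity rise at the analyzed pixel by the rise at a sound (defect-free) reference region, after subtracting the pre-flash offsets. The sketch below implements that form, which may differ in detail from the paper's definition; all names are illustrative.

```python
import numpy as np

def normalized_contrast(seq, flaw_px, ref_px, pre_flash):
    """Normalized pixel-intensity contrast from a flash-thermography sequence.
    seq: (nframes, H, W) image stack; flaw_px / ref_px: (row, col) of the
    analyzed pixel and a sound reference pixel; pre_flash: index of the last
    pre-flash frame, used for offset removal."""
    flaw = seq[:, flaw_px[0], flaw_px[1]].astype(float)
    ref = seq[:, ref_px[0], ref_px[1]].astype(float)
    d_flaw = flaw - flaw[pre_flash]      # remove pre-flash offset
    d_ref = ref - ref[pre_flash]
    return d_flaw / np.where(d_ref == 0, np.nan, d_ref)
```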
ABSIM. Simulation of Absorption Systems in Flexible and Modular Form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grossman, G.
1994-06-01
The computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components. When all the equations have been established, a mathematical solver routine is employed to solve them simultaneously. Property subroutines contained in a separate database serve to provide thermodynamic properties of the working fluids. The code is user-oriented and requires a relatively simple input containing the given operating conditions and the working fluid at each state point. The user conveys to the computer an image of the cycle by specifying the different components and their interconnections. Based on this information, the program calculates the temperature, flowrate, concentration, pressure, and vapor fraction at each state point in the system and the heat duty at each unit, from which the coefficient of performance may be determined. A graphical user interface is provided to facilitate interactive input and study of the output.
NASA Astrophysics Data System (ADS)
Hawkins, L. R.; Rupp, D. E.; Li, S.; Sarah, S.; McNeall, D. J.; Mote, P.; Betts, R. A.; Wallom, D.
2017-12-01
Changing regional patterns of surface temperature, precipitation, and humidity may cause ecosystem-scale changes in vegetation, altering the distribution of trees, shrubs, and grasses. A changing vegetation distribution, in turn, alters the albedo, latent heat flux, and carbon exchanged with the atmosphere with resulting feedbacks onto the regional climate. However, a wide range of earth-system processes that affect the carbon, energy, and hydrologic cycles occur at sub grid scales in climate models and must be parameterized. The appropriate parameter values in such parameterizations are often poorly constrained, leading to uncertainty in predictions of how the ecosystem will respond to changes in forcing. To better understand the sensitivity of regional climate to parameter selection and to improve regional climate and vegetation simulations, we used a large perturbed physics ensemble and a suite of statistical emulators. We dynamically downscaled a super-ensemble (multiple parameter sets and multiple initial conditions) of global climate simulations using a 25-km resolution regional climate model HadRM3p with the land-surface scheme MOSES2 and dynamic vegetation module TRIFFID. We simultaneously perturbed land surface parameters relating to the exchange of carbon, water, and energy between the land surface and atmosphere in a large super-ensemble of regional climate simulations over the western US. Statistical emulation was used as a computationally cost-effective tool to explore uncertainties in interactions. Regions of parameter space that did not satisfy observational constraints were eliminated and an ensemble of parameter sets that reduce regional biases and span a range of plausible interactions among earth system processes were selected. This study demonstrated that by combining super-ensemble simulations with statistical emulation, simulations of regional climate could be improved while simultaneously accounting for a range of plausible land-atmosphere feedback strengths.
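A statistical emulator in this setting is simply a cheap regression surrogate fitted to the ensemble: perturbed parameter sets in, a scalar summary of each downscaled simulation out. Below is a minimal sketch using a Gaussian-process emulator, one common choice for this task (the study's exact emulator may differ); `X` and `y` are illustrative names, not the study's data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def fit_emulator(X, y):
    """Fit a Gaussian-process emulator to ensemble output so parameter space
    can be screened without running further dynamical downscaling.
    X: (n_runs, n_params) perturbed land-surface parameter sets;
    y: (n_runs,) scalar summary of each run (e.g., a regional bias)."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    return gp.fit(X, y)

# emu = fit_emulator(X, y)
# mean, sd = emu.predict(X_new, return_std=True)
# Parameter sets whose emulated output violates observational constraints
# (beyond the emulator uncertainty) can then be ruled out cheaply.
```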
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Hui; Rasch, Philip J.; Zhang, Kai
2014-09-08
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring the sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.
Orr, Mark G; Thrush, Roxanne; Plaut, David C
2013-01-01
The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual's pre-existing belief structure and the beliefs of others in the individual's social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics.
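A minimal sketch of the constraint-satisfaction mechanism the abstract describes, assuming a small Hopfield-style network in which belief units and an intention unit settle under both learned biases (past experience) and clamped social input. The weights and inputs below are invented for illustration; they are not the fitted model from the paper.

```python
# Intention as dynamic constraint satisfaction among beliefs (illustrative).
import numpy as np

rng = np.random.default_rng(1)

# Units 0-3: the individual's beliefs; unit 4: behavioral intention.
W = np.zeros((5, 5))
W[4, :4] = W[:4, 4] = [0.8, 0.6, -0.7, 0.5]   # belief <-> intention constraints
past_learning = np.array([0.9, 0.2, 0.1, 0.4, 0.0])   # bias from past experience
social_input  = np.array([0.0, 0.5, -0.5, 0.0, 0.0])  # beliefs voiced by others

a = rng.uniform(-0.1, 0.1, 5)      # unit activations
for _ in range(200):               # let the network settle
    net = W @ a + past_learning + social_input
    a += 0.1 * (np.tanh(net) - a)  # gradual constraint satisfaction

print("behavioral intention:", a[4])
```

Changing `social_input` while keeping `past_learning` fixed shows how the same belief structure can yield different intentions in different immediate social contexts.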
NASA Astrophysics Data System (ADS)
Zhang, Guannan; Del-Castillo-Negrete, Diego
2017-10-01
Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the PDFs of RE. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than the brute-force MC methods, which can significantly reduce the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as the direct MC code, which paves the way for conducting large-scale RE simulation. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.
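For context, the sketch below is the brute-force forward Monte Carlo baseline that the paper's backward/BSDE method is designed to beat: estimating a runaway-type probability (hitting an upper threshold before a lower one) by simulating many forward paths of a toy drift-diffusion process. The dynamics and thresholds are invented placeholders, not the bounce-averaged RE model, and the backward algorithm itself is not reproduced here.

```python
# Forward MC estimate of P(hit b before a | X0 = x0) for dX = mu dt + sigma dW.
import numpy as np

rng = np.random.default_rng(2)

def runaway_probability(x0, a=0.0, b=1.0, mu=0.05, sigma=0.3,
                        dt=1e-3, n_paths=10_000, max_steps=200_000):
    x = np.full(n_paths, x0)
    alive = np.ones(n_paths, dtype=bool)      # paths still between a and b
    ran_away = np.zeros(n_paths, dtype=bool)  # paths that reached b
    for _ in range(max_steps):
        if not alive.any():
            break
        n = alive.sum()
        x[alive] += mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n)
        hit_b = alive & (x >= b)
        hit_a = alive & (x <= a)
        ran_away[hit_b] = True
        alive &= ~(hit_b | hit_a)
    return ran_away.mean()

print(runaway_probability(0.5))
```

The forward method must be rerun for every initial state x0, which is exactly the cost the backward Feynman-Kac formulation avoids by computing the probability for all initial states in one backward sweep.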
NASA Astrophysics Data System (ADS)
Brereton, Carol A.; Joynes, Ian M.; Campbell, Lucy J.; Johnson, Matthew R.
2018-05-01
Fugitive emissions are important sources of greenhouse gases and lost product in the energy sector that can be difficult to detect, but are often easily mitigated once they are known, located, and quantified. In this paper, a scalar transport adjoint-based optimization method is presented to locate and quantify unknown emission sources from downstream measurements. This emission characterization approach correctly predicted locations to within 5 m and magnitudes to within 13% of experimental release data from Project Prairie Grass. The method was further demonstrated on simulated simultaneous releases in a complex 3-D geometry based on an Alberta gas plant. Reconstructions were performed using both the complex 3-D transient wind field used to generate the simulated release data and using a sequential series of steady-state RANS wind simulations (SSWS) representing 30 s intervals of physical time. Both the detailed transient and the simplified wind field series could be used to correctly locate major sources and predict their emission rates within 10%, while predicting total emission rates from all sources within 24%. This SSWS case would be much easier to implement in a real-world application, and gives rise to the possibility of developing pre-computed databases of both wind and scalar transport adjoints to reduce computational time.
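At its core, the source-characterization step is a linear inverse problem: given a source-receptor sensitivity matrix (which the paper obtains from scalar transport adjoints), find non-negative emission rates that best explain the downstream measurements. The sketch below illustrates that step with synthetic sensitivities; the matrix, noise level, and source layout are all invented.

```python
# Least-squares recovery of non-negative source strengths from receptor data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)
n_receptors, n_candidates = 30, 10
A = rng.gamma(2.0, 1.0, size=(n_receptors, n_candidates))  # toy sensitivities

q_true = np.zeros(n_candidates)
q_true[[2, 7]] = [4.0, 1.5]                        # two real leaks
d = A @ q_true + 0.05 * rng.normal(size=n_receptors)  # noisy concentration data

q_hat, _ = nnls(A, d)                              # non-negative least squares
print("recovered rates:", np.round(q_hat, 2))
```

In the paper's setting each column of the sensitivity matrix corresponds to a candidate source location in the 3-D wind field, which is why pre-computed adjoint databases would make the inversion fast.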
Jaraíz, Martín; Enríquez, Lourdes; Pinacho, Ruth; Rubio, José E; Lesarri, Alberto; López-Pérez, José L
2017-04-07
A novel DFT-based Reaction Kinetics (DFT-RK) simulation approach, employed in combination with real-time data from reaction monitoring instrumentation (like UV-vis, FTIR, Raman, and 2D NMR benchtop spectrometers), is shown to provide a detailed methodology for the analysis and design of complex synthetic chemistry schemes. As an example, it is applied to the opening of epoxides by titanocene in THF, a catalytic system with abundant experimental data available. Through a DFT-RK analysis of real-time IR data, we have developed a comprehensive mechanistic model that opens new perspectives to understand previous experiments. Although derived specifically from the opening of epoxides, the prediction capabilities of the model, built on elementary reactions, together with its practical side (reaction kinetics simulations of real experimental conditions) make it a useful simulation tool for the design of new experiments, as well as for the conception and development of improved versions of the reagents. From the perspective of the methodology employed, because both the computational (DFT-RK) and the experimental (spectroscopic data) components can follow the time evolution of several species simultaneously, it is expected to provide a helpful tool for the study of complex systems in synthetic chemistry.
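The DFT-RK idea, in miniature: convert computed free-energy barriers into rate constants via the Eyring equation and integrate the resulting kinetic ODEs for comparison with time-resolved spectra. The two-step catalytic scheme and the barrier values below are placeholders, not the titanocene/epoxide energetics from the paper.

```python
# Barriers -> Eyring rate constants -> kinetic ODE integration (illustrative).
import numpy as np
from scipy.integrate import solve_ivp

kB_h, R, T = 2.0836e10, 8.314, 298.15      # kB/h in 1/(s*K); J/(mol*K); K

def eyring(dG_kJmol):
    """Eyring equation: k = (kB T / h) exp(-dG‡ / RT)."""
    return kB_h * T * np.exp(-dG_kJmol * 1e3 / (R * T))

k1 = eyring(75.0)   # catalyst + epoxide -> intermediate (placeholder barrier)
k2 = eyring(65.0)   # intermediate -> product + catalyst (placeholder barrier)

def rhs(t, y):
    cat, epox, inter, prod = y
    r1 = k1 * cat * epox
    r2 = k2 * inter
    return [-r1 + r2, -r1, r1 - r2, r2]   # catalyst is regenerated

sol = solve_ivp(rhs, (0.0, 3600.0), [0.01, 0.5, 0.0, 0.0], method="LSODA")
print("epoxide remaining after 1 h:", sol.y[1, -1])
```

Fitting the barriers so that the simulated concentration traces match real-time IR data is the "RK" half of the methodology the abstract describes.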
Singharoy, Abhishek; Sereda, Yuriy
2012-01-01
Macromolecular assemblies often display a hierarchical organization of macromolecules or their sub-assemblies. To model this, we have formulated a space warping method that enables capturing overall macromolecular structure and dynamics via a set of coarse-grained order parameters (OPs). This article is the first of two describing the construction and computational implementation of an additional class of OPs that has built into them the hierarchical architecture of macromolecular assemblies. To accomplish this, first, the system is divided into subsystems, each of which is described via a representative set of OPs. Then, a global set of variables is constructed from these subsystem-centered OPs to capture overall system organization. Dynamical properties of the resulting OPs are compared to those of our previous nonhierarchical ones, and implied conceptual and computational advantages are discussed for a 100 ns, 2-million-atom solvated Human Papillomavirus-like particle simulation. In the second article, the hierarchical OPs are shown to enable a multiscale analysis that starts with the N-atom Liouville equation and yields rigorous Langevin equations of stochastic OP dynamics. The latter is demonstrated via a force-field based simulation algorithm that probes key structural transition pathways, simultaneously accounting for all-atom details and overall structure. PMID:22661911
Control Law Design in a Computational Aeroelasticity Environment
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.
2003-01-01
A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
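A compact sketch of the Eigensystem Realization Algorithm step the abstract names: stack the Markov parameters into Hankel matrices, take an SVD, and read off a reduced state-space realization. The test data here are Markov parameters of a known two-state system (so the recovered eigenvalues can be checked), not aeroelastic responses.

```python
# Minimal ERA: Markov parameters -> Hankel SVD -> (A, B, C) realization.
import numpy as np

def era(markov, order):
    """markov: list of p x m Markov parameters Y_1, Y_2, ...; returns (A, B, C)."""
    Y = [np.atleast_2d(Yk) for Yk in markov]
    p, m = Y[0].shape
    r = (len(Y) - 1) // 2
    H0 = np.block([[Y[i + j] for j in range(r)] for i in range(r)])      # H(0)
    H1 = np.block([[Y[i + j + 1] for j in range(r)] for i in range(r)])  # H(1)
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order]
    S_sqrt, S_isqrt = np.diag(np.sqrt(s)), np.diag(1.0 / np.sqrt(s))
    A = S_isqrt @ U.T @ H1 @ Vt.T @ S_isqrt
    B = (S_sqrt @ Vt)[:, :m]
    C = (U @ S_sqrt)[:p, :]
    return A, B, C

# Markov parameters Y_k = C A^(k-1) B of a known 2-state system.
A_true = np.array([[0.9, 0.2], [-0.2, 0.9]])
B_true = np.array([[0.0], [1.0]])
C_true = np.array([[1.0, 0.0]])
markov = [C_true @ np.linalg.matrix_power(A_true, k) @ B_true for k in range(21)]

A, B, C = era(markov, order=2)
print("eigenvalues:", np.linalg.eigvals(A))   # should match eig(A_true)
```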
Application of the TEMPEST computer code to canister-filling heat transfer problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farnsworth, R.K.; Faletti, D.W.; Budden, M.J.
Pacific Northwest Laboratory (PNL) researchers used the TEMPEST computer code to simulate thermal cooldown behavior of nuclear waste glass after it was poured into steel canisters for long-term storage. The objective of this work was to determine the accuracy and applicability of the TEMPEST code when used to compute canister thermal histories. First, experimental data were obtained to provide the basis for comparing TEMPEST-generated predictions. Five canisters were instrumented with appropriately located radial and axial thermocouples. The canisters were filled using the pilot-scale ceramic melter (PSCM) at PNL. Each canister was filled in either a continuous or a batch filling mode. One of the canisters was also filled within a turntable simulant (a group of cylindrical shells with heat transfer resistances similar to those in an actual melter turntable). This was necessary to provide a basis for assessing the ability of the TEMPEST code to also model the transient cooling of canisters in a melter turntable. The continuous-fill model, Version M, was found to predict temperatures with more accuracy. The turntable simulant experiment demonstrated that TEMPEST can adequately model the asymmetric temperature field caused by the turntable geometry. Further, TEMPEST can acceptably predict the canister cooling history within a turntable, despite code limitations in computing simultaneous radiation and convection heat transfer between shells, along with uncertainty in stainless-steel surface emissivities. Based on the successful performance of TEMPEST Version M, development was initiated to incorporate 1) full viscous glass convection, 2) a dynamically adaptive grid that automatically follows the glass/air interface throughout the transient, and 3) a full enclosure radiation model to allow radiation heat transfer to non-nearest neighbor cells. 5 refs., 47 figs., 17 tabs.
Simulating Operations at a Spaceport
NASA Technical Reports Server (NTRS)
Nevins, Michael R.
2007-01-01
SPACESIM is a computer program for detailed simulation of operations at a spaceport. SPACESIM is being developed to greatly improve existing spaceports and to aid in designing, building, and operating future spaceports, given that there is a worldwide trend in spaceport operations from very expensive, research-oriented launches to more frequent commercial launches. From an operational perspective, future spaceports are expected to resemble current airports and seaports, for which it is necessary to resolve issues of safety, security, efficient movement of machinery and people, cost effectiveness, timeliness, and maximizing effectiveness in utilization of resources. Simulations can be performed, for example, to (1) simultaneously analyze launches of reusable and expendable rockets and identify bottlenecks arising from competition for limited resources or (2) perform what-if scenario analyses to identify optimal scenarios prior to making large capital investments. SPACESIM includes an object-oriented discrete-event-simulation engine. (Discrete-event simulation has been used to assess processes at modern seaports.) The simulation engine is built upon the Java programming language for maximum portability. Extensible Markup Language (XML) is used for storage of data to enable industry-standard interchange of data with other software. A graphical user interface facilitates creation of scenarios and analysis of data.
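The engine at the heart of such a program is a priority queue of timestamped events plus contended resources. The stripped-down sketch below (in Python rather than SPACESIM's Java, with invented timings) shows two vehicle flows competing for a single launch pad, which is exactly where a bottleneck analysis would look.

```python
# Minimal discrete-event simulation: a future-event list and one shared pad.
import heapq, random

random.seed(0)
events, seq = [], 0          # (time, tiebreaker, callback) future-event list

def schedule(t, fn):
    global seq
    heapq.heappush(events, (t, seq, fn)); seq += 1

pad_busy, waiting = False, []    # single launch pad and its queue

def request_pad(t, name):
    global pad_busy
    if pad_busy:
        waiting.append(name)     # contention: the bottleneck shows up here
    else:
        pad_busy = True
        schedule(t + random.uniform(2, 4), lambda t2: release_pad(t2, name))

def release_pad(t, name):
    global pad_busy
    print(f"t={t:6.2f}  {name} launched")
    pad_busy = False
    if waiting:
        request_pad(t, waiting.pop(0))

for i in range(3):               # interleaved reusable and expendable flows
    schedule(i * 1.0, lambda t, n=f"reusable-{i}": request_pad(t, n))
    schedule(i * 1.5, lambda t, n=f"expendable-{i}": request_pad(t, n))

while events:                    # main event loop
    t, _, fn = heapq.heappop(events)
    fn(t)
```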
Design and construction of miniature artificial ecosystem based on dynamic response optimization
NASA Astrophysics Data System (ADS)
Hu, Dawei; Liu, Hong; Tong, Ling; Li, Ming; Hu, Enzhu
The miniature artificial ecosystem (MAES) is a combination of man, silkworm, salad and microalgae to partially regenerate O2, sanitary water and food, and simultaneously dispose of CO2 and wastes; it therefore has a fundamental life support function. In order to enhance the safety and reliability of MAES and eliminate the influences of internal variations and external disturbances, it was necessary to configure MAES as a closed-loop control system, and it could be considered as a prototype for a future bioregenerative life support system. However, MAES is a complex system possessing large numbers of parameters, intricate nonlinearities, time-varying factors as well as uncertainties, hence it is difficult to perfectly design and construct a prototype merely by conducting experiments with a trial and error method. Our research presented an effective way to resolve the preceding problem by use of dynamic response optimization. Firstly, the mathematical model of MAES, consisting of first-order nonlinear ordinary differential equations with parameters, was developed based on relevant mechanisms and experimental data; secondly, a simulation model of MAES was derived on the platform of MatLab/Simulink to perform model validation and further digital simulations; thirdly, reference trajectories of the desired dynamic response of system outputs were specified according to prescribed requirements; and finally, optimization of initial values, the tuned parameter and independent parameters was carried out using the genetic algorithm and the advanced direct search method, along with parallel computing methods, through computer simulations. The result showed that all parameters and configurations of MAES were determined after a series of computer experiments, and its transient response performance and steady-state characteristics closely matched the reference curves. Since the prototype is a physical system that represents the mathematical model with reasonable accuracy, the process of designing and constructing a prototype of MAES is the reverse of mathematical modeling, and must be supported by these computer simulation results.
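Dynamic response optimization in miniature: tune model parameters so that the simulated output tracks a prescribed reference trajectory, using an evolutionary optimizer (SciPy's differential evolution stands in here for the genetic algorithm). The one-state "ecosystem" ODE below is a placeholder, not the MAES model.

```python
# Fit ODE parameters to a reference trajectory with an evolutionary optimizer.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

t_eval = np.linspace(0.0, 10.0, 101)
reference = 1.0 - np.exp(-0.8 * t_eval)     # desired response (toy O2 curve)

def simulate(params):
    k, x0 = params
    sol = solve_ivp(lambda t, x: k * (1.0 - x), (0.0, 10.0), [x0],
                    t_eval=t_eval)
    return sol.y[0]

def objective(params):
    return np.sum((simulate(params) - reference) ** 2)  # tracking error

result = differential_evolution(objective,
                                bounds=[(0.01, 5.0), (-0.5, 0.5)],
                                seed=8, tol=1e-8)
print("recovered k, x0:", result.x)          # should approach 0.8, 0.0
```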
Eiber, Calvin D; Morley, John W; Lovell, Nigel H; Suaning, Gregg J
2014-01-01
We present a computational model of the optic pathway which has been adapted to simulate cortical responses to visual-prosthetic stimulation. This model reproduces the statistically observed distributions of spikes for cortical recordings of sham and maximum-intensity stimuli, while simultaneously generating cellular receptive fields consistent with those observed using traditional visual neuroscience methods. By inverting this model to produce candidate phosphenes that could account for the responses observed to novel stimulation strategies, we hope to aid the development of such strategies in vivo before they are deployed in clinical settings.
NASA Astrophysics Data System (ADS)
Huang, Qian
2014-09-01
Scientific computing often requires the availability of a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. In order to investigate physical properties of minerals at extreme conditions in computational mineral physics, parallel computing technology is used to speed up the performance by utilizing multiple computer resources to process a computational task simultaneously, thereby greatly reducing computation time. Traditionally, parallel computing has been addressed by using High Performance Computing (HPC) solutions and installed facilities such as clusters and supercomputers. Today, there is tremendous growth in cloud computing. Infrastructure as a Service (IaaS), the on-demand and pay-as-you-go model, creates a flexible and cost-effective means of accessing computing resources. In this paper, a feasibility report of HPC on a cloud infrastructure is presented. It is found that current cloud services in the IaaS layer still need to improve performance to be useful to research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and an application of it has been developed. In this paper, an overall description of this SaaS application is presented. This contribution can promote cloud application development in computational mineral physics, and cross-disciplinary studies.
Hidden Statistics Approach to Quantum Simulations
NASA Technical Reports Server (NTRS)
Zak, Michail
2010-01-01
Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that the quantum properties can be used to represent structure data, and that quantum mechanisms can be devised and built to perform operations with this data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large massifs of highly correlated data. Unfortunately, these advantages of quantum mechanics came with a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the main three properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing one to measure its state variables using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of quantum potential (that has been overlooked in previous treatments of the Madelung equation). The role of the transitional potential is to provide a jump from a deterministic state to a random state with prescribed probability density. This jump is triggered by blowup instability due to violation of the Lipschitz condition generated by the quantum potential. As a result, the dynamics attains quantum properties on a classical scale. The model can be implemented physically as an analog VLSI-based (very-large-scale integration-based) computer, or numerically on a digital computer. This work opens a way of developing fundamentally new algorithms for quantum simulations of exponentially complex problems that expand NASA capabilities in conducting space activities. It has been illustrated that the complexity of simulations of particle interaction can be reduced from an exponential one to a polynomial one.
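For reference, the standard Madelung decomposition that this work builds on writes the wave function in polar form and splits the Schroedinger equation into a continuity equation and a quantum Hamilton-Jacobi equation; the paper's additional transitional component of the quantum potential is its own contribution and is not reproduced here.

```latex
\psi(\mathbf{x},t)=\sqrt{\rho(\mathbf{x},t)}\,e^{iS(\mathbf{x},t)/\hbar}
\quad\Longrightarrow\quad
\begin{cases}
\dfrac{\partial\rho}{\partial t}
  +\nabla\!\cdot\!\Bigl(\rho\,\dfrac{\nabla S}{m}\Bigr)=0,\\[2ex]
\dfrac{\partial S}{\partial t}+\dfrac{|\nabla S|^{2}}{2m}+V+Q=0,
\qquad Q=-\dfrac{\hbar^{2}}{2m}\,\dfrac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}},
\end{cases}
```

where rho is the probability density, S the action/phase, and Q the quantum (Bohm) potential whose Lipschitz-condition violation triggers the jumps described in the abstract.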
Hydrodynamic Simulations and Tomographic Reconstructions of the Intergalactic Medium
NASA Astrophysics Data System (ADS)
Stark, Casey William
The Intergalactic Medium (IGM) is the dominant reservoir of matter in the Universe from which the cosmic web and galaxies form. The structure and physical state of the IGM provides insight into the cosmological model of the Universe, the origin and timeline of the reionization of the Universe, as well as being an essential ingredient in our understanding of galaxy formation and evolution. Our primary handle on this information is a signal known as the Lyman-alpha forest (or Ly-alpha forest) -- the collection of absorption features in high-redshift sources due to intervening neutral hydrogen, which scatters HI Ly-alpha photons out of the line of sight. The Ly-alpha forest flux traces density fluctuations at high redshift and at moderate overdensities, making it an excellent tool for mapping large-scale structure and constraining cosmological parameters. Although the computational methodology for simulating the Ly-alpha forest has existed for over a decade, we are just now approaching the scale of computing power required to simultaneously capture large cosmological scales and the scales of the smallest absorption systems. My thesis focuses on using simulations at the edge of modern computing to produce precise predictions of the statistics of the Ly-alpha forest and to better understand the structure of the IGM. In the first part of my thesis, I review the state of hydrodynamic simulations of the IGM, including pitfalls of the existing under-resolved simulations. Our group developed a new cosmological hydrodynamics code to tackle the computational challenge, and I developed a distributed analysis framework to compute flux statistics from our simulations. I present flux statistics derived from a suite of our large hydrodynamic simulations and demonstrate convergence to the per cent level. I also compare flux statistics derived from simulations using different discretizations and hydrodynamic schemes (Eulerian finite volume vs. smoothed particle hydrodynamics) and discuss differences in their convergence behavior, their overall agreement, and the implications for cosmological constraints. In the second part of my thesis, I present a tomographic reconstruction method that allows us to make 3D maps of the IGM with Mpc resolution. In order to make reconstructions of large surveys computationally feasible, I developed a new Wiener Filter application with an algorithm specialized to our problem, which significantly reduces the space and time complexity compared to previous implementations. I explore two scientific applications of the maps: finding protoclusters by searching the maps for large, contiguous regions of low flux and finding cosmic voids by searching the maps for regions of high flux. Using a large N-body simulation, I identify and characterize both protoclusters and voids at z = 2.5, in the middle of the redshift range being mapped by ongoing surveys. I provide simple methods for identifying protocluster and void candidates in the tomographic flux maps, and then test them on mock surveys and reconstructions. I present forecasts for sample purity and completeness and other scientific applications of these large, high-redshift objects.
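The Wiener filter at the center of the reconstruction work has a compact linear-algebra form: x_hat = S A^T (A S A^T + N)^(-1) d, with S the signal covariance, N the noise covariance, and A the sampling operator. The toy below reconstructs a 1-D correlated field from sparse noisy samples; the sizes and covariances are invented, and the thesis's specialized algorithm exists precisely to avoid the dense solve used here.

```python
# Wiener-filter reconstruction of a 1-D field from sparse, noisy samples.
import numpy as np

rng = np.random.default_rng(3)
n = 200
grid = np.arange(n)

# Gaussian signal covariance with ~10-pixel correlation length.
S = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 10.0) ** 2)
truth = np.linalg.cholesky(S + 1e-8 * np.eye(n)) @ rng.normal(size=n)

obs = np.sort(rng.choice(n, size=40, replace=False))   # sparse "sightlines"
A = np.zeros((40, n)); A[np.arange(40), obs] = 1.0     # sampling operator
N = 0.1 * np.eye(40)                                   # noise covariance
d = A @ truth + rng.multivariate_normal(np.zeros(40), N)

x_hat = S @ A.T @ np.linalg.solve(A @ S @ A.T + N, d)  # Wiener estimate
err = np.sqrt(np.mean((x_hat[obs] - truth[obs]) ** 2))
print("rms error at observed pixels:", err)
```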
Wireless Medical Devices for MRI-Guided Interventions
NASA Astrophysics Data System (ADS)
Venkateswaran, Madhav
Wireless techniques can play an important role in next-generation, image-guided surgery, with integration strategies being the key. We present our investigations on three wireless applications. First, we validate a position- and orientation-independent method to noninvasively monitor wireless power delivery using current perturbation measurements of switched load modulation of the RF carrier. This is important for safe and efficient powering without using bulky batteries or invasive cables. Use of MRI transmit RF pulses for simultaneous powering is investigated in the second part. We develop system models for the MRI transmit chain, wireless powering circuits and a typical load. Detailed analysis and validation of nonlinear and cascaded modeling strategies are performed, useful for decoupled optimization of the harvester coil and RF-DC converter. MRI pulse sequences are investigated for suitability for simultaneous powering. Simulations indicate that a 1.8V, 2 mA load can be powered with a 100% duty cycle using a 30° fGRE sequence, despite the RF duty cycle being 44 mW for a 30° flip angle, consistent with model predictions. Investigations on imaging artifacts indicate that distortion is mostly restricted to within the physical span of the harvester coil in the imaging volume, with the homogeneous B1+ transmit field providing positioning flexibility to minimize this for simultaneous powering. The models are potentially valuable in designing wireless powering solutions for implantable devices with simultaneous real-time imaging in MRI-guided surgical suites. Finally, in the last section, we model endovascular MRI coil coupling during RF transmit. FEM models for a series-resonant multimode coil and quadrature birdcage coil fields are developed, and computationally efficient circuit and full-wave simulations are used to model inductive coupling. The Bloch-Siegert B1 mapping sequence is used for validation at 24, 28 and 34 microT background excitation. Quantitative performance metrics are successfully predicted and the role of simulation in geometric optimization is demonstrated. In a pig study, we demonstrate navigation of a catheter, with tip-tracking and high-resolution intravascular imaging, through the vasculature into the heart, followed by contextual visualization. A potentially significant application is in MRI-guided cardiac ablation procedures.
Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira
2015-01-01
Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment and simulation results are compared with the experimental data. We conclude that the proposed program code generator can be used to generate code for physiological simulations and provides a tool for studying cardiac electrophysiology. PMID:26356082
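The replacement scheme in spirit: substitute the PDE's spatial derivative with a finite-difference expression, then step the resulting ODEs. The sketch below does this by hand for a 1-D cable equation with FitzHugh-Nagumo (FHN) kinetics, one of the cell models the abstract mentions; the constants and the no-flux boundary treatment are generic textbook choices, not the generator's output.

```python
# Cable equation with FHN kinetics; d2v/dx2 is "replaced" by central differences.
import numpy as np

nx, dx, dt = 100, 0.1, 0.01
D, a, b, eps = 0.1, 0.7, 0.8, 0.08
v = -1.2 * np.ones(nx)          # membrane potential (near rest)
w = -0.6 * np.ones(nx)          # recovery variable
v[:5] = 1.0                     # stimulate the left end

for step in range(1_500):
    # replacement: d2v/dx2 -> (v[i+1] - 2 v[i] + v[i-1]) / dx^2, no-flux ends
    vp = np.pad(v, 1, mode="edge")
    lap = (vp[2:] - 2.0 * v + vp[:-2]) / dx**2
    dv = v - v**3 / 3.0 - w + D * lap
    dw = eps * (v + a - b * w)
    v += dt * dv
    w += dt * dw

print("peak v: %.2f, excited nodes: %d" % (v.max(), int(np.sum(v > 0.0))))
```

A generator like the one described automates exactly this substitution, plus the boundary-condition bookkeeping, for much larger models.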
TIERRAS: A package to simulate high energy cosmic ray showers underground, underwater and under-ice
NASA Astrophysics Data System (ADS)
Tueros, Matías; Sciutto, Sergio
2010-02-01
In this paper we present TIERRAS, a Monte Carlo simulation program based on the well-known AIRES air shower simulations system that enables the propagation of particle cascades underground, providing a tool to study particles arriving underground from a primary cosmic ray on the atmosphere or to initiate cascades directly underground and propagate them, exiting into the atmosphere if necessary. We show several cross-checks of its results against CORSIKA, FLUKA, GEANT and ZHS simulations and we make some considerations regarding its possible use and limitations. The first results of full underground shower simulations are presented, as an example of the package capabilities.
Program summary
Program title: TIERRAS for AIRES
Catalogue identifier: AEFO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 36 489
No. of bytes in distributed program, including test data, etc.: 3 261 669
Distribution format: tar.gz
Programming language: Fortran 77 and C
Computer: PC, Alpha, IBM, HP, Silicon Graphics and Sun workstations
Operating system: Linux, DEC Unix, AIX, SunOS, Unix System V
RAM: 22 Mbytes
Classification: 1.1
External routines: TIERRAS requires AIRES 2.8.4 to be installed on the system. AIRES 2.8.4 can be downloaded from http://www.fisica.unlp.edu.ar/auger/aires/eg_AiresDownload.html.
Nature of problem: Simulation of high and ultra high energy underground particle showers.
Solution method: Modification of the AIRES 2.8.4 code to accommodate underground conditions.
Restrictions: In AIRES some processes that are not statistically significant on the atmosphere are not simulated. In particular, it does not include muon photonuclear processes. This imposes a limitation on the application of this package to a depth of 1 km of standard rock (or 2.5 km of water equivalent). Neutrinos are not tracked on the simulation, but their energy is taken into account in decays.
Running time: A TIERRAS for AIRES run of a 10 eV shower with statistical sampling (thinning) below 10 eV and 0.2 weight factor (see [1]) uses approximately 1 h of CPU time on an Intel Core 2 Quad Q6600 at 2.4 GHz. It uses only one core, so 4 simultaneous simulations can be run on this computer. AIRES includes a spooling system to run several simultaneous jobs of any type.
References: S. Sciutto, AIRES 2.6 User Manual, http://www.fisica.unlp.edu.ar/auger/aires/.
Laser pulse induced multi-exciton dynamics in molecular systems
NASA Astrophysics Data System (ADS)
Wang, Luxia; May, Volkhard
2018-03-01
Ultrafast optical excitation of an arrangement of identical molecules is analyzed theoretically. The computations are particularly dedicated to molecules where the excitation energy into the second excited singlet state, E(S2) - E(S0), is larger than twice the excitation energy into the first excited singlet state, E(S1) - E(S0). Then, exciton-exciton annihilation is diminished, and resonant and intensive excitation may simultaneously move different molecules into their first excited singlet state |S1>. To describe the temporal evolution of the thus created multi-exciton state, a direct computation of the related wave function is circumvented. Instead, we derive equations of motion for expectation values formed by different arrangements of single-molecule transition operators |S1><S0|. First simulation results are presented and the approximate treatment suggested recently in Phys. Rev. B 94, 195413 (2016) is evaluated.
NASA Technical Reports Server (NTRS)
Reynolds, W. C. (Editor); Maccormack, R. W.
1981-01-01
Topics discussed include polygon transformations in fluid mechanics, computation of three-dimensional horseshoe vortex flow using the Navier-Stokes equations, an improved surface velocity method for transonic finite-volume solutions, transonic flow calculations with higher order finite elements, the numerical calculation of transonic axial turbomachinery flows, and the simultaneous solutions of inviscid flow and boundary layer at transonic speeds. Also considered are analytical solutions for the reflection of unsteady shock waves and relevant numerical tests, reformulation of the method of characteristics for multidimensional flows, direct numerical simulations of turbulent shear flows, the stability and separation of freely interacting boundary layers, computational models of convective motions at fluid interfaces, viscous transonic flow over airfoils, and mixed spectral/finite difference approximations for slightly viscous flows.
Folding Proteins at 500 ns/hour with Work Queue.
Abdul-Wahid, Badi'; Yu, Li; Rajan, Dinesh; Feng, Haoyun; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A
2012-10-01
Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods is proposed that requires only a large number of short calculations, and for which minimal communication between computer nodes is required. We considered one of the more accurate variants called Accelerated Weighted Ensemble Dynamics (AWE) and for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all-atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, grids, on multiple architectures (CPU/GPU, 32/64-bit), and in a dynamic environment in which processes are regularly added or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour. As a comparison, a single process typically achieves 0.1 ns/hour.
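The master-worker pattern behind this approach, reduced to the Python standard library: a pool of workers pulls short, independent simulation tasks, and the master resamples walkers between iterations. The "simulation" below is a stub that perturbs a single coordinate; it is only meant to show the task-farm structure, not AWE's weighted-ensemble statistics or the Work Queue framework itself.

```python
# Task-farm skeleton: many short, independent tasks per iteration.
import random
from concurrent.futures import ProcessPoolExecutor

def short_md_task(walker):
    """Stand-in for a short MD segment: perturb the walker's coordinate."""
    seed, x = walker
    rng = random.Random(seed)
    return x + rng.gauss(0.0, 0.1)

if __name__ == "__main__":
    walkers = [(i, 0.0) for i in range(64)]
    with ProcessPoolExecutor() as pool:
        for it in range(10):                     # AWE-style outer iterations
            xs = list(pool.map(short_md_task, walkers))
            xs.sort()                            # trivial "resampling" stage
            walkers = [(it * 64 + i, x) for i, x in enumerate(xs)]
    print("spread after 10 iterations: %.3f" % (max(xs) - min(xs)))
```

Because each task is short and independent, workers can live on any mix of machines and can join or leave between iterations, which is the property the abstract exploits.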
NASA Astrophysics Data System (ADS)
Johnson, J.; Brackley, C. A.; Cook, P. R.; Marenduzzo, D.
2015-02-01
We present computer simulations of the phase behaviour of an ensemble of proteins interacting with a polymer, mimicking non-specific binding to a piece of bacterial DNA or eukaryotic chromatin. The proteins can simultaneously bind to the polymer in two or more places to create protein bridges. Despite the lack of any explicit interaction between the proteins or between DNA segments, our simulations confirm previous results showing that when the protein-polymer interaction is sufficiently strong, the proteins come together to form clusters. Furthermore, a sufficiently large concentration of bridging proteins leads to the compaction of the swollen polymer into a globular phase. Here we characterise both the formation of protein clusters and the polymer collapse as a function of protein concentration, protein-polymer affinity and fibre flexibility.
Memoryless cooperative graph search based on the simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Hou, Jian; Yan, Gang-Feng; Fan, Zhen
2011-04-01
We have studied the problem of reaching a globally optimal segment for a graph-like environment with a single or a group of autonomous mobile agents. Firstly, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. We show that under both proposed control strategies, the agent will eventually converge to a globally optimal segment with probability 1. Secondly, we use multi-agent searching to simultaneously reduce the computation complexity and accelerate convergence, based on the algorithms given for a single agent. By exploiting graph partition, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment.
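A single-agent, memoryless search of this flavor can be sketched in a few lines: the agent hops between incident edges ("segments") of a toy graph and accepts worse segments with a temperature-dependent probability that is annealed toward zero. The graph, costs, and cooling schedule below are invented, and this plain Metropolis rule is only a stand-in for the paper's algorithms.

```python
# Simulated-annealing-style search for the cheapest edge of a small graph.
import math, random

random.seed(4)
# edge -> cost; the graph is a 4-cycle plus one diagonal
cost = {(0, 1): 5.0, (1, 2): 3.0, (2, 3): 4.0, (3, 0): 2.0, (0, 2): 1.0}
edges = list(cost)
incident = {e: [f for f in edges if set(e) & set(f) and f != e] for e in edges}

e = random.choice(edges)        # current segment
T = 5.0
while T > 1e-3:
    f = random.choice(incident[e])              # neighboring segment
    dE = cost[f] - cost[e]
    if dE < 0 or random.random() < math.exp(-dE / T):
        e = f                                   # move (agent keeps no memory)
    T *= 0.999                                  # cooling schedule

print("segment found:", e, "cost:", cost[e])
```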
Laszlo, Sarah; Plaut, David C
2012-03-01
The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between explicit, computational models and physiological data collected during the performance of cognitive tasks, we developed a PDP model of visual word recognition which simulates key results from the ERP reading literature, while simultaneously being able to successfully perform lexical decision, a benchmark task for reading models. Simulations reveal that the model's success depends on the implementation of several neurally plausible features in its architecture which are sufficiently domain-general to be relevant to cognitive modeling more generally.
Electric and hybrid electric vehicle study utilizing a time-stepping simulation
NASA Technical Reports Server (NTRS)
Schreiber, Jeffrey G.; Shaltens, Richard K.; Beremand, Donald G.
1992-01-01
The applicability of NASA's advanced power technologies to electric and hybrid vehicles was assessed using a time-stepping computer simulation to model electric and hybrid vehicles operating over the Federal Urban Driving Schedule (FUDS). Both the energy and power demands of the FUDS were taken into account and vehicle economy, range, and performance were addressed simultaneously. Results indicate that a hybrid electric vehicle (HEV) configured with a flywheel buffer energy storage device and a free-piston Stirling convertor fulfills the emissions, fuel economy, range, and performance requirements that would make it acceptable to the consumer. It is noted that an assessment to determine which of the candidate technologies are suited for the HEV application has yet to be made. A proper assessment should take into account the fuel economy and range, along with the driveability and total emissions produced.
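The bookkeeping such a time-stepping study performs can be shown compactly: step through a speed trace, compute the road-load power at each step, and integrate the battery energy. The drive profile and vehicle constants below are generic placeholders, not the FUDS schedule or the NASA study's values.

```python
# One-second time-stepping of road-load power over a toy speed trace.
import numpy as np

dt = 1.0                                            # s
v = np.concatenate([np.linspace(0, 15, 30),         # accelerate to 15 m/s
                    np.full(60, 15.0),              # cruise
                    np.linspace(15, 0, 30)])        # brake to a stop
m, Cd, A, Crr = 1500.0, 0.35, 2.0, 0.01             # kg, -, m^2, -
rho, g, eta = 1.2, 9.81, 0.85                       # air density, gravity, eff.

a = np.gradient(v, dt)
P_road = (m * a + 0.5 * rho * Cd * A * v**2
          + Crr * m * g * np.sign(v)) * v           # W, tractive power demand
P_batt = np.where(P_road > 0, P_road / eta, 0.3 * P_road)  # 30% regen assumed

print("energy used: %.2f Wh over %.0f m"
      % (np.sum(P_batt) * dt / 3600.0, np.sum(v) * dt))
```

Because both power peaks and integrated energy come out of the same loop, economy, range, and performance can be assessed simultaneously, as the abstract emphasizes.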
Direct conversion of solar energy to thermal energy
NASA Astrophysics Data System (ADS)
Sizmann, Rudolf
1986-12-01
Selective coatings (cermets) were produced by simultaneous evaporation of copper and silicon dioxide, and analyzed by computer-assisted spectral photometers and ellipsometers; hemispherical emittance was measured. Steady state test procedures for covered and uncovered collectors were investigated. A method for evaluating the transient behavior of collectors was developed. The derived transfer functions describe their transient behavior. A stochastic approach was used for reducing the meteorological data volume. Data sets which are statistically equivalent to the original data can be synthesized. A simulation program for solar systems using analytical solutions of differential equations was developed. A large solar DHW system was optimized by a detailed modular simulation program. A microprocessor-assisted data acquisition system records the four characteristics of solar cells and solar cell systems in less than 10 msec. Measurements of a large photovoltaic installation (50 sqm) are reported.
Linearized radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements
NASA Astrophysics Data System (ADS)
Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego
2018-07-01
In this paper, we describe several linearized radiative transfer models which can be used for the retrieval of cloud parameters from EPIC (Earth Polychromatic Imaging Camera) measurements. The approaches under examination are (1) the linearized forward approach, represented in this paper by the linearized discrete ordinate and matrix operator methods with matrix exponential, and (2) the forward-adjoint approach based on the discrete ordinate method with matrix exponential. To enhance the performance of the radiative transfer computations, the correlated k-distribution method and the Principal Component Analysis (PCA) technique are used. We provide a compact description of the proposed methods, as well as a numerical analysis of their accuracy and efficiency when simulating EPIC measurements in the oxygen A-band channel at 764 nm. We found that the computation time of the forward-adjoint approach using the correlated k-distribution method in conjunction with PCA is approximately 13 s for simultaneously computing the derivatives with respect to cloud optical thickness and cloud top height.
Integrative prescreening in analysis of multiple cancer genomic studies
2012-01-01
Background: In high throughput cancer genomic studies, results from the analysis of single datasets often suffer from a lack of reproducibility because of small sample sizes. Integrative analysis can effectively pool and analyze multiple datasets and provides a cost effective way to improve reproducibility. In integrative analysis, simultaneously analyzing all genes profiled may incur high computational cost. A computationally affordable remedy is prescreening, which fits marginal models, can be conducted in a parallel manner, and has low computational cost.
Results: An integrative prescreening approach is developed for the analysis of multiple cancer genomic datasets. Simulation shows that the proposed integrative prescreening has better performance than alternatives, particularly including prescreening with individual datasets, an intensity approach and meta-analysis. We also analyze multiple microarray gene profiling studies on liver and pancreatic cancers using the proposed approach.
Conclusions: The proposed integrative prescreening provides an effective way to reduce the dimensionality in cancer genomic studies. It can be coupled with existing analysis methods to identify cancer markers. PMID:22799431
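Marginal prescreening pooled across datasets can be illustrated in a few lines: score each gene by its marginal association in every study, combine the scores, and keep the top fraction. The data are synthetic, and the combination rule used here (sum of squared marginal z-scores) is just one simple choice, not necessarily the paper's statistic.

```python
# Integrative marginal prescreening on synthetic multi-study data.
import numpy as np

rng = np.random.default_rng(5)
n_genes, studies = 1000, 3
signal = np.zeros(n_genes)
signal[:20] = 1.0                          # 20 genes truly associated

scores = np.zeros(n_genes)
for _ in range(studies):
    n = 40                                 # small per-study sample size
    y = rng.normal(size=n)                 # centered outcome
    X = rng.normal(size=(n, n_genes)) + np.outer(y, signal)
    z = X.T @ y / np.sqrt(n)               # marginal z-score per gene
    scores += z**2                         # pool evidence across studies

keep = np.argsort(scores)[::-1][:50]       # retain top 50 genes
print("true positives in top 50:", int(np.sum(keep < 20)))
```

Each gene's score is computed independently, so the screen parallelizes trivially, which is the computational point the abstract makes.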
Flight-Time Identification of a UH-60A Helicopter and Slung Load
NASA Technical Reports Server (NTRS)
Cicolani, Luigi S.; McCoy, Allen H.; Tischler, Mark B.; Tucker, George E.; Gatenio, Pinhas; Marmar, Dani
1998-01-01
This paper describes a flight test demonstration of a system for identification of the stability and handling qualities parameters of a helicopter-slung load configuration simultaneously with flight testing, and the results obtained. Tests were conducted with a UH-60A Black Hawk at speeds from hover to 80 kts. The principal test load was an instrumented 8 x 6 x 6 ft cargo container. The identification used frequency domain analysis in the frequency range up to 2 Hz, and focused on the longitudinal and lateral control axes since these are the axes most affected by the load pendulum modes in the frequency range of interest for handling qualities. Results were computed for stability margins, handling qualities parameters and load pendulum stability. The computations took an average of 4 minutes before clearing the aircraft to the next test point. Important reductions in handling qualities were computed in some cases, depending on control axis and load-sling combination. A database, including load dynamics measurements, was accumulated for subsequent simulation development and validation.
SimGen: A General Simulation Method for Large Systems.
Taylor, William R
2017-02-03
SimGen is a stand-alone computer program that reads a script of commands to represent complex macromolecules, including proteins and nucleic acids, in a structural hierarchy that can then be viewed using an integral graphical viewer or animated through a high-level application programming interface in C++. Structural levels in the hierarchy range from α-carbon or phosphate backbones through secondary structure to domains, molecules, and multimers with each level represented in an identical data structure that can be manipulated using the application programming interface. Unlike most coarse-grained simulation approaches, the higher-level objects represented in SimGen can be soft, allowing the lower-level objects that they contain to interact directly. The default motion simulated by SimGen is a Brownian-like diffusion that can be set to occur across all levels of representation in the hierarchy. Links can also be defined between objects, which, when combined with large high-level random movements, result in an effective search strategy for constraint satisfaction, including structure prediction from predicted pairwise distances. The implementation of SimGen makes use of the hierarchic data structure to avoid unnecessary calculation, especially for collision detection, allowing it to be simultaneously run and viewed on a laptop computer while simulating large systems of over 20,000 objects. It has been used previously to model complex molecular interactions including the motion of a myosin-V dimer "walking" on an actin fibre, RNA stem-loop packing, and the simulation of cell motion and aggregation. Several extensions to this original functionality are described.
Insights from 3D numerical simulations on the dynamics of the India-Asia collision zone
NASA Astrophysics Data System (ADS)
Pusok, A. E.; Kaus, B.; Popov, A.
2013-12-01
The dynamics of the India-Asia collision zone remains one of the most remarkable topics of current research interest: the transition from subduction to collision and uplift, followed by the rise of the abnormally thick Tibetan plateau, and the deformation at its Eastern and Western syntaxes, are processes still not fully understood. Models that have addressed this topic include wholesale underthrusting of Indian lithospheric mantle under Tibet, distributed homogeneous shortening or the thin-sheet model, slip-line field models for lateral extrusion, and lower crustal flow models for the exhumation of the Himalayan units and lateral spreading of the Tibetan plateau. Of these, the thin-sheet model has successfully illustrated some of the basic physics of continental collision and has the advantage of a 3D model being reduced to 2D, but one of its major shortcomings is that it cannot simultaneously represent channel flow and gravitational collapse of the mantle lithosphere, since these mechanisms require the lithosphere to interact with the underlying mantle, or to have a vertically non-homogeneous rheology. As a consequence, 3D models are emerging as powerful tools to understand the dynamics of coupled systems. However, because of recent developments and various complexities, current 3D models simulating the dynamics of continent collision zones have relied on certain explicit assumptions, such as replacing part of the asthenosphere with various types of boundary conditions that mimic the effect of mantle flow, in order to focus on the lithospheric/crustal deformation. Here, we employ the parallel 3D code LaMEM (Lithosphere and Mantle Evolution Model), with a finite difference staggered grid solver, which is capable of simulating lithospheric deformation while simultaneously taking mantle flow and a free surface into account. We present qualitative results of lithospheric and upper-mantle scale simulations in which the Indian lithosphere is subducted and/or indented into Asia. We investigate the way deep processes affect continental tectonics at convergent margins, addressing the role continental subduction and indentation play in the development of continental tectonics during convergence, and we discuss the implications these offer for Asian tectonics. Acknowledgements: Funding was provided by the European Research Council under the European Community's Seventh Framework Program (FP7/2007-2013) / ERC Grant agreement #258830. Numerical computations have been performed on MOGON (ZDV Mainz computing center) and JUQUEEN (Jülich high-performance computing center).
Direct Large-Scale N-Body Simulations of Planetesimal Dynamics
NASA Astrophysics Data System (ADS)
Richardson, Derek C.; Quinn, Thomas; Stadel, Joachim; Lake, George
2000-01-01
We describe a new direct numerical method for simulating planetesimal dynamics in which N ~ 10^6 or more bodies can be evolved simultaneously in three spatial dimensions over hundreds of dynamical times. This represents several orders of magnitude improvement in resolution over previous studies. The advance is made possible through modification of a stable and tested cosmological code optimized for massively parallel computers. However, owing to the excellent scalability and portability of the code, modest clusters of workstations can treat problems with N ~ 10^5 particles in a practical fashion. The code features algorithms for detection and resolution of collisions and takes into account the strong central force field and flattened Keplerian disk geometry of planetesimal systems. We demonstrate the range of problems that can be addressed by presenting simulations that illustrate oligarchic growth of protoplanets, planet formation in the presence of giant planet perturbations, the formation of the jovian moons, and orbital migration via planetesimal scattering. We also describe methods under development for increasing the timescale of the simulations by several orders of magnitude.
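The integrator skeleton such codes parallelize is a symplectic kick-drift-kick leapfrog in a dominant central potential. The sketch below evolves a ring of test planetesimals on circular orbits in units where G*M_star = 1; it deliberately omits the mutual gravity, collision detection, and tree/parallel machinery that are the paper's actual contributions.

```python
# Kick-drift-kick leapfrog for test particles around a central star (toy).
import numpy as np

rng = np.random.default_rng(10)
n, dt = 50, 1e-3
a_semi = rng.uniform(0.9, 1.1, n)
theta = rng.uniform(0, 2 * np.pi, n)
r = np.column_stack([a_semi * np.cos(theta), a_semi * np.sin(theta)])
speed = 1.0 / np.sqrt(a_semi)                  # circular-orbit speed
v = np.column_stack([-speed * np.sin(theta), speed * np.cos(theta)])

def accel(r):
    d = np.linalg.norm(r, axis=1, keepdims=True)
    return -r / d**3                           # central star only

a = accel(r)
for _ in range(10_000):                        # ~1.6 orbits at a = 1
    v += 0.5 * dt * a                          # kick
    r += dt * v                                # drift
    a = accel(r)
    v += 0.5 * dt * a                          # kick

E = 0.5 * np.sum(v**2, axis=1) - 1.0 / np.linalg.norm(r, axis=1)
print("mean energy drift (should be ~0):", np.mean(E + 0.5 / a_semi))
```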
Samant, Asawari; Ogunnaike, Babatunde A; Vlachos, Dionisios G
2007-05-24
The fundamental role that intrinsic stochasticity plays in cellular functions has been shown via numerous computational and experimental studies. In the face of such evidence, it is important that intracellular networks are simulated with stochastic algorithms that can capture molecular fluctuations. However, separation of time scales and disparity in species population, two common features of intracellular networks, make stochastic simulation of such networks computationally prohibitive. While recent work has addressed each of these challenges separately, a generic algorithm that can simultaneously tackle disparity in time scales and population scales in stochastic systems is currently lacking. In this paper, we propose the hybrid, multiscale Monte Carlo (HyMSMC) method that fills in this void. The proposed HyMSMC method blends stochastic singular perturbation concepts, to deal with potential stiffness, with a hybrid of exact and coarse-grained stochastic algorithms, to cope with separation in population sizes. In addition, we introduce the computational singular perturbation (CSP) method as a means of systematically partitioning fast and slow networks and computing relaxation times for convergence. We also propose a new criterion for the convergence of fast networks to stochastic low-dimensional manifolds, which further accelerates the algorithm. We use several prototype and biological examples, including a gene expression model displaying bistability, to demonstrate the efficiency, accuracy and applicability of the HyMSMC method. Bistable models serve as stringent tests for the success of multiscale MC methods and illustrate limitations of some literature methods.
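For readers unfamiliar with the baseline being accelerated: the exact stochastic simulation algorithm (Gillespie SSA) draws exponentially distributed waiting times from the total propensity and picks which reaction fires in proportion to its rate. The birth-death network and rates below are made up; hybrid schemes like HyMSMC exist because this exact method becomes prohibitively slow when propensities or populations span many scales.

```python
# Exact SSA on a birth-death network: 0 -> X (k_birth), X -> 0 (k_death * x).
import numpy as np

rng = np.random.default_rng(6)
x, t, t_end = 10, 0.0, 100.0
k_birth, k_death = 5.0, 0.4

ts, xs = [0.0], [x]
while t < t_end:
    rates = np.array([k_birth, k_death * x])   # reaction propensities
    total = rates.sum()
    t += rng.exponential(1.0 / total)          # time to next reaction
    if rng.random() < rates[0] / total:        # choose which reaction fires
        x += 1
    else:
        x -= 1
    ts.append(t); xs.append(x)

print("mean copy number (stationary value is k_birth/k_death = 12.5):",
      np.mean(xs[len(xs) // 2:]))
```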
He, L; Huang, G H; Lu, H W
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, distinct from previous modeling efforts, which focused on addressing uncertainty in physical parameters (i.e. soil porosity), whereas this work aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence levels of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.
Finite Element Simulation of Residual Stress Development in Thermally Sprayed Coatings
NASA Astrophysics Data System (ADS)
Elhoriny, Mohamed; Wenzelburger, Martin; Killinger, Andreas; Gadow, Rainer
2017-04-01
The coating buildup process of Al2O3/TiO2 ceramic powder deposited on stainless-steel substrate by atmospheric plasma spraying has been simulated by creating thermomechanical finite element models that utilize element death and birth techniques in ANSYS commercial software and self-developed codes. The simulation process starts with side-by-side deposition of coarse subparts of the ceramic layer until the entire coating is created. Simultaneously, the heat flow into the material, thermal deformation, and initial quenching stress are computed. The aim is to be able to predict—for the considered spray powder and substrate material—the development of residual stresses and to assess the risk of coating failure. The model allows the prediction of the heat flow, temperature profile, and residual stress development over time and position in the coating and substrate. The proposed models were successfully run and the results compared with actual residual stresses measured by the hole drilling method.
Control of wind turbine generators connected to power systems
NASA Technical Reports Server (NTRS)
Hwang, H. H.; Mozeico, H. V.; Gilbert, L. J.
1978-01-01
A unique simulation model based on the Mod-0 wind turbine is developed for simulating both speed and power control. An analytical representation for a wind turbine that employs blade pitch angle feedback control is presented, and a mathematical model is formulated. With Mod-0 serving as a practical case study, results of a computer simulation of the model as applied to the problems of synchronization and dynamic stability are provided. It is shown that the speed and output of a wind turbine can be satisfactorily controlled within reasonable limits by employing the existing blade pitch control system under specified conditions. For power control, an additional excitation control is required so that the terminal voltage, output power factor, and armature current can be held within narrow limits. As a result, the variation of torque angle is limited even if speed control is not implemented simultaneously with power control. Design features of the ERDA/NASA 100-kW Mod-0 wind turbine are included.
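A minimal sketch of blade-pitch speed control: a one-state rotor model in which a PI controller adjusts pitch to hold rated speed through a step gust. Every constant below, including the toy aerodynamic torque law, is illustrative rather than Mod-0 data.

```python
# PI blade-pitch control of rotor speed against a step wind gust (toy model).
import numpy as np

dt, J = 0.05, 5.0e4                      # s, rotor inertia (kg m^2)
omega_ref, omega = 4.0, 4.0              # rated and current speed (rad/s)
integ = 0.0                              # integrator state
Kp, Ki = 4.0, 0.8                        # PI gains (deg per rad/s, per rad)

def aero_torque(wind, pitch):
    """Toy aerodynamic torque: grows with wind, sheds with pitch (deg)."""
    return 1.2e4 * (wind / 8.0) ** 2 * max(0.0, 1.0 - 0.04 * (pitch - 10.0))

T_gen = 1.2e4                            # generator load torque at rated power
for step in range(4000):
    wind = 8.0 + (4.0 if step > 1000 else 0.0)   # step gust at t = 50 s
    err = omega - omega_ref
    integ += err * dt
    pitch = np.clip(10.0 + Kp * err + Ki * integ, 0.0, 45.0)
    omega += dt / J * (aero_torque(wind, pitch) - T_gen)

print("final speed error: %.4f rad/s, pitch: %.1f deg"
      % (omega - omega_ref, pitch))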
NASA Astrophysics Data System (ADS)
Lyu, Bo-Han; Wang, Chen; Tsai, Chun-Wei
2017-08-01
Jasper Display Corp. (JDC) offers a high-reflectivity, high-resolution Liquid Crystal on Silicon Spatial Light Modulator (LCoS-SLM) that includes an associated controller ASIC and LabVIEW-based modulation software. Based on this LCoS-SLM, also called the Education Kit (EDK), we provide a training platform that includes a series of optical theory lessons and experiments for university students. The EDK not only provides LabVIEW-based operation software that produces Computer Generated Holograms (CGH) to generate basic diffraction or holographic images, but also provides simulation software to verify the experimental results simultaneously. We believe that a robust LCoS-SLM, operation software, simulation software, training system, and training course can help students study fundamental optics, wave optics, and Fourier optics more easily. Based on this fundamental knowledge, they can develop their own skills and create new innovations in optoelectronic applications in the future.
Left Ventricular Diastolic and Systolic Material Property Estimation from Image Data
Krishnamurthy, Adarsh; Villongco, Christopher; Beck, Amanda; Omens, Jeffrey; McCulloch, Andrew
2015-01-01
Cardiovascular simulations using patient-specific geometries can help researchers understand the mechanical behavior of the heart under different loading or disease conditions. However, to replicate the regional mechanics of the heart accurately, both the nonlinear passive and active material properties must be estimated reliably. In this paper, automated methods were used to determine passive material properties while simultaneously computing the unloaded reference geometry of the ventricles for stress analysis. Two different approaches were used to model systole. In the first, a physiologically-based active contraction model [1] coupled to a hemodynamic three-element Windkessel model of the circulation was used to simulate ventricular ejection. In the second, developed active tension was directly adjusted to match ventricular volumes at end-systole while prescribing the known end-systolic pressure. These methods were tested in four normal dogs using the data provided for the LV mechanics challenge [2]. The resulting end-diastolic and end-systolic geometry from the simulation were compared with measured image data. PMID:25729778
Measurements and Simulations of Nadir-Viewing Radar Returns from the Melting Layer at X- and W-Bands
NASA Technical Reports Server (NTRS)
Liao, Liang; Meneghini, Robert; Tian, Lin; Heymsfield, Gerald M.
2010-01-01
Simulated radar signatures within the melting layer in stratiform rain, namely the radar bright band, are checked by means of comparisons with simultaneous measurements of the bright band made by the EDOP (X-band) and CRS (W-band) airborne Doppler radars during the CRYSTAL-FACE campaign in 2002. A stratified-sphere model, allowing the fractional water content to vary along the radius of the particle, is used to compute the scattering properties of individual melting snowflakes. Using the effective dielectric constants computed by the conjugate gradient-fast Fourier transform (CGFFT) numerical method at X and W bands, and expressing the fractional water content of a melting particle as an exponential function of particle radius, it is found that at X band the simulated radar bright-band profiles are in excellent agreement with the measured profiles. It is also found that the simulated W-band profiles usually resemble the shapes of the measured bright-band profiles even though persistent offsets between them are present. These offsets, however, can be explained by the attenuation caused by cloud water and water vapor at W band. This is confirmed by comparisons of the radar profiles made in rain regions, where the unattenuated W-band reflectivity profiles can be estimated from the X- and W-band Doppler velocity measurements. The bright-band model described in this paper has the potential to be used effectively in both radar and radiometer algorithms relevant to the TRMM and GPM satellite missions.
A case for spiking neural network simulation based on configurable multiple-FPGA systems.
Yang, Shufan; Wu, Qiang; Li, Renfa
2011-09-01
Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Software simulation, however, cannot rapidly generate output spikes for large-scale neural networks. An alternative approach, hardware implementation, makes it possible to generate independent spikes precisely and to output spike waves simultaneously in real time, provided the spiking neural network can take full advantage of the inherent parallelism of the hardware. In this work we introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation. We aim to use this platform to combine the speed of dedicated hardware with the programmability of software, so that neuroscientists can put together sophisticated computational experiments using their own models. A feed-forward hierarchical network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity in visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger-scale models based on this framework can be used to replicate the actual architecture of visual cortex, leading to more detailed predictions and insights into visual perception phenomena.
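A minimal software sketch of the neuron update that such hardware parallelizes, here a leaky integrate-and-fire model with invented parameters, illustrates the per-neuron computation that an FPGA implementation replicates many times in parallel:

    import numpy as np

    def simulate_lif(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
        # Leaky integrate-and-fire: tau * dv/dt = -v + I(t); emit a spike
        # and reset whenever the membrane potential crosses threshold.
        v, spike_times = 0.0, []
        for step, i_in in enumerate(input_current):
            v += dt / tau * (-v + i_in)
            if v >= v_thresh:
                spike_times.append(step * dt)   # spike time in seconds
                v = v_reset
        return spike_times

    spikes = simulate_lif(np.full(1000, 1.5))   # 1 s of constant drive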
Multiphysics Code Demonstrated for Propulsion Applications
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Melis, Matthew E.
1998-01-01
The utility of multidisciplinary analysis tools for aeropropulsion applications is being investigated at the NASA Lewis Research Center. The goal of this project is to apply Spectrum, a multiphysics code developed by Centric Engineering Systems, Inc., to simulate multidisciplinary effects in turbomachinery components. Many engineering problems today involve detailed computer analyses to predict the thermal, aerodynamic, and structural response of a mechanical system as it undergoes service loading. Analysis of aerospace structures generally requires attention in all three disciplinary areas to adequately predict component service behavior, and in many cases, the results from one discipline substantially affect the outcome of the other two. There are numerous computer codes currently available in the engineering community to perform such analyses in each of these disciplines. Many of these codes are developed and used in-house by a given organization, and many are commercially available. However, few, if any, of these codes are designed specifically for multidisciplinary analyses. The Spectrum code has been developed for performing fully coupled fluid, thermal, and structural analyses on a mechanical system with a single simulation that accounts for all simultaneous interactions, thus eliminating the requirement for running a large number of sequential, separate, disciplinary analyses. The Spectrum code has a true multiphysics analysis capability, which improves analysis efficiency as well as accuracy. Centric Engineering, Inc., working with a team of Lewis and AlliedSignal Engines engineers, has been evaluating Spectrum for a variety of propulsion applications including disk quenching, drum cavity flow, aeromechanical simulations, and a centrifugal compressor flow simulation.
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan
2015-03-01
With the formation of competitive electricity markets around the world, optimization of bidding strategies has become one of the main topics in studies of market design. Market design is challenged by multiple objectives that must be satisfied. The solution of such multi-objective problems is often searched over the combined strategy space and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibrium in terms of the minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents several metaheuristic algorithms, namely a genetic algorithm (GA), simulated annealing (SA) and a hybrid simulated annealing genetic algorithm (HSAGA), to simulate how generators bid in the spot electricity market to maximize their profit given the other generators' strategies, and compares their results. As both GA and SA are generic search methods, HSAGA is also a generic search method. A model based on actual data is implemented for a peak hour of Tehran's wholesale spot market in 2012. The simulation results show that GA outperforms SA and HSAGA in computing time, number of function evaluations and computational stability, and that the Nash equilibria calculated by GA vary less from run to run than those of the other algorithms.
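As a toy illustration of the metaheuristic search being compared, the following simulated-annealing loop minimizes a stand-in objective whose minimum plays the role of the Nash-equilibrium characterization described above (the objective and all constants are invented):

    import numpy as np

    def objective(bid):
        # Hypothetical stand-in for the function whose minima mark Nash
        # equilibria; zero exactly at the equilibrium bid.
        return (bid - 42.0) ** 2

    def simulated_annealing(x0, n_iter=5000, temp0=10.0, step=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x, fx = x0, objective(x0)
        for k in range(n_iter):
            temp = temp0 / (1 + k)              # simple cooling schedule
            cand = x + rng.normal(0.0, step)
            fc = objective(cand)
            # Accept downhill moves always; uphill moves with Boltzmann probability.
            if fc < fx or rng.random() < np.exp(-(fc - fx) / temp):
                x, fx = cand, fc
        return x

    print(simulated_annealing(x0=10.0))         # converges near 42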
Peleg, Micha; Normand, Mark D
2015-09-01
When the chemical degradation of a vitamin, pigment or other food component follows fixed-order kinetics of known order, and its rate constant's temperature dependence follows a two-parameter model, then, at least theoretically, it is possible to extract these two parameters from two successive experimental concentration ratios determined during the food's non-isothermal storage. This requires numerical solution of two simultaneous equations, themselves the numerical solutions of two differential rate equations, with a program especially developed for the purpose. Once calculated, these parameters can be used to reconstruct the entire degradation curve for the particular temperature history and to predict the degradation curves for other temperature histories. The concept and computation method were tested with simulated degradation under rising and/or falling oscillating temperature conditions, employing the exponential model to characterize the rate constant's temperature dependence. In computer simulations, the method's predictions were robust against minor errors in the two concentration ratios. The program to do the calculations was posted as freeware on the Internet. The temperature profile can be entered as an algebraic expression that can include 'If' statements, or as an imported digitized time-temperature data file, to be converted into an Interpolating Function by the program. The numerical solution of the two simultaneous equations requires close initial guesses of the exponential model's parameters. Programs were devised to obtain these initial values by matching the two experimental concentration ratios with a generated degradation curve whose parameters can be varied manually with sliders on the screen. These programs too were made available as freeware on the Internet and were tested with published data on vitamin A. Copyright © 2015 Elsevier Ltd. All rights reserved.
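A minimal sketch of the described computation (using Python with SciPy rather than the authors' posted freeware; the temperature profile, sampling times, and parameter values are invented) recovers the two parameters of the exponential rate model k(T) = k_ref·exp[c(T − T_ref)] by matching two concentration ratios, each obtained by numerically integrating the first-order rate equation:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import fsolve

    def temperature(t):
        # Hypothetical non-isothermal storage profile (degrees C, t in hours).
        return 25.0 + 10.0 * np.sin(2 * np.pi * t / 24.0)

    def conc_ratio(t_end, k_ref, c, T_ref=25.0):
        # First-order degradation dC/dt = -k(T(t)) C with the exponential
        # model k(T) = k_ref * exp(c * (T - T_ref)); returns C(t_end)/C(0).
        rate = lambda t, C: -k_ref * np.exp(c * (temperature(t) - T_ref)) * C
        return solve_ivp(rate, (0.0, t_end), [1.0], rtol=1e-8).y[0, -1]

    # Two "experimental" ratios at 12 h and 36 h (generated here by simulation).
    r1, r2 = conc_ratio(12.0, 0.02, 0.1), conc_ratio(36.0, 0.02, 0.1)

    def residuals(params):
        k_ref, c = params
        return [conc_ratio(12.0, k_ref, c) - r1,
                conc_ratio(36.0, k_ref, c) - r2]

    # Close initial guesses are needed, as the authors note.
    k_ref_est, c_est = fsolve(residuals, x0=[0.03, 0.08])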
Simultaneously optimizing dose and schedule of a new cytotoxic agent.
Braun, Thomas M; Thall, Peter F; Nguyen, Hoang; de Lima, Marcos
2007-01-01
Traditionally, phase I clinical trial designs are based upon one predefined course of treatment, while the dose given at each administration is varied among patients. In actual medical practice, patients receive a schedule comprising several courses of treatment, and some patients may receive one or more dose reductions or delays during treatment. Consequently, the overall risk of toxicity for each patient is a function of both the actual schedule of treatment and the differing doses used at each administration. Our goal is to provide a practical phase I clinical trial design that more accurately reflects actual medical practice by accounting for both dose per administration and schedule. We propose an outcome-adaptive Bayesian design that simultaneously optimizes both dose and schedule in terms of the overall risk of toxicity, based on time-to-toxicity outcomes. We use computer simulation as a tool to calibrate design parameters. We describe a phase I trial in allogeneic bone marrow transplantation that was designed and is currently being conducted using our new method. Our computer simulations demonstrate that our method outperforms any method that searches for an optimal dose but does not allow schedule to vary, both in terms of the probability of identifying optimal (dose, schedule) combinations and the number of patients assigned to those combinations in the trial. Our design requires greater sample sizes than those seen in traditional phase I studies, due to the larger number of treatment combinations examined. Our design also assumes that the effects of multiple administrations are independent of each other and that the hazard of toxicity is the same for all administrations. Our design is the first for phase I clinical trials that is sufficiently flexible and practical to truly reflect clinical practice by varying both the dose and the timing and number of administrations given to each patient.
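Under the stated independence assumption, the overall risk of toxicity for a (dose, schedule) combination composes directly from per-administration risks; a one-function sketch (probabilities invented for illustration):

    import numpy as np

    def overall_toxicity_prob(per_admin_probs):
        # Under the design's independence assumption, the probability that
        # at least one administration causes toxicity is the complement of
        # every administration being toxicity-free.
        p = np.asarray(per_admin_probs)
        return 1.0 - np.prod(1.0 - p)

    # e.g., a 4-course schedule with a 5% toxicity risk per course:
    print(overall_toxicity_prob([0.05, 0.05, 0.05, 0.05]))   # about 0.185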
Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.
Orchard, Garrick; Jayawant, Ajinkya; Cohen, Gregory K; Thakor, Nitish
2015-01-01
Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.
NASA Astrophysics Data System (ADS)
Molde, H.; Zwick, D.; Muskulus, M.
2014-12-01
Support structures for offshore wind turbines contribute a large part of the total project cost, and a cost saving of a few percent would have considerable impact. At present, support structures are designed with simplified methods, e.g., spreadsheet analysis, before more detailed load calculations are performed. Due to the large number of load cases, only a few semi-manual design iterations are typically executed. Computer-assisted optimization algorithms could help to further explore design limits and avoid unnecessary conservatism. In this study the simultaneous perturbation stochastic approximation (SPSA) method, developed by Spall in the 1990s, was assessed with respect to its suitability for support structure optimization. The method depends on a few parameters and an objective function that need to be chosen carefully. In each iteration the structure is evaluated by time-domain analyses, and joint fatigue lifetimes and ultimate strength utilization are computed from stress concentration factors. A pseudo-gradient is determined from only two analysis runs, and the design is adjusted in the direction that improves it the most. The algorithm is able to generate considerably improved designs, compared to other methods, in a few hundred iterations, which is demonstrated for the NOWITECH 10 MW reference turbine.
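The two-run pseudo-gradient step is the core of Spall's SPSA; a minimal sketch with a cheap stand-in objective in place of the time-domain fatigue analysis (gain schedules follow common SPSA practice; all numbers are illustrative):

    import numpy as np

    def spsa_minimize(f, x0, n_iter=200, a=0.1, c=0.1, seed=0):
        # Simultaneous perturbation stochastic approximation (Spall): each
        # iteration estimates the full gradient from only two evaluations
        # of f, at x + c_k*delta and x - c_k*delta.
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for k in range(1, n_iter + 1):
            a_k, c_k = a / k**0.602, c / k**0.101         # common gain schedules
            delta = rng.choice([-1.0, 1.0], size=x.size)  # Rademacher perturbation
            g_hat = (f(x + c_k * delta) - f(x - c_k * delta)) / (2 * c_k * delta)
            x -= a_k * g_hat
        return x

    # Cheap stand-in for the structural objective (e.g., mass plus fatigue penalty):
    f = lambda x: np.sum((x - np.array([3.0, -1.0, 2.0])) ** 2)
    print(spsa_minimize(f, x0=np.zeros(3)))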
Time and number of displays impact critical signal detection in fetal heart rate tracings.
Anderson, Brittany L; Scerbo, Mark W; Belfore, Lee A; Abuhamad, Alfred Z
2011-06-01
Interest in centralized monitoring in labor and delivery units is growing because it affords the opportunity to monitor multiple patients simultaneously. However, a long history of research on sustained attention reveals these types of monitoring tasks can be problematic. The goal of the present experiment was to examine the ability of individuals to detect critical signals in fetal heart rate (FHR) tracings in one or more displays over an extended period of time. Seventy-two participants monitored one, two, or four computer-simulated FHR tracings on a computer display for the appearance of late decelerations over a 48-minute vigil. Measures of subjective stress and workload were also obtained before and after the vigil. The results showed that detection accuracy decreased over time and also declined as the number of displays increased. The subjective reports indicated that participants found the task to be stressful and mentally demanding, effortful, and frustrating. The results suggest that centralized monitoring that allows many patients to be monitored simultaneously may impose a detrimental attentional burden on the observer. Furthermore, this seemingly benign task may impose an additional source of stress and mental workload above what is commonly found in labor and delivery units. © Thieme Medical Publishers.
Synapse-Centric Mapping of Cortical Models to the SpiNNaker Neuromorphic Architecture
Knight, James C.; Furber, Steve B.
2016-01-01
While the adult human brain has approximately 8.8 × 10¹⁰ neurons, this number is dwarfed by its 1 × 10¹⁵ synapses. From the point of view of neuromorphic engineering and neural simulation in general, this makes the simulation of these synapses a particularly complex problem. SpiNNaker is a digital neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Current solutions for simulating spiking neural networks on SpiNNaker are heavily inspired by work on distributed high-performance computing. However, while SpiNNaker shares many characteristics with such distributed systems, its component nodes have much more limited resources and, as the system lacks global synchronization, the computation performed on each node must complete within a fixed time step. We first analyze the performance of the current SpiNNaker neural simulation software and identify several problems that occur when it is used to simulate networks of the type often used to model the cortex, which contain large numbers of sparsely connected synapses. We then present a new, more flexible approach for mapping the simulation of such networks to SpiNNaker which solves many of these problems. Finally we analyze the performance of our new approach using both benchmarks, designed to represent cortical connectivity, and larger, functional cortical models. In a benchmark network where neurons receive input from 8000 STDP synapses, our new approach allows 4× more neurons to be simulated on each SpiNNaker core than has been previously possible. We also demonstrate that the largest plastic neural network previously simulated on neuromorphic hardware can be run in real time using our new approach: double the speed that was previously achieved. Additionally this network contains two types of plastic synapse which previously had to be trained separately but, using our new approach, can be trained simultaneously. PMID:27683540
Andrianakis, Ioannis; Vernon, Ian R.; McCreesh, Nicky; McKinley, Trevelyan J.; Oakley, Jeremy E.; Nsubuga, Rebecca N.; Goldstein, Michael; White, Richard G.
2015-01-01
Advances in scientific computing have allowed the development of complex models that are being routinely applied to problems in disease epidemiology, public health and decision making. The utility of these models depends in part on how well they can reproduce empirical data. However, fitting such models to real world data is greatly hindered both by large numbers of input and output parameters, and by long run times, such that many modelling studies lack a formal calibration methodology. We present a novel method that has the potential to improve the calibration of complex infectious disease models (hereafter called simulators). We present this in the form of a tutorial and a case study where we history match a dynamic, event-driven, individual-based stochastic HIV simulator, using extensive demographic, behavioural and epidemiological data available from Uganda. The tutorial describes history matching and emulation. History matching is an iterative procedure that reduces the simulator's input space by identifying and discarding areas that are unlikely to provide a good match to the empirical data. History matching relies on the computational efficiency of a Bayesian representation of the simulator, known as an emulator. Emulators mimic the simulator's behaviour, but are often several orders of magnitude faster to evaluate. In the case study, we use a 22 input simulator, fitting its 18 outputs simultaneously. After 9 iterations of history matching, a non-implausible region of the simulator input space was identified that was times smaller than the original input space. Simulator evaluations made within this region were found to have a 65% probability of fitting all 18 outputs. History matching and emulation are useful additions to the toolbox of infectious disease modellers. Further research is required to explicitly address the stochastic nature of the simulator as well as to account for correlations between outputs. PMID:25569850
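The heart of history matching is an implausibility measure that discards inputs whose emulated outputs sit too many standard deviations from the observations; a minimal sketch in the notation common to the history-matching literature (all numbers invented):

    import numpy as np

    def implausibility(emu_mean, emu_var, z_obs, var_disc, var_obs):
        # I(x) = |z - E[f(x)]| / sqrt(Var_emulator + Var_discrepancy + Var_obs)
        return np.abs(z_obs - emu_mean) / np.sqrt(emu_var + var_disc + var_obs)

    emu_mean = np.array([0.12, 0.30])    # emulator predictions for two outputs
    emu_var = np.array([1e-4, 4e-4])     # emulator variances at input x
    z_obs = np.array([0.10, 0.33])       # observed data
    I = implausibility(emu_mean, emu_var, z_obs, var_disc=1e-4, var_obs=1e-4)

    # Keep x when the maximum implausibility over all outputs is small;
    # a cutoff of 3 is conventional (Pukelsheim's three-sigma rule).
    non_implausible = np.max(I) < 3.0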
Hossain, Md Shakhawath; Bergstrom, D J; Chen, X B
2015-11-01
The in vitro chondrocyte cell culture process in a perfusion bioreactor provides enhanced nutrient supply as well as the flow-induced shear stress that may have a positive influence on the cell growth. Mathematical and computational modelling of such a culture process, by solving the coupled flow, mass transfer and cell growth equations simultaneously, can provide important insight into the biomechanical environment of a bioreactor and the related cell growth process. To do this, a two-way coupling between the local flow field and cell growth is required. Notably, most of the computational and mathematical models to date have not taken into account the influence of the cell growth on the local flow field and nutrient concentration. The present research aimed at developing a mathematical model and performing a numerical simulation using the lattice Boltzmann method to predict the chondrocyte cell growth without a scaffold on a flat plate placed inside a perfusion bioreactor. The model considers the two-way coupling between the cell growth and local flow field, and the simulation has been performed for 174 culture days. To incorporate the cell growth into the model, a control-volume-based surface growth modelling approach has been adopted. The simulation results show the variation of local fluid velocity, shear stress and concentration distribution during the culture period due to the growth of the cell phase and also illustrate that the shear stress can increase the cell volume fraction to a certain extent.
The feasibility of an efficient drug design method with high-performance computers.
Yamashita, Takefumi; Ueda, Akihiko; Mitsui, Takashi; Tomonaga, Atsushi; Matsumoto, Shunji; Kodama, Tatsuhiko; Fujitani, Hideaki
2015-01-01
In this study, we propose a supercomputer-assisted drug design approach involving all-atom molecular dynamics (MD)-based binding free energy prediction after the traditional design/selection step. Because this prediction is more accurate than the empirical binding affinity scoring of the traditional approach, the compounds selected by the MD-based prediction should be better drug candidates. We discuss the applicability of the new approach using two examples. Although MD-based binding free energy prediction has a huge computational cost, it is feasible with the latest 10-petaflop-scale computers. The supercomputer-assisted drug design approach also involves two important feedback procedures. The first feedback runs from the MD-based binding free energy prediction step to the drug design step: while experimental feedback usually provides binding affinities of tens of compounds at one time, the supercomputer allows us to obtain the binding free energies of hundreds of compounds simultaneously. Because the number of calculated binding free energies is sufficiently large, the compounds can be classified into different categories whose properties will aid in the design of the next generation of drug candidates. The second feedback, which runs from the experiments to the MD simulations, is important for validating the simulation parameters. To demonstrate this, we compare the binding free energies calculated with various force fields to the experimental ones. The results indicate that the prediction will not be very successful if an inaccurate force field is used. By improving/validating such simulation parameters, the next prediction can be made more accurate.
Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.
Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya
2018-05-05
This paper proposes a novel filtering design, from the viewpoint of identification rather than conventional nonlinear estimation schemes (NESs), to improve the performance of orbit state estimation for a space target. First, the nonlinear perturbation is modeled as an unknown input (UI) coupled with the orbit state, to avoid the intractable nonlinear perturbation integral (INPI) required by NESs. Second, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation-maximization (EM) framework, is proposed to simply and analytically fit or identify the first two moments (FTM) of the perturbation (viewed as UI), instead of directly computing the INPI as in NESs. Orbit estimation performance is greatly improved by using the fitted UI-FTM to simultaneously correct the state estimate and its covariance. Third, SMCCF should outperform existing NESs and standard identification algorithms (which view the UI as a constant independent of the state and use only the identified UI mean to correct the state estimate, regardless of its covariance), since it mines more of the available information, incorporating the covariance of the UI in addition to its mean. Finally, our simulations demonstrate the superior performance of SMCCF via an orbit estimation example.
NASA Astrophysics Data System (ADS)
Lin, Hsien-I.; Nguyen, Xuan-Anh
2017-05-01
To operate a redundant manipulator that accomplishes end-effector trajectory planning while simultaneously controlling its gesture in online programming, incorporating human motion is a useful and flexible option. This paper focuses on a manipulative instrument that can simultaneously control its arm gesture and end-effector trajectory via human teleoperation. The instrument consists of two parts: first, for human motion capture and data processing, marker systems are proposed to capture human gesture; second, the manipulator kinematics control is implemented by an augmented multi-tasking method and by forward and backward reaching inverse kinematics (FABRIK), respectively. In particular, the local-solution and divergence problems of the multi-tasking method are resolved by the proposed augmented multi-tasking method. Computer simulations and experiments with a 7-DOF (degree-of-freedom) redundant manipulator were used to validate the proposed method. Comparisons among the single-tasking, original multi-tasking, and augmented multi-tasking algorithms showed that the proposed augmented method had good end-effector position accuracy and the gesture most similar to the human gesture. Additionally, the experimental results showed that the proposed instrument was realized online.
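The backward/forward pass named above is straightforward to state in code; a minimal 2D sketch of forward and backward reaching inverse kinematics for a fixed-length chain (no joint limits and none of the paper's multi-tasking augmentation):

    import numpy as np

    def fabrik(joints, target, tol=1e-4, max_iter=100):
        # Alternately re-anchor the chain at the target (backward pass) and
        # at the base (forward pass), preserving segment lengths throughout.
        joints = np.asarray(joints, dtype=float)
        lengths = np.linalg.norm(np.diff(joints, axis=0), axis=1)
        base = joints[0].copy()
        for _ in range(max_iter):
            joints[-1] = target                        # backward pass
            for i in range(len(joints) - 2, -1, -1):
                d = joints[i] - joints[i + 1]
                joints[i] = joints[i + 1] + lengths[i] * d / np.linalg.norm(d)
            joints[0] = base                           # forward pass
            for i in range(1, len(joints)):
                d = joints[i] - joints[i - 1]
                joints[i] = joints[i - 1] + lengths[i - 1] * d / np.linalg.norm(d)
            if np.linalg.norm(joints[-1] - target) < tol:
                break
        return joints

    arm = fabrik([[0, 0], [1, 0], [2, 0], [3, 0]], target=[1.5, 1.5])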
Physics Computing '92: Proceedings of the 4th International Conference
NASA Astrophysics Data System (ADS)
de Groot, Robert A.; Nadrchal, Jaroslav
1993-04-01
The Table of Contents for the book is as follows: * Preface * INVITED PAPERS * Ab Initio Theoretical Approaches to the Structural, Electronic and Vibrational Properties of Small Clusters and Fullerenes: The State of the Art * Neural Multigrid Methods for Gauge Theories and Other Disordered Systems * Multicanonical Monte Carlo Simulations * On the Use of the Symbolic Language Maple in Physics and Chemistry: Several Examples * Nonequilibrium Phase Transitions in Catalysis and Population Models * Computer Algebra, Symmetry Analysis and Integrability of Nonlinear Evolution Equations * The Path-Integral Quantum Simulation of Hydrogen in Metals * Digital Optical Computing: A New Approach of Systolic Arrays Based on Coherence Modulation of Light and Integrated Optics Technology * Molecular Dynamics Simulations of Granular Materials * Numerical Implementation of a K.A.M. Algorithm * Quasi-Monte Carlo, Quasi-Random Numbers and Quasi-Error Estimates * What Can We Learn from QMC Simulations * Physics of Fluctuating Membranes * Plato, Apollonius, and Klein: Playing with Spheres * Steady States in Nonequilibrium Lattice Systems * CONVODE: A REDUCE Package for Differential Equations * Chaos in Coupled Rotators * Symplectic Numerical Methods for Hamiltonian Problems * Computer Simulations of Surfactant Self Assembly * High-dimensional and Very Large Cellular Automata for Immunological Shape Space * A Review of the Lattice Boltzmann Method * Electronic Structure of Solids in the Self-interaction Corrected Local-spin-density Approximation * Dedicated Computers for Lattice Gauge Theory Simulations * Physics Education: A Survey of Problems and Possible Solutions * Parallel Computing and Electronic-Structure Theory * High Precision Simulation Techniques for Lattice Field Theory * CONTRIBUTED PAPERS * Case Study of Microscale Hydrodynamics Using Molecular Dynamics and Lattice Gas Methods * Computer Modelling of the Structural and Electronic Properties of the Supported Metal Catalysis * Ordered Particle Simulations for Serial and MIMD Parallel Computers * "NOLP" -- Program Package for Laser Plasma Nonlinear Optics * Algorithms to Solve Nonlinear Least Square Problems * Distribution of Hydrogen Atoms in Pd-H Computed by Molecular Dynamics * A Ray Tracing of Optical System for Protein Crystallography Beamline at Storage Ring-SIBERIA-2 * Vibrational Properties of a Pseudobinary Linear Chain with Correlated Substitutional Disorder * Application of the Software Package Mathematica in Generalized Master Equation Method * Linelist: An Interactive Program for Analysing Beam-foil Spectra * GROMACS: A Parallel Computer for Molecular Dynamics Simulations * GROMACS Method of Virial Calculation Using a Single Sum * The Interactive Program for the Solution of the Laplace Equation with the Elimination of Singularities for Boundary Functions * Random-Number Generators: Testing Procedures and Comparison of RNG Algorithms * Micro-TOPIC: A Tokamak Plasma Impurities Code * Rotational Molecular Scattering Calculations * Orthonormal Polynomial Method for Calibrating of Cryogenic Temperature Sensors * Frame-based System Representing Basis of Physics * The Role of Massively Data-parallel Computers in Large Scale Molecular Dynamics Simulations * Short-range Molecular Dynamics on a Network of Processors and Workstations * An Algorithm for Higher-order Perturbation Theory in Radiative Transfer Computations * Hydrostochastics: The Master Equation Formulation of Fluid Dynamics * HPP Lattice Gas on Transputers and Networked Workstations * 
Study on the Hysteresis Cycle Simulation Using Modeling with Different Functions on Intervals * Refined Pruning Techniques for Feed-forward Neural Networks * Random Walk Simulation of the Motion of Transient Charges in Photoconductors * The Optical Hysteresis in Hydrogenated Amorphous Silicon * Diffusion Monte Carlo Analysis of Modern Interatomic Potentials for He * A Parallel Strategy for Molecular Dynamics Simulations of Polar Liquids on Transputer Arrays * Distribution of Ions Reflected on Rough Surfaces * The Study of Step Density Distribution During Molecular Beam Epitaxy Growth: Monte Carlo Computer Simulation * Towards a Formal Approach to the Construction of Large-scale Scientific Applications Software * Correlated Random Walk and Discrete Modelling of Propagation through Inhomogeneous Media * Teaching Plasma Physics Simulation * A Theoretical Determination of the Au-Ni Phase Diagram * Boson and Fermion Kinetics in One-dimensional Lattices * Computational Physics Course on the Technical University * Symbolic Computations in Simulation Code Development and Femtosecond-pulse Laser-plasma Interaction Studies * Computer Algebra and Integrated Computing Systems in Education of Physical Sciences * Coordinated System of Programs for Undergraduate Physics Instruction * Program Package MIRIAM and Atomic Physics of Extreme Systems * High Energy Physics Simulation on the T_Node * The Chapman-Kolmogorov Equation as Representation of Huygens' Principle and the Monolithic Self-consistent Numerical Modelling of Lasers * Authoring System for Simulation Developments * Molecular Dynamics Study of Ion Charge Effects in the Structure of Ionic Crystals * A Computational Physics Introductory Course * Computer Calculation of Substrate Temperature Field in MBE System * Multimagnetical Simulation of the Ising Model in Two and Three Dimensions * Failure of the CTRW Treatment of the Quasicoherent Excitation Transfer * Implementation of a Parallel Conjugate Gradient Method for Simulation of Elastic Light Scattering * Algorithms for Study of Thin Film Growth * Algorithms and Programs for Physics Teaching in Romanian Technical Universities * Multicanonical Simulation of 1st order Transitions: Interface Tension of the 2D 7-State Potts Model * Two Numerical Methods for the Calculation of Periodic Orbits in Hamiltonian Systems * Chaotic Behavior in a Probabilistic Cellular Automata? 
* Wave Optics Computing by a Networked-based Vector Wave Automaton * Tensor Manipulation Package in REDUCE * Propagation of Electromagnetic Pulses in Stratified Media * The Simple Molecular Dynamics Model for the Study of Thermalization of the Hot Nucleon Gas * Electron Spin Polarization in PdCo Alloys Calculated by KKR-CPA-LSD Method * Simulation Studies of Microscopic Droplet Spreading * A Vectorizable Algorithm for the Multicolor Successive Overrelaxation Method * Tetragonality of the CuAu I Lattice and Its Relation to Electronic Specific Heat and Spin Susceptibility * Computer Simulation of the Formation of Metallic Aggregates Produced by Chemical Reactions in Aqueous Solution * Scaling in Growth Models with Diffusion: A Monte Carlo Study * The Nucleus as the Mesoscopic System * Neural Network Computation as Dynamic System Simulation * First-principles Theory of Surface Segregation in Binary Alloys * Data Smooth Approximation Algorithm for Estimating the Temperature Dependence of the Ice Nucleation Rate * Genetic Algorithms in Optical Design * Application of 2D-FFT in the Study of Molecular Exchange Processes by NMR * Advanced Mobility Model for Electron Transport in P-Si Inversion Layers * Computer Simulation for Film Surfaces and its Fractal Dimension * Parallel Computation Techniques and the Structure of Catalyst Surfaces * Educational SW to Teach Digital Electronics and the Corresponding Text Book * Primitive Trinomials (Mod 2) Whose Degree is a Mersenne Exponent * Stochastic Modelisation and Parallel Computing * Remarks on the Hybrid Monte Carlo Algorithm for the φ4 Model * An Experimental Computer Assisted Workbench for Physics Teaching * A Fully Implicit Code to Model Tokamak Plasma Edge Transport * EXPFIT: An Interactive Program for Automatic Beam-foil Decay Curve Analysis * Mapping Technique for Solving General, 1-D Hamiltonian Systems * Freeway Traffic, Cellular Automata, and Some (Self-Organizing) Criticality * Photonuclear Yield Analysis by Dynamic Programming * Incremental Representation of the Simply Connected Planar Curves * Self-convergence in Monte Carlo Methods * Adaptive Mesh Technique for Shock Wave Propagation * Simulation of Supersonic Coronal Streams and Their Interaction with the Solar Wind * The Nature of Chaos in Two Systems of Ordinary Nonlinear Differential Equations * Considerations of a Window-shopper * Interpretation of Data Obtained by RTP 4-Channel Pulsed Radar Reflectometer Using a Multi Layer Perceptron * Statistics of Lattice Bosons for Finite Systems * Fractal Based Image Compression with Affine Transformations * Algorithmic Studies on Simulation Codes for Heavy-ion Reactions * An Energy-Wise Computer Simulation of DNA-Ion-Water Interactions Explains the Abnormal Structure of Poly[d(A)]:Poly[d(T)] * Computer Simulation Study of Kosterlitz-Thouless-Like Transitions * Problem-oriented Software Package GUN-EBT for Computer Simulation of Beam Formation and Transport in Technological Electron-Optical Systems * Parallelization of a Boundary Value Solver and its Application in Nonlinear Dynamics * The Symbolic Classification of Real Four-dimensional Lie Algebras * Short, Singular Pulses Generation by a Dye Laser at Two Wavelengths Simultaneously * Quantum Monte Carlo Simulations of the Apex-Oxygen-Model * Approximation Procedures for the Axial Symmetric Static Einstein-Maxwell-Higgs Theory * Crystallization on a Sphere: Parallel Simulation on a Transputer Network * FAMULUS: A Software Product (also) for Physics Education * MathCAD vs. FAMULUS -- A Brief Comparison * First-principles Dynamics Used to Study Dissociative Chemisorption * A Computer Controlled System for Crystal Growth from Melt * A Time Resolved Spectroscopic Method for Short Pulsed Particle Emission * Green's Function Computation in Radiative Transfer Theory * Random Search Optimization Technique for One-criteria and Multi-criteria Problems * Hartley Transform Applications to Thermal Drift Elimination in Scanning Tunneling Microscopy * Algorithms of Measuring, Processing and Interpretation of Experimental Data Obtained with Scanning Tunneling Microscope * Time-dependent Atom-surface Interactions * Local and Global Minima on Molecular Potential Energy Surfaces: An Example of N3 Radical * Computation of Bifurcation Surfaces * Symbolic Computations in Quantum Mechanics: Energies in Next-to-solvable Systems * A Tool for RTP Reactor and Lamp Field Design * Modelling of Particle Spectra for the Analysis of Solid State Surface * List of Participants
Coleman, Mari Beth; Cherry, Rebecca A; Moore, Tara C; Park, Yujeong; Cihak, David F
2015-06-01
The purpose of this study was to compare the effects of teacher-directed simultaneous prompting with computer-assisted simultaneous prompting for teaching sight words to 3 elementary school students with intellectual disability. Activities in the computer-assisted condition were designed with Intellitools Classroom Suite software, whereas traditional materials (i.e., flashcards) were used in the teacher-directed condition. Treatment conditions were compared using an adapted alternating-treatments design. All 3 participants acquired sight words in both conditions; however, each participant either clearly responded better in the teacher-directed condition or, when performance was similar, reported a preference for the teacher-directed condition even though computer-assisted instruction was more efficient. Practical implications and directions for future research are discussed.
Number Strings: Daily Computational Fluency
ERIC Educational Resources Information Center
Lambert, Rachel; Imm, Kara; Williams, Dina A.
2017-01-01
In this article, the authors illustrate how the practice of number strings--used regularly in a classroom community--can simultaneously support computational fluency and building conceptual understanding. Specifically, the authors will demonstrate how a lesson about multi-digit addition (CCSSM 2NBT.B.5) can simultaneously serve as an invitation to…
Liquid Crystal Spatial Light Modulators for Simulating Zonal Multifocal Lenses.
Li, Yiyu; Bradley, Arthur; Xu, Renfeng; Kollbaum, Pete S
2017-09-01
To maximize the efficiency of the normally lengthy and costly multizone lens design and testing process, it is advantageous to evaluate the potential efficacy of a design as thoroughly as possible prior to lens fabrication and on-eye testing. The current work describes an ex vivo approach to optical design testing. The aim of this study was to describe a system capable of examining the optical characteristics of multizone bifocal and multifocal optics by subaperture stitching using liquid crystal technologies. A liquid crystal spatial light modulator (SLM) was incorporated in each of two channels to generate complementary subapertures by amplitude modulation. Additional trial lenses and phase plates were placed in pupil-conjugate planes of either channel to integrate the desired bifocal and multifocal optics once the two optical paths were recombined. A high-resolution Shack-Hartmann aberrometer was integrated to measure the optics of the dual-channel system. Power and wavefront error maps as well as point spread functions were measured and computed for each of three multizone multifocal designs. High transmission modulation was achieved by introducing half-wavelength optical path differences to create two- and five-zone bifocal apertures. Dual-channel stitching revealed classic annular rings in the point spread functions generated from two-zone designs when the outer annular optic was defocused. However, the low efficiency of the SLM prevented us from simultaneously measuring the eye + simulator aberrations, and the higher-order diffraction patterns generated by the cellular structure of the liquid crystal arrays limited the visual field to ±0.45 degrees. The system successfully simulated bifocal and multifocal simultaneous lenses, allowing for future objective and subjective evaluation of complex optical designs. However, the low efficiency and diffraction phenomena of the SLM limit the utility of this technology for simulating multizone and multifocal optics.
Modeling the Proton Radiation Belt With Van Allen Probes Relativistic Electron-Proton Telescope Data
NASA Technical Reports Server (NTRS)
Kanekal, S. G.; Li, X.; Baker, D. N.; Selesnick, R. S.; Hoxie, V. C.
2018-01-01
An empirical model of the proton radiation belt is constructed from data taken during 2013-2017 by the Relativistic Electron-Proton Telescopes on the Van Allen Probes satellites. The model intensity is a function of time, kinetic energy in the range 18-600 megaelectronvolts, equatorial pitch angle, and L shell of proton guiding centers. Data are selected, on the basis of energy deposits in each of the nine silicon detectors, to reduce background caused by hard proton energy spectra at low L. Instrument response functions are computed by Monte Carlo integration, using simulated proton paths through a simplified structural model, to account for energy loss in shielding material for protons outside the nominal field of view. Overlap of energy channels, their wide angular response, and changing satellite orientation require the model dependencies on all three independent variables be determined simultaneously. This is done by least squares minimization with a customized steepest descent algorithm. Model uncertainty accounts for statistical data error and systematic error in the simulated instrument response. A proton energy spectrum is also computed from data taken during the 8 January 2014 solar event, to illustrate methods for the simpler case of an isotropic and homogeneous model distribution. Radiation belt and solar proton results are compared to intensities computed with a simplified, on-axis response that can provide a good approximation under limited circumstances.
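The fitting step described, least squares minimization by steepest descent, can be sketched generically in Python (the response matrix below is a random stand-in, not the simulated REPT instrument response; the learning rate and noise are invented):

    import numpy as np

    def steepest_descent_lsq(R, counts, x0, lr=1e-2, n_iter=5000):
        # Minimize ||R x - counts||^2 by steepest descent, where R maps
        # model intensity parameters x to expected detector counts.
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            grad = 2.0 * R.T @ (R @ x - counts)   # gradient of the squared residual
            x -= lr * grad
        return x

    rng = np.random.default_rng(1)
    R = rng.random((20, 3))            # toy response matrix: 20 channels, 3 parameters
    x_true = np.array([2.0, 0.5, 1.0])
    counts = R @ x_true + rng.normal(0.0, 0.01, 20)
    print(steepest_descent_lsq(R, counts, x0=np.ones(3)))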
Parallel algorithm for multiscale atomistic/continuum simulations using LAMMPS
NASA Astrophysics Data System (ADS)
Pavia, F.; Curtin, W. A.
2015-07-01
Deformation and fracture processes in engineering materials often require simultaneous descriptions over a range of length and time scales, with each scale using a different computational technique. Here we present a high-performance parallel 3D computing framework for executing large multiscale studies that couple an atomic domain, modeled using molecular dynamics and a continuum domain, modeled using explicit finite elements. We use the robust Coupled Atomistic/Discrete-Dislocation (CADD) displacement-coupling method, but without the transfer of dislocations between atoms and continuum. The main purpose of the work is to provide a multiscale implementation within an existing large-scale parallel molecular dynamics code (LAMMPS) that enables use of all the tools associated with this popular open-source code, while extending CADD-type coupling to 3D. Validation of the implementation includes the demonstration of (i) stability in finite-temperature dynamics using Langevin dynamics, (ii) elimination of wave reflections due to large dynamic events occurring in the MD region and (iii) the absence of spurious forces acting on dislocations due to the MD/FE coupling, for dislocations further than 10 Å from the coupling boundary. A first non-trivial example application of dislocation glide and bowing around obstacles is shown, for dislocation lengths of ∼50 nm using fewer than 1 000 000 atoms but reproducing results of extremely large atomistic simulations at much lower computational cost.
PLAYGROUND: preparing students for the cyber battleground
NASA Astrophysics Data System (ADS)
Nielson, Seth James
2016-12-01
Attempting to educate practitioners of computer security can be difficult if for no other reason than the breadth of knowledge required today. The security profession includes widely diverse subfields including cryptography, network architectures, programming, programming languages, design, coding practices, software testing, pattern recognition, economic analysis, and even human psychology. While an individual may choose to specialize in one of these more narrow elements, there is a pressing need for practitioners that have a solid understanding of the unifying principles of the whole. We created the Playground network simulation tool and used it in the instruction of a network security course to graduate students. This tool was created for three specific purposes. First, it provides simulation sufficiently powerful to permit rigorous study of desired principles while simultaneously reducing or eliminating unnecessary and distracting complexities. Second, it permitted the students to rapidly prototype a suite of security protocols and mechanisms. Finally, with equal rapidity, the students were able to develop attacks against the protocols that they themselves had created. Based on our own observations and student reviews, we believe that these three features combine to create a powerful pedagogical tool that provides students with a significant amount of breadth and intense emotional connection to computer security in a single semester.
Hu, Jiayu; Chen, Zhenxian; Xin, Hua; Zhang, Qida; Jin, Zhongmin
2018-05-01
Detailed knowledge of the in vivo loading and kinematics in the knee joint is essential to understand its normal functions and the aetiology of osteoarthritis. Computer models provide a viable non-invasive solution for estimating joint loading and kinematics during different physiological activities. However, the joint loading and kinematics of the tibiofemoral and patellofemoral joints during a gait cycle were not typically investigated concurrently in previous computational simulations. In this study, a natural knee architecture was incorporated into a lower extremity musculoskeletal multibody dynamics model based on a force-dependent kinematics approach to investigate the contact mechanics and kinematics of a natural knee joint during a walking cycle. Specifically, the contact forces between the femoral/tibial articular cartilages and menisci and between the femoral and tibial/patellar articular cartilages were quantified. The contact forces and kinematics of the tibiofemoral and patellofemoral joints and the muscle activations and ligament forces were predicted simultaneously with a reasonable level of accuracy. The developed musculoskeletal multibody dynamics model with a natural knee architecture can serve as a potential platform for assisting clinical decision-making and postoperative rehabilitation planning.
A Simulation Framework for Battery Cell Impact Safety Modeling Using LS-DYNA
Marcicki, James; Zhu, Min; Bartlett, Alexander; ...
2017-02-04
The development process of electrified vehicles can benefit significantly from computer-aided engineering tools that predict the multiphysics response of batteries during abusive events. A coupled structural, electrical, electrochemical, and thermal model framework has been developed within the commercially available LS-DYNA software. The finite element model leverages a three-dimensional mesh structure that fully resolves the unit cell components. The mechanical solver predicts the distributed stress and strain response, with failure thresholds leading to the onset of an internal short circuit. In this implementation, an arbitrary compressive strain criterion is applied locally to each unit cell. A spatially distributed equivalent circuit model provides an empirical representation of the electrochemical response with minimal computational complexity. The thermal model provides state information to index the electrical model parameters, while simultaneously accepting irreversible and reversible sources of heat generation. The spatially distributed models of the electrical and thermal dynamics allow for the localization of current density and the corresponding temperature response. The ability to predict the distributed thermal response of the cell as its stored energy is completely discharged through the short circuit enables an engineering safety assessment. A parametric analysis of an exemplary model is used to demonstrate the simulation capabilities.
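For a single node, the kind of equivalent circuit the framework distributes spatially can be illustrated with a first-order Thevenin model (all parameters invented; the actual LS-DYNA parameterization is empirical):

    import numpy as np

    def thevenin_step(v_rc, i_cell, dt, r0=0.002, r1=0.001, c1=2000.0, ocv=3.7):
        # One time step of a first-order equivalent circuit: an open-circuit
        # voltage source behind a series resistance R0 and a single RC pair.
        v_rc += dt * (i_cell / c1 - v_rc / (r1 * c1))
        v_term = ocv - i_cell * r0 - v_rc          # terminal voltage
        q_heat = i_cell**2 * r0 + v_rc**2 / r1     # irreversible heat to the thermal model
        return v_rc, v_term, q_heat

    v_rc = 0.0
    for _ in range(1000):                          # 10 s of a 100 A short circuit
        v_rc, v_term, q_heat = thevenin_step(v_rc, i_cell=100.0, dt=0.01)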
Predictive Simulations of Neuromuscular Coordination and Joint-Contact Loading in Human Gait.
Lin, Yi-Chung; Walter, Jonathan P; Pandy, Marcus G
2018-04-18
We implemented direct collocation on a full-body neuromusculoskeletal model to calculate muscle forces, ground reaction forces and knee contact loading simultaneously for one cycle of human gait. A data-tracking collocation problem was solved for walking at the normal speed to establish the practicality of incorporating a 3D model of articular contact and a model of foot-ground interaction explicitly in a dynamic optimization simulation. The data-tracking solution then was used as an initial guess to solve predictive collocation problems, where novel patterns of movement were generated for walking at slow and fast speeds, independent of experimental data. The data-tracking solutions accurately reproduced joint motion, ground forces and knee contact loads measured for two total knee arthroplasty patients walking at their preferred speeds. RMS errors in joint kinematics were < 2.0° for rotations and < 0.3 cm for translations while errors in the model-computed ground-reaction and knee-contact forces were < 0.07 BW and < 0.4 BW, respectively. The predictive solutions were also consistent with joint kinematics, ground forces, knee contact loads and muscle activation patterns measured for slow and fast walking. The results demonstrate the feasibility of performing computationally-efficient, predictive, dynamic optimization simulations of movement using full-body, muscle-actuated models with realistic representations of joint function.
A Network Thermodynamic Approach to Compartmental Analysis
Mikulecky, D. C.; Huf, E. G.; Thomas, S. R.
1979-01-01
We introduce a general network thermodynamic method for compartmental analysis which uses a compartmental model of sodium flows through frog skin as an illustrative example (Huf and Howell, 1974a). We use network thermodynamics (Mikulecky et al., 1977b) to formulate the problem, and a circuit simulation program (ASTEC 2, SPICE2, or PCAP) for computation. In this way, the compartment concentrations and net fluxes between compartments are readily obtained for a set of experimental conditions involving a square-wave pulse of labeled sodium at the outer surface of the skin. Qualitative features of the influx at the outer surface correlate very well with those observed for the short circuit current under another similar set of conditions by Morel and LeBlanc (1975). In related work, the compartmental model is used as a basis for simulation of the short circuit current and sodium flows simultaneously using a two-port network (Mikulecky et al., 1977a, and Mikulecky et al., A network thermodynamic model for short circuit current transients in frog skin. Manuscript in preparation; Gary-Bobo et al., 1978). The network approach lends itself to computation of classic compartmental problems in a simple manner using circuit simulation programs (Chua and Lin, 1975), and it further extends the compartmental models to more complicated situations involving coupled flows and non-linearities such as concentration dependencies, chemical reaction kinetics, etc. PMID:262387
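The circuit analogy is direct: a linear compartmental model is the same first-order ODE system a circuit simulator integrates. A small sketch in Python (rate constants invented) in place of ASTEC 2/SPICE2/PCAP:

    import numpy as np
    from scipy.linalg import expm

    # Three compartments (outer bath, skin, inner bath); K[i, j] is the
    # first-order rate constant for transfer from compartment j to i (1/min).
    # Each column sums to zero, so tracer mass is conserved.
    K = np.array([[-0.5,  0.1,  0.0],
                  [ 0.5, -0.3,  0.1],
                  [ 0.0,  0.2, -0.1]])
    c0 = np.array([1.0, 0.0, 0.0])    # labeled sodium applied at the outer surface

    # dc/dt = K c  =>  c(t) = expm(K t) c0, the system a circuit simulator integrates
    for t in (1.0, 5.0, 25.0):
        print(t, expm(K * t) @ c0)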
Geometrical modeling of optical phase difference for analyzing atmospheric turbulence
NASA Astrophysics Data System (ADS)
Yuksel, Demet; Yuksel, Heba
2013-09-01
Ways of calculating phase shifts between laser beams propagating through atmospheric turbulence can give insight into the role of spatial diversity in Free-Space Optical (FSO) links. We propose a new geometrical model to estimate phase shifts between rays as the laser beam propagates through a simulated turbulent medium. Turbulence is simulated by filling the propagation path with spherical bubbles of varying sizes and refractive index discontinuities statistically distributed according to various models. The level of turbulence is increased by elongating the range and/or increasing the number of bubbles that the rays interact with along their path. For each statistical representation of the atmosphere, the trajectories of two parallel rays separated by a particular distance are analyzed and computed simultaneously using geometrical optics. The three-dimensional geometry of the spheres is taken into account in the propagation of the rays. The bubble model is used to calculate the correlation between the two rays as their separation distance changes. The total distance traveled by each ray as both rays travel to the target is computed. The difference in the path length traveled yields the phase difference between the rays. The mean square phase difference is taken to be the phase structure function, which in the literature, for a pair of collimated parallel pencil-thin rays, obeys a five-thirds law assuming weak turbulence. All simulation results will be compared with the predictions of wave theory.
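A hedged sketch of the bubble model's core computation: two parallel z-directed rays pass through randomly placed spherical index perturbations, each ray's optical path accumulates Δn times the chord length through every bubble it crosses, and the mean-square phase difference over many realizations approximates the phase structure function. The bubble statistics are ad hoc stand-ins, and straight rays replace the paper's full refractive ray tracing.

```python
import numpy as np

rng = np.random.default_rng(1)

def chord(x_ray, y_ray, cx, cy, r):
    # chord length of a z-directed straight ray through each sphere
    d2 = (x_ray - cx) ** 2 + (y_ray - cy) ** 2
    return 2.0 * np.sqrt(np.maximum(r ** 2 - d2, 0.0))

def phase_diff(sep, n_bub=5000, lam=1.55e-6):
    # ad-hoc bubble statistics: positions, radii, index perturbations
    cx = rng.uniform(-1.0, 1.0, n_bub)
    cy = rng.uniform(-1.0, 1.0, n_bub)
    r = rng.uniform(0.01, 0.05, n_bub)
    dn = rng.normal(0.0, 1e-6, n_bub)
    opl1 = np.sum(dn * chord(0.0, 0.0, cx, cy, r))   # ray 1 at y = 0
    opl2 = np.sum(dn * chord(0.0, sep, cx, cy, r))   # ray 2 at y = sep
    return 2.0 * np.pi / lam * (opl1 - opl2)

# mean-square phase difference vs separation ~ phase structure function
for sep in (0.05, 0.1, 0.2):
    d_phi = np.mean([phase_diff(sep) ** 2 for _ in range(200)])
    print(f"separation {sep} m: D_phi ~ {d_phi:.3e} rad^2")
```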
Robust Multivariable Optimization and Performance Simulation for ASIC Design
NASA Technical Reports Server (NTRS)
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application-specific integrated circuit (ASIC) design for space applications involves multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem, which must be solved early in a system's development cycle because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way that facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation.
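The following sketch shows, under stated assumptions, the kind of automated corner/parameter sweep such a framework performs: every design knob is evaluated at every process/temperature/supply corner, and a design point is accepted only if its worst-case corner meets the goals. The analytic simulate() stand-in and all thresholds are hypothetical; the real framework invokes CAD simulations.

```python
import itertools

# Hedged corner-sweep sketch; a real flow would call SPICE here.
corners = list(itertools.product(["ss", "tt", "ff"],      # process
                                 [-55, 25, 125],          # temperature, C
                                 [1.6, 1.8, 2.0]))        # supply, V
bias_choices = [2e-6, 5e-6, 10e-6]                        # design knob (A)

def simulate(process, temp_c, vdd, bias):
    # placeholder analytic metrics standing in for circuit simulation
    speed = bias * 1e6 * vdd * (0.9 if process == "ss" else 1.0)
    power = bias * vdd * (1.0 + 0.002 * (temp_c - 25))
    return speed, power

best = None
for bias in bias_choices:
    worst_speed = min(simulate(p, t, v, bias)[0] for p, t, v in corners)
    max_power = max(simulate(p, t, v, bias)[1] for p, t, v in corners)
    # accept only designs whose *worst* corner meets the speed goal,
    # then minimize worst-case power among the survivors
    if worst_speed >= 10.0 and (best is None or max_power < best[0]):
        best = (max_power, bias, worst_speed)

print("chosen (max_power, bias, worst_speed):", best)
```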
Three-Dimensional Simulations of Electron Beams Focused by Periodic Permanent Magnets
NASA Technical Reports Server (NTRS)
Kory, Carol L.
1999-01-01
A fully three-dimensional (3D) model of an electron beam focused by a periodic permanent magnet (PPM) stack has been developed. First, the simulation code MAFIA was used to model a PPM stack using the magnetostatic solver. The exact geometry of the magnetic focusing structure was modeled; thus, no approximations were made regarding the off-axis fields. The fields from the static solver were loaded into the 3D particle-in-cell (PIC) solver of MAFIA where fully 3D behavior of the beam was simulated in the magnetic focusing field. The PIC solver computes the time-integration of electromagnetic fields simultaneously with the time integration of the equations of motion of charged particles that move under the influence of those fields. Fields caused by those moving charges are also taken into account; thus, effects like space charge and magnetic forces between particles are fully simulated. The electron beam is simulated by a number of macro-particles. These macro-particles represent a given charge Q amounting to that of several million electrons in order to conserve computational time and memory. Particle motion is unrestricted, so particle trajectories can cross paths and move in three dimensions under the influence of 3D electric and magnetic fields. Correspondingly, there is no limit on the initial current density distribution of the electron beam, nor its density distribution at any time during the simulation. Simulation results including beam current density, percent ripple and percent transmission will be presented, and the effects current, magnetic focusing strength and thermal velocities have on beam behavior will be demonstrated using 3D movies showing the evolution of beam characteristics in time and space. Unlike typical beam optics models, this 3D model allows simulation of asymmetric designs such as non- circularly symmetric electrostatic or magnetic focusing as well as the inclusion of input/output couplers.
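For the particle-push step described here, PIC codes typically use the Boris scheme; the sketch below is a standard Boris pusher (not MAFIA's implementation) advancing one macro-particle in a uniform axial field as a stand-in for the PPM focusing field. Field values and the time step are illustrative.

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """One Boris step: the standard explicit pusher used in PIC codes.
    x, v: arrays of shape (N, 3) for N macro-particles."""
    v_minus = v + 0.5 * q_over_m * dt * E          # half electric kick
    t = 0.5 * q_over_m * dt * B                    # rotation vector
    s = 2.0 * t / (1.0 + np.sum(t * t, axis=-1, keepdims=True))
    v_prime = v_minus + np.cross(v_minus, t)       # magnetic rotation
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_over_m * dt * E       # second half kick
    return x + dt * v_new, v_new

# electron macro-particle gyrating in a uniform axial field (PPM analog)
x = np.array([[1e-3, 0.0, 0.0]])
v = np.array([[0.0, 1e6, 1e7]])
E, B = np.zeros(3), np.array([0.0, 0.0, 0.15])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_over_m=-1.76e11, dt=1e-12)
print("final gyration radius (m):", np.hypot(x[0, 0], x[0, 1]))
```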
Illumination-based synchronization of high-speed vision sensors.
Hou, Lei; Kagami, Shingo; Hashimoto, Koichi
2010-01-01
To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition time of vision sensors should be synchronized. This paper describes an illumination-based synchronization method derived from the phase-locked loop (PLL) algorithm. Incident light to a vision sensor from an intensity-modulated illumination source serves as the reference signal for synchronization. Analog and digital computation within the vision sensor forms a PLL to regulate the output signal, which corresponds to the vision frame timing, to be synchronized with the reference. Simulated and experimental results show that a 1,000 Hz frame rate vision sensor was successfully synchronized with 32 μs jitters.
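A minimal software model of the described PLL, assuming a sinusoidally modulated source: the wrapped illumination phase sampled at each frame instant serves as the phase detector, and a PI loop filter steers the frame period toward lock. Gains and the free-running rate are ad hoc; the paper implements the loop with on-sensor analog and digital computation.

```python
import numpy as np

f_ref = 1000.0                 # illumination modulation frequency, Hz
period = 1.0 / 1042.0          # free-running (wrong) frame period, s
kp, ki, integ, t = 3e-5, 3e-6, 0.0, 0.0

errs = []
for frame in range(3000):
    t += period
    # phase detector: wrapped illumination phase at the frame instant
    err = np.angle(np.exp(1j * 2.0 * np.pi * f_ref * t))
    integ += err
    period = 1.0 / 1042.0 - (kp * err + ki * integ)  # PI loop filter
    errs.append(err)

print("frame rate after lock: %.2f Hz" % (1.0 / period))
print("residual phase jitter: %.3e rad" % np.std(errs[-500:]))
```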
Kalal, M; Nugent, K A; Luther-Davies, B
1987-05-01
An interferometric technique which enables simultaneous phase and amplitude imaging of optically transparent objects is discussed with respect to its application for the measurement of spontaneous toroidal magnetic fields generated in laser-produced plasmas. It is shown that this technique can replace the normal independent pair of optical systems (interferometry and shadowgraphy) by one system and use computer image processing to recover both the plasma density and magnetic field information with high accuracy. A fully automatic algorithm for the numerical analysis of the data has been developed and its performance demonstrated for the case of simulated as well as experimental data.
Comprehensive evaluation of attitude and orbit estimation using real earth magnetic field data
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack
1997-01-01
A single, augmented extended Kalman filter (EKF) which simultaneously and autonomously estimates spacecraft attitude and orbit was developed and tested with simulated and real magnetometer and rate data. Since the earth's magnetic field is a function of time and position, and since time is accurately known, the differences between the computed and measured magnetic field components, as measured by the magnetometers throughout the entire spacecraft's orbit, are a function of orbit and attitude errors. These differences can be used to estimate the orbit and attitude. The test results of the EKF with magnetometer and gyro data from three NASA satellites are presented and evaluated.
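A generic EKF measurement update of the kind this filter relies on is sketched below: the innovation is the difference between the measured magnetic field and the field computed from a model at the estimated state. The state dimension, Jacobian, and noise levels are placeholder assumptions, not the augmented attitude-and-orbit filter from the paper.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    y = z - h(x)                          # innovation: measured - modeled B
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

n = 6                                      # e.g., 3 attitude + 3 orbit errors
x, P = np.zeros(n), np.eye(n) * 1e-2
H = np.random.default_rng(0).normal(size=(3, n)) * 0.1   # d(B_model)/dx
R = np.eye(3) * (50e-9) ** 2               # magnetometer noise, tesla^2
z = np.array([2.1e-5, -1.3e-5, 3.8e-5])    # measured field sample
h = lambda s: np.array([2.0e-5, -1.2e-5, 3.9e-5]) + H @ s  # linearized model
x, P = ekf_update(x, P, z, h, H, R)
print("state correction:", x)
```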
Chaurasia, Ashok; Harel, Ofer
2015-02-10
Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.
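A minimal sketch of the idea, assuming a simple average stands in for the paper's actual combining rule: pool the scalar R² across imputations and convert it to a global F statistic, avoiding vector and matrix work entirely.

```python
import numpy as np

def f_from_r2(r2, n, k):
    """Global F statistic for a regression with k predictors, n cases."""
    return (r2 / k) / ((1.0 - r2) / (n - k - 1))

rng = np.random.default_rng(0)
n, k, m = 200, 3, 20                      # cases, predictors, imputations
r2_per_imputation = 0.30 + 0.02 * rng.standard_normal(m)  # toy values
r2_bar = np.mean(r2_per_imputation)       # pooled R^2 across imputations
print("pooled global F:", f_from_r2(r2_bar, n, k))
```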
NASA Astrophysics Data System (ADS)
Prikner, K.
1996-07-01
Three series of simultaneous pulsation measurements (f < 0.06 Hz) on the Freja satellite and at the Budkov Observatory have been spectrally processed (FFT) in 6-min intervals of Freja's transits near the local Budkov field line. Doppler-shifted, weighted spectral-peak frequencies, determined in both transverse magnetic components in the mean field-aligned coordinate system on Freja, allowed the estimation, by comparison with the stable frequency at Budkov, of fundamental frequencies of the local magnetic-field-line resonance, which ranged from 13 to 17 mHz in the two pulsation events analyzed, with Kp = 2+ to 0+. The ratio of total amplitudes of the spectral-pulsation components on the ground and on Freja at an altitude of ~1700 km (values < 0.7) characterizes the transmissivity of the ionosphere. In the Pc3 frequency range this correlates well with simulation computations using models of the ionosphere under low solar activity.
Stochastic Model of Clogging in a Microfluidic Cell Sorter
NASA Astrophysics Data System (ADS)
Fai, Thomas; Rycroft, Chris
2016-11-01
Microfluidic devices for sorting cells by deformability show promise for various medical purposes, e.g. detecting sickle cell anemia and circulating tumor cells. One class of such devices consists of a two-dimensional array of narrow channels, each column containing several identical channels in parallel. Cells are driven through the device by an applied pressure or flow rate. Such devices allow many cells to be sorted simultaneously, but cells eventually clog individual channels and change the device properties in an unpredictable manner. In this talk, we propose a stochastic model for the failure of such microfluidic devices by clogging and present preliminary theoretical and computational results. The model can be recast as an ODE that exhibits finite time blow-up under certain conditions. The failure time distribution is investigated analytically in certain limiting cases, and more realistic versions of the model are solved by computer simulation.
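A hedged sketch of such a model: open channels share a fixed total flow, each clogs with a probability that grows with its own flow, and every clog raises the load on the survivors, producing runaway failure. All rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def failure_time(n_channels=64, total_flow=64.0, alpha=1e-3, dt=1.0):
    open_ch, t = n_channels, 0.0
    while open_ch > 0:
        flow = total_flow / open_ch               # flow per open channel
        p_clog = 1.0 - np.exp(-alpha * flow * dt) # per-channel clog prob.
        open_ch -= rng.binomial(open_ch, p_clog)  # clogs this time step
        t += dt
    return t

times = [failure_time() for _ in range(500)]
print("mean failure time: %.0f, std: %.0f" % (np.mean(times), np.std(times)))
```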
Cable Discharge System for fundamental detonator studies
NASA Technical Reports Server (NTRS)
Peevy, Gregg R.; Barnhart, Steven G.; Brigham, William P.
1994-01-01
Sandia National Laboratories has recently completed the modification and installation of a cable discharge system (CDS) which will be used to study the physics of exploding bridgewire (EBW) detonators and exploding foil initiators (EFI or slapper). Of primary interest are the burst characteristics of these devices when subjected to the constant current pulse delivered by this system. The burst process involves the heating of the bridge material to a conductive plasma and is essential in describing the electrical properties of the bridgewire foil for use in diagnostics or computer models. The CDS described herein is capable of delivering up to an 8000 A pulse of 3 μs duration. Experiments conducted with the CDS to characterize the EBW and EFI burst behavior are also described. In addition, the CDS's simultaneous VISAR capability permits updating the EFI electrical Gurney analysis parameters used in our computer simulation codes. Examples of CDS-generated data for a typical EFI and EBW detonator are provided.
In situ single-atom array synthesis using dynamic holographic optical tweezers
Kim, Hyosub; Lee, Woojun; Lee, Han-gyeol; Jo, Hanlae; Song, Yunheung; Ahn, Jaewook
2016-01-01
Establishing a reliable method to form scalable neutral-atom platforms is an essential cornerstone for quantum computation, quantum simulation and quantum many-body physics. Here we demonstrate a real-time transport of single atoms using holographic microtraps controlled by a liquid-crystal spatial light modulator. For this, an analytical design approach to flicker-free microtrap movement is devised and cold rubidium atoms are simultaneously rearranged with 2N motional degrees of freedom, representing unprecedented space controllability. We also accomplish an in situ feedback control for single-atom rearrangements with the high success rate of 99% for up to 10 μm translation. We hope this proof-of-principle demonstration of high-fidelity atom-array preparations will be useful for deterministic loading of N single atoms, especially on arbitrary lattice locations, and also for real-time qubit shuttling in high-dimensional quantum computing architectures. PMID:27796372
A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.
Catarinucci, Luca; Tarricone, Luciano
2009-01-01
The finite difference time domain method (FDTD) is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high space resolution adopted, so that strong memory and central processing unit power requirements have to be satisfied. To cope with the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly-efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.
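To show the graded-mesh ingredient in isolation, here is a 1D FDTD sketch with a non-uniform cell size that is fine near the region of interest and coarse elsewhere; the global time step obeys the CFL limit of the smallest cell. This first-order treatment of the non-uniform spacing is a simplification of the paper's 3D parallel GM scheme.

```python
import numpy as np

n = 400
fine = np.abs(np.arange(n) - n // 2) < 50
dx = np.where(fine, 1.0e-3, 4.0e-3)          # graded cell sizes, meters
c0, eps0, mu0 = 3.0e8, 8.854e-12, 4.0e-7 * np.pi
dt = 0.99 * dx.min() / c0                    # CFL set by the smallest cell

ez = np.zeros(n)                             # E on cell centers
hy = np.zeros(n - 1)                         # H on cell edges (staggered)
for step in range(1200):
    hy += dt / (mu0 * dx[:-1]) * (ez[1:] - ez[:-1])
    ez[1:-1] += dt / (eps0 * dx[1:-1]) * (hy[1:] - hy[:-1])
    ez[n // 2] += np.exp(-((step - 60) / 20.0) ** 2)  # soft Gaussian source
print("field energy proxy:", float(np.sum(ez ** 2)))
```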
A new procedure for dynamic adaption of three-dimensional unstructured grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1993-01-01
A new procedure is presented for the simultaneous coarsening and refinement of three-dimensional unstructured tetrahedral meshes. This algorithm allows for localized grid adaption that is used to capture aerodynamic flow features such as vortices and shock waves in helicopter flowfield simulations. The mesh-adaption algorithm is implemented in the C programming language and uses a data structure consisting of a series of dynamically-allocated linked lists. These lists allow the mesh connectivity to be rapidly reconstructed when individual mesh points are added and/or deleted. The algorithm allows the mesh to change in an anisotropic manner in order to efficiently resolve directional flow features. The procedure has been successfully implemented on a single processor of a Cray Y-MP computer. Two sample cases are presented involving three-dimensional transonic flow. Computed results show good agreement with conventional structured-grid solutions for the Euler equations.
Cognitive diagnosis modelling incorporating item response times.
Zhan, Peida; Jiao, Hong; Liao, Dandan
2018-05-01
To provide more refined diagnostic feedback with collateral information in item response times (RTs), this study proposed joint modelling of attributes and response speed using item responses and RTs simultaneously for cognitive diagnosis. For illustration, an extended deterministic input, noisy 'and' gate (DINA) model was proposed for joint modelling of responses and RTs. Model parameter estimation was explored using the Bayesian Markov chain Monte Carlo (MCMC) method. The PISA 2012 computer-based mathematics data were analysed first. These real data estimates were treated as true values in a subsequent simulation study. A follow-up simulation study with ideal testing conditions was conducted as well to further evaluate model parameter recovery. The results indicated that model parameters could be well recovered using the MCMC approach. Further, incorporating RTs into the DINA model would improve attribute and profile correct classification rates and result in more accurate and precise estimation of the model parameters. © 2017 The British Psychological Society.
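To make the joint-modelling idea concrete, here is a hedged generative sketch: a DINA item response (guess and slip plus required attributes) paired with a lognormal response-time model sharing the same persons. All parameter values are toy assumptions, and the paper estimates such parameters by MCMC rather than simulating forward.

```python
import numpy as np

rng = np.random.default_rng(0)

def dina_p(alpha, q, guess, slip):
    """DINA success probability: 1-slip if all required attributes
    are mastered, else the guessing rate."""
    eta = np.all(alpha >= q, axis=-1)
    return np.where(eta, 1.0 - slip, guess)

alpha = rng.integers(0, 2, size=(500, 3))      # attribute profiles
q = np.array([1, 0, 1])                        # item's required attributes
p = dina_p(alpha, q, guess=0.2, slip=0.1)
x = rng.random(500) < p                        # simulated item responses
tau = rng.normal(0.0, 0.3, 500)                # person speed parameter
log_rt = 3.0 - tau + rng.normal(0.0, 0.2, 500) # lognormal RT model
print("proportion correct:", x.mean(),
      "median RT (s):", np.exp(np.median(log_rt)))
```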
Rapid Technology Assessment via Unified Deployment of Global Optical and Virtual Diagnostics
NASA Technical Reports Server (NTRS)
Jordan, Jeffrey D.; Watkins, A. Neal; Fleming, Gary A.; Leighty, Bradley D.; Schwartz, Richard J.; Ingram, JoAnne L.; Grinstead, Keith D., Jr.; Oglesby, Donald M.; Tyler, Charles
2003-01-01
This paper discusses recent developments in rapid technology assessment resulting from an active collaboration between researchers at the Air Force Research Laboratory (AFRL) at Wright Patterson Air Force Base (WPAFB) and the NASA Langley Research Center (LaRC). This program targets the unified development and deployment of global measurement technologies coupled with a virtual diagnostic interface to enable the comparative evaluation of experimental and computational results. Continuing efforts focus on the development of seamless data translation methods to enable integration of data sets of disparate file format in a common platform. Results from a successful low-speed wind tunnel test at WPAFB in which global surface pressure distributions were acquired simultaneously with model deformation and geometry measurements are discussed and comparatively evaluated with numerical simulations. Intensity- and lifetime-based pressure-sensitive paint (PSP) and projection moire interferometry (PMI) results are presented within the context of rapid technology assessment to enable simulation-based R&D.
TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY
Somogyi, Endre; Hagar, Amit; Glazier, James A.
2017-01-01
Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical processes and reorganize via transformation, birth, death and migration processes. Current programming languages have difficulty describing the dynamics of tissues because: 1: Dynamic sets of objects participate simultaneously in multiple processes, 2: Processes may be either continuous or discrete, and their activity may be conditional, 3: Objects and processes form complex, heterogeneous relationships and structures, 4: Objects and processes may be hierarchically composed, 5: Processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM), which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models. PMID:29282379
Two-color field enhancement at an STM junction for spatiotemporally resolved photoemission
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Xiang; Jin, Wencan; Yang, Hao
Here, we report measurements and numerical simulations of ultrafast laser-excited carrier flow across a scanning tunneling microscope (STM) junction. The current from a nanoscopic tungsten tip across a ~1 nm vacuum gap to a silver surface is driven by a two-color excitation scheme that uses an optical delay-modulation technique to extract the two-color signal from background contributions. The role of optical field enhancements in driving the current is investigated using density functional theory and full three-dimensional finite-difference time-domain computations. We find that simulated field-enhanced two-photon photoemission (2PPE) currents are in excellent agreement with the observed exponential decay of the two-color photoexcited current with increasing tip–surface separation, as well as its optical-delay dependence. The results suggest an approach to 2PPE with simultaneous subpicosecond temporal and nanometer spatial resolution.
Probable LAGEOS contributions to a worldwide geodynamics control network
NASA Technical Reports Server (NTRS)
Bender, P. L.; Goad, C. C.
1979-01-01
The paper describes simulations of the contributions that LAGEOS laser ranging data can make to the establishment of a worldwide geodynamics control network. A distribution of 10 fixed ranging stations was assumed for most of the calculations, and a single 7-day arc was used, with measurements assumed to be made every 10 minutes in order to avoid artificial reductions in the uncertainties due to oversampling. Computer simulations were carried out in which the coordinates of the stations and improvements in the gravity field coefficients were solved for simultaneously. It is suggested that good accuracy for station coordinates can be expected, even with the present gravity field model uncertainties, if sufficient measurement accuracy is achieved at a reasonable distribution of stations. Further, it is found that even 2-cm range measurement errors would be likely to be the main source of station coordinate errors in retrospective analyses of LAGEOS ranging results five or six years from now.
Modeling shock responses of plastic bonded explosives using material point method
NASA Astrophysics Data System (ADS)
Shang, Hailin; Zhao, Feng; Fu, Hua
2017-01-01
Shock responses of plastic bonded explosives are modeled using the material point method as implemented in the Uintah Computational Framework. A two-dimensional simulation model was established based on a micrograph of PBX9501. Shock loading of the explosive was performed by a piston moving at a constant velocity. Unreactive simulation results indicate that under shock loading serious plastic strain appears on the boundaries of HMX grains. Simultaneously, the plastic strain energy transforms to thermal energy, causing the temperature to rise rapidly in grain boundary areas. The influence of shock strength on the response of the explosive was also investigated by increasing the piston velocity. The results show that with increasing shock strength the distributions of plastic strain and temperature do not change significantly, but their values increase markedly: the higher the shock strength, the higher the temperature rise.
NASA Astrophysics Data System (ADS)
Akiyama, S.; Kawaji, K.; Fujihara, S.
2013-12-01
Since fault fracturing due to an earthquake can simultaneously cause ground motion and a tsunami, it is appropriate to evaluate the ground motion and the tsunami with a single fault model. However, several source models are used independently in ground motion simulation or tsunami simulation, because of the difficulty of evaluating both phenomena simultaneously. Many source models for the 2011 off the Pacific coast of Tohoku Earthquake have been proposed from inversion analyses of seismic observations or of tsunami observations. Most of these models show similar features, in which a large amount of slip is located at the shallower part of the fault area near the Japan Trench. This indicates that the ground motion and the tsunami can be evaluated by a single source model. Therefore, we examine the possibility of tsunami prediction using the fault model estimated from seismic observation records. In this study, we carry out a tsunami simulation using the displacement field of oceanic crustal movements calculated from the ground motion simulation of the 2011 off the Pacific coast of Tohoku Earthquake. We use two fault models by Yoshida et al. (2011), based on the teleseismic body wave and on the strong ground motion records, respectively. Although there is a common feature in those fault models, the amount of slip near the Japan Trench is larger in the fault model from the strong ground motion records than in that from the teleseismic body wave. First, large-scale ground motion simulations applying those fault models, using the voxel-type finite element method, are performed for the whole of eastern Japan. The synthetic waveforms computed from the simulations are generally consistent with the observation records of K-NET (Kinoshita (1998)) and KiK-net stations (Aoi et al. (2000)), deployed by the National Research Institute for Earth Science and Disaster Prevention (NIED). Next, the tsunami simulations are performed by finite difference calculation based on shallow water theory. The initial wave height for tsunami generation is estimated from the vertical displacement of the ocean bottom due to the crustal movements, obtained from the ground motion simulation mentioned above. The results of the tsunami simulations are compared with the observations of the GPS wave gauges to evaluate the validity of tsunami prediction using a fault model based on seismic observation records.
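The handoff step described above can be sketched in one dimension: the vertical seafloor displacement from the ground-motion run becomes the initial sea-surface height, and the linear shallow-water equations are marched by finite differences on a staggered grid. Depth, grid, and the Gaussian uplift are stand-in assumptions.

```python
import numpy as np

g, depth = 9.81, 4000.0                    # gravity, uniform depth (m)
n, dx = 2000, 2000.0                       # grid points, spacing (m)
dt = 0.5 * dx / np.sqrt(g * depth)         # CFL-limited time step

x = np.arange(n) * dx
eta = 2.0 * np.exp(-((x - 1.0e6) / 5e4) ** 2)  # initial uplift (stand-in)
u = np.zeros(n + 1)                            # staggered velocities

for step in range(1500):
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx    # momentum equation
    eta -= dt * depth * (u[1:] - u[:-1]) / dx        # continuity equation

print("max amplitude near the left coast: %.2f m" % eta[:50].max())
```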
NASA Technical Reports Server (NTRS)
Srivastava, Priyaka; Kraus, Jeff; Murawski, Robert; Golden, Bertsel, Jr.
2015-01-01
NASA's Space Communications and Navigation (SCaN) program manages three active networks: the Near Earth Network, the Space Network, and the Deep Space Network. These networks simultaneously support NASA missions and provide communications services to customers worldwide. To efficiently manage these resources and their capabilities, a team of student interns at the NASA Glenn Research Center is developing a distributed system to model the SCaN networks. Once complete, the system shall provide a platform that enables users to perform capacity modeling of current and prospective missions with finer-grained control of information between several simulation and modeling tools. This will enable the SCaN program to access a holistic view of its networks and simulate the effects of modifications in order to provide NASA with decisional information. The development of this capacity modeling system is managed by NASA's Strategic Center for Education, Networking, Integration, and Communication (SCENIC). Three primary third-party software tools offer their unique abilities in different stages of the simulation process. MagicDraw provides UML/SysML modeling, AGI's Systems Tool Kit simulates the physical transmission parameters and de-conflicts scheduled communication, and Riverbed Modeler (formerly OPNET) simulates communication protocols and packet-based networking. SCENIC developers are building custom software extensions to integrate these components in an end-to-end space communications modeling platform. A central control module acts as the hub for report-based messaging between client wrappers. Backend databases provide information related to mission parameters and ground station configurations, while the end user defines scenario-specific attributes for the model. The eight SCENIC interns are working under the direction of their mentors to complete an initial version of this capacity modeling system during the summer of 2015. The intern team is composed of four students in Computer Science, two in Computer Engineering, one in Electrical Engineering, and one studying Space Systems Engineering.
Simulation of a group of rangefinders adapted to alterations of measurement angle
NASA Astrophysics Data System (ADS)
Baikov, D. V.; Pastushkova, A. A.; Danshin, V. V.; Chepin, E. V.
2017-01-01
The laboratory "Robotics" operates at the Department of Computer Systems and Technologies of the National Research Nuclear University MEPhI. University teachers and laboratory staff run a training program for the master's program "Computer technology in robotics." Undergraduates and graduate students conduct laboratory research and development in several promising areas of robotics. One methodology actively used in dissertation research is the modeling of advanced robotic hardware and software systems, and this article presents the results of such a study. The purpose of this article is to simulate a sensor comprised of a group of laser rangefinders. The rangefinders are simulated according to the following principle: beams originate from one point, though with deviations from the normal, thereby providing simultaneous scanning of different points. The data obtained in our virtual test room are used to indicate the average distance from the device to obstacles for all four sensors in real time. By varying the divergence angle of the beams we can simulate different kinds of rangefinders (laser and ultrasonic ones). By adjusting noise parameters we can achieve results similar to those of real rangefinders, and obtain a surface map displaying irregularities. A model of an aircraft (quadcopter) serves as the platform on which the sensor is installed. The article surveys work on rangefinder simulation undertaken at institutions around the world and reports the tests performed. It concludes with the relevance of the suggested approach and methods, and the necessity and feasibility of further research in this area.
On the relationship between parallel computation and graph embedding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, A.K.
1989-01-01
The problem of efficiently simulating an algorithm designed for an n-processor parallel machine G on an m-processor parallel machine H with n > m arises when parallel algorithms designed for an ideal size machine are simulated on existing machines which are of a fixed size. The author studies this problem when every processor of H takes over the function of a number of processors in G, and he phrases the simulation problem as a graph embedding problem. New embeddings presented address relevant issues arising from the parallel computation environment. The main focus centers around embedding complete binary trees into smaller-sized binary trees, butterflies, and hypercubes. He also considers simultaneous embeddings of r source machines into a single hypercube. Constant factors play a crucial role in his embeddings since they are not only important in practice but also lead to interesting theoretical problems. All of his embeddings minimize dilation and load, which are the conventional cost measures in graph embeddings and determine the maximum amount of time required to simulate one step of G on H. His embeddings also optimize a new cost measure called (α, β)-utilization which characterizes how evenly the processors of H are used by the processors of G. Ideally, the utilization should be balanced (i.e., every processor of H simulates at most (n/m) processors of G) and the (α, β)-utilization measures how far off from a balanced utilization the embedding is. He presents embeddings for the situation when some processors of G have different capabilities (e.g. memory or I/O) than others and the processors with different capabilities are to be distributed uniformly among the processors of H. Placing such conditions on an embedding results in an increase in some of the cost measures.
Gao, Guihua; Li, Sijia; Li, Shuo; Wang, Yudan; Zhao, Pan; Zhang, Xiangyu; Hou, Xiaohong
2018-04-01
In this work, computational and experimental methods were used to study the adsorption of estrogens and glucocorticoids on metal-organic frameworks (MOFs). Computer-aided molecular simulation was applied to predict the adsorption of eight analytes on four MOFs (MIL-101(Cr), MIL-100(Fe), MIL-53(Al), and UiO-66(Zr)) by examining molecular interactions and calculating free binding energies. Subsequently, the four water-stable MOFs were synthesized and evaluated as adsorbents for the target hormones in aqueous solution. As the MOF exhibiting the highest adsorption capacity in both computations and experiments, MIL-53(Al) was chosen as a sorbent to develop a dispersive micro-solid-phase extraction procedure coupled to ultra-performance liquid chromatography tandem mass spectrometry for simultaneous determination of the target analytes in water and human urine samples. Experimental parameters affecting the extraction recoveries, including pH, ionic strength, MIL-53(Al) amount, extraction time, desorption time, and desorption solvent, were optimized. The optimized method provided a linear range of 0.005025-368.6 μg/L with good correlation coefficients (0.9982 ≤ r² ≤ 0.9992), and limits of detection (S/N = 3) and quantification (S/N = 10) of 0.0015-1.0 μg/L and 0.005-1.8 μg/L, respectively. The analyte recoveries were in the range of 80.6-98.4% in water samples and 88.4-93.2% in urine samples. Furthermore, MIL-53(Al) showed good stability over 10 extraction cycles (RSD < 10.0%). Good agreement between experimental measurements and computational results showed the potential of this approach for elucidating adsorption mechanisms and predicting extraction efficiencies for MOFs and targets, providing new directions for the development and utilization of MOFs. Copyright © 2017 Elsevier B.V. All rights reserved.
Kling, Daniel; Tillmar, Andreas; Egeland, Thore; Mostad, Petter
2015-09-01
Several applications necessitate an unbiased determination of relatedness, be it in linkage or association studies or in a forensic setting. An appropriate model to compute the joint probability of some genetic data for a set of persons given some hypothesis about the pedigree structure is then required. The increasing number of markers available through high-density SNP microarray typing and NGS technologies intensifies the demand, where using a large number of markers may lead to biased results due to strong dependencies between closely located loci, both within pedigrees (linkage) and in the population (allelic association or linkage disequilibrium (LD)). We present a new general model, based on a Markov chain for inheritance patterns and another Markov chain for founder allele patterns, the latter allowing us to account for LD. We also demonstrate a specific implementation for X chromosomal markers that allows for computation of likelihoods based on hypotheses of alleged relationships and genetic marker data. The algorithm can simultaneously account for linkage, LD, and mutations. We demonstrate its feasibility using simulated examples. The algorithm is implemented in the software FamLinkX, providing a user-friendly GUI for Windows systems (FamLinkX, as well as further usage instructions, is freely available at www.famlink.se ). Our software provides the necessary means to solve cases where no previous implementation exists. In addition, the software has the possibility to perform simulations in order to further study the impact of linkage and LD on computed likelihoods for an arbitrary set of markers.
High Fidelity Simulations of Plume Impingement to the International Space Station
NASA Technical Reports Server (NTRS)
Lumpkin, Forrest E., III; Marichalar, Jeremiah; Stewart, Benedicte D.
2012-01-01
With the retirement of the Space Shuttle, the United States now depends on recently developed commercial spacecraft to supply the International Space Station (ISS) with cargo. These new vehicles supplement ones from international partners including the Russian Progress, the European Autonomous Transfer Vehicle (ATV), and the Japanese H-II Transfer Vehicle (HTV). Furthermore, to carry crew to the ISS and supplement the capability currently provided exclusively by the Russian Soyuz, new designs and a refinement to a cargo vehicle design are in work. Many of these designs include features such as nozzle scarfing or simultaneous firing of multiple thrusters resulting in complex plumes. This results in a wide variety of complex plumes impinging upon the ISS. Therefore, to ensure safe "proximity operations" near the ISS, the need for accurate and efficient high fidelity simulation of plume impingement to the ISS is as high as ever. A capability combining computational fluid dynamics (CFD) and the Direct Simulation Monte Carlo (DSMC) techniques has been developed to properly model the large density variations encountered as the plume expands from the high pressure in the combustion chamber to the near vacuum conditions at the orbiting altitude of the ISS. Details of the computational tools employed by this method, including recent software enhancements and the best practices needed to achieve accurate simulations, are discussed. Several recent examples of the application of this high fidelity capability are presented. These examples highlight many of the real world, complex features of plume impingement that occur when "visiting vehicles" operate in the vicinity of the ISS.
NASA Astrophysics Data System (ADS)
Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter
2017-01-01
The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.
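For orientation, a sketch of the machinery under simplifying assumptions: the zero-offset hyperbolic CRS operator with three attributes (the paper's common-offset operator has five, and the Common-Diffraction-Surface form four), plus the semblance coherence that data-driven searches maximize. The synthetic gather and attribute values are illustrative.

```python
import numpy as np

def crs_time(t0, a, b, c, dxm, h):
    """Hyperbolic operator t^2 = (t0 + A*dxm)^2 + B*dxm^2 + C*h^2,
    with midpoint shift dxm, half-offset h; A, B, C bundle the
    kinematic wavefield attributes."""
    return np.sqrt((t0 + a * dxm) ** 2 + b * dxm ** 2 + c * h ** 2)

def semblance(gather, times, dt):
    """Coherence of amplitudes picked along a trial traveltime surface."""
    idx = np.clip((times / dt).astype(int), 0, gather.shape[1] - 1)
    amps = gather[np.arange(gather.shape[0]), idx]
    return amps.sum() ** 2 / (len(amps) * np.sum(amps ** 2) + 1e-12)

rng = np.random.default_rng(3)
dt, ntr = 0.004, 40
dxm = np.linspace(-400.0, 400.0, ntr)
h = np.full(ntr, 100.0)
true_t = crs_time(1.0, 1e-4, 4e-7, 2e-7, dxm, h)
gather = 0.05 * rng.standard_normal((ntr, 500))       # noise background
gather[np.arange(ntr), (true_t / dt).astype(int)] += 1.0  # synthetic event

trial_t = crs_time(1.0, 1e-4, 4e-7, 2e-7, dxm, h)
print("semblance at true attributes: %.2f" % semblance(gather, trial_t, dt))
```

A data-driven search would maximize this semblance over the attribute vector for every output sample, which is exactly where reducing five parameters to four pays off.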
SNPs selection using support vector regression and genetic algorithms in GWAS
2014-01-01
Introduction This paper proposes a new methodology to simultaneously select the most relevant SNP markers for the characterization of any measurable phenotype described by a continuous variable, using Support Vector Regression with the Pearson Universal kernel as the fitness function of a binary genetic algorithm. The proposed methodology is multi-attribute in that it considers several markers simultaneously to explain the phenotype, and is based jointly on statistical tools, machine learning and computational intelligence. Results The suggested method has shown potential in simulated database 1, with additive effects only, and in the real database. In this simulated database, with a total of 1,000 markers, 7 of which have a major effect on the phenotype while the other 993 SNPs represent noise, the method identified 21 markers. Of this total, 5 are among the 7 relevant SNPs, but 16 are false positives. In the real database, initially with 50,752 SNPs, we reduced to 3,073 markers, increasing the accuracy of the model. In simulated database 2, with additive effects and interactions (epistasis), the proposed method matched the methodology most commonly used in GWAS. Conclusions The method suggested in this paper demonstrates effectiveness in explaining the real phenotype (PTA for milk): with the application of the wrapper based on a genetic algorithm and Support Vector Regression with the Pearson Universal kernel, many redundant markers were eliminated, increasing the prediction and accuracy of the model on the real database without quality control filters. The PUK demonstrated that it can replicate the performance of linear and RBF kernels. PMID:25573332
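A hedged sketch of the wrapper: a binary genetic algorithm evolves SNP masks whose fitness is the cross-validated R² of an SVR fitted on the selected markers, minus a sparsity penalty. Since scikit-learn lacks the Pearson Universal kernel, an RBF kernel stands in; the data and GA settings are toy assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 120, 60
X = rng.integers(0, 3, size=(n, p)).astype(float)     # toy SNP genotypes
y = X[:, :4] @ np.array([0.8, -0.5, 0.6, 0.4]) + 0.3 * rng.standard_normal(n)

def fitness(mask):
    if mask.sum() == 0:
        return -1e9
    score = cross_val_score(SVR(kernel="rbf", C=10.0),
                            X[:, mask.astype(bool)], y,
                            cv=3, scoring="r2").mean()
    return score - 0.01 * mask.sum()        # penalize large marker sets

pop = (rng.random((20, p)) < 0.1).astype(int)   # initial sparse masks
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]     # truncation selection
    children = parents[rng.integers(0, 10, 10)].copy()
    flips = rng.random(children.shape) < 0.02        # bit-flip mutation
    children[flips] ^= 1
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected SNP indices:", np.flatnonzero(best))
```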
Grierson, Lawrence; Melnyk, Megan; Jowlett, Nathan; Backstein, David; Dubrowski, Adam
2011-01-01
Skills training in simulation laboratories is becoming increasingly common. However, the educational benefit of these laboratories remains unclear. This study examined whether such training enables better performance on the simultaneous execution of technical skill and knowledge retention. Twenty-four novice trainees completed the elliptical excision on baseline testing. Following baseline testing, twelve of the novices completed a technical practice session (simulation training group), while the other twelve did not (control group). One week later, all participants returned for dual-task follow-up testing in which they performed the excision while listening to a didactic lesson on the staging and treatment of cutaneous melanoma. The dual-tasking during the post-test was standardized, whereby excision sutures 3 and 5 were performed alone (single), and sutures 4 and 6 were performed concurrently with the didactic lecture (dual). Seven additional trainees also participated as controls, randomized to listen to the didactic lesson alone (knowledge retention alone group). Knowledge retention was assessed by a multiple choice questionnaire (MCQ). Technical performance was evaluated with computer- and expert-based measures. Time to complete the performance improved in both groups completing the elliptical excision on follow-up testing (p<0.01). The simulation training group demonstrated superior hand motion performance on simultaneous didactic lesson testing (p<0.01). Novices from the no-training group performed statistically worse while suturing concurrently with the didactic lesson (p<0.01). The pretraining of novices in surgical skills laboratories leads to improved technical performance during periods of increased attention demands.
NASA Technical Reports Server (NTRS)
Goodwin, Sabine A.; Raj, P.
1999-01-01
Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.
Kuriyama, Shinichi; Ishikawa, Masahiro; Nakamura, Shinichiro; Furu, Moritoshi; Ito, Hiromu; Matsuda, Shuichi
2016-08-01
Condylar lift-off can induce excessive polyethylene wear after total knee arthroplasty (TKA). A computer simulation was used to evaluate the influence of femoral varus alignment and lateral collateral ligament (LCL) laxity on lift-off after single-design TKA. It was hypothesised that proper ligament balancing and coronal alignment would prevent lift-off. The computer model in this study is a dynamic musculoskeletal program that simulates gait up to 60° of knee flexion. The lift-off phenomenon was defined as positive with an intercomponent distance of >2 mm. In neutrally aligned components in the coronal plane, the femoral and tibial components were set perpendicular to the femoral and tibial mechanical axis, respectively. The femoral coronal alignment was changed from neutral to 5° varus in 1° increments. Simultaneously, the LCL length was elongated from 0 to 5 mm in 1-mm increments to provide a model of pathological slack. Within 2° of femoral varus alignment, lift-off did not occur even if the LCL was elongated by up to 5 mm. However, lift-off occurred easily in the stance phase in femoral varus alignments of >3° with slight LCL slack. The contact forces of the tibiofemoral joint were influenced more by femoral varus alignment than by LCL laxity. Aiming for neutral alignment in severely varus knees makes it difficult to achieve appropriate ligament balance. Our study suggests that no lift-off occurs with excessive LCL laxity alone in a neutrally aligned TKA and therefore that varus alignment should be avoided to decrease lift-off after TKA. Case series, Level IV.
Wu, Xiongwu; Brooks, Bernard R.
2015-01-01
Chemical and thermodynamic equilibrium of multiple states is a fundamental phenomenon in biological systems and has been the focus of many experimental and computational studies. This work presents a simulation method to directly study the equilibrium of multiple states. This method constructs a virtual mixture of multiple states (VMMS) to sample the conformational space of all chemical states simultaneously. The VMMS system consists of multiple subsystems, one for each state. The subsystem contains a solute and a solvent environment. The solute molecules in all subsystems share the same conformation but have their own solvent environments. Transition between states is implicated by the change of their molar fractions. Simulation of a VMMS system allows efficient calculation of relative free energies of all states, which in turn determine their equilibrium molar fractions. For systems with a large number of state transition sites, an implicit site approximation is introduced to minimize the cost of simulation. A direct application of the VMMS method is for constant pH simulation to study protonation equilibrium. Applying the VMMS method to a heptapeptide of 3 ionizable residues, we calculated the pKas of those residues both with all explicit states and with implicit sites and obtained consistent results. For mouse epidermal growth factor of 9 ionizable groups, our VMMS simulations with implicit sites produced pKas of all 9 ionizable groups and the results agree qualitatively with NMR measurement. This example demonstrates that the VMMS method can be applied to systems of a large number of ionizable groups and the computational cost scales linearly with the number of ionizable groups. For one of the most challenging systems in constant pH calculation, SNase Δ+PHS/V66K, our VMMS simulation shows that it is the state-dependent water penetration that causes the large deviation in lysine66's pKa. PMID:26506245
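The bookkeeping at the heart of the method can be sketched in a few lines: given relative free energies of the states, equilibrium molar fractions follow from Boltzmann weighting. The example values below are illustrative, not from the paper.

```python
import numpy as np

def molar_fractions(delta_g_kcal, T=300.0):
    """Equilibrium molar fractions from relative free energies
    (kcal/mol) via Boltzmann weighting."""
    kT = 0.0019872 * T                       # kcal/mol per kelvin * T
    w = np.exp(-np.asarray(delta_g_kcal) / kT)
    return w / w.sum()

# e.g., two protonation states separated by ~1.36 kcal/mol (one pH unit)
print(molar_fractions([0.0, 1.36]))          # ~[0.91, 0.09] at 300 K
```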
Evaluation of a musculoskeletal model with prosthetic knee through six experimental gait trials.
Kia, Mohammad; Stylianou, Antonis P; Guess, Trent M
2014-03-01
Knowledge of the forces acting on musculoskeletal joint tissues during movement benefits tissue engineering, artificial joint replacement, and our understanding of ligament and cartilage injury. Computational models can be used to predict these internal forces, but musculoskeletal models that simultaneously calculate muscle force and the resulting loading on joint structures are rare. This study used publicly available gait, skeletal geometry, and instrumented prosthetic knee loading data [1] to evaluate muscle driven forward dynamics simulations of walking. Inputs to the simulation were measured kinematics and outputs included muscle, ground reaction, ligament, and joint contact forces. A full body musculoskeletal model with subject specific lower extremity geometries was developed in the multibody framework. A compliant contact was defined between the prosthetic femoral component and tibia insert geometries. Ligament structures were modeled with a nonlinear force-strain relationship. The model included 45 muscles on the right lower leg. During forward dynamics simulations a feedback control scheme calculated muscle forces using the error signal between the current muscle lengths and the lengths recorded during inverse kinematics simulations. Predicted tibio-femoral contact force, ground reaction forces, and muscle forces were compared to experimental measurements for six different gait trials using three different gait types (normal, trunk sway, and medial thrust). The mean average deviation (MAD) and root mean square deviation (RMSD) over one gait cycle are reported. The muscle driven forward dynamics simulations were computationally efficient and consistently reproduced the inverse kinematics motion. The forward simulations also predicted total knee contact forces (166N
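A minimal sketch of the feedback scheme described above, with ad hoc gains: each muscle's force is driven by the error between its current length and the length recorded during the inverse-kinematics pass, clipped so that muscles only pull.

```python
import numpy as np

def muscle_forces(l_now, l_ref, l_dot, kp=2000.0, kd=50.0, f_max=3000.0):
    """PD-style feedback on muscle length error; gains/limits are
    illustrative, not the paper's controller parameters."""
    f = kp * (l_now - l_ref) + kd * l_dot   # tension grows when stretched
    return np.clip(f, 0.0, f_max)           # muscles can only pull

# 45-muscle example: track reference lengths with small random errors
rng = np.random.default_rng(0)
l_ref = rng.uniform(0.08, 0.40, 45)              # fiber lengths, meters
l_now = l_ref + rng.normal(0.0, 2e-3, 45)
print("first forces (N):", muscle_forces(l_now, l_ref, np.zeros(45))[:5])
```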
ERIC Educational Resources Information Center
Pennington, Robert C.; Ault, Melinda Jones; Schuster, John W.; Sanders, Ann
2011-01-01
In the current study, the researchers evaluated the effects of simultaneous prompting and computer-assisted instruction on the story-writing responses of 3 males with autism, 7 to 10 years of age. Classroom teachers conducted all probe and training sessions. The researchers used a multiple baseline across participants design to evaluate the…
Using Simultaneous Prompting to Teach Computer-Based Story Writing to a Student with Autism
ERIC Educational Resources Information Center
Pennington, Robert C.; Stenhoff, Donald M.; Gibson, Jason; Ballou, Kristina
2012-01-01
Writing is a critical skill because it is used to access reinforcement in a variety of contexts. Unfortunately, there has been little research on writing skills instruction for students with intellectual disabilities and autism spectrum disorders. The purpose of this study was to evaluate the effects simultaneous prompting and computer-assisted…
A novel nonlinear adaptive filter using a pipelined second-order Volterra recurrent neural network.
Zhao, Haiquan; Zhang, Jiashu
2009-12-01
To enhance the performance and overcome the heavy computational complexity of recurrent neural networks (RNN), a novel nonlinear adaptive filter based on a pipelined second-order Volterra recurrent neural network (PSOVRNN) is proposed in this paper. A modified real-time recurrent learning (RTRL) algorithm for the proposed filter is derived in detail. The PSOVRNN comprises a number of simple small-scale second-order Volterra recurrent neural network (SOVRNN) modules. In contrast to the standard RNN, the modules of a PSOVRNN can be executed simultaneously in a pipelined parallel fashion, which can lead to a significant improvement in total computational efficiency. Moreover, since each module of the PSOVRNN is an SOVRNN in which nonlinearity is introduced by the recursive second-order Volterra (RSOV) expansion, its performance can be further improved. Computer simulations have demonstrated that the PSOVRNN performs better than the pipelined recurrent neural network (PRNN) and the RNN for nonlinear colored signal prediction and nonlinear channel equalization. However, the superiority of the PSOVRNN over the PRNN comes at the cost of increased computational complexity due to the nonlinear expansion introduced in each module.
Multi-step EMG Classification Algorithm for Human-Computer Interaction
NASA Astrophysics Data System (ADS)
Ren, Peng; Barreto, Armando; Adjouadi, Malek
A three-electrode human-computer interaction system, based on digital processing of the Electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles in the head, respectively. The signal processing algorithm used translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; and the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than previous approaches.
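A hedged Python sketch of the three classification principles follows. The thresholds, the 100 Hz centroid split, and the channel handling are illustrative assumptions, not the published algorithm's constants:

```python
import numpy as np

def classify_emg_window(frontalis, l_temporalis, r_temporalis, fs=1000.0):
    """Heuristic classifier following the three stated principles."""
    energy = {"f": np.sum(np.square(frontalis, dtype=float)),
              "l": np.sum(np.square(l_temporalis, dtype=float)),
              "r": np.sum(np.square(r_temporalis, dtype=float))}

    # Principle 3: comparable energy on both temporalis channels
    # suggests a simultaneous left & right jaw clench (left-click).
    if (min(energy["l"], energy["r"]) > 0.5 * max(energy["l"], energy["r"])
            and energy["f"] < max(energy["l"], energy["r"])):
        return "left-click"

    # Principle 1: the dominant-energy channel identifies the muscle.
    dominant = max(energy, key=energy.get)
    if dominant == "l":
        return "left"    # left jaw clench -> move cursor left
    if dominant == "r":
        return "right"

    # Principle 2: frontalis movements (eyebrows up vs. down) are
    # separated by spectral shape, here via the spectral centroid.
    spec = np.abs(np.fft.rfft(frontalis)) ** 2
    freqs = np.fft.rfftfreq(len(frontalis), d=1.0 / fs)
    centroid = float(np.sum(freqs * spec) / np.sum(spec))
    return "up" if centroid < 100.0 else "down"
```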
Effects of convection electric field on upwelling and escape of ionospheric O(+)
NASA Technical Reports Server (NTRS)
Cladis, J. B.; Chiu, Yam T.; Peterson, William K.
1992-01-01
A Monte Carlo code is used to explore the full effects of the convection electric field on distributions of upflowing O(+) ions from the cusp/cleft ionosphere. Trajectories of individual ions/neutrals are computed as they undergo multiple charge-exchange collisions. In the ion state, the trajectories are computed in realistic models of the magnetic field and the convection, corotation, and ambipolar electric fields. The effects of ion-ion collisions are included, and the trajectories are computed with and without simultaneous stochastic heating perpendicular to the magnetic field by a realistic model of broadband, low frequency waves. In the neutral state, ballistic trajectories in the gravitational field are computed. The initial conditions of the ions, in addition to the ambipolar electric field and the number densities and temperatures of O(+), H(+), and electrons as a function of height in the cusp/cleft region, were obtained from the results of Gombosi and Killeen (1987), who used a hydrodynamic code to simulate the time-dependent frictional-heating effects in a magnetic tube during its motion through the convection throat. The distributions of the ion fluxes as a function of height are constructed from the case histories.
Transient Three-Dimensional Analysis of Side Load in Liquid Rocket Engine Nozzles
NASA Technical Reports Server (NTRS)
Wang, Ten-See
2004-01-01
Three-dimensional numerical investigations of the nozzle start-up side load physics were performed. The objective of this study is to identify the three-dimensional side load physics and to compute the associated aerodynamic side load using an anchored computational methodology. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation and a simulated inlet condition based on a system calculation. Finite-rate chemistry was used throughout the study so that the combustion effect is always included, and the effect of wall cooling on side load physics is studied. The side load physics captured include the afterburning wave, transition from free-shock to restricted-shock separation, and lip Lambda shock oscillation. With the adiabatic nozzle, free-shock separation reappears after the transition from free-shock separation to restricted-shock separation, and the subsequent flow pattern of the simultaneous free-shock and restricted-shock separations creates a very asymmetric Mach disk flow. With the cooled nozzle, the more symmetric restricted-shock separation persisted throughout the start-up transient after the transition, leading to an overall lower side load than that of the adiabatic nozzle. The tepee structures corresponding to the maximum side load were addressed.
NASA Astrophysics Data System (ADS)
Sibra, A.; Dupays, J.; Murrone, A.; Laurent, F.; Massot, M.
2017-06-01
In this paper, we tackle the issue of the accurate simulation of evaporating and reactive polydisperse sprays strongly coupled to unsteady gaseous flows. In solid propulsion, aluminum particles are included in the propellant to improve the global performance, but the distributed combustion of these droplets in the chamber is suspected to be a driving mechanism of hydrodynamic and acoustic instabilities. The faithful prediction of two-phase interactions is a determining step for future solid rocket motor optimization. When looking at saving computational resources as required for industrial applications, performing reliable simulations of two-phase flow instabilities appears as a challenge for both modeling and scientific computing. The size polydispersity, which conditions the droplet dynamics, is a key parameter that has to be accounted for. For moderately dense sprays, a kinetic approach based on a statistical point of view is particularly appropriate. The spray is described by a number density function and its evolution follows a Williams-Boltzmann transport equation. To solve it, we use Eulerian Multi-Fluid methods, based on a continuous discretization of the size phase space into sections, which offer an accurate treatment of the polydispersion. The objective of this paper is threefold: first, to derive a new Two Size Moment Multi-Fluid model that is able to tackle evaporating polydisperse sprays at low cost while accurately describing the main driving mechanisms; second, to develop a dedicated evaporation scheme to treat simultaneously the mass, moment and energy exchanges with the gas and between the sections; and finally, to design a time operator splitting strategy respecting both reactive two-phase flow physics and the cost/accuracy ratio required for industrial computations. Using a research code, we provide 0D validations of the new scheme before assessing the splitting technique's ability on a reference two-phase flow acoustic case. Implemented in the industrial-oriented CEDRE code, all these developments allow us to simulate realistic solid rocket motor configurations, featuring the first polydisperse reactive computations with a fully Eulerian method.
NASA Astrophysics Data System (ADS)
Otake, Yoshito; Esnault, Matthieu; Grupp, Robert; Kosugi, Shinichi; Sato, Yoshinobu
2016-03-01
The determination of in vivo motion of multiple bones using dynamic fluoroscopic images and computed tomography (CT) is useful for post-operative assessment of orthopaedic surgeries such as medial patellofemoral ligament reconstruction. We propose a robust method to measure the 3D motion of multiple rigid objects with high accuracy using a series of bi-plane fluoroscopic images and a multi-resolution, intensity-based, 2D-3D registration. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer was used with a gradient correlation similarity metric. Four approaches to register three rigid objects (femur, tibia-fibula and patella) were implemented: 1) an individual bone approach registering one bone at a time, each with optimization over six degrees of freedom (6DOF), 2) a sequential approach registering one bone at a time but using the previous bone results as the background in DRR generation, 3) a simultaneous approach registering all the bones together (18DOF) and 4) a combination of the sequential and the simultaneous approaches. These approaches were compared in experiments using simulated images generated from the CT of a healthy volunteer and measured fluoroscopic images. Over the 120 simulated frames of motion, the simultaneous approach showed improved registration accuracy compared to the individual approach, with less than 0.68 mm root-mean-square error (RMSE) for translation and less than 1.12° RMSE for rotation. A robustness evaluation conducted with 45 trials of randomly perturbed initializations showed that the sequential approach improved robustness significantly (74% success rate) compared to the individual bone approach (34% success) for patella registration (femur and tibia-fibula registration had a 100% success rate with each approach).
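A hedged sketch of intensity-based 2D-3D registration with CMA-ES and a gradient correlation metric is shown below. The `cma` PyPI package stands in for whatever optimizer implementation the authors used, and `render_drr` is a hypothetical user-supplied DRR generator, not a library call:

```python
import numpy as np
import cma  # the 'cma' PyPI package, an assumed stand-in optimizer

def gradient_correlation(img_a, img_b):
    """Mean Pearson correlation of horizontal and vertical gradients."""
    score = 0.0
    for axis in (0, 1):
        ga = np.gradient(img_a, axis=axis).ravel()
        gb = np.gradient(img_b, axis=axis).ravel()
        score += np.corrcoef(ga, gb)[0, 1]
    return 0.5 * score

def register_bone(fluoro, render_drr, pose0, sigma0=1.0):
    """Search a 6DOF pose (3 rotations, 3 translations) maximizing the
    gradient correlation between the fluoroscopic image and a DRR.
    `render_drr(pose)` is a hypothetical user-supplied DRR generator."""
    def cost(pose):
        return -gradient_correlation(fluoro, render_drr(pose))
    best_pose, _ = cma.fmin2(cost, pose0, sigma0)
    return best_pose
```

Registering all three bones simultaneously would simply concatenate three such pose vectors into one 18DOF search, at the price of a harder optimization landscape.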
Hydrodynamics of an electrochemical membrane bioreactor.
Wang, Ya-Zhou; Wang, Yun-Kun; He, Chuan-Shu; Yang, Hou-Yun; Sheng, Guo-Ping; Shen, Jin-You; Mu, Yang; Yu, Han-Qing
2015-05-22
An electrochemical membrane bioreactor (EMBR) has recently been developed for energy recovery and wastewater treatment. The hydrodynamics of the EMBR would significantly affect the mass transfers and reaction kinetics, exerting a pronounced effect on reactor performance. However, only scarce information is available to date. In this study, the hydrodynamic characteristics of the EMBR were investigated through various approaches. Tracer tests were adopted to generate residence time distribution curves at various hydraulic residence times, and three hydraulic models were developed to simulate the results of the tracer studies. In addition, the detailed flow patterns of the EMBR were acquired from a computational fluid dynamics (CFD) simulation. Compared to the tank-in-series and axial dispersion ones, the Martin model could better describe the hydraulic performance of the EMBR. The CFD simulation results clearly indicated the existence of a preferential or circuitous flow in the EMBR. Moreover, the possible locations of dead zones in the EMBR were visualized through the CFD simulation. Based on these results, the relationship between the reactor performance and the hydrodynamics of the EMBR was further elucidated with respect to current generation. The results of this study would benefit the design, operation and optimization of the EMBR for simultaneous energy recovery and wastewater treatment.
Pfefer, T Joshua; Wang, Quanzeng; Drezek, Rebekah A
2011-11-01
Computational approaches for simulation of light-tissue interactions have provided extensive insight into biophotonic procedures for diagnosis and therapy. However, few studies have addressed simulation of time-resolved fluorescence (TRF) in tissue and none have combined Monte Carlo simulations with standard TRF processing algorithms to elucidate approaches for cancer detection in layered biological tissue. In this study, we investigate how illumination-collection parameters (e.g., collection angle and source-detector separation) influence the ability to measure fluorophore lifetime and tissue layer thickness. Decay curves are simulated with a Monte Carlo TRF light propagation model. Multi-exponential iterative deconvolution is used to determine lifetimes and fractional signal contributions. The ability to detect changes in mucosal thickness is optimized by probes that selectively interrogate regions superficial to the mucosal-submucosal boundary. Optimal accuracy in simultaneous determination of lifetimes in both layers is achieved when each layer contributes 40-60% of the signal. These results indicate that depth-selective approaches to TRF have the potential to enhance disease detection in layered biological tissue and that modeling can play an important role in probe design optimization. Published by Elsevier Ireland Ltd.
Collision detection and modeling of rigid and deformable objects in laparoscopic simulator
NASA Astrophysics Data System (ADS)
Dy, Mary-Clare; Tagawa, Kazuyoshi; Tanaka, Hiromi T.; Komori, Masaru
2015-03-01
Laparoscopic simulators are viable alternatives for surgical training and rehearsal. Haptic devices can also be incorporated with virtual reality simulators to provide additional cues to the users. However, to provide realistic feedback, the haptic device must be updated at 1 kHz. On the other hand, realistic visual cues, that is, the collision detection and deformation between interacting objects, must be rendered at 30 fps or more. Our current laparoscopic simulator detects the collision between a point on the tool tip and points on the organ surfaces, with haptic devices attached to actual tool tips for realistic tool manipulation. The triangular-mesh organ model is rendered using a mass-spring deformation model or finite element method-based models. In this paper, we investigated multi-point-based collision detection on the rigid tool rods. Based on the preliminary results, we propose a method to improve the collision detection scheme and speed up the organ deformation response. We discuss our proposal for an efficient method to compute multiple simultaneous collisions between rigid (laparoscopic tools) and deformable (organs) objects, and to perform the subsequent collision response, with haptic feedback, in real time.
Antoniotti, M; Park, F; Policriti, A; Ugel, N; Mishra, B
2003-01-01
The analysis of large amounts of data, produced as (numerical) traces of in vivo, in vitro and in silico experiments, has become a central activity for many biologists and biochemists. Recent advances in the mathematical modeling and computation of biochemical systems have moreover increased the prominence of in silico experiments; such experiments typically involve the simulation of sets of Differential Algebraic Equations (DAE), e.g., Generalized Mass Action systems (GMA) and S-systems. In this paper we reason about the necessary theoretical and pragmatic foundations for a query and simulation system capable of analyzing large amounts of such trace data. To this end, we propose to combine, in a novel way, several well-known tools from numerical analysis (approximation theory), temporal logic and verification, and visualization. The result is a preliminary prototype system: simpathica/xssys. When dealing with simulation data, simpathica/xssys exploits the special structure of the underlying DAE and reduces the search space in an efficient way so as to facilitate queries about the traces. The proposed system is designed to give the user the possibility to systematically analyze and simultaneously query different possible timed evolutions of the modeled system.
NASA Astrophysics Data System (ADS)
Guo, L.; Huang, H.; Gaston, D.; Redden, G. D.; Fox, D. T.; Fujita, Y.
2010-12-01
Inducing mineral precipitation in the subsurface is one potential strategy for immobilizing trace metal and radionuclide contaminants. Generating mineral precipitates in situ can be achieved by manipulating chemical conditions, typically through injection or in situ generation of reactants. How these reactants transport, mix and react within the medium controls the spatial distribution and composition of the resulting mineral phases. Multiple processes, including fluid flow, dispersive/diffusive transport of reactants, biogeochemical reactions and changes in porosity-permeability, are tightly coupled over a number of scales. Numerical modeling can be used to investigate the nonlinear coupling effects of these processes which are quite challenging to explore experimentally. Many subsurface reactive transport simulators employ a de-coupled or operator-splitting approach where transport equations and batch chemistry reactions are solved sequentially. However, such an approach has limited applicability for biogeochemical systems with fast kinetics and strong coupling between chemical reactions and medium properties. A massively parallel, fully coupled, fully implicit Reactive Transport simulator (referred to as “RAT”) based on a parallel multi-physics object-oriented simulation framework (MOOSE) has been developed at the Idaho National Laboratory. Within this simulator, systems of transport and reaction equations can be solved simultaneously in a fully coupled, fully implicit manner using the Jacobian Free Newton-Krylov (JFNK) method with additional advanced computing capabilities such as (1) physics-based preconditioning for solution convergence acceleration, (2) massively parallel computing and scalability, and (3) adaptive mesh refinements for 2D and 3D structured and unstructured mesh. The simulator was first tested against analytical solutions, then applied to simulating induced calcium carbonate mineral precipitation in 1D columns and 2D flow cells as analogs to homogeneous and heterogeneous porous media, respectively. In 1D columns, calcium carbonate mineral precipitation was driven by urea hydrolysis catalyzed by urease enzyme, and in 2D flow cells, calcium carbonate mineral forming reactants were injected sequentially, forming migrating reaction fronts that are typically highly nonuniform. The RAT simulation results for the spatial and temporal distributions of precipitates, reaction rates and major species in the system, and also for changes in porosity and permeability, were compared to both laboratory experimental data and computational results obtained using other reactive transport simulators. The comparisons demonstrate the ability of RAT to simulate complex nonlinear systems and the advantages of fully coupled approaches, over de-coupled methods, for accurate simulation of complex, dynamic processes such as engineered mineral precipitation in subsurface environments.
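The Jacobian-Free Newton-Krylov method mentioned above rests on approximating Jacobian-vector products with finite differences of the residual, so the coupled Jacobian is never assembled. A minimal sketch, with the step-size scaling simplified relative to production solvers:

```python
import numpy as np

def jfnk_matvec(residual, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product:
        J(u) v ~ (F(u + h v) - F(u)) / h
    `residual` is any nonlinear residual function F(u); the step
    scaling here is a simplified illustration."""
    norm_v = np.linalg.norm(v)
    if norm_v == 0.0:
        return np.zeros_like(u)
    h = eps * max(1.0, np.linalg.norm(u)) / norm_v
    return (residual(u + h * v) - residual(u)) / h

# A Krylov method such as GMRES only needs this product, so Newton
# steps over the fully coupled transport-reaction system can be taken
# without ever forming the Jacobian matrix.
```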
A computer program for simulating geohydrologic systems in three dimensions
Posson, D.R.; Hearne, G.A.; Tracy, J.V.; Frenzel, P.F.
1980-01-01
This document is directed toward individuals who wish to use a computer program to simulate ground-water flow in three dimensions. The strongly implicit procedure (SIP) numerical method is used to solve the set of simultaneous equations. New data processing techniques and program input and output options are emphasized. The aquifer system to be modeled may be heterogeneous and anisotropic, and may include both artesian and water-table conditions. Systems which consist of well-defined alternating layers of highly permeable and poorly permeable material may be represented by a sequence of equations for two-dimensional flow in each of the highly permeable units. Boundaries where head or flux is user-specified may be irregularly shaped. The program also allows the user to represent streams as limited-source boundaries when the streamflow is small in relation to the hydraulic stress on the system. The data-processing techniques relating to 'cube' input and output, to swapping of layers, to restarting of simulation, to free-format NAMELIST input, to the details of each subroutine's logic, and to the overlay program structure are discussed. The program is capable of processing large models that might overflow computer memories with conventional programs. Detailed instructions for selecting program options, for initializing the data arrays, for defining 'cube' output lists and maps, and for plotting hydrographs of calculated and observed heads and/or drawdowns are provided. Output may be restricted to those nodes of particular interest, thereby reducing the volumes of printout for modelers, which may be critical when working at remote terminals. 'Cube' input commands allow the modeler to set aquifer parameters and initialize the model with very few input records. Appendixes provide instructions to compile the program, definitions and cross-references for program variables, a summary of the FLECS structured FORTRAN programming language, listings of the FLECS and FORTRAN source code, and samples of input and output for example simulations. (USGS)
Mesoscale Simulation of Blood Flow in Small Vessels
Bagchi, Prosenjit
2007-01-01
Computational modeling of blood flow in microvessels with internal diameter 20–500 μm is a major challenge. This is because blood in such vessels behaves as a multiphase suspension of deformable particles. A continuum model of blood is not adequate if the motion of individual red blood cells in the suspension is of interest. At the same time, multiple cells, often a few thousand in number, must also be considered to account for cell-cell hydrodynamic interaction. Moreover, the red blood cells (RBCs) are highly deformable. Deformation of the cells must also be considered in the model, as it is a major determinant of many physiologically significant phenomena, such as the formation of a cell-free layer and the Fahraeus-Lindqvist effect. In this article, we present two-dimensional computational simulation of blood flow in vessels of size 20–300 μm at discharge hematocrit of 10–60%, taking into consideration the particulate nature of blood and cell deformation. The numerical model is based on the immersed boundary method, and the red blood cells are modeled as liquid capsules. A large RBC population comprising as many as 2500 cells is simulated. Migration of the cells normal to the wall of the vessel and the formation of the cell-free layer are studied. Results on the trajectories and velocity traces of the RBCs, and their fluctuations, are presented. Also presented are results on the plug-flow velocity profile of blood, the apparent viscosity, and the Fahraeus-Lindqvist effect. The numerical results also allow us to investigate the variation of apparent blood viscosity along the cross-section of a vessel. The computational results are compared with experimental results. To the best of our knowledge, this article presents the first simulation to simultaneously consider a large ensemble of red blood cells and the cell deformation. PMID:17208982
Schaffranek, Raymond W.
2004-01-01
A numerical model for simulation of surface-water integrated flow and transport in two (horizontal-space) dimensions is documented. The model solves vertically integrated forms of the equations of mass and momentum conservation and solute transport equations for heat, salt, and constituent fluxes. An equation of state for salt balance directly couples solution of the hydrodynamic and transport equations to account for the horizontal density gradient effects of salt concentrations on flow. The model can be used to simulate the hydrodynamics, transport, and water quality of well-mixed bodies of water, such as estuaries, coastal seas, harbors, lakes, rivers, and inland waterways. The finite-difference model can be applied to geographical areas bounded by any combination of closed land or open water boundaries. The simulation program accounts for sources of internal discharges (such as tributary rivers or hydraulic outfalls), tidal flats, islands, dams, and movable flow barriers or sluices. Water-quality computations can treat reactive and (or) conservative constituents simultaneously. Input requirements include bathymetric and topographic data defining land-surface elevations, time-varying water level or flow conditions at open boundaries, and hydraulic coefficients. Optional input includes the geometry of hydraulic barriers and constituent concentrations at open boundaries. Time-dependent water level, flow, and constituent-concentration data are required for model calibration and verification. Model output consists of printed reports and digital files of numerical results in forms suitable for postprocessing by graphical software programs and (or) scientific visualization packages. The model is compatible with most mainframe, workstation, mini- and micro-computer operating systems and FORTRAN compilers. This report defines the mathematical formulation and computational features of the model, explains the solution technique and related model constraints, describes the model framework, documents the type and format of inputs required, and identifies the type and format of output available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fellinger, Michael R.; Hector, Jr., Louis G.; Trinkle, Dallas R.
In this study, we compute changes in the lattice parameters and elastic stiffness coefficients C_ij of body-centered tetragonal (bct) Fe due to Al, B, C, Cu, Mn, Si, and N solutes. Solute strain misfit tensors determine changes in the lattice parameters as well as strain contributions to the changes in the C_ij. We also compute chemical contributions to the changes in the C_ij, and show that the sum of the strain and chemical contributions agrees with more computationally expensive direct calculations that simultaneously incorporate both contributions. Octahedral interstitial solutes, with C being the most important addition in steels, must be present to stabilize the bct phase over the body-centered cubic phase. We therefore compute the effects of interactions between interstitial C solutes and substitutional solutes on the bct lattice parameters and C_ij for all possible solute configurations in the dilute limit, and thermally average the results to obtain effective changes in properties due to each solute. Finally, the computed data can be used to estimate solute-induced changes in mechanical properties such as strength and ductility, and can be directly incorporated into mesoscale simulations of multiphase steels to model solute effects on the bct martensite phase.
Pulmonary imaging using respiratory motion compensated simultaneous PET/MR
Dutta, Joyita; Huang, Chuan; Li, Quanzheng; El Fakhri, Georges
2015-01-01
Purpose: Pulmonary positron emission tomography (PET) imaging is confounded by blurring artifacts caused by respiratory motion. These artifacts degrade both image quality and quantitative accuracy. In this paper, the authors present a complete data acquisition and processing framework for respiratory motion compensated image reconstruction (MCIR) using simultaneous whole body PET/magnetic resonance (MR) and validate it through simulation and clinical patient studies. Methods: The authors have developed an MCIR framework based on maximum a posteriori or MAP estimation. For fast acquisition of high quality 4D MR images, the authors developed a novel Golden-angle RAdial Navigated Gradient Echo (GRANGE) pulse sequence and used it in conjunction with sparsity-enforcing k-t FOCUSS reconstruction. The authors use a 1D slice-projection navigator signal encapsulated within this pulse sequence along with a histogram-based gate assignment technique to retrospectively sort the MR and PET data into individual gates. The authors compute deformation fields for each gate via nonrigid registration. The deformation fields are incorporated into the PET data model as well as utilized for generating dynamic attenuation maps. The framework was validated using simulation studies on the 4D XCAT phantom and three clinical patient studies that were performed on the Biograph mMR, a simultaneous whole body PET/MR scanner. Results: The authors compared MCIR (MC) results with ungated (UG) and one-gate (OG) reconstruction results. The XCAT study revealed contrast-to-noise ratio (CNR) improvements for MC relative to UG in the range of 21%–107% for 14 mm diameter lung lesions and 39%–120% for 10 mm diameter lung lesions. A strategy for regularization parameter selection was proposed, validated using XCAT simulations, and applied to the clinical studies. The authors’ results show that the MC image yields 19%–190% increase in the CNR of high-intensity features of interest affected by respiratory motion relative to UG and a 6%–51% increase relative to OG. Conclusions: Standalone MR is not the traditional choice for lung scans due to the low proton density, high magnetic susceptibility, and low T2∗ relaxation time in the lungs. By developing and validating this PET/MR pulmonary imaging framework, the authors show that simultaneous PET/MR, unique in its capability of combining structural information from MR with functional information from PET, shows promise in pulmonary imaging. PMID:26133621
An imperialist competitive algorithm for virtual machine placement in cloud computing
NASA Astrophysics Data System (ADS)
Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza
2017-05-01
Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role in the resource utilisation and power efficiency of cloud computing environments. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem, called ICA-VMPLC. ICA is chosen as the base optimisation algorithm because of its ease of neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods, such as grouping genetic and ant colony-based algorithms, as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
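To make the two objectives concrete, a toy version of a combined power-plus-wastage cost over a candidate placement might look as follows. The linear power model and the wastage term follow common VM-placement formulations; the constants and names are illustrative, not taken from the paper:

```python
def placement_cost(assign, vm_cpu, vm_mem, host_cpu, host_mem,
                   p_idle=162.0, p_full=215.0, weight=1.0):
    """Toy combined objective: total power plus total resource wastage.
    `assign[i]` is the host index of VM i; utilisations are fractions
    of each host's capacity."""
    power, wastage = 0.0, 0.0
    for h in range(len(host_cpu)):
        vms = [i for i, host in enumerate(assign) if host == h]
        if not vms:
            continue  # empty hosts are assumed to be switched off
        u_cpu = sum(vm_cpu[i] for i in vms) / host_cpu[h]
        u_mem = sum(vm_mem[i] for i in vms) / host_mem[h]
        power += p_idle + (p_full - p_idle) * u_cpu
        # leftover CPU and memory that cannot be used together
        wastage += abs((1.0 - u_cpu) - (1.0 - u_mem))
    return power + weight * wastage

# An ICA (or any metaheuristic) then evolves `assign` vectors,
# scoring each candidate placement with this single cost.
print(placement_cost([0, 0, 1], [0.2, 0.3, 0.5], [0.4, 0.2, 0.3],
                     [1.0, 1.0], [1.0, 1.0]))
```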
NASA Astrophysics Data System (ADS)
Calafiura, Paolo; Leggett, Charles; Seuster, Rolf; Tsulaia, Vakhtang; Van Gemmeren, Peter
2015-12-01
AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write mechanisms, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows the running of AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the diversity of ATLAS event processing workloads on various computing resources: Grid, opportunistic resources and HPC.
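A structural Python analogue of the fork/copy-on-write worker model with a shared event queue is sketched below; AthenaMP itself lives inside the C++/Python Athena framework, so this only illustrates the mechanism, not the actual code:

```python
import multiprocessing as mp

def process_event(event_id):
    return event_id * event_id  # stand-in for reconstruction work

def worker(events, results):
    # Forked workers share the parent's memory pages copy-on-write,
    # so large read-only data is not duplicated across processes.
    for event_id in iter(events.get, None):
        results.put((event_id, process_event(event_id)))

if __name__ == "__main__":
    ctx = mp.get_context("fork")  # fork enables copy-on-write sharing
    events, results = ctx.Queue(), ctx.Queue()
    workers = [ctx.Process(target=worker, args=(events, results))
               for _ in range(4)]
    for w in workers:
        w.start()
    for e in range(100):   # shared event queue: idle workers pull the
        events.put(e)      # next event as soon as they finish one
    for _ in workers:
        events.put(None)   # sentinels shut the workers down
    out = [results.get() for _ in range(100)]
    for w in workers:
        w.join()
    print(len(out), "events processed")
```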
Fovargue, Daniel E; Mitran, Sorin; Smith, Nathan B; Sankin, Georgy N; Simmons, Walter N; Zhong, Pei
2013-08-01
A multiphysics computational model of the focusing of an acoustic pulse and subsequent shock wave formation that occurs during extracorporeal shock wave lithotripsy is presented. In the electromagnetic lithotripter modeled in this work the focusing is achieved via a polystyrene acoustic lens. The transition of the acoustic pulse through the solid lens is modeled by the linear elasticity equations and the subsequent shock wave formation in water is modeled by the Euler equations with a Tait equation of state. Both sets of equations are solved simultaneously in subsets of a single computational domain within the BEARCLAW framework which uses a finite-volume Riemann solver approach. This model is first validated against experimental measurements with a standard (or original) lens design. The model is then used to successfully predict the effects of a lens modification in the form of an annular ring cut. A second model which includes a kidney stone simulant in the domain is also presented. Within the stone the linear elasticity equations incorporate a simple damage model.
Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.
2015-01-01
Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
NASA Astrophysics Data System (ADS)
Ngo, N. H.; Hartmann, J.-M.
2017-12-01
We propose a strategy to generate parameters of the Hartmann-Tran profile (HTp) by simultaneously using first principle calculations and broadening coefficients deduced from Voigt/Lorentz fits of experimental spectra. We start from reference absorptions simulated, at pressures between 10 and 950 Torr, using the HTp with parameters recently obtained from high quality experiments for the P(1) and P(17) lines of the 3-0 band of CO in He, Ar and Kr. Using requantized Classical Molecular Dynamics Simulations (rCMDS), we calculate spectra under the same conditions. We then correct them using a single parameter deduced from Lorentzian fits of both reference and calculated absorptions at a single pressure. The corrected rCMDS spectra are then simultaneously fitted using the HTp, yielding the parameters of this model and associated spectra. Comparisons between the retrieved and input (reference) HTp parameters show a quite satisfactory agreement. Furthermore, differences between the reference spectra and those computed with the HT model fitted to the corrected-rCMDS predictions are much smaller than those obtained with a Voigt line shape. Their full amplitudes are in most cases smaller than 1%, and often below 0.5%, of the peak absorption. This opens the route to completing spectroscopic databases using calculations and the very numerous broadening coefficients available from Voigt fits of laboratory spectra.
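The Voigt profile that serves as the baseline fit here can be evaluated through the Faddeeva function. A minimal sketch, assuming the usual convention of a 1/e Doppler half-width and a Lorentzian HWHM; the example line position and widths are invented:

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def voigt_profile(nu, nu0, alpha_d, gamma_l):
    """Area-normalized Voigt profile:
        V = Re[w(z)] / (alpha_d * sqrt(pi)),
        z = ((nu - nu0) + i * gamma_l) / alpha_d,
    with alpha_d the 1/e Doppler half-width and gamma_l the
    Lorentzian (pressure-broadening) HWHM, in the units of nu."""
    z = ((nu - nu0) + 1j * gamma_l) / alpha_d
    return np.real(wofz(z)) / (alpha_d * np.sqrt(np.pi))

# Illustrative evaluation on an invented line near 6350 cm^-1.
nu = np.linspace(6349.8, 6350.2, 401)
print(voigt_profile(nu, 6350.0, 0.01, 0.05).max())
```

The sub-percent residuals quoted above are exactly what a Voigt fit of this kind cannot reach, which is what motivates the Hartmann-Tran parameterization.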
Computer simulation of gene detection without PCR by single molecule detection
NASA Astrophysics Data System (ADS)
Davis, Lloyd M.; Williams, John G.; Lamb, Don T.
1999-01-01
Pioneer Hi-Bred is developing a low-cost method for rapid screening of DNA, for use in research on elite crop seed genetics. Unamplified genomic DNA with the requisite base sequence is simultaneously labeled by two different colored fluorescent probes, which hybridize near the selected gene. Dual-channel single molecule detection (SMD) within a flow cell then provides a sensitive and specific assay for the gene. The technique has been demonstrated using frequency-doubled Nd:YAG laser excitation of two visible-wavelength dyes. A prototype instrument employing infrared fluorophores and laser diodes for excitation has been developed. Here, we report results from a Monte Carlo simulation of the new instrument, in which experimentally determined photophysical parameters for candidate infrared dyes are used for parametric studies of experimental operating conditions. Fluorophore photostability is found to be a key factor in determining the instrument sensitivity. Most infrared dyes have poor photostability, resulting in inefficient SMD. However, the normalized cross-correlation function of the photon signals from the two channels can still yield a discernible peak, provided that the concentration of dual-labeled molecules is sufficiently high. Further, for low concentrations, processing of the two photon streams with Gaussian-weighted sliding sum digital filters and selection of simultaneously occurring peaks can also provide a sensitive indicator of the presence of dual-labeled molecules, although accidental coincidences must be considered in the interpretation of results.
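The two signal-processing steps named above, the normalized cross-correlation of the two photon channels and the Gaussian-weighted sliding sum, can be sketched as follows; the bin width, lag range, and kernel width are illustrative assumptions:

```python
import numpy as np

def gaussian_sliding_sum(counts, sigma_bins=3.0):
    """Gaussian-weighted sliding sum to sharpen single-molecule bursts
    in a binned photon-count record."""
    half = int(4 * sigma_bins)
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma_bins) ** 2)
    return np.convolve(counts, kernel, mode="same")

def coincidence_curve(ch1, ch2, max_lag=50):
    """Normalized cross-correlation of two binned photon streams
    (float arrays); a peak near zero lag signals dual-labeled
    molecules transiting both channels simultaneously. np.roll wraps
    at the edges, an acceptable shortcut for long records."""
    a = (ch1 - ch1.mean()) / (ch1.std() * len(ch1))
    b = (ch2 - ch2.mean()) / ch2.std()
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, np.array([np.sum(a * np.roll(b, k)) for k in lags])
```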
Self-consistent core-pedestal transport simulations with neural network accelerated models
NASA Astrophysics Data System (ADS)
Meneghini, O.; Smith, S. P.; Snyder, P. B.; Staebler, G. M.; Candy, J.; Belli, E.; Lao, L.; Kostuk, M.; Luce, T.; Luda, T.; Park, J. M.; Poli, F.
2017-08-01
Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. The NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
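The surrogate idea is generic enough to sketch: train a small multilayer perceptron on input-output pairs from a slow model, then evaluate it in microseconds. The data below are synthetic stand-ins with TGLF-like shapes only, not any actual physics:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for slow-model training data: 8 plasma-like
# inputs -> 1 flux-like output (shapes only, no physics).
rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 8))
y = np.sin(X @ rng.normal(size=8)) + 0.01 * rng.normal(size=5000)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
surrogate.fit(X[:4000], y[:4000])

# Evaluation now costs microseconds instead of the original model's
# CPU time, which is what makes an iterative, self-consistent
# core-pedestal coupling loop affordable.
print("held-out R^2:", surrogate.score(X[4000:], y[4000:]))
```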
Efficient Fourier-based algorithms for time-periodic unsteady problems
NASA Astrophysics Data System (ADS)
Gopinath, Arathi Kamath
2007-12-01
This dissertation work proposes two algorithms for the simulation of time-periodic unsteady problems via the solution of the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations. These algorithms use a Fourier representation in time and hence solve for the periodic state directly, without resolving transients (which consume most of the resources in a time-accurate scheme). In contrast to conventional Fourier-based techniques, which solve the governing equations in frequency space, the new algorithms perform all the calculations in the time domain and hence require minimal modifications to an existing solver. The complete space-time solution is obtained by iterating in a fifth pseudo-time dimension. Various time-periodic problems such as helicopter rotors, wind turbines, turbomachinery and flapping wings can be simulated using the Time Spectral method. The algorithm is first validated using pitching airfoil/wing test cases. The method is further extended to turbomachinery problems, and the computational results are verified by comparison with a time-accurate calculation. The technique can be very memory intensive for large problems, since the solution is computed (and hence stored) simultaneously at all time levels. Often, the blade counts of a turbomachine are rescaled such that a periodic fraction of the annulus can be solved. This approximation enables the solution to be obtained at a fraction of the cost of a full-scale time-accurate solution. For a viscous computation over a three-dimensional single-stage rescaled compressor, an order of magnitude savings is achieved. The second algorithm, the reduced-order Harmonic Balance method, is applicable only to turbomachinery flows and offers even larger computational savings than the Time Spectral method. It simulates the true geometry of the turbomachine using only one blade passage per blade row as the computational domain. In each blade row of the turbomachine, only the dominant frequencies are resolved, namely, combinations of the neighboring rows' blade-passing frequencies. An appropriate set of frequencies can be chosen by the analyst/designer based on a trade-off between accuracy and the computational resources available. A cost comparison with a time-accurate computation for an Euler calculation on a two-dimensional multi-stage compressor yielded an order of magnitude savings, and a RANS calculation on a three-dimensional single-stage compressor achieved two orders of magnitude savings, with comparable accuracy.
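The core of the Time Spectral method is a dense operator that couples all time instances of one period and replaces the physical time derivative. A sketch using the standard periodic spectral differentiation matrix for an odd number of points, scaled to an arbitrary period:

```python
import numpy as np

def time_spectral_matrix(n, period=2.0 * np.pi):
    """Spectral time-derivative operator for n equally spaced time
    instances over one period (odd n; standard periodic spectral
    differentiation matrix, scaled to the period)."""
    assert n % 2 == 1, "this closed form assumes an odd point count"
    D = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            if j != k:
                D[j, k] = 0.5 * (-1.0) ** (j - k) \
                          / np.sin(np.pi * (j - k) / n)
    return (2.0 * np.pi / period) * D

# Sanity check: the operator differentiates a resolved harmonic exactly.
t = np.linspace(0.0, 2.0 * np.pi, 9, endpoint=False)
D = time_spectral_matrix(9)
assert np.allclose(D @ np.sin(t), np.cos(t))
```

Because applying D couples every time level to every other, the solution must be stored at all levels at once, which is the memory cost noted above.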
Barron, Martin; Zhang, Siyuan
2018-01-01
Cell types in cell populations change as the condition changes: some cell types die out, new cell types may emerge and surviving cell types evolve to adapt to the new condition. Using single-cell RNA-sequencing data that measure the gene expression of cells before and after the condition change, we propose an algorithm, SparseDC, which identifies cell types, traces their changes across conditions and identifies genes which are marker genes for these changes. By solving a unified optimization problem, SparseDC completes all three tasks simultaneously. SparseDC is highly computationally efficient and demonstrates its accuracy on both simulated and real data. PMID:29140455
An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai
Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.
A Dual-Beam Irradiation Facility for a Novel Hybrid Cancer Therapy
NASA Astrophysics Data System (ADS)
Sabchevski, Svilen Petrov; Idehara, Toshitaka; Ishiyama, Shintaro; Miyoshi, Norio; Tatsukawa, Toshiaki
2013-01-01
In this paper we present the main ideas and discuss both the feasibility and the conceptual design of a novel hybrid technique and equipment for an experimental cancer therapy based on the simultaneous and/or sequential application of two beams, namely a beam of neutrons and a CW (continuous wave) or intermittent sub-terahertz wave beam produced by a gyrotron for treatment of cancerous tumors. The main simulation tools for the development of the computer aided design (CAD) of the prospective experimental facility for clinical trials and study of such new medical technology are briefly reviewed. Some tasks for a further continuation of this feasibility analysis are formulated as well.
NASA Astrophysics Data System (ADS)
Fishkova, T. Ya.
2017-06-01
Using computer simulation, I have determined the parameters of a simple multichannel charged-particle analyzer of my own design, which has the form of a cylindrical capacitor with a discrete outer cylinder and closed ends, over a wide range of simultaneously recorded energies (E_max/E_min = 100). When an additional cylindrical electrode of small dimensions is introduced near the front end of the system, it is possible to improve the resolution by more than an order of magnitude in the low-energy region. At the same time, the energy resolution of the analyzer over the entire energy range given above is ρ = (4–6) × 10⁻³.
NASA Astrophysics Data System (ADS)
Enayatifar, Rasul; Sadaei, Hossein Javedani; Abdullah, Abdul Hanan; Lee, Malrey; Isnin, Ismail Fauzi
2015-08-01
Many studies have recently been conducted on securing digital images in order to protect such data while they are sent over the internet. This work proposes a new approach based on a hybrid model of the Tinkerbell chaotic map, deoxyribonucleic acid (DNA) and cellular automata (CA). DNA rules, a DNA sequence XOR operator and CA rules are used simultaneously to encrypt the plain-image pixels. To determine the rule number in the DNA sequence and also in the CA, a 2-dimensional Tinkerbell chaotic map is employed. Experimental results and computer simulations both confirm that the proposed scheme not only demonstrates outstanding encryption, but also resists various typical attacks.
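The Tinkerbell map itself is easy to state. A toy sketch of iterating it to derive per-pixel rule indices follows; the parameters and seed are common chaotic choices, and the quantization into DNA/CA rule numbers is an illustrative stand-in for the paper's key schedule:

```python
def tinkerbell_rules(n, x=-0.72, y=-0.64,
                     a=0.9, b=-0.6013, c=2.0, d=0.5):
    """Iterate the 2-D Tinkerbell map
        x' = x^2 - y^2 + a*x + b*y,   y' = 2*x*y + c*x + d*y
    and quantize each state into a DNA-rule index (0-7) and a CA bit.
    Parameters and seed are common chaotic choices, not the paper's
    keys; the quantization is a toy stand-in."""
    for _ in range(n):
        x, y = (x * x - y * y + a * x + b * y,
                2.0 * x * y + c * x + d * y)
        yield int(abs(x) * 1e6) % 8, int(abs(y) * 1e6) % 2

# Rule choices for the first four pixels:
print(list(tinkerbell_rules(4)))
```

Sensitivity to the seed (x, y) is what lets the same construction serve as a keyed rule schedule: a tiny change in the key yields an entirely different rule stream.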
NASA Technical Reports Server (NTRS)
1973-01-01
The HD 220 program was created as part of the space shuttle solid rocket booster recovery system definition. The model was generated to investigate the damage to SRB components under water impact loads. The random nature of environmental parameters, such as ocean waves and wind conditions, necessitates estimation of the relative frequency of occurrence for these parameters. The nondeterministic nature of component strengths also lends itself to probabilistic simulation. The Monte Carlo technique allows the simultaneous perturbation of multiple independent parameters and provides outputs describing the probability distribution functions of the dependent parameters. This allows the user to determine the required statistics for each output parameter.
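The Monte Carlo idea here is simply to sample all uncertain inputs at once and tabulate the distribution of the outputs. A toy sketch with invented distributions and an invented load model, not the HD 220 program's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Illustrative environmental and strength distributions (assumptions):
wave_height = rng.rayleigh(scale=1.5, size=n_trials)           # m
wind_speed = rng.weibull(2.0, size=n_trials) * 8.0             # m/s
strength = rng.normal(loc=3.0e6, scale=3.0e5, size=n_trials)   # Pa

# Toy load model: impact load grows with sea state (invented form).
load = 1.0e6 + 4.0e5 * wave_height + 2.0e4 * wind_speed

# Perturbing all inputs simultaneously yields the output distribution,
# here summarized as a damage probability.
print(f"estimated damage probability: {np.mean(load > strength):.4f}")
```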
Simultaneous Aerodynamic and Structural Design Optimization (SASDO) for a 3-D Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J.-W.; Newman, Perry A.
2001-01-01
The formulation and implementation of an optimization method called Simultaneous Aerodynamic and Structural Design Optimization (SASDO) is shown as an extension of the Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) method. It is extended by the inclusion of structure element sizing parameters as design variables and Finite Element Method (FEM) analysis responses as constraints. The method aims to reduce the computational expense incurred in performing shape and sizing optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, FEM structural analysis and sensitivity analysis tools. SASDO is applied to a simple, isolated, 3-D wing in inviscid flow. Results show that the method finds the same local optimum as a conventional optimization method, with some reduction in the computational cost and without significant modifications to the analysis tools.
FHSA-SED: Two-Locus Model Detection for Genome-Wide Association Study with Harmony Search Algorithm.
Tuo, Shouheng; Zhang, Junying; Yuan, Xiguo; Zhang, Yuanyuan; Liu, Zhaowen
2016-01-01
The two-locus model is a typical significant disease model to be identified in genome-wide association studies (GWAS). Due to the intensive computational burden and the diversity of disease models, existing methods suffer from low detection power, high computation cost, and a preference for some types of disease models. In this study, two scoring functions (Bayesian network based K2-score and Gini-score) are used for characterizing two SNP loci as a candidate model; the two criteria are adopted simultaneously to improve identification power and to tackle the preference problem. The harmony search algorithm (HSA) is improved for quickly finding the most likely candidate models among all two-locus models, in which a local search algorithm with a two-dimensional tabu table is presented to avoid repeatedly evaluating disease models that have strong marginal effects. Finally, the G-test statistic is used to further test the candidate models. We investigate our method, named FHSA-SED, on 82 simulated datasets and a real AMD dataset, and compare it with two typical methods (MACOED and CSE) that have been developed recently based on swarm intelligent search algorithms. The results of simulation experiments indicate that our method outperforms the two compared algorithms in terms of detection power, computation time, evaluation times, sensitivity (TPR), specificity (SPC), positive predictive value (PPV) and accuracy (ACC). Our method has identified two SNPs (rs3775652 and rs10511467) that may also be associated with disease in the AMD dataset.
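The final G-test step is standard. A minimal sketch on a hypothetical genotype-by-phenotype contingency table (the counts below are invented):

```python
import numpy as np

def g_test(observed):
    """G-test statistic for an observed contingency table
    (e.g., 9 two-SNP genotype combinations x case/control):
        G = 2 * sum O * ln(O / E)
    with expected counts E from the usual independence model and
    degrees of freedom (rows - 1) * (cols - 1)."""
    O = np.asarray(observed, dtype=float)
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    mask = O > 0                    # 0*ln(0) terms contribute nothing
    G = 2.0 * np.sum(O[mask] * np.log(O[mask] / E[mask]))
    dof = (O.shape[0] - 1) * (O.shape[1] - 1)
    return G, dof

# Toy 9x2 table: two-SNP genotype counts for cases vs. controls.
counts = np.array([[30, 12], [25, 20], [10, 18],
                   [22, 15], [28, 27], [12, 16],
                   [9, 14], [15, 13], [8, 21]])
print(g_test(counts))
```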
Trescott, Peter C.; Pinder, George Francis; Larson, S.P.
1976-01-01
The model will simulate ground-water flow in an artesian aquifer, a water-table aquifer, or a combined artesian and water-table aquifer. The aquifer may be heterogeneous and anisotropic and have irregular boundaries. The source term in the flow equation may include well discharge, constant recharge, leakage from confining beds in which the effects of storage are considered, and evapotranspiration as a linear function of depth to water. The theoretical development includes presentation of the appropriate flow equations and derivation of the finite-difference approximations (written for a variable grid). The documentation emphasizes the numerical techniques that can be used for solving the simultaneous equations and describes the results of numerical experiments using these techniques. Of the three numerical techniques available in the model, the strongly implicit procedure, in general, requires less computer time and has fewer numerical difficulties than do the iterative alternating direction implicit procedure and line successive overrelaxation (which includes a two-dimensional correction procedure to accelerate convergence). The documentation includes a flow chart, program listing, an example simulation, and sections on designing an aquifer model and requirements for data input. It illustrates how model results can be presented on the line printer and pen plotters with a program that utilizes the graphical display software available from the Geological Survey Computer Center Division. In addition the model includes options for reading input data from a disk and writing intermediate results on a disk.
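As a rough illustration of the kind of iterative solution the documentation describes, the sketch below solves a steady-state finite-difference flow equation with point successive overrelaxation. The uniform grid, fixed-head boundaries, and source scaling are simplifying assumptions; the actual model offers the strongly implicit procedure, iterative ADI, and line SOR on a variable grid.

```python
import numpy as np

def solve_head_sor(T, Q, h_bc, omega=1.7, tol=1e-6, max_iter=10000):
    """Illustrative point-SOR solver for steady 2-D confined flow
    on a uniform grid with fixed-head boundaries.

    T:    transmissivity (scalar, homogeneous for simplicity)
    Q:    (ny, nx) source term, already multiplied by dx**2
    h_bc: (ny, nx) array holding boundary heads; interior values are
          used as the initial guess.
    """
    h = h_bc.copy()
    ny, nx = h.shape
    for _ in range(max_iter):
        err = 0.0
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                h_new = 0.25 * (h[i-1, j] + h[i+1, j] +
                                h[i, j-1] + h[i, j+1] + Q[i, j] / T)
                delta = omega * (h_new - h[i, j])
                h[i, j] += delta
                err = max(err, abs(delta))
        if err < tol:
            break
    return h
```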
Ketelhut, Diane Jass; Niemi, Steven M
2007-01-01
This article examines several new and exciting communication technologies. Many of the technologies were developed by the entertainment industry; however, other industries are adopting and modifying them for their own needs. These new technologies allow people to collaborate across distance and time and to learn in simulated work contexts. The article explores the potential utility of these technologies for advancing laboratory animal care and use through better education and training. Descriptions include emerging technologies such as augmented reality and multi-user virtual environments, which offer new approaches with different capabilities. Augmented reality interfaces, characterized by the use of handheld computers to infuse the virtual world into the real one, result in deeply immersive simulations. In these simulations, users can access virtual resources and communicate with real and virtual participants. Multi-user virtual environments enable multiple participants to simultaneously access computer-based three-dimensional virtual spaces, called "worlds," and to interact with digital tools. They allow for authentic experiences that promote collaboration, mentoring, and communication. Because individuals may learn or train differently, it is advantageous to combine the capabilities of these technologies and applications with more traditional methods, reaching more students than current methods alone can serve. The use of these technologies in animal care and use programs can create detailed training and education environments that allow students to learn the procedures more effectively, teachers to assess their progress more objectively, and researchers to gain insights into animal care.
Robust Real-Time Musculoskeletal Modeling Driven by Electromyograms.
Durandau, Guillaume; Farina, Dario; Sartori, Massimo
2018-03-01
Current clinical biomechanics involves lengthy data acquisition and time-consuming offline analyses with biomechanical models not operating in real-time for man-machine interfacing. We developed a method that enables online analysis of neuromusculoskeletal function in vivo in the intact human. We used electromyography (EMG)-driven musculoskeletal modeling to simulate all transformations from muscle excitation onset (EMGs) to mechanical moment production around multiple lower-limb degrees of freedom (DOFs). We developed a calibration algorithm that enables adjusting musculoskeletal model parameters specifically to an individual's anthropometry and force-generating capacity. We incorporated the modeling paradigm into a computationally efficient, generic framework that can be interfaced in real-time with any movement data collection system. The framework demonstrated the ability of computing forces in 13 lower-limb muscle-tendon units and resulting moments about three joint DOFs simultaneously in real-time. Remarkably, it was capable of extrapolating beyond calibration conditions, i.e., predicting accurate joint moments during six unseen tasks and one unseen DOF. The proposed framework can dramatically reduce evaluation latency in current clinical biomechanics and open up new avenues for establishing prompt and personalized treatments, as well as for establishing natural interfaces between patients and rehabilitation systems. The integration of EMG with numerical modeling will enable simulating realistic neuromuscular strategies in conditions including muscular/orthopedic deficit, which could not be robustly simulated via pure modeling formulations. This will enable translation to clinical settings and development of healthcare technologies including real-time bio-feedback of internal mechanical forces and direct patient-machine interfacing.
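A drastically simplified sketch of the EMG-to-moment pipeline is given below, assuming a first-order activation model, a rigid tendon, and constant moment arms; the function name and parameters are hypothetical, and the published framework additionally calibrates muscle-tendon parameters to the individual and handles multiple DOFs.

```python
import numpy as np

def emg_to_moment(emg, moment_arms, f_max, dt=0.01, tau=0.05):
    """Toy EMG-driven joint moment estimate for one DOF.

    emg:         (n_steps, n_muscles) rectified, normalized EMG envelopes
    moment_arms: (n_muscles,) assumed-constant moment arms [m]
    f_max:       (n_muscles,) maximum isometric forces [N]
    """
    n_steps, n_muscles = emg.shape
    a = np.zeros(n_muscles)
    moments = np.zeros(n_steps)
    for t in range(n_steps):
        a += dt * (emg[t] - a) / tau   # first-order activation dynamics
        forces = a * f_max             # rigid-tendon, isometric toy model
        moments[t] = np.dot(forces, moment_arms)
    return moments
```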
The impact of home computer use on children's activities and development.
Subrahmanyam, K; Kraut, R E; Greenfield, P M; Gross, E F
2000-01-01
The increasing amount of time children are spending on computers at home and school has raised questions about how the use of computer technology may make a difference in their lives--from helping with homework to causing depression to encouraging violent behavior. This article provides an overview of the limited research on the effects of home computer use on children's physical, cognitive, and social development. Initial research suggests, for example, that access to computers increases the total amount of time children spend in front of a television or computer screen at the expense of other activities, thereby putting them at risk for obesity. At the same time, cognitive research suggests that playing computer games can be an important building block to computer literacy because it enhances children's ability to read and visualize images in three-dimensional space and track multiple images simultaneously. The limited evidence available also indicates that home computer use is linked to slightly better academic performance. The research findings are more mixed, however, regarding the effects on children's social development. Although little evidence indicates that the moderate use of computers to play games has a negative impact on children's friendships and family relationships, recent survey data show that increased use of the Internet may be linked to increases in loneliness and depression. Of most concern are the findings that playing violent computer games may increase aggressiveness and desensitize a child to suffering, and that the use of computers may blur a child's ability to distinguish real life from simulation. The authors conclude that more systematic research is needed in these areas to help parents and policymakers maximize the positive effects and to minimize the negative effects of home computers in children's lives.
Phase synchrony reveals organization in human atrial fibrillation
Vidmar, David; Narayan, Sanjiv M.
2015-01-01
It remains unclear if human atrial fibrillation (AF) is spatially nonhierarchical or exhibits a hierarchy of organization sustained by sources. We utilize activation times obtained at discrete locations during AF to compute the phase synchrony between tissue regions, to examine underlying spatial dynamics throughout both atria. We construct a binary synchronization network and show that this network can accurately define regions of coherence in coarse-grained in silico data. Specifically, domains controlled by spiral waves exhibit regions of high phase synchrony. We then apply this analysis to clinical data from patients experiencing cardiac arrhythmias using multielectrode catheters to simultaneously record from a majority of both atria. We show that pharmaceutical intervention with ibutilide organizes activation by increasing the size of the synchronized domain in AF and quantify the increase in temporal organization when arrhythmia changes from fibrillation to tachycardia. Finally, in recordings from 24 patients in AF we show that the level of synchrony is spatially broad with some patients showing large spatially contiguous regions of synchronization, while in others synchrony is localized to small pockets. Using computer simulations, we show that this distribution is inconsistent with distributions obtained from simulations that mimic multiwavelet reentry but is consistent with mechanisms in which one or more spatially conserved spiral waves is surrounded by tissue in which activation is disorganized. PMID:26475585
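The phase synchrony computation can be illustrated with a short sketch. The Hilbert-transform phase extraction shown here is one common way to obtain instantaneous phase and is an assumption, since the study derives phase from activation times at discrete electrode locations.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two activation signals.

    Values near 1 indicate phase synchrony; near 0, independence.
    Thresholding pairwise PLVs across electrode pairs yields a
    binary synchronization network like the one described above.
    """
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return abs(np.mean(np.exp(1j * (phase_x - phase_y))))
```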
Source analysis of MEG activities during sleep (abstract)
NASA Astrophysics Data System (ADS)
Ueno, S.; Iramina, K.
1991-04-01
The present study focuses on magnetic fields of the brain activities during sleep, in particular on K-complexes, vertex waves, and sleep spindles in human subjects. We analyzed these waveforms based on both topographic EEG (electroencephalographic) maps and magnetic field measurements, called MEGs (magnetoencephalograms). The components of magnetic fields perpendicular to the surface of the head were measured using a dc SQUID magnetometer with a second derivative gradiometer. In our computer simulation, the head is assumed to be a homogeneous spherical volume conductor, with electric sources of brain activity modeled as current dipoles. Comparison of computer simulations with the measured data, particularly the MEG, suggests that the source of K-complexes can be modeled by two current dipoles. A source for the vertex wave is modeled by a single current dipole which orients along the body axis out of the head. By again measuring the simultaneous MEG and EEG signals, it is possible to uniquely determine the orientation of this dipole, particularly when it is tilted slightly off-axis. In sleep stage 2, fast waves of magnetic fields consistently appeared, but EEG spindles appeared intermittently. The results suggest that there exist sources which are undetectable by electrical measurement but are detectable by magnetic-field measurement. Such a source can be described by a pair of opposing dipoles whose directions are oppositely oriented.
Algorithm and code development for unsteady three-dimensional Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Obayashi, Shigeru
1994-01-01
Aeroelastic tests entail substantial cost and risk. An aeroelastic wind-tunnel experiment is an order of magnitude more expensive than a parallel experiment involving only aerodynamics. By complementing the wind-tunnel experiments with numerical simulations, the overall cost of aircraft development can be considerably reduced. In order to accurately compute aeroelastic phenomena it is necessary to solve the unsteady Euler/Navier-Stokes equations simultaneously with the structural equations of motion. These equations accurately describe the flow phenomena for aeroelastic applications. At ARC a code, ENSAERO, is being developed for computing the unsteady aerodynamics and aeroelasticity of aircraft; it solves the Euler/Navier-Stokes equations. The purpose of this cooperative agreement was to enhance ENSAERO in both algorithm and geometric capabilities. During the last five years, the algorithms of the code have been enhanced extensively by using high-resolution upwind algorithms and efficient implicit solvers. The zonal capability of the code has been extended from a one-to-one grid interface to a mismatching unsteady zonal interface. The geometric capability of the code has been extended from a single oscillating wing case to a full-span wing-body configuration with oscillating control surfaces. Each time a new capability was added, a proper validation case was simulated, and the capability of the code was demonstrated.
Efficient simulation of intrinsic, extrinsic and external noise in biochemical systems
Pischel, Dennis; Sundmacher, Kai; Flassig, Robert J.
2017-01-01
Abstract Motivation: Biological cells operate in a noisy regime influenced by intrinsic, extrinsic and external noise, which leads to large differences of individual cell states. Stochastic effects must be taken into account to characterize biochemical kinetics accurately. Since the exact solution of the chemical master equation, which governs the underlying stochastic process, cannot be derived for most biochemical systems, approximate methods are used to obtain a solution. Results: In this study, a method to efficiently simulate the various sources of noise simultaneously is proposed and benchmarked on several examples. The method relies on the combination of the sigma point approach to describe extrinsic and external variability and the τ-leaping algorithm to account for the stochasticity due to probabilistic reactions. The comparison of our method to extensive Monte Carlo calculations demonstrates an immense computational advantage while losing an acceptable amount of accuracy. Additionally, the application to parameter optimization problems in stochastic biochemical reaction networks is shown, which is rarely applied due to its huge computational burden. To give further insight, a MATLAB script is provided including the proposed method applied to a simple toy example of gene expression. Availability and implementation: MATLAB code is available at Bioinformatics online. Contact: flassig@mpi-magdeburg.mpg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881987
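As a toy counterpart to the gene expression example the authors provide as a MATLAB script, here is a minimal tau-leaping sketch for a birth-death gene expression model; the rates and the plain (non-adaptive) leap size are illustrative assumptions, and the sigma-point treatment of extrinsic/external variability is omitted.

```python
import numpy as np

def tau_leap_gene_expression(k_tx=10.0, k_deg=0.1, tau=0.1,
                             t_end=100.0, seed=0):
    """Tau-leaping simulation of a birth-death gene expression model:
    transcription at constant rate k_tx, degradation at rate k_deg * m."""
    rng = np.random.default_rng(seed)
    m, t, traj = 0, 0.0, []
    while t < t_end:
        # Fire a Poisson number of each reaction over the leap interval.
        births = rng.poisson(k_tx * tau)
        deaths = rng.poisson(k_deg * m * tau)
        m = max(m + births - deaths, 0)  # clamp: leaps can overshoot zero
        t += tau
        traj.append((t, m))
    return traj
```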
Rocinante, a virtual collaborative visualizer
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, M.J.; Ice, L.G.
1996-12-31
With the goal of improving the ability of people around the world to share the development and use of intelligent systems, Sandia National Laboratories' Intelligent Systems and Robotics Center is developing new Virtual Collaborative Engineering (VCE) and Virtual Collaborative Control (VCC) technologies. A key area of VCE and VCC research is in shared visualization of virtual environments. This paper describes a Virtual Collaborative Visualizer (VCV), named Rocinante, that Sandia developed for VCE and VCC applications. Rocinante allows multiple participants to simultaneously view dynamic geometrically-defined environments. Each viewer can exclude extraneous detail or include additional information in the scene as desired. Shared information can be saved and later replayed in a stand-alone mode. Rocinante automatically scales visualization requirements with computer system capabilities. Models with 30,000 polygons and 4 Megabytes of texture display at 12 to 15 frames per second (fps) on an SGI Onyx and at 3 to 8 fps (without texture) on Indigo 2 Extreme computers. In its networked mode, Rocinante synchronizes its local geometric model with remote simulators and sensory systems by monitoring data transmitted through UDP packets. Rocinante's scalability and performance make it an ideal VCC tool. Users throughout the country can monitor robot motions and the thinking behind their motion planners and simulators.
Development of the Patient-specific Cardiovascular Modeling System Using Immersed Boundary Technique
NASA Astrophysics Data System (ADS)
Tay, Wee-Beng; Lin, Liang-Yu; Tseng, Wen-Yih; Tseng, Yu-Heng
2010-05-01
A computational fluid dynamics (CFD) based, patient-specific cardiovascular modeling system is under development. The system can identify possible diseased conditions and facilitate physicians' diagnosis at an early stage through hybrid CFD simulation and time-resolved magnetic resonance imaging (MRI). The CFD simulation is initially based on the three-dimensional heart model developed by McQueen and Peskin, which can simultaneously compute fluid motions and elastic boundary motions using the immersed boundary method. We extend and improve the three-dimensional heart model for clinical application by including patient-specific hemodynamic information. The flow features in the ventricles and their responses are investigated under different inflow and outflow conditions during the diastole and systole phases based on a quasi-realistic heart model, which takes advantage of the observed flow scenarios. Our results indicate distinct differences between the two groups of participants, including the vortex formation process in the left ventricle (LV), as well as the flow rate distributions at different identified sources such as the aorta, vena cava and pulmonary veins/artery. We further identify some key parameters which may affect the vortex formation in the LV. Thus it is hypothesized that disease-related dysfunctions in intervals before complete heart failure can be observed in the dynamics of transmitral blood flow during early LV diastole.
HERMES: Hadamard Encoding and Reconstruction of MEGA-Edited Spectroscopy
Chan, Kimberly L.; Puts, Nicolaas A. J.; Schär, Michael; Barker, Peter B.; Edden, Richard A. E.
2017-01-01
Purpose To investigate a novel Hadamard-encoded spectral editing scheme and evaluate its performance in simultaneously quantifying N-acetyl aspartate (NAA) and N-acetyl aspartyl glutamate (NAAG) at 3 Tesla. Methods Editing pulses applied according to a Hadamard encoding scheme allow the simultaneous acquisition of multiple metabolites. The method, called HERMES (Hadamard Encoding and Reconstruction of MEGA-Edited Spectroscopy), was optimized to detect NAA and NAAG simultaneously using density-matrix simulations and validated in phantoms at 3T. In vivo data were acquired in the centrum semiovale of 12 normal subjects. The NAA:NAAG concentration ratio was determined by modeling in vivo data using simulated basis functions. Simulations were also performed for potentially coedited molecules with signals within the detected NAA/NAAG region. Results Simulations and phantom experiments show excellent segregation of NAA and NAAG signals into the intended spectra, with minimal crosstalk. Multiplet patterns show good agreement between simulations and phantom and in vivo data. In vivo measurements show that the relative peak intensities of the NAA and NAAG spectra are consistent with a NAA:NAAG concentration ratio of 4.22:1 in good agreement with literature. Simulations indicate some coediting of aspartate and glutathione near the detected region (editing efficiency: 4.5% and 78.2%, respectively, for the NAAG reconstruction and 5.1% and 19.5%, respectively, for the NAA reconstruction). Conclusion The simultaneous and separable detection of two otherwise overlapping metabolites using HERMES is possible at 3T. PMID:27089868
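The Hadamard logic can be sketched in a few lines: four edited sub-experiments are combined with ± signs so that each target metabolite survives in one reconstruction and cancels in the other. The sign pattern below is illustrative only, not the exact HERMES editing scheme for NAA and NAAG.

```python
import numpy as np

def hadamard_reconstruct(sub_spectra):
    """Hadamard reconstruction of two edited metabolites from four
    sub-experiments A, B, C, D (rows of sub_spectra, shape (4, n)).

    The sign assignments are a generic Hadamard pattern; the actual
    HERMES scheme fixes which sub-experiments edit each metabolite.
    """
    A, B, C, D = sub_spectra
    met1 = (A + B) - (C + D)   # edited metabolite 1 (e.g., NAA)
    met2 = (A + C) - (B + D)   # edited metabolite 2 (e.g., NAAG)
    return met1, met2
```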
Vandelanotte, Corneel; De Bourdeaudhuij, Ilse; Sallis, James F; Spittaels, Heleen; Brug, Johannes
2005-04-01
Little evidence exists about the effectiveness of "interactive" computer-tailored interventions and about the combined effectiveness of tailored interventions on physical activity and diet. Furthermore, it is unknown whether they should be executed sequentially or simultaneously. The purpose of this study was to examine (a) the effectiveness of interactive computer-tailored interventions for increasing physical activity and decreasing fat intake and (b) which intervening mode, sequential or simultaneous, is most effective in behavior change. Participants (N = 771) were randomly assigned to receive (a) the physical activity and fat intake interventions simultaneously at baseline, (b) the physical activity intervention at baseline and the fat intake intervention 3 months later, (c) the fat intake intervention at baseline and the physical activity intervention 3 months later, or (d) a place in the control group. Six months postbaseline, the results showed that the tailored interventions produced significantly higher physical activity scores, F(2, 573) = 11.4, p < .001, and lower fat intake scores, F(2, 565) = 31.4, p < .001, in the experimental groups when compared to the control group. For both behaviors, the sequential and simultaneous intervening modes were shown to be effective; however, for the fat intake intervention and for the participants who did not meet the recommendation in the physical activity intervention, the simultaneous mode appeared to work better than the sequential mode.
Curtin, Lindsay B; Finn, Laura A; Czosnowski, Quinn A; Whitman, Craig B; Cawley, Michael J
2011-08-10
To assess the impact of computer-based simulation on the achievement of student learning outcomes during mannequin-based simulation. Participants were randomly assigned to rapid response teams of 5-6 students, and the teams were then randomly assigned to complete either the computer-based or the mannequin-based simulation cases first. In both simulations, students used their critical thinking skills and selected interventions independent of facilitator input. A predetermined rubric was used to record and assess students' performance in the mannequin-based simulations. Feedback and student performance scores were generated by the software in the computer-based simulations. More of the teams in the group that completed the computer-based simulation before the mannequin-based simulation achieved the primary outcome for the exercise, which was survival of the simulated patient (41.2% vs. 5.6%). The majority of students (>90%) recommended the continuation of simulation exercises in the course. Students in both groups felt the computer-based simulation should be completed prior to the mannequin-based simulation. The use of computer-based simulation prior to mannequin-based simulation improved the achievement of learning goals and outcomes. In addition to improving participants' skills, completing the computer-based simulation first may improve participants' confidence during the more real-life setting achieved in the mannequin-based simulation.
Boyle, Peter A.; Christ, Norman H.; Gara, Alan; Mawhinney, Robert D.; Ohmacht, Martin; Sugavanam, Krishnan
2012-12-11
A prefetch system improves a performance of a parallel computing system. The parallel computing system includes a plurality of computing nodes. A computing node includes at least one processor and at least one memory device. The prefetch system includes at least one stream prefetch engine and at least one list prefetch engine. The prefetch system operates those engines simultaneously. After the at least one processor issues a command, the prefetch system passes the command to a stream prefetch engine and a list prefetch engine. The prefetch system operates the stream prefetch engine and the list prefetch engine to prefetch data to be needed in subsequent clock cycles in the processor in response to the passed command.
NASA Astrophysics Data System (ADS)
Moore, James; Yu, Hang; Tang, Chi-Hsien; Wang, Teng; Barbot, Sylvain; Peng, Dongju; Masuti, Sagar; Dauwels, Justin; Hsu, Ya-Ju; Lambert, Valere; Nanjundiah, Priyamvada; Wei, Shengji; Lindsey, Eric; Feng, Lujia; Qiang, Qiu
2017-04-01
Studies of geodetic data across the earthquake cycle indicate a wide range of mechanisms contribute to cycles of stress buildup and relaxation. Both on-fault rate-and-state friction and off-fault rheologies can contribute to the observed deformation, in particular during the postseismic transient phase of the earthquake cycle. One problem with many of these models is that there is a wide range of parameter space to be investigated, with each parameter pair possessing its own tradeoffs. This becomes especially problematic when trying to model both on-fault and off-fault deformation simultaneously. The computational time to simulate these processes simultaneously using finite element and spectral methods can restrict parametric investigations. We present a novel approach to simulate on-fault and off-fault deformation simultaneously using analytical Green's functions for distributed deformation at depth [Barbot, Moore and Lambert, 2016]. This allows us to jointly explore dynamic frictional properties on the fault, and the plastic properties of the bulk rocks (including grain size and water distribution) in the lower crust, with low computational cost. These new displacement and stress Green's functions can be used for both forward and inverse modelling of distributed shear, where the calculated strain-rates can be converted to effective viscosities. Here, we draw insight from the postseismic geodetic observations following the 2015 Mw 7.8 Gorkha earthquake. We forward model afterslip using rate-and-state friction on the megathrust geometry with the two ramp-décollement system presented by Hubbard et al. (pers. comm., 2015) and viscoelastic relaxation using recent experimentally derived flow laws with transient rheology and the thermal structure from Cattin et al. (2001). The postseismic deformation brings new insights into the distribution of brittle and ductile crustal processes beneath Nepal. References: Barbot S., Moore J. D. P., Lambert V. 2016. Displacements and Stress Associated with Distributed Inelastic Deformation in a Half Space. BSSA, Submitted. Cattin R., Martelet G., Henry P., Avouac J. P., Diament M., Shakya T. R. 2001. Gravity anomalies, crustal structure and thermo-mechanical support of the Himalaya of Central Nepal. Geophysical Journal International, Volume 147, Issue 2, 381-392.
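For reference, the strain-rate-to-effective-viscosity conversion mentioned above is commonly written as the standard relation below, where tau is the deviatoric shear stress and epsilon-dot the strain rate; this is a generic definition, not a formula quoted from the abstract.

```latex
\eta_{\mathrm{eff}} = \frac{\tau}{2\,\dot{\varepsilon}}
```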
Transient Three-Dimensional Analysis of Nozzle Side Load in Regeneratively Cooled Engines
NASA Technical Reports Server (NTRS)
Wang, Ten-See
2005-01-01
Three-dimensional numerical investigations on the start-up side load physics for a regeneratively cooled, high-aspect-ratio nozzle were performed. The objectives of this study are to identify the three-dimensional side load physics and to compute the associated aerodynamic side load using an anchored computational methodology. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation, and a transient inlet condition based on an engine system simulation. Computations were performed for both adiabatic and cooled walls in order to understand the effect of boundary conditions. Finite-rate chemistry was used throughout the study so that the combustion effect is always included. The results show that three types of shock evolution are responsible for side loads: generation of a combustion wave; transitions among free-shock separation, restricted-shock separation, and simultaneous free-shock and restricted-shock separations; along with oscillation of shocks across the lip. Wall boundary conditions drastically affect the computed side load physics: the adiabatic nozzle prefers free-shock separation while the cooled nozzle favors restricted-shock separation, resulting in a higher peak side load for the cooled nozzle than for the adiabatic nozzle. By comparing the computed physics with test observations, it is concluded that the cooled wall is the more realistic boundary condition, and that the oscillation of the restricted-shock separation flow pattern across the lip, along with its associated tangential shock motion, is the dominant side load physics for a regeneratively cooled, high-aspect-ratio rocket engine.
Chalon, A; Favre, J; Piotrowski, B; Landmann, V; Grandmougin, D; Maureira, J-P; Laheurte, P; Tran, N
2018-06-01
Implantation of a Left Ventricular Assist Device (LVAD) may produce both excessive local tissue stress and resulting strain-induced tissue rupture, which are potential iatrogenic factors influencing the success of the surgical attachment of the LVAD to the myocardium. By comparing a computational simulation to mechanical tests, we sought to investigate the characteristics of suture-induced stress on porcine myocardium. Tensile strength experiments (n = 8) were performed on bulk left myocardium to establish a hyperelastic reduced polynomial constitutive law. Simultaneously, suture strength tests on left myocardium (n = 6) were performed with a standard tensile test setup. Experiments were made on the bulk ventricular wall with a single U-suture (polypropylene 3-0) and a PTFE pledget. Then, a Finite Element simulation of an LVAD suture case was performed. Strength versus displacement behavior was compared between the mechanical and numerical experiments. Local stress fields in the model were then analyzed. A strong correlation between the experimental and the numerical responses was observed, validating the relevance of the numerical model. A secure damage limit of 100 kPa on heart tissue was defined from mechanical suture testing and used to interpret the numerical results. The impact of suture on heart tissue could be accurately determined through new parameters of the numerical data (stress diffusion, triaxiality stress). Finally, an ideal spacing between sutures of 2 mm was proposed. Our computational model showed a reliable ability to provide and predict the various local tissue stresses created by suture penetration into the myocardium. In addition, this model contributed valuable information useful for designing less traumatic sutures for LVAD implantation. Therefore, our computational model is a promising tool to predict and optimize LVAD myocardial suturing. Copyright © 2018 Elsevier Ltd. All rights reserved.
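The hyperelastic reduced polynomial constitutive law mentioned above has the standard form shown below, with coefficients C_{i0} fitted to the tensile data; the polynomial order N and the fitted coefficients used by the authors are not given here, so this is a generic definition rather than their calibrated model.

```latex
W = \sum_{i=1}^{N} C_{i0}\left(\bar{I}_1 - 3\right)^{i}
```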
NASA Astrophysics Data System (ADS)
Bednar, Earl; Drager, Steven L.
2007-04-01
Quantum information processing's objective is to deliver revolutionary computing capability, harnessing the paradigm shift offered by quantum computing to solve classically hard and computationally challenging problems. Some of our computationally challenging problems of interest include: the capability for rapid image processing, rapid optimization of logistics, protecting information, secure distributed simulation, and massively parallel computation. Currently, one important problem with quantum information processing is that the implementation of quantum computers is difficult to realize due to poor scalability and a high prevalence of errors. Therefore, we have supported the development of Quantum eXpress and QuIDD Pro, two quantum computer simulators running on classical computers for the development and testing of new quantum algorithms and processes. This paper examines the different methods used by these two quantum computing simulators. It reviews both simulators, highlighting each simulator's background, interface, and special features. It also demonstrates the implementation of current quantum algorithms on each simulator. It concludes with summary comments on both simulators.
Power and Efficiency Optimized in Traveling-Wave Tubes Over a Broad Frequency Bandwidth
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.
2001-01-01
A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWT's are critical components in deep space probes, communication satellites, and high-power radar systems. Power conversion efficiency is of paramount importance for TWT's employed in deep space probes and communication satellites. A previous effort was very successful in increasing efficiency and power at a single frequency (ref. 1). Such an algorithm is sufficient for narrow bandwidth designs, but for optimal designs in applications that require high radiofrequency power over a wide bandwidth, such as high-density communications or high-resolution radar, the variation of the circuit response with respect to frequency must be considered. This work at the NASA Glenn Research Center is the first to develop techniques for optimizing TWT efficiency and output power over a broad frequency bandwidth (ref. 2). The techniques are based on simulated annealing, which has the advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 3). Two new broadband simulated annealing algorithms were developed that optimize (1) minimum saturated power efficiency over a frequency bandwidth and (2) simultaneous bandwidth and minimum power efficiency over the frequency band with constant input power. The algorithms were incorporated into the NASA coupled-cavity TWT computer model (ref. 4) and used to design optimal phase velocity tapers using the 59- to 64-GHz Hughes 961HA coupled-cavity TWT as a baseline model. In comparison to the baseline design, the computational results of the first broad-band design algorithm show an improvement of 73.9 percent in minimum saturated efficiency (see the top graph). The second broadband design algorithm (see the bottom graph) improves minimum radiofrequency efficiency with constant input power drive by a factor of 2.7 at the high band edge (64 GHz) and increases simultaneous bandwidth by 500 MHz.
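The broadband optimization can be sketched as a simulated annealing loop whose objective is the minimum efficiency across the frequency band. The cooling schedule, acceptance rule, and function names below are generic assumptions rather than the internals of the NASA coupled-cavity TWT model.

```python
import math
import random

def anneal(x0, band_efficiency, neighbor, t0=1.0, cooling=0.995,
           steps=20000):
    """Generic simulated annealing, maximizing the minimum efficiency
    over a frequency band.

    band_efficiency(x) -> list of efficiencies across the band for a
                          candidate taper design x (assumed callable)
    neighbor(x)        -> randomly perturbed candidate design
    """
    x = best = x0
    fx = fbest = min(band_efficiency(x0))
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = min(band_efficiency(y))
        # Always accept improvements; accept some worse moves at high T.
        if fy > fx or random.random() < math.exp((fy - fx) / t):
            x, fx = y, fy
            if fx > fbest:
                best, fbest = x, fx
        t *= cooling   # geometric cooling schedule
    return best, fbest
```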
NASA Astrophysics Data System (ADS)
Gallo, Emanuela Carolina Angela
Width increased dual-pump enhanced coherent anti-Stokes Raman spectroscopy (WIDECARS) measurements were conducted in a McKenna air-ethylene premixed burner, at nominal equivalence ratios between 0.55 and 2.50, to provide simultaneous quantitative measurements of temperature and the concentrations of six major combustion species (C2H4, N2, O2, H2, CO, CO2). The purpose of this test was to investigate the uncertainties in the experimental and spectral modeling methods in preparation for a subsequent scramjet C2H4/air combustion test at the University of Virginia Aerospace Research Laboratory. A broadband Pyrromethene (PM) PM597 and PM650 dye laser mixture and optical cavity were studied and optimized to excite the Raman shifts of all the target species. Two hundred single-shot recorded spectra were processed, theoretically fitted, and then compared to computational models to verify where chemical equilibrium or adiabatic conditions occurred, providing experimental flame location and formation, species concentrations, temperature, and heat-loss inputs to computational kinetic models. The Stark effect, temperature, and concentration errors are discussed. Subsequently, WIDECARS measurements of a premixed air-ethylene flame were successfully acquired in a direct-connect small-scale dual-mode scramjet combustor at the University of Virginia Supersonic Combustion Facility (UVaSCF). A nominal Mach 5 flight condition was simulated (stagnation pressure p0 = 300 kPa, temperature T0 = 1200 K, equivalence ratio range ER = 0.3-0.4). The purpose of this test was to provide quantitative measurements of the six major combustion species concentrations and temperature. Point-wise measurements were taken by mapping four two-dimensional orthogonal planes (before, within, and two planes after the cavity flame holder) with respect to the combustor freestream direction. Two hundred single-shot recorded spectra were processed and theoretically fitted. Mean flow and standard deviation are provided for each investigated case. Within the flame limits tested, WIDECARS data were analyzed and compared with CFD simulations and OH-PLIF measurements.
Safari, Mahdi; Mosleminiya, Navid; Abdolali, Ali
2017-10-01
Since the development of communication devices and the expansion of their applications, there have been concerns about their harmful health effects. The main aim of this study was to investigate laptop thermal effects caused by exposure to electromagnetic fields and thermal sources simultaneously; to propose a nondestructive, replicable process that is less expensive than clinical measurements; and to study the effects of positioning any new device near the human body in steady state conditions to ensure safety by U.S. and European standard thresholds. A computer simulation was designed to obtain laptop heat flux from SolidWorks flow simulation. The increase in body temperature due to heat flux was calculated, and antenna radiation was calculated using Computer Simulation Technology (CST) Microwave Studio software. Steady state temperature and specific absorption rate (SAR) distribution in the user's body, and heat flux beneath the laptop, were obtained from simulations. The laptop in its high performance mode caused a peak two-dimensional heat flux of 420 W/m² beneath it. The cumulative effect of the laptop in high performance mode and 1 W antenna radiation resulted in temperatures of 42.9, 38.1, and 37.2 °C in lap skin, scrotum, and testis, that is, 5.6, 2.1, and 1.4 °C increases in temperature, respectively. Also, 1 W antenna radiation caused peak three-dimensional SARs of 0.37 × 10⁻³ and 0.13 × 10⁻¹ W/kg at 2.4 and 5 GHz, respectively, which could be ignored in reference to standards and the temperature rise due to laptop use. Bioelectromagnetics. 38:550-558, 2017. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Zhu, Dehua; Echendu, Shirley; Xuan, Yunqing; Webster, Mike; Cluckie, Ian
2016-11-01
Impact-focused studies of extreme weather require coupling of accurate simulations of weather and climate systems and impact-measuring hydrological models, which themselves demand large computer resources. In this paper, we present a preliminary analysis of a high-performance computing (HPC)-based hydrological modelling approach, which is aimed at utilizing and maximizing HPC resources, to support the study of extreme weather impact due to climate change. Here, four case studies are presented through implementation on the HPC Wales platform of the UK mesoscale meteorological Unified Model (UM) with the high-resolution simulation suite UKV, alongside a Linux-based hydrological model, Hydrological Predictions for the Environment (HYPE). The results of this study suggest that the coupled hydro-meteorological model was still able to capture the major flood peaks, compared with the conventional gauge- or radar-driven forecast, but with the added value of a much extended forecast lead time. The high-resolution rainfall estimation produced by the UKV performs similarly to radar rainfall products in the first 2-3 days of the tested flood events, but the uncertainties increase markedly as the forecast horizon extends beyond 3 days. This study takes a step forward in identifying how the online mode approach can be used, where both the numerical weather prediction and the hydrological model are executed, either simultaneously or on the same hardware infrastructure, so that more effective interaction and communication can be achieved and maintained between the models. The concluding observation, however, is that running the entire system on a reasonably powerful HPC platform does not yet allow for real-time simulations, even without the most complex and demanding data simulation part.
OASIS - ORBIT ANALYSIS AND SIMULATION SOFTWARE
NASA Technical Reports Server (NTRS)
Wu, S. C.
1994-01-01
The Orbit Analysis and Simulation Software, OASIS, is a software system developed for covariance and simulation analyses of problems involving earth satellites, especially the Global Positioning System (GPS). It provides a flexible, versatile and efficient accuracy analysis tool for earth satellite navigation and GPS-based geodetic studies. To make future modifications and enhancements easy, the system is modular, with five major modules: PATH/VARY, REGRES, PMOD, FILTER/SMOOTHER, and OUTPUT PROCESSOR. PATH/VARY generates satellite trajectories. Among the factors taken into consideration are: 1) the gravitational effects of the planets, moon and sun; 2) space vehicle orientation and shapes; 3) solar pressure; 4) solar radiation reflected from the surface of the earth; 5) atmospheric drag; and 6) space vehicle gas leaks. The REGRES module reads the user's input, then determines if a measurement should be made based on geometry and time. PMOD modifies a previously generated REGRES file to facilitate various analysis needs. FILTER/SMOOTHER is especially suited to a multi-satellite precise orbit determination and geodetic-type problems. It can be used for any situation where parameters are simultaneously estimated from measurements and a priori information. Examples of nonspacecraft areas of potential application might be Very Long Baseline Interferometry (VLBI) geodesy and radio source catalogue studies. OUTPUT PROCESSOR translates covariance analysis results generated by FILTER/SMOOTHER into user-desired easy-to-read quantities, performs mapping of orbit covariances and simulated solutions, transforms results into different coordinate systems, and computes post-fit residuals. The OASIS program was developed in 1986. It is designed to be implemented on a DEC VAX 11/780 computer using VAX VMS 3.7 or higher. It can also be implemented on a Micro VAX II provided sufficient disk space is available.
Incorporation of MRI-AIF Information For Improved Kinetic Modelling of Dynamic PET Data
NASA Astrophysics Data System (ADS)
Sari, Hasan; Erlandsson, Kjell; Thielemans, Kris; Atkinson, David; Ourselin, Sebastien; Arridge, Simon; Hutton, Brian F.
2015-06-01
In the analysis of dynamic PET data, compartmental kinetic analysis methods require accurate knowledge of the arterial input function (AIF). Although arterial blood sampling is the gold standard among the methods used to measure the AIF, it is usually not preferred as it is an invasive method. An alternative is the simultaneous estimation method (SIME), where the physiological parameters and the AIF are estimated together, using information from different anatomical regions. Due to the large number of parameters to estimate in its optimisation, SIME is a computationally complex method and may sometimes fail to give accurate estimates. In this work, we try to improve SIME by utilising an input function derived from a simultaneously obtained DSC-MRI scan. Under the assumption that the true value of one of the six parameters of the PET-AIF model can be derived from the MRI-AIF, the method is tested using simulated data. The results indicate that SIME can yield more robust results when the MRI information is included, with a significant reduction in the absolute bias of Ki estimates.
Tian, Ye; Schwieters, Charles D.; Opella, Stanley J.; Marassi, Francesca M.
2011-01-01
AssignFit is a computer program developed within the XPLOR-NIH package for the assignment of dipolar coupling (DC) and chemical shift anisotropy (CSA) restraints derived from the solid-state NMR spectra of protein samples with uniaxial order. The method is based on minimizing the difference between experimentally observed solid-state NMR spectra and the frequencies back calculated from a structural model. Starting with a structural model and a set of DC and CSA restraints grouped only by amino acid type, as would be obtained by selective isotopic labeling, AssignFit generates all of the possible assignment permutations and calculates the corresponding atomic coordinates oriented in the alignment frame, together with the associated set of NMR frequencies, which are then compared with the experimental data for best fit. Incorporation of AssignFit in a simulated annealing refinement cycle provides an approach for simultaneous assignment and structure refinement (SASR) of proteins from solid-state NMR orientation restraints. The methods are demonstrated with data from two integral membrane proteins, one α-helical and one β-barrel, embedded in phospholipid bilayer membranes. PMID:22036904
NASA Astrophysics Data System (ADS)
Niu, Chun-Yang; Qi, Hong; Huang, Xing; Ruan, Li-Ming; Tan, He-Ping
2016-11-01
A rapid computational method called the generalized source multi-flux method (GSMFM) was developed to simulate outgoing radiative intensities in arbitrary directions at the boundary surfaces of absorbing, emitting, and scattering media, which served as input for the inverse analysis. A hybrid least-square QR decomposition-stochastic particle swarm optimization (LSQR-SPSO) algorithm based on the forward GSMFM solution was developed to simultaneously reconstruct the multi-dimensional temperature distribution and the absorption and scattering coefficients of cylindrical participating media. The retrieval results for axisymmetric and non-axisymmetric temperature distributions indicated that the temperature distribution and the scattering and absorption coefficients could be retrieved accurately using the LSQR-SPSO algorithm even with noisy data. Moreover, the influences of the extinction coefficient and scattering albedo on the accuracy of the estimation were investigated, and the results suggested that the reconstruction accuracy decreases as the extinction coefficient and the scattering albedo increase. Finally, a non-contact measurement platform for flame temperature fields based on light field imaging was set up to validate the reconstruction model experimentally.
Segev, Danny; Levi, Retsef; Dunn, Peter F; Sandberg, Warren S
2012-06-01
Transportation of patients is a key hospital operational activity. During a large construction project, our patient admission and prep area will relocate from immediately adjacent to the operating room suite to another floor of a different building. Transportation will require extra distance and elevator trips to deliver patients and recycle transporters (specifically: personnel who transport patients). Management intuition suggested that starting all 52 first cases simultaneously would require many of the 18 available elevators. To test this, we developed a data-driven simulation tool to allow decision makers to simultaneously address planning and evaluation questions about patient transportation. We coded a stochastic simulation tool for a generalized model treating all factors contributing to the process as JAVA objects. The model includes elevator steps, explicitly accounting for transporter speed and distance to be covered. We used the model for sensitivity analyses of the number of dedicated elevators, dedicated transporters, transporter speed and the planned process start time on lateness of OR starts and the number of cases with serious delays (i.e., more than 15 min). Allocating two of the 18 elevators and 7 transporters reduced lateness and the number of cases with serious delays. Additional elevators and/or transporters yielded little additional benefit. If the admission process produced ready-for-transport patients 20 min earlier, almost all delays would be eliminated. Modeling results contradicted clinical managers' intuition that starting all first cases on time requires many dedicated elevators. This is explained by the principle of decreasing marginal returns for increasing capacity when there are other limiting constraints in the system.
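A toy version of such a resource-constrained transport simulation is sketched below, using priority queues for transporter and elevator availability; all times, capacities, and the delay distribution are illustrative assumptions, not the calibrated JAVA model described above.

```python
import heapq
import random

def simulate_first_case_starts(n_cases=52, n_transporters=7,
                               n_elevators=2, trip_time=8.0,
                               recycle_time=6.0, seed=0):
    """Toy discrete-resource model of first-case patient transport.

    Each case claims the earliest-free transporter and elevator slot;
    times are in minutes and purely illustrative. Returns arrival
    times and the count of seriously delayed cases (> 15 min).
    """
    random.seed(seed)
    transporters = [0.0] * n_transporters   # next-free times
    elevators = [0.0] * n_elevators
    heapq.heapify(transporters)
    heapq.heapify(elevators)
    arrival_times = []
    for _ in range(n_cases):
        t_free = heapq.heappop(transporters)
        e_free = heapq.heappop(elevators)
        start = max(t_free, e_free)
        travel = trip_time + random.expovariate(1.0)  # stochastic delay
        arrival = start + travel
        arrival_times.append(arrival)
        heapq.heappush(elevators, start + 2.0)        # brief elevator use
        heapq.heappush(transporters, arrival + recycle_time)
    late = sum(a > 15.0 for a in arrival_times)
    return arrival_times, late
```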
Characterizing a four-qubit planar lattice for arbitrary error detection
NASA Astrophysics Data System (ADS)
Chow, Jerry M.; Srinivasan, Srikanth J.; Magesan, Easwar; Córcoles, A. D.; Abraham, David W.; Gambetta, Jay M.; Steffen, Matthias
2015-05-01
Quantum error correction will be a necessary component towards realizing scalable quantum computers with physical qubits. Theoretically, it is possible to perform arbitrarily long computations if the error rate is below a threshold value. The two-dimensional surface code permits relatively high fault-tolerant thresholds at the ~1% level, and only requires a latticed network of qubits with nearest-neighbor interactions. Superconducting qubits have continued to steadily improve in coherence, gate, and readout fidelities, to become a leading candidate for implementation into larger quantum networks. Here we describe characterization experiments and calibration of a system of four superconducting qubits arranged in a planar lattice, amenable to the surface code. Insights into the particular qubit design and comparison between simulated parameters and experimentally determined parameters are given. Single- and two-qubit gate tune-up procedures are described and results for simultaneously benchmarking pairs of two-qubit gates are given. All controls are eventually used for an arbitrary error detection protocol described in separate work [Corcoles et al., Nature Communications, 6, 2015].
Numerical Modeling of Saturated Boiling in a Heated Tube
NASA Technical Reports Server (NTRS)
Majumdar, Alok; LeClair, Andre; Hartwig, Jason
2017-01-01
This paper describes a mathematical formulation and numerical solution of boiling in a heated tube. The mathematical formulation involves a discretization of the tube into a flow network consisting of fluid nodes and branches and a thermal network consisting of solid nodes and conductors. In the fluid network, the mass, momentum and energy conservation equations are solved and in the thermal network, the energy conservation equation of solids is solved. A pressure-based, finite-volume formulation has been used to solve the equations in the fluid network. The system of equations is solved by a hybrid numerical scheme which solves the mass and momentum conservation equations by a simultaneous Newton-Raphson method and the energy conservation equation by a successive substitution method. The fluid network and thermal network are coupled through heat transfer between the solid and fluid nodes which is computed by Chen's correlation of saturated boiling heat transfer. The computer model is developed using the Generalized Fluid System Simulation Program and the numerical predictions are compared with test data.
Program Predicts Time Courses of Human/Computer Interactions
NASA Technical Reports Server (NTRS)
Vera, Alonso; Howes, Andrew
2005-01-01
CPM X is a computer program that predicts sequences of, and amounts of time taken by, routine actions performed by a skilled person performing a task. Unlike programs that simulate the interaction of the person with the task environment, CPM X predicts the time course of events as consequences of encoded constraints on human behavior. The constraints determine which cognitive and environmental processes can occur simultaneously and which have sequential dependencies. The input to CPM X comprises (1) a description of a task and strategy in a hierarchical description language and (2) a description of architectural constraints in the form of rules governing interactions of fundamental cognitive, perceptual, and motor operations. The output of CPM X is a Program Evaluation Review Technique (PERT) chart that presents a schedule of predicted cognitive, motor, and perceptual operators interacting with a task environment. The CPM X program allows direct, a priori prediction of skilled user performance on complex human-machine systems, providing a way to assess critical interfaces before they are deployed in mission contexts.
Time Dependent Tomography of the Solar Corona in Three Spatial Dimensions
NASA Astrophysics Data System (ADS)
Butala, M. D.; Frazin, R. A.; Kamalabadi, F.
2006-12-01
The combination of the soon to be launched STEREO mission with SOHO will provide scientists with three simultaneous space-borne views of the Sun. The increase in available measurements will reduce the data acquisition time necessary to obtain 3D coronal electron density (N_e) estimates from coronagraph images using a technique called solar rotational tomography (SRT). However, the data acquisition period will still be long enough for the corona to dynamically evolve, requiring time dependent solar tomography. The Kalman filter (KF) would seem to be an ideal computational method for time dependent SRT. Unfortunately, the KF scales poorly with problem size and is, as a result, inapplicable. A Monte Carlo approximation to the KF called the localized ensemble Kalman filter was developed for massive applications and has the promise of making the time dependent estimation of the 3D coronal N_e possible. We present simulations showing that this method will make time dependent tomography in three spatial dimensions computationally feasible.
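The core of an ensemble Kalman filter analysis step can be sketched compactly. This is a generic stochastic EnKF update; the cited work uses a localized ensemble Kalman filter, whose localization step is omitted here for brevity.

```python
import numpy as np

def enkf_update(ensemble, H, y, obs_cov, rng):
    """One stochastic EnKF analysis step.

    ensemble: (n_state, n_members) forecast ensemble
    H:        (n_obs, n_state) linear observation operator
    y:        (n_obs,) observation vector
    obs_cov:  (n_obs, n_obs) observation error covariance
    """
    n_members = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)             # obs-space anomalies
    P_yy = HXp @ HXp.T / (n_members - 1) + obs_cov
    P_xy = X @ HXp.T / (n_members - 1)
    K = P_xy @ np.linalg.inv(P_yy)                        # Kalman gain
    # Perturb observations per member (stochastic EnKF variant).
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), obs_cov, size=n_members).T
    return ensemble + K @ (Y - HX)
```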
NASA Astrophysics Data System (ADS)
Drescher, Anushka C.; Yost, Michael G.; Park, Doo Y.; Levine, Steven P.; Gadgil, Ashok J.; Fischer, Marc L.; Nazaroff, William W.
1995-05-01
Optical remote sensing and iterative computed tomography (CT) can be combined to measure the spatial distribution of gaseous pollutant concentrations in a plane. We have conducted chamber experiments to test this combination of techniques using an Open Path Fourier Transform Infrared Spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). ART was found to converge to solutions that showed excellent agreement with the ray integral concentrations measured by the FTIR but were inconsistent with simultaneously gathered point sample concentration measurements. A new CT method was developed based on (a) the superposition of bivariate Gaussians to model the concentration distribution and (b) a simulated annealing minimization routine to find the parameters of the Gaussians that resulted in the best fit to the ray integral concentration data. This new method, named smooth basis function minimization (SBFM) generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present one set of illustrative experimental data to compare the performance of ART and SBFM.
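A minimal ART (Kaczmarz-style) reconstruction loop is sketched below; the relaxation factor and nonnegativity projection are common choices and are assumptions here, not necessarily the exact variant used in the chamber experiments.

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=50, relax=0.1):
    """Algebraic reconstruction technique via Kaczmarz sweeps.

    A: (n_rays, n_pixels) matrix of ray path lengths through pixels
    b: (n_rays,) measured ray-integral concentrations
    Nonnegativity of concentrations is enforced after each update.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            denom = ai @ ai
            if denom == 0:
                continue
            # Project the estimate toward the hyperplane of ray i.
            x += relax * (b[i] - ai @ x) / denom * ai
            np.maximum(x, 0, out=x)
    return x
```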
Two-dimensional computer simulation of EMVJ and grating solar cells under AMO illumination
NASA Technical Reports Server (NTRS)
Gray, J. L.; Schwartz, R. J.
1984-01-01
A computer program, SCAP2D (Solar Cell Analysis Program in 2-Dimensions), is used to evaluate the Etched Multiple Vertical Junction (EMVJ) and grating solar cells. The aim is to demonstrate how SCAP2D can be used to evaluate cell designs. The cell designs studied are by no means optimal designs. The SCAP2D program solves the three coupled, nonlinear partial differential equations, Poisson's Equation and the hole and electron continuity equations, simultaneously in two-dimensions using finite differences to discretize the equations and Newton's Method to linearize them. The variables solved for are the electrostatic potential and the hole and electron concentrations. Each linear system of equations is solved directly by Gaussian Elimination. Convergence of the Newton Iteration is assumed when the largest correction to the electrostatic potential or hole or electron quasi-potential is less than some predetermined error. A typical problem involves 2000 nodes with a Jacobi matrix of order 6000 and a bandwidth of 243.
Analysis and design of a six-degree-of-freedom Stewart platform-based robotic wrist
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Antrazi, Sami; Zhou, Zhen-Lei
1991-01-01
The kinematic analysis and implementation of a six degree of freedom robotic wrist which is mounted to a general open kinematic-chain manipulator to serve as a testbed for studying precision robotic assembly in space is discussed. The wrist design is based on the Stewart Platform mechanism and consists mainly of two platforms and six linear actuators driven by DC motors. Position feedback is achieved by linear displacement transducers mounted along the actuators, and force feedback is obtained by a 6 degree of freedom force sensor mounted between the gripper and the payload platform. The robot wrist inverse kinematics, which computes the required actuator lengths corresponding to Cartesian variables, has a closed-form solution. The forward kinematics is solved iteratively using the Newton-Raphson method, which simultaneously provides a modified Jacobian Matrix that relates length velocities to Cartesian translational velocities and time rates of change of roll-pitch-yaw angles. Results of computer simulation conducted to evaluate the efficiency of the forward kinematics and modified Jacobian Matrix are discussed.
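The closed-form inverse kinematics reduces to computing six leg vectors and their norms, as in the sketch below (array shapes and names are assumptions). Forward kinematics then inverts this mapping iteratively, e.g., with Newton-Raphson on the residual between computed and measured leg lengths.

```python
import numpy as np

def leg_lengths(base_pts, plat_pts, R, t):
    """Closed-form inverse kinematics of a Stewart platform.

    base_pts: (6, 3) actuator attachment points in the base frame
    plat_pts: (6, 3) attachment points in the payload-platform frame
    R:        (3, 3) rotation matrix (e.g., built from roll-pitch-yaw)
    t:        (3,) translation of the platform origin in the base frame
    Returns the six actuator lengths.
    """
    legs = (plat_pts @ R.T + t) - base_pts  # leg vectors in base frame
    return np.linalg.norm(legs, axis=1)
```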
A breakthrough for experiencing and understanding simulated physics
NASA Technical Reports Server (NTRS)
Watson, Val
1988-01-01
The use of computer simulation in physics research is discussed, focusing on improvements to graphic workstations. Simulation capabilities and applications of enhanced visualization tools are outlined. The elements of an ideal computer simulation are presented and the potential for improving various simulation elements is examined. The interface between the human and the computer and simulation models are considered. Recommendations are made for changes in computer simulation practices and applications of simulation technology in education.
NASA Astrophysics Data System (ADS)
Zoller, Christian; Hohmann, Ansgar; Ertl, Thomas; Kienle, Alwin
2017-07-01
The Monte Carlo method is often referred to as the gold standard for calculating light propagation in turbid media [1]. It becomes especially important for complex-shaped geometries, where no analytical solutions are available [1, 2]. In this work, a Monte Carlo software package is presented to simulate light propagation in complex-shaped geometries. To reduce the simulation time, the code is based on OpenCL, so that graphics cards as well as other computing devices can be used. Within the software, an illumination concept is presented that makes it easy to realize all kinds of light sources, such as spatial frequency domain (SFD) illumination, optical fibers, or Gaussian beam profiles. Moreover, different objects that are not connected to each other can be considered simultaneously, without any additional preprocessing. This Monte Carlo software can be used for many applications. In this work, the transmission spectrum of a tooth and the color reconstruction of a virtual object are shown, using results from the Monte Carlo software.
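A bare-bones CPU sketch of the photon random walk underlying such a Monte Carlo code is shown below, assuming a semi-infinite homogeneous medium and isotropic scattering; the actual software additionally handles anisotropic scattering, complex mesh geometries, and OpenCL parallelism.

```python
import numpy as np

def mc_absorbed_depths(n_photons, mu_a, mu_s, seed=0):
    """Minimal Monte Carlo photon transport in a semi-infinite medium.

    mu_a, mu_s: absorption and scattering coefficients [1/mm]
    Returns the depths at which photons are absorbed; photons that
    re-cross the surface (z < 0) are counted as escaped.
    """
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    depths = []
    for _ in range(n_photons):
        pos = np.zeros(3)
        direction = np.array([0.0, 0.0, 1.0])   # launched into the medium
        while True:
            step = -np.log(rng.random()) / mu_t  # free path length
            pos = pos + step * direction
            if pos[2] < 0:                       # escaped through surface
                break
            if rng.random() < mu_a / mu_t:       # absorbed at this site
                depths.append(pos[2])
                break
            v = rng.normal(size=3)               # isotropic rescattering
            direction = v / np.linalg.norm(v)
    return depths
```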
Cosmic ray muon computed tomography of spent nuclear fuel in dry storage casks
Poulson, Daniel Cris; Durham, J. Matthew; Guardincerri, Elena; ...
2016-10-22
Radiography with cosmic ray muon scattering has proven to be a successful method of imaging nuclear material through heavy shielding. Of particular interest is monitoring dry storage casks for diversion of plutonium contained in spent reactor fuel. Using muon tracking detectors that surround a cylindrical cask, cosmic ray muon scattering can be simultaneously measured from all azimuthal angles, giving complete tomographic coverage of the cask interior. This article describes the first application of filtered back projection algorithms, typically used in medical imaging, to cosmic ray muon scattering imaging. The specific application to monitoring spent nuclear fuel in dry storage casks is investigated via GEANT4 simulations. With a cylindrical muon tracking detector surrounding a typical spent fuel cask, simulations indicate that missing fuel bundles can be detected with a statistical significance of ~18σ in less than two days exposure and a sensitivity at 1σ to a 5% missing portion of a fuel bundle. Finally, we discuss potential detector technologies and geometries.
NASA Astrophysics Data System (ADS)
Westerhausen, Markus; Martin, Tanja; Kappel, Marcel; Hofmann, Boris
2018-02-01
We present a measurement setup consisting of two fluid-filled pressure chambers to mimic mechanical stress, similar to that of small body movements, on biomedical flexible micro-electrode arrays for the analysis of various degradation mechanisms. Our main goal was the simulation of micro-motions under fluid conditions while maintaining electrical access to the device. These micro-motions are similar to those occurring in the human body due to the intracranial pressure, with magnitudes of 7-25 mmHg, which translates to a fluid pressure of 9-33 mbar. Furthermore, severe mechanical stress can be administered to the samples in the aforementioned environment. To this end, a flexible, polyimide-based sample with various metal test structures was fabricated and analyzed in the presented measurement setup. The elongation of the sample's surface as a function of the applied hydrostatic pressure is compared with computer simulations.
Imaging sensor constellation for tomographic chemical cloud mapping.
Cosofret, Bogdan R; Konno, Daisei; Faghfouri, Aram; Kindle, Harry S; Gittins, Christopher M; Finson, Michael L; Janov, Tracy E; Levreault, Mark J; Miyashiro, Rex K; Marinelli, William J
2009-04-01
A sensor constellation capable of determining the location and detailed concentration distribution of chemical warfare agent simulant clouds has been developed and demonstrated on government test ranges. The constellation is based on the use of standoff passive multispectral infrared imaging sensors to make column density measurements through the chemical cloud from two or more locations around its periphery. A computed tomography inversion method is employed to produce a 3D concentration profile of the cloud from the 2D line density measurements. We discuss the theoretical basis of the approach and present results of recent field experiments where controlled releases of chemical warfare agent simulants were simultaneously viewed by three chemical imaging sensors. Systematic investigations of the algorithm using synthetic data indicate that for complex functions, 3D reconstruction errors are less than 20% even in the case of a limited three-sensor measurement network. Field data results demonstrate the capability of the constellation to determine 3D concentration profiles that account for ~86% of the total known mass of material released.
Numerical model of self-propulsion in a fluid
Farnell, D.J.J; David, T; Barton, D.C
2005-01-01
We provide initial evidence that a structure formed from an articulated series of linked elements, where each element has a given stiffness, damping and driving term with respect to its neighbours, may ‘swim’ through a fluid under certain conditions. We derive a Lagrangian for this system and, in particular, we note that we allow the leading edge to move along the x-axis. We assume that no lateral displacement of the leading edge of the structure is possible, although head ‘yaw’ is allowed. The fluid is simulated using a computational fluid dynamics technique, and we are able to determine and solve Euler–Lagrange equations for the structure. These two calculations are solved simultaneously by using a weakly coupled solver. We illustrate our method by showing that we are able to induce both forward and backward swimming. A discussion of the relevance of these simulations to a slowly swimming body, such as a mechanical device or a fish, is given. PMID:16849167
C-arm technique using distance driven method for nephrolithiasis and kidney stones detection
NASA Astrophysics Data System (ADS)
Malalla, Nuhad; Sun, Pengfei; Chen, Ying; Lipkin, Michael E.; Preminger, Glenn M.; Qin, Jun
2016-04-01
The distance-driven method is a state-of-the-art approach used in reconstruction for x-ray techniques. C-arm tomography is an x-ray imaging technique that provides three-dimensional information about the object by moving a C-shaped gantry around the patient. With a limited view angle, the C-arm system was investigated to generate volumetric data of the object with low radiation dosage and short examination time. This paper is a new simulation study of two reconstruction methods based on the distance-driven approach: the simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM). Distance-driven projection is efficient, with low computational cost and fewer artifacts compared with other methods such as ray-driven and pixel-driven approaches. Projection images of spherical objects were simulated with a virtual C-arm system with a total view angle of 40 degrees. Results show the ability of the limited-angle C-arm technique to generate three-dimensional images with distance-driven reconstruction.
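For reference, the first of the two algorithms has a compact textbook form. The dense-matrix SART sketch below assumes a system matrix A (rays by pixels) and measured projections b; it is a generic illustration, not the authors' implementation.

```python
import numpy as np

def sart(A, b, n_iter=50, relax=0.25):
    """Basic SART iteration: residuals are normalized per ray by the
    row sums (path lengths) and back-projected per pixel normalized
    by the column sums (ray coverage)."""
    row_sum = A.sum(axis=1)
    row_sum[row_sum == 0] = 1.0
    col_sum = A.sum(axis=0)
    col_sum[col_sum == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sum   # ray-normalized data mismatch
        x += relax * (A.T @ residual) / col_sum
    return x
```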
Inertial particle manipulation in microscale oscillatory flows
NASA Astrophysics Data System (ADS)
Agarwal, Siddhansh; Rallabandi, Bhargav; Raju, David; Hilgenfeldt, Sascha
2017-11-01
Recent work has shown that inertial effects in oscillating flows can be exploited for simultaneous transport and differential displacement of microparticles, enabling size sorting of such particles on extraordinarily short time scales. Generalizing previous theory efforts, we here derive a two-dimensional time-averaged version of the Maxey-Riley equation that includes the effect of an oscillating interface to model particle dynamics in such flows. Separating the steady transport time scale from the oscillatory time scale results in a simple and computationally efficient reduced model that preserves all slow-time features of the full unsteady Maxey-Riley simulations, including inertial particle displacement. Comparison is made not only to full simulations, but also to experiments using oscillating bubbles as the driving interfaces. In this case, the theory predicts either an attraction to or a repulsion from the bubble interface due to inertial effects, so that versatile particle manipulation is possible using differences in particle size, particle/fluid density contrast and streaming strength. We also demonstrate that these predictions are in agreement with experiments.
NASA Astrophysics Data System (ADS)
Thi, Thanh Binh Nguyen; Yokoyama, Atsushi; Hamanaka, Senji; Yamashita, Katsuhisa; Nonomura, Chisato
2016-03-01
A theoretical fiber-interaction model for calculating the fiber orientation in injection-molded short fiber/thermoplastic composite parts is proposed. The proposed model includes a fiber dynamics simulation in order to obtain an equation for the global interaction coefficient and an accurate estimate of the fiber interactions at all orientation states. The steps to derive the equation for this coefficient in a short fiber suspension, as a function of the fiber aspect ratio, volume fraction, and general shear rate, are delineated. Simultaneously, the high-resolution 3D X-ray computed tomography system XVA-160α was used to observe the fiber distribution of short-glass-fiber-reinforced polyamide specimens using different cavity geometries. The fiber orientation tensor components are then calculated. Experimental orientation measurements of short-glass-fiber-reinforced polyamide are used to check the ability of the present theory to predict orientation. The experiments and predictions show quantitative agreement and confirm the basic understanding of fiber orientation in injection-molded composites.
Optimization of startup and shutdown operation of simulated moving bed chromatographic processes.
Li, Suzhou; Kawajiri, Yoshiaki; Raisch, Jörg; Seidel-Morgenstern, Andreas
2011-06-24
This paper presents new multistage optimal startup and shutdown strategies for simulated moving bed (SMB) chromatographic processes. The proposed concept allows transient operating conditions to be adjusted stage-wise, and provides the capability to improve transient performance and fulfill product quality specifications simultaneously. A specially tailored decomposition algorithm is developed to ensure computational tractability of the resulting dynamic optimization problems. By examining the transient operation of a literature separation example characterized by a nonlinear competitive isotherm, the feasibility of the solution approach is demonstrated, and the performance of the conventional and multistage optimal transient regimes is evaluated systematically. The quantitative results clearly show that the optimal operating policies not only significantly reduce both the duration of the transient phase and desorbent consumption, but also enable on-spec production even during startup and shutdown periods. With the aid of the developed transient procedures, short-term separation campaigns with small batch sizes can be performed more flexibly and efficiently by SMB chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Barreiro, Andrea K.; Ly, Cheng
2017-08-01
Rapid experimental advances now enable simultaneous electrophysiological recording of neural activity at single-cell resolution across large regions of the nervous system. Models of this neural network activity will necessarily increase in size and complexity, thus increasing the computational cost of simulating them and the challenge of analyzing them. Here we present a method to approximate the activity and firing statistics of a general firing rate network model (of the Wilson-Cowan type) subject to noisy correlated background inputs. The method requires solving a system of transcendental equations and is fast compared to Monte Carlo simulations of coupled stochastic differential equations. We implement the method with several examples of coupled neural networks and show that the results are quantitatively accurate even with moderate coupling strengths and an appreciable amount of heterogeneity in many parameters. This work should be useful for investigating how various neural attributes qualitatively affect the spiking statistics of coupled neural networks.
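The flavor of the fixed-point computation can be seen in a toy two-population rate model: the steady state solves r = f(Wr + I). The coupling and drive values below are illustrative assumptions, and the paper's method goes further by solving for firing statistics under correlated noisy inputs.

```python
import numpy as np
from scipy.optimize import fsolve

def f(x):
    """Sigmoidal transfer function of a Wilson-Cowan-type rate model."""
    return 1.0 / (1.0 + np.exp(-x))

W = np.array([[1.5, -1.0],
              [1.2, -0.5]])      # illustrative E/I coupling
I = np.array([0.2, 0.1])         # illustrative background drive

# Solve the transcendental fixed-point condition r = f(W r + I).
rates = fsolve(lambda r: r - f(W @ r + I), np.full(2, 0.5))
print(rates)
```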
Resolving Dynamic Properties of Polymers through Coarse-Grained Computational Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salerno, K. Michael; Agrawal, Anupriya; Perahia, Dvora
2016-02-05
Coupled length and time scales determine the dynamic behavior of polymers and underlie their unique viscoelastic properties. To resolve the long-time dynamics it is imperative to determine which time and length scales must be correctly modeled. In this paper, we probe the degree of coarse graining required to simultaneously retain significant atomistic details and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using linear polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics. Iterative Boltzmann inversion is used to derive coarse-grained potentials with 2–6 methylene groups per coarse-grained bead from a fully atomistic melt simulation. We show that atomistic detail is critical to capturing large-scale dynamics. Finally, using these models we simulate polyethylene melts for times over 500 μs to study the viscoelastic properties of well-entangled polymer melts.
He, Bo; Zhang, Shujing; Yan, Tianhong; Zhang, Tao; Liang, Yan; Zhang, Hongjin
2011-01-01
Mobile autonomous systems are very important for marine scientific investigation and military applications. Many algorithms have been studied to deal with the computational efficiency problem of large-scale simultaneous localization and mapping (SLAM) and its related accuracy and consistency. Among these methods, submap-based SLAM is one of the more effective. By combining the strengths of two popular mapping algorithms, the Rao-Blackwellised particle filter (RBPF) and the extended information filter (EIF), this paper presents combined SLAM, an efficient submap-based solution to the SLAM problem in a large-scale environment. RBPF-SLAM is used to produce local maps, which are periodically fused into an EIF-SLAM algorithm. RBPF-SLAM avoids linearization of the robot model during operation and provides robust data association, while EIF-SLAM improves the overall computational speed and avoids the tendency of RBPF-SLAM to be over-confident. In order to further improve the computational speed in a real-time environment, a binary-tree-based decision-making strategy is introduced. Simulation experiments show that the proposed combined SLAM algorithm significantly outperforms existing algorithms in terms of accuracy and consistency, as well as computational efficiency. Finally, the combined SLAM algorithm is experimentally validated in a real environment using the Victoria Park dataset.
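Why the periodic fusion step is cheap is easiest to see in information form: for independent Gaussian estimates of the same state, information matrices and information vectors simply add. The sketch below shows that generic fusion step only, not the paper's full RBPF-to-EIF pipeline.

```python
import numpy as np

def eif_fuse(submaps):
    """Fuse independent (mean, covariance) estimates of the same state
    in information form: Lambda = sum(inv(P_i)), eta = sum(inv(P_i) m_i)."""
    Lam, eta = 0, 0
    for mean, cov in submaps:
        info = np.linalg.inv(cov)    # information matrix of this submap
        Lam = Lam + info
        eta = eta + info @ mean      # information vector contribution
    cov = np.linalg.inv(Lam)
    return cov @ eta, cov            # fused mean and covariance
```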
Longitudinal data analysis with non-ignorable missing data.
Tseng, Chi-hong; Elashoff, Robert; Li, Ning; Li, Gang
2016-02-01
A common problem in longitudinal data analysis is missing data. Two types of missing patterns are generally considered in the statistical literature: monotone and non-monotone missing data. Non-monotone missing data occur when study participants intermittently miss scheduled visits, while monotone missing data can arise from discontinued participation, loss to follow-up, and mortality. Although many novel statistical approaches have been developed to handle missing data in recent years, few methods are available to provide inferences that handle both types of missing data simultaneously. In this article, a latent random effects model is proposed to analyze longitudinal outcomes with both monotone and non-monotone missingness in the context of missing not at random. Another significant contribution of this article is to propose a new computational algorithm for latent random effects models. To reduce the computational burden of the high-dimensional integration problem in latent random effects models, we develop a new computational algorithm that uses a new adaptive quadrature approach in conjunction with the Taylor series approximation for the likelihood function to simplify the E-step computation in the expectation-maximization algorithm. A simulation study is performed, and data from the scleroderma lung study are used to demonstrate the effectiveness of this method. © The Author(s) 2012.
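The quadrature building block can be illustrated in one dimension: a Gauss-Hermite rule approximates an expectation over a normal random effect. The rule below is plain rather than adaptive, and sigma and g are illustrative stand-ins for the model's random-effect scale and integrand.

```python
import numpy as np

def gh_expectation(g, sigma, n_nodes=20):
    """Approximate E[g(b)] for b ~ N(0, sigma^2) with Gauss-Hermite
    quadrature: substitute b = sqrt(2)*sigma*x into the Gaussian integral."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)  # nodes/weights for e^{-x^2}
    return np.sum(w * g(np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi)

# Check against the closed form E[exp(b)] = exp(sigma^2 / 2).
print(gh_expectation(np.exp, sigma=0.7), np.exp(0.7**2 / 2))
```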
NASA Astrophysics Data System (ADS)
Cao, Chao
2009-03-01
Nano-scale physical phenomena and processes, especially those in electronics, have drawn great attention in the past decade. Experiments have shown that the electronic and transport properties of functionalized carbon nanotubes are sensitive to adsorption of gas molecules such as H2, NO2, and NH3. Similar measurements have also been performed to study adsorption of proteins on other semiconductor nano-wires. These experiments suggest that nano-scale systems can be useful for making future chemical and biological sensors. Aiming to understand the physical mechanisms underlying and governing property changes at the nano-scale, we start off by investigating, via a first-principles method, the electronic structure of the Pd-CNT before and after hydrogen adsorption, and continue with coherent electronic transport using non-equilibrium Green’s function techniques combined with density functional theory. Once our results are fully analyzed they can be used to interpret and understand experimental data, with a few difficult issues to be addressed. Finally, we discuss a newly developed multi-scale computing architecture, OPAL, that coordinates simultaneous execution of multiple codes. Inspired by the capabilities of this computing framework, we present a scenario of future modeling and simulation of multi-scale, multi-physical processes.
Gamut relativity: a new computational approach to brightness and lightness perception.
Vladusich, Tony
2013-01-09
This article deconstructs the conventional theory that "brightness" and "lightness" constitute perceptual dimensions corresponding to the physical dimensions of luminance and reflectance, and builds in its place the theory that brightness and lightness correspond to computationally defined "modes," rather than dimensions, of perception. According to the theory, called gamut relativity, "blackness" and "whiteness" constitute the perceptual dimensions (forming a two-dimensional "blackness-whiteness" space) underlying achromatic color perception (black, white, and gray shades). These perceptual dimensions are postulated to be related to the neural activity levels in the ON and OFF channels of vision. The theory unifies and generalizes a number of extant concepts in the brightness and lightness literature, such as simultaneous contrast, anchoring, and scission, and quantitatively simulates several challenging perceptual phenomena, including the staircase Gelb effect and the effects of task instructions on achromatic color-matching behavior, all with a single free parameter. The theory also provides a new conception of achromatic color constancy in terms of the relative distances between points in blackness-whiteness space. The theory suggests a host of striking conclusions, the most important of which is that the perceptual dimensions of vision should be generically specified according to the computational properties of the brain, rather than in terms of "reified" physical dimensions. This new approach replaces the computational goal of estimating absolute physical quantities ("inverse optics") with the goal of computing object properties relatively.
On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers
NASA Astrophysics Data System (ADS)
Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.
2017-10-01
This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from desktop clusters through institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute’s computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which treats high-frequency vibrational motion analytically and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. Together, these approaches reduce both the cost of each step and the number of steps required, enabling fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers increases the speed of MD simulations up to 16-fold over a single PC processor.
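The splitting idea, treating the stiff harmonic motion analytically and applying the slow force numerically as impulses, reduces to a few lines for one degree of freedom. This is a generic impulse-split step under that reading, not the SISM itself; slow_force is a hypothetical callable for the low-frequency force.

```python
import numpy as np

def split_step(q, p, dt, omega, m, slow_force):
    """One impulse-split step: half-kick with the slow force, exact
    phase-space rotation for the harmonic part, then a second half-kick."""
    p += 0.5 * dt * slow_force(q)            # numeric half-kick (slow force)
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    q, p = (c * q + s * p / (m * omega),     # exact harmonic propagation
            c * p - s * m * omega * q)
    p += 0.5 * dt * slow_force(q)            # second half-kick
    return q, p
```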
System Simulation by Recursive Feedback: Coupling a Set of Stand-Alone Subsystem Simulations
NASA Technical Reports Server (NTRS)
Nixon, D. D.
2001-01-01
Conventional construction of digital dynamic system simulations often involves collecting differential equations that model each subsystem, arranging them into a standard form, and obtaining their numerical solution as a single coupled, total-system simultaneous set. Simulation by numerical coupling of independent stand-alone subsimulations is a fundamentally different approach that is attractive because, among other things, the architecture naturally facilitates high fidelity, broad scope, and discipline independence. Recursive feedback is defined and discussed as a candidate approach to multidiscipline dynamic system simulation by numerical coupling of self-contained, single-discipline subsystem simulations. A satellite motion example containing three subsystems (orbit dynamics, attitude dynamics, and aerodynamics) has been defined and constructed using this approach. Conventional solution methods are used in the subsystem simulations. Distributed and centralized implementations of coupling have been considered. Numerical results are evaluated by direct comparison with a standard total-system, simultaneous-solution approach.
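One way to read recursive feedback is as a fixed-point iteration between stand-alone solvers that exchange outputs until they agree. The sketch below shows that generic pattern for two subsystems; sub_a and sub_b are hypothetical callables wrapping, say, the orbit-dynamics and aerodynamics subsimulations over one coupling interval.

```python
def couple(sub_a, sub_b, y_b_guess, n_passes=10, tol=1e-10):
    """Iterate two stand-alone subsystem simulations to a consistent
    coupled state: each takes the other's latest output as input,
    repeating the pass until the exchanged value stops changing."""
    y_b = y_b_guess
    for _ in range(n_passes):
        y_a = sub_a(y_b)          # e.g. dynamics driven by aero forces
        y_b_new = sub_b(y_a)      # e.g. aerodynamics driven by the new state
        if abs(y_b_new - y_b) < tol:
            return y_a, y_b_new   # converged coupled solution
        y_b = y_b_new
    return y_a, y_b               # best available after n_passes
```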
Interaction of hydraulic and buckling mechanisms in blowout fractures.
Nagasao, Tomohisa; Miyamoto, Junpei; Jiang, Hua; Tamaki, Tamotsu; Kaneko, Tsuyoshi
2010-04-01
The etiology of blowout fractures is generally attributed to 2 mechanisms--increase in the pressure of the orbital contents (the hydraulic mechanism) and direct transmission of impacts on the orbital walls (the buckling mechanism). The present study aims to elucidate whether or not an interaction exists between these 2 mechanisms. We performed a simulation experiment using 10 Computer-Aided-Design skull models. We applied destructive energy to the orbits of the 10 models in 3 different ways. First, to simulate pure hydraulic mechanism, energy was applied solely on the internal walls of the orbit. Second, to simulate pure buckling mechanism, energy was applied solely on the inferior rim of the orbit. Third, to simulate the combined effect of the hydraulic and buckling mechanisms, energy was applied both on the internal wall of the orbit and inferior rim of the orbit. After applying the energy, we calculated the areas of the regions where fracture occurred in the models. Thereafter, we compared the areas among the 3 energy application patterns. When the hydraulic and buckling mechanisms work simultaneously, fracture occurs on wider areas of the orbital walls than when each of these mechanisms works separately. The hydraulic and buckling mechanisms interact, enhancing each other's effect. This information should be taken into consideration when we examine patients in whom blowout fracture is suspected.
NASA Astrophysics Data System (ADS)
Li, Tao; Xie, Wei
2017-04-01
The spiral tunnel is a new form of tunnel whose fire development pattern differs greatly from that of a traditional straight-line tunnel. This paper uses numerical simulation, based on computational fluid dynamics theory and fire-turbulence numerical simulation theory, to establish a full-scale spiral tunnel model, and applies CFX simulation software to study a full-scale spiral tunnel fire and its ventilation conditions. The results indicate that with increasing tunnel slope, the high-temperature area gradually extends downstream; high temperatures are mainly distributed near the fire source, symmetrically about the fire center point. With increasing tunnel slope, the highest temperature beneath the tunnel arch first rises, then falls, and then rises again, which strengthens the chimney effect, draws more fresh cold air into the tunnel, suppresses fire smoke backflow, and simultaneously accelerates smoke spread to the downstream area. The fire plume presents a slender vertical shape at a 1% or 3% tunnel slope, with the burning flame hitting the tunnel arch and then extending all around into the ceiling jet flow; when the tunnel slope increases to 5% or 7%, the fire plume cross section grows bigger and wider, with an unstable burning flame swaying in all directions and inclining as a whole toward the fire downstream.
Hub, Jochen S.; Salditt, Tim; Rheinstädter, Maikel C.; de Groot, Bert L.
2007-01-01
We present an extensive comparison of short-range order and short wavelength dynamics of a hydrated phospholipid bilayer derived by molecular dynamics simulations, elastic x-ray, and inelastic neutron scattering experiments. The quantities that are compared between simulation and experiment include static and dynamic structure factors, reciprocal space mappings, and electron density profiles. We show that the simultaneous use of molecular dynamics and diffraction data can help to extract real space properties like the area per lipid and the lipid chain ordering from experimental data. In addition, we assert that the interchain distance can be computed to high accuracy from the interchain correlation peak of the structure factor. Moreover, it is found that the position of the interchain correlation peak is not affected by the area per lipid, while its correlation length decreases linearly with the area per lipid. This finding allows us to relate a property of the structure factor quantitatively to the area per lipid. Finally, the short wavelength dynamics obtained from the simulations and from inelastic neutron scattering are analyzed and compared. The conventional interpretation in terms of the three-effective-eigenmode model is found to be only partly suitable to describe the complex fluid dynamics of lipid chains. PMID:17631531
A geophysical shock and air blast simulator at the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, K. B.; Brown, C. G.; May, M. J.
2014-09-15
The energy partitioning energy coupling experiments at the National Ignition Facility (NIF) have been designed to measure simultaneously the coupling of energy from a laser-driven target into both ground shock and air blast overpressure to nearby media. The source target for the experiment is positioned at a known height above the ground-surface simulant and is heated by four beams from the NIF. The resulting target energy density and specific energy are equal to those of a low-yield nuclear device. The ground-shock stress waves and atmospheric overpressure waveforms that result in our test system are hydrodynamically scaled analogs of full-scale seismic and air blast phenomena. This report summarizes the development of the platform, the simulations, and calculations that underpin the physics measurements that are being made, and finally the data that were measured. Agreement between the data and simulation of the order of a factor of two to three is seen for air blast quantities such as peak overpressure. Historical underground test data for seismic phenomena measured sensor displacements; we measure the stresses generated in our ground-surrogate medium. We find factors-of-a-few agreement between our measured peak stresses and predictions with modern geophysical computer codes.
Investigation of the transport shortfall in Alcator C-Mod L-mode plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howard, N. T.; White, A. E.; Greenwald, M.
2013-03-15
A so-called 'transport shortfall,' where ion and electron heat fluxes and turbulence are underpredicted by gyrokinetic codes, has been robustly identified in DIII-D L-mode plasmas for ρ > 0.55 [T. L. Rhodes et al., Nucl. Fusion 51(6), 063022 (2011); and C. Holland et al., Phys. Plasmas 16(5), 052301 (2009)]. To probe the existence of a transport shortfall across different tokamaks, a dedicated scan of auxiliary heated L-mode discharges in Alcator C-Mod is studied in detail with nonlinear gyrokinetic simulations for the first time. Two discharges, differing only in the amount of auxiliary heating, are investigated using both linear and nonlinear simulation with the GYRO code [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)]. Nonlinear gyrokinetic simulation of the low and high input power discharges reveals a discrepancy between simulation and experiment in only the electron heat flux channel of the low input power discharge. However, both discharges demonstrate excellent agreement in the ion heat flux channel, and the high input power discharge demonstrates simultaneous agreement with experiment in both the electron and ion heat flux channels. A summary of linear and nonlinear gyrokinetic results and a discussion of possible explanations for the agreement/disagreement in each heat flux channel is presented.
Liu, Zhenqiu; Hsiao, William; Cantarel, Brandi L; Drábek, Elliott Franco; Fraser-Liggett, Claire
2011-12-01
Direct sequencing of microbes in human ecosystems (the human microbiome) has complemented single genome cultivation and sequencing to understand and explore the impact of commensal microbes on human health. As sequencing technologies improve and costs decline, the sophistication of data has outgrown available computational methods. While several existing machine learning methods have been adapted for analyzing microbiome data recently, there is not yet an efficient and dedicated algorithm available for multiclass classification of human microbiota. By combining instance-based and model-based learning, we propose a novel sparse distance-based learning method for simultaneous class prediction and feature (variable or taxon; the terms are used interchangeably) selection from multiple treatment populations on the basis of 16S rRNA sequence count data. Our proposed method simultaneously minimizes the intraclass distance and maximizes the interclass distance with many fewer estimated parameters than other methods. It is very efficient for problems with small sample sizes and unbalanced classes, which are common in metagenomic studies. We implemented this method in a MATLAB toolbox called MetaDistance. We also propose several approaches for data normalization and variance stabilization transformation in MetaDistance. We validate this method on several real and simulated 16S rRNA datasets to show that it outperforms existing methods for classifying metagenomic data. This article is the first to address simultaneous multifeature selection and class prediction with metagenomic count data. The MATLAB toolbox is freely available online at http://metadistance.igs.umaryland.edu/. Contact: zliu@umm.edu. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Shah, S.; Gray, F.; Yang, J.; Crawshaw, J.; Boek, E.
2016-12-01
Advances in 3D pore-scale imaging and computational methods have allowed an exceptionally detailed quantitative and qualitative analysis of fluid flow in complex porous media. A fundamental problem in pore-scale imaging and modelling is how to represent and model the range of scales encountered in porous media, starting from the smallest pore spaces. In this study, a novel method is presented for determining the representative elementary volume (REV) of a rock for several parameters simultaneously. We calculate the two main macroscopic petrophysical parameters, porosity and single-phase permeability, using micro CT imaging and Lattice Boltzmann (LB) simulations for 14 different porous media, including sandpacks, sandstones and carbonates. The concept of the convex hull is then applied to calculate the REV for both parameters simultaneously using a plot of the area of the convex hull as a function of the sub-volume, capturing the different scales of heterogeneity from the pore-scale imaging. The results also show that the area of the convex hull (for well-chosen parameters such as the log of the permeability and the porosity) decays exponentially with sub-sample size, suggesting a computationally efficient way to determine the system size needed to calculate the parameters to high accuracy (small convex hull area). Finally, we propose using a characteristic length such as the pore size to choose an efficient absolute voxel size for the numerical rock.
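A minimal sketch of the hull-area diagnostic, assuming per-sub-volume estimates of porosity and log permeability are already in hand (e.g. from LB runs on sub-volumes of the image):

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_area(porosity, log_perm):
    """Area of the 2D convex hull of (porosity, log permeability) points
    from many sub-volumes; a small area indicates the sub-volume size is
    at or beyond the joint REV for both parameters. Note that for a 2D
    ConvexHull, .volume is the enclosed area (.area is the perimeter)."""
    pts = np.column_stack([porosity, log_perm])
    return ConvexHull(pts).volume
```

Evaluating hull_area over increasing sub-volume sizes and looking for the reported exponential decay then identifies an efficient system size.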
Long sequence correlation coprocessor
NASA Astrophysics Data System (ADS)
Gage, Douglas W.
1994-09-01
A long sequence correlation coprocessor (LSCC) accelerates the bitwise correlation of arbitrarily long digital sequences by calculating in parallel the correlation score for, for example, 16 adjacent bit alignments between two binary sequences. The LSCC integrated circuit is incorporated into a computer system with memory storage buffers and a separate general purpose computer processor which serves as its controller. Each of the LSCC's set of sequential counters simultaneously tallies a separate correlation coefficient. During each LSCC clock cycle, enable logic associated with each counter compares one bit of a first sequence with one bit of a second sequence to increment the counter if the bits are the same. A shift register assures that the same bit of the first sequence is simultaneously compared to different bits of the second sequence, so that the different counters simultaneously calculate the correlation coefficients for different alignments of the two sequences.
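In software, the LSCC's parallel tally can be mimicked with an XNOR followed by a popcount per alignment. The sketch below scores 16 adjacent alignments of a probe against a reference; unlike the hardware, it loops over alignments, and the overlap region shrinks slightly as the probe is shifted.

```python
def correlation_scores(ref: int, probe: int, n_bits: int, n_align: int = 16):
    """Count agreeing bit positions (XNOR then popcount) for n_align
    adjacent alignments, shifting the probe one bit per alignment."""
    mask = (1 << n_bits) - 1
    scores = []
    for shift in range(n_align):
        agree = ~(ref ^ (probe >> shift)) & mask  # 1 where bits match
        scores.append(bin(agree).count("1"))      # tally, like one counter
    return scores
```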
Signal processing applications of massively parallel charge domain computing devices
NASA Technical Reports Server (NTRS)
Fijany, Amir (Inventor); Barhen, Jacob (Inventor); Toomarian, Nikzad (Inventor)
1999-01-01
The present invention is embodied in a charge coupled device (CCD)/charge injection device (CID) architecture capable of performing a Fourier transform by simultaneous matrix vector multiplication (MVM) operations in respective plural CCD/CID arrays in parallel in O(1) steps. For example, in one embodiment, a first CCD/CID array stores charge packets representing a first matrix operator based upon permutations of a Hartley transform and computes the Fourier transform of an incoming vector. A second CCD/CID array stores charge packets representing a second matrix operator based upon different permutations of a Hartley transform and computes the Fourier transform of an incoming vector. The incoming vector is applied to the inputs of the two CCD/CID arrays simultaneously, and the real and imaginary parts of the Fourier transform are produced simultaneously in the time required to perform a single MVM operation in a CCD/CID array.
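As a numerical check on the idea, the sketch below builds the Hartley (cas) matrix and a row-permuted copy, on the assumption that these correspond to the patent's two "permutations of a Hartley transform"; two real matrix-vector products then deliver the real and imaginary parts of the Fourier transform simultaneously.

```python
import numpy as np

N = 8
n = np.arange(N)
# Hartley kernel cas(x) = cos(x) + sin(x), as an N x N matrix.
cas = (np.cos(2 * np.pi * np.outer(n, n) / N)
       + np.sin(2 * np.pi * np.outer(n, n) / N))
cas_perm = cas[(-n) % N]       # row-permuted copy: row k holds H[N-k]

C = 0.5 * (cas + cas_perm)     # operator yielding Re F = (H[k] + H[N-k]) / 2
S = 0.5 * (cas_perm - cas)     # operator yielding Im F = (H[N-k] - H[k]) / 2

x = np.random.default_rng(0).standard_normal(N)
re, im = C @ x, S @ x          # the two simultaneous MVM operations
assert np.allclose(re + 1j * im, np.fft.fft(x))
```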
Numerical simulation of pseudoelastic shape memory alloys using the large time increment method
NASA Astrophysics Data System (ADS)
Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad
2017-04-01
The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. The 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation using the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.
Dynamic Displacement Disorder of Cubic BaTiO3
NASA Astrophysics Data System (ADS)
Paściak, M.; Welberry, T. R.; Kulda, J.; Leoni, S.; Hlinka, J.
2018-04-01
The three-dimensional distribution of the x-ray diffuse scattering intensity of BaTiO3 has been recorded in a synchrotron experiment and simultaneously computed using molecular dynamics simulations of a shell model. Together, these have allowed the details of the disorder in paraelectric BaTiO3 to be clarified. The narrow sheets of diffuse scattering, related to the famous anisotropic longitudinal correlations of Ti ions, are shown to be caused by the overdamped anharmonic soft phonon branch. This finding demonstrates that the occurrence of narrow sheets of diffuse scattering agrees with a displacive picture of the cubic phase of this textbook ferroelectric material. The presented methodology allows one to go beyond the harmonic approximation in the analysis of phonons and phonon-related scattering.
Short-time Lyapunov exponent analysis and the transition to chaos in Taylor-Couette flow
NASA Technical Reports Server (NTRS)
Vastano, John A.; Moser, Robert D.
1991-01-01
The physical mechanism driving the weakly chaotic Taylor-Couette flow is investigated using the short-time Liapunov exponent analysis. In this procedure, the transition from quasi-periodicity to chaos is studied using direct numerical 3D simulations of axially periodic Taylor-Couette flow, and a partial Liapunov exponent spectrum for the flow is computed by simultaneously advancing the full solution and a set of perturbations. It is shown that the short-time Liapunov exponent analysis yields more information on the exponents and dimension than that obtained from the common Liapunov exponent calculations. Results show that the chaotic state studied here is caused by a Kelvin-Helmholtz-type instability of the outflow boundary jet of Taylor vortices.
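The procedure of simultaneously advancing the full solution and a set of perturbations is the classic Benettin-style algorithm for a partial Lyapunov spectrum; a generic sketch under that reading follows, with hypothetical step (one step of the nonlinear solver) and jac_step (one-step tangent propagator along the trajectory) callables.

```python
import numpy as np

def lyapunov_spectrum(step, jac_step, x, k=3, n_steps=20000, dt=0.01):
    """Estimate the k largest Lyapunov exponents: advance the trajectory
    and k tangent perturbations together, reorthonormalizing with QR and
    accumulating the log growth rates from the diagonal of R."""
    rng = np.random.default_rng(0)
    Q = np.linalg.qr(rng.standard_normal((x.size, k)))[0]
    sums = np.zeros(k)
    for _ in range(n_steps):
        x = step(x, dt)             # full nonlinear solution
        Q = jac_step(x, dt) @ Q     # evolve perturbations linearly
        Q, R = np.linalg.qr(Q)      # reorthonormalize
        sums += np.log(np.abs(np.diag(R)))
    return sums / (n_steps * dt)    # exponents, largest first
```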
Multiresolution motion planning for autonomous agents via wavelet-based cell decompositions.
Cowlagi, Raghvendra V; Tsiotras, Panagiotis
2012-10-01
We present a path- and motion-planning scheme that is "multiresolution" both in the sense of representing the environment with high accuracy only locally and in the sense of addressing the vehicle kinematic and dynamic constraints only locally. The proposed scheme uses rectangular multiresolution cell decompositions, efficiently generated using the wavelet transform. The wavelet transform is widely used in signal and image processing, with emerging applications in autonomous sensing and perception systems. The proposed motion planner enables the simultaneous use of the wavelet transform in both the perception and in the motion-planning layers of vehicle autonomy, thus potentially reducing online computations. We rigorously prove the completeness of the proposed path-planning scheme, and we provide numerical simulation results to illustrate its efficacy.
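A stripped-down illustration of the multiresolution idea: repeated 2x2 Haar-style averaging of an occupancy grid yields the coarse cells used far from the vehicle, while the original grid supplies the fine cells near it. This assumes a square grid with even dimensions at each level and omits the wavelet detail-coefficient bookkeeping of the actual scheme.

```python
import numpy as np

def haar_pyramid(grid, levels=3):
    """Multiresolution view of an occupancy grid: level 0 is full
    resolution; each subsequent level averages 2x2 blocks, halving
    the resolution (the Haar approximation coefficients)."""
    pyramid = [grid.astype(float)]
    for _ in range(levels):
        g = pyramid[-1]
        coarse = 0.25 * (g[0::2, 0::2] + g[1::2, 0::2] +
                         g[0::2, 1::2] + g[1::2, 1::2])
        pyramid.append(coarse)
    return pyramid
```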
NASA Astrophysics Data System (ADS)
Tomellini, M.; Fanfoni, M.
1999-10-01
On the basis of the quasi-static approximation and for simultaneous nucleation the adatom lifetime, τ, during film growth at solid surfaces has been computed by Monte Carlo (MC) simulation. The quantity DN0τ, N0 and D being respectively the cluster density and the adatom diffusion coefficient, is found to depend upon the portion of surface covered by clusters and, very weakly, on N0. Moreover, a stochastic approach based on the Johnson-Mehl-Avrami-Kolmogorov (JMAK) theory has been developed to obtain the analytical expression of the MC curve. The collision factor of the mean island has been calculated and compared with those previously obtained from the uniform depletion approximation and the lattice approximation.
NASA Astrophysics Data System (ADS)
Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing
2018-03-01
In order to improve depth extraction accuracy, a method using the moving array lenslet technique (MALT) in the pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously, N times within one pitch, to obtain N sets of elemental images. A computational integral imaging reconstruction method for MALT is used to obtain the slice images of the 3D scene, and the sum modulus (SMD) blur metric is applied to these slice images to obtain the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
On Fitting a Multivariate Two-Part Latent Growth Model
Xu, Shu; Blozis, Shelley A.; Vandewater, Elizabeth A.
2017-01-01
A 2-part latent growth model can be used to analyze semicontinuous data to simultaneously study change in the probability that an individual engages in a behavior, and if engaged, change in the behavior. This article uses a Monte Carlo (MC) integration algorithm to study the interrelationships between the growth factors of 2 variables measured longitudinally where each variable can follow a 2-part latent growth model. A SAS macro implementing Mplus is developed to estimate the model to take into account the sampling uncertainty of this simulation-based computational approach. A sample of time-use data is used to show how maximum likelihood estimates can be obtained using a rectangular numerical integration method and an MC integration method. PMID:29333054
Design and experimental validation of a flutter suppression controller for the active flexible wing
NASA Technical Reports Server (NTRS)
Waszak, Martin R.; Srinathkumar, S.
1992-01-01
The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and extensive simulation based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite modeling errors in predicted flutter dynamic pressure and flutter frequency. The flutter suppression controller was also successfully operated in combination with another controller to perform flutter suppression during rapid rolling maneuvers.
Digital adaptive control of a VTOL aircraft
NASA Technical Reports Server (NTRS)
Reid, G. F.
1976-01-01
A technique has been developed for calculating feedback and feedforward gain matrices that stabilize a VTOL aircraft while enabling it to track input commands of forward and vertical velocity. Leverrier's algorithm is used in a procedure for determining a set of state-variable feedback gains that force the closed-loop poles and zeroes of one pilot-input transfer function to be at preselected positions in the s-plane. This set of feedback gains is then used to calculate the feedback and feedforward gains for the velocity command controller. The method is computationally attractive since the gains are determined by solving systems of linear, simultaneous equations. Responses obtained using a digital simulation of the longitudinal dynamics of the CH-47 helicopter are presented.
NASA Technical Reports Server (NTRS)
Liu, H. K.
1978-01-01
A phase modulated triple exposure technique was incorporated into a holographic nondestructive test (HNDT) system. The technique was able to achieve a goal of simultaneously identifying the zero-order fringe and determining the direction of motion (or displacement). Basically, the technique involves the addition of one more exposure, during the loading of the tested object, to the conventional double-exposure hologram. A phase shifter is added to either the object beam or the reference beam during the second and third exposure. Theoretical analysis with the assistance of computer simulation illustrated the feasibility of implementing the phase modulation and triple-exposure in the HNDT systems. Main advantages of the technique are the enhancement of accuracy in data interpretation and a better determination of the nature of the flaws in the tested object.
2011-01-01
Background: Safety assessment of genetically modified organisms is currently often performed by comparative evaluation. However, natural variation of plant characteristics between commercial varieties is usually not considered explicitly in the statistical computations underlying the assessment. Results: Statistical methods are described for the assessment of the difference between a genetically modified (GM) plant variety and a conventional non-GM counterpart, and for the assessment of the equivalence between the GM variety and a group of reference plant varieties which have a history of safe use. It is proposed to present the results of both difference and equivalence testing for all relevant plant characteristics simultaneously in one or a few graphs, as an aid for further interpretation in safety assessment. A procedure is suggested to derive equivalence limits from the observed results for the reference plant varieties using a specific implementation of the linear mixed model. Three different equivalence tests are defined to classify any result in one of four equivalence classes. The performance of the proposed methods is investigated by a simulation study, and the methods are illustrated on compositional data from a field study on maize grain. Conclusions: A clear distinction of practical relevance is shown between difference and equivalence testing. The proposed tests are shown to have appropriate performance characteristics by simulation, and the proposed simultaneous graphical representation of results was found to be helpful for the interpretation of results from a practical field trial data set. PMID:21324199
Study on recognition algorithm for paper currency numbers based on neural network
NASA Astrophysics Data System (ADS)
Li, Xiuyan; Liu, Tiegen; Li, Yuanyao; Zhang, Zhongchuan; Deng, Shichao
2008-12-01
Because each note carries a unique serial number, paper currency numbers can be put on record, and automatic identification equipment for paper currency numbers can be supplied to the currency circulation market, providing convenience for financial sectors tracing fiduciary circulation and effective supervision of paper currency. It is simultaneously useful for identifying forged notes, blacklisting forged note numbers, and addressing major social problems such as armored cash-carrier robbery and money laundering. For the purpose of recognizing paper currency numbers, a recognition algorithm based on neural networks is presented in this paper. Number lines in original paper currency images are extracted through image processing steps such as de-noising, skew correction, segmentation, and normalization. According to the different characteristics of digits and letters in the serial number, two kinds of classifiers are designed. With its characteristics of associative memory, optimized computation, and rapid convergence, the discrete Hopfield neural network (DHNN) is utilized to recognize the letters; with its simple structure, quick learning, and global optimality, the radial-basis-function neural network (RBFNN) is adopted to identify the digits. The final recognition results are then obtained by combining the two kinds of recognition results in regular sequence. Simulation tests confirm that the combined recognition algorithm achieves both a high recognition rate and fast recognition, making it worthy of broad application.
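For the letter classifier, a minimal discrete Hopfield associative memory with Hebbian storage and synchronous recall might look like the following; the +/-1 vectors stand in for binarized character templates, and this is an illustration rather than the authors' implementation.

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian storage for a discrete Hopfield network; patterns is a
    (num_patterns, n) array of +/-1 vectors (binarized letter templates)."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)      # no self-connections
    return W

def hopfield_recall(W, x, n_iter=20):
    """Synchronous recall: iterate sign(W x) from a noisy input until a
    stored template (associative memory) is reached or n_iter is hit."""
    x = np.asarray(x, dtype=float)
    for _ in range(n_iter):
        x_new = np.where(W @ x >= 0, 1.0, -1.0)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x
```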
Simultaneous calibration phantom commission and geometry calibration in cone beam CT
NASA Astrophysics Data System (ADS)
Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong
2017-09-01
Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method treats the BB centers in the phantom as optimized parameters in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs/cm. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
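A bare-bones particle swarm optimizer of the kind that could refine the BB coordinates is sketched below; the objective (e.g. the evaluation contrast index as a function of candidate BB positions) and all hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pso(objective, dim, n_particles=30, n_iter=200, bounds=(-1.0, 1.0)):
    """Minimal particle swarm optimization: each particle keeps its
    personal best and moves with inertia plus cognitive and social pulls
    toward the personal and global bests."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()          # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g
```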
Computer-Assisted Instruction in Statistics. Technical Report.
ERIC Educational Resources Information Center
Cooley, William W.
A paper given at a conference on statistical computation discussed teaching statistics with computers. It concluded that computer-assisted instruction is most appropriately employed in the numerical demonstration of statistical concepts, and for statistical laboratory instruction. The student thus learns simultaneously about the use of computers…
PanDA: Exascale Federation of Resources for the ATLAS Experiment at the LHC
NASA Astrophysics Data System (ADS)
Barreiro Megino, Fernando; Caballero Bejar, Jose; De, Kaushik; Hover, John; Klimentov, Alexei; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Petrosyan, Artem; Wenaus, Torre
2016-02-01
After a scheduled maintenance and upgrade period, the world's largest and most powerful machine - the Large Hadron Collider (LHC) - is about to enter its second run at unprecedented energies. In order to exploit the scientific potential of the machine, the experiments at the LHC face computational challenges with enormous data volumes that need to be analysed by thousands of physics users and compared to simulated data. Given diverse funding constraints, the computational resources for the LHC have been deployed in a worldwide mesh of data centres, connected to each other through Grid technologies. The PanDA (Production and Distributed Analysis) system was developed in 2005 for the ATLAS experiment on top of this heterogeneous infrastructure to seamlessly integrate the computational resources and give users the feel of a single system. Since its origins, PanDA has evolved together with upcoming computing paradigms in and outside HEP, such as changes in the networking model, Cloud Computing and HPC. It currently runs steadily on up to 200 thousand simultaneous cores (limited by the resources available to ATLAS), handles up to two million aggregated jobs per day, and processes over an exabyte of data per year. The success of PanDA in ATLAS is triggering widespread adoption and testing by other experiments. In this contribution we give an overview of the PanDA components and focus on the new features and upcoming challenges that are relevant to the next decade of distributed computing workload management using PanDA.
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2016-12-01
Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is a computationally tractable direct waveform joint inversion that solves simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from the use of a multitude of source illuminations, and the ability to operate in areas with high levels of source-signal spatial complexity and non-stationarity. This goal would not be attainable with a pure time domain solution of the inverse problem. This is particularly true for MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across a large frequency bandwidth: in the forward simulation, the smallest time steps must be finer than required to represent the highest frequency, while the total number of time steps must also cover the lowest frequency. This leads to a sensitivity matrix that is computationally burdensome when solving for a model update. We have implemented a code that addresses this situation through cascade decimation, which reduces the size of the sensitivity matrix substantially via a quasi-equivalent time domain decomposition. We also use a fictitious wave domain method to speed up the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds computation of the sensitivity matrices dramatically while keeping the solution close to that of the undecimated case. For example, for a model discretized into 2.6×10⁵ cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computational time for the inverse problem.
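Cascade decimation itself is straightforward to illustrate: a record is repeatedly low-pass filtered and downsampled by a factor of two, so each successive band carries ever-lower frequencies with ever-fewer samples. A minimal sketch using scipy follows; the paper's quasi-equivalent decomposition of the sensitivity matrix is of course more involved.

```python
# Hedged sketch of cascade decimation: successive low-pass filtering and
# downsampling by 2, producing octave-band levels of a time series.
import numpy as np
from scipy.signal import decimate

def cascade_decimate(x, levels):
    """Return a list of progressively decimated copies of x."""
    out = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        out.append(decimate(out[-1], 2, zero_phase=True))
    return out

fs = 1024.0
t = np.arange(0, 64, 1.0 / fs)
x = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.sin(2 * np.pi * 100 * t)
bands = cascade_decimate(x, levels=6)
print([len(b) for b in bands])   # sample count halves at each level
```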
NASA Astrophysics Data System (ADS)
Lu, J.; Wakai, K.; Takahashi, S.; Shimizu, S.
2000-06-01
An algorithm that takes into account the refraction of sound wave paths in acoustic computed tomography (CT) is developed. Incorporating refraction into ordinary CT algorithms based on the Fourier transform is very difficult, so in this paper the least-squares method, which is capable of accounting for the refraction effect, is employed to reconstruct the two-dimensional temperature distribution. The refraction effect is obtained by solving a set of differential equations derived from Fermat's principle and the calculus of variations. Refraction analysis and reconstruction of the temperature distribution cannot be carried out simultaneously, so the problem is solved iteratively. The measurement field is assumed to be circular, and 16 speakers, also serving as receivers, are set around it at equal intervals. The algorithm is checked through computer simulation with various kinds of temperature distributions. It is shown that the present method, which accounts for refraction, reconstructs temperature distributions with much greater accuracy than methods that neglect the refraction effect.
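The non-refracting core of such a scheme, the least-squares inversion of travel times for a gridded field, can be sketched compactly. Straight rays, the grid size, and the random chord geometry below are illustrative simplifications of the 16-transducer ring.

```python
# Hedged sketch of straight-ray travel-time tomography by least squares.
# A maps pixel slownesses to travel times; temperature would then follow
# from the sound-speed/temperature relation. All geometry is illustrative.
import numpy as np

n = 8                                        # n x n pixel grid
rng = np.random.default_rng(1)

def ray_row(x0, y0, x1, y1, steps=200):
    """Approximate the length a straight ray spends in each pixel."""
    row = np.zeros(n * n)
    seg = np.hypot(x1 - x0, y1 - y0) / steps
    for t in np.linspace(0.0, 1.0, steps):
        i = min(int((y0 + t * (y1 - y0)) * n), n - 1)
        j = min(int((x0 + t * (x1 - x0)) * n), n - 1)
        row[i * n + j] += seg
    return row

# random chords across the unit square stand in for the speaker ring
s_true = 1.0 / (340.0 + 20.0 * rng.random(n * n))   # true slowness (s/m)
A, d = [], []
for _ in range(200):
    p = rng.random(4)                        # ray endpoints (x0, y0, x1, y1)
    r = ray_row(*p)
    A.append(r)
    d.append(r @ s_true)                     # noiseless travel time
A, d = np.array(A), np.array(d)

s_est, *_ = np.linalg.lstsq(A, d, rcond=None)   # least-squares solve
print(f"max slowness error: {np.abs(s_est - s_true).max():.2e} s/m")
```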
Numerical simulation code for self-gravitating Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Madarassy, Enikő J. M.; Toth, Viktor T.
2013-04-01
We completed the development of simulation code that is designed to study the behavior of a conjectured dark matter galactic halo that is in the form of a Bose-Einstein Condensate (BEC). The BEC is described by the Gross-Pitaevskii equation, which can be solved numerically using the Crank-Nicholson method. The gravitational potential, in turn, is described by Poisson's equation, which can be solved using the relaxation method. Our code combines these two methods to study the time evolution of a self-gravitating BEC. The inefficiency of the relaxation method is balanced by the fact that in subsequent time iterations, previously computed values of the gravitational field serve as very good initial estimates. The code is robust (as evidenced by its stability on coarse grids) and efficient enough to simulate the evolution of a system over the course of 10⁹ years on a finer (100×100×100) spatial grid, in less than a day of processor time on a contemporary desktop computer.
Catalogue identifier: AEOR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOR_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 5248
No. of bytes in distributed program, including test data, etc.: 715402
Distribution format: tar.gz
Programming language: C++ or FORTRAN.
Computer: PCs or workstations.
Operating system: Linux or Windows.
Classification: 1.5.
Nature of problem: Simulation of a self-gravitating Bose-Einstein condensate by simultaneous solution of the Gross-Pitaevskii and Poisson equations in three dimensions.
Solution method: The Gross-Pitaevskii equation is solved numerically using the Crank-Nicholson method; Poisson's equation is solved using the relaxation method. The time evolution of the system is governed by the Gross-Pitaevskii equation; the solution of Poisson's equation at each time step is used as an initial estimate for the next time step, which dramatically increases the efficiency of the relaxation method.
Running time: Depends on the chosen size of the problem. On a typical personal computer, a 100×100×100 grid can be solved with a time span of 10 Gyr in approx. a day of running time.
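The two numerical building blocks named in the summary can be sketched in one dimension. The following toy combines a Crank-Nicholson step for the Gross-Pitaevskii equation (nonlinear term frozen at the current state) with Jacobi relaxation for Poisson's equation, reusing the previous potential as a warm start exactly as the abstract describes. Grid sizes, units, and periodic boundary conditions are illustrative assumptions; the distributed code is 3D C++/FORTRAN.

```python
# Hedged 1D sketch: Crank-Nicholson for the GPE plus relaxation for Poisson.
import numpy as np

N, dx, dt, g = 256, 0.1, 0.002, 1.0            # grid size, steps, interaction
x = (np.arange(N) - N // 2) * dx
psi = np.exp(-x**2).astype(complex)
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)    # normalize the wavefunction

def crank_nicolson_step(psi, V):
    """One CN step of i psi_t = -(1/2) psi_xx + (V + g|psi|^2) psi,
    with the nonlinear term frozen at the current psi (periodic BCs)."""
    def apply_H(phi):
        lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
        return -0.5 * lap + (V + g * np.abs(psi)**2) * phi
    rhs = psi - 0.5j * dt * apply_H(psi)
    I = np.eye(N, dtype=complex)
    H = np.stack([apply_H(I[:, k]) for k in range(N)], axis=1)  # dense H
    return np.linalg.solve(I + 0.5j * dt * H, rhs)

def poisson_relax(rho, phi, sweeps=200):
    """Jacobi relaxation for phi'' = rho (mean removed so the periodic
    problem is solvable); the warm start phi is the efficiency trick
    described in the abstract."""
    rho = rho - rho.mean()
    for _ in range(sweeps):
        phi = 0.5 * (np.roll(phi, 1) + np.roll(phi, -1) - dx**2 * rho)
    return phi

phi = np.zeros(N)
for _ in range(10):
    phi = poisson_relax(np.abs(psi)**2, phi)   # potential from current density
    psi = crank_nicolson_step(psi, V=phi)
print((np.abs(psi)**2).sum() * dx)             # norm stays ~1 (CN is unitary)
```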
Combined Recipe for Clinical Target Volume and Planning Target Volume Margins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stroom, Joep, E-mail: joep.stroom@fundacaochampalimaud.pt; Gilhuijs, Kenneth; Vieira, Sandra
2014-03-01
Purpose: To develop a combined recipe for clinical target volume (CTV) and planning target volume (PTV) margins. Methods and Materials: A widely accepted PTV margin recipe is $M_{geo} = a\Sigma_{geo} + b\sigma_{geo}$, with $\Sigma_{geo}$ and $\sigma_{geo}$ standard deviations (SDs) representing systematic and random geometric uncertainties, respectively. On the basis of histopathology data of breast and lung tumors, we suggest describing the distribution of microscopic islets around the gross tumor volume (GTV) by a half-Gaussian with SD $\Sigma_{micro}$, yielding as possible CTV margin recipe: $M_{micro} = f(N_i) \times \Sigma_{micro}$, with $N_i$ the average number of microscopic islets per patient. To determine $f(N_i)$, a computer model was developed that simulated radiation therapy of a spherical GTV with isotropic distribution of microscopic disease in a large group of virtual patients. The minimal margin that yielded $D_{min} < 95\%$ in maximally 10% of patients was calculated for various $\Sigma_{micro}$ and $N_i$. Because $\Sigma_{micro}$ is independent of $\Sigma_{geo}$, we propose they should be added quadratically, yielding for a combined GTV-to-PTV margin recipe: $M_{GTV\text{-}PTV} = \sqrt{(a\Sigma_{geo})^2 + (f(N_i)\Sigma_{micro})^2} + b\sigma_{geo}$. This was validated by the computer model through numerous simultaneous simulations of microscopic and geometric uncertainties. Results: The margin factor $f(N_i)$ in a relevant range of $\Sigma_{micro}$ and $N_i$ can be given by: $f(N_i) = 1.4 + 0.8\log(N_i)$. Filling in the other factors found in our simulations ($a = 2.1$ and $b = 0.8$) yields for the combined recipe: $M_{GTV\text{-}PTV} = \sqrt{(2.1\Sigma_{geo})^2 + ([1.4 + 0.8\log(N_i)] \times \Sigma_{micro})^2} + 0.8\sigma_{geo}$. The average margin difference between the simultaneous simulations and the above recipe was 0.2 ± 0.8 mm (1 SD). Calculating $M_{geo}$ and $M_{micro}$ separately and adding them linearly overestimated PTVs by on average 5 mm. Margin recipes based on tumor control probability (TCP) instead of $D_{min}$ criteria yielded similar results. Conclusions: A general recipe for GTV-to-PTV margins is proposed, which shows that CTV and PTV margins should be added in quadrature instead of linearly.
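The final recipe is simple enough to transcribe directly. The only assumption in the sketch below is that the logarithm is base 10, which the abstract does not state explicitly.

```python
# Direct transcription of the combined margin recipe from the abstract.
# Inputs in millimeters; log base 10 is an assumption on our part.
import math

def gtv_to_ptv_margin(sigma_geo_sys, sigma_geo_rand, sigma_micro, n_islets):
    """M = sqrt((2.1*Sigma_geo)^2 + (f(Ni)*Sigma_micro)^2) + 0.8*sigma_geo."""
    f_ni = 1.4 + 0.8 * math.log10(n_islets)
    return math.hypot(2.1 * sigma_geo_sys, f_ni * sigma_micro) \
        + 0.8 * sigma_geo_rand

# example: 2 mm systematic and 3 mm random geometric SDs, 1 mm microscopic SD
print(round(gtv_to_ptv_margin(2.0, 3.0, 1.0, n_islets=10), 1), "mm")
```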
Kendon, Vivien M; Nemoto, Kae; Munro, William J
2010-08-13
We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.
Forecasting of Storm Surge Floods Using ADCIRC and Optimized DEMs
NASA Technical Reports Server (NTRS)
Valenti, Elizabeth; Fitzpatrick, Patrick
2005-01-01
Increasing the accuracy of storm surge flood forecasts is essential for improving preparedness for hurricanes and other severe storms and, in particular, for optimizing evacuation scenarios. An interactive database, developed by WorldWinds, Inc., contains atlases of storm surge flood levels for the Louisiana/Mississippi gulf coast region. These atlases were developed to improve forecasting of flooding along the coastline and estuaries and in adjacent inland areas. Storm surge heights depend on a complex interaction of several factors, including storm size, central minimum pressure, forward speed of motion, bottom topography near the point of landfall, astronomical tides, and, most importantly, maximum wind speed. The information in the atlases was generated in over 100 computational simulations, partly by use of a parallel-processing version of the ADvanced CIRCulation (ADCIRC) model. ADCIRC is a nonlinear computational model of hydrodynamics, developed by the U.S. Army Corps of Engineers and the U.S. Navy as a family of two- and three-dimensional finite-element codes. It affords a capability for simulating tidal circulation and storm surge propagation over very large computational domains, while simultaneously providing high-resolution output in areas of complex shoreline and bathymetry. The ADCIRC finite-element grid for this project covered the Gulf of Mexico and contiguous basins, extending into the deep Atlantic Ocean, with progressively higher resolution approaching the study area. The advantage of using ADCIRC over other storm surge models, such as SLOSH, is that input conditions can include any or all of wind stress, tides, wave stress, and river discharge, which serve to make the model output more accurate.
NASA Astrophysics Data System (ADS)
Rosolem, R.; Rahman, M.; Kollet, S. J.; Wagener, T.
2017-12-01
Understanding the impacts of land cover and climate changes on terrestrial hydrometeorology is important across a range of spatial and temporal scales, and Earth System Models (ESMs) provide a robust platform for evaluating these impacts. However, current ESMs generally lack the representation of key hydrological processes (e.g., preferential water flow and direct interactions with aquifers). The typical "free drainage" conceptualization of land models can misrepresent the magnitude of those interactions, consequently affecting the exchange of energy and water at the surface as well as estimates of groundwater recharge. Recent studies show the benefits of explicitly simulating the interactions between subsurface and surface processes in similar models, but such parameterizations are often computationally demanding, which limits their application in large- and global-scale studies. Here, we take a different approach to developing a novel parameterization for groundwater dynamics. Instead of directly adding another complex process to an established land model, we examine a set of comprehensive experimental scenarios using a well-established, robust three-dimensional hydrological model to develop a simpler parameterization that represents aquifer-to-land-surface interactions. The main goal of the developed parameterization is to simultaneously maximize computational gain (i.e., "efficiency") while minimizing simulation errors relative to the full 3D model (i.e., "robustness"), allowing easy implementation in ESMs globally. Our study focuses primarily on understanding the dynamics of both groundwater recharge and discharge. Preliminary results show that the proposed approach significantly reduces the computational demand while model deviations from the full 3D model remain small for these processes.
Student Ability, Confidence, and Attitudes Toward Incorporating a Computer into a Patient Interview.
Ray, Sarah; Valdovinos, Katie
2015-05-25
To improve pharmacy students' ability to effectively incorporate a computer into a simulated patient encounter and to improve their awareness of barriers to, attitudes toward, and confidence in using a computer during simulated patient encounters. Students completed a survey that assessed their awareness of, confidence in, and attitudes toward computer use during simulated patient encounters. Students were evaluated with a rubric on their ability to incorporate a computer into a simulated patient encounter. Students were resurveyed and reevaluated after instruction. Students improved in their ability to effectively incorporate computer usage into a simulated patient encounter. They also became more aware of barriers to such usage, improved their attitudes toward it, and gained more confidence in their ability to use a computer during simulated patient encounters. Instruction can improve pharmacy students' ability to incorporate a computer into simulated patient encounters, a skill critical to developing efficiency while maintaining rapport with patients.
Using discrete event computer simulation to improve patient flow in a Ghanaian acute care hospital.
Best, Allyson M; Dixon, Cinnamon A; Kelton, W David; Lindsell, Christopher J; Ward, Michael J
2014-08-01
Crowding and limited resources have increased the strain on acute care facilities and emergency departments worldwide. These problems are particularly prevalent in developing countries. Discrete event simulation is a computer-based tool that can be used to estimate how changes to complex health care delivery systems, such as emergency departments, will affect operational performance. Using this modality, our objective was to identify operational interventions that could potentially improve patient throughput of one acute care setting in a developing country. We developed a simulation model of acute care at a district-level hospital in Ghana to test the effects of resource-neutral (e.g., modified staff start times and roles) and resource-additional (e.g., increased staff) operational interventions on patient throughput. Previously captured, deidentified time-and-motion data from 487 acute care patients were used to develop and test the model. The primary outcome was the modeled effect of interventions on patient length of stay (LOS). The base-case (no change) scenario had a mean LOS of 292 minutes (95% confidence interval [CI], 291-293). In isolation, adding staffing, changing staff roles, and varying shift times did not affect overall patient LOS. Specifically, adding 2 registration workers, history takers, and physicians resulted in a 23.8-minute (95% CI, 22.3-25.3) LOS decrease. However, when shift start times were coordinated with patient arrival patterns, potential mean LOS decreased by 96 minutes (95% CI, 94-98), and with the simultaneous combination of staff roles (registration and history taking), there was an overall mean LOS reduction of 152 minutes (95% CI, 150-154). Resource-neutral interventions identified through discrete event simulation modeling have the potential to improve acute care throughput in this Ghanaian municipal hospital. Discrete event simulation offers another approach to identifying potentially effective interventions to improve patient flow in emergency and acute care in resource-limited settings.
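A discrete event model of this kind is easy to prototype. The sketch below uses the simpy library to push patients through registration, history taking, and physician stages; the arrival and service rates are invented placeholders rather than the paper's time-and-motion data.

```python
# Hedged discrete-event sketch of an acute-care patient flow using simpy.
# Stage capacities and exponential rates are illustrative assumptions.
import random
import simpy

RNG = random.Random(42)
los = []                                   # lengths of stay, in minutes

def patient(env, registration, history, physician):
    start = env.now
    for stage, mean in ((registration, 5), (history, 10), (physician, 15)):
        with stage.request() as req:
            yield req                       # queue for a staff member
            yield env.timeout(RNG.expovariate(1.0 / mean))
    los.append(env.now - start)

def arrivals(env, registration, history, physician):
    while True:
        yield env.timeout(RNG.expovariate(1.0 / 8))   # ~1 arrival per 8 min
        env.process(patient(env, registration, history, physician))

env = simpy.Environment()
reg = simpy.Resource(env, capacity=2)
hist = simpy.Resource(env, capacity=2)
doc = simpy.Resource(env, capacity=3)
env.process(arrivals(env, reg, hist, doc))
env.run(until=8 * 60)                      # one 8-hour shift
print(f"patients seen: {len(los)}, mean LOS: {sum(los)/len(los):.1f} min")
```

Interventions such as extra staff or shifted start times are then tested by changing capacities or the arrival schedule and re-running the model.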
NASA Astrophysics Data System (ADS)
Hasan, S.; Basmage, O.; Stokes, J. T.; Hashmi, M. S. J.
2018-05-01
A review of wire-coating studies using plasto-hydrodynamic pressure shows that most of the work was carried out by conducting experiments alongside simulation analysis based on Bernoulli's principle and the Euler and Navier-Stokes (N-S) equations. These characteristics place the work in the domain of Computational Fluid Dynamics (CFD), an interdisciplinary field spanning fluid mechanics, numerical analysis of fluid flow, and computer science. This research investigates two aspects: (i) simulation and (ii) experimentation. A mathematical model was developed to investigate the flow pattern of the molten polymer and the pressure distribution within the wire-drawing dies, to assess the polymer coating thickness on the coated wires, and to determine the coating speed at the outlet of the drawing dies, without deploying any pressurizing pump. In addition, a physical model was developed within the ANSYS™ environment through the simulation design of ANSYS™ Workbench. The design was customized to simulate wire coating on fine stainless-steel wires using drawing dies with different bore geometries: stepped parallel bore, tapered bore, and combined parallel and tapered bore. The convergence of the designed CFD model and the numerical and physical solution parameters were dynamically monitored for the viscous flow of the polypropylene (PP) polymer. Simulation results were validated against experimental results and used to predict the ideal bore shape for producing a thin coating on stainless-steel wires of different diameters. The simulation studies confirmed that the stainless-steel wires must attain a specific speed while passing through the drawing dies; however, not all speeds within this range produced a coating with the desired characteristics. Therefore, the experimental setup was optimized through design of experiments (Stat-Ease) to validate the results. Rapid solidification of the viscous coating was also targeted so that the coated wires do not stick to the winding spool after the coating process.
Progress on single barrier varactors for submillimeter wave power generation
NASA Technical Reports Server (NTRS)
Nilsen, Svein M.; Groenqvist, Hans; Hjelmgren, Hans; Rydberg, Anders; Kollberg, Erik L.
1992-01-01
Theoretical work on Single Barrier Varactor (SBV) diodes indicates that the efficiency of a multiplier has a maximum for a considerably smaller capacitance variation than previously thought. The theoretical calculations are performed both with a simple theoretical model and with a complete computer simulation using the method of harmonic balance. Modeling of the SBV is carried out in two steps. First, the semiconductor transport equations are solved simultaneously using a finite-difference scheme in one dimension. Second, the calculated I-V and C-V characteristics are input to a multiplier simulator which calculates the optimum impedances and output powers at the frequencies of interest. Multiple-barrier varactors can also be modeled in this way. Several examples of how to design the semiconductor layers to obtain certain characteristics are given. The calculated conversion efficiencies of the modeled structures in a multiplier circuit are also presented. Computer simulations for a case study of a 750 GHz multiplier show that InAs diodes perform favorably compared to GaAs diodes. InAs and InGaAs SBV diodes have been fabricated and their current-versus-voltage characteristics are presented. In the InAs diode, the large-bandgap semiconductor AlSb was used as the barrier. The InGaAs diode was grown lattice-matched to an InP substrate with InAlAs as the barrier material. The current density is greatly reduced for these two material combinations compared to that of GaAs/AlGaAs SBV diodes. GaAs-based diodes can be biased to higher voltages than InAs diodes.
Hamzehpour, Hossein; Rasaei, M Reza; Sahimi, Muhammad
2007-05-01
We describe a method for the development of the optimal spatial distributions of the porosity φ and permeability k of a large-scale porous medium. The optimal distributions are constrained by static and dynamic data. The static data that we utilize are limited data for φ and k, which the method honors in the optimal model and utilizes their correlation functions in the optimization process. The dynamic data include the first-arrival (FA) times, at a number of receivers, of seismic waves that have propagated in the porous medium, and the time-dependent production rates of a fluid that flows in the medium. The method combines the simulated-annealing method with a simulator that solves numerically the three-dimensional (3D) acoustic wave equation and computes the FA times, and a second simulator that solves the 3D governing equation for the fluid's pressure as a function of time. To our knowledge, this is the first time that an optimization method has been developed to determine simultaneously the global minima of two distinct total energy functions. As a stringent test of the method's accuracy, we solve for flow of two immiscible fluids in the same porous medium, without using any data for the two-phase flow problem in the optimization process. We show that the optimal model, in addition to honoring the data, also yields accurate spatial distributions of φ and k, as well as providing accurate quantitative predictions for the single- and two-phase flow problems. The efficiency of the computations is discussed in detail.
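The joint minimization can be illustrated with a generic Metropolis-style simulated annealing loop over a gridded property field, using two stand-in misfit terms in place of the FA-time and production-rate misfits. Everything below is an illustrative toy, not the authors' simulators.

```python
# Generic simulated-annealing sketch: perturb a gridded field cell by cell
# and accept moves by the Metropolis criterion on a combined "energy"
# formed from two data-misfit terms (stand-ins for seismic and flow data).
import math
import random

random.seed(0)
N = 16                                      # cells in a 1D toy model
truth = [math.sin(i / 3.0) for i in range(N)]

def misfit_a(m):                            # stand-in for FA-time misfit
    return sum((m[i] - truth[i]) ** 2 for i in range(N))

def misfit_b(m):                            # stand-in for production misfit
    return sum((m[i] - m[i - 1] - (truth[i] - truth[i - 1])) ** 2
               for i in range(1, N))

def energy(m):
    return misfit_a(m) + misfit_b(m)        # both targets minimized jointly

m = [0.0] * N
E, T = energy(m), 1.0
for _ in range(20000):
    i = random.randrange(N)
    old = m[i]
    m[i] += random.gauss(0.0, 0.1)          # local perturbation
    dE = energy(m) - E
    if dE < 0 or random.random() < math.exp(-dE / T):
        E += dE                              # accept the move
    else:
        m[i] = old                           # reject, restore
    T *= 0.9997                              # geometric cooling schedule
print(round(E, 4))
```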
NASA Technical Reports Server (NTRS)
Derkevorkian, Armen; Peterson, Lee; Kolaini, Ali R.; Hendricks, Terry J.; Nesmith, Bill J.
2016-01-01
An analytic approach is demonstrated to reveal potential pyroshock-driven dynamic effects causing power losses in the Thermo-Electric (TE) module bars of the Mars Science Laboratory (MSL) Multi-Mission Radioisotope Thermoelectric Generator (MMRTG). This study utilizes high-fidelity finite element analysis with SIERRA/PRESTO codes to estimate wave propagation effects due to large-amplitude, suddenly applied pyroshock loads in the MMRTG. A high-fidelity model of the TE module bar was created with approximately 30 million degrees of freedom (DOF). First, a quasi-static preload was applied on top of the TE module bar; then transient tri-axial acceleration inputs were simultaneously applied to the preloaded module. The applied input acceleration signals were measured during MMRTG shock qualification tests performed at the Jet Propulsion Laboratory. An explicit finite element solver in the SIERRA/PRESTO computational environment, along with a 3000-processor parallel supercomputing framework at NASA Ames, was used for the simulation. The simulation results were investigated both qualitatively and quantitatively. The predicted shock wave propagation results provide detailed structural responses throughout the TE module bar, and key insights into the dynamic response (i.e., loads, displacements, accelerations) of critical internal spring/piston compression systems, TE materials, and internal component interfaces in the MMRTG TE module bar. They also provide confidence in the viability of this high-fidelity modeling scheme to accurately predict shock wave propagation patterns within complex structures. This analytic approach is envisioned for modeling shock-sensitive hardware susceptible to intense shock environments positioned near shock separation devices in modern space vehicles and systems.
Geometrical verification system using Adobe Photoshop in radiotherapy.
Ishiyama, Hiromichi; Suzuki, Koji; Niino, Keiji; Hosoya, Takaaki; Hayakawa, Kazushige
2005-02-01
Adobe Photoshop is used worldwide and is useful for comparing portal films with simulation films; it can scan images and then display them simultaneously. The purpose of this study was to assess the accuracy of a geometrical verification system using Adobe Photoshop. We prepared two conditions for verification. Under one condition, films were hung on light boxes, and examiners measured distances between the isocenter on simulation films and that on portal films by aligning the bony structures. Under the other condition, films were scanned into a computer and displayed in Adobe Photoshop, and examiners measured the same distances by aligning the bony structures. To obtain control data, lead balls were used as fiducial points for matching the films accurately. The errors, defined as the differences between the control data and the measurement data, were assessed. Errors of the data obtained using Adobe Photoshop were significantly smaller than those of the data obtained from films on light boxes (p < 0.007). The geometrical verification system using Adobe Photoshop is available on any PC with this software and is useful for improving the accuracy of verification.
Integrated Thermal Response Tool for Earth Entry Vehicles
NASA Technical Reports Server (NTRS)
Chen, Y.-K.; Milos, F. S.; Partridge, Harry (Technical Monitor)
2001-01-01
A system is presented for multi-dimensional, fully-coupled thermal response modeling of hypersonic entry vehicles. The system consists of a two-dimensional implicit thermal response, pyrolysis and ablation program (TITAN), a commercial finite-element thermal and mechanical analysis code (MARC), and a high fidelity Navier-Stokes equation solver (GIANTS). The simulations performed by this integrated system include hypersonic flow-field, fluid and solid interaction, ablation, shape change, pyrolysis gas generation and flow, and thermal response of heatshield and structure. The thermal response of the ablating and charring heatshield material is simulated using TITAN, and that of the underlying structure is simulated using MARC. The ablating heatshield is treated as an outer boundary condition of the structure, and continuity conditions of temperature and heat flux are imposed at the interface between TITAN and MARC. Aerothermal environments with fluid and solid interaction are predicted by coupling TITAN and GIANTS through surface energy balance equations. With this integrated system, the aerothermal environments for an entry vehicle and the thermal response of both the heatshield and the structure can be obtained simultaneously. Representative computations for a proposed blunt body earth entry vehicle are presented and discussed in detail.
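The temperature/heat-flux continuity condition at the TITAN/MARC interface is a standard partitioned-coupling device. The sketch below illustrates it on a deliberately simple steady 1D two-layer conduction problem with a relaxed Dirichlet-Neumann iteration; all material values are invented and nothing here reflects the actual TITAN/MARC implementation.

```python
# Hedged 1D sketch of partitioned thermal coupling: two conducting layers
# ("heatshield" and "structure") are solved separately, and temperature and
# heat-flux continuity at their interface is enforced by a relaxed
# Dirichlet-Neumann iteration. All numbers are illustrative.
kA, LA = 0.5, 0.05     # heatshield conductivity (W/m/K), thickness (m)
kB, LB = 50.0, 0.01    # structure conductivity, thickness
T_hot, T_cold = 1500.0, 300.0     # outer surface and inner boundary (K)

Ti = 300.0             # initial guess for the interface temperature
omega = 0.5            # under-relaxation factor
for it in range(100):
    q = kA * (T_hot - Ti) / LA          # solve layer A with Dirichlet Ti
    Ti_new = T_cold + q * LB / kB       # solve layer B with Neumann flux q
    if abs(Ti_new - Ti) < 1e-6:
        break
    Ti += omega * (Ti_new - Ti)         # relax the interface update

# continuity check: the flux through both layers should now agree
qA = kA * (T_hot - Ti) / LA
qB = kB * (Ti - T_cold) / LB
print(f"interface T = {Ti:.2f} K, flux A = {qA:.1f}, flux B = {qB:.1f} W/m^2")
```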
NASA Astrophysics Data System (ADS)
Alimohammadi, Shahrouz; Cavaglieri, Daniele; Beyhaghi, Pooriya; Bewley, Thomas R.
2016-11-01
This work applies a recently developed derivative-free optimization algorithm to derive a new mixed implicit-explicit (IMEX) time integration scheme for Computational Fluid Dynamics (CFD) simulations. The algorithm allows imposing a specified order of accuracy for the time integration, together with other important stability properties, in the form of nonlinear constraints within the optimization problem; the coefficients of the IMEX scheme must satisfy this set of constraints simultaneously. At each iteration, the optimization process estimates the location of the optimal coefficients using a set of global surrogates for both the objective and constraint functions, as well as a model of the uncertainty of these surrogates based on the concept of Delaunay triangulation. This procedure has been proven to converge to the global minimum of the constrained optimization problem provided the constraints and objective functions are twice differentiable. As a result, a new third-order, low-storage IMEX Runge-Kutta time integration scheme is obtained with remarkably fast convergence. Numerical tests using turbulent channel flow simulations validate the theoretical order of accuracy and stability properties of the new scheme.
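The implicit-explicit splitting such schemes rely on can be shown with the simplest member of the family. The sketch below uses first-order IMEX Euler, not the paper's optimized third-order low-storage scheme: the stiff linear term is treated implicitly, the nonstiff term explicitly, and the step size is chosen well beyond the explicit stability limit.

```python
# Conceptual IMEX sketch on y' = lam*y + g(y): implicit on the stiff linear
# part, explicit on the nonstiff part (first-order IMEX Euler stand-in).
import math

lam = -1000.0                    # stiff linear coefficient (implicit part)
def g(y):                        # nonstiff term (explicit part)
    return math.sin(y)

dt, T = 0.01, 1.0                # dt is ~5x the explicit limit 2/|lam|
y = 1.0
for _ in range(int(T / dt)):
    # IMEX Euler: (1 - dt*lam) * y_new = y + dt * g(y)
    y = (y + dt * g(y)) / (1.0 - dt * lam)
print(f"IMEX Euler:     y(1) = {y:.6f}   (stable)")

z = 1.0
for _ in range(10):              # fully explicit Euler at the same dt diverges
    z = z + dt * (lam * z + g(z))
print(f"explicit Euler: |y| after 10 steps = {abs(z):.3e}")
```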
Thermal Response Modeling System for a Mars Sample Return Vehicle
NASA Technical Reports Server (NTRS)
Chen, Y.-K.; Miles, Frank S.; Arnold, Jim (Technical Monitor)
2001-01-01
A multi-dimensional, coupled thermal response modeling system for analysis of hypersonic entry vehicles is presented. The system consists of a high fidelity Navier-Stokes equation solver (GIANTS), a two-dimensional implicit thermal response, pyrolysis and ablation program (TITAN), and a commercial finite-element thermal and mechanical analysis code (MARC). The simulations performed by this integrated system include hypersonic flowfield, fluid and solid interaction, ablation, shape change, pyrolysis gas generation and flow, and thermal response of heatshield and structure. The thermal response of the heatshield is simulated using TITAN, and that of the underlying structure is simulated using MARC. The ablating heatshield is treated as an outer boundary condition of the structure, and continuity conditions of temperature and heat flux are imposed at the interface between TITAN and MARC. Aerothermal environments with fluid and solid interaction are predicted by coupling TITAN and GIANTS through surface energy balance equations. With this integrated system, the aerothermal environments for an entry vehicle and the thermal response of the entire vehicle can be obtained simultaneously. Representative computations for a flat-faced arc-jet test model and a proposed Mars sample return capsule are presented and discussed.
Evaluating the material parameters of the human cornea in a numerical model.
Sródka, Wiesław
2011-01-01
The values of the biomechanical parameters of human eyeball models reported in the literature are still disputed. The primary motivation behind this work was to predict the material parameters of the cornea through numerical simulations and to assess the applicability of the widely accepted law of applanation tonometry, the Imbert-Fick equation. Numerical simulations of several states of eyeball loading were run to determine the stroma material parameters. In the computations, the elasticity moduli of the material were related to the sign of the stress rather than to orientation in space. The stroma elasticity secant modulus E was predicted to be close to 0.3 MPa. The numerically simulated applanation tonometer readings for a cornea with the calibration dimensions were found to be lower by 11 mmHg than IOP = 48 mmHg. This discrepancy is the result of a strictly mechanical phenomenon taking place in the tensioned and simultaneously flattened corneal shell and is not related to the tonometer's measuring accuracy. The observed deviation has not been amenable to any GAT corrections, contradicting the Imbert-Fick law. This means a new approach to the calculation of corrections for GAT readings is needed.
Molecular Monte Carlo Simulations Using Graphics Processing Units: To Waste Recycle or Not?
Kim, Jihan; Rodgers, Jocelyn M; Athènes, Manuel; Smit, Berend
2011-10-11
In the waste recycling Monte Carlo (WRMC) algorithm, multiple trial states may be simultaneously generated and utilized during Monte Carlo moves to improve the statistical accuracy of the simulations, suggesting that such an algorithm may be well suited to parallel implementation on graphics processing units (GPUs). In this paper, we implement two waste recycling Monte Carlo algorithms in CUDA (Compute Unified Device Architecture), using uniformly distributed random trial states and trial states based on displacement random-walk steps, and we test the methods on a methane-zeolite MFI framework system to evaluate their utility. We discuss the specific implementation details of the waste recycling GPU algorithm and compare the methods to other parallel algorithms optimized for the framework system. We analyze the relationship between the statistical accuracy of our simulations and the CUDA block size to determine the efficient allocation of GPU hardware resources. We make comparisons between the GPU and serial CPU Monte Carlo implementations to assess the speedup over conventional microprocessors. Finally, we apply our optimized GPU algorithms to the important problem of determining free energy landscapes, in this case for molecular motion through the zeolite LTA.
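For a single-proposal Metropolis chain, the waste-recycling estimator has a particularly compact form: each step contributes the acceptance-probability-weighted average of the observable over the trial and current states, so rejected trials still carry statistical information. The toy below (1D Gaussian target, observable ⟨x²⟩) is illustrative only and shares nothing with the paper's GPU implementation or zeolite system.

```python
# Hedged sketch of waste recycling for single-proposal Metropolis sampling.
# Target and observable are toy choices: exp(-beta*x^2/2), <x^2> = 1/beta.
import math
import random

random.seed(0)
beta = 1.0
def energy(x):
    return 0.5 * x * x               # harmonic "potential"

x = 0.0
n_steps, step = 200_000, 1.5
wr_sum = plain_sum = 0.0
for _ in range(n_steps):
    trial = x + random.uniform(-step, step)
    p_acc = min(1.0, math.exp(-beta * (energy(trial) - energy(x))))
    # waste recycling: weight both outcomes by their probabilities
    wr_sum += p_acc * trial**2 + (1.0 - p_acc) * x**2
    if random.random() < p_acc:
        x = trial                    # standard Metropolis acceptance
    plain_sum += x**2                # conventional estimator (post-move state)

print(f"waste-recycled <x^2> = {wr_sum / n_steps:.3f}")
print(f"conventional   <x^2> = {plain_sum / n_steps:.3f}")   # both ~ 1.0
```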