Sample records for computer simulation feasibility

  1. A FRAMEWORK FOR FINE-SCALE COMPUTATIONAL FLUID DYNAMICS AIR QUALITY MODELING AND ANALYSIS

    EPA Science Inventory

    Fine-scale Computational Fluid Dynamics (CFD) simulation of pollutant concentrations within roadway and building microenvironments is feasible using high performance computing. Unlike currently used regulatory air quality models, fine-scale CFD simulations are able to account rig...

  2. Simulating Data for Clinical Research: A Tutorial

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander

    2018-01-01

    Simulation studies use computer-generated data to examine questions of interest and have traditionally been used to study properties of statistics and estimating algorithms. With the recent advent of powerful processing capabilities in affordable computers, along with readily usable software, it is now feasible to use a simulation study to aid in…

  3. Feasibility study, software design, layout and simulation of a two-dimensional Fast Fourier Transform machine for use in optical array interferometry

    NASA Technical Reports Server (NTRS)

    Boriakoff, Valentin

    1994-01-01

    The goal of this project was the feasibility study of a particular architecture of a digital signal processing machine operating in real time which could do in a pipeline fashion the computation of the fast Fourier transform (FFT) of a time-domain sampled complex digital data stream. The particular architecture makes use of simple identical processors (called inner product processors) in a linear organization called a systolic array. Through computer simulation the new architecture to compute the FFT with systolic arrays was proved to be viable, and computed the FFT correctly and with the predicted particulars of operation. Integrated circuits to compute the operations expected of the vital node of the systolic architecture were proven feasible, and even with a 2 micron VLSI technology can execute the required operations in the required time. Actual construction of the integrated circuits was successful in one variant (fixed point) and unsuccessful in the other (floating point).

  4. Numerical propulsion system simulation: An interdisciplinary approach

    NASA Technical Reports Server (NTRS)

    Nichols, Lester D.; Chamis, Christos C.

    1991-01-01

    The tremendous progress being made in computational engineering and the rapid growth in computing power that is resulting from parallel processing now make it feasible to consider the use of computer simulations to gain insights into the complex interactions in aerospace propulsion systems and to evaluate new concepts early in the design process before a commitment to hardware is made. Described here is a NASA initiative to develop a Numerical Propulsion System Simulation (NPSS) capability.

  5. Training and Personnel Systems Technology R&D Program Description FY 1988/1989. Revision

    DTIC Science & Technology

    1988-05-20

    scenario software/database, and computer generated imagery (CIG) subsystem resources; (d) investigation of feasibility of, and preparation of plans... computer language to Army flight simulator for demonstration and evaluation. The objective is to have flight simulators which use the same software as...the Automated Performance and Readiness Training System (APARTS), which is a computer software system which facilitates training management through

  6. Computer-Assisted Scheduling of Army Unit Training: An Application of Simulated Annealing.

    ERIC Educational Resources Information Center

    Hart, Roland J.; Goehring, Dwight J.

    This report of an ongoing research project intended to provide computer assistance to Army units for the scheduling of training focuses on the feasibility of simulated annealing, a heuristic approach for solving scheduling problems. Following an executive summary and brief introduction, the document is divided into three sections. First, the Army…
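
    The record above names simulated annealing as the scheduling heuristic. As a hedged illustration only (the report's actual cost function, events, and constraints are not given in this excerpt), a minimal annealing loop looks like the sketch below; the conflict set and slot count are hypothetical:

    ```python
    import math
    import random

    def anneal(schedule, cost, neighbor, t0=10.0, cooling=0.995, steps=20000):
        """Generic simulated annealing: accept a worse schedule with
        probability exp(-delta/T) so the search can escape local minima."""
        current, current_cost = schedule, cost(schedule)
        best, best_cost = current, current_cost
        t = t0
        for _ in range(steps):
            candidate = neighbor(current)
            delta = cost(candidate) - current_cost
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = current, current_cost
            t *= cooling  # geometric cooling schedule
        return best, best_cost

    # Hypothetical example: assign 10 training events to 5 time slots,
    # minimizing the number of clashing event pairs scheduled together.
    conflicts = {(0, 1), (2, 3), (4, 5), (1, 6), (7, 8)}
    def cost(s):
        return sum(s[a] == s[b] for a, b in conflicts)
    def neighbor(s):
        s = list(s)
        s[random.randrange(len(s))] = random.randrange(5)
        return tuple(s)

    start = tuple(random.randrange(5) for _ in range(10))
    best, best_cost = anneal(start, cost, neighbor)
    ```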

  7. Digital Simulation and Modelling.

    ERIC Educational Resources Information Center

    Hawthorne, G. B., Jr.

    A basically tutorial point of view is taken in this general discussion. The author examines the basic concepts and principles of simulation and modelling and the application of digital computers to these tasks. Examples of existing simulations, a discussion of the applicability and feasibility of simulation studies, a review of simulation…

  8. Simulation and control engineering studies of NASA-Ames 40 foot by 80 foot/80 foot by 120 foot wind tunnels

    NASA Technical Reports Server (NTRS)

    Bohn, J. G.; Jones, J. E.

    1978-01-01

    The development and use of a digital computer simulation of the proposed wind tunnel facility is described. The feasibility of automatic control of wind tunnel airspeed and other parameters was examined. Specifications and implementation recommendations for a computer based automatic control and monitoring system are presented.

  9. A demonstrative model of a lunar base simulation on a personal computer

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The initial demonstration model of a lunar base simulation is described. This initial model was developed on the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. Lotus Symphony Version 1.1 software was used to base the demonstration model on a personal computer with an MS-DOS operating system. The personal computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personal computer-based demonstration model defined a modeling structure that could be employed on a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model are planned in the near future.

  10. A study of the feasibility of statistical analysis of airport performance simulation

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1982-01-01

    The feasibility of conducting a statistical analysis of simulation experiments to study airport capacity is investigated. First, the form of the distribution of airport capacity is studied. Since the distribution is non-Gaussian, it is important to determine the effect of this distribution on standard analysis of variance techniques and power calculations. Next, power computations are made in order to determine how economical simulation experiments would be if they are designed to detect capacity changes from condition to condition. Many of the conclusions drawn are results of Monte-Carlo techniques.
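
    The power computations described here can be reproduced in miniature by Monte Carlo: simulate capacity under two conditions from a skewed (non-Gaussian) distribution, apply the test, and count rejections. A hedged sketch, with gamma-distributed capacities and an effect size chosen purely for illustration:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def power(n_runs, shift, n_rep=2000, alpha=0.05):
        """Fraction of replications in which a t-test detects a capacity
        change between two conditions, using skewed (gamma) capacities."""
        hits = 0
        for _ in range(n_rep):
            a = rng.gamma(shape=4.0, scale=10.0, size=n_runs)          # baseline
            b = rng.gamma(shape=4.0, scale=10.0, size=n_runs) + shift  # changed condition
            if stats.ttest_ind(a, b).pvalue < alpha:
                hits += 1
        return hits / n_rep

    # How many simulation runs per condition are needed for ~80% power?
    for n in (5, 10, 20, 40):
        print(n, power(n, shift=8.0))
    ```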

  11. Computational Modeling Approaches to Multiscale Design of Icephobic Surfaces

    NASA Technical Reports Server (NTRS)

    Tallman, Aaron; Wang, Yan; Vargas, Mario

    2017-01-01

    To aid in the design of surfaces that prevent icing, a model and computational simulation of impact ice formation at the single-droplet scale was implemented. The nucleation of a single supercooled droplet impacting a substrate, in rime ice conditions, was simulated using open-source computational fluid dynamics (CFD) software. No existing model simulates the simultaneous impact and freezing of a single supercooled water droplet; for the 10-week project, a low-fidelity feasibility study was the goal.

  12. An ultra-low-cost moving-base driving simulator

    DOT National Transportation Integrated Search

    2001-11-04

    A novel approach to driving simulation is described, one that potentially overcomes the limitations of both motion fidelity and cost. It has become feasible only because of recent advances in computer-based image generation speed and fidelity and in ...

  13. Feasibility study of automatic control of crew comfort in the shuttle Extravehicular Mobility Unit [liquid-cooled garment regulator]

    NASA Technical Reports Server (NTRS)

    Cook, D. W.

    1977-01-01

    Computer simulation is used to demonstrate that crewman comfort can be assured by automatic control of the inlet temperature of the coolant into the liquid-cooled garment, when the input to the controller consists of measurements of the garment inlet temperature and the garment outlet temperature difference. Subsequent tests using a facsimile of the control logic developed in the computer program confirmed the feasibility of such a design scheme.

  14. Teaching Ionic Solvation Structure with a Monte Carlo Liquid Simulation Program

    ERIC Educational Resources Information Center

    Serrano, Agostinho; Santos, Flavia M. T.; Greca, Ileana M.

    2004-01-01

    The use of molecular dynamics and Monte Carlo methods has provided efficient means to simulate the behavior of molecular liquids and solutions. A Monte Carlo simulation program is used to compute the structure of liquid water and of water as a solvent to Na+, Cl-, and Ar on a personal computer to show that it is easily feasible to…
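
    The core of any such Monte Carlo liquid program is the Metropolis acceptance step. A minimal sketch, assuming a Lennard-Jones fluid in reduced units with a minimum-image periodic box (the tutorial program's actual water and ion potentials are not given in this excerpt):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    kT, box, dmax = 1.0, 5.0, 0.15             # reduced units, illustrative values

    def lj_energy(r2):
        inv6 = 1.0 / r2**3
        return 4.0 * (inv6**2 - inv6)          # Lennard-Jones pair energy

    def particle_energy(pos, i):
        d = pos - pos[i]
        d -= box * np.round(d / box)           # minimum-image convention
        r2 = np.einsum('ij,ij->i', d, d)
        r2[i] = np.inf                         # exclude self-interaction
        return lj_energy(r2).sum()

    # 64 particles started on a 4x4x4 lattice to avoid initial overlaps.
    pos = np.mgrid[0:4, 0:4, 0:4].reshape(3, -1).T * (box / 4) + 0.05

    for step in range(10000):                  # Metropolis sweeps
        i = rng.integers(len(pos))
        old, e_old = pos[i].copy(), particle_energy(pos, i)
        pos[i] = (old + rng.uniform(-dmax, dmax, 3)) % box
        de = particle_energy(pos, i) - e_old
        if rng.random() >= np.exp(min(0.0, -de / kT)):
            pos[i] = old                       # reject the move
    ```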

  15. Verifying the Simulation Hypothesis via Infinite Nested Universe Simulacrum Loops

    NASA Astrophysics Data System (ADS)

    Sharma, Vikrant

    2017-01-01

    The simulation hypothesis proposes that local reality exists as a simulacrum within a hypothetical computer's dimension. More specifically, Bostrom's trilemma proposes that the number of simulations an advanced 'posthuman' civilization could produce makes the proposition very likely. In this paper a hypothetical method to verify the simulation hypothesis is discussed using infinite regression applied to a new type of infinite loop. Assign dimension n to any computer in our present reality, where dimension signifies the hierarchical level in nested simulations our reality exists in. A computer simulating known reality would be dimension (n-1), and likewise a computer simulating an artificial reality, such as a video game, would be dimension (n+1). In this method, among others, four key assumptions are made about the nature of the original computer dimension n. Summations show that regressing such a reality infinitely will create convergence, implying that the verification of whether local reality is a grand simulation is feasible to detect with adequate compute capability. The action of reaching said convergence point halts the simulation of local reality. Sensitivities to the four assumptions and implications are discussed.
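
    The four assumptions and the summations themselves are not reproduced in this excerpt. As a hedged illustration only of why such a regress can converge: if each nested simulation can run at only a fraction r < 1 of its host's computing rate, the total compute demanded by infinitely many nested levels is a convergent geometric series,

    \[
    C_{\text{total}} \;=\; C_n \sum_{k=0}^{\infty} r^{k} \;=\; \frac{C_n}{1-r}, \qquad 0 < r < 1,
    \]

    so the demand stays bounded rather than diverging.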

  16. [Computer simulation by passenger wound analysis of vehicle collision].

    PubMed

    Zou, Dong-Hua; Liu, Ning-Guo; Shen, Jie; Zhang, Xiao-Yun; Jin, Xian-Long; Chen, Yi-Jiu

    2006-08-15

    To reconstruct the course of a vehicle collision and thereby provide a reference for forensic identification and the disposal of traffic accidents. By analyzing evidence left both on passengers and vehicles, the technique of momentum impulse combined with multi-dynamics was applied to simulate the motion and injury of the passengers as well as the track of the vehicles. The computer simulation model perfectly reconstructed the phases of the traffic collision, which coincide with details found by forensic investigation. Computer simulation is helpful and feasible for forensic identification in traffic accidents.

  17. Modeling and Simulation of Explosively Driven Electromechanical Devices

    NASA Astrophysics Data System (ADS)

    Demmie, Paul N.

    2002-07-01

    Components that store electrical energy in ferroelectric materials and produce currents when their permittivity is explosively reduced are used in a variety of applications. The modeling and simulation of such devices is a challenging problem since one has to represent the coupled physics of detonation, shock propagation, and electromagnetic field generation. The high fidelity modeling and simulation of complicated electromechanical devices was not feasible prior to having the Accelerated Strategic Computing Initiative (ASCI) computers and the ASCI developed codes at Sandia National Laboratories (SNL). The EMMA computer code is used to model such devices and simulate their operation. In this paper, I discuss the capabilities of the EMMA code for the modeling and simulation of one such electromechanical device, a slim-loop ferroelectric (SFE) firing set.

  18. Analysis of Plane-Parallel Electron Beam Propagation in Different Media by Numerical Simulation Methods

    NASA Astrophysics Data System (ADS)

    Miloichikova, I. A.; Bespalov, V. I.; Krasnykh, A. A.; Stuchebrov, S. G.; Cherepennikov, Yu. M.; Dusaev, R. R.

    2018-04-01

    Simulation by the Monte Carlo method is widely used to calculate the character of ionizing radiation interaction with matter. A wide variety of programs based on the given method allows users to choose the most suitable package for solving computational problems. In turn, it is important to know exactly the restrictions of numerical systems to avoid gross errors. Results of estimation of the feasibility of application of the program PCLab (Computer Laboratory, version 9.9) for numerical simulation of the electron energy distribution absorbed in beryllium, aluminum, gold, and water for industrial, research, and clinical beams are presented. The data obtained using the programs ITS and Geant4, the most popular software packages for solving the given problems, and the program PCLab are presented in graphic form. A comparison and an analysis of the results obtained demonstrate the feasibility of application of the program PCLab for simulation of the absorbed energy distribution and dose of electrons in various materials for energies in the range 1-20 MeV.

  19. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    PubMed

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  20. Application of wildfire simulation models for risk analysis

    Treesearch

    Alan A. Ager; Mark A. Finney

    2009-01-01

    Wildfire simulation models are being widely used by fire and fuels specialists in the U.S. to support tactical and strategic decisions related to the mitigation of wildfire risk. Much of this application has resulted from the development of a minimum travel time (MTT) fire spread algorithm (M. Finney) that makes it computationally feasible to simulate thousands of...

  21. Using a million cell simulation of the cerebellum: network scaling and task generality.

    PubMed

    Li, Wen-Ke; Hausknecht, Matthew J; Stone, Peter; Mauk, Michael D

    2013-11-01

    Several factors combine to make it feasible to build computer simulations of the cerebellum and to test them in biologically realistic ways. These simulations can be used to help understand the computational contributions of various cerebellar components, including the relevance of the enormous number of neurons in the granule cell layer. In previous work we have used a simulation containing 12000 granule cells to develop new predictions and to account for various aspects of eyelid conditioning, a form of motor learning mediated by the cerebellum. Here we demonstrate the feasibility of scaling up this simulation to over one million granule cells using parallel graphics processing unit (GPU) technology. We observe that this increase in number of granule cells requires only twice the execution time of the smaller simulation on the GPU. We demonstrate that this simulation, like its smaller predecessor, can emulate certain basic features of conditioned eyelid responses, with a slight improvement in performance in one measure. We also use this simulation to examine the generality of the computational properties that we have derived from studying eyelid conditioning. We demonstrate that this scaled-up simulation can learn a high level of performance in a classic machine learning task, the cart-pole balancing task. These results suggest that this parallel GPU technology can be used to build very large-scale simulations whose connectivity ratios match those of the real cerebellum and that these simulations can be used to guide future studies on cerebellar-mediated tasks and on machine learning problems.

  22. Computational aerodynamics development and outlook /Dryden Lecture in Research for 1979/

    NASA Technical Reports Server (NTRS)

    Chapman, D. R.

    1979-01-01

    Some past developments and current examples of computational aerodynamics are briefly reviewed. An assessment is made of the requirements on future computer memory and speed imposed by advanced numerical simulations, giving emphasis to the Reynolds averaged Navier-Stokes equations and to turbulent eddy simulations. Experimental scales of turbulence structure are used to determine the mesh spacings required to adequately resolve turbulent energy and shear. Assessment also is made of the changing market environment for developing future large computers, and of the projections of micro-electronics memory and logic technology that affect future computer capability. From the two assessments, estimates are formed of the future time scale in which various advanced types of aerodynamic flow simulations could become feasible. Areas of research judged especially relevant to future developments are noted.

  23. Antenna design and development for the microwave subsystem experiments for the terminal configured vehicle project

    NASA Technical Reports Server (NTRS)

    Becher, J.; Cohen, N.; Rublee, J.

    1981-01-01

    The feasibility of classifying an airport terminal area for multipath effects, i.e., fadeout potentials or limits of video resolution, is examined. Established transmission links in terminal areas were modeled for landing approaches and overflight patterns. A computer program to obtain signal strength based on a described flight path was written. The application of this model to evaluate the signal transmission obtained in an actual flight equipped with additional signal strength monitoring equipment is described. The actual and computed received signal are compared, and the feasibility of the computer simulation for predicting signal amplitude fluctuation is evaluated.

  24. Semi-physical Simulation Platform of a Parafoil Nonlinear Dynamic System

    NASA Astrophysics Data System (ADS)

    Gao, Hai-Tao; Yang, Sheng-Bo; Zhu, Er-Lin; Sun, Qing-Lin; Chen, Zeng-Qiang; Kang, Xiao-Feng

    2013-11-01

    Focusing on the problems in the process of simulation and experiment on a parafoil nonlinear dynamic system, such as limited methods, high cost, and low efficiency, we present a semi-physical simulation platform. It is designed by connecting parts of physical objects to a computer, and it remedies the defect of a pure computer simulation being entirely divorced from the real environment. The main components of the platform and its functions, as well as simulation flows, are introduced. The feasibility and validity are verified through a simulation experiment. The experimental results show that the platform has significance for improving the quality of the parafoil fixed-point airdrop system, shortening the development cycle and saving cost.

  25. Numerical aerodynamic simulation facility feasibility study, executive summary

    NASA Technical Reports Server (NTRS)

    1979-01-01

    There were three major issues examined in the feasibility study. First, the ability of the proposed system architecture to support the anticipated workload was evaluated. Second, the throughput of the computational engine (the flow model processor) was studied using real application programs. Third, the availability, reliability, and maintainability of the system were modeled. The evaluations were based on the baseline systems. The results show that the implementation of the Numerical Aerodynamic Simulation Facility, in the form considered, would indeed be a feasible project with an acceptable level of risk. The technology required (both hardware and software) either already exists or, in the case of a few parts, is expected to be announced this year.

  26. Automated design of spacecraft systems power subsystems

    NASA Technical Reports Server (NTRS)

    Terrile, Richard J.; Kordon, Mark; Mandutianu, Dan; Salcedo, Jose; Wood, Eric; Hashemi, Mona

    2006-01-01

    This paper discusses the application of evolutionary computing to a dynamic space vehicle power subsystem resource and performance simulation in a parallel processing environment. Our objective is to demonstrate the feasibility, application and advantage of using evolutionary computation techniques for the early design search and optimization of space systems.
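
    A hedged sketch of the kind of evolutionary search described: a toy genetic loop sizing a two-parameter power subsystem (array area, battery capacity) against a made-up mass-plus-margin objective. All numbers and the fitness function are hypothetical, not the paper's model:

    ```python
    import random

    random.seed(0)

    def fitness(design):
        """Hypothetical objective: meet a 2 kW load with margin while
        minimizing mass. design = (array_area_m2, battery_kwh)."""
        area, batt = design
        power = 300.0 * area               # W generated (assumed 300 W/m^2)
        mass = 4.0 * area + 12.0 * batt    # kg (assumed specific masses)
        penalty = max(0.0, 2000.0 - power) + 1000.0 * max(0.0, 1.5 - batt)
        return -(mass + penalty)           # higher fitness is better

    def mutate(d):
        return tuple(max(0.1, g + random.gauss(0.0, 0.5)) for g in d)

    # Evolve a population of candidate designs: keep the 10 fittest,
    # refill the population with mutated copies of the survivors.
    pop = [(random.uniform(1, 15), random.uniform(0.5, 6)) for _ in range(40)]
    for generation in range(100):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(30)]

    print(max(pop, key=fitness))
    ```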

  27. Computer Graphics and Physics Teaching.

    ERIC Educational Resources Information Center

    Bork, Alfred M.; Ballard, Richard

    New, more versatile and inexpensive terminals will make computer graphics more feasible in science instruction than before. This paper describes the use of graphics in physics teaching at the University of California at Irvine. Commands and software are detailed in established programs, which include a lunar landing simulation and a program which…

  28. Development of Microcomputer Simulations for Vocational/Technical Education. Final Report.

    ERIC Educational Resources Information Center

    Randolph Technical Coll., Asheboro, NC.

    A project investigated the feasibility of developing equipment simulations in vocational curricula using videotapes and microcomputers. To conduct the research, two pieces of equipment that could be used in vocational curricula throughout the North Carolina Community College System were chosen: (1) computer numerical control (CNC) lathe used in…

  29. FACE computer simulation. [Flexible Arm Controls Experiment]

    NASA Technical Reports Server (NTRS)

    Sadeh, Willy Z.; Szmyd, Jeffrey A.

    1990-01-01

    A computer simulation of the FACE (Flexible Arm Controls Experiment) was conducted to assess its design for use in the Space Shuttle. The FACE is intended to be a 14-ft long articulated structure with 4 degrees of freedom, consisting of shoulder pitch and yaw, elbow pitch, and wrist pitch. The kinematics of the FACE was simulated to obtain data on arm operation, function, workspace, and interaction. Payload capture ability was modeled. The simulation indicates the capability for detailed kinematic simulation and payload capture ability analysis, and the feasibility of real-time simulation was determined. In addition, the potential for interactive real-time training through integration of the simulation with various interface controllers was revealed. At this stage, the flexibility of the arm was not yet considered.

  30. Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1993-01-01

    Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.

  31. Improving the Computational Effort of Set-Inversion-Based Prandial Insulin Delivery for Its Integration in Insulin Pumps

    PubMed Central

    León-Vargas, Fabian; Calm, Remei; Bondia, Jorge; Vehí, Josep

    2012-01-01

    Objective: Set-inversion-based prandial insulin delivery is a new model-based bolus advisor for postprandial glucose control in type 1 diabetes mellitus (T1DM). It automatically coordinates the values of basal–bolus insulin to be infused during the postprandial period so as to achieve some predefined control objectives. However, the method requires an excessive computation time to compute the solution set of feasible insulin profiles, which impedes its integration into an insulin pump. In this work, a new algorithm is presented, which reduces computation time significantly and enables the integration of this new bolus advisor into current processing features of smart insulin pumps. Methods: A new strategy was implemented that focused on finding the combined basal–bolus solution of interest rather than an extensive search of the feasible set of solutions. Analysis of interval simulations, inclusion of physiological assumptions, and search domain contractions were used. Data from six real patients with T1DM were used to compare the performance between the optimized and the conventional computations. Results: In all cases, the optimized version yielded the basal–bolus combination recommended by the conventional method and in only 0.032% of the computation time. Simulations show that the mean number of iterations for the optimized computation requires approximately 3.59 s at 20 MHz processing power, in line with current features of smart pumps. Conclusions: A computationally efficient method for basal–bolus coordination in postprandial glucose control has been presented and tested. The results indicate that an embedded algorithm within smart insulin pumps is now feasible. Nonetheless, we acknowledge that a clinical trial will be needed in order to justify this claim. PMID:23294789

  32. Ultra-Scale Computing for Emergency Evacuation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaduri, Budhendra L; Nutaro, James J; Liu, Cheng

    2010-01-01

    Emergency evacuations are carried out in anticipation of a disaster such as hurricane landfall or flooding, and in response to a disaster that strikes without a warning. Existing emergency evacuation modeling and simulation tools are primarily designed for evacuation planning and are of limited value in operational support for real time evacuation management. In order to align with desktop computing, these models reduce the data and computational complexities through simple approximations and representations of real network conditions and traffic behaviors, which rarely represent real-world scenarios. With the emergence of high resolution physiographic, demographic, and socioeconomic data and supercomputing platforms, it is possible to develop micro-simulation based emergency evacuation models that can foster development of novel algorithms for human behavior and traffic assignments, and can simulate evacuation of millions of people over a large geographic area. However, such advances in evacuation modeling and simulations demand computational capacity beyond the desktop scales and can be supported by high performance computing platforms. This paper explores the motivation and feasibility of ultra-scale computing for increasing the speed of high resolution emergency evacuation simulations.

  33. A simulation-based approach for evaluating logging residue handling systems.

    Treesearch

    B. Bruce Bare; Benjamin A. Jayne; Brian F. Anholt

    1976-01-01

    Describes a computer simulation model for evaluating logging residue handling systems. The flow of resources is traced through a prespecified combination of operations including yarding, chipping, sorting, loading, transporting, and unloading. The model was used to evaluate the feasibility of converting logging residues to chips that could be used, for example, to...

  34. A glacier runoff extension to the Precipitation Runoff Modeling System

    Treesearch

    A. E. Van Beusekom; R. J. Viger

    2016-01-01

    A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed-parameter, physical-process hydrological simulation code. The extension does not require extensive on-glacier measurements or computational expense but still relies on physical principles over empirical relations as much as is feasible while...

  35. Numerical aerodynamic simulation facility feasibility study

    NASA Technical Reports Server (NTRS)

    1979-01-01

    There were three major issues examined in the feasibility study. First, the ability of the proposed system architecture to support the anticipated workload was evaluated. Second, the throughput of the computational engine (the flow model processor) was studied using real application programs. Third, the availability, reliability, and maintainability of the system were modeled. The evaluations were based on the baseline systems. The results show that the implementation of the Numerical Aerodynamic Simulation Facility, in the form considered, would indeed be a feasible project with an acceptable level of risk. The technology required (both hardware and software) either already exists or, in the case of a few parts, is expected to be announced this year. Facets of the work described include the hardware configuration, software, user language, and fault tolerance.

  36. Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning

    NASA Astrophysics Data System (ADS)

    Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.

    2005-12-01

    A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km2) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.

  37. Computer model of hydroponics nutrient solution pH control using ammonium.

    PubMed

    Pitts, M; Stutte, G

    1999-01-01

    A computer simulation of a hydroponics-based plant growth chamber using ammonium to control pH was constructed to determine the feasibility of such a system. In nitrate-based recirculating hydroponics systems, the pH will increase as plants release hydroxide ions into the nutrient solution to maintain plant charge balance. Ammonium is an attractive alternative to traditional pH controls in an ALSS, but requires careful monitoring and control to avoid overdosing the plants with ammonium. The primary advantage of using NH4+ for pH control is that it exploits the existing plant nutrient uptake charge balance mechanisms to maintain solution pH. The simulation models growth, nitrogen uptake, and pH of a 1-m² stand of wheat. Simulation results indicated that ammonium-based control of nutrient solution pH is feasible using a proportional integral controller. Use of a 1 mmol/L buffer (Ka = 1.6 × 10⁻⁶) in the nutrient solution is required.
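
    A minimal sketch of the proportional-integral idea, assuming a toy plant in which pH drifts upward from nitrate uptake and each mmol/L of dosed NH4+ pulls it back down; the gains and plant coefficients below are illustrative, not the paper's wheat model:

    ```python
    # PI control of nutrient-solution pH by ammonium dosing (toy model).
    set_point, kp, ki = 5.8, 2.0, 0.05     # target pH and PI gains (assumed)
    ph, integral, dt = 5.8, 0.0, 1.0       # one-hour time step

    for hour in range(72):
        error = ph - set_point             # positive when pH is too high
        integral += error * dt
        dose = max(0.0, kp * error + ki * integral)  # NH4+ dose, mmol/L
        ph += 0.03 * dt                    # upward drift from OH- release
        ph -= 0.05 * dose                  # effect of ammonium uptake
        print(f"hour {hour:2d}  pH {ph:.3f}  dose {dose:.3f}")
    ```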

  38. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-03-18

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.

  39. Estimation of electrical conductivity distribution within the human head from magnetic flux density measurement.

    PubMed

    Gao, Nuo; Zhu, S A; He, Bin

    2005-06-07

    We have developed a new algorithm for magnetic resonance electrical impedance tomography (MREIT), which uses only one component of the magnetic flux density to reconstruct the electrical conductivity distribution within the body. The radial basis function (RBF) network and simplex method are used in the present approach to estimate the conductivity distribution by minimizing the errors between the 'measured' and model-predicted magnetic flux densities. Computer simulations were conducted in a realistic-geometry head model to test the feasibility of the proposed approach. Single-variable and three-variable simulations were performed to estimate the brain-skull conductivity ratio and the conductivity values of the brain, skull and scalp layers. When SNR = 15 for magnetic flux density measurements with the target skull-to-brain conductivity ratio being 1/15, the relative error (RE) between the target and estimated conductivity was 0.0737 ± 0.0746 in the single-variable simulations. In the three-variable simulations, the RE was 0.1676 ± 0.0317. Effects of electrode position uncertainty were also assessed by computer simulations. The present promising results suggest the feasibility of estimating important conductivity values within the head from noninvasive magnetic flux density measurements.

  40. Computation of the sound generated by isotropic turbulence

    NASA Technical Reports Server (NTRS)

    Sarkar, S.; Hussaini, M. Y.

    1993-01-01

    The acoustic radiation from isotropic turbulence is computed numerically. A hybrid direct numerical simulation approach which combines direct numerical simulation (DNS) of the turbulent flow with the Lighthill acoustic analogy is utilized. It is demonstrated that the hybrid DNS method is a feasible approach to the computation of sound generated by turbulent flows. The acoustic efficiency in the simulation of isotropic turbulence appears to be substantially less than that in subsonic jet experiments. The dominant frequency of the computed acoustic pressure is found to be somewhat larger than the dominant frequency of the energy-containing scales of motion. The acoustic power in the simulations is proportional to ε M_t^5, where ε is the turbulent dissipation rate and M_t is the turbulent Mach number. This is in agreement with the analytical result of Proudman (1952), but the constant of proportionality is smaller than the analytical result. Two different methods of computing the acoustic power from the DNS data bases yielded consistent results.
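
    Written out, the scaling reported above is Proudman's result with an empirical constant determined from the DNS data:

    \[
    P \;=\; \alpha\,\varepsilon\,M_t^{5},
    \]

    where \(\varepsilon\) is the turbulent dissipation rate, \(M_t\) the turbulent Mach number, and \(\alpha\) the proportionality constant, found in the simulations to be smaller than Proudman's analytical value.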

  41. GPU-accelerated phase-field simulation of dendritic solidification in a binary alloy

    NASA Astrophysics Data System (ADS)

    Yamanaka, Akinori; Aoki, Takayuki; Ogawa, Satoi; Takaki, Tomohiro

    2011-03-01

    The phase-field simulation for dendritic solidification of a binary alloy has been accelerated by using a graphic processing unit (GPU). To perform the phase-field simulation of the alloy solidification on GPU, a program code was developed with the compute unified device architecture (CUDA). In this paper, the implementation technique of the phase-field model on GPU is presented. Also, we evaluated the acceleration performance of the three-dimensional solidification simulation by using a single NVIDIA TESLA C1060 GPU and the developed program code. The results showed that the GPU calculation for 576³ computational grids achieved the performance of 170 GFLOPS by utilizing the shared memory as a software-managed cache. Furthermore, it can be demonstrated that the computation with the GPU is 100 times faster than that with a single CPU core. From the obtained results, we confirmed the feasibility of realizing a real-time full three-dimensional phase-field simulation of microstructure evolution on a personal desktop computer.

  42. Numerical propulsion system simulation

    NASA Technical Reports Server (NTRS)

    Lytle, John K.; Remaklus, David A.; Nichols, Lester D.

    1990-01-01

    The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributors to the high cost is the need to perform many large scale system tests. Extensive testing is used to capture the complex interactions among the multiple disciplines and the multiple components inherent in complex systems. The objective of the Numerical Propulsion System Simulation (NPSS) is to provide insight into these complex interactions through computational simulations. This will allow for comprehensive evaluation of new concepts early in the design phase before a commitment to hardware is made. It will also allow for rapid assessment of field-related problems, particularly in cases where operational problems were encountered during conditions that would be difficult to simulate experimentally. The tremendous progress taking place in computational engineering and the rapid increase in computing power expected through parallel processing make this concept feasible within the near future. However, it is critical that the framework for such simulations be put in place now to serve as a focal point for the continued developments in computational engineering and computing hardware and software. The NPSS concept which is described will provide that framework.

  43. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprague, Michael A.

    Enabled by petascale supercomputing, the next generation of computer models for wind energy will simulate a vast range of scales and physics, spanning from turbine structural dynamics and blade-scale turbulence to mesoscale atmospheric flow. A single model covering all scales and physics is not feasible. Thus, these simulations will require the coupling of different models/codes, each for different physics, interacting at their domain boundaries.

  44. Micromagnetics on high-performance workstation and mobile computational platforms

    NASA Astrophysics Data System (ADS)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is demonstrated, including a multi-core Intel central processing unit, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.

  45. TESS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morozov, Dmitriy; Peterka, Tom

    2014-07-29

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets. As the scale of simulations and observations surpasses billions of particles, a distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this software is a distributed-memory parallel Delaunay and Voronoi tessellation algorithm based on existing serial computational geometry libraries that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include the addition of periodic and wall boundary conditions.

  46. Feasibility study for a numerical aerodynamic simulation facility. Volume 1

    NASA Technical Reports Server (NTRS)

    Lincoln, N. R.; Bergman, R. O.; Bonstrom, D. B.; Brinkman, T. W.; Chiu, S. H. J.; Green, S. S.; Hansen, S. D.; Klein, D. L.; Krohn, H. E.; Prow, R. P.

    1979-01-01

    A Numerical Aerodynamic Simulation Facility (NASF) was designed for the simulation of fluid flow around three-dimensional bodies, both in wind tunnel environments and in free space. The application of numerical simulation to this field of endeavor promised to yield economies in aerodynamic and aircraft body designs. A model for a NASF/FMP (Flow Model Processor) ensemble using a possible approach to meeting NASF goals is presented. The computer hardware and software are presented, along with the entire design and performance analysis and evaluation.

  47. IDA (Institute for Defense Analyses) GAMMA-Ray Laser Annual Summary Report (1986). Investigation of the Feasibility of Developing a Laser Using Nuclear Transitions

    DTIC Science & Technology

    1988-12-01

    Excerpts from the report's list of figures: a computer simulation for a small value of r; a typical pulse shape for r = 8192; pulse duration as a function of r from the statistical simulations, assuming a spontaneous lifetime of 1 s; the scaling factor from the statistical simulations; and basic pulse characteristics and associated Bloch vector angles.

  48. Adaptive scapula bone remodeling computational simulation: Relevance to regenerative medicine

    NASA Astrophysics Data System (ADS)

    Sharma, Gulshan B.; Robertson, Douglas D.

    2013-07-01

    Shoulder arthroplasty success has been attributed to many factors including, bone quality, soft tissue balancing, surgeon experience, and implant design. Improved long-term success is primarily limited by glenoid implant loosening. Prosthesis design examines materials and shape and determines whether the design should withstand a lifetime of use. Finite element (FE) analyses have been extensively used to study stresses and strains produced in implants and bone. However, these static analyses only measure a moment in time and not the adaptive response to the altered environment produced by the therapeutic intervention. Computational analyses that integrate remodeling rules predict how bone will respond over time. Recent work has shown that subject-specific two- and three dimensional adaptive bone remodeling models are feasible and valid. Feasibility and validation were achieved computationally, simulating bone remodeling using an intact human scapula, initially resetting the scapular bone material properties to be uniform, numerically simulating sequential loading, and comparing the bone remodeling simulation results to the actual scapula's material properties. Three-dimensional scapula FE bone model was created using volumetric computed tomography images. Muscle and joint load and boundary conditions were applied based on values reported in the literature. Internal bone remodeling was based on element strain-energy density. Initially, all bone elements were assigned a homogeneous density. All loads were applied for 10 iterations. After every iteration, each bone element's remodeling stimulus was compared to its corresponding reference stimulus and its material properties modified. The simulation achieved convergence. At the end of the simulation the predicted and actual specimen bone apparent density were plotted and compared. Location of high and low predicted bone density was comparable to the actual specimen. High predicted bone density was greater than actual specimen. Low predicted bone density was lower than actual specimen. Differences were probably due to applied muscle and joint reaction loads, boundary conditions, and values of constants used. Work is underway to study this. Nonetheless, the results demonstrate three dimensional bone remodeling simulation validity and potential. Such adaptive predictions take physiological bone remodeling simulations one step closer to reality. Computational analyses are needed that integrate biological remodeling rules and predict how bone will respond over time. We expect the combination of computational static stress analyses together with adaptive bone remodeling simulations to become effective tools for regenerative medicine research.

  49. Simulating Quantile Models with Applications to Economics and Management

    NASA Astrophysics Data System (ADS)

    Machado, José A. F.

    2010-05-01

    The massive increase in the speed of computers over the past forty years changed the way that social scientists, applied economists and statisticians approach their trades, and also the very nature of the problems that they could feasibly tackle. The new methods that make intensive use of computer power go by the names of "computer-intensive" or "simulation" methods. My lecture will start with a bird's-eye view of the uses of simulation in Economics and Statistics. Then I will turn to my own research on uses of computer-intensive methods. From a methodological point of view, the question I address is how to infer marginal distributions having estimated a conditional quantile process ("Counterfactual Decomposition of Changes in Wage Distributions Using Quantile Regression," Journal of Applied Econometrics 20, 2005). Illustrations will be provided of the use of the method to perform counterfactual analysis in several different areas of knowledge.
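
    The method alluded to simulates the marginal distribution implied by an estimated conditional quantile process: draw quantile indices u ~ U(0,1), fit the conditional quantile at each u, and evaluate it at covariates resampled from the data. A hedged sketch on synthetic wage-style data, not the paper's dataset:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Synthetic data: log-wage y with one covariate x (years of schooling).
    n = 2000
    x = rng.uniform(8, 18, n)
    y = 0.5 + 0.08 * x + (0.10 + 0.01 * x) * rng.standard_normal(n)
    X = sm.add_constant(x)

    # 1. Estimate the conditional quantile process on random indices u.
    us = rng.uniform(0.01, 0.99, 200)
    betas = np.array([sm.QuantReg(y, X).fit(q=u).params for u in us])

    # 2. Simulate the implied marginal: pair each fitted conditional
    #    quantile with a covariate row resampled from the data.
    xs = X[rng.integers(0, n, size=len(us))]
    y_sim = np.einsum('ij,ij->i', xs, betas)

    # Swapping a counterfactual covariate distribution into xs yields a
    # counterfactual marginal distribution of y.
    ```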

  50. Using Reconstructed POD Modes as Turbulent Inflow for LES Wind Turbine Simulations

    NASA Astrophysics Data System (ADS)

    Nielson, Jordan; Bhaganagar, Kiran; Juttijudata, Vejapong; Sirisup, Sirod

    2016-11-01

    Currently, in order to capture realistic atmospheric turbulence effects, wind turbine LES simulations require computationally expensive precursor simulations. At times, the precursor simulation is more computationally expensive than the wind turbine simulation itself. The precursor simulations are important because they capture turbulence in the atmosphere, which impacts the power production estimation. On the other hand, POD analysis has been shown to be capable of capturing turbulent structures. The current study was performed to determine the plausibility of using lower-dimension models from POD analysis of LES simulations as turbulent inflow to wind turbine LES simulations. The study will aid the wind energy community by lowering the computational cost of full-scale wind turbine LES simulations, while maintaining a high level of turbulent information and being able to quickly apply the turbulent inflow to multi-turbine wind farms. This will be done by comparing a pure LES precursor wind turbine simulation with simulations that use reduced POD mode inflow conditions. The study shows the feasibility of using lower-dimension models as turbulent inflow for LES wind turbine simulations. Overall, the power production estimation and the velocity field of the wind turbine wake are well captured, with small errors.
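
    POD of precursor snapshots reduces, in the simplest setting, to an SVD of the mean-subtracted snapshot matrix. A hedged sketch, with synthetic data standing in for the precursor-LES inflow planes:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Each column is the inflow-plane velocity field at one time step
    # (synthetic values here in place of precursor-LES output).
    n_points, n_snaps = 4096, 400
    snapshots = rng.standard_normal((n_points, n_snaps))

    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean

    # POD via thin SVD: columns of U are spatial modes; rows of
    # diag(S) @ Vt are their time coefficients.
    U, S, Vt = np.linalg.svd(fluct, full_matrices=False)

    # Keep the leading modes holding ~90% of the fluctuation energy.
    energy = np.cumsum(S**2) / np.sum(S**2)
    k = int(np.searchsorted(energy, 0.90)) + 1

    # Reduced-order turbulent inflow: mean plus truncated reconstruction.
    inflow = mean + U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]
    ```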

  15. Clustering molecular dynamics trajectories for optimizing docking experiments.

    PubMed

    De Paris, Renata; Quevedo, Christian V; Ruiz, Duncan D; Norberto de Souza, Osmar; Barros, Rodrigo C

    2015-01-01

    Molecular dynamics simulations of protein receptors have become an attractive tool for rational drug discovery. However, the high computational cost of employing molecular dynamics trajectories in virtual screening of large repositories threatens the feasibility of this task. Computational intelligence techniques have been applied in this context, with the ultimate goal of reducing the overall computational cost so the task can become feasible. In particular, clustering algorithms have been widely used as a means to reduce the dimensionality of molecular dynamics trajectories. In this paper, we develop a novel methodology for clustering entire trajectories using structural features from the substrate-binding cavity of the receptor in order to optimize docking experiments on a cloud-based environment. The resulting partition was selected based on three clustering validity criteria, and it was further validated by analyzing the interactions between 20 ligands and a fully flexible receptor (FFR) model containing a 20 ns molecular dynamics simulation trajectory. Our proposed methodology shows that taking into account features of the substrate-binding cavity as input for the k-means algorithm is a promising technique for accurately selecting ensembles of representative structures tailored to a specific ligand.
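
    A minimal sketch of the clustering step, assuming per-frame cavity features have already been extracted; the feature set, cluster count, and choice of representatives below are illustrative, not the paper's exact protocol:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)

    # One row per MD frame; columns are structural features of the
    # substrate-binding cavity (synthetic stand-ins for the FFR features)
    features = rng.standard_normal((5000, 6))

    X = StandardScaler().fit_transform(features)     # comparable feature scales
    km = KMeans(n_clusters=25, n_init=10, random_state=0).fit(X)

    # Reduced ensemble: for each cluster, keep the frame nearest its centroid
    reps = []
    for k, centroid in enumerate(km.cluster_centers_):
        members = np.flatnonzero(km.labels_ == k)
        d2 = ((X[members] - centroid) ** 2).sum(axis=1)
        reps.append(members[np.argmin(d2)])          # representative frame index
    ```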

  16. Clinical feasibility and efficacy of using virtual surgical planning in bimaxillary orthognathic surgery without intermediate splint.

    PubMed

    Li, Yunfeng; Jiang, Yangmei; Zhang, Nan; Xu, Rui; Hu, Jing; Zhu, Songsong

    2015-03-01

    Computer-aided jaw surgery has been studied extensively in recent years. The purpose of this study was to determine the clinical feasibility of performing bimaxillary orthognathic surgery without an intermediate splint using virtual surgical planning and rapid prototyping technology. Twelve consecutive patients who underwent bimaxillary orthognathic surgery were included. The treatment plan presented here consists mainly of 6 procedures: (1) data acquisition from computed tomography (CT) of the skull and laser scanning of the dentition; (2) reconstruction and fusion of a virtual skull model with accurate dentition; (3) virtual surgery simulation including osteotomy and movement and repositioning of bony segments; (4) final surgical splint fabrication (no intermediate splint) using computer-aided design and rapid prototyping technology; (5) transfer of the virtual surgical plan to the operating room; and (6) comparison of the actual surgical outcome to the virtual surgical plan. All procedures were performed successfully on all 12 patients. Quantifying the differences between the simulated and actual postoperative outcomes, we found that the mean linear difference was less than 1.8 mm and the mean angular difference was less than 2.5 degrees in all evaluated patients. The results suggest that it is feasible to perform bimaxillary orthognathic surgery without an intermediate splint. Virtual surgical planning and the guiding splints facilitated diagnosis, treatment planning, accurate osteotomy, and repositioning of bony segments in orthognathic surgery.

  17. Research on the effects of urbanization on small stream flow quantity

    DOT National Transportation Integrated Search

    1978-12-01

    This study is a preliminary investigation into the feasibility of using simple techniques to evaluate the effects of urbanization on flood flows in small streams. A number of regression techniques and computer simulation techniques were evaluated, an...

  18. What Fermenter?

    ERIC Educational Resources Information Center

    Terry, John

    1987-01-01

    Discusses the feasibility of using fermenters in secondary school laboratories. Includes discussions of equipment, safety, and computer interfacing. Describes how a simple fermenter could be used to simulate large-scale processes. Concludes that, although teachers and technicians will require additional training, the prospects for biotechnology in…

  19. GATE Monte Carlo simulation in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be reduced to clinically feasible levels without the sizable investment of a local high-performance cluster. This study investigated reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data were initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by cluster size, and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. This improvement in simulation speed suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will become ever more attractive.
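
    The reported runtime scaling is easy to reproduce in miniature: fit an inverse power model T(N) = a * N^(-b) to runtimes measured at each cluster size. The numbers below are hypothetical stand-ins, not the paper's measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(4)
    nodes = np.arange(1, 21)                       # cluster sizes 1..20
    runtime = 53.0 * nodes ** -0.95 + rng.normal(0, 0.3, nodes.size)  # stand-ins

    def model(N, a, b):
        return a * N ** (-b)                       # inverse power model T(N)

    (a, b), _ = curve_fit(model, nodes, runtime, p0=(50.0, 1.0))
    print(f"T(N) ~ {a:.1f} * N^(-{b:.2f}); T(20) ~ {model(20, a, b):.2f} min")
    ```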

  20. Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines

    NASA Astrophysics Data System (ADS)

    Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.

    2016-12-01

    Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model while satisfying the requirements of flexibility, accuracy, and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. It also allows us to perform global sensitivity analysis with ease. Finally, we present an approach to calibrating simulators using field data.

  1. Lightweight, low compression aircraft diesel engine. [converting a spark ignition engine to the diesel cycle

    NASA Technical Reports Server (NTRS)

    Gaynor, T. L.; Bottrell, M. S.; Eagle, C. D.; Bachle, C. F.

    1977-01-01

    The feasibility of converting a spark-ignition aircraft engine to the diesel cycle was investigated. Procedures necessary for converting a single-cylinder GTSIO-520 are described, as well as a single-cylinder diesel engine test program. The modification of the engine for the hot-port cooling concept is discussed. A digital computer graphics simulation of a twin-engine aircraft incorporating the diesel engine and hot-port concept is presented, showing some potential gains in aircraft performance. Sample results of the computer program used in the simulation are included.

  2. Application of digital control to a magnetic model suspension and balance model

    NASA Technical Reports Server (NTRS)

    Luh, P. B.; Covert, E. E.; Whitaker, H. P.; Haldeman, C. W.

    1978-01-01

    The feasibility of using a digital computer for performing the automatic control functions for a magnetic suspension and balance system (MSBS) for use with wind tunnel models was investigated. Modeling was done using both a prototype MSBS and a one dimensional magnetic balance. A microcomputer using the Intel 8080 microprocessor is described and results are given using this microprocessor to control the one dimensional balance. Hybrid simulations for one degree of freedom of the MSBS were also performed and are reported. It is concluded that use of a digital computer to control the MSBS is eminently feasible and should extend both the accuracy and utility of the system.

  3. Assessing Functional Performance using Computer-Based Simulations of Everyday Activities

    PubMed Central

    Czaja, Sara J.; Loewenstein, David A.; Lee, Chin Chin; Fu, Shih Hua; Harvey, Philip D.

    2016-01-01

    Current functional capacity (FC) measures for patients with schizophrenia typically involve informant assessments or are in paper and pencil format, requiring in-person administration by a skilled assessor. This approach presents logistic problems and limits the possibilities for remote assessment, an important issue for these patients. This study evaluated the feasibility of using a computer-based assessment battery, including simulations of everyday activities. The battery was compared to in-person standard assessments of cognition and FC with respect to baseline convergence and sensitivity to group differences. The battery, administered on a touch screen computer, included measures of critical everyday activities, including: ATM Banking/Financial Management, Prescriptions Refill via Telephone/Voice Menu System, and Forms Completion (simulating a clinic and patient history form). The sample included 77 older adult patients with schizophrenia and 24 older adult healthy controls that were administered the battery at two time points. The results indicated that the battery was sensitive to group differences in FC. Performance on the battery was also moderately correlated with standard measures of cognitive abilities and showed convergence with standard measures of FC, while demonstrating good test-retest reliability. Our results show that it is feasible to use technology-based assessment protocols with older adults and patients with schizophrenia. The battery overcomes logistic constraints associated with current FC assessment protocols as the battery is computer-based, can be delivered remotely and does not require a healthcare professional for administration. PMID:27913159

  4. Optical solver of combinatorial problems: nanotechnological approach.

    PubMed

    Cohen, Eyal; Dolev, Shlomi; Frenkel, Sergey; Kryzhanovsky, Boris; Palagushkin, Alexandr; Rosenblit, Michael; Zakharov, Victor

    2013-09-01

    We present an optical computing system to solve NP-hard problems. As nano-optical computing is a promising avenue for the next generation of computers performing parallel computations, we investigate the application of submicron, or even subwavelength, computing device designs. The system utilizes a setup of exponential-sized masks with exponential space complexity, produced in polynomial-time preprocessing. The masks are later used to solve the problem in polynomial time. The size of the masks is reduced to nanoscale density. Simulations were carried out to choose a proper design, and actual implementations show the feasibility of such a system.

  5. Radar multipath study for rain-on-radome experiments at the Aircraft Landing Dynamics Facility

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Staton, Leo D.

    1990-01-01

    An analytical study to determine the feasibility of a rain-on-radome experiment at the Aircraft Landing Dynamics Facility (ALDF) at the Langley Research Center is described. The experiment would measure the effects of heavy rain on the transmission of X-band weather radar signals, looking in particular for sources of anomalous attenuation. Feasibility is determined with regard to multipath signals arising from the major structural components of the ALDF. A computer program simulates the transmit and receive antennas, direct-path and multipath signals, and expected attenuation by rain. In the simulation, antenna height, signal polarization, and rainfall rate are variable parameters. The study shows that the rain-on-radome experiment is feasible with regard to multipath signals. The total received signal, taking into account multipath effects, could be measured by commercially available equipment. The study also shows that horizontally polarized signals would produce better experimental results than vertically polarized signals.

  6. Concept for a Satellite-Based Advanced Air Traffic Management System : Volume 9. System and Subsystem Performance Models.

    DOT National Transportation Integrated Search

    1973-02-01

    The volume presents the models used to analyze basic features of the system, establish feasibility of techniques, and evaluate system performance. The models use analytical expressions and computer simulations to represent the relationship between sy...

  7. Good coupling for the multiscale patch scheme on systems with microscale heterogeneity

    NASA Astrophysics Data System (ADS)

    Bunder, J. E.; Roberts, A. J.; Kevrekidis, I. G.

    2017-05-01

    Computational simulation of microscale detailed systems is frequently only feasible over spatial domains much smaller than the macroscale of interest. The 'equation-free' methodology couples many small patches of microscale computations across space to empower efficient computational simulation over macroscale domains of interest. Motivated by molecular or agent simulations, we analyse the performance of various coupling schemes for patches when the microscale is inherently 'rough'. As a canonical problem in this universality class, we systematically analyse the case of heterogeneous diffusion on a lattice. Computer algebra explores how the dynamics of coupled patches predict the large scale emergent macroscale dynamics of the computational scheme. We determine good design for the coupling of patches by comparing the macroscale predictions from patch dynamics with the emergent macroscale on the entire domain, thus minimising the computational error of the multiscale modelling. The minimal error on the macroscale is obtained when the coupling utilises averaging regions which are between a third and a half of the patch. Moreover, when the symmetry of the inter-patch coupling matches that of the underlying microscale structure, patch dynamics predicts the desired macroscale dynamics to any specified order of error. The results confirm that the patch scheme is useful for macroscale computational simulation of a range of systems with microscale heterogeneity.
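
    A toy version of the patch scheme for 1-D heterogeneous diffusion conveys the coupling idea: only small patches are simulated at the microscale, and each patch's edge values are interpolated from neighbouring patches' core averages. All parameters below are assumptions chosen for illustration, with the averaging region set to roughly a third of the patch, in line with the paper's recommendation:

    ```python
    import numpy as np

    # Toy 1-D equation-free patch scheme for heterogeneous diffusion.
    N, n = 8, 11                       # number of patches; micro points per patch
    D = 2 * np.pi / N                  # patch spacing on a periodic domain
    dx = 0.2 * D / (n - 1)             # micro spacing: each patch spans 20% of D
    h = (n - 1) / 2 * dx               # patch half-width
    Xc = D * np.arange(N)              # patch centres
    xi = Xc[:, None] + dx * (np.arange(n) - (n - 1) / 2)   # micro grids

    # 'Rough' microscale diffusivity at cell faces (period-3 heterogeneity)
    kappa = 1.0 + 0.8 * np.cos(2 * np.pi * np.arange(n - 1) / 3)
    u = 1.0 + np.sin(xi)               # initial field sampled on the patches
    core = slice(n // 3, n - n // 3)   # averaging region ~ a third of the patch

    dt = 0.4 * dx ** 2 / kappa.max()
    r = h / D
    for step in range(400):
        # Coupling: patch edge values from quadratic interpolation of the
        # neighbouring patches' core averages (the emergent macroscale field)
        U = u[:, core].mean(axis=1)
        Up, Um = np.roll(U, -1), np.roll(U, 1)
        u[:, 0] = U - r * (Up - Um) / 2 + r ** 2 * (Up - 2 * U + Um) / 2
        u[:, -1] = U + r * (Up - Um) / 2 + r ** 2 * (Up - 2 * U + Um) / 2
        # Micro step: explicit Euler for du/dt = d/dx(kappa du/dx) in patches
        flux = kappa * (u[:, 1:] - u[:, :-1]) / dx
        u[:, 1:-1] += dt * (flux[:, 1:] - flux[:, :-1]) / dx
    ```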

  8. [Application of 3D printing and computer-assisted surgical simulation in preoperative planning for acetabular fracture].

    PubMed

    Liu, Xin; Zeng, Can-Jun; Lu, Jian-Sen; Lin, Xu-Chen; Huang, Hua-Jun; Tan, Xin-Yu; Cai, Dao-Zhang

    2017-03-20

    To evaluate the feasibility and effectiveness of using 3D printing and computer-assisted surgical simulation in preoperative planning for acetabular fractures, a retrospective analysis was performed in 53 patients with pelvic fracture who underwent surgical treatment between September 2013 and December 2015 and had complete follow-up data. Among them, 19 patients were treated preoperatively with CT three-dimensional reconstruction, computer-assisted virtual reduction and internal fixation, 3D model printing, and personalized surgery simulation (3D group), and 34 patients underwent routine preoperative examination (conventional group). The intraoperative blood loss, transfusion volume, number of intraoperative X-rays, operation time, Matta score, and Merle d'Aubigné-Postel score were recorded in the 2 groups, and preoperative planning and postoperative outcomes were compared between them. All the operations were completed successfully. In the 3D group, significantly less intraoperative blood loss, less transfusion volume, fewer X-rays, and shorter operation time were recorded compared with the conventional group (P<0.05). According to the Matta scores, excellent or good fracture reduction was achieved in 94.7% (18/19) of the patients in the 3D group and in 82.4% (28/34) of the patients in the conventional group; the rates of excellent and good hip function at the final follow-up were 89.5% (17/19) in the 3D group and 85.3% (29/34) in the conventional group (P>0.05). In the 3D group, the actual internal fixation matched the preoperative design well. 3D printing and computer-assisted surgical simulation for preoperative planning is feasible and accurate for the management of acetabular fractures and can effectively improve operative efficiency.

  9. Applicability of APT aided-inertial system to crustal movement monitoring

    NASA Technical Reports Server (NTRS)

    Soltz, J. A.

    1978-01-01

    The APT system, its stage of development, hardware, and operations are described. The algorithms required to perform the real-time functions of navigation and profiling are presented. The results of computer simulations demonstrate the feasibility of APT for its primary mission: topographic mapping with an accuracy of 15 cm in the vertical. Also discussed is the suitability of modifying APT for the purpose of making vertical crustal movement measurements accurate to 2 cm in the vertical, and at least marginal feasibility is indicated.

  10. Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Zubair, Mohammad

    1993-01-01

    The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube are documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can be parallelized effectively on a distributed-memory parallel machine. As the number of processors increases, nearly ideal linear speedups are achieved with nonoptimized routines; slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup results because the Fast Fourier Transform (FFT) routine dominates the computational cost and itself exhibits less-than-ideal speedup. However, with the machine-dependent routines, the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of single-processor Cray supercomputer time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and turns the computation into a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 over the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.

  11. A PC-based simulation of the National Transonic Facility's safety microprocessor

    NASA Technical Reports Server (NTRS)

    Thibodeaux, J. J.; Kilgore, W. A.; Balakrishna, S.

    1993-01-01

    A brief study was undertaken to demonstrate the feasibility of using a state-of-the-art, off-the-shelf, high-speed personal computer to simulate a microprocessor presently used for wind tunnel safety purposes at Langley Research Center's National Transonic Facility (NTF). Currently, there is no active display of tunnel alarm/alert safety information provided to the tunnel operators; such information is instead periodically recorded on a process-monitoring computer printout. This neither provides on-line situational information nor permits rapid identification of safety violations, which can halt tunnel operations. It was therefore decided to simulate the existing algorithms and briefly evaluate a real-time display that could provide both position and troubleshooting information.

  12. Adaptation of a program for nonlinear finite element analysis to the CDC STAR 100 computer

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Ogilvie, P. L.

    1978-01-01

    The conversion of a nonlinear finite element program to the CDC STAR 100 pipeline computer is discussed. The program, called DYCAST, was developed for the crash simulation of structures. Initial results with the STAR 100 computer indicated that significant gains in computation time are possible for operations on global arrays. However, for element-level computations that do not lend themselves easily to long-vector processing, the STAR 100 was slower than comparable scalar computers. On this basis it is concluded that, for pipeline computers to affect the economic feasibility of large nonlinear analyses, it is essential that algorithms be devised to improve the efficiency of element-level computations.

  13. High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn

    2014-11-14

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method using two popular serial libraries, and application to numerous science datasets.

  14. Clustering Molecular Dynamics Trajectories for Optimizing Docking Experiments

    PubMed Central

    De Paris, Renata; Quevedo, Christian V.; Ruiz, Duncan D.; Norberto de Souza, Osmar; Barros, Rodrigo C.

    2015-01-01

    Molecular dynamics simulations of protein receptors have become an attractive tool for rational drug discovery. However, the high computational cost of employing molecular dynamics trajectories in virtual screening of large repositories threatens the feasibility of this task. Computational intelligence techniques have been applied in this context, with the ultimate goal of reducing the overall computational cost so the task can become feasible. In particular, clustering algorithms have been widely used as a means to reduce the dimensionality of molecular dynamics trajectories. In this paper, we develop a novel methodology for clustering entire trajectories using structural features from the substrate-binding cavity of the receptor in order to optimize docking experiments on a cloud-based environment. The resulting partition was selected based on three clustering validity criteria, and it was further validated by analyzing the interactions between 20 ligands and a fully flexible receptor (FFR) model containing a 20 ns molecular dynamics simulation trajectory. Our proposed methodology shows that taking into account features of the substrate-binding cavity as input for the k-means algorithm is a promising technique for accurately selecting ensembles of representative structures tailored to a specific ligand. PMID:25873944

  15. GPU-based prompt gamma ray imaging from boron neutron capture therapy.

    PubMed

    Yoon, Do-Kun; Jung, Joo-Young; Jo Hong, Key; Sil Lee, Keum; Suk Suh, Tae

    2015-01-01

    The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.
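
    The ordered-subset EM update underlying the reconstruction is compact enough to sketch. The following generic OSEM loop uses a random stand-in system matrix rather than the BNCT prompt-gamma geometry, and omits the paper's modifications:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy system matrix (detector bins x image voxels) replaces the BNCT
    # prompt-gamma geometry; p holds the noisy measured projections
    n_bins, n_vox = 240, 100
    A = rng.uniform(0, 1, (n_bins, n_vox))
    x_true = rng.uniform(0, 1, n_vox)
    p = rng.poisson(A @ x_true).astype(float)

    x = np.ones(n_vox)                                # uniform initial image
    subsets = np.array_split(rng.permutation(n_bins), 8)
    for iteration in range(5):
        for s in subsets:                             # one EM step per subset
            As = A[s]
            ratio = p[s] / np.maximum(As @ x, 1e-12)  # measured / forward model
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    ```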

  16. Blood flow in intracranial aneurysms treated with Pipeline embolization devices: computational simulation and verification with Doppler ultrasonography on phantom models

    PubMed Central

    2015-01-01

    Purpose: The aim of this study was to validate a computational fluid dynamics (CFD) simulation of flow-diverter treatment through Doppler ultrasonography measurements in patient-specific models of intracranial bifurcation and side-wall aneurysms. Methods: Computational and physical models of patient-specific bifurcation and sidewall aneurysms were constructed from computed tomography angiography with use of stereolithography, a three-dimensional printing technology. Flow dynamics parameters before and after flow-diverter treatment were measured with pulse-wave and color Doppler ultrasonography, and then compared with CFD simulations. Results: CFD simulations showed drastic flow reduction after flow-diverter treatment in both aneurysms. The mean volume flow rate decreased by 90% and 85% for the bifurcation aneurysm and the side-wall aneurysm, respectively. Velocity contour plots from computer simulations before and after flow diversion closely resembled the patterns obtained by color Doppler ultrasonography. Conclusion: The CFD estimation of flow reduction in aneurysms treated with a flow-diverting stent was verified by Doppler ultrasonography in patient-specific phantom models of bifurcation and side-wall aneurysms. The combination of CFD and ultrasonography may constitute a feasible and reliable technique in studying the treatment of intracranial aneurysms with flow-diverting stents. PMID:25754367

  17. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial-scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate the findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming the results of the simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, a concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based services to all the iron ore mining companies operating in northern Minnesota, and future proposals are pending for non-taconite mineral processing applications.

  18. Dynamic Deployment Simulations of Inflatable Space Structures

    NASA Technical Reports Server (NTRS)

    Wang, John T.

    2005-01-01

    The feasibility of using the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method in LS-DYNA to simulate the dynamic deployment of inflatable space structures is investigated. The CV and ALE methods were used to predict the inflation deployments of three folded-tube configurations. The CV method was found to be simple and computationally efficient, and may be adequate for modeling slow inflation deployment since the inertia of the inflation gas can be neglected. The ALE method was found to be very computationally intensive, since it involves solving three conservation equations for the fluid as well as dealing with complex fluid-structure interactions.

  19. GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.

    PubMed

    Liu, Yangchuan; Tang, Yuguo; Gao, Xin

    2017-12-01

    The GATE Monte Carlo simulation platform has good application prospects for treatment planning and quality assurance. However, accurate dose calculation using GATE is time consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distribution using MapReduce. An Amazon Machine Image installed with Hadoop and GATE is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server, and the simulation correctness is not affected by the failure of some worker nodes, confirming Hadoop's fault tolerance. The cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, time decreases of 41× and 32× were achieved compared to the single-worker-node case and the single-threaded case, respectively. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
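
    The map/reduce structure is straightforward to sketch. Below, Python's `multiprocessing` stands in for Hadoop Streaming, and `run_gate` is a toy placeholder for executing one self-contained sub-macro; only the split/map/aggregate pattern mirrors the paper:

    ```python
    import numpy as np
    from multiprocessing import Pool

    def run_gate(args):
        """Toy placeholder for executing one self-contained GATE sub-macro."""
        seed, n_photons = args
        rng = np.random.default_rng(seed)           # independent RNG per task
        dose = np.zeros(50)                          # 1-D dose grid stand-in
        depths = rng.exponential(10.0, n_photons)    # stand-in photon transport
        np.add.at(dose, np.clip(depths.astype(int), 0, 49), 1.0)
        return dose

    if __name__ == "__main__":
        total, n_tasks = 10_000_000, 64
        tasks = [(seed, total // n_tasks) for seed in range(n_tasks)]  # split
        with Pool() as pool:                         # "map" phase
            partial = pool.map(run_gate, tasks)
        dose = np.sum(partial, axis=0)               # "reduce": aggregate outputs
    ```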

  20. A dynamical-systems approach for computing ice-affected streamflow

    USGS Publications Warehouse

    Holtschlag, David J.

    1996-01-01

    A dynamical-systems approach was developed and evaluated for computing ice-affected streamflow. The approach provides for dynamic simulation and parameter estimation of site-specific equations relating ice effects to routinely measured environmental variables. Comparison indicates that results from the dynamical-systems approach ranked higher than results from 11 analytical methods previously investigated on the basis of accuracy and feasibility criteria. Additional research will likely lead to further improvements in the approach.

  1. Video Guidance, Landing, and Imaging system (VGLIS) for space missions

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Knickerbocker, R. L.; Tietz, J. C.; Grant, C.; Flemming, J. C.

    1975-01-01

    The feasibility of an autonomous video guidance system that is capable of observing a planetary surface during terminal descent and selecting the most acceptable landing site was demonstrated. The system was breadboarded and "flown" on a physical simulator consisting of a control panel and monitor, a dynamic simulator, and a PDP-9 computer. The breadboard VGLIS consisted of an image dissector camera and the appropriate processing logic. Results are reported.

  2. Development of process control capability through the Browns Ferry Integrated Computer System using Reactor Water Clanup System as an example. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, J.; Mowrey, J.

    1995-12-01

    This report describes the design, development, and testing of process controls for selected system operations in the Browns Ferry Nuclear Plant (BFNP) Reactor Water Cleanup System (RWCU), using a computer simulation platform that simulates the RWCU System and the BFNP Integrated Computer System (ICS). This system was designed to demonstrate the feasibility of soft control (video touch screen) of nuclear plant systems through an operator console. The BFNP Integrated Computer System, which has recently been installed at BFNP Unit 2, was simulated to allow operator control functions of the modeled RWCU system. The BFNP Unit 2 RWCU system was simulated using the RELAP5 thermal/hydraulic simulation model, which provided the steady-state and transient RWCU process variables and simulated the response of the system to control system inputs. Descriptions of the hardware and software developed are also included in this report, as are the testing and acceptance program and its results. A discussion of the potential installation of an actual RWCU process control system in BFNP Unit 2 is included. Finally, the report contains a section on industry issues associated with the installation of process control systems in nuclear power plants.

  3. Simulating and Synthesizing Substructures Using Neural Network and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.; VanLandingham, Hugh F.

    1997-01-01

    The feasibility of simulating and synthesizing substructures with computational neural network models is illustrated by investigating a statically indeterminate beam, using both 1-D and 2-D plane-stress models. The beam can be decomposed into two cantilevers with free-end loads. By training neural networks to simulate the cantilever responses to different loads, the original beam problem can be solved as a match-up between two subsystems under compatible interface conditions. Genetic algorithms are successfully used to solve the match-up problem. Simulated results are found to be in good agreement with the analytical or FEM solutions.

  4. EVALUATION OF PHYSIOLOGY COMPUTER MODELS, AND THE FEASIBILITY OF THEIR USE IN RISK ASSESSMENT.

    EPA Science Inventory

    This project will evaluate the current state of quantitative models that simulate physiological processes, and the how these models might be used in conjunction with the current use of PBPK and BBDR models in risk assessment. The work will include a literature search to identify...

  5. GFDL's unified regional-global weather-climate modeling system with variable resolution capability for severe weather predictions and regional climate simulations

    NASA Astrophysics Data System (ADS)

    Lin, S. J.

    2015-12-01

    The NOAA/Geophysical Fluid Dynamics Laboratory has been developing a unified regional-global modeling system with variable-resolution capabilities that can be used for severe weather predictions (e.g., tornado outbreak events and category-5 hurricanes) and ultra-high-resolution (1-km) regional climate simulations within a consistent global modeling framework. The foundation of this flexible regional-global modeling system is the non-hydrostatic extension of the vertically Lagrangian dynamical core (Lin 2004, Monthly Weather Review), known in the community as FV3 (finite-volume on the cubed-sphere). Because of its flexibility and computational efficiency, the FV3 is one of the final candidates for NOAA's Next Generation Global Prediction System (NGGPS). We have built into the modeling system a stretched (single) grid capability, a two-way (regional-global) multiple nested grid capability, and the combination of stretched and two-way nests, so as to make convection-resolving regional climate simulation within a consistent global modeling system feasible on today's high-performance computing systems. One of our main scientific goals is to enable simulations of high-impact weather phenomena (such as tornadoes, thunderstorms, and category-5 hurricanes) within an IPCC-class climate modeling system, something previously regarded as impossible. In this presentation I will demonstrate that it is computationally feasible to simulate not only supercell thunderstorms but also the subsequent genesis of tornadoes using a global model that was originally designed for century-long climate simulations. As a unified weather-climate modeling system, the model was evaluated with horizontal resolutions ranging from 1 km to as low as 200 km. In particular, for downscaling studies, we have developed various tests to ensure that the large-scale circulation within the global variable-resolution system is well simulated while the small scales are accurately captured within the targeted high-resolution region.

  6. Feasibility of using Extreme Ultraviolet Explorer (EUVE) reaction wheels to satisfy Space Infrared Telescope Facility (SIRTF) maneuver requirements

    NASA Technical Reports Server (NTRS)

    Lightsey, W. D.

    1990-01-01

    A digital computer simulation is used to determine whether the Extreme Ultraviolet Explorer (EUVE) reaction wheels can provide sufficient torque and momentum storage capability to meet the Space Infrared Telescope Facility (SIRTF) maneuver requirements. A brief description of the pointing control system (PCS) and of the sensor and actuator dynamic models used in the simulation is presented. A model to represent a disturbance such as fluid sloshing is developed. Results obtained with the simulation and a discussion of these results are presented.

  7. Large Eddy Simulation in the Computation of Jet Noise

    NASA Technical Reports Server (NTRS)

    Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.

    1999-01-01

    In principle, noise can be predicted by solving the full (time-dependent) compressible Navier-Stokes equations with the computational domain extended to the far field. The fluctuating near field of the jet produces propagating pressure waves that generate far-field sound, so the fluctuating flow field as a function of time is needed in order to calculate sound from first principles. However, this direct approach is not feasible: at the high Reynolds numbers of technological interest, turbulence has a large range of scales, and direct numerical simulations (DNS) cannot capture the small scales of turbulence. Since the large scales are more efficient than the small scales in radiating sound, the emphasis is thus on calculating the sound radiated by the large scales.

  8. Feasibility study for a numerical aerodynamic simulation facility. Volume 3: FMP language specification/user manual

    NASA Technical Reports Server (NTRS)

    Kenner, B. G.; Lincoln, N. R.

    1979-01-01

    The manual is intended to show the revisions and additions to the current STAR FORTRAN. The changes are made to incorporate an FMP (Flow Model Processor) for use in the Numerical Aerodynamic Simulation Facility (NASF) for the purpose of simulating fluid flow over three-dimensional bodies in wind tunnel environments and in free space. The FORTRAN programming language for the STAR-100 computer contains both CDC and unique STAR extensions to standard FORTRAN. Several of the STAR FORTRAN extensions allow the user to exploit the vector processing capabilities of the STAR computer. In STAR FORTRAN, vectors can be expressed with an explicit notation, functions are provided that return vector results, and special call statements enable access to any machine instruction.

  9. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    NASA Astrophysics Data System (ADS)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general-purpose processors. Using synthesis targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is affected both positively and negatively by a reduction in the precision of the computation: reducing precision reduces the coding gain, but the limited-precision computation can operate faster. The proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help estimate the full capacity and performance of an FPGA-based coprocessor.

  10. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    PubMed

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.
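
    For orientation, the balanced transportation problem that the Shortlist Method accelerates can be written as a plain linear program. The baseline below uses `scipy.optimize.linprog` on random data; it is the reference formulation, not the Shortlist Method itself:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(6)
    m, n = 5, 7                              # number of sources and sinks
    supply = rng.uniform(1, 3, m)
    demand = rng.uniform(1, 3, n)
    demand *= supply.sum() / demand.sum()    # balance total mass
    cost = rng.uniform(0, 1, (m, n))         # unit transport costs c_ij

    # Equalities on row-major x: row sums = supply, column sums = demand
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1       # mass leaving source i
    for j in range(n):
        A_eq[m + j, j::n] = 1                # mass arriving at sink j
    b_eq = np.concatenate([supply, demand])

    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    emd = res.fun / supply.sum()             # normalised cost = Earth Mover's Distance
    print(emd)
    ```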

  11. The Shortlist Method for Fast Computation of the Earth Mover's Distance and Finding Optimal Solutions to Transportation Problems

    PubMed Central

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method. PMID:25310106

  12. Computational analysis of nonlinearities within dynamics of cable-based driving systems

    NASA Astrophysics Data System (ADS)

    Anghelache, G. D.; Nastac, S.

    2017-08-01

    This paper deals with the computational nonlinear dynamics of mechanical systems that contain flexural parts within the actuating scheme, treating in particular cable-based driving systems. Both functional nonlinearities and the real characteristic of the power supply were assumed, in order to obtain a realistic computer simulation model able to provide feasible results regarding the system dynamics. The transitory and stable regimes during a regular operating cycle were taken into account. The authors present the particular case of a lift system, taken as representative for the objective of this study. The simulations were based on values of the essential parameters acquired from experimental tests and/or from regular practice in the field. The analysis of the results reveals correlated dynamic aspects of the mechanical parts, the driving system, and the power supply, all of which are potential sources of particular resonances within some transitory phases of the working cycle and can thus affect structural and functional dynamics. In addition, the influence of the computational hypotheses on both the quantitative and qualitative behaviour of the system is underlined. The most significant outcome of this theoretical and computational research is the development of a unified and feasible model, useful for highlighting nonlinear dynamic effects in systems with cable-based driving schemes and thereby helping to optimize the operating regime, including dynamic control measures.

  13. A shuttle and space station manipulator system for assembly, docking, maintenance, cargo handling and spacecraft retrieval (preliminary design). Volume 4: Simulation studies

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Laboratory simulations of three concepts, based on maximum use of available off-the-shelf hardware elements, are described. The concepts are a stereo-foveal-peripheral TV system with symmetric stereoscopic split-image registration and 90 deg counter-rotation; a computer-assisted model control system termed the trajectory-following control system; and active manipulator damping. It is concluded that the feasibility of these concepts is established.

  14. Update on Controlling Herds of Cooperative Robots

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco; Chang, Johnny

    2007-01-01

    A document presents further information on the subject matter of "Controlling Herds of Cooperative Robots". The document describes the results of the computational simulations of a one-blimp, three-surface-sonde herd in various operational scenarios, including sensitivity studies as a function of distributed communication and processing delays between the sondes and the blimp. From results of the simulations, it is concluded that the methodology is feasible, even if there are significant uncertainties in the dynamical models.

  15. Patient-Specific Simulation of Cardiac Blood Flow From High-Resolution Computed Tomography.

    PubMed

    Lantz, Jonas; Henriksson, Lilian; Persson, Anders; Karlsson, Matts; Ebbers, Tino

    2016-12-01

    Cardiac hemodynamics can be computed from medical imaging data, and results could potentially aid in cardiac diagnosis and treatment optimization. However, simulations are often based on simplified geometries, ignoring features such as papillary muscles and trabeculae due to their complex shape, limitations in image acquisitions, and challenges in computational modeling. This severely hampers the use of computational fluid dynamics in clinical practice. The overall aim of this study was to develop a novel numerical framework that incorporated these geometrical features. The model included the left atrium, ventricle, ascending aorta, and heart valves. The framework used image registration to obtain patient-specific wall motion, automatic remeshing to handle topological changes due to the complex trabeculae motion, and a fast interpolation routine to obtain intermediate meshes during the simulations. Velocity fields and residence time were evaluated, and they indicated that papillary muscles and trabeculae strongly interacted with the blood, which could not be observed in a simplified model. The framework resulted in a model with outstanding geometrical detail, demonstrating the feasibility as well as the importance of a framework that is capable of simulating blood flow in physiologically realistic hearts.

  16. GPU-based prompt gamma ray imaging from boron neutron capture therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Do-Kun; Jung, Joo-Young; Suk Suh, Tae, E-mail: suhsanta@catholic.ac.kr

    Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusions: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.

  17. TU-FG-BRB-07: GPU-Based Prompt Gamma Ray Imaging From Boron Neutron Capture Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S; Suh, T; Yoon, D

    Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusion: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray reconstruction using the GPU computation for BNCT simulations.

  18. Metabolic reconstruction, constraint-based analysis and game theory to probe genome-scale metabolic networks.

    PubMed

    Ruppin, Eytan; Papin, Jason A; de Figueiredo, Luis F; Schuster, Stefan

    2010-08-01

    With the advent of modern omics technologies, it has become feasible to reconstruct (quasi-) whole-cell metabolic networks and characterize them in more and more detail. Computer simulations of the dynamic behavior of such networks are difficult due to a lack of kinetic data and to computational limitations. In contrast, network analysis based on appropriate constraints such as the steady-state condition (constraint-based analysis) is feasible and allows one to derive conclusions about the system's metabolic capabilities. Here, we review methods for the reconstruction of metabolic networks, modeling techniques such as flux balance analysis and elementary flux modes and current progress in their development and applications. Game-theoretical methods for studying metabolic networks are discussed as well. Copyright © 2010 Elsevier Ltd. All rights reserved.
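
    Constraint-based analysis of the kind reviewed here reduces, in its simplest form (flux balance analysis), to a linear program: maximize an objective flux subject to the steady-state condition S v = 0 and flux bounds. A three-reaction toy network illustrates this; the network and bounds are invented for the example:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network:  R1: uptake -> A,  R2: A -> B,  R3 (biomass): B ->
    S = np.array([[1, -1,  0],    # steady-state balance of metabolite A
                  [0,  1, -1]])   # steady-state balance of metabolite B
    bounds = [(0, 10), (0, 10), (0, 10)]   # irreversible fluxes, capped uptake
    c = np.array([0, 0, -1])               # linprog minimises, so use -v3

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print("optimal biomass flux:", -res.fun, "fluxes:", res.x)
    ```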

  19. A method of semi-quantifying β-AP in brain PET-CT 11C-PiB images.

    PubMed

    Jiang, Jiehui; Lin, Xiaoman; Wen, Junlin; Huang, Zhemin; Yan, Zhuangzhi

    2014-01-01

    Alzheimer's disease (AD) is a common health problem in elderly populations. Positron emission tomography-computed tomography (PET-CT) 11C-PiB imaging of amyloid-β peptide (β-AP) is an advanced method for diagnosing AD at an early stage. In practice, however, radiologists lack a standardized value for semi-quantifying β-AP. This paper proposes such a standardized value, SVβ-AP, which measures the mean ratio between the extent of β-AP areas in PET images and the corresponding regions in CT images. A computer-aided diagnosis (CAD) approach to computing SVβ-AP is also proposed. A simulation experiment was carried out to pre-test the technical feasibility of the CAD approach and SVβ-AP, and the results showed that it is technically feasible.
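
    One plausible reading of the proposed semi-quantifier, sketched below on synthetic volumes: per slice, divide the area of PiB-positive PET voxels by the anatomical region area from the co-registered CT, then average over slices. The threshold and masks are assumptions, not the paper's protocol:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    pet = rng.uniform(0.8, 2.2, (20, 64, 64))          # SUVR-like PET volume (toy)
    ct_region = rng.uniform(0, 1, (20, 64, 64)) > 0.3  # region mask from CT (toy)
    threshold = 1.5                                    # PiB-positivity cutoff (assumed)

    ratios = []
    for pet_slice, mask in zip(pet, ct_region):
        bap_area = np.logical_and(pet_slice > threshold, mask).sum()
        ratios.append(bap_area / mask.sum())           # β-AP area / region area
    sv_bap = float(np.mean(ratios))                    # the SVβ-AP value
    print(sv_bap)
    ```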

  20. Financial feasibility of marker-aided selection in Douglas-fir.

    Treesearch

    G.R. Johnson; N.C. Wheeler; S.H. Strauss

    2000-01-01

    The land area required for a marker-aided selection (MAS) program to break even (i.e., have equal costs and benefits) was estimated using computer simulation for coastal Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) in the Pacific Northwestern United States. We compared the selection efficiency obtained when using an index that included the...

  1. Left ventricular fluid mechanics: the long way from theoretical models to clinical applications.

    PubMed

    Pedrizzetti, Gianni; Domenichini, Federico

    2015-01-01

    The flow inside the left ventricle is characterized by the formation of vortices that smoothly accompany blood from the mitral inlet to the aortic outlet. Computational fluid dynamics has permitted researchers to shed some light on the fundamental processes involved in vortex motion. More recently, patient-specific numerical simulations are becoming an increasingly feasible tool that can be integrated with the developing imaging technologies. The existing computational methods are reviewed in the perspective of their potential role as a novel aid for advanced clinical analysis. The current results obtained by simulation methods, either alone or in combination with medical imaging, are summarized. Open problems are highlighted and prospective clinical applications are discussed.

  2. Digital hardware implementation of a stochastic two-dimensional neuron model.

    PubMed

    Grassia, F; Kohno, T; Levi, T

    2016-11-01

    This study explores the feasibility of stochastic neuron simulation in digital systems (FPGA), realizing an implementation of a two-dimensional neuron model. The stochasticity is added by a source of current noise in the silicon neuron using an Ornstein-Uhlenbeck process. This approach uses digital computation to emulate individual neuron behavior using fixed-point arithmetic operations. The neuron model's computations are performed in arithmetic pipelines. It was designed in the VHDL language and simulated prior to mapping onto the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the implementation of the silicon neuron more biologically plausible for future hybrid experiments.
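
    The Ornstein-Uhlenbeck noise current can be discretized with an Euler-Maruyama step and carried in fixed-point arithmetic, which is the kind of operation an FPGA pipeline implements cheaply. A hedged Python sketch follows; the Q4.12 scaling and all parameter values are illustrative assumptions, not the paper's.

      import numpy as np

      FRAC_BITS = 12
      SCALE = 1 << FRAC_BITS                    # Q4.12-style fixed point

      def to_fixed(v): return int(round(v * SCALE))

      dt, tau, sigma, mu = 0.1, 5.0, 0.3, 0.0   # illustrative OU parameters
      a = to_fixed(dt / tau)                    # relaxation gain per step
      b = to_fixed(sigma * np.sqrt(dt))         # noise gain per step

      rng = np.random.default_rng(1)
      q = to_fixed(mu)                          # state held as an integer
      trace = []
      for _ in range(1000):
          noise = to_fixed(rng.standard_normal())
          q += (a * (to_fixed(mu) - q)) >> FRAC_BITS   # drift toward mu
          q += (b * noise) >> FRAC_BITS                # stochastic kick
          trace.append(q / SCALE)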

  3. Atomdroid: a computational chemistry tool for mobile platforms.

    PubMed

    Feldt, Jonas; Mata, Ricardo A; Dieterich, Johannes M

    2012-04-23

    We present the implementation of a new molecular mechanics program designed for use in mobile platforms, the first specifically built for these devices. The software is designed to run on Android operating systems and is compatible with several modern tablet-PCs and smartphones available on the market. It includes molecular viewer/builder capabilities with integrated routines for geometry optimizations and Monte Carlo simulations. These functionalities allow it to work as a stand-alone tool. We discuss some particular development aspects, as well as the overall feasibility of using computational chemistry software packages on mobile platforms. Benchmark calculations show that through efficient implementation techniques even hand-held devices can be used to simulate midsized systems using force fields.
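
    The flavor of force-field Monte Carlo that such a mobile code runs can be condensed to a Metropolis sweep over a Lennard-Jones cluster. A self-contained sketch in reduced units follows; the atom count, step size, and cubic-lattice start are arbitrary choices, not Atomdroid's defaults.

      import numpy as np

      rng = np.random.default_rng(2)
      N, beta, step = 13, 1.0, 0.1       # atoms, 1/kT, max displacement
      # start on a loose cubic lattice so no pair overlaps initially
      grid = [(x, y, z) for x in range(3) for y in range(3) for z in range(3)]
      pos = 1.2 * np.array(grid[:N], dtype=float)

      def lj_energy(p):
          d = p[:, None, :] - p[None, :, :]
          r2 = (d ** 2).sum(-1)[np.triu_indices(len(p), 1)]
          inv6 = 1.0 / r2 ** 3
          return float(np.sum(4.0 * (inv6 ** 2 - inv6)))

      E = lj_energy(pos)
      for _ in range(100):                       # Metropolis sweeps
          for i in range(N):
              trial = pos.copy()
              trial[i] += rng.uniform(-step, step, 3)
              E_new = lj_energy(trial)
              if rng.random() < np.exp(min(0.0, -beta * (E_new - E))):
                  pos, E = trial, E_new          # accept the move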

  4. Tomographic and analog 3-D simulations using NORA. [Non-Overlapping Redundant Image Array formed by multiple pinholes

    NASA Technical Reports Server (NTRS)

    Yin, L. I.; Trombka, J. I.; Bielefeld, M. J.; Seltzer, S. M.

    1984-01-01

    The results of two computer simulations demonstrate the feasibility of using the nonoverlapping redundant array (NORA) to form three-dimensional images of objects with X-rays. Pinholes admit the X-rays to nonoverlapping points on a detector. The object is reconstructed in the analog mode by optical correlation and in the digital mode by tomographic computations. Trials were run with a stick-figure pyramid and extended objects with out-of-focus backgrounds. Substitution of spherical optical lenses for the pinholes increased the light transmission sufficiently that objects could be easily viewed in a dark room. Out-of-focus aberrations in tomographic reconstruction could be eliminated using Chang's (1976) algorithm.

  5. Hadron Cancer Therapy: Role of Nuclear Reactions

    DOE R&D Accomplishments Database

    Chadwick, M. B.

    2000-06-20

    Recently it has become feasible to calculate energy deposition and particle transport in the body by proton and neutron radiotherapy beams, using Monte Carlo transport methods. A number of advances have made this possible, including dramatic increases in computer speeds, a better understanding of the microscopic nuclear reaction cross sections, and the development of methods to model the characteristics of the radiation emerging from the accelerator treatment unit. This paper describes the nuclear reaction mechanisms involved, and how the cross sections have been evaluated from theory and experiment, for use in computer simulations of radiation therapy. The simulations will allow the dose delivered to a tumor to be optimized, whilst minimizing the dose given to nearby organs at risk.

  6. Fully implicit adaptive mesh refinement algorithm for reduced MHD

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Pernice, Michael; Chacon, Luis

    2006-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)

  7. Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment

    NASA Astrophysics Data System (ADS)

    Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.

    2013-12-01

    Dust storms have serious negative impacts on the environment, human health, and assets. The continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation for a single dust storm event may take several hours or days to run. This seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with other geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is the key factor that may impact the feasibility of the parallelization. The allocation algorithm needs to carefully balance the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically: 1) In order to obtain optimized solutions, a quadratic programming based modeling method is proposed. This algorithm performs well with a small number of computing tasks; however, its efficiency decreases significantly as the subdomain number and computing node number increase. 2) To compensate for the performance decrease on large-scale tasks, a K-means clustering based algorithm is introduced. Instead of seeking optimized solutions, this method obtains relatively good feasible solutions within acceptable time; however, it may introduce imbalanced communication among nodes or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain a better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
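
    The K-means variant can be pictured as clustering subdomain centroids so that geographically adjacent subdomains land on the same node, shortening inter-node communication paths. A toy scikit-learn sketch follows; grid size and node count are made up, and the uneven cluster sizes it prints are exactly the load-imbalance weakness noted above.

      import numpy as np
      from sklearn.cluster import KMeans

      nx, ny, n_nodes = 16, 12, 8
      xs, ys = np.meshgrid(np.arange(nx), np.arange(ny))
      centroids = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

      km = KMeans(n_clusters=n_nodes, n_init=10, random_state=3).fit(centroids)
      loads = np.bincount(km.labels_, minlength=n_nodes)
      print("subdomains per node:", loads)   # uneven counts == load imbalance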

  8. Silicon material task. Part 3: Low-cost silicon solar array project

    NASA Technical Reports Server (NTRS)

    Roques, R. A.; Coldwell, D. M.

    1977-01-01

    The feasibility of a process for carbon reduction of low impurity silica in a plasma heat source was investigated to produce low-cost solar-grade silicon. Theoretical aspects of the reaction chemistry were studied with the aid of a computer program using iterative free energy minimization. These calculations indicate a threshold temperature exists at 2400 K below which no silicon is formed. The computer simulation technique of molecular dynamics was used to study the quenching of product species.

  9. Large-eddy simulation of flow in a plane, asymmetric diffuser

    NASA Technical Reports Server (NTRS)

    Kaltenbach, Hans-Jakob

    1993-01-01

    Recent improvements in subgrid-scale modeling as well as increases in computer power make it feasible to investigate flows using large-eddy simulation (LES) which have been traditionally studied with techniques based on Reynolds averaging. However, LES has not yet been applied to many flows of immediate technical interest. Preliminary results from LES of a plane diffuser flow are described. The long term goal of this work is to investigate flow separation as well as separation control in ducts and ramp-like geometries.

  10. Dual-energy contrast-enhanced digital mammography (DE-CEDM): optimization on digital subtraction with practical x-ray low/high-energy spectra

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Jing, Zhenxue; Smith, Andrew P.; Parikh, Samir; Parisky, Yuri

    2006-03-01

    Dual-energy contrast-enhanced digital mammography (DE-CEDM), which is based upon the digital subtraction of low/high-energy image pairs acquired before/after the administration of contrast agents, may provide physicians with physiologic and morphologic information on breast lesions and help characterize their probability of malignancy. This paper proposes to use only one pair of post-contrast low/high-energy images to obtain digitally subtracted dual-energy contrast-enhanced images with an optimal weighting factor deduced from simulated characteristics of the imaging chain. Based upon our previous CEDM framework, quantitative characteristics of the materials and imaging components in the x-ray imaging chain, including the x-ray tube (tungsten) spectrum, filters, breast tissues/lesions, contrast agents (non-ionized iodine solution), and selenium detector, were systematically modeled. Using the base-material (polyethylene-PMMA) decomposition method based on entrance low/high-energy x-ray spectra and breast thickness, the optimal weighting factor was calculated to cancel the contrast between fatty and glandular tissues while enhancing the contrast of iodized lesions. By contrast, previous work determined the optimal weighting factor through either a calibration step or through acquisition of a pre-contrast low/high-energy image pair. Computer simulations were conducted to determine weighting factors, lesions' contrast signal values, and dose levels as functions of x-ray techniques and breast thicknesses. Phantom and clinical feasibility studies were performed on a modified Selenia full-field digital mammography system to verify the proposed method and computer-simulated results. The resultant conclusions from the computer simulations and phantom/clinical feasibility studies will be used in the upcoming clinical study.
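
    The digital subtraction itself comes down to a weighted difference of log-intensities, DE = ln(I_high) - w * ln(I_low), with w chosen so the fat/gland contrast cancels while the iodine signal survives. A sketch with invented effective attenuation coefficients follows (for shape only, not calibrated values).

      # invented effective attenuation coefficients, per cm
      mu_low  = {"fat": 0.50, "gland": 0.60, "iodine": 2.00}
      mu_high = {"fat": 0.28, "gland": 0.32, "iodine": 1.20}

      # choose w so (mu_high_gland - mu_high_fat) - w*(mu_low_gland - mu_low_fat) = 0
      w = (mu_high["gland"] - mu_high["fat"]) / (mu_low["gland"] - mu_low["fat"])

      def de_signal(path_cm):                 # material -> thickness traversed
          low  = sum(mu_low[m] * t for m, t in path_cm.items())
          high = sum(mu_high[m] * t for m, t in path_cm.items())
          return -(high - w * low)            # ln(I) = -mu*t for unit input

      print("fat   :", de_signal({"fat": 4.0}))                  # equal to gland:
      print("gland :", de_signal({"gland": 4.0}))                #   tissue nulled
      print("lesion:", de_signal({"gland": 4.0, "iodine": 0.1})) # iodine shows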

  11. Parallel scalability and efficiency of vortex particle method for aeroelasticity analysis of bluff bodies

    NASA Astrophysics Data System (ADS)

    Tolba, Khaled Ibrahim; Morgenthal, Guido

    2018-01-01

    This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied for the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method being applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available for a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming type computer.

  12. Algorithm implementation on the Navier-Stokes computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krist, S.E.; Zang, T.A.

    1987-03-01

    The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.

  14. Architectures for Quantum Simulation Showing a Quantum Speedup

    NASA Astrophysics Data System (ADS)

    Bermejo-Vega, Juan; Hangleiter, Dominik; Schwarz, Martin; Raussendorf, Robert; Eisert, Jens

    2018-04-01

    One of the main aims in the field of quantum simulation is to achieve a quantum speedup, often referred to as "quantum computational supremacy": the experimental realization of a quantum device that computationally outperforms classical computers. In this work, we show that one can devise versatile and feasible schemes of two-dimensional, dynamical, quantum simulators showing such a quantum speedup, building on intermediate problems involving nonadaptive, measurement-based, quantum computation. In each of the schemes, an initial product state is prepared, potentially involving an element of randomness as in disordered models, followed by a short-time evolution under a basic translationally invariant Hamiltonian with simple nearest-neighbor interactions and a mere sampling measurement in a fixed basis. The correctness of the final-state preparation in each scheme is fully efficiently certifiable. We discuss experimental necessities and possible physical architectures, inspired by platforms of cold atoms in optical lattices and a number of others, as well as specific assumptions that enter the complexity-theoretic arguments. This work shows that benchmark settings exhibiting a quantum speedup may require little control, in contrast to universal quantum computing. Thus, our proposal puts a convincing experimental demonstration of a quantum speedup within reach in the near term.

  15. Hydrodynamic Simulations of Protoplanetary Disks with GIZMO

    NASA Astrophysics Data System (ADS)

    Rice, Malena; Laughlin, Greg

    2018-01-01

    Over the past several decades, the field of computational fluid dynamics has rapidly advanced as the range of available numerical algorithms and computationally feasible physical problems has expanded. The development of modern numerical solvers has provided a compelling opportunity to reconsider previously obtained results in search for yet undiscovered effects that may be revealed through longer integration times and more precise numerical approaches. In this study, we compare the results of past hydrodynamic disk simulations with those obtained from modern analytical resources. We focus our study on the GIZMO code (Hopkins 2015), which uses meshless methods to solve the homogeneous Euler equations of hydrodynamics while eliminating problems arising as a result of advection between grid cells. By comparing modern simulations with prior results, we hope to provide an improved understanding of the impact of fluid mechanics upon the evolution of protoplanetary disks.

  16. Comparisons of some large scientific computers

    NASA Technical Reports Server (NTRS)

    Credeur, K. R.

    1981-01-01

    In 1975, the National Aeronautics and Space Administration (NASA) began studies to assess the technical and economic feasibility of developing a computer having a sustained computational speed of one billion floating point operations per second and a working memory of at least 240 million words. Such a powerful computer would allow computational aerodynamics to play a major role in aeronautical design and advanced fluid dynamics research. Based on favorable results from these studies, NASA proceeded with developmental plans. The computer was named the Numerical Aerodynamic Simulator (NAS). To help ensure that the estimated cost, schedule, and technical scope were realistic, a brief study was made of past large scientific computers. Large discrepancies between inception and operation in scope, cost, or schedule were studied so that they could be minimized with NASA's proposed new computer. The main computers studied were the ILLIAC IV, STAR 100, Parallel Element Processor Ensemble (PEPE), and Shuttle Mission Simulator (SMS) computer. Comparison data on memory and speed were also obtained on the IBM 650, 704, 7090, 360-50, 360-67, 360-91, and 370-195; the CDC 6400, 6600, 7600, CYBER 203, and CYBER 205; CRAY 1; and the Advanced Scientific Computer (ASC). A few lessons learned conclude the report.

  17. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.

    2002-01-01

    The rapid increase in available computational power over the last decade has enabled higher resolution flow simulations and more widespread use of unstructured grid methods for complex geometries. While much of this effort has been focused on steady-state calculations in the aerodynamics community, the need to accurately predict off-design conditions, which may involve substantial amounts of flow separation, points to the need to efficiently simulate unsteady flow fields. Accurate unsteady flow simulations can easily require several orders of magnitude more computational effort than a corresponding steady-state simulation. For this reason, techniques for improving the efficiency of unsteady flow simulations are required in order to make such calculations feasible in the foreseeable future. The purpose of this work is to investigate possible reductions in computer time due to the choice of an efficient time-integration scheme from a series of schemes differing in the order of time-accuracy, and by the use of more efficient techniques to solve the nonlinear equations which arise while using implicit time-integration schemes. This investigation is carried out in the context of a two-dimensional unstructured mesh laminar Navier-Stokes solver.
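
    The trade-off between time-accuracy order and cost can be seen on a stiff scalar model problem: a second-order BDF2 step requires essentially the same implicit solve as first-order backward Euler but cuts the error roughly fourfold per halving of the step. A generic sketch follows; the model ODE y' = -50(y - cos t) is illustrative, not the paper's Navier-Stokes system.

      import numpy as np

      lam, T, y0 = 50.0, 1.0, 1.0          # stiffness, horizon, initial value

      def backward_euler(n):
          h, y, t = T / n, y0, 0.0
          for _ in range(n):
              t += h
              y = (y + h * lam * np.cos(t)) / (1 + h * lam)  # implicit solve
          return y

      def bdf2(n):
          h = T / n
          ts = np.linspace(0.0, T, n + 1)
          y_prev, y = y0, (y0 + h * lam * np.cos(ts[1])) / (1 + h * lam)
          for k in range(2, n + 1):        # 3y_k - 4y_{k-1} + y_{k-2} = 2h f_k
              y_prev, y = y, (4 * y - y_prev
                              + 2 * h * lam * np.cos(ts[k])) / (3 + 2 * h * lam)
          return y

      # closed-form solution of this linear ODE, valid for lam = 50
      exact = (2500 * np.cos(T) + 50 * np.sin(T) + np.exp(-lam * T)) / 2501
      for n in (20, 40, 80):
          print(n, abs(backward_euler(n) - exact), abs(bdf2(n) - exact))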

  18. Facilitating higher-fidelity simulations of axial compressor instability and other turbomachinery flow conditions

    NASA Astrophysics Data System (ADS)

    Herrick, Gregory Paul

    The quest to accurately capture flow phenomena with length-scales both short and long and to accurately represent complex flow phenomena within disparately sized geometry inspires a need for an efficient, high-fidelity, multi-block structured computational fluid dynamics (CFD) parallel computational scheme. This research presents and demonstrates a more efficient computational method by which to perform multi-block structured CFD parallel computational simulations, thus facilitating higher-fidelity solutions of complicated geometries (due to the inclusion of grids for "small" flow areas which are often merely modeled) and their associated flows. This computational framework offers greater flexibility and user-control in allocating the resource balance between process count and wall-clock computation time. The principal modifications implemented in this revision consist of a "multiple grid block per processing core" software infrastructure and an analytic computation of viscous flux Jacobians. The development of this scheme is largely motivated by the desire to simulate axial compressor stall inception with more complete gridding of the flow passages (including rotor tip clearance regions) than has been previously done while maintaining high computational efficiency (i.e., minimal consumption of computational resources), and thus this paradigm shall be demonstrated with an examination of instability in a transonic axial compressor. However, the paradigm presented herein facilitates CFD simulation of myriad previously impractical geometries and flows and is not limited to detailed analyses of axial compressor flows. While the simulations presented herein were technically possible under the previous structure of the subject software, they were much less computationally efficient and thus not pragmatically feasible; the previous research using this software to perform three-dimensional, full-annulus, time-accurate, unsteady, full-stage (with sliding-interface) simulations of rotating stall inception in axial compressors utilized tip clearance periodic models, while the scheme here is demonstrated by a simulation of axial compressor stall inception utilizing gridded rotor tip clearance regions. As will be discussed, much previous research (experimental, theoretical, and computational) has suggested that understanding clearance flow behavior is critical to understanding stall inception, and previous computational research efforts which have used tip clearance models have begged the question, "What about the clearance flows?". This research begins to address that question.

  19. SU-E-T-314: The Application of Cloud Computing in Pencil Beam Scanning Proton Therapy Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z; Gao, M

    Purpose: Monte Carlo simulation plays an important role for the proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to the few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4 based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bits, Amazon EC2). Single spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement, and 40 instances were used to form a Linux cluster. To minimize cost, the master node was created as an on-demand instance and the worker nodes were created as spot instances. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy to maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuit of PBS MC studies, especially for newly established proton centers or individual researchers.
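
    The cost arithmetic in the abstract is simple enough to reproduce: one on-demand master plus spot-priced workers. The per-hour prices below are invented placeholders; spot prices fluctuate, which is presumably why the abstract's projected 100-node figure is not a straight-line extrapolation of its 40-node figure.

      ON_DEMAND = 0.0435   # $/hr master, hypothetical m1.medium price
      SPOT      = 0.0150   # $/hr worker, hypothetical spot price

      def cluster_cost_per_hour(n_nodes):
          """One on-demand master plus (n_nodes - 1) spot workers."""
          return ON_DEMAND + (n_nodes - 1) * SPOT

      for n in (40, 100):
          print(f"{n:3d} nodes: ${cluster_cost_per_hour(n):.2f}/hr")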

  20. Flight Experiment Demonstration System (FEDS): Mathematical specification

    NASA Technical Reports Server (NTRS)

    Shank, D. E.

    1984-01-01

    Computational models for the flight experiment demonstration system (FEDS) code 580 were developed. The FEDS is a modification of the automated orbit determination system which was developed during 1981 and 1982. The purpose of FEDS is to demonstrate, in a simulated spacecraft environment, the feasibility of using microprocessors to perform onboard orbit determination with limited ground support.

  1. Computer-aided design analysis of 57-mm, angular-contact, cryogenic turbopump bearings

    NASA Technical Reports Server (NTRS)

    Armstrong, Elizabeth S.; Coe, Harold H.

    1988-01-01

    The Space Shuttle main engine high-pressure oxygen turbopumps have not experienced the service life required of them. This insufficiency has been due in part to the shortened life of the bearings. To improve the life of the existing turbopump bearings, an effort is under way to investigate bearing modifications that could be retrofitted into the present bearing cavity. Several bearing parameters were optimized using the computer program SHABERTH, which performs a thermomechanical simulation of a load support system. The computer analysis showed that improved bearing performance is feasible if low friction coefficients can be attained. Bearing geometries were optimized considering heat generation, equilibrium temperatures, and relative life. Thermal gradients through the bearings were found to be lower with liquid lubrication than with solid film lubrication, and a liquid oxygen coolant flowrate of approximately 4.0 kg/s was found to be optimal. This paper describes the analytical modeling used to determine these feasible modifications to improve bearing performance.

  2. Cryptanalysis and security enhancement of optical cryptography based on computational ghost imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng; Yao, Jianbin; Liu, Xuemei; Zhou, Xin; Li, Zhongyang

    2016-04-01

    Optical cryptography based on computational ghost imaging (CGI) has attracted much attention from researchers because it encrypts plaintext into a random intensity vector rather than a complex-valued function. This promising feature of CGI-based cryptography reduces the amount of data to be transmitted and stored and therefore brings convenience in practice. However, we find that this cryptography is vulnerable to chosen-plaintext attack because of the linear relationship between the input and output of the encryption system, and three feasible strategies are proposed to break it in this paper. Even though a large number of plaintexts need to be chosen in these attack methods, this cryptography still carries security risks. To avoid these attacks, a security enhancement method utilizing an invertible matrix modulation is further discussed and its feasibility is verified by numerical simulations.
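
    The linearity weakness is easy to demonstrate in miniature: if encryption acts as y = A x for a secret matrix A, then feeding the standard basis vectors as chosen plaintexts reads A off column by column, after which every ciphertext can be inverted. A toy sketch follows; the sizes are arbitrary and a real CGI system has more structure, but the principle is the same.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 8
      A = rng.random((n, n))                 # secret linear encryption operator
      encrypt = lambda x: A @ x

      # chosen-plaintext phase: n basis probes recover A exactly
      A_rec = np.column_stack([encrypt(np.eye(n)[:, i]) for i in range(n)])

      secret = rng.random(n)
      recovered = np.linalg.solve(A_rec, encrypt(secret))
      print(np.allclose(recovered, secret))  # True: the scheme is broken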

  3. The feasibility of an efficient drug design method with high-performance computers.

    PubMed

    Yamashita, Takefumi; Ueda, Akihiko; Mitsui, Takashi; Tomonaga, Atsushi; Matsumoto, Shunji; Kodama, Tatsuhiko; Fujitani, Hideaki

    2015-01-01

    In this study, we propose a supercomputer-assisted drug design approach involving all-atom molecular dynamics (MD)-based binding free energy prediction after the traditional design/selection step. Because this prediction is more accurate than the empirical binding affinity scoring of the traditional approach, the compounds selected by the MD-based prediction should be better drug candidates. In this study, we discuss the applicability of the new approach using two examples. Although the MD-based binding free energy prediction has a huge computational cost, it is feasible with the latest 10 petaflop-scale computers. The supercomputer-assisted drug design approach also involves two important feedback procedures: The first feedback is generated from the MD-based binding free energy prediction step to the drug design step. While the experimental feedback usually provides binding affinities of tens of compounds at one time, the supercomputer allows us to simultaneously obtain the binding free energies of hundreds of compounds. Because the number of calculated binding free energies is sufficiently large, the compounds can be classified into different categories whose properties will aid in the design of the next generation of drug candidates. The second feedback, which occurs from the experiments to the MD simulations, is important to validate the simulation parameters. To demonstrate this, we compare the binding free energies calculated with various force fields to the experimental ones. The results indicate that the prediction will not be very successful if we use an inaccurate force field. By improving/validating such simulation parameters, the next prediction can be made more accurate.

  4. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculation has become feasible owing to the development of computer technology. However, the recent development is due to the emergence of multi-core high-performance computers. Therefore, parallel computing becomes a key to achieving good performance of software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using protocols of the message passing interface (MPI) and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper gives an explanation of the two functions with their advantages and disadvantages. Some test applications are also provided to show their performance using a typical multi-core high-performance workstation.
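
    The parallel pattern described above can be mimicked in a few lines with worker processes, each running an independent Monte Carlo stream that is reduced at the end. PHITS itself is a Fortran code; this Python sketch only illustrates the pattern, here on a pi-estimation toy problem.

      import numpy as np
      from multiprocessing import Pool

      def mc_chunk(args):
          seed, n = args                        # independent stream per worker
          pts = np.random.default_rng(seed).random((n, 2))
          return int(np.sum(pts[:, 0] ** 2 + pts[:, 1] ** 2 < 1.0))

      if __name__ == "__main__":
          n_workers, n_per = 4, 250_000
          with Pool(n_workers) as pool:
              hits = pool.map(mc_chunk, [(s, n_per) for s in range(n_workers)])
          print("pi ~", 4.0 * sum(hits) / (n_workers * n_per))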

  5. Robot graphic simulation testbed

    NASA Technical Reports Server (NTRS)

    Cook, George E.; Sztipanovits, Janos; Biegl, Csaba; Karsai, Gabor; Springfield, James F.

    1991-01-01

    The objective of this research was twofold. First, the basic capabilities of ROBOSIM (graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and that of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.

  6. An Exploration of Trainer Filtering Approaches

    NASA Technical Reports Server (NTRS)

    Hester, Patrick; Tolk, Andreas; Gadi, Sandeep; Carver, Quinn; Roland, Philippe

    2011-01-01

    Simulator operators face a twofold entity management problem during Live-Virtual-Constructive (LVC) training events. They first must filter potentially hundreds of thousands of simulation entities in order to determine which elements are necessary for optimal trainee comprehension. Secondarily, they must manage the number of entities entering the simulation from those present in the object model in order to limit the computational burden on the simulation system and prevent unnecessary entities from entering the simulation. This paper focuses on the first filtering stage and describes a novel approach to entity filtering undertaken to maximize trainee awareness and learning. The feasibility of this novel approach is demonstrated on a case study, and limitations to the proposed approach and future work are discussed.

  7. Ultrasonic Phased Array Simulations of Welded Components at NASA

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Tokars, R. P.; Martin, R. E.; Rauser, R. W.; Aldrin, J. C.

    2009-01-01

    Comprehensive and accurate inspections of welded components have become of increasing importance as NASA develops new hardware such as Ares rocket segments for future exploration missions. Simulation and modeling will play an increasing role in the future for nondestructive evaluation in order to better understand the physics of the inspection process, to prove or disprove the feasibility for an inspection method or inspection scenario, for inspection optimization, for better understanding of experimental results, and for assessment of probability of detection. This study presents simulation and experimental results for an ultrasonic phased array inspection of a critical welded structure important for NASA future exploration vehicles. Keywords: nondestructive evaluation, computational simulation, ultrasonics, weld, modeling, phased array

  8. A simulation model for wind energy storage systems. Volume 1: Technical report

    NASA Technical Reports Server (NTRS)

    Warren, A. W.; Edsinger, R. W.; Chan, Y. K.

    1977-01-01

    A comprehensive computer program for the modeling of wind energy and storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic) was developed. The level of detail of the Simulation Model for Wind Energy Storage (SIMWEST) is consistent with its role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. The first program is a precompiler which generates computer models (in FORTRAN) of complex wind source storage application systems, from user specifications using the respective library components. The second program provides the techno-economic system analysis with the respective I/O, the integration of system dynamics, and the iteration for conveyance of variables. The SIMWEST program, as described, runs on the UNIVAC 1100 series computers.

  9. Optimization analysis of thermal management system for electric vehicle battery pack

    NASA Astrophysics Data System (ADS)

    Gong, Huiqi; Zheng, Minxin; Jin, Peng; Feng, Dong

    2018-04-01

    Temperature rise in an electric vehicle battery pack can affect the power battery system's cycle life, chargeability, power, energy, safety and reliability. Computational Fluid Dynamics simulation and experiments on the charging and discharging process of the battery pack were carried out for the thermal management system of the battery pack under continuous charging. The simulation results and the experimental data were used to verify the rationality of the Computational Fluid Dynamics calculation model. In view of the large temperature difference of the battery module in a high-temperature environment, three optimization methods for the existing thermal management system of the battery pack were put forward: adjusting the installation position of the fan, optimizing the arrangement of the battery pack, and reducing the fan opening temperature threshold. The feasibility of the optimization methods is proved by simulation and experiment on the thermal management system of the optimized battery pack.

  10. Ship Motions and Capsizing in Astern Seas

    DTIC Science & Technology

    1974-12-01

    result of these experiments and concurrent analytical work, a great deal has been learned about the mechanism of capsizing. This...computer time. It does not appear economically feasible using present-generation machines to numerically simulate a complete experimental...a Fast Cargo Liner in San Francisco Bay." Dept. of Naval Architecture, University of Calif., Berkeley. January 1972. (Dept. of Transp

  11. Growth and survival of Salmonella Paratyphi A in roasted marinated chicken during refrigerated storage: Effect of temperature abuse and computer simulation for cold chain management

    USDA-ARS?s Scientific Manuscript database

    This research was conducted to evaluate the feasibility of using a one-step dynamic numerical analysis and optimization method to directly construct a tertiary model to describe the growth and survival of Salmonella Paratyphi A (SPA) in a marinated roasted chicken product. Multiple dynamic growth a...

  12. Neptune Aerocapture Systems Analysis

    NASA Technical Reports Server (NTRS)

    Lockwood, Mary Kae

    2004-01-01

    A Neptune Aerocapture Systems Analysis is completed to determine the feasibility, benefit and risk of an aeroshell aerocapture system for Neptune and to identify technology gaps and technology performance goals. The high fidelity systems analysis is completed by a five center NASA team and includes the following disciplines and analyses: science; mission design; aeroshell configuration screening and definition; interplanetary navigation analyses; atmosphere modeling; computational fluid dynamics for aerodynamic performance and database definition; initial stability analyses; guidance development; atmospheric flight simulation; computational fluid dynamics and radiation analyses for aeroheating environment definition; thermal protection system design, concepts and sizing; mass properties; structures; spacecraft design and packaging; and mass sensitivities. Results show that aerocapture can deliver 1.4 times more mass to Neptune orbit than an all-propulsive system for the same launch vehicle. In addition aerocapture results in a 3-4 year reduction in trip time compared to all-propulsive systems. Aerocapture is feasible and performance is adequate for the Neptune aerocapture mission. Monte Carlo simulation results show 100% successful capture for all cases including conservative assumptions on atmosphere and navigation. Enabling technologies for this mission include TPS manufacturing; and aerothermodynamic methods and validation for determining coupled 3-D convection, radiation and ablation aeroheating rates and loads, and the effects on surface recession.

  13. Experimental and analytical investigation of active loads control for aircraft landing gear

    NASA Technical Reports Server (NTRS)

    Morris, D. L.; Mcgehee, J. R.

    1983-01-01

    A series hydraulic, active loads control main landing gear from a light, twin-engine civil aircraft was investigated. Tests included landing impact and traversal of simulated runway roughness. It is shown that the active gear is feasible and very effective in reducing the force transmitted to the airframe. Preliminary validation of a multidegree of freedom active gear flexible airframe takeoff and landing analysis computer program, which may be used as a design tool for active gear systems, is accomplished by comparing experimental and computed data for the passive and active gears.

  14. Radiation shielding evaluation of the BNCT treatment room at THOR: a TORT-coupled MCNP Monte Carlo simulation study.

    PubMed

    Chen, A Y; Liu, Y-W H; Sheu, R J

    2008-01-01

    This study investigates the radiation shielding design of the treatment room for boron neutron capture therapy at the Tsing Hua Open-pool Reactor using the "TORT-coupled MCNP" method. With this method, the computational efficiency is improved significantly, by two to three orders of magnitude, compared to the analog Monte Carlo MCNP calculation. This makes the calculation feasible using a single CPU in less than 1 day. Further optimization of the photon weight windows leads to an additional 50-75% improvement in the overall computational efficiency.

  15. Feasibility study for a numerical aerodynamic simulation facility. Volume 2: Hardware specifications/descriptions

    NASA Technical Reports Server (NTRS)

    Green, F. M.; Resnick, D. R.

    1979-01-01

    An FMP (Flow Model Processor) was designed for use in the Numerical Aerodynamic Simulation Facility (NASF). The NASF was developed to simulate fluid flow over three-dimensional bodies in wind tunnel environments and in free space. The facility is applicable to studying aerodynamic and aircraft body designs. The following general topics are discussed in this volume: (1) FMP functional computer specifications; (2) FMP instruction specification; (3) standard product system components; (4) loosely coupled network (LCN) specifications/description; and (5) three appendices: performance of trunk allocation contention elimination (trace) method, LCN channel protocol and proposed LCN unified second level protocol.

  16. Ultrasonic Phased Array Inspection Experiments and Simulations for an Isogrid Structural Element with Cracks

    NASA Astrophysics Data System (ADS)

    Roth, D. J.; Tokars, R. P.; Martin, R. E.; Rauser, R. W.; Aldrin, J. C.; Schumacher, E. J.

    2010-02-01

    In this investigation, a T-shaped aluminum alloy isogrid stiffener element used in aerospace applications was inspected with ultrasonic phased array methods. The isogrid stiffener element had various crack configurations emanating from bolt holes. Computational simulation methods were used to mimic the experiments in order to help understand experimental results. The results of this study indicate that it is at least partly feasible to interrogate this type of geometry with the given flaw configurations using phased array ultrasonics. The simulation methods were critical in helping explain the experimental results and, with some limitation, can be used to predict inspection results.

  17. Real-Time Simulation of Aeroheating of the Hyper-X Airplane

    NASA Technical Reports Server (NTRS)

    Gong, Les

    2005-01-01

    A capability for real-time computational simulation of aeroheating has been developed in support of the Hyper-X program, which is directed toward demonstrating the feasibility of operating an air-breathing ramjet/scramjet engine at mach 5, mach 7, and mach 10. The simulation software will serve as a valuable design tool for initial trajectory studies in which aerodynamic heating is expected to exert a major influence in the design of the Hyper-X airplane; this tool will aid in the selection of materials, sizing of structural skin thicknesses, and selection of components of a thermal-protection system (TPS) for structures that must be insulated against aeroheating.

  18. Providing scalable system software for high-end simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, D.

    1997-12-31

    Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.

  19. Analysis and Simulation of a Blue Energy Cycle

    DOE PAGES

    Sharma, Ms. Ketki; Kim, Yong-Ha; Yiacoumi, Sotira; ...

    2016-01-30

    The mixing process of fresh water and seawater releases a significant amount of energy and is a potential source of renewable energy. The so-called 'blue energy', or salinity-gradient energy, can be harvested by a device consisting of carbon electrodes immersed in an electrolyte solution, based on the principle of capacitive double layer expansion (CDLE). In this study, we have investigated the feasibility of energy production based on the CDLE principle. Experiments and computer simulations were used to study the process. Mesoporous carbon materials, synthesized at the Oak Ridge National Laboratory, were used as electrode materials in the experiments. Neutron imaging of the blue energy cycle was conducted with cylindrical mesoporous carbon electrodes and 0.5 M lithium chloride as the electrolyte solution. For experiments conducted at 0.6 V and 0.9 V applied potential, a voltage increase of 0.061 V and 0.054 V was observed, respectively. From sequences of neutron images obtained for each step of the blue energy cycle, information on the direction and magnitude of lithium ion transport was obtained. A computer code was developed to simulate the process. Experimental data and computer simulations allowed us to predict energy production.

  20. Characterization of cardiac flow in heart disease patients by computational fluid dynamics and 4D flow MRI

    NASA Astrophysics Data System (ADS)

    Lantz, Jonas; Gupta, Vikas; Henriksson, Lilian; Karlsson, Matts; Persson, Ander; Carhall, Carljohan; Ebbers, Tino

    2017-11-01

    In this study, cardiac blood flow was simulated using Computational Fluid Dynamics and compared to in vivo flow measurements by 4D Flow MRI. In total, nine patients with various heart diseases were studied. Geometry and heart wall motion for the simulations were obtained from clinical CT measurements, with 0.3x0.3x0.3 mm spatial resolution and 20 time frames covering one heartbeat. The CFD simulations included pulmonary veins, left atrium and ventricle, mitral and aortic valve, and ascending aorta. Mesh sizes were on the order of 6-16 million cells, depending on the size of the heart, in order to resolve both papillary muscles and trabeculae. The computed flow field agreed visually very well with 4D Flow MRI, with characteristic vortices and flow structures seen in both techniques. Regression analysis showed that peak flow rate as well as stroke volume had an excellent agreement for the two techniques. We demonstrated the feasibility, and more importantly, fidelity of cardiac flow simulations by comparing CFD results to in vivo measurements. Both qualitative and quantitative results agreed well with the 4D Flow MRI measurements. Also, the developed simulation methodology enables "what if" scenarios, such as optimization of valve replacement and other surgical procedures. Funded by the Wallenberg Foundation.

  1. The application of nonlinear programming and collocation to optimal aeroassisted orbital transfers

    NASA Astrophysics Data System (ADS)

    Shi, Y. Y.; Nelson, R. L.; Young, D. H.; Gill, P. E.; Murray, W.; Saunders, M. A.

    1992-01-01

    Sequential quadratic programming (SQP) and collocation of the differential equations of motion were applied to optimal aeroassisted orbital transfers. The Optimal Trajectory by Implicit Simulation (OTIS) computer program codes with updated nonlinear programming code (NZSOL) were used as a testbed for the SQP nonlinear programming (NLP) algorithms. The state-of-the-art sparse SQP method is considered to be effective for solving large problems with a sparse matrix. Sparse optimizers are characterized in terms of memory requirements and computational efficiency. For the OTIS problems, less than 10 percent of the Jacobian matrix elements are nonzero. The SQP method encompasses two phases: finding an initial feasible point by minimizing the sum of infeasibilities and minimizing the quadratic objective function within the feasible region. The orbital transfer problem under consideration involves the transfer from a high energy orbit to a low energy orbit.
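
    The two phases can be imitated on a toy problem: first drive the sum of constraint violations to zero to find a feasible start, then minimize the actual objective from there. A hedged SciPy sketch follows; the quadratic objective and linear constraints are invented, and OTIS/NZSOL solve far larger sparse problems.

      import numpy as np
      from scipy.optimize import minimize

      cons = [  # feasible region: x0 + x1 >= 1 and x0 - x1 <= 2, as g(x) >= 0
          {"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0},
          {"type": "ineq", "fun": lambda x: 2.0 - (x[0] - x[1])},
      ]

      def infeasibility(x):     # phase 1 objective: sum of violations
          return sum(max(0.0, -c["fun"](x)) for c in cons)

      x0 = np.array([-5.0, -5.0])                         # infeasible start
      phase1 = minimize(infeasibility, x0, method="Nelder-Mead")
      phase2 = minimize(lambda x: (x[0] - 3.0) ** 2 + x[1] ** 2,  # quadratic
                        phase1.x, method="SLSQP", constraints=cons)
      print("feasible point:", phase1.x, "-> optimum:", phase2.x)  # ~(2.5, 0.5)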

  2. Down to the roughness scale assessment of piston-ring/liner contacts

    NASA Astrophysics Data System (ADS)

    Checo, H. M.; Jaramillo, A.; Ausas, R. F.; Jai, M.; Buscaglia, G. C.

    2017-02-01

    The effects of surface roughness in hydrodynamic bearings have been accounted for through several approaches, the most widely used being averaging or stochastic techniques. With these, the surface is not treated “as it is”, but by means of an assumed probability distribution for the roughness. So-called direct (deterministic, or measured-surface) simulations solve the lubrication problem with realistic surfaces down to the roughness scale. This leads to expensive computational problems. Most researchers have tackled this problem considering non-moving surfaces and neglecting the ring dynamics to reduce the computational burden. What is proposed here is to solve the fully-deterministic simulation both in space and in time, so that the actual movement of the surfaces and the ring dynamics are taken into account. This simulation is much more complex than previous ones, as it is intrinsically transient. The feasibility of these fully-deterministic simulations is illustrated in two cases: fully-deterministic simulation of liner surfaces with diverse finishes (honed and coated bores) with constant piston velocity and load on the ring, and also in real engine conditions.

  3. A Method for Combining Experimentation and Molecular Dynamics Simulation to Improve Cohesive Zone Models for Metallic Microstructures

    NASA Technical Reports Server (NTRS)

    Hochhalter, J. D.; Glaessgen, E. H.; Ingraffea, A. R.; Aquino, W. A.

    2009-01-01

    Fracture processes within a material begin at the nanometer length scale at which the formation, propagation, and interaction of fundamental damage mechanisms occur. Physics-based modeling of these atomic processes quickly becomes computationally intractable as the system size increases. Thus, a multiscale modeling method, based on the aggregation of fundamental damage processes occurring at the nanoscale within a cohesive zone model, is under development and will enable computationally feasible and physically meaningful microscale fracture simulation in polycrystalline metals. This method employs atomistic simulation to provide an optimization loop with an initial prediction of a cohesive zone model (CZM). This initial CZM is then applied at the crack front region within a finite element model. The optimization procedure iterates upon the CZM until the finite element model acceptably reproduces the near-crack-front displacement fields obtained from experimental observation. With this approach, a comparison can be made between the original CZM predicted by atomistic simulation and the converged CZM that is based on experimental observation. Comparison of the two CZMs gives insight into how atomistic simulation scales.

  4. Metallic Fuel Casting Development and Parameter Optimization Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R.S. Fielding; J. Crapps; C. Unal

    One of the advantages of metallic fuel is the ability to cast the fuel slugs to near net shape with little additional processing. However, the high aspect ratio of the fuel is not ideal for casting. EBR-II fuel was cast using counter gravity injection casting (CGIC), but concerns have been raised about the feasibility of this process for americium-bearing alloys. The Fuel Cycle Research and Development program has begun developing gravity casting techniques suitable for fuel production. Compared to CGIC, gravity casting does not require a large heel that then is recycled, does not require application of a vacuum during melting, and is conducive to re-usable molds. Development has included fabrication of two separate bench-scale, approximately 300 gram, systems. To shorten development time, computer simulations have been used to ensure mold and crucible designs are feasible and to identify which fluid properties most affect casting behavior and therefore require more characterization.

  5. Detection of magnetically enhanced cancer tumors using SQUID magnetometry: A feasibility study

    NASA Astrophysics Data System (ADS)

    Kenning, G. G.; Rodriguez, R.; Zotev, V. S.; Moslemi, A.; Wilson, S.; Hawel, L.; Byus, C.; Kovach, J. S.

    2005-01-01

    Nanoparticles bound to various biological molecules and pharmacological agents can be administered systemically to humans without apparent toxicity. This opens an era in the targeting of specific tissues and disease processes for noninvasive imaging and treatment. An important class of particles used predominantly for magnetic resonance imaging is based on iron-oxide ferrites. We performed computer simulations using experimentally determined values for concentrations of superparamagnetic particles achievable in specific tissues of the mouse in vivo and concentrations of particles linked to monoclonal antibodies specific to antigens of two human cancer cell lines in vitro. An instrument-to-target distance of 12 cm into the body was selected as relevant to our goal of developing a rapid, inexpensive method of scanning the body for occult disease. The simulations demonstrate the potential feasibility of superconducting quantum interference device magnetometry to detect induced magnetic fields in focal concentrations of superparamagnetic particles targeted, in vivo, to sites of disease.

  6. Virtual Reality Job Interview Training in Adults with Autism Spectrum Disorder

    PubMed Central

    Smith, Matthew J.; Ginger, Emily; Wright, Katherine; Wright, Michael; Taylor, Julie Lounds; Humm, Laura Boteler; Olsen, Dale; Bell, Morris D.; Fleming, Michael F.

    2014-01-01

    The feasibility and efficacy of Virtual Reality Job Interview Training (VR-JIT) was assessed in a single-blinded randomized controlled trial. Adults with autism spectrum disorder were randomized to VR-JIT (n=16) or treatment as usual (TAU) (n=10) groups. VR-JIT consisted of simulated job interviews with a virtual character and didactic training. Participants attended 90% of lab-based training sessions and found VR-JIT easy to use and enjoyable, and they felt prepared for future interviews. VR-JIT participants had greater improvement during live standardized job interview role-play performances than TAU participants (p=0.046). A similar pattern was observed for self-reported self-confidence at a trend level (p=0.060). VR-JIT simulation performance scores increased over time (R-Squared=0.83). Results indicate preliminary support for the feasibility and efficacy of VR-JIT, which can be administered using computer software or via the internet. PMID:24803366

  7. Virtual reality job interview training in adults with autism spectrum disorder.

    PubMed

    Smith, Matthew J; Ginger, Emily J; Wright, Katherine; Wright, Michael A; Taylor, Julie Lounds; Humm, Laura Boteler; Olsen, Dale E; Bell, Morris D; Fleming, Michael F

    2014-10-01

    The feasibility and efficacy of virtual reality job interview training (VR-JIT) was assessed in a single-blinded randomized controlled trial. Adults with autism spectrum disorder were randomized to VR-JIT (n = 16) or treatment-as-usual (TAU) (n = 10) groups. VR-JIT consisted of simulated job interviews with a virtual character and didactic training. Participants attended 90% of laboratory-based training sessions, found VR-JIT easy to use and enjoyable, and felt prepared for future interviews. VR-JIT participants had greater improvement during live standardized job interview role-play performances than TAU participants (p = 0.046). A similar pattern was observed for self-reported self-confidence at a trend level (p = 0.060). VR-JIT simulation performance scores increased over time (R² = 0.83). Results indicate preliminary support for the feasibility and efficacy of VR-JIT, which can be administered using computer software or via the internet.

  8. AEROELASTIC SIMULATION TOOL FOR INFLATABLE BALLUTE AEROCAPTURE

    NASA Technical Reports Server (NTRS)

    Liever, P. A.; Sheta, E. F.; Habchi, S. D.

    2006-01-01

    A multidisciplinary analysis tool is under development for predicting the impact of aeroelastic effects on the functionality of inflatable ballute aeroassist vehicles in both the continuum and rarefied flow regimes. High-fidelity modules for continuum and rarefied aerodynamics, structural dynamics, heat transfer, and computational grid deformation are coupled in an integrated multi-physics, multi-disciplinary computing environment. This flexible and extensible approach allows the integration of state-of-the-art, stand-alone NASA and industry-leading continuum and rarefied flow solvers and structural analysis codes into a computing environment in which the modules can run concurrently with synchronized data transfer. Coupled fluid-structure continuum flow demonstrations were conducted on a clamped ballute configuration. The feasibility of implementing a DSMC flow solver in the simulation framework was demonstrated, and loosely coupled rarefied flow aeroelastic demonstrations were performed. A NASA and industry technology survey identified CFD, DSMC, and structural analysis codes capable of modeling the non-linear shape and material response of thin-film inflated aeroshells. The simulation technology will find direct and immediate applications with NASA and industry in ongoing aerocapture technology development programs.

  9. Deep convolutional neural networks for estimating porous material parameters with ultrasound tomography

    NASA Astrophysics Data System (ADS)

    Lähivaara, Timo; Kärkkäinen, Leo; Huttunen, Janne M. J.; Hesthaven, Jan S.

    2018-02-01

    We study the feasibility of data-based machine learning applied to ultrasound tomography to estimate water-saturated porous material parameters. In this work, the data to train the neural networks are simulated by solving wave propagation in coupled poroviscoelastic-viscoelastic-acoustic media. As the forward model, we consider a high-order discontinuous Galerkin method, while deep convolutional neural networks are used to solve the parameter estimation problem. In the numerical experiment, we estimate the material porosity and tortuosity, while the remaining parameters, which are of less interest, are successfully marginalized in the neural-network-based inversion. Computational examples confirm the feasibility and accuracy of this approach.
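
    As a rough illustration of the inversion step, the following PyTorch sketch sets up a small convolutional regressor that maps simulated receiver-versus-time ultrasound data to two material parameters. The architecture, input shapes, and training data here are placeholders, not the network of the paper.

    ```python
    import torch
    import torch.nn as nn

    class PorousParamNet(nn.Module):
        """Toy CNN regressor: receiver-vs-time ultrasound image -> (porosity, tortuosity)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.head = nn.Linear(32 * 4 * 4, 2)  # two material parameters

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    # One supervised step on stand-in "simulated" data (shapes are illustrative):
    net = PorousParamNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    x = torch.randn(8, 1, 64, 256)   # batch of receiver-vs-time recordings
    y = torch.rand(8, 2)             # normalized porosity, tortuosity labels
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()
    ```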

  10. CFD and PTV steady flow investigation in an anatomically accurate abdominal aortic aneurysm.

    PubMed

    Boutsianis, Evangelos; Guala, Michele; Olgac, Ufuk; Wildermuth, Simon; Hoyer, Klaus; Ventikos, Yiannis; Poulikakos, Dimos

    2009-01-01

    There is considerable interest in computational and experimental flow investigations within abdominal aortic aneurysms (AAAs). This task requires advanced grid generation techniques and cross-validation because of the anatomical complexity. The purpose of this study is to examine the feasibility of velocity measurements by particle tracking velocimetry (PTV) in realistic AAA models. Computed tomography and rapid prototyping were combined to digitize and construct a silicone replica of a patient-specific AAA. Three-dimensional velocity measurements were acquired using PTV under steady averaged resting boundary conditions. Computational fluid dynamics (CFD) simulations were subsequently carried out with identical boundary conditions. The computational grid was created by splitting the luminal volume into manifold and nonmanifold subsections, which were filled with tetrahedral and hexahedral elements, respectively. Grid independence was tested on three successively refined meshes. Velocity differences of about 1% in all three directions existed mainly within the AAA sac. Pressure revealed similar variations, with the sparser mesh predicting larger values. PTV velocity measurements were taken along the abdominal aorta and showed good agreement with the numerical data. The results within the aneurysm neck and sac showed average velocity variations of about 5% of the mean inlet velocity. The corresponding average differences increased for all velocity components downstream of the iliac bifurcation to as much as 15%. The two domains differed slightly due to flow-induced forces acting on the silicone model. Velocity quantification through narrow branches was problematic due to the decreased signal-to-noise ratio at the larger local velocities. Computational wall pressure and shear fields are also presented. The agreement between the CFD simulations and the PTV experimental data was confirmed by three-dimensional velocity comparisons at several locations within the investigated AAA anatomy, indicating the feasibility of this approach.

  11. A unified framework for heat and mass transport at the atomic scale

    NASA Astrophysics Data System (ADS)

    Ponga, Mauricio; Sun, Dingyi

    2018-04-01

    We present a unified framework to simulate heat and mass transport in systems of particles. The proposed framework is based on kinematic mean field theory and uses a phenomenological master equation to compute effective transport rates between particles without the need to evaluate operators. We exploit this advantage and apply the model to simulate transport phenomena at the nanoscale. We demonstrate that, when calibrated to experimentally-measured transport coefficients, the model can accurately predict transient and steady state temperature and concentration profiles even in scenarios where the length of the device is comparable to the mean free path of the carriers. Through several example applications, we demonstrate the validity of our model for all classes of materials, including ones that, until now, would have been outside the domain of computational feasibility.
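
    The phenomenological master equation described here can be caricatured as a pairwise exchange law. The sketch below integrates dT_i/dt = Σ_j k_ij (T_j − T_i) explicitly for a 1D chain of particles; the rates, time step, and geometry are illustrative stand-ins, not calibrated coefficients from the paper.

    ```python
    import numpy as np

    def step(T, rates, dt):
        """One explicit Euler update of dT_i/dt = sum_j k_ij (T_j - T_i).

        T     : (n,) temperatures (concentrations take the same form)
        rates : (n, n) symmetric pairwise transport rates k_ij, zero diagonal
        """
        return T + dt * (rates @ T - rates.sum(axis=1) * T)

    # 1D chain of 50 particles, nearest-neighbour coupling, hot left end.
    n = 50
    k = np.zeros((n, n))
    idx = np.arange(n - 1)
    k[idx, idx + 1] = k[idx + 1, idx] = 1.0   # transport rate, illustrative units
    T = np.zeros(n)
    T[0] = 1.0

    for _ in range(2000):
        T = step(T, k, dt=0.1)                # dt chosen below the stability limit
    print("ends: %.3f  %.3f" % (T[0], T[-1]))  # diffusive relaxation toward uniform T
    ```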

  12. Three-dimensional data-tracking dynamic optimization simulations of human locomotion generated by direct collocation.

    PubMed

    Lin, Yi-Chung; Pandy, Marcus G

    2017-07-05

    The aim of this study was to perform full-body three-dimensional (3D) dynamic optimization simulations of human locomotion by driving a neuromusculoskeletal model toward in vivo measurements of body-segmental kinematics and ground reaction forces. Gait data were recorded from 5 healthy participants who walked at their preferred speeds and ran at 2 m/s. Participant-specific data-tracking dynamic optimization solutions were generated for one stride cycle using direct collocation in tandem with an OpenSim-MATLAB interface. The body was represented as a 12-segment, 21-degree-of-freedom skeleton actuated by 66 muscle-tendon units. Foot-ground interaction was simulated using six contact spheres under each foot. The dynamic optimization problem was to find the set of muscle excitations needed to reproduce 3D measurements of body-segmental motions and ground reaction forces while minimizing the time integral of muscle activations squared. Direct collocation took on average 2.7 ± 1.0 h and 2.2 ± 1.6 h of CPU time, respectively, to solve the optimization problems for walking and running. Model-computed kinematics and foot-ground forces were in good agreement with corresponding experimental data while the calculated muscle excitation patterns were consistent with measured EMG activity. The results demonstrate the feasibility of implementing direct collocation on a detailed neuromusculoskeletal model with foot-ground contact to accurately and efficiently generate 3D data-tracking dynamic optimization simulations of human locomotion. The proposed method offers a viable tool for creating feasible initial guesses needed to perform predictive simulations of movement using dynamic optimization theory. The source code for implementing the model and computational algorithm may be downloaded at http://simtk.org/home/datatracking. Copyright © 2017 Elsevier Ltd. All rights reserved.
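
    The structure of a data-tracking direct collocation problem can be shown in miniature. The SciPy sketch below tracks a "measured" trajectory with a point mass (x'' = u) using trapezoidal defect constraints and a squared-control penalty; it is a toy analogue of the formulation, not the OpenSim-MATLAB implementation, and all numbers are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy data-tracking collocation: a point mass must track measured positions
    # while minimizing the integral of u^2 -- the same structure, in miniature,
    # as tracking gait data with muscle activations squared.
    N, dt = 20, 0.05
    t = np.arange(N) * dt
    x_meas = np.sin(2 * np.pi * t)             # "measured" trajectory (illustrative)

    def unpack(z):
        return z[:N], z[N:2 * N], z[2 * N:]    # position, velocity, control

    def objective(z):
        x, _, u = unpack(z)
        return np.sum((x - x_meas) ** 2) + 1e-3 * np.sum(u ** 2) * dt

    def defects(z):                            # trapezoidal collocation constraints
        x, v, u = unpack(z)
        dx = x[1:] - x[:-1] - 0.5 * dt * (v[1:] + v[:-1])
        dv = v[1:] - v[:-1] - 0.5 * dt * (u[1:] + u[:-1])
        return np.concatenate([dx, dv])

    sol = minimize(objective, np.zeros(3 * N), method="SLSQP",
                   constraints={"type": "eq", "fun": defects})
    print(sol.success, objective(sol.x))
    ```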

  13. Constructing a patient-specific computer model of the upper airway in sleep apnea patients.

    PubMed

    Dhaliwal, Sandeep S; Hesabgar, Seyyed M; Haddad, Seyyed M H; Ladak, Hanif; Samani, Abbas; Rotenberg, Brian W

    2018-01-01

    The use of computer simulation to develop a high-fidelity model has been proposed as a novel and cost-effective alternative to help guide therapeutic intervention in sleep apnea surgery. We describe a computer model based on patient-specific anatomy of obstructive sleep apnea (OSA) subjects wherein the percentage and sites of upper airway collapse are compared to findings on drug-induced sleep endoscopy (DISE). Basic science computer model generation. Three-dimensional finite element techniques were undertaken for model development in a pilot study of four OSA patients. Magnetic resonance imaging was used to capture patient anatomy and software employed to outline critical anatomical structures. A finite-element mesh was applied to the volume enclosed by each structure. Linear and hyperelastic soft-tissue properties for various subsites (tonsils, uvula, soft palate, and tongue base) were derived using an inverse finite-element technique from surgical specimens. Each model underwent computer simulation to determine the degree of displacement on various structures within the upper airway, and these findings were compared to DISE exams performed on the four study patients. Computer simulation predictions for percentage of airway collapse and site of maximal collapse show agreement with observed results seen on endoscopic visualization. Modeling the upper airway in OSA patients is feasible and holds promise in aiding patient-specific surgical treatment. NA. Laryngoscope, 128:277-282, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  14. Numerical simulations of clinical focused ultrasound functional neurosurgery

    NASA Astrophysics Data System (ADS)

    Pulkkinen, Aki; Werner, Beat; Martin, Ernst; Hynynen, Kullervo

    2014-04-01

    A computational model utilizing grid and finite difference methods was developed to simulate focused ultrasound functional neurosurgery interventions. The model couples the propagation of ultrasound in fluids (soft tissues) and solids (skull) with acoustic and visco-elastic wave equations. The computational model was applied to simulate clinical focused ultrasound functional neurosurgery treatments performed in patients suffering from therapy-resistant chronic neuropathic pain. Datasets of five patients were used to derive the treatment geometry. Eight sonications performed in the treatments were then simulated with the developed model. Computations were performed by driving the simulated phased-array ultrasound transducer with the acoustic parameters used in the treatments. Resulting focal temperatures and sizes of the thermal foci were compared quantitatively, in addition to qualitative inspection of the simulated pressure and temperature fields. This study found that the computational model and the simulation parameters predicted an average of 24 ± 13% lower focal temperature elevations than observed in the treatments. The size of the simulated thermal focus was found to be 40 ± 13% smaller in the anterior-posterior direction and 22 ± 14% smaller in the inferior-superior direction than in the treatments. The location of the simulated thermal focus was off from the prescribed target by 0.3 ± 0.1 mm, while the peak focal temperature elevation observed in the measurements was off by 1.6 ± 0.6 mm. Although the results of the simulations suggest that there could be some inaccuracies in either the tissue parameters used or in the simulation methods, the simulations were able to predict the focal spot locations and temperature elevations adequately for initial treatment planning performed to assess, for example, the feasibility of sonication. The accuracy of the simulations could be improved if more precise ultrasound tissue properties (especially of the skull bone) could be obtained.

  15. Performance evaluation of GPU parallelization, space-time adaptive algorithms, and their combination for simulating cardiac electrophysiology.

    PubMed

    Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo

    2018-02-01

    The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, 2 different techniques have been traditionally exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs) and a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU, multicore and GPU and space adaptivity, multicore and GPU and space adaptivity and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs, 3D simulations on a ventricular mouse mesh, i.e., complex geometry, sinus-rhythm, and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48×. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165× and 498×. The tested methods were able to reduce the execution time of a simulation by more than 498× for a complex cellular model in a slab geometry and by 165× in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Facilitating arrhythmia simulation: the method of quantitative cellular automata modeling and parallel running

    PubMed Central

    Zhu, Hao; Sun, Yan; Rajagopal, Gunaretnam; Mondry, Adrian; Dhar, Pawan

    2004-01-01

    Background Many arrhythmias are triggered by abnormal electrical activity at the ionic channel and cell level, and then evolve spatio-temporally within the heart. To understand arrhythmias better and to diagnose them more precisely by their ECG waveforms, a whole-heart model is required to explore the association between the massively parallel activities at the channel/cell level and the integrative electrophysiological phenomena at organ level. Methods We have developed a method to build large-scale electrophysiological models by using extended cellular automata, and to run such models on a cluster of shared memory machines. We describe here the method, including the extension of a language-based cellular automaton to implement quantitative computing, the building of a whole-heart model with Visible Human Project data, the parallelization of the model on a cluster of shared memory computers with OpenMP and MPI hybrid programming, and a simulation algorithm that links cellular activity with the ECG. Results We demonstrate that electrical activities at channel, cell, and organ levels can be traced and captured conveniently in our extended cellular automaton system. Examples of some ECG waveforms simulated with a 2-D slice are given to support the ECG simulation algorithm. A performance evaluation of the 3-D model on a four-node cluster is also given. Conclusions Quantitative multicellular modeling with extended cellular automata is a highly efficient and widely applicable method to weave experimental data at different levels into computational models. This process can be used to investigate complex and collective biological activities that can be described neither by their governing differential equations nor by discrete parallel computation. Transparent cluster computing is a convenient and effective method to make time-consuming simulation feasible. Arrhythmias, as a typical case, can be effectively simulated with the methods described. PMID:15339335
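
    A minimal excitable-media cellular automaton conveys the flavor of this approach, though the paper's quantitative CA is far richer. In the Greenberg-Hastings-style sketch below, a resting cell fires when a neighbour is excited and then cycles through a refractory period; the grid size, refractory period, and stimulus are all illustrative.

    ```python
    import numpy as np

    # Minimal excitable-media cellular automaton: 0 = resting, 1 = excited,
    # 2..R = refractory. A resting cell fires when any 4-neighbour is excited.
    R = 5                                  # refractory period in steps
    grid = np.zeros((100, 100), dtype=int)
    grid[50, 50] = 1                       # ectopic stimulus

    def step(g):
        excited = (g == 1)
        neighbour_fire = (np.roll(excited, 1, 0) | np.roll(excited, -1, 0) |
                          np.roll(excited, 1, 1) | np.roll(excited, -1, 1))
        new = np.where((g > 0) & (g < R), g + 1, 0)   # advance refractory, recover
        new[(g == 0) & neighbour_fire] = 1            # resting cells get excited
        return new

    for _ in range(40):
        grid = step(grid)
    print(int((grid == 1).sum()), "cells excited")    # an expanding wavefront
    ```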

  17. Autonomic arousal and learning in Web-based simulation: a feasibility study.

    PubMed

    Gorrindo, Tristan; Chevalier, Lydia; Goldfarb, Elizabeth; Hoeppner, Bettina B; Birnbaum, Robert J

    2014-01-01

    Autonomic arousal is an important component of understanding learning as it is related to cognitive effort, attention, and emotional arousal. Currently, however, little is known about its relationship to online education. We conducted a study to determine the feasibility of measuring autonomic arousal and engagement in online continuing medical education (CME). Using the Computer Simulation Assessment Tool (CSAT) platform, health care providers were randomly assigned to either high- or low-valence versions of a Web-based simulation on risk assessment for a returning veteran. Data were collected on participants' actions within the simulation, self-reported cognitive engagement, knowledge retention, and autonomic arousal measured using galvanic skin response (GSR). Participants in the high-valence condition (n = 7) chose a lower percentage of critical actions (M = 79.2, SD = 4.2) than participants in the low-valence (n = 8) condition (M = 83.9, SD = 3.6, t(1,14) = 2.44, p = .03). While not statistically significant, high-valence participants reported higher cognitive engagement. Participants in the high-valence condition showed a larger increase in physiologic arousal when comparing mean tonic GSR during the orientation simulation to the study simulation (high-valence mean difference = 4.21 μS, SD = 1.23 vs low-valence mean difference = 1.64 μS, SD = 2.32, t(1,13) = -2.62, p = .01). In addition to being consistent with previous engagement research, this experiment functioned as a feasibility study for measuring autonomic arousal in online CME. The current study provides a framework for future studies, which may use neurophysiology to identify the critical autonomic and engagement components associated with effective online learning. © 2014 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education.

  18. Scalability of Parallel Spatial Direct Numerical Simulations on Intel Hypercube and IBM SP1 and SP2

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Hanebutte, Ulf R.; Zubair, Mohammad

    1995-01-01

    The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube and IBM SP1 and SP2 parallel computers is documented. Spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows are computed with the PSDNS code. The feasibility of using the PSDNS to perform transition studies on these computers is examined. The results indicate that the PSDNS approach can effectively be parallelized on a distributed-memory parallel machine by remapping the distributed data structure during the course of the calculation. Scalability information is provided to estimate computational costs and to match the actual costs relative to changes in the number of grid points. By increasing the number of processors, slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup results because the computational cost is dominated by the FFT routine, which yields less than ideal speedups. By using appropriate compile options and optimized library routines on the SP1, the serial code achieves 52-56 Mflops on a single node of the SP1 (45 percent of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a "real world" simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP supercomputer. For the same simulation, 32 nodes of the SP1 and SP2 are required to reach the performance of a Cray C-90. A 32-node SP1 (SP2) configuration is 2.9 (4.6) times faster than a Cray Y/MP for this simulation, while the hypercube is roughly 2 times slower than the Y/MP for this application. KEY WORDS: Spatial direct numerical simulations; incompressible viscous flows; spectral methods; finite differences; parallel computing.

  19. A technique for incorporating the NASA spacelab payload dedicated experiment processor software into the simulation system for the payload crew training complex

    NASA Technical Reports Server (NTRS)

    Bremmer, D. A.

    1986-01-01

    The feasibility of some off-the-shelf microprocessors and state-of-the-art software is assessed (1) as a development system for the principal investigator (PI) in the design of the experiment model, (2) as an example of available technology application for future PIs' experiments, (3) as a system capable of being interactive in the PCTC's simulation of the dedicated experiment processor (DEP), preferably by bringing the PI's DEP software directly into the simulation model, (4) as a system having bus compatibility with host VAX simulation computers, (5) as a system readily interfaced with mock-up panels and information displays, and (6) as a functional system for post-mission data analysis.

  20. [The use of open source software in graphic anatomic reconstructions and in biomechanic simulations].

    PubMed

    Ciobanu, O

    2009-01-01

    The objective of this study was to obtain three-dimensional (3D) images and to perform biomechanical simulations starting from DICOM images obtained by computed tomography (CT). Open-source software was used to prepare digitized 2D images of tissue sections and to create 3D reconstructions from the segmented structures. Finally, the 3D images were used in open-source software to perform biomechanical simulations. This study demonstrates the applicability and feasibility of currently available open-source software for 3D reconstruction and biomechanical simulation. The use of open-source software may improve the efficiency of investments in imaging technologies and in CAD/CAM technologies for implant and prosthesis fabrication, which otherwise require expensive specialized software.

  1. High Performance Computing (HPC)-Enabled Computational Study on the Feasibility of using Shape Memory Alloys for Gas Turbine Blade Actuation

    DTIC Science & Technology

    2016-11-01

    By Kathryn Esham, Luis Bravo, Anindya Ghoshal, Muthuvel Murugan, and Michael ...

  2. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Sewy, D.; Pickering, C.; Sauers, R.

    1984-01-01

    The purpose of phase 2 of the power subsystem automation study was to demonstrate the feasibility of using computer software to manage an aspect of the electrical power subsystem on a space station. The state of the art in expert systems software was investigated in this study. This effort resulted in the demonstration of prototype expert system software for managing one aspect of a simulated space station power subsystem.

  3. Dynamic Load Predictions for Launchers Using Extra-Large Eddy Simulations X-Les

    NASA Astrophysics Data System (ADS)

    Maseland, J. E. J.; Soemarwoto, B. I.; Kok, J. C.

    2005-02-01

    Flow-induced unsteady loads can have a strong impact on the performance and flight characteristics of aerospace vehicles and therefore play a crucial role in their design and operation. Complementary to costly flight tests and delicate wind-tunnel experiments, unsteady loads can be calculated using time-accurate Computational Fluid Dynamics. A capability to accurately predict the dynamic loads on aerospace structures at flight Reynolds numbers can be of great value for the design and analysis of aerospace vehicles. Advanced space launchers are subject to dynamic loads in the base region during the ascent to space. In particular, the engine and nozzle experience aerodynamic pressure fluctuations resulting from massive flow separations. Understanding these phenomena is essential for performance enhancements of future launchers, which will operate larger nozzles. A new hybrid RANS-LES turbulence modelling approach termed eXtra-Large Eddy Simulation (X-LES) holds the promise of capturing the flow structures associated with massive separations and enables the prediction of the broad-band spectrum of dynamic loads. This type of method, which reduces the cost of full LES, has become a focal point, driven by the demand for applicability in an industrial environment. The industrial feasibility of X-LES simulations is demonstrated by computing the unsteady aerodynamic loads on the main-engine nozzle of a generic space launcher configuration. The potential to calculate the dynamic loads is qualitatively assessed for transonic flow conditions in a comparison to wind-tunnel experiments. In terms of turn-around times, X-LES computations are already feasible within the time frames of the development process to support structural design. Key words: massive separated flows; buffet loads; nozzle vibrations; space launchers; time-accurate CFD; composite RANS-LES formulation.

  4. A Google Glass navigation system for ultrasound and fluorescence dual-mode image-guided surgery

    NASA Astrophysics Data System (ADS)

    Zhang, Zeshu; Pei, Jing; Wang, Dong; Hu, Chuanzhen; Ye, Jian; Gan, Qi; Liu, Peng; Yue, Jian; Wang, Benzhong; Shao, Pengfei; Povoski, Stephen P.; Martin, Edward W.; Yilmaz, Alper; Tweedle, Michael F.; Xu, Ronald X.

    2016-03-01

    Surgical resection remains the primary curative intervention for cancer treatment. However, the occurrence of a residual tumor after resection is very common, leading to recurrence of the disease and the need for re-resection. We develop a surgical Google Glass navigation system that combines near-infrared fluorescent imaging and ultrasonography for intraoperative detection of tumor sites and assessment of surgical resection boundaries, as well as for guiding sentinel lymph node (SLN) mapping and biopsy. The system consists of a monochromatic CCD camera, a computer, a Google Glass wearable headset, an ultrasonic machine, and an array of LED light sources. All the above components, except the Google Glass, are connected to a host computer by a USB or HDMI port. A wireless connection is established between the glass and the host computer for image acquisition and data transport tasks. A control program is written in C++ to call OpenCV functions for image calibration, processing, and display. The technical feasibility of the system is tested in both tumor-simulating phantoms and a human subject. When the system is used for simulated phantom resection tasks, the tumor boundaries, invisible to the naked eye, can be clearly visualized with the surgical Google Glass navigation system. The system has also been used in an IRB-approved protocol in a single patient during SLN mapping and biopsy at the First Affiliated Hospital of Anhui Medical University, demonstrating the ability to successfully localize and resect all apparent SLNs. In summary, our tumor-simulating phantom and human subject studies have demonstrated the technical feasibility of using the proposed goggle navigation system during cancer surgery.

  5. Integrating interactive computational modeling in biology curricula.

    PubMed

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  6. Encryption and display of multiple-image information using computer-generated holography with modified GS iterative algorithm

    NASA Astrophysics Data System (ADS)

    Xiao, Dan; Li, Xiaowei; Liu, Su-Juan; Wang, Qiong-Hua

    2018-03-01

    In this paper, a new scheme for multiple-image encryption and display based on computer-generated holography (CGH) and maximum length cellular automata (MLCA) is presented. In this scheme, a computer-generated hologram containing the information of three primitive images is first generated by a modified Gerchberg-Saxton (GS) iterative algorithm using three different fractional orders in the fractional Fourier domain. The hologram is then encrypted using an MLCA mask. The ciphertext can be decrypted using the fractional orders combined with the rules of the MLCA. Numerical simulations and experimental display results verify the validity and feasibility of the proposed scheme.
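
    For reference, the unmodified Gerchberg-Saxton loop looks as follows (ordinary Fourier domain; the paper modifies it to use three fractional orders in the fractional Fourier domain). The target image and iteration count in this sketch are illustrative.

    ```python
    import numpy as np

    def gerchberg_saxton(target_amp, iterations=50):
        """Standard GS loop: find a pure-phase hologram whose Fourier transform
        has the target amplitude. (The paper uses fractional Fourier domains;
        this sketch uses the ordinary FFT for brevity.)"""
        phase = np.random.uniform(0, 2 * np.pi, target_amp.shape)
        for _ in range(iterations):
            far = np.fft.fft2(np.exp(1j * phase))           # propagate
            far = target_amp * np.exp(1j * np.angle(far))   # impose target amplitude
            near = np.fft.ifft2(far)                        # back-propagate
            phase = np.angle(near)                          # keep phase only (CGH)
        return phase

    target = np.zeros((64, 64))
    target[24:40, 24:40] = 1.0                              # illustrative image
    holo_phase = gerchberg_saxton(target)
    recon = np.abs(np.fft.fft2(np.exp(1j * holo_phase)))
    print("correlation with target:",
          np.corrcoef(recon.ravel(), target.ravel())[0, 1])
    ```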

  7. Control of complex physically simulated robot groups

    NASA Astrophysics Data System (ADS)

    Brogan, David C.

    2001-10-01

    Actuated systems such as robots take many forms and sizes but each requires solving the difficult task of utilizing available control inputs to accomplish desired system performance. Coordinated groups of robots provide the opportunity to accomplish more complex tasks, to adapt to changing environmental conditions, and to survive individual failures. Similarly, groups of simulated robots, represented as graphical characters, can test the design of experimental scenarios and provide autonomous interactive counterparts for video games. The complexity of writing control algorithms for these groups currently hinders their use. A combination of biologically inspired heuristics, search strategies, and optimization techniques serve to reduce the complexity of controlling these real and simulated characters and to provide computationally feasible solutions.

  8. Modeling and analysis of pinhole occulter experiment: Initial study phase

    NASA Technical Reports Server (NTRS)

    Vandervoort, R. J.

    1985-01-01

    The feasibility of using a generic simulation, TREETOPS, to simulate the Pinhole/Occulter Facility (P/OF) to be tested on the space shuttle was demonstrated. The baseline control system was used to determine the pointing performance of the P/OF. The task included modeling the structure as a three-body problem (shuttle - instrument pointing system - P/OF), including the flexibility of the 32-meter P/OF boom. Modeling of sensors, actuators, and control algorithms was also required. Detailed mathematical models for the structure, sensors, and actuators are presented, as well as the control algorithm and corresponding design procedure. Closed-loop performance using this controller and computer listings for the simulator are also given.

  9. Information Security Scheme Based on Computational Temporal Ghost Imaging.

    PubMed

    Jiang, Shan; Wang, Yurong; Long, Tao; Meng, Xiangfeng; Yang, Xiulun; Shu, Rong; Sun, Baoqing

    2017-08-09

    An information security scheme based on computational temporal ghost imaging is proposed. A sequence of independent 2D random binary patterns is used as the encryption key to multiply the 1D data stream. The ciphertext is obtained by summing each weighted encryption key pattern. The decryption process is realized by correlation measurement between the encrypted information and the encryption key. Due to the intrinsic high-level randomness of the key, the security of this method is strongly guaranteed. The feasibility of this method and its robustness against both occlusion and additive noise attacks are examined with simulations.
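
    The scheme as described reduces to a ghost-imaging correlation. In the hedged sketch below, a 1D plaintext is multiplied by many random binary key patterns (flattened here; 2D in the paper), each product is summed into one cipher value, and the plaintext is recovered by correlating cipher values with the key. The pattern count and test signal are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N = 256                                    # length of the 1D data stream
    signal = np.zeros(N)
    signal[60:80], signal[150:170] = 1.0, 0.5  # plaintext (illustrative)

    # Encryption: each of K independent random binary patterns multiplies the
    # stream; the ciphertext is the sequence of summed ("bucket") values.
    K = 20000
    patterns = rng.integers(0, 2, size=(K, N)).astype(float)
    cipher = patterns @ signal

    # Decryption: correlation between the ciphertext and the key,
    # as in computational ghost imaging: <b P(t)> - <b><P(t)>.
    recon = patterns.T @ cipher / K - cipher.mean() * patterns.mean(axis=0)
    print("signal region %.3f vs background %.3f" %
          (recon[60:80].mean(), recon[200:220].mean()))
    ```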

  10. Intention, emotion, and action: a neural theory based on semantic pointers.

    PubMed

    Schröder, Tobias; Stewart, Terrence C; Thagard, Paul

    2014-06-01

    We propose a unified theory of intentions as neural processes that integrate representations of states of affairs, actions, and emotional evaluation. We show how this theory provides answers to philosophical questions about the concept of intention, psychological questions about human behavior, computational questions about the relations between belief and action, and neuroscientific questions about how the brain produces actions. Our theory of intention ties together biologically plausible mechanisms for belief, planning, and motor control. The computational feasibility of these mechanisms is shown by a model that simulates psychologically important cases of intention. © 2013 Cognitive Science Society, Inc.

  11. Feasible Path Generation Using Bezier Curves for Car-Like Vehicle

    NASA Astrophysics Data System (ADS)

    Latip, Nor Badariyah Abdul; Omar, Rosli

    2017-08-01

    When planning a collision-free path for an autonomous vehicle, the main criteria to consider are the shortest distance, low computation time, and completeness, i.e., a path can be found if one exists. Beyond that, a feasible path is crucial to guarantee that the vehicle can reach the target destination given its kinematic constraints, such as the non-holonomic constraint and the minimum turning radius. To address these constraints, Bezier curves are applied. In this paper, Bezier curves are modeled and simulated using Matlab software, and the feasibility of the resulting path is analyzed. The Bezier curve is derived from a piecewise-linear pre-planned path. It is found that Bezier curves are capable of making the planned path feasible and could be embedded in a path-planning algorithm for an autonomous vehicle with kinematic constraints. It is concluded that the lengths of the segments of the pre-planned path have to be greater than a nominal value, derived from the vehicle wheelbase, maximum steering angle, and maximum speed, to ensure the path for the autonomous car is feasible.
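
    A minimal feasibility check of this kind evaluates the curvature of the Bezier segment and compares it against the vehicle's minimum turning radius. The sketch below uses a cubic Bezier and the standard parametric curvature formula; the control points and turning radius are assumed values, not those of the paper.

    ```python
    import numpy as np

    def cubic_bezier(P, t):
        """Point, first and second derivative of a cubic Bezier at parameters t."""
        t = np.atleast_1d(t)[:, None]
        b = ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
             + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])
        d1 = (3 * (1 - t) ** 2 * (P[1] - P[0]) + 6 * (1 - t) * t * (P[2] - P[1])
              + 3 * t ** 2 * (P[3] - P[2]))
        d2 = 6 * (1 - t) * (P[2] - 2 * P[1] + P[0]) + 6 * t * (P[3] - 2 * P[2] + P[1])
        return b, d1, d2

    # Smooth a 90-degree corner of a piecewise-linear path (points illustrative).
    P = np.array([[0.0, 0.0], [4.0, 0.0], [8.0, 0.0], [8.0, 4.0]])
    _, d1, d2 = cubic_bezier(P, np.linspace(0, 1, 200))
    curvature = (np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
                 / np.linalg.norm(d1, axis=1) ** 3)
    R_MIN = 2.0   # vehicle minimum turning radius, metres (assumed)
    print("path feasible:", curvature.max() <= 1.0 / R_MIN)
    ```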

  12. Feasibility assessment of the interactive use of a Monte Carlo algorithm in treatment planning for intraoperative electron radiation therapy

    NASA Astrophysics Data System (ADS)

    Guerra, Pedro; Udías, José M.; Herranz, Elena; Santos-Miranda, Juan Antonio; Herraiz, Joaquín L.; Valdivieso, Manlio F.; Rodríguez, Raúl; Calama, Juan A.; Pascau, Javier; Calvo, Felipe A.; Illana, Carlos; Ledesma-Carbayo, María J.; Santos, Andrés

    2014-12-01

    This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre- and intraplanning of intraoperative electron radiation therapy (IOERT) procedures. The MC method that was implemented, which has been integrated into a specific innovative simulation and planning tool, is able to simulate the fate of thousands of particles per second, and it was the aim of this work to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of the protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dosage simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package used widely in medical physics applications. Once accuracy of the underlying simulator was confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison was made with the output from the simulator to identify the conditions under which accurate dose estimations could be obtained in less than 3 min, which is the threshold imposed to allow for interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that it was feasible to use the MC tool intraoperatively and to adjust dose delivery based on the simulation output, without compromising accuracy. The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dosage simulation.
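
    A toy depth-dose Monte Carlo conveys the sampling pattern behind "thousands of particles per second", though a clinical IOERT simulator models far more physics. In the sketch below, free paths are drawn from an exponential distribution and a fixed energy fraction is deposited per interaction; all constants are illustrative, not the customized MC of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    MU = 0.5           # interactions per cm (illustrative water-like medium)
    FRACTION = 0.3     # energy fraction deposited per interaction
    bins = np.zeros(100)                      # 0.1 cm depth bins over 10 cm

    for _ in range(20000):
        z, energy = 0.0, 1.0
        while energy > 1e-3 and z < 10.0:
            z += rng.exponential(1.0 / MU)    # sample next interaction depth
            if z >= 10.0:
                break                          # particle leaves the phantom
            deposit = FRACTION * energy
            bins[int(z / 0.1)] += deposit      # local energy deposition
            energy -= deposit

    print("depth of maximum dose: %.1f cm" % (0.1 * bins.argmax()))
    ```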

  13. Ultrasonic Phased Array Inspection for an Isogrid Structural Element with Cracks

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Tokars, R. P.; Martin, R. E.; Rauser, R. W.; Aldrin, J. C.; Schumacher, E. J.

    2010-01-01

    In this investigation, a T-shaped aluminum alloy isogrid stiffener element used in aerospace applications was inspected with ultrasonic phased array methods. The isogrid stiffener element had various crack configurations emanating from bolt holes. Computational simulation methods were used to mimic the experiments in order to help understand experimental results. The results of this study indicate that it is at least partly feasible to interrogate this type of geometry with the given flaw configurations using phased array ultrasonics. The simulation methods were critical in helping explain the experimental results and, with some limitation, can be used to predict inspection results.

  14. Definition and design of an experiment to test raster scanning with rotating unbalanced-mass devices on gimbaled payloads

    NASA Technical Reports Server (NTRS)

    Lightsey, W. D.; Alhorn, D. C.; Polites, M. E.

    1992-01-01

    An experiment designed to test the feasibility of using rotating unbalanced-mass (RUM) devices for line and raster scanning of gimbaled payloads, while expending very little power, is described. The experiment is configured for ground-based testing, but the scan concept is applicable to ground-based, balloon-borne, and space-based payloads, as well as free-flying spacecraft. The servos used in scanning are defined; the electronic hardware is specified; and a computer simulation model of the system is described. Simulation results are presented that predict system performance and verify the servo designs.

  15. Task 6 - Subtask 1: PNNL Visit by JAEA Researchers to Evaluate the Feasibility of the FLESCOT Code for the Future JAEA Use for the Fukushima Surface Water Environmental Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onishi, Yasuo

    Four Japan Atomic Energy Agency (JAEA) researchers visited Pacific Northwest National Laboratory (PNNL) for seven working days and evaluated the suitability and adaptability of FLESCOT to a JAEA supercomputer system to effectively simulate cesium behavior in dam reservoirs, river mouths, and coastal areas in Fukushima contaminated by the Fukushima Daiichi nuclear accident. During the visit, PNNL showed the JAEA visitors the FLESCOT source code; the user's manual; and a description of FLESCOT covering the program structure, algorithm, solver, boundary condition handling, data definitions, input and output methods, and how to run the code. JAEA also had access to FLESCOT to run with an input data set to evaluate the capacity and feasibility of adapting it to a JAEA supercomputer with massively parallel processors. As a part of this evaluation, PNNL ran FLESCOT for sample cases of the contaminant migration simulation to further describe FLESCOT in action. JAEA and PNNL researchers also evaluated the time spent in each subroutine of FLESCOT, and the JAEA researchers implemented some initial parallelization schemes in FLESCOT. Based on this code evaluation, JAEA and PNNL determined that FLESCOT is applicable to Fukushima lakes/dam reservoirs, river mouth areas, and coastal water, and that it is feasible to implement parallelization for the JAEA supercomputer. In addition, PNNL and JAEA researchers discussed molecular modeling approaches to cesium adsorption mechanisms to enhance the JAEA molecular modeling activities. PNNL and JAEA also discussed specific collaboration on molecular and computational modeling activities.

  16. Computational materials chemistry for carbon capture using porous materials

    NASA Astrophysics Data System (ADS)

    Sharma, Abhishek; Huang, Runhong; Malani, Ateeque; Babarao, Ravichandar

    2017-11-01

    Control over carbon dioxide (CO2) release is extremely important to decrease its hazardous effects on the environment, such as global warming and ocean acidification. For CO2 capture and storage at industrial point sources, nanoporous materials offer an energetically viable and economically feasible approach compared to chemisorption in amines. There is a growing need to design and synthesize new nanoporous materials with enhanced capability for carbon capture. Computational materials chemistry offers tools to screen and design cost-effective materials for CO2 separation and storage, and it is less time consuming than trial-and-error experimental synthesis. It also provides a guide to synthesizing new materials with better properties for real-world applications. In this review, we briefly highlight the various carbon capture technologies and the need for computational materials design for carbon capture. The review discusses the commonly used computational chemistry-based simulation methods for structural characterization and prediction of thermodynamic properties of adsorbed gases in porous materials. Finally, simulation studies reported on various potential porous materials, such as zeolites, porous carbons, metal organic frameworks (MOFs) and covalent organic frameworks (COFs), for CO2 capture are discussed.

  17. Quantum simulation of quantum field theory using continuous variables

    DOE PAGES

    Marshall, Kevin; Pooser, Raphael C.; Siopsis, George; ...

    2015-12-14

    Much progress has been made in the field of quantum computing using continuous variables over the last couple of years. This includes the generation of extremely large entangled cluster states (10,000 modes, in fact) as well as a fault-tolerant architecture. This has led to the point that continuous-variable quantum computing can indeed be thought of as a viable alternative for universal quantum computing. With that in mind, we present a new algorithm for continuous-variable quantum computers which gives an exponential speedup over the best known classical methods. Specifically, this relates to efficiently calculating the scattering amplitudes in scalar bosonic quantum field theory, a problem that is known to be hard using a classical computer. Finally, we give an experimental implementation based on cluster states that is feasible with today's technology.

  18. Quantum simulation of quantum field theory using continuous variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, Kevin; Pooser, Raphael C.; Siopsis, George

    Much progress has been made in the field of quantum computing using continuous variables over the last couple of years. This includes the generation of extremely large entangled cluster states (10,000 modes, in fact) as well as a fault-tolerant architecture. This has led to the point that continuous-variable quantum computing can indeed be thought of as a viable alternative for universal quantum computing. With that in mind, we present a new algorithm for continuous-variable quantum computers which gives an exponential speedup over the best known classical methods. Specifically, this relates to efficiently calculating the scattering amplitudes in scalar bosonic quantum field theory, a problem that is known to be hard using a classical computer. Finally, we give an experimental implementation based on cluster states that is feasible with today's technology.

  19. Unsteady numerical simulation of a round jet with impinging microjets for noise suppression

    PubMed Central

    Lew, Phoi-Tack; Najafi-Yazdi, Alireza; Mongeau, Luc

    2013-01-01

    The objective of this study was to determine the feasibility of a lattice-Boltzmann method (LBM)-Large Eddy Simulation methodology for the prediction of sound radiation from a round jet-microjet combination. The distinct advantage of LBM over traditional computational fluid dynamics methods is its ease of handling problems with complex geometries. Numerical simulations of an isothermal Mach 0.5, Re_D = 1 × 10^5 circular jet (D_j = 0.0508 m) with and without the presence of 18 microjets (D_mj = 1 mm) were performed. The presence of microjets resulted in a decrease in the axial turbulence intensity and turbulent kinetic energy. The associated decrease in radiated sound pressure level was around 1 dB. The far-field sound was computed using the porous Ffowcs Williams-Hawkings surface integral acoustic method. The trend obtained is in qualitative agreement with experimental observations. The results of this study support the accuracy of LBM-based numerical simulations for predictions of the effects of noise suppression devices on the radiated sound power. PMID:23967931

  20. An efficient and scalable deformable model for virtual reality-based medical applications.

    PubMed

    Choi, Kup-Sze; Sun, Hanqiu; Heng, Pheng-Ann

    2004-09-01

    Modeling of tissue deformation is of great importance to virtual reality (VR)-based medical simulations. Considerable effort has been dedicated to the development of interactively deformable virtual tissues. In this paper, an efficient and scalable deformable model is presented for virtual-reality-based medical applications. It considers deformation as a localized force transmittal process which is governed by algorithms based on breadth-first search (BFS). The computational speed is scalable to facilitate real-time interaction by adjusting the penetration depth. Simulated annealing (SA) algorithms are developed to optimize the model parameters by using the reference data generated with the linear static finite element method (FEM). The mechanical behavior and timing performance of the model have been evaluated. The model has been applied to simulate the typical behavior of living tissues and anisotropic materials. Integration with a haptic device has also been achieved on a generic personal computer (PC) platform. The proposed technique provides a feasible solution for VR-based medical simulations and has the potential for multi-user collaborative work in virtual environment.
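
    The BFS-governed force transmittal can be sketched directly: displacement decays with graph depth from the contacted node, and the traversal is cut off at the penetration depth, which is the scalability knob the abstract describes. The adjacency, attenuation factor, and depth below are illustrative, not the paper's calibrated model.

    ```python
    from collections import deque

    def propagate(adjacency, contact, force, max_depth, attenuation=0.5):
        """BFS force transmittal: displacement decays with graph depth from the
        contacted node; max_depth is the 'penetration depth' that trades
        accuracy for speed."""
        disp = {contact: force}
        depth = {contact: 0}
        queue = deque([contact])
        while queue:
            node = queue.popleft()
            if depth[node] == max_depth:
                continue                      # cut off beyond penetration depth
            for nbr in adjacency[node]:
                if nbr not in depth:          # first visit fixes BFS depth
                    depth[nbr] = depth[node] + 1
                    disp[nbr] = disp[node] * attenuation
                    queue.append(nbr)
        return disp

    # Tiny 1D "mesh": node i touches i-1 and i+1 (illustrative adjacency).
    adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
    print(propagate(adj, contact=4, force=1.0, max_depth=3))
    ```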

  1. Comparative simulation study of chemical synthesis of functional DADNE material.

    PubMed

    Liu, Min Hsien; Liu, Chuan Wen

    2017-01-01

    Amorphous molecular simulation to model the reaction species in the synthesis of chemically inert and energetic 1,1-diamino-2,2-dinitroethene (DADNE) explosive material was performed in this work. Nitromethane was selected as the starting reactant to undergo halogenation, nitration, deprotonation, intermolecular condensation, and dehydration to produce the target DADNE product. The Materials Studio (MS) forcite program allowed fast energy calculations and reliable geometric optimization of all aqueous molecular reaction systems (0.1-0.5 M) at 283 K and 298 K. The MS forcite-computed and Gaussian polarizable continuum model (PCM)-computed results were analyzed and compared in order to explore feasible reaction pathways under suitable conditions for the synthesis of DADNE. Through theoretical simulation, the findings revealed that synthesis was possible, and a total energy barrier of 449.6 kJ mol -1 needed to be overcome in order to carry out the reaction according to MS calculation of the energy barriers at each stage at 283 K, as shown by the reaction profiles. Local analysis of intermolecular interaction, together with calculation of the stabilization energy of each reaction system, provided information that can be used as a reference regarding molecular integrated stability. Graphical Abstract Materials Studio software has been suggested for the computation and simulation of DADNE synthesis.

  2. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    NASA Astrophysics Data System (ADS)

    Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li

    2018-03-01

    In the simulation of dendritic growth, computational efficiency and problem scale have an extremely important influence on the usefulness of a three-dimensional phase-field model. Thus, seeking a high-performance calculation method to improve computational efficiency and to expand the problem scale is of great significance to research on the microstructure of materials. A high-performance calculation method based on the MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy under the condition of multi-physical-process coupling. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model that has been introduced, two optimization schemes, non-blocking communication optimization and overlap of MPI and GPU computing, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model improves the computational efficiency of the three-dimensional phase-field simulation markedly, by a factor of 13 relative to a single GPU, and the problem scale has been expanded to 8193. The feasibility of the two optimization schemes is shown, and the overlap of MPI and GPU computing has the better performance, 1.7 times that of the basic multi-GPU model when 21 GPUs are used.
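
    The overlap optimization has a standard shape: post non-blocking halo exchanges, update the interior while the messages are in flight, then finish the boundary cells. The mpi4py sketch below uses a 1D diffusion stencil in NumPy as a stand-in for the phase-field update and its CUDA kernels; the ring topology, tags, sizes, and coefficients are illustrative assumptions, not the paper's scheme.

    ```python
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    left, right = (rank - 1) % size, (rank + 1) % size   # periodic ring of ranks

    u = np.random.rand(1002)                  # local slab, one ghost cell per side
    send_l, send_r = u[1:2].copy(), u[-2:-1].copy()
    recv_l, recv_r = np.empty(1), np.empty(1)

    # Post non-blocking halo exchange (tag 0: leftward face, tag 1: rightward face).
    reqs = [comm.Isend(send_l, dest=left, tag=0),
            comm.Isend(send_r, dest=right, tag=1),
            comm.Irecv(recv_l, source=left, tag=1),
            comm.Irecv(recv_r, source=right, tag=0)]

    # Overlapped work: interior cells need no ghost data. In the real code this
    # is where the GPU kernel for the phase-field update would be launched.
    interior = u[2:-2] + 0.1 * (u[3:-1] - 2 * u[2:-2] + u[1:-3])

    MPI.Request.Waitall(reqs)                 # halo data has now arrived
    u[0], u[-1] = recv_l[0], recv_r[0]
    u[1] += 0.1 * (u[2] - 2 * u[1] + u[0])    # finish the two boundary cells
    u[-2] += 0.1 * (u[-1] - 2 * u[-2] + u[-3])
    u[2:-2] = interior
    ```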

  3. Feasibility study of a procedure to detect and warn of low level wind shear

    NASA Technical Reports Server (NTRS)

    Turkel, B. S.; Kessel, P. A.; Frost, W.

    1981-01-01

    A Doppler radar system that provides an aircraft with advance warning of longitudinal wind shear is described. The system uses a Doppler radar beamed along the glide slope, linked with an on-line microprocessor containing a two-dimensional, three-degree-of-freedom model of the motion of an aircraft, including pilot/autopilot control. The Doppler-measured longitudinal glide slope winds are entered into the aircraft motion model, and a simulated controlled aircraft trajectory is calculated. Several flight path deterioration parameters are calculated from the computed aircraft trajectory information. The aircraft trajectory program, pilot control models, and the flight path deterioration parameters are discussed. The performance of the computer model is compared with that of a test pilot in a flight simulator flying through longitudinal and vertical wind fields characteristic of a thunderstorm.

  4. What can we learn from in-soil imaging of a live plant: X-ray Computed Tomography and 3D numerical simulation of root-soil system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofan; Varga, Tamas; Liu, Chongxuan

    Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere. X-ray Computed Tomography (XCT) has been proven to be an effective tool for non-invasive root imaging and analysis. A combination of XCT, open-source software, and in-house developed code was used to non-invasively image a prairie dropseed (Sporobolus heterolepis) specimen, segment the root data to obtain a 3D image of the root structure, and extract quantitative information from the 3D data, respectively. Based on the explicitly-resolved root structure, pore-scale computational fluid dynamics (CFD) simulations were applied to numerically investigate the root-soil-groundwater system. The plant root conductivity, soil hydraulic conductivity, and transpiration rate were shown to control the groundwater distribution. Furthermore, the coupled imaging-modeling approach demonstrates a realistic platform to investigate rhizosphere flow processes and would be feasible to provide useful information linked to upscaled models.

  5. What can we learn from in-soil imaging of a live plant: X-ray Computed Tomography and 3D numerical simulation of root-soil system

    DOE PAGES

    Yang, Xiaofan; Varga, Tamas; Liu, Chongxuan; ...

    2017-05-04

    Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere. X-ray Computed Tomography (XCT) has been proven to be an effective tool for non-invasive root imaging and analysis. A combination of XCT, open-source software, and in-house developed code was used to non-invasively image a prairie dropseed (Sporobolus heterolepis) specimen, segment the root data to obtain a 3D image of the root structure, and extract quantitative information from the 3D data, respectively. Based on the explicitly-resolved root structure, pore-scale computational fluid dynamics (CFD) simulations were applied to numerically investigate the root-soil-groundwater system. The plant root conductivity, soil hydraulic conductivity, and transpiration rate were shown to control the groundwater distribution. Furthermore, the coupled imaging-modeling approach demonstrates a realistic platform to investigate rhizosphere flow processes and would be feasible to provide useful information linked to upscaled models.

  6. Computer-aided auscultation learning system for nursing technique instruction.

    PubMed

    Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih

    2008-01-01

    Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a mannequin equipped with a sound simulator is used to teach auscultation techniques to groups via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. The advancement of electronic and digital signal processing technologies makes it feasible to simulate this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. The system provides teachers with signal recording and processing of lung sounds and immediate playback of lung sounds for students. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated to verify the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated-measures experiment. The results of a paired t test showed that the auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.
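
    As a concrete illustration of the evaluation described, a paired t test on repeated measurements from the same students can be run with standard tools; the pre/post scores below are invented placeholders, not the study's data.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical pre- and post-training auscultation scores for 15 students
    # (placeholders; the study's actual data are not given in the abstract).
    pre  = np.array([52, 60, 48, 55, 63, 50, 58, 45, 62, 49, 57, 53, 61, 46, 54])
    post = np.array([70, 75, 66, 72, 80, 68, 74, 65, 79, 67, 73, 71, 78, 64, 70])

    # Paired (repeated-measures) t test: does training change the mean score?
    t_stat, p_value = stats.ttest_rel(post, pre)
    print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
    ```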

  7. Fully implicit adaptive mesh refinement MHD algorithm

    NASA Astrophysics Data System (ADS)

    Philip, Bobby

    2005-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and to employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations will be presented for a variety of problems.

  8. A Dual-Beam Irradiation Facility for a Novel Hybrid Cancer Therapy

    NASA Astrophysics Data System (ADS)

    Sabchevski, Svilen Petrov; Idehara, Toshitaka; Ishiyama, Shintaro; Miyoshi, Norio; Tatsukawa, Toshiaki

    2013-01-01

    In this paper we present the main ideas and discuss both the feasibility and the conceptual design of a novel hybrid technique and equipment for an experimental cancer therapy based on the simultaneous and/or sequential application of two beams: a beam of neutrons and a CW (continuous wave) or intermittent sub-terahertz wave beam produced by a gyrotron. The main simulation tools for the development of the computer-aided design (CAD) of the prospective experimental facility for clinical trials and study of such a new medical technology are briefly reviewed. Some tasks for a further continuation of this feasibility analysis are formulated as well.

  9. Development of capability for microtopography-resolving simulations of hydrologic processes in permafrost affected regions

    NASA Astrophysics Data System (ADS)

    Painter, S.; Moulton, J. D.; Berndt, M.; Coon, E.; Garimella, R.; Lewis, K. C.; Manzini, G.; Mishra, P.; Travis, B. J.; Wilson, C. J.

    2012-12-01

    The frozen soils of the Arctic and subarctic regions contain vast amounts of stored organic carbon. This carbon is vulnerable to release to the atmosphere as temperatures warm and permafrost degrades. Understanding the response of the subsurface and surface hydrologic system to degrading permafrost is key to understanding the rate, timing, and chemical form of potential carbon releases to the atmosphere. Simulating the hydrologic system in degrading permafrost regions is challenging because of the potential for topographic evolution and associated drainage network reorganization as permafrost thaws and massive ground ice melts. The critical process models required for simulating hydrology include subsurface thermal hydrology of freezing/thawing soils, thermal processes within ice wedges, mechanical deformation processes, overland flow, and surface energy balances including snow dynamics. A new simulation tool, the Arctic Terrestrial Simulator (ATS), is being developed to simulate these coupled processes. The computational infrastructure must accommodate fully unstructured grids that track evolving topography, allow accurate solutions on distorted grids, provide robust and efficient solutions on highly parallel computer architectures, and enable flexibility in the strategies for coupling among the various processes. The ATS is based on Amanzi (Moulton et al. 2012), an object-oriented multi-process simulator written in C++ that provides much of the necessary computational infrastructure. Status and plans for the ATS including major hydrologic process models and validation strategies will be presented. Highly parallel simulations of overland flow using high-resolution digital elevation maps of polygonal patterned ground landscapes demonstrate the feasibility of the approach. Simulations coupling three-phase subsurface thermal hydrology with a simple thaw-induced subsidence model illustrate the strong feedbacks among the processes. D. Moulton, M. Berndt, M. Day, J. Meza, et al., High-Level Design of Amanzi, the Multi-Process High Performance Computing Simulator, Technical Report ASCEM-HPC-2011-03-1, DOE Environmental Management, 2012.

  10. A Dancing Black Hole

    NASA Astrophysics Data System (ADS)

    Shoemaker, Deirdre; Smith, Kenneth; Schnetter, Erik; Fiske, David; Laguna, Pablo; Pullin, Jorge

    2002-04-01

    Recently, stationary black holes have been successfully simulated for up to times of approximately 600-1000M, where M is the mass of the black hole. Considering that the expected burst of gravitational radiation from a binary black hole merger would last approximately 200-500M, black hole codes are approaching the point where simulations of mergers may be feasible. We will present two types of simulations of single black holes obtained with a code based on the Baumgarte-Shapiro-Shibata-Nakamura formulation of the Einstein evolution equations. One type of simulations addresses the stability properties of stationary black hole evolutions. The second type of simulations demonstrates the ability of our code to move a black hole through the computational domain. This is accomplished by shifting the stationary black hole solution to a coordinate system in which the location of the black hole is time dependent.

  11. Feasibility of four-dimensional preoperative simulation for elbow debridement arthroplasty.

    PubMed

    Yamamoto, Michiro; Murakami, Yukimi; Iwatsuki, Katsuyuki; Kurimoto, Shigeru; Hirata, Hitoshi

    2016-04-02

    Recent advances in imaging modalities have enabled three-dimensional preoperative simulation. A four-dimensional preoperative simulation system would be useful for debridement arthroplasty of primary degenerative elbow osteoarthritis because it would be able to detect the impingement lesions. We developed a four-dimensional simulation system by adding the anatomical axis to the three-dimensional computed tomography scan data of the affected arm in one position. Eleven patients with primary degenerative elbow osteoarthritis were included. A "two rings" method was used to calculate the flexion-extension axis of the elbow by converting the surface of the trochlea and capitellum into two rings. A four-dimensional simulation movie was created and showed the optimal range of motion and the impingement area requiring excision. To evaluate the reliability of the flexion-extension axis, interobserver and intraobserver reliabilities regarding the assessment of bony overlap volumes were calculated twice for each patient by two authors. Patients were treated by open or arthroscopic debridement arthroplasties. Pre- and postoperative examinations included elbow range of motion measurement, and completion of the patient-rated questionnaire Hand20, Japanese Orthopaedic Association-Japan Elbow Society Elbow Function Score, and the Mayo Elbow Performance Score. Measurement of the bony overlap volume showed an intraobserver intraclass correlation coefficient of 0.93 and 0.90, and an interobserver intraclass correlation coefficient of 0.94. The mean elbow flexion-extension arc significantly improved from 101° to 125°. The mean Hand20 score significantly improved from 52 to 22. The mean Japanese Orthopaedic Association-Japan Elbow Society Elbow Function Score significantly improved from 67 to 88. The mean Mayo Elbow Performance Score significantly improved from 71 to 91 at the final follow-up evaluation. We showed that four-dimensional, preoperative simulation can be generated by adding the rotation axis to the one-position, three-dimensional computed tomography image of the affected arm. This method is feasible for elbow debridement arthroplasty.
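
    The "two rings" computation itself is not detailed in the abstract. One plausible simplification, shown below purely for illustration, takes the flexion-extension axis as the line through the centroids of the two ring point sets derived from the trochlea and capitellum surfaces; this is an assumed stand-in, not the authors' algorithm.

    ```python
    import numpy as np

    def flexion_extension_axis(trochlea_pts, capitellum_pts):
        """Approximate the elbow flexion-extension axis as the line joining the
        centroids of two ring-shaped point sets segmented from CT (an assumed
        simplification of the 'two rings' idea, not the published method)."""
        c1 = np.asarray(trochlea_pts).mean(axis=0)
        c2 = np.asarray(capitellum_pts).mean(axis=0)
        direction = (c2 - c1) / np.linalg.norm(c2 - c1)
        return c1, direction  # a point on the axis and its unit direction

    # Toy rings for demonstration: two circles offset along z.
    theta = np.linspace(0, 2 * np.pi, 50)
    ring1 = np.c_[10 * np.cos(theta), 10 * np.sin(theta), np.zeros_like(theta)]
    ring2 = np.c_[8 * np.cos(theta), 8 * np.sin(theta), 30 * np.ones_like(theta)]
    point, axis = flexion_extension_axis(ring1, ring2)
    print(point, axis)
    ```

    Rotating the distal segment about such an axis is what turns the single static, one-position CT dataset into the fourth (motion) dimension described above.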

  12. Parallelisation study of a three-dimensional environmental flow model

    NASA Astrophysics Data System (ADS)

    O'Donncha, Fearghal; Ragnoli, Emanuele; Suits, Frank

    2014-03-01

    There are many simulation codes in the geosciences that are serial and cannot take advantage of the parallel computational resources commonly available today. One model important for our work in coastal ocean current modelling is EFDC, a Fortran 77 code configured for optimal deployment on vector computers. In order to take advantage of our cache-based blade computing system, we restructured EFDC from serial to parallel, thereby allowing us to run existing models more quickly and to simulate larger and more detailed models that were previously impractical. Since the source code for EFDC is extensive and involves detailed computation, it is important to do such a port in a manner that limits changes to the files, while achieving the desired speedup. We describe a parallelisation strategy involving surgical changes to the source files to minimise error-prone alteration of the underlying computations, while allowing load-balanced domain decomposition for efficient execution on a commodity cluster. The use of the conjugate gradient solver posed particular challenges, because its implicit non-local communication hinders standard domain partitioning schemes; a number of techniques are discussed to address this in a feasible, computationally efficient manner. The parallel implementation demonstrates good scalability in combination with a novel domain partitioning scheme that specifically handles the mixed water/land regions commonly found in coastal simulations. The approach presented here represents a practical methodology for rejuvenating legacy code on a commodity blade cluster with reasonable effort; our solution has direct application to other similar codes in the geosciences.
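
    The abstract names load-balanced decomposition over mixed water/land regions without giving the scheme; the sketch below shows one simple weighting idea (illustrative only, not EFDC's actual partitioner): assign contiguous column blocks so that each rank receives roughly the same number of active (water) cells, since land cells carry no compute load.

    ```python
    import numpy as np

    def partition_columns(water_cells_per_column, n_ranks):
        """Split grid columns into contiguous blocks whose water-cell totals are
        roughly equal, so each rank does similar work (illustrative only)."""
        total = water_cells_per_column.sum()
        target = total / n_ranks
        bounds, acc = [0], 0.0
        for i, w in enumerate(water_cells_per_column):
            acc += w
            # cut whenever the running load reaches the next multiple of target
            if acc >= target * len(bounds) and len(bounds) < n_ranks:
                bounds.append(i + 1)
        bounds.append(len(water_cells_per_column))
        return list(zip(bounds[:-1], bounds[1:]))  # [start, end) column ranges

    # Example: a coastal domain where half the columns are mostly land.
    cols = np.r_[np.random.randint(0, 5, 50), np.random.randint(20, 40, 50)]
    print(partition_columns(cols, 4))
    ```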

  13. Accelerating cardiac bidomain simulations using graphics processing units.

    PubMed

    Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G

    2012-08-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.

  14. Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units

    PubMed Central

    Neic, Aurel; Liebmann, Manfred; Hoetzl, Elena; Mitchell, Lawrence; Vigmond, Edward J.; Haase, Gundolf

    2013-01-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6–20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation, which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility. PMID:22692867

  15. Kinetic particle simulation of discharge and wall erosion of a Hall thruster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Shinatora; Komurasaki, Kimiya; Arakawa, Yoshihiro

    2013-06-15

    The primary lifetime-limiting factor of Hall thrusters is wall erosion caused by ion-induced sputtering, which is dominated by the dielectric wall sheath and pre-sheath. So far, however, only fluid or hybrid simulation models have been applied to wall erosion and lifetime studies, and in such models this non-quasi-neutral and non-equilibrium region cannot be treated directly. Thus, in this study, a 2D fully kinetic particle-in-cell model is presented for Hall thruster discharge and lifetime simulation. Because fully kinetic lifetime simulation had not previously been achieved owing to its high computational cost, a semi-implicit field solver and the technique of mass ratio manipulation were employed to accelerate the computation. Other artificial manipulations, such as permittivity or geometry scaling, were not used, in order to avoid unrecoverable changes to the physics. Additionally, a new physics-recovering model for the mass ratio is presented for better preservation of electron mobility in the weakly magnetically confined plasma region. The validity of the presented model was examined through various parametric studies, and the thrust performance and wall erosion rate of a laboratory-model magnetic-layer-type Hall thruster were modeled for different operating conditions. The simulation results successfully reproduced the measurement results with typically less than 10% discrepancy, without tuning any numerical parameters. It is also shown that the computational cost was reduced to a level at which fully kinetic Hall thruster lifetime simulation is feasible.

  16. A framework for different levels of integration of computational models into web-based virtual patients.

    PubMed

    Kononowicz, Andrzej A; Narracott, Andrew J; Manini, Simone; Bayley, Martin J; Lawford, Patricia V; McCormack, Keith; Zary, Nabil

    2014-01-23

    Virtual patients are increasingly common tools used in health care education to foster learning of clinical reasoning skills. One potential way to expand their functionality is to augment virtual patients' interactivity by enriching them with computational models of physiological and pathological processes. The primary goal of this paper was to propose a conceptual framework for the integration of computational models within virtual patients, with particular focus on (1) characteristics to be addressed while preparing the integration, (2) the extent of the integration, (3) strategies to achieve integration, and (4) methods for evaluating the feasibility of integration. An additional goal was to pilot the first investigation of changing framework variables on altering perceptions of integration. The framework was constructed using an iterative process informed by Soft System Methodology. The Virtual Physiological Human (VPH) initiative has been used as a source of new computational models. The technical challenges associated with development of virtual patients enhanced by computational models are discussed from the perspectives of a number of different stakeholders. Concrete design and evaluation steps are discussed in the context of an exemplar virtual patient employing the results of the VPH ARCH project, as well as improvements for future iterations. The proposed framework consists of four main elements. The first element is a list of feasibility features characterizing the integration process from three perspectives: the computational modelling researcher, the health care educationalist, and the virtual patient system developer. The second element included three integration levels: basic, where a single set of simulation outcomes is generated for specific nodes in the activity graph; intermediate, involving pre-generation of simulation datasets over a range of input parameters; advanced, including dynamic solution of the model. The third element is the description of four integration strategies, and the last element consisted of evaluation profiles specifying the relevant feasibility features and acceptance thresholds for specific purposes. The group of experts who evaluated the virtual patient exemplar found higher integration more interesting, but at the same time they were more concerned with the validity of the result. The observed differences were not statistically significant. This paper outlines a framework for the integration of computational models into virtual patients. The opportunities and challenges of model exploitation are discussed from a number of user perspectives, considering different levels of model integration. The long-term aim for future research is to isolate the most crucial factors in the framework and to determine their influence on the integration outcome.

  17. A Framework for Different Levels of Integration of Computational Models Into Web-Based Virtual Patients

    PubMed Central

    Narracott, Andrew J; Manini, Simone; Bayley, Martin J; Lawford, Patricia V; McCormack, Keith; Zary, Nabil

    2014-01-01

    Background Virtual patients are increasingly common tools used in health care education to foster learning of clinical reasoning skills. One potential way to expand their functionality is to augment virtual patients’ interactivity by enriching them with computational models of physiological and pathological processes. Objective The primary goal of this paper was to propose a conceptual framework for the integration of computational models within virtual patients, with particular focus on (1) characteristics to be addressed while preparing the integration, (2) the extent of the integration, (3) strategies to achieve integration, and (4) methods for evaluating the feasibility of integration. An additional goal was to pilot the first investigation of changing framework variables on altering perceptions of integration. Methods The framework was constructed using an iterative process informed by Soft System Methodology. The Virtual Physiological Human (VPH) initiative has been used as a source of new computational models. The technical challenges associated with development of virtual patients enhanced by computational models are discussed from the perspectives of a number of different stakeholders. Concrete design and evaluation steps are discussed in the context of an exemplar virtual patient employing the results of the VPH ARCH project, as well as improvements for future iterations. Results The proposed framework consists of four main elements. The first element is a list of feasibility features characterizing the integration process from three perspectives: the computational modelling researcher, the health care educationalist, and the virtual patient system developer. The second element included three integration levels: basic, where a single set of simulation outcomes is generated for specific nodes in the activity graph; intermediate, involving pre-generation of simulation datasets over a range of input parameters; advanced, including dynamic solution of the model. The third element is the description of four integration strategies, and the last element consisted of evaluation profiles specifying the relevant feasibility features and acceptance thresholds for specific purposes. The group of experts who evaluated the virtual patient exemplar found higher integration more interesting, but at the same time they were more concerned with the validity of the result. The observed differences were not statistically significant. Conclusions This paper outlines a framework for the integration of computational models into virtual patients. The opportunities and challenges of model exploitation are discussed from a number of user perspectives, considering different levels of model integration. The long-term aim for future research is to isolate the most crucial factors in the framework and to determine their influence on the integration outcome. PMID:24463466

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Ketki; Kim, Yong-Ha; Yiacoumi, Sotira

    The mixing process of fresh water and seawater releases a significant amount of energy and is a potential source of renewable energy. The so-called 'blue energy', or salinity-gradient energy, can be harvested by a device consisting of carbon electrodes immersed in an electrolyte solution, based on the principle of capacitive double layer expansion (CDLE). In this study, we have investigated the feasibility of energy production based on the CDLE principle. Experiments and computer simulations were used to study the process. Mesoporous carbon materials, synthesized at the Oak Ridge National Laboratory, were used as electrode materials in the experiments. Neutron imaging of the blue energy cycle was conducted with cylindrical mesoporous carbon electrodes and 0.5 M lithium chloride as the electrolyte solution. For experiments conducted at 0.6 V and 0.9 V applied potential, a voltage increase of 0.061 V and 0.054 V was observed, respectively. From sequences of neutron images obtained for each step of the blue energy cycle, information on the direction and magnitude of lithium ion transport was obtained. A computer code was developed to simulate the process. Experimental data and computer simulations allowed us to predict energy production.

  19. A glacier runoff extension to the Precipitation Runoff Modeling System

    USGS Publications Warehouse

    Van Beusekom, Ashley E.; Viger, Roland

    2016-01-01

    A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed-parameter, physical-process hydrological simulation code. The extension does not require extensive on-glacier measurements or computational expense, but still relies on physical principles over empirical relations as much as is feasible while maintaining model usability. PRMSglacier is validated on two basins in Alaska, the Wolverine and Gulkana Glacier basins, which have been studied since 1966 and have a substantial amount of data with which to test model performance over a long period covering a wide range of climatic and hydrologic conditions. When error in field measurements is considered, the Nash-Sutcliffe efficiencies of streamflow are 0.87 and 0.86, the absolute bias fractions of the winter mass balance simulations are 0.10 and 0.08, and the absolute bias fractions of the summer mass balances are 0.01 and 0.03, all computed over 42 years for the Wolverine and Gulkana Glacier basins, respectively. Without taking measurement error into account, the values are still within the range achieved by more computationally expensive codes tested over shorter time periods.
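
    For reference, the Nash-Sutcliffe efficiency quoted above is a standard goodness-of-fit measure for streamflow simulations; a minimal implementation of the textbook definition (not code from PRMSglacier) is:

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is
        no better than predicting the mean of the observations."""
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
            (observed - observed.mean()) ** 2)

    print(nash_sutcliffe([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))  # 0.98
    ```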

  20. The Cell Collective: Toward an open and collaborative approach to systems biology

    PubMed Central

    2012-01-01

    Background Despite decades of new discoveries in biomedical research, the overwhelming complexity of cells has been a significant barrier to a fundamental understanding of how cells work as a whole. As such, the holistic study of biochemical pathways requires computer modeling. Due to the complexity of cells, it is not feasible for one person or group to model the cell in its entirety. Results The Cell Collective is a platform that allows the world-wide scientific community to create these models collectively. Its interface enables users to build and use models without specifying any mathematical equations or computer code - addressing one of the major hurdles with computational research. In addition, this platform allows scientists to simulate and analyze the models in real-time on the web, including the ability to simulate loss/gain of function and test what-if scenarios in real time. Conclusions The Cell Collective is a web-based platform that enables laboratory scientists from across the globe to collaboratively build large-scale models of various biological processes, and simulate/analyze them in real time. In this manuscript, we show examples of its application to a large-scale model of signal transduction. PMID:22871178

  1. X-ray-induced acoustic computed tomography for 3D breast imaging: A simulation study.

    PubMed

    Tang, Shanshan; Yang, Kai; Chen, Yong; Xiang, Liangzhong

    2018-04-01

    The objective of this study is to demonstrate the feasibility of x-ray-induced acoustic computed tomography (XACT) for early detection of non-palpable breast microcalcifications (μCa) in three dimensions (3D). The proposed technique provides true 3D imaging of the breast volume, which overcomes the disadvantage of tissue superposition in mammography. A 3D breast digital phantom was rendered from two-dimensional (2D) breast CT slices. Three tissue types, namely skin, adipose tissue, and glandular tissue, were labeled in the 3D breast phantom. μCas were manually embedded at different locations inside the breast phantom. For each tissue type, the initial pressure rise caused by the x-ray-induced acoustic (XA) effect was calculated according to its thermoacoustic properties. The XA wave's propagation from the point of generation and its detection by an ultrasound detector array were simulated with the MATLAB k-Wave toolbox. The 3D breast XACT volume with μCa was acquired without tissue superposition, and the system was characterized using μCas placed at different locations. The simulation results illustrate that the proposed breast XACT system can show the μCa cluster in 3D without any tissue superposition. Meanwhile, μCa as small as 100 μm in size can be detected with high imaging contrast, high signal-to-noise ratio (SNR), and high contrast-to-noise ratio (CNR). The dose required by the proposed XACT configuration was calculated to be 0.4 mGy for a 4.5 cm-thick compressed breast, one-tenth of the dose level of a typical two-view mammography examination for a breast with the same compression thickness. This study is an initial exploration of the feasibility of 3D breast XACT; system feasibility and characterization were illustrated through a 3D breast phantom and simulations. A 3D breast XACT system with the proposed configuration has great potential to be applied as a low-dose screening and diagnostic technique for early non-palpable lesions in the breast. © 2018 American Association of Physicists in Medicine.
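
    The per-tissue initial pressure rise mentioned above follows, in its standard textbook form, from the thermoacoustic relation p0 = Γ · μ · F (Grüneisen parameter times absorbed energy density). The sketch below evaluates that relation with illustrative parameter values, which are assumptions and not the study's actual inputs; in the study itself, such initial pressure maps are then propagated and recorded with the k-Wave toolbox.

    ```python
    # Standard thermoacoustic initial-pressure relation: p0 = Gamma * mu_en * F,
    # with Gamma the dimensionless Grueneisen parameter, mu_en an effective x-ray
    # energy-absorption coefficient (1/m), and F the x-ray fluence (J/m^2).
    # All values below are illustrative assumptions, not the paper's inputs.
    tissues = {
        # name: (Gamma, mu_en [1/m])
        "adipose":            (0.7, 20.0),
        "glandular":          (0.8, 25.0),
        "microcalcification": (0.9, 250.0),  # strong absorber -> high contrast
    }
    fluence = 1.0e-3  # J/m^2 per x-ray pulse (illustrative)

    for name, (gamma, mu_en) in tissues.items():
        p0 = gamma * mu_en * fluence  # initial pressure rise (Pa)
        print(f"{name:20s} p0 = {p0:.2e} Pa")
    ```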

  2. Utilizing Three-Dimensional Printing Technology to Assess the Feasibility of High-Fidelity Synthetic Ventricular Septal Defect Models for Simulation in Medical Education.

    PubMed

    Costello, John P; Olivieri, Laura J; Krieger, Axel; Thabit, Omar; Marshall, M Blair; Yoo, Shi-Joon; Kim, Peter C; Jonas, Richard A; Nath, Dilip S

    2014-07-01

    The current educational approach for teaching congenital heart disease (CHD) anatomy to students involves instructional tools and techniques that have significant limitations. This study sought to assess the feasibility of utilizing present-day three-dimensional (3D) printing technology to create high-fidelity synthetic heart models with ventricular septal defect (VSD) lesions and applying these models to a novel, simulation-based educational curriculum for premedical and medical students. Archived, de-identified magnetic resonance images of five common VSD subtypes were obtained. These cardiac images were then segmented and built into 3D computer-aided design models using Mimics Innovation Suite software. An Objet500 Connex 3D printer was subsequently utilized to print a high-fidelity heart model for each VSD subtype. Next, a simulation-based educational curriculum using these heart models was developed and implemented in the instruction of 29 premedical and medical students. Assessment of this curriculum was undertaken with Likert-type questionnaires. High-fidelity VSD models were successfully created utilizing magnetic resonance imaging data and 3D printing. Following instruction with these high-fidelity models, all students reported significant improvement in knowledge acquisition (P < .0001), knowledge reporting (P < .0001), and structural conceptualization (P < .0001) of VSDs. It is feasible to use present-day 3D printing technology to create high-fidelity heart models with complex intracardiac defects. Furthermore, this tool forms the foundation for an innovative, simulation-based educational approach to teach students about CHD and creates a novel opportunity to stimulate their interest in this field. © The Author(s) 2014.

  3. Gradient-based optimization with B-splines on sparse grids for solving forward-dynamics simulations of three-dimensional, continuum-mechanical musculoskeletal system models.

    PubMed

    Valentin, J; Sprenger, M; Pflüger, D; Röhrle, O

    2018-05-01

    Investigating the interplay between muscular activity and motion is the basis for improving our understanding of healthy and diseased musculoskeletal systems. To analyze musculoskeletal systems, computational models are used. Despite some severe modeling assumptions, almost all existing musculoskeletal system simulations appeal to multibody simulation frameworks. Although continuum-mechanical musculoskeletal system models can compensate for some of these limitations, they are essentially not considered because of their computational complexity and cost. The proposed framework is the first activation-driven musculoskeletal system model in which the exerted skeletal muscle forces are computed using 3-dimensional, continuum-mechanical skeletal muscle models and in which muscle activations are determined based on a constraint optimization problem. Numerical feasibility is achieved by computing sparse grid surrogates with hierarchical B-splines, and adaptive sparse grid refinement further reduces the computational effort. The choice of B-splines allows the use of all existing gradient-based optimization techniques without further numerical approximation. This paper demonstrates that the resulting surrogates have low relative errors (less than 0.76%) and can be used within forward simulations that are subject to constraint optimization. To demonstrate this, we set up several different test scenarios in which an upper limb model consisting of the elbow joint, the biceps and triceps brachii, and an external load is subjected to different optimization criteria. Even though this novel method has only been demonstrated for a 2-muscle system, it can easily be extended to musculoskeletal systems with 3 or more muscles. Copyright © 2018 John Wiley & Sons, Ltd.

  4. Finite-Element Methods for Real-Time Simulation of Surgery

    NASA Technical Reports Server (NTRS)

    Basdogan, Cagatay

    2003-01-01

    Two finite-element methods have been developed for mathematical modeling of the time-dependent behaviors of deformable objects and, more specifically, the mechanical responses of soft tissues and organs in contact with surgical tools. These methods may afford the computational efficiency needed to satisfy the requirement to obtain computational results in real time for simulating surgical procedures as described in Simulation System for Training in Laparoscopic Surgery (NPO-21192) on page 31 in this issue of NASA Tech Briefs. Simulation of the behavior of soft tissue in real time is a challenging problem because of the complexity of soft-tissue mechanics. The responses of soft tissues are characterized by nonlinearities and by spatial inhomogeneities and rate and time dependences of material properties. Finite-element methods seem promising for integrating these characteristics of tissues into computational models of organs, but they demand much central-processing-unit (CPU) time and memory, and the demand increases with the number of nodes and degrees of freedom in a given finite-element model. Hence, as finite-element models become more realistic, it becomes more difficult to compute solutions in real time. In both of the present methods, one uses approximate mathematical models, trading some accuracy for computational efficiency and thereby increasing the feasibility of attaining real-time update rates. The first of these methods is based on modal analysis. In this method, one reduces the number of differential equations by selecting only the most significant vibration modes of an object (typically, a suitable number of the lowest-frequency modes) for computing deformations of the object in response to applied forces.
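
    A minimal sketch of the modal-reduction idea just described, in generic form (not the authors' implementation): solve the generalized eigenproblem for the model's stiffness and mass matrices, keep a suitable number of the lowest-frequency modes, and work in the reduced modal coordinates.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def modal_reduce(K, M, n_modes):
        """Keep the n_modes lowest-frequency vibration modes of K x = w^2 M x.
        eigh returns mass-orthonormal modes, so displacements are u = Phi @ q."""
        w2, Phi = eigh(K, M)               # generalized symmetric eigenproblem
        return Phi[:, :n_modes], w2[:n_modes]

    # Toy stand-in for a soft-tissue FE model: a 1-D chain of unit masses/springs.
    n = 200
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    M = np.eye(n)
    Phi, w2 = modal_reduce(K, M, n_modes=10)

    # Static response to a point force: 10 decoupled modal equations replace a
    # 200x200 solve, which is the source of the real-time speedup.
    f = np.zeros(n)
    f[n // 2] = 1.0
    q = (Phi.T @ f) / w2   # modal coordinates
    u = Phi @ q            # approximate full displacement field
    print(u.max())
    ```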

  5. Systematic Review of Patient-Specific Surgical Simulation: Toward Advancing Medical Education.

    PubMed

    Ryu, Won Hyung A; Dharampal, Navjit; Mostafa, Ahmed E; Sharlin, Ehud; Kopp, Gail; Jacobs, William Bradley; Hurlbert, Robin John; Chan, Sonny; Sutherland, Garnette R

    Simulation-based education has been shown to be an effective tool to teach foundational technical skills in various surgical specialties. However, most of the current simulations are limited to generic scenarios and do not allow continuation of the learning curve beyond basic technical skills to prepare for more advanced expertise, such as patient-specific surgical planning. The objective of this study was to evaluate the current medical literature with respect to the utilization and educational value of patient-specific simulations for surgical training. We performed a systematic review of the literature using Pubmed, Embase, and Scopus focusing on themes of simulation, patient-specific, surgical procedure, and education. The study included randomized controlled trials, cohort studies, and case-control studies published between 2005 and 2016. Two independent reviewers (W.H.R. and N.D.) conducted the study appraisal, data abstraction, and quality assessment of the studies. The search identified 13 studies that met the inclusion criteria; 7 studies employed computer simulations and 6 studies used 3-dimensional (3D) synthetic models. A number of surgical specialties evaluated patient-specific simulation, including neurosurgery, vascular surgery, orthopedic surgery, and interventional radiology. However, most studies were small in size and primarily aimed at feasibility assessments and early validation. Early evidence has shown feasibility and utility of patient-specific simulation for surgical education. With further development of this technology, simulation-based education may be able to support training of higher-level competencies outside the clinical setting to aid learners in their development of surgical skills. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  6. Strong scaling and speedup to 16,384 processors in cardiac electro-mechanical simulations.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J

    2009-01-01

    High performance computing is required to make feasible simulations of whole-organ models of the heart with biophysically detailed cellular models in a clinical setting. Increasing model detail by simulating electrophysiology and mechanical models increases computational demands. We present scaling results of an electro-mechanical cardiac model of two ventricles and compare them to our previously published results using an electrophysiological model only. The anatomical data set was given by both ventricles of the Visible Female data set in a 0.2 mm resolution. Fiber orientation was included. Data decomposition for distribution onto the distributed memory system was carried out by orthogonal recursive bisection. Load weight ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100. The ten Tusscher et al. (2004) electrophysiological cell model was used, together with the Rice et al. (1999) model for the computation of the calcium-transient-dependent force. Scaling results for 512, 1024, 2048, 4096, 8192 and 16,384 processors were obtained for 1 ms simulation time. The simulations were carried out on an IBM Blue Gene/L supercomputer. The results show linear scaling from 512 to 16,384 processors, with speedup factors between 1.82 and 2.14 between partitions. The optimal load ratio was 1:25 on all partitions. However, a shift towards load ratios with higher weight for the tissue elements can be recognized, as is to be expected when adding computational complexity to the model while keeping the same communication setup. This work demonstrates that it is potentially possible to run simulations of 0.5 s using the presented electro-mechanical cardiac model within 1.5 hours.
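
    The orthogonal recursive bisection step mentioned above can be sketched generically (this is a standard ORB outline, not the authors' code): recursively split the element set along its longest axis at the weighted median, so that both halves carry about half the total load.

    ```python
    import numpy as np

    def orb(points, weights, n_parts):
        """Orthogonal recursive bisection: split along the longest axis at the
        weighted median so both halves carry ~equal load; n_parts must be a
        power of two. Returns a partition id per point (generic illustration)."""
        ids = np.zeros(len(points), dtype=int)

        def split(idx, part, count):
            if count == 1:
                ids[idx] = part
                return
            pts = points[idx]
            axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))  # longest extent
            order = idx[np.argsort(pts[:, axis])]
            cum = np.cumsum(weights[order])
            cut = int(np.searchsorted(cum, cum[-1] / 2))         # weighted median
            cut = max(1, min(cut, len(order) - 1))               # non-empty halves
            split(order[:cut], 2 * part, count // 2)
            split(order[cut:], 2 * part + 1, count // 2)

        split(np.arange(len(points)), 0, n_parts)
        return ids

    # Example echoing the study's weighting: tissue elements weighted 25x the
    # non-tissue elements (the 1:25 ratio found optimal above).
    pts = np.random.rand(10000, 3)
    w = np.where(np.linalg.norm(pts - 0.5, axis=1) < 0.3, 25.0, 1.0)
    print(np.bincount(orb(pts, w, 8)))  # element counts per partition
    ```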

  7. Concept verification of three dimensional free motion simulator for space robot

    NASA Technical Reports Server (NTRS)

    Okamoto, Osamu; Nakaya, Teruomi; Pokines, Brett

    1994-01-01

    In the development of automatic assembly technologies for space structures, it is indispensable to investigate and simulate the movements of robot satellites during mission operations. On the ground, such investigation and simulation can be effectively realized with a free-motion simulator. Various types of ground systems for simulating free motion have been proposed and utilized: a neutral buoyancy system, an air or magnetic suspension system, a passive suspension balance system, and a free-flying aircraft or drop-tower system; in addition, systems can be simulated by computers using an analytical model. Each free-motion simulation method has limitations and well-known problems, specifically disturbance by water viscosity, a limited number of degrees of freedom, complex dynamics induced by the attachment of the simulation system, short experiment time, and the lack of high-speed supercomputer simulation systems, respectively. The basic idea presented here is to realize 3-dimensional free motion. This is achieved by combining a spherical air bearing, a cylindrical air bearing, and a flat air bearing. A conventional air bearing system has difficulty realizing free vertical-motion suspension; the idea of free vertical suspension here is that a cylindrical air bearing and a counterbalance weight realize vertical free motion. This paper presents the design concept, configuration, and basic performance characteristics of an innovative free-motion simulator. A prototype simulator verifies the feasibility of 3-dimensional free-motion simulation.

  8. Least significant qubit algorithm for quantum images

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Li, Qiong

    2016-11-01

    To study the feasibility of the classical image least significant bit (LSB) information-hiding algorithm on a quantum computer, a least significant qubit (LSQb) information-hiding algorithm for quantum images is proposed. In this paper, we focus on a novel quantum representation for color digital images (NCQI). Firstly, by designing a three-qubit comparator and unitary operators, the reasonableness and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information-hiding algorithm is proposed, which embeds the secret qubits into the least significant qubits of the RGB channels of the quantum cover image. The quantum circuit of the LSQb information-hiding algorithm is also illustrated. Furthermore, the secret-extracting algorithm and circuit are illustrated using controlled-swap gates. The two merits of our algorithm are: (1) it is absolutely blind, and (2) when extracting the secret binary qubits, it does not need any quantum measurement operation or any other help from a classical computer. Finally, simulation and comparative analysis show the performance of our algorithm.
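
    For context, the classical LSB embedding that LSQb carries over to quantum images can be sketched as follows; this is the classical analogue only (the quantum algorithm operates on NCQI basis states with comparator and swap circuits):

    ```python
    import numpy as np

    def lsb_embed(cover, secret_bits):
        """Embed a bit stream into the least significant bits of the 8-bit
        R, G, B channel values of an image (classical analogue of LSQb)."""
        flat = cover.reshape(-1).copy()
        bits = np.asarray(secret_bits, dtype=np.uint8)
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite the LSBs
        return flat.reshape(cover.shape)

    def lsb_extract(stego, n_bits):
        """Blind extraction: just read the least significant bits back."""
        return stego.reshape(-1)[:n_bits] & 1

    cover = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)  # RGB image
    secret = np.random.randint(0, 2, 20, dtype=np.uint8)
    stego = lsb_embed(cover, secret)
    assert np.array_equal(lsb_extract(stego, 20), secret)
    ```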

  9. An investigation of implicit turbulence modeling for laminar-turbulent transition in natural convection

    NASA Astrophysics Data System (ADS)

    Li, Chunggang; Tsubokura, Makoto; Wang, Weihsiang

    2017-11-01

    The automatic dissipation adjustment (ADA) model based on truncated Navier-Stokes equations is utilized to investigate the feasibility of using implicit large eddy simulation (ILES) with the ADA model for the laminar-turbulent transition in natural convection. Because of the high Rayleigh number arising from the large temperature difference (300 K), a Roe scheme modified for low Mach numbers, combined with the ADA model, is used to resolve the complicated flow field. Based on the qualitative agreement of comparisons with DNS and experimental results, and the capability of numerically predicting a -3 decay law for the temporal power spectrum of the temperature fluctuation, this study validates the feasibility of ILES with the ADA model for turbulent natural convection. With the advantages of ease of implementation, because no explicit modeling terms are needed, and of being nearly free of tuning parameters, the ADA model promises to become a useful tool for turbulent thermal convection. Part of the results was obtained using the K computer at the RIKEN Advanced Institute for Computational Science (Proposal number hp160232).

  10. Sedimentary Geothermal Feasibility Study: October 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustine, Chad; Zerpa, Luis

    The objective of this project is to analyze the feasibility of commercial geothermal projects using numerical reservoir simulation, considering a sedimentary reservoir with low permeability that requires productivity enhancement. A commercial thermal reservoir simulator (STARS, from Computer Modeling Group, CMG) is used in this work for numerical modeling. In the first stage of this project (FY14), a hypothetical numerical reservoir model was developed and validated against an analytical solution. The following model parameters were considered to obtain an acceptable match between the numerical and analytical solutions: grid block size, time step, and reservoir areal dimensions; the latter related to boundary effects on the numerical solution. Systematic model runs showed that insufficient grid sizing generates numerical dispersion that causes the numerical model to underestimate the thermal breakthrough time compared to the analytic model. As grid sizing is decreased, the model results converge on a solution. Likewise, insufficient reservoir model area introduces boundary effects in the numerical solution that cause the model results to differ from the analytical solution.
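
    The grid-refinement behavior described (numerical dispersion shrinking as the grid is refined until the solution converges) can be quantified with a standard Richardson-type estimate of the observed order of convergence from three systematically refined grids; the sketch and the sample breakthrough times below are generic illustrations, not output from STARS.

    ```python
    import math

    def observed_order(f_coarse, f_medium, f_fine, refinement_ratio=2.0):
        """Observed order of convergence of a quantity of interest (e.g.,
        thermal breakthrough time) computed on three grids, each refined by
        a constant ratio; standard Richardson-extrapolation estimate."""
        return math.log((f_coarse - f_medium) / (f_medium - f_fine)) \
            / math.log(refinement_ratio)

    # Illustrative breakthrough times (years) on dx, dx/2, dx/4 grids:
    print(observed_order(18.2, 19.6, 20.3))  # 1.0 => first-order convergence
    ```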

  11. Computer simulations of Hall thrusters without wall losses designed using two permanent magnetic rings

    NASA Astrophysics Data System (ADS)

    Yongjie, Ding; Wuji, Peng; Liqiu, Wei; Guoshun, Sun; Hong, Li; Daren, Yu

    2016-11-01

    A type of Hall thruster without wall losses is designed by adding two permanent magnet rings to the magnetic circuit. The maximum strength of the magnetic field is set outside the channel. Discharge without wall losses is achieved by pushing down the magnetic field and adjusting the channel accordingly. The feasibility of Hall thrusters without wall losses is verified via numerical simulation. The simulation results show that the ionization region is located in the discharge channel and the acceleration region is outside the channel, which decreases the energy and flux of the ions and electrons impinging on the wall. The power deposition on the channel walls can be reduced by a factor of approximately 30.

  12. Thermal-hydraulics Analysis of a Radioisotope-powered Mars Hopper Propulsion System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert C. O'Brien; Andrew C. Klein; William T. Taitano

    Thermal-hydraulic analysis results, produced using a combined suite of computational design and analysis codes, are presented for the preliminary design of a concept Radioisotope Thermal Rocket (RTR) propulsion system. Modeling of the transient heating and steady-state temperatures of the system is presented. Simulation results for propellant blow-down during impulsive operation are also presented. The results from this study validate the feasibility of a practical, thermally capacitive RTR propulsion system.

  13. Feasibility Analysis of UAV Technology to Improve Tactical Surveillance in South Korea’s Rear Area Operations

    DTIC Science & Technology

    2017-03-01

    ... determine the optimum required operational capability of the unmanned aerial vehicles to support Korean rear area operations. We use Map Aware Non-Uniform Automata, an agent-based simulation software platform for computational experiments. Through further experimentations and analyses, we were able to find the optimum characteristics of an improved unmanned aerial vehicle ... The study models a scenario ...

  14. Transient Dynamics Simulation of Airflow in a CT-Scanned Human Airway Tree: More or Fewer Terminal Bronchi?

    PubMed Central

    Zhang, Baihua; Li, Jianhua; Yue, Yong; Qian, Wei

    2017-01-01

    Using the computational fluid dynamics (CFD) method, the feasibility of simulating transient airflow in a CT-based airway tree with more than 100 outlets for a whole respiratory period is studied, and the influence of truncating terminal bronchi on the CFD characteristics is investigated. After an airway model with 122 outlets was extracted from CT images, the transient airflow was simulated. Spatial and temporal variations of flow velocity, wall pressure, and wall shear stress are presented, and the flow pattern and lobar distribution of air are obtained as well. All results are compared with those of a truncated model with 22 outlets. It is found that the flow pattern shows lobar heterogeneity: the near-wall air in the trachea is inhaled into the upper lobe while the center flow enters the other lobes, and the lobar distribution of air is significantly correlated with the outlet area ratio. Truncation decreases airflow to the right and left upper lobes and increases the deviation between the inspiratory and expiratory airflow distributions. Simulating transient airflow in an airway tree model with 122 bronchi using CFD is feasible. The model with more terminal bronchi decreases the difference between the lobar distributions at inspiration and at expiration. PMID:29333194

  15. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    DOE PAGES

    Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  16. Characterizing rare-event property distributions via replicate molecular dynamics simulations of proteins.

    PubMed

    Krishnan, Ranjani; Walton, Emily B; Van Vliet, Krystyn J

    2009-11-01

    As computational resources increase, molecular dynamics simulations of biomolecules are becoming an increasingly informative complement to experimental studies. In particular, it has now become feasible to use multiple initial molecular configurations to generate an ensemble of replicate production-run simulations that allows for more complete characterization of rare events such as ligand-receptor unbinding. However, there are currently no explicit guidelines for selecting an ensemble of initial configurations for replicate simulations. Here, we use clustering analysis and steered molecular dynamics simulations to demonstrate that the configurational changes accessible in molecular dynamics simulations of biomolecules do not necessarily correlate with observed rare-event properties. This informs selection of a representative set of initial configurations. We also employ statistical analysis to identify the minimum number of replicate simulations required to sufficiently sample a given biomolecular property distribution. Together, these results suggest a general procedure for generating an ensemble of replicate simulations that will maximize accurate characterization of rare-event property distributions in biomolecules.
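
    One common way to operationalize the minimum-replicate question raised above is to check when an estimate stabilizes as replicates are added; the bootstrap criterion below is a generic approach applied to placeholder data, not necessarily the authors' exact statistic.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def replicates_needed(samples, rel_tol=0.05, n_boot=1000):
        """Smallest n for which the bootstrap standard error of the mean of
        the first n replicate measurements drops below rel_tol * |mean| (a
        generic sufficiency criterion for sampling a property distribution)."""
        for n in range(3, len(samples) + 1):
            sub = samples[:n]
            boot = rng.choice(sub, size=(n_boot, n)).mean(axis=1)
            if boot.std() < rel_tol * abs(sub.mean()):
                return n
        return None  # more replicates required

    # Placeholder rupture forces (pN) from replicate steered-MD runs:
    forces = rng.normal(150.0, 25.0, size=60)
    print(replicates_needed(forces))
    ```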

  17. The feasibility of using UML to compare the impact of different brands of computer system on the clinical consultation.

    PubMed

    Kumarapeli, Pushpa; de Lusignan, Simon; Koczan, Phil; Jones, Beryl; Sheeler, Ian

    2007-01-01

    UK general practice is universally computerised, with computers used in the consulting room at the point of care. Practices use a range of different brands of computer system, which have developed organically to meet the needs of general practitioners and health service managers. Unified Modelling Language (UML) is a standard modelling and specification notation widely used in software engineering. The aim was to examine the feasibility of using UML notation to compare the impact of different brands of general practice computer system on the clinical consultation. Multi-channel video recordings of simulated consultation sessions were made on three different clinical computer systems in common use (EMIS, iSOFT Synergy and IPS Vision). User action recorder software recorded time logs of keyboard and mouse use, and pattern recognition software captured non-verbal communication. The outputs of these were used to create UML class and sequence diagrams for each consultation. We compared 'definition of the presenting problem' and 'prescribing', as these tasks were present in all the consultations analysed. Class diagrams identified the entities involved in the clinical consultation. Sequence diagrams identified common elements of the consultation (such as prescribing) and enabled comparisons to be made between the different brands of computer system. The clinician and computer system interaction varied greatly between the different brands. UML sequence diagrams are useful in identifying common tasks in the clinical consultation and in contrasting the impact of the different brands of computer system on the clinical consultation. Further research is needed to see whether the patterns demonstrated in this pilot study are consistently displayed.

  18. Self-Organized Service Negotiation for Collaborative Decision Making

    PubMed Central

    Zhang, Bo; Zheng, Ziming

    2014-01-01

    This paper proposes a self-organized service negotiation method for CDM in intelligent and automatic manners. It mainly includes three phases: semantic-based capacity evaluation for the CDM sponsor, trust computation of the CDM organization, and negotiation selection of the decision-making service provider (DMSP). In the first phase, the CDM sponsor produces the formal semantic description of the complex decision task for DMSP and computes the capacity evaluation values according to participator instructions from different DMSPs. In the second phase, a novel trust computation approach is presented to compute the subjective belief value, the objective reputation value, and the recommended trust value. And in the third phase, based on the capacity evaluation and trust computation, a negotiation mechanism is given to efficiently implement the service selection. The simulation experiment results show that our self-organized service negotiation method is feasible and effective for CDM. PMID:25243228

  19. Self-organized service negotiation for collaborative decision making.

    PubMed

    Zhang, Bo; Huang, Zhenhua; Zheng, Ziming

    2014-01-01

    This paper proposes a self-organized service negotiation method for CDM in intelligent and automatic manners. It mainly includes three phases: semantic-based capacity evaluation for the CDM sponsor, trust computation of the CDM organization, and negotiation selection of the decision-making service provider (DMSP). In the first phase, the CDM sponsor produces the formal semantic description of the complex decision task for DMSP and computes the capacity evaluation values according to participator instructions from different DMSPs. In the second phase, a novel trust computation approach is presented to compute the subjective belief value, the objective reputation value, and the recommended trust value. And in the third phase, based on the capacity evaluation and trust computation, a negotiation mechanism is given to efficiently implement the service selection. The simulation experiment results show that our self-organized service negotiation method is feasible and effective for CDM.
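
    The two records above name three trust components (subjective belief, objective reputation, recommended trust) without giving the combination rule; the sketch below shows one plausible weighted aggregation for ranking DMSPs, with weights and formula that are illustrative assumptions, not the paper's method.

    ```python
    def combined_trust(belief, reputation, recommended, weights=(0.5, 0.3, 0.2)):
        """Weighted aggregation of the three trust components named in the
        paper. Inputs in [0, 1]; the weights are illustrative, not published."""
        w1, w2, w3 = weights
        return w1 * belief + w2 * reputation + w3 * recommended

    # Rank candidate decision-making service providers (DMSPs) by combined trust:
    dmsps = {"dmsp_a": (0.9, 0.7, 0.8), "dmsp_b": (0.6, 0.9, 0.7)}
    ranked = sorted(dmsps, key=lambda d: combined_trust(*dmsps[d]), reverse=True)
    print(ranked)
    ```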

  20. High-performance scientific computing in the cloud

    NASA Astrophysics Data System (ADS)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  1. SPOKES: An end-to-end simulation facility for spectroscopic cosmological surveys

    DOE PAGES

    Nord, B.; Amara, A.; Refregier, A.; ...

    2016-03-03

    The nature of dark matter, dark energy and large-scale gravity pose some of the most pressing questions in cosmology today. These fundamental questions require highly precise measurements, and a number of wide-field spectroscopic survey instruments are being designed to meet this requirement. A key component in these experiments is the development of a simulation tool to forecast science performance, define requirement flow-downs, optimize implementation, demonstrate feasibility, and prepare for exploitation. We present SPOKES (SPectrOscopic KEn Simulation), an end-to-end simulation facility for spectroscopic cosmological surveys designed to address this challenge. SPOKES is based on an integrated infrastructure, modular function organization, coherent data handling and fast data access. These key features allow reproducibility of pipeline runs, enable ease of use and provide flexibility to update functions within the pipeline. The cyclic nature of the pipeline offers the possibility to make the science output an efficient measure for design optimization and feasibility testing. We present the architecture, first science, and computational performance results of the simulation pipeline. The framework is general, but for the benchmark tests, we use the Dark Energy Spectrometer (DESpec), one of the early concepts for the upcoming project, the Dark Energy Spectroscopic Instrument (DESI). Finally, we discuss how the SPOKES framework enables a rigorous process to optimize and exploit spectroscopic survey experiments in order to derive high-precision cosmological measurements optimally.

  2. A prototype knowledge-based simulation support system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, T.R.; Roberts, S.D.

    1987-04-01

    As a preliminary step toward the goal of an intelligent automated system for simulation modeling support, we explore the feasibility of the overall concept by generating and testing a prototypical framework. A prototype knowledge-based computer system was developed to support a senior level course in industrial engineering so that the overall feasibility of an expert simulation support system could be studied in a controlled and observable setting. The system behavior mimics the diagnostic (intelligent) process performed by the course instructor and teaching assistants, finding logical errors in INSIGHT simulation models and recommending appropriate corrective measures. The system was programmed in a non-procedural language (PROLOG) and designed to run interactively with students working on course homework and projects. The knowledge-based structure supports intelligent behavior, providing its users with access to an evolving accumulation of expert diagnostic knowledge. The non-procedural approach facilitates the maintenance of the system and helps merge the roles of expert and knowledge engineer by allowing new knowledge to be easily incorporated without regard to the existing flow of control. The background, features and design of the system are described, and preliminary results are reported. Initial success is judged to demonstrate the utility of the reported approach and support the ultimate goal of an intelligent modeling system which can support simulation modelers outside the classroom environment. Finally, future extensions are suggested.

  3. URDME: a modular framework for stochastic simulation of reaction-transport processes in complex geometries.

    PubMed

    Drawert, Brian; Engblom, Stefan; Hellander, Andreas

    2012-06-22

    Experiments in silico using stochastic reaction-diffusion models have emerged as an important tool in molecular systems biology. Designing computational software for such applications poses several challenges. Firstly, realistic lattice-based modeling for biological applications requires a consistent way of handling complex geometries, including curved inner and outer boundaries. Secondly, spatiotemporal stochastic simulations are computationally expensive due to the fast time scales of individual reaction and diffusion events when compared to the biological phenomena of actual interest. We therefore argue that simulation software needs to be computationally efficient, employing sophisticated algorithms, yet at the same time flexible in order to meet present and future needs of increasingly complex biological modeling. We have developed URDME, a flexible software framework for general stochastic reaction-transport modeling and simulation. URDME uses Unstructured triangular and tetrahedral meshes to resolve general geometries, and relies on the Reaction-Diffusion Master Equation formalism to model the processes under study. An interface to mature external geometry and mesh handling software (Comsol Multiphysics) provides a stable and interactive environment for model construction. The core simulation routines are logically separated from the model building interface and written in a low-level language for computational efficiency. The connection to the geometry handling software is realized via a Matlab interface which facilitates script computing, data management, and post-processing. For practitioners, the software therefore behaves much as an interactive Matlab toolbox. At the same time, it is possible to modify and extend URDME with newly developed simulation routines. Since the overall design effectively hides the complexity of managing the geometry and meshes, newly developed methods may be tested in a realistic setting already at an early stage of development. In this paper we demonstrate, in a series of examples with high relevance to the molecular systems biology community, that the proposed software framework is a useful tool for both practitioners and developers of spatial stochastic simulation algorithms. Through the combined efforts of algorithm development and improved modeling accuracy, increasingly complex biological models become feasible to study through computational methods. URDME is freely available at http://www.urdme.org.
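
    To make the RDME formalism concrete, the following is a minimal sketch (not URDME code) of a Gillespie-type stochastic simulation of a single species undergoing first-order decay and diffusive jumps on a 1D lattice of voxels; the rates, lattice size, and reflecting-boundary handling are illustrative assumptions.

    ```python
    import numpy as np

    # Toy RDME/SSA sketch: species A decays (A -> 0) and jumps between
    # neighboring voxels. URDME does this on unstructured 3D meshes.
    rng = np.random.default_rng(0)
    n_vox, k_decay, d_jump = 20, 0.1, 1.0    # voxels, decay rate, jump rate
    x = rng.integers(5, 15, size=n_vox)      # copy numbers per voxel
    t, t_end = 0.0, 10.0

    while t < t_end:
        decay = k_decay * x                  # per-voxel decay propensities
        jump = d_jump * x                    # per-voxel total jump propensities
        a0 = decay.sum() + jump.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)       # exponential waiting time
        r = rng.uniform(0.0, a0)
        if r < decay.sum():                  # a decay event fires
            i = np.searchsorted(np.cumsum(decay), r)
            x[i] -= 1
        else:                                # a diffusive jump fires
            i = np.searchsorted(np.cumsum(jump), r - decay.sum())
            j = min(max(i + rng.choice([-1, 1]), 0), n_vox - 1)
            if j != i:                       # clipped jumps at the ends are null
                x[i] -= 1
                x[j] += 1

    print(t, x.sum())
    ```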

  4. Genetic Local Search for Optimum Multiuser Detection Problem in DS-CDMA Systems

    NASA Astrophysics Data System (ADS)

    Wang, Shaowei; Ji, Xiaoyong

    Optimum multiuser detection (OMD) in direct-sequence code-division multiple access (DS-CDMA) systems is an NP-complete problem. In this paper, we present a genetic local search (GLS) algorithm, which consists of an evolution strategy framework and a local improvement procedure. The evolution strategy searches the space of feasible, locally optimal solutions only. A fast iterated local search algorithm, which exploits the specific characteristics of the OMD problem, produces local optima with great efficiency. Computer simulations show that the bit error rate (BER) performance of the GLS outperforms other multiuser detectors in all cases discussed. The computation time grows polynomially with the number of users.
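
    To illustrate the local improvement step, the sketch below applies bit-flip local search to the standard OMD objective f(b) = 2 b^T y - b^T H b over b in {-1, +1}^K (y: matched-filter outputs, H: signature cross-correlation matrix), wrapped in a simple perturb-and-reoptimize loop. It is a generic iterated local search under these assumptions, not the paper's GLS.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def local_search(b, y, H):
        """Flip bits while any single flip increases f(b) = 2 b@y - b@H@b."""
        improved = True
        while improved:
            improved = False
            for k in range(len(b)):
                # Exact objective gain from flipping bit k
                gain = -4.0 * b[k] * (y[k] - H[k] @ b + H[k, k] * b[k])
                if gain > 0:
                    b[k] = -b[k]
                    improved = True
        return b

    def iterated_local_search(y, H, iters=20):
        b = np.where(y >= 0, 1.0, -1.0)          # matched-filter start
        best = local_search(b.copy(), y, H)
        best_f = 2 * best @ y - best @ H @ best
        for _ in range(iters):
            cand = best.copy()
            flips = rng.choice(len(cand), size=max(1, len(cand) // 5),
                               replace=False)
            cand[flips] *= -1                    # perturb, then re-optimize
            cand = local_search(cand, y, H)
            f = 2 * cand @ y - cand @ H @ cand
            if f > best_f:
                best, best_f = cand, f
        return best
    ```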

  5. An expanded system simulation model for solar energy storage (technical report), volume 1

    NASA Technical Reports Server (NTRS)

    Warren, A. W.

    1979-01-01

    The simulation model for wind energy storage (SIMWEST) program now includes wind and/or photovoltaic systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic) and is available for the UNIVAC 1100 series and the CDC 6000 series computers. The level of detail is consistent with its role of evaluating the economic feasibility as well as the general performance of wind and/or photovoltaic energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. The first program is a precompiler which generates computer models (in FORTRAN) of complex wind and/or photovoltaic source/storage/application systems from user specifications, using the respective library components. The second program provides the techno-economic system analysis with the respective I/O, the integration of system dynamics, and the iteration for conveyance of variables.

  6. The H2+ + He proton transfer reaction: quantum reactive differential cross sections to be linked with future velocity mapping experiments

    NASA Astrophysics Data System (ADS)

    Hernández Vera, Mario; Wester, Roland; Gianturco, Francesco Antonio

    2018-01-01

    We construct velocity map images of the proton transfer reaction between helium and the molecular hydrogen ion H2+. We perform simulations of imaging experiments at one representative total collision energy, taking into account the inherent aberrations of the velocity mapping, in order to explore the feasibility of direct comparisons between theory and future experiments planned in our laboratory. The asymptotic angular distributions of the fragments in 3D velocity space are determined from the quantum state-to-state differential reactive cross sections and reaction probabilities, which are computed using the time-independent coupled channel hyperspherical coordinate method. The calculations employ an earlier ab initio potential energy surface computed at the FCI/cc-pVQZ level of theory. The present simulations indicate that the planned experiments would be selective enough to differentiate between product distributions resulting from different initial internal states of the reactants.

  7. On computing stress in polymer systems involving multi-body potentials from molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Fu, Yao; Song, Jeong-Hoon

    2014-08-01

    The Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. The force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials including up to four-atom interactions, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
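
    For reference, the virial stress to which the computed Hardy stress is reported to converge takes the standard form below, with V the averaging volume and f_ij the pairwise force terms obtained from the central force decomposition:

    ```latex
    \sigma_{\alpha\beta} = \frac{1}{V}\left(
      -\sum_i m_i\, v_{i\alpha} v_{i\beta}
      + \frac{1}{2}\sum_i \sum_{j \neq i} r_{ij,\alpha}\, f_{ij,\beta}
    \right)
    ```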

  8. A simulation model for wind energy storage systems. Volume 2: Operation manual

    NASA Technical Reports Server (NTRS)

    Warren, A. W.; Edsinger, R. W.; Burroughs, J. D.

    1977-01-01

    A comprehensive computer program (SIMWEST) developed for the modeling of wind energy/storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel, and pneumatic) is described. Features of the program include: a precompiler which generates computer models (in FORTRAN) of complex wind source/storage/application systems from user specifications, using the respective library components; a program which provides the techno-economic system analysis with the respective I/O, the integration of system dynamics, and the iteration for conveyance of variables; and the capability to evaluate the economic feasibility as well as the general performance of wind energy systems. The SIMWEST operation manual is presented, and the usage of the SIMWEST program and the design of the library components are described. A number of example simulations intended to familiarize the user with the program's operation are given, along with a listing of each SIMWEST library subroutine.

  9. Investigation on the Inertance Tubes of Pulse Tube Cryocooler Without Reservoir

    NASA Astrophysics Data System (ADS)

    Liu, Y. J.; Yang, L. W.; Liang, J. T.; Hong, G. T.

    2010-04-01

    Phase angle is of vital importance for high-efficiency pulse tube cryocoolers (PTCs). The inertance tube, as the main phase shifter, enables PTCs to obtain an appropriate phase angle. Experiments on inertance tubes without a reservoir are investigated under variable frequency, variable inertance tube length and diameter, and variable pressure amplitude. In addition, the authors used DeltaEC, a computer program to predict the performance of low-amplitude thermoacoustic engines, to simulate the effects of an inertance tube without a reservoir. The comparison of experiments and theoretical simulations shows that the DeltaEC method is feasible and effective for directing and improving the design of inertance tubes.

  10. Three-dimensional numerical simulation of a continuously rotating detonation in the annular combustion chamber with a wide gap and separate delivery of fuel and oxidizer

    NASA Astrophysics Data System (ADS)

    Frolov, S. M.; Dubrovskii, A. V.; Ivanov, V. S.

    2016-07-01

    The possibility of integrating the Continuous Detonation Chamber (CDC) in a gas turbine engine (GTE) is demonstrated by means of three-dimensional (3D) numerical simulations, i.e., the feasibility of the operation process in the annular combustion chamber with a wide gap and with separate feeding of fuel (hydrogen) and oxidizer (air) is proved computationally. The CDC with an upstream isolator damping pressure disturbances propagating towards the compressor is shown to exhibit a gain in total pressure of 15% as compared with the same combustion chamber operating in the deflagration mode.

  11. Particle simulation on heterogeneous distributed supercomputers

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.; Dagum, Leonardo

    1993-01-01

    We describe the implementation and performance of a three-dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.

  12. Modeling strength data for CREW CHIEF

    NASA Technical Reports Server (NTRS)

    Mcdaniel, Joe W.

    1990-01-01

    The Air Force has developed CREW CHIEF, a computer-aided design (CAD) tool for simulating and evaluating aircraft maintenance to determine if the required activities are feasible. CREW CHIEF gives the designer the ability to simulate maintenance activities with respect to reach, accessibility, strength, hand tool operation, and materials handling. During the development of CREW CHIEF, extensive research was performed to describe workers' strength capabilities for using hand tools and manually handling objects. More than 100,000 strength measures were collected and modeled for CREW CHIEF. These measures involved both male and female subjects in the 12 maintenance postures included in CREW CHIEF. The data collection and modeling efforts are described.

  13. Study of Dynamic Characteristics of Aeroelastic Systems Utilizing Randomdec Signatures

    NASA Technical Reports Server (NTRS)

    Chang, C. S.

    1975-01-01

    The feasibility of utilizing the random decrement method in conjunction with a signature analysis procedure to determine the dynamic characteristics of an aeroelastic system, for the purpose of online prediction of the potential onset of flutter, was examined. Digital computer programs were developed to simulate sampled response signals of a two-mode aeroelastic system. Simulated response data were used to test the random decrement method. A special curve-fit approach was developed for analyzing the resulting signatures. A number of numerical 'experiments' were conducted on the combined processes. The method is capable of determining frequency and damping values accurately from randomdec signatures of carefully selected lengths.

  14. Application of the MacCormack scheme to overland flow routing for high-spatial resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia

    2018-03-01

    Although process-based distributed hydrological models (PDHMs) have evolved rapidly over the last few decades, their extensive application is still challenged by computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, i.e., the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack was reduced to several dozen minutes for a simulation period of three months (in contrast with a day and a half for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs to either significantly improve their computational efficiency or make kinematic wave routing computationally feasible for high-resolution modeling.
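
    As a concrete sketch of the scheme, the code below applies the full MacCormack predictor-corrector to the 1D kinematic wave equation ∂h/∂t + ∂(αh^m)/∂x = r for overland flow depth h; the grid, rates, and boundary handling are illustrative assumptions, not DHSVM settings.

    ```python
    import numpy as np

    nx, dx, dt, nt = 100, 1.0, 0.05, 400
    alpha, m, r = 1.0, 5.0 / 3.0, 1e-3        # Manning-type flux, rainfall rate
    h = np.zeros(nx)                          # flow depth along the hillslope

    def flux(h):
        return alpha * h**m

    for _ in range(nt):
        q = flux(h)
        # Predictor: forward difference in space
        h_star = h.copy()
        h_star[:-1] = h[:-1] - dt / dx * (q[1:] - q[:-1]) + dt * r
        q_star = flux(h_star)
        # Corrector: backward difference on the predicted state, then average
        h_new = h.copy()
        h_new[1:] = 0.5 * (h[1:] + h_star[1:]
                           - dt / dx * (q_star[1:] - q_star[:-1]) + dt * r)
        h = np.maximum(h_new, 0.0)            # keep depths non-negative
        h[0] = 0.0                            # no-inflow upstream boundary

    print(h[-5:])                             # depths near the outlet
    ```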

  15. Simplified Modeling of Oxidation of Hydrocarbons

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Harstad, Kenneth

    2008-01-01

    A method of simplified computational modeling of oxidation of hydrocarbons is undergoing development. This is one of several developments needed to enable accurate computational simulation of turbulent, chemically reacting flows. At present, accurate computational simulation of such flows is difficult or impossible in most cases because (1) the numbers of grid points needed for adequate spatial resolution of turbulent flows in realistically complex geometries are beyond the capabilities of typical supercomputers now in use and (2) the combustion of typical hydrocarbons proceeds through decomposition into hundreds of molecular species interacting through thousands of reactions. The combination of detailed reaction-rate models with the fundamental flow equations therefore yields flow models that are computationally prohibitive. A reduction of at least an order of magnitude in the dimension of the reaction kinetics is thus one of the prerequisites for feasibility of computational simulation of turbulent, chemically reacting flows. In the present method of simplified modeling, all molecular species involved in the oxidation of hydrocarbons are classified as either light or heavy; heavy molecules are those having 3 or more carbon atoms. The light molecules are not subject to meaningful decomposition, and the heavy molecules are considered to decompose into only 13 specified constituent radicals. One constructs a reduced-order model, suitable for use in estimating the release of heat and the evolution of temperature in combustion, from a base comprising the 13 constituent radicals plus a total of 26 other species that include the light molecules and related light free radicals. Then, rather than following all possible species through their reaction coordinates, one follows only the reduced set of reaction coordinates of the base. The behavior of the base was examined in test computational simulations of the combustion of heptane in a stirred reactor at various initial pressures ranging from 0.1 to 6 MPa. Most of the simulations were performed for stoichiometric mixtures; some were performed for fuel/oxygen mole ratios of 1/2 and 2.

  16. Feasibility of dynamic models of the interaction of potential oil spills with bowhead and gray whales in the Bering, Chukchi, and Beaufort Seas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, M.; Bowles, A.E.; Anderson, E.L.

    1984-08-01

    Feasibility and design considerations for developing computer models of migratory bowhead and gray whales and linking such models to oil spill models for application in Alaskan Outer Continental Shelf areas were evaluated. All relevant bowhead and gray whale distributional and migration data were summarized and presented at monthly intervals. The data were, for the most part, deemed sufficient to prepare whale migration simulation models. A variety of whale migration conceptual models were devised and ranked by means of a scaling-weighted protocol. Existing oil spill trajectory and fate models, as well as conceptual models, were similarly ranked.

  17. Monitoring temperatures in coal conversion and combustion processes via ultrasound

    NASA Astrophysics Data System (ADS)

    Gopalsami, N.; Raptis, A. C.; Mulcahey, T. P.

    1980-02-01

    The state of the art of instrumentation for monitoring temperatures in coal conversion and combustion systems is examined. The instrumentation types studied include thermocouples, radiation pyrometers, and acoustical thermometers. The capabilities and limitations of each type are reviewed. A feasibility study of ultrasonic thermometry is described. A mathematical model of a pulse-echo ultrasonic temperature measurement system is developed using linear system theory. The mathematical model lends itself to the adaptation of generalized correlation techniques for the estimation of propagation delays. Computer simulations are made to test the efficacy of the signal processing techniques for noise-free as well as noisy signals. Based on the theoretical study, acoustic techniques to measure temperature in reactors and combustors are feasible.
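
    The delay-estimation step at the heart of such a system can be sketched in a few lines: cross-correlate the received echo with the transmitted pulse and take the peak lag as the propagation delay; temperature then follows because the speed of sound in a gas scales with the square root of temperature. The pulse shape, noise level, and sampling rate below are illustrative assumptions.

    ```python
    import numpy as np

    fs = 1.0e6                                   # 1 MHz sampling (assumed)
    t = np.arange(0.0, 2e-3, 1.0 / fs)
    true_delay = 4.2e-4                          # 420 microseconds

    # Gaussian-windowed tone burst and its delayed, noisy echo
    pulse = np.exp(-((t - 1e-4) / 2e-5) ** 2) * np.sin(2 * np.pi * 1e5 * t)
    echo = np.interp(t - true_delay, t, pulse, left=0.0)
    echo += 0.05 * np.random.default_rng(0).standard_normal(t.size)

    # Peak of the cross-correlation gives the delay estimate
    corr = np.correlate(echo, pulse, mode="full")
    lag = np.argmax(corr) - (pulse.size - 1)
    print(lag / fs)                              # ~4.2e-4 s

    # With path length L, temperature follows from T proportional to (L/delay)^2.
    ```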

  18. Feasibility of using the Massively Parallel Processor for large eddy simulations and other Computational Fluid Dynamics applications

    NASA Technical Reports Server (NTRS)

    Bruno, John

    1984-01-01

    The results of an investigation into the feasibility of using the MPP for direct and large eddy simulations of the Navier-Stokes equations are presented. A major part of this study was devoted to the implementation of two of the standard numerical algorithms for CFD. These implementations were not run on the Massively Parallel Processor (MPP), since the machine delivered to NASA Goddard does not have sufficient capacity. Instead, a detailed implementation plan was designed, from which estimates of the time and space requirements of the algorithms on a suitably configured MPP were derived. In addition, other issues related to the practical implementation of these algorithms on an MPP-like architecture were considered, namely, adaptive grid generation, zonal boundary conditions, the table lookup problem, and the software interface. Performance estimates show that the architectural components of the MPP, the Staging Memory and the Array Unit, appear to be well suited to the numerical algorithms of CFD. This, combined with the prospect of building a faster and larger MPP-like machine, holds the promise of achieving the sustained gigaflop rates that are required for numerical simulations in CFD.

  19. Investigation of the Feasibility of Utilizing Gamma Emission Computed Tomography in Evaluating Fission Product Migration in Irradiated TRISO Fuel Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jason M. Harp; Paul A. Demkowicz

    2014-10-01

    In the High Temperature Gas-Cooled Reactor (HTGR), the TRISO particle fuel serves as the primary fission product containment. However, the large number of TRISO particles present in proposed HTGRs dictates that there will be a small fraction (~10⁻⁴ to 10⁻⁵) of as-manufactured and in-pile particle failures that will lead to some fission product release. The matrix material surrounding the TRISO particles in fuel compacts and the structural graphite holding the TRISO particles in place can also serve as sinks for containing any released fission products. However, data on the migration of solid fission products through these materials are lacking. One of the primary goals of the AGR-3/4 experiment is to study fission product migration from failed TRISO particles in prototypic HTGR components such as structural graphite and compact matrix material. In this work, the potential for a Gamma Emission Computed Tomography (GECT) technique to non-destructively examine the fission product distribution in AGR-3/4 components and other irradiation experiments is explored. Specifically, the feasibility of using the Idaho National Laboratory (INL) Hot Fuels Examination Facility (HFEF) Precision Gamma Scanner (PGS) system for this GECT application is considered. To test the feasibility, the response of the PGS system to idealized fission product distributions has been simulated using Monte Carlo radiation transport simulations. Previous work that applied similar techniques during the AGR-1 experiment is also discussed, as well as planned uses for the GECT technique during the post-irradiation examination of the AGR-2 experiment. The GECT technique has also been applied to other irradiated nuclear fuel systems currently available in the HFEF hot cell, including oxide fuel pins, metallic fuel pins, and monolithic plate fuel.

  20. Brain perfusion imaging using a Reconstruction-of-Difference (RoD) approach for cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Mow, M.; Zbijewski, W.; Sisniega, A.; Xu, J.; Dang, H.; Stayman, J. W.; Wang, X.; Foos, D. H.; Koliatsos, V.; Aygun, N.; Siewerdsen, J. H.

    2017-03-01

    Purpose: To improve the timely detection and treatment of intracranial hemorrhage or ischemic stroke, recent efforts include the development of cone-beam CT (CBCT) systems for perfusion imaging and new approaches to estimate perfusion parameters despite slow rotation speeds compared to multi-detector CT (MDCT) systems. This work describes the development of a brain perfusion CBCT method using a reconstruction-of-difference (RoD) approach to enable perfusion imaging on a newly developed CBCT head scanner prototype. Methods: A new reconstruction approach using RoD within a penalized-likelihood framework was developed to image the temporal dynamics of vascular enhancement. A digital perfusion simulation was developed to give a realistic representation of brain anatomy, artifacts, noise, scanner characteristics, and hemodynamic properties. This simulation includes a digital brain phantom, time-attenuation curves and noise parameters, a novel forward projection method for improved computational efficiency, and perfusion parameter calculation. Results: Our results show the feasibility of estimating perfusion parameters from a set of images reconstructed from slow scans, sparse data sets, and arc length scans as short as 60 degrees. The RoD framework significantly reduces noise and time-varying artifacts from inconsistent projections. Proper regularization and the use of overlapping reconstructed arcs can potentially further decrease bias and increase temporal resolution, respectively. Conclusions: A digital brain perfusion simulation with the RoD imaging approach has been developed and supports the feasibility of using a CBCT head scanner for perfusion imaging. Future work will include testing with data acquired using a 3D-printed perfusion phantom and translation to preclinical and clinical studies.

  1. Simulation of a Real-Time Brain Computer Interface for Detecting a Self-Paced Hitting Task.

    PubMed

    Hammad, Sofyan H; Kamavuako, Ernest N; Farina, Dario; Jensen, Winnie

    2016-12-01

    An invasive brain-computer interface (BCI) is a promising neurorehabilitation device for severely disabled patients. Although some systems have been shown to work well in restricted laboratory settings, their utility must be tested in less controlled, real-time environments. Our objective was to investigate whether a specific motor task could be reliably detected from multiunit intracortical signals from freely moving animals in a simulated, real-time setting. Intracortical signals were first obtained from electrodes placed in the primary motor cortex of four rats that were trained to hit a retractable paddle (defined as a "Hit"). In the simulated real-time setting, the signal-to-noise ratio was first increased by wavelet denoising. Action potentials were detected, and features were extracted (spike count, mean absolute value, entropy, and combinations of these features) within pre-defined time windows (200 ms, 300 ms, and 400 ms) to classify the occurrence of a "Hit." We found higher detection accuracy of a "Hit" (73.1%, 73.4%, and 67.9% for the three window sizes, respectively) when the decision was made based on a combination of features rather than on a single feature. The effect of window length was not statistically significant (p = 0.5). Our results showed the feasibility of detecting a motor task in real time in a less restricted environment than those commonly applied within invasive BCI research, and they showed the feasibility of using information extracted from multiunit recordings, thereby avoiding the time-consuming and complex task of extracting and sorting single units. © 2016 International Neuromodulation Society.
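
    A minimal sketch of the windowed feature extraction described above: spike count (via a robust threshold crossing), mean absolute value, and Shannon entropy computed per decision window. The threshold rule, sampling rate, and synthetic signal are assumptions for illustration; in the study, a classifier consumes such feature vectors to detect a "Hit".

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 24_414                                  # sampling rate (assumed)
    signal = rng.standard_normal(fs)             # 1 s of denoised neural data

    def features(window, n_bins=16):
        # Robust spike threshold from the median absolute deviation
        thr = -3.5 * np.median(np.abs(window)) / 0.6745
        spike_count = int(np.sum((window[1:] < thr) & (window[:-1] >= thr)))
        mav = float(np.mean(np.abs(window)))     # mean absolute value
        hist, _ = np.histogram(window, bins=n_bins)
        p = hist / hist.sum()
        p = p[p > 0]
        entropy = float(-(p * np.log2(p)).sum())  # Shannon entropy of amplitudes
        return spike_count, mav, entropy

    win = int(0.300 * fs)                        # a 300 ms decision window
    for start in range(0, signal.size - win + 1, win):
        print(features(signal[start:start + win]))
    ```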

  2. Virtual suturing simulation based on commodity physics engine for medical learning.

    PubMed

    Choi, Kup-Sze; Chan, Sze-Ho; Pang, Wai-Man

    2012-06-01

    Development of virtual-reality medical applications is usually a complicated and labour-intensive task. This paper explores the feasibility of using a commodity physics engine to develop a suturing simulator prototype for manual skills training in the fields of nursing and medicine, so as to enjoy the benefits of rapid development and hardware-accelerated computation. In the prototype, spring-connected boxes of finite dimension are used to simulate soft tissues, whereas the needle and thread are modelled with chained segments. Spherical joints are used to simulate the suture's flexibility and to facilitate thread cutting. An algorithm is developed to simulate needle insertion and thread advancement through the tissue. Two-handed manipulation and force feedback are enabled with two haptic devices. Experiments on the closure of a wound show that the prototype is able to simulate suturing procedures at interactive rates. The simulator is also used to study a curvature-adaptive suture modelling technique. Issues and limitations of the proposed approach and future development are discussed.

  3. Toward optimized potential functions for protein-protein interactions in aqueous solutions: osmotic second virial coefficient calculations using the MARTINI coarse-grained force field

    PubMed Central

    Stark, Austin C.; Andrews, Casey T.

    2013-01-01

    Coarse-grained (CG) simulation methods are now widely used to model the structure and dynamics of large biomolecular systems. One important issue for using such methods – especially with regard to using them to model, for example, intracellular environments – is to demonstrate that they can reproduce experimental data on the thermodynamics of protein-protein interactions in aqueous solutions. To examine this issue, we describe here simulations performed using the popular coarse-grained MARTINI force field, aimed at computing the thermodynamics of lysozyme and chymotrypsinogen self-interactions in aqueous solution. Using molecular dynamics simulations to compute potentials of mean force between a pair of protein molecules, we show that the original parameterization of the MARTINI force field is likely to significantly overestimate the strength of protein-protein interactions to the extent that the computed osmotic second virial coefficients are orders of magnitude more negative than experimental estimates. We then show that a simple down-scaling of the van der Waals parameters that describe the interactions between protein pseudo-atoms can bring the simulated thermodynamics into much closer agreement with experiment. Overall, the work shows that it is feasible to test explicit-solvent CG force fields directly against thermodynamic data for proteins in aqueous solutions, and highlights the potential usefulness of osmotic second virial coefficient measurements for fully parameterizing such force fields. PMID:24223529

  4. Toward optimized potential functions for protein-protein interactions in aqueous solutions: osmotic second virial coefficient calculations using the MARTINI coarse-grained force field.

    PubMed

    Stark, Austin C; Andrews, Casey T; Elcock, Adrian H

    2013-09-10

    Coarse-grained (CG) simulation methods are now widely used to model the structure and dynamics of large biomolecular systems. One important issue for using such methods - especially with regard to using them to model, for example, intracellular environments - is to demonstrate that they can reproduce experimental data on the thermodynamics of protein-protein interactions in aqueous solutions. To examine this issue, we describe here simulations performed using the popular coarse-grained MARTINI force field, aimed at computing the thermodynamics of lysozyme and chymotrypsinogen self-interactions in aqueous solution. Using molecular dynamics simulations to compute potentials of mean force between a pair of protein molecules, we show that the original parameterization of the MARTINI force field is likely to significantly overestimate the strength of protein-protein interactions to the extent that the computed osmotic second virial coefficients are orders of magnitude more negative than experimental estimates. We then show that a simple down-scaling of the van der Waals parameters that describe the interactions between protein pseudo-atoms can bring the simulated thermodynamics into much closer agreement with experiment. Overall, the work shows that it is feasible to test explicit-solvent CG force fields directly against thermodynamic data for proteins in aqueous solutions, and highlights the potential usefulness of osmotic second virial coefficient measurements for fully parameterizing such force fields.
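
    The link from a simulated potential of mean force W(r) to the osmotic second virial coefficient is the standard relation B22 = -2π ∫ [exp(-W(r)/kT) - 1] r² dr. The sketch below evaluates it numerically for a toy W(r); the curve and units are illustrative, not the MARTINI result.

    ```python
    import numpy as np

    kT = 2.479                                    # kJ/mol at ~298 K
    r = np.linspace(2.0, 12.0, 500)               # center-to-center distance, nm
    W = 4.0 * ((3.0 / r) ** 12 - (3.0 / r) ** 6)  # toy PMF in kJ/mol

    integrand = (np.exp(-W / kT) - 1.0) * r**2
    B22 = -2.0 * np.pi * np.trapz(integrand, r)   # nm^3 per molecule pair
    print(B22)  # a negative value indicates net attraction
    ```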

  5. Deterministic Stress Modeling of Hot Gas Segregation in a Turbine

    NASA Technical Reports Server (NTRS)

    Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger

    1998-01-01

    Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.

  6. Using Numerical Modeling to Simulate Space Capsule Ground Landings

    NASA Technical Reports Server (NTRS)

    Heymsfield, Ernie; Fasanella, Edwin L.

    2009-01-01

    Experimental work is being conducted at the National Aeronautics and Space Administration's (NASA) Langley Research Center (LaRC) to investigate ground landing capabilities of the Orion crew exploration vehicle (CEV). The Orion capsule is NASA's replacement for the Space Shuttle. The Orion capsule will service the International Space Station and be used for future space missions to the Moon and to Mars. To evaluate the feasibility of Orion ground landings, a series of capsule impact tests are being performed at the NASA Langley Landing and Impact Research Facility (LandIR). The experimental results derived at LandIR provide a means to validate and calibrate nonlinear dynamic finite element models, which are also being developed during this study. Because of the high cost and time involvement intrinsic to full-scale testing, numerical simulations are favored over experimental work. Once a numerical model has been validated by actual test responses, impact simulations will be conducted to study multiple impact scenarios not practical to test. Twenty-one swing tests using the LandIR gantry were conducted from June 2007 through October 2007 to evaluate the Orion's impact response. Results for two capsule initial pitch angles, 0 deg and -15 deg, along with their computer simulations using LS-DYNA, are presented in this article. A soil-vehicle friction coefficient of 0.45 was determined by comparing the test stopping distance with computer simulations. In addition, soil modeling accuracy is presented by comparing vertical penetrometer impact tests with computer simulations for the soil model used during the swing tests.

  7. Modeling the dynamics of chromosomal alteration progression in cervical cancer: A computational model

    PubMed Central

    2017-01-01

    Computational modeling has been applied to simulate the heterogeneity of cancer behavior. The development of Cervical Cancer (CC) is a process in which the cell acquires dynamic behavior from non-deleterious and deleterious mutations, exhibiting chromosomal alterations as a manifestation of this dynamic. To further determine the progression of chromosomal alterations in precursor lesions and CC, we introduce a computational model to study the dynamics of deleterious and non-deleterious mutations as an outcome of tumor progression. The analysis of chromosomal alterations mediated by our model reveals that multiple deleterious mutations are more frequent in precursor lesions than in CC. Cells with lethal deleterious mutations would be eliminated, which would mitigate cancer progression; on the other hand, cells with non-deleterious mutations would become dominant, which could predispose them to cancer progression. The study of somatic alterations through computer simulations of cancer progression provides a feasible pathway for insights into the transformation of cell mechanisms in humans. During cancer progression, tumors may acquire new phenotype traits, such as the ability to invade and metastasize or to become clinically important when they develop drug resistance. Non-deleterious chromosomal alterations contribute to this progression. PMID:28723940

  8. An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.

    2017-01-01

    The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if the model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating methods, namely, bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
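
    A minimal sketch of the calibration objective, assuming a hypothetical simulate_heads function standing in for the fitted BMARS surrogate; an optimizer would minimize this objective over the sensitive parameters only.

    ```python
    import numpy as np

    def nrmse(observed, simulated):
        """Normalized root mean square error between head vectors."""
        rmse = np.sqrt(np.mean((observed - simulated) ** 2))
        return rmse / (observed.max() - observed.min())

    def objective(params, observed, simulate_heads):
        # simulate_heads is the (hypothetical) fitted surrogate model
        return nrmse(observed, simulate_heads(params))

    # e.g. scipy.optimize.minimize(objective, x0, args=(obs_heads, surrogate))
    ```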

  9. Pre-operative Simulation of the Appropriate C-arm Position Using Computed Tomography Post-processing Software Reduces Radiation and Contrast Medium Exposure During EVAR Procedures.

    PubMed

    Stahlberg, E; Planert, M; Panagiotopoulos, N; Horn, M; Wiedner, M; Kleemann, M; Barkhausen, J; Goltz, J P

    2017-02-01

    The aim was to evaluate the feasibility and efficacy of a new method for pre-operative calculation of an appropriate C-arm position for iliac bifurcation visualisation during endovascular aortic repair (EVAR) procedures by using three-dimensional computed tomography angiography (CTA) post-processing software. Post-processing software was used to simulate C-arm angulations in two dimensions (oblique, cranial/caudal) for appropriate visualisation of distal landing zones at the iliac bifurcation during EVAR. Retrospectively, 27 consecutive EVAR patients (25 men, mean ± SD age 73 ± 7 years) were identified; one group of patients treated after implementation of the new method (NEW; n = 12 [23 iliac bifurcations]) was compared with a group of patients treated with EVAR before the method was applied (OLD; n = 15 [23 iliac bifurcations]). In the OLD group, a median of 2.0 (interquartile range [IQR] 1-3) digital subtraction angiography runs were needed per iliac bifurcation versus 1.0 (IQR 1-1) runs in the NEW group (p = .007). The median dose area products per iliac bifurcation were 11951 mGy·cm² (IQR 7308-16663 mGy·cm²) for the NEW and 39394 mGy·cm² (IQR 19066-53702 mGy·cm²) for the OLD group, respectively (p = .001). The median volume of contrast per iliac bifurcation was 13.0 mL (IQR 13-13 mL) in the NEW and 26 mL (IQR 13-39 mL) in the OLD group (p = .007). Pre-operative simulation of the appropriate C-arm angulation in two dimensions using dedicated computed tomography angiography post-processing software is feasible and significantly reduces radiation and contrast medium exposure. Copyright © 2016 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  10. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    NASA Astrophysics Data System (ADS)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    The 3D Poisson equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods with respect to different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using a parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used, and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may require prohibitively long processing times to converge. The PJ method, however, reduces the computational time to some extent for large grid sizes.
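
    For concreteness, here is a minimal vectorized Jacobi sweep for the discretized 3D Poisson equation ∇²φ = −ρ on a uniform grid with Dirichlet boundaries; a Gauss-Seidel variant would update φ in place, which is exactly what limits its data parallelism. The grid size and source term are toy values.

    ```python
    import numpy as np

    n, h = 32, 1.0 / 31                          # grid points per axis, spacing
    phi = np.zeros((n, n, n))                    # potential; boundaries stay 0
    rho = np.zeros((n, n, n))
    rho[n // 2, n // 2, n // 2] = 1.0            # point source at the center

    for it in range(2000):
        new = phi.copy()
        # 7-point stencil Jacobi update on interior nodes
        new[1:-1, 1:-1, 1:-1] = (
            phi[2:, 1:-1, 1:-1] + phi[:-2, 1:-1, 1:-1] +
            phi[1:-1, 2:, 1:-1] + phi[1:-1, :-2, 1:-1] +
            phi[1:-1, 1:-1, 2:] + phi[1:-1, 1:-1, :-2] +
            h * h * rho[1:-1, 1:-1, 1:-1]
        ) / 6.0
        converged = np.max(np.abs(new - phi)) < 1e-8
        phi = new
        if converged:
            break

    print(it, phi[n // 2, n // 2, n // 2])
    ```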

  11. Modelling and simulation of complex sociotechnical systems: envisioning and analysing work environments

    PubMed Central

    Hettinger, Lawrence J.; Kirlik, Alex; Goh, Yang Miang; Buckle, Peter

    2015-01-01

    Accurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning, the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and the potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and proposes a path forward to their development and implementation. Practitioner Summary: The size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety. PMID:25761227

  12. SediFoam: A general-purpose, open-source CFD-DEM solver for particle-laden flow with emphasis on sediment transport

    NASA Astrophysics Data System (ADS)

    Sun, Rui; Xiao, Heng

    2016-04-01

    With the growth of available computational resources, CFD-DEM (computational fluid dynamics-discrete element method) has become an increasingly promising and feasible approach for the study of sediment transport. Several existing CFD-DEM solvers are applied in chemical engineering and the mining industry. However, a robust CFD-DEM solver for the simulation of sediment transport is still desirable. In this work, the development of a three-dimensional, massively parallel, and open-source CFD-DEM solver, SediFoam, is detailed. This solver is built on the open-source solvers OpenFOAM and LAMMPS. OpenFOAM is a CFD toolbox that can perform three-dimensional fluid flow simulations on unstructured meshes; LAMMPS is a massively parallel DEM solver for molecular dynamics. Several validation tests of SediFoam are performed using cases of a wide range of complexities. The results obtained in the present simulations are consistent with those in the literature, which demonstrates the capability of SediFoam for sediment transport applications. In addition to the validation tests, the parallel efficiency of SediFoam is studied to test the performance of the code for large-scale and complex simulations. The parallel efficiency tests show that the scalability of SediFoam is satisfactory in simulations using up to O(10⁷) particles.

  13. Computational modeling of radiofrequency ablation: evaluation on ex vivo data using ultrasound monitoring

    NASA Astrophysics Data System (ADS)

    Audigier, Chloé; Kim, Younsu; Dillow, Austin; Boctor, Emad M.

    2017-03-01

    Radiofrequency ablation (RFA) is the most widely used minimally invasive ablative therapy for liver cancer, but it is challenged by a lack of patient-specific monitoring. Inter-patient tissue variability and the presence of blood vessels make the prediction of the RFA outcome difficult. A monitoring tool that can be personalized for a given patient during the intervention would be helpful to achieve complete tumor ablation. However, clinicians do not have access to such a tool, which results in incomplete treatment and a large number of recurrences. Computational models can simulate the phenomena and mechanisms governing this therapy. The temperature evolution as well as the resulting ablation can be modeled. When combined with intraoperative measurements, computational modeling becomes an accurate and powerful tool to gain quantitative understanding and to enable improvements in the ongoing clinical settings. This paper shows how computational models of RFA can be evaluated using intra-operative measurements. First, simulations are used to demonstrate the feasibility of the method, which is then evaluated on two ex vivo datasets. RFA is simulated on a simplified geometry to generate realistic longitudinal temperature maps and the resulting necrosis. Computed temperatures are compared with the temperature evolution recorded using thermometers, and with temperatures monitored by ultrasound (US) in a 2D plane containing the ablation tip. Two ablations were performed on two cadaveric bovine livers, achieving an average error of 2.2 °C between the computed and thermistor temperatures, and average errors of 1.4 °C and 2.7 °C between the temperatures computed and monitored by US during the ablation at two different time points (t = 240 s and t = 900 s).

  14. Crystallographic Lattice Boltzmann Method

    PubMed Central

    Namburi, Manjusha; Krithivasan, Siddharth; Ansumali, Santosh

    2016-01-01

    Current approaches to Direct Numerical Simulation (DNS) are computationally quite expensive for most realistic scientific and engineering applications of Fluid Dynamics such as automobiles or atmospheric flows. The Lattice Boltzmann Method (LBM), with its simplified kinetic descriptions, has emerged as an important tool for simulating hydrodynamics. In a heterogeneous computing environment, it is often preferred due to its flexibility and better parallel scaling. However, direct simulation of realistic applications, without the use of turbulence models, remains a distant dream even with highly efficient methods such as LBM. In LBM, a fictitious lattice with suitable isotropy in the velocity space is considered to recover Navier-Stokes hydrodynamics in the macroscopic limit. The same lattice is mapped onto a Cartesian grid for spatial discretization of the kinetic equation. In this paper, we present an inverted argument of the LBM, making spatial discretization the central theme. We argue that the optimal spatial discretization for LBM is a Body Centered Cubic (BCC) arrangement of grid points. We illustrate an order-of-magnitude gain in efficiency for LBM and thus significant progress towards the feasibility of DNS for realistic flows. PMID:27251098

  15. Learning a force field for the martensitic phase transformation in Zr

    NASA Astrophysics Data System (ADS)

    Zong, Hongxiang; Pilania, Ghanshyam; Ramprasad, Rampi; Lookman, Turab

    Atomistic simulations provide an effective means to understand the underlying physics of martensitic transformations under extreme conditions. However, this is still a challenge for certain phase-transforming metals due to the lack of an accurate classical force field. Quantum molecular dynamics (QMD) simulations are accurate but expensive. During the course of QMD simulations, similar configurations are constantly visited and revisited. Machine learning can effectively learn from past visits and, therefore, eliminate such redundancies. In this talk, we will discuss the development of a hybrid ML-QMD method in which on-demand, on-the-fly quantum mechanical (QM) calculations are performed to accelerate calculations of interatomic forces at much lower computational cost. Using zirconium as a model system, for which accurate atomistic potentials are currently unavailable, we will demonstrate the feasibility and effectiveness of our approach. Specifically, the structural phase transformation behavior computed within the ML-QMD approach will be compared with available experimental results. Furthermore, results on phonons, stacking fault energies, and activation barriers for the homogeneous martensitic transformation in Zr will be presented.

  16. From Solidification Processing to Microstructure to Mechanical Properties: A Multi-scale X-ray Study of an Al-Cu Alloy Sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tourret, D.; Mertens, J. C. E.; Lieberman, E.

    We follow an Al-12 at. pct Cu alloy sample from the liquid state to mechanical failure, using in situ X-ray radiography during directional solidification and tensile testing, as well as three-dimensional computed tomography of the microstructure before and after mechanical testing. The solidification processing stage is simulated with a multi-scale dendritic needle network model, and the micromechanical behavior of the solidified microstructure is simulated using voxelized tomography data and an elasto-viscoplastic fast Fourier transform model. This study demonstrates the feasibility of direct in situ monitoring of a metal alloy microstructure from the liquid processing stage up to its mechanical failure, supported by quantitative simulations of microstructure formation and its mechanical behavior.

  17. From Solidification Processing to Microstructure to Mechanical Properties: A Multi-scale X-ray Study of an Al-Cu Alloy Sample

    DOE PAGES

    Tourret, D.; Mertens, J. C. E.; Lieberman, E.; ...

    2017-09-13

    We follow an Al-12 at. pct Cu alloy sample from the liquid state to mechanical failure, using in situ X-ray radiography during directional solidification and tensile testing, as well as three-dimensional computed tomography of the microstructure before and after mechanical testing. The solidification processing stage is simulated with a multi-scale dendritic needle network model, and the micromechanical behavior of the solidified microstructure is simulated using voxelized tomography data and an elasto-viscoplastic fast Fourier transform model. This study demonstrates the feasibility of direct in situ monitoring of a metal alloy microstructure from the liquid processing stage up to its mechanical failure, supported by quantitative simulations of microstructure formation and its mechanical behavior.

  18. From Solidification Processing to Microstructure to Mechanical Properties: A Multi-scale X-ray Study of an Al-Cu Alloy Sample

    NASA Astrophysics Data System (ADS)

    Tourret, D.; Mertens, J. C. E.; Lieberman, E.; Imhoff, S. D.; Gibbs, J. W.; Henderson, K.; Fezzaa, K.; Deriy, A. L.; Sun, T.; Lebensohn, R. A.; Patterson, B. M.; Clarke, A. J.

    2017-11-01

    We follow an Al-12 at. pct Cu alloy sample from the liquid state to mechanical failure, using in situ X-ray radiography during directional solidification and tensile testing, as well as three-dimensional computed tomography of the microstructure before and after mechanical testing. The solidification processing stage is simulated with a multi-scale dendritic needle network model, and the micromechanical behavior of the solidified microstructure is simulated using voxelized tomography data and an elasto-viscoplastic fast Fourier transform model. This study demonstrates the feasibility of direct in situ monitoring of a metal alloy microstructure from the liquid processing stage up to its mechanical failure, supported by quantitative simulations of microstructure formation and its mechanical behavior.

  19. Assessing the accuracy of improved force-matched water models derived from Ab initio molecular dynamics simulations.

    PubMed

    Köster, Andreas; Spura, Thomas; Rutkai, Gábor; Kessler, Jan; Wiebeler, Hendrik; Vrabec, Jadran; Kühne, Thomas D

    2016-07-15

    The accuracy of water models derived from ab initio molecular dynamics simulations by means of an improved force-matching scheme is assessed for various thermodynamic, transport, and structural properties. It is found that although the resulting force-matched water models are typically less accurate than fully empirical force fields in predicting thermodynamic properties, they are nevertheless much more accurate than generally appreciated in reproducing the structure of liquid water, in fact surpassing most of the commonly used empirical water models. This development demonstrates that it is feasible to routinely parametrize computationally efficient yet predictive potential energy functions based on accurate ab initio molecular dynamics simulations for a large variety of different systems. © 2016 Wiley Periodicals, Inc.
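
    At its core, a force-matching scheme is a least-squares fit of potential parameters to reference forces, which can be sketched as follows; the synthetic basis and data are illustrative assumptions, not the authors' improved scheme.

```python
import numpy as np

# Minimal force-matching sketch: fit linear coefficients of a potential
# basis so that model forces reproduce reference ab initio forces in a
# least-squares sense. The synthetic basis and data are illustrative.
rng = np.random.default_rng(1)
n_samples, n_basis = 200, 3
A = rng.normal(size=(n_samples, n_basis))      # basis-force derivatives
true_c = np.array([1.5, -0.7, 0.2])            # "hidden" potential parameters
F_qm = A @ true_c + 0.01 * rng.normal(size=n_samples)  # noisy reference forces

coeffs, *_ = np.linalg.lstsq(A, F_qm, rcond=None)      # minimize ||A c - F||^2
print(coeffs)                                  # close to true_c
```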

  20. FESetup: Automating Setup for Alchemical Free Energy Simulations.

    PubMed

    Loeffler, Hannes H; Michel, Julien; Woods, Christopher

    2015-12-28

    FESetup is a new pipeline tool which can be used flexibly within larger workflows. The tool aims to support fast and easy setup of alchemical free energy simulations for molecular simulation packages such as AMBER, GROMACS, Sire, or NAMD. Post-processing methods like MM-PBSA and LIE can be set up as well. Ligands are automatically parametrized with AM1-BCC, and atom mappings for a single topology description are computed with a maximum common substructure search (MCSS) algorithm. An abstract molecular dynamics (MD) engine can be used for equilibration prior to free energy setup or standalone. Currently, all modern AMBER force fields are supported. Ease of use, robustness of the code, and automation where it is feasible are the main development goals. The project follows an open development model, and we welcome contributions.
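
    The single-topology atom mapping step can be illustrated with a maximum common substructure search in RDKit (an independent illustration of the MCSS idea, not FESetup's own implementation):

```python
from rdkit import Chem
from rdkit.Chem import rdFMCS

# Two hypothetical ligands for illustration:
lig_a = Chem.MolFromSmiles("c1ccccc1CO")     # benzyl alcohol
lig_b = Chem.MolFromSmiles("c1ccccc1CN")     # benzylamine

mcs = rdFMCS.FindMCS([lig_a, lig_b])          # maximum common substructure
core = Chem.MolFromSmarts(mcs.smartsString)

# Matching atom indices in each ligand give the single-topology mapping:
map_a = lig_a.GetSubstructMatch(core)
map_b = lig_b.GetSubstructMatch(core)
atom_mapping = dict(zip(map_a, map_b))
print(atom_mapping)
```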

  1. Hybrid Multiscale Simulation of Hydrologic and Biogeochemical Processes in the River-Groundwater Interaction Zone

    NASA Astrophysics Data System (ADS)

    Yang, X.; Scheibe, T. D.; Chen, X.; Hammond, G. E.; Song, X.

    2015-12-01

    The zone in which river water and groundwater mix plays an important role in natural ecosystems as it regulates the mixing of nutrients that control biogeochemical transformations. Subsurface heterogeneity leads to local hotspots of microbial activity that are important to system function yet difficult to resolve computationally. To address this challenge, we are testing a hybrid multiscale approach that couples models at two distinct scales, based on field research at the U. S. Department of Energy's Hanford Site. The region of interest is a 400 x 400 x 20 m macroscale domain that intersects the aquifer and the river and contains a contaminant plume. However, biogeochemical activity is high in a thin zone (mud layer, <1 m thick) immediately adjacent to the river. This microscale domain is highly heterogeneous and requires fine spatial resolution to adequately represent the effects of local mixing on reactions. It is not computationally feasible to resolve the full macroscale domain at the fine resolution needed in the mud layer, and the reaction network needed in the mud layer is much more complex than that needed in the rest of the macroscale domain. Hence, a hybrid multiscale approach is used to efficiently and accurately predict flow and reactive transport at both scales. In our simulations, models at both scales are simulated using the PFLOTRAN code. Multiple microscale simulations in dynamically defined sub-domains (fine resolution, complex reaction network) are executed and coupled with a macroscale simulation over the entire domain (coarse resolution, simpler reaction network). The objectives of the research include: 1) comparing accuracy and computing cost of the hybrid multiscale simulation with a single-scale simulation; 2) identifying hot spots of microbial activity; and 3) defining macroscopic quantities such as fluxes, residence times and effective reaction rates.
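
    The coupling pattern described above can be sketched with a toy one-dimensional example: a coarse macroscale diffusion solve over the whole domain, with a fine-resolution reactive sub-domain standing in for the mud layer. Grids, rates, and coupling rules are illustrative assumptions, not the PFLOTRAN setup.

```python
import numpy as np

# Toy hybrid multiscale coupling: coarse diffusion everywhere, plus a
# refined sub-domain with a first-order reaction, coupled each step.
L, nc, refine = 1.0, 20, 10                  # domain, coarse cells, refinement
dx_c, dx_f = L / nc, L / (nc * refine)
D, k, dt = 1e-3, 5.0, 0.01                   # diffusivity, reaction rate, step

macro = np.ones(nc)                          # coarse concentration field
micro = np.ones(2 * refine)                  # fine field over last 2 coarse cells

for step in range(500):
    # Macroscale step: explicit diffusion, no reaction (simple network).
    macro[1:-1] += dt * D / dx_c**2 * (macro[2:] - 2*macro[1:-1] + macro[:-2])
    # Downscale: impose macro boundary value on the micro sub-domain edge.
    micro[0] = macro[-3]
    # Microscale step: diffusion plus reaction on the fine grid.
    micro[1:-1] += dt * (D / dx_f**2 * (micro[2:] - 2*micro[1:-1] + micro[:-2])
                         - k * micro[1:-1])
    micro[-1] = micro[-2]                    # zero-flux river-side boundary
    # Upscale: replace covered coarse cells with fine-solution averages,
    # feeding the reaction's effect back to the macroscale.
    macro[-2] = micro[:refine].mean()
    macro[-1] = micro[refine:].mean()

print(macro.round(3))
```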

  2. Development of Three-Dimensional Flow Code Package to Predict Performance and Stability of Aircraft with Leading Edge Ice Contamination

    NASA Technical Reports Server (NTRS)

    Strash, D. J.; Summa, J. M.

    1996-01-01

    In the work reported herein, a simplified, uncoupled, zonal procedure is utilized to assess the capability of numerically simulating icing effects on a Boeing 727-200 aircraft. The computational approach combines potential flow plus boundary layer simulations by VSAERO for the un-iced aircraft forces and moments with Navier-Stokes simulations by NPARC for the incremental forces and moments due to iced components. These are compared with wind tunnel force and moment data, supplied by the Boeing Company, examining longitudinal flight characteristics. Grid refinement improved the local flow features over previously reported work with no appreciable difference in the incremental ice effect. The computed lift curve slope with and without empennage ice matches the experimental value to within 1%, and the zero lift angle agrees to within 0.2 of a degree. The computed slope of the un-iced and iced aircraft longitudinal stability curve is within about 2% of the test data. This work demonstrates the feasibility of a zonal method for the icing analysis of complete aircraft or isolated components within the linear angle of attack range. In fact, this zonal technique has allowed for the viscous analysis of a complete aircraft with ice which is currently not otherwise considered tractable.

  3. Monte Carlo Simulation for Polychromatic X-Ray Fluorescence Computed Tomography with Sheet-Beam Geometry

    PubMed Central

    Jiang, Shanghai

    2017-01-01

    X-ray fluorescence computed tomography (XFCT) based on a sheet beam can save a huge amount of time in obtaining a whole set of projections using a synchrotron. However, such a setup is clearly impractical for most biomedical research laboratories. In this paper, polychromatic X-ray fluorescence computed tomography with sheet-beam geometry is tested by Monte Carlo simulation. First, two phantoms (A and B) filled with PMMA are used to simulate the imaging process in GEANT4. Phantom A contains several GNP-loaded regions with the same size (10 mm) in height and diameter but different Au weight concentrations ranging from 0.3% to 1.8%. Phantom B contains twelve GNP-loaded regions with the same Au weight concentration (1.6%) but different diameters ranging from 1 mm to 9 mm. Second, a discretized presentation of the imaging model is established to reconstruct more accurate XFCT images. Third, XFCT images of phantoms A and B are reconstructed by filtered back-projection (FBP) and maximum likelihood expectation maximization (MLEM), with and without correction, respectively. The contrast-to-noise ratio (CNR) is calculated to evaluate all the reconstructed images. Our results show that a sheet-beam XFCT system based on a polychromatic X-ray source is feasible and that the discretized imaging model can be used to reconstruct more accurate images. PMID:28567054
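
    The MLEM reconstruction used above follows the standard multiplicative update, sketched here in generic form (the paper's XFCT system matrix and attenuation corrections are not reproduced):

```python
import numpy as np

# Generic MLEM reconstruction: A is the system matrix, y the measured data.
def mlem(A, y, n_iters=50):
    x = np.ones(A.shape[1])                 # uniform initial image
    sens = A.T @ np.ones(A.shape[0])        # sensitivity (column sums)
    for _ in range(n_iters):
        proj = A @ x                        # forward projection
        ratio = y / np.maximum(proj, 1e-12) # compare with measurement
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x

# Tiny sanity check with a random nonnegative system matrix.
rng = np.random.default_rng(0)
A = rng.random((60, 20)); x_true = rng.random(20)
x_rec = mlem(A, A @ x_true, 200)
print(np.abs(x_rec - x_true).max())
```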

  4. Featured Image: The Simulated Collapse of a Core

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-11-01

    This stunning snapshot is from a simulation of a core-collapse supernova. Despite having been studied for many decades, the mechanism driving the explosions of core-collapse supernovae is still an area of active research. Extremely complex simulations such as this one represent best efforts to include as many realistic physical processes as is currently computationally feasible. In this study led by Luke Roberts (a NASA Einstein Postdoctoral Fellow at Caltech at the time), a core-collapse supernova is modeled long-term in fully 3D simulations that include the effects of general relativity, radiation hydrodynamics, and even neutrino physics. The authors use these simulations to examine the evolution of a supernova after its core bounce. To read more about the team's findings (and see more awesome images from their simulations), check out the paper below! Citation: Luke F. Roberts et al 2016 ApJ 831 98. doi:10.3847/0004-637X/831/1/98

  5. Population Simulation, AKA: Grahz, Rahbitz and Fawkzes

    NASA Technical Reports Server (NTRS)

    Bangert, Tyler R.

    2008-01-01

    In an effort to give students a more visceral experience of science and instill a deeper working knowledge of concepts, activities that utilize hands-on, laboratory, and simulated experiences are recommended because these activities have a greater impact on student learning, especially for Native American students. Because it is not usually feasible to take large and/or multiple classes of high school science students into the field to count the numbers of organisms of a particular species, especially over a long period of time and covering a large area of an environment, the population simulation presented in this paper was created to aid students in understanding population dynamics by working with a simulated environment, which can be done in the classroom. Students create an environment and populate the environment with imaginary species. Then, using a sequence of "rules" that allow organisms to eat, reproduce, move, and age, students see how the population of a species changes over time. In particular, students practice collecting data, summarizing information, plotting graphs, and interpreting graphs for such information as carrying capacity, predator-prey relationships, and how specific species factors impact population and the environment. Students draw conclusions from their results and suggest further research, which may involve changes in simulation parameters, prediction of outcomes, and testing predictions. The population simulation has demonstrated success in the above student activities using a "board game" version. A computer version of the population simulation needs more testing, but preliminary runs are promising. A second, more complicated computer simulation will model the same dynamics and add simulated population genetics.
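
    A minimal version of such eat/reproduce/age rules can be coded in a few lines; the parameter values below are illustrative assumptions, not those of the classroom exercise.

```python
import random

# Toy agent-based population step in the spirit of the rules above.
def step(population, food, birth_rate=0.3, max_age=10):
    next_gen = []
    for animal in population:
        animal["age"] += 1
        if animal["age"] > max_age or food <= 0:   # starvation / old age
            continue
        food -= 1                                   # each survivor eats
        next_gen.append(animal)
        if random.random() < birth_rate:            # reproduction rule
            next_gen.append({"age": 0})
    return next_gen, food

population = [{"age": 0} for _ in range(20)]
food = 100
for year in range(15):
    food += 30                                      # environment regrows food
    population, food = step(population, food)
    print(year, len(population))
```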

  6. Neuromorphic Kalman filter implementation in IBM’s TrueNorth

    NASA Astrophysics Data System (ADS)

    Carney, R.; Bouchard, K.; Calafiura, P.; Clark, D.; Donofrio, D.; Garcia-Sciveres, M.; Livezey, J.

    2017-10-01

    Following the advent of a post-Moore’s law field of computation, novel architectures continue to emerge. With composite, multi-million connection neuromorphic chips like IBM’s TrueNorth, neural engineering has now become a feasible technology in this novel computing paradigm. High Energy Physics experiments are continuously exploring new methods of computation and data handling, including neuromorphic, to support the growing challenges of the field and be prepared for future commodity computing trends. This work details the first instance of a Kalman filter implementation in IBM’s neuromorphic architecture, TrueNorth, for both parallel and serial spike trains. The implementation is tested on multiple simulated systems and its performance is evaluated with respect to an equivalent non-spiking Kalman filter. The limits of the implementation are explored whilst varying the size of weight and threshold registers, the number of spikes used to encode a state, size of neuron block for spatial encoding, and neuron potential reset schemes.
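
    For reference, the conventional (non-spiking) Kalman filter against which the neuromorphic implementation is evaluated consists of the standard predict/update recursion, sketched here with generic placeholder matrices:

```python
import numpy as np

# Standard Kalman filter predict/update step.
def kalman_step(x, P, z, F, H, Q, R):
    # Predict
    x = F @ x                          # state extrapolation
    P = F @ P @ F.T + Q                # covariance extrapolation
    # Update
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)            # state correction with measurement z
    P = (np.eye(len(x)) - K @ H) @ P   # covariance correction
    return x, P

# Example: 1D constant-velocity track.
F = np.array([[1., 1.], [0., 1.]]); H = np.array([[1., 0.]])
Q = 0.01 * np.eye(2); R = np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
for z in [1.0, 2.1, 2.9, 4.2]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(x)
```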

  7. The Direct Lighting Computation in Global Illumination Methods

    NASA Astrophysics Data System (ADS)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
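
    The direct lighting computation can be sketched as uniform area sampling of the emitter; a constant (Lambertian) BRDF factor is folded into Le for simplicity, and the scene below is an illustrative assumption.

```python
import numpy as np

# Monte Carlo estimator of direct lighting from an area light: sample
# points on the emitter uniformly and average geometry-weighted terms.
def direct_lighting(x, n_x, light_samples, light_area, Le, visible):
    total = 0.0
    for y, n_y in light_samples:              # (point, normal) pairs on light
        d = y - x
        r2 = d @ d
        w = d / np.sqrt(r2)                   # direction from x to the light
        cos_x = max(n_x @ w, 0.0)             # cosine at the receiving surface
        cos_y = max(-(n_y @ w), 0.0)          # cosine at the emitter
        if cos_x > 0.0 and cos_y > 0.0 and visible(x, y):
            total += Le * cos_x * cos_y / r2  # geometry term G(x, y)
    # Uniform area sampling has pdf 1/light_area, hence the factor below.
    return total * light_area / len(light_samples)

# Tiny usage example: a 1x1 light one unit above the receiving point.
rng = np.random.default_rng(0)
samples = [(np.array([u - 0.5, v - 0.5, 1.0]), np.array([0.0, 0.0, -1.0]))
           for u, v in rng.random((64, 2))]
print(direct_lighting(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                      samples, 1.0, 1.0, lambda x, y: True))
```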

  8. Development and evaluation of a Kalman-filter algorithm for terminal area navigation using sensors of moderate accuracy

    NASA Technical Reports Server (NTRS)

    Kanning, G.; Cicolani, L. S.; Schmidt, S. F.

    1983-01-01

    Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.

  9. Visual environment recognition for robot path planning using template matched filters

    NASA Astrophysics Data System (ADS)

    Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto

    2017-08-01

    A visual approach to environment recognition for robot navigation is proposed. This work includes a template matching filtering technique to detect obstacles and feasible paths using a single camera to sense a cluttered environment. In this problem statement, a robot can move from the start to the goal by choosing a single path among multiple possible ways. In order to generate an efficient and safe path for mobile robot navigation, the proposal employs a pseudo-bacterial potential field algorithm to derive optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of accuracy of environment recognition and efficiency of path planning computation.
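
    The classical artificial potential field step that the pseudo-bacterial evolutionary algorithm tunes can be sketched as follows; the gains and geometry are illustrative assumptions.

```python
import numpy as np

# Basic artificial potential field step: the robot descends the combined
# attractive (goal) and repulsive (obstacle) potential gradient.
def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0,
                         d0=2.0, step=0.05):
    force = k_att * (goal - pos)                      # attractive pull to goal
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-9 < d < d0:                             # repulsion within range d0
            force += k_rep * (1/d - 1/d0) / d**3 * (pos - obs)
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([5.0, 4.5])]
for _ in range(400):
    pos = potential_field_step(pos, goal, obstacles)
print(pos.round(2))                                    # near the goal
```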

  10. Magnetoacoustic Tomography with Magnetic Induction (MAT-MI) for Breast Tumor Imaging: Numerical Modeling and Simulation

    PubMed Central

    Zhou, Lian; Li, Xu; Zhu, Shanan; He, Bin

    2011-01-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) was recently introduced as a noninvasive electrical conductivity imaging approach with high spatial resolution, close to that of ultrasound imaging. In the present study, we test the feasibility of the MAT-MI method for breast tumor imaging using numerical modeling and computer simulation. Using the finite element method, we have built three-dimensional numerical breast models with a variety of embedded tumors for this simulation study. In order to obtain an accurate and stable forward solution that does not have numerical errors caused by singular MAT-MI acoustic sources at conductivity boundaries, we first derive an integral forward method for calculating MAT-MI acoustic sources over the entire imaging volume. An inverse algorithm for reconstructing the MAT-MI acoustic source is also derived with a spherical measurement aperture, which simulates a practical setup for breast imaging. With the numerical breast models, we have conducted computer simulations under different imaging parameter setups, and all the results suggest that breast tumors that have a large conductivity contrast to their surrounding tissues, as reported in the literature, may be readily detected in the reconstructed MAT-MI images. In addition, our simulations also suggest that the sensitivity of imaging breast tumors using the presented MAT-MI setup depends more on the tumor location and the conductivity contrast between the tumor and its surrounding tissues than on the tumor size. PMID:21364262

  11. Modelling and Simulation of Search Engine

    NASA Astrophysics Data System (ADS)

    Nasution, Mahyuddin K. M.

    2017-01-01

    The best tool currently available for accessing information is the search engine. The information space, however, has its own behaviour, and describing it mathematically makes its characteristics easier to identify. This paper reveals some characteristics of search engines based on a model of a document collection and then estimates their impact on the feasibility of information retrieval. We derive characteristics of search engines through lemmas and theorems about singletons and doubletons, and then compute these characteristics statistically to simulate the use of a search engine, in this case Google and Yahoo. The two search engines behave differently, although in theory both are based on the same concept of a document collection.

  12. SPOKES: An End-to-End Simulation Facility for Spectroscopic Cosmological Surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nord, B.; Amara, A.; Refregier, A.

    The nature of dark matter, dark energy and large-scale gravity pose some of the most pressing questions in cosmology today. These fundamental questions require highly precise measurements, and a number of wide-field spectroscopic survey instruments are being designed to meet this requirement. A key component in these experiments is the development of a simulation tool to forecast science performance, define requirement flow-downs, optimize implementation, demonstrate feasibility, and prepare for exploitation. We present SPOKES (SPectrOscopic KEn Simulation), an end-to-end simulation facility for spectroscopic cosmological surveys designed to address this challenge. SPOKES is based on an integrated infrastructure, modular function organization, coherent data handling and fast data access. These key features allow reproducibility of pipeline runs, enable ease of use and provide flexibility to update functions within the pipeline. The cyclic nature of the pipeline offers the possibility to make the science output an efficient measure for design optimization and feasibility testing. We present the architecture, first science, and computational performance results of the simulation pipeline. The framework is general, but for the benchmark tests, we use the Dark Energy Spectrometer (DESpec), one of the early concepts for the upcoming project, the Dark Energy Spectroscopic Instrument (DESI). As a result, we discuss how the SPOKES framework enables a rigorous process to optimize and exploit spectroscopic survey experiments in order to derive high-precision cosmological measurements optimally.

  13. Thoracoscopic anatomical lung segmentectomy using 3D computed tomography simulation without tumour markings for non-palpable and non-visualized small lung nodules.

    PubMed

    Kato, Hirohisa; Oizumi, Hiroyuki; Suzuki, Jun; Hamada, Akira; Watarai, Hikaru; Sadahiro, Mitsuaki

    2017-09-01

    Although wedge resection can be curative for small lung tumours, tumour marking is sometimes required to identify non-palpable or visually undetectable lung nodules for resection. Tumour marking sometimes fails and occasionally causes serious complications. We have performed many thoracoscopic segmentectomies using 3D computed tomography simulation for undetectable small lung tumours without any tumour markings. The aim of this study was to investigate whether thoracoscopic segmentectomy planned with 3D computed tomography simulation could precisely remove non-palpable and visually undetectable tumours. Between January 2012 and March 2016, 58 patients underwent thoracoscopic segmentectomy using 3D computed tomography simulation for non-palpable, visually undetectable tumours, and surgical outcomes were evaluated. A total of 35, 14 and 9 patients underwent segmentectomy, subsegmentectomy and segmentectomy combined with adjacent subsegmentectomy, respectively. All tumours were correctly resected without tumour marking. The median tumour size and distance from the visceral pleura were 14 ± 5.2 mm (range 5-27 mm) and 11.6 mm (range 1-38.8 mm), respectively. Median values related to the procedures were: operative time, 176 min (range 83-370 min); blood loss, 43 ml (range 0-419 ml); duration of chest tube placement, 1 day (range 1-8 days); and postoperative hospital stay, 5 days (range 3-12 days). Two cases were converted to open thoracotomy due to bleeding, and three cases required pleurodesis for pleural fistula. No recurrences occurred during the mean follow-up period of 44.4 months (range 5-53 months). Thoracoscopic segmentectomy using 3D computed tomography simulation was feasible and could be performed to resect undetectable tumours with no tumour markings. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  14. Descriptive Summaries of the Research Development Test & Evaluation. Army Appropriation FY 1984. Supporting Data FY 1984 Budget Estimate Submitted to Congress--February 1983. Volume I.

    DTIC Science & Technology

    1983-02-01

    ...successfully modeled to enhance future computer design simulations; (2) a new methodology for conducting dynamic analysis of vehicle mechanics was...to preliminary design methodology for tilt rotors, advancing blade concept configuration helicopters, and compound helicopters in conjunction with...feasibility of low-level personnel parachutes has been demonstrated. A study was begun to design a free-fall water container. An experimental program to

  15. Software For Design Of Life-Support Systems

    NASA Technical Reports Server (NTRS)

    Rudokas, Mary R.; Cantwell, Elizabeth R.; Robinson, Peter I.; Shenk, Timothy W.

    1991-01-01

    Design Assistant Workstation (DAWN) computer program is prototype of expert software system for analysis and design of regenerative, physical/chemical life-support systems that revitalize air, reclaim water, produce food, and treat waste. Incorporates both conventional software for quantitative mathematical modeling of physical, chemical, and biological processes and expert system offering user stored knowledge about materials and processes. Constructs task tree as it leads user through simulated process, offers alternatives, and indicates where alternative not feasible. Also enables user to jump from one design level to another.

  16. Robust analysis of an underwater navigational strategy in electrically heterogeneous corridors.

    PubMed

    Dimble, Kedar D; Ranganathan, Badri N; Keshavan, Jishnu; Humbert, J Sean

    2016-08-01

    Obstacles and other global stimuli provide relevant navigational cues to a weakly electric fish. In this work, a robust analysis of a control strategy based on electrolocation for performing obstacle avoidance in electrically heterogeneous corridors is presented and validated. Static output feedback control is shown to achieve the desired goal of reflexive obstacle avoidance in such environments in simulation and experimentation. The proposed approach is computationally inexpensive and readily implementable on a small-scale underwater vehicle, making autonomous underwater navigation feasible in real time.

  17. Hazard detection and avoidance sensor for NASA's planetary landers

    NASA Technical Reports Server (NTRS)

    Lau, Brian; Chao, Tien-Hsin

    1992-01-01

    An optical terrain-analysis-based sensor system specifically designed for landing hazard detection, as required for NASA's autonomous planetary landers, is introduced. This optical hazard detection and avoidance (HDA) sensor utilizes an optoelectronic wedge-and-ring (WRD) filter for Fourier-transformed feature extraction and an electronic neural network processor for pattern classification. A fully implemented optical HDA sensor would assure safe landing of the planetary landers. Computer simulation results of a successful feasibility study are reported. Directions for future research on hardware system implementation are also provided.

  18. Communications satellite system for Africa

    NASA Astrophysics Data System (ADS)

    Kriegl, W.; Laufenberg, W.

    1980-09-01

    Earlier established requirement estimations were improved upon by contacting African administrations and organizations. An enormous demand is shown to exist for telephony and teletype services in rural areas. It is shown that educational television broadcasting should be realized in the current African transport and communications decade (1978-1987). Radio broadcasting is proposed in order to overcome illiteracy and to improve educational levels. The technical and commercial feasibility of the system is provided by computer simulations which demonstrate how the required objectives can be fulfilled in conjunction with ground networks.

  19. Space Station solar water heater

    NASA Technical Reports Server (NTRS)

    Horan, D. C.; Somers, Richard E.; Haynes, R. D.

    1990-01-01

    The feasibility of directly converting solar energy for crew water heating on the Space Station Freedom (SSF) and other human-tended missions such as a geosynchronous space station, lunar base, or Mars spacecraft was investigated. Computer codes were developed to model the systems, and a proof-of-concept thermal vacuum test was conducted to evaluate system performance in an environment simulating the SSF. The results indicate that a solar water heater is feasible. It could provide up to 100 percent of the design heating load without a significant configuration change to the SSF or other missions. The solar heater system requires only 15 percent of the electricity that an all-electric system on the SSF would require. This allows a reduction in the solar array or a surplus of electricity for onboard experiments.

  20. Multi-pass Monte Carlo simulation method in nuclear transmutations.

    PubMed

    Mateescu, Liviu; Kadambi, N Prasad; Ravindra, Nuggehalli M

    2016-12-01

    Monte Carlo methods, in their direct brute-force simulation incarnation, give realistic results if the involved probabilities, be they geometrical or otherwise, remain constant for the duration of the simulation. However, there are physical setups where the evolution of the simulation represents a modification of the simulated system itself. Chief among such evolving simulated systems are activation/transmutation setups. That is, the simulation starts with a given set of probabilities, which are determined by the geometry of the system, the components, and the microscopic interaction cross-sections. However, the relative weight of the components of the system changes along with the steps of the simulation. A natural remedy would be adjusting the probabilities after every step of the simulation. On the other hand, the physical system typically has a number of components of the order of Avogadro's number, usually 10^25 or 10^26 members. A simulation step changes the characteristics of just a few of these members; a probability will therefore shift by a quantity of order 1/10^25. Such a change cannot be accounted for within a simulation, because the simulation would then need at least 10^28 steps to have any significance. This is not feasible, of course. For our computing devices, a simulation of one million steps is comfortable, but a further order of magnitude becomes too big a stretch for the computing resources. We propose here a method of dealing with the changing probabilities that leads to increased precision. This method is intended as a fast approximating approach, and also as a simple introduction (for the benefit of students) to the very branched subject of Monte Carlo simulations vis-à-vis nuclear reactors. Copyright © 2016 Elsevier Ltd. All rights reserved.
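
    The multi-pass idea can be sketched with a toy two-isotope chain: probabilities are frozen within each pass of many histories and recomputed from the updated composition between passes. The cross-sections and update rule below are illustrative assumptions, not the authors' exact scheme.

```python
import random

# Toy multi-pass transmutation: isotope A transmutes to B; reaction
# probabilities are held fixed during a pass and refreshed between passes.
cross_section = {"A": 5.0, "B": 1.0}          # relative reaction likelihoods

def pass_probabilities(composition):
    w = {iso: n * cross_section[iso] for iso, n in composition.items()}
    total = sum(w.values())
    return {iso: wi / total for iso, wi in w.items()}

composition = {"A": 1.0, "B": 0.0}            # atom fractions
reactions_per_pass = 0.05                      # fraction of atoms reacting per pass

for p in range(10):                            # multi-pass loop
    probs = pass_probabilities(composition)    # frozen during this pass
    hits = sum(random.random() < probs["A"] for _ in range(10_000)) / 10_000
    dA = reactions_per_pass * hits             # A atoms consumed this pass
    composition["A"] -= dA
    composition["B"] += dA                     # transmutation product
    print(p, round(composition["A"], 4), round(composition["B"], 4))
```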

  1. Evolving the Land Information System into a Cloud Computing Service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Houser, Paul R.

    The Land Information System (LIS) was developed to use advanced flexible land surface modeling and data assimilation frameworks to integrate extremely large satellite- and ground-based observations with advanced land surface models to produce continuous high-resolution fields of land surface states and fluxes. The resulting fields are extremely useful for drought and flood assessment, agricultural planning, disaster management, weather and climate forecasting, water resources assessment, and the like. We envisioned transforming the LIS modeling system into a scientific cloud computing-aware web and data service that would allow clients to easily set up and configure the system for use in addressing large water management issues. The focus of this Phase 1 project was to determine the scientific, technical, and commercial merit and feasibility of the proposed LIS-cloud innovations that are currently barriers to broad LIS applicability. We (a) quantified the barriers to broad LIS utility and commercialization (high performance computing, big data, user interface, and licensing issues); (b) designed the proposed LIS-cloud web service, model-data interface, database services, and user interfaces; (c) constructed a prototype LIS user interface including abstractions for simulation control, visualization, and data interaction; (d) used the prototype to conduct a market analysis and survey to determine potential market size and competition; (e) identified LIS software licensing and copyright limitations and developed solutions; and (f) developed a business plan for development and marketing of the LIS-cloud innovation. While some significant feasibility issues were found in the LIS licensing, overall a high degree of LIS-cloud technical feasibility was found.

  2. Cerebellum-inspired neural network solution of the inverse kinematics problem.

    PubMed

    Asadi-Eydivand, Mitra; Ebadzadeh, Mohammad Mehdi; Solati-Hashjin, Mehran; Darlot, Christian; Abu Osman, Noor Azuan

    2015-12-01

    The demand today for more complex robots that have manipulators with higher degrees of freedom is increasing because of technological advances. Obtaining the precise movement for a desired trajectory or a sequence of arm positions requires the computation of the inverse kinematic (IK) function, which is a major problem in robotics. The solution of the IK problem leads robots to the precise position and orientation of their end-effector. We developed a bioinspired solution, comparable with cerebellar anatomy and function, to solve this problem. The proposed model is stable under all conditions merely by parameter determination, in contrast to recursive model-based solutions, which remain stable only under certain conditions. We adapted the proposed model to a simple two-segmented arm to prove its feasibility under a basic condition. A fuzzy neural network, through its learning method, was used to compute the parameters of the system. Simulation results show the practical feasibility and efficiency of the proposed model in robotics. The main advantage of the proposed model is its generalizability and potential use in any robot.
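
    The two-segment test case admits a closed-form solution, shown here as a standard textbook sketch for orientation (it is not the authors' neural-network model):

```python
import numpy as np

# Closed-form inverse kinematics for a planar two-segment arm.
def two_link_ik(x, y, l1, l2):
    r2 = x**2 + y**2
    c2 = (r2 - l1**2 - l2**2) / (2 * l1 * l2)    # law of cosines
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    theta2 = np.arccos(c2)                        # elbow-down solution
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2),
                                           l1 + l2 * np.cos(theta2))
    return theta1, theta2

# Forward check: joint angles should reproduce the target end-effector point.
t1, t2 = two_link_ik(1.0, 0.5, 1.0, 1.0)
print(np.cos(t1) + np.cos(t1 + t2), np.sin(t1) + np.sin(t1 + t2))
```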

  3. Feasibility of Conducting J-2X Engine Testing at the Glenn Research Center Plum Brook Station B-2 Facility

    NASA Technical Reports Server (NTRS)

    Schafer, Charles F.; Cheston, Derrick J.; Worlund, Armis L.; Brown, James R.; Hooper, William G.; Monk, Jan C.; Winstead, Thomas W.

    2008-01-01

    A trade study of the feasibility of conducting J-2X testing in the Glenn Research Center (GRC) Plum Brook Station (PBS) B-2 facility was initiated in May 2006, with results available in October 2006. The Propulsion Test Integration Group (PTIG) led the study with support from Marshall Space Flight Center (MSFC) and Jacobs Sverdrup Engineering. The primary focus of the trade study was on facility design concepts and their capability to satisfy the J-2X altitude simulation test requirements. The propulsion systems previously tested in the B-2 facility were in the 30,000-pound (30K) thrust class; the J-2X thrust is approximately 10 times larger. Therefore, concepts significantly different from the current configuration are necessary for the diffuser, spray chamber subsystems, and cooling water. Steam exhaust condensation in the spray chamber is judged to be the key risk consideration relative to acceptable spray chamber pressure. Further assessment via computational fluid dynamics (CFD) and other simulation capabilities (e.g., a methodology for anchoring predictions with actual test data, and subscale testing) was recommended to support the investigation.

  4. Feasibility of 3D Reconstruction of Neural Morphology Using Expansion Microscopy and Barcode-Guided Agglomeration

    PubMed Central

    Yoon, Young-Gyu; Dai, Peilun; Wohlwend, Jeremy; Chang, Jae-Byum; Marblestone, Adam H.; Boyden, Edward S.

    2017-01-01

    We here introduce and study the properties, via computer simulation, of a candidate automated approach to algorithmic reconstruction of dense neural morphology, based on simulated data of the kind that would be obtained via two emerging molecular technologies—expansion microscopy (ExM) and in-situ molecular barcoding. We utilize a convolutional neural network to detect neuronal boundaries from protein-tagged plasma membrane images obtained via ExM, as well as a subsequent supervoxel-merging pipeline guided by optical readout of information-rich, cell-specific nucleic acid barcodes. We attempt to use conservative imaging and labeling parameters, with the goal of establishing a baseline case that points to the potential feasibility of optical circuit reconstruction, leaving open the possibility of higher-performance labeling technologies and algorithms. We find that, even with these conservative assumptions, an all-optical approach to dense neural morphology reconstruction may be possible via the proposed algorithmic framework. Future work should explore both the design-space of chemical labels and barcodes, as well as algorithms, to ultimately enable routine, high-performance optical circuit reconstruction. PMID:29114215

  5. Feasibility of 3D Reconstruction of Neural Morphology Using Expansion Microscopy and Barcode-Guided Agglomeration.

    PubMed

    Yoon, Young-Gyu; Dai, Peilun; Wohlwend, Jeremy; Chang, Jae-Byum; Marblestone, Adam H; Boyden, Edward S

    2017-01-01

    We here introduce and study the properties, via computer simulation, of a candidate automated approach to algorithmic reconstruction of dense neural morphology, based on simulated data of the kind that would be obtained via two emerging molecular technologies-expansion microscopy (ExM) and in-situ molecular barcoding. We utilize a convolutional neural network to detect neuronal boundaries from protein-tagged plasma membrane images obtained via ExM, as well as a subsequent supervoxel-merging pipeline guided by optical readout of information-rich, cell-specific nucleic acid barcodes. We attempt to use conservative imaging and labeling parameters, with the goal of establishing a baseline case that points to the potential feasibility of optical circuit reconstruction, leaving open the possibility of higher-performance labeling technologies and algorithms. We find that, even with these conservative assumptions, an all-optical approach to dense neural morphology reconstruction may be possible via the proposed algorithmic framework. Future work should explore both the design-space of chemical labels and barcodes, as well as algorithms, to ultimately enable routine, high-performance optical circuit reconstruction.

  6. Electronic Circular Dichroism of [16]Helicene With Simplified TD-DFT: Beyond the Single Structure Approach.

    PubMed

    Bannwarth, Christoph; Seibert, Jakob; Grimme, Stefan

    2016-05-01

    The electronic circular dichroism (ECD) spectrum of the recently synthesized [16]helicene and a derivative comprising two triisopropylsilyloxy protection groups was computed by means of the very efficient simplified time-dependent density functional theory (sTD-DFT) approach. Different from many previous ECD studies of helicenes, nonequilibrium structure effects were accounted for by computing ECD spectra on "snapshots" obtained from a molecular dynamics (MD) simulation including solvent molecules. The trajectories are based on a molecule-specific classical potential as obtained from the recently developed quantum chemically derived force field (QMDFF) scheme. The reduced computational cost in the MD simulation due to the use of the QMDFF (compared to ab initio MD), as well as the sTD-DFT approach, makes realistic spectral simulations feasible for these compounds, which comprise more than 100 atoms. While the ECD spectra of [16]helicene and its derivative computed vertically on the respective gas-phase equilibrium geometries show noticeable differences, these are "washed out" when nonequilibrium structures are taken into account. The computed spectra with two recommended density functionals (ωB97X and BHLYP) and extended basis sets compare very well with the experimental one. In addition, we provide an estimate for the missing absolute intensities of the latter. The approach presented here could also be used in future studies to capture nonequilibrium effects, but also to systematically average ECD spectra over different conformations in more flexible molecules. Chirality 28:365-369, 2016. © 2016 Wiley Periodicals, Inc.
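
    The snapshot-averaging step can be sketched as Gaussian broadening of each computed excitation followed by an ensemble average; the broadening width and data layout below are assumptions.

```python
import numpy as np

# Snapshot-averaged ECD spectrum: broaden each excitation (energy,
# rotatory strength) with a Gaussian and average over MD snapshots.
def ecd_spectrum(snapshots, grid, sigma=0.15):
    """snapshots: list of (n_excitations, 2) arrays of (energy_eV, R);
    grid: energies (eV) at which to evaluate the averaged spectrum."""
    spec = np.zeros_like(grid)
    for excitations in snapshots:
        for e, r in excitations:
            spec += r * np.exp(-((grid - e) ** 2) / (2 * sigma ** 2))
    return spec / len(snapshots)      # ensemble-averaged lineshape

# Synthetic usage: two snapshots, each with two excitations.
snaps = [np.array([[3.2, 1.0], [4.1, -0.6]]),
         np.array([[3.3, 0.8], [4.0, -0.7]])]
grid = np.linspace(2.5, 5.0, 200)
print(ecd_spectrum(snaps, grid).max().round(3))
```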

  7. Improved FastICA algorithm in fMRI data analysis using the sparsity property of the sources.

    PubMed

    Ge, Ruiyang; Wang, Yubao; Zhang, Jipeng; Yao, Li; Zhang, Hang; Long, Zhiying

    2016-04-01

    As a blind source separation technique, independent component analysis (ICA) has many applications in functional magnetic resonance imaging (fMRI). Although either temporal or spatial prior information has been introduced into the constrained ICA and semi-blind ICA methods to improve the performance of ICA in fMRI data analysis, certain types of additional prior information, such as sparsity, have seldom been added to ICA algorithms as constraints. In this study, we proposed a SparseFastICA method that adds source sparsity as a constraint to the FastICA algorithm to improve the performance of the widely used FastICA. The source sparsity is estimated through a smoothed ℓ0 norm method. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of SparseFastICA and made a performance comparison between SparseFastICA, FastICA, and Infomax ICA. Results on both the simulated and real fMRI data demonstrated the feasibility and robustness of SparseFastICA for source separation in fMRI data, and showed that SparseFastICA has better robustness to noise and better spatial detection power than FastICA. Although the spatial detection power of SparseFastICA and Infomax did not show a significant difference, SparseFastICA had a faster computation speed than Infomax. More importantly, SparseFastICA outperformed FastICA in robustness and spatial detection power and can be used to identify more accurate brain networks than the FastICA algorithm. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. SU-F-J-156: The Feasibility of MR-Only IMRT Planning for Prostate Anatomy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaitheeswaran, R; Sivaramakrishnan, KR; Kumar, Prashant

    Purpose: For prostate anatomy, previous investigations have shown that simulated CT (sCT) generated from MR images can be used for accurate dose computation. In this study, we demonstrate the feasibility of MR-only IMRT planning for a prostate case. Methods: Regular CT (rCT) and MR images of the same patient were acquired for prostate anatomy. Regions of interest (ROIs), i.e. target and risk structures, are delineated on the rCT. A simulated CT (sCT) is generated from the MR image using the method described by Schadewaldt N et al., whose work establishes the clinical acceptability of dose calculation results on the sCT when compared to rCT. rCT and sCT are rigidly registered to ensure proper alignment between the two images. rCT and sCT are overlaid on each other, and slice-wise visual inspection confirms excellent agreement between the two images. ROIs on the rCT are copied over to the sCT. The Philips AutoPlanning solution is used for generating treatment plans. The same treatment technique protocol (plan parameters and clinical goals) is used to generate AutoPlan-rCT and AutoPlan-sCT for rCT and sCT, respectively. DVH comparison on ROIs and slice-wise evaluation of dose are performed between AutoPlan-rCT and AutoPlan-sCT. Delivery parameters, i.e. beams and corresponding segments, from the AutoPlan-sCT are copied over to the rCT and dose is computed to get AutoPlan-sCT-on-rCT. Results: Plan evaluation is done based on the Dose Volume Histogram (DVH) of ROIs and manual slice-wise inspection of the dose distribution. Both AutoPlan-rCT and AutoPlan-sCT provide a clinically acceptable plan. Also, AutoPlan-sCT-on-rCT shows excellent agreement with AutoPlan-sCT. Conclusion: The study demonstrates that it is feasible to do IMRT planning on a simulated CT image obtained from an MR image for prostate anatomy. The research is supported by Philips India Ltd.

  9. Feasibility of Executing MIMS on Interdata 80.

    DTIC Science & Technology

    CDC 6500 computers, CDC 6600 computers, MIMS (Medical Information Management System), Medical information management systems, File structures, Computer...storage management. The report examines the feasibility of implementing a large information management system on mini-computers. The Medical Information Management System and the Interdata 80 mini-computer were selected as being representative systems. The FORTRAN programs currently being used in MIMS

  10. Developing Computer Model-Based Assessment of Chemical Reasoning: A Feasibility Study

    ERIC Educational Resources Information Center

    Liu, Xiufeng; Waight, Noemi; Gregorius, Roberto; Smith, Erica; Park, Mihwa

    2012-01-01

    This paper reports a feasibility study on developing computer model-based assessments of chemical reasoning at the high school level. Computer models are flash and NetLogo environments to make simultaneously available three domains in chemistry: macroscopic, submicroscopic, and symbolic. Students interact with computer models to answer assessment…

  11. Imaging and computational considerations for image computed permeability: Operating envelope of Digital Rock Physics

    NASA Astrophysics Data System (ADS)

    Saxena, Nishank; Hows, Amie; Hofmann, Ronny; Alpak, Faruk O.; Freeman, Justin; Hunter, Sander; Appel, Matthias

    2018-06-01

    This study defines the optimal operating envelope of the Digital Rock technology from the perspective of imaging and numerical simulations of transport properties. Imaging larger volumes of rocks for Digital Rock Physics (DRP) analysis improves the chances of achieving a Representative Elementary Volume (REV) at which flow-based simulations (1) do not vary with change in rock volume, and (2) is insensitive to the choice of boundary conditions. However, this often comes at the expense of image resolution. This trade-off exists due to the finiteness of current state-of-the-art imaging detectors. Imaging and analyzing digital rocks that sample the REV and still sufficiently resolve pore throats is critical to ensure simulation quality and robustness of rock property trends for further analysis. We find that at least 10 voxels are needed to sufficiently resolve pore throats for single phase fluid flow simulations. If this condition is not met, additional analyses and corrections may allow for meaningful comparisons between simulation results and laboratory measurements of permeability, but some cases may fall outside the current technical feasibility of DRP. On the other hand, we find that the ratio of field of view and effective grain size provides a reliable measure of the REV for siliciclastic rocks. If this ratio is greater than 5, the coefficient of variation for single-phase permeability simulations drops below 15%. These imaging considerations are crucial when comparing digitally computed rock flow properties with those measured in the laboratory. We find that the current imaging methods are sufficient to achieve both REV (with respect to numerical boundary conditions) and required image resolution to perform digital core analysis for coarse to fine-grained sandstones.
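
    The two rules of thumb reported here translate directly into a simple feasibility check; function and variable names below are illustrative.

```python
# Feasibility check based on the reported rules of thumb: >= 10 voxels
# across a pore throat, and field of view >= 5x the effective grain size.
def drp_operating_envelope(voxel_size_um, throat_diameter_um,
                           field_of_view_um, grain_size_um):
    voxels_per_throat = throat_diameter_um / voxel_size_um
    fov_to_grain = field_of_view_um / grain_size_um
    return {
        "throat_resolved": voxels_per_throat >= 10,   # resolution criterion
        "rev_reached": fov_to_grain >= 5,             # REV criterion
        "voxels_per_throat": voxels_per_throat,
        "fov_to_grain": fov_to_grain,
    }

print(drp_operating_envelope(2.0, 25.0, 4000.0, 250.0))
```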

  12. In silico preclinical trials: a proof of concept in closed-loop control of type 1 diabetes.

    PubMed

    Kovatchev, Boris P; Breton, Marc; Man, Chiara Dalla; Cobelli, Claudio

    2009-01-01

    Arguably, a minimally invasive system using subcutaneous (s.c.) continuous glucose monitoring (CGM) and s.c. insulin delivery via insulin pump would be the most feasible step to closed-loop control in type 1 diabetes mellitus (T1DM). Consequently, diabetes technology is focusing on developing an artificial pancreas using control algorithms to link CGM with s.c. insulin delivery. The future development of the artificial pancreas will be greatly accelerated by employing mathematical modeling and computer simulation. Realistic computer simulation is capable of providing invaluable information about the safety and the limitations of closed-loop control algorithms, guiding clinical studies, and ruling out ineffective control scenarios in a cost-effective manner. Thus computer simulation testing of closed-loop control algorithms is regarded as a prerequisite to clinical trials of the artificial pancreas. In this paper, we present a system for in silico testing of control algorithms that has three principal components: (1) a large cohort of n=300 simulated "subjects" (n=100 adults, 100 adolescents, and 100 children) based on real individuals' data and spanning the observed variability of key metabolic parameters in the general population of people with T1DM; (2) a simulator of CGM sensor errors representative of the Freestyle Navigator™, Guardian RT, or Dexcom™ STS™ 7-day sensor; and (3) a simulator of discrete s.c. insulin delivery via the OmniPod Insulin Management System or Deltec Cozmo® insulin pump. The system has been shown to adequately represent the glucose fluctuations observed in T1DM during meal challenges, and has been accepted by the Food and Drug Administration as a substitute for animal trials in the preclinical testing of closed-loop control strategies. © Diabetes Technology Society

  13. An FDTD-based computer simulation platform for shock wave propagation in electrohydraulic lithotripsy.

    PubMed

    Yılmaz, Bülent; Çiftçi, Emre

    2013-06-01

    Extracorporeal Shock Wave Lithotripsy (ESWL) is based on disintegration of the kidney stone by delivering high-energy shock waves that are created outside the body and transmitted through the skin and body tissues. Nowadays high-energy shock waves are also used in orthopedic operations and are being investigated for use in the treatment of myocardial infarction and cancer. Because of these new application areas, novel lithotriptor designs are needed for different kinds of treatment strategies. In this study our aim was to develop a versatile computer simulation environment which would give device designers working on various medical applications that use the shock wave principle a substantial amount of flexibility while testing the effects of new parameters such as reflector size, material properties of the medium, water temperature, and different clinical scenarios. For this purpose, we created a finite-difference time-domain (FDTD)-based computational model in which most of the physical system parameters were defined as inputs and/or as variables in the simulations. We constructed a realistic computational model of a commercial electrohydraulic lithotriptor and optimized our simulation program using results that were obtained by the manufacturer in an experimental setup. We then compared the simulation results with the results from an experimental setup in which the oxygen level in water was varied. Finally, we studied the effects of changing input parameters such as ellipsoid size and material, temperature change in the wave propagation media, and shock wave source point misalignment. The simulation results were consistent with the experimental results and the expected effects of variation in the physical parameters of the system. The results of this study encourage further investigation and provide adequate evidence that the numerical modeling of a shock wave therapy system is feasible and can provide a practical means to test novel ideas in new device design procedures. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
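
    The numerical core of such an FDTD platform is the staggered leapfrog update of pressure and particle velocity, sketched here in one dimension (the paper's axisymmetric ellipsoidal-reflector model is far more elaborate):

```python
import numpy as np

# Minimal 1D acoustic FDTD update (staggered pressure/velocity leapfrog).
nx, nt = 400, 800
c, rho = 1500.0, 1000.0          # sound speed (m/s) and density of water
dx = 1e-4                        # 0.1 mm grid spacing
dt = 0.5 * dx / c                # CFL-stable time step

p = np.zeros(nx)                 # pressure at integer grid points
u = np.zeros(nx + 1)             # particle velocity at half points

for n in range(nt):
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])     # momentum equation
    p -= dt * rho * c**2 / dx * (u[1:] - u[:-1])      # continuity equation
    p[0] += np.exp(-((n * dt - 2e-6) / 5e-7) ** 2)    # Gaussian source pulse
```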

  14. A feasibility study on porting the community land model onto accelerators using OpenACC

    DOE PAGES

    Wang, Dali; Wu, Wei; Winkler, Frank; ...

    2014-01-01

    As environmental models (such as the Accelerated Climate Model for Energy (ACME), the Parallel Reactive Flow and Transport Model (PFLOTRAN), the Arctic Terrestrial Simulator (ATS), etc.) become more and more complicated, we face enormous challenges in porting those applications onto hybrid computing architectures. OpenACC appears to be a very promising technology; therefore, we have conducted a feasibility analysis of porting the Community Land Model (CLM), a terrestrial ecosystem model within the Community Earth System Model (CESM). Specifically, we used an automatic function testing platform to extract a small computing kernel out of CLM, applied this kernel within the actual CLM dataflow procedure, and investigated strategies for data parallelization and the benefit of data movement provided by the current implementation of OpenACC. Even though it is a non-intensive kernel, on a single 16-core computing node the performance (based on the actual computation time using one GPU) of the OpenACC implementation is 2.3 times faster than that of the OpenMP implementation using a single OpenMP thread, but 2.8 times slower than the OpenMP implementation using 16 threads. On multiple nodes, the MPI_OpenACC implementation demonstrated very good scalability on up to 128 GPUs on 128 computing nodes. This study also provides useful information for looking into the potential benefits of the "deep copy" capability and the "routine" feature of the OpenACC standard. In conclusion, we believe that our experience with the environmental model CLM can be beneficial to many other scientific research programs interested in porting their large-scale scientific codes onto high-end computers empowered by hybrid computing architectures using OpenACC.

  15. Toward Interactive Scenario Analysis and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gayle, Thomas R.; Summers, Kenneth Lee; Jungels, John

    2015-01-01

    As Modeling and Simulation (M&S) tools have matured, their applicability and importance have increased across many national security challenges. In particular, they provide a way to test how something may behave without the need for real-world testing. However, current and future changes across several factors, including capabilities, policy, and funding, are driving a need for rapid response or evaluation in ways that many M&S tools cannot address. Issues around large data, computational requirements, delivery mechanisms, and analyst involvement already exist and pose significant challenges. Furthermore, rising expectations, rising input complexity, and increasing depth of analysis will only increase the difficulty of these challenges. In this study we examine whether innovations in M&S software, coupled with advances in "cloud" computing and "big data" methodologies, can overcome many of these challenges. In particular, we propose a simple, horizontally scalable distributed computing environment that could provide the foundation (i.e., the "cloud") for next-generation M&S-based applications based on the notion of "parallel multi-simulation". In our context, the goal of parallel multi-simulation is to consider as many simultaneous paths of execution as possible. Therefore, with sufficient resources, the complexity is dominated by the cost of single scenario runs as opposed to the number of runs required. We show the feasibility of this architecture through a stable prototype implementation coupled with the Umbra Simulation Framework [6]. Finally, we highlight the utility through multiple novel analysis tools and by showing the performance improvement compared to existing tools.

  16. A full-wave Helmholtz model for continuous-wave ultrasound transmission.

    PubMed

    Huttunen, Tomi; Malinen, Matti; Kaipio, Jari P; White, Phillip Jason; Hynynen, Kullervo

    2005-03-01

    A full-wave Helmholtz model of continuous-wave (CW) ultrasound fields may offer several attractive features over widely used partial-wave approximations. For example, many full-wave techniques can be easily adjusted for complex geometries, and multiple reflections of sound are automatically taken into account in the model. To date, however, the full-wave modeling of CW fields in general 3D geometries has been avoided due to the large computational cost associated with the numerical approximation of the Helmholtz equation. Recent developments in computing capacity together with improvements in finite element type modeling techniques are making possible wave simulations in 3D geometries which reach over tens of wavelengths. The aim of this study is to investigate the feasibility of a full-wave solution of the 3D Helmholtz equation for modeling of continuous-wave ultrasound fields in an inhomogeneous medium. The numerical approximation of the Helmholtz equation is computed using the ultraweak variational formulation (UWVF) method. In addition, an inverse problem technique is utilized to reconstruct the velocity distribution on the transducer which is used to model the sound source in the UWVF scheme. The modeling method is verified by comparing simulated and measured fields in the case of transmission of 531 kHz CW fields through layered plastic plates. The comparison shows a reasonable agreement between simulations and measurements at low angles of incidence but, due to mode conversion, the Helmholtz model becomes insufficient for simulating ultrasound fields in plates at large angles of incidence.

  17. A combination of three-dimensional printing and computer-assisted virtual surgical procedure for preoperative planning of acetabular fracture reduction.

    PubMed

    Zeng, Canjun; Xing, Weirong; Wu, Zhanglin; Huang, Huajun; Huang, Wenhua

    2016-10-01

    Treatment of acetabular fractures remains one of the most challenging tasks that orthopaedic surgeons face. An accurate assessment of the injuries and preoperative planning are essential for an excellent reduction. The purpose of this study was to evaluate the feasibility, accuracy and effectiveness of 3D printing technology and computer-assisted virtual surgical procedures for preoperative planning in acetabular fractures. We hypothesised that more accurate preoperative planning using 3D printing models would reduce the operation time and significantly improve the outcome of acetabular fracture repair. Ten patients with acetabular fractures were recruited prospectively and examined by CT scanning. A 3-D model of each acetabular fracture was reconstructed with MIMICS 14.0 software from the DICOM file of the CT data. Bone fragments were moved and rotated to simulate fracture reduction and restore pelvic integrity with virtual fixation. The computer-assisted 3D image of the reduced acetabulum was printed for surgery simulation and plate pre-bending. A postoperative CT scan was performed to compare the consistency of the preoperative planning with the surgical implants by 3D superimposition in MIMICS 14.0, evaluated by Matta's method. Computer-based pre-operations precisely mimicked and were consistent with the actual operations in all cases. The pre-bent fixation plates had an anatomical shape specifically fit to the individual pelvis without further bending or adjustment at the time of surgery, and fracture reductions were significantly improved. Seven out of 10 patients had a displacement of the fracture reduction of less than 1 mm; 3 cases had a displacement between 1 and 2 mm. The 3D printing technology combined with virtual surgery for acetabular fractures is feasible, accurate, and effective, leading to improved patient-specific preoperative planning and improved outcome of the real surgery. The results provide useful technical tips for planning pelvic surgeries.

  18. Efficient non-hydrostatic modelling of 3D wave-induced currents using a subgrid approach

    NASA Astrophysics Data System (ADS)

    Rijnsdorp, Dirk P.; Smit, Pieter B.; Zijlema, Marcel; Reniers, Ad J. H. M.

    2017-08-01

    Wave-induced currents are a ubiquitous feature in coastal waters that can spread material over the surf zone and the inner shelf. These currents are typically under-resolved in non-hydrostatic wave-flow models due to computational constraints. Specifically, the low vertical resolutions adequate to describe the wave dynamics - and required to feasibly compute at the scales of a field site - are too coarse to account for the relevant details of the three-dimensional (3D) flow field. To describe the relevant dynamics of both waves and currents, while retaining a model framework that can be applied at field scales, we propose a two-grid approach to solve the governing equations. With this approach, the vertical accelerations and non-hydrostatic pressures are resolved on a relatively coarse vertical grid (which is sufficient to accurately resolve the wave dynamics), whereas the horizontal velocities and turbulent stresses are resolved on a much finer subgrid (of which the resolution is dictated by the vertical scale of the mean flows). This approach ensures that the discrete pressure Poisson equation - the solution of which dominates the computational effort - is evaluated at the coarse grid scale, thereby greatly improving efficiency, while providing a fine vertical resolution to resolve the vertical variation of the mean flow. This work presents the general methodology and discusses the numerical implementation in the SWASH wave-flow model. Model predictions are compared with observations from three flume experiments to demonstrate that the subgrid approach captures both the nearshore evolution of the waves and the wave-induced flows such as the undertow profile and longshore current. The accuracy of the subgrid predictions is comparable to fully resolved 3D simulations - but at much reduced computational costs. The findings of this work thereby demonstrate that the subgrid approach has the potential to make 3D non-hydrostatic simulations feasible at the scale of a realistic coastal region.

  19. Cone beam x-ray luminescence computed tomography: a feasibility study.

    PubMed

    Chen, Dongmei; Zhu, Shouping; Yi, Huangjian; Zhang, Xianghan; Chen, Duofang; Liang, Jimin; Tian, Jie

    2013-03-01

    The appearance of x-ray luminescence computed tomography (XLCT) opens new possibilities for performing molecular imaging by x-ray. In the previous XLCT system, the sample was irradiated by a sequence of narrow x-ray beams and the x-ray luminescence was measured by a highly sensitive charge coupled device (CCD) camera. This resulted in a relatively long sampling time and relatively low utilization of the x-ray beam. In this paper, a novel cone beam x-ray luminescence computed tomography strategy is proposed, which can fully utilize the x-ray dose and shorten the scanning time. The imaging model and reconstruction method are described, and the validity of the imaging strategy is studied. In the cone beam XLCT system, a cone beam x-ray source illuminates the sample and a highly sensitive CCD camera acquires the luminescent photons emitted from the sample. Photon scattering in biological tissues makes reconstructing the 3D distribution of the x-ray luminescent sample an ill-posed problem in cone beam XLCT. To overcome this issue, the authors used the diffusion approximation model to describe photon propagation in tissues and employed a sparse regularization method for reconstruction, using an incomplete variables truncated conjugate gradient method and a permissible region strategy. Meanwhile, traditional x-ray CT imaging can also be performed in this system. The x-ray attenuation effect is considered in the imaging model, which helps improve reconstruction accuracy. First, simulation experiments with cylinder phantoms were carried out to illustrate the validity of the proposed compensated method. The experimental results showed that the location error of the compensated algorithm was smaller than that of the uncompensated method, and the permissible region strategy reduced the reconstruction error to less than 2 mm. Robustness and stability were then evaluated for different view numbers, regularization parameters, measurement noise levels, and optical parameter mismatch; these settings had only a small effect on the reconstruction. A nonhomogeneous phantom simulation was also carried out to simulate a more complex experimental situation and to evaluate the proposed method. Second, physical cylinder phantom experiments showed similar results in the prototype XLCT system. Utilizing numerical simulation and physical experiments, the authors demonstrated the validity of the new cone beam XLCT method: compared with the previous narrow beam XLCT, the cone beam XLCT utilizes the x-ray dose more fully and greatly shortens the scanning time, and both the simulation and physical phantom experiments indicated that the proposed method is feasible for general cases and actual experiments.
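
    A minimal sketch of the sparse-regularization step described above: recover a sparse luminescent source x from a few diffuse measurements b = Ax by minimizing ||Ax - b||^2 + lambda*||x||_1. The paper uses an incomplete variables truncated conjugate gradient method; plain ISTA is used here as a simpler stand-in, and the sensitivity matrix is a random toy.

```python
# Sparse-regularization sketch of an XLCT-style inverse problem, solved with
# ISTA (iterative shrinkage-thresholding): gradient step + soft threshold.
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_vox = 40, 200
A = rng.standard_normal((n_meas, n_vox))       # toy sensitivity matrix
x_true = np.zeros(n_vox)
x_true[[50, 120]] = [1.0, 0.6]                 # two luminescent targets
b = A @ x_true + 0.01 * rng.standard_normal(n_meas)

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L, L = Lipschitz const of grad
x = np.zeros(n_vox)
for _ in range(500):
    x = x - step * (A.T @ (A @ x - b))                      # data-fit gradient
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)  # soft threshold

print("recovered support:", np.nonzero(x > 0.1)[0])
```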

  20. Managing emergency department overcrowding via ambulance diversion: a discrete event simulation model.

    PubMed

    Lin, Chih-Hao; Kao, Chung-Yao; Huang, Chong-Ye

    2015-01-01

    Ambulance diversion (AD) is considered one of the possible solutions for relieving emergency department (ED) overcrowding. Studying the effectiveness of various AD strategies is a prerequisite for policy-making. Our aim is to develop a tool that quantitatively evaluates the effectiveness of various AD strategies. A simulation model and a computer simulation program were developed. Three sets of simulations were executed to evaluate AD initiating criteria, patient-blocking rules, and AD intervals, respectively. The crowdedness index, the patient waiting time for service, and the percentage of adverse patients were assessed to determine the effect of various AD policies. Simulation results suggest that, in a certain setting, the best timing for implementing AD is when the crowdedness index reaches the critical value of 1.0, an indicator that the ED is operating at its maximal capacity. The strategy of diverting all patients transported by ambulance is more effective than diverting either high-acuity patients only or low-acuity patients only. Given a total allowable AD duration, implementing AD multiple times with short intervals generally has a better effect than having a single AD of maximal allowable duration. An input-throughput-output simulation model is proposed for simulating ED operation. The effectiveness of several AD strategies in relieving ED overcrowding was assessed via computer simulations based on this model. With appropriate parameter settings, the model can represent medical resource providers of different scales. It is also feasible to expand the simulations to evaluate the effect of AD strategies on a community basis. The results may offer insights for making effective AD policies.
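
    A minimal discrete-event sketch of the input-throughput-output idea above: arrivals, a fixed number of beds, and an AD rule that diverts ambulance arrivals whenever the crowdedness index (patients present / capacity) reaches 1.0. All rates and the ambulance fraction are illustrative, not the paper's calibration.

```python
# Toy discrete-event simulation of an ED with ambulance diversion (AD).
import heapq, random

def simulate(hours=1000.0, beds=10, lam=9.0, mu=1.0, seed=1):
    rng = random.Random(seed)
    busy, waiting, diverted, served = 0, 0, 0, 0
    events = [(rng.expovariate(lam), "arrival")]   # (time, kind) event heap
    while events:
        t, kind = heapq.heappop(events)
        if t > hours:
            break
        if kind == "arrival":
            heapq.heappush(events, (t + rng.expovariate(lam), "arrival"))
            crowded = (busy + waiting) / beds >= 1.0    # crowdedness index >= 1.0
            if crowded and rng.random() < 0.3:          # ~30% arrive by ambulance
                diverted += 1                           # AD in effect: divert
                continue
            if busy < beds:
                busy += 1
                heapq.heappush(events, (t + rng.expovariate(mu), "departure"))
            else:
                waiting += 1
        else:  # a departure frees a bed; admit the next waiting patient if any
            served += 1
            if waiting > 0:
                waiting -= 1
                heapq.heappush(events, (t + rng.expovariate(mu), "departure"))
            else:
                busy -= 1
    return served, diverted

served, diverted = simulate()
print(f"served={served}, diverted={diverted}")
```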

  1. Next Generation Extended Lagrangian Quantum-based Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Negre, Christian

    2017-06-01

    A new framework for extended Lagrangian first-principles molecular dynamics simulations is presented, which overcomes shortcomings of regular, direct Born-Oppenheimer molecular dynamics while maintaining important advantages of the unified extended Lagrangian formulation of density functional theory pioneered by Car and Parrinello three decades ago. The new framework allows, for the first time, energy-conserving, linear-scaling Born-Oppenheimer molecular dynamics simulations, which is necessary to study larger and more realistic systems over longer simulation times than previously possible. Expensive self-consistent-field optimizations are avoided and normal integration time steps of regular, direct Born-Oppenheimer molecular dynamics can be used. Linear-scaling electronic structure theory is presented using a graph-based approach that is ideal for parallel calculations on hybrid computer platforms. For the first time, quantum-based Born-Oppenheimer molecular dynamics simulation is becoming a practically feasible approach for simulations of 100,000+ atoms, representing a competitive alternative to classical polarizable force field methods. In collaboration with: Anders Niklasson, Los Alamos National Laboratory.

  2. Parallel Simulation of Unsteady Turbulent Flames

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1996-01-01

    Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, their high cost and limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact; therefore, global models cannot be used. Recently, a new model for turbulent combustion was developed, in which the combustion is modeled within the subgrid (small scales) using a methodology that simulates the mixing, the molecular transport, and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure, and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion, and chemical kinetics; therefore, within each grid cell, a significant amount of computation must be carried out before the large-scale (LES-resolved) effects are incorporated. This approach is thus uniquely suited for parallel processing and has been implemented on various systems such as the Intel Paragon, IBM SP-2, Cray T3D, and SGI Power Challenge (PC) using the system-independent Message Passing Interface (MPI) standard. In this paper, timing data on these machines is reported along with some characteristic results.

  3. Efficient Analysis of Simulations of the Sun's Magnetic Field

    NASA Astrophysics Data System (ADS)

    Scarborough, C. W.; Martínez-Sykora, J.

    2014-12-01

    Dynamics in the solar atmosphere, including solar flares, coronal mass ejections, micro-flares and different types of jets, are powered by the evolution of the sun's intense magnetic field. 3D radiative magnetohydrodynamics (MHD) computer simulations have furthered our understanding of the processes involved: when non-aligned magnetic field lines reconnect, the alteration of the magnetic topology causes stored magnetic energy to be converted into thermal and kinetic energy. Detailed analysis of this evolution entails tracing magnetic field lines, an operation which is not time-efficient on a single processor. By utilizing a graphics card (GPU) to trace lines in parallel, conducting such analysis is made feasible. We applied our GPU implementation to the most advanced 3D radiative-MHD simulations (Bifrost; Gudiksen et al. 2011) of the solar atmosphere in order to better understand the evolution of the modeled field lines.
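
    Field-line tracing itself is a simple integration, dx/ds = B(x)/|B(x)|; the feasibility gain in the record above comes from tracing many lines in parallel on a GPU. The sketch below uses RK4 steps vectorized over many seed points, with NumPy broadcasting as a stand-in for that parallelism; the analytic field B is a toy chosen only for illustration.

```python
# Trace many magnetic field lines at once: RK4 on dx/ds = B(x)/|B(x)|,
# vectorized over seed points (one "thread" per line on a GPU).
import numpy as np

def B(x):
    """Toy analytic magnetic field, shape (n, 3) -> (n, 3)."""
    return np.stack([-x[:, 1], x[:, 0], np.full(len(x), 0.2)], axis=1)

def unit(v):
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def trace(seeds, ds=0.01, steps=2000):
    x = seeds.copy()
    path = [x.copy()]
    for _ in range(steps):
        k1 = unit(B(x))
        k2 = unit(B(x + 0.5 * ds * k1))
        k3 = unit(B(x + 0.5 * ds * k2))
        k4 = unit(B(x + ds * k3))
        x = x + (ds / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(x.copy())
    return np.stack(path)              # shape (steps+1, n_seeds, 3)

seeds = np.random.default_rng(0).uniform(-1, 1, size=(512, 3))
lines = trace(seeds)
print(lines.shape)
```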

  4. Communication: Quantum molecular dynamics simulation of liquid para-hydrogen by nuclear and electron wave packet approach.

    PubMed

    Hyeon-Deuk, Kim; Ando, Koji

    2014-05-07

    Liquid para-hydrogen (p-H2) is a typical quantum liquid which exhibits strong nuclear quantum effects (NQEs) and thus anomalous static and dynamic properties. We propose a real-time simulation method of wave packet (WP) molecular dynamics (MD) based on non-empirical intra- and inter-molecular interactions of non-spherical hydrogen molecules, and apply it to condensed-phase p-H2. The NQEs, such as WP delocalization and zero-point energy, are taken into account without perturbative expansion of prepared model potential functions but with explicit interactions between nuclear and electron WPs. The developed MD simulation for 100 ps with 1200 hydrogen molecules is realized at feasible computational cost, by which basic experimental properties of p-H2 liquid such as radial distribution functions, self-diffusion coefficients, and shear viscosities are all well reproduced.

  5. Simulation model for wind energy storage systems. Volume II. Operation manual. [SIMWEST code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, A.W.; Edsinger, R.W.; Burroughs, J.D.

    1977-08-01

    The effort developed a comprehensive computer program for the modeling of wind energy/storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic). An acronym for the program is SIMWEST (Simulation Model for Wind Energy Storage). The level of detail of SIMWEST is consistent with a role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. Volume II, the SIMWEST operation manual, describes the usage of the SIMWEST program, the design of the library components, and a number of simple example simulations intended to familiarize the user with the program's operation. Volume II also contains a listing of each SIMWEST library subroutine.

  6. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molzahn, Daniel K.

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  7. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE PAGES

    Molzahn, Daniel K.

    2017-03-15

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.
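
    A brute-force sketch of what "computing the feasible space" means for a toy 2-bus OPF: grid-sample candidate operating points and keep those that satisfy the power flow equations and voltage limits. The paper instead discretizes selected inequality constraints and finds all power flow solutions per discretization point via homotopy continuation; dense sampling is only a crude stand-in, and all network parameters below are illustrative.

```python
# Map the approximate feasible space of a toy 2-bus OPF by dense sampling.
import numpy as np

g, b = 1.0, -5.0            # series admittance G + jB of the single line (p.u.)
p_load, q_load = 0.8, 0.2   # active/reactive demand at bus 2 (p.u.)
tol = 1e-2

feasible = []
for v2 in np.linspace(0.9, 1.1, 81):           # voltage-magnitude limits
    for th in np.linspace(-0.5, 0.5, 1001):    # angle at bus 2 (bus 1 slack, 1.0 at 0 rad)
        # AC power flow injections at bus 2
        p2 = v2 * (g * v2 - g * np.cos(th) - b * np.sin(th))
        q2 = v2 * (-b * v2 + b * np.cos(th) - g * np.sin(th))
        # bus 2 is a pure load, so its injections must equal minus the demand
        if abs(p2 + p_load) < tol and abs(q2 + q_load) < tol:
            feasible.append((round(v2, 4), round(th, 4)))

print(f"{len(feasible)} approximately feasible (V2, theta2) grid points")
```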

  8. Toward brain-computer interface based wheelchair control utilizing tactually-evoked event-related potentials

    PubMed Central

    2014-01-01

    Background People with severe disabilities, e.g. due to neurodegenerative disease, depend on technology that allows for accurate wheelchair control. For those who cannot operate a wheelchair with a joystick, brain-computer interfaces (BCI) may offer a valuable option. Technology depending on visual or auditory input may not be feasible as these modalities are dedicated to processing of environmental stimuli (e.g. recognition of obstacles, ambient noise). Herein we thus validated the feasibility of a BCI based on tactually-evoked event-related potentials (ERP) for wheelchair control. Furthermore, we investigated use of a dynamic stopping method to improve the speed of the tactile BCI system. Methods Positions of four tactile stimulators represented navigation directions (left thigh: move left; right thigh: move right; abdomen: move forward; lower neck: move backward) and N = 15 participants delivered navigation commands by focusing their attention on the desired tactile stimulus in an oddball paradigm. Results Participants navigated a virtual wheelchair through a building, and eleven participants successfully completed the task of reaching 4 checkpoints in the building. The virtual wheelchair was equipped with simulated shared-control sensors (collision avoidance), yet these sensors were rarely needed. Conclusion We conclude that most participants achieved tactile ERP-BCI control sufficient to reliably operate a wheelchair and that dynamic stopping was of high value for tactile ERP classification. Finally, this paper discusses the feasibility of tactile ERPs for BCI-based wheelchair control. PMID:24428900

  9. On computing stress in polymer systems involving multi-body potentials from molecular dynamics simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Yao, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu; Song, Jeong-Hoon, E-mail: fu5@mailbox.sc.edu, E-mail: jhsong@cec.sc.edu

    2014-08-07

    The Hardy stress definition has been restricted to pair potentials and embedded-atom method potentials due to the basic assumptions in the derivation of a symmetric microscopic stress tensor. The force decomposition required in the Hardy stress expression becomes obscure for multi-body potentials. In this work, we demonstrate the invariance of the Hardy stress expression for a polymer system modeled with multi-body interatomic potentials, including up to four-atom interactions, by applying central force decomposition of the atomic force. The balance of momentum has been demonstrated to be valid theoretically and tested under various numerical simulation conditions. The validity of momentum conservation justifies the extension of the Hardy stress expression to multi-body potential systems. The computed Hardy stress has been observed to converge to the virial stress of the system with increasing spatial averaging volume. This work provides a feasible and reliable linkage between the atomistic and continuum scales for multi-body potential systems.
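
    For the pair-potential limiting case, the potential part of the virial stress to which the Hardy stress is shown to converge can be written down directly; the sketch below accumulates the sum over pairs of the outer product of f_ij and r_ij for a small Lennard-Jones lattice (kinetic term omitted, parameters illustrative; sign conventions vary in the literature). The paper's contribution is extending this construction to three- and four-body terms via central force decomposition.

```python
# Potential part of the virial stress for a pair-potential (central-force) system.
import numpy as np

def lj_force(r_vec, eps=1.0, sig=1.0):
    """Central Lennard-Jones force on atom i due to atom j, r_vec = x_i - x_j."""
    r = np.linalg.norm(r_vec)
    mag = 24.0 * eps * (2.0 * (sig / r) ** 12 - (sig / r) ** 6) / r
    return mag * r_vec / r            # positive magnitude = repulsive

# 4x4x4 atoms on a simple cubic lattice, spacing 1.1 sigma (illustrative)
a = 1.1
pos = a * np.array([(i, j, k) for i in range(4)
                    for j in range(4) for k in range(4)], dtype=float)
V = (4 * a) ** 3

stress = np.zeros((3, 3))
for i in range(len(pos)):
    for j in range(i + 1, len(pos)):  # each pair counted once, so no 1/2 factor
        r_ij = pos[i] - pos[j]
        stress += np.outer(lj_force(r_ij), r_ij)
stress /= V
print("virial (potential-part) stress tensor:\n", np.round(stress, 4))
```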

  10. A baroclinic quasigeostrophic open ocean model

    NASA Technical Reports Server (NTRS)

    Miller, R. N.; Robinson, A. R.; Haidvogel, D. B.

    1983-01-01

    A baroclinic quasigeostrophic open ocean model is presented, calibrated by a series of test problems, and demonstrated to be feasible and efficient for application to realistic mid-oceanic mesoscale eddy flow regimes. Two methods of treating the depth dependence of the flow, a finite difference method and a collocation method, are tested and intercompared. Sample Rossby wave calculations with and without advection are performed with constant stratification and two levels of nonlinearity, one weaker than and one typical of real ocean flows. Using exact analytical solutions for comparison, the accuracy and efficiency of the model are tabulated as functions of the computational parameters and stability limits set; typically, errors were controlled between 1 percent and 10 percent RMS after two wave periods. Further Rossby wave tests with realistic stratification and wave parameters chosen to mimic real ocean conditions were performed to determine computational parameters for use with real and simulated data. Finally, a prototype calculation with quasiturbulent simulated data was performed successfully, which demonstrates the practicality of the model for scientific use.

  11. Micro- and meso-scale simulations of magnetospheric processes related to the aurora and substorm morphology

    NASA Technical Reports Server (NTRS)

    Swift, Daniel W.

    1991-01-01

    The primary methodology during the grant period has been the use of micro- or meso-scale simulations to address specific questions concerning magnetospheric processes related to the aurora and substorm morphology. This approach, while useful in providing some answers, has its limitations. Many of the problems relating to the magnetosphere are inherently global and kinetic. Effort during the last year of the grant period has increasingly focused on development of a global-scale hybrid code to model the entire coupled magnetosheath - magnetosphere - ionosphere system. In particular, numerical procedures for curvilinear coordinate generation and exactly conservative differencing schemes for hybrid codes in curvilinear coordinates have been developed. The new computer algorithms and the massively parallel computer architectures now make this global code a feasible proposition. Support provided by this project has played an important role in laying the groundwork for the eventual development of a global-scale code to model and forecast magnetospheric weather.

  12. Analysis of operational comfort in manual tasks using human force manipulability measure.

    PubMed

    Tanaka, Yoshiyuki; Nishikawa, Kazuo; Yamada, Naoki; Tsuji, Toshio

    2015-01-01

    This paper proposes a scheme for human force manipulability (HFM) based on the use of isometric joint torque properties to simulate the spatial characteristics of the operation forces a human can feasibly generate at an end-point of a limb for a specified limb posture. The scheme is also applied to the evaluation and prediction of operational comfort (OC) when manually operating a human-machine interface. The effectiveness of HFM is investigated through two experiments and computer simulations of humans generating forces with their upper extremities. Operation force generation with maximum isometric effort can be roughly estimated with an HFM measure computed from information on the arm posture while that posture is maintained. The layout of a human-machine interface is then discussed based on the results of operational experiments using an electric gear-shifting system originally developed for robotic devices. The results indicate a strong relationship between the spatial characteristics of the HFM and OC levels when shifting, and the OC is predicted by using a multiple regression model with HFM measures.
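
    The posture-dependence of feasible end-point forces can be illustrated with the classic force-manipulability construction: joint torques with ||tau|| <= 1 map, via tau = J^T f, to end-point forces satisfying f^T (J J^T) f <= 1. The sketch below does this for a hypothetical planar 2-link arm with unit torque limits; the paper's HFM measure additionally weights joints by measured isometric torque properties.

```python
# Force-manipulability ellipsoid of a planar 2-link arm: the eigenstructure of
# J J^T gives the directions in which large end-point forces are feasible.
import numpy as np

def jacobian(q1, q2, l1=0.3, l2=0.35):
    """End-point Jacobian of a planar 2-link arm (link lengths in meters)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

J = jacobian(np.deg2rad(40), np.deg2rad(70))
w, V = np.linalg.eigh(J @ J.T)
# Force-ellipsoid semi-axis lengths are 1/sqrt(eigenvalues of J J^T).
for lam, axis in zip(w, V.T):
    print(f"direction {axis.round(3)}: feasible force magnitude {1/np.sqrt(lam):.2f}")
```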

  13. Investigation for Molecular Attraction Impact Between Contacting Surfaces in Micro-Gears

    NASA Astrophysics Data System (ADS)

    Yang, Ping; Li, Xialong; Zhao, Yanfang; Yang, Haiying; Wang, Shuting; Yang, Jianming

    2013-10-01

    The aim of this research work is to provide a systematic method for analyzing the molecular attraction impact between contacting surfaces in a micro-gear train. The method is established by integrating involute profile analysis and molecular dynamics simulation. A mathematical computation of the micro-gear involute is presented based on geometrical properties, a Taylor expansion, and the Hamaker assumption. In the meantime, a Morse potential function and a cut-off radius are introduced into a molecular dynamics simulation. A hybrid computational method for the van der Waals force between the contacting faces in a micro-gear train is thus developed. An example is given to illustrate the performance of this method. The results show that the change of the van der Waals force in a micro-gear train has a nonlinear characteristic as parameters such as the gear modulus and the tooth number change. The procedure suggests that it is feasible to control the van der Waals force by adjusting the manufacturing parameters in gear train design.
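
    A minimal sketch of the Morse-potential-with-cutoff ingredient described above: the pairwise force is summed over atoms on two small facing surfaces and projected onto the surface normal, showing how the net attraction varies with the gap. All Morse parameters and the lattice geometry are illustrative, not the paper's values.

```python
# Net normal force between two facing atomic planes from a truncated Morse
# potential U(r) = D_e * (1 - exp(-a*(r - r_e)))^2.
import numpy as np

D_E, A_M, R_E, R_CUT = 0.4, 1.5, 2.9, 8.0   # eV, 1/A, A, A (illustrative)

def morse_force(r):
    """Magnitude of the Morse force -dU/dr, zero beyond the cut-off
    (positive = repulsive, negative = attractive)."""
    e = np.exp(-A_M * (r - R_E))
    return np.where(r < R_CUT, 2.0 * D_E * A_M * e * (e - 1.0), 0.0)

def normal_force(gap, n=10, spacing=3.0):
    """Net force along the surface normal between two n x n atomic planes."""
    xy = np.array([(i * spacing, j * spacing)
                   for i in range(n) for j in range(n)], dtype=float)
    top = np.column_stack([xy, np.full(len(xy), gap)])
    bottom = np.column_stack([xy, np.zeros(len(xy))])
    f_z = 0.0
    for p in top:
        d = p - bottom                    # vectors from each bottom atom to p
        r = np.linalg.norm(d, axis=1)
        f_z += np.sum(morse_force(r) * d[:, 2] / r)   # z-component on atom p
    return f_z

for gap in (2.9, 4.0, 6.0):
    print(f"gap {gap:.1f} A: normal force {normal_force(gap):+.4f} eV/A")
```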

  14. Generalizing Gillespie’s Direct Method to Enable Network-Free Simulations

    DOE PAGES

    Suderman, Ryan T.; Mitra, Eshan David; Lin, Yen Ting; ...

    2018-03-28

    Gillespie’s direct method for stochastic simulation of chemical kinetics is a staple of computational systems biology research. However, the algorithm requires explicit enumeration of all reactions and all chemical species that may arise in the system. In many cases, this is not feasible due to the combinatorial explosion of reactions and species in biological networks. Rule-based modeling frameworks provide a way to exactly represent networks containing such combinatorial complexity, and generalizations of Gillespie’s direct method have been developed as simulation engines for rule-based modeling languages. Here, we provide a high-level description of the algorithms underlying the simulation engines, termed network-free simulation algorithms, and discuss how they have been applied in systems biology research. We also define a generic rule-based modeling framework and describe a number of technical details required for adapting Gillespie’s direct method for network-free simulation. Lastly, we briefly discuss potential avenues for advancing network-free simulation and the role it continues to play in modeling dynamical systems in biology.

  15. Generalizing Gillespie’s Direct Method to Enable Network-Free Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suderman, Ryan T.; Mitra, Eshan David; Lin, Yen Ting

    Gillespie’s direct method for stochastic simulation of chemical kinetics is a staple of computational systems biology research. However, the algorithm requires explicit enumeration of all reactions and all chemical species that may arise in the system. In many cases, this is not feasible due to the combinatorial explosion of reactions and species in biological networks. Rule-based modeling frameworks provide a way to exactly represent networks containing such combinatorial complexity, and generalizations of Gillespie’s direct method have been developed as simulation engines for rule-based modeling languages. Here, we provide a high-level description of the algorithms underlying the simulation engines, termed network-free simulation algorithms, and discuss how they have been applied in systems biology research. We also define a generic rule-based modeling framework and describe a number of technical details required for adapting Gillespie’s direct method for network-free simulation. Lastly, we briefly discuss potential avenues for advancing network-free simulation and the role it continues to play in modeling dynamical systems in biology.
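
    For reference, the explicitly enumerated baseline that network-free methods generalize is compact; the sketch below implements Gillespie's direct method for a two-reaction toy network (exponential waiting time from the total propensity, then a propensity-weighted choice of reaction). Rate constants and initial counts are illustrative.

```python
# Gillespie's direct method for A + B <-> C with explicitly enumerated species.
import math, random

def gillespie(x, stoich, rates, propensity, t_end, seed=0):
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, tuple(x))]
    while t < t_end:
        a = [propensity(i, x, rates) for i in range(len(stoich))]
        a0 = sum(a)
        if a0 == 0.0:
            break                                     # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0       # exponential waiting time
        r, acc = rng.random() * a0, 0.0
        for i, ai in enumerate(a):                    # pick reaction i w.p. a_i/a0
            acc += ai
            if r < acc:
                x = [xj + sj for xj, sj in zip(x, stoich[i])]
                break
        traj.append((t, tuple(x)))
    return traj

stoich = [(-1, -1, +1),   # A + B -> C
          (+1, +1, -1)]   # C -> A + B

def propensity(i, x, k):
    return k[0] * x[0] * x[1] if i == 0 else k[1] * x[2]

traj = gillespie([100, 80, 0], stoich, [0.005, 0.1], propensity, t_end=10.0)
print(f"{len(traj) - 1} reaction events; final counts {traj[-1][1]}")
```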

  16. Examining the Feasibility and Effect of Transitioning GED Tests to Computer

    ERIC Educational Resources Information Center

    Higgins, Jennifer; Patterson, Margaret Becker; Bozman, Martha; Katz, Michael

    2010-01-01

    This study examined the feasibility of administering GED Tests using a computer based testing system with embedded accessibility tools and the impact on test scores and test-taker experience when GED Tests are transitioned from paper to computer. Nineteen test centers across five states successfully installed the computer based testing program,…

  17. Establishing Multiscale Models for Simulating Whole Limb Estimates of Electric Fields for Osseointegrated Implants

    PubMed Central

    Isaacson, Brad M.; Stinstra, Jeroen G.; Bloebaum, Roy D.; Pasquina, COL Paul F.; MacLeod, Rob S.

    2011-01-01

    Although the survival rates of warfighters in recent conflicts are among the highest in military history, those who have sustained proximal limb amputations may pose additional rehabilitation concerns. In some of these cases, traditional prosthetic limbs may not provide adequate function for returning to an active lifestyle. Osseointegration has emerged as a potential prosthetic alternative for those with limited residual limb length. Using this technology, direct skeletal attachment occurs between a transcutaneous osseointegrated implant (TOI) and the host bone, thereby eliminating the need for a socket. While reports from the first 100 patients with a TOI have been promising, some rehabilitation regimens require 12–18 months of restricted weight bearing to prevent overloading at the bone-implant interface. Electrically induced osseointegration has been proposed as an option for expediting periprosthetic fixation, and preliminary studies have demonstrated the feasibility of adapting the TOI into a functional cathode. To assure safe and effective electrical fields that are conducive to osseoinduction and osseointegration, we have developed multiscale modeling approaches to simulate the expected electric metrics at the bone-implant interface. We have used computed tomography scans and volume segmentation tools to create anatomically accurate models that clearly distinguish tissue parameters and serve as the basis for finite element analysis. This translational computational biological process has supported biomedical electrode design and implant placement, and experiments to date have demonstrated the clinical feasibility of electrically induced osseointegration. PMID:21712151

  18. Toward low-cloud-permitting cloud superparameterization with explicit boundary layer turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parishani, Hossein; Pritchard, Michael S.; Bretherton, Christopher S.

    Systematic biases in the representation of boundary layer (BL) clouds are a leading source of uncertainty in climate projections. A variation on superparameterization (SP) called “ultraparameterization” (UP) is developed, in which the grid spacing of the cloud-resolving models (CRMs) is fine enough (250 × 20 m) to explicitly capture the BL turbulence, associated clouds, and entrainment in a global climate model capable of multiyear simulations. UP is implemented within the Community Atmosphere Model using 2° resolution (~14,000 embedded CRMs) with one-moment microphysics. By using a small domain and mean-state acceleration, UP is computationally feasible today and promising for exascale computers. Short-duration global UP hindcasts are compared with SP and satellite observations of top-of-atmosphere radiation and cloud vertical structure. The most encouraging improvement is a deeper BL and more realistic vertical structure of subtropical stratocumulus (Sc) clouds, due to stronger vertical eddy motions that promote entrainment. Results from 90 day integrations show climatological errors that are competitive with SP, with a significant improvement in the diurnal cycle of offshore Sc liquid water. Ongoing concerns with the current UP implementation include a dim bias for near-coastal Sc that also occurs less prominently in SP and a bright bias over tropical continental deep convection zones. Nevertheless, UP makes global eddy-permitting simulation a feasible and interesting alternative to conventionally parameterized GCMs or SP-GCMs with turbulence parameterizations for studying BL cloud-climate and cloud-aerosol feedback.

  19. Toward low-cloud-permitting cloud superparameterization with explicit boundary layer turbulence

    DOE PAGES

    Parishani, Hossein; Pritchard, Michael S.; Bretherton, Christopher S.; ...

    2017-06-19

    Systematic biases in the representation of boundary layer (BL) clouds are a leading source of uncertainty in climate projections. A variation on superparameterization (SP) called “ultraparameterization” (UP) is developed, in which the grid spacing of the cloud-resolving models (CRMs) is fine enough (250 × 20 m) to explicitly capture the BL turbulence, associated clouds, and entrainment in a global climate model capable of multiyear simulations. UP is implemented within the Community Atmosphere Model using 2° resolution (~14,000 embedded CRMs) with one-moment microphysics. By using a small domain and mean-state acceleration, UP is computationally feasible today and promising for exascale computers. Short-duration global UP hindcasts are compared with SP and satellite observations of top-of-atmosphere radiation and cloud vertical structure. The most encouraging improvement is a deeper BL and more realistic vertical structure of subtropical stratocumulus (Sc) clouds, due to stronger vertical eddy motions that promote entrainment. Results from 90 day integrations show climatological errors that are competitive with SP, with a significant improvement in the diurnal cycle of offshore Sc liquid water. Ongoing concerns with the current UP implementation include a dim bias for near-coastal Sc that also occurs less prominently in SP and a bright bias over tropical continental deep convection zones. Nevertheless, UP makes global eddy-permitting simulation a feasible and interesting alternative to conventionally parameterized GCMs or SP-GCMs with turbulence parameterizations for studying BL cloud-climate and cloud-aerosol feedback.

  20. Toward low-cloud-permitting cloud superparameterization with explicit boundary layer turbulence

    NASA Astrophysics Data System (ADS)

    Parishani, Hossein; Pritchard, Michael S.; Bretherton, Christopher S.; Wyant, Matthew C.; Khairoutdinov, Marat

    2017-07-01

    Systematic biases in the representation of boundary layer (BL) clouds are a leading source of uncertainty in climate projections. A variation on superparameterization (SP) called "ultraparameterization" (UP) is developed, in which the grid spacing of the cloud-resolving models (CRMs) is fine enough (250 × 20 m) to explicitly capture the BL turbulence, associated clouds, and entrainment in a global climate model capable of multiyear simulations. UP is implemented within the Community Atmosphere Model using 2° resolution (˜14,000 embedded CRMs) with one-moment microphysics. By using a small domain and mean-state acceleration, UP is computationally feasible today and promising for exascale computers. Short-duration global UP hindcasts are compared with SP and satellite observations of top-of-atmosphere radiation and cloud vertical structure. The most encouraging improvement is a deeper BL and more realistic vertical structure of subtropical stratocumulus (Sc) clouds, due to stronger vertical eddy motions that promote entrainment. Results from 90 day integrations show climatological errors that are competitive with SP, with a significant improvement in the diurnal cycle of offshore Sc liquid water. Ongoing concerns with the current UP implementation include a dim bias for near-coastal Sc that also occurs less prominently in SP and a bright bias over tropical continental deep convection zones. Nevertheless, UP makes global eddy-permitting simulation a feasible and interesting alternative to conventionally parameterized GCMs or SP-GCMs with turbulence parameterizations for studying BL cloud-climate and cloud-aerosol feedback.

  1. Computational Fluid Dynamics (CFD) Simulation of Drag Reduction by Riblets on Automobile

    NASA Astrophysics Data System (ADS)

    Ghazali, N. N. N.; Yau, Y. H.; Badarudin, A.; Lim, Y. C.

    2010-05-01

    One of the ongoing automotive technological developments is the reduction of aerodynamic drag, because drag has a direct impact on fuel consumption and thereby on many other design requirements. Passive drag-reduction techniques are the most portable and feasible to implement in real applications. One such passive technique is longitudinal microgrooves aligned in the flow direction, known as riblets. In this study, turbulent flow over an automobile in a virtual wind tunnel was simulated by computational fluid dynamics (CFD). The study addresses three aspects: the drag-reduction effect of riblets on a smooth-surface automobile, the attachment position of the riblets, and the riblet geometry. The simulation involved three stages: geometry modeling, meshing, and solving/analysis. The simulation results show that the attachment of riblets on the rear roof surface reduces the drag coefficient by 2.74%. By adjusting the attachment position of the riblet film, reduction rates in the range 0.5%-9.51% are obtained, with the top middle roof position giving the largest effect. Four riblet geometries are investigated, among which the semi-hexagonal trapezoidally shaped riblet is found to be the most effective. Drag reduction rates ranging from -3.34% to 6.36% are found.

  2. Mesoscale Effective Property Simulations Incorporating Conductive Binder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trembacki, Bradley L.; Noble, David R.; Brunini, Victor E.

    Lithium-ion battery electrodes are composed of active material particles, binder, and conductive additives that form an electrolyte-filled porous particle composite. The mesoscale (particle-scale) interplay of electrochemistry, mechanical deformation, and transport through this tortuous multi-component network dictates the performance of a battery at the cell level. Effective electrode properties connect mesoscale phenomena with computationally feasible battery-scale simulations. We utilize published tomography data to reconstruct a large subsection (1000+ particles) of an NMC333 cathode into a computational mesh and extract electrode-scale effective properties from finite element continuum-scale simulations. We present a novel method to preferentially place a composite binder phase throughout the mesostructure, a necessary approach due to the difficulty of distinguishing between non-active phases in tomographic data. We compare stress generation and effective thermal, electrical, and ionic conductivities across several binder placement approaches. Isotropic lithiation-dependent mechanical swelling of the NMC particles and the consideration of strain-dependent composite binder conductivity significantly impact the resulting effective property trends and stresses generated. Lastly, our results suggest that composite binder location significantly affects mesoscale behavior, indicating that a binder coating on active particles is not sufficient and that more accurate approaches should be used when calculating effective properties that will inform battery-scale models in this inherently multi-scale battery simulation challenge.

  3. Mesoscale Effective Property Simulations Incorporating Conductive Binder

    DOE PAGES

    Trembacki, Bradley L.; Noble, David R.; Brunini, Victor E.; ...

    2017-07-26

    Lithium-ion battery electrodes are composed of active material particles, binder, and conductive additives that form an electrolyte-filled porous particle composite. The mesoscale (particle-scale) interplay of electrochemistry, mechanical deformation, and transport through this tortuous multi-component network dictates the performance of a battery at the cell level. Effective electrode properties connect mesoscale phenomena with computationally feasible battery-scale simulations. We utilize published tomography data to reconstruct a large subsection (1000+ particles) of an NMC333 cathode into a computational mesh and extract electrode-scale effective properties from finite element continuum-scale simulations. We present a novel method to preferentially place a composite binder phase throughout the mesostructure, a necessary approach due to the difficulty of distinguishing between non-active phases in tomographic data. We compare stress generation and effective thermal, electrical, and ionic conductivities across several binder placement approaches. Isotropic lithiation-dependent mechanical swelling of the NMC particles and the consideration of strain-dependent composite binder conductivity significantly impact the resulting effective property trends and stresses generated. Lastly, our results suggest that composite binder location significantly affects mesoscale behavior, indicating that a binder coating on active particles is not sufficient and that more accurate approaches should be used when calculating effective properties that will inform battery-scale models in this inherently multi-scale battery simulation challenge.

  4. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson, "Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972), and L. B. Lucy, "An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974). Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. The conclusions support the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here.
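
    The Richardson-Lucy/maximum-likelihood iteration itself is the multiplicative update f <- f * K^T( g / (K f) ) for data g and blur operator K with a unit-sum PSF. The sketch below runs it on a 1-D toy with a Gaussian PSF and two close point sources, illustrating the resolution improvement with iteration count noted above; the PSF width and iteration count are illustrative.

```python
# Richardson-Lucy restoration on a 1-D toy (circular convolution via FFT).
import numpy as np

def blur(f, psf):
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(psf)))

def blur_adjoint(f, psf):  # adjoint of circular convolution = correlation
    return np.real(np.fft.ifft(np.fft.fft(f) * np.conj(np.fft.fft(psf))))

n = 256
x = np.arange(n)
psf = np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2)
psf = np.roll(psf / psf.sum(), -n // 2)          # centered, unit-sum PSF

truth = np.zeros(n); truth[100] = 1.0; truth[115] = 0.7   # two close points
g = np.maximum(blur(truth, psf), 0.0)                     # noiseless blurred data

f = np.full(n, g.mean())                  # flat positive initial estimate
for _ in range(200):                      # resolution improves with iteration
    ratio = g / np.maximum(blur(f, psf), 1e-12)
    f = f * blur_adjoint(ratio, psf)      # multiplicative ML update

print("restored peaks near:", sorted(np.argsort(f)[-2:]))
```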

  5. MRXCAT: Realistic numerical phantoms for cardiovascular magnetic resonance

    PubMed Central

    2014-01-01

    Background Computer simulations are important for validating novel image acquisition and reconstruction strategies. In cardiovascular magnetic resonance (CMR), numerical simulations need to combine anatomical information and the effects of cardiac and/or respiratory motion. To this end, a framework for realistic CMR simulations is proposed and its use for image reconstruction from undersampled data is demonstrated. Methods The extended Cardiac-Torso (XCAT) anatomical phantom framework with various motion options was used as a basis for the numerical phantoms. Different tissue, dynamic contrast and signal models, multiple receiver coils and noise are simulated. Arbitrary trajectories and undersampled acquisition can be selected. The utility of the framework is demonstrated for accelerated cine and first-pass myocardial perfusion imaging using k-t PCA and k-t SPARSE. Results MRXCAT phantoms allow for realistic simulation of CMR including optional cardiac and respiratory motion. Example reconstructions from simulated undersampled k-t parallel imaging demonstrate the feasibility of simulated acquisition and reconstruction using the presented framework. Myocardial blood flow assessment from simulated myocardial perfusion images highlights the suitability of MRXCAT for quantitative post-processing simulation. Conclusion The proposed MRXCAT phantom framework enables versatile and realistic simulations of CMR including breathhold and free-breathing acquisitions. PMID:25204441

  6. Nonlinear plasma wave models in 3D fluid simulations of laser-plasma interaction

    NASA Astrophysics Data System (ADS)

    Chapman, Thomas; Berger, Richard; Arrighi, Bill; Langer, Steve; Banks, Jeffrey; Brunner, Stephan

    2017-10-01

    Simulations of laser-plasma interaction (LPI) in inertial confinement fusion (ICF) conditions require multi-mm spatial scales due to the typical laser beam size and durations of order 100 ps in order for numerical laser reflectivities to converge. To be computationally achievable, these scales necessitate a fluid-like treatment of light and plasma waves with a spatial grid size on the order of the light wave length. Plasma waves experience many nonlinear phenomena not naturally described by a fluid treatment, such as frequency shifts induced by trapping, a nonlinear (typically suppressed) Landau damping, and mode couplings leading to instabilities that can cause the plasma wave to decay rapidly. These processes affect the onset and saturation of stimulated Raman and Brillouin scattering, and are of direct interest to the modeling and prediction of deleterious LPI in ICF. It is not currently computationally feasible to simulate these Debye length-scale phenomena in 3D across experimental scales. Analytically-derived and/or numerically benchmarked models of processes occurring at scales finer than the fluid simulation grid offer a path forward. We demonstrate the impact of a range of kinetic processes on plasma reflectivity via models included in the LPI simulation code pF3D. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  7. Modeling Real-Time Coordination of Distributed Expertise and Event Response in NASA Mission Control Center Operations

    NASA Astrophysics Data System (ADS)

    Onken, Jeffrey

    This dissertation introduces a multidisciplinary framework to enable future research and analysis of alternatives for control centers for real-time operations of safety-critical systems. The multidisciplinary framework integrates functional and computational models that describe the dynamics of fundamental concepts from previously disparate engineering and psychology research disciplines, such as group performance and processes, supervisory control, situation awareness, events and delays, and expertise. The application in this dissertation is real-time operations within the NASA Mission Control Center (MCC) in Houston, TX. This dissertation operationalizes the framework into a model and simulation, which simulates the functional and computational models in the framework according to user-configured scenarios for a NASA human-spaceflight mission. The model and simulation generate data on the effectiveness of the mission-control team in supporting the completion of mission objectives and in detecting, isolating, and recovering from anomalies. Accompanying the multidisciplinary framework is a proof of concept, which demonstrates the feasibility of such a framework. The proof of concept demonstrates that variability occurs where expected based on the models. It also demonstrates that the data generated by the model and simulation are useful for analyzing and comparing MCC configuration alternatives, because an investigator can give a diverse set of scenarios to the simulation and compare the outputs in detail to inform decisions about the effect of MCC configurations on mission operations performance.

  8. Hyper-Systolic Processing on APE100/QUADRICS: n²-Loop Computations

    NASA Astrophysics Data System (ADS)

    Lippert, Thomas; Ritzenhöfer, Gero; Glaessner, Uwe; Hoeber, Henning; Seyfried, Armin; Schilling, Klaus

    We investigate the performance gains from hyper-systolic implementations of n²-loop problems on the massively parallel computer Quadrics, exploiting its three-dimensional interprocessor connectivity. For illustration we study the communication aspects of an exact molecular dynamics simulation of n particles with Coulomb (or gravitational) interactions. We compare the interprocessor communication costs of the standard-systolic and the hyper-systolic approaches for various granularities. We predict gain factors as large as three on the Q4 and eight on the QH4 and measure actual performances on these machine configurations. We conclude that it appears feasible to investigate the thermodynamics of a full gravitating n-body problem with O(16,000) particles using the new method on a QH4 system.
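
    The standard-systolic pattern referenced above can be sketched in a few lines: each of n-1 shift steps pairs every particle with exactly one partner, so the O(n²) pairwise sum is built from whole-array operations. On Quadrics the shifted copies move between neighboring processors; below, np.roll stands in for that communication, and the hyper-systolic variant (which reduces the number of shifts) is not shown.

```python
# Ring-systolic n^2-loop: accumulate all pairwise gravitational forces
# using n-1 cyclic shifts of the particle array.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, size=(64, 3))       # 64 "gravitating" particles
mass = np.ones(64)

force = np.zeros_like(pos)
for s in range(1, len(pos)):                 # n-1 systolic shift steps
    q = np.roll(pos, s, axis=0)              # partner coordinates after shift s
    m = np.roll(mass, s)
    d = q - pos
    r2 = np.sum(d * d, axis=1)
    force += (mass * m / r2 ** 1.5)[:, None] * d   # pairwise 1/r^2 attraction

print("net force on particle 0:", force[0].round(3))
```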

  9. On-chip, photon-number-resolving, telecommunication-band detectors for scalable photonic information processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerrits, Thomas; Lita, Adriana E.; Calkins, Brice

    Integration is currently the only feasible route toward scalable photonic quantum processing devices that are sufficiently complex to be genuinely useful in computing, metrology, and simulation. Embedded on-chip detection will be critical to such devices. We demonstrate an integrated photon-number-resolving detector, operating in the telecom band at 1550 nm, employing an evanescently coupled design that allows it to be placed at arbitrary locations within a planar circuit. Up to five photons are resolved in the guided optical mode via absorption from the evanescent field into a tungsten transition-edge sensor. The detection efficiency is 7.2 ± 0.5%. The polarization sensitivity of the detector is also demonstrated. Detailed modeling of device designs shows a clear and feasible route to reaching high detection efficiencies.

  10. Real-time in situ three-dimensional integral videography and surgical navigation using augmented reality: a pilot study

    PubMed Central

    Suenaga, Hideyuki; Hoang Tran, Huy; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Mori, Yoshiyuki; Takato, Tsuyoshi

    2013-01-01

    To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker and a computer) was used to generate a three-dimensional overlay that was projected on the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the patient/surgical instrument's position. Thus, integral videography images of jawbones, teeth and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Change in the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient, without special glasses. The difference in three-dimensional position of each measuring point on the solid model and augmented reality navigation was almost negligible (<1 mm); this indicates that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site, with the naked eye. PMID:23703710

  11. Low-Cost Computer-Aided Instruction/Computer-Managed Instruction (CAI/CMI) System: Feasibility Study. Final Report.

    ERIC Educational Resources Information Center

    Lintz, Larry M.; And Others

    This study investigated the feasibility of a low-cost computer-aided instruction/computer-managed instruction (CAI/CMI) system. Air Force instructors and training supervisors were surveyed to determine the potential payoffs of various CAI and CMI functions. Results indicated that a wide range of capabilities had potential for resident technical…

  12. A novel grid-based mesoscopic model for evacuation dynamics

    NASA Astrophysics Data System (ADS)

    Shi, Meng; Lee, Eric Wai Ming; Ma, Yi

    2018-05-01

    This study presents a novel grid-based mesoscopic model for evacuation dynamics. In this model, the evacuation space is discretised into larger cells than those used in microscopic models. The approach directly computes the dynamic changes in crowd densities in cells over the course of an evacuation, with the density flow driven by the density-speed correlation. The computation is faster than in traditional cellular automata evacuation models, which determine density by computing the movements of each pedestrian. To demonstrate the feasibility of this model, we apply it to a series of practical scenarios and conduct a parameter sensitivity study of the effect of changes in the time step δ. The simulation results show that within the valid range of δ, changing δ has only a minor impact on the simulation. The model also makes it possible to directly acquire key information such as bottleneck areas from a time-varying dynamic density map, even when a relatively large time step is adopted. We use the commercial software AnyLogic to evaluate the model. The results show that the mesoscopic model is more efficient than the microscopic model and provides more in-situ details (e.g., pedestrian movement patterns) than macroscopic models.
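
    A minimal 1-D sketch of the mesoscopic update described above: cell densities evolve directly under a density-speed relation v(rho) = v_max(1 - rho/rho_max), with outflow limited by cell occupancy and downstream capacity, instead of moving individual pedestrians. Geometry and parameters are illustrative.

```python
# Grid-based mesoscopic evacuation of a corridor: evolve cell densities
# directly; the exit is at the right end of the corridor.
import numpy as np

V_MAX, RHO_MAX = 1.5, 5.0          # m/s, persons/m^2
dx, dt = 2.0, 0.5                  # cell size [m], time step [s]
rho = np.full(20, 3.0)             # initial crowd density in each cell

for step in range(1000):
    v = V_MAX * (1.0 - rho / RHO_MAX)            # speed from local density
    flow = rho * v                               # density flux toward the exit
    # a cell cannot send more people than it holds,
    # nor more than the downstream cell can accept
    out = np.minimum(flow * dt / dx, rho)
    cap = np.append(RHO_MAX - rho[1:], np.inf)   # exit accepts everyone
    out = np.minimum(out, cap)
    rho -= out
    rho[1:] += out[:-1]                          # people advance one cell
    if rho.sum() < 1e-3:
        print(f"corridor cleared after ~{step * dt:.0f} s")
        break
```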

  13. Computational Simulations and the Scientific Method

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
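
    As a concrete illustration of the kind of published component test the paper advocates, the sketch below checks a stand-in "model" (explicit Euler integration of exponential decay) against its independently derivable analytic solution; the model, tolerance, and inputs are invented for the example.

    ```python
    import math

    # Hypothetical component test of the sort a model innovator could publish:
    # the implementation is verified against an independently known answer,
    # giving other developers a repeatable pass/fail experiment.

    def integrate_decay(y0, k, t_end, n_steps):
        """Explicit-Euler integration of dy/dt = -k*y (stand-in for a new model)."""
        dt = t_end / n_steps
        y = y0
        for _ in range(n_steps):
            y += dt * (-k * y)
        return y

    def test_decay_matches_analytic_solution():
        y_numeric = integrate_decay(y0=1.0, k=0.7, t_end=2.0, n_steps=20000)
        y_exact = math.exp(-0.7 * 2.0)
        assert abs(y_numeric - y_exact) < 1e-3  # explicit acceptance criterion

    test_decay_matches_analytic_solution()
    print("component test passed")
    ```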

  14. THERMAL-ENERGY STORAGE IN A DEEP SANDSTONE AQUIFER IN MINNESOTA: FIELD OBSERVATIONS AND THERMAL ENERGY-TRANSPORT MODELING.

    USGS Publications Warehouse

    Miller, R.T.

    1986-01-01

    A study of the feasibility of storing heated water in a deep sandstone aquifer in Minnesota is described. The aquifer consists of four hydraulic zones that are areally anisotropic and have average hydraulic conductivities that range from 0.03 to 1.2 meters per day. A preliminary axially symmetric, nonisothermal, isotropic, single-phase, radial-flow, thermal-energy-transport model was constructed to investigate the sensitivity of model simulation to various hydraulic and thermal properties of the aquifer. A three-dimensional flow and thermal-energy transport model was constructed to incorporate the areal anisotropy of the aquifer. Analytical solutions of equations describing areally anisotropic groundwater flow around a doublet-well system were used to specify model boundary conditions for simulation of heat injection. The entire heat-injection-testing period of approximately 400 days was simulated. Model-computed temperatures compared favorably with field-recorded temperatures, with differences of no more than ±8°C. For each test cycle, model-computed aquifer thermal efficiency, defined as total heat withdrawn divided by total heat injected, was within ±2% of the field-calculated values.
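
    The efficiency figure reported above is a straightforward energy ratio. A minimal sketch of the bookkeeping follows, with invented volume and temperature records and heat tallied relative to an assumed ambient aquifer temperature.

    ```python
    # Aquifer thermal efficiency = total heat withdrawn / total heat injected.
    # The volumes (m^3) and temperatures (deg C) below are invented; heat is
    # tallied relative to an assumed ambient aquifer temperature.
    RHO_C = 4.18e6     # volumetric heat capacity of water, J/(m^3*K)
    T_AMBIENT = 11.0   # ambient aquifer temperature, deg C (assumed)

    injected = [(500.0, 90.0), (480.0, 88.0)]    # (volume, temperature) records
    withdrawn = [(500.0, 55.0), (480.0, 47.0)]

    heat_in = sum(RHO_C * v * (t - T_AMBIENT) for v, t in injected)
    heat_out = sum(RHO_C * v * (t - T_AMBIENT) for v, t in withdrawn)
    print(f"thermal recovery efficiency: {heat_out / heat_in:.1%}")
    ```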

  15. Cross-scale MD simulations of dynamic strength of tantalum

    NASA Astrophysics Data System (ADS)

    Bulatov, Vasily

    2017-06-01

    Dislocations are ubiquitous in metals, where their motion presents the dominant and often the only mode of plastic response to straining. Over the last 25 years, computational prediction of plastic response in metals has relied on Discrete Dislocation Dynamics (DDD) as the most fundamental method to account for the collective dynamics of moving dislocations. Here we present the first direct atomistic MD simulations of dislocation-mediated plasticity that are sufficiently large and long to compute the plasticity response of single-crystal tantalum while tracing the underlying dynamics of dislocations in full atomistic detail. Where feasible, direct MD simulations sidestep DDD altogether, thus reducing the uncertainties of strength predictions to those of the interatomic potential. In the specific context of shock-induced material dynamics, the same MD models predict when, under what conditions, and how dislocations interact and compete with other fundamental mechanisms of dynamic response, e.g. twinning, phase transformations, and fracture. In collaboration with: Luis Zepeda-Ruiz, Lawrence Livermore National Laboratory; Alexander Stukowski, Technische Universitat Darmstadt; Tomas Oppelstrup, Lawrence Livermore National Laboratory. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  16. Effect of tidal fluctuations on transient dispersion of simulated contaminant concentrations in coastal aquifers

    USGS Publications Warehouse

    La Licata, Ivana; Langevin, Christian D.; Dausman, Alyssa M.; Alberti, Luca

    2011-01-01

    Variable-density groundwater models require extensive computational resources, particularly for simulations representing short-term hydrologic variability such as tidal fluctuations. Saltwater-intrusion models usually neglect tidal fluctuations and this may introduce errors in simulated concentrations. The effects of tides on simulated concentrations in a coastal aquifer were assessed. Three analyses are reported: in the first, simulations with and without tides were compared for three different dispersivity values. Tides do not significantly affect the transfer of a hypothetical contaminant into the ocean; however, the concentration difference between tidal and non-tidal simulations could be as much as 15%. In the second analysis, the dispersivity value for the model without tides was increased in a zone near the ocean boundary. By slightly increasing dispersivity in this zone, the maximum concentration difference between the simulations with and without tides was reduced to as low as 7%. In the last analysis, an apparent dispersivity value was calculated for each model cell using the simulated velocity variations from the model with tides. Use of apparent dispersivity values in models with a constant ocean boundary seems to provide a reasonable approach for approximating tidal effects in simulations where explicit representation of tidal fluctuations is not feasible.
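
    The third analysis can be illustrated schematically: simulated velocity variability in each cell is folded into an "apparent" dispersivity. The proportionality below (velocity standard deviation times a tidal time scale) and all parameter values are assumptions made for the sketch, not the formula used in the study.

    ```python
    import numpy as np

    # Illustrative only: one plausible way to convert tidal velocity variability
    # in a model cell into an apparent dispersivity for a constant-boundary model.
    ALPHA_BASE = 1.0         # base longitudinal dispersivity, m (assumed)
    T_TIDE = 12.42 * 3600.0  # semidiurnal tidal period, s

    def apparent_dispersivity(velocity_series):
        """velocity_series: simulated cell velocities over tidal cycles, m/s."""
        sigma_v = np.std(velocity_series)
        return ALPHA_BASE + sigma_v * T_TIDE / 2.0   # assumed proportionality

    t = np.linspace(0.0, 5 * T_TIDE, 1000)
    v_cell = 1e-5 + 4e-6 * np.sin(2.0 * np.pi * t / T_TIDE)  # synthetic velocity
    print(f"apparent dispersivity: {apparent_dispersivity(v_cell):.2f} m")
    ```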

  18. Accelerated weight histogram method for exploring free energy landscapes

    NASA Astrophysics Data System (ADS)

    Lindahl, V.; Lidmar, J.; Hess, B.

    2014-07-01

    Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.
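
    A heavily simplified sketch in the spirit of adaptive weight-histogram sampling follows: a discretized reaction coordinate is sampled under a bias equal to the negative of the current free-energy estimate, and the estimate is refined from the accumulated histogram with a shrinking update size. The toy double-well landscape and the update schedule are pedagogical stand-ins, not the AWH algorithm as published.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy double-well free-energy profile on a discretized coordinate (kT units).
    x = np.linspace(-1.5, 1.5, 31)
    f_true = 3.0 * (x**2 - 1.0)**2

    f_est = np.zeros_like(f_true)  # current free-energy estimate; bias = -f_est
    state = 15                     # start at the barrier top
    step_size = 1.0                # update magnitude, shrunk each stage

    for stage in range(8):
        hist = np.zeros_like(f_true)
        for _ in range(20000):
            trial = state + rng.integers(-1, 2)
            if 0 <= trial < len(x):
                # Metropolis move on the biased surface f_true - f_est,
                # which is sampled uniformly once f_est equals f_true.
                d = (f_true[trial] - f_est[trial]) - (f_true[state] - f_est[state])
                if d <= 0.0 or rng.random() < np.exp(-d):
                    state = trial
            hist[state] += 1.0
        # Over-visited bins mean the estimate there is too high; correct, shrink.
        f_est -= step_size * np.log(np.maximum(hist, 1.0) / hist.max())
        f_est -= f_est.min()
        step_size *= 0.5

    err = np.abs((f_est - f_est.min()) - (f_true - f_true.min())).max()
    print(f"max free-energy error: {err:.2f} kT")
    ```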

  19. Accelerated weight histogram method for exploring free energy landscapes.

    PubMed

    Lindahl, V; Lidmar, J; Hess, B

    2014-07-28

    Calculating free energies is an important and notoriously difficult task for molecular simulations. The rapid increase in computational power has made it possible to probe increasingly complex systems, yet extracting accurate free energies from these simulations remains a major challenge. Fully exploring the free energy landscape of, say, a biological macromolecule typically requires sampling large conformational changes and slow transitions. Often, the only feasible way to study such a system is to simulate it using an enhanced sampling method. The accelerated weight histogram (AWH) method is a new, efficient extended ensemble sampling technique which adaptively biases the simulation to promote exploration of the free energy landscape. The AWH method uses a probability weight histogram which allows for efficient free energy updates and results in an easy discretization procedure. A major advantage of the method is its general formulation, making it a powerful platform for developing further extensions and analyzing its relation to already existing methods. Here, we demonstrate its efficiency and general applicability by calculating the potential of mean force along a reaction coordinate for both a single dimension and multiple dimensions. We make use of a non-uniform, free energy dependent target distribution in reaction coordinate space so that computational efforts are not wasted on physically irrelevant regions. We present numerical results for molecular dynamics simulations of lithium acetate in solution and chignolin, a 10-residue long peptide that folds into a β-hairpin. We further present practical guidelines for setting up and running an AWH simulation.

  20. Towards a unified Global Weather-Climate Prediction System

    NASA Astrophysics Data System (ADS)

    Lin, S. J.

    2016-12-01

    The Geophysical Fluid Dynamics Laboratory has been developing a unified regional-global modeling system with variable-resolution capabilities that can be used for severe weather prediction and kilometer-scale regional climate simulations within a unified global modeling system. The foundation of this flexible modeling system is the nonhydrostatic Finite-Volume Dynamical Core on the Cubed-Sphere (FV3). A unique aspect of FV3 is that it is "vertically Lagrangian" (Lin 2004), essentially reducing the equation sets to two dimensions, and this is the single most important reason why FV3 outperforms other non-hydrostatic cores. Owing to its accuracy, adaptability, and computational efficiency, the FV3 has been selected as the "engine" for NOAA's Next Generation Global Prediction System (NGGPS). We have built into the modeling system a stretched grid, a two-way regional-global nested grid, and an optimal combination of the stretched-grid and two-way nesting capabilities, making kilometer-scale regional simulations within a global modeling system feasible. Our main scientific goal is to enable simulations of high-impact weather phenomena (such as tornadoes, thunderstorms, and category-5 hurricanes) within an IPCC-class climate modeling system, something previously regarded as impossible. In this presentation I will demonstrate that, with the FV3, it is computationally feasible to simulate not only super-cell thunderstorms but also the subsequent genesis of tornado-like vortices using a global model that was originally designed for climate simulations. The development and tuning strategies of traditional weather and climate models are fundamentally different due to different metrics. We were able to adapt traditional "climate" metrics or standards, such as angular momentum conservation, energy conservation, and flux balance at the top of the atmosphere, to gain insight into problems of traditional models for medium-range weather prediction, and vice versa. Therefore, unification of weather and climate models can happen not just at the algorithm or parameterization level, but also in the metrics and tuning strategies used for both applications, ultimately benefiting both weather and climate applications.

  1. Numerical simulation and nasal air-conditioning

    PubMed Central

    Keck, Tilman; Lindemann, Jörg

    2011-01-01

    Heating and humidification of the respiratory air are the main functions of the nasal airways in addition to cleansing and olfaction. Optimal nasal air conditioning is mandatory for an ideal pulmonary gas exchange in order to avoid desiccation and adhesion of the alveolar capillary bed. The complex three-dimensional anatomical structure of the nose makes it impossible to perform detailed in vivo studies on intranasal heating and humidification within the entire nasal airways applying various technical set-ups. The main problem of in vivo temperature and humidity measurements is a poor spatial and time resolution. Therefore, in vivo measurements are feasible only to a restricted extent, solely providing single temperature values as the complete nose is not entirely accessible. Therefore, data on the overall performance of the nose are only based on one single measurement within each nasal segment. In vivo measurements within the entire nose are not feasible. These serious technical issues concerning in vivo measurements led to a large number of numerical simulation projects in the last few years providing novel information about the complex functions of the nasal airways. In general, numerical simulations merely calculate predictions in a computational model, e.g. a realistic nose model, depending on the setting of the boundary conditions. Therefore, numerical simulations achieve only approximations of a possible real situation. The aim of this review is the synopsis of the technical expertise on the field of in vivo nasal air conditioning, the novel information of numerical simulations and the current state of knowledge on the influence of nasal and sinus surgery on nasal air conditioning. PMID:22073112

  2. [Simulation and air-conditioning in the nose].

    PubMed

    Keck, T; Lindemann, J

    2010-05-01

    Heating and humidification of the respiratory air are the main functions of the nasal airways in addition to cleansing and olfaction. Optimal nasal air conditioning is mandatory for an ideal pulmonary gas exchange in order to avoid desiccation and adhesion of the alveolar capillary bed. The complex three-dimensional anatomical structure of the nose makes it impossible to perform detailed in vivo studies on intranasal heating and humidification within the entire nasal airways applying various technical set-ups. The main problem of in vivo temperature and humidity measurements is a poor spatial and time resolution. Therefore, in vivo measurements are feasible only to a restricted extent, providing single temperature values, as the complete nose is not entirely accessible. Data on the overall performance of the nose are thus based on only one measurement within each nasal segment; in vivo measurements within the entire nose are not feasible. These serious technical issues concerning in vivo measurements led to a large number of numerical simulation projects in the last few years providing novel information about the complex functions of the nasal airways. In general, numerical simulations only calculate predictions in a computational model, e.g., a realistic nose model, depending on the setting of the boundary conditions. Therefore, numerical simulations achieve only approximations of a possible real situation. The aim of this report is the synopsis of the technical expertise in the field of in vivo nasal air conditioning, the novel information from numerical simulations and the current state of knowledge on the influence of nasal and sinus surgery on nasal air conditioning.

  3. Pacific Educational Computer Network Study. Final Report.

    ERIC Educational Resources Information Center

    Hawaii Univ., Honolulu. ALOHA System.

    The Pacific Educational Computer Network Feasibility Study examined technical and non-technical aspects of the formation of an international Pacific Area computer network for higher education. The technical study covered the assessment of the feasibility of a packet-switched satellite and radio ground distribution network for data transmission…

  4. Feasibility studies on explosive detection and homeland security applications using a neutron and x-ray combined computed tomography system

    NASA Astrophysics Data System (ADS)

    Sinha, V.; Srivastava, A.; Lee, H. K.; Liu, X.

    2013-05-01

    The successful creation and operation of a neutron and X-ray combined computed tomography (NXCT) system has been demonstrated by researchers at the Missouri University of Science and Technology. The NXCT system has numerous applications in the field of material characterization and object identification in materials with a mixture of atomic numbers. Presently, feasibility studies have been performed for explosive detection and homeland security applications, particularly in concealed-material detection and the determination of low atomic number materials, which cannot be detected using traditional X-ray imaging. The new system has the capability to provide complete structural and compositional information due to the complementary nature of X-ray and neutron interactions with materials. The design of the NXCT system facilitates simultaneous and instantaneous imaging operation, promising enhanced detection capabilities for explosive materials, low atomic number materials and illicit materials in homeland security applications. In addition, a sample positioning system allowing the user to remotely and automatically manipulate the sample makes the system viable for commercial applications. Several explosives and weapon simulants have been imaged and the results are provided. The fusion algorithms, which combine the data from the neutron and X-ray imaging, produce superior images. This paper is a complete overview of the NXCT system for feasibility studies of explosive detection and homeland security applications. The design of the system, its operation, algorithm development, and detection schemes are provided. This is the first combined neutron and X-ray computed tomography system in operation. Furthermore, the method of fusing neutron and X-ray images together is a new approach which provides high-contrast images of the desired object. The system could serve as a standardized tool in nondestructive testing for many applications, especially in explosives detection and homeland security research.

  5. Advances in Visualization of 3D Time-Dependent CFD Solutions

    NASA Technical Reports Server (NTRS)

    Lane, David A.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Numerical simulations of complex 3D time-dependent (unsteady) flows are becoming increasingly feasible because of the progress in computing systems. Unfortunately, many existing flow visualization systems were developed for time-independent (steady) solutions and do not adequately depict solutions from unsteady flow simulations. Furthermore, most systems only handle one time step of the solutions individually and do not consider the time-dependent nature of the solutions. For example, instantaneous streamlines are computed by tracking the particles using one time step of the solution. However, for streaklines and timelines, particles need to be tracked through all time steps. Streaklines can reveal quite different information about the flow than those revealed by instantaneous streamlines. Comparisons of instantaneous streamlines with dynamic streaklines are shown. For a complex 3D flow simulation, it is common to generate a grid system with several millions of grid points and to have tens of thousands of time steps. The disk requirement for storing the flow data can easily be tens of gigabytes. Visualizing solutions of this magnitude is a challenging problem with today's computer hardware technology. Even interactive visualization of one time step of the flow data can be a problem for some existing flow visualization systems because of the size of the grid. Current approaches for visualizing complex 3D time-dependent CFD solutions are described. The flow visualization system developed at NASA Ames Research Center to compute time-dependent particle traces from unsteady CFD solutions is described. The system computes particle traces (streaklines) by integrating through the time steps. This system has been used by several NASA scientists to visualize their CFD time-dependent solutions. The flow visualization capabilities of this system are described, and visualization results are shown.
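
    The streakline construction contrasted with instantaneous streamlines above can be sketched in a few lines: particles are released from a fixed seed at every time step, and each is advected through the time-varying field. The analytic oscillating velocity field below is a stand-in for stored CFD time steps.

    ```python
    import numpy as np

    # Streakline sketch: continuous release from a seed point, with all released
    # particles advected through a time-dependent velocity field. The analytic
    # field below stands in for a sequence of stored CFD solutions.
    def velocity(p, t):
        x, y = p
        return np.array([1.0, 0.4 * np.sin(2.0 * np.pi * (x - t))])

    seed = np.array([0.0, 0.0])
    dt, n_steps = 0.02, 300
    particles = []

    for k in range(n_steps):
        t = k * dt
        particles.append(seed.copy())                # release a new particle
        for i, p in enumerate(particles):
            particles[i] = p + dt * velocity(p, t)   # explicit Euler advection

    streakline = np.array(particles)  # particle positions at the final time
    print("streakline points:", streakline.shape, "oldest particle:", streakline[0])
    ```

    An instantaneous streamline, by contrast, would integrate a single frozen time step of the field; for unsteady flows the two curves generally differ.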

  6. Final product analysis in the e-beam and gamma radiolysis of aqueous solutions of metoprolol tartrate

    NASA Astrophysics Data System (ADS)

    Slegers, Catherine; Tilquin, Bernard

    2006-09-01

    The radiostability of metoprolol tartrate aqueous solutions and the influence of the absorbed dose (0-50 kGy), dose rate (e-beam (EB) vs. gamma (γ)) and radioprotectors (pharmaceutical excipients) are investigated by HPLC-UV analyses and through computer simulations. The use of radioprotecting excipients is more promising than an increase in the dose rate for lowering the degradation of metoprolol tartrate aqueous solutions in applications such as radiosterilization. The decontamination of metoprolol tartrate from waste waters by EB processing appears highly feasible.

  7. Organometallic chemical vapor deposition of silicon nitride films enhanced by atomic nitrogen generated from surface-wave plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okada, H.; Kato, M.; Ishimaru, T.

    2014-02-20

    Organometallic chemical vapor deposition of silicon nitride films enhanced by atomic nitrogen generated from surface-wave plasma is investigated. The feasibility of the precursors triethylsilane (TES) and bis(dimethylamino)dimethylsilane (BDMADMS) is discussed based on a calculation of bond energies by computer simulation. Refractive indices of 1.81 and 1.71 are obtained for films deposited with TES and BDMADMS, respectively. X-ray photoelectron spectroscopy (XPS) analysis revealed that the TES-based film matches stoichiometric thermal silicon nitride.

  8. Why is CDMA the solution for mobile satellite communication

    NASA Technical Reports Server (NTRS)

    Gilhousen, Klein S.; Jacobs, Irwin M.; Padovani, Roberto; Weaver, Lindsay A.

    1989-01-01

    It is demonstrated that spread spectrum Code Division Multiple Access (CDMA) systems provide an economically superior solution for satellite mobile communications by increasing the maximum system capacity with respect to single-channel-per-carrier Frequency Division Multiple Access (FDMA) systems. Following the comparative analysis of CDMA and FDMA systems, the design of a model developed to test the feasibility of the approach and the performance of a spread spectrum system in a mobile environment is described. Results of extensive computer simulations as well as laboratory and field test results are presented.
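
    To first order, the capacity comparison rests on a standard link-budget ratio. The sketch below evaluates the textbook reverse-link CDMA capacity estimate with voice-activity, sectorization, and other-cell interference factors; the parameter values are illustrative assumptions, not figures from the paper.

    ```python
    # First-order CDMA reverse-link capacity estimate (textbook form):
    #   N ~ (W/R) / (Eb/N0) * 1/(1+f) * G_v * G_s
    # All parameter values below are illustrative assumptions.
    W = 1.2288e6         # spread bandwidth, chips/s
    R = 9600.0           # vocoder data rate, bit/s
    EB_N0_DB = 7.0       # required Eb/N0, dB
    F_OTHER = 0.6        # other-cell interference fraction
    G_VOICE = 1.0 / 0.4  # voice-activity gain (40% activity factor)
    G_SECTOR = 3.0       # sectorization gain (three sectors)

    processing_gain = W / R
    eb_n0 = 10.0 ** (EB_N0_DB / 10.0)
    users = (processing_gain / eb_n0) / (1.0 + F_OTHER) * G_VOICE * G_SECTOR
    print(f"processing gain {processing_gain:.0f} -> about {users:.0f} users/cell")
    ```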

  9. Investigation of parabolic computational techniques for internal high-speed viscous flows

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Power, G. D.

    1985-01-01

    A feasibility study was conducted to assess the applicability of an existing parabolic analysis (ADD: Axisymmetric Diffuser Duct), developed previously for subsonic viscous internal flows, to mixed supersonic/subsonic flows with heat addition simulating a SCRAMJET combustor. A study was conducted with the ADD code modified to include additional convection effects in the normal momentum equation when supersonic expansion and compression waves were present. It is concluded from the present study that for the class of problems where strong viscous/inviscid interactions are present, a global iteration procedure is required.

  10. Precision Attitude Determination System (PADS) system design and analysis: Single-axis gimbal star tracker

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The feasibility of an evolutionary development from prior systems based on a two-axis gimbal star tracker to a single-axis gimbal star tracker is evaluated. A detailed evaluation of the star tracker gimbal encoder is presented. A brief system description is given, covering the aspects of tracker evolution and encoder evaluation. System analysis includes evaluation of star availability and mounting constraints for the geosynchronous orbit application, and a covariance simulation analysis to evaluate performance potential. Star availability and covariance analysis digital computer programs are included.

  11. The Characteristics of Binary Spike-Time-Dependent Plasticity in HfO2-Based RRAM and Applications for Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Zhou, Zheng; Liu, Chen; Shen, Wensheng; Dong, Zhen; Chen, Zhe; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2017-04-01

    A binary spike-time-dependent plasticity (STDP) protocol based on one resistive-switching random access memory (RRAM) device was proposed and experimentally demonstrated in the fabricated RRAM array. Based on the STDP protocol, a novel unsupervised online pattern recognition system including RRAM synapses and CMOS neurons is developed. Our simulations show that the system can efficiently complete the handwritten digit recognition task, which indicates the feasibility of using the RRAM-based binary STDP protocol in neuromorphic computing systems to obtain good performance.
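
    A minimal sketch of a binary STDP rule of this general kind follows: each synapse is a one-bit conductance that is SET on causal (pre-before-post) spike timing and RESET on anti-causal timing within a finite window. The window width, array size, and update policy are assumptions for the sketch, not the paper's measured protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Binary STDP sketch: one-bit synapses standing in for RRAM high/low states.
    WINDOW = 20e-3  # STDP timing window, s (assumed)
    weights = rng.integers(0, 2, size=(16, 10))  # 16 inputs x 10 output neurons

    def binary_stdp(weights, t_pre, t_post):
        """t_pre: (16,) pre-spike times; t_post: (10,) post-spike times."""
        dt = t_post[None, :] - t_pre[:, None]    # pairwise timing differences
        weights[(dt > 0) & (dt <= WINDOW)] = 1   # SET: pre leads post (causal)
        weights[(dt < 0) & (dt >= -WINDOW)] = 0  # RESET: post leads pre
        return weights

    t_pre = rng.uniform(0.0, 0.1, size=16)
    t_post = rng.uniform(0.0, 0.1, size=10)
    weights = binary_stdp(weights, t_pre, t_post)
    print("fraction of synapses in the high state:", weights.mean())
    ```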

  12. Feasibility of Modified Anterior Odontoid Screw Fixation: Analysis of a New Trajectory Using 3-Dimensional Simulation Software.

    PubMed

    Zhang, Li-Lian; Chen, Qi; Wang, Hao-Li; Xu, Hua-Zi; Tian, Nai-Feng

    2018-05-03

    Anterior odontoid screw fixation (AOSF) has been suggested as the optimal treatment for type II and some shallow type III odontoid fractures. However, only the classical surgical trajectory is available; no newer entry points or trajectories have been reported. We evaluated the anatomic feasibility of a new trajectory for AOSF using 3-dimensional (3D) screw insertion simulation software (Mimics). Computed tomography (CT) scans of patients (65 males and 59 females) with normal cervical structures were obtained consecutively, and the axes were reconstructed in 3 dimensions by Mimics software. Simulated operations were then performed using 2 new entry points below the superior articular process, with bilateral screws of different diameters (group 1: 4 mm and 4 mm; group 2: 4 mm and 3.5 mm; group 3: 3.5 mm and 3.5 mm). The success rates and the required screw lengths were recorded and analyzed. The success rates were 79.03% for group 1, 95.16% for group 2, and 98.39% for group 3. The success rates for groups 2 and 3 did not differ significantly, and both were significantly better than the rate for group 1. The success rate was much higher in males than in females in group 1, but was similar in males and females in the other 2 groups. Screw lengths did not differ significantly among the 3 groups, but an effect of sex was apparent. Our modified trajectory is anatomically feasible for fixation of anterior odontoid fractures, but further anatomic experiments and clinical research are needed. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. A generic, cost-effective, and scalable cell lineage analysis platform

    PubMed Central

    Biezuner, Tamir; Spiro, Adam; Raz, Ofir; Amir, Shiran; Milo, Lilach; Adar, Rivka; Chapal-Ilani, Noa; Berman, Veronika; Fried, Yael; Ainbinder, Elena; Cohen, Galit; Barr, Haim M.; Halaban, Ruth; Shapiro, Ehud

    2016-01-01

    Advances in single-cell genomics enable commensurate improvements in methods for uncovering lineage relations among individual cells. Current sequencing-based methods for cell lineage analysis depend on low-resolution bulk analysis or rely on extensive single-cell sequencing, which is not scalable and could be biased by functional dependencies. Here we show an integrated biochemical-computational platform for generic single-cell lineage analysis that is retrospective, cost-effective, and scalable. It consists of a biochemical-computational pipeline that inputs individual cells, produces targeted single-cell sequencing data, and uses it to generate a lineage tree of the input cells. We validated the platform by applying it to cells sampled from an ex vivo grown tree and analyzed its feasibility landscape by computer simulations. We conclude that the platform may serve as a generic tool for lineage analysis and thus pave the way toward large-scale human cell lineage discovery. PMID:27558250

  14. Scalable software-defined optical networking with high-performance routing and wavelength assignment algorithms.

    PubMed

    Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin

    2015-10-19

    The feasibility of software-defined optical networking (SDON) for practical application critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity with reduced computation cost, a significant attribute in a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
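
    A sketch of the hottest-request-first idea follows: requests are ordered by demand intensity, routed on shortest paths, and assigned the first wavelength that is free on every hop (wavelength continuity). The topology, scoring, and tie-breaking below are simplified assumptions, not the proposed algorithms themselves.

    ```python
    import heapq
    from collections import defaultdict

    N_WAVELENGTHS = 4
    edges = [("A", "B", 1.0), ("B", "C", 1.0), ("A", "C", 3.0), ("C", "D", 1.0)]
    graph = defaultdict(list)
    for u, v, w in edges:
        graph[u].append((v, w))
        graph[v].append((u, w))
    used = defaultdict(set)  # link -> set of occupied wavelengths

    def shortest_path(src, dst):
        """Plain Dijkstra returning the node list from src to dst."""
        dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph[u]:
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(pq, (d + w, v))
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return path[::-1]

    def assign(requests):
        # "Hottest" requests (highest demand intensity) are processed first.
        for src, dst, intensity in sorted(requests, key=lambda r: -r[2]):
            path = shortest_path(src, dst)
            links = [tuple(sorted(hop)) for hop in zip(path, path[1:])]
            for wl in range(N_WAVELENGTHS):  # first-fit with continuity
                if all(wl not in used[link] for link in links):
                    for link in links:
                        used[link].add(wl)
                    print(f"{src}->{dst}: path {path}, wavelength {wl}")
                    break
            else:
                print(f"{src}->{dst}: blocked")

    assign([("A", "D", 5.0), ("A", "C", 2.0), ("B", "D", 4.0)])
    ```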

  15. Computer-aided design and experimental investigation of a hydrodynamic device: the microwire electrode

    PubMed

    Fulian; Gooch; Fisher; Stevens; Compton

    2000-08-01

    The development and application of a new electrochemical device using a computer-aided design strategy is reported. This novel design is based on the flow of electrolyte solution past a microwire electrode situated centrally within a large duct. In the design stage, finite element simulations were employed to evaluate feasible working geometries and mass transport rates. The computer-optimized designs were then exploited to construct experimental devices. Steady-state voltammetric measurements were performed for a reversible one-electron-transfer reaction to establish the experimental relationship between electrolysis current and solution velocity. The experimental results are compared to those predicted numerically, and good agreement is found. The numerical studies are also used to establish an empirical relationship between the mass transport limited current and the volume flow rate, providing a simple and quantitative alternative for workers who would prefer to exploit this device without the need to develop the numerical aspects.

  16. Local pursuit strategy-inspired cooperative trajectory planning algorithm for a class of nonlinear constrained dynamical systems

    NASA Astrophysics Data System (ADS)

    Xu, Yunjun; Remeikas, Charles; Pham, Khanh

    2014-03-01

    Cooperative trajectory planning is crucial for networked vehicles to respond rapidly in cluttered environments and has a significant impact on many applications such as air traffic or border security monitoring and assessment. One of the challenges in cooperative planning is to find a computationally efficient algorithm that can accommodate both the complexity of the environment and real hardware and configuration constraints of vehicles in the formation. Inspired by a local pursuit strategy observed in foraging ants, feasible and optimal trajectory planning algorithms are proposed in this paper for a class of nonlinear constrained cooperative vehicles in environments with densely populated obstacles. In an iterative hierarchical approach, the local behaviours, such as the formation stability, obstacle avoidance, and individual vehicle's constraints, are considered in each vehicle's (i.e. follower's) decentralised optimisation. The cooperative-level behaviours, such as the inter-vehicle collision avoidance, are considered in the virtual leader's centralised optimisation. Early termination conditions are derived to reduce the computational cost by not wasting time in the local-level optimisation if the virtual leader trajectory does not satisfy those conditions. The expected advantages of the proposed algorithms are (1) the formation can be globally asymptotically maintained in a decentralised manner; (2) each vehicle decides its local trajectory using only the virtual leader and its own information; (3) the formation convergence speed is controlled by one single parameter, which makes it attractive for many practical applications; (4) nonlinear dynamics and many realistic constraints, such as the speed limitation and obstacle avoidance, can be easily considered; (5) inter-vehicle collision avoidance can be guaranteed in both the formation transient stage and the formation steady stage; and (6) the computational cost in finding both the feasible and optimal solutions is low. In particular, the feasible solution can be computed in a very quick fashion. The minimum energy trajectory planning for a group of robots in an obstacle-laden environment is simulated to showcase the advantages of the proposed algorithms.

  17. Three-dimensional particle-particle simulations: Dependence of relaxation time on plasma parameter

    NASA Astrophysics Data System (ADS)

    Zhao, Yinjian

    2018-05-01

    A particle-particle simulation model is applied to investigate the dependence of the relaxation time on the plasma parameter in a three-dimensional unmagnetized plasma. It is found that the relaxation time increases linearly as the plasma parameter increases within the range of the plasma parameter from 2 to 10; when the plasma parameter equals 2, the relaxation time is independent of the total number of particles, but when the plasma parameter equals 10, the relaxation time slightly increases as the total number of particles increases, which indicates the transition of a plasma from collisional to collisionless. In addition, ions with initial Maxwell-Boltzmann (MB) distribution are found to stay in the MB distribution during the whole simulation time, and the mass of ions does not significantly affect the relaxation time of electrons. This work also shows the feasibility of the particle-particle model when using GPU parallel computing techniques.

  18. Finite element simulation of adaptive aerospace structures with SMA actuators

    NASA Astrophysics Data System (ADS)

    Frautschi, Jason; Seelecke, Stefan

    2003-07-01

    The particular demands of aerospace engineering have spawned many of the developments in the field of adaptive structures. Shape memory alloys are particularly attractive as actuators in these types of structures due to their large strains, high specific work output and potential for structural integration. However, the requisite extensive physical testing has slowed development of potential applications and highlighted the need for a simulation tool for feasibility studies. In this paper we present an implementation of an extended version of the Müller-Achenbach SMA model into a commercial finite element code suitable for such studies. Interaction between the SMA model and the solution algorithm for the global FE equations is thoroughly investigated with respect to the effect of tolerances and time step size on convergence, computational cost and accuracy. Finally, a simulation of a SMA-actuated flexible trailing edge of an aircraft wing modeled with beam elements is presented.

  19. Study of ceramic products and processing techniques in space. [using computerized simulation

    NASA Technical Reports Server (NTRS)

    Markworth, A. J.; Oldfield, W.

    1974-01-01

    An analysis of the solidification kinetics of beta alumina in a zero-gravity environment was carried out, using computer-simulation techniques, in order to assess the feasibility of producing high-quality single crystals of this material in space. The two coupled transport processes included were movement of the solid-liquid interface and diffusion of sodium atoms in the melt. Results of the simulation indicate that appreciable crystal-growth rates can be attained in space. Considerations were also made of the advantages offered by high-quality single crystals of beta alumina for use as a solid electrolyte; these clearly indicate that space-grown materials are superior in many respects to analogous terrestrially-grown crystals. Likewise, economic considerations, based on the rapidly expanding technological applications for beta alumina and related fast ionic conductors, reveal that the many superior qualities of space-grown material justify the added expense and experimental detail associated with space processing.

  20. A Stratified Acoustic Model Accounting for Phase Shifts for Underwater Acoustic Networks

    PubMed Central

    Wang, Ping; Zhang, Lin; Li, Victor O. K.

    2013-01-01

    Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated. PMID:23669708
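
    The phase-accounting idea can be illustrated by coherently summing complex ray amplitudes, here for the textbook two-ray (direct plus surface-reflected) geometry. SAM's stratified ray tracing is far richer; this sketch only shows why carrying each ray's phase changes the predicted transmission loss relative to an incoherent sum.

    ```python
    import numpy as np

    # Coherent two-ray (Lloyd's mirror) transmission-loss sketch. Geometry,
    # frequency, and the ideal -1 surface reflection are illustrative.
    C = 1500.0               # sound speed, m/s
    F = 2000.0               # frequency, Hz
    K = 2.0 * np.pi * F / C  # wavenumber, rad/m

    def coherent_tl(r, zs=20.0, zr=30.0):
        """Transmission loss (dB) at range r for source/receiver depths zs, zr."""
        d_direct = np.hypot(r, zr - zs)
        d_surface = np.hypot(r, zr + zs)  # path via the image source
        p = (np.exp(1j * K * d_direct) / d_direct
             - np.exp(1j * K * d_surface) / d_surface)  # -1: surface phase flip
        return -20.0 * np.log10(np.abs(p))

    for r in (100.0, 500.0, 1000.0):
        print(f"range {r:6.0f} m: TL = {coherent_tl(r):5.1f} dB")
    ```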

  1. Performance Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis with Different Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel

    Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility testing of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today's power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
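
    A minimal sketch of the counter-based scheme follows: worker processes claim the next contingency case by atomically incrementing a shared counter, so faster workers naturally take on more cases. The per-case computation below is a placeholder, not an actual contingency analysis.

    ```python
    import multiprocessing as mp

    N_CASES = 1000

    def worker(counter, results):
        while True:
            with counter.get_lock():          # atomic fetch-and-increment
                case = counter.value
                counter.value += 1
            if case >= N_CASES:
                return
            # Placeholder "analysis" of one contingency case.
            results.put((case, sum(i * i for i in range(case % 500))))

    if __name__ == "__main__":
        counter = mp.Value("i", 0)
        results = mp.Queue()
        procs = [mp.Process(target=worker, args=(counter, results))
                 for _ in range(4)]
        for p in procs:
            p.start()
        done = [results.get() for _ in range(N_CASES)]  # drain before joining
        for p in procs:
            p.join()
        print("analyzed", len(done), "contingency cases")
    ```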

  2. Research in the design of high-performance reconfigurable systems

    NASA Technical Reports Server (NTRS)

    Mcewan, S. D.; Spry, A. J.

    1985-01-01

    Computer aided design and computer aided manufacturing have the potential for greatly reducing the cost and lead time in the development of VLSI components. This potential paves the way for the design and fabrication of a wide variety of economically feasible high level functional units. It was observed that current computer systems have only a limited capacity to absorb new VLSI component types other than memory, microprocessors, and a relatively small number of other parts. The first purpose is to explore a system design which is capable of effectively incorporating a considerable number of VLSI part types and will both increase the speed of computation and reduce the attendant programming effort. A second purpose is to explore design techniques for VLSI parts which when incorporated by such a system will result in speeds and costs which are optimal. The proposed work may lay the groundwork for future efforts in the extensive simulation and measurements of the system's cost effectiveness and lead to prototype development.

  3. Infinitely dilute partial molar properties of proteins from computer simulation.

    PubMed

    Ploetz, Elizabeth A; Smith, Paul E

    2014-11-13

    A detailed understanding of temperature and pressure effects on an infinitely dilute protein's conformational equilibrium requires knowledge of the corresponding infinitely dilute partial molar properties. Established molecular dynamics methodologies generally have not provided a way to calculate these properties without either a loss of thermodynamic rigor, the introduction of nonunique parameters, or a loss of information about which solute conformations specifically contributed to the output values. Here we implement a simple method that is thermodynamically rigorous and possesses none of the above disadvantages, and we report on the method's feasibility and computational demands. We calculate infinitely dilute partial molar properties for two proteins and attempt to distinguish the thermodynamic differences between a native and a denatured conformation of a designed miniprotein. We conclude that simple ensemble average properties can be calculated with very reasonable amounts of computational power. In contrast, properties corresponding to fluctuating quantities are computationally demanding to calculate precisely, although they can be obtained more easily by following the temperature and/or pressure dependence of the corresponding ensemble averages.

  4. A stratified acoustic model accounting for phase shifts for underwater acoustic networks.

    PubMed

    Wang, Ping; Zhang, Lin; Li, Victor O K

    2013-05-13

    Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated.

  5. Assessing the accuracy of using oscillating gradient spin echo sequences with AxCaliber to infer micron-sized axon diameters.

    PubMed

    Mercredi, Morgan; Vincent, Trevor J; Bidinosti, Christopher P; Martin, Melanie

    2017-02-01

    Current magnetic resonance imaging (MRI) axon diameter measurements rely on the pulsed gradient spin-echo sequence, which is unable to provide diffusion times short enough to measure small axon diameters. This study combines the AxCaliber axon diameter fitting method with data generated from Monte Carlo simulations of oscillating gradient spin-echo sequences (OGSE) to infer micron-sized axon diameters, in order to determine the feasibility of using MRI to infer smaller axon diameters in brain tissue. Monte Carlo computer simulation data were synthesized from tissue geometries of cylinders of different diameters using a range of gradient frequencies in the cosine OGSE sequence. Data were fitted to the AxCaliber method modified to allow the new pulse sequence. Intra- and extra-axonal water were studied separately and together. The simulations revealed the extra-axonal model to be problematic. Rather than change the model, we found that restricting the range of gradient frequencies such that the measured apparent diffusion coefficient was constant over that range resulted in more accurate fitted diameters. Thus a careful selection of frequency ranges is needed for the AxCaliber method to correctly model extra-axonal water, or adaptations to the method are needed. This restriction helped reduce the necessary gradient strengths for measurements that could be performed with parameters feasible for a Bruker BG6 gradient set. For these experiments, the simulations inferred diameters as small as 0.5 μm on square-packed and randomly packed cylinders. The accuracy of the inferred diameters was found to be dependent on the signal-to-noise ratio (SNR), with smaller diameters more affected by noise, although all diameter distributions were distinguishable from one another for all SNRs tested. The results of this study indicate the feasibility of using MRI with OGSE on preclinical scanners to infer small axon diameters.

  6. Modified animal model and computer-assisted approach for dentoalveolar distraction osteogenesis to reconstruct unilateral maxillectomy defect.

    PubMed

    Feng, Zhihong; Zhao, Jinlong; Zhou, Libin; Dong, Yan; Zhao, Yimin

    2009-10-01

    The purpose of this report is to describe the establishment of an animal model with a unilateral maxillary defect and the application of virtual reality and rapid prototyping in surgical planning for dentoalveolar distraction osteogenesis (DO). Two adult dogs were used to develop an animal model with a unilateral maxillary defect. The 3-dimensional model of the canine craniofacial skeleton was reconstructed from computed tomography data using the software Mimics, version 12.0 (Materialise Group, Leuven, Belgium). A virtual individual distractor was designed and transferred onto the model with the defect, and the osteotomies and distraction processes were simulated. A precise casting technique and numeric control technology were applied to produce the titanium distraction device, which was installed on the physical model with the defect, generated using Selective Laser Sintering technology, and the in vitro simulation of osteotomies and DO was performed. The 2 dogs survived the operation and were lively. The osteotomy and distraction processes were simulated successfully on both the virtual and the physical models. The bone transport could be distracted to the desired position both in the virtual environment and on the physical model. This novel method of developing an animal model with a unilateral maxillary defect was feasible, and the animal model was suitable for developing the reconstruction method for unilateral maxillary defect cases with dentoalveolar DO. Computer-assisted surgical planning and simulation improved the reliability of the maxillofacial surgery, especially for complex cases. The novel idea of reconstructing the unilateral maxillary defect with dentoalveolar DO was proved through the model experiment.

  7. Stochastic resource allocation in emergency departments with a multi-objective simulation optimization algorithm.

    PubMed

    Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li

    2017-03-01

    The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
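
    Central to both NSGA II and MOCBA is the non-domination test over the two objectives (average length of stay and resource waste cost). A minimal Pareto filter is sketched below; the allocation scores are made up for the example.

    ```python
    # Minimal Pareto (non-domination) filter over two minimization objectives.
    def dominates(a, b):
        """a dominates b: no worse in every objective, strictly better in one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(solutions):
        return [s for s in solutions
                if not any(dominates(o["obj"], s["obj"]) for o in solutions)]

    allocations = [  # obj = (mean length of stay, waste cost) -- made-up values
        {"beds": 10, "nurses": 6, "obj": (4.2, 130.0)},
        {"beds": 12, "nurses": 5, "obj": (4.0, 150.0)},
        {"beds": 14, "nurses": 7, "obj": (4.5, 150.0)},  # dominated by the first
        {"beds": 9,  "nurses": 8, "obj": (5.1, 110.0)},
    ]
    for s in pareto_front(allocations):
        print(s)
    ```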

  8. Application of conventional molecular dynamics simulation in evaluating the stability of apomyoglobin in urea solution

    PubMed Central

    Zhang, Dawei; Lazim, Raudah

    2017-01-01

    In this study, we exploited advances in computer technology to determine the stability of four apomyoglobin variants, namely wild type, E109A, E109G and G65A/G73A, by conducting conventional molecular dynamics simulations in explicit urea solution. Variations in RMSD, native contacts and solvent-accessible surface area of the apomyoglobin variants during the simulation were calculated to probe the effect of mutation on the overall conformation of the protein. Subsequently, the mechanism leading to the destabilization of the apoMb variants was studied through the calculation of correlation matrices, principal component analyses, hydrogen bond analyses and RMSF. The results obtained here correlate well with the study conducted by Baldwin and Luo, which showed improved stability of apomyoglobin with the E109A mutation and the converse for the E109G and G65A/G73A mutations. These positive observations showcase the feasibility of exploiting MD simulation to determine protein stability prior to protein expression. PMID:28300210

  9. Application of conventional molecular dynamics simulation in evaluating the stability of apomyoglobin in urea solution

    NASA Astrophysics Data System (ADS)

    Zhang, Dawei; Lazim, Raudah

    2017-03-01

    In this study, we exploited advances in computer technology to determine the stability of four apomyoglobin variants, namely wild type, E109A, E109G and G65A/G73A, by conducting conventional molecular dynamics simulations in explicit urea solution. Variations in RMSD, native contacts and solvent-accessible surface area of the apomyoglobin variants during the simulation were calculated to probe the effect of mutation on the overall conformation of the protein. Subsequently, the mechanism leading to the destabilization of the apoMb variants was studied through the calculation of correlation matrices, principal component analyses, hydrogen bond analyses and RMSF. The results obtained here correlate well with the study conducted by Baldwin and Luo, which showed improved stability of apomyoglobin with the E109A mutation and the converse for the E109G and G65A/G73A mutations. These positive observations showcase the feasibility of exploiting MD simulation to determine protein stability prior to protein expression.

  10. Application of conventional molecular dynamics simulation in evaluating the stability of apomyoglobin in urea solution.

    PubMed

    Zhang, Dawei; Lazim, Raudah

    2017-03-16

    In this study, we exploited advances in computer technology to determine the stability of four apomyoglobin variants, namely wild type, E109A, E109G and G65A/G73A, by conducting conventional molecular dynamics simulations in explicit urea solution. Variations in RMSD, native contacts and solvent-accessible surface area of the apomyoglobin variants during the simulation were calculated to probe the effect of mutation on the overall conformation of the protein. Subsequently, the mechanism leading to the destabilization of the apoMb variants was studied through the calculation of correlation matrices, principal component analyses, hydrogen bond analyses and RMSF. The results obtained here correlate well with the study conducted by Baldwin and Luo, which showed improved stability of apomyoglobin with the E109A mutation and the converse for the E109G and G65A/G73A mutations. These positive observations showcase the feasibility of exploiting MD simulation to determine protein stability prior to protein expression.

  11. Fun During Knee Rehabilitation: Feasibility and Acceptability Testing of a New Android-Based Training Device.

    PubMed

    Weber-Spickschen, Thomas Sanjay; Colcuc, Christian; Hanke, Alexander; Clausen, Jan-Dierk; James, Paul Abraham; Horstmann, Hauke

    2017-01-01

    The initial goals of rehabilitation after knee injuries and operations are to achieve full knee extension and to activate the quadriceps muscle. In addition to regular physiotherapy, an Android-based knee training device is designed to help patients achieve these goals and improve compliance in the early rehabilitation period. This knee training device combines the fun of a computer game with muscular training or rehabilitation. Our aim was to test the feasibility and acceptability of this new device. 50 volunteer subjects enrolled to test the computer-game-aided device. The first game was a high-striker game, which recorded maximum knee extension power. The second game involved controlling quadriceps muscular power to simulate flying an aeroplane, in order to record the accuracy of muscle activation. The subjects evaluated the device by completing a simple questionnaire. No technical problems were encountered during use of the device, and no subjects complained of any discomfort after using it. Measurements including maximum knee extension power and knee muscle activation and control were recorded successfully. Subjects rated their experience with the device as either excellent or very good and agreed that the device can motivate and monitor the progress of knee rehabilitation training. To the best of our knowledge, this is the first Android-based tool available to fast-track knee rehabilitation training. All subjects gave very positive feedback on this computer-game-aided knee device.

  12. Bearing optimization for SSME HPOTP application

    NASA Technical Reports Server (NTRS)

    Armstrong, Elizabeth S.; Coe, Harold H.

    1988-01-01

    The space shuttle main engine (SSME) high-pressure oxygen turbopumps (HPOTP) have not achieved the service life required of them. To improve the life of the existing turbopump bearings, modifications that could be retrofitted into the present bearing cavity are being investigated. Several bearing parameters were optimized using the computer program SHABERTH, which performs a thermomechanical simulation of a load support system. The computer analysis showed that improved bearing performance is feasible if low friction coefficients can be attained. Bearing geometries were optimized considering heat generation, equilibrium temperatures, and relative life. Two sets of curvatures were selected from the optimization: an inner-raceway curvature of 0.54 with an outer-raceway curvature of 0.52, and an inner-raceway curvature of 0.55 with an outer-raceway curvature of 0.53. A contact angle of 16 deg was also selected. Thermal gradients through the bearings were found to be lower with liquid lubrication than with solid film lubrication. As the coolant flow rate through the bearing increased, the ball temperature decreased, but at a continuously decreasing rate. The optimum flow rate was approximately 4 kg/s. The analytical modeling used to determine these feasible modifications to improve bearing performance is described.

  13. WE-DE-202-00: Connecting Radiation Physics with Computational Biology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Radiation therapy for the treatment of cancer has been established as a highly precise and effective way to eradicate a localized region of diseased tissue. To achieve further significant gains in the therapeutic ratio, we need to move towards biologically optimized treatment planning. To achieve this goal, we need to understand how the radiation-type dependent patterns of induced energy depositions within the cell (physics) connect via molecular, cellular, and tissue reactions to treatment outcomes such as tumor control and undesirable effects on normal tissue. Several computational biology approaches have been developed connecting physics to biology. Monte Carlo simulations are the most accurate method to calculate physical dose distributions at the nanometer scale; however, simulations at the DNA scale are slow, and repair processes are generally not simulated. Alternative models that rely on the random formation of individual DNA lesions within one or two turns of the DNA have been shown to reproduce the clusters of DNA lesions, including single-strand breaks (SSBs) and double-strand breaks (DSBs), without the need for detailed track structure simulations. Efficient computational simulations of initial DNA damage induction facilitate computational modeling of DNA repair and other molecular and cellular processes. Mechanistic, multiscale models provide a useful conceptual framework to test biological hypotheses and help connect fundamental information about track structure and dosimetry at the sub-cellular level to dose-response effects on larger scales. In this symposium we will learn about the current state of the art of computational approaches for estimating radiation damage at the cellular and sub-cellular scale. How can understanding the physics interactions at the DNA level be used to predict biological outcome? We will discuss whether and how such calculations are relevant to advancing our understanding of radiation damage and its repair, or whether the underlying biological processes are too complex for a mechanistic approach. Can computer simulations be used to guide future biological research? We will debate the feasibility of explaining biology from a physicist's perspective. Learning Objectives: (1) Understand the potential applications and limitations of computational methods for dose-response modeling at the molecular, cellular, and tissue levels; (2) Learn about the mechanisms of action underlying the induction, repair, and biological processing of damage to DNA and other constituents; (3) Understand how effects and processes at one biological scale impact biological processes and outcomes at other scales. Disclosures: J. Schuemann, NCI/NIH grants; S. McMahon, European Commission FP7 funding (grant EC FP7 MC-IOF-623630).

  14. WE-DE-202-02: Are Track Structure Simulations Truly Needed for Radiobiology at the Cellular and Tissue Levels?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, R.

    Radiation therapy for the treatment of cancer has been established as a highly precise and effective way to eradicate a localized region of diseased tissue. To achieve further significant gains in the therapeutic ratio, we need to move towards biologically optimized treatment planning. To achieve this goal, we need to understand how the radiation-type dependent patterns of induced energy depositions within the cell (physics) connect via molecular, cellular, and tissue reactions to treatment outcomes such as tumor control and undesirable effects on normal tissue. Several computational biology approaches have been developed connecting physics to biology. Monte Carlo simulations are the most accurate method to calculate physical dose distributions at the nanometer scale; however, simulations at the DNA scale are slow, and repair processes are generally not simulated. Alternative models that rely on the random formation of individual DNA lesions within one or two turns of the DNA have been shown to reproduce the clusters of DNA lesions, including single-strand breaks (SSBs) and double-strand breaks (DSBs), without the need for detailed track structure simulations. Efficient computational simulations of initial DNA damage induction facilitate computational modeling of DNA repair and other molecular and cellular processes. Mechanistic, multiscale models provide a useful conceptual framework to test biological hypotheses and help connect fundamental information about track structure and dosimetry at the sub-cellular level to dose-response effects on larger scales. In this symposium we will learn about the current state of the art of computational approaches for estimating radiation damage at the cellular and sub-cellular scale. How can understanding the physics interactions at the DNA level be used to predict biological outcome? We will discuss whether and how such calculations are relevant to advancing our understanding of radiation damage and its repair, or whether the underlying biological processes are too complex for a mechanistic approach. Can computer simulations be used to guide future biological research? We will debate the feasibility of explaining biology from a physicist's perspective. Learning Objectives: (1) Understand the potential applications and limitations of computational methods for dose-response modeling at the molecular, cellular, and tissue levels; (2) Learn about the mechanisms of action underlying the induction, repair, and biological processing of damage to DNA and other constituents; (3) Understand how effects and processes at one biological scale impact biological processes and outcomes at other scales. Disclosures: J. Schuemann, NCI/NIH grants; S. McMahon, European Commission FP7 funding (grant EC FP7 MC-IOF-623630).

  15. WE-DE-202-01: Connecting Nanoscale Physics to Initial DNA Damage Through Track Structure Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuemann, J.

    Radiation therapy for the treatment of cancer has been established as a highly precise and effective way to eradicate a localized region of diseased tissue. To achieve further significant gains in the therapeutic ratio, we need to move towards biologically optimized treatment planning. To achieve this goal, we need to understand how the radiation-type dependent patterns of induced energy depositions within the cell (physics) connect via molecular, cellular, and tissue reactions to treatment outcomes such as tumor control and undesirable effects on normal tissue. Several computational biology approaches have been developed connecting physics to biology. Monte Carlo simulations are the most accurate method to calculate physical dose distributions at the nanometer scale; however, simulations at the DNA scale are slow, and repair processes are generally not simulated. Alternative models that rely on the random formation of individual DNA lesions within one or two turns of the DNA have been shown to reproduce the clusters of DNA lesions, including single-strand breaks (SSBs) and double-strand breaks (DSBs), without the need for detailed track structure simulations. Efficient computational simulations of initial DNA damage induction facilitate computational modeling of DNA repair and other molecular and cellular processes. Mechanistic, multiscale models provide a useful conceptual framework to test biological hypotheses and help connect fundamental information about track structure and dosimetry at the sub-cellular level to dose-response effects on larger scales. In this symposium we will learn about the current state of the art of computational approaches for estimating radiation damage at the cellular and sub-cellular scale. How can understanding the physics interactions at the DNA level be used to predict biological outcome? We will discuss whether and how such calculations are relevant to advancing our understanding of radiation damage and its repair, or whether the underlying biological processes are too complex for a mechanistic approach. Can computer simulations be used to guide future biological research? We will debate the feasibility of explaining biology from a physicist's perspective. Learning Objectives: (1) Understand the potential applications and limitations of computational methods for dose-response modeling at the molecular, cellular, and tissue levels; (2) Learn about the mechanisms of action underlying the induction, repair, and biological processing of damage to DNA and other constituents; (3) Understand how effects and processes at one biological scale impact biological processes and outcomes at other scales. Disclosures: J. Schuemann, NCI/NIH grants; S. McMahon, European Commission FP7 funding (grant EC FP7 MC-IOF-623630).

  16. Multidisciplinary Design Optimization of a Full Vehicle with High Performance Computing

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Gu, L.; Tho, C. H.; Sobieszczanski-Sobieski, Jaroslaw

    2001-01-01

    Multidisciplinary design optimization (MDO) of a full vehicle under the constraints of crashworthiness, NVH (Noise, Vibration and Harshness), durability, and other performance attributes is one of the imperative goals for the automotive industry. However, it is often infeasible due to the lack of computational resources, robust simulation capabilities, and efficient optimization methodologies. This paper intends to move closer towards that goal by using parallel computers for the intensive computation and by combining different approximations for dissimilar analyses in the MDO process. The MDO process presented in this paper is an extension of the previous work reported by Sobieski et al. In addition to the roof crush, two full vehicle crash modes are added: full frontal impact and 50% frontal offset crash. Instead of using an adaptive polynomial response surface method, this paper employs a DOE/RSM method for exploring the design space and constructing highly nonlinear crash functions. Two MDO strategies are used and their results are compared. This paper demonstrates that, with high performance computing, a conventionally intractable real-world full-vehicle multidisciplinary optimization problem considering all performance attributes and a large number of design variables becomes feasible.
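
    The DOE/RSM step can be pictured as: sample the design space (DOE), fit a quadratic response surface to the expensive responses (RSM), then optimize the inexpensive surrogate. The sketch below illustrates that generic pattern with a made-up two-variable function standing in for a crash response; it is not the paper's implementation.

        # Generic DOE/response-surface sketch; the two-variable "crash response"
        # is made up and stands in for an expensive simulation.
        import numpy as np

        def crash_response(x):  # hypothetical expensive function
            return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.1) ** 2 + 0.5 * x[0] * x[1]

        rng = np.random.default_rng(0)

        # 1. DOE: sample the design space.
        X = rng.uniform(-1.0, 1.0, size=(30, 2))
        y = np.array([crash_response(x) for x in X])

        # 2. RSM: least-squares fit of a full quadratic surface.
        def basis(x):
            x1, x2 = x
            return [1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2]

        A = np.array([basis(x) for x in X])
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

        # 3. Optimize the cheap surrogate (dense grid search here).
        grid = np.array([(a, b) for a in np.linspace(-1, 1, 101)
                                for b in np.linspace(-1, 1, 101)])
        preds = np.array([basis(g) for g in grid]) @ coeffs
        print("surrogate optimum near", grid[np.argmin(preds)])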

  17. Reprogrammable logic in memristive crossbar for in-memory computing

    NASA Astrophysics Data System (ADS)

    Cheng, Long; Zhang, Mei-Yun; Li, Yi; Zhou, Ya-Xiong; Wang, Zhuo-Rui; Hu, Si-Yu; Long, Shi-Bing; Liu, Ming; Miao, Xiang-Shui

    2017-12-01

    Memristive stateful logic has emerged as a promising next-generation in-memory computing paradigm to address escalating computing-performance pressures in the traditional von Neumann architecture. Here, we present a nonvolatile reprogrammable logic method that can process data between different rows and columns in a memristive crossbar array based on material implication (IMP) logic. Arbitrary Boolean logic can be executed with a reprogrammable cell containing four memristors in a crossbar array. In the fabricated Ti/HfO2/W memristive array, some fundamental functions, such as universal NAND logic and data transfer, were experimentally implemented. Moreover, using eight memristors in a 2 × 4 array, a one-bit full adder was theoretically designed and verified by simulation to demonstrate the feasibility of our method for accomplishing complex computing tasks. In addition, some critical logic-related performance issues were discussed further, such as the flexibility of data processing, the cascading problem, and the bit error rate. Such a method could be a step forward in developing IMP-based memristive nonvolatile logic for large-scale in-memory computing architectures.
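
    At the logic level, material implication computes q <- (NOT p) OR q in place, and NAND, a universal gate, follows from two IMP steps on a cleared work cell. The plain-Python sketch below verifies that composition; it models none of the Ti/HfO2/W device physics.

        # Logic-level check that two IMP operations on a cleared work cell
        # realize NAND (no memristor device physics modeled).
        def imp(p, q):
            """Material implication; the result overwrites the q cell."""
            return (not p) or q

        def nand_via_imp(p, q):
            s = False          # step 0: clear the work cell (logic 0)
            s = imp(p, s)      # step 1: s <- p IMP 0 == NOT p
            s = imp(q, s)      # step 2: s <- q IMP (NOT p) == NAND(p, q)
            return s

        for p in (False, True):
            for q in (False, True):
                assert nand_via_imp(p, q) == (not (p and q))
        print("NAND realized from two IMP steps for all inputs")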

  18. Feasibility of Coherent and Incoherent Backscatter Experiments from the AMPS Laboratory. Technical Section

    NASA Technical Reports Server (NTRS)

    Mozer, F. S.

    1976-01-01

    A computer program simulated the spectrum that results when a radar signal is transmitted into the ionosphere for a finite time and received for an equal finite interval. The spectrum derived from this signal is statistical in nature because the signal is scattered from the ionosphere, which is itself statistical in nature. Many estimates of any property of the ionosphere can be made; their average value will approach the average value of the ionospheric property being measured. Due to the statistical nature of the spectrum itself, the estimators will vary about this average. The square root of the variance about this average is the standard deviation, an estimate of the error present in any particular radar measurement. In order to determine the feasibility of the space shuttle radar, the magnitude of these errors for measurements of physical interest must be understood.

  19. Quantum realization of the bilinear interpolation method for NEQR.

    PubMed

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou

    2017-05-31

    In recent years, quantum image processing has become one of the most active fields in quantum computation and quantum information. Image scaling, a kind of geometric transformation, has been widely studied and applied in classical image processing; however, no quantum version of it has existed. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then the concrete quantum circuits of bilinear interpolation, for both scaling up and scaling down NEQR images, are given using multiple-control NOT operations, a special add-one operation, the reverse parallel adder, and parallel subtractor, multiplier, and division operations. Finally, the complexity of the quantum network circuit is analyzed in terms of the basic quantum gates. Simulation results show that an image scaled up using bilinear interpolation is clearer and less distorted than one scaled using nearest-neighbor interpolation.
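
    For reference, the classical bilinear interpolation that the quantum circuits emulate weights the four nearest source pixels by their fractional distances. A minimal NumPy sketch of the classical operation (not the quantum NEQR circuit) follows.

        # Classical bilinear image scaling, the operation the NEQR circuits emulate.
        import numpy as np

        def bilinear_resize(img, new_h, new_w):
            h, w = img.shape
            out = np.zeros((new_h, new_w))
            for i in range(new_h):
                for j in range(new_w):
                    # Map the output pixel back to fractional source coordinates.
                    y = i * (h - 1) / max(new_h - 1, 1)
                    x = j * (w - 1) / max(new_w - 1, 1)
                    y0, x0 = int(y), int(x)
                    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
                    dy, dx = y - y0, x - x0
                    # Weighted sum of the four nearest neighbors.
                    out[i, j] = (img[y0, x0] * (1 - dy) * (1 - dx)
                                 + img[y0, x1] * (1 - dy) * dx
                                 + img[y1, x0] * dy * (1 - dx)
                                 + img[y1, x1] * dy * dx)
            return out

        img = np.arange(16, dtype=float).reshape(4, 4)
        print(bilinear_resize(img, 8, 8))   # scaled-up 8 x 8 image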

  20. Temperature and humidity profiles in the atmosphere from spaceborne lasers: A feasibility study

    NASA Technical Reports Server (NTRS)

    Grassl, H.; Schluessel, P.

    1984-01-01

    Computer simulations of the differential absorption lidar technique from a spacecraft for the purpose of temperature and humidity profiling indicate: (1) Current technology applied to O2 and H2O lines in the 0.7 to 0.8 micrometer wavelength band gives sufficiently high signal-to-noise ratios (up to 50 for a single pulse pair) if backscattering by aerosol particles is high; i.e., profiling accurate to 2 K in temperature and 10% in humidity should be feasible within the turbid lower troposphere, in 1 km layers, with averaging over approximately 100 pulses. (2) The impact of short-term fluctuations in aerosol particle concentration is too large for a one-laser system; only a two-laser system firing with a time lag of about 1 millisecond can surmount these difficulties. (3) The finite width of the laser line and the quasi-random shift of this line introduce tolerable, partly systematic errors.

  1. Aerocapture Systems Analysis for a Neptune Mission

    NASA Technical Reports Server (NTRS)

    Lockwood, Mary Kae; Edquist, Karl T.; Starr, Brett R.; Hollis, Brian R.; Hrinda, Glenn A.; Bailey, Robert W.; Hall, Jeffery L.; Spilker, Thomas R.; Noca, Muriel A.; O'Kongo, N.

    2006-01-01

    A systems analysis was completed to determine the feasibility, benefit, and risk of an aeroshell aerocapture system for Neptune and to identify technology gaps and technology performance goals. The systems analysis includes the following disciplines: science; mission design; aeroshell configuration; interplanetary navigation analyses; atmosphere modeling; computational fluid dynamics for aerodynamic performance and aeroheating environment; stability analyses; guidance development; atmospheric flight simulation; thermal protection system design; mass properties; structures; spacecraft design and packaging; and mass sensitivities. Results show that aerocapture is feasible and performance is adequate for the Neptune mission. Aerocapture can deliver 1.4 times more mass to Neptune orbit than an all-propulsive system for the same launch vehicle, and results in a 3-4 year reduction in trip time compared to all-propulsive systems. Enabling technologies for this mission include TPS manufacturing and aerothermodynamic methods for determining coupled 3-D convection, radiation, and ablation aeroheating rates and loads.

  2. SU-E-J-66: Evaluation of a Real-Time Positioning Assistance Simulator System for Skull Radiography Using the Microsoft Kinect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurata, T; Ono, M; Kozono, K

    2014-06-01

    Purpose: The purpose of this study is to investigate the feasibility of a low cost, small size positioning assistance simulator system for skull radiography using the Microsoft Kinect sensor. A conventional radiographic simulator system can only measure the three-dimensional coordinates of an x-ray tube using angle sensors, and cannot measure the movement of the subject. Therefore, in this study, we developed a real-time simulator system using the Microsoft Kinect to measure both the x-ray tube and the subject, and evaluated its accuracy and feasibility by comparing the simulated and the measured x-ray images. Methods: This system can track a head phantom by using Face Tracking, one of the functions of the Kinect. The relative relationship between the Kinect and the head phantom was measured, and the projection image was calculated using the ray casting method and three-dimensional CT head data with 220 slices at 512 × 512 pixels. X-ray images were obtained using a computed radiography (CR) system. We could then compare the simulated projection images with the measured x-ray images from 0 degrees to 45 degrees, at increments of 15 degrees, by calculating the cross-correlation coefficient C. Results: The simulated projection images were calculated in nearly real time (within 1 second) using the Graphics Processing Unit (GPU). The cross-correlation coefficients C were 0.916, 0.909, 0.891, and 0.886 at 0, 15, 30, and 45 degrees, respectively. As a result, there were strong correlations between the simulated and measured images. Conclusion: This system can be used to perform head positioning more easily and accurately. It is expected that this system will be useful for students learning radiographic techniques. Moreover, it could also be used to predict the actual x-ray image prior to x-ray exposure in clinical environments.
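
    The comparison metric is presumably the usual zero-mean normalized cross-correlation; the abstract does not spell out the exact formula, so the short sketch below is offered under that assumption.

        # Zero-mean normalized cross-correlation between two same-sized images
        # (assumed definition of the coefficient C; synthetic data for demo).
        import numpy as np

        def cross_correlation(a, b):
            a = a.astype(float) - a.mean()
            b = b.astype(float) - b.mean()
            return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

        simulated = np.random.default_rng(1).random((512, 512))
        measured = simulated + 0.1 * np.random.default_rng(2).random((512, 512))
        print(f"C = {cross_correlation(simulated, measured):.3f}")  # near 1.0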

  3. Preoperative simulation for the planning of microsurgical clipping of intracranial aneurysms.

    PubMed

    Marinho, Paulo; Vermandel, Maximilien; Bourgeois, Philippe; Lejeune, Jean-Paul; Mordon, Serge; Thines, Laurent

    2014-12-01

    The safety and success of intracranial aneurysm (IA) surgery could be improved through the dedicated application of simulation covering the procedure from the 3-dimensional (3D) description of the surgical scene to the visual representation of the clip application. We aimed in this study to validate the technical feasibility and clinical relevance of such a protocol. All patients preoperatively underwent 3D magnetic resonance imaging and 3D computed tomography angiography to build 3D reconstructions of the brain, cerebral arteries, and surrounding cranial bone. These 3D models were segmented and merged using Osirix, a DICOM image processing application. This provided the surgical scene that was subsequently imported into Blender, a modeling platform for 3D animation. Digitized clips and appliers could then be manipulated in the virtual operative environment, allowing the visual simulation of clipping. This simulation protocol was assessed in a series of 10 IAs by 2 neurosurgeons. The protocol was feasible in all patients. The visual similarity between the surgical scene and the operative view was excellent in 100% of the cases, and the identification of the vascular structures was accurate in 90% of the cases. The neurosurgeons found the simulation helpful for planning the surgical approach (ie, the bone flap, cisternal opening, and arterial tree exposure) in 100% of the cases. The correct number of final clip(s) needed was predicted from the simulation in 90% of the cases. The preoperatively expected characteristics of the optimal clip(s) (ie, their number, shape, size, and orientation) were validated during surgery in 80% of the cases. This study confirmed that visual simulation of IA clipping based on the processing of high-resolution 3D imaging can be effective. This is a new and important step toward the development of a more sophisticated integrated simulation platform dedicated to cerebrovascular surgery.

  4. High accuracy differential pressure measurements using fluid-filled catheters - A feasibility study in compliant tubes.

    PubMed

    Rotman, Oren Moshe; Weiss, Dar; Zaretsky, Uri; Shitzer, Avraham; Einav, Shmuel

    2015-09-18

    High accuracy differential pressure measurements are required in various biomedical and medical applications, such as in fluid-dynamic test systems or in the cath-lab. Differential pressure measurements using fluid-filled catheters are relatively inexpensive, yet may be subject to common mode pressure (CMP) errors, which can significantly reduce measurement accuracy. Recently, a novel correction method for high accuracy differential pressure measurements was presented and shown to effectively remove CMP distortions from measurements acquired in rigid tubes. The purpose of the present study was to test the feasibility of this correction method inside compliant tubes, which effectively simulate arteries. Two tubes with varying compliance were tested under dynamic flow and pressure conditions to cover the physiological range of radial distensibility in coronary arteries. A third, compliant model with a 70% stenosis severity was additionally tested. Differential pressure measurements were acquired over a 3 cm tube length using a fluid-filled double-lumen catheter and were corrected using the proposed CMP correction method. Validation of the corrected differential pressure signals was performed by comparison to differential pressure recordings taken via a direct connection to the compliant tubes, and by comparison to the predicted differential pressure readings of matching fluid-structure interaction (FSI) computational simulations. The results show excellent agreement between the experimentally acquired and computationally determined differential pressure signals. This validates the application of the CMP correction method in compliant tubes over the physiological range, up to an intermediate stenosis severity of 70%. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Mantle circulation models with variational data assimilation: Inferring past mantle flow and structure from plate motion histories and seismic tomography

    NASA Astrophysics Data System (ADS)

    Bunge, Hans-Peter

    2002-08-01

    Earth's mantle overturns itself about once every 200 million years (myrs). Prima facie evidence for this overturn is the motion of tectonic plates at the surface of the Earth, which drives the geologic activity of our planet. Supporting evidence also comes from seismic tomograms of the Earth's interior that reveal the convective currents in remarkable clarity. Much has been learned about the physics of solid state mantle convection over the past two decades, aided primarily by sophisticated computer simulations. Such simulations are reaching the threshold of fully resolving the convective system globally. In this talk we will review recent progress in mantle dynamics studies. We will then turn our attention to the fundamental question of whether it is possible to explicitly reconstruct mantle flow back in time. This is a classic problem of history matching, amenable to control theory and data assimilation. The technical advances that make such an approach feasible are dramatically increasing compute resources, represented for example by Beowulf clusters, and new observational initiatives, represented for example by the US-Array effort, which should lead to an order-of-magnitude improvement in our ability to resolve Earth structure seismically below North America. In fact, new observational constraints on deep Earth structure illustrate the growing importance of improving our data assimilation skills in deep Earth models. We will explore data assimilation through high resolution global adjoint models of mantle circulation and conclude that it is feasible to reconstruct mantle flow back in time for at least the past 100 myrs.

  6. Laboratory Photoionization Fronts in Nitrogen Gas: A Numerical Feasibility and Parameter Study

    NASA Astrophysics Data System (ADS)

    Gray, William J.; Keiter, P. A.; Lefevre, H.; Patterson, C. R.; Davis, J. S.; van Der Holst, B.; Powell, K. G.; Drake, R. P.

    2018-05-01

    Photoionization fronts play a dominant role in many astrophysical situations but remain difficult to achieve in a laboratory experiment. We present results from a computational parameter study evaluating the feasibility of the photoionization experiment presented in the design paper by Drake et al., in which a photoionization front is generated in a nitrogen medium. The nitrogen gas density and the Planckian radiation temperature of the X-ray source define each simulation. Simulations modeled experiments in which the X-ray flux is generated by a laser-heated gold foil, suitable for experiments using many kJ of laser energy, and experiments in which the flux is generated by a “z-pinch” device, which implodes a cylindrical shell of conducting wires. The models are run using CRASH, our block-adaptive-mesh code for multimaterial radiation hydrodynamics. The radiative transfer model uses multigroup, flux-limited diffusion with 30 radiation groups. In addition, electron heat conduction is modeled using single-group, flux-limited diffusion. In theory, a photoionization front can exist only when the ratios of the electron recombination rate to the photoionization rate and of the electron-impact ionization rate to the recombination rate lie in certain ranges. These ratios are computed for several ionization states of nitrogen. Photoionization fronts are found to exist for laser-driven models with moderate nitrogen densities (∼10²¹ cm⁻³) and radiation temperatures above 90 eV. For “z-pinch”-driven models, lower nitrogen densities are preferred (<10²¹ cm⁻³). We conclude that the proposed experiments are likely to generate photoionization fronts.

  7. Explicit integration with GPU acceleration for large kinetic networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, Benjamin; Belt, Andrew

    2015-12-01

    We demonstrate the first implementation of recently developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions, coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve on the order of 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. This orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.

  8. Social Noise: Generating Random Numbers from Twitter Streams

    NASA Astrophysics Data System (ADS)

    Fernández, Norberto; Quintas, Fernando; Sánchez, Luis; Arias, Jesús

    2015-12-01

    Due to the multiple applications of random numbers in computer systems (cryptography, online gambling, computer simulation, etc.), it is important to have mechanisms to generate these numbers. True Random Number Generators (TRNGs) are commonly used for this purpose. TRNGs rely on non-deterministic sources to generate randomness. Physical processes (like noise in semiconductors, quantum phenomena, etc.) play this role in state of the art TRNGs. In this paper, we depart from previous work and explore the possibility of defining social TRNGs using the stream of public messages of the microblogging service Twitter as a randomness source. Thus, we define two TRNGs based on Twitter stream information and evaluate them using the National Institute of Standards and Technology (NIST) statistical test suite. The results of the evaluation confirm the feasibility of the proposed approach.
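
    Purely as an illustration of the general idea, and not of the paper's two specific generators, a message stream can be turned into candidate bits by hashing each message together with its arrival time and concatenating the digest bits; the NIST suite then judges the resulting stream.

        # Illustrative stream-to-bits extractor (not the paper's TRNG designs).
        import hashlib

        def bits_from_messages(messages):
            """Hash each (timestamp, text) pair and concatenate digest bits."""
            out = []
            for timestamp, text in messages:
                digest = hashlib.sha256(f"{timestamp}|{text}".encode()).digest()
                for byte in digest:
                    out.extend((byte >> k) & 1 for k in range(8))
            return out

        sample_stream = [(1449100800.123, "hello world"),
                         (1449100800.801, "another message")]
        bits = bits_from_messages(sample_stream)
        print(len(bits), "bits; first 16:", bits[:16])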

  9. Adjoint Airfoil Optimization of Darrieus-Type Vertical Axis Wind Turbine

    NASA Astrophysics Data System (ADS)

    Fuchs, Roman; Nordborg, Henrik

    2012-11-01

    We assess the feasibility of using an adjoint solver to optimize the torque of a Darrieus-type vertical axis wind turbine (VAWT). We start with a 2D cross section of a symmetrical airfoil and restrict ourselves to low solidity ratios to minimize blade-vortex interactions. The adjoint solver of the ANSYS FLUENT software package computes the sensitivities of the airfoil surface forces based on a steady flow field. We then find the torque over a full revolution using a weighted average of the sensitivities at different wind speeds and angles of attack. The weights are computed analytically, and the range of angles of attack is given by the tip speed ratio. The airfoil geometry is then evolved, and the proposed methodology is evaluated by transient simulations.

  10. Dexterity optimization by port placement in robot-assisted minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Selha, Shaun; Dupont, Pierre; Howe, Robert D.; Torchiana, David F.

    2002-02-01

    A computer-based algorithm has been developed which uses preoperative images to provide a surgeon with a list of feasible port triplets ranked according to tool dexterity and endoscopic view quality at each surgical site involved in a procedure. A computer simulation allows the surgeon to select from among the proposed port locations. The procedure selected for the development of the system is a coronary artery bypass graft (CABG). In this procedure, the internal mammary artery (IMA) is mobilized from the interior chest wall, and one end is attached to the coronary arteries to provide a new blood supply for the heart. Approximately 10-20 cm of the artery is dissected free using blunt dissection and a harmonic scalpel or electrocautery. At present, the port placement system is being evaluated in clinical trials.

  11. Computer-assisted design and finite element simulation of braces for the treatment of adolescent idiopathic scoliosis using a coronal plane radiograph and surface topography.

    PubMed

    Pea, Rany; Dansereau, Jean; Caouette, Christiane; Cobetto, Nikita; Aubin, Carl-Éric

    2018-05-01

    Orthopedic braces made by computer-aided design and manufacturing with numerical simulation were shown to improve spinal deformity correction in adolescent idiopathic scoliosis while using less material. Simulations with BraceSim (Rodin4D, Groupe Lagarrigue, Bordeaux, France) require a sagittal radiograph, which is not always available. The objective was to develop an innovative modeling method based on a single coronal radiograph and surface topography, and to assess the effectiveness of braces designed with this approach. From a patient's coronal radiograph and surface topography, the developed method allowed the 3D reconstruction of the spine, rib cage, and pelvis using geometric models from a database and a free-form deformation technique. The resulting 3D reconstruction, converted into a finite element model, was used to design and simulate the correction of a brace. The developed method was tested with data from ten scoliosis cases. The simulated correction was compared to analogous simulations performed with a 3D reconstruction built using two radiographs and surface topography (the validated gold standard reference). There was an average difference of 1.4°/1.7° for the thoracic/lumbar Cobb angles, and 2.6°/5.5° for the kyphosis/lordosis, between the developed reconstruction method and the reference. The average difference in the simulated correction was 2.8°/2.4° for the thoracic/lumbar Cobb angles and 3.5°/5.4° for the kyphosis/lordosis. This study showed the feasibility of designing and simulating brace corrections based on a new modeling method using a single coronal radiograph and surface topography. This innovative method could be used to improve brace designs at a lower radiation dose for the patient. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Convergence of Molecular Dynamics Simulation of Protein Native States: Feasibility vs Self-Consistency Dilemma.

    PubMed

    Sawle, Lucas; Ghosh, Kingshuk

    2016-02-09

    All-atom molecular dynamics simulations need convergence tests to evaluate the quality of data. The notion of "true" convergence is elusive, and one can only hope to satisfy self-consistency checks (SCC). There are multiple SCC criteria, and their assessment of all-atom simulations of the native state for real globular proteins is sparse. Here, we present a systematic study of different SCC algorithms, both in terms of their ability to detect the lack of self-consistency and their computational demand, for all-atom native state simulations of four globular proteins (CSP, CheA, CheW, and BPTI). Somewhat surprisingly, we notice that some of the most stringent SCC criteria, e.g., the criterion demanding similarity of the cluster probability distribution between the first and second halves of the trajectory, or the comparison of fluctuations between different blocks using a covariance overlap measure, can require tens of microseconds of simulation even for proteins with fewer than 100 amino acids. We notice such long simulation times can sometimes be associated with traps, but these traps cannot be detected by some of the common SCC methods. We suggest an additional, and simple, SCC algorithm to quickly detect such traps by monitoring the constancy of the cluster entropy (CCE). CCE is a necessary but not sufficient criterion, and additional SCC algorithms must be combined with it. Furthermore, as seen in the explicit solvent simulation of a 1 ms long trajectory of BPTI [1], passing self-consistency checks at an earlier stage may be misleading due to conformational changes taking place later in the simulation, resulting in different, but segregated, regions of SCC. Although there is a hierarchy of complex SCC algorithms, caution must be exercised in their application, with knowledge of their limitations and computational expense.
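
    The CCE check reduces to monitoring the entropy S = -sum_i p_i ln p_i of the cluster populations as the trajectory grows; continued drift in S suggests new clusters (or a trap) are still being discovered. A minimal sketch, assuming cluster labels have already been assigned to each frame, follows.

        # Constancy-of-cluster-entropy (CCE) sketch; assumes each trajectory
        # frame already carries a cluster label (toy labels used here).
        import numpy as np

        def cluster_entropy(labels):
            _, counts = np.unique(labels, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log(p)).sum())

        labels = np.random.default_rng(3).integers(0, 5, size=10000)
        for frac in (0.25, 0.5, 0.75, 1.0):
            n = int(frac * len(labels))
            print(f"first {int(frac * 100):3d}% of frames: S = "
                  f"{cluster_entropy(labels[:n]):.3f}")
        # A plateau in S with growing trajectory length is the CCE signature.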

  13. Stochastic Simulation of Biomolecular Networks in Dynamic Environments

    PubMed Central

    Voliotis, Margaritis; Thomas, Philipp; Grima, Ramon; Bowsher, Clive G.

    2016-01-01

    Simulation of biomolecular networks is now indispensable for studying biological systems, from small reaction networks to large ensembles of cells. Here we present a novel approach for stochastic simulation of networks embedded in the dynamic environment of the cell and its surroundings. We thus sample trajectories of the stochastic process described by the chemical master equation with time-varying propensities. A comparative analysis shows that existing approaches can either fail dramatically, or else can impose impractical computational burdens due to numerical integration of reaction propensities, especially when cell ensembles are studied. Here we introduce the Extrande method which, given a simulated time course of dynamic network inputs, provides a conditionally exact and several orders-of-magnitude faster simulation solution. The new approach makes it feasible to demonstrate—using decision-making by a large population of quorum sensing bacteria—that robustness to fluctuations from upstream signaling places strong constraints on the design of networks determining cell fate. Our approach has the potential to significantly advance both understanding of molecular systems biology and design of synthetic circuits. PMID:27248512
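
    The Extrande idea can be sketched as a thinning scheme: bound the total propensity over a look-ahead window, draw candidate event times from the bound, and accept each candidate with probability equal to the true time-varying total propensity divided by the bound, with rejections firing a do-nothing "extra" reaction. The toy birth-death example below, with a made-up sinusoidal input, illustrates the scheme rather than reproducing the authors' implementation.

        # Toy Extrande-style (thinning) simulation of a birth-death process
        # driven by a made-up sinusoidal input; illustration only.
        import math, random

        def input_signal(t):                  # hypothetical dynamic environment
            return 1.0 + 0.5 * math.sin(t)

        def propensities(x, t):
            return [2.0 * input_signal(t),    # birth (time-varying)
                    0.1 * x]                  # death

        def extrande(x0, t_end, window=1.0):
            t, x = 0.0, x0
            while t < t_end:
                # Bound on total propensity over [t, t + window]; x is constant
                # until the next accepted event, and input_signal(t) <= 1.5.
                B = 2.0 * 1.5 + 0.1 * x
                tau = random.expovariate(B)
                if tau > window:
                    t += window               # no candidate in window; re-bound
                    continue
                t += tau
                a = propensities(x, t)
                r = random.uniform(0.0, B)
                if r < a[0]:
                    x += 1                    # birth fires
                elif r < a[0] + a[1]:
                    x -= 1                    # death fires
                # else: "extra" thinning reaction; state unchanged
            return x

        print("final copy number:", extrande(x0=10, t_end=50.0))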

  14. Virtual planning for craniomaxillofacial surgery--7 years of experience.

    PubMed

    Adolphs, Nicolai; Haberl, Ernst-Johannes; Liu, Weichen; Keeve, Erwin; Menneking, Horst; Hoffmeister, Bodo

    2014-07-01

    Contemporary computer-assisted surgery systems more and more allow for virtual simulation of even complex surgical procedures with increasingly realistic predictions. Preoperative workflows are established and different commercially software solutions are available. Potential and feasibility of virtual craniomaxillofacial surgery as an additional planning tool was assessed retrospectively by comparing predictions and surgical results. Since 2006 virtual simulation has been performed in selected patient cases affected by complex craniomaxillofacial disorders (n = 8) in addition to standard surgical planning based on patient specific 3d-models. Virtual planning could be performed for all levels of the craniomaxillofacial framework within a reasonable preoperative workflow. Simulation of even complex skeletal displacements corresponded well with the real surgical result and soft tissue simulation proved to be helpful. In combination with classic 3d-models showing the underlying skeletal pathology virtual simulation improved planning and transfer of craniomaxillofacial corrections. Additional work and expenses may be justified by increased possibilities of visualisation, information, instruction and documentation in selected craniomaxillofacial procedures. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  15. Scalability Test of Multiscale Fluid-Platelet Model for Three Top Supercomputers

    PubMed Central

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-01-01

    We have tested the scalability of three supercomputers, the Tianhe-2, Stampede, and CS-Storm, with multiscale fluid-platelet simulations, in which a highly resolved and efficient numerical model for the nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S, a 680,718-particle single-platelet case; Exp-M, a 2,722,872-particle 4-platelet case; and Exp-L, a 10,891,488-particle 16-platelet case. Our implementations of the multiple time-stepping (MTS) algorithm improved on the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved simulation rates of 12.5, 25.0, and 35.5 μs/day for Exp-S and 9.09, 6.25, and 14.29 μs/day for Exp-M on the Tianhe-2, CS-Storm 16-K80, and Stampede K20, respectively. The best rate for Exp-L was 6.25 μs/day, on Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow, which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of the performance characteristics of supercomputers running advanced computational algorithms demonstrates that such simulations are feasible with currently available HPC resources. PMID:27570250

  16. SIMWEST - A simulation model for wind energy storage systems

    NASA Technical Reports Server (NTRS)

    Edsinger, R. W.; Warren, A. W.; Gordon, L. H.; Chang, G. C.

    1978-01-01

    This paper describes a comprehensive and efficient computer program for the modeling of wind energy systems with storage. The level of detail of SIMWEST (SImulation Model for Wind Energy STorage) is consistent with evaluating the economic feasibility as well as the general performance of wind energy systems with energy storage options. The software package consists of two basic programs and a library of system, environmental, and control components. The first program is a precompiler which allows the library components to be put together in building block form. The second program performs the technoeconomic system analysis with the required input/output, and the integration of system dynamics. An example of the application of the SIMWEST program to a current 100 kW wind energy storage system is given.

  17. Autonomous integrated GPS/INS navigation experiment for OMV. Phase 1: Feasibility study

    NASA Technical Reports Server (NTRS)

    Upadhyay, Triveni N.; Priovolos, George J.; Rhodehamel, Harley

    1990-01-01

    The phase 1 research focused on the experiment definition. A tightly integrated Global Positioning System/Inertial Navigation System (GPS/INS) navigation filter design was analyzed and was shown, via detailed computer simulation, to provide precise position, velocity, and attitude (alignment) data to support navigation and attitude control requirements of future NASA missions. The application of the integrated filter was also shown to provide the opportunity to calibrate inertial instrument errors which is particularly useful in reducing INS error growth during times of GPS outages. While the Orbital Maneuvering Vehicle (OMV) provides a good target platform for demonstration and for possible flight implementation to provide improved capability, a successful proof-of-concept ground demonstration can be obtained using any simulated mission scenario data, such as Space Transfer Vehicle, Shuttle-C, Space Station.
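
    At the heart of such an integrated navigation filter is the Kalman predict-update cycle. The one-dimensional toy sketch below shows only that core cycle; the experiment's tightly integrated GPS/INS filter is, of course, a full multi-state design.

        # Minimal 1-D Kalman filter cycle (illustration of the core idea only;
        # a real GPS/INS filter carries many coupled navigation states).
        def kalman_step(x, P, z, q=0.01, r=4.0):
            # Predict: propagate state (identity dynamics) and covariance.
            x_pred, P_pred = x, P + q
            # Update: blend in the GPS-like measurement z via the Kalman gain.
            K = P_pred / (P_pred + r)
            return x_pred + K * (z - x_pred), (1.0 - K) * P_pred

        x, P = 0.0, 100.0
        for z in (1.2, 0.8, 1.1, 0.9, 1.0):   # noisy position measurements
            x, P = kalman_step(x, P, z)
            print(f"estimate {x:5.2f}   variance {P:7.3f}")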

  18. Effect of rotation rate on the forces of a rotating cylinder: Simulation and control

    NASA Technical Reports Server (NTRS)

    Burns, John A.; Ou, Yuh-Roung

    1993-01-01

    In this paper we present numerical solutions to several optimal control problems for an unsteady viscous flow. The main thrust of this work is devoted to simulation and control of an unsteady flow generated by a circular cylinder undergoing rotary motion. By treating the rotation rate as a control variable, we can formulate two optimal control problems and use a central difference/pseudospectral transform method to numerically compute the optimal control rates. Several types of rotations are considered as potential controls, and we show that a proper synchronization of forcing frequency with the natural vortex shedding frequency can greatly influence the flow. The results here indicate that using moving boundary controls for such systems may provide a feasible mechanism for flow control.

  19. Using Markov state models to study self-assembly

    NASA Astrophysics Data System (ADS)

    Perkett, Matthew R.; Hagan, Michael F.

    2014-06-01

    Markov state models (MSMs) have been demonstrated to be a powerful method for computationally studying intramolecular processes such as protein folding and macromolecular conformational changes. In this article, we present a new approach to construct MSMs that is applicable to modeling a broad class of multi-molecular assembly reactions. Distinct structures formed during assembly are distinguished by their undirected graphs, which are defined by strong subunit interactions. Spatial inhomogeneities of free subunits are accounted for using a recently developed Gaussian-based signature. Simplifications to this state identification are also investigated. The feasibility of this approach is demonstrated on two different coarse-grained models for virus self-assembly. We find good agreement between the dynamics predicted by the MSMs and long, unbiased simulations, and that the MSMs can reduce overall simulation time by orders of magnitude.
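
    At its core, an MSM is a row-stochastic transition matrix estimated from state-to-state transition counts at a chosen lag time. A minimal sketch of that estimation step, assuming a discrete state trajectory has already been produced by the graph-based structure assignment, follows.

        # Minimal MSM estimation from a discrete state trajectory (the state
        # assignment itself, e.g. by graph comparison, is assumed done).
        import numpy as np

        def estimate_msm(dtraj, n_states, lag=1):
            counts = np.zeros((n_states, n_states))
            for i, j in zip(dtraj[:-lag], dtraj[lag:]):
                counts[i, j] += 1.0
            rows = counts.sum(axis=1, keepdims=True)
            return counts / np.where(rows > 0, rows, 1.0)  # row-normalize

        dtraj = np.random.default_rng(4).integers(0, 3, size=1000)  # toy labels
        T = estimate_msm(dtraj, n_states=3, lag=5)
        print(np.round(T, 3))
        # The stationary distribution is the left eigenvector of T with eigenvalue 1.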

  20. Toward superconducting critical current by design

    DOE PAGES

    Sadovskyy, Ivan A.; Jia, Ying; Leroux, Maxime; ...

    2016-03-31

    The interaction of vortex matter with defects in applied superconductors directly determines their current carrying capacity. Defects range from chemically grown nanostructures and crystalline imperfections to the layered structure of the material itself. The vortex-defect interactions are non-additive in general, leading to complex dynamic behavior that has proven difficult to capture in analytical models. With recent rapid progress in computational power, a new paradigm has emerged that aims at simulation-assisted design of defect structures with predictable ‘critical-current-by-design’: analogous to the materials genome concept of predicting stable materials structures of interest. We demonstrate the feasibility of this paradigm by combining large-scale time-dependent Ginzburg-Landau numerical simulations with experiments on a commercial high temperature superconductor (HTS) containing well-controlled correlated defects.

  1. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Virtually all previously suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given, and their performance, in both computation and error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code and for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
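
    The report's own significance method is not reproduced here, but a standard way to attach confidence to k observed errors in N simulated decodings is the exact binomial (Clopper-Pearson) interval, sketched below.

        # Exact (Clopper-Pearson) confidence interval for an error probability
        # estimated from very few observed errors; generic method, not the
        # report's specific procedure.
        from scipy.stats import beta

        def clopper_pearson(k, N, alpha=0.05):
            lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, N - k + 1)
            upper = 1.0 if k == N else beta.ppf(1 - alpha / 2, k + 1, N - k)
            return lower, upper

        k, N = 3, 100000                       # e.g., 3 errors in 1e5 frames
        lo, hi = clopper_pearson(k, N)
        print(f"point estimate {k / N:.1e}, 95% CI [{lo:.1e}, {hi:.1e}]")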

  2. Active Control of Fan Noise: Feasibility Study. Volume 5; Numerical Computation of Acoustic Mode Reflection Coefficients for an Unflanged Cylindrical Duct

    NASA Technical Reports Server (NTRS)

    Kraft, R. E.

    1996-01-01

    A computational method to predict modal reflection coefficients in cylindrical ducts has been developed, based on the work of Homicz, Lordi, and Rehm, which uses the Wiener-Hopf method to account for the boundary conditions at the termination of a thin cylindrical pipe. The purpose of this study is to develop a computational routine to predict the reflection coefficients of higher-order acoustic modes impinging on the unflanged termination of a cylindrical duct. This effort was conducted under Task Order 5 of the NASA Lewis LET Program, Active Noise Control of Aircraft Engines: Feasibility Study, and will be used as part of the development of an integrated simulation of source noise, acoustic propagation, ANC actuator coupling, and the control system algorithm. The reflection coefficient prediction will be incorporated into an existing cylindrical duct modal analysis to account for the reflection of modes from the duct termination. This will provide a more accurate and rapid computational design tool for evaluating the effect of reflected waves on active noise control systems mounted in the duct, as well as a tool for the design of acoustic treatment in inlet ducts. As an active noise control system design tool, the method can be used as a preliminary to more accurate but more numerically intensive acoustic propagation models such as finite element methods. The resulting computer program has been shown to give reasonable results, some examples of which are presented. Reliable data for comparison are scarce, so complete checkout is difficult, and further checkout is needed over a wider range of system parameters. In future efforts the method will be adapted as a subroutine to the GEAE segmented cylindrical duct modal analysis program.

  3. Patient-specific CT dosimetry calculation: a feasibility study.

    PubMed

    Fearon, Thomas; Xie, Huchen; Cheng, Jason Y; Ning, Holly; Zhuge, Ying; Miller, Robert W

    2011-11-15

    Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of the Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms and on calculations based on mathematical representations of "standard man". Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient-specific CT dosimetry. A radiation treatment planning system was modified to calculate patient-specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose-volumes (after image segmentation), for a GE Light Speed Ultra Plus CT scanner. The RTPS calculation algorithm is based on a semi-empirical, measured correction-based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantoms) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLD) measurements in pediatric anthropomorphic phantoms were utilized to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms (digital representations). Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate organ dose by calculating a dose distribution point-by-point for a designated volume. Electron Gamma Shower (EGSnrc) codes for radiation transport calculations, developed by the National Research Council of Canada (NRCC), were utilized to perform the Monte Carlo (MC) simulations. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans. With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%-20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as patient-specific dose estimation. It also provides the basis for more elaborate reporting of dosimetric results, such as patient-specific organ dose-volumes after image segmentation.

  4. Optical clearing of vaginal tissues, ex vivo, for minimally invasive laser treatment of female stress urinary incontinence

    NASA Astrophysics Data System (ADS)

    Chang, Chun-Hung; Myers, Erinn M.; Kennelly, Michael J.; Fried, Nathaniel M.

    2017-01-01

    Near-infrared laser energy in conjunction with applied tissue cooling is being investigated for thermal remodeling of the endopelvic fascia during minimally invasive treatment of female stress urinary incontinence. Previous computer simulations of light transport, heat transfer, and tissue thermal damage have shown that a transvaginal approach is more feasible than a transurethral approach. However, results were suboptimal, and some undesirable thermal insult to the vaginal wall was still predicted. This study uses experiments and computer simulations to explore whether application of an optical clearing agent (OCA) can further improve optical penetration depth and completely preserve the vaginal wall during subsurface treatment of the endopelvic fascia. Several different mixtures of OCAs were tested, and 100% glycerol was found to be the optimal agent. Optical transmission studies, optical coherence tomography, reflection spectroscopy, and computer simulations [including Monte Carlo (MC) light transport, heat transfer, and an Arrhenius integral model of thermal damage] using glycerol were performed. The OCA produced a 61% increase in optical transmission through porcine vaginal wall at 37°C after 30 min. The MC model showed improved energy deposition in endopelvic fascia using glycerol. Without OCA, 62%, 37%, and 1% of energy was deposited in the vaginal wall, endopelvic fascia, and urethral wall, respectively, compared with 50%, 49%, and 1% using OCA. Use of the OCA also resulted in a 0.5-mm increase in treatment depth, allowing potential thermal tissue remodeling at a depth of 3 mm with complete preservation of the vaginal wall.

  5. Antenna analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern shaping. The interesting thing about D-C synthesis is that the side lobes have the same amplitude. Five-element arrays were used. Again, 41 pattern samples were used for the input. Nine actual D-C patterns ranging from -10 dB to -30 dB side lobe levels were used to train the network. A comparison between simulated and actual D-C techniques for a pattern with -22 dB side lobe level is shown. The goal for this research was to evaluate the performance of neural network computing with antennas. Future applications will employ the backpropagation training algorithm to drastically reduce the computational complexity involved in performing EM compensation for surface errors in large space reflector antennas.
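
    The experiment is compact to re-create with modern tools: a small feed-forward network trained by backpropagation to map 41 pattern samples to 40 excitation values. The scikit-learn sketch below shows the shape of that setup with random placeholder data in place of the W-L training pairs; it is not the original implementation.

        # Shape of the backpropagation experiment: 41 pattern samples in,
        # 40 excitations out. Random placeholder data stands in for the
        # Woodward-Lawson training pairs.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(5)
        X_train = rng.random((27, 41))     # 27 desired-pattern sample vectors
        y_train = rng.random((27, 40))     # 20 real + 20 imaginary excitations

        net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=5000,
                           random_state=0)
        net.fit(X_train, y_train)          # backpropagation training

        desired_pattern = rng.random((1, 41))
        print(net.predict(desired_pattern).shape)   # (1, 40) excitations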

  6. Antenna analysis using neural networks

    NASA Astrophysics Data System (ADS)

    Smith, William T.

    1992-09-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs were the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20-element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary).

  7. Simulation of Electric Potentials and Ion Motion in Planar Electrode Structures for Lossless Ion Manipulations (SLIM)

    DOE PAGES

    Garimella, Sandilya V. B; Ibrahim, Yehia M.; Webb, Ian K.; ...

    2014-09-26

    Here we report a conceptual study and computational evaluation of novel planar electrode Structures for Lossless Ion Manipulations (SLIM). Planar electrode SLIM devices were designed that allow for flexible ion confinement, transport and storage using a combination of RF and DC fields. Effective potentials can be generated that provide near ideal regions for confining ions in the presence of a gas. Ion trajectory simulations using SIMION 8.1 demonstrated the capability for lossless ion motion in these devices over a wide m/z range and a range of electric fields at low pressures (e.g. a few torr). More complex ion manipulations, e.g. turning ions by 90° and dynamically switching selected ion species into orthogonal channels, are also feasible. Lastly, the performance of SLIM devices at ~4 torr pressure for performing ion mobility based separations (IMS) is computationally evaluated and compared with initial experimental results; both agree closely with the experimental and theoretical IMS performance of a conventional drift tube design.

  8. Lighting Simulation and Design Program (LSDP)

    NASA Astrophysics Data System (ADS)

    Smith, D. A.

    This computer program simulates a user-defined lighting configuration. It has been developed as a tool to aid in the design of exterior lighting systems. Although this program is used primarily for perimeter security lighting design, it has potential use for any application where the light can be approximated by a point source. A data base of luminaire photometric information is maintained for use with this program. The user defines the surface area to be illuminated with a rectangular grid and specifies luminaire positions. Illumination values are calculated for regularly spaced points in that area and isolux contour plots are generated. The numerical and graphical output for a particular site model are then available for analysis. The amount of time spent on point-to-point illumination computation with this program is much less than that required for tedious hand calculations. The ease with which various parameters can be interactively modified with the program also reduces the time and labor expended. Consequently, the feasibility of design ideas can be examined, modified, and retested more thoroughly, and overall design costs can be substantially lessened by using this program as an adjunct to the design process.
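
    The point-to-point illumination computation the program automates reduces to the inverse-square cosine law for each luminaire. A minimal sketch follows, with hypothetical luminaire positions and intensities standing in for LSDP's photometric database:

        import numpy as np

        # Horizontal illuminance on a rectangular grid from point-source
        # luminaires via the inverse-square cosine law, E = I*cos(theta)/d^2.
        # Positions, mounting heights, and intensities are assumed values.
        xs, ys = np.meshgrid(np.linspace(0, 30, 61), np.linspace(0, 20, 41))
        luminaires = [(5.0, 10.0, 9.0, 20000.0),   # (x, y, height m, intensity cd)
                      (25.0, 10.0, 9.0, 20000.0)]

        E = np.zeros_like(xs)
        for lx, ly, h, I in luminaires:
            d2 = (xs - lx) ** 2 + (ys - ly) ** 2 + h ** 2   # squared distance
            E += I * (h / np.sqrt(d2)) / d2                 # lux at each grid point
        # E can now be contoured to produce the isolux plots mentioned above.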

  9. Feasibility of new simulation technology to train novice drivers

    DOT National Transportation Integrated Search

    1996-12-01

    This project examined the feasibility of using existing simulation and other electronic device technology with the potential for the safety training of novice drivers. Project activities included: a literature review; site visits and telephone inquir...

  10. Application of visualization and simulation program to improve work zone safety and mobility.

    DOT National Transportation Integrated Search

    2010-01-01

    A previous study sponsored by the Smart Work Zone Deployment Initiative, Feasibility of Visualization and Simulation Applications to Improve Work Zone Safety and Mobility, demonstrated the feasibility of combining readily available, inexpensive...

  11. Application of visualization and simulation program to improve work zone safety and mobility.

    DOT National Transportation Integrated Search

    2010-01-01

    "A previous study sponsored by the Smart Work Zone Deployment Initiative, Feasibility of Visualization and Simulation Applications to Improve Work Zone Safety and Mobility, demonstrated the feasibility of combining readily available, inexpensiv...

  12. Neurolinguistically constrained simulation of sentence comprehension: integrating artificial intelligence and brain theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gigley, H.M.

    1982-01-01

    An artificial intelligence approach to the simulation of neurolinguistically constrained processes in sentence comprehension is developed using control strategies for simulation of cooperative computation in associative networks. The desirability of this control strategy in contrast to ATN and production system strategies is explained. A first pass implementation of HOPE, an artificial intelligence simulation model of sentence comprehension, constrained by studies of aphasic performance, psycholinguistics, neurolinguistics, and linguistic theory is described. Claims that the model could serve as a basis for sentence production simulation and for a model of language acquisition as associative learning are discussed. HOPE is a model that performs in a normal state and includes a lesion simulation facility. HOPE is also a research tool. Its modifiability and use as a tool to investigate hypothesized causes of degradation in comprehension performance by aphasic patients are described. Issues of using behavioral constraints in modelling and obtaining appropriate data for simulated process modelling are discussed. Finally, problems of validation of the simulation results are raised; and issues of how to interpret clinical results to define the evolution of the model are discussed. Conclusions with respect to the feasibility of artificial intelligence simulation process modelling are discussed based on the current state of research.

  13. A Feasibility Study of Synthesizing Substructures Modeled with Computational Neural Networks

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Housner, Jerrold M.; Szewczyk, Z. Peter

    1998-01-01

    This paper investigates the feasibility of synthesizing substructures modeled with computational neural networks. Substructures are modeled individually with computational neural networks and the response of the assembled structure is predicted by synthesizing the neural networks. A superposition approach is applied to synthesize models for statically determinate substructures while an interface displacement collocation approach is used to synthesize statically indeterminate substructure models. Beam and plate substructures along with components of a complicated Next Generation Space Telescope (NGST) model are used in this feasibility study. In this paper, the limitations and difficulties of synthesizing substructures modeled with neural networks are also discussed.

  14. Training on N.O.T.E.S.: from history we learn.

    PubMed

    Al-Akash, M; Boyle, E; Tanner, W A

    2009-06-01

    Surgical errors occurring early in the learning curve of laparoscopic surgery providers delayed the uptake and progress of minimally invasive surgery (MIS) for years. This taught us a valuable lesson: innovations in surgical techniques should not be rapidly implemented until all aspects, including applicability, feasibility and safety, have been fully tested. In 2005, the Natural Orifice Surgery Consortium for Assessment and Research (NOSCAR) published a white paper highlighting the barriers to NOTES development and identifying key elements for its progress. One of these elements is the training of future providers. Proficiency-based, virtual reality simulation will offer a feasible alternative to animal testing once the safety and efficacy parameters of NOTES are established. Recent advances in imaging, including computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound (US) scanning, can offer improved image registration and real-time tracking. Combining these advanced imaging technologies with the newly designed virtual reality simulators will result in a fully comprehensive simulation curriculum which will offer a unique facility for future NOTES providers to train anytime, anywhere, and as much as they need to in order to achieve the pre-set proficiency levels for a variety of NOTES procedures. Furthermore, they will incorporate patient-specific anatomical models obtained from patient imaging and uploaded onto the simulator to ensure face validity and reliability. Training in a clean, safe environment with proximate feedback and performance analysis will help accelerate the learning curve and therefore improve patients' safety and outcomes in order to maximize the benefits of innovative access procedures such as NOTES.

  15. Development Of A Data Assimilation Capability For RAPID

    NASA Astrophysics Data System (ADS)

    Emery, C. M.; David, C. H.; Turmon, M.; Hobbs, J.; Allen, G. H.; Famiglietti, J. S.

    2017-12-01

    The global decline of in situ observations associated with the increasing ability to monitor surface water from space motivates the creation of data assimilation algorithms that merge computer models and space-based observations to produce consistent estimates of terrestrial hydrology that fill the spatiotemporal gaps in observations. RAPID is a routing model based on the Muskingum method that is capable of estimating river streamflow over large scales with a relatively short computing time. This model only requires limited inputs: a reach-based river network, and lateral surface and subsurface flow into the rivers. The relatively simple model physics imply that RAPID simulations could be significantly improved by including a data assimilation capability. Here we present the early development of such a data assimilation approach in RAPID. Given the linear and matrix-based structure of the model, we chose to apply a direct Kalman filter, hence allowing for the preservation of high computational speed. We correct the simulated streamflows by assimilating streamflow observations, and our early results demonstrate the feasibility of the approach. Additionally, the limited availability of in situ gauges at continental scales motivates the application of our new data assimilation scheme to altimetry measurements from existing (e.g. EnviSat, Jason 2) and upcoming satellite missions (e.g. SWOT), and ultimately applying the scheme globally.
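
    A minimal sketch of one cycle of such a direct Kalman filter is shown below for a generic linear routing model; the operator M, error covariances, and gauge placement are hypothetical stand-ins for RAPID's Muskingum formulation.

        import numpy as np

        # One forecast/analysis cycle of a direct Kalman filter for a
        # linear routing model x_{k+1} = M x_k (all matrices are toy values).
        def kalman_step(x, P, M, Q, z, H, R):
            x, P = M @ x, M @ P @ M.T + Q            # forecast streamflow
            S = H @ P @ H.T + R                      # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            x = x + K @ (z - H @ x)                  # assimilate observations z
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        n = 4                                        # four river reaches (toy)
        x, P = np.zeros(n), np.eye(n)
        M, Q = 0.9 * np.eye(n), 0.01 * np.eye(n)
        H = np.array([[1.0, 0.0, 0.0, 0.0]])         # gauge on reach 1 only
        x, P = kalman_step(x, P, M, Q, np.array([2.5]), H, 0.1 * np.eye(1))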

  16. Virtual patient simulator for distributed collaborative medical education.

    PubMed

    Caudell, Thomas P; Summers, Kenneth L; Holten, Jim; Hakamata, Takeshi; Mowafi, Moad; Jacobs, Joshua; Lozanoff, Beth K; Lozanoff, Scott; Wilks, David; Keep, Marcus F; Saiki, Stanley; Alverson, Dale

    2003-01-01

    Project TOUCH (Telehealth Outreach for Unified Community Health; http://hsc.unm.edu/touch) investigates the feasibility of using advanced technologies to enhance education in an innovative problem-based learning format currently being used in medical school curricula, applying specific clinical case models, and deploying to remote sites/workstations. The University of New Mexico's School of Medicine and the John A. Burns School of Medicine at the University of Hawai'i face similar health care challenges in providing and delivering services and training to remote and rural areas. Recognizing that health care needs are local and require local solutions, both states are committed to improving health care delivery to their unique populations by sharing information and experiences through emerging telehealth technologies by using high-performance computing and communications resources. The purpose of this study is to describe the deployment of a problem-based learning case distributed over the National Computational Science Alliance's Access Grid. Emphasis is placed on the underlying technical components of the TOUCH project, including the virtual reality development tool Flatland, the artificial intelligence-based simulation engine, the Access Grid, high-performance computing platforms, and the software that connects them all. In addition, educational and technical challenges for Project TOUCH are identified. Copyright 2003 Wiley-Liss, Inc.

  17. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for petascale platforms and beyond.

    PubMed

    Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William

    2013-04-30

    Various strategies to implement efficiently quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. Copyright © 2013 Wiley Periodicals, Inc.

  18. 3D noninvasive ultrasound Joule heat tomography based on acousto-electric effect using unipolar pulses: a simulation study

    PubMed Central

    Yang, Renhuan; Li, Xu; Song, Aiguo; He, Bin; Yan, Ruqiang

    2012-01-01

    Electrical properties of biological tissues are highly sensitive to their physiological and pathological status. Thus it is of importance to image electrical properties of biological tissues. However, the spatial resolution of conventional electrical impedance tomography (EIT) is generally poor. Recently, hybrid imaging modalities combining electric conductivity contrast and ultrasonic resolution based on the acousto-electric effect have attracted considerable attention. In this study, we propose a novel three-dimensional (3D) noninvasive ultrasound Joule heat tomography (UJHT) approach based on the acousto-electric effect using unipolar ultrasound pulses. As the Joule heat density distribution is highly dependent on the conductivity distribution, an accurate and high resolution mapping of the Joule heat density distribution is expected to give important information that is closely related to the conductivity contrast. The advantages of the proposed ultrasound Joule heat tomography using unipolar pulses include its simple inverse solution, better performance than UJHT using common bipolar pulses, and its independence from any a priori knowledge of the conductivity distribution of the imaging object. Computer simulation results show that using the proposed method, it is feasible to perform high spatial resolution Joule heat imaging in an inhomogeneous conductive medium. Application of this technique to tumor scanning is also investigated by a series of computer simulations. PMID:23123757

  19. Unveiling Stability Criteria of DNA-Carbon Nanotubes Constructs by Scanning Tunneling Microscopy and Computational Modeling

    DOE PAGES

    Kilina, Svetlana; Yarotski, Dzmitry A.; Talin, A. Alec; ...

    2011-01-01

    We present a combined approach that relies on computational simulations and scanning tunneling microscopy (STM) measurements to reveal morphological properties and stability criteria of carbon nanotube-DNA (CNT-DNA) constructs. Application of STM allows direct observation of very stable CNT-DNA hybrid structures with the well-defined DNA wrapping angle of 63.4° and a coiling period of 3.3 nm. Using force field simulations, we determine how the DNA-CNT binding energy depends on the sequence and binding geometry of a single strand DNA. This dependence allows us to quantitatively characterize the stability of a hybrid structure with an optimal π-stacking between DNA nucleotides and the tube surface and better interpret STM data. Our simulations clearly demonstrate the existence of a very stable DNA binding geometry for (6,5) CNT as evidenced by the presence of a well-defined minimum in the binding energy as a function of an angle between DNA strand and the nanotube chiral vector. This novel approach demonstrates the feasibility of CNT-DNA geometry studies with subnanometer resolution and paves the way towards complete characterization of the structural and electronic properties of drug-delivering systems based on DNA-CNT hybrids as a function of DNA sequence and a nanotube chirality.

  20. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, computational methods for the analysis of ODE models that describe hundreds or thousands of biochemical species and reactions have so far been missing. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurements and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
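
    The key property of the adjoint approach, a full gradient at the cost of roughly one extra (backward) simulation regardless of the number of parameters, can be seen in a toy discrete-adjoint example. The sketch below applies it to explicit Euler on dx/dt = -theta*x with a least-squares cost; all specifics are illustrative and not the paper's formulation.

        import numpy as np

        # Toy discrete-adjoint gradient for dx/dt = -theta*x, explicit Euler,
        # cost J = sum_k (x_k - y_k)^2.  One backward sweep yields dJ/dtheta;
        # for many parameters the same sweep would yield all gradients at once.
        h, n, theta = 0.01, 200, 1.3

        def forward(th):
            x = np.empty(n + 1)
            x[0] = 1.0
            for k in range(n):
                x[k + 1] = x[k] * (1.0 - h * th)     # explicit Euler step
            return x

        y = forward(1.0)                             # synthetic data, theta = 1
        x = forward(theta)

        lam, grad = 2.0 * (x[n] - y[n]), 0.0         # adjoint at final time
        for k in range(n - 1, -1, -1):
            grad += lam * (-h * x[k])                # d(step)/d(theta) term
            lam = lam * (1.0 - h * theta) + 2.0 * (x[k] - y[k])

        eps = 1e-6                                   # finite-difference check
        fd = (np.sum((forward(theta + eps) - y) ** 2) -
              np.sum((forward(theta - eps) - y) ** 2)) / (2 * eps)
        assert abs(grad - fd) < 1e-4 * max(1.0, abs(fd))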

  1. Characterization of Unsteady Flow Structures Near Leading-Edge Slat. Part 2: 2D Computations

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi; Choudhari, Meelan M.; Jenkins, Luther N.

    2004-01-01

    In our previous computational studies of a generic high-lift configuration, quasi-laminar (as opposed to fully turbulent) treatment of the slat cove region proved to be an effective approach for capturing the unsteady dynamics of the cove flow field. Combined with acoustic propagation via the Ffowcs Williams and Hawkings formulation, the quasi-laminar simulations captured some important features of the slat cove noise measured with microphone array techniques. However, a direct assessment of the computed cove flow field was not feasible due to the unavailability of off-surface flow measurements. To remedy this shortcoming, we have undertaken a combined experimental and computational study aimed at characterizing the flow structures and fluid mechanical processes within the slat cove region. Part 1 of this paper outlines the experimental aspects of this investigation focused on the 30P30N high-lift configuration; the present paper describes the accompanying computational results including a comparison between computation and experiment at various angles of attack. Even though predictions of the time-averaged flow field agree well with the measured data, the study indicates the need for further refinement of the zonal turbulence approach in order to capture the full dynamics of the cove's fluctuating flow field.

  2. A pilot feasibility study of virtual patient simulation to enhance social work students' brief mental health assessment skills.

    PubMed

    Washburn, Micki; Bordnick, Patrick; Rizzo, Albert Skip

    2016-10-01

    This study presents preliminary feasibility and acceptability data on the use of virtual patient (VP) simulations to develop brief assessment skills within an interdisciplinary care setting. Results support the acceptability of technology-enhanced simulations and offer preliminary evidence for an association between engagement in VP practice simulations and improvements in diagnostic accuracy and clinical interviewing skills. Recommendations and next steps for research on technology-enhanced simulations within social work are discussed.

  3. Experimental investigation of the Multipoint Ultrasonic Flowmeter

    NASA Astrophysics Data System (ADS)

    Jakub, Filipský

    2018-06-01

    The Multipoint Ultrasonic Flowmeter is a vector tomographic device capable of reconstructing all three components of the velocity field based solely on boundary ultrasonic measurements. Computer simulations have shown the feasibility of such a device and have been published previously. This paper describes an experimental investigation of the achievable accuracy of such a method. Doubled acoustic tripoles, used to obtain information on the solenoidal part of the vector field, exhibit extremely small differences between the times of flight (TOFs) at individual sensors and are therefore sensitive to parasitic effects in TOF measurement. Sampling at 40 MHz combined with a correlation method is used to measure the TOF.
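
    A minimal sketch of the correlation-based TOF measurement mentioned above, with a synthetic tone burst, an assumed 137-sample true delay, and additive noise:

        import numpy as np

        # TOF estimation by cross-correlation at a 40 MHz sampling rate.
        # Waveform, delay, and noise level are synthetic assumptions.
        fs = 40e6
        n = 2000
        burst = np.sin(2 * np.pi * 1e6 * np.arange(200) / fs) * np.hanning(200)

        tx = np.zeros(n)
        tx[:200] = burst                        # transmitted reference
        rx = np.zeros(n)
        rx[137:337] = burst                     # received copy, delayed 137 samples
        rx += 0.05 * np.random.default_rng(1).normal(size=n)

        corr = np.correlate(rx, tx, mode="full")
        delay = np.argmax(corr) - (n - 1)       # lag in samples (137 here)
        tof = delay / fs                        # 137 / 40e6 = 3.425 microseconds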

  4. Doppler Navigation System with a Non-Stabilized Antenna as a Sea-Surface Wind Sensor.

    PubMed

    Nekrasov, Alexey; Khachaturian, Alena; Veremyev, Vladimir; Bogachev, Mikhail

    2017-06-09

    We propose a concept of the utilization of an aircraft Doppler Navigation System (DNS) as a sea-surface wind sensor complementary to its normal functionality. The DNS with an antenna, which is non-stabilized physically to the local horizontal with x-configured beams, is considered. We consider the wind measurements by the DNS configured in the multi-beam scatterometer mode for a rectilinear flight scenario. The system feasibility and the efficiency of the proposed wind algorithm retrieval are supported by computer simulations. Finally, the associated limitations of the proposed approach are considered.

  5. Doppler Navigation System with a Non-Stabilized Antenna as a Sea-Surface Wind Sensor

    PubMed Central

    Nekrasov, Alexey; Khachaturian, Alena; Veremyev, Vladimir; Bogachev, Mikhail

    2017-01-01

    We propose a concept of the utilization of an aircraft Doppler Navigation System (DNS) as a sea-surface wind sensor complementary to its normal functionality. The DNS with an antenna, which is non-stabilized physically to the local horizontal with x-configured beams, is considered. We consider the wind measurements by the DNS configured in the multi-beam scatterometer mode for a rectilinear flight scenario. The system feasibility and the efficiency of the proposed wind algorithm retrieval are supported by computer simulations. Finally, the associated limitations of the proposed approach are considered. PMID:28598374

  6. Heterogeneous path ensembles for conformational transitions in semi-atomistic models of adenylate kinase

    PubMed Central

    Bhatt, Divesh; Zuckerman, Daniel M.

    2010-01-01

    We performed “weighted ensemble” path-sampling simulations of adenylate kinase, using several semi-atomistic protein models. The models have an all-atom backbone with various levels of residue interactions. The primary result is that full statistically rigorous path sampling required only a few weeks of single-processor computing time with these models, indicating the addition of further chemical detail should be readily feasible. Our semi-atomistic path ensembles are consistent with previous biophysical findings: the presence of two distinct pathways, identification of intermediates, and symmetry of forward and reverse pathways. PMID:21660120

  7. Automated system for integration and display of physiological response data

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The system analysis approach was applied in a study of physiological systems in both 1-g and weightlessness, for short and long term experiments. A whole-body algorithm, developed as the first step in the construction of a total body simulation system, is described, and an advanced biomedical computer system concept including interactive display/command consoles is discussed. The documentation of the design specifications, design and development studies, and user's instructions (which include program listings) for the delivered end items; the reports on the results of many research and feasibility studies; and many subcontract reports are cited in the bibliography.

  8. MITHRA 1.0: A full-wave simulation tool for free electron lasers

    NASA Astrophysics Data System (ADS)

    Fallahi, Arya; Yahaghi, Alireza; Kärtner, Franz X.

    2018-07-01

    Free Electron Lasers (FELs) are a solution for providing intense, coherent and bright radiation in the hard X-ray regime. Due to the low wall-plug efficiency of FEL facilities, it is crucial to develop complete and accurate simulation tools for better optimization of the FEL interaction. The highly sophisticated dynamics involved in the FEL process has been the main obstacle hindering the development of general simulation tools for this problem. We present a numerical algorithm based on the finite difference time domain/particle in cell (FDTD/PIC) method in a Lorentz-boosted coordinate system which is able to carry out a full-wave simulation of the FEL process. The developed software offers a suitable tool for the analysis of FEL interactions without any of the usual approximations. A coordinate transformation to the bunch rest frame brings the very different length scales of bunch size, optical wavelength and undulator period to values of the same order. Consequently, FDTD/PIC simulations in conjunction with efficient parallelization techniques make full-wave simulation feasible with the available computational resources. Several examples of free electron lasers are analyzed using the developed software; the results are benchmarked against standard FEL codes and discussed in detail.
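
    The boosted-frame argument can be checked with a back-of-envelope calculation: the undulator period Lorentz-contracts by 1/gamma while the radiated wavelength Doppler-stretches by roughly 2*gamma, bringing the two scales together. The numbers below are generic FEL values assumed for illustration, not parameters from MITHRA.

        # Length scales in the bunch rest frame; all inputs are assumed.
        gamma = 1000.0                 # beam Lorentz factor (assumed)
        K = 1.0                        # undulator parameter (assumed)
        lambda_u = 3e-2                # 3 cm undulator period (assumed)

        lambda_r = lambda_u / (2 * gamma**2) * (1 + K**2 / 2)  # lab-frame wavelength
        print(lambda_u / gamma)        # boosted undulator period: 3.0e-05 m
        print(2 * gamma * lambda_r)    # boosted radiation wavelength: 4.5e-05 m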

  9. A multifrequency virtual spectrometer for complex bio-organic systems: vibronic and environmental effects on the UV/Vis spectrum of chlorophyll a.

    PubMed

    Barone, Vincenzo; Biczysko, Malgorzata; Borkowska-Panek, Monika; Bloino, Julien

    2014-10-20

    The subtle interplay of several different effects means that the interpretation and analysis of experimental spectra in terms of structural and dynamic characteristics is a challenging task. In this context, theoretical studies can be helpful, and as such, computational spectroscopy is rapidly evolving from a highly specialized research field toward a versatile and widespread tool. However, in the case of electronic spectra (e.g. UV/Vis, circular dichroism, photoelectron, and X-ray spectra), the most commonly used methods still rely on the computation of vertical excitation energies, which are further convoluted to simulate line shapes. Such treatment completely neglects the influence of nuclear motions, despite the well-recognized notion that a proper account of vibronic effects is often mandatory to correctly interpret experimental findings. Development and validation of improved models rooted in density functional theory (DFT) and its time-dependent extension (TD-DFT) is of course instrumental for the optimal balance between reliability and favorable scaling with the number of electrons. However, the implementation of easy-to-use and effective procedures to simulate vibrationally resolved electronic spectra, and their availability to a wide community of users, is at least equally important for reliable simulations of spectral line shapes for compounds of biological and technological interest. Here, such an approach has been applied to the study of the UV/Vis spectra of chlorophyll a. The results show that properly tailored approaches are feasible for state-of-the-art computational spectroscopy studies, and allow, with affordable computational resources, vibrational and environmental effects on the spectral line shapes to be taken into account for large systems. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
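
    The conventional procedure the text contrasts with vibronic simulations, broadening vertical excitations into a line shape, fits in a few lines. The stick energies, oscillator strengths, and Gaussian width below are hypothetical:

        import numpy as np

        # Stick transitions (vertical excitations) convoluted with Gaussians.
        energies = np.array([1.87, 2.30, 3.10])        # eV, assumed sticks
        strengths = np.array([0.25, 0.05, 0.60])
        sigma = 0.08                                   # eV broadening (assumed)

        grid = np.linspace(1.5, 3.5, 1000)
        spectrum = sum(f * np.exp(-(grid - e) ** 2 / (2 * sigma ** 2))
                       for e, f in zip(energies, strengths))
        # A vibronic treatment instead sums Franck-Condon weighted transitions,
        # shifting peak positions and reshaping the bands.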

  10. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  11. Rapid simulation of spatial epidemics: a spectral method.

    PubMed

    Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J

    2015-04-07

    Spatial structure and hence the spatial position of host populations plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such this provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast-Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane; the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle-infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel. Copyright © 2015 Elsevier Ltd. All rights reserved.
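
    The heart of the FSR method, one FFT-based convolution replacing the pairwise rate update, can be sketched as follows; grid size, kernel scale, transmission rate, and the periodic boundary treatment are illustrative assumptions:

        import numpy as np

        # Force of infection = convolution of the isotropic transmission
        # kernel with the infection "image", evaluated with FFTs.
        n = 256
        rng = np.random.default_rng(2)
        infectious = (rng.random((n, n)) < 0.001).astype(float)

        d = np.arange(n)
        d = np.minimum(d, n - d)                           # wrapped distances
        dist = np.sqrt(d[:, None] ** 2 + d[None, :] ** 2)
        kernel = np.exp(-dist / 5.0)                       # transmission kernel

        # One FFT convolution replaces the quadratic pairwise rate update.
        force = np.real(np.fft.ifft2(np.fft.fft2(infectious) * np.fft.fft2(kernel)))
        rates = 0.1 * force                                # per-susceptible rates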

  12. Numerical Simulations of Hypersonic Boundary Layer Transition

    NASA Astrophysics Data System (ADS)

    Bartkowicz, Matthew David

    Numerical schemes for supersonic flows tend to use large amounts of artificial viscosity for stability. This tends to damp out the small-scale structures in the flow. Recently, some low-dissipation methods have been proposed which selectively eliminate the artificial viscosity in regions that do not require it. This work builds upon the low-dissipation method of Subbareddy and Candler, which uses the flux vector splitting of Steger and Warming but identifies the dissipation portion so that it can be eliminated. Computing accurate fluxes typically relies on large grid stencils or coupled linear systems that become computationally expensive to solve. Unstructured grids allow CFD solutions to be obtained on complex geometries; unfortunately, it then becomes difficult to create a large stencil or the coupled linear system. Accurate solutions require grids that quickly become too large to be feasible. In this thesis a method is proposed to obtain more accurate solutions using relatively local data, making it suitable for unstructured grids composed of hexahedral elements. Fluxes are reconstructed using local gradients to extend the range of data used. The method is then validated on several test problems. Simulations of boundary layer transition are then performed. An elliptic cone at Mach 8 is simulated based on an experiment at the Princeton Gasdynamics Laboratory. A simulated acoustic noise boundary condition is imposed to model the noisy conditions of the wind tunnel, and the transitioning boundary layer is observed. A computation of an isolated roughness element is done based on an experiment in Purdue's Mach 6 quiet wind tunnel. The mechanism for transition is identified as an instability in the upstream separation region and a comparison is made to experimental data. In the CFD a fully turbulent boundary layer is observed downstream.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMahon, S.

    Radiation therapy for the treatment of cancer has been established as a highly precise and effective way to eradicate a localized region of diseased tissue. To achieve further significant gains in the therapeutic ratio, we need to move towards biologically optimized treatment planning. To achieve this goal, we need to understand how the radiation-type dependent patterns of induced energy depositions within the cell (physics) connect via molecular, cellular and tissue reactions to treatment outcomes such as tumor control and undesirable effects on normal tissue. Several computational biology approaches have been developed connecting physics to biology. Monte Carlo simulations are the most accurate method to calculate physical dose distributions at the nanometer scale; however, simulations at the DNA scale are slow and repair processes are generally not simulated. Alternative models that rely on the random formation of individual DNA lesions within one or two turns of the DNA have been shown to reproduce the clusters of DNA lesions, including single strand breaks (SSBs) and double strand breaks (DSBs), without the need for detailed track structure simulations. Efficient computational simulations of initial DNA damage induction facilitate computational modeling of DNA repair and other molecular and cellular processes. Mechanistic, multiscale models provide a useful conceptual framework to test biological hypotheses and help connect fundamental information about track structure and dosimetry at the sub-cellular level to dose-response effects on larger scales. In this symposium we will learn about the current state of the art of computational approaches estimating radiation damage at the cellular and sub-cellular scale. How can understanding the physics interactions at the DNA level be used to predict biological outcome? We will discuss if and how such calculations are relevant to advance our understanding of radiation damage and its repair, or if the underlying biological processes are too complex for a mechanistic approach. Can computer simulations be used to guide future biological research? We will debate the feasibility of explaining biology from a physicist's perspective. Learning Objectives: Understand the potential applications and limitations of computational methods for dose-response modeling at the molecular, cellular and tissue levels. Learn about mechanisms of action underlying the induction, repair and biological processing of damage to DNA and other constituents. Understand how effects and processes at one biological scale impact biological processes and outcomes on other scales. Funding: J. Schuemann, NCI/NIH grants; S. McMahon, European Commission FP7 (grant EC FP7 MC-IOF-623630)

  14. Acceleration of the chemistry solver for modeling DI engine combustion using dynamic adaptive chemistry (DAC) schemes

    NASA Astrophysics Data System (ADS)

    Shi, Yu; Liang, Long; Ge, Hai-Wen; Reitz, Rolf D.

    2010-03-01

    Acceleration of the chemistry solver for engine combustion is of much interest due to the fact that in practical engine simulations extensive computational time is spent solving the fuel oxidation and emission formation chemistry. A dynamic adaptive chemistry (DAC) scheme based on a directed relation graph error propagation (DRGEP) method has previously been applied to study homogeneous charge compression ignition (HCCI) engine combustion with detailed chemistry (over 500 species) using an R-value-based breadth-first search (RBFS) algorithm, which significantly reduced computational times (by as much as 30-fold). The present paper extends the use of this on-the-fly kinetic mechanism reduction scheme to model combustion in direct-injection (DI) engines. It was found that the DAC scheme becomes less efficient when applied to DI engine simulations using a kinetic mechanism of relatively small size, and the accuracy of the original DAC scheme decreases for conventional non-premixed combustion engines. The present study also focuses on determination of search-initiating species, involvement of the NOx chemistry, selection of a proper error tolerance, as well as treatment of the interaction of chemical heat release and the fuel spray. Both DAC schemes were integrated into the ERC KIVA-3v2 code, and simulations were conducted to compare the two schemes. In general, the present DAC scheme has better efficiency and similar accuracy compared to the previous DAC scheme. The efficiency depends on the size of the chemical kinetics mechanism used and the engine operating conditions. For cases using a small n-heptane kinetic mechanism of 34 species, 30% of the computational time is saved, and 50% for a larger n-heptane kinetic mechanism of 61 species. The paper also demonstrates that by combining the present DAC scheme with an adaptive multi-grid chemistry (AMC) solver, it is feasible to simulate a direct-injection engine using a detailed n-heptane mechanism with 543 species with practical computer time.
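
    A schematic of a DRGEP-style selection step consistent with the description above: interaction coefficients are propagated from the search-initiating species through the species graph, and species whose best path product exceeds the tolerance are retained. The toy coefficients are placeholders; a real solver recomputes them from local reaction rates in every cell and time step.

        # DRGEP-style on-the-fly species selection (schematic sketch).
        def drgep_select(r, targets, tol):
            """r[a][b] in [0, 1]: direct interaction coefficient linking a to b."""
            R = {s: 1.0 for s in targets}             # targets always retained
            frontier = list(targets)
            while frontier:
                a = frontier.pop()
                for b, r_ab in r.get(a, {}).items():
                    R_b = R[a] * r_ab                 # error propagation product
                    if R_b > R.get(b, 0.0):
                        R[b] = R_b
                        frontier.append(b)
            return {s for s, v in R.items() if v >= tol}

        r = {"nC7H16": {"O2": 0.9, "OH": 0.7}, "OH": {"HO2": 0.4}}
        active = drgep_select(r, ["nC7H16"], tol=0.1)  # toy mechanism graph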

  15. An Evolutionary Optimization of the Refueling Simulation for a CANDU Reactor

    NASA Astrophysics Data System (ADS)

    Do, Q. B.; Choi, H.; Roh, G. H.

    2006-10-01

    This paper presents a multi-cycle and multi-objective optimization method for the refueling simulation of a 713 MWe Canada deuterium uranium (CANDU-6) reactor based on a genetic algorithm, an elitism strategy and a heuristic rule. The proposed algorithm searches for the optimal refueling patterns for a single cycle that maximize the average discharge burnup, minimize the maximum channel power and minimize the change in the zone controller unit water fills while satisfying the most important safety-related neutronic parameters of the reactor core. The heuristic rule generates an initial population of individuals very close to a feasible solution, and it reduces the computing time of the optimization process. The multi-cycle optimization is carried out based on a single cycle refueling simulation. The proposed approach was verified by a refueling simulation of a natural uranium CANDU-6 reactor for an operation period of 6 months at an equilibrium state and compared with the experience-based automatic refueling simulation and the generalized perturbation theory. The comparison has shown that the simulation results are consistent with each other and the proposed approach is a reasonable optimization method for the refueling simulation, controlling all the safety-related parameters of the reactor core during the simulation.
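
    The optimization loop described above, elitism plus selection and mutation over candidate refueling patterns, can be sketched generically; the scalar fitness, encoding, and population settings below are assumptions standing in for the paper's multi-objective ranking.

        import numpy as np

        # Toy elitist GA loop; fitness is a placeholder for the ranking of
        # burnup, channel power, and zone-controller fills.
        rng = np.random.default_rng(3)

        def fitness(x):
            return -np.sum((x - 0.5) ** 2)

        pop = rng.random((40, 8))              # 40 candidate patterns, 8 genes
        for _ in range(100):
            scores = np.array([fitness(x) for x in pop])
            elite = pop[np.argsort(scores)[-4:]]               # keep best 4
            i = rng.integers(0, 40, size=(36, 2))              # binary tournaments
            winners = np.where((scores[i[:, 0]] >= scores[i[:, 1]])[:, None],
                               pop[i[:, 0]], pop[i[:, 1]])
            pop = np.vstack([elite, winners + rng.normal(0, 0.05, winners.shape)])
        # A heuristic rule, as in the paper, would seed `pop` near feasible
        # refueling patterns instead of uniformly at random.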

  16. Development of a SaaS application probe to the physical properties of the Earth's interior: An attempt at moving HPC to the cloud

    NASA Astrophysics Data System (ADS)

    Huang, Qian

    2014-09-01

    Scientific computing often requires the availability of a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. In order to investigate physical properties of minerals at extreme conditions in computational mineral physics, parallel computing technology is used to speed up the performance by utilizing multiple computer resources to process a computational task simultaneously, thereby greatly reducing computation time. Traditionally, parallel computing has been addressed by using High Performance Computing (HPC) solutions and installed facilities such as clusters and super computers. Today, there is tremendous growth in cloud computing. Infrastructure as a Service (IaaS), the on-demand and pay-as-you-go model, creates a flexible and cost-effective means to access computing resources. In this paper, a feasibility report of HPC on a cloud infrastructure is presented. It is found that current cloud services in the IaaS layer still need to improve performance to be useful to research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and an application based on it is developed. In this paper, an overall description of this SaaS application is presented. This contribution can promote cloud application development in computational mineral physics, and cross-disciplinary studies.

  17. Feasibility of Computer Processing of Technical Information on the Design of Instructional Systems. Final Report for the Period 1 July 1972 through 31 March 1973.

    ERIC Educational Resources Information Center

    Scheffler, F. L.; And Others

    A feasibility study examined the capability of a computer-based system's handling of technical information pertinent to the design of instructional systems. Structured interviews were held to assess the information needs of both researchers and practitioners and an investigation was conducted of 10 computer-based information storage and retrieval…

  18. Feasibility Computer Applications to Mission-Oriented Training in the Aircraft Armament Systems Specialist Career-Field.

    DTIC Science & Technology

    1980-01-01

    Keywords: on-the-job training; task proficiency; mission-oriented training; training management; aircraft armament systems. The study examined training in the aircraft armament systems specialist career field to determine the feasibility of applying state-of-the-art computer technology to the problems of training management. Contents include: Measures Used in Rank-ordering Functions; Computer-Supportable Functions; Instructional Management.

  19. Fun During Knee Rehabilitation: Feasibility and Acceptability Testing of a New Android-Based Training Device

    PubMed Central

    Weber-Spickschen, Thomas Sanjay; Colcuc, Christian; Hanke, Alexander; Clausen, Jan-Dierk; James, Paul Abraham; Horstmann, Hauke

    2017-01-01

    Purpose: The initial goals of rehabilitation after knee injuries and operations are to achieve full knee extension and to activate the quadriceps muscle. In addition to regular physiotherapy, an android-based knee training device is designed to help patients achieve these goals and improve compliance in the early rehabilitation period. This knee training device combines the fun of a computer game with muscular training or rehabilitation. Our aim was to test the feasibility and acceptability of this new device. Methods: Fifty volunteer subjects were enrolled to test the computer-game-aided device. The first game was the high-striker game, which recorded maximum knee extension power. The second game involved controlling quadriceps muscular power to simulate flying an aeroplane in order to record the accuracy of muscle activation. The subjects evaluated this game by completing a simple questionnaire. Results: No technical problem was encountered during the usage of this device. No subjects complained of any discomfort after using this device. Measurements including maximum knee extension power, knee muscle activation and control were recorded successfully. Subjects rated their experience with the device as either excellent or very good and agreed that the device can motivate and monitor the progress of knee rehabilitation training. Conclusion: To the best of our knowledge, this is the first android-based tool available to fast-track knee rehabilitation training. All subjects gave very positive feedback on this computer-game-aided knee device. PMID:29081870

  20. Combining Computational Fluid Dynamics and Agent-Based Modeling: A New Approach to Evacuation Planning

    PubMed Central

    Epstein, Joshua M.; Pankajakshan, Ramesh; Hammond, Ross A.

    2011-01-01

    We introduce a novel hybrid of two fields—Computational Fluid Dynamics (CFD) and Agent-Based Modeling (ABM)—as a powerful new technique for urban evacuation planning. CFD is a predominant technique for modeling airborne transport of contaminants, while ABM is a powerful approach for modeling social dynamics in populations of adaptive individuals. The hybrid CFD-ABM method is capable of simulating how large, spatially-distributed populations might respond to a physically realistic contaminant plume. We demonstrate the overall feasibility of CFD-ABM evacuation design, using the case of a hypothetical aerosol release in Los Angeles to explore potential effectiveness of various policy regimes. We conclude by arguing that this new approach can be powerfully applied to arbitrary population centers, offering an unprecedented preparedness and catastrophic event response tool. PMID:21687788

  1. Explicit integration with GPU acceleration for large kinetic networks

    DOE PAGES

    Brock, Benjamin; Belt, Andrew; Billings, Jay Jay; ...

    2015-09-15

    In this study, we demonstrate the first implementation of recently-developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve of order 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. In addition, this orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.

  2. Hierarchical Fuzzy Control Applied to Parallel Connected UPS Inverters Using Average Current Sharing Scheme

    NASA Astrophysics Data System (ADS)

    Singh, Santosh Kumar; Ghatak Choudhuri, Sumit

    2018-05-01

    Parallel connection of UPS inverters to enhance power rating is a widely accepted practice. Inter-modular circulating currents appear when multiple inverter modules are connected in parallel to supply a variable critical load. Interfacing of modules therefore requires an intensive design using a proper control strategy. The potential of intuitive Fuzzy Logic (FL) control for systems with imprecise models is well known and can thus be utilised in parallel-connected UPS systems. A conventional FL controller is computationally intensive, especially with a higher number of input variables. This paper proposes the application of Hierarchical Fuzzy Logic control for a parallel-connected multi-modular inverter system to reduce the computational burden on the processor for a given switching frequency. Simulation results in the MATLAB environment and experimental verification using a Texas Instruments TMS320F2812 DSP are included to demonstrate the feasibility of the proposed control scheme.

  3. Atomistic Simulations of Graphene Growth: From Kinetics to Mechanism.

    PubMed

    Qiu, Zongyang; Li, Pai; Li, Zhenyu; Yang, Jinlong

    2018-03-20

    Epitaxial growth is a promising strategy to produce high-quality graphene samples. At the same time, this method has great flexibility for industrial scale-up. To optimize growth protocols, it is essential to understand the underlying growth mechanisms. This is, however, very challenging, as the growth process is complicated and involves many elementary steps. Experimentally, atomic-scale in situ characterization methods are generally not feasible at the high temperature of graphene growth. Therefore, kinetics is the main experimental information available for studying growth mechanisms. Theoretically, first-principles calculations routinely provide atomic structures and energetics but have a stringent limit on the accessible spatial and time scales. This gap between experiment and theory can be bridged by atomistic simulations that use first-principles atomic details as input and provide the overall growth kinetics, which can be directly compared with experiment, as output. Typically, system-specific approximations should be applied to make such simulations computationally feasible. By feeding kinetic Monte Carlo (kMC) simulations with first-principles parameters, we can directly simulate the graphene growth process and thus understand the growth mechanisms. Our simulations suggest that the carbon dimer is the dominant feeding species in the epitaxial growth of graphene on both Cu(111) and Cu(100) surfaces, which enables us to understand why the reaction is diffusion limited on Cu(111) but attachment limited on Cu(100). When hydrogen is explicitly considered in the simulation, the central role hydrogen plays in graphene growth is revealed, which solves the long-standing puzzle of why H2 should be fed in the chemical vapor deposition of graphene. The simulation results can be directly compared with experimental kinetic data, if available. Our kMC simulations reproduce the experimentally observed quintic-like behavior of graphene growth on Ir(111). By checking the simulation results, we find that such nonlinearity is caused by lattice mismatch, and the induced growth-front inhomogeneity can be universally used to predict growth behaviors in other heteroepitaxial systems. Notably, although experimental kinetics usually gives useful insight into atomic mechanisms, it can sometimes be misleading. Such pitfalls can be avoided via atomistic simulations, as demonstrated in our study of the graphene etching process. Growth protocols can be designed theoretically with computational kinetic and mechanistic information. By contrasting the different activation energies involved in an atom-exchange-based carbon penetration process for monolayer and bilayer graphene, we propose a three-step strategy to grow high-quality bilayer graphene. Based on first-principles parameters, a kinetic pathway toward the high-density, ordered N doping of epitaxial graphene on Cu(111) using a C5NCl5 precursor is also identified. These studies demonstrate that atomistic simulations can unambiguously produce or reproduce the kinetic information on graphene growth, which is pivotal to understanding the growth mechanism and designing better growth protocols. A similar strategy can be used in growth mechanism studies of other two-dimensional atomic crystals.
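
    A minimal kinetic Monte Carlo event loop of the kind used to propagate such first-principles rates is sketched below; the two events and their rates are placeholders rather than values from the study.

        import numpy as np

        # Gillespie-type kMC loop: draw an exponential waiting time from the
        # total rate, then pick an event proportionally to its rate.
        rng = np.random.default_rng(4)
        rates = {"attach_C2": 50.0, "detach_C2": 5.0}      # per second (assumed)
        t, edge_carbons = 0.0, 0

        for _ in range(10_000):
            names = list(rates)
            k = np.array([rates[s] for s in names])
            ktot = k.sum()
            t += -np.log(rng.random()) / ktot              # exponential waiting time
            pick = names[np.searchsorted(np.cumsum(k), rng.random() * ktot)]
            edge_carbons = max(0, edge_carbons + (2 if pick == "attach_C2" else -2))
        # Rate tables keyed on local configuration, computed from DFT barriers,
        # would make the kinetics site-dependent as in the simulations above.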

  4. Scheduling multimedia services in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Liu, Yunchang; Li, Chunlin; Luo, Youlong; Shao, Yanling; Zhang, Jing

    2018-02-01

    Currently, security is a critical factor for multimedia services running in the cloud computing environment. As an effective mechanism, trust can improve security levels and mitigate attacks within cloud computing environments. Unfortunately, existing scheduling strategies for multimedia services in the cloud computing environment do not integrate a trust mechanism when making scheduling decisions. In this paper, we propose a scheduling scheme for multimedia services in multiple clouds. First, a novel scheduling architecture is presented. Then, we build a trust model including both subjective trust and objective trust to evaluate the trust degree of multimedia service providers. By employing Bayesian theory, the subjective trust degree between multimedia service providers and users is obtained. According to the attributes of QoS, the objective trust degree of multimedia service providers is calculated. Finally, a scheduling algorithm integrating the trust of entities is proposed by considering the deadline, cost and trust requirements of multimedia services. The scheduling algorithm heuristically searches for reasonable resource allocations that satisfy the trust requirements and meet the deadlines of the multimedia services. Detailed simulation experiments demonstrate the effectiveness and feasibility of the proposed trust scheduling scheme.
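
    One common way to realize a Bayesian subjective-trust update of the kind described, shown here only as a plausible sketch rather than the paper's exact model, is a Beta-Bernoulli posterior over interaction outcomes:

        # Beta-Bernoulli trust record: successes raise trust, failures lower
        # it; the posterior mean serves as the subjective trust degree.
        class TrustRecord:
            def __init__(self, alpha: float = 1.0, beta: float = 1.0):
                self.alpha, self.beta = alpha, beta        # uniform Beta(1,1) prior

            def observe(self, success: bool) -> None:
                if success:
                    self.alpha += 1.0
                else:
                    self.beta += 1.0

            @property
            def degree(self) -> float:
                return self.alpha / (self.alpha + self.beta)

        t = TrustRecord()
        for outcome in (True, True, False, True):
            t.observe(outcome)
        print(round(t.degree, 3))                          # 0.667 after 3/4 successes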

  5. Infinitely Dilute Partial Molar Properties of Proteins from Computer Simulation

    PubMed Central

    2015-01-01

    A detailed understanding of temperature and pressure effects on an infinitely dilute protein’s conformational equilibrium requires knowledge of the corresponding infinitely dilute partial molar properties. Established molecular dynamics methodologies generally have not provided a way to calculate these properties without either a loss of thermodynamic rigor, the introduction of nonunique parameters, or a loss of information about which solute conformations specifically contributed to the output values. Here we implement a simple method that is thermodynamically rigorous and possesses none of the above disadvantages, and we report on the method’s feasibility and computational demands. We calculate infinitely dilute partial molar properties for two proteins and attempt to distinguish the thermodynamic differences between a native and a denatured conformation of a designed miniprotein. We conclude that simple ensemble average properties can be calculated with very reasonable amounts of computational power. In contrast, properties corresponding to fluctuating quantities are computationally demanding to calculate precisely, although they can be obtained more easily by following the temperature and/or pressure dependence of the corresponding ensemble averages. PMID:25325571

  6. Spectrally resolving and scattering-compensated x-ray luminescence/fluorescence computed tomography

    PubMed Central

    Cong, Wenxiang; Shen, Haiou; Wang, Ge

    2011-01-01

    Nanophosphors and similar materials emit near-infrared (NIR) light upon x-ray excitation. They were designed as optical probes for in vivo visualization and analysis of molecular and cellular targets, pathways, and responses. Building on previous work on x-ray fluorescence computed tomography (XFCT) and x-ray luminescence computed tomography (XLCT), here we propose a spectrally-resolving and scattering-compensated x-ray luminescence/fluorescence computed tomography (SXLCT or SXFCT) approach to quantify the spatial distribution of nanophosphors (or similar materials and chemical elements) within a biological object. In this paper, the x-ray scattering is taken into account in the reconstruction algorithm. The NIR scattering is described by the diffusion approximation model. Then, x-ray excitations are applied with different spectra, and NIR signals are measured in a spectrally resolving fashion. Finally, a linear relationship is established between the nanophosphor distribution and the measured NIR data using the finite element method and inverted using the compressive sensing technique. The numerical simulation results demonstrate the feasibility and merits of the proposed approach. PMID:21721815
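
    As a generic illustration of the final inversion step (not the authors' solver), the sketch below recovers a sparse source vector from underdetermined linear measurements with the iterative shrinkage-thresholding algorithm (ISTA), a basic compressive-sensing technique; a random Gaussian matrix stands in for the finite-element system matrix.

    import numpy as np

    # ISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1: a basic
    # compressive-sensing solver applied to a synthetic sparse problem.

    rng = np.random.default_rng(0)
    n, m, k = 200, 60, 5                  # unknowns, measurements, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(0.5, 1.5, size=k)
    b = A @ x_true

    lam = 0.01
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(500):
        z = x - A.T @ (A @ x - b) / L     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

    rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"relative recovery error: {rel_err:.3f}")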

  7. An evaluation of software tools for the design and development of cockpit displays

    NASA Technical Reports Server (NTRS)

    Ellis, Thomas D., Jr.

    1993-01-01

    The use of all-glass cockpits at the NASA Langley Research Center (LaRC) simulation facility has changed the means of design, development, and maintenance of instrument displays. The human-machine interface has evolved from a physical hardware device to a software-generated electronic display system. This has subsequently caused an increased workload at the facility. As computer processing power increases and the glass cockpit becomes predominant in facilities, software tools used in the design and development of cockpit displays are becoming both feasible and necessary for a more productive simulation environment. This paper defines LaRC requirements of a display software development tool and compares two available applications against these requirements. As a part of the software engineering process, these tools reduce development time, provide a common platform for display development, and produce exceptional real-time results.

  8. Design selection of an innovative tool holder for ultrasonic vibration assisted turning (IN-UVAT) using finite element analysis simulation

    NASA Astrophysics Data System (ADS)

    Rachmat, Haris; Ibrahim, M. Rasidi; Hasan, Sulaiman bin

    2017-04-01

    Ultrasonic vibration assisted turning is an advanced machining technology. The design of the tool holder is a crucial step, to ensure that the holder can withstand all forces arising in the turning process. Because a direct experimental approach is expensive, this paper used computational finite element simulation to predict the feasibility of the tool holder in terms of displacement and effective stress. SS201 and AISI 1045 materials were evaluated with sharp-corner and ramp-corner flexure hinges in the design. The results show that AISI 1045 with a ramp-corner flexure hinge was the best candidate for production. The displacement is around 11.3 microns, the effective stress is 1.71×10^8 N/m², and the factor of safety is 3.10.
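
    The reported safety factor can be sanity-checked as the ratio of yield strength to the simulated effective (von Mises) stress; the AISI 1045 yield value below is a typical handbook figure assumed for illustration, not taken from the paper.

    # Sanity check: factor of safety = yield strength / effective stress.
    yield_strength = 5.3e8     # Pa, typical AISI 1045 yield (assumed value)
    effective_stress = 1.71e8  # Pa, reported simulation result
    print(f"factor of safety = {yield_strength / effective_stress:.2f}")  # ~3.10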

  9. Using Markov state models to study self-assembly

    PubMed Central

    Perkett, Matthew R.; Hagan, Michael F.

    2014-01-01

    Markov state models (MSMs) have been demonstrated to be a powerful method for computationally studying intramolecular processes such as protein folding and macromolecular conformational changes. In this article, we present a new approach to construct MSMs that is applicable to modeling a broad class of multi-molecular assembly reactions. Distinct structures formed during assembly are distinguished by their undirected graphs, which are defined by strong subunit interactions. Spatial inhomogeneities of free subunits are accounted for using a recently developed Gaussian-based signature. Simplifications to this state identification are also investigated. The feasibility of this approach is demonstrated on two different coarse-grained models for virus self-assembly. We find good agreement between the dynamics predicted by the MSMs and long, unbiased simulations, and that the MSMs can reduce overall simulation time by orders of magnitude. PMID:24907984
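
    Independent of the paper's graph-based state definitions, the core MSM estimation step can be sketched in a few lines: count lag-time transitions in a discretized trajectory, row-normalize into a transition matrix, and read slow relaxation timescales off its eigenvalues. The two-state toy trajectory below is hypothetical.

    import numpy as np

    def estimate_msm(dtraj, n_states, lag):
        """Row-stochastic transition matrix estimated at lag time `lag`."""
        counts = np.zeros((n_states, n_states))
        for a, b in zip(dtraj[:-lag], dtraj[lag:]):
            counts[a, b] += 1.0
        rows = counts.sum(axis=1, keepdims=True)
        return counts / np.where(rows == 0.0, 1.0, rows)

    def implied_timescales(T, lag, dt=1.0):
        """t_i = -lag*dt / ln(lambda_i) for eigenvalues with 0 < |l| < 1."""
        evals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
        return [-lag * dt / np.log(l) for l in evals[1:] if 0.0 < l < 1.0]

    # Hypothetical two-state trajectory with rare switches, mimicking a slow
    # assembly intermediate (the paper's states come from subunit graphs).
    rng = np.random.default_rng(1)
    state, dtraj = 0, []
    for _ in range(50_000):
        if rng.random() < 0.01:        # rare switch between states
            state = 1 - state
        dtraj.append(state)

    T = estimate_msm(np.array(dtraj), n_states=2, lag=10)
    print("implied timescales (steps):",
          np.round(implied_timescales(T, lag=10), 1))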

  10. Towards photorealistic and immersive virtual-reality environments for simulated prosthetic vision: integrating recent breakthroughs in consumer hardware and software.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Zheng, Steven; Suaning, Gregg J

    2014-01-01

    Simulated prosthetic vision (SPV) in normally sighted subjects is an established way of investigating the prospective efficacy of visual prosthesis designs in visually guided tasks such as mobility. To perform meaningful SPV mobility studies in computer-based environments, a credible representation of both the virtual scene to navigate and the experienced artificial vision has to be established. It is therefore prudent to make optimal use of existing hardware and software solutions when establishing a testing framework. The authors aimed to improve the realism and immersion of SPV by integrating state-of-the-art yet low-cost consumer technology. The feasibility of body motion tracking to control movement in photo-realistic virtual environments was evaluated in a pilot study. Five subjects were recruited and performed an obstacle avoidance and wayfinding task using either keyboard and mouse, gamepad, or Kinect motion tracking. Walking speed and collisions were analyzed as basic measures of task performance. Kinect motion tracking resulted in lower performance compared to classical input methods, yet results were more uniform across vision conditions. The chosen framework was successfully applied in a basic virtual task and is suited to realistically simulate real-world scenes under SPV in mobility research. Classical input peripherals remain a feasible and effective way of controlling virtual movement. Motion tracking, despite its limitations and early state of implementation, is intuitive and can eliminate between-subject differences due to varying familiarity with established input methods.

  11. Improvements in the Scalability of the NASA Goddard Multiscale Modeling Framework for Hurricane Climate Studies

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen; Tao, Wei-Kuo; Chern, Jiun-Dar

    2007-01-01

    Improving our understanding of hurricane inter-annual variability and the impact of climate change (e.g., doubling CO2 and/or global warming) on hurricanes brings both scientific and computational challenges to researchers. As hurricane dynamics involves multiscale interactions among synoptic-scale flows, mesoscale vortices, and small-scale cloud motions, an ideal numerical model suitable for hurricane studies should demonstrate its capabilities in simulating these interactions. The newly developed multiscale modeling framework (MMF, Tao et al., 2007) and the substantial computing power of the NASA Columbia supercomputer show promise for pursuing the related studies, as the MMF inherits the advantages of two NASA state-of-the-art modeling components: the GEOS4/fvGCM and 2D GCEs. This article focuses on the computational issues and proposes a revised methodology to improve the MMF's performance and scalability. It is shown that this prototype implementation enables 12-fold performance improvements with 364 CPUs, thereby making it more feasible to study hurricane climate.

  12. DENA: A Configurable Microarchitecture and Design Flow for Biomedical DNA-Based Logic Design.

    PubMed

    Beiki, Zohre; Jahanian, Ali

    2017-10-01

    DNA is known as the building block for storing life's codes and transferring genetic features through the generations. However, it has been found that DNA strands can be used for a new type of computation that opens fascinating horizons in computational medicine. Significant contributions have addressed the design of DNA-based logic gates for medical and computational applications, but there are serious challenges in designing medium- and large-scale DNA circuits. In this paper, a new microarchitecture and corresponding design flow are proposed to facilitate the design of multistage, large-scale DNA logic systems. The feasibility and efficiency of the proposed microarchitecture are evaluated by implementing a full adder, and its cascadability is then determined by implementing a multistage 8-bit adder. Simulation results show the strengths of the proposed design style and microarchitecture in terms of the scalability, implementation cost, and signal integrity of the DNA-based logic system compared to traditional approaches.

  13. Challenges of Future High-End Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David; Kutler, Paul (Technical Monitor)

    1998-01-01

    The next major milestone in high performance computing is a sustained rate of one Pflop/s (also written one petaflops, or 10^15 floating-point operations per second). In addition to prodigiously high computational performance, such systems must of necessity feature very large main memories, as well as comparably high I/O bandwidth and huge mass storage facilities. The current consensus of scientists who have studied these issues is that "affordable" petaflops systems may be feasible by the year 2010, assuming that certain key technologies continue to progress at current rates. One important question is whether applications can be structured to perform efficiently on such systems, which are expected to incorporate many thousands of processors and deeply hierarchical memory systems. To answer these questions, advanced performance modeling techniques, including simulation of future architectures and applications, may be required. It may also be necessary to formulate "latency tolerant algorithms" and other completely new algorithmic approaches for certain applications. This talk will give an overview of these challenges.

  14. An optimization program based on the method of feasible directions: Theory and users guide

    NASA Technical Reports Server (NTRS)

    Belegundu, Ashok D.; Berke, Laszlo; Patnaik, Surya N.

    1994-01-01

    The theory and user instructions for an optimization code based on the method of feasible directions are presented. The code was written for wide distribution and ease of attachment to other simulation software. Although the theory of the method of feasible directions was developed in the 1960s, many considerations are involved in its actual implementation as a computer code. Included in the code are a number of features to improve robustness in optimization. The search direction is obtained by solving a quadratic program using an interior method based on Karmarkar's algorithm. The theory is discussed with a focus on the important and often overlooked role played by the various parameters guiding the iterations within the program. Also discussed is a robust approach for handling infeasible starting points. The code was validated by solving a variety of structural optimization test problems that have known solutions obtained by other optimization codes. It has been observed that this code is robust: it has solved a variety of problems from different starting points. However, the code is inefficient in that it takes considerable CPU time as compared with certain other available codes. Further work is required to improve its efficiency while retaining its robustness.

  15. Hydrodynamic Simulations and Tomographic Reconstructions of the Intergalactic Medium

    NASA Astrophysics Data System (ADS)

    Stark, Casey William

    The Intergalactic Medium (IGM) is the dominant reservoir of matter in the Universe from which the cosmic web and galaxies form. The structure and physical state of the IGM provides insight into the cosmological model of the Universe, the origin and timeline of the reionization of the Universe, as well as being an essential ingredient in our understanding of galaxy formation and evolution. Our primary handle on this information is a signal known as the Lyman-alpha forest (or Ly-alpha forest) -- the collection of absorption features in high-redshift sources due to intervening neutral hydrogen, which scatters HI Ly-alpha photons out of the line of sight. The Ly-alpha forest flux traces density fluctuations at high redshift and at moderate overdensities, making it an excellent tool for mapping large-scale structure and constraining cosmological parameters. Although the computational methodology for simulating the Ly-alpha forest has existed for over a decade, we are just now approaching the scale of computing power required to simultaneously capture large cosmological scales and the scales of the smallest absorption systems. My thesis focuses on using simulations at the edge of modern computing to produce precise predictions of the statistics of the Ly-alpha forest and to better understand the structure of the IGM. In the first part of my thesis, I review the state of hydrodynamic simulations of the IGM, including pitfalls of the existing under-resolved simulations. Our group developed a new cosmological hydrodynamics code to tackle the computational challenge, and I developed a distributed analysis framework to compute flux statistics from our simulations. I present flux statistics derived from a suite of our large hydrodynamic simulations and demonstrate convergence to the per cent level. I also compare flux statistics derived from simulations using different discretizations and hydrodynamic schemes (Eulerian finite volume vs. smoothed particle hydrodynamics) and discuss differences in their convergence behavior, their overall agreement, and the implications for cosmological constraints. In the second part of my thesis, I present a tomographic reconstruction method that allows us to make 3D maps of the IGM with Mpc resolution. In order to make reconstructions of large surveys computationally feasible, I developed a new Wiener Filter application with an algorithm specialized to our problem, which significantly reduces the space and time complexity compared to previous implementations. I explore two scientific applications of the maps: finding protoclusters by searching the maps for large, contiguous regions of low flux and finding cosmic voids by searching the maps for regions of high flux. Using a large N-body simulation, I identify and characterize both protoclusters and voids at z = 2.5, in the middle of the redshift range being mapped by ongoing surveys. I provide simple methods for identifying protocluster and void candidates in the tomographic flux maps, and then test them on mock surveys and reconstructions. I present forecasts for sample purity and completeness and other scientific applications of these large, high-redshift objects.

  16. Atomistic minimal model for estimating profile of electrodeposited nanopatterns

    NASA Astrophysics Data System (ADS)

    Asgharpour Hassankiadeh, Somayeh; Sadeghi, Ali

    2018-06-01

    We develop a computationally efficient and methodologically simple approach to realize molecular dynamics simulations of electrodeposition. Our minimal model takes into account the nontrivial electric field due to a sharp electrode tip to perform simulations of the controllable coating of a thin layer on a surface with atomic precision. On the atomic scale, highly site-selective electrodeposition of ions and charged particles by means of the sharp tip of a scanning probe microscope is possible. A better understanding of the microscopic process, obtained mainly from atomistic simulations, helps us to enhance the quality of this nanopatterning technique and to make it applicable to the fabrication of nanowires and nanocontacts. In the limit of screened inter-particle interactions, it is feasible to run very fast simulations of the electrodeposition process within the framework of the proposed model and thus to investigate how the shape of the overlayer depends on the tip-sample geometry and dielectric properties, electrolyte viscosity, etc. Our calculation results reveal that the sharpness of the profile of a nano-scale deposited overlayer is dictated by the component of the electric field underneath the tip normal to the sample surface.

  17. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    PubMed Central

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-01-01

    In order to improve the accuracy of the ultrasonic phased array focusing time delay, starting from an analysis of the original interpolation Cascade-Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm is proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarize the general formula for the arbitrary-multiple interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we compensated it: the compensated CIC filter's pass band is flatter, its transition band becomes steeper, and its stop band attenuation increases. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load, and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385
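
    For reference, the sketch below implements the generic textbook CIC interpolation structure (comb stages at the low rate, zero-stuffing, integrators at the high rate). It is not the paper's 8× parallel decomposition, which additionally splits this computation into parallel channels for FPGA throughput.

    import numpy as np

    def cic_interpolate(x, R, N=3, M=1):
        """R-fold CIC interpolation: N comb stages at the low rate,
        zero-stuff upsampling, then N integrator stages at the high rate.
        Generic textbook structure, shown here for illustration only."""
        y = np.asarray(x, dtype=float)
        for _ in range(N):                     # comb: y[n] = y[n] - y[n-M]
            y = y - np.concatenate((np.zeros(M), y[:-M]))
        up = np.zeros(len(y) * R)
        up[::R] = y                            # insert R-1 zeros between samples
        for _ in range(N):                     # integrator: running sum
            up = np.cumsum(up)
        return up * R / (R * M) ** N           # remove the DC gain (R*M)^N / R

    # Example: 8x interpolation of a sine, as in the delay-refinement use case.
    n = np.arange(64)
    fine = cic_interpolate(np.sin(2 * np.pi * n / 16), R=8)
    print(len(fine))   # 512 output samples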

  18. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    PubMed

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    In order to improve the accuracy of the ultrasonic phased array focusing time delay, starting from an analysis of the original interpolation Cascade-Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm is proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarize the general formula for the arbitrary-multiple interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we compensated it: the compensated CIC filter's pass band is flatter, its transition band becomes steeper, and its stop band attenuation increases. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load, and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.

  19. Diffuse optical tomography for breast cancer imaging guided by computed tomography: A feasibility study.

    PubMed

    Baikejiang, Reheman; Zhang, Wei; Li, Changqing

    2017-01-01

    Diffuse optical tomography (DOT) has attracted attention in the last two decades due to its intrinsic sensitivity in imaging tissue chromophores such as hemoglobin, water, and lipid. However, DOT has not been clinically accepted yet due to its low spatial resolution caused by strong optical scattering in tissues. Structural guidance provided by an anatomical imaging modality enhances DOT imaging substantially. Here, we propose a computed tomography (CT) guided multispectral DOT imaging system for breast cancer imaging. To validate its feasibility, we have built a prototype DOT imaging system which consists of a laser at a wavelength of 650 nm and an electron multiplying charge coupled device (EMCCD) camera. We have validated the CT guided DOT reconstruction algorithms with numerical simulations and phantom experiments, in which different imaging setup parameters, such as the number of projection measurements and the width of the measurement patch, were investigated. Our results indicate that an air-cooled EMCCD camera is sufficient for transmission-mode DOT imaging. We have also found that measurements at six angular projections are sufficient for DOT to reconstruct optical targets with 2 and 4 times absorption contrast when CT guidance is applied. Finally, we describe our future research plan for integrating a multispectral DOT imaging system into a breast CT scanner.

  20. Vibration control of building structures using self-organizing and self-learning neural networks

    NASA Astrophysics Data System (ADS)

    Madan, Alok

    2005-11-01

    Past research in artificial intelligence establishes that artificial neural networks (ANN) are effective and efficient computational processors for performing a variety of tasks including pattern recognition, classification, associative recall, combinatorial problem solving, adaptive control, multi-sensor data fusion, noise filtering and data compression, modelling and forecasting. The paper presents a potentially feasible approach for training ANN in active control of earthquake-induced vibrations in building structures without the aid of teacher signals (i.e. target control forces). A counter-propagation neural network is trained to output the control forces that are required to reduce the structural vibrations in the absence of any feedback on the correctness of the output control forces (i.e. without any information on the errors in output activations of the network). The present study shows that, in principle, the counter-propagation network (CPN) can learn from the control environment to compute the required control forces without the supervision of a teacher (unsupervised learning). Simulated case studies are presented to demonstrate the feasibility of implementing the unsupervised learning approach in ANN for effective vibration control of structures under the influence of earthquake ground motions. The proposed learning methodology obviates the need for developing a mathematical model of structural dynamics or training a separate neural network to emulate the structural response for implementation in practice.

  1. When Long-Range Zero-Lag Synchronization is Feasible in Cortical Networks

    PubMed Central

    Viriyopase, Atthaphon; Bojak, Ingo; Zeitler, Magteld; Gielen, Stan

    2012-01-01

    Many studies have reported long-range synchronization of neuronal activity between brain areas, in particular in the beta and gamma bands with frequencies in the range of 14–30 and 40–80 Hz, respectively. Several studies have reported synchrony with zero phase lag, which is remarkable considering the synaptic and conduction delays inherent in the connections between distant brain areas. This result has led to many speculations about the possible functional role of zero-lag synchrony, such as for neuronal communication, attention, memory, and feature binding. However, recent studies using recordings of single-unit activity and local field potentials report that neuronal synchronization may occur with non-zero phase lags. This raises the questions whether zero-lag synchrony can occur in the brain and, if so, under which conditions. We used analytical methods and computer simulations to investigate which connectivity between neuronal populations allows or prohibits zero-lag synchrony. We did so for a model where two oscillators interact via a relay oscillator. Analytical results and computer simulations were obtained for both type I Mirollo–Strogatz neurons and type II Hodgkin–Huxley neurons. We have investigated the dynamics of the model for various types of synaptic coupling and importantly considered the potential impact of Spike-Timing Dependent Plasticity (STDP) and its learning window. We confirm previous results that zero-lag synchrony can be achieved in this configuration. This is much easier to achieve with Hodgkin–Huxley neurons, which have a biphasic phase response curve, than for type I neurons. STDP facilitates zero-lag synchrony as it adjusts the synaptic strengths such that zero-lag synchrony is feasible for a much larger range of parameters than without STDP. PMID:22866034

  2. Low Mass-Damping Vortex-Induced Vibrations of a Single Cylinder at Moderate Reynolds Number.

    PubMed

    Jus, Y; Longatte, E; Chassaing, J-C; Sagaut, P

    2014-10-01

    The feasibility and accuracy of large eddy simulation is investigated for the case of three-dimensional unsteady flows past an elastically mounted cylinder at moderate Reynolds number. Although these flow problems are unconfined, complex wake flow patterns may be observed depending on the elastic properties of the structure. An iterative procedure is used to solve the structural dynamic equation to be coupled with the Navier-Stokes system formulated in a pseudo-Eulerian way. A moving mesh method is involved to deform the computational domain according to the motion of the fluid structure interface. Numerical simulations of vortex-induced vibrations are performed for a freely vibrating cylinder at Reynolds number 3900 in the subcritical regime under two low mass-damping conditions. A detailed physical analysis is provided for a wide range of reduced velocities, and the typical three-branch response of the amplitude behavior usually reported in the experiments is exhibited and reproduced by numerical simulation.

  3. Feasibility analysis on integration of luminous environment measuring and design based on exposure curve calibration

    NASA Astrophysics Data System (ADS)

    Zou, Yuan; Shen, Tianxing

    2013-03-01

    Beyond illuminance calculation during architectural and luminous environment design, and to provide a wider variety of photometric data, this paper presents the combination of luminous environment design with the SM light environment measuring system, which contains a set of experimental devices, including light-information collection and processing modules, and can offer various types of photometric data. During the research, we introduced a simulation method for calibration, which mainly includes rebuilding experimental scenes in 3ds Max Design, calibrating this computer-aided design software in the simulated environment under various typical light sources, and fitting the exposure curves of the rendered images. As the analysis proceeded, the operating procedure and points of attention for the simulated calibration were summarized, and connections between the Mental Ray renderer and the SM light environment measuring system were established.

  4. Characteristic Evolution and Matching

    NASA Astrophysics Data System (ADS)

    Winicour, Jeffrey

    2012-01-01

    I review the development of numerical evolution codes for general relativity based upon the characteristic initial-value problem. Progress in characteristic evolution is traced from the early stage of 1D feasibility studies to 2D-axisymmetric codes that accurately simulate the oscillations and gravitational collapse of relativistic stars and to current 3D codes that provide pieces of a binary black-hole spacetime. Cauchy codes have now been successful at simulating all aspects of the binary black-hole problem inside an artificially constructed outer boundary. A prime application of characteristic evolution is to extend such simulations to null infinity where the waveform from the binary inspiral and merger can be unambiguously computed. This has now been accomplished by Cauchy-characteristic extraction, where data for the characteristic evolution is supplied by Cauchy data on an extraction worldtube inside the artificial outer boundary. The ultimate application of characteristic evolution is to eliminate the role of this outer boundary by constructing a global solution via Cauchy-characteristic matching. Progress in this direction is discussed.

  5. Linear and nonlinear ARMA model parameter estimation using an artificial neural network

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Cohen, R. J.

    1997-01-01

    This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
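
    As a small illustration of the linear end of this comparison (not the neural-network estimator itself), the sketch below simulates an AR(2) process and recovers its coefficients by ordinary least squares.

    import numpy as np

    # Simulate an AR(2) process y[n] = a1*y[n-1] + a2*y[n-2] + e[n]
    # and recover (a1, a2) by ordinary least squares -- the linear
    # special case of the ARMA identification discussed above.

    rng = np.random.default_rng(2)
    a1, a2, n = 0.6, -0.3, 5000
    y = np.zeros(n)
    for i in range(2, n):
        y[i] = a1 * y[i - 1] + a2 * y[i - 2] + rng.standard_normal()

    X = np.column_stack((y[1:-1], y[:-2]))   # regressors y[n-1], y[n-2]
    coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
    print("estimated (a1, a2):", coef.round(3))   # close to (0.6, -0.3)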

  6. A fast implementation of MPC-based motion cueing algorithms for mid-size road vehicle motion simulators

    NASA Astrophysics Data System (ADS)

    Bruschetta, M.; Maran, F.; Beghi, A.

    2017-06-01

    The use of dynamic driving simulators is constantly increasing in the automotive community, with applications ranging from vehicle development to rehabilitation and driver training. The effectiveness of such devices depends on how well they reproduce driving sensations, hence it is crucial that the motion control strategies generate inputs to the platform that are both realistic and feasible. Such strategies are called motion cueing algorithms (MCAs). In recent years several MCAs based on model predictive control (MPC) techniques have been proposed. The main drawback associated with the use of MPC is its computational burden, which may limit its application to high performance dynamic simulators. In this paper, a fast, real-time implementation of an MPC-based MCA for a 9-DOF, high-performance platform is proposed. The effectiveness of the approach in managing the available working area is illustrated by presenting experimental results from an implementation on a real device with a 200 Hz control frequency.

  7. Parameters Identification for Photovoltaic Module Based on an Improved Artificial Fish Swarm Algorithm

    PubMed Central

    Wang, Hong-Hua

    2014-01-01

    A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Unlike traditional linear models, the PV module model is nonlinear and involves multiple parameters. Since conventional methods are incapable of identifying the parameters of the PV module, an excellent optimization algorithm is required. The artificial fish swarm algorithm (AFSA), originally inspired by the simulation of the collective behavior of real fish swarms, is proposed to extract the parameters of the PV module quickly and accurately. In addition to the regular operations, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated for various parameters of the PV module under different environmental conditions, and the testing results are compared with other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of obtaining higher parameter identification precision. PMID:25243233
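
    For context, the sketch below writes down the standard single-diode PV model that such identification targets and fits its five parameters to synthetic I-V data. SciPy's differential evolution is used as a stand-in global optimizer, since AFSA is not a library routine; the cell count and parameter bounds are assumptions.

    import numpy as np
    from scipy.optimize import differential_evolution

    # Standard single-diode model:
    #   i = Iph - I0*(exp((v + i*Rs)/(a*VT)) - 1) - (v + i*Rs)/Rsh
    # VT lumps the thermal voltage and the series cell count (36 assumed).
    VT = 0.0258 * 36

    def fitness(params, v, i):
        """RMSE between measured current and the model prediction."""
        iph, i0, a, rs, rsh = params
        model = (iph - i0 * (np.exp((v + i * rs) / (a * VT)) - 1)
                 - (v + i * rs) / rsh)
        return np.sqrt(np.mean((model - i) ** 2))

    # Synthetic I-V curve from known parameters (fixed-point solve for i).
    true = (5.0, 1e-7, 1.3, 0.2, 150.0)
    v = np.linspace(0.0, 20.0, 40)
    i = np.empty_like(v)
    for k, vk in enumerate(v):
        ik = true[0]
        for _ in range(200):
            ik = (true[0]
                  - true[1] * (np.exp((vk + ik * true[3]) / (true[2] * VT)) - 1)
                  - (vk + ik * true[3]) / true[4])
        i[k] = ik

    # Differential evolution as a stand-in for AFSA (both are population-
    # based global optimizers); bounds are assumed plausible ranges.
    bounds = [(0, 10), (1e-9, 1e-5), (1, 2), (0, 1), (10, 500)]
    result = differential_evolution(fitness, bounds, args=(v, i), seed=3)
    print("fitted (Iph, I0, a, Rs, Rsh):", np.round(result.x, 4))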

  8. A decision tool for selecting trench cap designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paige, G.B.; Stone, J.J.; Lane, L.J.

    1995-12-31

    A computer based prototype decision support system (PDSS) is being developed to assist the risk manager in selecting an appropriate trench cap design for waste disposal sites. The selection of the 'best' design among feasible alternatives requires consideration of multiple and often conflicting objectives. The methodology used in the selection process consists of: selecting and parameterizing decision variables using data, simulation models, or expert opinion; selecting feasible trench cap design alternatives; ordering the decision variables and ranking the design alternatives. The decision model is based on multi-objective decision theory and uses a unique approach to order the decision variables and rank the design alternatives. Trench cap designs are evaluated based on federal regulations, hydrologic performance, cover stability and cost. Four trench cap designs, which were monitored for a four year period at Hill Air Force Base in Utah, are used to demonstrate the application of the PDSS and evaluate the results of the decision model. The results of the PDSS, using both data and simulations, illustrate the relative advantages of each of the cap designs and which cap is the 'best' alternative for a given set of criteria and a particular importance order of those decision criteria.

  9. Feasibility Study of a Vision-Based Landing System for Unmanned Fixed-Wing Aircraft

    DTIC Science & Technology

    2017-06-01

    This thesis examines the feasibility of applying computer vision techniques and visual feedback in the control loop of an autonomous system, and their integration into an autonomous aircraft control system. Subject terms: autonomous systems, auto-land, computer vision, image processing.

  10. A feasibility study of low-income homebound older adults' participation in an online chronic disease self-management program.

    PubMed

    Choi, Namkee G; An, Sok; Garcia, Alexandra

    2014-01-01

    This study explored the feasibility of "Better Choices, Better Health" (BCBH), the online version of Stanford's Chronic Disease Self-Management Program, among 10 low-income homebound older adults with no or limited computer skills, compared with 10 peers with high computer skills. Computer training was provided before and at the beginning of the BCBH workshop. Feasibility data consisted of field notes by a research assistant who provided computer training, participants' weekly logs, and a semi-structured interview with each participant at 4 weeks after the completion of BCBH. All those who initially lacked computer skills were able to participate in BCBH with a few hours of face-to-face demonstration and training. The 4-week postintervention follow-up showed significant improvement in health and self-management outcomes. Aging-service agencies need to introduce BCBH to low-income homebound older adults and utilize their volunteer base to provide computer and Internet skills training for low-income homebound older adults in need of such training.

  11. Efficient computation of electrograms and ECGs in human whole heart simulations using a reaction-eikonal model.

    PubMed

    Neic, Aurel; Campos, Fernando O; Prassl, Anton J; Niederer, Steven A; Bishop, Martin J; Vigmond, Edward J; Plank, Gernot

    2017-10-01

    Anatomically accurate and biophysically detailed bidomain models of the human heart have proven a powerful tool for gaining quantitative insight into the links between electrical sources in the myocardium and the concomitant current flow in the surrounding medium as they represent their relationship mechanistically based on first principles. Such models are increasingly considered as a clinical research tool with the perspective of being used, ultimately, as a complementary diagnostic modality. An important prerequisite in many clinical modeling applications is the ability of models to faithfully replicate potential maps and electrograms recorded from a given patient. However, while the personalization of electrophysiology models based on the gold standard bidomain formulation is in principle feasible, the associated computational expenses are significant, rendering their use incompatible with clinical time frames. In this study we report on the development of a novel computationally efficient reaction-eikonal (R-E) model for modeling extracellular potential maps and electrograms. Using a biventricular human electrophysiology model, which incorporates a topologically realistic His-Purkinje system (HPS), we demonstrate by comparing against a high-resolution reaction-diffusion (R-D) bidomain model that the R-E model predicts extracellular potential fields, electrograms as well as ECGs at the body surface with high fidelity and offers vast computational savings greater than three orders of magnitude. Due to their efficiency R-E models are ideally suitable for forward simulations in clinical modeling studies which attempt to personalize electrophysiological model features.

  12. Efficient computation of electrograms and ECGs in human whole heart simulations using a reaction-eikonal model

    NASA Astrophysics Data System (ADS)

    Neic, Aurel; Campos, Fernando O.; Prassl, Anton J.; Niederer, Steven A.; Bishop, Martin J.; Vigmond, Edward J.; Plank, Gernot

    2017-10-01

    Anatomically accurate and biophysically detailed bidomain models of the human heart have proven a powerful tool for gaining quantitative insight into the links between electrical sources in the myocardium and the concomitant current flow in the surrounding medium as they represent their relationship mechanistically based on first principles. Such models are increasingly considered as a clinical research tool with the perspective of being used, ultimately, as a complementary diagnostic modality. An important prerequisite in many clinical modeling applications is the ability of models to faithfully replicate potential maps and electrograms recorded from a given patient. However, while the personalization of electrophysiology models based on the gold standard bidomain formulation is in principle feasible, the associated computational expenses are significant, rendering their use incompatible with clinical time frames. In this study we report on the development of a novel computationally efficient reaction-eikonal (R-E) model for modeling extracellular potential maps and electrograms. Using a biventricular human electrophysiology model, which incorporates a topologically realistic His-Purkinje system (HPS), we demonstrate by comparing against a high-resolution reaction-diffusion (R-D) bidomain model that the R-E model predicts extracellular potential fields, electrograms as well as ECGs at the body surface with high fidelity and offers vast computational savings greater than three orders of magnitude. Due to their efficiency R-E models are ideally suitable for forward simulations in clinical modeling studies which attempt to personalize electrophysiological model features.

  13. Distributed collaborative response surface method for mechanical dynamic assembly reliability design

    NASA Astrophysics Data System (ADS)

    Bai, Guangchen; Fei, Chengwei

    2013-11-01

    Because of the randomness of the many factors influencing the dynamic assembly relationships of complex machinery, the reliability analysis of dynamic assembly relationships needs to account for this randomness from a probabilistic perspective. To improve the accuracy and efficiency of dynamic assembly relationship reliability analysis, a mechanical dynamic assembly reliability (MDAR) theory and a distributed collaborative response surface method (DCRSM) are proposed. The mathematical model of the DCRSM is established based on a quadratic response surface function and verified by the assembly relationship reliability analysis of the aeroengine high pressure turbine (HPT) blade-tip radial running clearance (BTRRC). Comparison of the DCRSM, the traditional response surface method (RSM), and the Monte Carlo Method (MCM) shows that the DCRSM is not only able to accomplish the computational task that is impossible for the other methods when the number of simulations exceeds 100,000, but its computational precision is also basically consistent with the MCM and improved by 0.40-4.63% relative to the RSM; furthermore, the computational efficiency of the DCRSM is up to about 188 times that of the MCM and 55 times that of the RSM for 10,000 simulations. The DCRSM is demonstrated to be a feasible and effective approach for markedly improving the computational efficiency and accuracy of MDAR analysis. Thus, the proposed research provides a promising theory and method for MDAR design and optimization, and opens a novel research direction in probabilistic analysis for developing high-performance, high-reliability aeroengines.
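
    A minimal illustration of the response-surface idea underlying the DCRSM (not the distributed collaborative variant itself): fit a quadratic surrogate to a small set of expensive model evaluations by least squares, then run Monte Carlo reliability sampling on the cheap surrogate. The limit-state function below is a made-up placeholder.

    import numpy as np

    # Quadratic response surface + Monte Carlo reliability: fit
    # y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2 by least
    # squares, then sample the cheap surrogate. The "expensive" margin
    # function below is a placeholder, not an aeroengine model.

    def expensive_model(x):                     # e.g., a clearance margin
        return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1] ** 2 + 0.2 * x[:, 0] * x[:, 1]

    def quad_features(x):
        x1, x2 = x[:, 0], x[:, 1]
        return np.column_stack([np.ones(len(x)), x1, x2,
                                x1 ** 2, x2 ** 2, x1 * x2])

    rng = np.random.default_rng(8)
    X_doe = rng.uniform(-2, 2, size=(30, 2))    # small design of experiments
    beta, *_ = np.linalg.lstsq(quad_features(X_doe), expensive_model(X_doe),
                               rcond=None)

    # Monte Carlo on the surrogate: failure when the margin drops below 0.
    X_mc = rng.normal(0.0, 1.0, size=(1_000_000, 2))
    margins = quad_features(X_mc) @ beta
    print("estimated failure probability:", np.mean(margins < 0.0))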

  14. Slat Cove Noise Modeling: A Posteriori Analysis of Unsteady RANS Simulations

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan; Khorrami, Mehdi R.; Lockard, David P.; Atkins, Harold L.; Lilley, Geoffrey M.

    2002-01-01

    A companion paper by Khorrami et al. demonstrates the feasibility of simulating the (nominally) self-sustained, large-scale unsteadiness within the leading-edge slat-cove region of multi-element airfoils using the unsteady Reynolds-Averaged Navier-Stokes (URANS) equations, provided that the turbulence production term in the underlying two-equation turbulence model is switched off within the cove region. In conjunction with a Ffowcs Williams-Hawkings solver, the URANS computations were shown to capture the dominant portion of the acoustic spectrum attributed to slat noise, as well as to reproduce the increased intensity of slat cove motions (and, correspondingly, of far-field noise) at the lower angles of attack. This paper examines that simulation database, augmented by additional simulations, with the objective of transitioning this apparent success to aeroacoustic predictions in an engineering context. As a first step towards this goal, the simulated flow and acoustic fields are compared with experiment and with a simplified analytical model. Rather intense near-field fluctuations in the simulated flow are found to be associated with unsteady separation along the slat bottom surface, relatively close to the slat cusp. The accuracy of the laminar-cove simulations in this near-wall region is raised as an open issue. The adjoint Green's function approach is also explored in an attempt to identify the most efficient noise source locations.

  15. High-Fidelity Dynamic Modeling of Spacecraft in the Continuum--Rarefied Transition Regime

    NASA Astrophysics Data System (ADS)

    Turansky, Craig P.

    The state of the art of spacecraft rarefied aerodynamics seldom accounts for detailed rigid-body dynamics. In part because of computational constraints, simpler models based upon the ballistic and drag coefficients are employed. Of particular interest is the continuum-rarefied transition regime of Earth's thermosphere where gas dynamic simulation is difficult yet wherein many spacecraft operate. The feasibility of increasing the fidelity of modeling spacecraft dynamics is explored by coupling rarefied aerodynamics with rigid-body dynamics modeling similar to that traditionally used for aircraft in atmospheric flight. Presented is a framework of analysis and guiding principles which capitalize on the availability of increasing computational methods and resources. Aerodynamic force inputs for modeling spacecraft in two dimensions in a rarefied flow are provided by analytical equations in the free-molecular regime, and the direct simulation Monte Carlo method in the transition regime. The application of the direct simulation Monte Carlo method to this class of problems is examined in detail with a new code specifically designed for engineering-level rarefied aerodynamic analysis. Time-accurate simulations of two distinct geometries in low thermospheric flight and atmospheric entry are performed, demonstrating non-linear dynamics that cannot be predicted using simpler approaches. The results of this straightforward approach to the aero-orbital coupled-field problem highlight the possibilities for future improvements in drag prediction, control system design, and atmospheric science. Furthermore, a number of challenges for future work are identified in the hope of stimulating the development of a new subfield of spacecraft dynamics.

  16. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level. They are: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing those same algorithms into NASA's Future ATM Concepts Evaluation Tool (FACET); it compares these to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers for potential computational enhancement through parallel processing on the computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates that with FACET to facilitate the use of the new features which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing their computational efficiencies and the potential applications of the optimized trajectories. The paper shows that in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  17. Multilevel summation method for electrostatic force evaluation.

    PubMed

    Hardy, David J; Wu, Zhe; Phillips, James C; Stone, John E; Skeel, Robert D; Schulten, Klaus

    2015-02-10

    The multilevel summation method (MSM) offers an efficient algorithm utilizing convolution for evaluating long-range forces arising in molecular dynamics simulations. Shifting the balance of computation and communication, MSM provides key advantages over the ubiquitous particle–mesh Ewald (PME) method, offering better scaling on parallel computers and permitting more modeling flexibility, with support for periodic systems as does PME but also for semiperiodic and nonperiodic systems. The version of MSM available in the simulation program NAMD is described, and its performance and accuracy are compared with the PME method. The accuracy feasible for MSM in practical applications reproduces PME results for water property calculations of density, diffusion constant, dielectric constant, surface tension, radial distribution function, and distance-dependent Kirkwood factor, even though the numerical accuracy of PME is higher than that of MSM. Excellent agreement between MSM and PME is found also for interface potentials of air–water and membrane–water interfaces, where long-range Coulombic interactions are crucial. Applications demonstrate also the suitability of MSM for systems with semiperiodic and nonperiodic boundaries. For this purpose, simulations have been performed with periodic boundaries along directions parallel to a membrane surface but not along the surface normal, yielding membrane pore formation induced by an imbalance of charge across the membrane. Using a similar semiperiodic boundary condition, ion conduction through a graphene nanopore driven by an ion gradient has been simulated. Furthermore, proteins have been simulated inside a single spherical water droplet. Finally, parallel scalability results show the ability of MSM to outperform PME when scaling a system of modest size (less than 100 K atoms) to over a thousand processors, demonstrating the suitability of MSM for large-scale parallel simulation.

  18. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    PubMed Central

    An, Yongkai; Lu, Wenxi; Cheng, Weiguo

    2015-01-01

    This paper introduces a surrogate model to identify an optimal exploitation scheme, with western Jilin Province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu County and Qian Gorlos County, respectively, so as to supply water to Daan County. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate of the numerical groundwater flow simulation model was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. The contrast between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours while the latter needs 25 days. The above results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process, but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
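
    As a schematic of this surrogate workflow, the sketch below draws an LHS design, fits a Gaussian-process surrogate (a close relative of regression kriging, used here as a stand-in), and checks its accuracy against the "expensive" model. The drawdown function is a made-up placeholder, not the groundwater simulator.

    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Stand-in for the expensive groundwater simulation: drawdown as a
    # made-up smooth function of two pumping rates (placeholder only).
    def expensive_simulation(q):
        return 0.8 * q[:, 0] ** 0.7 + 0.5 * q[:, 1] ** 0.9 + 0.1 * q[:, 0] * q[:, 1]

    # Latin Hypercube Sampling of the feasible region, as in the paper.
    sampler = qmc.LatinHypercube(d=2, seed=4)
    X_train = qmc.scale(sampler.random(30), [0.0, 0.0], [10.0, 10.0])
    y_train = expensive_simulation(X_train)

    # Gaussian-process surrogate (closely related to regression kriging).
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                  normalize_y=True)
    gp.fit(X_train, y_train)

    # Validate the cheap surrogate against fresh "expensive" evaluations.
    X_test = qmc.scale(sampler.random(200), [0.0, 0.0], [10.0, 10.0])
    err = np.abs(gp.predict(X_test) - expensive_simulation(X_test))
    print(f"mean abs surrogate error: {err.mean():.4f}")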

  19. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems

    PubMed Central

    Wu, Jun; Su, Zhou; Li, Jianhua

    2017-01-01

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential to provide low latency for communication from sensing data sources to users. For objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, efficient social organization can enable more flexible, secure, and collaborative networking. These advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which fog services are provisioned based on “friend” relationships. To the best of our knowledge, this is the first attempt at a fog computing system organized on a social model. Meanwhile, social networking enhances the complexity and security risks of fog computing services, creating difficulties for security service recommendation in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems. PMID:28758943

  20. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems.

    PubMed

    Wu, Jun; Su, Zhou; Wang, Shen; Li, Jianhua

    2017-07-30

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential to provide low latency for communication from sensing data sources to users. For objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, efficient social organization can enable more flexible, secure, and collaborative networking. These advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which fog services are provisioned based on "friend" relationships. To the best of our knowledge, this is the first attempt at a fog computing system organized on a social model. Meanwhile, social networking enhances the complexity and security risks of fog computing services, creating difficulties for security service recommendation in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems.

  1. Optical clearing of vaginal tissues, ex vivo, for minimally invasive laser treatment of female stress urinary incontinence

    PubMed Central

    Chang, Chun-Hung; Myers, Erinn M.; Kennelly, Michael J.; Fried, Nathaniel M.

    2017-01-01

    Near-infrared laser energy in conjunction with applied tissue cooling is being investigated for thermal remodeling of the endopelvic fascia during minimally invasive treatment of female stress urinary incontinence. Previous computer simulations of light transport, heat transfer, and tissue thermal damage have shown that a transvaginal approach is more feasible than a transurethral approach. However, results were suboptimal, and some undesirable thermal insult to the vaginal wall was still predicted. This study uses experiments and computer simulations to explore whether application of an optical clearing agent (OCA) can further improve optical penetration depth and completely preserve the vaginal wall during subsurface treatment of the endopelvic fascia. Several different mixtures of OCAs were tested, and 100% glycerol was found to be the optimal agent. Optical transmission studies, optical coherence tomography, reflection spectroscopy, and computer simulations [including Monte Carlo (MC) light transport, heat transfer, and an Arrhenius integral model of thermal damage] using glycerol were performed. The OCA produced a 61% increase in optical transmission through porcine vaginal wall at 37°C after 30 min. The MC model showed improved energy deposition in the endopelvic fascia using glycerol. Without OCA, 62%, 37%, and 1% of energy was deposited in the vaginal wall, endopelvic fascia, and urethral wall, respectively, compared with 50%, 49%, and 1% using OCA. Use of OCA also resulted in a 0.5-mm increase in treatment depth, allowing potential thermal tissue remodeling at a depth of 3 mm with complete preservation of the vaginal wall. PMID:28301637
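
    The Monte Carlo light-transport component can be illustrated with a deliberately minimal 1-D photon walk: sample exponential free paths, then absorb or scatter at each interaction. The optical coefficients and slab thickness below are illustrative, not the tissue values used in the study.

    import math
    import random

    # Toy 1-D Monte Carlo photon transport through a slab: exponential free
    # paths, then absorption or (isotropic, 1-D) scattering at each event.

    MU_A, MU_S = 0.1, 10.0        # absorption / scattering coefficients, 1/mm
    MU_T = MU_A + MU_S
    THICKNESS = 3.0               # slab depth, mm (placeholder)

    def propagate(rng):
        z, direction = 0.0, 1.0   # photon enters the surface heading inward
        while True:
            z += direction * (-math.log(1.0 - rng.random()) / MU_T)
            if z < 0.0:
                return "reflected"
            if z > THICKNESS:
                return "transmitted"
            if rng.random() < MU_A / MU_T:
                return "absorbed"
            direction = rng.choice((-1.0, 1.0))   # 1-D isotropic scatter

    rng = random.Random(7)
    n = 20000
    outcomes = [propagate(rng) for _ in range(n)]
    for kind in ("absorbed", "reflected", "transmitted"):
        print(kind, outcomes.count(kind) / n)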

  2. Feasibility study, software design, layout and simulation of a two-dimensional fast Fourier transform machine for use in optical array interferometry

    NASA Technical Reports Server (NTRS)

    Boriakoff, Valentin; Chen, Wei

    1990-01-01

    The NASA-Cornell Univ.-Worcester Polytechnic Institute Fast Fourier Transform (FFT) chip, based on the architecture for systolic FFT computation presented by Boriakoff, is implemented as an operating device design. The kernel of the system, a systolic inner-product floating-point processor, was designed to be assembled into a systolic network that takes incoming data streams in pipeline fashion and provides FFT output at the same rate, word by word. It was thoroughly simulated for proper operation, and it passed a comprehensive set of tests showing no operational errors. The black-box specifications of the chip, which conform to the initial requirements of the design as specified by NASA, are given. The five subcells are described, and their high-level functional descriptions, logic diagrams, and simulation results are presented. Some modifications of the Read Only Memory (ROM) design were made, since errors were found in it. Because a four-stage pipeline structure was used, simulation is more difficult than for an ordinary structure; the simulation methods are discussed. Chip signal protocols and chip pinout are explained.
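
    Illustrative sketch (editor's addition): the systolic formulation treats each DFT output X[k] as the inner product of the sample stream with a row of twiddle factors, one multiply-accumulate per cell per incoming word. The Python sketch below shows that arithmetic only, not the chip's cell network or its four-stage pipeline.

        import cmath

        def dft_as_inner_products(x):
            n = len(x)
            w = [[cmath.exp(-2j * cmath.pi * i * k / n) for i in range(n)]
                 for k in range(n)]                  # twiddle-factor rows
            out = [0j] * n
            for i, sample in enumerate(x):           # samples arrive word by word
                for k in range(n):                   # cell k does one multiply-accumulate
                    out[k] += w[k][i] * sample
            return out

        print([round(abs(v), 3) for v in dft_as_inner_products([1, 0, 0, 0])])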

  3. A novel method for energy harvesting simulation based on scenario generation

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Li, Taoshen; Xiao, Nan; Ye, Jin; Wu, Min

    2018-06-01

    The energy harvesting network (EHN) is a new form of computer network. It converts ambient energy into usable electric energy and supplies that energy as a primary or secondary power source to communication devices. However, most EHN studies use an analytical probability distribution function to describe the energy harvesting process, which cannot accurately capture the actual situation because it lacks authenticity. We propose an EHN simulation method based on scenario generation in this paper. First, instead of setting a probability distribution in advance, it uses optimal scenario reduction technology to generate representative single-period scenarios from historical data of the harvested energy. Second, it uses a homogeneous simulated annealing algorithm to generate optimal daily energy-harvesting scenario sequences, yielding a more accurate simulation of the random characteristics of the energy harvesting network. Then, taking actual wind power data as an example, the accuracy and stability of the method are verified by comparison with the real data. Finally, we present an instance of optimizing network throughput, whose optimal solution and data analysis indicate the feasibility and effectiveness of the proposed method for energy harvesting simulation.
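
    Illustrative sketch (editor's addition): one common formulation of optimal scenario reduction is forward selection, keeping the k scenarios that minimize the probability-weighted distance of the discarded ones. The Python sketch below shows that formulation on toy data; the paper's exact reduction rule and its simulated-annealing sequencing step are not reproduced.

        def reduce_scenarios(samples, k):
            """Equal-probability samples in, {kept_scenario: probability} out."""
            kept, remaining = [], list(samples)
            while len(kept) < k and remaining:
                # keep the candidate that minimizes total distance to all samples
                best = min(remaining,
                           key=lambda c: sum(min(abs(s - q) for q in kept + [c])
                                             for s in samples))
                kept.append(best)
                remaining.remove(best)
            probs = {c: 0.0 for c in kept}
            for s in samples:                        # redistribute probability mass
                probs[min(kept, key=lambda c: abs(c - s))] += 1 / len(samples)
            return probs

        wind_power = [0.10, 0.12, 0.50, 0.55, 0.90, 0.95, 0.93, 0.20]
        print(reduce_scenarios(wind_power, k=3))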

  4. First-order convex feasibility algorithms for x-ray CT

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob S.; Pan, Xiaochuan

    2013-01-01

    Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms, which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT application. PMID:23464295
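
    Illustrative sketch (editor's addition): the convex feasibility formulation can be shown with plain alternating projections (POCS) on a toy two-pixel "image", seeking x in the intersection of a data constraint {x : a.x = b} and a box constraint {x : 0 <= x <= 1}. This is a simpler method than the paper's accelerated Chambolle-Pock algorithm; it only demonstrates the formulation.

        import numpy as np

        a, b = np.array([1.0, 2.0]), 1.5       # toy measurement constraint a.x = b

        def project_data(x):                   # orthogonal projection onto {a.x = b}
            return x + (b - a @ x) / (a @ a) * a

        def project_box(x):                    # projection onto {0 <= x <= 1}
            return np.clip(x, 0.0, 1.0)

        x = np.array([2.0, -1.0])
        for _ in range(50):                    # alternate projections until x settles
            x = project_box(project_data(x))
        print(x, "residual:", abs(a @ x - b))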

  5. Study on a discrete-time dynamic control model to enhance nitrogen removal with fluctuation of influent in oxidation ditches.

    PubMed

    Liu, Yanchen; Shi, Hanchang; Shi, Huiming; Wang, Zhiqiang

    2010-10-01

    The aim of this study was to propose a new control model, feasible for on-line implementation on a Programmable Logic Controller (PLC), to enhance nitrogen removal against influent fluctuation in a Carrousel oxidation ditch. The discrete-time control model was established by combining a confirmation model of operational conditions, based on expert knowledge obtained from simulations using Activated Sludge Model 2-D (ASM2-D) and computational fluid dynamics (CFD), with a discrete-time rule for switching between different operational stages. A full-scale example is provided to demonstrate the feasibility of the proposed operation and the procedure of the control design. The effluent quality was substantially improved, to the extent that it met the new wastewater discharge standards of NH(3)-N < 5 mg/L and TN < 15 mg/L enacted in China throughout a one-day period of influent fluctuation.
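
    Illustrative sketch (editor's addition): the kind of discrete-time, rule-based stage switching a PLC can execute might look like the Python sketch below, where each control interval's NH3-N and TN readings select the next operational stage. The thresholds mirror the cited discharge standards, but the stage names and rules are invented for illustration and are not the paper's ASM2-D/CFD-derived expert model.

        def next_stage(nh3_n, tn, nh3_limit=5.0, tn_limit=15.0):
            if nh3_n > nh3_limit:
                return "HIGH_AERATION"   # push nitrification
            if tn > tn_limit:
                return "LOW_AERATION"    # favor denitrification
            return "NORMAL"

        # a few hours of fluctuating influent (illustrative readings, mg/L)
        readings = [(6.2, 14.0), (4.1, 16.3), (3.0, 12.1)]
        for nh3, tn in readings:
            print(f"NH3-N={nh3}, TN={tn} -> {next_stage(nh3, tn)}")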

  6. A photonic circuit for complementary frequency shifting, in-phase quadrature/single sideband modulation and frequency multiplication: analysis and integration feasibility

    NASA Astrophysics Data System (ADS)

    Hasan, Mehedi; Hu, Jianqi; Nikkhah, Hamdam; Hall, Trevor

    2017-08-01

    A novel photonic integrated circuit architecture for implementing orthogonal frequency division multiplexing by means of photonic generation of phase-correlated sub-carriers is proposed. The circuit can also be used for implementing complex modulation, frequency up-conversion of the electrical signal to the optical domain and frequency multiplication. The principles of operation of the circuit are expounded using transmission matrices and the predictions of the analysis are verified by computer simulation using an industry-standard software tool. Non-ideal scenarios that may affect the correct function of the circuit are taken into consideration and quantified. The discussion of integration feasibility is illustrated by a photonic integrated circuit that has been fabricated using 'library' components and which features most of the elements of the proposed circuit architecture. The circuit is found to be practical and may be fabricated in any material platform that offers a linear electro-optic modulator such as organic or ferroelectric thin films hybridized with silicon photonics.
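
    Illustrative sketch (editor's addition): the transmission-matrix analysis style mentioned above cascades 2x2 blocks; the Python sketch below combines a textbook 50:50 coupler matrix and a phase-shifter matrix into a Mach-Zehnder transfer function. It illustrates the method only, not the paper's circuit.

        import numpy as np

        def coupler():                         # ideal 50:50 directional coupler
            return (1 / np.sqrt(2)) * np.array([[1, 1j], [1j, 1]])

        def phase(phi):                        # phase shift phi on the upper arm
            return np.array([[np.exp(1j * phi), 0], [0, 1]])

        def mzi(phi):                          # cascade: coupler . phase . coupler
            return coupler() @ phase(phi) @ coupler()

        for phi in (0.0, np.pi / 2, np.pi):
            out = mzi(phi) @ np.array([1, 0])  # light enters the upper port
            print(f"phi={phi:.2f} -> output powers {np.abs(out)**2}")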

  7. Contributed Review: The feasibility of a fully miniaturized magneto-optical trap for portable ultracold quantum technology.

    PubMed

    Rushton, J A; Aldous, M; Himsworth, M D

    2014-12-01

    Experiments using laser cooled atoms and ions show real promise for practical applications in quantum-enhanced metrology, timing, navigation, and sensing, as well as exotic roles in quantum computing, networking, and simulation. The heart of many of these experiments has been translated to microfabricated platforms known as atom chips, whose construction readily lends itself to integration with larger systems and future mass production. To truly make the jump from laboratory demonstrations to practical, rugged devices, the complex surrounding infrastructure (including vacuum systems, optics, and lasers) also needs to be miniaturized and integrated. In this paper we explore the feasibility of applying this approach to the Magneto-Optical Trap, incorporating the vacuum system, atom source, and optical geometry into a permanently sealed micro-litre system capable of maintaining 10(-10) mbar for more than 1000 days of operation with passive pumping alone. We demonstrate that such an engineering challenge is achievable using recent advances in semiconductor microfabrication techniques and materials.

  8. Contributed Review: The feasibility of a fully miniaturized magneto-optical trap for portable ultracold quantum technology

    NASA Astrophysics Data System (ADS)

    Rushton, J. A.; Aldous, M.; Himsworth, M. D.

    2014-12-01

    Experiments using laser cooled atoms and ions show real promise for practical applications in quantum-enhanced metrology, timing, navigation, and sensing, as well as exotic roles in quantum computing, networking, and simulation. The heart of many of these experiments has been translated to microfabricated platforms known as atom chips, whose construction readily lends itself to integration with larger systems and future mass production. To truly make the jump from laboratory demonstrations to practical, rugged devices, the complex surrounding infrastructure (including vacuum systems, optics, and lasers) also needs to be miniaturized and integrated. In this paper we explore the feasibility of applying this approach to the Magneto-Optical Trap, incorporating the vacuum system, atom source, and optical geometry into a permanently sealed micro-litre system capable of maintaining 10^(-10) mbar for more than 1000 days of operation with passive pumping alone. We demonstrate that such an engineering challenge is achievable using recent advances in semiconductor microfabrication techniques and materials.

  9. Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches to non-deterministic optimization are presented. The non-deterministic minimization of a combined failure stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization, and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by the 0.50 and 0.999 probabilities.

  10. Reproducibility of Ultrasound-Guided High Intensity Focused Ultrasound (HIFU) Thermal Lesions in Minimally-Invasive Brain Surgery

    NASA Astrophysics Data System (ADS)

    Zahedi, Sulmaz

    This study aims to demonstrate the feasibility of using Ultrasound-Guided High Intensity Focused Ultrasound (USg-HIFU) to create thermal lesions in neurosurgical applications, allowing precise ablation of brain tissue while simultaneously providing real-time imaging. To test the feasibility of the system, an optically transparent, HIFU-compatible tissue-mimicking phantom model was produced. USg-HIFU was then used for ablation of the phantom, with and without targets. Finally, ex vivo lamb brain tissue was imaged and ablated using the USg-HIFU system. Real-time ultrasound images and videos obtained throughout the ablation process showed clear lesion formation at the focal point of the HIFU transducer. Post-ablation gross and histopathology examinations were conducted to verify thermal and mechanical damage in the ex vivo lamb brain tissue. Finally, thermocouple readings were obtained, and HIFU field computer simulations were conducted to verify the findings. The results of the study demonstrate the reproducibility of USg-HIFU thermal lesions for neurosurgical applications.

  11. Contributed Review: The feasibility of a fully miniaturized magneto-optical trap for portable ultracold quantum technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rushton, J. A.; Aldous, M.; Himsworth, M. D., E-mail: m.d.himsworth@soton.ac.uk

    2014-12-15

    Experiments using laser cooled atoms and ions show real promise for practical applications in quantum-enhanced metrology, timing, navigation, and sensing, as well as exotic roles in quantum computing, networking, and simulation. The heart of many of these experiments has been translated to microfabricated platforms known as atom chips, whose construction readily lends itself to integration with larger systems and future mass production. To truly make the jump from laboratory demonstrations to practical, rugged devices, the complex surrounding infrastructure (including vacuum systems, optics, and lasers) also needs to be miniaturized and integrated. In this paper we explore the feasibility of applying this approach to the Magneto-Optical Trap, incorporating the vacuum system, atom source, and optical geometry into a permanently sealed micro-litre system capable of maintaining 10^(-10) mbar for more than 1000 days of operation with passive pumping alone. We demonstrate that such an engineering challenge is achievable using recent advances in semiconductor microfabrication techniques and materials.

  12. Experimental evaluation of a wind shear alert and energy management display

    NASA Technical Reports Server (NTRS)

    Kraiss, K.-F.; Baty, D. L.

    1978-01-01

    A method is proposed for onboard measurement and display of specific windshear and energy management data derived from an air data computer. An open-loop simulation study is described which was carried out to verify the feasibility of this display concept, and whose results were used as a basis to develop the respective cockpit instrumentation. The task was to fly a three-degree landing approach under various shear conditions with and without specific information on the shear. Improved performance due to augmented cockpit information was observed. Critical shears with increasing tailwinds could be handled more consistently and with less deviation from the glide path.

  13. A Power Regulation and Droop Mode Control Method for a Stand-Alone Load Fed from a PV-Current Source Inverter

    NASA Astrophysics Data System (ADS)

    Khayamy, Mehdy; Ojo, Olorunfemi

    2015-04-01

    A current source inverter fed from photovoltaic cells is proposed to power an autonomous load when operating under either power regulation or voltage and frequency drooping modes. An input-output linearization technique is applied to the overall nonlinear system to achieve a globally stable system under feasible operating conditions. After obtaining the steady-state model that demarcates the modes of operation, computer simulation results for variations in irradiance and load power of the controlled system are generated, in which an acceptable dynamic response of the power generation system under the two modes of operation is observed.
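
    Illustrative sketch (editor's addition): the standard droop laws behind "droop mode" operation are f = f0 - mp*(P - P0) and V = V0 - mq*(Q - Q0). The Python sketch below uses invented gains and set-points; the paper's input-output linearization layer is not reproduced.

        def droop(p, q, f0=60.0, v0=120.0, p0=0.0, q0=0.0, mp=0.05, mq=0.02):
            """Frequency and voltage commands for measured load P (kW), Q (kvar)."""
            return f0 - mp * (p - p0), v0 - mq * (q - q0)

        for p, q in [(0.0, 0.0), (5.0, 2.0), (10.0, 4.0)]:
            f, v = droop(p, q)
            print(f"P={p} kW, Q={q} kvar -> f={f:.2f} Hz, V={v:.2f} V")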

  14. An efficient wireless power transfer system with security considerations for electric vehicle applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zhen; Chau, K. T., E-mail: ktchau@eee.hku.hk; Liu, Chunhua

    2014-05-07

    This paper presents a secure inductive wireless power transfer (WPT) system for electric vehicle (EV) applications, such as charging the electric devices inside EVs and performing energy exchange between EVs. The key is to employ chaos theory to encrypt the wirelessly transferred energy, which can then be decrypted only by specific receptors in the multi-objective system. In this paper, the principle of encrypted WPT is first revealed. Then, computer simulation is conducted to validate the feasibility of the proposed system. Moreover, a comparison of the WPT systems with and without encryption shows that the proposed energy encryption scheme does not incur noticeable additional power consumption.
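
    Illustrative sketch (editor's addition): one way chaos theory can "encrypt" transferred energy is to let a chaotic map drive the operating-frequency hopping so that only a receptor seeded with the same key stays tuned. The Python sketch below uses a logistic map with an invented frequency band; it is not the paper's design.

        def chaotic_hops(seed, n, f_lo=80e3, f_hi=90e3, r=3.99):
            x, freqs = seed, []
            for _ in range(n):
                x = r * x * (1 - x)            # logistic map in its chaotic regime
                freqs.append(f_lo + x * (f_hi - f_lo))
            return freqs

        tx = chaotic_hops(seed=0.31415, n=5)       # transmitter's hopping schedule
        rx_good = chaotic_hops(seed=0.31415, n=5)  # receptor sharing the key
        rx_bad = chaotic_hops(seed=0.31416, n=5)   # eavesdropper, key off by 1e-5
        print([f"{f / 1e3:.2f} kHz" for f in tx])
        print("authorized receptor tracks:", tx == rx_good)
        print("eavesdropper tracks:", tx == rx_bad)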

  15. Physical models and primary design of reactor based slow positron source at CMRR

    NASA Astrophysics Data System (ADS)

    Wang, Guanbo; Li, Rundong; Qian, Dazhi; Yang, Xin

    2018-07-01

    Slow positron facilities are widely used in materials science. A high-intensity slow positron source is now at the design stage, based on the China Mianyang Research Reactor (CMRR). This paper describes the physical models and our primary design. We use different computer programs and mathematical formulas to simulate the different physical processes, and validate them with appropriate experiments. Considering feasibility, we propose a primary design containing a cadmium shield, a honeycomb-arranged assembly of tungsten (W) tubes, electrical lenses, and a solenoid. It is planned to be inserted vertically in the Si-doping channel, and the beam intensity is expected to be 5 × 10^9

  16. Feasibility study of transit photon correlation anemometer for Ames Research Center unitary wind tunnel plan

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.; Smart, A. E.

    1979-01-01

    A laser transit anemometer measures a two-dimensional vector velocity using the transit time of scattering particles between two focused, parallel laser beams. The objectives were: (1) determination of the concentration levels and light scattering efficiencies of naturally occurring, submicron particles in the NASA/Ames unitary wind tunnel, and (2) evaluation, based on these measured data, of a laser transit anemometer with digital correlation processing for nonintrusive velocity measurement in this facility. The evaluation criteria were the speeds at which point velocity measurements could be realized with this technique (as determined from computer simulations) for given accuracy requirements.
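
    Illustrative sketch (editor's addition): the transit-time principle is v = d / t for beam separation d, with t estimated in practice from the peak of the cross-correlation between the two photodetector signals. The Python sketch below uses synthetic pulse signals and an invented beam separation.

        import numpy as np

        rng = np.random.default_rng(0)
        fs = 1e6                     # sample rate, Hz
        d = 500e-6                   # beam separation, m (illustrative)
        t = np.arange(0, 2e-3, 1 / fs)

        def pulse(center, width=20e-6):
            return np.exp(-0.5 * ((t - center) / width) ** 2)

        sig1 = pulse(0.4e-3) + 0.05 * rng.standard_normal(t.size)  # beam 1
        sig2 = pulse(0.9e-3) + 0.05 * rng.standard_normal(t.size)  # beam 2, delayed

        corr = np.correlate(sig2, sig1, mode="full")
        transit = (np.argmax(corr) - (t.size - 1)) / fs            # transit time, s
        print(f"transit = {transit * 1e6:.0f} us, speed = {d / transit:.2f} m/s")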

  17. Photodetachment and Doppler laser cooling of anionic molecules

    NASA Astrophysics Data System (ADS)

    Gerber, Sebastian; Fesel, Julian; Doser, Michael; Comparat, Daniel

    2018-02-01

    We propose to extend laser-cooling techniques, so far only achieved for neutral molecules, to molecular anions. A detailed computational study is performed for C2^- molecules stored in Penning traps, using GPU-based Monte Carlo simulations. Two cooling schemes, Doppler laser cooling and photodetachment cooling, are investigated. The sympathetic cooling of antiprotons is studied for the Doppler cooling scheme, where it is shown that cooling of antiprotons to sub-Kelvin temperatures could become feasible, with impacts on the field of antimatter physics. The presented cooling schemes also have applications for the generation of cold, negatively charged particle sources and for the sympathetic cooling of other molecular anions.
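
    Illustrative sketch (editor's addition): Doppler-cooling Monte Carlo codes iterate the two-level scattering rate R(delta) = (Gamma/2) * s / (1 + s + (2*delta/Gamma)^2) and the resulting velocity-dependent force from counter-propagating beams. The numbers below are generic placeholders, not the C2^- transition used in the paper.

        import numpy as np

        HBAR = 1.054571817e-34
        GAMMA = 2 * np.pi * 10e6    # linewidth, rad/s (illustrative)
        K = 2 * np.pi / 500e-9      # wavevector of a 500 nm beam (illustrative)
        s, delta = 1.0, -GAMMA / 2  # saturation parameter, red detuning

        def scatter_rate(det):
            return (GAMMA / 2) * s / (1 + s + (2 * det / GAMMA) ** 2)

        def force(v):   # beams along +x and -x, Doppler-shifted by -/+ K*v
            return HBAR * K * (scatter_rate(delta - K * v) - scatter_rate(delta + K * v))

        for v in (0.0, 1.0, 5.0):   # m/s; force opposes motion, i.e., cooling
            print(f"v = {v} m/s -> F = {force(v):.2e} N")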

  18. Hierarchical clustering method for improved prostate cancer imaging in diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Kavuri, Venkaiah C.; Liu, Hanli

    2013-03-01

    We investigate the feasibility of trans-rectal near-infrared (NIR) diffuse optical tomography (DOT) for early detection of prostate cancer using a transrectal ultrasound (TRUS)-compatible imaging probe. For this purpose, we designed a TRUS-compatible, NIR-based imaging system (780 nm) in which the photodiodes were placed on the trans-rectal probe. DC signals were recorded and used for estimating the absorption coefficient. We validated the system using laboratory phantoms. For further improvement, we also developed a hierarchical clustering method (HCM) to improve the accuracy of image reconstruction with limited prior information. We demonstrated the method using computer simulations and laboratory phantom experiments.
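
    Illustrative sketch (editor's addition): the clustering idea can be shown by grouping reconstructed absorption coefficients into background and anomaly clusters with off-the-shelf agglomerative clustering. The SciPy sketch below is generic; the paper's HCM specifics (levels, priors) are not reproduced.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        # reconstructed absorption coefficients for six voxels, 1/mm (synthetic)
        mu_a = np.array([0.002, 0.0021, 0.0019, 0.008, 0.0085, 0.0082])
        Z = linkage(mu_a.reshape(-1, 1), method="ward")
        labels = fcluster(Z, t=2, criterion="maxclust")  # two-level grouping
        print(labels)   # e.g. [1 1 1 2 2 2]: the high-mu_a cluster flags the anomaly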

  19. Compressible or incompressible blend of interacting monodisperse linear polymers near a surface.

    PubMed

    Batman, Richard; Gujrati, P D

    2007-08-28

    We consider a lattice model of a mixture of repulsive, attractive, or neutral monodisperse linear polymers of two species, A and B, with a third monomeric species C, which may be taken to represent free volume. The mixture is confined between two hard, parallel plates of variable separation whose interactions with A and C may be attractive, repulsive, or neutral, and may differ from each other. The interactions with A and C are all that are required to completely specify the effect of each surface on all three components. We numerically study various density profiles as we move away from the surface, using the recursive method of Gujrati and Chhajer [J. Chem. Phys. 106, 5599 (1997)] that has previously been applied to study polydisperse solutions and blends next to surfaces. The resulting density profiles show the oscillations seen in Monte Carlo simulations and the enrichment of the smaller species at a neutral surface. The method is computationally ultrafast and can be carried out on a personal computer (PC), even in the incompressible case, when Monte Carlo simulations are not feasible. The calculations of density profiles usually take less than 20 min on a PC.

  20. Validation of coupled atmosphere-fire behavior models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bossert, J.E.; Reisner, J.M.; Linn, R.R.

    1998-12-31

    Recent advances in numerical modeling and computer power have made it feasible to simulate the dynamical interaction and feedback between the heat and turbulence induced by wildfires and the local atmospheric wind and temperature fields. At Los Alamos National Laboratory, the authors have developed a modeling system that includes this interaction by coupling a high-resolution atmospheric dynamics model, HIGRAD, with a fire behavior model, BEHAVE, to predict the spread of wildfires. The HIGRAD/BEHAVE model is run at very high resolution to properly resolve the fire/atmosphere interaction. At present, these coupled wildfire model simulations are computationally intensive. The additional complexity of these models requires sophisticated methods for assuring their reliability in real-world applications. With this in mind, a substantial part of the research effort is directed at model validation. Several instrumented prescribed fires have been conducted, with multi-agency support and participation, in chaparral, marsh, and scrub environments in coastal areas of Florida and inland California. In this paper, the authors first describe the data required to initialize the components of the wildfire modeling system. They then present results from one of the Florida fires and discuss a strategy for further testing and improvement of coupled weather/wildfire models.
