Simulation verification techniques study: Simulation self test hardware design and techniques report
NASA Technical Reports Server (NTRS)
1974-01-01
The final results of the hardware verification task are presented. The basic objectives of the various subtasks are reviewed, along with the ground rules under which the overall task was conducted and which shaped the approach taken in deriving techniques for hardware self test. The results of the first subtask, the definition of simulation hardware, are presented. The hardware definition is based primarily on a brief review of the simulator configurations anticipated for the shuttle training program. The results of the survey of current self test techniques are presented. The data sources considered in the search for current techniques are reviewed, and the survey results are presented in terms of the specific types of tests of interest for training simulator applications: readiness tests, fault isolation tests, and incipient fault detection techniques. The most applicable techniques were structured into software flows that are then referenced in discussions of techniques for specific subsystems.
Improved importance sampling technique for efficient simulation of digital communication systems
NASA Technical Reports Server (NTRS)
Lu, Dingqing; Yao, Kung
1988-01-01
A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed derivations of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these derivations are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and of IIS over CIS for simulations of digital communication systems.
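To make the variance-reduction idea concrete, the sketch below compares plain Monte Carlo with a translation-biased importance sampling estimator of a Gaussian tail probability, the scalar analogue of a bit-error-rate estimate. This is a minimal illustration in the spirit of the IIS translation parameter, not the authors' derivation; all values are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
t = 4.0                                     # error threshold for noise N ~ N(0, 1)
true_p = 0.5 * math.erfc(t / math.sqrt(2))  # exact tail probability
n = 100_000

# Plain Monte Carlo: average the indicator of the rare error event.
x_mc = rng.standard_normal(n)
mc = (x_mc > t).astype(float)

# Translation-biased IS (in the spirit of the IIS translation parameter):
# shift the noise mean to the threshold so errors become common, then
# unweight each sample with the likelihood ratio f(x)/g(x).
x_is = rng.standard_normal(n) + t
w = np.exp(-t * x_is + 0.5 * t**2)          # N(0,1) over N(t,1) density ratio
is_samples = (x_is > t) * w

for name, s in (("MC", mc), ("IS", is_samples)):
    print(f"{name}: estimate={s.mean():.3e} (true {true_p:.3e}), "
          f"std error={s.std(ddof=1) / math.sqrt(n):.1e}")
```

With the biased density centred on the threshold, roughly half the samples land in the error region, so the same standard error is reached with orders of magnitude fewer trials than plain Monte Carlo.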
Simulation verification techniques study
NASA Technical Reports Server (NTRS)
Schoonmaker, P. B.; Wenglinski, T. H.
1975-01-01
Results are summarized of the simulation verification techniques study, which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data for validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.
Development of a technique for inflight jet noise simulation. I, II
NASA Technical Reports Server (NTRS)
Clapper, W. S.; Stringas, E. J.; Mani, R.; Banerian, G.
1976-01-01
Several possible noise simulation techniques were evaluated, including closed circuit wind tunnels, free jets, rocket sleds and high speed trains. The free jet technique was selected for demonstration and verification. The first paper describes the selection and development of the technique and presents results for simulation and in-flight tests of the Learjet, F106, and Bertin Aerotrain. The second presents a theoretical study relating the two sets of noise signatures. It is concluded that the free jet simulation technique provides a satisfactory assessment of in-flight noise.
Assessment of simulation fidelity using measurements of piloting technique in flight
NASA Technical Reports Server (NTRS)
Clement, W. F.; Cleveland, W. B.; Key, D. L.
1984-01-01
The U.S. Army and NASA joined together on a project to conduct a systematic investigation and validation of a ground based piloted simulation of the Army/Sikorsky UH-60A helicopter. Flight testing was an integral part of the validation effort. Nap-of-the-Earth (NOE) piloting tasks which were investigated included the bob-up, the hover turn, the dash/quickstop, the sidestep, the dolphin, and the slalom. Results from the simulation indicate that the pilot's NOE task performance in the simulator is noticeably and quantifiably degraded when compared with the task performance results generated in flight test. The results of the flight test and ground based simulation experiments support a unique rationale for the assessment of simulation fidelity: flight simulation fidelity should be judged quantitatively by measuring the pilot's control strategy and technique as induced by the simulator. A quantitative comparison is offered between the piloting technique observed in a flight simulator and that observed in flight test for the same tasks performed by the same pilots.
Simulation verification techniques study. Subsystem simulation validation techniques
NASA Technical Reports Server (NTRS)
Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.
1974-01-01
Techniques for validation of software modules which simulate spacecraft onboard systems are discussed. An overview of the simulation software hierarchy for a shuttle mission simulator is provided. A set of guidelines for the identification of subsystem/module performance parameters and critical performance parameters is presented. Various sources of reference data to serve as standards of performance for simulation validation are identified. Environment, crew station, vehicle configuration, and vehicle dynamics simulation software are briefly discussed from the point of view of their interfaces with subsystem simulation modules. A detailed presentation of results in the area of vehicle subsystems simulation modules is included. A list of references, conclusions, and recommendations is also given.
A comparison of solute-transport solution techniques based on inverse modelling results
Mehl, S.; Hill, M.C.
2000-01-01
Five common numerical techniques (finite difference, predictor-corrector, total-variation-diminishing, method-of-characteristics, and modified-method-of-characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using randomly distributed homogeneous blocks of five sand types. This experimental model provides an outstanding opportunity to compare the solution techniques because of the heterogeneous hydraulic conductivity distribution of known structure, and the availability of detailed measurements with which to compare simulated concentrations. The present work uses this opportunity to investigate how three common types of results (simulated breakthrough curves, sensitivity analysis, and calibrated parameter values) change in this heterogeneous situation, given the different methods of simulating solute transport. The results show that simulated peak concentrations, even at very fine grid spacings, varied because of different amounts of numerical dispersion. Sensitivity analysis results were robust in that they were independent of the solution technique. They revealed extreme correlation between hydraulic conductivity and porosity, and that the breakthrough curve data did not provide enough information about the dispersivities to estimate individual values for the five sands. However, estimated hydraulic conductivity values are significantly influenced by both the large possible variations in model dispersion and the amount of numerical dispersion present in the solution technique.
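The effect of numerical dispersion on simulated peaks can be reproduced with a toy model. The sketch below (my own illustration, not the authors' sand-tank model) advects a square tracer pulse with a first-order upwind finite-difference scheme and zero physical dispersion, so any flattening of the peak is purely numerical:

```python
import numpy as np

def upwind_peak(nx, L=1.0, v=1.0, t_end=0.5, cfl=0.5):
    """Advect a square tracer pulse with first-order upwind differences and
    no physical dispersion; any loss of peak height is numerical dispersion."""
    dx = L / nx
    dt = cfl * dx / v
    x = np.arange(nx) * dx
    c = np.where((x > 0.1) & (x < 0.2), 1.0, 0.0)   # unit square pulse
    for _ in range(int(t_end / dt)):
        c[1:] -= v * dt / dx * (c[1:] - c[:-1])     # upwind update
        c[0] = 0.0                                  # clean inflow boundary
    return c.max()

for nx in (100, 400, 1600):
    print(f"nx={nx:5d}  simulated peak concentration = {upwind_peak(nx):.3f}")
```

Refining the grid recovers more of the unit peak, mirroring the paper's observation that simulated peak concentrations vary with the solution technique's numerical dispersion even at fine grid spacings.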
Nonlinear relaxation algorithms for circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleh, R.A.
Circuit simulation is an important Computer-Aided Design (CAD) tool in the design of Integrated Circuits (ICs). However, the standard techniques used in programs such as SPICE result in very long computer run times when applied to large problems. In order to reduce the overall run time, a number of new approaches to circuit simulation were developed and are described. These methods are based on nonlinear relaxation techniques and exploit the relative inactivity of large circuits. Simple waveform-processing techniques are described to determine the maximum possible speed improvement that can be obtained by exploiting this property of large circuits. Three simulation algorithms are described, two of which are based on the Iterated Timing Analysis (ITA) method and a third based on the Waveform-Relaxation Newton (WRN) method. New programs that incorporate these techniques were developed and used to simulate a variety of industrial circuits. The results from these simulations are provided. The techniques are shown to be much faster than the standard approach. In addition, a number of parallel aspects of these algorithms are described, and a general space-time model of parallel-task scheduling is developed.
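To illustrate the nonlinear relaxation idea in miniature (a generic Gauss-Seidel-Newton sweep, not Saleh's ITA or WRN algorithms), the sketch below solves a two-node resistor-diode circuit by relaxing one nodal equation at a time:

```python
import math

R1, R2, VS = 1e3, 1e3, 5.0     # source resistor, coupling resistor [ohm], supply [V]
IS, VT = 1e-12, 0.025          # diode saturation current [A], thermal voltage [V]

def relax_diode_node(v1, v2):
    """Damped scalar Newton for the diode node with the other node frozen;
    the step limit is the usual guard against exp() blow-up."""
    for _ in range(200):
        f = (v2 - v1) / R2 + IS * (math.exp(v2 / VT) - 1.0)
        df = 1.0 / R2 + (IS / VT) * math.exp(v2 / VT)
        step = max(-0.1, min(0.1, f / df))
        v2 -= step
        if abs(step) < 1e-12:
            break
    return v2

# Nonlinear Gauss-Seidel: relax node 1 (linear), then node 2 (diode),
# and repeat until the node voltages stop changing.
v1 = v2 = 0.0
for sweep in range(100):
    v1_new = (VS / R1 + v2 / R2) / (1.0 / R1 + 1.0 / R2)
    v2_new = relax_diode_node(v1_new, v2)
    converged = max(abs(v1_new - v1), abs(v2_new - v2)) < 1e-9
    v1, v2 = v1_new, v2_new
    if converged:
        break
print(f"converged after {sweep + 1} sweeps: v1 = {v1:.4f} V, v2 = {v2:.4f} V")
```

In a full simulator the payoff comes from skipping sweeps over nodes whose waveforms are inactive, which is exactly the latency property of large circuits described above.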
Computer animation challenges for computational fluid dynamics
NASA Astrophysics Data System (ADS)
Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine
2012-07-01
Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.
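Among the Eulerian grid approaches surveyed, a canonical building block is semi-Lagrangian advection (the step behind Stam's "stable fluids"), which buys unconditional stability at the cost of numerical diffusion, precisely the plausibility-over-accuracy trade-off described above. A minimal sketch, with illustrative grid and velocity values:

```python
import numpy as np

def semi_lagrangian_advect(q, u, v, dt, dx):
    """One unconditionally stable advection step: trace each grid point
    backwards through the velocity field and sample q at the departure
    point with bilinear interpolation."""
    ny, nx = q.shape
    j, i = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    x = np.clip(i - dt * u / dx, 0.0, nx - 1.001)   # backtraced positions
    y = np.clip(j - dt * v / dx, 0.0, ny - 1.001)
    i0, j0 = x.astype(int), y.astype(int)
    s, t = x - i0, y - j0
    return ((1 - s) * (1 - t) * q[j0, i0] + s * (1 - t) * q[j0, i0 + 1] +
            (1 - s) * t * q[j0 + 1, i0] + s * t * q[j0 + 1, i0 + 1])

# Carry a smoke blob diagonally across a 64x64 grid.
n, dx = 64, 1.0 / 64
q = np.zeros((n, n))
q[8:16, 8:16] = 1.0                      # initial smoke density
u = v = np.ones((n, n))                  # uniform diagonal velocity field
for _ in range(30):
    q = semi_lagrangian_advect(q, u, v, dt=0.01, dx=dx)
print("density sum before/after:", 64.0, round(q.sum(), 2))
```

The step never blows up regardless of the time step, but the interpolation smears sharp features, which is why the surveyed animation methods add turbulence detail or particles back in.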
Paper simulation techniques in user requirements analysis for interactive computer systems
NASA Technical Reports Server (NTRS)
Ramsey, H. R.; Atwood, M. E.; Willoughby, J. K.
1979-01-01
This paper describes the use of a technique called 'paper simulation' in the analysis of user requirements for interactive computer systems. In a paper simulation, the user solves problems with the aid of a 'computer', as in normal man-in-the-loop simulation. In this procedure, though, the computer does not exist, but is simulated by the experimenters. This allows simulated problem solving early in the design effort, and allows the properties and degree of structure of the system and its dialogue to be varied. The technique, and a method of analyzing the results, are illustrated with examples from a recent paper simulation exercise involving a Space Shuttle flight design task.
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported on. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported on as well.
Johnsson, A Christina E; Kjellberg, Anders; Lagerström, Monica I
2006-05-01
The aim of this study was to investigate whether nursing students improved their work technique when assisting a simulated patient from bed to wheelchair after proficiency training, and to investigate whether there was a correlation between the nursing students' work technique and the simulated patients' perceptions of the transfer. A total of 71 students participated in the study, 35 in the intervention group and 36 in the comparison group. The students assisted a simulated patient to move from a bed to a wheelchair. In the intervention group the students made one transfer before and one after training, and in the comparison group they made two transfers before training. Six variables were evaluated: work technique score; nursing students' ratings of comfort, work technique and exertion; and the simulated patients' perceptions of comfort and safety during the transfer. The results showed that nursing students improved their work technique, and that there was a correlation between the work technique and the simulated patients' subjective ratings of the transfer. In conclusion, nursing students improved their work technique after training in patient transfer methods, and the work technique affected the simulated patients' perceptions of the transfer.
A Validation Study of Merging and Spacing Techniques in a NAS-Wide Simulation
NASA Technical Reports Server (NTRS)
Glaab, Patricia C.
2011-01-01
In November 2010, Intelligent Automation, Inc. (IAI) delivered an M&S software tool that allows system-level studies of the complex terminal airspace with the ACES simulation. The software was evaluated against current-day arrivals in the Atlanta TRACON using Atlanta's Hartsfield-Jackson International Airport (KATL) arrival schedules. Results of this validation effort are presented describing data sets, traffic flow assumptions and techniques, and arrival rate comparisons between reported landings at Atlanta and simulated arrivals using the same traffic sets in ACES equipped with M&S. Initial results showed the simulated system capacity to be significantly below the arrival capacity seen at KATL. Data was gathered for Atlanta using commercial airport and flight tracking websites (like FlightAware.com) and analyzed to ensure compatible techniques were used for result reporting and comparison. TFM operators for Atlanta were consulted for tuning final simulation parameters and for guidance in flow management techniques during high-volume operations. Using these modified parameters and incorporating TFM guidance for efficiencies in flowing aircraft, arrival capacity for KATL was matched by the simulation. Following this validation effort, a sensitivity study was conducted to measure the impact of variations in system parameters on the Atlanta airport arrival capacity.
Procedure for Adapting Direct Simulation Monte Carlo Meshes
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.
1992-01-01
A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.
Non-Black-Box Simulation from One-Way Functions and Applications to Resettable Security
2012-11-05
In his seminal work from 2001, Barak (FOCS'01) introduced a novel non-black-box simulation technique. This technique enabled the construction of new cryptographic primitives, such as resettably-sound zero-knowledge arguments, that cannot be proven secure using only black-box simulation techniques. The work of Barak requires the existence of collision-resistant hash functions, and a very recent result by Bitansky and Paneth (FOCS'12) instead requires the
NASA Astrophysics Data System (ADS)
Taniguchi, Kenji
2018-04-01
To investigate future variations in high-impact weather events, numerous samples are required. For the detailed assessment of a specific region, a high spatial resolution is also required. A simple ensemble simulation technique is proposed in this paper. In the proposed technique, new ensemble members were generated from one basic state vector and two perturbation vectors, which were obtained by lagged average forecasting simulations. Sensitivity experiments with different numbers of ensemble members, different simulation lengths, and different perturbation magnitudes were performed. Experimental application to a global warming study was also implemented for a typhoon event. Ensemble-mean results and ensemble spreads of total precipitation and atmospheric conditions showed similar characteristics across the sensitivity experiments. The frequencies of the maximum total and hourly precipitation also showed similar distributions. These results indicate the robustness of the proposed technique. On the other hand, considerable ensemble spread was found in each ensemble experiment. In addition, the results of the application to a global warming study showed possible variations in the future. These results indicate that the proposed technique is useful for investigating various meteorological phenomena and the impacts of global warming. The results of the ensemble simulations also enable the stochastic evaluation of differences in high-impact weather events. In addition, the impacts of a spectral nudging technique were examined. The tracks of a typhoon were quite different between cases with and without spectral nudging; however, the ranges of the tracks among ensemble members were comparable. This indicates that spectral nudging does not necessarily suppress ensemble spread.
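One plausible reading of the member-generation step is a linear mixing of the basic state with the two lagged-forecast perturbations. The sketch below is a hypothetical reconstruction from the abstract, not the authors' exact formula; the toy vectors stand in for full model initial-condition fields valid at the same start time:

```python
import numpy as np

def make_members(x_base, p1, p2, n_members, amp=1.0):
    """Hypothetical member generator: mix one basic state with two lagged
    forecast perturbations at evenly spaced angles, so members sample the
    plane the perturbations span (my reading, not the authors' formula)."""
    members = []
    for k in range(n_members):
        theta = 2.0 * np.pi * k / n_members
        members.append(x_base + amp * (np.cos(theta) * p1 + np.sin(theta) * p2))
    return members

# Toy 5-component "state vectors".
x_base = np.zeros(5)
p1 = np.array([1.0, 0.5, 0.0, -0.5, -1.0])
p2 = np.array([0.0, 1.0, 0.5, 1.0, 0.0])
for member in make_members(x_base, p1, p2, n_members=4):
    print(member)
```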
Kelay, Tanika; Chan, Kah Leong; Ako, Emmanuel; Yasin, Mohammad; Costopoulos, Charis; Gold, Matthew; Kneebone, Roger K; Malik, Iqbal S; Bello, Fernando
2017-01-01
Distributed Simulation is the concept of portable, high-fidelity immersive simulation. Here, it is used for the development of a simulation-based training programme for cardiovascular specialities. We present an evidence base for how accessible, portable and self-contained simulated environments can be effectively utilised for the modelling, development and testing of a complex training framework and assessment methodology. Iterative user feedback through mixed-methods evaluation techniques resulted in the implementation of the training programme. Four phases were involved in the development of our immersive simulation-based training programme: (1) initial conceptual stage for mapping structural criteria and parameters of the simulation training framework and scenario development (n = 16), (2) training facility design using Distributed Simulation, (3) test cases with clinicians (n = 8) and collaborative design, where evaluation and user feedback involved a mixed-methods approach featuring (a) quantitative surveys to evaluate the realism and perceived educational relevance of the simulation format and framework for training and (b) qualitative semi-structured interviews to capture detailed feedback including changes and scope for development. Refinements were made iteratively to the simulation framework based on user feedback, resulting in (4) transition towards implementation of the simulation training framework, involving consistent quantitative evaluation techniques for clinicians (n = 62). For comparative purposes, clinicians' initial quantitative mean evaluation scores for realism of the simulation training framework, realism of the training facility and relevance for training (n = 8) are presented longitudinally, alongside feedback throughout the development stages from concept to delivery, including the implementation stage (n = 62). Initially, mean evaluation scores fluctuated from low to average, rising incrementally. This corresponded with the qualitative component, which augmented the quantitative findings; trainees' user feedback was used to perform iterative refinements to the simulation design and components (collaborative design), resulting in higher mean evaluation scores leading up to the implementation phase. Through application of innovative Distributed Simulation techniques, collaborative design and consistent evaluation techniques from conceptual, development and implementation stages, fully immersive simulation techniques for cardiovascular specialities are achievable and have the potential to be implemented more broadly.
Estimation variance bounds of importance sampling simulations in digital communication systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
Investigation of Propagation in Foliage Using Simulation Techniques
2011-12-01
The simulation models provide a rough approximation to radiowave propagation in an actual rainforest environment.
NASA Astrophysics Data System (ADS)
Jin, Minquan; Delshad, Mojdeh; Dwarakanath, Varadarajan; McKinney, Daene C.; Pope, Gary A.; Sepehrnoori, Kamy; Tilburg, Charles E.; Jackson, Richard E.
1995-05-01
In this paper we present a partitioning interwell tracer test (PITT) technique for the detection, estimation, and remediation performance assessment of the subsurface contaminated by nonaqueous phase liquids (NAPLs). We demonstrate the effectiveness of this technique by examples of experimental and simulation results. The experimental results are from partitioning tracer experiments in columns packed with Ottawa sand. Both the method of moments and inverse modeling techniques for estimating NAPL saturation in the sand packs are demonstrated. In the simulation examples we use UTCHEM, a comprehensive three-dimensional, chemical flood compositional simulator developed at the University of Texas, to simulate a hypothetical two-dimensional aquifer with properties similar to the Borden site contaminated by tetrachloroethylene (PCE), and we show how partitioning interwell tracer tests can be used to estimate the amount of PCE contaminant before remedial action and as the remediation process proceeds. Tracer test results from different stages of remediation are compared to determine the quantity of PCE removed and the amount remaining. Both the experimental (small-scale) and simulation (large-scale) results demonstrate that PITT can be used as an innovative and effective technique to detect and estimate the amount of residual NAPL and for remediation performance assessment in subsurface formations.
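The method of moments used here has a compact form: the retardation factor R, the ratio of the partitioning and conservative tracers' mean arrival times, gives the NAPL saturation as S_n = (R - 1)/(R - 1 + K), with K the tracer's NAPL-water partition coefficient. A minimal sketch on synthetic breakthrough curves (illustrative values, not the paper's data):

```python
import numpy as np

def first_moment(t, c):
    # Mean arrival time of a breakthrough curve; on a uniform time grid the
    # spacing cancels in the ratio, so plain sums suffice.
    return (t * c).sum() / c.sum()

def napl_saturation(t, c_cons, c_part, K):
    """Method-of-moments estimate: retardation factor R from the two mean
    arrival times, then S_n = (R - 1) / (R - 1 + K)."""
    R = first_moment(t, c_part) / first_moment(t, c_cons)
    return (R - 1.0) / (R - 1.0 + K)

# Synthetic Gaussian breakthrough curves: the partitioning tracer arrives
# 1.3x later than the conservative tracer.
t = np.linspace(0.0, 30.0, 1000)
c_cons = np.exp(-0.5 * ((t - 10.0) / 2.0) ** 2)
c_part = np.exp(-0.5 * ((t - 13.0) / 2.6) ** 2)
print(f"estimated NAPL saturation: {napl_saturation(t, c_cons, c_part, K=5.0):.3f}")
```

Here R = 1.3 and K = 5 give S_n of about 0.057, i.e. residual NAPL filling roughly 6% of the pore space.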
Generalized simulation technique for turbojet engine system analysis
NASA Technical Reports Server (NTRS)
Seldner, K.; Mihaloew, J. R.; Blaha, R. J.
1972-01-01
A nonlinear analog simulation of a turbojet engine was developed. The purpose of the study was to establish simulation techniques applicable to propulsion system dynamics and controls research. A schematic model was derived from a physical description of a J85-13 turbojet engine. Basic conservation equations were applied to each component along with their individual performance characteristics to derive a mathematical representation. The simulation was mechanized on an analog computer. The simulation was verified in both steady-state and dynamic modes by comparing analytical results with experimental data obtained from tests performed at the Lewis Research Center with a J85-13 engine. In addition, comparison was also made with performance data obtained from the engine manufacturer. The comparisons established the validity of the simulation technique.
Simulation of Thermographic Responses of Delaminations in Composites with Quadrupole Method
NASA Technical Reports Server (NTRS)
Winfree, William P.; Zalameda, Joseph N.; Howell, Patricia A.; Cramer, K. Elliott
2016-01-01
The application of the quadrupole method for simulating thermal responses of delaminations in carbon fiber reinforced epoxy composite materials is presented. The method solves for the flux at the interface containing the delamination. From the interface flux, the temperature at the surface is calculated. While the results presented are for single-sided measurements with flash heating, expansion of the technique to arbitrary temporal flux heating or through-transmission measurements is simple. The quadrupole method is shown to have two distinct advantages relative to finite element or finite difference techniques. First, it is straightforward to incorporate arbitrarily shaped delaminations into the simulation. Second, the quadrupole method enables calculation of the thermal response at only the times of interest. This, combined with a significant reduction in the number of degrees of freedom for the same simulation quality, results in a reduction of the computation time by at least an order of magnitude. Therefore, it is a more viable technique for model-based inversion of thermographic data. Results for simulations of delaminations in composites are presented and compared to measurements and finite element method results.
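For a single homogeneous layer, the quadrupole method reduces to a 2x2 transfer matrix in the Laplace domain. The sketch below computes the front-face temperature rise of a flash-heated slab with an insulated rear face and inverts the transform with the Gaver-Stehfest algorithm; the material values are illustrative, and this one-layer case omits the delamination interface flux the paper actually solves for:

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace-domain function F."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        V = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            V += (j ** (N // 2) * math.factorial(2 * j) /
                  (math.factorial(N // 2 - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        total += (-1) ** (N // 2 + k) * V * F(k * ln2 / t)
    return total * ln2 / t

# One-layer slab, front face flash-heated with energy Q, rear face insulated.
LAM, ALPHA, E, Q = 0.6, 4.0e-7, 2.0e-3, 1000.0   # W/m/K, m^2/s, m, J/m^2

def front_temperature_laplace(p):
    q = math.sqrt(p / ALPHA)
    A = math.cosh(q * E)              # slab quadrupole matrix entries
    C = LAM * q * math.sinh(q * E)
    return Q * A / C                  # Dirac flash => theta_front = Q * A / C

for t in (0.1, 0.5, 2.0, 10.0):
    dT = stehfest_invert(front_temperature_laplace, t)
    print(f"t = {t:5.1f} s   surface temperature rise = {dT:7.3f} K")
```

The long-time value approaches the adiabatic limit Q/(rho*c*e), about 0.33 K for these illustrative properties; a delamination would enter as an additional interface condition between two such layer matrices.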
Acceleration techniques for dependability simulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Barnette, James David
1995-01-01
As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
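Two of the supporting topics named above, random variate generation and statistics gathering, have compact textbook forms. The sketch below is a generic illustration (not code from the thesis): inverse-transform sampling of exponential inter-event times and Welford's one-pass variance algorithm:

```python
import math
import random

def exp_variate(rate, rng=random):
    """Inverse-transform sampling: with U ~ Uniform(0,1), -ln(1-U)/rate is
    exponentially distributed, a typical inter-event time in discrete event
    simulation."""
    return -math.log(1.0 - rng.random()) / rate

class RunningStats:
    """Welford's one-pass mean/variance, the usual way a simulator gathers
    statistics without storing every observation."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def add(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
    @property
    def variance(self):
        return self.m2 / (self.n - 1)

stats = RunningStats()
for _ in range(100_000):
    stats.add(exp_variate(rate=2.0))   # e.g. component inter-failure times
print(f"mean = {stats.mean:.4f} (expect 0.5), variance = {stats.variance:.4f} (expect 0.25)")
```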
NASA Astrophysics Data System (ADS)
Lachinova, Svetlana L.; Vorontsov, Mikhail A.; Filimonov, Grigory A.; LeMaster, Daniel A.; Trippel, Matthew E.
2017-07-01
Computational efficiency and accuracy of wave-optics-based Monte-Carlo and brightness function numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence are evaluated. Simulation results are compared with theoretical estimates based on known analytical solutions for the modulation transfer function of an imaging system and the long-exposure image of a Gaussian-shaped incoherent light source. It is shown that the accuracy of both techniques is comparable over the wide range of path lengths and atmospheric turbulence conditions, whereas the brightness function technique is advantageous in terms of the computational speed.
Simulated transition from RCP8.5 to RCP4.5 through three different Radiation Management techniques
NASA Astrophysics Data System (ADS)
Muri, H.; Kristjansson, J. E.; Adakudlu, M.; Grini, A.; Lauvset, S. K.; Otterå, O. H.; Schulz, M.; Tjiputra, J. F.
2016-12-01
Scenario studies have shown that in order to limit global warming to 2°C above pre-industrial levels, negative CO2 emissions are required. Currently, no safe and well-established technologies exist for achieving such negative emissions. Hence, although carbon dioxide removal may appear less risky and controversial than Radiation Management (RM) techniques, the latter type of climate engineering (CE) techniques cannot be ruled out as a future policy option. The EXPECT project, funded by the Norwegian Research Council, explores the potential and risks of RM through Earth System Model Simulations. We here describe results from a study that simulates a 21st century transition from an RCP8.5 to a RCP4.5 scenario through Radiation Management. The study uses the Norwegian Earth System Model (NorESM) to compare the results from the following three RM techniques: a) Stratospheric Aerosol Injections (SAI); b) Marine Sky Brightening (MSB); c) Cirrus Cloud Thinning (CCT). All three simulations start from the year 2020 and run until 2100. Whereas both SAI and MSB successfully simulate the desired negative radiative forcing throughout the 21st century, the CCT simulations have a +0.5 W m-2 residual forcing (on top of RCP4.5) at the end of the century. Although all three techniques obtain approximately the same global temperature evolution, precipitation responses are very different. In particular, the CCT simulation has even more globally averaged precipitation at year 2100 than RCP8.5, whereas both SAI and MSB simulate less precipitation than RCP4.5. In addition, there are significant differences in geographical patterns of precipitation. Natural variability in the Earth System also exhibits sensitivity to the choice of RM technique: Both the Atlantic Meridional Overturning Circulation and the Pacific Decadal Oscillation respond differently to the choice of SAI, MSB or CCT. We will present a careful analysis, as well as a physical interpretation of the above results.
Modeling software systems by domains
NASA Technical Reports Server (NTRS)
Dippolito, Richard; Lee, Kenneth
1992-01-01
The Software Architectures Engineering (SAE) Project at the Software Engineering Institute (SEI) has developed engineering modeling techniques that both reduce the complexity of software for domain-specific computer systems and result in systems that are easier to build and maintain. These techniques allow maximum freedom for system developers to apply their domain expertise to software. We have applied these techniques to several types of applications, including training simulators operating in real time, engineering simulators operating in non-real time, and real-time embedded computer systems. Our modeling techniques result in software that mirrors both the complexity of the application and the domain knowledge requirements. We submit that the proper measure of software complexity reflects neither the number of software component units nor the code count, but the locus of and amount of domain knowledge. As a result of using these techniques, domain knowledge is isolated by fields of engineering expertise and removed from the concern of the software engineer. In this paper, we will describe kinds of domain expertise, describe engineering by domains, and provide relevant examples of software developed for simulator applications using the techniques.
Retinal Image Simulation of Subjective Refraction Techniques.
Perches, Sara; Collados, M Victoria; Ares, Jorge
2016-01-01
Refraction techniques make it possible to determine the most appropriate sphero-cylindrical lens prescription to achieve the best possible visual quality. Among these techniques, subjective refraction (i.e., patient's response-guided refraction) is the most commonly used approach. In this context, this paper's main goal is to present simulation software that implements various subjective refraction techniques in a virtual manner, including the Jackson Cross-Cylinder test (JCC), all relying on the observation of computer-generated retinal images. This software has also been used to evaluate visual quality when the JCC test is performed in multifocal-contact-lens wearers. The results reveal this software's usefulness for simulating the retinal image quality that a particular visual compensation provides. Moreover, it can help to gain a deeper insight into and to improve existing refraction techniques, and it can be used for simulated training.
Validation of scramjet exhaust simulation technique at Mach 6
NASA Technical Reports Server (NTRS)
Hopkins, H. B.; Konopka, W.; Leng, J.
1979-01-01
Current design philosophy for hydrogen-fueled, scramjet-powered hypersonic aircraft results in configurations with strong couplings between the engine plume and vehicle aerodynamics. The experimental verification of the scramjet exhaust simulation is described. The scramjet exhaust was reproduced for the Mach 6 flight condition by the detonation tube simulator. The exhaust flow pressure profiles, and to a large extent the heat transfer rate profiles, were then duplicated by cool gas mixtures of Argon and Freon 13B1 or Freon 12. The results of these experiments indicate that a cool gas simulation of the hot scramjet exhaust is a viable simulation technique except for phenomena which are dependent on the wall temperature relative to flow temperature.
NASA Technical Reports Server (NTRS)
Tranter, W. H.
1979-01-01
A technique for estimating the signal-to-noise ratio at a point in a digital simulation of a communication system is described; the technique is essentially a digital realization of a technique proposed by Shepertycki (1964) for the evaluation of analog communication systems. Signals having lowpass or bandpass spectra may be used. Simulation results show the technique to be accurate over a wide range of signal-to-noise ratios.
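The core of such a correlation-based estimator can be sketched as follows; this simplified version assumes the system delay has already been compensated (the full technique estimates the delay as well), and the waveforms are illustrative:

```python
import numpy as np

def estimate_snr(ref, meas):
    """Split the measured waveform into the component correlated with the
    reference ('signal') and the residual ('noise'), and return their
    power ratio."""
    gain = np.dot(meas, ref) / np.dot(ref, ref)   # least-squares system gain
    signal = gain * ref
    noise = meas - signal
    return np.dot(signal, signal) / np.dot(noise, noise)

rng = np.random.default_rng(1)
x = rng.choice([-1.0, 1.0], size=50_000)            # reference waveform
y = 0.8 * x + rng.normal(scale=0.4, size=x.size)    # simulated system output
est, true = estimate_snr(x, y), (0.8 / 0.4) ** 2
print(f"estimated SNR = {10 * np.log10(est):.2f} dB (true {10 * np.log10(true):.2f} dB)")
```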
NASA Technical Reports Server (NTRS)
Kibler, K. S.; Mcdaniel, G. A.
1981-01-01
A digital local linearization technique was used to solve a system of stiff differential equations which simulate a magnetic bearing assembly. The results prove the technique to be accurate, stable, and efficient when compared to a general purpose variable order Adams method with a stiff option.
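A local linearization step freezes the Jacobian A = J(x_k) and advances the state through the exact flow of the linearized system, x_{k+1} = x_k + A^{-1}(e^{Ah} - I) f(x_k). The sketch below applies this to a generic stiff scalar test problem (not the magnetic bearing model) and contrasts it with explicit Euler at a step size where Euler is unstable:

```python
import numpy as np
from scipy.linalg import expm

def local_linearization_step(f, jac, x, h):
    """One step of x' = f(x): freeze A = J(x), then x_next = x +
    A^{-1}(e^{Ah} - I) f(x), evaluated via an augmented matrix exponential
    (the increment is the top-right block of expm)."""
    n = x.size
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = jac(x)
    M[:n, n] = f(x)
    return x + expm(M * h)[:n, n]

# Generic stiff scalar test problem (illustrative only).
def f(x):
    return -500.0 * (x + x**3)

def jac(x):
    return np.array([[-500.0 * (1.0 + 3.0 * x[0] ** 2)]])

x_ll = np.array([1.0])
x_eu = np.array([1.0])
h = 0.01
for _ in range(6):
    x_ll = local_linearization_step(f, jac, x_ll, h)
    x_eu = x_eu + h * f(x_eu)          # explicit Euler, same step size
print(f"local linearization: {x_ll[0]:.6f}   explicit Euler: {x_eu[0]:.3e}")
```

Local linearization stays stable and decays toward the equilibrium while Euler diverges, the behaviour that makes this class of technique attractive for stiff systems like the bearing model.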
H2LIFT: global navigation simulation ship tracking and WMD detection in the maritime domain
NASA Astrophysics Data System (ADS)
Wyffels, Kevin
2007-04-01
This paper presents initial results for a tracking simulation of multiple maritime vehicles for use in a data fusion program detecting Weapons of Mass Destruction (WMD). This simulation supports a fusion algorithm (H2LIFT) for collecting and analyzing data providing a heuristic analysis tool for detecting weapons of mass destruction in the maritime domain. Tools required to develop a navigational simulation fitting a set of project objectives are introduced for integration into the H2LIFT algorithm. Emphasis is placed on the specific requirements of the H2LIFT project, however the basic equations, algorithms, and methodologies can be used as tools in a variety of scenario simulations. Discussion will be focused on track modeling (e.g. position tracking of ships), navigational techniques, WMD detection, and simulation of these models using Matlab and Simulink. Initial results provide absolute ship position data for a given multi-ship maritime scenario with random generation of a given ship containing a WMD. Required coordinate systems, conversions between coordinate systems, Earth modeling techniques, and navigational conventions and techniques are introduced for development of the simulations.
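One standard ingredient of such a navigation simulation is the conversion from geodetic ship positions to a common Earth-centred frame. The WGS-84 closed form below is a generic sketch, not code from the H2LIFT project:

```python
import math

# WGS-84 ellipsoid constants
A = 6378137.0                  # semi-major axis [m]
F = 1.0 / 298.257223563        # flattening
E2 = F * (2.0 - F)             # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h=0.0):
    """Convert a ship's latitude/longitude/height to Earth-centred
    Earth-fixed coordinates, a common frame for multi-ship tracking."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

print(geodetic_to_ecef(40.0, -74.0))   # a ship off the US east coast
```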
Simulations of multi-contrast x-ray imaging using near-field speckles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zdora, Marie-Christine; Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom and Department of Physics & Astronomy, University College London, London, WC1E 6BT; Thibault, Pierre
2016-01-28
X-ray dark-field and phase-contrast imaging using near-field speckles is a novel technique that overcomes limitations inherent in conventional absorption x-ray imaging, i.e., poor contrast for features with similar density. Speckle-based imaging yields a wealth of information with a simple setup tolerant to polychromatic and divergent beams, and simple data acquisition and analysis procedures. Here, we present simulation software used to model the image formation with the speckle-based technique, and we compare simulated results on a phantom sample with experimental synchrotron data. Thorough simulation of a speckle-based imaging experiment will help in better understanding and optimising the technique itself.
CFAVC scheme for high frequency series resonant inverter-fed domestic induction heating system
NASA Astrophysics Data System (ADS)
Nagarajan, Booma; Reddy Sathi, Rama
2016-01-01
This article presents investigations of constant frequency asymmetric voltage cancellation control in an AC-AC resonant converter-fed domestic induction heating system. Conventional fixed-frequency control techniques used in high frequency converters lead to non-zero-voltage switching operation and reduced output power. The proposed control technique produces higher output power than the conventional fixed-frequency control strategies. In this control technique, zero-voltage-switching operation is maintained during different duty cycle operation to reduce the switching losses. A complete analysis of the induction heating power supply system with asymmetric voltage cancellation control is discussed in this article. A simulation and experimental study of a constant frequency asymmetric voltage cancellation (CFAVC)-controlled full-bridge series resonant inverter is performed. Time domain simulation results for the open and closed loop of the system are obtained using the MATLAB simulation tool. The simulation results prove the control of voltage and power over a wide range. A PID controller-based closed-loop control system achieves voltage regulation of the proposed system for a step change in load. Hardware implementation of the system under CFAVC control is done using an embedded controller. The simulation and experimental results validate the performance of the CFAVC control technique for a series resonant-based induction cooking system.
Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo methods for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
Genetic Adaptive Control for PZT Actuators
NASA Technical Reports Server (NTRS)
Kim, Jeongwook; Stover, Shelley K.; Madisetti, Vijay K.
1995-01-01
A piezoelectric transducer (PZT) is capable of providing linear motion if controlled correctly and could provide a replacement for traditional heavy and large servo systems using motors. This paper focuses on a genetic model reference adaptive control technique (GMRAC) for a PZT that moves a mirror, where the goal is to keep the mirror velocity constant. Genetic Algorithms (GAs) are an integral part of the GMRAC technique, acting as the search engine for an optimal PID controller. Two methods are suggested to control the actuator in this research. The first is to change the PID parameters and the other is to add an additional reference input to the system. The simulation results of these two methods are compared. Simulated Annealing (SA) is also used to solve the problem, and the GA and SA simulation results are compared, with the GA giving the best results. The entire model is designed using the MathWorks' Simulink tool.
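The first method, a GA searching for PID parameters against a simulated plant, can be sketched generically. The plant below is a stand-in first-order actuator, not the paper's PZT/mirror model, and all GA settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def pid_cost(gains, dt=0.001, t_end=0.2):
    """Integrated absolute error of a PID loop tracking a constant velocity
    command; the plant is a stand-in first-order actuator."""
    kp, ki, kd = gains
    x = integ = prev_err = cost = 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - x                        # constant velocity reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        x += dt * (-20.0 * x + 20.0 * u)     # first-order actuator dynamics
        cost += abs(err) * dt
        if abs(x) > 1e6:                     # unstable candidate: big penalty
            return 1e6
    return cost

def ga_search(pop_size=30, generations=40, lo=0.0, hi=50.0):
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(generations):
        order = np.argsort([pid_cost(g) for g in pop])
        parents = pop[order[: pop_size // 2]]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random(3)
            child = w * a + (1.0 - w) * b              # blend crossover
            child += rng.normal(scale=0.5, size=3)     # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents] + children)
    return min(pop, key=pid_cost)

best = ga_search()
print("best (kp, ki, kd):", np.round(best, 2), " cost:", round(pid_cost(best), 5))
```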
NASA Technical Reports Server (NTRS)
OBrien, T. Kevin (Technical Monitor); Krueger, Ronald; Minguet, Pierre J.
2004-01-01
The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to tension and three-point bending was studied. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front was used to model the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlation of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherents. In addition, the application of the submodeling technique for the simulation of skin/stringer debond was also studied. Global models made of shell elements and solid elements were studied. Solid elements were used for local submodels, which extended between three and six specimen thicknesses on either side of the delamination front to model the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from the simulations using the submodeling technique were not in agreement with results obtained from full solid models.
Liu, Xin
2014-01-01
This study describes a deterministic method for simulating the first-order scattering in a medical computed tomography scanner. The method was developed based on a physics model of x-ray photon interactions with matter and a ray tracing technique. The results from simulated scattering were compared to the ones from an actual scattering measurement. Two phantoms with homogeneous and heterogeneous material distributions were used in the scattering simulation and measurement. It was found that the simulated scatter profile was in agreement with the measurement result, with an average difference of 25% or less. Finally, tomographic images with artifacts caused by scatter were corrected based on the simulated scatter profiles. The image quality improved significantly.
Real time digital propulsion system simulation for manned flight simulators
NASA Technical Reports Server (NTRS)
Mihaloew, J. R.; Hart, C. E.
1978-01-01
A real time digital simulation of a STOL propulsion system was developed which generates significant dynamics and internal variables needed to evaluate system performance and aircraft interactions using manned flight simulators. The simulation ran at a real-to-execution time ratio of 8.8. The model was used in a piloted NASA flight simulator program to evaluate the simulation technique and the propulsion system digital control. The simulation is described and results shown. Limited results of the flight simulation program are also presented.
Using Lotus 1-2-3 for "Non-Stop" Graphic Simulation.
ERIC Educational Resources Information Center
Godin, Victor B.; Rao, Ashok
1988-01-01
Discusses the use of Lotus 1-2-3 to create non-stop graphic displays of simulation models. Describes a simple application of this technique using the distribution resulting from repeated throws of dice. Lists other software used with this technique. Stresses the advantages of this approach in education. (CW)
Mehl, S.; Hill, M.C.
2001-01-01
Five common numerical techniques for solving the advection-dispersion equation (finite difference, predictor corrector, total variation diminishing, method of characteristics, and modified method of characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using discrete, randomly distributed, homogeneous blocks of five sand types. This experimental model provides an opportunity to compare the solution techniques: the heterogeneous hydraulic-conductivity distribution of known structure can be accurately represented by a numerical model, and detailed measurements can be compared with simulated concentrations and total flow through the tank. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation given the different methods of simulating solute transport. The breakthrough curves show that simulated peak concentrations, even at very fine grid spacings, varied between the techniques because of different amounts of numerical dispersion. Sensitivity-analysis results revealed: (1) a high correlation between hydraulic conductivity and porosity given the concentration and flow observations used, so that both could not be estimated; and (2) that the breakthrough curve data did not provide enough information to estimate individual values of dispersivity for the five sands. This study demonstrates that the choice of assigned dispersivity and the amount of numerical dispersion present in the solution technique influence estimated hydraulic conductivity values to a surprising degree.
NASA Astrophysics Data System (ADS)
Chittaro, Luca; Zangrando, Nicola
Although virtual reality (VR) is a powerful simulation tool that can allow users to experience the effects of their actions in vivid and memorable ways, explorations of VR as a persuasive technology are rare. In this paper, we focus on different ways of providing negative feedback for persuasive purposes through simulated experiences in VR. The persuasive goal we consider concerns awareness of personal fire safety issues and the experiment we describe focuses on attitudes towards smoke in evacuating buildings. We test two techniques: the first technique simulates the damaging effects of smoke on the user through a visualization that should not evoke strong emotions, while the second is aimed at partially reproducing the anxiety of an emergency situation. The results of the study show that the second technique is able to increase user's anxiety as well as producing better results in attitude change.
NASA Technical Reports Server (NTRS)
Tranter, W. H.; Turner, M. D.
1977-01-01
Techniques are developed to estimate power gain, delay, signal-to-noise ratio, and mean square error in digital computer simulations of lowpass and bandpass systems. The techniques are applied to analog and digital communications. The signal-to-noise ratio estimates are shown to be maximum likelihood estimates in additive white Gaussian noise. The methods are seen to be especially useful for digital communication systems where the mapping from the signal-to-noise ratio to the error probability can be obtained. Simulation results show the techniques developed to be accurate and quite versatile in evaluating the performance of many systems through digital computer simulation.
NASA Technical Reports Server (NTRS)
Stankovic, Ana V.
2003-01-01
Professor Stankovic will be developing and refining Simulink-based models of the PM alternator and comparing the simulation results with experimental measurements taken from the unit. Her first task is to validate the models using the experimental data. Her next task is to develop alternative control techniques for the application of the Brayton Cycle PM Alternator in a nuclear electric propulsion vehicle. The control techniques will first be simulated using the validated models and then tried experimentally with hardware available at NASA. Testing and simulation of a 2 kW PM synchronous generator with diode bridge output is described. The parameters of a synchronous PM generator have been measured and used in simulation. Test procedures have been developed to verify the PM generator model with diode bridge output. Experimental and simulation results are in excellent agreement.
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Cunningham, Kevin; Hill, Melissa A.
2013-01-01
Flight test and modeling techniques were developed for efficiently identifying global aerodynamic models that can be used to accurately simulate stall, upset, and recovery on large transport airplanes. The techniques were developed and validated in a high-fidelity fixed-base flight simulator using a wind-tunnel aerodynamic database, realistic sensor characteristics, and a realistic flight deck representative of a large transport aircraft. Results demonstrated that aerodynamic models for stall, upset, and recovery can be identified rapidly and accurately using relatively simple piloted flight test maneuvers. Stall maneuver predictions and comparisons of identified aerodynamic models with data from the underlying simulation aerodynamic database were used to validate the techniques.
Ioannou, Ioanna; Kazmierczak, Edmund; Stern, Linda
2015-01-01
The use of virtual reality (VR) simulation for surgical training has gathered much interest in recent years. Despite increasing popularity and usage, limited work has been carried out in the use of automated objective measures to quantify the extent to which performance in a simulator resembles performance in the operating theatre, and the effects of simulator training on real world performance. To this end, we present a study exploring the effects of VR training on the performance of dentistry students learning a novel oral surgery task. We compare the performance of trainees in a VR simulator and in a physical setting involving ovine jaws, using a range of automated metrics derived by motion analysis. Our results suggest that simulator training improved the motion economy of trainees without adverse effects on task outcome. Comparison of surgical technique on the simulator with the ovine setting indicates that simulator technique is similar, but not identical to real world technique.
A real time Pegasus propulsion system model for VSTOL piloted simulation evaluation
NASA Technical Reports Server (NTRS)
Mihaloew, J. R.; Roth, S. P.; Creekmore, R.
1981-01-01
A real time propulsion system modeling technique suitable for use in man-in-the-loop simulator studies was developed. This technique provides the system accuracy, stability, and transient response required for integrated aircraft and propulsion control system studies. A Pegasus-Harrier propulsion system was selected as a baseline for developing mathematical modeling and simulation techniques for VSTOL. Initially, static and dynamic propulsion system characteristics were modeled in detail to form a nonlinear aerothermodynamic digital computer simulation of a Pegasus engine. From this high fidelity simulation, a real time propulsion model was formulated by applying a piecewise linear state variable methodology. A hydromechanical and water injection control system was also simulated. The real time dynamic model includes the detail and flexibility required for the evaluation of critical control parameters and propulsion component limits over a limited flight envelope. The model was programmed for interfacing with a Harrier aircraft simulation. Typical propulsion system simulation results are presented.
Optimisation of phase ratio in the triple jump using computer simulation.
Allen, Sam J; King, Mark A; Yeadon, M R Fred
2016-04-01
The triple jump is an athletic event comprising three phases in which the optimal proportion of each phase to the total distance jumped, termed the phase ratio, is unknown. This study used a whole-body torque-driven computer simulation model of all three phases of the triple jump to investigate optimal technique. The technique of the simulation model was optimised by varying torque generator activation parameters using a Genetic Algorithm in order to maximise total jump distance, resulting in a hop-dominated technique (35.7%:30.8%:33.6%) and a distance of 14.05m. Optimisations were then run with penalties forcing the model to adopt hop and jump phases of 33%, 34%, 35%, 36%, and 37% of the optimised distance, resulting in total distances of: 13.79m, 13.87m, 13.95m, 14.05m, and 14.02m; and 14.01m, 14.02m, 13.97m, 13.84m, and 13.67m respectively. These results indicate that in this subject-specific case there is a plateau in optimum technique encompassing balanced and hop-dominated techniques, but that a jump-dominated technique is associated with a decrease in performance. Hop-dominated techniques are associated with higher forces than jump-dominated techniques; therefore optimal phase ratio may be related to a combination of strength and approach velocity. Copyright © 2016 Elsevier B.V. All rights reserved.
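The optimisation loop described above, a genetic algorithm maximising jump distance with penalty terms that pin the hop fraction, can be sketched compactly. The sketch below is illustrative only: the quadratic surrogate for jump distance and all parameter values are assumptions standing in for the torque-driven whole-body simulation, which cannot be reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def jump_distance(ratio):
    """Toy surrogate for total jump distance vs. phase ratio (hop, step, jump).
    The real study evaluates a torque-driven simulation; this quadratic bump
    peaked near a hop-dominated ratio is purely illustrative."""
    target = np.array([0.36, 0.30, 0.34])            # assumed optimum ratio
    return 14.05 - 40.0 * np.sum((ratio - target) ** 2)

def penalised(ratio, hop_fraction=None, weight=100.0):
    d = jump_distance(ratio)
    if hop_fraction is not None:                     # penalty forcing a hop share
        d -= weight * (ratio[0] - hop_fraction) ** 2
    return d

def genetic_algorithm(fitness, pop_size=60, gens=200, sigma=0.02):
    # Individuals are phase ratios, kept positive and normalised to sum to 1.
    pop = rng.dirichlet(np.ones(3), size=pop_size)
    for _ in range(gens):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]       # truncation selection
        children = np.abs(parents + rng.normal(0, sigma, parents.shape))  # mutation
        children /= children.sum(axis=1, keepdims=True)
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return best, fitness(best)

best, dist = genetic_algorithm(lambda p: penalised(p))
print("unconstrained ratio:", np.round(best, 3), "distance:", round(dist, 2), "m")
for hop in (0.33, 0.35, 0.37):                       # penalty runs, as in the study design
    _, dist = genetic_algorithm(lambda p: penalised(p, hop_fraction=hop))
    print(f"hop fixed at {hop:.2f}: distance {dist:5.2f} m")
```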
Using cognitive task analysis to develop simulation-based training for medical tasks.
Cannon-Bowers, Jan; Bowers, Clint; Stout, Renee; Ricci, Katrina; Hildabrand, Annette
2013-10-01
Pressures to increase the efficacy and effectiveness of medical training are causing the Department of Defense to investigate the use of simulation technologies. This article describes a comprehensive cognitive task analysis technique that can be used to simultaneously generate training requirements, performance metrics, scenario requirements, and simulator/simulation requirements for medical tasks. On the basis of a variety of existing techniques, we developed a scenario-based approach that asks experts to perform the targeted task multiple times, with each pass probing a different dimension of the training development process. In contrast to many cognitive task analysis approaches, we argue that our technique can be highly cost effective because it is designed to accomplish multiple goals. The technique was pilot tested with expert instructors from a large military medical training command. These instructors were employed to generate requirements for two selected combat casualty care tasks: cricothyroidotomy and hemorrhage control. Results indicated that the technique is feasible to use and generates usable data to inform simulation-based training system design. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
Interleaved concatenated codes: new perspectives on approaching the Shannon limit.
Viterbi, A J; Viterbi, A M; Sindhushayana, N T
1997-09-02
The last few years have witnessed a significant decrease in the gap between the Shannon channel capacity limit and what is practically achievable. Progress has resulted from novel extensions of previously known coding techniques involving interleaved concatenated codes. A considerable body of simulation results is now available, supported by an important but limited theoretical basis. This paper presents a computational technique which further ties simulation results to the known theory and reveals a considerable reduction in the complexity required to approach the Shannon limit.
Efficient finite element simulation of slot spirals, slot radomes and microwave structures
NASA Technical Reports Server (NTRS)
Gong, J.; Volakis, J. L.
1995-01-01
This progress report contains the following two documents: (1) 'Efficient Finite Element Simulation of Slot Antennas using Prismatic Elements' - A hybrid finite element-boundary integral (FE-BI) simulation technique is discussed to treat narrow slot antennas etched on a planar platform. Specifically, the prismatic elements are used to reduce the redundant sampling rates and ease the mesh generation process. Numerical results for an antenna slot and frequency selective surfaces are presented to demonstrate the validity and capability of the technique; and (2) 'Application and Design Guidelines of the PML Absorber for Finite Element Simulations of Microwave Packages' - The recently introduced perfectly matched layer (PML) uniaxial absorber for frequency domain finite element simulations has several advantages. In this paper we present the application of PML for microwave circuit simulations along with design guidelines to obtain a desired level of absorption. Different feeding techniques are also investigated for improved accuracy.
Liu, Heng-Liang; Lin, Chun-Li; Sun, Ming-Tsung; Chang, Yen-Hsiang
2010-06-01
This study investigates micro-crack propagation at the enamel/adhesive interface using finite element (FE) submodeling and element death techniques. A three-dimensional (3D) FE macro-model of the enamel/adhesive/ceramic subjected to shear bond testing was generated and analyzed. A 3D micro-model with interfacial bonding structure was constructed at the upper enamel/adhesive interface where the stress concentration was found from the macro-model results. The morphology of this interfacial bonding structure (i.e., resin tag) was assigned based on resin tag geometry and enamel rod arrangement from a scanning electron microscopy micrograph. The boundary conditions for the micro-model were determined from the macro-model results. A custom iterative code combined with the element death technique was used to calculate the micro-crack propagation. Parallel experiments were performed to validate this FE simulation. The stress concentration within the adhesive occurred mainly at the upper corner near the enamel/adhesive interface and the resin tag base. A simulated fracture path was found at the resin tag base along the enamel/adhesive interface. A morphological observation of the fracture patterns obtained from in vitro testing corresponded with the simulation results. This study shows that the FE submodeling and element death techniques could be used to simulate the 3D micro-stress pattern and the crack propagation noted at the enamel/adhesive interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vincenti, H.; Vay, J. -L.
Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications for instance occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary-order Maxwell solvers with domain decomposition techniques, which may under some conditions involve stencil truncations at subdomain boundaries, resulting in small spurious errors that eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional, arbitrary-order finite-difference Maxwell solvers, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of the domain decomposition technique and PMLs, when these are used with very high-order Maxwell solvers, as well as in the infinite-order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solvers and a reasonably low number of guard cells with negligible effects on the overall accuracy of the simulation.
Transcranial phase aberration correction using beam simulations and MR-ARFI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vyas, Urvi, E-mail: urvi.vyas@gmail.com; Kaye, Elena; Pauly, Kim Butts
2014-03-15
Purpose: Transcranial magnetic resonance-guided focused ultrasound surgery is a noninvasive technique for causing selective tissue necrosis. Variations in density, thickness, and shape of the skull cause aberrations in the location and shape of the focal zone. In this paper, the authors propose a hybrid simulation-MR-ARFI technique to achieve aberration correction for transcranial MR-guided focused ultrasound surgery. The technique uses ultrasound beam propagation simulations with MR Acoustic Radiation Force Imaging (MR-ARFI) to correct skull-caused phase aberrations. Methods: Skull-based numerical aberrations were obtained from a MR-guided focused ultrasound patient treatment and were added to all elements of the InSightec conformal bone focused ultrasound surgery transducer during transmission. In the first experiment, the 1024 aberrations derived from a human skull were condensed into 16 aberrations by averaging over the transducer area of 64 elements. In the second experiment, all 1024 aberrations were applied to the transducer. The aberrated MR-ARFI images were used in the hybrid simulation-MR-ARFI technique to find 16 estimated aberrations. These estimated aberrations were subtracted from the original aberrations to produce the corrected images. Each aberration experiment (16-aberration and 1024-aberration) was repeated three times. Results: The corrected MR-ARFI image was compared to the aberrated image and the ideal image (the image with zero aberrations) for each experiment. The hybrid simulation-MR-ARFI technique resulted in an average increase in focal MR-ARFI phase of 44% for the 16-aberration case and 52% for the 1024-aberration case, and recovered 83% and 39% of the ideal MR-ARFI phase for the 16-aberration and 1024-aberration cases, respectively. Conclusions: Using one MR-ARFI image and no a priori information about the applied phase aberrations, the hybrid simulation-MR-ARFI technique improved the maximum MR-ARFI phase of the beam's focus.
A Comparison of Compressed Sensing and Sparse Recovery Algorithms Applied to Simulation Data
Fan, Ya Ju; Kamath, Chandrika
2016-09-01
The move toward exascale computing for scientific simulations is placing new demands on compression techniques. It is expected that the I/O system will not be able to support the volume of data that is expected to be written out. To enable quantitative analysis and scientific discovery, we are interested in techniques that compress high-dimensional simulation data and can provide perfect or near-perfect reconstruction. In this paper, we explore the use of compressed sensing (CS) techniques to reduce the size of the data before they are written out. Using large-scale simulation data, we investigate how the sufficient sparsity condition and the contrast in the data affect the quality of reconstruction and the degree of compression. Also, we provide suggestions for the practical implementation of CS techniques and compare them with other sparse recovery methods. Finally, our results show that despite longer times for reconstruction, compressed sensing techniques can provide near perfect reconstruction over a range of data with varying sparsity.
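As a hedged illustration of the sparse recovery step such comparisons rest on, the sketch below implements orthogonal matching pursuit, one common CS reconstruction algorithm (not necessarily the one the authors used), on a synthetic sparse signal; the problem sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse recovery of x from y = A @ x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))   # column most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # re-fit on the enlarged support
    x[support] = coef
    return x

n, m, k = 512, 128, 10                    # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = A @ x_true                            # compressed measurements (4x compression)
x_rec = omp(A, y, n_nonzero=k)
print("relative reconstruction error:",
      np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```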
Time-Distance Analysis of Deep Solar Convection
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.; Hanasoge, S. M.
2011-01-01
Recently it was shown by Hanasoge, Duvall, and DeRosa (2010) that the upper limit to convective flows for spherical harmonic degrees l is considerably smaller than the flows predicted by the ASH simulations (Miesch et al., ref) at the depth r/R = 0.95. The deep-focusing time-distance technique used to develop the upper limit was applied to linear acoustic simulations of a solar interior perturbed by convective flows in order to calibrate the technique. This technique has been applied to other depths in the convection zone and the results will be presented. The deep-focusing technique has considerable sensitivity to the flow signals at the desired subsurface location. However, as shown by Birch (ref), much sensitivity to near-surface signals remains. Modifications to the technique using multiple-bounce signals have been examined in a search for a more refined sensitivity, or kernel function. Initial results are encouraging and will be presented.
Computational study of noise in a large signal transduction network.
Intosalmi, Jukka; Manninen, Tiina; Ruohonen, Keijo; Linne, Marja-Leena
2011-06-21
Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. We implemented a large nonlinear signal transduction network combining protein kinase C, mitogen-activated protein kinase, phospholipase A2, and β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domain. In order to perform simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased on all frequencies when the system volume was increased. We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies. © 2011 Intosalmi et al; licensee BioMed Central Ltd.
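The exact Gillespie stochastic simulation algorithm used above is standard and easy to sketch. The toy birth-death network below is an assumption for illustration (not the authors' PKC/MAPK/PLA2/PLC network), but it reproduces the qualitative finding that relative fluctuations shrink as the system volume grows.

```python
import numpy as np

def gillespie_birth_death(k_prod, k_deg, x0, t_end, rng):
    """Exact SSA for a birth-death process: 0 -> X (rate k_prod), X -> 0 (rate k_deg*x)."""
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a1 = k_prod                       # propensity of production
        a2 = k_deg * x                    # propensity of degradation
        a0 = a1 + a2
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)    # exponentially distributed waiting time
        if rng.random() < a1 / a0:        # choose which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        counts.append(x)
    return np.array(times), np.array(counts)

rng = np.random.default_rng(0)
# Larger system volume -> more molecules -> relatively weaker fluctuations.
for volume in (1.0, 10.0, 100.0):
    t, x = gillespie_birth_death(k_prod=50.0 * volume, k_deg=1.0,
                                 x0=0, t_end=20.0, rng=rng)
    steady = x[t > 5.0]                   # discard the initial transient
    print(f"V={volume:6.1f}  mean={steady.mean():8.1f}  "
          f"CV={steady.std() / steady.mean():.3f}")
```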
NASA Astrophysics Data System (ADS)
Mayer, J. M.; Stead, D.
2017-04-01
With the increased drive towards deeper and more complex mine designs, geotechnical engineers are often forced to reconsider traditional deterministic design techniques in favour of probabilistic methods. These alternative techniques allow for the direct quantification of uncertainties within a risk and/or decision analysis framework. However, conventional probabilistic practices typically discretize geological materials into discrete, homogeneous domains, with attributes defined by spatially constant random variables, despite the fact that geological media display inherent heterogeneous spatial characteristics. This research directly simulates this phenomenon using a geostatistical approach, known as sequential Gaussian simulation. The method utilizes the variogram which imposes a degree of controlled spatial heterogeneity on the system. Simulations are constrained using data from the Ok Tedi mine site in Papua New Guinea and designed to randomly vary the geological strength index and uniaxial compressive strength using Monte Carlo techniques. Results suggest that conventional probabilistic techniques have a fundamental limitation compared to geostatistical approaches, as they fail to account for the spatial dependencies inherent to geotechnical datasets. This can result in erroneous model predictions, which are overly conservative when compared to the geostatistical results.
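A minimal sketch of the core idea, spatially correlated random fields versus spatially constant random variables, is given below. For brevity it draws correlated realisations by Cholesky factorisation of an exponential-variogram covariance rather than running the full sequential Gaussian simulation algorithm, and all parameter values are assumptions, not Ok Tedi data.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D grid of points along a slope section (metres); values are UCS in MPa.
x = np.arange(0.0, 200.0, 2.0)
range_a, sill = 30.0, 1.0                       # assumed variogram parameters

# Exponential variogram implies covariance C(h) = sill * exp(-3h/a).
h = np.abs(x[:, None] - x[None, :])
C = sill * np.exp(-3.0 * h / range_a)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))  # jitter for stability

mean_ucs, std_ucs = 80.0, 15.0                  # assumed UCS statistics (MPa)
n_real = 500
# Spatially correlated realisations: the Cholesky factor colours white noise.
fields = mean_ucs + std_ucs * (L @ rng.normal(size=(len(x), n_real)))

# Conventional approach for comparison: one spatially constant random value
# per realisation, i.e. perfect correlation everywhere.
uniform = mean_ucs + std_ucs * rng.normal(size=n_real)

# A toy "block" response: mean strength over a 20 m failure surface.
block = fields[:10].mean(axis=0)
print("correlated field:  5th percentile of block strength",
      round(np.percentile(block, 5), 1))
print("constant-per-run:  5th percentile of block strength",
      round(np.percentile(uniform, 5), 1))
```

Because the correlated field averages out over the failure surface, its low percentiles are less extreme than those of the spatially constant model, which is the sense in which conventional probabilistic predictions come out overly conservative.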
Retinal Image Simulation of Subjective Refraction Techniques
Perches, Sara; Collados, M. Victoria; Ares, Jorge
2016-01-01
Refraction techniques make it possible to determine the most appropriate sphero-cylindrical lens prescription to achieve the best possible visual quality. Among these techniques, subjective refraction (i.e., patient’s response-guided refraction) is the most commonly used approach. In this context, this paper’s main goal is to present a simulation software that implements in a virtual manner various subjective-refraction techniques—including Jackson’s Cross-Cylinder test (JCC)—relying all on the observation of computer-generated retinal images. This software has also been used to evaluate visual quality when the JCC test is performed in multifocal-contact-lens wearers. The results reveal this software’s usefulness to simulate the retinal image quality that a particular visual compensation provides. Moreover, it can help to gain a deeper insight and to improve existing refraction techniques and it can be used for simulated training. PMID:26938648
Towards Application of NASA Standard for Models and Simulations in Aeronautical Design Process
NASA Astrophysics Data System (ADS)
Vincent, Luc; Dunyach, Jean-Claude; Huet, Sandrine; Pelissier, Guillaume; Merlet, Joseph
2012-08-01
Even powerful computational techniques like simulation endure limitations in their validity domain. Consequently, using simulation models requires caution to avoid making biased design decisions for new aeronautical products on the basis of inadequate simulation results. Thus the fidelity, accuracy and validity of simulation models must be monitored in context all along the design phases to build confidence in achieving the goals of modelling and simulation. In the CRESCENDO project, we adapt the Credibility Assessment Scale method from the NASA standard for models and simulations, developed for the space programme, to aircraft design in order to assess the quality of simulations. The proposed eight quality assurance metrics aggregate information to indicate the level of confidence in results. They are displayed in a management dashboard and can secure design trade-off decisions at programme milestones. The application of this technique is illustrated in an aircraft design context with a specific thermal Finite Elements Analysis. This use case shows how to judge the fitness-for-purpose of simulation as a virtual testing means and then green-light the continuation of the Simulation Lifecycle Management (SLM) process.
ERIC Educational Resources Information Center
Fouladi, Rachel T.
2000-01-01
Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…
NASA Astrophysics Data System (ADS)
Ziss, Dorian; Martín-Sánchez, Javier; Lettner, Thomas; Halilovic, Alma; Trevisi, Giovanna; Trotta, Rinaldo; Rastelli, Armando; Stangl, Julian
2017-04-01
In this paper, strain transfer efficiencies from a single crystalline piezoelectric lead magnesium niobate-lead titanate substrate to a GaAs semiconductor membrane bonded on top are investigated using state-of-the-art x-ray diffraction (XRD) techniques and finite-element-method (FEM) simulations. Two different bonding techniques are studied, namely, gold-thermo-compression and polymer-based SU8 bonding. Our results show a much higher strain-transfer for the "soft" SU8 bonding in comparison to the "hard" bonding via gold-thermo-compression. A comparison between the XRD results and FEM simulations allows us to explain this unexpected result with the presence of complex interface structures between the different layers.
Allen, Robert C; Rutan, Sarah C
2011-10-31
Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
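Four of the five interpolation methods compared above are straightforward to sketch with standard scientific Python tools (Fourier zero-filling is omitted). The synthetic undersampled peak below is an assumption; the point is simply how peak-position estimates differ across interpolants.

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator, interp1d
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

def gauss(t, a, mu, sigma):
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

true_mu = 5.23                                   # "true" retention time
t_coarse = np.arange(0.0, 10.0, 1.0)             # sparse first-dimension sampling
y = gauss(t_coarse, 1.0, true_mu, 0.8) + rng.normal(0, 0.01, t_coarse.size)

t_fine = np.linspace(0, 9, 901)
estimates = {
    "linear": t_fine[np.argmax(interp1d(t_coarse, y)(t_fine))],
    "pchip":  t_fine[np.argmax(PchipInterpolator(t_coarse, y)(t_fine))],  # piecewise cubic Hermite
    "spline": t_fine[np.argmax(CubicSpline(t_coarse, y)(t_fine))],
}
popt, _ = curve_fit(gauss, t_coarse, y, p0=[1.0, 5.0, 1.0])
estimates["gaussian fit"] = popt[1]              # fitted peak centre

for name, mu in estimates.items():
    print(f"{name:12s} peak at {mu:.3f}  (abs error {abs(mu - true_mu):.3f})")
```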
Suenaga, Hideyuki; Taniguchi, Asako; Yonenaga, Kazumichi; Hoshi, Kazuto; Takato, Tsuyoshi
2016-01-01
Computer-assisted preoperative simulation surgery is employed to plan and interact with 3D images during the orthognathic procedure. It is useful for positioning and fixation of the maxilla by a plate. We report a case of maxillary retrusion due to bilateral cleft lip and palate, in which a 2-stage orthognathic procedure (maxillary advancement by distraction technique and mandibular setback surgery) was performed following computer-assisted preoperative simulation planning to achieve the positioning and fixation of the plate. A high accuracy was achieved in the present case. A 21-year-old male patient presented to our department with a complaint of maxillary retrusion following bilateral cleft lip and palate. A computer-assisted preoperative simulation with a 2-stage orthognathic procedure using the distraction technique and mandibular setback surgery was planned. The preoperative planning of the procedure resulted in good aesthetic outcomes. The error of the maxillary position was less than 1 mm. The implementation of computer-assisted preoperative simulation for the positioning and fixation of the plate in a 2-stage orthognathic procedure using the distraction technique and mandibular setback surgery yielded good results. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Webster, Victoria A; Nieto, Santiago G; Grosberg, Anna; Akkus, Ozan; Chiel, Hillel J; Quinn, Roger D
2016-10-01
In this study, new techniques for approximating the contractile properties of cells in biohybrid devices using Finite Element Analysis (FEA) have been investigated. Many current techniques for modeling biohybrid devices use individual cell forces to simulate the cellular contraction. However, such techniques result in long simulation runtimes. In this study we investigated the effect of the use of thermal contraction on simulation runtime. The thermal contraction model was significantly faster than models using individual cell forces, making it beneficial for rapidly designing or optimizing devices. Three techniques, Stoney's Approximation, a Modified Stoney's Approximation, and a Thermostat Model, were explored for calibrating thermal expansion/contraction parameters (TECPs) needed to simulate cellular contraction using thermal contraction. The TECP values were calibrated by using published data on the deflections of muscular thin films (MTFs). Using these techniques, TECP values that suitably approximate experimental deflections can be determined by using experimental data obtained from cardiomyocyte MTFs. Furthermore, a sensitivity analysis was performed in order to investigate the contribution of individual variables, such as elastic modulus and layer thickness, to the final calibrated TECP for each calibration technique. Additionally, the TECP values are applicable to other types of biohybrid devices. Two non-MTF models were simulated based on devices reported in the existing literature. Copyright © 2016 Elsevier Ltd. All rights reserved.
Developing integrated patient pathways using hybrid simulation
NASA Astrophysics Data System (ADS)
Zulkepli, Jafri; Eldabi, Tillal
2016-10-01
Integrated patient pathways span several care settings: hospital care, which includes emergency care and the inpatient ward; intermediate care, in which patients stay for a maximum of two weeks while an assessment team identifies the most suitable onward care; and social care. The reason for introducing intermediate care in Western countries was to reduce the number of patients, especially elderly patients, who remain in hospital. This type of care setting has been considered for adoption in some other countries, including Malaysia. Therefore, to assess the advantages of introducing this type of integrated healthcare setting, we suggest developing a model using simulation techniques. We argue that a single simulation technique is not sufficient to represent this type of patient pathway. Therefore, we suggest developing the model using hybrid techniques, i.e. System Dynamics (SD) and Discrete Event Simulation (DES). Based on the hybrid model results, we argue that the output is suitable as a reference for the decision-making process.
Acoustic Parametric Array for Identifying Standoff Targets
NASA Astrophysics Data System (ADS)
Hinders, M. K.; Rudd, K. E.
2010-02-01
An integrated simulation method for investigating nonlinear sound beams and 3D acoustic scattering from any combination of complicated objects is presented. A standard finite-difference simulation method is used to model pulsed nonlinear sound propagation from a source to a scattering target via the KZK equation. Then, a parallel 3D acoustic simulation method based on the finite integration technique is used to model the acoustic wave interaction with the target. Any combination of objects and material layers can be placed into the 3D simulation space to study the resulting interaction. Several example simulations are presented to demonstrate the simulation method and 3D visualization techniques. The combined simulation method is validated by comparing experimental and simulation data and a demonstration of how this combined simulation method assisted in the development of a nonlinear acoustic concealed weapons detector is also presented.
Barnes, Ronald A; Maswadi, Saher; Glickman, Randolph; Shadaram, Mehdi
2014-01-20
The goal of this paper is to demonstrate the unique capability of measuring the vector or angular information of propagating acoustic waves using an optical sensor. Acoustic waves were generated using photoacoustic interaction and detected by the probe beam deflection technique. Experiments and simulations were performed to study the interaction of acoustic emissions with an optical sensor in a coupling medium. The simulated results predict the probe beam and wavefront interaction and produced simulated signals that are verified by experiment.
Opto-electronic characterization of third-generation solar cells.
Neukom, Martin; Züfle, Simon; Jenatsch, Sandra; Ruhstaller, Beat
2018-01-01
We present an overview of opto-electronic characterization techniques for solar cells including light-induced charge extraction by linearly increasing voltage, impedance spectroscopy, transient photovoltage, charge extraction and more. Guidelines for the interpretation of experimental results are derived based on charge drift-diffusion simulations of solar cells with common performance limitations. It is investigated how nonidealities like charge injection barriers, traps and low mobilities among others manifest themselves in each of the studied cell characterization techniques. Moreover, comprehensive parameter extraction for an organic bulk-heterojunction solar cell comprising PCDTBT:PC70BM is demonstrated. The simulations reproduce measured results of 9 different experimental techniques. Parameter correlation is minimized due to the combination of various techniques. Thereby a route to comprehensive and accurate parameter extraction is identified.
Flood Detection/Monitoring Using Adjustable Histogram Equalization Technique
Riaz, Muhammad Mohsin; Ghafoor, Abdul
2014-01-01
A flood monitoring technique using adjustable histogram equalization is proposed. The technique overcomes the limitations of the existing technique (overenhancement, artifacts, and an unnatural look) by adjusting the contrast of images. The proposed technique takes pre- and post-event images and applies different processing steps to generate a flood map without user interaction. The resulting flood maps can be used for flood monitoring and detection. Simulation results show that the proposed technique provides better output quality than the state-of-the-art existing technique. PMID:24558332
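A hedged sketch of the general pipeline, adaptive histogram equalization of pre- and post-event images followed by change detection, is given below using OpenCV's contrast-limited adaptive histogram equalization (CLAHE) as a stand-in for the paper's adjustable equalization; the synthetic images and all threshold values are assumptions.

```python
import cv2
import numpy as np

rng = np.random.default_rng(5)

# Synthetic pre/post speckled grayscale scenes standing in for real imagery;
# the post image darkens a region to mimic inundation.
pre = rng.gamma(4.0, 20.0, (256, 256)).clip(0, 255).astype(np.uint8)
post = pre.copy()
post[80:180, 60:200] = (post[80:180, 60:200] * 0.4).astype(np.uint8)

# Tile-wise, clip-limited histogram equalization enhances both images.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
pre_eq, post_eq = clahe.apply(pre), clahe.apply(post)

# Simple change detection on the equalized pair yields the flood map.
diff = cv2.absdiff(pre_eq, post_eq)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove isolated speckle
print("flooded fraction:", (mask > 0).mean())
```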
Antimisting kerosene atomization and flammability
NASA Technical Reports Server (NTRS)
Fleeter, R.; Petersen, R. A.; Toaz, R. D.; Jakub, A.; Sarohia, V.
1982-01-01
Various parameters found to affect the flammability of antimisting kerosene (Jet A + polymer additive) are investigated. Digital image processing was integrated into a technique for measurement of fuel spray characteristics. This technique was developed to avoid many of the error sources inherent to other spray assessment techniques and was applied to the study of engine fuel nozzle atomization performance with Jet A and antimisting fuel. Aircraft accident fuel spill and ignition dynamics were modeled in a steady state simulator allowing flammability to be measured as a function of airspeed, fuel flow rate, fuel jet Reynolds number and polymer concentration. The digital imaging technique was employed to measure spray characteristics in this simulation and these results were related to flammability test results. Scaling relationships were investigated through correlation of experimental results with characteristic dimensions spanning more than two orders of magnitude.
Spacecraft Multiple Array Communication System Performance Analysis
NASA Technical Reports Server (NTRS)
Hwu, Shian U.; Desilva, Kanishka; Sham, Catherine C.
2010-01-01
The Communication Systems Simulation Laboratory (CSSL) at the NASA Johnson Space Center is tasked to perform spacecraft and ground network communication system simulations, design validation, and performance verification. The CSSL has developed simulation tools that model spacecraft communication systems and the space and ground environment in which the tools operate. In this paper, a spacecraft communication system with multiple arrays is simulated. A multiple-array combining technique is used to increase radio frequency coverage and data rate performance. The technique achieves phase coherence among the phased arrays so that the signals combine constructively at the targeted receiver. There are many technical challenges in integrating a high-transmit-power communication system on a spacecraft. The array combining technique can improve communication system data rate and coverage performance without increasing the system transmit power requirements. Example simulation results indicate that significant performance improvement can be achieved with phase coherence implementation.
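The benefit of phase-coherent combining can be illustrated with an idealized phasor model (an assumption for illustration, not the CSSL simulation tools): aligning the per-array phases before summation recovers the full N-fold power gain at the receiver.

```python
import numpy as np

rng = np.random.default_rng(6)

n_arrays = 4
phases = rng.uniform(0, 2 * np.pi, n_arrays)   # unknown per-array path phases

# Unit-amplitude complex baseband signal as seen through each array's path.
signals = np.exp(1j * phases)

# Without alignment the phasors add with random phases; with the estimated
# phases removed they add coherently and the combined power grows as N^2,
# i.e. an N-fold gain after normalising by the N-fold total transmit power.
incoherent = np.abs(signals.sum()) ** 2 / n_arrays
coherent = np.abs((signals * np.exp(-1j * phases)).sum()) ** 2 / n_arrays

print(f"power gain, random phases:  {incoherent:.2f}")
print(f"power gain, phase coherent: {coherent:.2f}  (equals N = {n_arrays})")
```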
Demonstration of innovative techniques for work zone safety data analysis
DOT National Transportation Integrated Search
2009-07-15
Based upon the results of the simulator data analysis, additional future research can be : identified to validate the driving simulator in terms of similarities with Ohio work zones. For : instance, the speeds observed in the simulator were greater f...
Billings, Jay Jay; Deyton, Jordan H.; Forest Hull, S.; ...
2015-07-17
Building new fission reactors in the United States presents many technical and regulatory challenges. Chief among the technical challenges is the need to share and present results from new high-fidelity, high-performance simulations in an easily consumable way. Given that modern multi-scale, multi-physics simulations can generate petabytes of data, this will require the development of new techniques and methods to reduce the data to familiar quantities of interest with a more reasonable resolution and size. Furthermore, some of the results from these simulations may be new quantities for which visualization and analysis techniques are not immediately available in the community and need to be developed. Our paper describes a new system for managing high-performance simulation results in a domain-specific way that naturally exposes quantities of interest for light-water and sodium-cooled fast reactors. It enables easy qualitative and quantitative comparisons between simulation results with a graphical user interface and cross-platform, multi-language input/output libraries for use by developers to work with the data. One example comparing results from two different simulation suites for a single assembly in a light-water reactor is presented along with a detailed discussion of the system's requirements and design.
Simulation techniques for estimating error in the classification of normal patterns
NASA Technical Reports Server (NTRS)
Whitsitt, S. J.; Landgrebe, D. A.
1974-01-01
Methods of efficiently generating and classifying samples with specified multivariate normal distributions are discussed. Conservative confidence tables for sample sizes are given for selective sampling. Simulation results are compared with classified training data. Techniques for comparing error and separability measures for two normal patterns are investigated and used to display the relationship between the error and the Chernoff bound.
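A minimal sketch of the approach, generating specified multivariate normal samples, classifying them with the Bayes likelihood-ratio rule, and comparing the empirical error against the Chernoff bound evaluated at s = 1/2 (the Bhattacharyya bound), is given below; the two example distributions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
S1 = np.array([[1.0, 0.3], [0.3, 1.0]])
S2 = np.array([[1.5, -0.2], [-0.2, 0.8]])
n = 100_000

x1 = rng.multivariate_normal(mu1, S1, n)
x2 = rng.multivariate_normal(mu2, S2, n)

def log_pdf(x, mu, S):
    # Gaussian log-density up to the common -log(2*pi) constant, which
    # cancels when two classes are compared.
    d = x - mu
    Si = np.linalg.inv(S)
    return -0.5 * np.einsum('ij,jk,ik->i', d, Si, d) - 0.5 * np.log(np.linalg.det(S))

# Bayes classifier with equal priors: assign to the larger likelihood.
err1 = np.mean(log_pdf(x1, mu1, S1) < log_pdf(x1, mu2, S2))
err2 = np.mean(log_pdf(x2, mu2, S2) < log_pdf(x2, mu1, S1))
empirical = 0.5 * (err1 + err2)

# Bhattacharyya distance and the corresponding error bound.
Sm = 0.5 * (S1 + S2)
dmu = mu2 - mu1
B = 0.125 * dmu @ np.linalg.inv(Sm) @ dmu \
    + 0.5 * np.log(np.linalg.det(Sm) / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
bound = 0.5 * np.exp(-B)

print(f"empirical Bayes error: {empirical:.4f}   Bhattacharyya bound: {bound:.4f}")
```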
Results of a joint NOAA/NASA sounder simulation study
NASA Technical Reports Server (NTRS)
Phillips, N.; Susskind, Joel; Mcmillin, L.
1988-01-01
This paper presents the results of a joint NOAA and NASA sounder simulation study in which the accuracies of atmospheric temperature profiles and surface skin temperature measurements retrieved from two sounders were compared: (1) the currently used IR temperature sounder HIRS2 (High-resolution Infrared Radiation Sounder 2); and (2) the recently proposed high-spectral-resolution IR sounder AMTS (Advanced Moisture and Temperature Sounder). Simulations were conducted for both clear and partial cloud conditions. Data were analyzed at NASA using a physical inversion technique and at NOAA using a statistical technique. Results show significant improvement of AMTS compared to HIRS2 for both clear and cloudy conditions. The improvements are indicated by both methods of data analysis, but the physical retrievals outperform the statistical retrievals.
User modeling techniques for enhanced usability of OPSMODEL operations simulation software
NASA Technical Reports Server (NTRS)
Davis, William T.
1991-01-01
The PC-based OPSMODEL operations software for modeling and simulation of space station crew activities supports engineering and cost analyses and operations planning. Using top-down modeling, the level of detail required in the data base can be limited to be commensurate with the results required of any particular analysis. To perform a simulation, a resource environment consisting of locations, crew definition, equipment, and consumables is first defined. Activities to be simulated are then defined as operations and scheduled as desired. These operations are defined within a 1000-level priority structure. A simulation on OPSMODEL, then, consists of the following: user-defined, user-scheduled operations executing within an environment of user-defined resource and priority constraints. Techniques for prioritizing operations to realistically model a representative daily scenario of on-orbit space station crew activities are discussed. The large number of priority levels allows priorities to be assigned commensurate with the detail necessary for a given simulation. Several techniques for realistic modeling of day-to-day work carryover are also addressed.
An efficient and reliable predictive method for fluidized bed simulation
Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen
2017-06-13
In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.
Global Magnetosphere Modeling With Kinetic Treatment of Magnetic Reconnection
NASA Astrophysics Data System (ADS)
Toth, G.; Chen, Y.; Gombosi, T. I.; Cassak, P.; Markidis, S.; Peng, B.; Henderson, M. G.
2017-12-01
Global magnetosphere simulations with a kinetic treatment of magnetic reconnection are very challenging because of the large separation of global and kinetic scales. We have developed two algorithms that can overcome these difficulties: 1) the two-way coupling of the global magnetohydrodynamic code with an embedded particle-in-cell model (MHD-EPIC) and 2) the artificial increase of the ion and electron kinetic scales. Both of these techniques improve the efficiency of the simulations by many orders of magnitude. We will describe the techniques and show that they provide correct and meaningful results. Using the coupled model and the increased kinetic scales, we will present global magnetosphere simulations with the PIC domains covering the dayside and/or tail reconnection sites. The simulation results will be compared to and validated with MMS observations.
Near-field diffraction from amplitude diffraction gratings: theory, simulation and results
NASA Astrophysics Data System (ADS)
Abedin, Kazi Monowar; Rahman, S. M. Mujibur
2017-08-01
We describe a computer simulation method by which the complete near-field diffraction pattern of an amplitude diffraction grating can be generated. The technique uses the method of iterative Fresnel integrals to calculate and generate the diffraction images. The theoretical background as well as the techniques used to perform the simulation are described. The program is written in MATLAB and can be implemented on any ordinary PC. Examples of simulated diffraction images are presented and discussed. Images generated in the far field, where they reduce to the Fraunhofer diffraction pattern, are also presented for a realistic grating and compared with the results predicted by the grating equation, which is applicable in the far field. The method can be used as a tool to teach the complex phenomenon of diffraction in classrooms.
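Since the original program is in MATLAB and not reproduced in the abstract, the sketch below re-creates the core step in Python: the near-field pattern of an amplitude grating obtained by superposing Fresnel integrals over the open slits. The grating and observation parameters are assumptions chosen for illustration.

```python
import numpy as np
from scipy.special import fresnel

def slit_field(x_obs, x1, x2, wavelength, z):
    """Fresnel diffraction field at x_obs from an open slit spanning [x1, x2]."""
    s = np.sqrt(2.0 / (wavelength * z))       # Fresnel scaling of transverse distance
    S2, C2 = fresnel(s * (x2 - x_obs))        # scipy returns (S, C)
    S1, C1 = fresnel(s * (x1 - x_obs))
    return (C2 - C1) + 1j * (S2 - S1)

wavelength = 633e-9                           # He-Ne laser line (assumed)
z = 0.05                                      # 5 cm behind the grating: near field
period, open_w, n_slits = 100e-6, 50e-6, 20   # assumed amplitude-grating geometry

x = np.linspace(-1.5e-3, 1.5e-3, 4000)
field = np.zeros_like(x, dtype=complex)
for k in range(n_slits):                      # superpose the per-slit Fresnel integrals
    left = (k - n_slits / 2) * period
    field += slit_field(x, left, left + open_w, wavelength, z)

intensity = np.abs(field) ** 2
print("peak/mean intensity contrast:", round(intensity.max() / intensity.mean(), 2))
```

Increasing z in this sketch moves the pattern toward the Fraunhofer regime, where the maxima approach the positions given by the grating equation.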
Simulated Consulting Experiences in Counselor Preparation
ERIC Educational Resources Information Center
Panther, Edward E.
1971-01-01
Simulation, using role playing and commercially available materials, was used to provide counselor-teacher consultation experience for counselor trainees. The results of the program supported the use of simulation as a technique for counselor education. Implications for counselor education programs are discussed. (Author/CG)
Testing prediction methods: Earthquake clustering versus the Poisson model
Michael, A.J.
1997-01-01
Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
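A minimal sketch of this kind of significance test, under the simplest (Poisson) null model only, is given below; the alarm windows and the stand-in "observed" catalog are synthetic assumptions. The paper's caution applies directly: replacing the uniform draws with a clustered earthquake simulator would typically weaken the apparent significance.

```python
import numpy as np

rng = np.random.default_rng(8)

T = 3650.0                                         # catalog length in days
alarms = [(100, 110), (500, 520), (1500, 1510), (2000, 2030)]  # hypothetical alarms

def hits(quakes, alarms):
    """Number of events falling inside any alarm window."""
    return sum(any(a <= t <= b for a, b in alarms) for t in quakes)

observed = np.sort(rng.uniform(0, T, 40))          # stand-in for a real catalog
obs_hits = hits(observed, alarms)

# Null model: homogeneous Poisson catalogs with the same event count.
n_sim, count = 10_000, 0
for _ in range(n_sim):
    sim = rng.uniform(0, T, observed.size)
    if hits(sim, alarms) >= obs_hits:              # simulated catalog does as well
        count += 1

print(f"significance level under the Poisson null: {count / n_sim:.3f}")
```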
ERIC Educational Resources Information Center
Dieckmann, Peter; Friis, Susanne Molin; Lippert, Anne; Ostergaard, Doris
2012-01-01
Introduction: This study describes (a) process goals, (b) success factors, and (c) barriers for optimizing simulation-based learning environments within the simulation setting model developed by Dieckmann. Methods: Seven simulation educators of different experience levels were interviewed using the Critical Incident Technique. Results: (a) The…
Large Eddy Simulation of a Film Cooling Technique with a Plenum
NASA Astrophysics Data System (ADS)
Dharmarathne, Suranga; Sridhar, Narendran; Araya, Guillermo; Castillo, Luciano; Parameswaran, Sivapathasund
2012-11-01
Factors that affect film cooling performance have been categorized into three main groups: (i) coolant and mainstream conditions, (ii) hole geometry and configuration, and (iii) airfoil geometry (Bogard et al., 2006). The present study focuses on the second group of factors, namely, the modeling of the coolant hole and the plenum. Simulating the correct physics of the problem is required to achieve more realistic numerical results. In this regard, modeling of the cooling jet hole and the plenum chamber is highly important (Iourokina et al., 2006). Substituting artificial boundary conditions for a correct plenum design would yield unrealistic results (Iourokina et al., 2006). This study models the film cooling technique with a plenum using a Large Eddy Simulation. An incompressible coolant jet is ejected at the surface of the plate at an angle of 30°, where it meets a compressible turbulent boundary layer that simulates the turbine inflow conditions. The dynamic multi-scale approach of Araya (2011) is introduced to prescribe turbulent inflow conditions. Simulations are carried out for two different blowing ratios, and film cooling effectiveness is calculated for both cases. Results obtained from the LES will be compared with experimental results.
Clinical validation of robot simulation of toothbrushing - comparative plaque removal efficacy
2014-01-01
Background Clinical validation of laboratory toothbrushing tests has important advantages. It was, therefore, the aim to demonstrate correlation of tooth cleaning efficiency of a new robot brushing simulation technique with clinical plaque removal. Methods Clinical programme: 27 subjects received dental cleaning prior to 3-day-plaque-regrowth-interval. Plaque was stained, photographically documented and scored using planimetrical index. Subjects brushed teeth 33–47 with three techniques (horizontal, rotating, vertical), each for 20s buccally and for 20s orally in 3 consecutive intervals. The force was calibrated, the brushing technique was video supported. Two different brushes were randomly assigned to the subject. Robot programme: Clinical brushing programmes were transfered to a 6-axis-robot. Artificial teeth 33–47 were covered with plaque-simulating substrate. All brushing techniques were repeated 7 times, results were scored according to clinical planimetry. All data underwent statistical analysis by t-test, U-test and multivariate analysis. Results The individual clinical cleaning patterns are well reproduced by the robot programmes. Differences in plaque removal are statistically significant for the two brushes, reproduced in clinical and robot data. Multivariate analysis confirms the higher cleaning efficiency for anterior teeth and for the buccal sites. Conclusions The robot tooth brushing simulation programme showed good correlation with clinically standardized tooth brushing. This new robot brushing simulation programme can be used for rapid, reproducible laboratory testing of tooth cleaning. PMID:24996973
NASA Astrophysics Data System (ADS)
Sembiring, L.; Van Ormondt, M.; Van Dongeren, A. R.; Roelvink, J. A.
2017-07-01
Rip currents are one of the most dangerous coastal hazards for swimmers. In order to minimize the risk, an operational, process-based coastal model system can be utilized to provide forecasts of nearshore waves and currents that may endanger beach goers. In this paper, an operational model for rip current prediction utilizing nearshore bathymetry obtained from a video imaging technique is demonstrated. For the nearshore-scale model, XBeach is used, with which tidal currents and wave-induced currents (including the effect of wave groups) can be simulated simultaneously. Up-to-date bathymetry is obtained using the video-based cBathy technique. The system is tested for the Egmond aan Zee beach, located in the northern part of the Dutch coastline. This paper tests the applicability of bathymetry obtained from the video technique as input for the numerical modelling system by comparing simulation results using surveyed bathymetry with model results using video-derived bathymetry. Results show that the video technique is able to produce bathymetry converging towards the ground-truth observations. This bathymetry validation is followed by an example of an operational forecasting simulation predicting rip currents. Rip current flow fields simulated over measured and modeled bathymetries are compared in order to assess the performance of the proposed forecast system.
Fixed gain and adaptive techniques for rotorcraft vibration control
NASA Technical Reports Server (NTRS)
Roy, R. H.; Saberi, H. A.; Walker, R. A.
1985-01-01
The results of an analysis effort performed to demonstrate the feasibility of employing approximate dynamical models and frequency-shaped cost functional control law design techniques for helicopter vibration suppression are presented. Both fixed-gain and adaptive control designs based on linear second-order dynamical models were implemented in a detailed Rotor Systems Research Aircraft (RSRA) simulation to validate these active vibration suppression control laws. Approximate models of fuselage flexibility were included in the RSRA simulation in order to more accurately characterize the structural dynamics. The results for both the fixed-gain and adaptive approaches are promising and provide a foundation for pursuing further validation in more extensive simulation studies and in wind tunnel and/or flight tests.
Discrete event simulation: the preferred technique for health economic evaluations?
Caro, Jaime J; Möller, Jörgen; Getsios, Denis
2010-12-01
To argue that discrete event simulation should be preferred to cohort Markov models for economic evaluations in health care. The basis for the modeling techniques is reviewed. For many health-care decisions, existing data are insufficient to fully inform them, necessitating the use of modeling to estimate the consequences that are relevant to decision-makers. These models must reflect what is known about the problem at a level of detail sufficient to inform the questions. Oversimplification will result in estimates that are not only inaccurate, but potentially misleading. Markov cohort models, though currently popular, have so many limitations and inherent assumptions that they are inadequate to inform most health-care decisions. An event-based individual simulation offers an alternative much better suited to the problem. A properly designed discrete event simulation provides more accurate, relevant estimates without being computationally prohibitive. It does require more data and may be a challenge to convey transparently, but these are necessary trade-offs to provide meaningful and valid results. In our opinion, discrete event simulation should be the preferred technique for health economic evaluations today. © 2010, International Society for Pharmacoeconomics and Outcomes Research (ISPOR).
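To make the contrast with cohort Markov models concrete, here is a hedged sketch of an individual-level discrete event simulation built on an event queue; the disease states, event-time distributions, costs, and utilities are invented for illustration and carry no clinical meaning.

```python
import heapq
import numpy as np

rng = np.random.default_rng(9)

def simulate_patient(horizon=10.0, disc=0.03):
    """One patient trajectory: continuous time-to-event sampling instead of
    fixed Markov cycles. All numbers are illustrative assumptions."""
    t, costs, qalys = 0.0, 0.0, 0.0
    state = "stable"
    events = [(rng.exponential(4.0), "progression"), (rng.exponential(12.0), "death")]
    heapq.heapify(events)
    while events:
        t_next, kind = heapq.heappop(events)
        t_next = min(t_next, horizon)
        u = 0.8 if state == "stable" else 0.5        # utility per year in state
        c = 1000.0 if state == "stable" else 5000.0  # cost per year in state
        w = (np.exp(-disc * t) - np.exp(-disc * t_next)) / disc  # discount weight
        qalys += u * w
        costs += c * w
        t = t_next
        if t >= horizon or kind == "death":          # earliest competing death fires
            break
        if kind == "progression" and state == "stable":
            state = "progressed"
            # Post-progression survival; the pre-progression death time stays
            # in the queue, so the earlier of the two ends the trajectory.
            heapq.heappush(events, (t + rng.exponential(3.0), "death"))
    return costs, qalys

results = np.array([simulate_patient() for _ in range(20_000)])
print("mean discounted cost {:.0f}, mean discounted QALYs {:.2f}".format(*results.mean(axis=0)))
```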
NASA Technical Reports Server (NTRS)
Bedewi, Nabih E.; Yang, Jackson C. S.
1987-01-01
Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The mathematics of the technique is presented in addition to the results of computer simulations conducted to demonstrate the prediction of the response of the system and the random forcing function initially introduced to excite the system.
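As a rough illustration of the Random Decrement idea, the sketch below averages response segments that start at upward crossings of a trigger level; the averaging cancels the zero-mean forced part and leaves an estimate of the homogeneous (free-decay) response. The toy oscillator, trigger level, and segment length are arbitrary choices, not the authors' settings:

```python
import numpy as np

def random_decrement(y, trigger, seg_len):
    """Average all segments of y that begin where y crosses `trigger` upward."""
    idx = np.where((y[:-1] < trigger) & (y[1:] >= trigger))[0] + 1
    idx = idx[idx + seg_len < len(y)]
    segments = np.stack([y[i:i + seg_len] for i in idx])
    return segments.mean(axis=0), len(idx)

# Toy data: a randomly excited damped oscillator, integrated by Euler steps.
rng = np.random.default_rng(0)
dt, n = 0.01, 200_000
y = np.zeros(n)
v = 0.0
for k in range(1, n):
    a = -2.0 * 0.05 * 2 * np.pi * v - (2 * np.pi) ** 2 * y[k - 1] + rng.normal(0, 50)
    v += a * dt
    y[k] = y[k - 1] + v * dt

signature, count = random_decrement(y, trigger=y.std(), seg_len=500)
print(f"averaged {count} segments; the signature approximates the free decay")
```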
Ramamurti, B S; Estok, D M; Jasty, M; Harris, W H
1998-05-01
We developed an analytical technique to determine the paths traced by specific points on the femoral head against the acetabulum in the human hip joint during gait. The purpose of the study was to apply this technique to the mechanical hip simulators chosen to conduct wear tests on polymeric acetabular liners used in total hip replacements. These simulators differ from one another in the type of motion produced, apart from other variables such as type of lubricant and head position. Due to the variation in the kinematics between the machines, the paths traced by the points on the femoral head against the acetabular liner ranged from simple linear traces to figure-8 loops and quasi-elliptical paths during a single simulator cycle. The distances traveled by these points during the same period also varied appreciably among the different hip simulator designs. These results are important when combined with other studies that have shown that kinematics can play an important role in the outcome of in vitro wear experiments. The kinematic differences quantified in this study can partially explain the substantial differences in wear data reported from different simulator designs and also underscore the usefulness of the technique described in this study in judging the results from different hip simulator experiments.
An adaptive front tracking technique for three-dimensional transient flows
NASA Astrophysics Data System (ADS)
Galaktionov, O. S.; Anderson, P. D.; Peters, G. W. M.; van de Vosse, F. N.
2000-01-01
An adaptive technique, based on both surface stretching and surface curvature analysis, for tracking strongly deforming fluid volumes in three-dimensional flows is presented. The efficiency and accuracy of the technique are demonstrated for two- and three-dimensional flow simulations. For the two-dimensional test example, the results are compared with results obtained using a different tracking approach based on the advection of a passive scalar. Although both techniques reveal roughly the same structures, the resolution of the front tracking technique is much higher. In the three-dimensional test example, a spherical blob is tracked in a chaotic mixing flow. For this problem, the accuracy of the adaptive tracking is demonstrated by the volume conservation of the advected blob. Adaptive front tracking is suitable for simulating the initial stages of fluid mixing, where the interfacial area can grow exponentially with time. The efficiency of the algorithm benefits significantly from parallelization of the code.
Consistent Principal Component Modes from Molecular Dynamics Simulations of Proteins.
Cossio-Pérez, Rodrigo; Palma, Juliana; Pierdominici-Sottile, Gustavo
2017-04-24
Principal component analysis is a technique widely used for studying the movements of proteins using data collected from molecular dynamics simulations. In spite of its extensive use, the technique has a serious drawback: equivalent simulations do not afford the same PC-modes. In this article, we show that concatenating equivalent trajectories and calculating the PC-modes from the concatenated one significantly enhances the reproducibility of the results. Moreover, the consistency of the modes can be systematically improved by adding more individual trajectories to the concatenated one.
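A minimal sketch of the concatenation recipe follows; the trajectory shapes and variance profile are invented, and frame alignment (removal of overall rotation and translation), which real MD analyses require, is omitted for brevity:

```python
import numpy as np

def pc_modes(trajectories):
    """PCA of MD coordinates: concatenate the trajectories, then diagonalize
    the covariance of the pooled frames (frames x 3N coordinate matrix)."""
    pooled = np.concatenate(trajectories, axis=0)
    pooled = pooled - pooled.mean(axis=0)      # remove the mean structure
    cov = pooled.T @ pooled / (len(pooled) - 1)
    evals, evecs = np.linalg.eigh(cov)         # ascending eigenvalue order
    return evals[::-1], evecs[:, ::-1]         # largest variance first

# Toy example: three "equivalent runs" of 1000 frames, 30 coordinates each.
rng = np.random.default_rng(3)
runs = [rng.normal(size=(1000, 30)) * np.linspace(3, 0.1, 30) for _ in range(3)]
evals, modes = pc_modes(runs)
print("fraction of variance in first 3 modes:", evals[:3].sum() / evals.sum())
```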
A study of the feasibility of statistical analysis of airport performance simulation
NASA Technical Reports Server (NTRS)
Myers, R. H.
1982-01-01
The feasibility of conducting a statistical analysis of simulation experiments to study airport capacity is investigated. First, the form of the distribution of airport capacity is studied. Since the distribution is non-Gaussian, it is important to determine the effect of this distribution on standard analysis-of-variance techniques and power calculations. Next, power computations are made in order to determine how economical simulation experiments would be if they were designed to detect capacity changes from condition to condition. Many of the conclusions drawn are results of Monte Carlo techniques.
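A sketch of the kind of Monte Carlo power computation described here is given below; the gamma-distributed capacity, sample sizes, and critical value are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_power(shift, n_per_cell=30, trials=2000):
    """Monte Carlo power of a two-condition comparison when capacity is
    skewed (gamma-distributed) rather than Gaussian."""
    rejections = 0
    for _ in range(trials):
        a = rng.gamma(shape=4.0, scale=10.0, size=n_per_cell)          # baseline
        b = rng.gamma(shape=4.0, scale=10.0, size=n_per_cell) + shift  # changed condition
        t = (b.mean() - a.mean()) / np.sqrt(a.var(ddof=1) / n_per_cell
                                            + b.var(ddof=1) / n_per_cell)
        if abs(t) > 1.96:   # large-sample normal critical value
            rejections += 1
    return rejections / trials

for shift in (0.0, 5.0, 10.0):
    print(f"capacity shift {shift:5.1f}: estimated power {mc_power(shift):.2f}")
```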
Skin fluorescence model based on the Monte Carlo technique
NASA Astrophysics Data System (ADS)
Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.
2003-10-01
A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores, which follows the packing of the collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while the fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate skin fluorescence spectra.
The effect of sampling techniques used in the multiconfigurational Ehrenfest method
NASA Astrophysics Data System (ADS)
Symonds, C.; Kattirtzi, J. A.; Shalashilin, D. V.
2018-05-01
In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.
Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian
2016-03-20
We show that, with an appropriate combination of two optical simulation techniques (classical ray tracing and the finite-difference time-domain method), an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.
2013-01-01
Background: It is important to understand the perceived value of surgical design and simulation (SDS) amongst surgeons, as this will influence its implementation in clinical settings. The purpose of the present study was to examine the application of the convergent interview technique in the field of surgical design and simulation and to evaluate whether the technique would uncover new perceptions of virtual surgical planning (VSP) and medical models not discovered by other qualitative case-based techniques. Methods: Five surgeons were asked to participate in the study. Each participant was interviewed following the convergent interview technique. After each interview, the interviewer interpreted the information by seeking agreements and disagreements among the interviewees in order to understand the key concepts in the field of SDS. Results: Fifteen important issues were extracted from the convergent interviews. Conclusion: In general, the convergent interview was an effective technique for collecting information about the perceptions of clinicians. The study identified three areas where the technique could be improved upon for future studies in the SDS field. PMID:23782771
NASA Astrophysics Data System (ADS)
Akhlaghi, H.; Roohi, E.; Myong, R. S.
2012-11-01
Micro/nano geometries with a specified wall heat flux are widely encountered in electronic cooling and micro-/nano-fluidic sensors. We introduce a new technique to impose a desired (positive/negative) wall heat flux boundary condition in DSMC simulations. The technique is based on an iterative adjustment of the wall temperature magnitude. It is found that the proposed iterative technique has good numerical performance and can implement both positive and negative wall heat flux rates accurately. Using the present technique, rarefied gas flow through micro-/nanochannels under specified wall heat flux conditions is simulated, and unique behaviors are observed in the case of channels with cooling walls. For example, contrary to the heating process, it is observed that cooling of the micro/nanochannel walls results in small variations in the density field. Upstream thermal creep effects in the cooling process decrease the velocity slip despite the increase of the Knudsen number along the channel. Similarly, the cooling process decreases the curvature of the pressure distribution below the linear incompressible distribution. Our results indicate that flow cooling increases the mass flow rate through the channel, and vice versa.
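The iterative wall-temperature adjustment can be sketched as follows; here a mock linear flux law stands in for an actual DSMC sampling step, and all constants are invented:

```python
def dsmc_wall_heat_flux(T_wall):
    """Stand-in for a full DSMC run: returns the sampled wall heat flux
    (W/m^2) at a given wall temperature. The linear law is a mock model."""
    T_gas = 350.0  # reference gas temperature in kelvin (invented)
    return 120.0 * (T_gas - T_wall)

def tune_wall_temperature(q_target, T0=300.0, relax=0.8, tol=0.5, max_iter=50):
    """Iterate on the wall temperature until the sampled heat flux matches
    q_target, using an under-relaxed secant update between DSMC samples."""
    T_prev, q_prev = T0, dsmc_wall_heat_flux(T0)
    T = T0 + 10.0  # perturb once to start the secant slope estimate
    for _ in range(max_iter):
        q = dsmc_wall_heat_flux(T)
        if abs(q - q_target) < tol:
            break
        slope = (q - q_prev) / (T - T_prev)  # local dq/dT from the two samples
        T_prev, q_prev = T, q
        T += relax * (q_target - q) / slope
    return T, q

T_wall, q = tune_wall_temperature(q_target=-2000.0)  # negative flux: cooling wall
print(f"converged wall temperature {T_wall:.1f} K at flux {q:.1f} W/m^2")
```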
Knowledge-based simulation using object-oriented programming
NASA Technical Reports Server (NTRS)
Sidoran, Karen M.
1993-01-01
Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.
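As a toy illustration of this object-oriented formulation (entity names, events, and rules are invented; this is not RASE code), factual knowledge lives in attributes and behavioral knowledge in methods:

```python
import heapq

class Entity:
    """Simulation entity holding factual knowledge (attributes) and
    behavioral knowledge (how it reacts to events)."""
    def __init__(self, name, fuel):
        self.name, self.fuel = name, fuel     # factual knowledge

    def handle(self, event, sim):             # behavioral knowledge
        if event == "patrol":
            if self.fuel >= 20:
                self.fuel -= 20
                sim.schedule(sim.now + 2.0, self, "patrol")
            else:
                sim.schedule(sim.now + 1.0, self, "refuel")
        elif event == "refuel":
            self.fuel = 100
            sim.schedule(sim.now + 2.0, self, "patrol")

class Simulation:
    def __init__(self):
        self.now, self.agenda, self._n = 0.0, [], 0

    def schedule(self, t, entity, event):
        self._n += 1                           # tie-breaker for the heap
        heapq.heappush(self.agenda, (t, self._n, entity, event))

    def run(self, until):
        while self.agenda and self.agenda[0][0] <= until:
            self.now, _, entity, event = heapq.heappop(self.agenda)
            print(f"t={self.now:4.1f}  {entity.name}: {event} (fuel={entity.fuel})")
            entity.handle(event, self)

sim = Simulation()
sim.schedule(0.0, Entity("scout-1", fuel=100), "patrol")
sim.run(until=10.0)
```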
NASA Technical Reports Server (NTRS)
Deacetis, Louis A.
1991-01-01
The need to reduce the costs of Space Station Freedom has resulted in a major redesign and downsizing of the Station in general, and its Communications and Tracking (C&T) components in particular. Earlier models and simulations of the C&T Space-to-Ground Subsystem (SGS) in particular are no longer valid. There thus exists a general need for updated, high fidelity simulations of C&T subsystems. This project explored simulation techniques and methods that might be used in developing new simulations of C&T subsystems, including the SGS. Three requirements were placed on the simulations to be developed: (1) they run on IBM PC/XT/AT compatible computers; (2) they be written in Ada as much as possible; and (3) since control and monitoring of the C&T subsystems will involve communication via a MIL-STD-1553B serial bus, that the possibility of commanding the simulator and monitoring its sensors via that bus be included in the design of the simulator. The result of the project is a prototype of a simulation of the Assembly/Contingency Transponder of the SGS, written in Ada, which can be controlled from another PC via a MIL-STD-1553B bus.
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
The Numerical Technique for the Landslide Tsunami Simulations Based on Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Kozelkov, A. S.
2017-12-01
The paper presents an integral technique simulating all phases of a landslide-driven tsunami. The technique is based on the numerical solution of the system of Navier-Stokes equations for multiphase flows. The numerical algorithm uses a fully implicit approximation method, in which the equations of continuity and momentum conservation are coupled through implicit summands of pressure gradient and mass flow. The method we propose removes severe restrictions on the time step and allows simulation of tsunami propagation to arbitrarily large distances. The landslide origin is simulated as an individual phase being a Newtonian fluid with its own density and viscosity and separated from the water and air phases by an interface. The basic formulas of equation discretization and expressions for coefficients are presented, and the main steps of the computation procedure are described in the paper. To enable simulations of tsunami propagation across wide water areas, we propose a parallel algorithm of the technique implementation, which employs an algebraic multigrid method. The implementation of the multigrid method is based on the global level and cascade collection algorithms that impose no limitations on the paralleling scale and make this technique applicable to petascale systems. We demonstrate the possibility of simulating all phases of a landslide-driven tsunami, including its generation, propagation and uprush. The technique has been verified against the problems supported by experimental data. The paper describes the mechanism of incorporating bathymetric data to simulate tsunamis in real water areas of the world ocean. Results of comparison with the nonlinear dispersion theory, which has demonstrated good agreement, are presented for the case of a historical tsunami of volcanic origin on the Montserrat Island in the Caribbean Sea.
Wind Energy System Time-domain (WEST) analyzers using hybrid simulation techniques
NASA Technical Reports Server (NTRS)
Hoffman, J. A.
1979-01-01
Two stand-alone analyzers constructed for real time simulation of the complex dynamic characteristics of horizontal-axis wind energy systems are described. Mathematical models for an aeroelastic rotor, including nonlinear aerodynamic and elastic loads, are implemented with high speed digital and analog circuitry. Models for elastic supports, a power train, a control system, and a rotor gimbal system are also included. Limited correlation efforts show good comparisons between results produced by the analyzers and results produced by a large digital simulation. The digital simulation results correlate well with test data.
Martins-Costa, Marilia T C; Ruiz-López, Manuel F
2017-04-15
We report an enhanced sampling technique that allows to reach the multi-nanosecond timescale in quantum mechanics/molecular mechanics molecular dynamics simulations. The proposed technique, called horsetail sampling, is a specific type of multiple molecular dynamics approach exhibiting high parallel efficiency. It couples a main simulation with a large number of shorter trajectories launched on independent processors at periodic time intervals. The technique is applied to study hydrogen peroxide at the water liquid-vapor interface, a system of considerable atmospheric relevance. A total simulation time of a little more than 6 ns has been attained for a total CPU time of 5.1 years representing only about 20 days of wall-clock time. The discussion of the results highlights the strong influence of the solvation effects at the interface on the structure and the electronic properties of the solute. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
Simulation studies of wide and medium field of view earth radiation data analysis
NASA Technical Reports Server (NTRS)
Green, R. N.
1978-01-01
A parameter estimation technique is presented to estimate the radiative flux distribution over the earth from radiometer measurements at satellite altitude. The technique analyzes measurements from a wide field of view (WFOV), horizon to horizon, nadir pointing sensor with a mathematical technique to derive the radiative flux estimates at the top of the atmosphere for resolution elements smaller than the sensor field of view. A computer simulation of the data analysis technique is presented for both earth-emitted and reflected radiation. Zonal resolutions are considered as well as the global integration of plane flux. An estimate of the equator-to-pole gradient is obtained from the zonal estimates. Sensitivity studies of the derived flux distribution to directional model errors are also presented. In addition to the WFOV results, medium field of view results are presented.
NASA Astrophysics Data System (ADS)
Karimabadi, Homa
2012-03-01
Recent advances in simulation technology and hardware are enabling breakthrough science in which many longstanding problems can now be addressed for the first time. In this talk, we focus on kinetic simulations of the Earth's magnetosphere and the magnetic reconnection process, the key mechanism that breaks the protective shield of the Earth's dipole field and allows the solar wind to enter the Earth's magnetosphere. This leads to so-called space weather, where storms on the Sun can affect space-borne and ground-based technological systems on Earth. The talk consists of three parts: (a) an overview of a new multi-scale simulation technique in which each computational grid is updated based on its own unique timestep; (b) a presentation of a new approach to data analysis that we refer to as physics mining, which entails combining data mining and computer vision algorithms with scientific visualization to extract physics from the resulting massive data sets; and (c) a presentation of several recent discoveries in studies of space plasmas, including the role of vortex formation and the resulting turbulence in magnetized plasmas.
Method to simulate and analyse induced stresses for laser crystal packaging technologies.
Ribes-Pleguezuelo, Pol; Zhang, Site; Beckert, Erik; Eberhardt, Ramona; Wyrowski, Frank; Tünnermann, Andreas
2017-03-20
A method to simulate induced stresses for a laser crystal packaging technique, and the consequent study of birefringent effects inside laser cavities, has been developed. The method is based on thermo-mechanical simulations performed with ANSYS 17.0. The ANSYS results were then imported into the VirtualLab Fusion software, where the input/output beams were analysed in terms of wavelength and polarization. The study was carried out in the context of a low-stress soldering technique for glass and crystal optics packaging called the solderjet bumping technique. The outcome of the analysis showed almost no difference between the input and output laser beams for the laser cavity constructed with an yttrium aluminum garnet active laser crystal, a second-harmonic-generator beta-barium borate, and an output laser mirror made of fused silica, assembled by the low-stress solderjet bumping technique.
Anonymity and Historical-Anonymity in Location-Based Services
NASA Astrophysics Data System (ADS)
Bettini, Claudio; Mascetti, Sergio; Wang, X. Sean; Freni, Dario; Jajodia, Sushil
The problem of protecting users' privacy in Location-Based Services (LBS) has been extensively studied recently, and several defense techniques have been proposed. In this contribution, we first present a categorization of privacy attacks and related defenses. Then, we consider the class of defense techniques that aim at providing privacy through anonymity, and in particular algorithms achieving "historical k-anonymity" in the case of an adversary obtaining a trace of requests recognized as being issued by the same (anonymous) user. Finally, we investigate the issues involved in the experimental evaluation of anonymity-based defense techniques; we show that user movement simulations based on mostly random movements can lead to overestimating the privacy protection in some cases and to overprotective techniques in other cases. The above results are obtained by comparison to a more realistic simulation with an agent-based simulator, considering a specific deployment scenario.
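For a flavor of the anonymity-through-cloaking defenses being evaluated, here is a minimal sketch of plain spatial k-anonymity (not the historical variant over request traces); user positions and parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
users = rng.uniform(0, 1000, size=(500, 2))   # user positions in metres

def cloak(issuer_xy, k=10, step=10.0):
    """Grow a square region around the issuer until it covers >= k users,
    so the request could plausibly have come from any of them."""
    half = step
    while True:
        inside = np.all(np.abs(users - issuer_xy) <= half, axis=1).sum()
        if inside >= k:
            return half, inside
        half += step

half, n = cloak(users[0])
print(f"cloaking half-width {half:.0f} m covers {n} users (k=10)")
```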
Use of the Marshall Space Flight Center solar simulator in collector performance evaluation
NASA Technical Reports Server (NTRS)
Humphries, W. R.
1978-01-01
Actual measured values from simulator checkout tests are detailed. Problems encountered during initial startup are discussed and solutions described. Techniques utilized to evaluate collector performance from simulator test data are given. Performance data generated in the simulator are compared to equivalent data generated during natural outdoor testing. Finally, a summary of the collector performance parameters generated to date as a result of simulator testing is given.
Weber, Erin L; Leland, Hyuma A; Azadgoli, Beina; Minneti, Michael; Carey, Joseph N
2017-08-01
Rehearsal is an essential part of mastering any technical skill. The efficacy of surgical rehearsal is currently limited by low fidelity simulation models. Fresh cadaver models, however, offer maximal surgical simulation. We hypothesize that preoperative surgical rehearsal using fresh tissue surgical simulation will improve resident confidence and serve as an important adjunct to current training methods. Preoperative rehearsal of surgical procedures was performed by plastic surgery residents using fresh cadavers in a simulated operative environment. Rehearsal was designed to mimic the clinical operation, complete with a surgical technician to assist. A retrospective, web-based survey was used to assess resident perception of pre- and post-procedure confidence, preparation, technique, speed, safety, and anatomical knowledge on a 5-point scale (1= not confident, 5= very confident). Twenty-six rehearsals were performed by 9 residents (PGY 1-7) an average of 4.7±2.1 days prior to performance of the scheduled operation. Surveys demonstrated a median pre-simulation confidence score of 2 and a post-rehearsal score of 4 (P<0.01). The perceived improvement in confidence and performance was greatest when simulation was performed within 3 days of the scheduled case. All residents felt that cadaveric simulation was better than standard preparation methods of self-directed reading or discussion with other surgeons. All residents believed that their technique, speed, safety, and anatomical knowledge improved as a result of simulation. Fresh tissue-based preoperative surgical rehearsal was effectively implemented in the residency program. Resident confidence and perception of technique improved. Survey results suggest that cadaveric simulation is beneficial for all levels of residents. We believe that implementation of preoperative surgical rehearsal is an effective adjunct to surgical training at all skill levels in the current environment of decreased work hours.
Protein free energy landscapes from long equilibrium simulations
NASA Astrophysics Data System (ADS)
Piana-Agostinetti, Stefano
Many computational techniques based on molecular dynamics (MD) simulation can be used to generate data to aid in the construction of protein free energy landscapes with atomistic detail. Unbiased, long, equilibrium MD simulations, although computationally very expensive, are particularly appealing, as they can provide direct kinetic and thermodynamic information on the transitions between the states that populate a protein free energy surface. Even results generated by this direct technique, however, can be challenging to analyze and interpret. I will discuss approaches we have employed, using equilibrium MD simulation data, to obtain descriptions of the free energy landscapes of proteins ranging in size from tens to thousands of amino acids.
NASA Technical Reports Server (NTRS)
Venable, D. D.
1983-01-01
A semi-analytic Monte Carlo simulation methodology (SALMON) is discussed. This simulation technique is particularly well suited for addressing fundamental radiative transfer problems in oceanographic LIDAR (optical radar), and also provides a framework for investigating the effects of environmental factors on LIDAR system performance. The simulation model was extended for airborne laser fluorosensors to allow for inhomogeneities in the vertical distribution of constituents in clear sea water. Results of the simulations for linearly varying step concentrations of chlorophyll are presented. The SALMON technique was also employed to determine how LIDAR signals from inhomogeneous media differ from those from homogeneous media.
Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo
2018-02-01
The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, two different techniques have traditionally been exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore processors and graphics processing units (GPUs) with a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU; multicore, GPU, and space adaptivity; multicore, GPU, space adaptivity, and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs, 3D simulations on a ventricular mouse mesh (i.e., a complex geometry), and sinus-rhythm and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48×. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165× and 498×. The tested methods were able to reduce the execution time of a simulation by a factor of more than 498 for a complex cellular model in a slab geometry, and by a factor of 165 in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy. Copyright © 2017 John Wiley & Sons, Ltd.
Radiation doses in volume-of-interest breast computed tomography—A Monte Carlo simulation study
Lai, Chao-Jen; Zhong, Yuncheng; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.
2015-01-01
Purpose: Cone beam breast computed tomography (breast CT) with true three-dimensional, nearly isotropic spatial resolution has been developed and investigated over the past decade to overcome the problem of lesions overlapping with breast anatomical structures on two-dimensional mammographic images. However, the ability of breast CT to detect small objects, such as tissue structure edges and small calcifications, is limited. To resolve this problem, the authors proposed and developed a volume-of-interest (VOI) breast CT technique to image a small VOI using a higher radiation dose to improve that region’s visibility. In this study, the authors performed Monte Carlo simulations to estimate average breast dose and average glandular dose (AGD) for the VOI breast CT technique. Methods: Electron–Gamma-Shower system code-based Monte Carlo codes were used to simulate breast CT. The Monte Carlo codes were validated using physical measurements of air kerma ratios and point doses in phantoms with an ion chamber and optically stimulated luminescence dosimeters. The validated full cone x-ray source was then collimated to simulate half cone beam x-rays to image digital pendant-geometry, hemi-ellipsoidal, homogeneous breast phantoms and to estimate breast doses with full field scans. Hemi-ellipsoidal homogeneous phantoms, 13 cm in diameter and 10 cm long, were used to simulate median breasts. Breast compositions of 25% and 50% volumetric glandular fractions (VGFs) were used to investigate the influence on breast dose. The simulated half cone beam x-rays were then collimated to a narrow x-ray beam with an area of 2.5 × 2.5 cm² field of view at the isocenter plane and to perform VOI field scans. The Monte Carlo results for the full field scans and the VOI field scans were then used to estimate the AGD for the VOI breast CT technique. Results: The ratios of air kerma ratios and dose measurement results from the Monte Carlo simulation to those from the physical measurements were 0.97 ± 0.03 and 1.10 ± 0.13, respectively, indicating that the accuracy of the Monte Carlo simulation was adequate. The normalized AGD with VOI field scans was substantially reduced by a factor of about 2 over the VOI region and by a factor of 18 over the entire breast for both 25% and 50% VGF simulated breasts compared with the normalized AGD with full field scans. The normalized AGD for the VOI breast CT technique can be kept the same as or lower than that for a full field scan with the exposure level for the VOI field scan increased by a factor of as much as 12. Conclusions: The authors’ Monte Carlo estimates of normalized AGDs for the VOI breast CT technique show that this technique can be used to markedly increase the dose to the VOI, and thus the visibility of the VOI region, without increasing the dose to the breast as a whole. The results of this investigation should be helpful for those interested in using the VOI breast CT technique to image small calcifications with dose concerns. PMID:26127058
Post-coronagraphic tip-tilt sensing for vortex phase masks: The QACITS technique
NASA Astrophysics Data System (ADS)
Huby, E.; Baudoz, P.; Mawet, D.; Absil, O.
2015-12-01
Context: Small inner working angle coronagraphs, such as the vortex phase mask, are essential to exploit the full potential of ground-based telescopes in the context of exoplanet detection and characterization. However, the drawback of this attractive feature is a high sensitivity to pointing errors, which degrades the performance of the coronagraph. Aims: We propose a tip-tilt retrieval technique based on the analysis of the final coronagraphic image, hereafter called Quadrant Analysis of Coronagraphic Images for Tip-tilt Sensing (QACITS). Methods: Under the assumption of small phase aberrations, we show that the behavior of the vortex phase mask can be simply described from the entrance pupil to the Lyot stop plane with Zernike polynomials. This convenient formalism is used to establish the theoretical basis of the QACITS technique. We performed simulations to demonstrate the validity and limits of the technique, including the case of a centrally obstructed pupil. Results: The QACITS technique principle is validated with experimental results in the case of an unobstructed circular aperture, as well as simulations in the presence of a central obstruction. The typical configuration of the Keck telescope (24% central obstruction) has been simulated with additional high-order aberrations. In these conditions, our simulations show that the QACITS technique is still adapted to centrally obstructed pupils and performs tip-tilt retrieval with a precision of 5 × 10⁻² λ/D when wavefront errors amount to λ/14 rms, and 10⁻² λ/D for λ/70 rms errors (with λ the wavelength and D the pupil diameter). Conclusions: We have developed and demonstrated a tip-tilt sensing technique for vortex coronagraphs. The implementation of the QACITS technique is based on the analysis of the scientific image and does not require any modification of the original setup. Current facilities equipped with a vortex phase mask can thus directly benefit from this technique to improve the contrast performance close to the axis.
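The core of the quadrant analysis can be sketched in a few lines; the linear gain relating the differential intensities to tip-tilt is a placeholder that, in the real technique, comes from calibration against the vortex coronagraph model:

```python
import numpy as np

def quadrant_tiptilt(img):
    """Estimate tip-tilt from the flux asymmetry between image halves,
    in the spirit of QACITS. The gain below is a made-up placeholder."""
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    total = img.sum()
    dx = (img[:, cx:].sum() - img[:, :cx].sum()) / total
    dy = (img[cy:, :].sum() - img[:cy, :].sum()) / total
    gain = 0.1  # lambda/D per unit differential intensity (placeholder)
    return gain * dx, gain * dy

# Toy coronagraphic residual: a Gaussian blob displaced from the mask centre.
y, x = np.mgrid[-32:32, -32:32]
img = np.exp(-((x - 1.5) ** 2 + (y + 0.8) ** 2) / (2 * 6.0 ** 2))
print("estimated (tip, tilt):", quadrant_tiptilt(img))
```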
Verification of Eulerian-Eulerian and Eulerian-Lagrangian simulations for fluid-particle flows
NASA Astrophysics Data System (ADS)
Kong, Bo; Patel, Ravi G.; Capecelatro, Jesse; Desjardins, Olivier; Fox, Rodney O.
2017-11-01
In this work, we study the performance of three simulation techniques for fluid-particle flows: (1) a volume-filtered Euler-Lagrange approach (EL), (2) a quadrature-based moment method using the anisotropic Gaussian closure (AG), and (3) a traditional two-fluid model (TFM). By simulating two problems, particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT), the convergence of the methods under grid refinement is found to depend on the simulation method and the specific problem, with CIT simulations facing fewer difficulties than HIT. Although EL converges under refinement for both HIT and CIT, its statistical results exhibit dependence on the techniques used to extract statistics for the particle phase. For HIT, converging both Euler-Euler methods (TFM and AG) poses challenges, while for CIT, AG and EL produce similar results. Overall, all three methods face challenges when trying to extract converged, parameter-independent statistics due to the presence of shocks in the particle phase.
eLearning techniques supporting problem based learning in clinical simulation.
Docherty, Charles; Hoy, Derek; Topp, Helena; Trinder, Kathryn
2005-08-01
This paper details the results of the first phase of a project using eLearning to support students' learning within a simulated environment. The locus was a purpose-built clinical simulation laboratory (CSL), where the School's philosophy of problem based learning (PBL) was challenged by lecturers using traditional teaching methods. The response was a student-centred, problem based approach to the acquisition of clinical skills that used high-quality learning objects embedded within web pages, substituting for lecturers providing instruction and demonstration. This encouraged student nurses to explore, analyse and make decisions within the safety of a clinical simulation. Learning was facilitated through network communications and reflection on video performances of self and others. Evaluations were positive, with students demonstrating increased satisfaction with PBL, improved performance in exams, and increased self-efficacy in the performance of nursing activities. These results indicate that eLearning techniques can help students acquire clinical skills in the safety of a simulated environment within the context of a problem based learning curriculum.
Decision rules for unbiased inventory estimates
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Koch, D.
1979-01-01
An efficient and accurate procedure for estimating inventories from remote sensing scenes is presented. In place of the conventional and expensive full-dimensional Bayes decision rule, a one-dimensional feature extraction and classification technique was employed. It is shown that this efficient decision rule can be used to develop unbiased inventory estimates and that, for the large sample sizes typical of satellite-derived remote sensing scenes, the resulting accuracies are comparable or superior to those of more expensive alternative procedures. Mathematical details of the procedure are provided in the body of the report and in the appendix. Results of a numerical simulation of the technique, using statistics obtained from an observed LANDSAT scene, are included. The simulation demonstrates the effectiveness of the technique in computing accurate inventory estimates.
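The sketch below illustrates the overall recipe: one-dimensional feature extraction followed by a bias correction of the raw class counts. The class statistics and mixture are invented, and the debiasing step (inverting the classifier's estimated confusion matrix) is one standard way to obtain unbiased inventory estimates, not necessarily the report's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two crop classes with known training statistics (numbers invented).
mu = [np.array([2.0, 1.0]), np.array([3.0, 2.5])]
cov = np.array([[0.8, 0.2], [0.2, 0.6]])

# One-dimensional feature: Fisher discriminant direction, then a threshold.
w = np.linalg.solve(cov + cov, mu[1] - mu[0])
thresh = 0.5 * (w @ mu[0] + w @ mu[1])

def classify(pixels):
    return (pixels @ w > thresh).astype(int)

# Estimate confusion matrix P[i, j] = P(classified j | truth i) from training draws.
P = np.zeros((2, 2))
for i in range(2):
    labels = classify(rng.multivariate_normal(mu[i], cov, size=50_000))
    P[i] = [(labels == 0).mean(), (labels == 1).mean()]

# Scene with a true 70/30 mixture; invert P to debias the raw class proportions.
scene = np.concatenate([rng.multivariate_normal(mu[0], cov, size=70_000),
                        rng.multivariate_normal(mu[1], cov, size=30_000)])
raw = np.bincount(classify(scene), minlength=2) / len(scene)
unbiased = np.linalg.solve(P.T, raw)   # observed = P^T * true  =>  solve for true
print("raw proportions:", raw, " debiased:", unbiased)
```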
Wide range operation of advanced low NOx aircraft gas turbine combustors
NASA Technical Reports Server (NTRS)
Roberts, P. B.; Fiorito, R. J.; Butze, H. F.
1978-01-01
The paper summarizes the results of an experimental test rig program designed to define and demonstrate techniques that would allow the jet-induced circulation and vortex air blast combustors to operate stably, with acceptable emissions, at simulated engine idle without compromising the low NOx emissions under the high-altitude supersonic cruise condition. The discussion focuses on the test results of the key combustor modifications for both the simulated engine idle and cruise conditions. Several range-augmentation techniques are demonstrated that allow the lean-reaction premixed aircraft gas turbine combustor to operate with low NOx emissions at engine cruise and acceptable CO and UHC levels at engine idle. These techniques involve several combinations, including variable geometry and fuel switching designs.
Application of the Shell/3D Modeling Technique for the Analysis of Skin-Stiffener Debond Specimens
NASA Technical Reports Server (NTRS)
Krueger, Ronald; O'Brien, T. Kevin; Minguet, Pierre J.
2002-01-01
The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to three-point bending is demonstrated. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front, was used to capture the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlation of the results demonstrates the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherends.
NASA Technical Reports Server (NTRS)
Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.
1982-01-01
A variety of artificial intelligence techniques which could be used with regard to NASA space applications and robotics were evaluated. The techniques studied were decision tree manipulators, problem solvers, rule based systems, logic programming languages, representation language languages, and expert systems. The overall structure of a robotic simulation tool was defined and a framework for that tool developed. Nonlinear and linearized dynamics equations were formulated for n-link manipulator configurations. A framework for the robotic simulation was established which uses validated manipulator component models connected according to a user-defined configuration.
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1985-01-01
The application of the Generalized Likelihood Ratio technique to the detection and identification of aircraft control element failures has been evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 aircraft. Simulation results show that the technique has potential but that the effects of wind turbulence and Kalman filter model errors are problems which must be overcome.
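For reference, a minimal form of the Generalized Likelihood Ratio test for a bias-type failure signature in Gaussian filter residuals looks like the following; the window length, noise level, and threshold are illustrative, and the B-737 model and Kalman filter are not reproduced here:

```python
import numpy as np

def glr_detect(residuals, sigma=1.0, window=150, threshold=10.0):
    """GLR test for a mean jump in Kalman-filter residuals: maximize the
    log-likelihood ratio over candidate failure onset times in a window."""
    best, onset = 0.0, None
    n = len(residuals)
    for k in range(max(0, n - window), n):
        seg = residuals[k:]
        # LLR for a jump of unknown size starting at k (known noise sigma)
        llr = seg.sum() ** 2 / (2 * sigma ** 2 * len(seg))
        if llr > best:
            best, onset = llr, k
    return best > threshold, onset, best

rng = np.random.default_rng(11)
r = rng.normal(0, 1, 400)
r[300:] += 1.2                 # simulated control-element failure at k = 300
fired, onset, stat = glr_detect(r)
print(f"alarm={fired}, estimated onset ~{onset}, GLR statistic {stat:.1f}")
```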
An object-oriented simulator for 3D digital breast tomosynthesis imaging system.
Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa
2013-01-01
Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. The reconstruction results of the methods are compared both visually and quantitatively by evaluating the performance of the methods using mean structural similarity (MSSIM) values.
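The ART solver at the heart of such a simulator reduces to Kaczmarz row projections; the sketch below (in Python rather than the simulator's C++) uses a random system matrix as a stand-in for the actual ray-tracing geometry:

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=1.0):
    """Algebraic Reconstruction Technique (Kaczmarz sweeps): project the
    current image estimate onto each ray equation a_i . x = b_i in turn."""
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy consistent system: a random matrix stands in for ray tracing.
rng = np.random.default_rng(2)
x_true = rng.random(64)            # flattened 8x8 "breast slice"
A = rng.normal(size=(120, 64))     # 120 projection rays
b = A @ x_true
x_rec = art(A, b)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```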
An interactive driving simulation for driver control and decision-making research
NASA Technical Reports Server (NTRS)
Allen, R. W.; Hogge, J. R.; Schwartz, S. H.
1975-01-01
Display techniques and equations of motion for a relatively simple fixed base car simulation are described. The vehicle dynamics include simplified lateral (steering) and longitudinal (speed) degrees of freedom. Several simulator tasks are described which require a combination of operator control and decision making, including response to wind gust inputs, curved roads, traffic signal lights, and obstacles. Logic circuits are used to detect speeding, running red lights, and crashes. A variety of visual and auditory cues are used to give the driver appropriate performance feedback. The simulated equations of motion are reviewed and the technique for generating the line drawing CRT roadway display is discussed. On-line measurement capabilities and experimenter control features are presented, along with previous and current research results demonstrating simulation capabilities and applications.
Quantum simulation of an ultrathin body field-effect transistor with channel imperfections
NASA Astrophysics Data System (ADS)
Vyurkov, V.; Semenikhin, I.; Filippov, S.; Orlikovsky, A.
2012-04-01
An efficient program for the all-quantum simulation of nanometer field-effect transistors is elaborated. The model is based on the Landauer-Buttiker approach. Our calculation of transmission coefficients employs a transfer-matrix technique involving arbitrary precision (multiprecision) arithmetic to cope with evanescent modes. Modified in this way, the transfer-matrix technique turns out to be much faster in practical simulations than the scattering-matrix technique. Results of the simulation demonstrate the impact of realistic channel imperfections (random charged centers and wall roughness) on transistor characteristics. The Landauer-Buttiker approach is extended to incorporate calculation of the noise at an arbitrary temperature. We also validate the ballistic Landauer-Buttiker approach for the usual situation in which heavily doped contacts are indispensably included in the simulation region.
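The role of multiprecision arithmetic can be illustrated with mpmath: chaining many slice transfer matrices mixes exponentially growing and decaying evanescent amplitudes whose dynamic range quickly exceeds double precision. The slice and coupling matrices below are mock stand-ins, not the paper's device model:

```python
import mpmath as mp

mp.mp.dps = 60  # working precision in decimal digits

def slice_matrix(kappa, L):
    """Transfer matrix of one uniform slice for an evanescent mode: it carries
    the growing and decaying exponential amplitudes across a length L."""
    return mp.matrix([[mp.exp(kappa * L), 0],
                      [0, mp.exp(-kappa * L)]])

# Mock interface matrix coupling the two exponentials at each slice boundary.
coupling = mp.matrix([[mp.mpf("0.9"), mp.mpf("0.1")],
                      [mp.mpf("0.1"), mp.mpf("0.9")]])

M = mp.eye(2)
for _ in range(800):  # exp(1.5 * 0.7 * 800) ~ 10^364: doubles would overflow
    M = coupling * slice_matrix(mp.mpf("1.5"), mp.mpf("0.7")) * M

print("log10 of dominant element:", mp.nstr(mp.log(abs(M[0, 0]), 10), 8))
```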
Spatiotemporal stochastic models for earth science and engineering applications
NASA Astrophysics Data System (ADS)
Luo, Xiaochun
1998-12-01
Spatiotemporal processes occur in many areas of earth sciences and engineering. However, most of the available theoretical tools and techniques of space-time data processing have been designed to operate exclusively in time or in space, and the importance of spatiotemporal variability was not fully appreciated until recently. To address this problem, a systematic framework of spatiotemporal random field (S/TRF) models for geoscience/engineering applications is presented and developed in this thesis. The characterization of space-time continuity is one of the most important aspects of S/TRF modelling: the space-time continuity is displayed with experimental spatiotemporal variograms, summarized in terms of space-time continuity hypotheses, and modelled using spatiotemporal variogram functions. Permissible spatiotemporal covariance/variogram models are addressed through permissibility criteria appropriate to spatiotemporal processes. The estimation of spatiotemporal processes is developed in terms of spatiotemporal kriging techniques. Particular emphasis is given to the singularity analysis of spatiotemporal kriging systems. The impacts of covariance functions, trend forms, and data configurations on the singularity of spatiotemporal kriging systems are discussed. In addition, the tensorial invariance of universal spatiotemporal kriging systems is investigated in terms of the space-time trend. The conditional simulation of spatiotemporal processes is proposed with the development of the sequential group Gaussian simulation (SGGS) techniques, a series of sequential simulation algorithms associated with different group sizes. The simulation error is analyzed with different covariance models and simulation grids. A simulated annealing technique honoring experimental variograms is also proposed, providing a way of conditional simulation without the covariance model fitting that is a prerequisite for most simulation algorithms. The proposed techniques were first applied to modelling of the pressure system in a carbonate reservoir, and then to modelling of spring water contents in the Dyle watershed. The results of these case studies, as well as the theory, suggest that these techniques are realistic and feasible.
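An experimental spatiotemporal variogram of the kind used to display space-time continuity can be computed as below; the synthetic data and bin edges are invented for illustration:

```python
import numpy as np

def st_variogram(coords, times, values, h_bins, tau_bins):
    """Empirical spatiotemporal variogram: half the mean squared increment,
    binned jointly by spatial lag h and temporal lag tau."""
    n = len(values)
    i, j = np.triu_indices(n, k=1)
    h = np.linalg.norm(coords[i] - coords[j], axis=1)
    tau = np.abs(times[i] - times[j])
    sq = 0.5 * (values[i] - values[j]) ** 2
    gamma = np.full((len(h_bins) - 1, len(tau_bins) - 1), np.nan)
    for a in range(len(h_bins) - 1):
        for b in range(len(tau_bins) - 1):
            m = ((h >= h_bins[a]) & (h < h_bins[a + 1])
                 & (tau >= tau_bins[b]) & (tau < tau_bins[b + 1]))
            if m.any():
                gamma[a, b] = sq[m].mean()
    return gamma

rng = np.random.default_rng(9)
coords = rng.uniform(0, 100, (300, 2))   # e.g. well locations (km)
times = rng.uniform(0, 12, 300)          # months
values = np.sin(coords[:, 0] / 20) + 0.3 * times / 12 + rng.normal(0, 0.2, 300)
g = st_variogram(coords, times, values, np.linspace(0, 50, 6), np.linspace(0, 12, 4))
print(np.round(g, 3))
```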
Golabian, A; Hosseini, M A; Ahmadi, M; Soleimani, B; Rezvanifard, M
2018-01-01
Miniature neutron source reactors (MNSRs) are among the safest and most economical research reactors, with potential to be used for neutron studies. This manuscript explores the feasibility of 177Lu production in the Isfahan MNSR reactor using the direct production route. In this study, to assess the specific activity of the produced radioisotope, a simulation was carried out with the MCNPX 2.6 code. The simulation was validated by irradiating a disc-like lutetium sample (99.98% chemical purity) at a thermal neutron flux of 5 × 10¹¹ n cm⁻² s⁻¹ for an irradiation time of 4 min. After spectrometry of the irradiated sample, the experimental results of 177Lu production were compared with the simulation results. In addition, a factor extracted from the simulation was substituted into the related equations in order to calculate the specific activity through a multi-stage approach and with different irradiation techniques. The results showed that the simulation technique designed in this study is in agreement with the experimental approach (with a difference of approximately 3%). It was also found that maximum 177Lu production at the maximum flux and irradiation time gives access to 723.5 mCi/g after 27 cycles. Furthermore, the comparison of irradiation techniques showed that increasing the irradiation time is more effective for 177Lu production efficiency than increasing the number of irradiation cycles, in that increasing the irradiation time postpones the saturation of the product. On the other hand, it was shown that the choice of an appropriate irradiation technique for 177Lu production can be economically important in terms of effective fuel consumption in the reactor. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Animesh, E-mail: animesh@zedat.fu-berlin.de; Delle Site, Luigi, E-mail: dellesite@fu-berlin.de
Quantum effects due to the spatial delocalization of light atoms are treated in molecular simulation via the path integral technique. Among several methods, Path Integral (PI) Molecular Dynamics (MD) is nowadays a powerful tool to investigate properties induced by spatial delocalization of atoms; however, computationally this technique is very demanding. The above mentioned limitation implies the restriction of PIMD applications to relatively small systems and short time scales. One of the possible solutions to overcome size and time limitations is to introduce PIMD algorithms into the Adaptive Resolution Simulation Scheme (AdResS). AdResS requires a relatively small region treated at path integral level and embeds it into a large molecular reservoir consisting of generic spherical coarse grained molecules. It was previously shown that the realization of the idea above, at a simple level, produced reasonable results for toy systems or simple/test systems like liquid parahydrogen. Encouraged by previous results, in this paper, we show the simulation of liquid water at room conditions where AdResS, in its latest and more accurate Grand-Canonical-like version (GC-AdResS), is merged with two of the most relevant PIMD techniques available in the literature. The comparison of our results with those reported in the literature and/or with those obtained from full PIMD simulations shows a highly satisfactory agreement.
Infant phantom head circuit board for EEG head phantom and pediatric brain simulation
NASA Astrophysics Data System (ADS)
Almohsen, Safa
The infant skull differs from the adult skull because of the characteristic features of the human skull during early development. The fontanels and the conductivity of the infant skull influence the surface currents, generated by neurons, which underlie electroencephalography (EEG) signals. An electric circuit was built to power a set of simulated neural sources for an infant brain activity simulator. In addition, three phantom tissues were created for the simulator using saline solution plus agarose gel to mimic the conductivity of each layer in the head (scalp, skull, brain). The conductivity measurement was accomplished by two different techniques: the four-point measurement technique and a conductivity meter. Test results showed that the optimized phantom tissues had appropriate conductivities to simulate each tissue layer in fabricating a physical head phantom. In this case, the best results should be achieved by testing the electrical neural circuit with the sample physical model to generate simulated EEG data and using those data to solve both the forward and the inverse problems for the purpose of localizing the neural sources in the head phantom.
An image filtering technique for SPIDER visible tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fonnesu, N., E-mail: nicola.fonnesu@igi.cnr.it; Agostini, M.; Brombin, M.
2014-02-15
The tomographic diagnostic developed for the beam generated in the SPIDER facility (100 keV, 50 A prototype negative ion source of ITER neutral beam injector) will characterize the two-dimensional particle density distribution of the beam. The simulations described in the paper show that instrumental noise has a large influence on the maximum achievable resolution of the diagnostic. To reduce its impact on beam pattern reconstruction, a filtering technique has been adapted and implemented in the tomography code. This technique is applied to the simulated tomographic reconstruction of the SPIDER beam, and the main results are reported.
Fitting Flux Ropes to a Global MHD Solution: A Comparison of Techniques. Appendix 1
NASA Technical Reports Server (NTRS)
Riley, Pete; Linker, J. A.; Lionello, R.; Mikic, Z.; Odstrcil, D.; Hidalgo, M. A.; Cid, C.; Hu, Q.; Lepping, R. P.; Lynch, B. J.
2004-01-01
Flux rope fitting (FRF) techniques are an invaluable tool for extracting information about the properties of a subclass of CMEs in the solar wind. However, it has proven difficult to assess their accuracy since the underlying global structure of the CME cannot be independently determined from the data. In contrast, large-scale MHD simulations of CME evolution can provide both a global view as well as localized time series at specific points in space. In this study we apply 5 different fitting techniques to 2 hypothetical time series derived from MHD simulation results. Independent teams performed the analysis of the events in "blind tests", for which no information, other than the time series, was provided. From the results, we infer the following: (1) Accuracy decreases markedly with increasingly glancing encounters; (2) Correct identification of the boundaries of the flux rope can be a significant limiter; and (3) Results from techniques that infer global morphology must be viewed with caution. In spite of these limitations, FRF techniques remain a useful tool for describing in situ observations of flux rope CMEs.
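The five fitting techniques compared in the study are not spelled out in the abstract; as a generic illustration of the FRF idea, the sketch below least-squares fits the linear force-free (Lundquist) cylinder model, one common choice in this literature, to a synthetic noisy field profile. All numerical values are placeholders.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import j0, j1

def lundquist(r, b0, alpha):
    """Axial and azimuthal field of the linear force-free flux rope model."""
    return b0 * j0(alpha * r), b0 * j1(alpha * r)

def residuals(p, r, bz_obs, bphi_obs):
    bz, bphi = lundquist(r, *p)
    return np.concatenate([bz - bz_obs, bphi - bphi_obs])

# Synthetic "spacecraft" profile through a rope, plus measurement noise.
r = np.linspace(0.0, 1.0, 50)
bz_true, bphi_true = lundquist(r, 12.0, 2.405)
rng = np.random.default_rng(0)
bz_obs = bz_true + rng.normal(0.0, 0.3, r.size)
bphi_obs = bphi_true + rng.normal(0.0, 0.3, r.size)

fit = least_squares(residuals, x0=[10.0, 2.0], args=(r, bz_obs, bphi_obs))
print("fitted B0, alpha:", fit.x)
```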
3D Modeling of Ultrasonic Wave Interaction with Disbonds and Weak Bonds
NASA Technical Reports Server (NTRS)
Leckey, C.; Hinders, M.
2011-01-01
Ultrasonic techniques, such as the use of guided waves, can be ideal for finding damage in the plate and pipe-like structures used in aerospace applications. However, the interaction of waves with real flaw types and geometries can lead to experimental signals that are difficult to interpret. 3-dimensional (3D) elastic wave simulations can be a powerful tool in understanding the complicated wave scattering involved in flaw detection and for optimizing experimental techniques. We have developed and implemented parallel 3D elastodynamic finite integration technique (3D EFIT) code to investigate Lamb wave scattering from realistic flaws. This paper discusses simulation results for an aluminum-aluminum diffusion disbond and an aluminum-epoxy disbond and compares results from the disbond case to the common artificial flaw type of a flat-bottom hole. The paper also discusses the potential for extending the 3D EFIT equations to incorporate physics-based weak bond models for simulating wave scattering from weak adhesive bonds.
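The full 3D EFIT scheme is beyond a short sketch, but its core is a staggered velocity-stress update. A minimal 1D analogue, with illustrative aluminum-like material constants (not values from the paper), might look like:

```python
import numpy as np

# 1D staggered velocity-stress update, the 1D analogue of the EFIT scheme.
rho, E = 2700.0, 70e9                 # aluminum-like density and stiffness (assumed)
c = np.sqrt(E / rho)                  # wave speed
nx, dx = 400, 1e-3
dt = 0.5 * dx / c                     # CFL-stable time step

v = np.zeros(nx)                      # particle velocity at integer nodes
s = np.zeros(nx - 1)                  # stress at half nodes

for step in range(600):
    # drive the left end with a smooth pulse
    v[0] = np.exp(-0.5 * ((step * dt - 6e-6) / 2e-6) ** 2)
    s += dt * E * np.diff(v) / dx                   # constitutive update
    v[1:-1] += dt * (s[1:] - s[:-1]) / (rho * dx)   # momentum update

print("pulse peak has moved to x =", np.argmax(np.abs(v)) * dx, "m")
```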
Nonuniform sampling techniques for antenna applications
NASA Technical Reports Server (NTRS)
Rahmat-Samii, Yahya; Cheung, Rudolf Lap-Tung
1987-01-01
A two-dimensional sampling technique, which can employ irregularly spaced samples (amplitude and phase) to generate complete far-field patterns, is presented. The technique implements a matrix inversion algorithm that depends only on the locations of the nonuniformly sampled data points, not on the actual field values at those points. A powerful simulation algorithm is presented to allow a real-life simulation of many reflector/feed configurations and to determine the usefulness of the nonuniform sampling technique for the copolar and cross-polar patterns. Additionally, an overlapped window concept and a generalized error simulation model are discussed to identify the stability of the technique for recovering the field data among the nonuniform sampled data. Numerical results are tailored for the pattern reconstruction of a 20-m offset reflector antenna operating at L-band. This reflector is planned for use in a proposed measurement concept for large antennas aboard the Space Shuttle, where it would be almost impractical to accurately control the movement of the Shuttle with respect to the RF source in prescribed directions in order to generate uniformly sampled points. Also, application of the nonuniform sampling technique to patterns obtained using near-field measured data is demonstrated. Finally, results of an actual far-field measurement are presented for the construction of patterns of a reflector antenna from a set of nonuniformly distributed measured amplitude and phase data.
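A minimal 1D analogue of the reconstruction idea: express a band-limited field in a harmonic basis and solve a system whose matrix is built only from the irregular sample locations. The basis, band limit, and test field below are illustrative assumptions, not the paper's antenna formulation.

```python
import numpy as np

# Reconstruction from irregular samples: the matrix A depends only on the
# sample locations; the field values enter only on the right-hand side.
rng = np.random.default_rng(1)
x_s = np.sort(rng.uniform(0, 2 * np.pi, 25))       # irregular sample points
k = np.arange(-10, 11)                             # band limit |k| <= 10

field = lambda x: np.cos(3 * x) + 0.5 * np.sin(7 * x)   # toy "far field"
A = np.exp(1j * np.outer(x_s, k))                  # built from locations only
coef, *_ = np.linalg.lstsq(A, field(x_s), rcond=None)

x_u = np.linspace(0, 2 * np.pi, 200)               # uniform reconstruction grid
recon = (np.exp(1j * np.outer(x_u, k)) @ coef).real
print("max reconstruction error:", np.abs(recon - field(x_u)).max())
```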
Soranno, Andrea; Holla, Andrea; Dingfelder, Fabian; Nettels, Daniel; Makarov, Dmitrii E; Schuler, Benjamin
2017-03-07
Internal friction is an important contribution to protein dynamics at all stages along the folding reaction. Even in unfolded and intrinsically disordered proteins, internal friction has a large influence, as demonstrated with several experimental techniques and in simulations. However, these methods probe different facets of internal friction and have been applied to disparate molecular systems, raising questions regarding the compatibility of the results. To obtain an integrated view, we apply here the combination of two complementary experimental techniques, simulations, and theory to the same system: unfolded protein L. We use single-molecule Förster resonance energy transfer (FRET) to measure the global reconfiguration dynamics of the chain, and photoinduced electron transfer (PET), a contact-based method, to quantify the rate of loop formation between two residues. This combination enables us to probe unfolded-state dynamics on different length scales, corresponding to different parts of the intramolecular distance distribution. Both FRET and PET measurements show that internal friction dominates unfolded-state dynamics at low denaturant concentration, and the results are in remarkable agreement with recent large-scale molecular dynamics simulations using a new water model. The simulations indicate that intrachain interactions and dihedral angle rotation correlate with the presence of internal friction, and theoretical models of polymer dynamics provide a framework for interrelating the contribution of internal friction observed in the two types of experiments and in the simulations. The combined results thus provide a coherent and quantitative picture of internal friction in unfolded proteins that could not be attained from the individual techniques.
G and C boost and abort study summary, exhibit B
NASA Technical Reports Server (NTRS)
Backman, H. D.
1972-01-01
A six degree of freedom simulation of rigid vehicles was developed to study space shuttle vehicle boost-abort guidance and control techniques. The simulation was described in detail as an all digital program and as a hybrid program. Only the digital simulation was implemented. The equations verified in the digital simulation were adapted for use in the hybrid simulation. Study results were obtained from four abort cases using the digital program.
Simulations of Convection Zone Flows and Measurements from Multiple Viewing Angles
NASA Technical Reports Server (NTRS)
Duvall, Thomas L.; Hanasoge, Shravan
2011-01-01
A deep-focusing time-distance measurement technique has been applied to linear acoustic simulations of a solar interior perturbed by convective flows. The simulations are for the full sphere for r/R greater than 0.2. From these it is straightforward to simulate the observations from different viewing angles and to test how multiple viewing angles enhance detectability. Some initial results will be presented.
Tissue Acoustoelectric Effect Modeling From Solid Mechanics Theory.
Song, Xizi; Qin, Yexian; Xu, Yanbin; Ingram, Pier; Witte, Russell S; Dong, Feng
2017-10-01
The acoustoelectric (AE) effect is a basic physical phenomenon underlying the changes in the conductivity of a medium produced by the application of focused ultrasound. Recently, based on the AE effect, several biomedical imaging techniques have been widely studied, such as ultrasound-modulated electrical impedance tomography and ultrasound current source density imaging. To further investigate the mechanism of the AE effect in tissue and to provide guidance for such techniques, we have modeled the tissue AE effect using the theory of solid mechanics. Both bulk compression and thermal expansion of tissue are considered and discussed. Computational simulation shows that the muscle AE effect, expressed as a conductivity change rate, is 3.26 × 10^-3 at 4.3-MPa peak pressure, satisfying the theoretical value. Bulk compression plays the main role in the muscle AE effect, while thermal expansion makes almost no contribution. In addition, the AE signals of porcine muscle were measured at different focal positions. With the same order of magnitude and the same trend, the experimental results confirm that the simulation result is valid. Both simulation and experimental results validate that tissue AE effect modeling using solid mechanics theory is feasible, which is of significance for the further development of related biomedical imaging techniques.
Jones, Eric W; Carlson, Jean M
2018-02-01
In this paper we study antibiotic-induced C. difficile infection (CDI), caused by the toxin-producing C. difficile (CD), and implement clinically-inspired simulated treatments in a computational framework that synthesizes a generalized Lotka-Volterra (gLV) model with SIR modeling techniques. The gLV model uses parameters derived from an experimental mouse model, in which the mice are administered antibiotics and subsequently dosed with CD. We numerically identify which of the experimentally measured initial conditions are vulnerable to CD colonization, then formalize the notion of CD susceptibility analytically. We simulate fecal transplantation, a clinically successful treatment for CDI, and discover that both the transplant timing and transplant donor are relevant to the efficacy of the treatment, a result which has clinical implications. We incorporate two nongeneric yet dangerous attributes of CD into the gLV model, sporulation and antibiotic-resistant mutation, and for each identify relevant SIR techniques that describe the desired attribute. Finally, we rely on the results of our framework to analyze an experimental study of fecal transplants in mice, and are able to explain observed experimental results, validate our simulated results, and suggest model-motivated experiments.
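A toy version of the gLV forward problem conveys the simulation loop: integrate dx_i/dt = x_i(mu_i + sum_j A_ij x_j), inject a CD-like invader, and later boost the resident species as a crude stand-in for a fecal transplant. The three-species parameters below are invented for illustration and are not the mouse-derived values used in the paper.

```python
import numpy as np

def glv_step(x, growth, interactions, dt):
    """One forward-Euler step of dx_i/dt = x_i * (mu_i + sum_j A_ij x_j)."""
    return np.maximum(x + dt * x * (growth + interactions @ x), 0.0)

# Toy 3-species community: two residents plus a C. difficile-like invader.
mu = np.array([0.8, 0.6, 0.9])
A = np.array([[-1.0, -0.2, -0.5],
              [-0.3, -1.0, -0.6],
              [-0.9, -0.8, -1.0]])   # illustrative, not the experimental values

x = np.array([0.05, 0.05, 0.0])     # antibiotic-depleted state, no invader yet
for t in range(2000):
    if t == 200:
        x[2] = 0.01                 # CD "dose"
    if t == 1200:
        x[:2] += 0.3                # fecal-transplant-like boost of residents
    x = glv_step(x, mu, A, dt=0.01)

print("final abundances:", x)
```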
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-09-01
Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited accuracy but relatively costless auxiliary simulator we can effectively fill in the missing spatial data at the required times by a statistical learning technique - multi-level Gaussian process regression, on the fly; this has been demonstrated in previous work [1]. Based on the previous work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, that detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in exascale simulations.
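As a minimal illustration of the gap-filling step, the sketch below conditions a single-level Gaussian process on surviving fine-grid data and predicts the values lost to a simulated fault. The paper's method is multi-level (it also folds in the coarse auxiliary simulator) and adds Diffusion Maps for time acceleration; neither is shown here, and all values are synthetic.

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential kernel matrix between point sets a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

# Fine solver "failed" on a subinterval; fill it in from the surviving data.
x_all = np.linspace(0, 1, 101)
truth = np.sin(2 * np.pi * x_all)
lost = (x_all > 0.4) & (x_all < 0.7)            # region lost to a fault
x_ok, y_ok = x_all[~lost], truth[~lost]

# GP posterior mean conditioned on the surviving fine-grid data.
K = rbf(x_ok, x_ok) + 1e-8 * np.eye(x_ok.size)  # jitter for numerical stability
w = np.linalg.solve(K, y_ok)
y_fill = rbf(x_all[lost], x_ok) @ w

print("max fill-in error:", np.abs(y_fill - truth[lost]).max())
```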
Numerical simulation of the SAGD process coupled with geomechanical behavior
NASA Astrophysics Data System (ADS)
Li, Pingke
Canada has vast oil sand resources. While a large portion of this resource can be recovered by surface mining techniques, a majority is located at depths requiring the application of in situ recovery technologies. Although a number of in situ recovery technologies exist, the steam assisted gravity drainage (SAGD) process has emerged as one of the most promising technologies to develop the in situ oil sands resources. During SAGD operations, saturated steam is continuously injected into the oil sands reservoir, which induces pore pressure and stress variations. As a result, reservoir parameters and processes may also vary, particularly when tensile and shear failure occur. This geomechanical effect is pronounced for oil sands because they have an interlocked in situ fabric. Conventional reservoir simulation generally does not take this coupled mechanism into consideration. This research therefore aims to improve the reservoir simulation techniques of the SAGD process applied in the development of oil sands and heavy oil reservoirs. The analyses of the decoupled reservoir geomechanical simulation results show that the geomechanical behavior in SAGD has an obvious impact on reservoir parameters, such as absolute permeability. The issues with coupled reservoir geomechanical simulations of the SAGD process have been clarified, and the permeability variations due to geomechanical behavior in the SAGD process were investigated. A methodology of sequentially coupled reservoir geomechanical simulation was developed based on the reservoir simulator, EXOTHERM, and the geomechanical simulator, FLAC. In addition, a representative geomechanical model of oil sands material was summarized in this research. Finally, this reservoir geomechanical simulation methodology was verified with the UTF Phase A SAGD project and applied in a SAGD operation with gas-over-bitumen geometry. Based on this methodology, the geomechanical effect on the SAGD production performance can be quantified. This research program involves analyses of laboratory testing results obtained from the literature; no new laboratory testing was conducted in this research.
A quantitative investigation of the fracture pump-in/flowback test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plahn, S.V.; Nolte, K.G.; Thompson, L.G.
1997-02-01
Fracture-closure pressure is an important parameter for fracture treatment design and evaluation. The pump-in/flowback (PIFB) test is frequently used to estimate its magnitude. The test is attractive because bottomhole pressures (BHPs) during flowback develop a distinct and repeatable signature. This is in contrast to the pump-in/shut-in test, where strong indications of fracture closure are rarely seen. Various techniques are used to extract closure pressure from the flowback-pressure response. Unfortunately, these techniques give different estimates for closure pressure, and their theoretical bases are not well established. The authors present results that place the PIFB test on a firmer foundation. A numerical model is used to simulate the PIFB test and glean physical mechanisms contributing to the response. On the basis of their simulation results, they propose interpretation techniques that give better estimates of closure pressure than existing techniques.
Simulation of wind turbine wakes using the actuator line technique
Sørensen, Jens N.; Mikkelsen, Robert F.; Henningson, Dan S.; Ivanell, Stefan; Sarmast, Sasan; Andersen, Søren J.
2015-01-01
The actuator line technique was introduced as a numerical tool to be employed in combination with large eddy simulations to enable the study of wakes and wake interaction in wind farms. The technique is today largely used for studying basic features of wakes as well as for making performance predictions of wind farms. In this paper, we give a short introduction to the wake problem and the actuator line methodology and present a study in which the technique is employed to determine the near-wake properties of wind turbines. The presented results include a comparison of experimental results of the wake characteristics of the flow around a three-bladed model wind turbine, the development of a simple analytical formula for determining the near-wake length behind a wind turbine and a detailed investigation of wake structures based on proper orthogonal decomposition analysis of numerically generated snapshots of the wake. PMID:25583862
A technique to remove the tensile instability in weakly compressible SPH
NASA Astrophysics Data System (ADS)
Xu, Xiaoyang; Yu, Peng
2018-01-01
When smoothed particle hydrodynamics (SPH) is directly applied to the numerical simulation of transient viscoelastic free surface flows, a numerical problem called tensile instability arises. In this paper, we develop an optimized particle shifting technique to remove the tensile instability in SPH. The basic equations governing free surface flow of an Oldroyd-B fluid are considered and approximated by an improved SPH scheme, which includes a kernel gradient correction and the introduction of a Rusanov flux into the continuity equation. To verify the effectiveness of the optimized particle shifting technique in removing the tensile instability, three benchmarks are simulated: an impacting drop, injection molding of a C-shaped cavity, and extrudate swell. The numerical results obtained are compared with those simulated by other numerical methods. A comparison among different numerical techniques (e.g., the artificial stress) for removing the tensile instability is further performed. All numerical results agree well with the available data.
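Particle shifting schemes generally move particles down the gradient of a kernel-estimated particle concentration. The 1D sketch below shows that generic Fickian-type form, not the authors' specific optimized variant; the kernel, shift coefficient, and particle counts are illustrative assumptions.

```python
import numpy as np

def w_grad(dx, h):
    """Gradient of a Gaussian-type kernel in 1D (constants absorbed)."""
    return -dx / h**2 * np.exp(-(dx / h) ** 2)

def shift_particles(x, h=0.015, coeff=0.5):
    """Fickian-type particle shifting: move each particle down the gradient
    of particle concentration, dr_i ~ -coeff * h^2 * grad C_i (generic form)."""
    dxij = x[:, None] - x[None, :]                 # pairwise separations
    grad_c = np.sum(w_grad(dxij, h), axis=1)       # kernel concentration gradient
    return x - coeff * h**2 * grad_c

# A clustered particle distribution relaxes toward uniformity after shifting.
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 100) ** 2)           # strongly clustered near 0
for _ in range(50):
    x = shift_particles(x)
x = np.sort(x)
print("min/max spacing after shifting:", np.diff(x).min(), np.diff(x).max())
```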
Distributed Spectral Monitoring For Emitter Localization
2018-02-12
localization techniques in a DSA sensor network. The results of the research are presented through simulation of localization algorithms, emulation of a ... network on a wireless RF environment emulator, and field tests. The results of the various tests in both the lab and field are obtained and analyzed. There are two main classes of localization techniques, and the technique to use depends on the information available about the emitter.
An analysis of airline landing flare data based on flight and training simulator measurements
NASA Technical Reports Server (NTRS)
Heffley, R. K.; Schulman, T. M.; Clement, T. M.
1982-01-01
Landings by experienced airline pilots transitioning to the DC-10, performed in flight and on a simulator, were analyzed and compared using a pilot-in-the-loop model of the landing maneuver. By solving for the effective feedback gains and pilot compensation which described landing technique, it was possible to discern fundamental differences in pilot behavior between the actual aircraft and the simulator. These differences were then used to infer simulator fidelity in terms of specific deficiencies and to quantify the effectiveness of training on the simulator as compared to training in flight. While training on the simulator, pilots exhibited larger effective lag in commanding the flare. The inability to compensate adequately for this lag was associated with hard or inconsistent landings. To some degree this deficiency was carried into flight, thus resulting in a slightly different and inferior landing technique than exhibited by pilots trained exclusively on the actual aircraft.
NASA Astrophysics Data System (ADS)
Alsadoon, Abeer; Prasad, P. W. C.; Beg, Azam
2017-09-01
Helping students understand the theoretical concepts of digital logic design is a major challenge for academics; teachers have therefore tried different techniques to link theoretical information to practical knowledge. Use of software simulations is a technique for learning and practice that can be applied to many different disciplines. Experimentation with different computer hardware components/integrated circuits through simulators enhances student learning. The simulators can be rather simplistic or quite complex. This paper reports our evaluation of different simulators available for use in higher education institutions. We also describe the experience of incorporating selected tools in teaching introductory courses in computer systems. We assessed the effectiveness of incorporating the simulators into the computer system courses using a student survey and final grade results.
Partial molar enthalpies and reaction enthalpies from equilibrium molecular dynamics simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schnell, Sondre K.; Department of Chemical and Biomolecular Engineering, University of California, Berkeley, California 94720; Department of Chemistry, Faculty of Natural Science and Technology, Norwegian University of Science and Technology, 4791 Trondheim
2014-10-14
We present a new molecular simulation technique for determining partial molar enthalpies in mixtures of gases and liquids from single simulations, without relying on particle insertions, deletions, or identity changes. The method can also be applied to systems with chemical reactions. We demonstrate our method for binary mixtures of Weeks-Chandler-Andersen particles by comparing with conventional simulation techniques, as well as for a simple model that mimics a chemical reaction. The method considers small subsystems inside a large reservoir (i.e., the simulation box), and uses the construction of Hill to compute properties in the thermodynamic limit from small-scale fluctuations. Results obtained with the new method are in excellent agreement with those from previous methods. Especially for modeling chemical reactions, our method can be a valuable tool for determining reaction enthalpies directly from a single MD simulation.
Scramjet exhaust simulation technique for hypersonic aircraft nozzle design and aerodynamic tests
NASA Technical Reports Server (NTRS)
Hunt, J. L.; Talcott, N. A., Jr.; Cubbage, J. M.
1977-01-01
Current design philosophy for scramjet-powered hypersonic aircraft results in configurations with the entire lower fuselage surface utilized as part of the propulsion system. The lower aft-end of the vehicle acts as a high expansion ratio nozzle. Not only must the external nozzle be designed to extract the maximum possible thrust force from the high energy flow at the combustor exit, but the forces produced by the nozzle must be aligned such that they do not unduly affect aerodynamic balance. The strong coupling between the propulsion system and aerodynamics of the aircraft makes imperative at least a partial simulation of the inlet, exhaust, and external flows of the hydrogen-burning scramjet in conventional facilities for both nozzle formulation and aerodynamic-force data acquisition. Aerodynamic testing methods offer no contemporary approach for such vehicle design requirements. NASA-Langley has pursued an extensive scramjet/airframe integration R&D program for several years and has recently developed a promising technique for simulation of the scramjet exhaust flow for hypersonic aircraft. Current results of the research program to develop a scramjet flow simulation technique through the use of substitute gas blends are described in this paper.
NASA Astrophysics Data System (ADS)
Krongkietlearts, K.; Tangboonduangjit, P.; Paisangittisakul, N.
2016-03-01
To improve quality of life for cancer patients, radiation techniques are constantly evolving. In particular, two modern techniques, intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), are quite promising. They comprise many small beams (beamlets) with various intensities to achieve the intended radiation dose to the tumor and minimal dose to the nearby normal tissue. This study investigates whether the microDiamond detector (PTW), a synthetic single-crystal diamond detector, is suitable for small-field output factor measurement. The results were compared with those measured by the stereotactic field detector (SFD) and with Monte Carlo simulation (EGSnrc/BEAMnrc/DOSXYZ). The calibration of the Monte Carlo simulation was done using the percentage depth dose and dose profile measured by the photon field detector (PFD) for a 10 × 10 cm2 field size at 100 cm SSD. The values obtained from the calculations and measurements are consistent, with no more than 1% difference. The output factors obtained from the microDiamond detector were compared with those of the SFD and the Monte Carlo simulation; the results demonstrate a percentage difference of less than 2%.
NASA Astrophysics Data System (ADS)
Brunet, V.; Molton, P.; Bézard, H.; Deck, S.; Jacquin, L.
2012-01-01
This paper describes the results obtained during the European Union JEDI (JEt Development Investigations) project carried out in cooperation between ONERA and Airbus. The aim of these studies was first to acquire a complete database of a modern-type engine jet installation set under a wall-to-wall swept wing in various transonic flow conditions. Interactions between the engine jet, the pylon, and the wing were studied thanks to "advanced" measurement techniques. In parallel, accurate Reynolds-averaged Navier-Stokes (RANS) simulations were carried out, from simple ones with the Spalart-Allmaras model to more complex ones with the DRSM-SSG (Differential Reynolds Stress Model of Speziale-Sarkar-Gatski) turbulence model. In the end, Zonal Detached Eddy Simulations (Z-DES) were also performed to compare different simulation techniques. All numerical results are accurately validated thanks to the experimental database acquired in parallel. This complete and complex study of a modern civil aircraft engine installation allowed many upgrades in understanding and simulation methods to be obtained. Furthermore, a setup for engine jet installation studies has been validated for possible future work in the S3Ch transonic research wind tunnel. The main conclusions are summed up in this paper.
Verification of component mode techniques for flexible multibody systems
NASA Technical Reports Server (NTRS)
Wiens, Gloria J.
1990-01-01
Investigations were conducted into the modeling aspects of flexible multibodies undergoing large angular displacements. Models were to be generated and analyzed through application of computer simulation packages employing the 'component mode synthesis' techniques. The Multibody Modeling, Verification and Control Laboratory (MMVC) plan was implemented, which includes running experimental tests on flexible multibody test articles. From these tests, data were to be collected for later correlation and verification of the theoretical results predicted by the modeling and simulation process.
NASA Astrophysics Data System (ADS)
Irwandi; Rusydy, Ibnu; Muksin, Umar; Rudyanto, Ariska; Daryono
2018-05-01
Wave vibration confined within a boundary produces stationary wave solutions in discrete states called modes. Many physics applications involve modal solutions, such as air column resonance, string vibration, and the emission spectrum of atomic hydrogen. Naturally, energy is distributed over several modes, so the complete response is obtained from the sum of all modes, called modal summation. The modal summation technique was applied to simulate surface wave propagation above the crustal structure of the Earth. The method is computationally efficient because it uses a 1D structural model and does not require calculating the overall wave propagation. The simulation results for the magnitude 6.5 Pidie Jaya earthquake show that the spectral response from the modal summation technique correlates well with the observed seismometer and accelerometer waveform data, especially at the KCSI (Kotacane) station. On the other hand, at the LASI (Langsa) station the modal simulation result is lower than the observed response. The lower spectral response estimate is obtained because the station is located in a thick sedimentary basin causing an amplification effect. This is a limitation of the modal summation technique, and it should therefore be combined with a full numerical simulation on the 2D local structural model of the basin.
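The essence of modal summation carries over from the elementary string problem: the response is synthesized as a finite sum of discrete modes. A minimal sketch for a plucked fixed-fixed string (not the crustal wave code) is:

```python
import numpy as np

# Modal summation for a fixed-fixed string: the response is a finite sum of
# discrete modes, u(x, t) = sum_n A_n sin(n pi x / L) cos(omega_n t).
L, c = 1.0, 1.0                      # string length and wave speed (illustrative)
x = np.linspace(0, L, 200)
modes = np.arange(1, 31)

# Modal amplitudes for a unit pluck at x0 (Fourier sine coefficients).
x0 = 0.3
A = 2 * L**2 * np.sin(modes * np.pi * x0 / L) / ((modes * np.pi) ** 2 * x0 * (L - x0))

def displacement(t):
    """Sum the first 30 modes at time t."""
    omega = modes * np.pi * c / L
    return (A[:, None] * np.sin(np.outer(modes, np.pi * x / L))
            * np.cos(omega * t)[:, None]).sum(axis=0)

print("peak of u(x, t=0.25):", displacement(0.25).max())
```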
Flash Infrared Thermography Contrast Data Analysis Technique
NASA Technical Reports Server (NTRS)
Koshti, Ajay
2014-01-01
This paper provides information on an IR Contrast technique that involves extracting normalized contrast versus time evolutions from the flash thermography inspection infrared video data. The analysis calculates thermal measurement features from the contrast evolution. In addition, simulation of the contrast evolution is achieved through calibration on measured contrast evolutions from many flat-bottom holes in the subject material. The measurement features and the contrast simulation are used to evaluate flash thermography data in order to characterize delamination-like anomalies. The thermal measurement features relate to the anomaly characteristics. The contrast evolution simulation is matched to the measured contrast evolution over an anomaly to provide an assessment of the anomaly depth and width which correspond to the depth and diameter of the equivalent flat-bottom hole (EFBH) similar to that used as input to the simulation. A similar analysis, in terms of diameter and depth of an equivalent uniform gap (EUG) providing a best match with the measured contrast evolution, is also provided. An edge detection technique called the half-max is used to measure width and length of the anomaly. Results of the half-max width and the EFBH/EUG diameter are compared to evaluate the anomaly. The information provided here is geared towards explaining the IR Contrast technique. Results from a limited amount of validation data on reinforced carbon-carbon (RCC) hardware are included in this paper.
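A minimal sketch of the contrast-extraction step, under the assumption of one common normalization, contrast = (anomaly - reference)/reference; the function and array names below are placeholders, not the tool's actual API, and the demo data are synthetic.

```python
import numpy as np

def contrast_evolution(video, anomaly_px, reference_px):
    """Normalized IR contrast vs. time from a flash-thermography video cube.

    video: array (frames, rows, cols) of IR intensities after the flash.
    anomaly_px, reference_px: (row, col) pixels over the anomaly and over
    nearby sound material (hypothetical names, not the tool's API).
    """
    t_anom = video[:, anomaly_px[0], anomaly_px[1]].astype(float)
    t_ref = video[:, reference_px[0], reference_px[1]].astype(float)
    return (t_anom - t_ref) / t_ref

# Synthetic demo: sound region cools like t^-0.5; anomaly departs late.
t = np.arange(1, 301)
video = np.zeros((300, 2, 2))
video[:, 0, 0] = 100 / np.sqrt(t) + 5 * (1 - np.exp(-t / 80.0))  # over a void
video[:, 1, 1] = 100 / np.sqrt(t)                                # sound area
c = contrast_evolution(video, (0, 0), (1, 1))
print("peak contrast at frame:", int(np.argmax(c)))
```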
Method for simulating discontinuous physical systems
Baty, Roy S.; Vaughn, Mark R.
2001-01-01
The mathematical foundations of conventional numerical simulation of physical systems provide no consistent description of the behavior of such systems when subjected to discontinuous physical influences. As a result, the numerical simulation of such problems requires ad hoc encoding of specific experimental results in order to address the behavior of such discontinuous physical systems. In the present invention, these foundations are replaced by a new combination of generalized function theory and nonstandard analysis. The result is a class of new approaches to the numerical simulation of physical systems which allows the accurate and well-behaved simulation of discontinuous and other difficult physical systems, as well as simpler physical systems. Applications of this new class of numerical simulation techniques to process control, robotics, and apparatus design are outlined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.
Understanding of single and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas) where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quantify aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. We find that precision and accuracy can be balanced to greatly improve the technique, and we present a measurement protocol to obtain a minimum error field. Simulation results show an increased sensitivity to error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transport in single fractures.
From Simulation to Real Robots with Predictable Results: Methods and Examples
NASA Astrophysics Data System (ADS)
Balakirsky, S.; Carpin, S.; Dimitoglou, G.; Balaguer, B.
From a theoretical perspective, one may easily argue (as we will in this chapter) that simulation accelerates the algorithm development cycle. However, in practice many in the robotics development community share the sentiment that “Simulation is doomed to succeed” (Brooks, R., Matarić, M., Robot Learning, Kluwer Academic Press, Hingham, MA, 1993, p. 209). This comes in large part from the fact that many simulation systems are brittle; they do a fair-to-good job of simulating the expected, and fail to simulate the unexpected. It is the authors' belief that a simulation system is only as good as its models, and that deficiencies in these models lead to the majority of these failures. This chapter will attempt to address these deficiencies by presenting a systematic methodology with examples for the development of both simulated mobility models and sensor models for use with one of today's leading simulation engines. Techniques for using simulation for algorithm development leading to real-robot implementation will be presented, as well as opportunities for involvement in international robotics competitions based on these techniques.
Computer Simulation of Diffraction Patterns.
ERIC Educational Resources Information Center
Dodd, N. A.
1983-01-01
Describes an Apple computer program (listing available from author) which simulates Fraunhofer and Fresnel diffraction using vector addition techniques (vector chaining) and allows user to experiment with different shaped multiple apertures. Graphics output include vector resultants, phase difference, diffraction patterns, and the Cornu spiral…
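The vector-chaining idea translates directly into a few lines of modern code: the field at each angle is the resultant of unit phasors contributed by points across the aperture. A sketch (in Python rather than the article's Apple BASIC, with illustrative slit dimensions) for a double slit:

```python
import numpy as np

# Vector chaining: the Fraunhofer field at angle theta is the resultant of
# unit phasors contributed by each point across the aperture(s).
wavelength = 500e-9
k = 2 * np.pi / wavelength

def pattern(aperture_points, thetas):
    """Normalized intensity by summing phasors from discrete aperture points."""
    phases = k * np.outer(np.sin(thetas), aperture_points)   # path differences
    resultant = np.exp(1j * phases).sum(axis=1)              # chain the vectors
    return np.abs(resultant) ** 2 / len(aperture_points) ** 2

# Double slit: two 20-micron slits separated by 100 microns (assumed values).
slit = np.linspace(0, 20e-6, 40)
points = np.concatenate([slit, slit + 100e-6])
thetas = np.linspace(-0.02, 0.02, 500)
I = pattern(points, thetas)
print("central intensity (should be ~1):", I[len(I) // 2])
```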
Peter, Emanuel K; Shea, Joan-Emma; Pivkin, Igor V
2016-05-14
In this paper, we present a coarse replica exchange molecular dynamics (REMD) approach based on kinetic Monte Carlo (kMC). The new development can significantly reduce the number of replicas and the computational cost needed to enhance sampling in protein simulations. We introduce 2 different methods which primarily differ in the exchange scheme between the parallel ensembles. We apply this approach to the folding of 2 different β-stranded peptides: the C-terminal β-hairpin fragment of GB1 and TrpZip4. Additionally, we use the new simulation technique to study the folding of TrpCage, a small fast-folding α-helical peptide. Subsequently, we apply the new methodology to conformational changes in signaling of the light-oxygen-voltage (LOV) sensitive domain from Avena sativa (AsLOV2). Our results agree well with data reported in the literature. In simulations of dialanine, we compare the statistical sampling of the 2 techniques with conventional REMD and analyze their performance. The new techniques can reduce the computational cost of REMD significantly and can be used in enhanced sampling simulations of biomolecules.
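For context, the conventional REMD building block that the kMC-based scheme modifies is the Metropolis swap criterion between neighboring temperature replicas. A minimal sketch of that standard criterion (with k_B = 1 and illustrative energies) is below; the paper's coarse exchange scheme itself is not reproduced here.

```python
import numpy as np

def attempt_swap(energies, betas, i, j, rng):
    """Standard Metropolis criterion for exchanging replicas i and j:
    accept with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]).
    Units assume k_B = 1."""
    delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
    if delta >= 0 or rng.random() < np.exp(delta):
        # swapping energies here stands in for swapping configurations
        energies[i], energies[j] = energies[j], energies[i]
        return True
    return False

rng = np.random.default_rng(3)
betas = 1.0 / np.array([300.0, 350.0, 410.0, 480.0])   # temperature ladder
energies = np.array([-120.0, -112.0, -101.0, -95.0])   # current replica energies

accepted = sum(attempt_swap(energies, betas, i, i + 1, rng) for i in range(3))
print("accepted swaps:", accepted)
```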
MOCCA code for star cluster simulation: comparison with optical observations using COCOA
NASA Astrophysics Data System (ADS)
Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Olech, Arkadiusz; Hypki, Arkadiusz
2016-02-01
We introduce and present preliminary results from COCOA (Cluster simulatiOn Comparison with ObservAtions) code for a star cluster after 12 Gyr of evolution simulated using the MOCCA code. The COCOA code is being developed to quickly compare results of numerical simulations of star clusters with observational data. We use COCOA to obtain parameters of the projected cluster model. For comparison, a FITS file of the projected cluster was provided to observers so that they could use their observational methods and techniques to obtain cluster parameters. The results show that the similarity of cluster parameters obtained through numerical simulations and observations depends significantly on the quality of observational data and photometric accuracy.
Real-time simulation of biological soft tissues: a PGD approach.
Niroomandi, S; González, D; Alfaro, I; Bordeu, F; Leygue, A; Cueto, E; Chinesta, F
2013-05-01
We introduce here a novel approach for the numerical simulation of nonlinear, hyperelastic soft tissues at kilohertz feedback rates necessary for haptic rendering. This approach is based upon the use of proper generalized decomposition techniques, a generalization of PODs. Proper generalized decomposition techniques can be considered as a means of a priori model order reduction and provides a physics-based meta-model without the need for prior computer experiments. The suggested strategy is thus composed of an offline phase, in which a general meta-model is computed, and an online evaluation phase in which the results are obtained at real time. Results are provided that show the potential of the proposed technique, together with some benchmark test that shows the accuracy of the method. Copyright © 2013 John Wiley & Sons, Ltd.
MoSeS: Modelling and Simulation for e-Social Science.
Townend, Paul; Xu, Jie; Birkin, Mark; Turner, Andy; Wu, Belinda
2009-07-13
MoSeS (Modelling and Simulation for e-Social Science) is a research node of the National Centre for e-Social Science. MoSeS uses e-Science techniques to execute an events-driven model that simulates discrete demographic processes; this allows us to project the UK population 25 years into the future. This paper describes the architecture, simulation methodology and latest results obtained by MoSeS.
Laser fractional photothermolysis of the skin: numerical simulation of microthermal zones.
Marqa, Mohamad Feras; Mordon, Serge
2014-04-01
Laser Fractional Photothermolysis (FP) is one of the innovative techniques for skin remodeling and resurfacing. During treatment, controlling the Microscopic Thermal Zone (MTZ) dimensions versus pulse energy requires detailed knowledge of the various parameters governing the heat transfer process. In this study, a mathematical model is devised to simulate the effect of pulse energy variations on the dimensions of MTZs. Two series of simulations for ablative (10.6 μm CO2) and non-ablative (1.550 μm Er:Glass) laser systems were performed. In each series, simulations were carried out for the following pulse energies: 5, 10, 15, 20, 25, 30, 35, and 40 mJ. Results of the simulations are validated against histological images of MTZ sections reported by Hantash et al. and Bedi et al. MTZ dimensions from histology were compared with those achieved using our simulation model through a data fusion technique, for both ablative FP and non-ablative FP treatment methods. Depths and widths from the simulations are generally deeper (21 ± 2%) and wider (12 ± 2%) than the histological analysis data. When accounting for the shrinkage effect of excision of cutaneous tissues, a good correlation can be established between the simulation and the histological analysis results.
NASA Astrophysics Data System (ADS)
Zubanov, V. M.; Stepanov, D. V.; Shabliy, L. S.
2017-01-01
The article describes a method for simulating transient combustion processes in a rocket engine operating on gaseous propellants: oxygen and hydrogen. Combustion simulation was performed using the ANSYS CFX software. Three reaction mechanisms for the stationary mode were considered and described in detail. The reaction mechanisms were taken from several sources and verified. The method for converting ozone properties from the Shomate equation to the NASA-polynomial format is described in detail. A way to obtain quick CFD results with intermediate combustion components using an EDM model was found. Modeling difficulties with the Finite Rate Chemistry combustion model, associated with large scatter in the reference data, were identified and described. The procedure for generating the Flamelet library with CFX-RIF is described. The reaction mechanisms formulated and verified at steady state were also tested in transient simulation. The Flamelet combustion model was found adequate for the transient mode. Variations of the integral parameters are consistent with the values obtained during stationary simulation. A cyclic irregularity of the temperature field, caused by precession of the vortex core, was detected in the chamber with the proposed simulation technique. Investigation of unsteady rocket engine processes, including ignition, is proposed as an application area for the described simulation technique.
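The Shomate-to-NASA-polynomial conversion mentioned above can be illustrated by least-squares fitting the heat capacity. The sketch below fits only the Cp part of a NASA-7 polynomial, and the Shomate coefficients are rough ozone-like placeholders rather than NIST values.

```python
import numpy as np

# Convert a Shomate Cp(T) expression to the Cp part of a NASA-7 polynomial
# by least squares. Shomate coefficients below are placeholders, not NIST's.
R = 8.314462618                                  # J/(mol K)
A, B, C, D, E = 21.0, 47.0, -28.0, 6.0, -0.16    # illustrative ozone-like values

def cp_shomate(T):
    """Shomate form: Cp = A + B*t + C*t^2 + D*t^3 + E/t^2 with t = T/1000."""
    t = T / 1000.0
    return A + B * t + C * t**2 + D * t**3 + E / t**2

T = np.linspace(300.0, 1000.0, 200)
# NASA-7 heat capacity: Cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4
V = np.vander(T, 5, increasing=True)
a, *_ = np.linalg.lstsq(V, cp_shomate(T) / R, rcond=None)

fit = V @ a * R
print("a1..a5:", a)
print("max Cp fit error, J/(mol K):", np.abs(fit - cp_shomate(T)).max())
```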
Huff, G.F.
2004-01-01
The tendency of solutes in input water to precipitate efficiency-lowering scale deposits on the membranes of reverse osmosis (RO) desalination systems is an important factor in determining the suitability of input water for desalination. Simulated evaporation of input water can be used as a technique to quantitatively assess the potential for scale formation in RO desalination systems. The technique was demonstrated by simulating the increase in solute concentrations required to form calcite, gypsum, and amorphous silica scales at 25°C and 40°C from 23 desalination input waters taken from the literature. Simulation results could be used to quantitatively assess the potential of a given input water to form scale or to compare the potential of a number of input waters to form scale during RO desalination. Simulated evaporation of input waters cannot accurately predict the conditions under which scale will form owing to the effects of potentially stable supersaturated solutions, solution velocity, and residence time inside RO systems. However, the simulated scale-forming potential of proposed input waters could be compared with the simulated scale-forming potentials and actual scale-forming properties of input waters having documented operational histories in RO systems. This may provide a technique to estimate the actual performance and suitability of proposed input waters during RO.
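The core of the technique is to concentrate the dissolved load by an evaporation factor and check the saturation index SI = log10(IAP/Ksp) of each candidate scale mineral. The sketch below does this for gypsum with a hypothetical water composition and ignores the activity-coefficient corrections a real geochemical simulation would include.

```python
import numpy as np

# Evaporative concentration check for gypsum (CaSO4.2H2O), ignoring the
# activity coefficients a full geochemical code would apply.
KSP_GYPSUM = 10 ** -4.58        # approximate 25 C solubility product

def saturation_index(ca_molal, so4_molal, factor):
    """SI = log10(IAP/Ksp) after concentrating the water `factor`-fold."""
    iap = (ca_molal * factor) * (so4_molal * factor)
    return np.log10(iap / KSP_GYPSUM)

# Hypothetical input water: 3 mmol/kg Ca, 5 mmol/kg SO4.
for f in (1, 2, 5, 10, 20):
    print(f"{f:>3}x concentrated: SI_gypsum = {saturation_index(3e-3, 5e-3, f):+.2f}")
```

A positive SI flags a factor at which the water becomes supersaturated, which is the quantity the simulated evaporation approach compares across candidate input waters.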
NASA Technical Reports Server (NTRS)
Craun, Robert W.; Acosta, Diana M.; Beard, Steven D.; Leonard, Michael W.; Hardy, Gordon H.; Weinstein, Michael; Yildiz, Yildiray
2013-01-01
This paper describes the maturation of a control allocation technique designed to assist pilots in the recovery from pilot induced oscillations (PIOs). The Control Allocation technique to recover from Pilot Induced Oscillations (CAPIO) is designed to enable next generation high efficiency aircraft designs. Energy efficient next generation aircraft require feedback control strategies that will enable lowering the actuator rate limit requirements for optimal airframe design. One of the common issues flying with actuator rate limits is PIOs caused by the phase lag between the pilot inputs and control surface response. CAPIO utilizes real-time optimization for control allocation to eliminate phase lag in the system caused by control surface rate limiting. System impacts of the control allocator were assessed through a piloted simulation evaluation of a non-linear aircraft simulation in the NASA Ames Vertical Motion Simulator. Results indicate that CAPIO helps reduce oscillatory behavior, including the severity and duration of PIOs, introduced by control surface rate limiting.
NASA Technical Reports Server (NTRS)
Bakuckas, J. G.; Tan, T. M.; Lau, A. C. W.; Awerbuch, J.
1993-01-01
A finite element-based numerical technique has been developed to simulate damage growth in unidirectional composites. This technique incorporates elastic-plastic analysis, micromechanics analysis, failure criteria, and a node splitting and node force relaxation algorithm to create crack surfaces. Any combination of fiber and matrix properties can be used. One of the salient features of this technique is that damage growth can be simulated without pre-specifying a crack path. In addition, multiple damage mechanisms in the forms of matrix cracking, fiber breakage, fiber-matrix debonding and plastic deformation are capable of occurring simultaneously. The prevailing failure mechanism and the damage (crack) growth direction are dictated by the instantaneous near-tip stress and strain fields. Once the failure mechanism and crack direction are determined, the crack is advanced via the node splitting and node force relaxation algorithm. Simulations of the damage growth process in center-slit boron/aluminum and silicon carbide/titanium unidirectional specimens were performed. The simulation results agreed quite well with the experimental observations.
Hybrid General Pattern Search and Simulated Annealing for Industrail Production Planning Problems
NASA Astrophysics Data System (ADS)
Vasant, P.; Barsoum, N.
2010-06-01
In this paper, the hybridization of the GPS (General Pattern Search) method and SA (Simulated Annealing) is incorporated in the optimization process in order to find the global optimal solution for the fitness function and decision variables, as well as minimum computational CPU time. The real strength of the SA approach is tested in this case study problem of industrial production planning, owing to SA's great advantage of easily escaping local minima by accepting uphill moves through a probabilistic procedure in the final stages of the optimization process. Vasant [1], in his Ph.D. thesis, provided 16 different heuristic and meta-heuristic techniques for solving industrial production problems with non-linear cubic objective functions, eight decision variables, and 29 constraints. In this paper, fuzzy technological problems have been solved using hybrid techniques of general pattern search and simulated annealing. The simulated and computational results are compared to various other evolutionary techniques.
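The uphill-acceptance rule that lets SA escape local minima is the heart of the hybrid. A minimal sketch on a toy multimodal objective (standing in for the cubic production model, which is not reproduced here):

```python
import numpy as np

def anneal(f, x0, t0=1.0, cooling=0.995, steps=5000, seed=4):
    """Minimal simulated annealing: accept uphill moves with prob exp(-dE/T),
    which is what lets the search escape local minima late in the run."""
    rng = np.random.default_rng(seed)
    x, fx, temp = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        cand = x + rng.normal(0, 0.5)              # random neighbor move
        fc = f(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling                            # geometric cooling schedule
    return best, fbest

# Toy multimodal objective with many local minima.
f = lambda x: 0.05 * x**2 + np.sin(3 * x)
print(anneal(f, x0=4.0))
```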
Lobb, Eric C
2016-07-08
Version 6.3 of the RITG148+ software package offers eight automated analysis routines for quality assurance of the TomoTherapy platform. A performance evaluation of each routine was performed in order to compare RITG148+ results with traditionally accepted analysis techniques and verify that simulated changes in machine parameters are correctly identified by the software. Reference films were exposed according to AAPM TG-148 methodology for each routine, and the RITG148+ results were compared with either alternative software analysis techniques or manual analysis techniques in order to assess baseline agreement. Changes in machine performance were simulated through translational and rotational adjustments to subsequently irradiated films, and these films were analyzed to verify that the applied changes were accurately detected by each of the RITG148+ routines. For the Hounsfield unit routine, an assessment of the "Frame Averaging" functionality and the effects of phantom roll on the routine results are presented. All RITG148+ routines reported acceptable baseline results consistent with alternative analysis techniques, with 9 of the 11 baseline test results showing agreement of 0.1 mm/0.1° or better. Simulated changes were correctly identified by the RITG148+ routines within approximately 0.2 mm/0.2°, with the exception of the Field Center vs. Jaw Setting routine, which was found to have limited accuracy in cases where field centers were not aligned for all jaw settings due to inaccurate autorotation of the film during analysis. The performance of the RITG148+ software package was found to be acceptable for introduction into our clinical environment as an automated alternative to traditional analysis techniques for routine TomoTherapy quality assurance testing.
Demonstration of landfill gas enhancement techniques in landfill simulators
NASA Astrophysics Data System (ADS)
Walsh, J. J.; Vogt, W. G.
1982-02-01
Various techniques to enhance gas production in sanitary landfills were applied to landfill simulators. These techniques include (1) accelerated moisture addition, (2) leachate recycling, (3) buffer addition, (4) nutrient addition, and (5) combinations of the above. Results are compiled through ongoing operation and monitoring of sixteen landfill simulators. These test cells contain about 380 kg of municipal solid waste. Quantities of buffer and nutrient materials were placed in selected cells at the time of loading. Water is added to all test cells on a monthly basis; leachate is withdrawn from all cells (and recycled on selected cells), also on a monthly basis. Daily monitoring of gas volumes and refuse temperatures is performed. Gas and leachate samples are collected and analyzed on a monthly basis. Leachate and gas quality and quantity results are presented for the first 18 months of operation.
3D Ultrasonic Wave Simulations for Structural Health Monitoring
NASA Technical Reports Server (NTRS)
Campbell Leckey, Cara A.; Miller, Corey A.; Hinders, Mark K.
2011-01-01
Structural health monitoring (SHM) for the detection of damage in aerospace materials is an important area of research at NASA. Ultrasonic guided Lamb waves are a promising SHM damage detection technique since the waves can propagate long distances. For complicated flaw geometries experimental signals can be difficult to interpret. High performance computing can now handle full 3-dimensional (3D) simulations of elastic wave propagation in materials. We have developed and implemented parallel 3D elastodynamic finite integration technique (3D EFIT) code to investigate ultrasound scattering from flaws in materials. EFIT results have been compared to experimental data and the simulations provide unique insight into details of the wave behavior. This type of insight is useful for developing optimized experimental SHM techniques. 3D EFIT can also be expanded to model wave propagation and scattering in anisotropic composite materials.
EMU Suit Performance Simulation
NASA Technical Reports Server (NTRS)
Cowley, Matthew S.; Benson, Elizabeth; Harvill, Lauren; Rajulu, Sudhakar
2014-01-01
Introduction: Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. To verify that new suit designs meet requirements, full prototypes must be built and tested with human subjects. However, numerous design iterations will occur before the hardware meets those requirements. Traditional draw-prototype-test paradigms for research and development are prohibitively expensive with today's shrinking Government budgets. Personnel at NASA are developing modern simulation techniques that focus on a human-centric design paradigm. These new techniques make use of virtual prototype simulations and fully adjustable physical prototypes of suit hardware. This is extremely advantageous and enables comprehensive design down-selections to be made early in the design process. Objectives: The primary objective was to test modern simulation techniques for evaluating the human performance component of two EMU suit concepts, pivoted and planar style hard upper torso (HUT). Methods: This project simulated variations in EVA suit shoulder joint design and subject anthropometry and then measured the differences in shoulder mobility caused by the modifications. These estimations were compared to human-in-the-loop test data gathered during past suited testing using four subjects (two large males, two small females). Results: Results demonstrated that EVA suit modeling and simulation are feasible design tools for evaluating and optimizing suit design based on simulated performance. The suit simulation model was found to be advantageous in its ability to visually represent complex motions and volumetric reach zones in three dimensions, giving designers a faster and deeper comprehension of suit component performance vs. human performance. Suit models were able to discern differing movement capabilities between EMU HUT configurations, generic suit fit concerns, and specific suit fit concerns for crewmembers based on individual anthropometry.
Modeling target normal sheath acceleration using handoffs between multiple simulations
NASA Astrophysics Data System (ADS)
McMahon, Matthew; Willis, Christopher; Mitchell, Robert; King, Frank; Schumacher, Douglass; Akli, Kramer; Freeman, Richard
2013-10-01
We present a technique to model the target normal sheath acceleration (TNSA) process using full-scale LSP PIC simulations. The technique allows for a realistic laser, full size target and pre-plasma, and sufficient propagation length for the accelerated ions and electrons. A first simulation using a 2D Cartesian grid models the laser-plasma interaction (LPI) self-consistently and includes field ionization. Electrons accelerated by the laser are imported into a second simulation using a 2D cylindrical grid optimized for the initial TNSA process and incorporating an equation of state. Finally, all of the particles are imported to a third simulation optimized for the propagation of the accelerated ions and utilizing a static field solver for initialization. We also show use of 3D LPI simulations. Simulation results are compared to recent ion acceleration experiments using the SCARLET laser at The Ohio State University. This work was performed with support from AFOSR under contract # FA9550-12-1-0341, DARPA, and allocations of computing time from the Ohio Supercomputing Center.
Iterative repair for scheduling and rescheduling
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene; Deale, Michael
1991-01-01
An iterative repair search method called constraint-based simulated annealing is described. Simulated annealing is a hill-climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results are also shown of applying the technique to the NASA Space Shuttle ground processing problem. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.
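The acceptance rule at the heart of simulated annealing is compact enough to sketch. The following Python fragment is a generic illustration only; the paper's constraint-based repair moves and cost model are assumed here as abstract cost() and repair() callables:

    import math
    import random

    def anneal(schedule, cost, repair, t0=10.0, cooling=0.995, steps=20000):
        """Iterative repair with a simulated-annealing acceptance rule."""
        current = best = schedule
        t = t0
        for _ in range(steps):
            candidate = repair(current)  # repair-based move, not a blind random flip
            delta = cost(candidate) - cost(current)
            # Accept improving moves always; accept worsening moves with
            # probability exp(-delta/t), which lets the search escape local
            # minima while the temperature t is still high.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current
            t *= cooling  # geometric cooling schedule
        return best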
New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Lung, Shun-Fat
2017-01-01
A new time-domain approach for computing flutter speed is presented. Based on the time-history results of an aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated by coupling the estimated unsteady aerodynamic model with the known linear structural model. The critical dynamic pressure is computed and used in the subsequent simulation until convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.
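The system identification step can be illustrated with a generic least-squares fit of a discrete-time input-output model to simulated time histories. This Python sketch is only a toy stand-in for the paper's estimator; the model orders and test data are invented:

    import numpy as np

    def fit_arx(u, y, na=2, nb=2):
        """Fit y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j] by least squares."""
        n = max(na, nb)
        rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
                for k in range(n, len(y))]
        theta, *_ = np.linalg.lstsq(np.array(rows), y[n:], rcond=None)
        return theta[:na], theta[na:]

    # Toy data: a lightly damped second-order response to random excitation.
    rng = np.random.default_rng(1)
    u = rng.normal(size=2000)
    y = np.zeros_like(u)
    for k in range(2, len(u)):
        y[k] = 1.8 * y[k - 1] - 0.95 * y[k - 2] + 0.1 * u[k - 1]
    a, b = fit_arx(u, y)
    print("recovered AR coefficients:", a)  # approximately [1.8, -0.95]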
Characterization of the spectral phase of an intense laser at focus via ionization blueshift
Mittelberger, D. E.; Nakamura, K.; Lehe, R.; ...
2016-01-01
An in situ diagnostic for verifying the spectral phase of an intense laser pulse at focus is shown. This diagnostic relies on measuring the effect of optical compression on ionization-induced blueshifting of the laser spectrum. Experimental results from the Berkeley Lab Laser Accelerator, a laser source rigorously characterized by conventional techniques, are presented and compared with simulations to illustrate the utility of this technique. These simulations show distinguishable effects from second-, third-, and fourth-order spectral phase.
A Technique for Measuring Rotorcraft Dynamic Stability in the 40- by 80-Foot Wind Tunnel
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Bohn, J. G.
1977-01-01
An on-line technique is described for the measurement of tilt rotor aircraft dynamic stability in the Ames 40- by 80-Foot Wind Tunnel. The technique is based on advanced system identification methodology and uses the instrumental variables approach. It is particularly applicable to real-time estimation problems with limited amounts of noise-contaminated data. Several simulations are used to evaluate the algorithm. Estimated natural frequencies and damping ratios are compared with simulation values. The algorithm is also applied to wind tunnel data in an off-line mode. The results are used to develop preliminary guidelines for effective use of the algorithm.
Yu, Huan; Ni, Shi-Jun; Kong, Bo; He, Zheng-Wei; Zhang, Cheng-Jiang; Zhang, Shu-Qing; Pan, Xin; Xia, Chao-Xu; Li, Xuan-Qiong
2013-01-01
Land-use planning has triggered debates on social and environmental values, in which two key questions will be faced: one is how to see different planning simulation results instantaneously and apply the results back to interactively assist planning work; the other is how to ensure that the planning simulation result is scientific and accurate. To answer these questions, the objective of this paper is to analyze whether and how a bridge can be built between qualitative and quantitative approaches for land-use planning work and to find out a way to overcome the gap that exists between the ability to construct computer simulation models to aid integrated land-use plan making and the demand for them by planning professionals. The study presented a theoretical framework of land-use planning based on scenario analysis (SA) method and multiagent system (MAS) simulation integration and selected freshwater wetlands in the Sanjiang Plain of China as a case study area. Study results showed that MAS simulation technique emphasizing quantitative process effectively compensated for the SA method emphasizing qualitative process, which realized the organic combination of qualitative and quantitative land-use planning work, and then provided a new idea and method for the land-use planning and sustainable managements of land resources. PMID:23818816
A novel, highly efficient cavity backshort design for far-infrared TES detectors
NASA Astrophysics Data System (ADS)
Bracken, C.; de Lange, G.; Audley, M. D.; Trappe, N.; Murphy, J. A.; Gradziel, M.; Vreeling, W.-J.; Watson, D.
2018-03-01
In this paper we present a new cavity backshort design for TES (transition edge sensor) detectors which will provide increased coupling of the incoming astronomical signal to the detectors. The increased coupling results from the improved geometry of the cavities, where the geometry is a consequence of the proposed chemical etching manufacturing technique. Using a number of modelling techniques, predicted results for the performance of the cavities at frequencies of 4.3-10 THz are presented and compared to more standard cavity designs. Excellent optical efficiency is demonstrated, with improved response flatness across the band. In order to verify the simulated results, a scaled model cavity was built for testing at the lower W-band frequencies (75-100 GHz) with a VNA system. Further testing of the scale model at THz frequencies was carried out using a globar and bolometer via an FTS measurement set-up. The experimental results are presented and compared to the simulations. Although the agreement between simulation and measurement is relatively poor at some frequencies, the discrepancies are explained by means of higher-mode excitation in the measured cavity which is not accounted for in the single-mode simulations. To verify this assumption, a better-behaved cylindrical cavity was simulated and measured, and excellent agreement is demonstrated in those results. It can be concluded that both the simulations and the supporting measurements give confidence that this novel cavity design will indeed provide much-improved optical coupling for TES detectors in the far-infrared/THz band.
Tam, Matthew D B S; Lewis, Mark
2012-10-01
Safe femoral arterial access is an important procedural step in many interventional procedures, and variations in the anatomy of the region are well known. The aim of this study was to redefine the anatomy relevant to femoral arterial puncture and to simulate the results of different puncture techniques. A total of 100 consecutive CT angiograms were used, and regions of interest were labelled, giving Cartesian co-ordinates which allowed determination of the arterial puncture site relative to the skin puncture site, the bifurcation, and the inguinal ligament (ING). The ING was 16.6 mm lower than the position defined by bony landmarks. The femoral bifurcation was above the inferior aspect of the femoral head in 51% and entirely medial to the femoral head in 1%. Simulated antegrade and retrograde punctures with a dogmatic technique, using a 45-degree angle, would result in a significant rate of high and low arterial punctures. Simulated 50% soft tissue compression also resulted in a decreased rate of high retrograde punctures but an increased rate of low antegrade punctures. Use of dogmatic access techniques is predicted to result in an unacceptably high rate of dangerous high and low punctures. Puncture angle and geometry can be severely affected by patient obesity. The combination of fluoroscopy to identify the entry point, ultrasound guidance to identify the femoral bifurcation, and soft tissue compression to improve puncture geometry is critical for safe femoral arterial access.
Assessment of simulation fidelity using measurements of piloting technique in flight. II
NASA Technical Reports Server (NTRS)
Ferguson, S. W.; Clement, W. F.; Hoh, R. H.; Cleveland, W. B.
1985-01-01
Two piloting tasks performed in the Vertical Motion Simulator (presently being used to assess the fidelity of UH-60A simulation) are evaluated: (1) the dash/quickstop nap-of-the-earth (NOE) task, and (2) the bop-up task. Data from these two flight test experiments are presented which provide information on the effect of reduced visual field of view, variation in scene content and texture, and the effect of pure time delay on the closed-loop pilot response. In comparison with task performance results obtained in flight tests, the results from the simulation indicate that the pilot's NOE task performance in the simulator is significantly degraded.
Wideband piezoelectric energy harvester for low-frequency application with plucking mechanism
NASA Astrophysics Data System (ADS)
Hiraki, Yasuhiro; Masuda, Arata; Ikeda, Naoto; Katsumura, Hidenori; Kagata, Hiroshi; Okumura, Hidenori
2015-04-01
Wireless sensor networks need energy harvesting from the vibrational environment for their power supply. The conventional resonance-type vibration energy harvesters, however, are not always effective for low-frequency applications. The purpose of this paper is to propose a high-efficiency energy harvester for low-frequency applications by utilizing plucking and SSHI techniques, and to investigate the effects of applying those techniques in terms of energy harvesting efficiency. First, we derived an approximate formulation of the energy harvesting efficiency of the plucking device by theoretical analysis. Next, it was confirmed that the improved efficiency agreed with numerical and experimental results. Also, a parallel SSHI, a switching circuit technique to improve the performance of the harvester, was introduced and examined by numerical simulations and experiments. Contrary to the simulated results, in which the efficiency was improved from 13.1% to 22.6% by introducing the SSHI circuit, the efficiency obtained in the experiment was only 7.43%. This would be due to the internal resistance of the inductors and photo MOS relays on the switching circuit; a simulation including this factor revealed its large negative influence. This result suggests that reducing the switching resistance is critically important to the implementation of SSHI.
ERIC Educational Resources Information Center
Boker, Steven M.; Nesselroade, John R.
2002-01-01
Examined two methods for fitting models of intrinsic dynamics to intraindividual variability data by testing these techniques' behavior in equations through simulation studies. Among the main results is the demonstration that a local linear approximation of derivatives can accurately recover the parameters of a simulated linear oscillator, with…
Investigation of laser Doppler techniques using the Monte Carlo method
NASA Astrophysics Data System (ADS)
Ruetten, Walter; Gellekum, Thomas; Jessen, Katrin
1995-01-01
Laser Doppler techniques are increasingly used in research and clinical applications to study perfusion phenomena in the skin, yet the influences of changing scattering parameters and geometry on the measure of perfusion are not well explored. To investigate these influences, a simulation program based on the Monte Carlo method was developed, which is capable of determining the Doppler spectra caused by moving red blood cells. The simulation model allows for the definition of arbitrary networks of blood vessels with individual velocities. The volume is represented by a voxel tree with adaptive spatial resolution which contains references to the optical properties and is used to store the location-dependent photon fluence determined during the simulation. Two evaluation methods for Doppler spectra from biological tissue described in the literature were investigated with the simulation program. The results obtained suggest that both methods give a measure of perfusion nearly proportional to the velocity of the red blood cells. However, simulations done with different geometries of the blood vessels seem to indicate a nonlinear behavior with respect to the concentration of red blood cells in the measurement volume. Nevertheless, these simulation results may help in the interpretation of measurements obtained from devices using the investigated evaluation methods.
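The core mechanism being simulated can be reduced to a toy model: photons scatter repeatedly, and each scattering event off a moving red blood cell imparts a Doppler shift f = v.(k_out - k_in)/(2*pi). The Python sketch below is a deliberately simplified 2D illustration with made-up parameters, not the voxel-tree model of the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    n_photons, n_events = 50_000, 30
    k = 2 * np.pi * 1.4 / 633e-9        # wavenumber in tissue (n = 1.4, 633 nm)
    blood_fraction, v_rbc = 0.05, 1e-3  # 5% moving scatterers, 1 mm/s along x

    shifts = np.zeros(n_photons)
    for _ in range(n_events):
        theta_in = rng.uniform(0, 2 * np.pi, n_photons)
        theta_out = rng.uniform(0, 2 * np.pi, n_photons)
        moving = rng.random(n_photons) < blood_fraction
        # q = k_out - k_in; only scattering off moving cells adds a shift.
        qx = k * (np.cos(theta_out) - np.cos(theta_in))
        shifts += np.where(moving, v_rbc * qx / (2 * np.pi), 0.0)

    hist, edges = np.histogram(shifts, bins=200)  # the simulated Doppler spectrum
    print("rms Doppler shift [Hz]:", shifts.std())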
Simulating the x-ray image contrast to setup techniques with desired flaw detectability
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2015-04-01
The paper provides simulation data from previous work by the author on developing a model for estimating the detectability of crack-like flaws in radiography. The methodology was developed to help in the implementation of NASA special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing the detector resolution. Applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs calculating the x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source size, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. These simulations demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide the desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.
Application of Discrete Fracture Modeling and Upscaling Techniques to Complex Fractured Reservoirs
NASA Astrophysics Data System (ADS)
Karimi-Fard, M.; Lapene, A.; Pauget, L.
2012-12-01
During the last decade, an important effort has been made to improve data acquisition (seismic and borehole imaging) and workflows for reservoir characterization, which has greatly benefited the description of fractured reservoirs. However, the geological models resulting from the interpretations need to be validated or calibrated against dynamic data. Flow modeling in fractured reservoirs remains a challenge due to the difficulty of representing mass transfers at different heterogeneity scales. The majority of the existing approaches are based on a dual-continuum representation where the fracture network and the matrix are represented separately and their interactions are modeled using transfer functions. These models are usually based on an idealized representation of the fracture distribution, which makes the integration of real data difficult. In recent years, due to increases in computer power, discrete fracture modeling techniques (DFM) are becoming popular. In these techniques the fractures are represented explicitly, allowing the direct use of data. In this work we consider the DFM technique developed by Karimi-Fard et al. [1], which is based on an unstructured finite-volume discretization. The mass flux between two adjacent control volumes is evaluated using an optimized two-point flux approximation. The result of the discretization is a list of control volumes with the associated pore volumes and positions, and a list of connections with the associated transmissibilities. Fracture intersections are simplified using a connectivity transformation which contributes considerably to the efficiency of the methodology. In addition, the method is designed for general-purpose simulators, and any connectivity-based simulator can be used for flow simulations. The DFM technique is either used standalone or as part of an upscaling technique. Upscaling techniques are required for large reservoirs where the explicit representation of all fractures and faults is not possible. Karimi-Fard et al. [2] have developed an upscaling technique based on the DFM representation. The original version of this technique was developed to construct a dual-porosity model from a discrete fracture description. This technique has been extended and generalized so it can be applied to a wide range of problems, from reservoirs with few or no fractures to highly fractured reservoirs. In this work, we present the application of these techniques to two three-dimensional fractured reservoirs constructed using real data. The first model contains more than 600 medium- and large-scale fractures. The fractures are not always connected, which requires a general modeling technique. The reservoir has 50 wells (injectors and producers) and water flooding simulations are performed. The second test case is a larger reservoir with sparsely distributed faults. Single-phase simulations are performed with 5 producing wells. [1] Karimi-Fard M., Durlofsky L.J., and Aziz K. 2004. An efficient discrete-fracture model applicable for general-purpose reservoir simulators. SPE Journal, 9(2): 227-236. [2] Karimi-Fard M., Gong B., and Durlofsky L.J. 2006. Generation of coarse-scale continuum flow models from detailed fracture characterizations. Water Resources Research, 42(10): W10423.
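The connectivity-list output described above pairs naturally with a two-point flux approximation, in which each connection's transmissibility is a harmonic combination of two "half transmissibilities". The Python sketch below follows the general form of such schemes; the geometric inputs are illustrative, not taken from the paper:

    def half_transmissibility(area, perm, dist):
        """alpha_i = A_ij * k_i / d_i for one control volume at a shared face."""
        return area * perm / dist

    def transmissibility(area, k_i, d_i, k_j, d_j):
        """T_ij from the harmonic combination of the two half transmissibilities."""
        a_i = half_transmissibility(area, k_i, d_i)
        a_j = half_transmissibility(area, k_j, d_j)
        return a_i * a_j / (a_i + a_j)

    # Mass flux between control volumes then follows q_ij = T_ij * (p_i - p_j),
    # so any connectivity-based simulator can consume the (volume, connection,
    # transmissibility) lists directly.
    print(transmissibility(area=1.0, k_i=1e-12, d_i=0.5, k_j=1e-15, d_j=0.5))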
Design and evaluation of a DAMQ multiprocessor network with self-compacting buffers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, J.; O'Krafka, B.W.O.; Vassiliadis, S.
1994-12-31
This paper describes a new approach to implementing Dynamically Allocated Multi-Queue (DAMQ) switching elements using a technique called "self-compacting buffers". This technique is efficient in that the amount of hardware required to manage the buffers is relatively small; it offers high performance since it is an implementation of a DAMQ. The first part of this paper describes the self-compacting buffer architecture in detail and compares it against a competing DAMQ switch design. The second part presents extensive simulation results comparing the performance of a self-compacting buffer switch against an ideal switch, including several examples of k-ary n-cubes and delta networks. In addition, simulation results show how the performance of an entire network can be quickly and accurately approximated by simulating just a single switching element.
NASA Technical Reports Server (NTRS)
Hinton, David A.
1989-01-01
Numerous air carrier accidents and incidents result from encounters with the atmospheric wind shear associated with microburst phenomena, in some cases resulting in heavy loss of life. An important issue in current wind shear research is how to best manage aircraft performance during an inadvertent wind shear encounter. The goals of this study were to: (1) develop techniques and guidance for maximizing an aircraft's ability to recover from microburst encounters following takeoff, (2) develop an understanding of how theoretical predictions of wind shear recovery performance might be achieved in actual use, and (3) gain insight into the piloting factors associated with recovery from microburst encounters. Three recovery strategies were implemented and tested in piloted simulation. Results show that a recovery strategy based on flying a flight path angle schedule produces improved performance over constant pitch attitude or acceleration-based recovery techniques. The best recovery technique was initially counterintuitive to the pilots who participated in the study. Evidence was found to indicate that the techniques required for flight through the turbulent vortex of a microburst may differ from the techniques being developed using classical, nonturbulent microburst models.
Technology for Transient Simulation of Vibration during Combustion Process in Rocket Thruster
NASA Astrophysics Data System (ADS)
Zubanov, V. M.; Stepanov, D. V.; Shabliy, L. S.
2018-01-01
The article describes a technology for the simulation of transient combustion processes in a rocket thruster, for determination of the vibration frequencies that occur during combustion. The engine operates on gaseous propellants: oxygen and hydrogen. Combustion simulation was performed using the ANSYS CFX software. Three reaction mechanisms for the stationary mode were considered and described in detail. A way of obtaining quick CFD results with intermediate combustion components using an EDM model was found. The procedure for generating the Flamelet library with CFX-RIF is described. A technique for modeling transient combustion processes in the rocket thruster was proposed based on the Flamelet library. A cyclic irregularity of the temperature field, resembling vortex core precession, was detected in the chamber. The frequency of flame precession was obtained with the proposed simulation technique.
NASA Technical Reports Server (NTRS)
Ungar, Stephen G.; Merry, Carolyn J.; Mckim, Harlan L.; Irish, Richard; Miller, Michael S.
1988-01-01
A simulated data set was used to evaluate techniques for extracting topography from side-looking satellite systems for an area of northwest Washington state. A negative-transparency orthophotoquad was digitized at a spacing of 85 microns, resulting in an equivalent ground distance of 9.86 m between pixels and a radiometric resolution of 256 levels. A bilinear interpolation was performed on digital elevation model data to generate elevation data at 9.86-m resolution. The nominal orbital characteristics and geometry of the SPOT satellite were convolved with the data to produce simulated panchromatic HRV digital stereo imagery for three different orbital paths, and techniques for reconstructing topographic data were developed. Analyses with the simulated HRV data and other data sets show that the method is effective.
Höfler, K; Schwarzer, S
2000-06-01
Building on an idea of Fogelson and Peskin [J. Comput. Phys. 79, 50 (1988)] we describe the implementation and verification of a simulation technique for systems of non-Brownian particles in fluids at Reynolds numbers up to about 20 on the particle scale. This direct simulation technique fills a gap between simulations in the viscous regime and high-Reynolds-number modeling. It also combines sufficient computational accuracy with numerical efficiency and allows studies of several thousand, in principle arbitrarily shaped, extended and hydrodynamically interacting particles on regular workstations. We verify the algorithm in two and three dimensions for (i) single falling particles and (ii) a fluid flowing through a bed of fixed spheres. In the context of sedimentation we compute the volume fraction dependence of the mean sedimentation velocity. The results are compared with experimental and other numerical results, both in the viscous and inertial regimes, and we find very satisfactory agreement.
Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators
NASA Technical Reports Server (NTRS)
Fantini, Jay A.
1998-01-01
Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one count of error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents the algorithm development, FORTRAN code, and performance results.
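The essence of the method, seeding Newton-Raphson with a reverse linear interpolation between the EU limits, fits in a few lines. The Python sketch below is only an illustration of the idea (the paper's implementation is in FORTRAN, and the coefficient layout and count limits here are hypothetical):

    import numpy as np

    def eu_to_counts(coeffs, eu, count_min=0, count_max=1023, max_iter=20):
        """Invert a counts-to-EU calibration polynomial for one EU value.

        coeffs: c0..cn with EU = c0 + c1*counts + ... + cn*counts**n.
        """
        p = np.polynomial.Polynomial(coeffs)
        dp = p.deriv()
        eu_min, eu_max = p(count_min), p(count_max)
        # Reverse linear interpolation between the EU limits seeds the iteration.
        x = count_min + (eu - eu_min) * (count_max - count_min) / (eu_max - eu_min)
        for _ in range(max_iter):
            step = (p(x) - eu) / dp(x)  # Newton-Raphson update
            x -= step
            if abs(step) < 0.5:         # converged to within one count
                break
        return int(round(min(max(x, count_min), count_max)))

    # Example: a mildly nonlinear third-order calibration.
    print(eu_to_counts([0.0, 0.1, 1e-5, -2e-9], eu=75.0))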
NASA Technical Reports Server (NTRS)
Dermanis, A.
1977-01-01
The possibility of recovering earth rotation and network geometry (baseline) parameters is emphasized. The simulated numerical experiments are set up in an environment where station coordinates vary with respect to inertial space according to a simulated earth rotation model similar to the actual but unknown rotation of the earth. The basic technique of VLBI and its mathematical model are presented. The chosen parametrization of earth rotation is described and the resulting model is linearized. A simple analysis of the geometry of the observations leads to some useful hints on achieving maximum sensitivity of the observations with respect to the parameters considered. The basic philosophy for the simulation of data and their analysis through standard least squares adjustment techniques is presented. A number of characteristic network designs based on present and candidate station locations are chosen. The results of the simulations for each design are presented together with a summary of the conclusions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vance, B.; Mendillo, M.
1981-04-30
A three-dimensional model of the ionosphere was developed including chemical reactions and neutral and plasma transport. The model uses Finite Element Simulation to simulate ionospheric modification rather than solving a set of differential equations. The initial conditions of the Los Alamos Scientific Laboratory experiments, Lagopedo Uno and Dos, were input to the model, and these events were simulated. Simulation results were compared to ground and rocketborne electron-content measurements. A simulation of the transport of released SF6 was also made.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pang, Yuan-Ping, E-mail: pang@mayo.edu
Highlights: • Reducing atomic masses by 10-fold vastly improves sampling in MD simulations. • CLN025 folded in 4 of 10 × 0.5-μs MD simulations when masses were reduced by 10-fold. • CLN025 folded as early as 96.2 ns in 1 of the 4 simulations that captured folding. • CLN025 did not fold in 10 × 0.5-μs MD simulations when standard masses were used. • Low-mass MD simulation is a simple and generic sampling enhancement technique. - Abstract: CLN025 is one of the smallest fast-folding proteins. Until now it has not been reported that CLN025 can autonomously fold to its native conformation in a classical, all-atom, and isothermal–isobaric molecular dynamics (MD) simulation. This article reports the autonomous and repeated folding of CLN025 from a fully extended backbone conformation to its native conformation in explicit solvent in multiple 500-ns MD simulations at 277 K and 1 atm, with the first folding event occurring as early as 66.1 ns. These simulations were accomplished by using AMBER forcefield derivatives with atomic masses reduced by 10-fold on Apple Mac Pros. By contrast, no folding event was observed when the simulations were repeated using the original AMBER forcefields of FF12SB and FF14SB. The results demonstrate that low-mass MD simulation is a simple and generic technique to enhance configurational sampling. This technique may propel autonomous folding of a wide range of miniature proteins in classical, all-atom, and isothermal–isobaric MD simulations performed on commodity computers, an important step forward in quantitative biology.
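The physical basis of the trick is that, for a fixed potential-energy surface, dynamical time scales grow as the square root of mass, so uniformly reducing masses 10-fold compresses the kinetics by about sqrt(10) = 3.16 while leaving equilibrium (configurational) distributions unchanged. A toy harmonic-oscillator check in Python (illustrative only, not the paper's AMBER setup):

    import numpy as np

    def velocity_verlet(x, v, m, force, dt, n_steps):
        """Toy 1D velocity-Verlet integrator."""
        a = force(x) / m
        for _ in range(n_steps):
            x += v * dt + 0.5 * a * dt**2
            a_new = force(x) / m
            v += 0.5 * (a + a_new) * dt
            a = a_new
        return x, v

    # Harmonic well: the potential (hence the Boltzmann distribution) is
    # independent of mass; only the time scale changes.
    force = lambda x: -x
    x0, v0 = 1.0, 0.0
    # Standard mass: period 2*pi.  Low mass (0.1x): period 2*pi*sqrt(0.1).
    for m in (1.0, 0.1):
        period = 2 * np.pi * np.sqrt(m)
        x, _ = velocity_verlet(x0, v0, m, force, dt=1e-3, n_steps=int(period / 1e-3))
        print(f"m={m}: x after one period = {x:+.4f} (should return to +1)")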
Simulation of FIB-SEM images for analysis of porous microstructures.
Prill, Torben; Schladitz, Katja
2013-01-01
Focused ion beam-scanning electron microscopy (FIB-SEM) tomography yields high-quality three-dimensional images of materials microstructures at the nanometer scale, combining serial sectioning using a focused ion beam with SEM. However, FIB-SEM tomography of highly porous media leads to shine-through artifacts preventing automatic segmentation of the solid component. We simulate the SEM process in order to generate synthetic FIB-SEM image data for developing and validating segmentation methods. Monte Carlo techniques yield accurate results, but are too slow for the simulation of FIB-SEM tomography, which requires hundreds of SEM images for one dataset alone. Nevertheless, a quasi-analytic description of the specimen and various acceleration techniques, including a track compression algorithm and an acceleration for the simulation of secondary electrons, cut down the computing time by orders of magnitude, allowing FIB-SEM tomography to be simulated for the first time. © Wiley Periodicals, Inc.
Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows
NASA Astrophysics Data System (ADS)
Zwick, David; Hackl, Jason; Balachandar, S.
2017-11-01
Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flows is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on the larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply them to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storelli, A., E-mail: alexandre.storelli@lpp.polytechnique.fr; Vermare, L.; Hennequin, P.
2015-06-15
In a dedicated collisionality scan in Tore Supra, the geodesic acoustic mode (GAM) is detected and identified with the Doppler backscattering technique. Observations are compared to the results of a simulation with the gyrokinetic code GYSELA. We found that the GAM frequency in experiments is lower than predicted by simulation and theory. Moreover, the disagreement is higher in the low-collisionality scenario. Bursts of non-harmonic GAM oscillations have been characterized with filtering techniques, such as the Hilbert-Huang transform. When comparing this dynamical behaviour between experiments and simulation, the probability density function of GAM amplitude and the burst autocorrelation time are found to be remarkably similar. In the simulation, where the radial profile of GAM frequency is continuous, we observed a phenomenon of radial phase mixing of the GAM oscillations, which could influence the burst autocorrelation time.
Study of changes in properties of solar sail materials from radiation exposure
NASA Technical Reports Server (NTRS)
Smith, T.
1977-01-01
Techniques are developed for monitoring changes in the properties of solar sail materials resulting from space radiation simulation, stressing (e.g., thermal, mechanical), and exposure to terrestrial environments. The properties of interest are: metallic coating deterioration, polymeric film deterioration, interfacial debonding, and possible metallic coating diffusion into the polymeric film. Four accelerated tests were devised to simulate the possible degradation processes mentioned above. These four tests are: a thermal shock test to simulate the wide variation of temperature expected in space (260 C to -100 C), a cyclic temperature test to simulate the 6-minute temperature cycle anticipated in space, a mechanical vibration test to simulate mechanical bending, folding and handling, and a humidity test to simulate terrestrial environment effects. The techniques for monitoring property changes are: visual and microscopic examination, ellipsometry, surface potential difference (SPD), photoelectron emission (PEE), and water contact angles.
Monte Carlo simulations of particle acceleration at oblique shocks: Including cross-field diffusion
NASA Technical Reports Server (NTRS)
Baring, M. G.; Ellison, D. C.; Jones, F. C.
1995-01-01
The Monte Carlo technique of simulating diffusive particle acceleration at shocks has made spectral predictions that compare extremely well with particle distributions observed at the quasi-parallel region of the earth's bow shock. The current extension of this work to compare simulation predictions with particle spectra at oblique interplanetary shocks has required the inclusion of significant cross-field diffusion (strong scattering) in the simulation technique, since oblique shocks are intrinsically inefficient in the limit of weak scattering. In this paper, we present results from the method we have developed for the inclusion of cross-field diffusion in our simulations, namely model predictions of particle spectra downstream of oblique subluminal shocks. While the high-energy spectral index is independent of the shock obliquity and the strength of the scattering, the latter is observed to profoundly influence the efficiency of injection of cosmic rays into the acceleration process.
Space Simulation, 7th. [facilities and testing techniques
NASA Technical Reports Server (NTRS)
1973-01-01
Space simulation facilities and techniques are outlined that encompass thermal scale modeling, computerized simulations, reentry materials, spacecraft contamination, solar simulation, vacuum tests, and heat transfer studies.
Convolutional coding results for the MVM '73 X-band telemetry experiment
NASA Technical Reports Server (NTRS)
Layland, J. W.
1978-01-01
Results of simulation of several short-constraint-length convolutional codes using a noisy symbol stream obtained via the turnaround ranging channels of the MVM'73 spacecraft are presented. First operational use of this coding technique is on the Voyager mission. The relative performance of these codes in this environment is as previously predicted from computer-based simulations.
Estimation of discontinuous coefficients in parabolic systems: Applications to reservoir simulation
NASA Technical Reports Server (NTRS)
Lamm, P. D.
1984-01-01
Spline based techniques for estimating spatially varying parameters that appear in parabolic distributed systems (typical of those found in reservoir simulation problems) are presented. The problem of determining discontinuous coefficients, estimating both the functional shape and points of discontinuity for such parameters is discussed. Convergence results and a summary of numerical performance of the resulting algorithms are given.
Advances in Heavy Ion Beam Probe Technology and Operation on MST
NASA Astrophysics Data System (ADS)
Demers, D. R.; Connor, K. A.; Schoch, P. M.; Radke, R. J.; Anderson, J. K.; Craig, D.; den Hartog, D. J.
2003-10-01
A technique to map the magnetic field of a plasma via spectral imaging is being developed with the Heavy Ion Beam Probe on the Madison Symmetric Torus. The technique will utilize two-dimensional images of the ion beam in the plasma, acquired by two CCD cameras, to generate a three-dimensional reconstruction of the beam trajectory. This trajectory, and the known beam ion mass, energy, and charge state, will be used to determine the magnetic field of the plasma. A suitable emission line has not yet been observed, since radiation from the MST plasma is both broadband and intense. An effort to raise the emission intensity from the ion beam by increasing beam focus and current has been undertaken. Simulations of the accelerator ion optics and beam characteristics led to a technique, confirmed by experiment, that achieves a narrower beam and a marked increase in ion current near the plasma surface. The improvements arising from these simulations will be discussed. Realization of the magnetic field mapping technique is contingent upon accurate reconstruction of the beam trajectory from the camera images. Simulations of the two cameras' CCD images, including the interior of MST, its various landmarks, and beam trajectories, have been developed. These simulations accept user input such as camera locations, resolution via pixellization, and noise. The quality of the images simulated with these and other variables will help guide the selection of viewing port pairs, image size, and camera specifications. The results of these simulations will be presented.
Scheduling Earth Observing Satellites with Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2003-01-01
We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
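Of the search techniques compared, stochastic hill climbing is the simplest baseline, and it exposes the shared structure the comparison rests on: represent, mutate, evaluate. A generic Python sketch, with fitness() and mutate() assumed as problem-specific callables (not the paper's actual Java scheduler):

    import random

    def stochastic_hill_climb(schedule, fitness, mutate, evaluations=100_000):
        """Keep a mutated schedule whenever its fitness does not get worse."""
        best, best_fit = schedule, fitness(schedule)
        for _ in range(evaluations):
            candidate = mutate(best)
            f = fitness(candidate)
            if f >= best_fit:  # accept ties so the search can drift across plateaus
                best, best_fit = candidate, f
        return best, best_fit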
ERIC Educational Resources Information Center
CRAWFORD, MEREDITH P.
OPEN AND CLOSED LOOP SIMULATION IS DISCUSSED FROM THE VIEWPOINT OF RESEARCH AND DEVELOPMENT IN TRAINING TECHNIQUES. AREAS DISCUSSED INCLUDE--(1) OPEN-LOOP ENVIRONMENTAL SIMULATION, (2) SIMULATION NOT INVOLVING PEOPLE, (3) ANALYSIS OF OCCUPATIONS, (4) SIMULATION FOR TRAINING, (5) REAL-SIZE SYSTEM SIMULATION, (6) TECHNIQUES OF MINIATURIZATION, AND…
Reliability of regional climate simulations
NASA Astrophysics Data System (ADS)
Ahrens, W.; Block, A.; Böhm, U.; Hauffe, D.; Keuler, K.; Kücken, M.; Nocke, Th.
2003-04-01
Quantification of uncertainty is becoming more and more a key issue for assessing the trustworthiness of future climate scenarios. In addition to the mean conditions, climate impact modelers focus in particular on extremes. Before generating such scenarios using e.g. dynamic regional climate models, a careful validation of present-day simulations should be performed to determine the range of errors for the quantities of interest under recent conditions as a raw estimate of their uncertainty in the future. Often, multiple aspects shall be covered together, and the required simulation accuracy depends on the user's demand. In our approach, a massively parallel regional climate model shall be used on the one hand to generate "long-term" high-resolution climate scenarios for several decades, and on the other hand to provide very high-resolution ensemble simulations of future dry spells or heavy rainfall events. To diagnose the model's performance for present-day simulations, we have recently developed and tested a first version of a validation and visualization chain for this model. It is, however, applicable in a much more general sense and could be used as a common test bed for any regional climate model aiming at this type of simulations. Depending on the user's interest, integrated quality measures can be derived for near-surface parameters using multivariate techniques and multidimensional distance measures in a first step. At this point, advanced visualization techniques have been developed and included to allow for visual data mining and to qualitatively identify dominating aspects and regularities. Univariate techniques that are especially designed to assess climatic aspects in terms of statistical properties can then be used to quantitatively diagnose the error contributions of the individual parameters used. Finally, a comprehensive in-depth diagnosis tool allows one to investigate why the model produces the obtained near-surface results, to answer the question of whether the model performs well from the modeler's point of view. Examples will be presented for results obtained using this approach for assessing the risk of potential total agricultural yield loss under drought conditions in Northeast Brazil and for evaluating simulation results for a 10-year period for Europe. To support multi-run simulations and result evaluation, the model will be embedded into an already existing simulation environment that provides further postprocessing tools for sensitivity studies, behavioral analysis and Monte-Carlo simulations, but also for ensemble scenario analysis, in one of the next steps.
Fast Learning for Immersive Engagement in Energy Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Brian W; Bugbee, Bruce; Gruchalla, Kenny M
The fast computation which is critical for immersive engagement with and learning from energy simulations would be furthered by developing a general method for creating rapidly computed simplified versions of NREL's computation-intensive energy simulations. Created using machine learning techniques, these 'reduced form' simulations can provide statistically sound estimates of the results of the full simulations at a fraction of the computational cost, with response times - typically less than one minute of wall-clock time - suitable for real-time human-in-the-loop design and analysis. Additionally, uncertainty quantification techniques can document the accuracy of the approximate models and their domain of validity. Approximation methods are applicable to a wide range of computational models, including supply-chain models, electric power grid simulations, and building models. These reduced-form representations cannot replace or re-implement existing simulations, but instead supplement them by enabling rapid scenario design and quality assurance for large sets of simulations. We present an overview of the framework and methods we have implemented for developing these reduced-form representations.
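In outline, a reduced-form representation is a supervised regression fitted to (input, output) pairs harvested from full simulation runs, with a held-out score documenting its domain of validity. A hedged Python sketch using scikit-learn with synthetic stand-in data (NREL's actual models and tooling are not reproduced here):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Stand-in for (input parameters -> full simulation output) pairs.
    X = rng.uniform(size=(2000, 5))
    y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=2000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    # The fitted surrogate answers scenario queries in milliseconds; the
    # held-out score is a simple stand-in for uncertainty quantification.
    print("held-out R^2:", surrogate.score(X_te, y_te))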
A Large number of fast cosmological simulations
NASA Astrophysics Data System (ADS)
Koda, Jun; Kazin, E.; Blake, C.
2014-01-01
Mock galaxy catalogs are essential tools for analyzing large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement set by the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Domenico, Giovanni, E-mail: didomenico@fe.infn.it; Cardarelli, Paolo; Taibi, Angelo
Purpose: The quality of a radiography system is affected by several factors, a major one being the focal spot size of the x-ray tube. In fact, the measurement of such size is recognized to be of primary importance during acceptance tests and image quality evaluations of clinical radiography systems. The most common device providing an image of the focal spot emission distribution is a pin-hole camera, which requires a high tube loading in order to produce a measurable signal. This work introduces an alternative technique to obtain an image of the focal spot, through the processing of a single radiograph of a simple test object, acquired with a suitable magnification. Methods: The radiograph of a magnified sharp edge is a well-established method to evaluate the extension of the focal spot profile along the direction perpendicular to the edge. From a single radiograph of a circular x-ray absorber, it is possible to extract simultaneously the radial profiles of several sharp edges with different orientations. The authors propose a technique that allows to obtain an image of the focal spot through the processing of these radial profiles by means of a pseudo-CT reconstruction technique. In order to validate this technique, the reconstruction has been applied to the simulated radiographs of an ideal disk-shaped absorber, generated by various simulated focal spot distributions. Furthermore, the method has been applied to the focal spot of a commercially available mammography unit. Results: In the case of simulated radiographs, the results of the reconstructions have been compared to the original distributions, showing an excellent agreement for what regards both the overall distribution and the full width at half maximum measurements. In the case of the experimental test, the method allowed to obtain images of the focal spot that have been compared with the results obtained through standard techniques, namely, pin-hole camera and slit camera. Conclusions: The method was proven to be effective for simulated images and the results of the experimental test suggest that it could be considered as an alternative technique for focal spot distribution evaluation. The method offers the possibility to measure the actual focal spot size and emission distribution at the same exposure conditions as clinical routine, avoiding high tube loading as in the case of the pin-hole imaging technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Srutarshi; Rajan, Rehim N.; Singh, Sandeep K.
2014-07-01
DC accelerators undergo different types of discharges during operation. A model depicting the discharges has been simulated to study the different transient conditions. The paper presents a physics-based approach to developing a compact circuit model of a DC accelerator using the Partial Element Equivalent Circuit (PEEC) technique. The equivalent RLC model aids in analyzing the transient behavior of the system and predicting anomalies in the system. The electrical discharges and the properties prevailing in the accelerator can be evaluated with this equivalent model. A parallel coupled voltage multiplier structure is simulated in small scale using a few stages of corona guards, and the theoretical and practical results are compared. The PEEC technique leads to a simple model for studying fault conditions in accelerator systems. Compared to finite element techniques, this technique gives a circuit representation. The lumped components of the PEEC are used to obtain the input impedance, and the result is also compared to that of the FEM technique for a frequency range of 0-200 MHz.
Using Multi-Spacecraft Technique to Identify the Structure of Magnetic Field in CMEs
NASA Astrophysics Data System (ADS)
Al-haddad, N. A.; Jacobs, C.; Poedts, S.; Moestl, C.; Farrugia, C. J.; Lugaz, N.
2013-12-01
In order to understand the magnetic field structure of coronal mass ejections (CMEs), it is often required to investigate its local configuration at different positions within the CME. While this can be very challenging to implement observationally, it is readily applicable when using numerical simulations. In this work, we study the properties of a simulated CME using a multi-spacecraft technique. We have shown previously how the reconstruction of magnetic fields from a single spacecraft may yield misleading results. Here, we look into the reconstruction of the magnetic field using sets of two and three spacecraft at different longitudes, and discuss the effectiveness of this technique. This type of work can pave the way for future out-of-the-ecliptic missions such as Solar Probe or Solar Orbiter. [Figure: Grad-Shafranov reconstruction of simulated satellite measurements of a CME containing writhed field lines.]
Numerical simulation of coupled electrochemical and transport processes in battery systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liaw, B.Y.; Gu, W.B.; Wang, C.Y.
1997-12-31
Advanced numerical modeling to simulate dynamic battery performance characteristics for several types of advanced batteries is being conducted using computational fluid dynamics (CFD) techniques. The CFD techniques provide efficient algorithms to solve a large set of highly nonlinear partial differential equations that represent the complex battery behavior governed by coupled electrochemical reactions and transport processes. The authors have recently successfully applied such techniques to model advanced lead-acid, Ni-Cd and Ni-MH cells. In this paper, the authors briefly discuss how the governing equations were numerically implemented, show some preliminary modeling results, and compare them with other modeling or experimental data reported in the literature. The authors describe the advantages and implications of using the CFD techniques and their capabilities in future battery applications.
Accurate lithography simulation model based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki
2017-07-01
Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model, built for fast calculation, is commonly used. To obtain an accurate compact resist model, it is necessary to fit a complicated non-linear model function. However, it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNNs (Convolutional Neural Networks), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.
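As a rough illustration of what such a model can look like (the paper's architecture and inputs are not reproduced here, so the layer sizes and the patch-in/CD-out framing below are assumptions), a small convolutional regressor in PyTorch:

    import torch
    import torch.nn as nn

    class ResistCNN(nn.Module):
        """Toy compact resist model: aerial-image patch in, scalar CD out."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
                nn.Linear(64, 1),  # one CD prediction per patch
            )

        def forward(self, x):      # x: (batch, 1, 64, 64) aerial-image patches
            return self.head(self.features(x))

    model = ResistCNN()
    patch = torch.randn(8, 1, 64, 64)  # dummy aerial-image patches
    print(model(patch).shape)          # -> torch.Size([8, 1])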
Plume Impingement to the Lunar Surface: A Challenging Problem for DSMC
NASA Technical Reports Server (NTRS)
Lumpkin, Forrest; Marichalar, Jermiah; Piplica, Anthony
2007-01-01
The President's Vision for Space Exploration calls for the return of human exploration to the Moon. The plans are ambitious and call for the creation of a lunar outpost. Lunar Landers will therefore be required to land near predeployed hardware, and the dust storm created by the Lunar Lander's plume impingement on the lunar surface presents a hazard. Knowledge of the number density, size distribution, and velocity of the grains in the dust cloud entrained into the flow is needed to develop mitigation strategies. An initial step toward acquiring such knowledge is simulating the associated plume impingement flow field. The following paper presents results from a loosely coupled continuum flow solver/Direct Simulation Monte Carlo (DSMC) technique for simulating the plume impingement of the Apollo Lunar Module on the lunar surface. These cases were chosen for initial study to allow for comparison with available Apollo video. The relatively high engine thrust and the desire to simulate interesting cases near touchdown result in flow that is nearly entirely continuum. The DSMC region of the flow field was simulated using NASA's DSMC Analysis Code (DAC) and must begin upstream of the impingement shock for the loosely coupled technique to succeed. It was therefore impossible to achieve mean-free-path resolution with a reasonable number of molecules (say 100 million), as is shown. In order to mitigate accuracy and performance issues when using such large cells, advanced techniques such as collision limiting and nearest-neighbor collisions were employed. The final paper will assess the benefits and shortcomings of such techniques. In addition, the effects of plume orientation, plume altitude, and lunar topography, such as craters, on the flow field, the surface pressure distribution, and the surface shear stress distribution are presented.
Husak, Gregory J.; Michaelsen, Joel; Kyriakidis, P.; Verdin, James P.; Funk, Chris; Galu, Gideon
2011-01-01
Probabilistic forecasts are produced by a variety of outlets to help predict rainfall, and other meteorological events, for periods of 1 month or more. Such forecasts are expressed as probabilities of a rainfall event, e.g. being in the upper, middle, or lower third of the relevant distribution of rainfall in the region. The impact of these forecasts on the expectation for the event is not always clear or easily conveyed. This article proposes a technique based on Monte Carlo simulation for adjusting existing climatological statistical parameters to match forecast information, resulting in new parameters defining the probability of events for the forecast interval. The resulting parameters are shown to approximate the forecasts with reasonable accuracy. To show the value of the technique as an application for seasonal rainfall, it is used with the consensus forecast developed for the Greater Horn of Africa for the 2009 March-April-May season. An alternative, analytical approach is also proposed and discussed in comparison to the first, simulation-based technique.
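The Monte Carlo idea can be caricatured as follows: fit a climatological distribution, locate its tercile bounds, then nudge the parameters until simulated draws fall in those terciles with the forecast probabilities. The Python sketch below is a one-parameter toy of that idea (the gamma parameters and the 40/35/25 forecast are invented, and a single scale multiplier cannot match all three probabilities exactly):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    shape, scale = 4.0, 50.0  # illustrative climatological gamma fit (mm)
    t1, t2 = stats.gamma.ppf([1 / 3, 2 / 3], shape, scale=scale)  # tercile bounds
    target = np.array([0.40, 0.35, 0.25])  # forecast: below/normal/above

    def tercile_probs(sh, sc, n=200_000):
        x = rng.gamma(sh, sc, n)
        return np.array([(x < t1).mean(),
                         ((x >= t1) & (x < t2)).mean(),
                         (x >= t2).mean()])

    # Crude search over a scale multiplier (i.e., shifting the mean rainfall).
    mults = np.linspace(0.7, 1.3, 61)
    errs = [np.abs(tercile_probs(shape, scale * m) - target).sum() for m in mults]
    best = mults[int(np.argmin(errs))]
    print(f"adjusted scale multiplier ~ {best:.2f};"
          f" probs = {np.round(tercile_probs(shape, scale * best), 3)}")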
Simulation of wind turbine wakes using the actuator line technique.
Sørensen, Jens N; Mikkelsen, Robert F; Henningson, Dan S; Ivanell, Stefan; Sarmast, Sasan; Andersen, Søren J
2015-02-28
The actuator line technique was introduced as a numerical tool to be employed in combination with large eddy simulations to enable the study of wakes and wake interaction in wind farms. The technique is today widely used for studying basic features of wakes as well as for making performance predictions of wind farms. In this paper, we give a short introduction to the wake problem and the actuator line methodology and present a study in which the technique is employed to determine the near-wake properties of wind turbines. The presented results include a comparison with experimental results of the wake characteristics of the flow around a three-bladed model wind turbine, the development of a simple analytical formula for determining the near-wake length behind a wind turbine, and a detailed investigation of wake structures based on proper orthogonal decomposition analysis of numerically generated snapshots of the wake. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
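At its core, the actuator line technique replaces each blade with a line of point forces, computed from the local flow and tabulated airfoil data, that are smeared onto the LES grid by a Gaussian regularization kernel eta_eps(d) = exp(-(d/eps)^2) / (eps^3 * pi^(3/2)). The Python sketch below shows only this force-projection step; the grid and parameter values are illustrative:

    import numpy as np

    def spread_force(grid_xyz, point, force, eps):
        """Distribute one actuator-line segment force onto the grid."""
        d2 = np.sum((grid_xyz - point) ** 2, axis=-1)
        eta = np.exp(-d2 / eps**2) / (eps**3 * np.pi**1.5)  # integrates to 1
        return force * eta[..., None]  # body-force density field

    # Example: one segment force spread over a small grid (eps ~ 2x grid spacing).
    x = np.linspace(0, 10, 32)
    grid = np.stack(np.meshgrid(x, x, x, indexing="ij"), axis=-1)
    body_force = spread_force(grid, point=np.array([5.0, 5.0, 5.0]),
                              force=np.array([0.0, 0.0, 100.0]), eps=0.6)
    print(body_force.shape)  # (32, 32, 32, 3)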
On the application of accelerated molecular dynamics to liquid water simulations.
de Oliveira, César Augusto F; Hamelberg, Donald; McCammon, J Andrew
2006-11-16
Our group recently proposed a robust bias potential function that can be used in an efficient all-atom accelerated molecular dynamics (MD) approach to simulate the transition of high energy barriers without any advance knowledge of the potential-energy landscape. The main idea is to modify the potential-energy surface by adding a bias, or boost, potential in regions close to the local minima, such that all transition rates are increased. By applying the accelerated MD simulation method to liquid water, we observed that this new simulation technique accelerates the molecular motion without losing its microscopic structure and equilibrium properties. Our results showed that the application of a small boost energy on the potential-energy surface significantly reduces the statistical inefficiency of the simulation while keeping all the other calculated properties unchanged. On the other hand, although aggressive acceleration of the dynamics simulation greatly increases the self-diffusion coefficient of water molecules and dramatically reduces the correlation time of the simulation, configurations representative of the true structure of liquid water are poorly sampled. Our results also showed the strength and robustness of this simulation technique, confirming this approach as a very useful and promising tool to extend the time scale of all-atom simulations of biological systems with explicit solvent models. However, we should keep in mind that there is a compromise between the strength of the boost applied in the simulation and the reproduction of the ensemble-average properties.
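The bias potential referred to here is, in the widely used form from this group's earlier work (Hamelberg et al., J. Chem. Phys., 2004), dV(r) = (E - V(r))^2 / (alpha + E - V(r)) whenever V(r) < E and zero otherwise, so basins are raised while barriers above the threshold E are untouched. A minimal Python sketch on a toy double-well potential (E, alpha, and the potential are illustrative):

    import numpy as np
    import matplotlib.pyplot as plt

    def boosted(V, E, alpha):
        """Accelerated-MD modified potential V* = V + dV (boost only below E)."""
        dV = np.where(V < E, (E - V) ** 2 / (alpha + E - V), 0.0)
        return V + dV

    x = np.linspace(-2.2, 2.2, 400)
    V = (x**2 - 1) ** 2 * 5.0  # toy double well, barrier height 5
    plt.plot(x, V, label="V")
    plt.plot(x, boosted(V, E=4.0, alpha=1.0), label="V* (boosted)")
    plt.legend(); plt.xlabel("x"); plt.ylabel("energy"); plt.show()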
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klishin, G.S.; Seleznev, V.E.; Aleoshin, V.V.
1997-12-31
Gas industry enterprises such as main pipelines, compressor gas transfer stations, and gas extraction complexes belong to the energy intensive industry. Accidents there can result in catastrophes and great social, environmental and economic losses. Annually, according to official data, several dozen large pipeline accidents take place in the USA and Russia. That is why prevention of accidents, analysis of the mechanisms of their development, and prediction of their possible consequences are acute and important tasks nowadays. The causes of accidents are usually of a complicated character and can be presented as a complex combination of natural, technical and human factors. Mathematical and computer simulations are safe, rather effective and comparatively inexpensive methods of accident analysis. They make it possible to analyze different mechanisms of a failure's occurrence and development, to assess its consequences, and to give recommendations to prevent it. Besides the investigation of failure cases, numerical simulation techniques play an important role in the treatment of diagnostic results for the objects and in the further construction of mathematical prognostic simulations of object behavior in the period between two inspections. In solving diagnostics tasks and in the analysis of failure cases, the techniques of theoretical mechanics, the qualitative theory of differential equations, continuum mechanics, chemical macro-kinetics and optimization are implemented in the Conversion Design Bureau #5 (DB#5). Both universal and special numerical techniques and software (SW) are being developed in DB#5 for the solution of such tasks. Almost all of them are calibrated against calculations of simulated and full-scale experiments performed at the VNIIEF and MINATOM testing sites. It is worth noting that over long years of work a fruitful and effective collaboration of theoreticians, mathematicians and experimentalists of the institute has been established to solve such tasks.
NASA Astrophysics Data System (ADS)
Larsen, J. D.; Schaap, M. G.
2013-12-01
Recent advances in computing technology and experimental techniques have made it possible to observe and characterize fluid dynamics at the micro-scale. Many computational methods exist that can adequately simulate fluid flow in porous media. Lattice Boltzmann methods provide the distinct advantage of tracking particles at the microscopic level and returning macroscopic observations. While experimental methods can accurately measure macroscopic fluid dynamics, computational efforts can be used to predict and gain insight into fluid dynamics by utilizing thin sections or computed micro-tomography (CMT) images of core sections. Although substantial efforts have been made to advance non-invasive imaging methods such as CMT, fluid dynamics simulations, and microscale analysis, a true three-dimensional image segmentation technique has not been developed until recently. Many competing segmentation techniques are utilized in industry and research settings with varying results. In this study, the lattice Boltzmann method is used to simulate Stokes flow in a macroporous soil column. Two-dimensional CMT images were used to reconstruct a three-dimensional representation of the original sample. Six competing segmentation standards were used to binarize the CMT volumes, providing the distinction between solid phase and pore space. The permeability of the reconstructed samples was calculated, with Darcy's law, from lattice Boltzmann simulations of fluid flow in the samples. We compare the simulated permeability from the differing segmentation algorithms to experimental findings.
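As a concrete illustration of the last step, the sketch below computes a Darcy permeability from a converged lattice Boltzmann velocity field, assuming the common body-force-driven setup; the synthetic geometry and all parameter values are placeholders, not the study's data.

```python
import numpy as np

def permeability(ux, nu, g, dx):
    """Darcy permeability from a body-force-driven LBM flow field.

    ux : streamwise velocity on the full grid, zero inside solids (lattice units)
    nu : lattice kinematic viscosity
    g  : body-force acceleration driving the flow (lattice units)
    dx : physical voxel size (m)
    """
    u_sup = ux.mean()            # superficial (Darcy) velocity over the whole volume
    k_lattice = u_sup * nu / g   # Darcy's law rearranged, in lattice units squared
    return k_lattice * dx**2     # convert to m^2

# Synthetic example: a 64^3 binarized volume with 40% porosity.
rng = np.random.default_rng(1)
solid = rng.random((64, 64, 64)) > 0.4
ux = np.where(solid, 0.0, 1e-4)  # stand-in for a converged LBM solution
print(permeability(ux, nu=1 / 6, g=1e-6, dx=5e-6), "m^2")
```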
Clinical validation of robot simulation of toothbrushing--comparative plaque removal efficacy.
Lang, Tomas; Staufer, Sebastian; Jennes, Barbara; Gaengler, Peter
2014-07-04
Clinical validation of laboratory toothbrushing tests has important advantages. It was, therefore, the aim to demonstrate the correlation of the tooth cleaning efficiency of a new robot brushing simulation technique with clinical plaque removal. Clinical programme: 27 subjects received dental cleaning prior to a 3-day plaque-regrowth interval. Plaque was stained, photographically documented and scored using a planimetrical index. Subjects brushed teeth 33-47 with three techniques (horizontal, rotating, vertical), each for 20 s buccally and 20 s orally in 3 consecutive intervals. The force was calibrated, and the brushing technique was video supported. Two different brushes were randomly assigned to each subject. Robot programme: the clinical brushing programmes were transferred to a 6-axis robot. Artificial teeth 33-47 were covered with a plaque-simulating substrate. All brushing techniques were repeated 7 times, and the results were scored according to clinical planimetry. All data underwent statistical analysis by t-test, U-test and multivariate analysis. The individual clinical cleaning patterns are well reproduced by the robot programmes. Differences in plaque removal are statistically significant for the two brushes, reproduced in both clinical and robot data. Multivariate analysis confirms the higher cleaning efficiency for anterior teeth and for the buccal sites. The robot tooth brushing simulation programme showed good correlation with clinically standardized tooth brushing. This new robot brushing simulation programme can be used for rapid, reproducible laboratory testing of tooth cleaning.
Automated parameterization of intermolecular pair potentials using global optimization techniques
NASA Astrophysics Data System (ADS)
Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk
2014-12-01
In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
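To illustrate the problem framing (not the CoSMoS workflow itself), the sketch below drives one of the compared algorithms, differential evolution, against a cheap stand-in objective; in practice simulate_observables would launch a full MD or MC simulation, and the targets, weights, and bounds here are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

targets = np.array([0.997, 40.7])        # e.g. density (g/cm^3), heat of vaporization (kJ/mol)
weights = np.array([1.0, 0.01])

def simulate_observables(params):
    """Cheap stand-in for an expensive simulation returning observables
    for Lennard-Jones-like parameters (sigma, epsilon)."""
    sigma, epsilon = params
    return np.array([0.3 * epsilon / sigma, 12.0 * epsilon + 2.0 * sigma])

def loss(params):
    obs = simulate_observables(params)
    return np.sum(weights * (obs - targets) ** 2)

result = differential_evolution(loss, bounds=[(2.5, 4.0), (0.1, 5.0)],
                                seed=1, maxiter=50, polish=False)
print(result.x, result.fun)
```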
Simulation of FRET dyes allows quantitative comparison against experimental data
NASA Astrophysics Data System (ADS)
Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander
2018-03-01
Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only a few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies which quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other, leading to new insights into biomolecular dynamics and function.
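The final step, turning simulated dye positions into a FRET efficiency, follows the standard Förster relation. Below is a minimal sketch of that calculation; the Förster radius and the synthetic trajectories are placeholders, not the paper's dye parameters.

```python
import numpy as np

def fret_efficiency(donor_xyz, acceptor_xyz, R0=5.4):
    """Per-frame efficiency E = 1 / (1 + (r/R0)^6), averaged over frames.
    R0 is the Foerster radius (nm) for the chosen dye pair."""
    r = np.linalg.norm(donor_xyz - acceptor_xyz, axis=1)  # nm, one value per frame
    return np.mean(1.0 / (1.0 + (r / R0) ** 6))

# Synthetic stand-in for dye positions along a 1000-frame trajectory.
rng = np.random.default_rng(2)
donor = rng.normal(0.0, 0.3, (1000, 3))
acceptor = rng.normal([4.0, 0.0, 0.0], 0.3, (1000, 3))
print(fret_efficiency(donor, acceptor))
```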
Robust Nonlinear Feedback Control of Aircraft Propulsion Systems
NASA Technical Reports Server (NTRS)
Garrard, William L.; Balas, Gary J.; Litt, Jonathan (Technical Monitor)
2001-01-01
This is the final report on the research performed under NASA Glenn grant NASA/NAG-3-1975 concerning feedback control of the Pratt & Whitney (PW) STF 952, a twin spool, mixed flow, afterburning turbofan engine. The research focused on the design of linear and gain-scheduled, multivariable inner-loop controllers for the PW turbofan engine using H-infinity and linear parameter-varying (LPV) control techniques. The nonlinear turbofan engine simulation was provided by PW within the NASA Rocket Engine Transient Simulator (ROCETS) simulation software environment. ROCETS was used to generate linearized models of the turbofan engine for control design and analysis, as well as the simulation environment to evaluate the performance and robustness of the controllers. Comparisons are made between the H-infinity and LPV controllers and the baseline multivariable controller developed by Pratt & Whitney engineers that is included in the ROCETS simulation. Simulation results indicate that the H-infinity and LPV techniques effectively achieve desired response characteristics with minimal cross coupling between commanded values and are very robust to unmodeled dynamics and sensor noise.
Sui, Yuan; Pan, Jun J; Qin, Hong; Liu, Hao; Lu, Yun
2017-12-01
Laparoscopic surgery (LS), also referred to as minimally invasive surgery, is a modern surgical technique which is widely applied. The fulcrum effect makes LS a non-intuitive motor skill with a steep learning curve. To support training, a hybrid model of tetrahedrons and a multi-layer triangular mesh is constructed to simulate the deformable behavior of the rectum and surrounding tissues in the Position-Based Dynamics (PBD) framework. A heat-conduction based electric-burn technique is employed to simulate the electrocautery procedure. The simulator has been applied for laparoscopic rectum cancer surgery training. From the experimental results, trainees can operate in real time with high degrees of stability and fidelity. A preliminary study was performed to evaluate its realism and usefulness. This prototype simulator has been tested and verified by colorectal surgeons through a pilot study. They believed both the visual and the haptic performance of the simulation are realistic and helpful for enhancing laparoscopic skills.
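As background on the deformation model, Position-Based Dynamics advances positions by prediction followed by iterative constraint projection. The sketch below shows one PBD step with a single distance constraint; the time step, stiffness, and masses are illustrative, and the paper's tetrahedral and multi-layer mesh constraints are more elaborate.

```python
import numpy as np

def pbd_step(x, v, edges, rest_len, inv_mass, dt=1e-2, iters=10, k=0.9):
    """x: (n,3) positions, v: (n,3) velocities, edges: index pairs to constrain."""
    p = x + dt * v                               # predict positions
    for _ in range(iters):                       # Gauss-Seidel constraint solve
        for (i, j), L0 in zip(edges, rest_len):
            d = p[i] - p[j]
            dist = np.linalg.norm(d)
            if dist < 1e-12:
                continue
            w = inv_mass[i] + inv_mass[j]
            corr = k * (dist - L0) / (dist * w) * d
            p[i] -= inv_mass[i] * corr           # project both endpoints
            p[j] += inv_mass[j] * corr
    v = (p - x) / dt                             # derive velocities from positions
    return p, v

x = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
v = np.zeros_like(x)
x, v = pbd_step(x, v, edges=[(0, 1)], rest_len=[1.0],
                inv_mass=np.array([1.0, 1.0]))
print(x)
```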
NASA Astrophysics Data System (ADS)
Jamilah, It; Priyani, Nunuk; Lusia Natalia, Santa
2018-03-01
Lactic acid bacteria (LAB) have been added to various food products as probiotic agents because they are known to provide beneficial health effects in humans. In applications of LAB, cell viability often decreases under environmental stresses. Encapsulation is a cell-protection technique that uses a coating material; an effective coating material is required to produce maximum protection of LAB cells. In this study, a candidate probiotic LAB (isolate US7) was encapsulated with alginate-mung bean flour and alginate-gram flour with inulin prebiotic by the extrusion technique. Encapsulated LAB cells remained viable at up to 10⁸ CFU g⁻¹ after 4 weeks of storage at 4 °C. Beads were incubated in simulated gastric fluid (pH = 2) for 2 h and simulated intestinal fluid (pH = 6) for 3 h at 37 °C. The results showed that encapsulated LAB cells maintained a survival rate of 97%, with cell counts of 9.07 log CFU g⁻¹, in the simulated gastric fluid, followed by release of the cells in simulated intestinal fluid. In general, this study indicates that encapsulation with alginate-mung bean flour and alginate-gram flour with inulin successfully protects probiotic bacteria against simulated human gastrointestinal conditions.
Translating the Simulation of Procedural Drilling Techniques for Interactive Neurosurgical Training
Stredney, Don; Rezai, Ali R.; Prevedello, Daniel M.; Elder, J. Bradley; Kerwin, Thomas; Hittle, Bradley; Wiet, Gregory J.
2014-01-01
Background: Through previous and concurrent efforts, we have developed a fully virtual environment to provide procedural training of otologic surgical technique. The virtual environment is based on high-resolution volumetric data of the regional anatomy. This volumetric data helps drive an interactive multi-sensory, i.e., visual (stereo), aural (stereo), and tactile simulation environment. Subsequently, we have extended our efforts to support the training of neurosurgical procedural technique as part of the CNS simulation initiative. Objective: The goal of this multi-level development is to deliberately study the integration of simulation technologies into the neurosurgical curriculum and to determine their efficacy in teaching minimally invasive cranial and skull base approaches. Methods: We discuss issues of biofidelity as well as our methods to provide objective, quantitative automated assessment for the residents. Results: We conclude with a discussion of our experiences by reporting on preliminary formative pilot studies and proposed approaches to take the simulation to the next level through additional validation studies. Conclusion: We have presented our efforts to translate an otologic simulation environment for use in the neurosurgical curriculum. We have demonstrated the initial proof of principle and defined the steps to integrate and validate the system as an adjuvant to the neurosurgical curriculum. PMID:24051887
NASA Technical Reports Server (NTRS)
Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.
1975-01-01
Techniques and support software for the efficient performance of simulation validation are discussed. Overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check case formulation, methods for real-time acquisition and formatting of data from an all-up operational simulator, and methods and criteria for comparison and evaluation of simulation data are included. Vehicle subsystem modules, module integration, special test requirements, and reference data formats are also described.
Recent advances in lossless coding techniques
NASA Astrophysics Data System (ADS)
Yovanof, Gregory S.
Current lossless techniques are reviewed with reference to both sequential data files and still images. Two major groups of sequential algorithms, dictionary and statistical techniques, are discussed. In particular, attention is given to Lempel-Ziv coding, Huffman coding, and arithmetic coding. The subject of lossless compression of imagery is briefly discussed. Finally, examples of practical implementations of lossless algorithms and some simulation results are given.
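Of the statistical techniques named above, Huffman coding is the simplest to sketch: a prefix code is built by repeatedly merging the two least frequent subtrees. The following minimal sketch uses illustrative symbol counts.

```python
import heapq
from collections import Counter

def huffman_codes(freqs):
    """Build a prefix code from {symbol: count} via a min-heap of subtrees."""
    heap = [[n, i, sym] for i, (sym, n) in enumerate(freqs.items())]
    heapq.heapify(heap)
    next_id = len(heap)                        # unique ids avoid comparing payloads
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], next_id, (lo, hi)])
        next_id += 1
    codes = {}
    def walk(node, prefix):
        payload = node[2]
        if isinstance(payload, tuple):         # internal node: recurse on children
            walk(payload[0], prefix + "0")
            walk(payload[1], prefix + "1")
        else:                                  # leaf: assign the accumulated bits
            codes[payload] = prefix or "0"
    walk(heap[0], "")
    return codes

print(huffman_codes(Counter("abracadabra")))
```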
New simulation model of multicomponent crystal growth and inhibition.
Wathen, Brent; Kuiper, Michael; Walker, Virginia; Jia, Zongchao
2004-04-02
We review a novel computational model for the study of crystal structures both on their own and in conjunction with inhibitor molecules. The model advances existing Monte Carlo (MC) simulation techniques by extending them from modeling 3D crystal surface patches to modeling entire 3D crystals, and by including the use of "complex" multicomponent molecules within the simulations. These advances make it possible to incorporate the 3D shape and non-uniform surface properties of inhibitors into simulations, and to study what effect these inhibitor properties have on the growth of whole crystals containing up to tens of millions of molecules. The application of this extended MC model to the study of antifreeze proteins (AFPs) and their effects on ice formation is reported, including the success of the technique in achieving AFP-induced ice-growth inhibition with concurrent changes to ice morphology that mimic experimental results. Simulations of ice-growth inhibition suggest that the degree of inhibition afforded by an AFP is a function of its ice-binding position relative to the underlying anisotropic growth pattern of ice. This extended MC technique is applicable to other crystal and crystal-inhibitor systems, including more complex crystal systems such as clathrates.
Modelling and simulation of a heat exchanger
NASA Technical Reports Server (NTRS)
Xia, Lei; Deabreu-Garcia, J. Alex; Hartley, Tom T.
1991-01-01
Two models for two different control systems are developed for a parallel heat exchanger. First, by spatially lumping the heat exchanger model, a good approximate model of high system order is produced. Model reduction techniques are then applied to this model to obtain low-order models that are suitable for dynamic analysis and control design. The simulation method is discussed to ensure a valid simulation result.
NASA Astrophysics Data System (ADS)
Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre
2010-06-01
The use of material processing numerical simulation allows a strategy of trial and error to improve virtual processes without incurring material costs or interrupting production, and can therefore save a lot of money, but it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. An evolutionary algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners have been selected to cover the different areas of the mechanical forging industry and to provide different examples for the forming simulation tools. The large computational time is handled by a metamodel approach, which allows interpolating the objective function over the entire parameter space while knowing the exact function values only at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a kriging metamodel, and a genetic algorithm combined with a meshless finite difference method. The latter approach is extended to multi-objective optimization: the set of solutions, which corresponds to the best possible compromises between the different objectives, is computed in the same way. The population-based approach allows using the parallel capabilities of the utilized computer with high efficiency. An optimization module, fully embedded within the Forge2009 user interface, makes it possible to cover all the defined examples, and the use of new multi-core hardware to compute several simulations at the same time reduces the needed time dramatically. The presented examples demonstrate the method's versatility; they include billet shape optimization of a common rail, the cogging of a bar, and a wire drawing problem.
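The metamodel loop described above can be sketched as follows, assuming a kriging (Gaussian process) surrogate and a cheap stand-in for the forging simulation; none of the names correspond to the Forge2009 interface, and the acquisition rule here is a deliberately simple lower-confidence-bound heuristic rather than the paper's algorithms.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def forming_simulation(x):
    """Cheap stand-in for one expensive FE forging run returning an objective."""
    return (x[0] - 0.3) ** 2 + 0.1 * np.sin(5 * x[1])

rng = np.random.default_rng(3)
X = rng.random((8, 2))                         # initial "master points"
y = np.array([forming_simulation(x) for x in X])

for _ in range(20):                            # a few tens of simulations in total
    gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, y)
    cand = rng.random((2000, 2))               # cheap search on the metamodel
    mu, sd = gp.predict(cand, return_std=True)
    x_new = cand[np.argmin(mu - sd)]           # lower confidence bound: exploit + explore
    X = np.vstack([X, x_new])
    y = np.append(y, forming_simulation(x_new))

print(X[np.argmin(y)], y.min())
```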
Design and Simulation of Horn Antenna Using CST Software for GPR System
NASA Astrophysics Data System (ADS)
Joret, Ariffuddin; Sulong, M. S.; Abdullah, M. F. L.; Madun, Aziman; Haimi Dahlan, Samsul
2018-04-01
Detection of underground objects can be made using a GPR system. This system is classified as a non-destructive technique (NDT) in which the ground need not be excavated. The technique used by the GPR system is to measure the reflection of an electromagnetic wave signal produced and detected by antennas known as the transmitter and receiver antennas. In this study, a GPR system was investigated by means of simulation using a Horn antenna as a transceiver antenna. The electromagnetic wave signal in this simulation is produced by an antenna current shaped as a modulated Gaussian pulse with a spectrum from 8 GHz to 12 GHz. The CST and MATLAB software packages are used in this GPR system simulation. A model of a Horn antenna was designed using the CST software, and the GPR system simulation was then modeled by adding a model of the background in front of the Horn antenna. The simulation results show that the output signal of the Horn antenna can be used in detecting embedded objects made of wood and iron. In addition, the simulation results were used to develop a 3D model image of the GPR system from the output signal of the Horn antenna. The iron object embedded in the GPR system simulation can be seen clearly in this 3D image.
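The excitation described above, a modulated Gaussian pulse spanning roughly 8-12 GHz, can be generated as in the sketch below (here with Python/SciPy rather than MATLAB); the sample rate and window length are illustrative choices.

```python
import numpy as np
from scipy.signal import gausspulse

fs = 100e9                                   # 100 GS/s sampling (illustrative)
t = np.arange(-0.5e-9, 0.5e-9, 1 / fs)       # 1 ns window
pulse = gausspulse(t, fc=10e9, bw=0.4)       # 10 GHz centre, ~40% fractional bandwidth

# Verify the spectrum peaks near the 10 GHz centre frequency.
spectrum = np.abs(np.fft.rfft(pulse))
freqs = np.fft.rfftfreq(len(pulse), 1 / fs)
print(freqs[np.argmax(spectrum)] / 1e9, "GHz peak")
```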
NASA Astrophysics Data System (ADS)
Guidi, Giovanni; Scannapieco, Cecilia; Walcher, C. Jakob
2015-12-01
We study the sources of biases and systematics in the derivation of galaxy properties from observational studies, focusing on stellar masses, star formation rates, gas and stellar metallicities, stellar ages, magnitudes and colours. We use hydrodynamical cosmological simulations of galaxy formation, for which the real quantities are known, and apply observational techniques to derive the observables. We also analyse biases that are relevant for a proper comparison between simulations and observations. For our study, we post-process the simulation outputs to calculate the galaxies' spectral energy distributions (SEDs) using stellar population synthesis models, and also generate the fully consistent far-UV-submillimetre wavelength SEDs with the radiative transfer code SUNRISE. We compare the direct results of the simulations with the observationally derived quantities obtained in various ways, and find that systematic differences in all studied galaxy properties appear, caused by: (1) purely observational biases, (2) the use of mass-weighted and luminosity-weighted quantities, with preferential sampling of more massive and luminous regions, (3) the different ways of constructing the template of models when a fit to the spectra is performed, and (4) variations due to different calibrations, most notably for gas metallicities and star formation rates. Our results show that large differences can appear depending on the technique used to derive galaxy properties. Understanding these differences is of primary importance both for simulators, to allow a better judgement of similarities and differences with observations, and for observers, to allow a proper interpretation of the data.
NASA Astrophysics Data System (ADS)
Tu, H.-Yu.; Tasneem, Sarah
Most modern microprocessors employ on-chip cache memories to meet the memory bandwidth demand. These caches now occupy a large share of chip real estate. Also, continuous down-scaling of transistors increases the possibility of defects in the cache area, which already occupies more than 50% of the chip area. For this reason, various techniques have been proposed to tolerate defects in cache blocks. These techniques can be classified into three categories, namely, cache line disabling, replacement with spare blocks, and decoder reconfiguration without spare blocks. This chapter examines each of those fault-tolerant techniques with a fixed typical size and organization of L1 cache, through extended simulation of the individual techniques using the SPEC2000 benchmark. The design and characteristics of each technique are summarized with a view to evaluating the scheme. We then present our simulation results and a comparative study of the three different methods.
Aerodynamic force measurement on a large-scale model in a short duration test facility
NASA Astrophysics Data System (ADS)
Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.
2005-03-01
A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations from the natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 μs is guaranteed during measurements, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
Evolution of surface characteristics in material removal simulation with subaperture tools
NASA Astrophysics Data System (ADS)
Kim, Sug-Whan; Jee, Myung-Kook
2002-02-01
Over the last decade, the fabrication of optics in the 200-2000 mm scale has received relatively little attention in fabrication technology development, compared with optics smaller than 200 mm or larger than 2000 mm in diameter. As a result, optical surfaces at these scales are still predominantly completed by small optics shops where opticians apply traditional polishing techniques. The lack of tools to aid opticians in planning, executing and analyzing their polishing work is a root cause of long and sometimes unpredictable delivery times and high manufacturing costs for such optical surfaces. We present the on-going development of a software simulation environment called the Surface Analysis and Fabrication Environment (SAFE). It is primarily intended to increase the throughput of polishing and testing cycles by allowing opticians to simulate the resulting surface form and roughness from input polishing variables. A brief review of current polishing techniques and their target optics clarifies the need for such a simulation tool. This is followed by the development targets and a preliminary simulation plan using the developmental version of SAFE. Among many polishing variables, two removal assumptions and three different types of removal functions were used for the polishing simulations presented. The simulations show that the Gaussian removal function with the proportional removal assumption resulted in the fastest, though marginal, convergence to a super-polished surface of 0.56 micron Peak-to-Valley in form accuracy and 0.02 nanometer in surface roughness Ra. Other meaningful results and their implications are also presented.
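A minimal version of such a removal simulation, assuming Preston's law and a Gaussian removal function, is sketched below; the dwell-time map and all parameter values are illustrative, not those of SAFE.

```python
import numpy as np
from scipy.signal import fftconvolve

n, dx = 256, 1e-3                          # surface grid, 1 mm pixels
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)

surface = 1e-6 * np.exp(-(X**2 + Y**2) / 0.05**2)   # a 1 um high bump (m)

# Gaussian tool influence function: depth removed per second of dwell,
# scaled by Preston's law (removal rate = k_p * pressure * speed).
k_p, P, V, w = 1e-13, 2e4, 0.5, 0.01       # Preston coeff, Pa, m/s, tool width (m)
tif = k_p * P * V * np.exp(-(X**2 + Y**2) / w**2)

dwell = 10.0 * surface / surface.max()     # naive dwell map: linger on high spots
removal = fftconvolve(dwell, tif, mode="same")      # predicted material removal
residual = surface - removal
print(f"PV before {np.ptp(surface):.2e} m, after {np.ptp(residual):.2e} m")
```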
Sellers, Michael S; Lísal, Martin; Brennan, John K
2016-03-21
We present an extension of various free-energy methodologies to determine the chemical potential of the solid and liquid phases of a fully-flexible molecule using classical simulation. The methods are applied to the Smith-Bharadwaj atomistic potential representation of cyclotrimethylene trinitramine (RDX), a well-studied energetic material, to accurately determine the solid and liquid phase Gibbs free energies and the melting point (Tm). We outline an efficient technique to find the absolute chemical potential and melting point of a fully-flexible molecule using one set of simulations to compute the solid absolute chemical potential and one set of simulations to compute the solid-liquid free energy difference. With this combination, only a handful of simulations are needed, whereby the absolute values of the chemical potentials are obtained for use in other property calculations, such as the characterization of crystal polymorphs or the determination of the entropy. Using the LAMMPS molecular simulator, the Frenkel-Ladd and pseudo-supercritical path techniques are adapted to generate third-order fits of the solid and liquid chemical potentials. Results yield the thermodynamic melting point Tm = 488.75 K at 1.0 atm. We also validate these calculations and compare this melting point to one obtained from a typical superheated simulation technique.
A Method for Generating Reduced-Order Linear Models of Multidimensional Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1998-01-01
Simulation of high speed propulsion systems may be divided into two categories, nonlinear and linear. The nonlinear simulations are usually based on multidimensional computational fluid dynamics (CFD) methodologies and tend to provide high resolution results that show the fine detail of the flow. Consequently, these simulations are large, numerically intensive, and run much slower than real-time. The linear simulations are usually based on large lumping techniques that are linearized about a steady-state operating condition. These simplistic models often run at or near real-time but do not always capture the detailed dynamics of the plant. Under a grant sponsored by the NASA Lewis Research Center, Cleveland, Ohio, a new method has been developed that can be used to generate improved linear models for control design from multidimensional steady-state CFD results. This CFD-based linear modeling technique provides a small perturbation model that can be used for control applications and real-time simulations. It is important to note the utility of the modeling procedure: all that is needed to obtain a linear model of the propulsion system is the geometry and steady-state operating conditions from a multidimensional CFD simulation or experiment. This research represents a beginning step in establishing a bridge between the controls discipline and the CFD discipline so that the control engineer is able to effectively use multidimensional CFD results in control system design and analysis.
Accurate Monitoring and Fault Detection in Wind Measuring Devices through Wireless Sensor Networks
Khan, Komal Saifullah; Tariq, Muhammad
2014-01-01
Many wind energy projects report poor performance, as low as 60% of the predicted performance. The reason for this is poor resource assessment and the use of new untested technologies and systems in remote locations. Predictions about the potential of an area for wind energy projects (through simulated models) may vary from the actual potential of the area. Hence, introducing accurate site assessment techniques will lead to accurate predictions of energy production from a particular area. We address this problem by installing a Wireless Sensor Network (WSN) to periodically analyze the data from anemometers installed in the area. The anemometers transmit their readings through the WSN to the sink node for comparative analysis. The sink node uses an iterative algorithm that sequentially detects any faulty anemometer and passes the details of the fault to the central system or main station. We apply the proposed technique in simulation as well as in a practical implementation, and study its accuracy by comparing the simulation results with experimental results. Simulation results show that the algorithm indicates faulty anemometers with high accuracy and a low false alarm rate even when as many as 25% of the anemometers become faulty. Experimental analysis shows that anemometers incorporating this solution are better assessed, and the performance level of implemented projects is increased above 86% of that of the simulated models. PMID:25421739
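The published algorithm's details are not reproduced here, but the sink-node idea of sequentially isolating inconsistent sensors can be sketched as a median-consensus test; the tolerance, the stopping rule, and the synthetic data below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def find_faulty(readings, tol=1.0):
    """readings: (sensors, samples) wind speeds gathered over one period.
    Repeatedly flags the sensor deviating most from the median of the
    remaining sensors until every remaining sensor passes the tolerance."""
    active = list(range(readings.shape[0]))
    faulty = []
    while len(active) > 2:
        med = np.median(readings[active], axis=0)          # consensus signal
        resid = np.abs(readings[active] - med).mean(axis=1)
        worst = int(np.argmax(resid))
        if resid[worst] < tol:
            break                                          # all consistent
        faulty.append(active.pop(worst))
    return faulty

rng = np.random.default_rng(4)
data = rng.normal(8.0, 0.5, (8, 100))   # eight anemometers, 100 samples
data[3] += 4.0                          # one biased (faulty) sensor
print(find_faulty(data))                # expected: [3]
```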
CFD simulation of liquid-liquid dispersions in a stirred tank bioreactor
NASA Astrophysics Data System (ADS)
Gelves, R.
2013-10-01
In this paper, simulations were developed to allow the examination of drop sizes in liquid-liquid dispersions (oil-water) in a stirred tank bioreactor using computational fluid dynamics (CFD). The effects of turbulence, rotating flow and drop breakage were simulated by using the k-ε model, the Multiple Reference Frame (MRF) approach and a Population Balance Model (PBM), respectively. The numerical results from different operational conditions are compared with experimental data obtained from an endoscope technique, and good agreement is achieved. Motivated by these simulated and experimental results, CFD simulation qualifies as a very promising tool for predicting hydrodynamics and drop sizes, especially useful for liquid-liquid applications characterized by the challenging problem of emulsion stability due to undesired drop sizes.
A histogram-based technique for rapid vector extraction from PIV photographs
NASA Technical Reports Server (NTRS)
Humphreys, William M., Jr.
1991-01-01
A new analysis technique, performed totally in the image plane, is proposed which rapidly extracts all available vectors from individual interrogation regions on PIV photographs. The technique avoids the need for using Fourier transforms with the associated computational burden. The data acquisition and analysis procedure is described, and results of a preliminary simulation study to evaluate the accuracy of the technique are presented. Recently obtained PIV photographs are analyzed.
Supercontinuum generation and analysis in extruded suspended-core As2S3 chalcogenide fibers
NASA Astrophysics Data System (ADS)
Si, Nian; Sun, Lihong; Zhao, Zheming; Wang, Xunsi; Zhu, Qingde; Zhang, Peiqing; Liu, Shuo; Pan, Zhanghao; Liu, Zijun; Dai, Shixun; Nie, Qiuhua
2018-02-01
Compared with the traditional fluoride fibers and tellurite fibers that can work in the near-infrared region, suspended-core fibers based on chalcogenide glasses have wider transmission regions and higher nonlinear coefficients, so mid-infrared supercontinuum generation can be achieved more easily. Rather than adopting the traditional fabrication technique of hole-drilling and air filling, we adopted a novel extrusion technique to fabricate As2S3 suspended-core fibers with four holes, and their mid-infrared supercontinuum generation was investigated systematically by integrating theoretical simulation and empirical results. The generalized nonlinear Schrödinger equation was used to simulate supercontinuum generation in the As2S3 suspended-core fibers. The simulated supercontinuum generation in the As2S3 suspended-core fibers with different pump wavelengths (2-5 µm), increasing powers (0.3-4 kW), and various fiber lengths (1-50 cm) was computed in MATLAB. The experimental results of supercontinuum generation via femtosecond optical parametric amplification (OPA) were recorded by changing fiber lengths (5-25 cm), pump wavelengths (2.9-5 µm), and pump powers (10-200 kW). The simulated spectra are consistent with the experimental results of supercontinuum generation provided the fiber loss is sufficiently low.
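The propagation model reduces, in its simplest form, to a split-step Fourier integration of the nonlinear Schrödinger equation. The sketch below solves only the scalar NLSE with second-order dispersion and Kerr nonlinearity (the generalized equation used in the paper adds higher-order dispersion, self-steepening, Raman response and loss); all parameter values are illustrative, not the measured As2S3 fiber data.

```python
import numpy as np

N, T = 2**12, 10e-12                        # samples, ~10 ps time window
t = (np.arange(N) - N / 2) * (T / N)
w = 2 * np.pi * np.fft.fftfreq(N, T / N)    # angular frequency grid

beta2 = -1.0e-26                            # s^2/m, anomalous dispersion (illustrative)
gamma = 2.0                                 # 1/(W m), nonlinear coefficient (illustrative)
L, steps = 0.05, 2000                       # 5 cm of fiber
dz = L / steps

A = np.sqrt(1e3) / np.cosh(t / 100e-15)     # 100 fs sech pulse, 1 kW peak power

half_disp = np.exp(0.5j * (beta2 / 2) * w**2 * dz)  # half-step dispersion operator

for _ in range(steps):                      # symmetric split-step Fourier loop
    A = np.fft.ifft(half_disp * np.fft.fft(A))
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)     # full nonlinear (Kerr) step
    A = np.fft.ifft(half_disp * np.fft.fft(A))

spectrum = np.abs(np.fft.fftshift(np.fft.fft(A)))**2  # broadened output spectrum
```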
Numerical simulation of the control of the three-dimensional transition process in boundary layers
NASA Technical Reports Server (NTRS)
Kral, L. D.; Fasel, H. F.
1990-01-01
Surface heating techniques to control the three-dimensional laminar-turbulent transition process are numerically investigated for a water boundary layer. The Navier-Stokes and energy equations are solved using a fully implicit finite difference/spectral method. The spatially evolving boundary layer is simulated. Results of both passive and active methods of control are shown for small amplitude two-dimensional and three-dimensional disturbance waves. Control is also applied to the early stages of the secondary instability process using passive or active control techniques.
Theoretical and simulated performance for a novel frequency estimation technique
NASA Technical Reports Server (NTRS)
Crozier, Stewart N.
1993-01-01
A low complexity, open-loop, discrete-time, delay-multiply-average (DMA) technique for estimating the frequency offset of digitally modulated MPSK signals is investigated. A nonlinearity is used to remove the MPSK modulation and generate the carrier component to be extracted. Theoretical and simulated performance results are presented and compared to the Cramer-Rao lower bound (CRLB) for the variance of the frequency estimation error. For all signal-to-noise ratios (SNRs) above threshold, it is shown that the CRLB can essentially be achieved with linear complexity.
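The DMA estimator admits a compact sketch: an M-th power nonlinearity wipes off the PSK modulation, a delayed conjugate product concentrates the offset in a phase angle, and averaging suppresses noise. The QPSK parameters, delay, and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
M, n = 4, 4096                             # QPSK, number of samples
f_off = 0.002                              # true offset (cycles/sample)

symbols = np.exp(1j * (np.pi / 2) * rng.integers(0, M, n) + 1j * np.pi / 4)
x = symbols * np.exp(2j * np.pi * f_off * np.arange(n))
x += 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))   # channel noise

y = x**M                                   # M-th power removes MPSK modulation
D = 16                                     # delay in samples (must keep M*f_off*D < 1/2)
z = y[D:] * np.conj(y[:-D])                # delay-multiply
f_hat = np.angle(z.mean()) / (2 * np.pi * M * D)   # average, then extract phase
print(f_hat, "vs true", f_off)
```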
Streambank response to simulated grazing
Warren P. Clary; John W. Kinney
2000-01-01
Simulated grazing techniques were used to investigate livestock impacts on structural characteristics of streambanks. The treatments consisted of no grazing, moderate early summer grazing, moderate mid-summer grazing, and heavy season-long grazing. The heavy season-long treatment resulted in an 11.5 cm depression of the streambank surface, while the moderate treatments...
Software Partitioning Schemes for Advanced Simulation Computer Systems. Final Report.
ERIC Educational Resources Information Center
Clymer, S. J.
Conducted to design software partitioning techniques for use by the Air Force to partition a large flight simulator program for optimal execution on alternative configurations, this study resulted in a mathematical model which defines characteristics for an optimal partition, and a manually demonstrated partitioning algorithm design which…
Simulation of aerosol flow interaction with a solid body on molecular level
NASA Astrophysics Data System (ADS)
Amelyushkin, Ivan A.; Stasenko, Albert L.
2018-05-01
Physico-mathematical models and a numerical algorithm for two-phase flow interaction with a solid body are developed. Results of molecular-level simulations, obtained via the molecular dynamics technique, of droplet motion and impingement upon a rough surface in a real gas boundary layer are presented.
Assessment of simulation fidelity using measurements of piloting technique in flight
NASA Technical Reports Server (NTRS)
Ferguson, S. W.; Clement, W. F.; Cleveland, W. B.; Key, D. L.
1984-01-01
The U.S. Army and NASA have undertaken the systematic validation of a ground-based piloted simulator for the UH-60A helicopter. The results of previous handling quality and task performance flight tests for this helicopter have been used as a data base for evaluating the fidelity of the present simulation, which is being conducted at the NASA Ames Research Center's Vertical Motion Simulator. Such nap-of-the-earth piloting tasks as pop-up, hover turn, dash/quick stop, sidestep, dolphin, and slalom have been investigated. It is noted that pilot simulator performance is significantly and quantifiably degraded in comparison with flight test results for the same tasks.
Computer program uses Monte Carlo techniques for statistical system performance analysis
NASA Technical Reports Server (NTRS)
Wohl, D. P.
1967-01-01
Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.
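The idea is easily sketched in modern terms: draw each component's error from its full distribution and push the samples through the system model. The toy system and tolerance values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000                                 # number of Monte Carlo samples

# Component statistics: disturbances and misalignments drawn from full distributions.
gain = rng.normal(10.0, 0.2, n)             # amplifier gain
offset = rng.normal(0.0, 0.05, n)           # sensor offset (V)
misalign = rng.uniform(-0.5, 0.5, n)        # mounting angle error (deg)

def system_output(g, b, a_deg):
    """Toy system model combining the toleranced components."""
    return g * np.cos(np.deg2rad(a_deg)) + b

out = system_output(gain, offset, misalign)
print(f"mean {out.mean():.3f}, std {out.std():.3f}, "
      f"P(out < 9.5) = {(out < 9.5).mean():.4f}")
```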
Sensing Methods for Detecting Analog Television Signals
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Song, Chunyi; Harada, Hiroshi
This paper introduces a unified method of spectrum sensing for all existing analog television (TV) signals, including NTSC, PAL and SECAM. We propose a correlation based method (CBM) with a single reference signal for sensing any analog TV signal. In addition, we also propose an improved energy detection method. The CBM approach has been implemented in a hardware prototype specially designed for participating in the Singapore TV white space (WS) test trial conducted by the Infocomm Development Authority (IDA) of the Singapore government. Analytical and simulation results for the CBM method are presented in the paper, as well as hardware testing results for sensing various analog TV signals. Both AWGN and fading channels are considered. It is shown that the theoretical results closely match those from simulations. Sensing performance of the hardware prototype is also presented in a fading environment by using a fading simulator. We present the performance of the proposed techniques in terms of probability of false alarm, probability of detection, sensing time, etc. We also present a comparative study of the various techniques.
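A minimal sketch of the correlation-based detection idea (not the paper's exact CBM reference design) is shown below: the received block is correlated with a stored reference, and the statistic is compared with a threshold calibrated on noise-only blocks; all signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2048
ref = np.exp(2j * np.pi * 0.1 * np.arange(n))   # stand-in reference waveform
ref /= np.linalg.norm(ref)                      # unit-norm for a clean statistic

def detect(rx, threshold):
    """Return the detection decision and the correlation statistic."""
    stat = np.abs(np.vdot(ref, rx))             # matched correlation magnitude
    return stat > threshold, stat

# Calibrate the threshold on noise-only blocks for ~1% false-alarm rate.
noise_stats = [detect(rng.normal(size=n) + 1j * rng.normal(size=n), np.inf)[1]
               for _ in range(2000)]
threshold = np.quantile(noise_stats, 0.99)

# A weak TV-like carrier buried in noise should now trip the detector.
rx = 0.1 * np.exp(2j * np.pi * 0.1 * np.arange(n)) \
     + (rng.normal(size=n) + 1j * rng.normal(size=n))
print(detect(rx, threshold))
```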
NASA Astrophysics Data System (ADS)
Assous, Franck; Chaskalovic, Joël
2011-06-01
We propose a new approach that consists in using data mining techniques for scientific computing. Indeed, data mining has proved to be efficient in other contexts that deal with huge data sets, such as biology, medicine, marketing, advertising and communications. Our aim, here, is to deal with the important problem of the exploitation of the results produced by any numerical method. Indeed, more and more data are created today by numerical simulations. Thus, it seems necessary to look for efficient tools to analyze them. In this work, we focus our presentation on a test case dedicated to an asymptotic paraxial approximation to model ultrarelativistic particles. Our method deals directly with the numerical results of simulations and tries to understand what each order of the asymptotic expansion brings to the simulation results over what could be obtained by other lower-order or less accurate means. This new heuristic approach offers new potential applications to treat numerical solutions of mathematical models.
Generalized Green's function molecular dynamics for canonical ensemble simulations
NASA Astrophysics Data System (ADS)
Coluci, V. R.; Dantas, S. O.; Tewary, V. K.
2018-05-01
The need for small integration time steps (~1 fs) in conventional molecular dynamics simulations is an important issue that inhibits the study of physical, chemical, and biological systems on real timescales. Additionally, to simulate those systems in contact with a thermal bath, thermostatting techniques are usually applied. In this work, we generalize the Green's function molecular dynamics technique to allow simulations within the canonical ensemble. By applying this technique to one-dimensional systems, we were able to correctly describe important thermodynamic properties such as the temperature fluctuations, the temperature distribution, and the velocity autocorrelation function. We show that the proposed technique also allows the use of time steps one order of magnitude larger than those typically used in conventional molecular dynamics simulations. We expect that this technique can be used in long-timescale molecular dynamics simulations.
NASA Technical Reports Server (NTRS)
Kawamura, K.; Beale, G. O.; Schaffer, J. D.; Hsieh, B. J.; Padalkar, S.; Rodriguez-Moscoso, J. J.
1985-01-01
The results of the first phase of Research on an Expert System for Database Operation of Simulation/Emulation Math Models are described. Techniques from artificial intelligence (AI) were brought to bear on task domains of interest to NASA Marshall Space Flight Center. One such domain is simulation of spacecraft attitude control systems. Two related software systems were developed and delivered to NASA. One was a generic simulation model for spacecraft attitude control, written in FORTRAN. The second was an expert system which understands the usage of a class of spacecraft attitude control simulation software and can assist the user in running the software. This NASA Expert Simulation System (NESS), written in LISP, contains general knowledge about digital simulation, specific knowledge about the simulation software, and self knowledge.
Kim, Dae Wook; Kim, Sug-Whan
2005-02-07
We present a novel simulation technique that offers efficient mass fabrication strategies for 2 m class hexagonal mirror segments of extremely large telescopes. As the first of two studies in a series, we establish the theoretical basis of the tool influence function (TIF) for precessing tool polishing simulation for non-rotating workpieces. These theoretical TIFs were then used to confirm the reproducibility of the material removal foot-prints (measured TIFs) of the bulged precessing tooling reported elsewhere. This is followed by a reverse-computation technique that traces, employing the simplex search method, the real polishing pressure from the empirical TIF. The technical details, together with the results and implications described here, provide the theoretical tool for material removal essential to the successful polishing simulation which will be reported in the second study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vencels, Juris; Delzanno, Gian Luca; Johnson, Alec
2015-06-01
A spectral method for kinetic plasma simulations based on the expansion of the velocity distribution function in a variable number of Hermite polynomials is presented. The method is based on a set of non-linear equations that is solved to determine the coefficients of the Hermite expansion satisfying the Vlasov and Poisson equations. In this paper, we first show that this technique combines the fluid and kinetic approaches into one framework. Second, we present an adaptive strategy to increase and decrease the number of Hermite functions dynamically during the simulation. The technique is applied to the Landau damping and two-stream instability test problems. Performance results show 21% and 47% savings of total simulation time in the Landau damping and two-stream instability test cases, respectively.
Simulation of keratoconus observation in photorefraction
NASA Astrophysics Data System (ADS)
Chen, Ying-Ling; Tan, B.; Baker, K.; Lewis, J. W. L.; Swartz, T.; Jiang, Y.; Wang, M.
2006-11-01
In recent years, keratoconus (KC) has gained increasing attention due to its treatment options and the popularity of keratorefractive surgery. This paper investigates the potential for identification of KC using photorefraction (PR), an optical technique that is similar to objective retinoscopy and is commonly used for large-scale ocular screening. Using personalized eye models of both KC and pre-LASIK patients, computer simulations were performed to achieve visualization of this ophthalmic measurement. The simulations are validated by comparing results to two sets of experimental measurements. The PR images show distinguishable differences between KC eyes and eyes that are either normal or ametropic. The simulation technique with personalized modeling can be extended to other ophthalmic instrument developments, making investigations possible with a minimal number of real human subjects. The application is also of great interest in medical training.
Atomistic simulation and XAS investigation of Mn induced defects in Bi{sub 12}TiO{sub 20}
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rezende, Marcos V dos S.; Santos, Denise J.; Jackson, Robert A.
2016-06-15
This work reports an investigation of the valence and site occupancy of Mn dopants in a Bi12TiO20 (BTO:Mn) host using X-ray absorption spectroscopy (XAS) and atomistic simulation techniques based on energy minimisation. X-ray Absorption Near Edge Structure (XANES) at the Mn K-edge gave typical results for Mn ions with mixed valences of 3+ and 4+. Extended X-ray Absorption Fine Structure (EXAFS) results indicated that Mn ions are probably substituted at Ti sites. Atomistic simulation was performed assuming the incorporation of Mn2+, Mn3+ and Mn4+ ions at either Bi3+ or Ti4+ sites, and the results were compared with the XANES and EXAFS measurements. Electrical conductivity for pure and doped samples was used to evaluate the consistency of the proposed model. Graphical abstract: the structure of Bi12TiO20 (BTO). Highlights: • Pure and Mn-doped Bi12TiO20 samples were studied by experimental techniques combined with atomistic simulation. • Good agreement between experimental and simulation results was obtained. • XANES results suggest a mixture of 3+ and 4+ valences for Mn, occupying the Ti4+ site in both cases. • Charge compensation by holes is most energetically favoured, explaining the enhancement observed in AC dark conductivity.
NASA Astrophysics Data System (ADS)
Setiya Pradana, Jalu; Hidayat, Rahmat
2018-04-01
In this paper, we report our research on developing a Surface Plasmon Resonance (SPR) element with a sub-micron (hundreds of nanometers) periodicity grating structure. The grating structure was fabricated using a simple nano-imprint lithography technique from an organically modified siloxane polymer, which was then covered by a nanometer-thin gold layer. The formed grating structure was a very well defined square-shaped periodic structure. The measured reflectance spectra indicate SPR wave excitation on this grating structure. For comparison, simulations of the reflectance spectra have also been carried out using the Rigorous Coupled-Wave Analysis (RCWA) method. The experimental results are in very good agreement with the simulation results.
Results from Binary Black Hole Simulations in Astrophysics Applications
NASA Technical Reports Server (NTRS)
Baker, John G.
2007-01-01
Present and planned gravitational wave observatories are opening a new astronomical window to the sky. A key source of gravitational waves is the merger of two black holes. The Laser Interferometer Space Antenna (LISA), in particular, is expected to observe these events with signal-to-noise ratios in the thousands. To fully reap the scientific benefits of these observations requires a detailed understanding, based on numerical simulations, of the predictions of General Relativity for the waveform signals. New techniques for simulating binary black hole mergers, introduced two years ago, have led to dramatic advances in applied numerical simulation work. Over the last two years, numerical relativity researchers have made tremendous strides in understanding the late stages of binary black hole mergers. Simulations have been applied to test much of the basic physics of binary black hole interactions, showing robust results for merger waveform predictions, and illuminating such phenomena as spin-precession. Calculations have shown that merging systems can be kicked at up to 2500 km/s by the thrust from asymmetric emission. Recently, long-lasting simulations of ten or more orbits allow tests of post-Newtonian (PN) approximation results for radiation from the last orbits of the binary's inspiral. Already, analytic waveform models based on PN techniques with incorporated information from numerical simulations may be adequate for observations with current ground-based observatories. As new advances in simulations continue to rapidly improve our theoretical understanding of these systems, it seems certain that high-precision predictions will be available in time for LISA and other advanced ground-based instruments. Future gravitational wave observatories are expected to make precision measurements of these systems.
NASA Astrophysics Data System (ADS)
Pagano, P.; Bemporad, A.; Mackay, D. H.
2015-10-01
Context: Understanding the 3D structure of coronal mass ejections (CMEs) is crucial for understanding the nature and origin of solar eruptions. However, owing to the optical thinness of the solar corona we can only observe the line-of-sight integrated emission. As a consequence, the resulting projection effects hide the true 3D structure of CMEs. To derive information on the 3D structure of CMEs from white-light (total and polarized brightness) images, the polarization ratio technique is widely used. The soon-to-be-launched METIS coronagraph on board Solar Orbiter will use this technique to produce new polarimetric images. Aims: This work considers the application of the polarization ratio technique to synthetic CME observations from METIS. In particular, we determine the accuracy with which the position of the centre of mass, the direction and speed of propagation, and the column density of the CME can be determined along the line of sight. Methods: We perform a 3D MHD simulation of a flux rope ejection in which a CME is produced. From the simulation we (i) synthesize the corresponding METIS white-light (total and polarized brightness) images and (ii) apply the polarization ratio technique to these synthesized images and compare the results with the known density distribution from the MHD simulation. In addition, we use recent results that consider how the position of a single blob of plasma is measured depending on its projected position in the plane of the sky. From this we can interpret the results of the polarization ratio technique and give an estimation of the error associated with derived parameters. Results: We find that the polarization ratio technique reproduces with high accuracy the position of the centre of mass along the line of sight, although some errors are inherently associated with this determination. The technique also allows information to be derived on the real 3D direction of propagation of the CME, the determination of which is of fundamental importance for future space weather forecasting. In addition, we find that the column density derived from white-light images is accurate, and we propose an improved technique where the combined use of the polarization ratio technique and white-light images minimizes the error in the estimation of column densities. Moreover, by applying the comparison to a set of snapshots of the simulation, we can also assess the errors related to the trajectory and the expansion of the CME. Conclusions: Our method allows us to thoroughly test the performance of the polarization ratio technique and to determine the errors associated with it, which means that it can be used to quantify the results from the analysis of the forthcoming METIS observations in white light (total and polarized brightness). Finally, we describe a satellite observing configuration relative to the Earth that can allow the technique to be used efficiently for space weather predictions.
NASA Astrophysics Data System (ADS)
Saunders, R.; Samei, E.; Badea, C.; Yuan, H.; Ghaghada, K.; Qi, Y.; Hedlund, L. W.; Mukundan, S.
2008-03-01
Dual-energy contrast-enhanced breast tomosynthesis has been proposed as a technique to improve the detection of early-stage cancer in young, high-risk women. This study focused on optimizing this technique using computer simulations. The computer simulation used analytical calculations to optimize the signal difference to noise ratio (SdNR) of resulting images from such a technique at constant dose. The optimization included the optimal radiographic technique, optimal distribution of dose between the two single-energy projection images, and the optimal weighting factor for the dual energy subtraction. Importantly, the SdNR included both anatomical and quantum noise sources, as dual energy imaging reduces anatomical noise at the expense of increases in quantum noise. Assuming a tungsten anode, the maximum SdNR at constant dose was achieved for a high energy beam at 49 kVp with 92.5 μm copper filtration and a low energy beam at 49 kVp with 95 μm tin filtration. These analytical calculations were followed by Monte Carlo simulations that included the effects of scattered radiation and detector properties. Finally, the feasibility of this technique was tested in a small animal imaging experiment using a novel iodinated liposomal contrast agent. The results illustrated the utility of dual energy imaging and determined the optimal acquisition parameters for this technique. This work was supported in part by grants from the Komen Foundation (PDF55806), the Cancer Research and Prevention Foundation, and the NIH (NCI R21 CA124584-01). CIVM is a NCRR/NCI National Resource under P41-05959/U24-CA092656.
NASA Astrophysics Data System (ADS)
Guan, Fada
The Monte Carlo method has been successfully applied to particle transport problems. Most Monte Carlo simulation tools are static and can only be used to perform simulations with fixed physics and geometry settings. Proton therapy, however, is a dynamic treatment technique in clinical application. In this research, we developed a method to perform dynamic Monte Carlo simulation of proton therapy using the Geant4 simulation toolkit. A passive-scattering treatment nozzle equipped with a rotating range modulation wheel was modeled in this research. One important application of the Monte Carlo simulation is to predict the spatial dose distribution in the target geometry. For simplification, a mathematical model of a human body is usually used as the target, but only the average dose over a whole organ or tissue can then be obtained, rather than the accurate spatial dose distribution. In this research, we developed a method using MATLAB to convert the medical images of a patient from CT scanning into a patient voxel geometry. Hence, if the patient voxel geometry is used as the target in the Monte Carlo simulation, the accurate spatial dose distribution in the target can be obtained. The data analysis tool ROOT was used to score the simulation results during a Geant4 run and to analyze the data and plot results after the simulation. Finally, we successfully obtained the accurate spatial dose distribution in part of a human body for a patient with prostate cancer treated using proton therapy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanov, Gennady (Fermilab)
CST Particle Studio combines electromagnetic field simulation, multi-particle tracking, adequate post-processing, and an advanced probabilistic emission model, which is the most important new capability in multipactor simulation. The emission model includes the stochastic properties of emission in the simulation and adds primary-electron elastic and inelastic reflection from the surfaces. Simulations of multipactor in coaxial waveguides have been performed to study the effects of these innovations on the multipactor threshold and the range over which multipactor can occur. The results are compared with available previous experiments and simulations, and the technique of multipactor simulation with CST PS is presented and discussed.
Application of Land Surface Data Assimilation to Simulations of Sea Breeze Circulations
NASA Technical Reports Server (NTRS)
Mackaro, Scott; Lapenta, William M.; Blackwell, Keith; Suggs, Ron; McNider, Richard T.; Jedlovec, Gary; Kimball, Sytske
2003-01-01
A technique has been developed for assimilating GOES-derived skin temperature tendencies and insolation into the surface energy budget equation of a mesoscale model so that the simulated rate of temperature change closely agrees with the satellite observations. A critical assumption of the technique is that the availability of moisture (either from the soil or vegetation) is the least known term in the model's surface energy budget. Therefore, the simulated latent heat flux, which is a function of surface moisture availability, is adjusted based upon differences between the modeled and satellite-observed skin temperature tendencies. An advantage of this technique is that satellite temperature tendencies are assimilated in an energetically consistent manner that avoids energy imbalances and surface stability problems that arise from direct assimilation of surface shelter temperatures. The fact that the rate of change of the satellite skin temperature is used rather than the absolute temperature means that sensor calibration is not as critical. The sea/land breeze is a well-documented mesoscale circulation that affects many coastal areas of the world including the northern Gulf Coast of the United States. The focus of this paper is to examine how the satellite assimilation technique impacts the simulation of a sea breeze circulation observed along the Mississippi/Alabama coast in the spring of 2001. The technique is implemented within the PSU/NCAR MM5 V3-5 and applied at spatial resolutions of 12- and 4-km. It is recognized that even 4-km grid spacing is too coarse to explicitly resolve the detailed, mesoscale structure of sea breezes. Nevertheless, the model can forecast certain characteristics of the observed sea breeze including a thermally direct circulation that results from differential low-level heating across the land-sea interface. Our intent is to determine the sensitivity of the circulation to the differential land surface forcing produced via the assimilation of GOES skin temperature tendencies. Results will be quantified through statistical verification techniques.
Robust state preparation in quantum simulations of Dirac dynamics
NASA Astrophysics Data System (ADS)
Song, Xue-Ke; Deng, Fu-Guo; Lamata, Lucas; Muga, J. G.
2017-02-01
A nonrelativistic system such as an ultracold trapped ion may perform a quantum simulation of Dirac equation dynamics under specific conditions. The resulting Hamiltonian and dynamics are highly controllable, but the coupling between momentum and internal levels makes it difficult to manipulate the internal states of wave packets accurately. We use invariants of motion to inverse engineer robust population-inversion processes with a homogeneous, time-dependent simulated electric field. This exemplifies the usefulness of inverse-engineering techniques for improving the performance of quantum simulation protocols.
Material model validation for laser shock peening process simulation
NASA Astrophysics Data System (ADS)
Amarchinta, H. K.; Grandhi, R. V.; Langer, K.; Stargel, D. S.
2009-01-01
Advanced mechanical surface enhancement techniques have been used successfully to increase the fatigue life of metallic components. These techniques impart deep compressive residual stresses into the component to counter potentially damage-inducing tensile stresses generated under service loading. Laser shock peening (LSP) is an advanced mechanical surface enhancement technique used predominantly in the aircraft industry. To reduce costs and make the technique available on a large-scale basis for industrial applications, simulation of the LSP process is required. Accurate simulation of the LSP process is a challenging task, because the process has many parameters, such as laser spot size, pressure profile and material model, that must be precisely determined. This work focuses on investigating the appropriate material model for use in simulation and design. In the LSP process the material is subjected to strain rates of 10^6 s^-1, which is very high compared with conventional strain rates. The importance of an accurate material model increases because the material behaves significantly differently at such high strain rates. This work investigates the effect of multiple nonlinear material models for representing the elastic-plastic behavior of materials. Elastic perfectly plastic, Johnson-Cook and Zerilli-Armstrong models are used, and the performance of each model is compared with available experimental results.
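For concreteness, the sketch below evaluates the Johnson-Cook flow stress, one of the rate-dependent models compared above. The parameter values are literature-style numbers for a titanium alloy and are purely illustrative, not the calibrated values used in this work.

```python
import numpy as np

def johnson_cook_flow_stress(eps_p, eps_dot, T,
                             A=1098e6, B=1092e6, n=0.93, C=0.014, m=1.1,
                             eps_dot_ref=1.0, T_room=293.0, T_melt=1878.0):
    """Johnson-Cook flow stress (Pa):
    sigma = (A + B*eps_p**n) * (1 + C*ln(eps_dot/eps_dot_ref)) * (1 - T*^m),
    with T* the homologous temperature. Parameters shown are illustrative
    literature-style values for Ti-6Al-4V, not those validated in the paper."""
    t_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
    return ((A + B * eps_p**n)
            * (1.0 + C * np.log(eps_dot / eps_dot_ref))
            * (1.0 - t_star**m))

# Flow stress at 2% plastic strain, an LSP-like rate of 1e6 /s, room temp:
print(johnson_cook_flow_stress(0.02, 1e6, 293.0) / 1e6, "MPa")  # ~1.3 GPa
```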
Maleke, Caroline; Luo, Jianwen; Gamarnik, Viktor; Lu, Xin L; Konofagou, Elisa E
2010-07-01
The objective of this study is to show that Harmonic Motion Imaging (HMI) can be used as a reliable tumor-mapping technique based on the tumor's distinct stiffness at the early onset of disease. HMI is a radiation-force-based imaging method that generates a localized vibration deep inside the tissue in order to estimate the relative tissue stiffness from the resulting displacement amplitude. In this paper, a finite-element model (FEM) study is presented, followed by an experimental validation in tissue-mimicking polyacrylamide gels and excised human breast tumors ex vivo. The study compares the resulting tissue motion in simulations and experiments at four different gel stiffnesses and three distinct spherical inclusion diameters. The elastic moduli of the gels were separately measured using mechanical testing. Identical transducer parameters were used in both the FEM and experimental studies, i.e., a 4.5-MHz single-element focused ultrasound (FUS) transducer and a 7.5-MHz diagnostic (pulse-echo) transducer. In the simulation, an acoustic pressure field was used as the input stimulus to generate a localized vibration inside the target. Radiofrequency (rf) signals were then simulated using a 2D convolution model. A one-dimensional cross-correlation technique was applied to the simulated and experimental rf signals to estimate the axial displacement resulting from the harmonic radiation force. To measure the reliability of the displacement profiles in estimating the tissue stiffness distribution, the contrast-transfer efficiency (CTE) was calculated. For tumor mapping ex vivo, the harmonic radiation force was applied using a 2D raster-scan technique. The 2D HMI images could detect a malignant tumor (20 × 10 mm²) surrounded by glandular and fat tissues. The FEM and experimental results from both gels and breast tumors ex vivo demonstrated that HMI was capable of detecting and mapping a tumor or stiff inclusion of various diameters and stiffnesses. HMI may thus constitute a promising technique for tumor detection (>3 mm in diameter) and mapping based on its distinct stiffness.
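The displacement estimator at the core of such methods is a one-dimensional cross-correlation between successive rf lines. The following is a minimal sketch assuming a whole-line correlation with no windowing or sub-sample interpolation, details any practical implementation would add:

```python
import numpy as np

def axial_displacement(rf_ref, rf_def, fs, c=1540.0):
    """Estimate axial displacement between two rf A-lines by 1D
    cross-correlation. fs: sampling frequency (Hz); c: speed of sound (m/s).
    The pulse-echo round trip means a shift of `lag` samples corresponds
    to a depth change of lag * c / (2 * fs)."""
    xc = np.correlate(rf_def - rf_def.mean(), rf_ref - rf_ref.mean(), "full")
    lag = np.argmax(xc) - (len(rf_ref) - 1)   # lag in samples
    return lag * c / (2.0 * fs)

# Synthetic check: a 5-sample shift of a Gaussian-enveloped 7.5 MHz pulse
fs = 40e6
t = np.arange(512) / fs
pulse = np.sin(2*np.pi*7.5e6*t) * np.exp(-((t - 6e-6) / 1e-6)**2)
shifted = np.roll(pulse, 5)
print(axial_displacement(pulse, shifted, fs))   # ~9.6e-05 m (5 samples)
```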
Solid State Audio/Speech Processor Analysis.
1980-03-01
techniques. The techniques were demonstrated to be worthwhile in an efficient real-time AWR system. Finally, microprocessor architectures were designed to...do not include custom chip development, detailed hardware design, construction or testing. ITTDCD is very encouraged by the results obtained in this...California, Berkeley, was responsible for furnishing the simulation data of OD speech analysis techniques and for the design and development of the hardware OD
Simplified nonplanar wafer bonding for heterogeneous device integration
NASA Astrophysics Data System (ADS)
Geske, Jon; Bowers, John E.; Riley, Anton
2004-07-01
We demonstrate a simplified nonplanar wafer bonding technique for heterogeneous device integration. The improved technique can be used to laterally integrate dissimilar semiconductor device structures on a lattice-mismatched substrate. Using the technique, two different InP-based vertical-cavity surface-emitting laser active regions have been integrated onto GaAs without compromising the quality of the photoluminescence. Experimental and numerical simulation results are presented.
RCWA and FDTD modeling of light emission from internally structured OLEDs.
Callens, Michiel Koen; Marsman, Herman; Penninck, Lieven; Peeters, Patrick; de Groot, Harry; ter Meulen, Jan Matthijs; Neyts, Kristiaan
2014-05-05
We report on the fabrication and simulation of a green OLED with an Internal Light Extraction (ILE) layer. The optical behavior of these devices is simulated using both Rigorous Coupled Wave Analysis (RCWA) and Finite Difference Time-Domain (FDTD) methods. Results obtained using these two different techniques show excellent agreement and predict the experimental results with good precision. By verifying the validity of both simulation methods on the internal light extraction structure we pave the way to optimization of ILE layers using either of these methods.
A data-driven dynamics simulation framework for railway vehicles
NASA Astrophysics Data System (ADS)
Nie, Yinyu; Tang, Zhao; Liu, Fengjia; Chang, Jian; Zhang, Jianjun
2018-03-01
The finite element (FE) method is essential for simulating vehicle dynamics with fine detail, especially for train crash simulations. However, factors such as the complexity of meshes and the distortion involved in large deformations undermine its computational efficiency. An alternative, multi-body (MB) dynamics simulation, provides satisfying time efficiency but limited accuracy when highly nonlinear dynamic processes are involved. To retain the advantages of both methods, this paper proposes a data-driven simulation framework for the dynamics simulation of railway vehicles. This framework uses machine learning techniques to extract nonlinear features from training data generated by FE simulations, so that specific mesh structures can be represented by one or more surrogate elements replacing the original mechanical elements, and the dynamics simulation can be implemented by co-simulation with the surrogate element(s) embedded into an MB model. The framework consists of a series of techniques including data collection, feature extraction, training data sampling, surrogate element building, and model evaluation and selection. To verify the feasibility of this framework, we present two case studies, a vertical dynamics simulation and a longitudinal dynamics simulation, based on co-simulation with MATLAB/Simulink and Simpack, together with a comparison against a popular data-driven model (the Kriging model). The simulation results show that using the Legendre polynomial regression model to build surrogate elements substantially reduces simulation time without sacrificing accuracy.
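To make the surrogate-element idea concrete, here is a minimal sketch of fitting a Legendre polynomial regression to hypothetical FE training data and using it as a drop-in force element; the deflection/force model and all numbers are invented for illustration and do not reflect the paper's cases.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)

# Hypothetical training data: FE-computed suspension force vs. deflection.
# In the framework above, these samples would come from detailed FE runs.
x = np.linspace(-0.05, 0.05, 200)            # deflection (m)
y = 1.2e6 * x + 4.0e8 * x**3                 # nonlinear restoring force (N)
y += rng.normal(0.0, 500.0, x.size)          # stand-in for FE scatter

scale = lambda d: d / 0.05                   # map deflection onto [-1, 1]
coeffs = L.legfit(scale(x), y, deg=7)        # Legendre series fit

def surrogate_force(deflection):
    """Surrogate element: evaluated in place of the FE mesh inside the
    multi-body co-simulation loop."""
    return L.legval(scale(deflection), coeffs)

print(surrogate_force(0.02))                 # fast force lookup for MB solver
```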
NASA Astrophysics Data System (ADS)
Gelb, Lev D.; Chakraborty, Somendra Nath
2011-12-01
The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton-Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. Results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but they substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase.
Hyper-Parallel Tempering Monte Carlo Method and Its Applications
NASA Astrophysics Data System (ADS)
Yan, Qiliang; de Pablo, Juan
2000-03-01
A new generalized hyper-parallel tempering Monte Carlo molecular simulation method is presented for the study of complex fluids. The method is particularly useful for simulating many-molecule complex systems, where rough energy landscapes and inherently long characteristic relaxation times can pose formidable obstacles to effective sampling of the relevant regions of configuration space. The method combines several key elements from expanded ensemble formalisms, parallel tempering, open ensemble simulations, configurational-bias techniques, and histogram-reweighting analysis of results. It is found to significantly accelerate the diffusion of a complex system through phase space. In this presentation, we demonstrate the effectiveness of the new method by implementing it in grand canonical ensembles for a Lennard-Jones fluid, for the restricted primitive model of electrolyte solutions (RPM), and for polymer solutions and blends. Our results indicate that the new algorithm is capable of overcoming the large free energy barriers associated with phase transitions, thereby greatly facilitating the simulation of coexistence properties. It is also shown that the method can be orders of magnitude more efficient than previously available techniques. More importantly, the method is relatively simple and can be incorporated into existing simulation codes with minor effort.
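The exchange move that gives tempering methods their power is simple to state: neighbouring replicas at inverse temperatures β_i and β_j swap configurations with probability min(1, exp[(β_i − β_j)(E_i − E_j)]). A minimal sketch on a toy double-well energy follows; the actual method layers expanded-ensemble, open-ensemble, configurational-bias, and reweighting machinery on top of this core.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(x):
    """Toy 1D double well standing in for a rough molecular landscape."""
    return 4.0 * (x**2 - 1.0)**2

betas = np.array([4.0, 2.0, 1.0, 0.5])   # one replica per inverse temperature
x = rng.normal(size=betas.size)          # replica configurations

for step in range(20000):
    # ordinary Metropolis moves within each replica
    for i, b in enumerate(betas):
        xp = x[i] + rng.normal(0.0, 0.3)
        if rng.random() < np.exp(min(0.0, -b * (energy(xp) - energy(x[i])))):
            x[i] = xp
    # tempering swap between a random pair of neighbouring replicas
    i = rng.integers(betas.size - 1)
    delta = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
    if rng.random() < np.exp(min(0.0, delta)):   # min() avoids exp overflow
        x[i], x[i + 1] = x[i + 1], x[i]
```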
Simulation and Implementation of a Morphology-Tuned Gold Nano-Islands Integrated Plasmonic Sensor
Ozhikandathil, Jayan; Packirisamy, Muthukumaran
2014-01-01
This work presents the simulation, analysis and implementation of morphology tuning of gold nano-island structures deposited by a novel convective assembly technique. The gold nano-islands were simulated using 3D Finite-Difference Time-Domain (FDTD) techniques to investigate the effect of morphological changes and of the adsorption of protein layers on the localized surface plasmon resonance (LSPR) properties. Gold nano-island structures were deposited on glass substrates by a novel, low-cost convective assembly process. The structure formed by an uncontrolled deposition method resulted in a nano-cluster morphology, which was annealed at various temperatures to tune the optical absorbance properties by transforming the nano-clusters into a nano-island morphology, thereby modifying the structural shape and interparticle separation distances. The dependence of the LSPR properties on the size and the interparticle separation distance of the nano-islands was analyzed in the simulation. The effect of the adsorption of a protein layer on the nano-island structures was simulated, and a relation between the thickness and refractive index of the protein layer and the LSPR peak was presented. Further, the sensitivity of the gold nano-island integrated sensor to refractive index was computed and compared with the experimental results. PMID:24932868
Simulating the X-Ray Image Contrast to Set-Up Techniques with Desired Flaw Detectability
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2015-01-01
The paper provides simulation data extending previous work by the author on a model for estimating the detectability of crack-like flaws in radiography. The methodology is being developed to help in the implementation of NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing X-ray detector resolution for crack detection; the applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper then describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs, calculating the x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source size, and detector sensitivity and resolution, are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and on the simulated image contrast of the crack, and they demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide the desired flaw detectability. The method is applicable to film radiography, computed radiography, and digital radiography.
An Improved Simulated Annealing Technique for Enhanced Mobility in Smart Cities.
Amer, Hayder; Salman, Naveed; Hawes, Matthew; Chaqfeh, Moumena; Mihaylova, Lyudmila; Mayfield, Martin
2016-06-30
Vehicular traffic congestion is a significant problem that arises in many cities. This is due to the increasing number of vehicles that are driving on city roads of limited capacity. The vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoidance of traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work consists of the developed approach for dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads' length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from the sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution and the Dijkstra algorithm. The weighted sum and technique for order preference by similarity to the ideal solution methods are used to formulate different attributes in the simulated annealing cost function. According to the Sheffield scenario, simulation results show that the improved simulated annealing technique for order preference by similarity to the ideal solution method improves the traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO₂ emissions as compared to other algorithms; also, similar performance patterns were achieved for the Birmingham test scenario.
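The annealing core of such route optimizers is compact. As a stand-in for the Sheffield and Birmingham road-network scenarios evaluated above, the sketch below applies the same weighted-sum cost (travel time from sensed speeds plus distance) and Metropolis-style acceptance with geometric cooling to a small visiting-order problem; all numbers are invented.

```python
import math, random

random.seed(7)
n = 12
pts = [(random.random(), random.random()) for _ in range(n)]    # intersections
speed = [random.uniform(20, 60) for _ in range(n)]  # sensed speed leaving node i

def cost(order, w_time=0.7, w_len=0.3):
    """Weighted sum of travel time and distance (the weighted-sum variant)."""
    t = d = 0.0
    for a, b in zip(order, order[1:]):
        leg = math.dist(pts[a], pts[b])
        d += leg
        t += leg / speed[a]
    return w_time * t + w_len * d

order = list(range(n))
T = 1.0
while T > 1e-3:
    i, j = sorted(random.sample(range(1, n), 2))
    cand = order[:i] + order[i:j][::-1] + order[j:]   # 2-opt neighbour move
    delta = cost(cand) - cost(order)
    if delta < 0 or random.random() < math.exp(-delta / T):
        order = cand                                  # accept move
    T *= 0.999                                        # geometric cooling
print(order, round(cost(order), 4))
```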
Booth, Jonathan; Vazquez, Saulo; Martinez-Nunez, Emilio; Marks, Alison; Rodgers, Jeff; Glowacki, David R; Shalashilin, Dmitrii V
2014-08-06
In this paper, we briefly review the boxed molecular dynamics (BXD) method, which allows analysis of thermodynamics and kinetics in complicated molecular systems. BXD is a multiscale technique in which thermodynamics and long-time dynamics are recovered from a set of short-time simulations. We review previous applications of BXD to peptide cyclization, solution-phase organic reaction dynamics and desorption of ions from self-assembled monolayers (SAMs). We also report preliminary results of simulations of diamond etching mechanisms and of protein unfolding in atomic force microscopy experiments; the latter demonstrate a correlation between the protein's structural motifs and its potential of mean force. Simulating these processes by standard molecular dynamics (MD) is typically not possible, because the experimental time scales are very long. However, BXD yields well-converged and physically meaningful results. Compared with other methods of accelerated MD, our BXD approach is very simple: it is easy to implement, and it provides an integrated approach for simultaneously obtaining both thermodynamics and kinetics. It also provides a strategy for obtaining statistically meaningful dynamical results in regions of configuration space that standard MD approaches would visit only very rarely.
Gauthier, Philippe-Aubert; Berry, Alain; Woszczyk, Wieslaw
2005-02-01
This paper describes the simulations and results obtained when applying optimal control to progressive sound-field reproduction (mainly for audio applications) over an area using multiple monopole loudspeakers. The model simulates a reproduction system that operates either in free field or in a closed space approaching a typical listening room, and is based on optimal control in the frequency domain. This rather simple approach is chosen for the purpose of physical investigation, especially in terms of sensing microphones and reproduction loudspeakers configurations. Other issues of interest concern the comparison with wave-field synthesis and the control mechanisms. The results suggest that in-room reproduction of sound field using active control can be achieved with a residual normalized squared error significantly lower than open-loop wave-field synthesis in the same situation. Active reproduction techniques have the advantage of automatically compensating for the room's natural dynamics. For the considered cases, the simulations show that optimal control results are not sensitive (in terms of reproduction error) to wall absorption in the reproduction room. A special surrounding configuration of sensors is introduced for a sensor-free listening area in free field.
SU-F-T-370: A Fast Monte Carlo Dose Engine for Gamma Knife
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, T; Zhou, L; Li, Y
2016-06-15
Purpose: To develop a fast Monte Carlo dose calculation algorithm for Gamma Knife. Methods: To make the simulation more efficient, we implemented the track-repeating technique on the GPU. We first use EGSnrc to pre-calculate the photon and secondary-electron tracks in water from the two mono-energy photons of 60Co. The total photon mean free paths for different materials and energies are obtained from NIST. During simulation, each entire photon track was first loaded into shared memory for each block; the incident original photon was then split into Nthread sub-photons, with each thread transporting one sub-photon, and the Russian roulette technique was applied for scattered and bremsstrahlung photons. The resultant electrons from photon interactions are simulated by repeating the recorded electron tracks. The electron step length is stretched or shrunk proportionally based on the local density and the stopping-power ratios of the local material. Energy deposition in a voxel is proportional to the fraction of the equivalent step length in that voxel. To evaluate its accuracy, dose deposition in a 300 mm × 300 mm × 300 mm water phantom was calculated and compared to EGSnrc results. Results: Both PDD and OAR curves showed excellent agreement (within 0.5%) between our dose engine and the EGSnrc results. Each simulation takes less than 1 min, a reduction of up to ∼40× compared to EGSnrc simulations. Conclusion: We have successfully developed a fast Monte Carlo dose engine for Gamma Knife.
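Of the variance-reduction tricks mentioned, Russian roulette is the easiest to isolate. A minimal sketch of an unbiased roulette step follows; the weight cutoff and survival probability are illustrative choices, not the abstract's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def russian_roulette(weights, w_cut=0.01, survival=0.1):
    """Population control for low-weight particles (e.g., scattered and
    bremsstrahlung photons): particles below w_cut are killed with
    probability (1 - survival); survivors get their weight boosted by
    1/survival, so the expected total weight is conserved (unbiased)."""
    w = weights.copy()
    low = w < w_cut
    survive = rng.random(w.shape) < survival
    w[low & survive] /= survival     # boost the survivors
    w[low & ~survive] = 0.0          # kill the rest
    return w

w = np.array([1.0, 0.5, 0.008, 0.002])
print(russian_roulette(w))           # heavy particles untouched
```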
Numerical Simulations of the Digital Microfluidic Manipulation of Single Microparticles.
Lan, Chuanjin; Pal, Souvik; Li, Zhen; Ma, Yanbao
2015-09-08
Single-cell analysis techniques have been developed as a valuable bioanalytical tool for elucidating cellular heterogeneity at genomic, proteomic, and cellular levels. Cell manipulation is an indispensable process for single-cell analysis. Digital microfluidics (DMF) is an important platform for conducting cell manipulation and single-cell analysis in a high-throughput fashion. However, the manipulation of single cells in DMF has not been quantitatively studied so far. In this article, we investigate the interaction of a single microparticle with a liquid droplet on a flat substrate using numerical simulations. The droplet is driven by capillary force generated from the wettability gradient of the substrate. Considering the Brownian motion of microparticles, we utilize many-body dissipative particle dynamics (MDPD), an off-lattice mesoscopic simulation technique, in this numerical study. The manipulation processes (including pickup, transport, and drop-off) of a single microparticle with a liquid droplet are simulated. Parametric studies are conducted to investigate the effects on the manipulation processes from the droplet size, wettability gradient, wetting properties of the microparticle, and particle-substrate friction coefficients. The numerical results show that the pickup, transport, and drop-off processes can be precisely controlled by these parameters. On the basis of the numerical results, a trap-free delivery of a hydrophobic microparticle to a destination on the substrate is demonstrated in the numerical simulations. The numerical results not only provide a fundamental understanding of interactions among the microparticle, the droplet, and the substrate but also demonstrate a new technique for the trap-free immobilization of single hydrophobic microparticles in the DMF design. Finally, our numerical method also provides a powerful design and optimization tool for the manipulation of microparticles in DMF systems.
NASA Astrophysics Data System (ADS)
Harzalla, S.; Belgacem, F. Bin Muhammad; Chabaat, M.
2014-12-01
In this paper, a nondestructive technique is used as a tool to inspect cracks and microcracks in materials. A simulation based on a numerical approach, the finite element method, is employed to detect cracks and, eventually, to study their propagation using a crucial parameter, the stress intensity factor. This approach has been used in the aircraft industry to monitor cracks. Moreover, it makes it possible to reveal defects in parts while preserving the integrity of the inspected products. It is also shown that the reliability of defect inspection gives convincing results for improving the quality and safety of the material. Eddy current testing (ECT) is a standard technique in industry for the detection of surface-breaking flaws in magnetic materials such as steels. In this context, simulation tools can be used to improve the understanding of experimental signals, optimize the design of sensors, or evaluate the performance of ECT procedures. CEA-LIST has for many years developed semi-analytical models embedded in the CIVA simulation platform dedicated to non-destructive testing. The developments presented herein address the case of flaws located inside a planar, magnetic medium. Simulation results are obtained through the application of the Volume Integral Method (VIM). When considering the ECT of a single flaw, a system of two differential equations is derived from the Maxwell equations; the numerical resolution of this system is carried out using the classical Galerkin variant of the Method of Moments. The probe response is then calculated by applying the Lorentz reciprocity theorem. Finally, the approach itself as well as comparisons between simulation results and measured data are presented.
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
1997-01-01
The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. One-dimensional methods have been extended somewhat so that linear models can also be generated from two- and three-dimensional steady-state results. Standard techniques are adequate for reducing the order of one-dimensional CFD-based linear models. However, reduction of linear models based on two- and three-dimensional CFD results is complicated by very sparse, ill-conditioned matrices. Some novel approaches are being investigated to solve this problem.
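Model reduction of the kind described can be prototyped today with off-the-shelf tools. The sketch below assumes the python-control package (the NASA/University of Akron tooling itself was custom, and control.balred may require the optional slycot backend in some installations):

```python
# Sketch of reducing a large linear model by balanced truncation with the
# python-control package; the 200-state random stable model stands in for
# a linear model lifted from steady-state CFD results.
import control

full = control.rss(states=200, outputs=1, inputs=1)
reduced = control.balred(full, orders=10)   # balanced truncation to 10 states
print(full.nstates, "->", reduced.nstates)  # 200 -> 10
```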
Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques
NASA Astrophysics Data System (ADS)
Elliott, Louie C.
This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
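The "numerical derivatives with complex variables" named above are commonly implemented via the complex-step formula, which avoids the subtractive cancellation of finite differences. A minimal sketch, using the classic Squire-Trapp test function as a stand-in for a fuel cell cost function (nothing here reflects the dissertation's actual code):

```python
import numpy as np

def complex_step(f, x, h=1e-30):
    """df/dx via the complex-step formula Im(f(x + ih)) / h.

    No subtraction of nearby values occurs, so the step h can be made
    tiny and the derivative is accurate to machine precision."""
    return np.imag(f(x + 1j * h)) / h

# Stand-in for a cost function of one design variable
f = lambda x: np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)

print(complex_step(f, 1.5))   # matches the analytic derivative to ~16 digits
```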
Satellite Data Transmission (SDT) requirement
NASA Technical Reports Server (NTRS)
Chie, C. M.; White, M.; Lindsey, W. C.
1984-01-01
An 85 Mb/s modem/codec to operate in a 34 MHz C-band domestic satellite transponder at a system carrier-to-noise power ratio of 19.5 dB is discussed. The characteristics of a satellite channel and the approach adopted for the satellite data transmission modem/codec selection are discussed. Measured data and simulation results for the existing 50 Mbps link are compared and used to verify the simulation techniques. The various modulation schemes that were screened for the SDT are discussed, and the simulated performance of the two prime candidates, 8-PSK and SMSK/2, is given. The selection process that led to the candidate codec techniques is documented, and the technology of the modem/codec candidates is assessed. Costs of the modems and codecs are estimated.
NASA Astrophysics Data System (ADS)
Allaf, M. Athari; Shahriari, M.; Sohrabpour, M.
2004-04-01
A new method using Monte Carlo source simulation of interference reactions in neutron activation analysis experiments has been developed. The neutron spectrum at the sample location has been simulated using the Monte Carlo code MCNP, and the contributions of different elements to a specified gamma line have been determined. The resulting response matrix, together with measured peak areas, has been used to determine the sample masses of the elements of interest. A number of benchmark experiments have been performed and the calculated results verified against known values. The good agreement obtained between the calculated and known values suggests that this technique may be useful for eliminating interference reactions in neutron activation analysis.
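The final step of such a scheme reduces to linear algebra: given a simulated response matrix R whose entry (i, j) is the counts in gamma line i per unit mass of element j, the measured peak areas determine the masses. A minimal sketch with invented numbers:

```python
import numpy as np

# Hypothetical 3x3 response matrix from the MCNP source simulations:
# R[i, j] = counts in gamma line i per unit mass of element j.
R = np.array([[5.0e3, 1.2e2, 0.0],
              [8.0e1, 3.1e3, 2.0e2],
              [0.0,   9.0e1, 4.4e3]])
peak_areas = np.array([5120.0, 3350.0, 4490.0])   # measured net peak areas

# Least-squares solve: interference-corrected element masses
masses, *_ = np.linalg.lstsq(R, peak_areas, rcond=None)
print(masses)
```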
NASA Astrophysics Data System (ADS)
Sukharev, V.; Sukhanova, E.; Mozhevitina, E.; Sadovsky, A.; Avetissov, I.
2017-06-01
The Li2O–ZnO–MoO3 pseudo-ternary system was used for the growth of Li2Zn2(MoO4)3 crystals by the top-seeded solution growth technique, in which MoO3 was used as the solvent. Properties of the melts (density, viscosity) were measured experimentally at different temperatures and compositions of the Li2O–ZnO–MoO3 pseudo-ternary system. Heat and mass transfer in the crystal growth setup was numerically simulated. Based on the simulation results, a real growth setup was built, Li2Zn2(MoO4)3 crystals were grown, and their properties were studied.
Optimisation of Critical Infrastructure Protection: The SiVe Project on Airport Security
NASA Astrophysics Data System (ADS)
Breiing, Marcus; Cole, Mara; D'Avanzo, John; Geiger, Gebhard; Goldner, Sascha; Kuhlmann, Andreas; Lorenz, Claudia; Papproth, Alf; Petzel, Erhard; Schwetje, Oliver
This paper outlines the scientific goals, ongoing work and first results of the SiVe research project on critical infrastructure security. The methodology is generic, while the pilot studies are chosen from airport security. The outline proceeds in three major steps: (1) building a threat scenario, (2) developing simulation models as scenario refinements, and (3) assessing alternatives. Advanced techniques of systems analysis and simulation are employed to model relevant airport structures and processes as well as offences. Computer experiments are carried out to compare and optimise alternative solutions. The optimality analyses draw on approaches to quantitative risk assessment recently developed in the operational sciences. To exploit the advantages of the various techniques, an integrated simulation workbench is built up in the project.
NASA Technical Reports Server (NTRS)
Middleton, D. B.; Hurt, G. J., Jr.
1971-01-01
A fixed-base piloted simulator investigation has been made of the feasibility of using any of several manual guidance and control techniques for emergency lunar escape to orbit with very simplified, lightweight vehicle systems. The escape-to-orbit vehicles accommodate two men, but one man performs all of the guidance and control functions. Three basic attitude-control modes and four manually executed trajectory-guidance schemes were used successfully during approximately 125 simulated flights under a variety of conditions. These conditions included thrust misalignment, uneven propellant drain, and a vehicle moment-of-inertia range of 250 to 12,000 slug-ft². Two types of results are presented: orbit characteristics and pilot ratings of vehicle handling qualities.
The role of numerical simulation for the development of an advanced HIFU system
NASA Astrophysics Data System (ADS)
Okita, Kohei; Narumi, Ryuta; Azuma, Takashi; Takagi, Shu; Matumoto, Yoichiro
2014-10-01
High-intensity focused ultrasound (HIFU) has been used clinically and is under clinical trials to treat various diseases. An advanced HIFU system employs ultrasound techniques for guidance during HIFU treatment, instead of the magnetic resonance imaging used in current HIFU systems. HIFU beam imaging for monitoring the HIFU beam and localized motion imaging for validating tissue treatment are briefly introduced as the real-time ultrasound monitoring techniques. Numerical simulations have a great impact on the development of real-time ultrasound monitoring, as well as on the improvement of the safety and efficacy of treatment in advanced HIFU systems. A HIFU simulator was developed to reproduce ultrasound propagation through the body, taking into account the elasticity of tissue, and was validated by comparison with in vitro experiments in which ultrasound emitted from a phased-array transducer propagates through an acrylic plate acting as a bone phantom. The defocus and distortion of the ultrasound propagating through the acrylic plate in the simulation quantitatively agree with the experimental results; the HIFU simulator therefore accurately reproduces ultrasound propagation through a medium whose shape and physical properties are well known. In addition, it is experimentally confirmed that simulation-assisted focus control of the phased-array transducer enables efficient assignment of the focus to the target. Simulation-assisted focus control can thus contribute to transducer design and treatment planning.
Simulating New Drop Test Vehicles and Test Techniques for the Orion CEV Parachute Assembly System
NASA Technical Reports Server (NTRS)
Morris, Aaron L.; Fraire, Usbaldo, Jr.; Bledsoe, Kristin J.; Ray, Eric; Moore, Jim W.; Olson, Leah M.
2011-01-01
The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is engaged in a multi-year design and test campaign to qualify a parachute recovery system for human use on the Orion Spacecraft. Test and simulation techniques have evolved concurrently to keep up with the demands of a challenging and complex system. The primary simulations used for preflight predictions and post-test data reconstructions are the Decelerator System Simulation (DSS), the Decelerator System Simulation Application (DSSA), and the Drop Test Vehicle Simulation (DTV-SIM). The goal of this paper is to provide future programs with a roadmap to the test-technique challenges and obstacles involved in executing a large-scale, multi-year parachute test program. The focus is on flight simulation modeling and its correlation with the test techniques executed to obtain parachute performance parameters.
GPU-based Efficient Realistic Techniques for Bleeding and Smoke Generation in Surgical Simulators
Halic, Tansel; Sankaranarayanan, Ganesh; De, Suvranu
2010-01-01
Background In actual surgery, smoke and bleeding due to cautery processes provide important visual cues to the surgeon, which have been proposed as factors in surgical skill assessment. While several virtual reality (VR)-based surgical simulators have incorporated effects of bleeding and smoke generation, these are not realistic owing to the requirement of real-time performance. To be interactive, the visual display must be updated at least at 30 Hz and haptic (touch) information must be refreshed at 1 kHz. Simulation of smoke and bleeding is, therefore, either ignored or performed using highly simplified techniques, since other computationally intensive processes compete for the available CPU resources. Methods In this work, we develop a novel low-cost method to generate realistic bleeding and smoke in VR-based surgical simulators which outsources the computations to the graphics processing unit (GPU), thus freeing up the CPU for other time-critical tasks. The method is independent of the complexity of the organ models in the virtual environment. User studies were performed with 20 subjects to rate the visual quality of the simulations against real surgical videos. Results The smoke and bleeding simulations were implemented as part of a Laparoscopic Adjustable Gastric Banding (LAGB) simulator. For the bleeding simulation, the original shader-based implementation did not incur noticeable overhead. For smoke generation, however, an I/O (input/output) bottleneck was observed, and two different methods were developed to overcome this limitation. Based on our benchmark results, a buffered approach performed better than a pipelined approach and could support up to 15 video streams in real time. Human subject studies showed that the visual realism of the simulations was as good as in real surgery (median rating of 4 on a 5-point Likert scale). Conclusions Based on the performance results and the subject study, both the bleeding and smoke simulations were concluded to be efficient, highly realistic, and well suited to VR-based surgical simulators. PMID:20878651
Solution to the indexing problem of frequency domain simulation experiments
NASA Technical Reports Server (NTRS)
Mitra, Mousumi; Park, Stephen K.
1991-01-01
A frequency domain simulation experiment is one in which selected system parameters are oscillated sinusoidally to induce oscillations in one or more system statistics of interest. A spectral (Fourier) analysis of these induced oscillations is then performed. To perform this spectral analysis, all oscillation frequencies must be referenced to a common independent variable: an oscillation index. In a discrete-event simulation, the global simulation clock is the most natural choice for the oscillation index. However, past efforts to reference all frequencies to the simulation clock generally yielded unsatisfactory results. The reason for these unsatisfactory results is explained in this paper, and a new methodology that uses the simulation clock as the oscillation index is presented. Techniques for implementing this new methodology are demonstrated by performing a frequency domain simulation experiment on a network of queues.
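As an illustration of the idea (not the paper's queueing network or its methodology), the sketch below oscillates the service rate of a single M/M/1-style queue against the global simulation clock, samples the queue length on a regular clock grid, and checks that the induced oscillation stands out at the driving frequency; treating the service rate as constant over each individual service is a simplification.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, mu0, amp, f_drive = 0.5, 1.0, 0.2, 0.01   # rates and drive frequency
T_end, dt = 20000.0, 1.0                       # horizon and sampling grid
n_samp = int(T_end / dt)
samples = np.empty(n_samp)

def service_rate(t):
    # the system parameter, oscillated sinusoidally against the clock
    return mu0 * (1.0 + amp * np.sin(2.0 * np.pi * f_drive * t))

t, n, k = 0.0, 0, 0                            # clock, queue length, sample idx
next_arr, next_dep = rng.exponential(1.0 / lam), np.inf
while k < n_samp:
    t_ev = min(next_arr, next_dep)
    while k < n_samp and k * dt < t_ev:        # record on the clock-indexed grid
        samples[k] = n
        k += 1
    t = t_ev
    if next_arr <= next_dep:                   # arrival event
        n += 1
        next_arr = t + rng.exponential(1.0 / lam)
        if n == 1:
            next_dep = t + rng.exponential(1.0 / service_rate(t))
    else:                                      # departure event
        n -= 1
        next_dep = t + rng.exponential(1.0 / service_rate(t)) if n else np.inf

power = np.abs(np.fft.rfft(samples - samples.mean()))**2
k_drive = int(round(f_drive * T_end))          # FFT bin of the drive frequency
print(power[k_drive] / np.median(power[1:]))   # >> 1: induced peak stands out
```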
NASA Astrophysics Data System (ADS)
Spangehl, Thomas; Schröder, Marc; Bodas-Salcedo, Alejandro; Glowienka-Hense, Rita; Hense, Andreas; Hollmann, Rainer; Dietzsch, Felix
2017-04-01
Decadal climate predictions are commonly evaluated focusing on geophysical parameters such as temperature, precipitation or wind speed, using observational datasets and reanalyses. Alternatively, satellite-based radiance measurements can be used, combined with satellite simulator techniques that deduce virtual satellite observations from the numerical model simulations. The latter approach enables an evaluation in the instrument's parameter space and has the potential to reduce uncertainties on the reference side. Here we present evaluation methods focusing on forward-operator techniques for the Special Sensor Microwave Imager (SSM/I). The simulator is developed as an integrated part of the CFMIP Observation Simulator Package (COSP). On the observational side, the SSM/I and SSMIS Fundamental Climate Data Record (FCDR) released by CM SAF (http://dx.doi.org/10.5676/EUM_SAF_CM/FCDR_MWI/V002) is used, which provides brightness temperatures for different channels and covers the period from 1987 to 2013. The simulator is applied to hindcast simulations performed within the MiKlip project (http://fona-miklip.de), funded by the BMBF (the German Federal Ministry of Education and Research). Probabilistic evaluation results are shown based on a subset of the hindcast simulations covering the observational period.
Tools for 3D scientific visualization in computational aerodynamics
NASA Technical Reports Server (NTRS)
Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val
1989-01-01
The purpose is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example, visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high-speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a CRAY2, and the high-speed channel is a hyperchannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation. Tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively while it runs. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results are processed for viewing. The software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics is described. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed, as well as other hardware for digital video and film recording.
Parallel Performance of a Combustion Chemistry Simulation
Skinner, Gregg; Eigenmann, Rudolf
1995-01-01
We used a description of a combustion simulation's mathematical and computational methods to develop a version for parallel execution. The result was a reasonable performance improvement on small numbers of processors. We applied several important programming techniques, which we describe, in optimizing the application. This work has implications for programming languages, compiler design, and software engineering.
"FluSpec": A Simulated Experiment in Fluorescence Spectroscopy
ERIC Educational Resources Information Center
Bigger, Stephen W.; Bigger, Andrew S.; Ghiggino, Kenneth P.
2014-01-01
The "FluSpec" educational software package is a fully contained tutorial on the technique of fluorescence spectroscopy as well as a simulator on which experiments can be performed. The procedure for each of the experiments is also contained within the package along with example analyses of results that are obtained using the software.
ERIC Educational Resources Information Center
McFarland, Dennis J.
2014-01-01
Purpose: Factor analysis is a useful technique to aid in organizing multivariate data characterizing speech, language, and auditory abilities. However, knowledge of the limitations of factor analysis is essential for proper interpretation of results. The present study used simulated test scores to illustrate some characteristics of factor…
NASA Astrophysics Data System (ADS)
Yeh, Chun-Ping; Huang, Jiunn-Yuan
2018-04-01
Low-alloy steels used as structural materials in nuclear power plants are subjected to cyclic stresses during power plant operation. As a result, cracks may develop and propagate through the material. The alternating current potential drop technique is used to measure the lengths of cracks in metallic components. The depth of penetration of the alternating current is assumed to be small compared to the crack length; this assumption allows the adoption of the unfolding technique, which simplifies the problem to a surface Laplacian field. The numerical modelling of the electric potential and current density distributions for a compact tension specimen, together with the unfolded crack model, is presented in this paper. The goal of this work is to conduct numerical simulations to reduce the deviations occurring in crack length measurements. Numerical simulations were conducted on AISI 4340 low-alloy steel with different crack lengths to evaluate the electric potential distribution. From the simulated results, an optimised position for voltage measurements in the crack region is proposed.
A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components
NASA Technical Reports Server (NTRS)
Abernethy, K.
1986-01-01
The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo simulation study to evaluate the usefulness of the Weibull methods for samples with a very small number of failures and extensive censoring are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs used in the SSME Weibull analysis are described. These methods were supplemented by adding computer calculations of approximate (iterative) confidence intervals for several parameters of interest. The calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, as are the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
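To make the setup concrete, the sketch below reproduces the random-censoring model in miniature: Weibull failure times are censored by uniform random times, and the two Weibull parameters are recovered by maximum likelihood. All numbers are illustrative; no SSME-specific values are implied.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Weibull failure times with uniform random censoring (illustrative values)
shape_true, scale_true, n = 2.0, 1000.0, 10
t_fail = scale_true * rng.weibull(shape_true, n)
t_cens = rng.uniform(0.0, 1500.0, n)          # uniform censoring times
time = np.minimum(t_fail, t_cens)
failed = t_fail <= t_cens                     # False -> right-censored

def neg_log_lik(p):
    beta, eta = np.exp(p)                     # log-params stay positive
    z = (time / eta) ** beta
    # failures contribute ln f(t); censored units contribute ln S(t) = -z
    ll = np.sum(np.log(beta/eta) + (beta-1)*np.log(time/eta) - z, where=failed)
    ll += np.sum(-z, where=~failed)
    return -ll

fit = minimize(neg_log_lik, x0=np.log([1.0, np.median(time)]))
print(np.exp(fit.x))   # MLE of (shape, scale) from a heavily censored sample
```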
Di Domenico, Giovanni; Cardarelli, Paolo; Contillo, Adriano; Taibi, Angelo; Gambaccini, Mauro
2016-01-01
The quality of a radiography system is affected by several factors, a major one being the focal spot size of the x-ray tube. In fact, the measurement of this size is recognized to be of primary importance during acceptance tests and image quality evaluations of clinical radiography systems. The most common device providing an image of the focal spot emission distribution is a pin-hole camera, which requires a high tube loading in order to produce a measurable signal. This work introduces an alternative technique to obtain an image of the focal spot, through the processing of a single radiograph of a simple test object acquired with a suitable magnification. The radiograph of a magnified sharp edge is a well-established method to evaluate the extension of the focal spot profile along the direction perpendicular to the edge. From a single radiograph of a circular x-ray absorber, it is possible to extract simultaneously the radial profiles of several sharp edges with different orientations. The authors propose a technique that yields an image of the focal spot by processing these radial profiles with a pseudo-CT reconstruction. To validate this technique, the reconstruction has been applied to simulated radiographs of an ideal disk-shaped absorber, generated from various simulated focal spot distributions. Furthermore, the method has been applied to the focal spot of a commercially available mammography unit. In the case of the simulated radiographs, the results of the reconstructions have been compared to the original distributions, showing excellent agreement for both the overall distribution and the full width at half maximum measurements. In the experimental test, the method yielded images of the focal spot that were compared with the results obtained through standard techniques, namely the pin-hole camera and the slit camera. The method was proven to be effective for simulated images, and the results of the experimental test suggest that it can be considered an alternative technique for evaluating the focal spot distribution. The method offers the possibility of measuring the actual focal spot size and emission distribution under the same exposure conditions as clinical routine, avoiding the high tube loading required by the pin-hole imaging technique.
Simulation training and resident performance of singleton vaginal breech delivery.
Deering, Shad; Brown, Jill; Hodor, Jonathon; Satin, Andrew J
2006-01-01
To determine whether simulation training improves resident competency in the management of a simulated vaginal breech delivery. Without advance notice or training, residents from 2 obstetrics and gynecology residency programs participated in a standardized simulation scenario of management of an imminent term vaginal breech delivery. The scenario used an obstetric birth simulator and human actors, with the encounters digitally recorded. Residents then received a training session with the simulator on the proper techniques for vaginal breech delivery. Two weeks later they were retested using a similar simulation scenario. A physician, blinded to training status, graded the residents' performance using a standardized evaluation sheet. Statistical analysis included the Wilcoxon signed rank test, the McNemar chi-square test, regression analysis, and the paired t test as appropriate, with a P value of less than .05 considered significant. Twenty residents from 2 institutions completed all parts of the study protocol. Trained residents had significantly higher scores in 8 of 12 critical delivery components (P < .05). Overall performance of the delivery and safety in performing the delivery also improved significantly (P = .001 for both). Simulation training improved resident performance in the management of a simulated vaginal breech delivery. Performance of a term breech vaginal delivery is well suited to simulation training because it is uncommon yet inevitable, and improper technique may result in significant injury. II-2.
Image Transmission through OFDM System under the Influence of AWGN Channel
NASA Astrophysics Data System (ADS)
Krishna, Dharavathu; Anuradha, M. S., Dr.
2017-08-01
OFDM is one of the modern techniques most widely used in next-generation wireless communication networks, transmitting many forms of digital data more efficiently than existing traditional techniques. In this paper, one such form of digital data, a two-dimensional (2D) gray-scale image, is used to evaluate the functionality and overall performance of an OFDM system under the influence of a modeled AWGN channel in the MATLAB simulation environment. Within the OFDM system, different configurations of notable modulation techniques such as M-PSK and M-QAM are considered for evaluating the system, and the necessary conclusions are drawn from a comparison of the observed MATLAB simulation results.
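The evaluation loop described here is easy to reproduce in miniature. The sketch below (NumPy rather than MATLAB, QPSK only, with illustrative sizes) sends random bits through an IFFT/cyclic-prefix OFDM chain, adds complex AWGN, and counts the bit error rate:

```python
import numpy as np

rng = np.random.default_rng(9)
N, n_sym, cp = 64, 200, 16                 # subcarriers, OFDM symbols, CP len
ebn0_db = 8.0

# QPSK Gray mapping of random bits (bit 0 -> I sign, bit 1 -> Q sign)
bits = rng.integers(0, 2, (n_sym, N, 2))
sym = ((1 - 2*bits[..., 0]) + 1j*(1 - 2*bits[..., 1])) / np.sqrt(2)

tx = np.fft.ifft(sym, axis=1) * np.sqrt(N)          # unit power per sample
tx = np.concatenate([tx[:, -cp:], tx], axis=1)      # prepend cyclic prefix

# Complex AWGN; the (N+cp)/N factor charges the CP energy overhead to Eb/N0
ebn0 = 10 ** (ebn0_db / 10)
n0 = 1.0 / (2 * ebn0) * (N + cp) / N
noise = np.sqrt(n0/2) * (rng.normal(size=tx.shape) + 1j*rng.normal(size=tx.shape))
rx = tx + noise

rx = rx[:, cp:]                                     # strip cyclic prefix
est = np.fft.fft(rx, axis=1) / np.sqrt(N)           # back to subcarriers
bits_hat = np.stack([(est.real < 0), (est.imag < 0)], axis=-1).astype(int)
print(f"BER at Eb/N0 = {ebn0_db} dB: {np.mean(bits_hat != bits):.4f}")
```

An image would be sent by flattening its pixel bytes into the bit stream; the BER then directly determines the reconstructed image quality.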
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spong, D.A.
The design techniques and physics analysis of modern stellarator configurations for magnetic fusion research rely heavily on high performance computing and simulation. Stellarators, which are fundamentally 3-dimensional in nature, offer significantly more design flexibility than more symmetric devices such as the tokamak. By varying the outer boundary shape of the plasma, a variety of physics features, such as transport, stability, and heating efficiency, can be optimized. Scientific visualization techniques are an important adjunct to this effort as they provide a necessary ergonomic link between the numerical results and the intuition of the human researcher. The authors have developed a variety of visualization techniques for stellarators which both facilitate the design optimization process and allow the physics simulations to be more readily understood.
Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models
NASA Technical Reports Server (NTRS)
Parke, F. I.
1981-01-01
Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of parameters of the fluid flow (pressure, temperature and velocity vector) at many points in the fluid. Visualization of the spatial variation in the value of these parameters is important to comprehend and check the data generated, to identify the regions of interest in the flow, and to communicate information about the flow to others effectively. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. Use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied, and the results are presented.
The optical design and simulation of the collimated solar simulator
NASA Astrophysics Data System (ADS)
Zhang, Jun; Ma, Tao
2018-01-01
The solar simulator is a lighting device that can simulate solar radiation. It has been widely used in the testing of solar cells, satellite space environment simulation and ground experiments, and in the testing and calibration of precision solar sensors. The solar simulator mainly consists of a short-arc xenon lamp, ellipsoidal reflectors, a group of optical integrators, a field stop, an aspheric folding mirror and a collimating reflector. In this paper, the basic dimensions of the solar simulator's optical system are obtained by calculation. The system is then optically modeled with the LightTools software, and a simulation analysis of the solar simulator using the Monte Carlo ray-tracing technique is conducted. Finally, the simulation results are given quantitatively in diagrammatic form. The rationality of the design is verified on the basis of theory.
Satz, Alexander L
2016-07-11
Simulated screening of DNA encoded libraries indicates that the presence of truncated byproducts complicates the relationship between library member enrichment and equilibrium association constant (these truncates result from incomplete chemical reactions during library synthesis). Further, simulations indicate that some patterns observed in reported experimental data may result from the presence of truncated byproducts in the library mixture and not structure-activity relationships. Potential experimental methods of minimizing the presence of truncates are assessed via simulation; the relationship between enrichment and equilibrium association constant for libraries of differing purities is investigated. Data aggregation techniques are demonstrated that allow for more accurate analysis of screening results, in particular when the screened library contains significant quantities of truncates.
Medical Simulation Practices 2010 Survey Results
NASA Technical Reports Server (NTRS)
McCrindle, Jeffrey J.
2011-01-01
Medical Simulation Centers are an essential component of our learning infrastructure to prepare doctors and nurses for their careers. Unlike the military and aerospace simulation industry, very little has been published regarding the best practices currently in use within medical simulation centers. This survey attempts to provide insight into the current simulation practices at medical schools, hospitals, university nursing programs and community college nursing programs. Students within the MBA program at Saint Joseph's University conducted a survey of medical simulation practices during the summer 2010 semester. A total of 115 institutions responded to the survey. The survey results discuss the overall effectiveness of current simulation centers as well as the tools and techniques used to conduct the simulation activity.
AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, D.; Alfonsi, A.; Talbot, P.
2016-10-01
The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large amount of computationally-expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution that is being evaluated to assist with the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
NOViSE: a virtual natural orifice transluminal endoscopic surgery simulator.
Korzeniowski, Przemyslaw; Barrow, Alastair; Sodergren, Mikael H; Hald, Niels; Bello, Fernando
2016-12-01
Natural orifice transluminal endoscopic surgery (NOTES) is a novel technique in minimally invasive surgery whereby a flexible endoscope is inserted via a natural orifice to gain access to the abdominal cavity, leaving no external scars. This innovative use of flexible endoscopy creates many new challenges and is associated with a steep learning curve for clinicians. We developed NOViSE-the first force-feedback-enabled virtual reality simulator for NOTES training supporting a flexible endoscope. The haptic device is custom-built, and the behaviour of the virtual flexible endoscope is based on an established theoretical framework-the Cosserat theory of elastic rods. We present the application of NOViSE to the simulation of a hybrid trans-gastric cholecystectomy procedure. Preliminary results of face, content and construct validation have previously shown that NOViSE delivers the required level of realism for training of endoscopic manipulation skills specific to NOTES. VR simulation of NOTES procedures can contribute to surgical training and improve the educational experience without putting patients at risk, raising ethical issues or requiring expensive animal or cadaver facilities. In the context of an experimental technique, NOViSE could potentially facilitate NOTES development and contribute to its wider use by keeping practitioners up to date with this novel surgical technique. NOViSE is a first prototype, and the initial results indicate that it provides promising foundations for further development.
Enhancing multi-spot structured illumination microscopy with fluorescence difference
NASA Astrophysics Data System (ADS)
Ward, Edward N.; Torkelsen, Frida H.; Pal, Robert
2018-03-01
Structured illumination microscopy is a super-resolution technique used extensively in biological research. However, this technique is limited in the maximum possible resolution increase. Here we report the results of simulations of a novel enhanced multi-spot structured illumination technique. This method combines the super-resolution technique of difference microscopy with structured illumination deconvolution. Initial results give at minimum a 1.4-fold increase in resolution over conventional structured illumination in a low-noise environment. This new technique also has the potential to be expanded to further enhance axial resolution with three-dimensional difference microscopy. The requirement for precise pattern determination in this technique also led to the development of a new pattern estimation algorithm which proved more efficient and reliable than other methods tested.
Quenching behavior of molten pool with different strategies – A review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrikant, E-mail: 2014rmt9018@mnit.ac.in; Pandel, U.; Duchaniya, R. K.
After major severe accidents in nuclear reactors, there have been many concerns regarding long-term core melt stabilization following a severe accident. Numerous strategies have been thought of for quenching and stabilization of core melt, like top flooding, bottom flooding, indirect cooling, etc. However, the effectiveness of these schemes is yet to be determined properly, for which many experiments are needed. Several experiments have been performed on the coolability of melt pools under bottom flooding as well as under indirect cooling. However, these tests are very scattered because they involve different simulant materials, initial temperatures and masses of melt, which makes it very complex to judge the effectiveness of a particular technique and its advantage over the others. In this review paper, a study has been carried out on different cooling techniques applied to simulant materials of the same mass. Three techniques are compared here and the results are discussed. Under the top flooding technique it took several hours to cool the melt even without decay heat. Bottom flooding was found to be the best technique among indirect cooling, top flooding, and bottom flooding.
Results and Error Estimates from GRACE Forward Modeling over Antarctica
NASA Astrophysics Data System (ADS)
Bonin, Jennifer; Chambers, Don
2013-04-01
Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters which will result in correct high-resolution mass detection and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.
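The forward-modeling step is, at its core, an ordinary weighted least squares projection. The sketch below uses synthetic numbers; the basin "footprints", weights, and noise level are all assumptions for illustration, not GRACE values.

```python
# A minimal sketch of the weighted least squares projection idea (synthetic
# numbers, not GRACE data): observations y relate to per-basin mass changes x
# through a known leakage/design matrix A; a diagonal weight matrix W encodes
# observation uncertainty.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_basins = 200, 5
A = rng.random((n_obs, n_basins))                # hypothetical basin footprints
x_true = np.array([3.0, -1.0, 0.5, 2.0, -2.5])   # mass change per basin
y = A @ x_true + 0.1 * rng.standard_normal(n_obs)

W = np.eye(n_obs)                        # equal weights here; 1/sigma^2 in practice
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
cov = np.linalg.inv(A.T @ W @ A) * 0.1**2  # formal errors under white noise
print(np.round(x_hat, 2), np.round(np.sqrt(np.diag(cov)), 3))
```

The added process noise the abstract mentions would enter here as a regularizing term on the normal matrix before inversion.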
Variable length adjacent partitioning for PTS based PAPR reduction of OFDM signal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibraheem, Zeyid T.; Rahman, Md. Mijanur; Yaakob, S. N.
2015-05-15
Peak-to-Average Power Ratio (PAPR) is a major drawback in OFDM communication. It drives the power amplifier into nonlinear-region operation, resulting in loss of data integrity. As such, there is a strong motivation to find techniques to reduce PAPR. Partial Transmit Sequence (PTS) is an attractive scheme for this purpose. Judicious partitioning of the OFDM data frame into disjoint subsets is a pivotal component of any PTS scheme. Among the existing partitioning techniques, adjacent partitioning is characterized by an attractive trade-off between cost and performance. With the aim of determining the effects of length variability of adjacent partitions, we performed an investigation into the performance of variable length adjacent partitioning (VL-AP) and fixed length adjacent partitioning in comparison with other partitioning schemes such as pseudorandom partitioning. Simulation results with different modulation and partitioning scenarios showed that fixed length adjacent partitioning had better performance than variable length adjacent partitioning. As expected, simulation results showed a slightly better performance of the pseudorandom partitioning technique compared to fixed and variable adjacent partitioning schemes. However, as the pseudorandom technique incurs high computational complexity, adjacent partitioning schemes were still seen as favorable candidates for PAPR reduction.
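The PTS mechanics are compact enough to sketch. Below is a minimal Python illustration with assumed parameters (256 QPSK subcarriers, V = 4 fixed-length adjacent partitions, phase weights drawn from {±1, ±j}); it is not the paper's simulation setup.

```python
# A minimal PTS sketch with fixed-length adjacent partitioning: the subcarrier
# frame is split into V contiguous blocks, each block's IFFT is rotated by a
# phase factor, and the combination with the lowest PAPR is kept.
import itertools
import numpy as np

rng = np.random.default_rng(2)
N, V = 256, 4                                    # subcarriers, adjacent partitions
X = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N)    # QPSK frame

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Adjacent partitioning: V contiguous sub-blocks, zero elsewhere
parts = [np.zeros(N, complex) for _ in range(V)]
for v in range(V):
    sl = slice(v * N // V, (v + 1) * N // V)
    parts[v][sl] = X[sl]
time_parts = [np.fft.ifft(p) for p in parts]

# First weight fixed to 1; search the remaining V-1 phase factors exhaustively
best = min(
    (sum(w * tp for w, tp in zip((1,) + ws, time_parts))
     for ws in itertools.product([1, -1, 1j, -1j], repeat=V - 1)),
    key=papr_db,
)
print(f"PAPR: {papr_db(np.fft.ifft(X)):.2f} dB -> {papr_db(best):.2f} dB after PTS")
```

Pseudorandom partitioning would differ only in how the subcarrier indices are assigned to the V sub-blocks, at the cost of a more irregular (and costlier) implementation.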
NASA Astrophysics Data System (ADS)
Özer, Abdullah; Eren Semercigil, S.
2008-06-01
Flexible robot manipulators have numerous advantages over their rigid counterparts. They have an increased payload-to-weight ratio, they run at higher speeds, use less energy and smaller actuators, and they are safer during interaction with their environments. On the other hand, light design combined with external effects results in components which can oscillate with excessive amplitudes. These oscillations cause deviation from the desired path and long idle periods between tasks in order to perform the intended operation safely and accurately. This paper investigates the effectiveness of a vibration control technique for a two-link flexible robotic arm. The variable stiffness control (VSC) technique is used to control the excessive oscillations. Owing to its dissipative nature, the technique is stable, relatively insensitive to significant parameter changes, and suitable for implementation on existing robots. This research considers that the source of the flexibility is either the joints or the links or both. Simulation results of the response of the arm are presented to show the versatility of the proposed control technique. Experiments are performed on a laboratory prototype and the results are presented to test the validity of the simulations.
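To see why switching stiffness dissipates energy, consider a single-mode sketch: the spring is stiff while the displacement grows (storing energy) and soft on the return stroke, so part of the stored energy is shed at each switch. The parameters below are illustrative and are not the paper's two-link arm model.

```python
# A minimal sketch of the variable stiffness idea on a single-mode oscillator:
# stiff spring while the mass moves away from equilibrium, soft spring on the
# way back, so each half-cycle sheds energy.
import numpy as np

m, k_hi, k_lo, dt = 1.0, 4.0, 1.0, 1e-3
x, v = 1.0, 0.0                      # initial displacement, velocity
trace = []
for _ in range(20000):
    k = k_hi if x * v > 0 else k_lo  # switched-stiffness control law
    a = -(k / m) * x
    v += a * dt                      # semi-implicit Euler keeps it stable
    x += v * dt
    trace.append(x)
print(f"peak |x| over the last stretch: {max(abs(t) for t in trace[-3000:]):.4f}")
```

With this law the amplitude drops by a factor of sqrt(k_lo/k_hi) each half-cycle without any damper, which is the dissipative character the abstract refers to.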
The relative entropy is fundamental to adaptive resolution simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreis, Karsten; Potestio, Raffaello, E-mail: potestio@mpip-mainz.mpg.de
Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results do not only shed light on the what and how of adaptive resolution techniques but will also help setting up such simulations in an optimal manner.
Patel, Ravi G.; Desjardins, Olivier; Kong, Bo; ...
2017-09-01
Here, we present a verification study of three simulation techniques for fluid–particle flows, including an Euler–Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature–based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model. We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.
The relative entropy is fundamental to adaptive resolution simulations
NASA Astrophysics Data System (ADS)
Kreis, Karsten; Potestio, Raffaello
2016-07-01
Adaptive resolution techniques are powerful methods for the efficient simulation of soft matter systems in which they simultaneously employ atomistic and coarse-grained (CG) force fields. In such simulations, two regions with different resolutions are coupled with each other via a hybrid transition region, and particles change their description on the fly when crossing this boundary. Here we show that the relative entropy, which provides a fundamental basis for many approaches in systematic coarse-graining, is also an effective instrument for the understanding of adaptive resolution simulation methodologies. We demonstrate that the use of coarse-grained potentials which minimize the relative entropy with respect to the atomistic system can help achieve a smoother transition between the different regions within the adaptive setup. Furthermore, we derive a quantitative relation between the width of the hybrid region and the seamlessness of the coupling. Our results do not only shed light on the what and how of adaptive resolution techniques but will also help setting up such simulations in an optimal manner.
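For a toy picture of the relative entropy criterion, one can fit a one-parameter CG model to a fixed reference density by minimizing S_rel directly. The densities below are invented stand-ins for the atomistic and CG ensembles, not the paper's molecular systems.

```python
# A minimal sketch of relative entropy minimization on a toy 1D coordinate:
# the "atomistic" reference is a bimodal density, the CG model a single
# Gaussian whose width minimizes S_rel = integral p_AA * ln(p_AA / p_CG).
import numpy as np
from scipy.optimize import minimize_scalar

x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

p_aa = 0.5 * gauss(x, -1.0, 0.6) + 0.5 * gauss(x, 1.0, 0.6)   # reference

def s_rel(sig):
    q = gauss(x, 0.0, sig)                                    # CG candidate
    return np.sum(p_aa * np.log(p_aa / q)) * dx

opt = minimize_scalar(s_rel, bounds=(0.5, 4.0), method="bounded")
print(f"optimal CG width ~ {opt.x:.3f}, S_rel = {opt.fun:.4f}")
```

Because S_rel here is the Kullback-Leibler divergence from the reference to the CG model, the optimal Gaussian simply matches the reference's mean and variance; in molecular systems the same variational principle is applied to potentials rather than closed-form densities.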
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, Ravi G.; Desjardins, Olivier; Kong, Bo
Here, we present a verification study of three simulation techniques for fluid–particle flows, including an Euler–Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature–based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model. We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.
Validating clustering of molecular dynamics simulations using polymer models
2011-01-01
Background Molecular dynamics (MD) simulation is a powerful technique for sampling the meta-stable and transitional conformations of proteins and other biomolecules. Computational data clustering has emerged as a useful, automated technique for extracting conformational states from MD simulation data. Despite extensive application, relatively little work has been done to determine if the clustering algorithms are actually extracting useful information. A primary goal of this paper therefore is to provide such an understanding through a detailed analysis of data clustering applied to a series of increasingly complex biopolymer models. Results We develop a novel series of models using basic polymer theory that have intuitive, clearly-defined dynamics and exhibit the essential properties that we are seeking to identify in MD simulations of real biomolecules. We then apply spectral clustering, an algorithm particularly well-suited for clustering polymer structures, to our models and MD simulations of several intrinsically disordered proteins. Clustering results for the polymer models provide clear evidence that the meta-stable and transitional conformations are detected by the algorithm. The results for the polymer models also help guide the analysis of the disordered protein simulations by comparing and contrasting the statistical properties of the extracted clusters. Conclusions We have developed a framework for validating the performance and utility of clustering algorithms for studying molecular biopolymer simulations that utilizes several analytic and dynamic polymer models which exhibit well-behaved dynamics including: meta-stable states, transition states, helical structures, and stochastic dynamics. We show that spectral clustering is robust to anomalies introduced by structural alignment and that different structural classes of intrinsically disordered proteins can be reliably discriminated from the clustering results. To our knowledge, our framework is the first to utilize model polymers to rigorously test the utility of clustering algorithms for studying biopolymers. PMID:22082218
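Spectral clustering itself is available off the shelf. Below is a minimal toy version of the validation idea, assuming scikit-learn and synthetic two-basin data in place of the paper's polymer models.

```python
# A minimal sketch of the validation idea (toy data, not the paper's polymer
# models): sample "conformations" from two meta-stable basins plus a sparse
# transition region, then check that spectral clustering recovers the basins.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(3)
basin_a = rng.normal(loc=-2.0, scale=0.3, size=(200, 2))
basin_b = rng.normal(loc=+2.0, scale=0.3, size=(200, 2))
bridge = np.column_stack([np.linspace(-2, 2, 20), rng.normal(0, 0.1, 20)])
X = np.vstack([basin_a, basin_b, bridge])

labels = SpectralClustering(
    n_clusters=2, affinity="rbf", gamma=1.0, random_state=0
).fit_predict(X)

# Basin points should land in opposite clusters almost perfectly
agree = (labels[:200] != labels[200:400]).mean()
print(f"fraction of basin pairs separated: {agree:.3f}")
```

In the paper's setting the two coordinates would be replaced by pairwise structural distances between frames, but the pass/fail logic of the test is the same: known basins in, known basins out.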
SSAGES: Software Suite for Advanced General Ensemble Simulations.
Sidky, Hythem; Colón, Yamil J; Helfferich, Julian; Sikora, Benjamin J; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S; Reid, Daniel R; Sevgen, Emre; Thapar, Vikram; Webb, Michael A; Whitmer, Jonathan K; de Pablo, Juan J
2018-01-28
Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulations packages. SSAGES allows facile application of a variety of enhanced sampling techniques-including adaptive biasing force, string methods, and forward flux sampling-that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite. The code may be found at: https://github.com/MICCoM/SSAGES-public.
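As one concrete illustration of what such enhanced sampling methods do, here is a generic metadynamics-style bias on a 1D double well. This is explicitly not SSAGES code or its API, just a self-contained sketch of the bias-deposition idea with arbitrary parameters.

```python
# Generic metadynamics-style sketch (not SSAGES): Gaussians deposited along a
# collective variable flatten a double-well barrier so crossings become frequent.
import numpy as np

rng = np.random.default_rng(4)
kT, gamma, dt = 1.0, 1.0, 1e-2
w, sigma, stride = 0.1, 0.2, 50           # bias height / width / deposit interval

def f_phys(x):                            # force of V(x) = (x^2 - 1)^2
    return -4.0 * x * (x * x - 1.0)

centers = []
def f_bias(x):                            # force from deposited Gaussians
    if not centers:
        return 0.0
    c = np.asarray(centers)
    return np.sum(w * (x - c) / sigma**2 * np.exp(-0.5 * ((x - c) / sigma) ** 2))

x, crossings = -1.0, 0
for step in range(20000):
    # Overdamped Langevin step under physical + bias force
    noise = np.sqrt(2 * kT * dt / gamma) * rng.standard_normal()
    x_new = x + dt / gamma * (f_phys(x) + f_bias(x)) + noise
    if x < 0 <= x_new:
        crossings += 1
    x = x_new
    if step % stride == 0:
        centers.append(x)                 # deposit bias at current CV value
print(f"barrier crossings with bias on: {crossings}")
```

A framework like SSAGES supplies exactly the bookkeeping this sketch hand-rolls (the collective variable, the bias, the hook into the MD engine's force loop) so that the same method can drive LAMMPS, GROMACS, or other engines unchanged.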
Modeling the Performance of Direct-Detection Doppler Lidar Systems in Real Atmospheres
NASA Technical Reports Server (NTRS)
McGill, Matthew J.; Hart, William D.; McKay, Jack A.; Spinhirne, James D.
1999-01-01
Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems has assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar systems: the double-edge and the multi-channel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only about 10-20% compared to nighttime performance, provided a proper solar filter is included in the instrument design.
McGill, M J; Hart, W D; McKay, J A; Spinhirne, J D
1999-10-20
Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar system: the double-edge and the multichannel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only approximately 10-20% compared with nighttime performance, provided that a proper solar filter is included in the instrument design.
Realistic natural atmospheric phenomena and weather effects for interactive virtual environments
NASA Astrophysics Data System (ADS)
McLoughlin, Leigh
Clouds and the weather are important aspects of any natural outdoor scene, but existing dynamic techniques within computer graphics only offer the simplest of cloud representations. The problem that this work looks to address is how to provide a means of simulating clouds and weather features, such as precipitation, that are suitable for virtual environments. Techniques for cloud simulation are available within the area of meteorology, but numerical weather prediction systems are computationally expensive, give more numerical accuracy than we require for graphics and are restricted to the laws of physics. Within computer graphics, we often need to direct and adjust physical features or to bend reality to meet artistic goals, which is a key difference between the subjects of computer graphics and physical science. Pure physically-based simulations, however, evolve their solutions according to pre-set rules and are notoriously difficult to control. The challenge then is for the solution to be computationally lightweight and able to be directed in some measure while at the same time producing believable results. This work presents a lightweight physically-based cloud simulation scheme that simulates the dynamic properties of cloud formation and weather effects. The system simulates water vapour, cloud water, cloud ice, rain, snow and hail. The water model incorporates control parameters and the cloud model uses an arbitrary vertical temperature profile, with a tool described to allow the user to define this. The result of this work is that clouds can now be simulated in near real-time, complete with precipitation. The temperature profile and tool then provide a means of directing the resulting formation.
Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.
Sheppard, C W.
1969-03-01
A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
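A walk of this kind takes only a few lines. The sketch below (assumed parameters, not the paper's program) adds the two extensions mentioned above: a constant drift and a reflecting boundary at the origin.

```python
# A minimal random-walk Monte Carlo sketch: 1D walkers with drift, reflected
# at x = 0, compared against the free-space drifted mean.
import numpy as np

rng = np.random.default_rng(5)
n_walkers, n_steps, drift, step = 10_000, 500, 0.02, 1.0

x = np.zeros(n_walkers)
for _ in range(n_steps):
    x += drift + step * rng.choice([-1.0, 1.0], size=n_walkers)
    x = np.abs(x)               # reflecting boundary at the origin

print(f"mean position: {x.mean():.2f} (free-space drift alone: {drift * n_steps:.2f})")
```

An absorbing boundary, the other case the abstract mentions, would instead remove walkers that cross the origin and tally them as absorbed.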
Applications of CFD and visualization techniques
NASA Technical Reports Server (NTRS)
Saunders, James H.; Brown, Susan T.; Crisafulli, Jeffrey J.; Southern, Leslie A.
1992-01-01
In this paper, three applications are presented to illustrate current techniques for flow calculation and visualization. The first two applications use a commercial computational fluid dynamics (CFD) code, FLUENT, performed on a Cray Y-MP. The results are animated with the aid of data visualization software, apE. The third application simulates a particulate deposition pattern using techniques inspired by developments in nonlinear dynamical systems. These computations were performed on personal computers.
NDE and SHM Simulation for CFRP Composites
NASA Technical Reports Server (NTRS)
Leckey, Cara A. C.; Parker, F. Raymond
2014-01-01
Ultrasound-based nondestructive evaluation (NDE) is a common technique for damage detection in composite materials. There is a need for advanced NDE that goes beyond damage detection to damage quantification and characterization in order to enable data-driven prognostics. The damage types that exist in carbon fiber-reinforced polymer (CFRP) composites include microcracking and delaminations, and can be initiated and grown via impact forces (due to ground vehicles, tool drops, bird strikes, etc.), fatigue, and extreme environmental changes. X-ray microfocus computed tomography data, among other methods, have shown that these damage types often result in voids/discontinuities of a complex volumetric shape. The specific damage geometry and location within ply layers affect damage growth. Realistic three-dimensional NDE and structural health monitoring (SHM) simulations can aid in the development and optimization of damage quantification and characterization techniques. This paper is an overview of ongoing work towards realistic NDE and SHM simulation tools for composites, and also discusses NASA's need for such simulation tools in aeronautics and spaceflight. The paper describes the development and implementation of a custom ultrasound simulation tool that is used to model ultrasonic wave interaction with realistic 3-dimensional damage in CFRP composites. The custom code uses the elastodynamic finite integration technique and is parallelized to run efficiently on computing clusters or multicore machines.
NASA Astrophysics Data System (ADS)
Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia
2017-09-01
The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either from empirical studies using animals as the subject of experiment or is derived from mathematical equations. However, the determination of the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. Optical properties corresponding to systolic and diastolic behaviors were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.
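The OD/ODR relationship can be illustrated with a simple Beer-Lambert stand-in for the surface-based Monte Carlo model. The extinction coefficients and path lengths below are illustrative assumptions, not the paper's tissue optical properties or clinical values.

```python
# A Beer-Lambert stand-in for the paper's Monte Carlo model: optical density
# OD = log10(I0/I) at systolic and diastolic path lengths for a red/infrared
# pair, with the optical density ratio ODR tabulated against SaO2.
import numpy as np

# eps[wavelength] = (deoxy-Hb, oxy-Hb) extinction, arbitrary illustrative units
eps = {"red": (0.80, 0.10), "ir": (0.18, 0.29)}
d_dia, d_sys = 1.00, 1.05          # diastolic / systolic optical path

def od(wl, sao2, d):
    e_hb, e_hbo2 = eps[wl]
    return ((1 - sao2) * e_hb + sao2 * e_hbo2) * d   # Beer-Lambert, log10 form

for sao2 in np.arange(0.0, 1.01, 0.2):
    d_od = {wl: od(wl, sao2, d_sys) - od(wl, sao2, d_dia) for wl in eps}
    odr = d_od["red"] / d_od["ir"]
    print(f"SaO2 = {sao2:4.0%}  ODR = {odr:.3f}")
```

The monotone fall of ODR with SaO2 is what makes the calibration curve invertible; the Monte Carlo approach replaces the simple path-length model with ray statistics in a layered tissue.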
Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram
2014-04-01
Rapid presentation of stimuli in an evoked response paradigm can lead to overlap of multiple responses and consequently difficulties interpreting waveform morphology. This paper presents a deconvolution method allowing overlapping multiple responses to be disentangled. The deconvolution technique uses a least-squared error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions. It controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori by the condition number of the matrix associated with the used stimulus sequence. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations to obtain optimum results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique allowing multiple overlapping responses to be extracted and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover evoked overlapping responses. Copyright © 2013 International Federation of Clinical Neurophysiology. All rights reserved.
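The least-squared error recovery can be sketched directly: a synthetic template response is overlapped at jittered onsets, the stimulus matrix is assembled, and the response is recovered by ordinary least squares, with the matrix condition number playing exactly the a priori role described above. All waveforms and timings below are invented for illustration.

```python
# A minimal least-squares deconvolution sketch: the recording is modeled as
# y = A r + noise, where each block of rows in A places the unknown response r
# at one stimulus onset; r is recovered with lstsq.
import numpy as np

rng = np.random.default_rng(6)
L, n_stim, n_samp = 80, 60, 4000
t = np.arange(L)
r_true = np.sin(2 * np.pi * t / 40) * np.exp(-t / 25)     # template response

onsets = np.cumsum(rng.integers(40, 70, n_stim))          # jittered SOAs < L
onsets = onsets[onsets + L < n_samp]

A = np.zeros((n_samp, L))
for o in onsets:
    A[o:o + L, :] += np.eye(L)                            # overlap accumulates
y = A @ r_true + 0.2 * rng.standard_normal(n_samp)

r_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"condition number: {np.linalg.cond(A):.1f}, "
      f"recovery error: {np.linalg.norm(r_hat - r_true) / np.linalg.norm(r_true):.3f}")
```

With zero jitter (constant SOA) the columns of A become linearly dependent, the condition number blows up, and the recovery fails, which is the minimum-jitter requirement the simulations identify.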
Identification of cost effective energy conservation measures
NASA Technical Reports Server (NTRS)
Bierenbaum, H. S.; Boggs, W. H.
1978-01-01
In addition to a successful program of readily implemented conservation actions for reducing building energy consumption at Kennedy Space Center, recent detailed analyses have identified further substantial savings for buildings representative of technical facilities designed when energy costs were low. The techniques employed for determination of these energy savings consisted of facility configuration analysis, power and lighting measurements, detailed computer simulations and simulation verifications. Use of these methods resulted in identification of projected energy savings as large as $330,000 a year (approximately a two-year break-even period) in a single building. Application of these techniques to other commercial buildings is discussed.
Hawaiian Electric Advanced Inverter Test Plan - Result Summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoke, Anderson; Nelson, Austin; Prabakar, Kumaraguru
This presentation is intended to share the results of lab testing of five PV inverters with the Hawaiian Electric Companies and other stakeholders and interested parties. The tests included baseline testing of advanced inverter grid support functions, as well as distribution circuit-level tests to examine the impact of the PV inverters on simulated distribution feeders using power hardware-in-the-loop (PHIL) techniques.
NASA Technical Reports Server (NTRS)
Rummler, D. R.
1976-01-01
The results are presented of investigations to apply regression techniques to the development of methodology for creep-rupture data analysis. Regression analysis techniques are applied to the explicit description of the creep behavior of materials for space shuttle thermal protection systems. A regression analysis technique is compared with five parametric methods for analyzing three simulated and twenty real data sets, and a computer program for the evaluation of creep-rupture data is presented.
Nomura, Shunsuke; Hayashi, Motohiro; Ishikawa, Tatsuya; Yamaguchi, Koji; Kawamata, Takakazu
2018-05-19
Vascular and osteological parameters, such as the heights of the carotid bifurcation and distal end of the plaque, are important preoperative considerations for patients undergoing carotid stenosis procedures such as carotid endarterectomy. However, for patients with contrast contraindications such as allergies or nephropathies, three-dimensional computed tomography angiography (3D-CTA) is unavailable, and preoperative evaluation remains challenging. In the present study, we aimed to develop a preoperative simulation for use in patients with contrast-contraindicated carotid stenosis. Images from non-contrast neck CT and magnetic resonance imaging obtained without the Leksell stereotactic frame were uploaded to GammaPlan. Following delineation of various structures, we performed preoperative simulations to determine the relationships between vascular and osteological structures. We applied this technique in 10 patients with carotid stenosis to verify the accuracy of the simulation. In all patients, the GammaPlan simulation successfully visualized the heights of the carotid bifurcation and distal end of the plaque without the use of contrast medium. Furthermore, information regarding the location of internal arterial structures, such as calcifications and unstable plaques, could be incorporated into GammaPlan images. Thereafter, we verified simulation accuracy by comparing the simulation results with 3D-CTA and operative findings. Simulations created using GammaPlan can be used to obtain accurate vascular and osteological information regarding the heights of the carotid bifurcation and distal end of the plaque, without the use of contrast medium. The reconstruction of delineated structures using this technique may be effective for preoperative evaluation in patients with contrast-contraindicated carotid stenosis. Copyright © 2018 Elsevier Inc. All rights reserved.
Using crosscorrelation techniques to determine the impulse response of linear systems
NASA Technical Reports Server (NTRS)
Dallabetta, Michael J.; Li, Harry W.; Demuth, Howard B.
1993-01-01
A crosscorrelation method of measuring the impulse response of linear systems is presented. The technique, implementation, and limitations of this method are discussed. A simple system is designed and built using discrete components and the impulse response of a linear circuit is measured. Theoretical and software simulation results are presented.
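A minimal version of the method: drive a known system with white noise and crosscorrelate output with input; for a white input the crosscorrelation, normalized by the input variance, converges to the impulse response. The discrete first-order system and values below are assumptions for illustration, not the paper's circuit.

```python
# A crosscorrelation impulse-response sketch: for white input u with unit
# variance, Ryu(k) = E[y(t) u(t-k)] equals the impulse response h(k).
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(7)
n = 200_000
u = rng.standard_normal(n)                  # white-noise excitation
b, a = [0.1], [1.0, -0.9]                   # discrete first-order lowpass
y = lfilter(b, a, u)

lags = 50
h_est = np.array([np.dot(y[k:], u[:n - k]) for k in range(lags)]) / n
h_true = 0.1 * 0.9 ** np.arange(lags)       # analytic impulse response
print(f"max abs error: {np.abs(h_est - h_true).max():.4f}")
```

The estimation error shrinks roughly as the inverse square root of the record length, which is the main practical limitation the article discusses.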
NASA Astrophysics Data System (ADS)
Scudder, J. D.; Karimabadi, H.; Daughton, W. S.
2013-12-01
Interpretations of 2D simulations of magnetic reconnection are greatly simplified by using the flux function, usually the out of plane component of the vector potential. This theoretical device is no longer available when simulations are analyzed in 3-D. We illustrate the results of determining the local rates of flux slippage in simulations by a technique based on Maxwell's equations. The technique recovers the usual results obtained for the flux function in 2D simulations, but remains viable in 3D simulations where there is no flux function. The method has also been successfully tested for full PIC simulations where reconnection is geometrically forbidden. While such layers possess measurable flux slippages (diffusion) their level is not as strong as recorded in known 2D PIC reconnection sites using the same methodology. This approach will be used to explore the spatial incidence and strength of flux slippages across a 3D, asymmetric, strong guide field run discussed previously in the literature. Regions of diffusive behavior are illustrated where LHDI has been previously identified out on the separatrices, while much stronger flux slippages, typical of the X-regions of 2D simulations, are shown to occur elsewhere throughout the simulation. These results suggest that reconnection requires sufficiently vigorous flux slippage to be self sustaining, while non-zero flux slippage can and does occur without being at the reconnection site. A cross check of this approach is provided by the mixing ratio of tagged simulation particles of known spatial origin discussed by Daughton et al., 2013 (this meeting); they provide an integral measure of flux slippage up to the present point in the simulation. We will discuss the correlations between our Maxwell based flux slippage rates and the inferred rates of change of this mixing ratio (as recorded in the local fluid frame).
NASA Technical Reports Server (NTRS)
Molusis, J. A.
1982-01-01
An on-line technique is presented for the identification of rotor blade modal damping and frequency from rotorcraft random response test data. The identification technique is based upon a recursive maximum likelihood (RML) algorithm, which is demonstrated to have excellent convergence characteristics in the presence of random measurement noise and random excitation. The RML technique requires virtually no user interaction, provides accurate confidence bands on the parameter estimates, and can be used for continuous monitoring of modal damping during wind tunnel or flight testing. Results are presented from simulated random response data which quantify the identified parameter convergence behavior for various levels of random excitation. The data length required for acceptable parameter accuracy is shown to depend upon the amplitude of random response and the modal damping level. Random response amplitudes of 1.25 degrees to 0.05 degrees are investigated. The RML technique is applied to hingeless rotor test data. The inplane lag regressing mode is identified at different rotor speeds. The identification from the test data is compared with the simulation results and with other available estimates of frequency and damping.
Haji-Saeed, B; Sengupta, S K; Testorf, M; Goodhue, W; Khoury, J; Woods, C L; Kierstead, J
2006-05-10
We propose and demonstrate a new photorefractive real-time holographic deconvolution technique for adaptive one-way image transmission through aberrating media by means of four-wave mixing. In contrast with earlier methods, which typically required various codings of the exact phase or two-way image transmission for correcting phase distortion, our technique relies on one-way image transmission through the use of exact phase information. Our technique can simultaneously correct both amplitude and phase distortions. We include several forms of image degradation, various test cases, and experimental results. We characterize the performance as a function of the input beam ratios for four metrics: signal-to-noise ratio, normalized root-mean-square error, edge restoration, and peak-to-total energy ratio. In our characterization we use false-color graphic images to display the best beam-intensity ratio two-dimensional region(s) for each of these metrics. Test cases are simulated at the optimal values of the beam-intensity ratios. We demonstrate our results through both experiment and computer simulation.
Fast Simulation of Electromagnetic Showers in the ATLAS Calorimeter: Frozen Showers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barberio, E.; Boudreau, J.
2011-11-29
One of the most time-consuming processes in simulating pp interactions in the ATLAS detector at the LHC is the simulation of electromagnetic showers in the calorimeter. In order to speed up event simulation, several parametrisation methods are available in ATLAS. In this paper we present a short description of the frozen shower technique, together with some recent benchmarks and a comparison with full simulation. The expected high rate of proton-proton collisions in the ATLAS detector at the LHC requires large samples of simulated (Monte Carlo) events to study various physics processes. A detailed simulation of particle reactions ('full simulation') in the ATLAS detector is based on GEANT4 and is very accurate. However, due to the complexity of the detector, high particle multiplicity and GEANT4 itself, the average CPU time spent to simulate a typical QCD event in a pp collision is 20 or more minutes on modern computers. During detector simulation the largest share of time is spent in the calorimeters (up to 70%), most of which is required for electromagnetic particles in the electromagnetic (EM) part of the calorimeters. This is the motivation for fast simulation approaches which reduce the simulation time without affecting the accuracy. Several of the fast simulation methods available within the ATLAS simulation framework (the standard Athena-based simulation program) are discussed here with the focus on the novel frozen shower library (FS) technique. The results obtained with FS are presented here as well.
Classifying and modelling spiral structures in hydrodynamic simulations of astrophysical discs
NASA Astrophysics Data System (ADS)
Forgan, D. H.; Ramón-Fox, F. G.; Bonnell, I. A.
2018-05-01
We demonstrate numerical techniques for automatic identification of individual spiral arms in hydrodynamic simulations of astrophysical discs. Building on our earlier work, which used tensor classification to identify regions that were `spiral-like', we can now obtain fits to spirals for individual arm elements. We show this process can even detect spirals in relatively flocculent spiral patterns, but the resulting fits to logarithmic `grand-design' spirals are less robust. Our methods not only permit the estimation of pitch angles, but also direct measurements of the spiral arm width and pattern speed. In principle, our techniques will allow the tracking of material as it passes through an arm. Our demonstration uses smoothed particle hydrodynamics simulations, but we stress that the method is suitable for any finite-element hydrodynamics system. We anticipate our techniques will be essential to studies of star formation in disc galaxies, and attempts to find the origin of recently observed spiral structure in protostellar discs.
Investigation of Models and Estimation Techniques for GPS Attitude Determination
NASA Technical Reports Server (NTRS)
Garrick, J.
1996-01-01
Much work has been done in the Flight Dynamics Analysis Branch (FDAB) in developing algorithms to support the new and growing field of attitude determination using the Global Positioning System (GPS) constellation of satellites. Flight Dynamics has the responsibility to investigate any new technology and incorporate the innovations in the attitude ground support systems developed to support future missions. The work presented here is an investigative analysis that will produce the needed adaptation to allow the Flight Dynamics Support System (FDSS) to incorporate GPS phase measurements and produce observation measurements compatible with the FDSS. A simulator was developed to produce the necessary measurement data to test the models developed for the different estimation techniques used by FDAB. This paper gives an overview of the current modeling capabilities of the simulator, describes the models and algorithms for the adaptation of GPS measurement data, and presents results from each of the estimation techniques. Future analysis efforts to evaluate the simulator and models against in-flight GPS measurement data are also outlined.
NASA Astrophysics Data System (ADS)
Trivedi, Nitin; Kumar, Manoj; Haldar, Subhasis; Deswal, S. S.; Gupta, Mridula; Gupta, R. S.
2017-09-01
A charge plasma technique based dopingless (DL) accumulation mode (AM) junctionless (JL) cylindrical surrounding gate (CSG) MOSFET has been proposed and extensively investigated. The proposed device has no physical junction at the source-to-channel and channel-to-drain interfaces. The complete silicon pillar is considered undoped. The high free electron density, or induced N+ region, is created by keeping the work function of the source/drain metal contacts lower than the work function of undoped silicon. Thus, fabrication complexity is drastically reduced by curbing the requirement of high temperature doping techniques. The electrical/analog characteristics of the proposed device have been extensively investigated using numerical simulation and are compared with a conventional junctionless cylindrical surrounding gate (JL-CSG) MOSFET with identical dimensions. The ATLAS-3D device simulator is used for the numerical simulations. The results show that the proposed device is more immune to short-channel effects than the conventional JL-CSG MOSFET and is suitable for faster switching applications due to its higher I ON/ I OFF ratio.
An Integrated Study on a Novel High Temperature High Entropy Alloy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Shizhong
2016-12-31
This report summarizes our recent work on theoretical modeling, simulation, and experimental validation of the simulation results for new refractory high entropy alloy (HEA) design and oxide-doped refractory HEA research. Stability and thermodynamics simulations of potentially thermally stable candidates were performed, and related oxide-doped HEA samples were synthesized and characterized. The development of HEA ab initio density functional theory and molecular dynamics physical property simulation methods and experimental texture validation techniques, the achievements already reached, course work development, student and postdoc training, and future research directions are briefly introduced.
Innovations in surgery simulation: a review of past, current and future techniques
Badash, Ido; Burtt, Karen; Solorzano, Carlos A.; Carey, Joseph N.
2016-01-01
As a result of recent work-hours limitations and concerns for patient safety, innovations in extraclinical surgical simulation have become a desired part of residency education. Current simulation models, including cadaveric, animal, bench-top, virtual reality (VR) and robotic simulators are increasingly used in surgical training programs. Advances in telesurgery, three-dimensional (3D) printing, and the incorporation of patient-specific anatomy are paving the way for simulators to become integral components of medical training in the future. Evidence from the literature highlights the benefits of including simulations in surgical training; skills acquired through simulations translate into improvements in operating room performance. Moreover, simulations are rapidly incorporating new medical technologies and offer increasingly high-fidelity recreations of procedures. As a result, both novice and expert surgeons are able to benefit from their use. As dedicated, structured curricula are developed that incorporate simulations into daily resident training, simulated surgeries will strengthen the surgeon’s skill set, decrease hospital costs, and improve patient outcomes. PMID:28090509
Innovations in surgery simulation: a review of past, current and future techniques.
Badash, Ido; Burtt, Karen; Solorzano, Carlos A; Carey, Joseph N
2016-12-01
As a result of recent work-hours limitations and concerns for patient safety, innovations in extraclinical surgical simulation have become a desired part of residency education. Current simulation models, including cadaveric, animal, bench-top, virtual reality (VR) and robotic simulators are increasingly used in surgical training programs. Advances in telesurgery, three-dimensional (3D) printing, and the incorporation of patient-specific anatomy are paving the way for simulators to become integral components of medical training in the future. Evidence from the literature highlights the benefits of including simulations in surgical training; skills acquired through simulations translate into improvements in operating room performance. Moreover, simulations are rapidly incorporating new medical technologies and offer increasingly high-fidelity recreations of procedures. As a result, both novice and expert surgeons are able to benefit from their use. As dedicated, structured curricula are developed that incorporate simulations into daily resident training, simulated surgeries will strengthen the surgeon's skill set, decrease hospital costs, and improve patient outcomes.
A Unified Framework for Brain Segmentation in MR Images
Yazdani, S.; Yusof, R.; Karimian, A.; Riazi, A. H.; Bennamoun, M.
2015-01-01
Brain MRI segmentation is an important issue for discovering the brain structure and diagnosing subtle anatomical changes in different brain diseases. However, due to several artifacts, brain tissue segmentation remains a challenging task. The aim of this paper is to improve the automatic segmentation of brain into gray matter, white matter, and cerebrospinal fluid in magnetic resonance images (MRI). We proposed an automatic hybrid image segmentation method that integrates the modified statistical expectation-maximization (EM) method and spatial information combined with support vector machines (SVM). The combined method gives more accurate results than can be achieved with its individual techniques, as demonstrated through experiments on both real data and simulated images. Experiments are carried out on both synthetic and real MRI. The results of the proposed technique are evaluated against manual segmentation results and other methods based on real T1-weighted scans from the Internet Brain Segmentation Repository (IBSR) and simulated images from BrainWeb. The Kappa index is calculated to assess the performance of the proposed framework relative to the ground truth and expert segmentations. The results demonstrate that the proposed combined method has satisfactory results on both simulated MRI and real brain datasets. PMID:26089978
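The EM piece of such a pipeline is easy to demonstrate in isolation. The sketch below (toy 1D intensities, not the paper's method or data) fits a Gaussian mixture with EM to separate three synthetic "tissue" classes.

```python
# A toy sketch of the EM step: 1D intensity samples drawn from three synthetic
# "tissue" Gaussians are segmented by fitting a Gaussian mixture with EM.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)
intensities = np.concatenate([
    rng.normal(0.25, 0.04, 3000),   # stand-in for CSF
    rng.normal(0.55, 0.05, 4000),   # stand-in for gray matter
    rng.normal(0.80, 0.04, 3000),   # stand-in for white matter
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)
labels = gmm.predict(intensities)
print("class means:", np.sort(gmm.means_.ravel()).round(2))
```

The hybrid method in the paper augments exactly this intensity-only model with spatial context and an SVM stage, which is where its robustness to artifacts comes from.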
Sequential Gaussian co-simulation of rate decline parameters of longwall gob gas ventholes.
Karacan, C Özgen; Olea, Ricardo A
2013-04-01
Gob gas ventholes (GGVs) are used to control methane inflows into a longwall mining operation by capturing the gas within the overlying fractured strata before it enters the work environment. Using geostatistical co-simulation techniques, this paper maps the parameters of their rate decline behaviors across the study area, a longwall mine in the Northern Appalachian basin. Geostatistical gas-in-place (GIP) simulations were performed, using data from 64 exploration boreholes, and GIP data were mapped within the fractured zone of the study area. In addition, methane flowrates monitored from 10 GGVs were analyzed using decline curve analyses (DCA) techniques to determine parameters of decline rates. Surface elevation showed the most influence on methane production from GGVs and thus was used to investigate its relation with DCA parameters using correlation techniques on normal-scored data. Geostatistical analysis was pursued using sequential Gaussian co-simulation with surface elevation as the secondary variable and with DCA parameters as the primary variables. The primary DCA variables were effective percentage decline rate, rate at production start, rate at the beginning of forecast period, and production end duration. Co-simulation results were presented to visualize decline parameters at an area-wide scale. Wells located at lower elevations, i.e., at the bottom of valleys, tend to perform better in terms of their rate declines compared to those at higher elevations. These results were used to calculate drainage radii of GGVs using GIP realizations. The calculated drainage radii are close to ones predicted by pressure transient tests.
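The decline curve analysis step reduces each venthole's flowrate history to a few parameters. A minimal sketch with synthetic monthly rates follows; an exponential Arps decline is assumed here for simplicity, and the paper's actual decline model and values may differ.

```python
# A minimal decline curve analysis sketch: fit q(t) = q_i * exp(-D t) to noisy
# monthly flowrates, recovering the initial rate and effective decline that
# serve as the primary co-simulation variables.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)
t = np.arange(36.0)                               # months on production
q_i_true, d_true = 500.0, 0.08
q = q_i_true * np.exp(-d_true * t) * (1 + 0.05 * rng.standard_normal(t.size))

def arps_exp(t, q_i, d):
    return q_i * np.exp(-d * t)

(q_i_hat, d_hat), _ = curve_fit(arps_exp, t, q, p0=(400.0, 0.05))
eff_decline = 1 - np.exp(-d_hat)                  # effective decline per month
print(f"q_i ~ {q_i_hat:.0f}, D ~ {d_hat:.3f}/mo, effective ~ {eff_decline:.1%}/mo")
```

Fitted parameters like these, one set per venthole, are the primary variables that the sequential Gaussian co-simulation then maps across the mine with surface elevation as the secondary variable.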
Space Geodetic Technique Co-location in Space: Simulation Results for the GRASP Mission
NASA Astrophysics Data System (ADS)
Kuzmicz-Cieslak, M.; Pavlis, E. C.
2011-12-01
The Global Geodetic Observing System (GGOS) places very stringent requirements on the accuracy and stability of future realizations of the International Terrestrial Reference Frame (ITRF): an origin definition at 1 mm or better at epoch and a temporal stability on the order of 0.1 mm/y, with similar numbers for the scale (0.1 ppb) and orientation components. These goals were derived from the requirements of Earth science problems that are currently the international community's highest priority. None of the geodetic positioning techniques can achieve this goal alone. This is due in part to the non-observability of certain attributes from a single technique. Another limitation is imposed by the extent and uniformity of the tracking network and the schedule of observational availability and number of suitable targets. The final limitation derives from the difficulty of "tying" the reference points of each technique at the same site to an accuracy that will support the GGOS goals. The future GGOS network will decisively address the ground segment and, to a certain extent, the space segment requirements. The JPL-proposed multi-technique mission GRASP (Geodetic Reference Antenna in Space) attempts to resolve the accurate tie between techniques using their co-location in space, onboard a well-designed spacecraft equipped with GNSS receivers, an SLR retroreflector array, a VLBI beacon, and a DORIS system. Using the anticipated system performance for all four techniques at the time the GGOS network is completed (ca. 2020), we generated a number of simulated data sets for the development of a TRF. Our simulation studies examine the degree to which GRASP can improve the inter-technique "tie" issue compared to the classical approach, and the likely modus operandi for such a mission. The success of the examined scenarios is judged by the quality of the origin and scale definition of the resulting TRF.
Comparison of Sequential and Variational Data Assimilation
NASA Astrophysics Data System (ADS)
Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht
2017-04-01
Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential in using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which require no additional features within the modeling process, i.e., they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to a lack of comparison between the two techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
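To make the sequential side of this comparison concrete, one Ensemble Kalman Filter analysis step with perturbed observations can be sketched as follows; the three-state toy ensemble, linear observation operator, and noise levels are invented for illustration and are unrelated to the paper's HBV configuration:

    import numpy as np

    def enkf_update(ensemble, obs, obs_err_std, H):
        """One EnKF analysis step with perturbed observations.
        ensemble: (n_state, n_members); H: linear observation operator."""
        n_state, n_mem = ensemble.shape
        rng = np.random.default_rng(42)
        X = ensemble - ensemble.mean(axis=1, keepdims=True)
        P = X @ X.T / (n_mem - 1)                     # sample covariance
        R = np.eye(len(obs)) * obs_err_std**2
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        perturbed = obs[:, None] + rng.normal(0.0, obs_err_std, (len(obs), n_mem))
        return ensemble + K @ (perturbed - H @ ensemble)

    # Toy example: three model states, the first one observed (e.g. streamflow)
    H = np.array([[1.0, 0.0, 0.0]])
    ens = np.random.default_rng(1).normal([10.0, 5.0, 2.0], 1.0, (50, 3)).T
    updated = enkf_update(ens, np.array([12.0]), 0.5, H)
    print(updated.mean(axis=1))

The variational alternative would instead pose the noise terms as decision variables of an optimization problem, which is why it requires sensitivities of the model outputs.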
NASA Technical Reports Server (NTRS)
Lang, Steve; Tao, W.-K.; Simpson, J.; Ferrier, B.; Einaudi, Franco (Technical Monitor)
2001-01-01
Six different convective-stratiform separation techniques, including a new technique that utilizes the ratio of vertical and terminal velocities, are compared and evaluated using two-dimensional numerical simulations of a tropical [Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE)] and midlatitude continental [Preliminary Regional Experiment for STORM-Central (PRESTORM)] squall line. The simulations are made using two different numerical advection schemes: fourth-order and positive definite advection. Comparisons are made in terms of rainfall, cloud coverage, mass fluxes, apparent heating and moistening, mean hydrometeor profiles, CFADs (Contoured Frequency with Altitude Diagrams), microphysics, and latent heating retrieval. Overall, it was found that the different separation techniques produced results that qualitatively agreed. However, the quantitative differences were significant. Observational comparisons were unable to conclusively evaluate the performance of the techniques. Latent heating retrieval was shown to be sensitive to the choice of separation technique, mainly due to the stratiform region for methods that found very little stratiform rain. The midlatitude PRESTORM simulation was found to be nearly invariant with respect to advection type for most quantities, while for TOGA COARE, fourth-order advection produced numerous shallow convective cores and positive definite advection produced fewer cells that were both broader and penetrated deeper above the freezing level.
Uchida, Masafumi
2014-04-01
A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation images. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, the recent advances in CT imaging techniques and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.
Fatigue crack localization with near-field acoustic emission signals
NASA Astrophysics Data System (ADS)
Zhou, Changjiang; Zhang, Yunfeng
2013-04-01
This paper presents an acoustic emission (AE) source localization technique using near-field AE signals induced by crack growth and propagation. The proposed technique is based on the phase difference in the AE signals measured by two identical AE sensing elements spaced apart at a pre-specified distance. This phase difference results in the canceling-out of certain frequency contents of the signals, which can be related to the AE source direction. Experimental data from simulated AE sources, such as pencil breaks, were used along with analytical results from moment tensor analysis. It is observed that the theoretical predictions, numerical simulations, and experimental test results are in good agreement. Real data from field monitoring of an existing fatigue crack on a bridge were also used to test the system. Results show that the proposed method is fairly effective in determining the AE source direction in thick plates commonly encountered in civil engineering structures.
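The relation between the cancelled frequencies and source direction is not given in the abstract; under the simplest plane-wave assumption, the summed output of two sensors spaced d apart first cancels when the path difference d*cos(theta) equals half a wavelength. A hedged sketch of that relation, with assumed spacing and wave speed:

    import numpy as np

    def first_notch_frequency(d, c, theta_deg):
        """Lowest cancelled frequency for an arrival angle theta measured
        from the two-sensor axis (plane-wave assumption)."""
        delay = d * np.cos(np.radians(theta_deg)) / c
        return 1.0 / (2.0 * delay)

    def source_angle_from_notch(d, c, f_notch):
        """Invert the relation: estimate arrival angle from an observed notch."""
        return np.degrees(np.arccos(c / (2.0 * d * f_notch)))

    d, c = 0.05, 3200.0        # sensor spacing [m], plate wave speed [m/s], assumed
    f = first_notch_frequency(d, c, 30.0)
    print(f"first notch ~ {f/1e3:.1f} kHz -> "
          f"angle {source_angle_from_notch(d, c, f):.1f} deg")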
Solving search problems by strongly simulating quantum circuits
Johnson, T. H.; Biamonte, J. D.; Clark, S. R.; Jaksch, D.
2013-01-01
Simulating quantum circuits using classical computers lets us analyse the inner workings of quantum algorithms. The most complete type of simulation, strong simulation, is believed to be generally inefficient. Nevertheless, several efficient strong simulation techniques are known for restricted families of quantum circuits and we develop an additional technique in this article. Further, we show that strong simulation algorithms perform another fundamental task: solving search problems. Efficient strong simulation techniques allow solutions to a class of search problems to be counted and found efficiently. This enhances the utility of strong simulation methods, known or yet to be discovered, and extends the class of search problems known to be efficiently simulable. Relating strong simulation to search problems also bounds the computational power of efficiently strongly simulable circuits; if they could solve all problems in P this would imply that all problems in NP and #P could be solved in polynomial time. PMID:23390585
Techniques for hot structures testing
NASA Technical Reports Server (NTRS)
Deangelis, V. Michael; Fields, Roger A.
1990-01-01
Hot structures testing has been going on since the early 1960's, beginning with the Mach 6 X-15 airplane. Early hot structures test programs at NASA-Ames-Dryden focused on operational testing required to support the X-15 flight test program, and early hot structures research projects focused on developing lab test techniques to simulate flight thermal profiles. More recent efforts involved numerous large and small hot structures test programs that served to develop test methods and measurement techniques to provide data that promoted the correlation of test data with results from analytical codes. In Nov. 1988 a workshop was sponsored that focused on the correlation of hot structures test data with analysis. Limited material is drawn from the workshop, and more formal documentation is provided of topics that focus on hot structures test techniques used at NASA-Ames-Dryden. Topics covered include the data acquisition and control of testing, the quartz lamp heater systems, current strain and temperature sensors, and hot structures test techniques used to simulate the flight thermal environment in the lab.
C-arm technique using distance driven method for nephrolithiasis and kidney stones detection
NASA Astrophysics Data System (ADS)
Malalla, Nuhad; Sun, Pengfei; Chen, Ying; Lipkin, Michael E.; Preminger, Glenn M.; Qin, Jun
2016-04-01
Distance driven is a state-of-the-art method used for reconstruction in x-ray imaging techniques. C-arm tomography is an x-ray imaging technique that provides three-dimensional information about the object by moving the C-shaped gantry around the patient. With a limited view angle, the C-arm system was investigated to generate volumetric data of the object with low radiation dosage and examination time. This paper is a new simulation study of two reconstruction methods based on distance driven: the simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM). Distance driven is an efficient method that has low computation cost and is free of artifacts compared with other methods such as ray-driven and pixel-driven methods. Projection images of spherical objects were simulated with a virtual C-arm system with a total view angle of 40 degrees. Results show the ability of the limited-angle C-arm technique to generate three-dimensional images with distance-driven reconstruction.
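Of the two reconstruction methods named, MLEM is compact enough to sketch. The fragment below is a generic MLEM iteration for a nonnegative image given a precomputed system matrix; computing distance-driven footprints for the actual C-arm geometry is not shown, and the tiny random system matrix is purely illustrative:

    import numpy as np

    def mlem(A, projections, n_iter=50):
        """MLEM for x >= 0 with A @ x ~ projections.
        A: (n_rays, n_voxels) system matrix (e.g. distance-driven footprints)."""
        x = np.ones(A.shape[1])
        sensitivity = A.sum(axis=0)                # backprojection of ones
        for _ in range(n_iter):
            forward = A @ x
            ratio = projections / np.maximum(forward, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
        return x

    # Tiny illustrative problem: 4 voxels, 6 rays
    rng = np.random.default_rng(0)
    A = rng.random((6, 4))
    x_true = np.array([0.0, 1.0, 2.0, 0.5])
    print(np.round(mlem(A, A @ x_true, n_iter=200), 2))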
NASA Technical Reports Server (NTRS)
Antreasian, Peter G.
1988-01-01
Two orbit simulations, one representing the actual Geopotential Research Mission (GRM) orbit and the other representing the orbit estimated from orbit determination techniques, are presented. A computer algorithm was created to simulate GRM's drag compensation mechanism so the fuel expenditure and proof mass trajectories relative to the spacecraft centroid could be calculated for the mission. The results of the GRM DISCOS simulation demonstrated that the spacecraft can essentially be drag-free. The results showed that the centroid of the spacecraft can be controlled so that it will not deviate more than 1.0 mm in any direction from the centroid of the proof mass.
Simulation of Mirror Electron Microscopy Caustic Images in Three-Dimensions
NASA Astrophysics Data System (ADS)
Kennedy, S. M.; Zheng, C. X.; Jesson, D. E.
A full, three-dimensional (3D) ray tracing approach is developed to simulate the caustics visible in mirror electron microscopy (MEM). The method reproduces MEM image contrast resulting from 3D surface relief. To illustrate the potential of the simulation methods, we study the evolution of crater contrast associated with a movie of GaAs structures generated by the droplet epitaxy technique. Specifically, we simulate the image contrast resulting from both a precursor stage and the final crater morphology, which is consistent with an inverted pyramid consisting of (111) facet walls. The method therefore facilitates the study of how self-assembled quantum structures evolve with time and, in particular, the development of anisotropic features including faceting.
NASA Astrophysics Data System (ADS)
Spies, M.; Rieder, H.; Orth, Th.; Maack, S.
2012-05-01
In this contribution we address the beam field simulation of 2D ultrasonic arrays using the Generalized Point Source Synthesis technique. Aiming at the inspection of cylindrical components (e.g., pipes), the influence of concave and convex surface curvatures, respectively, has been evaluated for a commercial probe. We have compared these results with those obtained using a commercial simulation tool. In civil engineering, the ultrasonic inspection of highly attenuating concrete structures has been advanced by the development of dry contact point transducers, mainly applied in array arrangements. Our respective simulations for a widely used commercial probe are validated using experimental results acquired on concrete half-spheres with diameters from 200 mm up to 650 mm.
Hybrid modeling method for a DEP based particle manipulation.
Miled, Mohamed Amine; Gagne, Antoine; Sawan, Mohamad
2013-01-30
In this paper, a new modeling approach for Dielectrophoresis (DEP) based particle manipulation is presented. The proposed method fulfills missing links in finite element modeling between the multiphysics simulation and the biological behavior. This technique is amongst the first steps to develop a more complex platform covering several types of manipulations such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electrical field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation, while MATLAB interprets the results to calculate cell displacement and sends the new information to ANSYS for the next iteration. The beta version of the proposed technique takes into account particle shape, weight, and electrical properties. The first results obtained are consistent with experimental results.
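As a rough picture of the physics that the ANSYS/MATLAB loop exchanges, the sketch below marches a particle under the standard time-averaged DEP force balanced against Stokes drag; the field-gradient profile and all particle and medium properties are assumed values, not those of the paper:

    import numpy as np

    eps_m = 78 * 8.854e-12    # medium permittivity (water-like, assumed)
    r = 5e-6                  # particle radius [m], assumed
    re_cm = 0.5               # Re(Clausius-Mossotti factor), assumed
    eta = 1e-3                # medium viscosity [Pa s]

    def dep_velocity(grad_e2):
        """F = 2*pi*eps_m*r^3*Re(CM)*grad|E|^2 balanced against Stokes
        drag 6*pi*eta*r*v gives the particle's terminal velocity."""
        force = 2 * np.pi * eps_m * r**3 * re_cm * grad_e2
        return force / (6 * np.pi * eta * r)

    # March through a precomputed gradient profile (stand-in for the exported field)
    x, dt = 0.0, 1e-3
    for _ in range(1000):
        grad_e2 = 1e13 * np.exp(-x / 1e-4)    # synthetic grad|E|^2 [V^2/m^3]
        x += dep_velocity(grad_e2) * dt
    print(f"displacement after 1 s: {x * 1e6:.1f} um")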
Fast simulation of electromagnetic and hadronic showers in SpaCal calorimeter at the H1 experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raičević, Nataša, E-mail: raicevic@mail.desy.de; Glazov, Alexandre
2016-03-25
The fast simulation of showers induced by electrons (positrons) in the H1 lead/scintillating-fiber calorimeter, SpaCal, based on the shower library technique has been presented previously. In this paper we show results on the linearity and uniformity of the reconstructed electron/positron cluster energy in the electromagnetic section of SpaCal for simulations based on the shower library and on the GFLASH shower parametrisation. The shapes of clusters originating from photon and hadron candidates in SpaCal are analysed and the experimental distributions compared with the two simulations.
Jurassic Diabase from Leesburg, VA: A Proposed Lunar Simulant
NASA Technical Reports Server (NTRS)
Taylor, Patrick T.; Lowman, P. D.; Nagihara, Seiichi; Milam, M. B.; Nakamura, Yosio
2008-01-01
A study of future lunar seismology and heat flow is being carried out as part of the NASA Lunar Sortie Science Program. This study will include new lunar drilling techniques, using a regolith simulant, for emplacement of instruments. Previous lunar simulants, such as JSC-1 and MLS-1, were not available when the study began, so a local simulant source was required. Diabase from a quarry at Leesburg, Virginia, was obtained from the Luck Stone Corporation. We report here initial results of a petrographic examination of this rock, henceforth designated GSC-1.
NASA Astrophysics Data System (ADS)
Nakano, Masaru; Kubota, Fumiko; Inamori, Yutaka; Mitsuyuki, Keiji
Manufacturing system designers should concentrate on designing and planning manufacturing systems instead of spending their efforts on creating the simulation models to verify the design. This paper proposes a method and its tool to navigate the designers through the engineering process and generate the simulation model automatically from the design results. The design agent also supports collaborative design projects among different companies or divisions with distributed engineering and distributed simulation techniques. The idea was implemented and applied to a factory planning process.
How Learning Techniques Initiate Simulation of Human Mind
ERIC Educational Resources Information Center
Girija, C.
2014-01-01
The simulation of the human mind often helps in the understanding of an abstract concept by representing it in a realistic model and simplistic way so that a learner develops an understanding of the key concepts. Bain (1873) and James (1890) in their work suggested that thoughts and body activity result from interactions among neurons within the brain.…
NASA Technical Reports Server (NTRS)
Synowicki, R. A.; Hale, Jeffrey S.; Woollam, John A.
1992-01-01
The University of Nebraska is currently evaluating Low Earth Orbit (LEO) simulation techniques as well as a variety of thin film protective coatings to withstand atomic oxygen (AO) degradation. Both oxygen plasma ashers and an electron cyclotron resonance (ECR) source are being used for LEO simulation. Thin film coatings are characterized by optical techniques including Variable Angle Spectroscopic Ellipsometry, Optical spectrophotometry, and laser light scatterometry. Atomic Force Microscopy (AFM) is also used to characterize surface morphology. Results on diamondlike carbon (DLC) films show that DLC degrades with simulated AO exposure at a rate comparable to Kapton polyimide. Since DLC is not as susceptible to environmental factors such as moisture absorption, it could potentially provide more accurate measurements of AO fluence on short space flights.
NASA Astrophysics Data System (ADS)
Miyagawa, Chihiro; Kobayashi, Takumi; Taishi, Toshinori; Hoshikawa, Keigo
2014-09-01
Based on the growth of 3-inch diameter c-axis sapphire using the vertical Bridgman (VB) technique, numerical simulations were made and used to guide the growth of a 6-inch diameter sapphire. A 2D model of the VB hot-zone was constructed, the seeding interface shape of the 3-inch diameter sapphire as revealed by green laser scattering was estimated numerically, and the temperature distributions of two VB hot-zone models designed for 6-inch diameter sapphire growth were numerically simulated to achieve the optimal growth of large crystals. The hot-zone model with one heater was selected and prepared, and 6-inch diameter c-axis sapphire boules were actually grown, as predicted by the numerical results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, K.M.
1992-10-01
Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds.
[Bone drilling simulation by three-dimensional imaging].
Suto, Y; Furuhata, K; Kojima, T; Kurokawa, T; Kobayashi, M
1989-06-01
The three-dimensional display technique has a wide range of medical applications. Pre-operative planning is one typical application: in orthopedic surgery, three-dimensional image processing has been used very successfully. We have employed this technique in pre-operative planning for orthopedic surgery, and have developed a simulation system for bone-drilling. Positive results were obtained by pre-operative rehearsal; when a region of interest is indicated by means of a mouse on the three-dimensional image displayed on the CRT, the corresponding region appears on the slice image which is displayed simultaneously. Consequently, the status of the bone-drilling is constantly monitored. In developing this system, we have placed emphasis on the quality of the reconstructed three-dimensional images, on fast processing, and on the easy operation of the surgical planning simulation.
NASA Technical Reports Server (NTRS)
Salmasi, A. B. (Editor); Springett, J. C.; Sumida, J. T.; Richter, P. H.
1984-01-01
The design and implementation of the Land Mobile Satellite Service (LMSS) channel simulator as a facility for an end to end hardware simulation of the LMSS communications links, primarily with the mobile terminal is described. A number of studies are reported which show the applications of the channel simulator as a facility for validation and assessment of the LMSS design requirements and capabilities by performing quantitative measurements and qualitative audio evaluations for various link design parameters and channel impairments under simulated LMSS operating conditions. As a first application, the LMSS channel simulator was used in the evaluation of a system based on the voice processing and modulation (e.g., NBFM with 30 kHz of channel spacing and a 2 kHz rms frequency deviation for average talkers) selected for the Bell System's Advanced Mobile Phone Service (AMPS). The various details of the hardware design, qualitative audio evaluation techniques, signal to channel impairment measurement techniques, the justifications for criteria of different parameter selection in regards to the voice processing and modulation methods, and the results of a number of parametric studies are further described.
Plasma Processing of Lunar Regolith Simulant for Diverse Applications
NASA Technical Reports Server (NTRS)
Schofield, Elizabeth C.; Sen, Subhayu; O'Dell, J. Scott
2008-01-01
Versatile manufacturing technologies for extracting resources from the moon are needed to support future space missions. Of particular interest is the production of gases and metals from lunar resources for life support, propulsion, and in-space fabrication. Deposits made from lunar regolith could yield highly emissive coatings and near-net shaped parts for replacement or repair of critical components. Equally important is the development of high fidelity lunar simulants for ground based validation of potential lunar surface operations. Described herein is an innovative plasma processing technique for in situ production of gases, metals, coatings, and deposits from lunar regolith, and synthesis of high fidelity lunar simulant from NASA issued lunar simulant JSC-1. Initial plasma reduction trials of JSC-1 lunar simulant have indicated production of metallic iron and magnesium. Evolution of carbon monoxide has been detected subsequent to reduction of the simulant using the plasma process. Plasma processing of the simulant has also resulted in glassy phases resembling the volcanic glass and agglutinates found in lunar regolith. Complete and partial glassy phase deposits have been obtained by varying the plasma process variables. Experimental techniques, product characterization, and process gas analysis will be discussed.
Quantitative computer simulations of extraterrestrial processing operations
NASA Technical Reports Server (NTRS)
Vincent, T. L.; Nikravesh, P. E.
1989-01-01
The automation of a small, solid propellant mixer was studied. Temperature control is under investigation. A numerical simulation of the system is under development and will be tested using different control options. Control system hardware is currently being put into place. The construction of mathematical models and simulation techniques for understanding various engineering processes is also studied. Computer graphics packages were utilized for better visualization of the simulation results. The mechanical mixing of propellants is examined. Simulation of the mixing process is being done to study how one can control for chaotic behavior to meet specified mixing requirements. An experimental mixing chamber is also being built. It will allow visual tracking of particles under mixing. The experimental unit will be used to test ideas from chaos theory, as well as to verify simulation results. This project has applications to extraterrestrial propellant quality and reliability.
NASA Technical Reports Server (NTRS)
Zamora, M. A.
1977-01-01
Consumables analysis/crew training simulator interface requirements were defined. Two aspects were investigated: consumables analysis support techniques for crew training simulators for advanced spacecraft programs, and the applicability of these techniques to the crew training simulator for the space shuttle program in particular.
Development of a nondestructive vibration technique for bond assessment of Space Shuttle tiles
NASA Technical Reports Server (NTRS)
Moslehy, Faissal A.
1994-01-01
This final report describes the achievements of the above titled project. The project is funded by NASA-KSC (Grant No. NAG 10-0117) for the period of 1 Jan. to 31 Dec. 1993. The purpose of this project was to develop a nondestructive, noncontact technique based on the 'vibration signature' of tile systems to quantify the bond conditions of the thermal protection system (TPS) tiles of Space Shuttle orbiters. The technique uses a laser rapid scan system, modal measurements, and finite element modeling. Finite element models were developed for tiles bonded to both a clamped and a deformable integrated skin-stringer orbiter mid-fuselage. Results showed that the size and location of a disbonded tile can be determined from frequency and mode shape information. Moreover, a frequency response survey was used to quickly identify the disbonded tiles. The finite element results were compared with experimentally determined frequency responses of a 17-tile test panel, where a rapid-scan laser system was employed. An excellent degree of correlation between the mathematical simulation and experimental results was realized. An inverse solution for single-tile assemblies was also derived and is being implemented in a computer program that can interact with the modal testing software. The output of the program displays the size and location of the disbond. This program has been tested with simulated input (i.e., finite element data), and excellent agreement between predicted and simulated disbonds was shown. Finally, laser vibration imaging and acoustic emission techniques were shown to be well suited for detecting and monitoring progressive damage in Graphite/Epoxy composite materials.
NASA Astrophysics Data System (ADS)
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2018-02-01
Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
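A minimal sketch of the two-step idea is given below, using scikit-learn's FastICA and a simple quantile-mapping corrector. Fitting the unmixing matrix on the model run and reusing it for the observations is a simplification of the procedure described here, and all data are synthetic:

    import numpy as np
    from sklearn.decomposition import FastICA

    def quantile_map(model, obs):
        """Univariate quantile-mapping correction of `model` toward `obs`."""
        ranks = np.argsort(np.argsort(model)) / (len(model) - 1)
        return np.quantile(obs, ranks)

    def ica_bias_correct(model_fields, obs_fields):
        """Step 1: unmix grid-cell series into independent components,
        correct each as a univariate series, then mix back.
        Step 2: finish with per-grid-cell quantile mapping.
        Inputs are (time, n_gridcells) arrays."""
        ica = FastICA(n_components=model_fields.shape[1], random_state=0)
        s_model = ica.fit_transform(model_fields)
        s_obs = ica.transform(obs_fields)
        s_corr = np.column_stack([quantile_map(s_model[:, i], s_obs[:, i])
                                  for i in range(s_model.shape[1])])
        step1 = ica.inverse_transform(s_corr)
        return np.column_stack([quantile_map(step1[:, i], obs_fields[:, i])
                                for i in range(step1.shape[1])])

    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 2.0, (1000, 4))
    model = 1.3 * obs + rng.normal(0.0, 1.0, obs.shape)   # biased simulation
    corrected = ica_bias_correct(model, obs)
    print(corrected.mean(axis=0), obs.mean(axis=0))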
Propulsion simulation for magnetically suspended wind tunnel models
NASA Technical Reports Server (NTRS)
Joshi, Prakash B.; Beerman, Henry P.; Chen, James; Krech, Robert H.; Lintz, Andrew L.; Rosen, David I.
1990-01-01
The feasibility was investigated of simulating propulsion-induced aerodynamic effects on scaled aircraft models in wind tunnels employing Magnetic Suspension and Balance Systems (MSBS). The investigation concerned itself with techniques of generating exhaust jets of appropriate characteristics. The objectives were to: (1) define thrust and mass flow requirements of jets; (2) evaluate techniques for generating propulsive gas within volume limitations imposed by magnetically-suspended models; (3) conduct simple diagnostic experiments for techniques involving new concepts; and (4) recommend experiments for demonstration of propulsion simulation techniques. Various techniques of generating exhaust jets of appropriate characteristics were evaluated on scaled aircraft models in wind tunnels with MSBS. Four concepts of remotely-operated propulsion simulators were examined. Three conceptual designs involving innovative adaptation of convenient technologies (compressed gas cylinders, liquid, and solid propellants) were developed. The fourth innovative concept, namely the laser-assisted thruster, which can potentially simulate both inlet and exhaust flows, was found to require very high power levels for small thrust levels.
NASA Astrophysics Data System (ADS)
Irwandi, Irwandi; Fashbir; Daryono
2018-04-01
The Neo-Deterministic Seismic Hazard Assessment (NDSHA) method is a seismic hazard assessment method that has the advantage of realistic physical simulation of the source, propagation, and geological-geophysical structure. This simulation is capable of generating synthetic seismograms at the sites being observed. At the regional NDSHA scale, calculation of the strong ground motion is based on the 1D modal summation technique because it is more efficient in computation. In this article, we verify the results of synthetic seismogram calculations against field observations from the Pidie Jaya earthquake of 7 December 2016, which had a moment magnitude of 6.5. Those data were recorded by broadband seismometers installed by BMKG (Indonesian Agency for Meteorology, Climatology and Geophysics). The synthetic seismograms agree well with observations at some stations, while other stations show discrepancies with the observed results. Based on these observations, the 1D modal summation technique is evidently well verified for thin-sediment regions (near the pre-Tertiary basement), but less suitable for thick-sediment regions. The reason is that the 1D modal summation technique excludes the amplification effect of seismic waves occurring within thick sediment regions. So another approach is needed, e.g., the 2D finite-difference hybrid method, which is part of the local-scale NDSHA method.
Architectural-level power estimation and experimentation
NASA Astrophysics Data System (ADS)
Ye, Wu
With the emergence of a plethora of embedded and portable applications and ever increasing integration levels, power dissipation of integrated circuits has moved to the forefront as a design constraint. Recent years have also seen a significant trend towards designs starting at the architectural (or RT) level. These demand accurate yet fast RT-level power estimation methodologies and tools. This thesis addresses issues and experiments associated with architectural-level power estimation. An execution-driven, cycle-accurate RT-level power simulator, SimplePower, was developed using transition-sensitive energy models. It is based on the architecture of a five-stage pipelined RISC datapath in both 0.35 um and 0.8 um technologies and can execute the integer subset of the instruction set of SimpleScalar. SimplePower measures the energy consumed in the datapath, memory, and on-chip buses. During the development of SimplePower, a partitioning power modeling technique was proposed to model the energy consumed in complex functional units. The accuracy of this technique was validated with HSPICE simulation results for a register file and a shifter. A novel, selectively gated pipeline register optimization technique was proposed to reduce the datapath energy consumption. It uses the decoded control signals to selectively gate the data fields of the pipeline registers. Simulation results show that this technique can reduce the datapath energy consumption by 18--36% for a set of benchmarks. A low-level back-end compiler optimization, register relabeling, was applied to reduce the on-chip instruction cache data bus switching activity. Its impact was evaluated by SimplePower. Results show that it can reduce the energy consumed in the instruction data buses by 3.55--16.90%. A quantitative evaluation was conducted of the impact of six state-of-the-art high-level compilation techniques on both datapath and memory energy consumption. The experimental results provide valuable insight for designers developing future power-aware compilation frameworks for embedded systems.
Statistical Emulator for Expensive Classification Simulators
NASA Technical Reports Server (NTRS)
Ross, Jerret; Samareh, Jamshid A.
2016-01-01
Expensive simulators prevent any kind of meaningful analysis to be performed on the phenomena they model. To get around this problem the concept of using a statistical emulator as a surrogate representation of the simulator was introduced in the 1980's. Presently, simulators have become more and more complex and as a result running a single example on these simulators is very expensive and can take days to weeks or even months. Many new techniques have been introduced, termed criteria, which sequentially select the next best (most informative to the emulator) point that should be run on the simulator. These criteria methods allow for the creation of an emulator with only a small number of simulator runs. We follow and extend this framework to expensive classification simulators.
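A common emulator-plus-criterion pattern consistent with this description is a Gaussian process refit after each simulator run, with the next input chosen where the predictive uncertainty is largest. The sketch below shows that loop for a one-dimensional regression stand-in (the paper targets classification simulators); the toy simulator and candidate grid are assumptions:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_simulator(x):
        """Stand-in for the costly code (assumed; replace with a real run)."""
        return np.sin(3 * x) + 0.5 * x

    candidates = np.linspace(0.0, 4.0, 200)[:, None]
    X = np.array([[0.5], [3.5]])                  # two initial runs
    y = expensive_simulator(X).ravel()

    # Sequentially add the candidate the emulator is least certain about
    for _ in range(8):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                      normalize_y=True).fit(X, y)
        _, std = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(std)]
        X = np.vstack([X, x_next])
        y = np.append(y, expensive_simulator(x_next))
    print(f"emulator built from only {len(X)} simulator runs")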
FPGA in-the-loop simulations of cardiac excitation model under voltage clamp conditions
NASA Astrophysics Data System (ADS)
Othman, Norliza; Adon, Nur Atiqah; Mahmud, Farhanahani
2017-01-01
The voltage clamp technique allows the detection of single channel currents in biological membranes, helping to identify a variety of electrophysiological problems at the cellular level. In this paper, a simulation study of the voltage clamp technique is presented to analyse the current-voltage (I-V) characteristics of ion currents based on the Luo-Rudy Phase-I (LR-I) cardiac model using a Field Programmable Gate Array (FPGA). Nowadays, cardiac models are becoming increasingly complex, which can make simulations very time-consuming. Thus, a real-time hardware implementation using an FPGA could be one of the best solutions for high-performance real-time systems, as it provides high configurability and performance and is able to execute in parallel. For shorter development time while retaining high-confidence results, FPGA-based rapid prototyping through the HDL Coder from MATLAB software has been used to construct the algorithm for the simulation system. Basically, the HDL Coder is capable of converting the designed MATLAB Simulink blocks into a hardware description language (HDL) for the FPGA implementation. As a result, the voltage-clamp fixed-point design of the LR-I model has been successfully conducted in MATLAB Simulink, and the simulation of the I-V characteristics of the ionic currents has been verified on a Xilinx FPGA Virtex-6 XC6VLX240T development board through an FPGA-in-the-loop (FIL) simulation.
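To show what a clamped I-V sweep looks like, the sketch below evaluates a simplified peak fast sodium current using the LR-I activation rate constants with inactivation frozen near its holding value. This is a drastic reduction of the full LR-I model and of the FPGA implementation, intended only to illustrate the voltage-clamp protocol; the holding value of h is an assumption:

    import numpy as np

    def m_inf(v):
        """Steady-state activation gate of the LR-I fast sodium current."""
        am = 0.32 * (v + 47.13) / (1.0 - np.exp(-0.1 * (v + 47.13)))
        bm = 0.08 * np.exp(-v / 11.0)
        return am / (am + bm)

    g_na, e_na = 23.0, 54.4      # LR-I conductance [mS/cm^2] and reversal [mV]

    def peak_i_na(v, h0=0.98):
        """Peak I_Na under clamp, assuming activation reaches steady state
        while inactivation stays near its holding value h0 (assumed)."""
        return g_na * m_inf(v) ** 3 * h0 * (v - e_na)

    for v in np.arange(-80.0, 41.0, 10.0):   # command potentials [mV]
        print(f"V = {v:6.1f} mV  ->  I_Na ~ {peak_i_na(v):9.2f} uA/cm^2")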
Computer Science Techniques Applied to Parallel Atomistic Simulation
NASA Astrophysics Data System (ADS)
Nakano, Aiichiro
1998-03-01
Recent developments in parallel processing technology and multiresolution numerical algorithms have established large-scale molecular dynamics (MD) simulations as a new research mode for studying materials phenomena such as fracture. However, this requires large system sizes and long simulated times. We have developed: i) Space-time multiresolution schemes; ii) fuzzy-clustering approach to hierarchical dynamics; iii) wavelet-based adaptive curvilinear-coordinate load balancing; iv) multilevel preconditioned conjugate gradient method; and v) spacefilling-curve-based data compression for parallel I/O. Using these techniques, million-atom parallel MD simulations are performed for the oxidation dynamics of nanocrystalline Al. The simulations take into account the effect of dynamic charge transfer between Al and O using the electronegativity equalization scheme. The resulting long-range Coulomb interaction is calculated efficiently with the fast multipole method. Results for temperature and charge distributions, residual stresses, bond lengths and bond angles, and diffusivities of Al and O will be presented. The oxidation of nanocrystalline Al is elucidated through immersive visualization in virtual environments. A unique dual-degree education program at Louisiana State University will also be discussed in which students can obtain a Ph.D. in Physics & Astronomy and a M.S. from the Department of Computer Science in five years. This program fosters interdisciplinary research activities for interfacing High Performance Computing and Communications with large-scale atomistic simulations of advanced materials. This work was supported by NSF (CAREER Program), ARO, PRF, and Louisiana LEQSF.
NASA Astrophysics Data System (ADS)
Han, Tao; Chen, Lingyun; Lai, Chao-Jen; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.
2009-02-01
Images of mastectomy breast specimens have been acquired with a bench-top experimental cone beam CT (CBCT) system. The resulting images have been segmented to model an uncompressed breast for simulation of various CBCT techniques. To further simulate conventional or tomosynthesis mammographic imaging for comparison with the CBCT technique, a deformation technique was developed to convert the CT data for an uncompressed breast to a compressed breast without altering the breast volume or regional breast density. With this technique, 3D breast deformation is separated into two 2D deformations in the coronal and axial views. To preserve the total breast volume and regional tissue composition, each 2D deformation step was achieved by altering the square pixels into rectangular ones with the pixel areas unchanged and re-sampling with the original square pixels using bilinear interpolation. The compression was modeled by first stretching the breast in the superior-inferior direction in the coronal view. The image data were first deformed by distorting the voxels with a uniform distortion ratio; these deformed data were then deformed again using distortion ratios varying with the breast thickness and re-sampled. The deformation procedures were then applied in the axial view to stretch the breast in the chest-wall-to-nipple direction while shrinking it in the medial-to-lateral direction, after which the data were re-sampled and converted into data for uniform cubic voxels. Threshold segmentation was applied to the final deformed image data to obtain the 3D compressed breast model. Our results show that the original segmented CBCT image data were successfully converted into those for a compressed breast with the same volume and regional density preserved. Using this compressed breast model, conventional and tomosynthesis mammograms were simulated for comparison with CBCT.
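The area-preserving step, stretching one axis by a factor s and the other by 1/s so the Jacobian stays unity, can be sketched in a few lines; the bilinear resampling below uses scipy's map_coordinates, and the square phantom is synthetic:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def area_preserving_stretch(slice2d, s):
        """Stretch a 2D slice by s along rows and 1/s along columns with
        bilinear interpolation; the unit Jacobian keeps the total area
        (and so the regional tissue composition) unchanged."""
        ny, nx = slice2d.shape
        out_ny, out_nx = int(round(ny * s)), int(round(nx / s))
        rows = np.arange(out_ny) / s      # sample the source on a compressed grid
        cols = np.arange(out_nx) * s
        rr, cc = np.meshgrid(rows, cols, indexing="ij")
        return map_coordinates(slice2d, [rr, cc], order=1, mode="nearest")

    phantom = np.zeros((100, 100))
    phantom[30:70, 30:70] = 1.0                        # 40 x 40 "tissue" block
    stretched = area_preserving_stretch(phantom, 1.5)
    print(phantom.sum(), stretched.sum())              # areas approximately equal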
NASA Astrophysics Data System (ADS)
Darudi, Ahmad; Bakhshi, Hadi; Asgari, Reza
2015-05-01
In this paper we present the results of image restoration using data taken by a Hartmann sensor. The aberration is measured by a Hartmann sensor in which the object itself is used as the reference. The Point Spread Function (PSF) is then simulated and used for image reconstruction using the Lucy-Richardson technique. A technique is also presented for quantitative evaluation of the Lucy-Richardson deconvolution.
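A self-contained Lucy-Richardson iteration can be written directly from its multiplicative update; in the sketch below the Gaussian PSF and two-point scene are illustrative stand-ins for a Hartmann-derived PSF and a real image:

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, n_iter=30):
        """Lucy-Richardson deconvolution with a normalized PSF."""
        psf = psf / psf.sum()
        psf_mirror = psf[::-1, ::-1]
        estimate = np.full_like(blurred, blurred.mean())
        for _ in range(n_iter):
            reblurred = fftconvolve(estimate, psf, mode="same")
            ratio = blurred / np.maximum(reblurred, 1e-12)
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

    # Toy demonstration with a Gaussian PSF
    y, x = np.mgrid[-7:8, -7:8]
    psf = np.exp(-(x**2 + y**2) / (2.0 * 2.0**2))
    scene = np.zeros((64, 64))
    scene[20, 20] = scene[40, 45] = 1.0
    blurred = np.clip(fftconvolve(scene, psf / psf.sum(), mode="same"), 0.0, None)
    restored = richardson_lucy(blurred, psf)
    print(np.unravel_index(restored.argmax(), restored.shape))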
Scattering of circumferential waves in a cracked annulus
NASA Astrophysics Data System (ADS)
Valle, Christine; Qu, Jianmin; Jacobs, Laurence J.
2000-05-01
This paper considers guided waves propagating in the circumferential direction of an annulus with a radial crack, with the objective of developing an ultrasonic technique that can detect and characterize these cracks. Specifically, the finite element method is used to simulate the propagation and scattering of guided circumferential waves in a cracked annulus. This method fosters a better understanding of the wave fields, so that a transducer configuration used in the field can be optimized for crack detection/characterization. Both a point source (simulating laser generated ultrasound) and a distributed source (simulating a PZT transducer) are modeled and compared to corresponding experimental results. Animations (snapshots at different instants in time) of the strain energy field in the annulus are given for various combinations of load profiles, incident angles, and incident frequencies. Results of this paper provide the necessary design guidelines for developing nondestructive ultrasonic techniques for the detection/characterization of radial cracks in cylindrical pressure vessels, gas/oil pipes, and shaft/bearing systems.
Harte, Philip T.
1994-01-01
Proper discretization of a ground-water-flow field is necessary for the accurate simulation of ground-water flow by models. Although discretization guidelines are available to ensure numerical stability, current guidelines are flexible enough (particularly in vertical discretization) to allow for some ambiguity of model results. Testing of two common types of vertical-discretization schemes (the horizontal and nonhorizontal-model-layer approaches) was done to simulate sloping hydrogeologic units characteristic of New England. Differences in the results of model simulations using these two approaches are small. Numerical errors associated with use of nonhorizontal model layers are small (4 percent), even though this discretization technique does not adhere to the strict formulation of the finite-difference method. It was concluded that vertical discretization by means of the nonhorizontal layer approach has advantages in representing the hydrogeologic units tested and in simplicity of model-data input. In addition, vertical distortion of model cells by this approach may improve the representation of shallow flow processes.
Using cognitive architectures to study issues in team cognition in a complex task environment
NASA Astrophysics Data System (ADS)
Smart, Paul R.; Sycara, Katia; Tang, Yuqing
2014-05-01
Cognitive social simulation is a computer simulation technique that aims to improve our understanding of the dynamics of socially-situated and socially-distributed cognition. This makes cognitive social simulation techniques particularly appealing as a means to undertake experiments into team cognition. The current paper reports on the results of an ongoing effort to develop a cognitive social simulation capability that can be used to undertake studies into team cognition using the ACT-R cognitive architecture. This capability is intended to support simulation experiments using a team-based problem solving task, which has been used to explore the effect of different organizational environments on collective problem solving performance. The functionality of the ACT-R-based cognitive social simulation capability is presented and a number of areas of future development work are outlined. The paper also describes the motivation for adopting cognitive architectures in the context of social simulation experiments and presents a number of research areas where cognitive social simulation may be useful in developing a better understanding of the dynamics of team cognition. These include the use of cognitive social simulation to study the role of cognitive processes in determining aspects of communicative behavior, as well as the impact of communicative behavior on the shaping of task-relevant cognitive processes (e.g., the social shaping of individual and collective memory as a result of communicative exchanges). We suggest that the ability to perform cognitive social simulation experiments in these areas will help to elucidate some of the complex interactions that exist between cognitive, social, technological and informational factors in the context of team-based problem-solving activities.
Enhancing multi-spot structured illumination microscopy with fluorescence difference
Torkelsen, Frida H.
2018-01-01
Structured illumination microscopy is a super-resolution technique used extensively in biological research. However, this technique is limited in the maximum possible resolution increase. Here we report the results of simulations of a novel enhanced multi-spot structured illumination technique. This method combines the super-resolution technique of difference microscopy with structured illumination deconvolution. Initial results give at minimum a 1.4-fold increase in resolution over conventional structured illumination in a low-noise environment. This new technique also has the potential to be expanded to further enhance axial resolution with three-dimensional difference microscopy. The requirement for precise pattern determination in this technique also led to the development of a new pattern estimation algorithm which proved more efficient and reliable than other methods tested. PMID:29657751
A New Computational Technique for the Generation of Optimised Aircraft Trajectories
NASA Astrophysics Data System (ADS)
Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto
2017-12-01
A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
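The ɛ-constraint scalarization itself is easy to sketch for two objectives: minimize one cost with the other bounded by ɛ, then refine ɛ where the traced front changes fastest. The two quadratic costs and the refinement rule below are illustrative assumptions, not the paper's aircraft dynamics models:

    import numpy as np
    from scipy.optimize import minimize, NonlinearConstraint

    # Stand-ins for two competing trajectory costs (e.g. fuel burn and time)
    f_fuel = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
    f_time = lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2

    def eps_constraint_point(eps):
        """Minimize f_fuel subject to f_time <= eps."""
        con = NonlinearConstraint(f_time, -np.inf, eps)
        res = minimize(f_fuel, x0=[0.5, 0.5], method="SLSQP", constraints=[con])
        return res.fun

    # Adaptive bisection over eps: refine where the front changes fastest
    front = {e: eps_constraint_point(e) for e in (0.5, 4.0)}
    for _ in range(6):
        keys = sorted(front)
        gaps = [abs(front[b] - front[a]) for a, b in zip(keys, keys[1:])]
        i = int(np.argmax(gaps))
        mid = 0.5 * (keys[i] + keys[i + 1])
        front[mid] = eps_constraint_point(mid)
    for e in sorted(front):
        print(f"eps = {e:5.2f}  ->  min fuel cost = {front[e]:6.3f}")

In the paper this scalarization is applied to the nonlinear optimal control problem after the pseudospectral discretisation turns it into a finite-dimensional nonlinear program.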
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2010-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will make use of distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. Research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique and validating this technique through simulation and flight test of the X-48B aircraft. The X-48B aircraft is an 8.5 percent-scale hybrid wing body aircraft demonstrator designed by The Boeing Company (Chicago, Illinois, USA), built by Cranfield Aerospace Limited (Cranfield, Bedford, United Kingdom) and flight tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California, USA). Based on data from flight test maneuvers performed at Dryden Flight Research Center, aerodynamic parameter estimation was performed using linear regression and output error techniques. An input design technique that uses temporal separation for de-correlation of control surfaces is proposed, and simulation and flight test results are compared with the aerodynamic database. This paper will present a method to determine individual control surface aerodynamic derivatives.
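The de-correlation benefit of temporal separation can be seen in a small linear-regression sketch: when two surfaces are excited in disjoint time windows, their regressor columns are nearly uncorrelated and ordinary least squares recovers the individual derivatives. The flight-like signals and coefficient values below are invented, not X-48B data:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    t = np.arange(n) * 0.02
    # Surface deflections excited one after another (temporal separation)
    d1 = np.where(t < 4.0, np.sin(2 * np.pi * 0.5 * t), 0.0)
    d2 = np.where(t >= 5.0, np.sin(2 * np.pi * 0.5 * t), 0.0)
    alpha = 0.05 * np.sin(2 * np.pi * 0.1 * t)          # angle of attack
    X = np.column_stack([np.ones(n), alpha, d1, d2])
    true_coefs = np.array([0.01, -0.8, 0.12, -0.07])    # illustrative derivatives
    cm = X @ true_coefs + rng.normal(0.0, 0.002, n)     # measured coefficient
    est, *_ = np.linalg.lstsq(X, cm, rcond=None)
    print("estimates:", np.round(est, 3),
          " d1-d2 correlation:", round(float(np.corrcoef(d1, d2)[0, 1]), 3))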
NASA Astrophysics Data System (ADS)
Fetsco, Sara Elizabeth
There are several topics that introductory physics students typically have difficulty understanding. The purpose of this thesis is to investigate if multiple instructional techniques will help students to better understand and retain the material. The three units analyzed in this study are graphing motion, projectile motion, and conservation of momentum. For each unit students were taught using new or altered instructional methods including online laboratory simulations, inquiry labs, and interactive demonstrations. Additionally, traditional instructional methods such as lecture and problem sets were retained. Effectiveness was measured through pre- and post-tests and student opinion surveys. Results suggest that incorporating multiple instructional techniques into teaching will improve student understanding and retention. Students stated that they learned well from all of the instructional methods used except the online simulations.
Heat transfer monitoring by means of the hot wire technique and finite element analysis software.
Hernández Wong, J; Suarez, V; Guarachi, J; Calderón, A; Rojas-Trigos, J B; Juárez, A G; Marín, E
2014-01-01
The study of radial heat transfer in a homogeneous and isotropic substance with a linear heat source along its axial axis is reported. For this purpose, the hot wire characterization technique has been used to obtain the temperature distribution as a function of radial distance from the axial axis and exposure time. Also, the solution of the transient heat transport equation for this problem was obtained under appropriate boundary conditions by means of the finite element technique. A comparison between experimental, conventional theoretical model, and numerically simulated results is made to demonstrate the utility of the finite element analysis simulation methodology in investigating the thermal response of substances. Copyright © 2013 Elsevier Ltd. All rights reserved.
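For reference, the classical infinite-line-source solution that underlies the hot wire technique can be evaluated directly; the thermal properties and heating power below are assumed, illustrative values:

    import numpy as np
    from scipy.special import exp1

    def line_source_temperature(r, t, q_per_len, k, alpha):
        """Temperature rise around an infinite line source in an infinite
        homogeneous medium: dT = q'/(4*pi*k) * E1(r^2/(4*alpha*t))."""
        return q_per_len / (4.0 * np.pi * k) * exp1(r**2 / (4.0 * alpha * t))

    # Assumed glycerol-like sample heated at 1 W per metre of wire
    k, alpha, q = 0.285, 9.5e-8, 1.0
    for t in (1.0, 10.0, 100.0):
        dt_rise = line_source_temperature(0.5e-3, t, q, k, alpha)
        print(f"t = {t:6.1f} s  ->  dT at r = 0.5 mm: {dt_rise:.3f} K")

At long times E1 approaches its logarithmic limit, which gives the linear-in-ln(t) regime usually fitted in hot wire measurements.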
Recent progress in the NDE of cast ship propulsion components
NASA Astrophysics Data System (ADS)
Spies, Martin; Rieder, Hans; Dillhöfer, Alexander; Rauhut, Markus; Taeubner, Kai; Kreier, Peter
2014-02-01
The failure of propulsion components of ships and ferries can lead to serious environmental and economic damage or even the loss of lives. For ultrasonic inspection of such large components we employ mechanized scanning and defect reconstruction using the Synthetic Aperture Focusing Technique (SAFT). We report on results obtained in view of the detection of defects with different inspection techniques. Also, we address the issue of Probability of Detection by reporting results obtained in POD and MAPOD-studies (Model-Assisted POD) using experimental and simulated data. Finally, we show recent results of surface and sub-surface inspection using optical and eddy current techniques.
Gelb, Lev D; Chakraborty, Somendra Nath
2011-12-14
The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. Results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase. © 2011 American Institute of Physics
NASA Astrophysics Data System (ADS)
Lantz, Jonas; Gupta, Vikas; Henriksson, Lilian; Karlsson, Matts; Persson, Ander; Carhall, Carljohan; Ebbers, Tino
2017-11-01
In this study, cardiac blood flow was simulated using Computational Fluid Dynamics and compared to in vivo flow measurements by 4D Flow MRI. In total, nine patients with various heart diseases were studied. Geometry and heart wall motion for the simulations were obtained from clinical CT measurements, with 0.3x0.3x0.3 mm spatial resolution and 20 time frames covering one heartbeat. The CFD simulations included pulmonary veins, left atrium and ventricle, mitral and aortic valve, and ascending aorta. Mesh sizes were on the order of 6-16 million cells, depending on the size of the heart, in order to resolve both papillary muscles and trabeculae. The computed flow field agreed very well visually with 4D Flow MRI, with characteristic vortices and flow structures seen in both techniques. Regression analysis showed excellent agreement between the two techniques for both peak flow rate and stroke volume. We demonstrated the feasibility, and more importantly, fidelity of cardiac flow simulations by comparing CFD results to in vivo measurements. Both qualitative and quantitative results agreed well with the 4D Flow MRI measurements. In addition, the developed simulation methodology enables "what if" scenarios, such as optimization of valve replacement and other surgical procedures. Funded by the Wallenberg Foundation.
Kelly, S C; O'Rourke, M J
2010-01-01
This work reports on the implementation and validation of a two-system, single-analysis, fluid-structure interaction (FSI) technique that uses the finite volume (FV) method for performing simulations on abdominal aortic aneurysm (AAA) geometries. This FSI technique, which was implemented in OpenFOAM, included fluid and solid mesh motion and incorporated a non-linear material model to represent AAA tissue. Fully implicit coupling was implemented, ensuring that both the fluid and solid domains reached convergence within each time step. The fluid and solid parts of the FSI code were validated independently through comparison with experimental data, before performing a complete FSI simulation on an idealized AAA geometry. Results from the FSI simulation showed that a vortex formed at the proximal end of the aneurysm during systolic acceleration, and moved towards the distal end of the aneurysm during diastole. Wall shear stress (WSS) values were found to peak at both the proximal and distal ends of the aneurysm and remain low along the centre of the aneurysm. The maximum von Mises stress in the aneurysm wall was found to be 408 kPa, and this occurred at the proximal end of the aneurysm, while the maximum displacement of 2.31 mm occurred in the centre of the aneurysm. These results were found to be consistent with results from other FSI studies in the literature.
Meteor burst communications for LPI applications
NASA Astrophysics Data System (ADS)
Schilling, D. L.; Apelewicz, T.; Lomp, G. R.; Lundberg, L. A.
A technique that enhances the performance of meteor-burst communications is described. The technique, the feedback adaptive variable rate (FAVR) system, maintains a feedback channel that allows the transmitted bit rate to mimic the time behavior of the received power so that a constant bit energy is maintained. This results in a constant probability of bit error in each transmitted bit. Experimentally determined meteor-burst channel characteristics and FAVR system simulation results are presented.
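The FAVR principle lends itself to a short numerical sketch (a hypothetical Python illustration with assumed channel parameters, not the authors' system): scale the transmitted bit rate in proportion to the decaying received power so that the energy per bit, and hence the bit error probability, stays constant.

import numpy as np

# Hypothetical meteor-burst trail: received power decays exponentially.
P0 = 1e-6            # initial received power, W (assumed)
tau = 0.2            # trail decay time constant, s (assumed)
R0 = 64_000          # initial bit rate, bit/s (assumed)
Eb_target = P0 / R0  # energy per bit that FAVR holds constant

t = np.linspace(0.0, 1.0, 11)
P = P0 * np.exp(-t / tau)   # power reported back over the feedback channel
R = P / Eb_target           # bit rate mimics the received-power time behavior

for ti, Pi, Ri in zip(t, P, R):
    print(f"t={ti:3.1f} s  P={Pi:.2e} W  R={Ri:9.0f} bit/s  Eb={Pi/Ri:.2e} J")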
Terrill Vosbein, Heidi A; Boatz, Jerry A; Kenney, John W
2005-12-22
The moment analysis method (MA) has been tested for the case of 2S --> 2P ([core]ns1 --> [core]np1) transitions of alkali metal atoms (M) doped into cryogenic rare gas (Rg) matrices using theoretically validated simulations. Theoretical/computational M/Rg system models are constructed with precisely defined parameters that closely mimic known M/Rg systems. Monte Carlo (MC) techniques are then employed to generate simulated absorption and magnetic circular dichroism (MCD) spectra of the 2S --> 2P M/Rg transition to which the MA method can be applied with the goal of seeing how effective the MA method is in re-extracting the M/Rg system parameters from these known simulated systems. The MA method is summarized in general, and an assessment is made of the use of the MA method in the rigid shift approximation typically used to evaluate M/Rg systems. The MC-MCD simulation technique is summarized, and validating evidence is presented. The simulation results and the assumptions used in applying MA to M/Rg systems are evaluated. The simulation results on Na/Ar demonstrate that the MA method does successfully re-extract the 2P spin-orbit coupling constant and Landé g-factor values initially used to build the simulations. However, assigning physical significance to the cubic and noncubic Jahn-Teller (JT) vibrational mode parameters in cryogenic M/Rg systems is not supported.
Shin, Jaemin; Ahn, Sinyeob; Hu, Xiaoping
2015-01-01
Purpose To develop an improved and generalized technique for correcting T1-related signal fluctuations (T1 effect) in cardiac-gated functional magnetic resonance imaging (fMRI) data with flip angle estimation. Theory and Methods Spatial maps of flip angle and T1 are jointly estimated from cardiac-gated time series using a Kalman filter. These maps are subsequently used for removing the T1 effect in the presence of B1 inhomogeneity. The new technique was compared with a prior technique that uses T1 only while assuming a homogeneous flip angle of 90°. The robustness of the new technique is demonstrated with simulated and experimental data. Results Simulation results revealed that the new method led to increased temporal signal-to-noise ratio across a large range of flip angles, T1s, and stimulus onset asynchrony means compared to the T1-only approach. With the experimental data, the new approach resulted in a higher average gray matter temporal signal-to-noise ratio across seven subjects (84 vs. 48). The new approach also led to a higher statistical score of activation in the lateral geniculate nucleus (P < 0.002). Conclusion The new technique is able to remove the T1 effect robustly and is a promising tool for improving the ability to map activation in fMRI, especially in subcortical regions. PMID:23390029
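For context, the T1 effect being corrected can be written down with the standard spoiled gradient-echo recursion for longitudinal magnetization under a variable inter-shot interval (a textbook relation added here for illustration, not the paper's own equations). With flip angle α and cardiac-gated repetition time TR_n,

M_n = M_0\left(1 - e^{-TR_n/T_1}\right) + M_{n-1}\cos\alpha\; e^{-TR_n/T_1}, \qquad S_n \propto M_n \sin\alpha .

Because TR_n follows the variable R-R interval, S_n fluctuates with heart rate; jointly estimating α and T_1, as the new technique does, lets this fluctuation be predicted and removed even where B1 inhomogeneity makes the true flip angle differ from the nominal 90°.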
NASA Technical Reports Server (NTRS)
Lapenta, William M.; McNider, Richard T.; Suggs, Ron; Jedlovec, Gary; Robertson, Franklin R.
1998-01-01
A technique has been developed for assimilating GOES-FR skin temperature tendencies into the surface energy budget equation of a mesoscale model so that the simulated rate of temperature change closely agrees with the satellite observations. A critical assumption of the technique is that the availability of moisture (either from the soil or vegetation) is the least known term in the model's surface energy budget. Therefore, the simulated latent heat flux, which is a function of surface moisture availability, is adjusted based upon differences between the modeled and satellite-observed skin temperature tendencies. An advantage of this technique is that satellite temperature tendencies are assimilated in an energetically consistent manner that avoids energy imbalances and surface stability problems that arise from direct assimilation of surface shelter temperatures. The fact that the rate of change of the satellite skin temperature is used rather than the absolute temperature means that sensor calibration is not as critical. An advantage of this technique for short-range forecasts (0-48 h) is that it does not require a complex land-surface formulation within the atmospheric model. As a result, the need to specify poorly known soil and vegetative characteristics is eliminated. The GOES assimilation technique has been incorporated into the PSU/NCAR MM5. Results will be presented to demonstrate the ability of the assimilation scheme to improve short-term (0-48 h) simulations of near-surface air temperature and mixing ratio during the warm season for several selected cases which exhibit a variety of atmospheric and land-surface conditions. In addition, validation of terms in the simulated surface energy budget will be presented using in situ data collected at the Southern Great Plains (SGP) Cloud And Radiation Testbed (CART) site as part of the Atmospheric Radiation Measurements Program (ARM).
Preliminary studies on the planetary entry to Jupiter by aerocapture technique
NASA Astrophysics Data System (ADS)
Aso, Shigeru; Yasaka, Tetsuo; Hirayama, Hiroshi; Poetro, Ridanto Eko; Hatta, Shinji
2006-10-01
Preliminary studies of planetary entry to Jupiter by the aerocapture technique are conducted in order to address the technological challenges of delivering a scientific probe to Jupiter with low cost and reduced spacecraft mass. Determination of the Jupiter aerocapture corridor, based on a maximum deceleration limit of 5g (lower corridor) and aerocapture capability (upper corridor), is carefully considered and calculated. The results show that a velocity saving of about 1700 m/s due to aerocapture is possible in some cases for the spacecraft to be captured by the Jovian gravitational field. However, the results also show that Jovian aerocapture is not available in some cases; hence, careful selection is needed to realize Jovian aerocapture. Numerical simulation of aerodynamic heating of the spacecraft has also been conducted, using the DSMC method to simulate the flow fields around the spacecraft. The transient changes of drag due to the Jovian atmosphere and the total heat loads on the spacecraft are obtained. The results show that the estimated heat loads could be within the allowable heat load when an ablative heat shield technique is applied.
Preliminary studies on the planetary entry to Jupiter by aerocapture technique
NASA Astrophysics Data System (ADS)
Aso, Shigeru; Yasaka, Tetsuo; Hirayama, Hiroshi; Eko Poetro, Ridanto; Hatta, Shinji
2003-11-01
Preliminary studies of planetary entry to Jupiter by the aerocapture technique are conducted in order to address the technological challenges of delivering a scientific probe to Jupiter with low cost and reduced spacecraft mass. Determination of the Jupiter aerocapture corridor, based on a maximum deceleration limit of 5g (lower corridor) and aerocapture capability (upper corridor), is carefully considered and calculated. The results show that a velocity saving of about 1700 m/s due to aerocapture is possible in some cases for the spacecraft to be captured by the Jovian gravitational field. However, the results also show that Jovian aerocapture is not available in some cases; hence, careful selection is needed to realise Jovian aerocapture. Numerical simulation of aerodynamic heating of the spacecraft has also been conducted, using the DSMC method to simulate the flow fields around the spacecraft. The transient changes of drag due to the Jovian atmosphere and the total heat loads on the spacecraft are obtained. The results show that the estimated heat loads could be within the allowable heat load when an ablative heat shield technique is applied.
NASA Technical Reports Server (NTRS)
Sheen, Jyh-Jong; Bishop, Robert H.
1992-01-01
The feedback linearization technique is applied to the problem of spacecraft attitude control and momentum management with control moment gyros (CMGs). The feedback linearization consists of a coordinate transformation, which transforms the system to a companion form, and a nonlinear feedback control law to cancel the nonlinear dynamics resulting in a linear equivalent model. Pole placement techniques are then used to place the closed-loop poles. The coordinate transformation proposed here evolves from three output functions of relative degree four, three, and two, respectively. The nonlinear feedback control law is presented. Stability in a neighborhood of a controllable torque equilibrium attitude (TEA) is guaranteed and this fact is demonstrated by the simulation results. An investigation of the nonlinear control law shows that singularities exist in the state space outside the neighborhood of the controllable TEA. The nonlinear control law is simplified by a standard linearization technique and it is shown that the linearized nonlinear controller provides a natural way to select control gains for the multiple-input, multiple-output system. Simulation results using the linearized nonlinear controller show good performance relative to the nonlinear controller in the neighborhood of the TEA.
Rogge, Matthew D; Leckey, Cara A C
2013-09-01
Delaminations in composite laminates resulting from impact events may be accompanied by minimal indication of damage at the surface. As such, inspections are required to ensure defects are within allowable limits. Conventional ultrasonic scanning techniques have been shown to effectively characterize the size and depth of delaminations but require physical contact with the structure and considerable setup time. Alternatively, a non-contact scanning laser vibrometer may be used to measure guided wave propagation in the laminate structure generated by permanently bonded transducers. A local Fourier domain analysis method is presented for processing guided wavefield data to estimate spatially dependent wavenumber values, which can be used to determine delamination depth. The technique is applied to simulated wavefields and results are analyzed to determine limitations of the technique with regards to determining defect size and depth. Based on simulation results, guidelines for application of the technique are developed. Finally, experimental wavefield data is obtained in quasi-isotropic carbon fiber reinforced polymer (CFRP) laminates with impact damage. The recorded wavefields are analyzed and wavenumber is measured to an accuracy of up to 8.5% in the region of shallow delaminations. These results show the promise of local wavenumber domain analysis to characterize the depth of delamination damage in composite laminates. The technique can find application in automated vehicle health assurance systems with potential for high detection rates and greatly reduced operator effort and setup time. Published by Elsevier B.V.
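A one-dimensional caricature of the local Fourier domain idea can be sketched as follows (our simplification in Python with hypothetical parameters; the actual method operates on full 2-D guided wavefields): estimate a spatially varying dominant wavenumber with a sliding windowed FFT, where a jump to higher wavenumber would mark a shallower, delaminated region.

import numpy as np

# Synthetic 1-D wavefield whose wavenumber doubles halfway along the scan,
# mimicking a shallow delamination (all values assumed for illustration).
nx, dx = 1024, 1e-3                        # samples, spacing in m
x = np.arange(nx) * dx
k1, k2 = 2 * np.pi * 100, 2 * np.pi * 200  # rad/m
field = np.where(x < 0.5, np.sin(k1 * x), np.sin(k2 * x))

win = 128                                  # sliding-window length
k_axis = 2 * np.pi * np.fft.rfftfreq(win, d=dx)
for start in range(0, nx - win + 1, 256):
    seg = field[start:start + win] * np.hanning(win)
    k_dom = k_axis[np.argmax(np.abs(np.fft.rfft(seg)))]
    print(f"x ~ {x[start + win // 2]:.3f} m: dominant k = {k_dom:7.1f} rad/m")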
Agent-based modeling: Methods and techniques for simulating human systems
Bonabeau, Eric
2002-01-01
Agent-based modeling is a powerful simulation modeling technique that has seen a number of applications in the last few years, including applications to real-world business problems. After the basic principles of agent-based simulation are briefly introduced, its four areas of application are discussed using real-world examples: flow simulation, organizational simulation, market simulation, and diffusion simulation. For each category, one or several business applications are described and analyzed. PMID:12011407
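A minimal sketch in the spirit of the diffusion-simulation category described above (an illustrative Python fragment, not code from the paper): each agent adopts an innovation with a probability that grows with the fraction of adopters among a few randomly chosen contacts, and an S-shaped adoption curve emerges from these purely local rules.

import random

random.seed(1)
N, STEPS, CONTACTS, P_ADOPT = 500, 40, 5, 0.2   # all values hypothetical
adopted = [False] * N
adopted[0] = True  # a single seed adopter

for step in range(STEPS):
    for i in range(N):
        if not adopted[i]:
            peers = random.sample(range(N), CONTACTS)
            exposure = sum(adopted[j] for j in peers) / CONTACTS
            if random.random() < P_ADOPT * exposure:
                adopted[i] = True
    if step % 5 == 0 or step == STEPS - 1:
        print(f"step {step:2d}: {sum(adopted):3d}/{N} adopters")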
Cost considerations in using simulations for medical training.
Fletcher, J D; Wind, Alexander P
2013-10-01
This article reviews simulation used for medical training, techniques for assessing simulation-based training, and cost analyses that can be included in such assessments. Simulation in medical training appears to take four general forms: human actors who are taught to simulate illnesses and ailments in standardized ways; virtual patients who are generally presented via computer-controlled, multimedia displays; full-body manikins that simulate patients using electronic sensors, responders, and controls; and part-task anatomical simulations of various body parts and systems. Techniques for assessing costs include benefit-cost analysis, return on investment, and cost-effectiveness analysis. Techniques for assessing the effectiveness of simulation-based medical training include the use of transfer effectiveness ratios and incremental transfer effectiveness ratios to measure transfer of knowledge and skill provided by simulation to the performance of medical procedures. Assessment of costs and simulation effectiveness can be combined with measures of transfer using techniques such as isoperformance analysis to identify ways of minimizing costs without reducing performance effectiveness or maximizing performance without increasing costs. In sum, economic analysis must be considered in training assessments if training budgets are to compete successfully with other requirements for funding. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
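As a concrete illustration of the transfer measures mentioned above, the transfer effectiveness ratio is commonly written as (a standard form from the training-effectiveness literature, not quoted from this article)

\mathrm{TER} = \frac{Y_0 - Y_x}{X},

where Y_0 is the time (or number of trials) needed to reach criterion on the real task without simulator training, and Y_x is the time needed after X units of simulator training. The incremental version applies the same ratio to successive increments of simulator time, revealing the point at which additional simulation stops paying for itself.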
Trescott, Peter C.; Pinder, George Francis; Larson, S.P.
1976-01-01
The model will simulate ground-water flow in an artesian aquifer, a water-table aquifer, or a combined artesian and water-table aquifer. The aquifer may be heterogeneous and anisotropic and have irregular boundaries. The source term in the flow equation may include well discharge, constant recharge, leakage from confining beds in which the effects of storage are considered, and evapotranspiration as a linear function of depth to water. The theoretical development includes presentation of the appropriate flow equations and derivation of the finite-difference approximations (written for a variable grid). The documentation emphasizes the numerical techniques that can be used for solving the simultaneous equations and describes the results of numerical experiments using these techniques. Of the three numerical techniques available in the model, the strongly implicit procedure, in general, requires less computer time and has fewer numerical difficulties than do the iterative alternating direction implicit procedure and line successive overrelaxation (which includes a two-dimensional correction procedure to accelerate convergence). The documentation includes a flow chart, program listing, an example simulation, and sections on designing an aquifer model and requirements for data input. It illustrates how model results can be presented on the line printer and pen plotters with a program that utilizes the graphical display software available from the Geological Survey Computer Center Division. In addition the model includes options for reading input data from a disk and writing intermediate results on a disk.
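To make the finite-difference formulation concrete, the sketch below (our simplification in Python, not the documented model code) solves steady-state confined flow with a single pumping well by point successive over-relaxation, a simpler relative of the LSOR option described above; all parameter values are hypothetical.

import numpy as np

nx = ny = 41
dx = 100.0            # grid spacing, m (assumed)
T = 500.0             # transmissivity, m^2/day (assumed)
Qw = -1000.0          # well discharge, m^3/day (negative = pumping)

h = np.zeros((ny, nx))                  # heads; fixed-head (h = 0) boundaries
W = np.zeros((ny, nx))                  # source term per unit area, m/day
W[ny // 2, nx // 2] = Qw / (dx * dx)    # well treated as a point source

omega = 1.8                             # over-relaxation factor
for it in range(5000):
    max_change = 0.0
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            h_new = 0.25 * (h[j - 1, i] + h[j + 1, i] + h[j, i - 1]
                            + h[j, i + 1] + dx * dx * W[j, i] / T)
            change = omega * (h_new - h[j, i])
            h[j, i] += change
            max_change = max(max_change, abs(change))
    if max_change < 1e-6:
        break

print(f"converged in {it + 1} sweeps; drawdown at the well ≈ {-h[ny // 2, nx // 2]:.2f} m")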
Probabilistic techniques for obtaining accurate patient counts in Clinical Data Warehouses
Myers, Risa B.; Herskovic, Jorge R.
2011-01-01
Proposal and execution of clinical trials, computation of quality measures and discovery of correlation between medical phenomena are all applications where an accurate count of patients is needed. However, existing sources of this type of patient information, including Clinical Data Warehouses (CDW), may be incomplete or inaccurate. This research explores applying probabilistic techniques, supported by the MayBMS probabilistic database, to obtain accurate patient counts from a clinical data warehouse containing synthetic patient data. We present a synthetic clinical data warehouse (CDW), and populate it with simulated data using a custom patient data generation engine. We then implement, evaluate and compare different techniques for obtaining patient counts. We model billing as a test for the presence of a condition. We compute billing’s sensitivity and specificity both by conducting a “Simulated Expert Review” in which a representative sample of records is reviewed and labeled by experts, and by obtaining the ground truth for every record. We compute the posterior probability of a patient having a condition through a “Bayesian Chain”, using Bayes’ Theorem to calculate the probability of a patient having a condition after each visit. The second method is a “one-shot” approach that computes the probability of a patient having a condition based on whether the patient is ever billed for the condition. Our results demonstrate the utility of probabilistic approaches, which improve on the accuracy of raw counts. In particular, the simulated review paired with a single application of Bayes’ Theorem produces the best results, with an average error rate of 2.1% compared to 43.7% for the straightforward billing counts. Overall, this research demonstrates that Bayesian probabilistic approaches improve patient counts on simulated patient populations. We believe that total patient counts based on billing data are one of the many possible applications of our Bayesian framework. Use of these probabilistic techniques will enable more accurate patient counts and better results for applications requiring this metric. PMID:21986292
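The per-visit update at the heart of the “Bayesian Chain” can be sketched as follows (our reading of the approach, in Python; the function and all parameter values are illustrative, not the authors' code): billing at each visit is treated as a diagnostic test with known sensitivity and specificity, and the probability that the patient truly has the condition is updated after every visit.

def bayes_update(prior, billed, sensitivity, specificity):
    """Posterior probability of the condition after one visit's billing outcome."""
    if billed:
        like_pos, like_neg = sensitivity, 1.0 - specificity
    else:
        like_pos, like_neg = 1.0 - sensitivity, specificity
    numer = like_pos * prior
    return numer / (numer + like_neg * (1.0 - prior))

# Hypothetical patient billed at two of three visits; prevalence as the prior.
p = 0.10
for billed in [True, False, True]:
    p = bayes_update(p, billed, sensitivity=0.80, specificity=0.95)
    print(f"posterior after visit (billed={billed}): {p:.3f}")

# An expected patient count is then the sum of such posteriors over patients.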
A new technique for observationally derived boundary conditions for space weather
NASA Astrophysics Data System (ADS)
Pagano, Paolo; Mackay, Duncan Hendry; Yeates, Anthony Robinson
2018-04-01
Context. In recent years, space weather research has focused on developing modelling techniques to predict the arrival time and properties of coronal mass ejections (CMEs) at the Earth. The aim of this paper is to propose a new modelling technique, suitable for the next generation of space weather predictive tools, that is both efficient and accurate. The aim of the new approach is to provide interplanetary space weather forecasting models with accurate time-dependent boundary conditions of erupting magnetic flux ropes in the upper solar corona. Methods. To produce boundary conditions, we couple two different modelling techniques, MHD simulations and a quasi-static non-potential evolution model. Both are applied on a spatial domain that covers the entire solar surface, although they extend over different radial distances. The non-potential model uses a time series of observed synoptic magnetograms to drive the non-potential quasi-static evolution of the coronal magnetic field. This allows us to follow the formation and loss of equilibrium of magnetic flux ropes. Following this, a MHD simulation captures the dynamic evolution of the erupting flux rope, when it is ejected into interplanetary space. Results. The present paper focuses on the MHD simulations that follow the ejection of magnetic flux ropes to 4 R⊙. We first propose a technique for specifying the pre-eruptive plasma properties in the corona. Next, time-dependent MHD simulations describe the ejection of two magnetic flux ropes, producing time-dependent boundary conditions for the magnetic field and plasma at 4 R⊙ that may in future be applied to interplanetary space weather prediction models. Conclusions. In the present paper, we show that the dual use of quasi-static non-potential magnetic field simulations and full time-dependent MHD simulations can produce realistic inhomogeneous boundary conditions for space weather forecasting tools. Before a fully operational model can be produced, there are a number of technical and scientific challenges that still need to be addressed. Nevertheless, we illustrate that coupling quasi-static and MHD simulations in this way can significantly reduce the computational time required to produce realistic space weather boundary conditions.
Simulation Modelling in Healthcare: An Umbrella Review of Systematic Literature Reviews.
Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Booth, Andrew
2017-09-01
Numerous studies examine simulation modelling in healthcare. These studies present a bewildering array of simulation techniques and applications, making it challenging to characterise the literature. The aim of this paper is to provide an overview of the level of activity of simulation modelling in healthcare and the key themes. We performed an umbrella review of systematic literature reviews of simulation modelling in healthcare. Searches were conducted of academic databases (JSTOR, Scopus, PubMed, IEEE, SAGE, ACM, Wiley Online Library, ScienceDirect) and grey literature sources, enhanced by citation searches. Articles were included if they performed a systematic review of simulation modelling techniques in healthcare. After quality assessment of all included articles, data were extracted on the number of studies included in each review, types of applications, techniques used for simulation modelling, data sources and simulation software. The search strategy yielded a total of 117 potential articles. Following sifting, 37 heterogeneous reviews were included. Most reviews achieved a moderate quality rating on a modified AMSTAR (A Measurement Tool used to Assess systematic Reviews) checklist. All the review articles described the types of applications used for simulation modelling; 15 reviews described techniques used for simulation modelling; three reviews described data sources used for simulation modelling; and six reviews described software used for simulation modelling. The remaining reviews either did not report or did not provide enough detail for the data to be extracted. Simulation modelling techniques have been used for a wide range of applications in healthcare, with a variety of software tools and data sources. The number of reviews published in recent years suggests an increased interest in simulation modelling in healthcare.
NASA Technical Reports Server (NTRS)
Meyers, James F.
2004-01-01
The historical development of techniques for measuring three velocity components using laser velocimetry is presented. The techniques are described and their relative merits presented. Many of the approaches currently in use that are based on the fringe laser velocimeter have yielded inaccurate measurements of turbulence intensity in the on-axis component. A possible explanation for these inaccuracies is presented along with simulation results.
ERIC Educational Resources Information Center
Riedel, James A.; And Others
Results of research to determine if an adaptive technique could be used to teach a physically complex psychomotor skill (specifically, performing on an arc welding simulator) more efficiently than the skill could be taught with a nonadaptive technique are presented. Sixty hull maintenance technician firemen and fireman apprentice trainees were…
The Recoverability of P-Technique Factor Analysis
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
2009-01-01
It seems that just when we are about to lay P-technique factor analysis finally to rest as obsolete because of newer, more sophisticated multivariate time-series models using latent variables--dynamic factor models--it rears its head to inform us that an obituary may be premature. We present the results of some simulations demonstrating that even…
Multi-filter spectrophotometry of quasar environments
NASA Technical Reports Server (NTRS)
Craven, Sally E.; Hickson, Paul; Yee, Howard K. C.
1993-01-01
A many-filter photometric technique for determining redshifts and morphological types, by fitting spectral templates to spectral energy distributions, has good potential for application in surveys. Despite success in studies performed on simulated data, the results have not been fully reliable when applied to real, low signal-to-noise data. We are investigating techniques to improve the fitting process.
Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea
2015-09-01
The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much shorter time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
Applied Time Domain Stability Margin Assessment for Nonlinear Time-Varying Systems
NASA Technical Reports Server (NTRS)
Kiefer, J. M.; Johnson, M. D.; Wall, J. H.; Dominguez, A.
2016-01-01
The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time-varying nature of the dynamics of a launch vehicle in flight. An alternative technique for evaluating the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation. This technique was implemented by using the Stability Aerospace Vehicle Analysis Tool (SAVANT) computer simulation to evaluate the stability of the SLS system with the Adaptive Augmenting Control (AAC) active and inactive along its ascent trajectory. The gains for which the vehicle maintains apparent time-domain stability define the gain margins, and the time delay similarly defines the phase margin. This method of extracting the control stability margins from the time-domain simulation is relatively straightforward and the resulting margins can be compared to the linearized system results. The sections herein describe the techniques employed to extract the time-domain margins, compare the results between these nonlinear and the linear methods, and provide explanations for observed discrepancies. The SLS ascent trajectory was simulated with SAVANT and the classical linear stability margins were evaluated at one-second intervals. The linear analysis was performed with the AAC algorithm disabled to attain baseline stability margins. At each time point, the system was linearized about the current operating point using Simulink's built-in solver. Each linearized system in time was evaluated for its rigid-body gain margin (high-frequency gain margin), rigid-body phase margin, and aero gain margin (low-frequency gain margin) for each control axis. Using the stability margins derived from the baseline linearization approach, the time-domain derived stability margins were determined by executing time domain simulations in which axis-specific incremental gain and phase adjustments were made to the nominal system about the expected neutral stability point at specific flight times. The baseline stability margin time histories were used to shift the system gain to various values around the zero margin point such that a precise amount of expected gain margin was maintained throughout flight. When assessing the gain margins, the gain was applied starting at the time point under consideration, thereafter following the variation in the margin found in the linear analysis. When assessing the rigid-body phase margin, a constant time delay was applied to the system starting at the time point under consideration. If the baseline stability margins were correctly determined via the linear analysis, the time domain simulation results should contain unstable behavior at certain gain and phase values. Examples will be shown from repeated simulations with variable added gain and phase lag. Faithfulness of margins calculated from the linear analysis to the nonlinear system will be demonstrated.
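The margin-extraction idea can be illustrated on a toy system (a hypothetical Python sketch, not SAVANT or the SLS model): simulate the closed loop with a loop-gain multiplier and a pure time delay, and sweep the multiplier upward until the response diverges; the multiplier at the onset of divergence estimates the gain margin, and the largest tolerable added delay similarly estimates the phase margin.

def simulate(gain_mult, delay_s, t_end=30.0, dt=0.001):
    """Double-integrator plant with PD feedback applied through a pure delay."""
    n = int(t_end / dt)
    d = max(1, int(delay_s / dt))
    x, v = 1.0, 0.0               # initial attitude error and rate (assumed)
    u_hist = [0.0] * d            # delay line for the control signal
    peak = 0.0
    for _ in range(n):
        u = -gain_mult * (4.0 * x + 2.0 * v)  # nominal PD law (assumed gains)
        u_hist.append(u)
        a = u_hist.pop(0)         # delayed control finally reaches the plant
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
        if peak > 1e3:            # clearly diverging: unstable
            return float("inf")
    return peak

# Sweep the loop-gain multiplier; the response diverges past the gain margin.
for m in [1.0, 2.0, 4.0, 8.0, 16.0]:
    print(f"gain multiplier {m:5.1f}: peak |x| = {simulate(m, delay_s=0.05):g}")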
Evaluation of gravimetric techniques to estimate the microvascular filtration coefficient
Dongaonkar, R. M.; Laine, G. A.; Stewart, R. H.
2011-01-01
Microvascular permeability to water is characterized by the microvascular filtration coefficient (Kf). Conventional gravimetric techniques to estimate Kf rely on data obtained from either transient or steady-state increases in organ weight in response to increases in microvascular pressure. Both techniques result in considerably different estimates and neither account for interstitial fluid storage and lymphatic return. We therefore developed a theoretical framework to evaluate Kf estimation techniques by 1) comparing conventional techniques to a novel technique that includes effects of interstitial fluid storage and lymphatic return, 2) evaluating the ability of conventional techniques to reproduce Kf from simulated gravimetric data generated by a realistic interstitial fluid balance model, 3) analyzing new data collected from rat intestine, and 4) analyzing previously reported data. These approaches revealed that the steady-state gravimetric technique yields estimates that are not directly related to Kf and are in some cases directly proportional to interstitial compliance. However, the transient gravimetric technique yields accurate estimates in some organs, because the typical experimental duration minimizes the effects of interstitial fluid storage and lymphatic return. Furthermore, our analytical framework reveals that the supposed requirement of tying off all draining lymphatic vessels for the transient technique is unnecessary. Finally, our numerical simulations indicate that our comprehensive technique accurately reproduces the value of Kf in all organs, is not confounded by interstitial storage and lymphatic return, and provides corroboration of the estimate from the transient technique. PMID:21346245
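For reference, K_f enters through the standard Starling relation for transvascular fluid flux (a textbook equation added here for context, not quoted from the abstract):

J_v = K_f\left[(P_c - P_i) - \sigma(\pi_c - \pi_i)\right],

where P_c and P_i are the capillary and interstitial hydrostatic pressures, π_c and π_i the corresponding oncotic pressures, and σ the osmotic reflection coefficient. Gravimetric techniques infer K_f from the organ weight change that accompanies a step increase in microvascular pressure, which is why interstitial fluid storage and lymphatic return can confound the estimate.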
NASA Astrophysics Data System (ADS)
Patcharoen, Theerasak; Yoomak, Suntiti; Ngaopitakkul, Atthapol; Pothisarn, Chaichan
2018-04-01
This paper describes the combination of discrete wavelet transforms (DWT) and artificial intelligence (AI), which are efficient techniques for identifying the type of inrush current and analyzing its origin and possible cause in capacitor bank switching. An experimental setup was used to verify that the proposed techniques can detect the transient inrush current and distinguish it from the normal capacitor rated current. The discrete wavelet transforms are used to detect and classify the inrush current. The wavelet output then serves as the input to a fuzzy inference system that discriminates the type of switching transient inrush current. The proposed technique shows enhanced performance, with a discrimination accuracy of 90.57%. Both the simulation study and the experimental results are quite satisfactory, providing high accuracy and reliability that can be developed and implemented in a numerical overcurrent (50/51) and unbalanced-current (60C) protection relay for shunt capacitor bank protection in the future.
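A minimal sketch of the detection stage (assuming the PyWavelets library; the signal parameters, thresholds and mother wavelet are illustrative, and the authors' actual implementation and relay logic are not shown in the abstract) flags a capacitor-switching transient by the energy of the first-level DWT detail coefficients.

import numpy as np
import pywt

fs = 10_000                                   # sampling rate, Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)
i_rated = 10 * np.sin(2 * np.pi * 50 * t)     # 50 Hz capacitor rated current
transient = 40 * np.exp(-(t - 0.1).clip(0) * 200) * np.sin(2 * np.pi * 3000 * t)
transient[t < 0.1] = 0.0                      # switching event at t = 0.1 s
signal = i_rated + transient

# db4 is a common mother wavelet in power-system transient studies.
cA, cD = pywt.dwt(signal, "db4")
window = len(cD) // 20
energy = np.array([np.sum(cD[k:k + window] ** 2)
                   for k in range(0, len(cD), window)])
threshold = 5 * np.median(energy)             # assumed detection threshold
print("windows flagged as transient:", np.where(energy > threshold)[0])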
Simulation of white light generation and near light bullets using a novel numerical technique
NASA Astrophysics Data System (ADS)
Zia, Haider
2018-01-01
An accurate and efficient simulation has been devised, employing a new numerical technique to simulate the derivative generalised non-linear Schrödinger equation in all three spatial dimensions and time. The simulation models all pertinent effects, such as self-steepening and plasma, for the non-linear propagation of ultrafast optical radiation in bulk material. Simulation results are compared to published experimental spectral data of an example ytterbium aluminum garnet system at 3.1 μm radiation and fit to within a factor of 5. The simulation shows that there is a stability point near the end of the 2 mm crystal where a quasi-light bullet (spatio-temporal soliton) is present. Within this region, the pulse is collimated at a reduced diameter (a factor of ∼2) and there exists a near temporal soliton at the spatial center. The temporal intensity within this stable region is compressed by a factor of ∼4 compared to the input. This study shows that the simulation highlights new physical phenomena, based on the interplay of various linear, non-linear and plasma effects, that go beyond the experiment and is thus integral to achieving accurate designs of white light generation systems for optical applications. An adaptive error reduction algorithm tailor-made for this simulation is also presented in the appendix.
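The flavor of such propagation simulations can be conveyed by a stripped-down split-step Fourier scheme for the plain 1-D nonlinear Schrödinger equation, with dispersion plus Kerr nonlinearity only (a generic textbook method sketched in Python; the paper's technique treats the full 3-D plus time derivative GNLSE with self-steepening and plasma, which this sketch does not attempt).

import numpy as np

# Normalized units; a fundamental soliton should propagate unchanged,
# which makes a convenient self-check of the scheme.
nt, t_max = 1024, 20.0
t = np.linspace(-t_max, t_max, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])

beta2, gamma_nl = -1.0, 1.0        # anomalous dispersion, Kerr coefficient
dz, n_steps = 0.01, 500
A = 1.0 / np.cosh(t)               # fundamental soliton initial field

half_disp = np.exp(0.5j * (beta2 / 2) * w**2 * dz)     # linear half step
for _ in range(n_steps):
    A = np.fft.ifft(half_disp * np.fft.fft(A))         # dispersion, dz/2
    A = A * np.exp(1j * gamma_nl * np.abs(A)**2 * dz)  # Kerr phase, full dz
    A = np.fft.ifft(half_disp * np.fft.fft(A))         # dispersion, dz/2

print("peak |A|^2 after propagation:", round(float(np.abs(A).max() ** 2), 4))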
Simulation of Neural Firing Dynamics: A Student Project.
ERIC Educational Resources Information Center
Kletsky, E. J.
This paper describes a student project in digital simulation techniques that is part of a graduate systems analysis course entitled Biosimulation. The students chose different simulation techniques to solve a problem related to the neuron model. (MLH)
NASA Astrophysics Data System (ADS)
Gholizadeh Doonechaly, N.; Rahman, S. S.
2012-05-01
Simulation of naturally fractured reservoirs offers significant challenges due to the lack of a methodology that can utilize field data. To date, several methods have been proposed to characterize naturally fractured reservoirs. Among them is the unfolding/folding method, which offers some degree of accuracy in estimating the probability of the existence of fractures in a reservoir. There are also statistical approaches that integrate all levels of field data to simulate the fracture network. This approach, however, depends on the availability of data sources, such as seismic attributes, core descriptions, well logs, etc., which are often difficult to obtain field-wide. In this study, a hybrid tectono-stochastic simulation is proposed to characterize a naturally fractured reservoir. A finite element based model is used to simulate the tectonic event of folding and unfolding of a geological structure. A nested neuro-stochastic technique is used to develop the inter-relationship between the data, and at the same time it utilizes the sequential Gaussian approach to analyze field data along with fracture probability data. This approach has the ability to overcome the commonly experienced discontinuity of data in both horizontal and vertical directions. The hybrid technique is used to generate a discrete fracture network of a specific Australian gas reservoir, Palm Valley in the Northern Territory. Results of this study have significant benefit in accurately describing fluid flow simulation and well placement for maximal hydrocarbon recovery.
NASA Astrophysics Data System (ADS)
Nair, Rajesh P.; Lakshmana Rao, C.
2014-01-01
Ballistic impact (BI) is a study that deals with a projectile hitting a target and observing its effects in terms of deformation and fragmentation of the target. The Discrete Element Method (DEM) is a powerful numerical technique used to model solid and particulate media. Here, an attempt is made to simulate the BI process using DEM. 1-D DEM for BI is developed and depth of penetration (DOP) is obtained. The DOP is compared with results obtained from 2-D DEM. DEM results are found to match empirical results. Effects of strain rate sensitivity of the material response on DOP are also simulated.
NASA Astrophysics Data System (ADS)
Hopkins, Deborah; Datuin, Marvin; Aldrin, John; Warchol, Mark; Warchol, Lyudmila; Forsyth, David
2018-04-01
The work presented here aims to develop and transition angled-beam shear-wave inspection techniques for crack localization at fastener sites in multi-layer aircraft structures. This requires moving beyond detection to achieve reliable crack location and size, thereby providing invaluable information for maintenance actions and service-life management. The technique presented is based on imaging cracks in "True" B-scans (depth view projected in the sheets along the beam path). The crack traces that contribute to localization in the True B-scans depend on small, diffracted signals from the crack edges and tips that are visible in simulations and experimental data acquired with sufficient gain. The most recent work shows that cracks rotated toward and away from the central ultrasonic beam also yield crack traces in True B-scans that allow localization in simulations, even for large obtuse angles where experimental and simulation results show very small or no indications in the C-scans. Similarly, for two sheets joined by sealant, simulations show that cracks in the second sheet can be located in True B-scans for all locations studied: cracks that intersect the front or back wall of the second sheet, as well as relatively small mid-bore cracks. These results are consistent with previous model verification and sensitivity studies that demonstrate crack localization in True B-scans for a single sheet and cracks perpendicular to the ultrasonic beam.
Analysis of thin plates with holes by using exact geometrical representation within XFEM.
Perumal, Logah; Tso, C P; Leng, Lim Thong
2016-05-01
This paper presents analysis of thin plates with holes within the context of XFEM. New integration techniques are developed for exact geometrical representation of the holes. Numerical and exact integration techniques are presented, with some limitations for the exact integration technique. Simulation results show that the proposed techniques help to reduce the solution error, due to the exact geometrical representation of the holes and the utilization of appropriate quadrature rules. A discussion of the minimum integration order needed to achieve good accuracy and convergence for the techniques presented in this work is also included.
NASA Technical Reports Server (NTRS)
Smyth, P.; Mellstrom, J.
1990-01-01
Initial results obtained from an investigation using pattern recognition techniques for identifying fault modes in the Deep Space Network (DSN) 70 m antenna control loops are described. The overall background to the problem is described, and the motivation and potential benefits of this approach are outlined. In particular, an experiment is described in which fault modes were introduced into a state-space simulation of the antenna control loops. By training a multilayer feed-forward neural network on the simulated sensor output, classification rates of over 95 percent were achieved with a false alarm rate of zero on unseen test data. It is concluded that although the neural classifier has certain practical limitations at present, it also has considerable potential for problems of this nature.
Hayes, Timothy; Usami, Satoshi; Jacobucci, Ross; McArdle, John J
2015-12-01
In this article, we describe a recent development in the analysis of attrition: using classification and regression trees (CART) and random forest methods to generate inverse sampling weights. These flexible machine learning techniques have the potential to capture complex nonlinear, interactive selection models, yet to our knowledge, their performance in the missing data analysis context has never been evaluated. To assess the potential benefits of these methods, we compare their performance with commonly employed multiple imputation and complete case techniques in 2 simulations. These initial results suggest that weights computed from pruned CART analyses performed well in terms of both bias and efficiency when compared with other methods. We discuss the implications of these findings for applied researchers. (c) 2015 APA, all rights reserved.
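A minimal sketch of the weighting idea (our illustration with scikit-learn, not the authors' code; the simulated selection model and all parameters are hypothetical, and depth limiting stands in for formal cost-complexity pruning): fit a tree to predict retention, then weight completers by the inverse of their predicted retention probability.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))                       # covariates measured pre-attrition
p_drop = 1 / (1 + np.exp(-(X[:, 0] * X[:, 1])))   # interactive selection model
dropped = rng.random(n) < p_drop

# A depth-limited tree estimates each case's probability of being retained.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50, random_state=0)
tree.fit(X, ~dropped)
p_retain = tree.predict_proba(X)[:, 1].clip(0.05, 1.0)  # avoid extreme weights

weights = np.where(~dropped, 1.0 / p_retain, 0.0)  # completers re-weighted
print(f"mean weight among completers: {weights[~dropped].mean():.2f}")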
Manual Manipulation of Engine Throttles for Emergency Flight Control
NASA Technical Reports Server (NTRS)
Burcham, Frank W., Jr.; Fullerton, C. Gordon; Maine, Trindel A.
2004-01-01
If normal aircraft flight controls are lost, emergency flight control may be attempted using only engine thrust. Collective thrust is used to control flightpath, and differential thrust is used to control bank angle. Flight test and simulation results on many airplanes have shown that pilot manipulation of throttles is usually adequate to maintain up-and-away flight, but is most often not capable of providing safe landings. There are techniques that will improve control and increase the chances of a survivable landing. This paper reviews the principles of throttles-only control (TOC), a history of accidents or incidents in which some or all flight controls were lost, manual TOC results for a wide range of airplanes from simulation and flight, and suggested techniques for flying with throttles only and making a survivable landing.
Recent research related to prediction of stall/spin characteristics of fighter aircraft
NASA Technical Reports Server (NTRS)
Nguyen, L. T.; Anglin, E. L.; Gilbert, W. P.
1976-01-01
The NASA Langley Research Center is currently engaged in a stall/spin research program to provide the fundamental information and design guidelines required to predict the stall/spin characteristics of fighter aircraft. The prediction methods under study include theoretical spin prediction techniques and piloted simulation studies. The paper discusses the overall status of theoretical techniques including: (1) input data requirements, (2) math model requirements, and (3) correlation between theoretical and experimental results. The Langley Differential Maneuvering Simulator (DMS) facility has been used to evaluate the spin susceptibility of several current fighters during typical air combat maneuvers and to develop and evaluate the effectiveness of automatic departure/spin prevention concepts. The evaluation procedure is described and some of the more significant results of the studies are presented.
Hayes, Timothy; Usami, Satoshi; Jacobucci, Ross; McArdle, John J.
2016-01-01
In this article, we describe a recent development in the analysis of attrition: using classification and regression trees (CART) and random forest methods to generate inverse sampling weights. These flexible machine learning techniques have the potential to capture complex nonlinear, interactive selection models, yet to our knowledge, their performance in the missing data analysis context has never been evaluated. To assess the potential benefits of these methods, we compare their performance with commonly employed multiple imputation and complete case techniques in 2 simulations. These initial results suggest that weights computed from pruned CART analyses performed well in terms of both bias and efficiency when compared with other methods. We discuss the implications of these findings for applied researchers. PMID:26389526
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, VT.; Silva, L.; Digonnet, H.
2011-05-04
The objective of this work is to model the viscoelastic behaviour of polymer from the solid state to the liquid state. With this objective, we perform experimental tensile tests and compare them with simulation results. The chosen polymer is a PMMA whose behaviour depends on its temperature. The computational simulation is based on the Navier-Stokes equations, for which we propose a mixed finite element method with a P1+/P1 interpolation using displacement (or velocity) and pressure as principal variables. The implemented technique uses a mesh composed of triangles (2D) or tetrahedra (3D). The goal of this approach is to model the viscoelastic behaviour of polymers through a fluid-structure coupling technique with a multiphase approach.
Mobit, P
2002-01-01
The energy responses of LiF-TLDs irradiated in megavoltage electron and photon beams have been determined experimentally by many investigators over the past 35 years but the results vary considerably. General cavity theory has been used to model some of the experimental findings, but the predictions of these cavity theories differ from each other and from measurements by more than 13%. Recently, two groups of investigators using Monte Carlo simulations and careful experimental techniques showed that the energy response of 1 mm or 2 mm thick LiF-TLDs irradiated by megavoltage photon and electron beams is not more than 5% less than unity for low-Z phantom materials like water or Perspex. However, when the depth of irradiation is significantly different from dmax and the TLD size is more than 5 mm, the energy response is up to 12% less than unity for incident electron beams. Monte Carlo simulations of some of the experiments reported in the literature showed that some of the contradictory experimental results are reproducible with Monte Carlo simulations. Monte Carlo simulations show that the energy response of LiF-TLDs depends on the size of detector used in electron beams, the depth of irradiation and the incident electron energy. Other differences can be attributed to absolute dose determination and the precision of the TL technique. Monte Carlo simulations have also been used to evaluate some of the published general cavity theories. The results show that some of the parameters used to evaluate Burlin's general cavity theory are wrong by a factor of 3. Despite this, the estimation of the energy response for most clinical situations using Burlin's cavity equation agrees with Monte Carlo simulations within 1%.
Validating clustering of molecular dynamics simulations using polymer models.
Phillips, Joshua L; Colvin, Michael E; Newsam, Shawn
2011-11-14
Molecular dynamics (MD) simulation is a powerful technique for sampling the meta-stable and transitional conformations of proteins and other biomolecules. Computational data clustering has emerged as a useful, automated technique for extracting conformational states from MD simulation data. Despite extensive application, relatively little work has been done to determine if the clustering algorithms are actually extracting useful information. A primary goal of this paper therefore is to provide such an understanding through a detailed analysis of data clustering applied to a series of increasingly complex biopolymer models. We develop a novel series of models using basic polymer theory that have intuitive, clearly-defined dynamics and exhibit the essential properties that we are seeking to identify in MD simulations of real biomolecules. We then apply spectral clustering, an algorithm particularly well-suited for clustering polymer structures, to our models and MD simulations of several intrinsically disordered proteins. Clustering results for the polymer models provide clear evidence that the meta-stable and transitional conformations are detected by the algorithm. The results for the polymer models also help guide the analysis of the disordered protein simulations by comparing and contrasting the statistical properties of the extracted clusters. We have developed a framework for validating the performance and utility of clustering algorithms for studying molecular biopolymer simulations that utilizes several analytic and dynamic polymer models which exhibit well-behaved dynamics including: meta-stable states, transition states, helical structures, and stochastic dynamics. We show that spectral clustering is robust to anomalies introduced by structural alignment and that different structural classes of intrinsically disordered proteins can be reliably discriminated from the clustering results. To our knowledge, our framework is the first to utilize model polymers to rigorously test the utility of clustering algorithms for studying biopolymers.
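The clustering step can be illustrated on toy data (a hypothetical sketch with scikit-learn, not the authors' pipeline): two tight "meta-stable" clouds plus scattered "transitional" frames stand in for featurized MD snapshots.

import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(2)
state_a = rng.normal(loc=-2.0, scale=0.3, size=(100, 10))   # meta-stable state A
state_b = rng.normal(loc=+2.0, scale=0.3, size=(100, 10))   # meta-stable state B
transition = rng.uniform(-2.0, 2.0, size=(10, 10))          # transitional frames
frames = np.vstack([state_a, state_b, transition])

# RBF affinity plays the role of a structural-similarity kernel; gamma assumed.
labels = SpectralClustering(n_clusters=2, affinity="rbf",
                            gamma=0.5, random_state=0).fit_predict(frames)
print("cluster sizes:", np.bincount(labels))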
Biomechanical testing simulation of a cadaver spine specimen: development and evaluation study.
Ahn, Hyung Soo; DiAngelo, Denis J
2007-05-15
This article describes a computer model of a cadaver cervical spine specimen and virtual biomechanical testing. The goals were to develop a graphics-oriented, multibody model of a cadaver cervical spine and to build a virtual laboratory simulator for biomechanical testing using physics-based dynamic simulation techniques. Physics-based computer simulations apply the laws of physics to solid bodies with defined material properties. This technique can be used to create a virtual simulator for the biomechanical testing of a human cadaver spine. An accurate virtual model and simulation would complement tissue-based in vitro studies by providing a consistent test bed with minimal variability and by reducing cost. The geometry of the cervical vertebrae was created from computed tomography images. Joints linking adjacent vertebrae were modeled as a triple-joint complex, comprised of intervertebral disc joints in the anterior region, 2 facet joints in the posterior region, and the surrounding ligament structure. A virtual laboratory simulation of an in vitro testing protocol was performed to evaluate the model responses during flexion, extension, and lateral bending. For kinematic evaluation, the rotation of each motion segment unit, coupling behaviors, and 3-dimensional helical axes of motion were analyzed. The simulation results correlated with the findings of in vitro tests and published data. For kinetic evaluation, the forces of the intervertebral discs and facet joints of each segment were determined and visually animated. This methodology produced a realistic visualization of the in vitro experiment, and allowed for analyses of the kinematics and kinetics of the cadaver cervical spine. With graphical illustrations and animation features, this modeling technique provides vivid and intuitive information.
Simulation and statistics: Like rhythm and song
NASA Astrophysics Data System (ADS)
Othman, Abdul Rahman
2013-04-01
Simulation has been introduced to solve problems in the form of systems. Two kinds of problem can be overcome by this technique. First, a problem that has an analytical solution, but where the cost of running an experiment to solve it is high in terms of money and lives. Second, a problem that exists but has no analytical solution. In the field of statistical inference the second problem is often encountered. With the advent of high-speed computing devices, a statistician can now use resampling techniques such as the bootstrap and permutations to form pseudo sampling distributions that lead to the solution of problems that cannot be solved analytically. This paper discusses how Monte Carlo simulation was and still is being used to verify analytical solutions in inference. This paper also discusses the resampling techniques as simulation techniques. Misunderstandings about these two techniques are examined. Successful usages of both techniques are also explained.
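A minimal bootstrap sketch of the resampling idea described above (hypothetical data in Python; any statistic could be substituted for the median): approximate the sampling distribution of a statistic with no convenient analytical form by resampling the observed data with replacement.

import numpy as np

rng = np.random.default_rng(7)
sample = rng.exponential(scale=2.0, size=50)   # stands in for observed data

boot_medians = np.array([
    np.median(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_medians, [2.5, 97.5])
print(f"median = {np.median(sample):.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")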
Ganju, Neil K.; Jaffe, Bruce E.; Schoellhamer, David H.
2011-01-01
Simulations of estuarine bathymetric change over decadal timescales require methods for idealization and reduction of forcing data and boundary conditions. Continuous simulations are hampered by computational and data limitations and results are rarely evaluated with observed bathymetric change data. Bathymetric change data for Suisun Bay, California span the 1867–1990 period with five bathymetric surveys during that period. The four periods of bathymetric change were modeled using a coupled hydrodynamic-sediment transport model operated at the tidal-timescale. The efficacy of idealization techniques was investigated by discontinuously simulating the four periods. The 1867–1887 period, used for calibration of wave energy and sediment parameters, was modeled with an average error of 37% while the remaining periods were modeled with error ranging from 23% to 121%. Variation in post-calibration performance is attributed to temporally variable sediment parameters and lack of bathymetric and configuration data for portions of Suisun Bay and the Delta. Modifying seaward sediment delivery and bed composition resulted in large performance increases for post-calibration periods suggesting that continuous simulation with constant parameters is unrealistic. Idealization techniques which accelerate morphological change should therefore be used with caution in estuaries where parameters may change on sub-decadal timescales. This study highlights the utility and shortcomings of estuarine geomorphic models for estimating past changes in forcing mechanisms such as sediment supply and bed composition. The results further stress the inherent difficulty of simulating estuarine changes over decadal timescales due to changes in configuration, benthic composition, and anthropogenic forcing such as dredging and channelization.
On the long-term memory of the Greenland Ice Sheet
NASA Astrophysics Data System (ADS)
Rogozhina, I.; Martinec, Z.; Hagedoorn, J. M.; Thomas, M.; Fleming, K.
2011-03-01
In this study, the memory of the Greenland Ice Sheet (GIS) with respect to its past states is analyzed. According to ice core reconstructions, the present-day GIS reflects former climatic conditions dating back to at least 250 thousand years before present (kyr BP). This fact must be considered when initializing an ice sheet model. The common initialization techniques are paleoclimatic simulations driven by atmospheric forcing inferred from ice core records, and steady state simulations driven by present-day or past climatic conditions. When paleoclimatic simulations are used, information about past climatic conditions is partly reflected in the resulting present-day state of the GIS. However, several important questions need to be clarified. First, for how long does the model remember its initial state? Second, it is generally acknowledged that, prior to 100 kyr BP, the longest Greenland ice core record (GRIP) is distorted by ice-flow irregularities; to what extent do the uncertainties inherent in the GRIP-based forcing influence the resulting GIS? Finally, how is the modeled thermodynamic state affected by the choice of initialization technique (paleoclimatic or steady state)? To answer these questions, a series of paleoclimatic and steady state simulations is carried out. We conclude that (1) the choice of an ice-covered initial configuration shortens the initialization simulation time to 100 kyr, (2) the uncertainties in the GRIP-based forcing affect present-day modeled ice-surface topographies and temperatures only slightly, and (3) the GIS forced by present-day climatic conditions is overall warmer than that resulting from a paleoclimatic simulation.
Ferreiro-Rangel, Carlos A; Gelb, Lev D
2013-06-13
Structural and mechanical properties of silica aerogels are studied using a flexible coarse-grained model and a variety of simulation techniques. The model, introduced in a previous study (J. Phys. Chem. C 2007, 111, 15792-15802), consists of spherical "primary" gel particles that interact through weak nonbonded forces and through microscopically motivated interparticle bonds that may break and form during the simulations. Aerogel models are prepared using a three-stage protocol consisting of separate simulations of gelation, aging, and a final relaxation during which no further bond formation is permitted. Models of varying particle size, density, and size dispersity are considered. These are characterized in terms of fractal dimensions and pore size distributions, and generally good agreement with experimental data is obtained for these metrics. The bulk moduli of these materials are studied in detail. Two different techniques for obtaining the bulk modulus are considered: fluctuation analysis and direct compression/expansion simulations. We find that the fluctuation result can be subject to systematic error due to coupling with the simulation barostat but, if performed carefully, yields results equivalent to those of the compression/expansion simulations. The dependence of the bulk modulus on density follows a power law with an exponent between 3.00 and 3.15, in agreement with reported experimental results. The best correlate for the bulk modulus appears to be the volumetric bond density, on which there is also a power law dependence. Polydisperse models exhibit lower bulk moduli than comparable monodisperse models, which is due to lower bond densities in the polydisperse materials.
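The fluctuation route to the bulk modulus mentioned above has a compact form in the NPT ensemble: K_T = k_B T <V> / Var(V). A minimal sketch, with a synthetic volume trajectory standing in for barostat output (the paper's caveat applies: samples must be decorrelated from the barostat dynamics for this estimate to be unbiased):

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant (J/K)

def bulk_modulus_from_fluctuations(volumes, temperature):
    """Isothermal bulk modulus from NPT volume fluctuations:
    K_T = kB * T * <V> / var(V)."""
    volumes = np.asarray(volumes)
    return kB * temperature * volumes.mean() / volumes.var()

# Illustrative trajectory of box volumes (m^3) from a barostatted run
rng = np.random.default_rng(0)
V = 1e-24 * (1.0 + 0.01 * rng.standard_normal(5000))
print(f"K_T ≈ {bulk_modulus_from_fluctuations(V, 300.0):.3e} Pa")
```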
Jing, Liwen; Li, Zhao; Wang, Wenjie; Dubey, Amartansh; Lee, Pedro; Meniconi, Silvia; Brunone, Bruno; Murch, Ross D
2018-05-01
An approximate inverse scattering technique is proposed for reconstructing cross-sectional area variation along water pipelines to deduce the size and position of blockages. The technique allows the reconstructed blockage profile to be written explicitly in terms of the measured acoustic reflectivity. It is based upon the Born approximation and provides good accuracy, low computational complexity, and insight into the reconstruction process. Numerical simulations and experimental results are provided for long pipelines with mild and severe blockages of different lengths. Good agreement is found between the inverse result and the actual pipe condition for mild blockages.
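The paper's explicit reconstruction formula is not reproduced here, but the flavor of a linearized, Born-type inversion can be sketched in one dimension: for weak scattering, each reflectivity sample approximates half the negative change in log cross-sectional area at the corresponding travel-time depth, so the blockage profile follows from a cumulative sum. Everything below (wave speed, sampling, the synthetic blockage) is an illustrative assumption:

```python
import numpy as np

c = 1400.0   # assumed wave speed in water (m/s), taken constant
dt = 1e-4    # assumed reflectivity sampling interval (s)

# Synthetic "true" pipe: 10% area reduction between 20 m and 30 m
x = np.arange(0, 50, c * dt / 2)              # depth mapped from two-way travel time
A_true = np.where((x > 20) & (x < 30), 0.9, 1.0)

# Linearized forward model: r ≈ -0.5 * d(ln A); inversion is a cumulative sum
r = -0.5 * np.diff(np.log(A_true), prepend=np.log(A_true[0]))
A_rec = np.exp(-2.0 * np.cumsum(r))
print("max reconstruction error:", np.abs(A_rec - A_true).max())
```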
Molecular Dynamics Simulations of Nucleic Acids. From Tetranucleotides to the Ribosome.
Šponer, Jiří; Banáš, Pavel; Jurečka, Petr; Zgarbová, Marie; Kührová, Petra; Havrila, Marek; Krepl, Miroslav; Stadlbauer, Petr; Otyepka, Michal
2014-05-15
We present a brief overview of explicit solvent molecular dynamics (MD) simulations of nucleic acids. We explain the physical chemistry limitations of the simulations, namely the molecular mechanics (MM) force field (FF) approximation and the limited time scale. Further, we discuss relations and differences between simulations and experiments, compare standard and enhanced sampling simulations, discuss the role of starting structures, comment on different versions of nucleic acid FFs, and relate MM computations to contemporary quantum chemistry. Despite its limitations, we show that MD is a powerful technique with fast-growing potential for studying the structural dynamics of nucleic acids that substantially complements experimental results and aids their interpretation.
Accuracy and performance of 3D mask models in optical projection lithography
NASA Astrophysics Data System (ADS)
Agudelo, Viviana; Evanschitzky, Peter; Erdmann, Andreas; Fühner, Tim; Shao, Feng; Limmer, Steffen; Fey, Dietmar
2011-04-01
Different mask models have been compared: rigorous electromagnetic field (EMF) modeling, rigorous EMF modeling with decomposition techniques, and the thin mask (Kirchhoff) approach, used to simulate optical diffraction from different mask patterns in projection systems for lithography. In addition, each rigorous model was tested with two different formulations for partially coherent imaging: the Hopkins assumption and rigorous simulation of mask diffraction orders for multiple illumination angles. The aim of this work is to closely approximate the results of the rigorous EMF method using the thin mask model enhanced with pupil filtering techniques. The validity of this approach for different feature sizes, shapes and illumination conditions is investigated.
Learning in Stochastic Bit Stream Neural Networks.
van Daalen, Max; Shawe-Taylor, John; Zhao, Jieyu
1996-08-01
This paper presents learning techniques for a novel feedforward stochastic neural network. The model uses stochastic weights and the "bit stream" data representation. It has clean, analysable functionality and is very attractive for hardware implementation using standard digital VLSI technology. The design allows simulation at three different levels, and learning techniques are described for each level. The lowest level corresponds to on-chip learning. Simulation results on the three benchmark MONK's problems and on handwritten digit recognition with a clean set of 500 16 x 16 pixel digits demonstrate that the new model is powerful enough for real-world applications. Copyright 1996 Elsevier Science Ltd
Computational analysis of fluid dynamics in pharmaceutical freeze-drying.
Alexeenko, Alina A; Ganguly, Arnab; Nail, Steven L
2009-09-01
Analysis of water vapor flows encountered in laboratory-scale and industrial pharmaceutical freeze-drying systems is presented, based on computational fluid dynamics (CFD) techniques. Flows under continuum gas conditions are analyzed using the solution of the Navier-Stokes equations, whereas rarefied flow solutions are obtained by the direct simulation Monte Carlo (DSMC) method for the Boltzmann equation. Examples of the application of CFD techniques to laboratory-scale and industrial-scale freeze-drying processes are discussed, with an emphasis on the utility of CFD for improving the design and experimental characterization of pharmaceutical freeze-drying hardware and processes. The article presents a two-dimensional simulation of a laboratory-scale dryer, emphasizing the influence of drying conditions and hardware design on process control, and a three-dimensional simulation of an industrial dryer, including a comparison of the obtained results with analytical viscous flow solutions. It was found that the presence of clean-in-place (CIP)/sterilize-in-place (SIP) piping in the duct led to significant changes in the flow field characteristics. The simulation results for vapor flow rates in an industrial freeze-dryer have been compared to tunable diode laser absorption spectroscopy (TDLAS) and gravimetric measurements.
Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data
NASA Technical Reports Server (NTRS)
Johnson, Marty E.; Lalime, Aimee L.; Grosveld, Ferdinand W.; Rizzi, Stephen A.; Sullivan, Brenda M.
2003-01-01
Applying binaural simulation techniques to structural acoustic data can be very computationally intensive, as the number of discrete noise sources can be very large. Typically, Head Related Transfer Functions (HRTFs) are used to individually filter the signals from each of the sources in the acoustic field. Therefore, creating a binaural simulation implies the use of potentially hundreds of real-time filters. This paper details two methods of reducing the number of real-time computations required: (i) using the singular value decomposition (SVD) to reduce the complexity of the HRTFs by breaking them into dominant singular values and vectors, and (ii) using equivalent source reduction (ESR) to reduce the number of sources to be analyzed in real time by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. The ESR and SVD reduction methods can be combined to provide an estimated computation time reduction of 99.4% for the structural acoustic data tested. In addition, preliminary tests have shown a 97% correlation between the results of the combined reduction methods and the results found with the current binaural simulation techniques.
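To illustrate the SVD-based reduction in (i): stacking the HRTF impulse responses for all sources as rows of a matrix, a truncated SVD yields a small set of shared basis filters plus per-source scalar weights, so only k filters must run in real time. A minimal numpy sketch with random placeholder data (measured HRTFs are smooth and compress to far fewer singular values than this):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sources, n_taps = 200, 256
H = rng.standard_normal((n_sources, n_taps))  # per-source HRTF impulse responses (placeholder)

U, s, Vt = np.linalg.svd(H, full_matrices=False)
k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1  # keep 99% of the energy

basis_filters = Vt[:k]              # k shared filters replace n_sources individual ones
source_weights = U[:, :k] * s[:k]   # per-source scalar weights applied to the basis outputs

H_approx = source_weights @ basis_filters
err = np.linalg.norm(H - H_approx) / np.linalg.norm(H)
print(f"rank {k} of {min(H.shape)}, relative approximation error {err:.3f}")
```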
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, Christina M. L.; Palmeri, Mark L.; Department of Anesthesiology, Duke University Medical Center, Durham, North Carolina 27710
2013-04-15
Purpose: The authors previously reported on a three-dimensional computer-generated breast phantom, based on empirical human image data, including a realistic finite-element based compression model that was capable of simulating multimodality imaging data. The computerized breast phantoms are a hybrid of two phantom generation techniques, combining empirical breast CT (bCT) data with flexible computer graphics techniques. However, to date, these phantoms have been based on single human subjects. In this paper, the authors report on a new method to generate multiple phantoms, simulating additional subjects from the limited set of original dedicated breast CT data. The authors developed an image morphing technique to construct new phantoms by gradually transitioning between two human subject datasets, with the potential to generate hundreds of additional pseudoindependent phantoms from the limited bCT cases. The authors conducted a preliminary subjective assessment with a limited number of observers (n = 4) to illustrate how realistic the simulated images generated with the pseudoindependent phantoms appeared. Methods: Several mesh-based geometric transformations were developed to generate distorted breast datasets from the original human subject data. Segmented bCT data from two different human subjects were used as the 'base' and 'target' for morphing. Several combinations of transformations were applied to morph between the 'base' and 'target' datasets, such as changing the breast shape, rotating the glandular data, and changing the distribution of the glandular tissue. Following the morphing, regions of skin and fat were assigned to the morphed dataset in order to appropriately assign mechanical properties during the compression simulation. The resulting morphed breast was compressed using a finite element algorithm and simulated mammograms were generated using techniques described previously. Sixty-two simulated mammograms, generated from morphing three human subject datasets, were used in a preliminary observer evaluation where four board-certified breast radiologists with varying amounts of experience ranked the level of realism (from 1 = 'fake' to 10 = 'real') of the simulated images. Results: The morphing technique was able to successfully generate new and unique morphed datasets from the original human subject data. The radiologists evaluated the realism of simulated mammograms generated from the morphed and unmorphed human subject datasets and scored the realism with an average ranking of 5.87 ± 1.99, confirming that overall the phantom image datasets appeared more 'real' than 'fake.' Moreover, there was not a significant difference (p > 0.1) between the realism of the unmorphed datasets (6.0 ± 1.95) compared to the morphed datasets (5.86 ± 1.99). Three of the four observers had overall average rankings of 6.89 ± 0.89, 6.9 ± 1.24, and 6.76 ± 1.22, whereas the fourth observer ranked them noticeably lower at 2.94 ± 0.7. Conclusions: This work presents a technique that can be used to generate a suite of realistic computerized breast phantoms from a limited number of human subjects. This suite of flexible breast phantoms can be used for multimodality imaging research to provide a known truth while concurrently producing realistic simulated imaging data.
Control Performance, Aerodynamic Modeling, and Validation of Coupled Simulation Techniques for Guided Projectile Roll Dynamics
Sahu, Jubaraj; Fresconi, Frank; Heavey, Karen R. (Weapons and Materials Research)
2014-11-01
Roll control of guided projectiles has been explored in depth in the literature and is of particular interest for this study.
ERIC Educational Resources Information Center
Bello, Sulaiman; Ibi, Mustapha Baba; Bukar, Ibrahim Bulama
2016-01-01
The study examined the effect of the simulation technique and the lecture method on students' academic performance in Mafoni Day Secondary School, Maiduguri. The study used both the simulation technique and the lecture method of teaching at the basic level of education in the teaching/learning environment. The study aimed at determining the best predictor among…
Diez, P; Hoskin, P J; Aird, E G A
2007-10-01
This questionnaire forms the basis of the quality assurance (QA) programme for the UK randomized Phase III study of the Stanford V regimen versus ABVD for the treatment of advanced Hodgkin's disease, to assess differences between participating centres in treatment planning and delivery of involved-field radiotherapy for Hodgkin's lymphoma. The questionnaire, which was circulated amongst 42 participating centres, consisted of seven sections: target volume definition and dose prescription; critical structures; patient positioning and irradiation techniques; planning; dose calculation; verification; and future developments. The results are based on 25 responses. One-third plan using CT alone, one-third use solely the simulator, and the rest individualize, depending on disease site. Eleven centres determine a dose distribution for each patient. Technique depends on disease site and whether CT or simulator planning is employed. Most departments apply isocentric techniques and use immobilization and customized shielding. In vivo dosimetry is performed in 7 centres and treatment verification occurs in 24 hospitals. In conclusion, the planning and delivery of treatment for lymphoma patients varies across the country. Conventional planning is still widespread, but most centres are moving to CT-based planning and virtual simulation with extended use of immobilization, customized shielding and compensation.
Correlation as a Determinant of Configurational Entropy in Supramolecular and Protein Systems
2015-01-01
For biomolecules in solution, changes in configurational entropy are thought to contribute substantially to the free energies of processes like binding and conformational change. In principle, the configurational entropy can be strongly affected by pairwise and higher-order correlations among conformational degrees of freedom. However, the literature offers mixed perspectives regarding the contributions that changes in correlations make to changes in configurational entropy for such processes. Here we take advantage of powerful techniques for simulation and entropy analysis to carry out rigorous in silico studies of correlation in binding and conformational changes. In particular, we apply information-theoretic expansions of the configurational entropy to well-sampled molecular dynamics simulations of a model host–guest system and the protein bovine pancreatic trypsin inhibitor. The results bear on the interpretation of NMR data, as they indicate that changes in correlation are important determinants of entropy changes for biologically relevant processes and that changes in correlation may either balance or reinforce changes in first-order entropy. The results also highlight the importance of main-chain torsions as contributors to changes in protein configurational entropy. As simulation techniques grow in power, the mathematical techniques used here will offer new opportunities to answer challenging questions about complex molecular systems. PMID:24702693
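The information-theoretic expansion referred to above truncates the configurational entropy at second order as S ≈ Σ_i S_i - Σ_{i<j} I_ij, i.e., marginal torsion entropies corrected by pairwise mutual information. A minimal histogram-based sketch over synthetic torsion-angle samples (the bin count, sample size, and von Mises data are illustrative assumptions):

```python
import numpy as np

def hist_entropy(x, bins=36, range_=(-np.pi, np.pi)):
    """Marginal entropy (nats) of one torsion from a histogram estimate."""
    counts, _ = np.histogram(x, bins=bins, range=range_)
    p = counts[counts > 0] / len(x)
    return -np.sum(p * np.log(p))

def mutual_information(x, y, bins=36, range_=(-np.pi, np.pi)):
    """Pairwise mutual information (nats) from a 2D histogram estimate."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins, range=[range_, range_])
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

# torsions: (n_frames, n_torsions) samples from a well-sampled trajectory (synthetic here)
rng = np.random.default_rng(2)
torsions = rng.vonmises(0.0, 2.0, size=(20000, 4))

S1 = sum(hist_entropy(torsions[:, i]) for i in range(torsions.shape[1]))
I2 = sum(mutual_information(torsions[:, i], torsions[:, j])
         for i in range(torsions.shape[1]) for j in range(i + 1, torsions.shape[1]))
print(f"S2 ≈ {S1 - I2:.3f} nats (S1 = {S1:.3f}, I2 = {I2:.3f})")
```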
Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2015-11-01
The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers lead to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
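The common core of the estimation techniques compared in such studies can be sketched simply: evaluate the pressure gradient from the momentum equation on the measured velocity grid, then integrate it spatially. The toy below uses a steady, inviscid, solid-body-rotation field, for which p = 0.5*rho*Omega^2*r^2 gives an exact check along the center row; practical implementations add the unsteady and viscous terms and use more robust omni-directional or Poisson-based integration:

```python
import numpy as np

rho, omega, h = 1000.0, 10.0, 1e-3          # density, rotation rate, grid spacing (illustrative)
y, x = (np.mgrid[0:65, 0:65] - 32) * h       # measurement grid centered on the vortex
u, v = -omega * y, omega * x                 # solid-body rotation stands in for PIV data

dudy, dudx = np.gradient(u, h)               # axis 0 is y, axis 1 is x
dvdy, dvdx = np.gradient(v, h)
dpdx = -rho * (u * dudx + v * dudy)          # steady, inviscid x-momentum
dpdy = -rho * (u * dvdx + v * dvdy)          # steady, inviscid y-momentum

mid = 32                                     # integrate along the center row (trapezoidal)
p_row = np.concatenate([[0.0], np.cumsum(0.5 * (dpdx[mid, 1:] + dpdx[mid, :-1]) * h)])
p_exact = 0.5 * rho * omega**2 * x[mid] ** 2
p_exact -= p_exact[0]
print("max integration error along the center row:", np.abs(p_row - p_exact).max())
```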
Processing infrared images of aircraft lapjoints
NASA Technical Reports Server (NTRS)
Syed, Hazari; Winfree, William P.; Cramer, K. E.
1992-01-01
Techniques for processing IR images of aging aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints which clearly delineates the disbonded region from the bonded regions. The technique performs poorly on unpainted aircraft skin surfaces, but this limitation can be overcome by using a self-adhering contact sheet. Neural network analysis of raw temperature data has been shown to be an effective tool for the visualization of images. Numerical simulation results show the above processing technique to be effective in delineating disbonds.
High order discretization techniques for real-space ab initio simulations
NASA Astrophysics Data System (ADS)
Anderson, Christopher R.
2018-03-01
In this paper, we present discretization techniques to address numerical problems that arise when constructing ab initio approximations that use real-space computational grids. We present techniques to accommodate the singular nature of idealized nuclear and idealized electronic potentials, and we demonstrate the utility of using high order accurate grid based approximations to Poisson's equation in unbounded domains. To demonstrate the accuracy of these techniques, we present results for a Full Configuration Interaction computation of the dissociation of H2 using a computed, configuration dependent, orbital basis set.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulations is therefore vitally important. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation by estimating the numerical approximation error, computational model induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during a simulation so that its reliability can be improved.
Multiscale simulation of molecular processes in cellular environments.
Chiricotto, Mara; Sterpone, Fabio; Derreumaux, Philippe; Melchionna, Simone
2016-11-13
We describe recent advances in studying biological systems via multiscale simulations. Our scheme is based on a coarse-grained representation of the macromolecules and a mesoscopic description of the solvent. The dual technique handles particles, the aqueous solvent and their mutual exchange of forces, resulting in a stable and accurate methodology that allows biosystems of unprecedented size to be simulated. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2016 The Author(s).
A hybrid experimental-numerical technique for determining 3D velocity fields from planar 2D PIV data
NASA Astrophysics Data System (ADS)
Eden, A.; Sigurdson, M.; Mezić, I.; Meinhart, C. D.
2016-09-01
Knowledge of 3D, three component velocity fields is central to the understanding and development of effective microfluidic devices for lab-on-chip mixing applications. In this paper we present a hybrid experimental-numerical method for the generation of 3D flow information from 2D particle image velocimetry (PIV) experimental data and finite element simulations of an alternating current electrothermal (ACET) micromixer. A numerical least-squares optimization algorithm is applied to a theory-based 3D multiphysics simulation in conjunction with 2D PIV data to generate an improved estimation of the steady state velocity field. This 3D velocity field can be used to assess mixing phenomena more accurately than would be possible through simulation alone. Our technique can also be used to estimate uncertain quantities in experimental situations by fitting the gathered field data to a simulated physical model. The optimization algorithm reduced the root-mean-squared difference between the experimental and simulated velocity fields in the target region by more than a factor of 4, resulting in an average error less than 12% of the average velocity magnitude.
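The least-squares step at the heart of such hybrid methods can be sketched compactly: treat the simulation as a parameterized forward model and minimize the residual between its in-plane velocity predictions and the 2D PIV data. The toy forward model and parameters below are invented stand-ins for the paper's finite element ACET simulation:

```python
import numpy as np
from scipy.optimize import least_squares

def simulated_velocity(theta, xy):
    """Toy parameterized forward model for the in-plane velocity field.
    theta plays the role of uncertain physical parameters (illustrative)."""
    a, b = theta
    x, y = xy
    return np.stack([a * np.sin(np.pi * x) * np.cos(np.pi * y),
                     -b * np.cos(np.pi * x) * np.sin(np.pi * y)])

rng = np.random.default_rng(3)
xy = rng.uniform(0, 1, size=(2, 400))                  # PIV interrogation points
v_meas = simulated_velocity((1.3, 1.1), xy)            # synthetic "measurements"
v_meas += 0.05 * rng.standard_normal(v_meas.shape)     # add measurement noise

def residuals(theta):
    return (simulated_velocity(theta, xy) - v_meas).ravel()

fit = least_squares(residuals, x0=[1.0, 1.0])          # numerical least-squares optimization
print("recovered parameters:", fit.x)
```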
Challenges and solutions for realistic room simulation
NASA Astrophysics Data System (ADS)
Begault, Durand R.
2002-05-01
Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and for reverberation thresholds as a function of reverberation time and level within octave bands from 250 Hz to 2 kHz. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, suggesting a strategy for minimizing the computational requirements of real-time auralization systems.
Measurements of Deposition, Lung Surface Area and Lung Fluid for Simulation of Inhaled Compounds.
Fröhlich, Eleonore; Mercuri, Annalisa; Wu, Shengqian; Salar-Behzadi, Sharareh
2016-01-01
Modern strategies in drug development employ in silico techniques in the design of compounds as well as in estimations of pharmacokinetic, pharmacodynamic and toxicity parameters. The quality of the results depends on the software algorithm, the data library and the input data. Compared to simulations of absorption, distribution, metabolism, excretion, and toxicity of oral drug compounds, relatively few studies report predictions of the pharmacokinetics and pharmacodynamics of inhaled substances. For calculation of the drug concentration at the absorption site, the pulmonary epithelium, physiological parameters such as lung surface area and distribution volume (lung lining fluid) have to be known. These parameters can only be determined by invasive techniques and by postmortem studies, and very different values have been reported in the literature. This review addresses the state of software programs for the simulation of orally inhaled substances and focuses on problems in the determination of particle deposition, lung surface area and lung lining fluid. The different surface areas for deposition and for drug absorption are difficult to include directly in the simulations. As drug levels are influenced by multiple parameters, the role of single parameters in the simulations cannot be identified easily.
How to identify dislocations in molecular dynamics simulations?
NASA Astrophysics Data System (ADS)
Li, Duo; Wang, FengChao; Yang, ZhenYu; Zhao, YaPu
2014-12-01
Dislocations are of great importance in revealing the underlying mechanisms of deformation in solid crystals. With the development of computational facilities and technologies, observations of dislocations at the atomic level through numerical simulations are now possible. Molecular dynamics (MD) simulation is a powerful tool for understanding and visualizing the creation of dislocations as well as the evolution of crystal defects. However, the numerical results from large-scale MD simulations are not very illuminating by themselves, and various techniques exist for analyzing dislocations and deformed crystal structures. It is thus a considerable challenge for newcomers to this community to choose a proper method with which to start their investigations. In this review, we summarize and discuss twelve existing structure characterization methods for MD simulations of deformed crystalline solids. A comprehensive comparison is made between the advantages and disadvantages of these typical techniques. We also examine some recent advances in the dynamics of dislocations related to hydraulic fracturing, where dislocation emission has been found to have a significant effect on the propagation and bifurcation of the crack tip.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, T; Ding, H; Lipinski, J
2015-06-15
Purpose: To develop a physics-based model for accurate quantification of the cross-sectional area (CSA) of coronary arteries in CT angiography by measuring the integrated density to account for the partial volume effect. Methods: In this technique the integrated density of the object, as compared with its local background, is measured to account for the partial volume effect. Normal vessels were simulated as circles with diameters in the range of 0.1-3 mm. Diseased vessels were simulated as 2, 3, and 4 mm diameter vessels with 10-90% area stenosis, created by inserting circular plaques. A simplified two-material model was used, with the lumen as 8 mg/ml iodine and the background as lipid. The contrast-to-noise ratio between lumen and background was approximately 26. Linear fits to the known CSA were calculated. The precision and accuracy of the measurement were quantified using the root-mean-square fit deviations (RMSD) and errors to the known CSA (RMSE). Results were compared to manual segmentation of the vessel lumen. To assess the impact of random variations, coefficients of variation (CV) from 10 simulations for each vessel were computed to determine reliability. Measurements with CVs less than 10% were considered reliable. Results: For normal vessels, the precision and accuracy of the integrated density technique were 0.12 mm² and 0.28 mm², respectively. The corresponding results for manual segmentation were 0.27 mm² and 0.43 mm². For diseased vessels, the precision and accuracy of the integrated density technique were 0.14 mm² and 0.19 mm². Corresponding results for manual segmentation were 0.42 mm² and 0.71 mm². Reliable CSAs were obtained for normal vessels with diameters larger than 1 mm and for diseased vessels with areas as low as 1.26 mm². Conclusion: The CSA based on integrated density showed improved precision and accuracy as compared with manual segmentation in simulation. These results indicate the potential of using integrated density to quantify the CSA of coronary arteries in CT angiography.
NASA Astrophysics Data System (ADS)
Bahl, Mayank; Zhou, Gui-Rong; Heller, Evan; Cassarly, William; Jiang, Mingming; Scarmozzino, Rob; Gregory, G. Groot
2014-09-01
Over the last two decades there has been extensive research done to improve the design of Organic Light Emitting Diodes (OLEDs) so as to enhance light extraction efficiency, improve beam shaping, and allow color tuning through techniques such as the use of patterned substrates, photonic crystal (PCs) gratings, back reflectors, surface texture, and phosphor down-conversion. Computational simulation has been an important tool for examining these increasingly complex designs. It has provided insights for improving OLED performance as a result of its ability to explore limitations, predict solutions, and demonstrate theoretical results. Depending upon the focus of the design and scale of the problem, simulations are carried out using rigorous electromagnetic (EM) wave optics based techniques, such as finite-difference time-domain (FDTD) and rigorous coupled wave analysis (RCWA), or through ray optics based technique such as Monte Carlo ray-tracing. The former are typically used for modeling nanostructures on the OLED die, and the latter for modeling encapsulating structures, die placement, back-reflection, and phosphor down-conversion. This paper presents the use of a mixed-level simulation approach which unifies the use of EM wave-level and ray-level tools. This approach uses rigorous EM wave based tools to characterize the nanostructured die and generate both a Bidirectional Scattering Distribution function (BSDF) and a far-field angular intensity distribution. These characteristics are then incorporated into the ray-tracing simulator to obtain the overall performance. Such mixed-level approach allows for comprehensive modeling of the optical characteristic of OLEDs and can potentially lead to more accurate performance than that from individual modeling tools alone.
NASA Astrophysics Data System (ADS)
Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.
2017-09-01
The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various ways of weighting observations and model simulations during the assimilation and of accounting for error distributions. In particular, the deterministic EnKF avoids perturbing observations before assimilation (as is done in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations; therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA over the whole of Australia. To evaluate the filters' performance and analyze their impact on model simulations, the estimates are validated against independent in situ measurements. Our results indicate that all implemented filters improve the water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which reduce the model's groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF with Systematic Resampling decreases the model estimation error by 23%.
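For reference, the analysis step of the stochastic EnKF variant discussed above fits in a few lines of numpy; the tiny three-component state, the column-sum observation operator standing in for a TWS measurement, and all numbers are illustrative assumptions:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, H, rng):
    """Stochastic EnKF analysis: perturb observations and update each member.
    ensemble: (n_state, n_members); H: (n_obs, n_state) observation operator."""
    n_obs, n_mem = H.shape[0], ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)      # state anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)                # observation-space anomalies
    P_hh = HXp @ HXp.T / (n_mem - 1) + obs_err_std**2 * np.eye(n_obs)
    P_xh = X @ HXp.T / (n_mem - 1)
    K = P_xh @ np.linalg.solve(P_hh, np.eye(n_obs))          # Kalman gain
    obs_pert = obs[:, None] + obs_err_std * rng.standard_normal((n_obs, n_mem))
    return ensemble + K @ (obs_pert - HX)

rng = np.random.default_rng(4)
ens = 10.0 + 2.0 * rng.standard_normal((3, 50))   # e.g., layered water storage states
H = np.array([[1.0, 1.0, 1.0]])                    # TWS-like observation: the column sum
updated = enkf_update(ens, np.array([27.0]), 1.5, H, rng)
print("prior mean:", ens.sum(0).mean(), "posterior mean:", updated.sum(0).mean())
```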
Feasibility study for wax deposition imaging in oil pipelines by PGNAA technique.
Cheng, Can; Jia, Wenbao; Hei, Daqian; Wei, Zhiyong; Wang, Hongtao
2017-10-01
Wax deposition in pipelines is a crucial problem in the oil industry. A method based on the prompt gamma-ray neutron activation analysis technique was applied to reconstruct the image of wax deposition in oil pipelines. The 2.223 MeV hydrogen capture gamma rays were used to reconstruct the wax deposition image. To validate the method, both MCNP simulations and experiments were performed for wax deposited with a maximum thickness of 20 cm. The performance of the method was simulated using the MCNP code. The experiment was conducted with a 252Cf neutron source and a LaBr3:Ce detector. A good correspondence between the simulations and the experiments was observed. The results obtained indicate that the present approach is efficient for wax deposition imaging in oil pipelines. Copyright © 2017 Elsevier Ltd. All rights reserved.
A novel variable-gravity simulation method: potential for astronaut training.
Sussingham, J C; Cocks, F H
1995-11-01
Zero gravity conditions for astronaut training have traditionally used neutral buoyancy tanks, and with such tanks hypogravity conditions are produced by the use of supplemental weights. This technique does not allow for the influence of water viscosity on any reduced gravity exercise regime. With a water-foam fluid produced by using a microbubble air flow together with surface active agents to prevent bubble agglomeration, it has been found possible to simulate a range of gravity conditions without the need for supplemental weights and additionally with a substantial reduction in the resulting fluid viscosity. This new technique appears to have application in improving the simulation environment for astronaut training under the reduced gravity conditions to be found on the moon or on Mars, and may have terrestrial applications in patient rehabilitation and exercise as well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundstrom, B.; Shirazi, M.; Coddington, M.
2013-02-01
This poster describes a Grid Interconnection System Evaluator (GISE) that leverages hardware-in-the-loop (HIL) simulation techniques to rapidly evaluate the grid interconnection standard conformance of an ICS according to the procedures in IEEE Std 1547.1. The architecture and test sequencing of this evaluation tool, along with a set of representative ICS test results from three different photovoltaic (PV) inverters, are presented. The GISE adds to the National Renewable Energy Laboratory's (NREL) evaluation platform, which now allows for rapid development of ICS control algorithms using controller HIL (CHIL) techniques, testing of the dc input characteristics of PV-based ICSs through the use of a PV simulator capable of simulating real-world dynamics using power HIL (PHIL), and evaluation of ICS grid interconnection conformance.
Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val
1989-01-01
Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for the visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques (post-processing, tracking, and steering) are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that a high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.
NASA Astrophysics Data System (ADS)
Hamidi, Mohammadreza; Shahanaghi, Kamran; Jabbarzadeh, Armin; Jahani, Ehsan; Pousti, Zahra
2017-12-01
Every production plant needs an estimate of its production level, and many parameters can affect this estimate. In this paper, we seek an appropriate estimate of the production level for an industrial factory called Barez operating in an uncertain environment. We consider a part of the production line that has different production times for different kinds of products, which introduces both environmental and system uncertainty. To solve the problem, we simulate the line; because of the uncertainty in the production times, fuzzy simulation is used. The required fuzzy numbers are estimated using the bootstrap technique. The results have been used in the production planning process by factory experts, with satisfying outcomes, and the experts' opinions on the efficiency of this methodology are included.
Numerical simulations of microwave heating of liquids: enhancements using Krylov subspace methods
NASA Astrophysics Data System (ADS)
Lollchund, M. R.; Dookhitram, K.; Sunhaloo, M. S.; Boojhawon, R.
2013-04-01
In this paper, we compare the performances of three iterative solvers for large sparse linear systems arising in the numerical computations of incompressible Navier-Stokes (NS) equations. These equations are employed mainly in the simulation of microwave heating of liquids. The emphasis of this work is on the application of Krylov projection techniques such as Generalized Minimal Residual (GMRES) to solve the Pressure Poisson Equations that result from discretisation of the NS equations. The performance of the GMRES method is compared with the traditional Gauss-Seidel (GS) and point successive over relaxation (PSOR) techniques through their application to simulate the dynamics of water housed inside a vertical cylindrical vessel which is subjected to microwave radiation. It is found that as the mesh size increases, GMRES gives the fastest convergence rate in terms of computational times and number of iterations.
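To make the comparison concrete, the sketch below assembles a small sparse 2-D Poisson system, a stand-in for the discretised pressure Poisson equation, and solves it with SciPy's restarted GMRES; the grid size and right-hand side are illustrative, and the GS/PSOR competitors would be simple stationary-iteration loops over the same matrix:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

# 5-point-stencil 2D Poisson matrix, standing in for the discretised
# pressure Poisson equation from the Navier-Stokes solver.
n = 64
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()
b = np.ones(n * n)

x, info = gmres(A, b, restart=50)   # Krylov projection (restarted GMRES)
print("converged" if info == 0 else f"stopped, info={info}",
      "| residual norm:", np.linalg.norm(b - A @ x))
```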
Path integral pricing of Wasabi option in the Black-Scholes model
NASA Astrophysics Data System (ADS)
Cassagnes, Aurelien; Chen, Yu; Ohashi, Hirotada
2014-11-01
In this paper, using path integral techniques, we derive a formula for a propagator arising in the study of occupation-time derivatives. Using this result we derive a fair price for the cumulative Parisian option. After confirming the validity of the derived result using Monte Carlo simulation, a new type of heavily path-dependent derivative product is investigated. We derive an approximation for the fair price of our so-called Wasabi option and check the accuracy of the result with a Monte Carlo simulation.
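The Monte Carlo check mentioned above is easy to sketch for an occupation-time contract. Below is a cumulative Parisian-style knock-out call under Black-Scholes dynamics: the option pays only if the total time the path spends below a barrier stays within a budget tau_max. All contract parameters are invented for illustration, and the paper's exact payoff definition may differ:

```python
import numpy as np

S0, K, L, r, sigma, T = 100.0, 100.0, 90.0, 0.05, 0.2, 1.0   # illustrative parameters
tau_max = 0.1                       # occupation-time budget below the barrier L
n_paths, n_steps = 20_000, 252
dt = T / n_steps

rng = np.random.default_rng(5)
Z = rng.standard_normal((n_paths, n_steps))
logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1)
S = np.exp(logS)                    # geometric Brownian motion paths

occupation = dt * (S < L).sum(axis=1)        # total time spent below the barrier
alive = occupation <= tau_max                # cumulative Parisian knock-out condition
payoff = np.where(alive, np.maximum(S[:, -1] - K, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"price ≈ {price:.4f} ± {stderr:.4f}")
```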
Impact of Simulation Technology on Die and Stamping Business
NASA Astrophysics Data System (ADS)
Stevens, Mark W.
2005-08-01
Over the last ten years, we have seen an explosion in the use of simulation-based techniques to improve the engineering, construction, and operation of GM production tools. The impact has been as profound as the overall switch to CAD/CAM from the old manual design and construction methods. The changeover to N/C machining from duplicating milling machines brought advances in accuracy and speed to our construction activity. It also brought significant reductions in the hand fitting of sculptured surfaces. Changing over to CAD design brought similar advances in accuracy, and today's use of solid modeling has enhanced that accuracy gain while finally leading to reductions in lead time and cost through the development of parametric techniques. Elimination of paper drawings for die design, along with the process of blueprinting and distribution, provided the savings required to install high-capacity computer servers, high-speed data transmission lines and integrated networks. These historic changes in the application of CAE technology in manufacturing engineering paved the way for the implementation of simulation in all aspects of our business. The benefits are being realized now, and the future holds even greater promise as the simulation techniques mature and expand. Every new line of dies is verified prior to casting for interference-free operation. Sheet metal forming simulation validates the material flow, eliminating the high costs of physical experimentation that depended on the trial-and-error methods of the past. Integrated forming simulation and die structural analysis and optimization have led to a reduction in die size and weight on the order of 30% or more. The latest techniques in factory simulation enable analysis of automated press lines, including all stamping operations with corresponding automation. This leads to manufacturing lines capable of running at higher levels of throughput, with actual results demonstrating two or more additional strokes per minute. As we spread these simulation techniques to the balance of our business, from blank de-stacking to the racking of parts, we anticipate continued reductions in lead time and engineering expense while improving quality and start-up execution. The author will provide an overview of the technology and business evolution of the math-based process that brought an historic transition and revitalization to the die and stamping industry in the past decade. Finally, the author will give an outlook on future business needs and technology development directions.
Scattering of Acoustic Energy from Rough Deep Ocean Seafloor: a Numerical Modeling Approach.
NASA Astrophysics Data System (ADS)
Robertsson, Johan Olof Anders
1995-01-01
The highly heterogeneous and anelastic nature of deep ocean seafloor results in complex reverberation as acoustic energy incident from the overlying water column interacts and scatters from it. To gain a deeper understanding of the mechanisms causing the reverberation in sonar and seafloor scattering experiments, we have developed numerical simulation techniques that are capable of modeling the principal physical properties of complex seafloor structures. A new viscoelastic finite-difference technique for modeling anelastic wave propagation in 2-D and 3-D heterogeneous media, as well as a computationally optimal method for quantifying the anelastic properties in terms of viscoelastic mechanics, are presented. A method for reducing numerical dispersion using a Galerkin-wavelet formulation that enables large computational savings is also presented. The widely different regimes of wave propagation occurring in ocean acoustic problems motivate the use of hybrid simulation techniques. HARVEST (Hybrid Adaptive Regime Visco-Elastic Simulation Technique) combines solutions from Gaussian beams, viscoelastic finite-differences, and Kirchhoff extrapolation to simulate large-offset scattering problems. Several scattering hypotheses based on finite-difference simulations of short-range acoustic scattering from realistic seafloor models are presented. Anelastic sediments on the seafloor are found to have a significant impact on the backscattered field from low grazing angle scattering experiments. In addition, small perturbations in the sediment compressional velocity can also dramatically alter the backscattered field due to transitions between pre- and post-critical reflection regimes. The hybrid techniques are employed to simulate deep ocean acoustic reverberation data collected in the vicinity of the northern mid-Atlantic ridge. In general, the simulated data compare well to the real data. Noise partly due to side-lobes in the beam-pattern of the receiver array is the principal source of reverberation at lower levels. Overall, the employed seafloor models were found to model the real seafloor well. Inaccurately predicted events may partly be attributed to the intrinsic uncertainty in the stochastic seafloor models. For optimal comparison between real and HARVEST-simulated data, the experimental geometry should be chosen so that 3-D effects may be ignored, and to yield a cross-range resolution in the beam-formed acoustic data that is small relative to the lineation of the seafloor.
Szostek, Kamil; Piórkowski, Adam
2016-10-01
Ultrasound (US) imaging is one of the most popular techniques used in clinical diagnosis, mainly due to the lack of adverse effects on patients and the simplicity of US equipment. However, the characteristics of the medium cause US imaging to reconstruct examined tissues imprecisely. The artifacts are the results of wave phenomena, i.e. diffraction or refraction, and should be recognized during examination to avoid misinterpretation of a US image. Currently, US training is based on teaching materials and simulators, and ultrasound simulation has become an active research area in medical computer science. Many US simulators are limited by the complexity of the wave phenomena, which leads to computationally intensive processing that makes it difficult for systems to operate in real time. To achieve the required frame rate, the vast majority of simulators simplify the treatment of wave diffraction and refraction. The following paper proposes a solution for an ultrasound simulator based on methods known in geophysics. To improve simulation quality, a wavefront construction method was adapted which takes the refraction phenomena into account. This technique uses ray tracing and velocity averaging to construct wavefronts in the simulation. Instead of a geological medium, real CT scans are used. This approach can produce more realistic projections of pathological findings and is also capable of providing real-time simulation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Patmanidis, Ilias
2018-01-01
In bionanotechnology, the field of creating functional materials consisting of bio-inspired molecules, the function and shape of a nanostructure only appear through the assembly of many small molecules together. The large number of building blocks required to define a nanostructure combined with the many degrees of freedom in packing small molecules has long precluded molecular simulations, but recent advances in computational hardware as well as software have made classical simulations available to this strongly expanding field. Here, we review the state of the art in simulations of self-assembling bio-inspired supramolecular systems. We will first discuss progress in force fields, simulation protocols and enhanced sampling techniques using recent examples. Secondly, we will focus on efforts to enable the comparison of experimentally accessible observables and computational results. Experimental quantities that can be measured by microscopy, spectroscopy and scattering can be linked to simulation output either directly or indirectly, via quantum mechanical or semi-empirical techniques. Overall, we aim to provide an overview of the various computational approaches to understand not only the molecular architecture of nanostructures, but also the mechanism of their formation. PMID:29688238
Efficient morse decompositions of vector fields.
Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene
2008-01-01
Existing topology-based vector field analysis techniques rely on the ability to extract individual trajectories such as fixed points, periodic orbits, and separatrices, which are sensitive to noise and to errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretation. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structure of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful in applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural tradeoff between the fineness of the MCGs and the computational cost. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach to constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These techniques provide additional tradeoffs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces, including engine simulation data sets.
An extended stochastic method for seismic hazard estimation
NASA Astrophysics Data System (ADS)
Abd el-aal, A. K.; El-Eraki, M. A.; Mostafa, S. I.
2015-12-01
In this contribution, we develop an extended stochastic technique for seismic hazard assessment. The technique builds on the stochastic method of Boore (2003, "Simulation of ground motion using the stochastic method", Pure Appl. Geophys. 160:635-676). The aim of the extended stochastic technique is to simulate ground motion in order to help minimize the consequences of future earthquakes. The first step of the technique is to define the seismic sources that most affect the study area. Then, the maximum expected magnitude is defined for each of these seismic sources, followed by an estimate of the ground motion using an empirical attenuation relationship. Finally, site amplification is incorporated in calculating the peak ground acceleration (PGA) at each site of interest. We tested and applied the developed technique at the cities of Cairo, Suez, Port Said, Ismailia, Zagazig and Damietta to predict the ground motion, and at Cairo, Zagazig and Damietta to estimate the maximum peak ground acceleration under actual soil conditions. In addition, median response spectra for 0.5, 1, 5, 10 and 20% damping are estimated using the extended stochastic simulation technique. The highest calculated acceleration values at bedrock conditions are found at Suez, with a value of 44 cm/s²; the acceleration values decrease towards the north of the study area, reaching 14.1 cm/s² at Damietta. This agrees with, and is comparable to, the results of previous seismic hazard studies of northern Egypt. This work can be used for seismic risk mitigation and earthquake engineering purposes.
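A bare-bones version of the underlying stochastic method (after Boore, 2003) can be sketched as follows: windowed Gaussian noise is shaped in the frequency domain by an omega-squared source spectrum and transformed back to a time series. The corner frequency, window, and normalization below are illustrative, not the calibrated values a hazard study would use:

```python
import numpy as np

fs, dur = 100.0, 20.0                 # sample rate (Hz) and duration (s), illustrative
n = int(fs * dur)
t = np.arange(n) / fs
f0 = 2.0                              # assumed corner frequency (Hz)

rng = np.random.default_rng(6)
noise = rng.standard_normal(n)
envelope = (t / 2.0) * np.exp(1.0 - t / 2.0)   # Saragoni-Hart-type shaping window
windowed = noise * envelope

spec = np.fft.rfft(windowed)
f = np.fft.rfftfreq(n, d=1.0 / fs)
shape = f**2 / (1.0 + (f / f0) ** 2)           # omega-squared acceleration spectrum shape
spec = spec / np.sqrt(np.mean(np.abs(spec) ** 2))   # normalize noise to unit average power
spec = spec * shape                            # impose the target spectral shape

accel = np.fft.irfft(spec, n)
print(f"simulated PGA ≈ {np.abs(accel).max():.3e} (arbitrary units)")
```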
Comparisons of NIF convergent ablation simulations with radiograph data.
Olson, R E; Hicks, D G; Meezan, N B; Koch, J A; Landen, O L
2012-10-01
A technique for comparing simulation results directly with radiograph data from backlit capsule implosion experiments will be discussed. Forward Abel transforms are applied to the kappa*rho profiles of the simulation. These provide the transmission ratio (optical depth) profiles of the simulation. Gaussian and top hat blurs are applied to the simulated transmission ratio profiles in order to account for the motion blurring and imaging slit resolution of the experimental measurement. Comparisons between the simulated transmission ratios and the radiograph data lineouts are iterated until a reasonable backlighter profile is obtained. This backlighter profile is combined with the blurred, simulated transmission ratios to obtain simulated intensity profiles that can be directly compared with the radiograph data. Examples will be shown from recent convergent ablation (backlit implosion) experiments at the NIF.
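The forward Abel transform at the start of that chain maps a radial kappa*rho profile to chord-integrated optical depths, F(y) = 2 ∫_y^R f(r) r / sqrt(r² - y²) dr, from which a simulated transmission follows as exp(-F). A minimal numerical sketch with an invented shell profile (the blurring and backlighter steps are omitted):

```python
import numpy as np

def forward_abel(f_r, r):
    """Forward Abel transform of a radial profile (trapezoidal rule):
    F(y) = 2 * integral_y^R f(r) r / sqrt(r^2 - y^2) dr."""
    F = np.zeros_like(f_r)
    for i, y in enumerate(r[:-1]):
        rr = r[i + 1:]
        g = 2.0 * f_r[i + 1:] * rr / np.sqrt(rr**2 - y**2)
        F[i] = np.sum(0.5 * (g[:-1] + g[1:]) * np.diff(rr))
    return F

r = np.linspace(0.0, 1.0, 200)
kappa_rho = np.exp(-(((r - 0.6) / 0.1) ** 2))    # illustrative opacity*density shell
optical_depth = forward_abel(kappa_rho, r)
transmission = np.exp(-optical_depth)             # profile to compare with radiograph lineouts
print("central transmission:", transmission[0])
```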
Simulation training tools for nonlethal weapons using gaming environments
NASA Astrophysics Data System (ADS)
Donne, Alexsana; Eagan, Justin; Tse, Gabriel; Vanderslice, Tom; Woods, Jerry
2006-05-01
Modern simulation techniques have a growing role in evaluating new technologies and in developing cost-effective training programs. A mission simulator facilitates the productive exchange of ideas by demonstrating concepts through compellingly realistic computer simulation. Revolutionary advances in 3D simulation technology have made it possible for desktop computers to process strikingly realistic and complex interactions with results depicted in real time. Computer games now allow multiple real human players and "artificially intelligent" (AI) simulated robots to play together. Advances in computer processing power have compensated for the inherently intensive calculations required for complex simulation scenarios. The main components of the leading game engines have been released for user modification, enabling game enthusiasts and amateur programmers to advance the state of the art in AI and computer simulation technologies. It is now possible to simulate sophisticated and realistic conflict situations in order to evaluate the impact of non-lethal devices as well as conflict resolution procedures using such devices. Simulations can reduce training costs as end users learn what a device does and does not do prior to use, understand responses to the device prior to deployment, determine whether the device is appropriate for their situational responses, and train with new devices and techniques before purchasing hardware. This paper will present the status of SARA's mission simulation development activities, based on the Half-Life game engine, for the purpose of evaluating the latest non-lethal weapon devices and developing training tools for such devices.
Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.; Halicioglu, M. T.
1984-01-01
All of the investigations performed employed, in one way or another, a computer simulation technique based on atomistic-level considerations. In general, three types of simulation methods were used for modeling systems of discrete particles that interact via well-defined potential functions: molecular dynamics (a general method for solving the classical equations of motion of a model system); Monte Carlo (the use of a Markov chain ensemble averaging technique to model the equilibrium properties of a system); and molecular statics (which provides the properties of a system at T = 0 K). The effects of three-body forces on the vibrational frequencies of triatomic clusters were investigated. The multilayer relaxation phenomena for low-index planes of an fcc crystal were also analyzed as a function of the three-body interactions. Various surface properties for the Si and SiC systems were calculated. Results obtained from static simulation calculations for slip formation are presented, and the more elaborate molecular dynamics calculations on the propagation of cracks in two-dimensional systems are outlined.
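As a minimal illustration of the molecular dynamics method listed first above, the sketch below integrates Lennard-Jones particles with the velocity Verlet scheme in reduced units; the lattice start, step count, and parameters are illustrative and unrelated to the study's actual potentials:

```python
import numpy as np

def lj_forces(pos, eps=1.0, sig=1.0):
    """Pairwise Lennard-Jones forces:
    F_i = sum_j 24*eps*(2*(sig/r)^12 - (sig/r)^6)/r^2 * (r_i - r_j)."""
    d = pos[:, None, :] - pos[None, :, :]
    r2 = (d**2).sum(-1) + np.eye(len(pos))   # pad the diagonal to avoid 0/0
    inv6 = (sig**2 / r2) ** 3
    fmag = 24.0 * eps * (2.0 * inv6**2 - inv6) / r2
    np.fill_diagonal(fmag, 0.0)
    return (fmag[:, :, None] * d).sum(axis=1)

# 18 particles on a simple cubic lattice near the LJ equilibrium spacing
grid = np.array([[i, j, k] for i in range(3) for j in range(3) for k in range(2)], float)
pos = 1.12 * grid
vel = np.zeros_like(pos)
dt, mass = 1e-3, 1.0

f = lj_forces(pos)
for _ in range(2000):                         # velocity Verlet integration
    pos += vel * dt + 0.5 * (f / mass) * dt**2
    f_new = lj_forces(pos)
    vel += 0.5 * ((f + f_new) / mass) * dt
    f = f_new

print("kinetic energy after 2000 steps:", 0.5 * mass * (vel**2).sum())
```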
Eibenberger, Karin; Eibenberger, Bernhard; Rucci, Michele
2016-08-01
The precise measurement of eye movements is important for investigating vision, oculomotor control and vestibular function. The magnetic scleral search coil technique is one of the most precise techniques for recording eye movements, with very high spatial (≈1 arcmin) and temporal (>kHz) resolution. The technique is based on measuring the voltage induced in a search coil by a surrounding magnetic field. This search coil is embedded in a contact lens worn by a human subject, and the measured voltage is directly related to the orientation of the eye in space. This requires a magnetic field with high homogeneity in the center; otherwise, field inhomogeneity would give the false impression of an eye rotation whenever the head translates. To circumvent this problem, a bite bar typically restricts head movement to a minimum. However, the need often emerges to record eye movements precisely under natural viewing conditions. To this end, one needs a magnetic field that is uniform over a large area. In this paper, we present numerical and finite-element simulations of the magnetic flux density of different coil geometries that could be used for search coil recordings. Based on the results, we built a 2.2 × 2.2 × 2.2 meter coil frame with a set of 3 × 4 coils to generate a 3D magnetic field and compared the measured flux density with our simulation results. In agreement with the simulations, the system yields a highly uniform field enabling high-resolution recordings of eye movements.
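The simplest point of comparison for such a design study is the analytic on-axis field of a coil pair. The sketch below evaluates the flux density of a Helmholtz-like pair and its peak-to-peak nonuniformity over the central region; the radius, current, and region size are assumed values, not the authors' geometry.

```python
# Illustrative sketch (not the authors' code): on-axis flux density of a
# Helmholtz-like coil pair and its uniformity over the working volume.
import numpy as np

MU0 = 4e-7 * np.pi  # T*m/A

def loop_bz(z, R, I, z0):
    """On-axis field of a circular loop of radius R centered at z0."""
    return MU0 * I * R**2 / (2.0 * (R**2 + (z - z0) ** 2) ** 1.5)

R, I = 1.1, 1.0                      # m, A -- assumed values
z = np.linspace(-0.2, 0.2, 401)      # central +/- 20 cm region
bz = loop_bz(z, R, I, -R / 2) + loop_bz(z, R, I, +R / 2)

uniformity_ppm = (bz.max() - bz.min()) / bz.mean() * 1e6
print(f"peak-to-peak nonuniformity: {uniformity_ppm:.1f} ppm")
```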
Evaporation kinetics of Mg2SiO4 crystals and melts from molecular dynamics simulations
NASA Technical Reports Server (NTRS)
Kubicki, J. D.; Stolper, E. M.
1993-01-01
Computer simulations based on the molecular dynamics (MD) technique were used to study the mechanisms and kinetics of free evaporation from crystalline and molten forsterite (i.e., Mg2SiO4) on an atomic level. The interatomic potential employed for these simulations reproduces the energetics of bonding in forsterite and in gas-phase MgO and SiO2 reasonably accurately. Results of the simulation include predicted evaporation rates, diffusion rates, and reaction mechanisms for Mg2SiO4(s or l) → 2Mg(g) + 2O(g) + SiO2(g).
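One simple way to extract an evaporation rate from such MD trajectories is to count the atoms that have left the condensed slab in each frame and fit the count against time, as in the hedged sketch below; the escape criterion and the synthetic frame data are assumptions for illustration, not the authors' analysis.

```python
# Hedged sketch: evaporation rate from trajectory frames by counting
# atoms above a cutoff height and fitting a line to the counts.
import numpy as np

def evaporation_rate(frames, times, z_cut, area):
    """Evaporated atoms per unit area per unit time.

    frames : list of (N, 3) coordinate arrays
    z_cut  : height above which an atom counts as evaporated
    area   : surface area of the slab
    """
    n_gas = np.array([(f[:, 2] > z_cut).sum() for f in frames])
    slope, _ = np.polyfit(times, n_gas, 1)   # atoms per unit time
    return slope / area

# Synthetic demo: one atom 'escapes' roughly every 10 ps
times = np.arange(100.0)                     # ps
frames = []
for k in range(100):
    f = np.zeros((512, 3))
    f[:, 2] = 5.0                            # slab atoms
    f[: k // 10, 2] = 25.0                   # escaped atoms
    frames.append(f)

rate = evaporation_rate(frames, times, z_cut=15.0, area=400.0)
print(f"rate ~ {rate:.2e} atoms / (A^2 ps)")
```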
Signal Processing Studies of a Simulated Laser Doppler Velocimetry-Based Acoustic Sensor
1990-10-17
investigated using spectral correlation methods. Results indicate that it may be possible to extend demonstrated LDV-based acoustic sensor sensitivities using higher order processing techniques. (Author)
Altena, Ellemarije; Daviaux, Yannick; Sanz-Arigita, Ernesto; Bonhomme, Emilien; de Sevin, Étienne; Micoulaud-Franchi, Jean-Arthur; Bioulac, Stéphanie; Philip, Pierre
2018-04-17
Virtual reality and simulation tools enable us to assess daytime functioning in environments that simulate real life as closely as possible. Simulator sickness, however, poses a problem in the application of these tools and has been related to pre-existing health problems. How sleep problems contribute to simulator sickness has not yet been investigated. In the current study, 20 female chronic insomnia patients and 32 female age-matched controls drove in a driving simulator covering realistic city, country and highway scenes. Fifty percent of the insomnia patients, as opposed to 12.5% of controls, reported excessive simulator sickness leading to withdrawal from the experiment. Among the remaining participants, patients with insomnia showed overall increased levels of oculomotor symptoms even before driving, while nausea symptoms increased further after driving. These results, as well as the realistic simulation paradigm developed, give more insight into how vestibular, oculomotor and interoceptive functions are affected in insomnia. Importantly, our results have direct implications both for the actual driving experience and for the wider context of deploying simulation techniques to mimic real-life functioning, in particular in professions often exposed to sleep problems. © 2018 European Sleep Research Society.
NASA Astrophysics Data System (ADS)
Joglekar, Prasad; Shastry, K.; Satyal, Suman; Weiss, Alexander
2012-02-01
Time-of-flight positron-annihilation-induced Auger electron spectroscopy (TOF-PAES) is a highly surface-selective analytical technique based on measuring the time of flight of Auger electrons produced when incident positrons, trapped in the image-potential well at the surface, annihilate with core electrons. We simulated and modeled the trajectories of the charged particles in TOF-PAES using SIMION, in support of both the current TOF-PAES system and the development of a new high-resolution system at UT Arlington. This poster presents the SIMION simulation results, time-of-flight calculations, and Larmor radius calculations for the current system as well as the new system.
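The Larmor radius calculation mentioned above is a short back-of-envelope computation, sketched below for an Auger electron of a given kinetic energy; the 10 mT transport field is an assumed value for illustration, not a parameter of the UT Arlington system.

```python
# Back-of-envelope Larmor radius for a nonrelativistic electron.
import numpy as np

M_E = 9.109e-31      # electron mass, kg
Q_E = 1.602e-19      # elementary charge, C

def larmor_radius(energy_ev, b_tesla):
    """Larmor radius r = m*v_perp/(q*B), taking v_perp = v (worst case)."""
    v = np.sqrt(2.0 * energy_ev * Q_E / M_E)   # nonrelativistic speed
    return M_E * v / (Q_E * b_tesla)

# e.g. a 90 eV Auger electron in an assumed 10 mT transport field
print(f"r_L = {larmor_radius(90.0, 0.01) * 1e3:.2f} mm")
```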
Rapid prototyping and AI programming environments applied to payload modeling
NASA Technical Reports Server (NTRS)
Carnahan, Richard S., Jr.; Mendler, Andrew P.
1987-01-01
This effort focused on using artificial intelligence (AI) programming environments and rapid prototyping to aid in simulation and training for both manned and unmanned space flight payloads. The significant problems addressed are the large amount of development time required to design and implement just one of these payload simulations and the relative inflexibility of the resulting model to future modification. The results of this effort suggest that both rapid prototyping and AI programming environments can significantly reduce development time and cost when applied to the domain of payload modeling for crew training. The techniques employed are applicable to a variety of domains where models or simulations are required.
Linking Simulation with Formal Verification and Modeling of Wireless Sensor Network in TLA+
NASA Astrophysics Data System (ADS)
Martyna, Jerzy
In this paper, we present the results of a simulation of a wireless sensor network based on the flooding technique and SPIN protocols. The wireless sensor network was specified and verified by means of the TLA+ specification language [1]. Simulations of the wireless sensor network model built in this way were carried out with the help of specially constructed software tools. The obtained results allow us to predict the behaviour of the wireless sensor network for various topologies and spatial densities. Visualization of the output data enables precise examination of phenomena in wireless sensor networks, such as the hidden terminal problem.
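The flooding technique modeled here is easy to sketch outside TLA+: each node rebroadcasts a message to its neighbours exactly once. The toy simulation below does so over a random geometric topology; the node count and radio range are illustrative assumptions, not the paper's configuration.

```python
# Toy flooding simulation over a random geometric sensor topology.
import random

def flood(adj, source):
    """Return the hop count at which each node first receives the message."""
    hop = {source: 0}
    frontier = [source]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in hop:            # forward only once per node
                    hop[v] = hop[u] + 1
                    nxt.append(v)
        frontier = nxt
    return hop

# Nodes within `rng_radius` of each other can hear each other
random.seed(1)
n, rng_radius = 50, 0.25
pts = [(random.random(), random.random()) for _ in range(n)]
adj = {i: [j for j in range(n) if j != i and
           (pts[i][0] - pts[j][0])**2 + (pts[i][1] - pts[j][1])**2
           < rng_radius**2]
       for i in range(n)}

hops = flood(adj, source=0)
print(f"{len(hops)}/{n} nodes reached, max hops = {max(hops.values())}")
```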
Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques
Shyu, Conrad; Ytreberg, F. Marty
2010-01-01
This report details an approach to improving the accuracy of free energy difference estimates using thermodynamic integration data (the slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data and thereby improve the accuracy of the free energy difference estimates. Previously, we introduced the use of a polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small-molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling or separate simulations for Lennard-Jones and partial charges. The results from our study suggest that these polynomial techniques, especially with the use of non-equidistant λ values, improve the accuracy of ΔF estimates without demanding additional simulations. We also provide general guidelines for the use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation are provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
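A minimal sketch of the regression variant of this idea, with synthetic data rather than the paper's systems: fit the ⟨dU/dλ⟩ samples at non-equidistant λ values with a polynomial and integrate the fit analytically to estimate ΔF.

```python
# Thermodynamic integration by polynomial regression (synthetic demo).
import numpy as np

def ti_free_energy(lam, dudl, degree=4):
    """Delta F = integral_0^1 <dU/dlambda> dlambda via polynomial fit."""
    coeffs = np.polyfit(lam, dudl, degree)   # least-squares fit
    integral = np.polyint(coeffs)            # antiderivative coefficients
    return np.polyval(integral, 1.0) - np.polyval(integral, 0.0)

# Non-equidistant lambdas, denser near the endpoints where the slope
# of <dU/dlambda> is typically steepest
lam = np.array([0.0, 0.02, 0.06, 0.15, 0.3, 0.5, 0.7, 0.85, 0.94, 0.98, 1.0])
rng = np.random.default_rng(2)
dudl = 10.0 * lam**3 - 4.0 * lam + 1.0 + rng.normal(0.0, 0.05, lam.size)

print(f"Delta F ~ {ti_free_energy(lam, dudl):.3f}"
      " (exact 1.5 for the noiseless curve)")
```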
NASA Astrophysics Data System (ADS)
Rose, Michael Benjamin
A novel trajectory and attitude control and navigation analysis tool for powered ascent is developed. The tool is capable of rapid trade-space analysis and is designed ultimately to reduce turnaround time for launch vehicle design, mission planning, and redesign work. It is streamlined to quickly determine trajectory and attitude control dispersions, propellant dispersions, orbit insertion dispersions, and navigation errors and their sensitivities to sensor errors, actuator execution uncertainties, and random disturbances. The tool is developed by applying both Monte Carlo and linear covariance analysis techniques to a closed-loop launch vehicle guidance, navigation, and control (GN&C) system. The nonlinear dynamics and flight GN&C software models of a closed-loop, six-degree-of-freedom (6-DOF) Monte Carlo simulation are formulated and developed. The nominal reference trajectory (NRT) for the proposed lunar ascent trajectory is defined and generated. The Monte Carlo truth models and GN&C algorithms are linearized about the NRT, the linear covariance equations are formulated, and the linear covariance simulation is developed. The performance of the launch vehicle GN&C system is evaluated using both Monte Carlo and linear covariance techniques, and the resulting trajectory and attitude control dispersions, propellant dispersions, orbit insertion dispersions, and navigation errors are validated and compared. Statistical results from the linear covariance analysis are generally within 10% of the Monte Carlo results, and in most cases the differences are less than 5%. This is an excellent result given the many complex nonlinearities embedded in the ascent GN&C problem. Moreover, the real value of this tool lies in its speed: the linear covariance simulation is 1036.62 times faster than the Monte Carlo simulation. Although the application and results presented are for a lunar, single-stage-to-orbit (SSTO) ascent vehicle, the tools, techniques, and mathematical formulations discussed are applicable to ascent on Earth or other planets as well as to other rocket-powered systems such as sounding rockets and ballistic missiles.
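The relationship between the two techniques can be seen in miniature with a toy linear system (assumed dynamics, not the thesis tool): linear covariance analysis propagates a single covariance recursion, P ← F P Fᵀ + Q, while Monte Carlo samples many trajectories, and the predicted dispersions should agree.

```python
# Toy comparison of linear covariance analysis vs Monte Carlo dispersion.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity dynamics
Q = np.diag([0.0, 1e-4])                # random acceleration disturbance
P0 = np.diag([1e-2, 1e-3])              # initial dispersion covariance
steps = 100

# Linear covariance analysis: one deterministic recursion
P_lc = P0.copy()
for _ in range(steps):
    P_lc = F @ P_lc @ F.T + Q

# Monte Carlo: thousands of sampled trajectories
rng = np.random.default_rng(3)
x = rng.multivariate_normal([0, 0], P0, size=20000)
for _ in range(steps):
    x = x @ F.T + rng.multivariate_normal([0, 0], Q, size=len(x))
P_mc = np.cov(x.T)

print("sigma (LinCov):", np.sqrt(np.diag(P_lc)))
print("sigma (MC)    :", np.sqrt(np.diag(P_mc)))
```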
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Choi, G.; Iyer, R. K.
1990-01-01
A simulation study is described which predicts the susceptibility of an advanced control system to electrical transients resulting in logic errors, latched errors, error propagation, and digital upset. The system is based on a custom-designed microprocessor and it incorporates fault-tolerant techniques. The system under test and the method to perform the transient injection experiment are described. Results for 2100 transient injections are analyzed and classified according to charge level, type of error, and location of injection.
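A hedged sketch of a transient-injection experiment in this spirit: flip one bit of a simulated register mid-computation and classify the outcome against a golden run. The 'processor' below is a trivial 16-bit accumulator with an 8-bit output, purely for illustration, so flips in the masked high byte appear as latent errors.

```python
# Toy transient-injection experiment: single bit flips in a register.
import random

def run(inputs, inject_at=None, bit=0):
    acc = 0
    for k, v in enumerate(inputs):
        acc = (acc + v) & 0xFFFF        # 16-bit accumulator register
        if inject_at == k:
            acc ^= 1 << bit             # transient flips one register bit
    return acc & 0xFF                   # only the low byte is output

random.seed(4)
inputs = [random.randrange(256) for _ in range(64)]
golden = run(inputs)

outcomes = {"masked": 0, "wrong result": 0}
for _ in range(2000):
    res = run(inputs, inject_at=random.randrange(64),
              bit=random.randrange(16))
    outcomes["wrong result" if res != golden else "masked"] += 1
print(outcomes)
```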
Comparison of Nonlinear Random Response Using Equivalent Linearization and Numerical Simulation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2000-01-01
A recently developed finite-element-based equivalent linearization approach for the analysis of random vibrations of geometrically nonlinear multiple degree-of-freedom structures is validated. The validation is based on comparisons with results from a finite element based numerical simulation analysis using a numerical integration technique in physical coordinates. In particular, results for the case of a clamped-clamped beam are considered for an extensive load range to establish the limits of validity of the equivalent linearization approach.
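To see the two approaches side by side in miniature, the sketch below treats a Duffing oscillator under white-noise forcing (assumed parameters, not the clamped-clamped beam of the paper): statistical equivalent linearization replaces εx³ with 3εσ²x and iterates on σ², while direct Euler–Maruyama integration provides the numerical-simulation reference.

```python
# Toy equivalent linearization vs numerical simulation for a Duffing
# oscillator: x'' + c x' + w0^2 (x + eps x^3) = white noise of intensity D.
import numpy as np

w0, zeta, eps, D = 1.0, 0.05, 1.0, 0.02
c = 2.0 * zeta * w0

# Equivalent linearization: eps*x^3 -> 3*eps*sigma^2*x, iterate on sigma^2
sigma2 = D / (2.0 * c * w0**2)           # linear starting guess
for _ in range(50):
    k_eq = w0**2 * (1.0 + 3.0 * eps * sigma2)
    sigma2 = D / (2.0 * c * k_eq)        # stationary variance, linear SDOF
print(f"equivalent linearization: sigma_x = {np.sqrt(sigma2):.4f}")

# Direct numerical simulation (Euler-Maruyama)
rng = np.random.default_rng(5)
dt, n = 2e-3, 500_000
x = v = 0.0
xs = np.empty(n)
for i in range(n):
    a = -c * v - w0**2 * (x + eps * x**3)
    v += a * dt + np.sqrt(D * dt) * rng.standard_normal()
    x += v * dt
    xs[i] = x
print(f"numerical simulation:     sigma_x = {xs[50_000:].std():.4f}")
```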