Science.gov

Sample records for accuracy simulation results

  1. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    SciTech Connect

    Cleveland, Mathew A.; Brunner, Thomas A.; Gentile, Nicholas A.; Keasler, Jeffrey A.

    2013-10-15

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain-replicated and domain-decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended- and reduced-precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary-precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
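    The non-associativity problem described above, and the wide-accumulator remedy, can be sketched in a few lines of Python. This is an illustration, not the authors' implementation: math.fsum stands in for the >100-bit accumulator, since it tracks exact partial sums internally and rounds only once at the end, making the result independent of summation order.

```python
import math
import random

def ordered_sum(values, order):
    """Accumulate in plain double precision, in the given order."""
    total = 0.0
    for i in order:
        total += values[i]
    return total

random.seed(0)
values = [random.uniform(-1e8, 1e8) for _ in range(10_000)]
forward = list(range(len(values)))
backward = forward[::-1]

# Plain double-precision accumulation is order-dependent because
# floating-point addition is not associative; the two sums below
# generally differ in the last few bits.
print(ordered_sum(values, forward) - ordered_sum(values, backward))

# An exactly rounded sum is order-independent, the same idea as
# carrying extra precision in the accumulator and rounding to double
# only at the end of each time-step.
assert math.fsum(values) == math.fsum(reversed(values))
```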

  2. Accuracy analysis of distributed simulation systems

    NASA Astrophysics Data System (ADS)

    Lin, Qi; Guo, Jing

    2010-08-01

    Existing simulation work tends to emphasize procedural verification, focusing on the simulation models rather than on the simulation itself. As a result, research on improving simulation accuracy has been limited to individual aspects. Since accuracy is the key to simulation credibility assessment and fidelity studies, an all-round discussion of the accuracy of distributed simulation systems themselves is important. First, the major elements of distributed simulation systems are summarized; these serve as the basis for defining, classifying, and describing the accuracy of distributed simulation systems. In Part 2, a framework for the accuracy of distributed simulation systems is presented in a comprehensive way, making it easier to analyze and assess the uncertainty of distributed simulation systems. In Part 3, the concept of accuracy of distributed simulation systems is decomposed into four further factors, each analyzed in turn. In Part 4, based on the formalized description of the accuracy-analysis framework, a practical approach is put forward that can be applied to study unexpected or inaccurate simulation results. Finally, a real HLA-based distributed simulation system is taken as an example to verify the usefulness of the proposed approach. The results show that the method works well and is applicable to accuracy analysis of distributed simulation systems.

  3. Accuracy of non-Newtonian Lattice Boltzmann simulations

    NASA Astrophysics Data System (ADS)

    Conrad, Daniel; Schneider, Andreas; Böhle, Martin

    2015-11-01

    This work deals with the accuracy of non-Newtonian Lattice Boltzmann simulations. Previous work for Newtonian fluids indicates that, depending on the numerical value of the dimensionless collision frequency Ω, additional artificial viscosity is introduced, which negatively influences the accuracy. Since the non-Newtonian fluid behavior is incorporated through appropriate modeling of the dimensionless collision frequency, an Ω-dependent error EΩ is introduced and its influence on the overall error is investigated. Here, simulations with the SRT and the MRT model are carried out for power-law fluids in order to numerically investigate the accuracy of non-Newtonian Lattice Boltzmann simulations. A goal of this accuracy analysis is to derive a recommendation for an optimal choice of the time step size and the simulation Mach number, respectively. For the non-Newtonian case, an error estimate for EΩ in the form of a functional is derived on the basis of a series expansion of the Lattice Boltzmann equation. This functional can be solved analytically for the case of the Hagen-Poiseuille channel flow of non-Newtonian fluids. With the help of the error functional, the prediction of the global error minimum of the velocity field is excellent in regions where the EΩ error is the dominant source of error. With an optimal simulation Mach number, the simulation is about one order of magnitude more accurate. Additionally, for both collision models a detailed study of the convergence behavior of the method in the non-Newtonian case is conducted. The results show that the simulation Mach number has a major impact on the convergence rate and that second-order accuracy is not preserved for every choice of the simulation Mach number.
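    As a minimal sketch of the quantities involved: in standard SRT lattice-Boltzmann practice (assuming lattice units with c_s² = 1/3, as in D2Q9/D3Q19 lattices), the collision frequency Ω follows from the apparent viscosity, which for a power-law fluid depends on the local shear rate. The constants K and n below are illustrative, not taken from the paper.

```python
def collision_frequency(nu_lattice):
    """SRT dimensionless collision frequency from lattice viscosity.

    Uses nu = c_s^2 * (1/Omega - 1/2) with c_s^2 = 1/3, i.e.
    Omega = 1 / (3*nu + 0.5).
    """
    return 1.0 / (3.0 * nu_lattice + 0.5)

def power_law_viscosity(K, n, shear_rate):
    """Apparent viscosity of a power-law fluid: nu = K * |gamma_dot|^(n-1)."""
    return K * abs(shear_rate) ** (n - 1.0)

# For a shear-thinning fluid (n < 1), viscosity, and hence Omega, varies
# with the local shear rate, so the Omega-dependent error E_Omega varies
# across the flow field as well.
K, n = 0.01, 0.5  # illustrative power-law parameters
for gamma_dot in (0.01, 0.1, 1.0):
    nu = power_law_viscosity(K, n, gamma_dot)
    print(f"shear rate {gamma_dot:5.2f} -> nu {nu:.4f} -> Omega {collision_frequency(nu):.4f}")
```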

  4. An evaluation of information retrieval accuracy with simulated OCR output

    SciTech Connect

    Croft, W.B.; Harding, S.M.; Taghva, K.; Borsack, J.

    1994-12-31

    Optical Character Recognition (OCR) is a critical part of many text-based applications. Although some commercial systems use the output from OCR devices to index documents without editing, there is very little quantitative data on the impact of OCR errors on the accuracy of a text retrieval system. Because of the difficulty of constructing test collections to obtain this data, we have carried out an evaluation using simulated OCR output on a variety of databases. The results show that high-quality OCR devices have little effect on the accuracy of retrieval, but low-quality devices used with databases of short documents can result in significant degradation.
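    A toy version of simulated OCR degradation in the spirit of the evaluation described above. The error rate, confusion table, and corruption types here are assumptions for illustration, not the authors' model.

```python
import random

def simulate_ocr(text, char_error_rate=0.05, seed=0):
    """Corrupt text with random character-level substitutions, deletions,
    and insertions at a given rate, mimicking a noisy OCR device."""
    rng = random.Random(seed)
    # Illustrative confusion pairs; a real model would be device-specific.
    confusions = {"l": "1", "O": "0", "e": "c", "m": "rn"}
    out = []
    for ch in text:
        if rng.random() < char_error_rate:
            kind = rng.choice(("substitute", "delete", "insert"))
            if kind == "substitute":
                out.append(confusions.get(ch, "?"))
            elif kind == "insert":
                out.append(ch + ".")
            # "delete": drop the character entirely
        else:
            out.append(ch)
    return "".join(out)

clean = "Optical Character Recognition is a critical part of many applications."
print(simulate_ocr(clean, char_error_rate=0.10))
```

Feeding both the clean and corrupted versions of a document collection to the same retrieval system lets one isolate the effect of the error rate on retrieval accuracy.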

  5. Accuracy of results with NASTRAN modal synthesis

    NASA Technical Reports Server (NTRS)

    Herting, D. N.

    1978-01-01

    A new method for component mode synthesis was developed for installation in NASTRAN level 17.5. Results obtained from the new method are presented, and these results are compared with existing modal synthesis methods.

  6. Performance and accuracy benchmarks for a next generation geodynamo simulation

    NASA Astrophysics Data System (ADS)

    Matsui, H.

    2015-12-01

    A number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field over the last twenty years. However, the parameters in current dynamo models are far from realistic for the Earth's core. To approach realistic parameters for the Earth's core in geodynamo simulations, extremely large spatial resolutions are required to resolve convective turbulence and small-scale magnetic fields. To assess next-generation dynamo models on a massively parallel computer, we performed performance and accuracy benchmarks on 15 dynamo codes which employ a diverse range of discretization (spectral, finite difference, finite element, and hybrid methods) and parallelization methods. In the performance benchmark, we compare elapsed time and parallelization capability on the TACC Stampede platform, using up to 16384 processor cores. In the accuracy benchmark, we compare the resolutions required to obtain less than 1% error from the suggested solutions. The results of the performance benchmark show that codes using 2-D or 3-D parallelization models are capable of running with 16384 processor cores. The elapsed time for Calypso and Rayleigh, two parallelized codes that use the spectral method, scales with a smaller exponent than ideal scaling. The elapsed time of SFEMaNS, which uses finite elements and Fourier transforms, grows most slowly with resolution and parallelization. However, the accuracy benchmark results show that SFEMaNS requires three times more degrees of freedom in each direction compared with a spherical harmonics expansion. Consequently, SFEMaNS needs more than 200 times the elapsed time of Calypso and Rayleigh with 10000 cores to obtain the same accuracy. These benchmark results indicate that the spectral method with 2-D or 3-D domain decomposition is the most promising methodology for advancing numerical dynamo simulations in the immediate future.
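    The scaling comparison above reduces to fitting the exponent of elapsed time versus core count on a log-log scale; ideal strong scaling has slope -1, and a smaller-magnitude exponent means worse-than-ideal scaling. A sketch with hypothetical timings, not the benchmark's data:

```python
import math

def scaling_exponent(cores, elapsed):
    """Least-squares slope of log(elapsed) vs log(cores)."""
    xs = [math.log(c) for c in cores]
    ys = [math.log(t) for t in elapsed]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical timings: near-ideal scaling at first, then flattening
# as communication costs begin to dominate.
cores = [1024, 2048, 4096, 8192, 16384]
elapsed = [100.0, 52.0, 28.0, 17.0, 12.0]
print(f"scaling exponent: {scaling_exponent(cores, elapsed):.2f}")
```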

  7. Evaluating the Accuracy of Hessian Approximations for Direct Dynamics Simulations.

    PubMed

    Zhuang, Yu; Siebert, Matthew R; Hase, William L; Kay, Kenneth G; Ceotto, Michele

    2013-01-01

    Direct dynamics simulations are a very useful and general approach for studying the atomistic properties of complex chemical systems, since an electronic structure theory representation of a system's potential energy surface is possible without the need for fitting an analytic potential energy function. In this paper, recently introduced compact finite difference (CFD) schemes for approximating the Hessian [J. Chem. Phys.2010, 133, 074101] are tested by employing the monodromy matrix equations of motion. Several systems, including carbon dioxide and benzene, are simulated, using both analytic potential energy surfaces and on-the-fly direct dynamics. The results show, depending on the molecular system, that electronic structure theory Hessian direct dynamics can be accelerated up to 2 orders of magnitude. The CFD approximation is found to be robust enough to deal with chaotic motion, concomitant with floppy and stiff mode dynamics, Fermi resonances, and other kinds of molecular couplings. Finally, the CFD approximations allow parametrical tuning of different CFD parameters to attain the best possible accuracy for different molecular systems. Thus, a direct dynamics simulation requiring the Hessian at every integration step may be replaced with an approximate Hessian updating by tuning the appropriate accuracy. PMID:26589009

  8. Simulation of Local Tie Accuracy on VLBI Antennas

    NASA Technical Reports Server (NTRS)

    Kallio, Ulla; Poutanen, Markku

    2010-01-01

    We introduce a new mathematical model to compute the centering parameters of a VLBI antenna. These include the coordinates of the reference point, axis offset, orientation, and non-perpendicularity of the axes. Using the model we simulated how precisely parameters can be computed in different cases. Based on the simulation we can give some recommendations and practices to control the accuracy and reliability of the local ties at the VLBI sites.

  9. "Certified" Laboratory Practitioners and the Accuracy of Laboratory Test Results.

    ERIC Educational Resources Information Center

    Boe, Gerard P.; Fidler, James R.

    1988-01-01

    An attempt to replicate a study of the accuracy of test results of medical laboratories was unsuccessful. Limitations of the obtained data prevented the research from having satisfactory internal validity, so no formal report was published. External validity of the study was also limited because the systematic random sample of 78 licensed…

  10. Accuracy assessment of contextual classification results for vegetation mapping

    NASA Astrophysics Data System (ADS)

    Thoonen, Guy; Hufkens, Koen; Borre, Jeroen Vanden; Spanhove, Toon; Scheunders, Paul

    2012-04-01

    A new procedure for quantitatively assessing the geometric accuracy of thematic maps, obtained from classifying hyperspectral remote sensing data, is presented. More specifically, the methodology is aimed at the comparison between results from any of the currently popular contextual classification strategies. The proposed procedure characterises the shapes of all objects in a classified image by defining an appropriate reference and a new quality measure. The results from the proposed procedure are represented in an intuitive way, by means of an error matrix, analogous to the confusion matrix used in traditional thematic accuracy representation. A suitable application for the methodology is vegetation mapping, where lots of closely related and spatially connected land cover types are to be distinguished. Consequently, the procedure is tested on a heathland vegetation mapping problem, related to Natura 2000 habitat monitoring. Object-based mapping and Markov Random Field classification results are compared, showing that the selected Markov Random Fields approach is more suitable for the fine-scale problem at hand, which is confirmed by the proposed procedure.

  11. Open cherry picker simulation results

    NASA Technical Reports Server (NTRS)

    Nathan, C. A.

    1982-01-01

    The simulation program associated with a key piece of support equipment to be used to service satellites directly from the Shuttle is assessed. The Open Cherry Picker (OCP) is a manned platform mounted at the end of the remote manipulator system (RMS) and is used to enhance extra vehicular activities (EVA). The results of simulations performed on the Grumman Large Amplitude Space Simulator (LASS) and at the JSC Water Immersion Facility are summarized.

  12. Study of accuracy of precipitation measurements using simulation method

    NASA Astrophysics Data System (ADS)

    Nagy, Zoltán; Lajos, Tamás; Morvai, Krisztián

    2013-04-01

    Precipitation is one of the most important meteorological parameters describing the state of the climate, and accurate measurement of precipitation is essential for deriving correct information from trends. The problem is that precipitation measurements are affected by systematic errors leading to an underestimation of actual precipitation, errors that vary with precipitation type and gauge type. It is well known that wind speed is the most important environmental factor contributing to the underestimation of actual precipitation, especially for solid precipitation. To study and correct the errors of precipitation measurements there are two basic possibilities: · use the results and conclusions of the International Precipitation Measurement Intercomparisons; · build standard reference gauges (DFIR, pit gauge) and carry out one's own investigation. In 1999 the HMS undertook its own investigation and built standard reference gauges, but the cost-benefit ratio in the case of snow (use of the DFIR) was very poor (several winters passed without a significant amount of snow, while the state of the DFIR continuously deteriorated). Because of the problem mentioned above, a new approach was needed: modelling performed by the Budapest University of Technology and Economics, Department of Fluid Mechanics, using the FLUENT 6.2 model. The ANSYS Fluent package is a featured fluid dynamics solution for modelling flow and other related physical phenomena. It provides the tools needed to describe atmospheric processes and to design and optimize new equipment. The CFD package includes solvers that accurately simulate the behaviour of a broad range of flows, from single-phase to multi-phase. The questions we wanted to answer are as follows: · How do the different types of gauges deform the airflow around themselves? · Can we give a quantitative estimate of the wind-induced error? · How does the use
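    A hedged illustration of how a wind-induced undercatch correction is typically applied: the measured amount is divided by a catch ratio that decreases with wind speed. The functional form and coefficient below are purely illustrative assumptions; real corrections depend on gauge type and precipitation phase, as established by the WMO intercomparisons.

```python
def corrected_precipitation(measured_mm, wind_speed_ms, a=0.04):
    """Correct a gauge reading for wind-induced undercatch.

    Assumes an illustrative catch ratio CR = 1 / (1 + a * wind_speed),
    so the corrected amount is measured / CR. The coefficient a is a
    placeholder, not a calibrated value.
    """
    catch_ratio = 1.0 / (1.0 + a * wind_speed_ms)
    return measured_mm / catch_ratio

# At 5 m/s the illustrative gauge catches only ~83% of true precipitation.
print(corrected_precipitation(10.0, 5.0))
```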

  13. Forecasting Accuracy as a Performance Measure in Business Simulations.

    ERIC Educational Resources Information Center

    Teach, Richard D.

    1993-01-01

    Describes results of a study of business school students that investigated the link between the ability of business simulation team participants to forecast financial and/or market-related outcomes and the actual results of their decision making. Profitability and forecasting errors are discussed, and implications for designing business…

  14. Poor Metacomprehension Accuracy as a Result of Inappropriate Cue Use

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Griffin, Thomas D.; Wiley, Jennifer; Anderson, Mary C. M.

    2010-01-01

    Two studies attempt to determine the causes of poor metacomprehension accuracy and then, in turn, to identify interventions that circumvent these difficulties to support effective comprehension monitoring performance. The first study explored the cues that both at-risk and typical college readers use as a basis for their metacomprehension…

  15. High-accuracy simulation-based optical proximity correction

    NASA Astrophysics Data System (ADS)

    Keck, Martin C.; Henkel, Thomas; Ziebold, Ralf; Crell, Christian; Thiele, Jörg

    2003-12-01

    In times of continuing aggressive shrinking of chip layouts, a thorough understanding of the pattern transfer process from layout to silicon is indispensable. We analyzed the most prominent effects limiting the control of this process for a contact-layer-like process, printing 140 nm features of variable length and different proximity using 248 nm lithography. Deviations of the photomask from the ideal layout, in particular mask off-target and corner rounding, have been identified as clearly contributing to the printing behavior. In the next step, these deviations from ideal behavior were incorporated into the optical proximity correction (OPC) modeling process. The accuracy with which simulation describes the experimental data, using an OPC model modified in this manner, could be increased significantly. Further improvement in modeling the optical imaging process was accomplished by taking into account lens aberrations of the exposure tool. This suggests a high potential to improve OPC by considering these effects, delivering a significant contribution to extending the application of OPC techniques beyond current limits.

  16. Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry

    SciTech Connect

    Bostani, Maryam McMillan, Kyle; Cagnon, Chris H.; McNitt-Gray, Michael F.; Mueller, Jonathon W.; Cody, Dianna D.; DeMarco, John J.

    2015-02-15

    Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.
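    The reported agreement statistics are simply the mean and standard deviation of per-patient percent differences between simulation and measurement. A sketch with invented dose pairs, not the study's data:

```python
import statistics

def percent_difference(measured, simulated):
    """Percent difference of the simulated value relative to measurement."""
    return 100.0 * (simulated - measured) / measured

# Hypothetical TLD measurement / Monte Carlo simulation dose pairs (mGy).
measured = [12.1, 10.4, 15.0, 9.8, 11.3]
simulated = [11.5, 10.6, 14.1, 9.2, 11.0]

diffs = [percent_difference(m, s) for m, s in zip(measured, simulated)]
print(f"mean {statistics.mean(diffs):+.1f}%  SD {statistics.stdev(diffs):.1f}%")
```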

  17. Analysis of machining accuracy during free form surface milling simulation for different milling strategies

    NASA Astrophysics Data System (ADS)

    Matras, A.; Kowalczyk, R.

    2014-11-01

    The results of a machining-accuracy analysis after free-form surface milling simulations (based on machining the EN AW-7075 alloy) for different machining strategies (Level Z, Radial, Square, Circular) are presented in this work. The milling simulations were performed using Esprit CAD/CAM software. The accuracy of the obtained allowance is defined as the difference between the theoretical surface of the workpiece (the surface designed in CAD software) and the machined surface after a milling simulation. The difference between the two surfaces describes the roughness that results from mapping the tool shape onto the machined surface. The accuracy of the remaining allowance directly indicates the surface quality after finish machining. The described methodology of using CAD/CAM software can shorten the design time of the machining process for free-form surface milling on a 5-axis CNC milling machine, avoiding the need to machine the part just to measure the machining accuracy for the selected strategies and cutting data.

  18. Simulation of GNSS reflected signals and estimation of position accuracy in GNSS-challenged environment

    NASA Astrophysics Data System (ADS)

    Jakobsen, Jakob; Jensen, Anna B. O.; Nielsen, Allan Aasbjerg

    2015-05-01

    The paper describes the development and testing of a simulation tool, called QualiSIM. The tool estimates GNSS-based position accuracy based on a simulation of the environment surrounding the GNSS antenna, with a special focus on city-scape environments with large amounts of signal reflections from non-line-of-sight satellites. The signal reflections are implemented using the extended geometric path length of the signal path caused by reflections from the surrounding buildings. Based on real GPS satellite positions, simulated Galileo satellite positions, models of atmospheric effect on the satellite signals, designs of representative environments e.g. urban and rural scenarios, and a method to simulate reflection of satellite signals within the environment we are able to estimate the position accuracy given several prerequisites as described in the paper. The result is a modelling of the signal path from satellite to receiver, the satellite availability, the extended pseudoranges caused by signal reflection, and an estimate of the position accuracy based on a least squares adjustment of the extended pseudoranges. The paper describes the models and algorithms used and a verification test where the results of QualiSIM are compared with results from collection of real GPS data in an environment with much signal reflection.
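    The extended geometric path length of a reflected signal can be computed with the standard mirror-image construction: reflect the satellite position across the building facade and take the straight-line distance from the receiver to that image. A sketch assuming a single vertical facade (the geometry values are hypothetical, and QualiSIM's actual implementation may differ):

```python
import math

def extra_path_length(receiver, satellite, wall_x):
    """Extra geometric path for a signal reflected off the vertical plane
    x = wall_x: the reflected path length equals the distance from the
    receiver to the satellite's mirror image in that plane."""
    mirrored = (2.0 * wall_x - satellite[0], satellite[1], satellite[2])
    direct = math.dist(receiver, satellite)
    reflected = math.dist(receiver, mirrored)
    return reflected - direct

# Hypothetical geometry (metres): receiver 20 m from a building facade.
rx = (0.0, 0.0, 1.5)
sat = (10_000.0, 5_000.0, 15_000.0)
print(f"extra pseudorange: {extra_path_length(rx, sat, 20.0):.2f} m")
```

Adding this extra length to the true range produces the extended pseudorange that then feeds the least-squares position adjustment.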

  19. Accuracy and stability of positioning in radiosurgery: long-term results of the Gamma Knife system.

    PubMed

    Heck, Bernhard; Jess-Hempen, Anja; Kreiner, Hans Jürg; Schöpgens, Hans; Mack, Andreas

    2007-04-01

    The primary aim of this investigation was to determine the long-term overall accuracy of an irradiation position of Gamma Knife systems. The mechanical accuracy of the system, as well as the overall accuracy of an irradiation position, was examined by irradiating radiosensitive films. To measure the mechanical accuracy, a GafChromic film was fixed by a special tool at the unit center point (UCP). For overall accuracy, the film was mounted inside a phantom at a target position marked by a two-dimensional cross. Its position was determined by CT or MRI scans, a treatment was planned to hit this target using the standard planning software, and the radiation was finally delivered. This procedure is named "system test" according to DIN 6875-1 and is equivalent to a treatment simulation. The GafChromic films were evaluated by high-resolution densitometric measurements. The Munich Gamma Knife UCP coincided with the center of the dose distribution within x; y; z: -0.014 +/- 0.09 mm; 0.013 +/- 0.09 mm; -0.002 +/- 0.06 mm (mean +/- SD). No trend in the measured data was observed over more than ten years. All measured data were within a sphere of 0.2 mm radius. When basing the target definition in the system test on MRI scans, we obtained an overall accuracy of an irradiation position of 0.21 +/- 0.32 mm in the x direction and 0.15 +/- 0.26 mm in the y direction (mean +/- SD). When a CT-based target definition was used, we measured distances of 0.06 +/- 0.09 mm in the x direction and 0.04 +/- 0.09 mm in the y direction (mean +/- SD), respectively. These results were compared with those obtained with a Gamma Knife equipped with an automatic positioning system (APS) using a different phantom. This phantom was found to be slightly less accurate due to its mechanical construction and the soft fixation into the frame. The phantom-related position deviation was found to be about +/- 0.2 mm, and therefore the measured accuracy of the APS Gamma Knife was evidently less precise by

  20. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    SciTech Connect

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulating models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that regional specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.
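    A minimal sketch of a multiplicative grid-scale correction factor and the residual check the abstract argues for: derive the factor on a calibration period, apply it to held-out years, and inspect the remaining discrepancy. The season lengths below are invented, not the study's data.

```python
def correction_factor(simulated, observed):
    """Multiplicative grid-scale correction from a calibration period:
    mean observed season length / mean simulated season length."""
    return sum(observed) / sum(simulated)

# Hypothetical season lengths (days) for one region.
sim_cal, obs_cal = [180.0, 175.0, 190.0], [200.0, 195.0, 205.0]
sim_test, obs_test = [185.0, 188.0], [198.0, 207.0]

f = correction_factor(sim_cal, obs_cal)
corrected = [s * f for s in sim_test]

# Nonzero residuals on held-out data are exactly the kind of remaining
# discrepancy the paper says an explicit evaluation should detect.
residuals = [c - o for c, o in zip(corrected, obs_test)]
print(f"factor {f:.3f}, test residuals {[round(r, 1) for r in residuals]}")
```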

  2. Laboratory assessment of impression accuracy by clinical simulation.

    PubMed

    Wassell, R W; Abuasi, H A

    1992-04-01

    Some laboratory tests of impression material accuracy mimic the clinical situation (simulatory), while others attempt to quantify a material's individual properties. This review concentrates on simulatory testing and aims to give a classification of the numerous tests available. Measurements can be made of the impression itself or of the resulting cast. Cast measurements are divided into those made of individual dies and those made of interdie relations. Contact measurement techniques have the advantage of simplicity but are potentially inaccurate because of die abrasion. Non-contact techniques can overcome the abrasion problem, but the measurements, especially those made in three dimensions, may be difficult to interpret. Nevertheless, provided that care is taken to avoid parallax error, non-contact methods are preferable, as experimental variables are easier to control. Where measurements are made of individual dies, these should include the die width across the finishing line, as occlusal width measurements provide only limited information. A new concept of 'differential die distortion' (the dimensional difference from the master model in one plane minus the dimensional difference in the perpendicular plane) provides a clinically relevant method of interpreting dimensional changes. Where measurements are made between dies, movement of the individual dies within the master model must be prevented. Many of the test methods can be criticized for providing clinically unrealistic master models/dies or impression trays. Phantom head typodonts form a useful basis for the morphology of master models, provided that undercuts are standardized and the master model temperature is adequately controlled. PMID:1564180

  3. Criteria for the accuracy of small polaron quantum master equation in simulating excitation energy transfer dynamics

    SciTech Connect

    Chang, Hung-Tzu; Cheng, Yuan-Chung; Zhang, Pan-Pan

    2013-12-14

    The small polaron quantum master equation (SPQME) proposed by Jang et al. [J. Chem. Phys. 129, 101104 (2008)] is a promising approach to describe coherent excitation energy transfer dynamics in complex molecular systems. To determine the applicable regime of the SPQME approach, we perform a comprehensive investigation of its accuracy by comparing its simulated population dynamics with numerically exact quasi-adiabatic path integral calculations. We demonstrate that the SPQME method yields accurate dynamics in a wide parameter range. Furthermore, our results show that the accuracy of polaron theory depends strongly upon the degree of exciton delocalization and timescale of polaron formation. Finally, we propose a simple criterion to assess the applicability of the SPQME theory that ensures the reliability of practical simulations of energy transfer dynamics with SPQME in light-harvesting systems.

  4. The Impact of Sea Ice Concentration Accuracies on Climate Model Simulations with the GISS GCM

    NASA Technical Reports Server (NTRS)

    Parkinson, Claire L.; Rind, David; Healy, Richard J.; Martinson, Douglas G.; Zukor, Dorothy J. (Technical Monitor)

    2000-01-01

    The Goddard Institute for Space Studies global climate model (GISS GCM) is used to examine the sensitivity of the simulated climate to sea ice concentration specifications in the type of simulation done in the Atmospheric Modeling Intercomparison Project (AMIP), with specified oceanic boundary conditions. Results show that sea ice concentration uncertainties of +/- 7% can affect simulated regional temperatures by more than 6 C, and biases in sea ice concentrations of +7% and -7% alter simulated annually averaged global surface air temperatures by -0.10 C and +0.17 C, respectively, over those in the control simulation. The resulting 0.27 C difference in simulated annual global surface air temperatures is reduced by a third, to 0.18 C, when considering instead biases of +4% and -4%. More broadly, least-squares fits through the temperature results of 17 simulations with ice concentration input changes ranging from increases of 50% versus the control simulation to decreases of 50% yield a yearly average global impact of 0.0107 C warming for every 1% ice concentration decrease, i.e., 1.07 C warming for the full +50% to -50% range. Regionally and on a monthly average basis, the differences can be far greater, especially in the polar regions, where wintertime contrasts between the +50% and -50% cases can exceed 30 C. However, few statistically significant effects are found outside the polar latitudes, and temperature effects over the non-polar oceans tend to be under 1 C, due in part to the specification of an unvarying annual cycle of sea surface temperatures. The +/- 7% and +/- 4% results provide bounds on the impact (on GISS GCM simulations making use of satellite data) of satellite-derived ice concentration inaccuracies, +/- 7% being the current estimated average accuracy of satellite retrievals and +/- 4% being the anticipated improved average accuracy for upcoming satellite instruments.
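
    The yearly sensitivity quoted above comes from a least-squares fit of simulated temperature change against imposed ice-concentration change. A minimal sketch of that fit, using synthetic points constructed to follow the reported 0.0107 C per 1% sensitivity (the data values here are illustrative, not GISS GCM output):

```python
def lsq_slope(xs, ys):
    """Closed-form least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# 17 synthetic simulations: ice concentration changed from -50% to +50%,
# annual global temperature responding at -0.0107 C per 1% ice increase.
ice_change = [-50 + 6.25 * k for k in range(17)]
temp_change = [-0.0107 * x for x in ice_change]

print(lsq_slope(ice_change, temp_change))  # ~ -0.0107 C per %
```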

  5. The effects of mapping CT images to Monte Carlo materials on GEANT4 proton simulation accuracy

    SciTech Connect

    Barnes, Samuel; McAuley, Grant; Slater, James; Wroe, Andrew

    2013-04-15

    Purpose: Monte Carlo simulations of radiation therapy require conversion from Hounsfield units (HU) in CT images to an exact tissue composition and density. The number of discrete densities (or density bins) used in this mapping affects the simulation accuracy, execution time, and memory usage in GEANT4 and other Monte Carlo codes. The relationship between the number of density bins and CT noise was examined in general for all simulations that use HU conversion to density. Additionally, the effect of this on simulation accuracy was examined for proton radiation. Methods: Relative uncertainty from CT noise was compared with uncertainty from density binning to determine an upper limit on the number of density bins required in the presence of CT noise. Error propagation analysis was also performed on continuously slowing down approximation range calculations to determine the proton range uncertainty caused by density binning. These results were verified with Monte Carlo simulations. Results: In the presence of even modest CT noise (5 HU or 0.5%), 450 density bins were found to cause only a 5% increase in the density uncertainty (i.e., 95% of density uncertainty from CT noise, 5% from binning). Larger numbers of density bins are not required, as CT noise will prevent increased density accuracy; this applies across all types of Monte Carlo simulations. Examining uncertainty in proton range, only 127 density bins are required for a proton range error of <0.1 mm in most tissue and <0.5 mm in low density tissue (e.g., lung). Conclusions: By considering CT noise and actual range uncertainty, the number of required density bins can be restricted to a very modest 127, depending on the application. Reducing the number of density bins provides large memory and execution time savings in GEANT4 and other Monte Carlo packages.
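
    The binning step at the heart of this result can be sketched as follows. The HU-to-density calibration and the bin range below are illustrative placeholders, not the values used in the study; only the quantisation idea carries over:

```python
def hu_to_density(hu):
    """Illustrative piecewise-linear HU -> mass density (g/cm^3).
    Calibration points are assumed, not taken from the paper."""
    if hu <= 0:
        return max(0.001, 1.0 + hu / 1000.0)   # air ... water
    return 1.0 + hu / 1700.0                   # soft tissue ... bone

def bin_density(rho, n_bins, rho_min=0.001, rho_max=3.0):
    """Snap a density to the centre of one of n_bins equal-width bins,
    as a Monte Carlo material table with discrete densities would."""
    width = (rho_max - rho_min) / n_bins
    idx = min(n_bins - 1, max(0, int((rho - rho_min) / width)))
    return rho_min + (idx + 0.5) * width

# With 127 bins the quantisation error is at most half a bin width.
rho = hu_to_density(40)   # soft tissue, ~1.024 g/cm^3
print(abs(bin_density(rho, 127) - rho) <= (3.0 - 0.001) / 127 / 2)  # True
```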

  6. Accuracy of endodontic microleakage results: autoradiographic vs. volumetric measurements.

    PubMed

    Ximénez-Fyvie, L A; Ximénez-García, C; Carter-Bartlett, P M; Collado-Webber, F J

    1996-06-01

    The correlation between autoradiographic and volumetric leakage measurements was evaluated. Seventy-two anterior teeth with a single canal were selected and divided into three groups of 24. Group 1 served as control (no obturation), group 2 was obturated with gutta-percha only, and group 3 was obturated with gutta-percha and endodontic sealer. Samples were placed in a vertical position in 48-well cell culture plates and immersed in 1 ml of [14C]urea for 14 days. One-mm-thick horizontal serial sections were cut with a diamond disk cooled with liquid-nitrogen gas. Linear penetration was recorded by five independent evaluators from autoradiographs. Volumetric results were based on counts per minute registered in a liquid scintillation spectrometer. Pearson's correlation coefficient test was used to determine the linear correlation between both methods of evaluation. No acceptable correlation values were found in any of the three groups (group 1, r = 0.34; group 2, r = 0.23; group 3, r = 0.20). Our results indicate that there is no correlation between linear and volumetric measurements of leakage. PMID:8934988
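
    The correlation test used above reduces to a few lines. The leakage readings below are hypothetical illustrations, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical linear-penetration (mm) and volumetric (cpm) readings:
linear = [1.0, 2.5, 0.5, 3.0, 1.5]
volumetric = [420, 310, 150, 880, 260]
print(round(pearson_r(linear, volumetric), 2))
```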

  7. Improved Accuracy of the Gravity Probe B Science Results

    NASA Astrophysics Data System (ADS)

    Conklin, John; Adams, M.; Aljadaan, A.; Aljibreen, H.; Almeshari, M.; Alsuwaidan, B.; Bencze, W.; Buchman, S.; Clarke, B.; Debra, D. B.; Everitt, C. W. F.; Heifetz, M.; Holmes, T.; Keiser, G. M.; Kolodziejczak, J.; Li, J.; Lipa, J.; Lockhart, J. M.; Muhlfelder, B.; Parkinson, B. W.; Salomon, M.; Silbergleit, A.; Solomonik, V.; Stahl, K.; Taber, M.; Turneaure, J. P.; Worden, P. W., Jr.

    This paper presents the progress in the science data analysis for the Gravity Probe B (GP-B) experiment. GP-B, sponsored by NASA and launched in April of 2004, tests two fundamental predictions of general relativity, the geodetic effect and the frame-dragging effect. The GP-B spacecraft measures the non-Newtonian drift rates of four ultra-precise cryogenic gyroscopes placed in a circular polar Low Earth Orbit. Science data was collected from 28 August 2004 until cryogen depletion on 29 September 2005. The data analysis is complicated by two unexpected phenomena, a) a continually damping gyroscope polhode affecting the calibration of the gyro readout scale factor, and b) two larger than expected classes of Newtonian torque acting on the gyroscopes. Experimental evidence strongly suggests that both effects are caused by non-uniform electric potentials (i.e. the patch effect) on the surfaces of the gyroscope rotor and its housing. At the end of 2008, the data analysis team reported intermediate results showing that the two complications are well understood and are separable from the relativity signal. Since then we have developed the final GP-B data analysis code, the "2-second Filter", which provides the most accurate and precise determination of the non-Newtonian drifts attainable in the presence of the two Newtonian torques and the fundamental instrument noise. This limit is roughly 5

  8. SPHGal: smoothed particle hydrodynamics with improved accuracy for galaxy simulations

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Yu; Naab, Thorsten; Walch, Stefanie; Moster, Benjamin P.; Oser, Ludwig

    2014-09-01

    We present the smoothed particle hydrodynamics (SPH) implementation SPHGal, which combines some recently proposed improvements in GADGET. This includes a pressure-entropy formulation with a Wendland kernel, a higher order estimate of velocity gradients, a modified artificial viscosity switch with a modified strong limiter, and artificial conduction of thermal energy. With a series of idealized hydrodynamic tests, we show that the pressure-entropy formulation is ideal for resolving fluid mixing at contact discontinuities but performs conspicuously worse at strong shocks due to the large entropy discontinuities. Including artificial conduction at shocks greatly improves the results. In simulations of Milky Way-like disc galaxies, a feedback-induced instability develops if too much artificial viscosity is introduced. Our modified artificial viscosity scheme prevents this instability and shows efficient shock capturing capability. We also investigate the star formation rate and the galactic outflow. The star formation rates vary slightly for different SPH schemes while the mass loading is sensitive to the SPH scheme and significantly reduced in our favoured implementation. We compare the accretion behaviour of the hot halo gas. The formation of cold blobs, an artefact of simple SPH implementations, can be eliminated efficiently with proper fluid mixing, by conduction, by using a pressure-entropy formulation, or both.

  9. Accuracy of Numerical Simulations of Tip Clearance Flow in Transonic Compressor Rotors Improved Dramatically

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors have a significant impact on rotor and stage performance. Although numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because, in high-speed machines, measurements acquired in sufficient detail to be useful are rare. Researchers at the NASA Glenn Research Center at Lewis Field compared measured tip clearance flow details (e.g., trajectory and radial extent) of the NASA Rotor 35 with results obtained from a numerical simulation. Previous investigations had focused on capturing the detailed development of the jetlike flow leaking through the clearance gap between the rotating blade tip and the stationary compressor shroud. However, we discovered that the simulation accuracy depends primarily on capturing the detailed development of a wall-bounded shear layer formed by the relative motion between the leakage jet and the shroud.

  10. The effectiveness of FE model for increasing accuracy in stretch forming simulation of aircraft skin panels

    NASA Astrophysics Data System (ADS)

    Kono, A.; Yamada, T.; Takahashi, S.

    2013-12-01

    In the aerospace industry, stretch forming has been used to form the outer surface parts of aircraft, which are called skin panels. Empirical methods have been used to correct the springback by measuring the formed panels. However, such methods are impractical and cost prohibitive. Therefore, there is a need to develop simulation technologies to predict the springback caused by stretch forming [1]. This paper reports the results of a study on the influences of the modeling conditions and parameters on the accuracy of an FE analysis simulating the stretch forming of aircraft skin panels. The effects of the mesh aspect ratio, convergence criteria, and integration points are investigated, and better simulation conditions and parameters are proposed.

  11. Improved reticle requalification accuracy and efficiency via simulation-powered automated defect classification

    NASA Astrophysics Data System (ADS)

    Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John

    2014-04-01

    Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity due to the likelihood of human error; the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20nm node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we have studied and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process; specifically to defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and to actual wafer prints.

  12. Measuring the Accuracy of Prediction in a Simulated Environment.

    ERIC Educational Resources Information Center

    Mailles, Stephanie; Batatia, Hadj

    1998-01-01

    Describes use of a computerized simulation to study prediction in a complex environment (i.e., bus traffic control). Nature of the task, presentation method, number of repetitions, and length of time taken for prediction were measured. Prediction was significantly affected by all factors except number of repetitions. No learning effect was…

  13. Monte-Carlo Simulation for Accuracy Assessment of a Single Camera Navigation System

    NASA Astrophysics Data System (ADS)

    Bethmann, F.; Luhmann, T.

    2012-07-01

    The paper describes a simulation-based optimization of an optical tracking system that is used as a 6DOF navigation system for neurosurgery. Compared to classical systems used in clinical navigation, the presented system has two unique properties: firstly, the system will be miniaturized and integrated into an operating microscope for neurosurgery; secondly, due to miniaturization, a single-camera approach has been designed. Single camera techniques for 6DOF measurements show a special sensitivity to weak geometric configurations between camera and object. In addition, the achievable accuracy potential depends significantly on the geometric properties of the tracked objects (locators). Besides quality and stability of the targets used on the locator, their geometric configuration is of major importance. In the following, we present the development and investigation of a simulation program that allows for the assessment and optimization of the system with respect to accuracy. Different system parameters can be altered as well as different scenarios indicating the operational use of the system. Measurement deviations are estimated based on the Monte-Carlo method. Practical measurements validate the correctness of the numerical simulation results.
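
    The Monte-Carlo deviation estimate works the same way on any measurement function: perturb the inputs with the assumed noise model, rerun the measurement, and take the spread of the results. A toy sketch on a 2-D image distance rather than the paper's full 6DOF camera model (all numbers assumed):

```python
import math
import random

def distance(p, q):
    """Euclidean distance between two 2-D image points."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def monte_carlo_std(p, q, sigma_px, n_trials=20000, seed=1):
    """Standard deviation of a distance measurement when every image
    coordinate carries independent Gaussian noise of sigma_px pixels."""
    rng = random.Random(seed)
    d0 = distance(p, q)
    devs = []
    for _ in range(n_trials):
        pn = (p[0] + rng.gauss(0, sigma_px), p[1] + rng.gauss(0, sigma_px))
        qn = (q[0] + rng.gauss(0, sigma_px), q[1] + rng.gauss(0, sigma_px))
        devs.append(distance(pn, qn) - d0)
    mean = sum(devs) / n_trials
    return math.sqrt(sum((d - mean) ** 2 for d in devs) / (n_trials - 1))

# For well-separated points the distance error approaches sigma * sqrt(2).
print(monte_carlo_std((0.0, 0.0), (100.0, 0.0), 1.0))
```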

  14. Grid Generation Issues and CFD Simulation Accuracy for the X33 Aerothermal Simulations

    NASA Technical Reports Server (NTRS)

    Polsky, Susan; Papadopoulos, Periklis; Davies, Carol; Loomis, Mark; Prabhu, Dinesh; Langhoff, Stephen R. (Technical Monitor)

    1997-01-01

    Grid generation issues relating to the simulation of the X33 aerothermal environment using the GASP code are explored. Required grid densities and normal grid stretching are discussed with regard to predicting the fluid dynamic and heating environments with the desired accuracy. The generation of volume grids is explored and includes discussions of structured grid generation packages such as GRIDGEN, GRIDPRO and HYPGEN. Volume grid manipulation techniques for obtaining the desired outer boundary and grid clustering using the OUTBOUND code are examined. The generation of the surface grid with the required topology is also discussed. Utilizing grids without singular axes is explored as a method of avoiding numerical difficulties at the singular line.

  15. Digital core based transmitted ultrasonic wave simulation and velocity accuracy analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Shan, Rui

    2016-06-01

    Transmitted ultrasonic wave simulation (TUWS) in a digital core is one of the important elements of digital rock physics and is used to study wave propagation in porous cores and calculate equivalent velocity. When simulating wave propagation in a 3D digital core, two additional layers are attached to the two surfaces perpendicular to the wave direction, and one planar wave source and two receiver arrays are properly installed. After source excitation, the two receivers record the incident and transmitted waves of the digital rock. The wave propagation velocity, which is taken as the velocity of the digital core, is computed from the picked peak-time difference between the two recorded waves. To evaluate the accuracy of TUWS, a digital core is fully saturated with gas, oil, and water in turn to calculate the corresponding velocities. The velocities increase with decreasing wave frequency in the simulation frequency band, which is considered to be the result of scattering. When the pore fluid is varied from gas to oil and finally to water, the velocity-variation characteristics between the different frequencies are similar, approximately following the variation law of velocities obtained from the widely used linear elastic statics simulation (LESS), although their absolute values differ. The results of this paper show that the transmitted ultrasonic simulation has high relative precision.
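
    The peak-time picking described above reduces to a few lines of code. The trace values, sampling interval, and core length below are made-up illustrations of the workflow, not the paper's data:

```python
def peak_time(trace, dt):
    """Arrival time of the largest-magnitude sample in a waveform."""
    return max(range(len(trace)), key=lambda i: abs(trace[i])) * dt

def core_velocity(incident, transmitted, dt, travel_length):
    """Equivalent velocity from the peak-time difference between the
    incident and transmitted records."""
    return travel_length / (peak_time(transmitted, dt) - peak_time(incident, dt))

# Synthetic spikes: incident peaks at sample 10, transmitted at sample 30.
incident = [0.0] * 100
incident[10] = 1.0
transmitted = [0.0] * 100
transmitted[30] = 0.6

# 0.04 m travel length, 1 us sampling -> 0.04 / 20e-6 = 2000 m/s.
print(core_velocity(incident, transmitted, dt=1e-6, travel_length=0.04))
```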

  16. Accuracy of flowmeters measuring horizontal groundwater flow in an unconsolidated aquifer simulator.

    USGS Publications Warehouse

    Bayless, E.R.; Mandell, Wayne A.; Ursic, James R.

    2011-01-01

    Borehole flowmeters that measure horizontal flow velocity and direction of groundwater flow are being increasingly applied to a wide variety of environmental problems. This study was carried out to evaluate the measurement accuracy of several types of flowmeters in an unconsolidated aquifer simulator. Flowmeter response to hydraulic gradient, aquifer properties, and well-screen construction was measured during 2003 and 2005 at the U.S. Geological Survey Hydrologic Instrumentation Facility in Bay St. Louis, Mississippi. The flowmeters tested included a commercially available heat-pulse flowmeter, an acoustic Doppler flowmeter, a scanning colloidal borescope flowmeter, and a fluid-conductivity logging system. Results of the study indicated that at least one flowmeter was capable of measuring borehole flow velocity and direction in most simulated conditions. The mean error in direction measurements ranged from 15.1 degrees to 23.5 degrees, and the directional accuracy of all tested flowmeters improved with increasing hydraulic gradient. Darcy velocities examined in this study ranged from 4.3 to 155 ft/d. For many plots comparing the simulated and measured Darcy velocity, the squared correlation coefficient (r2) exceeded 0.92. The accuracy of velocity measurements varied with well construction and velocity magnitude. The use of horizontal flowmeters in environmental studies appears promising, but applications may require more than one type of flowmeter to span the range of conditions encountered in the field. Interpreting flowmeter data from field settings may be complicated by geologic heterogeneity, preferential flow, vertical flow, constricted screen openings, and nonoptimal screen orientation.

  17. Effective hydrodynamic hydrogen escape from an early Earth atmosphere inferred from high-accuracy numerical simulation

    NASA Astrophysics Data System (ADS)

    Kuramoto, Kiyoshi; Umemoto, Takafumi; Ishiwatari, Masaki

    2013-08-01

    Hydrodynamic escape of hydrogen driven by solar extreme ultraviolet (EUV) radiation heating is numerically simulated by using the constrained interpolation profile scheme, a high-accuracy scheme for solving the one-dimensional advection equation. For a wide range of hydrogen number densities at the lower boundary and solar EUV fluxes, more than half of EUV heating energy is converted to mechanical energy of the escaping hydrogen. Less energy is lost by downward thermal conduction even giving low temperature for the atmospheric base. This result differs from a previous numerical simulation study that yielded much lower escape rates by employing another scheme in which relatively strong numerical diffusion is implemented. Because the solar EUV heating effectively induces hydrogen escape, the hydrogen mixing ratio was likely to have remained lower than 1 vol% in the anoxic Earth atmosphere during the Archean era.

  18. Accuracy of MHD simulations: Effects of simulation initialization in GUMICS-4

    NASA Astrophysics Data System (ADS)

    Lakka, Antti; Pulkkinen, Tuija; Dimmock, Andrew; Osmane, Adnane; Palmroth, Minna; Honkonen, Ilja

    2016-04-01

    We conducted a study aimed at revealing how different global magnetohydrodynamic (MHD) simulation initialization methods affect the dynamics in different parts of the Earth's magnetosphere-ionosphere system. While such magnetosphere-ionosphere coupling codes have been used for more than two decades, their testing still requires significant work to identify the optimal numerical representation of the physical processes. We used the Grand Unified Magnetosphere-Ionosphere Coupling Simulation (GUMICS-4), the only European global MHD simulation, being developed by the Finnish Meteorological Institute. GUMICS-4 was put to a test that included two stages: 1) a 10 day Omni data interval was simulated and the results were validated by comparing both the bow shock and the magnetopause spatial positions predicted by the simulation to actual measurements, and 2) the validated 10 day simulation run was used as a reference in a comparison of five 3 + 12 hour (3 hour synthetic initialization + 12 hour actual simulation) simulation runs. The 12 hour input was not only identical in each simulation case but also represented a subset of the 10 day input, thus enabling quantification of the effects of different synthetic initializations on the magnetosphere-ionosphere system. The synthetic initialization data sets were created using stepwise, linear, and sinusoidal functions. The switch from synthetic to real Omni input was immediate. The results show that the magnetosphere forms in each case within an hour after the switch to real data. However, local dissimilarities are found in the magnetospheric dynamics after formation depending on the initialization method used. This is evident especially in the inner parts of the lobe.

  19. SARDA HITL Simulations: System Performance Results

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam

    2012-01-01

    This presentation gives an overview of the 2012 SARDA human-in-the-loop simulation and presents a summary of system performance results from the simulation, including delay, throughput, and fuel consumption.

  20. Simulation of thalamic prosthetic vision: reading accuracy, speed, and acuity in sighted humans

    PubMed Central

    Vurro, Milena; Crowell, Anne Marie; Pezaris, John S.

    2014-01-01

    The psychophysics of reading with artificial sight has received increasing attention as visual prostheses are becoming a real possibility to restore useful function to the blind through the coarse, pseudo-pixelized vision they generate. Studies to date have focused on simulating retinal and cortical prostheses; here we extend that work to report on thalamic designs. This study examined the reading performance of normally sighted human subjects using a simulation of three thalamic visual prostheses that varied in phosphene count, to help understand the level of functional ability afforded by thalamic designs in a task of daily living. Reading accuracy, reading speed, and reading acuity of 20 subjects were measured as a function of letter size, using a task based on the MNREAD chart. Results showed that fluid reading was feasible with appropriate combinations of letter size and phosphene count, and performance degraded smoothly as font size was decreased, with an approximate doubling of phosphene count resulting in an increase of 0.2 logMAR in acuity. Results here were consistent with previous results from our laboratory. Results were also consistent with those from the literature, despite using naive subjects who were not trained on the simulator, in contrast to other reports. PMID:25408641

  1. Deciphering the impact of uncertainty on the accuracy of large wildfire spread simulations.

    PubMed

    Benali, Akli; Ervilha, Ana R; Sá, Ana C L; Fernandes, Paulo M; Pinto, Renata M S; Trigo, Ricardo M; Pereira, José M C

    2016-11-01

    Predicting wildfire spread is a challenging task fraught with uncertainties. 'Perfect' predictions are unfeasible since uncertainties will always be present. Improving fire spread predictions is important to reduce its negative environmental impacts. Here, we propose to understand, characterize, and quantify the impact of uncertainty in the accuracy of fire spread predictions for very large wildfires. We frame this work from the perspective of the major problems commonly faced by fire model users, namely the necessity of accounting for uncertainty in input data to produce reliable and useful fire spread predictions. Uncertainty in input variables was propagated throughout the modeling framework and its impact was evaluated by estimating the spatial discrepancy between simulated and satellite-observed fire progression data, for eight very large wildfires in Portugal. Results showed that uncertainties in wind speed and direction, fuel model assignment and typology, location and timing of ignitions, had a major impact on prediction accuracy. We argue that uncertainties in these variables should be integrated in future fire spread simulation approaches, and provide the necessary data for any fire model user to do so. PMID:27333574

  2. High-Accuracy Finite Difference Equations for Simulation of Photonic Structures

    SciTech Connect

    Hadley, G.R.

    1999-04-23

    Progress towards the development of such algorithms has been reported for waveguide analysis and vertical-cavity laser simulation. In all these cases, the higher accuracy order was obtained for a single spatial dimension. More recently, this concept was extended to differencing of the Helmholtz equation on a 2-D grid, with uniform regions treated to 4th order and dielectric interfaces to 3rd order. No attempt was made to treat corners properly. In this talk I will describe the extension of this concept to allow differencing of the Helmholtz equation on a 2-D grid to 6th order in uniform regions and 5th order at dielectric interfaces. In addition, the first known derivation of a finite difference equation for a dielectric corner that allows correct satisfaction of all boundary conditions will be presented. This equation is only accurate to first order, but as will be shown, results in simulations that are third-order-accurate. In contrast to a previous approach that utilized a generalized Douglas scheme to increase the accuracy order of the difference second derivative, the present method invokes the Helmholtz equation itself to convert derivatives of high order in a single direction into mixed derivatives.
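
    The payoff of higher-order differencing can be seen on the 1-D second derivative that underlies the Helmholtz operator. The stencils below are the standard 2nd- and 4th-order central differences, shown only to illustrate order of accuracy; they are not the paper's interface-aware formulas:

```python
import math

def d2_o2(f, x, h):
    """2nd-order central difference for f''(x)."""
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

def d2_o4(f, x, h):
    """4th-order central difference for f''(x)."""
    return (-f(x - 2*h) + 16*f(x - h) - 30*f(x)
            + 16*f(x + h) - f(x + 2*h)) / (12 * h**2)

# Test on f = sin, whose exact second derivative at x=1 is -sin(1).
exact = -math.sin(1.0)
for h in (0.1, 0.05):
    print(h, abs(d2_o2(math.sin, 1.0, h) - exact),
             abs(d2_o4(math.sin, 1.0, h) - exact))
# Halving h cuts the 2nd-order error ~4x and the 4th-order error ~16x.
```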

  3. Effects of the resolution of soil dataset and precipitation dataset on SWAT2005 streamflow calibration parameters and simulation accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The resultant calibration parameter values and simulation accuracy of hydrologic models such as the Soil and Water Assessment Tool (SWAT2005) depend on how well spatial input parameters describe the characteristics of the study area. The objectives of this study were to: 1) investigate the effect o...

  4. Concatenation and Species Tree Methods Exhibit Statistically Indistinguishable Accuracy under a Range of Simulated Conditions

    PubMed Central

    Tonini, João; Moore, Andrew; Stern, David; Shcheglovitova, Maryia; Ortí, Guillermo

    2015-01-01

    Phylogeneticists have long understood that several biological processes can cause a gene tree to disagree with its species tree. In recent years, molecular phylogeneticists have increasingly foregone traditional supermatrix approaches in favor of species tree methods that account for one such source of error, incomplete lineage sorting (ILS). While gene tree-species tree discordance no doubt poses a significant challenge to phylogenetic inference with molecular data, researchers have only recently begun to systematically evaluate the relative accuracy of traditional and ILS-sensitive methods. Here, we report on simulations demonstrating that concatenation can perform as well or better than methods that attempt to account for sources of error introduced by ILS. Based on these and similar results from other researchers, we argue that concatenation remains a useful component of the phylogeneticist’s toolbox and highlight that phylogeneticists should continue to make explicit comparisons of results produced by contemporaneous and classical methods. PMID:25901289

  5. SAR simulations for high-field MRI: how much detail, effort, and accuracy is needed?

    PubMed

    Wolf, S; Diehl, D; Gebhardt, M; Mallow, J; Speck, O

    2013-04-01

    Accurate prediction of specific absorption rate (SAR) for high field MRI is necessary to best exploit its potential and guarantee safe operation. To reduce the effort (time, complexity) of SAR simulations while maintaining robust results, the minimum requirements for the creation (segmentation, labeling) of human models and methods to reduce the time for SAR calculations for 7 Tesla MR-imaging are evaluated. The geometric extent of the model required for realistic head-simulations and the number of tissue types sufficient to form a reliable but simplified model of the human body are studied. Two models (male and female) of the virtual family are analyzed. Additionally, their position within the head-coil is taken into account. Furthermore, the effects of retuning the coils to different load conditions and the influence of a large bore radiofrequency-shield have been examined. The calculation time for SAR simulations in the head can be reduced by 50% without significant error for smaller model extent and simplified tissue structure outside the coil. Likewise, the model generation can be accelerated by reducing the number of tissue types. Local SAR can vary up to 14% due to position alone. This must be considered and sets a limit for SAR prediction accuracy. All these results are comparable between the two body models tested. PMID:22611018

  6. Results of the 2015 Spitzer Exoplanet Data Challenge: Repeatability and Accuracy of Exoplanet Eclipse Depths

    NASA Astrophysics Data System (ADS)

    Ingalls, James G.; Krick, Jessica E.; Carey, Sean J.; Stauffer, John R.; Grillmair, Carl J.; Lowrance, Patrick

    2016-06-01

    We examine the repeatability, reliability, and accuracy of differential exoplanet eclipse depth measurements made using the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope during the post-cryogenic mission. At infrared wavelengths, secondary eclipses and phase curves are powerful tools for studying a planet’s atmosphere. Extracting information about atmospheres, however, is extremely challenging due to the small differential signals, which are often at the level of 100 parts per million (ppm) or smaller, and require the removal of significant instrumental systematics. For the IRAC 3.6 and 4.5μm InSb detectors that remain active on post-cryogenic Spitzer, the interplay of residual telescope pointing fluctuations with intrapixel gain variations in the moderately undersampled camera is the largest source of time-correlated noise. Over the past decade, a suite of techniques for removing this noise from IRAC data has been developed independently by various investigators. In summer 2015, the Spitzer Science Center hosted a Data Challenge in which seven exoplanet expert teams, each using a different noise-removal method, were invited to analyze 10 eclipse measurements of the hot Jupiter XO-3 b, as well as a complementary set of 10 simulated measurements. In this contribution we review the results of the Challenge. We describe statistical tools to assess the repeatability, reliability, and validity of data reduction techniques, and to compare and (perhaps) choose between techniques.

  7. Accuracy issues in the finite difference time domain simulation of photomask scattering

    NASA Astrophysics Data System (ADS)

    Pistor, Thomas V.

    2001-09-01

    As the use of electromagnetic simulation in lithography increases, accuracy issues are uncovered and must be addressed. A proper understanding of these issues can allow the lithographer to avoid pitfalls in electromagnetic simulation and to know what can and cannot be accurately simulated. This paper addresses the important accuracy issues related to the simulation of photomask scattering using the Finite Difference Time Domain (FDTD) method. Errors related to discretization and periodic boundary conditions are discussed. Discretization-related issues arise when derivatives are replaced by finite differences and when integrals are replaced by summations. These approximations can lead to mask features that do not have exact dimensions. The effects of discretization error on phase wells and thin films are shown. The reflectivity of certain thin film layers is seen to be very sensitive to the layer thickness. Simulation experiments and theory are used to determine how fine a discretization is necessary, and various discretization schemes that help minimize error are presented. Boundary-condition-related errors arise from the use of periodic boundary conditions when simulating isolated mask features. The effects of periodic boundary conditions are assessed through the use of simulation experiments. All errors are associated with an ever-present trade-off between accuracy and computational resources. However, choosing the cell size wisely can, in many cases, minimize error without significantly increasing computational resource requirements.
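
    The feature-dimension error from discretization has a simple model: on a uniform grid, a mask edge can only land on a grid line, so a feature's width snaps to a whole number of cells. A sketch of that effect (the cell size and feature width are arbitrary examples, not values from the paper):

```python
def realized_width(nominal_width, cell_size):
    """Feature width actually represented on a uniform FDTD grid:
    the nominal width snaps to an integer number of cells, so the
    dimension error is bounded by half a cell."""
    return round(nominal_width / cell_size) * cell_size

# A 182.5 nm feature on a 10 nm grid is realized as 180 nm;
# the error (2.5 nm) is within half a cell (5 nm).
print(realized_width(182.5, 10.0))  # 180.0
```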

  8. Milestone M4900: Simulant Mixing Analytical Results

    SciTech Connect

    Kaplan, D.I.

    2001-07-26

    This report addresses Milestone M4900, ''Simulant Mixing Sample Analysis Results,'' and contains the data generated during the ''Mixing of Process Heels, Process Solutions, and Recycle Streams: Small-Scale Simulant'' task. The Task Technical and Quality Assurance Plan for this task is BNF-003-98-0079A. A report with a narrative description and discussion of the data will be issued separately.

  9. DKIST Adaptive Optics System: Simulation Results

    NASA Astrophysics Data System (ADS)

    Marino, Jose; Schmidt, Dirk

    2016-05-01

    The 4 m class Daniel K. Inouye Solar Telescope (DKIST), currently under construction, will be equipped with an ultra high order solar adaptive optics (AO) system. The requirements and capabilities of such a solar AO system are beyond those of any other solar AO system currently in operation. We must rely on solar AO simulations to estimate and quantify its performance. We present performance estimation results of the DKIST AO system obtained with a new solar AO simulation tool. This simulation tool is a flexible and fast end-to-end solar AO simulator which produces accurate solar AO simulations while taking advantage of current multi-core computer technology. It relies on full imaging simulations of the extended field Shack-Hartmann wavefront sensor (WFS), which directly includes important secondary effects such as field dependent distortions and varying contrast of the WFS sub-aperture images.

  10. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    SciTech Connect

    Thompson, Aidan P.; Schultz, Peter A.; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen M.; Tucker, Garritt J.

    2014-09-01

    This report summarizes the result of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel
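The weighted least-squares regression step can be sketched in miniature. Below, a two-column design matrix (a bias term plus one made-up scalar descriptor, standing in for the many bispectrum components) is fit to "QM" target energies by solving the 2x2 normal equations by hand; all values are illustrative, not SNAP data.

```python
def wls_fit(X, y, w):
    """Solve the weighted normal equations (X^T W X) c = X^T W y
    for two coefficients; a toy version of SNAP's linear regression."""
    a00 = sum(wi * xi[0] * xi[0] for xi, wi in zip(X, w))
    a01 = sum(wi * xi[0] * xi[1] for xi, wi in zip(X, w))
    a11 = sum(wi * xi[1] * xi[1] for xi, wi in zip(X, w))
    b0 = sum(wi * xi[0] * yi for xi, yi, wi in zip(X, y, w))
    b1 = sum(wi * xi[1] * yi for xi, yi, wi in zip(X, y, w))
    det = a00 * a11 - a01 * a01
    return ((a11 * b0 - a01 * b1) / det, (a00 * b1 - a01 * b0) / det)

X = [(1.0, 0.5), (1.0, 1.5), (1.0, 2.5), (1.0, 3.0)]  # (bias, descriptor)
y = [1.9, 4.1, 6.0, 7.1]                              # target energies
w = [1.0, 1.0, 2.0, 1.0]                              # per-config weights
c0, c1 = wls_fit(X, y, w)
print(round(c0, 2), round(c1, 2))
```

In the real workflow the design matrix has one column per bispectrum component and the system is solved with a robust linear-algebra routine rather than explicit 2x2 formulas.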

  11. Simulation of agronomic images for an automatic evaluation of crop/ weed discrimination algorithm accuracy

    NASA Astrophysics Data System (ADS)

    Jones, G.; Gée, Ch.; Truchetet, F.

    2007-01-01

    In the context of precision agriculture, we present a robust and automatic method based on simulated images for evaluating the efficiency of any crop/weed discrimination algorithm for an inter-row weed infestation rate. To simulate these images two different steps are required: 1) modeling of a crop field from the spatial distribution of plants (crop and weed); 2) projection of the created field through an optical system to simulate photographing. Then an application is proposed that investigates the accuracy and robustness of a crop/weed discrimination algorithm combining line detection (Hough transform) with plant discrimination (crop and weeds). The accuracy of the weed infestation rate estimate for each image is calculated by direct comparison to the initial weed infestation rate of the simulated images. It reveals a performance better than 85%.

  12. SCEC Earthquake Simulator Comparison Results for California

    NASA Astrophysics Data System (ADS)

    Tullis, T. E.; Richards-Dinger, K. B.; Barall, M.; Dieterich, J. H.; Field, E. H.; Heien, E. M.; Kellogg, L. H.; Pollitz, F. F.; Rundle, J. B.; Sachs, M. K.; Turcotte, D. L.; Ward, S. N.; Zielke, O.

    2011-12-01

    This is our first report on comparisons of earthquake simulator results with one another and with actual earthquake data for all of California, excluding Cascadia. Earthquake simulators are computer programs that simulate long sequences of earthquakes and therefore allow study of a much longer earthquake history than is possible from instrumental, historical and paleoseismic data. The usefulness of simulated histories for anticipating the probabilities of future earthquakes and for contributing to public policy decisions depends on whether simulated earthquake catalogs properly represent actual earthquakes. Thus, we compare simulated histories generated by five different earthquake simulators with one another and with what is known about actual earthquake history in order to evaluate the usefulness of the simulator results. Although sharing common features, our simulators differ from one another in their details in many important ways. All simulators use the same fault geometry and the same ~15,000, 3x3 km elements to represent the strike-slip and thrust faults in California. The set of faults and the input slip rates on them are essentially those of the UCERF2 fault and deformation model; we will switch to the UCERF3 model once it is available. All simulators use the boundary element method to compute stress transfer between elements. Differences between the simulators include how they represent fault friction and what assumptions they make to promote rupture propagation from one element to another. The behavior of the simulators is encouragingly similar and the results are similar to what is known about real earthquakes, although some refinements are being made to some of the simulators to improve these comparisons as a result of our initial results. The frequency magnitude distributions of simulated events from M6 to M7.5 for a 30,000 year simulated history agree well with instrumental observations for all of California. Scaling relations, as seen on plots of

  13. Accuracy of nonmolecular identification of growth-hormone- transgenic coho salmon after simulated escape.

    PubMed

    Sundström, L F; Lõhmus, M; Devlin, R H

    2015-09-01

    Concerns with transgenic animals include the potential ecological risks associated with release or escape to the natural environment, and a critical requirement for assessment of ecological effects is the ability to distinguish transgenic animals from wild type. Here, we explore geometric morphometrics (GeoM) and human expertise to distinguish growth-hormone-transgenic coho salmon (Oncorhynchus kisutch) specimens from wild type. First, we simulated an escape of 3-month-old hatchery-reared wild-type and transgenic fish to an artificial stream, and recaptured them at the time of seaward migration at an age of 13 months. Second, we reared fish in the stream from first-feeding fry until an age of 13 months, thereby simulating fish arising from a successful spawn in the wild of an escaped hatchery-reared transgenic fish. All fish were then assessed from photographs by visual identification (VID) by local staff and by GeoM based on 13 morphological landmarks. A leave-one-out discriminant analysis of GeoM data had on average 86% (72-100% for individual groups) accuracy in assigning the correct genotypes, whereas the human experts were correct, on average, in only 49% of cases (range of 18-100% for individual fish groups). However, serious errors (i.e., classifying transgenic specimens as wild type) occurred for 7% (GeoM) and 67% (VID) of transgenic fish, and all of these incorrect assignments arose with fish reared in the stream from the first-feeding stage. The results show that we presently lack the skills of visually distinguishing transgenic coho salmon from wild type with a high level of accuracy, but that further development of GeoM methods could be useful in identifying second-generation fish from nature as a nonmolecular approach. PMID:26552269
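The leave-one-out idea can be sketched with a deliberately simplified stand-in: instead of a discriminant analysis on 13 landmarks, classify each fish by the nearer class centroid of a single hypothetical morphometric score, recomputing centroids with that fish held out. All scores below are made up.

```python
def loo_accuracy(scores, labels):
    """Leave-one-out accuracy of a nearest-class-centroid classifier
    on a 1-D score; a toy analogue of leave-one-out discriminant
    analysis."""
    correct = 0
    for i, (s, lab) in enumerate(zip(scores, labels)):
        cents = {}
        for c in set(labels):
            rest = [x for j, (x, l) in enumerate(zip(scores, labels))
                    if j != i and l == c]           # hold out sample i
            cents[c] = sum(rest) / len(rest)
        pred = min(cents, key=lambda c: abs(s - cents[c]))
        correct += pred == lab
    return correct / len(scores)

wild = [0.9, 1.1, 1.0, 1.2, 0.8]    # hypothetical wild-type scores
trans = [1.8, 2.1, 1.9, 2.3, 1.4]   # hypothetical transgenic scores
scores = wild + trans
labels = ["W"] * 5 + ["T"] * 5
print(loo_accuracy(scores, labels))
```

Note how the single overlapping transgenic score (1.4) is the one misclassified as wild type, mirroring the paper's point that the "serious" errors concentrate where phenotypes overlap.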

  14. Evaluation of the soil moisture prediction accuracy of a space radar using simulation techniques. [Kansas

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Stiles, J. A.; Moore, R. K.; Holtzman, J. C.

    1981-01-01

    Image simulation techniques were employed to generate synthetic aperture radar images of a 17.7 km x 19.3 km test site located east of Lawrence, Kansas. The simulations were performed for a space SAR at an orbital altitude of 600 km, with the following sensor parameters: frequency = 4.75 GHz, polarization = HH, and angle of incidence range = 7 deg to 22 deg from nadir. Three sets of images were produced corresponding to three different spatial resolutions: 20 m x 20 m with 12 looks, 100 m x 100 m with 23 looks, and 1 km x 1 km with 1000 looks. Each set consisted of images for four different soil moisture distributions across the test site. Results indicate that, for the agricultural portion of the test site, the soil moisture in about 90% of the pixels can be predicted with an accuracy of ± 20% of field capacity. Among the three spatial resolutions, the 1 km x 1 km resolution gave the best results for most cases; however, for very dry soil conditions, the 100 m x 100 m resolution was slightly superior.
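The accuracy criterion above amounts to counting the fraction of pixels whose predicted soil moisture falls within ±20% of field capacity of the true value. A sketch with made-up per-pixel values:

```python
def fraction_within(pred, true, tol=20.0):
    """Fraction of pixels predicted within +/- tol (in percent of
    field capacity) of the true soil moisture."""
    hits = sum(abs(p - t) <= tol for p, t in zip(pred, true))
    return hits / len(pred)

true_mc = [35, 60, 80, 45, 55, 90, 70, 40, 65, 75]  # % of field capacity
pred_mc = [40, 55, 95, 50, 52, 70, 78, 40, 90, 72]  # hypothetical retrievals
print(fraction_within(pred_mc, true_mc))
```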

  15. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  16. Evaluating the velocity accuracy of an integrated GPS/INS system: Flight test results

    SciTech Connect

    Owen, T.E.; Wardlaw, R.

    1991-12-31

    Verifying the velocity accuracy of a GPS receiver or an integrated GPS/INS system in a dynamic environment is a difficult proposition when many of the commonly used reference systems have velocity uncertainties of the same order of magnitude as, or greater than, the GPS system. The results of flight tests aboard an aircraft in which multiple reference systems simultaneously collected data to evaluate the accuracy of an integrated GPS/INS system are reported. Emphasis is placed on obtaining high accuracy estimates of the velocity error of the integrated system in order to verify that velocity accuracy is maintained during both linear and circular trajectories. Three different reference systems operating in parallel during flight tests are used to independently determine the position and velocity of an aircraft in flight. They are a transponder/interrogator ranging system, a laser tracker, and GPS carrier phase processing. Results obtained from these reference systems are compared against each other and against an integrated real time differential based GPS/INS system to arrive at a set of conclusions about the accuracy of the integrated system.

  17. Accuracy, Speed, Scalability: the Challenges of Large-Scale DFT Simulations

    NASA Astrophysics Data System (ADS)

    Gygi, Francois

    2014-03-01

    First-Principles Molecular Dynamics (FPMD) simulations based on Density Functional Theory (DFT) have become popular in investigations of electronic and structural properties of liquids and solids. The current upsurge in available computing resources enables simulations of larger and more complex systems, such as solvated ions or defects in crystalline solids. The high cost of FPMD simulations however still strongly limits the size of feasible simulations, in particular when using hybrid-DFT approximations. In addition, the simulation times needed to extract statistically meaningful quantities also grow with system size, which puts a premium on scalable implementations. We discuss recent research in the design and implementation of scalable FPMD algorithms, with emphasis on controlled-accuracy approximations and accurate hybrid-DFT molecular dynamics simulations, using examples of applications to materials science and chemistry. Work supported by DOE-BES under grant DE-SC0008938.

  18. Resource Allocation for Maximizing Prediction Accuracy and Genetic Gain of Genomic Selection in Plant Breeding: A Simulation Experiment

    PubMed Central

    Lorenz, Aaron J.

    2013-01-01

    Allocating resources between population size and replication affects both genetic gain through phenotypic selection and quantitative trait loci detection power and effect estimation accuracy for marker-assisted selection (MAS). It is well known that because alleles are replicated across individuals in quantitative trait loci mapping and MAS, more resources should be allocated to increasing population size compared with phenotypic selection. Genomic selection is a form of MAS using all marker information simultaneously to predict individual genetic values for complex traits and has widely been found superior to MAS. No studies have explicitly investigated how resource allocation decisions affect success of genomic selection. My objective was to study the effect of resource allocation on response to MAS and genomic selection in a single biparental population of doubled haploid lines by using computer simulation. Simulation results were compared with previously derived formulas for the calculation of prediction accuracy under different levels of heritability and population size. Response of prediction accuracy to resource allocation strategies differed between genomic selection models (ridge regression best linear unbiased prediction [RR-BLUP], BayesCπ) and multiple linear regression using ordinary least-squares estimation (OLS), leading to different optimal resource allocation choices between OLS and RR-BLUP. For OLS, it was always advantageous to maximize population size at the expense of replication, but a high degree of flexibility was observed for RR-BLUP. Prediction accuracy of doubled haploid lines included in the training set was much greater than of those excluded from the training set, so there was little benefit to phenotyping only a subset of the lines genotyped. Finally, observed prediction accuracies in the simulation compared well to calculated prediction accuracies, indicating these theoretical formulas are useful for making resource allocation
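The "previously derived formulas" for expected prediction accuracy are typically of the deterministic Daetwyler type, r = sqrt(N h² / (N h² + Me)), with training-population size N, heritability h², and Me effective loci. A hedged sketch (the formula choice and all parameter values here are assumptions for illustration, not taken from this paper):

```python
import math

def expected_accuracy(n, h2, me):
    """Deterministic (Daetwyler-type) expected genomic prediction
    accuracy: r = sqrt(n*h2 / (n*h2 + me))."""
    return math.sqrt(n * h2 / (n * h2 + me))

# Accuracy rises with both training size and heritability, which is
# why shifting resources from replication to population size can pay off.
for n in (100, 250, 500):
    for h2 in (0.2, 0.5):
        print(n, h2, round(expected_accuracy(n, h2, me=80), 2))
```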

  19. Resource allocation for maximizing prediction accuracy and genetic gain of genomic selection in plant breeding: a simulation experiment.

    PubMed

    Lorenz, Aaron J

    2013-03-01

    Allocating resources between population size and replication affects both genetic gain through phenotypic selection and quantitative trait loci detection power and effect estimation accuracy for marker-assisted selection (MAS). It is well known that because alleles are replicated across individuals in quantitative trait loci mapping and MAS, more resources should be allocated to increasing population size compared with phenotypic selection. Genomic selection is a form of MAS using all marker information simultaneously to predict individual genetic values for complex traits and has widely been found superior to MAS. No studies have explicitly investigated how resource allocation decisions affect success of genomic selection. My objective was to study the effect of resource allocation on response to MAS and genomic selection in a single biparental population of doubled haploid lines by using computer simulation. Simulation results were compared with previously derived formulas for the calculation of prediction accuracy under different levels of heritability and population size. Response of prediction accuracy to resource allocation strategies differed between genomic selection models (ridge regression best linear unbiased prediction [RR-BLUP], BayesCπ) and multiple linear regression using ordinary least-squares estimation (OLS), leading to different optimal resource allocation choices between OLS and RR-BLUP. For OLS, it was always advantageous to maximize population size at the expense of replication, but a high degree of flexibility was observed for RR-BLUP. Prediction accuracy of doubled haploid lines included in the training set was much greater than of those excluded from the training set, so there was little benefit to phenotyping only a subset of the lines genotyped. Finally, observed prediction accuracies in the simulation compared well to calculated prediction accuracies, indicating these theoretical formulas are useful for making resource allocation

  20. Evaluation of optoelectronic Plethysmography accuracy and precision in recording displacements during quiet breathing simulation.

    PubMed

    Massaroni, C; Schena, E; Saccomandi, P; Morrone, M; Sterzi, S; Silvestri, S

    2015-08-01

    Opto-electronic Plethysmography (OEP) is a motion analysis system used to measure chest wall kinematics and to indirectly evaluate respiratory volumes during breathing. Its working principle is based on the computation of the displacements of markers placed on the chest wall. This work aims at evaluating the accuracy and precision of OEP in measuring displacements in the range of human chest wall displacement during quiet breathing. OEP performance was investigated by the use of a fully programmable chest wall simulator (CWS). CWS was programmed to move its eight shafts 10 times in the range of physiological displacement (i.e., between 1 mm and 8 mm) at three different frequencies (i.e., 0.17 Hz, 0.25 Hz, 0.33 Hz). Experiments were performed to: (i) evaluate OEP accuracy and precision error in recording displacement in the overall calibrated volume and in three sub-volumes, and (ii) evaluate the OEP volume measurement accuracy due to the measurement accuracy of linear displacements. OEP showed an accuracy better than 0.08 mm in all trials, considering the whole 2 m³ calibrated volume. The mean measurement discrepancy was 0.017 mm. The precision error, expressed as the ratio between measurement uncertainty and the displacement recorded by OEP, was always lower than 0.55%. Volume overestimation due to OEP linear measurement accuracy was always < 12 mL (< 3.2% of total volume), considering all settings. PMID:26736504
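The two figures of merit above can be sketched directly: accuracy as the mean absolute discrepancy between commanded and recorded shaft displacement, and precision error as the ratio of measurement uncertainty to the recorded displacement. The commanded/recorded values below are hypothetical, not the study's data.

```python
def accuracy_mm(commanded, recorded):
    """Mean absolute discrepancy between commanded and recorded
    displacements, in mm."""
    return sum(abs(c - r) for c, r in zip(commanded, recorded)) / len(commanded)

def precision_error_pct(uncertainty_mm, displacement_mm):
    """Precision error as uncertainty over recorded displacement,
    expressed in percent."""
    return 100.0 * uncertainty_mm / displacement_mm

cmd = [1.0, 2.0, 4.0, 8.0]        # programmed shaft displacements, mm
rec = [1.01, 1.99, 4.02, 7.98]    # hypothetical OEP readings, mm
print(round(accuracy_mm(cmd, rec), 3))
print(round(precision_error_pct(0.02, 8.0), 2))
```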

  1. Simulation-based evaluation of the resolution and quantitative accuracy of temperature-modulated fluorescence tomography

    PubMed Central

    Lin, Yuting; Nouizi, Farouk; Kwong, Tiffany C.; Gulsen, Gultekin

    2016-01-01

    Conventional fluorescence tomography (FT) can recover the distribution of fluorescent agents within a highly scattering medium. However, poor spatial resolution remains its foremost limitation. Previously, we introduced a new fluorescence imaging technique termed “temperature-modulated fluorescence tomography” (TM-FT), which provides high-resolution images of fluorophore distribution. TM-FT is a multimodality technique that combines fluorescence imaging with focused ultrasound to locate thermo-sensitive fluorescence probes using a priori spatial information to drastically improve the resolution of conventional FT. In this paper, we present an extensive simulation study to evaluate the performance of the TM-FT technique on complex phantoms with multiple fluorescent targets of various sizes located at different depths. In addition, the performance of the TM-FT is tested in the presence of background fluorescence. The results obtained using our new method are systematically compared with those obtained with the conventional FT. Overall, TM-FT provides higher resolution and superior quantitative accuracy, making it an ideal candidate for in vivo preclinical and clinical imaging. For example, a 4 mm diameter inclusion positioned in the middle of a synthetic slab geometry phantom (D: 40 mm × W: 100 mm) is recovered as an elongated object in the conventional FT (x = 4.5 mm; y = 10.4 mm), while TM-FT recovers it successfully in both directions (x = 3.8 mm; y = 4.6 mm). As a result, the quantitative accuracy of the TM-FT is superior because it recovers the concentration of the agent with a 22% error, which is in contrast with the 83% error of the conventional FT. PMID:26368884

  2. Cassini radar : system concept and simulation results

    NASA Astrophysics Data System (ADS)

    Melacci, P. T.; Orosei, R.; Picardi, G.; Seu, R.

    1998-10-01

    The Cassini mission is an international venture, involving NASA, the European Space Agency (ESA) and the Italian Space Agency (ASI), for the investigation of the Saturn system and, in particular, Titan. The Cassini radar will be able to see through Titan's thick, optically opaque atmosphere, allowing us to better understand the composition and the morphology of its surface, but the interpretation of the results, due to the complex interplay of many different factors determining the radar echo, will not be possible without extensive modeling of the radar system's functioning and of the surface reflectivity. In this paper, a simulator of the multimode Cassini radar will be described, after a brief review of our current knowledge of Titan and a discussion of the contribution of the Cassini radar in answering currently open questions. Finally, the results of the simulator will be discussed. The simulator has been implemented on a RISC 6000 computer by considering only the active modes of operation, that is altimeter and synthetic aperture radar. In the instrument simulation, strict reference has been made to the present planned sequence of observations and to the radar settings, including burst and single pulse duration, pulse bandwidth, pulse repetition frequency and all other parameters which may be changed, and possibly optimized, according to the operative mode. The observed surfaces are simulated by a facet model, allowing the generation of surfaces with Gaussian or non-Gaussian roughness statistics, together with the possibility of assigning to the surface an average behaviour which can represent, for instance, a flat surface or a crater. The results of the simulation will be discussed, in order to check the analytical evaluations of the models of the average received echoes and of the attainable performances. In conclusion, the simulation results should allow the validation of the theoretical evaluations of the capabilities of microwave instruments, when

  3. Evaluation of Accuracy and Reliability of the Six Ensemble Methods Using 198 Sets of Pseudo-Simulation Data

    NASA Astrophysics Data System (ADS)

    Suh, M. S.; Oh, S. G.

    2014-12-01

    The accuracy and reliability of the six ensemble methods were evaluated according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) generated by considering the simulation characteristics of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets with 50 samples. The ensemble methods used were as follows: equal weighted averaging with(out) bias correction (EWA_W(N)BC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), WEA based on reliability (WEA_REA), and multivariate linear regression (Mul_Reg). The weighted ensemble methods showed better projection skills in terms of accuracy and reliability than non-weighted methods, in particular, for the PSD categories having systematic biases and various correlation coefficients. In general, WEA_Tay, WEA_REA and WEA_RAC showed superior skills in terms of accuracy and reliability, regardless of the PSD categories, training periods, and ensemble numbers. The evaluation results showed that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of members. However, the EWA_NBC showed a comparable projection skill with the other methods only in the certain categories with unsystematic biases.
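A minimal sketch of skill-weighted ensemble averaging in the spirit of WEA_RAC: each model is weighted by its training-period skill, here inverse RMSE only (the correlation term is omitted for brevity, and all series below are made up).

```python
def rmse(sim, obs):
    """Root mean square error of a model series against observations."""
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

def weighted_ensemble(models_train, obs_train, models_proj):
    """Weight each model by 1/RMSE over the training period, then
    combine the models' projections with those normalized weights."""
    w = [1.0 / rmse(m, obs_train) for m in models_train]
    tot = sum(w)
    steps = len(models_proj[0])
    return [sum(wi * m[t] for wi, m in zip(w, models_proj)) / tot
            for t in range(steps)]

obs = [10.0, 12.0, 11.0, 13.0]                               # training obs
train = [[9.0, 12.5, 10.0, 13.5], [12.0, 14.0, 13.0, 15.0]]  # two models
proj = [[14.0, 15.0], [18.0, 19.0]]                          # projections
print(weighted_ensemble(train, obs, proj))
```

The second model's systematic warm bias earns it a smaller weight, pulling the combined projection toward the better-calibrated model, which is the mechanism behind the weighted methods' advantage for biased PSD categories.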

  4. Accuracy and Reliability of Haptic Spasticity Assessment Using HESS (Haptic Elbow Spasticity Simulator)

    PubMed Central

    Kim, Jonghyun; Park, Hyung-Soon; Damiano, Diane L.

    2013-01-01

    Clinical assessment of spasticity tends to be subjective because of the nature of the in-person assessment; severity of spasticity is judged based on the muscle tone felt by a clinician during manual manipulation of a patient’s limb. As an attempt to standardize the clinical assessment of spasticity, we developed HESS (Haptic Elbow Spasticity Simulator), a programmable robotic system that can provide accurate and consistent haptic responses of spasticity and thus can be used as a training tool for clinicians. The aim of this study is to evaluate the accuracy and reliability of the recreated haptic responses. Based on clinical data collected from children with cerebral palsy, four levels of elbow spasticity (1, 1+, 2, and 3 in the Modified Ashworth Scale [MAS]) were recreated by HESS. Seven experienced clinicians manipulated HESS to score the recreated haptic responses. The accuracy of the recreation was assessed by the percent agreement between intended and determined MAS scores. The inter-rater reliability among the clinicians was analyzed by using Fleiss’s kappa. In addition, the level of realism with the recreation was evaluated by a questionnaire on “how realistic” this felt in a qualitative way. The percent agreement was high (85.7±11.7%), and for inter-rater reliability, there was substantial agreement (κ=0.646) among the seven clinicians. The level of realism was 7.71±0.95 out of 10. These results show that the haptic recreation of spasticity by HESS has the potential to be used as a training tool for standardizing and enhancing reliability of clinical assessment. PMID:22256328

  5. Effects of training and simulated combat stress on leg tourniquet application accuracy, time, and effectiveness.

    PubMed

    Schreckengaust, Richard; Littlejohn, Lanny; Zarow, Gregory J

    2014-02-01

    The lower extremity tourniquet failure rate remains significantly higher in combat than in preclinical testing, so we hypothesized that tourniquet placement accuracy, speed, and effectiveness would improve during training and decline during simulated combat. Navy Hospital Corpsmen (N = 89), enrolled in a Tactical Combat Casualty Care training course in preparation for deployment, applied the Combat Application Tourniquet (CAT) and the Special Operations Forces Tactical Tourniquet (SOFT-T) on day 1 and day 4 of classroom training, then under simulated combat, wherein participants ran an obstacle course to apply a tourniquet while wearing full body armor and avoiding simulated small arms fire (paint balls). Application time and pulse elimination effectiveness improved from day 1 to day 4 (p < 0.005). Under simulated combat, application time slowed significantly (p < 0.001), whereas accuracy and effectiveness declined slightly. Pulse elimination was poor for CAT (25% failure) and SOFT-T (60% failure) even in classroom conditions following training. CAT was more quickly applied (p < 0.005) and more effective (p < 0.002) than SOFT-T. Training fostered fast and effective application of leg tourniquets while performance declined under simulated combat. The inherent efficacy of tourniquet products contributes to high failure rates under combat conditions, pointing to the need for superior tourniquets and for rigorous deployment preparation training in simulated combat scenarios. PMID:24491604

  6. Comparison of the Accuracy and Speed of Transient Mobile A/C System Simulation Models: Preprint

    SciTech Connect

    Kiss, T.; Lustbader, J.

    2014-03-01

    The operation of air conditioning (A/C) systems is a significant contributor to the total amount of fuel used by light- and heavy-duty vehicles. Therefore, continued improvement of the efficiency of these mobile A/C systems is important. Numerical simulation has been used to reduce the system development time and to improve the electronic controls, but numerical models that include highly detailed physics run slower than desired for carrying out vehicle-focused drive cycle-based system optimization. Therefore, faster models are needed even if some accuracy is sacrificed. In this study, a validated model with highly detailed physics, the 'Fully-Detailed' model, and two models with different levels of simplification, the 'Quasi-Transient' and the 'Mapped-Component' models, are compared. The Quasi-Transient model applies some simplifications compared to the Fully-Detailed model to allow faster model execution speeds. The Mapped-Component model is similar to the Quasi-Transient model except instead of detailed flow and heat transfer calculations in the heat exchangers, it uses lookup tables created with the Quasi-Transient model. All three models are set up to represent the same physical A/C system and the same electronic controls. Speed and results of the three model versions are compared for steady state and transient operation. Steady state simulated data are also compared to measured data. The results show that the Quasi-Transient and Mapped-Component models ran much faster than the Fully-Detailed model, on the order of 10- and 100-fold, respectively. They also adequately approach the results of the Fully-Detailed model for steady-state operation, and for drive cycle-based efficiency predictions.

  7. An action-incongruent secondary task modulates prediction accuracy in experienced performers: evidence for motor simulation.

    PubMed

    Mulligan, Desmond; Lohse, Keith R; Hodges, Nicola J

    2016-07-01

    We provide behavioral evidence that the human motor system is involved in the perceptual decision processes of skilled performers, directly linking prediction accuracy to the (in)ability of the motor system to activate in a response-specific way. Experienced and non-experienced dart players were asked to predict, from temporally occluded video sequences, the landing position of a dart thrown previously by themselves (self) or another (other). This prediction task was performed while additionally performing (a) an action-incongruent secondary motor task (right arm force production), (b) a congruent secondary motor task (mimicking) or (c) an attention-matched task (tone-monitoring). Non-experienced dart players were not affected by any of the secondary task manipulations, relative to control conditions, yet prediction accuracy decreased for the experienced players when additionally performing the force-production, motor task. This interference effect was present for 'self' as well as 'other' decisions, reducing the accuracy of experienced participants to a novice level. The mimicking (congruent) secondary task condition did not interfere with (or facilitate) prediction accuracy for either group. We conclude that visual-motor experience moderates the process of decision making, such that a seemingly visual-cognitive prediction task relies on activation of the motor system for experienced performers. This fits with a motor simulation account of action prediction in sports and other tasks, and alerts to the specificity of these simulative processes. PMID:26021748

  8. High accuracy binary black hole simulations with an extended wave zone

    SciTech Connect

    Pollney, Denis; Reisswig, Christian; Dorband, Nils; Schnetter, Erik; Diener, Peter

    2011-02-15

    We present results from a new code for binary black hole evolutions using the moving-puncture approach, implementing finite differences in generalized coordinates, and allowing the spacetime to be covered with multiple communicating nonsingular coordinate patches. Here we consider a regular Cartesian near-zone, with adapted spherical grids covering the wave zone. The efficiencies resulting from the use of adapted coordinates allow us to maintain sufficient grid resolution to an artificial outer boundary location which is causally disconnected from the measurement. For the well-studied test case of the inspiral of an equal-mass nonspinning binary (evolved for more than 8 orbits before merger), we determine the phase and amplitude to numerical accuracies better than 0.010% and 0.090% during inspiral, respectively, and 0.003% and 0.153% during merger. The waveforms, including the resolved higher harmonics, are convergent and can be consistently extrapolated to r → ∞ throughout the simulation, including the merger and ringdown. Ringdown frequencies for these modes (to (l,m) = (6,6)) match perturbative calculations to within 0.01%, providing a strong confirmation that the remnant settles to a Kerr black hole with irreducible mass M_irr = 0.884355 ± 20×10⁻⁶ and spin S_f/M_f² = 0.686923 ± 10×10⁻⁶.

  9. Accuracy of q-space related parameters in MRI: simulations and phantom measurements.

    PubMed

    Lätt, Jimmy; Nilsson, Markus; Malmborg, Carin; Rosquist, Hannah; Wirestam, Ronnie; Ståhlberg, Freddy; Topgaard, Daniel; Brockstedt, Sara

    2007-11-01

    The accuracy of q-space measurements was evaluated at a 3.0-T clinical magnetic resonance imaging (MRI) scanner, as compared with a 4.7-T nuclear magnetic resonance (NMR) spectrometer. Measurements were performed using a stimulated-echo pulse sequence on n-decane as well as on polyethylene glycol (PEG) mixed with different concentrations of water, in order to obtain bi-exponential signal decay curves. The diffusion coefficients as well as the modelled diffusional kurtosis K(fit) were obtained from the signal decay curve, while the full-width at half-maximum (FWHM) and the diffusional kurtosis K were obtained from the displacement distribution. Simulations of restricted diffusion, under conditions similar to those obtainable with a clinical MRI scanner, were carried out assuming various degrees of violation of the short gradient pulse (SGP) condition and of the long diffusion time limit. The results indicated that an MRI system cannot be used for quantification of structural sizes less than about 10 µm by means of FWHM, since the parameter underestimates the confinements due to violation of the SGP condition. However, FWHM can still be used as an important contrast parameter. The obtained kurtosis values were lower than expected from theory, and the results showed that care must be taken when interpreting a kurtosis estimate deviating from zero. PMID:18041259

  10. A Simulation Framework for Evaluating Sampling Strategies and Determining Load Accuracies in Suspended Sediment Loads

    NASA Astrophysics Data System (ADS)

    Bajcsy, P.; Li, Q.; Crowder, D.; Markus, M.

    2007-12-01

    Excessive river sedimentation can cause extensive economic and ecological damage. Expensive dredging operations are needed to keep navigation channels clear and to maintain the capacity of water supply reservoirs. Deposition of fine sediments in rivers can eliminate pool habitats, decrease embryo survival rates of certain fish, and affect macroinvertebrate density and diversity. Sedimentation is often associated with anthropogenic watershed activities (e.g. urbanization and agricultural practices). Effort has been spent on developing Best Management Practices (BMPs) to reduce the sediment loads caused by specific watershed activities. Sediment monitoring networks have also been implemented to measure loads within streams and help determine the efficacy of BMPs over time. Yet, fundamental questions remain regarding how accurately loads can be estimated. Research suggests watershed hydrologic and geomorphic characteristics, sampling method and frequency, along with the method used to develop sediment-discharge rating curves can substantially affect the accuracy and precision at which sediment load estimates are made. The confidence at which one can estimate sediment loads, based on a specific sampling protocol, is one of several important pieces of information that hydrologic observatories need to understand in order to help monitor load trends. A computer program is being developed that allows one to estimate sediment loads using several sediment-discharge rating curves and bias correction factors. Using USGS mean daily sediment data for the Illinois River at Valley City, the program is employed to perform Monte Carlo simulations to predict confidence limits for loads estimated using different sampling protocols (e.g. weekly, monthly and hydrologic event based sampling). Results of the different sampling approaches are compared. A discussion regarding how these results, combined with future simulations representative of different sediment monitoring locations, can
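
    A minimal sketch of the kind of Monte Carlo experiment described, using synthetic data and an assumed power-law rating curve rather than the Valley City record: subsample a daily series, fit a log-log rating curve, integrate it over the full record, and repeat to see how sampling frequency widens the spread of load estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic year of daily discharge and sediment concentration; the rating
# curve C = a*Q**b and all parameters below are invented for illustration.
days = 365
discharge = rng.lognormal(mean=3.0, sigma=0.8, size=days)   # m^3/s
true_conc = 5.0 * discharge**1.3                            # assumed rating curve, mg/L
true_load = np.sum(true_conc * discharge)                   # arbitrary load units

def estimated_load(sample_every, noise_sigma=0.3):
    """Fit a log-log rating curve to a noisy subsample, then integrate it
    over the full daily discharge record.  (The multiplicative log-normal
    noise is also why bias correction factors are applied when
    back-transforming log-log fits.)"""
    idx = np.arange(0, days, sample_every)
    obs = true_conc[idx] * rng.lognormal(0.0, noise_sigma, idx.size)
    b, log_a = np.polyfit(np.log(discharge[idx]), np.log(obs), 1)
    return np.sum(np.exp(log_a) * discharge**b * discharge)

# Monte Carlo over many replicates for two sampling protocols; sparser
# sampling typically produces a wider spread, i.e. wider confidence limits.
weekly = np.array([estimated_load(7) for _ in range(200)])
monthly = np.array([estimated_load(30) for _ in range(200)])
```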

  11. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters

    NASA Astrophysics Data System (ADS)

    Nasehi Tehrani, Joubin; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.
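
    As one concrete reference point for the material models named above, the uncoupled Mooney-Rivlin model is commonly written as a strain energy over the isochoric invariants, W = C10(Ī1 − 3) + C01(Ī2 − 3) + (J − 1)²/D1. The sketch below evaluates that standard form from a deformation gradient; the material constants are invented, whereas the paper estimates patient-specific values.

```python
import numpy as np

# Hypothetical material constants (the study fits these per patient).
C10, C01, D1 = 0.12, 0.03, 1.0   # kPa, kPa, 1/kPa

def mooney_rivlin_energy(F):
    """Uncoupled Mooney-Rivlin strain energy for deformation gradient F."""
    J = np.linalg.det(F)                      # volume ratio
    C = F.T @ F                               # right Cauchy-Green tensor
    I1 = np.trace(C)
    I2 = 0.5 * (I1**2 - np.trace(C @ C))
    I1bar = J**(-2.0 / 3.0) * I1              # isochoric (uncoupled) invariants
    I2bar = J**(-4.0 / 3.0) * I2
    return C10 * (I1bar - 3.0) + C01 * (I2bar - 3.0) + (J - 1.0)**2 / D1

W0 = mooney_rivlin_energy(np.eye(3))          # zero at the undeformed state
```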

  12. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters.

    PubMed

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-21

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation. PMID:26531324

  13. Titan's organic chemistry: Results of simulation experiments

    NASA Technical Reports Server (NTRS)

    Sagan, Carl; Thompson, W. Reid; Khare, Bishun N.

    1992-01-01

    Recent low pressure continuous low plasma discharge simulations of the auroral electron driven organic chemistry in Titan's mesosphere are reviewed. These simulations yielded results in good accord with Voyager observations of gas phase organic species. Optical constants of the brownish solid tholins produced in similar experiments are in good accord with Voyager observations of the Titan haze. Titan tholins are rich in prebiotic organic constituents; the Huygens entry probe may shed light on some of the processes that led to the origin of life on Earth.

  14. [Improvement of root parameters in land surface model (LSM )and its effect on the simulated results].

    PubMed

    Cai, Kui-ye; Liu, Jing-miao; Zhang, Zheng-qiu; Liang, Hong; He, Xiao-dong

    2015-10-01

    In order to improve root parameterization in land surface model, the sub-model for root in CERES-Maize was coupled in the SSiB2 after calibrating of maize parameters in SSiB2. The effects of two improved root parameterization schemes on simulated results of land surface flux were analyzed. Results indicated that simulation accuracy of land surface flux was enhanced when the root module provided root depth only with the SSiB2 model (scheme I). Correlation coefficients between observed and simulated values of latent flux and sensible flux increased during the whole growing season, and RMSE of linear fitting decreased. Simulation accuracy of CO2 flux was also enhanced from 121 days after sowing to mature period. On the other hand, simulation accuracy of the flux was enhanced when the root module provided root depth and root length density simultaneously for the SSiB2 model (scheme II). Compared with the scheme I, the scheme II was more comprehensive, while its simulation accuracy of land surface flux decreased. The improved root parameterization in the SSiB2 model was better than the original one, which made simulated accuracy of land-atmospheric flux improved. The scheme II overestimated root relative growth in the surface layer soil, so its simulated accuracy was lower than that of the scheme I. PMID:26995920

  15. Improved accuracy with 3D planning and patient-specific instruments during simulated pelvic bone tumor surgery.

    PubMed

    Cartiaux, Olivier; Paul, Laurent; Francq, Bernard G; Banse, Xavier; Docquier, Pierre-Louis

    2014-01-01

    In orthopaedic surgery, resection of pelvic bone tumors can be inaccurate due to complex geometry, limited visibility and restricted working space of the pelvis. The present study investigated accuracy of patient-specific instrumentation (PSI) for bone-cutting during simulated tumor surgery within the pelvis. A synthetic pelvic bone model was imaged using a CT-scanner. The set of images was reconstructed in 3D and resection of a simulated periacetabular tumor was defined with four target planes (ischium, pubis, anterior ilium, and posterior ilium) with a 10-mm desired safe margin. Patient-specific instruments for bone-cutting were designed and manufactured using rapid-prototyping technology. Twenty-four surgeons (10 senior and 14 junior) were asked to perform tumor resection. After cutting, ISO1101 location and flatness parameters, achieved surgical margins and the time were measured. With PSI, the location accuracy of the cut planes with respect to the target planes averaged 1 and 1.2 mm in the anterior and posterior ilium, 2 mm in the pubis and 3.7 mm in the ischium (p < 0.0001). Results in terms of the location of the cut planes and the achieved surgical margins did not reveal any significant difference between senior and junior surgeons (p = 0.2214 and 0.8449, respectively). The maximum differences between the achieved margins and the 10-mm desired safe margin were found in the pubis (3.1 and 5.1 mm for senior and junior surgeons, respectively). Of the 24 simulated resections, none involved intralesional tumor cutting. This study demonstrates that using PSI technology during simulated bone cuts of the pelvis can provide good cutting accuracy. Compared to a previous report on computer assistance for pelvic bone cutting, PSI technology demonstrates a value-added for bone-cutting accuracy equivalent to that of navigation technology. Once validated in vivo, PSI technology may improve pelvic bone tumor surgery by providing clinically acceptable margins. PMID
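
    The location and flatness parameters reported above can be computed from a digitized point cloud roughly as follows. The points, target plane, and tolerancing details here are simplified assumptions for illustration, not the study's measurement protocol.

```python
import numpy as np

# Synthetic points digitized on a cut surface that sits ~1.5 mm off its
# target plane (taken here as z = 0); all values are invented.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 30.0, 60)                 # mm
y = rng.uniform(0.0, 30.0, 60)
z = 1.5 + rng.normal(0.0, 0.05, 60)

# Least-squares plane z = a*x + b*y + c through the measured points.
A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

# Flatness: total spread of the perpendicular residuals to the fitted plane.
resid = (z - (a * x + b * y + c)) / np.sqrt(a * a + b * b + 1.0)
flatness = resid.max() - resid.min()           # mm

# Location error: mean offset of the cut surface from the target plane.
location = abs(np.mean(z))                     # mm, ~1.5 here by construction
```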

  16. Flight Test Results: CTAS Cruise/Descent Trajectory Prediction Accuracy for En route ATC Advisories

    NASA Technical Reports Server (NTRS)

    Green, S.; Grace, M.; Williams, D.

    1999-01-01

    The Center/TRACON Automation System (CTAS), under development at NASA Ames Research Center, is designed to assist controllers with the management and control of air traffic transitioning to/from congested airspace. This paper focuses on the transition from the en route environment, to high-density terminal airspace, under a time-based arrival-metering constraint. Two flight tests were conducted at the Denver Air Route Traffic Control Center (ARTCC) to study trajectory-prediction accuracy, the key to accurate Decision Support Tool advisories such as conflict detection/resolution and fuel-efficient metering conformance. In collaboration with NASA Langley Research Center, these tests were part of an overall effort to research systems and procedures for the integration of CTAS and flight management systems (FMS). The Langley Transport Systems Research Vehicle Boeing 737 airplane flew a combined total of 58 cruise-arrival trajectory runs while following CTAS clearance advisories. Actual trajectories of the airplane were compared to CTAS and FMS predictions to measure trajectory-prediction accuracy and identify the primary sources of error for both. The research airplane was used to evaluate several levels of cockpit automation ranging from conventional avionics to a performance-based vertical navigation (VNAV) FMS. Trajectory prediction accuracy was analyzed with respect to both ARTCC radar tracking and GPS-based aircraft measurements. This paper presents detailed results describing the trajectory accuracy and error sources. Although differences were found in both accuracy and error sources, CTAS accuracy was comparable to the FMS in terms of both meter-fix arrival-time performance (in support of metering) and 4D-trajectory prediction (key to conflict prediction). Overall arrival time errors (mean plus standard deviation) were measured to be approximately 24 seconds during the first flight test (23 runs) and 15 seconds during the second flight test (25 runs). The major

  17. Balancing Accuracy and Cost of Confinement Simulations by Interpolation and Extrapolation of Confinement Energies.

    PubMed

    Villemot, François; Capelli, Riccardo; Colombo, Giorgio; van der Vaart, Arjan

    2016-06-14

    Improvements to the confinement method for the calculation of conformational free energy differences are presented. By taking advantage of phase space overlap between simulations at different frequencies, significant gains in accuracy and speed are reached. The optimal frequency spacing for the simulations is obtained from extrapolations of the confinement energy, and relaxation time analysis is used to determine time steps, simulation lengths, and friction coefficients. At postprocessing, interpolation of confinement energies is used to significantly reduce discretization errors in the calculation of conformational free energies. The efficiency of this protocol is illustrated by applications to alanine n-peptides and lactoferricin. For the alanine-n-peptide, errors were reduced between 2- and 10-fold and sampling times between 8- and 67-fold, while for lactoferricin the long sampling times at low frequencies were reduced 10-100-fold. PMID:27120438

  18. Numerical simulations of catastrophic disruption: Recent results

    NASA Technical Reports Server (NTRS)

    Benz, W.; Asphaug, E.; Ryan, E. V.

    1994-01-01

    Numerical simulations have been used to study high velocity two-body impacts. In this paper, a two-dimensional Lagrangian finite difference hydro-code and a three-dimensional smooth particle hydro-code (SPH) are described and initial results reported. These codes can be, and have been, used to make specific predictions about particular objects in our solar system. But more significantly, they allow us to explore a broad range of collisional events. Certain parameters (size, time) can be studied only over a very restricted range within the laboratory; other parameters (initial spin, low gravity, exotic structure or composition) are difficult to study at all experimentally. The outcomes of numerical simulations lead to a more general and accurate understanding of impacts in their many forms.

  19. Accuracy of the Frensley inflow boundary condition for Wigner equations in simulating resonant tunneling diodes

    SciTech Connect

    Jiang Haiyan; Cai Wei; Tsu, Raphael

    2011-03-01

    In this paper, the accuracy of the Frensley inflow boundary condition of the Wigner equation is analyzed in computing the I-V characteristics of a resonant tunneling diode (RTD). It is found that the Frensley inflow boundary condition for incoming electrons holds exactly only infinitely far from the active device region, and its accuracy depends on the length of contacts included in the simulation. For this study, the non-equilibrium Green's function (NEGF) with a Dirichlet to Neumann mapping boundary condition is used for comparison. The I-V characteristics of the RTD are found to agree between self-consistent NEGF and Wigner methods at low bias potentials with sufficiently large GaAs contact lengths. Finally, the relation between the negative differential conductance (NDC) of the RTD and the sizes of contact and buffer in the RTD is investigated using both methods.

  20. Measuring the accuracy and precision of quantitative coronary angiography using a digitally simulated test phantom

    NASA Astrophysics Data System (ADS)

    Morioka, Craig A.; Whiting, James S.; LeFree, Michelle T.

    1998-06-01

    Quantitative coronary angiography (QCA) diameter measurements have been used as an endpoint measurement in clinical studies involving therapies to reduce coronary atherosclerosis. The accuracy and precision of the QCA measure can affect the sample size and study conclusions of a clinical study. Measurements using x-ray test phantoms can underestimate the precision and accuracy of the actual arteries in clinical digital angiograms because they do not contain complex patient structures. Determining the clinical performance of QCA algorithms under clinical conditions is difficult because: (1) no gold standard test object exists in clinical images, (2) phantom images do not have any structured background noise. We propose the use of computer-simulated arteries as a replacement for traditional angiographic test phantoms to evaluate QCA algorithm performance.

  1. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    PubMed

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

    Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to the multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations on Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies on evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting position accuracy and shape accuracy, are first identified. Consequently, two quantitative indices, i.e., the GVE (group velocity error) and MACCC (maximum absolute value of cross correlation coefficient), derived from cross correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy on position and shape is quantitatively evaluated. In order to apply this proposed method to select appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. Then, the proper element size considering different element types and time step considering different time integration schemes are selected. These results proved that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
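
    The two indices can be illustrated with synthetic tone bursts: cross-correlating a "simulated" signal against a reference waveform, the lag of the correlation peak gives the arrival-time (and hence group-velocity) error, while the peak coefficient plays the role of the MACCC shape index. The signals and sampling settings below are invented, not taken from the paper's FEM program.

```python
import numpy as np

fs = 1.0e6                                   # sampling rate, Hz (assumed)
t = np.arange(0, 200e-6, 1 / fs)

def toneburst(t0, f0=100e3, cycles=5):
    """Gaussian-windowed tone burst centered at time t0 (synthetic signal)."""
    env = np.exp(-((t - t0) / (cycles / (2 * f0))) ** 2)
    return env * np.sin(2 * np.pi * f0 * (t - t0))

reference = toneburst(80e-6)
simulated = toneburst(83e-6)                 # "simulated" arrival 3 us late

# Normalized cross-correlation over all lags.
sim0 = simulated - simulated.mean()
ref0 = reference - reference.mean()
xc = np.correlate(sim0, ref0, "full")
xc /= np.sqrt(np.sum(sim0**2) * np.sum(ref0**2))

lag = (np.argmax(np.abs(xc)) - (len(t) - 1)) / fs   # arrival-time error, s
maccc = np.max(np.abs(xc))                          # shape agreement, <= 1
```

    Here the recovered lag is the 3 µs shift between the bursts (a position error), while `maccc` stays near 1 because the waveform shape is unchanged; shape distortion would lower it.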

  2. Parameter Estimation Using Multiple Matrix Sampling: Simulated versus Empirical Data Results.

    ERIC Educational Resources Information Center

    Gressard, Risa P.; Loyd, Brenda H.

    1991-01-01

    To determine the accuracy of simulated data sets, an investigation was conducted of the effects of item sampling plans in the application of multiple matrix sampling using both simulated and empirical data sets. Although results were similar, empirical data results were more precise. (SLD)

  3. Fast Plasma Instrument for MMS: Simulation Results

    NASA Technical Reports Server (NTRS)

    Figueroa-Vinas, Adolfo; Adrian, Mark L.; Lobell, James V.; Simpson, David G.; Barrie, Alex; Winkert, George E.; Yeh, Pen-Shu; Moore, Thomas E.

    2008-01-01

    Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. The Dual Electron Spectrometer (DES) of the Fast Plasma Instrument (FPI) for MMS meets these demanding requirements by acquiring the electron velocity distribution functions (VDFs) for the full sky with high-resolution angular measurements every 30 ms. This will provide unprecedented access to electron scale dynamics within the reconnection diffusion region. The DES consists of eight half-top-hat energy analyzers, each with a 6 deg. x 11.25 deg. field of view (FOV). Full-sky coverage is achieved by electrostatically stepping the FOV of each of the eight sensors through four discrete deflection look directions. Data compression and burst memory management will provide approximately 30 minutes of high time resolution data during each orbit of the four MMS spacecraft. Each spacecraft will intelligently downlink the data sequences that contain the greatest amount of temporal structure. Here we present the results of a simulation of the DES analyzer measurements, data compression and decompression, as well as ground-based analysis using re-processed Cluster/PEACE electron measurements as a seed. The Cluster/PEACE electron measurements have been reprocessed through virtual DES analyzers with their proper geometrical, energy, and timing scale factors and re-mapped via interpolation to the DES angular and energy phase-space sampling measurements. The results of the simulated DES measurements are analyzed and the full moments of the simulated VDFs are compared with those obtained from the Cluster/PEACE spectrometer using a standard quadrature moment, a newly implemented spectral spherical harmonic method, and a singular value decomposition method. Our preliminary moment calculations show a remarkable agreement within the uncertainties of the measurements, with the

  4. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characterization of ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24-h data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the hardware delay bias from the receiver, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are bigger, while the STDs (standard deviations) are better than 0.11 m. When the satellite difference is used, the hardware delay bias can be canceled. The interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect the application in single-frequency positioning, and the positioning accuracy can reach the cm level.
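
    The cancellation exploited here is simple to see in code: a receiver hardware delay common to all satellites drops out of a between-satellite difference. The delay values below are invented for illustration.

```python
import numpy as np

receiver_bias = 0.45                          # m, common to all satellites (invented)
iono_s1 = np.array([2.10, 2.15, 2.22])        # true slant delays, satellite 1, m
iono_s2 = np.array([3.05, 3.00, 2.96])        # true slant delays, satellite 2, m

# What the PPP float solution actually estimates: delay plus receiver bias.
est_s1 = iono_s1 + receiver_bias
est_s2 = iono_s2 + receiver_bias

# Between-satellite single difference: the common bias cancels exactly.
single_diff = est_s2 - est_s1                 # equals iono_s2 - iono_s1
```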

  5. Simulation results for the Viterbi decoding algorithm

    NASA Technical Reports Server (NTRS)

    Batson, B. H.; Moorehead, R. W.; Taqvi, S. Z. H.

    1972-01-01

    Concepts involved in determining the performance of coded digital communications systems are introduced. The basic concepts of convolutional encoding and decoding are summarized, and hardware implementations of sequential and maximum likelihood decoders are described briefly. Results of parametric studies of the Viterbi decoding algorithm are summarized. Bit error probability is chosen as the measure of performance and is calculated, by using digital computer simulations, for various encoder and decoder parameters. Results are presented for code rates of one-half and one-third, for constraint lengths of 4 to 8, for both hard-decision and soft-decision bit detectors, and for several important systematic and nonsystematic codes. The effect of decoder block length on bit error rate also is considered, so that a more complete estimate of the relationship between performance and decoder complexity can be made.
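
    As a minimal illustration of the decoding principle studied here (not the paper's hardware or parameter set): a hard-decision Viterbi decoder for the rate-1/2, constraint-length-3 code with generator polynomials (7, 5) octal, which finds the message whose trellis path has minimum Hamming distance to the received sequence.

```python
G = [0b111, 0b101]          # generator polynomials (7, 5) octal
K = 3                       # constraint length -> 2**(K-1) = 4 trellis states

def encode(bits):
    """Rate-1/2 convolutional encoder: two parity bits per input bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi: keep the minimum-Hamming-distance survivor
    path into each state at every step."""
    n_states = 1 << (K - 1)
    path_metric = [0] + [float("inf")] * (n_states - 1)
    paths = [[] for _ in range(n_states)]
    for i in range(n_bits):
        r = received[2 * i: 2 * i + 2]
        new_metric = [float("inf")] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                expected = [bin(reg & g).count("1") & 1 for g in G]
                dist = sum(x != y for x, y in zip(r, expected))
                ns = reg >> 1
                m = path_metric[s] + dist
                if m < new_metric[ns]:
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        path_metric, paths = new_metric, new_paths
    return paths[path_metric.index(min(path_metric))]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)
coded[3] ^= 1                                # inject one channel bit error
decoded = viterbi_decode(coded, len(msg))    # recovers msg despite the error
```

    The bit-error-probability studies in the abstract repeat this kind of decode over noisy channels for many code rates, constraint lengths, and soft/hard decision rules.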

  6. Accuracy and performance of three water quality models for simulating nitrate nitrogen losses under corn.

    PubMed

    Jabro, J D; Jabro, A D; Fox, R H

    2006-01-01

    Simulation models can be used to predict N dynamics in a soil-water-plant system. The simulation accuracy and performance of three models: LEACHM (Leaching Estimation And CHemistry Model), NCSWAP (Nitrogen and Carbon cycling in Soil, Water And Plant), and SOILN to predict NO3-N leaching were evaluated and compared to field data from a 5-yr experiment conducted on a Hagerstown silt loam (fine, mixed, mesic Typic Hapludalf). Nitrate N losses past 1.2 m from N-fertilized and manured corn (Zea mays L.) were measured with zero-tension pan lysimeters for 5 yr. The models were calibrated using 1989-1990 data and validated using 1988-1989, 1990-1991, 1991-1992, and 1992-1993 NO3-N leaching data. Statistical analyses indicated that LEACHM, NCSWAP, and SOILN models were able to provide accurate simulations of annual NO3-N leaching losses below the 1.2-m depth for 8, 9, and 7 of 10 cases, respectively, in the validation years. The inaccuracy in the models' annual simulations for the control and manure treatments seems to be related to inadequate description of processes of N and C transformations in the models' code. The overall performance and accuracy of the SOILN model were worse than those of LEACHM and NCSWAP. The root mean square error (RMSE) and modeling efficiency (ME) were 10.7 and 0.9, 9.5 and 0.93, and 20.7 and 0.63 for LEACHM, NCSWAP, and SOILN, respectively. Overall, the three models have the potential to predict NO3-N losses below 1.2-m depth from fertilizer and manure nitrogen applied to corn without recalibration of models from year to year. PMID:16825442
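
    The two summary statistics quoted above, RMSE and modeling efficiency (ME, the Nash-Sutcliffe efficiency, where 1 is a perfect model and values at or below 0 are no better than predicting the observed mean), have standard definitions that are easy to reproduce. The observed/simulated values below are invented, not the paper's lysimeter data.

```python
import numpy as np

# Hypothetical annual NO3-N leaching losses (kg N/ha): observed vs simulated.
observed = np.array([35.0, 52.0, 18.0, 77.0, 41.0])
simulated = np.array([30.0, 58.0, 22.0, 70.0, 45.0])

# Root mean square error: typical magnitude of the simulation error.
rmse = np.sqrt(np.mean((simulated - observed) ** 2))

# Modeling efficiency: 1 - (error variance / variance of the observations).
me = 1.0 - np.sum((simulated - observed) ** 2) / \
           np.sum((observed - observed.mean()) ** 2)
```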

  7. Effects of experimental protocol on global vegetation model accuracy: a comparison of simulated and observed vegetation patterns for Asia

    USGS Publications Warehouse

    Tang, Guoping; Shafer, Sarah L.; Barlein, Patrick J.; Holman, Justin O.

    2009-01-01

    Prognostic vegetation models have been widely used to study the interactions between environmental change and biological systems. This study examines the sensitivity of vegetation model simulations to: (i) the selection of input climatologies representing different time periods and their associated atmospheric CO2 concentrations, (ii) the choice of observed vegetation data for evaluating the model results, and (iii) the methods used to compare simulated and observed vegetation. We use vegetation simulated for Asia by the equilibrium vegetation model BIOME4 as a typical example of vegetation model output. BIOME4 was run using 19 different climatologies and their associated atmospheric CO2 concentrations. The Kappa statistic, Fuzzy Kappa statistic and a newly developed map-comparison method, the Nomad index, were used to quantify the agreement between the biomes simulated under each scenario and the observed vegetation from three different global land- and tree-cover data sets: the global Potential Natural Vegetation data set (PNV), the Global Land Cover Characteristics data set (GLCC), and the Global Land Cover Facility data set (GLCF). The results indicate that the 30-year mean climatology (and its associated atmospheric CO2 concentration) for the time period immediately preceding the collection date of the observed vegetation data produce the most accurate vegetation simulations when compared with all three observed vegetation data sets. The study also indicates that the BIOME4-simulated vegetation for Asia more closely matches the PNV data than the other two observed vegetation data sets. Given the same observed data, the accuracy assessments of the BIOME4 simulations made using the Kappa, Fuzzy Kappa and Nomad index map-comparison methods agree well when the compared vegetation types consist of a large number of spatially continuous grid cells. The results of this analysis can assist model users in designing experimental protocols for simulating vegetation.
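
    The plain Kappa statistic used for these map comparisons is straightforward to compute; the toy 4x4 grid with invented biome classes below is only a sketch of the method (the Fuzzy Kappa and Nomad index add neighborhood handling that is not shown).

```python
import numpy as np

# Two categorical maps of the same area: simulated vs observed biome classes.
simulated = np.array([[1, 1, 2, 2],
                      [1, 1, 2, 3],
                      [3, 3, 2, 3],
                      [3, 1, 2, 3]]).ravel()
observed = np.array([[1, 1, 2, 2],
                     [1, 2, 2, 3],
                     [3, 3, 3, 3],
                     [3, 1, 2, 3]]).ravel()

p_obs = np.mean(simulated == observed)        # cell-by-cell agreement
classes = np.union1d(simulated, observed)

# Chance agreement from the two maps' class proportions.
p_chance = sum(np.mean(simulated == c) * np.mean(observed == c)
               for c in classes)

# Kappa: agreement beyond chance, normalized to [.., 1].
kappa = (p_obs - p_chance) / (1.0 - p_chance)
```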

  8. Technical Highlight: NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools

    SciTech Connect

    Ridouane, E.H.

    2012-01-01

    This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes.

  9. NREL Evaluates Thermal Performance of Uninsulated Walls to Improve Accuracy of Building Energy Simulation Tools (Fact Sheet)

    SciTech Connect

    Not Available

    2012-03-01

    NREL researchers discover ways to increase accuracy in building energy simulation tools to improve predictions of potential energy savings in homes. Uninsulated walls are typical in older U.S. homes where the wall cavities were not insulated during construction or where the insulating material has settled. Researchers at the National Renewable Energy Laboratory (NREL) are investigating ways to more accurately calculate heat transfer through building enclosures to verify the benefit of energy efficiency upgrades that reduce energy use in older homes. In this study, scientists used computational fluid dynamics (CFD) analysis to calculate the energy loss/gain through building walls and visualize different heat transfer regimes within the uninsulated cavities. The effects of ambient outdoor temperature, the radiative properties of building materials, insulation levels, and the temperature dependence of conduction through framing members were considered. The research showed that the temperature dependence of conduction through framing members dominated the differences between this study and previous results - an effect not accounted for in existing building energy simulation tools. The study provides correlations for the resistance of the uninsulated assemblies that can be implemented into building simulation tools to increase the accuracy of energy use estimates in older homes, which are currently over-predicted.

  10. Use of an extracorporeal circulation perfusion simulator: evaluation of its accuracy and repeatability.

    PubMed

    Tokumine, Asako; Momose, Naoki; Tomizawa, Yasuko

    2013-12-01

    Medical simulators have mainly been used as educational tools. They have been used to train technicians and to educate potential users about safety. We combined software for hybrid-type extracorporeal circulation simulation (ECCSIM) with a CPB-Workshop console. We evaluated the performance of ECCSIM, including its accuracy and repeatability, during simulated ECC. We performed a detailed evaluation of the synchronization of the software with the console and the function of the built-in valves. An S-III heart–lung machine was used for the open circuit. It included a venous reservoir, an oxygenator (RX-25), and an arterial filter. The tubes for venous drainage and the arterial line were connected directly to the ports of the console. The ECCSIM recorded the liquid level of the reservoir continuously. The valve in the console controlled the pressure load of the arterial line. The software made any adjustments necessary to both arterial pressure load and the venous drainage flow volume. No external flowmeters were necessary during simulation. We found the CPB-Workshop to be convenient, reliable, and sufficiently exact. It can be used to validate procedures by monitoring the controls and responses by using a combination of qualitative measures. PMID:24022821

  11. Medical Simulation Practices 2010 Survey Results

    NASA Technical Reports Server (NTRS)

    McCrindle, Jeffrey J.

    2011-01-01

    Medical Simulation Centers are an essential component of our learning infrastructure to prepare doctors and nurses for their careers. Unlike the military and aerospace simulation industry, very little has been published regarding the best practices currently in use within medical simulation centers. This survey attempts to provide insight into the current simulation practices at medical schools, hospitals, university nursing programs and community college nursing programs. Students within the MBA program at Saint Joseph's University conducted a survey of medical simulation practices during the summer 2010 semester. A total of 115 institutions responded to the survey. The survey results discuss the overall effectiveness of current simulation centers as well as the tools and techniques used to conduct the simulation activity.

  12. Summarising and validating test accuracy results across multiple studies for use in clinical practice.

    PubMed

    Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J

    2015-06-15

    Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV. PMID:25800943
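    The prediction intervals proposed above can be approximated on the logit scale from standard random-effects meta-analysis outputs. A hedged sketch (function names are illustrative; the default critical value 1.96 is a normal approximation, whereas Higgins-style prediction intervals would use a t quantile with k-2 degrees of freedom):

    ```python
    import math

    def logit(p):
        return math.log(p / (1.0 - p))

    def inv_logit(x):
        return 1.0 / (1.0 + math.exp(-x))

    def prediction_interval(mu_logit, se_mu, tau, crit=1.96):
        """Approximate prediction interval for a test's sensitivity (or
        specificity) in a new population.

        mu_logit : pooled logit-sensitivity from the random-effects meta-analysis
        se_mu    : standard error of the pooled estimate
        tau      : between-study standard deviation (heterogeneity)
        crit     : critical value (normal approximation by default)
        """
        half = crit * math.sqrt(tau**2 + se_mu**2)
        return inv_logit(mu_logit - half), inv_logit(mu_logit + half)
    ```

    With no heterogeneity and no estimation error the interval collapses to the pooled estimate; heterogeneity (tau > 0) widens it, reflecting how much a new population may differ from the average.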

  13. Accuracy of relative positioning by interferometry with GPS Double-blind test results

    NASA Technical Reports Server (NTRS)

    Counselman, C. C., III; Gourevitch, S. A.; Herring, T. A.; King, B. W.; Shapiro, I. I.; Cappallo, R. J.; Rogers, A. E. E.; Whitney, A. R.; Greenspan, R. L.; Snyder, R. E.

    1983-01-01

    MITES (Miniature Interferometer Terminals for Earth Surveying) observations conducted on December 17 and 29, 1980, are analyzed. It is noted that the time span of the observations used on each day was 78 minutes, during which five satellites were always above 20 deg elevation. The observations are analyzed to determine the intersite position vectors by means of the algorithm described by Counselman and Gourevitch (1981). The average of the MITES results from the two days is presented. The rms differences between the two determinations of the components of the three vectors, which were about 65, 92, and 124 m long, were 8 mm for the north, 3 mm for the east, and 6 mm for the vertical. It is concluded that, at least for short distances, relative positioning by interferometry with GPS can be done reliably with subcentimeter accuracy.

  14. Interhemispheric Field-Aligned Currents: Simulation Results

    NASA Astrophysics Data System (ADS)

    Lyatsky, Sonya

    2016-04-01

    We present simulation results for the 3-D magnetosphere-ionosphere current system, including the Region 1, Region 2, and interhemispheric (IHC) field-aligned currents flowing between the Northern and Southern conjugate ionospheres when the ionospheric conductivities of the two hemispheres are asymmetric (as observed, for instance, between summer and winter). We also computed maps of ionospheric and equivalent ionospheric currents in the two hemispheres. The IHCs are an important part of the global 3-D current system in high-latitude ionospheres, and are especially significant during summer and winter months. In the winter ionosphere, they may be comparable to, and even exceed, both the Region 1 and Region 2 field-aligned currents. An important feature of these interhemispheric currents is that they link together processes in the two hemispheres, so that the currents observed in one hemisphere can provide information about the currents in the opposite hemisphere. Despite their significant role in the global 3-D current system, they have not yet been sufficiently studied. The main results of our research may be summarized as follows: (1) in the winter hemisphere, the IHCs may significantly exceed, and substitute for, the local Region 1 and Region 2 currents; (2) the IHCs may strongly affect the magnitude, location, and direction of the ionospheric and equivalent ionospheric currents, especially in the nightside winter auroral ionosphere; and (3) the IHCs in the winter hemisphere may in fact be an important (and sometimes even major) source of the Westward Auroral Electrojet observed in both hemispheres during substorm activity. The study of the contribution of the IHCs to the total global 3-D current system allows us to improve the understanding and forecasting of geomagnetic, auroral, and ionospheric disturbances in the two hemispheres.

  15. Assessing the accuracy of improved force-matched water models derived from Ab initio molecular dynamics simulations.

    PubMed

    Köster, Andreas; Spura, Thomas; Rutkai, Gábor; Kessler, Jan; Wiebeler, Hendrik; Vrabec, Jadran; Kühne, Thomas D

    2016-07-15

    The accuracy of water models derived from ab initio molecular dynamics simulations by means of an improved force-matching scheme is assessed for various thermodynamic, transport, and structural properties. It is found that although the resulting force-matched water models are typically less accurate than fully empirical force fields in predicting thermodynamic properties, they are much more accurate than generally appreciated in reproducing the structure of liquid water, and in fact surpass most of the commonly used empirical water models. This development demonstrates the feasibility of routinely parametrizing computationally efficient yet predictive potential energy functions based on accurate ab initio molecular dynamics simulations for a large variety of different systems. © 2016 Wiley Periodicals, Inc. PMID:27232117
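    Force matching, in its simplest linear form, is an ordinary least-squares fit of force-field parameters to reference ab initio forces. A toy sketch (not the improved scheme of this paper; the function name and array shapes are illustrative):

    ```python
    import numpy as np

    def force_match(basis_forces, ref_forces):
        """Linear least-squares force matching: find parameters theta minimising
        || sum_k theta_k * basis_forces[k] - ref_forces ||^2.

        basis_forces : (n_params, n_atoms, 3) forces from each candidate
                       interaction term evaluated on a configuration
        ref_forces   : (n_atoms, 3) reference ab initio forces
        """
        # Stack each basis term's forces as one column of the design matrix.
        X = np.asarray(basis_forces, dtype=float).reshape(len(basis_forces), -1).T
        y = np.asarray(ref_forces, dtype=float).ravel()
        theta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return theta
    ```

    In practice the fit runs over many configurations sampled from the ab initio trajectory, and the basis terms are nonlinear in some parameters, which the improved scheme addresses.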

  16. Geodetic and geophysical results from a Taiwan airborne gravity survey: Data reduction and accuracy assessment

    NASA Astrophysics Data System (ADS)

    Hwang, Cheinway; Hsiao, Yu-Shen; Shih, Hsuan-Chang; Yang, Ming; Chen, Kwo-Hwa; Forsberg, Rene; Olesen, Arne V.

    2007-04-01

    An airborne gravity survey was conducted over Taiwan using a LaCoste and Romberg (LCR) System II air-sea gravimeter with gravity and global positioning system (GPS) data sampled at 1 Hz. The aircraft trajectories were determined using a GPS network kinematic adjustment relative to eight GPS tracking stations. Long-wavelength errors in position are reduced when numerically differentiating for velocity and acceleration. A procedure for computing the resolvable wavelength of error-free airborne gravimetry is derived. The accuracy requirements on position, velocity, and acceleration for a 1-mgal accuracy in gravity anomaly are derived. GPS fulfills these requirements except for vertical acceleration. An iterative Gaussian filter is used to reduce errors in vertical acceleration; a filter width of 150 s is a good compromise between noise reduction and gravity detail. The airborne gravity anomalies are compared with surface values, and large differences are found over high mountains where the gravity field is rough and surface data density is low. The root mean square (RMS) crossover differences before and after a bias-only adjustment are 4.92 and 2.88 mgal, the latter corresponding to a 2-mgal standard error in gravity anomaly. Repeatability analyses at two survey lines suggest that GPS is the dominating factor affecting the repeatability. Fourier transform and least-squares collocation are used for downward continuation, and the latter produces a better result. Two geoid models are computed, one using airborne and surface gravity data and the other using surface data only, and the former yields a better agreement with the GPS-derived geoidal heights. Bouguer anomalies derived from airborne gravity by a rigorous numerical integration reveal important tectonic features.
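    The processing chain described here, numerical differentiation of 1 Hz GPS heights followed by Gaussian low-pass filtering, can be sketched as follows (a simplified illustration, not the authors' code; the iterative aspect of their filter is omitted and function names are assumptions):

    ```python
    import numpy as np

    def vertical_acceleration(height, dt):
        """Second central difference of GPS heights -> vertical acceleration."""
        h = np.asarray(height, dtype=float)
        return (h[2:] - 2.0 * h[1:-1] + h[:-2]) / dt**2

    def gaussian_filter(x, sigma):
        """Simple Gaussian low-pass; sigma in samples (a 150 s width at 1 Hz
        corresponds to a wide kernel in this parameterisation)."""
        half = int(3 * sigma)
        t = np.arange(-half, half + 1)
        w = np.exp(-0.5 * (t / sigma) ** 2)
        w /= w.sum()
        return np.convolve(x, w, mode="same")
    ```

    The smoothed vertical acceleration is then subtracted from the gravimeter signal so that only the gravity anomaly remains.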

  17. Improving diagnostic accuracy using EHR in emergency departments: A simulation-based study.

    PubMed

    Ben-Assuli, Ofir; Sagi, Doron; Leshno, Moshe; Ironi, Avinoah; Ziv, Amitai

    2015-06-01

    It is widely believed that Electronic Health Records (EHR) improve medical decision-making by enabling medical staff to access medical information stored in the system. It remains unclear, however, whether EHR indeed fulfills this claim under the severe time constraints of Emergency Departments (EDs). We assessed whether accessing EHR in an ED actually improves decision-making by clinicians. A simulated ED environment was created at the Israel Center for Medical Simulation (MSR). Four different actors were trained to simulate four specific complaints and behaviors, and 'consulted' 26 volunteer ED physicians. Each physician treated half of the cases (randomly) with access to EHR, and their medical decisions were compared to those made when the physicians had no access to EHR. Comparison of diagnostic accuracy with and without access showed that accessing the EHR led to an increase in the quality of the clinical decisions. Physicians accessing EHR were more highly informed and thus made more accurate decisions. The percentage of correct diagnoses was higher and these physicians were more confident in their diagnoses and made their decisions faster. PMID:25817921

  18. High accuracy simulations of black hole binaries: Spins anti-aligned with the orbital angular momentum

    NASA Astrophysics Data System (ADS)

    Chu, Tony; Pfeiffer, Harald P.; Scheel, Mark A.

    2009-12-01

    High-accuracy binary black hole simulations are presented for black holes with spins anti-aligned with the orbital angular momentum. The particular case studied represents an equal-mass binary with spins of equal magnitude S/m^2 = 0.43757±0.00001. The system has initial orbital eccentricity ~4×10^-5, and is evolved through 10.6 orbits plus merger and ringdown. The remnant mass and spin are Mf = (0.961109±0.000003)M and Sf/Mf^2 = 0.54781±0.00001, respectively, where M is the mass during early inspiral. The gravitational waveforms have accumulated numerical phase errors of ≲0.1 radians without any time or phase shifts, and ≲0.01 radians when the waveforms are aligned with suitable time and phase shifts. The waveform is extrapolated to infinity using a procedure accurate to ≲0.01 radians in phase, and the extrapolated waveform differs by up to 0.13 radians in phase and about 1% in amplitude from the waveform extracted at finite radius r=350M. The simulations employ different choices for the constraint damping parameters in the wave zone; this greatly reduces the effects of junk radiation, allowing the extraction of a clean gravitational wave signal even very early in the simulation.
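    The constant phase shift used when aligning two waveforms can be chosen in closed form by maximizing the overlap of the complex strains. A small sketch (illustrative only; the paper's alignment also optimizes over a time shift, which is omitted here):

    ```python
    import numpy as np

    def align_phase(h_ref, h_cmp):
        """Best constant phase rotation aligning two complex strains, and the
        residual pointwise phase difference after applying it."""
        # The overlap's argument gives the optimal constant rotation.
        phi = np.angle(np.vdot(h_ref, h_cmp))
        residual = np.angle(h_cmp * np.exp(-1j * phi) * np.conj(h_ref))
        return phi, residual
    ```

    The maximum of the residual array plays the role of the "phase error after alignment" quoted in the abstract (there, also minimized over a time shift).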

  19. On the accuracy of the state space restriction approximation for spin dynamics simulations

    NASA Astrophysics Data System (ADS)

    Karabanov, Alexander; Kuprov, Ilya; Charnock, G. T. P.; van der Drift, Anniek; Edwards, Luke J.; Köckenberger, Walter

    2011-08-01

    We present an algebraic foundation for the state space restriction approximation in spin dynamics simulations and derive applicability criteria as well as minimal basis set requirements for practically encountered simulation tasks. The results are illustrated with nuclear magnetic resonance (NMR), electron spin resonance (ESR), dynamic nuclear polarization (DNP), and spin chemistry simulations. It is demonstrated that state space restriction yields accurate results in systems where the time scale of spin relaxation processes approximately matches the time scale of the experiment. Rigorous error bounds and basis set requirements are derived.

  20. Technical Note: Maximising accuracy and minimising cost of a potentiometrically regulated ocean acidification simulation system

    NASA Astrophysics Data System (ADS)

    MacLeod, C. D.; Doyle, H. L.; Currie, K. I.

    2014-05-01

    This article describes a potentiometric ocean acidification simulation system which automatically regulates pH through the injection of 100% CO2 gas into temperature-controlled seawater. The system is ideally suited to long-term experimental studies of the effect of acidification on biological processes involving small-bodied (10-20 mm) calcifying or non-calcifying organisms. Using hobbyist grade equipment, the system was constructed for approximately USD 1200 per treatment unit (tank, pH regulation apparatus, chiller, pump/filter unit). An overall accuracy of ±0.05 pHT units (SD) was achieved over 90 days in two acidified treatments (7.60 and 7.40) at 12 °C using glass electrodes calibrated with salt water buffers, thereby preventing liquid junction error. The accuracy of the system was validated through the independent calculation of pHT (12 °C) using dissolved inorganic carbon (DIC) and total alkalinity (AT) data taken from discrete acidified seawater samples. The system was used to compare the shell growth of the marine gastropod Zeacumantus subcarinatus infected with the trematode parasite Maritrema novaezealandensis with that of uninfected snails, at pH levels of 7.4, 7.6, and 8.1.
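    The potentiometric regulation described above amounts to a bang-bang control loop on the glass-electrode reading: CO2 injection lowers pH, so the solenoid opens when the tank drifts above the setpoint. A minimal sketch (the deadband value and function name are illustrative assumptions, not the authors' settings):

    ```python
    def co2_valve(ph_measured, ph_setpoint, valve_open, deadband=0.01):
        """Bang-bang pH-stat logic: open the CO2 solenoid when measured pH
        drifts above the setpoint, close it once pH falls below the deadband,
        and hold the previous valve state inside the deadband (hysteresis)."""
        if ph_measured > ph_setpoint + deadband:
            return True    # inject CO2 to bring pH back down
        if ph_measured < ph_setpoint - deadband:
            return False   # stop injecting; pH will drift back up
        return valve_open  # within deadband: keep current state
    ```

    The hysteresis band prevents the valve from chattering when the reading sits near the setpoint, which matters for electrode longevity on hobbyist-grade hardware.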

  1. SALTSTONE MATRIX CHARACTERIZATION AND STADIUM SIMULATION RESULTS

    SciTech Connect

    Langton, C.

    2009-07-30

    SIMCO Technologies, Inc. was contracted to evaluate the durability of the saltstone matrix material and to measure saltstone transport properties. This information will be used to: (1) parameterize the STADIUM® service life code, (2) predict the leach rate (degradation rate) for the saltstone matrix over 10,000 years using the STADIUM® concrete service life code, and (3) validate the modeled results by conducting leaching (water immersion) tests. Saltstone durability for this evaluation is limited to changes in the matrix itself and does not include changes in the chemical speciation of the contaminants in the saltstone. This report summarizes results obtained to date, which include characterization data for saltstone cured up to 365 days and for saltstone cured for 137 days and then immersed in water for 31 days. Chemicals for preparing simulated non-radioactive salt solution were obtained from chemical suppliers. The saltstone slurry was mixed according to directions provided by SRNL; however, SIMCO Technologies Inc. personnel made a mistake in the premix proportions. Instead of the reference mix proportions of 45 wt% slag, 45 wt% fly ash, and 10 wt% cement, they used 21 wt% slag, 65 wt% fly ash, and 14 wt% cement. The mistake was acknowledged, and new mixes have been prepared and are curing. The results presented in this report are expected to be conservative, since the samples were deficient in slag (which is very reactive in the caustic salt solution) and contained excess fly ash. The hydraulic reactivity of slag is about four times that of fly ash, so the amount of hydrated binder formed per unit volume in the SIMCO saltstone samples is

  2. Use of Electronic Health Record Simulation to Understand the Accuracy of Intern Progress Notes.

    PubMed

    March, Christopher A; Scholl, Gretchen; Dversdal, Renee K; Richards, Matthew; Wilson, Leah M; Mohan, Vishnu; Gold, Jeffrey A

    2016-05-01

    Background: With the widespread adoption of electronic health records (EHRs), there is a growing awareness of problems in EHR training for new users and subsequent problems with the quality of information present in EHR-generated progress notes. By standardizing the case, simulation allows for the discovery of EHR patterns of use as well as a modality to aid in EHR training. Objective: To develop a high-fidelity EHR training exercise for internal medicine interns to understand patterns of EHR utilization in the generation of daily progress notes. Methods: Three months after beginning their internship, 32 interns participated in an EHR simulation designed to assess patterns in note writing and generation. Each intern was given a simulated chart and instructed to create a daily progress note. Notes were graded for use of copy-paste, macros, and accuracy of presented data. Results: A total of 31 out of 32 interns (97%) completed the exercise. There was wide variance in use of macros to populate data, with multiple macro types used for the same data category. Three-quarters of notes contained either copy-paste elements or the elimination of active medical problems from the prior days' notes. This was associated with a significant number of quality issues, including failure to recognize a lack of deep vein thrombosis prophylaxis, medications stopped on admission, and issues in the prior discharge summary. Conclusions: Interns displayed wide variation in the process of creating progress notes. Additional studies are being conducted to determine the impact EHR-based simulation has on standardization of note content. PMID:27168894

  3. Accuracy of genomic selection in barley breeding programs: a simulation study based on the real SNP data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The aim of this study was to compare the accuracy of genomic selection (i.e., selection based on genome-wide markers) to phenotypic selection through simulations based on real barley SNP data (1325 SNPs x 863 breeding lines). We simulated 100 QTL at randomly selected SNPs, which were dropped from t...
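    The design sketched in this abstract, QTL effects assigned to random SNPs and genomic prediction scored against the true breeding values, can be reproduced in miniature. A toy version (all dimensions, the ridge penalty, and the GBLUP-like estimator are illustrative assumptions, far smaller than the 1325 SNP x 863 line data set used in the study):

    ```python
    import numpy as np

    def gblup_accuracy(n_lines=300, n_snps=200, n_qtl=20, h2=0.5, seed=0):
        """Toy genomic-selection simulation: accuracy = cor(GEBV, true BV)
        in a held-out half of the lines."""
        rng = np.random.default_rng(seed)
        X = rng.integers(0, 3, size=(n_lines, n_snps)).astype(float)  # 0/1/2 genotypes
        b = np.zeros(n_snps)
        qtl = rng.choice(n_snps, size=n_qtl, replace=False)
        b[qtl] = rng.normal(size=n_qtl)          # QTL effects at random SNPs
        g = X @ b                                # true breeding values
        env_sd = np.sqrt(np.var(g) * (1 - h2) / h2)
        y = g + rng.normal(scale=env_sd, size=n_lines)
        half = n_lines // 2
        Xm = X[:half].mean(axis=0)
        Xt = X[:half] - Xm
        lam = n_snps * (1 - h2) / h2             # illustrative ridge penalty
        bhat = np.linalg.solve(Xt.T @ Xt + lam * np.eye(n_snps),
                               Xt.T @ (y[:half] - y[:half].mean()))
        gebv = (X[half:] - Xm) @ bhat            # genomic estimated breeding values
        return float(np.corrcoef(gebv, g[half:])[0, 1])
    ```

    Comparing this correlation with the accuracy of selecting on phenotype alone (cor(y, g) in the test set) mirrors the comparison the study makes at full scale.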

  4. Evaluating the Accuracy of Results for Teacher Implemented Trial-Based Functional Analyses.

    PubMed

    Rispoli, Mandy; Ninci, Jennifer; Burke, Mack D; Zaini, Samar; Hatton, Heather; Sanchez, Lisa

    2015-09-01

    Trial-based functional analysis (TBFA) allows for the systematic and experimental assessment of challenging behavior in applied settings. The purposes of this study were to evaluate a professional development package focused on training three Head Start teachers to conduct TBFAs with fidelity during ongoing classroom routines. To assess the accuracy of the TBFA results, the effects of a function-based intervention derived from the TBFA were compared with the effects of a non-function-based intervention. Data were collected on child challenging behavior and appropriate communication. An A-B-A-C-D design was utilized in which A represented baseline, and B and C consisted of either function-based or non-function-based interventions counterbalanced across participants, and D represented teacher implementation of the most effective intervention. Results showed that the function-based intervention produced greater decreases in challenging behavior and greater increases in appropriate communication than the non-function-based intervention for all three children. PMID:26069219

  5. Diagnostic Accuracy of Procalcitonin for Predicting Blood Culture Results in Patients With Suspected Bloodstream Infection

    PubMed Central

    Oussalah, Abderrahim; Ferrand, Janina; Filhine-Tresarrieu, Pierre; Aissa, Nejla; Aimone-Gastin, Isabelle; Namour, Fares; Garcia, Matthieu; Lozniewski, Alain; Guéant, Jean-Louis

    2015-01-01

    Abstract Previous studies have suggested that procalcitonin is a reliable marker for predicting bacteremia. However, these studies have had relatively small sample sizes or focused on a single clinical entity. The primary endpoint of this study was to investigate the diagnostic accuracy of procalcitonin for predicting or excluding clinically relevant pathogen categories in patients with suspected bloodstream infections. The secondary endpoint was to look for organisms significantly associated with internationally validated procalcitonin intervals. We performed a cross-sectional study that included 35,343 consecutive patients who underwent concomitant procalcitonin assays and blood cultures for suspected bloodstream infections. Biochemical and microbiological data were systematically collected in an electronic database and extracted for purposes of this study. Depending on blood culture results, patients were classified into 1 of the 5 following groups: negative blood culture, Gram-positive bacteremia, Gram-negative bacteremia, fungi, and potential contaminants found in blood cultures (PCBCs). The highest procalcitonin concentration was observed in patients with blood cultures growing Gram-negative bacteria (median 2.2 ng/mL [IQR 0.6–12.2]), and the lowest procalcitonin concentration was observed in patients with negative blood cultures (median 0.3 ng/mL [IQR 0.1–1.1]). With optimal thresholds ranging from ≤0.4 to ≤0.75 ng/mL, procalcitonin had a high diagnostic accuracy for excluding all pathogen categories with the following negative predictive values: Gram-negative bacteria (98.9%) (including enterobacteria [99.2%], nonfermenting Gram-negative bacilli [99.7%], and anaerobic bacteria [99.9%]), Gram-positive bacteria (98.4%), and fungi (99.6%). A procalcitonin concentration ≥10 ng/mL was associated with a high risk of Gram-negative (odds ratio 5.98; 95% CI, 5.20–6.88) or Gram-positive (odds ratio 3.64; 95% CI, 3.11–4.26) bacteremia but

  6. Mapping soil texture classes and optimization of the result by accuracy assessment

    NASA Astrophysics Data System (ADS)

    Laborczi, Annamária; Takács, Katalin; Bakacsi, Zsófia; Szabó, József; Pásztor, László

    2014-05-01

    There are increasing demands nowadays on spatial soil information in order to support environmental and land use management decisions. The GlobalSoilMap.net (GSM) project aims to make a new digital soil map of the world using state-of-the-art and emerging technologies for soil mapping and predicting soil properties at fine resolution. Sand, silt and clay are among the mandatory GSM soil properties. Furthermore, soil texture class information is input data to significant agro-meteorological and hydrological models. Our present work aims to compare and evaluate different digital soil mapping methods and variables for producing the most accurate spatial prediction of texture classes in Hungary. In addition to the Hungarian Soil Information and Monitoring System as our basic data, a digital elevation model and its derived components, a geological database, and physical property maps of the Digital Kreybig Soil Information System have been applied as auxiliary elements. Two approaches have been applied for the mapping process. First, the sand, silt and clay rasters were computed independently using regression kriging (RK); from these rasters, according to the USDA categories, we compiled the texture class map. Different combinations of reference and training soil data and auxiliary covariables resulted in several different maps. However, these results necessarily compound the uncertainty of the three kriged rasters. Therefore we applied data mining methods as the second approach to digital soil mapping: classification trees and random forests yield the texture class maps directly, so the various results can be compared to the RK maps. The performance of the different methods and data has been examined by testing the accuracy of the geostatistically computed and the directly classified results. We have used the GSM methodology to assess the most predictive and accurate way for getting the best among the
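    The final step of the first approach, assigning USDA texture classes from sand/silt/clay fractions, reduces to a lookup in the texture triangle. A deliberately simplified sketch covering only a few of the 12 USDA classes (the thresholds are illustrative placeholders, not the full triangle boundaries):

    ```python
    def usda_texture_simplified(sand, silt, clay):
        """Very simplified USDA-style texture classification from sand, silt
        and clay percentages (must sum to ~100). Only a few of the 12 USDA
        classes are covered; a real implementation encodes every polygon of
        the texture triangle."""
        assert abs(sand + silt + clay - 100.0) < 1e-6
        if clay >= 40:
            return "clay"
        if sand >= 85:
            return "sand"
        if silt >= 80:
            return "silt"
        return "loam"
    ```

    Applying such a rule cell-by-cell to the three kriged rasters is what propagates their individual uncertainties into the final class map, which motivates the direct-classification alternative.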

  7. Accuracy of the unified approach in maternally influenced traits - illustrated by a simulation study in the honey bee (Apis mellifera)

    PubMed Central

    2013-01-01

    Background The honey bee is an economically important species. With a rapid decline of the honey bee population, it is necessary to implement an improved genetic evaluation methodology. In this study, we investigated the applicability of the unified approach and its impact on the accuracy of estimation of breeding values for maternally influenced traits on a simulated dataset for the honey bee. Due to the limitation to the number of individuals that can be genotyped in a honey bee population, the unified approach can be an efficient strategy to increase the genetic gain and to provide a more accurate estimation of breeding values. We calculated the accuracy of estimated breeding values for two evaluation approaches, the unified approach and the traditional pedigree based approach. We analyzed the effects of different heritabilities as well as genetic correlation between direct and maternal effects on the accuracy of estimation of direct, maternal and overall breeding values (sum of maternal and direct breeding values). The genetic and reproductive biology of the honey bee was accounted for by taking into consideration characteristics such as colony structure, uncertain paternity, overlapping generations and polyandry. In addition, we used a modified numerator relationship matrix and a realistic genome for the honey bee. Results For all values of heritability and correlation, the accuracy of overall estimated breeding values increased significantly with the unified approach. The increase in accuracy was always higher for the case when there was no correlation as compared to the case where a negative correlation existed between maternal and direct effects. Conclusions Our study shows that the unified approach is a useful methodology for genetic evaluation in honey bees, and can contribute immensely to the improvement of traits of apicultural interest such as resistance to Varroa or production and behavioural traits. 
In particular, the study is of great interest for

  8. Results of a new polarization simulation

    NASA Astrophysics Data System (ADS)

    Fetrow, Matthew P.; Wellems, David; Sposato, Stephanie H.; Bishop, Kenneth P.; Caudill, Thomas R.; Davis, Michael L.; Simrell, Elizabeth R.

    2002-01-01

    Including polarization signatures of material samples in passive sensing may enhance target detection capabilities. To obtain more information on this potential improvement, a simulation is being developed to aid in interpreting IR polarization measurements in a complex environment. The simulation accounts for the background, or incident illumination, and the scattering and emission from the target into the sensor. MODTRAN, in combination with a dipole approximation to singly scattered radiance, is used to polarimetrically model the background, or sky conditions. The scattering and emission from rough surfaces are calculated using an energy-conserving polarimetric Torrance and Sparrow BRDF model. The simulation can be used to examine the surface properties of materials in a laboratory environment, to investigate IR polarization signatures in the field, or a complex environment, and to predict trends in LWIR polarization data. In this paper we discuss the simulation architecture; the process for determining the index of refraction and surface roughness as a function of wavelength, which involves making polarization measurements of flat glass plates at various angles and temperatures in the laboratory at Kirtland Air Force Base; and the comparison of the simulation with field data taken at Eglin Air Force Base. The latter process entails using the extrapolated index of refraction and surface roughness, and a polarimetric incident sky dome generated by MODTRAN. We also present some parametric studies in which the sky condition, the sky temperature and the sensor declination angle were all varied.

  9. A computer simulation study comparing lesion detection accuracy with digital mammography, breast tomosynthesis, and cone-beam CT breast imaging

    SciTech Connect

    Gong Xing; Glick, Stephen J.; Liu, Bob; Vedula, Aruna A.; Thacker, Samta

    2006-04-15

    Although conventional mammography is currently the best modality to detect early breast cancer, it is limited in that the recorded image represents the superposition of a three-dimensional (3D) object onto a 2D plane. Recently, two promising approaches for 3D volumetric breast imaging have been proposed, breast tomosynthesis (BT) and CT breast imaging (CTBI). To investigate possible improvements in lesion detection accuracy with either breast tomosynthesis or CT breast imaging as compared to digital mammography (DM), a computer simulation study was conducted using simulated lesions embedded into a structured 3D breast model. The computer simulation realistically modeled x-ray transport through a breast model, as well as the signal and noise propagation through a CsI-based flat-panel imager. Polyenergetic x-ray spectra of Mo/Mo 28 kVp for digital mammography, Mo/Rh 28 kVp for BT, and W/Ce 50 kVp for CTBI were modeled. For the CTBI simulation, the intensity of the x-ray spectra for each projection view was determined so as to provide a total average glandular dose of 4 mGy, which is approximately equivalent to that given in conventional two-view screening mammography. The same total dose was modeled for both the DM and BT simulations. Irregular lesions were simulated by using a stochastic growth algorithm providing lesions with an effective diameter of 5 mm. Breast tissue was simulated by generating an ensemble of backgrounds with a power law spectrum, with the composition of 50% fibroglandular and 50% adipose tissue. To evaluate lesion detection accuracy, a receiver operating characteristic (ROC) study was performed with five observers reading an ensemble of images for each case. The average area under the ROC curves (A_z) was 0.76 for DM, 0.93 for BT, and 0.94 for CTBI. Results indicated that for the same dose, a 5 mm lesion embedded in a structured breast phantom was detected by the two volumetric breast imaging systems, BT and CTBI, with statistically…
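    The A_z values above come from fitted ROC curves; a simple nonparametric stand-in for A_z is the Mann-Whitney statistic, i.e. the probability that a lesion-present image receives a higher observer score than a lesion-absent one. A sketch with invented observer scores:

    ```python
    def auc(signal_scores, noise_scores):
        """Nonparametric area under the ROC curve: fraction of
        (signal, noise) pairs in which the signal score wins,
        counting ties as one half (Mann-Whitney statistic)."""
        wins = 0.0
        for s in signal_scores:
            for n in noise_scores:
                if s > n:
                    wins += 1.0
                elif s == n:
                    wins += 0.5
        return wins / (len(signal_scores) * len(noise_scores))

    # Hypothetical confidence ratings for lesion-present / lesion-absent images
    present = [0.9, 0.8, 0.7, 0.6]
    absent = [0.5, 0.65, 0.3, 0.2]
    az = auc(present, absent)
    ```
    
    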

  10. Balancing accuracy, robustness, and efficiency in simulations of coupled magma/mantle dynamics

    NASA Astrophysics Data System (ADS)

    Katz, R. F.

    2011-12-01

    Magmatism plays a central role in many Earth-science problems, and is particularly important for the chemical evolution of the mantle. The standard theory for coupled magma/mantle dynamics is fundamentally multi-physical, comprising mass and force balance for two phases, plus conservation of energy and composition in a two-component (minimum) thermochemical system. The tight coupling of these various aspects of the physics makes obtaining numerical solutions a significant challenge. Previous authors have advanced by making drastic simplifications, but these have limited applicability. Here I discuss progress, enabled by advanced numerical software libraries, in obtaining numerical solutions to the full system of governing equations. The goals in developing the code are as usual: accuracy of solutions, robustness of the simulation to non-linearities, and efficiency of code execution. I use the cutting-edge example of magma genesis and migration in a heterogeneous mantle to elucidate these issues. I describe the approximations employed and their consequences, as a means to frame the question of where and how to make improvements. I conclude that the capabilities needed to advance multi-physics simulation are, in part, distinct from those of problems with weaker coupling, or fewer coupled equations. Chief among these distinct requirements is the need to dynamically adjust the solution algorithm to maintain robustness in the face of coupled nonlinearities that would otherwise inhibit convergence. This may mean introducing Picard iteration rather than full coupling, switching between semi-implicit and explicit time-stepping, or adaptively increasing the strength of preconditioners. All of these can be accomplished by the user with, for example, PETSc. Formalising this adaptivity should be a goal for future development of software packages that seek to enable multi-physics simulation.
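    One robustness strategy named above, falling back from full coupling to Picard iteration, can be illustrated on a single implicit time step; the scalar equation and step size here are only a toy stand-in for the coupled magma/mantle system:

    ```python
    def picard_step(u_n, f, dt, tol=1e-12, max_iter=100):
        """One backward-Euler step solved by Picard (fixed-point) iteration:
        v <- u_n + dt * f(v), repeated until the update stalls. Each sweep is
        cheaper and more forgiving of strong nonlinearity than a full Newton
        solve, at the price of slower (linear) convergence."""
        v = u_n
        for _ in range(max_iter):
            v_new = u_n + dt * f(v)
            if abs(v_new - v) < tol:
                return v_new
            v = v_new
        raise RuntimeError("Picard iteration did not converge")

    # Logistic growth u' = u(1 - u): one implicit step from u = 0.5
    u1 = picard_step(0.5, lambda u: u * (1.0 - u), dt=0.1)
    ```

    In a PETSc setting the same switch is available through the SNES solver options rather than hand-written loops; the point of the sketch is only the structure of the iteration.
    
    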

  11. NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools (Fact Sheet)

    SciTech Connect

    Not Available

    2012-01-01

    This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes. Researchers at the National Renewable Energy Laboratory (NREL) have developed models for evaluating the thermal performance of walls in existing homes that will improve the accuracy of building energy simulation tools when predicting potential energy savings of existing homes. Uninsulated walls are typical in older homes where the wall cavities were not insulated during construction or where the insulating material has settled. Accurate calculation of heat transfer through building enclosures will help determine the benefit of energy efficiency upgrades in order to reduce energy consumption in older American homes. NREL performed detailed computational fluid dynamics (CFD) analysis to quantify the energy loss/gain through the walls and to visualize different airflow regimes within the uninsulated cavities. The effects of ambient outdoor temperature, radiative properties of building materials, and insulation level were investigated. The study showed that multi-dimensional airflows occur in walls with uninsulated cavities and that the thermal resistance is a function of the outdoor temperature - an effect not accounted for in existing building energy simulation tools. The study quantified the difference between CFD prediction and the approach currently used in building energy simulation tools over a wide range of conditions. For example, researchers found that CFD predicted lower heating loads and slightly higher cooling loads. Implementation of CFD results into building energy simulation tools such as DOE2 and EnergyPlus will likely reduce the predicted heating load of homes. Researchers also determined that a small air gap in a partially insulated cavity can lead to a significant reduction in thermal resistance. For instance, a 4-in. tall air gap…

  12. Accuracy Rates of Sex Estimation by Forensic Anthropologists through Comparison with DNA Typing Results in Forensic Casework.

    PubMed

    Thomas, Richard M; Parks, Connie L; Richard, Adam H

    2016-09-01

    A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex, and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7%, with accuracy increasing as more skeletal material is available for analysis and as the education level and certification of the examiner increase. Nine of 19 incorrect assessments resulted from cases in which only one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate in such cases. PMID:27352918
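    For an accuracy rate such as the 94.7% reported here (341 of 360 correct), a confidence interval indicates how much the estimate might move with further casework. A sketch using the Wilson score interval, which behaves better than the normal approximation when a proportion is near 0 or 1:

    ```python
    import math

    def wilson_interval(correct, total, z=1.96):
        """Point estimate and 95% Wilson score interval for a proportion,
        e.g. a sex-estimation accuracy rate."""
        p = correct / total
        denom = 1.0 + z * z / total
        centre = (p + z * z / (2 * total)) / denom
        half = (z / denom) * math.sqrt(
            p * (1 - p) / total + z * z / (4 * total * total))
        return p, centre - half, centre + half

    # 341 of 360 correct assessments, as in the casework study
    p, lo, hi = wilson_interval(341, 360)
    ```
    
    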

  13. Simulation Results for Airborne Precision Spacing along Continuous Descent Arrivals

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.; Baxley, Brian T.

    2008-01-01

    This paper describes the results of a fast-time simulation experiment and a high-fidelity simulator validation with merging streams of aircraft flying Continuous Descent Arrivals through generic airspace to a runway at Dallas-Ft Worth. Aircraft made small speed adjustments based on an airborne-based spacing algorithm, so as to arrive at the threshold exactly at the assigned time interval behind their Traffic-To-Follow. The 40 aircraft were initialized at different altitudes and speeds on one of four different routes, and then merged at different points and altitudes while flying Continuous Descent Arrivals. This merging and spacing using flight deck equipment and procedures to augment or implement Air Traffic Management directives is called Flight Deck-based Merging and Spacing, an important subset of a larger Airborne Precision Spacing functionality. This research indicates that Flight Deck-based Merging and Spacing initiated while at cruise altitude and well prior to the Terminal Radar Approach Control entry can significantly contribute to the delivery of aircraft at a specified interval to the runway threshold with a high degree of accuracy and at a reduced pilot workload. Furthermore, previously documented work has shown that using a Continuous Descent Arrival instead of a traditional step-down descent can save fuel, reduce noise, and reduce emissions. Research into Flight Deck-based Merging and Spacing is a cooperative effort between government and industry partners.

  14. Improving the accuracy of simulation of radiation-reaction effects with implicit Runge-Kutta-Nyström methods.

    PubMed

    Elkina, N V; Fedotov, A M; Herzing, C; Ruhl, H

    2014-05-01

    The Landau-Lifshitz equation provides an efficient way to account for the effects of radiation reaction without acquiring the nonphysical solutions typical for the Lorentz-Abraham-Dirac equation. We solve the Landau-Lifshitz equation in its covariant four-vector form in order to control both the energy and momentum of radiating particles. Our study reveals that implicit time-symmetric collocation methods of the Runge-Kutta-Nyström type are superior in accuracy and better at maintaining the mass-shell condition than their explicit counterparts. We carry out an extensive study of numerical accuracy by comparing the analytical and numerical solutions of the Landau-Lifshitz equation. Finally, we present the results of the simulation of particle scattering by a focused laser pulse. Due to radiation reaction, particles are less capable of penetrating into the focal region compared to the case where radiation reaction is neglected. Our results are important for designing forthcoming experiments with high intensity laser fields. PMID:25353922
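    The advantage of implicit time-symmetric integrators can be seen on a much simpler conservative system than the Landau-Lifshitz equation: for a harmonic oscillator, the implicit midpoint rule (the one-stage Gauss collocation method, here solved by fixed-point iteration) preserves the energy, while explicit Euler inflates it every step, analogous to the mass-shell drift the paper reports for explicit schemes:

    ```python
    def f(x, v):
        # Harmonic oscillator x'' = -x as a first-order system
        return v, -x

    def implicit_midpoint_step(x, v, dt, iters=50):
        """Time-symmetric implicit midpoint step; the implicit relation is
        resolved by fixed-point iteration (contraction factor ~ dt/2)."""
        xn, vn = x, v
        for _ in range(iters):
            fx, fv = f(0.5 * (x + xn), 0.5 * (v + vn))
            xn, vn = x + dt * fx, v + dt * fv
        return xn, vn

    def explicit_euler_step(x, v, dt):
        fx, fv = f(x, v)
        return x + dt * fx, v + dt * fv

    dt, steps = 0.1, 1000
    xi, vi = 1.0, 0.0   # implicit trajectory
    xe, ve = 1.0, 0.0   # explicit trajectory
    for _ in range(steps):
        xi, vi = implicit_midpoint_step(xi, vi, dt)
        xe, ve = explicit_euler_step(xe, ve, dt)

    energy_implicit = 0.5 * (xi * xi + vi * vi)   # stays at 0.5
    energy_explicit = 0.5 * (xe * xe + ve * ve)   # grows by (1 + dt^2) per step
    ```
    
    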

  15. Speed and Accuracy of Absolute Pitch Judgments: Some Latter-Day Results.

    ERIC Educational Resources Information Center

    Carroll, John B.

    Nine subjects, 5 of whom claimed absolute pitch (AP) ability, were instructed to rapidly strike notes on the piano to match randomized tape-recorded piano notes. Stimulus set sizes were 64, 16, or 4 consecutive semitones, or 7 diatonic notes of a designated octave. A control task involved motor movements to notes announced in advance. Accuracy,…

  16. A Bloch-McConnell simulator with pharmacokinetic modeling to explore accuracy and reproducibility in the measurement of hyperpolarized pyruvate

    NASA Astrophysics Data System (ADS)

    Walker, Christopher M.; Bankson, James A.

    2015-03-01

    Magnetic resonance imaging (MRI) of hyperpolarized (HP) agents has the potential to probe in-vivo metabolism with sensitivity and specificity that were not previously possible. In cancer specifically, biological conversion of HP agents has been shown to correlate with presence of disease, stage, and response to therapy. For such metabolic biomarkers derived from MRI of hyperpolarized agents to be clinically impactful, they need to be validated and well characterized. However, imaging of HP substrates is distinct from conventional MRI, due to the non-renewable nature of transient HP magnetization. Moreover, due to current practical limitations in generation and evolution of hyperpolarized agents, it is not feasible to fully experimentally characterize measurement and processing strategies. In this work we use a custom Bloch-McConnell simulator with pharmacokinetic modeling to characterize the performance of specific magnetic resonance spectroscopy sequences over a range of biological conditions. We performed numerical simulations to evaluate the effect of sequence parameters over a range of chemical conversion rates. Each simulation was analyzed repeatedly with the addition of noise in order to determine the accuracy and reproducibility of measurements. Results indicate that under both closed and perfused conditions, acquisition parameters can affect measurements in a tissue-dependent manner, suggesting that great care needs to be taken when designing studies involving hyperpolarized agents. More modeling studies will be needed to determine what effect sequence parameters have on more advanced acquisitions and processing methods.
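    A minimal stand-in for the pharmacokinetic part of such a simulator is a two-pool exchange model with T1 relaxation; the rate and relaxation constants below are illustrative, not the paper's values:

    ```python
    def simulate_exchange(kpl, t1_pyr=43.0, t1_lac=33.0, dt=0.01, t_end=60.0):
        """Forward-Euler integration of a two-pool hyperpolarized
        pyruvate -> lactate model with T1 decay:
            dP/dt = -P/T1p - kpl * P
            dL/dt = -L/T1l + kpl * P
        Returns final pyruvate, final lactate, and peak lactate."""
        p, l = 1.0, 0.0
        lac_max = 0.0
        steps = int(t_end / dt)
        for _ in range(steps):
            dp = -p / t1_pyr - kpl * p
            dl = -l / t1_lac + kpl * p
            p += dt * dp
            l += dt * dl
            lac_max = max(lac_max, l)
        return p, l, lac_max

    p_end, l_end, lac_peak = simulate_exchange(kpl=0.05)
    ```

    Even this toy model shows the coupling the paper exploits: a faster conversion rate kpl raises the lactate peak, which is what metabolic biomarkers derived from the signal ratio quantify.
    
    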

  17. RF propagation simulator to predict location accuracy of GSM mobile phones for emergency applications

    NASA Astrophysics Data System (ADS)

    Green, Marilynn P.; Wang, S. S. Peter

    2002-11-01

    Mobile location is one of the fastest growing areas for the development of new technologies, services and applications. This paper describes the channel models that were developed as a basis of discussion to assist the Technical Subcommittee T1P1.5 in its consideration of various mobile location technologies for emergency applications (1997 - 1998) for presentation to the U.S. Federal Communication Commission (FCC). It also presents the PCS 1900 extension to this model, which is based on the COST-231 extended Hata model and review of the original Okumura graphical interpretation of signal propagation characteristics in different environments. Based on a wide array of published (and non-publicly disclosed) empirical data, the signal propagation models described in this paper were all obtained by consensus of a group of inter-company participants in order to facilitate the direct comparison between simulations of different handset-based and network-based location methods prior to their standardization for emergency E-911 applications by the FCC. Since that time, this model has become a de-facto standard for assessing the positioning accuracy of different location technologies using GSM mobile terminals. In this paper, the radio environment is described to the level of detail that is necessary to replicate it in a software environment.
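    The COST-231 extended Hata model mentioned above has a standard closed form for median path loss; a sketch of it (the default antenna heights are illustrative, not part of the paper):

    ```python
    import math

    def cost231_hata(f_mhz, d_km, h_base=30.0, h_mobile=1.5, metropolitan=False):
        """Median path loss in dB from the COST-231 extension of the Hata
        model, valid roughly for 1500-2000 MHz carriers such as PCS 1900.
        h_base and h_mobile are effective antenna heights in metres,
        d_km the link distance in km."""
        a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile \
             - (1.56 * math.log10(f_mhz) - 0.8)   # small/medium-city correction
        c = 3.0 if metropolitan else 0.0
        return (46.3 + 33.9 * math.log10(f_mhz)
                - 13.82 * math.log10(h_base) - a_hm
                + (44.9 - 6.55 * math.log10(h_base)) * math.log10(d_km) + c)

    loss_1km = cost231_hata(1900.0, 1.0)
    loss_5km = cost231_hata(1900.0, 5.0)
    ```
    
    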

  18. Accuracy of the discrete dipole approximation for simulation of optical properties of gold nanoparticles

    NASA Astrophysics Data System (ADS)

    Yurkin, Maxim A.; de Kanter, David; Hoekstra, Alfons G.

    2010-02-01

    We studied the accuracy of the discrete dipole approximation (DDA) for simulations of absorption and scattering spectra by gold nanoparticles (spheres, cubes, and rods ranging in size from 10 to 100 nm). We varied the dipole resolution and applied two DDA formulations, employing the standard lattice dispersion relation (LDR) and the relatively new filtered coupled dipoles (FCD) approach. The DDA with moderate dipole resolutions is sufficiently accurate for scattering efficiencies or positions of spectral peaks, but very inaccurate for, e.g., values of absorption efficiencies in the near-IR. To keep relative errors of the latter within 10%, about 10^7 dipoles per sphere are required. Surprisingly, errors for cubes are about 10 times smaller than those for spheres or rods, which we explain in terms of shape errors. The FCD is generally more accurate and leads to up to 2 times faster computations than the LDR. Therefore, we recommend FCD as the DDA formulation of choice for gold and other metallic nanoparticles.

  19. Accuracy of linear measurement in the Galileos cone beam computed tomography under simulated clinical conditions

    PubMed Central

    Ganguly, R; Ruprecht, A; Vincent, S; Hellstein, J; Timmons, S; Qian, F

    2011-01-01

    Objectives The aim of this study was to determine the geometric accuracy of cone beam CT (CBCT)-based linear measurements of bone height obtained with the Galileos CBCT (Sirona Dental Systems Inc., Bensheim, Hessen, Germany) in the presence of soft tissues. Methods Six embalmed cadaver heads were imaged with the Galileos CBCT unit subsequent to placement of radiopaque fiduciary markers over the buccal and lingual cortical plates. Electronic linear measurements of bone height were obtained using the Sirona software. Physical measurements were obtained with digital calipers at the same location. This distance was compared on all six specimens bilaterally to determine accuracy of the image measurements. Results The findings showed no statistically significant difference between the imaging and physical measurements (P > 0.05) as determined by a paired sample t-test. The intraclass correlation was used to measure the intrarater reliability of repeated measures and there was no statistically significant difference between measurements performed at the same location (P > 0.05). Conclusions The Galileos CBCT image-based linear measurement between anatomical structures within the mandible in the presence of soft tissues is sufficiently accurate for clinical use. PMID:21697155
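    The study's statistical comparison is a paired-sample t-test on measurements taken at the same anatomical sites; a sketch with invented bone-height values (mm):

    ```python
    import math

    def paired_t(a, b):
        """Paired-sample t statistic: mean within-pair difference divided
        by its standard error. Values near zero indicate no systematic
        difference between the two measurement methods."""
        d = [x - y for x, y in zip(a, b)]
        n = len(d)
        mean = sum(d) / n
        var = sum((x - mean) ** 2 for x in d) / (n - 1)
        return mean / math.sqrt(var / n)

    # Hypothetical CBCT vs. caliper bone heights at six sites
    cbct =    [21.4, 18.9, 25.1, 19.8, 22.6, 20.3]
    caliper = [21.2, 19.1, 25.0, 19.9, 22.4, 20.5]
    t_unbiased = paired_t(cbct, caliper)                      # ~0: no bias
    t_biased = paired_t([x + 0.5 for x in cbct], caliper)     # systematic offset
    ```
    
    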

  20. High accuracy models of sources in FDTD computations for subwavelength photonics design simulations

    NASA Astrophysics Data System (ADS)

    Cole, James B.; Banerjee, Saswatee

    2014-09-01

    The simple source model used in the conventional finite difference time domain (FDTD) algorithm gives rise to large errors. Conventional second-order FDTD has large errors (order h^2/12, where h is the grid spacing), and the errors due to the source model increase this further. Nonstandard (NS) FDTD, based on a superposition of second-order finite differences, has been demonstrated to give much higher accuracy than conventional FDTD for the sourceless wave equation and Maxwell's equations (order h^6/24192). Since the Green's function for the wave equation in free space is known, we can compute the field due to a point source. This analytical solution is inserted into the NS finite difference (FD) model and the parameters of the source model are adjusted so that the FDTD solution matches the analytical one. To derive the scattered field source model, we use the NS-FD model of the total field and of the incident field to deduce the correct source model. We find that sources that generate a scattered field must be modeled differently from ones that radiate into free space. We demonstrate the high accuracy of our source models by comparing with analytical solutions. This approach yields a significant improvement in accuracy, especially for the scattered field, where we verified the results against Mie theory. The computation time and memory requirements are about the same as for conventional FDTD. We apply these developments to solve propagation problems in subwavelength structures.
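    The role of the source model can be illustrated with a "soft" (additive) source in a minimal 1-D FDTD loop; this is conventional second-order FDTD at the magic time step (Courant number 1), not the NS-FDTD scheme of the paper:

    ```python
    import math

    def fdtd_1d(n_cells=256, n_steps=60, src=128):
        """1-D FDTD (Yee) update in normalized grid units with Courant
        number 1. The source is 'soft': the Gaussian pulse is added to
        the field rather than overwriting it, so waves that later pass
        through the source cell are not scattered by it."""
        ez = [0.0] * n_cells
        hy = [0.0] * n_cells
        for t in range(n_steps):
            for k in range(1, n_cells):
                ez[k] += hy[k] - hy[k - 1]
            ez[src] += math.exp(-((t - 40.0) / 12.0) ** 2)  # soft source
            for k in range(n_cells - 1):
                hy[k] += ez[k + 1] - ez[k]
        return ez

    field = fdtd_1d()
    peak = max(abs(v) for v in field)
    ```

    At Courant number 1 the numerical wavefront travels exactly one cell per step, so after 60 steps nothing injected at cell 128 can have reached cells beyond 188; that causality check is a quick sanity test of the update ordering.
    
    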

  1. Assessment of the DNS Data Accuracy Using RANS-DNS Simulations

    NASA Astrophysics Data System (ADS)

    Colmenares F., Juan D.; Poroseva, Svetlana V.; Murman, Scott M.

    2015-11-01

    Direct numerical simulations (DNS) provide the most accurate computational description of a turbulent flow field and its statistical characteristics. Therefore, results of simulations with Reynolds-Averaged Navier-Stokes (RANS) turbulence models are often evaluated against DNS data. The goal of our study is to determine a limit of RANS model performance in relation to existing DNS data. Since no model can outperform DNS, this limit can be determined by solving RANS equations with all unknown terms being represented by their DNS data (RANS-DNS simulations). In the presentation, results of RANS-DNS simulations conducted using transport equations for velocity moments of second, third, and fourth orders in incompressible planar wall-bounded flows are discussed. The results were obtained with two solvers: OpenFOAM and in-house code for fully-developed flows at different Reynolds numbers using different DNS databases. The material is in part based upon work supported by NASA under award NNX12AJ61A.

  2. Psychometric characteristics of simulation-based assessment in anaesthesia and accuracy of self-assessed scores.

    PubMed

    Weller, J M; Robinson, B J; Jolly, B; Watterson, L M; Joseph, M; Bajenov, S; Haughton, A J; Larsen, P D

    2005-03-01

    The purpose of this study was to define the psychometric properties of a simulation-based assessment of anaesthetists. Twenty-one anaesthetic trainees took part in three highly standardised simulations of anaesthetic emergencies. Scenarios were videotaped and rated independently by four judges. Trainees also assessed their own performance in the simulations. Results were analysed using generalisability theory to determine the influence of subject, case and judge on the variance in judges' scores and to determine the number of cases and judges required to produce a reliable result. Self-assessed scores were compared to the mean score of the judges. The results suggest that 12-15 cases are required to rank trainees reliably on their ability to manage simulated crises. Greater reliability is gained by increasing the number of cases than by increasing the number of judges. There was modest but significant correlation between self-assessed scores and external assessors' scores (rho = 0.321; p = 0.01). At the lower levels of performance, trainees consistently overrated their performance compared to those performing at higher levels (p = 0.0001). PMID:15710009

  3. Adaptive constructive processes and memory accuracy: Consequences of counterfactual simulations in young and older adults

    PubMed Central

    Gerlach, Kathy D.; Dornblaser, David W.; Schacter, Daniel L.

    2013-01-01

    People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterized as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b, young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test, participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2, younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterization as an adaptive constructive process. PMID:23560477

  4. Accuracy and repeatability of weighing for occupational hygiene measurements: results from an inter-laboratory comparison.

    PubMed

    Stacey, Peter; Revell, Graham; Tylee, Barry

    2002-11-01

    Gravimetric analysis is a fundamental technique frequently used in occupational hygiene assessments, but few studies have investigated its repeatability and reproducibility. Four inter-laboratory comparisons are discussed in this paper. The first involved 32 laboratories weighing 25 mm diameter glassfibre filters, the second involved 11 laboratories weighing 25 mm diameter PVC filters and the third involved eight laboratories weighing plastic IOM heads with 25 mm diameter glassfibre filters. Data from the third study found that measurements using this type of IOM head were unreliable. A fourth study, to ascertain if laboratories could improve their performance, involved a selected sub-group of 10 laboratories from the first exercise that analysed the 25 mm diameter glassfibre filters. The studies tested the analytical measurement process and not just the variation in weighings obtained on blank filters, as previous studies have done. Graphs of data from the first and second exercises suggest that a power curve relationship exists between reproducibility and loading, and between repeatability and loading. The relationship for reproducibility in the first study followed the equation log s(R) = -0.62 log m + 0.86 and in the second study log s(R) = -0.64 log m + 0.57, where s(R) is the reproducibility in terms of per cent relative standard deviation (%RSD) and m is the weight of loading in milligrams. The equation for glassfibre filters from the first exercise suggested that at a measurement of 0.4 mg (about a tenth of the United Kingdom legislative definition of a hazardous substance for a respirable dust for an 8 h sample), the measurement reproducibility is more than +/-25% (2 sigma). The results from PVC filters had better repeatability estimates than the glassfibre filters, but overall they had similar estimates of reproducibility. An improvement in both the reproducibility and repeatability for glassfibre filters was observed in the fourth study. This improvement reduced…
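    The fitted power law from the first study can be evaluated directly to reproduce the ±25% (2 sigma) figure quoted for a 0.4 mg loading:

    ```python
    import math

    def reproducibility_rsd(m_mg, slope=-0.62, intercept=0.86):
        """Reproducibility (%RSD) as a function of filter loading m (mg),
        from the first study's fit: log10 s_R = -0.62 log10 m + 0.86."""
        return 10.0 ** (slope * math.log10(m_mg) + intercept)

    rsd_04 = reproducibility_rsd(0.4)   # ~12.8 %RSD at a 0.4 mg loading
    two_sigma = 2.0 * rsd_04            # ~25.6%, matching the quoted ">±25%"
    ```
    
    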

  5. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

    Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at <http://croc.gsfc.nasa.gov/shadoz>. In an analysis of ozonesonde imprecision within the SHADOZ dataset, Thompson et al. [JGR, 108, 8238, 2003] pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecision and accuracy in the SHADOZ dataset are examined in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline by two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing and instrument type (manufacturer).

  6. Tersoff potential with improved accuracy for simulating graphene in molecular dynamics environment

    NASA Astrophysics Data System (ADS)

    Rajasekaran, G.; Kumar, Rajesh; Parashar, Avinash

    2016-03-01

    Graphene is an elementary unit for various carbon-based nanostructures. Recent technological developments have made it possible to manufacture hybrid and sandwich structures with graphene. Modelling these nanostructures at the atomistic scale requires a compatible interatomic potential. In this article, a modified cut-off function for the Tersoff potential is proposed to avoid overestimation and to predict the realistic mechanical behavior of a single sheet of graphene. In order to validate the modified cut-off function, simulations were performed over different sets of temperatures and strain rates, and the results were compared with available experimental data and with molecular dynamics results obtained using other empirical interatomic potentials.
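    The function being modified is the standard Tersoff cut-off, a cosine-type taper that switches the interaction off between an inner and an outer radius; its abruptness is what causes the overestimated stress response near fracture. A sketch with carbon-like, illustrative parameters (not the paper's modified form):

    ```python
    import math

    def tersoff_cutoff(r, R=1.95, D=0.15):
        """Standard Tersoff cut-off f_c(r): 1 inside R - D, 0 outside
        R + D, with a smooth sine taper in between (R, D in angstroms;
        values here are carbon-like and illustrative)."""
        if r < R - D:
            return 1.0
        if r > R + D:
            return 0.0
        return 0.5 - 0.5 * math.sin(0.5 * math.pi * (r - R) / D)

    inside = tersoff_cutoff(1.5)    # full interaction
    mid = tersoff_cutoff(1.95)      # halfway through the taper
    outside = tersoff_cutoff(2.5)   # interaction switched off
    ```
    
    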

  7. Evaluation of radiative transfer simulation accuracy over bright desert calibration sites

    NASA Astrophysics Data System (ADS)

    Govaerts, Y. M.; Clerici, M.

    2003-12-01

    Meteosat Second Generation (MSG) is the new generation of European geostationary meteorological satellites operated by EUMETSAT. SEVIRI, the MSG main radiometer, measures the reflected solar radiation within three spectral bands centered at 0.6, 0.8 and 1.6 μm, and within a broad band similar to the VIS channel of MVIRI, the radiometer on board the first generation of METEOSAT satellites. The operational calibration of these channels relies on modelled radiances over bright desert sites, as no in-flight calibration device is available. These simulated radiances therefore represent the "reference" against which SEVIRI is calibrated. The present study evaluates the uncertainties associated with the characterization of this "reference", i.e., the modelled radiances. A theoretical estimate is first derived, based on the impact of errors in the target surface and atmospheric parameters on the simulated radiance. Top-of-atmosphere simulated radiances are next compared with several thousand calibrated observations acquired by the ERS2/ATSR-2 and SeaStar/SeaWiFS instruments over the SEVIRI desert calibration sites. Results show that the relative bias between simulation and observation does not exceed ±5%.

  8. Internal Fiducial Markers and Susceptibility Effects in MRI-Simulation and Measurement of Spatial Accuracy

    SciTech Connect

    Jonsson, Joakim H.; Garpebring, Anders; Karlsson, Magnus G.; Nyholm, Tufve

    2012-04-01

    Background: It is well-known that magnetic resonance imaging (MRI) is preferable to computed tomography (CT) in radiotherapy target delineation. To benefit from this, there are two options available: transferring the MRI-delineated target volume to the planning CT or performing the treatment planning directly on the MRI study. A precondition for excluding the CT study is the possibility to define internal structures visible on both the planning MRI and on the images used to position the patient at treatment. In prostate cancer radiotherapy, internal gold markers are commonly used, and they are visible on CT, MRI, x-ray, and portal images. The depiction of the markers in MRI is, however, dependent on their shape and orientation relative to the main magnetic field because of susceptibility effects. In the present work, these effects are investigated and quantified using both simulations and phantom measurements. Methods and Materials: Software that simulated the magnetic field distortions around user-defined geometries of variable susceptibilities was constructed. These magnetic field perturbation maps were then reconstructed to images that were evaluated. The simulation software was validated through phantom measurements of four commercially available gold markers of different shapes and one in-house gold marker. Results: Both simulations and phantom measurements revealed small position deviations of the imaged marker positions relative to the actual marker positions (<1 mm). Conclusion: Cylindrical gold markers can be used as internal fiducial markers in MRI.
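    The susceptibility effect being simulated has a classic closed form for the simplest geometry: outside a sphere with susceptibility difference dchi relative to its surroundings, the fractional shift in B_z follows a dipole pattern. A sketch (marker size and dchi are illustrative, not the paper's values):

    ```python
    def sphere_field_shift(dchi, a, r, cos_theta):
        """Fractional perturbation of B_z outside a sphere of radius a and
        susceptibility difference dchi (SI units), at distance r from the
        centre and polar angle theta measured from the B0 direction:
            dB/B0 = (dchi / 3) * (a / r)**3 * (3 cos^2(theta) - 1)
        """
        return (dchi / 3.0) * (a / r) ** 3 * (3.0 * cos_theta ** 2 - 1.0)

    # Hypothetical gold marker in tissue, dchi ~ -2.5e-5 (illustrative)
    shift_pole = sphere_field_shift(-2.5e-5, 0.5, 1.0, 1.0)     # along B0
    shift_equator = sphere_field_shift(-2.5e-5, 0.5, 1.0, 0.0)  # perpendicular
    ```

    The sign flip between the pole and the equator is what makes the imaged marker shape depend on orientation relative to B0, as the abstract notes.
    
    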

  9. Accuracy assessment of the GPS-TEC calibration constants by means of a simulation technique

    NASA Astrophysics Data System (ADS)

    Conte, Juan Federico; Azpilicueta, Francisco; Brunini, Claudio

    2011-10-01

    During the last two decades, Global Positioning System (GPS) measurements have become a very important data source for ionospheric studies. However, it is not a direct and easy task to obtain accurate ionospheric information from these measurements, because it is necessary to perform a careful estimation of the calibration constants affecting the GPS observations, the so-called differential code biases (DCBs). In this paper, the most common approximations used in several GPS calibration methods, e.g. the La Plata Ionospheric Model (LPIM), are applied to a set of specially computed synthetic slant Total Electron Content datasets to assess the accuracy of the DCB estimation in a global-scale scenario. These synthetic datasets were generated using a modified version of the NeQuick model and have two important features: they show realistic temporal and spatial behavior, and all a-priori DCBs are set to zero by construction. Then, after the application of the calibration method, the deviations from zero of the estimated DCBs are direct indicators of the accuracy of the method. To evaluate the effect of the solar activity level, the analysis was performed for the years 2001 (high solar activity) and 2006 (low solar activity). To take into account seasonal changes in ionospheric behavior, the analysis was repeated for three consecutive days close to each equinox and solstice of every year. A data package comprising 24 days from approximately 200 IGS permanent stations was then processed. In order to avoid unwanted geomagnetic storm effects, the selected days correspond to periods of quiet geomagnetic conditions. The most important results of this work are: i) the estimated DCBs can be affected by errors of around ±8 TECu for high solar activity and ±3 TECu for low solar activity; and ii) DCB errors present a systematic behavior depending on the modip coordinate, which is more evident in the positive modip region.

  10. Implication of CT Table Sag on Geometrical Accuracy During Virtual Simulation

    SciTech Connect

    Zullo, John R. Kudchadker, Rajat; Wu, Richard; Lee, Andrew; Prado, Karl

    2007-01-01

    Computed tomography (CT) scanners are used in hospitals worldwide for radiation oncology treatment simulation. It is critical that this process accurately represents the patient positioning to be used during the administration of radiation therapy, to minimize the dose delivered to normal tissue. Unfortunately, this is not always the case. One problem is that some degree of vertical displacement, or sag, occurs when the table is extended from its base under a clinical weight load, a problem resulting from mechanical limitations of the CT table. In an effort to determine the extent of the problem, we measured and compared the degree of table sag for various CT scanner tables at our institution. A clinically representative weight load was placed on each table, and the amount of table sag was measured for varying degrees of table extension from its base. Results indicated that the amount of table sag varied from approximately 0.7 to 6.6 mm, not only between tables from different manufacturers but also between tables of the same model from the same manufacturer. Failure to recognize and correct this problem could lead to incorrectly derived isocenter localization and subsequent patient positioning errors. Treatment site-specific and scanner-based laser offset corrections should be implemented for each patient's virtual simulation procedure. In addition, the amount of sag should be measured under a clinically representative weight load at CT-simulator commissioning.
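    The recommended scanner-based correction amounts to interpolating commissioning sag measurements at the clinical table extension. A minimal sketch, with hypothetical sag data spanning the 0.7-6.6 mm range reported above (the measurement points are invented for illustration):

    ```python
    from bisect import bisect_right

    # hypothetical commissioning data: sag (mm) vs. table extension (cm)
    EXT_CM = [0.0, 30.0, 60.0, 90.0, 120.0, 150.0]
    SAG_MM = [0.0, 0.7, 1.8, 3.1, 4.9, 6.6]

    def laser_offset(ext):
        """Vertical laser offset correction (mm) at a given extension,
        linearly interpolated between commissioning measurements."""
        if ext <= EXT_CM[0]:
            return SAG_MM[0]
        if ext >= EXT_CM[-1]:
            return SAG_MM[-1]
        i = bisect_right(EXT_CM, ext)
        x0, x1 = EXT_CM[i - 1], EXT_CM[i]
        y0, y1 = SAG_MM[i - 1], SAG_MM[i]
        return y0 + (y1 - y0) * (ext - x0) / (x1 - x0)

    offset = laser_offset(75.0)  # midway between the 60 and 90 cm points
    ```

    In practice the lookup table would come from the commissioning measurements the abstract calls for, one table per scanner.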

  11. Computer simulation of shading and blocking: Discussion of accuracy and recommendations

    SciTech Connect

    Lipps, F W

    1992-04-01

    A field of heliostats suffers losses caused by shading and blocking by neighboring heliostats. The complex geometry of multiple shading and blocking events suggests that a processing code is needed to update the boundary vector for each shading or blocking event. A new version, RSABS (programmer's manual included), simulates the split-rectangular heliostat. Researchers concluded that the dominant error for the given heliostat geometry is caused by the departure from planarity of the neighboring heliostats. It is recommended that a version of the heliostat simulation be modified to include losses due to nonreflective structural margins, if they occur. Heliostat neighbors should be given true guidance rather than assumed to be parallel, and the resulting nonidentical quadrilateral images should be processed, as in HELIOS, by ignoring overlapping events, which are rare in optimized fields.

  13. From High Accuracy to High Efficiency in Simulations of Processing of Dual-Phase Steels

    NASA Astrophysics Data System (ADS)

    Rauch, L.; Kuziak, R.; Pietrzyk, M.

    2014-04-01

    Searching for a compromise between computing costs and predictive capabilities of metal processing models is the objective of this work. The justification for using multiscale and simplified models in simulations of the manufacturing of DP steel products is discussed. Multiscale techniques are described and their applications to modeling annealing and stamping are shown. This approach is costly and should be used in specific applications only. Models based on the JMAK equation are an alternative. Physical simulations of continuous annealing were conducted for validation of the models. An analysis of the computing time and predictive capabilities of the models led to the conclusion that the modified JMAK equation gives good results when only the volume fractions after annealing need to be predicted. In contrast, a multiscale model is needed to analyze the distribution of strains in the ferritic-martensitic microstructure. The idea of simplifying multiscale models is presented as well.
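    As a point of reference for the simplified approach, the JMAK (Avrami) equation gives the transformed volume fraction as X(t) = 1 - exp(-k t^n). A minimal sketch with hypothetical kinetic constants (the values of k and n below are illustrative, not fitted values from the paper):

    ```python
    import math

    def jmak_fraction(t, k, n):
        """Transformed volume fraction from the JMAK (Avrami) equation:
        X(t) = 1 - exp(-k * t**n)."""
        return 1.0 - math.exp(-k * t**n)

    # hypothetical kinetics for a phase transformation during annealing
    k, n = 0.01, 2.0
    fractions = [jmak_fraction(t, k, n) for t in (5.0, 10.0, 20.0)]
    ```

    The cheapness of this closed form, against a full multiscale microstructure simulation, is exactly the efficiency trade-off the abstract discusses.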

  14. Evaluation of accuracy of non-linear finite element computations for surgical simulation: study using brain phantom.

    PubMed

    Ma, J; Wittek, A; Singh, S; Joldes, G; Washio, T; Chinzei, K; Miller, K

    2010-12-01

    In this paper, the accuracy of non-linear finite element computations in application to surgical simulation was evaluated by comparing experimental and modelling results for indentation of a human brain phantom. The evaluation was realised by comparing the forces acting on the indenter and the deformation of the brain phantom. The deformation of the brain phantom was measured by tracking 3D motions of X-ray opaque markers placed within the phantom, using a custom-built bi-plane X-ray image intensifier system. The model was implemented using the ABAQUS(TM) finite element solver. Realistic geometry obtained from magnetic resonance images and specific constitutive properties determined through compression tests were used in the model. The model accurately predicted the indentation force-displacement relations and marker displacements. Good agreement between modelling and experimental results verifies the reliability of the finite element modelling techniques used in this study and confirms the predictive power of these techniques in surgical simulation. PMID:21153973

  15. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
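    The core numerical issue can be demonstrated on a toy model. The sketch below integrates a single linear reservoir, dS/dt = P - S/k, with the first-order explicit fixed-step scheme the paper criticizes, and compares coarse and fine step sizes against the analytic solution (the model and parameter values are illustrative, not the paper's case study):

    ```python
    import math

    def reservoir_euler(P, k, S0, dt, t_end):
        """First-order explicit fixed-step (Euler) integration of the linear
        reservoir dS/dt = P - S/k -- the cheap scheme whose truncation
        errors the paper shows corrupting MCMC inference."""
        S = S0
        for _ in range(round(t_end / dt)):
            S += dt * (P - S / k)
        return S

    # constant forcing admits an analytic solution to compare against
    P, k, S0, t_end = 2.0, 5.0, 0.0, 50.0
    exact = P * k * (1.0 - math.exp(-t_end / k))
    coarse = reservoir_euler(P, k, S0, dt=2.0, t_end=t_end)
    fine = reservoir_euler(P, k, S0, dt=0.01, t_end=t_end)
    ```

    The coarse step reaches the right steady state here, but its transient error is orders of magnitude larger than the fine step's; in a likelihood evaluation such step-size-dependent error is what distorts the posterior surface.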

  16. Superspreading: molecular dynamics simulations and experimental results

    NASA Astrophysics Data System (ADS)

    Theodorakis, Panagiotis; Kovalchuk, Nina; Starov, Victor; Muller, Erich; Craster, Richard; Matar, Omar

    2015-11-01

    The intriguing ability of certain surfactant molecules to drive the superspreading of liquids to complete wetting on hydrophobic substrates is central to numerous applications that range from coating flow technology to enhanced oil recovery. Recently, we have observed that for superspreading to occur, two key conditions must be simultaneously satisfied: the adsorption of surfactants from the liquid-vapor surface onto the three-phase contact line augmented by local bilayer formation. Crucially, this must be coordinated with the rapid replenishment of liquid-vapor and solid-liquid interfaces with surfactants from the interior of the droplet. Here, we present the structural characteristics and kinetics of the droplet spreading during the different stages of this process, and we compare our results with experimental data for trisiloxane and polyoxyethylene surfactants. In this way, we highlight and explore the differences between surfactants, paving the way for the design of molecular architectures tailored specifically for applications that rely on the control of wetting. EPSRC Platform Grant MACIPh (EP/L020564/).

  17. Progress toward chemical accuracy in the computer simulation of condensed phase reactions

    SciTech Connect

    Bash, P.A.; Levine, D.; Hallstrom, P.; Ho, L.L.; Mackerell, A.D. Jr.

    1996-03-01

    A procedure is described for the generation of chemically accurate computer-simulation models to study chemical reactions in the condensed phase. The process involves (1) the use of a coupled semiempirical quantum and classical molecular mechanics method to represent solutes and solvent, respectively; (2) the optimization of semiempirical quantum mechanics (QM) parameters to produce a computationally efficient and chemically accurate QM model; (3) the calibration of a quantum/classical microsolvation model using ab initio quantum theory; and (4) the use of statistical mechanical principles and methods to simulate, on massively parallel computers, the thermodynamic properties of chemical reactions in aqueous solution. The utility of this process is demonstrated by the calculation of the enthalpy of reaction in vacuum and free energy change in aqueous solution for a proton transfer involving methanol, methoxide, imidazole, and imidazolium, which are functional groups involved with proton transfers in many biochemical systems. An optimized semiempirical QM model is produced, which results in the calculation of heats of formation of the above chemical species to within 1.0 kcal/mol of experimental values. The use of the calibrated QM and microsolvation QM/MM models for the simulation of a proton transfer in aqueous solution gives a calculated free energy that is within 1.0 kcal/mol (12.2 calculated vs. 12.8 experimental) of a value estimated from experimental pKa's of the reacting species.

  18. Increasing the efficiency of bacterial transcription simulations: When to exclude the genome without loss of accuracy

    PubMed Central

    Iafolla, Marco AJ; Dong, Guang Qiang; McMillen, David R

    2008-01-01

    Background Simulating the major molecular events inside an Escherichia coli cell can lead to a very large number of reactions that compose its overall behaviour. Not only should the model be accurate, but it is imperative for the experimenter to create an efficient model to obtain the results in a timely fashion. Here, we show that for many parameter regimes, the effect of the host cell genome on the transcription of a gene from a plasmid-borne promoter is negligible, allowing one to simulate the system more efficiently by removing the computational load associated with representing the presence of the rest of the genome. The key parameter is the on-rate of RNAP binding to the promoter (k_on), and we compare the total number of transcripts produced from a plasmid vector generated as a function of this rate constant, for two versions of our gene expression model, one incorporating the host cell genome and one excluding it. By sweeping parameters, we identify the k_on range for which the difference between the genome and no-genome models drops below 5%, over a wide range of doubling times, mRNA degradation rates, plasmid copy numbers, and gene lengths. Results We assess the effect of simulating the presence of the genome over a four-dimensional parameter space, considering: 24 min <= bacterial doubling time <= 100 min; 10 <= plasmid copy number <= 1000; 2 min <= mRNA half-life <= 14 min; and 10 bp <= gene length <= 10000 bp. A simple MATLAB user interface generates an interpolated k_on threshold for any point in this range; this rate can be compared to the ones used in other transcription studies to assess the need for including the genome. Conclusion Exclusion of the genome is shown to yield less than 5% difference in transcript numbers over wide ranges of values, and computational speed is improved by two to 24 times by excluding explicit representation of the genome.  PMID:18789148
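    The genome-versus-no-genome comparison can be illustrated with a toy equilibrium-occupancy model rather than the authors' full stochastic simulation: sequestration of RNAP by the genome matters only when the promoter is far from saturation, i.e. at low k_on. All numbers below (total RNAP, sequestered fraction, rate constants) are invented for illustration:

    ```python
    def occupancy(k_on, rnap_free, k_off=1.0):
        """Equilibrium promoter occupancy for a two-state binding model."""
        return k_on * rnap_free / (k_on * rnap_free + k_off)

    RNAP_TOTAL = 1000.0   # assumed free RNAP when no genome is represented
    SEQUESTERED = 0.3     # assumed fraction of RNAP held on the genome

    def rel_difference(k_on):
        """Relative transcript-count difference between the no-genome and
        genome models (transcripts taken proportional to occupancy)."""
        no_genome = occupancy(k_on, RNAP_TOTAL)
        genome = occupancy(k_on, RNAP_TOTAL * (1.0 - SEQUESTERED))
        return (no_genome - genome) / no_genome

    rel_low = rel_difference(1e-4)   # weak binding: genome matters
    rel_high = rel_difference(1e-1)  # strong binding: genome negligible
    ```

    In this toy picture the 5% criterion of the paper corresponds to finding the k_on above which the relative difference stays below 0.05.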

  19. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

    Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied on the residual values and evaluation of the dependence of the residual values on the input parameters. These tests have been repeated on the real data, supplemented with a categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted based on the residual value distribution being also normal, but in case of the test on the real data the residual value distribution is
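    A Kolmogorov-Smirnov check of residual normality, of the kind applied above, can be sketched in a few lines without external libraries (the synthetic residual sets below are illustrative):

    ```python
    import math
    import random

    def ks_statistic_normal(residuals):
        """One-sample Kolmogorov-Smirnov statistic of the residuals against
        a normal distribution with the sample mean and standard deviation."""
        n = len(residuals)
        mu = sum(residuals) / n
        sd = math.sqrt(sum((r - mu) ** 2 for r in residuals) / (n - 1))
        d = 0.0
        for i, x in enumerate(sorted(residuals)):
            cdf = 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))
            d = max(d, abs(cdf - (i + 1) / n), abs(cdf - i / n))
        return d

    # synthetic plane-fit residuals: near-normal vs. clearly skewed
    random.seed(1)
    good = [random.gauss(0.0, 0.1) for _ in range(500)]
    bad = [random.expovariate(10.0) for _ in range(500)]
    d_good = ks_statistic_normal(good)
    d_bad = ks_statistic_normal(bad)
    ```

    A small statistic (d_good) is consistent with normally distributed residuals; a large one (d_bad) signals the kind of departure from normality the real-data tests detect. Note that estimating the mean and standard deviation from the same sample makes the plain KS critical values conservative (the Lilliefors correction addresses this).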

  20. Accuracy of acoustic ear canal impedances: finite element simulation of measurement methods using a coupling tube.

    PubMed

    Schmidt, Sebastian; Hudde, Herbert

    2009-06-01

    Acoustic impedances measured at the entrance of the ear canal provide information on both the ear canal geometry and the terminating impedance at the eardrum, in principle. However, practical experience reveals that measured results in the audio frequency range up to 20 kHz are frequently not very accurate. Measurement methods successfully tested in artificial tubes with varying area functions often fail when applied to real ear canals. The origin of these errors is investigated in this paper. To avoid mixing of systematical and other errors, no real measurements are performed. Instead finite element simulations focusing on the coupling between a connecting tube and the ear canal are regarded without simulating a particular measuring method in detail. It turns out that realistic coupling between the connecting tube and the ear canal causes characteristic shifts of the frequencies of measured pressure minima and maxima. The errors in minima mainly depend on the extent of the area discontinuity arising at the interface; the errors in maxima are determined by the alignment of the tube with respect to the ear canal. In summary, impedance measurements using coupling tubes appear questionable beyond 3 kHz. PMID:19507964
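    The ~3 kHz limit quoted above lines up with the first standing-wave resonance of a canal-length tube. A crude stand-in for the ear canal is a rigidly terminated uniform tube, whose pressure-minimum frequencies follow f_n = (2n - 1) c / (4L); the 25 mm length below is an assumed typical value, not a figure from the paper:

    ```python
    def quarter_wave_resonances(length_m, n_modes=3, c=343.0):
        """Quarter-wave resonance frequencies (Hz) of a rigidly terminated
        tube of the given length: f_n = (2n - 1) * c / (4 L)."""
        return [(2 * n - 1) * c / (4.0 * length_m) for n in range(1, n_modes + 1)]

    freqs = quarter_wave_resonances(0.025)  # assumed 25 mm canal length
    ```

    The first resonance of this idealized tube falls near 3.4 kHz, i.e. the coupling-geometry errors discussed above start to matter just where the standing-wave pattern first develops.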

  1. Simulation of electronic registration of multispectral remote sensing images to 0.1 pixel accuracy

    NASA Technical Reports Server (NTRS)

    Reitsema, H. J.; Mord, A. J.; Fraser, D.; Richard, H. L.; Speaker, E. E.

    1984-01-01

    Band-to-band coregistration of multispectral remote sensing images can be achieved by electronic signal processing techniques rather than by costly and difficult mechanical alignment. This paper describes the results of a study of the end-to-end performance of electronic registration. The software simulation includes steps which model the performance of the geometric calibration process, the instrument image quality, detector performance and the effects of achieving coregistration through image resampling. The image resampling step emulates the Pipelined Resampling Processor, a real-time image resampler. The study demonstrates that the electronic alignment technique produces multispectral images which are superior to those produced by an imager whose pixel geometry is accurate to 0.1 pixel rms. The implications of this approach for future earth observation programs are discussed.

  2. Simulation of ultrasound radio-frequency signals in deformed tissue for validation of 2D motion estimation with sub-sample accuracy.

    PubMed

    Goksel, Orcun; Zahiri-Azar, Reza; Salcudean, Septimiu E

    2007-01-01

    Motion estimation in sequences of ultrasound echo signals is essential for a wide range of applications. In time domain cross correlation, which is a common motion estimation technique, the displacements are typically not integral multiples of the sampling period. Therefore, to estimate the motion with sub-sample accuracy, 1D and 2D interpolation methods such as parabolic, cosine, and ellipsoid fitting have been introduced in the literature. In this paper, a simulation framework is presented in order to compare the performance of currently available techniques. First, the tissue deformation is modeled using the finite element method (FEM) and then the corresponding pre-/post-deformation radio-frequency (RF) signals are generated using Field II ultrasound simulation software. Using these simulated RF data of deformation, both axial and lateral tissue motion are estimated with sub-sample accuracy. The estimated displacements are then evaluated by comparing them to the known displacements computed by the FEM. This simulation approach was used to evaluate three different lateral motion estimation techniques employing (i) two separate 1D sub-sampling, (ii) two consecutive 1D sub-sampling, and (iii) 2D joint sub-sampling estimators. The estimation errors during two different tissue compression tests are presented with and without spatial filtering. Results show that RF signal processing methods involving tissue deformation can be evaluated using the proposed simulation technique, which employs accurate models. PMID:18002416
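    The parabolic sub-sample estimator mentioned above has a closed form: fit a parabola through the cross-correlation maximum and its two neighbours and take the abscissa of the vertex. A minimal sketch (the sample values are illustrative):

    ```python
    def parabolic_peak(y_minus, y0, y_plus):
        """Sub-sample offset of the correlation peak: fits a parabola
        through (-1, y_minus), (0, y0), (1, y_plus) and returns the
        vertex abscissa, which lies in [-0.5, 0.5] when y0 is the max."""
        denom = y_minus - 2.0 * y0 + y_plus
        if denom == 0.0:
            return 0.0
        return 0.5 * (y_minus - y_plus) / denom

    # correlation values around the integer-lag maximum (hypothetical)
    delta = parabolic_peak(0.90, 1.00, 0.94)
    ```

    The total displacement estimate is the integer lag plus this fractional offset; the cosine and 2D ellipsoid fits compared in the paper play the same role with different interpolating functions.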

  3. Spaceborne lidar measurement accuracy - Simulation of aerosol, cloud, molecular density, and temperature retrievals

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Morley, B. M.; Browell, E. V.

    1982-01-01

    In connection with studies concerning the use of an orbiting optical radar (lidar) to conduct aerosol and cloud measurements, attention has been given to the accuracy with which lidar return signals can be measured. However, signal-measurement error is not the only source of error that can affect the accuracy of the derived information. Other error sources are the assumed molecular-density and atmospheric-transmission profiles, and the lidar calibration factor (which relates signal to backscatter coefficient). The present investigation aims to account for the effects of all these error sources for several realistic combinations of lidar parameters, model atmospheres, and background lighting conditions. In addition, a procedure is developed and tested for measuring density and temperature profiles with the lidar, and for using the lidar-derived density profiles to improve aerosol retrievals.

  4. The VIIRS ocean data simulator enhancements and results

    NASA Astrophysics Data System (ADS)

    Robinson, Wayne D.; Patt, Frederick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.

    2011-10-01

    The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.

  6. Analysis of Factors Influencing Measurement Accuracy of Al Alloy Tensile Test Results

    NASA Astrophysics Data System (ADS)

    Podgornik, Bojan; Žužek, Borut; Sedlaček, Marko; Kevorkijan, Varužan; Hostej, Boris

    2016-02-01

    In order to properly use materials in design, a complete understanding of and information on their mechanical properties, such as yield and ultimate tensile strength, must be obtained. Furthermore, as the design of automotive parts is constantly pushed toward higher limits, excessive measurement uncertainty can lead to unexpected premature failure of the component, thus requiring reliable determination of material properties with low uncertainty. The aim of the present work was to evaluate the effect of different metrology factors, including the number of tested samples, specimen machining and surface quality, specimen input diameter, type of testing, and human error, on the tensile test results and measurement uncertainty when performed on a 2xxx series Al alloy. Results show that the most significant contribution to measurement uncertainty comes from the number of samples tested, which can even exceed 1%. Furthermore, moving from controlled laboratory conditions to an intense industrial environment further amplifies measurement uncertainty; even with automated systems, human error cannot be neglected.
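    The dependence on the number of samples enters through the standard uncertainty of the mean, u = s/√n. A minimal sketch with hypothetical tensile strength readings (the values below are invented, not the paper's data):

    ```python
    import math

    def mean_uncertainty(samples):
        """Standard (Type A) uncertainty of the mean, u = s / sqrt(n),
        with s the sample standard deviation."""
        n = len(samples)
        mean = sum(samples) / n
        s = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
        return s / math.sqrt(n)

    # hypothetical ultimate tensile strength readings (MPa), 2xxx Al alloy
    uts = [447.2, 452.8, 449.5, 451.1, 448.3, 450.6]
    u = mean_uncertainty(uts)
    rel_percent = 100.0 * u / (sum(uts) / len(uts))
    ```

    Because u shrinks as 1/√n, adding samples is the single most effective lever on measurement uncertainty, consistent with the paper's finding that sample count dominates the budget.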

  7. On the accuracy of a video-based drill-guidance solution for orthopedic and trauma surgery: preliminary results

    NASA Astrophysics Data System (ADS)

    Magaraggia, Jessica; Kleinszig, Gerhard; Wei, Wei; Weiten, Markus; Graumann, Rainer; Angelopoulou, Elli; Hornegger, Joachim

    2014-03-01

    Over the last years, several methods have been proposed to guide the physician during reduction and fixation of bone fractures. Available solutions often use bulky instrumentation inside the operating room (OR). These usually consist of a stereo camera, placed outside the operative field, and optical markers directly attached to both the patient and the surgical instrumentation held by the surgeon. Recently proposed techniques try to reduce the required additional instrumentation as well as the radiation exposure to both patient and physician. In this paper, we present the adaptation and the first implementation of our recently proposed video camera-based solution for screw fixation guidance. Based on the simulations conducted in our previous work, we mounted a small camera on a drill in order to recover its tip position and axis orientation w.r.t. our custom-made drill sleeve with attached markers. Since drill-position accuracy is critical, we thoroughly evaluated the accuracy of our implementation. We used an optical tracking system for ground truth data collection. For this purpose, we built a custom plate reference system and attached reflective markers to both the instrument and the plate. Free drilling was then performed 19 times. The position of the drill axis was continuously recovered using both our video camera solution and the tracking system for comparison. The recorded data covered targeting, perforation of the surface bone by the drill bit, and bone drilling. The orientation of the instrument axis and the position of the instrument tip were recovered with an accuracy of 1.60 ± 1.22° and 2.03 ± 1.36 mm, respectively.
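    Accuracy figures like the angular deviation above come from comparing the recovered drill axis with the optically tracked one. A minimal sketch of that per-frame comparison (the two direction vectors are invented for illustration):

    ```python
    import math

    def axis_angle_deg(u, v):
        """Angle (degrees) between two 3D direction vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        c = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp for acos safety
        return math.degrees(math.acos(c))

    # recovered drill axis vs. tracked ground-truth axis (hypothetical)
    angle = axis_angle_deg((0.0, 0.0, 1.0), (0.02, 0.01, 1.0))
    ```

    Accumulating this angle (and the tip-position distance) over all frames of the 19 drillings yields mean ± standard deviation figures of the kind reported above.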

  8. The accuracy of diffusion quantum Monte Carlo simulations in the determination of molecular equilibrium structures

    NASA Astrophysics Data System (ADS)

    Lu, Shih-I.

    2004-12-01

    For a test set of 17 first-row small molecules, the equilibrium structures are calculated with Ornstein-Uhlenbeck diffusion quantum Monte Carlo simulations guided by trial wave functions constructed from floating spherical Gaussian orbitals and spherical Gaussian geminals. To measure the performance of the Monte Carlo calculations, the mean deviation, the mean absolute deviation, the maximum absolute deviation, and the standard deviation of the Monte Carlo calculated equilibrium structures with respect to empirical equilibrium structures are given. This approach is found to yield results of uniformly high quality, consistent with empirical equilibrium structures and surpassing calculated values from the coupled cluster model with single, double, and noniterative triple excitations [CCSD(T)] with the cc-pCVQZ and cc-pVQZ basis sets. The nonrelativistic equilibrium atomization energies are also presented to assess the performance of the calculation methods. The mean absolute deviations with respect to experimental atomization energies are 0.16 and 0.21 kcal/mol for the Monte Carlo and CCSD(T)/cc-pCV(56)Z calculations, respectively.
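    The four summary statistics named above are straightforward to compute from paired calculated and empirical values. A minimal sketch with hypothetical bond lengths (the numbers are illustrative, not the paper's data):

    ```python
    import math

    def deviation_stats(calc, ref):
        """Mean deviation, mean absolute deviation, maximum absolute
        deviation, and standard deviation of calc - ref."""
        d = [c - r for c, r in zip(calc, ref)]
        n = len(d)
        mean = sum(d) / n
        mad = sum(abs(x) for x in d) / n
        dmax = max(abs(x) for x in d)
        std = math.sqrt(sum((x - mean) ** 2 for x in d) / (n - 1))
        return mean, mad, dmax, std

    # hypothetical bond lengths (Angstrom): Monte Carlo vs. empirical r_e
    calc = [1.128, 0.958, 1.203, 1.098]
    ref = [1.128, 0.957, 1.202, 1.102]
    stats = deviation_stats(calc, ref)
    ```

    The mean deviation exposes systematic bias, while the absolute and maximum deviations capture scatter and worst cases, which is why the paper quotes all four.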

  9. Accuracy of core mass estimates in simulated observations of dust emission

    NASA Astrophysics Data System (ADS)

    Malinen, J.; Juvela, M.; Collins, D. C.; Lunttila, T.; Padoan, P.

    2011-06-01

    Aims: We study the reliability of the mass estimates obtained for molecular cloud cores using sub-millimetre and infrared dust emission. Methods: We use magnetohydrodynamic simulations and radiative transfer to produce synthetic observations with spatial resolution and noise levels typical of Herschel surveys. We estimate dust colour temperatures using different pairs of intensities, calculate column densities with opacity at one wavelength, and compare the estimated masses with the true values. We compare these results to the case when all five Herschel wavelengths are available. We investigate the effects of spatial variations of dust properties and the influence of embedded heating sources. Results: Incorrect assumptions about dust opacity and its spectral index β can cause significant systematic errors in mass estimates. These are mainly multiplicative and leave the slope of the mass spectrum intact, unless cores with very high optical depth are included. Temperature variations bias the colour temperature estimates and, in quiescent cores with optical depths higher than for normal stable cores, masses can be underestimated by up to one order of magnitude. When heated by internal radiation sources, the dust in the core centre becomes visible and the observations recover the true mass spectra. Conclusions: The shape, although not the position, of the mass spectrum is robust against observational errors and biases introduced in the analysis. This changes only if the cores have optical depths much higher than expected for basic hydrostatic equilibrium conditions. Observations underestimate the value of β whenever there are temperature variations along the line of sight. A bias can also be observed when the true β varies with wavelength. Internal heating sources produce an inverse correlation between colour temperature and β that may be difficult to separate from any intrinsic β(T) relation of the dust grains. This suggests caution when interpreting the observed
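
    The two-band colour-temperature step described in the Methods can be illustrated with a modified-blackbody intensity ratio; the wavelengths, β, and bracketing interval below are illustrative assumptions, not the paper's actual setup:

```python
import math

H, K, C = 6.626e-34, 1.381e-23, 2.998e8  # Planck, Boltzmann, c (SI)

def planck(nu, temp):
    """Planck function B_nu(T), up to a constant shared by both bands."""
    return nu**3 / math.expm1(H * nu / (K * temp))

def colour_temperature(i1, i2, lam1, lam2, beta=2.0, lo=3.0, hi=100.0):
    """Solve for the temperature implied by the intensity ratio of two
    bands, assuming modified-blackbody emission I_nu ~ B_nu(T) * nu**beta.
    Simple bisection; the ratio is monotonic in T over [lo, hi] kelvin."""
    nu1, nu2 = C / lam1, C / lam2
    target = i1 / i2
    def f(t):
        return planck(nu1, t) / planck(nu2, t) * (nu1 / nu2) ** beta - target
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Round trip: synthesize 250/500 um intensities at 15 K, then recover T
lam1, lam2, beta, t_true = 250e-6, 500e-6, 2.0, 15.0
i1 = planck(C / lam1, t_true) * (C / lam1) ** beta
i2 = planck(C / lam2, t_true) * (C / lam2) ** beta
print(colour_temperature(i1, i2, lam1, lam2, beta))  # ≈ 15.0
```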

  10. Gravity Probe B Data Analysis. Status and Potential for Improved Accuracy of Scientific Results

    NASA Astrophysics Data System (ADS)

    Everitt, C. W. F.; Adams, M.; Bencze, W.; Buchman, S.; Clarke, B.; Conklin, J. W.; Debra, D. B.; Dolphin, M.; Heifetz, M.; Hipkins, D.; Holmes, T.; Keiser, G. M.; Kolodziejczak, J.; Li, J.; Lipa, J.; Lockhart, J. M.; Mester, J. C.; Muhlfelder, B.; Ohshima, Y.; Parkinson, B. W.; Salomon, M.; Silbergleit, A.; Solomonik, V.; Stahl, K.; Taber, M.; Turneaure, J. P.; Wang, S.; Worden, P. W.

    2009-12-01

    This is the first of five connected papers detailing progress on the Gravity Probe B (GP-B) Relativity Mission. GP-B, launched 20 April 2004, is a landmark physics experiment in space to test two fundamental predictions of Einstein’s general relativity theory, the geodetic and frame-dragging effects, by means of cryogenic gyroscopes in Earth orbit. Data collection began 28 August 2004 and science operations were completed 29 September 2005. The data analysis has proven deeper than expected as a result of two mutually reinforcing complications in gyroscope performance: (1) a changing polhode path affecting the calibration of the gyroscope scale factor C g against the aberration of starlight and (2) two larger than expected manifestations of a Newtonian gyro torque due to patch potentials on the rotor and housing. In earlier papers, we reported two methods, ‘geometric’ and ‘algebraic’, for identifying and removing the first Newtonian effect (‘misalignment torque’), and also a preliminary method of treating the second (‘roll-polhode resonance torque’). Central to the progress in both torque modeling and C g determination has been an extended effort on “Trapped Flux Mapping” commenced in November 2006. A turning point came in August 2008 when it became possible to include a detailed history of the resonance torques into the computation. The East-West (frame-dragging) effect is now plainly visible in the processed data. The current statistical uncertainty from an analysis of 155 days of data is 5.4 marc-s/yr (˜14% of the predicted effect), though it must be emphasized that this is a preliminary result requiring rigorous investigation of systematics by methods discussed in the accompanying paper by Muhlfelder et al. A covariance analysis incorporating models of the patch effect torques indicates that a 3-5% determination of frame-dragging is possible with more complete, computationally intensive data analysis.

  11. Post-glacial landforms dating by lichenometry in Iceland - the accuracy of relative results and conversely

    NASA Astrophysics Data System (ADS)

    Decaulne, Armelle

    2014-05-01

    Lichenometry studies have been carried out all over Iceland since 1970, using various techniques to address a range of geomorphological issues, from moraine dating and glacial advances, outwash timing, proglacial river incision, soil erosion, rock-glacier development, and climate variations, to debris-flow occurrence and extreme snow-avalanche frequency. Most users have sought to date proglacial landforms in two main areas: around the southern ice-caps of Vatnajökull and Myrdalsjökull, and in Tröllaskagi in northern Iceland. Based on the results of over thirty-five published studies, lichenometry is deemed a successful dating tool in Iceland, and seems to approach an absolute dating technique, at least over the last hundred years, under well-constrained environmental conditions at the local scale. With an increasing awareness of the methodological limitations of the technique, together with more sophisticated data treatments, predicted lichenometric 'ages' are supposedly gaining in robustness and precision. However, comparisons between regions, and even between studies in the same area, are hindered by the use of different measurement techniques and data processing. These issues are exacerbated in Iceland by rapid environmental changes across short distances and, more generally, by the common problems surrounding lichen species mis-identification in the field, not to mention the age discrepancies relative to other dating tools, such as tephrochronology. Some authors claim lichenometry allows a precise reconstruction of landforms and geomorphic processes in Iceland, proposing yearly dating; others include error margins in their reconstructions; while some limit its use to identifying generations of landforms, declining to push the gathered data beyond their nature into further interpretation. Finally, can lichenometry be a relatively accurate dating technique, or rather an accurate relative dating tool, in Iceland?

  12. Accuracy of a Decision Aid for Advance Care Planning: Simulated End-of-Life Decision Making

    PubMed Central

    Levi, Benjamin H.; Heverley, Steven R.; Green, Michael J.

    2013-01-01

    Purpose Advance directives have been criticized for failing to help physicians make decisions consistent with patients’ wishes. This pilot study sought to determine if an interactive, computer-based decision aid that generates an advance directive can help physicians accurately translate patients’ wishes into treatment decisions. Methods We recruited 19 patient-participants who had each previously created an advance directive using a computer-based decision aid, and 14 physicians who had no prior knowledge of the patient-participants. For each advance directive, three physicians were randomly assigned to review the advance directive and make five to six treatment decisions for each of six (potentially) end-of-life clinical scenarios. From the three individual physicians’ responses, a “consensus physician response” was generated for each treatment decision (total decisions = 32). This consensus response was shared with the patient whose advance directive had been reviewed, and she/he was then asked to indicate how well the physician translated his/her wishes into clinical decisions. Results Patient-participants agreed with the consensus physician responses 84 percent (508/608) of the time, including 82 percent agreement on whether to provide mechanical ventilation, and 75 percent on decisions about cardiopulmonary resuscitation (CPR). Across the six vignettes, patient-participants’ rating of how well physicians translated their advance directive into medical decisions was 8.4 (range = 6.5–10, where 1 = extremely poorly, and 10 = extremely well). Physicians’ overall rating of their confidence at accurately translating patients’ wishes into clinical decisions was 7.8 (range = 6.1–9.3, 1 = not at all confident, 10 = extremely confident). Conclusion For simulated cases, a computer-based decision aid for advance care planning can help physicians more confidently make end-of-life decisions that patients will endorse. PMID:22167985

  13. Accuracy of surface registration compared to conventional volumetric registration in patient positioning for head-and-neck radiotherapy: A simulation study using patient data

    SciTech Connect

    Kim, Youngjun; Li, Ruijiang; Na, Yong Hum; Xing, Lei; Lee, Rena

    2014-12-15

    Purpose: 3D optical surface imaging has been applied to patient positioning in radiation therapy (RT). The optical patient positioning system is advantageous over the conventional method using cone-beam computed tomography (CBCT) in that it is radiation free, frameless, and capable of real-time monitoring. While the conventional radiographic method uses volumetric registration, the optical system uses surface matching for patient alignment. The relative accuracy of these two methods has not yet been sufficiently investigated. This study aims to investigate the theoretical accuracy of surface registration based on a simulation study using patient data. Methods: This study compares the relative accuracy of surface and volumetric registration in head-and-neck RT. The authors examined 26 patient data sets, each consisting of planning CT data acquired before treatment and patient setup CBCT data acquired at the time of treatment. As input data for surface registration, the patient’s skin surfaces were created by contouring patient skin from the planning CT and treatment CBCT. Surface registration was performed using the iterative closest point (ICP) algorithm with a point-to-plane metric, which minimizes the normal distance between source points and target surfaces. Six degrees of freedom (three translations and three rotations) were used in both surface and volumetric registrations and the results were compared. The accuracy of each method was estimated by digital phantom tests. Results: Based on the results of 26 patients, the authors found that the average and maximum root-mean-square translation deviations between the surface and volumetric registrations were 2.7 and 5.2 mm, respectively. The residual error of the surface registration was calculated to have an average of 0.9 mm and a maximum of 1.7 mm. Conclusions: Surface registration may lead to results different from those of the conventional volumetric registration. Only limited accuracy can be achieved for patient
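
    The point-to-plane residual minimized by this ICP variant, i.e. the component of the source-to-target offset projected onto the target surface normal, can be sketched as follows (coordinates in mm; the data are hypothetical):

```python
import math

def point_to_plane_rms(src_pts, tgt_pts, tgt_normals):
    """RMS of the point-to-plane residual minimized by this ICP
    variant: the offset from each source point to its matched
    target point, projected onto the target surface normal."""
    residuals = [
        sum((s - t) * n for s, t, n in zip(sp, tp, nv))
        for sp, tp, nv in zip(src_pts, tgt_pts, tgt_normals)
    ]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical data: four matched pairs, source shifted 0.9 mm
# along the +z normal of a flat target patch
tgt = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
normals = [(0.0, 0.0, 1.0)] * 4
src = [(x, y, z + 0.9) for x, y, z in tgt]
print(point_to_plane_rms(src, tgt, normals))  # ≈ 0.9
```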

  14. Computer simulation results of attitude estimation of earth orbiting satellites

    NASA Technical Reports Server (NTRS)

    Kou, S. R.

    1976-01-01

    Computer simulation results of attitude estimation of Earth-orbiting satellites (including Space Telescope) subjected to environmental disturbances and noises are presented. A decomposed linear recursive filter and a Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in the BASIC language and run on HP 9830A and HP 9866A computers. Simulation results show that the decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher-order systems, this filter has computational advantages (i.e., smaller integration and roundoff errors) over a Kalman filter.
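
    A minimal scalar Kalman filter illustrates the recursive predict/update structure that such estimators share; this is a schematic with a random-walk state model, not a reconstruction of the original BASIC programs:

```python
def kalman_1d(measurements, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter with a random-walk state model,
    illustrating the predict/update recursion (a schematic only)."""
    x, p, estimates = x0, p0, []
    for z in measurements:
        p = p + q                 # predict: process noise inflates variance
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the measurement residual
        p = (1.0 - k) * p         # posterior variance
        estimates.append(x)
    return estimates

# Noisy readings of a constant attitude angle (radians, made up)
print(kalman_1d([1.02, 0.98, 1.01, 0.99])[-1])  # settles near 1.0
```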

  15. On Achieving Experimental Accuracy from Molecular Dynamics Simulations of Flexible Molecules: Aqueous Glycerol

    PubMed Central

    Yongye, Austin B.; Foley, B. Lachele; Woods, Robert J.

    2014-01-01

    The rotational isomeric states (RIS) of glycerol at infinite dilution have been characterized in the aqueous phase via a 1 μs conventional molecular dynamics (MD) simulation, a 40 ns enhanced sampling replica exchange molecular dynamics (REMD) simulation, and a reevaluation of the experimental NMR data. The MD and REMD simulations employed the GLYCAM06/AMBER force field with explicit treatment of solvation. The shorter time scale of the REMD sampling method gave rise to RIS and theoretical scalar 3JHH coupling constants that were comparable to those from the much longer traditional MD simulation. The 3JHH coupling constants computed from the MD methods were in excellent agreement with those observed experimentally. Despite the agreement between the computed and the experimental J-values, there were variations between the rotamer populations computed directly from the MD data and those derived from the experimental NMR data. The experimentally derived populations were determined utilizing limiting J-values from an analysis of NMR data from substituted ethane molecules and may not be completely appropriate for application in more complex molecules, such as glycerol. Here, new limiting J-values have been derived via a combined MD and quantum mechanical approach and were used to decompose the experimental 3JHH coupling constants into population distributions for the glycerol RIS. PMID:18311953
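
    The decomposition of couplings into rotamer populations rests on a Karplus-type relation between 3JHH and the H-C-C-H dihedral angle; the coefficients and populations below are generic placeholders, not the limiting J-values derived in the paper:

```python
import math

def karplus_3jhh(phi_deg, a=9.5, b=-1.6, c=1.8):
    """Karplus-type relation 3J(phi) = a*cos^2(phi) + b*cos(phi) + c.
    Coefficients are illustrative placeholders, not the limiting
    J-values derived in the paper."""
    cos_phi = math.cos(math.radians(phi_deg))
    return a * cos_phi * cos_phi + b * cos_phi + c

def ensemble_j(populations, dihedrals_deg):
    """Population-weighted average coupling over rotameric states."""
    return sum(p * karplus_3jhh(phi) for p, phi in zip(populations, dihedrals_deg))

# Three staggered rotamers (gauche+, trans, gauche-) with assumed populations
print(ensemble_j([0.5, 0.3, 0.2], [60.0, 180.0, -60.0]))  # ≈ 6.23 Hz
```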

  16. SU-E-J-235: Varian Portal Dosimetry Accuracy at Detecting Simulated Delivery Errors

    SciTech Connect

    Gordon, J; Bellon, M; Barton, K; Gulam, M; Chetty, I

    2014-06-01

    Purpose: To use receiver operating characteristic (ROC) analysis to quantify the Varian Portal Dosimetry (VPD) application's ability to detect delivery errors in IMRT fields. Methods: EPID and VPD were calibrated/commissioned using vendor-recommended procedures. Five clinical plans comprising 56 modulated fields were analyzed using VPD. Treatment sites were: pelvis, prostate, brain, orbit, and base of tongue. Delivery was on a Varian Trilogy linear accelerator at 6MV using a Millennium 120 multi-leaf collimator. Image pairs (VPD-predicted and measured) were exported in DICOM format. Each detection test imported an image pair into Matlab, optionally inserted a simulated error (rectangular region with intensity raised or lowered) into the measured image, performed 3%/3mm gamma analysis, and saved the gamma distribution. For a given error, 56 negative tests (without error) were performed, one per image pair. Also, 560 positive tests (with error) were performed, using randomly selected image pairs and randomly selected in-field error locations. Images were classified as errored (or error-free) if the percentage of pixels with γ<κ was < (or ≥) τ. (Conventionally, κ=1 and τ=90%.) A ROC curve was generated from the 616 tests by varying τ. For a range of κ and τ, true/false positive/negative rates were calculated. This procedure was repeated for inserted errors of different sizes. VPD was considered to reliably detect an error if images were correctly classified as errored or error-free at least 95% of the time, for some κ+τ combination. Results: 20 mm² errors with intensity altered by ≥20% could be reliably detected, as could 10 mm² errors with intensity altered by ≥50%. Errors with smaller size or intensity change could not be reliably detected. Conclusion: Varian Portal Dosimetry using 3%/3mm gamma analysis is capable of reliably detecting only those fluence errors that exceed the stated sizes. Images containing smaller errors can pass mathematical analysis, though
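
    The classification rule and ROC sweep described in the Methods can be sketched as follows; the gamma pass rates are made-up numbers, not measured data:

```python
def roc_points(pass_rates_clean, pass_rates_errored, thresholds):
    """Sweep the pass-rate threshold tau: an image is called
    'errored' when its gamma pass rate falls below tau.
    Returns (false positive rate, true positive rate) pairs."""
    pts = []
    for tau in thresholds:
        tpr = sum(r < tau for r in pass_rates_errored) / len(pass_rates_errored)
        fpr = sum(r < tau for r in pass_rates_clean) / len(pass_rates_clean)
        pts.append((fpr, tpr))
    return pts

# Made-up pass rates (percent of pixels with gamma < 1)
clean = [99.1, 98.7, 99.8, 97.5]      # error-free deliveries
errored = [84.0, 91.2, 88.5, 95.0]    # deliveries with inserted errors
print(roc_points(clean, errored, [90.0, 96.0, 99.0]))
```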

  17. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  18. Aerosol kinetic code "AERFORM": Model, validation and simulation results

    NASA Astrophysics Data System (ADS)

    Gainullin, K. G.; Golubev, A. I.; Petrov, A. M.; Piskunov, V. N.

    2016-06-01

    The aerosol kinetic code "AERFORM" is modified to simulate droplet and ice particle formation in mixed clouds. The splitting method is used to calculate condensation and coagulation simultaneously. The method is calibrated with analytic solutions of kinetic equations. Condensation kinetic model is based on cloud particle growth equation, mass and heat balance equations. The coagulation kinetic model includes Brownian, turbulent and precipitation effects. The real values are used for condensation and coagulation growth of water droplets and ice particles. The model and the simulation results for two full-scale cloud experiments are presented. The simulation model and code may be used autonomously or as an element of another code.
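
    The splitting method referred to here advances the two processes sequentially within each time step. A schematic (not AERFORM's actual interfaces), checked in the usage below against a case where the split is exact:

```python
import math

def step_splitting(state, dt, condense, coagulate):
    """First-order operator splitting: advance condensation, then
    coagulation, over the same time step (schematic of the method,
    not AERFORM's solver)."""
    return coagulate(condense(state, dt), dt)

# Calibration-style check on a case with a known analytic solution:
# two linear decay processes, for which the split is exact.
a, b = 0.3, 0.7
condense = lambda y, dt: y * math.exp(-a * dt)
coagulate = lambda y, dt: y * math.exp(-b * dt)
y = step_splitting(1.0, 0.1, condense, coagulate)
print(y, math.exp(-(a + b) * 0.1))  # the two values agree
```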

  19. Experimental and simulation results of multipacting in the 112 MHz QWR injector

    SciTech Connect

    Xin, T.; Ben-Zvi, I.; Belomestnykh, S.; Brutus, J. C.; Skaritka, J.; Wu, Q.; Xiao, B.

    2015-05-03

    The first RF commissioning of the 112 MHz QWR superconducting electron gun was done in late 2014. The coaxial Fundamental Power Coupler (FPC) and Cathode Stalk (stalk) were installed and tested for the first time. During this experiment, we observed several multipacting barriers at different gun voltage levels. Simulations were carried out over the same range. The comparison between the experimental observations and the simulation results is presented in this paper. The observations during the test are consistent with the simulation predictions. We were able to overcome most of the multipacting barriers and reach 1.8 MV gun voltage in pulsed mode after several rounds of conditioning.

  20. Preliminary Results from SCEC Earthquake Simulator Comparison Project

    NASA Astrophysics Data System (ADS)

    Tullis, T. E.; Barall, M.; Richards-Dinger, K. B.; Ward, S. N.; Heien, E.; Zielke, O.; Pollitz, F. F.; Dieterich, J. H.; Rundle, J. B.; Yikilmaz, M. B.; Turcotte, D. L.; Kellogg, L. H.; Field, E. H.

    2010-12-01

    Earthquake simulators are computer programs that simulate long sequences of earthquakes. If such simulators could be shown to produce synthetic earthquake histories that are good approximations to actual earthquake histories, they could be of great value in helping to anticipate the probabilities of future earthquakes and so could play an important role in helping to make public policy decisions. Consequently it is important to discover how realistic the earthquake histories produced by these simulators are. One way to do this is to compare their behavior with the limited knowledge we have from the instrumental, historic, and paleoseismic records of past earthquakes. Another approach, though a slow one for large events, is to use them to make predictions about future earthquake occurrence and to evaluate how well the predictions match what occurs. A final approach is to compare the results of many varied earthquake simulators to determine the extent to which the results depend on the details of the approaches and assumptions made by each simulator. Five independently developed simulators, capable of running simulations on complicated geometries containing multiple faults, are in use by some of the authors of this abstract. Although similar in their overall purpose and design, these simulators differ widely from one another in many important details. They require as input for each fault element a value for the average slip rate as well as a value for friction parameters or stress reduction due to slip. They share the use of the boundary element method to compute stress transfer between elements. None use dynamic stress transfer by seismic waves. A notable difference is the assumption different simulators make about the constitutive properties of the faults. The earthquake simulator comparison project is designed to allow comparisons among the simulators and between the simulators and past earthquake history. The project uses sets of increasingly detailed

  1. Hyper-X Stage Separation: Simulation Development and Results

    NASA Technical Reports Server (NTRS)

    Reubush, David E.; Martin, John G.; Robinson, Jeffrey S.; Bose, David M.; Strovers, Brian K.

    2001-01-01

    This paper provides an overview of stage separation simulation development and results for NASA's Hyper-X program, a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an account of the development of the current 14-degree-of-freedom stage separation simulation tool (SepSim) and results from use of the tool in a Monte Carlo analysis to evaluate the risk of failure for the separation event. Results from use of the tool show that there is only a very small risk of failure in the separation event.
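
    A Monte Carlo risk estimate of this kind reduces to counting dispersed trials that violate a clearance criterion; the distribution and threshold below are illustrative assumptions, not SepSim's models:

```python
import random

def estimate_failure_risk(trials=100_000, clearance=0.5, seed=1):
    """Toy Monte Carlo in the spirit of a separation risk study:
    sample a dispersed miss distance (a notional Gaussian in meters;
    the real analysis disperses many vehicle parameters) and count
    cases that violate the clearance criterion."""
    rng = random.Random(seed)
    failures = sum(
        rng.gauss(2.0, 0.4) < clearance  # hypothetical dispersion
        for _ in range(trials)
    )
    return failures / trials

print(estimate_failure_risk())  # a small probability, well below 1%
```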

  2. Accuracy of Korean-Mini-Mental Status Examination Based on Seoul Neuro-Psychological Screening Battery II Results

    PubMed Central

    Kang, In-Woong; Beom, In-Gyu; Cho, Ji-Yeon

    2016-01-01

    Background: The Korean Mini-Mental Status Examination (K-MMSE) is a dementia-screening test that can be easily applied in both community and clinical settings. However, in 20% to 30% of cases, the K-MMSE produces a false-negative response. This suggests that it is necessary to evaluate the accuracy of the K-MMSE as a screening test for dementia, which can be achieved through comparison of K-MMSE and Seoul Neuropsychological Screening Battery (SNSB)-II results. Methods: The study included 713 subjects (534 male, 179 female; mean age, 69.3±6.9 years). All subjects were assessed using the K-MMSE and SNSB-II tests, the results of which were classified as normal or abnormal using a 15th-percentile cutoff. Results: The sensitivity of the K-MMSE was 48.7%, with a specificity of 89.9%. The rates of false-positive and false-negative results were 10.1% and 51.2%, respectively. In addition, the positive predictive value of the K-MMSE was 87.1%, while the negative predictive value was 55.6%. The false-negative group showed cognitive impairments in the domains of memory and executive function. The false-positive group demonstrated reduced performance in the memory recall, time orientation, attention, and calculation items of the K-MMSE. Conclusion: The results obtained in the study suggest that cognitive function might still be impaired even if an individual obtains a normal score on the K-MMSE. If the K-MMSE is combined with tests of memory or executive function, the accuracy of dementia diagnosis could be greatly improved. PMID:27274389
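
    The reported screening statistics follow from a standard 2x2 confusion table; the counts below are hypothetical, chosen only to illustrate the calculation, not the study's raw table:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-test statistics from a 2x2 table of
    true/false positives and negatives against the reference test."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "false_negative_rate": fn / (tp + fn),
    }

# Hypothetical counts summing to 713 subjects, for illustration only
m = screening_metrics(tp=195, fp=29, fn=205, tn=284)
print({k: round(v, 3) for k, v in m.items()})
```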

  3. Leveraging data analytics, patterning simulations and metrology models to enhance CD metrology accuracy for advanced IC nodes

    NASA Astrophysics Data System (ADS)

    Rana, Narender; Zhang, Yunlin; Kagalwala, Taher; Hu, Lin; Bailey, Todd

    2014-04-01

    Integrated Circuit (IC) technology is changing in multiple ways: 193i to EUV exposure, planar to non-planar device architecture, single-exposure lithography to multiple-exposure and DSA patterning, etc. Critical dimension (CD) control requirements are becoming stringent and more exhaustive: CD and process windows are shrinking, three-sigma CD control of < 2 nm is required in complex geometries, and metrology uncertainty of < 0.2 nm is required to achieve the target CD control for advanced IC nodes (e.g., the 14 nm, 10 nm, and 7 nm nodes). There are fundamental capability and accuracy limits in all the metrology techniques that are detrimental to the success of advanced IC nodes. Reference or physical CD metrology is provided by CD-AFM and TEM, while workhorse metrology is provided by CD-SEM, scatterometry, and Model-Based Infrared Reflectometry (MBIR). Precision alone is not sufficient moving forward. No single technique is sufficient to ensure the required accuracy of patterning. The accuracy of CD-AFM is ~1 nm, and precision in TEM is poor due to limited statistics. CD-SEM, scatterometry, and MBIR need to be calibrated by reference measurements to ensure the accuracy of patterned CDs and patterning models. There is a dire need for measurements with < 0.5 nm accuracy, and the industry currently does not have that capability with inline measurements. Being aware of the capability gaps of the various metrology techniques, we have employed data processing techniques and predictive data analytics, along with patterning simulations, metrology models, and data integration techniques, in selected applications, demonstrating the potential and practicality of such an approach to enhance CD metrology accuracy. Data from multiple metrology techniques has been analyzed in multiple ways to extract information with associated uncertainties and integrated to extract useful and more accurate CD and profile information of the structures. This paper presents the optimization of

  4. Contribution of Sample Processing to Variability and Accuracy of the Results of Pesticide Residue Analysis in Plant Commodities.

    PubMed

    Ambrus, Árpád; Buczkó, Judit; Hamow, Kamirán Á; Juhász, Viktor; Solymosné Majzik, Etelka; Szemánné Dobrik, Henriett; Szitás, Róbert

    2016-08-10

    Significant reductions in the concentrations of some pesticide residues and substantial increases in the uncertainty of results deriving from the homogenization of sample materials were reported in scientific papers long ago. Nevertheless, the performance of methods is frequently evaluated on the basis of recovery tests alone, which exclude sample processing. We studied the effect of sample processing on the accuracy and uncertainty of measured residue values with lettuce, tomato, and maize grain samples, applying mixtures of selected pesticides. The results indicate that the method is simple and robust and applicable in any pesticide residue laboratory. The amounts of analytes remaining in the final extract are influenced by their physical-chemical properties, the nature of the sample material, the temperature of comminution of the sample, and the mass of the test portion extracted. Consequently, validation protocols should include testing the effect of sample processing, and the performance of the complete method should be regularly checked within internal quality control. PMID:26755282

  5. Tempest: Mesoscale test case suite results and the effect of order-of-accuracy on pressure gradient force errors

    NASA Astrophysics Data System (ADS)

    Guerra, J. E.; Ullrich, P. A.

    2014-12-01

    Tempest is a new non-hydrostatic atmospheric modeling framework that allows for investigation and intercomparison of high-order numerical methods. It is composed of a dynamical core based on a finite-element formulation of arbitrary order operating on cubed-sphere and Cartesian meshes with topography. The underlying technology is briefly discussed, including a novel Hybrid Finite Element Method (HFEM) vertical coordinate coupled with high-order Implicit/Explicit (IMEX) time integration to control vertically propagating sound waves. Here, we show results from a suite of mesoscale test cases from the literature that demonstrate the accuracy, performance, and properties of Tempest on regular Cartesian meshes. The test cases include wave propagation behavior, Kelvin-Helmholtz instabilities, and flow interaction with topography. Comparisons are made to existing results, highlighting improvements made in resolving atmospheric dynamics in the vertical direction, where many existing methods are deficient.

  6. Autonomous navigation accuracy using simulated horizon sensor and sun sensor observations

    NASA Technical Reports Server (NTRS)

    Pease, G. E.; Hendrickson, H. T.

    1980-01-01

    A relatively simple autonomous system which would use horizon crossing indicators, a sun sensor, a quartz oscillator, and a microprogrammed computer is discussed. The sensor combination is required only to effectively measure the angle between the centers of the Earth and the Sun. Simulations for a particular orbit indicate that 2 km r.m.s. orbit determination uncertainties may be expected from a system with 0.06 deg measurement uncertainty. A key finding is that knowledge of the satellite orbit plane orientation can be maintained to this level because of the annual motion of the Sun and the predictable effects of Earth oblateness. The basic system described can be updated periodically by transits of the Moon through the IR horizon crossing indicator fields of view.
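
    The single observable, the angle between the directions to the Earth's center and the Sun, is straightforward to form from sensor-derived unit vectors; a sketch (the vectors in the usage line are arbitrary examples):

```python
import math

def earth_sun_angle(v_earth, v_sun):
    """Angle (degrees) between the satellite-to-Earth-center and
    satellite-to-Sun direction vectors, the quantity the horizon
    crossing indicators and sun sensor together effectively measure."""
    dot = sum(a * b for a, b in zip(v_earth, v_sun))
    norm = math.sqrt(sum(a * a for a in v_earth)) * math.sqrt(sum(b * b for b in v_sun))
    return math.degrees(math.acos(dot / norm))

print(earth_sun_angle([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))
```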

  7. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    SciTech Connect

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-21

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the potential development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a liquid metal cooled reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  8. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-01

Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a SNAP derivative reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate embedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  9. Technical Note: Maximising accuracy and minimising cost of a potentiometrically regulated ocean acidification simulation system

    NASA Astrophysics Data System (ADS)

    MacLeod, C. D.; Doyle, H. L.; Currie, K. I.

    2015-02-01

    This article describes a potentiometric ocean acidification simulation system which automatically regulates pH through the injection of 100% CO2 gas into temperature-controlled seawater. The system is ideally suited to long-term experimental studies of the effect of acidification on biological processes involving small-bodied (10-20 mm) calcifying or non-calcifying organisms. Using hobbyist-grade equipment, the system was constructed for approximately USD 1200 per treatment unit (tank, pH regulation apparatus, chiller, pump/filter unit). An overall tolerance of ±0.05 pHT units (SD) was achieved over 90 days in two acidified treatments (7.60 and 7.40) at 12 °C using glass electrodes calibrated with synthetic seawater buffers, thereby preventing liquid junction error. The performance of the system was validated through the independent calculation of pHT (12 °C) using dissolved inorganic carbon and total alkalinity data taken from discrete acidified seawater samples. The system was used to compare the shell growth of the marine gastropod Zeacumantus subcarinatus infected with the trematode parasite Maritrema novaezealandensis with that of uninfected snails at pH levels of 7.4, 7.6, and 8.1.

  10. Results from Binary Black Hole Simulations in Astrophysics Applications

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2007-01-01

Present and planned gravitational wave observatories are opening a new astronomical window to the sky. A key source of gravitational waves is the merger of two black holes. The Laser Interferometer Space Antenna (LISA), in particular, is expected to observe these events with signal-to-noise ratios in the thousands. To fully reap the scientific benefits of these observations requires a detailed understanding, based on numerical simulations, of the predictions of General Relativity for the waveform signals. New techniques for simulating binary black hole mergers, introduced two years ago, have led to dramatic advances in applied numerical simulation work. Over the last two years, numerical relativity researchers have made tremendous strides in understanding the late stages of binary black hole mergers. Simulations have been applied to test much of the basic physics of binary black hole interactions, showing robust results for merger waveform predictions, and illuminating such phenomena as spin-precession. Calculations have shown that merging systems can be kicked at up to 2500 km/s by the thrust from asymmetric emission. Recently, long-lasting simulations of ten or more orbits allow tests of post-Newtonian (PN) approximation results for radiation from the last orbits of the binary's inspiral. Already, analytic waveform models based on PN techniques with incorporated information from numerical simulations may be adequate for observations with current ground-based observatories. As new advances in simulations continue to rapidly improve our theoretical understanding of the systems, it seems certain that high-precision predictions will be available in time for LISA and other advanced ground-based instruments.

  11. Simulation of diurnal thermal energy storage systems: Preliminary results

    NASA Astrophysics Data System (ADS)

    Katipamula, S.; Somasundaram, S.; Williams, H. R.

    1994-12-01

This report describes the results of a simulation of thermal energy storage (TES) integrated with a simple-cycle gas turbine cogeneration system. Integrating TES with cogeneration can serve the electrical and thermal loads independently while firing all fuel in the gas turbine. The detailed engineering and economic feasibility of diurnal TES systems integrated with cogeneration systems has been described in two previous PNL reports. The objective of this study was to lay the groundwork for optimization of TES system designs using a simulation tool called TRNSYS (TRaNsient SYstem Simulation). TRNSYS is a transient simulation program with a sequential-modular structure developed at the Solar Energy Laboratory, University of Wisconsin-Madison. The two TES systems selected for the base-case simulations were: (1) a one-tank storage model to represent the oil/rock TES system; and (2) a two-tank storage model to represent the molten nitrate salt TES system. Results of the study clearly indicate that an engineering optimization of the TES system using TRNSYS is possible. The one-tank stratified oil/rock storage model described here is a good starting point for parametric studies of a TES system. Further developments to the TRNSYS library of available models (economizer, evaporator, gas turbine, etc.) are recommended so that the phase-change processes are accurately treated.

  12. Simulating lightning into the RAMS model: implementation and preliminary results

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Petracca, M.; Panegrossi, G.; Sanò, P.; Casella, D.; Dietrich, S.

    2014-05-01

This paper shows the results of a tailored version of a previously published methodology, designed to simulate lightning activity, implemented into the Regional Atmospheric Modeling System (RAMS). The method gives the flash density at the resolution of the RAMS grid scale, allowing for a detailed analysis of the evolution of simulated lightning activity. The system is applied in detail to two case studies that occurred over the Lazio Region, in Central Italy. Simulations are compared with the lightning activity detected by the LINET network. The cases refer to two thunderstorms of different intensity. Results show that the model predicts both cases reasonably well and that the lightning activity is well reproduced, especially for the most intense case. However, there are errors in timing and positioning of the convection, whose magnitude depends on the case study, and these are mirrored in timing and positioning errors of the lightning distribution. To assess the performance of the methodology objectively, standard scores are presented for four additional case studies. Scores show the ability of the methodology to simulate the daily lightning activity for different spatial scales and for two different minimum thresholds of flash number density. The performance decreases at finer spatial scales and for higher thresholds. The comparison of simulated and observed lightning activity is an immediate and powerful tool to assess the model's ability to reproduce the intensity and the evolution of the convection. This shows the importance of the use of computationally efficient lightning schemes, such as the one described in this paper, in forecast models.
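
    Standard scores of the kind mentioned here are conventionally computed from a 2x2 contingency table of forecast versus observed flash occurrence per grid cell. The following is a hedged sketch of three common scores; the counts and the idea of thresholding flash density are illustrative, not taken from the paper:

```python
def skill_scores(hits, misses, false_alarms, correct_negatives):
    """Standard dichotomous verification scores from a 2x2 contingency table."""
    total = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    # hits expected by chance, used by the equitable threat score (ETS)
    hits_random = (hits + misses) * (hits + false_alarms) / total
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return pod, far, ets

# Illustrative counts: grid cells where simulated/observed flash density
# exceeds some minimum threshold
pod, far, ets = skill_scores(hits=40, misses=10, false_alarms=20, correct_negatives=30)
print(f"POD={pod:.2f}  FAR={far:.2f}  ETS={ets:.2f}")
```

    Raising the flash-density threshold typically shrinks the hit count faster than the false-alarm count, which is one way the reported decrease in performance at higher thresholds can arise.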

  13. Development of a Haptic Elbow Spasticity Simulator (HESS) for Improving Accuracy and Reliability of Clinical Assessment of Spasticity

    PubMed Central

    Park, Hyung-Soon; Kim, Jonghyun; Damiano, Diane L.

    2013-01-01

This paper presents the framework for developing a robotic system to improve the accuracy and reliability of clinical assessment. Clinical assessment of spasticity tends to have poor reliability because of the nature of in-person assessment. To improve the accuracy and reliability of spasticity assessment, a haptic device named the HESS (Haptic Elbow Spasticity Simulator) has been designed and constructed to recreate the clinical “feel” of elbow spasticity based on quantitative measurements. A mathematical model representing the spastic elbow joint was proposed based on clinical assessment using the Modified Ashworth Scale (MAS) and quantitative data (position, velocity, and torque) collected on subjects with elbow spasticity. Four haptic models (HMs) were created to represent the haptic feel of MAS 1, 1+, 2, and 3. The four HMs were assessed by experienced clinicians: three clinicians performed both in-person and haptic assessments and had 100% agreement in MAS scores, and eight clinicians who were experienced with MAS assessed the four HMs without receiving any training prior to the test. Inter-rater reliability among the eight clinicians showed substantial agreement (κ = 0.626). The eight clinicians also rated the level of realism (7.63 ± 0.92 out of 10) as compared to their experience with real patients. PMID:22562769
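
    A kappa statistic for more than two raters assigning categorical scores, as reported here, is typically computed with Fleiss' kappa; the paper does not spell out its computation, so the following is a hedged sketch with an illustrative rating matrix (not the study's data):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects-by-categories matrix of rating counts.

    counts[i][j] = number of raters assigning subject i to category j;
    every subject must be rated by the same number of raters.
    """
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    total = n_subjects * n_raters
    # chance agreement from the marginal category proportions
    p_j = [sum(row[j] for row in counts) / total for j in range(len(counts[0]))]
    p_e = sum(p * p for p in p_j)
    # observed agreement, averaged over subjects
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_subjects
    return (p_bar - p_e) / (1 - p_e)

# Illustrative: 4 haptic models rated by 3 raters into 3 MAS-like categories
ratings = [[3, 0, 0], [0, 3, 0], [2, 1, 0], [0, 1, 2]]
print(f"kappa = {fleiss_kappa(ratings):.3f}")
```

    Values between 0.61 and 0.80 are conventionally read as "substantial agreement", which matches the interpretation given for κ = 0.626.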

  14. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    SciTech Connect

Tisseur, D.; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G.; Sollier, T.

    2015-03-31

The French Alternative Energies and Atomic Energy Commission (CEA) has for many years developed the CIVA software, dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach, based on ray tracing for transmission beam simulation, and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common X-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDE Centers (WFNDEC) 2014 RT modelling benchmark with the RT models implemented in the CIVA software.

  15. Experiment vs simulation RT WFNDEC 2014 benchmark: CIVA results

    NASA Astrophysics Data System (ADS)

    Tisseur, D.; Costin, M.; Rattoni, B.; Vienne, C.; Vabre, A.; Cattiaux, G.; Sollier, T.

    2015-03-01

The French Alternative Energies and Atomic Energy Commission (CEA) has for many years developed the CIVA software, dedicated to the simulation of NDE techniques such as Radiographic Testing (RT). RT modelling is achieved in CIVA using a combination of a deterministic approach, based on ray tracing for transmission beam simulation, and a Monte Carlo model for the scattered beam computation. Furthermore, CIVA includes various detector models, in particular common X-ray films and photostimulable phosphor plates. This communication presents the results obtained with the configurations proposed in the World Federation of NDE Centers (WFNDEC) 2014 RT modelling benchmark with the RT models implemented in the CIVA software.

  16. Recent results in analysis and simulation of beam halo

    SciTech Connect

    Ryne, Robert D.; Wangler, Thomas P.

    1995-09-15

    Understanding and predicting beam halo is a major issue for accelerator driven transmutation technologies. If strict beam loss requirements are not met, the resulting radioactivation can reduce the availability of the accelerator facility and may lead to the necessity for time-consuming remote maintenance. Recently there has been much activity related to the core-halo model of halo evolution [1-5]. In this paper we will discuss the core-halo model in the context of constant focusing channels and periodic focusing channels. We will present numerical results based on this model and we will show comparisons with results from large scale particle simulations run on a massively parallel computer. We will also present results from direct Vlasov simulations.

  17. Recent results in analysis and simulation of beam halo

    SciTech Connect

    Ryne, R.D.; Wangler, T.P.

    1994-09-01

    Understanding and predicting beam halo is a major issue for accelerator driven transmutation technologies. If strict beam loss requirements are not met, the resulting radioactivation can reduce the availability of the accelerator facility and may lead to the necessity for time-consuming remote maintenance. Recently there has been much activity related to the core-halo model of halo evolution. In this paper the authors will discuss the core-halo model in the context of constant focusing channels and periodic focusing channels. They will present numerical results based on this model and they will show comparisons with results from large scale particle simulations run on a massively parallel computer. They will also present results from direct Vlasov simulations.

  18. LENS: μLENS Simulations, Analysis, and Results

    NASA Astrophysics Data System (ADS)

    Rasco, Charles

    2013-04-01

Simulations of the Low-Energy Neutrino Spectrometer prototype, μLENS, have been performed in order to benchmark the first measurements of the μLENS detector at the Kimballton Underground Research Facility (KURF). μLENS is a 6x6x6-celled scintillation lattice filled with a linear alkylbenzene-based scintillator. We have performed simulations of μLENS using the GEANT4 toolkit. We have measured various radioactive sources, LEDs, and the environmental background radiation at KURF using up to 96 PMTs with a simplified data acquisition system of QDCs and TDCs. In this talk we will demonstrate our understanding of the light propagation in the detector and compare simulation results with μLENS measurements of various radioactive sources, LEDs, and the environmental background radiation.

  19. Some results of a simulated test for administration of activity in nuclear medicine.

    PubMed

    Oropesa, P; Hernández, A T; Serra, R A; Varela, C; Woods, M J

    2006-04-01

This paper describes the results obtained using a simulated test for the administration of activity in nuclear medicine between 2002 and 2004. Measurements in the radionuclide calibrator are made during the different stages of the procedure. The test attempts to obtain supplementary information on the quality of the measurement, with the aim of evaluating more completely the accuracy of the administered activity value compared with the prescribed one. The participants' performance has been assessed by means of a statistical analysis of the reported data. Dependencies between several attributes of the simulated administration test results are discussed. Specifically, the proportion of satisfactory results in the 2003-2004 period was found to be higher than in 2002. This reveals an improvement in activity administration in the Cuban nuclear medicine departments since 2003. PMID:16303312
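
    A difference in the proportion of satisfactory results between two periods, as reported here, is commonly checked with a two-proportion z-test. A hedged sketch with illustrative counts (the paper does not publish its raw tallies, so these numbers are invented):

```python
import math

def two_proportion_z(success1, n1, success2, n2):
    """One-sided z-test that proportion 1 exceeds proportion 2."""
    p1, p2 = success1 / n1, success2 / n2
    p = (success1 + success2) / (n1 + n2)           # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # one-sided p-value from the standard normal tail
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p_value

# Illustrative: satisfactory results in 2003-2004 vs 2002 (not the paper's data)
z, p = two_proportion_z(45, 50, 30, 40)
print(f"z = {z:.2f}, one-sided p = {p:.3f}")
```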

  20. Expected accuracy of tilt measurements on a novel hexapod-based digital zenith camera system: a Monte-Carlo simulation study

    NASA Astrophysics Data System (ADS)

Hirt, Christian; Papp, Gábor; Pál, András; Benedek, Judit; Szűcs, Eszter

    2014-08-01

    Digital zenith camera systems (DZCS) are dedicated astronomical-geodetic measurement systems for the observation of the direction of the plumb line. A DZCS key component is a pair of tilt meters for the determination of the instrumental tilt with respect to the plumb line. Highest accuracy (i.e., 0.1 arc-seconds or better) is achieved in practice through observation with precision tilt meters in opposite faces (180° instrumental rotation), and application of rigorous tilt reduction models. A novel concept proposes the development of a hexapod (Stewart platform)-based DZCS. However, hexapod-based total rotations are limited to about 30°-60° in azimuth (equivalent to ±15° to ±30° yaw rotation), which raises the question of the impact of the rotation angle between the two faces on the accuracy of the tilt measurement. The goal of the present study is the investigation of the expected accuracy of tilt measurements to be carried out on future hexapod-based DZCS, with special focus placed on the role of the limited rotation angle. A Monte-Carlo simulation study is carried out in order to derive accuracy estimates for the tilt determination as a function of several input parameters, and the results are validated against analytical error propagation. As the main result of the study, limitation of the instrumental rotation to 60° (30°) deteriorates the tilt accuracy by a factor of about 2 (4) compared to a 180° rotation between the faces. Nonetheless, a tilt accuracy at the 0.1 arc-second level is expected when the rotation is at least 45°, and 0.05 arc-second (about 0.25 microradian) accurate tilt meters are deployed. As such, a hexapod-based DZCS can be expected to allow sufficiently accurate determination of the instrumental tilt. This provides supporting evidence for the feasibility of such a novel instrumentation. The outcomes of our study are not only relevant to the field of DZCS, but also to all other types of instruments where the instrumental tilt
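
    The reported factor-of-2 (factor-of-4) accuracy loss at 60° (30°) is consistent with simple error propagation for a two-axis tilt meter: differencing the two faces removes the unknown zero offset but amplifies the measurement noise by 1/sin(θ/2) relative to a full 180° reversal. A minimal Monte-Carlo sketch of this scaling; the noise level, trial count, and two-face estimator are illustrative assumptions, not the study's configuration:

```python
import math
import random

def tilt_rms_error(theta, sigma=1.0, trials=20000, rng=None):
    """RMS error of the 2D tilt estimate from two faces separated by theta.

    Model: face 1 reads t + b + noise, face 2 reads R(theta) t + b + noise,
    where t is the tilt vector and b the unknown zero offset; differencing
    the faces and inverting (I - R) recovers t.
    """
    rng = rng or random.Random(0)
    c, s = math.cos(theta), math.sin(theta)
    det = (1 - c) ** 2 + s * s          # det(I - R) = 4 sin^2(theta/2)
    sq = 0.0
    for _ in range(trials):
        # only the noise matters for the error, so take t = b = 0
        dx = rng.gauss(0, sigma) - rng.gauss(0, sigma)
        dy = rng.gauss(0, sigma) - rng.gauss(0, sigma)
        # t_est = (I - R)^{-1} (r1 - r2), with R a 2x2 rotation matrix
        tx = ((1 - c) * dx - s * dy) / det
        ty = (s * dx + (1 - c) * dy) / det
        sq += tx * tx + ty * ty
    return math.sqrt(sq / trials)

full = tilt_rms_error(math.pi)          # 180 deg reversal as the baseline
for deg in (60, 30):
    ratio = tilt_rms_error(math.radians(deg)) / full
    theory = 1 / math.sin(math.radians(deg) / 2)
    print(f"{deg:3d} deg: error x{ratio:.2f}  (theory x{theory:.2f})")
```

    The simulated ratios reproduce the paper's factors of about 2 and 4, which is the analytical error-propagation result the Monte-Carlo study is validated against.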

  1. CFD simulation of pollutant dispersion around isolated buildings: on the role of convective and turbulent mass fluxes in the prediction accuracy.

    PubMed

    Gousseau, P; Blocken, B; van Heijst, G J F

    2011-10-30

    Computational Fluid Dynamics (CFD) is increasingly used to predict wind flow and pollutant dispersion around buildings. The two most frequently used approaches are solving the Reynolds-averaged Navier-Stokes (RANS) equations and Large-Eddy Simulation (LES). In the present study, we compare the convective and turbulent mass fluxes predicted by these two approaches for two configurations of isolated buildings with distinctive features. We use this analysis to clarify the role of these two components of mass transport on the prediction accuracy of RANS and LES in terms of mean concentration. It is shown that the proper simulation of the convective fluxes is essential to predict an accurate concentration field. In addition, appropriate parameterization of the turbulent fluxes is needed with RANS models, while only the subgrid-scale effects are modeled with LES. Therefore, when the source is located outside of recirculation regions (case 1), both RANS and LES can provide accurate results. When the influence of the building is higher (case 2), RANS models predict erroneous convective fluxes and are largely outperformed by LES in terms of prediction accuracy of mean concentration. These conclusions suggest that the choice of the appropriate turbulence model depends on the configuration of the dispersion problem under study. It is also shown that for both cases LES predicts a counter-gradient mechanism of the streamwise turbulent mass transport, which is not reproduced by the gradient-diffusion hypothesis that is generally used with RANS models. PMID:21880420

  2. Primary simulation and experimental results of a coaxial plasma accelerator

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Huang, J.; Han, J.; Zhang, Z.; Quan, R.; Wang, L.; Yang, X.; Feng, C.

A coaxial plasma accelerator with a compressing coil has been developed to simulate the impact and erosion effects of space debris on the exposed materials of spacecraft. During its adjustment operation, several measurements are conducted, including the discharge current (by Rogowski coil), the average plasma speed in the coaxial gun (by magnetic coils), and the ejected particle speed (by piezoelectric sensor). In concert with the experiment, a primary physical model is constructed in which only the coaxial gun is taken into account; the compressor coil is not considered because of its unimportant contribution to the plasma ejection speed. The calculation results of the model agree well with the diagnostic results, allowing for some simplifying assumptions. Based on the simulation results, some important suggestions for the optimum design and adjustment of the accelerator are obtained for its later operation.

  3. ANOVA parameters influence in LCF experimental data and simulation results

    NASA Astrophysics Data System (ADS)

Delprete, C.; Sesana, R.; Vercelli, A.

    2010-06-01

The virtual design of components undergoing thermomechanical fatigue (TMF) and plastic strains is usually run in many phases. The numerical finite element method gives a useful instrument which becomes increasingly effective as the geometrical and numerical modelling gets more accurate. The constitutive model definition plays an important role in the effectiveness of the numerical simulation [1, 2] as, for example, shown in Figure 1, which illustrates how a good cyclic plasticity constitutive model can simulate a cyclic load experiment. The component life estimation is the subsequent phase, and it needs complex damage and life estimation models [3-5] which take into account several parameters and phenomena contributing to damage and life duration. The calibration of these constitutive and damage models requires an accurate testing activity. In the present paper the main topic of the research activity is to investigate whether the parameters that prove influential in the experimental activity also influence the numerical simulations, thus defining the effectiveness of the models in taking into account all the phenomena actually influencing the life of the component. To this aim, a procedure to tune the parameters needed to estimate the life of mechanical components undergoing TMF and plastic strains is presented for a commercial steel. This procedure aims to be simple and to allow calibration of both the material constitutive model (for the numerical structural simulation) and the damage and life model (for life assessment). The procedure has been applied to specimens. The experimental activity has been developed on three sets of tests run at several temperatures: static tests, high cycle fatigue (HCF) tests, and low cycle fatigue (LCF) tests. The numerical structural FEM simulations have been run on a commercial nonlinear solver, ABAQUS® 6.8. The simulations replicated the experimental tests. The stress, strain, and thermal results from the thermo-structural FEM

  4. The Mayfield method of estimating nesting success: A model, estimators and simulation results

    USGS Publications Warehouse

    Hensler, G.L.; Nichols, J.D.

    1981-01-01

    Using a nesting model proposed by Mayfield we show that the estimator he proposes is a maximum likelihood estimator (m.l.e.). M.l.e. theory allows us to calculate the asymptotic distribution of this estimator, and we propose an estimator of the asymptotic variance. Using these estimators we give approximate confidence intervals and tests of significance for daily survival. Monte Carlo simulation results show the performance of our estimators and tests under many sets of conditions. A traditional estimator of nesting success is shown to be quite inferior to the Mayfield estimator. We give sample sizes required for a given accuracy under several sets of conditions.
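
    The Mayfield estimator of daily nest survival and its asymptotic variance have simple closed forms: daily survival is one minus losses per exposure-day, and nesting success is daily survival raised to the length of the nesting period. A minimal sketch using the standard Mayfield formulas; the counts and confidence level are illustrative:

```python
import math

def mayfield(losses, exposure_days, nest_period, z=1.96):
    """Mayfield daily survival rate, asymptotic 95% CI, and nest success.

    losses:        number of nests that failed during observation
    exposure_days: total nest-days of observation
    nest_period:   days a nest must survive to be successful
    """
    s = 1 - losses / exposure_days        # m.l.e. of daily survival
    var = s * (1 - s) / exposure_days     # asymptotic variance
    half = z * math.sqrt(var)
    return {
        "daily_survival": s,
        "ci": (s - half, s + half),
        "nest_success": s ** nest_period, # survival over the whole period
    }

est = mayfield(losses=5, exposure_days=500, nest_period=25)
print(f"daily survival {est['daily_survival']:.3f}, "
      f"nest success {est['nest_success']:.3f}")
```

    The traditional "apparent success" estimator (successful nests divided by nests found) ignores exposure and is biased high, which is the inferiority the simulation results above demonstrate.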

  5. Preliminary Simulation Results of the 23 June, 2001 Peruvian Tsunami

    NASA Astrophysics Data System (ADS)

    Titov, V. V.; Koshimura, S.; Ortiz, M.; Borrero, J.

    2001-12-01

The tsunami generated by the June 23, 2001 Peruvian earthquake devastated a 50 km section of coast near the earthquake epicenter and was recorded on tide gages throughout the Pacific. The coastal town of Camana sustained the most damage, with tsunami waves penetrating up to 1 km inland and runup exceeding 5 m. The extreme local effects and widespread impact motivated modeling efforts to produce a realistic tsunami simulation of this event. Preliminary results were produced by the TIME center using two resident numerical models, TUNAMI-2 and MOST. Both models were used to produce preliminary simulations shortly after the earthquake, and first results were posted on the Internet a day after the event (http://www.pmel.noaa.gov/tsunami/peru_pmel.html). These numerical results aimed to quantify the magnitude of the tsunami and, to a certain extent, to guide the post-tsunami survey. The first simulations have been revised using new data about the seismic source and the results of the post-tsunami survey. Measured inundation distances, flow depths, and runup along topographic transects are used to constrain the inundation model. Preliminary numerical analysis of tsunami inundation will be presented.

  6. Simulating lightning into the RAMS model: implementation and preliminary results

    NASA Astrophysics Data System (ADS)

    Federico, S.; Avolio, E.; Petracca, M.; Panegrossi, G.; Sanò, P.; Casella, D.; Dietrich, S.

    2014-11-01

This paper shows the results of a tailored version of a previously published methodology, designed to simulate lightning activity, implemented into the Regional Atmospheric Modeling System (RAMS). The method gives the flash density at the resolution of the RAMS grid scale, allowing for a detailed analysis of the evolution of simulated lightning activity. The system is applied in detail to two case studies that occurred over the Lazio Region, in Central Italy. Simulations are compared with the lightning activity detected by the LINET network. The cases refer to two thunderstorms of different intensity which occurred, respectively, on 20 October 2011 and on 15 October 2012. The number of flashes simulated (observed) over Lazio is 19435 (16231) for the first case and 7012 (4820) for the second case, and the model correctly reproduces the larger number of flashes that characterized the 20 October 2011 event compared to the 15 October 2012 event. There are, however, errors in timing and positioning of the convection, whose magnitude depends on the case study, and these are mirrored in timing and positioning errors of the lightning distribution. For the 20 October 2011 case study, spatial errors are of the order of a few tens of kilometres and the timing of the event is correctly simulated. For the 15 October 2012 case study, the spatial error in the positioning of the convection is of the order of 100 km and the event has a longer duration in the simulation than in reality. To assess the performance of the methodology objectively, standard scores are presented for four additional case studies. Scores show the ability of the methodology to simulate the daily lightning activity for different spatial scales and for two different minimum thresholds of flash number density. The performance decreases at finer spatial scales and for higher thresholds. The comparison of simulated and observed lightning activity is an immediate and powerful tool to assess the model's ability to reproduce the intensity and the evolution of the convection.

  7. Enhanced vision systems: results of simulation and operational tests

    NASA Astrophysics Data System (ADS)

    Hecker, Peter; Doehler, Hans-Ullrich

    1998-07-01

Today's aircrews have to handle more and more complex situations. The most critical tasks in the field of civil aviation are landing approaches and taxiing; especially under bad weather conditions the crew has to handle a tremendous workload. Therefore DLR's Institute of Flight Guidance has developed a concept for an enhanced vision system (EVS), which increases the performance and safety of the aircrew and provides comprehensive situational awareness. In previous contributions some elements of this concept have been presented, i.e. the 'Simulation of Imaging Radar for Obstacle Detection and Enhanced Vision' by Doehler and Bollmeyer 1996. The presented paper gives an overview of DLR's enhanced vision concept and research approach, which consists of two main components: simulation and experimental evaluation. In a first step, the simulation environment for enhanced vision research with a pilot in the loop is introduced. An existing fixed-base flight simulator is supplemented by real-time simulations of imaging sensors, i.e. imaging radar and infrared. By applying methods of data fusion, an enhanced vision display is generated combining different levels of information, such as terrain model data, processed images acquired by sensors, aircraft state vectors, and data transmitted via datalink. The second part of this contribution presents some experimental results. In cooperation with Daimler Benz Aerospace Sensorsystems Ulm, a test van and a test aircraft were equipped with a prototype of an imaging millimeter wave radar. This sophisticated HiVision radar is, up to now, one of the most promising sensors for all-weather operations. Images acquired by this sensor are shown, as well as results of data fusion processes based on digital terrain models. The contribution is concluded by a short video presentation.

  8. Key results from SB8 simulant flowsheet studies

    SciTech Connect

    Koopman, D. C.

    2013-04-26

    Key technically reviewed results are presented here in support of the Defense Waste Processing Facility (DWPF) acceptance of Sludge Batch 8 (SB8). This report summarizes results from simulant flowsheet studies of the DWPF Chemical Process Cell (CPC). Results include: Hydrogen generation rate for the Sludge Receipt and Adjustment Tank (SRAT) and Slurry Mix Evaporator (SME) cycles of the CPC on a 6,000 gallon basis; Volume percent of nitrous oxide, N2O, produced during the SRAT cycle; Ammonium ion concentrations recovered from the SRAT and SME off-gas; and, Dried weight percent solids (insoluble, soluble, and total) measurements and density.

  9. Dosimetric accuracy of a deterministic radiation transport based 192Ir brachytherapy treatment planning system. Part III. Comparison to Monte Carlo simulation in voxelized anatomical computational models

    SciTech Connect

    Zourari, K.; Pantelis, E.; Moutsatsos, A.; Sakelliou, L.; Georgiou, E.; Karaiskos, P.; Papagiannis, P.

    2013-01-15

Purpose: To compare TG43-based and Acuros deterministic radiation transport-based calculations of the BrachyVision treatment planning system (TPS) with corresponding Monte Carlo (MC) simulation results in heterogeneous patient geometries, in order to validate Acuros and quantify the accuracy improvement it offers relative to TG43. Methods: Dosimetric comparisons in the form of isodose lines, percentage dose difference maps, and dose volume histogram results were performed for two voxelized mathematical models resembling an esophageal and a breast brachytherapy patient, as well as an actual breast brachytherapy patient model. The mathematical models were converted to digital imaging and communications in medicine (DICOM) image series for input to the TPS. The MCNP5 v.1.40 general-purpose simulation code input files for each model were prepared using information derived from the corresponding DICOM RT exports from the TPS. Results: Comparisons of MC and TG43 results in all models showed significant differences, as reported previously in the literature and expected from the inability of the TG43-based algorithm to account for heterogeneities and model-specific scatter conditions. A close agreement was observed between MC and Acuros results in all models except for a limited number of points that lay in the penumbra of perfectly shaped structures in the esophageal model, or at distances very close to the catheters in all models. Conclusions: Acuros represents a significant dosimetry improvement relative to TG43. The assessment of the clinical significance of this accuracy improvement requires further work. Mathematical patient equivalent models and models prepared from actual patient CT series are useful complementary tools in the methodology outlined in this series of works for the benchmarking of any advanced dose calculation algorithm beyond TG43.

  10. Machine learning methods for empirical streamflow simulation: a comparison of model accuracy, interpretability, and uncertainty in seasonal watersheds

    NASA Astrophysics Data System (ADS)

    Shortridge, Julie E.; Guikema, Seth D.; Zaitchik, Benjamin F.

    2016-07-01

    In the past decade, machine learning methods for empirical rainfall-runoff modeling have seen extensive development and been proposed as a useful complement to physical hydrologic models, particularly in basins where data to support process-based models are limited. However, the majority of research has focused on a small number of methods, such as artificial neural networks, despite the development of multiple other approaches for non-parametric regression in recent years. Furthermore, this work has often evaluated model performance based on predictive accuracy alone, while not considering broader objectives, such as model interpretability and uncertainty, that are important if such methods are to be used for planning and management decisions. In this paper, we use multiple regression and machine learning approaches (including generalized additive models, multivariate adaptive regression splines, artificial neural networks, random forests, and M5 cubist models) to simulate monthly streamflow in five highly seasonal rivers in the highlands of Ethiopia and compare their performance in terms of predictive accuracy, error structure and bias, model interpretability, and uncertainty when faced with extreme climate conditions. While the relative predictive performance of models differed across basins, data-driven approaches were able to achieve reduced errors when compared to physical models developed for the region. Methods such as random forests and generalized additive models may have advantages in terms of visualization and interpretation of model structure, which can be useful in providing insights into physical watershed function. However, the uncertainty associated with model predictions under extreme climate conditions should be carefully evaluated, since certain models (especially generalized additive models and multivariate adaptive regression splines) become highly variable when faced with high temperatures.
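    The kind of model comparison described above can be illustrated with a toy example. The sketch below (synthetic seasonal rainfall-runoff data, numpy only; a k-nearest-neighbour regressor stands in for the non-parametric methods the paper compares, and all parameter values are illustrative assumptions) contrasts a linear baseline with a data-driven model by out-of-sample RMSE:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic monthly record for a highly seasonal basin: streamflow
    # driven by a one-month-lagged seasonal rainfall signal plus noise.
    months = np.arange(360)
    rain = 50 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, months.size)
    flow = 0.6 * np.roll(rain, 1) + rng.normal(0, 4, months.size)

    X, y = rain[:-1].reshape(-1, 1), flow[1:]
    X_tr, X_te = X[:300], X[300:]
    y_tr, y_te = y[:300], y[300:]

    def rmse(pred, obs):
        return float(np.sqrt(np.mean((pred - obs) ** 2)))

    # Baseline: ordinary least squares on the lagged rainfall predictor.
    A = np.c_[np.ones(len(X_tr)), X_tr]
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    lin_pred = np.c_[np.ones(len(X_te)), X_te] @ coef

    # Non-parametric stand-in: k-nearest-neighbour regression.
    def knn_predict(X_train, y_train, X_new, k=5):
        d = np.abs(X_train.ravel()[None, :] - X_new.ravel()[:, None])
        idx = np.argsort(d, axis=1)[:, :k]
        return y_train[idx].mean(axis=1)

    knn_pred = knn_predict(X_tr, y_tr, X_te)

    print("linear RMSE:", round(rmse(lin_pred, y_te), 2))
    print("kNN RMSE:   ", round(rmse(knn_pred, y_te), 2))
    ```

    On this linear synthetic record both models approach the noise floor; the paper's point is that on real basins the ranking varies and accuracy alone is not the whole story.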

  11. Preliminary Results of Laboratory Simulation of Magnetic Reconnection

    NASA Astrophysics Data System (ADS)

    Zhang, Shou-Biao; Xie, Jin-Lin; Hu, Guang-Hai; Li, Hong; Huang, Guang-Li; Liu, Wan-Dong

    2011-10-01

    In the Linear Magnetized Plasma (LMP) device at the University of Science and Technology of China, we have realized magnetic reconnection in laboratory plasma by driving parallel currents in two parallel copper plates. With emissive probes, we have measured the parallel (axial) electric field during reconnection and verified the dependence of the reconnection current on passing particles. Using a magnetic probe, we have measured the time evolution of the magnetic flux; the measured result shows no pileup of magnetic flux, consistent with the result of numerical simulation.

  12. Airflow Hazard Visualization for Helicopter Pilots: Flight Simulation Study Results

    NASA Technical Reports Server (NTRS)

    Aragon, Cecilia R.; Long, Kurtis R.

    2005-01-01

    Airflow hazards such as vortices or low level wind shear have been identified as a primary contributing factor in many helicopter accidents. US Navy ships generate airwakes over their decks, creating potentially hazardous conditions for shipboard rotorcraft launch and recovery. Recent sensor developments may enable the delivery of airwake data to the cockpit, where visualizing the hazard data may improve safety and possibly extend ship/helicopter operational envelopes. A prototype flight-deck airflow hazard visualization system was implemented on a high-fidelity rotorcraft flight dynamics simulator. Experienced helicopter pilots, including pilots from all five branches of the military, participated in a usability study of the system. Data was collected both objectively from the simulator and subjectively from post-test questionnaires. Results of the data analysis are presented, demonstrating a reduction in crash rate and other trends that illustrate the potential of airflow hazard visualization to improve flight safety.

  13. BWR Full Integral Simulation Test (FIST). Phase I test results

    SciTech Connect

    Hwang, W S; Alamgir, M; Sutherland, W A

    1984-09-01

    A new full height BWR system simulator has been built under the Full-Integral-Simulation-Test (FIST) program to investigate the system responses to various transients. The test program consists of two test phases. This report provides a summary, discussions, highlights and conclusions of the FIST Phase I tests. Eight matrix tests were conducted in FIST Phase I. These tests have investigated the large break, small break and steamline break LOCAs, as well as natural circulation and power transients. Results and governing phenomena of each test have been evaluated and discussed in detail in this report. One of the FIST program objectives is to assess the TRAC code by comparisons with test data. Two pretest predictions made with TRACB02 are presented and compared with test data in this report.

  14. Modeling results for a linear simulator of a divertor

    SciTech Connect

    Hooper, E.B.; Brown, M.D.; Byers, J.A.; Casper, T.A.; Cohen, B.I.; Cohen, R.H.; Jackson, M.C.; Kaiser, T.B.; Molvik, A.W.; Nevins, W.M.; Nilson, D.G.; Pearlstein, L.D.; Rognlien, T.D.

    1993-06-23

    A divertor simulator, IDEAL, has been proposed by S. Cohen to study the difficult power-handling requirements of the tokamak program in general and the ITER program in particular. Projections of the power density in the ITER divertor reach approximately 1 GW/m² along the magnetic field lines and > 10 MW/m² on a surface inclined at a shallow angle to the field lines. These power densities are substantially greater than can be handled reliably on the surface, so new techniques are required to reduce the power density to a reasonable level. Although the divertor physics must be demonstrated in tokamaks, a linear device could contribute to the development because of its flexibility, the easy access to the plasma and to tested components, and long pulse operation (essentially cw). However, a decision to build a simulator requires not just the recognition of its programmatic value, but also confidence that it can meet the required parameters at an affordable cost. Accordingly, as reported here, it was decided to examine the physics of the proposed device, including kinetic effects resulting from the intense heating required to reach the plasma parameters, and to conduct an independent cost estimate. The detailed role of the simulator in a divertor program is not explored in this report.

  15. Simulating Gravity Changes in Topologically Realistic Driven Earthquake Fault Systems: First Results

    NASA Astrophysics Data System (ADS)

    Schultz, Kasey W.; Sachs, Michael K.; Heien, Eric M.; Rundle, John B.; Turcotte, Don L.; Donnellan, Andrea

    2016-03-01

    Currently, GPS and InSAR measurements are used to monitor deformation produced by slip on earthquake faults. It has been suggested that another method to accomplish many of the same objectives would be through satellite-based gravity measurements. The Gravity Recovery and Climate Experiment (GRACE) mission has shown that it is possible to make detailed gravity measurements from space for climate dynamics and other purposes. To build the groundwork for a more advanced satellite-based gravity survey, we must estimate the level of accuracy needed for precise estimation of fault slip in earthquakes. We turn to numerical simulations of earthquake fault systems and use these to estimate gravity changes. The current generation of Virtual California (VC) simulates faults of any orientation, dip, and rake. In this work, we discuss these computations and the implications they have for accuracies needed for a dedicated gravity monitoring mission. Preliminary results are in agreement with previous results calculated from an older and simpler version of VC. Computed gravity changes are in the range of tens of μGal over distances up to a few hundred kilometers, near the detection threshold for GRACE.
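    The quoted magnitudes can be checked with a back-of-envelope point-mass estimate (a crude stand-in for the dislocation-based gravity computation in Virtual California; the fault dimensions, uplift, and density below are illustrative assumptions):

    ```python
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    MUGAL = 1e-8         # 1 microGal expressed in m/s^2

    # Crude point-mass stand-in for coseismic mass redistribution:
    # ~1 m of uplift over a 100 km x 30 km fault patch, crustal density.
    rho, area, uplift = 2700.0, 100e3 * 30e3, 1.0
    dm = rho * area * uplift          # redistributed mass, kg

    for r_km in (50, 100, 300):
        r = r_km * 1e3
        dg = G * dm / r**2            # point-mass gravity change
        print(f"{r_km:4d} km: {dg / MUGAL:6.1f} microGal")
    ```

    For these assumed source parameters the estimate gives tens of microGal at 50 km, falling below 1 microGal by a few hundred kilometers, consistent with the range and the GRACE-threshold remark in the abstract.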

  16. Assessment of the accuracy of an MCNPX-based Monte Carlo simulation model for predicting three-dimensional absorbed dose distributions

    PubMed Central

    Titt, U; Sahoo, N; Ding, X; Zheng, Y; Newhauser, W D; Zhu, X R; Polf, J C; Gillin, M T; Mohan, R

    2014-01-01

    In recent years, the Monte Carlo method has been used in a large number of research studies in radiation therapy. For applications such as treatment planning, it is essential to validate the dosimetric accuracy of the Monte Carlo simulations in heterogeneous media. AAPM Report No. 105 addresses issues concerning clinical implementation of Monte Carlo based treatment planning for photon and electron beams; however, for proton-therapy planning, such guidance is not yet available. Here we present the results of our validation of the Monte Carlo model of the double scattering system used at our Proton Therapy Center in Houston. In this study, we compared Monte Carlo simulated depth doses and lateral profiles to measured data for a wide range of beam parameters. We varied simulated proton energies and widths of the spread-out Bragg peaks, and compared them to measurements obtained during the commissioning phase of the Proton Therapy Center in Houston. Of 191 simulated data sets, 189 agreed with measured data sets to within 3% of the maximum dose difference and within 3 mm of the maximum range or penumbra size difference. The two simulated data sets that did not agree with the measured data sets were in the distal falloff of the measured dose distribution, where large dose gradients potentially produce large differences on the basis of minute changes in the beam steering. Hence, the Monte Carlo models of medium- and large-size double scattering proton-therapy nozzles were valid for proton beams in the 100 MeV–250 MeV interval. PMID:18670050
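    The 3%/3 mm agreement criterion used above can be sketched as a simple one-dimensional check on depth-dose curves. The following is a crude dose-difference / distance-to-agreement test on toy Gaussian "Bragg peak" data (an illustrative simplification, not a clinical gamma analysis):

    ```python
    import numpy as np

    def passes_3pct_3mm(depth_mm, measured, simulated):
        """Crude 1-D agreement check: each simulated point must match the
        measured curve at the same depth to within 3% of the maximum dose,
        or match some measured point within 3 mm (distance-to-agreement)."""
        tol = 0.03 * measured.max()
        for i, d_sim in enumerate(simulated):
            if abs(d_sim - measured[i]) <= tol:
                continue                      # dose-difference criterion met
            near = np.abs(depth_mm - depth_mm[i]) <= 3.0
            if np.min(np.abs(measured[near] - d_sim)) <= tol:
                continue                      # distance-to-agreement met
            return False
        return True

    z = np.linspace(0.0, 150.0, 301)                      # depth, mm
    measured    = np.exp(-((z - 120.0) / 8.0) ** 2) + 0.3  # toy "Bragg peak"
    shifted_1mm = np.exp(-((z - 121.0) / 8.0) ** 2) + 0.3
    shifted_5mm = np.exp(-((z - 125.0) / 8.0) ** 2) + 0.3

    print(passes_3pct_3mm(z, measured, shifted_1mm))  # small range shift: agrees
    print(passes_3pct_3mm(z, measured, shifted_5mm))  # large range shift: fails
    ```

    On the steep distal flank a 1 mm shift already violates the 3% dose criterion pointwise, and only the 3 mm distance-to-agreement term rescues it, which mirrors why the abstract's two failing data sets lie in the distal falloff.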

  17. Distortion measurement of antennas under space simulation conditions with high accuracy and high resolution by means of holography

    NASA Technical Reports Server (NTRS)

    Frey, H. U.

    1984-01-01

    The use of laser holography for measuring the distortion of antennas under space simulation conditions is described. The subject is the so-called double-exposure procedure, which allows measurement of distortions on the order of 1 to 30 micrometers (± 0.5 micrometer per hologram) over an area of up to 4 m in diameter. The method of holography takes into account the constraints of the space simulation facility. The test method, the test setup, and the constraints imposed by the space simulation facility are described. The results of the performed tests are presented and compared with the theoretical predictions. The test on the K-band antenna, for example, showed a distortion of approximately 140 ± 5 micrometers measured during the cooldown from -10 °C to -120 °C.

  18. Accuracy of numerical functional transforms applied to derive Molière series terms and comparison with analytical results

    NASA Astrophysics Data System (ADS)

    Takahashi, N.; Okei, K.; Nakatsuka, T.

    Accuracies of numerical Fourier and Hankel transforms are examined with the Takahasi-Mori theory of error evaluation. The higher Molière terms, both for spatial and projected distributions, derived by these methods agree very well with those derived analytically. The methods will be valuable for solving other transport problems concerning fast charged particles.
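    Independently of the Takahasi-Mori error analysis, a zeroth-order numerical Hankel transform of the kind examined here can be sketched with plain trapezoidal quadrature and checked against the self-reciprocal Gaussian pair, for which the transform of exp(-r²/2) is exp(-k²/2) (the grid sizes and truncation radius below are illustrative choices):

    ```python
    import math

    def j0(x, n=200):
        # Bessel J0 via its integral representation
        # J0(x) = (1/pi) * int_0^pi cos(x sin t) dt  (trapezoid rule)
        h = math.pi / n
        s = 0.5 * (math.cos(0.0) + math.cos(x * math.sin(math.pi)))
        for i in range(1, n):
            s += math.cos(x * math.sin(i * h))
        return s * h / math.pi

    def hankel0(f, k, r_max=10.0, n=2000):
        # Zeroth-order Hankel transform F(k) = int_0^inf f(r) J0(kr) r dr,
        # truncated at r_max (integrand vanishes at r = 0)
        h = r_max / n
        s = 0.0
        for i in range(1, n):
            r = i * h
            s += f(r) * j0(k * r) * r
        s += 0.5 * f(r_max) * j0(k * r_max) * r_max
        return s * h

    f = lambda r: math.exp(-r * r / 2)     # Gaussian: self-reciprocal pair
    for k in (0.5, 1.0, 2.0):
        exact = math.exp(-k * k / 2)
        print(f"k={k}: numeric={hankel0(f, k):.6f} exact={exact:.6f}")
    ```

    The trapezoid rule converges rapidly here because the J0 integrand has vanishing endpoint derivatives; for the slowly decaying, oscillatory integrands of the Molière terms themselves, the double-exponential (Takahasi-Mori) treatment the paper uses becomes essential.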

  19. Simulation results of corkscrew motion in DARHT-II

    SciTech Connect

    Chan, K. D.; Ekdahl, C. A.; Chen, Y. J.; Hughes, T. P.

    2003-01-01

    DARHT-II, the second axis of the Dual-Axis Radiographic Hydrodynamics Test Facility, is being commissioned. DARHT-II is a linear induction accelerator producing 2-microsecond electron beam pulses at 20 MeV and 2 kA. These 2-microsecond pulses will be chopped into four short pulses to produce time resolved x-ray images. Radiographic application requires the DARHT-II beam to have excellent beam quality, and it is important to study various beam effects that may cause quality degradation of a DARHT-II beam. One of the beam dynamic effects under study is 'corkscrew' motion. For corkscrew motion, the beam centroid is deflected off axis due to misalignments of the solenoid magnets. The deflection depends on the beam energy variation, which is expected to vary by ±0.5% during the 'flat-top' part of a beam pulse. Such chromatic aberration will result in broadening of beam spot size. In this paper, we will report simulation results of our study of corkscrew motion in DARHT-II. Sensitivities of beam spot size to various accelerator parameters and the strategy for minimizing corkscrew motion will be described. Measured magnet misalignment is used in the simulation.

  20. EFFECT OF RADIATION DOSE LEVEL ON ACCURACY AND PRECISION OF MANUAL SIZE MEASUREMENTS IN CHEST TOMOSYNTHESIS EVALUATED USING SIMULATED PULMONARY NODULES

    PubMed Central

    Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus

    2016-01-01

    The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease of the measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in the clinical praxis. PMID:26994093

  1. Comparison of Repositioning Accuracy of Two Commercially Available Immobilization Systems for Treatment of Head-and-Neck Tumors Using Simulation Computed Tomography Imaging

    SciTech Connect

    Rotondo, Ronny L.; Sultanem, Khalil Lavoie, Isabelle; Skelly, Julie; Raymond, Luc

    2008-04-01

    Purpose: To compare the setup accuracy, comfort level, and setup time of two immobilization systems used in head-and-neck radiotherapy. Methods and Materials: Between February 2004 and January 2005, 21 patients undergoing radiotherapy for head-and-neck tumors were assigned to one of two immobilization devices: a standard thermoplastic head-and-shoulder mask fixed to a carbon fiber base (Type S) or a thermoplastic head mask fixed to the Accufix cantilever board equipped with the shoulder depression system. All patients underwent planning computed tomography (CT) followed by repeated control CT under simulation conditions during the course of therapy. The CT images were subsequently co-registered and setup accuracy was examined by recording displacement in the three Cartesian planes at six anatomic landmarks and calculating the three-dimensional vector errors. In addition, the setup time and comfort of the two systems were compared. Results: A total of 64 CT data sets were analyzed. No difference was found in the Cartesian total displacement errors or total vector displacement errors between the two populations at any landmark considered. A trend was noted toward a smaller mean systematic error for the upper landmarks, favoring the Accufix system. No difference was noted in the setup time or comfort level between the two systems. Conclusion: No significant difference in the three-dimensional setup accuracy was identified between the two immobilization systems compared. The data from this study reassure us that our technique provides accurate patient immobilization, allowing us to limit our planning target volume to <4 mm when treating head-and-neck tumors.

  2. EFFECT OF RADIATION DOSE LEVEL ON ACCURACY AND PRECISION OF MANUAL SIZE MEASUREMENTS IN CHEST TOMOSYNTHESIS EVALUATED USING SIMULATED PULMONARY NODULES.

    PubMed

    Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus

    2016-06-01

    The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease of the measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in the clinical praxis. PMID:26994093

  3. Accuracy of System Step Response Roll Magnitude Estimation from Central and Peripheral Visual Displays and Simulator Cockpit Motion

    NASA Technical Reports Server (NTRS)

    Hosman, R. J. A. W.; Vandervaart, J. C.

    1984-01-01

    An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements in perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button of a keyboard device. The differing time-histories of roll angle, roll rate, and roll acceleration caused by a step response stimulate the different perception processes related to the central visual field, peripheral visual field, and vestibular organs in different, yet exactly known, ways. Experiments with either of the visual displays or cockpit motion, and some combinations of these, were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.

  4. On the accuracy of simulations of a 2D boundary layer with RANS models implemented in OpenFoam

    NASA Astrophysics Data System (ADS)

    Graves, Benjamin J.; Gomez, Sebastian; Poroseva, Svetlana V.

    2013-11-01

    The OpenFoam software is an attractive Computational Fluid Dynamics solver for evaluating new turbulence models due to the open-source nature, and the suite of existing standard model implementations. Before interpreting results obtained with a new model, a baseline for performance of the OpenFoam solver and existing models is required. In the current study we analyze the RANS models in the OpenFoam incompressible solver for two planar (two-dimensional mean flow) benchmark cases generated by the AIAA Turbulence Model Benchmarking Working Group (TMBWG): a zero-pressure-gradient flat plate and a bump-in-channel. The OpenFoam results are compared against both experimental data and simulation results obtained with the NASA CFD codes CFL3D and FUN3D. Sensitivity of simulation results to the grid resolution and model implementation are analyzed. Testing is conducted using the Spalart-Allmaras one-equation model, Wilcox's two-equation k-omega model, and the Launder-Reece-Rodi Reynolds-stress model. Simulations using both wall functions and wall-resolved (low Reynolds number) formulations are considered. The material is based upon work supported by NASA under award NNX12AJ61A.

  5. Some results on ethnic conflicts based on evolutionary game simulation

    NASA Astrophysics Data System (ADS)

    Qin, Jun; Yi, Yunfei; Wu, Hongrun; Liu, Yuhang; Tong, Xiaonian; Zheng, Bojin

    2014-07-01

    The force of ethnic separatism, essentially originating from the negative effect of ethnic identity, is damaging the stability and harmony of multiethnic countries. In order to eliminate the foundation of ethnic separatism and set up a harmonious ethnic relationship, some scholars have proposed a viewpoint: ethnic harmony could be promoted by popularizing civic identity. However, this viewpoint has been discussed only from a philosophical perspective and still lacks the support of scientific evidence. Because ethnic groups and ethnic identity are products of evolution, and ethnic identity is a parochialism strategy from the perspective of game theory, this paper proposes an evolutionary game simulation model, based on evolutionary game theory, to study the relationship between civic identity and ethnic conflict. The simulation results indicate that: (1) the ratio of individuals with civic identity has a negative association with the frequency of ethnic conflicts; (2) ethnic conflict will not die out by killing all ethnic members once and for all, nor can it be reduced by forcible pressure, i.e., abruptly increasing the ratio of individuals with civic identity; (3) the average frequency of conflicts can be kept at a low level by promoting civic identity periodically and persistently.
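    Finding (1), fewer conflicts as the civic-identity ratio grows, can be reproduced qualitatively with a minimal agent-based sketch (an illustrative toy model with made-up parameters, not the authors' evolutionary game):

    ```python
    import random

    def conflict_frequency(civic_ratio, n=1000, rounds=200, seed=1):
        """Toy pairwise-interaction model: agents carry an ethnic label and
        either a 'civic' or 'ethnic' identity. A conflict is counted when
        two ethnically different agents meet and both hold ethnic identity."""
        rng = random.Random(seed)
        agents = [(rng.randrange(2),                 # ethnic group 0 or 1
                   rng.random() < civic_ratio)       # True = civic identity
                  for _ in range(n)]
        conflicts = 0
        for _ in range(rounds):
            for _ in range(n // 2):
                a, b = rng.sample(agents, 2)
                if a[0] != b[0] and not a[1] and not b[1]:
                    conflicts += 1
        return conflicts / (rounds * (n // 2))

    for ratio in (0.0, 0.5, 0.9):
        freq = conflict_frequency(ratio)
        print(f"civic ratio {ratio:.1f}: conflict frequency {freq:.3f}")
    ```

    With a fixed population the conflict frequency scales roughly as (1 - civic_ratio)², so it falls steeply as civic identity spreads; the paper's richer model adds the evolutionary dynamics behind findings (2) and (3).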

  6. HOMs simulation and measurement results of IHEP02 cavity

    NASA Astrophysics Data System (ADS)

    Zheng, Hong-Juan; Zhai, Ji-Yuan; Zhao, Tong-Xian; Gao, Jie

    2015-11-01

    In accelerator RF cavities, there exists not only the fundamental mode used to accelerate the beam, but also higher order modes (HOMs). Higher order modes excited by the beam can seriously degrade beam quality, especially the modes with high R/Q. The higher order mode properties of the 1.3 GHz low-loss 9-cell superconducting cavity, a candidate for the ILC high-gradient cavity, have not yet been studied carefully. Based on the existing low-loss design, IHEP designed and developed a large-grain-size 1.3 GHz low-loss 9-cell superconducting cavity (the IHEP02 cavity). The higher order mode coupler of IHEP02 follows the TESLA coupler design; as a result of mechanical design limitations, the distance between the higher order mode coupler and the end cell is larger than in the TESLA cavity. This paper reports measured results for the higher order modes of the IHEP02 1.3 GHz low-loss 9-cell superconducting cavity. Using different methods, the external quality factors Qe of the dangerous mode passbands have been obtained, and the results are compared with TESLA cavity results. The R/Q values of the first three passbands have also been obtained by simulation and compared with those of the TESLA cavity. Supported by the Knowledge Innovation Project of the Chinese Academy of Sciences.

  7. Radiative Transfer Methods: new exact results for testing the accuracy of the ALI numerical method for a stellar atmosphere

    NASA Astrophysics Data System (ADS)

    Chevallier, L.

    2010-11-01

    Tests are presented of the 1D Accelerated Lambda Iteration method, which is widely used for solving the radiative transfer equation for a stellar atmosphere. We use our ARTY code as a reference solution, and tables for these tests are provided. We model a static idealized stellar atmosphere, which is illuminated on its inner face and in which internal sources are distributed with weak or strong gradients. This is an extension of published tests for a slab without incident radiation and gradients. Typical physical conditions for the continuum radiation and spectral lines are used, as well as typical values for the numerical parameters, in order to reach a 1% accuracy. It is shown that the method is able to reach such an accuracy in most cases, but the spatial discretization has to be refined for strong gradients and spectral lines, beyond the scope of realistic stellar atmosphere models. Discussion of faster methods is provided.
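    The speedup that Accelerated Lambda Iteration gives over classical Lambda iteration can be sketched on a toy discretized problem. Here the source function obeys S = (1 - eps) * Lambda[S] + eps * B, and the ALI variant splits off the diagonal of the Lambda operator; the smoothing-kernel operator and all parameters are illustrative assumptions, not the ARTY setup:

    ```python
    import numpy as np

    N, eps = 100, 1e-2                 # depth points, destruction probability
    B = np.ones(N)                     # Planck function (toy, constant)

    # Toy Lambda operator: a sharply peaked smoothing kernel with 5% photon
    # escape, standing in for the true radiative-transfer Lambda operator.
    idx = np.arange(N)
    Lam = np.exp(-2.0 * np.abs(idx[:, None] - idx[None, :]))
    Lam *= 0.95 / Lam.sum(axis=1, keepdims=True)

    # Exact discrete solution of S = (1 - eps) * Lam @ S + eps * B
    S_exact = np.linalg.solve(np.eye(N) - (1 - eps) * Lam, eps * B)

    def iterate(accelerated, tol=1e-6, max_iter=100000):
        S = eps * B.copy()
        Lstar = np.diag(Lam)           # diagonal approximate Lambda operator
        for n in range(1, max_iter + 1):
            J = Lam @ S
            if accelerated:            # ALI update with diagonal splitting
                S_new = ((1 - eps) * (J - Lstar * S) + eps * B) \
                        / (1 - (1 - eps) * Lstar)
            else:                      # classical Lambda iteration
                S_new = (1 - eps) * J + eps * B
            if np.max(np.abs(S_new - S)) < tol:
                return S_new, n
            S = S_new
        return S, max_iter

    S_li, n_li = iterate(False)
    S_ali, n_ali = iterate(True)
    print("Lambda iteration steps:", n_li)
    print("ALI steps:             ", n_ali)
    ```

    Both schemes converge to the same solution, but the ALI update needs markedly fewer sweeps because the diagonal term is treated implicitly, which is the whole point of the method the paper is testing.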

  8. Relative significance of heat transfer processes to quantify tradeoffs between complexity and accuracy of energy simulations with a building energy use patterns classification

    NASA Astrophysics Data System (ADS)

    Heidarinejad, Mohammad

    the indoor condition regardless of the contribution of internal and external loads. To deploy the methodology to another portfolio of buildings, simulated LEED NC office buildings are selected. The advantage of this approach is to isolate energy performance due to inherent building characteristics and location, rather than operational and maintenance factors that can contribute to significant variation in building energy use. A framework for detailed building energy databases with annual energy end-uses is developed to select variables and omit outliers. The results show that the high-performance office buildings are internally load dominated, with three distinct clusters of low-, medium-, and high-intensity energy use patterns among the reviewed office buildings. Low-intensity cluster buildings benefit from small building area, while the medium- and high-intensity clusters have a similar range of floor areas but different energy use intensities. Half of the energy use in the low-intensity buildings is associated with internal loads, such as lighting and plug loads, indicating that there are opportunities to save energy by using lighting or plug load management systems. A comparison between the frameworks developed for the campus buildings and the LEED NC office buildings indicates that the two frameworks are complementary. Differing data availability yielded two different procedures, suggesting that future studies of a portfolio of buildings, such as city benchmarking and disclosure ordinances, should collect and disclose the minimal required inputs suggested by this study, with monthly energy consumption granularity at a minimum. This dissertation developed automated methods using the OpenStudio API (Application Programming Interface) to create energy models based on the building class. ASHRAE Guideline 14 defines well-accepted criteria to measure accuracy of energy simulations; however, there is no well

  9. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

    A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature, and relative humidity measurements. The archived data are available at http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station to station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced by 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing, and instrument type (manufacturer) are taken into account.

  10. SLAC E144 Plots, Simulation Results, and Data

    DOE Data Explorer

    The 1997 E144 experiments at the Stanford Linear Accelerator Center (SLAC) utilized extremely high laser intensities and collided huge groups of photons so violently that electron-positron pairs, actual particles of matter and antimatter, were briefly created. Instead of matter exploding into heat and light, light actually became matter. That accomplishment opened a new path into the exploration of the interactions of electrons and photons, or quantum electrodynamics (QED). The E144 information at this website includes Feynman diagrams, simulation results, and data files. See also a series of frames showing the E144 laser colliding with a beam electron and producing an electron-positron pair at http://www.slac.stanford.edu/exp/e144/focpic/focpic.html, and lists of collaborators' papers, theses, and a page of press articles.

  11. Wastewater neutralization control based in fuzzy logic: Simulation results

    SciTech Connect

    Garrido, R.; Adroer, M.; Poch, M.

    1997-05-01

    Neutralization is a technique widely used as part of wastewater treatment processes. Due to the importance of this technique, extensive study has been devoted to its control. However, industrial wastewater neutralization control is a procedure with many problems--nonlinearity of the titration curve, variable buffering, changes in loading--and despite the efforts devoted to this subject, the problem has not been totally solved. In this paper, the authors present the development of a controller based on fuzzy logic (FLC). In order to study its effectiveness, it has been compared, by simulation, with other advanced controllers (using identification techniques and adaptive control algorithms with reference models) when faced with various types of wastewater with different buffer capacities, or when changes in the concentration of the acid present in the wastewater take place. Results obtained show that the FLC can be considered a powerful alternative for wastewater neutralization processes.
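    A fuzzy logic controller of the general kind described can be sketched minimally: triangular membership functions on the pH error, a three-rule base, and centroid defuzzification. The rules, set points, and dose scale below are hypothetical illustrations, not the controller of the paper:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b,
        falling to zero at c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def flc_dose(ph):
        """Map measured pH to a reagent-dosing rate (arbitrary units,
        positive = add base) with a three-rule fuzzy controller and
        centroid defuzzification over singleton consequents."""
        error = 7.0 - ph                      # positive: too acidic
        # Rule strengths: error is NEGATIVE / ZERO / POSITIVE
        w = {
            "neg":  tri(error, -6.0, -3.0, 0.0),
            "zero": tri(error, -1.0, 0.0, 1.0),
            "pos":  tri(error, 0.0, 3.0, 6.0),
        }
        # Singleton rule outputs: dose acid, no dose, dose base
        out = {"neg": -1.0, "zero": 0.0, "pos": 1.0}
        num = sum(w[k] * out[k] for k in w)
        den = sum(w.values()) or 1.0
        return num / den

    for ph in (4.0, 6.5, 7.0, 9.0):
        print(f"pH {ph}: dose {flc_dose(ph):+.2f}")
    ```

    The overlapping membership functions are what give the controller a smooth, nonlinear response around the set point, which is why an FLC can cope with the nonlinear titration curve without an explicit process model.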

  12. Governance of complex systems: results of a sociological simulation experiment.

    PubMed

    Adelt, Fabian; Weyer, Johannes; Fink, Robin D

    2014-01-01

    Social sciences have discussed the governance of complex systems for a long time. The following paper tackles the issue by means of experimental sociology, in order to investigate the performance of different modes of governance empirically. The simulation framework developed is based on Esser's model of sociological explanation as well as on Kroneberg's model of frame selection. The performance of governance has been measured by means of three macro and two micro indicators. Surprisingly, central control mostly performs better than decentralised coordination. However, results not only depend on the mode of governance; there is also a relation between performance and the composition of actor populations, which has not yet been investigated sufficiently. Practitioner Summary: Practitioners can gain insights into the functioning of complex systems and learn how to better manage them. Additionally, they are provided with indicators to measure the performance of complex systems. PMID:24456093

  13. Accuracy of the water column approximation in numerically simulating propagation of teleseismic PP waves and Rayleigh waves

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Ni, Sidao; Chu, Risheng; Yao, Huajian

    2016-06-01

    Numerical solvers of wave equations have been widely used to simulate global seismic waves including PP waves for modeling the 410/660 km discontinuities and Rayleigh waves for imaging crustal structure. In order to avoid extra computation cost due to ocean water effects, these numerical solvers usually adopt the water column approximation, whose accuracy depends on frequency and needs to be investigated quantitatively. In this paper, we describe a unified representation of accurate and approximate forms of the equivalent water column boundary condition as well as the free boundary condition. Then we derive an analytical form of the PP-wave reflection coefficient with the unified boundary condition, and quantify the effects of the water column approximation on amplitude and phase shift of the PP waves. We also study the effects of the water column approximation on phase velocity dispersion of the fundamental mode Rayleigh wave with a propagation matrix method. We find that with the water column approximation: (1) The errors in PP amplitude and phase shift are less than 5% and 9° at periods greater than 25 s for most oceanic regions. But at periods of 15 s or less, PP is inaccurate by up to 10% in amplitude and a few seconds in time shift for deep oceans. (2) The error in Rayleigh wave phase velocity is less than 1% at periods greater than 30 s in most oceanic regions, but the error is up to 2% for deep oceans at periods of 20 s or less. This study confirms that the water column approximation is only accurate at long periods and needs to be improved at shorter periods.
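    The physical intuition behind the approximation error can be illustrated with the elementary normal-incidence reflection coefficient R = (Z2 - Z1)/(Z2 + Z1), where Z = rho * c is acoustic impedance. This is a textbook formula, not the paper's full frequency-dependent boundary-condition derivation; the material values are typical round numbers.

```python
# Normal-incidence reflection coefficient from impedance contrast, Z = rho * c.
def reflection(rho1, c1, rho2, c2):
    z1, z2 = rho1 * c1, rho2 * c2
    return (z2 - z1) / (z2 + z1)

# Upgoing P wave in crust (rho ~ 2900 kg/m^3, vp ~ 6500 m/s) reflecting at:
r_free  = reflection(2900, 6500, 0, 0)        # a vacuum free surface
r_water = reflection(2900, 6500, 1000, 1500)  # a seafloor under ocean water

print(round(r_free, 3), round(r_water, 3))  # -1.0 vs -0.853
```

    Replacing the ocean by a free surface changes the reflection amplitude by roughly 15% in this crude picture; the paper's contribution is quantifying how the frequency-dependent equivalent water column condition closes that gap at long periods.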

  14. Accuracy of the water column approximation in numerically simulating propagation of teleseismic PP waves and Rayleigh waves

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Ni, Sidao; Chu, Risheng; Yao, Huajian

    2016-08-01

    Numerical solvers of wave equations have been widely used to simulate global seismic waves including PP waves for modelling 410/660 km discontinuity and Rayleigh waves for imaging crustal structure. In order to avoid extra computation cost due to ocean water effects, these numerical solvers usually adopt water column approximation, whose accuracy depends on frequency and needs to be investigated quantitatively. In this paper, we describe a unified representation of accurate and approximate forms of the equivalent water column boundary condition as well as the free boundary condition. Then we derive an analytical form of the PP-wave reflection coefficient with the unified boundary condition, and quantify the effects of water column approximation on amplitude and phase shift of the PP waves. We also study the effects of water column approximation on phase velocity dispersion of the fundamental mode Rayleigh wave with a propagation matrix method. We find that with the water column approximation: (1) The error of PP amplitude and phase shift is less than 5 per cent and 9° at periods greater than 25 s for most oceanic regions. But at periods of 15 s or less, PP is inaccurate up to 10 per cent in amplitude and a few seconds in time shift for deep oceans. (2) The error in Rayleigh wave phase velocity is less than 1 per cent at periods greater than 30 s in most oceanic regions, but the error is up to 2 per cent for deep oceans at periods of 20 s or less. This study confirms that the water column approximation is only accurate at long periods and it needs to be improved at shorter periods.

  15. Mid-Holocene permafrost: Results from CMIP5 simulations

    NASA Astrophysics Data System (ADS)

    Liu, Yeyi; Jiang, Dabang

    2016-01-01

    The distribution of frozen ground and active layer thickness in the Northern Hemisphere during the mid-Holocene (MH), and differences with respect to the preindustrial (PI) period, were investigated here using the Coupled Model Intercomparison Project Phase 5 (CMIP5) models. Two typical diagnostic methods, based respectively on soil temperature (Ts based; a direct method) and air temperature (Ta based; an indirect method), were employed to classify categories and extents of frozen ground. In relation to orbitally induced changes in climate, and in turn in freezing and thawing indices, the MH permafrost extent was 20.5% (1.8%) smaller than the PI, whereas seasonally frozen ground increased by 9.2% (0.8%) in the Northern Hemisphere according to the Ts-based (Ta-based) method. Active layer thickness became larger, but by ≤ 1.0 m in most permafrost areas during the MH. Intermodel disagreement remains near the permafrost boundary in both the Ts-based and Ta-based results, with the former showing less agreement among the CMIP5 models because of larger variation in the ability of land models to represent permafrost processes. However, both methods were able to reproduce the relatively degraded MH permafrost and increased active layer thickness (although with smaller magnitudes) seen in data reconstructions. Disparity between simulation and reconstruction was mainly found in the seasonally frozen ground regions at low to middle latitudes, where the reconstruction suggested a reduction of seasonally frozen ground extent to the north, whereas the simulation demonstrated a slight expansion to the south for the MH compared to the PI.
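    A common example of the indirect, Ta-based class of diagnostic mentioned above is the surface frost number F = sqrt(DDF) / (sqrt(DDF) + sqrt(DDT)), built from annual freezing and thawing degree-day sums of air temperature, with F > 0.5 indicating permafrost. The sketch below uses that textbook formulation; the paper's exact classification thresholds are not reproduced here.

```python
import math

def frost_number(ddf, ddt):
    """Surface frost number from freezing (DDF) and thawing (DDT) degree-days."""
    return math.sqrt(ddf) / (math.sqrt(ddf) + math.sqrt(ddt))

def classify(ddf, ddt):
    """Coarse frozen-ground category from air-temperature indices."""
    if frost_number(ddf, ddt) > 0.5:
        return "permafrost"          # freezing dominates the annual cycle
    return "seasonally frozen" if ddf > 0 else "unfrozen"

print(classify(4000, 500))   # cold continental site -> permafrost
print(classify(800, 2500))   # milder site -> seasonally frozen
```

    Orbitally driven warming shifts DDF down and DDT up, which is how a climate change translates directly into the permafrost-extent changes the study reports.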

  16. Improving the trust in results of numerical simulations and scientific data analytics

    SciTech Connect

    Cappello, Franck; Constantinescu, Emil; Hovland, Paul; Peterka, Tom; Phillips, Carolyn; Snir, Marc; Wild, Stefan

    2015-04-30

    approaches to address it. This paper does not focus on the trust that the execution will actually complete. The product of simulation or of data analytic executions is the final element of a potentially long chain of transformations, where each stage has the potential to introduce harmful corruptions. These corruptions may produce results that deviate from the user-expected accuracy without notifying the user of this deviation. There are many potential sources of corruption before and during the execution; consequently, in this white paper we do not focus on the protection of the end result after the execution.

  17. Evaluating the velocity accuracy of an integrated GPS/INS system: Flight test results. [Global positioning system/inertial navigation systems (GPS/INS)

    SciTech Connect

    Owen, T.E.; Wardlaw, R.

    1991-01-01

    Verifying the velocity accuracy of a GPS receiver or an integrated GPS/INS system in a dynamic environment is a difficult proposition when many of the commonly used reference systems have velocity uncertainties of the same order of magnitude as, or greater than, the GPS system. The results of flight tests aboard an aircraft, in which multiple reference systems simultaneously collected data to evaluate the accuracy of an integrated GPS/INS system, are reported. Emphasis is placed on obtaining high accuracy estimates of the velocity error of the integrated system in order to verify that velocity accuracy is maintained during both linear and circular trajectories. Three different reference systems operating in parallel during the flight tests were used to independently determine the position and velocity of the aircraft in flight: a transponder/interrogator ranging system, a laser tracker, and GPS carrier phase processing. Results obtained from these reference systems are compared against each other and against an integrated real-time differential GPS/INS system to arrive at a set of conclusions about the accuracy of the integrated system.

  18. Airborne ICESat-2 simulator (MABEL) results from Greenland

    NASA Astrophysics Data System (ADS)

    Neumann, T.; Markus, T.; Brunt, K. M.; Walsh, K.; Hancock, D.; Cook, W. B.; Brenner, A. C.; Csatho, B. M.; De Marco, E.

    2012-12-01

    The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) is a next-generation laser altimeter designed to continue key observations of sea ice freeboard, ice sheet elevation change, vegetation canopy height, earth surface elevation, and sea surface height. Scheduled for launch in mid-2016, ICESat-2 will collect data between 88 degrees north and south using a high-repetition-rate (10 kHz) laser operating at 532 nm and a photon-counting detection strategy. Our airborne simulator, the Multiple Altimeter Beam Experimental Lidar (MABEL), uses a similar photon-counting measurement strategy and operates at 532 nm (16 beams) and 1064 nm (8 beams) to collect data similar to what we expect for ICESat-2. The comparison between frequencies allows for studies of possible penetration of green light into water or snow. MABEL collects more spatially dense data than ICESat-2 (2 cm along-track vs. 70 cm for ICESat-2) and has a smaller footprint (2 m nominal diameter vs. 10 m for ICESat-2), requiring geometric and radiometric scaling to relate MABEL data to simulated ICESat-2 data. We based MABEL out of Keflavik, Iceland during April 2012, and collected ~100 hours of data from 20 km altitude over a variety of targets. MABEL collected sea ice data over the Nares Strait and off the east coast of Greenland, the latter flight in coordination with NASA's Operation IceBridge, which collected ATM data along the same track within 90 minutes of MABEL data collection. MABEL flew a variety of lines over Greenland in the southwest, the Jakobshavn region, and over the ice sheet interior, including 4 hours of coincident data with Operation IceBridge in southwest Greenland. MABEL also flew a number of calibration sites, including corner cubes in Svalbard, Summit Station (where a GPS survey of the surface elevation was collected within an hour of our overflight), and well-surveyed targets in Iceland and western Greenland.
In this presentation, we present an overview of

  19. RFI in hybrid loops - Simulation and experimental results.

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.

    1972-01-01

    A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte-Carlo simulation is used to show that the HPLL can be superior to the conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.

  20. Assessing the performance of the MM/PBSA and MM/GBSA methods: I. The accuracy of binding free energy calculations based on molecular dynamics simulations

    PubMed Central

    Hou, Tingjun; Wang, Junmei; Li, Youyong; Wang, Wei

    2011-01-01

    The Molecular Mechanics/Poisson Boltzmann Surface Area (MM/PBSA) and the Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) methods calculate binding free energies for macromolecules by combining molecular mechanics calculations and continuum solvation models. To systematically evaluate the performance of these methods, we report here an extensive study of 59 ligands interacting with six different proteins. First, we explored the effects of the length of the molecular dynamics (MD) simulation, ranging from 400 to 4800 ps, and of the solute dielectric constant (1, 2, or 4) on the binding free energies predicted by MM/PBSA. Three important conclusions emerged: (1) MD simulation length has an obvious impact on the predictions, and longer MD simulations are not always necessary to achieve better predictions; (2) the predictions are quite sensitive to the solute dielectric constant, and this parameter should be carefully determined according to the characteristics of the protein/ligand binding interface; (3) conformational entropy showed large fluctuations in MD trajectories, and a large number of snapshots are necessary to achieve stable predictions. Next, we evaluated the accuracy of the binding free energies calculated by three Generalized Born (GB) models. We found that the GB model developed by Onufriev and Case was the most successful in ranking the binding affinities of the studied inhibitors. Finally, we evaluated the performance of MM/GBSA and MM/PBSA in predicting binding free energies. Our results showed that MM/PBSA performed better in calculating absolute, but not necessarily relative, binding free energies than MM/GBSA. Considering its computational efficiency, MM/GBSA can serve as a powerful tool in drug design, where correct ranking of inhibitors is often emphasized. PMID:21117705
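    The bookkeeping these methods share is the thermodynamic cycle dG_bind = <G_complex> - <G_receptor> - <G_ligand>, with each G assembled from a gas-phase MM energy, polar and nonpolar solvation terms, and an entropy contribution. The sketch below shows only that bookkeeping; the numeric terms are made-up illustrative values, and a real calculation averages each term over many MD snapshots.

```python
def mmpbsa_g(e_mm, g_polar, g_nonpolar, t_s):
    """Free energy of one species: G = E_MM + G_solv(polar + nonpolar) - T*S."""
    return e_mm + g_polar + g_nonpolar - t_s

def binding_free_energy(complex_terms, receptor_terms, ligand_terms):
    """dG_bind from the per-species free energies (kcal/mol)."""
    return (mmpbsa_g(*complex_terms)
            - mmpbsa_g(*receptor_terms)
            - mmpbsa_g(*ligand_terms))

dG = binding_free_energy(
    complex_terms=(-5210.0, -1830.0, -48.0, 1510.0),   # hypothetical averages
    receptor_terms=(-5100.0, -1790.0, -42.0, 1480.0),
    ligand_terms=(-62.0, -35.0, -4.0, 18.0),
)
print(dG)  # -67.0; a negative value indicates favorable binding
```

    Note how dG emerges as a small difference of large numbers, which is why the abstract stresses snapshot counts and the fluctuations of the entropy term.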

  1. Simulation of optical diagnostics for crystal growth: models and results

    NASA Astrophysics Data System (ADS)

    Banish, Michele R.; Clark, Rodney L.; Kathman, Alan D.; Lawson, Shelah M.

    1991-12-01

    A computer simulation of a two-color holographic interferometric (TCHI) optical system was performed using a physical (wave) optics model. This model accurately simulates propagation through time-varying, 2-D or 3-D concentration and temperature fields as a wave phenomenon. The model calculates wavefront deformations that can be used to generate fringe patterns. This simulation modeled a proposed triglycine sulphate (TGS) flight experiment by propagating through the simplified onion-like refractive index distribution of the growing crystal and calculating the recorded wavefront deformation. The phase of this wavefront was used to generate sample interferograms that map index-of-refraction variation. Two such fringe patterns, generated at different wavelengths, were used to extract the original temperature and concentration field characteristics within the growth chamber. This demonstrates the feasibility of the TCHI crystal-growth diagnostic technique. The simulation also provides feedback to the experimental design process.
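    The reason two wavelengths suffice to separate temperature from concentration is that each measured phase shift is a different linear mix of the two effects, giving a 2x2 linear system. The coefficient values below (path length, dn/dT, dn/dC) are hypothetical stand-ins, not the TGS experiment's calibration.

```python
import math

L = 0.02                      # optical path length through the cell, m (assumed)
lams  = [532e-9, 633e-9]      # the two interferometer wavelengths (assumed)
dn_dT = [-1.0e-4, -0.9e-4]    # dn/dT at each wavelength, 1/K (assumed)
dn_dC = [1.5e-3, 1.4e-3]      # dn/dC at each wavelength (assumed)

# Phase sensitivity matrix: phi_i = (2*pi/lam_i) * (dn/dT*dT + dn/dC*dC) * L
rows = [(2 * math.pi / lam * a * L, 2 * math.pi / lam * b * L)
        for lam, a, b in zip(lams, dn_dT, dn_dC)]

def invert(phi1, phi2):
    """Recover (dT, dC) from the two measured phase shifts (Cramer's rule)."""
    (a11, a12), (a21, a22) = rows
    det = a11 * a22 - a12 * a21
    return (phi1 * a22 - a12 * phi2) / det, (a11 * phi2 - phi1 * a21) / det

# Synthetic "measurement" from the forward model, then inversion:
phi = [a * 0.5 + b * 0.01 for a, b in rows]
dT, dC = invert(*phi)
print(round(dT, 6), round(dC, 6))  # recovers 0.5 K and 0.01
```

    The system is invertible only because the dispersion of dn/dT and dn/dC differs between the two colors; if the rows were proportional, a single-color interferogram would carry the same information.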

  2. Results of a Flight Simulation Software Methods Survey

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce

    1995-01-01

    A ten-page questionnaire was mailed to members of the AIAA Flight Simulation Technical Committee in the spring of 1994. The survey inquired about various aspects of developing and maintaining flight simulation software, as well as a few questions dealing with characterization of each facility. As of this report, 19 completed surveys (out of 74 sent out) have been received. This paper summarizes those responses.

  3. Improved Accuracy in RNA-Protein Rigid Body Docking by Incorporating Force Field for Molecular Dynamics Simulation into the Scoring Function.

    PubMed

    Iwakiri, Junichi; Hamada, Michiaki; Asai, Kiyoshi; Kameda, Tomoshi

    2016-09-13

    RNA-protein interactions play fundamental roles in many biological processes. To understand these interactions, it is necessary to know the three-dimensional structures of RNA-protein complexes. However, determining the tertiary structure of these complexes is often difficult, suggesting that an accurate rigid body docking method for RNA-protein complexes is needed. In general, the rigid body docking process is divided into two steps: generating candidate structures from the individual RNA and protein structures and then narrowing down the candidates. In this study, we focus on the former problem to improve the prediction accuracy in RNA-protein docking. Our method is based on the integration of physicochemical information about RNA into ZDOCK, which is known as one of the most successful computer programs for protein-protein docking. Because recent studies showed that current force fields for molecular dynamics simulation of proteins and nucleic acids are quite accurate, we modeled the physicochemical information about RNA with force fields such as AMBER and CHARMM. A comprehensive benchmark of RNA-protein docking, using three recently developed data sets, reveals the remarkable prediction accuracy of the proposed method compared with existing docking programs: the highest success rate is 34.7% for the predicted structure of the RNA-protein complex with the best score and 79.2% for the top 3,600 predicted structures. Three full atomistic force fields for RNA (AMBER94, AMBER99, and CHARMM22) produced almost the same accurate results, showing that current force fields for nucleic acids are quite accurate. In addition, we found that the electrostatic interaction and the representation of shape complementarity between protein and RNA play important roles in accurate prediction of the native structures of RNA-protein complexes. PMID:27494732

  4. Summary Results of the Neptun Boil-Off Experiments to Investigate the Accuracy and Cooling Influence of LOFT Cladding-Surface Thermocouples (System 00)

    SciTech Connect

    E. L. Tolman; S. N. Aksan

    1981-10-01

    Nine boil-off experiments were conducted in the Swiss NEPTUN Facility primarily to obtain experimental data for assessing the perturbation effects of LOFT thermocouples during simulated small-break core uncovery conditions. The data will also be useful in assessing computer model capability to predict thermal hydraulic response data for this type of experiment. System parameters that were varied for these experiments included heater rod power, system pressure, and initial coolant subcooling. The experiments showed that the LOFT thermocouples do not cause a significant cooling influence in the rods to which they are attached. Furthermore, the accuracy of the LOFT thermocouples is within 20 K at the peak cladding temperature zone.

  5. Evaluation of the accuracy of an offline seasonally-varying matrix transport model for simulating ideal age

    DOE PAGESBeta

    Bardin, Ann; Primeau, Francois; Lindsay, Keith; Bradley, Andrew

    2016-07-21

    Newton-Krylov solvers for ocean tracers have the potential to greatly decrease the computational costs of spinning up deep-ocean tracers, which can take several thousand model years to reach equilibrium with surface processes. One version of the algorithm uses offline tracer transport matrices to simulate an annual cycle of tracer concentrations and applies Newton's method to find concentrations that are periodic in time. Here we present the impact of time-averaging the transport matrices on the equilibrium values of an ideal-age tracer. We compared annually-averaged, monthly-averaged, and 5-day-averaged transport matrices to an online simulation using the ocean component of the Community Earth System Model (CESM) with a nominal horizontal resolution of 1° × 1° and 60 vertical levels. We found that increasing the time resolution of the offline transport model reduced a low age bias from 12% for the annually-averaged transport matrices, to 4% for the monthly-averaged transport matrices, and to less than 2% for the transport matrices constructed from 5-day averages. The largest differences were in areas with strong seasonal changes in the circulation, such as the northern Indian Ocean. As a result, for many applications the relatively small bias obtained using the offline model makes the offline approach attractive, because it uses significantly fewer computer resources and is simpler to set up and run.
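    The periodic spin-up problem the abstract describes can be stated as a root-finding problem: find the tracer state x* left unchanged by one annual cycle of transport, i.e. G(x) = M(x) - x = 0. For a linear annual propagator M(x) = A x + b, Newton's method converges in one linear solve, versus thousands of brute-force "years". The 3-box "ocean" below is a toy stand-in for a real transport matrix, not data from the paper.

```python
import numpy as np

A = np.array([[0.88, 0.05, 0.00],   # toy one-"year" transport/mixing operator
              [0.10, 0.85, 0.05],
              [0.00, 0.05, 0.90]])
b = np.array([1.0, 0.0, 0.0])       # surface source added each year

# Newton step for linear M: solve (I - A) x = b once...
x_newton = np.linalg.solve(np.eye(3) - A, b)

# ...versus brute-force spin-up, iterating the annual cycle to equilibrium:
x = np.zeros(3)
for _ in range(2000):
    x = A @ x + b

print(np.allclose(x, x_newton, atol=1e-6))  # True: same equilibrium state
```

    The paper's finding is about the operator A itself: averaging it over a year, a month, or 5 days changes the equilibrium x*, with coarser averaging introducing a low bias in ideal age.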

  6. Space Geodetic Technique Co-location in Space: Simulation Results for the GRASP Mission

    NASA Astrophysics Data System (ADS)

    Kuzmicz-Cieslak, M.; Pavlis, E. C.

    2011-12-01

    The Global Geodetic Observing System (GGOS) places very stringent requirements on the accuracy and stability of future realizations of the International Terrestrial Reference Frame (ITRF): an origin definition at 1 mm or better at epoch and a temporal stability on the order of 0.1 mm/y, with similar numbers for the scale (0.1 ppb) and orientation components. These goals were derived from the requirements of Earth science problems that are currently the international community's highest priority. None of the geodetic positioning techniques can achieve this goal alone. This is due in part to the non-observability of certain attributes from a single technique. Another limitation is imposed by the extent and uniformity of the tracking network and the schedule of observational availability and number of suitable targets. The final limitation derives from the difficulty of "tying" the reference points of each technique at the same site to an accuracy that will support the GGOS goals. The future GGOS network will address decisively the ground segment and, to a certain extent, the space segment requirements. The JPL-proposed multi-technique mission GRASP (Geodetic Reference Antenna in Space) attempts to resolve the accurate tie between techniques using their co-location in space, onboard a well-designed spacecraft equipped with GNSS receivers, an SLR retroreflector array, a VLBI beacon, and a DORIS system. Using the anticipated system performance for all four techniques at the time the GGOS network is completed (ca. 2020), we generated a number of simulated data sets for the development of a TRF. Our simulation studies examine the degree to which GRASP can improve the inter-technique "tie" issue compared to the classical approach, and the likely modus operandi for such a mission. The success of the examined scenarios is judged by the quality of the origin and scale definition of the resulting TRF.

  7. Finite-volume versus streaming-based lattice Boltzmann algorithm for fluid-dynamics simulations: A one-to-one accuracy and performance study.

    PubMed

    Shrestha, Kalyan; Mompean, Gilmar; Calzavarini, Enrico

    2016-02-01

    A finite-volume (FV) discretization method for the lattice Boltzmann (LB) equation, which combines high accuracy with limited computational cost is presented. In order to assess the performance of the FV method we carry out a systematic comparison, focused on accuracy and computational performances, with the standard streaming lattice Boltzmann equation algorithm. In particular we aim at clarifying whether and in which conditions the proposed algorithm, and more generally any FV algorithm, can be taken as the method of choice in fluid-dynamics LB simulations. For this reason the comparative analysis is further extended to the case of realistic flows, in particular thermally driven flows in turbulent conditions. We report the successful simulation of high-Rayleigh number convective flow performed by a lattice Boltzmann FV-based algorithm with wall grid refinement. PMID:26986438
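    For readers unfamiliar with the "standard streaming" baseline the comparison is made against, a minimal lattice Boltzmann update alternates an exact lattice-shift streaming step with a local BGK collision. The sketch below is a 1-D D1Q3 scheme for pure diffusion, an illustrative toy rather than the authors' thermal D2Q9/FV solvers.

```python
import numpy as np

n, tau = 64, 0.8
w = np.array([2/3, 1/6, 1/6])        # D1Q3 weights for velocities c = 0, +1, -1
rho = np.ones(n); rho[n // 2] = 2.0  # density bump to diffuse away
f = w[:, None] * rho                 # populations initialized at equilibrium

for _ in range(100):
    rho = f.sum(axis=0)              # macroscopic density
    feq = w[:, None] * rho           # zero-velocity equilibrium distribution
    f += (feq - f) / tau             # BGK collision (relax toward equilibrium)
    f[1] = np.roll(f[1], 1)          # stream the c = +1 population right
    f[2] = np.roll(f[2], -1)         # stream the c = -1 population left

# Streaming is exact on the lattice, so mass is conserved to round-off:
print(abs(f.sum() - (n + 1.0)) < 1e-9, f.sum(axis=0).max() < 2.0)
```

    The exactness of the shift is the streaming method's key accuracy advantage; the FV formulation trades some of that away for geometric flexibility such as the wall grid refinement reported in the abstract.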

  8. Finite-volume versus streaming-based lattice Boltzmann algorithm for fluid-dynamics simulations: A one-to-one accuracy and performance study

    NASA Astrophysics Data System (ADS)

    Shrestha, Kalyan; Mompean, Gilmar; Calzavarini, Enrico

    2016-02-01

    A finite-volume (FV) discretization method for the lattice Boltzmann (LB) equation, which combines high accuracy with limited computational cost is presented. In order to assess the performance of the FV method we carry out a systematic comparison, focused on accuracy and computational performances, with the standard streaming lattice Boltzmann equation algorithm. In particular we aim at clarifying whether and in which conditions the proposed algorithm, and more generally any FV algorithm, can be taken as the method of choice in fluid-dynamics LB simulations. For this reason the comparative analysis is further extended to the case of realistic flows, in particular thermally driven flows in turbulent conditions. We report the successful simulation of high-Rayleigh number convective flow performed by a lattice Boltzmann FV-based algorithm with wall grid refinement.

  9. On the accuracy of thickness measurements in impact-echo testing of finite concrete specimens--numerical and experimental results.

    PubMed

    Schubert, Frank; Wiggenhauser, Herbert; Lausch, Regine

    2004-04-01

    In impact-echo testing of finite concrete structures, reflections of Rayleigh and body waves from lateral boundaries significantly affect time-domain signals and spectra. In the present paper we demonstrate by numerical simulations and experimental measurements at a concrete specimen that these reflections can lead to systematic errors in thickness determination. These effects depend not only on the dimensions of the specimen, but also on the location of the actual measuring point and on the duration of the detected time-domain signal. PMID:15047403
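    The thickness determination whose errors are analyzed above rests on the standard impact-echo relation d = beta * Cp / (2 * f), with f the dominant thickness-mode spectral peak, Cp the P-wave speed, and beta (about 0.96 for plate-like members) a shape factor. The formula is textbook impact-echo practice, not quoted in the abstract; the numbers are typical values for concrete.

```python
def thickness(cp, f_peak, beta=0.96):
    """Impact-echo thickness estimate (m) from the dominant spectral peak (Hz)."""
    return beta * cp / (2.0 * f_peak)

cp = 4000.0      # P-wave speed in concrete, m/s (typical value)
f_true = 6400.0  # thickness-mode frequency of a 0.3 m slab at this cp

print(thickness(cp, f_true))         # 0.3
# A boundary-reflection artifact that shifts the picked peak down by 5%
# biases the thickness estimate high by roughly the same relative amount:
print(round(thickness(cp, 0.95 * f_true), 3))  # 0.316
```

    This inverse dependence on the picked frequency is exactly why the lateral-boundary reflections studied in the paper translate into systematic thickness errors.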

  10. Effects of heterogeneity in aquifer permeability and biomass on biodegradation rate calculations - Results from numerical simulations

    USGS Publications Warehouse

    Scholl, M.A.

    2000-01-01

    Numerical simulations were used to examine the effects of heterogeneity in hydraulic conductivity (K) and intrinsic biodegradation rate on the accuracy of contaminant plume-scale biodegradation rates obtained from field data. The simulations were based on steady-state biodegradation of a BTEX contaminant plume under sulfate-reducing conditions, with the electron acceptor in excess. Biomass was either uniform or correlated with K to model spatially variable intrinsic biodegradation rates. A hydraulic conductivity data set from an alluvial aquifer was used to generate three sets of 10 realizations with different degrees of heterogeneity, and contaminant transport with biodegradation was simulated with BIOMOC. Biodegradation rates were calculated from the steady-state contaminant plumes using decreases in concentration with distance downgradient and a single flow velocity estimate, as is commonly done in site characterization to support the interpretation of natural attenuation. The observed rates were found to underestimate the actual rate specified in the heterogeneous model in all cases. The discrepancy between the observed rate and the 'true' rate depended on the ground water flow velocity estimate, and increased with increasing heterogeneity in the aquifer. For a lognormal K distribution with a variance of 0.46, the estimate was no more than a factor of 1.4 slower than the true rate. For an aquifer with 20% silt/clay lenses, the rate estimate was as much as nine times slower than the true rate. Homogeneous-permeability, uniform-degradation-rate simulations were used to generate predictions of remediation time with the rates estimated from the heterogeneous models. The homogeneous models generally overestimated the extent of remediation or underestimated remediation time, due to delayed degradation of contaminants in the low-K areas. 
Results suggest that aquifer characterization for natural attenuation at contaminated sites should include assessment of the presence
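    The field-style rate estimate the study evaluates can be sketched from first-order decay along a streamline: C(x) = C0 * exp(-k * x / v), so k = (v / x) * ln(C0 / C(x)). This is the standard single-velocity calculation, not the BIOMOC model itself; the concentrations and velocity are illustrative values.

```python
import math

def first_order_rate(c0, cx, x, v):
    """First-order decay rate (1/day) from concentrations x metres apart."""
    return (v / x) * math.log(c0 / cx)

v_est = 0.05   # single estimate of ground water velocity, m/day
k = first_order_rate(c0=10.0, cx=2.0, x=100.0, v=v_est)
print(round(k, 5))  # 0.0008 per day

# The estimate scales linearly with the assumed velocity, which is one way
# a single flow velocity in a heterogeneous aquifer biases the result:
print(first_order_rate(10.0, 2.0, 100.0, 2 * v_est) / k)  # 2.0
```

    Because the true plume samples a distribution of velocities, with slow degradation hiding in low-K lenses, this single-velocity estimate systematically underestimates the intrinsic rate, as the simulations show.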

  11. Albedo in the ATIC Experiment: Results of Measurements and Simulation

    NASA Technical Reports Server (NTRS)

    Sokolskaya, N. V.; Adams, J. H., Jr.; Ahn, H. S.; Bashindzhagyan, G. L.; Batkov, K. E.; Chang, J.; Christl, M.; Fazely, A. R.; Ganel, O.; Gunasingha, R. M.

    2004-01-01

    Characteristics of albedo, or backscatter current, providing a 'background' for calorimeter experiments in high energy cosmic rays are analyzed. The comparison of experimental data obtained in the flights of the ATIC spectrometer is made with simulations performed using the GEANT 3.21 code. The influence of the backscatter on charge resolution in the ATIC experiment is discussed.

  12. SOME RESULTS OF A SIMULATION OF AN URBAN SCHOOL DISTRICT.

    ERIC Educational Resources Information Center

    SISSON, ROGER L.

    A computer program which simulates the gross operational features of a large urban school district is designed to predict school district policy variables on a year-to-year basis. The model explores the consequences of varying such district parameters as student population, staff, computer equipment, numbers and sizes of school buildings, salary,…

  13. SIMULATION OF DNAPL DISTRIBUTION RESULTING FROM MULTIPLE SOURCES

    EPA Science Inventory

    A three-dimensional and three-phase (water, NAPL and gas) numerical simulator, called NAPL, was employed to study the interaction between DNAPL (PCE) plumes in a variably saturated porous media. Several model verification tests have been performed, including a series of 2-D labo...

  14. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Technical Reports Server (NTRS)

    Barrie, A.; Adrian, Mark L.; Yeh, P.-S.; Winkert, G. E.; Lobell, J. V.; Vinas, A.F.; Simpson, D. J.; Moore, T. E.

    2008-01-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6 deg x 180 deg fields-of-view (FOV) are set 90 deg apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45 deg x 180 deg fan about its nominal viewing (0 deg deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30 ms (electrons) and 150 ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5-Mb s(exp -1) of electron data while the DIS generates 1.1-Mb s(exp -1) of ion data, yielding an FPI total data rate of 7.6-Mb s(exp -1). The FPI electron/ion data are collected by the IDPU, then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5-Mb s(exp -1). Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be discussed include: review of compression algorithm; data quality

  15. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Astrophysics Data System (ADS)

    Barrie, A.; Adrian, M. L.; Yeh, P.; Winkert, G.; Lobell, J.; Vinas, A. F.; Simpson, D. G.

    2009-12-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° x 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45° x 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30 ms (electrons) and 150 ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5-Mb s-1 of electron data while the DIS generates 1.1-Mb s-1 of ion data, yielding an FPI total data rate of 7.6-Mb s-1. The FPI electron/ion data are collected by the IDPU, then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5-Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present updated simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data as well as the FPI-DIS ion data. Compression analysis is based upon a seed of re-processed Cluster
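    The telemetry arithmetic implied by the quoted data rates fixes the compression ratio the CCSDS 122.0-B-1 stage must deliver. This is simple bookkeeping from the 6.5 and 1.1 Mb/s figures in the abstract, not a result of the compression simulations themselves.

```python
des_rate = 6.5    # DES electron data, Mb/s
dis_rate = 1.1    # DIS ion data, Mb/s
allocation = 1.5  # FPI allocation to the CIDP, Mb/s

total = des_rate + dis_rate
required_ratio = total / allocation
print(round(total, 1), round(required_ratio, 1))  # 7.6 Mb/s, ~5.1:1
```

    A required ratio above 5:1 is why the study must quantify data quality: at these rates the compression cannot remain strictly lossless for all data sequences.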

  16. Fast Plasma Instrument for MMS: Data Compression Simulation Results

    NASA Astrophysics Data System (ADS)

    Barrie, A. C.; Adrian, M. L.; Yeh, P.; Winkert, G. E.; Lobell, J. V.; Viñas, A. F.; Simpson, D. G.; Moore, T. E.

    2008-12-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6° × 180° fields-of-view (FOV) are set 90° apart. Each sensor is equipped with electrostatic aperture steering to allow the sensor to scan a 45° × 180° fan about its nominal viewing (0° deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30 ms (electrons) and 150 ms (ions), respectively. To probe the diffusion regions of reconnection, the highest temporal/spatial resolution mode of FPI results in the DES complement of a given spacecraft generating 6.5 Mb s-1 of electron data while the DIS generates 1.1 Mb s-1 of ion data, yielding an FPI total data rate of 7.6 Mb s-1. The FPI electron/ion data are collected by the IDPU and then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5 Mb s-1. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. Compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be
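
The quoted rates pin down the compression the instrument must achieve; a quick sanity check of the arithmetic (all values taken from the abstract above):

```python
# Back-of-envelope check of the compression ratio implied by the abstract.
# All rates in Mb/s, as quoted for the MMS Fast Plasma Instrument (FPI).
des_rate = 6.5    # DES electron data rate
dis_rate = 1.1    # DIS ion data rate
allocation = 1.5  # FPI telemetry allocation to the CIDP

total = des_rate + dis_rate  # combined FPI output: 7.6 Mb/s
ratio = total / allocation   # minimum average compression ratio required

print(f"total = {total:.1f} Mb/s, required ratio ~ {ratio:.2f}:1")
```

A sustained average ratio of roughly 5:1 is therefore the minimum the onboard coder must deliver to fit the CIDP allocation.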

  17. FINAL SIMULATION RESULTS FOR DEMONSTRATION CASE 1 AND 2

    SciTech Connect

    David Sloan; Woodrow Fiveland

    2003-10-15

    The goal of this DOE Vision-21 project work scope was to develop an integrated suite of software tools that could be used to simulate and visualize advanced plant concepts. Existing process simulation software did not meet the DOE's objective of ''virtual simulation'', which was needed to evaluate complex cycles. The overall intent of the DOE was to improve predictive tools for cycle analysis, and to improve the component models that are used in turn to simulate equipment in the cycle. Advanced component models are available; however, a generic coupling capability that would link the advanced component models to the cycle simulation software remained to be developed. In the current project, the coupling of the cycle analysis and cycle component simulation software was based on an existing suite of programs. The challenge was to develop a general-purpose software and communications link between the cycle analysis software Aspen Plus® (marketed by Aspen Technology, Inc.) and specialized component modeling packages, as exemplified by industrial proprietary codes (utilized by ALSTOM Power Inc.) and the FLUENT® computational fluid dynamics (CFD) code (provided by Fluent Inc.). A software interface and controller, based on the open CAPE-OPEN standard, has been developed and extensively tested. Various test runs and demonstration cases have been used to confirm the viability and reliability of the software. ALSTOM Power was tasked with selecting and running two demonstration cases to test the software--(1) a conventional steam cycle (designated as Demonstration Case 1), and (2) a combined cycle test case (designated as Demonstration Case 2). Demonstration Case 1 is a 30 MWe coal-fired power plant for municipal electricity generation, while Demonstration Case 2 is a 270 MWe, natural gas-fired, combined cycle power plant. Sufficient data were available from the operation of both power plants to complete the cycle configurations. Three runs

  18. Simulation study on potential accuracy gains from dual energy CT tissue segmentation for low-energy brachytherapy Monte Carlo dose calculations

    NASA Astrophysics Data System (ADS)

    Landry, Guillaume; Granton, Patrick V.; Reniers, Brigitte; Öllers, Michel C.; Beaulieu, Luc; Wildberger, Joachim E.; Verhaegen, Frank

    2011-10-01

    This work compares Monte Carlo (MC) dose calculations for 125I and 103Pd low-dose rate (LDR) brachytherapy sources performed in virtual phantoms containing a series of human soft tissues of interest for brachytherapy. The geometries are segmented (tissue type and density assignment) based on simulated single energy computed tomography (SECT) and dual energy (DECT) images, as well as the all-water TG-43 approach. Accuracy is evaluated by comparison to a reference MC dose calculation performed in the same phantoms, where each voxel's material properties are assigned with exactly known values. The objective is to assess potential dose calculation accuracy gains from DECT. A CT imaging simulation package, ImaSim, is used to generate CT images of calibration and dose calculation phantoms at 80, 120, and 140 kVp. From the high and low energy images electron density ρe and atomic number Z are obtained using a DECT algorithm. Following a correction derived from scans of the calibration phantom, accuracy on Z and ρe of ±1% is obtained for all soft tissues with atomic number Z in [6,8] except lung. GEANT4 MC dose calculations based on DECT segmentation agreed with the reference within ±4% for 103Pd, the most sensitive source to tissue misassignments. SECT segmentation with three tissue bins as well as the TG-43 approach showed inferior accuracy with errors of up to 20%. Using seven tissue bins in our SECT segmentation brought errors within ±10% for 103Pd. In general 125I dose calculations showed higher accuracy than 103Pd. Simulated image noise was found to decrease DECT accuracy by 3-4%. Our findings suggest that DECT-based segmentation yields improved accuracy when compared to SECT segmentation with seven tissue bins in LDR brachytherapy dose calculation for the specific case of our non-anthropomorphic phantom. The validity of our conclusions for clinical geometry as well as the importance of image noise in the tissue segmentation procedure deserves further

  19. Simulation study on potential accuracy gains from dual energy CT tissue segmentation for low-energy brachytherapy Monte Carlo dose calculations.

    PubMed

    Landry, Guillaume; Granton, Patrick V; Reniers, Brigitte; Ollers, Michel C; Beaulieu, Luc; Wildberger, Joachim E; Verhaegen, Frank

    2011-10-01

    This work compares Monte Carlo (MC) dose calculations for (125)I and (103)Pd low-dose rate (LDR) brachytherapy sources performed in virtual phantoms containing a series of human soft tissues of interest for brachytherapy. The geometries are segmented (tissue type and density assignment) based on simulated single energy computed tomography (SECT) and dual energy (DECT) images, as well as the all-water TG-43 approach. Accuracy is evaluated by comparison to a reference MC dose calculation performed in the same phantoms, where each voxel's material properties are assigned with exactly known values. The objective is to assess potential dose calculation accuracy gains from DECT. A CT imaging simulation package, ImaSim, is used to generate CT images of calibration and dose calculation phantoms at 80, 120, and 140 kVp. From the high and low energy images electron density ρ(e) and atomic number Z are obtained using a DECT algorithm. Following a correction derived from scans of the calibration phantom, accuracy on Z and ρ(e) of ±1% is obtained for all soft tissues with atomic number Z ∊ [6,8] except lung. GEANT4 MC dose calculations based on DECT segmentation agreed with the reference within ±4% for (103)Pd, the most sensitive source to tissue misassignments. SECT segmentation with three tissue bins as well as the TG-43 approach showed inferior accuracy with errors of up to 20%. Using seven tissue bins in our SECT segmentation brought errors within ±10% for (103)Pd. In general (125)I dose calculations showed higher accuracy than (103)Pd. Simulated image noise was found to decrease DECT accuracy by 3-4%. Our findings suggest that DECT-based segmentation yields improved accuracy when compared to SECT segmentation with seven tissue bins in LDR brachytherapy dose calculation for the specific case of our non-anthropomorphic phantom. The validity of our conclusions for clinical geometry as well as the importance of image noise in the tissue segmentation procedure deserves
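
The SECT segmentation step described above amounts to binning voxels into a small set of tissue classes by CT number; a minimal sketch of such a binning rule (the HU edges and tissue names here are illustrative assumptions, not the calibrated three- or seven-bin schemes used in the paper):

```python
import bisect

# Hypothetical HU bin edges separating tissue classes (illustrative only;
# the paper's SECT schemes use their own calibrated thresholds).
EDGES = [-900, -120, -30, 20, 80, 120, 300]   # Hounsfield-unit thresholds
TISSUES = ["air", "lung", "adipose", "water-like",
           "muscle", "soft-tissue-dense", "cartilage", "bone"]

def assign_tissue(hu):
    """Assign a tissue bin to a voxel from its Hounsfield value."""
    return TISSUES[bisect.bisect_right(EDGES, hu)]

print(assign_tissue(-50))   # a voxel in the adipose HU range
print(assign_tissue(45))    # a voxel in the muscle HU range
```

With only a handful of bins, soft tissues whose HU ranges overlap get misassigned, which is the error source the DECT (ρe, Z) segmentation is meant to reduce.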

  20. Secondary hypoxemia exacerbates the reduction of visual discrimination accuracy and neuronal cell density in the dorsal lateral geniculate nucleus resulting from fluid percussion injury.

    PubMed

    Bauman, R A; Widholm, J J; Petras, J M; McBride, K; Long, J B

    2000-08-01

    The purpose of this study was to determine the impact of secondary hypoxemia on visual discrimination accuracy after parasagittal fluid percussion injury (FPI). Rats lived singly in test cages, where they were trained to repeatedly execute a flicker-frequency visual discrimination for food. After learning was complete, all rats were surgically prepared and then retested over the following 4-5 days to ensure recovery to presurgery levels of performance. Rats were then assigned to one of three groups [FPI + Hypoxia (IH), FPI + Normoxia (IN), or Sham Injury + Hypoxia (SH)] and were anesthetized with halothane delivered by compressed air. Immediately after injury or sham injury, rats in groups IH and SH were switched to a 13% O2 source to continue halothane anesthesia for 30 min before being returned to their test cages. Anesthesia for rats in group IN was maintained using compressed air for 30 min after injury. FPI significantly reduced visual discrimination accuracy and food intake, and increased incorrect choices. Thirty minutes of immediate posttraumatic hypoxemia significantly (1) exacerbated the FPI-induced reductions of visual discrimination accuracy and food intake, (2) further increased numbers of incorrect choices, and (3) delayed the progressive recovery of visual discrimination accuracy. Thionine stains of midbrain coronal sections revealed that, in addition to the loss of neurons seen in several thalamic nuclei following FPI, cell loss in the ipsilateral dorsal lateral geniculate nucleus (dLG) was significantly greater after FPI and hypoxemia than after FPI alone. In contrast, neuropathological changes were not evident following hypoxemia alone. These results show that, although hypoxemia alone was without effect, posttraumatic hypoxemia exacerbates FPI-induced reductions in visual discrimination accuracy and secondary hypoxemia interferes with control of the rat's choices by flicker frequency, perhaps in part as a result of neuronal loss and fiber

  1. Simulations Build Efficacy: Empirical Results from a Four-Week Congressional Simulation

    ERIC Educational Resources Information Center

    Mariani, Mack; Glenn, Brian J.

    2014-01-01

    This article describes a four-week congressional committee simulation implemented in upper level courses on Congress and the Legislative process at two liberal arts colleges. We find that the students participating in the simulation possessed high levels of political knowledge and confidence in their political skills prior to the simulation. An…

  2. Frontotemporal oxyhemoglobin dynamics predict performance accuracy of dance simulation gameplay: temporal characteristics of top-down and bottom-up cortical activities.

    PubMed

    Ono, Yumie; Nomoto, Yasunori; Tanaka, Shohei; Sato, Keisuke; Shimada, Sotaro; Tachibana, Atsumichi; Bronner, Shaw; Noah, J Adam

    2014-01-15

    We utilized the high temporal resolution of functional near-infrared spectroscopy to explore how sensory inputs (visual and rhythmic auditory cues) are processed in the cortical areas of multimodal integration to achieve coordinated motor output during unrestricted dance simulation gameplay. Using an open source clone of the dance simulation video game, Dance Dance Revolution, two cortical regions of interest were selected for study: the middle temporal gyrus (MTG) and the frontopolar cortex (FPC). We hypothesized that activity in the FPC would indicate top-down regulatory mechanisms of motor behavior, while that in the MTG would be sustained due to bottom-up integration of visual and auditory cues throughout the task. We also hypothesized that a correlation would exist between behavioral performance and the temporal patterns of the hemodynamic responses in these regions of interest. Results indicated that greater temporal accuracy of dance steps positively correlated with persistent activation of the MTG and with cumulative suppression of the FPC. When auditory cues were eliminated from the simulation, modifications in cortical responses were found depending on gameplay performance. In the MTG, high-performance players showed an increase but low-performance players displayed a decrease in the cumulative amount of the oxygenated hemoglobin response in the no-music condition compared to that in the music condition. In the FPC, high-performance players showed relatively small variance in activity regardless of the presence of auditory cues, while low-performance players showed larger differences in activity between the no-music and music conditions. These results suggest that the MTG plays an important role in the successful integration of visual and rhythmic cues, and that the FPC may exert top-down control to compensate for insufficient integrative ability of visual and rhythmic cues in the MTG. The relative relationships between these cortical areas indicated

  3. Fault induction dynamic model, suitable for computer simulation: Simulation results and experimental validation

    NASA Astrophysics Data System (ADS)

    Baccarini, Lane Maria Rabelo; de Menezes, Benjamim Rodrigues; Caminhas, Walmir Matos

    2010-01-01

    The study of induction motor behavior under abnormal conditions, and the ability to detect and predict these conditions, has been an area of increasing interest. Early detection and diagnosis of incipient faults are desirable for interactive evaluation of the running condition, product quality assurance, and improved operational efficiency of induction motors. The main difficulty in this task is the lack of accurate analytical models to describe a faulty motor. This paper proposes a dynamic model to analyze electrical and mechanical faults in induction machines that includes net asymmetries and load conditions. The model permits analysis of the interactions between different faults in order to detect possible false alarms. Simulations and experiments were performed to confirm the validity of the model.

  4. Direct drive: Simulations and results from the National Ignition Facility

    DOE PAGES Beta

    Radha, P. B.; Hohenberger, M.; Edgell, D. H.; Marozas, J. A.; Marshall, F. J.; Michel, D. T.; Rosenberg, M. J.; Seka, W.; Shvydky, A.; Boehly, T. R.; et al

    2016-04-19

    Here, the direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well-modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.

  5. Direct drive: Simulations and results from the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Radha, P. B.; Hohenberger, M.; Edgell, D. H.; Marozas, J. A.; Marshall, F. J.; Michel, D. T.; Rosenberg, M. J.; Seka, W.; Shvydky, A.; Boehly, T. R.; Collins, T. J. B.; Campbell, E. M.; Craxton, R. S.; Delettrez, J. A.; Dixit, S. N.; Frenje, J. A.; Froula, D. H.; Goncharov, V. N.; Hu, S. X.; Knauer, J. P.; McCrory, R. L.; McKenty, P. W.; Meyerhofer, D. D.; Moody, J.; Myatt, J. F.; Petrasso, R. D.; Regan, S. P.; Sangster, T. C.; Sio, H.; Skupsky, S.; Zylstra, A.

    2016-05-01

    Direct-drive implosion physics is being investigated at the National Ignition Facility. The primary goal of the experiments is twofold: to validate modeling related to implosion velocity and to estimate the magnitude of hot-electron preheat. Implosion experiments indicate that the energetics is well-modeled when cross-beam energy transfer (CBET) is included in the simulation and an overall multiplier to the CBET gain factor is employed; time-resolved scattered light and scattered-light spectra display the correct trends. Trajectories from backlit images are well modeled, although those from measured self-emission images indicate increased shell thickness and reduced shell density relative to simulations. Sensitivity analyses indicate that the most likely cause for the density reduction is nonuniformity growth seeded by laser imprint and not laser-energy coupling. Hot-electron preheat is at tolerable levels in the ongoing experiments, although it is expected to increase after the mitigation of CBET. Future work will include continued model validation, imprint measurements, and mitigation of CBET and hot-electron preheat.

  6. Implementation and Simulation Results using Autonomous Aerobraking Development Software

    NASA Technical Reports Server (NTRS)

    Maddock, Robert W.; DwyerCianciolo, Alicia M.; Bowes, Angela; Prince, Jill L. H.; Powell, Richard W.

    2011-01-01

    An Autonomous Aerobraking software system is currently under development with support from the NASA Engineering and Safety Center (NESC) that would move typically ground-based operations functions onboard an aerobraking spacecraft, reducing mission risk and mission cost. The suite of software that will enable autonomous aerobraking is the Autonomous Aerobraking Development Software (AADS), which consists of an ephemeris model, an onboard atmosphere estimator, temperature and loads prediction, and a maneuver calculation. The software calculates the maneuver time, magnitude, and direction commands to maintain the spacecraft periapsis parameters within design structural load and/or thermal constraints. The AADS is currently being tested in simulations at Mars, with plans to also evaluate feasibility and performance at Venus and Titan.
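
The maneuver calculation described above keeps the periapsis conditions inside a corridor bounded by structural and thermal constraints; a toy sketch of that decision logic (the function name and threshold values are invented for illustration, not taken from AADS):

```python
def corridor_maneuver(predicted_heat_rate, q_max=0.35, q_min=0.20):
    """Toy aerobraking corridor check (W/cm^2 thresholds are illustrative).

    Returns the commanded action: raise periapsis if the predicted heat
    rate exceeds the upper corridor bound, lower it if the pass is too
    benign to make drag progress, otherwise do nothing.
    """
    if predicted_heat_rate > q_max:
        return "raise periapsis"
    if predicted_heat_rate < q_min:
        return "lower periapsis"
    return "no maneuver"

print(corridor_maneuver(0.40))  # hot pass: command a periapsis raise
print(corridor_maneuver(0.25))  # inside the corridor: no maneuver
```

The real onboard logic additionally computes maneuver time, magnitude, and direction from the ephemeris and atmosphere estimates; the sketch only shows the corridor decision itself.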

  7. Relative efficiency and accuracy of two Navier-Stokes codes for simulating attached transonic flow over wings

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Wornom, Stephen F.

    1991-01-01

    Two codes which solve the 3-D Thin Layer Navier-Stokes (TLNS) equations are used to compute the steady state flow for two test cases representing typical finite wings at transonic conditions. Several grids of C-O topology and varying point densities are used to determine the effects of grid refinement. After a description of each code and test case, standards for determining code efficiency and accuracy are defined and applied to determine the relative performance of the two codes in predicting turbulent transonic wing flows. Comparisons of computed surface pressure distributions with experimental data are made.

  8. Research on the classification result and accuracy of building windows in high resolution satellite images: take the typical rural buildings in Guangxi, China, as an example

    NASA Astrophysics Data System (ADS)

    Li, Baishou; Gao, Yujiu

    2015-12-01

    The information extracted from high spatial resolution remote sensing images has become one of the important data sources for large-scale GIS spatial database updating. Because of the large volume of regional high spatial resolution satellite image data, monitoring building information with high-resolution remote sensing, extracting small-scale building information, and analyzing its quality have become important preconditions for applying high-resolution satellite image information. In this paper, a clustering segmentation classification evaluation method for high resolution satellite images of typical rural buildings is proposed based on the traditional K-Means clustering algorithm. Separability and building density factors were used to describe the image classification characteristics of a clustering window. The sensitivity of the factors influencing the clustering result was studied from the perspective of the separability between target and background spectra in the image. This study showed that the sample size is an important factor influencing clustering accuracy and performance; the pixel ratio of the objects in images and the separation factor can be used to determine the specific impact of cluster-window subsets on the clustering accuracy; and the count of window target pixels (Nw) does not alone affect clustering accuracy. The results can provide an effective reference for the quality assessment of the segmentation and classification of high spatial resolution remote sensing images.
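
The clustering-window analysis described above rests on ordinary K-Means over pixel values plus a per-window target-pixel ratio; a self-contained 1-D sketch on synthetic data (deterministic centroid initialization; the pixel values and cluster counts are illustrative, not from the paper):

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Tiny deterministic 1-D K-Means (centroids seeded at value extremes)."""
    centroids = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        # Assign each value to its nearest centroid, then update centroids.
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = values[labels == j].mean()
    return labels, centroids

# Synthetic clustering window: bright "building" pixels on a darker background.
rng = np.random.default_rng(0)
window = np.concatenate([rng.normal(60, 5, 300),     # background pixels
                         rng.normal(180, 8, 100)])   # building-roof pixels
labels, cents = kmeans_1d(window)
building_ratio = (labels == np.argmax(cents)).mean()  # target-pixel ratio
print(f"building pixel ratio ~ {building_ratio:.2f}")
```

The target-pixel ratio computed per window is the kind of "building density" factor the study uses to characterize how a cluster-window subset affects classification accuracy.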

  9. Chromium coatings by HVOF thermal spraying: Simulation and practical results

    SciTech Connect

    Knotek, O.; Lugscheider, E.; Jokiel, P.; Schnaut, U.; Wiemers, A.

    1994-12-31

    In recent years, High Velocity Oxygen-Fuel (HVOF) thermal spraying has come to be considered an asset to the family of thermal spraying processes. It has proven successful especially for spray materials with melting points below 3,000 K, since it shows advantages when compared to coating processes that produce similar qualities. HVOF thermally sprayed coatings seem particularly advantageous for enlarging the fields of thermal spraying applications into regions of rather low coating thickness, e.g. about 50-100 µm. The usual evaluation of optimized spraying parameters, including spray distance, traverse speed, gas flow rates, etc., is, however, based on numerous and extensive experiments laid out by trial-and-error or statistical experimental design, and is thus expensive: manpower and material are required, spray systems are occupied by experimental work, and the optimal solution is called into question whenever, for instance, a new powder fraction or nozzle is used. In this paper, the possibility of reducing such experimental effort through modeling and simulation is exemplified for producing thin chromium coatings with a CDS™-HVOF system. The aim is the production of thermally sprayed chromium coatings competing with galvanic hard chromium platings, which are applied to reduce friction and corrosion but are environmentally disadvantageous to produce.

  10. Stellar populations of stellar halos: Results from the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Cook, B. A.; Conroy, C.; Pillepich, A.; Hernquist, L.

    2016-08-01

    The influence of both major and minor mergers is expected to significantly affect gradients of stellar ages and metallicities in the outskirts of galaxies. Measurements of observed gradients are beginning to reach large radii in galaxies, but a theoretical framework for connecting the findings to a picture of galactic build-up is still in its infancy. We analyze stellar populations of a statistically representative sample of quiescent galaxies over a wide mass range from the Illustris simulation. We measure metallicity and age profiles in the stellar halos of quiescent Illustris galaxies ranging in stellar mass from 10^10 to 10^12 M⊙, accounting for observational projection and luminosity-weighting effects. We find wide variance in stellar population gradients between galaxies of similar mass, with typical gradients agreeing with observed galaxies. We show that, at fixed mass, the fraction of stars born in-situ within galaxies is correlated with the metallicity gradient in the halo, confirming that stellar halos contain unique information about the build-up and merger histories of galaxies.

  11. SLUDGE BATCH 4 SIMULANT FLOWSHEET STUDIES: PHASE II RESULTS

    SciTech Connect

    Stone, M.; Best, D.

    2006-09-12

    The Defense Waste Processing Facility (DWPF) will transition from Sludge Batch 3 (SB3) processing to Sludge Batch 4 (SB4) processing in early fiscal year 2007. Tests were conducted using non-radioactive simulants of the expected SB4 composition to determine the impact of varying the acid stoichiometry during the Sludge Receipt and Adjustment Tank (SRAT) process. The work was conducted to meet the Technical Task Request (TTR) HLW/DWPF/TTR-2004-0031 and followed the guidelines of a Task Technical and Quality Assurance Plan (TT&QAP). The flowsheet studies are performed to evaluate the potential chemical processing issues, hydrogen generation rates, and process slurry rheological properties as a function of acid stoichiometry. Initial SB4 flowsheet studies were conducted to guide decisions during the sludge batch preparation process. These studies were conducted with the estimated SB4 composition at the time of the study. The composition has changed slightly since these studies were completed due to changes in the sludges blended to prepare SB4 and the estimated SB3 heel mass. The following TTR requirements were addressed in this testing: (1) Hydrogen and nitrous oxide generation rates as a function of acid stoichiometry; (2) Acid quantities and processing times required for mercury removal; (3) Acid quantities and processing times required for nitrite destruction; and (4) Impact of SB4 composition (in particular, oxalate, manganese, nickel, mercury, and aluminum) on DWPF processing (i.e. acid addition strategy, foaming, hydrogen generation, REDOX control, rheology, etc.).

  12. [Computer simulation of DOI-PET detector (1) -analysis of DOI discrimination accuracy in a detector block-].

    PubMed

    Yamada, Akira; Haneishi, Hideaki; Inadama, Naoko; Murayama, Hideo

    2003-01-01

    A detector proposed by Murayama et al. for detection of depth-of-interaction (DOI) in PET consists of three-dimensionally arranged crystal elements with proper optical reflectors and is coupled to an array of photomultiplier tubes. This detector has great advantages in ease and cost of fabrication. We implemented a simulator of this detector that allows us to find appropriate values of parameters, such as the optical properties of the crystal or the detector unit geometry, before making detectors. The simulator is based on the Monte Carlo method, tracing the migration of optical photons generated by the interaction of a gamma ray with the crystal. First, the simulator performance was validated by comparison with experimental data obtained with some prototype detectors. Then, appropriate values of several parameters, including the refractive index of the inter-crystal material, the reflectance of the optical reflector, and the detector geometry, were investigated for accurate discrimination of the crystal element of interaction. PMID:12832869
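
The Monte Carlo photon-tracing idea described above can be caricatured in one dimension: follow each scintillation photon until it is absorbed in the bulk, escapes, or reaches the photodetector face. The geometry, reflectance, and attenuation values below are invented for illustration, not taken from the prototype detectors:

```python
import random

def trace_photons(n_photons=10000, reflectance=0.95, crystal_len=20.0,
                  attenuation=0.01, seed=1):
    """Toy 1-D optical-photon walk in a scintillator bar.

    A photon starts mid-crystal with a random direction, can be absorbed
    along its path (exponential free path), and is reflected at the top
    face with the given probability; photons leaving the bottom face
    count as detected by the photosensor.
    """
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_photons):
        pos, direction = crystal_len / 2, rng.choice((-1.0, 1.0))
        alive = True
        while alive:
            step = rng.expovariate(attenuation)  # free path before absorption
            travel = (crystal_len - pos) if direction > 0 else pos
            if step < travel:
                alive = False                    # absorbed in the bulk
            elif direction > 0:                  # reached the top face
                if rng.random() < reflectance:
                    pos, direction = crystal_len, -1.0  # reflected downward
                else:
                    alive = False                # escaped through the top
            else:
                detected += 1                    # reached the detector face
                alive = False
    return detected

print(trace_photons(), "of 10000 photons detected")
```

The full simulator does this in three dimensions with per-surface reflector models; varying `reflectance` or `attenuation` here shows the same qualitative trade-offs the abstract investigates.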

  13. Accuracy of the electron transport in mcnp5 and its suitability for ionization chamber response simulations: A comparison with the egsnrc and penelope codes

    SciTech Connect

    Koivunoro, Hanna; Siiskonen, Teemu; Kotiluoto, Petri; Auterinen, Iiro; Hippelaeinen, Eero; Savolainen, Sauli

    2012-03-15

    Purpose: In this work, the accuracy of the mcnp5 code in electron transport calculations and its suitability for ionization chamber (IC) response simulations in photon beams are studied in comparison to the egsnrc and penelope codes. Methods: The electron transport is studied by comparing the depth dose distributions in a water phantom subdivided into thin layers using incident energies (0.05, 0.1, 1, and 10 MeV) for broad parallel electron beams. The IC response simulations are studied in a water phantom in three dosimetric gas materials (air, argon, and methane-based tissue equivalent gas) for photon beams (60Co source, 6 MV linear medical accelerator, and mono-energetic 2 MeV photon source). Two optional electron transport models of mcnp5 are evaluated: the ITS-based electron energy indexing (mcnp5-ITS) and the new detailed electron energy-loss straggling logic (mcnp5-new). The electron substep length (ESTEP parameter) dependency in mcnp5 is investigated as well. Results: For the electron beam studies, large discrepancies (>3%) are observed between the mcnp5 dose distributions and the reference codes at 1 MeV and lower energies. The discrepancy is especially notable for the 0.1 and 0.05 MeV electron beams. The boundary crossing artifacts, which are well known for mcnp5-ITS, are observed for mcnp5-new only at 0.1 and 0.05 MeV beam energies. If the excessive boundary crossing is eliminated by using single scoring cells, mcnp5-ITS provides dose distributions that agree better with the reference codes than mcnp5-new. The mcnp5 dose estimates for the gas cavity agree within 1% with the reference codes if mcnp5-ITS is applied or the electron substep length is set adequately for the gas in the cavity using mcnp5-new. The mcnp5-new results are found to be highly dependent on the chosen electron substep length and might lead to up to 15% underestimation of the absorbed dose. Conclusions: Since the mcnp5 electron

  14. Accuracy in contouring of small and low contrast lesions: Comparison between diagnostic quality computed tomography scanner and computed tomography simulation scanner-A phantom study

    SciTech Connect

    Ho, Yick Wing; Wong, Wing Kei Rebecca; Yu, Siu Ki; Lam, Wai Wang; Geng Hui

    2012-01-01

    To evaluate the accuracy in detection of small and low-contrast regions using a high-definition diagnostic computed tomography (CT) scanner compared with a radiotherapy CT simulation scanner. A custom-made phantom with cylindrical holes of diameters ranging from 2-9 mm was filled with 9 different concentrations of contrast solution. The phantom was scanned using a 16-slice multidetector CT simulation scanner (LightSpeed RT16, General Electric Healthcare, Milwaukee, WI) and a 64-slice high-definition diagnostic CT scanner (Discovery CT750 HD, General Electric Healthcare). The low-contrast regions of interest (ROIs) were delineated automatically upon the full width at half maximum of their CT number profile in Hounsfield units on a treatment planning workstation. Two conformal indexes, CI_in and CI_out, were calculated to represent the percentage errors of underestimation and overestimation in the automated contours compared with their actual sizes. Summarizing the conformal indexes of different sizes and contrast concentrations, the means of CI_in and CI_out for the CT simulation scanner were 33.7% and 60.9%, respectively, and 10.5% and 41.5% were found for the diagnostic CT scanner. The mean differences between the 2 scanners' CI_in and CI_out were shown to be significant with p < 0.001. A descending trend of the index values was observed as the ROI size increased for both scanners, which indicates improved accuracy as the ROI size increases, whereas no observable trend was found in the contouring accuracy with respect to the contrast levels in this study. Images acquired by the diagnostic CT scanner allow higher accuracy on size estimation compared with the CT simulation scanner in this study. We recommend using a diagnostic CT scanner to scan patients with small lesions (<1 cm in diameter) for radiotherapy treatment planning, especially for those pending for stereotactic radiosurgery in which accurate delineation of small
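
The FWHM-based auto-delineation described above reduces, in one dimension, to thresholding a CT-number profile at half its peak height above background; a minimal sketch of that rule (the profile below is synthetic, not scanner data):

```python
import numpy as np

def fwhm_extent(profile, background=0.0):
    """Indices where a 1-D profile first/last exceeds half its peak height
    measured above background, i.e. the FWHM-based object extent."""
    half = background + (profile.max() - background) / 2.0
    above = np.nonzero(profile > half)[0]
    return above[0], above[-1]

# Synthetic CT-number profile (HU) across a contrast-filled cylinder:
# a Gaussian bump of amplitude 200 HU on a 40 HU background.
x = np.arange(50)
profile = 40.0 + 200.0 * np.exp(-0.5 * ((x - 25) / 4.0) ** 2)

lo, hi = fwhm_extent(profile, background=40.0)
print(f"object spans indices {lo}..{hi}, width = {hi - lo + 1} voxels")
```

For small, low-contrast objects the half-maximum threshold sits close to the noise floor, which is why the conformal-index errors in the study grow as ROI size shrinks.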

  15. Design and analysis of ALE schemes with provable second-order time-accuracy for inviscid and viscous flow simulations

    NASA Astrophysics Data System (ADS)

    Geuzaine, Philippe; Grandmont, Céline; Farhat, Charbel

    2003-10-01

    We consider the solution of inviscid as well as viscous unsteady flow problems with moving boundaries by the arbitrary Lagrangian-Eulerian (ALE) method. We present two computational approaches for achieving formal second-order time-accuracy on moving grids. The first approach is based on flux time-averaging, and the second one on mesh configuration time-averaging. In both cases, we prove that formally second-order time-accurate ALE schemes can be designed. We illustrate our theoretical findings and highlight their impact on practice with the solution of inviscid as well as viscous, unsteady, nonlinear flow problems associated with the AGARD Wing 445.6 and a complete F-16 configuration.

  16. Electron transport in the solar wind - results from numerical simulations

    NASA Astrophysics Data System (ADS)

    Smith, Håkan; Marsch, Eckart; Helander, Per

    A conventional fluid approach is in general insufficient for a correct description of electron transport in weakly collisional plasmas such as the solar wind. The classical Spitzer-Härm theory is not valid when the Knudsen number (the mean free path divided by the length scale of temperature variation) is greater than ~10^-2. Despite this, the heat transport from Spitzer-Härm theory is widely used in situations with relatively long mean free paths. For realistic Knudsen numbers in the solar wind, the electron distribution function develops suprathermal tails, and the departure from a local Maxwellian can be significant at the energies which contribute the most to the heat flux moment. To accurately model heat transport, a kinetic approach is therefore more adequate. Different techniques have been used previously, e.g. particle simulations [Landi, 2003], spectral methods [Pierrard, 2001], the so-called 16 moment method [Lie-Svendsen, 2001], and approximation by kappa functions [Dorelli, 2003]. In the present study we solve the Fokker-Planck equation for electrons in one spatial dimension and two velocity dimensions. The distribution function is expanded in Laguerre polynomials in energy, and a finite difference scheme is used to solve the equation in the spatial dimension and the velocity pitch angle. The ion temperature and density profiles are assumed to be known, but the electric field is calculated self-consistently to guarantee quasi-neutrality. The kinetic equation is of a two-way diffusion type, for which the distribution of particles entering the computational domain at both ends of the spatial dimension must be specified, leaving the outgoing distributions to be calculated. The long mean free path of the suprathermal electrons has the effect that the details of the boundary conditions play an important role in determining the particle and heat fluxes as well as the electric potential drop across the domain. Dorelli, J. C., and J. D. Scudder, J. D
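The Knudsen-number validity criterion quoted above is simple to state in code. A toy sketch with purely illustrative (not measured) numbers:

```python
def knudsen_number(mean_free_path_m, temp_scale_length_m):
    """Kn = electron mean free path / temperature-gradient scale length."""
    return mean_free_path_m / temp_scale_length_m

def spitzer_harm_valid(kn, threshold=1e-2):
    """Spitzer-Härm heat flux is only trustworthy for Kn below ~1e-2."""
    return kn < threshold

# Illustrative solar-wind numbers (hypothetical, in meters):
kn = knudsen_number(1.0e9, 5.0e9)  # Kn = 0.2
print(spitzer_harm_valid(kn))      # a kinetic treatment is needed here
```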

  17. Diamond-NICAM-SPRINTARS: downscaling and simulation results

    NASA Astrophysics Data System (ADS)

    Uchida, J.

    2012-12-01

    The "Research Program on Climate Change Adaptation" (RECCA) initiative investigates how predicted large-scale climate change may affect local weather, examines the atmospheric hazards that cities may encounter as a result, and thereby guides policy makers in implementing new environmental measures. As part of RECCA, the "Development of Seamless Chemical AssimiLation System and its Application for Atmospheric Environmental Materials" (SALSA) project, funded by the Japanese Ministry of Education, Culture, Sports, Science and Technology, focuses on creating a regional (local) scale assimilation system that can accurately recreate and predict the transport of carbon dioxide and other air pollutants. In this study, a regional version of the next-generation global cloud-resolving model NICAM (Non-hydrostatic ICosahedral Atmospheric Model) (Tomita and Satoh, 2004) is run together with the transport model SPRINTARS (Spectral Radiation Transport Model for Aerosol Species) (Takemura et al, 2000) and the chemical transport model CHASER (Sudo et al, 2002) to simulate aerosols across urban cities (over the Kanto region, including metropolitan Tokyo). The presentation will mainly cover the "Diamond-NICAM" (Figure 1), a regional climate model version of the global climate model NICAM, and its dynamical downscaling methodologies. A global NICAM grid can be described as twenty identical equilateral triangular panels covering the entire globe, with grid points at the corners of the panels; to increase resolution (the "global-level" in NICAM), additional points are added at the midpoint of each pair of adjacent points, so the number of panels increases fourfold with each increment of the global-level. A Diamond-NICAM, on the other hand, uses only two of those initial triangular panels and thus covers only part of the globe. 
In addition, NICAM uses an adaptive mesh scheme and its grid size can gradually decrease, as the grid

  18. Evaluating the accuracy of VEMAP daily weather data for application in crop simulations on a regional scale

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Weather plays a critical role in eco-environmental and agricultural systems. Limited availability of meteorological records often constrains the applications of simulation models and related decision support tools. The Vegetation/Ecosystem Modeling and Analysis Project (VEMAP) provides daily weather...

  19. Relative accuracy evaluation.

    PubMed

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. Because the accuracy of a whole data set may be low while a useful part of it is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a metric nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the results' relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
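One simple way to express a query result's relative accuracy is precision and recall against a known-correct answer. This is a hedged sketch of that idea, not the paper's actual framework or metric:

```python
def precision_recall(query_result, reference_result):
    """Precision/recall of a query answer against the ground-truth answer,
    a basic way to quantify the 'relative accuracy' of the result."""
    result, truth = set(query_result), set(reference_result)
    if not result or not truth:
        return 0.0, 0.0
    hits = len(result & truth)
    return hits / len(result), hits / len(truth)

# Hypothetical query answers (tuples of attribute values):
returned = {("alice", 30), ("bob", 25), ("carol", 41)}
truth = {("alice", 30), ("carol", 40), ("dave", 35)}
p, r = precision_recall(returned, truth)
print(p, r)  # one of three returned tuples is correct: 1/3 and 1/3
```

A whole-table accuracy score would average such figures over many queries; per-query evaluation is what distinguishes relative accuracy from whole-data-set accuracy.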

  20. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. Because the accuracy of a whole data set may be low while a useful part of it is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a metric nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the results' relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752

  1. Accuracy for detection of simulated lesions: comparison of fluid-attenuated inversion-recovery, proton density-weighted, and T2-weighted synthetic brain MR imaging

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.; Itoh, R.; Melhem, E. R.

    2001-01-01

    OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density-weighted, and T2-weighted) and of lesion location on sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1-6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher order interactions were added. CONCLUSION: We developed software with which we can generate brain images of a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.

  2. Preliminary Benchmarking and MCNP Simulation Results for Homeland Security

    SciTech Connect

    Robert Hayes

    2008-03-01

    The purpose of this article is to create Monte Carlo N-Particle (MCNP) input stacks for benchmarked measurements sufficient for future perturbation studies and analysis. The approach was to utilize historical experimental measurements to recreate the empirical spectral results in MCNP, both qualitatively and quantitatively. Results demonstrate that perturbation analysis of benchmarked MCNP spectra can be used to obtain a better understanding of field measurement results which may be of national interest. If one or more spectral radiation measurements are made in the field and deemed of national interest, the potential source distribution, naturally occurring radioactive material shielding, and interstitial materials can only be estimated in many circumstances. The effects from these factors on the resultant spectral radiation measurements can be very confusing. If benchmarks exist which are sufficiently similar to the suspected configuration, these benchmarks can then be compared to the suspect measurements. Having these benchmarks with validated MCNP input stacks can substantially improve the predictive capability of experts supporting these efforts.

  3. Analysis of Numerical Simulation Results of LIPS-200 Lifetime Experiments

    NASA Astrophysics Data System (ADS)

    Chen, Juanjuan; Zhang, Tianping; Geng, Hai; Jia, Yanhui; Meng, Wei; Wu, Xianming; Sun, Anbang

    2016-06-01

    Accelerator grid structural and electron backstreaming failures are the most important factors affecting the ion thruster's lifetime. During the thruster's operation, Charge Exchange Xenon (CEX) ions are generated from collisions between plasma and neutral atoms. Those CEX ions frequently strike the grid's barrel and wall, causing failures of the grid system. In order to validate whether the 20 cm Lanzhou Ion Propulsion System (LIPS-200) satisfies China's communication satellite platform's application requirement for North-South Station Keeping (NSSK), this study analyzed the measured depth of the pit/groove on the accelerator grid's wall and the variation of the aperture diameter, and estimated the operating lifetime of the ion thruster. Differing from previous work, this paper first presents the experimental results after 5500 h of accumulated operation of the LIPS-200 ion thruster. Then, based on these results, theoretical analysis and numerical calculations were performed to predict the on-orbit lifetime of LIPS-200, providing a more accurate basis for calculating the reliability and analyzing the failure modes of the ion thruster. The results indicated that the predicted lifetime of LIPS-200 was about 13218.1 h, which comfortably satisfies the required lifetime of 11000 h.

  4. Comparison of the effect of web-based, simulation-based, and conventional training on the accuracy of visual estimation of postpartum hemorrhage volume on midwifery students: A randomized clinical trial

    PubMed Central

    Kordi, Masoumeh; Fakari, Farzaneh Rashidi; Mazloum, Seyed Reza; Khadivzadeh, Talaat; Akhlaghi, Farideh; Tara, Mahmoud

    2016-01-01

    Introduction: Delay in diagnosis of bleeding can be due to underestimation of the actual amount of blood loss during delivery. Therefore, this research aimed to compare the efficacy of web-based, simulation-based, and conventional training on the accuracy of visual estimation of postpartum hemorrhage volume. Materials and Methods: This three-group randomized clinical trial was performed on 105 midwifery students in Mashhad School of Nursing and Midwifery in 2013. The samples were selected by the convenience method and were randomly divided into three groups: web-based, simulation-based, and conventional training. All three groups took an eight-station practical test before and 1 week after the training course; the students of the web-based group were trained online for 1 week, the students of the simulation-based group were trained in the Clinical Skills Centre for 4 h, and the students of the conventional group received a 4-h presentation by the researchers. The data gathering tools were a demographic questionnaire designed by the researchers and an objective structured clinical examination. Data were analyzed by software version 11.5. Results: The accuracy of visual estimation of postpartum hemorrhage volume after training increased significantly in the three groups at all stations (1, 2, 4, 5, 6 and 7 (P = 0.001), 8 (P = 0.027)) except station 3 (blood loss of 20 cc, P = 0.095), but the mean score of blood loss estimation after training did not differ significantly between the three groups (P = 0.95). Conclusion: Training increased the accuracy of estimation of postpartum hemorrhage, but no significant difference was found among the three training groups. Web-based training can be used as a substitute for, or supplement to, the two more common simulation-based and conventional methods. PMID:27500175

  5. Accuracy of diagnostic heat and moisture budgets using SESAME-79 field data as revealed by observing system simulation experiments. [Severe Environmental Storm and Mesoscale Experiment

    NASA Technical Reports Server (NTRS)

    Kuo, Y.-H.; Anthes, R. A.

    1984-01-01

    Observing system simulation experiments are used to investigate the accuracy of diagnostic heat and moisture budgets which employ the AVE-SESAME 1979 data. The time-dependent, four-dimensional data set of a mesoscale model is used to simulate rawinsonde observations from AVE-SESAME 1979. The 5 C/day (heat budget) and 2 g/kg per day (moisture budget) error magnitudes obtained indicate difficulties in diagnosing the heating rate in weak convective systems. The influences exerted by observational frequency, objective analysis, observational density, vertical interpolation, and observational errors on the budgets are also studied, and it is found that the temporal and spatial resolution of the SESAME regional network is marginal for diagnosing convective effects on a horizontal scale of 550 x 550 km.

  6. Disc Motor: Conventional and Superconductor Simulated Results Analysis

    NASA Astrophysics Data System (ADS)

    Inácio, David; Martins, João; Neves, Mário Ventim; Álvarez, Alfredo; Rodrigues, Amadeu Leão

    Taking into consideration the development and integration of electrical machines with lower dimensions and higher performance, this paper presents the design and development of a three-phase axial flux disc motor, with 50 Hz frequency supply. It is made with two conventional semi-stators and a rotor, which can be implemented with a conventional aluminum disc or a high temperature-superconducting disc. The analysis of the motor characteristics is done with a 2D commercial finite elements package, being the modeling performed as a linear motor. The obtained results allow concluding that the superconductor motor provides a higher force than the conventional one. The conventional disc motor presents an asynchronous behavior, like a conventional induction motor, while the superconductor motor presents both synchronous and asynchronous behaviors.

  7. On the Standardization of Vertical Accuracy Figures in Dems

    NASA Astrophysics Data System (ADS)

    Casella, V.; Padova, B.

    2013-01-01

    Digital Elevation Models (DEMs) play a key role in hydrological risk prevention and mitigation: hydraulic numerical simulations and slope and aspect maps all heavily rely on DEMs. Hydraulic numerical simulations require the DEM used to have a defined accuracy in order to obtain reliable results. But are DEM accuracy figures clearly and uniquely defined? The paper focuses on some issues concerning DEM accuracy definition and assessment. Two DEM accuracy definitions can be found in the literature: accuracy at the interpolated point and accuracy at the nodes. The former can be estimated by means of randomly distributed check points, while the latter by means of check points coincident with the nodes. The two accuracy figures are often treated as equivalent, but they are not: given the same DEM, assessing it through one or the other approach gives different results. Our paper performs an in-depth characterization of the two figures and proposes standardization coefficients.
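The distinction between accuracy at the nodes and accuracy at interpolated points can be made concrete with a toy sketch. The DEM, the check points, and the choice of bilinear interpolation are all hypothetical, not taken from the paper:

```python
import math

def bilinear(dem, x, y):
    """Interpolate a unit-spaced gridded DEM at a fractional (x, y)."""
    i, j = int(x), int(y)
    dx, dy = x - i, y - j
    return (dem[j][i] * (1 - dx) * (1 - dy)
            + dem[j][i + 1] * dx * (1 - dy)
            + dem[j + 1][i] * (1 - dx) * dy
            + dem[j + 1][i + 1] * dx * dy)

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

dem = [[10.0, 12.0], [11.0, 15.0]]  # toy 2x2 DEM (elevations in m)

# Accuracy at the nodes: check points coincide with grid nodes.
node_checks = [((0, 0), 10.4), ((1, 1), 14.5)]  # ((col, row), true z)
node_err = [dem[j][i] - z for (i, j), z in node_checks]

# Accuracy at interpolated points: check points fall between nodes.
interp_checks = [((0.5, 0.5), 12.3)]
interp_err = [bilinear(dem, x, y) - z for (x, y), z in interp_checks]

# The two figures generally differ for the same DEM.
print(rmse(node_err), rmse(interp_err))
```

The second figure folds the interpolation error into the accuracy estimate, which is exactly why the two definitions are not interchangeable.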

  8. Evaluation of the accuracy of an offline seasonally-varying matrix transport model for simulating ideal age

    NASA Astrophysics Data System (ADS)

    Bardin, Ann; Primeau, François; Lindsay, Keith; Bradley, Andrew

    2016-09-01

    Newton-Krylov solvers for ocean tracers have the potential to greatly decrease the computational costs of spinning up deep-ocean tracers, which can take several thousand model years to reach equilibrium with surface processes. One version of the algorithm uses offline tracer transport matrices to simulate an annual cycle of tracer concentrations and applies Newton's method to find concentrations that are periodic in time. Here we present the impact of time-averaging the transport matrices on the equilibrium values of an ideal-age tracer. We compared annually-averaged, monthly-averaged, and 5-day-averaged transport matrices to an online simulation using the ocean component of the Community Earth System Model (CESM) with a nominal horizontal resolution of 1° × 1° and 60 vertical levels. We found that increasing the time resolution of the offline transport model reduced a low age bias from 12% for the annually-averaged transport matrices, to 4% for the monthly-averaged transport matrices, and to less than 2% for the transport matrices constructed from 5-day averages. The largest differences were in areas with strong seasonal changes in the circulation, such as the Northern Indian Ocean. For many applications the relatively small bias obtained using the offline model makes the offline approach attractive because it uses significantly less computer resources and is simpler to set up and run.
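Why time-averaging the transport matrices biases the equilibrium tracer can be seen in a two-box toy model: matrix products do not commute with time-averaging. All numbers below are hypothetical, with a loss term folded into the matrices so a steady state exists; this is in no way CESM's transport:

```python
# Two-box toy tracer: c_{n+1} = A_n c_n + src.

def step(A, c, src):
    n = len(c)
    return [sum(A[i][k] * c[k] for k in range(n)) + src[i] for i in range(n)]

A_winter = [[0.85, 0.10], [0.10, 0.85]]      # strong seasonal exchange
A_summer = [[0.94, 0.01], [0.01, 0.94]]      # weak seasonal exchange
A_annual = [[0.895, 0.055], [0.055, 0.895]]  # time average of the two

src = [1.0, 0.0]  # ideal-age-like source in box 0 only
c_seasonal = [0.0, 0.0]
c_averaged = [0.0, 0.0]
for n in range(200):
    A = A_winter if n % 2 == 0 else A_summer
    c_seasonal = step(A, c_seasonal, src)
    c_averaged = step(A_annual, c_averaged, src)

# A_summer @ A_winter != A_annual @ A_annual, so the averaged-matrix
# equilibrium is biased relative to the seasonally resolved one.
print(c_seasonal, c_averaged)
```

The averaged run converges to the analytic steady state (I - A_annual)^-1 src, while the seasonal run settles on a slightly different two-point cycle, mirroring the few-percent age biases reported above.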

  9. Micro-scale finite element modeling of ultrasound propagation in aluminum trabecular bone-mimicking phantoms: A comparison between numerical simulation and experimental results.

    PubMed

    Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S

    2016-05-01

    The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples serving as trabecular bone-mimicking phantoms were utilized for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated for the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and the average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6% respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures. PMID:26894840

  10. Impact of Calibrated Land Surface Model Parameters on the Accuracy and Uncertainty of Land-Atmosphere Coupling in WRF Simulations

    NASA Technical Reports Server (NTRS)

    Santanello, Joseph A., Jr.; Kumar, Sujay V.; Peters-Lidard, Christa D.; Harrison, Ken; Zhou, Shujia

    2012-01-01

    Land-atmosphere (L-A) interactions play a critical role in determining the diurnal evolution of both planetary boundary layer (PBL) and land surface temperature and moisture budgets, as well as controlling feedbacks with clouds and precipitation that lead to the persistence of dry and wet regimes. Recent efforts to quantify the strength of L-A coupling in prediction models have produced diagnostics that integrate across both the land and PBL components of the system. In this study, we examine the impact of improved specification of land surface states, anomalies, and fluxes on coupled WRF forecasts during the summers of extreme dry (2006) and wet (2007) land surface conditions in the U.S. Southern Great Plains. The improved land initialization and surface flux parameterizations are obtained through the use of a new optimization and uncertainty estimation module in NASA's Land Information System (LIS-OPT/UE), whereby parameter sets are calibrated in the Noah land surface model and classified according to a land cover and soil type mapping of the observation sites to the full model domain. The impact of calibrated parameters on the a) spinup of the land surface used as initial conditions, and b) heat and moisture states and fluxes of the coupled WRF simulations are then assessed in terms of ambient weather and land-atmosphere coupling along with measures of uncertainty propagation into the forecasts. In addition, the sensitivity of this approach to the period of calibration (dry, wet, average) is investigated. Finally, tradeoffs of computational tractability and scientific validity, and the potential for combining this approach with satellite remote sensing data are also discussed.

  11. Assessment of accuracy of PET utilizing a 3-D phantom to simulate the activity distribution of ({sup 18}F)fluorodeoxyglucose uptake in the human brain

    SciTech Connect

    Hoffman, E.J.; Cutler, P.D.; Guerrero, T.M.; Digby, W.M.; Mazziotta, J.C. )

    1991-03-01

    A three-dimensional brain phantom has been developed to simulate the activity distributions found in human brain studies currently employed in positron emission tomography (PET). The phantom has a single contiguous chamber and utilizes thin layers of lucite to provide apparent relative concentrations of 5, 1, and 0 for gray matter, white matter, and CSF structures, respectively. The phantom and an ideal image set were created from the same set of data. Thus, the user has a basis for comparing measured images with an ideal set that allows a quantitative evaluation of errors in PET studies with an activity distribution similar to that found in patients. The phantom was employed in a study of the effect of deadtime and scatter on accuracy in quantitation on a current PET system. Deadtime correction factors were found to be significant (1.1-2.5) at count rates found in clinical studies. Deadtime correction techniques were found to be accurate to within 5%. Scatter in emission and attenuation correction data consistently caused 5-15% errors in quantitation, whereas correction for scatter in both types of data reduced errors in accuracy to less than 5%.

  12. Consideration of shear modulus in biomechanical analysis of peri-implant jaw bone: accuracy verification using image-based multi-scale simulation.

    PubMed

    Matsunaga, Satoru; Naito, Hiroyoshi; Tamatsu, Yuichi; Takano, Naoki; Abe, Shinichi; Ide, Yoshinobu

    2013-01-01

    The aim of this study was to clarify the influence of shear modulus on the analytical accuracy in peri-implant jaw bone simulation. A 3D finite element (FE) model was prepared based on micro-CT data obtained from images of a jawbone containing implants. A precise model that closely reproduced the trabecular architecture, and equivalent models that gave shear modulus values taking the trabecular architecture into account, were prepared. Displacement norms during loading were calculated, and the displacement error was evaluated. The model that gave shear modulus values taking the trabecular architecture into account showed an analytical error of around 10-20% in the cancellous bone region, while in the model that used incorrect shear modulus, the analytical error exceeded 40% in certain regions. The shear modulus should be evaluated precisely in addition to the Young modulus when considering the mechanics of peri-implant trabecular bone structure. PMID:23719004

  13. The accuracy of linear measurements of maxillary and mandibular edentulous sites in cone-beam computed tomography images with different fields of view and voxel sizes under simulated clinical conditions

    PubMed Central

    Ramesh, Aruna; Pagni, Sarah

    2016-01-01

    Purpose The objective of this study was to investigate the effect of varying resolutions of cone-beam computed tomography images on the accuracy of linear measurements of edentulous areas in human cadaver heads. Intact cadaver heads were used to simulate a clinical situation. Materials and Methods Fiduciary markers were placed in the edentulous areas of 4 intact embalmed cadaver heads. The heads were scanned with two different CBCT units using a large field of view (13 cm×16 cm) and small field of view (5 cm×8 cm) at varying voxel sizes (0.3 mm, 0.2 mm, and 0.16 mm). The ground truth was established with digital caliper measurements. The imaging measurements were then compared with caliper measurements to determine accuracy. Results The Wilcoxon signed rank test revealed no statistically significant difference between the medians of the physical measurements obtained with calipers and the medians of the CBCT measurements. A comparison of accuracy among the different imaging protocols revealed no significant differences as determined by the Friedman test. The intraclass correlation coefficient was 0.961, indicating excellent reproducibility. Inter-observer variability was determined graphically with a Bland-Altman plot and by calculating the intraclass correlation coefficient. The Bland-Altman plot indicated very good reproducibility for smaller measurements but larger discrepancies with larger measurements. Conclusion The CBCT-based linear measurements in the edentulous sites using different voxel sizes and FOVs are accurate compared with the direct caliper measurements of these sites. Higher resolution CBCT images with smaller voxel size did not result in greater accuracy of the linear measurements. PMID:27358816
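The Bland-Altman agreement analysis used above reduces to the mean of the paired differences (bias) and its 95% limits of agreement. A sketch with hypothetical caliper vs. CBCT measurement pairs (not the study's data):

```python
import statistics

def bland_altman(m1, m2):
    """Bland-Altman bias and 95% limits of agreement for paired data."""
    diffs = [a - b for a, b in zip(m1, m2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical caliper vs. CBCT linear measurements (mm):
caliper = [5.1, 7.4, 9.8, 12.2, 15.0]
cbct = [5.0, 7.6, 9.7, 12.5, 14.6]
bias, (lo, hi) = bland_altman(caliper, cbct)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```

Plotting each pair's difference against its mean, with these three horizontal lines, gives the Bland-Altman plot the abstract refers to; widening limits at larger means is what "larger discrepancies with larger measurements" looks like on such a plot.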

  14. Diagnostic Accuracy of Ultrasound B scan using 10 MHz linear probe in ocular trauma;results from a high burden country

    PubMed Central

    Shazlee, Muhammad Kashif; Ali, Muhammad; SaadAhmed, Muhammad; Hussain, Ammad; Hameed, Kamran; Lutfi, Irfan Amjad; Khan, Muhammad Tahir

    2016-01-01

    Objective: To study the diagnostic accuracy of Ultrasound B scan using 10 MHz linear probe in ocular trauma. Methods: A total of 61 patients with 63 ocular injuries were assessed during July 2013 to January 2014. All patients were referred to the department of Radiology from the Emergency Room since adequate clinical assessment of the fundus was impossible because of the presence of opaque ocular media. Based on radiological diagnosis, the patients were provided treatment (surgical or medical). Clinical diagnosis was confirmed during surgical procedures or clinical follow-up. Results: A total of 63 ocular injuries were examined in 61 patients. The overall sensitivity was 91.5%, specificity was 98.87%, positive predictive value was 87.62%, and negative predictive value was 99%. Conclusion: Ultrasound B-scan is a sensitive, non-invasive and rapid way of assessing intraocular damage caused by blunt or penetrating eye injuries. PMID:27182245
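The reported sensitivity, specificity, PPV and NPV all follow from the standard 2x2 confusion table. A sketch with illustrative counts only (the abstract reports rates, not the raw table):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives / all diseased
        "specificity": tn / (tn + fp),  # true negatives / all healthy
        "ppv": tp / (tp + fp),          # correct among positive calls
        "npv": tn / (tn + fn),          # correct among negative calls
    }

# Hypothetical counts chosen only to illustrate the computation:
m = diagnostic_metrics(tp=43, fp=6, tn=97, fn=4)
print({k: round(v, 3) for k, v in m.items()})
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how common the injury is in the referred population.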

  15. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL

  16. ON THE MINIMAL ACCURACY REQUIRED FOR SIMULATING SELF-GRAVITATING SYSTEMS BY MEANS OF DIRECT N-BODY METHODS

    SciTech Connect

    Portegies Zwart, Simon; Boekholt, Tjarda

    2014-04-10

    The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (by round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
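
The abstract's acceptance criterion (energy conserved to better than 1/10 during integration) can be sketched with a minimal kick-drift-kick leapfrog integration of a bound Newtonian two-body problem. The initial conditions and step size below are illustrative, not taken from the paper.

```python
import numpy as np

def accel(pos):
    """Mutual gravitational acceleration, G = m1 = m2 = 1."""
    r = pos[1] - pos[0]
    a = r / np.linalg.norm(r)**3
    return np.array([a, -a])

def energy(pos, vel):
    kinetic = 0.5 * np.sum(vel**2)
    potential = -1.0 / np.linalg.norm(pos[1] - pos[0])
    return kinetic + potential

# Illustrative bound, eccentric two-body orbit with no close encounters.
pos = np.array([[-0.5, 0.0], [0.5, 0.0]])
vel = np.array([[0.0, -0.5], [0.0, 0.5]])
e0, dt = energy(pos, vel), 1e-3

for _ in range(20000):              # kick-drift-kick leapfrog
    vel += 0.5 * dt * accel(pos)
    pos += dt * vel
    vel += 0.5 * dt * accel(pos)

# Relative energy drift, the quantity checked against the 1/10 criterion.
drift = abs((energy(pos, vel) - e0) / e0)
print(drift)
```

A symplectic integrator like leapfrog keeps this drift bounded; the paper's point is that even much looser conservation (up to 1/10) still yields statistically valid ensembles.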

  17. Linear and Logarithmic Speed-Accuracy Trade-Offs in Reciprocal Aiming Result from Task-Specific Parameterization of an Invariant Underlying Dynamics

    ERIC Educational Resources Information Center

    Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.

    2009-01-01

    The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…

  18. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.

  19. A simulation study of the flight dynamics of elastic aircraft. Volume 1: Experiment, results and analysis

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Davidson, John B.; Schmidt, David K.

    1987-01-01

    The simulation experiment described addresses the effects of structural flexibility on the dynamic characteristics of a generic family of aircraft. The simulation was performed using the NASA Langley VMS simulation facility. The vehicle models were obtained as part of this research. The simulation results include complete response data and subjective pilot ratings and comments and so allow a variety of analyses. The subjective ratings and analysis of the time histories indicate that increased flexibility can lead to increased tracking errors, degraded handling qualities, and changes in the frequency content of the pilot inputs. These results, furthermore, are significantly affected by the visual cues available to the pilot.

  20. Identification of Tryptic Peptides from Large Databases using Multiplexed Tandem Mass Spectrometry: Simulations and Experimental Results

    SciTech Connect

    Masselon, Christophe D.; Pasa-Tolic, Ljiljana; Lee, Sang-Won; Li, Lingjun; Anderson, Gordon A.; Harkewicz, Richard; Smith, Richard D.

    2003-07-01

    Multiplexed MS/MS was recently demonstrated as a means to increase the throughput of peptide identification in LC-MS/MS experiments. In this approach, a set of parent species is dissociated simultaneously and measured in a single spectrum (in the same manner that a single parent ion is conventionally studied), providing a gain in sensitivity and throughput proportional to the number of species that can be simultaneously addressed. In the present work, simulations performed using the Caenorhabditis elegans predicted proteome database show that multiplexed MS/MS data allow the identification of tryptic peptides from mixtures of up to 10 peptides from a single dataset with only 3 y or b fragments per peptide and a mass accuracy of 2.5 to 5 ppm. At this level of database and data complexity, 98% of the 500 peptides considered in the simulation were correctly identified. This compares favorably with the rates obtained for classical MS/MS at more modest mass measurement accuracy. LC-multiplexed FTICR MS/MS data obtained from a 66 kDa protein (bovine serum albumin) tryptic digest sample are presented to illustrate the approach, and confirm that peptides can be effectively identified from the C. elegans database to which the protein sequence had been appended.
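
The matching criterion implied above, attributing observed fragments to candidate peptides when the relative mass deviation falls within a ppm tolerance, can be sketched as follows. The peptide, fragment masses, and observations are made up for illustration; real search engines score far more than a single tolerance check.

```python
def within_ppm(measured, theoretical, tol_ppm):
    """True when the relative mass deviation is within tol_ppm parts per million."""
    return abs(measured - theoretical) / theoretical * 1e6 <= tol_ppm

# Hypothetical y/b fragment masses (Da) for a made-up candidate peptide.
candidates = {"PEPTIDER": [175.119, 322.187, 451.230]}
observed = [175.1194, 322.1885, 999.999]   # two fragments fall within 5 ppm

# Count how many theoretical fragments of each candidate are matched.
matched_fragments = {
    seq: sum(any(within_ppm(o, t, 5.0) for o in observed) for t in frags)
    for seq, frags in candidates.items()
}
print(matched_fragments)   # {'PEPTIDER': 2}
```

Tightening the tolerance from, say, 50 ppm to 5 ppm is what lets so few fragments per peptide remain discriminating against a whole-proteome database.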

  1. Fault diagnosis using a diagnostic shell and its verification results by connecting to an operator training simulator

    SciTech Connect

    Kobayashi, T.; Moridera, D.; Komai, K.; Fukui, S.; Matsumoto, K.

    1995-02-01

    This paper describes a fault diagnostic system using a diagnostic shell, MELDASH, and results that confirm its effectiveness. The diagnostic shell, which reflects and makes use of the nature of model-based diagnosis, is developed to overcome the drawbacks of methods that depend on operator knowledge. A high-performance fault diagnostic system is constructed simply by adding an application model to the diagnostic shell. A prototype system is verified by connecting it to an operator training simulator. It is able to make a proper diagnosis in 79 difficult fault cases. Verification results show that the prototype system has sufficient accuracy. The authors confirm the effectiveness of this fault diagnostic method for future energy management systems.

  2. First results using a new technology for measuring masses of very short-lived nuclides with very high accuracy: The MISTRAL program at ISOLDE

    SciTech Connect

    Monsanglant, C.; Audi, G.; Conreur, G.; Cousin, R.; Doubre, H.; Jacotin, M.; Henry, S.; Kepinski, J.-F.; Lunney, D.; Saint Simon, M. de; Thibault, C.; Toader, C.; Bollen, G.; Lebee, G.; Scheidenberger, C.; Borcea, C.; Duma, M.; Kluge, H.-J.; Le Scornet, G.

    1999-11-16

    MISTRAL is an experimental program to measure masses of very short-lived nuclides (T_1/2 down to a few ms) with a very high accuracy (a few 10^-7). There were three data taking periods with radioactive beams, and 22 masses of isotopes of Ne, Na, Mg, Al, K, Ca, and Ti were measured. The systematic errors are now under control at the level of 8x10^-7, allowing us to come close to the expected accuracy. Even for the very weakly produced ^30Na (1 ion at the detector per proton burst), the final accuracy is 7x10^-7.

  3. Three-dimensional Simulations of Thermonuclear Detonation with α-Network: Numerical Method and Preliminary Results

    NASA Astrophysics Data System (ADS)

    Khokhlov, A.; Domínguez, I.; Bacon, C.; Clifford, B.; Baron, E.; Hoeflich, P.; Krisciunas, K.; Suntzeff, N.; Wang, L.

    2012-07-01

    We describe a new astrophysical version of a cell-based adaptive mesh refinement code ALLA for reactive flow fluid dynamic simulations, including a new implementation of α-network nuclear kinetics, and present preliminary results of first three-dimensional simulations of incomplete carbon-oxygen detonation in Type Ia Supernovae.

  4. Field measurement results versus DAYCENT simulations in nitrous oxide emission from agricultural soil in Central Iowa

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nitrous oxide emissions measured from corn-soybean rotations in Central Iowa were compared with the results obtained from DAYCENT simulations. Available whole-year emission field data, taken weekly during the growing season and monthly during the winter, were used. DAYCENT simulations were perfo...

  5. Special Education Simulation and Consultation Project: Special Training Project. Final Report. Part I: Results and Learnings.

    ERIC Educational Resources Information Center

    Batten, Murray O.; Burello, Leonard C.

    Presented is the final report of the Special Education Simulation and Consultation (SECAC) Project designed to provide simulation-based inservice training to Michigan building principals. Part I reviews project goals, objectives, procedures, results, and learnings. It is explained that the training employed the Special Education Administrators…

  6. Results of GEANT simulations and comparison with first experiments at DANCE.

    SciTech Connect

    Reifarth, R.; Bredeweg, T. A.; Browne, J. C.; Esch, E. I.; Haight, R. C.; O'Donnell, J. M.; Kronenberg, A.; Rundberg, R. S.; Ullmann, J. L.; Vieira, D. J.; Wilhelmy, J. B.; Wouters, J. M.

    2003-07-29

    This report describes intensive Monte Carlo simulations carried out to be compared with the results of the first run cycle with DANCE (Detector for Advanced Neutron Capture Experiments). The experimental results were gained during the commissioning phase 2002/2003 with only a part of the array. Based on the results of these simulations the most important items to be improved before the next experiments will be addressed.

  7. A method for data handling numerical results in parallel OpenFOAM simulations

    NASA Astrophysics Data System (ADS)

    Anton, Alin; Muntean, Sebastian

    2015-12-01

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM toolkit®[1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large scale simulation results than the regular algorithms.
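
As a rough illustration of the trade such post-processing makes, keeping only user-configured regions of interest instead of full fields, the sketch below stores a sub-box of a synthetic result array. The replay of interprocessor traffic described in the paper is not modeled; field shape and region are arbitrary assumptions.

```python
import numpy as np

# Synthetic full simulation result, standing in for a large CFD field.
field = np.random.default_rng(1).standard_normal((128, 128, 128))

# User-configured region of interest: only this sub-box is retained.
roi = (slice(30, 40), slice(30, 40), slice(0, 128))
stored = field[roi].copy()

# Fraction of the original data actually kept on disk.
fraction = stored.nbytes / field.nbytes
print(fraction)
```

For a small region of a large mesh this fraction is tiny, which is why region-based reduction can beat general-purpose compressors whose savings stay roughly constant.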

  8. A method for data handling numerical results in parallel OpenFOAM simulations

    SciTech Connect

    Anton, Alin; Muntean, Sebastian

    2015-12-31

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM toolkit®[1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large scale simulation results than the regular algorithms.

  9. Effect of Model Scale and Particle Size Distribution on PFC3D Simulation Results

    NASA Astrophysics Data System (ADS)

    Ding, Xiaobin; Zhang, Lianyang; Zhu, Hehua; Zhang, Qi

    2014-11-01

    This paper investigates the effect of model scale and particle size distribution on the simulated macroscopic mechanical properties, unconfined compressive strength (UCS), Young's modulus and Poisson's ratio, using the three-dimensional particle flow code (PFC3D). Four different maximum to minimum particle size (d_max/d_min) ratios, all having a continuous uniform size distribution, were considered, and seven model (specimen) diameter to median particle size ratios (L/d) were studied for each d_max/d_min ratio. The results indicate that the coefficients of variation (COVs) of the simulated macroscopic mechanical properties using PFC3D decrease significantly as L/d increases. The results also indicate that the simulated mechanical properties using PFC3D show much lower COVs than those in PFC2D at all model scales. The average simulated UCS and Young's modulus using the default PFC3D procedure keep increasing with larger L/d, although the rate of increase decreases with larger L/d. This is mainly caused by the decrease of model porosity with larger L/d associated with the default PFC3D method and the better balanced contact force chains at larger L/d. After the effect of model porosity is eliminated, the results on the net model scale effect indicate that the average simulated UCS still increases with larger L/d but at a much smaller rate, the average simulated Young's modulus instead decreases with larger L/d, and the average simulated Poisson's ratio versus L/d relationship remains about the same. Particle size distribution also affects the simulated macroscopic mechanical properties, with larger d_max/d_min leading to greater average simulated UCS and Young's modulus and smaller average simulated Poisson's ratio; the rates of change become smaller at larger d_max/d_min. This study shows that it is important to properly consider the effect of model scale and particle size distribution in PFC3D simulations.

  10. Determining the value of simulation in nurse education: study design and initial results.

    PubMed

    Alinier, Guillaume; Hunt, William B; Gordon, Ray

    2004-09-01

    Simulation now plays an important role in the training and education of healthcare professionals. The University of Hertfordshire is carrying out a study which aims to determine the effect of realistic scenario-based simulation on nursing students' competence and confidence. This project is sponsored by the British Heart Foundation and takes place in the Hertfordshire Intensive Care and Emergency Simulation Centre (HICESC), a simulated three-bed adult Intensive Care Unit. The simulation platform used is a Laerdal SimMan Universal Patient Simulator. The study design and results are presented in this article. Consecutive cohorts of students are being assessed and reassessed after six months using an Objective Structured Clinical Examination (OSCE). Students are randomly divided into a control and an experimental group for the period intervening between the two examinations. The experimental group is exposed to simulation training while the other students follow their usual nursing courses. Comparison is made between the OSCE results of the two groups of students. The experimental group had a greater improvement in performance than the control group (13.43% compared with 6.76%, p<0.05). The results and feedback received from students and lecturers suggest that simulation training in nursing education is beneficial. PMID:19038158

  11. High-accuracy, high-precision, high-resolution, continuous monitoring of urban greenhouse gas emissions? Results to date from INFLUX

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower and aircraft based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.

  12. Accuracy of Colposcopically Directed Biopsy: Results from an Online Quality Assurance Programme for Colposcopy in a Population-Based Cervical Screening Setting in Italy

    PubMed Central

    Sideri, Mario; Garutti, Paola; Costa, Silvano; Cristiani, Paolo; Schincaglia, Patrizia; Sassoli de Bianchi, Priscilla; Naldoni, Carlo; Bucchi, Lauro

    2015-01-01

    Purpose. To report the accuracy of colposcopically directed biopsy in an internet-based colposcopy quality assurance programme in northern Italy. Methods. A web application was made accessible on the website of the regional Administration. Fifty-nine colposcopists out of the registered 65 logged in, viewed a posted set of 50 digital colpophotographs, classified them for colposcopic impression and need for biopsy, and indicated the most appropriate site for biopsy with a left-button mouse click on the image. Results. Total biopsy failure rate, comprising both nonbiopsy and incorrect selection of biopsy site, was 0.20 in CIN1, 0.11 in CIN2, 0.09 in CIN3, and 0.02 in carcinoma. Errors in the selection of biopsy site were stable between 0.08 and 0.09 in the three grades of CIN while decreasing to 0.01 in carcinoma. In multivariate analysis, the risk of incorrect selection of biopsy site was 1.97 for CIN2, 2.52 for CIN3, and 0.29 for carcinoma versus CIN1. Conclusions. Although total biopsy failure rate decreased regularly with increasing severity of histological diagnosis, the rate of incorrect selection of biopsy site was stable up to CIN3. In multivariate analysis, CIN2 and CIN3 had an independently increased risk of incorrect selection of biopsy site. PMID:26180805

  13. Results of computer calculations for a simulated distribution of kidney cells

    NASA Technical Reports Server (NTRS)

    Micale, F. J.

    1985-01-01

    The results of computer calculations for a simulated distribution of kidney cells are given. The calculations were made for different values of electroosmotic flow, U sub o, and the ratio of sample diameter to channel diameter, R.

  14. SUBWATERSHEDS OF THE UPPER SAN PEDRO BASIN WITH PERCENT DIFFERENCE BETWEEN RESULTS FROM TWO SWAT SIMULATIONS

    EPA Science Inventory

    Subwatersheds of the Upper San Pedro basin with percent difference between results from two SWAT simulations run through AGWA: one using the 1973 NALC landcover for model parameterization, and the other using the 1997 NALC landcover.

  15. Simulation of plasma turbulence in scrape-off layer conditions: the GBS code, simulation results and code validation

    NASA Astrophysics Data System (ADS)

    Ricci, P.; Halpern, F. D.; Jolliet, S.; Loizu, J.; Mosetto, A.; Fasoli, A.; Furno, I.; Theiler, C.

    2012-12-01

    Based on the drift-reduced Braginskii equations, the Global Braginskii Solver, GBS, is able to model the scrape-off layer (SOL) plasma turbulence in terms of the interplay between the plasma outflow from the tokamak core, the turbulent transport, and the losses at the vessel. Model equations, the GBS numerical algorithm, and GBS simulation results are described. GBS has been first developed to model turbulence in basic plasma physics devices, such as linear and simple magnetized toroidal devices, which contain some of the main elements of SOL turbulence in a simplified setting. In this paper we summarize the findings obtained from the simulation carried out in these configurations and we report the first simulations of SOL turbulence. We also discuss the validation project that has been carried out together with the GBS development.

  16. A Novel Simulation Technician Laboratory Design: Results of a Survey-Based Study

    PubMed Central

    Hughes, Patrick G; Friedl, Ed; Ortiz Figueroa, Fabiana; Cepeda Brito, Jose R; Frey, Jennifer; Birmingham, Lauren E; Atkinson, Steven Scott

    2016-01-01

    Objective: The purpose of this study was to elicit feedback from simulation technicians prior to developing the first simulation technician-specific simulation laboratory in Akron, OH. Background: Simulation technicians serve a vital role in simulation centers within hospitals/health centers around the world. The first simulation technician degree program in the US has been approved in Akron, OH. To satisfy the requirements of this program and to meet the needs of this special audience of learners, a customized simulation lab is essential. Method: A web-based survey was circulated to simulation technicians prior to completion of the lab for the new program. The survey consisted of questions aimed at identifying structural and functional design elements of a novel simulation center for the training of simulation technicians. Quantitative methods were utilized to analyze data. Results: Over 90% of technicians (n=65) think that a lab designed explicitly for the training of technicians is novel and beneficial. Approximately 75% of respondents think that the space provided appropriate audiovisual (AV) infrastructure and space to evaluate the ability of technicians to be independent. The respondents think that the lab needed more storage space, visualization space for a large number of students, and more space in the technical/repair area. Conclusions: A space designed for the training of simulation technicians was considered to be beneficial. This laboratory requires distinct space for technical repair, adequate bench space for the maintenance and repair of simulators, an appropriate AV infrastructure, and space to evaluate the ability of technicians to be independent. PMID:27096134

  17. Optimal design of robot accuracy compensators

    SciTech Connect

    Zhuang, H.; Roth, Z.S. (Robotics Center and Electrical Engineering Dept.); Hamano, Fumio (Dept. of Electrical Engineering)

    1993-12-01

    The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that the actual kinematic parameters of a robot be identified beforehand. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing the end-effector's positioning accuracy over its orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and the DLS algorithms in terms of computational complexity, storage requirement, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
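
A minimal sketch of the damped least-squares step named in the abstract, applied to a hypothetical two-link planar arm: the robot model, damping value, and iteration count are assumptions for illustration, not the paper's compensator design.

```python
import numpy as np

def dls_step(J, error, damping=0.01):
    """Damped least-squares joint correction: J^T (J J^T + lambda^2 I)^-1 e."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(m), error)

# Hypothetical two-link planar arm, both link lengths 1.
def fk(q):
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jac(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

q = np.array([0.3, 0.8])
target = np.array([1.2, 0.9])        # reachable workspace point
for _ in range(100):
    q = q + dls_step(jac(q), target - fk(q))

residual = np.linalg.norm(target - fk(q))
print(residual)
```

The damping term keeps the correction bounded near singular configurations (where J J^T loses rank), which is why the step is well defined even where plain pseudoinverse corrections blow up.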

  18. First Results Using a New Technology for Measuring Masses of Very Short-Lived Nuclides with Very High Accuracy: the MISTRAL Program at ISOLDE

    SciTech Connect

    C. Monsanglant; C. Toader; G. Audi; G. Bollen; C. Borcea; G. Conreur; R. Cousin; H. Doubre; M. Duma; M. Jacotin; S. Henry; J.-F. Kepinski; H.-J. Kluge; G. Lebee; G. Le Scornet; D. Lunney; M. de Saint Simon; C. Scheidenberger; C. Thibault

    1999-12-31

    MISTRAL is an experimental program to measure masses of very short-lived nuclides (T_1/2 down to a few ms) with a very high accuracy (a few 10^-7). There were three data taking periods with radioactive beams, and 22 masses of isotopes of Ne, Na, Mg, Al, K, Ca, and Ti were measured. The systematic errors are now under control at the level of 8x10^-7, allowing us to come close to the expected accuracy. Even for the very weakly produced ^30Na (1 ion at the detector per proton burst), the final accuracy is 7x10^-7.

  19. Wave spectra of a shoaling wave field: A comparison of experimental and simulated results

    NASA Technical Reports Server (NTRS)

    Morris, W. D.; Grosch, C. E.; Poole, L. R.

    1982-01-01

    Wave profile measurements made from an aircraft crossing the North Carolina continental shelf after passage of Tropical Storm Amy in 1975 are used to compute a series of wave energy spectra for comparison with simulated spectra. Results indicate that the observed wave field experiences refraction and shoaling effects causing statistically significant changes in the spectral density levels. A modeling technique is used to simulate the spectral density levels. Total energy levels of the simulated spectra are within 20 percent of those of the observed wave field. The results represent a successful attempt to theoretically simulate, at oceanic scales, the decay of a wave field which contains significant wave energies from deepwater through shoaling conditions.

  20. THEMATIC ACCURACY OF THE 1992 NATIONAL LAND-COVER DATA (NLCD) FOR THE EASTERN UNITED STATES: STATISTICAL METHODOLOGY AND REGIONAL RESULTS

    EPA Science Inventory

    The accuracy of the National Land Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or a...

  1. Simulations and cold-test results of a prototype plane wave transformer linac structure

    NASA Astrophysics Data System (ADS)

    Kumar, Arvind; Pant, K. K.; Krishnagopal, S.

    2002-03-01

    We have built a 4-cell prototype plane wave transformer (PWT) linac structure. We discuss here details of the design and fabrication of the PWT linac structure. We present results from SUPERFISH and GDFIDL simulations as well as cold tests, which are in good agreement with each other. We also present detailed tolerance maps for the PWT structure. We discuss beam dynamics simulation studies performed using PARMELA.

  2. Columbus meteoroid/debris protection study - Experimental simulation techniques and results

    NASA Astrophysics Data System (ADS)

    Schneider, E.; Kitta, K.; Stilp, A.; Lambert, M.; Reimerdes, H. G.

    1992-08-01

    The methods and measurement techniques used in experimental simulations of micrometeoroid and space debris impacts with the ESA's laboratory module Columbus are described. Experiments were carried out at the two-stage light gas gun acceleration facilities of the Ernst-Mach Institute. Results are presented on simulations of normal impacts on bumper systems, oblique impacts on dual bumper systems, impacts into cooled targets, impacts into pressurized targets, and planar impacts of low-density projectiles.

  3. Results of NASA/FAA ground and flight simulation experiments concerning helicopter IFR airworthiness criteria

    NASA Technical Reports Server (NTRS)

    Lebacqz, J. V.; Chen, R. T. N.; Gerdes, R. M.; Weber, J. M.; Forrest, R. D.

    1982-01-01

    A sequence of ground and flight simulation experiments was conducted to investigate helicopter instrument-flight-rules airworthiness criteria. The first six of these experiments and their major results are summarized. Five of the experiments were conducted on large-amplitude motion base simulators. The NASA-Army V/STOLAND UH-1H variable-stability helicopter was used in the flight experiment. Artificial stability and control augmentation, longitudinal and lateral control, and pitch and roll attitude augmentation were investigated.

  4. Design and CFD Simulation of the Drift Eliminators in Comparison with PIV Results

    NASA Astrophysics Data System (ADS)

    Stodůlka, Jiří; Vitkovičová, Rut

    2015-05-01

    Drift eliminators are an essential part of all modern cooling towers, preventing significant losses of liquid water to the environment. These eliminators need to be effective in terms of water capture while causing only minimal pressure loss. A new type of such eliminator was designed and numerically simulated using CFD tools. Results of the simulation are compared with PIV visualisation on the prototype model.

  5. Ride qualities criteria validation/pilot performance study: Flight simulator results

    NASA Technical Reports Server (NTRS)

    Nardi, L. U.; Kawana, H. Y.; Borland, C. J.; Lefritz, N. M.

    1976-01-01

    Pilot performance was studied during simulated manual terrain following flight for ride quality criteria validation. An existing B-1 simulation program provided the data for these investigations. The B-1 simulation program included terrain following flights under varying controlled conditions of turbulence, terrain, mission length, and system dynamics. The flight simulator consisted of a moving base cockpit which reproduced motions due to turbulence and control inputs. The B-1 aircraft dynamics were programmed with six-degrees-of-freedom equations of motion with three symmetric and two antisymmetric structural degrees of freedom. The results provided preliminary validation of existing ride quality criteria and identified several ride quality/handling quality parameters which may be of value in future ride quality/criteria development.

  6. Comparisons of simulator and flight results on augmentor-wing jet STOL research aircraft

    NASA Technical Reports Server (NTRS)

    Innis, R. C.; Anderson, S. B.

    1972-01-01

    The considerations involved in making a piloted simulator an effective research tool in the design and development of new aircraft are discussed. An assessment of the limitations of the simulator in depicting real flight as well as the problem of recognizing erroneous results when the simulator is supplied with incorrect input data is made. Examples of the ways in which the simulator is used to design and develop the augmentor-wing aircraft are presented. Four areas of investigation are: (1) to design the lateral control system for proper feel and response, (2) determine the effect of engine failure during approach, (3) develop the best technique for controlling flight path during approach, and (4) the significance of lift loss in ground effect and how to compensate for such loss.

  7. Handling Qualities Results of an Initial Geared Flap Tilt Wing Piloted Simulation

    NASA Technical Reports Server (NTRS)

    Guerrero, Lourdes M.; Corliss, Lloyd D.

    1991-01-01

    An exploratory simulation study of a novel approach to pitch control for a tilt wing aircraft was conducted in 1990 on the NASA-Ames Vertical Motion Simulator. The purpose of the study was to evaluate and compare the handling qualities of both a conventional and a geared flap tilt wing control configuration. The geared flap is an innovative control concept which has the potential for reducing or eliminating the horizontal pitch control tail rotor or reaction jets required by prior tilt wing designs. The handling qualities results of the geared flap control configuration are presented in this paper and compared to the conventional (programmed flap) tilt wing control configuration. This paper also describes the geared flap concept, the tilt wing aircraft, the simulation model, the simulation facility and experiment setup, and the pilot evaluation tasks and procedures.

  8. Ship's behaviour during hurricane Sandy near the USA coasts. Simulation results

    NASA Astrophysics Data System (ADS)

    Chiotoroiu, B.; Grosan, N.; Soare, L.

    2015-11-01

    The aim of this study is to analyze the impact of the stormy weather during hurricane Sandy on an oil tanker using a navigation simulator. Meteorological and wave maps from forecast models are used, together with relevant information from the meteorological warnings. The simulation sessions were performed on the navigation simulator at the Constanta Maritime University and allowed us to select specific parameters for the ship and the environment in order to observe the ship's behavior in heavy sea conditions. The simulation results are important because of the unexpected environmental conditions and the ship's position: very close to the hurricane centre when the storm began to change its track and transform into an extratropical cyclone.

  9. Comparing Simulation Results with Traditional PRA Model on a Boiling Water Reactor Station Blackout Case Study

    SciTech Connect

    Zhegang Ma; Diego Mandelli; Curtis Smith

    2011-07-01

    A previous study used RELAP and RAVEN to conduct a boiling water reactor station blackout (SBO) case study in a simulation-based environment to show the capabilities of the risk-informed safety margin characterization methodology. This report compares the RELAP/RAVEN simulation results with traditional PRA model results. The RELAP/RAVEN simulation runs were reviewed for their input parameters and output results. The input parameters for each simulation run include various timing information such as diesel generator or offsite power recovery time, Safety Relief Valve stuck-open time, High Pressure Core Injection or Reactor Core Isolation Cooling fail-to-run time, extended core cooling operation time, depressurization delay time, and firewater injection time. The output results include the maximum fuel clad temperature, the outcome, and the simulation end time. The traditional SBO PRA model in this report contains four event trees that are linked together with the transferring feature in the SAPHIRE software. Unlike the usual Level 1 PRA quantification process, in which only core damage sequences are quantified, this report quantifies all SBO sequences, whether they are core damage sequences or success (i.e., non-core-damage) sequences, in order to provide a full comparison with the simulation results. Three different approaches were used to solve event tree top events and quantify the SBO sequences: “W” process flag, default process flag without proper adjustment, and default process flag with adjustment to account for the success branch probabilities. Without post-processing, the first two approaches yield incorrect results with a total conditional probability greater than 1.0. The last approach accounts for the success branch probabilities and provides correct conditional sequence probabilities to be used for comparison. To better compare the results from the PRA model and the simulation runs, a simplified SBO event tree was developed with only four
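
    The report's point that success branches must carry their complement probabilities, or the total conditional probability exceeds 1.0, can be sketched with a toy event tree. The top-event names and failure probabilities below are hypothetical, not taken from the report:

    ```python
    from itertools import product

    # Illustrative top-event failure probabilities (hypothetical values)
    top_events = {"EDG_recovery": 0.3, "RCIC_runs": 0.1, "Depressurize": 0.05}

    def quantify_all_sequences(tops):
        """Return {branch tuple: probability} over every sequence, where each
        success branch contributes (1 - q) rather than being dropped."""
        seqs = {}
        names = list(tops)
        for branches in product((True, False), repeat=len(names)):
            p = 1.0
            for name, failed in zip(names, branches):
                q = tops[name]
                p *= q if failed else (1.0 - q)   # success branch uses 1 - q
            seqs[branches] = p
        return seqs

    seqs = quantify_all_sequences(top_events)
    print(f"total = {sum(seqs.values()):.10f}")   # sums to 1.0 only because
                                                  # success branches are included
    ```

    Dropping the `(1.0 - q)` factors (the "without proper adjustment" case) would make each sequence too large, and the totals would no longer be valid conditional probabilities.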

  10. High Fidelity Thermal Simulators for Non-Nuclear Testing: Analysis and Initial Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David

    2007-01-01

    Non-nuclear testing can be a valuable tool in the development of a space nuclear power system, providing system characterization data and allowing one to work through various fabrication, assembly, and integration issues without the cost and time associated with a full ground nuclear test. In a non-nuclear test bed, electric heaters are used to simulate the heat from nuclear fuel. Testing with non-optimized heater elements allows one to assess thermal, heat transfer, and stress related attributes of a given system, but fails to demonstrate the dynamic response that would be present in an integrated, fueled reactor system. High fidelity thermal simulators that match both the static and the dynamic fuel pin performance that would be observed in an operating, fueled nuclear reactor can vastly increase the value of non-nuclear test results. With optimized simulators, the integration of thermal hydraulic hardware tests with simulated neutronic response provides a bridge between electrically heated testing and fueled nuclear testing, providing a better assessment of system integration issues, characterization of integrated system response times and response characteristics, and assessment of potential design improvements at a relatively small fiscal investment. Initial conceptual thermal simulator designs are determined by simple one-dimensional analysis at a single axial location and at steady state conditions; feasible concepts are then input into a detailed three-dimensional model for comparison to expected fuel pin performance. Static and dynamic fuel pin performance for a proposed reactor design is determined using SINDA/FLUINT thermal analysis software, and comparison is made between the expected nuclear performance and the performance of conceptual thermal simulator designs. Through a series of iterative analyses, a conceptual high fidelity design can be developed. Test results presented in this paper correspond to a "first cut" simulator design for a potential

  11. Geometry and Simulation Results for a Gas Turbine Representative of the Energy Efficient Engine (EEE)

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Beach, Tim; Turner, Mark; Hendricks, Eric S.

    2015-01-01

    This paper describes the geometry and simulation results of a gas-turbine engine based on the original EEE engine developed in the 1980s. While the EEE engine was never in production, the technology developed during the program underpins many of the current generation of gas turbine engines. This geometry is being explored as a potential multi-stage turbomachinery test case that may be used to develop technology for virtual full-engine simulation. Simulation results were used to test the validity of each component geometry representation. Results are compared to a zero-dimensional engine model developed from experimental data. The geometry is captured in a series of Initial Graphics Exchange Specification (IGES) files and is available on a supplemental DVD to this report.

  12. Assessing effects of the e-Chasqui laboratory information system on accuracy and timeliness of bacteriology results in the Peruvian tuberculosis program.

    PubMed

    Blaya, Joaquin A; Shin, Sonya S; Yagui, Martin J A; Yale, Gloria; Suarez, Carmen; Asencios, Luis; Fraser, Hamish

    2007-01-01

    We created a web-based laboratory information system, e-Chasqui, to connect public laboratories with health centers in order to improve communication and analysis. After one year, we performed a pre- and post-assessment of communication delays and found that e-Chasqui maintained the average delay but eliminated delays of over 60 days. Adding digital verification maintained the average delay but should increase accuracy. We are currently performing a randomized evaluation of the impacts of e-Chasqui. PMID:18693974

  13. Effect of vertebral surface extraction on registration accuracy: a comparison of registration results for iso-intensity algorithms applied to computed tomography images

    NASA Astrophysics Data System (ADS)

    Herring, Jeannette L.; Maurer, Calvin R., Jr.; Muratore, Diane M.; Galloway, Robert L., Jr.; Dawant, Benoit M.

    1999-05-01

    This paper presents a comparison of iso-intensity-based surface extraction algorithms applied to computed tomography (CT) images of the spine. The extracted vertebral surfaces are used in surface-based registration of CT images to physical space, where our ultimate goal is the development of a technique that can be used for image-guided spinal surgery. The surface extraction process has a direct effect on image-guided surgery in two ways: the extracted surface must provide an accurate representation of the actual surface so that a good registration can be achieved, and the number of polygons in the mesh representation of the extracted surface must be small enough to allow the registration to be performed quickly. To examine the effect of the surface extraction process on registration error and run time, we have performed a large number of experiments on two plastic spine phantoms. Using a marker-based system to assess accuracy, we have found that submillimetric registration accuracy can be achieved using a point-to-surface registration algorithm with simplified and unsimplified members of the general class of iso-intensity-based surface extraction algorithms. This research has practical implications, since it shows that several versions of the widely available class of intensity-based surface extraction algorithms can be used to provide sufficient accuracy for vertebral registration. Since intensity-based algorithms are completely deterministic and fully automatic, this finding simplifies the pre-processing required for image-guided back surgery.

  14. Canonical Signed Digit Study. Part 2; FIR Digital Filter Simulation Results

    NASA Technical Reports Server (NTRS)

    Kim, Heechul

    1996-01-01

    A finite impulse response (FIR) digital filter using canonical signed-digit (CSD) number representation for its coefficients has been studied, and its computer simulation results are presented here. A minimum mean square error (MMSE) criterion is employed to optimize the filter coefficients into the corresponding CSD numbers. To further improve the coefficient optimization process, an extra non-zero bit is added for any filter coefficient exceeding 1/2. This technique improves the frequency response of the filter with almost no increase in filter complexity. The simulation results show outstanding bit-error-rate (BER) performance for all of the CSD-implemented digital filters included in this presentation material.
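
    The appeal of CSD coefficients is that each nonzero digit costs one shift-add in a multiplierless filter, and CSD minimizes nonzero digits. A minimal sketch of generic integer CSD conversion (not the MMSE optimization used in the study):

    ```python
    def to_csd(n):
        """Convert a non-negative integer to canonical signed-digit form.

        Returns digits in {-1, 0, +1}, least-significant first, with the
        CSD property that no two adjacent digits are both nonzero.
        """
        digits = []
        while n != 0:
            if n % 2:                  # odd: emit +1 or -1 so n becomes divisible by 4
                d = 2 - (n % 4)        # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
                n -= d
            else:
                d = 0
            digits.append(d)
            n //= 2
        return digits

    # 7 = 8 - 1 needs two nonzero CSD digits versus three binary ones,
    # i.e. two shift-adds instead of three for a coefficient of 7.
    print(to_csd(7))   # [-1, 0, 0, 1]
    ```

    Fractional filter coefficients are handled the same way after scaling by the word length, e.g. quantizing to `round(c * 2**B)` before conversion.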

  15. SIMULATION AND ANALYSIS OF MICROWAVE TRANSMISSION THROUGH AN ELECTRON CLOUD, A COMPARISON OF RESULTS

    SciTech Connect

    Sonnad, Kiran G.; Furman, Miguel; Veitzer, Seth A.; Cary, John

    2006-04-15

    Simulation studies of the transmission of microwaves through electron clouds show good agreement with analytic results. The electron cloud produces a shift in the phase of the microwave. Experimental observation of this phenomenon would provide a useful diagnostic tool for assessing the local density of electron clouds in an accelerator. These experiments are being carried out at the CERN SPS and the PEP-II LER at SLAC, and are proposed for the Fermilab main injector. In this study, a brief analysis of the phase shift is provided and the results are compared with those obtained from simulations.
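
    For a wave frequency well above the plasma frequency, the phase shift accumulated over a cloud of length L is approximately Δφ ≈ ω_p²L/(2cω), which is how a measured shift maps back to cloud density. A minimal sketch with illustrative parameters (not values from the experiments cited above):

    ```python
    import math

    # CODATA physical constants
    E_CHARGE = 1.602176634e-19      # C
    E_MASS   = 9.1093837015e-31     # kg
    EPS0     = 8.8541878128e-12     # F/m
    C_LIGHT  = 2.99792458e8         # m/s

    def ecloud_phase_shift(n_e, freq, length):
        """Phase shift (rad) of a wave at `freq` (Hz) crossing a uniform
        electron cloud of density n_e (m^-3) over `length` (m),
        valid for omega >> omega_p."""
        omega = 2 * math.pi * freq
        omega_p2 = n_e * E_CHARGE**2 / (EPS0 * E_MASS)   # plasma frequency squared
        return omega_p2 * length / (2 * C_LIGHT * omega)

    # Hypothetical example: 1e12 m^-3 cloud, 2 GHz carrier, 30 m path
    dphi = ecloud_phase_shift(n_e=1e12, freq=2e9, length=30.0)
    print(f"{dphi * 1e3:.2f} mrad")
    ```

    Because the shift is linear in n_e in this regime, doubling the cloud density doubles Δφ, which is what makes the measurement a density diagnostic.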

  16. Simulation and Analysis of Microwave Transmission through an Electron Cloud, a Comparison of Results

    SciTech Connect

    Sonnad, Kiran; Furman, Miguel; Veitzer, Seth; Stoltz, Peter; Cary, John

    2007-03-12

    Simulation studies of the transmission of microwaves through electron clouds show good agreement with analytic results. The electron cloud produces a shift in the phase of the microwave. Experimental observation of this phenomenon would provide a useful diagnostic tool for assessing the local density of electron clouds in an accelerator. These experiments are being carried out at the CERN SPS and the PEP-II LER at SLAC, and are proposed for the Fermilab main injector. In this study, a brief analysis of the phase shift is provided and the results are compared with those obtained from simulations.

  17. Residual stresses in resistance spot welding: Comparison of simulation and measured results

    SciTech Connect

    Sheppard, S.; Syed, M.

    1994-12-31

    Numerical simulations of welding processes offer researchers and engineers the opportunity to study in detail the thermal and mechanical histories created by welding. The objective of this work is to explore the influence of the dynamically changing contact patch size on thermal and mechanical histories in resistance spot welding. To this end, a fully coupled electrical-thermal-mechanical simulation of RSW has been developed. The simulation considers welding and the subsequent cooling of the workpiece. The results of such a simulation are presented for the case of HSLA galvanized sheet and are compared with numerical results where such a coupling was not included. In particular, thermal histories and the final states of residual stresses are compared. Specifically, the fully coupled simulation results show that: (1) There is a 44% reduction in contact area at the faying surface as welding progresses. (2) There are substantial (near yield strength) residual stresses in the annulus surrounding the weld nugget. (3) Cooling rates in the nugget are on the order of 10,000 °F/s when welding with electrode hold time. Rates are closer to 1,000 °F/s when there is no electrode hold time. (4) Predicted residual stresses compare favorably with measured values. Note that it is extremely difficult (if not impossible) to make residual stress measurements in the area of greatest concern with regard to weld fatigue failure. The predicted residual stresses will be valuable input for engineers and researchers concerned with the fatigue performance of resistance spot welded structures.

  18. Training in timing improves accuracy in golf.

    PubMed

    Libkuman, Terry M; Otani, Hajime; Steger, Neil

    2002-01-01

    In this experiment, the authors investigated the influence of training in timing on performance accuracy in golf. During pre- and posttesting, 40 participants hit golf balls with 4 different clubs in a golf course simulator. The dependent measure was the distance in feet that the ball ended from the target. Between the pre- and posttest, participants in the experimental condition received 10 hr of timing training with an instrument that was designed to train participants to tap their hands and feet in synchrony with target sounds. The participants in the control condition read literature about how to improve their golf swing. The results indicated that the participants in the experimental condition significantly improved their accuracy relative to the participants in the control condition, who did not show any improvement. We concluded that training in timing leads to improvement in accuracy, and that our results have implications for training in golf as well as other complex motor activities. PMID:12038497

  19. Comparison of the analytical and simulation results of the equilibrium beam profile

    SciTech Connect

    Liu, Z. J.; Zhu Shaoping; Cao, L. H.; Zheng, C. Y.

    2007-10-15

    The evolution of high current electron beams in dense plasmas has been investigated by using two-dimensional particle-in-cell (PIC) simulations with immobile ions. It is shown that electron beams are split into many filaments at the beginning due to the Weibel instability, and then different filamentation beams attract each other and coalesce. The profile of the filaments can be described by formulas. Hammer et al. [Phys. Fluids 13, 1831 (1970)] developed a self-consistent relativistic electron beam model that allows the propagation of relativistic electron fluxes in excess of the Alfven-Lawson critical-current limit for a fully neutralized beam. The equilibrium solution has been observed in the simulation results, but the electron distribution function assumed by Hammer et al. is different from the simulation results.

  20. Simulation of casing vibration resulting from blade-casing rubbing and its verifications

    NASA Astrophysics Data System (ADS)

    Chen, G.

    2016-01-01

    In order to diagnose the blade-casing rubbing fault effectively, it is necessary to simulate the casing vibration correctly and to study the characteristics of the casing signals under blade-casing rubbing. In this paper, the casing vibrations in an aero-engine resulting from blade-casing rubbing are simulated. Firstly, an improved aero-engine blade-casing rubbing model is introduced, in which the effects of the number of blades and of changes in the rotor-stator clearance on the rubbing forces are considered; the improved rubbing model can simulate rubbing faults for various rubbing conditions, including single-point, multi-point, local-part, and complete-cycle rubbing on the casing and rotor. Secondly, the rubbing model was applied to the rotor-support-casing coupling model, and the casing acceleration responses under rubbing faults were obtained using a time integration approach that combines the Newmark-β method with an improved explicit integration method known as the Zhai method. Thirdly, an aero-engine rotor tester with casings was used to carry out rubbing experiments for single-point rubbing on the casing and complete-cycle rubbing on the rotor; the simulation results were found to agree well with the experimental values, fully verifying the improved blade-casing rubbing model. Finally, other rubbing faults were simulated for various rubbing conditions and their rubbing characteristics were analyzed.
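
    The Newmark-β step used in such coupled models can be sketched for a single-degree-of-freedom oscillator. The implementation below is the standard average-acceleration form (β = 1/4, γ = 1/2), not the paper's rotor-support-casing code:

    ```python
    import math

    def newmark_beta(m, c, k, u0, v0, dt, steps, beta=0.25, gamma=0.5,
                     force=lambda t: 0.0):
        """Newmark-beta integration of m*u'' + c*u' + k*u = f(t).

        Returns the displacement history [u0, u1, ...]."""
        u, v = u0, v0
        a = (force(0.0) - c * v - k * u) / m          # initial acceleration
        us = [u]
        keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
        for i in range(1, steps + 1):
            t = i * dt
            feff = (force(t)
                    + m * (u / (beta * dt**2) + v / (beta * dt)
                           + (0.5 / beta - 1) * a)
                    + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                           + dt * (gamma / (2 * beta) - 1) * a))
            u_new = feff / keff
            v_new = ((gamma / (beta * dt)) * (u_new - u)
                     + (1 - gamma / beta) * v
                     + dt * (1 - gamma / (2 * beta)) * a)
            a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
                     - (0.5 / beta - 1) * a)
            u, v, a = u_new, v_new, a_new
            us.append(u)
        return us

    # Free vibration check: undamped oscillator at 1 Hz, u(t) = cos(2*pi*t)
    m, k = 1.0, (2 * math.pi) ** 2
    us = newmark_beta(m, 0.0, k, u0=1.0, v0=0.0, dt=0.001, steps=1000)
    print(abs(us[-1] - 1.0))   # after one full period, u is back near 1
    ```

    Average acceleration is unconditionally stable, which is why implicit Newmark variants are a common backbone for rotor dynamics; explicit schemes such as the Zhai method trade that robustness for cheaper steps.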

  1. SU-D-16A-04: Accuracy of Treatment Plan TCP and NTCP Values as Determined Via Treatment Course Delivery Simulations

    SciTech Connect

    Siebers, J; Xu, H; Gordon, J

    2014-06-01

    Purpose: To determine whether tumor control probability (TCP) and normal tissue complication probability (NTCP) values computed on the treatment planning image are representative of the TCP/NTCP distributions resulting from probable positioning variations encountered during external-beam radiotherapy. Methods: We compare TCP/NTCP as typically computed on the planning PTV/OARs with distributions of those parameters computed for the CTV/OARs via treatment delivery simulations, which include the effect of patient organ deformations, for a group of 19 prostate IMRT pseudocases. Planning objectives specified 78 Gy to PTV1 = prostate CTV + 5 mm margin, 66 Gy to PTV2 = seminal vesicles + 8 mm margin, and multiple bladder/rectum OAR objectives to achieve typical clinical OAR sparing. TCPs were computed using the Poisson model, while NTCPs used the Lyman-Kutcher-Burman model. For each patient, 1000 30-fraction virtual treatment courses were simulated, with each fractional pseudo-time-of-treatment anatomy sampled from a principal component analysis patient deformation model. Dose for each virtual treatment course was determined via deformable summation of dose from the individual fractions. CTV-TCP/OAR-NTCP values were computed for each treatment course, statistically analyzed, and compared with the planning PTV-TCP/OAR-NTCP values. Results: Mean TCP from the simulations differed by <1% from the planned TCP for 18/19 patients; 1/19 differed by 1.7%. Mean bladder NTCP differed from the planned NTCP by >5% for 12/19 patients and >10% for 4/19 patients. Similarly, mean rectum NTCP differed by >5% for 12/19 patients and >10% for 4/19 patients. Both mean bladder and mean rectum NTCP differed by >5% for 10/19 patients and by >10% for 2/19 patients. For several patients, the planned NTCP was less than the minimum or more than the maximum from the treatment course simulations.
    Conclusion: Treatment course simulations yield TCP values that are similar to planned values, while OAR NTCPs differ significantly, indicating the
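
    The two dose-response models named in the abstract have standard closed forms. A minimal sketch, with illustrative parameter values rather than the study's fitted ones:

    ```python
    import math

    def poisson_tcp(doses, volumes, alpha=0.3, clonogen_density=1e7):
        """Poisson TCP for a voxelized dose distribution.

        doses: Gy per voxel; volumes: cm^3 per voxel; survival per clonogen
        is modeled as exp(-alpha * D) (linear cell kill, illustrative)."""
        tcp = 1.0
        for d, v in zip(doses, volumes):
            surviving = clonogen_density * v * math.exp(-alpha * d)
            tcp *= math.exp(-surviving)       # P(no surviving clonogens)
        return tcp

    def lkb_ntcp(doses, frac_volumes, td50, m, n):
        """Lyman-Kutcher-Burman NTCP: reduce the dose distribution to a
        generalized EUD, then feed it through a probit dose response."""
        geud = sum(fv * d ** (1.0 / n)
                   for d, fv in zip(doses, frac_volumes)) ** n
        t = (geud - td50) / (m * td50)
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

    # Sanity check: a uniform organ dose equal to TD50 gives NTCP = 0.5
    print(lkb_ntcp([80.0], [1.0], td50=80.0, m=0.15, n=0.5))   # 0.5
    ```

    In a per-course comparison like the abstract's, these functions would be evaluated once on the planned PTV/OAR dose and once per simulated course on the deformably accumulated CTV/OAR dose.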

  2. Battery Performance of ADEOS (Advanced Earth Observing Satellite) and Ground Simulation Test Results

    NASA Technical Reports Server (NTRS)

    Koga, K.; Suzuki, Y.; Kuwajima, S.; Kusawake, H.

    1997-01-01

    The Advanced Earth Observing Satellite (ADEOS) was developed with the aim of establishing platform technology for future spacecraft and inter-orbit communication technology for the transmission of earth observation data. ADEOS uses 5 batteries consisting of two packs. This paper describes, using graphs and tables, the ground simulation tests and results that were carried out to determine the performance of the ADEOS batteries.

  3. An outcome-based learning model to identify emerging threats : experimental and simulation results.

    SciTech Connect

    Martinez-Moyano, I. J.; Conrad, S. H.; Andersen, D. F.; Decision and Information Sciences; SNL; Univ. at Albany

    2007-01-01

    The authors present experimental and simulation results of an outcome-based learning model as it applies to the identification of emerging threats. This model integrates judgment, decision making, and learning theories to provide an integrated framework for the behavioral study of emerging threats.

  4. Numerical simulation of particle fluxes formation generated as a result of space objects breakups in orbit

    NASA Astrophysics Data System (ADS)

    Aleksandrova, A. G.; Galushina, T. Yu.

    2015-12-01

    The paper describes the software package developed for the numerical simulation of the breakups of natural and artificial objects, and the algorithms on which it is based. The new software, "Numerical model of breakups", includes models of spacecraft (SC) breakup as a result of explosion and collision, as well as two models of the explosion of an asteroid.

  5. [Simulation in healthcare for the announcement of harm resulting from healthcare].

    PubMed

    Cluzel, Franck

    2016-04-01

    Simulation is an effective means of transferring competencies in a complex situation such as the announcement of harm resulting from healthcare. The aim is to reinforce patient safety, to improve communication between nurses and patients and between health professionals. PMID:27085931

  6. Methods for improving accuracy and extending results beyond periods covered by traditional ground-truth in remote sensing classification of a complex landscape

    NASA Astrophysics Data System (ADS)

    Mueller-Warrant, George W.; Whittaker, Gerald W.; Banowetz, Gary M.; Griffith, Stephen M.; Barnhart, Bradley L.

    2015-06-01

    Successful development of approaches to quantify impacts of diverse landuse and associated agricultural management practices on ecosystem services is frequently limited by lack of historical and contemporary landuse data. We hypothesized that ground truth data from one year could be used to extrapolate previous or future landuse in a complex landscape where cropping systems do not generally change greatly from year to year, because the majority of crops are established perennials or the same annual crops grown on the same fields over multiple years. Prior to testing this hypothesis, it was first necessary to classify 57 major landuses in the Willamette Valley of western Oregon from 2005 to 2011 using normal same-year ground truth, elaborating on previously published work and traditional sources such as Cropland Data Layers (CDL) to more fully include minor crops grown in the region. Available remote sensing data included Landsat, MODIS 16-day composites, and National Aerial Imagery Program (NAIP) imagery, all of which were resampled to a common 30 m resolution. The frequent presence of clouds and Landsat7 scan line gaps forced us to conduct a series of separate classifications in each year, which were then merged by choosing whichever classification used the highest number of cloud- and gap-free bands at any given pixel. Procedures adopted to improve accuracy beyond that achieved by maximum likelihood pixel classification included majority-rule reclassification of pixels within 91,442 Common Land Unit (CLU) polygons, smoothing and aggregation of areas outside the CLU polygons, and majority-rule reclassification over time of forest and urban development areas. Final classifications in all seven years separated annually disturbed agriculture, established perennial crops, forest, and urban development from each other at 90 to 95% overall 4-class validation accuracy. In the most successful use of subsequent year ground-truth data to classify prior year landuse, an
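
    The majority-rule reclassification of pixels within CLU polygons amounts to a vote per polygon. A minimal sketch, with invented class names and polygon ids for illustration:

    ```python
    from collections import Counter

    def majority_reclassify(pixel_classes, polygon_ids):
        """Reassign every pixel to the majority class of its polygon.

        pixel_classes, polygon_ids: equal-length flat sequences; pixels
        whose polygon_id is None lie outside any polygon and are kept."""
        votes = {}
        for cls, poly in zip(pixel_classes, polygon_ids):
            if poly is not None:
                votes.setdefault(poly, Counter())[cls] += 1
        majority = {poly: counts.most_common(1)[0][0]
                    for poly, counts in votes.items()}
        return [majority[poly] if poly is not None else cls
                for cls, poly in zip(pixel_classes, polygon_ids)]

    # A field polygon classified mostly as wheat absorbs its noisy pixel:
    classes  = ["wheat", "wheat", "fallow", "wheat", "urban"]
    polygons = [1, 1, 1, 1, None]
    print(majority_reclassify(classes, polygons))
    # ['wheat', 'wheat', 'wheat', 'wheat', 'urban']
    ```

    The same voting idea extends to the temporal reclassification step: for stable classes like forest and urban, the vote runs over a pixel's labels across years instead of over a polygon's pixels within one year.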

  7. Analysis Results for Lunar Soil Simulant Using a Portable X-Ray Fluorescence Analyzer

    NASA Technical Reports Server (NTRS)

    Boothe, R. E.

    2006-01-01

    Lunar soil will potentially be used for oxygen generation, water generation, and as filler for building blocks during habitation missions on the Moon. NASA's in situ fabrication and repair program is evaluating portable technologies that can assess the chemistry of lunar soil and lunar soil simulants. This Technical Memorandum summarizes the results of the JSC-1 lunar soil simulant analysis using the TRACeR III IV handheld x-ray fluorescence analyzer, manufactured by KeyMaster Technologies, Inc. The focus of the evaluation was to determine how well the current instrument configuration would detect and quantify the components of JSC-1.

  8. Performance and human factors results from thrust vectoring investigations in simulated air combat

    NASA Technical Reports Server (NTRS)

    Pennington, J. E.; Meintel, A. J., Jr.

    1980-01-01

    In support of research related to advanced fighter technology, the Langley Differential Maneuvering Simulator (DMS) has been used to investigate the effects of advanced aerodynamic concepts, parametric changes in performance parameters, and advanced flight control systems on the combat capability of fighter airplanes. At least five studies were related to thrust vectoring and/or inflight thrust reversing. The aircraft simulated ranged from F-4 class to F-15 class, and included the AV-8 Harrier. This paper presents an overview of these studies including the assumptions involved, trends of results, and human factors considerations that were found.

  9. Results of intravehicular manned cargo-transfer studies in simulated weightlessness

    NASA Technical Reports Server (NTRS)

    Spady, A. A., Jr.; Beasley, G. P.; Yenni, K. R.; Eisele, D. F.

    1972-01-01

    A parametric investigation was conducted in a water immersion simulator to determine the effect of package mass, moment of inertia, and size on the ability of man to transfer cargo in simulated weightlessness. Results from this study indicate that packages with masses of at least 744 kg and moments of inertia of at least 386 kg-m2 can be manually handled and transferred satisfactorily under intravehicular conditions using either one- or two-rail motion aids. Data leading to the conclusions and discussions of test procedures and equipment are presented.

  10. Reconfigurable computing for Monte Carlo simulations: Results and prospects of the Janus project

    NASA Astrophysics Data System (ADS)

    Baity-Jesi, M.; Baños, R. A.; Cruz, A.; Fernandez, L. A.; Gil-Narvion, J. M.; Gordillo-Guerrero, A.; Guidetti, M.; Iñiguez, D.; Maiorano, A.; Mantovani, F.; Marinari, E.; Martin-Mayor, V.; Monforte-Garcia, J.; Muñoz Sudupe, A.; Navarro, D.; Parisi, G.; Pivanti, M.; Perez-Gaviro, S.; Ricci-Tersenghi, F.; Ruiz-Lorenzo, J. J.; Schifano, S. F.; Seoane, B.; Tarancon, A.; Tellez, P.; Tripiccione, R.; Yllanes, D.

    2012-08-01

    We describe Janus, a massively parallel FPGA-based computer optimized for the simulation of spin glasses, theoretical models for the behavior of glassy materials. FPGAs (as compared to GPUs or many-core processors) provide a complementary approach to massively parallel computing. In particular, our model problem is formulated in terms of binary variables, and floating-point operations can be (almost) completely avoided. The FPGA architecture allows us to run many independent threads with almost no latencies in memory access, thus updating up to 1024 spins per cycle. We describe Janus in detail and we summarize the physics results obtained in four years of operation of this machine; we discuss two types of physics applications: long simulations on very large systems (which try to mimic and provide understanding about the experimental non-equilibrium dynamics), and low-temperature equilibrium simulations using an artificial parallel tempering dynamics. The time scale of our non-equilibrium simulations spans eleven orders of magnitude (from picoseconds to a tenth of a second). On the other hand, our equilibrium simulations are unprecedented both because of the low temperatures reached and for the large systems that we have brought to equilibrium. A finite-time scaling ansatz emerges from the detailed comparison of the two sets of simulations. Janus has made it possible to perform spin-glass simulations that would take several decades on more conventional architectures. The paper ends with an assessment of the potential of possible future versions of the Janus architecture, based on state-of-the-art technology.
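
    The binary-variable formulation that Janus exploits can be illustrated with a plain software Metropolis sweep of the 2D Edwards-Anderson ±J spin glass; this is a pedagogical sketch, not the FPGA multi-spin update scheme:

    ```python
    import math
    import random

    def metropolis_sweep(spins, J_right, J_down, L, T, rng):
        """One Metropolis sweep on an L x L periodic lattice.

        spins and the quenched couplings J_right[i][j] (bond to the right
        neighbor) and J_down[i][j] (bond to the lower neighbor) are all +-1."""
        for i in range(L):
            for j in range(L):
                # local field from the four neighbors, each via its own bond
                h = (J_right[i][j] * spins[i][(j + 1) % L]
                     + J_right[i][(j - 1) % L] * spins[i][(j - 1) % L]
                     + J_down[i][j] * spins[(i + 1) % L][j]
                     + J_down[(i - 1) % L][j] * spins[(i - 1) % L][j])
                dE = 2 * spins[i][j] * h        # energy cost of flipping
                if dE <= 0 or rng.random() < math.exp(-dE / T):
                    spins[i][j] = -spins[i][j]

    def energy(spins, J_right, J_down, L):
        """H = -sum_<ij> J_ij s_i s_j over right and down bonds."""
        return -sum(J_right[i][j] * spins[i][j] * spins[i][(j + 1) % L]
                    + J_down[i][j] * spins[i][j] * spins[(i + 1) % L][j]
                    for i in range(L) for j in range(L))

    rng = random.Random(1)
    L = 16
    spins   = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    J_right = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    J_down  = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    e0 = energy(spins, J_right, J_down, L)
    for _ in range(50):
        metropolis_sweep(spins, J_right, J_down, L, T=0.5, rng=rng)
    print(e0, energy(spins, J_right, J_down, L))   # energy drops at low T
    ```

    Because every spin and coupling is ±1, the local field h takes only a few integer values, so the `exp(-dE/T)` factors can be precomputed into a small lookup table and the whole update done with logic operations, which is the property the FPGA design exploits.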

  11. The latest results from ELM-simulation experiments in plasma accelerators

    NASA Astrophysics Data System (ADS)

    Garkusha, I. E.; Arkhipov, N. I.; Klimov, N. S.; Makhlaj, V. A.; Safronov, V. M.; Landman, I.; Tereshin, V. I.

    2009-12-01

    Recent results of ELM-simulation experiments with quasi-stationary plasma accelerators (QSPAs) Kh-50 (Kharkov, Ukraine) and QSPA-T (Troitsk, Russia) as well as experiments in the pulsed plasma gun MK-200UG (Troitsk, Russia) are discussed. Primary attention in Troitsk experiments has been focused on investigating the carbon-fibre composite (CFC) and tungsten erosion mechanisms, their onset conditions and the contribution of various erosion mechanisms (including droplet splashing) to the resultant surface damage at varying plasma heat flux. The obtained results are used for validating the numerical codes PEGASUS and MEMOS developed in FZK. Crack patterns and residual stresses in tungsten targets under repetitive edge localized mode (ELM)-like plasma pulses are studied in simulation experiments with QSPA Kh-50. Statistical processing of the experimental results on crack patterns after different numbers of QSPA Kh-50 exposures as well as those on the dependence of cracking on the heat load and surface temperature is performed.

  12. Ca-Pri a Cellular Automata Phenomenological Research Investigation: Simulation Results

    NASA Astrophysics Data System (ADS)

    Iannone, G.; Troisi, A.

    2013-05-01

    Following the introduction of a phenomenological cellular automata (CA) model capable of reproducing city growth and urban sprawl, we develop a toy model simulation in a realistic framework. The main characteristic of our approach is an evolution algorithm based on inhabitants' preferences. The control of grown cells is obtained by means of suitable functions which depend on the initial conditions of the simulation. New urban settlements arise through a logistic evolution of the urban pattern, while urban sprawl is controlled by means of the population evolution function. In order to compare the model results with a realistic urban framework we considered, as the area of study, the island of Capri (Italy) in the Mediterranean Sea. Two different phases of the urban evolution on the island have been taken into account: an initial growth phase induced by geographic suitability, and the simulation of urban spread after 1943 driven by the population evolution after this date.

  13. Monte Carlo simulations of microchannel plate detectors I: steady-state voltage bias results

    SciTech Connect

    Ming Wu, Craig Kruschwitz, Dane Morgan, Jiaming Morgan

    2008-07-01

    X-ray detectors based on straight-channel microchannel plates (MCPs) are a powerful diagnostic tool for two-dimensional, time-resolved imaging and time-resolved x-ray spectroscopy in the fields of laser-driven inertial confinement fusion and fast z-pinch experiments. Understanding the behavior of microchannel plates as used in such detectors is critical to understanding the data obtained. The subject of this paper is a Monte Carlo computer code we have developed to simulate the electron cascade in a microchannel plate under a static applied voltage. Also included in the simulation is elastic reflection of low-energy electrons from the channel wall, which is important at lower voltages. When model results were compared to measured microchannel plate sensitivities, good agreement was found. Spatial resolution simulations of MCP-based detectors are also presented and found to agree with experimental measurements.
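
    The electron cascade such a code models can be caricatured as a branching process: each wall strike releases a random number of secondaries. The yield model and parameters below are illustrative, far simpler than the paper's geometry- and voltage-dependent simulation:

    ```python
    import math
    import random

    def poisson_sample(lam, rng):
        """Knuth's method for Poisson samples with small lambda."""
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    def mcp_gain(stages, delta, rng):
        """One electron cascade through `stages` wall collisions; each
        strike releases Poisson(delta) secondaries, a crude stand-in for
        the voltage-dependent secondary-electron yield."""
        electrons = 1
        for _ in range(stages):
            electrons = sum(poisson_sample(delta, rng) for _ in range(electrons))
            if electrons == 0:
                return 0            # cascade died out in the channel
        return electrons

    rng = random.Random(42)
    gains = [mcp_gain(stages=10, delta=2.0, rng=rng) for _ in range(1000)]
    mean_gain = sum(gains) / len(gains)
    print(mean_gain)   # branching-process mean is delta**stages = 1024
    ```

    Even this toy version shows the broad pulse-height distribution characteristic of straight-channel MCPs: many cascades die out early or run far above the mean, which is why measured gain histograms are so wide.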

  14. Some numerical simulation results of swirling flow in d.c. plasma torch

    NASA Astrophysics Data System (ADS)

    Felipini, C. L.; Pimenta, M. M.

    2015-03-01

    We present and discuss results of a numerical simulation of swirling flow in a d.c. plasma torch, obtained with a two-dimensional mathematical (MHD) model developed to simulate the phenomena related to the interaction between the swirling flow and the electric arc in a non-transferred arc plasma torch. The model was implemented in a computer code based on the Finite Volume Method (FVM) to enable the numerical solution of the governing equations. For the study, cases were simulated with different operating conditions (gas flow rate; swirl number). The results were compared to the literature and found to be in good agreement in most regions of the computational domain. The numerical simulations enabled the study of the behaviour of the flow in the plasma torch and of the effects of different swirl numbers on the temperature and axial velocity of the plasma flow. The results demonstrate that the developed model is suitable for obtaining a better understanding of the phenomena involved and for the development and optimization of plasma torches.

  15. Computer simulation of shelf and stream profile geomorphic evolution resulting from eustasy and uplift

    SciTech Connect

    Johnson, R.M.

    1993-04-01

    A two-dimensional computer simulation of shelf and stream profile evolution with sea level oscillation has been developed to illustrate the interplay of coastal and fluvial processes on uplifting continental margins. The shelf evolution portion of the simulation is based on the erosional model of Trenhaile (1989). The rate of high tide cliff erosion decreases as the abrasion platform gradient decreases and as the sea cliff height increases. The rate of subtidal erosion decreases as the subtidal sea floor gradient decreases. Values are specified for annual wave energy, the energy required to erode a cliff notch 1 meter deep, the nominal low tidal erosion rate, and the rate of removal of cliff debris. The values were chosen arbitrarily to yield a geomorphic evolution consistent with the present coast of northern California, where flights of uplifted marine terraces are common. The stream profile evolution simulation interfaces in real time with the shelf simulation. The stream profile consists of uniformly spaced cells, each representing the median height of a profile segment. The stream simulation results show that stream response to sea level change on an uplifting coast depends on the profile gradient near the stream mouth relative to the shelf gradient. Small streams with steep gradients aggrade onto the emergent shelf during sea level fall and incise at the mountain front during sea level rise. Large streams with low gradients incise the emergent shelf during sea level fall and aggrade in their valleys during sea level rise.
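    A heavily simplified 1-D sketch of this kind of profile model (Python; the surf-zone window, rate constants, and uplift rate are arbitrary stand-ins for the Trenhaile-style rules, not values from the paper): uplift raises the whole profile each step, while wave erosion, scaled by the local gradient, cuts cells near sea level.

```python
import numpy as np

def evolve(profile, dx=10.0, uplift=0.01, sea=0.0, k_wave=0.05, nsteps=100):
    """Toy 1-D coupled uplift/erosion profile model.
    Each step: uniform tectonic uplift, then wave erosion of cells in
    a narrow surf zone around sea level, scaled by the local slope."""
    for _ in range(nsteps):
        profile += uplift                        # uplift the whole margin
        grad = np.abs(np.gradient(profile, dx))  # local slope
        surf = np.abs(profile - sea) < 1.0       # cells within 1 m of sea level
        profile[surf] -= k_wave * grad[surf]     # erosion acts on the surf zone
    return profile

shore = np.linspace(-50.0, 50.0, 101)  # initial ramp from shelf to hills (m)
evolved = evolve(shore.copy())
```

    Cells that spend time in the surf zone end up below the elevation pure uplift would give them, carving a platform near sea level; a full model would add the sea level oscillation and the coupled stream profile.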

  16. Simulation Results for the New NSTX HHFW Antenna Straps Design by Using Microwave Studio

    SciTech Connect

    Kung, C C; Brunkhorst, C; Greenough, N; Fredd, E; Castano, A; Miller, D; D'Amico, G; Yager, R; Hosea, J; Wilson, J R; Ryan, P

    2009-05-26

    Experimental results have shown that the high harmonic fast wave (HHFW) at 30 MHz can provide substantial plasma heating and current drive for the NSTX spherical tokamak operation. However, the present antenna strap design rarely achieves the design goal of delivering the full transmitter capability of 6 MW to the plasma. In order to deliver more power to the plasma, a new antenna strap design and the associated coaxial line feeds are being constructed. This new antenna strap design features two feedthroughs to replace the old single feed-through design. In the design process, CST Microwave Studio has been used to simulate the entire new antenna strap structure including the enclosure and the Faraday shield. In this paper, the antenna strap model and the simulation results will be discussed in detail. The test results from the new antenna straps with their associated resonant loops will be presented as well.

  17. Results of an A109 simulation validation and handling qualities study

    NASA Technical Reports Server (NTRS)

    Eshow, Michelle M.; Orlandi, Diego; Bonaita, Giovanni; Barbieri, Sergio

    1989-01-01

    The results for the validation of a mathematical model of the Agusta A109 helicopter, and subsequent use of the model as the baseline for a handling qualities study of cockpit centerstick requirements, are described. The technical approach included flight test, non-realtime analysis, and realtime piloted simulation. Results of the validation illustrate a time- and frequency-domain approach to the model and simulator issues. The final A109 model correlates well with the actual aircraft with the Stability Augmentation System (SAS) engaged, but is unacceptable without the SAS because of instability and response coupling at low speeds. Results of the centerstick study support the current U.S. Army handling qualities requirements for centerstick characteristics.

  18. Results of an A109 simulation validation and handling qualities study

    NASA Technical Reports Server (NTRS)

    Eshow, Michelle M.; Orlandi, Diego; Bonaita, Giovanni; Barbieri, Sergio

    1990-01-01

    The results for the validation of a mathematical model of the Agusta A109 helicopter, and subsequent use of the model as the baseline for a handling qualities study of cockpit centerstick requirements, are described. The technical approach included flight test, non-realtime analysis, and realtime piloted simulation. Results of the validation illustrate a time- and frequency-domain approach to the model and simulator issues. The final A109 model correlates well with the actual aircraft with the Stability Augmentation System (SAS) engaged, but is unacceptable without the SAS because of instability and response coupling at low speeds. Results of the centerstick study support the current U.S. Army handling qualities requirements for centerstick characteristics.

  19. Results of aerodynamic testing of large-scale wing sections in a simulated natural rain environment

    NASA Technical Reports Server (NTRS)

    Bezos, Gaudy M.; Dunham, R. Earl, Jr.; Campbell, Bryan A.; Melson, W. Edward, Jr.

    1990-01-01

    The NASA Langley Research Center has developed a large-scale ground testing capability for evaluating the effect of heavy rain on airfoil lift. The paper presents the results obtained at the Langley Aircraft Landing Dynamics Facility on a 10-foot chord NACA 64-210 wing section equipped with a leading-edge slat and double-slotted trailing-edge flap deflected to simulate landing conditions. Aerodynamic lift data were obtained with and without the rain simulation system turned on for an angle-of-attack range of 7.5 to 19.5 deg and for two rainfall conditions: 9 in/hr and 40 in/hr. The results are compared to and correlated with previous small-scale wind tunnel results for the same airfoil section. It appears that, to first order, scale effects are not large and the wind tunnel research technique can be used to predict rain effects on airplane performance.

  20. Results from tight and loose coupled multiphysics in nuclear fuels performance simulations using BISON

    SciTech Connect

    Novascone, S. R.; Spencer, B. W.; Andrs, D.; Williamson, R. L.; Hales, J. D.; Perez, D. M.

    2013-07-01

    The behavior of nuclear fuel in the reactor environment is affected by multiple physics, most notably heat conduction and solid mechanics, which can have a strong influence on each other. To provide credible solutions, a fuel performance simulation code must have the ability to obtain solutions for each of the physics, including coupling between them. Solution strategies for solving systems of coupled equations can be categorized as loosely coupled, where the individual physics are solved separately, keeping the solutions for the other physics fixed at each iteration, or tightly coupled, where the nonlinear solver simultaneously drives down the residual for each physics, taking into account the coupling between the physics in each nonlinear iteration. In this paper, we compare the performance of loosely and tightly coupled solution algorithms for thermomechanical problems involving coupled thermal and mechanical contact, which is a primary source of interdependence between thermal and mechanical solutions in fuel performance models. The results indicate that loosely coupled simulations require significantly more nonlinear iterations and may run into convergence trouble when the thermal conductivity of the gap is too small. We also apply the tightly coupled solution strategy to a nuclear fuel simulation of an experiment in a test reactor. The results from these simulations indicate that convergence for either approach may be problem dependent, i.e., there may be problems for which a loosely coupled approach converges where a tightly coupled approach will not, and vice versa. (authors)
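    The two solution strategies can be contrasted on a toy two-field system (Python; the algebraic "thermal" and "mechanical" equations are invented for illustration and have nothing to do with BISON's physics): the loosely coupled loop is a Picard iteration that freezes one field while solving the other, while the tightly coupled solve applies Newton to the monolithic residual with the off-diagonal coupling terms in the Jacobian.

```python
import numpy as np

def residual(x):
    """Toy coupled system (invented): 'temperature' T drives
    'displacement' u, and u feeds back into the T equation."""
    T, u = x
    return np.array([T - 1.0 - 0.5 * u**2,   # stand-in heat conduction residual
                     u - 0.3 * T])           # stand-in solid mechanics residual

def loose(tol=1e-10, maxit=100):
    """Loosely coupled (Picard/operator-split): solve each physics in
    turn with the other field frozen, iterate to convergence."""
    T = u = 0.0
    for it in range(1, maxit + 1):
        T = 1.0 + 0.5 * u**2          # thermal solve, u fixed
        u = 0.3 * T                   # mechanical solve, T fixed
        if np.linalg.norm(residual((T, u))) < tol:
            return (T, u), it
    return (T, u), maxit

def tight(tol=1e-10, maxit=100):
    """Tightly coupled: Newton on the monolithic residual, with the
    off-diagonal coupling terms in the 2x2 Jacobian."""
    x = np.zeros(2)
    for it in range(1, maxit + 1):
        T, u = x
        J = np.array([[1.0, -u],      # dF1/dT, dF1/du
                      [-0.3, 1.0]])   # dF2/dT, dF2/du
        x = x - np.linalg.solve(J, residual(x))
        if np.linalg.norm(residual(x)) < tol:
            return tuple(x), it
    return tuple(x), maxit

(_, n_loose), (_, n_tight) = loose(), tight()
print(n_loose, n_tight)  # Newton needs fewer nonlinear iterations here
```

    Both reach the same fixed point, but the Picard loop contracts only linearly while Newton converges quadratically, mirroring the iteration counts reported above; when the inter-field coupling is stiff (the paper's small gap conductivity), the Picard contraction factor can exceed one and the loose loop diverges.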

  1. Results of Small-scale Solid Rocket Combustion Simulator testing at Marshall Space Flight Center

    NASA Astrophysics Data System (ADS)

    Goldberg, Benjamin E.; Cook, Jerry

    1993-06-01

    The Small-scale Solid Rocket Combustion Simulator (SSRCS) program was established at the Marshall Space Flight Center (MSFC), and used a government/industry team consisting of Hercules Aerospace Corporation, Aerotherm Corporation, United Technology Chemical Systems Division, Thiokol Corporation and MSFC personnel to study the feasibility of simulating the combustion species, temperatures and flow fields of a conventional solid rocket motor (SRM) with a versatile simulator system. The SSRCS design is based on hybrid rocket motor principles. The simulator uses a solid fuel and a gaseous oxidizer. Verification of the feasibility of a SSRCS system as a test bed was completed using flow field and system analyses, as well as empirical test data. A total of 27 hot firings of a subscale SSRCS motor were conducted at MSFC. Testing of the Small-scale SSRCS program was completed in October 1992. This paper, a compilation of reports from the above team members and additional analysis of the instrumentation results, will discuss the final results of the analyses and test programs.

  2. Results of Small-scale Solid Rocket Combustion Simulator testing at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Goldberg, Benjamin E.; Cook, Jerry

    1993-01-01

    The Small-scale Solid Rocket Combustion Simulator (SSRCS) program was established at the Marshall Space Flight Center (MSFC), and used a government/industry team consisting of Hercules Aerospace Corporation, Aerotherm Corporation, United Technology Chemical Systems Division, Thiokol Corporation and MSFC personnel to study the feasibility of simulating the combustion species, temperatures and flow fields of a conventional solid rocket motor (SRM) with a versatile simulator system. The SSRCS design is based on hybrid rocket motor principles. The simulator uses a solid fuel and a gaseous oxidizer. Verification of the feasibility of a SSRCS system as a test bed was completed using flow field and system analyses, as well as empirical test data. A total of 27 hot firings of a subscale SSRCS motor were conducted at MSFC. Testing of the Small-scale SSRCS program was completed in October 1992. This paper, a compilation of reports from the above team members and additional analysis of the instrumentation results, will discuss the final results of the analyses and test programs.

  3. High-Alpha Research Vehicle Lateral-Directional Control Law Description, Analyses, and Simulation Results

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Murphy, Patrick C.; Lallman, Frederick J.; Hoffler, Keith D.; Bacon, Barton J.

    1998-01-01

    This report contains a description of a lateral-directional control law designed for the NASA High-Alpha Research Vehicle (HARV). The HARV is a F/A-18 aircraft modified to include a research flight computer, spin chute, and thrust-vectoring in the pitch and yaw axes. Two separate design tools, CRAFT and Pseudo Controls, were integrated to synthesize the lateral-directional control law. This report contains a description of the lateral-directional control law, analyses, and nonlinear simulation (batch and piloted) results. Linear analysis results include closed-loop eigenvalues, stability margins, robustness to changes in various plant parameters, and servo-elastic frequency responses. Step time responses from nonlinear batch simulation are presented and compared to design guidelines. Piloted simulation task scenarios, task guidelines, and pilot subjective ratings for the various maneuvers are discussed. Linear analysis shows that the control law meets the stability margin guidelines and is robust to stability and control parameter changes. Nonlinear batch simulation analysis shows the control law exhibits good performance and meets most of the design guidelines over the entire range of angle-of-attack. This control law (designated NASA-1A) was flight tested during the Summer of 1994 at NASA Dryden Flight Research Center.

  4. Results from Tight and Loose Coupled Multiphysics in Nuclear Fuels Performance Simulations using BISON

    SciTech Connect

    S. R. Novascone; B. W. Spencer; D. Andrs; R. L. Williamson; J. D. Hales; D. M. Perez

    2013-05-01

    The behavior of nuclear fuel in the reactor environment is affected by multiple physics, most notably heat conduction and solid mechanics, which can have a strong influence on each other. To provide credible solutions, a fuel performance simulation code must have the ability to obtain solutions for each of the physics, including coupling between them. Solution strategies for solving systems of coupled equations can be categorized as loosely coupled, where the individual physics are solved separately, keeping the solutions for the other physics fixed at each iteration, or tightly coupled, where the nonlinear solver simultaneously drives down the residual for each physics, taking into account the coupling between the physics in each nonlinear iteration. In this paper, we compare the performance of loosely and tightly coupled solution algorithms for thermomechanical problems involving coupled thermal and mechanical contact, which is a primary source of interdependence between thermal and mechanical solutions in fuel performance models. The results indicate that loosely coupled simulations require significantly more nonlinear iterations and may run into convergence trouble when the thermal conductivity of the gap is too small. We also apply the tightly coupled solution strategy to a nuclear fuel simulation of an experiment in a test reactor. The results from these simulations indicate that convergence for either approach may be problem dependent, i.e., there may be problems for which a loosely coupled approach converges where a tightly coupled approach will not, and vice versa.

  5. From single Debye-Hückel chains to polyelectrolyte solutions: Simulation results

    NASA Astrophysics Data System (ADS)

    Kremer, Kurt

    1996-03-01

    This lecture will present results from simulations of single weakly charged flexible chains, where the electrostatic part of the interaction is modeled by a Debye-Hückel potential (with U. Micka, IFF, Forschungszentrum Jülich, 52425 Jülich, Germany), as well as simulations of polyelectrolyte solutions where the counterions are taken into account explicitly (with M. J. Stevens, Sandia Nat. Lab., Albuquerque, NM 87185-1111; M. J. Stevens, K. Kremer, JCP 103, 1669 (1995)). The first set of simulations is meant to clear up a recent controversy on the dependence of the persistence length Lp on the screening length Γ. While the analytic theories give Lp ~ Γ^x with either x=1 or x=2, the simulations find, for all experimentally accessible chain lengths, a varying exponent significantly smaller than 1. This casts serious doubt on the applicability of this model to weakly charged polyelectrolytes in general. The second part deals with strongly charged flexible polyelectrolytes in salt-free solution. These simulations are performed for multichain systems. The full Coulomb interactions of the monomers and counterions are treated explicitly. Experimental measurements of the osmotic pressure and the structure factor are reproduced and extended. The simulations reveal a new picture of the chain structure based on calculations of the structure factor, persistence length, end-to-end distance, etc. Even at very low density, the chains show significant bending. Furthermore, the chains contract significantly before they start to overlap. We also show that counterion condensation dramatically alters the chain structure, even for a good-solvent backbone.

  6. Simulated Driving Assessment (SDA) for Teen Drivers: Results from a Validation Study

    PubMed Central

    McDonald, Catherine C.; Kandadai, Venk; Loeb, Helen; Seacrist, Thomas S.; Lee, Yi-Ching; Winston, Zachary; Winston, Flaura K.

    2015-01-01

    Background Driver error and inadequate skill are common critical reasons for novice teen driver crashes, yet few validated, standardized assessments of teen driving skills exist. The purpose of this study was to evaluate the construct and criterion validity of a newly developed Simulated Driving Assessment (SDA) for novice teen drivers. Methods The SDA's 35-minute simulated drive incorporates 22 variations of the most common teen driver crash configurations. Driving performance was compared for 21 inexperienced teens (age 16–17 years, provisional license ≤90 days) and 17 experienced adults (age 25–50 years, license ≥5 years, drove ≥100 miles per week, no collisions or moving violations ≤3 years). SDA driving performance (Error Score) was based on driving safety measures derived from simulator and eye-tracking data. Negative driving outcomes included simulated collisions or run-off-the-road incidents. A professional driving evaluator/instructor reviewed videos of SDA performance (DEI Score). Results The SDA demonstrated construct validity: 1.) Teens had a higher Error Score than adults (30 vs. 13, p=0.02); 2.) For each additional error committed, the relative risk of a participant's propensity for a simulated negative driving outcome increased by 8% (95% CI: 1.05–1.10, p<0.01). The SDA demonstrated criterion validity: Error Score was correlated with DEI Score (r=−0.66, p<0.001). Conclusions This study supports the concept of validated simulated driving tests like the SDA to assess novice driver skill in complex and hazardous driving scenarios. The SDA, as a standard protocol to evaluate teen driver performance, has the potential to facilitate screening and assessment of teen driving readiness and could be used to guide targeted skill training. PMID:25740939

  7. Stable water isotope simulation by current land-surface schemes:Results of IPILPS phase 1

    SciTech Connect

    Henderson-Sellers, A.; Fischer, M.; Aleinov, I.; McGuffie, K.; Riley, W.J.; Schmidt, G.A.; Sturm, K.; Yoshimura, K.; Irannejad, P.

    2005-10-31

    Phase 1 of isotopes in the Project for Intercomparison of Land-surface Parameterization Schemes (iPILPS) compares the simulation of two stable water isotopologues (¹H₂¹⁸O and ¹H²H¹⁶O) at the land-atmosphere interface. The simulations are off-line, with forcing from an isotopically enabled regional model for three locations selected to offer contrasting climates and ecotypes: an evergreen tropical forest, a sclerophyll eucalypt forest and a mixed deciduous wood. Here we report on the experimental framework, the quality control undertaken on the simulation results and the method of intercomparisons employed. The small number of available isotopically-enabled land-surface schemes (ILSSs) limits the drawing of strong conclusions but, despite this, there is shown to be benefit in undertaking this type of isotopic intercomparison. Although validation of isotopic simulations at the land surface must await more, and much more complete, observational campaigns, we find that the empirically-based Craig-Gordon parameterization (of isotopic fractionation during evaporation) gives adequately realistic isotopic simulations when incorporated in a wide range of land-surface codes. By introducing two new tools for understanding isotopic variability from the land surface, the Isotope Transfer Function and the iPILPS plot, we show that different hydrological parameterizations cause very different isotopic responses. We show that ILSS-simulated isotopic equilibrium is independent of the total water and energy budget (with respect to both equilibration time and state), but interestingly the partitioning of available energy and water is a function of the models' complexity.

  8. Examining the results of certain effects of high altitude on soldiers using modeling and simulation.

    PubMed

    von Tersch, Robert; Birch, Harry

    2009-10-01

    Operation Enduring Freedom, conducted in the high mountains of Afghanistan, posed new challenges for U.S. and coalition forces. The high mountains, with elevations up to 25,000 feet and little to no road access, limited the use of combat vehicles and some advanced weaponry. Small unit actions became the norm, and soldiers experienced the effect of high elevation, where limited oxygen and its debilitating effects negatively impacted unacclimated soldiers. While the effects of high altitude on unacclimated soldiers are well documented, the results of those effects in a combat setting are not as well known. For this study, the authors focused on 3 areas: movement speed, response time, and judgment; used a state-of-the-art constructive modeling and simulation (M&S) tool; simulated a combat engagement between less capable unacclimated and fully capable acclimated soldiers; and captured the results, which showed increased casualties for unacclimated soldiers and decreased casualties for acclimated soldiers. PMID:19891222

  9. Simulating Late Ordovician deep ocean O2 with an earth system climate model. Preliminary results.

    NASA Astrophysics Data System (ADS)

    D'Amico, Daniel F.; Montenegro, Alvaro

    2016-04-01

    The geological record provides several lines of evidence that point to the occurrence of widespread and long-lasting deep ocean anoxia during the Late Ordovician, between about 460-440 million years ago (Ma). While a series of potential causes have been proposed, there is still large uncertainty regarding how the low oxygen levels came about. Here we use the University of Victoria Earth System Climate Model (UVic ESCM) with Late Ordovician paleogeography to verify the impacts of paleogeography, bottom topography, nutrient loading and cycling, and atmospheric concentrations of O2 and CO2 on deep ocean oxygen concentration during the period of interest. Preliminary results so far are based on 10 simulations (some still ongoing) covering the following parameter space: CO2 concentrations of 2240 to 3780 ppmv (~8x to 13x pre-industrial), atmospheric O2 ranging from 8% to 12% by volume, oceanic PO4 and NO3 loading from present day to double present day, and reductions in wind speed of 50% and 30% (winds are provided as a boundary condition in the UVic ESCM). For most simulations the deep ocean remains well ventilated. While simulations with higher CO2, lower atmospheric O2, and greater nutrient loading generate lower oxygen concentrations in the deep ocean, bottom anoxia - here defined as concentrations <10 μmol L⁻¹ - is in these cases restricted to the high-latitude northern hemisphere. Further simulations will address the impact of greater nutrient loads and bottom topography on deep ocean oxygen concentrations.

  10. Induced current electrical impedance tomography system: experimental results and numerical simulations.

    PubMed

    Zlochiver, Sharon; Radai, M Michal; Abboud, Shimon; Rosenfeld, Moshe; Dong, Xiu-Zhen; Liu, Rui-Gang; You, Fu-Sheng; Xiang, Hai-Yan; Shi, Xue-Tao

    2004-02-01

    In electrical impedance tomography (EIT), measurements of developed surface potentials due to applied currents are used for the reconstruction of the conductivity distribution. Practical implementation of EIT systems is known to be problematic due to the high sensitivity to noise of such systems, leading to poor imaging quality. In the present study, the performance of an induced current EIT (ICEIT) system, where eddy currents are applied using magnetic induction, was studied by comparing the voltage measurements to simulated data and examining the imaging quality with respect to simulated reconstructions for several phantom configurations. A 3-coil, 32-electrode ICEIT system was built, and an iterative modified Newton-Raphson algorithm was developed for the solution of the inverse problem. The RMS norm between the simulated and the experimental voltages was found to be 0.08 ± 0.05 mV (<3%). Two regularization methods were implemented and compared: Marquardt regularization and Laplacian regularization (a bounded second-derivative regularization). While the Laplacian regularization method was preferred for simulated data, it resulted in distinctive spatial artifacts for measured data. The experimental reconstructed images were indicative of the angular position of the conductivity perturbations, though the radial sensitivity was low, especially with the Marquardt regularization method. PMID:15005319
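    The regularized update at the core of such a modified Newton-Raphson reconstruction can be sketched as follows (Python; the random sensitivity matrix and the 1-D pixel chain are stand-ins for a real EIT forward model and mesh): both Marquardt (identity, amplitude-penalizing) and Laplacian-type (smoothness-penalizing) regularizers fit the same damped-normal-equations template.

```python
import numpy as np

def gn_step(J, dv, R, lam=1e-2):
    """One regularized Gauss-Newton / modified Newton-Raphson update:
    d_sigma = (J^T J + lam * R)^{-1} J^T dv,
    where dv is the misfit between measured and simulated voltages."""
    return np.linalg.solve(J.T @ J + lam * R, J.T @ dv)

rng = np.random.default_rng(1)
n_pix, n_meas = 16, 40
J = rng.standard_normal((n_meas, n_pix))  # stand-in sensitivity (Jacobian) matrix
sigma_true = np.zeros(n_pix)
sigma_true[6:9] = 1.0                     # conductivity perturbation to recover
dv = J @ sigma_true + 0.01 * rng.standard_normal(n_meas)

R_marq = np.eye(n_pix)                    # Marquardt: penalize amplitude
L = -np.eye(n_pix) + np.eye(n_pix, k=1)   # first-difference operator
R_lapl = L.T @ L                          # Laplacian-type: penalize roughness

rec_marq = gn_step(J, dv, R_marq)
rec_lapl = gn_step(J, dv, R_lapl)
```

    In a full reconstruction this step is iterated, with J relinearized about the current conductivity estimate; the choice of R is exactly where the two methods compared in the abstract differ.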

  11. Phase transition-like behavior of magnetospheric substorms: Global MHD simulation results

    NASA Astrophysics Data System (ADS)

    Shao, X.; Sitnov, M. I.; Sharma, S. A.; Papadopoulos, K.; Goodrich, C. C.; Guzdar, P. N.; Milikh, G. M.; Wiltberger, M. J.; Lyon, J. G.

    2003-01-01

    Using nonlinear dynamical techniques, we statistically investigate whether the simulated substorms from global magnetohydrodynamic (MHD) models have the combination of global and multiscale features revealed in substorm dynamics by earlier work [2000] and featuring phase transition-like behavior. We simulate seven intervals with a total duration of 280 hours from the data set used in the above works [1985]. We analyze the input-output (vBs-pseudo AL index) system obtained from the global MHD model and compare the results to those inferred from the original set (vBs-observed AL index). The analysis of the coupled vBs-pseudo AL index system shows the first-order phase transition map, which is consistent with the map obtained for the vBs-observed AL index system. Although the comparison between observations and global MHD simulations for individual events may vary, the overall global transition pattern during the substorm cycle revealed by singular spectrum analysis (SSA) is statistically consistent between simulations and observations. The coupled vBs-pseudo AL index system also shows multiscale behavior (scale-invariant power law dependence) in the SSA power spectrum. In addition, we find the critical exponent of the nonequilibrium transitions in the magnetosphere, which reflects the multiscale aspect of the substorm activity, distinct from the power-law frequency spectra of autonomous systems. The exponent relates input and output parameters of the magnetosphere. We also discuss the limitations of the global MHD model in reproducing the multiscale behavior when compared to the real system.
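    The singular spectrum analysis (SSA) used above to separate the global transition pattern from the multiscale background can be sketched in a few lines (Python; the synthetic signal is a stand-in for an AL-index record): embed the series in a lagged trajectory matrix and normalize the squared singular values.

```python
import numpy as np

def ssa_spectrum(x, window):
    """Singular spectrum analysis: embed the series in a trajectory
    matrix of lagged windows; the normalized squared singular values
    show how variance splits between a few global modes and a
    multiscale background."""
    n_cols = len(x) - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(n_cols)])
    s = np.linalg.svd(traj, compute_uv=False)
    return s**2 / np.sum(s**2)

# Synthetic stand-in for an AL-index-like record: one slow oscillation
# (the 'global' component) plus broadband noise.
rng = np.random.default_rng(2)
t = np.arange(2000)
x = np.sin(2 * np.pi * t / 200) + 0.3 * rng.standard_normal(t.size)
spec = ssa_spectrum(x, window=50)
```

    The leading pair of modes captures the coherent oscillation while the remaining variance spreads over many small eigenvalues, which is the kind of split between global and scale-invariant components the abstract examines.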

  12. Analysis of formation pressure test results in the Mount Elbert methane hydrate reservoir through numerical simulation

    USGS Publications Warehouse

    Kurihara, M.; Sato, A.; Funatsu, K.; Ouchi, H.; Masuda, Y.; Narita, H.; Collett, T.S.

    2011-01-01

    Targeting the methane hydrate (MH) bearing units C and D at the Mount Elbert prospect on the Alaska North Slope, four MDT (Modular Dynamic Formation Tester) tests were conducted in February 2007. The C2 MDT test was selected for history matching simulation in the MH Simulator Code Comparison Study. Through history matching simulation, the physical and chemical properties of unit C were adjusted, which suggested the most likely reservoir properties of this unit. Based on these tuned properties, numerical models replicating a "Mount Elbert C2 zone like reservoir", a "PBU L-Pad like reservoir", and a "PBU L-Pad down dip like reservoir" were constructed. The long term production performances of wells in these reservoirs were then forecasted assuming MH dissociation and production by the methods of depressurization, combination of depressurization and wellbore heating, and hot water huff and puff. The predicted cumulative gas production ranges from 2.16×10⁶ m³/well to 8.22×10⁸ m³/well, depending mainly on the initial temperature of the reservoir and on the production method. This paper describes the details of the modeling and history matching simulation. It also presents the results of examinations of the effects of reservoir properties on MH dissociation and production performance under the depressurization and thermal methods. © 2010 Elsevier Ltd.

  13. Simulation of human atherosclerotic femoral plaque tissue: the influence of plaque material model on numerical results

    PubMed Central

    2015-01-01

    Background Due to the limited number of experimental studies that mechanically characterise human atherosclerotic plaque tissue from the femoral arteries, a recent trend has emerged in current literature whereby one set of material data based on aortic plaque tissue is employed to numerically represent diseased femoral artery tissue. This study aims to generate novel vessel-appropriate material models for femoral plaque tissue and assess the influence of using material models based on experimental data generated from aortic plaque testing to represent diseased femoral arterial tissue. Methods Novel material models based on experimental data generated from testing of atherosclerotic femoral artery tissue are developed and a computational analysis of the revascularisation of a quarter model idealised diseased femoral artery from a 90% diameter stenosis to a 10% diameter stenosis is performed using these novel material models. The simulation is also performed using material models based on experimental data obtained from aortic plaque testing in order to examine the effect of employing vessel-appropriate material models versus those currently employed in literature to represent femoral plaque tissue. Results Simulations that employ material models based on atherosclerotic aortic tissue exhibit much higher maximum principal stresses within the plaque than simulations that employ material models based on atherosclerotic femoral tissue. Specifically, employing a material model based on calcified aortic tissue, instead of one based on heavily calcified femoral tissue, to represent diseased femoral arterial vessels results in a 487-fold increase in maximum principal stress within the plaque at a depth of 0.8 mm from the lumen. Conclusions Large differences are induced on numerical results as a consequence of employing material models based on aortic plaque, in place of material models based on femoral plaque, to represent a diseased femoral vessel. Due to these large

  14. Recent results from the GISS model of the global atmosphere. [circulation simulation for weather forecasting

    NASA Technical Reports Server (NTRS)

    Somerville, R. C. J.

    1975-01-01

    Large numerical atmospheric circulation models are in increasingly widespread use both for operational weather forecasting and for meteorological research. The results presented here are from a model developed at the Goddard Institute for Space Studies (GISS) and described in detail by Somerville et al. (1974). This model is representative of a class of models, recently surveyed by the Global Atmospheric Research Program (1974), designed to simulate the time-dependent, three-dimensional, large-scale dynamics of the earth's atmosphere.

  15. PRELIMINARY RESULTS FROM A SIMULATION OF QUENCHED QCD WITH OVERLAP FERMIONS ON A LARGE LATTICE.

    SciTech Connect

    Berruto, F.; Garron, N.; Hoelbling, D.; Lellouch, L.; Rebbi, C.; Shoresh, N.

    2003-07-15

    We simulate quenched QCD with the overlap Dirac operator. We work with the Wilson gauge action at β = 6 on an 18³ × 64 lattice. We calculate quark propagators for a single source point and quark masses ranging from am_q = 0.03 to 0.75. We present here preliminary results based on the propagators for 60 gauge field configurations.

  16. Scanning L-Band Active Passive (SLAP) - Recent Results from an Airborne Simulator for SMAP

    NASA Technical Reports Server (NTRS)

    Kim, Edward

    2015-01-01

    Scanning L-band Active Passive (SLAP) is a recently developed NASA airborne instrument specially tailored to simulate the new Soil Moisture Active Passive (SMAP) satellite instrument suite. SLAP conducted its first test flights in December 2013 and participated in its first science campaign, the IPHEX ground validation campaign of the GPM mission, in May 2014. This paper will present results from additional test flights and science observations scheduled for 2015.

  17. Femtosecond laser for glaucoma treatment: the comparison between simulation and experimentation results on ocular tissue removal

    NASA Astrophysics Data System (ADS)

    Hou, Dong Xia; Ngoi, Bryan K. A.; Hoh, Sek Tien; Koh, Lee Huat K.; Deng, Yuan Zi

    2005-04-01

    In ophthalmology, the use of femtosecond lasers is receiving more attention than ever due to their extremely high intensity and ultrashort pulse duration. They open up highly beneficial possibilities for minimizing side effects during surgery, and one specific area is laser surgery for glaucoma treatment. However, the complexity of the femtosecond laser-ocular tissue interaction mechanism has hampered the clinical application of femtosecond lasers to glaucoma treatment. The potential contribution of this work is that, for the first time, a modified moving breakdown theory appropriate for the femtosecond time scale is applied to analyze the femtosecond laser-ocular tissue interaction mechanism. Based on this theory, energy deposition and the corresponding temperature increase are studied by both simulation and experiment. A simulation model was developed in Matlab, and the simulation results were validated through in-vitro laser-tissue interaction experiments using pig iris. Comparison of the theoretical and experimental results shows that a femtosecond laser can achieve well-defined ocular tissue removal with markedly reduced thermal damage. This result indicates promising potential for femtosecond lasers in glaucoma treatment.

  18. Transient thermal behaviour of a compressor rotor with ventilation: Test results under simulated engine conditions

    NASA Astrophysics Data System (ADS)

    Reile, E.; Radons, U.; Hennecke, D. K.

    1985-09-01

    The development of advanced compressors for modern aero-engines requires detailed knowledge of the transient thermal behavior of the rotor disks, to enable accurate prediction of rotor life and of the thermal growth of the rotor for the evaluation of tip clearances. In the quest for longer life and higher reliability of the parts, as well as reduced clearances even at transient conditions, the designer has to be able to influence the thermal behavior of the rotor. A very effective way is to vent small amounts of air through the rotor cavities. The design of such a vented rotor is presented. The main emphasis is placed on a detailed description of a test rig specially built for this purpose. The testing was carried out under simulated engine conditions for a wide range of parameters. The results are compared with those obtained with a theoretical model derived from fundamental tests at the University of Sussex, where heat transfer in rotating cavities is investigated. Good agreement is observed. Some final tests were done in an engine. These results also exhibit good agreement with the rig results under simulated conditions, when the proper dimensionless parameters are considered, proving the validity of the simulation.

  19. Phase Transition-like Behavior of Magnetospheric Substorms: Global MHD Simulation Results

    NASA Astrophysics Data System (ADS)

    Shao, X.; Sitnov, M.; Sharma, A. S.; Papadopoulos, K.; Guzdar, P. N.; Goodrich, C. C.; Milikh, G. M.; Wiltberger, M. J.; Lyon, J. G.

    2001-12-01

    Because of their relevance to massive global energy loading and unloading, magnetospheric substorm events have been the subject of extensive observation and study. Using nonlinear dynamical techniques, we investigate whether substorms simulated by global MHD models exhibit the non-equilibrium phase transition-like features revealed by Sitnov et al. [2000]. We simulated 6 intervals, of 240 hours total duration, from the same data set used in Sitnov et al. [2000]. We analyzed the input-output (vBs--pseudo-AL index) system obtained from the global MHD model and compared the results to those in Sitnov et al. [2000, 2001]. The analysis of the coupled vBs--pseudo-AL index system shows the first-order phase transition map, which is consistent with the map obtained for the vBs--observed-AL index system by Sitnov et al. [2000]. The explanation lies in the cusp catastrophe model proposed by Lewis [1991]. Although the agreement between observations and individual global MHD simulations may vary, the overall global transition pattern during the substorm cycle revealed by Singular Spectrum Analysis (SSA) is consistent between simulations and observations. This is an important validation of the global MHD simulations of the magnetosphere. The coupled vBs--pseudo-AL index system shows multi-scale behavior (scale-invariant power-law dependence) in its singular power spectrum. We found critical exponents of the non-equilibrium transitions in the magnetosphere, which reflect the multi-scale aspect of substorm activity, distinct from the power-law behavior of autonomous systems. The exponents relate input and output parameters of the magnetosphere and distinguish the second-order phase transition model from the self-organized criticality model. We also discuss the limitations of the global MHD model in reproducing the multi-scale behavior of the real system.
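
    Singular Spectrum Analysis, used above to extract the global transition pattern, decomposes a time series through the SVD of its lagged (trajectory) matrix. A minimal sketch on a synthetic series (illustrative only; not the vBs or pseudo-AL data from the study):

```python
import numpy as np

def ssa_singular_spectrum(x, window):
    """Singular spectrum of a series: SVD of its Hankel (trajectory) matrix."""
    k = len(x) - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])   # lagged copies
    return np.linalg.svd(X, compute_uv=False)

# Synthetic stand-in for a driver/response series: trend + oscillation + noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
x = 0.5 * t + np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(t.size)

s = ssa_singular_spectrum(x, window=60)
# The leading singular values carry the large-scale trend and oscillation;
# how the spectrum decays reflects the multi-scale content of the series.
print(s[:4])
```

    Reconstructing the series from leading left/right singular vectors yields the large-scale components that the paper compares between simulation and observation.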

  20. Role of depleted flux tubes in steady magnetospheric convection: Results of RCM-E simulations

    NASA Astrophysics Data System (ADS)

    Yang, J.; Toffoletto, F. R.; Song, Y.

    2010-12-01

    We present results of a simulation of an idealized steady magnetospheric convection (SMC) event during steady southward IMF BZ, using a version of the Rice Convection Model coupled to an equilibrium magnetic field solver (RCM-E), and compare it to a simulation of a substorm growth phase. In contrast to the 1-hour growth phase, the 5-hour SMC event is modeled by placing a plasma distribution with a substantially depleted entropy parameter PV^5/3 on the RCM's high-latitude boundary. We find that the modeled large-scale configuration on the nightside during the SMC event differs significantly from the growth phase simulation. First, in the magnetotail tailward of X ≈ -10 RE, the magnetic field is dipole-like and associated with a thick plasma sheet. Second, near geosynchronous orbit, the magnetic field is more stretched, associated with the strongly enhanced partial ring current, and the inner edge of the plasma sheet moves well inside geosynchronous orbit. Third, the electric field shows strong shielding or even overshielding during the SMC, while a penetration electric field emerges in the growth phase simulation. Fourth, the ground magnetogram calculation shows large horizontal magnetic field disturbances in a much thicker auroral zone, mainly attributable to Hall currents. Meanwhile, a fairly negative magnetic disturbance emerges at mid and low latitudes, mainly attributable to the partial ring current extending approximately to the terminators. Contrary to previous studies, our simulation does not produce a deep BZ minimum during strong magnetospheric convection, which implies that the pressure balance inconsistency may be dramatically alleviated if the inner magnetosphere is continuously fed with under-populated flux tubes. We also suggest that a strong magnetic field without a BZ minimum in the plasma sheet may explain why SMCs can last for hours without a substorm expansion, since certain instabilities may not build up to threshold in such a configuration.
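
    The entropy parameter quoted above, PV^5/3, combines the plasma pressure P with the flux-tube volume V = ∫ ds/B. A hedged sketch of how depletion lowers it (the field-line profile below is invented for illustration, not RCM-E output):

```python
import numpy as np

def flux_tube_entropy(pressure, s, B):
    """Entropy parameter S = P * V**(5/3), with V = integral of ds / B."""
    V = np.trapz(1.0 / B, s)          # flux-tube volume per unit magnetic flux
    return pressure * V ** (5.0 / 3.0)

s = np.linspace(0.0, 10.0, 200)       # arc length along the field line (arb. units)
B = 50.0 / (1.0 + s) ** 3 + 5.0       # field magnitude falling off with distance

nominal = flux_tube_entropy(pressure=1.0, s=s, B=B)
depleted = flux_tube_entropy(pressure=0.3, s=s, B=B)   # same tube, depleted pressure
print(depleted / nominal)             # 0.3: depletion enters linearly through P
```

    In the simulation described above, it is flux tubes with low S placed on the boundary that keep the tail configuration dipole-like without a deep BZ minimum.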

  1. A three-phase series-parallel resonant converter -- analysis, design, simulation and experimental results

    SciTech Connect

    Bhat, A.K.S.; Zheng, L.

    1995-12-31

    A three-phase dc-to-dc series-parallel resonant converter is proposed and its operating modes for a 180° wide gating pulse scheme are explained. A detailed analysis of the converter using a constant-current model and a Fourier series approach is presented. Based on the analysis, design curves are obtained and a design example of a 1 kW converter is given. SPICE simulation results for the designed converter and experimental results for a 500 W converter are presented to verify the performance of the proposed converter under varying load conditions. The converter operates in lagging-PF mode over the entire load range and requires only a narrow variation in switching frequency.

  2. Molecular simulation of aqueous electrolytes: Water chemical potential results and Gibbs-Duhem equation consistency tests

    NASA Astrophysics Data System (ADS)

    Moučka, Filip; Nezbeda, Ivo; Smith, William R.

    2013-09-01

    This paper deals with molecular simulation of the chemical potentials in aqueous electrolyte solutions for the water solvent and its relationship to chemical potential simulation results for the electrolyte solute. We use the Gibbs-Duhem equation linking the concentration dependence of these quantities to test the thermodynamic consistency of separate calculations of each quantity. We consider aqueous NaCl solutions at ambient conditions, using the standard SPC/E force field for water and the Joung-Cheatham force field for the electrolyte. We calculate the water chemical potential using the osmotic ensemble Monte Carlo algorithm by varying the number of water molecules at a constant amount of solute. We demonstrate numerical consistency of these results in terms of the Gibbs-Duhem equation in conjunction with our previous calculations of the electrolyte chemical potential. We present the chemical potential vs molality curves for both solvent and solute in the form of appropriately chosen analytical equations fitted to the simulation data. As a byproduct, in the context of the force fields considered, we also obtain values for the Henry convention standard molar chemical potential for aqueous NaCl using molality as the concentration variable and for the chemical potential of pure SPC/E water. These values are in reasonable agreement with the experimental values.

  3. Spatial resolution effect on the simulated results of watershed scale models

    NASA Astrophysics Data System (ADS)

    Epelde, Ane; Antiguedad, Iñaki; Brito, David; Jauch, Eduardo; Neves, Ramiro; Sauvage, Sabine; Sánchez-Pérez, José Miguel

    2016-04-01

    Numerical models are useful tools for water resources planning, development and management. Their use is spreading, and increasingly complex modeling systems are being employed for these purposes. The added complexity allows the simulation of water-quality-related processes. Nevertheless, it also implies a considerable increase in computational requirements, which is usually compensated for by a decrease in the models' spatial resolution. The spatial resolution of a model is known to affect the simulation of hydrological processes and therefore also of nutrient exportation and cycling processes. However, the implications of spatial resolution for the simulated results are rarely assessed. In this study, we examine the effect of a change in grid size on the integrated and distributed results of the Alegria River watershed model (Basque Country, Northern Spain). Variables such as discharge, water table level, relative water content of soils, nitrogen exportation and denitrification are analyzed in order to quantify the uncertainty involved in the spatial discretization of watershed-scale models. This is an aspect that needs to be carefully considered when numerical models are employed in watershed management studies or quality programs.
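
    One reason grid size matters is that processes such as denitrification respond nonlinearly to state variables, so the mean rate over fine cells differs from the rate evaluated at the coarse-cell mean. A toy demonstration (hypothetical threshold rate law and synthetic moisture field, not the Alegria model):

```python
import numpy as np

def coarsen(field, factor):
    """Block-average a 2D field by an integer factor (a coarser grid)."""
    ny, nx = field.shape
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

def denitrification_rate(water_content):
    """Hypothetical threshold rate law: active only in near-saturated cells."""
    return np.where(water_content > 0.8, 1.0, 0.0)

rng = np.random.default_rng(1)
soil_moisture = rng.uniform(0.5, 1.0, size=(64, 64))   # fine-grid relative water content

fine = denitrification_rate(soil_moisture).mean()
coarse = denitrification_rate(coarsen(soil_moisture, 8)).mean()
print(fine, coarse)   # the aggregate rate shrinks on the coarse grid
```

    Block-averaging smooths away the saturated hotspots that drive the rate, so the coarse grid underestimates the aggregate flux even though total water is conserved.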

  4. Pointing a ground antenna at a spinning spacecraft using conical scan - Simulation results

    NASA Technical Reports Server (NTRS)

    Mileant, Alexander; Peng, Ted

    1989-01-01

    The results are presented for an investigation of ground antenna pointing errors which are caused by fluctuations of the receiver AGC signal due to thermal noise and a spinning spacecraft. Transient responses and steady-state errors and losses are estimated using models of the digital Conscan (conical scan) loop, the FFT, and antenna characteristics. Simulation results are given for the on-going Voyager mission and for the upcoming Galileo mission, which includes a spinning spacecraft. The simulation predicts a 1 sigma pointing error of 0.5 to 2.0 mdeg for Voyager, assuming an AGC loop SNR of 35 to 30 dB with a scan period varying from 128 to 32 sec, respectively. This prediction is in agreement with the DSS 14 antenna Conscan performance of 1.7 mdeg for 32 sec scans as reported in earlier studies. The simulation of Galileo predicts 1 mdeg error with a 128 sec scan and 4 mdeg with a 32 sec scan under similar AGC conditions.
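
    In conical scan, a pointing offset modulates the received power at the scan frequency; the phase of that fundamental gives the offset direction and its amplitude the magnitude. A minimal sketch with a Gaussian beam model (all numbers illustrative, not DSS 14 or Galileo parameters):

```python
import numpy as np

BEAMWIDTH = 0.1        # half-power beamwidth, deg
SCAN_RADIUS = 0.03     # conical scan radius, deg
TRUE_OFFSET = 0.01     # pointing error magnitude, deg
TRUE_AZIMUTH = 1.2     # pointing error direction, rad

phi = 2 * np.pi * np.arange(256) / 256         # beam azimuth over one scan period
dx = SCAN_RADIUS * np.cos(phi) - TRUE_OFFSET * np.cos(TRUE_AZIMUTH)
dy = SCAN_RADIUS * np.sin(phi) - TRUE_OFFSET * np.sin(TRUE_AZIMUTH)
power = np.exp(-4 * np.log(2) * (dx ** 2 + dy ** 2) / BEAMWIDTH ** 2)

c = np.fft.rfft(power)                         # modulation at the scan frequency
estimated_azimuth = -np.angle(c[1])            # rfft sign convention flips the phase
estimated_offset = (abs(c[1]) / c[0].real) * BEAMWIDTH ** 2 / (
    4 * np.log(2) * SCAN_RADIUS)               # small-offset linearization
print(estimated_azimuth, estimated_offset)
```

    Adding AGC noise to `power` and repeating the estimate over many scans is the essence of the 1-sigma pointing-error statistics reported in the abstract.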

  5. Pointing a ground antenna at a spinning spacecraft using Conscan-simulation results

    NASA Technical Reports Server (NTRS)

    Mileant, A.; Peng, T.

    1988-01-01

    The results are presented for an investigation of ground antenna pointing errors which are caused by fluctuations of the receiver AGC signal due to thermal noise and a spinning spacecraft. Transient responses and steady-state errors and losses are estimated using models of the digital Conscan (conical scan) loop, the FFT, and antenna characteristics. Simulation results are given for the on-going Voyager mission and for the upcoming Galileo mission, which includes a spinning spacecraft. The simulation predicts a 1 sigma pointing error of 0.5 to 2.0 mdeg for Voyager, assuming an AGC loop SNR of 35 to 30 dB with a scan period varying from 128 to 32 sec, respectively. This prediction is in agreement with the DSS 14 antenna Conscan performance of 1.7 mdeg for 32 sec scans as reported in earlier studies. The simulation of Galileo predicts 1 mdeg error with a 128 sec scan and 4 mdeg with a 32 sec scan under similar AGC conditions.

  6. Experimental and computer simulation results of the spot welding process using SORPAS software

    NASA Astrophysics Data System (ADS)

    Al-Jader, M. A.; Cullen, J. D.; Athi, N.; Al-Shamma'a, A. I.

    2009-07-01

    The highly competitive nature of the automotive industry drives demand for improvements and increased precision engineering in resistance spot welding. Currently there are about 4300 weld points on the average steel vehicle. Current industrial monitoring systems check the quality of the nugget after processing 15 cars, once every two weeks. The nuggets are examined offline using a destructive process, which takes approximately 10 days to complete, causing a long delay in the production process. This paper presents a simulation of the spot welding growth curves, along with a comparison to growth curves obtained on an industrial spot welding machine. The correlation of experimental results shows that SORPAS simulations can be used as an offline measurement to reduce factory energy usage.

  7. Systematic coarse graining flowing polymer melts: thermodynamically guided simulations and resulting constitutive model.

    PubMed

    Ilg, Patrick

    2011-01-01

    Complex fluids, such as polymers, colloids, liquid crystals, etc., show intriguing viscoelastic properties due to the complicated interplay between flow-induced structure formation and dynamical behavior. Starting from microscopic models of complex fluids, a systematic coarse-graining method is presented that allows us to derive closed-form and thermodynamically consistent constitutive equations for such fluids. Essential ingredients of the proposed approach are thermodynamically guided simulations within a consistent coarse-graining scheme. In addition to this new type of multiscale simulation, we reconstruct the building blocks that constitute the thermodynamically consistent coarse-grained model. We illustrate the method for low-molecular-weight polymer melts subjected to different imposed flow fields, such as planar shear and different elongational flows. The constitutive equation we obtain for general flow conditions shows rheological behavior including shear thinning, normal stress differences, and elongational viscosities in good agreement with reference results. PMID:21678766

  8. Using multidimensional Rasch to enhance measurement precision: initial results from simulation and empirical studies.

    PubMed

    Mok, Magdalena Mo Ching; Xu, Kun

    2013-01-01

    This study aimed to explore the effect on measurement precision of multidimensional, as compared with unidimensional, Rasch measurement for constructing measures from multidimensional Likert-type scales. Many educational and psychological tests are multidimensional, but common practice is to ignore correlations among the latent traits in these multidimensional scales in the measurement process. This practice may have serious validity and reliability implications. The study made use of both empirical data from 208,083 students and simulated data generated from 24 systematic combinations of three conditions, namely sample size, degree of dimensionality, and scale length, each replicated 1000 times, to compare unidimensional and multidimensional approaches and to identify the effects of sample size, dimensionality and scale length on measurement precision. Results showed that the multidimensional Rasch approach yielded more precise estimates than the unidimensional approach if the two dimensions were strongly correlated. The effect was more pronounced for long scales. PMID:23442326

  9. Structured water in polyelectrolyte dendrimers: Understanding small angle neutron scattering results through atomistic simulation

    SciTech Connect

    Chen, Wei-Ren; Do, Changwoo; Hong, Kunlun; Liu, Emily; Liu, Yun; Porcar, L.; Smith, Gregory Scott; Wu, Bin; Egami, T; Smith, Sean C

    2012-01-01

    Based on atomistic molecular dynamics (MD) simulations, the small angle neutron scattering (SANS) intensity behavior of a single generation-4 (G4) polyelectrolyte polyamidoamine (PAMAM) starburst dendrimer is investigated at different levels of molecular protonation. The SANS form factor, P(Q), and Debye autocorrelation function, γ(r), are calculated from the equilibrium MD trajectory based on a mathematical approach proposed in this work which provides a link between the neutron scattering experiment and MD computation. The simulations enable scattering calculations of not only the hydrocarbons, but also the contribution to the scattering length density fluctuations caused by structured, confined water within the dendrimer. Based on our computational results, we question the validity of using the radius of gyration R_G for microstructure characterization of a polyelectrolyte dendrimer from the scattering perspective.
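
    A standard route from an MD snapshot to an orientation-averaged scattering curve is the Debye equation, I(Q) = Σ_ij b_i b_j sin(Q r_ij)/(Q r_ij). A minimal sketch on random coordinates (illustrative; not the PAMAM trajectory or the specific formalism proposed in the paper):

```python
import numpy as np

def debye_intensity(coords, b, q):
    """Orientation-averaged scattered intensity at momentum transfer q:
    I(Q) = sum_ij b_i b_j sin(Q r_ij) / (Q r_ij)  (Debye equation)."""
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # np.sinc(x) = sin(pi x) / (pi x); it returns 1 at r = 0 (the i = j terms)
    return float(np.sum(np.outer(b, b) * np.sinc(q * r / np.pi)))

# Toy "snapshot": random scatterers with unit scattering lengths
rng = np.random.default_rng(2)
coords = rng.uniform(0.0, 20.0, size=(50, 3))   # angstroms, illustrative
b = np.ones(50)

q_grid = np.linspace(1e-3, 1.0, 50)             # inverse angstroms
i_of_q = [debye_intensity(coords, b, q) for q in q_grid]
# As Q -> 0 the intensity approaches (sum of scattering lengths)**2
print(i_of_q[0])
```

    Averaging such curves over trajectory frames, with b_i chosen per atom type (and including the confined water), yields a P(Q) directly comparable to SANS data.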

  10. Motion-base simulator results of advanced supersonic transport handling qualities with active controls

    NASA Technical Reports Server (NTRS)

    Feather, J. B.; Joshi, D. S.

    1981-01-01

    Handling qualities of the unaugmented advanced supersonic transport (AST) are deficient in the low-speed, landing approach regime. Consequently, improvement in handling with active control augmentation systems has been achieved using implicit model-following techniques. Extensive fixed-base simulator evaluations were used to validate these systems prior to tests with full motion and visual capabilities on a six-axis motion-base simulator (MBS). These tests compared the handling qualities of the unaugmented AST with several augmented configurations to ascertain the effectiveness of these systems. Cooper-Harper ratings, tracking errors, and control activity data from the MBS tests have been analyzed statistically. The results show the fully augmented AST handling qualities have been improved to an acceptable level.

  11. Preliminary Analysis and Simulation Results of Microwave Transmission Through an Electron Cloud

    SciTech Connect

    Sonnad, Kiran; Sonnad, Kiran; Furman, Miguel; Veitzer, Seth; Stoltz, Peter; Cary, John

    2007-01-12

    The electromagnetic particle-in-cell (PIC) code VORPAL is being used to simulate the propagation of microwave radiation through an electron cloud. The results so far show good agreement with theory for simple cases. The study has been motivated by previous experimental work on this problem at the CERN SPS [1], experiments at the PEP-II Low Energy Ring (LER) at SLAC [4], and proposed experiments at the Fermilab Main Injector (MI). With experimental observation of quantities such as the amplitude, phase and spectrum of the output microwave radiation, and with support from simulations for different cloud densities and applied magnetic fields, this technique can prove to be a useful probe for assessing the presence as well as the density of electron clouds.
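
    For a probing frequency well above the plasma frequency, a tenuous unmagnetized electron cloud shifts the phase of a transmitted microwave by roughly Δφ ≈ ω_p²L/(2cω), which can be inverted for the cloud density. A hedged sketch of that textbook estimate (the numbers are illustrative, not SPS or PEP-II measurements):

```python
import math

E = 1.602176634e-19      # elementary charge, C
ME = 9.1093837015e-31    # electron mass, kg
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
C = 2.99792458e8         # speed of light, m/s

def phase_shift(n_e, length, freq):
    """Phase shift (rad) of a wave at freq (Hz) crossing a uniform cloud of
    density n_e (m^-3) over length (m); valid for freq >> plasma frequency."""
    omega = 2 * math.pi * freq
    omega_p2 = n_e * E ** 2 / (EPS0 * ME)    # squared plasma frequency
    return omega_p2 * length / (2 * C * omega)

def density_from_phase(dphi, length, freq):
    """Invert the small-phase formula to estimate the cloud density."""
    omega = 2 * math.pi * freq
    return 2 * C * omega * dphi * EPS0 * ME / (length * E ** 2)

dphi = phase_shift(n_e=1e12, length=30.0, freq=2.0e9)   # illustrative numbers
print(dphi, density_from_phase(dphi, 30.0, 2.0e9))
```

    Centimeter-radian phase sensitivity at GHz frequencies is what makes the transmission technique a plausible density diagnostic; applied magnetic fields modify the dispersion and hence the inversion.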

  12. Simulation and experimental results of optical and thermal modeling of gold nanoshells.

    PubMed

    Ghazanfari, Lida; Khosroshahi, Mohammad E

    2014-09-01

    This paper proposes a generalized method for optical and thermal modeling of synthesized magneto-optical nanoshells (MNSs) for biomedical applications. Superparamagnetic magnetite nanoparticles with a diameter of 9.5 ± 1.4 nm are fabricated using the co-precipitation method and subsequently covered with a thin layer of gold to obtain 15.8 ± 3.5 nm MNSs. Simulations and detailed analyses are carried out for different nanoshell geometries to achieve maximum heating power. Structural, magnetic and optical properties of the MNSs are assessed using a vibrating sample magnetometer (VSM), X-ray diffraction (XRD), UV-VIS spectrophotometry, dynamic light scattering (DLS), and transmission electron microscopy (TEM). The magnetic saturation of the synthesized magnetite nanoparticles is reduced from 46.94 to 11.98 emu/g after coating with gold. The performance of the proposed optical-thermal modeling technique is verified by simulation and experimental results. PMID:25063109

  13. Simulated cosmic microwave background maps at 0.5 deg resolution: Basic results

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Bennett, C. L.; Kogut, A.

    1995-01-01

    We have simulated full-sky maps of the cosmic microwave background (CMB) anisotropy expected from cold dark matter (CDM) models at 0.5 deg and 1.0 deg angular resolution. Statistical properties of the maps are presented as a function of sky coverage, angular resolution, and instrument noise, and the implications of these results for observability of the Doppler peak are discussed. The rms fluctuations in a map are not a particularly robust probe of the existence of a Doppler peak; however, a full correlation analysis can provide reasonable sensitivity. We find that sensitivity to the Doppler peak depends primarily on the fraction of sky covered, and only secondarily on the angular resolution and noise level. Color plates of the simulated maps are presented to illustrate the anisotropies.

  14. Asymptotic accuracy of two-class discrimination

    SciTech Connect

    Ho, T.K.; Baird, H.S.

    1994-12-31

    Poor-quality (e.g., sparse or unrepresentative) training data is widely suspected to be one cause of the disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.

  15. Accuracy of deception judgments.

    PubMed

    Bond, Charles F; DePaulo, Bella M

    2006-01-01

    We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others' deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature. PMID:16859438
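
    From the reported hit rate (47% of lies called deceptive) and the implied false-alarm rate (39% of truths called deceptive), a signal-detection sensitivity can be computed directly. Note that the paper's d of roughly .40 is scaled against cross-judge differences, so it is not the same quantity as this raw d′ (a hedged illustration only):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf     # standard normal quantile function

hit_rate = 0.47              # lies correctly judged deceptive
false_alarm = 1 - 0.61       # truths incorrectly judged deceptive

d_prime = z(hit_rate) - z(false_alarm)                # sensitivity
criterion = -0.5 * (z(hit_rate) + z(false_alarm))     # bias toward judging "truth"
print(round(d_prime, 3), round(criterion, 3))
```

    The positive criterion reflects the truth bias noted in the abstract: judges call statements honest more often than chance would warrant.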

  16. Computer simulation applied to jewellery casting: challenges, results and future possibilities

    NASA Astrophysics Data System (ADS)

    Tiberto, Dario; Klotz, Ulrich E.

    2012-07-01

    Computer simulation has been successfully applied in the past to several industrial processes (such as lost foam and die casting) by larger foundries and direct automotive suppliers, while in the jewellery sector it is not widespread and has been tested mainly in the context of research projects. On the basis of a recently concluded EU project, the authors present the simulation of investment casting using two different software packages: one for the filling step (Flow-3D®), the other for solidification (PoligonSoft®). A material characterization effort was conducted to obtain the necessary physical parameters for the investment (used for the mold) and for the gold alloys (through thermal analysis). A series of 18k and 14k gold alloys were cast in standard set-ups to provide benchmark trials with embedded thermocouples for temperature measurement, in order to compare and validate the software output in terms of the cooling curves for defined test parts. Results obtained with the simulation included the reduction of micro-porosity through an optimization of the feeding channels for a controlled solidification of the metal; examples of the predicted porosity in the cast parts (with metallographic comparison) are shown. Considerations on the feasibility of applying casting simulation in the jewellery sector are presented, underlining the importance of the software parametrization necessary to obtain reliable results and the discrepancies found in the experimental comparison. In addition, an overview of further possibilities for applying CFD in jewellery casting, such as modeling of the centrifugal and tilting processes, is presented.

  17. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

    SciTech Connect

    Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L.

    1995-10-01

    This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed and found a statistically significant factor-of-two bias on average.
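
    The first bias test amounts to comparing the observed failure count against the count expected under the predicted mean HEP. A sketch using a normal-approximation binomial test, with the predicted mean set (hypothetically) at twice the observed rate to mirror the factor-of-two bias the report describes:

```python
import math
from statistics import NormalDist

n_tasks, n_failed = 4071, 45
observed_rate = n_failed / n_tasks

# Hypothetical predicted mean HEP: twice the observed rate, mirroring the
# factor-of-two average bias reported above (not the actual ASEP estimates)
predicted_hep = 2 * observed_rate

expected_failures = n_tasks * predicted_hep
z = (n_failed - expected_failures) / math.sqrt(
    n_tasks * predicted_hep * (1 - predicted_hep))
p_two_sided = 2 * NormalDist().cdf(-abs(z))
print(z, p_two_sided)   # large |z|, tiny p: such a bias would be statistically significant
```

    With thousands of tasks, even a modest overprediction of the mean HEP is easily detectable, which is why the study could call the factor-of-two bias statistically significant.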

  18. RESULTS OF CESIUM MASS TRANSFER TESTING FOR NEXT GENERATION SOLVENT WITH HANFORD WASTE SIMULANT AP-101

    SciTech Connect

    Peters, T.; Washington, A.; Fink, S.

    2011-09-27

    SRNL has performed an Extraction, Scrub, Strip (ESS) test using the next-generation solvent and an AP-101 Hanford waste simulant. The results indicate that the next-generation solvent (MG solvent) has adequate extraction behavior even in the face of a massive excess of potassium. The stripping results indicate poorer behavior, but this may be due to inadequate method detection limits. SRNL recommends further testing using hot tank waste or spiked simulant to provide better detection limits. Furthermore, strong consideration should be given to performing an actual-waste or spiked-waste demonstration using the 2-cm contactor bank. The Savannah River Site currently utilizes a solvent extraction technology to selectively remove cesium from tank waste at the Multi-Component Solvent Extraction unit (MCU). This solvent consists of four components: the extractant, BoBCalixC6; a modifier, Cs-7B; a suppressor, trioctylamine; and a diluent, Isopar L™. This solvent has been used to successfully decontaminate over 2 million gallons of tank waste. However, recent work at Oak Ridge National Laboratory (ORNL), Argonne National Laboratory (ANL), and Savannah River National Laboratory (SRNL) has provided a basis to implement an improved solvent blend. This new solvent blend, referred to as Next Generation Solvent (NGS), is similar to the current solvent and also contains four components: the extractant, MAXCalix; a modifier, Cs-7B; a suppressor, LIX-79™ guanidine; and a diluent, Isopar L™. Testing to date has shown that this next-generation solvent promises to provide far superior cesium removal efficiencies and, furthermore, is theorized to perform adequately even in waste with high potassium concentrations, such that it could be used for processing Hanford wastes. SRNL has performed a cesium mass transfer test to confirm this behavior, using a simulant designed to represent Hanford AP-101 waste.

  19. Theory and simulation of oscillations on near-steady state in crossed-field electron flow and the resulting transport

    NASA Astrophysics Data System (ADS)

    Cartwright, Keith Lewis

    The purpose of this study is to understand the oscillatory steady-state behavior of crossed-field electron flow in diodes for magnetic fields greater than the Hull field (B > BH) by means of theory and self-consistent, electrostatic particle-in-cell (PIC) simulations. Many prior analytic studies of diode-like problems have been time-independent, which leaves the stability and time-dependence of these models unresolved. We investigate fluctuations through the system, including virtual cathode oscillations, and compare results for various cathode injection models. The dominant oscillations in magnetically insulated crossed-field diodes are found to be a series resonance, Z(ωs) = 0, between the pure electron plasma and vacuum impedance of the diode. The series resonance in crossed-field electron flow is shown to be the ky → 0 (one-dimensional) limit of the diocotron/magnetron eigenmode equation. The wavenumber, ky, is perpendicular to the direction across the diode and magnetic field. The series resonance is derived theoretically and verified with self-consistent, electrostatic PIC simulations. Electron transport across the magnetic field in a cutoff planar smooth-bore magnetron is described on the basis of surface waves (formed by the shear flow instability) perpendicular to the magnetic field and along the cathode. A self-consistent, 2d3v (two spatial dimensions and three velocity components), electrostatic PIC simulation of a crossed-field diode produces a near-Brillouin flow which slowly expands across the diode, punctuated by sudden transport across the diode. The theory of slow transport across the diode is explained by the addition of perturbed orbits to the Brillouin shear flow motion of the plasma in the diode. A slow drift compared to the shear flow is described which results from the fields caused by the surface wave inducing an electrostatic ponderomotive-like force in a dc external magnetic field. In order to perform the above
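
    One schematic way to picture such a series resonance (a simplified sketch, not the dissertation's actual derivation) is to treat the diode as a vacuum layer of width d_v in series with an electron-plasma layer of width d_p and dielectric function ε_p(ω); the total impedance of the two series capacitors vanishes when

```latex
Z(\omega) \;=\; \frac{1}{i\omega\varepsilon_0 A}\left( d_v + \frac{d_p}{\varepsilon_p(\omega)} \right) = 0
\quad\Longrightarrow\quad
\varepsilon_p(\omega_s) = -\frac{d_p}{d_v},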

  20. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is presented, along with a listing of the computer program written to implement them. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is described, and results of error matrices from the mapping effort of the San Juan National Forest are reported. Also given are a method for estimating the sample size requirements for implementing the accuracy assessment procedures and a proposed method for determining the reliability of change detection between two maps of the same area produced at different times.
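
    The discrete multivariate techniques referred to here center on the error (confusion) matrix. A minimal sketch of computing overall accuracy and the kappa coefficient of agreement from such a matrix might look like the following (the matrix values are illustrative, not taken from the San Juan study):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and kappa from a square confusion matrix.

    Rows are classified categories, columns are reference categories.
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()                       # total number of samples
    po = np.trace(cm) / n              # observed agreement (overall accuracy)
    # chance agreement from the marginal row/column totals
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa

# Illustrative 3-class error matrix
cm = [[50,  5,  2],
      [ 4, 40,  6],
      [ 1,  3, 39]]
acc, kappa = accuracy_and_kappa(cm)
```

    Kappa discounts the agreement expected by chance, which is why it is preferred over raw percent-correct in this line of work.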

  1. Testing Friction Laws by Comparing Simulation Results With Experiments of Spontaneous Dynamic Rupture

    NASA Astrophysics Data System (ADS)

    Lu, X.; Lapusta, N.; Rosakis, A. J.

    2005-12-01

    Friction laws are typically introduced either on the basis of theoretical ideas or by fitting laboratory experiments that reproduce only a small subset of possible behaviors. Hence it is important to validate the resulting laws by modeling experiments that produce spontaneous frictional behavior. Here we simulate experiments of spontaneous rupture transition from sub-Rayleigh to supershear performed by Xia et al. (Science, 2004). In the experiments, two thin Homalite plates are pressed together along an inclined interface. Compressive load P is applied to the edges of the plates and the rupture is triggered by the explosion of a small wire. Xia et al. (2004) link the transition in their experiments to the Burridge-Andrews mechanism (Andrews, JGR, 1976), which involves initiation of a daughter crack in front of the main rupture. Xia et al. measured transition lengths for different values of the load P and compared their results with numerical simulations of Andrews, who used linear slip-weakening friction. They conclude that to obtain a good fit they need to assume that the critical slip of the slip-weakening law scales as P^-1/2, as proposed by Ohnaka (JGR, 2003). Hence our first goal is to verify whether the dependence of the critical slip on the compressive load P is indeed necessary for a good fit to the experimental measurements. To test that, we conducted simulations of the experiments using the boundary integral methodology in its spectral formulation (Perrin et al., 1995; Geubelle and Rice, 1995). We approximately model the wire explosion by a temporary normal stress decrease in a region of the interface comparable to the size of the exploding wire. The simulations show good agreement of the transition length with the experimental results for different values of the load P, even though we keep the critical slip constant. Hence the dependence of the critical slip on P is not necessary to fit the experimental measurements. The inconsistency between Andrews' numerical results
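
    The linear slip-weakening law referred to here can be written in its standard form (quoted from the general literature, not from the cited papers verbatim) as

```latex
\tau(\delta) =
\begin{cases}
\tau_s - (\tau_s - \tau_d)\,\dfrac{\delta}{D_c}, & \delta < D_c,\\[4pt]
\tau_d, & \delta \ge D_c,
\end{cases}
```

    where τ_s and τ_d are the static and dynamic strengths and D_c is the critical slip; the question tested above is whether D_c must scale with the load as P^-1/2 to fit the measured transition lengths.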

  2. Experiments with encapsulation of Monte Carlo simulation results in machine learning models

    NASA Astrophysics Data System (ADS)

    Lal Shrestha, Durga; Kayastha, Nagendra; Solomatine, Dimitri

    2010-05-01

    Uncertainty analysis techniques based on Monte Carlo (MC) simulation have been applied successfully in the hydrological sciences over the last decades. They allow for quantification of the model output uncertainty resulting from uncertain model parameters, input data or model structure. They are very flexible, conceptually simple and straightforward, but they become impractical for real-time applications of complex models, when there is little time to perform the uncertainty analysis because of the large number of model runs required. A number of new methods have been developed to improve the efficiency of Monte Carlo methods, yet these methods still require a considerable number of model runs in both offline and operational mode to produce reliable and meaningful uncertainty estimates. This paper presents experiments with machine learning techniques used to encapsulate the results of MC runs. A version of the MC simulation method, the generalised likelihood uncertainty estimation (GLUE) method, is first used to assess the parameter uncertainty of the conceptual rainfall-runoff model HBV. Then three machine learning methods, namely artificial neural networks, M5 model trees and locally weighted regression, are trained to encapsulate the uncertainty estimated by the GLUE method using the historical input data. The trained machine learning models are then employed to predict the uncertainty of the model output for new input data. This method has been applied to two contrasting catchments: the Brue catchment (United Kingdom) and the Bagamati catchment (Nepal). The experimental results demonstrate that the machine learning methods are reasonably accurate in approximating the uncertainty estimated by GLUE. The great advantage of the proposed method is its efficiency in reproducing the MC-based simulation results; it can thus be an effective tool for assessing the uncertainty of flood forecasting in real time.
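
    The general idea, a machine learning model trained to map model inputs to MC-derived uncertainty bounds so that new bounds can be predicted without re-running the ensemble, can be sketched with a deliberately simple stand-in (ordinary least squares instead of neural networks or M5 trees, and synthetic data in place of HBV/GLUE output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MC/GLUE output: for each input x, an ensemble of
# model realisations whose spread represents parameter uncertainty.
n, m = 200, 500                        # time steps, MC ensemble members
x = rng.uniform(0, 10, size=n)         # "rainfall" input
ensemble = (x[:, None] * rng.uniform(0.8, 1.2, size=(n, m))
            + rng.normal(0, 0.1, (n, m)))

# Step 1: summarise the MC runs as prediction bounds (5% / 95% quantiles).
lo = np.quantile(ensemble, 0.05, axis=1)
hi = np.quantile(ensemble, 0.95, axis=1)

# Step 2: train a regressor (here OLS on [1, x]) to emulate each bound.
X = np.column_stack([np.ones(n), x])
coef_lo, *_ = np.linalg.lstsq(X, lo, rcond=None)
coef_hi, *_ = np.linalg.lstsq(X, hi, rcond=None)

# Step 3: predict uncertainty bounds for new inputs, no MC re-run needed.
x_new = np.array([2.0, 5.0, 8.0])
X_new = np.column_stack([np.ones(3), x_new])
lo_new, hi_new = X_new @ coef_lo, X_new @ coef_hi
```

    The paper's contribution is exactly the substitution made in step 2: once the expensive MC ensemble has been summarised, a cheap learned model stands in for it during operation.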

  3. SRG110 Stirling Generator Dynamic Simulator Vibration Test Results and Analysis Correlation

    NASA Technical Reports Server (NTRS)

    Suarez, Vicente J.; Lewandowski, Edward J.; Callahan, John

    2006-01-01

    The U.S. Department of Energy (DOE), Lockheed Martin (LM), and NASA Glenn Research Center (GRC) have been developing the Stirling Radioisotope Generator (SRG110) for use as a power system for space science missions. The launch environment enveloping potential missions results in a random input spectrum that is significantly higher than historical RPS launch levels and is a challenge for designers. Analysis presented in prior work predicted that tailoring the compliance at the generator-spacecraft interface reduced the dynamic response of the system thereby allowing higher launch load input levels and expanding the range of potential generator missions. To confirm analytical predictions, a dynamic simulator representing the generator structure, Stirling convertors and heat sources was designed and built for testing with and without a compliant interface. Finite element analysis was performed to guide the generator simulator and compliant interface design so that test modes and frequencies were representative of the SRG110 generator. This paper presents the dynamic simulator design, the test setup and methodology, test article modes and frequencies and dynamic responses, and post-test analysis results. With the compliant interface, component responses to an input environment exceeding the SRG110 qualification level spectrum were all within design allowables. Post-test analysis included finite element model tuning to match test frequencies and random response analysis using the test input spectrum. Analytical results were in good overall agreement with the test results and confirmed previous predictions that the SRG110 power system may be considered for a broad range of potential missions, including those with demanding launch environments.

  4. SRG110 Stirling Generator Dynamic Simulator Vibration Test Results and Analysis Correlation

    NASA Technical Reports Server (NTRS)

    Lewandowski, Edward J.; Suarez, Vicente J.; Goodnight, Thomas W.; Callahan, John

    2007-01-01

    The U.S. Department of Energy (DOE), Lockheed Martin (LM), and NASA Glenn Research Center (GRC) have been developing the Stirling Radioisotope Generator (SRG110) for use as a power system for space science missions. The launch environment enveloping potential missions results in a random input spectrum that is significantly higher than historical radioisotope power system (RPS) launch levels and is a challenge for designers. Analysis presented in prior work predicted that tailoring the compliance at the generator-spacecraft interface reduced the dynamic response of the system thereby allowing higher launch load input levels and expanding the range of potential generator missions. To confirm analytical predictions, a dynamic simulator representing the generator structure, Stirling convertors and heat sources was designed and built for testing with and without a compliant interface. Finite element analysis was performed to guide the generator simulator and compliant interface design so that test modes and frequencies were representative of the SRG110 generator. This paper presents the dynamic simulator design, the test setup and methodology, test article modes and frequencies and dynamic responses, and post-test analysis results. With the compliant interface, component responses to an input environment exceeding the SRG110 qualification level spectrum were all within design allowables. Post-test analysis included finite element model tuning to match test frequencies and random response analysis using the test input spectrum. Analytical results were in good overall agreement with the test results and confirmed previous predictions that the SRG110 power system may be considered for a broad range of potential missions, including those with demanding launch environments.

  5. Development of ADOCS controllers and control laws. Volume 3: Simulation results and recommendations

    NASA Technical Reports Server (NTRS)

    Landis, Kenneth H.; Glusman, Steven I.

    1985-01-01

    The Advanced Cockpit Controls/Advanced Flight Control System (ACC/AFCS) study was conducted by the Boeing Vertol Company as part of the Army's Advanced Digital/Optical Control System (ADOCS) program. Specifically, the ACC/AFCS investigation was aimed at developing the flight control laws for the ADOCS demonstrator aircraft which will provide satisfactory handling qualities for an attack helicopter mission. The three major elements of design considered are as follows: Pilot's integrated Side-Stick Controller (SSC) -- number of axes controlled; force/displacement characteristics; ergonomic design. Stability and Control Augmentation System (SCAS) -- digital flight control laws for the various mission phases; SCAS mode switching logic. Pilot's Displays -- for night/adverse weather conditions, the dynamics of the superimposed symbology presented to the pilot in a format similar to the Advanced Attack Helicopter (AAH) Pilot Night Vision System (PNVS) for each mission phase are a function of SCAS characteristics; display mode switching logic. Results of the five piloted simulations conducted at the Boeing Vertol and NASA-Ames simulation facilities are presented in Volume 3. Conclusions drawn from analysis of pilot rating data and commentary were used to formulate recommendations for the ADOCS demonstrator flight control system design. The ACC/AFCS simulation data also provide an extensive data base to aid the development of advanced flight control system designs for future V/STOL aircraft.

  6. Flow-driven cloud formation and fragmentation: results from Eulerian and Lagrangian simulations

    NASA Astrophysics Data System (ADS)

    Heitsch, Fabian; Naab, Thorsten; Walch, Stefanie

    2011-07-01

    The fragmentation of shocked flows in a thermally bistable medium provides a natural mechanism to form turbulent cold clouds as precursors to molecular clouds. Yet because of the large density and temperature differences and the range of dynamical scales involved, following this process with numerical simulations is challenging. We compare two-dimensional simulations of flow-driven cloud formation without self-gravity, using the Lagrangian smoothed particle hydrodynamics (SPH) code VINE and the Eulerian grid code PROTEUS. Results are qualitatively similar for both methods, yet the variable spatial resolution of the SPH method leads to smaller fragments and thinner filaments, rendering the overall morphologies different. Thermal and hydrodynamical instabilities lead to rapid cooling and fragmentation into cold clumps with temperatures below 300 K. For clumps more massive than 1 M⊙ pc^-1, the clump mass function has an average slope of -0.8. The internal velocity dispersion of the clumps is nearly an order of magnitude smaller than their relative motion, rendering it subsonic with respect to the internal sound speed of the clumps but supersonic as seen by an external observer. For the SPH simulations most of the cold gas resides at temperatures below 100 K, while the grid-based models show an additional, substantial component between 100 and 300 K. Independent of the numerical method, our models confirm that converging flows of warm neutral gas fragment rapidly and form high-density, low-temperature clumps as possible seeds for star formation.

  7. The Ten Commandments for Translating Simulation Results into Real-Life Performance

    ERIC Educational Resources Information Center

    Wenzler, Ivo

    2009-01-01

    Simulation designers are continuously facing the challenge of determining how much of the expected value the simulation has delivered to the client. Addressing this challenge is not easy, and it requires simulation designers to stretch their comfort zones. This article presents a ten-step approach for meeting simulation objectives and translating…

  8. Fluid Instabilities in the Crab Nebula Jet: Results from Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Mignone, A.; Striani, E.; Bodo, G.; Anjiri, M.

    2014-09-01

    We present an overview of high-resolution relativistic MHD numerical simulations of the Crab Nebula South-East jet. The models are based on hot and relativistic hollow outflows initially carrying a purely toroidal magnetic field. Our results indicate that weakly relativistic (γ ≈ 2) and strongly magnetized jets are prone to kink instabilities leading to a noticeable deflection of the jet. These conclusions are in good agreement with recent X-ray (Chandra) data of the Crab Nebula South-East jet indicating a change in the direction of propagation on a time scale of the order of a few years.

  9. Two-dimensional copolymers and multifractality: comparing perturbative expansions, Monte Carlo simulations, and exact results.

    PubMed

    von Ferber, C; Holovatch, Yu

    2002-04-01

    We analyze the scaling laws for a set of two different species of long flexible polymer chains joined together at one of their extremities (copolymer stars) in space dimension D=2. We use a formerly constructed field-theoretic description and compare our perturbative results for the scaling exponents with recent conjectures for exact conformal scaling dimensions derived by a conformal invariance technique in the context of D=2 quantum gravity. A simple Monte Carlo simulation brings about reasonable agreement with both approaches. We analyze the remarkable multifractal properties of the spectrum of scaling exponents. PMID:12005898

  10. Entry, Descent and Landing Systems Analysis: Exploration Class Simulation Overview and Results

    NASA Technical Reports Server (NTRS)

    DwyerCianciolo, Alicia M.; Davis, Jody L.; Shidner, Jeremy D.; Powell, Richard W.

    2010-01-01

    NASA senior management commissioned the Entry, Descent and Landing Systems Analysis (EDL-SA) Study in 2008 to identify and roadmap the Entry, Descent and Landing (EDL) technology investments that the agency needed to make in order to successfully land large payloads at Mars for both robotic and exploration or human-scale missions. The year one exploration class mission activity considered technologies capable of delivering a 40-mt payload. This paper provides an overview of the exploration class mission study, including technologies considered, models developed and initial simulation results from the EDL-SA year one effort.

  11. Optical imaging of alpha emitters: simulations, phantom, and in vivo results

    NASA Astrophysics Data System (ADS)

    Boschi, Federico; Meo, Sergio Lo; Rossi, Pier Luca; Calandrino, Riccardo; Sbarbati, Andrea; Spinelli, Antonello E.

    2011-12-01

    There has been growing interest in investigating both the in vitro and in vivo detection of optical photons from a plethora of beta emitters using optical techniques. In this paper we have investigated an alpha particle induced fluorescence signal by using a commercial CCD-based small animal optical imaging system. The light emission of a 241Am source was simulated using GEANT4 and tested in different experimental conditions including the imaging of in vivo tissue. We believe that the results presented in this work can be useful to describe a possible mechanism for the in vivo detection of alpha emitters used for therapeutic purposes.

  12. Design and Simulation Results of Waveguide Bends Used in Debuncher Cooling System

    SciTech Connect

    Sun, Ding; /Fermilab

    2000-09-13

    This note documents the design and simulation results of the waveguide bends installed with the arrays in the debuncher cooling upgrade. The main feature of these bends is that they are not traditional mitered or round bends; instead, a cylinder is placed in the corner area of the bend. The reason for this design is purely to overcome some practical problems: (1) since these bends are very close to the slotted foil which serves as part of the waveguide array, it is very difficult to make good joints and contacts if mitered bends are used; (2) the location of these bends makes assembly difficult; and (3) the limited space requires a compact design. Shown in Figure 1 is a schematic drawing of a bend. Dimensions of bends for each frequency band are listed in Table 1. Shown in Figures 2-5 are the simulation results using HFSS. One of the bends was fabricated with flanges on both ends and measured using a network analyzer. The HFSS result was confirmed by the measured data.

  13. JT9D performance deterioration results from a simulated aerodynamic load test

    NASA Technical Reports Server (NTRS)

    Stakolich, E. G.; Stromberg, W. J.

    1981-01-01

    This paper presents the results of testing to identify the effects of simulated aerodynamic flight loads on JT9D engine performance. The test results were also used to refine previous analytical studies on the impact of aerodynamic flight loads on performance losses. To accomplish these objectives, a JT9D-7AH engine was assembled with average production clearances and new seals as well as extensive instrumentation to monitor engine performance, case temperatures, and blade tip clearance changes. A special loading device was designed and constructed to permit application of known moments and shear forces to the engine by the use of cables placed around the flight inlet. The test was conducted in the Pratt and Whitney Aircraft X-Ray Test Facility to permit the use of X-ray techniques in conjunction with laser blade tip proximity probes to monitor important engine clearance changes. Upon completion of the test program, the test engine was disassembled, and the condition of gas path parts and final clearances were documented. The test results indicate that the engine lost 1.1 percent in thrust specific fuel consumption (TSFC), as measured under sea level static conditions, due to increased operating clearances caused by simulated flight loads. This compares with 0.9 percent predicted by the analytical model and previous study efforts.

  14. JT9D performance deterioration results from a simulated aerodynamic load test

    NASA Technical Reports Server (NTRS)

    Stakolich, E. G.; Stromberg, W. J.

    1981-01-01

    The results of testing to identify the effects of simulated aerodynamic flight loads on JT9D engine performance are presented. The test results were also used to refine previous analytical studies on the impact of aerodynamic flight loads on performance losses. To accomplish these objectives, a JT9D-7AH engine was assembled with average production clearances and new seals as well as extensive instrumentation to monitor engine performance, case temperatures, and blade tip clearance changes. A special loading device was designed and constructed to permit application of known moments and shear forces to the engine by the use of cables placed around the flight inlet. The test was conducted in the Pratt & Whitney Aircraft X-Ray Test Facility to permit the use of X-ray techniques in conjunction with laser blade tip proximity probes to monitor important engine clearance changes. Upon completion of the test program, the test engine was disassembled, and the condition of gas path parts and final clearances were documented. The test results indicate that the engine lost 1.1 percent in thrust specific fuel consumption (TSFC), as measured under sea level static conditions, due to increased operating clearances caused by simulated flight loads. This compares with 0.9 percent predicted by the analytical model and previous study efforts.

  15. Finite difference model for aquifer simulation in two dimensions with results of numerical experiments

    USGS Publications Warehouse

    Trescott, Peter C.; Pinder, George Francis; Larson, S.P.

    1976-01-01

    The model will simulate ground-water flow in an artesian aquifer, a water-table aquifer, or a combined artesian and water-table aquifer. The aquifer may be heterogeneous and anisotropic and have irregular boundaries. The source term in the flow equation may include well discharge, constant recharge, leakage from confining beds in which the effects of storage are considered, and evapotranspiration as a linear function of depth to water. The theoretical development includes presentation of the appropriate flow equations and derivation of the finite-difference approximations (written for a variable grid). The documentation emphasizes the numerical techniques that can be used for solving the simultaneous equations and describes the results of numerical experiments using these techniques. Of the three numerical techniques available in the model, the strongly implicit procedure, in general, requires less computer time and has fewer numerical difficulties than do the iterative alternating direction implicit procedure and line successive overrelaxation (which includes a two-dimensional correction procedure to accelerate convergence). The documentation includes a flow chart, program listing, an example simulation, and sections on designing an aquifer model and requirements for data input. It illustrates how model results can be presented on the line printer and pen plotters with a program that utilizes the graphical display software available from the Geological Survey Computer Center Division. In addition the model includes options for reading input data from a disk and writing intermediate results on a disk.
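
    The basic finite-difference discretisation described in the report (though not the USGS program itself, which supports variable grids, leakage, and several solvers) can be illustrated for steady confined flow with a single pumping well, T∇²h = -W, on a uniform grid, solved here by simple Gauss-Seidel iteration rather than the report's SIP, ADI, or LSOR schemes; all parameter values are illustrative:

```python
import numpy as np

# Steady 2-D confined flow: T * (d2h/dx2 + d2h/dy2) = -W  (uniform T, grid)
nx, ny = 21, 21
dx = 100.0                     # grid spacing, m
T = 500.0                      # transmissivity, m^2/day
W = np.zeros((ny, nx))         # source term, m/day (negative = withdrawal)
W[10, 10] = -1000.0 / dx**2    # well pumping 1000 m^3/day over one cell

h = np.full((ny, nx), 50.0)    # initial heads; boundary nodes held at 50 m

# Gauss-Seidel iteration on the five-point stencil
for _ in range(5000):
    h_old = h.copy()
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            h[j, i] = 0.25 * (h[j, i-1] + h[j, i+1] + h[j-1, i] + h[j+1, i]
                              + dx**2 * W[j, i] / T)
    if np.max(np.abs(h - h_old)) < 1e-6:
        break

drawdown = 50.0 - h[10, 10]    # largest drawdown occurs at the well node
```

    Replacing the sweep above with SIP, ADI, or LSOR changes only how the same system of simultaneous equations is solved, which is precisely the comparison the report's numerical experiments make.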

  16. Real-gas simulation for the Shuttle Orbiter and planetary entry configurations including flight results

    NASA Technical Reports Server (NTRS)

    Calloway, R. L.

    1984-01-01

    By testing configurations in a gas (like CF4) which can produce high normal-shock density ratios, such as those encountered during hypersonic entry, certain aspects of real-gas effects can be simulated. Results from force-moment, shock-shape and oil flow visualization tests are presented for both the Shuttle Orbiter and a 45 deg sphere-cone in CF4 and air at M = 6, and comparisons are made with flight results. Pitching-moment coefficients measured on a Shuttle Orbiter model in CF4 showed a nose-up increment, compared with air results, that was almost identical to the difference between preflight predictions and flight in the high hypersonic regime. The drag coefficient measured in CF4 on the 45 deg sphere-cone, which is the same configuration used on the forebody of the Pioneer Venus entry vehicles, showed excellent agreement with flight data at M = 6.

  17. Influence of land use on rainfall simulation results in the Souss basin, Morocco

    NASA Astrophysics Data System (ADS)

    Peter, Klaus Daniel; Ries, Johannes B.; Hssaine, Ali Ait

    2013-04-01

    Situated between the High and Anti-Atlas, the Souss basin is characterized by dynamic land use change and is one of the fastest growing agricultural regions of Morocco. Traditional mixed agriculture is being replaced by extensive plantations of citrus fruits, bananas and vegetables in monocropping, mainly for the European market. To implement this land use change and further expand the plantations into marginal land formerly unsuitable for agriculture, land levelling by heavy machinery is used to flatten the fields and close the widespread gullies. These gully systems cut deep between the plantations and other arable land; their development began over 400 years ago with the introduction of sugar production. Heavy rainfall events lead to further strong soil and gully erosion in this normally arid region, which receives a mean annual precipitation of about 200 mm. Gullies are cutting into the arable land or are re-excavating their old stream courses. On the test sites around the city of Taroudant, a total of 122 rainfall simulations were conducted to analyze the susceptibility of soils to surface runoff and soil erosion under different land use. A small portable nozzle rainfall simulator was used for the experiments, quantifying runoff and erosion rates on micro-plots with a size of 0.28 m2. A motor pump, regulated by a flow meter, feeds the water into a commercial full cone nozzle at a height of 2 m. The rainfall intensity is maintained at about 40 mm h^-1 for each of the 30-minute experiments. Ten categories of land use are classified for different stages of levelling, fallow land, cultivation and rangeland. Results show that mean runoff coefficients and mean sediment loads are significantly higher (1.4 and 3.5 times, respectively) on levelled study sites compared to undisturbed sites. However, the runoff coefficients of all land use types are relatively equal and reach high median values from 39 to 56 %. Only the
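
    For reference, the runoff coefficient reported from such plot experiments is simply the ratio of collected runoff to applied rainfall. A sketch with made-up measurements (the plot size, intensity, and duration match the setup described above; the runoff and sediment values are hypothetical):

```python
# Runoff coefficient and sediment load from a rainfall-simulation plot.
plot_area = 0.28           # m^2, micro-plot size
intensity = 40.0           # mm/h applied rainfall intensity
duration_h = 0.5           # 30-minute experiment

rain_depth_mm = intensity * duration_h     # 20 mm of rain applied
rain_volume_l = rain_depth_mm * plot_area  # 1 mm over 1 m^2 = 1 litre

runoff_l = 2.8             # hypothetical collected runoff, litres
sediment_g = 14.0          # hypothetical dry sediment mass, grams

runoff_coeff = 100.0 * runoff_l / rain_volume_l   # runoff coefficient, %
sediment_conc = sediment_g / runoff_l             # sediment concentration, g/l
```

    With these hypothetical numbers the plot yields a runoff coefficient of 50 %, inside the 39-56 % median range reported above.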

  18. SZ effects in the Magneticum Pathfinder Simulation: Comparison with the Planck, SPT, and ACT results

    NASA Astrophysics Data System (ADS)

    Dolag, K.; Komatsu, E.; Sunyaev, R.

    2016-08-01

    We calculate the one-point probability density distribution functions (PDF) and the power spectra of the thermal and kinetic Sunyaev-Zeldovich (tSZ and kSZ) effects and the mean Compton Y parameter using the Magneticum Pathfinder simulations, state-of-the-art cosmological hydrodynamical simulations of a large cosmological volume of (896 Mpc/h)^3. These simulations follow in detail the thermal and chemical evolution of the intracluster medium as well as the evolution of super-massive black holes and their associated feedback processes. We construct full-sky maps of tSZ and kSZ from the light-cones out to z = 0.17, and one realisation of an 8.8° × 8.8° deep light-cone out to z = 5.2. The local universe at z < 0.027 is simulated by a constrained realisation. The tail of the one-point PDF of tSZ from the deep light-cone follows a power-law shape with an index of -3.2. Once convolved with the effective beam of Planck, it agrees with the PDF measured by Planck. The predicted tSZ power spectrum agrees with that of the Planck data at all multipoles up to l ≈ 1000, once the calculations are scaled to the Planck 2015 cosmological parameters with Ωm = 0.308 and σ8 = 0.8149. Consistent with the results in the literature, however, we continue to find a tSZ power spectrum at l = 3000 that is significantly larger than that estimated from the high-resolution ground-based data. The simulation predicts a mean fluctuating Compton Y value of Ȳ = 1.18 × 10^-6 for Ωm = 0.272 and σ8 = 0.809. Nearly half (≈ 5 × 10^-7) of the signal comes from halos below a virial mass of 10^13 M⊙/h. Scaling this to the Planck 2015 parameters, we find Ȳ = 1.57 × 10^-6.
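
    The Compton Y parameter discussed here is, in its standard definition (quoted from the general literature, not specific to this paper),

```latex
y \;=\; \frac{\sigma_T}{m_e c^2} \int n_e \, k_B T_e \, \mathrm{d}l,
```

    where σ_T is the Thomson cross-section and the integral of the electron pressure n_e k_B T_e runs along the line of sight; the mean fluctuating value quoted above is this quantity averaged over the simulated sky maps.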

  19. RESULTS OF COPPER CATALYZED PEROXIDE OXIDATION (CCPO) OF TANK 48H SIMULANTS

    SciTech Connect

    Peters, T.; Pareizs, J.; Newell, J.; Fondeur, F.; Nash, C.; White, T.; Fink, S.

    2012-08-14

    Savannah River National Laboratory (SRNL) performed a series of laboratory-scale experiments that examined copper-catalyzed hydrogen peroxide (H{sub 2}O{sub 2}) aided destruction of organic components, most notably tetraphenylborate (TPB), in Tank 48H simulant slurries. The experiments were designed with an expectation of conducting the process within existing vessels of Building 241-96H with minimal modifications to the existing equipment. Results of the experiments indicate that TPB destruction levels exceeding 99.9% are achievable, dependent on the reaction conditions. The following observations were made with respect to the major processing variables investigated. A lower reaction pH provides faster reaction rates (pH 7 > pH 9 > pH 11); however, pH 9 reactions provide the least quantity of organic residual compounds within the limits of species analyzed. Higher temperatures lead to faster reaction rates and smaller quantities of organic residual compounds. Higher concentrations of the copper catalyst provide faster reaction rates, but the highest copper concentration (500 mg/L) also resulted in the second highest quantity of organic residual compounds. Faster rates of H{sub 2}O{sub 2} addition lead to faster reaction rates and lower quantities of organic residual compounds. Testing with simulated slurries continues. Current testing is examining lower copper concentrations, refined peroxide addition rates, and alternate acidification methods. A revision of this report will provide updated findings with emphasis on defining recommended conditions for similar tests with actual waste samples.

  20. Natural frequencies of two bubbles in a compliant tube: Analytical, simulation, and experimental results

    PubMed Central

    Jang, Neo W.; Zakrzewski, Aaron; Rossi, Christina; Dalecki, Diane; Gracewski, Sheryl

    2011-01-01

    Motivated by various clinical applications of ultrasound contrast agents within blood vessels, the natural frequencies of two bubbles in a compliant tube are studied analytically, numerically, and experimentally. A lumped parameter model for a five degree of freedom system was developed, accounting for the compliance of the tube and coupled response of the two bubbles. The results were compared to those produced by two different simulation methods: (1) an axisymmetric coupled boundary element and finite element code previously used to investigate the response of a single bubble in a compliant tube and (2) finite element models developed in COMSOL Multiphysics. For the simplified case of two bubbles in a rigid tube, the lumped parameter model predicts two frequencies for in- and out-of-phase oscillations, in good agreement with both numerical simulation and experimental results. For two bubbles in a compliant tube, the lumped parameter model predicts four nonzero frequencies, each asymptotically converging to expected values in the rigid and compliant limits of the tube material. PMID:22088008
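
    The in-phase/out-of-phase mode splitting described above can be illustrated with a reduced two-degree-of-freedom mass-spring analogue (a minimal sketch, not the authors' five-degree-of-freedom model; all parameter values below are hypothetical placeholders):

    ```python
    import numpy as np

    # Two identical bubbles coupled through the surrounding liquid, reduced
    # to a 2-DOF mass-spring system.  Values are illustrative only.
    m = 1.0e-6   # effective (radiation) mass per bubble, kg  [assumed]
    k = 40.0     # restoring stiffness of each bubble, N/m     [assumed]
    kc = 5.0     # coupling stiffness between the bubbles, N/m [assumed]

    M = np.diag([m, m])
    K = np.array([[k + kc, -kc],
                  [-kc, k + kc]])

    # Generalized eigenproblem K v = w^2 M v  ->  eigenvalues of M^-1 K
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))
    freqs_hz = np.sort(np.sqrt(w2.real)) / (2 * np.pi)

    # Lower mode: in-phase oscillation (coupling spring unstretched),
    # w^2 = k/m.  Higher mode: out-of-phase, w^2 = (k + 2*kc)/m.
    print(freqs_hz)
    ```

    Replacing the rigid-tube coupling spring with a compliant-tube element adds degrees of freedom and, as the abstract notes, splits the spectrum into four nonzero frequencies.
    
    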

  1. Investigation of short pulse effects in IR FELs and new simulation results

    NASA Astrophysics Data System (ADS)

    Asgekar, Vivek; Berden, Giel; Brunken, Marco; Casper, Lars; Genz, Harald; Grigore, Maria; Heßler, Christoph; Khodyachykh, Sergiy; Richter, Achim; van der Meer, Alex F. G.

    2003-07-01

    The Darmstadt IR FEL is designed to generate wavelengths between 3 and 10 μm and is driven by the superconducting electron linear accelerator S-DALINAC. The pulsed electron beam has a peak current of 2.7 A, leading to a small-signal gain of 5%. Currently, investigations of the energy transfer process inside the undulator are performed using the 1D time-dependent simulation code FAST1D-OSC. We present simulation results for the power vs. different desynchronization and tapering parameters, as well as a comparison with experimental data from the S-DALINAC IR-FEL. Furthermore, a compact autocorrelation system assuring a background-free measurement of the optical pulse length is described. In a first test experiment at FELIX, the autocorrelator was tested at wavelengths of 5.7 ≤ λ ≤ 9.0 μm. Frequency doubling in a 2-mm-long ZnGeP₂ crystal resulted in a time resolution of 300 fs and a conversion efficiency of 5%.

  2. Preliminary results of strong ground motion simulation for the Lushan earthquake of 20 April 2013, China

    NASA Astrophysics Data System (ADS)

    Zhu, Gengshang; Zhang, Zhenguo; Wen, Jian; Zhang, Wei; Chen, Xiaofei

    2013-08-01

    The earthquake that occurred in Lushan County on 20 April 2013 caused heavy casualties and economic losses. In order to understand how the seismic energy propagated during this earthquake and how it caused the seismic hazard, we simulated the strong ground motions from a representative kinematic source model by Zhang et al. (Chin J Geophys 56(4):1408-1411, 2013) for this earthquake. To include topographic effects, we used the curved-grid finite-difference method of Zhang and Chen (Geophys J Int 167(1):337-353, 2006) and Zhang et al. (Geophys J Int 190(1):358-378, 2012) to implement the simulations. Our results indicate that the majority of the seismic energy was concentrated in the epicentral area and the adjacent Sichuan Basin, producing intensities of degree XI and VII. Due to the strong topographic effects of the mountains, the seismic intensity in the border area from the northeast of Baoxing County to Lushan County also reached degree IX. Moreover, the strong influence of topography amplified the ground shaking at mountain ridges, which can easily trigger landslides. These results are quite similar to those observed in the Wenchuan earthquake of 2008, which also occurred in mountainous terrain with strong topography.

  3. Caution: Precision Error in Blade Alignment Results in Faulty Unsteady CFD Simulation

    NASA Astrophysics Data System (ADS)

    Lewis, Bryan; Cimbala, John; Wouden, Alex

    2012-11-01

    Turbomachinery components experience unsteady loads at several frequencies. The rotor frequency corresponds to the time for one rotor blade to rotate between two stator vanes and is normally dominant for rotor torque oscillations. The guide vane frequency corresponds to the time for two rotor blades to pass by one guide vane. The machine frequency corresponds to the machine RPM. Oscillations at the machine frequency are always present in real machines due to minor blade misalignments and imperfections resulting from manufacturing defects. However, machine frequency oscillations should not be present in CFD simulations if the mesh is free of both blade misalignment and surface imperfections. The flow through a Francis hydroturbine was modeled with unsteady Reynolds-averaged Navier-Stokes (URANS) CFD simulations and a dynamic rotating grid. Spectral analysis of the unsteady torque on the rotor blades revealed a large component at the machine frequency. Close examination showed that one blade was displaced by 0.0001° due to round-off errors during mesh generation. A second mesh without blade misalignment was then created, and large machine frequency oscillations were not observed for this mesh. These results highlight the effect of minor geometry imperfections on CFD solutions. This research was supported by a grant from the DoE and a National Defense Science and Engineering Graduate Fellowship.
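
    The diagnostic described above (spectral analysis of unsteady torque revealing a machine-frequency peak) can be sketched with a synthetic signal; the rotor speed, blade count, and 0.1% misalignment amplitude below are hypothetical, not values from the paper:

    ```python
    import numpy as np

    # A torque signal dominated by the blade-passing frequency, plus a tiny
    # once-per-revolution component caused by one misaligned blade.
    rpm = 600.0
    f_machine = rpm / 60.0          # 10 Hz, once per revolution
    n_blades = 13
    f_blade = n_blades * f_machine  # blade-passing frequency, 130 Hz

    fs = 2000.0                     # sampling rate, Hz
    t = np.arange(0, 4.0, 1.0 / fs) # 4 s of signal -> 0.25 Hz resolution
    torque = 1.0 * np.sin(2 * np.pi * f_blade * t) \
           + 1e-3 * np.sin(2 * np.pi * f_machine * t)  # misalignment leakage

    spec = np.abs(np.fft.rfft(torque)) / len(t)   # one-sided amplitude / 2
    freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

    # Even a 0.1% misalignment component stands out cleanly at f_machine.
    peak_at_machine = spec[np.argmin(np.abs(freqs - f_machine))]
    print(peak_at_machine)  # ~5e-4 (half the 1e-3 amplitude, one-sided)
    ```

    Because both tones sit exactly on FFT bins here, there is no leakage; real CFD torque histories would need windowing before the same comparison.
    
    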

  4. Stellar hydrodynamical modeling of dwarf galaxies: simulation methodology, tests, and first results

    NASA Astrophysics Data System (ADS)

    Vorobyov, Eduard I.; Recchi, Simone; Hensler, Gerhard

    2015-07-01

    Context. In spite of enormous progress and brilliant achievements in cosmological simulations, they still lack numerical resolution or physical processes to simulate dwarf galaxies in sufficient detail. Accurate numerical simulations of individual dwarf galaxies are thus still in demand. Aims: We aim to improve available numerical techniques to simulate individual dwarf galaxies. In particular, we aim to (i) study in detail the coupling between stars and gas in a galaxy, exploiting the so-called stellar hydrodynamical approach; and (ii) study for the first time the chemodynamical evolution of individual galaxies starting from self-consistently calculated initial gas distributions. Methods: We present a novel chemodynamical code for studying the evolution of individual dwarf galaxies. In this code, the dynamics of gas is computed using the usual hydrodynamics equations, while the dynamics of stars is described by the stellar hydrodynamics approach, which solves for the first three moments of the collisionless Boltzmann equation. The feedback from stellar winds and dying stars is followed in detail. In particular, a novel and detailed approach has been developed to trace the aging of various stellar populations, which facilitates an accurate calculation of the stellar feedback depending on the stellar age. The code has been accurately benchmarked, allowing us to provide a recipe for improving the code performance on the Sedov test problem. Results: We build initial equilibrium models of dwarf galaxies that take gas self-gravity into account and present different levels of rotational support. Models with high rotational support (and hence high degrees of flattening) develop prominent bipolar outflows; a newly-born stellar population in these models is preferentially concentrated to the galactic midplane. Models with little rotational support blow away a large fraction of the gas and the resulting stellar distribution is extended and diffuse. 
Models that start from non

  5. Computer simulation results for PCM/PM/NRZ receivers in nonideal channels

    NASA Technical Reports Server (NTRS)

    Anabtawi, A.; Nguyen, T. M.; Million, S.

    1995-01-01

    This article studies, by computer simulations, the performance of deep-space telemetry signals that employ the pulse code modulation/phase modulation (PCM/PM) technique, using nonreturn-to-zero data, under the separate and combined effects of unbalanced data, data asymmetry, and a band-limited channel. The study is based on measuring the symbol error rate performance and comparing the results to the theoretical results presented in previous articles. Only the effects of imperfect carrier tracking due to an imperfect data stream are considered. The presence of an imperfect data stream (unbalanced and/or asymmetric) produces undesirable spectral components at the carrier frequency, creating an imperfect carrier reference that will degrade the performance of the telemetry system. Further disturbance to the carrier reference is caused by the intersymbol interference created by the band-limited channel.

  6. Short-time dynamics of isotropic and anisotropic Bak-Sneppen model: extensive simulation results

    NASA Astrophysics Data System (ADS)

    Tirnakli, Ugur; Lyra, Marcelo L.

    2004-12-01

    In this work, the short-time dynamics of the isotropic and anisotropic versions of the Bak-Sneppen (BS) model has been investigated using the standard damage spreading technique. Since the system sizes attained in our simulations are larger than the ones employed in previous studies, our results for the dynamic scaling exponents are expected to be more accurate than the results of the existing literature. The obtained scaling exponents of both versions of the BS model are found to be greater than the ones given in previous works. These findings are in agreement with the recent claim of Cafiero et al. (Eur. Phys. J. B7 (1999) 505). Moreover, it is found that the short-time dynamics of the anisotropic model is only slightly affected by finite-size effects and the reported estimate of α≃0.53 can be considered as a good estimate of the true exponent in the thermodynamic limit.

  7. A three-phase series-parallel resonant converter -- analysis, design, simulation, and experimental results

    SciTech Connect

    Bhat, A.K.S.; Zheng, R.L.

    1996-07-01

    A three-phase dc-to-dc series-parallel resonant converter is proposed, and its operating modes for a 180° wide gating pulse scheme are explained. A detailed analysis of the converter using a constant current model and the Fourier series approach is presented. Based on the analysis, design curves are obtained and a design example of a 1-kW converter is given. SPICE simulation results for the designed converter and experimental results for a 500-W converter are presented to verify the performance of the proposed converter for varying load conditions. The converter operates in lagging power factor (PF) mode for the entire load range and requires only a narrow variation in switching frequency to adequately regulate the output power.

  8. Multipacting simulation and test results of BNL 704 MHz SRF gun

    SciTech Connect

    Xu, W.; Belomestnykh, S.; Ben-Zvi, I.; Cullen, C.; et al.

    2012-05-20

    The BNL 704 MHz SRF gun has a grooved choke joint to support the photocathode. Due to distortion of the grooves during buffered chemical polishing (BCP) of the choke joint, several multipacting barriers showed up when it was tested with a Nb cathode stalk at JLab. We built a setup that uses the spare large-grain SRF cavity to test and condition the multipacting at BNL with various power sources up to 50 kW. The test is carried out in three stages: testing the cavity performance without a cathode, testing the cavity with the Nb cathode stalk that was used at JLab, and testing the cavity with a copper cathode stalk based on the design for the SRF gun. This paper summarizes the results of multipacting simulations and presents the large-grain cavity test setup and the test results.

  9. Predictive genetic testing of a bone marrow recipient-ethical issues involving unexpected results, gender issues, test accuracy, and implications for the donor.

    PubMed

    Sexton, A; Rawlings, L; Jenkins, M; Winship, I

    2014-02-01

    We present a case where an apparently straightforward Lynch syndrome predictive genetic test of DNA from a blood sample from a woman yielded an unexpected result of X/Y chromosome imbalance. Furthermore, it demonstrates the complexities of genetic testing in people who have had bone marrow transplants. This highlights the potential for multiple ethical and counselling challenges, including the inadvertent testing of the donor. Good communication between clinics and laboratories is essential to overcome such challenges and to minimise the provision of false results. PMID:23990319

  10. Tank 241-AZ-101 criticality assessment resulting from pump jet mixing: Sludge mixing simulation

    SciTech Connect

    Onishi, Y.; Recknagle, K.

    1997-04-01

    Tank 241-AZ-101 (AZ-101) is one of 28 double-shell tanks located in the AZ farm in the Hanford Site's 200 East Area. The tank contains a significant quantity of fissile materials, including an estimated 9.782 kg of plutonium. Before beginning jet pump mixing for mitigative purposes, the operations must be evaluated to demonstrate that they will be subcritical under both normal and credible abnormal conditions. The main objective of this study was to address a concern about whether two 300-hp pumps with four rotating 18.3-m/s (60-ft/s) jets can concentrate plutonium in their pump housings during mixer pump operation and cause a criticality. A three-dimensional simulation was performed with the time-varying TEMPEST code to determine how much the pump jet mixing of Tank AZ-101 will concentrate plutonium in the pump housing. The AZ-101 model predicted that the total amount of plutonium within the pump housing peaks at 75 g at 10 simulation seconds and decreases to less than 10 g at four minutes. The plutonium concentration in the entire pump housing peaks at 0.60 g/L at 10 simulation seconds and is reduced to below 0.1 g/L after four minutes. Since the minimum critical concentration of plutonium is 2.6 g/L, and the minimum critical plutonium mass under idealized plutonium-water conditions is 520 g, these predicted maximums in the pump housing are much lower than the minimum plutonium conditions needed to reach a criticality level. The initial plutonium maximum of 1.88 g/L still results in a safety factor of 4.3 in the pump housing during the pump jet mixing operation.
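
    A quick arithmetic check of the margins quoted above, under the assumption that a safety factor is the ratio of the minimum critical value to the predicted peak (this reading reproduces the quoted 4.3 from the 0.60 g/L concentration peak; it is our interpretation, not stated explicitly in the abstract):

    ```python
    # Minimum critical values quoted in the abstract
    crit_conc = 2.6      # minimum critical Pu concentration, g/L
    crit_mass = 520.0    # minimum critical Pu mass (idealized Pu-water), g

    # Predicted peaks in the pump housing from the TEMPEST simulation
    peak_conc = 0.60     # peak concentration, g/L
    peak_mass = 75.0     # peak total Pu mass, g

    print(round(crit_conc / peak_conc, 1))  # 4.3, concentration margin
    print(round(crit_mass / peak_mass, 1))  # 6.9, mass margin
    ```

    Both predicted peaks thus sit several times below the idealized criticality limits.
    
    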

  11. Simulation of temperature profiles at the superfluid to normal-fluid interface in helium-4 for prediction of temperature measurement accuracy

    SciTech Connect

    Hensinger, D.M.; Gianoulakis, S.E.; Duncan, R.V.

    1996-12-31

    The purpose of this work was to model the conditions in a test cell containing normal-fluid and superfluid helium-4 and to predict the accuracy of temperature measurements made on this system in the presence of non-ideal wall materials and probe geometries. A thermal model of helium-4 in the vicinity of its normal-fluid to superfluid transition temperature was used to calculate the temperature profiles within a helium-4 filled experimental test cell. Calculated temperature profiles were used to predict the temperature measurement accuracy which could be expected from a test cell and temperature probe design. The superfluid phase of helium-4 was represented as a highly-conductive, diffusive material to approximate a superconductor of heat. The thermal model included the influences of temperature, heat flux, and hydrostatic pressure on the properties of helium-4. The model was solved for quasi-static temperature profiles using a finite element method and employing a transformed and expanded temperature scale to allow resolution of nK/cm temperature gradients in the presence of a 2 K absolute temperature.

  12. Late Pop III Star Formation During the Epoch of Reionization: Results from the Renaissance Simulations

    NASA Astrophysics Data System (ADS)

    Xu, Hao; Norman, Michael L.; O’Shea, Brian W.; Wise, John H.

    2016-06-01

    We present results on the formation of Population III (Pop III) stars at redshift 7.6 from the Renaissance Simulations, a suite of extremely high-resolution and physics-rich radiation transport hydrodynamics cosmological adaptive-mesh-refinement simulations of high-redshift galaxy formation performed on the Blue Waters supercomputer. In a survey volume of about 220 comoving Mpc³, we found 14 Pop III galaxies with recent star formation. The surprisingly late formation of Pop III stars is possible due to two factors: (i) the metal enrichment process is local and slow, leaving plenty of pristine gas in the vast volume; and (ii) strong Lyman-Werner radiation from vigorous metal-enriched star formation in early galaxies suppresses Pop III formation in ("not so") small primordial halos with masses less than ~3 × 10⁷ M⊙. We quantify the properties of these Pop III galaxies and their Pop III star formation environments. We look for analogs to the recently discovered luminous Lyα emitter CR7, which has been interpreted as a Pop III star cluster within or near a metal-enriched star-forming galaxy. We find and discuss a system similar to this in some respects; however, the Pop III star cluster is far less massive and luminous than CR7 is inferred to be.

  13. Statistics of dark matter substructure - II. Comparison of model with simulation results

    NASA Astrophysics Data System (ADS)

    van den Bosch, Frank C.; Jiang, Fangzhou

    2016-05-01

    We compare subhalo mass and velocity functions obtained from different simulations with different subhalo finders among each other, and with predictions from the new semi-analytical model presented in Paper I. We find that subhalo mass functions (SHMFs) obtained using different subhalo finders agree with each other at the level of ~20 per cent, but only at the low-mass end. At the massive end, subhalo finders that identify subhaloes based purely on density in configuration space dramatically underpredict the subhalo abundances by more than an order of magnitude. These problems are much less severe for subhalo velocity functions (SHVFs), indicating that they arise from issues related to assigning masses to the subhaloes, rather than from detecting them. Overall the predictions from the semi-analytical model are in excellent agreement with simulation results obtained using the more advanced subhalo finders that use information in six-dimensional phase-space. In particular, the model accurately reproduces the slope and host-mass-dependent normalization of both the subhalo mass and velocity functions. We find that the SHMFs and SHVFs have power-law slopes of 0.86 and 2.77, respectively, significantly shallower than what has been claimed in several studies in the literature.

  14. Preparation, conduct, and experimental results of the AVR loss-of-coolant accident simulation test

    SciTech Connect

    Kruger, K.; Bergerfurth, A.; Burger, S.; Pohl, P.; Wimmers, M.; Cleveland, J.C.

    1991-02-01

    A loss-of-coolant accident (LOCA) is one of the most severe accidents for a nuclear power plant. To demonstrate the inherent safety characteristics incorporated into small high-temperature gas-cooled reactor (HTGR) designs, LOCA simulation tests have been conducted at the Arbeitsgemeinschaft Versuchsreaktor (AVR), the German pebble-bed high-temperature reactor plant. The AVR is the only nuclear power plant ever to have been intentionally subjected to LOCA conditions without emergency cooling. This paper describes the planning and licensing activities, including pretest predictions performed for the LOCA test, as well as the conduct of the test and the experimental results. The LOCA test was planned to create conditions that would exist if a rapid LOCA occurred with the reactor operating at full power. The test demonstrated this reactor's safe response to an accident in which the coolant escapes from the reactor core and no emergency system is available to provide coolant flow to the core. The test is of special interest because it demonstrates the inherent safety features incorporated into optimized modular HTGR designs. The main LOCA test lasted for 5 days. After the test began, core temperatures increased for ≈13 h and then gradually and continually decreased as the rate of heat dissipation from the core exceeded the simulated decay power. Throughout the test, temperatures remained below limiting values for the core and other reactor components.

  15. The Formation of Asteroid Satellites in Catastrophic Impacts: Results from Numerical Simulations

    NASA Technical Reports Server (NTRS)

    Durda, D. D.; Bottke, W. F., Jr.; Enke, B. L.; Asphaug, E.; Richardson, D. C.; Leinhardt, Z. M.

    2003-01-01

    We have performed new simulations of the formation of asteroid satellites by collisions, using a combination of hydrodynamical and gravitational dynamical codes. This initial work shows that both small satellites and ejected, co-orbiting pairs are produced most favorably by moderate-energy collisions at more direct, rather than oblique, impact angles. Simulations so far seem to be able to produce systems qualitatively similar to known binaries. Asteroid satellites provide vital clues that can help us understand the physics of hypervelocity impacts, the dominant geologic process affecting large main belt asteroids. Moreover, models of satellite formation may provide constraints on the internal structures of asteroids beyond those possible from observations of satellite orbital properties alone. It is probable that most observed main-belt asteroid satellites are by-products of cratering and/or catastrophic disruption events. Several possible formation mechanisms related to collisions have been identified: (i) mutual capture following catastrophic disruption, (ii) rotational fission due to glancing impact and spin-up, and (iii) re-accretion in orbit of ejecta from large, non-catastrophic impacts. Here we present results from a systematic investigation directed toward mapping out the parameter space of the first and third of these three collisional mechanisms.

  16. Simulated flight through JAWS wind shear - In-depth analysis results. [Joint Airport Weather Studies

    NASA Technical Reports Server (NTRS)

    Frost, W.; Chang, H.-P.; Elmore, K. L.; Mccarthy, J.

    1984-01-01

    The Joint Airport Weather Studies (JAWS) field experiment was carried out in 1982 near Denver. An analysis is presented of aircraft performance in the three-dimensional wind fields. The fourth dimension, time, is not considered. The analysis seeks to prepare computer models of microburst wind shear from the JAWS data sets for input to flight simulators and for research and development of aircraft control systems and operational procedures. A description is given of the data set and the method of interpolating velocities and velocity gradients for input to the six-degrees-of-freedom equations governing the motion of the aircraft. The results of the aircraft performance analysis are then presented, and the interpretation classifies the regions of shear as severe, moderate, or weak. Paths through the severe microburst of August 5, 1982, are then recommended for training and operational applications. Selected subregions of the flow field defined in terms of planar sections through the wind field are presented for application to simulators with limited computer storage capacity, that is, for computers incapable of storing the entire array of variables needed if the complete wind field is programmed.
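
    The interpolation step described above (supplying wind components at arbitrary aircraft positions from a gridded wind field) can be sketched with trilinear interpolation; the uniform grid and the linear test field below are synthetic placeholders, not JAWS data:

    ```python
    import numpy as np

    def trilinear(xs, ys, zs, v, x, y, z):
        """Interpolate 3-D array v (on uniform axes xs, ys, zs) at (x, y, z)."""
        # Fractional indices on the uniform grid
        fx = (x - xs[0]) / (xs[1] - xs[0])
        fy = (y - ys[0]) / (ys[1] - ys[0])
        fz = (z - zs[0]) / (zs[1] - zs[0])
        i, j, k = int(fx), int(fy), int(fz)
        tx, ty, tz = fx - i, fy - j, fz - k
        # Weighted sum over the 8 corners of the enclosing cell
        out = 0.0
        for di, wx in ((0, 1 - tx), (1, tx)):
            for dj, wy in ((0, 1 - ty), (1, ty)):
                for dk, wz in ((0, 1 - tz), (1, tz)):
                    out += wx * wy * wz * v[i + di, j + dj, k + dk]
        return out

    # A linear wind component u = 2x + 3y - z is reproduced exactly.
    xs = ys = zs = np.linspace(0.0, 100.0, 11)           # 10 m spacing
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    u = 2 * X + 3 * Y - Z

    val = trilinear(xs, ys, zs, u, 12.5, 47.0, 81.3)
    print(val)   # 2*12.5 + 3*47.0 - 81.3 = 84.7
    ```

    Velocity gradients for the six-degrees-of-freedom equations can be obtained the same way by interpolating finite-difference derivatives of the gridded field.
    
    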

  17. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6). Simulation Design and Preliminary Results

    SciTech Connect

    Kravitz, Benjamin S.; Robock, Alan; Tilmes, S.; Boucher, Olivier; English, J.; Irvine, Peter; Jones, Andrew; Lawrence, M. G.; Maccracken, Michael C.; Muri, Helene O.; Moore, John; Niemeier, Ulrike; Phipps, Steven; Sillmann, Jana; Storelvmo, Trude; Wang, Hailong; Watanabe, Shingo

    2015-10-27

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more longwave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. This is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  18. Simulation and Laboratory results of the Hard X-ray Polarimeter: X-Calibur

    NASA Astrophysics Data System (ADS)

    Guo, Qingzhen; Beilicke, M.; Kislat, F.; Krawczynski, H.

    2014-01-01

    X-ray polarimetry promises to give qualitatively new information about high-energy sources, such as binary black hole (BH) systems, microquasars, active galactic nuclei (AGN), GRBs, etc. We designed, built, and tested a hard X-ray polarimeter, 'X-Calibur', to be flown in the focal plane of the InFOCuS grazing-incidence hard X-ray telescope in 2014. X-Calibur combines a low-Z Compton scatterer with a CZT detector assembly to measure the polarization of 20-80 keV X-rays, making use of the fact that polarized photons Compton scatter preferentially perpendicular to the E-field orientation. X-Calibur achieves a high detection efficiency of order unity. We optimized the design of the instrument based on Monte Carlo simulations of polarized and unpolarized X-ray beams and of the most important background components. We have calibrated and tested X-Calibur extensively in the laboratory at Washington University and at the Cornell High-Energy Synchrotron Source (CHESS). Measurements using the highly polarized synchrotron beam at CHESS confirm the polarization sensitivity of the instrument. In this talk we report on the optimization of the design of the instrument based on Monte Carlo simulations, as well as results of laboratory calibration measurements characterizing the performance of the instrument.

  19. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6): simulation design and preliminary results

    NASA Astrophysics Data System (ADS)

    Kravitz, B.; Robock, A.; Tilmes, S.; Boucher, O.; English, J. M.; Irvine, P. J.; Jones, A.; Lawrence, M. G.; MacCracken, M.; Muri, H.; Moore, J. C.; Niemeier, U.; Phipps, S. J.; Sillmann, J.; Storelvmo, T.; Wang, H.; Watanabe, S.

    2015-06-01

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more longwave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. This is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  20. The Geoengineering Model Intercomparison Project Phase 6 (GeoMIP6): simulation design and preliminary results

    NASA Astrophysics Data System (ADS)

    Kravitz, B.; Robock, A.; Tilmes, S.; Boucher, O.; English, J. M.; Irvine, P. J.; Jones, A.; Lawrence, M. G.; MacCracken, M.; Muri, H.; Moore, J. C.; Niemeier, U.; Phipps, S. J.; Sillmann, J.; Storelvmo, T.; Wang, H.; Watanabe, S.

    2015-10-01

    We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more longwave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. This is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.

  1. Results of transient simulations of a digital model of the Arikaree Aquifer near Wheatland, southeastern Wyoming

    USGS Publications Warehouse

    Hoxie, Dwight T.

    1979-01-01

    Revised ground-water pumpage data have been imposed on a ground-water flow model previously developed for the Arikaree aquifer in a 400 square-mile area in central Platte County, Wyo. Maximum permitted annual ground-water withdrawals of 750 acre-feet for industrial use were combined with three irrigation-pumping scenarios to predict the long-term effects on ground-water levels and streamflows. Total annual ground-water withdrawals of 8,806 acre-feet, 8,033 acre-feet, and 5,045 acre-feet were predicted to produce average water-level declines of 5 feet or more over areas of 99, 96, and 68 square miles, respectively, at the end of a 40-year simulation period. The first two pumping scenarios were predicted to produce average drawdowns of more than 50 feet over areas of 1.5 and 0.8 square miles, respectively, while the third scenario resulted in average drawdowns of less than 50 feet throughout the study area. In addition, these three pumping scenarios were predicted to cause streamflow reductions of 2.6, 2.0, and 1.4 cubic feet per second, respectively, in the Laramie River and 4.9, 4.7, and 3.7 cubic feet per second, respectively, in the North Laramie River at the end of the 40-year simulation period. (Kosco-USGS)

  2. Establishment of quality assurance for respiratory-gated radiotherapy using a respiration-simulating phantom and gamma index: Evaluation of accuracy taking into account tumor motion and respiratory cycle

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Seung; Im, In-Chul; Kang, Su-Man; Goo, Eun-Hoe; Baek, Seong-Min

    2013-11-01

    The purpose of this study is to present a new method of quality assurance (QA) in order to ensure effective evaluation of the accuracy of respiratory-gated radiotherapy (RGR). This would help in quantitatively analyzing the patient's respiratory cycle and respiration-induced tumor motion and in performing a subsequent comparative analysis of dose distributions, using the gamma-index method, as reproduced in our in-house developed respiration-simulating phantom. Therefore, we designed a respiration-simulating phantom capable of reproducing the patient's respiratory cycle and respiration-induced tumor motion and evaluated the accuracy of RGR by estimating its pass rates. We applied the gamma index passing criteria of accepted error ranges of 3% and 3 mm for the dose distribution calculated by using the treatment planning system (TPS) and the actual dose distribution of RGR. The pass rate clearly increased inversely to the gating width chosen. When respiration-induced tumor motion was 12 mm or less, pass rates of 85% and above were achieved for the 30-70% respiratory phase, and pass rates of 90% and above were achieved for the 40-60% respiratory phase. However, a respiratory cycle with a very small fluctuation range of pass rates failed to prove reliable in evaluating the accuracy of RGR. Therefore, accurate and reliable outcomes of radiotherapy will be obtainable only by establishing a novel QA system using the respiration-simulating phantom, the gamma-index analysis, and a quantitative analysis of diaphragmatic motion, enabling an indirect measurement of tumor motion.
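
    A minimal 1-D sketch of the gamma-index comparison used above, with the 3% dose-difference / 3 mm distance-to-agreement criteria (the Gaussian dose profiles are synthetic placeholders, not the phantom measurements):

    ```python
    import numpy as np

    def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dta_mm=3.0):
        """Global gamma value at every reference point (3%/3mm by default)."""
        d_max = d_ref.max()
        gammas = np.empty_like(d_ref)
        for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
            dd = (d_eval - dr) / (dose_tol * d_max)   # normalized dose diff
            dx = (x_eval - xr) / dta_mm               # normalized distance
            gammas[i] = np.sqrt(dd**2 + dx**2).min()  # gamma = min over points
        return gammas

    x = np.linspace(0.0, 50.0, 201)            # position, mm (0.25 mm grid)
    ref = np.exp(-((x - 25.0) / 8.0) ** 2)     # planned (TPS) dose profile
    meas = np.exp(-((x - 25.5) / 8.0) ** 2)    # delivered profile, 0.5 mm shift

    g = gamma_1d(x, ref, x, meas)
    pass_rate = 100.0 * np.mean(g <= 1.0)      # pass where gamma <= 1
    print(pass_rate)   # a 0.5 mm shift passes 3%/3mm everywhere
    ```

    The QA pass rates reported above are this fraction of points with gamma ≤ 1, computed on 2-D dose distributions rather than a 1-D profile.
    
    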

  3. Assessing the Accuracy of Various Ab Initio Methods for Geometries and Excitation Energies of Retinal Chromophore Minimal Model by Comparison with CASPT3 Results.

    PubMed

    Grabarek, Dawid; Walczak, Elżbieta; Andruniów, Tadeusz

    2016-05-10

    The effect of the quality of the ground-state geometry on excitation energies in the retinal chromophore minimal model (PSB3) was systematically investigated using various single- (within Møller-Plesset and coupled-cluster frameworks) and multiconfigurational [within complete active space self-consistent field (CASSCF) and CASSCF-based perturbative approaches: second-order CASPT2 and third-order CASPT3] methods. Among investigated methods, only CASPT3 provides geometry in nearly perfect agreement with the CCSD(T)-based equilibrium structure. The second goal of the present study was to assess the performance of the CASPT2 methodology, which is popular in computational spectroscopy of retinals, in describing the excitation energies of low-lying excited states of PSB3 relative to CASPT3 results. The resulting CASPT2 excitation energy error is up to 0.16 eV for the S0 → S1 transition but only up to 0.06 eV for the S0 → S2 transition. Furthermore, CASPT3 excitation energies practically do not depend on modification of the zeroth-order Hamiltonian (so-called IPEA shift parameter), which does dramatically and nonsystematically affect CASPT2 excitation energies. PMID:27049438

  4. THE ACCURACY OF USING THE ULYSSES RESULT OF THE SPATIAL INVARIANCE OF THE RADIAL HELIOSPHERIC FIELD TO COMPUTE THE OPEN SOLAR FLUX

    SciTech Connect

    Lockwood, M.; Owens, M.

    2009-08-20

We survey observations of the radial magnetic field in the heliosphere as a function of position, sunspot number, and sunspot cycle phase. We show that most of the differences between pairs of simultaneous observations, normalized using the square of the heliocentric distance and averaged over solar rotations, are consistent with the kinematic "flux excess" effect, whereby the radial component of the frozen-in heliospheric field is increased by longitudinal solar wind speed structure. In particular, the survey shows that, as expected, the flux excess effect at high latitudes is almost completely absent during sunspot minimum but is almost the same as within the streamer belt at sunspot maximum. We study the uncertainty inherent in the use of the Ulysses result that the radial field is independent of heliographic latitude in the computation of the total open solar flux: we show that, after the kinematic correction for the flux excess effect has been made, use of this result causes errors smaller than 4.5%, with a most likely value of 2.5%. The importance of this result for understanding the temporal evolution of the open solar flux is reviewed.
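The computation this uncertainty estimate concerns can be sketched briefly: assuming the Ulysses result that r²|Br| is independent of heliographic latitude, the total unsigned open solar flux follows from a single-point radial-field measurement. The 3 nT near-Earth value below is a typical magnitude, not a result from this paper:

```python
import math

AU_M = 1.496e11  # astronomical unit in meters

def open_solar_flux(br_nT, r_au):
    """Unsigned open solar flux (Wb) from one radial-field sample, assuming
    (per the Ulysses result) r^2*|Br| is latitude-independent."""
    br_T = abs(br_nT) * 1e-9               # nT -> T
    r_m = r_au * AU_M                      # AU -> m
    return 4.0 * math.pi * r_m ** 2 * br_T

# r^2 normalization: simultaneous samples at different distances give the
# same flux if the field really falls off as 1/r^2.
f_1au = open_solar_flux(3.0, 1.0)          # |Br| ~ 3 nT near Earth (typical)
f_2au = open_solar_flux(3.0 / 4.0, 2.0)    # same flux sampled at 2 AU
```

The r² normalization is what makes simultaneous measurements at different heliocentric distances directly comparable, as in the survey above.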

  5. Simulating tissue mechanics with agent-based models: concepts, perspectives and some novel results

    NASA Astrophysics Data System (ADS)

    Van Liedekerke, P.; Palm, M. M.; Jagiella, N.; Drasdo, D.

    2015-12-01

In this paper we present an overview of agent-based models that are used to simulate mechanical and physiological phenomena in cells and tissues, and we discuss the underlying concepts, limitations, and future perspectives of these models. As interest in cell and tissue mechanics increases, agent-based models are becoming more common in the modeling community. We review the physical aspects, complexity, shortcomings, and capabilities of the major agent-based model categories: lattice-based models (cellular automata, lattice gas cellular automata, cellular Potts models), off-lattice models (center-based models, deformable cell models, vertex models), and hybrid discrete-continuum models. In this way, we hope to assist future researchers in choosing a model for the phenomenon they want to model and understand. The article also contains some novel results.

  6. Statistics of interacting networks with extreme preferred degrees: Simulation results and theoretical approaches

    NASA Astrophysics Data System (ADS)

    Liu, Wenjia; Schmittmann, Beate; Zia, R. K. P.

    2012-02-01

Network studies have played a central role in understanding many systems in nature - e.g., physical, biological, and social. So far, much of the focus has been on the statistics of networks in isolation, yet many networks in the world are coupled to each other. Recently, we considered this issue in the context of two interacting social networks. In particular, we studied networks with two different preferred degrees, modeling, say, introverts vs. extroverts, with a variety of "rules for engagement." As a first step towards an analytically accessible theory, we restrict our attention to an "extreme scenario": the introverts prefer zero contacts while the extroverts like to befriend everyone in the society. In this "maximally frustrated" system, the degree distributions, as well as the statistics of cross-links (between the two groups), can depend sensitively on how a node (individual) creates/breaks its connections. The simulation results can be reasonably well understood in terms of an approximate theory.
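A minimal sketch of this "extreme scenario" can be written down directly. The update rules below are an assumed simple form (a randomly chosen introvert always cuts a random existing link, an extrovert always adds a random missing one), and the node and step counts are illustrative:

```python
import random

def simulate_xie(n_intro, n_extro, steps, seed=0):
    """Maximally frustrated two-group network: introverts prefer zero
    contacts (always cut a random link), extroverts prefer maximal
    contact (always add a random missing link)."""
    rng = random.Random(seed)
    nodes = list(range(n_intro + n_extro))
    adj = {v: set() for v in nodes}        # start with no links
    for _ in range(steps):
        v = rng.choice(nodes)
        if v < n_intro:                    # introvert: cut a link, if any
            if adj[v]:
                w = rng.choice(sorted(adj[v]))
                adj[v].discard(w)
                adj[w].discard(v)
        else:                              # extrovert: add a missing link
            candidates = [w for w in nodes if w != v and w not in adj[v]]
            if candidates:
                w = rng.choice(candidates)
                adj[v].add(w)
                adj[w].add(v)
    return adj

adj = simulate_xie(n_intro=10, n_extro=10, steps=20000)
# Cross-links (introvert-extrovert) keep fluctuating; extrovert-extrovert
# links, which nothing ever removes, saturate at n_extro*(n_extro-1)/2 = 45.
cross_links = sum(1 for v in range(10) for w in adj[v] if w >= 10)
ee_links = sum(1 for v in range(10, 20) for w in adj[v] if w > v)
```

Because nothing removes extrovert-extrovert links or creates introvert-introvert ones, those sectors pin at their extremes and only the cross-links fluctuate - the quantity whose statistics the study focuses on.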

  7. Comparison of theoretical and simulated performance results for sloppy-slotted Aloha signaling

    NASA Astrophysics Data System (ADS)

    Crozier, Stewart N.

    Sloppy-slotted Aloha refers to a form of random access signaling which allows slotted packets, with random timing errors, to spill over into adjacent slots. For the North American mobile satellite (MSAT) system, the two-way propagation delay variation is on the order of 40 milliseconds. The higher the signaling rate, or the shorter the packet length, the wider the timing error distribution, measured in packet lengths. With 192 transmission bits per packet, a 40 millisecond timing error corresponds to 2 packet lengths at 9600 bits per second. Approximate theoretical and simulated performance results are presented and compared for a mixed Gaussian discrete timing error distribution model. This model allows a fraction of the users to have corrected timing. It is found that the theoretical approximations are generally quite accurate. Where differences are observed, the theoretical approximations are always found to be pessimistic. The conclusion is that the theoretical approximations can be used with confidence as a conservative measure of performance.
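The degradation caused by slot spill-over can be reproduced with a small Monte Carlo sketch. This uses purely Gaussian timing errors and illustrative parameters, rather than the paper's mixed Gaussian/discrete model with a corrected-timing fraction:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's method for Poisson-distributed packet counts per slot."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def throughput(G, sigma, n_slots=20000, seed=1):
    """Sloppy-slotted Aloha: unit-length packets aimed at integer slot
    boundaries but offset by Gaussian timing error (sigma in packet
    lengths). A packet succeeds iff no other packet overlaps it in time."""
    rng = random.Random(seed)
    starts = []
    for slot in range(n_slots):
        for _ in range(poisson(rng, G)):
            starts.append(slot + rng.gauss(0.0, sigma))
    starts.sort()
    ok = 0
    for i, s in enumerate(starts):         # only sorted neighbors can overlap
        left_ok = i == 0 or s - starts[i - 1] >= 1.0
        right_ok = i == len(starts) - 1 or starts[i + 1] - s >= 1.0
        ok += left_ok and right_ok
    return ok / n_slots

s_slotted = throughput(G=0.5, sigma=0.0)   # ideal slotting
s_sloppy = throughput(G=0.5, sigma=0.5)    # large timing errors
```

With sigma = 0 this reduces to ideal slotted Aloha (throughput G·e^-G ≈ 0.30 at G = 0.5); with errors of half a packet length the slot structure largely washes out and throughput approaches the unslotted value G·e^-2G ≈ 0.18.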

  8. Solar flare model: Comparison of the results of numerical simulations and observations

    NASA Astrophysics Data System (ADS)

    Podgorny, I. M.; Vashenyuk, E. V.; Podgorny, A. I.

    2009-12-01

The electrodynamic flare model is based on numerical 3D simulations with the real magnetic field of an active region. An energy of ~10^32 erg necessary for a solar flare is shown to accumulate in the magnetic field of a coronal current sheet. The thermal X-ray source in the corona results from plasma heating in the current sheet upon reconnection. The hard X-ray sources are located on the solar surface at the loop foot-points. They are produced by the precipitation of electron beams accelerated in field-aligned currents. Solar cosmic rays appear upon acceleration in the electric field along a singular magnetic X-type line. The generation mechanism of the delayed cosmic-ray component is also discussed.

  9. Experimental and simulation study results for video landmark acquisition and tracking technology

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Tietz, J. C.; Thomas, H. M.; Lowrie, J. W.

    1979-01-01

    A synopsis of related Earth observation technology is provided and includes surface-feature tracking, generic feature classification and landmark identification, and navigation by multicolor correlation. With the advent of the Space Shuttle era, the NASA role takes on new significance in that one can now conceive of dedicated Earth resources missions. Space Shuttle also provides a unique test bed for evaluating advanced sensor technology like that described in this report. As a result of this type of rationale, the FILE OSTA-1 Shuttle experiment, which grew out of the Video Landmark Acquisition and Tracking (VILAT) activity, was developed and is described in this report along with the relevant tradeoffs. In addition, a synopsis of FILE computer simulation activity is included. This synopsis relates to future required capabilities such as landmark registration, reacquisition, and tracking.

  10. Multiple Frequency Contrast Source Inversion Method for Vertical Electromagnetic Profiling: 2D Simulation Results and Analyses

    NASA Astrophysics Data System (ADS)

    Li, Jinghe; Song, Linping; Liu, Qing Huo

    2016-02-01

A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver of the 2D volume integral equation for the forward computation. The inversion technique combines the efficient FFT algorithm, which speeds up the matrix-vector multiplication, with the stable convergence of the simultaneous multiple frequency CSI in the iteration process. As a result, this method is capable of effective quantitative conductivity image reconstruction for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples are presented to validate the effectiveness and capacity of the simultaneous multiple frequency CSI method for a limited array view in VEP.
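The core of the BCGS-FFT speedup is that a convolution-type (Toeplitz) operator, like the discretized volume integral equation, can be applied in O(N log N) with FFTs instead of O(N²) as a dense matrix. A 1D sketch of that matrix-vector product via circulant embedding (the paper's kernel is a 2D electromagnetic Green's function; this shows only the mechanism):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
kernel = rng.standard_normal(2 * n - 1)    # g[i-j] for i-j in [-(n-1), n-1]
x = rng.standard_normal(n)

# Dense O(N^2) Toeplitz matrix-vector product.
T = np.array([[kernel[i - j + n - 1] for j in range(n)] for i in range(n)])
y_dense = T @ x

# Same product in O(N log N): embed T in a circulant of size 2n-1, whose
# action is diagonalized by the FFT.
m = 2 * n - 1
c = np.concatenate([kernel[n - 1:], kernel[:n - 1]])  # first circulant column
y_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, m)).real[:n]
```

Inside a Krylov iteration such as stabilized BiCG, every matrix-vector product is replaced by this FFT form, which is where the large-scale capability comes from.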

  11. Test Results From a Direct Drive Gas Reactor Simulator Coupled to a Brayton Power Conversion Unit

    NASA Technical Reports Server (NTRS)

    Hervol, David S.; Briggs, Maxwell H.; Owen, Albert K.; Bragg-Sitton, Shannon M.

    2009-01-01

The Brayton Power Conversion Unit (BPCU) located at NASA Glenn Research Center (GRC) in Cleveland, OH is a closed cycle system incorporating a turboalternator, recuperator, and gas cooler connected by gas ducts to an external gas heater. For this series of tests, the BPCU was modified by replacing the gas heater with the Direct Drive Gas heater, or DDG. The DDG uses electric resistance heaters to simulate a fast spectrum nuclear reactor similar to those proposed for space power applications. The combined system thermal transient behavior was the focus of these tests. The BPCU was operated at various steady state points. At each point it was subjected to transient changes involving shaft rotational speed or DDG electrical input. This paper outlines the changes made to the test unit and describes the testing that took place along with the test results.

  12. Biofilm formation and control in a simulated spacecraft water system - Two-year results

    NASA Technical Reports Server (NTRS)

    Schultz, John R.; Taylor, Robert D.; Flanagan, David T.; Carr, Sandra E.; Bruce, Rebekah J.; Svoboda, Judy V.; Huls, M. H.; Sauer, Richard L.; Pierson, Duane L.

    1991-01-01

    The ability of iodine to maintain microbial water quality in a simulated spacecraft water system is being studied. An iodine level of about 2.0 mg/L is maintained by passing ultrapure influent water through an iodinated ion exchange resin. Six liters are withdrawn daily and the chemical and microbial quality of the water is monitored regularly. Stainless steel coupons used to monitor biofilm formation are being analyzed by culture methods, epifluorescence microscopy, and scanning electron microscopy. Results from the first two years of operation show a single episode of high bacterial colony counts in the iodinated system. This growth was apparently controlled by replacing the iodinated ion exchange resin. Scanning electron microscopy indicates that the iodine has limited but not completely eliminated the formation of biofilm during the first two years of operation. Significant microbial contamination has been present continuously in a parallel noniodinated system since the third week of operation.

  13. Comparison of road load simulator test results with track tests on electric vehicle propulsion system

    NASA Technical Reports Server (NTRS)

    Dustin, M. O.

    1983-01-01

    A special-purpose dynamometer, the road load simulator (RLS), is being used at NASA's Lewis Research Center to test and evaluate electric vehicle propulsion systems developed under DOE's Electric and Hybrid Vehicle Program. To improve correlation between system tests on the RLS and track tests, similar tests were conducted on the same propulsion system on the RLS and on a test track. These tests are compared in this report. Battery current to maintain a constant vehicle speed with a fixed throttle was used for the comparison. Scatter in the data was greater in the track test results. This is attributable to variations in tire rolling resistance and wind effects in the track data. It also appeared that the RLS road load, determined by coastdown tests on the track, was lower than that of the vehicle on the track. These differences may be due to differences in tire temperature.

  14. Inverse Comptonization in a Two Component Advective Flow: Results of a Monte Carlo simulation

    SciTech Connect

    Ghosh, Himadri; Chakrabarti, S. K.; Laurent, Philippe

    2008-10-08

We compute the resultant spectrum due to multiple scattering of soft photons emitted from a Keplerian disk by thermal electrons inside a torus axisymmetrically placed around a black hole. In the two component advective flow model, the post-shock region is similar to a thick accretion disk, and the pre-shock sub-Keplerian flow is highly optically thin. As a preliminary run of the Monte Carlo simulation of the system, we assume the CENBOL to be a small (2-14 r_g) thick accretion disk without a cusp, to allow bulk motion of the flow. Bulk Motion Comptonization (BMC) has also been added. We show that the spectral behaviour is very similar to what is predicted in Chakrabarti and Titarchuk (1995).

  15. Barred Galaxy Photometry: Comparing results from the Cananea sample with N-body simulations

    NASA Astrophysics Data System (ADS)

    Athanassoula, E.; Gadotti, D. A.; Carrasco, L.; Bosma, A.; de Souza, R. E.; Recillas, E.

    2009-11-01

We compare the results of the photometrical analysis of barred galaxies with those of a similar analysis of N-body simulations. The photometry is for a sample of nine barred galaxies observed in the J and K_s bands with the CANICA near-infrared (NIR) camera at the 2.1 m telescope of the Observatorio Astrofísico Guillermo Haro (OAGH) in Cananea, Sonora, Mexico. The comparison includes radial ellipticity profiles and surface brightness (density for the N-body galaxies) profiles along the bar major and minor axes. We find very good agreement, arguing that the exchange of angular momentum within the galaxy plays a determinant role in the evolution of barred galaxies.

  16. Recent Simulation Results on Ring Current Dynamics Using the Comprehensive Ring Current Model

    NASA Technical Reports Server (NTRS)

    Zheng, Yihua; Zaharia, Sorin G.; Lui, Anthony T. Y.; Fok, Mei-Ching

    2010-01-01

Plasma sheet conditions and electromagnetic field configurations are both crucial in determining ring current evolution and its connection to the ionosphere. In this presentation, we investigate how different conditions of the plasma sheet distribution affect ring current properties. Results include comparative studies in 1) varying the radial distance of the plasma sheet boundary; 2) varying the local time distribution of the source population; and 3) varying the source spectra. Our results show that a source located farther away leads to a stronger ring current than a source that is closer to the Earth. The local time distribution of the source plays an important role in determining both the radial and azimuthal (local time) location of the ring current peak pressure. We found that post-midnight source locations generally lead to a stronger ring current. This finding is in agreement with Lavraud et al. [2008]. However, our results do not exhibit any simple dependence of the local time distribution of the peak ring current (within the lower energy range) on the local time distribution of the source, as suggested by Lavraud et al. [2008]. In addition, we will show how different specifications of the magnetic field in the simulation domain affect ring current dynamics in reference to the 20 November 2007 storm, including initial results on coupling the CRCM with a three-dimensional (3-D) plasma force balance code to achieve self-consistency in the magnetic field.

  17. Simulated microgravity inhibits the proliferation of K562 erythroleukemia cells but does not result in apoptosis

    NASA Astrophysics Data System (ADS)

    Yi, Zong-Chun; Xia, Bing; Xue, Ming; Zhang, Guang-Yao; Wang, Hong; Zhou, Hui-Min; Sun, Yan; Zhuang, Feng-Yuan

    2009-07-01

Astronauts and experimental animals in space develop the anemia of space flight, but the underlying mechanisms are still unclear. In this study, the impact of simulated microgravity on the proliferation, cell death, cell cycle progression, and cytoskeleton of erythroid progenitor-like K562 leukemia cells was observed. K562 cells were cultured in the NASA Rotary Cell Culture System (RCCS), which was used to simulate microgravity (at 15 rpm). After culture for 24 h, 48 h, 72 h, and 96 h, the cell densities in the RCCS were only 55.5%, 54.3%, 67.2% and 66.4% of those of the flask-cultured control cells, respectively. The percentages of trypan blue-stained dead cells and of apoptotic cells showed no difference between RCCS-cultured and flask-cultured cells at any time point (from 12 h to 96 h). Compared with flask-cultured cells, RCCS culture induced an accumulation of cells in S phase, concomitant with a decrease in the G0/G1 and G2/M phases, at 12 h. From 24 h to 60 h, however, the cell cycle distribution of RCCS-cultured cells no longer differed from that of flask-cultured cells. Consistent with the changes in cell cycle distribution, the levels of intracellular cyclins in RCCS-cultured cells changed at 12 h, with a decrease in cyclin A and increases in cyclins B, D1, and E, and then began to return to control levels from 24 h to 36 h. After RCCS culture for 12-36 h, the microfilaments showed an uneven, clustered distribution, and the microtubules were highly disorganized. These results indicate that RCCS-simulated microgravity can induce a transient inhibition of proliferation without causing apoptosis, which could be involved in the development of space flight anemia. K562 cells could be a useful model for studying the effects of microgravity on the differentiation and proliferation of hematopoietic cells.

  18. DEM Simulated Results And Seismic Interpretation of the Red River Fault Displacements in Vietnam

    NASA Astrophysics Data System (ADS)

    Bui, H. T.; Yamada, Y.; Matsuoka, T.

    2005-12-01

The Song Hong basin is the largest Tertiary sedimentary basin in Viet Nam. It formed approximately 32 Ma ago, when the left-lateral displacement of the Red River Fault commenced. Research on the structure, formation, and tectonic evolution of the Song Hong basin has been carried out for a long time, but several problems remain under discussion, such as the magnitude of the displacements, the magnitude of movement along the faults, and the timing of tectonic inversion and right-lateral displacement. In particular, the mechanism of Song Hong basin formation in response to the activity of the Red River fault remains controversial, with many competing hypotheses. In this paper, PFC2D, based on the Distinct Element Method (DEM), was used to simulate the development of the Red River fault system, which controlled the development of the Song Hong basin from onshore to its elongated offshore portion. The numerical results delineate the different parts of the stress field - compressional, stress-free, and pull-apart zones - of the dynamic mechanism along the Red River fault in the onshore area. Offshore, the deformation is partitioned into two main branch faults, corresponding to the Song Chay and Song Lo fault systems, which bound the east and west flanks of the Song Hong basin. The simulation also reproduced the left-lateral displacement of the Red River fault since its onset. Although this is the first time the DEM has been applied to study the deformation and geodynamic evolution of the Song Hong basin, the results proved reliably applicable to evaluating its structural configuration.

  19. Results of Aging Tests of Vendor-Produced Blended Feed Simulant

    SciTech Connect

    Russell, Renee L.; Buchmiller, William C.; Cantrell, Kirk J.; Peterson, Reid A.; Rinehart, Donald E.

    2009-04-21

The Hanford Tank Waste Treatment and Immobilization Plant (WTP) is procuring through Pacific Northwest National Laboratory (PNNL) a minimum of five 3,500-gallon batches of waste simulant for Phase 1 testing in the Pretreatment Engineering Platform (PEP). To ensure that the quality of the simulant was acceptable, the production method was scaled up from laboratory-prepared simulant through 15-gallon and 250-gallon vendor-prepared simulant before embarking on the production of the 3500-gallon simulant batch by the vendor. The 3500-gallon PEP simulant batches were packaged in 250-gallon high molecular weight polyethylene totes at NOAH Technologies. The simulant was stored in an environmentally controlled warehouse at NOAH Technologies before blending or shipping. For the 15-gallon, 250-gallon, and 3500-gallon batch 0, the simulant was shipped in ambient-temperature trucks, with shipment requiring nominally 3 days. The 3500-gallon batch 1 traveled in a 70-75°F temperature-controlled truck. Typically the simulant was unloaded into a PEP receiving tank within 24 hours of receipt; the first unloading required longer, with the simulant stored outside in the interim. Physical and chemical characterization of the 250-gallon batch was necessary to determine the effect of aging on the simulant in transit from the vendor and in storage before its use in the PEP. Therefore, aging tests were conducted on the 250-gallon batch of the vendor-produced PEP blended feed simulant to identify any changes to the physical characteristics of the simulant during storage. The supernate was also chemically characterized. Four aging scenarios for the vendor-produced blended simulant were studied: 1) stored outside in a 250-gallon tote, 2) stored inside in a gallon plastic bottle, 3) stored inside in a well-mixed 5-L tank, and 4) subjected to extended temperature cycling under summer temperature conditions in a gallon plastic bottle. The following

  20. Research on an expert system for database operation of simulation-emulation math models. Volume 1, Phase 1: Results

    NASA Technical Reports Server (NTRS)

    Kawamura, K.; Beale, G. O.; Schaffer, J. D.; Hsieh, B. J.; Padalkar, S.; Rodriguez-Moscoso, J. J.

    1985-01-01

The results of the first phase of Research on an Expert System for Database Operation of Simulation/Emulation Math Models are described. Techniques from artificial intelligence (AI) were brought to bear on task domains of interest to NASA Marshall Space Flight Center. One such domain is the simulation of spacecraft attitude control systems. Two related software systems were developed and delivered to NASA. One was a generic simulation model for spacecraft attitude control, written in FORTRAN. The second was an expert system which understands the usage of a class of spacecraft attitude control simulation software and can assist the user in running the software. This NASA Expert Simulation System (NESS), written in LISP, contains general knowledge about digital simulation, specific knowledge about the simulation software, and self-knowledge.

  1. Ground Truth Accuracy Tests of GPS Seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Oberlander, D. J.; Davis, J. L.; Baena, R.; Ekstrom, G.

    2005-12-01

As the precision of GPS determinations of site position continues to improve, the detection of smaller and faster geophysical signals becomes possible. However, the lack of independent measurements of these signals often precludes an assessment of the accuracy of such GPS position determinations. This may be particularly true for high-rate GPS applications. We have built an apparatus to assess the accuracy of GPS position determinations for high-rate applications, in particular the application known as "GPS seismology." The apparatus consists of a bidirectional, single-axis positioning table coupled to a digitally controlled stepping motor. The motor, in turn, is connected to a Field Programmable Gate Array (FPGA) chip that synchronously sequences through real historical earthquake profiles stored in Erasable Programmable Read-Only Memories (EPROMs). A GPS antenna attached to this positioning table undergoes the simulated seismic motions of the Earth's surface while collecting high-rate GPS data. The time-dependent position estimates can then be compared to the "ground truth," and the resultant GPS error spectrum can be measured. We have made extensive measurements with this system while inducing simulated seismic motions in either the horizontal plane or the vertical axis. A second, stationary GPS antenna at a distance of several meters simultaneously collected high-rate (5 Hz) GPS data. We will present the calibration of this system, describe the GPS observations and data analysis, and assess the accuracy of GPS for high-rate geophysical applications and natural hazards mitigation.
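With synchronized records of the table-commanded motion ("ground truth") and the GPS estimates, the error spectrum is simply the spectrum of the residual. A sketch with synthetic numbers (the 0.2 Hz, 1 cm motion and the 3 mm white-noise level are illustrative assumptions, not calibration results from this work):

```python
import numpy as np

fs = 5.0                                    # 5 Hz GPS sampling
t = np.arange(0.0, 120.0, 1.0 / fs)         # two-minute record
truth = 0.01 * np.sin(2 * np.pi * 0.2 * t)  # table motion: 1 cm at 0.2 Hz
rng = np.random.default_rng(0)
gps = truth + 0.003 * rng.standard_normal(t.size)  # 3 mm white noise (assumed)

resid = gps - truth                         # error relative to ground truth
freq = np.fft.rfftfreq(t.size, d=1.0 / fs)  # one-sided frequency axis (Hz)
psd = np.abs(np.fft.rfft(resid)) ** 2 / (fs * t.size)  # error spectrum, m^2/Hz
rms = np.sqrt(np.mean(resid ** 2))          # overall RMS position error, m
```

For white noise the error spectrum is flat out to the 2.5 Hz Nyquist frequency; colored structure in a real residual (e.g., multipath or filter dynamics) would show up as peaks or slopes in `psd`.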

  2. Free-Flight Test Results of Scale Models Simulating Viking Parachute/Lander Staging

    NASA Technical Reports Server (NTRS)

    Polutchko, Robert J.

    1973-01-01

This report presents the results of Viking Aerothermodynamics Test D4-34.0. Motion picture coverage of a number of scale-model drop tests provides the data from which time-position characteristics as well as canopy shape and model system attitudes are measured. These data are processed to obtain the instantaneous drag during staging of a model simulating the Viking decelerator system during parachute staging at Mars. Through scaling laws derived prior to the test (Appendixes A and B), these results are used to predict the performance of the Viking decelerator parachute during staging at Mars. The tests were performed at the NASA/Kennedy Space Center (KSC) Vertical Assembly Building (VAB). Model assemblies were dropped 300 feet to a platform in High Bay No. 3. The data consist of an edited master film (negative), which is on permanent file in the NASA/LRC Library. Principal results of this investigation indicate that for Viking parachute staging at Mars: 1. Parachute staging separation distance is always positive and continuously increasing, generally along the descent path. 2. At staging, the parachute drag coefficient is at least 55% of its pre-stage equilibrium value. One quarter minute later, it has recovered to its pre-stage value.

  3. INPRES (intraoperative presentation of surgical planning and simulation results): augmented reality for craniofacial surgery

    NASA Astrophysics Data System (ADS)

    Salb, Tobias; Brief, Jakob; Welzel, Thomas; Giesler, Bjoern; Hassfeld, Steffan; Muehling, Joachim; Dillmann, Ruediger

    2003-05-01

In this paper we present recent developments and pre-clinical validation results of our approach for augmented reality (AR, for short) in craniofacial surgery. A commercial Sony Glasstron display is used for optical see-through overlay of surgical planning and simulation results onto a patient inside the operating room (OR). For tracking the glasses, the patient, and various medical instruments, an NDI Polaris system is used as the standard solution. A complementary inside-out navigation approach has been realized with a panoramic camera. This device is mounted on the head of the surgeon for tracking of fiducials placed on the walls of the OR. Further tasks described include the calibration of the head-mounted display (HMD), the registration of virtual objects with the real world, and the detection of occlusions in the object overlay with the help of two miniature CCD cameras. The evaluation of our work took place in the laboratory environment and showed promising results. Future work will concentrate on the optimization of the technical features of the prototype and on the development of a system for everyday clinical use.

  4. CZT detectors used in different irradiation geometries: Simulations and experimental results

    SciTech Connect

    Fritz, Shannon G.; Shikhaliev, Polad M.

    2009-04-15

The purpose of this work was to evaluate potential advantages and limitations of CZT detectors used in surface-on, edge-on, and tilted angle irradiation geometries. Simulations and experimental investigations of the energy spectrum measured by a CZT detector have been performed using different irradiation geometries of the CZT. Experiments were performed using a CZT detector with 10×10 mm² size and 3 mm thickness. The detector was irradiated with collimated photon beams from Am-241 (59.5 keV) and Co-57 (122 keV). The edge-scan method was used to measure the detector response function in edge-on illumination mode. The tilted angle mode was investigated with the radiation beam directed to the detector surface at angles of 90°, 15°, and 10°. The Hecht formalism was used to simulate theoretical energy spectra. The parameters used for the simulations were matched to experiment to compare experimental and theoretical results. The tilted angle CZT detector suppressed the tailing of the spectrum and provided an increase in peak-to-total ratio from 38% at 90° to 83% at a 10° tilt angle for 122 keV radiation. The corresponding increase for 59 keV radiation was from 60% at 90° to 85% at a 10° tilt angle. The edge-on CZT detector provided high energy resolution when the beam thickness was much smaller than the thickness of the CZT. The FWHM resolution in edge-on illumination mode was 4.2% for a 122 keV beam with 0.3 mm thickness, and rapidly deteriorated when the thickness of the beam was increased. The energy resolution of the surface-on geometry suffered from a strong tailing effect at photon energies higher than 60 keV. It is concluded that the tilted angle CZT provides high energy resolution but is limited to a 1D linear array configuration. The surface-on CZT provides 2D pixel arrays but suffers from tailing effect and charge build-up. The edge-on CZT is considered suboptimal as it requires small beam
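The Hecht formalism mentioned above relates the collected charge to the interaction depth through the electron and hole drift lengths. A sketch using typical CZT mobility-lifetime products (the parameter values are illustrative, not the ones fitted in this work):

```python
import math

def hecht_cce(x_mm, L_mm, mu_tau_e, mu_tau_h, E_V_per_mm):
    """Hecht relation: charge-collection efficiency for an interaction at
    depth x from the cathode of a planar detector of thickness L.
    Electrons drift the remaining L - x to the anode; holes drift x back."""
    lam_e = mu_tau_e * E_V_per_mm           # electron drift length (mm)
    lam_h = mu_tau_h * E_V_per_mm           # hole drift length (mm)
    cce_e = (lam_e / L_mm) * (1.0 - math.exp(-(L_mm - x_mm) / lam_e))
    cce_h = (lam_h / L_mm) * (1.0 - math.exp(-x_mm / lam_h))
    return cce_e + cce_h

# Typical CZT values (illustrative): mu*tau_e ~ 0.1 mm^2/V, mu*tau_h ~
# 1e-3 mm^2/V, ~100 V/mm bias across a 3 mm planar detector.
near_cathode = hecht_cce(0.1, 3.0, 0.1, 1e-3, 100.0)
near_anode = hecht_cce(2.9, 3.0, 0.1, 1e-3, 100.0)
```

Because the hole drift length is tiny compared with the detector thickness, events near the anode collect far less charge than events near the cathode; this depth dependence is the origin of the low-energy tailing that the tilted-angle and edge-on geometries suppress.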

  5. Wolter X-Ray Microscope Computed Tomography Ray-Trace Model with Preliminary Simulation Results

    SciTech Connect

    Jackson, J A

    2006-02-27

    code, (5) description of the modeling code, (6) the results of a number of preliminary imaging simulations, and (7) recommendations for future Wolter designs and for further modeling studies.

  6. Near-Infrared Spectroscopic Measurements of Calf Muscle during Walking at Simulated Reduced Gravity - Preliminary Results

    NASA Technical Reports Server (NTRS)

    Ellerby, Gwenn E. C.; Lee, Stuart M. C.; Stroud, Leah; Norcross, Jason; Gernhardt, Michael; Soller, Babs R.

    2008-01-01

Considerations for lunar and planetary exploration space suit design can be enhanced by investigating the physiologic responses of individual muscles during locomotion in reduced gravity. Near-infrared spectroscopy (NIRS) provides a non-invasive method to study the physiology of individual muscles in ambulatory subjects during reduced gravity simulations. PURPOSE: To investigate calf muscle oxygen saturation (SmO2) and pH during reduced gravity walking at varying treadmill inclines and added mass conditions using NIRS. METHODS: Four male subjects aged 42.3 +/- 1.7 years (mean +/- SE) and weighing 77.9 +/- 2.4 kg walked at a moderate speed (3.2 +/- 0.2 km/h) on a treadmill at inclines of 0, 10, 20, and 30%. Unsuited subjects were attached to a partial gravity simulator which unloaded the subject to simulate body weight plus the additional weight of a space suit (121 kg) in lunar gravity (0.17 G). Masses of 0, 11, 23, and 34 kg were added to the subject and then unloaded to maintain constant weight. Spectra were collected from the lateral gastrocnemius (LG), and SmO2 and pH were calculated using previously published methods (Yang et al. 2007, Optics Express; Soller et al. 2008, J Appl Physiol). The effects of incline and added mass on SmO2 and pH were analyzed through repeated measures ANOVA. RESULTS: SmO2 and pH were both unchanged by added mass (p>0.05), so data from trials at the same incline were averaged. LG SmO2 decreased significantly with increasing incline (p=0.003), from 61.1 +/- 2.0% at 0% incline to 48.7 +/- 2.6% at 30% incline, while pH was unchanged by incline (p=0.12). CONCLUSION: Increasing the incline (and thus the work performed) during walking causes the LG to extract more oxygen from the blood supply, presumably to support the increased metabolic cost of uphill walking. The lack of an effect of incline on pH may indicate that, while the intensity of exercise has increased, the LG has not reached a level of work above the anaerobic threshold. In these

  7. A rainfall simulation experiment on soil and water conservation measures - Undesirable results

    NASA Astrophysics Data System (ADS)

    Hösl, R.; Strauss, P.

    2012-04-01

    Sediment and nutrient inputs from agriculturally used land into surface waters are one of the main problems concerning surface water quality. On-site soil and water conservation measures have become increasingly popular over recent decades, and much research has been done on this issue. Numerous rainfall simulation experiments have tested different conservation measures, such as no till, mulching with different types of soil cover, and sub-soiling practices. Many studies document greater or lesser success in preventing soil erosion and enhancing water quality by implementing no-till and mulching techniques on farmland, but a few studies also indicate higher erosion rates after implementation of conservation tillage practices (Strauss et al., 2003). In May 2011 we conducted a field rainfall simulation experiment in Upper Austria to test 5 different maize cultivation techniques: no till with rough seedbed, no till with fine seedbed, mulching with disc harrow and rotary harrow, mulching with rotary harrow, and conventional tillage using plough and rotary harrow. Rough seedbed refers to the seedbed preparation at planting of the cover crops. On every plot except the conventionally managed one, cover crops (a mix of Trifolium alexandrinum, Phacelia, Raphanus sativus and Herpestes) were sown in August 2010. All plots were rained on three times with deionised water (<50 μS.cm-1) for one hour at 50 mm.h-1 rainfall intensity. Surface runoff and soil erosion were measured. Additionally, soil cover by mulch was measured, as well as soil texture, bulk density, penetration resistance, surface roughness, and soil water content before and after the simulation. The simulation experiments took place about 2 weeks after seeding of maize in spring 2011. As expected, the most effective cultivation techniques for preventing erosion proved to be the no-till variants: mean erosion rate was about 0.1 kg.h-1, mean surface runoff was 29 l.h-1

  8. WE-D-17A-03: Improvement of Accuracy of Spot-Scanning Proton Beam Delivery for Liver Tumor by Real-Time Tumor-Monitoring and Gating System: A Simulation Study

    SciTech Connect

    Matsuura, T; Shimizu, S; Miyamoto, N; Takao, S; Toramatsu, C; Nihongi, H; Yamada, T; Shirato, H; Fujii, Y; Umezawa, M; Umegaki, K

    2014-06-15

    Purpose: To improve the accuracy of spot-scanning proton beam delivery for targets in motion, a real-time tumor-monitoring and gating system using fluoroscopy images was developed. This study investigates the efficacy of this method for treatment of liver tumors using simulation. Methods: The three-dimensional position of a fiducial marker inserted close to the tumor is calculated in real time, and the proton beam is gated according to the marker's distance from the planned position (Shirato, 2012). Efficient beam delivery is realized even for irregular and sporadic motion signals by employing multiple gated irradiations per operation cycle (Umezawa, 2012). For each of two breath-hold CTs (CTV=14.6cc, 63.1cc), dose distributions were calculated with internal margins corresponding to free-breathing (FB) and real-time gating (RG) with a 2-mm gating window. We applied 8 liver tumor trajectories recorded during real-time tumor-tracking (RTRT) X-ray therapy treatments and 6 initial timings. Dmax/Dmin in CTV, mean liver dose (MLD), and irradiation time to administer a 3 Gy (RBE) dose were estimated assuming rigid motion of targets by using in-house simulation tools and the VQA treatment planning system (Hitachi, Ltd., Tokyo). Results: Dmax/Dmin was degraded by less than 5% compared to the prescribed dose with all motion parameters for the smaller CTV, and by less than 7% for the larger CTV with one exception. Irradiation time showed only a modest increase if RG was used instead of FB; the average value over motion parameters was 113 (FB) and 138 s (RG) for the smaller CTV and 120 (FB) and 207 s (RG) for the larger CTV. In RG, it was within 5 min for all but one trajectory. MLD was markedly decreased, by 14% and 5-6% for the smaller and larger CTVs respectively, if RG was applied. Conclusions: Spot-scanning proton beams were shown to be delivered successfully to liver tumors without much lengthening of treatment time. This research was supported by the Cabinet Office, Government of Japan and the Japan Society for

  9. Initial quality performance results using a phantom to simulate chest computed radiography.

    PubMed

    Muhogora, Wilbroad; Padovani, Renato; Msaki, Peter

    2011-01-01

    The aim of this study was to develop a homemade phantom for quantitative quality control in chest computed radiography (CR). The phantom was constructed from copper, aluminium, and polymethylmethacrylate (PMMA) plates as well as Styrofoam materials. Depending on combinations, the literature suggests that these materials can simulate the attenuation and scattering characteristics of lung, heart, and mediastinum. The lung, heart, and mediastinum regions were simulated by 10 mm x 10 mm x 0.5 mm, 10 mm x 10 mm x 0.5 mm and 10 mm x 10 mm x 1 mm copper plates, respectively. A copper test object of 100 mm x 100 mm and 0.2 mm thickness was positioned in each region for CNR measurements. The phantom was exposed to x-rays generated by different tube potentials that covered settings in clinical use: 110-120 kVp (HVL=4.26-4.66 mm Al) at a source image distance (SID) of 180 cm. An approach similar to the recommended method in digital mammography was applied to determine the CNR values of phantom images produced by a Kodak CR 850A system with post-processing turned off. Subjective contrast-detail studies were also carried out by using images of a Leeds TOR CDR test object acquired under similar exposure conditions as during the CNR measurements. For clinical kVp conditions relevant to chest radiography, the CNR was highest over the 90-100 kVp range. The CNR data correlated with the results of the contrast-detail observations. The values of clinical tube potentials at which CNR is highest are regarded as optimal kVp settings. The simplicity of the phantom construction can allow easy implementation of a related quality control program. PMID:21430855
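    The CNR measurement referenced above follows the usual digital-mammography QC definition: the difference of mean pixel values between a test-object ROI and an adjacent background ROI, divided by the background standard deviation. A minimal sketch (the ROI sizes and pixel values below are synthetic assumptions, not data from the study):

```python
import numpy as np

def cnr(roi_object, roi_background):
    """Contrast-to-noise ratio: |mean difference| over background noise.
    Exact ROI placement and noise estimator vary between protocols;
    this is one common convention, assumed here for illustration."""
    obj = np.asarray(roi_object, dtype=float)
    bg = np.asarray(roi_background, dtype=float)
    return abs(obj.mean() - bg.mean()) / bg.std(ddof=1)

# Synthetic ROIs: the copper test object attenuates more, lowering its mean value
rng = np.random.default_rng(1)
bg = rng.normal(1000.0, 20.0, size=(50, 50))
obj = rng.normal(900.0, 20.0, size=(50, 50))
print(cnr(obj, bg))  # approximately (1000 - 900) / 20 = 5
```

Scanning this quantity across tube potentials, as in the study, identifies the kVp at which the test object is best distinguished from its surroundings.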

  10. Prediction Markets and Beliefs about Climate: Results from Agent-Based Simulations

    NASA Astrophysics Data System (ADS)

    Gilligan, J. M.; John, N. J.; van der Linden, M.

    2015-12-01

    Climate scientists have long been frustrated by the persistent doubts a large portion of the public expresses toward the scientific consensus about anthropogenic global warming. The political and ideological polarization of this doubt led Vandenbergh, Raimi, and Gilligan [1] to propose that prediction markets for climate change might influence the opinions of those who mistrust the scientific community but do trust the power of markets. We have developed an agent-based simulation of a climate prediction market in which traders buy and sell futures contracts that will pay off at some future year with a value that depends on the global average temperature at that time. The traders form a heterogeneous population with different ideological positions, different beliefs about anthropogenic global warming, and different degrees of risk aversion. We also vary characteristics of the market, including the topology of social networks among the traders, the number of traders, and the completeness of the market. Traders adjust their beliefs about climate according to the gains and losses they and other traders in their social network experience. This model predicts that if global temperature is predominantly driven by greenhouse gas concentrations, prediction markets will cause traders' beliefs to converge toward correctly accepting anthropogenic warming as real. This convergence is largely independent of the structure of the market and the characteristics of the population of traders. However, it may take considerable time for beliefs to converge. Conversely, if temperature does not depend on greenhouse gases, the model predicts that traders' beliefs will not converge. We will discuss the policy relevance of these results and, more generally, the use of agent-based market simulations for policy analysis regarding climate change, seasonal agricultural weather forecasts, and other applications. [1] MP Vandenbergh, KT Raimi, & JM Gilligan. UCLA Law Rev. 61, 1962 (2014).
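    The core mechanism described above (traders updating beliefs based on who gains in the market) can be caricatured in a few lines. Everything in this sketch (the payoff rule, the update rule, all parameter values) is an illustrative assumption, not the authors' actual model:

```python
import random

def simulate_market(true_warming, n_traders=100, steps=200, lr=0.05, seed=2):
    """Toy belief-convergence dynamic: traders hold a belief in [0, 1] that
    warming is real; each step, traders whose belief is on the correct side
    of 0.5 'win', and everyone nudges their belief toward the winners' mean."""
    rng = random.Random(seed)
    beliefs = [rng.random() for _ in range(n_traders)]
    outcome = 1.0 if true_warming else 0.0
    for _ in range(steps):
        winners = [b for b in beliefs if abs(b - outcome) < 0.5]
        if not winners:
            continue
        target = sum(winners) / len(winners)
        beliefs = [b + lr * (target - b) for b in beliefs]
    return sum(beliefs) / n_traders

print(simulate_market(True))   # mean belief drifts toward 1
print(simulate_market(False))  # mean belief drifts toward 0
```

Even this crude dynamic shows the qualitative result from the abstract: when payoffs track the true climate driver, heterogeneous initial beliefs converge toward the correct answer, though only gradually.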

  11. SIMULATION RESULTS OF RUNNING THE AGS MMPS, BY STORING ENERGY IN CAPACITOR BANKS.

    SciTech Connect

    MARNERIS, I.

    2006-09-01

    The Brookhaven AGS is a strong focusing accelerator which is used to accelerate protons and various heavy ion species to an equivalent maximum proton energy of 29 GeV. The AGS Main Magnet Power Supply (MMPS) is a thyristor-controlled supply rated at 5500 Amps, +/-9000 Volts. The peak magnet power is 49.5 MW. The power supply is fed from a motor/generator manufactured by Siemens. The motor is rated at 9 MW, input voltage 3 phase 13.8 KV 60 Hz. The generator is rated at 50 MVA; its output voltage is 3 phase 7500 Volts. Thus the peak power requirements come from the stored energy in the rotor of the motor/generator. The rotor changes speed by about +/-2.5% of its nominal speed of 1200 revolutions per minute. The reason the power supply is powered by the generator is that the local power company (LIPA) cannot sustain power swings of +/-50 MW in 0.5 sec if the power supply were to be interfaced directly with the AC lines. The motor/generator is about 45 years old and Siemens will not be manufacturing similar machines in the future. As a result we are looking at different ways of storing energy and being able to utilize it for our application. This paper will present simulations of a power supply where energy is stored in capacitor banks. The simulation program used is called PSIM Version 6.1. The control system of the power supply will also be presented. The average power from LIPA into the power supply will be kept constant during the pulsing of the magnets at +/-50 MW. The reactive power will also be kept constant below 1.5 MVAR. Waveforms will be presented.

  13. From Simulation to Real Robots with Predictable Results: Methods and Examples

    NASA Astrophysics Data System (ADS)

    Balakirsky, S.; Carpin, S.; Dimitoglou, G.; Balaguer, B.

    From a theoretical perspective, one may easily argue (as we will in this chapter) that simulation accelerates the algorithm development cycle. However, in practice many in the robotics development community share the sentiment that “Simulation is doomed to succeed” (Brooks, R., Matarić, M., Robot Learning, Kluwer Academic Press, Hingham, MA, 1993, p. 209). This comes in large part from the fact that many simulation systems are brittle; they do a fair-to-good job of simulating the expected, and fail to simulate the unexpected. It is the authors' belief that a simulation system is only as good as its models, and that deficiencies in these models lead to the majority of these failures. This chapter will attempt to address these deficiencies by presenting a systematic methodology with examples for the development of both simulated mobility models and sensor models for use with one of today's leading simulation engines. Techniques for using simulation for algorithm development leading to real-robot implementation will be presented, as well as opportunities for involvement in international robotics competitions based on these techniques.

  14. Urban Surface Network In Marseille: Network Optimization Using Numerical Simulations and Results

    NASA Astrophysics Data System (ADS)

    Pigeon, G.; Lemonsu, A.; Durand, P.; Masson, V.

    During the ESCOMPTE program (field experiment to constrain models of atmospheric pollution and emissions transport) in Marseille between June and July 2001, an extensive instrument array was deployed to describe the urban boundary layer over the built-up area of Marseille. It notably included a network of 20 temperature and humidity sensors that measured the spatial and temporal variability of these parameters. Before the experiment, the arrangement of the network was optimized to capture the maximum information about these two variabilities. We worked on results of high-resolution simulations containing the TEB scheme, which represents the energy budgets associated with the overall street geometry of each mesh cell. First, a qualitative analysis enabled identification of the characteristic phenomena over the town of Marseille; there are close links between urban effects and local effects such as marine advection and orography. Then, a quantitative analysis of the field was developed: EOFs (empirical orthogonal functions) were used to characterize the spatial and temporal structures of the field evolution. Instrumented axes were determined from these results. Finally, we chose the instrument locations very carefully at the street scale, to prevent micro-climatic effects from interfering with the meso-scale effect of the town. Measurements were recorded every 10 minutes from the 12th of June to the 16th of July. We had no problems with the instruments, so the whole period was recorded at the 10-minute cadence. The data will be analyzed in several ways. First, a temporal study will determine whether the times at which phenomena occur are linked to location within the town, with particular attention to the warming during the morning and the cooling during the evening. Then, we will look for correlations between the temperature and mixing ratio and the wind.

  15. Simulation Results of the Huygens Probe Entry and Descent Trajectory Reconstruction Algorithm

    NASA Technical Reports Server (NTRS)

    Kazeminejad, B.; Atkinson, D. H.; Perez-Ayucar, M.

    2005-01-01

    Cassini/Huygens is a joint NASA/ESA mission to explore the Saturnian system. The ESA Huygens probe is scheduled to be released from the Cassini spacecraft on December 25, 2004, enter the atmosphere of Titan in January 2005, and descend to Titan's surface using a sequence of different parachutes. To correctly interpret and correlate results from the probe science experiments and to provide a reference set of data for "ground-truthing" Orbiter remote sensing measurements, it is essential that the probe entry and descent trajectory reconstruction be performed as early as possible in the postflight data analysis phase. The Huygens Descent Trajectory Working Group (DTWG), a subgroup of the Huygens Science Working Team (HSWT), is responsible for developing a methodology and performing the entry and descent trajectory reconstruction. This paper provides an outline of the trajectory reconstruction methodology, preliminary probe trajectory retrieval test results using a simulated synthetic Huygens dataset developed by the Huygens Project Scientist Team at ESA/ESTEC, and a discussion of strategies for recovery from possible instrument failure.

  16. LSP Simulation and Analytical Results on Electromagnetic Wave Scattering on Coherent Density Structures

    NASA Astrophysics Data System (ADS)

    Sotnikov, V.; Kim, T.; Lundberg, J.; Paraschiv, I.; Mehlhorn, T.

    2014-09-01

    The presence of plasma turbulence can strongly influence propagation properties of electromagnetic signals used for surveillance and communication. In particular, we are interested in the generation of low frequency plasma density irregularities in the form of coherent vortex structures. Interchange or flute type density irregularities in magnetized plasma are associated with Rayleigh-Taylor type instability. These types of density irregularities play an important role in refraction and scattering of high frequency electromagnetic signals propagating in the Earth's ionosphere, in high energy density physics (HEDP), and in many other applications. We will discuss scattering of high frequency electromagnetic waves on low frequency density irregularities due to the presence of vortex density structures associated with interchange instability. We will also present PIC simulation results on EM scattering on vortex type density structures using the LSP code and compare them with analytical results. Acknowledgement: This work was supported by the Air Force Research Laboratory, the Air Force Office of Scientific Research, the Naval Research Laboratory and NNSA/DOE grant no. DE-FC52-06NA27616 at the University of Nevada at Reno.

  17. [Implementation results of emission standards of air pollutants for thermal power plants: a numerical simulation].

    PubMed

    Wang, Zhan-Shan; Pan, Li-Bo

    2014-03-01

    An emission inventory of air pollutants from thermal power plants in the year 2010 was set up. Based on the inventory, air quality under prediction scenarios implementing the 2003-version emission standard and the new emission standard was simulated using Models-3/CMAQ. The concentrations of NO2, SO2, and PM2.5, and the deposition of nitrogen and sulfur in the years 2015 and 2020 were predicted to investigate the regional air quality improvement under the new emission standard. The results showed that the new emission standard could effectively improve air quality in China. Compared with the implementation results of the 2003-version emission standard, by 2015 and 2020, the area with NO2 concentration higher than the standard would be reduced by 53.9% and 55.2%, the area with SO2 concentration higher than the standard would be reduced by 40.0%, the area with nitrogen deposition higher than 1.0 t x km(-2) would be reduced by 75.4% and 77.9%, and the area with sulfur deposition higher than 1.6 t x km(-2) would be reduced by 37.1% and 34.3%, respectively. PMID:24881370

  18. A mathematical model and simulation results of plasma enhanced chemical vapor deposition of silicon nitride films

    NASA Astrophysics Data System (ADS)

    Konakov, S. A.; Krzhizhanovskaya, V. V.

    2015-01-01

    We developed a mathematical model of Plasma Enhanced Chemical Vapor Deposition (PECVD) of silicon nitride thin films from SiH4-NH3-N2-Ar mixture, an important application in modern materials science. Our multiphysics model describes gas dynamics, chemical physics, plasma physics and electrodynamics. The PECVD technology is inherently multiscale, from macroscale processes in the chemical reactor to atomic-scale surface chemistry. Our macroscale model is based on Navier-Stokes equations for a transient laminar flow of a compressible chemically reacting gas mixture, together with the mass transfer and energy balance equations, Poisson equation for electric potential, electrons and ions balance equations. The chemical kinetics model includes 24 species and 58 reactions: 37 in the gas phase and 21 on the surface. A deposition model consists of three stages: adsorption to the surface, diffusion along the surface and embedding of products into the substrate. A new model has been validated on experimental results obtained with the "Plasmalab System 100" reactor. We present the mathematical model and simulation results investigating the influence of flow rate and source gas proportion on silicon nitride film growth rate and chemical composition.

  19. Instability of surface lenticular vortices: results from laboratory experiments and numerical simulations

    NASA Astrophysics Data System (ADS)

    Lahaye, Noé; Paci, Alexandre; Smith, Stefan Llewellyn

    2016-04-01

    We examine the instability of lenticular vortices -- or lenses -- in a stratified rotating fluid. The simplest configuration is one in which the lenses overlay a deep layer and have a free surface, and this can be studied using a two-layer rotating shallow water model. We report results from laboratory experiments and high-resolution direct numerical simulations of the destabilization of vortices with constant potential vorticity, and compare these to a linear stability analysis. The stability properties of the system are governed by two parameters: the typical upper-layer potential vorticity and the size (depth) of the vortex. Good agreement is found between analytical, numerical and experimental results for the growth rate and wavenumber of the instability. The nonlinear saturation of the instability is associated with conversion from potential to kinetic energy and weak emission of gravity waves, giving rise to the formation of coherent vortex multipoles with trapped waves. The impact of flow in the lower layer is examined. In particular, it is shown that the growth rate can be strongly affected and the instability can be suppressed for certain types of weak co-rotating flow.

  20. Results and Lessons Learned from Performance Testing of Humans in Spacesuits in Simulated Reduced Gravity

    NASA Technical Reports Server (NTRS)

    Chappell, Steven P.; Norcross, Jason R.; Gernhardt, Michael L.

    2009-01-01

    NASA's Constellation Program has plans to return to the Moon within the next 10 years. Although reaching the Moon during the Apollo Program was a remarkable human engineering achievement, fewer than 20 extravehicular activities (EVAs) were performed. Current projections indicate that the next lunar exploration program will require thousands of EVAs, which will require spacesuits that are better optimized for human performance. Limited mobility and dexterity, and the position of the center of gravity (CG) are a few of many features of the Apollo suit that required significant crew compensation to accomplish the objectives. Development of a new EVA suit system will ideally result in performance close to or better than that in shirtsleeves at 1 G, i.e., in "a suit that is a pleasure to work in, one that you would want to go out and explore in on your day off." Unlike the Shuttle program, in which only a fraction of the crew perform EVA, the Constellation program will require that all crewmembers be able to perform EVA. As a result, suits must be built to accommodate and optimize performance for a larger range of crew anthropometry, strength, and endurance. To address these concerns, NASA has begun a series of tests to better understand the factors affecting human performance and how to utilize various lunar gravity simulation environments available for testing.

  1. Simulation results of Pulse Shape Discrimination (PSD) for background reduction in INTEGRAL Spectrometer (SPI) germanium detectors

    NASA Technical Reports Server (NTRS)

    Slassi-Sennou, S. A.; Boggs, S. E.; Feffer, P. T.; Lin, R. P.

    1997-01-01

    Pulse Shape Discrimination (PSD) for background reduction will be used in the INTErnational Gamma Ray Astrophysics Laboratory (INTEGRAL) imaging spectrometer (SPI) to improve the sensitivity from 200 keV to 2 MeV. The observation of significant astrophysical gamma ray lines in this energy range is expected, where the dominant component of the background is β- decay in the Ge detectors due to the activation of Ge nuclei by cosmic rays. The sensitivity of the SPI will be improved by rejecting β- decay events while retaining photon events. The PSD technique will distinguish between single and multiple site events. Simulation results of PSD for INTEGRAL-type Ge detectors using a numerical model for pulse shape generation are presented. The model was shown to agree with experimental results for a narrow inner bore closed-end cylindrical detector. Using PSD, a sensitivity improvement factor of the order of 2.4 at 0.8 MeV is expected.

  2. Solar wind-magnetosphere energy coupling function fitting: Results from a global MHD simulation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Han, J. P.; Li, H.; Peng, Z.; Richardson, J. D.

    2014-08-01

    Quantitatively estimating the energy input from the solar wind into the magnetosphere on a global scale is still an observational challenge. We perform three-dimensional magnetohydrodynamic (MHD) simulations to derive the energy coupling function. Based on 240 numerical test runs, the energy coupling function is given by E_in = 3.78×10^7 n_sw^0.24 V_sw^1.47 B_T^0.86 [sin^2.70(θ/2) + 0.25]. We study the correlations between the energy coupling function and a wide variety of magnetospheric activity, such as the indices of Dst, Kp, ap, AE, AU, AL, the polar cap index, and the hemispheric auroral power. The results indicate that this energy coupling function gives better correlations than the ɛ function. This result is also applied to a storm event under northward interplanetary magnetic field conditions. About 13% of the solar wind kinetic energy is transferred into the magnetosphere and about 35% of the input energy is dissipated in the ionosphere, consistent with previous studies.
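    The fitted coupling function can be evaluated directly. The abstract does not state the units of n_sw, V_sw, or B_T, so the conventional solar wind units used below (cm^-3, km/s, nT) and the sample values are assumptions for illustration:

```python
import math

def energy_coupling(n_sw, v_sw, b_t, theta_rad):
    """Empirical solar wind-magnetosphere coupling function fitted from the
    240 MHD test runs described in the abstract.  theta_rad is the IMF clock
    angle in radians; input units follow common convention (assumed here)."""
    return (3.78e7 * n_sw**0.24 * v_sw**1.47 * b_t**0.86
            * (math.sin(theta_rad / 2.0)**2.70 + 0.25))

# Illustrative comparison: quiet solar wind vs. a southward-IMF storm interval
quiet = energy_coupling(5.0, 400.0, 5.0, math.radians(30))
storm = energy_coupling(10.0, 600.0, 15.0, math.radians(180))
print(storm / quiet)  # the storm-time input is many times larger
```

Note the [sin^2.70(θ/2) + 0.25] factor: the additive 0.25 means the coupling never vanishes, even for purely northward IMF (θ = 0), which is consistent with the abstract's finding of substantial energy transfer during a northward-IMF storm event.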

  3. Ozone database in support of CMIP5 simulations: results and corresponding radiative forcing

    NASA Astrophysics Data System (ADS)

    Cionni, I.; Eyring, V.; Lamarque, J. F.; Randel, W. J.; Stevenson, D. S.; Wu, F.; Bodeker, G. E.; Shepherd, T. G.; Shindell, D. T.; Waugh, D. W.

    2011-04-01

    ozone is overestimated in the southern polar latitudes during spring and tropospheric column ozone is slightly underestimated. Vertical profiles of tropospheric ozone are broadly consistent with ozonesondes and in-situ measurements, with some deviations in regions of biomass burning. The tropospheric ozone radiative forcing (RF) from the 1850s to the 2000s is 0.23 W m-2, lower than previous results. The lower value is mainly due to (i) a smaller increase in biomass burning emissions; (ii) a larger influence of stratospheric ozone depletion on upper tropospheric ozone at high southern latitudes; and possibly (iii) a larger influence of clouds (which act to reduce the net forcing) compared to previous radiative forcing calculations. Over the same period, decreases in stratospheric ozone, mainly at high latitudes, produce a RF of -0.08 W m-2, which is more negative than the central Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) value of -0.05 W m-2, but which is within the stated range of -0.15 to +0.05 W m-2. The more negative value is explained by the fact that the regression model simulates significant ozone depletion prior to 1979, in line with the increase in EESC and as confirmed by CCMs, while the AR4 assumed no change in stratospheric RF prior to 1979. A negative RF of similar magnitude persists into the future, although its location shifts from high latitudes to the tropics. This shift is due to increases in polar stratospheric ozone, but decreases in tropical lower stratospheric ozone, related to a strengthening of the Brewer-Dobson circulation, particularly through the latter half of the 21st century. Differences in trends in tropospheric ozone among the four RCPs are mainly driven by different methane concentrations, resulting in a range of tropospheric ozone RFs between 0.4 and 0.1 W m-2 by 2100. 
The ozone dataset described here has been released for the Coupled Model Intercomparison Project (CMIP5) model simulations in net

  4. Ozone database in support of CMIP5 simulations: results and corresponding radiative forcing

    NASA Astrophysics Data System (ADS)

    Cionni, I.; Eyring, V.; Lamarque, J. F.; Randel, W. J.; Stevenson, D. S.; Wu, F.; Bodeker, G. E.; Shepherd, T. G.; Shindell, D. T.; Waugh, D. W.

    2011-11-01

    ozone is overestimated in the southern polar latitudes during spring and tropospheric column ozone is slightly underestimated. Vertical profiles of tropospheric ozone are broadly consistent with ozonesondes and in-situ measurements, with some deviations in regions of biomass burning. The tropospheric ozone radiative forcing (RF) from the 1850s to the 2000s is 0.23 W m-2, lower than previous results. The lower value is mainly due to (i) a smaller increase in biomass burning emissions; (ii) a larger influence of stratospheric ozone depletion on upper tropospheric ozone at high southern latitudes; and possibly (iii) a larger influence of clouds (which act to reduce the net forcing) compared to previous radiative forcing calculations. Over the same period, decreases in stratospheric ozone, mainly at high latitudes, produce a RF of -0.08 W m-2, which is more negative than the central Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) value of -0.05 W m-2, but which is within the stated range of -0.15 to +0.05 W m-2. The more negative value is explained by the fact that the regression model simulates significant ozone depletion prior to 1979, in line with the increase in EESC and as confirmed by CCMs, while the AR4 assumed no change in stratospheric RF prior to 1979. A negative RF of similar magnitude persists into the future, although its location shifts from high latitudes to the tropics. This shift is due to increases in polar stratospheric ozone, but decreases in tropical lower stratospheric ozone, related to a strengthening of the Brewer-Dobson circulation, particularly through the latter half of the 21st century. Differences in trends in tropospheric ozone among the four RCPs are mainly driven by different methane concentrations, resulting in a range of tropospheric ozone RFs between 0.4 and 0.1 W m-2 by 2100. 
The ozone dataset described here has been released for the Coupled Model Intercomparison Project (CMIP5) model simulations in net

  5. Simulation Framework for Rapid Entry, Descent, and Landing (EDL) Analysis, Phase 2 Results

    NASA Technical Reports Server (NTRS)

    Murri, Daniel G.

    2011-01-01

    The NASA Engineering and Safety Center (NESC) was requested to establish the Simulation Framework for Rapid Entry, Descent, and Landing (EDL) Analysis assessment, which involved development of an enhanced simulation architecture using the Program to Optimize Simulated Trajectories II simulation tool. The assessment was requested to enhance the capability of the Agency to provide rapid evaluation of EDL characteristics in systems analysis studies, preliminary design, mission development and execution, and time-critical assessments. Many of the new simulation framework capabilities were developed to support the Agency EDL-Systems Analysis (SA) team that is conducting studies of the technologies and architectures that are required to enable human and higher mass robotic missions to Mars. The findings, observations, and recommendations from the NESC are provided in this report.

  6. Results.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)

  7. Accuracy of Aerodynamic Model Parameters Estimated from Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1997-01-01

An important part of building mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of this accuracy, the parameter estimates themselves have limited value. An expression is developed for computing quantitatively correct parameter accuracy measures for maximum likelihood parameter estimates when the output residuals are colored. This result is important because experience in analyzing flight test data reveals that the output residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Monte Carlo simulation runs were used to show that parameter accuracy measures from the new technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for correction factors or frequency domain analysis of the output residuals. The technique was applied to flight test data from repeated maneuvers flown on the F-18 High Alpha Research Vehicle. As in the simulated cases, parameter accuracy measures from the new technique were in agreement with the scatter in the parameter estimates from repeated maneuvers, whereas conventional parameter accuracy measures were optimistic.
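The core check described in this abstract — comparing analytically computed parameter accuracy measures with the empirical scatter of Monte Carlo estimates — can be sketched in a few lines. The snippet below is a generic illustration only (ordinary least squares with synthetic AR(1) residuals, not the authors' maximum likelihood formulation or their correction): it shows how the naive white-noise standard error becomes optimistic when the residuals are colored, which is exactly the failure mode the paper addresses.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 200, 1000
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([x, np.ones(n)])   # design matrix for y = a*x + b
a_true, b_true = 2.0, -1.0
phi, sigma = 0.9, 0.1                  # AR(1) coefficient and innovation std

slopes, naive_se = [], []
for _ in range(trials):
    # colored (AR(1)) residuals, mimicking typical output-error coloring
    e = np.zeros(n)
    for k in range(1, n):
        e[k] = phi * e[k - 1] + sigma * rng.standard_normal()
    y = a_true * x + b_true + e
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    s2 = r @ r / (n - 2)               # residual variance estimate
    cov = s2 * np.linalg.inv(X.T @ X)  # naive covariance (assumes white residuals)
    slopes.append(beta[0])
    naive_se.append(np.sqrt(cov[0, 0]))

empirical = np.std(slopes)             # actual scatter of the slope estimate
naive = np.mean(naive_se)              # what the white-noise formula claims
print(f"empirical scatter: {empirical:.4f}, naive SE: {naive:.4f}")
# The naive SE is markedly smaller than the empirical scatter, i.e. optimistic.
```

With strongly colored noise (phi = 0.9) the white-noise standard error understates the true scatter by a large factor, which is why corrected accuracy measures matter.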

  8. Chemical and Mechanical Alteration of Fractures: Micro-Scale Simulations and Comparison to Experimental Results

    NASA Astrophysics Data System (ADS)

    Ameli, P.; Detwiler, R. L.; Elkhoury, J. E.; Morris, J. P.

    2012-12-01

surfaces to shift away from the equilibrium location. We apply a relative rotation of the fracture surfaces to preserve force equilibrium during each iteration. The results of the model are compared with flow-through experiments conducted on fractured limestone cores and on analogue rough-surfaced KDP-glass fractures. The fracture apertures are mapped before, during (for some) and after the experiments. These detailed aperture measurements are used as input to our new coupled model. The experiments cover a wide range of transport and reaction conditions; some exhibit permeability increase due to channel formation and others exhibit fracture closure due to deformation of contacting asperities. Simulation results predict these general trends as well as the small-scale details in regions of contacting asperities. [Figure: an example of an aperture field under chemical and mechanical alteration; the color scale is in microns.]

  9. Do tanning salons adhere to new legal regulations? Results of a simulated client trial in Germany.

    PubMed

    Möllers, Tobias; Pischke, Claudia R; Zeeb, Hajo

    2016-03-01

In August 2009 and January 2012, two regulations were passed in Germany to limit UV exposure in the general population. These regulations state that no minors are allowed to use tanning devices. Personnel of tanning salons are mandated to offer counseling regarding individual skin type, to create a dosage plan with the customer and to provide a list describing harmful effects of UV radiation. Furthermore, a poster of warning criteria has to be visible and readable at all times inside the tanning salon. It is unclear whether these regulations are followed by employees of tanning salons in Germany, and we are not aware of any studies examining the implementation of the regulations at individual salons. We performed a simulated client study visiting 20 tanning salons in the city-state of Bremen in the year 2014, using a short checklist of criteria derived from the legal requirements, to evaluate whether legal requirements were followed or not. We found that only 20% of the tanning salons communicated adverse health effects of UV radiation in visible posters and other materials and that only 60% of the salons offered the required determination of the skin type to customers. In addition, only 60% of the salons offered to complete the required dosage plan with their customers. To conclude, our results suggest that the new regulations are insufficiently implemented in Bremen. Additional control mechanisms appear necessary to ensure that consumers are protected from possible carcinogenic effects of excessive UV radiation. PMID:26364052

  10. Effect of interhemispheric currents on equivalent ionospheric currents in two hemispheres: Simulation results

    NASA Astrophysics Data System (ADS)

    Lyatskaya, Sonya; Lyatsky, Wladislaw; Zesta, Eftyhia

    2016-02-01

In this research, we used numerical simulation to study the effect of interhemispheric field-aligned currents (IHCs), flowing between the two conjugate ionospheres in the two hemispheres, on the equivalent ionospheric currents (EICs). We computed maps of these EICs in the two hemispheres for summer-winter conditions, when the effect of the IHCs is especially significant. The main results may be summarized as follows. (1) In the winter hemisphere, the IHCs may significantly exceed and substitute for the local R1 currents, and they may strongly affect the magnitude, location, and direction of the EICs in the nightside winter auroral ionosphere. (2) While in the summer polar cap the EICs tend to flow sunward, in the winter polar cap they turn toward dawn due to the effect of the IHCs. (3) The well-known reversal in the direction of the EICs in the vicinity of the midnight meridian in the winter hemisphere is observed not at the polar cap boundary (as usually expected) but equatorward of this boundary, in the region where the IHCs are located. (4) The IHCs in the winter hemisphere may be, in fact, not only a substitute for the R1 currents but also the major source of the Westward Auroral Electrojet, observed in both hemispheres during substorm activity.

  11. Simulation of natural corrosion by vapor hydration test: seven-year results

    SciTech Connect

    Luo, J.S.; Ebert, W.L.; Mazer, J.J.; Bates, J.K.

    1996-12-31

We have investigated the alteration behavior of synthetic basalt and SRL 165 borosilicate waste glasses that had been reacted in water vapor at 70 °C for time periods up to seven years. The nature and extent of corrosion of the glasses have been determined by characterizing the reacted glass surface with optical microscopy, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and energy dispersive x-ray spectroscopy (EDS). Alteration in 70 °C laboratory tests was compared to that which occurs at 150-200 °C and also to Hawaiian basaltic glasses 480 to 750 years old that were subaerially altered in nature. Synthetic basalt and waste glasses, both containing about 50 wt % SiO2, were found to react with water vapor to form an amorphous hydrated gel that contained small amounts of clay, nearly identical to palagonite layers formed on naturally altered basaltic glass. This result implies that the corrosion reaction in nature can be simulated with a vapor hydration test. These tests also provide a means for measuring the corrosion kinetics, which are difficult to determine by studying natural samples because alteration layers have often spalled off the samples and we have only limited knowledge of the conditions under which alteration occurred.

  12. Optimal piezoelectric beam shape for single and broadband vibration energy harvesting: Modeling, simulation and experimental results

    NASA Astrophysics Data System (ADS)

    Muthalif, Asan G. A.; Nordin, N. H. Diyana

    2015-03-01

Harvesting energy from the surroundings has become a new trend in saving our environment. Among the established technologies are solar panels, wind turbines and hydroelectric generators, which have successfully grown in meeting the world's energy demand. However, for low-powered electronic devices, especially those placed in remote areas, micro-scale energy harvesting is preferable. One popular method is vibration energy scavenging, which converts mechanical energy (from vibration) to electrical energy through the coupling between mechanical variables and electric or magnetic fields. As the voltage generated greatly depends on the geometry and size of the piezoelectric material, there is a need to define an optimum shape and configuration of the piezoelectric energy scavenger. In this research, mathematical derivations for a unimorph piezoelectric energy harvester are presented. Simulation is done using MATLAB and COMSOL Multiphysics software to study the effect of varying the length and shape of the beam on the generated voltage. Experimental results comparing triangular and rectangular shaped piezoelectric beams are also presented.
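Tuning the harvester beam to the ambient vibration frequency is central to the design problem described above. As a hedged illustration (classical Euler-Bernoulli theory for a bare uniform cantilever, ignoring the piezoelectric layer and tip mass; the material values and dimensions below are invented), the first resonance of a rectangular beam can be estimated as:

```python
import math

def cantilever_f1(E, rho, L, b, h):
    """First natural frequency (Hz) of a uniform rectangular cantilever
    from Euler-Bernoulli beam theory (first-mode eigenvalue 1.8751)."""
    I = b * h**3 / 12.0        # second moment of area of the cross-section
    A = b * h                  # cross-sectional area
    lam1 = 1.8751              # first root of the cantilever frequency equation
    return (lam1**2 / (2.0 * math.pi)) * math.sqrt(E * I / (rho * A)) / L**2

# Illustrative numbers only: a 50 x 10 x 0.5 mm steel-like substrate
f = cantilever_f1(E=200e9, rho=7800.0, L=0.05, b=0.01, h=0.5e-3)
print(f"first resonance: {f:.1f} Hz")
```

The strong 1/L² dependence is one reason beam length is such an effective tuning parameter.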

  13. Transfer function approach based on simulation results for the determination of pod curves

    NASA Astrophysics Data System (ADS)

    Demeyer, S.; Jenson, F.; Dominguez, N.; Iakovleva, E.

    2012-05-01

POD curve estimation is based on statistical studies of empirical data obtained through costly and time-consuming experimental campaigns. Currently, cost reduction of POD trials is a major issue. A proposed solution is to replace some of the experimental data required to determine the POD with model-based results. Following this idea, the concept of Model Assisted POD (MAPOD) was first introduced in the US in 2004 through the constitution of the MAPOD working group. One approach to Model Assisted POD is based on a transfer function which uses empirical data and models to transfer a POD measured for one specific application to another related application. The objective of this paper is to show how numerical simulations can help to determine such transfer functions. A practical implementation of the approach to a high frequency eddy current inspection for fatigue cracks is presented. Empirical data are available for titanium alloy plates. A model-based transfer function is used to assess a POD curve for the inspection of aluminum components.
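To make the POD concept concrete, here is a minimal "â versus a" signal-response POD fit in the spirit of MIL-HDBK-1823A, using synthetic data. This is a generic sketch, not the authors' transfer-function method; the regression coefficients, noise level, and detection threshold are all illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

# Synthetic signal-response data: measured response a_hat grows with flaw size a
rng = np.random.default_rng(1)
a = np.linspace(0.2, 2.0, 40)               # flaw sizes (mm, illustrative)
log_ahat = 0.5 + 1.2 * np.log(a) + 0.25 * rng.standard_normal(a.size)

# Fit log(a_hat) = b0 + b1*log(a) by least squares
X = np.column_stack([np.ones(a.size), np.log(a)])
(b0, b1), *_ = np.linalg.lstsq(X, log_ahat, rcond=None)
resid = log_ahat - X @ np.array([b0, b1])
s = np.sqrt(resid @ resid / (a.size - 2))    # residual standard deviation

def pod(size, log_threshold=0.0):
    """POD(a) = P(log a_hat > detection threshold) under a Gaussian
    residual model; log_threshold is an assumed decision level."""
    z = (b0 + b1 * np.log(size) - log_threshold) / s
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

for size in (0.3, 0.7, 1.5):
    print(f"POD({size:.1f} mm) = {pod(size):.2f}")
```

A transfer-function approach would, roughly speaking, use model predictions to map such a fitted curve from one inspection configuration to a related one.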

  14. Transient Simulation of the DLR M3.1 Testbench: Methods and First Results

    NASA Astrophysics Data System (ADS)

    Manfletti, C.; Sender, J.

    2009-01-01

Analysis of the transient phases in liquid rocket engines plays a major role in the design of the engines, as well as in the configuration and tailoring of the transient phases themselves. Testing of existing as well as future rocket engines must therefore address transient aspects such as pre-cooling, priming, and ignition, both experimentally and numerically. The flow behaviour within the various engine components is strongly dictated by the existing pressure and temperature fields. Ideally the flow through the engine feed lines is a single-phase flow. This is, however, not necessarily the case, and a two-phase flow may lead to drastic changes in the behaviour. The application of the program TLRE to the simulation of the DLR test bench M3.1 is presented. The focus lies on the phenomena associated with two-phase flow and their numerical resolution with the implementation of the lumped parameter method (LPM). A brief introduction of the relevant LPM characteristics is given. This is followed by a description of the relevant and observed two-phase flow phenomena and regimes and of the numerical solution method. In conclusion, the main results of the work performed so far, which highlight the importance of the measurement system and how it needs to be taken into account during analysis, are summarized, and a roadmap for subsequent program evolution and applications is outlined.
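The lumped parameter method mentioned above represents a feed system as a small number of volumes and flow resistances governed by ordinary differential equations. A minimal single-volume sketch (one gas volume fed through a linearized valve and vented through an orifice, ideal gas, forward Euler integration; every number is invented and nothing here comes from the DLR M3.1 bench or the TLRE code):

```python
# Lumped-parameter sketch: one gas volume, linearized in/out conductances.
R, T, V = 296.8, 290.0, 0.01          # N2 gas constant (J/kg/K), temp (K), m^3
p_src, p = 20e5, 1e5                  # source and initial pressure (Pa)
C_in, C_out = 2e-7, 1e-7              # assumed valve conductances (kg/s/Pa)
m = p * V / (R * T)                   # initial gas mass from ideal gas law
dt = 0.001
for _ in range(200000):               # integrate 200 s with forward Euler
    mdot = C_in * (p_src - p) - C_out * (p - 1e5)  # net mass flow into volume
    m += mdot * dt
    p = m * R * T / V                 # update pressure from the new mass
print(f"steady-state pressure: {p/1e5:.2f} bar")
```

The steady state balances inflow against outflow, C_in(p_src - p) = C_out(p - p_amb), giving p = 41e5/3 Pa here; real LPM networks chain many such volumes and add two-phase and thermal models.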

  15. Biofilm formation and control in a simulated spacecraft water system - Three year results

    NASA Technical Reports Server (NTRS)

    Schultz, John R.; Flanagan, David T.; Bruce, Rebekah J.; Mudgett, Paul D.; Carr, Sandra E.; Rutz, Jeffrey A.; Huls, M. H.; Sauer, Richard L.; Pierson, Duane L.

    1992-01-01

Two simulated spacecraft water systems are being used to evaluate the effectiveness of iodine for controlling microbial contamination within such systems. An iodine concentration of about 2.0 mg/L is maintained in one system by passing ultrapure water through an iodinated ion exchange resin. Stainless steel coupons with electropolished and mechanically-polished sides are being used to monitor biofilm formation. Results after three years of operation show a single episode of significant bacterial growth in the iodinated system when the iodine level dropped to 1.9 mg/L. This growth was apparently controlled by replacing the iodinated ion exchange resin, thereby increasing the iodine level. The second batch of resin has remained effective in controlling microbial growth down to an iodine level of 1.0 mg/L. SEM indicates that the iodine has impeded but may not have completely eliminated the formation of biofilm. Metals analyses reveal some corrosion in the iodinated system after 3 years of continuous exposure. Significant microbial contamination has been present continuously in a parallel noniodinated system since the third week of operation.

  16. Preliminary results for a two-dimensional simulation of the working process of a Stirling engine

    SciTech Connect

    Makhkamov, K.K.; Ingham, D.B.

    1998-07-01

Stirling engines have several potential advantages over existing types of engines; in particular, they can use renewable energy sources for power production, and their performance meets environmental-security demands. In order to design Stirling engines properly, and to realize their potential performance, it is important to simulate their working process mathematically with greater accuracy. At present, a series of very important mathematical models are used for describing the working process of Stirling engines, and these are, in general, classified as models of three levels. All of these models consider one-dimensional schemes for the engine and assume uniform fluid velocity, temperature and pressure profiles at each plane of the internal gas circuit of the engine. The use of two-dimensional CFD models can significantly extend the capabilities for detailed analysis of the complex heat transfer and gas dynamic processes which occur in the internal gas circuit, as well as in the external circuit of the engine. In this paper a two-dimensional simplified frame (no construction walls) calculation scheme for the Stirling engine has been assumed and the standard k-ε turbulence model has been used for the analysis of the engine working process. The results obtained show that the use of two-dimensional CFD models gives the possibility of gaining a much greater insight into the fluid flow and heat transfer processes which occur in Stirling engines.

  17. Circulation induced by subglacial discharge in glacial fjords: Results from idealized numerical simulations

    NASA Astrophysics Data System (ADS)

    Salcedo-Castro, Julio; Bourgault, Daniel; deYoung, Brad

    2011-09-01

The flow caused by the discharge of freshwater underneath a glacier into an idealized fjord is simulated with a 2D non-hydrostatic model. As the freshwater leaves the subglacial opening horizontally into a fjord of uniformly denser water, it spreads along the bottom as a jet until buoyancy forces it to rise. During the initial rising phase, the plume meanders into complex flow patterns while mixing with the surrounding fluid until it reaches the surface and then spreads horizontally as a surface seaward-flowing plume of brackish water. The process induces an estuarine-like circulation. Once a steady state is reached, the flow consists of an almost undiluted buoyant plume rising straight along the face of the glacier that turns into a horizontal surface layer thickening as it flows seaward. Over the range of parameters examined, the estuarine circulation is dynamically unstable, with the gradient Richardson number at the sheared interface taking values below 1/4. The surface velocity and dilution factors are strongly and non-linearly related to the Froude number. It is the buoyancy flux that primarily controls the resulting circulation, with the momentum flux playing a secondary role.
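The stability criterion invoked above can be computed directly: the sheared interface is taken to be unstable where the gradient Richardson number falls below the critical value 1/4. A minimal sketch with invented, illustrative values for the stratification and shear:

```python
def gradient_richardson(drho_dz, du_dz, rho0=1025.0, g=9.81):
    """Gradient Richardson number Ri = N^2 / (du/dz)^2, with buoyancy
    frequency N^2 = -(g/rho0) * drho/dz (z measured positive upward)."""
    N2 = -(g / rho0) * drho_dz
    return N2 / du_dz**2

# Illustrative values for a sheared brackish interface:
# density decreasing upward by 2 kg m^-3 per metre, shear 0.3 s^-1
ri = gradient_richardson(drho_dz=-2.0, du_dz=0.3)
print(f"Ri = {ri:.2f} -> {'unstable' if ri < 0.25 else 'stable'}")
```

With these numbers Ri is just below 1/4, i.e. shear is strong enough relative to stratification for instability, which is the regime the abstract reports at the plume interface.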

  18. Wide Bandpass and Narrow Bandstop Microstrip Filters based on Hilbert fractal geometry: design and simulation results.

    PubMed

    Mezaal, Yaqeen S; Eyyuboglu, Halil T; Ali, Jawad K

    2014-01-01

This paper presents a new Wide Bandpass Filter (WBPF) and a Narrow Bandstop Filter (NBSF) incorporating two microstrip resonators, each based on the 2nd iteration of Hilbert fractal geometry. The type of filter, passband or rejectband, has been adjusted by the coupling gap parameter (d) between the Hilbert resonators, using a substrate with a dielectric constant of 10.8 and a thickness of 1.27 mm. Numerical simulation results, as well as a parametric study of the effect of d on filter type and frequency response, are presented. The WBPF has been designed at resonant frequencies of 2 and 2.2 GHz with a bandwidth of 0.52 GHz, -28 dB return loss and -0.125 dB insertion loss, while the NBSF has been designed for a 2.37 GHz center frequency, 20 MHz rejection bandwidth, -0.1873 dB return loss and 13.746 dB insertion loss. The proposed technique offers a new alternative for constructing low-cost, high-performance filter devices suitable for a wide range of wireless communication systems. PMID:25536436

  20. Analysis of Optical CDMA Signal Transmission: Capacity Limits and Simulation Results

    NASA Astrophysics Data System (ADS)

    Garba, Aminata A.; Yim, Raymond M. H.; Bajcsy, Jan; Chen, Lawrence R.

    2005-12-01

We present performance limits of optical code-division multiple-access (OCDMA) networks. In particular, we evaluate the information-theoretical capacity of OCDMA transmission when single-user detection (SUD) is used by the receiver. First, we model the OCDMA transmission as a discrete memoryless channel, evaluate its capacity when binary modulation is used in the interference-limited (noiseless) case, and extend this analysis to the case when additive white Gaussian noise (AWGN) corrupts the received signals. Next, we analyze the benefits of using nonbinary signaling for increasing the throughput of optical CDMA transmission. It turns out that up to a fourfold increase in the network throughput can be achieved with practical numbers of modulation levels in comparison to the traditionally considered binary case. Finally, we present BER simulation results for channel-coded binary and M-ary OCDMA transmission systems. In particular, we apply turbo codes concatenated with Reed-Solomon codes so that up to several hundred concurrent optical CDMA users can be supported at low target bit error rates. We observe that unlike conventional OCDMA systems, turbo-empowered OCDMA can allow overloading (more active users than the length of the spreading sequences) with good bit error rate system performance.
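The interference-limited capacity analysis described above treats the optical link as a discrete memoryless channel. As a hedged sketch (a Z-channel, in which multiple-access interference can only turn transmitted 0s into received 1s, is one crude model of on-off-keyed OCDMA with single-user detection; the paper's actual channel model may differ), the capacity can be found by a scalar search over the input distribution:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def z_channel_capacity(p):
    """Capacity (bits/use) of a Z-channel where an input 0 is received
    as 1 with probability p, found by a grid search over q = P(X=1)."""
    best = 0.0
    for i in range(1, 1000):
        q = i / 1000.0
        p1 = q + (1 - q) * p               # P(Y=1)
        mutual = h2(p1) - (1 - q) * h2(p)  # I(X;Y) for the Z-channel
        best = max(best, mutual)
    return best

c = z_channel_capacity(0.1)
print(f"C(p=0.1) = {c:.3f} bits/use")
```

The grid search closely matches the closed form C = log2(1 + (1-p) p^(p/(1-p))), about 0.763 bits/use at p = 0.1.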

  1. The Plasma Wake Downstream of Lunar Topographic Obstacles: Preliminary Results from 2D Particle Simulations

    NASA Technical Reports Server (NTRS)

Zimmerman, Michael I.; Farrell, W. M.; Stubbs, T. J.; Halekas, J. S.

    2011-01-01

Anticipating the plasma and electrical environments in permanently shadowed regions (PSRs) of the moon is critical in understanding local processes of space weathering, surface charging, surface chemistry, volatile production and trapping, exo-ion sputtering, and charged dust transport. In the present study, we have employed the open-source XOOPIC code [1] to investigate the effects of solar wind conditions and plasma-surface interactions on the electrical environment in PSRs through fully two-dimensional particle-in-cell simulations. By direct analogy with current understanding of the global lunar wake (e.g., references), deep, near-terminator, shadowed craters are expected to produce plasma "mini-wakes" just leeward of the crater wall. The present results (e.g., Figure 1) are in agreement with previous claims that hot electrons rush into the crater void ahead of the heavier ions, forming a negative cloud of charge. Charge separation along the initial plasma-vacuum interface gives rise to an ambipolar electric field that subsequently accelerates ions into the void. However, the situation is complicated by the presence of the dynamic lunar surface, which develops an electric potential in response to local plasma currents (e.g., Figure 1a). In some regimes, wake structure is clearly affected by the presence of the charged crater floor as it seeks to achieve current balance (i.e., zero net current to the surface).

  2. Personal values and crew compatibility: Results from a 105 days simulated space mission

    NASA Astrophysics Data System (ADS)

    Sandal, Gro M.; Bye, Hege H.; van de Vijver, Fons J. R.

    2011-08-01

On a mission to Mars the crew will experience high autonomy and inter-dependence. "Groupthink", known as a tendency to strive for consensus at the cost of considering alternative courses of action, represents a potential safety hazard. This paper addresses two aspects of "groupthink": the extent to which confined crewmembers perceive increasing convergence in personal values, and whether they attribute less tension to individual differences over time. It further examines the impact of personal values on interpersonal compatibility. These questions were investigated in a 105-day confinement study in which a multinational crew (N = 6) simulated a Mars mission. The Portrait of Crew Values Questionnaire was administered regularly to assess personal values, perceived value homogeneity, and tension attributed to value disparities. Interviews were conducted before and after the confinement. Multiple regression analysis revealed no significant changes in value homogeneity over time; rather, the opposite tendency was indicated. More tension was attributed to differences in hedonism, benevolence and tradition in the last 35 days when the crew was allowed greater autonomy. Three subgroups, distinct in terms of personal values, were identified. No evidence for "groupthink" was found. The results suggest that personal values should be considered in the composition of crews for long duration missions.

  3. On the Accuracy of Genomic Selection

    PubMed Central

    Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte

    2016-01-01

Genomic selection is focused on prediction of breeding values of selection candidates by means of a high density of markers. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for individuals (so-called test individuals), breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configuration of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular, those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared for simulated data, and the results point to the validity of our approach. The calculations were also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178
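The ridge-regression prediction scheme described above is easy to reproduce on simulated data. The sketch below follows the general recipe (estimate marker effects by ridge regression on training individuals, predict test-set breeding values, report the correlation with the true values), but every parameter — population sizes, QTL count, heritability, ridge penalty — is an illustrative assumption, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_train, n_test, n_markers, n_qtl = 300, 100, 500, 40
h2 = 0.6                                     # assumed heritability

# Marker genotypes coded 0/1/2, then column-centered; QTLs are a subset
# of the markers, so each QTL is in perfect LD with "its" marker.
M = rng.binomial(2, 0.5, size=(n_train + n_test, n_markers)).astype(float)
M -= M.mean(axis=0)
qtl = rng.choice(n_markers, n_qtl, replace=False)
effects = np.zeros(n_markers)
effects[qtl] = rng.standard_normal(n_qtl)

g = M @ effects                              # true breeding values
noise = rng.standard_normal(g.size) * np.sqrt(np.var(g) * (1 - h2) / h2)
y = g + noise                                # phenotypes

Mtr, Mte = M[:n_train], M[n_train:]
lam = 10.0                                   # illustrative ridge penalty
beta = np.linalg.solve(Mtr.T @ Mtr + lam * np.eye(n_markers),
                       Mtr.T @ y[:n_train])

accuracy = np.corrcoef(Mte @ beta, g[n_train:])[0, 1]
print(f"accuracy (corr of predicted vs true breeding values): {accuracy:.2f}")
```

In this idealized perfect-LD setting the accuracy is well above zero but below one, reflecting the finite training set and non-unit heritability that the paper's theoretical expression accounts for.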

  4. Feature Extraction from Simulations and Experiments: Preliminary Results Using a Fluid Mix Problem

    SciTech Connect

    Kamath, C; Nguyen, T

    2005-01-04

    Code validation, or comparing the output of computer simulations to experiments, is necessary to determine which simulation is a better approximation to an experiment. It can also be used to determine how the input parameters in a simulation can be modified to yield output that is closer to the experiment. In this report, we discuss our experiences in the use of image processing techniques for extracting features from 2-D simulations and experiments. These features can be used in comparing the output of simulations to experiments, or to other simulations. We first describe the problem domain and the data. We next explain the need for cleaning or denoising the experimental data and discuss the performance of different techniques. Finally, we discuss the features of interest and describe how they can be extracted from the data. The focus in this report is on extracting features from experimental and simulation data for the purpose of code validation; the actual interpretation of these features and their use in code validation is left to the domain experts.
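As a concrete example of the kind of denoising step discussed above, a median filter is a common first choice for removing impulsive noise from experimental traces before features are extracted. This is a generic illustration, not the specific technique evaluated in the report:

```python
def median_filter_1d(signal, width=3):
    """Simple 1-D median filter: replace each sample with the median of
    its window; windows are truncated at the ends of the signal."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sorted(window)[len(window) // 2])
    return out

# A ramp corrupted by a single-sample spike: the filter removes the
# spike while leaving the interior of the ramp untouched.
noisy = [0, 1, 2, 9, 4, 5, 6]
smoothed = median_filter_1d(noisy)
print(smoothed)
```

Unlike a mean filter, the median suppresses isolated outliers without smearing them into neighboring samples, which matters when the features of interest are edges or fronts.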

  5. Chemical compatibility screening results of plastic packaging to mixed waste simulants

    SciTech Connect

    Nigrey, P.J.; Dickens, T.G.

    1995-12-01

We have developed a chemical compatibility program for evaluating transportation packaging components for transporting mixed waste forms. We have performed the first phase of this experimental program to determine the effects of simulant mixed wastes on packaging materials. This effort involved the screening of 10 plastic materials in four liquid mixed waste simulants. The testing protocol involved exposing the respective materials to ~3 kGy of gamma radiation followed by 14-day exposures to the waste simulants at 60 °C. The seal materials, or rubbers, were tested using VTR (vapor transport rate) measurements, while the liner materials were tested using specific gravity as a metric. For these tests, a screening criterion of ~1 g/m2/hr for VTR and a specific gravity change of 10% was used. It was concluded that while all seal materials passed exposure to the aqueous simulant mixed waste, EPDM and SBR had the lowest VTRs. In the chlorinated hydrocarbon simulant mixed waste, only VITON passed the screening tests. In both the simulant scintillation fluid mixed waste and the ketone mixture simulant mixed waste, none of the seal materials met the screening criteria. It is anticipated that the materials with the lowest VTRs will be evaluated in the comprehensive phase of the program. For specific gravity testing of liner materials, the data showed that while all materials with the exception of polypropylene passed the screening criteria, Kel-F, HDPE, and XLPE were found to offer the greatest resistance to the combination of radiation and chemicals.
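The screening arithmetic is straightforward to encode. The helper below applies the stated criteria (~1 g/m2/hr VTR for seals, 10% specific-gravity change for liners); the coupon mass loss, area, and exposure time fed to it are hypothetical:

```python
def vapor_transport_rate(mass_loss_g, area_m2, hours):
    """Vapor transport rate in g/m^2/hr from a gravimetric seal test."""
    return mass_loss_g / (area_m2 * hours)

def passes_screening(vtr, sg_change_pct, vtr_limit=1.0, sg_limit=10.0):
    """Apply the study's screening criteria: ~1 g/m^2/hr VTR for seals
    and a 10% specific-gravity change for liners."""
    return vtr <= vtr_limit and abs(sg_change_pct) <= sg_limit

# Hypothetical coupon: 0.15 g lost over 336 h (14 days) on a 25 cm^2 seal,
# with a 2.5% specific-gravity change
vtr = vapor_transport_rate(0.15, 25e-4, 336.0)
print(f"VTR = {vtr:.3f} g/m2/hr -> pass: {passes_screening(vtr, 2.5)}")
```

Such a coupon sits well under the VTR limit and would pass this screening phase.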

  6. Meditation Experience Predicts Introspective Accuracy

    PubMed Central

    Fox, Kieran C. R.; Zakarauskas, Pierre; Dixon, Matt; Ellamil, Melissa; Thompson, Evan; Christoff, Kalina

    2012-01-01

    The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1–15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a ‘body-scanning’ meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold) as reported in prior research. Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices. PMID:23049790

  7. Soil nitrogen balance under wastewater management: Field measurements and simulation results

    USGS Publications Warehouse

    Sophocleous, M.; Townsend, M.A.; Vocasek, F.; Ma, L.; KC, A.

    2009-01-01

The use of treated wastewater for irrigation of crops could result in high nitrate-nitrogen (NO3-N) concentrations in the vadose zone and ground water. The goal of this 2-yr field-monitoring study in the deep silty clay loam soils south of Dodge City, Kansas, was to assess how and under what circumstances N from the secondary-treated, wastewater-irrigated corn reached the deep (20-45 m) water table of the underlying High Plains aquifer and what could be done to minimize this problem. We collected 15.2-m-deep soil cores for characterization of physical and chemical properties; installed neutron probe access tubes to measure soil-water content and suction lysimeters to sample soil water periodically; sampled monitoring, irrigation, and domestic wells in the area; and obtained climatic, crop, irrigation, and N application rate records for two wastewater-irrigated study sites. These data and additional information were used to run the Root Zone Water Quality Model to identify key parameters and processes that influence N losses in the study area. We demonstrated that NO3-N transport processes result in significant accumulations of N in the vadose zone and that NO3-N in the underlying ground water is increasing with time. Root Zone Water Quality Model simulations for two wastewater-irrigated study sites indicated that reducing levels of corn N fertilization by more than half to 170 kg ha-1 substantially increases N-use efficiency and achieves near-maximum crop yield. Combining such measures with a crop rotation that includes alfalfa should further reduce the accumulation and downward movement of NO3-N in the soil profile. Copyright © 2009 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.

  8. Velocity structure of a bottom simulating reflector offshore Peru: Results from full waveform inversion

    USGS Publications Warehouse

    Pecher, I.A.; Minshull, T.A.; Singh, S.C.; Von Huene, R.

    1996-01-01

    Much of our knowledge of the worldwide distribution of submarine gas hydrates comes from seismic observations of Bottom Simulating Reflectors (BSRs). Full waveform inversion has proven to be a reliable technique for studying the fine structure of BSRs using the compressional wave velocity. We applied a non-linear full waveform inversion technique to a BSR at a location offshore Peru. We first determined the large-scale features of seismic velocity variations using a statistical inversion technique to maximise coherent energy along travel-time curves. These velocities were used as the starting model for the full waveform inversion, which yielded a detailed velocity/depth model in the vicinity of the BSR. We found that the data are best fit by a model in which the BSR consists of a thin, low-velocity layer. The compressional wave velocity drops from 2.15 km/s down to an average of 1.70 km/s in an 18 m thick interval, with a minimum velocity of 1.62 km/s in a 6 m interval. The resulting compressional wave velocity was used to estimate gas content in the sediments. Our results suggest that the low velocity layer is a 6-18 m thick zone containing a few percent of free gas in the pore space. The presence of the BSR coincides with a region of vertical uplift. Therefore, we suggest that gas at this BSR is formed by dissociation of hydrates at the base of the hydrate stability zone due to uplift and a subsequent decrease in pressure.
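    The seismic signature of such a low-velocity layer can be checked with simple arithmetic on the velocities quoted in the abstract. A minimal sketch (not the waveform inversion itself) of the extra two-way travel time the 18 m gas zone introduces relative to the background velocity:

```python
def twt_ms(thickness_m, v_kms):
    """Two-way travel time through a layer, in milliseconds."""
    return 2.0 * thickness_m / (v_kms * 1000.0) * 1000.0

# Thickness and velocities quoted in the abstract
background = twt_ms(18.0, 2.15)   # same interval with no free gas
low_vel = twt_ms(18.0, 1.70)      # observed average in the 18 m zone
delay = low_vel - background
print(f"extra two-way delay from the low-velocity zone: {delay:.2f} ms")
```

    A delay of a few milliseconds is resolvable in the frequency band of typical multichannel seismic data, which is why full waveform inversion can constrain a layer this thin.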

  9. Evaluating LANDSAT wildland classification accuracies

    NASA Technical Reports Server (NTRS)

    Toll, D. L.

    1980-01-01

    Procedures to evaluate the accuracy of LANDSAT derived wildland cover classifications are described. The evaluation procedures include: (1) implementing a stratified random sample for obtaining unbiased verification data; (2) performing area by area comparisons between verification and LANDSAT data for both heterogeneous and homogeneous fields; (3) providing overall and individual classification accuracies with confidence limits; (4) displaying results within contingency tables for analysis of confusion between classes; and (5) quantifying the amount of information (bits/square kilometer) conveyed in the LANDSAT classification.
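    Steps (3) and (4) of the procedure above can be sketched as a short computation on a contingency (confusion) matrix. The matrix values are hypothetical, and the confidence interval uses the simple normal approximation for a binomial proportion:

```python
import math

def overall_accuracy(confusion):
    """Overall accuracy with a normal-approximation 95% confidence
    interval, from a square contingency matrix."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    p = correct / total
    half = 1.96 * math.sqrt(p * (1.0 - p) / total)
    return p, (p - half, p + half)

def producers_accuracy(confusion, cls):
    """Per-class (producer's) accuracy: correct / verification-column total."""
    col_total = sum(row[cls] for row in confusion)
    return confusion[cls][cls] / col_total

# Hypothetical 3-class wildland contingency table:
# rows = LANDSAT classification, columns = verification data
cm = [[50, 5, 3],
      [4, 60, 6],
      [2, 5, 65]]
p, (ci_lo, ci_hi) = overall_accuracy(cm)
print(f"overall accuracy {p:.3f}, 95% CI ({ci_lo:.3f}, {ci_hi:.3f})")
```

    The off-diagonal cells of `cm` are what the contingency-table display in step (4) uses to analyze confusion between classes.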

  10. ATMOSPHERIC MERCURY SIMULATION USING THE CMAQ MODEL: FORMULATION DESCRIPTION AND ANALYSIS OF WET DEPOSITION RESULTS

    EPA Science Inventory

    The Community Multiscale Air Quality (CMAQ) modeling system has recently been adapted to simulate the emission, transport, transformation and deposition of atmospheric mercury in three distinct forms; elemental mercury gas, reactive gaseous mercury, and particulate mercury. Emis...

  11. The simulation of optical diagnostics for crystal growth - Models and results

    NASA Astrophysics Data System (ADS)

    Banish, M. R.; Clark, R. L.; Kathman, A. D.; Lawson, S. M.

    A computer simulation of a Two Color Holographic Interferometric (TCHI) optical system was performed using a physical (wave) optics model. This model accurately simulates propagation through time-varying, 2-D or 3-D concentration and temperature fields as a wave phenomenon. The model calculates wavefront deformations that can be used to generate fringe patterns. This simulation modeled a proposed triglycine sulphate (TGS) flight experiment by propagating through the simplified onion-like refractive index distribution of the growing crystal and calculating the recorded wavefront deformation. The phase of this wavefront was used to generate sample interferograms that map index of refraction variation. Two such fringe patterns, generated at different wavelengths, were used to extract the original temperature and concentration field characteristics within the growth chamber. This demonstrates the feasibility of the TCHI crystal growth diagnostic technique. The simulation provides feedback to the experimental design process.
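    The two-wavelength step at the heart of TCHI reduces, at each point in the field, to solving two linear equations for two unknowns: each wavelength yields one optical path-length change, a weighted sum of the temperature and concentration changes. A minimal sketch with hypothetical sensitivity coefficients (the coefficient values below are invented, not from the experiment):

```python
def recover_fields(aT1, aC1, aT2, aC2, d1, d2):
    """Invert two optical path-length changes (one per wavelength) for
    temperature and concentration changes via Cramer's rule.

    d_i = aT_i * dT + aC_i * dC, where aT_i, aC_i are the refractive-index
    sensitivities (dn/dT, dn/dC) at wavelength i, times the path length.
    """
    det = aT1 * aC2 - aC1 * aT2
    if abs(det) < 1e-15:
        raise ValueError("sensitivity coefficients are not independent")
    dT = (d1 * aC2 - aC1 * d2) / det
    dC = (aT1 * d2 - d1 * aT2) / det
    return dT, dC

# Hypothetical sensitivities at the two wavelengths
aT1, aC1 = -1.0e-4, 1.5e-3
aT2, aC2 = -1.1e-4, 1.2e-3

# Forward-model two phase measurements, then invert them
true_dT, true_dC = 0.5, 0.02
d1 = aT1 * true_dT + aC1 * true_dC
d2 = aT2 * true_dT + aC2 * true_dC
dT, dC = recover_fields(aT1, aC1, aT2, aC2, d1, d2)
```

    The technique works only because the sensitivities differ between wavelengths; if the two rows were proportional, the determinant would vanish and temperature and concentration could not be separated.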

  12. Test Results from a Direct Drive Gas Reactor Simulator Coupled to a Brayton Power Conversion Unit

    NASA Technical Reports Server (NTRS)

    Hervol, David S.; Briggs, Maxwell H.; Owen, Albert K.; Bragg-Sitton, Shannon M.; Godfroy, Thomas J.

    2010-01-01

    Component-level testing of power conversion units proposed for use in fission surface power systems has typically been done using relatively simple electric heaters for thermal input. These heaters do not adequately represent the geometry or response of proposed reactors. As testing of fission surface power systems transitions from the component level to the system level, it becomes necessary to more accurately replicate these reactors using reactor simulators. The Direct Drive Gas-Brayton Power Conversion Unit test activity at the NASA Glenn Research Center integrates a reactor simulator with an existing Brayton test rig. The response of the reactor simulator to a change in Brayton shaft speed is shown, as well as the response of the Brayton to an insertion of reactivity corresponding to a drum reconfiguration. The lessons learned from these tests can be used to improve the design of future reactor simulators for system-level fission surface power tests.

  13. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    PubMed Central

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-01-01

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in a five-day inertial navigation can be improved by about 8% with the proposed calibration method. The accuracy can be improved at least 20% when the position accuracy of the atomic gyro INS can reach a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs. PMID:27338408
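    The 51-state filter in the paper is far too large to sketch here, but the principle of Kalman-filter calibration can be shown in one state: estimating a constant sensor bias from noisy turntable measurements. This toy stand-in (all noise values invented) shrinks the estimate's variance as measurements accumulate, exactly as the full calibration filter does for each of its error parameters:

```python
import random

def kalman_bias(measurements, r=0.04, p0=1.0):
    """1-state Kalman filter estimating a constant sensor bias.

    r  : measurement-noise variance (hypothetical)
    p0 : prior variance on the bias estimate
    The state is static, so there is no prediction/process-noise step.
    """
    x, p = 0.0, p0
    for z in measurements:
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update estimate toward the measurement
        p = (1.0 - k) * p      # variance shrinks with every measurement
    return x, p

# Simulate 200 noisy readings of a 0.3 (arbitrary units) bias
random.seed(1)
true_bias = 0.3
zs = [true_bias + random.gauss(0.0, 0.2) for _ in range(200)]
est, var = kalman_bias(zs)
print(f"estimated bias {est:.3f}, posterior variance {var:.5f}")
```

    A smoother, as used in the paper, would additionally run backward over the same data to refine early estimates with later measurements.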

  15. STREAM CHANNELS OF THE UPPER SAN PEDRO BASIN WITH PERCENT DIFFERENCE BETWEEN RESULTS FROM TWO SWAT SIMULATIONS

    EPA Science Inventory

    Stream channels of the Upper San Pedro with percent difference between results from two SWAT