Sample records for accuracy simulation results

  1. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.

    2013-10-15

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo simulations, whether domain replicated or domain decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended and reduced precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
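    The order-dependence of double-precision summation described above is easy to demonstrate. The sketch below (illustrative only, not the paper's integer-tally or extended-precision method) contrasts a naive accumulation, whose result depends on summation order, with Python's `math.fsum`, which accumulates partial sums exactly and rounds once:

```python
import math
import random

random.seed(2)
values = [random.uniform(-1e12, 1e12) for _ in range(10000)]
reordered = sorted(values)  # a different, deterministic summation order

# Plain left-to-right double-precision sums depend on ordering:
naive_a = sum(values)
naive_b = sum(reordered)

# math.fsum tracks exact partial sums (Shewchuk's algorithm), so the
# result is correctly rounded and independent of summation order.
exact_a = math.fsum(values)
exact_b = math.fsum(reordered)

print(naive_a == naive_b)   # typically False: double addition is not associative
print(exact_a == exact_b)   # True: order-independent accumulation
```

This mirrors the paper's point that an unordered (parallel) sum needs either an exact or an extended-precision accumulator before the final rounding to double precision.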

  2. ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.

    PubMed

    Morota, Gota

    2017-12-20

    Deterministic formulas for the accuracy of genomic predictions highlight the relationships between prediction accuracy and the potential factors influencing it, prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus genetic factors impacting prediction accuracy, while requiring only mouse navigation in a web browser. ShinyGPAS is available at: https://chikudaisei.shinyapps.io/shinygpas/. ShinyGPAS is a Shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open source software and is hosted online as a freely available web-based resource with an intuitive graphical user interface.

  3. Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bostani, Maryam, E-mail: mbostani@mednet.ucla.edu; McMillan, Kyle; Cagnon, Chris H.

    Purpose: The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. Methods: MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. Results: The calculated mean percent difference between TLD measurements and Monte Carlo simulations was −4.9% with standard deviation of 8.7% and a range of −22.7% to 5.7%. Conclusions: The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.

  4. Accuracy of Monte Carlo simulations compared to in-vivo MDCT dosimetry.

    PubMed

    Bostani, Maryam; Mueller, Jonathon W; McMillan, Kyle; Cody, Dianna D; Cagnon, Chris H; DeMarco, John J; McNitt-Gray, Michael F

    2015-02-01

    The purpose of this study was to assess the accuracy of a Monte Carlo simulation-based method for estimating radiation dose from multidetector computed tomography (MDCT) by comparing simulated doses in ten patients to in-vivo dose measurements. MD Anderson Cancer Center Institutional Review Board approved the acquisition of in-vivo rectal dose measurements in a pilot study of ten patients undergoing virtual colonoscopy. The dose measurements were obtained by affixing TLD capsules to the inner lumen of rectal catheters. Voxelized patient models were generated from the MDCT images of the ten patients, and the dose to the TLD for all exposures was estimated using Monte Carlo based simulations. The Monte Carlo simulation results were compared to the in-vivo dose measurements to determine accuracy. The calculated mean percent difference between TLD measurements and Monte Carlo simulations was -4.9% with standard deviation of 8.7% and a range of -22.7% to 5.7%. The results of this study demonstrate very good agreement between simulated and measured doses in-vivo. Taken together with previous validation efforts, this work demonstrates that the Monte Carlo simulation methods can provide accurate estimates of radiation dose in patients undergoing CT examinations.

  5. Location Accuracy of INS/Gravity-Integrated Navigation System on the Basis of Ocean Experiment and Simulation

    PubMed Central

    Wang, Hubiao; Chai, Hua; Bao, Lifeng; Wang, Yong

    2017-01-01

    An experiment comparing the location accuracy of gravity matching-aided navigation in the ocean and in simulation is very important to evaluate the feasibility and performance of an INS/gravity-integrated navigation system (IGNS) in underwater navigation. Based on a 1′ × 1′ marine gravity anomaly reference map and a multi-model adaptive Kalman filtering algorithm, a matching location experiment of IGNS was conducted using data obtained by a marine gravimeter. The location accuracy under actual ocean conditions was 2.83 nautical miles (n miles). Several groups of simulated marine gravity anomaly data were obtained by establishing normally distributed random error N(u, σ²) with varying mean u and noise variance σ². Thereafter, the matching location of IGNS was simulated. The results show that changes in u had little effect on the location accuracy. However, an increase in σ² resulted in a significant decrease in the location accuracy. A comparison between the actual ocean experiment and the simulation along the same route demonstrated the effectiveness of the proposed simulation method and quantitative analysis results. In addition, given the gravimeter (1–2 mGal accuracy) and the reference map (resolution 1′ × 1′; accuracy 3–8 mGal), the location accuracy of IGNS reached ~1.0–3.0 n miles in the South China Sea. PMID:29261136

  6. Location Accuracy of INS/Gravity-Integrated Navigation System on the Basis of Ocean Experiment and Simulation.

    PubMed

    Wang, Hubiao; Wu, Lin; Chai, Hua; Bao, Lifeng; Wang, Yong

    2017-12-20

    An experiment comparing the location accuracy of gravity matching-aided navigation in the ocean and in simulation is very important to evaluate the feasibility and performance of an INS/gravity-integrated navigation system (IGNS) in underwater navigation. Based on a 1' × 1' marine gravity anomaly reference map and a multi-model adaptive Kalman filtering algorithm, a matching location experiment of IGNS was conducted using data obtained by a marine gravimeter. The location accuracy under actual ocean conditions was 2.83 nautical miles (n miles). Several groups of simulated marine gravity anomaly data were obtained by establishing normally distributed random error N(u, σ²) with varying mean u and noise variance σ². Thereafter, the matching location of IGNS was simulated. The results show that changes in u had little effect on the location accuracy. However, an increase in σ² resulted in a significant decrease in the location accuracy. A comparison between the actual ocean experiment and the simulation along the same route demonstrated the effectiveness of the proposed simulation method and quantitative analysis results. In addition, given the gravimeter (1-2 mGal accuracy) and the reference map (resolution 1' × 1'; accuracy 3-8 mGal), the location accuracy of IGNS reached ~1.0-3.0 n miles in the South China Sea.
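    The effect of the N(u, σ²) error model on matching location can be illustrated with a toy one-dimensional stand-in for gravity map matching (the profile, segment length, and noise values below are hypothetical, not the paper's data): a constant bias u shifts every sample equally and tends to leave the best-match position intact, while zero-mean noise of growing variance degrades it.

```python
import math
import random

def best_match_offset(reference, segment):
    """Return the offset minimizing the sum of squared differences."""
    best, best_cost = 0, float("inf")
    for off in range(len(reference) - len(segment) + 1):
        cost = sum((reference[off + i] - s) ** 2 for i, s in enumerate(segment))
        if cost < best_cost:
            best, best_cost = off, cost
    return best

# Hypothetical 1-D "gravity anomaly profile" standing in for the reference map.
random.seed(0)
reference = [math.sin(0.21 * i) + 0.3 * math.sin(0.047 * i) for i in range(500)]
true_offset = 137
clean = reference[true_offset:true_offset + 40]   # the "measured" track

for u, sigma in [(0.0, 0.0), (0.5, 0.0), (0.0, 0.8)]:
    noisy = [g + random.gauss(u, sigma) for g in clean]
    est = best_match_offset(reference, noisy)
    print(u, sigma, abs(est - true_offset))
```

With no noise the match is exact; a pure bias usually still matches, while large σ² produces visible position errors, qualitatively reproducing the abstract's finding.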

  7. Impacts of land use/cover classification accuracy on regional climate simulations

    NASA Astrophysics Data System (ADS)

    Ge, Jianjun; Qi, Jiaguo; Lofgren, Brent M.; Moore, Nathan; Torbick, Nathan; Olson, Jennifer M.

    2007-03-01

    Land use/cover change has been recognized as a key component in global change. Various land cover data sets, including historically reconstructed, recently observed, and future projected, have been used in numerous climate modeling studies at regional to global scales. However, little attention has been paid to the effect of land cover classification accuracy on climate simulations, though accuracy assessment has become a routine procedure in the land cover production community. In this study, we analyzed the behavior of simulated precipitation in the Regional Atmospheric Modeling System (RAMS) over a range of simulated classification accuracies over a 3 month period. This study found that land cover accuracy under 80% had a strong effect on precipitation, especially when the land surface had greater control of the atmosphere. This effect became stronger as the accuracy decreased. As shown in three follow-on experiments, the effect was further influenced by model parameterizations such as convection schemes and interior nudging, which can mitigate the strength of surface boundary forcings. In reality, land cover accuracy rarely attains the commonly recommended 85% target. Its effect on climate simulations should therefore be considered, especially when historically reconstructed and future projected land covers are employed.

  8. Analysis of machining accuracy during free form surface milling simulation for different milling strategies

    NASA Astrophysics Data System (ADS)

    Matras, A.; Kowalczyk, R.

    2014-11-01

    The analysis of machining accuracy after free form surface milling simulations (based on machining EN AW-7075 alloy) for different machining strategies (Level Z, Radial, Square, Circular) is presented in this work. The milling simulations were performed using Esprit CAD/CAM software. The accuracy of the obtained allowance is defined as the difference between the theoretical surface of the workpiece (the surface designed in CAD software) and the machined surface after a milling simulation. The difference between the two surfaces describes a roughness value, which results from the mapping of the tool shape onto the machined surface. The accuracy of the remaining allowance directly indicates the surface quality after finish machining. The described methodology of using CAD/CAM software can reduce the design time of the machining process for free form surface milling on a 5-axis CNC milling machine, by omitting the need to machine the part in order to measure the machining accuracy of the selected strategies and cutting data.
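    The roughness produced by tool shape mapping can be estimated analytically for a ball-end mill via the standard scallop-height relation (a textbook formula, not taken from this paper; the tool radius and stepover values below are hypothetical):

```python
import math

def scallop_height(R, s):
    """Scallop height left by a ball-end mill of radius R (mm) at stepover s (mm).

    Exact geometric form: h = R - sqrt(R^2 - (s/2)^2).
    For s << R this is well approximated by h ~= s^2 / (8 R).
    """
    return R - math.sqrt(R * R - (s / 2) ** 2)

R = 5.0  # mm, hypothetical ball-end mill radius
for s in (0.2, 0.5, 1.0):
    print(s, round(scallop_height(R, s), 5), round(s * s / (8 * R), 5))
```

Comparing the exact and approximate columns shows why the allowance left between milling paths grows roughly quadratically with the stepover chosen by a strategy.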

  9. Simulation approach for the evaluation of tracking accuracy in radiotherapy: a preliminary study.

    PubMed

    Tanaka, Rie; Ichikawa, Katsuhiro; Mori, Shinichiro; Sanada, Sigeru

    2013-01-01

    Real-time tumor tracking in external radiotherapy can be achieved by diagnostic (kV) X-ray imaging with a dynamic flat-panel detector (FPD). It is important to keep the patient dose as low as possible while maintaining tracking accuracy. A simulation approach would be helpful to optimize the imaging conditions. This study was performed to develop a computer simulation platform based on a noise property of the imaging system for the evaluation of tracking accuracy at any noise level. Flat-field images were obtained using a direct-type dynamic FPD, and noise power spectrum (NPS) analysis was performed. The relationship between incident quantum number and pixel value was addressed, and a conversion function was created. The pixel values were converted into a map of quantum number using the conversion function, and the map was then input into the random number generator to simulate image noise. Simulation images were provided at different noise levels by changing the incident quantum numbers. Subsequently, an implanted marker was tracked automatically and the maximum tracking errors were calculated at different noise levels. The results indicated that the maximum tracking error increased with decreasing incident quantum number in flat-field images with an implanted marker. In addition, the range of errors increased with decreasing incident quantum number. The present method could be used to determine the relationship between image noise and tracking accuracy. The results indicated that the simulation approach would aid in determining exposure dose conditions according to the necessary tracking accuracy.
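    The reported relationship between incident quantum number and tracking error can be mimicked with a toy one-dimensional model (a simplified illustration, not the authors' NPS-based platform): quantum noise is approximated as Gaussian with standard deviation √n, and the implanted "marker" is tracked as the darkest pixel.

```python
import random

def simulate_frame(n_quanta, marker_pos=64, size=128):
    """Flat field of n_quanta per pixel with a 20%-absorbing marker;
    quantum (shot) noise approximated as Gaussian with std sqrt(mean)."""
    frame = []
    for x in range(size):
        mean = n_quanta * (0.8 if abs(x - marker_pos) <= 2 else 1.0)
        frame.append(mean + random.gauss(0.0, mean ** 0.5))
    return frame

def track_marker(frame):
    """Locate the marker as the darkest (minimum-signal) pixel."""
    return min(range(len(frame)), key=lambda x: frame[x])

random.seed(1)
for n in (100, 1000, 10000):
    errors = [abs(track_marker(simulate_frame(n)) - 64) for _ in range(200)]
    print(n, max(errors))
```

As in the study, the maximum tracking error grows as the incident quantum number (i.e., the dose per frame) decreases, because the marker contrast approaches the noise floor.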

  10. An evaluation of information retrieval accuracy with simulated OCR output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, W.B.; Harding, S.M.; Taghva, K.

    Optical Character Recognition (OCR) is a critical part of many text-based applications. Although some commercial systems use the output from OCR devices to index documents without editing, there is very little quantitative data on the impact of OCR errors on the accuracy of a text retrieval system. Because of the difficulty of constructing test collections to obtain this data, we have carried out evaluation using simulated OCR output on a variety of databases. The results show that high quality OCR devices have little effect on the accuracy of retrieval, but low quality devices used with databases of short documents can result in significant degradation.
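    The qualitative finding, that short documents suffer most from OCR errors, follows from term redundancy: a document is lost to a keyword query only if every occurrence of the term is corrupted. A minimal simulation (hypothetical corruption model, not the one used in the paper):

```python
import random

def corrupt(word, p, rng):
    """Simulate an OCR error: with probability p, garble one character."""
    if rng.random() < p and word:
        i = rng.randrange(len(word))
        word = word[:i] + "#" + word[i + 1:]
    return word

rng = random.Random(5)
# 200 documents, each containing the query term three times.
docs = [["retrieval", "accuracy", "ocr"] * 3 for _ in range(200)]
query = "retrieval"

for p in (0.0, 0.05, 0.5):
    corrupted = [[corrupt(w, p, rng) for w in d] for d in docs]
    recall = sum(query in d for d in corrupted) / len(docs)
    print(p, recall)
```

With three occurrences per document, the chance of losing a document is p³, so modest error rates barely hurt recall; a document containing the term only once would be lost with probability p, which is the short-document effect the study reports.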

  11. Influence of simulation parameters on the speed and accuracy of Monte Carlo calculations using PENEPMA

    NASA Astrophysics Data System (ADS)

    Llovet, X.; Salvat, F.

    2018-01-01

    The accuracy of Monte Carlo simulations of EPMA measurements is primarily determined by that of the adopted interaction models and atomic relaxation data. The code PENEPMA implements the most reliable general models available, and it is known to provide a realistic description of electron transport and X-ray emission. Nonetheless, efficiency (i.e., the simulation speed) of the code is determined by a number of simulation parameters that define the details of the electron tracking algorithm, which may also have an effect on the accuracy of the results. In addition, to reduce the computer time needed to obtain X-ray spectra with a given statistical accuracy, PENEPMA allows the use of several variance-reduction techniques, defined by a set of specific parameters. In this communication we analyse and discuss the effect of using different values of the simulation and variance-reduction parameters on the speed and accuracy of EPMA simulations. We also discuss the effectiveness of using multi-core computers along with a simple practical strategy implemented in PENEPMA.

  12. A comparison of the accuracy of intraoral scanners using an intraoral environment simulator

    PubMed Central

    Park, Hye-Nan; Lim, Young-Jun; Yi, Won-Jin

    2018-01-01

    PURPOSE The aim of this study was to design an intraoral environment simulator and to assess the accuracy of two intraoral scanners using the simulator. MATERIALS AND METHODS A box-shaped intraoral environment simulator was designed to simulate two specific intraoral environments. The cast was scanned 10 times by Identica Blue (MEDIT, Seoul, South Korea), TRIOS (3Shape, Copenhagen, Denmark), and CS3500 (Carestream Dental, Georgia, USA) scanners in the two simulated groups. The distances between the left and right canines (D3), first molars (D6), second molars (D7), and the left canine and left second molar (D37) were measured. The distance data were analyzed by the Kruskal-Wallis test. RESULTS The differences between intraoral environments were not statistically significant (P>.05). Between intraoral scanners, statistically significant differences (P<.05) were revealed by the Kruskal-Wallis test with regard to D3 and D6. CONCLUSION No difference due to the intraoral environment was revealed. The simulator may contribute to improving the accuracy of intraoral scanners in the future. PMID:29503715

  13. [Accuracy Check of Monte Carlo Simulation in Particle Therapy Using Gel Dosimeters].

    PubMed

    Furuta, Takuya

    2017-01-01

    Gel dosimeters are a three-dimensional imaging tool for dose distribution induced by radiations. They can be used for accuracy check of Monte Carlo simulation in particle therapy. An application was reviewed in this article. An inhomogeneous biological sample placing a gel dosimeter behind it was irradiated by carbon beam. The recorded dose distribution in the gel dosimeter reflected the inhomogeneity of the biological sample. Monte Carlo simulation was conducted by reconstructing the biological sample from its CT image. The accuracy of the particle transport by Monte Carlo simulation was checked by comparing the dose distribution in the gel dosimeter between simulation and experiment.

  14. The Impact of Sea Ice Concentration Accuracies on Climate Model Simulations with the GISS GCM

    NASA Technical Reports Server (NTRS)

    Parkinson, Claire L.; Rind, David; Healy, Richard J.; Martinson, Douglas G.; Zukor, Dorothy J. (Technical Monitor)

    2000-01-01

    The Goddard Institute for Space Studies global climate model (GISS GCM) is used to examine the sensitivity of the simulated climate to sea ice concentration specifications in the type of simulation done in the Atmospheric Modeling Intercomparison Project (AMIP), with specified oceanic boundary conditions. Results show that sea ice concentration uncertainties of +/- 7% can affect simulated regional temperatures by more than 6 C, and biases in sea ice concentrations of +7% and -7% alter simulated annually averaged global surface air temperatures by -0.10 C and +0.17 C, respectively, over those in the control simulation. The resulting 0.27 C difference in simulated annual global surface air temperatures is reduced by a third, to 0.18 C, when considering instead biases of +4% and -4%. More broadly, least-squares fits through the temperature results of 17 simulations with ice concentration input changes ranging from increases of 50% versus the control simulation to decreases of 50% yield a yearly average global impact of 0.0107 C warming for every 1% ice concentration decrease, i.e., 1.07 C warming for the full +50% to -50% range. Regionally and on a monthly average basis, the differences can be far greater, especially in the polar regions, where wintertime contrasts between the +50% and -50% cases can exceed 30 C. However, few statistically significant effects are found outside the polar latitudes, and temperature effects over the non-polar oceans tend to be under 1 C, due in part to the specification of an unvarying annual cycle of sea surface temperatures. The +/- 7% and +/- 4% results provide bounds on the impact (on GISS GCM simulations making use of satellite data) of satellite-derived ice concentration inaccuracies, +/- 7% being the current estimated average accuracy of satellite retrievals and +/- 4% being the anticipated improved average accuracy for upcoming satellite instruments.
Results show that the impact on simulated temperatures of imposed ice concentration
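    The per-percent sensitivity quoted above comes from least-squares fits through the results of 17 simulations. The ordinary least-squares slope arithmetic can be sketched as follows (the data points are hypothetical, constructed to be exactly linear at the reported 0.0107 C per 1%):

```python
# Hypothetical ice-concentration changes (%) vs. simulated global temperature
# change (deg C), consistent with ~0.0107 C warming per 1% ice decrease.
changes = [-50, -30, -14, -7, -4, 0, 4, 7, 14, 30, 50]
slope_true = -0.0107          # deg C per +1% ice concentration
temps = [slope_true * c for c in changes]

# Ordinary least-squares slope for the linear model y = a + b x.
n = len(changes)
mx = sum(changes) / n
my = sum(temps) / n
b = sum((x - mx) * (y - my) for x, y in zip(changes, temps)) / \
    sum((x - mx) ** 2 for x in changes)
print(round(b, 4))
```

On exactly linear data the fit recovers the slope, so a 100-point swing (+50% to -50%) corresponds to 100 × 0.0107 ≈ 1.07 C, matching the abstract.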

  15. Monte-Carlo Simulation for Accuracy Assessment of a Single Camera Navigation System

    NASA Astrophysics Data System (ADS)

    Bethmann, F.; Luhmann, T.

    2012-07-01

    The paper describes a simulation-based optimization of an optical tracking system that is used as a 6DOF navigation system for neurosurgery. Compared to classical systems used in clinical navigation, the presented system has two unique properties: firstly, the system will be miniaturized and integrated into an operating microscope for neurosurgery; secondly, due to miniaturization a single camera approach has been designed. Single camera techniques for 6DOF measurements show a special sensitivity against weak geometric configurations between camera and object. In addition, the achievable accuracy potential depends significantly on the geometric properties of the tracked objects (locators). Besides the quality and stability of the targets used on the locator, their geometric configuration is of major importance. In the following, the development and investigation of a simulation program is presented which allows for the assessment and optimization of the system with respect to accuracy. Different system parameters can be altered, as well as different scenarios indicating the operational use of the system. Measurement deviations are estimated based on the Monte-Carlo method. Practical measurements validate the correctness of the numerical simulation results.
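    Monte-Carlo estimation of measurement deviations, as used in the paper, can be sketched in a drastically simplified form (a single range-from-angular-size measurement; all geometry and noise values are hypothetical): perturb the inputs with random noise, re-evaluate the measurement model many times, and take the spread of the outputs as the accuracy estimate.

```python
import math
import random

# Simplified single-camera range estimate: distance inferred from the
# apparent angular size of a target of known physical size.
target_size = 0.10   # m, hypothetical locator extent
true_range = 2.0     # m
true_angle = 2 * math.atan(target_size / (2 * true_range))
angle_noise = 1e-4   # rad, assumed angular measurement noise

random.seed(3)
ranges = []
for _ in range(20000):
    a = true_angle + random.gauss(0.0, angle_noise)
    ranges.append(target_size / (2 * math.tan(a / 2)))  # invert the model

mean_r = sum(ranges) / len(ranges)
std_r = (sum((r - mean_r) ** 2 for r in ranges) / len(ranges)) ** 0.5
print(round(mean_r, 3), round(std_r, 4))
```

The resulting standard deviation (≈ 4 mm here) is the Monte-Carlo accuracy estimate; sweeping geometry parameters instead of noise yields the kind of trade-off curves the paper reports.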

  16. GF-7 Imaging Simulation and Dsm Accuracy Estimate

    NASA Astrophysics Data System (ADS)

    Yue, Q.; Tang, X.; Gao, X.

    2017-05-01

    GF-7 is a two-line-array stereo imaging satellite for surveying and mapping scheduled for launch in 2018. Its resolution is about 0.8 m at the subastral point, corresponding to a swath width of 20 km, and the viewing angles of its forward and backward cameras are 5 and 26 degrees. This paper proposes an imaging simulation method for GF-7 stereo images. WorldView-2 stereo images were used as the basic data for simulation. That is, we did not use a DSM and DOM as basic data (an "ortho-to-stereo" method) but used a "stereo-to-stereo" method, which better reflects the differences in geometry and radiation at different looking angles. The drawback is that geometric error is introduced by two factors: the different looking angles of the basic and simulated images, and inaccurate or missing ground reference data. We generated a DSM from the WorldView-2 stereo images. The WorldView-2 DSM was used not only as the reference DSM to estimate the accuracy of the DSM generated from the simulated GF-7 stereo images, but also as "ground truth" to establish the relationship between WorldView-2 image points and simulated image points. Static MTF was simulated on the instantaneous focal plane "image" by filtering. SNR was simulated in the electronic sense; that is, the digital value of a WorldView-2 image point was converted to radiance and used as the radiance seen by the simulated GF-7 camera. This radiance was converted to an electron count n according to the physical parameters of the GF-7 camera. The noise electron count n1 was taken as a random number between -√n and √n. The overall electron count obtained by the TDI CCD was summed and converted to the digital value of the simulated GF-7 image. Sinusoidal curves with different amplitudes, frequencies, and initial phases were used as attitude curves. Geometric installation errors of the CCD tiles were also simulated, considering rotation and translation factors. An accuracy estimate was made for the DSM generated
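    The abstract's SNR model, an electron count n per pixel with noise drawn uniformly from [-√n, +√n], can be sketched as follows (the radiance-to-electron gain and TDI stage count below are hypothetical placeholders, not GF-7 camera parameters):

```python
import random

def simulate_pixel(radiance, gain=500.0, tdi_stages=8):
    """Convert radiance to a TDI CCD electron count with the abstract's
    noise model: per-stage noise uniform in [-sqrt(n), +sqrt(n)]."""
    total = 0.0
    for _ in range(tdi_stages):
        n = radiance * gain                      # electrons per TDI stage
        noise = random.uniform(-n ** 0.5, n ** 0.5)
        total += n + noise
    return total

random.seed(7)
samples = [simulate_pixel(0.2) for _ in range(1000)]
mean = sum(samples) / len(samples)
print(round(mean))  # close to 8 stages * 0.2 * 500 = 800 electrons
```

Summing over TDI stages averages the per-stage noise down, which is exactly why TDI CCDs improve SNR at a given line rate.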

  17. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulating models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that regional specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.

  18. 4D dose simulation in volumetric arc therapy: Accuracy and affecting parameters

    PubMed Central

    Werner, René

    2017-01-01

    Radiotherapy of lung and liver lesions has changed from normofractioned 3D-CRT to stereotactic treatment in a single or few fractions, often employing volumetric arc therapy (VMAT)-based techniques. Potential unintended interference of respiratory target motion and dynamically changing beam parameters during VMAT dose delivery motivates establishing 4D quality assurance (4D QA) procedures to assess the appropriateness of generated VMAT treatment plans when taking into account patient-specific motion characteristics. Current approaches are motion phantom-based 4D QA and image-based 4D VMAT dose simulation. Whereas phantom-based 4D QA is usually restricted to a small number of measurements, the computational approaches allow simulating many motion scenarios. However, 4D VMAT dose simulation depends on various input parameters, influencing the estimated doses and compromising simulation reliability. Thus, aiming at routine use of simulation-based 4D VMAT QA, the impact of such parameters as well as the overall accuracy of the 4D VMAT dose simulation has to be studied in detail, which is the topic of the present work. In detail, we introduce the principles of 4D VMAT dose simulation, identify influencing parameters and assess their impact on 4D dose simulation accuracy by comparison of simulated motion-affected dose distributions to corresponding dosimetric motion phantom measurements. Exploiting an ITV-based treatment planning approach, VMAT treatment plans were generated for a motion phantom and different motion scenarios (sinusoidal motion of different period/direction; regular/irregular motion). 4D VMAT dose simulation results and dose measurements were compared by local 3% / 3 mm γ-evaluation, with the measured dose distributions serving as ground truth. Overall γ-passing rates of simulations and dynamic measurements ranged from 97% to 100% (mean across all motion scenarios: 98% ± 1%); corresponding values for comparison of different day repeat measurements were

  19. 4D dose simulation in volumetric arc therapy: Accuracy and affecting parameters.

    PubMed

    Sothmann, Thilo; Gauer, Tobias; Werner, René

    2017-01-01

    Radiotherapy of lung and liver lesions has changed from normofractioned 3D-CRT to stereotactic treatment in a single or few fractions, often employing volumetric arc therapy (VMAT)-based techniques. Potential unintended interference of respiratory target motion and dynamically changing beam parameters during VMAT dose delivery motivates establishing 4D quality assurance (4D QA) procedures to assess the appropriateness of generated VMAT treatment plans when taking into account patient-specific motion characteristics. Current approaches are motion phantom-based 4D QA and image-based 4D VMAT dose simulation. Whereas phantom-based 4D QA is usually restricted to a small number of measurements, the computational approaches allow simulating many motion scenarios. However, 4D VMAT dose simulation depends on various input parameters, influencing the estimated doses and compromising simulation reliability. Thus, aiming at routine use of simulation-based 4D VMAT QA, the impact of such parameters as well as the overall accuracy of the 4D VMAT dose simulation has to be studied in detail, which is the topic of the present work. In detail, we introduce the principles of 4D VMAT dose simulation, identify influencing parameters and assess their impact on 4D dose simulation accuracy by comparison of simulated motion-affected dose distributions to corresponding dosimetric motion phantom measurements. Exploiting an ITV-based treatment planning approach, VMAT treatment plans were generated for a motion phantom and different motion scenarios (sinusoidal motion of different period/direction; regular/irregular motion). 4D VMAT dose simulation results and dose measurements were compared by local 3% / 3 mm γ-evaluation, with the measured dose distributions serving as ground truth. Overall γ-passing rates of simulations and dynamic measurements ranged from 97% to 100% (mean across all motion scenarios: 98% ± 1%); corresponding values for comparison of different day repeat measurements were
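    The 3% / 3 mm γ-evaluation used above combines a dose-difference criterion with a distance-to-agreement criterion. A minimal one-dimensional, global-normalization variant (a simplification of the local γ-analysis the authors used; the dose profiles are hypothetical) can be sketched as:

```python
def gamma_pass_rate(reference, evaluated, spacing=1.0, dd=0.03, dta=3.0):
    """1-D gamma evaluation: dose difference dd as a fraction of the maximum
    reference dose, distance-to-agreement dta in mm, point spacing in mm.
    A reference point passes if min over evaluated points of
    sqrt((dist/dta)^2 + (dose_diff/dd_norm)^2) <= 1."""
    d_norm = dd * max(reference)
    passes = []
    for i, dr in enumerate(reference):
        gamma2 = min(
            ((i - j) * spacing / dta) ** 2 + ((de - dr) / d_norm) ** 2
            for j, de in enumerate(evaluated)
        )
        passes.append(gamma2 <= 1.0)
    return sum(passes) / len(passes)

# Hypothetical 1-D dose profiles: a flat-topped field shifted by 2 mm.
reference = [0, 0, 50, 100, 100, 100, 100, 100, 50, 0, 0]
shifted = reference[2:] + [0, 0]          # evaluated profile, 2 mm shift
print(gamma_pass_rate(reference, shifted))
```

A 2 mm shift stays inside the 3 mm distance-to-agreement for most points, so only the points at the displaced field edge fail, which is why γ-passing rates near 100% indicate clinically acceptable agreement.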

  20. Assessing accuracy of point fire intervals across landscapes with simulation modelling

    Treesearch

    Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall

    2007-01-01

    We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...

  1. Line-of-sight pointing accuracy/stability analysis and computer simulation for small spacecraft

    NASA Astrophysics Data System (ADS)

    Algrain, Marcelo C.; Powers, Richard M.

    1996-06-01

    This paper presents a case study where a comprehensive computer simulation is developed to determine the driving factors contributing to spacecraft pointing accuracy and stability. The simulation is implemented using XMATH/SystemBuild software from Integrated Systems, Inc. The paper is written in a tutorial manner and models for major system components are described. Among them are the spacecraft bus, attitude controller, reaction wheel assembly, star-tracker unit, inertial reference unit, and gyro drift estimators (Kalman filter). The predicted spacecraft performance is analyzed for a variety of input commands and system disturbances. The primary deterministic inputs are desired attitude angles and rate setpoints. The stochastic inputs include random torque disturbances acting on the spacecraft, random gyro bias noise, gyro random walk, and star-tracker noise. These inputs are varied over a wide range to determine their effects on pointing accuracy and stability. The results are presented in the form of trade-off curves designed to facilitate the proper selection of subsystems so that overall spacecraft pointing accuracy and stability requirements are met.

  2. Accuracy of flowmeters measuring horizontal groundwater flow in an unconsolidated aquifer simulator.

    USGS Publications Warehouse

    Bayless, E.R.; Mandell, Wayne A.; Ursic, James R.

    2011-01-01

    Borehole flowmeters that measure horizontal flow velocity and direction of groundwater flow are being increasingly applied to a wide variety of environmental problems. This study was carried out to evaluate the measurement accuracy of several types of flowmeters in an unconsolidated aquifer simulator. Flowmeter response to hydraulic gradient, aquifer properties, and well-screen construction was measured during 2003 and 2005 at the U.S. Geological Survey Hydrologic Instrumentation Facility in Bay St. Louis, Mississippi. The flowmeters tested included a commercially available heat-pulse flowmeter, an acoustic Doppler flowmeter, a scanning colloidal borescope flowmeter, and a fluid-conductivity logging system. Results of the study indicated that at least one flowmeter was capable of measuring borehole flow velocity and direction in most simulated conditions. The mean error in direction measurements ranged from 15.1 degrees to 23.5 degrees, and the directional accuracy of all tested flowmeters improved with increasing hydraulic gradient. Darcy velocities examined in this study ranged from 4.3 to 155 ft/d. For many plots comparing the simulated and measured Darcy velocity, the squared correlation coefficient (r2) exceeded 0.92. The accuracy of velocity measurements varied with well construction and velocity magnitude. The use of horizontal flowmeters in environmental studies appears promising, but applications may require more than one type of flowmeter to span the range of conditions encountered in the field. Interpreting flowmeter data from field settings may be complicated by geologic heterogeneity, preferential flow, vertical flow, constricted screen openings, and nonoptimal screen orientation.

  3. Influence of learner knowledge and case complexity on handover accuracy and cognitive load: results from a simulation study.

    PubMed

    Young, John Q; van Dijk, Savannah M; O'Sullivan, Patricia S; Custers, Eugene J; Irby, David M; Ten Cate, Olle

    2016-09-01

    The handover represents a high-risk event in which errors are common and lead to patient harm. A better understanding of the cognitive mechanisms of handover errors is essential to improving handover education and practice. This paper reports on an experiment conducted to study the effects of learner knowledge, case complexity (i.e. cases with or without a clear diagnosis) and their interaction on handover accuracy and cognitive load. Participants were 52 Dutch medical students in Years 2 and 6. The experiment employed a repeated-measures design with two explanatory variables: case complexity (simple or complex) as the within-subject variable, and learner knowledge (as indicated by illness script maturity) as the between-subject covariate. The dependent variables were handover accuracy and cognitive load. Each participant performed a total of four simulated handovers involving two simple cases and two complex cases. Higher illness script maturity predicted increased handover accuracy (p < 0.001) and lower cognitive load (p = 0.007). Case complexity did not independently affect either outcome. For handover accuracy, there was no interaction between case complexity and illness script maturity. For cognitive load, there was an interaction effect between illness script maturity and case complexity, indicating that more mature illness scripts reduced cognitive load less in complex cases than in simple cases. Students with more mature illness scripts performed more accurate handovers and experienced lower cognitive load. For cognitive load, these effects were more pronounced in simple than complex cases. If replicated, these findings suggest that handover curricula and protocols should provide support that varies according to the knowledge of the trainee. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  4. Contextual information influences diagnosis accuracy and decision making in simulated emergency medicine emergencies.

    PubMed

    McRobert, Allistair Paul; Causer, Joe; Vassiliadis, John; Watterson, Leonie; Kwan, James; Williams, Mark A

    2013-06-01

    It is well documented that adaptations in cognitive processes with increasing skill levels support decision making in multiple domains. We examined skill-based differences in cognitive processes in emergency medicine physicians, and whether performance was significantly influenced by the removal of contextual information related to a patient's medical history. Skilled (n=9) and less skilled (n=9) emergency medicine physicians responded to high-fidelity simulated scenarios under high- and low-context information conditions. Skilled physicians demonstrated higher diagnostic accuracy irrespective of condition, and were less affected by the removal of context-specific information compared with less skilled physicians. The skilled physicians generated more options, and selected better quality options during diagnostic reasoning compared with less skilled counterparts. These cognitive processes were active irrespective of the level of context-specific information presented, although high-context information enhanced understanding of the patients' symptoms resulting in higher diagnostic accuracy. Our findings have implications for scenario design and the manipulation of contextual information during simulation training.

  5. A comparison of the accuracy of intraoral scanners using an intraoral environment simulator.

    PubMed

    Park, Hye-Nan; Lim, Young-Jun; Yi, Won-Jin; Han, Jung-Suk; Lee, Seung-Pyo

    2018-02-01

    The aim of this study was to design an intraoral environment simulator and to assess the accuracy of two intraoral scanners using the simulator. A box-shaped intraoral environment simulator was designed to simulate two specific intraoral environments. The cast was scanned 10 times by Identica Blue (MEDIT, Seoul, South Korea), TRIOS (3Shape, Copenhagen, Denmark), and CS3500 (Carestream Dental, Georgia, USA) scanners in the two simulated groups. The distances between the left and right canines (D3), first molars (D6), second molars (D7), and the left canine and left second molar (D37) were measured. The distance data were analyzed by the Kruskal-Wallis test. The differences in intraoral environments were not statistically significant (P > .05). Between intraoral scanners, statistically significant differences (P < .05) were revealed by the Kruskal-Wallis test with regard to D3 and D6. No difference due to the intraoral environment was revealed. The simulator will contribute to the higher accuracy of intraoral scanners in the future.

  6. Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.

    PubMed

    Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2012-11-08

    A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) commonly used for routine clinical practice, and comparing the measured value with the true value (a known density of object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated, depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied with resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
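
    Reduced to 1D, the core of the PSF-based simulation is a convolution of a known object function with the measured PSF, followed by resampling at the clinical pixel size. A hedged sketch of that pipeline (the Gaussian PSF and all names/parameters are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def simulate_nodule_profile(diameter_mm, true_density, psf_sigma_mm,
                            pixel_mm=0.5, fov_mm=40.0):
    """PSF-based image simulation in 1D: a uniform nodule (the object
    function, with known true density) is blurred by the scanner PSF and
    then resampled at the clinical pixel size."""
    dx = 0.01                                    # fine grid for the object function
    x = np.arange(-fov_mm / 2, fov_mm / 2, dx)
    obj = np.where(np.abs(x) <= diameter_mm / 2, true_density, 0.0)
    psf = np.exp(-0.5 * (x / psf_sigma_mm) ** 2)
    psf /= psf.sum()                             # unit-area PSF
    img = np.convolve(obj, psf, mode="same")     # blurred "CT image"
    step = int(round(pixel_mm / dx))
    return x[::step], img[::step]                # resampled pixel grid
```

    Measuring a central ROI on the simulated image then quantifies the density bias: for a 3 mm nodule blurred by a 2 mm PSF the ROI mean falls well below the true density, whereas a 20 mm nodule is measured almost exactly.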

  7. Digital core based transmitted ultrasonic wave simulation and velocity accuracy analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Shan, Rui

    2016-06-01

    Transmitted ultrasonic wave simulation (TUWS) in a digital core is one of the important elements of digital rock physics and is used to study wave propagation in porous cores and calculate equivalent velocity. When simulating wave propagation in a 3D digital core, two additional layers are attached to the two surfaces perpendicular to the wave direction, and one planar wave source and two receiver arrays are properly installed. After source excitation, the two receivers record the incident and transmitted waves of the digital rock. The wave propagation velocity, which is taken as the velocity of the digital core, is computed from the picked peak-time difference between the two recorded waves. To evaluate the accuracy of TUWS, a digital core is fully saturated with gas, oil, and water to calculate the corresponding velocities. The velocities increase with decreasing wave frequency in the simulation frequency band, which is considered to be the result of scattering. When the pore fluids are varied from gas to oil and finally to water, the velocity-variation characteristics between the different frequencies are similar, approximately following the variation law of velocities obtained from linear elastic statics simulation (LESS), although their absolute values differ. However, LESS has been widely used. The results of this paper show that the transmitted ultrasonic simulation has high relative precision.
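
    The velocity pick described above reduces to dividing the travel distance by the peak-time difference of the two recorded waveforms. A minimal sketch with synthetic Gaussian pulses (all names and the pulse shape are illustrative):

```python
import numpy as np

def transit_velocity(t, incident, transmitted, sample_length):
    """Equivalent velocity from the picked peak-time difference between
    the incident and transmitted waveforms (TUWS-style pick)."""
    dt = t[np.argmax(np.abs(transmitted))] - t[np.argmax(np.abs(incident))]
    return sample_length / dt

# synthetic example: two pulses arriving 20 us apart across an 8 cm core
t = np.linspace(0.0, 1e-4, 10001)                 # 100 us record
pulse = lambda t0: np.exp(-((t - t0) * 1e6) ** 2)  # ~1 us wide pulse
v = transit_velocity(t, pulse(2e-5), pulse(4e-5), 0.08)   # -> 4000 m/s
```

    Real picks on dispersive, noisy waveforms would use envelope or cross-correlation picking rather than a bare argmax, but the velocity definition is the same.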

  8. Equations of State for Mixtures: Results from DFT Simulations of Xenon/Ethane Mixtures Compared to High Accuracy Validation Experiments on Z

    NASA Astrophysics Data System (ADS)

    Magyar, Rudolph

    2013-06-01

    We report a computational and validation study of equation of state (EOS) properties of liquid / dense plasma mixtures of xenon and ethane to explore and to illustrate the physics of the molecular scale mixing of light elements with heavy elements. Accurate EOS models are crucial to achieve high-fidelity hydrodynamics simulations of many high-energy-density phenomena such as inertial confinement fusion and strong shock waves. While the EOS is often tabulated for separate species, the equation of state for arbitrary mixtures is generally not available, requiring properties of the mixture to be approximated by combining physical properties of the pure systems. The main goal of this study is to assess how accurate this approximation is under shock conditions. Density functional theory molecular dynamics (DFT-MD) at elevated-temperature and pressure is used to assess the thermodynamics of the xenon-ethane mixture. The simulations are unbiased as to elemental species and therefore provide comparable accuracy when describing total energies, pressures, and other physical properties of mixtures as they do for pure systems. In addition, we have performed shock compression experiments using the Sandia Z-accelerator on pure xenon, ethane, and various mixture ratios thereof. The Hugoniot results are compared to the DFT-MD results and the predictions of different rules for combining EOS tables. The DFT-based simulation results compare well with the experimental points, and it is found that a mixing rule based on pressure equilibration performs reliably well for the mixtures considered. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  9. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters.

    PubMed

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-21

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.

  10. Kalman approach to accuracy management for interoperable heterogeneous model abstraction within an HLA-compliant simulation

    NASA Astrophysics Data System (ADS)

    Leskiw, Donald M.; Zhau, Junmei

    2000-06-01

    This paper reports on results from an ongoing project to develop methodologies for representing and managing multiple, concurrent levels of detail and enabling high performance computing using parallel arrays within distributed object-based simulation frameworks. At this time we present the methodology for representing and managing multiple, concurrent levels of detail and modeling accuracy by using a representation based on the Kalman approach for estimation. The Kalman System Model equations are used to represent model accuracy, Kalman Measurement Model equations provide transformations between heterogeneous levels of detail, and interoperability among disparate abstractions is provided using a form of the Kalman Update equations.
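
    As a sketch of the role the Kalman Update equations play in this scheme: a model abstraction's accuracy is carried as a covariance P, and a measurement-style update fuses it with another abstraction's estimate z of accuracy R, mapped between levels of detail by H. The code below is the standard Kalman update, offered as a hedged illustration rather than the authors' implementation:

```python
import numpy as np

def kalman_update(x, P, z, R, H):
    """Standard Kalman measurement update, read as accuracy management:
    P encodes the accuracy of the current abstraction, H maps between
    levels of detail, and the update fuses in a second estimate z of
    accuracy R, yielding a combined state with improved accuracy."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)            # fused state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P   # fused (smaller) covariance
    return x_new, P_new
```

    Fusing two equally accurate scalar estimates returns their average with half the variance, i.e. the combined abstraction is strictly more accurate than either input.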

  11. Clinical results of computerized tomography-based simulation with laser patient marking.

    PubMed

    Ragan, D P; Forman, J D; He, T; Mesina, C F

    1996-02-01

    Accuracy of a patient treatment portal marking device and computerized tomography (CT) simulation have been clinically tested. A CT-based simulator has been assembled based on a commercial CT scanner. This includes visualization software and a computer-controlled laser drawing device. This laser drawing device is used to transfer the setup, central axis, and/or radiation portals from the CT simulator to the patient for appropriate patient skin marking. A protocol for clinical testing is reported. Twenty-five prospectively, sequentially accessioned patients have been analyzed. The simulation process can be completed in an average time of 62 min. In many cases, the treatment portals can be designed and the patient marked in one session. Mechanical accuracy of the system was found to be within +/- 1 mm. The portal projection accuracy in clinical cases is observed to be better than +/- 1.2 mm. Operating costs are equivalent to the conventional simulation process it replaces. Computed tomography simulation is a clinically accurate substitute for conventional simulation when used with an appropriate patient marking system and digitally reconstructed radiographs. Personnel time spent in CT simulation is equivalent to time in conventional simulation.

  12. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul

    2014-09-01

    This report summarizes the result of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected on to a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel
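
    The fitting step described above is, at its core, weighted linear least squares: per-configuration bispectrum components form the design matrix and QM energies/forces the targets. A minimal sketch of that step (the real FitSnap.py workflow adds per-group weights, force and stress rows, and DAKOTA-driven hyperparameter search):

```python
import numpy as np

def fit_snap_coefficients(B, y, w):
    """Weighted linear least squares in the spirit of the SNAP fit.

    B : (n_rows, n_components) per-configuration descriptors
        (bispectrum components)
    y : (n_rows,) reference QM targets (energies/forces)
    w : (n_rows,) per-row weights
    Solves argmin_c || sqrt(w) * (B c - y) ||^2.
    """
    sw = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(sw[:, None] * B, sw * y, rcond=None)
    return coeffs
```

    With synthetic data generated from a known linear model, the true coefficients are recovered exactly, which is a useful sanity check before fitting real QM training sets.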

  13. Sensitivity of Tumor Motion Simulation Accuracy to Lung Biomechanical Modeling Approaches and Parameters

    PubMed Central

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu

    2015-01-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the Neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right (LR), anterior-posterior (AP), and superior-inferior (SI) directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation. PMID:26531324

  14. Trade-off study and computer simulation for assessing spacecraft pointing accuracy and stability capabilities

    NASA Astrophysics Data System (ADS)

    Algrain, Marcelo C.; Powers, Richard M.

    1997-05-01

    A case study, written in a tutorial manner, is presented where a comprehensive computer simulation is developed to determine the driving factors contributing to spacecraft pointing accuracy and stability. Models for major system components are described. Among them are spacecraft bus, attitude controller, reaction wheel assembly, star-tracker unit, inertial reference unit, and gyro drift estimators (Kalman filter). The predicted spacecraft performance is analyzed for a variety of input commands and system disturbances. The primary deterministic inputs are the desired attitude angles and rate set points. The stochastic inputs include random torque disturbances acting on the spacecraft, random gyro bias noise, gyro random walk, and star-tracker noise. These inputs are varied over a wide range to determine their effects on pointing accuracy and stability. The results are presented in the form of trade-off curves designed to facilitate the proper selection of subsystems so that overall spacecraft pointing accuracy and stability requirements are met.

  15. Simulation of thalamic prosthetic vision: reading accuracy, speed, and acuity in sighted humans.

    PubMed

    Vurro, Milena; Crowell, Anne Marie; Pezaris, John S

    2014-01-01

    The psychophysics of reading with artificial sight has received increasing attention as visual prostheses are becoming a real possibility to restore useful function to the blind through the coarse, pseudo-pixelized vision they generate. Studies to date have focused on simulating retinal and cortical prostheses; here we extend that work to report on thalamic designs. This study examined the reading performance of normally sighted human subjects using a simulation of three thalamic visual prostheses that varied in phosphene count, to help understand the level of functional ability afforded by thalamic designs in a task of daily living. Reading accuracy, reading speed, and reading acuity of 20 subjects were measured as a function of letter size, using a task based on the MNREAD chart. Results showed that fluid reading was feasible with appropriate combinations of letter size and phosphene count, and performance degraded smoothly as font size was decreased, with an approximate doubling of phosphene count resulting in an increase of 0.2 logMAR in acuity. Results here were consistent with previous results from our laboratory. Results were also consistent with those from the literature, despite using naive subjects who were not trained on the simulator, in contrast to other reports.

  16. Results of a joint NOAA/NASA sounder simulation study

    NASA Technical Reports Server (NTRS)

    Phillips, N.; Susskind, Joel; Mcmillin, L.

    1988-01-01

    This paper presents the results of a joint NOAA and NASA sounder simulation study in which the accuracies of atmospheric temperature profiles and surface skin temperature measurements retrieved from two sounders were compared: (1) the currently used IR temperature sounder HIRS2 (High-resolution Infrared Radiation Sounder 2); and (2) the recently proposed high-spectral-resolution IR sounder AMTS (Advanced Moisture and Temperature Sounder). Simulations were conducted for both clear and partial cloud conditions. Data were analyzed at NASA using a physical inversion technique and at NOAA using a statistical technique. Results show significant improvement of AMTS compared to HIRS2 for both clear and cloudy conditions. The improvements are indicated by both methods of data analysis, but the physical retrievals outperform the statistical retrievals.

  17. Testing the accuracy of clustering redshifts with simulations

    NASA Astrophysics Data System (ADS)

    Scottez, V.; Benoit-Lévy, A.; Coupon, J.; Ilbert, O.; Mellier, Y.

    2018-03-01

    We explore the accuracy of clustering-based redshift inference within the MICE2 simulation. This method uses the spatial clustering of galaxies between a spectroscopic reference sample and an unknown sample. This study gives an estimate of the reachable accuracy of this method. First, we discuss the requirements on the number of objects in the two samples, confirming that this method does not require a representative spectroscopic sample for calibration. In the context of the next generation of cosmological surveys, we estimated that the density of the Quasi Stellar Objects in BOSS allows us to reach 0.2 per cent accuracy in the mean redshift. Secondly, we estimate individual redshifts for galaxies in the densest regions of colour space (~30 per cent of the galaxies) without using the photometric redshift procedure. The advantage of this procedure is threefold. It allows: (i) the use of cluster-zs for any field in astronomy, (ii) the possibility to combine photo-zs and cluster-zs to get an improved redshift estimation, (iii) the use of cluster-zs to define tomographic bins for weak lensing. Finally, we explore this last option and build five cluster-z selected tomographic bins from redshift 0.2 to 1. We found a bias on the mean redshift estimate of 0.002 per bin. We conclude that cluster-zs could be used as a primary redshift estimator by the next generation of cosmological surveys.
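
    The counting skeleton of clustering-based redshift inference can be sketched as follows: for each reference redshift slice, count unknown-reference pairs within a transverse radius and normalize per reference object. This toy version omits the random-catalogue pair-count estimators and galaxy-bias corrections that real cluster-z pipelines require; all names are illustrative:

```python
import numpy as np

def clustering_dndz(unknown_xy, ref_xy, ref_z, z_bins, r_max=1.0):
    """Toy clustering-redshift estimator: excess of unknown objects near
    spectroscopic reference objects, per redshift bin, normalized to a
    probability distribution over the bins."""
    counts = np.zeros(len(z_bins) - 1)
    for k in range(len(z_bins) - 1):
        sel = (ref_z >= z_bins[k]) & (ref_z < z_bins[k + 1])
        refs = ref_xy[sel]
        if len(refs) == 0:
            continue
        # squared distances between every unknown and every reference
        d2 = ((unknown_xy[:, None, :] - refs[None, :, :]) ** 2).sum(-1)
        counts[k] = (d2 <= r_max ** 2).sum() / len(refs)  # pairs per reference
    return counts / counts.sum()
```

    An unknown sample that spatially overlaps only the low-redshift reference patch is assigned essentially all of its probability in that bin, which is the signal the method exploits.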

  18. Non-conforming finite-element formulation for cardiac electrophysiology: an effective approach to reduce the computation time of heart simulations without compromising accuracy

    NASA Astrophysics Data System (ADS)

    Hurtado, Daniel E.; Rojas, Guillermo

    2018-04-01

    Computer simulations constitute a powerful tool for studying the electrical activity of the human heart, but computational effort remains prohibitively high. In order to recover accurate conduction velocities and wavefront shapes, the mesh size in linear element (Q1) formulations cannot exceed 0.1 mm. Here we propose a novel non-conforming finite-element formulation for the non-linear cardiac electrophysiology problem that results in accurate wavefront shapes and lower mesh-dependence in the conduction velocity, while retaining the same number of global degrees of freedom as Q1 formulations. As a result, coarser discretizations of cardiac domains can be employed in simulations without significant loss of accuracy, thus reducing the overall computational effort. We demonstrate the applicability of our formulation in biventricular simulations using a coarse mesh size of ~1 mm, and show that the activation wave pattern closely follows that obtained in fine-mesh simulations at a fraction of the computation time, thus improving the accuracy-efficiency trade-off of cardiac simulations.

  19. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5 nm, it becomes crucial to also include systematic error contributions, which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1 nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10 nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  20. Parallel Decomposition of the Fictitious Lagrangian Algorithm and its Accuracy for Molecular Dynamics Simulations of Semiconductors.

    NASA Astrophysics Data System (ADS)

    Yeh, Mei-Ling

    We have performed a parallel decomposition of the fictitious Lagrangian method for molecular dynamics with a tight-binding total energy expression for the hypercube computer. This is the first time in the literature that the dynamical simulation of semiconducting systems containing more than 512 silicon atoms has become possible with the electrons treated as quantum particles. With the utilization of the Intel Paragon system, our timing analysis predicts that our code is expected to perform realistic simulations on very large systems consisting of thousands of atoms with time requirements of the order of tens of hours. Timing results and performance analysis of our parallel code are presented in terms of calculation time, communication time, and setup time. The accuracy of the fictitious Lagrangian method in molecular dynamics simulation is also investigated, especially the energy conservation of the total energy of ions. We find that the accuracy of the fictitious Lagrangian scheme in small silicon cluster and very large silicon system simulations is good for as long as the simulations proceed, even though we quench the electronic coordinates to the Born-Oppenheimer surface only in the beginning of the run. The kinetic energy of electrons does not increase as time goes on, and the energy conservation of the ionic subsystem remains very good. This means that, as far as the ionic subsystem is concerned, the electrons are on the average in the true quantum ground states. We also tie up some odds and ends regarding a few remaining questions about the fictitious Lagrangian method, such as the difference between the results obtained from the Gram-Schmidt and SHAKE methods of orthonormalization, and differences between simulations where the electrons are quenched to the Born-Oppenheimer surface only once compared with periodic quenching.

  1. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    PubMed

    Maldonado, G; Greenland, S

    1998-07-01

    A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
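The factoring strategy described above amounts to replacing an ordered exposure with a set of dichotomous indicator columns before fitting the Poisson model. A minimal sketch of that transformation in Python (function name and cut points are illustrative, not taken from the study):

```python
import numpy as np

def factor(exposure, cut_points):
    """Turn an ordered exposure into dichotomous indicator columns
    ('factoring'), the alternative to entering the exposure as a single
    linear term. The lowest category serves as the referent and gets
    no column."""
    exposure = np.asarray(exposure, float)
    cats = np.digitize(exposure, cut_points)   # category 0..len(cut_points)
    n_cats = len(cut_points) + 1
    cols = [(cats == k).astype(float) for k in range(1, n_cats)]
    return np.column_stack(cols)

# Four subjects, exposure categorized at cut points 1.0 and 5.0.
X = factor([0.0, 1.5, 3.0, 7.0], cut_points=[1.0, 5.0])
```

The indicator matrix `X` would then replace the single exposure column in the design matrix of the Poisson regression.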

  2. Sampling Simulations for Assessing the Accuracy of U.S. Agricultural Crop Mapping from Remotely Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Dwyer, Linnea; Yadav, Kamini; Congalton, Russell G.

    2017-04-01

Providing adequate food and water for a growing global population continues to be a major challenge. Mapping and monitoring crops are useful tools for estimating the extent of crop productivity. GFSAD30 (Global Food Security Analysis Data at 30 m) is a NASA-funded program that is producing global cropland maps using field measurements and remote sensing images. The program studies 8 major crop types and includes information on cropland area/extent, whether crops are irrigated or rainfed, and cropping intensities. Using results from the US and the extensive reference data available in the CDL (USDA Crop Data Layer), we will experiment with various sampling simulations to determine optimal sampling for thematic map accuracy assessment. These simulations will vary the sampling unit, the sampling strategy, and the sample number. Results of these simulations will allow us to recommend assessment approaches for different cropping scenarios.

  3. A high accuracy sequential solver for simulation and active control of a longitudinal combustion instability

    NASA Technical Reports Server (NTRS)

    Shyy, W.; Thakur, S.; Udaykumar, H. S.

    1993-01-01

A high-accuracy convection scheme using a sequential solution technique has been developed and applied to simulate a longitudinal combustion instability and its active control. The scheme has been devised in the spirit of the Total Variation Diminishing (TVD) concept with special source-term treatment. Because of the substantial heat-release effect, a clear delineation of the key elements employed by the scheme, i.e., the adjustable damping factor and the source-term treatment, has been made. Compared with the first-order upwind scheme previously utilized, the present results exhibit less damping and are free from spurious oscillations, offering improved quantitative accuracy while confirming the spectral analysis reported earlier. A simple feedback type of active control has been found capable of enhancing or attenuating the magnitude of the combustion instability.

  4. Monte Carlo Simulations: Number of Iterations and Accuracy

    DTIC Science & Technology

    2015-07-01

iterations because of its added complexity compared to the WM. We recommend that the WM be used for a priori estimates of the number of MC ...inaccurate.15 Although the WM and the WSM have generally proven useful in estimating the number of MC iterations and addressing the accuracy of the MC ...Theorem 3 3. A Priori Estimate of Number of MC Iterations 7 4. MC Result Accuracy 11 5. Using Percentage Error of the Mean to Estimate Number of MC
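The report's WM and WSM formulas are not reproduced in this excerpt. A generic a priori estimate of the required number of MC iterations, based only on the central limit theorem and a pilot run's statistics, can be sketched as follows (this is a standard textbook estimate, not the report's method):

```python
import math

def mc_iterations_needed(sample_mean, sample_std, rel_error, z=1.96):
    """A priori estimate of the number of Monte Carlo iterations needed
    so that the relative error of the mean falls below rel_error at
    ~95% confidence (z = 1.96), from the central limit theorem:
    n >= (z * s / (eps * |mean|))**2."""
    return math.ceil((z * sample_std / (rel_error * abs(sample_mean))) ** 2)

# Pilot run: mean 10.0, standard deviation 2.0; target 1% relative error.
n = mc_iterations_needed(10.0, 2.0, 0.01)
```

The estimate is only as good as the pilot statistics, which is exactly the caveat the report raises about such a priori methods.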

  5. Effects of training and simulated combat stress on leg tourniquet application accuracy, time, and effectiveness.

    PubMed

    Schreckengaust, Richard; Littlejohn, Lanny; Zarow, Gregory J

    2014-02-01

The lower-extremity tourniquet failure rate remains significantly higher in combat than in preclinical testing, so we hypothesized that tourniquet placement accuracy, speed, and effectiveness would improve during training and decline during simulated combat. Navy Hospital Corpsmen (N = 89) enrolled in a Tactical Combat Casualty Care training course in preparation for deployment applied the Combat Application Tourniquet (CAT) and the Special Operations Forces Tactical Tourniquet (SOFT-T) on day 1 and day 4 of classroom training, then under simulated combat, wherein participants ran an obstacle course to apply a tourniquet while wearing full body armor and avoiding simulated small-arms fire (paint balls). Application time and pulse-elimination effectiveness improved from day 1 to day 4 (p < 0.005). Under simulated combat, application time slowed significantly (p < 0.001), whereas accuracy and effectiveness declined slightly. Pulse elimination was poor for the CAT (25% failure) and the SOFT-T (60% failure) even in classroom conditions following training. The CAT was more quickly applied (p < 0.005) and more effective (p < 0.002) than the SOFT-T. Training fostered fast and effective application of leg tourniquets, while performance declined under simulated combat. The inherent efficacy of tourniquet products contributes to high failure rates under combat conditions, pointing to the need for superior tourniquets and for rigorous deployment-preparation training in simulated combat scenarios. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.

  6. Accuracy of volumetric measurement of simulated root resorption lacunas based on cone beam computed tomography.

    PubMed

    Wang, Y; He, S; Guo, Y; Wang, S; Chen, S

    2013-08-01

To evaluate the accuracy of volumetric measurement of simulated root resorption cavities based on cone beam computed tomography (CBCT), in comparison with micro-computed tomography (Micro-CT), which served as the reference. The State Key Laboratory of Oral Diseases at Sichuan University. Thirty-two bovine teeth were included for standardized CBCT scanning and Micro-CT scanning before and after the simulation of different degrees of root resorption. The teeth were divided into three groups according to the depth of the root resorption cavity (group 1: 0.15, 0.2, 0.3 mm; group 2: 0.6, 1.0 mm; group 3: 1.5, 2.0, 3.0 mm). Each depth included four specimens. Differences in tooth volume before and after simulated root resorption were then calculated from the CBCT and Micro-CT scans, respectively. The overall between-method agreement of the measurements was evaluated using the concordance correlation coefficient (CCC). For the first group, the average volume of the resorption cavity was 1.07 mm³, and the between-method agreement for the volume changes was low (CCC = 0.098). For the second and third groups, the average volumes of the resorption cavities were 3.47 and 6.73 mm³, respectively, and the between-method agreements were good (CCC = 0.828 and 0.895, respectively). The accuracy of 3-D quantitative volumetric measurement of simulated root resorption based on CBCT was fairly good in detecting simulated resorption cavities larger than 3.47 mm³, while it was not sufficient for measuring resorption cavities smaller than 1.07 mm³. This method could be applied in future studies of root resorption, although further studies are required to improve its accuracy. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
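The agreement statistic reported above, Lin's concordance correlation coefficient, has a standard closed form and is easy to compute. A sketch (the textbook CCC definition, not code from the study):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient, the agreement metric
    used to compare CBCT and Micro-CT volume changes:
    CCC = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()           # population variances (ddof=0)
    sxy = ((x - mx) * (y - my)).mean()    # population covariance
    return 2.0 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

Perfect agreement gives CCC = 1; unlike Pearson's r, a constant offset between the two methods lowers the CCC.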

  7. Aging adult skull remains through radiological density estimates: A comparison of different computed tomography systems and the use of computer simulations to judge the accuracy of results.

    PubMed

    Obert, Martin; Kubelt, Carolin; Schaaf, Thomas; Dassinger, Benjamin; Grams, Astrid; Gizewski, Elke R; Krombach, Gabriele A; Verhoff, Marcel A

    2013-05-10

The objective of this article was to explore age-at-death estimates in forensic medicine that are methodically based on age-dependent, radiologically defined bone-density (HC) decay and that were investigated with a standard clinical computed tomography (CT) system. Such density decay was formerly discovered with a high-resolution flat-panel CT in the skulls of adult females. The development of a standard-CT methodology for age estimations (with thousands of installations) would have the advantage of being applicable everywhere, whereas only a few flat-panel prototype CT systems are in use worldwide. A multi-slice CT scanner (MSCT) was used to obtain 22,773 images from 173 European human skulls (89 male, 84 female), taken from a population of patients of the Department of Neuroradiology at the University Hospital Giessen and Marburg during 2010 and 2011. An automated image analysis was carried out to evaluate HC for all images. The age dependence of HC was studied by correlation analysis. The prediction accuracy of age-at-death estimates was calculated. Computer simulations were carried out to explore the influence of noise on the accuracy of age predictions. Human skull HC values scatter strongly as a function of age for both sexes. Adult male skull bone density remains constant during lifetime. Adult female HC decays during lifetime, as indicated by a correlation coefficient (CC) of -0.53. Prediction errors for age-at-death estimates for both of the scanners used are in the range of ±18 years at a 75% confidence interval (CI). Computer simulations indicate that this is the best that can be expected for such noisy data. Our results indicate that HC decay is indeed present in adult females and that it can be demonstrated both by standard and by high-resolution CT methods, applied to different subject groups of an identical population. The weak correlation between HC and age found by both CT methods only enables a method to estimate age-at-death with limited

  8. Accuracy of finite-difference modeling of seismic waves : Simulation versus laboratory measurements

    NASA Astrophysics Data System (ADS)

    Arntsen, B.

    2017-12-01

The finite-difference technique for numerical modeling of seismic waves is still important and, in some areas, extensively used. For exploration purposes, finite-difference simulation is at the core of both traditional imaging techniques such as reverse-time migration and more elaborate full-waveform inversion techniques. The accuracy and fidelity of finite-difference simulation of seismic waves are hard to quantify, and meaningful error analysis is really only easily available for simplistic media. A possible alternative to theoretical error analysis is provided by comparing finite-difference simulated data with laboratory data created using a scale model. The advantage of this approach is the accurate knowledge of the model, within measurement precision, and of the locations of sources and receivers. We use a model made of PVC immersed in water, containing horizontal and tilted interfaces together with several spherical objects, to generate ultrasonic pressure reflection measurements. The physical dimensions of the model are of the order of a meter, which after scaling represents a model with dimensions of the order of 10 kilometers and frequencies in the range of one to thirty hertz. We find that for plane horizontal interfaces the laboratory data can be reproduced by the finite-difference scheme with relatively small error, but for steeply tilted interfaces the error increases. For spherical interfaces the discrepancy between laboratory data and simulated data is sometimes much more severe, to the extent that it is not possible to simulate reflections from parts of highly curved bodies. The results are important in view of the fact that finite-difference modeling is often at the core of imaging and inversion algorithms tackling complicated geological areas with highly curved interfaces.
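A toy version of the kind of solver being benchmarked, a standard second-order finite-difference scheme for the 1-D acoustic wave equation, can be sketched as follows (grid size, velocity, and source are illustrative, not the scale-model parameters):

```python
import numpy as np

def fd_wave_1d(nx=201, nt=400, c=1500.0, dx=5.0, cfl=0.5):
    """Minimal 1-D acoustic wave solver, 2nd order in space and time:
    u^{n+1} = 2u^n - u^{n-1} + C^2 (u_{i+1} - 2u_i + u_{i-1}),
    with Courant number C = c*dt/dx <= 1 required for stability."""
    dt = cfl * dx / c
    c2 = (c * dt / dx) ** 2
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    u[nx // 2] = 1.0                       # point source at the centre
    for _ in range(nt):
        u_next = np.zeros(nx)              # fixed (zero) boundaries
        u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                        + c2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return u

wave = fd_wave_1d()
```

Grid dispersion in such low-order schemes is one reason the laboratory comparison above finds growing errors for tilted and curved interfaces.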

  9. Dose accuracy of a durable insulin pen with memory function, before and after simulated lifetime use and under stress conditions.

    PubMed

    Xue, Ligang; Mikkelsen, Kristian Handberg

    2013-03-01

    The objective of this study was to assess the dose accuracy of NovoPen® 5 in delivering low, medium and high doses of insulin before and after simulated lifetime use. A secondary objective was to evaluate the durability of the pen and its memory function under various stress conditions designed to simulate conditions that may be encountered in everyday use of an insulin pen. All testing was conducted according to International Organization for Standardization guideline 11608-1, 2000 for pen injectors. Dose accuracy was measured for the delivery of 1 unit (U) (10 mg), 30 U (300 mg) and 60 U (600 mg) test medium in standard, cool and hot conditions and before and after simulated lifetime use. Dose accuracy was also tested after preconditioning in dry heat storage; cold storage; damp cyclical heat; shock, bump and vibration; free fall and after electrostatic charge and radiated field test. Memory function was tested under all temperature and physical conditions. NovoPen 5 maintained dosing accuracy and memory function at minimum, medium and maximum doses in standard, cool and hot conditions, stress tests and simulated lifetime use. The pens remained intact and retained dosing accuracy and a working memory function at all doses after exposure to variations in temperature and after physical challenge. NovoPen 5 was accurate at all doses tested and under various functionality tests. Its durable design ensured that the dose accuracy and memory function were retained under conditions of stress likely to be encountered in everyday use.

  10. Assessing the accuracy of improved force-matched water models derived from Ab initio molecular dynamics simulations.

    PubMed

    Köster, Andreas; Spura, Thomas; Rutkai, Gábor; Kessler, Jan; Wiebeler, Hendrik; Vrabec, Jadran; Kühne, Thomas D

    2016-07-15

The accuracy of water models derived from ab initio molecular dynamics simulations by means of an improved force-matching scheme is assessed for various thermodynamic, transport, and structural properties. It is found that although the resulting force-matched water models are typically less accurate than fully empirical force fields in predicting thermodynamic properties, they are nevertheless much more accurate than generally appreciated in reproducing the structure of liquid water, in fact superseding most of the commonly used empirical water models. This development demonstrates the feasibility of routinely parametrizing computationally efficient yet predictive potential energy functions based on accurate ab initio molecular dynamics simulations for a large variety of different systems. © 2016 Wiley Periodicals, Inc.
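At its core, any force-matching scheme is a least-squares fit of model parameters to ab initio reference forces. A deliberately minimal linear sketch of that core (the paper's improved scheme is far richer; the descriptor matrix here is a placeholder):

```python
import numpy as np

def force_match(descriptors, ab_initio_forces):
    """Minimal force-matching core: find parameters w minimising
    || D @ w - F_ref ||^2, where each row of D holds force 'basis'
    terms evaluated at one atomic configuration and F_ref holds the
    ab initio reference forces for those configurations."""
    D = np.asarray(descriptors, float)
    F = np.asarray(ab_initio_forces, float)
    w, *_ = np.linalg.lstsq(D, F, rcond=None)
    return w

# Three configurations, two basis terms; the reference forces here are
# exactly representable, so the fit recovers the generating parameters.
w = force_match([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [2.0, 3.0, 5.0])
```

Real schemes add nonlinearity, regularization, and constraints on the functional form, but the least-squares objective is the same.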

  11. Accuracy of MHD simulations: Effects of simulation initialization in GUMICS-4

    NASA Astrophysics Data System (ADS)

    Lakka, Antti; Pulkkinen, Tuija; Dimmock, Andrew; Osmane, Adnane; Palmroth, Minna; Honkonen, Ilja

    2016-04-01

We conducted a study aimed at revealing how different global magnetohydrodynamic (MHD) simulation initialisation methods affect the dynamics in different parts of the Earth's magnetosphere-ionosphere system. While such magnetosphere-ionosphere coupling codes have been used for more than two decades, their testing still requires significant work to identify the optimal numerical representation of the physical processes. We used the Grand Unified Magnetosphere-Ionosphere Coupling Simulation (GUMICS-4), the only European global MHD simulation, which is developed by the Finnish Meteorological Institute. GUMICS-4 was put to a test that included two stages: 1) a 10-day OMNI data interval was simulated, and the results were validated by comparing both the bow shock and magnetopause spatial positions predicted by the simulation to actual measurements, and 2) the validated 10-day simulation run was used as a reference in a comparison of five 3 + 12 hour (3-hour synthetic initialisation + 12-hour actual simulation) simulation runs. The 12-hour input was not only identical in each simulation case but also represented a subset of the 10-day input, thus enabling quantification of the effects of different synthetic initialisations on the magnetosphere-ionosphere system. The synthetic initialisation data sets were created using stepwise, linear and sinusoidal functions. Switching the input from synthetic to real OMNI data was immediate. The results show that the magnetosphere forms in each case within an hour after the switch to real data. However, local dissimilarities are found in the magnetospheric dynamics after formation, depending on the initialisation method used. This is evident especially in the inner parts of the lobe.
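The three synthetic initialisation profiles compared in the study can be pictured as ramp functions over the 3-hour initialisation window. The exact profiles the authors used are not given in the abstract, so the shapes below (and the function name) are illustrative assumptions:

```python
import numpy as np

def synthetic_ramp(target, t, t_init=3 * 3600.0, kind="linear"):
    """Ramp a solar-wind input quantity from 0 to `target` over the
    synthetic initialisation window [0, t_init] using one of the three
    profile families named in the study: stepwise, linear, sinusoidal."""
    t = np.asarray(t, float)
    if kind == "stepwise":
        f = np.where(t < t_init / 2.0, 0.0, 1.0)   # single step mid-window
    elif kind == "linear":
        f = t / t_init
    elif kind == "sinusoidal":
        f = 0.5 * (1.0 - np.cos(np.pi * t / t_init))   # smooth half-cosine
    else:
        raise ValueError(f"unknown profile kind: {kind}")
    return target * np.clip(f, 0.0, 1.0)
```

All three profiles agree at the end of the window, so the switch to real OMNI data is continuous in the input value even though the histories differ.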

  12. Simulation of intrafraction motion and overall geometric accuracy of a frameless intracranial radiosurgery process

    PubMed Central

    Walker, Luke; Chinnaiyan, Prakash; Forster, Kenneth

    2008-01-01

    We conducted a comprehensive evaluation of the clinical accuracy of an image‐guided frameless intracranial radiosurgery system. All links in the process chain were tested. Using healthy volunteers, we evaluated a novel method to prospectively quantify the range of target motion for optimal determination of the planning target volume (PTV) margin. The overall system isocentric accuracy was tested using a rigid anthropomorphic phantom containing a hidden target. Intrafraction motion was simulated in 5 healthy volunteers. Reinforced head‐and‐shoulders thermoplastic masks were used for immobilization. The subjects were placed in a treatment position for 15 minutes (the maximum expected time between repeated isocenter localizations) and the six‐degrees‐of‐freedom target displacements were recorded with high frequency by tracking infrared markers. The markers were placed on a customized piece of thermoplastic secured to the head independently of the immobilization mask. Additional data were collected with the subjects holding their breath, talking, and deliberately moving. As compared with fiducial matching, the automatic registration algorithm did not introduce clinically significant errors (<0.3 mm difference). The hidden target test confirmed overall system isocentric accuracy of ≤1 mm (total three‐dimensional displacement). The subjects exhibited various patterns and ranges of head motion during the mock treatment. The total displacement vector encompassing 95% of the positional points varied from 0.4 mm to 2.9 mm. Pre‐planning motion simulation with optical tracking was tested on volunteers and appears promising for determination of patient‐specific PTV margins. Further patient study is necessary and is planned. In the meantime, system accuracy is sufficient for confident clinical use with 3 mm PTV margins. PACS number: 87.53.Ly

  13. A fast RCS accuracy assessment method for passive radar calibrators

    NASA Astrophysics Data System (ADS)

    Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, QI

    2016-10-01

In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is often deformed during transportation and installation, or by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation was proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm was selected and its simulation accuracy assessed. In the second step, the 3-D measuring instrument was selected and its measuring accuracy evaluated. Once the accuracy of the selected RCS simulation algorithm and 3-D measuring instrument was satisfactory for the RCS accuracy assessment, the 3-D structure of the corner reflector was obtained by the 3-D measuring instrument, and the RCSs of the obtained 3-D structure and the corresponding ideal structure were calculated based on the selected RCS simulation algorithm. The final RCS accuracy was the absolute difference of the two RCS calculation results. The advantage of the proposed method is that it can easily be applied outdoors, avoiding the correlations among the plate edge-length error, plate orthogonality error, and plate curvature error. The accuracy of this method is higher than that of the method using the distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.
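The final accuracy figure is simply the absolute difference of two simulated RCS values. Paired with the textbook physical-optics formula for a flat plate at normal incidence as a stand-in for the unspecified simulation algorithm, the computation can be sketched as:

```python
import math

def flat_plate_rcs_dbsm(area_m2, freq_hz):
    """Physical-optics peak RCS of a flat plate at normal incidence,
    sigma = 4*pi*A^2 / lambda^2, returned in dBsm. This is a simple
    stand-in; the paper's selected simulation algorithm is not named
    in the abstract."""
    lam = 299792458.0 / freq_hz            # wavelength in metres
    sigma = 4.0 * math.pi * area_m2 ** 2 / lam ** 2
    return 10.0 * math.log10(sigma)

def rcs_accuracy_db(measured_structure_rcs_db, ideal_structure_rcs_db):
    """Final RCS accuracy as defined in the abstract: the absolute
    difference between the RCS simulated from the measured 3-D
    structure and that of the ideal structure."""
    return abs(measured_structure_rcs_db - ideal_structure_rcs_db)
```

In the proposed workflow, both RCS values come from the same simulation algorithm applied to the measured and ideal geometries, so algorithm bias largely cancels in the difference.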

  14. Factors affecting GEBV accuracy with single-step Bayesian models.

    PubMed

    Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng

    2018-01-01

A single-step approach to obtaining genomic predictions was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more significantly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP in the 5- and 50-QTL scenarios. The SS-BayesB model obtained the lowest accuracy in the 500-QTL scenario. The SS-BayesA model was the most efficient and robust across all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait was controlled by fewer QTL.

  15. TU-H-207A-02: Relative Importance of the Various Factors Influencing the Accuracy of Monte Carlo Simulated CT Dose Index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marous, L; Muryn, J; Liptak, C

    2016-06-15

Purpose: Monte Carlo simulation is a frequently used technique for assessing patient dose in CT. The accuracy of a Monte Carlo program is often validated using the standard CT dose index (CTDI) phantoms by comparing simulated and measured CTDI100. To achieve good agreement, many input parameters in the simulation (e.g., energy spectrum and effective beam width) need to be determined. However, not all the parameters have equal importance. Our aim was to assess the relative importance of the various factors that influence the accuracy of simulated CTDI100. Methods: A Monte Carlo program previously validated for a clinical CT system was used to simulate CTDI100. For the standard CTDI phantoms (32 and 16 cm in diameter), CTDI100 values from the central and four peripheral locations at 70 and 120 kVp were first simulated using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which intentional errors were introduced into the input parameters, the effects of which on simulated CTDI100 were analyzed. Results: At 38.4-mm collimation, errors in effective beam width of up to 5.0 mm showed negligible effects on simulated CTDI100 (<1.0%). Likewise, errors in acrylic density of up to 0.01 g/cm³ resulted in small CTDI100 errors (<2.5%). In contrast, errors in spectral HVL produced more significant effects: slight deviations (±0.2 mm Al) produced errors up to 4.4%, whereas more extreme deviations (±1.4 mm Al) produced errors as high as 25.9%. Lastly, ignoring the CT table introduced errors up to 13.9%. Conclusion: Monte Carlo simulated CTDI100 is insensitive to errors in effective beam width and acrylic density but sensitive to errors in spectral HVL. To obtain accurate results, the CT table should not be ignored. This work was supported by a
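CTDI100 itself is an integral of the dose profile D(z) over the 100-mm chamber length, normalized by the nominal collimation N·T. A numeric sketch (trapezoidal rule; the profile and collimation values here are illustrative):

```python
import numpy as np

def ctdi100(z_mm, dose, n_slices, slice_thickness_mm):
    """CTDI100 = (1 / (N*T)) * integral of D(z) over z in [-50, +50] mm,
    the 100-mm pencil-chamber length. Evaluated with the trapezoidal
    rule from a sampled (simulated or measured) dose profile."""
    z = np.asarray(z_mm, float)
    d = np.asarray(dose, float)
    mask = (z >= -50.0) & (z <= 50.0)
    zc, dc = z[mask], d[mask]
    integral = 0.5 * np.sum((dc[1:] + dc[:-1]) * np.diff(zc))
    return integral / (n_slices * slice_thickness_mm)

# Idealized flat unit-dose profile sampled every 1 mm, 38.4-mm collimation.
z = np.arange(-50.0, 51.0)
val = ctdi100(z, np.ones_like(z), n_slices=1, slice_thickness_mm=38.4)
```

Because the chamber integrates scatter tails well beyond the beam, the result is sensitive to the spectrum (HVL) that shapes those tails, consistent with the study's findings.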

  16. Accuracy of three-dimensional seismic ground response analysis in time domain using nonlinear numerical simulations

    NASA Astrophysics Data System (ADS)

    Liang, Fayun; Chen, Haibing; Huang, Maosong

    2017-07-01

To guide appropriate use of nonlinear ground response analysis in engineering practice, a three-dimensional soil column with a distributed mass system and a time-domain numerical analysis were implemented on the OpenSees simulation platform. The standard mesh of the three-dimensional soil column was chosen to satisfy the specified maximum frequency. The layered soil column was divided into multiple sub-soils, each with a different viscous damping matrix according to its shear velocity, as the soil properties differed significantly. It was necessary to use a combination of other one-dimensional or three-dimensional nonlinear seismic ground analysis programs to confirm the applicability of nonlinear seismic ground motion response analysis procedures in soft soil or for strong earthquakes. The accuracy of the three-dimensional soil-column finite element method was verified by dynamic centrifuge model testing under different peak accelerations of the earthquake. As a result, nonlinear seismic ground motion response analysis procedures were improved in this study. The accuracy and efficiency of the three-dimensional seismic ground response analysis can be adapted to the requirements of engineering practice.

  17. Continuous Glucose Monitoring and Trend Accuracy

    PubMed Central

    Gottlieb, Rebecca; Le Compte, Aaron; Chase, J. Geoffrey

    2014-01-01

Continuous glucose monitoring (CGM) devices are being increasingly used to monitor glycemia in people with diabetes. One advantage with CGM is the ability to monitor the trend of sensor glucose (SG) over time. However, there are few metrics available for assessing the trend accuracy of CGM devices. The aim of this study was to develop an easy-to-interpret tool for assessing trend accuracy of CGM data. SG data from CGM were compared to hourly blood glucose (BG) measurements and trend accuracy was quantified using the dot product. Trend accuracy results are displayed on the Trend Compass, which depicts trend accuracy as a function of BG. A trend performance table and Trend Index (TI) metric are also proposed. The Trend Compass was tested using simulated CGM data with varying levels of error and variability, as well as real clinical CGM data. The results show that the Trend Compass is an effective tool for differentiating good trend accuracy from poor trend accuracy, independent of glycemic variability. Furthermore, the real clinical data show that the Trend Compass assesses trend accuracy independent of point bias error. Finally, the importance of assessing trend accuracy as a function of BG level is highlighted in a case example of low and falling BG data, with corresponding rising SG data. This study developed a simple-to-use tool for quantifying trend accuracy. The resulting trend accuracy is easily interpreted on the Trend Compass plot, and if required, performance table and TI metric. PMID:24876437
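The dot-product idea can be illustrated by comparing unit trend vectors built from the reference BG and sensor SG rates of change. This is an interpretive sketch of the concept, not the paper's exact Trend Compass definition:

```python
import math

def trend_agreement(dbg_dt, dsg_dt, dt_hours=1.0):
    """Normalized dot product of the reference trend vector (dt, dBG/dt)
    and the sensor trend vector (dt, dSG/dt). Returns 1.0 for identical
    trends and lower values as the trends diverge; a falling BG paired
    with a rising SG (the hazardous case in the abstract) scores low."""
    a = (dt_hours, dbg_dt)
    b = (dt_hours, dsg_dt)
    dot = a[0] * b[0] + a[1] * b[1]
    return dot / (math.hypot(*a) * math.hypot(*b))
```

Because the vectors are normalized, the score reflects trend direction only, independent of any constant point bias between SG and BG, matching the property reported for the Trend Compass.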

  18. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  19. [Numerical simulation of the effect of virtual stent release pose on the expansion results].

    PubMed

    Li, Jing; Peng, Kun; Cui, Xinyang; Fu, Wenyu; Qiao, Aike

    2018-04-01

Current finite element analyses of vascular stent expansion do not take into account the effect of the stent release pose on the expansion results. In this study, stent and vessel models were established in Pro/E. Five finite element assembly models were constructed in ABAQUS: a 0-degree model without eccentricity, a 3-degree model without eccentricity, a 5-degree model without eccentricity, a 0-degree model with axial eccentricity, and a 0-degree model with radial eccentricity. These models were divided into two groups of experiments for numerical simulation, with respect to angle and to eccentricity. Mechanical parameters such as the foreshortening rate, radial recoil rate and dogboning rate were calculated. The influence of angle and eccentricity on the numerical simulation was obtained by comparative analysis. The calculations showed that the residual stenosis rates were 38.3%, 38.4%, 38.4%, 35.7% and 38.2%, respectively, for the five models. The results indicate that the release pose has little effect on the numerical simulation results, so it can be neglected when high accuracy is not required, and the basic model (0 degrees, without eccentricity) is adequate for numerical simulation.
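The three mechanical parameters named above have standard definitions that follow directly from the deformed geometry. A sketch (sign conventions and reference lengths vary between papers, so these exact formulas are assumptions):

```python
def foreshortening_rate(length_initial, length_expanded):
    """(L0 - L) / L0: relative loss of stent length on expansion."""
    return (length_initial - length_expanded) / length_initial

def radial_recoil_rate(radius_loaded, radius_unloaded):
    """(R_load - R_unload) / R_load: elastic recoil of the stent
    radius after balloon deflation."""
    return (radius_loaded - radius_unloaded) / radius_loaded

def dogboning_rate(diameter_end, diameter_central):
    """(D_end - D_central) / D_end: flaring of the stent ends relative
    to the central section during expansion."""
    return (diameter_end - diameter_central) / diameter_end
```

Each is dimensionless, so the five assembly models can be compared directly regardless of absolute stent size.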

  20. Accuracy of a computer-aided surgical simulation protocol for orthognathic surgery: a prospective multicenter study.

    PubMed

    Hsu, Sam Sheng-Pin; Gateno, Jaime; Bell, R Bryan; Hirsch, David L; Markiewicz, Michael R; Teichgraeber, John F; Zhou, Xiaobo; Xia, James J

    2013-01-01

    and yaw orientations. In the secondary outcome measurements, the RMSD of the maxillary dental midline positions was 0.9 mm. When registered at the body of the mandible, the linear and angular differences of the chin segment between the groups with and without the use of the chin template were consistent with the results found in the primary outcome measurements. Using this computer-aided surgical simulation protocol, the computerized plan can be transferred accurately and consistently to the patient to position the maxilla and mandible at the time of surgery. The computer-generated chin template provides greater accuracy in repositioning the chin segment than the intraoperative measurements. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  1. Accuracy and Availability of Egnos - Results of Observations

    NASA Astrophysics Data System (ADS)

    Felski, Andrzej; Nowak, Aleksander; Woźniak, Tomasz

    2011-01-01

According to the SBAS concept, the user should receive timely and correct information about system integrity, together with corrections to the pseudorange measurements, which leads to better coordinate accuracy. In theory the whole system is permanently monitored by RIMS stations, so it should be impossible to deliver faulty information to the user. The quality of the system is guaranteed inside the borders of the system coverage; however, in the eastern part of Poland lower accuracy and availability of the system are still observed. This prompted an observation campaign and an analysis of the real accuracy and availability of the EGNOS service, in the context of supporting air operations at local airports and as a supplement to hydrographic operations in the Polish Exclusive Zone. Registration was conducted at three PANSA stations situated at the airports in Warsaw, Krakow and Rzeszow, and at the PNA station in Gdynia. Measurements at the PANSA stations were recorded continuously, for each whole month, up to the end of September 2011. These stations are based on Septentrio PolaRx2e receivers and have been engaged in the EGNOS Data Collection Network run by EUROCONTROL. The advantage of these registrations is the uniformity of the receivers. Apart from these registrations, additional measurements in Gdynia were made with different receivers, mainly dedicated to sea navigation: CSI Wireless 1, NOVATEL OEMV, Sperry Navistar, Crescent V-100 and R110, as well as Magellan FX420. The main object of the analyses was the accuracy and availability of the EGNOS service at each point and for the different receivers. Accuracy was analyzed separately for each coordinate. Finally, the temporal and spatial correlations of the coordinates, and their availability and accuracy, were investigated. The findings show that the present accuracy of the EGNOS service is about 1.5 m (95%), but the availability of the service is controversial. The accuracy of the present EGNOS service meets the parameters of APV I and even APV II.

  2. Accuracy of user-friendly blood typing kits tested under simulated military field conditions.

    PubMed

    Bienek, Diane R; Charlton, David G

    2011-04-01

Rapid user-friendly ABO-Rh blood typing kits (Eldon Home Kit 2511, ABO-Rh Combination Blood Typing Experiment Kit) were evaluated to determine their accuracy when used under simulated military field conditions and after long-term storage at various temperatures and humidities. Rates of positive tests between control groups, experimental groups, and industry standards were measured and analyzed using Fisher's exact chi-square method to identify significant differences (p ≤ 0.05). When Eldon Home Kits 2511 were used in various operational conditions, the results were comparable to those obtained with the control group and with the industry standard. The performance of the ABO-Rh Combination Blood Typing Experiment Kit was adversely affected by prolonged storage at temperatures above 37 °C. The diagnostic performance of commercial blood typing kits varies according to product and environmental storage conditions.
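
    The rate comparison described above can be illustrated with a from-scratch two-sided Fisher's exact test. This is a generic sketch, not the study's analysis code, and the 2×2 table values below are hypothetical.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables (with the same
    margins) that are no more probable than the observed one.
    """
    r1, r2 = a + b, c + d          # row totals
    k, n = a + c, a + b + c + d    # first-column total, grand total
    denom = comb(n, k)

    def prob(x):                   # P(first cell = x) under fixed margins
        return comb(r1, x) * comb(r2, k - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, k - r2), min(k, r1)   # support of the first cell
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-12))

# Hypothetical counts: 10/10 correct typings under control conditions
# vs. 4/10 after simulated heat stress.
p = fisher_exact_two_sided(10, 0, 4, 6)
```

    With these made-up counts the difference is significant at the study's p ≤ 0.05 threshold.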

  3. Evaluation of the soil moisture prediction accuracy of a space radar using simulation techniques. [Kansas

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Stiles, J. A.; Moore, R. K.; Holtzman, J. C.

    1981-01-01

Image simulation techniques were employed to generate synthetic aperture radar images of a 17.7 km x 19.3 km test site located east of Lawrence, Kansas. The simulations were performed for a space SAR at an orbital altitude of 600 km, with the following sensor parameters: frequency = 4.75 GHz, polarization = HH, and angle of incidence range = 7 deg to 22 deg from nadir. Three sets of images were produced, corresponding to three different spatial resolutions: 20 m x 20 m with 12 looks, 100 m x 100 m with 23 looks, and 1 km x 1 km with 1000 looks. Each set consisted of images for four different soil moisture distributions across the test site. Results indicate that, for the agricultural portion of the test site, the soil moisture in about 90% of the pixels can be predicted with an accuracy of ±20% of field capacity. Among the three spatial resolutions, the 1 km x 1 km resolution gave the best results in most cases; however, for very dry soil conditions, the 100 m x 100 m resolution was slightly superior.

  4. Parameter Accuracy in Meta-Analyses of Factor Structures

    ERIC Educational Resources Information Center

    Gnambs, Timo; Staufenbiel, Thomas

    2016-01-01

    Two new methods for the meta-analysis of factor loadings are introduced and evaluated by Monte Carlo simulations. The direct method pools each factor loading individually, whereas the indirect method synthesizes correlation matrices reproduced from factor loadings. The results of the two simulations demonstrated that the accuracy of…

  5. Improving stamping simulation accuracy by accounting for realistic friction and lubrication conditions: Application to the door-outer of the Mercedes-Benz C-class Coupé

    NASA Astrophysics Data System (ADS)

    Hol, J.; Wiebenga, J. H.; Stock, J.; Wied, J.; Wiegand, K.; Carleer, B.

    2016-08-01

In the stamping of automotive parts, friction and lubrication play a key role in achieving high-quality products. In the development process of new automotive parts, it is therefore crucial to accurately account for these effects in sheet metal forming simulations. Only then can one obtain reliable and realistic simulation results that correspond to the actual try-out and mass production conditions. In this work, the TriboForm software is used to accurately account for tribology, friction, and lubrication conditions in stamping simulations. The enhanced stamping simulations are applied and validated for the door-outer of the Mercedes-Benz C-Class Coupé. The project results demonstrate the improved prediction accuracy of stamping simulations with respect to both part quality and actual stamping process conditions.

  6. High accuracy binary black hole simulations with an extended wave zone

    NASA Astrophysics Data System (ADS)

    Pollney, Denis; Reisswig, Christian; Schnetter, Erik; Dorband, Nils; Diener, Peter

    2011-02-01

We present results from a new code for binary black hole evolutions using the moving-puncture approach, implementing finite differences in generalized coordinates, and allowing the spacetime to be covered with multiple communicating nonsingular coordinate patches. Here we consider a regular Cartesian near zone, with adapted spherical grids covering the wave zone. The efficiencies resulting from the use of adapted coordinates allow us to maintain sufficient grid resolution out to an artificial outer boundary location which is causally disconnected from the measurement. For the well-studied test case of the inspiral of an equal-mass nonspinning binary (evolved for more than 8 orbits before merger), we determine the phase and amplitude to numerical accuracies better than 0.010% and 0.090% during inspiral, respectively, and 0.003% and 0.153% during merger. The waveforms, including the resolved higher harmonics, are convergent and can be consistently extrapolated to r→∞ throughout the simulation, including the merger and ringdown. Ringdown frequencies for these modes (up to (ℓ,m) = (6,6)) match perturbative calculations to within 0.01%, providing a strong confirmation that the remnant settles to a Kerr black hole with irreducible mass M_irr = 0.884355 ± 20×10^-6 and spin S_f/M_f^2 = 0.686923 ± 10×10^-6.
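
    Extrapolation of waveform quantities to r→∞, as mentioned above, is commonly done by fitting a low-order polynomial in 1/r to values extracted at several finite radii and reading off the constant term. The following is a generic sketch of that idea (not the paper's actual procedure); the radii and amplitude values are made up.

```python
def extrapolate_to_infinity(radii, values):
    """Fit values(r) = A_inf + a1/r + a2/r^2 through three extraction
    radii and return A_inf, the r -> infinity limit (exact 3x3 solve
    by Gauss-Jordan elimination)."""
    assert len(radii) == len(values) == 3
    # Vandermonde-like rows in the variable u = 1/r, with RHS appended.
    m = [[1.0, 1.0 / r, 1.0 / r**2, v] for r, v in zip(radii, values)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda j: abs(m[j][i]))  # partial pivot
        m[i], m[piv] = m[piv], m[i]
        for j in range(3):
            if j != i:
                f = m[j][i] / m[i][i]
                m[j] = [mj - f * mi for mj, mi in zip(m[j], m[i])]
    return m[0][3] / m[0][0]  # A_inf

# Synthetic amplitude with a known limit of 0.25 plus 1/r falloff terms.
radii = [100.0, 200.0, 300.0]
values = [0.25 + 2.0 / r - 5.0 / r**2 for r in radii]
A_inf = extrapolate_to_infinity(radii, values)
```

    On noiseless synthetic data of exactly this form, the fit recovers the limit to round-off accuracy; with real finite-radius waveforms one would check stability against the fit order and the set of radii.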

  7. Efficiency and Accuracy in Thermal Simulation of Powder Bed Fusion of Bulk Metallic Glass

    NASA Astrophysics Data System (ADS)

    Lindwall, J.; Malmelöv, A.; Lundbäck, A.; Lindgren, L.-E.

    2018-05-01

Additive manufacturing by powder bed fusion can be utilized to create bulk metallic glass, as the process yields considerably high cooling rates. However, there is a risk that material reheated by subsequently deposited layers may devitrify, i.e., crystallize. It is therefore advantageous to simulate the process in order to fully understand it and to design it so that this risk is avoided. However, a detailed simulation is computationally demanding, and it is necessary to increase the computational speed while maintaining the accuracy of the computed temperature field in critical regions. The current study evaluates a few approaches based on temporal reduction to achieve this. It is found that the evaluated approaches substantially reduce computation time while accurately predicting the temperature history.

  8. Halo abundance matching: accuracy and conditions for numerical convergence

    NASA Astrophysics Data System (ADS)

    Klypin, Anatoly; Prada, Francisco; Yepes, Gustavo; Heß, Steffen; Gottlöber, Stefan

    2015-03-01

Accurate predictions of the abundance and clustering of dark matter haloes play a key role in testing the standard cosmological model. Here, we investigate the accuracy of one of the leading methods of connecting the simulated dark matter haloes with observed galaxies: the halo abundance matching (HAM) technique. We show how to choose the optimal values of the mass and force resolution in large-volume N-body simulations so that they provide accurate estimates of correlation functions and circular velocities for haloes and their subhaloes - crucial ingredients of the HAM method. At the 10 per cent accuracy level, results converge for ~50 particles for haloes and ~150 particles for progenitors of subhaloes. In order to achieve this level of accuracy, a number of conditions should be satisfied. The force resolution for the smallest resolved (sub)haloes should be in the range (0.1-0.3)r_s, where r_s is the scale radius of (sub)haloes. The number of particles for progenitors of subhaloes should be ~150. We also demonstrate that two-body scattering plays a minor role in the accuracy of N-body simulations, thanks to the relatively small number of crossing times of dark matter in haloes and the limited force resolution of cosmological simulations.
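
    At its core, the HAM technique described above is a rank-order assignment: the most luminous galaxies are matched to the (sub)haloes with the largest circular velocities so that their cumulative abundances agree. A minimal scatter-free sketch with hypothetical numbers (real applications add log-normal scatter and work with number densities):

```python
def abundance_match(halo_vcirc, galaxy_lum):
    """Rank-order halo abundance matching (simplified, no scatter):
    the i-th fastest-rotating (sub)halo is assigned the i-th most
    luminous galaxy.  Returns (vcirc, luminosity) pairs."""
    assert len(halo_vcirc) == len(galaxy_lum)
    halos = sorted(halo_vcirc, reverse=True)
    gals = sorted(galaxy_lum, reverse=True)
    return list(zip(halos, gals))

# Hypothetical circular velocities (km/s) and luminosities (L_sun).
vcirc = [180.0, 90.0, 250.0, 140.0]
lum = [1e10, 4e9, 9e8, 2.5e10]
matched = abundance_match(vcirc, lum)
```

    This is why the paper's convergence criteria matter: if circular velocities of poorly resolved subhaloes are biased, the rank ordering, and hence the assigned galaxies, are wrong.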

  9. A comparison of accuracy validation methods for genomic and pedigree-based predictions of swine litter size traits using Large White and simulated data.

    PubMed

    Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T

    2018-02-01

The objective of this study was to compare and determine the optimal validation method when comparing accuracy from single-step GBLUP (ssGBLUP) to traditional pedigree-based BLUP. Field data included six litter size traits. Simulated data included ten replicates designed to mimic the field data in order to determine the method that was closest to the true accuracy. Data were split into training and validation sets. The methods used were as follows: (i) theoretical accuracy derived from the prediction error variance (PEV) of the direct inverse (iLHS), (ii) approximated accuracies from the accf90(GS) program in the BLUPF90 family of programs (Approx), (iii) correlation between predictions and the single-step GEBVs from the full data set (GEBV_Full), (iv) correlation between predictions and the corrected phenotypes of females from the full data set (Y_c), (v) the correlation from method (iv) divided by the square root of the heritability (Y_ch) and (vi) correlation between sire predictions and the average of their daughters' corrected phenotypes (Y_cs). Accuracies from iLHS increased from 0.27 to 0.37 (37%) in the Large White. Approximation accuracies were very consistent and close in absolute value (0.41 to 0.43). Both iLHS and Approx were much less variable than the corrected phenotype methods (ranging from 0.04 to 0.27). On average, simulated data showed an increase in accuracy from 0.34 to 0.44 (29%) using ssGBLUP. Both iLHS and Y_ch approximated the increase well, 0.30 to 0.46 and 0.36 to 0.45, respectively. GEBV_Full performed poorly in both data sets and is not recommended. Results suggest that for within-breed selection, theoretical accuracy using PEV was consistent and accurate. When direct inversion is infeasible for obtaining the PEV, correlating predictions with the corrected phenotypes divided by the square root of heritability is adequate given a large enough validation data set. © 2017 Blackwell Verlag GmbH.
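
    Validation method (v) above is easy to state concretely: correlate the predicted breeding values with corrected phenotypes, then divide by the square root of the heritability, since the phenotype is itself only a heritability-limited proxy for the true breeding value. A minimal sketch with hypothetical toy data (not the study's records):

```python
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def validation_accuracy(predictions, corrected_phenotypes, h2):
    """Method (v): corr(EBV, corrected phenotype) / sqrt(h2)."""
    return pearson(predictions, corrected_phenotypes) / sqrt(h2)

# Hypothetical toy data: six animals, heritability 0.25.
ebv = [0.8, -0.3, 1.2, 0.1, -0.9, 0.5]
yc = [0.5, -0.2, 0.9, 0.3, -0.7, 0.2]
acc = validation_accuracy(ebv, yc, 0.25)
```

    As the abstract notes, this estimator only behaves well with a large validation set; on a toy sample like this the raw correlation is noisy and the rescaled value can even exceed 1.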

  10. CPO Prediction: Accuracy Assessment and Impact on UT1 Intensive Results

    NASA Technical Reports Server (NTRS)

    Malkin, Zinovy

    2010-01-01

The UT1 Intensive results depend heavily on the celestial pole offset (CPO) model used during data processing. Since accurate CPO values are available only with a delay of two to four weeks, CPO predictions are necessarily applied in the UT1 Intensive data analysis, and errors in the predictions can influence the operational UT1 accuracy. In this paper we assess the real accuracy of CPO prediction using the actual IERS and PUL predictions made in 2007-2009. Results of operational processing were also analyzed to investigate the actual impact of EOP prediction errors on the rapid UT1 results. It was found that the impact of CPO prediction errors is at a level of several microseconds, whereas the impact of inaccuracy in the polar motion prediction may be about one order of magnitude larger for ultra-rapid UT1 results. The situation could be improved if the IERS Rapid solution were updated more frequently.

  11. Discussion on accuracy degree evaluation of accident velocity reconstruction model

    NASA Astrophysics Data System (ADS)

    Zou, Tiefang; Dai, Yingbiao; Cai, Ming; Liu, Jike

In order to investigate the applicability of accident velocity reconstruction models in different cases, a method for evaluating the accuracy degree of an accident velocity reconstruction model is given. Based on the theoretical and calculated pre-crash velocities, an accuracy degree evaluation formula is obtained. Using a numerical simulation case, the accuracy degrees and applicability of two accident velocity reconstruction models are analyzed; the results show that this method is feasible in practice.
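
    The abstract does not reproduce the evaluation formula itself. One plausible form of such a measure, shown purely for illustration (the function name and formula below are hypothetical, not taken from the paper), is one minus the relative error of the reconstructed pre-crash velocity:

```python
def velocity_accuracy_degree(v_reconstructed, v_true):
    """Hypothetical accuracy-degree measure: 1 minus the relative
    error of the reconstructed pre-crash velocity (1.0 = perfect)."""
    return 1.0 - abs(v_reconstructed - v_true) / v_true

# Hypothetical values: model reconstructs 13.2 m/s, simulation
# ground truth is 12.5 m/s.
deg = velocity_accuracy_degree(13.2, 12.5)
```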

  12. Effects of experimental protocol on global vegetation model accuracy: a comparison of simulated and observed vegetation patterns for Asia

    USGS Publications Warehouse

Tang, Guoping; Shafer, Sarah L.; Bartlein, Patrick J.; Holman, Justin O.

    2009-01-01

    Prognostic vegetation models have been widely used to study the interactions between environmental change and biological systems. This study examines the sensitivity of vegetation model simulations to: (i) the selection of input climatologies representing different time periods and their associated atmospheric CO2 concentrations, (ii) the choice of observed vegetation data for evaluating the model results, and (iii) the methods used to compare simulated and observed vegetation. We use vegetation simulated for Asia by the equilibrium vegetation model BIOME4 as a typical example of vegetation model output. BIOME4 was run using 19 different climatologies and their associated atmospheric CO2 concentrations. The Kappa statistic, Fuzzy Kappa statistic and a newly developed map-comparison method, the Nomad index, were used to quantify the agreement between the biomes simulated under each scenario and the observed vegetation from three different global land- and tree-cover data sets: the global Potential Natural Vegetation data set (PNV), the Global Land Cover Characteristics data set (GLCC), and the Global Land Cover Facility data set (GLCF). The results indicate that the 30-year mean climatology (and its associated atmospheric CO2 concentration) for the time period immediately preceding the collection date of the observed vegetation data produce the most accurate vegetation simulations when compared with all three observed vegetation data sets. The study also indicates that the BIOME4-simulated vegetation for Asia more closely matches the PNV data than the other two observed vegetation data sets. Given the same observed data, the accuracy assessments of the BIOME4 simulations made using the Kappa, Fuzzy Kappa and Nomad index map-comparison methods agree well when the compared vegetation types consist of a large number of spatially continuous grid cells. The results of this analysis can assist model users in designing experimental protocols for simulating vegetation.
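
    The Kappa statistic used above to score simulated-versus-observed vegetation maps can be computed directly from a confusion (agreement) matrix. A minimal sketch; the class counts below are hypothetical, not data from the study:

```python
def cohens_kappa(confusion):
    """Cohen's Kappa for a square confusion matrix:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from the
    row/column marginals."""
    n = sum(sum(row) for row in confusion)
    p_o = sum(confusion[i][i] for i in range(len(confusion))) / n
    row_tot = [sum(row) for row in confusion]
    col_tot = [sum(col) for col in zip(*confusion)]
    p_e = sum(r * c for r, c in zip(row_tot, col_tot)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical simulated-vs-observed biome counts for three classes:
# rows = simulated class, columns = observed class.
conf = [
    [45,  3,  2],
    [ 4, 30,  6],
    [ 1,  7, 22],
]
kappa = cohens_kappa(conf)
```

    Kappa of 1 means perfect agreement and 0 means agreement no better than chance; the Fuzzy Kappa and Nomad index mentioned above relax the requirement of exact per-cell class matches.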

  13. Diagnostic accuracy of phosphor plate systems and conventional radiography in the detection of simulated internal root resorption.

    PubMed

    Vasconcelos, Karla de Faria; Rovaris, Karla; Nascimento, Eduarda Helena Leandro; Oliveira, Matheus Lima; Távora, Débora de Melo; Bóscolo, Frab Norberto

    2017-11-01

To evaluate the performance of conventional radiography and photostimulable phosphor (PSP) plates in the detection of simulated internal root resorption (IRR) lesions in early stages. Twenty single-rooted teeth were X-rayed before and after receiving a simulated early IRR lesion. Three imaging systems were used: Kodak InSight dental film and two PSP digital systems, Digora Optime and VistaScan. The digital images were displayed on a 20.1″ LCD monitor using the native software of each system, and the conventional radiographs were evaluated on a masked light box. Two radiologists were asked to indicate the presence or absence of IRR and, after two weeks, all images were re-evaluated. Cohen's kappa coefficient was calculated to assess intra- and interobserver agreement. The three imaging systems were compared using the Kruskal-Wallis test. For interexaminer agreement, overall kappa values were 0.70, 0.65 and 0.70 for conventional film, Digora Optime and VistaScan, respectively. Both conventional and digital radiography presented low sensitivity, specificity, accuracy, and positive and negative predictive values, with no significant difference between imaging systems (p = .0725). The performance of conventional film and PSP plates was similar in the detection of simulated early IRR lesions, with low accuracy in both cases.

  14. Application of CT-PSF-based computer-simulated lung nodules for evaluating the accuracy of computer-aided volumetry.

    PubMed

    Funaki, Ayumu; Ohkubo, Masaki; Wada, Shinichi; Murao, Kohei; Matsumoto, Toru; Niizuma, Shinji

    2012-07-01

With the wide dissemination of computed tomography (CT) screening for lung cancer, accurately measuring nodule volume with computer-aided volumetry software is increasingly important. Many studies of the accuracy of volumetry software have been performed using phantoms with artificial nodules. Such phantom studies are limited, however, in their ability to reproduce nodules both accurately and in the variety of sizes and densities required. We therefore propose a new approach using computer-simulated nodules based on the point spread function measured in a CT system. The validity of the proposed method was confirmed by the excellent agreement obtained between computer-simulated nodules and phantom nodules in the volume measurements. A practical clinical evaluation of the accuracy of volumetry software was achieved by adding simulated nodules onto clinical lung images, including noise and artifacts. The tested volumetry software was revealed to be accurate to within a 20% error for nodules >5 mm when the difference in CT value between nodule and background (lung) was 400-600 HU. Such a detailed analysis can provide clinically useful information on the use of volumetry software in CT screening for lung cancer. We conclude that the proposed method is effective for evaluating the performance of computer-aided volumetry software.

  15. Accuracy of Estimating Solar Radiation Pressure for GEO Debris with Tumbling Effect

    NASA Astrophysics Data System (ADS)

    Chao, Chia-Chun George

    2009-03-01

The accuracy of estimating solar radiation pressure for GEO debris is examined and demonstrated, via numerical simulations, by fitting a batch (months) of simulated position vectors. These simulated position vectors are generated from a "truth orbit" with added white noise using high-precision numerical integration tools. After the long-arc fit of the simulated observations (position vectors), one can accurately and reliably determine how close the estimated value of solar radiation pressure is to the truth. Results of this study show that the inherent accuracy in estimating the solar radiation pressure coefficient can be as good as 1% if a long-arc fit span of up to 180 days is used and the satellite is not tumbling. The corresponding position prediction accuracy, in maximum error over 30 days, can be as good as 1 km in-track, 0.3 km radial and 0.1 km cross-track. Similar accuracies can be expected when the object is tumbling, as long as the rate of attitude change differs from the orbit rate. Results of this study also reveal an important phenomenon: the solar radiation pressure significantly affects the orbit motion when the spin rate is equal to the orbit rate.

  16. AMES Stereo Pipeline Derived DEM Accuracy Experiment Using LROC-NAC Stereopairs and Weighted Spatial Dependence Simulation for Lunar Site Selection

    NASA Astrophysics Data System (ADS)

    Laura, J. R.; Miller, D.; Paul, M. V.

    2012-03-01

    An accuracy assessment of AMES Stereo Pipeline derived DEMs for lunar site selection using weighted spatial dependence simulation and a call for outside AMES derived DEMs to facilitate a statistical precision analysis.

  17. Simulation-based evaluation of the resolution and quantitative accuracy of temperature-modulated fluorescence tomography.

    PubMed

    Lin, Yuting; Nouizi, Farouk; Kwong, Tiffany C; Gulsen, Gultekin

    2015-09-01

Conventional fluorescence tomography (FT) can recover the distribution of fluorescent agents within a highly scattering medium. However, poor spatial resolution remains its foremost limitation. Previously, we introduced a new fluorescence imaging technique termed "temperature-modulated fluorescence tomography" (TM-FT), which provides high-resolution images of fluorophore distribution. TM-FT is a multimodality technique that combines fluorescence imaging with focused ultrasound to locate thermo-sensitive fluorescence probes using a priori spatial information to drastically improve the resolution of conventional FT. In this paper, we present an extensive simulation study to evaluate the performance of the TM-FT technique on complex phantoms with multiple fluorescent targets of various sizes located at different depths. In addition, the performance of the TM-FT is tested in the presence of background fluorescence. The results obtained using our new method are systematically compared with those obtained with the conventional FT. Overall, TM-FT provides higher resolution and superior quantitative accuracy, making it an ideal candidate for in vivo preclinical and clinical imaging. For example, a 4 mm diameter inclusion positioned in the middle of a synthetic slab geometry phantom (D: 40 mm × W: 100 mm) is recovered as an elongated object in the conventional FT (x = 4.5 mm; y = 10.4 mm), while TM-FT recovers it successfully in both directions (x = 3.8 mm; y = 4.6 mm). As a result, the quantitative accuracy of the TM-FT is superior because it recovers the concentration of the agent with a 22% error, which is in contrast with the 83% error of the conventional FT.

  19. Influence of photon energy cuts on PET Monte Carlo simulation results.

    PubMed

    Mitev, Krasimir; Gerganov, Georgi; Kirov, Assen S; Schmidtlein, C Ross; Madzhunkov, Yordan; Kawrakow, Iwan

    2012-07-01

The purpose of this work is to study the influence of photon energy cuts on the results of positron emission tomography (PET) Monte Carlo (MC) simulations. MC simulations of PET scans of a box phantom and the NEMA image quality phantom are performed for 32 photon energy cut values in the interval 0.3-350 keV using a well-validated numerical model of a PET scanner. The simulations are performed with two MC codes, egs_pet and GEANT4 Application for Tomographic Emission (GATE). The effect of photon energy cuts on the recorded number of singles, primary, scattered, random, and total coincidences, as well as on the simulation time and noise-equivalent count rate, is evaluated by comparing the results for higher cuts to those for a 1 keV cut. To evaluate the effect of cuts on the quality of reconstructed images, MC-generated sinograms of PET scans of the NEMA image quality phantom are reconstructed with iterative statistical reconstruction. The effects of photon cuts on the contrast recovery coefficients and on the comparison of images by means of commonly used similarity measures are studied. For the scanner investigated in this study, which uses bismuth germanate crystals, the transport of Bi K x-rays must be simulated in order to obtain unbiased estimates of the number of singles, true, scattered, and random coincidences, as well as an unbiased estimate of the noise-equivalent count rate. Photon energy cuts higher than 170 keV lead to absorption of Compton-scattered photons and strongly increase the number of recorded coincidences of all types and the noise-equivalent count rate. The effect of photon cuts on the reconstructed images and the similarity measures used for their comparison is statistically significant for very high cuts (e.g., 350 keV). The simulation time decreases slowly as the photon cut increases. The simulation of the transport of characteristic x-rays thus plays an important role if accurate modeling of a PET scanner system is to be achieved.
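
    The noise-equivalent count rate referred to above is conventionally defined as NEC = T²/(T + S + R), combining true (T), scattered (S) and random (R) coincidence rates into a single figure of merit. A minimal sketch with hypothetical count rates; note that some authors use T²/(T + S + 2R) when randoms are estimated from a delayed window, and the paper's exact variant may differ:

```python
def noise_equivalent_count_rate(trues, scattered, randoms):
    """Standard NECR figure of merit: T^2 / (T + S + R)."""
    total = trues + scattered + randoms
    return trues**2 / total

# Hypothetical coincidence rates (counts/s) at one activity level.
necr = noise_equivalent_count_rate(trues=50_000, scattered=20_000,
                                   randoms=10_000)
```

    This makes the abstract's observation concrete: any bias that a photon energy cut introduces in T, S or R propagates directly into the NECR estimate.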

  20. Simulation of the Tsunami Resulting from the M 9.2 2004 Sumatra-Andaman Earthquake - Dynamic Rupture vs. Seismic Inversion Source Model

    NASA Astrophysics Data System (ADS)

    Vater, Stefan; Behrens, Jörn

    2017-04-01

Simulations of historic tsunami events such as the 2004 Sumatra or the 2011 Tohoku event are usually initialized using earthquake sources obtained from inversion of seismic data. Other data, for example from ocean buoys, are sometimes also included in the derivation of the source model. The associated tsunami event can often be well simulated in this way, and the results show high correlation with measured data. However, it is unclear how the derived source model compares to the actual earthquake event. In this study we use the results of dynamic rupture simulations obtained with SeisSol, a software package based on an ADER-DG discretization that solves the spontaneous dynamic earthquake rupture problem with high-order accuracy in space and time. The tsunami model is based on a second-order Runge-Kutta discontinuous Galerkin (RKDG) scheme on triangular grids and features a robust wetting and drying scheme for the simulation of inundation events at the coast. Adaptive mesh refinement enables the efficient computation of large domains, while at the same time allowing for high local resolution and geometric accuracy. The results are compared to measured data and to results using earthquake sources based on inversion. By using the output of actual dynamic rupture simulations, we can estimate the influence of different earthquake parameters. Furthermore, the comparison with other source models enables a thorough comparison and validation of important tsunami parameters, such as the runup at the coast. This work is part of the ASCETE (Advanced Simulation of Coupled Earthquake and Tsunami Events) project, which aims at an improved understanding of the coupling between the earthquake and the generated tsunami event.

  1. Accuracy of the unified approach in maternally influenced traits - illustrated by a simulation study in the honey bee (Apis mellifera)

    PubMed Central

    2013-01-01

Background The honey bee is an economically important species. With a rapid decline of the honey bee population, it is necessary to implement an improved genetic evaluation methodology. In this study, we investigated the applicability of the unified approach and its impact on the accuracy of estimation of breeding values for maternally influenced traits on a simulated dataset for the honey bee. Due to the limit on the number of individuals that can be genotyped in a honey bee population, the unified approach can be an efficient strategy to increase the genetic gain and to provide a more accurate estimation of breeding values. We calculated the accuracy of estimated breeding values for two evaluation approaches: the unified approach and the traditional pedigree-based approach. We analyzed the effects of different heritabilities, as well as of the genetic correlation between direct and maternal effects, on the accuracy of estimation of direct, maternal and overall breeding values (the sum of maternal and direct breeding values). The genetic and reproductive biology of the honey bee was accounted for by taking into consideration characteristics such as colony structure, uncertain paternity, overlapping generations and polyandry. In addition, we used a modified numerator relationship matrix and a realistic genome for the honey bee. Results For all values of heritability and correlation, the accuracy of overall estimated breeding values increased significantly with the unified approach. The increase in accuracy was always higher when there was no correlation than when a negative correlation existed between maternal and direct effects. Conclusions Our study shows that the unified approach is a useful methodology for genetic evaluation in honey bees, and can contribute immensely to the improvement of traits of apicultural interest such as resistance to Varroa or production and behavioural traits.
In particular, the study is of great interest for

  2. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) SRTM (Shuttle Radar Topography Mission) DTM data with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values, and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation results depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, the residual value distribution being also normal, but in the case of the test on the real data the residual value distribution is

  3. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given. A listing of the computer program written to implement these techniques is given. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is given. The results of matrices from the mapping effort of the San Juan National Forest are given. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is given. A proposed method for determining the reliability of change detection between two maps of the same area produced at different times is given.
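
    The discrete multivariate techniques referred to here center on the error (confusion) matrix. As a sketch, overall accuracy and Cohen's kappa, the standard agreement measures in this literature, can be computed as below; the 3-class matrix is made up for illustration, not taken from the San Juan data.

```python
def accuracy_and_kappa(matrix):
    """Overall accuracy and Cohen's kappa from a square error (confusion) matrix."""
    k = len(matrix)
    n = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(k))
    po = diag / n  # observed agreement = overall accuracy
    # chance agreement from row and column marginals
    pe = sum(sum(matrix[i]) * sum(row[i] for row in matrix) for i in range(k)) / n ** 2
    return po, (po - pe) / (1 - pe)

# hypothetical 3-class error matrix (rows: map classes, columns: reference classes)
m = [[65, 4, 1],
     [6, 81, 3],
     [0, 5, 35]]
po, kappa = accuracy_and_kappa(m)  # po = 0.905, kappa ≈ 0.850
```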

  4. Reliable results from stochastic simulation models

    Treesearch

    Donald L., Jr. Gochenour; Leonard R. Johnson

    1973-01-01

    Development of a computer simulation model is usually done without fully considering how long the model should run (e.g., computer time) before the results are reliable. However, construction of confidence intervals (CIs) about critical output parameters from the simulation model makes it possible to determine the point where model results are reliable. If the results are...
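
    The stopping rule described above can be sketched as follows: keep running the stochastic model until the approximate 95% confidence interval about the output mean is tighter than a chosen relative half-width. The batch size and 5% precision target below are illustrative choices, not the paper's.

```python
import random
import statistics

def run_until_precise(sample_fn, rel_halfwidth=0.05, batch=100, max_n=100_000):
    """Draw batches of replications until the ~95% CI half-width (1.96 * s / sqrt(n))
    falls below rel_halfwidth * mean, i.e. the point where results become reliable."""
    data = []
    while len(data) < max_n:
        data.extend(sample_fn() for _ in range(batch))
        m = statistics.fmean(data)
        hw = 1.96 * statistics.stdev(data) / len(data) ** 0.5
        if hw <= rel_halfwidth * abs(m):
            return m, hw, len(data)
    return m, hw, len(data)

random.seed(7)
# stand-in simulation output: exponential service times with true mean 20
mean, hw, n = run_until_precise(lambda: random.expovariate(1 / 20.0))
```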

  5. Persistency of accuracy of genomic breeding values for different simulated pig breeding programs in developing countries.

    PubMed

    Akanno, E C; Schenkel, F S; Sargolzaei, M; Friendship, R M; Robinson, J A B

    2014-10-01

    Genetic improvement of pigs in tropical developing countries has focused on imported exotic populations, which have been subjected to intensive selection with attendant high population-wide linkage disequilibrium (LD). Presently, indigenous pig populations with limited selection and low LD are being considered for improvement. Given that the infrastructure for genetic improvement using conventional BLUP selection methods is lacking, a genome-wide selection (GS) program was proposed for developing countries. A simulation study was conducted to evaluate the option of using a 60 K SNP panel and the observed amount of LD in the exotic and indigenous pig populations. Several scenarios were evaluated, including different sizes and structures of training and validation populations, different selection methods, and the long-term accuracy of GS in different population/breeding structures and traits. The training set included a previously selected exotic population, an unselected indigenous population and their crossbreds. Traits studied included number born alive (NBA), average daily gain (ADG) and back fat thickness (BFT). The ridge regression method was used to train the prediction model. The results showed that accuracies of genomic breeding values (GBVs) in the range of 0.30 (NBA) to 0.86 (BFT) in the validation population are expected if high density marker panels are utilized. The GS method improved the accuracy of breeding values more than the pedigree-based approach for traits with low heritability and in young animals with no performance data. Crossbred training populations performed better than purebreds when validation was in populations with a similar or a different structure from the training set. Genome-wide selection holds promise for genetic improvement of pigs in the tropics. © 2014 Blackwell Verlag GmbH.
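
    The ridge regression training step named in this abstract can be illustrated on a toy marker panel: shrink marker-effect estimates toward zero, then sum genotype-weighted effects into a genomic breeding value. The panel size, genotype coding and shrinkage parameter below are illustrative choices, not the study's settings.

```python
import random

def ridge_solve(X, y, lam):
    """Solve (X'X + lam*I) b = X'y by Gauss-Jordan elimination (toy marker-effect model)."""
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) + (lam if i == j else 0.0)
          for j in range(p)] for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    for i in range(p):  # no pivoting: fine here, the ridge term keeps the diagonal dominant
        piv = A[i][i]
        for j in range(i, p):
            A[i][j] /= piv
        b[i] /= piv
        for r in range(p):
            if r != i and A[r][i]:
                f = A[r][i]
                for j in range(i, p):
                    A[r][j] -= f * A[i][j]
                b[r] -= f * b[i]
    return b

random.seed(3)
n, p = 200, 10
X = [[random.choice((0, 1, 2)) - 1.0 for _ in range(p)] for _ in range(n)]  # coded genotypes
true = [random.gauss(0.0, 0.5) for _ in range(p)]                            # true marker effects
y = [sum(x * t for x, t in zip(row, true)) + random.gauss(0.0, 0.5) for row in X]
beta = ridge_solve(X, y, lam=1.0)
gebv = [sum(x * b for x, b in zip(row, beta)) for row in X]  # genomic breeding values
```

Accuracy of GS in such simulations is then the correlation between these predicted values and the simulated true breeding values.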

  6. Dynamic contrast-enhanced MRI: Study of inter-software accuracy and reproducibility using simulated and clinical data.

    PubMed

    Beuzit, Luc; Eliat, Pierre-Antoine; Brun, Vanessa; Ferré, Jean-Christophe; Gandon, Yves; Bannier, Elise; Saint-Jalmes, Hervé

    2016-06-01

    To test the reproducibility and accuracy of pharmacokinetic parameter measurements on five analysis software packages (SPs) for dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), using simulated and clinical data. This retrospective study was Institutional Review Board-approved. Simulated tissues consisted of pixel clusters of calculated dynamic signal changes for combinations of Tofts model pharmacokinetic parameters (volume transfer constant [K(trans)], extravascular extracellular volume fraction [ve]) and longitudinal relaxation time (T1). The clinical group comprised 27 patients treated for rectal cancer, with 36 3T DCE-MR scans performed between November 2012 and February 2014, including dual-flip-angle T1 mapping and a dynamic postcontrast T1-weighted, 3D spoiled gradient-echo sequence. The clinical and simulated images were postprocessed with five SPs to measure K(trans), ve, and the initial area under the gadolinium curve (iAUGC). Modified Bland-Altman analysis was conducted, and intraclass correlation coefficients (ICCs) and within-subject coefficients of variation were calculated. Thirty-one examinations from 23 patients were of sufficient technical quality and postprocessed. Measurement errors were observed on the simulated data for all the pharmacokinetic parameters and SPs, with a bias ranging from -0.19 min(-1) to 0.09 min(-1) for K(trans), -0.15 to 0.01 for ve, and -0.65 to 1.66 mmol.L(-1).min for iAUGC. The ICC between SPs revealed moderate agreement for the simulated data (K(trans): 0.50; ve: 0.67; iAUGC: 0.77) and very poor agreement for the clinical data (K(trans): 0.10; ve: 0.16; iAUGC: 0.21). Significant errors were found in the calculated DCE-MRI pharmacokinetic parameters for the perfusion analysis SPs, resulting in poor inter-software reproducibility. J. Magn. Reson. Imaging 2016;43:1288-1300. © 2015 Wiley Periodicals, Inc.
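
    The Bland-Altman analysis used in this study reduces to a short computation: the mean of the pairwise differences (bias) and the 95% limits of agreement at bias ± 1.96 SD. The two lists of K(trans) read-outs below are hypothetical values for illustration, not the study's measurements.

```python
import statistics

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement series."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical K(trans) values (min^-1) from two software packages on the same lesions
sp1 = [0.12, 0.25, 0.31, 0.18, 0.22, 0.40, 0.27]
sp2 = [0.10, 0.28, 0.35, 0.16, 0.25, 0.46, 0.30]
bias, (lo, hi) = bland_altman(sp1, sp2)
```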

  7. 3-D numerical simulations of earthquake ground motion in sedimentary basins: testing accuracy through stringent models

    NASA Astrophysics Data System (ADS)

    Chaljub, Emmanuel; Maufroy, Emeline; Moczo, Peter; Kristek, Jozef; Hollender, Fabrice; Bard, Pierre-Yves; Priolo, Enrico; Klin, Peter; de Martin, Florent; Zhang, Zhenguo; Zhang, Wei; Chen, Xiaofei

    2015-04-01

    Differences between 3-D numerical predictions of earthquake ground motion in the Mygdonian basin near Thessaloniki, Greece, led us to define four canonical stringent models derived from the complex realistic 3-D model of the Mygdonian basin. Sediments atop an elastic bedrock are modelled in the 1D-sharp and 1D-smooth models using three homogeneous layers and a smooth velocity distribution, respectively. The 2D-sharp and 2D-smooth models are extensions of the 1-D models to an asymmetric sedimentary valley. In all cases, 3-D wavefields include strongly dispersive surface waves in the sediments. We compared simulations by the Fourier pseudo-spectral method (FPSM), the Legendre spectral-element method (SEM) and two formulations of the finite-difference method (FDM-S and FDM-C) up to 4 Hz. The accuracy of individual solutions and the level of agreement between solutions vary with the type of seismic waves and depend on the smoothness of the velocity model. The level of accuracy is high for the body waves in all solutions. However, it strongly depends on the discrete representation of the material interfaces (at which material parameters change discontinuously) for the surface waves in the sharp models. An improper discrete representation of the interfaces can cause inaccurate numerical modelling of surface waves. For all the numerical methods considered, except SEM with a mesh of elements following the interfaces, a proper implementation of interfaces requires the definition of an effective medium consistent with the interface boundary conditions. An orthorhombic effective medium is shown to significantly improve accuracy and preserve the computational efficiency of modelling. The conclusions drawn from the analysis of the results of the canonical cases greatly help to explain differences between numerical predictions of ground motion in realistic models of the Mygdonian basin. We recommend that any numerical method and code that is intended for numerical prediction of earthquake

  8. Improving the trust in results of numerical simulations and scientific data analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappello, Franck; Constantinescu, Emil; Hovland, Paul

    general approaches to address it. This paper does not focus on the trust that the execution will actually complete. The product of simulation or of data analytic executions is the final element of a potentially long chain of transformations, where each stage has the potential to introduce harmful corruptions. These corruptions may produce results that deviate from the user-expected accuracy without notifying the user of this deviation. There are many potential sources of corruption before and during the execution; consequently, in this white paper we do not focus on the protection of the end result after the execution.

  9. Accuracy of Monte Carlo photon transport simulation in characterizing brachytherapy dosimeter energy-response artefacts.

    PubMed

    Das, R K; Li, Z; Perera, H; Williamson, J F

    1996-06-01

    Practical dosimeters in brachytherapy, such as thermoluminescent dosimeters (TLD) and diodes, are usually calibrated against low-energy megavoltage beams. To measure absolute dose rate near a brachytherapy source, it is necessary to establish the energy response of the detector relative to that at the calibration energy. The purpose of this paper is to assess the accuracy of Monte Carlo photon transport (MCPT) simulation in modelling the absolute detector response as a function of detector geometry and photon energy. We have exposed two different sizes of TLD-100 (LiF chips) and p-type silicon diode detectors to calibrated 60Co, HDR source (192Ir) and superficial x-ray beams. For the Scanditronix electron-field diode, the relative detector response, defined as the measured detector reading per measured unit of air kerma, varied from 38.46 V cGy-1 (40 kVp beam) to 6.22 V cGy-1 (60Co beam). Similarly, for the large and small chips the same quantity varied from 2.08 to 3.02 nC cGy-1 and from 0.171 to 0.244 nC cGy-1, respectively. Monte Carlo simulation was used to calculate the absorbed dose to the active volume of the detector per unit air kerma. If the Monte Carlo simulation is accurate, then the absolute detector response, defined as the measured detector reading per unit dose absorbed by the active detector volume (with the absorbed dose calculated by Monte Carlo simulation), should be constant. For the diode, the absolute response is 5.86 +/- 0.15 (V cGy-1). For TLDs of size 3 x 3 x 1 mm3 the absolute response is 2.47 +/- 0.07 (nC cGy-1), and for TLDs of 1 x 1 x 1 mm3 it is 0.201 +/- 0.008 (nC cGy-1). From these results we can conclude that the absolute response of the detectors (TLDs and diodes) is directly proportional to the dose absorbed by the active volume of the detector and is independent of beam quality.

  10. Accuracy of a Computer-Aided Surgical Simulation (CASS) Protocol for Orthognathic Surgery: A Prospective Multicenter Study

    PubMed Central

    Hsu, Sam Sheng-Pin; Gateno, Jaime; Bell, R. Bryan; Hirsch, David L.; Markiewicz, Michael R.; Teichgraeber, John F.; Zhou, Xiaobo; Xia, James J.

    2012-01-01

    Purpose The purpose of this prospective multicenter study was to assess the accuracy of a computer-aided surgical simulation (CASS) protocol for orthognathic surgery. Materials and Methods The accuracy of the CASS protocol was assessed by comparing planned and postoperative outcomes of 65 consecutive patients enrolled from 3 centers. Computer-generated surgical splints were used for all patients. For the genioplasty, one center utilized computer-generated chin templates to reposition the chin segment only for patients with asymmetry. Standard intraoperative measurements were utilized without the chin templates for the remaining patients. The primary outcome measurements were linear and angular differences for the maxilla, mandible and chin when the planned and postoperative models were registered at the cranium. The secondary outcome measurements were: maxillary dental midline difference between the planned and postoperative positions; and linear and angular differences of the chin segment between the groups with and without the use of the template. The latter was measured when the planned and postoperative models were registered at the mandibular body. Statistical analyses were performed, and the accuracy was reported using root mean square deviation (RMSD) and Bland and Altman's method for assessing measurement agreement. Results In the primary outcome measurements, there was no statistically significant difference among the 3 centers for the maxilla and mandible. The largest RMSD was 1.0 mm and 1.5° for the maxilla, and 1.1 mm and 1.8° for the mandible. For the chin, there was a statistically significant difference between the groups with and without the use of the chin template. The chin template group showed excellent accuracy, with a largest positional RMSD of 1.0 mm and a largest orientational RMSD of 2.2°. However, larger variances were observed in the group not using the chin template. This was significant in anteroposterior and superoinferior directions, as in
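
    The RMSD metric reported here is simply the root-mean-square of the coordinate differences between planned and postoperative positions. A minimal sketch, with made-up landmark coordinates (mm) rather than study data:

```python
import math

def rmsd(planned, actual):
    """Root-mean-square deviation between paired coordinate lists (same units)."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(planned, actual)) / len(planned))

# hypothetical x, y, z coordinates of two landmarks, planned vs. postoperative
planned = [0.0, 0.0, 0.0, 1.0, 2.0, 2.0]
actual = [1.0, 1.0, 1.0, 1.0, 2.0, 4.0]
r = rmsd(planned, actual)  # ≈ 1.08 mm
```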

  11. Accuracy of buffered-force QM/MM simulations of silica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peguiron, Anke; Moras, Gianpietro; Colombi Ciacchi, Lucio

    2015-02-14

    We report comparisons between energy-based quantum mechanics/molecular mechanics (QM/MM) and buffered force-based QM/MM simulations in silica. Local quantities—such as density of states, charges, forces, and geometries—calculated with both QM/MM approaches are compared to the results of full QM simulations. We find that the length scale over which forces computed using a finite QM region converge to reference values obtained in full quantum-mechanical calculations is ∼10 Å rather than the ∼5 Å previously reported for covalent materials such as silicon. Electrostatic embedding of the QM region in the surrounding classical point charges gives only a minor contribution to the force convergence. While the energy-based approach provides accurate results in geometry optimizations of point defects, we find that the removal of large force errors at the QM/MM boundary provided by the buffered force-based scheme is necessary for accurate constrained geometry optimizations where Si–O bonds are elongated and for finite-temperature molecular dynamics simulations of crack propagation. Moreover, the buffered approach allows for more flexibility, since special-purpose QM/MM coupling terms that link QM and MM atoms are not required and the region that is treated at the QM level can be adaptively redefined during the course of a dynamical simulation.

  12. Summarizing Simulation Results using Causally-relevant States

    PubMed Central

    Parikh, Nidhi; Marathe, Madhav; Swarup, Samarth

    2016-01-01

    As increasingly large-scale multiagent simulations are being implemented, new methods are becoming necessary to make sense of the results of these simulations. Even concisely summarizing the results of a given simulation run is a challenge. Here we pose this as the problem of simulation summarization: how to extract the causally-relevant descriptions of the trajectories of the agents in the simulation. We present a simple algorithm to compress agent trajectories through state space by identifying the state transitions which are relevant to determining the distribution of outcomes at the end of the simulation. We present a toy-example to illustrate the working of the algorithm, and then apply it to a complex simulation of a major disaster in an urban area. PMID:28042620

  13. Improvement of shallow landslide prediction accuracy using soil parameterisation for a granite area in South Korea

    NASA Astrophysics Data System (ADS)

    Kim, M. S.; Onda, Y.; Kim, J. K.

    2015-01-01

    The SHALSTAB model was applied to shallow landslides induced by rainfall in order to evaluate soil properties, taking the effect of soil depth into account, for a granite area in the Jinbu region, Republic of Korea. Soil depth measured by a knocking pole test, two soil parameters from a direct shear test (a and b), and one soil parameter from a triaxial compression test (c) were collected to determine the input parameters for the model. Experimental soil data were used for the first simulation (Case I), and soil data representing the effect of the measured soil depth and of the average soil depth derived from the Case I data were used in the second (Case II) and third (Case III) simulations, respectively. All simulations were analysed using receiver operating characteristic (ROC) analysis to determine the accuracy of prediction. The ROC results for the first simulation showed low values (under 0.75), which may be due to the internal friction angle and particularly the cohesion value. Soil parameters calculated from a stochastic hydro-geomorphological model were then applied to the SHALSTAB model. Cases II and III showed higher ROC accuracy values than the first simulation. Our results clearly demonstrate that the accuracy of shallow landslide prediction can be improved when the soil parameters represent the effect of soil thickness.
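
    The ROC accuracy values quoted above correspond to the area under the ROC curve, which can be computed directly with the rank-sum (Mann-Whitney) formulation. The instability scores and landslide labels below are hypothetical cell values, not the study's data.

```python
def roc_auc(scores, labels):
    """Area under the ROC curve: fraction of positive-negative pairs ranked correctly."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical model instability scores at mapped landslide (1) and stable (0) cells
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0, 1, 0]
auc = roc_auc(scores, labels)  # 0.75
```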

  14. SARDA HITL Simulations: System Performance Results

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam

    2012-01-01

    This presentation gives an overview of the 2012 SARDA human-in-the-loop simulation, and presents a summary of system performance results from the simulation, including delay, throughput and fuel consumption.

  15. Scaling between Wind Tunnels-Results Accuracy in Two-Dimensional Testing

    NASA Astrophysics Data System (ADS)

    Rasuo, Bosko

    The establishment of exact two-dimensional flow conditions in wind tunnels is a very difficult problem. This has been evident for wind tunnels of all types and scales. In this paper, the principal factors that influence the accuracy of two-dimensional wind tunnel test results are analyzed. The influences of the Reynolds number, Mach number and wall interference with reference to solid and flow blockage (blockage of wake) as well as the influence of side-wall boundary layer control are analyzed. Interesting results are brought to light regarding the Reynolds number effects of the test model versus the Reynolds number effects of the facility in subsonic and transonic flow.

  16. Accuracy of Mass and Radius Determination for Neutron Stars in X-ray Bursters from Simulated LOFT Spectra

    NASA Astrophysics Data System (ADS)

    Majczyna, A.; Madej, J.; Różańska, A.; Należyty, M.

    2017-06-01

    We present a simulation of an X-ray spectrum of a hot neutron star, as would be seen by the LAD detector on board the LOFT satellite. We also compute a grid of theoretical spectra corresponding to a range of effective temperatures Teff and surface gravities log g with values corresponding to compact stars in Type I X-ray bursters. A neutron star with mass M = 1.64 M⊙ and radius R = 11.95 km (which yields surface gravity log g = 14.30 [cgs] and surface redshift z = 0.30) is used in the simulation. The accuracy of mass and radius determination by fitting theoretical spectra to the observed one is found to be M = 1.64 (+0.16/-0.02) M⊙ and R = 11.95 (+1.57/-0.40) km (2σ). The confidence contours for these two variables are narrow but elongated, and therefore the resulting constraints on the EOS cannot be strong. Note that in this paper we aim to discuss error contours of the NS mass and radius, whereas discussion of the EOS is beyond the scope of this work.
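
    The quoted log g = 14.30 and z = 0.30 follow from the stated mass and radius via the standard general-relativistic surface formulas; a quick check, using rounded physical constants:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
MSUN = 1.989e30   # solar mass, kg

def surface_gravity_and_redshift(m_sun, r_km):
    """GR surface gravity (log10, cgs) and gravitational redshift for a neutron star."""
    M, R = m_sun * MSUN, r_km * 1e3
    x = 1.0 - 2.0 * G * M / (R * C * C)   # metric factor 1 - r_s / R
    g = G * M / (R * R * math.sqrt(x))    # proper surface gravity, m/s^2
    z = 1.0 / math.sqrt(x) - 1.0          # surface redshift
    return math.log10(g * 100.0), z       # *100 converts m/s^2 to cm/s^2

logg, z = surface_gravity_and_redshift(1.64, 11.95)  # ≈ 14.30, ≈ 0.30
```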

  17. Orbit Determination Accuracy for Comets on Earth-Impacting Trajectories

    NASA Technical Reports Server (NTRS)

    Kay-Bunnell, Linda

    2004-01-01

    The results presented show the level of orbit determination accuracy obtainable for long-period comets discovered approximately one year before collision with Earth. Preliminary orbits are determined from simulated observations using Gauss' method. Additional measurements are incorporated to improve the solution through the use of a Kalman filter, and include non-gravitational perturbations due to outgassing. Comparisons between observatories in several different circular heliocentric orbits show that observatories in orbits with radii less than 1 AU result in increased orbit determination accuracy for short tracking durations due to increased parallax per unit time. However, an observatory at 1 AU will perform similarly if the tracking duration is increased, and accuracy is significantly improved if additional observatories are positioned at the Sun-Earth Lagrange points L3, L4, or L5. A single observatory at 1 AU capable of both optical and range measurements yields the highest orbit determination accuracy in the shortest amount of time when compared to other systems of observatories.

  18. Does exposure to simulated patient cases improve accuracy of clinicians' predictive value estimates of diagnostic test results? A within-subjects experiment at St Michael's Hospital, Toronto, Canada.

    PubMed

    Armstrong, Bonnie; Spaniol, Julia; Persaud, Nav

    2018-02-13

    Clinicians often overestimate the probability of a disease given a positive test result (positive predictive value; PPV) and the probability of no disease given a negative test result (negative predictive value; NPV). The purpose of this study was to investigate whether experiencing simulated patient cases (ie, an 'experience format') would promote more accurate PPV and NPV estimates compared with a numerical format. Participants were presented with information about three diagnostic tests for the same fictitious disease and were asked to estimate the PPV and NPV of each test. Tests varied with respect to sensitivity and specificity. Information about each test was presented once in the numerical format and once in the experience format. The study used a 2 (format: numerical vs experience) × 3 (diagnostic test: gold standard vs low sensitivity vs low specificity) within-subjects design. The study was completed online, via Qualtrics (Provo, Utah, USA). 50 physicians (12 clinicians and 38 residents) from the Department of Family and Community Medicine at St Michael's Hospital in Toronto, Canada, completed the study. All participants had completed at least 1 year of residency. Estimation accuracy was quantified by the mean absolute error (MAE; absolute difference between estimate and true predictive value). PPV estimation errors were larger in the numerical format (MAE=32.6%, 95% CI 26.8% to 38.4%) compared with the experience format (MAE=15.9%, 95% CI 11.8% to 20.0%, d =0.697, P<0.001). Likewise, NPV estimation errors were larger in the numerical format (MAE=24.4%, 95% CI 14.5% to 34.3%) than in the experience format (MAE=11.0%, 95% CI 6.5% to 15.5%, d =0.303, P=0.015). Exposure to simulated patient cases promotes accurate estimation of predictive values in clinicians. This finding carries implications for diagnostic training and practice. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights
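
    The predictive values the participants were asked to estimate follow from sensitivity, specificity and prevalence by Bayes' rule. A minimal sketch; the test characteristics and prevalence below are illustrative values, not those used in the study.

```python
def ppv_npv(sens, spec, prev):
    """Positive and negative predictive values from sensitivity, specificity, prevalence."""
    tp = sens * prev              # true positives (per unit population)
    fp = (1 - spec) * (1 - prev)  # false positives
    tn = spec * (1 - prev)        # true negatives
    fn = (1 - sens) * prev        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# hypothetical test: 90% sensitive, 80% specific, 10% disease prevalence
ppv, npv = ppv_npv(0.90, 0.80, 0.10)  # PPV ≈ 0.33, NPV ≈ 0.99
```

With a low prevalence, the PPV is far lower than the sensitivity, which is exactly the intuition that clinicians in such studies tend to miss.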

  19. Micro-scale finite element modeling of ultrasound propagation in aluminum trabecular bone-mimicking phantoms: A comparison between numerical simulation and experimental results.

    PubMed

    Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S

    2016-05-01

    The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples serving as trabecular bone-mimicking phantoms were utilized for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated for implementation in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and the average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6%, respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound, with maximum errors of 20 m/s and 11 m/s respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Cause and Cure - Deterioration in Accuracy of CFD Simulations With Use of High-Aspect-Ratio Triangular Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar

    2017-01-01

    tetrahedral-grid case along with some of the practical results of this extension is also provided. Furthermore, through the use of numerical simulations of practical viscous problems involving high-Reynolds number flows, the effectiveness of the gradient evaluation procedures within the CESE framework (that have their basis on the analysis presented here) to produce accurate and stable results on such high-aspect ratio meshes is also showcased.

  1. DKIST Adaptive Optics System: Simulation Results

    NASA Astrophysics Data System (ADS)

    Marino, Jose; Schmidt, Dirk

    2016-05-01

    The 4 m class Daniel K. Inouye Solar Telescope (DKIST), currently under construction, will be equipped with an ultra high order solar adaptive optics (AO) system. The requirements and capabilities of such a solar AO system are beyond those of any other solar AO system currently in operation. We must rely on solar AO simulations to estimate and quantify its performance. We present performance estimation results of the DKIST AO system obtained with a new solar AO simulation tool. This simulation tool is a flexible and fast end-to-end solar AO simulator which produces accurate solar AO simulations while taking advantage of current multi-core computer technology. It relies on full imaging simulations of the extended field Shack-Hartmann wavefront sensor (WFS), which directly includes important secondary effects such as field dependent distortions and varying contrast of the WFS sub-aperture images.

  2. Dependence of Dynamic Modeling Accuracy on Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    The NASA Generic Transport Model (GTM) nonlinear simulation was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of identified parameters in mathematical models describing the flight dynamics and determined from flight data. Measurements from a typical flight condition and system identification maneuver were systematically and progressively deteriorated by introducing noise, resolution errors, and bias errors. The data were then used to estimate nondimensional stability and control derivatives within a Monte Carlo simulation. Based on these results, recommendations are provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using additional flight conditions and parameter estimation methods, as well as a nonlinear flight simulation of the General Dynamics F-16 aircraft, were compared with these recommendations.
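
    The Monte Carlo procedure described above — repeatedly corrupting measurements and re-estimating model parameters to see how the estimates scatter — can be sketched with a one-parameter toy model. The linear "derivative" and noise levels below are stand-ins, not GTM values.

```python
import random
import statistics

def estimate_derivative(noise_sd, n_trials=300, seed=0):
    """Monte Carlo check of how sensor noise inflates the scatter of an identified
    parameter (here the slope of a linear model fitted to noisy measurements)."""
    rng = random.Random(seed)
    xs = [i / 10.0 for i in range(50)]  # input (e.g., angle of attack samples)
    true_slope = -1.2                   # stand-in stability derivative
    ests = []
    for _ in range(n_trials):
        ys = [true_slope * x + rng.gauss(0.0, noise_sd) for x in xs]
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        ests.append(slope)
    return statistics.fmean(ests), statistics.stdev(ests)

m_lo, s_lo = estimate_derivative(0.05)  # low sensor noise
m_hi, s_hi = estimate_derivative(0.50)  # high sensor noise: larger parameter scatter
```

Sweeping the noise level and reading off the resulting parameter scatter is how maximum allowable sensor errors for a target modeling accuracy can be backed out.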

  3. Use of Electronic Health Record Simulation to Understand the Accuracy of Intern Progress Notes

    PubMed Central

    March, Christopher A.; Scholl, Gretchen; Dversdal, Renee K.; Richards, Matthew; Wilson, Leah M.; Mohan, Vishnu; Gold, Jeffrey A.

    2016-01-01

    Background With the widespread adoption of electronic health records (EHRs), there is a growing awareness of problems in EHR training for new users and subsequent problems with the quality of information present in EHR-generated progress notes. By standardizing the case, simulation allows for the discovery of EHR patterns of use as well as a modality to aid in EHR training. Objective To develop a high-fidelity EHR training exercise for internal medicine interns to understand patterns of EHR utilization in the generation of daily progress notes. Methods Three months after beginning their internship, 32 interns participated in an EHR simulation designed to assess patterns in note writing and generation. Each intern was given a simulated chart and instructed to create a daily progress note. Notes were graded for use of copy-paste, macros, and accuracy of presented data. Results A total of 31 out of 32 interns (97%) completed the exercise. There was wide variance in use of macros to populate data, with multiple macro types used for the same data category. Three-quarters of notes contained either copy-paste elements or the elimination of active medical problems from the prior days' notes. This was associated with a significant number of quality issues, including failure to recognize a lack of deep vein thrombosis prophylaxis, medications stopped on admission, and issues in prior discharge summary. Conclusions Interns displayed wide variation in the process of creating progress notes. Additional studies are being conducted to determine the impact EHR-based simulation has on standardization of note content. PMID:27168894

  4. Use of Electronic Health Record Simulation to Understand the Accuracy of Intern Progress Notes.

    PubMed

    March, Christopher A; Scholl, Gretchen; Dversdal, Renee K; Richards, Matthew; Wilson, Leah M; Mohan, Vishnu; Gold, Jeffrey A

    2016-05-01

    Background With the widespread adoption of electronic health records (EHRs), there is a growing awareness of problems in EHR training for new users and subsequent problems with the quality of information present in EHR-generated progress notes. By standardizing the case, simulation allows for the discovery of EHR patterns of use as well as a modality to aid in EHR training. Objective To develop a high-fidelity EHR training exercise for internal medicine interns to understand patterns of EHR utilization in the generation of daily progress notes. Methods Three months after beginning their internship, 32 interns participated in an EHR simulation designed to assess patterns in note writing and generation. Each intern was given a simulated chart and instructed to create a daily progress note. Notes were graded for use of copy-paste, macros, and accuracy of presented data. Results A total of 31 out of 32 interns (97%) completed the exercise. There was wide variance in use of macros to populate data, with multiple macro types used for the same data category. Three-quarters of notes contained either copy-paste elements or the elimination of active medical problems from the prior days' notes. This was associated with a significant number of quality issues, including failure to recognize a lack of deep vein thrombosis prophylaxis, medications stopped on admission, and issues in prior discharge summary. Conclusions Interns displayed wide variation in the process of creating progress notes. Additional studies are being conducted to determine the impact EHR-based simulation has on standardization of note content.

  5. Weight Multispectral Reconstruction Strategy for Enhanced Reconstruction Accuracy and Stability With Cerenkov Luminescence Tomography.

    PubMed

    Hongbo Guo; Xiaowei He; Muhan Liu; Zeyu Zhang; Zhenhua Hu; Jie Tian

    2017-06-01

    Cerenkov luminescence tomography (CLT) provides a novel technique for 3-D noninvasive detection of radiopharmaceuticals in living subjects. However, because of the severe scattering of Cerenkov light, the reconstruction accuracy and stability of CLT remain unsatisfactory. In this paper, a modified weight multispectral CLT (wmCLT) reconstruction strategy was developed, which splits the Cerenkov radiation spectrum into several sub-spectral bands and weights the sub-spectral results to obtain the final result. To better evaluate the wmCLT reconstruction strategy in terms of accuracy, stability, and practicability, several numerical simulation experiments and in vivo experiments were conducted, and the results were compared with the traditional multispectral CLT (mCLT) and hybrid-spectral CLT (hCLT) reconstruction strategies. The numerical simulation results indicated that the wmCLT strategy significantly improved the accuracy of Cerenkov source localization and intensity quantitation and exhibited good stability in suppressing noise. The comparison of results from different in vivo experiments further indicated significant improvement of the wmCLT strategy in terms of shape recovery of the bladder and the spatial resolution of imaging xenograft tumors. Overall, the strategy reported here will facilitate the development of nuclear and optical molecular tomography in theoretical study.
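The combination step of the wmCLT strategy described above, splitting the spectrum into sub-bands and weighting the per-band reconstructions, can be sketched as a normalized weighted sum. The band count, weights, and voxel values below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def weighted_multispectral(sub_results, weights):
    """Combine per-band reconstructions into one result as a
    normalized weighted sum over the sub-spectral bands."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize band weights to sum to 1
    return w @ np.asarray(sub_results)    # weighted sum over the band axis

# Toy example: three sub-band reconstructions of a 4-voxel source.
subs = np.array([[0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.8, 0.2, 0.0],
                 [0.0, 0.9, 0.1, 0.0]])
combined = weighted_multispectral(subs, [0.5, 0.3, 0.2])
print(combined)
```

In practice the weights would be chosen from physical considerations (e.g. the relative Cerenkov intensity in each band); here they are arbitrary.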

  6. Simulation Results for Airborne Precision Spacing along Continuous Descent Arrivals

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.; Baxley, Brian T.

    2008-01-01

    This paper describes the results of a fast-time simulation experiment and a high-fidelity simulator validation with merging streams of aircraft flying Continuous Descent Arrivals through generic airspace to a runway at Dallas-Ft Worth. Aircraft made small speed adjustments based on an airborne spacing algorithm, so as to arrive at the threshold precisely at the assigned time interval behind their Traffic-To-Follow. The 40 aircraft were initialized at different altitudes and speeds on one of four different routes, and then merged at different points and altitudes while flying Continuous Descent Arrivals. This merging and spacing, using flight deck equipment and procedures to augment or implement Air Traffic Management directives, is called Flight Deck-based Merging and Spacing, an important subset of a larger Airborne Precision Spacing functionality. This research indicates that Flight Deck-based Merging and Spacing initiated at cruise altitude, well prior to Terminal Radar Approach Control entry, can significantly contribute to the delivery of aircraft at a specified interval to the runway threshold with a high degree of accuracy and at a reduced pilot workload. Furthermore, previously documented work has shown that using a Continuous Descent Arrival instead of a traditional step-down descent can save fuel, reduce noise, and reduce emissions. Research into Flight Deck-based Merging and Spacing is a cooperative effort between government and industry partners.

  7. Investigation on the Accuracy of Superposition Predictions of Film Cooling Effectiveness

    NASA Astrophysics Data System (ADS)

    Meng, Tong; Zhu, Hui-ren; Liu, Cun-liang; Wei, Jian-sheng

    2018-05-01

    Film cooling effectiveness on flat plates with double rows of holes has been studied experimentally and numerically in this paper. This configuration is widely used to simulate multi-row film cooling on a turbine vane. The film cooling effectiveness of double rows of holes and of each single row was used to study the accuracy of superposition predictions. A stable infrared measurement technique was used to measure the surface temperature on the flat plate. This paper analyzes the factors that affect film cooling effectiveness, including hole shape, hole arrangement, row-to-row spacing, and blowing ratio. Numerical simulations were performed to analyze the flow structure and film cooling mechanisms between the film cooling rows. Results show that the blowing ratio, within the range of 0.5 to 2, has a significant influence on the accuracy of superposition predictions. At low blowing ratios, results obtained by the superposition method agree well with the experimental data, while at high blowing ratios the accuracy of the superposition prediction decreases. Another significant factor is hole arrangement: results obtained by superposition prediction are nearly the same as the experimental values for staggered arrangements, whereas for in-line configurations the superposition values of film cooling effectiveness are much higher than the experimental data. Among the hole shapes, the accuracy of superposition predictions for converging-expanding holes is better than for cylindrical holes and compound-angle holes. For the two hole-spacing structures examined in this paper, predictions show good agreement with the experimental results.
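Superposition predictions for multi-row film cooling are conventionally built from single-row adiabatic effectiveness values via the Sellers relation, eta_total = 1 - prod(1 - eta_i). A minimal sketch, assuming this standard form (the abstract does not state the exact formulation used in the paper):

```python
def superposed_effectiveness(etas):
    """Sellers-type superposition of single-row film cooling
    effectiveness values at one surface location:
        eta_total = 1 - prod(1 - eta_i)
    """
    remaining = 1.0
    for eta in etas:
        remaining *= (1.0 - eta)   # fraction of coolant "potential" unused
    return 1.0 - remaining

# Two rows measured individually at the same downstream point:
print(superposed_effectiveness([0.30, 0.25]))  # 0.475
```

Note the prediction is always at least as high as the largest single-row value, which is consistent with the abstract's observation that superposition over-predicts for in-line configurations at high blowing ratios.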

  8. Accuracy optimization with wavelength tunability in overlay imaging technology

    NASA Astrophysics Data System (ADS)

    Lee, Honggoo; Kang, Yoonshik; Han, Sangjoon; Shim, Kyuchan; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, Dongyoung; Oh, Eungryong; Choi, Ahlin; Kim, Youngsik; Marciano, Tal; Klein, Dana; Hajaj, Eitan M.; Aharon, Sharon; Ben-Dov, Guy; Lilach, Saltoun; Serero, Dan; Golotsvan, Anna

    2018-03-01

    As semiconductor manufacturing technology progresses and the dimensions of integrated circuit elements shrink, the overlay budget is being reduced accordingly. The overlay budget now closely approaches the scale of measurement inaccuracies caused both by optical imperfections of the measurement system and by the interaction of light with geometrical asymmetries of the measured targets. Measurement inaccuracies can no longer be ignored, given their significant effect on the resulting device yield. In this paper we investigate a new approach for imaging-based overlay (IBO) measurements by optimizing accuracy rather than contrast (precision), including the effect on total target performance, using wavelength-tunable overlay imaging metrology. We present new accuracy metrics based on theoretical development and demonstrate their quality in identifying measurement accuracy through comparison with CD-SEM overlay measurements. The paper presents the theoretical considerations and simulation work, as well as measurement data, for which tunability combined with the new accuracy metrics is shown to improve accuracy performance.

  9. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  10. 3D Printing of Preoperative Simulation Models of a Splenic Artery Aneurysm: Precision and Accuracy.

    PubMed

    Takao, Hidemasa; Amemiya, Shiori; Shibata, Eisuke; Ohtomo, Kuni

    2017-05-01

    Three-dimensional (3D) printing is attracting increasing attention in the medical field. This study aimed to apply 3D printing to the production of hollow splenic artery aneurysm models for use in the simulation of endovascular treatment, and to evaluate the precision and accuracy of the simulation model. From 3D computed tomography (CT) angiography data of a splenic artery aneurysm, 10 hollow models reproducing the vascular lumen were created using a fused deposition modeling-type desktop 3D printer. After filling with water, each model was scanned using T2-weighted magnetic resonance imaging for evaluation of the lumen. All images were coregistered, binarized, and then combined to create an overlap map. The cross-sectional area of the splenic artery aneurysm and its standard deviation (SD) were calculated perpendicular to the x- and y-axes. Most voxels overlapped among the models. The cross-sectional areas were similar among the models, with SDs <0.05 cm². The mean cross-sectional areas of the splenic artery aneurysm were slightly smaller than those calculated from the original mask images. The maximum mean cross-sectional areas calculated perpendicular to the x- and y-axes were 3.90 cm² (SD, 0.02) and 4.33 cm² (SD, 0.02), whereas those calculated from the original mask images were 4.14 cm² and 4.66 cm², respectively. The mean cross-sectional areas of the afferent artery were, however, almost the same as those calculated from the original mask images. The results suggest that 3D simulation modeling of a visceral artery aneurysm using a fused deposition modeling-type desktop 3D printer and CT angiography data is highly precise and accurate. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  11. Ultrafast High Accuracy PCRTM_SOLAR Model for Cloudy Atmosphere

    NASA Technical Reports Server (NTRS)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Yang, Ping; Wang, Chenxi

    2015-01-01

    An ultrafast high accuracy PCRTM_SOLAR model is developed based on PCA compression and the principal component-based radiative transfer model (PCRTM). A fast algorithm for simulating the multiple-scattering properties of clouds and/or aerosols is integrated into the fast infrared PCRTM. We completed radiance simulation and training for instruments such as IASI, AIRS, CrIS, NASTI, and SHIS under diverse conditions. The new model is 5 orders of magnitude faster than 52-stream DISORT, with very high accuracy for cloudy-sky radiative transfer simulation. It is suitable for hyperspectral remote sensing data assimilation and cloudy-sky retrievals.

  12. Relative significance of heat transfer processes to quantify tradeoffs between complexity and accuracy of energy simulations with a building energy use patterns classification

    NASA Astrophysics Data System (ADS)

    Heidarinejad, Mohammad

    the indoor condition regardless of the contribution of internal and external loads. To deploy the methodology to another portfolio of buildings, simulated LEED NC office buildings are selected. The advantage of this approach is to isolate energy performance due to inherent building characteristics and location, rather than operational and maintenance factors that can contribute to significant variation in building energy use. A framework for detailed building energy databases with annual energy end-uses is developed to select variables and omit outliers. The results show that the high-performance office buildings are internally load dominated, with three distinct clusters of low-intensity, medium-intensity, and high-intensity energy use patterns among the reviewed office buildings. Low-intensity cluster buildings benefit from small building area, while the medium- and high-intensity clusters have a similar range of floor areas but different energy use intensities. Half of the energy use in the low-intensity buildings is associated with internal loads, such as lighting and plug loads, indicating that there are opportunities to save energy by using lighting or plug load management systems. A comparison between the frameworks developed for the campus buildings and the LEED NC office buildings indicates that these two frameworks are complementary. The differing availability of information has yielded two different procedures, suggesting that future studies of building portfolios, such as city benchmarking and disclosure ordinances, should collect and disclose the minimal required inputs suggested by this study, with at least monthly granularity of energy consumption. This dissertation developed automated methods using the OpenStudio API (Application Programming Interface) to create energy models based on the building class. ASHRAE Guideline 14 defines well-accepted criteria to measure the accuracy of energy simulations; however, there is no well

  13. Accuracy of three-dimensional facial soft tissue simulation in post-traumatic zygoma reconstruction.

    PubMed

    Li, P; Zhou, Z W; Ren, J Y; Zhang, Y; Tian, W D; Tang, W

    2016-12-01

    The aim of this study was to evaluate the accuracy of a novel software application, CMF-preCADS, for the prediction of soft tissue changes following repositioning surgery for zygomatic fractures. Twenty patients who had sustained an isolated zygomatic fracture accompanied by facial deformity and who were treated with repositioning surgery participated in this study. Cone beam computed tomography (CBCT) scans and three-dimensional (3D) stereophotographs were acquired preoperatively and postoperatively. The 3D skeletal model from the preoperative CBCT data was matched with the postoperative one, and the fractured zygomatic fragments were segmented and aligned to the postoperative position for prediction. The predicted model was then matched with the postoperative 3D stereophotograph for quantification of the simulation error. The mean absolute error in the zygomatic soft tissue region between the predicted model and the real one was 1.42 ± 1.56 mm for all cases. The accuracy of the prediction (mean absolute error ≤ 2 mm) was 87%. In the subjective assessment, the majority of evaluators considered the predicted model and the postoperative model to be 'very similar'. The CMF-preCADS software can provide a realistic, accurate prediction of the facial soft tissue appearance after repositioning surgery for zygomatic fractures. The reliability of this software for other types of repositioning surgery for maxillofacial fractures should be validated in the future. Copyright © 2016. Published by Elsevier Ltd.

  14. Testing the Accuracy of Data-driven MHD Simulations of Active Region Evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leake, James E.; Linton, Mark G.; Schuck, Peter W., E-mail: james.e.leake@nasa.gov

    Models for the evolution of the solar coronal magnetic field are vital for understanding solar activity, yet the best measurements of the magnetic field lie at the photosphere, necessitating the development of coronal models which are “data-driven” at the photosphere. We present an investigation to determine the feasibility and accuracy of such methods. Our validation framework uses a simulation of active region (AR) formation, modeling the emergence of magnetic flux from the convection zone to the corona, as a ground-truth data set, both to supply the photospheric information and to perform the validation of the data-driven method. We focus our investigation on how the accuracy of the data-driven model depends on the temporal frequency of the driving data. The Helioseismic and Magnetic Imager on NASA’s Solar Dynamics Observatory produces full-disk vector magnetic field measurements at a 12-minute cadence. Using our framework we show that ARs that emerge over 25 hr can be modeled by the data-driven method with only ∼1% error in the free magnetic energy, assuming the photospheric information is specified every 12 minutes. However, for rapidly evolving features, under-sampling of the dynamics at this cadence leads to a strobe effect, generating large electric currents and incorrect coronal morphology and energies. We derive a sampling condition for the driving cadence based on the evolution of these small-scale features, and show that higher-cadence driving can lead to acceptable errors. Future work will investigate the source of errors associated with deriving plasma variables from the photospheric magnetograms as well as other sources of errors, such as reduced resolution, instrument bias, and noise.
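The strobe effect from under-sampled driving data can be illustrated with a generic aliasing sketch: boundary features evolving faster than twice the driving cadence produce spuriously large step-to-step jumps in the driver. The sinusoidal motion and the two periods below are illustrative assumptions, not quantities from the paper:

```python
import numpy as np

cadence = 12 * 60.0   # driving cadence in seconds (12-minute magnetograms)

def max_step(period, cadence, amp=1.0, n=200):
    """Largest step-to-step change seen when a sinusoidal boundary
    motion of the given period is sampled at the given cadence."""
    t = np.arange(n) * cadence
    x = amp * np.sin(2.0 * np.pi * t / period)
    return float(np.max(np.abs(np.diff(x))))

slow = max_step(period=6 * 3600.0, cadence=cadence)   # 6-hour feature
fast = max_step(period=15 * 60.0, cadence=cadence)    # 15-minute feature
print(slow, fast)
```

The slowly evolving feature is sampled smoothly, while the feature shorter than twice the cadence produces near full-amplitude jumps between driving steps, the discrete analogue of the strobe effect described above.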

  15. Accuracy Rates of Sex Estimation by Forensic Anthropologists through Comparison with DNA Typing Results in Forensic Casework.

    PubMed

    Thomas, Richard M; Parks, Connie L; Richard, Adam H

    2016-09-01

    A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7%, with accuracy rates increasing as more skeletal material is available for analysis and as the education level and certification of the examiner increase. Nine of 19 incorrect assessments resulted from cases in which one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate for these cases. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

  16. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.
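The accuracy definition quoted above (absolute mean plus 3 sigma in quiet times, plus 2 sigma during storms) translates directly into a small metric function. The residual values below are made-up illustrative numbers, not GOES-R data:

```python
import numpy as np

def magnetometer_accuracy(errors_nT, storm=False):
    """GOES-R-style accuracy figure: absolute mean error plus k sigma,
    with k = 3 in quiet conditions and k = 2 during storms."""
    e = np.asarray(errors_nT, dtype=float)
    k = 2 if storm else 3
    return abs(e.mean()) + k * e.std(ddof=1)

# Hypothetical measurement residuals (nT) against a reference field:
errs = [0.2, -0.1, 0.3, 0.0, -0.2, 0.1]
quiet = magnetometer_accuracy(errs)
storm = magnetometer_accuracy(errs, storm=True)
print(quiet, storm)
```

Both figures would be compared against the 1.7 nT requirement; the storm-time figure is smaller only because it uses 2 sigma rather than 3, the underlying residuals being the same.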

  17. Accuracy in identifying the elbow rotation axis on simulated fluoroscopic images using a new anatomical landmark.

    PubMed

    Wiggers, J K; Snijders, R M; Dobbe, J G G; Streekstra, G J; den Hartog, D; Schep, N W L

    2017-11-01

    External fixation of the elbow requires identification of the elbow rotation axis, but the accuracy of traditional landmarks (capitellum and trochlea) on fluoroscopy is limited. The relative distance (RD) of the humerus may be helpful as additional landmark. The first aim of this study was to determine the optimal RD that corresponds to an on-axis lateral image of the elbow. The second aim was to assess whether the use of the optimal RD improves the surgical accuracy to identify the elbow rotation axis on fluoroscopy. CT scans of elbows from five volunteers were used to simulate fluoroscopy; the actual rotation axis was calculated with CT-based flexion-extension analysis. First, three observers measured the optimal RD on simulated fluoroscopy. The RD is defined as the distance between the dorsal part of the humerus and the projection of the posteromedial cortex of the distal humerus, divided by the anteroposterior diameter of the humerus. Second, eight trauma surgeons assessed the elbow rotation axis on simulated fluoroscopy. In a preteaching session, surgeons used traditional landmarks. The surgeons were then instructed how to use the optimal RD as additional landmark in a postteaching session. The deviation from the actual rotation axis was expressed as rotational and translational error (±SD). Measurement of the RD was robust and easily reproducible; the optimal RD was 45%. The surgeons identified the elbow rotation axis with a mean rotational error decreasing from 7.6° ± 3.4° to 6.7° ± 3.3° after teaching how to use the RD. The mean translational error decreased from 4.2 ± 2.0 to 3.7 ± 2.0 mm after teaching. The humeral RD as additional landmark yielded small but relevant improvements. Although fluoroscopy-based external fixator alignment to the elbow remains prone to error, it is recommended to use the RD as additional landmark.

  18. Influence of outliers on accuracy estimation in genomic prediction in plant breeding.

    PubMed

    Estaghvirou, Sidi Boubacar Ould; Ogutu, Joseph O; Piepho, Hans-Peter

    2014-10-01

    Outliers often pose problems in analyses of data in plant breeding, but their influence on the performance of methods for estimating predictive accuracy in genomic prediction studies has not yet been evaluated. Here, we evaluate the influence of outliers on the performance of methods for accuracy estimation in genomic prediction studies using simulation. We simulated 1000 datasets for each of 10 scenarios to evaluate the influence of outliers on the performance of seven methods for estimating accuracy. These scenarios are defined by the number of genotypes, marker effect variance, and magnitude of outliers. To mimic outliers, we added to one observation in each simulated dataset, in turn, 5-, 8-, and 10-times the error SD used to simulate small and large phenotypic datasets. The effect of outliers on accuracy estimation was evaluated by comparing deviations in the estimated and true accuracies for datasets with and without outliers. Outliers adversely influenced accuracy estimation, more so at small values of genetic variance or number of genotypes. A method for estimating heritability and predictive accuracy in plant breeding and another used to estimate accuracy in animal breeding were the most accurate and resistant to outliers across all scenarios and are therefore preferable for accuracy estimation in genomic prediction studies. The performances of the other five methods that use cross-validation were less consistent and varied widely across scenarios. The computing time for the methods increased as the size of outliers and sample size increased and the genetic variance decreased. Copyright © 2014 Ould Estaghvirou et al.
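A much-simplified sketch of the experiment above: simulate phenotypes from true genotypic values, use the phenotype-truth correlation as a stand-in for the paper's seven accuracy-estimation methods, and inject a single outlier of 10 times the error SD. The sample size, variances, seed, and the accuracy proxy are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, var_g, var_e = 100, 1.0, 1.0

g = rng.normal(0.0, np.sqrt(var_g), n)       # true genotypic values
y = g + rng.normal(0.0, np.sqrt(var_e), n)   # simulated phenotypes

clean = np.corrcoef(y, g)[0, 1]              # accuracy proxy, no outlier

y_out = y.copy()
y_out[np.argmin(g)] += 10 * np.sqrt(var_e)   # single 10-error-SD outlier
contaminated = np.corrcoef(y_out, g)[0, 1]

print(clean, contaminated)
```

Placing the contaminated observation on the individual with the lowest genotypic value makes the degradation deterministic: the covariance term shrinks while the phenotypic variance inflates, so the estimated accuracy drops, mirroring the adverse influence the study reports.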

  19. Testing Delays Resulting in Increased Identification Accuracy in Line-Ups and Show-Ups.

    ERIC Educational Resources Information Center

    Dekle, Dawn J.

    1997-01-01

    Investigated time delays (immediate, two-three days, one week) between viewing a staged theft and attempting an eyewitness identification. Compared lineups to one-person showups in a laboratory analogue involving 412 subjects. Results show that across all time delays, participants maintained a higher identification accuracy with the showup…

  20. Accuracy in contouring of small and low contrast lesions: Comparison between diagnostic quality computed tomography scanner and computed tomography simulation scanner-A phantom study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Yick Wing, E-mail: mpr@hksh.com; Wong, Wing Kei Rebecca; Yu, Siu Ki

    2012-01-01

    To evaluate the accuracy in detection of small and low-contrast regions using a high-definition diagnostic computed tomography (CT) scanner compared with a radiotherapy CT simulation scanner. A custom-made phantom with cylindrical holes of diameters ranging from 2-9 mm was filled with 9 different concentrations of contrast solution. The phantom was scanned using a 16-slice multidetector CT simulation scanner (LightSpeed RT16, General Electric Healthcare, Milwaukee, WI) and a 64-slice high-definition diagnostic CT scanner (Discovery CT750 HD, General Electric Healthcare). The low-contrast regions of interest (ROIs) were delineated automatically at the full width at half maximum of the CT number profile in Hounsfield units on a treatment planning workstation. Two conformal indexes, CI_in and CI_out, were calculated to represent the percentage errors of underestimation and overestimation in the automated contours compared with their actual sizes. Summarizing the conformal indexes across sizes and contrast concentrations, the means of CI_in and CI_out for the CT simulation scanner were 33.7% and 60.9%, respectively, versus 10.5% and 41.5% for the diagnostic CT scanner. The mean differences between the two scanners' CI_in and CI_out were shown to be significant with p < 0.001. A descending trend of the index values was observed as ROI size increased for both scanners, indicating improved accuracy as the ROI size increases, whereas no observable trend was found in contouring accuracy with respect to contrast level in this study. Images acquired by the diagnostic CT scanner allow higher accuracy of size estimation compared with the CT simulation scanner in this study. We recommend using a diagnostic CT scanner to scan patients with small lesions (<1 cm in diameter) for radiotherapy treatment planning, especially for those pending stereotactic radiosurgery in which accurate delineation of
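The automated delineation step, thresholding each ROI at the full width at half maximum (FWHM) of its CT-number profile, can be sketched in one dimension. The profile values are synthetic, not phantom data from the study:

```python
import numpy as np

def fwhm_extent(profile, background=0.0):
    """Number of samples at or above half of the peak contrast over
    background: a 1-D analogue of delineating an ROI at the full
    width at half maximum of its CT-number profile."""
    p = np.asarray(profile, dtype=float) - background
    half_max = p.max() / 2.0
    return int(np.count_nonzero(p >= half_max))

# Synthetic CT-number profile (HU) across a small low-contrast insert:
profile = [0, 5, 40, 80, 85, 82, 45, 6, 0]
print(fwhm_extent(profile))  # samples at or above half of the 85 HU peak
```

Blurring by the scanner point-spread function flattens the peak and shifts where the half-maximum falls, which is one way the two scanners can yield different CI_in and CI_out values for the same physical insert.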

  1. Approximate Algorithms for Computing Spatial Distance Histograms with Accuracy Guarantees

    PubMed Central

    Grupcev, Vladimir; Yuan, Yongke; Tu, Yi-Cheng; Huang, Jin; Chen, Shaoping; Pandit, Sagar; Weng, Michael

    2014-01-01

    Particle simulation has become an important research tool in many scientific and engineering fields. Data generated by such simulations impose great challenges to database storage and query processing. One of the queries against particle simulation data, the spatial distance histogram (SDH) query, is the building block of many high-level analytics, and requires quadratic time to compute using a straightforward algorithm. Previous work has developed efficient algorithms that compute exact SDHs. While beating the naive solution, such algorithms are still not practical in processing SDH queries against large-scale simulation data. In this paper, we take a different path to tackle this problem by focusing on approximate algorithms with provable error bounds. We first present a solution derived from the aforementioned exact SDH algorithm, and this solution has running time that is unrelated to the system size N. We also develop a mathematical model to analyze the mechanism that leads to errors in the basic approximate algorithm. Our model provides insights on how the algorithm can be improved to achieve higher accuracy and efficiency. Such insights give rise to a new approximate algorithm with improved time/accuracy tradeoff. Experimental results confirm our analysis. PMID:24693210
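The "straightforward algorithm" the abstract refers to, exact quadratic-time SDH computation, can be sketched as follows; the bucket width and sample points are illustrative:

```python
import numpy as np

def sdh_naive(points, bucket_width, n_buckets):
    """Exact spatial distance histogram (SDH): bucket every pairwise
    distance. O(N^2) in the number of points, which is why approximate
    algorithms with provable error bounds matter at large scale."""
    hist = np.zeros(n_buckets, dtype=np.int64)
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(points[i] - points[j])
            b = min(int(d / bucket_width), n_buckets - 1)  # clamp overflow
            hist[b] += 1
    return hist

pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
print(sdh_naive(pts, bucket_width=2.0, n_buckets=4))
```

With N particles there are N(N-1)/2 pairs, so doubling the system size roughly quadruples the work; the approximate algorithms in the paper trade a bounded histogram error for sub-quadratic running time.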

  2. Results of a remote multiplexer/digitizer unit accuracy and environmental study

    NASA Technical Reports Server (NTRS)

    Wilner, D. O.

    1977-01-01

    A remote multiplexer/digitizer unit (RMDU), a part of the airborne integrated flight test data system, was subjected to an accuracy study. The study was designed to show the effects of temperature, altitude, and vibration on the RMDU. The RMDU was subjected to tests at temperatures from -54 C (-65 F) to 71 C (160 F), and the resulting data are presented here, along with a complete analysis of the effects. The methods and means used for obtaining correctable data and correcting the data are also discussed.

  3. Cause and Cure - Deterioration in Accuracy of CFD Simulations with Use of High-Aspect-Ratio Triangular/Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar

    2017-01-01

    Traditionally, high-aspect-ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions. Although for certain complex geometries the use of high-aspect-ratio triangular/tetrahedral elements near a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where simplex elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in using such high-aspect-ratio simplex elements is formulated using two different approaches and presented here. Drawing insights from the analysis, a potential solution to avoid that pitfall is also provided as part of this work. Furthermore, through numerical simulations of practical viscous problems involving high-Reynolds-number flows, it is shown how the gradient evaluation procedures of the CESE framework can be used to produce accurate and stable results on such high-aspect-ratio simplex meshes.

  4. Accuracy of partial volume effect correction in clinical molecular imaging of dopamine transporter using SPECT

    NASA Astrophysics Data System (ADS)

    Soret, Marine; Alaoui, Jawad; Koulibaly, Pierre M.; Darcourt, Jacques; Buvat, Irène

    2007-02-01

    Objectives Partial volume effect (PVE) is a major source of bias in brain SPECT imaging of dopamine transporter. Various PVE corrections (PVC) making use of anatomical data have been developed and yield encouraging results. However, their accuracy in clinical data is difficult to demonstrate because the gold standard (GS) is usually unknown. The objective of this study was to assess the accuracy of PVC. Method Twenty-three patients underwent MRI and 123I-FP-CIT SPECT. The binding potential (BP) values were measured in the striata segmented on the MR images after coregistration to the SPECT images. These values were calculated without and with an original PVC. In addition, for each patient, a Monte Carlo simulation of the SPECT scan was performed. For these simulations, where the true simulated BP values were known, percent biases in the BP estimates were calculated. For the real data, an evaluation method that simultaneously estimates the GS and a quadratic relationship between the observed and the GS values was used. It yields a surrogate mean square error (sMSE) between the estimated values and the estimated GS values. Results The average percent difference between BP measured for real and for simulated patients was 0.7±9.7% without PVC and -8.5±14.5% with PVC, suggesting that the simulated data reproduced the real data well enough. For the simulated patients, BP was underestimated by 66.6±9.3% on average without PVC and overestimated by 11.3±9.5% with PVC, demonstrating the greater accuracy of BP estimates with PVC. For the simulated data, sMSE was 27.3 without PVC and 0.90 with PVC, confirming that our sMSE index properly captured the greater accuracy of BP estimates with PVC. For the real patient data, sMSE was 50.8 without PVC and 3.5 with PVC. These results were consistent with those obtained on the simulated data, suggesting that for clinical data, and despite probable segmentation and registration errors, BP was more accurately estimated with PVC than without.

  5. Study of accuracy of precipitation measurements using simulation method

    NASA Astrophysics Data System (ADS)

    Nagy, Zoltán; Lajos, Tamás; Morvai, Krisztián

    2013-04-01

    Precipitation is one of the most important meteorological parameters describing the state of the climate, and accurate measurement of precipitation is essential for extracting correct trend information. The problem is that precipitation measurements are affected by systematic errors leading to an underestimation of actual precipitation, errors which vary by precipitation type and gauge type. It is well known that wind speed is the most important environmental factor contributing to the underestimation of actual precipitation, especially for solid precipitation. To study and correct the errors of precipitation measurements there are two basic possibilities: · use of the results and conclusions of international precipitation measurement intercomparisons; · building standard reference gauges (DFIR, pit gauge) and making one's own investigation. In 1999 at the HMS we undertook our own investigation and built standard reference gauges, but the cost-benefit ratio in the case of snow (use of the DFIR) was very poor (we had several winters without a significant amount of snow, while the condition of the DFIR was continuously deteriorating). Due to this problem, a new approach was needed: modelling performed by the Budapest University of Technology and Economics, Department of Fluid Mechanics, using the FLUENT 6.2 model. The ANSYS Fluent package is a featured fluid dynamics solution for modelling flow and other related physical phenomena. It provides the tools needed to describe atmospheric processes and to design and optimize new equipment. The CFD package includes solvers that accurately simulate the behaviour of a broad range of flows, from single-phase to multi-phase. The questions we wanted to answer are as follows: · How do the different types of gauges deform the airflow around themselves? · Can a quantitative estimate of the wind-induced error be given? · How does the use

  6. Accuracy and Resolution in Micro-earthquake Tomographic Inversion Studies

    NASA Astrophysics Data System (ADS)

    Hutchings, L. J.; Ryan, J.

    2010-12-01

    Accuracy and resolution are complementary properties necessary to interpret the results of earthquake location and tomography studies. Accuracy is how close an answer is to the “real world”, and resolution is how small a node spacing or earthquake error ellipse one can achieve. We have modified SimulPS (Thurber, 1986) in several ways to provide a tool for evaluating the accuracy and resolution of potential micro-earthquake networks. First, we provide synthetic travel times from synthetic three-dimensional geologic models and earthquake locations. We use these to calculate errors in earthquake location and velocity inversion results when we perturb the models and try to invert to recover them. We create as many stations as desired and can create a synthetic velocity model with any desired node spacing. We apply this study to SimulPS and TomoDD inversion studies. “Real” travel times are perturbed with noise, hypocenters are perturbed to replicate a starting location away from the “true” location, and inversion is performed by each program. We establish travel times with the pseudo-bending ray tracer and use the same ray tracer in the inversion codes. This, of course, limits our ability to test the accuracy of the ray tracer. We developed relationships for the accuracy and resolution expected as a function of the number of earthquakes and recording stations for typical tomographic inversion studies. Velocity grid spacing started at 1 km, then was decreased to 500 m, 100 m, 50 m and finally 10 m to see if resolution with decent accuracy at that scale was possible. We considered accuracy to be good when we could invert a velocity model perturbed by 50% back to within 5% of the original model, and resolution to be the size of the grid spacing. We found that 100 m resolution could be obtained by using 120 stations with 500 events, but this is our current limit. The limiting factors are the size of computers needed for the large arrays in the inversion and a
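    The "invert a 50%-perturbed model back to within 5%" accuracy criterion used above can be sketched as a simple recovery check. The layer velocities and the recovered model below are hypothetical illustrations, not values from the study:

```python
def recovery_error(true_model, recovered_model):
    """Maximum fractional deviation of a recovered velocity model
    from the true model (the abstract's accuracy criterion)."""
    return max(abs(r - t) / t for t, r in zip(true_model, recovered_model))

true_v = [3.0, 3.5, 4.0, 4.5]        # km/s, synthetic layer velocities
start_v = [v * 1.5 for v in true_v]  # starting model perturbed by 50%
final_v = [3.1, 3.4, 4.1, 4.4]       # hypothetical inversion result

print(recovery_error(true_v, start_v))          # 0.5 (the 50% perturbation)
print(recovery_error(true_v, final_v) <= 0.05)  # True: "good accuracy"
```

    Accuracy in this sense (closeness to the true model) is checked independently of resolution (the grid spacing at which the check still passes).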

  7. Phenomenological reports diagnose accuracy of eyewitness identification decisions.

    PubMed

    Palmer, Matthew A; Brewer, Neil; McKinnon, Anna C; Weber, Nathan

    2010-02-01

    This study investigated whether measuring the phenomenology of eyewitness identification decisions aids evaluation of their accuracy. Witnesses (N=502) viewed a simulated crime and attempted to identify two targets from lineups. A divided attention manipulation during encoding reduced the rate of remember (R) correct identifications, but not the rates of R foil identifications or know (K) judgments in the absence of recollection (i.e., K/[1-R]). Both RK judgments and recollection ratings (a novel measure of graded recollection) distinguished correct from incorrect positive identifications. However, only recollection ratings improved accuracy evaluation after identification confidence was taken into account. These results provide evidence that RK judgments for identification decisions function in a similar way as for recognition decisions; are consistent with the notion of graded recollection; and indicate that measures of phenomenology can enhance the evaluation of identification accuracy. Copyright 2009 Elsevier B.V. All rights reserved.

  8. International benchmarking of longitudinal train dynamics simulators: results

    NASA Astrophysics Data System (ADS)

    Wu, Qing; Spiryagin, Maksym; Cole, Colin; Chang, Chongyi; Guo, Gang; Sakalo, Alexey; Wei, Wei; Zhao, Xubao; Burgelman, Nico; Wiersma, Pier; Chollet, Hugues; Sebes, Michel; Shamdani, Amir; Melzi, Stefano; Cheli, Federico; di Gialleonardo, Egidio; Bosso, Nicola; Zampieri, Nicolò; Luo, Shihui; Wu, Honghua; Kaza, Guy-Léon

    2018-03-01

    This paper presents the results of the International Benchmarking of Longitudinal Train Dynamics Simulators which involved participation of nine simulators (TABLDSS, UM, CRE-LTS, TDEAS, PoliTo, TsDyn, CARS, BODYSIM and VOCO) from six countries. Longitudinal train dynamics results and computing time of four simulation cases are presented and compared. The results show that all simulators had basic agreement in simulations of locomotive forces, resistance forces and track gradients. The major differences among different simulators lie in the draft gear models. TABLDSS, UM, CRE-LTS, TDEAS, TsDyn and CARS had general agreement in terms of the in-train forces; minor differences exist as reflections of draft gear model variations. In-train force oscillations were observed in VOCO due to the introduction of wheel-rail contact. In-train force instabilities were sometimes observed in PoliTo and BODYSIM due to the velocity controlled transitional characteristics which could have generated unreasonable transitional stiffness. Regarding computing time per train operational second, the following list is in order of increasing computing speed: VOCO, TsDyn, PoliTO, CARS, BODYSIM, UM, TDEAS, CRE-LTS and TABLDSS (fastest); all simulators except VOCO, TsDyn and PoliTo achieved faster speeds than real-time simulations. Similarly, regarding computing time per integration step, the computing speeds in order are: CRE-LTS, VOCO, CARS, TsDyn, UM, TABLDSS and TDEAS (fastest).

  9. Cassini radar : system concept and simulation results

    NASA Astrophysics Data System (ADS)

    Melacci, P. T.; Orosei, R.; Picardi, G.; Seu, R.

    1998-10-01

    The Cassini mission is an international venture, involving NASA, the European Space Agency (ESA) and the Italian Space Agency (ASI), for the investigation of the Saturn system and, in particular, Titan. The Cassini radar will be able to see through Titan's thick, optically opaque atmosphere, allowing us to better understand the composition and the morphology of its surface, but the interpretation of the results, due to the complex interplay of many different factors determining the radar echo, will not be possible without extensive modelling of the radar system's functioning and of the surface reflectivity. In this paper, a simulator of the multimode Cassini radar will be described, after a brief review of our current knowledge of Titan and a discussion of the contribution of the Cassini radar in answering currently open questions. Finally, the results of the simulator will be discussed. The simulator has been implemented on a RISC 6000 computer by considering only the active modes of operation, that is altimeter and synthetic aperture radar. In the instrument simulation, strict reference has been made to the present planned sequence of observations and to the radar settings, including burst and single pulse duration, pulse bandwidth, pulse repetition frequency and all other parameters which may be changed, and possibly optimized, according to the operative mode. The observed surfaces are simulated by a facet model, allowing the generation of surfaces with Gaussian or non-Gaussian roughness statistics, together with the possibility of assigning to the surface an average behaviour which can represent, for instance, a flat surface or a crater. The results of the simulation will be discussed, in order to check the analytical evaluations of the models of the average received echoes and of the attainable performances.
In conclusion, the simulation results should allow the validation of the theoretical evaluations of the capabilities of microwave instruments, when

  10. Space Geodetic Technique Co-location in Space: Simulation Results for the GRASP Mission

    NASA Astrophysics Data System (ADS)

    Kuzmicz-Cieslak, M.; Pavlis, E. C.

    2011-12-01

    The Global Geodetic Observing System (GGOS) places very stringent requirements on the accuracy and stability of future realizations of the International Terrestrial Reference Frame (ITRF): an origin definition at 1 mm or better at epoch and a temporal stability on the order of 0.1 mm/y, with similar numbers for the scale (0.1 ppb) and orientation components. These goals were derived from the requirements of Earth science problems that are currently the international community's highest priority. None of the geodetic positioning techniques can achieve this goal alone. This is due in part to the non-observability of certain attributes from a single technique. Another limitation is imposed by the extent and uniformity of the tracking network and the schedule of observational availability and number of suitable targets. The final limitation derives from the difficulty of "tying" the reference points of each technique at the same site to an accuracy that will support the GGOS goals. The future GGOS network will decisively address the ground segment and, to a certain extent, the space segment requirements. The JPL-proposed multi-technique mission GRASP (Geodetic Reference Antenna in Space) attempts to resolve the accurate tie between techniques, using their co-location in space, onboard a well-designed spacecraft equipped with GNSS receivers, an SLR retroreflector array, a VLBI beacon and a DORIS system. Using the anticipated system performance for all four techniques at the time the GGOS network is completed (ca. 2020), we generated a number of simulated data sets for the development of a TRF. Our simulation studies examine the degree to which GRASP can improve the inter-technique "tie" issue compared to the classical approach, and the likely modus operandi for such a mission. The success of the examined scenarios is judged by the quality of the origin and scale definition of the resulting TRF.

  11. Comparison of the effect of web-based, simulation-based, and conventional training on the accuracy of visual estimation of postpartum hemorrhage volume on midwifery students: A randomized clinical trial

    PubMed Central

    Kordi, Masoumeh; Fakari, Farzaneh Rashidi; Mazloum, Seyed Reza; Khadivzadeh, Talaat; Akhlaghi, Farideh; Tara, Mahmoud

    2016-01-01

    Introduction: Delay in the diagnosis of bleeding can be due to underestimation of the actual amount of blood loss during delivery. Therefore, this research aimed to compare the efficacy of web-based, simulation-based, and conventional training on the accuracy of visual estimation of postpartum hemorrhage volume. Materials and Methods: This three-group randomized clinical trial was performed on 105 midwifery students in Mashhad School of Nursing and Midwifery in 2013. The samples were selected by the convenience method and were randomly divided into three groups: web-based, simulation-based, and conventional training. All three groups took an eight-station practical test before and 1 week after the training course; the students of the web-based group were trained online for 1 week, the students of the simulation-based group were trained in the Clinical Skills Centre for 4 h, and the students of the conventional group were trained through a 4-h presentation by the researchers. The data-gathering tools were a demographic questionnaire designed by the researchers and an objective structured clinical examination. Data were analyzed by software version 11.5. Results: The accuracy of visual estimation of postpartum hemorrhage volume after training increased significantly in the three groups at all stations (1, 2, 4, 5, 6 and 7 (P = 0.001), 8 (P = 0.027)) except station 3 (blood loss of 20 cc, P = 0.095), but the mean score of blood loss estimation after training did not differ significantly between the three groups (P = 0.95). Conclusion: Training increased the accuracy of estimation of postpartum hemorrhage, but no significant difference was found among the three training groups. Web-based training can be used as a substitute for, or supplement to, the two more common simulation and conventional methods. PMID:27500175

  12. Accuracy of the unified approach in maternally influenced traits--illustrated by a simulation study in the honey bee (Apis mellifera).

    PubMed

    Gupta, Pooja; Reinsch, Norbert; Spötter, Andreas; Conrad, Tim; Bienefeld, Kaspar

    2013-05-06

    The honey bee is an economically important species. With a rapid decline of the honey bee population, it is necessary to implement an improved genetic evaluation methodology. In this study, we investigated the applicability of the unified approach and its impact on the accuracy of estimation of breeding values for maternally influenced traits on a simulated dataset for the honey bee. Given the limit on the number of individuals that can be genotyped in a honey bee population, the unified approach can be an efficient strategy to increase the genetic gain and to provide a more accurate estimation of breeding values. We calculated the accuracy of estimated breeding values for two evaluation approaches, the unified approach and the traditional pedigree-based approach. We analyzed the effects of different heritabilities as well as genetic correlation between direct and maternal effects on the accuracy of estimation of direct, maternal and overall breeding values (sum of maternal and direct breeding values). The genetic and reproductive biology of the honey bee was accounted for by taking into consideration characteristics such as colony structure, uncertain paternity, overlapping generations and polyandry. In addition, we used a modified numerator relationship matrix and a realistic genome for the honey bee. For all values of heritability and correlation, the accuracy of overall estimated breeding values increased significantly with the unified approach. The increase in accuracy was always higher when there was no correlation than when a negative correlation existed between maternal and direct effects. Our study shows that the unified approach is a useful methodology for genetic evaluation in honey bees, and can contribute immensely to the improvement of traits of apicultural interest such as resistance to Varroa or production and behavioural traits.
In particular, the study is of great interest for cases where negative correlation
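    The "accuracy of estimated breeding values" computed in simulation studies like this is conventionally the correlation between true and estimated values, which is directly available because the simulation knows the truth. A minimal sketch with hypothetical numbers:

```python
import math

def accuracy(true_bv, estimated_bv):
    """Pearson correlation between true and estimated breeding values,
    the usual accuracy measure in genetic evaluation."""
    n = len(true_bv)
    mt = sum(true_bv) / n
    me = sum(estimated_bv) / n
    cov = sum((t - mt) * (e - me) for t, e in zip(true_bv, estimated_bv))
    var_t = sum((t - mt) ** 2 for t in true_bv)
    var_e = sum((e - me) ** 2 for e in estimated_bv)
    return cov / math.sqrt(var_t * var_e)

true_bv = [1.2, -0.5, 0.8, 0.1, -1.0]      # hypothetical true values
pedigree_est = [0.6, 0.1, 0.2, 0.3, -0.4]  # hypothetical pedigree EBVs
unified_est = [1.0, -0.3, 0.7, 0.0, -0.9]  # hypothetical unified EBVs

print(accuracy(true_bv, pedigree_est) < accuracy(true_bv, unified_est))
```

    An evaluation method with higher accuracy in this sense ranks candidates closer to their true genetic merit, which is what drives the genetic gain discussed above.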

  13. Accuracy of Tretyakov precipitation gauge: Result of WMO intercomparison

    USGS Publications Warehouse

    Yang, Daqing; Goodison, Barry E.; Metcalfe, John R.; Golubev, Valentin S.; Elomaa, Esko; Gunther, Thilo; Bates, Roy; Pangburn, Timothy; Hanson, Clayton L.; Emerson, Douglas G.; Copaciu, Voilete; Milkovic, Janja

    1995-01-01

    The Tretyakov non-recording precipitation gauge has been used historically as the official precipitation measurement instrument in the Russian (formerly the USSR) climatic and hydrological station network and in a number of other European countries. From 1986 to 1993, the accuracy and performance of this gauge were evaluated during the WMO Solid Precipitation Measurement Intercomparison at 11 stations in Canada, the USA, Russia, Germany, Finland, Romania and Croatia. The double fence intercomparison reference (DFIR) was the reference standard used at all of the Intercomparison stations. The Intercomparison data collected at the different sites are compatible with respect to the catch ratio (measured/DFIR) for the same gauge, when compared using mean wind speed at the height of the gauge orifice during the observation period. The Intercomparison data for the Tretyakov gauge were compiled from measurements made at these WMO intercomparison sites. These data represent a variety of climates, terrains and exposures. The effects of environmental factors, such as wind speed, wind direction, type of precipitation and temperature, on gauge catch ratios were investigated. Wind speed was found to be the most important factor determining the gauge catch, and air temperature had a secondary effect when precipitation was classified into snow, mixed and rain. The results of the analysis of gauge catch ratio versus wind speed and temperature on a daily time step are presented for various types of precipitation. Independent checks of the correction equations against the DFIR have been conducted at those Intercomparison stations and good agreement (difference less than 10%) was obtained. The use of such adjustment procedures should significantly improve the accuracy and homogeneity of gauge-measured precipitation data over large regions of the former USSR and central Europe.
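    The catch-ratio adjustment described above (corrected amount = measured amount divided by a catch ratio regressed on wind speed per precipitation type) can be sketched as follows. The slope coefficients here are illustrative placeholders, not the WMO-fitted values:

```python
def catch_ratio(wind_speed, precip_type):
    """Illustrative catch ratio (measured/DFIR, in %) as a linear
    function of wind speed at gauge height. Slopes are placeholders,
    NOT the fitted Intercomparison coefficients."""
    slope = {"snow": -10.0, "mixed": -6.0, "rain": -2.0}[precip_type]  # % per m/s
    return max(100.0 + slope * wind_speed, 20.0)  # floor against extreme winds

def corrected_precip(measured_mm, wind_speed, precip_type):
    """Adjust a gauge measurement (mm) by the catch ratio."""
    return measured_mm / (catch_ratio(wind_speed, precip_type) / 100.0)

# 5.0 mm of snow measured at 4 m/s wind -> 60% catch ratio, ~8.33 mm corrected
print(round(corrected_precip(5.0, 4.0, "snow"), 2))
```

    The snow slope being steepest reflects the abstract's finding that wind-induced undercatch is worst for solid precipitation.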

  14. Real-Time Tropospheric Product Establishment and Accuracy Assessment in China

    NASA Astrophysics Data System (ADS)

    Chen, M.; Guo, J.; Wu, J.; Song, W.; Zhang, D.

    2018-04-01

    Tropospheric delay has always been an important issue in Global Navigation Satellite System (GNSS) processing. Empirical tropospheric delay models have difficulty representing complex and volatile atmospheric conditions, resulting in poor model accuracy and difficulty meeting precise positioning demands. In recent years, some scholars have proposed establishing real-time tropospheric products from real-time or near-real-time GNSS observations in a small region, with good results. This paper uses real-time observation data from 210 Chinese national GNSS reference stations to estimate the tropospheric delay, and establishes a nationwide zenith wet delay (ZWD) grid model. In order to analyze the influence of the tropospheric grid product on wide-area real-time PPP, this paper compares the method of taking the ZWD grid product as a constraint with the model correction method. The results show that the ZWD grid product estimated from the national reference stations can improve PPP accuracy and convergence speed. The accuracy in the north (N), east (E) and up (U) directions increases by 31.8 %, 15.6 % and 38.3 %, respectively. As with accuracy, the U direction shows the greatest improvement in convergence speed.
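    A ZWD grid product like the one described is typically applied by interpolating the grid to the user's position. A minimal bilinear-interpolation sketch with hypothetical grid values:

```python
def interp_zwd(grid, lat0, lon0, dlat, dlon, lat, lon):
    """Bilinear interpolation of a zenith wet delay (ZWD) grid.
    grid[i][j] holds ZWD (m) at (lat0 + i*dlat, lon0 + j*dlon)."""
    i = int((lat - lat0) / dlat)
    j = int((lon - lon0) / dlon)
    u = (lat - lat0 - i * dlat) / dlat  # fractional position in the cell
    v = (lon - lon0 - j * dlon) / dlon
    return ((1 - u) * (1 - v) * grid[i][j]
            + (1 - u) * v * grid[i][j + 1]
            + u * (1 - v) * grid[i + 1][j]
            + u * v * grid[i + 1][j + 1])

# 1-degree grid over a small area (values in metres, hypothetical)
zwd = [[0.10, 0.12],
       [0.14, 0.16]]
print(interp_zwd(zwd, 30.0, 100.0, 1.0, 1.0, 30.5, 100.5))  # ~0.13
```

    In a PPP filter the interpolated value would then serve either as a correction or as a pseudo-observation constraint on the troposphere state, the two usages compared in the paper.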

  15. Accuracy of surface registration compared to conventional volumetric registration in patient positioning for head-and-neck radiotherapy: A simulation study using patient data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjun; Li, Ruijiang; Na, Yong Hum

    2014-12-15

    Purpose: 3D optical surface imaging has been applied to patient positioning in radiation therapy (RT). The optical patient positioning system is advantageous over the conventional method using cone-beam computed tomography (CBCT) in that it is radiation free, frameless, and capable of real-time monitoring. While the conventional radiographic method uses volumetric registration, the optical system uses surface matching for patient alignment. The relative accuracy of these two methods has not yet been sufficiently investigated. This study aims to investigate the theoretical accuracy of surface registration in a simulation study using patient data. Methods: This study compares the relative accuracy of surface and volumetric registration in head-and-neck RT. The authors examined 26 patient data sets, each consisting of planning CT data acquired before treatment and patient setup CBCT data acquired at the time of treatment. As input data for surface registration, the patient's skin surfaces were created by contouring patient skin from the planning CT and treatment CBCT. Surface registration was performed using the iterative closest point algorithm with a point-to-plane metric, which minimizes the normal distance between source points and target surfaces. Six degrees of freedom (three translations and three rotations) were used in both surface and volumetric registrations and the results were compared. The accuracy of each method was estimated by digital phantom tests. Results: Based on the results of 26 patients, the authors found that the average and maximum root-mean-square translation deviations between the surface and volumetric registrations were 2.7 and 5.2 mm, respectively. The residual error of the surface registration had an average of 0.9 mm and a maximum of 1.7 mm. Conclusions: Surface registration may lead to results different from those of the conventional volumetric registration. Only limited accuracy can be achieved for patient
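    The point-to-plane residual that this surface registration minimizes (the normal distance between source points and target surfaces) can be computed as below; the points and normals are hypothetical illustrations:

```python
import math

def point_plane_rms(points, plane_points, plane_normals):
    """RMS of point-to-plane distances: the quantity point-to-plane
    ICP minimizes (and the kind of 'residual error' the study reports).
    Each normal must be a unit vector on the target surface."""
    total = 0.0
    for p, q, n in zip(points, plane_points, plane_normals):
        # signed distance from p to the tangent plane at q
        dist = sum((pi - qi) * ni for pi, qi, ni in zip(p, q, n))
        total += dist * dist
    return math.sqrt(total / len(points))

# Two source skin points matched to target-surface planes (hypothetical)
src = [(0.0, 0.0, 1.2), (1.0, 0.0, 0.9)]
tgt = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]  # closest target points
nrm = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]  # target surface normals
print(point_plane_rms(src, tgt, nrm))
```

    Each ICP iteration re-pairs source points with closest target planes, solves for the rigid transform reducing this residual, and repeats until convergence.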

  16. Impact of Assimilation on Heavy Rainfall Simulations Using WRF Model: Sensitivity of Assimilation Results to Background Error Statistics

    NASA Astrophysics Data System (ADS)

    Rakesh, V.; Kantharao, B.

    2017-03-01

    Data assimilation is considered one of the effective tools for improving the forecast skill of mesoscale models. However, for optimum utilization and effective assimilation of observations, many factors need to be taken into account while designing a data assimilation methodology. One of the critical components determining the amount and propagation of observation information into the analysis is the model background error statistics (BES). The objective of this study is to quantify how the BES used in data assimilation affect the simulation of heavy rainfall events over Karnataka, a southern state in India. Simulations of 40 heavy rainfall events were carried out using the Weather Research and Forecasting Model with and without data assimilation. The assimilation experiments were conducted using global and regional BES, while the experiment with no assimilation was used as the baseline for assessing the impact of data assimilation. The simulated rainfall is verified against high-resolution rain-gauge observations over Karnataka. Statistical evaluation using several accuracy and skill measures shows that data assimilation improved the heavy rainfall simulation. Our results showed that the experiment using regional BES outperformed the one which used global BES. Critical thermodynamic variables conducive to heavy rainfall, like convective available potential energy, are simulated more realistically using regional BES than global BES. These results have important practical implications in the design of forecast platforms for decision-making during extreme weather events.

  17. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as the absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as the absolute mean plus 2 sigma. Error comes both from outside the magnetometers (e.g., spacecraft fields and misalignments) and from inside (e.g., zero offset and scale factor errors). Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
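    The accuracy definition used above (absolute mean plus 2- or 3-sigma) is straightforward to evaluate from Monte Carlo error samples. The bias and noise levels below are hypothetical, not GOES-R figures:

```python
import math
import random

def accuracy_metric(errors, n_sigma):
    """GOES-R style accuracy figure: absolute mean plus n-sigma of the
    error samples (3-sigma for quiet times, 2-sigma for storms)."""
    n = len(errors)
    mean = sum(errors) / n
    sigma = math.sqrt(sum((e - mean) ** 2 for e in errors) / (n - 1))
    return abs(mean) + n_sigma * sigma

random.seed(1)
# Hypothetical residual field errors in nT: a small bias plus Gaussian noise
errors = [0.2 + random.gauss(0.0, 0.4) for _ in range(10000)]
print(accuracy_metric(errors, 3) < 1.7)  # does it meet the 1.7 nT budget?
```

    A covariance analysis would instead propagate the error budget analytically; agreement between the two, as the abstract notes, builds confidence in the prediction.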

  18. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
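    The cost of first-order, explicit, fixed-step integration that the abstract discusses can be illustrated on the simplest conceptual store, dS/dt = -kS, where the truncation error shrinks only linearly with the step size (a sketch, not the paper's model):

```python
import math

def euler_drain(s0, k, t_end, dt):
    """First-order explicit fixed-step integration of dS/dt = -k*S,
    the simplest conceptual-store water balance."""
    s = s0
    for _ in range(round(t_end / dt)):
        s += dt * (-k * s)
    return s

s0, k, t_end = 100.0, 2.0, 1.0
exact = s0 * math.exp(-k * t_end)
errors = {dt: abs(euler_drain(s0, k, t_end, dt) - exact)
          for dt in (0.5, 0.1, 0.01)}
# First-order method: error shrinks roughly linearly with the step
print(errors[0.5] > errors[0.1] > errors[0.01])  # True
```

    In a calibration setting this truncation error varies non-smoothly with the parameters, which is what produces the artificial bimodality and poor MCMC behavior described above; adaptive-step, second-order schemes suppress it.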

  19. Accuracy of self-reports of fecal occult blood tests and test results among individuals in the carpentry trade.

    PubMed

    Lipkus, Isaac M; Samsa, Gregory P; Dement, John; Skinner, Celette Sugg; Green, La Sonya G; Pompeii, Lisa; Ransohoff, David F

    2003-11-01

    Inaccuracy in self-reports of colorectal cancer (CRC) screening procedures (e.g., over- or underreporting) may interfere with individuals adhering to appropriate screening intervals, and can blur the true effects of physician recommendations to screen and the effects of interventions designed to promote screening. We assessed the accuracy of self-reports of having had a fecal occult blood test (FOBT) within a 1-year window, based on receipt of FOBT kits, among individuals aged 50 and older in the carpentry trade (N = 658) who were off-schedule for having had an FOBT. Indices for evaluating the accuracy of self-reports (concordance, specificity, false-positive and false-negative rates) were calculated relative to receipt of a mailed FOBT. Among those who mailed a completed FOBT, we assessed the accuracy of reporting the test result. Participants underestimated having performed an FOBT (false-negative rate of 44%). Accuracy was unrelated to perceptions of getting or worrying about CRC or to family history. Self-reports of having a negative FOBT result more consistently matched the laboratory result (specificity 98%) than having a positive test result (sensitivity 63%). Contrary to other findings, participants underreported rather than overreported FOBT screening. Results suggest greater efforts are needed to enhance accurate recall of FOBT screening.
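    The accuracy indices in this abstract (concordance, sensitivity, specificity, false-negative rate) come from a 2x2 table of self-report versus verified completion. A sketch with a hypothetical cohort constructed to reproduce the reported 44% false-negative rate:

```python
def report_indices(records):
    """records: (self_reported_FOBT, actually_completed_FOBT) pairs.
    Returns the 2x2-table indices used in the abstract."""
    tp = sum(1 for r, t in records if r and t)          # correctly reported
    tn = sum(1 for r, t in records if not r and not t)  # correctly denied
    fp = sum(1 for r, t in records if r and not t)      # overreported
    fn = sum(1 for r, t in records if not r and t)      # underreported
    return {
        "concordance": (tp + tn) / len(records),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "false_negative_rate": fn / (tp + fn),
    }

# Hypothetical cohort: 100 verified completers, 44 of whom deny it
records = ([(True, True)] * 56 + [(False, True)] * 44
           + [(False, False)] * 50 + [(True, False)] * 2)
print(report_indices(records)["false_negative_rate"])  # 0.44
```

    The same function applied to (reported result, laboratory result) pairs yields the sensitivity/specificity comparison for the test-result question.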

  20. Solving Nonlinear Euler Equations with Arbitrary Accuracy

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2005-01-01

    A computer program that efficiently solves the time-dependent, nonlinear Euler equations in two dimensions to an arbitrarily high order of accuracy has been developed. The program implements a modified form of a prior arbitrary-accuracy simulation algorithm that is a member of the class of algorithms known in the art as modified expansion solution approximation (MESA) schemes. Whereas millions of lines of code were needed to implement the prior MESA algorithm, it is possible to implement the present MESA algorithm in one or a few pages of Fortran code, the exact amount depending on the specific application. The ability to solve the Euler equations to arbitrarily high accuracy is especially beneficial in simulations of aeroacoustic effects in settings in which fully nonlinear behavior is expected - for example, at stagnation points of fan blades, where linearizing assumptions break down. At these locations, it is necessary to solve the full nonlinear Euler equations, and inasmuch as the acoustical energy is 4 to 5 orders of magnitude below that of the mean flow, it is necessary to achieve an overall fractional error of less than 10⁻⁶ in order to faithfully simulate entropy, vortical, and acoustical waves.

  1. Modeling and Simulation of High Resolution Optical Remote Sensing Satellite Geometric Chain

    NASA Astrophysics Data System (ADS)

    Xia, Z.; Cheng, S.; Huang, Q.; Tian, G.

    2018-04-01

    High-resolution satellites with long focal lengths and large apertures have been widely used in recent years for georeferencing observed scenes. A consistent end-to-end model of the high-resolution remote sensing satellite geometric chain is presented, which consists of the scene, the three-line-array camera, the platform (including attitude and position information), the time system and the processing algorithm. The integrated design of the camera and the star tracker is considered, and a simulation method for geolocation accuracy is put forward by introducing a new index: the angle between the camera and the star tracker. The model is rigorously validated by simulating geolocation accuracy following the test method used for ZY-3 satellite imagery. The simulation results show that the geolocation accuracy is within 25 m, which is highly consistent with the test results. The geolocation accuracy can be improved by about 7 m through the integrated design. The model, combined with the simulation method, is applicable to estimating geolocation accuracy before satellite launch.

  2. Evaluation of the three-dimensional accuracy of implant impression techniques in two simulated clinical conditions by optical scanning.

    PubMed

    Sabouhi, Mahmoud; Bajoghli, Farshad; Abolhasani, Majid

    2015-01-01

    The success of an implant-supported prosthesis is dependent on the passive fit of its framework fabricated on a precise cast. The aim of this in vitro study was to digitally compare the three-dimensional accuracy of implant impression techniques in partially and completely edentulous conditions. The master model simulated two clinical conditions. The first condition was a partially edentulous mandibular arch with an anterior edentulous space (D condition). Two implant analogs were inserted in bilateral canine sites. After elimination of the teeth, the model was converted to a completely edentulous condition (E condition). Three different impression techniques were performed (open splinted [OS], open unsplinted [OU], closed [C]) for each condition. Six groups of casts (DOS, DOU, DC, EOS, EOU, EC) (n = 8), totaling 48 casts, were made. Two scan bodies were secured onto the master edentulous model and onto each test cast and digitized by an optical scanning system. The related scans were superimposed, and the mean discrepancy for each cast was determined. The statistical analysis showed no significant difference in the accuracy of casts as a function of model status (P = .78, analysis of variance [ANOVA] test), impression technique (P = .57, ANOVA test), or as the combination of both (P = .29, ANOVA test). The distribution of data was normal (Kolmogorov-Smirnov test). Model status (dentate or edentulous) and impression technique did not influence the precision of the casts. There is no difference among any of the impression techniques in either simulated clinical condition.

  3. Between simplicity and accuracy: Effect of adding modeling details on quarter vehicle model accuracy.

    PubMed

    Soong, Ming Foong; Ramli, Rahizar; Saifizul, Ahmad

    2017-01-01

    The quarter vehicle model is the simplest representation of a vehicle among lumped-mass vehicle models. It is widely used in vehicle and suspension analyses, particularly those related to ride dynamics. However, for all its common adoption, it is also commonly accepted without quantification that this model is not as accurate as many higher-degree-of-freedom models due to its simplicity and limited degrees of freedom. This study investigates the trade-off between simplicity and accuracy within the context of the quarter vehicle model by determining the effect of adding various modeling details on model accuracy. In the study, road input detail, tire detail, suspension stiffness detail and suspension damping detail were factored in, and several enhanced models were compared to the base model to assess the significance of these details. The results clearly indicated that these details do have an effect on simulated vehicle response, but to various extents. In particular, road input detail and suspension damping detail have the most significance and are worth adding to the quarter vehicle model, as the inclusion of these details changed the response quite fundamentally. Overall, when it comes to lumped-mass vehicle modeling, it is reasonable to say that model accuracy depends not just on the number of degrees of freedom employed, but also on the contributions from various modeling details.
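    A base 2-DOF quarter vehicle model with one optional added detail (tire damping, `ct`) can be sketched as below. All parameter values and the road step are illustrative, not taken from the study:

```python
def quarter_car(ms=300.0, mu=40.0, ks=20000.0, cs=1500.0,
                kt=200000.0, ct=0.0, step=0.05, dt=1e-4, t_end=2.0):
    """Minimal 2-DOF quarter vehicle model hitting a road step of
    `step` metres at t=0, integrated with semi-implicit Euler.
    ct adds an optional tire-damping detail; the base model has ct=0.
    Returns (final sprung-mass position, peak sprung-mass position)."""
    zs = zu = vs = vu = 0.0  # sprung/unsprung positions and velocities
    peak = 0.0
    for _ in range(round(t_end / dt)):
        f_susp = ks * (zu - zs) + cs * (vu - vs)  # suspension force on ms
        f_tire = kt * (step - zu) - ct * vu       # tire force on mu
        vs += dt * (f_susp / ms)
        vu += dt * ((f_tire - f_susp) / mu)
        zs += dt * vs
        zu += dt * vu
        peak = max(peak, zs)
    return zs, peak

final_zs, peak_zs = quarter_car()
print(abs(final_zs - 0.05) < 0.005, peak_zs > 0.05)  # settles near the step
```

    Comparing `quarter_car(ct=0.0)` against an enhanced run such as `quarter_car(ct=500.0)` is the kind of base-versus-detailed comparison the study performs for each modeling detail.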

  4. Between simplicity and accuracy: Effect of adding modeling details on quarter vehicle model accuracy

    PubMed Central

    2017-01-01

    The quarter vehicle model is the simplest representation of a vehicle among lumped-mass vehicle models. It is widely used in vehicle and suspension analyses, particularly those related to ride dynamics. However, despite its common adoption, it is also commonly accepted, without quantification, that this model is not as accurate as many higher-degree-of-freedom models because of its simplicity and limited degrees of freedom. This study investigates the trade-off between simplicity and accuracy within the context of the quarter vehicle model by determining the effect of adding various modeling details on model accuracy. In the study, road input detail, tire detail, suspension stiffness detail and suspension damping detail were factored in, and several enhanced models were compared to the base model to assess the significance of these details. The results clearly indicated that these details do have an effect on simulated vehicle response, but to varying extents. In particular, road input detail and suspension damping detail have the most significance and are worth adding to the quarter vehicle model, as their inclusion changed the response quite fundamentally. Overall, when it comes to lumped-mass vehicle modeling, it is reasonable to say that model accuracy depends not just on the number of degrees of freedom employed, but also on the contributions of various modeling details. PMID:28617819
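
    The base 2-DOF quarter-car model the two records above build on can be sketched in a few lines. The sketch below is illustrative only: the masses, stiffnesses, damping and step road input are made-up (but typical) values, integrated with classic RK4, not the study's actual model or parameters.

```python
import numpy as np

def quarter_car_step(t_end=5.0, dt=1e-3, z_r=0.05,
                     ms=300.0, mu=40.0, ks=2.0e4, cs=1.5e3, kt=1.8e5):
    """Base 2-DOF quarter-car model: sprung mass ms on a suspension (ks, cs),
    unsprung mass mu on a linear tire spring kt, step road input z_r [m]."""
    def deriv(y):
        zs, vs, zu, vu = y
        f_susp = ks * (zs - zu) + cs * (vs - vu)   # suspension force
        f_tire = kt * (zu - z_r)                   # tire force (step road)
        return np.array([vs, -f_susp / ms, vu, (f_susp - f_tire) / mu])

    y = np.zeros(4)                                # start at rest on flat road
    for _ in range(int(t_end / dt)):               # classic RK4 integration
        k1 = deriv(y)
        k2 = deriv(y + 0.5 * dt * k1)
        k3 = deriv(y + 0.5 * dt * k2)
        k4 = deriv(y + dt * k3)
        y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y  # [sprung disp, sprung vel, unsprung disp, unsprung vel]
```

    After the transient, both masses settle at the 0.05 m road step. The "modeling details" the study varies would enter here by replacing the constant ks/cs with stroke- or velocity-dependent maps, the linear tire with a point-follower or enveloping tire, and the step with measured road profiles.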

  5. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-06-22

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, atomic gyroscopes will come into use in the near future, with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can estimate all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy of a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The improvement is at least 20% when the position accuracy of an atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, which calibrates more error sources and high-order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.

  6. Age Differences in Day-To-Day Speed-Accuracy Tradeoffs: Results from the COGITO Study.

    PubMed

    Ghisletta, Paolo; Joly-Burra, Emilie; Aichele, Stephen; Lindenberger, Ulman; Schmiedek, Florian

    2018-04-23

    We examined adult age differences in day-to-day adjustments of speed-accuracy tradeoffs (SAT) on a figural comparison task. Data came from the COGITO study, with over 100 younger and 100 older adults assessed for over 100 days. Participants were given explicit feedback about their completion time and accuracy each day after task completion. We applied a multivariate vector auto-regressive model of order 1 jointly to the daily mean reaction time (RT) and daily accuracy scores, within each age group. We reasoned that participants adjusted their SAT if the two cross-regressive parameters from RT (or accuracy) on day t-1 to accuracy (or RT) on day t were sizable and negative. We found that: (a) the temporal dependencies of both accuracy and RT were quite strong in both age groups; (b) younger adults showed an effect of their accuracy on day t-1 on their RT on day t, a pattern consistent with adjustments of their SAT; (c) older adults did not appear to adjust their SAT; and (d) these effects were partly associated with reliable individual differences within each age group. We discuss possible explanations for older adults' reluctance to recalibrate speed and accuracy on a day-to-day basis.
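
    The VAR(1) analysis described above amounts to regressing each day's (RT, accuracy) pair on the previous day's pair. A minimal sketch on synthetic data (not the COGITO data; the coefficient matrix and noise level are invented) fitted by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# True VAR(1): x_t = c + A @ x_{t-1} + noise, with x = (RT, accuracy).
# A negative off-diagonal (accuracy on day t-1 -> RT on day t) is the kind of
# cross-regressive parameter the study reads as an SAT adjustment.
A_true = np.array([[0.6, -0.2],
                   [0.1,  0.5]])
c_true = np.array([0.5, 0.3])

n = 3000
x = np.zeros((n, 2))
for t in range(1, n):
    x[t] = c_true + A_true @ x[t - 1] + rng.normal(0.0, 0.05, 2)

# Least-squares fit: regress x_t on [1, x_{t-1}].
X = np.hstack([np.ones((n - 1, 1)), x[:-1]])
coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
c_hat, A_hat = coef[0], coef[1:].T   # intercepts and coefficient matrix
```

    With enough days, the recovered A_hat is close to A_true; the study's inferential machinery (group-level estimates, individual differences) goes well beyond this sketch.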

  7. Efficiency of including first-generation information in second-generation ranking and selection: results of computer simulation.

    Treesearch

    T.Z. Ye; K.J.S. Jayawickrama; G.R. Johnson

    2006-01-01

    Using computer simulation, we evaluated the impact of using first-generation information to increase selection efficiency in a second-generation breeding program. Selection efficiency was compared in terms of increase in rank correlation between estimated and true breeding values (i.e., ranking accuracy), reduction in coefficient of variation of correlation...

  8. Refined Simulation of Satellite Laser Altimeter Full Echo Waveform

    NASA Astrophysics Data System (ADS)

    Men, H.; Xing, Y.; Li, G.; Gao, X.; Zhao, Y.; Gao, X.

    2018-04-01

    The return waveform of a satellite laser altimeter plays a vital role in the design of satellite parameters and in data processing and application. In this paper, a method for refined full-waveform simulation is proposed based on the reflectivity of the ground target, the true emission waveform and the Laser Profile Array (LPA). ICESat/GLAS data are used as the validation data. Finally, we evaluated the simulation accuracy with the correlation coefficient. It was found that the accuracy of echo simulation could be significantly improved by considering the reflectivity of the ground target and the emission waveform. However, the laser intensity distribution recorded by the LPA has little effect on the echo simulation accuracy compared with the distribution of the simulated laser energy. Finally, we propose a refinement approach based on the experimental results, in the hope of providing a reference for the waveform data simulation and processing of the future GF-7 satellite.

  9. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
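
    The core point of the record above — that accuracy measures assuming white residuals are optimistic when residuals are colored — can be demonstrated with a toy Monte Carlo: a straight-line fit with AR(1) residuals, comparing the white-noise standard-error formula against the empirical scatter of the estimates. All settings below are illustrative assumptions, not the paper's flight-test analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho, n_runs = 200, 0.9, 400
t = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), t])          # model: y = a + b*t
XtX_inv = np.linalg.inv(X.T @ X)

slopes, naive_ses = [], []
for _ in range(n_runs):
    # AR(1) ("colored") residuals with unit innovation variance
    e = np.zeros(n)
    for i in range(1, n):
        e[i] = rho * e[i - 1] + rng.normal()
    y = 2.0 + 3.0 * t + e
    beta = XtX_inv @ X.T @ y                  # ordinary least squares
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)              # white-noise variance estimate
    slopes.append(beta[1])
    naive_ses.append(np.sqrt(s2 * XtX_inv[1, 1]))

empirical_se = np.std(slopes)                 # true scatter of the slope
naive_se = np.mean(naive_ses)                 # what the white-noise formula claims
ratio = empirical_se / naive_se               # >> 1: the formula is optimistic
```

    With rho = 0.9 the true slope scatter is several times larger than the white-noise formula predicts, which is exactly the gap the paper's colored-residual accuracy expression is designed to close.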

  10. Diagnostic accuracy of fused positron emission tomography/magnetic resonance mammography: initial results.

    PubMed

    Heusner, T A; Hahn, S; Jonkmanns, C; Kuemmel, S; Otterbach, F; Hamami, M E; Stahl, A R; Bockisch, A; Forsting, M; Antoch, G

    2011-02-01

    The aim of this study was to evaluate the diagnostic accuracy of fused fluoro-deoxy-D-glucose positron emission tomography/magnetic resonance mammography (FDG-PET/MRM) in breast cancer patients and to compare FDG-PET/MRM with MRM. 27 breast cancer patients (mean age 58.9±9.9 years) underwent MRM and prone FDG-PET. Images were fused software-based to FDG-PET/MRM images. Histopathology served as the reference standard to define the following parameters for both MRM and FDG-PET/MRM: sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy for the detection of breast cancer lesions. Furthermore, the number of patients with correctly determined lesion focality was assessed. Differences between both modalities were assessed by McNemar's test (p<0.05). The number of patients in whom FDG-PET/MRM would have changed the surgical approach was determined. 58 breast lesions were evaluated. The sensitivity, specificity, PPV, NPV and accuracy were 93%, 60%, 87%, 75% and 85% for MRM, respectively. For FDG-PET/MRM they were 88%, 73%, 90%, 69% and 92%, respectively. FDG-PET/MRM was as accurate for lesion detection (p = 1) and determination of the lesions' focality (p = 0.7722) as MRM. FDG-PET/MRM would have changed the surgical treatment in only 1 patient. FDG-PET/MRM is as accurate as MRM for the evaluation of local breast cancer. FDG-PET/MRM defines the tumours' focality as accurately as MRM and may have an impact on the surgical treatment in only a small portion of patients. Based on these results, FDG-PET/MRM cannot be recommended as an adjunct or alternative to MRM.

  11. Forest Classification Accuracy as Influenced by Multispectral Scanner Spatial Resolution. [Sam Houston National Forest, Texas

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Sadowski, F. E.; Sarno, J. E.

    1976-01-01

    The author has identified the following significant results. A supervised classification within two separate ground areas of the Sam Houston National Forest was carried out on MSS data with a spatial resolution of 2 square meters. The data were progressively coarsened to simulate five additional cases of spatial resolution, ranging up to 64 square meters. Similar processing and analysis at all spatial resolutions enabled evaluation of the effect of spatial resolution on classification accuracy for various levels of detail, and of its effects on area proportion estimation for very general forest features. For the very coarse resolutions, a subset of spectral channels simulating the proposed Thematic Mapper channels was used to study classification accuracy.
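
    Progressive coarsening of the kind described above is commonly simulated by block-averaging pixels, so that each coarse pixel is the mean of a k×k block of fine pixels (the pixel area scales by k²). The sketch below assumes simple block means, which may differ from the exact procedure used in the study.

```python
import numpy as np

def coarsen(img, factor):
    """Simulate coarser spatial resolution by averaging factor x factor blocks."""
    h, w = img.shape
    assert h % factor == 0 and w % factor == 0
    return img.reshape(h // factor, factor,
                       w // factor, factor).mean(axis=(1, 3))

# Toy 4x4 "scene"; coarsening by 2 averages each 2x2 block of pixels.
fine = np.arange(16.0).reshape(4, 4)
coarse = coarsen(fine, 2)
```

    Repeating the call with increasing factors yields the ladder of degraded resolutions on which classification accuracy can then be compared.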

  12. Impact of Transport Zone Number in Simulation Models on Cost-Benefit Analysis Results in Transport Investments

    NASA Astrophysics Data System (ADS)

    Chmielewski, Jacek

    2017-10-01

    Nowadays, feasibility studies need to be prepared for all planned transport investments, mainly those co-financed with EU grants. One of the fundamental aspects of a feasibility study is the economic justification of an investment, evaluated through so-called cost-benefit analysis (CBA). The main goal of a CBA calculation is to prove that a transport investment is really important for society and should be implemented as an economically efficient one. It can be said that the number of passenger hours (PH) spent in trips and passenger kilometres (PK) travelled are the most important inputs to CBA results. The differences between PH and PK calculated for particular investment scenarios are the basis for benefit calculation. Typically, transport simulation models are the best source of such data. Transport simulation models are among the most powerful tools for transport network planning. They make it possible to evaluate forecast traffic volumes and passenger flows in a public transport system for defined scenarios of transport and area development. There are many different transport models. Their construction is often similar, and they mainly differ in their level of accuracy. Even models for the same area may differ in this regard. Typically, such differences come from the accuracy of the supply-side representation: the road and public transport networks. In many cases only main roads and the public transport network are represented, while local and service roads are eliminated as a simplification of reality. This also enables a faster and more effective calculation process. On the other hand, the description of the demand part of these models, based on transport zones, is often stable. Difficulties with data collection, mainly data on land use, have resulted in a lack of changes in the division of the analysed area into so-called transport zones. 
In this paper the author presents the influence of land division on the results of traffic analyses, and hence
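
    The PH/PK-based benefit arithmetic sketched in the record above reduces to valuing the differences between scenarios. The unit costs below are hypothetical placeholders, not official appraisal rates:

```python
# Illustrative CBA benefit from a transport model's PH/PK outputs.
# Both unit values are made-up placeholders for the sketch.
VALUE_OF_TIME = 10.0       # EUR per passenger-hour
VEHICLE_OP_COST = 0.08     # EUR per passenger-kilometre

def annual_benefit(ph_base, pk_base, ph_invest, pk_invest):
    """Benefit = savings in passenger hours and kilometres, valued in EUR."""
    time_saving = (ph_base - ph_invest) * VALUE_OF_TIME
    dist_saving = (pk_base - pk_invest) * VEHICLE_OP_COST
    return time_saving + dist_saving

# Do-nothing vs investment scenario, annual totals from the transport model:
benefit = annual_benefit(ph_base=2.0e6, pk_base=60.0e6,
                         ph_invest=1.8e6, pk_invest=57.0e6)
print(benefit)  # 2.0e5 * 10 + 3.0e6 * 0.08 = 2,240,000 EUR
```

    Because the benefit is a small difference of large PH/PK totals, zoning-induced errors in those totals propagate directly into the CBA result — which is the paper's point.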

  13. Navigator Accuracy Requirements for Prospective Motion Correction

    PubMed Central

    Maclaren, Julian; Speck, Oliver; Stucht, Daniel; Schulze, Peter; Hennig, Jürgen; Zaitsev, Maxim

    2010-01-01

    Prospective motion correction in MR imaging is becoming increasingly popular to prevent the image artefacts that result from subject motion. Navigator information is used to update the position of the imaging volume before every spin excitation so that lines of acquired k-space data are consistent. Errors in the navigator information, however, result in residual errors in each k-space line. This paper presents an analysis linking noise in the tracking system to the power of the resulting image artefacts. An expression is formulated for the required navigator accuracy based on the properties of the imaged object and the desired resolution. Analytical results are compared with computer simulations and experimental data. PMID:19918892
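
    The link described above between tracking noise and artefact power can be illustrated with a 1-D toy: each k-space sample of a phantom is given the phase error corresponding to a random residual position error, and the residual image power is measured. The Gaussian phantom, noise levels and averaging below are assumptions for illustration, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 256
x = np.arange(N)
obj = np.exp(-((x - N / 2) ** 2) / (2 * 20.0 ** 2))   # 1-D Gaussian phantom
k = np.fft.fftfreq(N)                                  # cycles/pixel

def artifact_power(track_noise_std, n_avg=50):
    """Mean residual image power when each k-space line is acquired with a
    residual position error drawn from N(0, track_noise_std) pixels."""
    F = np.fft.fft(obj)
    p = 0.0
    for _ in range(n_avg):
        shifts = rng.normal(0.0, track_noise_std, N)   # per-line tracking error
        F_err = F * np.exp(-2j * np.pi * k * shifts)   # shift -> phase error
        img = np.fft.ifft(F_err)
        p += np.mean(np.abs(img - obj) ** 2)
    return p / n_avg

low, high = artifact_power(0.1), artifact_power(0.5)   # power grows ~ noise^2
```

    In the small-error regime the artefact power scales with the tracking noise variance, which is the behaviour the paper's analytical expression quantifies and inverts to obtain a required navigator accuracy.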

  14. Daily modulation of the speed-accuracy trade-off.

    PubMed

    Gueugneau, Nicolas; Pozzo, Thierry; Darlot, Christian; Papaxanthis, Charalambos

    2017-07-25

    Goal-oriented arm movements are characterized by a balance between speed and accuracy. The relation between speed and accuracy has been formalized by Fitts' law, which predicts a linear increase in movement duration with task constraints. Up to now this relation has been investigated on a short time scale only, that is, during a single experimental session, although chronobiological studies report that the motor system is shaped by circadian rhythms. Here, we examine whether the speed-accuracy trade-off varies during the day. Healthy adults carried out arm-pointing movements as accurately and as fast as possible toward targets of different sizes at various hours of the day, and variations in Fitts' law parameters were scrutinized. To investigate whether any modulation of the speed-accuracy trade-off has peripheral and/or central origins, a motor imagery paradigm was used as well. Results indicated a daily (circadian-like) variation in the durations of both executed and mentally simulated movements, under strictly controlled accuracy conditions. While Fitts' law held for all sessions of the day, the slope of the relation between movement duration and task difficulty showed a clear modulation, with the lowest values in the afternoon. This variation of the speed-accuracy trade-off in executed and mental movements suggests that, beyond execution parameters, motor planning mechanisms are modulated during the day. Daily updating of forward models is discussed as a potential mechanism. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. The accuracy of seminumerical reionization models in comparison with radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Hutter, Anne

    2018-06-01

    We have developed a modular seminumerical code that computes the time- and spatially dependent ionization of neutral hydrogen (H I), neutral helium (He I), and singly ionized helium (He II) in the intergalactic medium (IGM). The model accounts for recombinations and provides different descriptions for the photoionization rate that are used to calculate the residual H I fraction in ionized regions. We compare different seminumerical reionization schemes to a radiative transfer (RT) simulation. We use the RT simulation as a benchmark, and find that the seminumerical approaches produce similar H II and He II morphologies and power spectra of the H I 21 cm signal throughout reionization. As we do not track partial ionization of He II, the extent of the double-ionized helium (He III) regions is consistently smaller. In contrast to previous comparison projects, the ionizing emissivity in our seminumerical scheme is not adjusted to reproduce the redshift evolution of the RT simulation, but is directly derived from the RT simulation spectra. Among schemes that identify the ionized regions by the ratio of the number of ionization and absorption events on different spatial smoothing scales, we find that those that mark the entire sphere as ionized when the ionization criterion is fulfilled result in significantly accelerated reionization compared to the RT simulation. Conversely, those that flag only the central cell as ionized yield a very similar but slightly delayed redshift evolution of reionization, with up to 20 per cent of ionizing photons lost. Despite the overall agreement with the RT simulation, our results suggest that constraints on ionizing emissivity-sensitive parameters derived from seminumerical galaxy formation-reionization models are subject to photon nonconservation.

  16. Mapping simulated scenes with skeletal remains using differential GPS in open environments: an assessment of accuracy and practicality.

    PubMed

    Walter, Brittany S; Schultz, John J

    2013-05-10

    Scene mapping is an integral aspect of processing a scene with scattered human remains. By utilizing the appropriate mapping technique, investigators can accurately document the location of human remains and maintain a precise geospatial record of evidence. One option that has not received much attention for mapping forensic evidence is the differential global positioning system (DGPS) unit, as this technology now provides decreased positional error suitable for mapping scenes. Because of the lack of knowledge concerning this utility in mapping a scene, controlled research is necessary to determine the practicality of using newer and enhanced DGPS units in mapping scattered human remains. The purpose of this research was to quantify the accuracy of a DGPS unit for mapping skeletal dispersals and to determine the applicability of this utility in mapping a scene with dispersed remains. First, the accuracy of the DGPS unit in open environments was determined using known survey markers in open areas. Secondly, three simulated scenes exhibiting different types of dispersals were constructed and mapped in an open environment using the DGPS. Variables considered during data collection included the extent of the dispersal, data collection time, data collected on different days, and different postprocessing techniques. Data were differentially postprocessed and compared in a geographic information system (GIS) to evaluate the most efficient recordation methods. Results of this study demonstrate that the DGPS is a viable option for mapping dispersed human remains in open areas. The accuracy of collected point data was 11.52 and 9.55 cm for 50- and 100-s collection times, respectively, and the orientation and maximum length of long bones was maintained. Also, the use of error buffers for point data of bones in maps demonstrated the error of the DGPS unit, while showing that the context of the dispersed skeleton was accurately maintained. Furthermore, the application of a DGPS for

  17. Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry

    NASA Astrophysics Data System (ADS)

    Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki

    2015-08-01

    In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with their distribution similar to those currently observed with VERA and VLBA (Very Long Baseline Array). We apply Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameters of the models. We show that one can successfully determine the initial model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available for 500 sources, the expected accuracy of R0 and Θ0 is ˜1% or better, and parameters related to the spiral structure can be constrained to within 10% or better. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and/or the inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that BIC can be used to discriminate different dynamical models of the Galaxy.

  18. Presenting simulation results in a nested loop plot.

    PubMed

    Rücker, Gerta; Schwarzer, Guido

    2014-12-12

    Statisticians investigate new methods in simulations to evaluate their properties for future real data applications. Results are often presented in a number of figures, e.g., Trellis plots. We had conducted a simulation study on six statistical methods for estimating the treatment effect in binary outcome meta-analyses, where selection bias (e.g., publication bias) was suspected because of apparent funnel plot asymmetry. We varied five simulation parameters: true treatment effect, extent of selection, event proportion in control group, heterogeneity parameter, and number of studies in meta-analysis. In combination, this yielded a total number of 768 scenarios. To present all results using Trellis plots, 12 figures were needed. Choosing bias as criterion of interest, we present a 'nested loop plot', a diagram type that aims to have all simulation results in one plot. The idea was to bring all scenarios into a lexicographical order and arrange them consecutively on the horizontal axis of a plot, whereas the treatment effect estimate is presented on the vertical axis. The plot illustrates how parameters simultaneously influenced the estimate. It can be combined with a Trellis plot in a so-called hybrid plot. Nested loop plots may also be applied to other criteria such as the variance of estimation. The nested loop plot, similar to a time series graph, summarizes all information about the results of a simulation study with respect to a chosen criterion in one picture and provides a suitable alternative or an addition to Trellis plots.
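
    The core of the nested loop plot described above is a lexicographic ordering of all scenarios along the x-axis. The sketch below uses the five parameter names from the abstract, but the level values are invented, chosen only so that the grid has 768 scenarios (4 × 4 × 4 × 4 × 3).

```python
import itertools

# Five simulation parameters with illustrative level counts.
params = {
    "true_effect":   [0.0, 0.5, 1.0, 1.5],
    "selection":     [0.0, 0.3, 0.6, 0.9],
    "event_prop":    [0.05, 0.1, 0.2, 0.4],
    "heterogeneity": [0.0, 0.1, 0.3, 0.5],
    "n_studies":     [5, 10, 20],
}

# Lexicographic order: the first parameter varies slowest, the last fastest,
# like nested loops. Each scenario becomes one position on the x-axis.
scenarios = list(itertools.product(*params.values()))

# For the plot, each parameter is drawn as a step function over the scenario
# index (the "nested loops"), beneath the criterion values (e.g. bias).
steps = {name: [s[i] for s in scenarios]
         for i, name in enumerate(params)}
print(len(scenarios))  # 768
```

    Plotting the criterion (e.g. bias of each method) against the scenario index, with the step functions underneath, reproduces the one-picture summary the paper proposes.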

  19. Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle

    NASA Astrophysics Data System (ADS)

    Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon

    2018-03-01

    Shack-Hartmann wavefront sensors using wound fibre image bundles, positioned by Starbugs, are desired for multi-object adaptive optical systems to provide a large multiplex. The use of a large-sized wound fibre image bundle provides the flexibility to use more sub-apertures per wavefront sensor for ELTs. These compact wavefront sensors take advantage of large focal surfaces such as that of the Giant Magellan Telescope. The focus of this paper is to study the effect of wound fibre image bundle structure defects on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first-moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with a wound fibre image bundle and the impact of its structure on wavefront measurement accuracy statistics are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with low signal-to-noise ratio, accuracy is influenced by the read noise of the detector rather than by the wound fibre image bundle structure defects. We demonstrate this both in simulation and experimentally. We provide a statistical model of the centroid and wavefront error of a wound fibre image bundle, found through experiment.
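
    The first-moment (centre-of-gravity) centroiding named in the record above can be sketched in a few lines; the grid size, spot width and sub-pixel position below are arbitrary choices for illustration, and the sketch omits the bundle-structure sampling the paper actually studies.

```python
import numpy as np

def first_moment_centroid(img):
    """Centre-of-gravity centroid: intensity-weighted mean pixel position."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (ys * img).sum() / total, (xs * img).sum() / total

# Gaussian spot at a known sub-pixel position on a 32x32 grid.
N, sigma = 32, 3.0
yc, xc = 15.3, 16.7
y, x = np.indices((N, N))
spot = np.exp(-(((y - yc) ** 2 + (x - xc) ** 2) / (2 * sigma ** 2)))

cy, cx = first_moment_centroid(spot)   # recovers (yc, xc) to sub-pixel accuracy
```

    In the paper's setting the spot is resampled through the wound fibre bundle (tile angles, gap spacing) and corrupted by read noise before this estimator is applied, which is where the accuracy penalties arise.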

  20. Accuracy of Monte Carlo photon transport simulation in characterizing brachytherapy dosimeter energy-response artefacts

    NASA Astrophysics Data System (ADS)

    Das, R. K.; Li, Z.; Perera, H.; Williamson, J. F.

    1996-06-01

    Practical dosimeters in brachytherapy, such as thermoluminescent dosimeters (TLD) and diodes, are usually calibrated against low-energy megavoltage beams. To measure absolute dose rate near a brachytherapy source, it is necessary to establish the energy response of the detector relative to that of the calibration energy. The purpose of this paper is to assess the accuracy of Monte Carlo photon transport (MCPT) simulation in modelling the absolute detector response as a function of detector geometry and photon energy. We have exposed two different sizes of TLD-100 (LiF chips) and p-type silicon diode detectors to calibrated , HDR source and superficial x-ray beams. For the Scanditronix electron-field diode, the relative detector response, defined as the measured detector readings per measured unit of air kerma, varied from (40 kVp beam) to ( beam). Similarly for the large and small chips the same quantity varied from and , respectively. Monte Carlo simulation was used to calculate the absorbed dose to the active volume of the detector per unit air kerma. If the Monte Carlo simulation is accurate, then the absolute detector response, which is defined as the measured detector reading per unit dose absorbed by the active detector volume, and is calculated by Monte Carlo simulation, should be a constant. For the diode, the absolute response is . For TLDs of size

  1. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.

  2. Colored noise effects on batch attitude accuracy estimates

    NASA Technical Reports Server (NTRS)

    Bilanow, Stephen

    1991-01-01

    The effects of colored noise on the accuracy of batch least squares parameter estimates, with applications to attitude determination cases, are investigated. The standard approaches used for estimating the accuracy of a computed attitude commonly assume uncorrelated (white) measurement noise, while in actual flight experience measurement noise often contains significant time correlations and thus is colored. For example, horizon scanner measurements from low Earth orbit were observed to show correlations over many minutes in response to large-scale atmospheric phenomena. A general approach to the analysis of the effects of colored noise is investigated, and interpretation of the resulting equations provides insight into the effects of any particular noise color and the worst-case noise coloring for any particular parameter estimate. It is shown that for certain cases, the effects of relatively short-term correlations can be accommodated by a simple correction factor. The errors in the predicted accuracy assuming white noise, and the reduced accuracy due to the suboptimal nature of estimators that do not take into account the noise color characteristics, are discussed. The appearance of a variety of sample noise color characteristics is demonstrated through simulation, and their effects are discussed for sample estimation cases. Based on the analysis, options for dealing with the effects of colored noise are discussed.

  3. Accurate time delay technology in simulated test for high precision laser range finder

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi

    2015-10-01

    With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRF) is becoming higher and higher, so the maintenance demands on LRFs are also rising. Following the guiding principle of simulating spatial distance with time delay in tests of pulsed range finders, the key to distance simulation precision lies in the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method is proposed to improve the accuracy of the circuit delay without increasing the count frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit and an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the novel circuit delay method proposed in this paper was verified by measurement with a high-sampling-rate oscilloscope. The measurement results show that the accuracy of the distance simulated by the circuit delay is improved from ±0.75 m to ±0.15 m. Thus the accuracy of the simulated distance is greatly improved in simulated tests for high-precision pulsed range finders.
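
    The "time analog spatial distance" principle above reduces to round-trip timing arithmetic: a simulated range d corresponds to a delay of 2d/c, so a ±0.15 m range accuracy requires roughly ±1 ns of delay control. A minimal sketch of that arithmetic:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def delay_for_range(distance_m):
    """Round-trip delay a simulator must generate to mimic a target range."""
    return 2.0 * distance_m / C

def delay_tolerance(range_accuracy_m):
    """Delay accuracy needed for a given simulated-range accuracy."""
    return 2.0 * range_accuracy_m / C

t_1km = delay_for_range(1000.0)   # ~6.67 microseconds for a 1 km target
tol = delay_tolerance(0.15)       # ~1.0 nanosecond for +/-0.15 m
```

    The nanosecond-scale tolerance is why the paper combines a coarse delay with a real-time error-compensating stage rather than simply raising the counter clock frequency.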

  4. Gravity compensation in a Strapdown Inertial Navigation System to improve the attitude accuracy

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Wang, Jun; Wang, Xingshu; Yang, Shuai

    2017-10-01

    Attitude errors in a strapdown inertial navigation system due to gravity disturbances and system noises can be relatively large, although they are bounded within the Schuler and Earth rotation periods. The principal objective of this investigation is to determine to what extent accurate gravity data can improve attitude accuracy. The ways in which gravity disturbances affect the attitude were analyzed and compared with system noises through an analytic solution and simulation. Gravity disturbances affect attitude accuracy by introducing an initial attitude error and an equivalent accelerometer bias. With the development of high-precision inertial devices and the application of rotation modulation technology, gravity disturbances can no longer be neglected. Gravity compensation was performed using EGM2008, and simulations with and without accurate gravity compensation were carried out under varying navigation conditions. The results show that gravity compensation evidently improves the horizontal components of attitude accuracy, while the yaw angle is badly affected by the uncompensated gyro bias in the vertical channel.

  5. Effect of simulated intraoral variables on the accuracy of a photogrammetric imaging technique for complete-arch implant prostheses.

    PubMed

    Bratos, Manuel; Bergin, Jumping M; Rubenstein, Jeffrey E; Sorensen, John A

    2018-03-17

    Conventional impression techniques to obtain a definitive cast for a complete-arch implant-supported prosthesis are technique-sensitive and time-consuming. Direct optical recording with a camera could offer an alternative to conventional impression making. The purpose of this in vitro study was to test a novel intraoral image capture protocol to obtain 3-dimensional (3D) implant spatial measurement data under simulated oral conditions of vertical opening and lip retraction. A mannequin was assembled simulating the intraoral conditions of a patient having an edentulous mandible with 5 interforaminal implants. Simulated mouth openings with 2 interincisal openings (35 mm and 55 mm) and 3 lip retractions (55 mm, 75 mm, and 85 mm) were evaluated to record the implant positions. The 3D spatial orientations of implant replicas embedded in the reference model were measured using a coordinate measuring machine (CMM) (control). Five definitive casts were made with a splinted conventional impression technique of the reference model. The positions of the implant replicas for each of the 5 casts were measured with a Nobel Procera Scanner (conventional digital method). For the prototype, optical targets were secured to the implant replicas, and 3 sets of 12 images each were recorded for the photogrammetric process of 6 groups of retractions and openings using a digital camera and a standardized image capture protocol. Dimensional data were imported into photogrammetry software (photogrammetry method). The calculated and/or measured precision and accuracy of the implant positions in 3D space for the 6 groups were compared with 1-way ANOVA with an F-test (α=.05). The precision (standard error [SE] of measurement) for CMM was 3.9 μm (95% confidence interval [CI] 2.7 to 7.1 μm). For the conventional impression method, the SE of measurement was 17.2 μm (95% CI 10.3 to 49.4 μm). For photogrammetry, a grand mean was calculated for groups MinR-AvgO, MinR-MaxO, AvgR-AvgO, and Max
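    The group comparison above relies on a one-way ANOVA F-test. A self-contained sketch with made-up measurements (not the study's data) shows the statistic being computed:

```python
# One-way ANOVA F statistic: ratio of between-group to within-group
# mean squares. Data below are invented for illustration only.
def one_way_anova_f(groups):
    """Return the F statistic for a list of groups of measurements."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

groups = [[10.1, 9.8, 10.3], [10.0, 10.2, 9.9], [12.1, 12.3, 11.8]]
print(one_way_anova_f(groups))  # large F -> group means differ
```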

  6. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Reynolds, Daniel R.

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can yield significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.

  7. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE PAGES

    Gardner, David J.; Reynolds, Daniel R.

    2017-01-05

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can yield significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
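    Spectral low-pass filtering of noisy continuum data of the kind described above can be sketched as follows (the synthetic signal and noise level are assumptions, not the paper's data; the automatic cutoff selection is omitted):

```python
import numpy as np

def spectral_lowpass(y, keep_modes):
    """Zero all Fourier modes above `keep_modes`: spectral filtering of
    data corrupted by additive white noise."""
    Y = np.fft.rfft(y)
    Y[keep_modes + 1:] = 0.0
    return np.fft.irfft(Y, n=len(y))

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
truth = np.sin(x) + 0.5 * np.sin(3 * x)            # smooth "continuum" data
noisy = truth + 0.3 * rng.standard_normal(x.size)  # white-noise corruption

filtered = spectral_lowpass(noisy, keep_modes=4)
print(np.sqrt(np.mean((noisy - truth) ** 2)))     # error before filtering
print(np.sqrt(np.mean((filtered - truth) ** 2)))  # much smaller after
```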

  8. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme, because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are carried explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: it involves correction of surface-normal spatial pressure derivatives at a boundary surface so as to satisfy the governing equations and the boundary conditions, thereby achieving arbitrarily high orders of time accuracy in special cases.
The boundary conditions can now include a potentially infinite number

  9. Percutaneous spinal fixation simulation with virtual reality and haptics.

    PubMed

    Luciano, Cristian J; Banerjee, P Pat; Sorenson, Jeffery M; Foley, Kevin T; Ansari, Sameer A; Rizzi, Silvio; Germanwala, Anand V; Kranzler, Leonard; Chittiboina, Prashant; Roitberg, Ben Z

    2013-01-01

    In this study, we evaluated the use of a part-task simulator with 3-dimensional and haptic feedback as a training tool for percutaneous spinal needle placement. To evaluate the learning effectiveness in terms of entry point/target point accuracy of percutaneous spinal needle placement on a high-performance augmented-reality and haptic technology workstation with the ability to control the duration of computer-simulated fluoroscopic exposure, thereby simulating an actual situation. Sixty-three fellows and residents performed needle placement on the simulator. A virtual needle was percutaneously inserted into a virtual patient's thoracic spine derived from an actual patient computed tomography data set. Ten of 126 needle placement attempts by 63 participants ended in failure for a failure rate of 7.93%. From all 126 needle insertions, the average error (15.69 vs 13.91), average fluoroscopy exposure (4.6 vs 3.92), and average individual performance score (32.39 vs 30.71) improved from the first to the second attempt. Performance accuracy yielded P = .04 from a 2-sample t test in which the rejected null hypothesis assumes no improvement in performance accuracy from the first to second attempt in the test session. The experiments showed evidence (P = .04) of performance accuracy improvement from the first to the second percutaneous needle placement attempt. This result, combined with previous learning retention and/or face validity results of using the simulator for open thoracic pedicle screw placement and ventriculostomy catheter placement, supports the efficacy of augmented reality and haptics simulation as a learning tool.

  10. EFFECT OF RADIATION DOSE LEVEL ON ACCURACY AND PRECISION OF MANUAL SIZE MEASUREMENTS IN CHEST TOMOSYNTHESIS EVALUATED USING SIMULATED PULMONARY NODULES

    PubMed Central

    Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus

    2016-01-01

    The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease in measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in clinical practice. PMID:26994093
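    Simulating a lower dose by adding noise, as described above, can be sketched under the common assumption that quantum-noise variance scales inversely with dose (the image, noise level, and scaling model here are illustrative, not the study's exact method):

```python
import numpy as np

def simulate_lower_dose(image, dose_ratio, sigma_full, rng):
    """Add zero-mean Gaussian noise so the image mimics an acquisition at
    `dose_ratio` (< 1) of the original dose, assuming quantum-noise
    variance scales inversely with dose: var_new = var_full / dose_ratio."""
    extra_sigma = sigma_full * np.sqrt(1.0 / dose_ratio - 1.0)
    return image + rng.normal(0.0, extra_sigma, image.shape)

rng = np.random.default_rng(42)
img = np.full((64, 64), 100.0) + rng.normal(0.0, 5.0, (64, 64))
half_dose = simulate_lower_dose(img, 0.5, 5.0, rng)  # e.g. 0.12 -> 0.06 mSv
print(img.std(), half_dose.std())  # noise std grows by about sqrt(2)
```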

  11. Multi-wavelength approach towards on-product overlay accuracy and robustness

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Kaustuve; Noot, Marc; Chang, Hammer; Liao, Sax; Chang, Ken; Gosali, Benny; Su, Eason; Wang, Cathy; den Boef, Arie; Fouquet, Christophe; Huang, Guo-Tsai; Chen, Kai-Hsiung; Cheng, Kevin; Lin, John

    2018-03-01

    The success of the diffraction-based overlay (DBO) technique1,4,5 in the industry stems not only from its good precision and low tool-induced shift, but also from the measurement accuracy2 and robustness that DBO can provide. Significant effort has been invested to capitalize on DBO's potential for measurement accuracy and robustness. The introduction of many measurement wavelength choices (continuous wavelength selection) in DBO is one of the key new capabilities in this area. Along with the continuous choice of wavelengths, the algorithms (driven by swing-curve physics) that determine how to use these wavelengths are critical for a robust recipe setup that avoids the impact of process stack variations (symmetric as well as asymmetric). All of these aspects are discussed. Moreover, measurement accuracy and robustness can be boosted further by combining overlay data from multiple wavelength measurements, with the goal of making overlay measurements immune to process stack variations while reporting health KPIs for every measurement. By combining measurements from multiple wavelengths, a final overlay measurement is generated. The results show a significant benefit in accuracy and robustness against process stack variation, supported both by measurement data and by simulation across many product stacks.

  12. An accuracy measurement method for star trackers based on direct astronomic observation

    PubMed Central

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices and is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker ultimately determines satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method takes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted to account for the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper that can directly determine the pointing and rolling accuracy of a star tracker. Experimental measurements confirm that this method is effective and convenient to implement. The measurement environment is close to in-orbit conditions and can satisfy the stringent requirements of high-accuracy star trackers. PMID:26948412

  13. An accuracy measurement method for star trackers based on direct astronomic observation.

    PubMed

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-03-07

    The star tracker is one of the most promising optical attitude measurement devices and is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker ultimately determines satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method takes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted to account for the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper that can directly determine the pointing and rolling accuracy of a star tracker. Experimental measurements confirm that this method is effective and convenient to implement. The measurement environment is close to in-orbit conditions and can satisfy the stringent requirements of high-accuracy star trackers.
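    The error curves of directional vectors reduce to angles between measured and reference unit vectors. A minimal sketch with hypothetical vectors (not the paper's data):

```python
import math

def angular_error_arcsec(v_meas, v_ref):
    """Angle between a measured star/boresight direction vector and its
    astronomical reference direction, in arcseconds."""
    dot = sum(a * b for a, b in zip(v_meas, v_ref))
    norm = (math.sqrt(sum(a * a for a in v_meas))
            * math.sqrt(sum(b * b for b in v_ref)))
    cosang = max(-1.0, min(1.0, dot / norm))  # clamp against rounding
    return math.degrees(math.acos(cosang)) * 3600.0

# Hypothetical example: a 10 microradian pointing error is about 2 arcsec.
print(angular_error_arcsec((1e-5, 0.0, 1.0), (0.0, 0.0, 1.0)))
```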

  14. Toward Quantitative Small Animal Pinhole SPECT: Assessment of Quantitation Accuracy Prior to Image Compensations

    PubMed Central

    Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.

    2011-01-01

    Purpose We assessed the quantitation accuracy of small animal pinhole single photon emission computed tomography (SPECT) under the current preclinical settings, where image compensations are not routinely applied. Procedures The effects of several common image-degrading factors and imaging parameters on quantitation accuracy were evaluated using Monte-Carlo simulation methods. Typical preclinical imaging configurations were modeled, and quantitative analyses were performed based on image reconstructions without compensating for attenuation, scatter, and limited system resolution. Results Using mouse-sized phantom studies as examples, attenuation effects alone degraded quantitation accuracy by up to −18% (Tc-99m or In-111) or −41% (I-125). The inclusion of scatter effects changed the above numbers to −12% (Tc-99m or In-111) and −21% (I-125), respectively, indicating the significance of scatter in quantitative I-125 imaging. Region-of-interest (ROI) definitions have greater impacts on regional quantitation accuracy for small sphere sources as compared to attenuation and scatter effects. For the same ROI, SPECT acquisitions using pinhole apertures of different sizes could significantly affect the outcome, whereas the use of different radii-of-rotation yielded negligible differences in quantitation accuracy for the imaging configurations simulated. Conclusions We have systematically quantified the influence of several factors affecting the quantitation accuracy of small animal pinhole SPECT. In order to consistently achieve accurate quantitation within 5% of the truth, comprehensive image compensation methods are needed. PMID:19048346
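    The uncompensated attenuation bias discussed above follows from simple exponential attenuation. A sketch with an assumed attenuation coefficient and source depth (illustrative values, not the paper's exact phantom geometry):

```python
import math

def attenuation_bias(mu_cm, depth_cm):
    """Fractional quantitation error when attenuation is not compensated:
    measured/true - 1 = exp(-mu * d) for a source at depth d."""
    return math.exp(-mu_cm * depth_cm) - 1.0

# Illustrative (assumed) values: mu ~ 0.15 /cm for 140 keV Tc-99m photons
# in soft tissue, source ~1.3 cm deep in a mouse-sized phantom.
print(f"{attenuation_bias(0.15, 1.3):+.0%}")  # same order as the -18%
                                              # degradation reported above
```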

  15. Improving the results of forecasting using reservoir and surface network simulation

    NASA Astrophysics Data System (ADS)

    Hendri, R. S.; Winarta, J.

    2018-01-01

    This study aimed to obtain more representative production forecasts using integrated simulation of the pipeline gathering system of the X field. Five main scenarios were considered, consisting of production forecasts for the existing condition, workovers, and infill drilling, from which the best development scenario was determined. The method couples a reservoir simulator with a pipeline network simulator, referred to as Integrated Reservoir and Surface Network Simulation. Well data from the reservoir simulator were integrated with the pipeline network simulator to construct a new schedule, which served as input for the whole simulation procedure. The well design produced by the well modeling simulator was exported into the pipeline simulator. In reservoir-only prediction, each well depends on a minimum tubing head pressure (THP) value, and the pressure drop in the gathering network is not calculated. The same scenarios were also run as single-reservoir simulations. The integrated simulation produces results that approach the actual reservoir conditions, as confirmed by the THP profiles, which differ between the two methods. The difference between the integrated simulation and the single-model simulation is 6-9%. The aim of solving the back-pressure problem in the pipeline gathering system of the X field was achieved.

  16. Accuracy and borehole influences in pulsed neutron gamma density logging while drilling.

    PubMed

    Yu, Huawei; Sun, Jianmeng; Wang, Jiaxin; Gardner, Robin P

    2011-09-01

    A new pulsed neutron gamma density (NGD) logging has been developed to replace radioactive chemical sources in oil logging tools. The present paper describes studies of near and far density measurement accuracy of NGD logging at two spacings and the borehole influences using Monte-Carlo simulation. The results show that the accuracy of near density is not as good as far density. It is difficult to correct this for borehole effects by using conventional methods because both near and far density measurement is significantly sensitive to standoffs and mud properties. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. NREL: News - Solar Decathlon Design Presentation and Simulation Results

    Science.gov Websites

    Design Presentation and Simulation Results Announced, Monday, September 30, 2002: the winning team took first place in the Design Presentation and Simulation Contest at the Solar Village on the National Mall, with Tech in third. Design Presentation and Simulation is one of ten contests in the Solar Decathlon.

  18. LENS: μLENS Simulations, Analysis, and Results

    NASA Astrophysics Data System (ADS)

    Rasco, Charles

    2013-04-01

    Simulations of the Low-Energy Neutrino Spectrometer prototype, μLENS, have been performed in order to benchmark the first measurements of the μLENS detector at the Kimballton Underground Research Facility (KURF). μLENS is a 6x6x6-celled scintillation lattice filled with a linear alkylbenzene-based scintillator. We have performed simulations of μLENS using the GEANT4 toolkit, and we have measured various radioactive sources, LEDs, and the environmental background radiation at KURF using up to 96 PMTs with a simplified data acquisition system of QDCs and TDCs. In this talk we demonstrate our understanding of the light propagation and compare the simulation results with μLENS detector measurements of the radioactive sources, LEDs, and environmental background radiation.

  19. Hyper-X Stage Separation: Simulation Development and Results

    NASA Technical Reports Server (NTRS)

    Reubush, David E.; Martin, John G.; Robinson, Jeffrey S.; Bose, David M.; Strovers, Brian K.

    2001-01-01

    This paper provides an overview of stage separation simulation development and results for NASA's Hyper-X program; a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an account of the development of the current 14 degree of freedom stage separation simulation tool (SepSim) and results from use of the tool in a Monte Carlo analysis to evaluate the risk of failure for the separation event. Results from use of the tool show that there is only a very small risk of failure in the separation event.

  20. NOTE: Implementation of angular response function modeling in SPECT simulations with GATE

    NASA Astrophysics Data System (ADS)

    Descourt, P.; Carlier, T.; Du, Y.; Song, X.; Buvat, I.; Frey, E. C.; Bardies, M.; Tsui, B. M. W.; Visvikis, D.

    2010-05-01

    Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy.

  1. Establishment of quality assurance for respiratory-gated radiotherapy using a respiration-simulating phantom and gamma index: Evaluation of accuracy taking into account tumor motion and respiratory cycle

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Seung; Im, In-Chul; Kang, Su-Man; Goo, Eun-Hoe; Baek, Seong-Min

    2013-11-01

    The purpose of this study is to present a new method of quality assurance (QA) in order to ensure effective evaluation of the accuracy of respiratory-gated radiotherapy (RGR). This would help in quantitatively analyzing the patient's respiratory cycle and respiration-induced tumor motion and in performing a subsequent comparative analysis of dose distributions, using the gamma-index method, as reproduced in our in-house developed respiration-simulating phantom. Therefore, we designed a respiration-simulating phantom capable of reproducing the patient's respiratory cycle and respiration-induced tumor motion and evaluated the accuracy of RGR by estimating its pass rates. We applied the gamma index passing criteria of accepted error ranges of 3% and 3 mm for the dose distribution calculated by using the treatment planning system (TPS) and the actual dose distribution of RGR. The pass rate clearly increased inversely to the gating width chosen. When respiration-induced tumor motion was 12 mm or less, pass rates of 85% and above were achieved for the 30-70% respiratory phase, and pass rates of 90% and above were achieved for the 40-60% respiratory phase. However, a respiratory cycle with a very small fluctuation range of pass rates failed to prove reliable in evaluating the accuracy of RGR. Therefore, accurate and reliable outcomes of radiotherapy will be obtainable only by establishing a novel QA system using the respiration-simulating phantom, the gamma-index analysis, and a quantitative analysis of diaphragmatic motion, enabling an indirect measurement of tumor motion.
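    The 3%/3 mm gamma-index comparison described above can be sketched in one dimension as follows (illustrative dose profiles, not the study's data; a global dose normalization is assumed):

```python
import numpy as np

def gamma_pass_rate(x_mm, ref_dose, meas_dose, dd=0.03, dta_mm=3.0):
    """1-D gamma analysis (global 3%/3 mm by default): for each reference
    point, minimize the combined dose-difference / distance-to-agreement
    metric over all measured points; gamma <= 1 counts as a pass."""
    d_max = ref_dose.max()
    passed = 0
    for xi, di in zip(x_mm, ref_dose):
        dose_term = ((meas_dose - di) / (dd * d_max)) ** 2
        dist_term = ((x_mm - xi) / dta_mm) ** 2
        passed += np.sqrt(np.min(dose_term + dist_term)) <= 1.0
    return 100.0 * passed / len(x_mm)

x = np.arange(0.0, 100.0, 1.0)                   # positions in mm
ref = np.exp(-((x - 50.0) / 15.0) ** 2)          # planned (TPS) profile
meas = np.exp(-((x - 51.0) / 15.0) ** 2) * 1.01  # 1 mm shift, +1% dose
print(gamma_pass_rate(x, ref, meas))             # small shift: high pass rate
```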

  2. Flight Test Results: CTAS Cruise/Descent Trajectory Prediction Accuracy for En route ATC Advisories

    NASA Technical Reports Server (NTRS)

    Green, S.; Grace, M.; Williams, D.

    1999-01-01

    The Center/TRACON Automation System (CTAS), under development at NASA Ames Research Center, is designed to assist controllers with the management and control of air traffic transitioning to/from congested airspace. This paper focuses on the transition from the en route environment to high-density terminal airspace under a time-based arrival-metering constraint. Two flight tests were conducted at the Denver Air Route Traffic Control Center (ARTCC) to study trajectory-prediction accuracy, the key to accurate Decision Support Tool advisories such as conflict detection/resolution and fuel-efficient metering conformance. In collaboration with NASA Langley Research Center, these tests were part of an overall effort to research systems and procedures for the integration of CTAS and flight management systems (FMS). The Langley Transport Systems Research Vehicle Boeing 737 airplane flew a combined total of 58 cruise-arrival trajectory runs while following CTAS clearance advisories. Actual trajectories of the airplane were compared to CTAS and FMS predictions to measure trajectory-prediction accuracy and identify the primary sources of error for both. The research airplane was used to evaluate several levels of cockpit automation ranging from conventional avionics to a performance-based vertical navigation (VNAV) FMS. Trajectory prediction accuracy was analyzed with respect to both ARTCC radar tracking and GPS-based aircraft measurements. This paper presents detailed results describing the trajectory accuracy and error sources. Although differences were found in both accuracy and error sources, CTAS accuracy was comparable to the FMS in terms of both meter-fix arrival-time performance (in support of metering) and 4D-trajectory prediction (key to conflict prediction). Overall arrival time errors (mean plus standard deviation) were measured to be approximately 24 seconds during the first flight test (23 runs) and 15 seconds during the second flight test (25 runs). 
The major

  3. Interactive visualisation for interpreting diagnostic test accuracy study results.

    PubMed

    Fanshawe, Thomas R; Power, Michael; Graziadio, Sara; Ordóñez-Mena, José M; Simpson, John; Allen, Joy

    2018-02-01

    Information about the performance of diagnostic tests is typically presented in the form of measures of test accuracy such as sensitivity and specificity. These measures may be difficult to translate directly into decisions about patient treatment, for which information presented in the form of probabilities of disease after a positive or a negative test result may be more useful. These probabilities depend on the prevalence of the disease, which is likely to vary between populations. This article aims to clarify the relationship between pre-test (prevalence) and post-test probabilities of disease, and presents two free, online interactive tools to illustrate this relationship. These tools allow probabilities of disease to be compared with decision thresholds above and below which different treatment decisions may be indicated. They are intended to help those involved in communicating information about diagnostic test performance and are likely to be of benefit when teaching these concepts. A substantive example is presented using C reactive protein as a diagnostic marker for bacterial infection in the older adult population. The tools may also be useful for manufacturers of clinical tests in planning product development, for authors of test evaluation studies to improve reporting and for users of test evaluations to facilitate interpretation and application of the results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
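    The pre-test/post-test relationship that such tools visualize is Bayes' rule applied to sensitivity and specificity. A minimal sketch with assumed, illustrative test characteristics (not values from the article):

```python
def post_test_probability(prevalence, sensitivity, specificity):
    """Bayes' rule relating pre-test probability (prevalence) to post-test
    probability of disease after a positive or a negative test result."""
    p_pos = (sensitivity * prevalence
             + (1.0 - specificity) * (1.0 - prevalence))
    p_disease_given_pos = sensitivity * prevalence / p_pos
    p_disease_given_neg = (1.0 - sensitivity) * prevalence / (1.0 - p_pos)
    return p_disease_given_pos, p_disease_given_neg

# Illustrative (assumed) characteristics: sensitivity 0.80, specificity 0.85.
# The same test gives very different post-test probabilities as the
# pre-test probability (prevalence) varies between populations:
for prev in (0.05, 0.20, 0.50):
    pos, neg = post_test_probability(prev, 0.80, 0.85)
    print(f"prevalence {prev:.0%}: P(D|+)={pos:.2f}, P(D|-)={neg:.2f}")
```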

  4. Factors Affecting the Item Parameter Estimation and Classification Accuracy of the DINA Model

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Hong, Yuan; Deng, Weiling

    2010-01-01

    To better understand the statistical properties of the deterministic inputs, noisy "and" gate cognitive diagnosis (DINA) model, the impact of several factors on the quality of the item parameter estimates and classification accuracy was investigated. Results of the simulation study indicate that the fully Bayes approach is most accurate when the…

  5. Sampling factors influencing accuracy of sperm kinematic analysis.

    PubMed

    Owen, D H; Katz, D F

    1993-01-01

    Sampling conditions that influence the accuracy of experimental measurement of sperm head kinematics were studied by computer simulation methods. Several archetypal sperm trajectories were studied. First, mathematical models of typical flagellar beats were input to hydrodynamic equations of sperm motion. The instantaneous swimming velocities of such sperm were computed over sequences of flagellar beat cycles, from which the resulting trajectories were determined. In a second, idealized approach, direct mathematical models of trajectories were utilized, based upon similarities to the previous hydrodynamic constructs. In general, it was found that analyses of sampling factors produced similar results for the hydrodynamic and idealized trajectories. A number of experimental sampling factors were studied, including the number of sperm head positions measured per flagellar beat, and the time interval over which these measurements are taken. It was found that when one flagellar beat is sampled, values of amplitude of lateral head displacement (ALH) and linearity (LIN) approached their actual values when five or more sample points per beat were taken. Mean angular displacement (MAD) values, however, remained sensitive to sampling rate even when large sampling rates were used. Values of MAD were also much more sensitive to the initial starting point of the sampling procedure than were ALH or LIN. On the basis of these analyses of measurement accuracy for individual sperm, simulations were then performed of cumulative effects when studying entire populations of motile cells. It was found that substantial (double digit) errors occurred in the mean values of curvilinear velocity (VCL), LIN, and MAD under the conditions of 30 video frames per second and 0.5 seconds of analysis time. Increasing the analysis interval to 1 second did not appreciably improve the results. However, increasing the analysis rate to 60 frames per second significantly reduced the errors. 
These findings
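    The sampling-rate effect described above can be illustrated numerically. The sketch below (an illustrative reconstruction, not the study's code) subsamples an idealized sinusoidal trajectory, loosely analogous to the study's idealized models, and recomputes two CASA-style measures; all trajectory parameters are invented.

```python
import numpy as np

def trajectory(t, vsl=50.0, alh=5.0, beat_hz=10.0):
    """Idealized path: steady progression plus lateral sinusoidal wobble.
    vsl in um/s, alh amplitude in um, t in seconds."""
    x = vsl * t
    y = alh * np.sin(2 * np.pi * beat_hz * t)
    return np.column_stack([x, y])

def kinematics(points, dt):
    """Curvilinear velocity (VCL) and linearity (LIN = VSL/VCL)."""
    steps = np.diff(points, axis=0)
    path = np.sum(np.linalg.norm(steps, axis=1))     # polyline path length
    vcl = path / (dt * len(steps))
    chord = np.linalg.norm(points[-1] - points[0])   # straight-line distance
    vsl = chord / (dt * len(steps))
    return vcl, vsl / vcl

for fps in (30, 60, 200):
    t = np.arange(0, 1.0, 1.0 / fps)
    vcl, lin = kinematics(trajectory(t), 1.0 / fps)
    print(f"{fps:3d} fps: VCL={vcl:6.1f} um/s  LIN={lin:.2f}")
```

    At 30 frames per second a 10 Hz beat is sampled only three times per cycle, so the polyline misses lateral excursions: VCL is underestimated and LIN overestimated, consistent with the double-digit errors reported above at 30 frames per second.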

  6. Improved Statistical Sampling and Accuracy with Accelerated Molecular Dynamics on Rotatable Torsions.

    PubMed

    Doshi, Urmi; Hamelberg, Donald

    2012-11-13

    In enhanced sampling techniques, the precision of the reweighted ensemble properties is often decreased due to large variation in statistical weights and reduction in the effective sampling size. To abate this reweighting problem, here, we propose a general accelerated molecular dynamics (aMD) approach in which only the rotatable dihedrals are subjected to aMD (RaMD), unlike the typical implementation wherein all dihedrals are boosted (all-aMD). Nonrotatable and improper dihedrals are marginally important to conformational changes or the different rotameric states. Not accelerating them avoids the sharp increases in the potential energies due to small deviations from their minimum energy conformations and leads to improvement in the precision of RaMD. We present benchmark studies on two model dipeptides, Ace-Ala-Nme and Ace-Trp-Nme, simulated with normal MD, all-aMD, and RaMD. We carry out a systematic comparison between the performances of both forms of aMD using a theory that allows quantitative estimation of the effective number of sampled points and the associated uncertainty. Our results indicate that, for the same level of acceleration and simulation length, as used in all-aMD, RaMD results in significantly less loss in the effective sample size and, hence, increased accuracy in the sampling of φ-ψ space. RaMD yields an accuracy comparable to that of all-aMD, from simulation lengths 5 to 1000 times shorter, depending on the peptide and the acceleration level. Such improvement in speed and accuracy over all-aMD is highly remarkable, suggesting RaMD as a promising method for sampling larger biomolecules.
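    The dihedral boost in aMD commonly takes the form ΔV = (E − V)² / (α + E − V) whenever the dihedral energy V falls below a threshold E, and each frame is reweighted by exp(ΔV/kT); the RaMD idea above is to apply this boost only to rotatable dihedrals. A minimal sketch of that functional form, with illustrative parameter values:

```python
import math

def amd_boost(v_dih, e_thresh, alpha):
    """Boost energy dV added when the dihedral energy falls below the
    threshold E; above E the potential surface is left unmodified."""
    if v_dih >= e_thresh:
        return 0.0
    return (e_thresh - v_dih) ** 2 / (alpha + e_thresh - v_dih)

def reweight_factor(dv, kT=0.5961):   # kT in kcal/mol at 300 K
    """Each frame is reweighted by exp(dV/kT); large dV values cause the
    weight variance that shrinks the effective sample size."""
    return math.exp(dv / kT)

# Illustrative numbers (kcal/mol): a conformation 5 below threshold.
dv = amd_boost(v_dih=0.0, e_thresh=5.0, alpha=4.0)
print(dv, reweight_factor(dv))
```

    Because non-rotatable and improper dihedrals sit in stiff minima, small deviations produce large V excursions and hence large, highly variable ΔV; excluding them caps the spread of exp(ΔV/kT) weights, which is the precision gain RaMD targets.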

  7. A "Skylight" Simulator for HWIL Simulation of Hyperspectral Remote Sensing.

    PubMed

    Zhao, Huijie; Cui, Bolun; Jia, Guorui; Li, Xudong; Zhang, Chao; Zhang, Xinyang

    2017-12-06

    Even though digital simulation technology has been widely used in the last two decades, hardware-in-the-loop (HWIL) simulation is still an indispensable method for spectral uncertainty research of ground targets. However, previous facilities mainly focus on the simulation of panchromatic imaging. Therefore, neither the spectral nor the spatial performance is enough for hyperspectral simulation. To improve the accuracy of illumination simulation, a new dome-like skylight simulator is designed and developed to fit the spatial distribution and spectral characteristics of a real skylight for the wavelength from 350 nm to 2500 nm. The simulator's performance was tested using a spectroradiometer with different accessories. The spatial uniformity is greater than 0.91. The spectral mismatch decreases to 1/243 of the spectral mismatch of the Imagery Simulation Facility (ISF). The spatial distribution of radiance can be adjusted, and the accuracy of the adjustment is greater than 0.895. The ability of the skylight simulator is also demonstrated by comparing radiometric quantities measured in the skylight simulator with those in a real skylight in Beijing.

  8. The VIIRS Ocean Data Simulator Enhancements and Results

    NASA Technical Reports Server (NTRS)

    Robinson, Wayne D.; Patt, Frederick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.

    2011-01-01

    The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.

  9. The VIIRS ocean data simulator enhancements and results

    NASA Astrophysics Data System (ADS)

    Robinson, Wayne D.; Patt, Frederick S.; Franz, Bryan A.; Turpie, Kevin R.; McClain, Charles R.

    2011-10-01

    The VIIRS Ocean Science Team (VOST) has been developing an Ocean Data Simulator to create realistic VIIRS SDR datasets based on MODIS water-leaving radiances. The simulator is helping to assess instrument performance and scientific processing algorithms. Several changes were made in the last two years to complete the simulator and broaden its usefulness. The simulator is now fully functional and includes all sensor characteristics measured during prelaunch testing, including electronic and optical crosstalk influences, polarization sensitivity, and relative spectral response. Also included is the simulation of cloud and land radiances to make more realistic data sets and to understand their important influence on nearby ocean color data. The atmospheric tables used in the processing, including aerosol and Rayleigh reflectance coefficients, have been modeled using VIIRS relative spectral responses. The capabilities of the simulator were expanded to work in an unaggregated sample mode and to produce scans with additional samples beyond the standard scan. These features improve the capability to realistically add artifacts which act upon individual instrument samples prior to aggregation and which may originate from beyond the actual scan boundaries. The simulator was expanded to simulate all 16 M-bands and the EDR processing was improved to use these bands to make an SST product. The simulator is being used to generate global VIIRS data from and in parallel with the MODIS Aqua data stream. Studies have been conducted using the simulator to investigate the impact of instrument artifacts. This paper discusses the simulator improvements and results from the artifact impact studies.

  10. Accuracy of a hexapod parallel robot kinematics based external fixator.

    PubMed

    Faschingbauer, Maximilian; Heuer, Hinrich J D; Seide, Klaus; Wendlandt, Robert; Münch, Matthias; Jürgens, Christian; Kirchner, Rainer

    2015-12-01

    Different hexapod-based external fixators are increasingly used to treat bone deformities and fractures. Accuracy has not been measured sufficiently for all models. An infrared tracking system was applied to measure positioning maneuvers with a motorized Precision Hexapod® fixator, detecting three-dimensional positions of reflective balls mounted in an L-arrangement on the fixator, simulating bone directions. By omitting one dimension of the coordinates, projections were simulated as if measured on standard radiographs. Accuracy was calculated as the absolute difference between targeted and measured positioning values. In 149 positioning maneuvers, the median values for positioning accuracy of translations and rotations (torsions/angulations) were below 0.3 mm and 0.2° with quartiles ranging from -0.5 mm to 0.5 mm and -1.0° to 0.9°, respectively. The experimental setup was found to be precise and reliable. It can be applied to compare different hexapod-based fixators. Accuracy of the investigated hexapod system was high. Copyright © 2014 John Wiley & Sons, Ltd.

  11. Titan's organic chemistry: Results of simulation experiments

    NASA Technical Reports Server (NTRS)

    Sagan, Carl; Thompson, W. Reid; Khare, Bishun N.

    1992-01-01

    Recent low pressure continuous low plasma discharge simulations of the auroral electron driven organic chemistry in Titan's mesosphere are reviewed. These simulations yielded results in good accord with Voyager observations of gas phase organic species. Optical constants of the brownish solid tholins produced in similar experiments are in good accord with Voyager observations of the Titan haze. Titan tholins are rich in prebiotic organic constituents; the Huygens entry probe may shed light on some of the processes that led to the origin of life on Earth.

  12. The effect of using cow genomic information on accuracy and bias of genomic breeding values in a simulated Holstein dairy cattle population.

    PubMed

    Dehnavi, E; Mahyari, S Ansari; Schenkel, F S; Sargolzaei, M

    2018-06-01

    Using cow data in the training population is attractive as a way to mitigate bias due to highly selected training bulls and to implement genomic selection for countries with no or limited proven bull data. However, one potential issue with cow data is bias due to preferential treatment. The objectives of this study were to (1) investigate the effect of including cow genotype and phenotype data in the training population on the accuracy and bias of genomic predictions and (2) assess the effect of preferential treatment for different proportions of elite cows. First, a 4-pathway Holstein dairy cattle population was simulated for 2 traits with low (0.05) and moderate (0.3) heritability. Then different numbers of cows (0, 2,500, 5,000, 10,000, 15,000, or 20,000) were randomly selected and added to the training group composed of different numbers of top bulls (0, 2,500, 5,000, 10,000, or 15,000). Reliability levels of de-regressed estimated breeding values for training cows and bulls were 30 and 75% for traits with low heritability and 60 and 90% for traits with moderate heritability, respectively. Preferential treatment was simulated by introducing upward bias equal to 35% of phenotypic variance to 5, 10, and 20% of elite bull dams in each scenario. Two different validation data sets were considered: (1) all animals in the last generation of both elite and commercial tiers (n = 42,000) and (2) only animals in the last generation of the elite tier (n = 12,000). Adding cow data to the training population led to an increase in accuracy (r) and a decrease in bias of genomic predictions in all considered scenarios without preferential treatment. The gain in r was higher for the trait with low heritability (from 0.004 to 0.166 r points) than for the moderately heritable trait (from 0.004 to 0.116 r points). The gain in accuracy in scenarios with a lower number of training bulls was relatively higher (from 0.093 to 0.166 r points) than with a higher number of training

  13. The accuracy of semi-numerical reionization models in comparison with radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Hutter, Anne

    2018-03-01

    We have developed a modular semi-numerical code that computes the time- and spatially dependent ionization of neutral hydrogen (H I), neutral helium (He I) and singly ionized helium (He II) in the intergalactic medium (IGM). The model accounts for recombinations and provides different descriptions for the photoionization rate that are used to calculate the residual H I fraction in ionized regions. We compare different semi-numerical reionization schemes to a radiative transfer (RT) simulation. We use the RT simulation as a benchmark, and find that the semi-numerical approaches produce similar H II and He II morphologies and power spectra of the H I 21cm signal throughout reionization. As we do not track partial ionization of He II, the extent of the doubly ionized helium (He III) regions is consistently smaller. In contrast to previous comparison projects, the ionizing emissivity in our semi-numerical scheme is not adjusted to reproduce the redshift evolution of the RT simulation, but is directly derived from the RT simulation spectra. Among schemes that identify the ionized regions by the ratio of the number of ionization and absorption events on different spatial smoothing scales, we find that those that mark the entire sphere as ionized when the ionization criterion is fulfilled result in significantly accelerated reionization compared to the RT simulation. Conversely, those that flag only the central cell as ionized yield a very similar but slightly delayed redshift evolution of reionization, with up to 20% of ionizing photons lost. Despite the overall agreement with the RT simulation, our results suggest that constraints on ionizing-emissivity-sensitive parameters from semi-numerical galaxy formation-reionization models are subject to photon nonconservation.
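    The two flagging schemes compared in the abstract can be illustrated with a toy 1D analogue (purely illustrative, not the author's code): when the smoothed ionizing-photon count exceeds the smoothed absorption count on some scale R, either the whole smoothing sphere or only the central cell is marked ionized. The scales, source strength, and box size below are invented.

```python
import numpy as np

def ionized_map(n_ion, n_abs, scales, flag_sphere):
    """Excursion-set-style ionization flagging on a 1D grid."""
    ionized = np.zeros(len(n_ion), dtype=bool)
    for r in sorted(scales, reverse=True):          # largest scale first
        kernel = np.ones(2 * r + 1) / (2 * r + 1)
        s_ion = np.convolve(n_ion, kernel, mode="same")
        s_abs = np.convolve(n_abs, kernel, mode="same")
        for i in np.where(s_ion >= s_abs)[0]:       # criterion fulfilled at i
            if flag_sphere:
                ionized[max(0, i - r):i + r + 1] = True   # whole sphere
            else:
                ionized[i] = True                         # central cell only
    return ionized

# A single bright source at the centre of an otherwise absorbing box:
n_ion = np.zeros(101); n_ion[50] = 30.0
n_abs = np.ones(101)
sphere = ionized_map(n_ion, n_abs, scales=[2, 5, 10], flag_sphere=True)
centre = ionized_map(n_ion, n_abs, scales=[2, 5, 10], flag_sphere=False)
print(sphere.sum(), centre.sum())  # sphere-flagging ionizes more cells
```

    Sphere-flagging marks every cell within R of a point that satisfies the criterion, so it ionizes a strictly larger region for the same emissivity, consistent with the accelerated reionization reported above.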

  14. Impact of Glucose Measurement Processing Delays on Clinical Accuracy and Relevance

    PubMed Central

    Jangam, Sujit R.; Hayter, Gary; Dunn, Timothy C.

    2013-01-01

    Background In a hospital setting, glucose is often measured from venous blood in the clinical laboratory. However, laboratory glucose measurements are typically not available in real time. In practice, turn-around times for laboratory measurements can be minutes to hours. This analysis assesses the impact of turn-around time on the effective clinical accuracy of laboratory measurements. Methods Data obtained from an earlier study with 58 subjects with type 1 diabetes mellitus (T1DM) were used for this analysis. In the study, glucose measurements using a YSI glucose analyzer were obtained from venous blood samples every 15 min while the subjects were at the health care facility. To simulate delayed laboratory results, each YSI glucose value from a subject was paired with one from a later time point (from the same subject) separated by 15, 30, 45, and 60 min. To assess the clinical accuracy of a delayed YSI result relative to a real-time result, the percentage of YSI pairs that meet the International Organization for Standardization (ISO) 15197:2003(E) standard for glucose measurement accuracy (±15 mg/dl for blood glucose < 75 mg/dl, ±20% for blood glucose ≥ 75 mg/dl) was calculated. Results It was observed that delays of 15 min or more reduce clinical accuracy below the ISO 15197:2003(E) recommendation of 95%. The accuracy was less than 65% for delays of 60 min. Conclusion This analysis suggests that processing delays in glucose measurements reduce the clinical relevance of results in patients with T1DM and may similarly degrade the clinical value of measurements in other patient populations. PMID:23759399
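    The agreement criterion applied to each (real-time, delayed) pair is the ISO 15197:2003(E) limit quoted above: ±15 mg/dl below 75 mg/dl, otherwise ±20%. A short sketch of that calculation; the sample pairs are invented for illustration.

```python
def iso_agree(reference, delayed):
    """True if `delayed` falls within the ISO 15197:2003(E) limits of
    `reference` (mg/dl): +/-15 mg/dl below 75 mg/dl, otherwise +/-20%."""
    if reference < 75:
        return abs(delayed - reference) <= 15
    return abs(delayed - reference) <= 0.20 * reference

def pct_within_iso(pairs):
    """Percentage of (real-time, delayed) pairs meeting the ISO limits."""
    hits = sum(iso_agree(ref, d) for ref, d in pairs)
    return 100.0 * hits / len(pairs)

pairs = [(60, 70), (60, 80), (100, 115), (100, 130), (200, 250)]
print(f"{pct_within_iso(pairs):.0f}% of pairs meet the ISO limits")
```

    In the study this percentage was computed per delay interval (15, 30, 45, 60 min); the 95% recommendation was already missed at 15 min of delay.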

  15. Accuracy of Binary Black Hole waveforms for Advanced LIGO searches

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Chu, Tony; Fong, Heather; Brown, Duncan; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela

    2015-04-01

    Coalescing binaries of compact objects are flagship sources for the first direct detection of gravitational waves with the LIGO-Virgo observatories. Matched-filtering-based detection searches aimed at binaries of black holes will use aligned-spin waveforms as filters, and their efficiency hinges on the accuracy of the underlying waveform models. A number of gravitational waveform models are available in the literature, e.g. the Effective-One-Body, Phenomenological, and traditional post-Newtonian ones. While Numerical Relativity (NR) simulations provide the most accurate modeling of gravitational radiation from compact binaries, their computational cost limits their application in large-scale searches. In this talk we assess the accuracy of waveform models in two regions of parameter space which have only been explored cursorily in the past: the high mass-ratio regime as well as the comparable mass-ratio + high-spin regime. Using the SpEC code, six q = 7 simulations with aligned spins and lasting 60 orbits, and tens of q ∈ [1,3] simulations with high black hole spins were performed. We use them to study the accuracy and intrinsic parameter biases of different waveform families, and assess their viability for Advanced LIGO searches.

  16. EFFECT OF RADIATION DOSE LEVEL ON ACCURACY AND PRECISION OF MANUAL SIZE MEASUREMENTS IN CHEST TOMOSYNTHESIS EVALUATED USING SIMULATED PULMONARY NODULES.

    PubMed

    Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus

    2016-06-01

    The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease in the measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in clinical practice. © The Author 2016. Published by Oxford University Press.

  17. First results of coupled IPS/NIMROD/GENRAY simulations

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas; Kruger, S. E.; Held, E. D.; Harvey, R. W.; Elwasif, W. R.; Schnack, D. D.

    2010-11-01

    The Integrated Plasma Simulator (IPS) framework, developed by the SWIM Project Team, facilitates self-consistent simulations of complicated plasma behavior via the coupling of various codes modeling different spatial/temporal scales in the plasma. Here, we apply this capability to investigate the stabilization of tearing modes by ECCD. Under IPS control, the NIMROD code (MHD) evolves fluid equations to model bulk plasma behavior, while the GENRAY code (RF) calculates the self-consistent propagation and deposition of RF power in the resulting plasma profiles. GENRAY data is then used to construct moments of the quasilinear diffusion tensor (induced by the RF) which influence the dynamics of momentum/energy evolution in NIMROD's equations. We present initial results from these coupled simulations and demonstrate that they correctly capture the physics of magnetic island stabilization [Jenkins et al, PoP 17, 012502 (2010)] in the low-beta limit. We also discuss the process of code verification in these simulations, demonstrating good agreement between NIMROD and GENRAY predictions for the flux-surface-averaged, RF-induced currents. An overview of ongoing model development (synthetic diagnostics/plasma control systems; neoclassical effects; etc.) is also presented. Funded by US DoE.

  18. Low-cost autonomous orbit control about Mars: Initial simulation results

    NASA Astrophysics Data System (ADS)

    Dawson, S. D.; Early, L. W.; Potterveld, C. W.; Königsmann, H. J.

    1999-11-01

    Interest in studying the possibility of extraterrestrial life has led to the re-emergence of the Red Planet as a major target of planetary exploration. Currently proposed missions in the post-2000 period are routinely calling for rendezvous with ascent craft, long-term orbiting of, and sample-return from Mars. Such missions would benefit greatly from autonomous orbit control as a means to reduce operations costs and enable contact with Mars ground stations out of view of the Earth. This paper presents results from initial simulations of autonomously controlled orbits around Mars, and points out possible uses of the technology and areas of routine Mars operations where such cost-conscious and robust autonomy could prove most effective. These simulations have validated the approach and control philosophies used in the development of this autonomous orbit controller. Future work will refine the controller, accounting for systematic and random errors in the navigation of the spacecraft from the sensor suite, and will produce prototype flight code for inclusion on future missions. A modified version of Microcosm's commercially available High Precision Orbit Propagator (HPOP) was used in the preparation of these results due to its high accuracy and speed of operation. Control laws were developed to allow an autonomously controlled spacecraft to continuously control to a pre-defined orbit about Mars with near-optimal propellant usage. The control laws were implemented as an adjunct to HPOP. The GSFC-produced 50 × 50 field model of the Martian gravitational potential was used in all simulations. The Martian atmospheric drag was modeled using an exponentially decaying atmosphere based on data from the Mars-GRAM NASA Ames model. It is hoped that the simple atmosphere model that was implemented can be significantly improved in the future so as to approach the fidelity of the Mars-GRAM model in its predictions of atmospheric density at orbital altitudes. Such additional work

  19. Medical Simulation Practices 2010 Survey Results

    NASA Technical Reports Server (NTRS)

    McCrindle, Jeffrey J.

    2011-01-01

    Medical Simulation Centers are an essential component of our learning infrastructure to prepare doctors and nurses for their careers. Unlike the military and aerospace simulation industry, very little has been published regarding the best practices currently in use within medical simulation centers. This survey attempts to provide insight into the current simulation practices at medical schools, hospitals, university nursing programs and community college nursing programs. Students within the MBA program at Saint Joseph's University conducted a survey of medical simulation practices during the summer 2010 semester. A total of 115 institutions responded to the survey. The survey results discuss the overall effectiveness of current simulation centers as well as the tools and techniques used to conduct the simulation activity.

  20. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding

    PubMed Central

    2013-01-01

    Background In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. Results The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. Conclusions The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. 
Methods 5 and 7 were the
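    The indirect estimator described in the Background can be checked numerically: divide the predictive ability cor(prediction, phenotype) by the square root of heritability and compare with the true accuracy cor(prediction, breeding value), which is only observable in simulation. The simple additive model and parameter values below are illustrative and do not reproduce the paper's Methods 1 to 7.

```python
import numpy as np

rng = np.random.default_rng(1)
n, h2 = 5000, 0.3
g = rng.normal(size=n)                        # true breeding values, variance 1
e = rng.normal(scale=np.sqrt((1 - h2) / h2), size=n)
y = g + e                                     # phenotype with heritability h2
g_hat = g + rng.normal(scale=0.5, size=n)     # an imperfect genomic prediction

predictive_ability = np.corrcoef(g_hat, y)[0, 1]     # cor(prediction, phenotype)
estimated_accuracy = predictive_ability / np.sqrt(h2)
true_accuracy = np.corrcoef(g_hat, g)[0, 1]          # unknowable with real data
print(estimated_accuracy, true_accuracy)             # the two agree closely here
```

    With independent noise terms the two quantities coincide up to sampling error, which is the rationale for the division by sqrt(heritability); a biased heritability estimate propagates directly into the accuracy estimate, matching the finding above that the estimated genetic variance dominates the variation.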

  1. Multi-grid finite element method used for enhancing the reconstruction accuracy in Cerenkov luminescence tomography

    NASA Astrophysics Data System (ADS)

    Guo, Hongbo; He, Xiaowei; Liu, Muhan; Zhang, Zeyu; Hu, Zhenhua; Tian, Jie

    2017-03-01

    Cerenkov luminescence tomography (CLT), as a promising optical molecular imaging modality, can be applied to cancer diagnosis and therapy. Most research on CLT reconstruction is based on the finite element method (FEM) framework. However, the quality of the FEM mesh grid is still a vital factor restricting the accuracy of the CLT reconstruction result. In this paper, we proposed a multi-grid finite element method framework, which was able to improve the accuracy of reconstruction. Meanwhile, the multilevel scheme adaptive algebraic reconstruction technique (MLS-AART), based on a modified iterative algorithm, was applied to improve the reconstruction accuracy. In numerical simulation experiments, the feasibility of our proposed method was evaluated. Results showed that the multi-grid strategy could obtain 3D spatial information of the Cerenkov source more accurately compared with the traditional single-grid FEM.

  2. Comparison of Numerical Simulation Results and Experimental Results for the Amirkabir Plasma Focus Facility

    NASA Astrophysics Data System (ADS)

    Goudarzi, Shervin; Amrollahi, R.; Niknam Sharak, M.

    2014-06-01

    In this paper the results of the numerical simulation for the Amirkabir Mather-type Plasma Focus Facility (16 kV, 36 μF and 115 nH) in several experiments with argon as the working gas at different working conditions (different discharge voltages and gas pressures) are presented and compared with the experimental results. Two different models have been used for the simulation: the five-phase model of Lee and the lumped parameter model of Gonzalez. It is seen that the results (optimum pressures and current signals) of the Lee model at different working conditions show better agreement with the experimental values than those of the lumped parameter model.

  3. Data-driven train set crash dynamics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Zhao; Zhu, Yunrui; Nie, Yinyu; Guo, Shihui; Liu, Fengjia; Chang, Jian; Zhang, Jianjun

    2017-02-01

    Traditional finite element (FE) methods are computationally expensive for simulating train crashes. High computational cost limits their direct application in investigating the dynamic behaviour of an entire train set for crashworthiness design and structural optimisation. On the contrary, multi-body modelling is widely used because of its low computational cost, at the expense of accuracy. In this study, a data-driven train crash modelling method is proposed to improve the performance of a multi-body dynamics simulation of a train set crash without increasing the computational burden. This is achieved by the parallel random forest algorithm, a machine learning approach that extracts useful patterns from force-displacement curves and predicts a force-displacement relation in a given collision condition from a collection of offline FE simulation data on various collision conditions, namely different crash velocities in our analysis. Using the FE simulation results as a benchmark, we compared our method with traditional multi-body modelling methods and the result shows that our data-driven method improves the accuracy over traditional multi-body models in train crash simulation and runs at the same level of efficiency.
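    The surrogate-modelling idea can be sketched as follows: train a random forest on offline "FE" force-displacement data at several crash velocities, then predict the curve at an unseen velocity. This is an illustrative stand-in, not the paper's parallel implementation; the analytic `fe_curve` below replaces real FE output, and scikit-learn's RandomForestRegressor replaces the parallel random forest.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fe_curve(v, disp):
    """Toy stand-in for an FE force-displacement result at crash speed v."""
    return v * disp * np.exp(-disp / (0.1 * v + 0.5))

# Offline "simulation database": curves at four collision velocities.
disp = np.linspace(0.0, 1.0, 50)
X, y = [], []
for v in (5.0, 10.0, 15.0, 20.0):
    for d, f in zip(disp, fe_curve(v, disp)):
        X.append([v, d]); y.append(f)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Predict the force-displacement relation at an unseen velocity.
v_new = 12.0
pred = model.predict(np.column_stack([np.full_like(disp, v_new), disp]))
rmse = np.sqrt(np.mean((pred - fe_curve(v_new, disp)) ** 2))
print(f"RMSE at unseen velocity: {rmse:.3f}")
```

    Once trained, the forest replaces the per-contact FE evaluation inside the multi-body solver at lookup cost, which is how the approach keeps the multi-body efficiency while recovering FE-level force laws.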

  4. Reconstructing the ideal results of a perturbed analog quantum simulator

    NASA Astrophysics Data System (ADS)

    Schwenk, Iris; Reiner, Jan-Michael; Zanker, Sebastian; Tian, Lin; Leppäkangas, Juha; Marthaler, Michael

    2018-04-01

    Well-controlled quantum systems can potentially be used as quantum simulators. However, a quantum simulator is inevitably perturbed by coupling to additional degrees of freedom. This constitutes a major roadblock to useful quantum simulations. So far there are only limited means to understand the effect of perturbation on the results of quantum simulation. Here we present a method which, in certain circumstances, allows for the reconstruction of the ideal result from measurements on a perturbed quantum simulator. We consider extracting the value of the correlator ⟨Ôi(t)Ôj(0)⟩ from the simulated system, where Ôi are the operators which couple the system to its environment. The ideal correlator can be straightforwardly reconstructed by using statistical knowledge of the environment, if any n-time correlator of operators Ôi of the ideal system can be written as products of two-time correlators. We give an approach to verify the validity of this assumption experimentally by additional measurements on the perturbed quantum simulator. The proposed method can allow for reliable quantum simulations with systems subjected to environmental noise without adding an overhead to the quantum system.

  5. Impact of glucose measurement processing delays on clinical accuracy and relevance.

    PubMed

    Jangam, Sujit R; Hayter, Gary; Dunn, Timothy C

    2013-05-01

    In a hospital setting, glucose is often measured from venous blood in the clinical laboratory. However, laboratory glucose measurements are typically not available in real time. In practice, turn-around times for laboratory measurements can be minutes to hours. This analysis assesses the impact of turn-around time on the effective clinical accuracy of laboratory measurements. Data obtained from an earlier study with 58 subjects with type 1 diabetes mellitus (T1DM) were used for this analysis. In the study, glucose measurements using a YSI glucose analyzer were obtained from venous blood samples every 15 min while the subjects were at the health care facility. To simulate delayed laboratory results, each YSI glucose value from a subject was paired with one from a later time point (from the same subject) separated by 15, 30, 45, and 60 min. To assess the clinical accuracy of a delayed YSI result relative to a real-time result, the percentage of YSI pairs that meet the International Organization for Standardization (ISO) 15197:2003(E) standard for glucose measurement accuracy (±15 mg/dl for blood glucose < 75 mg/dl, ±20% for blood glucose ≥ 75 mg/dl) was calculated. It was observed that delays of 15 min or more reduce clinical accuracy below the ISO 15197:2003(E) recommendation of 95%. The accuracy was less than 65% for delays of 60 min. This analysis suggests that processing delays in glucose measurements reduce the clinical relevance of results in patients with T1DM and may similarly degrade the clinical value of measurements in other patient populations. © 2013 Diabetes Technology Society.

  6. Computer simulation and discussion of high-accuracy laser direction finding in real time

    NASA Astrophysics Data System (ADS)

    Chen, Wenyi; Chen, Yongzhi

    1997-12-01

    On condition that a CCD is used as the sensor, there are at least five methods that can be used to realize high-accuracy laser direction finding: the image matching method, the radiation center method, the geometric center method, the center of rectangle envelope method, and the center of maximum run length method. The first three achieve the highest accuracy, but they are too complicated to realize in real time and are very expensive. The other two also achieve high accuracy and are not difficult to implement in real time. Using a single-chip microcomputer and an ordinary CCD camera, a very simple system can obtain the position information of a laser beam at a data rate of 50 measurements per second.
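    The radiation center method mentioned above is commonly implemented as an intensity-weighted centroid of the CCD frame. A minimal sketch follows; the frame and threshold are hypothetical, and the paper's exact formulation may differ:

```python
def radiation_center(frame, threshold=0):
    """Intensity-weighted centroid ('radiation center') of a 2-D
    frame of pixel intensities; pixels at or below `threshold`
    are treated as background."""
    sx = sy = total = 0.0
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value > threshold:
                sx += x * value
                sy += y * value
                total += value
    return sx / total, sy / total

# Tiny synthetic frame with a bright laser spot around (x=2, y=1):
frame = [
    [0, 0, 0, 0, 0],
    [0, 1, 8, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
print(radiation_center(frame))
```

    The geometric center method is the same computation with all above-threshold pixels weighted equally, which is cheaper but more sensitive to spot asymmetry.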

  7. A “Skylight” Simulator for HWIL Simulation of Hyperspectral Remote Sensing

    PubMed Central

    Zhao, Huijie; Cui, Bolun; Li, Xudong; Zhang, Chao; Zhang, Xinyang

    2017-01-01

    Even though digital simulation technology has been widely used in the last two decades, hardware-in-the-loop (HWIL) simulation is still an indispensable method for spectral uncertainty research of ground targets. However, previous facilities mainly focus on the simulation of panchromatic imaging. Therefore, neither the spectral nor the spatial performance is enough for hyperspectral simulation. To improve the accuracy of illumination simulation, a new dome-like skylight simulator is designed and developed to fit the spatial distribution and spectral characteristics of a real skylight for the wavelength from 350 nm to 2500 nm. The simulator’s performance was tested using a spectroradiometer with different accessories. The spatial uniformity is greater than 0.91. The spectral mismatch decreases to 1/243 of the spectral mismatch of the Imagery Simulation Facility (ISF). The spatial distribution of radiance can be adjusted, and the accuracy of the adjustment is greater than 0.895. The ability of the skylight simulator is also demonstrated by comparing radiometric quantities measured in the skylight simulator with those in a real skylight in Beijing. PMID:29211004

  8. Results of 17 Independent Geopositional Accuracy Assessments of Earth Satellite Corporation's GeoCover Landsat Thematic Mapper Imagery. Geopositional Accuracy Validation of Orthorectified Landsat TM Imagery: Northeast Asia

    NASA Technical Reports Server (NTRS)

    Smith, Charles M.

    2003-01-01

    This report provides results of an independent assessment of the geopositional accuracy of the Earth Satellite (EarthSat) Corporation's GeoCover, Orthorectified Landsat Thematic Mapper (TM) imagery over Northeast Asia. This imagery was purchased through NASA's Earth Science Enterprise (ESE) Scientific Data Purchase (SDP) program.

  9. Prediction accuracy of direct and indirect approaches, and their relationships with prediction ability of calibration models.

    PubMed

    Belay, T K; Dagnachew, B S; Boison, S A; Ådnøy, T

    2018-03-28

    true values from the simulations. The results showed that accuracies of EBV prediction were higher in the DP than in the IP approach. The reverse was true for accuracy of phenotypic prediction when using β_p but not when using β_g and β_r, where accuracy of phenotypic prediction in the DP was slightly higher than in the IP approach. Within the DP approach, accuracies of EBV when using β_g were higher than when using β_p only at the low genetic correlation scenario. However, we found no differences in EBV prediction accuracy between β_p and β_g in the IP approach. Accuracy of the calibration models increased with an increase in genetic and residual correlations between the traits. Performance of both approaches increased with an increase in accuracy of the calibration models. In conclusion, the DP approach is a good strategy for EBV prediction but not for phenotypic prediction, where the classical PLS regression-based equations or the IP approach provided better results. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).

  10. Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models

    NASA Astrophysics Data System (ADS)

    Zang, Tianwu

    Predicting the three-dimensional structure of a protein has been a major interest in modern computational biology. While many successful methods can generate models within 3–5 Å root-mean-square deviation (RMSD) of the solution, progress in refining these models has been quite slow. It is therefore urgently necessary to develop effective methods to bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy biasing methods, the Structure-Based Model (SBM) and the Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. Together, these methods significantly refine low-quality models without any knowledge of the solution. Their effectiveness is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in the MD simulation of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, the refinement test of two CASP10 targets using the PCST-EBM method indicates that EBM may bring the initial model to even higher-quality levels. Furthermore, a multi-round refinement protocol of PCST-SBM improves the model quality of a protein to a level sufficiently high for molecular replacement in X-ray crystallography. Our results confirm the crucial role of enhanced sampling in protein structure prediction and demonstrate that considerable improvement of low-accuracy structures is still achievable with current force fields.

  11. Evaluation of diagnostic accuracy in detecting ordered symptom statuses without a gold standard

    PubMed Central

    Wang, Zheyu; Zhou, Xiao-Hua; Wang, Miqu

    2011-01-01

    Our research is motivated by 2 methodological problems in assessing diagnostic accuracy of traditional Chinese medicine (TCM) doctors in detecting a particular symptom whose true status has an ordinal scale and is unknown—imperfect gold standard bias and ordinal scale symptom status. In this paper, we proposed a nonparametric maximum likelihood method for estimating and comparing the accuracy of different doctors in detecting a particular symptom without a gold standard when the true symptom status has multiple ordered classes. In addition, we extended the concept of the area under the receiver operating characteristic curve to a hyper-dimensional overall accuracy for diagnostic accuracy and alternative graphs for displaying a visual result. The simulation studies showed that the proposed method had good performance in terms of bias and mean squared error. Finally, we applied our method to our motivating example on assessing the diagnostic abilities of 5 TCM doctors in detecting symptoms related to Chills disease. PMID:21209155

  12. Testing the accuracy of redshift-space group-finding algorithms

    NASA Astrophysics Data System (ADS)

    Frederic, James J.

    1995-04-01

    Using simulated redshift surveys generated from a high-resolution N-body cosmological structure simulation, we study algorithms used to identify groups of galaxies in redshift space. Two algorithms are investigated; both are friends-of-friends schemes with variable linking lengths in the radial and transverse dimensions. The chief difference between the algorithms is in the redshift linking length. The algorithm proposed by Huchra & Geller (1982) uses a generous linking length designed to find 'fingers of god,' while that of Nolthenius & White (1987) uses a smaller linking length to minimize contamination by projection. We find that neither of the algorithms studied is intrinsically superior to the other; rather, the ideal algorithm as well as the ideal algorithm parameters depend on the purpose for which groups are to be studied. The Huchra & Geller algorithm misses few real groups, at the cost of including some spurious groups and members, while the Nolthenius & White algorithm misses high velocity dispersion groups and members but is less likely to include interlopers in its group assignments. Adjusting the parameters of either algorithm results in a trade-off between group accuracy and completeness. In a companion paper we investigate the accuracy of virial mass estimates and clustering properties of groups identified using these algorithms.
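    The friends-of-friends scheme common to both algorithms can be sketched as a union-find over pairs linked by separate transverse and line-of-sight lengths. This is an illustrative O(N²) toy, not either paper's production code; the scaling of linking lengths with magnitude and distance is omitted:

```python
def fof_groups(points, d_perp, d_los):
    """Friends-of-friends with separate transverse and line-of-sight
    linking lengths. `points` are (x, y, z) with z the radial
    (redshift) coordinate; x and y span the plane of the sky.
    Returns one group label per point."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Union-find root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        xi, yi, zi = points[i]
        for j in range(i + 1, n):
            xj, yj, zj = points[j]
            perp = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
            if perp <= d_perp and abs(zi - zj) <= d_los:
                parent[find(i)] = find(j)  # link the two friends
    return [find(i) for i in range(n)]

# Two tight pairs plus an isolated point:
pts = [(0, 0, 0), (0.5, 0, 1), (10, 10, 0), (10, 10.4, 0.5), (50, 50, 50)]
print(fof_groups(pts, d_perp=1.0, d_los=2.0))
```

    A generous `d_los` corresponds to the Huchra & Geller choice (catching fingers of god at the cost of interlopers); a small `d_los` corresponds to Nolthenius & White.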

  13. Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods

    NASA Astrophysics Data System (ADS)

    Koreň, Milan; Mokroš, Martin; Bucha, Tomáš

    2017-12-01

    This study compares the accuracies of diameter at breast height (DBH) estimations by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in the single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method only in single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
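    The abstract does not give the fitting equations, but a standard algebraic (Kåsa) least-squares circle fit illustrates how a stem diameter can be estimated from scanned points; the arc below mimics the near side of a stem as seen from a single scan position (all values hypothetical):

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with
    partial pivoting."""
    M = [row[:] + [v] for row, v in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for k in range(col, 4):
                M[r][k] -= f * M[col][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def kasa_circle_fit(xs, ys):
    """Algebraic least-squares circle fit (Kasa method): solve
    x^2 + y^2 = 2a*x + 2b*y + c for (a, b, c); the center is
    (a, b) and the radius is sqrt(c + a^2 + b^2)."""
    n = len(xs)
    z = [x * x + y * y for x, y in zip(xs, ys)]
    sxy = sum(x * y for x, y in zip(xs, ys))
    A = [[2 * sum(x * x for x in xs), 2 * sxy, sum(xs)],
         [2 * sxy, 2 * sum(y * y for y in ys), sum(ys)],
         [2 * sum(xs), 2 * sum(ys), n]]
    rhs = [sum(zi * x for zi, x in zip(z, xs)),
           sum(zi * y for zi, y in zip(z, ys)),
           sum(z)]
    a, b, c = solve3(A, rhs)
    return (a, b), math.sqrt(c + a * a + b * b)

# Points on the near half of a 0.30 m diameter stem centered at
# (1.0, 2.0), mimicking the arc visible from one scan position:
arc = [math.radians(t) for t in range(0, 181, 15)]
xs = [1.0 + 0.15 * math.cos(t) for t in arc]
ys = [2.0 + 0.15 * math.sin(t) for t in arc]
center, radius = kasa_circle_fit(xs, ys)
print(center, 2 * radius)  # DBH is twice the fitted radius
```

    On noise-free points the fit is exact; with real point clouds, occlusion and noise on the half-visible arc drive the under- and overestimation patterns the study reports.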

  14. Accuracy of virtual surgical planning in two-jaw orthognathic surgery: comparison of planned and actual results.

    PubMed

    Zhang, Nan; Liu, Shuguang; Hu, Zhiai; Hu, Jing; Zhu, Songsong; Li, Yunfeng

    2016-08-01

    This study aims to evaluate the accuracy of virtual surgical planning in two-jaw orthognathic surgery via quantitative comparison of preoperative planned and postoperative actual skull models. Thirty consecutive patients who required two-jaw orthognathic surgery were included. A composite skull model was reconstructed by using Digital Imaging and Communications in Medicine (DICOM) data from spiral computed tomography (CT) and STL (stereolithography) data from surface scanning of the dental arch. LeFort I osteotomy of the maxilla and bilateral sagittal split ramus osteotomy of the mandible were simulated by using Dolphin Imaging 11.7 Premium (Dolphin Imaging and Management Solutions, Chatsworth, CA). Genioplasty was performed, if indicated. The virtual plan was then transferred to the operation room by using three-dimensional (3-D)-printed surgical templates. Linear and angular differences between virtually simulated and postoperative skull models were evaluated. The virtual surgical planning was successfully transferred to actual surgery with the help of 3-D-printed surgical templates. All patients were satisfied with the postoperative facial profile and occlusion. The overall mean linear difference was 0.81 mm (0.71 mm for the maxilla and 0.91 mm for the mandible), and the overall mean angular difference was 0.95 degrees. Virtual surgical planning and 3-D-printed surgical templates facilitated the diagnosis, treatment planning, and accurate repositioning of bony segments in two-jaw orthognathic surgery. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. DoSSiER: Database of scientific simulation and experimental results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenzel, Hans; Yarba, Julia; Genser, Krzystof

    The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in json or xml exchange formats. In this paper, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.

  16. DoSSiER: Database of scientific simulation and experimental results

    DOE PAGES

    Wenzel, Hans; Yarba, Julia; Genser, Krzystof; ...

    2016-08-01

    The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in json or xml exchange formats. In this paper, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.

  17. Sampling Molecular Conformers in Solution with Quantum Mechanical Accuracy at a Nearly Molecular-Mechanics Cost.

    PubMed

    Rosa, Marta; Micciarelli, Marco; Laio, Alessandro; Baroni, Stefano

    2016-09-13

    We introduce a method to evaluate the relative populations of different conformers of molecular species in solution, aiming at quantum mechanical accuracy, while keeping the computational cost at a nearly molecular-mechanics level. This goal is achieved by combining long classical molecular-dynamics simulations to sample the free-energy landscape of the system, advanced clustering techniques to identify the most relevant conformers, and thermodynamic perturbation theory to correct the resulting populations, using quantum-mechanical energies from density functional theory. A quantitative criterion for assessing the accuracy thus achieved is proposed. The resulting methodology is demonstrated in the specific case of cyanin (cyanidin-3-glucoside) in water solution.
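    The thermodynamic perturbation step described above can be sketched in a few lines: classical cluster populations are reweighted by the Boltzmann factor of the QM-minus-MM energy difference. The numbers below are hypothetical, and a real application would average ΔE over structures in each cluster rather than use a single representative:

```python
import math

K_B = 0.0019872041  # Boltzmann constant, kcal/(mol*K)

def reweight_populations(mm_populations, delta_E, T=300.0):
    """Correct classical (MM) conformer populations by thermodynamic
    perturbation: w_i ~ p_i * exp(-dE_i / kT), where dE_i is the
    QM-minus-MM energy (kcal/mol) of cluster i. The corrected
    populations are renormalized to sum to one."""
    beta = 1.0 / (K_B * T)
    weights = [p * math.exp(-beta * dE)
               for p, dE in zip(mm_populations, delta_E)]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical three-conformer example: cluster 2 is stabilized at
# the QM level by 0.5 kcal/mol relative to the force field.
print(reweight_populations([0.6, 0.3, 0.1], [0.0, 0.0, -0.5]))
```

    Because only a handful of cluster representatives need QM energies, the cost stays near the molecular-mechanics level while the populations inherit quantum-mechanical accuracy.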

  18. Simulation Test Of Descent Advisor

    NASA Technical Reports Server (NTRS)

    Davis, Thomas J.; Green, Steven M.

    1991-01-01

    Report describes piloted-simulation test of Descent Advisor (DA), subsystem of larger automation system being developed to assist human air-traffic controllers and pilots. Focuses on results of piloted simulation, in which airline crews executed controller-issued descent advisories along standard curved-path arrival routes. Crews able to achieve arrival-time precision of plus or minus 20 seconds at metering fix. Analysis of errors generated in turns resulted in further enhancements of algorithm to increase accuracies of its predicted trajectories. Evaluations by pilots indicate general support for DA concept and provide specific recommendations for improvement.

  19. Frontotemporal oxyhemoglobin dynamics predict performance accuracy of dance simulation gameplay: temporal characteristics of top-down and bottom-up cortical activities.

    PubMed

    Ono, Yumie; Nomoto, Yasunori; Tanaka, Shohei; Sato, Keisuke; Shimada, Sotaro; Tachibana, Atsumichi; Bronner, Shaw; Noah, J Adam

    2014-01-15

    We utilized the high temporal resolution of functional near-infrared spectroscopy to explore how sensory inputs (visual and rhythmic auditory cues) are processed in the cortical areas of multimodal integration to achieve coordinated motor output during unrestricted dance simulation gameplay. Using an open source clone of the dance simulation video game, Dance Dance Revolution, two cortical regions of interest were selected for study, the middle temporal gyrus (MTG) and the frontopolar cortex (FPC). We hypothesized that activity in the FPC would indicate top-down regulatory mechanisms of motor behavior, while that in the MTG would be sustained due to bottom-up integration of visual and auditory cues throughout the task. We also hypothesized that a correlation would exist between behavioral performance and the temporal patterns of the hemodynamic responses in these regions of interest. Results indicated that greater temporal accuracy of dance steps positively correlated with persistent activation of the MTG and with cumulative suppression of the FPC. When auditory cues were eliminated from the simulation, modifications in cortical responses were found depending on the gameplay performance. In the MTG, high-performance players showed an increase but low-performance players displayed a decrease in the cumulative amount of the oxygenated hemoglobin response in the no-music condition compared to that in the music condition. In the FPC, high-performance players showed relatively small variance in activity regardless of the presence of auditory cues, while low-performance players showed larger differences in activity between the no-music and music conditions. These results suggest that the MTG plays an important role in the successful integration of visual and rhythmic cues and that the FPC may work as top-down control to compensate for insufficient integrative ability of visual and rhythmic cues in the MTG.

  20. Evaluating the Accuracy of Results for Teacher Implemented Trial-Based Functional Analyses.

    PubMed

    Rispoli, Mandy; Ninci, Jennifer; Burke, Mack D; Zaini, Samar; Hatton, Heather; Sanchez, Lisa

    2015-09-01

    Trial-based functional analysis (TBFA) allows for the systematic and experimental assessment of challenging behavior in applied settings. The purpose of this study was to evaluate a professional development package focused on training three Head Start teachers to conduct TBFAs with fidelity during ongoing classroom routines. To assess the accuracy of the TBFA results, the effects of a function-based intervention derived from the TBFA were compared with the effects of a non-function-based intervention. Data were collected on child challenging behavior and appropriate communication. An A-B-A-C-D design was utilized, in which A represented baseline, B and C consisted of either function-based or non-function-based interventions counterbalanced across participants, and D represented teacher implementation of the most effective intervention. Results showed that the function-based intervention produced greater decreases in challenging behavior and greater increases in appropriate communication than the non-function-based intervention for all three children. © The Author(s) 2015.

  1. Advanced Thermal Simulator Testing: Thermal Analysis and Test Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David; Reid, Robert; Adams, Mike; Davis, Joe

    2008-01-01

    Work at the NASA Marshall Space Flight Center seeks to develop high fidelity, electrically heated thermal simulators that represent fuel elements in a nuclear reactor design to support non-nuclear testing applicable to the development of a space nuclear power or propulsion system. Comparison between the fuel pins and thermal simulators is made at the outer fuel clad surface, which corresponds to the outer sheath surface in the thermal simulator. The thermal simulators that are currently being tested correspond to a SNAP derivative reactor design that could be applied for Lunar surface power. These simulators are designed to meet the geometric and power requirements of a proposed surface power reactor design, accommodate testing of various axial power profiles, and incorporate imbedded instrumentation. This paper reports the results of thermal simulator analysis and testing in a bare element configuration, which does not incorporate active heat removal, and testing in a water-cooled calorimeter designed to mimic the heat removal that would be experienced in a reactor core.

  2. Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches.

    PubMed

    Çelik, Ufuk; Yurtay, Nilüfer; Koç, Emine Rabia; Tepe, Nermin; Güllüoğlu, Halil; Ertaş, Mustafa

    2015-01-01

    The present study evaluated the diagnostic accuracy of immune system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. These are divided into four types: migraine, tension, cluster, and other primary headaches. With this objective in mind, three different neurologists entered the medical records of 850 patients into our web-based expert system hosted on our project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. AIS are classification algorithms inspired by the biological immune system mechanism, which involves significant and distinct capabilities. These algorithms simulate immune system capabilities such as discrimination, learning, and memorization so that they can be used for classification, optimization, or pattern recognition. According to the results, the accuracy of the classifiers used in this study ranged from 95% to 99%, except for one unsuitable classifier that yielded 71% accuracy.

  3. The Accuracy and Precision of Flow Measurements Using Phase Contrast Techniques

    NASA Astrophysics Data System (ADS)

    Tang, Chao

    Quantitative volume flow rate measurements using the magnetic resonance imaging technique are studied in this dissertation because volume flow rates are of special interest for the blood supply of the human body. The method of quantitative volume flow rate measurement is based on the phase contrast technique, which assumes a linear relationship between the phase and the flow velocity of spins. By measuring the phase shift of nuclear spins and integrating velocity across the lumen of the vessel, we can determine the volume flow rate. The accuracy and precision of volume flow rate measurements obtained using the phase contrast technique are studied by computer simulations and experiments. The factors studied include (1) the partial volume effect due to voxel dimensions and slice thickness relative to the vessel dimensions; (2) vessel angulation relative to the imaging plane; (3) intravoxel phase dispersion; and (4) flow velocity relative to the magnitude of the flow encoding gradient. The partial volume effect is demonstrated to be the major obstacle to obtaining accurate flow measurements for both laminar and plug flow. Laminar flow can be measured more accurately than plug flow under the same conditions. Both the experiment and simulation results for laminar flow show that, to obtain volume flow rate measurements accurate to within 10%, at least 16 voxels are needed to cover the vessel lumen. The accuracy of flow measurements depends strongly on the relative intensity of the signal from stationary tissues. A correction method, based on a small phase shift approximation, is proposed to compensate for the partial volume effect. After the correction, the errors due to the partial volume effect are compensated, allowing more accurate results to be obtained. An automatic program based on the correction method is developed and implemented on a Sun workstation, and applied to the simulation and experiment results.
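    The integration of velocity across the lumen can be sketched as follows: each voxel's phase is mapped to a velocity (here via a hypothetical VENC scaling, v = VENC·φ/π) and the velocities are summed over a lumen mask. The parabolic profile mimics laminar flow; the discrete sum only approximates the analytic flow rate, a mild form of the partial volume effect discussed above:

```python
import math

def flow_rate(phase, venc, voxel_area, mask):
    """Volume flow rate from a phase-contrast image: each voxel's
    velocity is v = venc * phi / pi, and the flow rate is the sum
    of v * voxel_area over voxels inside the vessel lumen."""
    q = 0.0
    for row_p, row_m in zip(phase, mask):
        for phi, inside in zip(row_p, row_m):
            if inside:
                q += venc * phi / math.pi * voxel_area
    return q

# Synthetic laminar (parabolic) profile in a vessel of radius
# R = 8 voxels, mean velocity 10 cm/s, VENC = 100 cm/s (all
# hypothetical values):
R, v_mean, VENC = 8.0, 10.0, 100.0
phase, mask = [], []
for y in range(-8, 9):
    p_row, m_row = [], []
    for x in range(-8, 9):
        r2 = x * x + y * y
        inside = r2 < R * R
        v = 2.0 * v_mean * (1.0 - r2 / (R * R)) if inside else 0.0
        p_row.append(math.pi * v / VENC)  # phase encodes velocity
        m_row.append(inside)
    phase.append(p_row)
    mask.append(m_row)

q = flow_rate(phase, VENC, voxel_area=1.0, mask=mask)
print(q)  # close to the analytic v_mean * pi * R^2 ~ 2010.6
```

    Shrinking the grid so that fewer voxels cover the lumen makes the discrepancy grow quickly, consistent with the 16-voxel rule of thumb reported above.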

  4. Two high accuracy digital integrators for Rogowski current transducers.

    PubMed

    Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua

    2014-01-01

    Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly limited by the analog integrators, which suffer from problems such as poor long-term stability and susceptibility to environmental conditions. Digital integrators are an alternative, but they cannot produce a stable and accurate output because any DC component in the original signal accumulates, leading to output DC drift. Unknown initial conditions can also result in an integral output DC offset. This paper proposes two improved digital integrators to be used in Rogowski current transducers in place of traditional analog integrators for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient are applied to improve the Al-Alaoui integrator, changing its DC response to obtain an ideal frequency response. Owing to this dedicated digital signal processing design, the improved digital integrators perform better than analog integrators. Simulation models are built for verification and comparison. The experiments prove that the designed integrators achieve higher accuracy than analog integrators in steady-state response, transient-state response, and under changing temperature conditions.
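    The attenuation-coefficient idea can be sketched as an Al-Alaoui integrator whose pole is pulled slightly inside the unit circle; the paper's full design also includes the PID feedback controller, which is omitted here, and the sampling rate and coefficient below are illustrative only:

```python
import math

def leaky_al_alaoui(x, dt, alpha=0.9999):
    """Al-Alaoui digital integrator with an attenuation coefficient:
        y[n] = alpha * y[n-1] + (dt/8) * (7*x[n] + x[n-1]).
    With alpha = 1 this is the plain Al-Alaoui integrator, whose
    pole at z = 1 lets any DC component in the input accumulate
    without bound; alpha < 1 turns that pole into a slow leak,
    bounding the DC gain at about dt/(1 - alpha) while leaving the
    passband nearly untouched."""
    out, y, x_prev = [], 0.0, 0.0
    for v in x:
        y = alpha * y + (dt / 8.0) * (7.0 * v + x_prev)
        x_prev = v
        out.append(y)
    return out

# A 50 Hz test tone sampled at 100 kS/s: the integral of cos(w*t)
# is sin(w*t)/w, so the output peak should be near 1/w ~ 3.18e-3.
dt = 1e-5
sig = [math.cos(2 * math.pi * 50 * i * dt) for i in range(2000)]
out = leaky_al_alaoui(sig, dt)
print(max(out))
```

    Choosing alpha trades DC rejection against low-frequency droop: the leak's corner frequency (1 − alpha)/dt must sit well below the lowest current frequency of interest.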

  5. Two high accuracy digital integrators for Rogowski current transducers

    NASA Astrophysics Data System (ADS)

    Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua

    2014-01-01

    Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly limited by the analog integrators, which suffer from problems such as poor long-term stability and susceptibility to environmental conditions. Digital integrators are an alternative, but they cannot produce a stable and accurate output because any DC component in the original signal accumulates, leading to output DC drift. Unknown initial conditions can also result in an integral output DC offset. This paper proposes two improved digital integrators to be used in Rogowski current transducers in place of traditional analog integrators for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient are applied to improve the Al-Alaoui integrator, changing its DC response to obtain an ideal frequency response. Owing to this dedicated digital signal processing design, the improved digital integrators perform better than analog integrators. Simulation models are built for verification and comparison. The experiments prove that the designed integrators achieve higher accuracy than analog integrators in steady-state response, transient-state response, and under changing temperature conditions.

  6. On the convergence and accuracy of the FDTD method for nanoplasmonics.

    PubMed

    Lesina, Antonino Calà; Vaccari, Alessandro; Berini, Pierre; Ramunno, Lora

    2015-04-20

    Use of the Finite-Difference Time-Domain (FDTD) method to model nanoplasmonic structures continues to rise - more than 2700 papers were published in 2014 on FDTD simulations of surface plasmons. However, a comprehensive study on the convergence and accuracy of the method for nanoplasmonic structures has yet to be reported. Although the method may be well-established in other areas of electromagnetics, the peculiarities of nanoplasmonic problems are such that a targeted study on convergence and accuracy is required. The availability of a high-performance computing system (a massively parallel IBM Blue Gene/Q) allows us to do this for the first time. We consider gold and silver at optical wavelengths along with three "standard" nanoplasmonic structures: a metal sphere, a metal dipole antenna and a metal bowtie antenna - for the first structure, comparisons with the analytical extinction, scattering, and absorption coefficients based on Mie theory are possible. We consider different ways to set up the simulation domain, we vary the mesh size to very small dimensions, we compare the simple Drude model with the Drude model augmented with a two critical points correction, we compare single-precision to double-precision arithmetic, and we compare two staircase meshing techniques, per-component and uniform. We find that the Drude model with the two critical points correction (at least) must be used in general. Double-precision arithmetic is needed to avoid round-off errors if highly converged results are sought. Per-component meshing increases the accuracy when complex geometries are modeled, but the uniform mesh works better for structures completely fillable by the Yee cell (e.g., rectangular structures). Generally, a mesh size of 0.25 nm is required to achieve convergence of results to ∼ 1%. We determine how to optimally set up the simulation domain, and in so doing we find that performing scattering calculations within the near-field does not necessarily produce large
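    The Drude model with a two-critical-points correction referred to above is commonly written in the Etchegoin/Vial form; a sketch of evaluating it follows (the parameters in the demo call are placeholders, not fitted values for gold or silver):

```python
import cmath

def drude_cp(omega, eps_inf, omega_p, gamma, cps):
    """Drude dispersion with critical-points (CP) corrections in the
    Etchegoin/Vial form, unit-agnostic (e.g. all quantities in eV):
      eps(w) = eps_inf - wp^2 / (w^2 + i*g*w)
               + sum_k A*W*( exp(+i*phi) / (W - w - i*G)
                           + exp(-i*phi) / (W + w + i*G) )
    `cps` is a list of (A, phi, W, G) tuples, one per CP term."""
    eps = eps_inf - omega_p ** 2 / (omega ** 2 + 1j * gamma * omega)
    for A, phi, W, G in cps:
        eps += A * W * (cmath.exp(1j * phi) / (W - omega - 1j * G)
                        + cmath.exp(-1j * phi) / (W + omega + 1j * G))
    return eps

# Placeholder parameters (NOT fitted values for a real metal):
eps = drude_cp(2.0, 1.0, 9.0, 0.07, [(0.5, -0.8, 2.6, 0.3)])
print(eps)
```

    The CP terms add the interband-transition structure that the bare Drude model misses at optical wavelengths, which is why the study finds the corrected model necessary for gold and silver.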

  7. Effects of Sample Selection Bias on the Accuracy of Population Structure and Ancestry Inference

    PubMed Central

    Shringarpure, Suyash; Xing, Eric P.

    2014-01-01

    Population stratification is an important task in genetic analyses. It provides information about the ancestry of individuals and can be an important confounder in genome-wide association studies. Public genotyping projects have made a large number of datasets available for study. However, practical constraints dictate that of a geographical/ethnic population, only a small number of individuals are genotyped. The resulting data are a sample from the entire population. If the distribution of sample sizes is not representative of the populations being sampled, the accuracy of population stratification analyses of the data could be affected. We attempt to understand the effect of biased sampling on the accuracy of population structure analysis and individual ancestry recovery. We examined two commonly used methods for analyses of such datasets, ADMIXTURE and EIGENSOFT, and found that the accuracy of recovery of population structure is affected to a large extent by the sample used for analysis and how representative it is of the underlying populations. Using simulated data and real genotype data from cattle, we show that sample selection bias can affect the results of population structure analyses. We develop a mathematical framework for sample selection bias in models for population structure and also propose a correction for sample selection bias using auxiliary information about the sample. We demonstrate that such a correction is effective in practice using simulated and real data. PMID:24637351

  8. A clinical study of lung cancer dose calculation accuracy with Monte Carlo simulation.

    PubMed

    Zhao, Yanqun; Qi, Guohai; Yin, Gang; Wang, Xianliang; Wang, Pei; Li, Jian; Xiao, Mingyong; Li, Jie; Kang, Shengwei; Liao, Xiongfei

    2014-12-16

    The accuracy of dose calculation is crucial to the quality of treatment planning and, consequently, to the dose delivered to patients undergoing radiation therapy. Current general calculation algorithms such as Pencil Beam Convolution (PBC) and Collapsed Cone Convolution (CCC) have shortcomings in regard to severe inhomogeneities, particularly in those regions where charged particle equilibrium does not hold. The aim of this study was to evaluate the accuracy of the PBC and CCC algorithms in lung cancer radiotherapy using Monte Carlo (MC) technology. Four treatment plans were designed using the Oncentra Masterplan TPS for each patient: two intensity-modulated radiation therapy (IMRT) plans developed using the PBC and CCC algorithms, and two three-dimensional conformal therapy (3DCRT) plans developed using the PBC and CCC algorithms. The DICOM-RT files of the treatment plans were exported to the Monte Carlo system for recalculation. The dose distributions of GTV, PTV and ipsilateral lung calculated by the TPS and MC were compared. For 3DCRT and IMRT plans, the mean dose differences for GTV between the CCC and MC increased as the GTV volume decreased. For IMRT, the mean dose differences were found to be higher than for 3DCRT. The CCC algorithm overestimated the GTV mean dose by approximately 3% for IMRT. For 3DCRT plans, when the volume of the GTV was greater than 100 cm(3), the mean doses calculated by CCC and MC showed almost no difference. PBC shows large deviations from the MC algorithm. For the dose to the ipsilateral lung, the CCC algorithm overestimated the dose to the entire lung, and the PBC algorithm overestimated V20 but underestimated V5; the difference in V10 was not statistically significant. PBC substantially overestimates the dose to the tumour, but the CCC is similar to the MC simulation. It is recommended that treatment plans for lung cancer be developed using an advanced dose calculation algorithm other than PBC. MC can accurately

  9. Modeling and Compensation Design for a Power Hardware-in-the-Loop Simulation of an AC Distribution System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ainsworth, Nathan; Hariri, Ali; Prabakar, Kumaraguru

    Power hardware-in-the-loop (PHIL) simulation, where actual hardware under test is coupled with a real-time digital model in closed loop, is a powerful tool for analyzing new methods of control for emerging distributed power systems. However, without careful design and compensation of the interface between the simulated and actual systems, PHIL simulations may exhibit instability and modeling inaccuracies. This paper addresses issues that arise in the PHIL simulation of a hardware battery inverter interfaced with a simulated distribution feeder. Both the stability and accuracy issues are modeled and characterized, and a methodology for design of PHIL interface compensation to ensure stability and accuracy is presented. The stability and accuracy of the resulting compensated PHIL simulation are then shown by experiment.

  10. Results from Binary Black Hole Simulations in Astrophysics Applications

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2007-01-01

    Present and planned gravitational wave observatories are opening a new astronomical window on the sky. A key source of gravitational waves is the merger of two black holes. The Laser Interferometer Space Antenna (LISA), in particular, is expected to observe these events with signal-to-noise ratios in the thousands. To fully reap the scientific benefits of these observations requires a detailed understanding, based on numerical simulations, of the predictions of General Relativity for the waveform signals. New techniques for simulating binary black hole mergers, introduced two years ago, have led to dramatic advances in applied numerical simulation work. Over the last two years, numerical relativity researchers have made tremendous strides in understanding the late stages of binary black hole mergers. Simulations have been applied to test much of the basic physics of binary black hole interactions, showing robust results for merger waveform predictions, and illuminating such phenomena as spin-precession. Calculations have shown that merging systems can be kicked at up to 2500 km/s by the thrust from asymmetric emission. Recently, long-lasting simulations of ten or more orbits allow tests of post-Newtonian (PN) approximation results for radiation from the last orbits of the binary's inspiral. Already, analytic waveform models based on PN techniques with incorporated information from numerical simulations may be adequate for observations with current ground-based observatories. As new advances in simulations continue to rapidly improve our theoretical understanding of these systems, it seems certain that high-precision predictions will be available in time for LISA and other advanced ground-based instruments.

  11. Methods to validate the accuracy of an indirect calorimeter in the in-vitro setting.

    PubMed

    Oshima, Taku; Ragusa, Marco; Graf, Séverine; Dupertuis, Yves Marc; Heidegger, Claudia-Paula; Pichard, Claude

    2017-12-01

    The international ICALIC initiative aims at developing a new indirect calorimeter according to the needs of clinicians and researchers in the field of clinical nutrition and metabolism. The project initially focuses on validating the calorimeter for use in mechanically ventilated, acutely ill adult patients. However, standard methods to validate the accuracy of calorimeters have not yet been established. This paper describes the procedures for the in-vitro tests to validate the accuracy of the new indirect calorimeter, and defines the ranges for the parameters to be evaluated in each test to optimize the validation for clinical and research calorimetry measurements. Two in-vitro tests have been defined to validate the accuracy of the gas analyzers and the overall function of the new calorimeter. 1) Gas composition analysis validates the accuracy of the O2 and CO2 analyzers. Reference gas of known O2 (or CO2) concentration is diluted with pure nitrogen gas to achieve a predefined O2 (or CO2) concentration, to be measured by the indirect calorimeter. The O2 and CO2 concentrations to be tested were determined according to their expected ranges during calorimetry measurements. 2) Gas exchange simulator analysis validates O2 consumption (VO2) and CO2 production (VCO2) measurements. CO2 gas injection into artificial breath gas provided by the mechanical ventilator simulates VCO2. The resulting dilution of the O2 concentration in the expiratory air is analyzed by the calorimeter as VO2. CO2 gas of concentration identical to the fraction of inspired O2 (FiO2) is used to simulate identical VO2 and VCO2. Indirect calorimetry results from publications were analyzed to determine the VO2 and VCO2 values to be tested for the validation. The O2 concentration in respiratory air is highest at inspiration, and can decrease to 15% during expiration. The CO2 concentration can be as high as 5% in expired air. 
To validate the analyzers for measurements of Fi
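
    The dilution and simulated-gas-exchange arithmetic described in this record can be sketched as follows (a minimal illustration; the flow rates, concentrations, and the assumption that the injected gas has a known CO2 fraction are mine, not taken from the ICALIC protocol):

```python
def diluted_fraction(f_ref, q_ref, q_n2):
    """Gas fraction after diluting a reference gas with pure nitrogen.

    f_ref: O2 (or CO2) fraction of the reference gas;
    q_ref, q_n2: flow rates of reference gas and nitrogen (same units)."""
    return f_ref * q_ref / (q_ref + q_n2)

def simulated_vco2(f_co2, q_inj):
    """VCO2 (L/min) simulated by injecting a gas with CO2 fraction f_co2
    at flow rate q_inj (L/min) into the artificial breath stream."""
    return f_co2 * q_inj

# Dilute a 20.9% O2 reference gas 1:1 with N2 -> 10.45% O2 at the analyzer.
print(round(diluted_fraction(0.209, 1.0, 1.0), 4))   # 0.1045
```

Sweeping `q_n2` over a range of values generates the predefined concentrations to compare against the analyzer readings.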

  12. The elimination of influence of disturbing bodies' coordinates and derivatives discontinuity on the accuracy of asteroid motion simulation

    NASA Astrophysics Data System (ADS)

    Baturin, A. P.; Votchel, I. A.

    2013-12-01

    The problem of asteroid motion simulation has been considered. At present this simulation is performed by means of numerical integration, taking into account the perturbations from the planets and the Moon using planetary ephemerides (DE405, DE422, etc.). All these ephemerides contain coefficients of Chebyshev polynomials for a large number of equal interpolation intervals. However, the ephemerides have been constructed to preserve, at the junctions of adjacent intervals, continuity of only the coordinates and their first derivatives (and only in 16-digit decimal format, corresponding to 64-bit floating-point numbers). The second- and higher-order derivatives have breaks at these junctions. These breaks, if they fall within an integration step, decrease the accuracy of the numerical integration. If one considers 34-digit format (128-bit floating-point numbers), the coordinates and their first derivatives also have breaks (at the 15th-16th decimal digit) at the interpolation intervals' junctions. Two ways of eliminating the influence of such breaks have been considered. The first is a "smoothing" of the ephemerides so that the planets' coordinates and their derivatives up to some order are continuous at the junctions. The smoothing algorithm is based on conditional least-squares fitting of the Chebyshev polynomial coefficients; the conditions are equalities of the coordinates and derivatives up to some order "from the left" and "from the right" at each junction. The algorithm has been applied to the smoothing of the DE430 ephemerides up to the first-order derivatives. The second way is a correction of the integration step so that junctions never lie within a step but always coincide with its end. This way may be applied only at 16-digit decimal precision because it assumes continuity of the planets' coordinates and their first derivatives. 
Both ways were applied in forward and backward numerical integration for the asteroids Apophis and 2012 DA14 by means of 15th- and 31st-order
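
    The junction discontinuity described above can be illustrated with a short sketch: evaluate a Chebyshev series and its derivative at the shared endpoint of two adjacent intervals. The coefficients below are made up for illustration (real ephemerides such as DE430 store one coefficient set per interval per coordinate); they are chosen so the positions match at the junction but the first derivatives do not:

```python
def cheb_eval(coeffs, x):
    """Clenshaw evaluation of sum_k c_k T_k(x) for x in [-1, 1]."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = c + 2.0 * x * b1 - b2, b1
    return coeffs[0] + x * b1 - b2

def cheb_deriv(coeffs):
    """Coefficients of the derivative series, via d_{k-1} = d_{k+1} + 2k c_k."""
    n = len(coeffs)
    d = [0.0] * (n + 1)
    for k in range(n - 1, 0, -1):
        d[k - 1] = d[k + 1] + 2.0 * k * coeffs[k]
    d[0] *= 0.5
    return d[:n - 1] if n > 1 else [0.0]

# Left interval evaluated at x = +1 meets the right interval at x = -1.
left = [1.00, 0.50, 0.25]
right = [1.00, -0.50, 0.25]   # same junction position, different slope
pos_jump = cheb_eval(right, -1.0) - cheb_eval(left, 1.0)
vel_jump = cheb_eval(cheb_deriv(right), -1.0) - cheb_eval(cheb_deriv(left), 1.0)
print(pos_jump, vel_jump)   # 0.0 -3.0
```

The "smoothing" approach adds constraints forcing such jumps to zero for derivatives up to the chosen order; the step-correction approach instead keeps junctions out of the interior of integration steps.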

  13. Local indicators of geocoding accuracy (LIGA): theory and application

    PubMed Central

    Jacquez, Geoffrey M; Rommel, Robert

    2009-01-01

    Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. Herein lies a paradox for spatial analysis: For a given level of positional error increasing sample density to more accurately
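
    The perturbability idea in this record can be sketched as follows: jitter the point locations with Gaussian positional error and measure how much a spatial-weights set changes. The k-nearest-neighbour weights, error model, and coordinates below are assumptions of this sketch; the paper's actual weight specifications and error distributions are richer:

```python
import math
import random

def knn_weights(points, k=2):
    """Binary k-nearest-neighbour spatial weights as a set of (i, j) pairs."""
    pairs = set()
    for i, p in enumerate(points):
        nearest = sorted((math.dist(p, q), j)
                         for j, q in enumerate(points) if j != i)
        for _, j in nearest[:k]:
            pairs.add((i, j))
    return pairs

def perturbability(points, sigma, k=2, trials=50, seed=0):
    """Mean fraction of weight pairs that change under positional error
    with standard deviation sigma in each coordinate."""
    rng = random.Random(seed)
    base = knn_weights(points, k)
    changed = 0.0
    for _ in range(trials):
        jittered = [(x + rng.gauss(0.0, sigma), y + rng.gauss(0.0, sigma))
                    for x, y in points]
        w = knn_weights(jittered, k)
        changed += len(base ^ w) / len(base | w)   # symmetric difference
    return changed / trials

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
print(perturbability(points, sigma=0.3))
```

With `sigma = 0` the weights never change and the perturbability is exactly 0; plotting it against increasing `sigma` traces the kind of curve the paper describes.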

  14. Computer simulation results of attitude estimation of earth orbiting satellites

    NASA Technical Reports Server (NTRS)

    Kou, S. R.

    1976-01-01

    Computer simulation results of attitude estimation of Earth-orbiting satellites (including the Space Telescope) subjected to environmental disturbances and noises are presented. A decomposed linear recursive filter and a Kalman filter were used as estimation tools. Six programs were developed for this simulation; all were written in the BASIC language and run on HP 9830A and HP 9866A computers. Simulation results show that the decomposed linear recursive filter is accurate in estimation and fast in response time. Furthermore, for higher-order systems, this filter has computational advantages (i.e., smaller integration and roundoff errors) over a Kalman filter.
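
    The kind of recursive estimator compared in such studies can be illustrated with a scalar Kalman filter tracking a nearly constant angle in measurement noise. This is a generic textbook sketch, not the paper's decomposed filter (which, like real attitude filters, is multi-dimensional); the noise levels are assumed:

```python
import random

def kalman_1d(measurements, q=1e-5, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a nearly constant state observed in noise.
    q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    out = []
    for z in measurements:
        p += q                 # predict: state assumed (almost) constant
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with the innovation z - x
        p *= (1.0 - k)
        out.append(x)
    return out

rng = random.Random(7)
true_angle = 1.0                      # illustrative attitude angle
zs = [true_angle + rng.gauss(0.0, 0.2) for _ in range(200)]
est = kalman_1d(zs)
print(est[-1])                        # converges toward 1.0
```

A decomposed recursive formulation processes components separately, which is where the integration- and roundoff-error advantages cited above come from for high-order systems.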

  15. Accelerating molecular Monte Carlo simulations using distance and orientation dependent energy tables: tuning from atomistic accuracy to smoothed “coarse-grained” models

    PubMed Central

    Lettieri, S.; Zuckerman, D.M.

    2011-01-01

    Typically, the most time consuming part of any atomistic molecular simulation is due to the repeated calculation of distances, energies and forces between pairs of atoms. However, many molecules contain nearly rigid multi-atom groups such as rings and other conjugated moieties, whose rigidity can be exploited to significantly speed up computations. The availability of GB-scale random-access memory (RAM) offers the possibility of tabulation (pre-calculation) of distance and orientation-dependent interactions among such rigid molecular bodies. Here, we perform an investigation of this energy tabulation approach for a fluid of atomistic – but rigid – benzene molecules at standard temperature and density. In particular, using O(1) GB of RAM, we construct an energy look-up table which encompasses the full range of allowed relative positions and orientations between a pair of whole molecules. We obtain a hardware-dependent speed-up of a factor of 24-50 as compared to an ordinary (“exact”) Monte Carlo simulation and find excellent agreement between energetic and structural properties. Second, we examine the somewhat reduced fidelity of results obtained using energy tables based on much less memory use. Third, the energy table serves as a convenient platform to explore potential energy smoothing techniques, akin to coarse-graining. Simulations with smoothed tables exhibit near atomistic accuracy while increasing diffusivity. The combined speed-up in sampling from tabulation and smoothing exceeds a factor of 100. For future applications greater speed-ups can be expected for larger rigid groups, such as those found in biomolecules. PMID:22120971
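
    The tabulation idea can be sketched in one dimension with a toy Lennard-Jones pair energy indexed by distance bin (an assumed reduced-units example; the paper's tables span full relative position and orientation between rigid molecules):

```python
def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair energy (the 'exact' interaction, reduced units)."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def build_table(r_min, r_max, n):
    """Pre-tabulate the energy at bin midpoints over [r_min, r_max]."""
    dr = (r_max - r_min) / n
    return [lj(r_min + (i + 0.5) * dr) for i in range(n)], dr

def table_energy(table, r_min, dr, r):
    """O(1) look-up: nearest-bin energy instead of recomputing lj(r)."""
    i = min(max(int((r - r_min) / dr), 0), len(table) - 1)
    return table[i]

table, dr = build_table(0.8, 3.0, 20000)
# At this resolution the table tracks the direct calculation very closely.
print(abs(table_energy(table, 0.8, dr, 1.5) - lj(1.5)))
```

Coarsening the grid (fewer bins) or averaging neighbouring bins mimics the memory-reduction and smoothing trade-offs explored in the paper.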

  16. Effect of Small Numbers of Test Results on Accuracy of Hoek-Brown Strength Parameter Estimations: A Statistical Simulation Study

    NASA Astrophysics Data System (ADS)

    Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.

    2017-12-01

    The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimations of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimations. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we argue that the minimum number of required samples depends on rock type and should correspond to an acceptable level of uncertainty in the estimations. Also, a comparison of the results from our analysis with actual rock strength data shows that the probability of obtaining reliable strength parameter estimations using small samples may be very low. We further discuss the impact of this on ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
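
    One Monte Carlo replicate of the kind described above can be sketched by simulating a small triaxial data set and refitting the intact-rock criterion σ1 = σ3 + √(m·σc·σ3 + σc²). The parameter values, noise model, and the linearised least-squares fit below are assumptions of this sketch, not the paper's procedure:

```python
import random

def fit_hoek_brown(s3, s1):
    """Estimate (m, sigma_c) via the linearisation
    y = (s1 - s3)^2 = (m * sc) * s3 + sc^2, fit by ordinary least squares."""
    y = [(a - b) ** 2 for a, b in zip(s1, s3)]
    n = len(s3)
    mx, my = sum(s3) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in s3)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(s3, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    sc = intercept ** 0.5            # sigma_c estimate
    return slope / sc, sc            # (m, sigma_c)

# One replicate: n = 8 noisy strength tests, then refit.
rng = random.Random(42)
m_true, sc_true = 10.0, 100.0        # illustrative values (e.g. MPa)
s3 = [rng.uniform(0.0, 40.0) for _ in range(8)]
s1 = [t + (m_true * sc_true * t + sc_true ** 2) ** 0.5 + rng.gauss(0.0, 2.0)
      for t in s3]
m_hat, sc_hat = fit_hoek_brown(s3, s1)
print(m_hat, sc_hat)
```

Repeating this many times and recording the spread of `(m_hat, sc_hat)` gives the sample-size-dependent uncertainty the paper quantifies.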

  17. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    NASA Astrophysics Data System (ADS)

    Ko, P.; Kurosawa, S.

    2014-03-01

    The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to the design work enhancing the turbine performance, including the elongation of the operation life span and the improvement of turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-Averaged Navier-Stokes equations, using the volume of fluid method to track the free surface, combined with a Reynolds Stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with the model test results of an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for the turbine flow and performance is introduced to facilitate future design and research works on Kaplan-type turbines.

  18. Superior accuracy of model-based radiostereometric analysis for measurement of polyethylene wear

    PubMed Central

    Stilling, M.; Kold, S.; de Raedt, S.; Andersen, N. T.; Rahbek, O.; Søballe, K.

    2012-01-01

    Objectives The accuracy and precision of two new methods of model-based radiostereometric analysis (RSA) were hypothesised to be superior to a plain radiograph method in the assessment of polyethylene (PE) wear. Methods A phantom device was constructed to simulate three-dimensional (3D) PE wear. Images were obtained consecutively for each simulated wear position for each modality. Three commercially available packages were evaluated: model-based RSA using laser-scanned cup models (MB-RSA), model-based RSA using computer-generated elementary geometrical shape models (EGS-RSA), and PolyWare. Precision (95% repeatability limits) and accuracy (Root Mean Square Errors) for two-dimensional (2D) and 3D wear measurements were assessed. Results The precision for 2D wear measures was 0.078 mm, 0.102 mm, and 0.076 mm for EGS-RSA, MB-RSA, and PolyWare, respectively. For the 3D wear measures the precision was 0.185 mm, 0.189 mm, and 0.244 mm for EGS-RSA, MB-RSA, and PolyWare, respectively. Repeatability was similar for all methods within the same dimension, when compared between 2D and 3D (all p > 0.28). Accuracy was below 0.055 mm for the 2D RSA methods and at least 0.335 mm for PolyWare. For 3D measurements, accuracy was 0.1 mm, 0.2 mm, and 0.3 mm for EGS-RSA, MB-RSA, and PolyWare, respectively. PolyWare was less accurate compared with the RSA methods (p = 0.036). No difference was observed between the RSA methods (p = 0.10). Conclusions For all methods, precision and accuracy were better in 2D, with RSA methods being superior in accuracy. Although less accurate and precise, 3D RSA defines the clinically relevant wear pattern (multidirectional). PolyWare is a good and low-cost alternative to RSA, despite being less accurate and requiring a larger sample size. PMID:23610688
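
    The two figures of merit reported in this record can be computed generically as follows (a sketch of the standard definitions: RMSE against the phantom's known wear for accuracy, and ±1.96·SD of repeated-measurement differences for 95% repeatability limits; the exact protocol in the paper may differ in detail):

```python
import statistics

def rmse(measured, truth):
    """Accuracy as root-mean-square error against known (phantom) wear."""
    return (sum((m - t) ** 2 for m, t in zip(measured, truth))
            / len(measured)) ** 0.5

def repeatability_limits(differences):
    """95% repeatability limits: +/- 1.96 * SD of repeated-measurement
    differences (double measurements of the same wear position)."""
    return 1.96 * statistics.stdev(differences)

# Illustrative numbers only (mm).
print(rmse([0.11, 0.19, 0.32], [0.10, 0.20, 0.30]))     # ~0.0141
print(repeatability_limits([0.01, -0.02, 0.00, 0.02]))
```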

  19. Speed and Accuracy of Absolute Pitch Judgments: Some Latter-Day Results.

    ERIC Educational Resources Information Center

    Carroll, John B.

    Nine subjects, 5 of whom claimed absolute pitch (AP) ability were instructed to rapidly strike notes on the piano to match randomized tape-recorded piano notes. Stimulus set sizes were 64, 16, or 4 consecutive semitones, or 7 diatonic notes of a designated octave. A control task involved motor movements to notes announced in advance. Accuracy,…

  20. The biasing effect of clinical history on physical examination diagnostic accuracy.

    PubMed

    Sibbald, Matthew; Cavalcanti, Rodrigo B

    2011-08-01

    Literature on diagnostic test interpretation has shown that access to clinical history can both enhance diagnostic accuracy and increase diagnostic error. Knowledge of clinical history has also been shown to enhance the more complex cognitive task of physical examination diagnosis, possibly by enabling early hypothesis generation. However, it is unclear whether clinicians adhere to these early hypotheses in the face of unexpected physical findings, thus resulting in diagnostic error. A sample of 180 internal medicine residents received a short clinical history and conducted a cardiac physical examination on a high-fidelity simulator. Resident Doctors (Residents) were randomised to three groups based on the physical findings in the simulator. The concordant group received physical examination findings consistent with the diagnosis that was most probable based on the clinical history. Discordant groups received findings associated with plausible alternative diagnoses which either lacked expected findings (indistinct discordant) or contained unexpected findings (distinct discordant). Physical examination diagnostic accuracy and physical examination findings were analysed. Physical examination diagnostic accuracy varied significantly among groups: 75 ± 44%, 2 ± 13% and 31 ± 47% in the concordant, indistinct discordant and distinct discordant groups, respectively (F(2,177) = 53, p < 0.0001). Of the 115 Residents who were diagnostically unsuccessful, 33% adhered to their original incorrect hypotheses. Residents verbalised an average of 12 findings (interquartile range: 10-14); 58 ± 17% were correct and the percentage of correct findings was similar in all three groups (p = 0.44). Residents showed substantially decreased diagnostic accuracy when faced with discordant physical findings. The majority of trainees given discordant physical findings rejected their initial hypotheses, but were still diagnostically unsuccessful. These results

  1. Summarising and validating test accuracy results across multiple studies for use in clinical practice.

    PubMed

    Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J

    2015-06-15

    Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
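
    The prediction-interval step described above can be sketched on the logit scale, where a meta-analytic summary sensitivity with standard error `se` and between-study standard deviation `tau` yields an approximate interval for a new population. The numbers below are illustrative, not the paper's examples, and a normal quantile is used in place of the usual t-quantile for brevity:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def prediction_interval(mu_logit, se, tau, z=1.96):
    """Approximate 95% prediction interval for a test's sensitivity (or
    specificity, PPV, NPV) in a new population, on the logit scale.
    se: standard error of the summary estimate; tau: between-study SD."""
    half = z * math.sqrt(tau ** 2 + se ** 2)
    return inv_logit(mu_logit - half), inv_logit(mu_logit + half)

# Summary sensitivity 0.85 with assumed se = 0.10 and tau = 0.30.
lo, hi = prediction_interval(logit(0.85), se=0.10, tau=0.30)
print(round(lo, 3), round(hi, 3))
```

The width of the interval relative to the summary estimate shows how much heterogeneity undermines direct transfer of the average accuracy to a new population.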

  2. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-01-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  3. Genomic prediction in animals and plants: simulation of data, validation, reporting, and benchmarking.

    PubMed

    Daetwyler, Hans D; Calus, Mario P L; Pong-Wong, Ricardo; de Los Campos, Gustavo; Hickey, John M

    2013-02-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits
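
    The accuracy and bias indicators recommended above can be computed as follows (a generic sketch: accuracy as the Pearson correlation of genomic estimated breeding values with true or validation values, and bias as the regression slope; the GEBV/TBV numbers are illustrative):

```python
def accuracy_and_bias(gebv, tbv):
    """Accuracy = Pearson correlation of GEBV with (true) breeding values;
    bias = regression slope of TBV on GEBV (slope < 1 suggests inflation)."""
    n = len(gebv)
    mg, mt = sum(gebv) / n, sum(tbv) / n
    cov = sum((g - mg) * (t - mt) for g, t in zip(gebv, tbv))
    vg = sum((g - mg) ** 2 for g in gebv)
    vt = sum((t - mt) ** 2 for t in tbv)
    return cov / (vg * vt) ** 0.5, cov / vg

r, slope = accuracy_and_bias([1.0, 2.0, 3.0, 4.0], [1.1, 2.3, 2.9, 4.2])
print(round(r, 3), round(slope, 3))
```

In a cross-validation, `gebv` would come from a model trained without the validation individuals, and both indicators would be reported alongside a measure of reference-validation relatedness.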

  4. Genomic Prediction in Animals and Plants: Simulation of Data, Validation, Reporting, and Benchmarking

    PubMed Central

    Daetwyler, Hans D.; Calus, Mario P. L.; Pong-Wong, Ricardo; de los Campos, Gustavo; Hickey, John M.

    2013-01-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits

  5. Simulation of diurnal thermal energy storage systems: Preliminary results

    NASA Astrophysics Data System (ADS)

    Katipamula, S.; Somasundaram, S.; Williams, H. R.

    1994-12-01

    This report describes the results of a simulation of thermal energy storage (TES) integrated with a simple-cycle gas turbine cogeneration system. Integrating TES with cogeneration can serve the electrical and thermal loads independently while firing all fuel in the gas turbine. The detailed engineering and economic feasibility of diurnal TES systems integrated with cogeneration systems has been described in two previous PNL reports. The objective of this study was to lay the groundwork for optimization of the TES system designs using a simulation tool called TRNSYS (TRaNsient SYstem Simulation). TRNSYS is a transient simulation program with a sequential-modular structure developed at the Solar Energy Laboratory, University of Wisconsin-Madison. The two TES systems selected for the base-case simulations were: (1) a one-tank storage model to represent the oil/rock TES system; and (2) a two-tank storage model to represent the molten nitrate salt TES system. Results of the study clearly indicate that an engineering optimization of the TES system using TRNSYS is possible. The one-tank stratified oil/rock storage model described here is a good starting point for parametric studies of a TES system. Further developments to the TRNSYS library of available models (economizer, evaporator, gas turbine, etc.) are recommended so that the phase-change processes are accurately treated.

  6. Comparison between simulations and lab results on the ASSIST test-bench

    NASA Astrophysics Data System (ADS)

    Le Louarn, Miska; Madec, Pierre-Yves; Kolb, Johann; Paufique, Jerome; Oberti, Sylvain; La Penna, Paolo; Arsenault, Robin

    2016-07-01

    We present the latest comparison results between laboratory tests carried out on the ASSIST test bench and Octopus end-to-end simulations. We simulated, as close to the lab conditions as possible, the different AOF modes: maintenance and commissioning mode (SCAO), GRAAL (GLAO in the near IR), Galacsi wide-field mode (GLAO in the visible) and Galacsi narrow-field mode (LTAO in the visible). We then compared the simulation results to the ones obtained on the lab bench. Several aspects were investigated, such as the number of corrected modes, turbulence wind speeds, and LGS photon flux. The agreement between simulations and lab is remarkably good for all investigated parameters, giving great confidence in both the simulation tool and the performance of the AO system in the lab.

  7. Numerical simulations of catastrophic disruption: Recent results

    NASA Technical Reports Server (NTRS)

    Benz, W.; Asphaug, E.; Ryan, E. V.

    1994-01-01

    Numerical simulations have been used to study high-velocity two-body impacts. In this paper, a two-dimensional Lagrangian finite difference hydrocode and a three-dimensional smoothed particle hydrodynamics (SPH) code are described and initial results reported. These codes can be, and have been, used to make specific predictions about particular objects in our solar system. But more significantly, they allow us to explore a broad range of collisional events. Certain parameters (size, time) can be studied only over a very restricted range within the laboratory; other parameters (initial spin, low gravity, exotic structure or composition) are difficult to study at all experimentally. The outcomes of numerical simulations lead to a more general and accurate understanding of impacts in their many forms.

  8. Improved Motor-Timing: Effects of Synchronized Metronome Training on Golf Shot Accuracy

    PubMed Central

    Sommer, Marius; Rönnqvist, Louise

    2009-01-01

    This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. Twenty-six experienced male golfers participated (mean age 27 years; mean golf handicap 12.6) in this study. Pre- and post-test investigations of golf shots made with three different clubs were conducted by use of a golf simulator. The golfers were randomized into two groups: a SMT group and a Control group. After the pre-test, the golfers in the SMT group completed a 4-week SMT program designed to improve their motor timing; the golfers in the Control group merely trained their golf swings during the same time period. No differences between the two groups were found from the pre-test outcomes, either for motor timing scores or for golf shot accuracy. However, the post-test results after the 4-week SMT showed evident motor timing improvements. Additionally, significant improvements in golf shot accuracy were found for the SMT group, with less variability in their performance. No such improvements were found for the golfers in the Control group. As with previous studies that used a SMT program, this study’s results provide further evidence that motor timing can be improved by SMT and that such timing improvement also improves golf accuracy. Key points This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. A randomized control group design was used. The 4-week SMT intervention showed significant improvements in motor timing and golf shot accuracy, and led to less variability. We conclude that this study’s results provide further evidence that motor timing can be improved by SMT training and that such timing improvement also improves golf accuracy. PMID:24149608

  9. Simulation for analysis and control of superplastic forming. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zacharia, T.; Aramayo, G.A.; Simunovic, S.

    1996-08-01

    A joint study was conducted by Oak Ridge National Laboratory (ORNL) and the Pacific Northwest Laboratory (PNL) for the U.S. Department of Energy-Lightweight Materials (DOE-LWM) Program. The purpose of the study was to assess and benchmark current modeling capabilities with respect to accuracy of predictions and simulation time. Two simulation platforms were considered in this study: the LS-DYNA3D code installed on ORNL's high-performance computers and the finite element code MARC used at PNL. Both ORNL and PNL performed superplastic forming (SPF) analysis on a standard butter-tray geometry, which was defined by PNL, to better understand the capabilities of the respective models. The specific geometry was selected and formed at PNL, and the experimental results, such as forming time and thickness at specific locations, were provided for comparisons with numerical predictions. Furthermore, comparisons between the ORNL simulation results, using elasto-plastic analysis, and PNL's results, using rigid-plastic flow analysis, were performed.

  10. Modeling and Compensation Design for a Power Hardware-in-the-Loop Simulation of an AC Distribution System: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabakar, Kumaraguru; Ainsworth, Nathan; Pratt, Annabelle

    Power hardware-in-the-loop (PHIL) simulation, where actual hardware under test is coupled with a real-time digital model in closed loop, is a powerful tool for analyzing new methods of control for emerging distributed power systems. However, without careful design and compensation of the interface between the simulated and actual systems, PHIL simulations may exhibit instability and modeling inaccuracies. This paper addresses issues that arise in the PHIL simulation of a hardware battery inverter interfaced with a simulated distribution feeder. Both the stability and accuracy issues are modeled and characterized, and a methodology for designing PHIL interface compensation to ensure stability and accuracy is presented. The stability and accuracy of the resulting compensated PHIL simulation are then demonstrated by experiment.

  11. Decision-Making Accuracy of CBM Progress-Monitoring Data

    ERIC Educational Resources Information Center

    Hintze, John M.; Wells, Craig S.; Marcotte, Amanda M.; Solomon, Benjamin G.

    2018-01-01

    This study examined the diagnostic accuracy associated with decision making as is typically conducted with curriculum-based measurement (CBM) approaches to progress monitoring. Using previously published estimates of the standard errors of estimate associated with CBM, 20,000 progress-monitoring data sets were simulated to model student reading…

  12. Assessing the accuracy of TDR-based water leak detection system

    NASA Astrophysics Data System (ADS)

    Fatemi Aghda, S. M.; GanjaliPour, K.; Nabiollahi, K.

    2018-03-01

    The use of TDR systems to detect leakage locations in underground pipes has been developed in recent years. In this system, a bi-wire is installed in parallel with the underground pipes and acts as a TDR sensor. This approach largely overcomes the limitations of the traditional method of acoustic leak positioning. The TDR-based leak detection method is relatively accurate when the TDR sensor is in contact with water at just one point, and researchers have been working to improve its accuracy in recent years. In this study, the ability of the TDR method was evaluated when multiple leakage points appear simultaneously. For this purpose, several laboratory tests were conducted. In these tests, in order to simulate leakage points, the TDR sensor was put in contact with water at some points; then the number and the dimension of the simulated leakage points were gradually increased. The results showed that as the number and dimension of the leakage points increase, the error rate of the TDR-based water leak detection system increases. Based on the results obtained from the laboratory tests, the authors developed a method to improve the accuracy of TDR-based leak detection systems. To do so, they defined a few reference points on the TDR sensor. These points were created by increasing the distance between the two conductors of the TDR sensor and were easily identifiable in the TDR waveform. The tests were then repeated using the TDR sensor with reference points. In order to calculate the exact distance of the leakage point, the authors developed an equation based on the reference points. A comparison between the results obtained from both tests (with and without reference points) showed that the method and equation developed by the authors can significantly improve the accuracy of positioning the leakage points.
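
    The reference-point correction described above can be pictured as a piecewise-linear mapping from the apparent position on the TDR waveform to physical distance along the pipe. The sketch below is illustrative only; the function name and the exact form of the authors' equation are assumptions, not the published formula:

```python
import bisect

def locate_leak(apparent_pos, ref_apparent, ref_true):
    """Map an apparent TDR waveform position to a physical distance by
    interpolating between reference points of known location (hypothetical
    reconstruction of the reference-point correction idea)."""
    i = bisect.bisect_right(ref_apparent, apparent_pos) - 1
    i = max(0, min(i, len(ref_apparent) - 2))  # clamp to a valid segment
    frac = (apparent_pos - ref_apparent[i]) / (ref_apparent[i + 1] - ref_apparent[i])
    return ref_true[i] + frac * (ref_true[i + 1] - ref_true[i])
```

    Because each segment between reference points is rescaled independently, the cumulative error that grows with the number of leakage points is absorbed segment by segment.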

  13. Simulation study and experimental results for detection and classification of the transient capacitor inrush current using discrete wavelet transform and artificial intelligence

    NASA Astrophysics Data System (ADS)

    Patcharoen, Theerasak; Yoomak, Suntiti; Ngaopitakkul, Atthapol; Pothisarn, Chaichan

    2018-04-01

    This paper describes the combination of discrete wavelet transforms (DWT) and artificial intelligence (AI), which are efficient techniques for identifying the type of inrush current and analyzing its origin and possible causes in capacitor bank switching. An experimental setup was used to verify that the proposed techniques can detect and classify the transient inrush current and distinguish it from the normal capacitor rated current. The discrete wavelet transform is used to detect and classify the inrush current; the wavelet output then serves as input to a fuzzy inference system that discriminates the type of switching transient inrush current. The proposed technique shows enhanced performance, with a discrimination accuracy of 90.57%. Both the simulation study and the experimental results are quite satisfactory, providing high accuracy and reliability, and the technique can be developed and implemented in a numerical overcurrent (50/51) and unbalanced current (60C) protection relay for shunt capacitor bank protection in the future.
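
    As an illustration of the detection stage only, a single-level Haar detail decomposition (a simplified stand-in for the paper's DWT; the wavelet choice and threshold are assumptions) flags the sharp discontinuity of a switching transient against a smooth 50 Hz waveform:

```python
import numpy as np

def haar_detail(x):
    """First-level Haar detail coefficients: large where the signal jumps."""
    x = np.asarray(x, dtype=float)
    n = len(x) // 2 * 2  # use an even number of samples
    return (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)

def detect_transient(x, threshold):
    """Flag a transient when any detail coefficient exceeds the threshold."""
    return bool(np.max(np.abs(haar_detail(x))) > threshold)
```

    In the real system the wavelet output feeds a fuzzy inference system; here a fixed threshold stands in for that classification step.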

  14. Shape accuracy optimization for cable-rib tension deployable antenna structure with tensioned cables

    NASA Astrophysics Data System (ADS)

    Liu, Ruiwei; Guo, Hongwei; Liu, Rongqiang; Wang, Hongxiang; Tang, Dewei; Song, Xiaoke

    2017-11-01

    Shape accuracy is of substantial importance for deployable structures as the demand for large-scale deployable structures in various fields, especially aerospace engineering, increases. The main purpose of this paper is to present a shape accuracy optimization method to find the optimal pretensions for the desired shape of a cable-rib tension deployable antenna structure with tensioned cables. First, an analysis model of the deployable structure is established using the finite element method. In this model, geometrical nonlinearity is considered for the cable element and beam element. Flexible deformations of the deployable structure under the action of the cable network and tensioned cables are subsequently analyzed separately. Moreover, the influence of the pretension of the tensioned cables on the natural frequencies is studied. Based on the results, a genetic algorithm is used to find a set of reasonable pretensions and thus minimize structural deformation under the first-natural-frequency constraint. Finally, numerical simulations are presented to analyze the deployable structure under two kinds of constraints. Results show that the shape accuracy and natural frequencies of the deployable structure can be effectively improved by pretension optimization.

  15. Accuracy of Self-Evaluation in Adults with ADHD: Evidence from a Driving Study

    ERIC Educational Resources Information Center

    Knouse, Laura E.; Bagwell, Catherine L.; Barkley, Russell A.; Murphy, Kevin R.

    2005-01-01

    Research on children with ADHD indicates an association with inaccuracy of self-appraisal. This study examines the accuracy of self-evaluations in clinic-referred adults diagnosed with ADHD. Self-assessments and performance measures of driving in naturalistic settings and on a virtual-reality driving simulator are used to assess accuracy of…

  16. Accuracy of relative positioning by interferometry with GPS: Double-blind test results

    NASA Technical Reports Server (NTRS)

    Counselman, C. C., III; Gourevitch, S. A.; Herring, T. A.; King, B. W.; Shapiro, I. I.; Cappallo, R. J.; Rogers, A. E. E.; Whitney, A. R.; Greenspan, R. L.; Snyder, R. E.

    1983-01-01

    MITES (Miniature Interferometer Terminals for Earth Surveying) observations conducted on December 17 and 29, 1980, are analyzed. It is noted that the time span of the observations used on each day was 78 minutes, during which five satellites were always above 20 deg elevation. The observations are analyzed to determine the intersite position vectors by means of the algorithm described by Counselman and Gourevitch (1981). The average of the MITES results from the two days is presented. The rms differences between the two determinations of the components of the three vectors, which were about 65, 92, and 124 m long, were 8 mm for the north, 3 mm for the east, and 6 mm for the vertical. It is concluded that, at least for short distances, relative positioning by interferometry with GPS can be done reliably with subcentimeter accuracy.

  17. Accuracy Analysis and Parameters Optimization in Urban Flood Simulation by PEST Model

    NASA Astrophysics Data System (ADS)

    Keum, H.; Han, K.; Kim, H.; Ha, C.

    2017-12-01

    The risk of urban flooding has been increasing due to heavy rainfall, flash flooding, and rapid urbanization. Rainwater pumping stations and underground reservoirs are used to actively take measures against flooding; however, flood damage in lowlands continues to occur. Inundation in urban areas results from the overflow of sewers. It is therefore important to implement a network system that represents the intricately entangled conduits within a city, close to the actual physical situation, together with accurate terrain, because buildings and roads strongly influence two-dimensional flood analysis. The purpose of this study is to propose an optimal scenario construction procedure for watershed partitioning and parameterization in urban runoff and pipe network analysis, and to increase the accuracy of flooded-area prediction through a coupled model. The procedure was verified by applying it to an actual drainage district in Seoul. In this study, optimization was performed on four parameters: Manning's roughness coefficient for conduits, watershed width, Manning's roughness coefficient for impervious areas, and Manning's roughness coefficient for pervious areas. The calibration ranges of the parameters were determined using the SWMM manual and the ranges used in previous studies, and the parameters were estimated using the automatic calibration tool PEST. The scenarios using PEST showed a high correlation coefficient, and their RPE and RMSE values also indicated high accuracy. For RPE, the error ranged from 13.9-28.9% in the scenarios without parameter estimation, but was reduced to 6.8-25.7% in the scenarios using PEST. Based on these results, it can be concluded that more accurate flood analysis is possible when the optimal scenario is selected by determining an appropriate reference conduit and when the results are applied to various drainage networks.
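
    The two goodness-of-fit measures can be computed as below. This is a sketch under the assumption that RPE denotes relative peak error and RMSE root-mean-square error, consistent with common usage in urban-runoff calibration; the study's exact definitions may differ:

```python
import math

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def relative_peak_error(obs, sim):
    """Relative error of the simulated peak value, in percent."""
    return abs(max(obs) - max(sim)) / max(obs) * 100.0
```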

  18. Students as Toolmakers: Refining the Results in the Accuracy and Precision of a Trigonometric Activity

    ERIC Educational Resources Information Center

    Igoe, D. P.; Parisi, A. V.; Wagner, S.

    2017-01-01

    Smartphones used as tools provide opportunities for the teaching of the concepts of accuracy and precision and the mathematical concept of arctan. The accuracy and precision of a trigonometric experiment using entirely mechanical tools is compared to one using electronic tools, such as a smartphone clinometer application and a laser pointer. This…

  19. The influence of sampling interval on the accuracy of trail impact assessment

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.L.

    1999-01-01

    Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
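
    The resampling-simulation idea can be sketched as follows. This is illustrative only: the 0/1 indicator encoding, the 1 m census spacing, and the estimators are assumptions, not the study's exact procedure:

```python
def point_sample(census, interval):
    """Systematic point sample: keep every `interval`-th census point."""
    return census[::interval]

def frequency_of_occurrence(sample):
    """Fraction of sampled points at which the impact is present."""
    return sum(sample) / len(sample)

def lineal_extent(sample, trail_length_m):
    """Estimated length of trail affected, scaled from the sampled fraction."""
    return frequency_of_occurrence(sample) * trail_length_m
```

    Comparing these estimates against the full census at increasing `interval` values reproduces the accuracy-loss curves the study examines.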

  20. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

    A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature and relative humidity measurements. The archived data are available at http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station to station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced by 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing and instrument type (manufacturer) are taken into account.

  1. Localization accuracy of sphere fiducials in computed tomography images

    NASA Astrophysics Data System (ADS)

    Kobler, Jan-Philipp; Díaz Díaz, Jesus; Fitzpatrick, J. Michael; Lexow, G. Jakob; Majdani, Omid; Ortmaier, Tobias

    2014-03-01

    In recent years, bone-attached robots and microstereotactic frames have attracted increasing interest due to the promising targeting accuracy they provide. Such devices attach to a patient's skull via bone anchors, which are used as landmarks during intervention planning as well. However, as simulation results reveal, the performance of such mechanisms is limited by errors occurring during the localization of their bone anchors in preoperatively acquired computed tomography images. Therefore, it is desirable to identify the most suitable fiducials as well as the most accurate method for fiducial localization. We present experimental results of a study focusing on the fiducial localization error (FLE) of spheres. Two phantoms equipped with fiducials made from ferromagnetic steel and titanium, respectively, are used to compare two clinically available imaging modalities (multi-slice CT (MSCT) and cone-beam CT (CBCT)), three localization algorithms as well as two methods for approximating the FLE. Furthermore, the impact of cubic interpolation applied to the images is investigated. Results reveal that, generally, the achievable localization accuracy in CBCT image data is significantly higher compared to MSCT imaging. The lowest FLEs (approx. 40 μm) are obtained using spheres made from titanium, CBCT imaging, template matching based on cross correlation for localization, and interpolating the images by a factor of sixteen. Nevertheless, the achievable localization accuracy of spheres made from steel is only slightly inferior. The outcomes of the presented study will be valuable considering the optimization of future microstereotactic frame prototypes as well as the operative workflow.

  2. Kinematics Simulation Analysis of Packaging Robot with Joint Clearance

    NASA Astrophysics Data System (ADS)

    Zhang, Y. W.; Meng, W. J.; Wang, L. Q.; Cui, G. H.

    2018-03-01

    Considering the influence of joint clearance on the motion error, repeated positioning accuracy, and overall position of the machine, this paper presents a simulation analysis of a packaging robot, a 2-degree-of-freedom (DOF) planar parallel robot, based on the high-precision and high-speed requirements of packaging equipment. The motion constraint equation of the mechanism is established, and the analysis and simulation of the motion error are carried out for the case of revolute joint clearance. The simulation results show that the size of the joint clearance affects the movement accuracy and packaging efficiency of the packaging robot. The analysis provides a reference for packaging equipment design and selection criteria and has great significance for packaging industry automation.

  3. Attitude-correlated frames approach for a star sensor to improve attitude accuracy under highly dynamic conditions.

    PubMed

    Ma, Liheng; Zhan, Dejun; Jiang, Guangwen; Fu, Sihua; Jia, Hui; Wang, Xingshu; Huang, Zongsheng; Zheng, Jiaxing; Hu, Feng; Wu, Wei; Qin, Shiqiao

    2015-09-01

    The attitude accuracy of a star sensor decreases rapidly when star images become motion-blurred under dynamic conditions. Existing techniques concentrate on a single frame of star images to solve this problem, and improvements are obtained to a certain extent. An attitude-correlated frames (ACF) approach, which concentrates on the features of the attitude transforms of adjacent star image frames, is proposed to improve upon the existing techniques. The attitude transforms between different star image frames are measured precisely by the strap-down gyro unit. With the ACF method, a much larger star image frame is obtained through the combination of adjacent frames. As a result, the degradation of attitude accuracy caused by motion blurring is compensated for. The improvement of the attitude accuracy is approximately proportional to the square root of the number of correlated star image frames. Simulations and experimental results indicate that the ACF approach is effective in removing random noises and improving the attitude determination accuracy of the star sensor under highly dynamic conditions.
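
    The quoted square-root scaling is the standard behavior of averaging independent noise, which a short numerical check reproduces. This sketches the statistics only, not the ACF algorithm itself (the frame count and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n_frames = 1.0, 16
# simulated independent pixel noise in 16 attitude-aligned frames
frames = rng.normal(0.0, sigma, size=(n_frames, 100_000))
combined = frames.mean(axis=0)  # combine frames after attitude correlation
# residual noise std should approach sigma / sqrt(n_frames) = 0.25
```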

  4. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

    Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at http://croc.gsfc.nasa.gov/shadoz. In an analysis of ozonesonde imprecision within the SHADOZ dataset, Thompson et al. [JGR, 108, 8238, 2003] pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecisions and accuracy in the SHADOZ dataset are examined in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing and instrument type (manufacturer).

  5. Preliminary navigation accuracy analysis for the TDRSS Onboard Navigation System (TONS) experiment on EP/EUVE

    NASA Technical Reports Server (NTRS)

    Gramling, C. J.; Long, A. C.; Lee, T.; Ottenstein, N. A.; Samii, M. V.

    1991-01-01

    A Tracking and Data Relay Satellite System (TDRSS) Onboard Navigation System (TONS) is currently being developed by NASA to provide a high-accuracy autonomous navigation capability for users of TDRSS and its successor, the Advanced TDRSS (ATDRSS). The fully autonomous user onboard navigation system will support orbit determination, time determination, and frequency determination, based on observation of a continuously available, unscheduled navigation beacon signal. A TONS experiment will be performed in conjunction with the Explorer Platform (EP) Extreme Ultraviolet Explorer (EUVE) mission to flight qualify TONS Block 1. An overview is presented of TONS and a preliminary analysis of the navigation accuracy anticipated for the TONS experiment. Descriptions of the TONS experiment and the associated navigation objectives, as well as a description of the onboard navigation algorithms, are provided. The accuracy of the selected algorithms is evaluated based on the processing of realistic simulated TDRSS one-way forward-link Doppler measurements. The analysis process is discussed and the associated navigation accuracy results are presented.

  6. Results of GEANT simulations and comparison with first experiments at DANCE.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reifarth, R.; Bredeweg, T. A.; Browne, J. C.

    2003-07-29

    This report describes intensive Monte Carlo simulations carried out for comparison with the results of the first run cycle of DANCE (Detector for Advanced Neutron Capture Experiments). The experimental results were obtained during the 2002/2003 commissioning phase with only part of the array. Based on the results of these simulations, the most important items to be improved before the next experiments are addressed.

  7. First results from simulations of supersymmetric lattices

    NASA Astrophysics Data System (ADS)

    Catterall, Simon

    2009-01-01

    We conduct the first numerical simulations of lattice theories with exact supersymmetry arising from the orbifold constructions of Cohen et al. (2003) and Kaplan et al. (2005). We consider the Q = 4 theory in D = 0,2 dimensions and the Q = 16 theory in D = 0,2,4 dimensions. We show that the U(N) theories do not possess vacua which are stable non-perturbatively, but that this problem can be circumvented after truncation to SU(N). We measure the distribution of scalar field eigenvalues, the spectrum of the fermion operator and the phase of the Pfaffian arising after integration over the fermions. We monitor supersymmetry breaking effects by measuring a simple Ward identity. Our results indicate that simulations of N = 4 super Yang-Mills may be achievable in the near future.

  8. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    NASA Technical Reports Server (NTRS)

    Moore, Ashley

    2005-01-01

    The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target using camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with results from Australis photogrammetry software, which simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect the system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.

  9. Multiple Optical Filter Design Simulation Results

    NASA Astrophysics Data System (ADS)

    Mendelsohn, J.; Englund, D. C.

    1986-10-01

    In this paper we continue our investigation of the application of matched filters to robotic vision problems. Specifically, we are concerned with the tray-picking problem. Our principal interest in this paper is the examination of summation effects which arise from attempting to reduce the matched filter memory size by averaging matched filters. While matched filtering theory is ideally implemented for pattern recognition or machine vision through the use of optics and optical correlators, in this paper the results were obtained through a digital simulation of the optical process.
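
    A digital stand-in for the optical correlator is FFT-based cross-correlation: the matched-filter output peaks where the template best matches the scene. The minimal 1-D sketch below illustrates the principle (the actual work operated on 2-D imagery):

```python
import numpy as np

def matched_filter(scene, template):
    """Cross-correlate a scene with a template via the FFT; the peak of the
    output marks the template's location in the scene."""
    n = len(scene) + len(template) - 1
    S = np.fft.rfft(scene, n)
    T = np.fft.rfft(template, n)
    return np.fft.irfft(S * np.conj(T), n)
```

    Averaging several templates into one filter, as discussed above, trades memory for a broader, lower correlation peak.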

  10. A simplified DEM-CFD approach for pebble bed reactor simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y.; Ji, W.

    In pebble bed reactors (PBRs), the pebble flow and the coolant flow are coupled with each other through coolant-pebble interactions. Approaches with different fidelities have been proposed to simulate such phenomena. Coupled Discrete Element Method-Computational Fluid Dynamics (DEM-CFD) approaches are widely studied and applied to these problems due to their good balance between efficiency and accuracy. In this work, based on the symmetry of the PBR geometry, a simplified 3D-DEM/2D-CFD approach is proposed to speed up the DEM-CFD simulation without significant loss of accuracy. Pebble flow is simulated by a full 3-D DEM, while the coolant flow field is calculated with a 2-D CFD simulation by averaging variables along the annular direction in the cylindrical geometry. Results show that this simplification can greatly enhance the efficiency for a cylindrical core, which enables the further inclusion of other physics such as thermal and neutronic effects in multi-physics simulations for PBRs. (authors)
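
    The 3-D-to-2-D coupling step amounts to an azimuthal average of the DEM-side fields. A schematic sketch, assuming the field is stored on an (r, theta, z) grid (the storage layout is an assumption, not taken from the paper):

```python
import numpy as np

def annular_average(field_rtz):
    """Collapse an (r, theta, z) array to a 2-D (r, z) field by averaging
    over the annular (theta) direction, exploiting cylindrical symmetry."""
    return field_rtz.mean(axis=1)
```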

  11. Accuracy of non-resonant laser-induced thermal acoustics (LITA) in a convergent-divergent nozzle flow

    NASA Astrophysics Data System (ADS)

    Richter, J.; Mayer, J.; Weigand, B.

    2018-02-01

    Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied with regard to misalignment of the interrogation beam and frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18, employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of the frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
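
    The beat-frequency extraction can be sketched as a windowed DFT peak pick. This is illustrative only; the windowing choice and signal parameters are assumptions, and a real LITA signal model is more involved:

```python
import numpy as np

def beat_frequency(signal, fs):
    """Estimate the dominant beat frequency of a damped oscillatory signal
    by locating the peak of its windowed discrete Fourier transform."""
    windowed = signal * np.hanning(len(signal))
    spec = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin
```

    The bin spacing fs/N bounds the systematic error of this simple estimator, which is why the number of recorded oscillations enters the error budget quoted above.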

  12. Simulations Build Efficacy: Empirical Results from a Four-Week Congressional Simulation

    ERIC Educational Resources Information Center

    Mariani, Mack; Glenn, Brian J.

    2014-01-01

    This article describes a four-week congressional committee simulation implemented in upper level courses on Congress and the Legislative process at two liberal arts colleges. We find that the students participating in the simulation possessed high levels of political knowledge and confidence in their political skills prior to the simulation. An…

  13. Sea wind parameters retrieval using Y-configured Doppler navigation system data. Performance and accuracy

    NASA Astrophysics Data System (ADS)

    Khachaturian, A. B.; Nekrasov, A. V.; Bogachev, M. I.

    2018-05-01

    The authors report the results of the computer simulations of the performance and accuracy of the sea wind speed and direction retrieval. The analyzed measurements over the sea surface are made by the airborne microwave Doppler navigation system (DNS) with three Y-configured beams operated as a scatterometer enhancing its functionality. Single- and double-stage wind measurement procedures are proposed and recommendations for their implementation are described.

  14. Simulating reservoir leakage in ground-water models

    USGS Publications Warehouse

    Fenske, J.P.; Leake, S.A.; Prudic, David E.

    1997-01-01

    Leakage to ground water resulting from the expansion and contraction of reservoirs cannot be easily simulated by most ground-water flow models. An algorithm, entitled the Reservoir Package, was developed for the United States Geological Survey (USGS) three-dimensional finite-difference modular ground-water flow model MODFLOW. The Reservoir Package automates the process of specifying head-dependent boundary cells, eliminating the need to divide a simulation into many stress periods while improving accuracy in simulating changes in ground-water levels resulting from transient reservoir stage. Leakage between the reservoir and the underlying aquifer is simulated for each model cell corresponding to the inundated area by multiplying the head difference between the reservoir and the aquifer by the hydraulic conductance of the reservoir-bed sediments.
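
    The cell-by-cell leakage rule stated above can be written directly. This is a schematic of the head-dependent boundary calculation, not the MODFLOW source code; the function and argument names are illustrative:

```python
def reservoir_leakage(stage, aquifer_heads, conductances, inundated):
    """Leakage into each model cell: conductance times the head difference
    between reservoir stage and aquifer, zero outside the inundated area."""
    return [c * (stage - h) if wet else 0.0
            for h, c, wet in zip(aquifer_heads, conductances, inundated)]
```

    As the reservoir stage rises and the pool expands, cells simply switch from the zero branch to the head-dependent branch, which is what removes the need for many separate stress periods.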

  15. Accuracy and Numerical Stability Analysis of Lattice Boltzmann Method with Multiple Relaxation Time for Incompressible Flows

    NASA Astrophysics Data System (ADS)

    Pradipto; Purqon, Acep

    2017-07-01

    The Lattice Boltzmann Method (LBM) is a novel method for simulating fluid dynamics. Nowadays, applications of LBM range from incompressible flow and flow in porous media to microflows. The common collision model of LBM is BGK with a constant single relaxation time τ. However, BGK suffers from numerical instabilities. These instabilities can be eliminated by implementing LBM with multiple relaxation times (MRT). Both schemes were implemented for the incompressible two-dimensional lid-driven cavity. The stability analysis was performed by finding the maximum Reynolds number and velocity for which the simulations converge. The accuracy analysis was done by comparing the velocity profiles with the benchmark results of Ghia et al. and calculating the net velocity flux. The tests concluded that LBM with MRT is more stable than BGK and has similar accuracy. The maximum Reynolds number that converges is 3200 for BGK and 7500 for MRT.
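    A minimal single-node BGK collision for the D2Q9 lattice can be sketched as follows (a simplified illustration, not the authors' code; MRT replaces the single scalar 1/τ below with a matrix of relaxation rates acting on moments):

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Equilibrium distributions f_eq_i for one node (rho scalar, u 2-vector)."""
    eu = e @ u                    # e_i . u for each direction
    usq = u @ u
    return w * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)

def bgk_collide(f, tau):
    """Single BGK collision: relax f toward equilibrium with one time tau."""
    rho = f.sum()                            # mass (conserved)
    u = (f[:, None] * e).sum(axis=0) / rho   # momentum / mass
    return f - (f - equilibrium(rho, u)) / tau
```

    Mass and momentum are conserved by construction, and an equilibrium distribution is a fixed point of the collision, which is the property the stability analysis stresses.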

  16. Fast Plasma Instrument for MMS: Simulation Results

    NASA Technical Reports Server (NTRS)

    Figueroa-Vinas, Adolfo; Adrian, Mark L.; Lobell, James V.; Simpson, David G.; Barrie, Alex; Winkert, George E.; Yeh, Pen-Shu; Moore, Thomas E.

    2008-01-01

    The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. The Dual Electron Spectrometer (DES) of the Fast Plasma Instrument (FPI) for MMS meets these demanding requirements by acquiring the electron velocity distribution functions (VDFs) for the full sky with high-resolution angular measurements every 30 ms. This will provide unprecedented access to electron-scale dynamics within the reconnection diffusion region. The DES consists of eight half-top-hat energy analyzers. Each analyzer has a 6 deg. x 11.25 deg. field of view (FOV). Full-sky coverage is achieved by electrostatically stepping the FOV of each of the eight sensors through four discrete deflection look directions. Data compression and burst memory management will provide approximately 30 minutes of high time resolution data during each orbit of the four MMS spacecraft. Each spacecraft will intelligently downlink the data sequences that contain the greatest amount of temporal structure. Here we present the results of a simulation of the DES analyzer measurements, data compression and decompression, as well as ground-based analysis, using re-processed Cluster/PEACE electron measurements as a seed. The Cluster/PEACE electron measurements have been reprocessed through virtual DES analyzers with their proper geometrical, energy, and timing scale factors and re-mapped via interpolation to the DES angular and energy phase-space sampling measurements. The results of the simulated DES measurements are analyzed and the full moments of the simulated VDFs are compared with those obtained from the Cluster/PEACE spectrometer using a standard quadrature moment, a newly implemented spectral spherical harmonic method, and a singular value decomposition method.
Our preliminary moment calculations show a remarkable agreement within the uncertainties of the measurements, with the

  17. TH-A-9A-05: Initial Setup Accuracy Comparison Between Frame-Based and Frameless Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tseng, T; Sheu, R; Todorov, B

    2014-06-15

    Purpose: To evaluate initial setup accuracy for stereotactic radiosurgery (SRS) between the Brainlab frame-based and frameless immobilization systems, and to discern the effect the frameless system has on setup parameters. Methods: The correction shifts from the original setup were compared for a total of 157 SRS cranial treatments (69 frame-based vs. 88 frameless). All treatments were performed on a Novalis linac with the ExacTrac positioning system. A localization box with isocenter overlay was used for initial setup, and the correction shift was determined by ExacTrac 6D auto-fusion to achieve submillimeter accuracy for treatment. For frameless treatments, the mean time interval between simulation and treatment was 5.7 days (range 0–13). Pearson Chi-Square was used for univariate analysis. Results: The correctional radial shifts (mean±STD, median) for the frame-based and frameless systems measured by ExacTrac were 1.2±1.2mm, 1.1mm and 3.1±3.3mm, 2.0mm, respectively. Treatments with the frameless system had a radial shift >2mm more often than those with frames (51.1% vs. 2.9%; p<.0001). To achieve submillimeter accuracy, 85.5% of frame-based treatments did not require a shift, while only 23.9% of frameless treatments succeeded with the initial setup. There was no statistically significant systematic offset observed in any direction for either system. For frameless treatments, those treated ≥ 3 days from simulation had statistically higher rates of radial shifts between 1–2mm and >2mm compared to patients treated a shorter time from simulation (34.3% and 56.7% vs. 28.6% and 33.3%, respectively; p=0.006). Conclusion: Although an image-guided positioning system can also achieve submillimeter accuracy for a frameless system, users should be cautious regarding the inherent uncertainty of its immobilization capability. A proper quality assurance procedure for frameless mask manufacturing and a protocol for intra-fraction imaging verification will be crucial for the frameless system. Time interval
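    The radial shift reported above combines the per-axis correction shifts; a small sketch of that bookkeeping (hypothetical shift values, not the study's data):

```python
def radial_shift(dx, dy, dz):
    """3D radial setup correction (mm) from per-axis shifts."""
    return (dx * dx + dy * dy + dz * dz) ** 0.5

def fraction_over(shifts, threshold=2.0):
    """Fraction of treatments whose radial shift exceeds `threshold` mm."""
    return sum(1 for s in shifts if s > threshold) / len(shifts)

# Two hypothetical treatments: per-axis shifts in mm
shifts = [radial_shift(*v) for v in [(0.5, 0.5, 0.3), (1.8, 1.2, 2.1)]]
```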

  18. The effect of spatial, spectral and radiometric factors on classification accuracy using thematic mapper data

    NASA Technical Reports Server (NTRS)

    Wrigley, R. C.; Acevedo, W.; Alexander, D.; Buis, J.; Card, D.

    1984-01-01

    A factorial-design experiment was conducted to test the effects on classification accuracy of land cover types due to the improved spatial, spectral and radiometric characteristics of the Thematic Mapper (TM) in comparison to the Multispectral Scanner (MSS). High-altitude aircraft scanner data from the Airborne Thematic Mapper instrument were acquired over central California in August 1983 and used to simulate Thematic Mapper data as well as all combinations of the three characteristics, for eight data sets in all. Results for the training sites (field-center pixels) showed better classification accuracies for MSS spatial resolution, TM spectral bands and TM radiometry, in order of importance.
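    The factorial structure of the eight data sets follows directly from crossing the three binary factors; a small sketch (the factor labels are illustrative):

```python
from itertools import product

# Three binary factors, each at the MSS or TM level, give 2**3 = 8
# simulated data sets.
factors = {
    "spatial":     ["MSS", "TM"],
    "spectral":    ["MSS", "TM"],
    "radiometric": ["MSS", "TM"],
}
datasets = [dict(zip(factors, combo)) for combo in product(*factors.values())]
```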

  19. Complexity, accuracy and practical applicability of different biogeochemical model versions

    NASA Astrophysics Data System (ADS)

    Los, F. J.; Blaas, M.

    2010-04-01

    The construction of validated biogeochemical model applications as prognostic tools for the marine environment involves a large number of choices, particularly with respect to the level of detail of the physical, chemical and biological aspects. Generally speaking, enhanced complexity might enhance veracity, accuracy and credibility. However, very complex models are not necessarily effective or efficient forecast tools. In this paper, models of varying degrees of complexity are evaluated with respect to their forecast skills. In total 11 biogeochemical model variants have been considered, based on four different horizontal grids. The applications vary in spatial resolution, in vertical resolution (2DH versus 3D), in nature of transport, in turbidity and in the number of phytoplankton species. Included models range from 15-year-old applications with relatively simple physics up to present state-of-the-art 3D models. The same year, 2003, has been simulated with all applications. During the model intercomparison it was noticed that the 'OSPAR' Goodness of Fit cost function (Villars and de Vries, 1998) leads to insufficient discrimination of different models. This results in models obtaining similar scores although closer inspection of the results reveals large differences. In this paper, therefore, we have adopted the target diagram by Jolliff et al. (2008), which provides a concise and more contrasting picture of model skill on the entire model domain and for the entire period of the simulations. Correctness in the prediction of the mean and of the variability are separated, which enhances insight into model functioning. Using the target diagrams it is demonstrated that recent models are more consistent and have smaller biases. Graphical inspection of time series confirms this, as the level of variability appears more realistic, also given the multi-annual background statistics of the observations.
Nevertheless, whether the improvements are all genuine for the particular
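    The target diagram coordinates can be sketched as follows (a simplified reading of Jolliff et al. (2008): normalized bias on one axis and signed, normalized unbiased RMSD on the other; sign conventions vary between implementations):

```python
import math

def target_coords(model, obs):
    """(x, y) coordinates of one model on a Jolliff-style target diagram.

    y: bias normalized by the observed standard deviation.
    x: unbiased RMSD, normalized, signed by whether the model over- or
       under-predicts the observed variability."""
    n = len(obs)
    mbar = sum(model) / n
    obar = sum(obs) / n
    sm = math.sqrt(sum((m - mbar) ** 2 for m in model) / n)
    so = math.sqrt(sum((o - obar) ** 2 for o in obs) / n)
    urmsd = math.sqrt(
        sum(((m - mbar) - (o - obar)) ** 2 for m, o in zip(model, obs)) / n
    )
    sign = 1.0 if sm >= so else -1.0
    return sign * urmsd / so, (mbar - obar) / so
```

    A perfect model lands at the origin; distance from the origin is the total normalized RMSD, which is what makes the diagram more discriminating than a single cost-function score.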

  20. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL
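    The subspace idea can be illustrated with a proper orthogonal decomposition (POD/KL-style) basis built from response snapshots; this is a generic sketch, not the dissertation's algorithm:

```python
import numpy as np

def reduced_basis(snapshots, energy=0.99):
    """POD/KL-style reduced basis: keep the leading left singular vectors
    of a snapshot matrix until `energy` of the squared singular values
    (the snapshot 'variance') is captured."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(frac, energy)) + 1
    return U[:, :r]

# Snapshots constructed to lie exactly in a 2-D subspace of R^6
t = np.linspace(0.0, 1.0, 40)
S = np.zeros((6, 40))
S[0] = 3.0 * np.cos(t)
S[1] = 2.0 * np.sin(t)
basis = reduced_basis(S)
```

    Forward UQ or data assimilation then operates on the few retained degrees of freedom instead of the full model state.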

  1. Experimental characterization and numerical simulation of riveted lap-shear joints using Rivet Element

    NASA Astrophysics Data System (ADS)

    Vivio, Francesco; Fanelli, Pierluigi; Ferracci, Michele

    2018-03-01

    In the aeronautical and automotive industries, the use of rivets for applications requiring several joining points is now very common. In spite of its very simple shape, a riveted junction has many contact surfaces and stress concentrations that make the local stiffness very difficult to calculate. To overcome this difficulty, finite element models with very dense meshes are commonly used for single-joint analysis, because accuracy is crucial for a correct structural analysis. However, when several riveted joints are present, the simulation becomes computationally too heavy, and significant restrictions to joint modelling are usually introduced, sacrificing the accuracy of the local stiffness evaluation. In this paper, we tested the accuracy of a rivet finite element presented in previous works by the authors. The structural behaviour of a lap-joint specimen with a rivet joining is simulated numerically and compared to experimental measurements. The Rivet Element, based on a closed-form solution of a reference theoretical model of the rivet joint, simulates the local and overall stiffness of the junction, combining high accuracy with a low degrees-of-freedom contribution. In this paper the performance of the Rivet Element is compared to that of a non-linear FE model of the rivet, built with solid elements and a dense mesh, and to experimental data. The promising results indicate that the Rivet Element can simulate, with great accuracy, actual structures with several rivet connections.

  2. Numerical simulation and analysis of accurate blood oxygenation measurement by using optical resolution photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Yu, Tianhao; Li, Qian; Li, Lin; Zhou, Chuanqing

    2016-10-01

    The accuracy of the photoacoustic signal is crucial for the measurement of oxygen saturation in functional photoacoustic imaging, and it is influenced by factors such as defocus of the laser beam, the curved shape of large vessels and the nonlinear saturation effect of optical absorption in biological tissues. We apply a Monte Carlo model to simulate energy deposition in tissues and obtain photoacoustic signals reaching a simulated focused surface detector to investigate the corresponding influence of these factors. We also apply compensation to photoacoustic imaging of in vivo cat cerebral cortex blood vessels, in which signals from different lateral positions of vessels are corrected based on the simulation results. This correction of photoacoustic images can improve the smoothness and accuracy of oxygen saturation results.
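    Once compensated absorption coefficients are available at two wavelengths, oxygen saturation follows from a 2x2 linear solve; a generic sketch with illustrative extinction coefficients (not tabulated physiological constants):

```python
def oxygen_saturation(mu_a, eps_hbo2, eps_hb):
    """Solve mu_a(lambda) = eps_HbO2(lambda)*C_HbO2 + eps_Hb(lambda)*C_Hb
    at two wavelengths, then return sO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
    (m1, m2), (a1, a2), (b1, b2) = mu_a, eps_hbo2, eps_hb
    det = a1 * b2 - a2 * b1          # 2x2 Cramer's rule
    c_hbo2 = (m1 * b2 - m2 * b1) / det
    c_hb = (a1 * m2 - a2 * m1) / det
    return c_hbo2 / (c_hbo2 + c_hb)
```

    The correction described in the abstract matters precisely because errors in `mu_a` propagate directly through this solve into sO2.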

  3. Fast simulation of yttrium-90 bremsstrahlung photons with GATE.

    PubMed

    Rault, Erwann; Staelens, Steven; Van Holen, Roel; De Beenhouwer, Jan; Vandenberghe, Stefaan

    2010-06-01

    Multiple investigators have recently reported the use of yttrium-90 (90Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging for the dosimetry of targeted radionuclide therapies. Because Monte Carlo (MC) simulations are useful for studying SPECT imaging, this study investigates the MC simulation of 90Y bremsstrahlung photons in SPECT. To overcome the computationally expensive simulation of electrons, the authors propose a fast way to simulate the emission of 90Y bremsstrahlung photons based on prerecorded bremsstrahlung photon probability density functions (PDFs). The accuracy of bremsstrahlung photon simulation is evaluated in two steps. First, the validity of the fast bremsstrahlung photon generator is checked. To that end, fast and analog simulations of photons emitted from a 90Y point source in a water phantom are compared. The same setup is then used to verify the accuracy of the bremsstrahlung photon simulations, comparing the results obtained with PDFs generated from both simulated and measured data to measurements. In both cases, the energy spectra and point spread functions of the photons detected in a scintillation camera are used. Results show that the fast simulation method is responsible for a 5% overestimation of the low-energy fluence (below 75 keV) of the bremsstrahlung photons detected using a scintillation camera. The spatial distribution of the detected photons is, however, accurately reproduced with the fast method and a computational acceleration of approximately 17-fold is achieved. When measured PDFs are used in the simulations, the simulated energy spectrum of photons emitted from a point source of 90Y in a water phantom and detected in a scintillation camera closely approximates the measured spectrum. The PSF of the photons imaged in the 50-300 keV energy window is also accurately estimated with a 12.4% underestimation of the full width at half maximum and 4.5% underestimation of the full width at tenth maximum
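    The prerecorded-PDF trick amounts to inverse-CDF sampling over tabulated photon energies; a generic sketch (illustrative energy bins, not the authors' GATE implementation):

```python
import bisect
import random

def make_sampler(energies, pdf):
    """Inverse-CDF sampler over a tabulated photon-energy PDF.

    energies: bin values (keV); pdf: unnormalized probabilities per bin.
    Returns a function drawing one energy per call."""
    total = sum(pdf)
    cdf, running = [], 0.0
    for p in pdf:
        running += p / total
        cdf.append(running)

    def sample(u=None):
        if u is None:
            u = random.random()          # uniform variate in [0, 1)
        return energies[bisect.bisect_left(cdf, u)]

    return sample

# Toy 3-bin bremsstrahlung spectrum (keV bins, relative weights)
draw = make_sampler([50, 150, 500], [1.0, 1.0, 2.0])
```

    Sampling a lookup table like this avoids tracking electrons entirely, which is where the reported ~17-fold speedup comes from.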

  4. 3D-Printed Visceral Aneurysm Models Based on CT Data for Simulations of Endovascular Embolization: Evaluation of Size and Shape Accuracy.

    PubMed

    Shibata, Eisuke; Takao, Hidemasa; Amemiya, Shiori; Ohtomo, Kuni

    2017-08-01

    The objective of this study is to verify the accuracy of 3D-printed hollow models of visceral aneurysms created from CT angiography (CTA) data, by evaluating the sizes and shapes of aneurysms and related arteries. From March 2006 to August 2015, 19 true visceral aneurysms were embolized via interventional radiologic treatment provided by the radiology department at our institution; aneurysms with bleeding (n = 3) or without thin-slice (< 1 mm) preembolization CT data (n = 1) were excluded. A total of 15 consecutive true visceral aneurysms from 11 patients (eight women and three men; mean age, 61 years; range, 53-72 years) whose aneurysms were embolized via endovascular procedures were included in this study. Three-dimensional-printed hollow models of aneurysms and related arteries were fabricated from CTA data. The accuracies of the sizes and shapes of the 3D-printed hollow models were evaluated using the nonparametric Wilcoxon signed rank test and the Dice coefficient index. Aneurysm sizes ranged from 138 to 18,691 mm³ (diameter, 6.1-35.7 mm), and no statistically significant difference was noted between patient data and 3D-printed models (p = 0.56). Shape analysis of whole aneurysms and related arteries indicated a high level of accuracy (Dice coefficient index value, 84.2-95.8%; mean [± SD], 91.1 ± 4.1%). The sizes and shapes of 3D-printed hollow visceral aneurysm models created from CTA data were accurate. These models can be used for simulations of endovascular treatment and precise anatomic information.
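    The shape comparison uses the Dice coefficient, 2|A∩B|/(|A|+|B|); a minimal sketch on toy binary voxel masks:

```python
def dice_index(mask_a, mask_b):
    """Dice coefficient of two binary voxel masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0

a = [1, 1, 1, 0, 0]   # segmented aneurysm voxels (toy example)
b = [1, 1, 0, 1, 0]   # voxels of the scanned printed model
score = dice_index(a, b)   # 2*2 / (3+3)
```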

  5. Single-breath diffusing capacity for carbon monoxide instrument accuracy across 3 health systems.

    PubMed

    Hegewald, Matthew J; Markewitz, Boaz A; Wilson, Emily L; Gallo, Heather M; Jensen, Robert L

    2015-03-01

    Measuring diffusing capacity of the lung for carbon monoxide (DLCO) is complex and associated with wide intra- and inter-laboratory variability. Increased D(LCO) variability may have important clinical consequences. The objective of the study was to assess instrument performance across hospital pulmonary function testing laboratories using a D(LCO) simulator that produces precise and repeatable D(LCO) values. D(LCO) instruments were tested with CO gas concentrations representing medium and high range D(LCO) values. The absolute difference between observed and target D(LCO) value was used to determine measurement accuracy; accuracy was defined as an average deviation from the target value of < 2.0 mL/min/mm Hg. Accuracy of inspired volume measurement and gas sensors were also determined. Twenty-three instruments were tested across 3 healthcare systems. The mean absolute deviation from the target value was 1.80 mL/min/mm Hg (range 0.24-4.23) with 10 of 23 instruments (43%) being inaccurate. High volume laboratories performed better than low volume laboratories, although the difference was not significant. There was no significant difference among the instruments by manufacturers. Inspired volume was not accurate in 48% of devices; mean absolute deviation from target value was 3.7%. Instrument gas analyzers performed adequately in all instruments. D(LCO) instrument accuracy was unacceptable in 43% of devices. Instrument inaccuracy can be primarily attributed to errors in inspired volume measurement and not gas analyzer performance. D(LCO) instrument performance may be improved by regular testing with a simulator. Caution should be used when comparing D(LCO) results reported from different laboratories. Copyright © 2015 by Daedalus Enterprises.
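    The accuracy criterion is a mean absolute deviation from the simulator target below 2.0 mL/min/mm Hg; a sketch with hypothetical readings (not the study's data):

```python
def classify_instruments(measured, target, limit=2.0):
    """Flag instruments whose mean absolute deviation from the simulator
    target exceeds `limit` (mL/min/mm Hg)."""
    results = {}
    for name, readings in measured.items():
        mad = sum(abs(r - target) for r in readings) / len(readings)
        results[name] = {"mad": mad, "accurate": mad < limit}
    return results

# Hypothetical repeated readings against a medium-range target of 20.0
report = classify_instruments(
    {"lab_A": [19.5, 20.3, 20.1], "lab_B": [16.8, 17.2, 23.9]},
    target=20.0,
)
```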

  6. Development of a Prototype Automation Simulation Scenario Generator for Air Traffic Management Software Simulations

    NASA Technical Reports Server (NTRS)

    Khambatta, Cyrus F.

    2007-01-01

    A technique for automated development of scenarios for use in the Multi-Center Traffic Management Advisor (McTMA) software simulations is described. The resulting software is designed and implemented to automate the generation of simulation scenarios with the intent of reducing the time it currently takes using an observational approach. The software program is effective in achieving this goal. The scenarios created for use in the McTMA simulations are based on data taken from data files from the McTMA system, and were manually edited before incorporation into the simulations to ensure accuracy. Despite the software's overall favorable performance, several key software issues are identified. Proposed solutions to these issues are discussed. Future enhancements to the scenario generator software may address the limitations identified in this paper.

  7. Temporal bone borehole accuracy for cochlear implantation influenced by drilling strategy: an in vitro study.

    PubMed

    Kobler, Jan-Philipp; Schoppe, Michael; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lüder A; Ortmaier, Tobias

    2014-11-01

    Minimally invasive cochlear implantation is a surgical technique which requires drilling a canal from the mastoid surface toward the basal turn of the cochlea. The choice of an appropriate drilling strategy is hypothesized to have significant influence on the achievable targeting accuracy. Therefore, a method is presented to analyze the contribution of the drilling process and drilling tool to the targeting error isolated from other error sources. The experimental setup to evaluate the borehole accuracy comprises a drill handpiece attached to a linear slide as well as a highly accurate coordinate measuring machine (CMM). Based on the specific requirements of the minimally invasive cochlear access, three drilling strategies, mainly characterized by different drill tools, are derived. The strategies are evaluated by drilling into synthetic temporal bone substitutes containing air-filled cavities to simulate mastoid cells. Deviations from the desired drill trajectories are determined based on measurements using the CMM. Using the experimental setup, a total of 144 holes were drilled for accuracy evaluation. Errors resulting from the drilling process depend on the specific geometry of the tool as well as the angle at which the drill contacts the bone surface. Furthermore, there is a risk of the drill bit deflecting due to synthetic mastoid cells. A single-flute gun drill combined with a pilot drill of the same diameter provided the best results for simulated minimally invasive cochlear implantation, based on an experimental method that may be used for testing further drilling process improvements.

  8. Improving LUC estimation accuracy with multiple classification system for studying impact of urbanization on watershed flood

    NASA Astrophysics Data System (ADS)

    Dou, P.

    2017-12-01

    Guangzhou has experienced a rapid urbanization period, called "small change in three years and big change in five years," since the reform of China, resulting in significant land use/cover changes (LUC). To overcome the disadvantages of a single classifier for remote sensing image classification accuracy, a multiple classifier system (MCS) is proposed to improve the quality of remote sensing image classification. The new method combines the advantages of different learning algorithms and achieves higher accuracy (88.12%) than any single classifier did. With the proposed MCS, land use/cover on Landsat images from 1987 to 2015 was obtained, and the LUCs were used on three watersheds (Shijing river, Chebei stream, and Shahe stream) to estimate the impact of urbanization on watershed flood. The results show that with the high-accuracy LUC, the uncertainty in flood simulations is reduced effectively (for Shijing river, Chebei stream, and Shahe stream, the uncertainty was reduced by 15.5%, 17.3% and 19.8%, respectively).

  9. Analysis of the impact of simulation model simplifications on the quality of low-energy buildings simulation results

    NASA Astrophysics Data System (ADS)

    Klimczak, Marcin; Bojarski, Jacek; Ziembicki, Piotr; Kęskiewicz, Piotr

    2017-11-01

    The requirements concerning the energy performance of buildings and their internal installations, particularly HVAC systems, have been growing continuously in Poland and all over the world. The existing traditional calculation methods, which follow from the static heat exchange model, are frequently not sufficient for a reasonable heating design of a building. Both in Poland and elsewhere in the world, methods and software are employed which allow a detailed simulation of the heating and moisture conditions in a building, as well as an analysis of the performance of HVAC systems within a building. However, these systems are usually complex and difficult to use. In addition, the development of a simulation model sufficiently adequate to the real building requires considerable designer involvement and is time-consuming and laborious. A simplification of the simulation model of a building makes it possible to reduce the costs of computer simulations. The paper analyses in detail the effect of introducing a number of different variants of the simulation model developed in Design Builder on the quality of the final results obtained. The objective of this analysis is to find simplifications which yield simulation results with an acceptable level of deviation from the detailed model, thus facilitating a quick energy performance analysis of a given building.

  10. The shared neural basis of empathy and facial imitation accuracy.

    PubMed

    Braadbaart, L; de Grauw, H; Perrett, D I; Waiter, G D; Williams, J H G

    2014-01-01

    Empathy involves experiencing emotion vicariously, and understanding the reasons for those emotions. It may be served partly by a motor simulation function, and therefore share a neural basis with imitation (as opposed to mimicry), as both involve sensorimotor representations of intentions based on perceptions of others' actions. We recently showed a correlation between imitation accuracy and Empathy Quotient (EQ) using a facial imitation task and hypothesised that this relationship would be mediated by the human mirror neuron system. During functional Magnetic Resonance Imaging (fMRI), 20 adults observed novel 'blends' of facial emotional expressions. According to instruction, they either imitated (i.e. matched) the expressions or executed alternative, pre-prescribed mismatched actions as control. Outside the scanner we replicated the association between imitation accuracy and EQ. During fMRI, activity was greater during mismatch compared to imitation, particularly in the bilateral insula. Activity during imitation correlated with EQ in somatosensory cortex, intraparietal sulcus and premotor cortex. Imitation accuracy correlated with activity in insula and areas serving motor control. Overlapping voxels for the accuracy and EQ correlations occurred in premotor cortex. We suggest that both empathy and facial imitation rely on formation of action plans (or a simulation of others' intentions) in the premotor cortex, in connection with representations of emotional expressions based in the somatosensory cortex. In addition, the insula may play a key role in the social regulation of facial expression. © 2013.

  11. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.

    PubMed

    Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter

    2013-12-06

    In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. 
Methods 5 and 7 were the fastest and produced the least
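    The indirect estimate described above divides predictive ability (the correlation of predicted breeding values with phenotypes) by the square root of heritability; a minimal sketch:

```python
import math

def predictive_accuracy(predicted, phenotypes, heritability):
    """Indirect estimate of genomic prediction accuracy:
    Pearson correlation(predicted, phenotypes) / sqrt(heritability)."""
    n = len(predicted)
    mp = sum(predicted) / n
    my = sum(phenotypes) / n
    cov = sum((p - mp) * (y - my) for p, y in zip(predicted, phenotypes))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    sy = math.sqrt(sum((y - my) ** 2 for y in phenotypes))
    ability = cov / (sp * sy)      # predictive ability
    return ability / math.sqrt(heritability)
```

    The division is why an error in the heritability estimate propagates directly into the accuracy estimate, which matches the study's finding that the estimated genetic variance dominates the variation.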

  12. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

    Aiming at the influence of round-grating dividing error and rolling-wheel eccentricity and surface shape errors, the paper provides an amendment method, based on the rolling wheel, to obtain a composite error model that includes all of the influence factors above, and then corrects the non-circular measurement angle error of the rolling wheel. We performed software simulation verification and experiments; the results indicate that the composite error amendment method can improve diameter measurement accuracy under the rolling-wheel theory. It has wide application prospects for measurement accuracy requirements higher than 5 μm/m.

  13. The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1981-01-01

    Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations, such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of the use of the Cramer-Rao bound are given. Engineering judgement, aided by such analytical tools, is the final arbiter of accuracy estimation.
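    The Cramer-Rao bound's role as an accuracy predictor can be illustrated on the simplest case, estimating the mean of Gaussian data, where the bound sigma^2/N can be checked against the empirical variance of the estimator (a textbook sketch, unrelated to the flight-data estimator itself):

```python
import random
import statistics

def cr_bound_mean(sigma, n):
    """Cramer-Rao bound for the mean of N(mu, sigma^2) from n samples:
    var(mu_hat) >= sigma^2 / n, attained by the sample mean."""
    return sigma ** 2 / n

def empirical_estimator_variance(mu, sigma, n, trials, seed=1):
    """Variance of the sample-mean estimator over repeated experiments."""
    rng = random.Random(seed)
    means = [
        statistics.fmean(rng.gauss(mu, sigma) for _ in range(n))
        for _ in range(trials)
    ]
    return statistics.pvariance(means)
```

    With colored noise or modeling error the uncorrected bound can badly underpredict the true scatter, which is the discrepancy the paper's corrected computation addresses.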

  14. Effects of Recovery Behavior and Strain-Rate Dependence of Stress-Strain Curve on Prediction Accuracy of Thermal Stress Analysis During Casting

    NASA Astrophysics Data System (ADS)

    Motoyama, Yuichi; Shiga, Hidetoshi; Sato, Takeshi; Kambe, Hiroshi; Yoshida, Makoto

    2017-06-01

    Recovery behavior (recovery) and strain-rate dependence of the stress-strain curve (strain-rate dependence) are incorporated into constitutive equations of alloys to predict residual stress and thermal stress during casting. Nevertheless, few studies have systematically investigated the effects of these metallurgical phenomena on the prediction accuracy of thermal stress in a casting. This study compares the thermal stress analysis results with in situ thermal stress measurement results of an Al-Si-Cu specimen during casting. The results underscore the importance for the alloy constitutive equation of incorporating strain-rate dependence to predict thermal stress that develops at high temperatures where the alloy shows strong strain-rate dependence of the stress-strain curve. However, the prediction accuracy of the thermal stress developed at low temperatures did not improve by considering the strain-rate dependence. Incorporating recovery into the constitutive equation improved the accuracy of the simulated thermal stress at low temperatures. Results of comparison implied that the constitutive equation should include strain-rate dependence to simulate defects that develop from thermal stress at high temperatures, such as hot tearing and hot cracking. Recovery should be incorporated into the alloy constitutive equation to predict the casting residual stress and deformation caused by the thermal stress developed mainly in the low temperature range.

  15. Estimating Achievable Accuracy for Global Imaging Spectroscopy Measurement of Non-Photosynthetic Vegetation Cover

    NASA Astrophysics Data System (ADS)

    Dennison, P. E.; Kokaly, R. F.; Daughtry, C. S. T.; Roberts, D. A.; Thompson, D. R.; Chambers, J. Q.; Nagler, P. L.; Okin, G. S.; Scarth, P.

    2016-12-01

    Terrestrial vegetation is dynamic, expressing seasonal, annual, and long-term changes in response to climate and disturbance. Phenology and disturbance (e.g. drought, insect attack, and wildfire) can result in a transition from photosynthesizing "green" vegetation to non-photosynthetic vegetation (NPV). NPV cover can include dead and senescent vegetation, plant litter, agricultural residues, and non-photosynthesizing stem tissue. NPV cover is poorly captured by conventional remote sensing vegetation indices, but it is readily separable from substrate cover based on spectral absorption features in the shortwave infrared. We will present past research motivating the need for global NPV measurements, establishing that mapping seasonal NPV cover is critical for improving our understanding of ecosystem function and carbon dynamics. We will also present new research that helps determine a best achievable accuracy for NPV cover estimation. To test the sensitivity of different NPV cover estimation methods, we simulated satellite imaging spectrometer data using field spectra collected over mixtures of NPV, green vegetation, and soil substrate. We incorporated atmospheric transmittance and modeled sensor noise to create simulated spectra with spectral resolutions ranging from 10 to 30 nm. We applied multiple methods of NPV estimation to the simulated spectra, including spectral indices, spectral feature analysis, multiple endmember spectral mixture analysis, and partial least squares regression, and compared the accuracy and bias of each method. These results prescribe sensor characteristics for an imaging spectrometer mission with NPV measurement capabilities, as well as a "Quantified Earth Science Objective" for global measurement of NPV cover. Copyright 2016, all rights reserved.
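
The linear spectral mixture analysis underlying several of the estimation methods named above can be sketched in a few lines. The endmember spectra and fractions below are invented four-band stand-ins for real imaging-spectrometer libraries with hundreds of narrow bands.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical 4-band endmember spectra (columns: NPV, green vegetation, soil)
E = np.array([
    [0.30, 0.05, 0.25],
    [0.35, 0.45, 0.30],
    [0.40, 0.25, 0.35],
    [0.45, 0.20, 0.40],
])

true_frac = np.array([0.5, 0.3, 0.2])   # e.g. 50% NPV cover in this pixel
mixed = E @ true_frac                    # linear mixing model
frac, residual = nnls(E, mixed)          # nonnegative least-squares unmixing
frac = frac / frac.sum()                 # normalize to fractional cover
```

In practice the mixed spectrum comes from the sensor and carries noise and atmospheric effects, so the recovered fractions only approximate the true cover; quantifying that gap across methods is the point of the sensitivity study.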

  16. Design and evaluation of an augmented reality simulator using leap motion.

    PubMed

    Wright, Trinette; de Ribaupierre, Sandrine; Eagleson, Roy

    2017-10-01

    Advances in virtual and augmented reality (AR) are having an impact on the medical field in areas such as surgical simulation. Improvements to surgical simulation will provide students and residents with additional training and evaluation methods. This is particularly important for procedures such as the endoscopic third ventriculostomy (ETV), which residents perform regularly. Simulators such as NeuroTouch have been designed to aid in training for this procedure. The authors have designed an affordable and easily accessible ETV simulator, and compare it with the existing NeuroTouch for usability and training effectiveness. The simulator was developed using Unity, Vuforia, and the Leap Motion (LM) for an AR environment. The participants, 16 novices and two expert neurosurgeons, were asked to complete 40 targeting tasks. Participants used the NeuroTouch tool or a virtual hand controlled by the LM to select the position and orientation for these tasks. The time to complete each task was recorded, and the trajectory log files were used to calculate performance. The novices' and experts' speed and accuracy are compared, and the objective training performance of each system is discussed in terms of targeting speed and accuracy.

  17. Design and evaluation of an augmented reality simulator using leap motion

    PubMed Central

    de Ribaupierre, Sandrine; Eagleson, Roy

    2017-01-01

    Advances in virtual and augmented reality (AR) are having an impact on the medical field in areas such as surgical simulation. Improvements to surgical simulation will provide students and residents with additional training and evaluation methods. This is particularly important for procedures such as the endoscopic third ventriculostomy (ETV), which residents perform regularly. Simulators such as NeuroTouch have been designed to aid in training for this procedure. The authors have designed an affordable and easily accessible ETV simulator, and compare it with the existing NeuroTouch for usability and training effectiveness. The simulator was developed using Unity, Vuforia, and the Leap Motion (LM) for an AR environment. The participants, 16 novices and two expert neurosurgeons, were asked to complete 40 targeting tasks. Participants used the NeuroTouch tool or a virtual hand controlled by the LM to select the position and orientation for these tasks. The time to complete each task was recorded, and the trajectory log files were used to calculate performance. The novices' and experts' speed and accuracy are compared, and the objective training performance of each system is discussed in terms of targeting speed and accuracy. PMID:29184667

  18. Multiscale optical simulation settings: challenging applications handled with an iterative ray-tracing FDTD interface method.

    PubMed

    Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian

    2016-03-20

    We show that with an appropriate combination of two optical simulation techniques, classical ray tracing and the finite-difference time-domain (FDTD) method, an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.

  19. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  20. Efficient and Robust Optimization for Building Energy Simulation.

    PubMed

    Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda

    2016-06-15

    Efficiently, robustly, and accurately solving large sets of structured, nonlinear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness, and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component-based building system simulation tool. HVACSIM+ presently employs Powell's Hybrid method to solve the systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that Powell's method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds that considerable computational benefits result from replacing the Powell's Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to the Powell's Hybrid method presently used in HVACSIM+.

  1. Systematic review of discharge coding accuracy

    PubMed Central

    Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.

    2012-01-01

    Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects; for example, primary diagnosis accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), P = 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302

  2. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    USDA-ARS?s Scientific Manuscript database

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...

  3. [Results of Simulation Studies]

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Lattice Monte Carlo and off-lattice molecular dynamics simulations of h1t4 and h4t1 (head/tail) amphiphile solutions have been performed as a function of surfactant concentration and temperature. The lattice and off-lattice systems exhibit quite different self-assembly behavior at equivalent thermodynamic conditions. We found that in the weakly aggregating regime (no preferred-size micelles), all models yield similar micelle size distributions at the same average aggregation number, albeit at different thermodynamic conditions (temperatures). In the strongly aggregating regime, this mapping between models (through temperature adjustment) fails, and the models exhibit qualitatively different micellization behavior. Incipient micellization in a model self-associating telechelic polymer solution results in a network with a transient elastic response that decays by a two-step relaxation: the first is due to a heterogeneous jump-diffusion process involving entrapment of end-groups within well-defined clusters and this is followed by rapid diffusion to neighboring clusters and a decay (terminal relaxation) due to cluster disintegration. The viscoelastic response of the solution manifests characteristics of a glass transition and entangled polymer network.

  4. Existing methods for improving the accuracy of digital-to-analog converters

    NASA Astrophysics Data System (ADS)

    Eielsen, Arnfinn A.; Fleming, Andrew J.

    2017-09-01

    The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
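
The idea behind large periodic dithering can be illustrated with a toy mismatched DAC: sweeping the input over many codes and averaging replaces a single level error with the mean of many, shrinking the effective mismatch. All numbers below (resolution, mismatch level, dither span) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_levels, lsb = 256, 1.0 / 256
# Static output-level errors (element mismatch), a fraction of an LSB each
level_err = rng.normal(scale=0.3 * lsb, size=n_levels)

def dac(codes):
    """Mismatched DAC: ideal level plus a fixed per-level error."""
    return codes * lsb + level_err[codes]

def convert(x, dither_codes):
    """Quantize x, add a digital dither, convert, then remove the dither."""
    codes = int(round(x / lsb)) + dither_codes
    return np.mean(dac(codes) - dither_codes * lsb)

xs = np.linspace(0.1, 0.9, 201)
ideal = np.round(xs / lsb) * lsb
no_dither = np.array([convert(x, np.array([0])) for x in xs])
# Large periodic dither sweeping 32 codes: each output error becomes the
# average of 32 level errors, shrinking the effective mismatch by ~sqrt(32)
sweep = np.arange(-16, 16)
dithered = np.array([convert(x, sweep) for x in xs])
rms_plain = np.sqrt(np.mean((no_dither - ideal) ** 2))
rms_dither = np.sqrt(np.mean((dithered - ideal) ** 2))
```

A real implementation applies the dither as a high-frequency periodic waveform and removes it with an analog filter rather than by averaging samples, but the error-averaging mechanism is the same.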

  5. Team Communication Influence on Procedure Performance: Findings From Interprofessional Simulations with Nursing and Medical Students.

    PubMed

    Reising, Deanna L; Carr, Douglas E; Gindling, Sally; Barnes, Roxie; Garletts, Derrick; Ozdogan, Zulfukar

    Interprofessional team performance is believed to be dependent on the development of effective team communication skills. Yet, little evidence exists in undergraduate nursing programs on whether team communication skills affect team performance. A secondary analysis of a larger study on interprofessional student teams in simulations was conducted to determine if there is a relationship between team communication and team procedure performance. The results showed a positive, significant correlation between interprofessional team communication ratings and procedure accuracy in the simulation. Interprofessional team training in communication skills for nursing and medical students improves the procedure accuracy in a simulated setting.

  6. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    PubMed Central

    Rettmann, Maryam E.; Holmes, David R.; Kwartowitz, David M.; Gunawan, Mia; Johnson, Susan B.; Camp, Jon J.; Cameron, Bruce M.; Dalegrave, Charles; Kolasa, Mark W.; Packer, Douglas L.; Robb, Richard A.

    2014-01-01

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved

  7. [Simulation of lung motions using an artificial neural network].

    PubMed

    Laurent, R; Henriet, J; Salomon, M; Sauget, M; Nguyen, F; Gschwind, R; Makovicka, L

    2011-04-01

    A way to improve the accuracy of lung radiotherapy for a patient is to get a better understanding of the patient's lung motion. Indeed, with this knowledge it becomes possible to follow the displacements of the clinical target volume (CTV) induced by breathing. This paper presents a feasibility study of an original method to simulate the positions of points in a patient's lung at all breathing phases. The method, based on an artificial neural network, learns the lung motion from real cases and then simulates it for new patients for whom only the beginning and end breathing data are known. The neural network learning set is made up of more than 600 points. These points, distributed across three patients and gathered in a specific lung area, were plotted by an MD. The first results are promising: an average accuracy of 1 mm is obtained for a spatial resolution of 1 × 1 × 2.5 mm³. We have demonstrated that it is possible to simulate lung motion accurately using an artificial neural network. As future work we plan to improve the accuracy of our method with the addition of new patient data and coverage of the whole lungs. Copyright © 2010 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
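
As a rough illustration of the learning setup (not the authors' network or data), a small multilayer perceptron can be trained to map a point's full breathing displacement and a phase value to its intermediate displacement. The synthetic motion profile below is an assumption made purely for the sketch.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Synthetic stand-in for the clinical data: each sample is a lung point's
# end-exhale-to-end-inhale displacement plus a breathing phase, and the
# target is its displacement at that phase (here a smooth cosine blend).
n = 2000
disp = rng.uniform(-5, 15, (n, 3))        # full breathing displacement, mm
phase = rng.uniform(0, 1, (n, 1))         # 0 = end exhale, 1 = end inhale
w = 0.5 * (1 - np.cos(np.pi * phase))     # assumed smooth motion profile
y = w * disp                              # displacement at that phase

X = np.hstack([disp, phase])
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                     random_state=0).fit(X[:1500], y[:1500])
pred_err = np.abs(model.predict(X[1500:]) - y[1500:]).mean()  # mean error, mm
```

The real study learned motion from manually plotted points across breathing phases; the point of the sketch is only the input/output structure of such a regression.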

  8. Statistical algorithms improve accuracy of gene fusion detection

    PubMed Central

    Hsieh, Gillian; Bierman, Rob; Szabo, Linda; Lee, Alex Gia; Freeman, Donald E.; Watson, Nathaniel; Sweet-Cordero, E. Alejandro

    2017-01-01

    Gene fusions are known to play critical roles in tumor pathogenesis. Yet, sensitive and specific algorithms to detect gene fusions in cancer do not currently exist. In this paper, we present a new statistical algorithm, MACHETE (Mismatched Alignment CHimEra Tracking Engine), which achieves highly sensitive and specific detection of gene fusions from RNA-Seq data, including the highest Positive Predictive Value (PPV) compared to the current state-of-the-art, as assessed in simulated data. We show that the best performing published algorithms either find large numbers of fusions in negative control data or suffer from low sensitivity detecting known driving fusions in gold standard settings, such as EWSR1-FLI1. As proof of principle that MACHETE discovers novel gene fusions with high accuracy in vivo, we mined public data to discover and subsequently PCR validate novel gene fusions missed by other algorithms in the ovarian cancer cell line OVCAR3. These results highlight the gains in accuracy achieved by introducing statistical models into fusion detection, and pave the way for unbiased discovery of potentially driving and druggable gene fusions in primary tumors. PMID:28541529

  9. Accuracy versus transparency in pharmacoeconomic modelling: finding the right balance.

    PubMed

    Eddy, David M

    2006-01-01

    As modellers push to make their models more accurate, the ability of others to understand the models can decrease, causing the models to lose transparency. When this type of conflict between accuracy and transparency occurs, the question arises, "Where do we want to operate on that spectrum?" This paper argues that in such cases we should give absolute priority to accuracy: push for whatever degree of accuracy is needed to answer the question being asked, try to maximise transparency within that constraint, and find other ways to replace what we wanted to get from transparency. There are several reasons. The fundamental purpose of a model is to help us get the right answer to a question and, by any measure, the expected value of a model is proportional to its accuracy. Ironically, we use transparency as a way to judge accuracy. But transparency is not a very powerful or useful way to do this. It rarely enables us to actually replicate the model's results and, even if we could, replication would not tell us the model's accuracy. Transparency rarely provides even face validity; from the content expert's perspective, the simplifications that modellers have to make usually raise more questions than they answer. Transparency does enable modellers to alert users to weaknesses in their models, but that can be achieved simply by listing the model's limitations and does not get us any closer to real accuracy. Sensitivity analysis tests the importance of uncertainty about the variables in a model, but does not tell us about the variables that were omitted or the structure of the model. What people really want to know is whether a model actually works. Transparency by itself can't answer this; only demonstrations that the model accurately calculates or predicts real events can. Rigorous simulations of clinical trials are a good place to start. This is the type of empirical validation we need to provide if the potential of mathematical models in pharmacoeconomics is to be

  10. Petascale Kinetic Simulations in Space Sciences: New Simulations and Data Discovery Techniques and Physics Results

    NASA Astrophysics Data System (ADS)

    Karimabadi, Homa

    2012-03-01

    Recent advances in simulation technology and hardware are enabling breakthrough science where many longstanding problems can now be addressed for the first time. In this talk, we focus on kinetic simulations of the Earth's magnetosphere and magnetic reconnection process which is the key mechanism that breaks the protective shield of the Earth's dipole field, allowing the solar wind to enter the Earth's magnetosphere. This leads to the so-called space weather where storms on the Sun can affect space-borne and ground-based technological systems on Earth. The talk will consist of three parts: (a) overview of a new multi-scale simulation technique where each computational grid is updated based on its own unique timestep, (b) Presentation of a new approach to data analysis that we refer to as Physics Mining which entails combining data mining and computer vision algorithms with scientific visualization to extract physics from the resulting massive data sets. (c) Presentation of several recent discoveries in studies of space plasmas including the role of vortex formation and resulting turbulence in magnetized plasmas.

  11. A data-driven dynamics simulation framework for railway vehicles

    NASA Astrophysics Data System (ADS)

    Nie, Yinyu; Tang, Zhao; Liu, Fengjia; Chang, Jian; Zhang, Jianjun

    2018-03-01

    The finite element (FE) method is essential for simulating vehicle dynamics with fine details, especially for train crash simulations. However, factors such as the complexity of meshes and the distortion involved in a large deformation would undermine its calculation efficiency. An alternative method, multi-body (MB) dynamics simulation, provides satisfactory time efficiency but limited accuracy when highly nonlinear dynamic processes are involved. To retain the advantages of both methods, this paper proposes a data-driven simulation framework for dynamics simulation of railway vehicles. This framework uses machine learning techniques to extract nonlinear features from training data generated by FE simulations, so that specific mesh structures can be represented by a surrogate element (or surrogate elements) replacing the original mechanical elements, and the dynamics simulation can be implemented by co-simulation with the surrogate element(s) embedded into a MB model. The framework consists of a series of techniques including data collection, feature extraction, training data sampling, surrogate element building, and model evaluation and selection. To verify the feasibility of this framework, we present two case studies, a vertical dynamics simulation and a longitudinal dynamics simulation, based on co-simulation with MATLAB/Simulink and Simpack, and a further comparison with a popular data-driven model (the Kriging model) is provided. The simulation results show that using the Legendre polynomial regression model to build surrogate elements can largely cut down the simulation time without sacrificing accuracy.
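
The surrogate idea can be sketched with a one-dimensional stand-in: fit a Legendre polynomial to sampled responses of an "expensive" element and evaluate the cheap fit thereafter. The element law, sampling, and polynomial degree below are invented for illustration.

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical nonlinear force-displacement law standing in for an FE element
def fe_element(x):
    return 1000.0 * np.tanh(3.0 * x) + 50.0 * x

x_train = np.linspace(-1, 1, 50)     # training data from "FE runs"
f_train = fe_element(x_train)
surrogate = legendre.Legendre.fit(x_train, f_train, deg=12)

x_test = np.linspace(-0.95, 0.95, 200)
max_err = np.max(np.abs(surrogate(x_test) - fe_element(x_test)))
rel_err = max_err / np.max(np.abs(fe_element(x_test)))
```

Once fitted, evaluating the polynomial inside each MB time step costs a handful of multiply-adds instead of an FE solve, which is where the reported speedup comes from.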

  12. A simulation study of Large Area Crop Inventory Experiment (LACIE) technology

    NASA Technical Reports Server (NTRS)

    Ziegler, L. (Principal Investigator); Potter, J.

    1979-01-01

    The author has identified the following significant results. The LACIE performance predictor (LPP) was used to replicate LACIE phase 2 for a 15 year period, using accuracy assessment results for phase 2 error components. Results indicated that the LPP simulated the LACIE phase 2 procedures reasonably well. For the 15 year simulation, only 7 of the 15 production estimates were within 10 percent of the true production. The simulations indicated that the acreage estimator, based on CAMS phase 2 procedures, has a negative bias. This bias was too large to support the 90/90 criterion with the CV observed and simulated for the phase 2 production estimator. Results of this simulation study validate the theory that the acreage variance estimator in LACIE was conservative.

  13. Accuracy and equivalence testing of crown ratio models and assessment of their impact on diameter growth and basal area increment predictions of two variants of the Forest Vegetation Simulator

    Treesearch

    Laura P. Leites; Andrew P. Robinson; Nicholas L. Crookston

    2009-01-01

    Diameter growth (DG) equations in many existing forest growth and yield models use tree crown ratio (CR) as a predictor variable. Where CR is not measured, it is estimated from other measured variables. We evaluated CR estimation accuracy for the models in two Forest Vegetation Simulator variants: the exponential and the logistic CR models used in the North...

  14. Three-Dimensional Radiative Hydrodynamics for Disk Stability Simulations: A Proposed Testing Standard and New Results

    NASA Astrophysics Data System (ADS)

    Boley, Aaron C.; Durisen, Richard H.; Nordlund, Åke; Lord, Jesse

    2007-08-01

    Recent three-dimensional radiative hydrodynamics simulations of protoplanetary disks report disparate disk behaviors, and these differences involve the importance of convection to disk cooling, the dependence of disk cooling on metallicity, and the stability of disks against fragmentation and clump formation. To guarantee trustworthy results, a radiative physics algorithm must demonstrate the capability to handle both the high and low optical depth regimes. We develop a test suite that can be used to demonstrate an algorithm's ability to relax to known analytic flux and temperature distributions, to follow a contracting slab, and to inhibit or permit convection appropriately. We then show that the radiative algorithm employed by Mejía and Boley et al. and the algorithm employed by Cai et al. pass these tests with reasonable accuracy. In addition, we discuss a new algorithm that couples flux-limited diffusion with vertical rays, we apply the test suite, and we discuss the results of evolving the Boley et al. disk with this new routine. Although the outcome is significantly different in detail with the new algorithm, we obtain the same qualitative answers. Our disk does not cool fast due to convection, and it is stable to fragmentation. We find an effective α ~ 10⁻². In addition, transport is dominated by low-order modes.

  15. Simulation study on the maximum continuous working condition of a power plant boiler

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Han, Jiting; Sun, Haitian; Cheng, Jiwei; Jing, Ying'ai; Li, Wenbo

    2018-05-01

    First, the boiler is briefly introduced to determine the mathematical model and the boundary conditions. A numerical simulation of the boiler under the BMCR (boiler maximum continuous rating) condition is then performed, followed by an analysis of the temperature field under the BMCR condition. The simulation results are verified against the boiler's actual test results and the hot-state BMCR boiler output test results. The main conclusions are as follows: the position and size of the inscribed circle in the furnace and the furnace temperature distributions at different elevations are compared with test results, verifying the accuracy of the numerical simulation results.

  16. Earth resources mission performance studies. Volume 2: Simulation results

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Simulations were made at three month intervals to investigate the EOS mission performance over the four seasons of the year. The basic objectives of the study were: (1) to evaluate the ability of an EOS type system to meet a representative set of specific collection requirements, and (2) to understand the capabilities and limitations of the EOS that influence the system's ability to satisfy certain collection objectives. Although the results were obtained from a consideration of a two sensor EOS system, the analysis can be applied to any remote sensing system having similar optical and operational characteristics. While the category related results are applicable only to the specified requirement configuration, the results relating to general capability and limitations of the sensors can be applied in extrapolating to other U.S. based EOS collection requirements. The TRW general purpose mission simulator and analytic techniques discussed in this report can be applied to a wide range of collection and planning problems of earth orbiting imaging systems.

  17. Effect of Lamina Thickness of Prepreg on the Surface Accuracy of Carbon Fiber Composite Space Mirrors

    NASA Astrophysics Data System (ADS)

    Yang, Zhiyong; Tang, Zhanwen; Xie, Yongjie; Shi, Hanqiao; Zhang, Boming; Guo, Hongjun

    2018-02-01

    A composite space mirror can faithfully replicate the high-precision surface of its mould through a replication process, but the actual surface accuracy of the replicated composite mirror always decreases. The lamina thickness of the prepreg determines the number of layers and the layup sequence of a composite space mirror, which in turn affect the mirror's surface accuracy. In our research, two groups of contrasting cases were studied through finite element analyses (FEA) and comparative experiments, focusing on the effect of different prepreg lamina thicknesses and the corresponding layup sequences. We describe a dedicated analysis model, the validated process, and the result analysis. The simulated and measured surface figures lead to the same conclusion: reducing the lamina thickness of the prepreg used in replicating a composite space mirror facilitates optimal design of the layup sequence and can improve the mirror's surface accuracy.

  18. Predicting the Accuracy of Protein–Ligand Docking on Homology Models

    PubMed Central

    BORDOGNA, ANNALISA; PANDINI, ALESSANDRO; BONATI, LAURA

    2011-01-01

    Ligand–protein docking is increasingly used in Drug Discovery. The initial limitations imposed by a reduced availability of target protein structures have been overcome by the use of theoretical models, especially those derived by homology modeling techniques. While this greatly extended the use of docking simulations, it also introduced the need for general and robust criteria to estimate the reliability of docking results given the model quality. To this end, a large-scale experiment was performed on a diverse set including experimental structures and homology models for a group of representative ligand–protein complexes. A wide spectrum of model quality was sampled using templates at different evolutionary distances and different strategies for target–template alignment and modeling. The obtained models were scored by a selection of the most used model quality indices. The binding geometries were generated using AutoDock, one of the most common docking programs. An important result of this study is that indeed quantitative and robust correlations exist between the accuracy of docking results and the model quality, especially in the binding site. Moreover, state-of-the-art indices for model quality assessment are already an effective tool for an a priori prediction of the accuracy of docking experiments in the context of groups of proteins with conserved structural characteristics. PMID:20607693

  19. A method for data handling numerical results in parallel OpenFOAM simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anton, Alin; Muntean, Sebastian

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.

  20. A multiscale approach to accelerate pore-scale simulation of porous electrodes

    NASA Astrophysics Data System (ADS)

    Zheng, Weibo; Kim, Seung Hyun

    2017-04-01

    A new method to accelerate pore-scale simulation of porous electrodes is presented. The method combines the macroscopic approach with pore-scale simulation by decomposing a physical quantity into macroscopic and local variations. The multiscale method is applied to the potential equation in pore-scale simulation of a Proton Exchange Membrane Fuel Cell (PEMFC) catalyst layer, and validated with the conventional approach for pore-scale simulation. Results show that the multiscale scheme substantially reduces the computational cost without sacrificing accuracy.
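    The decomposition described above, splitting a quantity into a macroscopic part plus a local variation, can be sketched in a few lines. The blockwise average below is an illustrative stand-in (the function name and the block-average choice are assumptions, not the authors' actual scheme for the potential equation):

```python
def decompose(field, block):
    """Split a 1D field into a blockwise 'macroscopic' part and the
    local variation around it, so that field = macro + local."""
    macro = []
    for i in range(0, len(field), block):
        chunk = field[i:i + block]
        mean = sum(chunk) / len(chunk)
        macro.extend([mean] * len(chunk))  # coarse, slowly varying part
    local = [f - m for f, m in zip(field, macro)]  # pore-scale fluctuation
    return macro, local
```

    The macroscopic part can then be advanced cheaply on a coarse grid, while only the local variation requires pore-scale resolution.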

  1. Numerical simulation of cavitating flows in shipbuilding

    NASA Astrophysics Data System (ADS)

    Bagaev, D.; Yegorov, S.; Lobachev, M.; Rudnichenko, A.; Taranov, A.

    2018-05-01

    The paper presents validation of numerical simulations of cavitating flows around different marine objects carried out at the Krylov State Research Centre (KSRC). Preliminary validation was done with reference to international test objects. The main part of the paper contains results of solving practical problems of ship propulsion design. Validation of the numerical simulations against experimental data shows that the supercomputer technologies available at KSRC predict both hydrodynamic and cavitation characteristics with good accuracy.

  2. Tracking accuracy assessment for concentrator photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Norton, Matthew S. H.; Anstey, Ben; Bentley, Roger W.; Georghiou, George E.

    2010-10-01

    The accuracy to which a concentrator photovoltaic (CPV) system can track the sun is an important parameter that influences a number of measurements that indicate the performance efficiency of the system. This paper presents work carried out into determining the tracking accuracy of a CPV system, and illustrates the steps involved in gaining an understanding of the tracking accuracy. A Trac-Stat SL1 accuracy monitor has been used in the determination of pointing accuracy and has been integrated into the outdoor CPV module test facility at the Photovoltaic Technology Laboratories in Nicosia, Cyprus. Results from this work are provided to demonstrate how important performance indicators may be presented, and how the reliability of results is improved through the deployment of such accuracy monitors. Finally, recommendations on the use of such sensors are provided as a means to improve the interpretation of real outdoor performance.

  3. Migrating Shoals on Ebb-tidal Deltas: Results from Numerical Simulations

    NASA Astrophysics Data System (ADS)

    van der Vegt, M.; Ridderinkhof, W.; De Swart, H. E.; Hoekstra, P.

    2016-02-01

    Many ebb-tidal deltas show repetitive patterns of channel-shoal generation, migration and attachment of shoals to the downdrift barrier coast. For the Wadden Sea coast along the Dutch, German and Danish coastlines, the typical time scale of shoal attachment ranges from several years to about a hundred years. There is a weak correlation between the tidal prism and the typical time scale of shoal attachment. The main aim of this research is to clarify the physical processes that result in the formation of shoals on ebb-tidal deltas and to study what determines their propagation speed. To this end, numerical simulations were performed in Delft3D. Starting from an idealized geometry with a sloping bed on the shelf sea and a flat bed in the back-barrier basin, the model was spun up until an approximate morphodynamic steady state was realized. The model was forced with tides and constant wave forcing based on the yearly average conditions along the Dutch Wadden coast. The resulting ebb-tidal delta is called the equilibrium delta. Next, two types of scenarios were run. First, the equilibrium delta was breached by creating a channel and adding the removed sand volume to the downdrift shoal. Second, the wave climate was made more realistic by adding storms, and its effect on the equilibrium delta was subsequently simulated. Based on the model results we conclude the following. First, the model is able to realistically simulate the migration of shoals and their attachment to the downdrift barrier island. Second, larger waves result in faster propagation of the shoals. Third, the simulations suggest that shoals only migrate when they are shallower than a critical maximum depth with respect to the wave height. These shallow shoals can be 'man-made' or generated during storms. When no storms were added to the wave climate and the bed was not artificially disturbed, no migrating shoals were simulated. During the presentation the underlying physical processes will be discussed in detail.

  4. Do knowledge, knowledge sources and reasoning skills affect the accuracy of nursing diagnoses? a randomised study

    PubMed Central

    2012-01-01

    Background This paper reports a study about the effect of knowledge sources, such as handbooks, an assessment format and a predefined record structure for diagnostic documentation, as well as the influence of knowledge, disposition toward critical thinking and reasoning skills, on the accuracy of nursing diagnoses. Knowledge sources can support nurses in deriving diagnoses. A nurse’s disposition toward critical thinking and reasoning skills is also thought to influence the accuracy of his or her nursing diagnoses. Method A randomised factorial design was used in 2008–2009 to determine the effect of knowledge sources. We used the following instruments to assess the influence of ready knowledge, disposition, and reasoning skills on the accuracy of diagnoses: (1) a knowledge inventory, (2) the California Critical Thinking Disposition Inventory, and (3) the Health Science Reasoning Test. Nurses (n = 249) were randomly assigned to one of four factorial groups, and were instructed to derive diagnoses based on an assessment interview with a simulated patient/actor. Results The use of a predefined record structure resulted in a significantly higher accuracy of nursing diagnoses. A regression analysis reveals that almost half of the variance in the accuracy of diagnoses is explained by the use of a predefined record structure, a nurse’s age and the reasoning skills of 'deduction' and 'analysis'. Conclusions Improving nurses’ dispositions toward critical thinking and reasoning skills, and the use of a predefined record structure, improves accuracy of nursing diagnoses. PMID:22852577

  5. Multiscale Macromolecular Simulation: Role of Evolving Ensembles

    PubMed Central

    Singharoy, A.; Joshi, H.; Ortoleva, P.J.

    2013-01-01

    Multiscale analysis provides an algorithm for the efficient simulation of macromolecular assemblies. This algorithm involves the coevolution of a quasiequilibrium probability density of atomic configurations and the Langevin dynamics of spatial coarse-grained variables denoted order parameters (OPs) characterizing nanoscale system features. In practice, implementation of the probability density involves the generation of constant-OP ensembles of atomic configurations. Such ensembles are used to construct thermal forces and diffusion factors that mediate the stochastic OP dynamics. Generation of all-atom ensembles at every Langevin timestep is computationally expensive. Here, multiscale computation for macromolecular systems is made more efficient by a method that self-consistently folds in ensembles of all-atom configurations constructed in an earlier step, the history, of the Langevin evolution. This procedure accounts for the temporal evolution of these ensembles, accurately providing thermal forces and diffusions. It is shown that the efficiency and accuracy of the OP-based simulations are increased via the integration of this historical information. Accuracy improves with the square root of the number of historical timesteps included in the calculation. As a result, CPU usage can be decreased by a factor of 3-8 without loss of accuracy. The algorithm is implemented in our existing force-field based multiscale simulation platform and demonstrated via the structural dynamics of viral capsomers. PMID:22978601

  6. Electron-cloud updated simulation results for the PSR, and recent results for the SNS

    NASA Astrophysics Data System (ADS)

    Pivi, M.; Furman, M. A.

    2002-05-01

    Recent simulation results for the main features of the electron cloud in the storage ring of the Spallation Neutron Source (SNS) at Oak Ridge, and updated results for the Proton Storage Ring (PSR) at Los Alamos, are presented in this paper. A refined model for the secondary emission process, including the so-called true secondary, rediffused and backscattered electrons, has recently been included in the electron-cloud code.

  7. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods for ²⁰¹Tl cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Narita, Y.; Iida, H.; Ebert, S.; Nakamura, T.

    1997-12-01

    Two independent scatter correction techniques, transmission-dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter, and scatter plus primary) were simulated for three numerical phantoms for ²⁰¹Tl. Data were reconstructed with an ordered-subset EM algorithm including attenuation correction based on noiseless transmission data. The accuracy of the TDCS and TEW scatter corrections was assessed by comparison with simulated true primary data. The uniform cylindrical phantom simulation demonstrated better quantitative accuracy with TDCS than with TEW (-2.0% vs. 16.7%) and a better S/N (6.48 vs. 5.05). A uniform ring myocardial phantom simulation demonstrated better homogeneity in the myocardium with TDCS than with TEW; i.e., anterior-to-posterior wall count ratios were 0.99 and 0.76 with TDCS and TEW, respectively. For the MCAT phantom, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
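    The TEW correction itself has a simple closed form: scatter in the main energy window is estimated from the counts in two narrow side windows by trapezoidal interpolation, then subtracted. A minimal per-pixel sketch (the default window widths in keV are illustrative placeholders, not the values used in this study):

```python
def tew_scatter(c_left, c_right, w_left, w_right, w_main):
    """Triple-energy-window scatter estimate for one pixel: area of the
    trapezoid spanned by the two side-window count densities, scaled to
    the main-window width."""
    return (c_left / w_left + c_right / w_right) * w_main / 2.0

def tew_primary(c_main, c_left, c_right, w_left=3.0, w_right=3.0, w_main=24.0):
    """Scatter-corrected primary counts, floored at zero."""
    scatter = tew_scatter(c_left, c_right, w_left, w_right, w_main)
    return max(c_main - scatter, 0.0)
```

    TDCS, by contrast, estimates scatter by convolving the projection with a kernel whose amplitude depends on the measured transmission, which is why it needs the transmission data mentioned above.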

  8. The multinomial simulation algorithm for discrete stochastic simulation of reaction-diffusion systems.

    PubMed

    Lampoudi, Sotiria; Gillespie, Dan T; Petzold, Linda R

    2009-03-07

    The Inhomogeneous Stochastic Simulation Algorithm (ISSA) is a variant of the stochastic simulation algorithm in which the spatially inhomogeneous volume of the system is divided into homogeneous subvolumes, and the chemical reactions in those subvolumes are augmented by diffusive transfers of molecules between adjacent subvolumes. The ISSA can be prohibitively slow when the system is such that diffusive transfers occur much more frequently than chemical reactions. In this paper we present the Multinomial Simulation Algorithm (MSA), which is designed to, on the one hand, outperform the ISSA when diffusive transfer events outnumber reaction events, and on the other, to handle small reactant populations with greater accuracy than deterministic-stochastic hybrid algorithms. The MSA treats reactions in the usual ISSA fashion, but uses appropriately conditioned binomial random variables for representing the net numbers of molecules diffusing from any given subvolume to a neighbor within a prescribed distance. Simulation results illustrate the benefits of the algorithm.
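    The binomial treatment of diffusive transfers can be illustrated on a 1D chain of subvolumes. The per-step jump probability p_jump (for example D·Δt/h² in a grid discretisation) and the sequential left/right draws below are simplifying assumptions for illustration, not the MSA's exact conditioning:

```python
import random

def diffusion_step(counts, p_jump, rng):
    """One stochastic diffusion step on a 1D chain of subvolumes.
    The number of molecules leaving each subvolume in each direction is
    a binomial draw conditioned on the current population, so transfers
    never exceed the molecules actually present."""
    def binomial(n, p):
        return sum(rng.random() < p for _ in range(n))

    n = len(counts)
    new = list(counts)
    for i, c in enumerate(counts):
        to_right = binomial(c, p_jump) if i < n - 1 else 0
        # condition the second draw on the molecules that did not go right
        to_left = binomial(c - to_right, p_jump) if i > 0 else 0
        new[i] -= to_right + to_left
        if i < n - 1:
            new[i + 1] += to_right
        if i > 0:
            new[i - 1] += to_left
    return new
```

    Because whole populations move in one draw per direction, a step costs the same however many diffusive events it represents, which is the source of the speedup over event-by-event ISSA transfers.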

  9. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    PubMed

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or none) and peer performance anchor (95%, 55%, or none). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. The accuracy incentive increased the anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation can improve metacomprehension accuracy in spite of the anchoring effect, but if the anchoring effect is too strong, it can overpower the motivation effect. The implications of the findings are discussed.

  10. Assuring high quality treatment delivery in clinical trials - Results from the Trans-Tasman Radiation Oncology Group (TROG) study 03.04 "RADAR" set-up accuracy study.

    PubMed

    Haworth, Annette; Kearvell, Rachel; Greer, Peter B; Hooton, Ben; Denham, James W; Lamb, David; Duchesne, Gillian; Murray, Judy; Joseph, David

    2009-03-01

    A multi-centre clinical trial for prostate cancer patients provided an opportunity to introduce conformal radiotherapy with dose escalation. To verify adequate treatment accuracy prior to patient recruitment, centres submitted details of a set-up accuracy study (SUAS). We report the results of the SUAS, the variation in clinical practice and the strategies used to help centres improve treatment accuracy. The SUAS required each of the 24 participating centres to collect data on at least 10 pelvic patients imaged on a minimum of 20 occasions. Software was provided for data collection and analysis. Support to centres was provided through educational lectures, the trial quality assurance team and an information booklet. Only two centres had recently carried out a SUAS prior to the trial opening. Systematic errors were generally smaller than those previously reported in the literature. The questionnaire identified many differences in patient set-up protocols. As a result of participating in this QA activity more than 65% of centres improved their treatment delivery accuracy. Conducting a pre-trial SUAS has led to improvement in treatment delivery accuracy in many centres. Treatment techniques and set-up accuracy varied greatly, demonstrating a need to ensure an on-going awareness for such studies in future trials and with the introduction of dose escalation or new technologies.

  11. Adaptive constructive processes and memory accuracy: Consequences of counterfactual simulations in young and older adults

    PubMed Central

    Gerlach, Kathy D.; Dornblaser, David W.; Schacter, Daniel L.

    2013-01-01

    People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterized as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b, young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test, participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2, younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterization as an adaptive constructive process. PMID:23560477

  12. Adaptive constructive processes and memory accuracy: consequences of counterfactual simulations in young and older adults.

    PubMed

    Gerlach, Kathy D; Dornblaser, David W; Schacter, Daniel L

    2014-01-01

    People frequently engage in counterfactual thinking: mental simulations of alternative outcomes to past events. Like simulations of future events, counterfactual simulations serve adaptive functions. However, future simulation can also result in various kinds of distortions and has thus been characterised as an adaptive constructive process. Here we approach counterfactual thinking as such and examine whether it can distort memory for actual events. In Experiments 1a/b young and older adults imagined themselves experiencing different scenarios. Participants then imagined the same scenario again, engaged in no further simulation of a scenario, or imagined a counterfactual outcome. On a subsequent recognition test participants were more likely to make false alarms to counterfactual lures than novel scenarios. Older adults were more prone to these memory errors than younger adults. In Experiment 2 younger and older participants selected and performed different actions, then recalled performing some of those actions, imagined performing alternative actions to some of the selected actions, and did not imagine others. Participants, especially older adults, were more likely to falsely remember counterfactual actions than novel actions as previously performed. The findings suggest that counterfactual thinking can cause source confusion based on internally generated misinformation, consistent with its characterisation as an adaptive constructive process.

  13. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
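    The compensation step shared by all four interpolation variants is the same: measure the geometric error at calibration positions, interpolate it at the commanded position, and subtract. A piecewise-linear stand-in is sketched below (the paper's preferred B-spline fit is replaced by linear interpolation for brevity; function and variable names are illustrative):

```python
import bisect

def compensate(cmd, cal_positions, cal_errors):
    """Return the corrected command: the commanded position minus the
    error interpolated from a calibration table (positions sorted)."""
    if cmd <= cal_positions[0]:
        err = cal_errors[0]
    elif cmd >= cal_positions[-1]:
        err = cal_errors[-1]
    else:
        j = bisect.bisect_right(cal_positions, cmd)
        x0, x1 = cal_positions[j - 1], cal_positions[j]
        e0, e1 = cal_errors[j - 1], cal_errors[j]
        t = (cmd - x0) / (x1 - x0)
        err = e0 + t * (e1 - e0)  # linear stand-in for the B-spline fit
    return cmd - err
```

    A B-spline fit replaces the linear segment with a smooth curve through the same calibration points, which is why it tracks smoothly varying geometric errors better, as the paper's comparison found.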

  14. Improving prediction accuracy of cooling load using EMD, PSR and RBFNN

    NASA Astrophysics Data System (ADS)

    Shen, Limin; Wen, Yuanmei; Li, Xiaohong

    2017-08-01

    To increase the accuracy of cooling load demand prediction, this work presents an EMD (empirical mode decomposition)-PSR (phase space reconstruction) based RBFNN (radial basis function neural network) method. First, the chaotic nature of real cooling load demand is analyzed, and the non-stationary cooling load history is decomposed into several stationary intrinsic mode functions (IMFs) using EMD. Second, the RBFNN prediction accuracies of the individual IMFs are compared, and an IMF combining scheme is proposed: the lower-frequency components (IMF4-IMF6) are merged into one component, while the higher-frequency components (IMF1, IMF2, IMF3) and the residual are kept unchanged. Third, the phase space of each component is reconstructed separately, the highest-frequency component (IMF1) is processed by a differencing method, and predictions are made with RBFNNs in the reconstructed phase spaces. Real cooling load data from a centralized ice-storage cooling system in Guangzhou are used for simulation. The results show that the proposed hybrid method outperforms the traditional methods.
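    The IMF combining scheme reduces to summing the low-frequency IMFs pointwise while leaving the high-frequency IMFs and the residual untouched. A minimal sketch (the index low_start marking where the low-frequency group begins is an assumed parameter, corresponding to IMF4 in the abstract):

```python
def combine_imfs(imfs, low_start):
    """Keep imfs[0:low_start] as separate components and merge the
    remaining lower-frequency IMFs into a single summed component.
    Each IMF is a list of samples of equal length."""
    merged = [sum(vals) for vals in zip(*imfs[low_start:])]
    return imfs[:low_start] + [merged]
```

    Because EMD is additive, summing the merged component, the kept IMFs and the residual still reconstructs the original signal, so the combining step trades per-component model count for no loss of information.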

  15. Accuracy investigation of phthalate metabolite standards.

    PubMed

    Langlois, Éric; Leblanc, Alain; Simard, Yves; Thellen, Claude

    2012-05-01

    Phthalates are ubiquitous compounds whose metabolites are usually determined in urine for biomonitoring studies. Following suspect and unexplained results from our laboratory in an external quality-assessment scheme, we investigated the accuracy of all phthalate metabolite standards in our possession by comparing them with those of several suppliers. Our findings suggest that commercial phthalate metabolite certified solutions are not always accurate and that lot-to-lot discrepancies significantly affect the accuracy of the results obtained with several of these standards. These observations indicate that the reliability of the results obtained from different lots of standards is not equal, which reduces the possibility of intra-laboratory and inter-laboratory comparisons of results. However, agreements of accuracy have been observed for a majority of neat standards obtained from different suppliers, which indicates that a solution to this issue is available. Data accuracy of phthalate metabolites should be of concern for laboratories performing phthalate metabolite analysis because of the standards used. The results of our investigation are presented from the perspective that laboratories performing phthalate metabolite analysis can obtain accurate and comparable results in the future. Our findings will contribute to improving the quality of future phthalate metabolite analyses and will affect the interpretation of past results.

  16. Accuracy of localization of prostate lesions using manual palpation and ultrasound elastography

    NASA Astrophysics Data System (ADS)

    Kut, Carmen; Schneider, Caitlin; Carter-Monroe, Naima; Su, Li-Ming; Boctor, Emad; Taylor, Russell

    2009-02-01

    Purpose: To compare the accuracy of detecting tumor location and size in the prostate using both manual palpation and ultrasound elastography (UE). Methods: Tumors in the prostate were simulated using both synthetic and ex vivo tissue phantoms. 25 participants were asked to report the presence, size and depth of these simulated lesions using manual palpation and UE. Ultrasound images were captured using a laparoscopic ultrasound probe fitted with a Gore-Tetrad transducer with a frequency of 7.5 MHz and an RF capture depth of 4-5 cm. A MATLAB GUI application was employed to process the RF data for the ex vivo phantoms and to generate UE images using a cross-correlation algorithm. Ultrasonix software was used to provide real-time elastography during laparoscopic palpation of the synthetic phantoms. Statistical analyses were performed based on a two-tailed Student's t-test with α = 0.05. Results: UE displays both higher sensitivity and higher specificity in tumor detection (sensitivity = 84%, specificity = 74%). Tumor diameters and depths are better estimated using ultrasound elastography than with manual palpation. Conclusions: Our results indicate that UE has strong potential in assisting surgeons to intra-operatively evaluate tumor depth and size. We have also demonstrated that ultrasound elastography can be implemented in a laparoscopic environment, in which manual palpation would not be feasible. With further work, this application can provide accurate and clinically relevant information for surgeons during prostate resection.
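    The cross-correlation step that turns pre- and post-compression RF windows into a displacement estimate can be sketched in one dimension; the lag convention and the test signal below are illustrative assumptions, not this study's implementation:

```python
def estimate_shift(pre, post, max_lag):
    """Return the lag (in samples) that maximises the normalised
    cross-correlation between pre- and post-compression RF windows."""
    n = len(pre)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        idx = [i for i in range(n) if 0 <= i + lag < n]
        a = [pre[i] for i in idx]
        b = [post[i + lag] for i in idx]
        dot = sum(x * y for x, y in zip(a, b))
        norm = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
        score = dot / norm if norm else 0.0
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

    Repeating this over overlapping windows along the beam yields a displacement field; the axial gradient of that field is the strain image, with stiff lesions showing up as low-strain regions.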

  17. Direct simulations of chemically reacting turbulent mixing layers

    NASA Technical Reports Server (NTRS)

    Riley, J. J.; Metcalfe, R. W.

    1984-01-01

    The report presents the results of direct numerical simulations of chemically reacting turbulent mixing layers. The work consists of two parts: (1) the development and testing of a spectral numerical computer code that treats the diffusion reaction equations; and (2) the simulation of a series of cases of chemical reactions occurring on mixing layers. The reaction considered is a binary, irreversible reaction with no heat release. The reacting species are nonpremixed. The results of the numerical tests indicate that the high accuracy of the spectral methods observed for rigid body rotation are also obtained when diffusion, reaction, and more complex flows are considered. In the simulations, the effects of vortex rollup and smaller scale turbulence on the overall reaction rates are investigated. The simulation results are found to be in approximate agreement with similarity theory. Comparisons of simulation results with certain modeling hypotheses indicate limitations in these hypotheses. The nondimensional product thickness computed from the simulations is compared with laboratory values and is found to be in reasonable agreement, especially since there are no adjustable constants in the method.

  18. Experimental and simulation results on multipacting in the 112 MHz QWR injector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xin, T.; Ben-Zvi, I.; Belomestnykh, S.

    2015-05-03

    The first RF commissioning of the 112 MHz QWR superconducting electron gun was carried out in late 2014. The coaxial Fundamental Power Coupler (FPC) and cathode stalk were installed and tested for the first time. During this experiment, we observed several multipacting barriers at different gun voltage levels. Simulations were performed over the same range. The comparison between the experimental observations and the simulation results is presented in this paper. The observations during the test are consistent with the simulation predictions. We were able to overcome most of the multipacting barriers and reach a gun voltage of 1.8 MV in pulsed mode after several rounds of conditioning.

  19. Systematic bias of correlation coefficient may explain negative accuracy of genomic prediction.

    PubMed

    Zhou, Yao; Vales, M Isabel; Wang, Aoxue; Zhang, Zhiwu

    2017-09-01

    Accuracy of genomic prediction is commonly calculated as the Pearson correlation coefficient between the predicted and observed phenotypes in the inference population by using cross-validation analysis. More frequently than expected, significant negative accuracies of genomic prediction have been reported in genomic selection studies. These negative values are surprising, given that the minimum value for prediction accuracy should hover around zero when randomly permuted data sets are analyzed. We reviewed the two common approaches for calculating the Pearson correlation and hypothesized that these negative accuracy values reflect potential bias owing to artifacts caused by the mathematical formulas used to calculate prediction accuracy. The first approach, Instant accuracy, calculates correlations for each fold and reports prediction accuracy as the mean of the correlations across folds. The other approach, Hold accuracy, predicts all phenotypes across all folds and calculates the correlation between the observed and predicted phenotypes at the end of the cross-validation process. Using simulated and real data, we demonstrated that our hypothesis is true. Both approaches are biased downward under certain conditions. The biases become larger when more folds are employed and when the expected accuracy is low. The bias of Instant accuracy can be corrected using a modified formula.
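    The two formulas differ only in where the correlation is taken. A minimal sketch of both (the interleaved fold assignment is an assumption for illustration; real cross-validation would randomize fold membership):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    norm = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return cov / norm

def instant_and_hold(obs, pred, k):
    """Instant accuracy: mean of per-fold correlations.
    Hold accuracy: one correlation over the pooled predictions."""
    folds = [list(range(i, len(obs), k)) for i in range(k)]
    instant = sum(pearson([obs[j] for j in f], [pred[j] for j in f])
                  for f in folds) / k
    hold = pearson(obs, pred)
    return instant, hold
```

    The downward bias arises because each per-fold correlation (and the pooled one, through fold-specific prediction offsets) is computed on a small sample whose fold mean is subtracted; with more folds the per-fold samples shrink and the bias of Instant accuracy grows.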

  20. Convergence studies in meshfree peridynamic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seleson, Pablo; Littlewood, David J.

    2016-04-15

    Meshfree methods are commonly applied to discretize peridynamic models, particularly in numerical simulations of engineering problems. Such methods discretize peridynamic bodies using a set of nodes with characteristic volume, leading to particle-based descriptions of systems. In this article, we perform convergence studies of static peridynamic problems. We show that commonly used meshfree methods in peridynamics suffer from accuracy and convergence issues, due to a rough approximation of the contribution to the internal force density of nodes near the boundary of the neighborhood of a given node. We propose two methods to improve meshfree peridynamic simulations. The first method uses accurate computations of volumes of intersections between neighbor cells and the neighborhood of a given node, referred to as partial volumes. The second method employs smooth influence functions with a finite support within peridynamic kernels. Numerical results demonstrate great improvements in accuracy and convergence of peridynamic numerical solutions when using the proposed methods.
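    The partial-volume idea can be illustrated in one dimension: a neighbor cell of width h contributes fully, partially, or not at all to the force sum depending on how much of it lies inside the horizon δ. The clamped linear overlap below is an assumption that matches the 1D case only; in 2D/3D the intersection geometry is more involved:

```python
def partial_volume_fraction(dist, h, horizon):
    """Fraction of a 1D neighbour cell of width h, centred at distance
    `dist` from the node, that lies inside the horizon."""
    near, far = dist - h / 2.0, dist + h / 2.0
    if far <= horizon:
        return 1.0   # cell fully inside the neighbourhood
    if near >= horizon:
        return 0.0   # cell fully outside
    return (horizon - near) / h  # partial overlap

def quadrature_weight(dist, h, horizon):
    """Cell volume (length in 1D) weighted by the covered fraction."""
    return h * partial_volume_fraction(dist, h, horizon)
```

    Replacing the all-or-nothing cell volume with this weighted volume removes the jump in the force sum as cells cross the horizon, which is exactly the rough boundary approximation the abstract identifies.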

  1. A comparison of two emergency medical dispatch protocols with respect to accuracy.

    PubMed

    Torlén, Klara; Kurland, Lisa; Castrén, Maaret; Olanders, Knut; Bohm, Katarina

    2017-12-29

    Emergency medical dispatching should be as accurate as possible in order to ensure patient safety and optimize the use of ambulance resources. This study aimed to compare the accuracy, measured as priority level, between two Swedish dispatch protocols: the three-graded priority protocol Medical Index and a newly developed prototype, the four-graded priority protocol RETTS-A. A simulation study was carried out at the Emergency Medical Communication Centre (EMCC) in Stockholm, Sweden, between October and March 2016. Fifty-three voluntary telecommunicators working at SOS Alarm were recruited nationally. Each telecommunicator handled 26 emergency medical calls, simulated by experienced standardized patients. Scripts for the scenarios were based on recorded real-life calls representing the six most common complaints. A cross-over design with 13 + 13 calls was used. The priority level and medical condition for each scenario were set through expert consensus and used as the gold standard in the study. A total of 1293 calls were included in the analysis. For priority level, n = 349 (54.0%) of the calls were assessed correctly with the Medical Index and n = 309 (48.0%) with RETTS-A (p = 0.012). Sensitivity for the highest priority level was 82.6% (95% confidence interval: 76.6-87.3%) in the Medical Index and 54.0% (44.3-63.4%) in RETTS-A. Overtriage was 37.9% (34.2-41.7%) in the Medical Index and 28.6% (25.2-32.2%) in RETTS-A. The corresponding proportions of undertriage were 6.3% (4.7-8.5%) and 23.4% (20.3-26.9%), respectively. In this simulation study we demonstrate that the Medical Index had higher accuracy for priority level and less undertriage than the new prototype RETTS-A. The overall accuracy of both protocols is to be considered low. Overtriage challenges resource utilization, while undertriage threatens patient safety. The results suggest that, to improve patient safety, both protocols need revision in order to guarantee safe emergency medical dispatching.
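    The accuracy, overtriage, and undertriage figures reported in studies like this are straightforward to compute from paired gold-standard and assigned priority levels. A sketch (taking priority 1 as the highest level is an assumption about the coding, not this study's convention):

```python
def triage_metrics(gold, assigned):
    """gold/assigned: equal-length lists of priority levels, 1 = highest.
    Returns (accuracy, overtriage, undertriage, sensitivity_top)."""
    n = len(gold)
    accuracy = sum(g == a for g, a in zip(gold, assigned)) / n
    overtriage = sum(a < g for g, a in zip(gold, assigned)) / n    # too urgent
    undertriage = sum(a > g for g, a in zip(gold, assigned)) / n   # not urgent enough
    top = [(g, a) for g, a in zip(gold, assigned) if g == 1]
    sensitivity_top = sum(a == 1 for _, a in top) / len(top)
    return accuracy, overtriage, undertriage, sensitivity_top
```

    The three proportions partition all calls, so accuracy + overtriage + undertriage always equals 1, which makes the trade-off between the two protocols above explicit.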

  2. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.

  3. Assessing the performance of the MM/PBSA and MM/GBSA methods. 1. The accuracy of binding free energy calculations based on molecular dynamics simulations.

    PubMed

    Hou, Tingjun; Wang, Junmei; Li, Youyong; Wang, Wei

    2011-01-24

    The Molecular Mechanics/Poisson-Boltzmann Surface Area (MM/PBSA) and the Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) methods calculate binding free energies for macromolecules by combining molecular mechanics calculations and continuum solvation models. To systematically evaluate the performance of these methods, we report here an extensive study of 59 ligands interacting with six different proteins. First, we explored the effects of the length of the molecular dynamics (MD) simulation, ranging from 400 to 4800 ps, and the solute dielectric constant (1, 2, or 4) on the binding free energies predicted by MM/PBSA. Three important conclusions emerged: (1) MD simulation length has an obvious impact on the predictions, and a longer MD simulation is not always necessary to achieve better predictions. (2) The predictions are quite sensitive to the solute dielectric constant, and this parameter should be carefully determined according to the characteristics of the protein/ligand binding interface. (3) Conformational entropy often shows large fluctuations in MD trajectories, and a large number of snapshots are necessary to achieve stable predictions. Next, we evaluated the accuracy of the binding free energies calculated by three Generalized Born (GB) models. We found that the GB model developed by Onufriev and Case was the most successful model in ranking the binding affinities of the studied inhibitors. Finally, we evaluated the performance of MM/GBSA and MM/PBSA in predicting binding free energies. Our results showed that MM/PBSA performed better in calculating absolute, but not necessarily relative, binding free energies than MM/GBSA. Considering its computational efficiency, MM/GBSA can serve as a powerful tool in drug design, where correct ranking of inhibitors is often emphasized.
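    In the single-trajectory MM/PBSA scheme evaluated above, the binding free energy is an average of per-snapshot energy terms from the MD trajectory. A minimal sketch of that bookkeeping follows; all energies and noise levels are synthetic stand-ins, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-snapshot free-energy terms (kcal/mol), standing in for the
# molecular-mechanics plus continuum-solvation energies extracted from an
# MD trajectory. The magnitudes are illustrative only.
n_snapshots = 200
g_complex = rng.normal(-5200.0, 15.0, n_snapshots)
g_protein = rng.normal(-4000.0, 14.0, n_snapshots)
g_ligand = rng.normal(-1180.0, 4.0, n_snapshots)
minus_tds = rng.normal(-15.0, 8.0, n_snapshots)  # -T*dS; noisy, as the paper notes

# Single-trajectory MM/PBSA: average the interaction term over snapshots,
# then add the separately averaged conformational-entropy term.
dg_bind = np.mean(g_complex - g_protein - g_ligand) + np.mean(minus_tds)

# The standard error of the noisy entropy term shows why many snapshots
# are needed before the prediction stabilizes.
sem_entropy = np.std(minus_tds, ddof=1) / np.sqrt(n_snapshots)
print(f"dG_bind ~ {dg_bind:.1f} kcal/mol (entropy SEM {sem_entropy:.2f})")
```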

  4. Accuracy requirements [for monitoring of climate changes]

    NASA Technical Reports Server (NTRS)

    Delgenio, Anthony

    1993-01-01

    Satellite and surface measurements, if they are to serve as a climate monitoring system, must be accurate enough to permit detection of changes of climate parameters on decadal time scales. The accuracy requirements are difficult to define a priori since they depend on unknown future changes of climate forcings and feedbacks. As a framework for evaluation of candidate Climsat instruments and orbits, we estimate the accuracies that would be needed to measure changes expected over two decades based on theoretical considerations including GCM simulations and on observational evidence in cases where data are available for rates of change. One major climate forcing known with reasonable accuracy is that caused by the anthropogenic homogeneously mixed greenhouse gases (CO2, CFC's, CH4 and N2O). Their net forcing since the industrial revolution began is about 2 W/sq m and it is presently increasing at a rate of about 1 W/sq m per 20 years. Thus for a competing forcing or feedback to be important, it needs to be of the order of 0.25 W/sq m or larger on this time scale. The significance of most climate feedbacks depends on their sensitivity to temperature change. Therefore we begin with an estimate of decadal temperature change. Presented are the transient temperature trends simulated by the GISS GCM when subjected to various scenarios of trace gas concentration increases. Scenario B, which represents the most plausible near-term emission rates and includes intermittent forcing by volcanic aerosols, yields a global mean surface air temperature increase Delta Ts = 0.7 degrees C over the time period 1995-2015. This is consistent with the IPCC projection of about 0.3 degrees C/decade global warming (IPCC, 1990). Several of our estimates below are based on this assumed rate of warming.

  5. Analysis of Waves in Space Plasma (WISP) near field simulation and experiment

    NASA Technical Reports Server (NTRS)

    Richie, James E.

    1992-01-01

    The WISP payload, scheduled for a 1995 space transportation system (shuttle) flight, will include a large power transmitter on board operating over a wide range of frequencies. The levels of electromagnetic interference/electromagnetic compatibility (EMI/EMC) must be addressed to ensure the safety of the shuttle crew. This report is concerned with the simulation and experimental verification of EMI/EMC for the WISP payload in the shuttle cargo bay. The simulations have been carried out using the method of moments for both thin wires and patches to simulate closed solids. Data obtained from the simulation are compared with experimental results. An investigation of the accuracy of the modeling approach is also included. The report begins with a description of the WISP experiment. A description of the model used to simulate the cargo bay follows. The results of the simulation are compared to experimental data on the input impedance of the WISP antenna with the cargo bay present. The methods used to verify the accuracy of the model are then discussed to illustrate appropriate ways of obtaining this information. Finally, suggestions for future work are provided.

  6. Face and construct validity of a computer-based virtual reality simulator for ERCP.

    PubMed

    Bittner, James G; Mellinger, John D; Imam, Toufic; Schade, Robert R; Macfadyen, Bruce V

    2010-02-01

    Currently, little evidence supports computer-based simulation for ERCP training. To determine face and construct validity of a computer-based simulator for ERCP and assess its perceived utility as a training tool. Novice and expert endoscopists completed 2 simulated ERCP cases by using the GI Mentor II. Virtual Education and Surgical Simulation Laboratory, Medical College of Georgia. Outcomes included times to complete the procedure, reach the papilla, and use fluoroscopy; attempts to cannulate the papilla, pancreatic duct, and common bile duct; and number of contrast injections and complications. Subjects assessed simulator graphics, procedural accuracy, difficulty, haptics, overall realism, and training potential. Only when performance data from cases A and B were combined did the GI Mentor II differentiate novices and experts based on times to complete the procedure, reach the papilla, and use fluoroscopy. Across skill levels, overall opinions were similar regarding graphics (moderately realistic), accuracy (similar to clinical ERCP), difficulty (similar to clinical ERCP), overall realism (moderately realistic), and haptics. Most participants (92%) claimed that the simulator has definite training potential or should be required for training. Small sample size, single institution. The GI Mentor II demonstrated construct validity for ERCP based on select metrics. Most subjects thought that the simulated graphics, procedural accuracy, and overall realism exhibit face validity. Subjects deemed it a useful training tool. Study repetition involving more participants and cases may help confirm results and establish the simulator's ability to differentiate skill levels based on ERCP-specific metrics.

  7. Assessing the accuracy of predictive models for numerical data: Not r nor r2, why not? Then what?

    PubMed Central

    2017-01-01

    Assessing the accuracy of predictive models is critical because predictive models have been increasingly used across various disciplines and predictive accuracy determines the quality of the resultant predictions. The Pearson product-moment correlation coefficient (r) and the coefficient of determination (r2) are among the most widely used measures for assessing predictive models for numerical data, although they are argued to be biased, insufficient and misleading. In this study, geometrical graphs were used to illustrate what is used in the calculation of r and r2, and simulations were used to demonstrate the behaviour of r and r2 and to compare three accuracy measures under various scenarios. Relevant confusions about r and r2 have been clarified. The calculation of r and r2 is not based on the differences between the predicted and observed values. The existing error measures suffer various limitations and are unable to indicate accuracy. Variance explained by predictive models based on cross-validation (VEcv) is free of these limitations and is a reliable accuracy measure. Legates and McCabe’s efficiency (E1) is an alternative accuracy measure. The r and r2 do not measure accuracy and are incorrect accuracy measures. The existing error measures suffer limitations. VEcv and E1 are recommended for assessing accuracy. The application of these accuracy measures would encourage accuracy-improved predictive models to be developed to generate predictions for evidence-informed decision-making. PMID:28837692
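    The measures contrasted above can be written down directly from their definitions. A short sketch, with illustrative synthetic data, showing how r2 rewards a systematically biased predictor that VEcv and E1 correctly penalize:

```python
import numpy as np

def r2(obs, pred):
    # Squared Pearson correlation: blind to systematic bias between
    # predictions and observations.
    return np.corrcoef(obs, pred)[0, 1] ** 2

def vecv(obs, pred):
    # Variance explained by cross-validation (VEcv): computed from the
    # actual differences between predicted and observed values.
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

def e1(obs, pred):
    # Legates and McCabe's efficiency (E1): absolute-error analogue of VEcv.
    return 1.0 - np.sum(np.abs(obs - pred)) / np.sum(np.abs(obs - obs.mean()))

# Predictions perfectly correlated with the observations but offset by a
# constant: r2 reports a perfect score while VEcv and E1 expose the bias.
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
pred = obs + 10.0
print(r2(obs, pred), vecv(obs, pred), e1(obs, pred))
```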

  8. Numerical Simulations Of Flagellated Micro-Swimmers

    NASA Astrophysics Data System (ADS)

    Rorai, Cecilia; Markesteijn, Anton; Zaitstev, Mihail; Karabasov, Sergey

    2017-11-01

    We study the locomotion of flagellated microswimmers by representing the entire swimmer body. We discuss and contrast the accuracy and computational cost of different numerical approaches, including Resistive Force Theory, the Regularized Stokeslet Method and the Finite Element Method. We focus on how the accuracy of the methods in reproducing the swimming trajectories, velocities and flow field compares to the sensitivity of these quantities to certain physical parameters, such as the body shape and the location of the center of mass. We discuss the opportunity and physical relevance of retaining inertia in our models. Finally, we present some preliminary results toward collective motion simulations. Supported by a Marie Skłodowska-Curie Individual Fellowship.

  9. Navier-Stokes simulations of slender axisymmetric shapes in supersonic, turbulent flow

    NASA Astrophysics Data System (ADS)

    Moran, Kenneth J.; Beran, Philip S.

    1994-07-01

    Computational fluid dynamics is used to study flows about slender, axisymmetric bodies at very high speeds. Numerical experiments are conducted to simulate a broad range of flight conditions. Mach number is varied from 1.5 to 8 and Reynolds number from 1 × 10^6/m to 10^8/m. The primary objective is to develop and validate a computational methodology for the accurate simulation of a wide variety of flow structures. Accurate results are obtained for detached bow shocks, recompression shocks, corner-point expansions, base-flow recirculations, and turbulent boundary layers. Accuracy is assessed through comparison with theory and experimental data; computed surface pressure, shock structure, base-flow structure, and velocity profiles are within measurement accuracy throughout the range of conditions tested. The methodology is both practical and general: general in its applicability, and practical in its performance. To achieve high accuracy, modifications to previously reported techniques are implemented in the scheme. These modifications improve computed results in the vicinity of symmetry lines and in the base-flow region, including the turbulent wake.

  10. Efficient and Robust Optimization for Building Energy Simulation

    PubMed Central

    Pourarian, Shokouh; Kearsley, Anthony; Wen, Jin; Pertzborn, Amanda

    2016-01-01

    Efficiently, robustly and accurately solving large sets of structured, non-linear algebraic and differential equations is one of the most computationally expensive steps in the dynamic simulation of building energy systems. Here, the efficiency, robustness and accuracy of two commonly employed solution methods are compared. The comparison is conducted using the HVACSIM+ software package, a component-based building system simulation tool. The HVACSIM+ software presently employs Powell’s Hybrid method to solve systems of nonlinear algebraic equations that model the dynamics of energy states and interactions within buildings. It is shown here that Powell’s method does not always converge to a solution. Since a myriad of other numerical methods are available, the question arises as to which method is most appropriate for building energy simulation. This paper finds that considerable computational benefits result from replacing the Powell’s Hybrid method solver in HVACSIM+ with a solver more appropriate for the challenges particular to numerical simulations of buildings. Evidence is provided that a variant of the Levenberg-Marquardt solver has superior accuracy and robustness compared to Powell’s Hybrid method presently used in HVACSIM+. PMID:27325907
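    Both solver families compared above are exposed through standard MINPACK wrappers, so the comparison can be reproduced in miniature. A sketch, assuming SciPy is available; the two-equation residual system below is purely illustrative, not the HVACSIM+ component equations:

```python
import numpy as np
from scipy.optimize import root

def residuals(x):
    # A small nonlinear algebraic system of the kind a building-energy
    # solver must drive to zero at every time step (illustrative only).
    return np.array([
        x[0] ** 2 + x[1] ** 2 - 4.0,  # circle of radius 2
        np.exp(x[0]) + x[1] - 1.0,    # exponential coupling
    ])

x0 = np.array([1.0, 1.0])

# Powell's hybrid method (MINPACK hybrd), the solver HVACSIM+ uses.
sol_hybr = root(residuals, x0, method="hybr")

# Levenberg-Marquardt (MINPACK lmdif), the family the paper found
# more robust for building simulations.
sol_lm = root(residuals, x0, method="lm")

print(sol_hybr.x, sol_lm.x)
```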

  11. Simulation results of corkscrew motion in DARHT-II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, K. D.; Ekdahl, C. A.; Chen, Y. J.

    2003-01-01

    DARHT-II, the second axis of the Dual-Axis Radiographic Hydrodynamics Test Facility, is being commissioned. DARHT-II is a linear induction accelerator producing 2-microsecond electron beam pulses at 20 MeV and 2 kA. These 2-microsecond pulses will be chopped into four short pulses to produce time-resolved x-ray images. The radiographic application requires the DARHT-II beam to have excellent beam quality, and it is important to study various beam effects that may cause quality degradation of a DARHT-II beam. One of the beam dynamic effects under study is 'corkscrew' motion. In corkscrew motion, the beam centroid is deflected off axis due to misalignments of the solenoid magnets. The deflection depends on the beam energy variation, which is expected to vary by ±0.5% during the 'flat-top' part of a beam pulse. Such chromatic aberration will result in broadening of the beam spot size. In this paper, we report simulation results of our study of corkscrew motion in DARHT-II. Sensitivities of beam spot size to various accelerator parameters and the strategy for minimizing corkscrew motion are described. Measured magnet misalignment is used in the simulation.

  12. Giant cell arteritis: diagnostic accuracy of MR imaging of superficial cranial arteries in initial diagnosis-results from a multicenter trial.

    PubMed

    Klink, Thorsten; Geiger, Julia; Both, Marcus; Ness, Thomas; Heinzelmann, Sonja; Reinhard, Matthias; Holl-Ulrich, Konstanze; Duwendag, Dirk; Vaith, Peter; Bley, Thorsten Alexander

    2014-12-01

    To assess the diagnostic accuracy of contrast material-enhanced magnetic resonance (MR) imaging of superficial cranial arteries in the initial diagnosis of giant cell arteritis (GCA). Following institutional review board approval and informed consent, 185 patients suspected of having GCA were included in a prospective three-university medical center trial. GCA was diagnosed or excluded clinically in all patients (reference standard [final clinical diagnosis]). In 53.0% of patients (98 of 185), temporal artery biopsy (TAB) was performed (diagnostic standard). Two observers independently evaluated contrast-enhanced T1-weighted MR images of superficial cranial arteries by using a four-point scale. Diagnostic accuracy, involvement pattern, and systemic corticosteroid (sCS) therapy effects were assessed in comparison with the reference standard (total study cohort) and separately in comparison with the diagnostic standard (TAB subcohort). Statistical analysis included diagnostic accuracy parameters, interobserver agreement, and receiver operating characteristic analysis. Sensitivity of MR imaging was 78.4% and specificity was 90.4% for the total study cohort, and sensitivity was 88.7% and specificity was 75.0% for the TAB subcohort (first observer). Diagnostic accuracy was comparable for both observers, with good interobserver agreement (TAB subcohort, κ = 0.718; total study cohort, κ = 0.676). MR imaging scores were significantly higher in patients with GCA-positive results than in patients with GCA-negative results (TAB subcohort and total study cohort, P < .001). Diagnostic accuracy of MR imaging was high in patients without and with sCS

  13. Role of Boundary Conditions in Monte Carlo Simulation of MEMS Devices

    NASA Technical Reports Server (NTRS)

    Nance, Robert P.; Hash, David B.; Hassan, H. A.

    1997-01-01

    A study is made of the issues surrounding prediction of microchannel flows using the direct simulation Monte Carlo method. This investigation includes the introduction and use of new inflow and outflow boundary conditions suitable for subsonic flows. A series of test simulations for a moderate-size microchannel indicates that a high degree of grid under-resolution in the streamwise direction may be tolerated without loss of accuracy. In addition, the results demonstrate the importance of physically correct boundary conditions, as well as possibilities for reducing the time associated with the transient phase of a simulation. These results imply that simulations of longer ducts may be more feasible than previously envisioned.

  14. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    PubMed Central

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications, such as the course correction fuze, have been proposed to correct the range and drift of artillery projectiles. Course correction fuze concepts could provide an attractive and cost-effective solution for improving munitions accuracy. In this paper, trajectory correction has been obtained using two kinds of course correction modules: one devoted to range correction (drag ring brake) and the other devoted to drift correction (canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable of range correction; deploying the drag brake in an early stage of the trajectory results in a large range correction, and the correction time can be predefined depending on the required range correction. The canard-based correction fuze, on the other hand, is found to have a greater effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion. PMID:25097873

  15. Accuracy improvement capability of advanced projectile based on course correction fuze concept.

    PubMed

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications, such as the course correction fuze, have been proposed to correct the range and drift of artillery projectiles. Course correction fuze concepts could provide an attractive and cost-effective solution for improving munitions accuracy. In this paper, trajectory correction has been obtained using two kinds of course correction modules: one devoted to range correction (drag ring brake) and the other devoted to drift correction (canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable of range correction; deploying the drag brake in an early stage of the trajectory results in a large range correction, and the correction time can be predefined depending on the required range correction. The canard-based correction fuze, on the other hand, is found to have a greater effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion.

  16. Three-Dimensional Imaging in Rhinoplasty: A Comparison of the Simulated versus Actual Result.

    PubMed

    Persing, Sarah; Timberlake, Andrew; Madari, Sarika; Steinbacher, Derek

    2018-05-22

    Computer imaging has become increasingly popular for rhinoplasty. Three-dimensional (3D) analysis permits a more comprehensive view from multiple vantage points. However, the predictability and concordance between the simulated and actual result have not been morphometrically studied. The purpose of this study was to aesthetically and quantitatively compare the simulated to the actual rhinoplasty result. A retrospective review of 3D images (VECTRA, Canfield) for rhinoplasty patients was performed. Images (preoperative, simulated, and actual) were randomized. A blinded panel of physicians rated the images (1 = poor, 5 = excellent). The image series considered "best" was also recorded. A quantitative assessment of nasolabial angle and tip projection was compared. Paired and two-sample t tests were performed for statistical analysis (P < 0.05 considered significant). Forty patients were included. 67.5% of preoperative images were rated as poor (mean = 1.7). The simulation received a mean score of 2.9 (good in 60% of cases). 82.5% of actual cases were rated good to excellent (mean 3.4) (P < 0.001). Overall, the panel significantly preferred the actual postoperative result in 77.5% of cases compared to the simulation in 22.5% of cases (P < 0.001). The actual nasal tip was more projected compared to the simulations for both males and females. There was no significant difference in nasal tip rotation between the simulated and postoperative groups. 3D simulation is a powerful communication and planning tool in rhinoplasty. In this study, the actual result was deemed more aesthetic than the simulated image. Surgeon experience is important to translate the plan and achieve favorable postoperative results. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.

  17. Interactive visualization of numerical simulation results: A tool for mission planning and data analysis

    NASA Technical Reports Server (NTRS)

    Berchem, J.; Raeder, J.; Walker, R. J.; Ashour-Abdalla, M.

    1995-01-01

    We report on the development of an interactive system for visualizing and analyzing numerical simulation results. This system is based on visualization modules which use the Application Visualization System (AVS) and the NCAR graphics packages. Examples from recent simulations are presented to illustrate how these modules can be used for displaying and manipulating simulation results to facilitate their comparison with phenomenological model results and observations.

  18. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rettmann, Maryam E., E-mail: rettmann.maryam@mayo.edu; Holmes, David R.; Camp, Jon J.

    2014-02-15

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy, which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration
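    The Monte Carlo approach described above, perturbing fiducial locations and measuring the resulting target registration error (TRE), can be sketched with a rigid landmark registration. The fiducial geometry, noise level, and target point below are hypothetical stand-ins, not the study's phantom:

```python
import numpy as np

rng = np.random.default_rng(0)

def rigid_fit(src, dst):
    # Least-squares rigid (rotation + translation) alignment of two point
    # sets via the Kabsch/SVD method.
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflection
    R = (U @ D @ Vt).T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Hypothetical landmark fiducials (mm) and a target point inside them.
fiducials = np.array([[0, 0, 0], [40, 0, 0], [0, 40, 0], [0, 0, 40]], float)
target = np.array([20.0, 20.0, 20.0])

# Monte Carlo: perturb the fiducial localizations with isotropic Gaussian
# noise and record the TRE of the resulting registration.
sigma = 1.0  # mm, assumed localization uncertainty
tres = []
for _ in range(2000):
    noisy = fiducials + rng.normal(0.0, sigma, fiducials.shape)
    R, t = rigid_fit(fiducials, noisy)
    tres.append(np.linalg.norm((R @ target + t) - target))
print(f"mean TRE: {np.mean(tres):.2f} mm")
```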

  19. Effect of anisoplanatism on the measurement accuracy of an extended-source Hartmann-Shack wavefront sensor

    NASA Astrophysics Data System (ADS)

    Woeger, Friedrich; Rimmele, Thomas

    2009-10-01

    We analyze the effect of anisoplanatic atmospheric turbulence on the measurement accuracy of an extended-source Hartmann-Shack wavefront sensor (HSWFS). We have numerically simulated an extended-source HSWFS, using a scenery of the solar surface that is imaged through anisoplanatic atmospheric turbulence and imaging optics. Solar extended-source HSWFSs often use cross-correlation algorithms in combination with subpixel shift finding algorithms to estimate the wavefront gradient, two of which were tested for their effect on the measurement accuracy. We find that the measurement error of an extended-source HSWFS is governed mainly by the optical geometry of the HSWFS, employed subpixel finding algorithm, and phase anisoplanatism. Our results show that effects of scintillation anisoplanatism are negligible when cross-correlation algorithms are used.
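    Extended-source Hartmann-Shack sensors of the kind simulated above estimate per-subaperture wavefront gradients from image shifts. A generic sketch of one such estimator, FFT cross-correlation followed by parabolic subpixel peak interpolation, is given below; it is a stand-in for, not a reproduction of, the algorithms tested in the paper.

```python
import numpy as np

def subpixel_shift(ref, img):
    """Estimate (dy, dx) such that img is approximately ref displaced by it."""
    # Circular cross-correlation via the FFT; the peak marks the integer shift.
    c = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))))
    py, px = np.unravel_index(np.argmax(c), c.shape)
    shift = []
    for p, line in ((py, c[:, px]), (px, c[py, :])):
        n = line.size
        # Parabola through the peak and its circular neighbours gives the
        # subpixel refinement along this axis.
        ym, y0, yp = line[(p - 1) % n], line[p], line[(p + 1) % n]
        denom = ym - 2.0 * y0 + yp
        delta = 0.5 * (ym - yp) / denom if denom != 0.0 else 0.0
        s = p + delta
        shift.append(s - n if s > n / 2 else s)  # wrap to a signed shift
    return tuple(shift)

# A rolled copy of a random scene should be recovered exactly.
rng = np.random.default_rng(2)
ref = rng.random((32, 32))
img = np.roll(ref, (3, -2), axis=(0, 1))
print(subpixel_shift(ref, img))
```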

  20. High accuracy switched-current circuits using an improved dynamic mirror

    NASA Technical Reports Server (NTRS)

    Zweigle, G.; Fiez, T.

    1991-01-01

    The switched-current technique, a recently developed circuit approach to analog signal processing, has emerged as an alternative/complement to the well-established switched-capacitor circuit technique. High-speed switched-current circuits offer potential cost and power savings over slower switched-capacitor circuits. Accuracy improvements are a primary concern at this stage in the development of the switched-current technique. Use of the dynamic current mirror has produced circuits that are insensitive to transistor matching errors. The dynamic current mirror has been limited by other sources of error, including clock-feedthrough and voltage transient errors. In this paper we present an improved switched-current building block using the dynamic current mirror. Utilizing current feedback, the errors due to current imbalance in the dynamic current mirror are reduced. Simulations indicate that this feedback can reduce total harmonic distortion by as much as 9 dB. Additionally, we have developed a clock-feedthrough reduction scheme for which simulations reveal a potential 10 dB total harmonic distortion improvement. The clock-feedthrough reduction scheme also significantly reduces offset errors and allows for cancellation with a constant current source. Experimental results confirm the simulated improvements.

  1. Simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) for dynamic contrast-enhanced MRI of liver.

    PubMed

    Ning, Jia; Sun, Yongliang; Xie, Sheng; Zhang, Bida; Huang, Feng; Koken, Peter; Smink, Jouke; Yuan, Chun; Chen, Huijun

    2018-05-01

    To propose a simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) for liver dynamic contrast-enhanced MRI. The proposed SAHA simultaneously acquired high temporal-resolution 2D images for vascular input function extraction using Cartesian sampling and 3D large-coverage high spatial-resolution liver dynamic contrast-enhanced images using golden angle stack-of-stars acquisition in an interleaved way. Simulations were conducted to investigate the accuracy of SAHA in pharmacokinetic analysis. A healthy volunteer and three patients with cirrhosis or hepatocellular carcinoma were included in the study to investigate the feasibility of SAHA in vivo. Simulation studies showed that SAHA can provide closer results to the true values and lower root mean square error of estimated pharmacokinetic parameters in all of the tested scenarios. The in vivo scans of subjects provided fair image quality of both 2D images for arterial input function and portal venous input function and 3D whole liver images. The in vivo fitting results showed that the perfusion parameters of healthy liver were significantly different from those of cirrhotic liver and HCC. The proposed SAHA can provide improved accuracy in pharmacokinetic modeling and is feasible in human liver dynamic contrast-enhanced MRI, suggesting that SAHA is a potential tool for liver dynamic contrast-enhanced MRI. Magn Reson Med 79:2629-2641, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  2. Accuracy and Reproducibility of Adipose Tissue Measurements in Young Infants by Whole Body Magnetic Resonance Imaging

    PubMed Central

    Bauer, Jan Stefan; Noël, Peter Benjamin; Vollhardt, Christiane; Much, Daniela; Degirmenci, Saliha; Brunner, Stefanie; Rummeny, Ernst Josef; Hauner, Hans

    2015-01-01

    Purpose MR might be well suited to obtain reproducible and accurate measures of fat tissues in infants. This study evaluates MR-measurements of adipose tissue in young infants in vitro and in vivo. Material and Methods MR images of ten phantoms simulating subcutaneous fat of an infant’s torso were obtained using a 1.5T MR scanner with and without simulated breathing. Scans consisted of a Cartesian water-suppression turbo spin echo (wsTSE) sequence, and a PROPELLER wsTSE sequence. Fat volume was quantified directly and by MR imaging using k-means clustering and threshold-based segmentation procedures to calculate accuracy in vitro. Whole body MR was obtained in sleeping young infants (average age 67±30 days). This study was approved by the local review board. All parents gave written informed consent. To assess reproducibility in vivo, Cartesian and PROPELLER wsTSE sequences were repeated in seven and four young infants, respectively. Overall, 21 repetitions were performed for the Cartesian sequence and 13 repetitions for the PROPELLER sequence. Results In vitro accuracy errors depended on the chosen segmentation procedure, ranging from 5.4% to 76%, while the sequence showed no significant influence. Artificial breathing increased the minimal accuracy error to 9.1%. In vivo reproducibility errors for total fat volume of the sleeping infants ranged from 2.6% to 3.4%. Neither segmentation nor sequence significantly influenced reproducibility. Conclusion With both Cartesian and PROPELLER sequences an accurate and reproducible measure of body fat was achieved. Adequate segmentation was mandatory for high accuracy. PMID:25706876
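    The threshold-based segmentation step compared in this study can be sketched as follows. The image intensities, threshold, and voxel volume below are hypothetical illustrations, not values from the study:

```python
import numpy as np

def threshold_fat_volume(image, threshold, voxel_volume_ml):
    """Estimate fat volume by counting voxels above an intensity threshold."""
    fat_voxels = np.count_nonzero(image > threshold)
    return fat_voxels * voxel_volume_ml

# Synthetic "water-suppressed" image: fat appears bright, other tissue dark.
rng = np.random.default_rng(0)
image = rng.normal(20.0, 5.0, size=(64, 64))                  # background tissue
image[16:48, 16:48] = rng.normal(200.0, 10.0, size=(32, 32))  # bright "fat" block

volume = threshold_fat_volume(image, threshold=110.0, voxel_volume_ml=0.01)
print(round(volume, 2))  # → 10.24 (1024 voxels x 0.01 ml)
```

    In practice the threshold (or the k-means cluster assignment) would be derived from the image histogram rather than fixed a priori, which is where the segmentation-dependent accuracy errors reported above come from.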

  3. Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time-consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.

  4. Comparative analysis of numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Lachinova, Svetlana L.; Vorontsov, Mikhail A.; Filimonov, Grigory A.; LeMaster, Daniel A.; Trippel, Matthew E.

    2017-07-01

    Computational efficiency and accuracy of wave-optics-based Monte-Carlo and brightness function numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence are evaluated. Simulation results are compared with theoretical estimates based on known analytical solutions for the modulation transfer function of an imaging system and the long-exposure image of a Gaussian-shaped incoherent light source. It is shown that the accuracy of both techniques is comparable over a wide range of path lengths and atmospheric turbulence conditions, whereas the brightness function technique is advantageous in terms of computational speed.

  5. Accuracy of a radiofrequency identification (RFID) badge system to monitor hand hygiene behavior during routine clinical activities.

    PubMed

    Pineles, Lisa L; Morgan, Daniel J; Limper, Heather M; Weber, Stephen G; Thom, Kerri A; Perencevich, Eli N; Harris, Anthony D; Landon, Emily

    2014-02-01

    Hand hygiene (HH) is a critical part of infection prevention in health care settings. Hospitals around the world continuously struggle to improve health care personnel (HCP) HH compliance. The current gold standard for monitoring compliance is direct observation; however, this method is time-consuming and costly. One emerging area of interest involves automated systems for monitoring HH behavior, such as radiofrequency identification (RFID) tracking systems. To assess the accuracy of a commercially available RFID system in detecting HCP HH behavior, we compared data collected by the RFID system with direct observation in a simulated validation setting and in a real-life clinical setting across 2 hospitals. A total of 1,554 HH events was observed. Accuracy for identifying HH events was high in the simulated validation setting (88.5%) but relatively low in the real-life clinical setting (52.4%). This difference was significant (P < .01). Accuracy for detecting HCP movement into and out of patient rooms was also high in the simulated setting but not in the real-life clinical setting (100% on entry and exit in the simulated setting vs 54.3% on entry and 49.5% on exit in the real-life clinical setting, P < .01). In this validation study of an RFID system, almost half of the HH events were missed. More research is necessary to further develop these systems and improve accuracy prior to widespread adoption. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  6. Thematic accuracy of the 1992 National Land-Cover Data for the eastern United States: Statistical methodology and regional results

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Smith, J.H.; Yang, L.

    2003-01-01

    The accuracy of the 1992 National Land-Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or alternate reference label determined for a sample pixel and a mode class of the mapped 3×3 block of pixels centered on the sample pixel. Results are reported for each of the four regions comprising the eastern United States for both Anderson Level I and II classifications. Overall accuracies for Levels I and II are 80% and 46% for New England, 82% and 62% for New York/New Jersey (NY/NJ), 70% and 43% for the Mid-Atlantic, and 83% and 66% for the Southeast.
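    The overall accuracies quoted above follow the standard definition: the sum of the error-matrix diagonal (map label agrees with reference label) divided by the total sample count. A minimal sketch with a hypothetical 3-class matrix, not data from the NLCD assessment:

```python
import numpy as np

def overall_accuracy(error_matrix):
    """Overall accuracy: correctly labeled samples / all samples."""
    m = np.asarray(error_matrix, dtype=float)
    return np.trace(m) / m.sum()

# Hypothetical error matrix (rows: map labels, columns: reference labels)
m = [[80, 10, 10],
     [ 5, 70,  5],
     [15, 20, 85]]
print(round(overall_accuracy(m), 3))  # → 0.783
```

    Per-class user's and producer's accuracies follow the same pattern, dividing each diagonal entry by its row or column sum, respectively.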

  7. Modeling Extra-Long Tsunami Propagation: Assessing Data, Model Accuracy and Forecast Implications

    NASA Astrophysics Data System (ADS)

    Titov, V. V.; Moore, C. W.; Rabinovich, A.

    2017-12-01

    Detecting and modeling tsunamis propagating tens of thousands of kilometers from the source is a formidable scientific challenge and may seem to satisfy only scientific curiosity. However, the results of such analyses provide valuable insight into tsunami propagation dynamics and model accuracy, and have important implications for tsunami forecasting. The Mw = 9.3 megathrust earthquake of December 26, 2004 off the coast of Sumatra generated a tsunami that devastated Indian Ocean coastlines and spread into the Pacific and Atlantic oceans. The tsunami was recorded by a great number of coastal tide gauges, including some located 15-25 thousand kilometers from the source area. To date, it is still the farthest instrumentally detected tsunami. The data from these instruments throughout the world oceans enabled us to estimate various statistical parameters and the energy decay of this event. High-resolution records of this tsunami from DARTs 32401 (offshore of northern Chile), 46405 and NeMO (both offshore of the US West Coast), combined with the mainland tide gauge measurements, enabled us to examine far-field characteristics of the 2004 tsunami in the Pacific Ocean and to compare the results of global numerical simulations with the observations. Despite their small heights (less than 2 cm at deep-ocean locations), the records demonstrated consistent spatial and temporal structure. The numerical model described well the frequency content, amplitudes and general structure of the observed waves at deep-ocean and coastal gauges. We present analysis of the measurements and comparison with model data to discuss implications for tsunami forecast accuracy. A model study at such extreme distances from the tsunami source and at extra-long times after the event is an attempt to establish accuracy bounds for tsunami models and accuracy limitations on the use of models for forecasting. We discuss the results in application to tsunami forecasting and tsunami modeling in general.

  8. The zero-multipole summation method for estimating electrostatic interactions in molecular dynamics: analysis of the accuracy and application to liquid systems.

    PubMed

    Fukuda, Ikuo; Kamiya, Narutoshi; Nakamura, Haruki

    2014-05-21

    In the preceding paper [I. Fukuda, J. Chem. Phys. 139, 174107 (2013)], the zero-multipole (ZM) summation method was proposed for efficiently evaluating the electrostatic Coulombic interactions of a classical point charge system. The summation takes a simple pairwise form, but prevents the electrically non-neutral multipole states that may artificially be generated by a simple cutoff truncation, which often causes large energetic noises and significant artifacts. The purpose of this paper is to judge the ability of the ZM method by investigating the accuracy, parameter dependencies, and stability in applications to liquid systems. To conduct this, first, the energy-functional error was divided into three terms and each term was analyzed by a theoretical error-bound estimation. This estimation gave us a clear basis of the discussions on the numerical investigations. It also gave a new viewpoint between the excess energy error and the damping effect by the damping parameter. Second, with the aid of these analyses, the ZM method was evaluated based on molecular dynamics (MD) simulations of two fundamental liquid systems, a molten sodium-chlorine ion system and a pure water molecule system. In the ion system, the energy accuracy, compared with the Ewald summation, was better for a larger value of multipole moment l currently induced until l ≲ 3 on average. This accuracy improvement with increasing l is due to the enhancement of the excess-energy accuracy. However, this improvement is wholly effective in the total accuracy if the theoretical moment l is smaller than or equal to a system intrinsic moment L. The simulation results thus indicate L ∼ 3 in this system, and we observed less accuracy in l = 4. We demonstrated the origins of parameter dependencies appearing in the crossing behavior and the oscillations of the energy error curves. With raising the moment l, we observed that smaller values of the damping parameter provided more accurate results and smoother

  9. The zero-multipole summation method for estimating electrostatic interactions in molecular dynamics: Analysis of the accuracy and application to liquid systems

    NASA Astrophysics Data System (ADS)

    Fukuda, Ikuo; Kamiya, Narutoshi; Nakamura, Haruki

    2014-05-01

    In the preceding paper [I. Fukuda, J. Chem. Phys. 139, 174107 (2013)], the zero-multipole (ZM) summation method was proposed for efficiently evaluating the electrostatic Coulombic interactions of a classical point charge system. The summation takes a simple pairwise form, but prevents the electrically non-neutral multipole states that may artificially be generated by a simple cutoff truncation, which often causes large energetic noises and significant artifacts. The purpose of this paper is to judge the ability of the ZM method by investigating the accuracy, parameter dependencies, and stability in applications to liquid systems. To conduct this, first, the energy-functional error was divided into three terms and each term was analyzed by a theoretical error-bound estimation. This estimation gave us a clear basis of the discussions on the numerical investigations. It also gave a new viewpoint between the excess energy error and the damping effect by the damping parameter. Second, with the aid of these analyses, the ZM method was evaluated based on molecular dynamics (MD) simulations of two fundamental liquid systems, a molten sodium-chlorine ion system and a pure water molecule system. In the ion system, the energy accuracy, compared with the Ewald summation, was better for a larger value of multipole moment l currently induced until l ≲ 3 on average. This accuracy improvement with increasing l is due to the enhancement of the excess-energy accuracy. However, this improvement is wholly effective in the total accuracy if the theoretical moment l is smaller than or equal to a system intrinsic moment L. The simulation results thus indicate L ˜ 3 in this system, and we observed less accuracy in l = 4. We demonstrated the origins of parameter dependencies appearing in the crossing behavior and the oscillations of the energy error curves. With raising the moment l, we observed that smaller values of the damping parameter provided more accurate results and smoother

  10. Simulation-based Mastery Learning Improves Cardiac Auscultation Skills in Medical Students

    PubMed Central

    McGaghie, William C.; Cohen, Elaine R.; Kaye, Marsha; Wayne, Diane B.

    2010-01-01

    Background Cardiac auscultation is a core clinical skill. However, prior studies show that trainee skills are often deficient and that clinical experience is not a proxy for competence. Objective To describe a mastery model of cardiac auscultation education and evaluate its effectiveness in improving bedside cardiac auscultation skills. Design Untreated control group design with pretest and posttest. Participants Third-year students who received a cardiac auscultation curriculum and fourth-year students who did not. Intervention A cardiac auscultation curriculum consisting of a computer tutorial and a cardiac patient simulator. All third-year students were required to meet or exceed a minimum passing score (MPS) set by an expert panel at posttest. Measurements Diagnostic accuracy with simulated heart sounds and actual patients. Results Trained third-year students (n = 77) demonstrated significantly higher cardiac auscultation accuracy compared to untrained fourth-year students (n = 31) in assessment of simulated heart sounds (93.8% vs. 73.9%, p < 0.001) and with real patients (81.8% vs. 75.1%, p = 0.003). USMLE scores correlated modestly with a computer-based multiple choice assessment using simulated heart sounds but not with bedside skills on real patients. Conclusions A cardiac auscultation curriculum consisting of deliberate practice with a computer-based tutorial and a cardiac patient simulator resulted in improved assessment of simulated heart sounds and more accurate examination of actual patients. PMID:20339952
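    Group comparisons such as 93.8% vs. 73.9% are commonly tested as a difference of two proportions. A sketch of a pooled two-sided z-test follows; the correct/total counts are hypothetical, since the abstract reports percentages and group sizes rather than per-item denominators:

```python
from math import sqrt, erfc

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

# Hypothetical correct/total counts for trained vs. untrained groups
z, p = two_proportion_z(72, 77, 23, 31)
print(f"z = {z:.2f}, p = {p:.4f}")
```

    The original study's exact test statistics are not reproduced here; this only illustrates the form of the comparison.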

  11. High Fidelity BWR Fuel Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Su Jong

    This report describes the Consortium for Advanced Simulation of Light Water Reactors (CASL) work conducted for completion of the Thermal Hydraulics Methods (THM) Level 3 milestone THM.CFD.P13.03: High Fidelity BWR Fuel Simulation. High fidelity computational fluid dynamics (CFD) simulation for Boiling Water Reactor (BWR) was conducted to investigate the applicability and robustness performance of BWR closures. As a preliminary study, a CFD model with simplified Ferrule spacer grid geometry of the NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) benchmark has been implemented. Performance of the multiphase segregated solver with baseline boiling closures has been evaluated. Although the mean values of void fraction and exit quality of the CFD result for BFBT case 4101-61 agreed with experimental data, the local void distribution was not predicted accurately. The mesh quality was one of the critical factors to obtain a converged result. The stability and robustness of the simulation were mainly affected by the mesh quality and the combination of BWR closure models. In addition, CFD modeling of the fully-detailed spacer grid geometry with mixing vanes is necessary for improving the accuracy of the CFD simulation.

  12. Simulation and Measurement of Stray Light in the CLASP

    NASA Technical Reports Server (NTRS)

    Narukage, Noriyuki; Kano, Ryohei; Bando, Takamasa; Ishikawa, Ryoko; Kubo, Masahito; Tsuzuki, Toshihiro; Katsukawa, Yukio; Ishikawa, Shin-nosuke; Giono, Gabriel; Suematsu, Yoshinori; hide

    2015-01-01

    The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is a planned (2015) international sounding-rocket experiment for spectro-polarimetric observation of the solar Lyman-α line. The purpose of this experiment is to measure the magnetic field of the chromosphere and transition region directly via the Hanle effect, which requires detecting the linear polarization of the Lyα line with 0.1% accuracy. Because the total visible-light flux of the Sun is overwhelmingly larger, about 200,000 times, than that in the Lyα wavelength region, even slight visible stray light can prevent the 0.1% polarimetric accuracy from being achieved. We therefore first carried out a stray light simulation of CLASP using the illumination design analysis software LightTools. A feature of this simulation is that it uses both the optical design file (ZEMAX format) and the structural design file (STEP format) to reproduce CLASP as realistically as possible for the stray light study. Then, with the provisionally assembled flight instrument, we fed actual sunlight into CLASP using the coelostat of the National Astronomical Observatory of Japan and measured the stray light (sun test). A pattern not seen in the simulation appeared in the stray light measurements and required countermeasures; an additional simulation showed that this pattern is due to diffracted light at the slit. The simulation results are currently being used as a reference for those countermeasures. In this presentation, we report the stray light simulations and stray light measurement results that we have carried out.

  13. Optimizing Tsunami Forecast Model Accuracy

    NASA Astrophysics Data System (ADS)

    Whitmore, P.; Nyland, D. L.; Huang, P. Y.

    2015-12-01

    Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.

  14. Computer simulations of alkali-acetate solutions: Accuracy of the forcefields in different concentrations

    NASA Astrophysics Data System (ADS)

    Ahlstrand, Emma; Zukerman Schpector, Julio; Friedman, Ran

    2017-11-01

    When proteins are solvated in electrolyte solutions that contain alkali ions, the ions interact mostly with carboxylates on the protein surface. Correctly accounting for alkali-carboxylate interactions is thus important for realistic simulations of proteins. Acetates are the simplest carboxylates that are amphipathic, and experimental data for alkali acetate solutions are available and can be compared with observables obtained from simulations. We carried out molecular dynamics simulations of alkali acetate solutions using polarizable and non-polarizable forcefields and examined the ion-acetate interactions. In particular, activity coefficients and association constants were studied in a range of concentrations (0.03, 0.1, and 1M). In addition, quantum-mechanics (QM) based energy decomposition analysis was performed in order to estimate the contribution of polarization, electrostatics, dispersion, and QM (non-classical) effects on the cation-acetate and cation-water interactions. Simulations of Li-acetate solutions in general overestimated the binding of Li+ and acetates. In lower concentrations, the activity coefficients of alkali-acetate solutions were too high, which is suggested to be due to the simulation protocol and not the forcefields. Energy decomposition analysis suggested that improvement of the forcefield parameters to enable accurate simulations of Li-acetate solutions can be achieved but may require the use of a polarizable forcefield. Importantly, simulations with some ion parameters could not reproduce the correct ion-oxygen distances, which calls for caution in the choice of ion parameters when protein simulations are performed in electrolyte solutions.

  15. Initial Data Analysis Results for ATD-2 ISAS HITL Simulation

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong

    2017-01-01

    To evaluate the operational procedures and information requirements for the core functional capabilities of the ATD-2 project, such as the tactical surface metering tool, the APREQ-CFR procedure, and data element exchanges between ramp and tower, human-in-the-loop (HITL) simulations were performed in March 2017. This presentation shows the initial data analysis results from the HITL simulations. With respect to the different runway configurations and metering values in the tactical surface scheduler, various airport performance metrics were analyzed and compared. These metrics include gate holding time, taxi-out time, runway throughput, queue size and wait time in queue, and TMI flight compliance. In addition to the metering value, other factors affecting airport performance in the HITL simulation, including run duration, runway changes, and TMI constraints, are also discussed.

  16. Update and review of accuracy assessment techniques for remotely sensed data

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Heinen, J. T.; Oderwald, R. G.

    1983-01-01

    Research performed in the accuracy assessment of remotely sensed data is updated and reviewed. The use of discrete multivariate analysis techniques for the assessment of error matrices, the use of computer simulation for assessing various sampling strategies, and an investigation of spatial autocorrelation techniques are examined.
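    One of the discrete multivariate techniques standard in this line of accuracy-assessment work is the kappa coefficient, which corrects the raw agreement of an error matrix for agreement expected by chance. A sketch with a hypothetical matrix (not data from the review):

```python
import numpy as np

def kappa(error_matrix):
    """Cohen's kappa: chance-corrected agreement for an error matrix."""
    m = np.asarray(error_matrix, dtype=float)
    n = m.sum()
    observed = np.trace(m) / n                                  # raw agreement
    expected = np.sum(m.sum(axis=0) * m.sum(axis=1)) / n**2     # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical 3-class error matrix (rows: map labels, cols: reference labels)
m = [[45,  4,  1],
     [ 6, 30,  4],
     [ 0,  3, 27]]
print(round(kappa(m), 3))  # → 0.77
```

    Kappa is lower than the raw overall accuracy (here 0.85) because it discounts the agreement that unbalanced class totals would produce by chance.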

  17. Finite Element Simulation of Articular Contact Mechanics with Quadratic Tetrahedral Elements

    PubMed Central

    Maas, Steve A.; Ellis, Benjamin J.; Rawlins, David S.; Weiss, Jeffrey A.

    2016-01-01

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. PMID:26900037

  18. Finite element simulation of articular contact mechanics with quadratic tetrahedral elements.

    PubMed

    Maas, Steve A; Ellis, Benjamin J; Rawlins, David S; Weiss, Jeffrey A

    2016-03-21

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. An integrated system for the online monitoring of particle therapy treatment accuracy

    NASA Astrophysics Data System (ADS)

    Fiorina, E.; INSIDE Collaboration

    2016-07-01

    Quality assurance in hadrontherapy remains an open issue that can be addressed with reliable monitoring of treatment accuracy. The INSIDE (INnovative SolutIons for DosimEtry in hadrontherapy) project aims to develop an integrated online monitoring system based on two dedicated PET panels and a tracking system, called the Dose Profiler. The proposed solution is designed to operate in-beam and provide immediate feedback on the particle range by acquiring both photons produced by β+ decays and prompt secondary particle signals. Monte Carlo simulations play an important role both in the system development, by confirming the design feasibility, and in the system operation, by aiding data interpretation. A FLUKA-based integrated simulation was developed taking into account the hadron beam structure, the phantom/patient features, and the PET detector and Dose Profiler specifications. In addition, to reduce simulation time in signal generation on the PET detectors, a two-step technique has been implemented and validated. The first PET modules were tested in May 2015 at the Centro Nazionale Adroterapia Oncologica (CNAO) in Pavia (Italy) with very satisfactory results: in-spill, inter-spill and post-treatment PET images were reconstructed, and quantitative agreement between data and simulation was found.

  20. Matters of accuracy and conventionality: prior accuracy guides children's evaluations of others' actions.

    PubMed

    Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed

    2013-03-01

    Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clément, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and 4-year-olds were asked to endorse and imitate one of two actors performing an unfamiliar action, one actor who was unconventional but successful and one who was conventional but unsuccessful. These data demonstrated that children preferred endorsing and imitating the unconventional but successful actor. Results suggest that when the accuracy and conventionality of a source are put into conflict, children may give priority to accuracy over conventionality when estimating the source's reliability and, ultimately, when deciding who to trust.

  1. Acoustic-based proton range verification in heterogeneous tissue: simulation studies

    NASA Astrophysics Data System (ADS)

    Jones, Kevin C.; Nie, Wei; Chu, James C. H.; Turian, Julius V.; Kassaee, Alireza; Sehgal, Chandra M.; Avery, Stephen

    2018-01-01

    Acoustic-based proton range verification (protoacoustics) is a potential in vivo technique for determining the Bragg peak position. Previous measurements and simulations have been restricted to homogeneous water tanks. Here, a CT-based simulation method is proposed and applied to a liver and prostate case to model the effects of tissue heterogeneity on the protoacoustic amplitude and time-of-flight range verification accuracy. For the liver case, posterior irradiation with a single proton pencil beam was simulated for detectors placed on the skin. In the prostate case, a transrectal probe measured the protoacoustic pressure generated by irradiation with five separate anterior proton beams. After calculating the proton beam dose deposition, each CT voxel’s material properties were mapped based on Hounsfield Unit values, and thermoacoustically-generated acoustic wave propagation was simulated with the k-Wave MATLAB toolbox. By comparing the simulation results for the original liver CT to homogenized variants, the effects of heterogeneity were assessed. For the liver case, 1.4 cGy of dose at the Bragg peak generated 50 mPa of pressure (13 cm distal), a 2×  lower amplitude than simulated in a homogeneous water tank. Protoacoustic triangulation of the Bragg peak based on multiple detector measurements resulted in 0.4 mm accuracy for a δ-function proton pulse irradiation of the liver. For the prostate case, higher amplitudes are simulated (92-1004 mPa) for closer detectors (<8 cm). For four of the prostate beams, the protoacoustic range triangulation was accurate to  ⩽1.6 mm (δ-function proton pulse). Based on the results, application of protoacoustic range verification to heterogeneous tissue will result in decreased signal amplitudes relative to homogeneous water tank measurements, but accurate range verification is still expected to be possible.
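    The multi-detector triangulation described above can be sketched as a least-squares search over candidate source positions, given arrival times from a δ-function pulse. The 2D geometry, detector coordinates, and uniform sound speed below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def triangulate_bragg_peak(detectors, arrival_times, c=1.5):
    """Grid-search the 2D point whose predicted times of flight best
    match the measured arrivals (c ~ 1.5 mm/us in soft tissue)."""
    xs = np.linspace(-100.0, 100.0, 201)
    ys = np.linspace(-100.0, 100.0, 201)
    best, best_err = None, np.inf
    for x in xs:
        for y in ys:
            d = np.hypot(detectors[:, 0] - x, detectors[:, 1] - y)
            err = np.sum((d / c - arrival_times) ** 2)
            if err < best_err:
                best, best_err = (float(x), float(y)), err
    return best

# Hypothetical geometry: Bragg peak at (10, -20) mm, three detectors on the "skin"
detectors = np.array([[130.0, 0.0], [0.0, 130.0], [-130.0, -40.0]])
true_pos = np.array([10.0, -20.0])
times = np.hypot(detectors[:, 0] - true_pos[0],
                 detectors[:, 1] - true_pos[1]) / 1.5
print(triangulate_bragg_peak(detectors, times))  # → (10.0, -20.0)
```

    A real implementation would refine the grid or use a nonlinear least-squares solver, and would account for the heterogeneous sound speed whose effect the simulations above quantify.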

  2. Numerical simulation of granular flows : comparison with experimental results

    NASA Astrophysics Data System (ADS)

    Pirulli, M.; Mangeney-Castelnau, A.; Lajeunesse, E.; Vilotte, J.-P.; Bouchut, F.; Bristeau, M. O.; Perthame, B.

    2003-04-01

    Granular avalanches such as rock or debris flows regularly cause large amounts of human and material damage. Numerical simulation of granular avalanches should provide a useful tool for investigating, within realistic geological contexts, the dynamics of these flows and of their arrest phase, and for improving the risk assessment of such natural hazards. Here, a debris avalanche numerical model is validated against granular experiments on an inclined plane. The comparison is performed by simulating the granular flow of glass beads from a reservoir through a gate down an inclined plane. This unsteady situation evolves toward the steady state observed in the laboratory. Furthermore, the simulation exactly reproduces the arrest phase obtained by suddenly closing the gate of the reservoir once a thick flow has developed. The spreading of a granular mass released from rest at the top of a rough inclined plane is also investigated. The evolution of the avalanche shape, the velocity, and the characteristics of the arrest phase are compared with experimental results, and the forces involved are analyzed for various flow laws.

  3. MOCCA code for star cluster simulation: comparison with optical observations using COCOA

    NASA Astrophysics Data System (ADS)

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Olech, Arkadiusz; Hypki, Arkadiusz

    2016-02-01

    We introduce and present preliminary results from COCOA (Cluster simulatiOn Comparison with ObservAtions) code for a star cluster after 12 Gyr of evolution simulated using the MOCCA code. The COCOA code is being developed to quickly compare results of numerical simulations of star clusters with observational data. We use COCOA to obtain parameters of the projected cluster model. For comparison, a FITS file of the projected cluster was provided to observers so that they could use their observational methods and techniques to obtain cluster parameters. The results show that the similarity of cluster parameters obtained through numerical simulations and observations depends significantly on the quality of observational data and photometric accuracy.

  4. A Reduced-Order Model for Efficient Simulation of Synthetic Jet Actuators

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2003-01-01

    A new reduced-order model of multidimensional synthetic jet actuators that combines the accuracy and conservation properties of full numerical simulation methods with the efficiency of simplified zero-order models is proposed. The multidimensional actuator is simulated by solving the time-dependent compressible quasi-1-D Euler equations, while the diaphragm is modeled as a moving boundary. The governing equations are approximated with a fourth-order finite difference scheme on a moving mesh such that one of the mesh boundaries coincides with the diaphragm. The reduced-order model of the actuator has several advantages. In contrast to the 3-D models, this approach provides conservation of mass, momentum, and energy. Furthermore, the new method is computationally much more efficient than the multidimensional Navier-Stokes simulation of the actuator cavity flow, while providing practically the same accuracy in the exterior flowfield. The most distinctive feature of the present model is its ability to predict the resonance characteristics of synthetic jet actuators; this is not practical when using the 3-D models because of the computational cost involved. Numerical results demonstrating the accuracy of the new reduced-order model and its limitations are presented.

  5. Effect of provider volume on the accuracy of hospital report cards: a Monte Carlo study.

    PubMed

    Austin, Peter C; Reeves, Mathew J

    2014-03-01

    Hospital report cards, in which outcomes after the provision of medical or surgical care are compared across healthcare providers, are being published with increasing frequency. However, the accuracy of such comparisons is controversial, especially when case volumes are small. The objective was to determine the relationship between hospital case volume and the accuracy of hospital report cards. Monte Carlo simulations were used to examine the influence of hospital case volume on the accuracy of hospital report cards in a setting in which true hospital performance was known with certainty, and perfect risk-adjustment was feasible. The parameters used to generate the simulated data sets were obtained from empirical analyses of data on patients hospitalized with acute myocardial infarction in Ontario, Canada, in which the overall 30-day mortality rate was 11.1%. We found that provider volume had a strong effect on the accuracy of hospital report cards. However, provider volume had to be >300 before ≥70% of hospitals were correctly classified. Furthermore, hospital volume had to be >1000 before ≥80% of hospitals were correctly classified. Producers and users of hospital report cards need to be aware that, even when perfect risk adjustment is possible, the accuracy of hospital report cards is, at best, modest for small to medium-sized case loads (i.e., 100-300). Hospital report cards displayed high degrees of accuracy only when provider volumes exceeded the typical annual hospital case load for many cardiovascular conditions and procedures.
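    The report-card setting lends itself to a compact Monte Carlo sketch. The Python below is not the authors' simulation: the normal spread of true hospital mortality rates and the simple above/below-baseline classification rule are assumptions made for illustration, with only the 11.1% baseline taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def classification_accuracy(n_cases, n_hospitals=1000, base_rate=0.111):
    """Fraction of hospitals classified on the correct side of the
    overall mortality rate. True rates are drawn from an assumed
    normal spread around the 11.1% baseline; each hospital's observed
    rate is a binomial draw over its case volume (idealized perfect
    risk adjustment, as in the study's setting)."""
    true_rate = np.clip(rng.normal(base_rate, 0.02, n_hospitals), 0.01, 0.5)
    deaths = rng.binomial(n_cases, true_rate)
    observed = deaths / n_cases
    return float(np.mean((true_rate > base_rate) == (observed > base_rate)))

# Accuracy improves with provider volume, echoing the paper's finding.
acc_small = classification_accuracy(100)
acc_large = classification_accuracy(1000)
```

    Sweeping `n_cases` over a grid reproduces the qualitative volume-accuracy curve the study reports, though the exact thresholds depend on the assumed between-hospital spread.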

  6. A fast and robust TOUGH2 module to simulate geological CO2 storage in saline aquifers

    NASA Astrophysics Data System (ADS)

    Shabani, Babak; Vilcáez, Javier

    2018-02-01

    A new TOUGH2 module to simulate geological CO2 storage (GCS) in saline aquifers is developed based on the widely employed ECO2N module of TOUGH2. The newly developed TOUGH2 module uses a new non-iterative fugacity-activity thermodynamic model to obtain the partitioning of CO2 and H2O between the aqueous and gas phases. Simple but robust thermophysical correlations are used to obtain the density, viscosity, and enthalpy of the gas phase. The implementation and accuracy of the employed thermophysical correlations are verified by comparison against the National Institute of Standards and Technology (NIST) online thermophysical database. To assess the computational accuracy and efficiency, simulation results obtained with the new TOUGH2 module for a one-dimensional non-isothermal radial system and a three-dimensional isothermal system are compared against the simulation results obtained with the ECO2N module. Treating the salt mass fraction in the aqueous phase as a constant, along with the inclusion of a non-iterative fugacity-activity thermodynamic model and simple thermophysical correlations, resulted in simulations much faster than those with the ECO2N module, without losing numerical accuracy. Both modules yield virtually identical results. Additional field-scale simulations of CO2 injection into an actual non-isothermal and heterogeneous geological formation confirmed that the new module is much faster than the ECO2N module in simulating complex field-scale conditions. Owing to its capability to handle CO2-CH4-H2S-N2 gas mixtures and its compatibility with TOUGHREACT, this new TOUGH2 module offers the possibility of developing a fast and robust TOUGHREACT module to predict the fate of CO2 in GCS sites under biotic conditions where CO2, CH4, H2S, and N2 gases can be formed.

  7. Pairwise adaptive thermostats for improved accuracy and stability in dissipative particle dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leimkuhler, Benedict, E-mail: b.leimkuhler@ed.ac.uk; Shang, Xiaocheng, E-mail: x.shang@brown.edu

    2016-11-01

    We examine the formulation and numerical treatment of dissipative particle dynamics (DPD) and momentum-conserving molecular dynamics. We show that it is possible to improve both the accuracy and the stability of DPD by employing a pairwise adaptive Langevin thermostat that precisely matches the dynamical characteristics of DPD simulations (e.g., autocorrelation functions) while automatically correcting thermodynamic averages using a negative feedback loop. In the low friction regime, it is possible to replace DPD by a simpler momentum-conserving variant of the Nosé–Hoover–Langevin method based on thermostatting only pairwise interactions; we show that this method has an extra order of accuracy for an important class of observables (a superconvergence result), while also allowing larger timesteps than alternatives. All the methods mentioned in the article are easily implemented. Numerical experiments are performed in both equilibrium and nonequilibrium settings, using Lees–Edwards boundary conditions to induce shear flow.

  8. Super-rogue waves in simulations based on weakly nonlinear and fully nonlinear hydrodynamic equations.

    PubMed

    Slunyaev, A; Pelinovsky, E; Sergeeva, A; Chabchoub, A; Hoffmann, N; Onorato, M; Akhmediev, N

    2013-07-01

    The rogue wave solutions (rational multibreathers) of the nonlinear Schrödinger equation (NLS) are tested in numerical simulations of weakly nonlinear and fully nonlinear hydrodynamic equations. Only the lowest order solutions from 1 to 5 are considered. A higher accuracy of wave propagation in space is reached using the modified NLS equation, also known as the Dysthe equation. This numerical modeling allowed us to directly compare simulations with recent results of laboratory measurements in Chabchoub et al. [Phys. Rev. E 86, 056601 (2012)]. In order to achieve even higher physical accuracy, we employed fully nonlinear simulations of potential Euler equations. These simulations provided us with basic characteristics of long time evolution of rational solutions of the NLS equation in the case of near-breaking conditions. The analytic NLS solutions are found to describe the actual wave dynamics of steep waves reasonably well.
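    The lowest-order rational solution tested in such studies is the Peregrine breather of the focusing NLS, whose closed form is standard and easy to evaluate. A minimal Python sketch, using the common dimensionless normalization i ψ_t + ψ_xx/2 + |ψ|² ψ = 0 on a unit background:

```python
import numpy as np

def peregrine(x, t):
    """First-order rational (Peregrine) solution of the focusing NLS
    i*psi_t + 0.5*psi_xx + |psi|**2 * psi = 0 on a unit background."""
    denom = 1.0 + 4.0 * x**2 + 4.0 * t**2
    return (1.0 - 4.0 * (1.0 + 2.0j * t) / denom) * np.exp(1.0j * t)

peak = abs(peregrine(0.0, 0.0))   # threefold amplification at the focus
far = abs(peregrine(100.0, 0.0))  # decays back to the unit background
```

    Rational multibreathers of order k amplify the background by a factor 2k + 1, so the order-5 solution considered in the paper reaches eleven times the background amplitude.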

  9. Pediatric Disaster Triage: Multiple Simulation Curriculum Improves Prehospital Care Providers' Assessment Skills.

    PubMed

    Cicero, Mark Xavier; Whitfill, Travis; Overly, Frank; Baird, Janette; Walsh, Barbara; Yarzebski, Jorge; Riera, Antonio; Adelgais, Kathleen; Meckler, Garth D; Baum, Carl; Cone, David Christopher; Auerbach, Marc

    2017-01-01

    Paramedics and emergency medical technicians (EMTs) triage pediatric disaster victims infrequently. The objective of this study was to measure the effect of a multiple-patient, multiple-simulation curriculum on accuracy of pediatric disaster triage (PDT). Paramedics, paramedic students, and EMTs from three sites were enrolled. Triage accuracy was measured three times (Time 0, Time 1 [two weeks later], and Time 2 [6 months later]) during a disaster simulation, in which high and low fidelity manikins and actors portrayed 10 victims. Accuracy was determined by participant triage decision concordance with predetermined expected triage level (RED [Immediate], YELLOW [Delayed], GREEN [Ambulatory], BLACK [Deceased]) for each victim. Between Time 0 and Time 1, participants completed an interactive online module, and after each simulation there was an individual debriefing. Associations between participant level of training, years of experience, and enrollment site were determined, as were instances of the most dangerous mistriage, when RED and YELLOW victims were triaged BLACK. The study enrolled 331 participants, and the analysis included 261 (78.9%) participants who completed the study, 123 from the Connecticut site, 83 from Rhode Island, and 55 from Massachusetts. Triage accuracy improved significantly from Time 0 to Time 1, after the educational interventions (first simulation with debriefing, and an interactive online module), with a median 10% overall improvement (p < 0.001). Subgroup analyses showed between Time 0 and Time 1, paramedics and paramedic students improved more than EMTs (p = 0.002). Analysis of triage accuracy showed greatest improvement in overall accuracy for YELLOW triage patients (Time 0 50% accurate, Time 1 100%), followed by RED patients (Time 0 80%, Time 1 100%). There was no significant difference in accuracy between Time 1 and Time 2 (p = 0.073). This study shows that the multiple-victim, multiple-simulation curriculum yields a durable 10% improvement in overall triage accuracy.
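    Scoring triage concordance as described reduces to comparing each decision against an expected level and separately counting the dangerous RED/YELLOW-triaged-BLACK errors. A minimal Python sketch with a hypothetical five-victim answer key (the study used ten victims; these names and levels are invented for illustration):

```python
# Hypothetical five-victim answer key (the study used ten victims;
# these names and expected levels are invented for illustration).
EXPECTED = {
    "victim1": "RED", "victim2": "YELLOW", "victim3": "GREEN",
    "victim4": "BLACK", "victim5": "RED",
}

def triage_accuracy(decisions, expected=EXPECTED):
    """Fraction of decisions concordant with the expected level."""
    hits = sum(decisions.get(v) == level for v, level in expected.items())
    return hits / len(expected)

def dangerous_mistriages(decisions, expected=EXPECTED):
    """Count RED/YELLOW victims triaged BLACK -- the most dangerous error."""
    return sum(decisions.get(v) == "BLACK" and level in ("RED", "YELLOW")
               for v, level in expected.items())

decisions = {"victim1": "RED", "victim2": "GREEN", "victim3": "GREEN",
             "victim4": "BLACK", "victim5": "BLACK"}
acc = triage_accuracy(decisions)          # 3 of 5 concordant
danger = dangerous_mistriages(decisions)  # victim5: RED triaged BLACK
```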

  10. Results of intravehicular manned cargo-transfer studies in simulated weightlessness

    NASA Technical Reports Server (NTRS)

    Spady, A. A., Jr.; Beasley, G. P.; Yenni, K. R.; Eisele, D. F.

    1972-01-01

    A parametric investigation was conducted in a water immersion simulator to determine the effect of package mass, moment of inertia, and size on the ability of man to transfer cargo in simulated weightlessness. Results from this study indicate that packages with masses of at least 744 kg and moments of inertia of at least 386 kg-m2 can be manually handled and transferred satisfactorily under intravehicular conditions using either one- or two-rail motion aids. Data leading to the conclusions and discussions of test procedures and equipment are presented.

  11. Accuracy of the lattice-Boltzmann method using the Cell processor

    NASA Astrophysics Data System (ADS)

    Harvey, M. J.; de Fabritiis, G.; Giupponi, G.

    2008-11-01

    Accelerator processors like the new Cell processor are extending the traditional platforms for scientific computation, allowing orders of magnitude more floating-point operations per second (flops) compared to standard central processing units. However, they currently lack double-precision support and support for some IEEE 754 capabilities. In this work, we develop a lattice-Boltzmann (LB) code to run on the Cell processor and test the accuracy of this lattice method on this platform. We run tests for different flow topologies, boundary conditions, and Reynolds numbers in the range Re = 6-350. In one case, simulation results show reduced mass and momentum conservation compared to an equivalent double-precision LB implementation. All other cases demonstrate the utility of the Cell processor for fluid dynamics simulations. Benchmarks on two Cell-based platforms are performed, the Sony Playstation3 and the QS20/QS21 IBM blade, obtaining speed-up factors of 7 and 21, respectively, compared to the original PC version of the code, and a conservative sustained performance of 28 gigaflops per single Cell processor. Our results suggest that the choice of IEEE 754 rounding mode is possibly as important as double-precision support for this specific scientific application.
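    The precision concern is easy to demonstrate outside any LB code: sequentially accumulating many single-precision values drifts away from the double-precision result, which is one route by which reduced mass conservation can appear. A toy Python illustration (not the paper's code; the values and count are arbitrary):

```python
import numpy as np

# "Mass" as a sum of lattice populations; here 100,000 equal values.
values = np.full(100_000, 0.1, dtype=np.float32)

total32 = np.float32(0.0)
for v in values:                      # sequential single-precision sum
    total32 = np.float32(total32 + v)

total64 = float(values.sum(dtype=np.float64))  # double-precision reference
rel_err = abs(float(total32) - total64) / total64
```

    The size of the drift also depends on summation order, which is exactly why parallel single-precision codes can produce runs that disagree in nominally conserved quantities.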

  12. Vibronic coupling simulations for linear and nonlinear optical processes: Simulation results

    NASA Astrophysics Data System (ADS)

    Silverstein, Daniel W.; Jensen, Lasse

    2012-02-01

    A vibronic coupling model based on a time-dependent wavepacket approach is applied to simulate linear optical processes, such as one-photon absorbance and resonance Raman scattering, and nonlinear optical processes, such as two-photon absorbance and resonance hyper-Raman scattering, for a series of small molecules. Simulations employing both the long-range corrected approach in density functional theory and coupled cluster theory are compared and also evaluated against available experimental data. Although many of the small molecules are prone to anharmonicity in their potential energy surfaces, the harmonic approach performs adequately. A detailed discussion of non-Condon effects is illustrated by the molecules presented in this work. Linear and nonlinear Raman scattering simulations allow for the quantification of interference between the Franck-Condon and Herzberg-Teller terms for different molecules.

  13. Modelling Accuracy of a Car Steering Mechanism with Rack and Pinion and McPherson Suspension

    NASA Astrophysics Data System (ADS)

    Knapczyk, J.; Kucybała, P.

    2016-08-01

    Modelling accuracy of a car steering mechanism with a rack and pinion and McPherson suspension is analyzed. Geometrical parameters of the model are described by using the coordinates of centers of spherical joints, directional unit vectors and axis points of revolute, cylindrical and prismatic joints. Modelling accuracy is assumed as the differences between the values of the wheel knuckle position and orientation coordinates obtained using a simulation model and the corresponding measured values. The sensitivity analysis of the parameters on the model accuracy is illustrated by two numerical examples.

  14. Shot Peening Numerical Simulation of Aircraft Aluminum Alloy Structure

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Lv, Sheng-Li; Zhang, Wei

    2018-03-01

    After shot peening, the 7050 aluminum alloy has good anti-fatigue and anti-stress-corrosion properties. In the shot peening process, pellets collide with the target material randomly and generate a residual stress distribution on the target surface, which is of great significance for improving material properties. In this paper, a simplified numerical simulation model of shot peening was established. The influence of pellet collision velocity, pellet collision position, and pellet collision time interval on the residual stress after shot peening was studied through simulations with the ANSYS/LS-DYNA software. The analysis results show that different velocities, positions, and time intervals have a great influence on the residual stress after shot peening. Comparison with numerical simulation results based on a Kriging model verified the accuracy of the simulation results in this paper. This study provides a reference for the optimization of the shot peening process and makes an effective exploration toward precise shot peening numerical simulation.

  15. Simulator for beam-based LHC collimator alignment

    NASA Astrophysics Data System (ADS)

    Valentino, Gianluca; Aßmann, Ralph; Redaelli, Stefano; Sammut, Nicholas

    2014-02-01

    In the CERN Large Hadron Collider, collimators need to be set up to form a multistage hierarchy to ensure efficient multiturn cleaning of halo particles. Automatic algorithms were introduced during the first run to reduce the beam time required for beam-based setup, improve the alignment accuracy, and reduce the risk of human errors. Simulating the alignment procedure would allow for off-line tests of alignment policies and algorithms. A simulator was developed based on a diffusion beam model to generate the characteristic beam loss signal spike and decay produced when a collimator jaw touches the beam, which is observed in a beam loss monitor (BLM). Empirical models derived from the available measurement data are used to simulate the steady-state beam loss and crosstalk between multiple BLMs. The simulator design is presented, together with simulation results and comparison to measurement data.
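    The characteristic BLM signature can be mocked up with a simple spike-plus-exponential-decay model. The Python below is a hypothetical stand-in for the simulator's empirical models; the spike amplitude, decay constant, and steady-state level are arbitrary illustration values:

```python
import math

def blm_signal(t, t_touch, spike=1.0, tau=0.1, steady=0.05):
    """Hypothetical BLM response (arbitrary units): constant
    steady-state loss until the collimator jaw touches the halo at
    t_touch, then an instantaneous spike that decays exponentially
    back toward the steady-state level."""
    if t < t_touch:
        return steady
    return steady + spike * math.exp(-(t - t_touch) / tau)

before = blm_signal(0.5, t_touch=1.0)  # steady-state loss before the touch
peak = blm_signal(1.0, t_touch=1.0)    # spike at the moment of touch
```

    An alignment-algorithm test harness would threshold signals of this shape (plus noise and inter-BLM crosstalk) to decide when a jaw has reached the beam edge.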

  16. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only 25 to 200 times slower than real time.

  17. Mapping soil texture classes and optimization of the result by accuracy assessment

    NASA Astrophysics Data System (ADS)

    Laborczi, Annamária; Takács, Katalin; Bakacsi, Zsófia; Szabó, József; Pásztor, László

    2014-05-01

    There are increasing demands nowadays on spatial soil information in order to support environment-related and land use management decisions. The GlobalSoilMap.net (GSM) project aims to make a new digital soil map of the world using state-of-the-art and emerging technologies for soil mapping and predicting soil properties at fine resolution. Sand, silt and clay are among the mandatory GSM soil properties. Furthermore, soil texture class information is input data for significant agro-meteorological and hydrological models. Our present work aims to compare and evaluate different digital soil mapping methods and variables for producing the most accurate spatial prediction of texture classes in Hungary. In addition to the Hungarian Soil Information and Monitoring System as our basic data, a digital elevation model and its derived components, a geological database, and physical property maps of the Digital Kreybig Soil Information System have been applied as auxiliary elements. Two approaches have been applied for the mapping process. At first the sand, silt and clay rasters were computed independently using regression kriging (RK). From these rasters, according to the USDA categories, we compiled the texture class map. Different combinations of reference and training soil data and auxiliary covariables resulted in several different maps. However, these results consequently include the uncertainty of the three kriged rasters. Therefore we applied data mining methods as the other approach to digital soil mapping. By building classification trees and random forests we obtained the texture class maps directly. In this way the various results can be compared to the RK maps. The performance of the different methods and data has been examined by testing the accuracy of the geostatistically computed and the directly classified results. We have used the GSM methodology to assess the most predictive and accurate way for getting the best among the resulting maps.
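    Deriving a texture class from sand/silt/clay rasters, as in the RK approach above, is a per-pixel lookup on the USDA texture triangle. The Python sketch below covers only three of the twelve USDA classes (the boundaries for those three follow the published triangle; everything else falls through to a placeholder):

```python
def texture_class(sand, silt, clay):
    """Coarse sketch of USDA texture classification from sand/silt/clay
    percentages (summing to 100). Only three of the twelve USDA classes
    are implemented, with boundaries taken from the published triangle;
    anything else falls through to a placeholder."""
    assert abs(sand + silt + clay - 100.0) < 1e-6
    if clay >= 40.0 and sand <= 45.0 and silt < 40.0:
        return "clay"
    if silt >= 80.0 and clay < 12.0:
        return "silt"
    if sand >= 85.0 and silt + 1.5 * clay < 15.0:
        return "sand"
    return "other"  # the remaining nine classes are not handled here
```

    Applied per pixel to the three kriged rasters, a complete version of this rule yields the class map; the data mining approach instead predicts the class label directly.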

  18. Accuracy of 3 different impression techniques for internal connection angulated implants.

    PubMed

    Tsagkalidis, George; Tortopidis, Dimitrios; Mpikos, Pavlos; Kaisarlis, George; Koidis, Petros

    2015-10-01

    Making implant impressions with different angulations requires a more precise and time-consuming impression technique. The purpose of this in vitro study was to compare the accuracy of nonsplinted, splinted, and snap-fit impression techniques of internal connection implants with different angulations. An experimental device was used to allow a clinical simulation of impression making by means of open and closed tray techniques. Three different impression techniques (nonsplinted, acrylic-resin splinted, and indirect snap-fit) for 6 internal-connected implants at different angulations (0, 15, 25 degrees) were examined using polyether. Impression accuracy was evaluated by measuring the differences in 3-dimensional (3D) position deviations between the implant body/impression coping before the impression procedure and the coping/laboratory analog positioned within the impression, using a coordinate measuring machine. Data were analyzed by 2-way ANOVA. Means were compared with the least significant difference criterion at P<.05. Results showed that at 25 degrees of implant angulation, the highest accuracy was obtained with the splinted technique (mean ±SE: 0.39 ±0.05 mm) and the lowest with the snap-fit technique (0.85 ±0.09 mm); at 15 degrees of angulation, there were no significant differences between the splinted (0.22 ±0.04 mm) and nonsplinted (0.15 ±0.02 mm) techniques, and the lowest accuracy was obtained with the snap-fit technique (0.95 ±0.15 mm); and no significant differences were found between the nonsplinted and splinted techniques at 0 degrees of implant placement. The splinted impression technique exhibited a higher accuracy than the other techniques studied when increased implant angulations of 25 degrees were involved. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
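    The accuracy metric described, a 3D position deviation reported by a coordinate measuring machine, is simply the Euclidean distance between the two measured positions. A trivial Python illustration with made-up coordinates in millimeters:

```python
import math

def deviation_3d(p_before, p_after):
    """Euclidean 3D position deviation, as a coordinate measuring
    machine would report between the coping position before the
    impression and the analog position within the set impression."""
    return math.dist(p_before, p_after)

# hypothetical coordinates in mm: 0.3 mm and 0.4 mm offsets give 0.5 mm
d = deviation_3d((0.0, 0.0, 0.0), (0.3, 0.4, 0.0))
```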

  19. SU-F-J-95: Impact of Shape Complexity On the Accuracy of Gradient-Based PET Volume Delineation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dance, M; Wu, G; Gao, Y

    2016-06-15

    Purpose: Explore the correlation of tumor shape complexity with PET target volume accuracy when delineated with a gradient-based segmentation tool. Methods: A total of 24 clinically realistic digital PET Monte Carlo (MC) phantoms of NSCLC were used in the study. The phantoms simulated 29 thoracic lesions (lung primary and mediastinal lymph nodes) of varying size, shape, location, and ¹⁸F-FDG activity. A program was developed to calculate a curvature vector along the outline, and the standard deviation of this vector was used as a metric to quantify a shape’s “complexity score”. This complexity score was calculated for standard geometric shapes and MC-generated target volumes in PET phantom images. All lesions were contoured using a commercially available gradient-based segmentation tool, and the differences in volume from the MC-generated volumes were calculated as the measure of the accuracy of segmentation. Results: The average absolute percent difference in volumes between the MC volumes and gradient-based volumes was 11% (0.4%-48.4%). The complexity score showed strong correlation with standard geometric shapes. However, no relationship was found between the complexity score and the accuracy of segmentation by the gradient-based tool on MC-simulated tumors (R² = 0.156). When the lesions were grouped into primary lung lesions and mediastinal/mediastinal-adjacent lesions, the average absolute percent differences in volumes were 6% and 29%, respectively. The former group is more isolated and the latter more surrounded by tissue with a relatively high SUV background. Conclusion: The shape complexity of NSCLC lesions has little effect on the accuracy of the gradient-based segmentation method and thus is not a good predictor of uncertainty in target volume delineation. Location of a lesion within a relatively high SUV background may play a more significant role in the accuracy of gradient-based segmentation.
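    The complexity score can be reproduced in spirit with finite-difference curvature along a closed outline. The abstract does not specify the exact estimator, so the Python below is an assumed implementation: central differences with periodic wrap-around, score = standard deviation of the curvature samples.

```python
import numpy as np

def complexity_score(xs, ys):
    """Standard deviation of discrete curvature along a closed outline
    (assumed estimator: central differences with periodic wrap-around)."""
    x = np.asarray(xs, dtype=float)
    y = np.asarray(ys, dtype=float)
    dx = (np.roll(x, -1) - np.roll(x, 1)) / 2.0     # first derivatives
    dy = (np.roll(y, -1) - np.roll(y, 1)) / 2.0
    ddx = np.roll(x, -1) - 2.0 * x + np.roll(x, 1)  # second derivatives
    ddy = np.roll(y, -1) - 2.0 * y + np.roll(y, 1)
    kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return float(np.std(kappa))

theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = complexity_score(np.cos(theta), np.sin(theta))         # ~0
ellipse = complexity_score(2.0 * np.cos(theta), np.sin(theta))  # > 0
```

    A circle has constant curvature and scores near zero, while elongated or lobed outlines score higher, which is the behavior against standard geometric shapes that the abstract reports.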

  20. Evaluation of the Accuracy of Conventional and Digital Impression Techniques for Implant Restorations.

    PubMed

    Moura, Renata Vasconcellos; Kojima, Alberto Noriyuki; Saraceni, Cintia Helena Coury; Bassolli, Lucas; Balducci, Ivan; Özcan, Mutlu; Mesquita, Alfredo Mikail Melo

    2018-05-01

    The increased use of CAD systems can generate doubt about the accuracy of digital impressions for angulated implants. The aim of this study was to evaluate the accuracy of different impression techniques, two conventional and one digital, for implants with and without angulation. We used a polyurethane cast that simulates the human maxilla according to ASTM F1839, and 6 tapered implants were installed with external hexagonal connections to simulate tooth positions 17, 15, 12, 23, 25, and 27. Implants 17 and 23 were placed with 15° of mesial angulation and distal angulation, respectively. Mini cone abutments were installed on these implants with a metal strap 1 mm in height. Conventional and digital impression procedures were performed on the maxillary master cast, and the implants were separated into 6 groups based on the technique used and measurement type: G1 - control, G2 - digital impression, G3 - conventional impression with an open tray, G4 - conventional impression with a closed tray, G5 - conventional impression with an open tray and a digital impression, and G6 - conventional impression with a closed tray and a digital impression. A statistical analysis was performed using two-way repeated measures ANOVA to compare the groups, and a Kruskal-Wallis test was conducted to analyze the accuracy of the techniques. No significant difference in the accuracy of the techniques was observed between the groups. Therefore, no differences were found among the conventional impression and the combination of conventional and digital impressions, and the angulation of the implants did not affect the accuracy of the techniques. All of the techniques exhibited trueness and had acceptable precision. The variation of the angle of the implants did not affect the accuracy of the techniques. © 2018 by the American College of Prosthodontists.

  1. Simulations of black-hole binaries with unequal masses or nonprecessing spins: Accuracy, physical properties, and comparison with post-Newtonian results

    NASA Astrophysics Data System (ADS)

    Hannam, Mark; Husa, Sascha; Ohme, Frank; Müller, Doreen; Brügmann, Bernd

    2010-12-01

    We present gravitational waveforms for the last orbits and merger of black-hole-binary systems along two branches of the black-hole-binary parameter space: equal-mass binaries with equal nonprecessing spins, and nonspinning unequal-mass binaries. The waveforms are calculated from numerical solutions of Einstein’s equations for black-hole binaries that complete between six and ten orbits before merger. Along the equal-mass spinning branch, the spin parameter of each black hole is χ_i = S_i/M_i² ∈ [-0.85, 0.85], and along the unequal-mass branch the mass ratio is q = M_2/M_1 ∈ [1, 4]. We discuss the construction of low-eccentricity puncture initial data for these cases, the properties of the final merged black hole, and compare the last 8-10 gravitational-wave cycles up to Mω = 0.1 with the phase and amplitude predicted by standard post-Newtonian (PN) approximants. As in previous studies, we find that the phase from the 3.5PN TaylorT4 approximant is most accurate for nonspinning binaries. For equal-mass spinning binaries the 3.5PN TaylorT1 approximant (including spin terms up to only 2.5PN order) gives the most robust performance, but it is possible to treat TaylorT4 in such a way that it gives the best accuracy for spins χ_i > -0.75. When high-order amplitude corrections are included, the PN amplitude of the (ℓ = 2, m = ±2) modes is larger than the numerical relativity amplitude by 2-4%.

  2. Using meta-analysis to inform the design of subsequent studies of diagnostic test accuracy.

    PubMed

    Hinchliffe, Sally R; Crowther, Michael J; Phillips, Robert S; Sutton, Alex J

    2013-06-01

    An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test; particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial evidence to underpin reliable clinical decision-making. Very few investigators consider any sample size calculations when designing a new diagnostic accuracy study. However, it is important to consider the number of subjects in a new study in order to achieve a precise measure of accuracy. Sutton et al. have suggested previously that when designing a new therapeutic trial, it could be more beneficial to consider the power of the updated meta-analysis including the new trial rather than of the new trial itself. The methodology involves simulating new studies for a range of sample sizes and estimating the power of the updated meta-analysis with each new study added. Plotting the power values against the range of sample sizes allows the clinician to make an informed decision about the sample size of a new trial. This paper extends this approach from the trial setting and applies it to diagnostic accuracy studies. Several meta-analytic models are considered including bivariate random effects meta-analysis that models the correlation between sensitivity and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
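    The described design logic can be sketched as a small simulation: for each candidate sample size, simulate the new study many times, update the pooled estimate, and record how often the updated meta-analysis is significant. The Python below is a deliberately simplified generic-effect-size, fixed-effect sketch, not the bivariate sensitivity/specificity model of the paper; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def updated_meta_power(effects, variances, n_new, true_effect=0.1,
                       sims=2000, z_crit=1.96):
    """Power of the updated fixed-effect meta-analysis after adding one
    simulated new study of n_new subjects (per-subject variance taken
    as 1, so the new study's variance is 1/n_new)."""
    y_old = np.asarray(effects, dtype=float)
    w_old = 1.0 / np.asarray(variances, dtype=float)
    v_new = 1.0 / n_new
    hits = 0
    for _ in range(sims):
        y_new = rng.normal(true_effect, np.sqrt(v_new))  # simulated study
        w = np.append(w_old, 1.0 / v_new)
        y = np.append(y_old, y_new)
        pooled = float((w * y).sum() / w.sum())          # inverse-variance pool
        se = 1.0 / np.sqrt(w.sum())
        if abs(pooled / se) > z_crit:
            hits += 1
    return hits / sims

# Power of the updated meta-analysis for three candidate study sizes.
powers = [updated_meta_power([0.05, 0.15], [0.05, 0.05], n)
          for n in (50, 200, 800)]
```

    Plotting `powers` against the candidate sizes gives the decision curve the paper describes; the paper's actual models additionally handle between-study heterogeneity and the sensitivity-specificity correlation.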

  3. Determination of optimum mounting configurations for flat-plate photovoltaic modules based on a structured field experiment and simulated results from PVFORM, a photovoltaic system performance model

    NASA Astrophysics Data System (ADS)

    Menicucci, D. F.

    1986-01-01

    The performance of a photovoltaic (PV) system is affected by its mounting configuration. The optimal configuration is unclear because of lack of experience and data. Sandia National Laboratories, Albuquerque (SNLA), has conducted a controlled field experiment to compare four of the most common module mounting configurations. The data from the experiment were used to verify the accuracy of PVFORM, a new computer program that simulates PV performance. PVFORM was then used to simulate the performance of identical PV modules on different mounting configurations at 10 sites throughout the US. This report describes the module mounting configurations, the experimental methods used, the specialized statistical techniques used in the analysis, and the final results of the effort. The module mounting configurations are rank ordered at each site according to their annual and seasonal energy production performance, and each is briefly discussed in terms of its advantages and disadvantages in various applications.

  4. Simulation loop between cad systems, GEANT-4 and GeoModel: Implementation and results

    NASA Astrophysics Data System (ADS)

    Sharmazanashvili, A.; Tsutskiridze, Niko

    2016-09-01

    Comparative analysis of simulated and as-built geometry descriptions of the detector is an important field of study for data-versus-Monte-Carlo discrepancies. Consistency of shapes and level of detail are less critical, while adequate volumes and weights of detector components are essential for tracking. There are two main causes of faults in the geometry descriptions used in simulation: (1) differences between the simulated and as-built geometry descriptions; (2) internal inaccuracies in the geometry transformations introduced by the simulation software infrastructure itself. The Georgian engineering team developed a hub based on the CATIA platform, with several tools for importing into CATIA the different descriptions used by simulation packages: XML->CATIA, VP1->CATIA, GeoModel->CATIA and Geant4->CATIA. As a result, the different descriptions can be compared with one another using the full power of CATIA, and both classes of geometry-description faults can be investigated. This paper presents results of case studies of the ATLAS Coils and End-Cap toroid structures.

  5. A Novel Simulation Technician Laboratory Design: Results of a Survey-Based Study

    PubMed Central

    Hughes, Patrick G; Friedl, Ed; Ortiz Figueroa, Fabiana; Cepeda Brito, Jose R; Frey, Jennifer; Birmingham, Lauren E; Atkinson, Steven Scott

    2016-01-01

    Objective  The purpose of this study was to elicit feedback from simulation technicians prior to developing the first simulation technician-specific simulation laboratory in Akron, OH. Background Simulation technicians serve a vital role in simulation centers within hospitals/health centers around the world. The first simulation technician degree program in the US has been approved in Akron, OH. To satisfy the requirements of this program and to meet the needs of this special audience of learners, a customized simulation lab is essential.  Method A web-based survey was circulated to simulation technicians prior to completion of the lab for the new program. The survey consisted of questions aimed at identifying structural and functional design elements of a novel simulation center for the training of simulation technicians. Quantitative methods were utilized to analyze data. Results Over 90% of technicians (n=65) think that a lab designed explicitly for the training of technicians is novel and beneficial. Approximately 75% of respondents think that the space provided appropriate audiovisual (AV) infrastructure and space to evaluate the ability of technicians to be independent. The respondents think that the lab needed more storage space, visualization space for a large number of students, and more space in the technical/repair area. Conclusions  A space designed for the training of simulation technicians was considered to be beneficial. This laboratory requires distinct space for technical repair, adequate bench space for the maintenance and repair of simulators, an appropriate AV infrastructure, and space to evaluate the ability of technicians to be independent. PMID:27096134

  6. Developing a Weighted Measure of Speech Sound Accuracy

    PubMed Central

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2010-01-01

    Purpose The purpose is to develop a system for numerically quantifying a speaker’s phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as with listeners’ judgments of the severity of a child’s speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddlers’ speech over time. Conclusion Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children’s speech. PMID:20699344
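
    A minimal sketch of the weighting idea follows. The credit values here are hypothetical round numbers for illustration; the published WSSA derives its weights empirically from levels of phonetic accuracy.

```python
# Hypothetical credit scheme -- NOT the published WSSA weights.
CREDIT = {"correct": 1.0, "distortion": 0.8, "substitution": 0.5, "omission": 0.0}

def weighted_accuracy(error_codes):
    """Mean per-phoneme credit for a transcribed sample; 1.0 = fully accurate."""
    if not error_codes:
        raise ValueError("empty transcription")
    return sum(CREDIT[c] for c in error_codes) / len(error_codes)
```

    Unlike a binary percent-correct score, such a measure rewards a distortion (a near-miss) more than an omission, which is what lets it register gradual growth in accuracy over time.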

  7. A comparison among observations and earthquake simulator results for the allcal2 California fault model

    USGS Publications Warehouse

    Tullis, Terry. E.; Richards-Dinger, Keith B.; Barall, Michael; Dieterich, James H.; Field, Edward H.; Heien, Eric M.; Kellogg, Louise; Pollitz, Fred F.; Rundle, John B.; Sachs, Michael K.; Turcotte, Donald L.; Ward, Steven N.; Yikilmaz, M. Burak

    2012-01-01

    In order to understand earthquake hazards we would ideally have a statistical description of earthquakes for tens of thousands of years. Unfortunately the ∼100‐year instrumental, several 100‐year historical, and few 1000‐year paleoseismological records are woefully inadequate to provide a statistically significant record. Physics‐based earthquake simulators can generate arbitrarily long histories of earthquakes; thus they can provide a statistically meaningful history of simulated earthquakes. The question is, how realistic are these simulated histories? The purpose of this paper is to begin to answer that question. We compare the results between different simulators and with information that is known from the limited instrumental, historic, and paleoseismological data. As expected, the results from all the simulators show that the observational record is too short to properly represent the system behavior; therefore, although tests of the simulators against the limited observations are necessary, they are not a sufficient test of the simulators’ realism. The simulators appear to pass this necessary test. In addition, the physics‐based simulators show similar behavior even though there are large differences in the methodology. This suggests that they represent realistic behavior. Different assumptions concerning the constitutive properties of the faults do result in enhanced capabilities of some simulators. However, it appears that the similar behavior of the different simulators may result from the fault‐system geometry, slip rates, and assumed strength drops, along with the shared physics of stress transfer. This paper describes the results of running four earthquake simulators that are described elsewhere in this issue of Seismological Research Letters. The simulators ALLCAL (Ward, 2012), VIRTCAL (Sachs et al., 2012), RSQSim (Richards‐Dinger and Dieterich, 2012), and ViscoSim (Pollitz, 2012) were run on our most recent all‐California fault

  8. Analysis on accuracy improvement of rotor-stator rubbing localization based on acoustic emission beamforming method.

    PubMed

    He, Tian; Xiao, Denghong; Pan, Qiang; Liu, Xiandong; Shan, Yingchun

    2014-01-01

    This paper introduces an improved acoustic emission (AE) beamforming method to localize rotor-stator rubbing faults in rotating machinery. To investigate the propagation characteristics of acoustic emission signals in the casing shell plate of rotating machinery, plate wave theory is applied to a thin plate. A simulation is conducted, and its results show that the localization accuracy of beamforming depends on multiple wave modes, dispersion, wave velocity and array dimension. In order to reduce the effect of these propagation characteristics on source localization, an AE signal pre-processing method is introduced that combines plate wave theory and the wavelet packet transform, and a revised localization velocity that reduces the effect of array size is presented. The localization accuracy of the baseline beamforming method and of the improved method is compared in a rubbing test carried out on a rotating machinery test rig. The results indicate that the improved method localizes the rub fault effectively. Copyright © 2013 Elsevier B.V. All rights reserved.
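
    The underlying delay-and-sum idea can be sketched in one dimension: steer to each candidate source position by time-shifting every sensor trace by its propagation delay, and pick the position where the summed traces carry the most energy. The sensor layout, wave speed and pulse shape below are hypothetical, and the sketch ignores the dispersion and multi-mode effects that the paper's improved method corrects for.

```python
import math

SPEED = 5000.0      # assumed plate wave speed, m/s
FS = 1_000_000      # sample rate, Hz

def simulate_signals(sensors, source, n=2000):
    """Each sensor records a Gaussian pulse delayed by its distance to the source."""
    sigs = []
    for sx in sensors:
        delay = abs(sx - source) / SPEED
        sigs.append([math.exp(-((t / FS - delay - 2e-4) ** 2) / (2 * (2e-5) ** 2))
                     for t in range(n)])
    return sigs

def delay_and_sum(sensors, sigs, grid):
    """Return the candidate position whose steered, summed traces have most energy."""
    best, best_energy = None, -1.0
    n = len(sigs[0])
    for g in grid:
        stacked = [0.0] * n
        for sx, sig in zip(sensors, sigs):
            shift = int(round(abs(sx - g) / SPEED * FS))  # undo the propagation delay
            for t in range(n - shift):
                stacked[t] += sig[t + shift]
        energy = sum(s * s for s in stacked)
        if energy > best_energy:
            best, best_energy = g, energy
    return best
```

    Steering to the true source aligns the pulses, so the summed amplitude (and hence energy) peaks there; any velocity error smears this alignment, which is why the revised localization velocity matters.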

  9. Gravitational waveforms for neutron star binaries from binary black hole simulations

    NASA Astrophysics Data System (ADS)

    Barkett, Kevin; Scheel, Mark; Haas, Roland; Ott, Christian; Bernuzzi, Sebastiano; Brown, Duncan; Szilagyi, Bela; Kaplan, Jeffrey; Lippuner, Jonas; Muhlberger, Curran; Foucart, Francois; Duez, Matthew

    2016-03-01

    Gravitational waves from binary neutron star (BNS) and black-hole/neutron star (BHNS) inspirals are primary sources for detection by the Advanced Laser Interferometer Gravitational-Wave Observatory. The tidal forces acting on the neutron stars induce changes in the phase evolution of the gravitational waveform, and these changes can be used to constrain the nuclear equation of state. Current methods of generating BNS and BHNS waveforms rely on either computationally challenging full 3D hydrodynamical simulations or approximate analytic solutions. We introduce a new method for computing inspiral waveforms for BNS/BHNS systems by adding the post-Newtonian (PN) tidal effects to full numerical simulations of binary black holes (BBHs), effectively replacing the non-tidal terms in the PN expansion with BBH results. Comparing a waveform generated with this method against a full hydrodynamical simulation of a BNS inspiral yields a phase difference of < 1 radian over ~ 15 orbits. The numerical phase accuracy required of BNS simulations to measure the accuracy of the method we present here is estimated as a function of the tidal deformability parameter λ.
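
    For illustration, the leading-order (Newtonian) tidal term of the standard frequency-domain TaylorF2 phase, the kind of PN tidal correction the method adds on top of BBH waveforms, can be sketched as below. This is a textbook expression in terms of the combined dimensionless tidal deformability, not the paper's full prescription.

```python
import math

MSUN_S = 4.925e-6   # solar mass in seconds (geometric units, G = c = 1)

def tidal_phase(f, m1, m2, lam_tilde):
    """Leading-order tidal term of the TaylorF2 frequency-domain phase.
    Masses in seconds, f in Hz, lam_tilde the combined dimensionless
    tidal deformability; returns the phase contribution in radians."""
    m = m1 + m2
    eta = m1 * m2 / m ** 2
    v = (math.pi * m * f) ** (1.0 / 3.0)   # PN expansion parameter
    return -(117.0 / 256.0) * (lam_tilde / eta) * v ** 5
```

    The correction is tiny at low frequency and grows steeply toward merger, which is why the tidal imprint (and the required numerical phase accuracy discussed above) is concentrated in the last orbits.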

  10. Gravitational waveforms for neutron star binaries from binary black hole simulations

    NASA Astrophysics Data System (ADS)

    Barkett, Kevin; Scheel, Mark A.; Haas, Roland; Ott, Christian D.; Bernuzzi, Sebastiano; Brown, Duncan A.; Szilágyi, Béla; Kaplan, Jeffrey D.; Lippuner, Jonas; Muhlberger, Curran D.; Foucart, Francois; Duez, Matthew D.

    2016-02-01

    Gravitational waves from binary neutron star (BNS) and black hole/neutron star (BHNS) inspirals are primary sources for detection by the Advanced Laser Interferometer Gravitational-Wave Observatory. The tidal forces acting on the neutron stars induce changes in the phase evolution of the gravitational waveform, and these changes can be used to constrain the nuclear equation of state. Current methods of generating BNS and BHNS waveforms rely on either computationally challenging full 3D hydrodynamical simulations or approximate analytic solutions. We introduce a new method for computing inspiral waveforms for BNS/BHNS systems by adding the post-Newtonian (PN) tidal effects to full numerical simulations of binary black holes (BBHs), effectively replacing the nontidal terms in the PN expansion with BBH results. Comparing a waveform generated with this method against a full hydrodynamical simulation of a BNS inspiral yields a phase difference of <1 radian over ˜15 orbits. The numerical phase accuracy required of BNS simulations to measure the accuracy of the method we present here is estimated as a function of the tidal deformability parameter λ .

  11. Finite element simulation of a novel composite light-weight microporous cladding panel

    NASA Astrophysics Data System (ADS)

    Tian, Lida; Wang, Dongyan

    2018-04-01

    A novel composite light-weight microporous cladding panel with matched connection detailing is developed. A numerical simulation of the experiment is conducted in ABAQUS. The accuracy and rationality of the finite element model are verified by comparison between the simulation and experimental results. The results also indicate that the novel composite cladding panel exhibits desirable bearing capacity, stiffness and deformability under out-of-plane load.

  12. Boundary point corrections for variable radius plots - simulation results

    Treesearch

    Margaret Penner; Sam Otukol

    2000-01-01

    The boundary plot problem is encountered when a forest inventory plot includes two or more forest conditions. Depending on the correction method used, the resulting estimates can be biased. The various correction alternatives are reviewed. No correction, area correction, half sweep, and toss-back methods are evaluated using simulation on an actual data set. Based on...

  13. Clinical Implications and Economic Impact of Accuracy Differences among Commercially Available Blood Glucose Monitoring Systems

    PubMed Central

    Budiman, Erwin S.; Samant, Navendu; Resch, Ansgar

    2013-01-01

    Background Despite accuracy standards, there are performance differences among commercially available blood glucose monitoring (BGM) systems. The objective of this analysis was to assess the potential clinical and economic impact of accuracy differences of various BGM systems using a modeling approach. Methods We simulated the additional risk of hypoglycemia due to blood glucose (BG) measurement errors of five different BGM systems based on results of a real-world accuracy study, while retaining other sources of glycemic variability. Using data from published literature, we estimated the annual additional number of required medical interventions as a result of hypoglycemia. We based our calculations on patients with type 1 diabetes mellitus (T1DM) and T2DM requiring multiple daily injections (MDIs) of insulin in a U.S. health care system. We estimated additional costs attributable to treatment of severe hypoglycemic episodes resulting from BG measurement errors. Results Results from our model predict an annual difference of approximately 296,000 severe hypoglycemic episodes from BG measurement errors for T1DM (105,000 for T2DM MDI) patients for the estimated U.S. population of 958,800 T1DM and 1,353,600 T2DM MDI patients, comparing patients using the least accurate BGM system with patients using the most accurate system in a U.S. health care system. This resulted in additional direct costs of approximately $339 million for T1DM and approximately $121 million for T2DM MDI patients per year. Conclusions Our analysis shows that error patterns over the operating range of a BGM system may lead to relevant clinical and economic outcome differences that may not be reflected in a common accuracy metric or standard. PMID:23566995
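
    The core modeling mechanism, meter error occasionally masking true hypoglycemia, can be sketched with a toy simulation. The glucose distribution, error model and error magnitudes below are hypothetical placeholders, not the study's validated inputs.

```python
import math
import random

def missed_hypo_rate(rel_error_sd, n=20000, threshold=70.0, seed=7):
    """Fraction of truly hypoglycemic readings (< threshold mg/dL) that a meter
    with the given relative-error SD reports as >= threshold (a missed hypo)."""
    rng = random.Random(seed)
    missed = hypo = 0
    for _ in range(n):
        true_bg = rng.lognormvariate(math.log(120.0), 0.35)  # toy glucose distribution
        measured = true_bg * (1.0 + rng.gauss(0.0, rel_error_sd))
        if true_bg < threshold:
            hypo += 1
            if measured >= threshold:
                missed += 1
    return missed / max(hypo, 1)
```

    With identical random draws, a less accurate meter misses a larger share of true hypoglycemic readings than a more accurate one, which is the mechanism behind the modeled intervention counts and costs.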

  14. High-accuracy and high-sensitivity spectroscopic measurement of dinitrogen pentoxide (N2O5) in an atmospheric simulation chamber using a quantum cascade laser.

    PubMed

    Yi, Hongming; Wu, Tao; Lauraguais, Amélie; Semenov, Vladimir; Coeur, Cecile; Cassez, Andy; Fertein, Eric; Gao, Xiaoming; Chen, Weidong

    2017-12-04

    A spectroscopic instrument based on a mid-infrared external cavity quantum cascade laser (EC-QCL) was developed for high-accuracy measurements of dinitrogen pentoxide (N2O5) at the ppbv-level. A specific concentration retrieval algorithm was developed to remove, from the broadband absorption spectrum of N2O5, both etalon fringes resulting from the EC-QCL intrinsic structure and spectral interference lines of H2O vapour absorption, which led to a significant improvement in measurement accuracy and detection sensitivity (by a factor of 10), compared to using a traditional algorithm for gas concentration retrieval. The developed EC-QCL-based N2O5 sensing platform was evaluated by real-time tracking of the N2O5 concentration in its most important nocturnal tropospheric chemical reaction, NO3 + NO2 ↔ N2O5, in an atmospheric simulation chamber. Based on an optical absorption path-length of Leff = 70 m, a minimum detection limit of 15 ppbv was achieved with a 25 s integration time, and it was down to 3 ppbv in 400 s. The equilibrium constant Keq involved in the above chemical reaction was determined with direct concentration measurements using the developed EC-QCL sensing platform, and was in good agreement with the theoretical value deduced from a referenced empirical formula under well controlled experimental conditions. The present work demonstrates the potential and the unique advantage of the use of a modern external cavity quantum cascade laser for applications in direct quantitative measurement of broadband absorption of key molecular species involved in chemical kinetics and climate-change related tropospheric chemistry.

  15. High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu

    2017-05-01

    Inertial navigation systems are core components of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving system accuracy. However, the errors caused by the misalignment angles and the scale factor error cannot be eliminated through dual-axis rotation modulation, and discrete calibration methods cannot meet the requirements of high-accuracy calibration for a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error during one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS, together with a procedure for carrying out the self-calibration. The results of a self-calibration simulation experiment show that this scheme can estimate all the errors in the calibration error model: the calibration precision of the inertial sensors' scale factor error is less than 1 ppm and that of the misalignment is less than 5″. These results validate the systematic self-calibration method and demonstrate its importance for improving the accuracy of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.

  16. Simulating Wet Deposition of Radiocesium from the Chernobyl Accident

    DTIC Science & Technology

    2001-03-01

    In response to the Chernobyl nuclear power plant accident of 1986, a cesium-137 deposition dataset was assembled. Most of the airborne Chernobyl ... Chernobyl cesium-137. A cloud base parameterization modification is tested and appears to slightly improve the accuracy of one HYSPLIT simulation of...daily Chernobyl cesium-137 deposition over the course of the accident at isolated European sites, and degrades the accuracy of another HYSPLIT simulation

  17. Experimental validation of a direct simulation by Monte Carlo molecular gas flow model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shufflebotham, P.K.; Bartel, T.J.; Berney, B.

    1995-07-01

    The Sandia direct simulation Monte Carlo (DSMC) molecular/transition gas flow simulation code has significant potential as a computer-aided design tool for the design of vacuum systems in low pressure plasma processing equipment. The purpose of this work was to verify the accuracy of this code through direct comparison to experiment. To test the DSMC model, a fully instrumented, axisymmetric vacuum test cell was constructed, and spatially resolved pressure measurements made in N2 at flows from 50 to 500 sccm. In a "blind" test, the DSMC code was used to model the experimental conditions directly, and the results compared to the measurements. It was found that the model predicted all the experimental findings to a high degree of accuracy. Only one modeling issue was uncovered. The axisymmetric model showed localized low pressure spots along the axis next to surfaces. Although this artifact did not significantly alter the accuracy of the results, it did add noise to the axial data. © 1995 American Vacuum Society.

  18. A Basketball Simulation.

    ERIC Educational Resources Information Center

    Noone, E. T., Jr.

    1991-01-01

    Presented is an activity in which probability and percents are taught using a basketball computer simulation. Computer programs that replicate the free-throw accuracy of college and professional stars and allow students to compete with those stars are included. (KR)
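
    The kind of free-throw replication the activity describes can be sketched as a small Monte Carlo simulation (the player percentages and attempt counts are arbitrary examples):

```python
import random

def simulate_free_throws(pct, attempts, seed=None):
    """Simulate free throws for a shooter with the given career percentage;
    returns the number of shots made."""
    rng = random.Random(seed)
    return sum(rng.random() < pct / 100.0 for _ in range(attempts))
```

    Students can compare, say, `simulate_free_throws(90, 10)` for a professional star against `simulate_free_throws(65, 10)` for themselves, and then discuss why short runs are noisy while long runs converge to the stated percentage.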

  19. MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow

    NASA Astrophysics Data System (ADS)

    Samani, N.; Kompani-Zare, M.; Barry, D. A.

    2004-01-01

    Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry such as MODFLOW (in contrast to analytical models) generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method, steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.
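
    The cylindrical-to-Cartesian idea can be illustrated with a toy 1D steady-state example: on a log-spaced grid, scaling transmissivity by 2πr lets a plain linear (Cartesian-style) conductance calculation reproduce the Thiem solution for radial flow to a well. This is a hypothetical sketch of the general scaling concept, not the actual set of relationships embedded in MODFLOW 2000 by the authors.

```python
import math

def radial_steady_heads(q, trans, r_well=0.1, r_out=1000.0, h_out=50.0, n=400):
    """Steady heads for radial flow to a well on a log-spaced grid, computed by
    scaling transmissivity by 2*pi*r so plain linear internode conductances apply."""
    r = [r_well * (r_out / r_well) ** (i / (n - 1)) for i in range(n)]
    cond = [2.0 * math.pi * math.sqrt(r[i] * r[i + 1]) * trans / (r[i + 1] - r[i])
            for i in range(n - 1)]
    h = [0.0] * n
    h[-1] = h_out
    for i in range(n - 2, -1, -1):  # steady state: the same discharge q crosses every ring
        h[i] = h[i + 1] - q / cond[i]
    return r, h

def thiem(q, trans, r, r_out=1000.0, h_out=50.0):
    """Analytical Thiem solution for steady radial flow to a fully penetrating well."""
    return h_out - q / (2.0 * math.pi * trans) * math.log(r_out / r)
```

    Refining the grid drives the scaled numerical heads toward the analytical solution, which is the same kind of verification against analytical solutions the paper performs.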

  20. Comparison of the analytical and simulation results of the equilibrium beam profile

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Z. J.; Zhu Shaoping; Cao, L. H.

    2007-10-15

    The evolution of high current electron beams in dense plasmas has been investigated by using two-dimensional particle-in-cell (PIC) simulations with immobile ions. It is shown that electron beams are split into many filaments at the beginning due to the Weibel instability, and then different filamentation beams attract each other and coalesce. The profile of the filaments can be described by formulas. Hammer et al. [Phys. Fluids 13, 1831 (1970)] developed a self-consistent relativistic electron beam model that allows the propagation of relativistic electron fluxes in excess of the Alfven-Lawson critical-current limit for a fully neutralized beam. The equilibrium solution has been observed in the simulation results, but the electron distribution function assumed by Hammer et al. is different from the simulation results.

  1. Diagnostic accuracy at several reduced radiation dose levels for CT imaging in the diagnosis of appendicitis

    NASA Astrophysics Data System (ADS)

    Zhang, Di; Khatonabadi, Maryam; Kim, Hyun; Jude, Matilda; Zaragoza, Edward; Lee, Margaret; Patel, Maitraya; Poon, Cheryce; Douek, Michael; Andrews-Tang, Denise; Doepke, Laura; McNitt-Gray, Shawn; Cagnon, Chris; DeMarco, John; McNitt-Gray, Michael

    2012-03-01

    Purpose: While several studies have investigated the tradeoffs between radiation dose and image quality (noise) in CT imaging, the purpose of this study was to take this analysis a step further by investigating the tradeoffs between patient radiation dose (including organ dose) and diagnostic accuracy in the diagnosis of appendicitis using CT. Methods: This study was IRB approved and utilized data from 20 patients who underwent clinical CT exams for indications of appendicitis. Medical record review established the true diagnosis of appendicitis, with 10 positives and 10 negatives. A validated software tool used raw projection data from each scan to create simulated images at lower dose levels (70%, 50%, 30%, 20% of original). An observer study was performed with 6 radiologists reviewing each case at each dose level in random order over several sessions. Readers assessed image quality and provided confidence in their diagnosis of appendicitis, each on a 5-point scale. Liver doses for each case and each dose level were estimated using Monte Carlo simulation based methods. Results: Overall diagnostic accuracy varied across dose levels: 92%, 93%, 91%, 90% and 90% at the 100%, 70%, 50%, 30% and 20% dose levels, respectively, and 93%, 95%, 88%, 90% and 90% across the 13.5-22 mGy, 9.6-13.5 mGy, 6.4-9.6 mGy, 4-6.4 mGy, and 2-4 mGy liver dose ranges, respectively. Only 4 out of 600 observations were rated "unacceptable" for image quality. Conclusion: The results from this pilot study indicate that diagnostic accuracy does not change dramatically even at significantly reduced radiation dose.

  2. Face-based smoothed finite element method for real-time simulation of soft tissue

    NASA Astrophysics Data System (ADS)

    Mendizabal, Andrea; Bessard Duparc, Rémi; Bui, Huu Phuoc; Paulus, Christoph J.; Peterlik, Igor; Cotin, Stéphane

    2017-03-01

    In soft tissue surgery, a tumor and other anatomical structures are usually located using preoperative CT or MR images. However, due to the deformation of the concerned tissues, this information suffers from inaccuracy when employed directly during surgery. In order to account for these deformations in the planning process, a bio-mechanical model of the tissues is needed. Such models are often designed using the finite element method (FEM), which is, however, computationally expensive, in particular when a high accuracy of the simulation is required. In our work, we propose to use a smoothed finite element method (S-FEM) in the context of modeling soft tissue deformation. This numerical technique was introduced recently to overcome the overly stiff behavior of the standard FEM and to improve the solution accuracy and the convergence rate in solid mechanics problems. In this paper, a face-based smoothed finite element method (FS-FEM) using 4-node tetrahedral elements is presented. We show that in some cases, the method allows for reducing the number of degrees of freedom while preserving the accuracy of the discretization. The method is evaluated on a simulation of a cantilever beam loaded at the free end and on a simulation of a 3D cube under traction and compression forces. Further, it is applied to simulations of the brain shift and of the kidney's deformation. The results demonstrate that the method outperforms the standard FEM in a bending scenario and that it has accuracy similar to the standard FEM in the simulations of the brain shift and of the kidney's deformation.

  3. Planck 2015 results. XII. Full focal plane simulations

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Castex, G.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Karakci, A.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. 
G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Melin, J.-B.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Roman, M.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Welikala, N.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    We present the 8th full focal plane simulation set (FFP8), deployed in support of the Planck 2015 results. FFP8 consists of 10 fiducial mission realizations reduced to 18 144 maps, together with the most massive suite of Monte Carlo realizations of instrument noise and CMB ever generated, comprising 10⁴ mission realizations reduced to about 10⁶ maps. The resulting maps incorporate the dominant instrumental, scanning, and data analysis effects, and the remaining subdominant effects will be included in future updates. Generated at a cost of some 25 million CPU-hours spread across multiple high-performance-computing (HPC) platforms, FFP8 is used to validate and verify analysis algorithms and their implementations, and to remove biases from and quantify uncertainties in the results of analyses of the real data.

  4. Spectral reflectance inversion with high accuracy on green target

    NASA Astrophysics Data System (ADS)

    Jiang, Le; Yuan, Jinping; Li, Yong; Bai, Tingzhu; Liu, Shuoqiong; Jin, Jianzhou; Shen, Jiyun

    2016-09-01

    Using Landsat-7 ETM remote sensing data, the inversion of the spectral reflectance of green wheat in the visible and near-infrared wavebands in Yingke, China is studied. To address the problem of low inversion accuracy, a custom atmospheric-conditions method based on the moderate resolution transmission model (MODTRAN) is put forward; real atmospheric parameters are taken into account when this method is adopted. The atmospheric radiative transfer theory used to calculate the atmospheric parameters is introduced first, and the inversion process for spectral reflectance is then illustrated in detail. Finally, the inversion result is compared with that of the simulated atmospheric conditions method, which was widely used by previous researchers. The comparison shows that the inversion accuracy of this paper's method is higher in all inversion bands; the spectral reflectance curve inverted by this paper's method is more similar to the measured reflectance curve of wheat and better reflects the spectral reflectance characteristics of green plants, which are very different from those of green artificial targets. Thus, whether a green target is a plant or an artificial target can be judged by reflectance inversion based on remote sensing imagery. This research is helpful for identifying green artificial targets hidden in greenery, which is of great significance for the precise strike of green camouflaged weapons in the military field.
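    The core of such a reflectance inversion is the standard atmospheric-correction relation ρ = π(L_sensor − L_path) / (τ · E_ground). A minimal sketch follows; the function name and all numeric values are illustrative assumptions, and in the paper's method the atmospheric quantities would be supplied by MODTRAN runs with real atmospheric parameters:

```python
import math

def invert_reflectance(l_sensor, l_path, tau_up, e_ground):
    """Invert surface reflectance from at-sensor radiance.

    l_sensor : at-sensor radiance (W m^-2 sr^-1 um^-1)
    l_path   : atmospheric path radiance, same units
    tau_up   : ground-to-sensor transmittance (0..1)
    e_ground : total downwelling irradiance at the surface (W m^-2 um^-1)
    """
    return math.pi * (l_sensor - l_path) / (tau_up * e_ground)

# Illustrative numbers only (real values would come from the radiative transfer code):
rho = invert_reflectance(l_sensor=42.0, l_path=8.0, tau_up=0.85, e_ground=1200.0)
```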

  5. Estimating solar radiation for plant simulation models

    NASA Technical Reports Server (NTRS)

    Hodges, T.; French, V.; Leduc, S.

    1985-01-01

    Five algorithms producing daily solar radiation surrogates using daily temperatures and rainfall were evaluated using measured solar radiation data for seven U.S. locations. The algorithms were compared both in terms of accuracy of daily solar radiation estimates and terms of response when used in a plant growth simulation model (CERES-wheat). Requirements for accuracy of solar radiation for plant growth simulation models are discussed. One algorithm is recommended as being best suited for use in these models when neither measured nor satellite estimated solar radiation values are available.
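    The abstract does not name the five algorithms; a representative temperature-based surrogate of this kind is the Hargreaves-Samani relation, which estimates daily solar radiation from the diurnal temperature range and extraterrestrial radiation. A hedged sketch, with the coefficient k_rs and the sample inputs as assumptions:

```python
import math

def hargreaves_solar(tmax_c, tmin_c, ra_mj, k_rs=0.16):
    """Estimate daily solar radiation (MJ m^-2 d^-1) from the diurnal
    temperature range and extraterrestrial radiation ra_mj (Hargreaves-
    Samani). k_rs ~ 0.16 for interior sites, ~ 0.19 for coastal sites."""
    return k_rs * math.sqrt(max(tmax_c - tmin_c, 0.0)) * ra_mj

# Example: a clear summer day with a 14 degC temperature range
rs = hargreaves_solar(tmax_c=28.0, tmin_c=14.0, ra_mj=35.0)
```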

  6. The Effects of Alcohol Intoxication on Accuracy and the Confidence–Accuracy Relationship in Photographic Simultaneous Line‐ups

    PubMed Central

    Colloff, Melissa F.; Karoğlu, Nilda; Zelek, Katarzyna; Ryder, Hannah; Humphries, Joyce E.; Takarangi, Melanie K.T.

    2017-01-01

    Summary: Acute alcohol intoxication during encoding can impair subsequent identification accuracy, but results across studies have been inconsistent, with studies often finding no effect. Little is also known about how alcohol intoxication affects the identification confidence–accuracy relationship. We randomly assigned women (N = 153) to consume alcohol (dosed to achieve a 0.08% blood alcohol content) or tonic water, controlling for alcohol expectancy. Women then participated in an interactive hypothetical sexual assault scenario and, 24 hours or 7 days later, attempted to identify the assailant from a perpetrator-present or a perpetrator-absent simultaneous line‐up and reported their decision confidence. Overall, levels of identification accuracy were similar across the alcohol and tonic water groups. However, women who had consumed tonic water as opposed to alcohol identified the assailant with higher confidence on average. Further, calibration analyses suggested that confidence is predictive of accuracy regardless of alcohol consumption. The theoretical and applied implications of our results are discussed. © 2017 The Authors. Applied Cognitive Psychology published by John Wiley & Sons Ltd. PMID:28781426

  7. Evaluation of structural and thermophysical effects on the measurement accuracy of deep body thermometers based on dual-heat-flux method.

    PubMed

    Huang, Ming; Tamura, Toshiyo; Chen, Wenxi; Kanaya, Shigehiko

    2015-01-01

    To help pave a path toward the practical use of continuous unconstrained noninvasive deep body temperature measurement, this study aims to evaluate the structural and thermophysical effects on measurement accuracy for the dual-heat-flux method (DHFM). By considering the thermometer's height, radius, conductivity, density and specific heat as variables affecting the accuracy of DHFM measurement, we investigated the relationship between those variables and accuracy using 3-D models based on the finite element method. The results of our simulation study show that accuracy is proportional to the radius but inversely proportional to the thickness of the thermometer when the radius is less than 30.0 mm, and is also inversely proportional to the heat conductivity of the heat insulator inside the thermometer. The insights from this study would help to build a guideline for the design, fabrication and optimization of DHFM-based thermometers, as well as their practical use. Copyright © 2014 Elsevier Ltd. All rights reserved.
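    The DHFM estimate itself can be illustrated with the generic two-channel heat-flux balance; this is a textbook-style formulation, not the specific sensor design studied in the paper, and the names t1..t4 and k are hypothetical (k being the ratio of the two insulators' thermal resistances):

```python
def dhfm_core_temp(t1, t2, t3, t4, k):
    """Deep body temperature via the dual-heat-flux balance.

    Channel 1: (Tb - t1)/Rs = (t1 - t2)/R1   (t1 skin side, t2 top side)
    Channel 2: (Tb - t3)/Rs = (t3 - t4)/R2
    Eliminating the unknown tissue resistance Rs, with k = R1/R2, gives
    the closed-form estimate returned below.
    """
    return t1 + (t1 - t2) * (t3 - t1) / ((t1 - t2) - k * (t3 - t4))

# Synthetic readings constructed to be consistent with a 37.0 degC core (k = 0.5):
tb = dhfm_core_temp(36.0, 35.0, 36.4, 35.2, k=0.5)  # -> 37.0
```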

  8. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results.

    PubMed

    Humada, Ali M; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M; Ahmed, Mushtaq N

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed, mainly for residential usage, and the potential results have been simulated. The proposed PV model is based on three PV parameters: the photocurrent, IL; the reverse diode saturation current, Io; and the diode ideality factor, n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with higher accuracy than the other models. The results of this study can be considered valuable for the installation of grid-connected PV systems in fluctuating climatic conditions.
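    The three named parameters match the ideal single-diode model, I = IL − Io(exp(V/(n·Ns·Vt)) − 1). The sketch below assumes a 36-cell module at 25 °C; the module size, temperature and parameter values are illustrative assumptions, not figures from the paper:

```python
import math

def pv_current(v, i_l, i_o, n, ns=36, t_cell=298.15):
    """Ideal single-diode PV model using the three parameters named in the
    abstract: photocurrent i_l, saturation current i_o, ideality factor n.
    Series/shunt resistances are omitted; ns cells in series is assumed."""
    k, q = 1.380649e-23, 1.602176634e-19
    vt = k * t_cell / q          # thermal voltage per cell, ~25.7 mV at 25 degC
    return i_l - i_o * (math.exp(v / (n * ns * vt)) - 1.0)

# In this ideal model the short-circuit current equals the photocurrent:
isc = pv_current(0.0, i_l=8.2, i_o=1e-9, n=1.3)  # -> 8.2
```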

  9. Ca-Pri a Cellular Automata Phenomenological Research Investigation: Simulation Results

    NASA Astrophysics Data System (ADS)

    Iannone, G.; Troisi, A.

    2013-05-01

    Following the introduction of a phenomenological cellular automata (CA) model capable of reproducing city growth and urban sprawl, we develop a toy-model simulation in a realistic framework. The main characteristic of our approach is an evolution algorithm based on inhabitants' preferences. The growth of cells is controlled by means of suitable functions which depend on the initial conditions of the simulation. Newborn urban settlements are achieved by means of a logistic evolution of the urban pattern, while urban sprawl is controlled by means of the population evolution function. In order to compare the model results with a realistic urban framework we have considered, as the area of study, the island of Capri (Italy) in the Mediterranean Sea. Two different phases of the urban evolution of the island have been taken into account: an initial newborn growth induced by geographic suitability, and the simulation of urban spread after 1943 induced by the population evolution after this date.
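    The coupling of a logistic population curve to neighborhood-driven cell conversion can be sketched as a toy CA. Everything below (grid size, rates, the random suitability field) is invented to illustrate the mechanism and is not the authors' calibrated model:

```python
import math, random

def logistic_quota(t, n_max=400, r=0.3, t_mid=15):
    """Logistic population curve bounding how many cells may be urban at step t."""
    return int(n_max / (1.0 + math.exp(-r * (t - t_mid))))

def step(grid, suitability, quota, rng):
    """Urbanize frontier cells (empty neighbors of urban cells), preferring
    high-suitability sites, until the logistic quota is met."""
    n = len(grid)
    urban = {(i, j) for i in range(n) for j in range(n) if grid[i][j]}
    frontier = set()
    for i, j in urban:
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and not grid[a][b]:
                frontier.add((a, b))
    # take the most suitable frontier cells first, with a random tiebreak
    for i, j in sorted(frontier, key=lambda c: (-suitability[c[0]][c[1]], rng.random())):
        if len(urban) >= quota:
            break
        grid[i][j] = 1
        urban.add((i, j))
    return grid

rng = random.Random(0)
n = 20
grid = [[0] * n for _ in range(n)]
grid[10][10] = 1                                     # seed settlement
suit = [[rng.random() for _ in range(n)] for _ in range(n)]
for t in range(30):
    step(grid, suit, logistic_quota(t), rng)
```

The logistic quota plays the role of the population evolution function, while the suitability field stands in for geographic suitability.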

  10. Multicategory reclassification statistics for assessing improvements in diagnostic accuracy

    PubMed Central

    Li, Jialiang; Jiang, Binyan; Fine, Jason P.

    2013-01-01

    In this paper, we extend the definitions of the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI) in the context of multicategory classification. Both measures were proposed in Pencina and others (2008. Evaluating the added predictive ability of a new marker: from area under the receiver operating characteristic (ROC) curve to reclassification and beyond. Statistics in Medicine 27, 157–172) as numeric characterizations of accuracy improvement for binary diagnostic tests and were shown to have certain advantages over analyses based on ROC curves or other regression approaches. Estimation and inference procedures for the multiclass NRI and IDI are provided in this paper along with necessary asymptotic distributional results. Simulations are conducted to study the finite-sample properties of the proposed estimators. Two medical examples are considered to illustrate our methodology. PMID:23197381
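    For reference, the binary NRI of Pencina and others (2008) that the paper generalizes can be computed directly from up/down reclassification counts. A small sketch with invented risk pairs (the function name is illustrative):

```python
def net_reclassification_improvement(events, nonevents):
    """Binary NRI: events/nonevents are lists of (old_risk, new_risk) pairs,
    where 'up' means the new model raises the estimated risk.
    NRI = [P(up|event) - P(down|event)] - [P(up|nonevent) - P(down|nonevent)]."""
    def net_up(pairs):
        up = sum(new > old for old, new in pairs)
        down = sum(new < old for old, new in pairs)
        return (up - down) / len(pairs)
    return net_up(events) - net_up(nonevents)

events = [(0.2, 0.6), (0.3, 0.5), (0.4, 0.3)]       # 2 up, 1 down
nonevents = [(0.5, 0.2), (0.4, 0.1), (0.3, 0.4)]    # 1 up, 2 down
nri = net_reclassification_improvement(events, nonevents)  # -> 2/3
```

The multicategory extension studied in the paper replaces risk comparisons with movements between predicted classes.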

  11. The Positioning Accuracy of BAUV Using Fusion of Data from USBL System and Movement Parameters Measurements

    PubMed Central

    Krzysztof, Naus; Aleksander, Nowak

    2016-01-01

    The article presents a study of the accuracy of estimating the position coordinates of BAUV (Biomimetic Autonomous Underwater Vehicle) by the extended Kalman filter (EKF) method. The fusion of movement parameters measurements and position coordinates fixes was applied. The movement parameters measurements are carried out by on-board navigation devices, while the position coordinates fixes are done by the USBL (Ultra Short Base Line) system. The problem of underwater positioning and the conceptual design of the BAUV navigation system constructed at the Naval Academy (Polish Naval Academy—PNA) are presented in the first part of the paper. The second part consists of description of the evaluation results of positioning accuracy, the genesis of the problem of selecting method for underwater positioning, and the mathematical description of the method of estimating the position coordinates using the EKF method by the fusion of measurements with on-board navigation and measurements obtained with the USBL system. The main part contains a description of experimental research. It consists of a simulation program of navigational parameter measurements carried out during the BAUV passage along the test section. Next, the article covers the determination of position coordinates on the basis of simulated parameters, using EKF and DR methods and the USBL system, which are then subjected to a comparative analysis of accuracy. The final part contains systemic conclusions justifying the desirability of applying the proposed fusion method of navigation parameters for the BAUV positioning. PMID:27537884
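    The fusion scheme (dead-reckoning prediction from on-board speed and heading, corrected by USBL position fixes) can be sketched as a stripped-down Kalman cycle. This toy version keeps only a 2-D position state with a diagonal covariance; the paper's EKF carries a fuller state and cross-covariances, and all numbers here are invented:

```python
import math

def predict(x, P, v, heading, dt, q):
    """Dead-reckoning prediction: advance position using speed and heading
    measured by on-board devices. State x = [north, east]; P its covariance."""
    x = [x[0] + v * math.cos(heading) * dt,
         x[1] + v * math.sin(heading) * dt]
    P = [[P[0][0] + q, P[0][1]], [P[1][0], P[1][1] + q]]
    return x, P

def update(x, P, z, r):
    """USBL position-fix update (H = I): Kalman gain k = p/(p + r),
    applied per axis because P is kept diagonal in this sketch."""
    for i in range(2):
        k = P[i][i] / (P[i][i] + r)
        x[i] += k * (z[i] - x[i])
        P[i][i] *= (1.0 - k)
    return x, P

x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
x, P = predict(x, P, v=1.5, heading=math.radians(45), dt=2.0, q=0.1)
x, P = update(x, P, z=[2.0, 2.2], r=0.5)   # USBL fix pulls the estimate
```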

  13. How sleep problems contribute to simulator sickness: Preliminary results from a realistic driving scenario.

    PubMed

    Altena, Ellemarije; Daviaux, Yannick; Sanz-Arigita, Ernesto; Bonhomme, Emilien; de Sevin, Étienne; Micoulaud-Franchi, Jean-Arthur; Bioulac, Stéphanie; Philip, Pierre

    2018-04-17

    Virtual reality and simulation tools enable us to assess daytime functioning in environments that simulate real life as closely as possible. Simulator sickness, however, poses a problem in the application of these tools, and has been related to pre-existing health problems. How sleep problems contribute to simulator sickness has not yet been investigated. In the current study, 20 female chronic insomnia patients and 32 female age-matched controls drove in a driving simulator covering realistic city, country and highway scenes. Fifty percent of the insomnia patients as opposed to 12.5% of controls reported excessive simulator sickness leading to experiment withdrawal. In the remaining participants, patients with insomnia showed overall increased levels of oculomotor symptoms even before driving, while nausea symptoms further increased after driving. These results, as well as the realistic simulation paradigm developed, give more insight into how vestibular and oculomotor functions, as well as interoceptive functions, are affected in insomnia. Importantly, our results have direct implications for both the actual driving experience and the wider context of deploying simulation techniques to mimic real-life functioning, in particular in those professions often exposed to sleep problems. © 2018 European Sleep Research Society.

  14. Toward robust estimation of the components of forest population change: simulation results

    Treesearch

    Francis A. Roesch

    2014-01-01

    This report presents the full simulation results of the work described in Roesch (2014), in which multiple levels of simulation were used to test the robustness of estimators for the components of forest change. In that study, a variety of spatial-temporal populations were created based on, but more variable than, an actual forest monitoring dataset, and then those...

  15. Do knowledge, knowledge sources and reasoning skills affect the accuracy of nursing diagnoses? a randomised study.

    PubMed

    Paans, Wolter; Sermeus, Walter; Nieweg, Roos Mb; Krijnen, Wim P; van der Schans, Cees P

    2012-08-01

    This paper reports a study of the effect of knowledge sources, such as handbooks, an assessment format and a predefined record structure for diagnostic documentation, as well as the influence of knowledge, disposition toward critical thinking and reasoning skills, on the accuracy of nursing diagnoses. Knowledge sources can support nurses in deriving diagnoses. A nurse's disposition toward critical thinking and reasoning skills is also thought to influence the accuracy of his or her nursing diagnoses. A randomised factorial design was used in 2008-2009 to determine the effect of knowledge sources. We used the following instruments to assess the influence of ready knowledge, disposition, and reasoning skills on the accuracy of diagnoses: (1) a knowledge inventory, (2) the California Critical Thinking Disposition Inventory, and (3) the Health Science Reasoning Test. Nurses (n = 249) were randomly assigned to one of four factorial groups, and were instructed to derive diagnoses based on an assessment interview with a simulated patient/actor. The use of a predefined record structure resulted in significantly higher accuracy of nursing diagnoses. A regression analysis reveals that almost half of the variance in the accuracy of diagnoses is explained by the use of a predefined record structure, a nurse's age and the reasoning skills of 'deduction' and 'analysis'. Improving nurses' dispositions toward critical thinking and reasoning skills, and the use of a predefined record structure, improves the accuracy of nursing diagnoses.

  16. An experimental method for the assessment of color simulation tools.

    PubMed

    Lillo, Julio; Alvaro, Leticia; Moreira, Humberto

    2014-07-22

    The Simulcheck method for evaluating the accuracy of color simulation tools in relation to dichromats is described and used to test three color simulation tools: Variantor, Coblis, and Vischeck. A total of 10 dichromats (five protanopes, five deuteranopes) and 10 normal trichromats participated in the current study. Simulcheck includes two psychophysical tasks: the Pseudoachromatic Stimuli Identification task and the Minimum Achromatic Contrast task. The Pseudoachromatic Stimuli Identification task allows determination of the two chromatic angles (h(uv) values) that generate a minimum response in the yellow–blue opponent mechanism and, consequently, pseudoachromatic stimuli (greens or reds). The Minimum Achromatic Contrast task requires the selection of the gray background that produces minimum contrast (near zero change in the achromatic mechanism) for each pseudoachromatic stimulus selected in the previous task (L(R) values). Results showed important differences in the colorimetric transformations performed by the three evaluated simulation tools and their accuracy levels. Vischeck simulation accurately implemented the algorithm of Brettel, Viénot, and Mollon (1997). Only Vischeck appeared accurate (similarity in h(uv) and L(R) values between real and simulated dichromats) and, consequently, could render reliable color selections. It is concluded that Simulcheck is a consistent method because it provided an equivalent pattern of results for h(uv) and L(R) values irrespective of the stimulus set used to evaluate a simulation tool. Simulcheck was also considered valid because real dichromats provided expected h(uv) and L(R) values when performing the two psychophysical tasks included in this method. © 2014 ARVO.

  17. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate. In addition, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, this is the first work to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
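    The multiple-linear-regression step can be sketched with a dependency-free ordinary-least-squares fit. The sensor readings below are synthetic, and the exact predictor set used in the paper may differ:

```python
def fit_mlr(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved by Gauss-Jordan elimination with partial pivoting.
    Rows of X carry a leading 1.0 for the intercept."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    c = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        d = A[col][col]
        A[col] = [a / d for a in A[col]]
        c[col] /= d
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
                c[r] -= f * c[col]
    return c

# Synthetic co-located readings: humidity modeled from temperature and light
temp = [20, 22, 25, 27, 30, 33]
light = [100, 150, 300, 420, 500, 640]
hum = [2 + 0.5 * t - 0.01 * l for t, l in zip(temp, light)]
X = [[1.0, t, l] for t, l in zip(temp, light)]
b = fit_mlr(X, hum)   # recovers [2, 0.5, -0.01]
```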

  18. Simulations of black-hole binaries with unequal masses or nonprecessing spins: Accuracy, physical properties, and comparison with post-Newtonian results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hannam, Mark; School of Physics and Astronomy, Cardiff University, Cardiff, CF24 3AA; Husa, Sascha

    We present gravitational waveforms for the last orbits and merger of black-hole-binary systems along two branches of the black-hole-binary parameter space: equal-mass binaries with equal nonprecessing spins, and nonspinning unequal-mass binaries. The waveforms are calculated from numerical solutions of Einstein's equations for black-hole binaries that complete between six and ten orbits before merger. Along the equal-mass spinning branch, the spin parameter of each black hole is χ_i = S_i/M_i² ∈ [−0.85, 0.85], and along the unequal-mass branch the mass ratio is q = M_2/M_1 ∈ [1, 4]. We discuss the construction of low-eccentricity puncture initial data for these cases and the properties of the final merged black hole, and compare the last 8-10 gravitational-wave cycles up to Mω = 0.1 with the phase and amplitude predicted by standard post-Newtonian (PN) approximants. As in previous studies, we find that the phase from the 3.5PN TaylorT4 approximant is most accurate for nonspinning binaries. For equal-mass spinning binaries the 3.5PN TaylorT1 approximant (including spin terms up to only 2.5PN order) gives the most robust performance, but it is possible to treat TaylorT4 in such a way that it gives the best accuracy for spins χ_i > −0.75. When high-order amplitude corrections are included, the PN amplitude of the (l = 2, m = ±2) modes is larger than the numerical-relativity amplitude by between 2-4%.

  19. Accuracy of different impression materials in parallel and nonparallel implants

    PubMed Central

    Vojdani, Mahroo; Torabi, Kianoosh; Ansarifard, Elham

    2015-01-01

    Background: A precise impression is mandatory to obtain passive fit in implant-supported prostheses. The aim of this study was to compare the accuracy of three impression materials in both parallel and nonparallel implant positions. Materials and Methods: In this experimental study, two partially dentate maxillary acrylic models with four implant analogues in the canine and lateral incisor areas were used. One model simulated the parallel condition and the other the nonparallel one, in which implants were tilted 30° buccally and 20° in either the mesial or distal direction. Thirty stone casts were made from each model using polyether (Impregum), addition silicone (Monopren) and vinyl siloxanether (Identium), with the open-tray technique. The distortion values in three dimensions (X-, Y- and Z-axes) were measured by a coordinate measuring machine. Two-way analysis of variance (ANOVA), one-way ANOVA and Tukey tests were used for data analysis (α = 0.05). Results: Under the parallel condition, all the materials produced comparable, accurate casts (P = 0.74). In the presence of angulated implants, Monopren showed more accurate results than Impregum (P = 0.01), whereas Identium yielded results similar to those of Impregum (P = 0.27) and Monopren (P = 0.26). Conclusion: Within the limitations of this study, in parallel conditions, the type of impression material does not affect the accuracy of implant impressions; however, in nonparallel conditions, polyvinyl siloxane is shown to be a better choice, followed by vinyl siloxanether and polyether, respectively. PMID:26288620

  20. Clinical implications and economic impact of accuracy differences among commercially available blood glucose monitoring systems.

    PubMed

    Budiman, Erwin S; Samant, Navendu; Resch, Ansgar

    2013-03-01

    Despite accuracy standards, there are performance differences among commercially available blood glucose monitoring (BGM) systems. The objective of this analysis was to assess the potential clinical and economic impact of accuracy differences of various BGM systems using a modeling approach. We simulated the additional risk of hypoglycemia due to blood glucose (BG) measurement errors of five different BGM systems based on results of a real-world accuracy study, while retaining other sources of glycemic variability. Using data from published literature, we estimated an annual additional number of required medical interventions as a result of hypoglycemia. We based our calculations on patients with type 1 diabetes mellitus (T1DM) and type 2 diabetes mellitus (T2DM) requiring multiple daily injections (MDIs) of insulin in a U.S. health care system. We estimated additional costs attributable to treatment of severe hypoglycemic episodes resulting from BG measurement errors. Results from our model predict an annual difference of approximately 296,000 severe hypoglycemic episodes from BG measurement errors for T1DM (105,000 for T2DM MDI) patients for the estimated U.S. population of 958,800 T1DM and 1,353,600 T2DM MDI patients, comparing users of the least accurate BGM system with users of the most accurate system in a U.S. health care system. This resulted in additional direct costs of approximately $339 million for T1DM and approximately $121 million for T2DM MDI patients per year. Our analysis shows that error patterns over the operating range of a BGM meter may lead to relevant clinical and economic outcome differences that may not be reflected in a common accuracy metric or standard. Further research is necessary to validate the findings of this model-based approach. © 2013 Diabetes Technology Society.

  1. Decision Accuracy and the Role of Spatial Interaction in Opinion Dynamics

    NASA Astrophysics Data System (ADS)

    Torney, Colin J.; Levin, Simon A.; Couzin, Iain D.

    2013-04-01

    The opinions and actions of individuals within interacting groups are frequently determined by both social and personal information. When sociality (or the pressure to conform) is strong and individual preferences are weak, groups will remain cohesive until a consensus decision is reached. When group decisions are subject to a bias, representing for example private information known by some members of the population or imperfect information known by all, then the accuracy achieved for a fixed level of bias will increase with population size. In this work we determine how the scaling between accuracy and group size can be related to the microscopic properties of the decision-making process. By simulating a spatial model of opinion dynamics we show that the relationship between the instantaneous fraction of leaders in the population (L), system size (N), and accuracy depends on the frequency of individual opinion switches and the level of population viscosity. When social mixing is slow, and individual opinion changes are frequent, accuracy is determined by the absolute number of informed individuals. As mixing rates increase, or the rate of opinion updates decreases, a transition occurs to a regime where accuracy is determined by the value of L√N. We investigate the transition between different scaling regimes analytically by examining a well-mixed limit.
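    The well-mixed scaling regime can be illustrated with a toy Monte Carlo: a fraction L of N agents always vote for the correct option, the remainder flip fair coins, and the group decides by majority, so accuracy at fixed L improves with N roughly like Φ(c·L√N). This is only an illustration of the scaling argument, not the paper's spatial model; all parameter values are invented:

```python
import random

def group_accuracy(n, leader_frac, trials=4000, rng=random.Random(1)):
    """Monte Carlo estimate of the probability that a majority vote is
    correct when a fraction leader_frac of n agents know the right option
    and the rest vote by fair coin flips."""
    informed = round(n * leader_frac)
    correct = 0
    for _ in range(trials):
        votes = informed + sum(rng.random() < 0.5 for _ in range(n - informed))
        correct += votes * 2 > n        # strict majority for the right option
    return correct / trials

# At a fixed leader fraction, larger groups decide more accurately:
acc_small, acc_big = group_accuracy(25, 0.2), group_accuracy(400, 0.2)
```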

  2. Fourth-grade children’s dietary reporting accuracy by meal component: Results from a validation study that manipulated retention interval and prompts

    PubMed Central

    Baxter, Suzanne D.; Hitchcock, David B.; Royer, Julie A.; Smith, Albert F.; Guinn, Caroline H.

    2017-01-01

    We examined reporting accuracy by meal component (beverage, bread, breakfast meat, combination entrée, condiment, dessert, entrée, fruit, vegetable) with validation-study data on 455 fourth-grade children (mean age = 9.92 ± 0.41 years) observed eating school meals and randomized to one of eight dietary recall conditions (two retention intervals [short, long] crossed with four prompts [forward, meal-name, open, reverse]). Accuracy category (match [observed and reported], omission [observed but unreported], intrusion [unobserved but reported]) was a polytomous nominal item response variable. We fit a multilevel cumulative logit model with item variables meal component and serving period (breakfast, lunch) and child variables retention interval, prompt and sex. Significant accuracy category predictors were meal component (p < 0.0003), retention interval (p < 0.0003), meal-component × serving-period (p < 0.0003) and meal-component × retention-interval (p = 0.001). The relationship of meal component and accuracy category was much stronger for lunch than breakfast. For lunch, beverages were matches more often, omissions much less often and intrusions more often than expected under independence; fruits and desserts were omissions more often. For the meal-component × retention-interval interaction, for the short retention interval, beverages were intrusions much more often but combination entrées and condiments were intrusions less often; for the long retention interval, beverages were matches more often and omissions less often but fruits were matches less often. Accuracy for each meal component appeared better with the short than long retention interval. For lunch and for the short retention interval, children’s reporting was most accurate for entrée and combination entrée meal components, whereas it was least accurate for vegetable and fruit meal components. Results have implications for conclusions of studies and interventions assessed with dietary recalls

  3. Mixed reality ventriculostomy simulation: experience in neurosurgical residency.

    PubMed

    Hooten, Kristopher G; Lister, J Richard; Lombard, Gwen; Lizdas, David E; Lampotang, Samsun; Rajon, Didier A; Bova, Frank; Murad, Gregory J A

    2014-12-01

    Medicine and surgery are turning toward simulation to improve on limited patient interaction during residency training. Many simulators today use virtual reality with augmented haptic feedback with little to no physical elements. In a collaborative effort, the University of Florida Department of Neurosurgery and the Center for Safety, Simulation & Advanced Learning Technologies created a novel "mixed" physical and virtual simulator to mimic the ventriculostomy procedure. The simulator contains all the physical components encountered for the procedure with superimposed 3-D virtual elements for the neuroanatomical structures. To introduce the ventriculostomy simulator and its validation as a necessary training tool in neurosurgical residency. We tested the simulator in more than 260 residents. An algorithm combining time and accuracy was used to grade performance. Voluntary postperformance surveys were used to evaluate the experience. Results demonstrate that more experienced residents have statistically significant better scores and completed the procedure in less time than inexperienced residents. Survey results revealed that most residents agreed that practice on the simulator would help with future ventriculostomies. This mixed reality simulator provides a real-life experience, and will be an instrumental tool in training the next generation of neurosurgeons. We have now implemented a standard where incoming residents must prove efficiency and skill on the simulator before their first interaction with a patient.

  4. Simulation of land use change in the three gorges reservoir area based on CART-CA

    NASA Astrophysics Data System (ADS)

    Yuan, Min

    2018-05-01

    This study proposes a new method to simulate spatiotemporally complex, multiple land uses using a classification and regression tree (CART) based CA model. In this model, we use the CART algorithm to calculate land-class conversion probabilities and combine them with neighborhood and random factors to extract the cellular transition rules. In the dynamic simulation of land use in the Three Gorges reservoir area from 2000 to 2010, the overall Kappa coefficient is 0.8014 and the overall accuracy is 0.8821; the simulation results are satisfactory.
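    The shape of such a CART-CA rule can be sketched with a minimal decision stump standing in for a full tree: learn leaf conversion probabilities from observed conversions, then modulate them by neighborhood and random factors. All data and parameter choices below are invented for illustration:

```python
import random

def fit_stump(xs, ys):
    """Tiny CART-like stump: pick the threshold on a single driver variable
    (e.g. distance to road) minimizing weighted Gini impurity, and return
    a function mapping the driver to a leaf conversion probability."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        gini = sum(len(s) * (1 - (sum(s) / len(s)) ** 2 - (1 - sum(s) / len(s)) ** 2)
                   for s in (left, right)) / len(xs)
        if best is None or gini < best[0]:
            best = (gini, t, sum(left) / len(left), sum(right) / len(right))
    _, t, p_left, p_right = best
    return lambda x: p_left if x <= t else p_right

# Toy training data: cells nearer a road (small x) converted more often
dist = [1, 2, 2, 3, 6, 7, 8, 9]
conv = [1, 1, 0, 1, 0, 0, 1, 0]
p_convert = fit_stump(dist, conv)

def transition_prob(dist_road, n_urban_neighbors, rng):
    """Combine the tree probability with neighborhood and random factors,
    mirroring the rule structure described in the abstract."""
    return p_convert(dist_road) * (n_urban_neighbors / 8) * (0.9 + 0.2 * rng.random())
```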

  5. The Aurora radiation-hydrodynamical simulations of reionization: calibration and first results

    NASA Astrophysics Data System (ADS)

    Pawlik, Andreas H.; Rahmati, Alireza; Schaye, Joop; Jeon, Myoungwon; Dalla Vecchia, Claudio

    2017-04-01

    We introduce a new suite of radiation-hydrodynamical simulations of galaxy formation and reionization called Aurora. The Aurora simulations make use of a spatially adaptive radiative transfer technique that lets us accurately capture the small-scale structure in the gas at the resolution of the hydrodynamics, in cosmological volumes. In addition to ionizing radiation, Aurora includes galactic winds driven by star formation and the enrichment of the universe with metals synthesized in the stars. Our reference simulation uses 2 × 512³ dark matter and gas particles in a box of size 25 h⁻¹ comoving Mpc with a force softening scale of at most 0.28 h⁻¹ kpc. It is accompanied by simulations in larger and smaller boxes and at higher and lower resolution, employing up to 2 × 1024³ particles, to investigate numerical convergence. All simulations are calibrated to yield simulated star formation rate functions in close agreement with observational constraints at redshift z = 7 and to achieve reionization at z ≈ 8.3, which is consistent with the observed optical depth to reionization. We focus on the design and calibration of the simulations and present some first results. The median stellar metallicities of low-mass galaxies at z = 6 are consistent with the metallicities of dwarf galaxies in the Local Group, which are believed to have formed most of their stars at high redshifts. After reionization, the mean photoionization rate decreases systematically with increasing resolution. This coincides with a systematic increase in the abundance of neutral hydrogen absorbers in the intergalactic medium.

  6. Piloted simulation of a ground-based time-control concept for air traffic control

    NASA Technical Reports Server (NTRS)

    Davis, Thomas J.; Green, Steven M.

    1989-01-01

    A concept for aiding air traffic controllers in efficiently spacing traffic and meeting scheduled arrival times at a metering fix was developed and tested in a real-time simulation. The automation aid, referred to as the ground-based 4-D descent advisor (DA), is based on accurate models of aircraft performance and weather conditions. The DA generates suggested clearances, including both top-of-descent-point and speed-profile data, for one or more aircraft in order to achieve specific time or distance separation objectives. The DA algorithm is used by the air traffic controller to resolve conflicts and issue advisories to arrival aircraft. A joint simulation was conducted using a piloted simulator and an advanced-concept air traffic control simulation to study the acceptability and accuracy of the DA automation aid from both the pilot's and the air traffic controller's perspectives. The results of the piloted simulation are examined. In the piloted simulation, airline crews executed controller-issued descent advisories along standard curved-path arrival routes and were able to achieve an arrival time precision of ±20 sec at the metering fix. An analysis of errors generated in turns resulted in further enhancements of the algorithm to improve its predictive accuracy. Evaluations by pilots indicate general support for the concept and provide specific recommendations for improvement.

  7. High accuracy differential pressure measurements using fluid-filled catheters - A feasibility study in compliant tubes.

    PubMed

    Rotman, Oren Moshe; Weiss, Dar; Zaretsky, Uri; Shitzer, Avraham; Einav, Shmuel

    2015-09-18

    High accuracy differential pressure measurements are required in various biomedical and medical applications, such as in fluid-dynamic test systems, or in the cath-lab. Differential pressure measurements using fluid-filled catheters are relatively inexpensive, yet may be subjected to common mode pressure errors (CMP), which can significantly reduce the measurement accuracy. Recently, a novel correction method for high accuracy differential pressure measurements was presented, and was shown to effectively remove CMP distortions from measurements acquired in rigid tubes. The purpose of the present study was to test the feasibility of this correction method inside compliant tubes, which effectively simulate arteries. Two tubes with varying compliance were tested under dynamic flow and pressure conditions to cover the physiological range of radial distensibility in coronary arteries. A third, compliant model, with a 70% stenosis severity was additionally tested. Differential pressure measurements were acquired over a 3 cm tube length using a fluid-filled double-lumen catheter, and were corrected using the proposed CMP correction method. Validation of the corrected differential pressure signals was performed by comparison to differential pressure recordings taken via a direct connection to the compliant tubes, and by comparison to predicted differential pressure readings of matching fluid-structure interaction (FSI) computational simulations. The results show excellent agreement between the experimentally acquired and computationally determined differential pressure signals. This validates the application of the CMP correction method in compliant tubes of the physiological range for up to intermediate size stenosis severity of 70%. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Long-range speckle imaging theory, simulation, and brassboard results

    NASA Astrophysics Data System (ADS)

    Riker, Jim F.; Tyler, Glenn A.; Vaughn, Jeff L.

    2017-09-01

    In the SPIE 2016 Unconventional Imaging session, the authors laid out a breakthrough new theory for active array imaging that exploits the speckle return to generate a high-resolution picture of the target. Since then, we have pursued that theory even in long-range (<1000 km) engagement scenarios and shown how we can obtain that high-resolution image of the target using only a few illuminators, or by using many illuminators. There is a trade of illuminators versus receivers, but many combinations provide the same synthetic aperture resolution. We discuss that trade, along with the corresponding radiometric and speckle-imaging signal-to-noise ratios (SNRs) for geometries that can fit on relatively small aircraft, such as an Unmanned Aerial Vehicle (UAV). Furthermore, we have simulated the performance of the technique, and we have created a laboratory version of the approach that is able to obtain high-resolution speckle imagery. The principal results presented in this paper are the SNRs for both the radiometric and the speckle-imaging portions of the problem, and the simulated results obtained for representative arrays.

  9. Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement

    DOE PAGES

    Guzik, Stephen M.; Weisgraber, Todd H.; Colella, Phillip; ...

    2013-12-10

    A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on a space-time interpolation and on solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second-order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.
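
The conservation-constrained least-squares step described above can be sketched as an equality-constrained problem solved through its KKT system; the matrices below are illustrative toy data, not the paper's actual interpolation stencils:

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Solve min ||Ax - b||^2 subject to Cx = d via the KKT linear system."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # drop the Lagrange multipliers

# Toy example: fit two interpolation weights while exactly enforcing
# a conservation constraint (weights sum to 1)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([0.3, 0.8, 1.2])
C = np.array([[1.0, 1.0]])
d = np.array([1.0])
x = constrained_lstsq(A, b, C, d)
```

The constraint is satisfied exactly (x sums to 1), while the residual is minimized among all conserving solutions.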

  10. Development of a device to simulate tooth mobility.

    PubMed

    Erdelt, Kurt-Jürgen; Lamper, Timea

    2010-10-01

    The testing of new materials under simulated oral conditions is essential in medicine. For the simulation of fracture strength, different simulation devices are used in test set-ups. The results of these in vitro tests differ because there is no standardization of tooth mobility in simulation devices. The aim of this study is to develop a simulation device that depicts the tooth mobility curve as accurately as possible and creates reproducible and scalable mobility curves. With the aid of published literature and with the help of dentists, average forms of tooth classes were generated. Based on these tooth data, different abutment tooth shapes and different simulation devices were designed with a CAD system and were generated with a Rapid Prototyping system. Then, for all simulation devices, displacement curves were created with a universal testing machine and compared with the tooth mobility curve. With this new information, an improved, adapted simulation device was constructed. A simulation device that is able to simulate the mobility curve of natural teeth with high accuracy, and whose mobility is reproducible and scalable, was developed.

  11. Numerical simulations in the development of propellant management devices

    NASA Astrophysics Data System (ADS)

    Gaulke, Diana; Winkelmann, Yvonne; Dreyer, Michael

    Propellant management devices (PMDs) are used for positioning the propellant at the propellant port. It is important to provide propellant without gas bubbles. Gas bubbles can inflict cavitation and may lead to system failures in the worst case. Therefore, the reliable operation of such devices must be guaranteed. Testing these complex systems is a very intricate process. Furthermore, in most cases only tests with downscaled geometries are possible. Numerical simulations are used here as an aid to optimize the tests and to predict certain results. Based on these simulations, parameters can be determined in advance and parts of the equipment can be adjusted in order to minimize the number of experiments. In return, the simulations are validated against the test results. Furthermore, if the accuracy of the numerical prediction is verified, then numerical simulations can be used for validating the scaling of the experiments. This presentation demonstrates some selected numerical simulations for the development of PMDs at ZARM.

  12. Planck 2015 results: XII. Full focal plane simulations

    DOE PAGES

    Ade, P. A. R.; Aghanim, N.; Arnaud, M.; ...

    2016-09-20

    In this paper, we present the 8th full focal plane simulation set (FFP8), deployed in support of the Planck 2015 results. FFP8 consists of 10 fiducial mission realizations reduced to 18,144 maps, together with the most massive suite of Monte Carlo realizations of instrument noise and CMB ever generated, comprising 10⁴ mission realizations reduced to about 10⁶ maps. The resulting maps incorporate the dominant instrumental, scanning, and data analysis effects; the remaining subdominant effects will be included in future updates. Finally, generated at a cost of some 25 million CPU-hours spread across multiple high-performance-computing (HPC) platforms, FFP8 is used to validate and verify analysis algorithms and their implementations, and to remove biases from and quantify uncertainties in the results of analyses of the real data.

  13. Nano-level instrumentation for analyzing the dynamic accuracy of a rolling element bearing.

    PubMed

    Yang, Z; Hong, J; Zhang, J; Wang, M Y; Zhu, Y

    2013-12-01

    The rotational performance of high-precision rolling bearings is fundamental to the overall accuracy of complex mechanical systems. A nano-level instrument to analyze the rotational accuracy of high-precision machine tool bearings under working conditions was developed. In this instrument, a high-precision (error motion < 0.15 μm) and high-stiffness (2600 N axial loading capacity) aerostatic spindle was applied to spin the test bearing. Operating conditions could be simulated effectively because of the large axial loading capacity. An air cylinder, controlled by a proportional pressure regulator, was applied to drive an air bearing that imposed non-contact, precisely controlled axial forces. Axial loading and rotation were thereby applied while the five remaining degrees of freedom were left completely unconstrained and uninfluenced by the instrument's structure. Dual capacitive displacement sensors with 10 nm resolution were applied to measure the error motion of the spindle using a double-probe error separation method. This enabled the separation of the spindle's error motion from the measurement results of the test bearing, which were measured using two orthogonal laser displacement sensors with 5 nm resolution. Finally, a Lissajous figure was used to evaluate the non-repetitive run-out (NRRO) of the bearing at different axial forces and speeds. The measurement results at various axial loadings and speeds showed that the standard deviations of the measurements' repeatability and accuracy were less than 1% and 2%, respectively. Future studies will analyze the relationship between geometrical errors and NRRO, such as ball diameter differences and geometrical errors in the grooves of the rings.

  14. Advanced EMT and Phasor-Domain Hybrid Simulation with Simulation Mode Switching Capability for Transmission and Distribution Systems

    DOE PAGES

    Huang, Qiuhua; Vittal, Vijay

    2018-05-09

    Conventional electromagnetic transient (EMT) and phasor-domain hybrid simulation approaches presently exist for transmission system level studies. Their simulation efficiency is generally constrained by the EMT simulation. With an increasing number of distributed energy resources and non-conventional loads being installed in distribution systems, it is imperative to extend the hybrid simulation application to include distribution systems and integrated transmission and distribution systems. Meanwhile, it is equally important to improve the simulation efficiency as the modeling scope and complexity of the detailed system in the EMT simulation increase. To meet both requirements, this paper introduces an advanced EMT and phasor-domain hybrid simulation approach. This approach has two main features: 1) a comprehensive phasor-domain modeling framework which supports positive-sequence, three-sequence, three-phase and mixed three-sequence/three-phase representations, and 2) a robust and flexible simulation mode switching scheme. The developed scheme enables switching from hybrid simulation mode back to pure phasor-domain dynamic simulation mode to achieve significantly improved simulation efficiency. The proposed method has been tested on integrated transmission and distribution systems. The results show that with the developed simulation switching feature, the total computational time is significantly reduced compared to running the hybrid simulation for the whole simulation period, while maintaining good simulation accuracy.

  16. High Fidelity Thermal Simulators for Non-Nuclear Testing: Analysis and Initial Results

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.; Dickens, Ricky; Dixon, David

    2007-01-01

    Non-nuclear testing can be a valuable tool in the development of a space nuclear power system, providing system characterization data and allowing one to work through various fabrication, assembly and integration issues without the cost and time associated with a full ground nuclear test. In a non-nuclear test bed, electric heaters are used to simulate the heat from nuclear fuel. Testing with non-optimized heater elements allows one to assess thermal, heat transfer, and stress related attributes of a given system, but fails to demonstrate the dynamic response that would be present in an integrated, fueled reactor system. High fidelity thermal simulators that match both the static and the dynamic fuel pin performance that would be observed in an operating, fueled nuclear reactor can vastly increase the value of non-nuclear test results. With optimized simulators, the integration of thermal hydraulic hardware tests with simulated neutronic response provides a bridge between electrically heated testing and fueled nuclear testing, providing a better assessment of system integration issues, characterization of integrated system response times and response characteristics, and assessment of potential design improvements at a relatively small fiscal investment. Initial conceptual thermal simulator designs are determined by simple one-dimensional analysis at a single axial location and at steady-state conditions; feasible concepts are then input into a detailed three-dimensional model for comparison to expected fuel pin performance. Static and dynamic fuel pin performance for a proposed reactor design is determined using SINDA/FLUINT thermal analysis software, and a comparison is made between the expected nuclear performance and the performance of conceptual thermal simulator designs. Through a series of iterative analyses, a conceptual high fidelity design can be developed. Test results presented in this paper correspond to a "first cut" simulator design for a potential

  17. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.

    1994-01-01

    This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin-stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHSTs) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.

  18. Analysis of point source size on measurement accuracy of lateral point-spread function of confocal Raman microscopy

    NASA Astrophysics Data System (ADS)

    Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang

    2018-01-01

    Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and achieves high spatial resolution thanks to its unique pinhole. With the wide application of the instrument, there is a growing requirement for evaluating the imaging performance of the system. The point-spread function (PSF) is an important approach to evaluating the imaging capability of an optical instrument. Among the various methods for measuring the PSF, the point source method has been widely used because it is easy to operate and its measurement results approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established, and the effect of point source size on the full width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom using polydimethylsiloxane resin, doped with different sizes of polystyrene microspheres, is designed. The PSFs of the CRM with different sizes of microspheres are measured and the results are compared with the simulation results. The results provide a guide for measuring the PSF of CRM.
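
The full width at half maximum (FWHM) discussed above can be estimated numerically from a sampled profile. A sketch assuming a Gaussian lateral PSF with a hypothetical width (not the instrument's actual PSF):

```python
import numpy as np

def fwhm(x, y):
    """Estimate full width at half maximum by linear interpolation of the
    half-maximum crossings of a single-peaked sampled profile."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate the crossing position on each flank
    xl = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    xr = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

sigma = 0.25  # hypothetical Gaussian width (micrometres)
x = np.linspace(-2.0, 2.0, 4001)
psf = np.exp(-x**2 / (2.0 * sigma**2))
width = fwhm(x, psf)  # analytic value is 2*sqrt(2*ln 2)*sigma
```

For a Gaussian, the numerical estimate should match the closed form 2√(2 ln 2)·σ ≈ 0.589 for σ = 0.25.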

  19. RFI in hybrid loops - Simulation and experimental results.

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.

    1972-01-01

    A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte-Carlo simulation is used to show that the HPLL can be superior to the conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.

  20. Configuration optimization and experimental accuracy evaluation of a bone-attached, parallel robot for skull surgery.

    PubMed

    Kobler, Jan-Philipp; Nuelle, Kathrin; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lueder A; Kotlarski, Jens; Ortmaier, Tobias

    2016-03-01

    Minimally invasive cochlear implantation is a novel surgical technique which requires highly accurate guidance of a drilling tool along a trajectory from the mastoid surface toward the basal turn of the cochlea. The authors propose a passive, reconfigurable, parallel robot which can be directly attached to bone anchors implanted in a patient's skull, avoiding the need for surgical tracking systems. Prior to clinical trials, methods are necessary to patient-specifically optimize the configuration of the mechanism with respect to accuracy and stability. Furthermore, the achievable accuracy has to be determined experimentally. A comprehensive error model of the proposed mechanism is established, taking into account all relevant error sources identified in previous studies. Two optimization criteria to exploit the given task redundancy and reconfigurability of the passive robot are derived from the model. The achievable accuracy of the optimized robot configurations is first estimated with the help of a Monte Carlo simulation approach and finally evaluated in drilling experiments using synthetic temporal bone specimens. Experimental results demonstrate that the bone-attached mechanism exhibits a mean targeting accuracy of [Formula: see text] mm under realistic conditions. A systematic targeting error is observed, which indicates that accurate identification of the passive robot's kinematic parameters could further reduce deviations from planned drill trajectories. The accuracy of the proposed mechanism demonstrates its suitability for minimally invasive cochlear implantation. Future work will focus on further evaluation experiments on temporal bone specimens.
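
A Monte Carlo accuracy estimate of the kind mentioned above can be sketched by sampling hypothetical error sources and propagating them to the drill tip. All parameter values here are illustrative assumptions, not the study's identified error model:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def targeting_error(n_trials=10_000, anchor_sd=0.1, depth=30.0, angle_sd_deg=0.2):
    """Hypothetical Monte Carlo estimate of mean drill-tip targeting error (mm):
    lateral anchor-position error plus angular error amplified over the
    drilling depth."""
    lateral = rng.normal(0.0, anchor_sd, size=(n_trials, 2))
    angular = rng.normal(0.0, np.radians(angle_sd_deg), size=(n_trials, 2))
    tip = lateral + depth * np.tan(angular)   # small-angle lever-arm effect
    return np.linalg.norm(tip, axis=1).mean()

mean_err = targeting_error()
```

Such a simulation shows how a small angular error (0.2° here) dominates positional error once multiplied by a 30 mm drilling depth.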

  1. Accuracy of Digital vs Conventional Implant Impression Approach: A Three-Dimensional Comparative In Vitro Analysis.

    PubMed

    Basaki, Kinga; Alkumru, Hasan; De Souza, Grace; Finer, Yoav

    To assess the three-dimensional (3D) accuracy and clinical acceptability of implant definitive casts fabricated using a digital impression approach and to compare the results with those of a conventional impression method in a partially edentulous condition. A mandibular reference model was fabricated with implants in the first premolar and molar positions to simulate a patient with bilateral posterior edentulism. Ten implant-level impressions per method were made using either an intraoral scanner with scanning abutments for the digital approach or an open-tray technique and polyvinylsiloxane material for the conventional approach. 3D analysis and comparison of implant location on resultant definitive casts were performed using laser scanner and quality control software. The inter-implant distances and interimplant angulations for each implant pair were measured for the reference model and for each definitive cast (n = 20 per group); these measurements were compared to calculate the magnitude of error in 3D for each definitive cast. The influence of implant angulation on definitive cast accuracy was evaluated for both digital and conventional approaches. Statistical analysis was performed using t test (α = .05) for implant position and angulation. Clinical qualitative assessment of accuracy was done via the assessment of the passivity of a master verification stent for each implant pair, and significance was analyzed using chi-square test (α = .05). A 3D error of implant positioning was observed for the two impression techniques vs the reference model, with mean ± standard deviation (SD) error of 116 ± 94 μm and 56 ± 29 μm for the digital and conventional approaches, respectively (P = .01). In contrast, the inter-implant angulation errors were not significantly different between the two techniques (P = .83). Implant angulation did not have a significant influence on definitive cast accuracy within either technique (P = .64). The verification stent
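
The inter-implant distance and angulation errors described above reduce to simple vector geometry. A sketch with hypothetical coordinates (not the study's measurements):

```python
import numpy as np

def implant_errors(p_ref, a_ref, p_cast, a_cast):
    """Return (distance error, angulation error in degrees) between a
    reference model and a definitive cast.
    p_*: 2x3 implant positions; a_*: 2x3 unit implant axis vectors."""
    d_ref = np.linalg.norm(p_ref[0] - p_ref[1])
    d_cast = np.linalg.norm(p_cast[0] - p_cast[1])

    def inter_angle(a):
        c = np.clip(np.dot(a[0], a[1]), -1.0, 1.0)
        return np.degrees(np.arccos(c))

    return abs(d_cast - d_ref), abs(inter_angle(a_cast) - inter_angle(a_ref))

# Hypothetical coordinates in mm (illustrative, not study data)
p_ref = np.array([[0.0, 0.0, 0.0], [20.0, 0.0, 0.0]])
a_ref = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
p_cast = np.array([[0.0, 0.0, 0.0], [20.05, 0.0, 0.0]])
a_cast = np.array([[0.0, 0.0, 1.0],
                   [np.sin(np.radians(1.0)), 0.0, np.cos(np.radians(1.0))]])
dist_err, ang_err = implant_errors(p_ref, a_ref, p_cast, a_cast)
```

With these inputs the distance error is 0.05 mm and the angulation error is 1.0°, the same two quantities the study compares between impression techniques.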

  2. Development and evaluation of a Kalman-filter algorithm for terminal area navigation using sensors of moderate accuracy

    NASA Technical Reports Server (NTRS)

    Kanning, G.; Cicolani, L. S.; Schmidt, S. F.

    1983-01-01

    Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.
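
A scalar Kalman filter illustrates the estimation principle applied above, though the actual terminal-area filter fuses multiple sensors and a multi-dimensional state. This sketch assumes a random-walk state observed by one noisy position sensor:

```python
import numpy as np

def kalman_1d(z, q=0.01, r=1.0):
    """Minimal scalar Kalman filter: random-walk state, noisy sensor.
    z: measurements, q: process noise variance, r: measurement noise variance."""
    x, p = 0.0, 1.0
    estimates = []
    for zk in z:
        p += q                   # predict: uncertainty grows by process noise
        k = p / (p + r)          # Kalman gain
        x += k * (zk - x)        # update state with the innovation
        p *= (1.0 - k)           # shrink uncertainty
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(seed=0)
truth = 5.0   # hypothetical constant position
z = truth + rng.normal(0.0, 1.0, size=200)
est = kalman_1d(z)
```

The gain k automatically balances sensor accuracy (r) against process uncertainty (q), which is the mechanism the paper exploits to extract maximum accuracy from moderate-accuracy sensors.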

  3. Insensitivity of the octahedral spherical hohlraum to power imbalance, pointing accuracy, and assemblage accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huo, Wen Yi; Zhao, Yiqing; Zheng, Wudi

    2014-11-15

    The random radiation asymmetry in the octahedral spherical hohlraum [K. Lan et al., Phys. Plasmas 21, 010704 (2014)] arising from the power imbalance, pointing accuracy of laser quads, and the assemblage accuracy of the capsule is investigated by using a 3-dimensional view factor model. From our study, for the spherical hohlraum, the random radiation asymmetry arising from the power imbalance of the laser quads is about half of that in the cylindrical hohlraum; the random asymmetry arising from the pointing error is about one order lower than that in the cylindrical hohlraum; and the random asymmetry arising from the assemblage error of the capsule is about one third of that in the cylindrical hohlraum. Moreover, the random radiation asymmetry in the spherical hohlraum is also less than that in the elliptical hohlraum. The results indicate that the spherical hohlraum is more insensitive to these random variations than the cylindrical and elliptical hohlraums. Hence, the spherical hohlraum can relax the requirements on the power imbalance and pointing accuracy of the laser facility and on the assemblage accuracy of the capsule.

  4. Exact and approximate stochastic simulation of intracellular calcium dynamics.

    PubMed

    Wieder, Nicolas; Fink, Rainer H A; Wegner, Frederic von

    2011-01-01

    In simulations of chemical systems, the main task is to find an exact or approximate solution of the chemical master equation (CME) that satisfies certain constraints with respect to computation time and accuracy. While Brownian motion simulations of single molecules are often too time consuming to represent the mesoscopic level, the classical Gillespie algorithm is a stochastically exact algorithm that provides satisfying results in the representation of calcium microdomains. Gillespie's algorithm can be approximated via the tau-leap method and the chemical Langevin equation (CLE). Both methods lead to a substantial acceleration in computation time and a relatively small decrease in accuracy. Elimination of the noise terms leads to the classical, deterministic reaction rate equations (RRE). For complex multiscale systems, hybrid simulations are increasingly proposed to combine the advantages of stochastic and deterministic algorithms. Striated muscle cells (e.g., cardiac and skeletal muscle cells) are often used as exemplary cell types in this context: their properties are well described, and they express many common calcium-dependent signaling pathways. The purpose of the present paper is to provide an overview of the aforementioned simulation approaches and their mutual relationships in the spectrum ranging from stochastic to deterministic algorithms.
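
Gillespie's direct method, referenced above as the stochastically exact baseline, can be sketched for the simplest possible system, a single first-order decay reaction X → ∅ (parameters are illustrative):

```python
import math
import random

def gillespie_decay(x0=100, k=0.1, t_end=50.0, seed=42):
    """Gillespie's direct method for the single reaction X -> 0 with
    propensity a(x) = k*x: draw an exponential waiting time, then fire
    the (only) reaction."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while x > 0 and t < t_end:
        a = k * x                          # total propensity
        t += -math.log(1.0 - rng.random()) / a   # exponential waiting time
        x -= 1                             # one X molecule decays
        times.append(t)
        counts.append(x)
    return times, counts

times, counts = gillespie_decay()
```

With several reaction channels, the direct method additionally draws a second random number to select which reaction fires, weighted by the individual propensities; the tau-leap and CLE approximations mentioned above trade this per-event exactness for larger time steps.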

  5. Near-field diffraction from amplitude diffraction gratings: theory, simulation and results

    NASA Astrophysics Data System (ADS)

    Abedin, Kazi Monowar; Rahman, S. M. Mujibur

    2017-08-01

    We describe a computer simulation method by which the complete near-field diffraction pattern of an amplitude diffraction grating can be generated. The technique uses the method of iterative Fresnel integrals to calculate and generate the diffraction images. The theoretical background, as well as the techniques to perform the simulation, is described. The program is written in MATLAB and can be implemented on any ordinary PC. Examples of simulated diffraction images are presented and discussed. The generated images in the far field, where they reduce to the Fraunhofer diffraction pattern, are also presented for a realistic grating and compared with the results predicted by the grating equation, which is applicable in the far field. The method can be used as a tool to teach the complex phenomenon of diffraction in classrooms.
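
The Fresnel integrals underlying such near-field calculations can be evaluated numerically; a sketch using a simple midpoint-rule quadrature (the iterative method in the paper involves more than this single evaluation):

```python
import numpy as np

def fresnel_integrals(w, n=20_000):
    """Numerically evaluate the Fresnel integrals
    C(w) = int_0^w cos(pi t^2 / 2) dt and S(w) = int_0^w sin(pi t^2 / 2) dt
    with a midpoint rule on n subintervals."""
    dt = w / n
    t = (np.arange(n) + 0.5) * dt          # midpoints
    c = np.sum(np.cos(np.pi * t**2 / 2.0)) * dt
    s = np.sum(np.sin(np.pi * t**2 / 2.0)) * dt
    return c, s

c, s = fresnel_integrals(1.0)  # tabulated values: C(1) ≈ 0.7799, S(1) ≈ 0.4383
```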

  6. MHD simulation of transition process from the magneto-rotational instability to magnetic turbulence by using a high-order MHD simulation scheme

    NASA Astrophysics Data System (ADS)

    Hirai, K.; Katoh, Y.; Terada, N.; Kawai, S.

    2016-12-01

    In accretion disks, the magneto-rotational instability (MRI; Balbus & Hawley, 1991) drives the disk gas into a magnetically turbulent state and drives efficient mass accretion onto a central star. MRI drives turbulence through the evolution of the parasitic instability (PI; Goodman & Xu, 1994), which is related to both the Kelvin-Helmholtz (K-H) instability and magnetic reconnection. The wave number vector of PI is strongly affected by both magnetic diffusivity and fluid viscosity (Pessah, 2010). This fact makes MHD simulation of MRI difficult, because we need to employ numerical diffusivity for treating discontinuities in compressible MHD simulation schemes. Therefore, it is necessary to use an MHD scheme that has both high-order accuracy, so as to resolve MRI-driven turbulence, and numerical diffusivity small enough to treat discontinuities. We have originally developed an MHD code employing the scheme proposed by Kawai (2013). This scheme focuses on resolving turbulence accurately by using a high-order compact difference scheme (Lele, 1992); meanwhile, it treats discontinuities by using the localized artificial diffusivity method (Kawai, 2013). Our code also employs the pipeline algorithm (Matsuura & Kato, 2007) for MPI parallelization without diminishing the accuracy of the compact difference scheme. We carry out a 3-dimensional ideal MHD simulation with a net vertical magnetic field in the local shearing box disk model, using 256 × 256 × 128 grid points. Simulation results show that the spatially averaged turbulent stress induced by MRI grows linearly until around 2.8 orbital periods and decreases after the saturation. We confirm the strong enhancement of the K-H mode PI at a timing just before the saturation, identified by the enhancement of its anisotropic wavenumber spectra in the 2-dimensional wavenumber space. The wave number of the maximum growth of PI reproduced in the simulation is larger than that of the linear analysis. This discrepancy is explained by
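
As a much simpler stand-in for the compact difference scheme of Lele (1992) used above, an explicit fourth-order central difference illustrates how high-order spatial accuracy is verified against a known derivative:

```python
import numpy as np

def d_dx_4th(f, dx):
    """Explicit 4th-order central difference on a periodic grid (a simpler
    stand-in for a tridiagonal compact scheme; compact schemes achieve
    better spectral resolution at the same order)."""
    return (8.0 * (np.roll(f, -1) - np.roll(f, 1))
            - (np.roll(f, -2) - np.roll(f, 2))) / (12.0 * dx)

n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
# derivative of sin is cos; max error should scale as dx**4
err = np.max(np.abs(d_dx_4th(np.sin(x), dx) - np.cos(x)))
```

Halving dx reduces err by roughly a factor of 16, the signature of fourth-order accuracy.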

  7. Self-consistent simulation of CdTe solar cells with active defects

    DOE PAGES

    Brinkman, Daniel; Guo, Da; Akis, Richard; ...

    2015-07-21

    We demonstrate a self-consistent numerical scheme for simulating an electronic device which contains active defects. As a specific case, we consider copper defects in cadmium telluride solar cells. The presence of copper has been shown experimentally to play a crucial role in determining device performance. The primary source of this copper is migration away from the back contact during annealing, which likely occurs predominantly along grain boundaries. We introduce a mathematical scheme for simulating this effect in 2D and explain the numerical implementation of the system. Lastly, we give numerical results comparing our results to known 1D simulations to demonstrate the accuracy of the solver, and then show results unique to the 2D case.

  8. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study.

    PubMed

    Barsingerhorn, A D; Boonstra, F N; Goossens, H H L M

    2017-02-01

    Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye model to study how these optical properties affect the accuracy of different stereo eye-tracking methods. We found that pupil size, gaze direction and head position all influence the reconstruction of gaze. The resulting errors are at best about ± 1.0 degrees. This shows that stereo eye-tracking may be an option if reliable calibration is not possible, but the applied eye model should account for the actual optics of the cornea.

  9. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RI. Using nonparametric methods (or alternatively Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
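
    The study's Shapiro-Wilk and D'Agostino-Pearson tests are not available in the Python standard library, but the simulation idea itself is easy to sketch. The following hypothetical example substitutes a simplified skewness-kurtosis (Jarque-Bera-style) normality check to estimate rejection rates at n = 30; the trial counts and distributions are illustrative assumptions, not the study's settings:

```python
import math
import random
import statistics

def jarque_bera(xs):
    """Skewness-kurtosis normality statistic (asymptotically chi^2 with 2 df)."""
    n = len(xs)
    m = statistics.fmean(xs)
    devs = [x - m for x in xs]
    m2 = sum(d * d for d in devs) / n
    m3 = sum(d ** 3 for d in devs) / n
    m4 = sum(d ** 4 for d in devs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

def rejection_rate(sampler, n=30, trials=500, crit=5.99):
    """Fraction of simulated samples rejected; 5.99 ~ chi^2(2) at alpha = .05."""
    rejected = sum(jarque_bera([sampler() for _ in range(n)]) > crit
                   for _ in range(trials))
    return rejected / trials

rng = random.Random(42)
# Rate of falsely rejecting truly Gaussian samples (1 - specificity in this framing)
gauss_reject = rejection_rate(lambda: rng.gauss(0.0, 1.0))
# Rate of correctly rejecting lognormal samples (sensitivity to non-normality)
lognorm_reject = rejection_rate(lambda: math.exp(rng.gauss(0.0, 1.0)))
```

    A pattern like the study's emerges: at n = 30 the test rarely rejects Gaussian samples, while strongly skewed lognormal samples are detected far more often.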

  10. Numerical Simulation of Transitional, Hypersonic Flows using a Hybrid Particle-Continuum Method

    NASA Astrophysics Data System (ADS)

    Verhoff, Ashley Marie

    Analysis of hypersonic flows requires consideration of multiscale phenomena due to the range of flight regimes encountered, from rarefied conditions in the upper atmosphere to fully continuum flow at low altitudes. At transitional Knudsen numbers there are likely to be localized regions of strong thermodynamic nonequilibrium effects that invalidate the continuum assumptions of the Navier-Stokes equations. Accurate simulation of these regions, which include shock waves, boundary and shear layers, and low-density wakes, requires a kinetic theory-based approach where no prior assumptions are made regarding the molecular distribution function. Because of the nature of these types of flows, there is much to be gained in terms of both numerical efficiency and physical accuracy by developing hybrid particle-continuum simulation approaches. The focus of the present research effort is the continued development of the Modular Particle-Continuum (MPC) method, where the Navier-Stokes equations are solved numerically using computational fluid dynamics (CFD) techniques in regions of the flow field where continuum assumptions are valid, and the direct simulation Monte Carlo (DSMC) method is used where strong thermodynamic nonequilibrium effects are present. Numerical solutions of transitional, hypersonic flows are thus obtained with increased physical accuracy relative to CFD alone, and improved numerical efficiency is achieved in comparison to DSMC alone because this more computationally expensive method is restricted to those regions of the flow field where it is necessary to maintain physical accuracy. In this dissertation, a comprehensive assessment of the physical accuracy of the MPC method is performed, leading to the implementation of a non-vacuum supersonic outflow boundary condition in particle domains, and more consistent initialization of DSMC simulator particles along hybrid interfaces. The relative errors between MPC and full DSMC results are greatly reduced as a

  11. Simulation studies for the evaluation of health information technologies: experiences and results.

    PubMed

    Ammenwerth, Elske; Hackl, Werner O; Binzer, Kristine; Christoffersen, Tue E H; Jensen, Sanne; Lawton, Kitta; Skjoet, Peter; Nohr, Christian

    It is essential for new health information technologies (IT) to undergo rigorous evaluations to ensure they are effective and safe for use in real-world situations. However, evaluation of new health IT is challenging, as field studies are often not feasible when the technology being evaluated is not sufficiently mature. Laboratory-based evaluations have also been shown to have insufficient external validity. Simulation studies seem to be a way to bridge this gap. The aim of this study was to evaluate, using a simulation methodology, the impact of a new prototype of an electronic medication management system on the appropriateness of prescriptions and drug-related activities, including laboratory test ordering or medication changes. This article presents the results of a controlled simulation study with 50 simulation runs, including ten doctors and five simulation patients, and discusses experiences and lessons learnt while conducting the study. Although the new electronic medication management system showed tendencies to improve medication safety when compared with the standard system, this tendency was not significant. Altogether, five distinct situations were identified where the new medication management system did help to improve medication safety. This simulation study provided a good compromise between internal validity and external validity. However, several challenges need to be addressed when undertaking simulation evaluations, including: preparation of adequate test cases; training of participants before using unfamiliar applications; consideration of the time, effort and costs of conducting the simulation; technical maturity of the evaluated system; and allowing adequate preparation of simulation scenarios and simulation settings. Simulation studies are an interesting but time-consuming approach, which can be used to evaluate newly developed health IT systems, particularly those systems that are not yet sufficiently mature to undergo field evaluation studies.

  12. Evaluation of Three Models for Simulating Pesticide Runoff from Irrigated Agricultural Fields.

    PubMed

    Zhang, Xuyang; Goh, Kean S

    2015-11-01

    Three models were evaluated for their accuracy in simulating pesticide runoff at the edge of agricultural fields: Pesticide Root Zone Model (PRZM), Root Zone Water Quality Model (RZWQM), and OpusCZ. Modeling results on runoff volume, sediment erosion, and pesticide loss were compared with measurements taken from field studies. Models were also compared on their theoretical foundations and ease of use. For runoff events generated by sprinkler irrigation and rainfall, all models performed equally well, with small errors in simulating water, sediment, and pesticide runoff. The mean absolute percentage errors (MAPEs) were between 3 and 161%. For flood irrigation, OpusCZ simulated runoff and pesticide mass with the highest accuracy, followed by RZWQM and PRZM, likely owing to its unique hydrological algorithm for runoff simulations during flood irrigation. Simulation results from cold model runs by OpusCZ and RZWQM using measured values for model inputs matched the observed values closely. The MAPE ranged from 28 to 384% and 42 to 168% for OpusCZ and RZWQM, respectively. These satisfactory model outputs showed the models' ability to mimic reality. Theoretical evaluations indicated that OpusCZ and RZWQM use mechanistic approaches for hydrology simulation, output data on a subdaily time-step, and were able to simulate management practices and subsurface flow via tile drainage. In contrast, PRZM operates at a daily time-step and simulates surface runoff using the USDA Soil Conservation Service's curve number method. Among the three models, OpusCZ and RZWQM were suitable for simulating pesticide runoff in semiarid areas where agriculture is heavily dependent on irrigation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
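
    The MAPE statistic used above to score the models is straightforward to compute. A minimal sketch, with purely illustrative runoff values in place of the study's field data:

```python
def mape(observed, simulated):
    """Mean absolute percentage error (%) between paired observed/simulated values."""
    return 100.0 * sum(abs((s - o) / o)
                       for o, s in zip(observed, simulated)) / len(observed)

# Hypothetical per-event runoff volumes (m^3): field measurements vs. model output
measured = [12.0, 30.0, 8.0, 21.0]
modelled = [10.0, 33.0, 9.0, 18.0]
err = mape(measured, modelled)
```

    Note that MAPE weights each event by its relative error, so small observed values can dominate the score, which partly explains the wide ranges (3-384%) reported above.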

  13. Thermal airborne multispectral aster simulator and its preliminary results

    NASA Astrophysics Data System (ADS)

    Mills, F.; Kannari, Y.; Watanabe, H.; Sano, M.; Chang, S. H.

    1994-03-01

    An Airborne ASTER Simulator (AAS) is being developed for the Japan Resources Observation System Organization (JAROS) by the Geophysical Environmental Research (GER) Corporation. The first test flights of the AAS were over Cuprite, Nevada; Long Valley, California; and Death Valley, California, in December 1991. Preliminary laboratory tests at NASA's Stennis Space Center (SSC) were completed in April 1992. The results of these tests indicate the AAS can discriminate between silicate and non-silicate rocks. The improvements planned for the next two years may give a spectral Full-Width at Half-Maximum (FWHM) of 0.3 μm and an NEΔT of 0.2-0.5 K. The AAS has the potential to become a good tool for airborne TIR research and can be used for simulations of future satellite-borne TIR sensors. Flight tests over Cuprite, Nevada, and Castaic Lake, California, are planned for October-December 1992.

  14. A pilot feasibility study of virtual patient simulation to enhance social work students' brief mental health assessment skills.

    PubMed

    Washburn, Micki; Bordnick, Patrick; Rizzo, Albert Skip

    2016-10-01

    This study presents preliminary feasibility and acceptability data on the use of virtual patient (VP) simulations to develop brief assessment skills within an interdisciplinary care setting. Results support the acceptability of technology-enhanced simulations and offer preliminary evidence for an association between engagement in VP practice simulations and improvements in diagnostic accuracy and clinical interviewing skills. Recommendations and next steps for research on technology-enhanced simulations within social work are discussed.

  15. Model improvements to simulate charging in SEM

    NASA Astrophysics Data System (ADS)

    Arat, K. T.; Klimpel, T.; Hagen, C. W.

    2018-03-01

    Charging of insulators is a complex phenomenon to simulate since the accuracy of the simulations is very sensitive to the interaction of electrons with matter and electric fields. In this study, we report model improvements for a previously developed Monte-Carlo simulator to more accurately simulate samples that charge. The improvements include both modelling of low energy electron scattering and charging of insulators. The new first-principle scattering models provide a more realistic charge distribution cloud in the material, and a better match between non-charging simulations and experimental results. Improvements on charging models mainly focus on redistribution of the charge carriers in the material via electron-beam-induced conductivity (EBIC) and a breakdown model, leading to a smoother distribution of the charges. Combined with a more accurate tracing of low energy electrons in the electric field, we managed to reproduce the dynamically changing charging contrast due to an induced positive surface potential.

  16. Simulation of General Physics laboratory exercise

    NASA Astrophysics Data System (ADS)

    Aceituno, P.; Hernández-Aceituno, J.; Hernández-Cabrera, A.

    2015-01-01

    Laboratory exercises are an important part of general Physics teaching, both during the last years of high school and the first year of college education. Due to the need to acquire enough laboratory equipment for all the students, and the widespread access to computer rooms in teaching, we propose the development of computer-simulated laboratory exercises. A representative exercise in general Physics is the calculation of the gravity acceleration value through the free fall motion of a metal ball. Using a model of the real exercise, we have developed an interactive system which allows students to alter the starting height of the ball to obtain different fall times. The simulation was programmed in ActionScript 3, so that it can be freely executed on any operating system; to ensure the accuracy of the calculations, all the input parameters of the simulations were modelled using digital measurement units, and to allow statistical management of the resulting data, measurement errors are simulated through limited randomization.
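
    The core computation behind such an exercise is recovering g from simulated fall times via g = 2h/t². A minimal sketch of this idea follows; the heights, timing-error magnitude and repetition counts are illustrative assumptions, not taken from the described system:

```python
import random
import statistics

G_TRUE = 9.81  # m/s^2, the value the exercise aims to recover

def simulated_fall_time(h, rng, timer_error=0.01):
    """Ideal fall time t = sqrt(2h/g) plus a small Gaussian timing error (s)."""
    t = (2.0 * h / G_TRUE) ** 0.5
    return t + rng.gauss(0.0, timer_error)

rng = random.Random(1)
heights = [0.5, 1.0, 1.5, 2.0]  # starting heights in metres
estimates = []
for h in heights:
    times = [simulated_fall_time(h, rng) for _ in range(20)]
    t_mean = statistics.fmean(times)
    estimates.append(2.0 * h / t_mean ** 2)  # invert h = (1/2) g t^2

g_est = statistics.fmean(estimates)  # pooled estimate of g
```

    Averaging repeated noisy timings before inverting the kinematic relation is exactly the statistical treatment the simulated measurement errors are meant to teach.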

  17. Accuracy of real time radiography burning rate measurement

    NASA Astrophysics Data System (ADS)

    Olaniyi, Bisola

    The design of a solid propellant rocket motor requires the determination of a propellant's burning rate and its dependency upon environmental parameters. The requirement that the burning rate be physically measured establishes the need for methods and equipment to obtain such data. A literature review reveals that no measurement has provided the desired burning rate accuracy. In the current study, flash x-ray modeling and digitized film-density data were employed to predict the motor-port area to length ratio. The pre-fired port areas and base burning rate were within 2.5% and 1.2% of their known values, respectively. To verify the accuracy of the method, a continuous x-ray and a solid propellant rocket motor model (Plexiglas cylinder) were used. The solid propellant motor model was translated laterally through a real-time radiography system at different speeds, simulating different burning rates. X-ray images were captured and the burning rate was then determined. The measured burning rate was within 1.65% of the known values.

  18. A study of workstation computational performance for real-time flight simulation

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Cleveland, Jeff I., II

    1995-01-01

    With recent advances in microprocessor technology, some have suggested that modern workstations provide enough computational power to properly operate a real-time simulation. This paper presents the results of a computational benchmark, based on actual real-time flight simulation code used at Langley Research Center, which was executed on various workstation-class machines. The benchmark was executed on different machines from several companies including: CONVEX Computer Corporation, Cray Research, Digital Equipment Corporation, Hewlett-Packard, Intel, International Business Machines, Silicon Graphics, and Sun Microsystems. The machines are compared by their execution speed, computational accuracy, and porting effort. The results of this study show that the raw computational power needed for real-time simulation is now offered by workstations.

  19. Simulation study of the ROMPS robot control system

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Liu, Hui-I.

    1994-01-01

    This report presents the progress of a research grant funded by NASA for work performed from June 1, 1993 to August 1, 1993. The report deals with the Robot Operated Material Processing System (ROMPS). It presents results of a computer simulation study conducted to investigate the performance of the control systems controlling the azimuth, elevation, and radial axes of the ROMPS and its gripper. Four study cases are conducted. The first case investigates the control of free motion along the three axes. In the second case, the compliant motion in the elevation axis with the wrist compliant device is studied in terms of position accuracy and impact forces. The third case focuses on the behavior of the control system in controlling the robot motion along the radial axis when pulling the pallet out of the rack. In the fourth case, the compliant motion of the gripper grasping a solid object under the effect of the gripper's passive compliance is studied in terms of position accuracy and contact forces. For each of the above cases, a set of PID gains is selected to optimize the controller performance, and computer simulation results are presented and discussed.
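
    As a rough illustration of how controller gains shape performance in such a simulation study, the sketch below drives a generic first-order plant with a discrete PID loop; the plant model and gain values are hypothetical and unrelated to the actual ROMPS dynamics:

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    """Discrete PID loop driving a toy first-order plant x' = -x + u."""
    x = 0.0            # plant state (e.g. axis position, arbitrary units)
    integ = 0.0        # integral of the error
    prev_err = setpoint  # initializing to the first error avoids a derivative kick
    for _ in range(steps):
        err = setpoint - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv  # PID control signal
        x += (-x + u) * dt                      # forward-Euler plant update
        prev_err = err
    return x

# Hypothetical gain set; in a real study these would be tuned per axis
final = simulate_pid(kp=4.0, ki=2.0, kd=0.1)
```

    Sweeping the gains in such a loop and inspecting the settling behavior is, in miniature, the tuning procedure a simulation study of this kind performs for each axis.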

  20. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results

    PubMed Central

    Humada, Ali M.; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M.; Ahmed, Mushtaq N.

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed mainly for residential usage, and the potential results have been simulated. The proposed PV model is based on three parameters: the photocurrent, IL; the reverse diode saturation current, Io; and the diode ideality factor, n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with high accuracy compared to the other models. The results of this study can be considered valuable for the installation of grid-connected PV systems in fluctuating climatic conditions. PMID:27035575
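
    A model built on the three parameters IL, Io and n typically reduces to the ideal single-diode equation I = IL - Io(exp(V/(n·VT)) - 1). The sketch below evaluates such an I-V curve and locates the maximum power point; all parameter values are illustrative assumptions, not those fitted in the study:

```python
import math

# Illustrative single-diode parameters (assumptions, not the paper's fit)
I_L = 8.0       # photocurrent [A]
I_0 = 1e-9      # reverse diode saturation current [A]
n = 1.3         # diode ideality factor
V_T = 0.02585   # thermal voltage kT/q at ~300 K [V]

def cell_current(v):
    """Ideal single-diode model; series/shunt resistances neglected."""
    return I_L - I_0 * (math.exp(v / (n * V_T)) - 1.0)

# Sweep the voltage in 1 mV steps and find the maximum power point
points = [(v / 1000.0, cell_current(v / 1000.0)) for v in range(0, 701)]
v_mp, i_mp = max(points, key=lambda p: p[0] * p[1])
p_mp = v_mp * i_mp
```

    At V = 0 the model returns the short-circuit current IL, and the open-circuit voltage follows from setting I = 0; fitting IL, Io and n against measured curves is what characterization against benchmarks amounts to.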

  1. Transport link scanner: simulating geographic transport network expansion through individual investments

    NASA Astrophysics Data System (ADS)

    Jacobs-Crisioni, C.; Koopmans, C. C.

    2016-07-01

    This paper introduces a GIS-based model that simulates the geographic expansion of transport networks by several decision-makers with varying objectives. The model progressively adds extensions to a growing network by choosing the most attractive investments from a limited choice set. Attractiveness is defined as a function of variables in which revenue and broader societal benefits may play a role, and can be based on empirically underpinned parameters that may differ according to private or public interests. The choice set is selected from an exhaustive set of links and presumably contains those investment options that best meet a private operator's objectives by balancing the revenues of additional fare against construction costs. The investment options consist of geographically plausible routes with potential detours. These routes are generated using a fine-meshed, regularly latticed network and shortest path finding methods. Additionally, two indicators of the geographic accuracy of the simulated networks are introduced. A historical case study is presented to demonstrate the model's first results. These results show that the modelled networks reproduce relevant features of the historically built network with reasonable accuracy.
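
    The shortest-path search over a fine-meshed lattice can be illustrated with a standard Dijkstra search on a grid of construction costs; the grid, costs and 4-connectivity below are illustrative stand-ins, not the paper's actual route generator:

```python
import heapq

def grid_shortest_path(cost, start, goal):
    """Dijkstra over a 4-connected lattice; cost[r][c] = cost of entering a cell."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# Toy construction-cost surface: the cheap route detours around the costly band
terrain = [[1, 1, 1],
           [9, 9, 1],
           [1, 1, 1]]
d = grid_shortest_path(terrain, (0, 0), (2, 0))
```

    On this toy surface the search detours around the expensive middle row rather than crossing it, which is exactly how geographically plausible routes with potential detours arise from a cost lattice.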

  2. [3D Virtual Reality Laparoscopic Simulation in Surgical Education - Results of a Pilot Study].

    PubMed

    Kneist, W; Huber, T; Paschold, M; Lang, H

    2016-06-01

    The use of three-dimensional imaging in laparoscopy is a growing issue and has led to 3D systems in laparoscopic simulation. Studies on box trainers have shown differing results concerning the benefit of 3D imaging. There are currently no studies analysing 3D imaging in virtual reality laparoscopy (VRL). Five surgical fellows, 10 surgical residents and 29 undergraduate medical students performed abstract and procedural tasks on a VRL simulator using conventional 2D and 3D imaging in a randomised order. No significant differences between the two imaging systems were shown for students or medical professionals. Participants who preferred three-dimensional imaging showed significantly better results in 2D as well as in 3D imaging. Previous results on three-dimensional imaging on box trainers were mixed; some studies found an advantage of 3D imaging for laparoscopic novices. In the present study, however, there was no significant advantage for 3D imaging over conventional 2D imaging on a VRL simulator. Georg Thieme Verlag KG Stuttgart · New York.

  3. Accuracy of endoscopic intraoperative assessment of urologic stone size.

    PubMed

    Patel, Nishant; Chew, Ben; Knudsen, Bodo; Lipkin, Michael; Wenzler, David; Sur, Roger L

    2014-05-01

    Endoscopic treatment of renal calculi relies on surgeon assessment of residual stone fragment size for either basket removal or for the passage of fragments postoperatively. We therefore sought to determine the accuracy of endoscopic assessment of renal calculi size. Between January and May 2013, five board-certified endourologists participated in an ex vivo artificial endoscopic simulation. A total of 10 stones (pebbles) were measured (mm) by a nonparticipating urologist (N.D.P.) with electronic calipers and placed into separate labeled opaque test tubes to prevent visualization of the stones through the side of the tube. Endourologists were blinded to the actual size of the stones. A flexible digital ureteroscope, with a 200-μm core laser fiber in the working channel as a size reference, was placed into each test tube to estimate the stone size (mm). Accuracy was determined by obtaining the correlation coefficient (r) and constructing an Altman-Bland plot. Endourologists tended to overestimate actual stone size by a margin of 0.05 mm. The Pearson correlation coefficient was r=0.924, with a p-value<0.01. The estimation of small stones (<4 mm) had a greater accuracy than large stones (≥4 mm), r=0.911 vs r=0.666. Altman-Bland plot analysis suggests that surgeons are able to accurately estimate stone size within a range of -1.8 to +1.9 mm. This ex vivo simulation study demonstrates that endoscopic assessment is reliable when assessing stone size. On average, there was a slight tendency to overestimate stone size by 0.05 mm. Most endourologists could visually estimate stone size within 2 mm of the actual size. These findings could be generalized to state that endourologists are able to accurately assess residual stone fragment size intraoperatively to guide decision making.
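
    The Altman-Bland analysis used here reduces to computing the mean difference (bias) and the 95% limits of agreement, bias ± 1.96 SD. A minimal sketch on synthetic paired measurements (illustrative values, not the study's data):

```python
import statistics

# Synthetic paired measurements (mm): endoscopic estimates vs. caliper truth
estimated = [3.1, 4.2, 2.8, 5.0, 3.9, 6.1, 2.5, 4.8, 3.3, 5.6]
actual    = [3.0, 4.0, 3.0, 4.8, 4.1, 5.8, 2.6, 4.5, 3.5, 5.2]

diffs = [e - a for e, a in zip(estimated, actual)]
bias = statistics.fmean(diffs)        # mean over/under-estimation
sd = statistics.stdev(diffs)          # spread of the disagreement
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement
```

    The interval [lower, upper] is the range within which a new estimate is expected to disagree with the true size, the same quantity the study reports as -1.8 to +1.9 mm.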

  4. Accuracy of UTE-MRI-based patient setup for brain cancer radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yingli; Cao, Minsong; Kaprealian, Tania

    2016-01-15

    Purpose: Radiation therapy simulations solely based on MRI have advantages compared to CT-based approaches. One feature readily available from computed tomography (CT) that would need to be reproduced with MR is the ability to compute digitally reconstructed radiographs (DRRs) for comparison against on-board radiographs commonly used for patient positioning. In this study, the authors generate MR-based bone images using a single ultrashort echo time (UTE) pulse sequence and quantify their 3D and 2D image registration accuracy to CT and radiographic images for treatments in the cranium. Methods: Seven brain cancer patients were scanned at 1.5 T using a radial UTE sequence. The sequence acquired two images at two different echo times. The two images were processed using an in-house software to generate the UTE bone images. The resultant bone images were rigidly registered to simulation CT data and the registration error was determined using manually annotated landmarks as references. DRRs were created based on UTE-MRI and registered to simulated on-board images (OBIs) and actual clinical 2D oblique images from ExacTrac™. Results: UTE-MRI resulted in well visualized cranial, facial, and vertebral bones that quantitatively matched the bones in the CT images with geometric measurement errors of less than 1 mm. The registration error between DRRs generated from 3D UTE-MRI and the simulated 2D OBIs or the clinical oblique x-ray images was also less than 1 mm for all patients. Conclusions: UTE-MRI-based DRRs appear to be promising for daily patient setup of brain cancer radiotherapy with kV on-board imaging.

  5. Development of an Output-based Adaptive Method for Multi-Dimensional Euler and Navier-Stokes Simulations

    NASA Technical Reports Server (NTRS)

    Darmofal, David L.

    2003-01-01

    The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.

  6. Transition index maps for urban growth simulation: application of artificial neural networks, weight of evidence and fuzzy multi-criteria evaluation.

    PubMed

    Shafizadeh-Moghadam, Hossein; Tayyebi, Amin; Helbich, Marco

    2017-06-01

    Transition index maps (TIMs) are key products in urban growth simulation models. However, their operationalization is still conflicting. Our aim was to compare the prediction accuracy of three TIM-based spatially explicit land cover change (LCC) models in the mega city of Mumbai, India. These LCC models include two data-driven approaches, namely artificial neural networks (ANNs) and weight of evidence (WOE), and one knowledge-based approach which integrates an analytical hierarchical process with fuzzy membership functions (FAHP). Using the relative operating characteristic (ROC), the performance of these three LCC models was evaluated. The results showed 85%, 75%, and 73% accuracy for the ANN, FAHP, and WOE, respectively. The ANN was clearly superior compared to the other LCC models when simulating urban growth for the year 2010; hence, ANN was used to predict urban growth for 2020 and 2030. Projected urban growth maps were assessed using statistical measures, including figure of merit, average spatial distance deviation, producer accuracy, and overall accuracy. Based on our findings, we recommend ANNs as an accurate method for simulating future patterns of urban growth.
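
    Of the map-comparison measures listed, the figure of merit and overall accuracy are simple cell-counting statistics over observed and simulated change maps. A sketch on tiny synthetic maps (1 = change, 0 = persistence), not the Mumbai data:

```python
def change_agreement(observed, simulated):
    """Cell-by-cell agreement between observed and simulated change maps."""
    hits = misses = false_alarms = correct_persist = 0
    for o, s in zip(observed, simulated):
        if o and s:
            hits += 1            # change correctly simulated
        elif o and not s:
            misses += 1          # change the model failed to simulate
        elif s and not o:
            false_alarms += 1    # change simulated where none occurred
        else:
            correct_persist += 1  # persistence correctly simulated
    fom = hits / (hits + misses + false_alarms)       # figure of merit
    overall = (hits + correct_persist) / len(observed)  # overall accuracy
    return fom, overall

# Flattened toy maps (illustrative only)
obs = [1, 1, 0, 0, 1, 0, 0, 0]
sim = [1, 0, 0, 1, 1, 0, 0, 0]
fom, acc = change_agreement(obs, sim)
```

    The figure of merit excludes correctly simulated persistence, so it penalizes a model far more than overall accuracy does when change cells are a small minority of the map.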

  7. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    An improvement method for the pose accuracy of a robot manipulator by using a multiple-sensor combination measuring system (MCMS) is presented. It is composed of a visual sensor, an angle sensor and a serial robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. To exploit the higher accuracy of the multiple sensors, two efficient data fusion approaches, the Kalman filter (KF) and the multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically, by 38%∼78%, with the multi-sensor data fusion. Compared with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, the addition of motion constraints, or the complicated procedures of the traditional vision-based methods. It makes the robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of the MCMS, the visual sensor repeatability was studied experimentally. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
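
    The simplest form of optimal multi-sensor information fusion is inverse-variance weighting of independent, unbiased readings, which always yields a fused variance no larger than that of the best single sensor. A minimal sketch with hypothetical sensor values (not the paper's MOIFA implementation):

```python
def fuse(measurements):
    """Variance-weighted fusion of independent, unbiased sensor readings.

    measurements: list of (value, variance) pairs.
    Returns the fused value and its (reduced) variance.
    """
    weights = [1.0 / var for _, var in measurements]
    fused = sum(x * w for (x, _), w in zip(measurements, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Hypothetical readings of one position coordinate (mm):
# an accurate visual sensor and a noisier secondary estimate
pos, var = fuse([(102.3, 0.04), (101.8, 0.25)])
```

    The fused estimate lies between the two readings, pulled toward the more accurate sensor, and its variance is smaller than either input's; a Kalman filter applies the same weighting recursively over time.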

  8. Running accuracy analysis of a 3-RRR parallel kinematic machine considering the deformations of the links

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Jiang, Yao; Li, Tiemin

    2014-09-01

    Parallel kinematic machines have drawn considerable attention and have been widely used in some special fields. However, high precision is still one of the challenges when they are used for advanced machine tools. One of the main reasons is that the kinematic chains of parallel kinematic machines are composed of elongated links that can easily suffer deformations, especially at high speeds and under heavy loads. A 3-RRR parallel kinematic machine is taken as a study object for investigating its accuracy with consideration of the deformations of its links during the motion process. Based on the dynamic model constructed by the Newton-Euler method, all the inertia loads and constraint forces of the links are computed and their deformations are derived. Then the kinematic errors of the machine are derived with consideration of the deformations of the links. Through further derivation, the accuracy of the machine is given in a simple explicit expression, which helps to increase the calculation speed. The accuracy of this machine when following a selected circular path is simulated. The influences of the magnitude of the maximum acceleration and of external loads on the running accuracy of the machine are investigated. The results show that external loads deteriorate the accuracy of the machine tremendously when their direction coincides with the direction of the worst stiffness of the machine. The proposed method provides a solution for predicting the running accuracy of parallel kinematic machines and can also be used in their design optimization as well as in the selection of suitable running parameters.

  9. Magnetoencephalographic accuracy profiles for the detection of auditory pathway sources.

    PubMed

    Bauer, Martin; Trahms, Lutz; Sander, Tilmann

    2015-04-01

    The detection limits for cortical and brain stem sources associated with the auditory pathway are examined in order to analyse brain responses at the limits of the audible frequency range. The results obtained from this study are also relevant to other issues in auditory brain research. A complementary approach consisting of recordings of magnetoencephalographic (MEG) data and simulations of magnetic field distributions is presented in this work. A biomagnetic phantom consisting of a spherical volume filled with a saline solution and four current dipoles is built. The magnetic fields outside the phantom generated by the current dipoles are then measured, for a range of applied electric dipole moments, with a planar multichannel SQUID magnetometer device and a helmet MEG gradiometer device. The magnetometer system is included because it is expected to be more sensitive to brain stem sources than a gradiometer system. The same electrical and geometrical configuration is simulated in a forward calculation. From both the measured and the simulated data, the dipole positions are estimated using an inverse calculation. Results are obtained for the reconstruction accuracy as a function of applied electric dipole moment and depth of the current dipole. We found that both systems can localize cortical and subcortical sources at physiological dipole strengths, even for brain stem sources. Further, we found that a planar magnetometer system is more suitable if the position of the brain source can be restricted to a limited region of the brain. If this is not the case, a helmet-shaped sensor system offers more accurate source estimation.

  10. Accuracy of Handheld Blood Glucose Meters at High Altitude

    PubMed Central

    de Vries, Suzanna T.; Fokkert, Marion J.; Dikkeschei, Bert D.; Rienks, Rienk; Bilo, Karin M.; Bilo, Henk J. G.

    2010-01-01

    Background Due to increasing numbers of people with diabetes taking part in extreme sports (e.g., high-altitude trekking), reliable handheld blood glucose meters (BGMs) are necessary. Accurate blood glucose measurement under extreme conditions is paramount for safe recreation at altitude. Prior studies reported bias in blood glucose measurements using different BGMs at high altitude. We hypothesized that glucose oxidase based BGMs are more influenced by the lower atmospheric oxygen pressure at altitude than glucose dehydrogenase based BGMs. Methodology/Principal Findings Glucose measurements at simulated altitude on nine BGMs (six glucose dehydrogenase and three glucose oxidase BGMs) were compared to glucose measurement on a similar BGM at sea level and to a laboratory glucose reference method. Venous blood samples at four different glucose levels were used. Moreover, two glucose oxidase and two glucose dehydrogenase based BGMs were evaluated at different altitudes on Mount Kilimanjaro. Accuracy criteria were set at a bias <15% from the reference glucose (when >6.5 mmol/L) and <1 mmol/L from the reference glucose (when <6.5 mmol/L). No significant difference was observed between measurements at simulated altitude and at sea level for either glucose oxidase based BGMs or glucose dehydrogenase based BGMs as a group phenomenon. Two GDH based BGMs did not meet the set performance criteria. Most BGMs generally overestimated the true glucose concentration at high altitude. Conclusion At simulated high altitude, none of the tested BGMs, including the glucose oxidase based ones, showed an influence of low atmospheric oxygen pressure. All BGMs, except for two GDH based BGMs, performed within the predefined criteria. At true high altitude, one GDH based BGM had the best precision and accuracy. PMID:21103399
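
    The accuracy criteria above are simple to encode. The sketch below is an illustrative check; the function name and the handling of readings exactly at the 6.5 mmol/L boundary are assumptions, not specified by the study:

```python
def meets_accuracy_criterion(measured, reference):
    """Check a glucose meter reading (mmol/L) against the study's
    accuracy criteria: bias < 15% of the reference value when the
    reference is above 6.5 mmol/L, and an absolute deviation of
    < 1 mmol/L otherwise (boundary handling is illustrative)."""
    if reference > 6.5:
        return abs(measured - reference) / reference < 0.15
    return abs(measured - reference) < 1.0

# A reading of 8.0 mmol/L against a reference of 7.0 mmol/L has a
# bias of about 14.3%, which is just inside the 15% criterion.
ok = meets_accuracy_criterion(8.0, 7.0)
```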

  11. The relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards: A Monte Carlo study

    PubMed Central

    Austin, Peter C.; Reeves, Mathew J.

    2015-01-01

    Background Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk-adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. Objectives To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Research Design Monte Carlo simulations were used to examine this issue. We examined the influence of three factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk-adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. Results The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. Conclusions The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card. PMID:23295579
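
    As a rough sketch of this kind of simulation setup, the c-statistic can be computed from a simulated logistic risk model. The covariate, coefficients and sample size below are invented for illustration; they are not the Ontario-derived parameters used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def c_statistic(risk, outcome):
    """Probability that a randomly chosen patient with the outcome has
    a higher predicted risk than a randomly chosen patient without it
    (ties count one half); equivalent to the area under the ROC curve."""
    pos = risk[outcome == 1]
    neg = risk[outcome == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Simulate one cohort: a single severity covariate x and an outcome
# drawn from a logistic model (coefficients are illustrative).
x = rng.normal(size=5000)
p = 1.0 / (1.0 + np.exp(-(-2.0 + 1.5 * x)))
y = (rng.random(5000) < p).astype(int)
c = c_statistic(p, y)  # stronger covariate effects push c toward 1
```

    Repeating this across many simulated hospitals, with varying hospital counts and caseloads, is the kind of factorial Monte Carlo design the abstract describes.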

  12. From single Debye-Hückel chains to polyelectrolyte solutions: Simulation results

    NASA Astrophysics Data System (ADS)

    Kremer, Kurt

    1996-03-01

    This lecture will present results from simulations of single weakly charged flexible chains, where the electrostatic part of the interaction is modeled by a Debye-Hückel potential (with U. Micka, IFF, Forschungszentrum Jülich, 52425 Jülich, Germany), as well as simulations of polyelectrolyte solutions, where the counterions are explicitly taken into account (with M. J. Stevens, Sandia Nat. Lab., Albuquerque, NM 87185-1111) (M. J. Stevens, K. Kremer, JCP 103, 1669 (1995)). The first set of simulations is meant to settle a recent controversy about the dependence of the persistence length Lp on the screening length Γ. While the analytic theories give Lp ~ Γ^x with either x=1 or x=2, the simulations find, for all experimentally accessible chain lengths, a varying exponent that is significantly smaller than 1. This casts serious doubt on the applicability of this model to weakly charged polyelectrolytes in general. The second part deals with strongly charged flexible polyelectrolytes in salt-free solution. These simulations are performed for multichain systems, and the full Coulomb interactions of the monomers and counterions are treated explicitly. Experimental measurements of the osmotic pressure and the structure factor are reproduced and extended. The simulations reveal a new picture of the chain structure based on calculations of the structure factor, persistence length, end-to-end distance, etc. Even at very low density, the chains show significant bending. Furthermore, the chains contract significantly before they start to overlap. We also show that counterion condensation dramatically alters the chain structure, even for a good-solvent backbone.
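
    For reference, the Debye-Hückel (screened Coulomb) pair interaction has the form U(r) ∝ exp(-r/Γ)/r, with Γ the screening length. A minimal sketch, with units and the charge prefactor left abstract:

```python
import math

def debye_huckel(r, screening_length, prefactor=1.0):
    """Screened Coulomb (Debye-Hückel) pair potential
    U(r) = prefactor * exp(-r / screening_length) / r.
    The prefactor collects the charges and Bjerrum length, which
    are left abstract in this sketch."""
    return prefactor * math.exp(-r / screening_length) / r

# Stronger screening (smaller screening length) suppresses the
# interaction at a given separation.
u_strong = debye_huckel(1.0, 0.5)  # exp(-2)
u_weak = debye_huckel(1.0, 2.0)    # exp(-0.5)
```

    The persistence-length controversy concerns how chain stiffness scales as this screening length is varied.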

  13. Seismic wavefield propagation in 2D anisotropic media: Ray theory versus wave-equation simulation

    NASA Astrophysics Data System (ADS)

    Bai, Chao-ying; Hu, Guang-yi; Zhang, Yan-teng; Li, Zhong-sheng

    2014-05-01

    Although ray theory is based on the high-frequency assumption of the elastic wave equation, ray theory and wave-equation simulation methods should serve as mutual proofs of each other and hence be developed jointly; in practice, however, they have progressed independently and in parallel. For this reason, in this paper we try an alternative way to mutually verify and test the computational accuracy and solution correctness of both the ray theory (the multistage irregular shortest-path method) and the wave-equation simulation methods (both the staggered finite difference method and the pseudo-spectral method) in anisotropic VTI and TTI media. Through the analysis and comparison of wavefield snapshots, common source gather profiles and synthetic seismograms, we are able not only to verify the accuracy and correctness of each method, at least for kinematic features, but also to gain a thorough understanding of the kinematic and dynamic features of wave propagation in anisotropic media. The results show that both the staggered finite difference method and the pseudo-spectral method yield the same results even for complex anisotropic media (such as a fault model), and that the multistage irregular shortest-path method predicts kinematic features similar to those of the wave-equation simulation methods, so the two approaches can be used to mutually test each other's methodological accuracy and solution correctness. In addition, with the aid of the ray tracing results, it is easy to identify the multi-phases (or multiples) in the wavefield snapshots, common source point gather seismic sections and synthetic seismograms predicted by the wave-equation simulation methods, which is a key issue for later seismic applications.

  14. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to have: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.

  15. Cadastral Database Positional Accuracy Improvement

    NASA Astrophysics Data System (ADS)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions. This actual position relates to the absolute position in a specific coordinate system and to the relation with neighborhood features. With the growth of spatially based technology, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), a PAI campaign is inevitable, especially for legacy cadastral databases. Integrating a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely integrating both datasets will distort the relative geometry; the improved dataset should be further treated to minimize inherent errors and to fit the new, accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset, known as the National Digital Cadastral Database (NDCDB), is then used as a benchmark to validate the results. It was found that the proposed technique is highly feasible for the positional accuracy improvement of legacy spatial datasets.

  16. Research on material removal accuracy analysis and correction of removal function during ion beam figuring

    NASA Astrophysics Data System (ADS)

    Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin

    2016-09-01

    Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors suppressing the improvement of material removal accuracy, we conclude that correcting the removal function deviation and reducing the amount of material removed during each iterative process can help to improve material removal accuracy. The removal function correction principle can effectively compensate for the removal function deviation between the actual figuring process and the simulated one, while experiments indicate that material removal accuracy decreases over long machining times, so removing only a small amount of material in each iterative process is suggested. However, this introduces more clamping and measuring steps, which also generate machining errors and suppress the improvement of material removal accuracy. On this account, a free-measurement iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ 100-mm Zerodur planar is performed, which shows that, in similar figuring time, three free-measurement iterative processes can improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.

  17. Probabilistic Simulation of Multi-Scale Composite Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2012-01-01

    A methodology is developed to computationally assess the non-deterministic composite response at all composite scales (from micro to structural) due to uncertainties in the constituent (fiber and matrix) properties, in the fabrication process and in structural variables (primitive variables). The methodology is computationally efficient for simulating the probability distributions of composite behavior, such as material properties and laminate and structural responses. By-products of the methodology are probabilistic sensitivities of the composite primitive variables. The methodology has been implemented in the computer codes PICAN (Probabilistic Integrated Composite ANalyzer) and IPACS (Integrated Probabilistic Assessment of Composite Structures). The accuracy and efficiency of this methodology are demonstrated by simulating the uncertainties in typical composite laminates and comparing the results with the Monte Carlo simulation method. Available experimental data on composite laminate behavior at all scales fall within the scatter predicted by PICAN. Multi-scaling is extended to simulate probabilistic thermo-mechanical fatigue and the probabilistic design of a composite radome in order to illustrate its versatility. Results show that probabilistic fatigue can be simulated for different temperature amplitudes and for different cyclic stress magnitudes. Results also show that laminate configurations can be selected to increase the radome reliability by several orders of magnitude without increasing the laminate thickness, a unique feature of structural composites. The old reference indicates that nothing fundamental has been done since that time.
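
    The Monte Carlo comparison mentioned above can be illustrated with a toy micromechanics model. The rule-of-mixtures formula and the property means and scatter below are illustrative stand-ins for the much richer micromechanics inside PICAN/IPACS:

```python
import numpy as np

rng = np.random.default_rng(1)

def longitudinal_modulus(Ef, Em, Vf):
    """Rule-of-mixtures longitudinal modulus of a unidirectional ply:
    E1 = Vf*Ef + (1 - Vf)*Em.  Used here only as a simple micro-scale
    model for demonstrating Monte Carlo uncertainty propagation."""
    return Vf * Ef + (1.0 - Vf) * Em

# Sample uncertain constituent properties (illustrative means and
# scatter, moduli in GPa) and propagate them through the micro-scale
# model to obtain the distribution of the ply modulus.
n = 100_000
Ef = rng.normal(230.0, 10.0, n)  # fiber modulus
Em = rng.normal(3.5, 0.2, n)     # matrix modulus
Vf = rng.normal(0.60, 0.02, n)   # fiber volume fraction
E1 = longitudinal_modulus(Ef, Em, Vf)
mean, std = E1.mean(), E1.std()  # summarize the simulated scatter
```

    Checking that such brute-force Monte Carlo scatter matches the distributions produced by the fast probabilistic method is the kind of verification the abstract describes.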

  18. Simulation of size-exclusion chromatography distribution coefficients of comb-shaped molecules in spherical pores: comparison of simulation and experiment.

    PubMed

    Radke, Wolfgang

    2004-03-05

    Simulations of the distribution coefficients of linear polymers and of regular combs with various spacings between the arms have been performed. The distribution coefficients were plotted as a function of the number of segments in order to compare the size exclusion chromatography (SEC) elution behavior of combs relative to linear molecules. By comparing the simulated SEC calibration curves, it is possible to predict the elution behavior of comb-shaped polymers relative to linear ones. In order to compare the results obtained by computer simulation with experimental data, a variety of comb-shaped polymers varying in side chain length, spacing between the side chains and backbone molecular weight were analyzed by SEC with light-scattering detection. It was found that the computer simulations could predict the molecular weights of linear molecules having the same retention volume with an accuracy of about 10%; that is, the error in the molecular weight of a comb polymer calculated from a calibration curve constructed with linear standards and the simulation results is of the same magnitude as the experimental error of absolute molecular weight determination.
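
    A minimal sketch of this kind of simulation, assuming an ideal freely-jointed linear chain and a hard spherical pore (the abstract does not specify the exact chain model, so these choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def partition_coefficient(n_segments, bond_length, pore_radius, trials=2000):
    """Monte Carlo estimate of the SEC distribution coefficient K for a
    freely-jointed linear chain in a hard spherical pore: the fraction
    of random-walk conformations, grown from a uniformly random starting
    point inside the pore, that fit entirely within the pore."""
    inside = 0
    for _ in range(trials):
        # draw a uniform starting point in the sphere (rejection sampling)
        while True:
            start = rng.uniform(-pore_radius, pore_radius, 3)
            if np.dot(start, start) <= pore_radius**2:
                break
        pos = start
        ok = True
        for _ in range(n_segments - 1):
            step = rng.normal(size=3)          # random direction
            pos = pos + bond_length * step / np.linalg.norm(step)
            if np.dot(pos, pos) > pore_radius**2:
                ok = False                      # chain left the pore
                break
        inside += ok
    return inside / trials

# Longer chains are excluded more strongly, so K decreases with length.
k_short = partition_coefficient(5, 1.0, 10.0)
k_long = partition_coefficient(50, 1.0, 10.0)
```

    Comparing such K-versus-segment-number curves for combs and for linear chains is what lets the simulated calibration curves translate one architecture's elution volume into the other's molecular weight.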

  19. HEBS and Binary 1-sinc masks simulations, HCIT experiments and results

    NASA Technical Reports Server (NTRS)

    Balasubramanian, Bala K.; Hoppe, Dan; Wilson, Dan; Echternach, Pierre; Trauger, John; Halverson, Peter; Niessner, Al; Shi, Fang; Lowman, Andrew

    2005-01-01

    Based on preliminary experiments and results with a binary 1-sinc mask in the HCIT early in August 2004, we planned for a detailed experiment to compare the performance of HEBS and Binary masks under nearly identical conditions in the HCIT. This report details the design and fabrication of the masks, simulated predictions, and experimental results.

  20. Results of computer calculations for a simulated distribution of kidney cells

    NASA Technical Reports Server (NTRS)

    Micale, F. J.

    1985-01-01

    The results of computer calculations for a simulated distribution of kidney cells are given. The calculations were made for different values of the electroosmotic flow, U_o, and of the ratio of sample diameter to channel diameter, R.